PHYSICS

This entry includes 4 subentries.
From the colonial period through the early nineteenth century, physics, which was then a branch of natural philosophy, was practiced by only a few Americans, virtually none of whom earned his living primarily in research. Some, like John Winthrop at Harvard, were college professors who were expected and encouraged only to teach. Others were gentlemanly amateurs with private laboratories. The physics of that day ranged from astronomy and navigation to pneumatics, hydrostatics, mechanics, and optics. In virtually all these subjects Americans followed the intellectual lead of Europeans, especially the British. Like practitioners of the other sciences, they were inspired by the English philosopher Francis Bacon, who had urged scholars to study the facts of nature and had taught that knowledge led to power. Thus, American physicists emphasized the accumulation of experimental facts rather than mathematical theorizing, and they made no distinction between abstract and practical research, or what a later generation would call pure and applied science. The archetypal American physicist was Benjamin Franklin, the retired printer, man of affairs, and deist, who was celebrated for his practical lightning rod as well as for his speculative and experimental contributions to electrical science.
From the Jacksonian era through the Civil War, American physics became more specialized, with its subject matter narrowing to geophysics, meteorology, and such topics of physics proper as the older mechanics and the newer heat, light, electricity, and magnetism. The leading American physicist of the period was Joseph Henry, who discovered electromagnetic induction while teaching at the Albany Academy in Albany, New York. Later he became a professor at Princeton and then the first secretary of the Smithsonian Institution. Imbibing the nationalism of the day, Henry worked to advance the study of physics and, indeed, of all science in America. With Henry's support, Alexander Dallas Bache, Franklin's great-grandson and the director of the U.S. Coast Survey, enlarged the scope of that agency to include studies in the geodesy and geophysics of the entire continent. In the 1850s the survey was the largest single employer of physicists in the country. Henry also channeled part of the Smithsonian's income into fundamental research, including research in meteorology. During Henry's lifetime, American physics became more professional; the gentlemanly amateur was gradually superseded by the college-trained physicist who was employed on a college faculty or by the government.
In the quarter-century after the Civil War, many physicists set themselves increasingly apart from utilitarian concerns and embraced the new ethic of "pure" science. At the same time, the reform of higher education gave physics a considerable boost by permitting students to major in the sciences, making laboratory work a standard part of the curriculum, creating programs of graduate studies, and establishing the advancement of knowledge, at least nominally, as an important function of the university and its professors. Between 1865 and 1890 the number of physicists in the United States doubled, to about 150. The profession included Albert A. Michelson, the first American to win the Nobel Prize in physics (1907), who measured the speed of light with unprecedented accuracy and invented the Michelson interferometer during his famed ether drift experiment in 1881. During the late 1870s and the 1880s, Henry A. Rowland won an international reputation for his invention of the Rowland spectral grating and for his painstakingly accurate determinations of the value of the ohm and of the mechanical equivalent of heat. Generally, American physics remained predominantly experimental, with the notable exception of the brilliant theorist Josiah Willard Gibbs of Yale, an authority in thermodynamics and statistical mechanics.
Professionalization by the Early Twentieth Century
In 1893 Edward L. Nichols of Cornell University inaugurated the Physical Review, the first journal devoted to the discipline in the United States. Six years later Arthur Gordon Webster of Clark University helped found the American Physical Society, which in 1913 assumed publication of the Review. After the turn of the century, a sharp rise in electrical engineering enrollments created an increased demand for college teachers of physics. Employment opportunities for physicists rose elsewhere also. Some of the major corporations, notably General Electric Company and American Telephone and Telegraph Company, opened industrial research laboratories; and the federal government established the National Bureau of Standards, whose charter permitted it to enter a wide area of physical research. Before World War I, the graduation of physics Ph.D.s climbed steadily, reaching 23 in 1914, when membership in the American Physical Society was close to 700.
Americans had not been responsible for any of the key discoveries of the 1890s—X rays, radioactivity, and the electron—that introduced the age of atomic studies.
Like many of their colleagues in Europe, the older American members of the profession were disturbed by the development in the early twentieth century of the quantum theory of radiation and the theory of relativity. But the younger scientists turned to the new atomic research fields, although not immediately to the new theories, with growing interest and enthusiasm. At the University of Chicago, Robert A. Millikan demonstrated that all electrons are identically charged particles (1909) and then more accurately measured the electronic charge (1913). Richard Tolman of the University of Illinois and Gilbert N. Lewis of the Massachusetts Institute of Technology delivered the first American paper on the theory of relativity (1908). By the beginning of World War I, modernist physicists like Millikan were moving into the front rank of the profession, which was focusing increasingly, at its meetings and in its publications, on the physics of the quantized atom.
During the war, physicists worked for the military in various ways, most notably in the development of systems and devices for the detection of submarines and for the location of artillery. Their success in this area helped bolster the argument that physics, like chemistry, could produce practical and, hence, economically valuable results. Partly in recognition of that fact, industrial research laboratories hired more physicists in the 1920s. Moreover, the funding for physical research rose considerably in both state and private universities. During the 1920s about 650 Americans received doctorates in physics; a number of them received postdoctoral fellowships from the International Education Board of the Rockefeller Foundation and from the National Research Council. After studying with the leading physicists in the United States and Europe, where the revolution in quantum mechanics was proceeding apace, many of these young scientists were well prepared for the pursuit of theoretical research.
World-Class Physics by the 1930s
By the end of the 1920s the United States had more than 2,300 physicists, including a small but significant influx of Europeans, among them Paul Epstein, Fritz Zwicky, Samuel Goudsmit, and George Uhlenbeck, who had joined American university faculties. During that decade, Nobel Prizes in physics were awarded to Millikan (1923), who from 1921 directed the Norman Bridge Laboratory of Physics and served as chief executive of the California Institute of Technology, and to Arthur H. Compton (1927) of the University of Chicago for his quantum interpretation of the collision of X rays and electrons. At the Bell Telephone Laboratories, Clinton J. Davisson performed the research in electron diffraction for which he became a Nobel laureate in 1937. By the early 1930s the American physics profession compared favorably in experimental achievement with its counterparts in Europe; and in theoretical studies its potential, although not yet its accomplishment, had also reached the first rank.
During the 1930s the interest of physicists shifted from the atom to the nucleus and to what were later called elementary particles. In 1932, while conducting research for which they later won Nobel Prizes, Carl Anderson of the California Institute of Technology identified the positron in cosmic rays and, at the University of California at Berkeley, Ernest O. Lawrence successfully accelerated protons to one million electron volts of energy with his new cyclotron. Despite the depression, which at first reduced the funds available for physics research, U.S. physicists managed to construct cyclotrons, arguing that the exploration of the nucleus might yield the secret of atomic energy or that the radioactive products of cyclotron bombardment might be medically useful, especially in the treatment of cancer. All the while, more Americans earned Ph.D.s in physics, and the profession was further enriched by refugees from the Soviet Union, such as George Gamow, and from Nazi Europe, such as Albert Einstein, Hans Bethe, Felix Bloch, Victor Weisskopf, Enrico Fermi, Emilio Segrè, Leo Szilard, Eugene Wigner, and Edward Teller. By the end of the 1930s, the American physics profession, with more than 3,500 members, led the world in both theoretical and experimental research.
During World War II, physicists, mobilized primarily under the Office of Scientific Research and Development, contributed decisively to the development of microwave radar, the proximity fuse, and solid-fuel rockets. They also worked on the atomic bomb in various laboratories of the Manhattan Project, notably Los Alamos, New Mexico, which was directed by J. Robert Oppenheimer. Equally important, physicists began advising the military how best to use the new weapons tactically and, in some cases, strategically.
After World War II, American physicists became prominent figures in the government's strategic advisory councils, and they played a central role in the debates over nuclear and thermonuclear weapons programs in the 1950s and 1960s. Recognized as indispensable to the national defense and welfare, physics and physicists received massive governmental support in the postwar decades, notably from the National Science Foundation, the Atomic Energy Commission, and the Office of Naval Research. Thus, the profession expanded rapidly, totaling more than 32,000 by 1972. About half of all American physicists were employed in industry, most of the rest in universities and colleges, and the remainder in federal laboratories.
Many academic physicists did their research in groups organized around large, highly energetic particle accelerators, notably those at the Stanford Linear Accelerator Center and the Fermi National Accelerator Laboratory (Illinois). The large teams of scientists and engineers involved, the giant machines constructed, and the huge budgets required reflected a new style of research in peacetime, appropriately called Big Science. With these accelerators, American physicists were among the world's leaders in uncovering experimental data about elementary particles, one of the central fields of postwar physics. New particles were discovered by Emilio Segrè, Owen Chamberlain, Burton Richter, Samuel Ting, and Martin Perl, among others, while the necessary detection apparatus, such as bubble and spark chambers, was devised by Donald Glaser, Luis Alvarez, and others. Theoretical understanding advanced through the work of Murray Gell-Mann, Steven Weinberg, and Sheldon Glashow in particle physics, Julian Schwinger, Richard P. Feynman, and Freeman Dyson in quantum electrodynamics, and Tsung-Dao Lee and Chen Ning Yang in the nonconservation of parity. I. I. Rabi, Otto Stern, and others measured nuclear properties to unprecedented accuracy, while Maria Goeppert-Mayer advanced the shell model of the nucleus.
Charles H. Townes in the early 1960s played a major role in the development of the laser, an optical device useful both for research and for practical applications. The latter included bar-code readers in stores, compact disc players, and an X-ray laser, tested in 1984 as a proposed missile-destroying component of the now-defunct Strategic Defense Initiative.
Meanwhile, physicists, notably at Princeton University, developed the tokamak, a donut-shaped magnetic enclosure in which ionized matter could be contained and heated to the very high temperatures necessary for nuclear fusion to take place. By 1991 they sustained fusion for two seconds, a step on the path to creating an energy machine similar to the fission reactor. Lasers were also being used in the attempt to achieve controlled nuclear fusion.
John Bardeen, Leon Cooper, and Robert Schrieffer developed a theory of superconductivity in 1957, for which they shared the Nobel Prize in 1972, to explain the phenomenon in which, at very low temperatures, electrical resistance ceases. Physicists soon discovered that a combination of the elements niobium and germanium became superconducting at 23.2 K, about 2 degrees higher than the previous record, and in the late 1980s and 1990s scientists found yet other combinations with much higher (but still very cold) transition temperatures, above 120 K, at which electrical resistance vanishes. Commercial applications, with great savings in electricity, were promising but not near.
Other American physicists pursued such important fields as astrophysics and relativity, while in applied physics, William Shockley, John Bardeen, and Walter Brattain invented the transistor. This device, widely used in electronic products, made computers, and with them the information age, possible. It is an example of the way in which the products of physics research have helped to mold modern society. One measure of the quality of research in the United States is that, from the initiation of the Nobel Prize in Physics in 1901 to the year 2001, more than seventy American physicists won or shared the honor.
In the last half of the twentieth century physicists came out of their ivory towers to voice concerns about political issues with technical components. Veterans of the Manhattan Project in 1945–1946 created the influential Bulletin of the Atomic Scientists and formed the Federation of American Scientists to lobby for civilian control of atomic energy domestically and United Nations control of weapons internationally. During the intolerance of the McCarthy period of the 1950s, many physicists were held up to public scorn as communists or fellow travelers, or even feared as spies for the Kremlin. The President's Science Advisory Committee, formed in reaction to the Soviet Union's launch of Sputnik (1957), was initially dominated by physicists—whose understanding of the fundamentals of nature enabled them to advise knowingly on projects in other fields, such as missile technology.
The nation's principal organization of physicists, the American Physical Society, like many other professional groups, departed from its traditional role of publishing a journal and holding meetings. It began to lobby for financial support from a Congress that contained few members with scientific credentials, and to issue reports on such controversial subjects as nuclear reactor safety, the Strategic Defense Initiative, and the alleged danger to health of electrical power lines. Some physicists participated in the long series of Pugwash Conferences on Science and World Affairs, meeting with foreign colleagues to help solve problems caused mostly by the arms race. Others created the Council for a Livable World, a political action committee whose goal was to help elect senators who supported arms control efforts. Still others joined the Union of Concerned Scientists, an organization that documented the danger of many nuclear reactors and the flaws of many weapons systems. The community of physicists had come of age, not only in producing world-class physics but in contributing to the economic and political health of society, often from a socially responsible perspective.
Childs, Herbert. An American Genius: The Life of Ernest Orlando Lawrence. New York: Dutton, 1968.
Coben, Stanley. "The Scientific Establishment and the Transmission of Quantum Mechanics to the United States, 1919–1932." American Historical Review 76 (1971): 442–466.
Kevles, Daniel J. "On the Flaws of American Physics: A Social and Institutional Analysis." In Nineteenth-Century American Science. Edited by George H. Daniels. Evanston, Ill.: Northwestern University Press, 1972.
———. The Physicists: The History of a Scientific Community in Modern America. Cambridge, Mass.: Harvard University Press, 1995.
National Research Council, Physics Survey Committee. Physics in Perspective. Washington, D.C.: National Academy of Sciences, 1973.
Reingold, Nathan. "Joseph Henry." In Dictionary of Scientific Biography. Volume 6. Edited by Charles C. Gillispie. New York: Scribners, 1972.
Tobey, Ronald. The American Ideology of National Science, 1919–1930. Pittsburgh, Pa.: University of Pittsburgh Press, 1973.
Daniel J. Kevles, S. S. Schweber
See also Laboratories; Laser Technology; Manhattan Project; National Academy of Sciences; National Bureau of Standards; National Science Foundation; Radar; Rockets; Sheffield Scientific School; Strategic Defense Initiative.
High-energy physics, also known as particle physics, studies the constitution, properties, and interactions of elementary particles—the basic units of matter and energy, such as electrons, protons, neutrons, still smaller particles, and photons—as revealed through experiments using particle accelerators, which impart high velocities to charged particles. This extension of nuclear physics to higher energies grew in the 1950s. Earlier generations of accelerators, or "atom smashers," such as the cyclotron, reached the range of millions of electron volts (MeV), allowing fast-moving charged particles to crash into targeted particles and the ensuing nuclear reactions to be observed. (Particles must collide with the nucleus of the target matter in order to be observed.) Immediately after World War II, Vladimir I. Veksler in the Soviet Union and Edwin McMillan at Berkeley independently devised the synchrotron principle, which adjusts a magnetic field in step with the relativistic mass increase experienced by particles traveling near the velocity of light. In this way more energy could be imparted to the projectiles. Since a moving particle's wavelength decreases as its energy increases, a high-energy beam provides greater resolution for determining the shape and structure of the target particles. By the 1970s large accelerators could attain hundreds of millions or even several billion electron volts (GeV) and were used to produce numerous elementary particles for study. Cosmic rays provide another source of high-energy particles, but machines offer a greater concentration under controlled circumstances, and are generally preferred.
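The resolution argument can be made concrete with a little arithmetic. The sketch below (illustrative only; the function name and the chosen energies are my own, not from the source) computes the relativistic de Broglie wavelength of a proton at several beam energies, showing how the wavelength shrinks as the energy grows:

```python
import math

# Physical constants (SI units)
C = 299_792_458.0             # speed of light, m/s
H = 6.626_070_15e-34          # Planck constant, J*s
E_CHARGE = 1.602_176_634e-19  # elementary charge, C (1 eV in joules)
M_PROTON = 1.672_621_9e-27    # proton rest mass, kg

def de_broglie_wavelength(kinetic_energy_ev, rest_mass=M_PROTON):
    """Relativistic de Broglie wavelength (m) for a particle with the
    given kinetic energy in electron volts."""
    T = kinetic_energy_ev * E_CHARGE   # kinetic energy, J
    E0 = rest_mass * C**2              # rest energy, J
    # momentum from E_total^2 = (pc)^2 + E0^2, with E_total = T + E0
    pc = math.sqrt((T + E0)**2 - E0**2)
    return H * C / pc

# Wavelength shrinks as energy grows, improving resolving power:
for ev in (1e6, 1e9, 1e12):  # 1 MeV, 1 GeV, 1 TeV
    print(f"{ev:.0e} eV -> {de_broglie_wavelength(ev):.2e} m")
```

At MeV energies the proton's wavelength is comparable to nuclear dimensions; at GeV energies and beyond it is far smaller, which is why the higher-energy machines could probe structure inside the nucleons themselves.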
Theoretical physics kept pace in understanding these particles, which compose the atomic nucleus, and their interactions. By the early 1960s physicists knew that in addition to the protons, neutrons, and electrons that had been used to explain atomic nuclei for several decades, there was a confusing number of additional particles that had been found using electron and proton accelerators. A pattern in the structure of the nucleus was discerned by Murray Gell-Mann at the California Institute of Technology and by the Israeli Yuval Ne'eman. Gaps in the pattern were noticed, predictions of a new particle were made, and the particle (the so-called Omega-minus) was promptly discovered. To explain the pattern, Gell-Mann devised a theoretical scheme, called the eightfold way, that attempted to classify the relationship between strongly interacting particles in the nucleus. He postulated the existence of some underlying but unobserved elementary particles that he called "quarks."
Quarks carry electrical charges equal to either one-third or two-thirds of the charge of an electron or proton. Gell-Mann postulated several different kinds of quarks, giving them idiosyncratic names such as "up" (with a charge of plus two-thirds), "down" (with a charge of minus one-third), and "strange." Protons and neutrons are clusters of three quarks. Protons are made of two up quarks and a single down quark, so the total charge is plus one. Neutrons are made of one up quark and two down quarks, so the total charge is zero.
Another group of particles, the mesons, are made up of quarks and antiquarks (identical to quarks in mass, but opposite in electric and magnetic properties). These more massive particles, such as the ones found independently by Burton Richter at the Stanford Linear Accelerator and Samuel C. C. Ting at Brookhaven National Laboratory in 1974, fit into the picture as being made from charm quarks. The masses of these particles, like the spectrum of the hydrogen atom used by Niels Bohr many decades earlier to elucidate the quantum structure of the outer parts of atoms, now provided a numerical key for understanding the inner structure of the atom. Six different "flavors" of quarks are required to account for these heavy particles, and they come in pairs: up-down, charm-strange, and top-bottom. The first member of each pair has an electrical charge of two-thirds and the second of minus one-third.
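The charge bookkeeping described above is simple enough to verify mechanically. A minimal sketch (the function and quark labels are illustrative conveniences, not any standard library):

```python
from fractions import Fraction

# Quark electric charges in units of the proton charge, as in the text:
# up-type quarks carry +2/3, down-type quarks carry -1/3.
CHARGE = {
    "up": Fraction(2, 3), "charm": Fraction(2, 3), "top": Fraction(2, 3),
    "down": Fraction(-1, 3), "strange": Fraction(-1, 3), "bottom": Fraction(-1, 3),
}

def total_charge(quarks):
    """Sum the charges of a quark cluster; an antiquark is written with
    a leading 'anti-' and carries the opposite charge."""
    q = Fraction(0)
    for name in quarks:
        if name.startswith("anti-"):
            q -= CHARGE[name[len("anti-"):]]
        else:
            q += CHARGE[name]
    return q

print(total_charge(["up", "up", "down"]))     # proton: +1
print(total_charge(["up", "down", "down"]))   # neutron: 0
print(total_charge(["charm", "anti-charm"]))  # charm-anticharm meson: 0
```

The same arithmetic reproduces the charges of the mesons: a quark paired with an antiquark always yields an integer total, as observed.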
Meanwhile, Sheldon Lee Glashow at Harvard University, Steven Weinberg at the Massachusetts Institute of Technology, and Abdus Salam at Imperial College in London in 1968 independently proposed a theory that linked two of the fundamental forces in nature, electromagnetism and the so-called weak nuclear force. Their proposal, a quantum field theory now known as the electroweak theory, involved the notion of
quarks and required the existence of three massive particles to "carry" the weak force: two charged particles (W⁺ and W⁻) and one neutral particle (Z). These particles are short-lived, massive versions of the massless photons that carry ordinary light. All of these particles are called bosons, or more precisely, gauge bosons, because the theory explaining them is called a gauge theory. The name gauge, which survives for purely historical reasons, refers to a type of symmetry in which the labels of the particles can be interchanged according to rules suggested by quantum mechanics, with the resulting forces (and gauge bosons) emerging as a consequence of the symmetry requirements. By 1973 indirect evidence for the existence of the Z particle had been found in Geneva at the European Organization for Nuclear Research (CERN). It was not until 1983 that the W particle itself was found, also at CERN, and close on the heels of this discovery came the detection of the Z particle.
In the United States, accelerator construction and use was supported primarily by the Atomic Energy Commission, by its successor, the Department of Energy, and by the National Science Foundation. One of the nation's principal machines, the Stanford Linear Accelerator, fires particles down its two-mile length. Most other machines, such as those at CERN, Brookhaven (New York), KEK (Japan), and DESY (Germany), are circular or oval in shape. To increase energies still more, beams traveling in opposite directions are led to meet in "colliders," thereby doubling the energy of collision. In early 1987 the Tevatron proton-antiproton accelerator at the Fermi National Accelerator Laboratory (Fermilab) in Illinois, a machine in the trillion-electron-volt range, came into operation. Having narrowly missed out on some of the earlier discoveries, Fermilab scientists were particularly keen to find evidence for the postulated top quark, the only one of the quarks not yet measured and a particle so massive that only the most powerful accelerators could produce enough energy to find it. Their search at last succeeded in 1995.
The Standard Model
By the closing decades of the twentieth century, along with the quarks and bosons, a third type of particle completed the roster: the lepton, of which the electron, the positron, and a group of neutrinos are the best-known examples. The leptons and quarks provide the building blocks for atoms. The gauge bosons interact with the leptons and quarks, and in the act of being emitted or absorbed, some of the gauge bosons transform one kind of quark or lepton into another. In the standard model, a common mechanism underlies the electromagnetic, weak, and strong interactions: each is mediated by the exchange of a gauge boson. Some of these bosons carry charges themselves: the W bosons of the weak interaction are electrically charged, and the gluons of the strong interaction carry the strong "color" charge, whereas the photon, which carries the electromagnetic interaction, is electrically neutral.
In its simplest formulation, the standard model of the strong, weak, and electromagnetic interactions, although aesthetically beautiful, does not agree with all the known characteristics of the weak interactions, nor can it account for the experimentally derived masses of the quarks. High-energy physicists hoped that the Superconducting Super Collider (SSC), a machine with a fifty-mile circumference that was under construction in Texas in the late 1980s, would provide data to extend and correct the standard model. They were greatly disappointed when Congress cut off funding for this expensive atom smasher.
The standard model is one of the great achievements of the human intellect. It will be remembered—together with general relativity, quantum mechanics, and the unraveling of the genetic code—as one of the outstanding intellectual advances of the twentieth century. It is not, however, the "final theory," because too many constants still must be empirically determined. A particularly interesting development since the 1970s is the joining of particle physics with the astrophysics of the earliest stages of the universe. The "Big Bang" may provide the laboratory for exploration of the grand unified theories (GUTs) at temperatures and energies that are and will remain inaccessible in terrestrial laboratories. Also of profound significance will be an understanding of the so-called dark matter that makes up most of the mass of the universe.
In acknowledgment of the importance of the subject, experimental and theoretical high-energy physics research was recognized with a host of Nobel Prizes, many of them to American scientists. With the demise of the SSC, however, the field's future is likely to lie in machines built by associations of several nations.
Brown, Laurie M., and Lillian Hoddeson, eds. The Birth of Particle Physics. New York: Cambridge University Press, 1983.
Close, Frank, Michael Marten, and Christine Sutton. The Particle Explosion. New York: Oxford University Press, 1987.
Taubes, Gary. Nobel Dreams: Power, Deceit, and the Ultimate Experiment. New York: Random House, 1986.
Weinberg, Steven. Dreams of a Final Theory. New York: Pantheon Books, 1992.
See also Energy, Department of.
The age-old goal of physicists has been to understand the nature of matter and energy. Nowhere during the twentieth century were the boundaries of such knowledge further extended than in the field of nuclear physics. From an obscure corner of submicroscopic particle research, nuclear physics became the most prominent and fruitful area of physical investigation because of its fundamental insights and its applications.
Discovery of the Nucleus
In the first decade of the twentieth century J. J. Thomson's discovery of the electron at Cambridge University's Cavendish Laboratory changed the concept of the atom as a solid, homogeneous entity—a "billiard ball"—to one of a sphere of positive electrification studded throughout with negative electrons. This "plum pudding" atomic model, with a different number of electrons for each element, could not account for the large-angle scattering seen when alpha particles from naturally decaying radioactive sources were allowed to strike target materials. Thomson argued that the alpha particles suffered a series of small deflections in their encounters with the target atoms, resulting in some cases in a sizable deviation from their initial path. But between 1909 and 1911 in the Manchester laboratory of Thomson's former pupil Ernest Rutherford, Hans Geiger and Ernest Marsden produced scattering data that showed too many alpha particles were bent through angles too large for such an explanation to be valid.
Instead of a series of small deflections, Rutherford suggested early in 1911 that large-angle scattering could occur in a single encounter between an alpha particle and a target atom if the mass of the atom were concentrated in a tiny volume. While the atomic diameter was of the order of 10⁻⁸ centimeters, this atomic core (or nucleus), containing virtually the atom's entire mass, measured only about 10⁻¹² centimeters. The atom, therefore, consisted largely of empty space, with electrons circulating about the central nucleus. When an alpha-particle projectile closely approached a target nucleus, it encountered concentrated electrostatic repulsion sufficient to deflect it more than just a few degrees from its path.
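The scale Rutherford inferred can be checked with a back-of-the-envelope calculation: in a head-on collision, the alpha particle turns around where its kinetic energy has been fully converted into Coulomb potential energy. A rough sketch (the function name and the 5 MeV beam energy are illustrative assumptions, not figures from the source):

```python
K_COULOMB = 8.987_551_79e9    # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602_176_634e-19  # elementary charge, C

def closest_approach_m(alpha_energy_mev, target_z):
    """Head-on distance of closest approach (m) for an alpha particle
    (charge 2e) scattering off a bare nucleus of charge Z*e: the point
    where kinetic energy equals Coulomb potential energy."""
    T = alpha_energy_mev * 1e6 * E_CHARGE  # kinetic energy, J
    return K_COULOMB * (2 * E_CHARGE) * (target_z * E_CHARGE) / T

# A ~5 MeV alpha on gold (Z = 79) turns around at roughly 5e-14 m,
# thousands of times smaller than the ~1e-8 cm atomic diameter.
print(closest_approach_m(5.0, 79))
```

The result, a few times 10⁻¹² cm, matches the nuclear size quoted above and explains why only a concentrated central charge could produce the observed large-angle scattering.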
The Danish physicist Niels Bohr absorbed these concepts while visiting Rutherford's laboratory and in 1913 gave mathematical formulation to the rules by which the orbital electrons behaved. The order and arrangement of these electrons were seen to be responsible for the chemical properties exhibited by different elements. Pursuit of this field led to modern atomic physics, including its quantum mechanical explanation, and bore fruit earlier than did studies in nuclear physics. Radioactivity was recognized as a nuclear phenomenon, and the emission of alpha particles, known by then to be nuclei of helium atoms; beta particles, long recognized as electrons; and gamma rays, an electromagnetic radiation, reopened the question of whether atoms were constructed from fundamental building blocks. The work in 1913 of Henry G. J. Moseley, another former student of Rutherford's, showed that an element's position in the periodic table (its atomic number), and not its atomic weight, determined its characteristics. Moreover, he established that the number of positive charges on the nucleus (equal to its atomic number) was balanced by an equal number of orbital electrons. Since atomic weights (A) were (except for hydrogen) higher than atomic numbers (Z), the atom's nuclear mass was considered to be composed of A positively charged particles, called protons, and A–Z electrons to neutralize enough protons for a net nuclear charge of Z.
Early Nuclear Transmutations
In 1919 Rutherford announced another major discovery. Radioactivity had long been understood as a process of transmutation from one type of atom into another, occurring spontaneously. Neither temperature, nor pressure, nor chemical combination could alter the rate of decay of a given radioelement or change the identity of its daughter product. Now, however, Rutherford showed that he could deliberately cause transmutations. His were not among the elements at the high end of the periodic table, where natural radioactivity is commonly found, but were among the lighter elements. By allowing energetic alpha particles (⁴₂He) from decaying radium C' to fall upon nitrogen molecules, he observed the production of hydrogen nuclei, or protons (¹₁H), and an oxygen isotope. The reaction may be written as

⁴₂He + ¹⁴₇N → ¹⁷₈O + ¹₁H,

where the superscript represents the atomic weight and the subscript the atomic number, or charge.
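The notation carries a conservation rule: the superscripts (mass numbers) and the subscripts (charges) must sum to the same totals on both sides of a reaction. A minimal check, with hypothetical helper names of my own:

```python
# Nuclides written as (mass number A, atomic number Z), per the notation above.
ALPHA = (4, 2)       # helium-4 nucleus
NITROGEN_14 = (14, 7)
OXYGEN_17 = (17, 8)
PROTON = (1, 1)

def balanced(reactants, products):
    """A nuclear reaction must conserve both mass number (superscript)
    and charge (subscript)."""
    sum_a = lambda side: sum(a for a, _ in side)
    sum_z = lambda side: sum(z for _, z in side)
    return sum_a(reactants) == sum_a(products) and \
           sum_z(reactants) == sum_z(products)

# Rutherford's 1919 transmutation: alpha + nitrogen-14 -> oxygen-17 + proton
print(balanced([ALPHA, NITROGEN_14], [OXYGEN_17, PROTON]))  # True
```

The same bookkeeping identifies the oxygen isotope: with 18 units of mass and 9 of charge entering, a departing proton leaves behind A = 17, Z = 8, which is oxygen-17.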
During the first half of the 1920s Rutherford, now at Cambridge, where he had succeeded Thomson, was able to effect transmutations in many of the lighter elements. (In this work he was assisted primarily by James Chadwick.) But elements heavier than potassium would not yield to the alpha particles from their strongest radioactive source. The greater nuclear charge on the heavier elements repelled the alpha particles, preventing an approach close enough for transmutation. This finding suggested that projectile particles of energies or velocities higher than those found in naturally decaying radio elements were required to overcome the potential barriers of target nuclei. Consequently, various means of accelerating particles were devised.
In 1920 William D. Harkins, a physical chemist at the University of Chicago, conceived that the existence of a neutron would simplify certain problems in the construction of nuclei. In the same year, Rutherford (on the basis of incorrectly interpreted experimental evidence) also postulated the existence of such a neutral particle, the mass of which was comparable to that of the proton. Throughout the 1920s he, and especially Chadwick, searched unsuccessfully for this particle. In 1931 in Germany Walther Bothe and H. Becker detected, when beryllium was bombarded by alpha particles, a penetrating radiation, which they concluded consisted of energetic gamma rays. In France, Irène Curie and her husband, Frédéric Joliot, placed paraffin in the path of this radiation and detected protons ejected from that hydrogenous compound. They, too, believed that gamma rays were being produced and that these somehow transferred sufficient energy to the hydrogen atoms to break their chemical bonds. Chadwick
learned of this work early in 1932 and immediately recognized that beryllium was yielding not gamma rays but the long-elusive neutron and that this particle was encountering protons of similar mass, transferring much of its kinetic energy and momentum to them at the time of collision. Since the neutron is uncharged, it is not repelled by atomic nuclei. Consequently, it can enter easily into reactions when it finds itself near a nucleus; otherwise it travels great distances through matter, suffering no electrostatic attractions or repulsions.
Quantum Mechanics Applied to the Nucleus
Werner Heisenberg, in Leipzig, renowned for his articulation of quantum mechanics and its application to atomic physics, in 1932 applied his mathematical techniques to nuclear physics, successfully explaining that atomic nuclei are composed not of protons and electrons but of protons and neutrons. For a given element, Z protons furnish the positive charge, while A–Z neutrons bring the total mass up to the atomic weight A. Radioactive beta decay, formerly a strong argument for the existence of electrons in the nucleus, was now interpreted differently: the beta particles were formed only at the instant of decay, as a neutron changed into a proton. The reverse reaction could occur also, with the emission of a positive electron, or positron, as a proton changed into a neutron. This reaction was predicted by the Cambridge University theoretician P. A. M. Dirac and was experimentally detected in 1932 by Carl D. Anderson of the California Institute of Technology in cloud-chamber track photographs of cosmic-ray interactions. Two years later the Joliot-Curies noted the same result in certain radioactive decay patterns. The "fundamental" particles now consisted of the proton and neutron—nucleons (nuclear particles) with atomic masses of about 1—and of the electron and positron, with masses of about 1/1,840 of a nucleon.
The existence of yet another particle, the neutrino, was first suggested in 1931 by Wolfgang Pauli of Zurich in an address before the American Physical Society. When a nucleus is transmuted and beta particles emitted, there are specific changes in energy. Yet, unlike the case of alpha decay, beta particles exhibited a continuous energy distribution, with only the maximum energy seen as that of the reaction. The difference between the energy of a given beta particle and the maximum was thought to be carried off by a neutrino, the properties of which—very small or zero mass and no charge—accounted for the difficulty of detecting it. In 1934 Enrico Fermi presented a quantitative theory of beta decay incorporating Pauli's hypothesis. Gamma radiation, following either alpha or beta decay, was interpreted as being emitted from the daughter nucleus as it went from an excited level to its ground state.
Further Understanding Provided by the Neutron
The neutron, the greatest of these keys to an understanding of the nucleus, helped to clarify other physical problems besides nuclear charge and weight. In 1913 Kasimir Fajans in Karlsruhe and Frederick Soddy in Glasgow had fit the numerous radio elements into the periodic table, showing that in several cases more than one radio element must be placed in the same box. Mesothorium I, thorium X, and actinium X, for example, all were chemically identical to radium; that is, they were isotopes. This finding meant they each had 88 protons but had, respectively, 140, 136, and 135 neutrons. Also, in the pre–World War I period Thomson showed that nonradioactive elements exist in isotopic forms—neon, for example, has atomic weights of 20 and 22. His colleague F. W. Aston perfected the mass spectrograph, with which during the 1920s he accurately measured the masses of numerous atomic species. It was revealed that these masses were generally close to, but were not exactly, whole numbers. The difference was termed the "packing effect" by Harkins and E. D. Wilson as early as 1915, and the "packing fraction" by Aston in 1927. After 1932 it was also learned that atomic masses were not the sums of Z proton masses and A–Z neutron masses, and the difference was termed the "mass defect." The concept of nuclear building blocks (protons and neutrons) was retained; however, it was seen that a certain amount of mass was converted into a nuclear binding energy to overcome the mutual repulsion of the protons. This binding energy is of the order of a million times greater than the energies binding atoms in compounds or in stable crystals, which indicates why nuclear reactions involve so much more energy than chemical reactions.
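The mass-defect bookkeeping described here is simple enough to sketch directly. A minimal illustration, using rounded modern mass values (assumptions for the example, not Aston's measurements) and helium-4 as the test nucleus:

```python
# Mass defect and binding energy of helium-4: the measured nuclear
# mass falls short of the summed masses of its constituent nucleons,
# and the difference, via E = mc^2, is the binding energy.
M_PROTON = 1.007276       # proton mass, atomic mass units (u)
M_NEUTRON = 1.008665      # neutron mass, u
M_HE4_NUCLEUS = 4.001506  # helium-4 nuclear mass, u
U_TO_MEV = 931.494        # energy equivalent of 1 u, in MeV

def binding_energy(z, n, nuclear_mass):
    """Return (mass defect in u, binding energy in MeV) for a nucleus
    of z protons and n neutrons with the given measured mass."""
    mass_defect = z * M_PROTON + n * M_NEUTRON - nuclear_mass
    return mass_defect, mass_defect * U_TO_MEV

defect, energy = binding_energy(2, 2, M_HE4_NUCLEUS)
print(f"mass defect: {defect:.6f} u, binding energy: {energy:.1f} MeV")
```

The result, roughly 28 MeV (about 7 MeV per nucleon), is on the order of a million times the few electron volts that bind atoms into molecules, the disparity noted above between nuclear and chemical reactions.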
The existence of deuterium, a hydrogen isotope of mass 2 (²₁H), present in ordinary (mass 1) hydrogen to the extent of about 1 part in 4,500, was suggested in 1931 by Raymond T. Birge and Donald H. Menzel at the University of California at Berkeley and shortly thereafter was confirmed by Harold C. Urey and George M. Murphy at Columbia University, in collaboration with Ferdinand G. Brickwedde of the National Bureau of Standards. The heavy-hydrogen atom's nucleus, called the deuteron, proved to be exceptionally useful: it entered into some nuclear reactions more readily than did the proton.
Shortly after their discovery in 1932, neutrons were used as projectiles to effect nuclear transmutations by Norman Feather in England and Harkins, David Gans, and Henry W. Newson at Chicago. Two years later the Joliot-Curies reported the discovery of yet another process of transmutation: artificial radioactivity. A target not normally radioactive was bombarded with alpha particles and continued to exhibit nuclear changes even after the projectile beam was stopped. Such bombardment has permitted the production of about 1,100 nuclear species beyond the 320 or so found occurring in nature.
Nuclear Fission, Fusion, and Nuclear Weapons
During the mid-1930s, Fermi and his colleagues in Rome were most successful in causing transmutations with neutrons, particularly after they discovered the greater likelihood of the reactions occurring when the neutrons' velocities were reduced by prior collisions. When uranium,
the heaviest known element, was bombarded with neutrons, several beta-particle (⁰₋₁e)-emitting substances were produced, which Fermi reasoned must be artificial elements beyond uranium in the periodic table. The reaction may be expressed as ²³⁸₉₂U + ¹₀n → ²³⁹₉₂U → ²³⁹₉₃X + ⁰₋₁e, with a possible subsequent decay of ²³⁹₉₃X → ²³⁹₉₄Y + ⁰₋₁e. But radiochemical analyses of the trace amounts of new substances placed them in unexpected groupings in the periodic table, and, even worse, Otto Hahn and Fritz Strassmann, in Berlin toward the end of 1938, were unable to separate them chemically from elements found in the middle part of the periodic table. It seemed that the so-called transuranium elements had chemical properties identical to barium, lanthanum, and cerium. Hahn's longtime colleague Lise Meitner, then a refugee in Sweden, and her nephew Otto R. Frisch, at that time in Bohr's Copenhagen laboratory, in 1938 saw that the neutrons were not adhering to the uranium nuclei to form heavier elements that then underwent beta decay, but were causing the uranium nuclei to split (fission) into two roughly equal particles. They recognized that these fission fragments suffered beta decay in their movement toward conditions of greater internal stability.
With the accurate atomic-mass values then available, it was apparent that in fission a considerable amount of mass is converted into energy; that is, the mass of the neutron plus uranium is greater than that of the fragments. The potential for utilizing such energy was widely recognized in 1939, assuming that additional neutrons were released in the fission process and that at least one of the neutrons would rupture another uranium nucleus in a chain reaction. The United States, Great Britain, Canada, France, the Soviet Union, Germany, and Japan all made efforts in this direction during World War II. A controlled chain reaction was first produced in Fermi's "pile," or "reactor," in 1942 at the University of Chicago, and an uncontrolled or explosive chain reaction was first tested under the direction of J. Robert Oppenheimer in 1945 in New Mexico. Among the scientific feats of the atomic-bomb project was the production at Berkeley in 1940–1941 of the first man-made transuranium elements, neptunium and plutonium, by teams under Edwin M. McMillan and Glenn Seaborg, respectively. A weapon involving the fission of the uranium isotope 235 was employed against Hiroshima, and another using plutonium (element 94, the "Y" above) nuclei destroyed Nagasaki.
Like the fission of heavy elements, the joining together (fusion) of light elements is also a process in which mass is converted into energy. This reaction, experimentally studied as early as 1934 by Rutherford and his colleagues and theoretically treated in 1938 by George Gamow and Edward Teller, both then at George Washington University, has not been controlled successfully for appreciable periods of time (preventing its use as a reactor); but its uncontrolled form is represented in the hydrogen bomb, first tested in 1952.
The growth of "big science," measured by its cost and influence, is manifest not only in weaponry and power-producing reactors but also in huge particle-accelerating machines. Alpha particles from naturally decaying radioelements carry a kinetic energy of between about 4 and 10 million electron volts (MeV). But, as only one projectile in several hundred thousand is likely to come close enough to a target nucleus to affect it, reactions occur relatively infrequently, even with concentrated radioactive sources. Cosmic radiation, which possesses far greater energy, has an even lower probability of interacting with a target nucleus. Means were sought for furnishing a copious supply of charged particles that could be accelerated to energies sufficient to overcome the nuclear electrostatic repulsion. This feat would both shorten the time of experiments and increase the number of reactions. Since electrical technology had little or no previous application in the range of hundreds of thousands or millions of volts, these were pioneering efforts in engineering as well as in physics. In the late 1920s Charles C. Lauritsen and H. R. Crane at the California Institute of Technology succeeded with a cascade transformer in putting 700,000 volts across an X-ray tube. Merle A. Tuve, at the Carnegie Institution of Washington, in 1930 produced protons in a vacuum tube with energies of more than a million volts. The next year, at Princeton, Robert J. Van de Graaff built the first of his electrostatic generators, with a maximum potential of about 1.5 million volts. In 1932 Ernest O. Lawrence and his associates at Berkeley constructed a magnetic resonance device, called a cyclotron because a magnetic field bent the charged particles in a circular path. The novelty of this machine lay in its ability to impart high energies to particles in a series of steps, during each revolution, thereby avoiding the need for great voltages across the terminals, as in other accelerators.
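The cyclotron's novelty, acceleration in many small steps at a fixed frequency, rests on the fact that a non-relativistic particle's revolution frequency in a magnetic field is independent of its orbit radius. A sketch of the standard relations (the field and radius below are illustrative assumptions, not the parameters of any historical machine):

```python
import math

# Non-relativistic cyclotron relations: the revolution frequency
# f = qB / (2 pi m) does not depend on radius, so a fixed-frequency
# alternating voltage can kick the particle every half-turn; the
# final energy is set by the outer radius, E = (qBr)^2 / 2m.
Q_PROTON = 1.602e-19   # proton charge, coulombs
M_PROTON = 1.673e-27   # proton mass, kilograms

def cyclotron_frequency(charge, mass, b_field):
    """Revolution frequency in hertz for the given field in teslas."""
    return charge * b_field / (2 * math.pi * mass)

def exit_energy_mev(charge, mass, b_field, radius):
    """Kinetic energy in MeV reached at the outer radius in meters."""
    joules = (charge * b_field * radius) ** 2 / (2 * mass)
    return joules / 1.602e-13   # joules per MeV

f = cyclotron_frequency(Q_PROTON, M_PROTON, b_field=1.5)
e = exit_energy_mev(Q_PROTON, M_PROTON, b_field=1.5, radius=0.5)
print(f"frequency: {f / 1e6:.1f} MHz, exit energy: {e:.1f} MeV")
```

At velocities approaching that of light the relativistic mass increase spoils this fixed frequency, the limitation that the synchrotron principle was later invented to overcome.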
The cyclotron soon exceeded the energies of other machines and became the most commonly used "atom smasher."
Although Americans excelled in the mechanical ability that could produce such a variety of machines, they were only beginning to develop theoretical and experimental research to use them. They also lacked the driving force of Rutherford. Since 1929 John D. Cockcroft and E. T. S. Walton had been building and rebuilding, testing and calibrating their voltage multiplier in the Cavendish Laboratory. Rutherford finally insisted that they perform a real experiment on it. The Russian George Gamow and, independently, Edward U. Condon at Princeton with R. W. Gurney of England, had applied quantum mechanics to consideration of the nucleus. Gamow concluded that particles need not surmount the potential energy barrier of about 25 MeV, for an element of high atomic number, to penetrate into or escape from the nucleus; instead these particles could "tunnel" through the barrier at far
lower energies. The lower the energy, the less likely it was that tunneling would occur, yet an abundant supply of projectiles might produce enough reactions to be recorded. With protons accelerated to only 125,000 volts, Cockcroft and Walton, in 1932, found lithium disintegrated into two alpha particles in the reaction ¹₁H + ⁷₃Li → ⁴₂He + ⁴₂He. Not only was this the first completely artificial transmutation (Rutherford's transmutation in 1919 had used alpha-particle projectiles from naturally decaying radioelements), but the two also measured the products' range, and therefore their energy, which, combined with a precise value of the mass lost in the reaction, verified for the first time Albert Einstein's famous equation E = mc².
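Gamow's conclusion can be compressed into one schematic relation (a modern shorthand, not the notation of the original papers): the chance that a projectile of charge Z₁e and velocity v tunnels through the Coulomb barrier of a nucleus of charge Z₂e is governed by an exponential penetration factor,

```latex
P \;\propto\; \exp\!\left(-\frac{2\pi Z_1 Z_2 e^2}{\hbar v}\right),
```

so the probability falls steeply as the projectile slows yet never reaches zero, which is why even modest 125,000-volt protons could still disintegrate lithium in detectable numbers.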
The United States continued to pioneer machine construction, often with medical and biological financial support: Donald W. Kerst of the University of Illinois built a circular electron accelerator, called a betatron, in 1940, and Luis W. Alvarez of Berkeley designed a linear proton accelerator in 1946. D. W. Fry in England perfected a linear electron accelerator (1946), as did W. W. Hansen at Stanford. Since particles traveling at velocities near that of light experience a relativistic mass increase, the synchrotron principle, which uses a varying magnetic field or radio frequency to control the particle orbits, was developed independently in 1945 by Vladimir I. Veksler in the Soviet Union and by McMillan at Berkeley. By the 1970s, large accelerators could attain hundreds of millions, or even several billion, electron volts and were used to produce numerous elementary particles. Below this realm of high-energy or particle physics, recognized as a separate field since the early 1950s, nuclear physics research continued in the more modest MeV range.
With these methods of inducing nuclear reactions and the measurements of the masses and energies involved, questions arose about what actually occurs during a transmutation. Traditional instruments—electroscopes, electrometers, scintillating screens, electrical counters—and even the more modern electronic devices were of limited value. Visual evidence was most desirable. At Chicago in 1923 Harkins attempted unsuccessfully to photograph cloud-chamber tracks of Rutherford's 1919 transmutation of nitrogen. In 1921 Rutherford's pupil P. M. S. Blackett examined 400,000 tracks and found that 8 exhibited a Y-shaped fork, indicating that the alpha-particle projectile was absorbed by the nitrogen target into a compound nucleus, which immediately became an isotope of oxygen by the emission of a proton. The three branches of the Y consisted of the incident alpha and the two products, the initially neutral and slow-moving nitrogen having no track. Had the now-discredited alternative explanation of the process been true, namely, that the alpha particle merely bounced off the nitrogen nucleus, which then decayed according to the reaction ¹⁴₇N → ¹³₆C + ¹₁H, a track of four branches would have been seen.
Experimental work by Harkins and Gans in 1935 and theoretical contributions by Bohr the next year clearly established the compound nucleus as the intermediate stage in most medium-energy nuclear reactions. Alvarez designed a velocity selector for monoenergetic neutrons that allowed greater precision in reaction calculations, while Gregory Breit at the University of Wisconsin and the Hungarian refugee Eugene P. Wigner at Princeton in 1936 published a formula that explained the theory of preferential absorption of neutrons (their cross sections): If the neutrons have an energy such that a compound nucleus can be formed at or near one of its permitted energy levels, there is a high probability that these neutrons will be captured.
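The Breit–Wigner result is usually quoted today in its single-level resonance form; the schematic version below (with the statistical spin factor omitted) shows why absorption peaks sharply when the neutron energy E lies near a permitted level E₀ of the compound nucleus:

```latex
\sigma(E) \;=\; \pi\bar{\lambda}^{2}\,
  \frac{\Gamma_n\,\Gamma_\gamma}{(E - E_0)^2 + (\Gamma/2)^2}
```

Here λ̄ is the reduced de Broglie wavelength of the neutron, Γₙ and Γ_γ are the partial widths for neutron emission and radiative capture, and Γ is their sum.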
It was recognized that the forces holding nucleons together are stronger than electrostatic, gravitational, and weak interaction (beta particle–neutrino) forces and that they operate over shorter ranges, namely, the nuclear dimension of 10⁻¹² centimeters. In 1936 Bohr made an analogy between nuclear forces and those within a drop of liquid. Both are short range, acting strongly on those nucleons/molecules in their immediate neighborhood but having no influence on those further away in the nucleus/drop. The total energy and volume of a nucleus/drop are directly proportional to the number of constituent nucleons/molecules, and any excess energy of a constituent is rapidly shared among the others. This liquid-drop model of the nucleus, which meshed well with Bohr's understanding of the compound-nucleus stage during reactions, treated the energy states of the nucleus as a whole. Its great success, discovered by Bohr in collaboration with John A. Wheeler of Princeton (1939), in explaining fission as a deformation of the spherical drop into a dumbbell shape that breaks apart at the narrow connection, assured its wide acceptance for a number of years.
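The liquid-drop picture also received quantitative form in the semi-empirical mass formula of C. F. von Weizsäcker (1935), as refined by Bethe. In a common modern notation, with coefficients fitted to measured masses, the binding energy of a nucleus of mass number A and charge Z reads

```latex
B(A, Z) \;=\; a_V A \;-\; a_S A^{2/3}
  \;-\; a_C \frac{Z(Z-1)}{A^{1/3}}
  \;-\; a_A \frac{(A - 2Z)^2}{A} \;+\; \delta(A, Z),
```

with volume, surface, Coulomb, asymmetry, and pairing terms corresponding respectively to the drop's bulk cohesion, its surface tension, the mutual repulsion of the protons, the neutron-proton balance, and the pairing of like nucleons.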
The strongest opposition to this liquid-drop interpretation came from proponents of the nuclear-shell model, who felt that nucleons retain much of their individuality—that, for example, they move within their own well-defined orbits. In 1932 James H. Bartlett of the University of Illinois, by analogy to the grouping of orbital electrons, suggested that protons and neutrons in nuclei also form into shells. This idea was developed in France and Germany, where it was shown in 1937 that data on magnetic moments of nuclei conform to a shell-model interpretation.
To explain the very fine splitting (hyperfine structure) of lines in the optical spectra of some elements—spectra produced largely by the extra nuclear electrons—several European physicists in the 1920s had suggested that atomic nuclei possess mechanical and magnetic moments relating to their rotation and configuration. From the 1930s on, a number of techniques were developed for measuring such nuclear moments—including the radio-frequency resonance
method of Columbia University's I. I. Rabi—and from the resulting data certain regularities appeared. For example, nuclei with an odd number of particles have half units of spin and nuclei with an even number of particles have integer units of spin, while nuclei with an even number of protons and an even number of neutrons have zero spin. Evidence such as this suggested some sort of organization of the nucleons.
With the shell model overshadowed by the success of the liquid-drop model, and with much basic research interrupted by World War II, it was not until 1949 that Maria Goeppert Mayer at the University of Chicago and O. Haxel, J. H. D. Jensen, and H. E. Suess in Germany showed the success of the shell model in explaining the so-called magic numbers of nucleons: 2, 8, 20, 28, 50, 82, and 126. Elements having these numbers of nucleons, known to be unusually stable, were assumed to have closed shells in the nucleus. Lead 208, for example, is "doubly magic," having 82 protons and 126 neutrons. More recent interpretations, incorporating features of both liquid-drop and shell models, are called the "collective" and "unified" models.
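The regularities described in the last two paragraphs, the spin classes by nucleon parity and the magic numbers, reduce to rules of thumb simple enough to put in code; the sketch below is a mnemonic for ground-state nuclei, not a physical model:

```python
# Shell-model "magic numbers" of nucleons, and the empirical
# ground-state spin classes by parity of the nucleon counts.
MAGIC = {2, 8, 20, 28, 50, 82, 126}

def is_doubly_magic(protons, neutrons):
    """True when both the proton and neutron counts close a shell."""
    return protons in MAGIC and neutrons in MAGIC

def spin_class(protons, neutrons):
    """Empirical rule: even-even nuclei have spin zero, odd-A nuclei
    half-integer spin, and odd-odd nuclei integer spin."""
    if protons % 2 == 0 and neutrons % 2 == 0:
        return "zero"
    if (protons + neutrons) % 2 == 1:
        return "half-integer"
    return "integer"

print(is_doubly_magic(82, 126))  # lead-208, the doubly magic example
print(spin_class(82, 126))
```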
Aside from the question of the structure of the nucleus, after it was recognized that similarly charged particles were confined in a tiny volume, the problem existed of explaining the nature of the short-range forces that overcame their electrical repulsion. In 1935 Hideki Yukawa in Japan reasoned that just as electrical force is transmitted between charged bodies in an electromagnetic field by a particle called a photon, there might be an analogous nuclear-field particle. Accordingly, the meson, as it was called (with a predicted mass about 200 times that of the electron), was soon found in cosmic rays by Carl D. Anderson and Seth H. Neddermeyer. The existence of this particle was confirmed by 1938. But in 1947 Fermi, Teller, and Victor F. Weisskopf in the United States concluded that this mu meson, or muon, did not interact with matter in the necessary way to serve as a field particle; and S. Sakata and T. Inoue in Japan, and independently Hans A. Bethe at Cornell and Robert E. Marshak at the University of Rochester, suggested that yet another meson existed. Within the same year, Cecil F. Powell and G. P. S. Occhialini in Bristol, England, found the pi meson, or pion—a particle slightly heavier than the muon into which it decays and one that meets field-particle requirements—in cosmic-ray tracks. Neutrons and protons were thought to interact through the continual transfer of positive, negative, and neutral pions between them.
Wider Significance of Nuclear Physics
In addition to the profound insights into nature revealed by basic research in nuclear physics, and the awesome applications to power sources and weapons, the subject also contributed to important questions in other fields. Early in the twentieth century, Bertram B. Boltwood, a radiochemist at Yale, devised a radioactive dating technique to measure the age of the earth's oldest rocks, at one time a subject considered the domain of geologists. These procedures were refined, largely by British geologist Arthur Holmes and later by geochemist Claire Patterson at the California Institute of Technology, as data on isotopic concentrations were better appreciated and better measured, leading to an estimation of the earth's antiquity at several billion years. Measuring a shorter time scale with unprecedented accuracy, chemist Willard Libby at the University of Chicago developed a method of dating artifacts of anthropological age using the carbon 14 isotope. Nuclear physics informed yet another subject of longstanding fascination to humanity: What keeps stars shining over enormous periods of time? Just before World War II, Hans Bethe of Cornell conceived the carbon cycle of nuclear reactions and calculated the energy output of each step. And shortly after the war, Gamow extended the range of nuclear physics to the entire universe, answering the cosmological question of origin with the "Big Bang," and detailing the nuclear reactions that occurred over the next several hundred million years.
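Libby's radiocarbon method rests on nothing more than the exponential decay law: the surviving fraction of carbon 14 fixes the time since a sample stopped exchanging carbon with the atmosphere. A minimal sketch (using the modern 5,730-year half-life, slightly longer than Libby's original figure):

```python
import math

# Radiocarbon dating: solve N = N0 * 2^(-t / half_life) for t.
HALF_LIFE_C14 = 5730.0  # years

def age_from_fraction(remaining_fraction, half_life=HALF_LIFE_C14):
    """Years elapsed, given the fraction N/N0 of carbon 14 remaining."""
    return half_life / math.log(2) * math.log(1.0 / remaining_fraction)

# A sample retaining half its carbon 14 is one half-life old,
# a quarter two half-lives, and so on.
print(round(age_from_fraction(0.5)))   # 5730
print(round(age_from_fraction(0.25)))  # 11460
```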
Although nuclear physics is sometimes said to have been born during the early 1930s—a period of many remarkable discoveries—it can more appropriately be dated from 1911 or 1919. What is true of the 1930s is that by this time nuclear physics was clearly defined as a major field. The percentage of nuclear-physics papers published in Physical Review rose dramatically; other measures of the field's prominence included research funds directed to it, the number of doctoral degrees awarded, and the number of fellowships tendered by such patrons as the Rockefeller Foundation. Although they were by no means the only scientists fashioning the subject in the United States, Lawrence at Berkeley and Oppenheimer at the California Institute of Technology and Berkeley were dominating figures in building American schools of experimental and theoretical research, respectively. This domestic activity was immeasurably enriched in the 1930s by the stream of refugee physicists from totalitarian Europe—men such as Bethe, Fermi, Leo Szilard, Wigner, Teller, Weisskopf, James Franck, Gamow, Emilio Segrè, and, of course, Einstein. Prominent Europeans had earlier taught at the summer schools for theoretical physics held at several American universities; now many came permanently.
Much of this domestic and foreign talent was mobilized during World War II for the development of radar, the proximity fuse, and most notably the Manhattan Project, which produced the first atomic bombs. So stunning was the news of Hiroshima's and Nagasaki's obliteration that nuclear physicists were regarded with a measure of awe. In the opinion of most people nuclear physics was the most exciting, meaningful, and fearful area of science, and its usefulness brought considerable government support. American domination of nuclear physics in the postwar decades resulted, therefore, from a combination of the wartime concentration of research in the United States and the simultaneous disruptions in
Europe, and from another combination of rising domestic abilities and exceptional foreign talent, financed by a government that had seen (at least for a while) that basic research was applicable to national needs.
In the postwar period, the U.S. Atomic Energy Commission and then the Department of Energy supported most research in this field. It was conducted in universities and in several national laboratories, such as those at Los Alamos, Livermore, Berkeley, Brookhaven, Argonne, and Oak Ridge. With the most fashionable side of the subject now called high-energy or particle physics, ever more energetic particle accelerators were constructed, seeking to produce reactions at high energies that would reveal new particles and their interactions. Their size and cost, however, led to dwindling support. By the end of the twentieth century, the nation's two most significant machines were at the Stanford Linear Accelerator Center and the Fermi National Accelerator Laboratory. A larger machine of the next generation, the Superconducting Super Collider, was authorized by Congress and then cancelled when its fifty-mile-long tunnel was but a quarter excavated, because of its escalating, multi-billion-dollar price tag. Consequently, the research front will be at accelerator centers run by groups of nations for the foreseeable future.
See also Energy, Department of; Physics, High-Energy Physics.
Solid-State Physics

Solid-state physics is the branch of research that deals with properties of condensed matter—originally solids such as crystals and metals, later extended to liquids and more exotic forms of matter. The multitude of properties studied and the variety of materials that can be explored give this field enormous scope.
Modern solid-state physics relies on the concepts and techniques of twentieth-century atomic theory, in which a material substance is seen as an aggregate of atoms obeying the laws of quantum mechanics. Earlier concepts had failed to explain the most obvious characteristics of most materials. A few features of a metal could be explained by assuming that electrons moved freely within it, like a gas, but that did not lead far. Materials technology was built largely on age-old craft traditions.
The Rise of Solid-State Theory
Discoveries in the first quarter of the twentieth century opened the way to answers. The work began with a puzzle: experiments found that for most simple solids, as the temperature is lowered toward absolute zero, adding even an infinitesimally small amount of heat produces a large change in temperature. The classical model of a solid made up of vibrating atoms could not explain this. In 1907, Albert Einstein reworked the model using the radical new idea that energy comes in tiny, discrete "quantum" packets. The qualitative success of Einstein's theory, as refined by other physicists, helped confirm the new quantum theory and pointed to its uses for explaining solid-state phenomena.
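Einstein's model treats each atom as an oscillator whose energy comes only in quanta hν; in modern notation its heat capacity is

```latex
C_V \;=\; 3 N k_B \left(\frac{\theta_E}{T}\right)^{2}
  \frac{e^{\theta_E/T}}{\left(e^{\theta_E/T} - 1\right)^{2}},
\qquad \theta_E = \frac{h\nu}{k_B},
```

which approaches the classical value 3Nk_B at high temperatures but falls toward zero as T drops below the Einstein temperature θ_E, in qualitative agreement with the low-temperature measurements that had stumped the classical model.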
In 1912, scientists in Munich discovered an experimental method of "seeing" the internal arrangement of atoms in solids. They sent X rays through crystals and produced patterns, which they interpreted as the result of the scattering of the X rays by atoms arranged in a lattice. By the late 1920s, X-ray studies had revealed most of the basic information about how atoms are arranged in simple crystals.
The theories that attempted to explain solids still contained crippling problems. Solutions became available only after a complete theory of quantum mechanics was invented, in 1925 and 1926, by the German physicist Werner Heisenberg and the Austrian physicist Erwin Schrödinger, building on work by the Danish physicist Niels Bohr. A quantum statistics that could be applied to the particles in condensed matter was invented in 1926 by the Italian physicist Enrico Fermi and the British physicist P. A. M. Dirac.
The next few years were a remarkably productive period as the new conceptual and mathematical tools were applied to the study of solids and liquids. Many leading physicists were involved in this work—Germans, Austrians, Russians, French, British, and a few Americans, notably John Van Vleck and John Slater. Between 1928 and 1931, Felix Bloch, Rudolf Peierls, Alan Wilson, and others developed a powerful concept of energy bands separated by gaps to describe the energy distribution of the swarm of electrons in a crystal. This concept explained why metals conduct electricity and heat while insulators do not, and why the electrical conductivity of a class of materials called semiconductors varies with temperature. Another breakthrough came in 1933 when Eugene Wigner and his student Frederick Seitz at Princeton University developed a simple approximate method for computing the energy bands of sodium and other real solids. By 1934, some of the most dramatic properties of solids, such as magnetism, had received qualitative (if not quantitative) explanation.
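The band-and-gap idea can be illustrated with a minimal one-dimensional model. The alternating-bond chain below is a standard textbook construction, not the historical calculation; two allowed bands appear, separated by a gap of 2|t1 − t2| wherever the two hopping strengths differ:

```python
import math

# Two-band tight-binding chain with alternating hopping strengths
# t1 and t2: the allowed energies at crystal momentum k are
# E(k) = +/- |t1 + t2 * exp(i k a)|, giving two bands and a gap.
def band_energies(k, t1=1.0, t2=0.6, a=1.0):
    """Lower- and upper-band energies at momentum k (arbitrary units)."""
    re = t1 + t2 * math.cos(k * a)
    im = t2 * math.sin(k * a)
    magnitude = math.hypot(re, im)
    return -magnitude, magnitude

# Sample the Brillouin zone from k = 0 to k = pi/a and find the edges.
ks = [math.pi * i / 100 for i in range(101)]
lower = [band_energies(k)[0] for k in ks]
upper = [band_energies(k)[1] for k in ks]
gap = min(upper) - max(lower)
print(f"band gap: {gap:.2f}")  # 2 * (1.0 - 0.6) = 0.80
```

In this picture a metal corresponds to a partly filled band, an insulator to filled bands with a wide gap above them, and a semiconductor to a gap small enough for heat to promote a few carriers across, which is why its conductivity varies with temperature as noted above.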
But the models of the new theory remained idealizations, applicable only to perfect materials. Physicists could
not extend the results, for the available materials contained far too many impurities and physical imperfections. Most practically important characteristics (such as the strength of an alloy) were far beyond the theorists' reach. In the mid-1930s, many theorists turned their attention to fields such as nuclear physics, which offered greater opportunities for making exciting intellectual contributions.
Yet a broader base was being laid for future progress. Established scientists and engineers, particularly in the United States, were avidly studying the new quantum theory of solids. It also became a standard topic in the graduate studies of the next generation. Meanwhile, leaders in universities, industrial labs, and philanthropies were deliberately striving to upgrade American research in all fields of physics. Their efforts were reinforced by the talents of more than 100 European physicists who immigrated to the United States between 1933 and 1941 as a result of the political upheavals in Europe.
Dynamic Growth in World War II and After
Military-oriented research during World War II (1939–1945) created many new techniques that would be useful for the study of solids. For example, Manhattan Project scientists studied neutrons, and in the postwar period these neutral subatomic particles were found to be effective probes of solids, especially in exploring magnetic properties. The fervent wartime development of microwave radar also brought a variety of new techniques useful for studying solids, such as microwave spectroscopy, in which radiation is tuned to coincide with natural vibrational or rotational frequencies of atoms and molecules within a magnetic field. The Collins liquefier, developed just after the war at the Massachusetts Institute of Technology, made it possible for laboratories to get bulk liquid helium and study materials under the simplified conditions that prevail at extremely low temperatures. Methods were also developed during the war for producing single crystals in significant quantities. The production of pure crystals of the elements silicon and germanium, which found wartime use in microwave devices, became so highly developed that an enormous number of postwar studies used these semiconductors as prototypes for the study of solid-state phenomena in general.
Thus, by the late 1940s a seemingly mature field of solid-state physics was growing in scope and also in terms of the number of physicists attracted to the field. By 1947 solid-state physics had become a large enough field to justify establishing a separate division for it within the American Physical Society.
In the postwar period solid-state physics became even more closely tied to practical applications, which then stimulated new interest in the field and increased its funding. The development of the transistor offers a striking example. In January 1945, the Bell Telephone Laboratories in New Jersey officially authorized a group to do fundamental research on solids. William B. Shockley, one of the group's two leaders, believed that such research could lead to the invention of a solid-state amplifier. Members of a small semiconductor subgroup led by Shockley directed their attention to silicon and germanium, whose properties had been closely studied during the wartime radar program. In December 1947, two members of the group, the theorist John Bardeen and the experimentalist Walter Brattain, working closely together, invented the first transistor.
The transistor rectifies and amplifies electrical signals more rapidly and reliably than the more cumbersome, fragile, and costly vacuum tube. It rapidly found practical application. Among the first to take an interest were military agencies, driven by Cold War concerns to fund advanced research as well as development in all fields of physics. Commercial interests promptly followed; the first "transistorized" radio went on the market in 1954, and the term "solid-state" was soon popularized by advertisers' tags. Transistorized devices revolutionized communications, control apparatus, and data processing. The explosive growth of commercial and national security applications led to wide popular interest, and swift increases in funding for every kind of research. By 1960, there were roughly 2,000 solid-state physicists in the United States, making up one-fifth of all American physicists. Here, as in most fields of science since the war, more significant work had been done in the United States than in the rest of the world put together. As other countries recovered economically, they began to catch up.
Unlike most fields of physics at that time, solid-state physics had about half of its U.S. specialists working in industry. Universities did not want to be left out, and starting in 1960 they established "materials science" centers with the aid of Department of Defense funding. As the name implied, the field was reaching past solid-state physicists to include chemists, engineers, and others in an interdisciplinary spirit.
Throughout the 1950s and 1960s, theory, technique, and applications of solid-state physics all advanced rapidly. The long list of achievements includes a theory for the details of atomic movements inside crystals, understanding of how impurities and imperfections cause optical properties and affect crystal growth, quantitative determination of such properties as electrical resistivity, and a more complete theory for the phase transitions between different states of matter. The biggest theoretical breakthrough of the period was an explanation of superconductivity in 1957 by Bardeen and two other American physicists, Leon N. Cooper and J. Robert Schrieffer. Their theory led the way to explanations of a whole series of so-called cooperative phenomena (which also include superfluidity, phase transitions, and tunneling) in which particles and sound-wave quanta move in unison, giving rise to strongly modified and sometimes astonishing properties. Theorists were further stimulated in 1972 when scientists at Cornell University, deploying ingenious new techniques to reach extremely low temperatures, discovered that helium-3 could become a superfluid with remarkable properties.
Meanwhile, many other important techniques were developed, such as the Josephson effect. In 1962, the young British physicist Brian D. Josephson proposed that a supercurrent can "tunnel" through a thin barrier separating two superconductors. This led to important devices such as the SQUID (Superconducting Quantum Interference Device), which can probe the surface structures of solids and can even map faint magnetic fields that reflect human brain activity. Still more versatile were techniques to create entirely new, artificially structured materials. With vapors or beams of molecules, physicists could build up a surface molecule by molecule, like layers of paint.
The list of applications continued to grow rapidly. By the mid-1970s, in addition to countless varieties of electronic diodes and transistors, there were, for example, solid-state lasers employed in such diverse applications as weaponry, welding, and eye surgery; magnetic bubble memories used in computers to store information in thin crystals; and improved understanding of processes in areas ranging from photography to metallurgy. In 1955, Shockley had established a semiconductor firm in California near Stanford University, creating a nucleus for what was later dubbed Silicon Valley—a hive of entrepreneurial capital, technical expertise, and innovation, but only one of many locales from Europe to Japan that thrived on solid-state physics. The creation of entire industries in turn stimulated interest in the specialty, now virtually a field of its own. In 1970, when the American Physical Society split up its massive flagship journal, the Physical Review, into manageable sections, the largest was devoted entirely to solids. But it was the "B" section, for in terms of intellectual prestige, solid-state physics had always taken second place behind fields such as nuclear and particle physics, which were called more "fundamental."
Condensed Matter from Stars to Supermarkets
To advance their status, and to emphasize their interest in ever more diverse materials, practitioners renamed the field; in 1978 the American Physical Society's division changed its name from "Solid State" to "Condensed Matter." The condensed-matter physicists were rapidly improving their understanding and control of the behavior of fluids and semidisordered materials like glasses. Theoretical studies of condensed matter began to range as far afield as the interiors of neutron stars, and even the entire universe in the moment following the Big Bang. Meanwhile, theory had become a real help to traditional solid-state technologies like metallurgy and inspired entire new classes of composite materials.
Experiment and theory, seemingly mature, continued to produce surprises. One spectacular advance, pointing to a future technology of submicroscopic machinery, was the development in the early 1980s of scanning microscopes. These could detect individual atoms on a surface, or nudge them into preferred configurations. Another discovery at that time was the Quantum Hall Effect: jumps of conductivity that allowed fundamental measurements with extraordinary precision. Later, a startling discovery at Bell Laboratories—using semiconductor crystals of unprecedented quality—revealed a new state of matter: a Quantum Hall Effect experiment showed highly correlated "quasiparticles" carrying only fractions of an electron's charge.
For research that could be considered fundamental, attention increasingly turned toward condensed-matter systems with quantized entities such as cooperatively interacting swarms of electrons, seen especially at very low temperatures. The physics community was galvanized in 1986 when scientists at IBM's Zurich laboratory announced their discovery of superconductivity in a ceramic material, at temperatures higher than any previous superconductor. The established way of studying solids had been to pursue the simplest possible systems, but this showed that more complex structures could display startling new properties all their own. The study of "high-temperature" superconductivity has led to new concepts and techniques as well as hosts of new materials, including ones that superconduct at temperatures an order of magnitude higher than anything known before 1986. Many novel applications for microelectronics have grown from this field. Equally fascinating was the creation in the 1990s of microscopic clouds of "Bose-Einstein condensed" gases, in which the atoms behave collectively as a single quantum entity.
Most of this work depended on electronic computers: the field was advancing with the aid of its own applications. With new theoretical ideas and techniques developed in the 1960s, calculations of electronic structures became routine during the 1970s. In the 1980s, numerical simulations began to approach the power of experiment itself. This was most visible in the study of chaos and nonequilibrium phenomena, as in the phase transition of a melting solid, which brought new understanding of many phenomena. There was steady progress in unraveling the old, great puzzle of fluids—turbulence—although here much remained unsolved. Studies of disorder also led to improved materials and new devices, such as the liquid crystal displays that turned up in items on supermarket shelves. Magnetism was studied with special intensity because of its importance in computer memories.
Physicists also cooperated with chemists to study polymers, and edged toward the study of proteins and other biological substances. Spider silk still beat anything a physicist could make. But the discovery that carbon atoms could be assembled in spheres (as in buckminsterfullerene) and tubes held hopes for fantastic new materials.
Some research problems now required big, expensive facilities. Ever since the 1950s, neutron beams from nuclear reactors had been useful to some research teams. A larger step toward "big science" came with the construction of machines resembling the accelerators of high-energy physics that emitted beams of high-intensity radiation to probe matter. The National Synchrotron Light Source, starting up in 1982 in Brookhaven, New York, was followed by a half-dozen more in the United States and abroad. Yet most condensed-matter research continued to be done by small, intimate groups in one or two rooms.
In the 1990s the steep long-term rise of funding for basic research in the field leveled off. Military support waned with the Cold War, while intensified commercial competition impelled industrial leaders like Bell Labs to emphasize research with near-term benefits. The community continued to grow gradually along with other fields of research, no longer among the fastest. By 2001 the American Physical Society division had some 5,000 members, largely from industry; as a fraction of the Society's membership, they had declined to one-eighth. This was still more than any other specialty, and represented much more high-level research in the field than any other country could muster.
The field's impact on day-to-day living continued to grow. The applications of condensed-matter physics were most conspicuous in information processing and communications, but had also become integral to warfare, health care, power generation, education, travel, finance, politics, and entertainment.
Hoddeson, Lillian, et al., eds. Out of the Crystal Maze: Chapters from the History of Solid-State Physics. New York: Oxford University Press, 1992. Extended essays by professional historians (some are highly technical).
Hoddeson, Lillian, and Vicki Daitch. True Genius: The Life and Science of John Bardeen. Washington, D.C.: The Joseph Henry Press, 2002. Includes an overview of solid-state physics for the general reader.
Kittel, Charles. Introduction to Solid-State Physics. New York: Wiley, 1953. In five editions to 1976, the classic graduate-student textbook.
Mott, Sir Nevill, ed. The Beginnings of Solid-State Physics. A Symposium. London: The Royal Society; Great Neck, N.Y.: Scholium International, 1980. Reminiscences by pioneers of the 1930s–1960s.
National Research Council, Solid-State Sciences Panel. Research in Solid-State Sciences: Opportunities and Relevance to National Needs. Washington, D.C.: National Academy of Sciences, 1968. The state of U.S. physics fields has been reviewed at intervals by panels of leading physicists. Later reviews, by the National Academy of Sciences, are:
National Research Council, Physics Survey Committee. Physics in Perspective. Vol. II, part A, The Core Subfields of Physics. Washington, D.C.: National Academy of Sciences, 1973. See "Physics of Condensed Matter," pp. 445–558.
National Research Council, Physics Survey Committee, Panel on Condensed-Matter Physics. Condensed-Matter Physics. In series, Physics Through the 1990s. Washington, D.C.: National Academy Press, 1986.
National Research Council, Committee on Condensed-Matter and Materials Physics. Condensed-Matter and Materials Physics: Basic Research for Tomorrow's Technology. In series, Physics in a New Era. Washington, D.C.: National Academy Press, 1999.
Riordan, Michael, and Lillian Hoddeson. Crystal Fire: The Birth of the Information Age. New York: Norton, 1997. For the general reader.
Weart, Spencer R., and Melba Phillips, eds. History of Physics. New York: American Institute of Physics, 1985. Includes readable articles on aspects of the history by P. Anderson, P. Ewald, L. Hoddeson, C. S. Smith.
"Physics." Dictionary of American History. . Encyclopedia.com. (November 24, 2017). http://www.encyclopedia.com/history/dictionaries-thesauruses-pictures-and-press-releases/physics
A full understanding of the history of physics would include consideration of its institutional, social, and cultural contexts. Physics became a scientific discipline during the nineteenth century, gaining a clear professional and cognitive identity as well as patronage from a number of institutions (especially those pertaining to education and the state). Before the nineteenth century, researchers who did work that we now refer to as physics identified themselves in more general terms—such as natural philosopher or applied mathematician—and discussion of their work often adopts a retrospective definition of physics.
For researchers of the nineteenth century, physics involved the development of quantifiable laws that could be tested by conducting experiments and taking precision measurements. The laws of physics focused on fundamental processes, often discovered in particular areas of research, such as mechanics, electricity and magnetism, optics, fluids, thermodynamics, and the kinetic theory of gases. The various specialists saw physics as a unified science, since they shared the same concepts and laws, with energy becoming the central unifying concept by the end of the century. In forming its cognitive and institutional identity, physics distinguished itself from other scientific and technical disciplines, including mathematics, engineering, chemistry, and astronomy. However, as we will see, the history of physics cannot be understood without considering developments in these other areas.
The Middle Ages inherited a wealth of knowledge from antiquity, including the systematic philosophy of Aristotle (384–322 b.c.e.) and the synthesis of ancient astronomy in the work of the Hellenistic astronomer Ptolemy (fl. second century c.e.). In agreement with those before him, Aristotle maintained that the terrestrial and celestial realms, separated by the orbit of the Moon, featured entirely different physical behaviors. His terrestrial physics was founded on the existence of four elements (earth, water, air, and fire) and the idea that every motion requires the specification of a cause for that motion. Aristotle considered two broad classes of motion: natural motion, as an object returned to its natural place (as dictated by its elemental composition), and violent motion, as an object was removed forcibly from its natural place. Because the natural place of the element earth was at the center of the cosmos, Aristotle's physics necessitated a geocentric, or Earth-centered, model of the heavens.
Whereas the terrestrial realm featured constant change, the heavenly bodies moved in uniform circular orbits and were perfect and unchanging. Starting from an exhaustive tabulation of astronomical data, Ptolemy modeled the orbits of each heavenly body using a complex system of circular motions, including a fundamental deferent and one or more epicycles. Often, Ptolemy was forced to make additions, including the eccentric model (in which the center of rotation of the orbiting body was offset from Earth) and the equant model (in which a fictitious point, also not located at Earth, defined uniform motion).
Despite the great value of this work, the West lost a good portion of it with the erosion of the Roman Empire. Luckily, a number of Islamic scholars took an interest in the knowledge of the ancients. In addition to translating Aristotle and Ptolemy (among others) into Arabic, they commented on these works extensively and made a number of innovations in astronomy, optics, matter theory, and mathematics (including the use of "Arabic numerals," with the zero as a placeholder). For example, al-Battani (c. 858–929) made improvements to Ptolemy's orbits of the Sun and Moon, compiled a revised catalog of stars, and worked on the construction of astronomical instruments. Avempace (Ibn Badja, c. 1095–1138 or 1139) developed a position first staked out by the Neoplatonist philosopher John Philoponus (fl. sixth century c.e.), arguing that Aristotle was wrong to claim that the time for the fall of a body was proportional to its weight. After the reconquest of Spain during the twelfth century, ancient knowledge became available once again in the Latin West. Arab commentators such as Averroes (Ibn Rushd, 1126–1198) became influential interpreters of an Aristotle that was closer to the original texts and quite different from the glosses and explanatory aids that the West had grown accustomed to.
During the late Middle Ages, there was a general revival of learning and science in the West. The mathematician Jordanus de Nemore (fl. thirteenth century c.e.) pioneered a series of influential studies of static bodies. In addition to studying levers, Jordanus analyzed the (lower) apparent weight of a mass resting on an inclined plane. Despite the church's condemnation of certain radical interpretations of Aristotelianism during the late thirteenth century, there followed a flowering of activity during the fourteenth century, particularly concerning the problem of motion. Two important centers of activity were Merton College (at Oxford), where a group of mathematicians and logicians included Thomas Bradwardine (c. 1290–1349) and Richard Swineshead (d. 1355), and the University of Paris, which included John Buridan (c. 1295–1358) and Nicole Oresme (c. 1325–1382).
The scholars at Merton College adopted a distinction between dynamics (in which the causes of motion are specified) and kinematics (in which motion is only described). The dynamical problems implied by Aristotelian physics, especially the problem of projectile motion, occupied many medieval scholars (see sidebar, "Causes of Motion: Medieval Understandings"). In kinematics, the release from the search for causation encouraged a number of new developments. The Mertonians developed the concept of velocity in analogy with the medieval idea of the intensity of a quality (such as the redness of an apple), and distinguished between uniform (constant velocity) and nonuniform (accelerated) motion. They also gave the first statement of the mean velocity theorem, which offered a way of comparing constant-acceleration motion to uniform motion.
While the Mertonians presented their analyses of motion through the cumbersome medium of words, other scholars developed graphical techniques. The most influential presentation of the mean speed theorem was offered by Oresme, who recorded the duration of the motion along a horizontal line (or "subject line") and indicated the intensity of the velocity as a sequence of vertical line segments of varying height. Figure 1 shows that an object undergoing constant acceleration travels the same distance as if it were traveling for the same period of time at its average velocity (the average of its initial and final velocity). Although this work remained entirely abstract and was not based on experiment, it helped later work in kinematics, most notably Galileo's.
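In modern notation (a gloss; Oresme's own argument was geometric, and the symbols below are today's conventions), the mean speed theorem reads:

```latex
% Mean speed theorem: under constant acceleration from initial
% velocity v_0 to final velocity v_f over a time t, the distance
% traveled equals that covered at the mean velocity.
s = \frac{v_0 + v_f}{2}\, t
% Special case of motion from rest (v_0 = 0) with constant
% acceleration a, so that v_f = a t:
s = \tfrac{1}{2} a t^2
```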
Following Aristotle's physics, medieval scholars pictured the celestial realm as being of unchanging perfection. Each heavenly body (the Sun, the Moon, the planets, and the sphere of the fixed stars) rotated around Earth on its own celestial sphere. Ptolemy's addition of epicycles on top of Aristotle's concentric spheres led medieval astronomers to speak of "partial orbs" within the "total orb" of each heavenly body. The orbs communicated rotational movement to one another without any resistive force and were made of a quintessence or ether, which was an ageless, transparent substance. Beyond the outermost sphere of the fixed stars was the "final cause" of the Unmoved Mover, which was usually equated with the Christian God. Buridan suggested that God impressed an impetus on each orb at the moment of creation and, in the absence of resistance, they had been rotating ever since. Both Buridan and Oresme considered the possibility of a rotating Earth as a way of explaining the diurnal motion of the fixed stars, and found their arguments to be promising but not sufficiently convincing.
Sixteenth and Seventeenth Centuries
The period of the scientific revolution can be taken to extend, simplistically but handily, from 1543, with the publication of Nicolaus Copernicus's De revolutionibus orbium coelestium, to 1687, with the publication of Isaac Newton's Philosophiae naturalis principia mathematica, often referred to simply as the Principia. The term "revolution" remains useful, despite the fact that scholars have suggested that the period shows significant continuities with what came before and after. Copernicus (1473–1543) was attracted to a heliocentric, or Sun-centered, model of the universe (already considered over one thousand years before by Aristarchus of Samos) because it eliminated a number of complexities from Ptolemy's model (including the equant), provided a simple explanation for the diurnal motion of the stars, and agreed with certain theological ideas of his own regarding the Sun as a kind of mystical motive force of the heavens. Among the problems posed by Copernicus's model of the heavens, the most serious was that it contradicted Aristotelian physics.
Heliocentrism was pursued again by the German mathematician Johannes Kepler (1571–1630). Motivated by a deep religious conviction that a mathematical interpretation of nature reflected the grand plan of the Creator and an equally deep commitment to Copernicanism, Kepler worked with the Danish astronomer Tycho Brahe (1546–1601) with the intention of calculating the orbits of the planets around the Sun. After Brahe's death, Kepler gained control of his former associate's data and worked long and hard on the orbit of Mars, eventually to conclude that it was elliptical. Kepler's so-called "three laws" were identified later by other scholars (including Newton) from different parts of his work, with the elliptical orbits of the planets becoming the first law. The second and third laws were his findings that the area swept out by a line connecting the Sun and a particular planet is the same for any given period of time; and that the square of a planet's period of revolution around the Sun is proportional to the cube of its distance from the Sun.
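Stated in modern notation (a later codification, not Kepler's own formulation), the three laws are:

```latex
% 1. Each planet moves on an ellipse with the Sun at one focus.
% 2. The line joining the Sun and a planet sweeps out equal areas
%    in equal times:
\frac{dA}{dt} = \text{const.}
% 3. The square of the orbital period T is proportional to the
%    cube of the mean distance a from the Sun:
T^2 \propto a^3
```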
The career of Galileo Galilei (1564–1642) began in earnest with his work on improved telescopes and using them to make observations that lent strength to Copernicanism, including the imperfections of the Moon's surface and the satellites of Jupiter. His public support of Copernicanism led to a struggle with the church, but his greater importance lies with his study of statics and kinematics, in his effort to formulate a new physics that would not conflict with the hypothesis of a moving Earth.

His work in statics was influenced by the Dutch engineer Simon Stevin (1548–1620), who made contributions to the analysis of the lever, to hydrostatics, and to arguments on the impossibility of perpetual motion. Galileo also repeated Stevin's experiments on free fall, which disproved Aristotle's contention that heavy bodies fall faster than light bodies, and wrote about them in On Motion (1590), which remained unpublished during his lifetime. There, he made use of a version of Buridan's impetus theory (see sidebar, "Causes of Motion: Medieval Understandings"), but shifted attention from the total weight of the object to the weight per unit volume. By the time of Two New Sciences (1638), he generalized this idea by claiming that all bodies—of whatever size and composition—fell with equal speed through a vacuum.
Two New Sciences summarized most of Galileo's work in statics and kinematics (the "two sciences" of the title). In order to better study the motion of bodies undergoing constant acceleration, Galileo used inclined planes pitched at very small angles in order to slow down the motion of a rolling ball. By taking careful distance and time measurements, and using the results of medieval scholars (including the mean speed theorem), he was able to show that the ball's instantaneous velocity increased linearly with time and that its distance increased according to the square of the time. Furthermore, Galileo proposed a notion of inertial motion as a limiting case of a ball rolling along a perfectly horizontal plane. Because, in this limiting case, the motion of the ball would ultimately follow the circular shape of the earth, his idea is sometimes referred to as a "circular inertia." Finally, Galileo presented his analysis of parabolic trajectories as a compound motion, made up of inertial motion in the horizontal direction and constant acceleration in the vertical.
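Galileo's kinematic results can be restated in modern symbols (with g the constant acceleration of free fall; the algebraic form is a modern gloss on his geometric arguments):

```latex
% Uniformly accelerated fall from rest: velocity grows linearly
% with time, distance with the square of the time.
v(t) = g t, \qquad d(t) = \tfrac{1}{2} g t^2
% Parabolic trajectory as compound motion: uniform horizontal
% velocity v_x, uniformly accelerated vertical motion.
x(t) = v_x t, \qquad y(t) = -\tfrac{1}{2} g t^2
% Eliminating t yields the parabola:
y = -\frac{g}{2 v_x^2}\, x^2
```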
The French philosopher René Descartes (1596–1650) and his contemporary Pierre Gassendi (1592–1655) independently came up with an improved conception of inertial motion. Both suggested that an object moving at constant speed and in a straight line (not Galileo's circle) was conceptually equivalent to the object being at rest. Gassendi tested this idea by observing the path of falling weights on a moving carriage. In his Principia philosophiae (1644), Descartes presented a number of other influential ideas, including his view that the physical world was a kind of clockwork mechanism. In order to communicate cause and effect in his "mechanical philosophy," all space was filled with matter, making a vacuum impossible. Descartes suggested, for example, that the planets moved in their orbits via a plenum of fine matter that communicated the influence of the Sun through the action of vortices.
Building on work in optics by Kepler, Descartes used the mechanical philosophy to derive the laws of reflection and refraction. In his Dioptrics (1637), he proposed that if light travels at different velocities in two different media, then the sine of the angle of incidence divided by the sine of the angle of refraction is a constant that is characteristic of a particular pair of media. This law of refraction had been discovered earlier, in 1621, by the Dutch scientist Willebrord Snel, though Descartes was probably unaware of this work. In 1662, the French mathematician Pierre de Fermat recast the law of refraction by showing that it follows from the principle that light follows the path of least time (not necessarily the least distance) between two points.
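In modern notation (using symbols that postdate Descartes and Fermat, and the wave-theory convention in which light is slower in the denser medium), the law of refraction and Fermat's principle can be written:

```latex
% Snel-Descartes law of refraction: for light passing from medium 1
% (speed v_1) into medium 2 (speed v_2), with angle of incidence
% \theta_i and angle of refraction \theta_r measured from the normal:
\frac{\sin\theta_i}{\sin\theta_r} = \text{const.} = \frac{v_1}{v_2}
% Fermat (1662): the actual path makes the travel time stationary,
\delta \int \frac{ds}{v} = 0
```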
The study of kinematics yielded various conservation laws for collisions and falling bodies. Descartes defined the "quantity of motion" as the mass times the velocity (what is now called "momentum") and claimed that any closed system had a fixed total quantity of motion. In disagreement with Descartes, Gottfried Wilhelm von Leibniz (1646–1716) suggested instead the "living force" or vis viva as a measure of motion, equal to the mass times the square of the velocity (similar to what is now called "kinetic energy"). For a falling body, Leibniz asserted that the living force plus the "dead force," the weight of the object times its distance off the ground (similar to "potential energy"), was a constant.
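These competing measures of motion, in modern symbols (the factor of one-half in kinetic energy is a later convention, as is the symbol g):

```latex
% Descartes's "quantity of motion" (the modern momentum):
p = m v
% Leibniz's vis viva (twice the modern kinetic energy):
m v^2
% Leibniz's conservation claim for a falling body, rendered in
% modern terms (h = height above the ground):
\tfrac{1}{2} m v^2 + m g h = \text{const.}
```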
The culmination of the scientific revolution is the work of Isaac Newton. In the Principia (1687), Newton presented a new mechanics that encompassed not only terrestrial physics but also the motion of the planets. A short way into the first book ("Of the Motion of Bodies"), Newton listed his axioms or laws of motion:
- Every body perseveres in its state of being at rest or of moving uniformly straight forward, except insofar as it is compelled to change its state by forces impressed …
- A change in motion is proportional to the motive force impressed and takes place along the straight line in which that force is impressed …
- To any action there is always an opposite and equal reaction; in other words, the actions of two bodies upon each other are always equal and always opposite in direction … (1999, pp. 416–417)
The first law restates Descartes's concept of rectilinear, inertial motion. The second law introduces Newton's concept of force, as an entity that causes an object to depart from inertial motion. Following Descartes, Newton defined motion as the mass times the velocity. Assuming that the mass is constant, the "change of motion" is the mass (m) times the acceleration (a); thus the net force (F) acting on an object is given by the equation F = ma. Analyzing the motion of the Moon led Newton to the inverse-square law of universal gravitation. Partly as a result of a debate with the scientist Robert Hooke (1635–1703), Newton came to see the Moon as undergoing a compound motion, made up of a tangential, inertial motion and a motion toward Earth due to Earth's gravitational attraction. The Dutch physicist Christiaan Huygens (1629–1695) had suggested that there was a centrifugal force acting away from the center of rotation, which was proportional to v²/r, where v is the velocity and r is the distance from the center of rotation. Newton had derived this result before Huygens but later renamed it the centripetal force, the force that is required to keep the body in orbit and that points toward the center of rotation. Using this relation and Kepler's finding that the square of the period was proportional to the cube of the distance (Kepler's "third law"), Newton concluded that the gravitational force on the Moon was proportional to the inverse square of its distance from Earth.
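The chain of reasoning sketched above can be compressed into modern notation (a modern gloss; Newton's own presentation in the Principia was geometric):

```latex
% Second law with constant mass:
F = m a
% Centripetal acceleration for a circular orbit of radius r,
% speed v, and period T:
a = \frac{v^2}{r}, \qquad v = \frac{2\pi r}{T}
% Substituting, and applying Kepler's third law T^2 \propto r^3:
F = \frac{4\pi^2 m r}{T^2} \propto \frac{m r}{r^3} = \frac{m}{r^2}
```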
Newton presented his law of universal gravitation in the third book of the Principia ("The System of the World"), and showed that it was consistent with Kepler's findings and the orbits of the planets. Although he derived many of these results using a technique that he invented called the method of fluxions—differential calculus—Newton presented them in the Principia with the geometrical formalism familiar to readers of the time. He did not publish anything of his work on the calculus until De Analysi (1711; On analysis) during a priority dispute with Leibniz, who invented it independently.
It is helpful to identify two broad tendencies in eighteenth- and nineteenth-century physics, which had been noted by a number of contemporaries, including the German philosopher Immanuel Kant (1724–1804). On the one hand, a mechanical approach analyzed the physical universe as a great machine and built models relying on commonsense notions of cause and effect. This sometimes required the specification of ontological entities to communicate cause and effect, such as Descartes's plenum. On the other hand, the dynamical approach avoided mechanical models and, instead, concentrated on the mathematical relationship between quantities that could be measured. However, in avoiding mechanical models, the dynamical approach often speculated on the existence of active powers that resided within matter but could not be observed directly. Although this distinction is helpful, many scientists straddled the divide. Newton's physics, and the general Newtonian scientific culture of the eighteenth century, utilized elements of both approaches. It held true to a mechanical-world picture in analyzing macroscopic systems involving both contact, as in collisions, and action at a distance, as in the orbital motion. But it also contained a dynamical sensibility. Regarding gravity, Newton rejected Descartes's plenum and speculated that gravity might be due to an all-pervasive ether, tantamount to God's catholic presence. Such reflections appeared in Newton's private notes and letters, but some of these became known during the 1740s.
The development of mechanics during the eighteenth century marks one place where the histories of physics and mathematics overlap strongly. Mathematicians with an interest in physical problems recast Newtonian physics in an elegant formalism that took physics away from geometrical treatment and toward the reduction of physical problems to mathematical equations using calculus. Some of these developments were motivated by attempts to confirm Newton's universal gravitation. The French mathematician Alexis-Claude Clairaut (1713–1765) used perturbation techniques to account for tiny gravitational forces affecting the orbits of heavenly bodies. In 1747 Clairaut published improved predictions for the Moon's orbit, based on three-body calculations of the Moon, Earth, and Sun, and, in 1758, predictions of the orbit of Halley's comet, which changed slightly each time that it passed a planet. Some years later, Pierre-Simon Laplace produced a five-volume study, Celestial Mechanics (1799–1825), which showed that perturbations in planetary orbits, previously thought to be cumulative, were in fact self-correcting. His (perhaps apocryphal) response to Napoléon's question regarding the place of God in his calculations has come to stand for eighteenth-century deism: "Sire, I had no need for that hypothesis."
The most important mathematical work was the generalization of Newton's mechanics using the calculus of variations. Starting from Fermat's principle of least time, Pierre-Louis Moreau de Maupertuis (1698–1759) proposed that, for a moving particle, nature always sought to minimize a certain quantity equal to the mass times the velocity times the distance that the particle moves. This "principle of least action" was motivated by the religious idea that the economy of nature gave evidence of God's existence. The Swiss mathematician Leonhard Euler (1707–1783) recast this idea (but without Maupertuis's religious motivations) by minimizing the integral over distance of the mass of a particle times its velocity. The Italian Joseph-Louis Lagrange (1736–1813) restated and clarified Euler's idea by focusing on minimizing the vis viva integrated over time. His Mécanique analytique (1788) summarized the whole of mechanics, for solids and fluids, in both statics and dynamics.
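In modern notation (not that of the original authors), the three variational statements can be set side by side; here $m$ is mass, $v$ speed, $s$ path length, and the vis viva is $mv^2$:

```latex
\text{Maupertuis:}\quad \delta\,(m v s) = 0
\qquad
\text{Euler:}\quad \delta \int m v \, ds = 0
\qquad
\text{Lagrange:}\quad \delta \int \sum_i m_i v_i^2 \, dt = 0
```

Each statement refines the last: Maupertuis minimized a product of finite quantities, Euler an integral along the path, and Lagrange an integral over time, subject to the conservation of energy.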
In its high level of mathematical abstraction and its rejection of mechanical models, Lagrange's formalism typified a dynamical approach. In addition to making a number of problems tractable that had been impossible in Newton's original approach, the use of the calculus of variations removed from center stage the concept of force, a vector quantity (with magnitude and direction), and replaced it with scalar quantities (which had only magnitude). Lagrange was proud of the fact that Mécanique analytique did not contain a single diagram.
Newton's physics could be applied to continuous media just as readily as to systems of discrete masses. In his Hydrodynamica (1738), the Swiss mathematician Daniel Bernoulli used conservation of vis viva to analyze fluid flow. His most famous result was an equation describing the rate at which liquid flows from a hole in a filled vessel. Euler elaborated on Bernoulli's analyses and developed additional formalism, including the general differential equations of fluid flow and fluid continuity (but restricted to the case of zero viscosity). Clairaut contributed to hydrostatics through his involvement with debates regarding the shape of the earth. In developing a Newtonian prediction, Clairaut analyzed the earth as a fluid mass. After defining an equilibrium condition, he showed that the earth should have an oblate shape, which was confirmed by experiments with pendulums at the earth's equator and as far north as Lapland.
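Bernoulli's efflux result is recovered, in modern terms, by balancing energy per unit mass between the free surface and a small hole a depth below it. The following sketch (modern notation and constants, not Bernoulli's) computes the efflux speed:

```python
import math

# For an ideal fluid, conservation of energy per unit mass between the
# free surface and a small hole a depth h below it gives g*h = v**2 / 2,
# so the efflux speed is v = sqrt(2*g*h). This is Torricelli's law,
# which Bernoulli's vis viva analysis recovers as a special case.

G = 9.81  # gravitational acceleration, m/s^2 (modern value)

def efflux_speed(depth_m: float) -> float:
    """Speed of liquid leaving a small hole a given depth below the surface."""
    return math.sqrt(2 * G * depth_m)

print(efflux_speed(1.0))  # roughly 4.43 m/s for a hole 1 m down
```

Note the characteristic scaling: quadrupling the depth only doubles the efflux speed, since the speed grows as the square root of the depth.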
The study of optics inherited an ambivalence from the previous century, which considered two different mechanical explanations of light. In his Opticks (1704), Newton had advocated a corpuscular, atomistic theory of light. As an emission of particles, light interacted with matter by vibrating in different ways and was therefore either reflected or refracted. In contrast with this, Descartes and Huygens proposed a wave theory of light, arguing that space was full and that light was nothing more than the vibration of a medium.
During the eighteenth century, most scientists preferred Newton's model of light as an emission of particles. The most important wave theory of light was put forward by Euler, who hypothesized that, in analogy with sound waves, light propagated through a medium, but that the medium itself did not travel. Euler also associated certain wavelengths with certain colors. After Euler, considerable debate occurred between the particle and wave theories of light. This debate was resolved during the early nineteenth century in favor of the wave theory. Between 1801 and 1803, the English physician Thomas Young conducted a series of experiments, the most notable of which was his two-slit experiment, which demonstrated that two coherent light sources set up interference patterns, thus behaving like two wave sources. This work was largely ignored until Augustin-Jean Fresnel, in his prize memoir on diffraction presented to the French Academy of Sciences in 1818, reproduced Young's experiments and supplied a mathematical analysis of the results.
Electrical research was especially fruitful in the eighteenth century and attracted a large number of researchers. Electricity was likened to "fire," the most volatile element in Aristotle's system. Electrical fire was an imponderable fluid that could be made to flow from one body to another but could not be weighed (see sidebar, "Forms of Matter"). After systematic experimentation, the French soldier-scientist Charles-François Du Fay (1698–1739) developed a two-fluid theory of electricity, positing both a negative and a positive fluid. The American statesman and scientist Benjamin Franklin (1706–1790) proposed a competing, one-fluid model. Franklin suggested that electrical fire was positively charged, mutually repulsive, and contained in every object. When fire was added to a body, it showed a positive charge; when fire was removed, the body showed a negative charge. Franklin's theory was especially successful in explaining the behavior of the Leyden jar (an early version of the capacitor) invented by Ewald Georg von Kleist in 1745. The device was able to store electrical fire using inner and outer electrodes, with the surface of the glass jar in between. Franklin's interpretation was that the glass was impervious to electrical fire and that while one electrode took on fire, the other electrode expelled an equal amount (see Fig. 3).
After early efforts by John Robison and Henry Cavendish, the first published precision measurements of the electric force law were attributed to the French physicist and engineer Charles-Augustin de Coulomb (1736–1806). Coulomb used a torsion balance to measure the small electrostatic force on pairs of charged spheres and found that it was proportional to the inverse square of the distance between the spheres and to the amount of charge on each sphere. At the close of the century, Cavendish used a similar device to experimentally confirm Newton's universal law of gravitation, using relatively large masses.
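In modern notation, Coulomb's measurements are summarized by the familiar inverse-square law (the proportionality constant, written $1/4\pi\varepsilon_0$ in SI units, is a later formalization, not part of Coulomb's own statement):

```latex
F \;=\; \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^2}
```

The mathematical identity of this form with Newton's gravitational law is what allowed the later machinery of potential theory to serve electrostatics and gravitation alike.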
The development of physics during the nineteenth century can be seen as both a culmination of what went before and as preparing the stage for the revolutions in relativity and quantum theory that were to follow. The work of the Irish mathematician and astronomer William Rowan Hamilton (1805–1865) built on Lagrange's formulation of Newtonian dynamics to establish a thoroughly abstract and mathematical approach to physical problems. Originally motivated by his work in optics, Hamilton developed a new principle of least action. Instead of using Lagrange's integral of kinetic energy, Hamilton chose to minimize the integral of the difference between the kinetic and the potential energies. In applying this principle in mechanics, Hamilton reproduced the results of Euler and Lagrange, and showed that it applied to a broader range of problems. After his work was published, as two essays in 1833 and 1834, it was critiqued and improved upon by the German mathematician Carl Gustav Jacob Jacobi (1804–1851). The resulting Hamilton-Jacobi formalism was applied in many fields of physics, including hydrodynamics, optics, acoustics, the kinetic theory of gases, and electrodynamics. However, it did not achieve its full significance until the twentieth century, when it was used to buttress the foundations of quantum mechanics.
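In modern notation, Hamilton's principle requires the actual motion of a system to make stationary the time integral of the difference between kinetic energy $T$ and potential energy $V$ (an integrand now called the Lagrangian):

```latex
\delta \int_{t_1}^{t_2} \bigl(T - V\bigr)\,dt \;=\; 0
```

Unlike Lagrange's integral of the vis viva, this statement does not presuppose conservation of energy along the varied paths, which is one reason it applies to a broader range of problems.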
Causes of Motion: Medieval Understandings
Medieval scholars put considerable effort into modifying Aristotelian dynamics and answering the problems posed by it. Because most terrestrial bodies were composed of many elements, their natural motion was explained by summing the total power of heavy and light elements. This led medieval scholars to consider the minority type of material as providing a kind of "internal resistance." Using this idea, motion in a hypothetical void or vacuum could be explained by qualities of both motive force and resistance that inhered in the object itself.
Probably the greatest puzzle facing medieval interpreters of Aristotle was the violent motion of projectiles. If the motion of every object required the analyst to specify a mover or cause for that motion, then what caused projectiles to continue in their trajectories after they lost contact with their original projector? Aristotle suggested that a surrounding medium was pushed by the original mover and so continued to push the projectile. For medieval scholars who admitted the possibility of a vacuum, however, this explanation was not tenable. In addition, if the medium was slight compared to the projectile (such as the air compared to a stone), then it was difficult to see how a corporeal mover could continue to be the cause of violent motion. Motivated by such concerns, in the sixth century John Philoponus suggested that the continued motion of a projectile was due to an incorporeal force that was impressed on the projectile itself by the original mover. The projectile would finish its motion when this impressed force wore off.
Some eight hundred years later, in the fourteenth century at the University of Paris, Jean Buridan renamed Philoponus's impressed force "impetus," and used the concept to interpret the motion of projectiles and falling bodies. Once impressed on a projectile, the impetus could bring about virtually constant motion unless it was interrupted or dissipated by a resistive medium, a notion that bears some resemblance to the conceptions of inertia developed later. Buridan also attempted to quantify impetus, by saying it was proportional to the moving object's speed and its quantity of matter. As an object engaged in free fall, the power of gravity imparted larger and larger amounts of impetus to the object, thereby increasing its velocity.
A number of scholars attempted to quantify a relation between the impressed force and the velocity of an object. Paramount among these was Thomas Bradwardine of Merton College. In comparison to a projectile, a falling object presented special problems. Aristotle suggested that the velocity of the object was proportional to the total impressed force (F) and inversely proportional to the resistance of an ambient medium (R). Bradwardine rejected this formulation and a number of other suggestions by Philoponus, Avempace, and Averroes that involved simple ratios or subtractions of F and R. Instead, he proposed a dynamics in which the velocity of a body increased arithmetically as the ratio F/R increased geometrically. This formulation proved to be influential well into the sixteenth century.
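Bradwardine stated his rule in the language of ratios, but it is commonly rendered in a modern logarithmic reconstruction (this notation is ours, not his):

```latex
v \;=\; k \,\log\!\left(\frac{F}{R}\right)
```

Because $\log 1 = 0$, this formulation preserves Aristotle's requirement that no motion occurs unless the motive force exceeds the resistance, a property that the simple ratio $v \propto F/R$ fails to capture.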
Work on magnetism was encouraged by Alessandro Volta's (1745–1827) development, in 1800, of the voltaic pile (an early battery), which, unlike the Leyden jar, was able to produce a steady source of electric current. Inspired by the German philosophical movement of Naturphilosophie, which held that the forces of nature were all interrelated in a higher unity, the Danish physicist Hans Christian Ørsted (1777–1851) sought a magnetic effect from the electric current of Volta's battery. Ørsted's announcement of his success, in 1820, brought a flurry of activity, including the work of Jean-Baptiste Biot and Félix Savart, on the force law between a current and a magnet, and the work of André-Marie Ampère, on the force law between two currents. The magnetic force was found to depend on the inverse square of the distance but was more complex due to the subtle vector relations between the currents and distances. For the analysis of inverse-square force laws, the German mathematician Carl Friedrich Gauss (1777–1855) introduced, in 1839, the concept of "potential," which could be applied with great generality to both electrostatics and magnetism. This work grew from Gauss's efforts in measuring and understanding the earth's magnetic field, which he undertook with his compatriot Wilhelm Eduard Weber (1804–1891).
The Newtonian Synthesis
The Newtonian synthesis was, first and foremost, a unification of celestial and terrestrial physics. Newton's famous story of seeing an apple fall in his mother's garden neatly summarizes this achievement. According to the story, the falling apple made Newton consider that the gravitational force that influences the apple (a projectile in terrestrial motion) might also act on the moon (a satellite in celestial motion). He concluded that "the power of gravity … was not limited to a certain distance from the earth but that this power must extend much farther than was usually thought" (Westfall, 1980, p. 154). This idea is displayed in a famous diagram in the Principia, depicting a projectile being thrown from a mountain peak, which rests on a small planet; as the projectile is thrown with greater and greater speed, it eventually goes into orbit around the planet and becomes a satellite. Consideration of the moon's motion led Newton to the force law for universal gravitation. Simply by virtue of having mass, any two objects exert mutually attractive forces on each other (in accordance with the third law of motion). The inverse-square force law made the gravitational force quantified and calculable, but regarding the cause of gravity itself, Newton famously claimed, "I feign no hypotheses."
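In modern notation (Newton himself worked with proportionalities; the constant $G$ was a later formalization, first measured in the laboratory only at the end of the eighteenth century), universal gravitation reads:

```latex
F \;=\; G\,\frac{m_1 m_2}{r^2}
```

By the third law of motion, each mass exerts this force on the other, which is why the law unifies the fall of the apple and the orbit of the moon in a single expression.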
As much as his specific scientific achievements, Newton's method of working became a touchstone for scientists of the eighteenth century and defined a general scientific culture of "Newtonianism." In this regard, the Newtonian synthesis can be seen as a combination of three broad traditions: experiment, mathematics, and mechanism. Newton's Opticks (1704) exemplified the empirical, inductive approach recommended by Francis Bacon. There, Newton reports on careful experiments with light, including a series showing that when light passes through a prism, the prism does not modify the color of the light but rather separates it into its component colors (see Fig. 2). He also did experiments in which he shone monochromatic light on thin plates and films, to produce patterns of light and dark regions; these later became known as "Newton's rings."
The effort to describe physical events with mathematics, which was so evident in the work of Kepler, Galileo, and Descartes, reached its full expression in Newton's dynamics. The universal gravitation law, along with the three laws of motion and the calculus, presented a complete Newtonian approach to quantifying and calculating the motion of everything from the smallest atom to the largest planet. Closely related to this mathematical tendency is the mechanical philosophy pursued by Descartes, Gassendi, and Robert Boyle (1627–1691). Although Newton rejected Descartes's plenum, he retained a modified idea of mechanical causality. For Newton, gravity was an action at a distance; two masses acted on one another despite the fact that empty space lay between them. Defined as a change in motion, Newton's conception of force was a mechanical, causal agent that acted either through contact or through action at a distance.
The most significant work in magnetism was done by Michael Faraday (1791–1867) at the Royal Institution in London. By 1831, Faraday had characterized a kind of reverse Ørsted effect, in which a change in magnetism gave rise to a current. For example, he showed that this "electromagnetic induction" occurred between two electric circuits that communicated magnetism through a shared iron ring but otherwise were electrically insulated from one another (an early version of the transformer). Faraday made the first measurements of magnetic materials, characterizing diamagnetic, paramagnetic, and ferromagnetic effects (though this terminology is due to the English mathematician William Whewell). Finally, Faraday pioneered the concept of the field, coining the term "magnetic field" in 1845. He saw the "lines of force" of magnetic or electric fields as being physically real and as filling space (in opposition to the idea of action at a distance).
Forms of Matter
The development of physics both contributed to and depended on ideas about the structure of matter. In this regard, the history of physics is tied to the history of chemistry. Both sciences inherited a debate that began with the ancients regarding atomism versus continuity. Combining the influences of, among others, Pythagoras and Democritus, Plato saw matter as being composed of atoms that had different geometrical shapes for each of the four elements. Against this, Aristotle developed a continuum theory of matter, in part because his theory of motion would be contradicted by the existence of a void. This debate was reawakened during the sixteenth and seventeenth centuries. On the one hand, Descartes embraced a continuum theory involving a plenum of fine matter and vortices, founded on the idea that motion is caused through contact. On the other hand, Robert Boyle proposed atomistic explanations of his finding that reducing the volume of a gas increased its pressure proportionately. Newton refined Boyle's ideas by interpreting pressure as being due to mutually repelling atoms, and recommended an atomistic stance for further research in chemistry and optics.
During the eighteenth and nineteenth centuries, many theorists and experimentalists posited the existence of a number of "imponderables," substances that could produce physical effects but could not be weighed. The first of these was proposed in 1703 by the German physician and chemist Georg Ernst Stahl in order to explain the processes of oxidation and respiration. Stahl's phlogiston theory, and the renewed interest in Newton's theories of an ether medium for gravity, encouraged further theories involving imponderables, most notably electrical fire (to describe the flow of static electricity) and caloric (to describe heat flow). Although the imponderables were eventually rejected, they served as useful heuristic devices in quantifying physical laws. For example, the Scottish chemist Joseph Black (1728–1799) used the caloric theory to found the study of calorimetry and to measure specific heat (the heat required to raise the temperature of a substance one degree), and latent heat (the heat required for a substance to change its state).
Even after the work of John Dalton, few chemists and physicists before 1890 accepted the actual existence of atoms. Nevertheless, they found the atomic hypothesis to be useful in suggesting experiments and interpreting the results. In 1863, the Irish physical chemist Thomas Andrews experimentally characterized the "critical point" between the gas and liquid phases: at relatively low temperatures, as the pressure was increased, the change from gas to liquid was abrupt; however, at relatively high temperatures, the transition was continuous. In part to account for the behavior of the critical point, the Dutch physicist Johannes Diderik van der Waals (1837–1923) assumed that the forces between atoms were attractive at large range but repulsive at short range. The work of van der Waals represented the first successful theory of phase transitions and showed how an atomistic model could describe both phases.
In the mid-nineteenth century, Michael Faraday and Julius Plücker (1801–1868), among others, pioneered research on the discharge of electricity through partially evacuated glass tubes. The British chemist William Crookes made a number of improvements to these discharge tubes and called the glowing material that formed in them the "fourth state of matter" (which was later dubbed "plasma" by the American chemist Irving Langmuir). Work in this area eventually led to Joseph John Thomson's discovery of the electron and Philipp Lenard's characterization, in 1899, of the photoelectric effect.
Second Law of Thermodynamics
The development of the second law of thermodynamics was intimately tied to the kinetic theory of gases, and carried with it the rebirth of atomism and the founding of statistical mechanics. Despite the fact that Sadi Carnot believed that caloric was not lost when it traveled from the hot body to the cool body of an engine, he recognized that the work delivered depended on the temperature difference between the two bodies and that this difference constantly decreased. This observation was clarified by the German physicist Rudolf Clausius. In 1851, a few years after the acceptance of the first law of thermodynamics, Clausius recognized the need for a second law, to account for the fact that energy was often irrecoverably lost by a system. In a paper published in 1865, Clausius analyzed thermodynamic cycles with a quantity that he dubbed the "entropy" and found that its change was positive for irreversible processes and, at best (for a reversible process), zero.
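In modern notation, Clausius's definition and his statement of the second law read (for an isolated system, with equality holding only in the reversible limit):

```latex
dS \;=\; \frac{\delta Q_{\text{rev}}}{T},
\qquad
\Delta S \;\ge\; 0
```

The first expression defines the entropy change in terms of the heat exchanged reversibly at absolute temperature $T$; the second is the statement that entropy never decreases.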
The Austrian Ludwig Boltzmann read Clausius's paper and set about developing a mechanical interpretation of the second law. In a first attempt, published in 1866, he used Hamilton's principle to analyze the development of a thermodynamic system made up of discrete particles. After Joseph Stefan (1835–1893) alerted Boltzmann to James Clerk Maxwell's probabilistic approach, Boltzmann made refinements to Maxwell's ideas and incorporated them into his mechanical interpretation. In 1872, he published a paper that made use of a transport equation (now called the "Boltzmann equation") to describe the evolution of a probability distribution of particles. As the atoms of a gas collided and eventually reached an equilibrium velocity distribution, the entropy was maximized.
Boltzmann's ideas were met with a number of objections. One objection argued that because Newton's laws were reversible, thermodynamic processes described by the motion of atoms could be reversed in time to yield processes that deterministically went to states of lower entropy, thus contradicting the second law. Boltzmann's response highlighted the statistical nature of his interpretation, arguing that, given particular initial conditions, any thermodynamic system has a vastly greater number of final states available to it with relatively high entropy. An increase of entropy means that the system has become randomized as the available energy is spread around to its constituent atoms. In 1877 Boltzmann published a paper that incorporated this idea and defined the entropy as a log of a quantity measuring the number of states available to a system. In doing his calculations, Boltzmann used the device of counting energy in discrete increments, which he took to zero at the end of his calculation. This method, a harbinger of the quantization of energy, influenced Planck and Einstein, over twenty years later.
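Boltzmann's 1877 definition is usually cited in the compact form later written down by Planck (and engraved on Boltzmann's tombstone), where $W$ counts the microstates available to the system and $k$ is Boltzmann's constant:

```latex
S \;=\; k \log W
```

High-entropy macrostates are simply those compatible with vastly more microstates, which is the quantitative content of Boltzmann's reply to the reversibility objection.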
Boltzmann had less success answering a second set of objections regarding atomism. The British physicist William Thomson (1824–1907) and Scottish physicist Peter Tait (1831–1901) rejected atomism as a result of their adherence to the dynamical theory of matter, which rejected the existence of a void. Similarly, Ernst Mach put forward empiricist counterarguments, which rejected Boltzmann's adherence to entities that could not be confirmed by direct observation.
One of the pinnacles of nineteenth-century physics is the theory of electromagnetism developed by the Scottish physicist James Clerk Maxwell (1831–1879). Maxwell brought together the work of Coulomb, Ampère, and Faraday, and made the crucial addition of the "displacement current," which acknowledged that a magnetic field can be produced not only by a current but also by a changing electric field. These efforts resulted in a set of four equations that Maxwell used to derive wave equations for the electric and magnetic fields. This led to the astonishing prediction that light was an electromagnetic wave. In developing and interpreting his results, Maxwell sought to build a mechanical model of electromagnetic radiation. Influenced by Faraday's rejection of action at a distance, Maxwell attempted to see electromagnetic waves as vortices in an ether medium, interspersed with small particles that acted as idle wheels to connect the vortices. Maxwell discarded this mechanical model in later years, in favor of a dynamical perspective. This latter attitude was taken by the German experimentalist Heinrich Rudolph Hertz (1857–1894), who, in 1886, first demonstrated the propagation of electromagnetic waves in the laboratory, using a spark-gap device as a transmitter.
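The four equations are now usually written in the compact vector notation developed largely by Oliver Heaviside after Maxwell's death; from them the wave speed $1/\sqrt{\mu_0\varepsilon_0}$, numerically equal to the measured speed of light, falls out directly:

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0},
\qquad
\nabla \cdot \mathbf{B} = 0,
\qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t},
\qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
```

The final term in the last equation is Maxwell's displacement current; without it the equations admit no wave solutions in empty space.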
During the eighteenth century, most researchers saw the flow of heat as the flow of the imponderable fluid caloric. Despite developments, such as Benjamin Thompson's cannon-boring experiments, which suggested that heat involved some sort of microscopic motion, caloric provided a heuristic model that aided in the quantification of experimental results and in the creation of mathematical models. For example, the French engineer Sadi Carnot (1796–1832) did empirical work on steam engines, which led to the theory of the thermodynamic cycle, as reported in his Reflections on the Motive Power of Fire (1824). A purely mathematical approach was developed by Jean-Baptiste-Joseph Fourier, who analyzed heat conduction with the method of partial differential equations in his Analytical Theory of Heat (1822).
Carnot's opinion that caloric was conserved during the running of a steam engine was proved wrong by the development of the first law of thermodynamics. Similar conceptions of the conservation of energy (or "force," as energy was still referred to) were identified by at least three different people during the 1840s, including the German physician Julius Robert von Mayer (1814–1878), who was interested in the human body's ability to convert the chemical energy of food to other forms of energy, and the German theoretical physicist Hermann Ludwig Ferdinand von Helmholtz (1821–1894), who gave a mathematical treatment of different types of energy and showed that the different conservation laws could be traced back to the conservation of vis viva in mechanics. The British physicist James Prescott Joule (1818–1889) did an experiment that measured the mechanical equivalent of heat with a system of falling weights and a paddlewheel that stirred water within an insulated vessel (see Fig. 4).
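The arithmetic of a Joule-style falling-weight experiment can be sketched as follows. The numbers here are illustrative, not Joule's, and the specific heat of water is the modern value:

```python
# In Joule's paddlewheel arrangement, the potential energy m*g*h lost by
# the falling weight reappears as heat in the stirred water, raising its
# temperature by dT = m*g*h / (m_water * c_water).

G = 9.81            # gravitational acceleration, m/s^2
C_WATER = 4186.0    # specific heat of water, J/(kg*K), modern value

def temperature_rise(weight_kg: float, drop_m: float, water_kg: float) -> float:
    """Temperature rise of the water if all mechanical work becomes heat."""
    work = weight_kg * G * drop_m       # mechanical energy supplied, J
    return work / (water_kg * C_WATER)  # resulting rise, kelvin

# A 10 kg weight falling 2 m into 0.5 kg of water:
print(temperature_rise(10.0, 2.0, 0.5))
```

The tiny temperature rises involved (fractions of a kelvin) are one reason Joule's thermometry was considered such a remarkable experimental feat.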
In his Hydrodynamica, Bernoulli had proposed the first kinetic theory of gases, by suggesting that pressure was due to the motion and impact of atoms as they struck the sides of their containment vessel. The work of the chemists John Dalton (1766–1844) and Amedeo Avogadro (1776–1856) indirectly lent support to such a kinetic theory by casting doubt upon the Newtonian program of understanding chemistry in terms of force laws between atoms. After John Herapath's work on the kinetic theory, in 1820, was largely ignored, Rudolf Clausius published two papers, in 1857 and 1858, in which he sought to derive the specific heats of a gas and introduced the concept of the mean free path between atomic collisions. James Clerk Maxwell added the idea that the atomic collisions would result in a range of velocities, not an average velocity as Clausius thought, and that this would necessitate the use of a statistical approach. In a number of papers published from 1860 to 1862, Maxwell completed the foundations of the kinetic theory and introduced the equipartition theorem, the idea that each degree of freedom (translational or rotational) contributed the same average energy, which was proportional to the temperature of the gas. Clausius and Maxwell's work in kinetic theory was tied to their crucial contributions to developing the second law of thermodynamics (see sidebar, "Second Law of Thermodynamics").
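In modern terms, the equipartition theorem assigns every (quadratic) degree of freedom the same average energy, where $k_B$ is Boltzmann's constant and $T$ the absolute temperature:

```latex
\langle E \rangle \;=\; \tfrac{1}{2}\,k_B T
\quad\text{per degree of freedom}
```

A monatomic gas atom, with three translational degrees of freedom, thus carries an average energy of $\tfrac{3}{2}k_B T$, tying the macroscopic temperature directly to the microscopic motion.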
End of Classical Physics
By the close of the nineteenth century, many physicists felt that the accomplishments of the century had produced a mature and relatively complete science. Nevertheless, a number of problem areas were apparent to at least some of the community, four of which are closely related to developments mentioned above.
New rays and radiations were discovered near the end of the century, which helped establish (among other things) the modern model of the atom. These included the discovery (by William Crookes and others) of cathode rays within discharge tubes; Wilhelm Conrad Röntgen's finding, in 1895, of X rays emanating from discharge tubes; and Antoine-Henri Becquerel's discovery in 1896 that uranium salts were "radioactive" (as Marie Curie labeled the effect in 1898). Each of these led to further developments. In 1897, Joseph John Thomson identified the cathode rays as negatively charged particles called "electrons" and, a year later, was able to measure the charge directly. In 1898, Ernest Rutherford identified two different kinds of radiation from uranium, calling them alpha and beta. In 1902 and 1903, he and Frederick Soddy demonstrated that radioactive decay was due to the disintegration of heavy elements into slightly lighter elements. In 1911, he scattered alpha particles from thin gold foils and explained infrequent scattering to large angles by the presence of a concentrated, positively charged atomic nucleus.
The study of blackbody radiation (radiation from an idealized heated object that is a perfect absorber and emitter) yielded results that are crucial to the early development of quantum mechanics. In 1893 Wilhelm Wien derived his "displacement law," which gave the wavelength at which a blackbody radiated at maximum intensity, but precision data failed to confirm the radiation law he subsequently proposed, which broke down at long wavelengths. Furthermore, classical theory proved unable to model the intensity curves, especially at shorter wavelengths. In 1900 the German theoretical physicist Max Planck (1858–1947) derived the intensity curve using the statistical methods of the Austrian physicist Ludwig Eduard Boltzmann (1844–1906) and the device of counting the energy of the oscillators of the blackbody in increments of hf, where f is the frequency and h is a constant (now known as "Planck's constant"). Despite achieving excellent fits to data, Planck was hesitant to accept his own derivation, due to his aversion for statistical methods and atomism.
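Planck's counting device restricted each oscillator's energy to integer multiples of $hf$, and the resulting spectral energy density (written here in modern notation) fits the data at all wavelengths:

```latex
E_n \;=\; n\,h f,
\qquad
u(f, T) \;=\; \frac{8 \pi h f^3}{c^3}\,
\frac{1}{e^{hf/k_B T} - 1}
```

At low frequencies the exponential can be expanded and the formula reduces to the classical result, while at high frequencies the exponential suppresses the intensity, avoiding the divergence that classical theory predicted.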
It is doubtful that Planck interpreted his use of energy increments to mean that the energy of the oscillators and radiation came in chunks (or "quanta"). However, this idea was clearly enunciated by Albert Einstein in his 1905 paper on the photoelectric effect. Einstein explained in this paper why the electrons that are ejected from a cathode by incident light do not increase in energy when the intensity of the light is increased. Instead, the fact that the electrons increase in energy when the frequency of the light is increased suggested that light comes in quantum units (later called "photons"), each with an energy given by Planck's relation, hf.
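Einstein's photoelectric relation can be sketched numerically. The constants are modern values, and the 2.28 eV work function used for sodium below is an illustrative figure, not a number from the 1905 paper:

```python
# Einstein's relation: the maximum kinetic energy of an ejected electron
# is K_max = h*f - phi, where h*f is the photon energy and phi the
# material's work function. A negative K_max means no emission occurs,
# no matter how intense the light.

H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron volt

def max_kinetic_energy_ev(wavelength_m: float, work_function_ev: float) -> float:
    """Maximum ejected-electron energy in eV; negative means no emission."""
    photon_ev = H * C / (wavelength_m * EV)  # photon energy h*f in eV
    return photon_ev - work_function_ev

# Violet light at 400 nm on sodium (work function taken as 2.28 eV):
print(max_kinetic_energy_ev(400e-9, 2.28))
```

The same call with red light at 700 nm returns a negative value: the photon energy falls short of the work function, reproducing the frequency threshold that intensity alone cannot overcome.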
Electromagnetic theory, though one of the most important results of nineteenth-century physical theory, contained a number of puzzles. On the one hand, electromagnetism sometimes gave the same result for all reference frames. For example, Faraday's induction law gave the same result for the current induced in a loop of wire for two situations: when the loop moves relative to a stationary magnet and when the magnet moves (with the same speed) relative to a stationary loop. On the other hand, if an ether medium were introduced for electromagnetic waves, then the predictions of electromagnetism should usually change for different reference frames. In a second paper from 1905, Einstein reinterpreted attempts by Henri Poincaré (1854–1912) and Hendrik Antoon Lorentz (1853–1928) to answer this puzzle, by insisting that the laws of physics should give the same results in all inertial reference frames. This, along with the principle of the constancy of the speed of light, formed the basis of Einstein's special theory of relativity.
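The two postulates lead to the Lorentz transformation, given here in modern notation for relative motion along $x$ at speed $v$; Lorentz and Poincaré had arrived at these equations earlier, but Einstein rederived them from the postulates alone:

```latex
x' = \gamma\,(x - v t),
\qquad
t' = \gamma\left(t - \frac{v x}{c^2}\right),
\qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```

Unlike the Galilean transformation it replaces, this mixing of space and time coordinates leaves the speed of light $c$ the same in every inertial frame.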
See also Causality; Change; Chemistry; Experiment; Field Theories; Mathematics; Mechanical Philosophy; Quantum; Relativity; Science.
Brush, Stephen G. The Kinetic Theory of Gases: An Anthology of Classic Papers with Historical Commentary. Edited by Nancy S. Hall. London: Imperial College Press, 2003.
Franklin, Benjamin. Benjamin Franklin's Experiments: A New Edition of Franklin's Experiments and Observations on Electricity. Edited by I. Bernard Cohen. Cambridge, Mass.: Harvard University Press, 1941.
Galilei, Galileo. Two New Sciences: Including Centers of Gravity and Force of Percussion. Translated by Stillman Drake. Madison: University of Wisconsin Press, 1974.
Newton, Isaac. Opticks; or, a Treatise of the Reflections, Refractions, Inflections, and Colours of Light. Based on the 4th ed., London 1730. New York: Dover, 1952.
——. The Principia: Mathematical Principles of Natural Philosophy. Translated by I. B. Cohen and Anne Whitman. Berkeley: University of California Press, 1999.
Maxwell, James Clerk. A Treatise on Electricity and Magnetism. 2 vols. Unabridged 3rd ed. New York: Oxford University Press, 1998.
Shamos, Morris H., ed. Great Experiments in Physics. 1959. Reprint, New York: Dover, 1987. Features brief introductions to each experiment, followed by passages from the original publications.
Berry, Arthur. A Short History of Astronomy: From Earliest Times through the Nineteenth Century. New York: Dover, 1961.
Brush, Stephen G. Statistical Physics and the Atomic Theory of Matter: From Boyle and Newton to Landau and Onsager. Princeton, N.J.: Princeton University Press, 1983.
Buchwald, Jed Z. The Creation of Scientific Effects: Heinrich Hertz and Electric Waves. Chicago: University of Chicago Press, 1994. A detailed account for the advanced reader.
——. The Rise of the Wave Theory of Light: Optical Theory and Experiment in the Early Nineteenth Century. Chicago: University of Chicago Press, 1989.
Cahan, David. An Institute for an Empire: The Physikalisch-Technische Reichsanstalt, 1871–1918. Cambridge, U.K., and New York: Cambridge University Press, 1989. An exemplary study of how physical science served state interests.
Caneva, Kenneth L. Robert Mayer and the Conservation of Energy. Princeton, N.J.: Princeton University Press, 1993.
Cannon, John T., and Sigalia Dostrovsky. The Evolution of Dynamics: Vibration Theory from 1687 to 1742. New York: Springer-Verlag, 1981. Detailed history for mathematically adept readers.
Cercignani, Carlo. Ludwig Boltzmann: The Man Who Trusted Atoms. New York: Clarendon, 1998.
Clagett, Marshall. The Science of Mechanics in the Middle Ages. Madison: University of Wisconsin Press, 1959.
Cohen, H. Floris. The Scientific Revolution: A Historiographical Inquiry. Chicago: University of Chicago Press, 1994. A long but valuable historiographic survey.
Cohen, I. Bernard. The Birth of a New Physics. Rev. ed. New York: W. W. Norton, 1985. An excellent place to start for the general reader, covering the period from Copernicus to Newton.
——. The Newtonian Revolution. Norwalk, Conn.: Burndy Library, 1987.
D'Agostino, Salvo. A History of the Ideas of Theoretical Physics: Essays on the Nineteenth and Twentieth Century Physics. Boston: Kluwer, 2000.
Damerow, Peter, et al. Exploring the Limits of Preclassical Mechanics: A Study of Conceptual Development in Early Modern Science. New York: Springer-Verlag, 1992. Detailed treatments of the motion studies of Descartes, Galileo, and Beeckman.
Dear, Peter. Revolutionizing the Sciences: European Knowledge and Its Ambitions, 1500–1700. Princeton, N.J.: Princeton University Press, 2001.
Dobbs, Betty Jo Teeter, and Margaret C. Jacob. Newton and the Culture of Newtonianism. Atlantic Highlands, N.J.: Humanities Press, 1995.
Dugas, René. A History of Mechanics. Translated by J. R. Maddox. New York: Dover, 1988. A useful survey from ancient to modern times, concentrating on the development of theory and often using long quotations from the original sources.
Duhem, Pierre. Essays in History and Philosophy of Science. Edited by Roger Ariew and Peter Barker. Indianapolis: Hackett, 1996. Includes Duhem's excellent article on the history of physics, written for an encyclopedia.
——. Medieval Cosmology: Theories of Infinity, Place, Time, Void, and the Plurality of Worlds. Edited and translated by Roger Ariew. Chicago: University of Chicago Press, 1985.
Elkana, Yehuda. The Discovery of the Conservation of Energy. Cambridge, Mass.: Harvard University Press, 1974.
Fraser, Craig G. Calculus and Analytical Mechanics in the Age of Enlightenment. Aldershot, U.K., and Brookfield, Vt.: Variorum, 1997. Detailed history for mathematically adept readers.
Gillmor, C. Stewart. Coulomb and the Evolution of Physics and Engineering in Eighteenth-Century France. Princeton, N.J.: Princeton University Press, 1971.
Goldstine, Herman H. A History of the Calculus of Variations from the Seventeenth through the Nineteenth Century. New York: Springer-Verlag, 1980. Detailed history for mathematically adept readers.
Grant, Edward. The Foundations of Modern Science in the Middle Ages: Their Religious, Institutional, and Intellectual Contexts. Cambridge, U.K., and New York: Cambridge University Press, 1996. A good place to start for the general reader.
Hakfoort, Casper. Optics in the Age of Euler: Conceptions of the Nature of Light, 1700–1795. Cambridge, U.K., and New York: Cambridge University Press, 1995.
Hankins, Thomas L. Science and the Enlightenment. Cambridge, U.K., and New York: Cambridge University Press, 1985. The best place to start for the general reader.
Harman, P. M. Energy, Force, and Matter: The Conceptual Development of Nineteenth-Century Physics. Cambridge, U.K., and New York: Cambridge University Press, 1982. A useful book but compact and difficult for the beginner.
——. The Natural Philosophy of James Clerk Maxwell. Cambridge, U.K., and New York: Cambridge University Press, 1998.
Heilbron, J. L. Electricity in the Seventeenth and Eighteenth Centuries: A Study of Early Modern Physics. Berkeley: University of California Press, 1979. A particularly good survey of both the conceptual and the institutional development of physics.
Hendry, John. James Clerk Maxwell and the Theory of the Electromagnetic Field. Bristol, U.K., and Boston: A. Hilger, 1986. A superb scientific biography with a useful interpretive framework that has been used in the present essay.
Hogendijk, Jan P., and Abdelhamid I. Sabra, eds. The Enterprise of Science in Islam: New Perspectives. Cambridge: MIT Press, 2003.
Holton, Gerald, and Stephen G. Brush. Physics, the Human Adventure: From Copernicus to Einstein and Beyond. New Brunswick, N.J.: Rutgers University Press, 2001. A textbook for teaching physics with history.
Jungnickel, Christa, and Russell McCormmach. Cavendish: The Experimental Life. Rev. ed. Cranbury, N.J.: Bucknell University Press, 1999.
——. Intellectual Mastery of Nature: Theoretical Physics from Ohm to Einstein. Chicago: University of Chicago Press, 1986. A detailed study of the conceptual and institutional development of theoretical physics as a subdiscipline.
Kline, Morris. Mathematical Thought from Ancient to Modern Times. New York: Oxford University Press, 1972.
Kuhn, Thomas S. Black-body Theory and the Quantum Discontinuity, 1894–1912. Chicago: University of Chicago Press, 1987.
——. The Copernican Revolution: Planetary Astronomy in the Development of Western Thought. Cambridge, Mass.: Harvard University Press, 1971.
Leijenhorst, Cees, Christoph Lüthy, and Johannes M. M. H. Thijssen, eds. The Dynamics of Aristotelian Natural Philosophy from Antiquity to the Seventeenth Century. Leiden, Netherlands: Brill, 2002.
Lindberg, David C. The Beginnings of Western Science: The European Scientific Tradition in Philosophical, Religious, and Institutional Context, 600 b.c. to a.d. 1450. Chicago: University of Chicago Press, 1992. A good place to start for the general reader.
Lindberg, David C., ed. Science in the Middle Ages. Chicago: University of Chicago Press, 1978.
Mach, Ernst. The Principles of Physical Optics: An Historical and Philosophical Treatment. Mineola, N.Y.: Dover, 2003.
——. Principles of the Theory of Heat: Historically and Critically Elucidated. Edited by Brian McGuinness. Boston: D. Reidel, 1986.
——. The Science of Mechanics: A Critical and Historical Account of Its Development. Translated by Thomas J. McCormack. 6th ed. LaSalle, Ill.: Open Court, 1960. Mach's books are guilty of "presentism," the tendency to judge past science in terms of current knowledge. Nevertheless, his work should be studied by the more advanced student.
Maier, Anneliese. On the Threshold of Exact Science: Selected Writings of Anneliese Maier on Late Medieval Natural Philosophy. Edited and translated by Steven D. Sargent. Philadelphia: University of Pennsylvania Press, 1982.
Merz, John Theodore. A History of European Thought in the Nineteenth Century. Vols. 1 and 2. New York: Dover, 1965. Reprint of edition appearing between 1904 and 1912.
Olesko, Kathryn M. Physics as a Calling: Discipline and Practice in the Königsberg Seminar for Physics. Ithaca, N.Y.: Cornell University Press, 1991.
Park, David. The Fire Within the Eye: A Historical Essay on the Nature and Meaning of Light. Princeton, N.J.: Princeton University Press, 1997. A good popularization.
Pullman, Bernard. The Atom in the History of Human Thought. New York: Oxford University Press, 1998. Guilty of "presentism," but nevertheless a useful survey.
Purrington, Robert D. Physics in the Nineteenth Century. New Brunswick, N.J.: Rutgers University Press, 1997. The best place to start for the general reader.
Segrè, Emilio. From Falling Bodies to Radio Waves: Classical Physicists and Their Discoveries. New York: W. H. Freeman, 1984.
——. From X-Rays to Quarks: Modern Physicists and Their Discoveries. San Francisco: W. H. Freeman, 1980.
Stephenson, Bruce. Kepler's Physical Astronomy. New York: Springer-Verlag, 1987.
Tokaty, G. A. A History and Philosophy of Fluid Mechanics. 2nd ed. New York: Dover, 1994.
Toulmin, Stephen, and June Goodfield. The Architecture of Matter. Chicago: University of Chicago Press, 1982. A history of theories of matter, from ancient to modern times.
Truesdell, C. The Tragicomical History of Thermodynamics, 1822–1854. New York: Springer-Verlag, 1980. Detailed history for mathematically adept readers.
Westfall, Richard S. The Construction of Modern Science: Mechanisms and Mechanics. Cambridge, U.K., and New York: Cambridge University Press, 1977.
——. Force in Newton's Physics: The Science of Dynamics in the Seventeenth Century. New York: Elsevier, 1971.
——. Never at Rest: A Biography of Isaac Newton. Cambridge, U.K., and New York: Cambridge University Press, 1980.
Whittaker, Edmund. A History of the Theories of Aether and Electricity. New York: Humanities Press, 1973.
Williams, L. Pearce. The Origins of Field Theory. Lanham, Md.: University Press of America, 1980. This classic study focuses on the work of Michael Faraday.
G. J. Weisel
"Physics." New Dictionary of the History of Ideas. Encyclopedia.com (November 24, 2017). http://www.encyclopedia.com/history/dictionaries-thesauruses-pictures-and-press-releases/physics-0
PHYSICS. Physics, as a structured mathematical and experimental investigation into the fundamental constituents and laws of the natural world, was not recognized as a discipline until late in the early modern period. Derived from the Greek physis ("nature," itself from a root meaning "to grow"), in ancient and medieval times "physics" (or "natural philosophy") was concerned with the investigation of the qualitative features of any natural phenomena (psychological, chemical, biological, meteorological, etc.) and was often guided by the metaphysical and epistemological tenets set out in the physical books of the Aristotelian corpus. These included the idea of the cosmos as a finite sphere in which no void or vacuum could exist, the division between the sublunary and celestial realms (each with its own types of matter and motion), the doctrine of the four sublunary elements (earth, air, water, and fire, each naturally moving either upward or downward), and a complex causal theory according to which any natural change requires the interaction of an agent that initiates the change and a patient that undergoes the change. As with many of the developments of the early modern period, modern physics defined itself in reaction to these received Aristotelian ideas.
This is not to say that Aristotle went unchallenged before the early modern period. In Hellenistic times, for example, Aristotle's theory of natural motion was seen to need supplementation, since it could not explain satisfactorily why a thrown object continues in projectile motion once separated from the cause of its motion (for example, a hand) instead of immediately resuming its natural motion downward. The concept introduced to explain this was impetus—a propelling, motive force transferred from the cause of motion into the projectile. Similarly, atomism posed a long-standing challenge to Aristotelian matter theory. According to atomism, the universe consisted of small material particles moving in a void, and all natural change could be explained by the particles coming together and separating in various ways.
The challenges reached their climax in 1277, when Étienne Tempier, bishop of Paris, issued a condemnation that forbade teaching a long list of Aristotelian propositions as established truth. Although other criticisms of Aristotelian philosophy continued through the fourteenth century and after, the basic Aristotelian ideas regarding the nature of motion and the cosmos persisted in European schools and universities well into the seventeenth century, albeit in Christianized forms. These critical treatments of Aristotelian philosophy became the seeds from which modern physics grew.
Many other social, economic, and intellectual events also were responsible for the birth of physics and modern science. The Reformation and its consequent religious wars, the voyages of exploration and exploitation, the rise of capitalism and market economies, and the geographical shift of power from the Mediterranean basin to the north Atlantic were of particular import. In a somewhat controversial fashion we might characterize these influences as promoting a social, economic, and intellectual sense of insecurity among the people of Europe and contributing to a concomitant rise in entrepreneurial and epistemic individualism. One important result of this was an increased skepticism both as an everyday viewpoint and, as in Michel de Montaigne's (1533–1592) case, a full-blown skeptical theory.
The rise of printing is particularly important among the cultural changes leading to the birth of physics. The printed text allowed for wider distribution of recently resurrected and translated ancient texts on philosophy and mathematics. Euclid's (fl. c. 300 b.c.e.) Elements, for example, was published in numerous modern editions, and the pseudo-Aristotelian Mechanics and the works of Archimedes (c. 287–212 b.c.e.) were brought to the Latin-educated public. These works formed the basis of the mixed or middle sciences (being both mathematical and physical) and provided the disciplinary form into which the new physics would fit. The use of diagrams and illustrations as teaching and learning devices was crucial to this revival of applied geometry. Books also allowed for a standardization of material that enabled widely dispersed individuals to study the same texts of classical and modern authors. In the sixteenth century, publications of how-to books and pamphlets brought mathematics and concerns about mechanical devices to a much broader public, including artisan and nonuniversity classes. The practical inclination toward mechanics was given theoretical credence by the anti-Aristotelian theory of atomism (reinvigorated in the Latin West by the early-fifteenth-century recovery of Lucretius's [c. 95–55 b.c.e.] De rerum natura [On the nature of things]) and by philosophical criticisms of Aristotle's theory of causality, which took mechanical devices as exemplars of phenomena for which Aristotle's theory could not properly account.
The increased focus on the workings of the natural world led to the institution of societies dedicated to scientific learning. In 1603, for example, the Academy of the Lynxes (Accademia dei Lincei) was founded in Rome by Prince Federico Cesi (1585–1630). The most influential of the new institutions, the Royal Society of London, was founded in 1660 and chartered by Charles II of England (ruled 1660–1685) in 1662. The society encouraged Christian gentlemen to study natural philosophy, held regular meetings, and published its proceedings. The Royal Society proved a venue for many amateurs to pursue science and may have created the first professional scientist by hiring Robert Hooke (1635–1703) as its curator of experiments.
THE NEED FOR A NEW THEORY OF THE NATURAL WORLD
The general attacks on the Aristotelian view of nature gained momentum through the pressing need to solve a set of particular physical problems that were largely intractable given Aristotelian premises. In particular, demand for a revision to Aristotelianism was brought to crucial focus by Nicolaus Copernicus's (1473–1543) publication of On the Revolutions of the Heavenly Spheres in 1543. In it, Copernicus laid out an astronomical system based on circles and epicycles much in the same mathematical vein as Claudius Ptolemy's (c. 100–170), but shifted the sun to the mathematical center of the earth's orbit and made the earth move in a threefold manner (daily, annual, and axial motion to account for precession). The theoretical shift left a major conceptual problem for Copernicus's followers: namely, how to reconcile a physical description of the universe with Copernicus's new mathematical description of it. In particular, it became problematic to talk about the motion of bodies on earth if the earth itself was moving, and also to account for the motion of the earth itself. Tycho Brahe (1546–1601) was one of the first to worry about the physical cosmos and, based on his own marvelous celestial observations, devised his own compromise system. But Tycho's system was qualitative, never put into good mathematical shape, and therefore useless to professional astronomers. Nevertheless, his work on comets did away with the crystalline spheres in which planets were thought to be embedded.
The first to successfully challenge Aristotle on his physics, matter theory, and cosmology—and, in the process, vindicate Copernicus—was Galileo Galilei (1564–1642). Galileo was trained by artisans, and after leaving medical school he began to work on problems of mechanics in an Archimedean manner, modeling his proofs on simple machines and floating bodies. Contributing to Galileo's confidence in the Copernican system were the construction of his own telescope in 1609 (one of the first), his consequent investigation of the moon, the sun, and the Milky Way, and his discovery of four moons of Jupiter. These investigations were published in The Starry Messenger in 1610 and in the Letters on the Sunspots in 1613. They affirmed Galileo's conviction that the earth was a material body like the other planets and that Copernicus's system was an accurate physical description of the universe. But he still lacked an account of how bodies moved on an earth that was itself moving.
Galileo's most influential book, Dialogue Concerning the Two Chief World Systems (1632), was his most elaborate defense of Copernicanism. In this book he argued most effectively that a theory of motion for a moving earth was not only possible but more plausible than the Aristotelian theory of motion. Specifically, he argued for a form of natural motion (inertia) in which bodies moved circularly, and for the principle of the relativity of observed motion (which had been used before by Copernicus and others). This allowed him to claim that the motion of the earth was not perceptible, since it was common to both the earth and the bodies on it. At the end of the Dialogue, he believed he had proved Copernicanism by arguing that the earth's threefold motion could physically explain the tides.
Galileo's condemnation for heresy under the papacy of his former friend Urban VIII was based on the Dialogue; he was put under house arrest for the rest of his life. During this time, he began work on his final publication, Discourses concerning Two New Sciences (1638). This work revived the Archimedean, mechanical physics he had virtually completed between 1604 and 1609. Here he argued for a one-element theory on which matter was to be understood solely by its mechanical properties, as Archimedean machines were understood, and for a theory of motion in which motion was essentially related to time. In particular, he argued that the distance covered by a body falling from rest increases in proportion to the square of the time of its fall, and he provided experimental evidence for this by measuring balls rolling down inclined planes. The emphasis on time as the crucial independent variable occurred to him through his work with pendulums, whose isochrony he discovered.

As Galileo was working out the details of a new physics, Johannes Kepler (1571–1630) formed the world's first mathematical astrophysics. It was Kepler who finally abandoned the principal assumptions of Ptolemaic and Copernican astronomy by introducing elliptical motion and demanding that astronomical calculation describe real physical objects. Although to his contemporaries Kepler was mostly known for producing the most accurate astronomical tables to date, his legacy lies in a reorientation of astronomy away from a predictive discipline aimed at mathematically "saving the phenomena" toward one that combines observational predictions (how the planets move) with physical theory (why they move). For example, Kepler offered not only his so-called three laws describing planetary motion, but also answered the causal Copernican problem by explaining that the planets were moved by a quasi-magnetic force emanating from the sun that diminished with distance, and were hindered by their natural inertia or "sluggishness."
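Galileo's time-squared law can be checked numerically. The sketch below uses the modern value of g (an assumption for illustration only; Galileo worked with ratios, not units) to show how total distances growing as the square of the time yield his odd-number rule for the increments over equal intervals:

```python
# A sketch of Galileo's time-squared law for free fall: d = (1/2) g t^2.
# The value of g is the modern one; Galileo himself worked with ratios,
# so the constant is an assumption for illustration only.
g = 9.81  # m/s^2

def distance_fallen(t):
    """Distance fallen from rest after time t, by the time-squared law."""
    return 0.5 * g * t ** 2

# Total distances at t = 0, 1, 2, 3, 4 scale as 0 : 1 : 4 : 9 : 16, so the
# increments over equal intervals follow the odd-number rule 1 : 3 : 5 : 7,
# the pattern Galileo confirmed with balls on inclined planes.
totals = [distance_fallen(t) for t in range(5)]
ratios = [round((b - a) / totals[1], 6) for a, b in zip(totals, totals[1:])]
print(ratios)  # [1.0, 3.0, 5.0, 7.0]
```

Note that the ratios are independent of g: any uniform acceleration produces the same odd-number pattern, which is why inclined planes (slowing the fall) could stand in for free fall.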
His integration of underlying physical mechanism and descriptive law, much in the same manner as Galileo's, was to become a hallmark of seventeenth-century science. It is in this sense that both thinkers built the foundation on which the mechanical philosophy was to rest.
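Kepler's three laws can be stated compactly in modern form (the grouping into "three laws" is a later convention, not Kepler's own):

```latex
% 1. Each planet moves on an ellipse with the sun at one focus.
% 2. The line from sun to planet sweeps out equal areas in equal times:
\frac{dA}{dt} = \text{constant}
% 3. The square of the orbital period grows as the cube of the semimajor axis:
T^2 \propto a^3
```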
THE NEW SYSTEMATIZERS
Although Galileo's and Kepler's works were complementary, neither thinker attempted to reformulate the whole of the Aristotelian natural philosophy. René Descartes (1596–1650), on the other hand, attempted to build a complete system to replace Aristotelianism and put philosophy, including natural philosophy and the science of motion, on a firm epistemic and theological basis (the Cartesian cogito —I think therefore I am—and that God is no deceiver; Meditations on the First Philosophy, 1641). Regarding motion, he shifted emphasis from Galileo's machines to collision laws and promulgated a version of straight-line inertia. Descartes's laws of collision combined with a belief in a corpuscular (if not strictly atomic) matter allowed him to consider many physical problems in terms of material contact action and resulting equilibrium situations. For example, Descartes attempted to account for planetary motion and gravity in terms of vortices of particles swirling around a center, pushing heavier particles down into the vortex while carrying others around in their whirl. The Cartesian program was laid out in its most complete form in The Principles of Philosophy (1644). There he used the vortex theory and the strict definition of place to placate the church and to show that Copernicanism was not literally true. Descartes hoped this book would become the standard text at Catholic schools, replacing even Thomas Aquinas, but it was placed on the Index of Prohibited Books in 1663.
Descartes's followers could be called "mechanical philosophers," though in fact the phrase was coined later by Robert Boyle (1627–1691). Most notable among them was Christiaan Huygens (1629–1695), who, apart from making several important astronomical discoveries (for example, the rings of Saturn and its largest moon, Titan), published works on analytic geometry, clockmaking, and the pendulum, and corrected Descartes's erroneous laws of collision. Huygens proved his collision laws by using Galileo's principle of the relativity of perceived motion in On Motion (composed in the mid-1650s; published in 1703). He forcefully championed Cartesian philosophy in his criticisms of Isaac Newton's (1642–1727) notion of gravity, rejecting it as a return to occult qualities and offering instead his own aetherial vortex theory in Discourse on the Cause of Heaviness (1690).
In England, Robert Boyle emerged as the most vocal champion of the new philosophy. Boyle wrote prolifically on physics, alchemy, philosophy, medicine, and theology, and approached all with a single and forcefully articulated mechanical worldview, though in practice he seldom rigorously applied it. For Boyle, all natural phenomena were to be studied experimentally, and explanations were to be given by the configurations and motions of minute material corpuscles. Boyle's writings either argue for this view generally, as in The Origine of Formes and Qualities (1666), or exemplify it, as in New Experiments Physico-Mechanicall, Touching the Spring of the Air and Its Effects (1660). In the Origine, for example, Boyle argues against the Scholastic reliance on substantial forms, holding these to be unintelligible in themselves and useless for practical purposes. Instead he offers explanations using two analogies for natural processes that were already well worn: that of the lock and key and that of the world as a clock. Boyle's criticisms were widely circulated both in England and on the Continent. (It is of note that Robert Hooke's work on springs was more rigorous, and his version of the mechanical philosophy in terms of vibrating particles later became more widely used than Boyle's.) Boyle, like some other seventeenth-century thinkers, was deeply committed to the use of mechanical science to further belief in God. This fact is important to note, as no great schism was felt in the seventeenth century between the findings of science and belief in the deity, although the charge of atheism was often leveled in battles between competing scientific schools, particularly against Thomas Hobbes (1588–1679), who may have been the most coherent of all the mechanical philosophers, and who had the widest philosophical impact at mid-century.
THE NEW PHYSICS
If the systematization of these modes of thought and physical problems into a coherent whole can be attributed to one man, it is Isaac Newton (1642–1727). In his Mathematical Principles of Natural Philosophy (1687), Newton combined the study of collision theory, a new theory regarding substantial forces and their measure, and a new geometrical version of the calculus to draw consequences regarding motion both on earth and in the heavens. The book begins with the three laws of motion: the law of inertia, the force law, and the law of action and reaction. Although the law of inertia was first framed by Descartes, the latter two laws were Newton's stunning innovations. Of particular importance is the second law, in which Newton introduces a novel measure of force akin to the modern notion of impulse (instantaneous change in momentum). Considering force in this way, Newton was able to treat the effect of any force as if it were the result of a collision between two bodies, thus reducing the variety of physical phenomena to cases of collision.
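Newton's second law can be written in the impulsive form closest to the Principia's usage and in the later differential form (the vector notation is modern, not Newton's):

```latex
% Impulsive form: a force acting over a short interval changes the momentum
\mathbf{F}\,\Delta t = \Delta\mathbf{p}
% Differential form, the later standard statement of the second law
\mathbf{F} = \frac{d\mathbf{p}}{dt} = m\,\mathbf{a} \quad \text{(for constant mass)}
```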
In general, the evolution of the concept of force in the seventeenth century constitutes a crucial feature in the birth of modern physics. At the beginning of the century, the term "force" was used with a variety of intuitive meanings. The lack of a precise concept was due, in part, to the fact that characterizations of force were derived from analyses of several different physical situations: equilibrium in terms of the law of the lever (where a specific weight was related to the force required to balance it), impact in collisions, and free fall. It was unclear how to relate these situations, all of which the Aristotelian tradition classed as violent motions, that is, motions against a body's natural inclination. With Descartes's formulation of the principle of inertia, the mechanical analogue of Aristotelian natural motion, force came to indicate the cause of any deviation from (seemingly natural) inertial motion. By further fixing its meaning in all cases, Newton was able to provide a unified treatment of the physical situations mentioned above.
Newton also showed that the Cartesian explanation of planetary motion by an aetherial vortex was untenable. Moreover, using Kepler's laws and a host of other planetary observations, he demonstrated that the planets must be drawn toward the sun (as well as toward one another) by a force directly proportional to their masses and inversely proportional to the square of the distance between them: by a gravitational force. This was Newton's most contentious discovery. Although his laws of motion were quickly recognized as correct, Newtonian gravitation was dismissed by many as a "fiction" and a "mere hypothesis." Put differently, since the gravitational force did not rely on the collisions or springs endorsed by mechanical philosophers, Newton's contemporaries perceived it as a return to recently banished occult Aristotelian properties. In general, since force (gravitational or otherwise) is not a directly perceivable property of matter, it seemed Newton was rejecting a mainstay of the mechanical philosophy by admitting ontologically gratuitous terms into his physical explanations. (George Berkeley [1685–1753] would try to recast Newtonian mechanics without force in his On Motion.)
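The law of universal gravitation in modern notation (the constant G is a later refinement, first measured in the laboratory long after Newton; Newton himself stated the law as a proportionality):

```latex
F = G\,\frac{m_1 m_2}{r^2}
```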
Newton's most powerful critic in this and other regards was Gottfried Wilhelm Leibniz (1646–1716). Although their antagonism originated with a priority dispute in the mid-1690s over the invention of the calculus—which Newton and Leibniz had actually invented independently—it ended with Newton anonymously writing the official opinion of the Royal Society, in which it was declared that he, Newton, was the true originator. The quarrel was continued in a protracted epistolary debate between Leibniz and Newton's disciple, Samuel Clarke (1675–1729), over the metaphysical and religious implications of Newtonian physics. Leibniz claimed that Newton's theory of gravitation not only failed to explain anything (since the notion of gravitational action-at-a-distance was itself unintelligible) but also promoted atheism. The other key debate was over the nature of relative space versus Newtonian absolute space. Similar debates between Newtonians and their detractors regarding the explanatory and theological significance of universal gravitation were to color the philosophical landscape well into the eighteenth century.
Most importantly, however, Newton's debates with Leibniz yielded Newton's most explicit characterizations of his scientific method, which were to serve as a basis for all later science. In warding off criticism, Newton often insisted that the notion of universal gravitation was not in the least hypothetical, but was securely and positively based on empirical evidence. His insistence that theoretical claims should be justified only by observations, even when dealing with properties not directly perceivable, contradicted the idea of some of his contemporaries, who were accustomed to deducing theoretical claims from higher-level metaphysical or theological principles. The reliance on observation and experiment, more than any of Newton's particular claims, quickly became a hallmark of science as a whole. The Royal Society, increasing professionalization, an experimental method, and a set of unique problems all testify to physics' emergence as its own discipline during the latter half of the seventeenth century.
Curiously, despite its numerous innovations, Newton's work was mostly written in an older geometrical style, not the differential calculus. The move away from geometry—which had dominated mathematical thinking since antiquity—was not completed until the middle of the eighteenth century, well after Newton's death, although it had begun in the early years of the seventeenth century with the work of François Viète (1540–1603), Thomas Harriot (c. 1560–1621), Descartes, and Pierre de Fermat (1601–1665) on infinitesimals and the algebraic treatment of curves. This new analytical treatment of mathematics was the cause of the aforementioned dispute between Newton and Leibniz regarding the calculus: while Leibniz's version of the calculus was based on the algebraic techniques gaining strength at the time, Newton's version (at least as published during his lifetime) was a geometrical analogue. Leibniz's version was eventually adopted, and by the mid-eighteenth century virtually all developments of the calculus were undertaken in an algebraic style. The culmination of this movement was to come in Leonhard Euler's (1707–1783) Mechanics or the Science of Motion Exposited Analytically (1736) and Joseph-Louis Lagrange's (1736–1813) Analytical Mechanics (1788).
Finally, it remains to remark that Newtonian physics, and Newton himself, by name if not by precise deed, were taken as exemplary for the age that followed. Numerous works appeared in many languages for children and for women, who were now thought fit for scientific education; among them were E. Wells's Young Gentleman's Course in Mechanicks, Optics, and Astronomy (1714) and Francesco Algarotti's (1712–1764) Sir Isaac Newton for Use of Ladies (1739). More serious discussions and popularizations of Newton and his work were also numerous. To mention only a few, the early eighteenth century saw John Theophilus Desaguliers's (1683–1744) Course of Experimental Philosophy (1744), Willem 's Gravesande's (1688–1742) Mathematical Elements of Natural Philosophy (1721), Henry Pemberton's View of Sir Isaac Newton's Philosophy (1728), and, most impressive of all, Colin Maclaurin's (1698–1746) posthumously published An Account of Sir Isaac Newton's Philosophical Discoveries (1748). Newton also had his fame in France: Pierre Louis Moreau de Maupertuis (1698–1759) taught Newtonianism to Voltaire (François-Marie Arouet, 1694–1778) and to Madame du Châtelet (1706–1749), leading Voltaire to write the popular Elements of Newtonian Philosophy (1741), perhaps the best-known but certainly not an isolated instance. Newton was seen during this time as the man who had brought modernity (and perhaps salvation) to England and to the world. The prevailing thought of the times was well summed up by Alexander Pope in his "Epitaph Intended for Sir Isaac Newton":
Nature and Nature's laws lay hid in night;
God said, "Let Newton be!" and all was light.
See also Academies, Learned; Astronomy; Berkeley, George; Boyle, Robert; Brahe, Tycho; Catholicism; Copernicus, Nicolaus; Descartes, René; Galileo Galilei; Huygens Family; Kepler, Johannes; Leibniz, Gottfried Wilhelm; Mathematics; Montaigne, Michel de; Newton, Isaac; Philosophy; Reformation, Protestant; Scientific Method; Scientific Revolution.
Boyle, Robert. The Origin of Forms and Qualities According to the Corpuscular Philosophy. 2nd ed. Oxford, 1667. Reprinted in Selected Papers of Robert Boyle. Edited by M. A. Stewart. Indianapolis, 1991.
Descartes, René. Principles of Philosophy. Translated by V. R. Miller and R. P. Miller. Dordrecht and Boston, 1983.
Galilei, Galileo. Dialogues on the Two Chief World Systems. Translated by S. Drake. Berkeley, 1967.
Galilei, Galileo. Two New Sciences [Discorsi]. Translated by S. Drake. 2nd ed. Toronto, 1989.
Maclaurin, Colin. An Account of Sir Isaac Newton's Philosophical Discoveries. Edited by L. L. Laudan. New York, 1968. Originally published 1748.
Newton, Isaac. The Principia: Mathematical Principles of Natural Philosophy: A New Translation. Translated by I. Bernard Cohen and Anne Whitman. Berkeley, 1999.
Aiton, E. J. The Vortex Theory of Planetary Motion. New York, 1972.
Bertoloni Meli, Domenico. Equivalence and Priority: Newton versus Leibniz: Including Leibniz's Unpublished Manuscripts on the "Principia." Oxford, 1993.
Brackenridge, J. Bruce. The Key to Newton's Dynamics: The Kepler Problem and the Principia. Berkeley, 1995.
Cohen, I. B., and George E. Smith, eds. The Cambridge Companion to Newton. Cambridge, U.K., and New York, 2002.
Dijksterhuis, E. J. The Mechanization of the World Picture: Pythagoras to Newton. Translated by C. Dikshoorn. Princeton, 1961.
Dugas, René. A History of Mechanics. Translated by J. R. Maddox. Neuchâtel and New York, 1955.
Eisenstein, Elizabeth L. The Printing Revolution in Early Modern Europe. Cambridge, U.K., and New York, 1983.
Grant, Edward. Physical Science in the Middle Ages. New York, 1971.
Hunter, Michael Cyril William. Archives of the Scientific Revolution: The Formation and Exchange of Ideas in Seventeenth-Century Europe. Woodbridge, U.K., and Rochester, N.Y., 1998.
Jardine, Nicholas. The Birth of History and Philosophy of Science: Kepler's "A Defense of Tycho against Ursus," with Essays on Its Provenance and Significance. Cambridge, U.K., and New York, 1984.
Kuhn, Thomas. The Copernican Revolution: Planetary Astronomy in the Development of Western Thought. Cambridge, Mass., 1957.
Machamer, Peter. "Introduction." In The Cambridge Companion to Galileo, edited by Peter Machamer. Cambridge, U.K., and New York, 1998.
——. "Individualism and the Idea(l) of Method." In Scientific Controversies: Philosophical and Historical Perspectives, edited by Peter Machamer, Marcello Pera, and Aristide Baltas. New York, 2000.
McMullin, Ernan. Newton on Matter and Activity. Notre Dame, Ind., 1978.
Meyer, G. The Scientific Lady in England, 1650–1760: An Account of Her Rise, with Emphasis on the Major Roles of the Telescope and Microscope. Berkeley, 1965.
Osler, Margaret J., ed. Rethinking the Scientific Revolution. Cambridge, U.K., and New York, 2000.
Westfall, Richard S. The Construction of Modern Science: Mechanisms and Mechanics. Cambridge, U.K., and New York, 1977.
——. Force in Newton's Physics: The Science of Dynamics in the Seventeenth Century. London and New York, 1971.
Peter Machamer and Zvi Biener
"Physics." Europe, 1450 to 1789: Encyclopedia of the Early Modern World. . Encyclopedia.com. (November 24, 2017). http://www.encyclopedia.com/history/encyclopedias-almanacs-transcripts-and-maps/physics
physics, branch of science traditionally defined as the study of matter, energy, and the relation between them; it was called natural philosophy until the late 19th cent. and is still known by this name at a few universities. Physics is in some senses the oldest and most basic pure science; its discoveries find applications throughout the natural sciences, since matter and energy are the basic constituents of the natural world. The other sciences are generally more limited in their scope and may be considered branches that have split off from physics to become sciences in their own right. Physics today may be divided loosely into classical physics and modern physics.
Classical physics includes the traditional branches and topics that were recognized and fairly well developed before the beginning of the 20th cent.—mechanics, sound, light, heat, and electricity and magnetism. Mechanics is concerned with bodies acted on by forces and bodies in motion and may be divided into statics (study of the forces on a body or bodies at rest), kinematics (study of motion without regard to its causes), and dynamics (study of motion and the forces that affect it); mechanics may also be divided into solid mechanics and fluid mechanics, the latter including such branches as hydrostatics, hydrodynamics, aerodynamics, and pneumatics. Acoustics, the study of sound, is often considered a branch of mechanics because sound is due to the motions of the particles of air or other medium through which sound waves can travel and thus can be explained in terms of the laws of mechanics. Among the important modern branches of acoustics is ultrasonics, the study of sound waves of very high frequency, beyond the range of human hearing. Optics, the study of light, is concerned not only with visible light but also with infrared and ultraviolet radiation, which exhibit all of the phenomena of visible light except visibility, e.g., reflection, refraction, interference, diffraction, dispersion (see spectrum), and polarization of light. Heat is a form of energy, the internal energy possessed by the particles of which a substance is composed; thermodynamics deals with the relationships between heat and other forms of energy. Electricity and magnetism have been studied as a single branch of physics since the intimate connection between them was discovered in the early 19th cent.; an electric current gives rise to a magnetic field and a changing magnetic field induces an electric current. Electrostatics deals with electric charges at rest, electrodynamics with moving charges, and magnetostatics with magnetic poles at rest.
Most of classical physics is concerned with matter and energy on the normal scale of observation; by contrast, much of modern physics is concerned with the behavior of matter and energy under extreme conditions or on the very large or very small scale. For example, atomic and nuclear physics studies matter on the smallest scale at which chemical elements can be identified. The physics of elementary particles is on an even smaller scale, being concerned with the most basic units of matter; this branch of physics is also known as high-energy physics because of the extremely high energies necessary to produce many types of particles in large particle accelerators. On this scale, ordinary, commonsense notions of space, time, matter, and energy are no longer valid.
The two chief theories of modern physics present a different picture of the concepts of space, time, and matter from that presented by classical physics. The quantum theory is concerned with the discrete, rather than continuous, nature of many phenomena at the atomic and subatomic level, and with the complementary aspects of particles and waves in the description of such phenomena. The theory of relativity is concerned with the description of phenomena that take place in a frame of reference that is in motion with respect to an observer; the special theory of relativity is concerned with relative uniform motion in a straight line and the general theory of relativity with accelerated motion and its connection with gravitation. Both the quantum theory and the theory of relativity find applications in all areas of modern physics.
Evolution of Physics
The earliest history of physics is interrelated with that of the other sciences. A number of contributions were made during the period of Greek civilization, dating from Thales and the early Ionian natural philosophers in the Greek colonies of Asia Minor (6th and 5th cent. BC). Democritus (c.460–370 BC) proposed an atomic theory of matter and extended it to other phenomena as well, but the dominant theories of matter held that it was formed of a few basic elements, usually earth, air, fire, and water. In the school founded by Pythagoras of Samos the principal concept was that of number; it was applied to all aspects of the universe, from planetary orbits to the lengths of strings used to sound musical notes.
The most important philosophy of the Greek period was produced by two men at Athens, Plato (427–347 BC) and his student Aristotle (384–322 BC); Aristotle in particular had a critical influence on the development of science in general and physics in particular. The Greek approach to physics was largely geometrical and reached its peak with Archimedes (287–212 BC), who studied a wide range of problems and anticipated the methods of the calculus. Another important scientist of the early Hellenistic period, centered in Alexandria, Egypt, was the astronomer Aristarchus (c.310–220 BC), who proposed a heliocentric, or sun-centered, system of the universe. However, just as the earlier atomic theory had not become generally accepted, so too the astronomical system that eventually prevailed was the geocentric system proposed by Hipparchus (190–120 BC) and developed in detail by Ptolemy (AD 85–165).
Preservation of Learning
With the passing of the Greek civilization and the Roman civilization that followed it, Greek learning passed into the hands of the Muslim world that spread its influence from the E Mediterranean eastward into Asia, where it picked up contributions from the Chinese (papermaking, gunpowder) and the Hindus (the place-value decimal number system with a zero), and westward as far as Spain, where Islamic culture flourished in Córdoba, Toledo, and other cities. Little specific advance was made in physics during this period, but the preservation and study of Greek science by the Muslim world made possible the revival of learning in the West beginning in the 12th and 13th cent.
The Scientific Revolution
The first areas of physics to receive close attention were mechanics and the study of planetary motions. Modern mechanics dates from the work of Galileo and Simon Stevin in the late 16th and early 17th cent. The great breakthrough in astronomy was made by Nicolaus Copernicus, who proposed (1543) the heliocentric model of the solar system that was later modified by Johannes Kepler (using observations by Tycho Brahe) into the description of planetary motions that is still accepted today. Galileo gave his support to this new system and applied his discoveries in mechanics to its explanation.
The full explanation of both celestial and terrestrial motions was not given until 1687, when Isaac Newton published his Principia [Mathematical Principles of Natural Philosophy]. This work, the most important document of the Scientific Revolution of the 16th and 17th cent., contained Newton's famous three laws of motion and showed how the principle of universal gravitation could be used to explain the behavior not only of falling bodies on the earth but also planets and other celestial bodies in the heavens. To arrive at his results, Newton invented one form of an entirely new branch of mathematics, the calculus (also invented independently by G. W. Leibniz), which was to become an essential tool in much of the later development in most branches of physics.
Other branches of physics also received attention during this period. William Gilbert, court physician to Queen Elizabeth I, published (1600) an important work on magnetism, describing how the earth itself behaves like a giant magnet. Robert Boyle (1627–91) studied the behavior of gases enclosed in a chamber and formulated the gas law named for him; he also contributed to physiology and to the founding of modern chemistry.
Newton himself discovered the separation of white light into a spectrum of colors and published an important work on optics, in which he proposed the theory that light is composed of tiny particles, or corpuscles. This corpuscular theory was related to the mechanistic philosophy presented early in the 17th cent. by René Descartes, according to which the universe functioned like a mechanical system describable in terms of mathematics. A rival theory of light, explaining its behavior in terms of waves, was presented in 1690 by Christiaan Huygens, but the belief in the mechanistic philosophy together with the great weight of Newton's reputation was such that the wave theory gained relatively little support until the 19th cent.
Development of Mechanics and Thermodynamics
During the 18th cent. the mechanics founded by Newton was developed by several scientists and received brilliant exposition in the Analytical Mechanics (1788) of J. L. Lagrange and the Celestial Mechanics (1799–1825) of P. S. Laplace. Daniel Bernoulli made important mathematical studies (1738) of the behavior of gases, anticipating the kinetic theory of gases developed more than a century later, and has been referred to as the first mathematical physicist.
The accepted theory of heat in the 18th cent. viewed heat as a kind of fluid, called caloric; although this theory was later shown to be erroneous, a number of scientists adhering to it nevertheless made important discoveries useful in developing the modern theory, including Joseph Black (1728–99) and Henry Cavendish (1731–1810). Opposed to this caloric theory, which had been developed mainly by the chemists, was the less accepted theory dating from Newton's time that heat is due to the motions of the particles of a substance. This mechanical theory gained support in 1798 from the cannon-boring experiments of Count Rumford (Benjamin Thompson), who found a direct relationship between heat and mechanical energy.
In the 19th cent. this connection was established quantitatively by J. R. von Mayer and J. P. Joule, who measured the mechanical equivalent of heat in the 1840s. This experimental work and the theoretical work of Sadi Carnot, published in 1824 but not widely known until later, together provided a basis for the formulation of the first two laws of thermodynamics in the 1850s by William Thomson (later Lord Kelvin) and R. J. E. Clausius. The first law is a form of the law of conservation of energy, stated earlier by von Mayer and Hermann Helmholtz on the basis of biological considerations; the second law describes the tendency of energy to be converted from more useful to less useful forms.
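The quantitative connection Joule measured can be illustrated with a short calculation; the constants below are modern conventional values, not figures from the text:

```python
# Joule's mechanical equivalent of heat: about 4.186 joules of work raise
# one gram of water by one degree Celsius (modern conventional values,
# not figures from the text).
J_PER_CAL = 4.186   # joules per calorie
g = 9.81            # gravitational acceleration, m/s^2

# Heat needed to warm 1 kg of water by 1 degree C, expressed as work:
work_joules = 1000 * J_PER_CAL          # 1000 cal for 1 kg of water

# Height a 1 kg weight must fall to supply that much mechanical energy,
# from W = m * g * h:
height_m = work_joules / (1.0 * g)
print(round(height_m))                  # ~427 m, the classic textbook figure
```

The striking size of that height is why the equivalence escaped notice for so long: a little heat corresponds to a great deal of mechanical work.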
The atomic theory of matter had been proposed again in the early 19th cent. by the chemist John Dalton and became one of the hypotheses of the kinetic-molecular theory of gases developed by Clausius and James Clerk Maxwell to explain the laws of thermodynamics. The kinetic theory in turn led to the statistical mechanics of Ludwig Boltzmann and J. W. Gibbs.
Advances in Electricity, Magnetism, and Thermodynamics
The study of electricity and magnetism also came into its own during the 18th and 19th cents. C. A. Coulomb had discovered the inverse-square laws of electrostatics and magnetostatics in the late 18th cent. and Alessandro Volta had invented the electric battery, so that electric currents could also be studied. In 1820, H. C. Oersted found that a current-carrying conductor gives rise to a magnetic force surrounding it, and in 1831 Michael Faraday (and independently Joseph Henry) discovered the reverse effect, the production of an electric potential or current through magnetism (see induction); these two discoveries are the basis of the electric motor and the electric generator, respectively.
Faraday invented the concept of the field of force to explain these phenomena and Maxwell, from c.1856, developed these ideas mathematically in his theory of electromagnetic radiation. He showed that electric and magnetic fields are propagated outward from their source at a speed equal to that of light and that light is one of several kinds of electromagnetic radiation, differing only in frequency and wavelength from the others. Experimental confirmation of Maxwell's theory was provided by Heinrich Hertz, who generated and detected electric waves in 1886 and verified their properties, at the same time foreshadowing their application in radio, television, and other devices. The wave theory of light had been revived in 1801 by Thomas Young and received strong experimental support from the work of A. J. Fresnel and others; the theory was widely accepted by the time of Maxwell's work on the electromagnetic field, and afterward the study of light and that of electricity and magnetism were closely related.
Birth of Modern Physics
By the late 19th cent. most of classical physics was complete, and optimistic physicists turned their attention to what they considered minor details in the complete elucidation of their subject. Several problems, however, provided the cracks that eventually led to the shattering of this optimism and the birth of modern physics. On the experimental side, the discoveries of X rays by Wilhelm Roentgen (1895), radioactivity by A. H. Becquerel (1896), the electron by J. J. Thomson (1897), and new radioactive elements by Marie and Pierre Curie raised questions about the supposedly indestructible atom and the nature of matter. Ernest Rutherford identified and named two types of radioactivity and in 1911 interpreted experimental evidence as showing that the atom consists of a dense, positively charged nucleus surrounded by negatively charged electrons. Classical theory, however, predicted that this structure should be unstable. Classical theory had also failed to explain successfully two other experimental results that appeared in the late 19th cent. One of these was the demonstration by A. A. Michelson and E. W. Morley that there did not seem to be a preferred frame of reference, at rest with respect to the hypothetical luminiferous ether, for describing electromagnetic phenomena.
Relativity and Quantum Mechanics
In 1905, Albert Einstein showed that the result of the Michelson-Morley experiment could be interpreted by assuming the equivalence of all inertial (unaccelerated) frames of reference and the constancy of the speed of light in all frames; Einstein's special theory of relativity eliminated the need for the ether and implied, among other things, that mass and energy are equivalent and that the speed of light is the limiting speed for all bodies having mass. Hermann Minkowski provided (1908) a mathematical formulation of the theory in which space and time were united in a four-dimensional geometry of space-time. Einstein extended his theory to accelerated frames of reference in his general theory (1916), showing the connection between acceleration and gravitation. Newton's mechanics was interpreted as a special case of Einstein's, valid as an approximation for small speeds compared to that of light.
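Why Newton's mechanics survives as a low-speed approximation can be seen from the Lorentz factor of special relativity; a minimal sketch, with illustrative speeds:

```python
import math

C = 299_792_458.0   # speed of light in vacuum, m/s

def gamma(v):
    """Lorentz factor 1/sqrt(1 - v^2/c^2); it grows without bound as
    v approaches c, which is why c is a limiting speed for massive bodies."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# At everyday speeds the factor is indistinguishable from 1, so Newtonian
# mechanics works as a low-speed approximation of Einstein's:
print(gamma(300.0))     # airliner-scale speed: essentially 1
print(gamma(0.9 * C))   # 90% of light speed: about 2.29
```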
Although relativity resolved the conflict over electromagnetic phenomena demonstrated by Michelson and Morley, a second theoretical problem remained: the explanation of the distribution of electromagnetic radiation emitted by a blackbody; experiment showed that at shorter wavelengths, toward the ultraviolet end of the spectrum, the energy approached zero, whereas classical theory predicted it should become infinite. This glaring discrepancy, known as the ultraviolet catastrophe, was solved by Max Planck's quantum theory (1900). In 1905, Einstein used the quantum theory to explain the photoelectric effect, and in 1913 Niels Bohr again used it to explain the stability of Rutherford's nuclear atom. In the 1920s the theory was extensively developed by Louis de Broglie, Werner Heisenberg, Wolfgang Pauli, Erwin Schrödinger, P. A. M. Dirac, and others; the new quantum mechanics became an indispensable tool in the investigation and explanation of phenomena at the atomic level.
Particles, Energy, and Contemporary Physics
Dirac's theory, which combined quantum mechanics with the theory of relativity, also predicted the existence of antiparticles. During the 1930s the first antiparticles were discovered, as well as other particles. Among those contributing to this new area of physics were James Chadwick, C. D. Anderson, E. O. Lawrence, J. D. Cockcroft, E. T. S. Walton, Enrico Fermi, and Hideki Yukawa.
The discovery of nuclear fission by Otto Hahn and Fritz Strassmann (1938) and its explanation by Lise Meitner and Otto Frisch provided a means for the large-scale conversion of mass into energy, in accordance with the theory of relativity, and triggered as well the massive governmental involvement in physics that is one of the fundamental facts of contemporary science. The growth of physics since the 1930s has been so great that it is impossible in a survey article to name even its most important individual contributors.
Among the areas where fundamental discoveries have been made more recently are solid-state physics, plasma physics, and cryogenics, or low-temperature physics. Out of solid-state physics, for example, have come many of the developments in electronics (e.g., the transistor and microcircuitry) that have revolutionized much of modern technology. Another development is the maser and laser (in principle the same device), with applications ranging from communication and controlled nuclear fusion experiments to atomic clocks and other measurement standards.
See I. M. Freeman, Physics Made Simple (1990); R. P. Feynman, The Character of Physical Law (1994); K. F. Kuhn, Basic Physics (2d ed. 1996); J. D. Bernal, A History of Classical Physics (1997); R. L. Lehrman, Physics the Easy Way (3d ed. 1998); C. Suplee, Physics in the 20th Century (1999); A. Pais, The Genius of Science: A Portrait Gallery of Twentieth Century Physicists (2000).
"physics." The Columbia Encyclopedia, 6th ed.. . Encyclopedia.com. (November 24, 2017). http://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/physics
Classical physics is the science of physics as it was conceptualized and practiced in the three centuries prior to the advent of either quantum physics or relativity early in the twentieth century. The character of classical physics is well-represented by Isaac Newton's (1642–1727) formulation of the study of motion and James Clerk Maxwell's (1831–1879) approach to the study of electromagnetism.
Classical mechanics, the scientific study of motion in the style developed in the seventeenth century by Newton, is often taken as the foundational branch of classical physics. General physics courses commonly begin with the study of motion and use Newtonian mechanics as the setting in which numerous basic concepts, such as energy, force, and momentum, are first introduced.
Physics has long been concerned with understanding the nature and causes of motion. In the tradition of ancient Greek philosophy, the cosmos was thought to be divided into two distinctly differing realms—the terrestrial (near Earth) realm and the celestial realm (the region of the moon and beyond). As conceived in Greek thought, these two realms were not only spatially distinct, but they differed in character from one another in substantial ways. For one thing, the "natural" motions of things (motions that needed no further causation) in these two realms were presumed to be radically different.
According to Aristotle (384–322 B.C.E.), who was for nearly two millennia taken to be the authority on these matters, motion in the terrestrial realm required the continuous application of a cause. Remove the cause, and motion would cease. When a horse ceases to pull a cart, for instance, the cart comes to a halt. In Newton's formulation, however, what requires an active cause is not motion itself, but acceleration—any change in the speed or direction of motion. In effect, Newton's First Law of Motion asserts that the natural motion of things is uniform motion, straight-line motion at constant speed. Any deviation from this—any acceleration, that is—would require a cause. The name for this cause is force—specifically, the force exerted on one object by interaction with another. Expressed more traditionally, Newton's First Law states that unless acted upon by an applied force, an object will continue in a state of rest or uniform motion.
What happens when a force is applied to an object? The answer to that question is the subject of Newton's Second Law of Motion: When acted upon by an applied force, an object will accelerate; the resultant acceleration will be in the same direction as the applied force, and its magnitude will be directly proportional to the magnitude of the applied force and inversely proportional to the object's mass. Stated more succinctly, acceleration is proportional to force divided by mass. This statement, more than any other, functions as the core of Newtonian dynamics, Newton's formulation of the fundamental cause-effect relationship for motion. Force is the cause; acceleration is the effect. For a substantial class of motions, with exceptions to be noted later, this formulation continues to provide a fruitful way to predict or account for acceleration in response to applied forces.
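The two proportionalities in the Second Law can be checked with a minimal Python sketch; the forces and masses are made-up illustrative values, not figures from the text:

```python
# Newton's Second Law as a cause-effect rule: acceleration = force / mass.
# All numbers below are made-up illustrative values.
def acceleration(force_n, mass_kg):
    """Magnitude of the acceleration a net force produces on a mass."""
    return force_n / mass_kg

a1 = acceleration(10.0, 2.0)   # baseline case: 5.0 m/s^2
a2 = acceleration(20.0, 2.0)   # doubling the force doubles the acceleration
a3 = acceleration(10.0, 4.0)   # doubling the mass halves the acceleration
print(a1, a2, a3)
```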
Newton's Third Law of Motion is a statement about the character of the applied forces mentioned in the first two laws. All such forces occur in pairs and are the result of two bodies interacting with one another. When two bodies interact, says Newton, each exerts a force on the other. When bodies A and B interact, the force exerted on A by B is equal in magnitude and opposite in direction to the force exerted on B by A. This is sometimes abbreviated to read, "action equals reaction," but the meanings of action and reaction must be very carefully specified.
Among the various types of forces that contribute to the acceleration of terrestrial objects is the force of gravity—the force that causes apples, for example, to fall to the ground, or to "accelerate earthward." It was the genius of Newton that allowed him to consider the possibility that the orbital motion of the moon, which entails an acceleration toward the Earth, might also be a consequence of the Earth's gravitational attraction.
This suggestion required a remarkable break with Aristotelian tradition. According to Aristotle, the natural motion of the moon, of the planets, or of any other member of the celestial realm was entirely different from the terrestrial motions considered so far. The natural motion of celestial bodies was neither rest nor uniform straight-line motion. Rather, the motion of celestial bodies would necessarily be based on uniform circular motion, motion at constant speed on a circular path. In the spirit of this assumption, Claudius Ptolemy in the second century crafted a remarkably clever combination of uniform circular motions with which to describe the motions of the sun, moon, and planets relative to the central Earth.
However, building on the fruitful contributions of astronomers Nicolaus Copernicus (1473–1543), Galileo Galilei (1564–1642), and Johannes Kepler (1571–1630), Newton was able to demonstrate that Kepler's sun-centered model for planetary motions could be seen as but one more illustration of Newton's theory regarding the cause-effect relationship for motion. The moon was steered in its orbit around the Earth in response to a force exerted by the Earth on the moon. The Earth and the other planets orbited the sun in response to a force exerted on them by the sun. What was the force operating in these celestial motions? The same kind of force that caused apples to accelerate earthward—the universal gravitational force.
It was helpful to recognize gravity as a force exerted by one object on another. It was exceptionally insightful for Newton to propose that every pair of objects everywhere in the universe exerted gravitational forces on one another. Gone was the confusion of two kinds of natural motions. Gone was the even greater distinction between terrestrial and celestial realms—one characterized by imperfection and change, the other characterized by perfection and constancy. The cosmos is one system, not two. The world is a universe made of one set of substances and behaving according to one set of patterns. Classical mechanics provided the means to study all motions, both terrestrial and celestial, with one and the same methodology.
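Newton's own check of this unification, the famous "moon test," can be reproduced numerically: if gravity weakens as the inverse square of distance, then the moon, roughly sixty Earth radii away, should accelerate earthward at about g/60². A sketch using approximate modern values (assumed, not given in the text):

```python
import math

# Newton's "moon test": if gravity falls off as the inverse square of
# distance, the moon, about 60 Earth radii away, should accelerate
# earthward at roughly g / 60^2. Approximate modern values (assumed):
r_orbit = 3.84e8       # mean Earth-moon distance, m
T = 27.32 * 86400      # sidereal month, s
g = 9.81               # surface gravity, m/s^2

a_moon = 4 * math.pi**2 * r_orbit / T**2   # moon's centripetal acceleration
a_inverse_square = g / 60**2               # inverse-square prediction

print(a_moon, a_inverse_square)  # both about 2.7e-3 m/s^2
```

The two figures agree to within a few percent, which is the sense in which terrestrial and celestial motion obey one and the same law.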
Classical electromagnetism provided a systematic account of numerous phenomena involving the interaction of electric charges and currents. Electric charges at rest were considered to be the source of electric fields—modifications in the nature of space that cause other charges to experience a force. Electric charges in motion, giving rise to an electric current, were considered to be the source of magnetic fields, modifications in the nature of space that could be detected by a magnetic compass and caused other electric currents to experience a force. Given any static distribution of electric charge, the configuration of the resultant electric field could be computed. Given any distribution of electric currents, the configuration of the resultant magnetic field could be computed. Given these electric and magnetic field configurations, the forces on all electric charges and currents could be predicted.
In addition to phenomena involving static charge distributions and steady electric currents, another important category of phenomena arises from dynamically changing configurations of charge or current. When charge or current configurations change, the resultant electric and magnetic fields will also change. However, changes in these field configurations must propagate at a finite speed—now called the speed of light, approximately 300,000 kilometers per second. Electromagnetic radiation is the phenomenon of traveling variations, or waves, in electric and magnetic field strength caused by accelerated electric charges. The electromagnetic spectrum spans the full range of wavelength values from very short to very long—from gamma rays, X-rays, and ultraviolet to visible light, infrared, microwaves and radio waves. Maxwell's equations—four mathematical statements that systematically integrated the work of predecessors like Charles-Augustin de Coulomb (1736–1806), Hans Christian Oersted (1777–1851), Michael Faraday (1791–1867), and André-Marie Ampère (1775–1836)—were taken to be the complete specification of all electromagnetic phenomena, including electromagnetic radiation.
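Because all of these waves travel at the same speed c, wavelength and frequency are locked together by c = λ·f, so a wavelength alone places a wave on the spectrum. A small sketch; the band edges below are rough conventional values, not sharp physical boundaries:

```python
c = 3.0e8  # m/s, speed of light (approximate)

def frequency(wavelength_m):
    """Frequency in Hz of an electromagnetic wave of the given wavelength."""
    return c / wavelength_m

# Approximate lower wavelength edges of the major bands, longest first
bands = [
    (1e-1, "radio"), (1e-3, "microwave"), (7e-7, "infrared"),
    (4e-7, "visible"), (1e-8, "ultraviolet"), (1e-11, "X-ray"),
]

def classify(wavelength_m):
    for lower_edge, name in bands:
        if wavelength_m >= lower_edge:
            return name
    return "gamma ray"

print(classify(5e-7), frequency(5e-7))  # green light: visible, 6e14 Hz
```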
Limitations of classical physics
Until the early twentieth century, classical physics appeared to be adequate to account for all observed phenomena. But new discoveries soon demonstrated that, although classical physics would continue to provide a convenient and powerful means of dealing with many phenomena, it needed to be supplemented with other theoretical strategies based on differing sets of assumptions regarding the fundamental character of the physical universe. In the arena of electromagnetism, for instance, classical physics assumed that electromagnetic energy could be continuously varied in value and that its transmission could be fully described in terms of traveling electromagnetic waves. However, in order to account for such phenomena as blackbody radiation (electromagnetic energy radiated by any warm object) and the photoelectric effect (electrons ejected from the surface of a metal illuminated by light), physicists had to propose and accept the idea that electromagnetic energy was transmitted in particle-like quanta of energy, now called photons. Phenomena in which the photon character of electromagnetic radiation plays a central role require the employment of quantum physics in place of classical physics.
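Einstein's account of the photoelectric effect turns on simple photon arithmetic: a photon of wavelength λ carries energy E = h·c/λ, and an electron escapes only if that energy exceeds the metal's work function. A sketch; the cesium work function of about 2.1 eV is a typical textbook value assumed here:

```python
h = 6.626e-34   # J*s, Planck's constant
c = 3.0e8       # m/s, speed of light
eV = 1.602e-19  # J per electron-volt

def photon_energy_eV(wavelength_m):
    return h * c / wavelength_m / eV

def max_ejected_energy_eV(wavelength_m, work_function_eV):
    """Kinetic energy of the fastest photoelectron, or None if none are ejected."""
    surplus = photon_energy_eV(wavelength_m) - work_function_eV
    return surplus if surplus > 0 else None

work_function = 2.1  # eV, roughly that of cesium (assumed for illustration)
print(max_ejected_energy_eV(400e-9, work_function))  # violet light: ~1.0 eV electrons
print(max_ejected_energy_eV(700e-9, work_function))  # red light: None, however bright
```

The classical wave picture predicted that sufficiently bright light of any color should eject electrons; the sharp wavelength threshold is what forced the photon idea.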
Quantum physics is also needed to account for the behavior of extremely small systems like atoms and molecules. The motion of electrons relative to atomic nuclei cannot be adequately described in the language of classical mechanics. Contrary to Newtonian expectations, the energy of atoms and molecules is not continuously variable, but is quantized—restricted to certain specific values. And, contrary to the expectations of classical electromagnetism, electrons in motion relative to atomic nuclei do not radiate energy continuously, but only when making a transition from one stable energy state to another of lower energy value. Consistent with the Principle of Conservation of Energy, the amount of energy lost by the atom is exactly equal to the energy carried away by the emitted photon.
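For hydrogen, the quantized levels take the simple Bohr form E_n = −13.6 eV / n², and an emitted photon carries exactly the energy difference between two levels. A minimal sketch using these standard values:

```python
RYDBERG_EV = 13.6  # eV, hydrogen ground-state binding energy
H_EV = 4.136e-15   # eV*s, Planck's constant in electron-volt units
C = 3.0e8          # m/s, speed of light

def level(n):
    """Allowed energy (eV) of hydrogen's nth state; no values in between occur."""
    return -RYDBERG_EV / n ** 2

def emitted_wavelength(n_high, n_low):
    """Wavelength (m) of the photon emitted in the n_high -> n_low transition."""
    photon_energy = level(n_high) - level(n_low)  # energy the atom loses, in eV
    return H_EV * C / photon_energy

print(emitted_wavelength(3, 2))  # the red Balmer line, about 656 nm
```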
A second shortcoming of classical physics becomes evident when Newtonian mechanics attempts to deal with things that are moving at very high speed relative to an observer. When this speed becomes a substantial fraction of the speed of light, several Newtonian expectations require modification. Many of these modifications are accounted for by the Special Theory of Relativity proposed by Albert Einstein (1879–1955) in 1905. The relationship between kinetic energy (energy associated with motion) and speed must be modified. Distance and time intervals once thought to be invariant become dependent on relative motion. Even the mass of an object is measured differently by different observers. Other modifications are accounted for by Einstein's General Theory of Relativity, published in 1916, which deals with the interaction of mass and the geometry of space. The General Theory describes the force of gravity in a manner very different from Newton's and is able to account for several discrepancies between observation and Newtonian predictions.
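The size of these corrections can be illustrated by comparing the Newtonian kinetic energy ½mv² with the relativistic value (γ − 1)mc², where γ = 1/√(1 − v²/c²). A sketch with an electron as the moving object:

```python
import math

C = 3.0e8  # m/s, speed of light

def gamma(v):
    """Lorentz factor for speed v; 1 at rest, unbounded as v approaches c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def kinetic_classical(m, v):
    return 0.5 * m * v ** 2

def kinetic_relativistic(m, v):
    return (gamma(v) - 1.0) * m * C ** 2

m = 9.11e-31  # kg, electron mass
for v in (0.01 * C, 0.9 * C):
    ratio = kinetic_relativistic(m, v) / kinetic_classical(m, v)
    print(v / C, ratio)
```

At one percent of c the two formulas agree to within about a hundredth of a percent, which is why Newtonian mechanics serves so well at everyday speeds; at 0.9c the relativistic energy is more than three times the classical figure.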
Religious concerns and classical physics
Classical physics gave support to the idea that the world was fundamentally deterministic. Given full information about the configuration and motion of some system today, its entire future could, in principle, be computed. Its future was considered to be fully determined by its present. But is there room in such a universe for contingency or choice? The apparent absence of choice presents difficulties for religious concepts like human responsibility and human accountability to God for obedience to revealed standards for moral action.
Another religious concern arises when one inquires about the character and role of divine action in the universe. When Newton considered the future motions of the planets in the solar system, for instance, he judged that this set of orbital motions was inherently unstable and would, from time to time, need to be adjusted by God to restore the desired array of orbits. This introduction of occasional supernatural interventions may be considered a form of the God of the gaps approach to divine action: the universe is presumed to lack some quality or capability that must be compensated for by direct divine action. In the case of planetary motions, for example, Newton considered the universe to lack the capability of maintaining a stable set of orbits. This "capability gap" could, however, be bridged with occasional acts of supernatural intervention. Eventually, however, it was demonstrated that the system of planetary orbits was, in fact, stable, thereby removing the need for occasional gap-bridging interventions. When a "gap" of this sort becomes filled, the God of the gaps becomes superfluous. For this reason, many contemporary theologians are inclined to see divine action, not as a supernatural compensation for capability gaps in the universe, but as an essential aspect of an enriched concept of what takes place naturally.
See also Aristotle; Determinism; Divine Action; God of the Gaps; Gravitation; Newton, Isaac; Physics, Quantum; Relativity, General Theory of; Relativity, Special Theory of; Wave-particle Duality
Howard J. Van Till
"Physics, Classical." Encyclopedia of Science and Religion. Encyclopedia.com. (November 24, 2017). http://www.encyclopedia.com/education/encyclopedias-almanacs-transcripts-and-maps/physics-classical
Physics and astronomy, from which all other sciences derive their foundation, are attempts to provide a rational explanation for the structure and workings of the Universe. The earliest civilizations and mankind's religious beliefs were profoundly shaped by the movements of the Sun, Moon, and stars across the sky. As our most ancient ancestors instinctively fashioned tools that gave them mechanical advantage beyond the strength of their limbs, they also sought to understand the mechanisms and patterns of the natural world. From this quest for understanding evolved the science of physics. Although these ancient civilizations were not mathematically sophisticated by contemporary standards, their early attempts at physics set mankind on the road toward the quantification of nature.
In Ancient Greece, in a natural world largely explained by the whim of gods, the earliest scientists and philosophers of record dared to offer explanations of the natural world based on their observations and reasoning. Pythagoras (582–500 b.c.) argued about the nature of numbers, while Leucippus (c. 440 b.c.), Democritus (c. 420 b.c.), and Epicurus (342–270 b.c.) asserted that matter was composed of extremely small particles called atoms.
Many of the most cherished arguments of ancient science ultimately proved erroneous. In Aristotle's (384–322 b.c.) physics, for example, a moving body of any mass had to be in contact with a "mover," and for all things there had to be a "prime mover." The errant models of the universe made by Ptolemy (c. a.d. 100–170) were destined to dominate the Western intellectual tradition for more than a millennium. Amidst these misguided concepts, however, were brilliant insights into natural phenomena. More than 1,700 years before the Copernican revolution, Aristarchus of Samos (310–230 b.c.) proposed that the earth revolved around the Sun, and Eratosthenes of Cyrene (276–194 b.c.), while working at the great library at Alexandria, deduced a reasonable estimate of the circumference of the earth.
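Eratosthenes' estimate is easy to reproduce with the round figures traditionally quoted: at the summer solstice the Sun stood overhead at Syene but cast shadows at about 7.2° at Alexandria, some 5,000 stadia to the north. Since 7.2° is 1/50 of a full circle, the whole circumference is 50 times that distance. The figures, and the roughly 157.5-meter stadion, are the usual textbook assumptions:

```python
shadow_angle_deg = 7.2  # Sun's angle from the vertical at Alexandria
distance_stadia = 5000  # traditional Alexandria-to-Syene distance

# The shadow angle is the fraction of a full circle the two cities subtend
circumference_stadia = distance_stadia * 360 / shadow_angle_deg
print(circumference_stadia)  # 250000.0 stadia

# One common modern guess for the stadion is about 157.5 m
print(circumference_stadia * 157.5 / 1000)  # ~39375 km, versus ~40008 km today
```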
Until the collapse of the Western Roman civilization there were constant refinements to physical concepts of matter and form. Yet, for all its glory and technological achievements, the science of ancient Greece and Rome was essentially a branch of philosophy. Experimentation would wait almost another two thousand years to inject its vigor into science. Although the Dark and Medieval Ages in Europe saw more technological advance and civil progress than they are commonly credited with, science slumbered there. In other parts of the world, however, Arab scientists preserved the classical arguments as they developed accurate astronomical instruments and compiled new works on mathematics and optics.
At the start of the Renaissance in Western Europe, the invention of the printing press and a rediscovery of classical mathematics provided a foundation for the rise of empiricism during the subsequent Scientific Revolution. Early in the sixteenth century, Polish astronomer Nicolaus Copernicus's (1473–1543) reassertion of heliocentric theory sparked an intense interest in broad quantification of nature that eventually allowed German astronomer and mathematician Johannes Kepler (1571–1630) to develop laws of planetary motion. In addition to his fundamental astronomical discoveries, Italian astronomer and physicist Galileo Galilei (1564–1642) made concerted studies of the motion of bodies that subsequently inspired seventeenth century English physicist and mathematician Sir Isaac Newton's (1642–1727) development of the laws of motion and gravitation in his influential 1687 work, Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy).
Following the Principia, scientists embraced empiricism during an Age of Enlightenment. Practical advances spurred by the beginning of an industrial revolution resulted in increasingly sophisticated instrumentation that allowed scientists to make exquisitely delicate measurements and calculations regarding physical phenomena. Concurrent advances in mathematics allowed the development of sophisticated and quantifiable models of nature. More tantalizingly for physicists, many of these mathematical insights ultimately pointed toward a physical reality not necessarily limited to three dimensions and not necessarily absolute in time and space.
Nineteenth century experimentation culminated in the formulation of Scottish physicist James Clerk Maxwell's (1831–1879) unification of concepts regarding electricity, magnetism, and light in his four famous equations describing electromagnetic waves.
During the first half of the twentieth century, these insights found full expression in the advancement of quantum and relativity theory. Scientists, mathematicians, and philosophers united to examine and explain the innermost workings of the universe—both on the scale of the very small subatomic world and on the grandest of cosmic scales.
By the dawn of the twentieth century more than two centuries had elapsed since Newton's Principia set forth the foundations of classical physics. In 1905, in one grand and sweeping Special Theory of Relativity, German-born physicist Albert Einstein (1879–1955) provided an explanation for seemingly conflicting and counter-intuitive experimental determinations of the constancy of the speed of light, along with its consequences: length contraction, time dilation, and the increase of mass with speed. A scant decade later, Einstein once again revolutionized concepts of space, time, and gravity with his General Theory of Relativity.
Prior to Einstein's revelations, German physicist Max Planck (1858–1947) proposed that atoms absorb or emit electromagnetic radiation in discrete units of energy termed quanta. Although Planck's quantum concept seemed counter-intuitive to well-established Newtonian physics, quantum mechanics accurately described the relationships between energy and matter on atomic and subatomic scales and provided a unifying basis to explain the properties of the elements.
Concepts regarding the stability of matter also proved ripe for revolution. Far from the initial assumption of the indivisibility of atoms, advancements in the discovery and understanding of radioactivity culminated in a renewed quest to find the most elemental and fundamental particles of nature. In 1913, Danish physicist Niels Bohr (1885–1962) published a model of the hydrogen atom that, by incorporating quantum theory, dramatically improved existing classical Copernican-like atomic models. The quantum leaps of electrons between orbits proposed by the Bohr model accounted for Planck's quanta and also explained many important properties of the photoelectric effect described by Einstein.
More mathematically complex atomic models were to follow, based on the work of the French physicist Louis Victor de Broglie (1892–1987), Austrian physicist Erwin Schrödinger (1887–1961), German physicist Max Born (1882–1970), and English physicist P. A. M. Dirac (1902–1984). More than simple refinements of the Bohr model, these models represented fundamental advances in defining the properties of matter, especially the wave nature of subatomic particles. By 1950, the roster of elementary constituents of atoms had grown dramatically in number and complexity, and matter itself was ultimately to be understood as a synthesis of wave and particle properties.
Against a maddeningly complex backdrop of politics and fanaticism that produced two World Wars within the first half of the twentieth century, scientific knowledge and skill became more than a strategic advantage. The deliberate misuse of science scattered poisonous gases across World War I battlefields at the same time that advances in physical science (e.g., x-ray diagnostics) provided new ways to save lives. The dark abyss of World War II gave birth to the atomic age: the Manhattan Project created the most terrifying of weapons, one that could, in a blinding flash, forever change the course of history for all peoples of the earth.
The insights of relativity theory and quantum theory also stretched the methodology of science. No longer would science be mainly an exercise in inductively applying the results of experimental data. Experimentation, instead of being only a genesis for theory, became a ground for testing the apparent truths unveiled by increasingly mathematical models of the universe. Moreover, with the formulation of quantum mechanics, physical phenomena could no longer be explained in terms of deterministic causality, that is, as the result of an at least theoretically measurable chain of causes and effects. Instead, physical phenomena were described as the result of fundamentally statistical, indeterminate (unpredictable) processes.
The development of quantum theory, especially the delineation of Planck's constant and the articulation of the Heisenberg uncertainty principle, carried profound philosophical implications regarding limits on knowledge. Modern cosmological theory (i.e., theories regarding the nature and formation of the universe) provided insight into the evolutionary stages of stars (e.g., neutron stars, pulsars, black holes, etc.) and with it an understanding of nucleosynthesis (the formation of elements) that forever linked mankind to the lives of the very stars that had once sparked the intellectual journey toward an understanding of nature based upon physical laws.
With specific regard to geology, the twentieth century development of geophysics and advances in sensing technology made possible the revolutionary development of plate tectonic theory.
See also Earth Science; History of exploration I (Ancient and classical); History of exploration II (Age of exploration); History of exploration III (Modern era); History of geo-science: Women in the history of geoscience; History of manned space exploration
"Physics." World of Earth Science. Encyclopedia.com. (November 24, 2017). http://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/physics
See also ASTRONOMY; COSMOLOGY; GRAVITY; RADIATION.
anisotropy - the state or quality of having different properties along different axes. See also BOTANY. —anisotropic, adj.
astatism - the condition of constant, uninterrupted variability of direction or position. —astatic, adj.
- the theory of atoms.
biophysics - the branch of physics that deals with living things. —biophysicist, n. —biophysical, adj.
- the study of heat and electricity.
crystallography - the science that studies crystallization and the forms and structures of crystals. —crystallographer, n. —crystallographic, crystallographical, adj.
diamagnetism - a property of certain materials of being repelled by both poles of a magnet, thus taking a position at right angles to the magnet’s lines of influence.
dynamometry - the measurement of energy used in doing work. —dynamometer, n. —dynamometric, dynamometrical, adj.
electrotropism - orientation in relation to a current of electricity. —electrotropic, adj.
energetics - the branch of physics that studies energy and its transformation. —energeticist, n. —energeticistic, adj.
energism - a doctrine that asserts that certain phenomena can be explained in terms of energy. —energist, n.
faradism - the application of alternating electrical current for therapeutic purposes. —faradic, adj.
focimetry - the determination of focal length. —focimetric, adj.
franklinism - static electricity. Also called Franklinic electricity.
galvanism - a direct electrical current, especially one produced by chemical action. —galvanic, adj.
galvanology - a work on the production of electric current by chemical means. —galvanologist, n. —galvanological, adj.
galvanometry - the measurement of the strength of electric currents, by means of a galvanometer. —galvanometric, galvanometrical, adj.
geophysics - the physics of the earth, including oceanography, volcanology, seismology, etc. —geophysicist, n. —geophysical, adj.
gyrostatics - the study of the behavior of rotating solid bodies. —gyrostatic, adj. —gyrostatically, adv.
halology - Chemistry. the study of salts. Also called halotechny.
homeomorphism - the similarity of the crystalline forms of substances that have different chemical compositions. —homeomorphous, adj.
hydraulics - 1. the science concerned with the laws governing water and other liquids in motion and their engineering applications. 2. applied or practical hydrodynamics.
hydrodynamics - the study of forces that act on or are produced by liquids. Also called hydromechanics. —hydrodynamic, hydrodynamical, adj.
hydrokinetics - the branch of hydrodynamics dealing with the laws of gases or liquids in motion. —hydrokinetic, adj.
- hydrodynamics. —hygrometric, hygrometrical, adj.
hydrostatics - the study of the equilibrium and pressure of liquids. —hydrostatician, n. —hydrostatic, hydrostatical, adj.
hygrometry - the branch of physics concerned with the measurement of moisture in the air. —hygrometric, hygrometrical, adj.
isomorphism - close similarity between the forms of different crystals. See also BIOLOGY. —isomorph, n. —isomorphic, adj.
kinematics - the branch of mechanics that deals with motion without reference to force or mass. —kinematic, kinematical, adj.
- the study of magnets and magnetism.
monosymmetry - the state exhibited by a crystal, having three unequal axes with one oblique intersection; the state of being monoclinic. See also BIOLOGY. —monosymmetric, monosymmetrical, adj.
- the technology of optical instruments and apparatus.
oscillography - the study of the wave-forms of changing currents, voltages, or any other quantity that can be translated into electricity, as light or sound waves. —oscillographic, adj.
osmometry - the measurement of osmotic pressure, or the force a dissolved substance exerts on a semipermeable membrane through which it cannot pass when separated by it from a pure solvent. —osmometric, adj.
physicism - the doctrine that explains the universe in physical terms.
physics - the science that studies matter and energy in terms of motion and force. —physicist, n. —physical, adj.
pleochroism - a property of some crystals of showing variation in color when viewed in transmitted light or from different directions. Also called pleochromatism, polychroism, polychromatism. —pleochroic, pleochromatic, adj.
plenism - the theory that nature contains no vacuums. Cf. vacuism. —plenist, n.
- pleochromatism, polychroism, polychromatism
pyrology - the study of fire and heat, especially with regard to chemical analysis.
radiometry - the measurement of radiant energy by means of a radiometer, an instrument composed of vanes which rotate at speeds proportionate to the intensity of the energy source. —radiometric, adj.
radiophony - the transformation of radiant energy into sound.
spectrobolometry - measurement of the distribution of energy in a spectrum by means of a spectrobolometer, an instrument combining a bolometer and a spectroscope. —spectrobolometric, adj.
statics - the branch of mechanics or physics that deals with matter and forces in equilibrium. —statical, adj.
- an apparatus for illustrating in graphic form the composition of two simple harmonic motions at right angles.
telemechanics - the science of operating or controlling mechanisms by remote control, especially by radio.
thermionics - the science or study of the emission of electrons from substances at high temperatures. —thermionic, adj.
thermostatics - the science or study of the equilibrium of heat.
tribology - the science and technology of friction, lubrication, and wear.
trichroism - a property, peculiar to certain crystals, of transmitting light of three different colors when viewed from three different directions. Also called trichromatism. —trichroic, adj.
trichromatism - 1. the condition of having, using, or combining three colors. 2. trichroism. —trichromatic, adj.
trochilics - Rare. the science of rotary motion. —trochilic, adj.
vacuism - the theory that nature permits vacuums. Cf. plenism. —vacuist, n.
voltaism - electricity generated by chemical means, as in a cell or battery; galvanism.
"Physics." -Ologies and -Isms. Encyclopedia.com. (November 24, 2017). http://www.encyclopedia.com/education/dictionaries-thesauruses-pictures-and-press-releases/physics
Physics is the science that deals with matter and energy and with the interaction between them. Perhaps you would like to determine how best to aim a rifle in order to hit a target with a bullet. Or you want to know how to build a ship out of steel and make sure that it will float. Or you plan to design a house that can be heated just with sunlight. Physics can be used in answering any of these questions.
Physics is one of the oldest of the sciences. It is usually said to have begun with the work of Italian scientist Galileo Galilei (1564–1642) in the first half of the seventeenth century. Galileo laid down a number of basic rules as to how information about the natural world should be collected. For example, the only way to obtain certain knowledge about the natural world, he said, is to carry out controlled observations (experiments) that will lead to measurable quantities. The fact that physics today is based on careful experimentation, measurements, and systems of mathematical analysis reflects the basic teachings of Galileo.
Classical and modern physics
The field of physics is commonly subdivided into two large categories: classical and modern physics. The dividing line between these two subdivisions can be drawn in the early 1900s. During that period, a number of revolutionary new concepts about the nature of matter were proposed. Included among these concepts were Einstein's theories of general and special relativity, Planck's concept of the quantum, Heisenberg's principle of indeterminacy, and the concept of the equivalence of matter and energy.
In general, classical physics can be said to deal with topics on the macroscopic scale, that is on a scale that can be studied with the largely unaided five human senses. Modern physics, in contrast, concerns the nature and behavior of particles and energy at the submicroscopic level. The term submicroscopic refers to objects—such as atoms and electrons—that are too small to be seen even with the very best microscope. One of the interesting discoveries made in the early 1900s was that the laws of classical physics generally do not hold true at the submicroscopic level.
Perhaps the most startling discovery made during the first two decades of the twentieth century concerned causality. Causality refers to the belief in cause-and-effect; that is, classical physics taught that if A occurs, B is certain to follow. For example, if you know the charge and mass of an electron, you can calculate its position in an atom. This kind of cause-and-effect relationship was long regarded as one of the major pillars of physics.
Words to Know
Determinism: The notion that a known effect can be attributed with certainty to a known cause.
Energy: The ability to do work.
Matter: Anything that has mass and takes up space.
Mechanics: The science that deals with energy and forces and their effects on bodies.
Submicroscopic: Levels of matter that cannot be directly observed by the human senses, even with the best of instruments; the level of atoms and electrons.
What physicists learned in the early twentieth century is that nature is not really that predictable. One could no longer be certain that A would always cause B. Instead, physicists began talking about probability, the likelihood that A would cause B. In drawing pictures of atoms, for example, physicists could no longer talk about the path that electrons do take in atoms. Instead, they began to talk about the paths that electrons probably take (with a 95 percent or 90 percent or 80 percent probability).
Divisions of physics
Like other fields of science, physics is commonly subdivided into a number of more specific fields of research. In classical physics, those fields include mechanics; thermodynamics; sound, light, and optics; and electricity and magnetism. In modern physics, some major subdivisions include atomic, nuclear, high-energy, and particle physics.
The classical divisions. Mechanics is the oldest field of physics. It is concerned with the description of motion and its causes. Many of the basic concepts of mechanics grew out of the work of English physicist Isaac Newton (1642–1727) in about 1687. Thermodynamics sprang from efforts to develop an efficient steam engine in the early 1800s. The field deals with the nature of heat and its connection with work.
Sound, optics, electricity, and magnetism are all divisions of physics in which the nature and movement of waves are important. The study of sound is also related to practical applications that can be made of this form of energy, as in radio communication and human speech. Similarly, optics deals not only with the reflection, refraction, diffraction, interference, polarization, and other properties of light, but also with the ways in which these principles have practical applications in the design of tools and instruments such as telescopes and microscopes.
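Refraction, one of the light behaviors listed above, obeys Snell's law, n₁ sin θ₁ = n₂ sin θ₂, the principle behind the lenses in those telescopes and microscopes. A brief sketch; the refractive indices are standard textbook values:

```python
import math

def refraction_angle(theta1_deg, n1, n2):
    """Angle of the refracted ray by Snell's law, or None past the critical angle."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if s > 1.0:
        return None  # total internal reflection: no refracted ray
    return math.degrees(math.asin(s))

print(refraction_angle(30.0, 1.00, 1.33))  # air to water: bends toward the normal
print(refraction_angle(60.0, 1.50, 1.00))  # glass to air at 60 degrees: None
```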
The study of electricity and magnetism focuses on the properties of particles at rest and on the properties of those particles in motion. Thus, the field of static electricity examines the forces that exist between charged particles at rest, while current electricity deals with the movement of electrical particles.
The modern divisions. In the area of modern physics, nuclear and atomic physics involve the study of the atomic nucleus and its parts, with special attention to changes that take place (such as nuclear decay) in the atom. Particle and high-energy physics, on the other hand, focus on the nature of the fundamental particles of which the natural world is made. In these two fields of research, very powerful, very expensive tools such as linear accelerators and synchrotrons (atom-smashers) are required to carry out the necessary research.
Interrelationship of physics to other sciences
One trend in all fields of science over the past century has been to explore ways in which the five basic sciences (physics, chemistry, astronomy, biology, and earth sciences) are related to each other. This trend has led to another group of specialized sciences in which the laws of physics are used to interpret phenomena in other fields. Astrophysics, for example, is a study of the composition of astronomical objects—such as stars—and the changes that they undergo. Physical chemistry and chemical physics, on the other hand, are fields of research that deal with the physical nature of chemical molecules. Biophysics, as another example, is concerned with the physical properties of molecules essential to living organisms.
[See also Quantum mechanics; Relativity, theory of]
"Physics." UXL Encyclopedia of Science. Encyclopedia.com. (November 24, 2017). http://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/physics-0
In some ways we would not have computers today were it not for physics. Furthermore, the needs of physics have stimulated computer development at every step. This all started due to one man's desire to eliminate needless work by transferring it to a machine.
Charles Babbage (1791–1871) was a well-to-do Englishman attending Cambridge University in the early 1800s. One day he was nodding off over a book containing tables of astronomical phenomena. He fancied that he would become an astronomical mathematician. The motion of heavenly bodies was, of course, governed by the laws of physics. For a moment, he thought of having the tables calculated automatically. This idea came up several times in succeeding years until he finally designed a calculator, the Difference Engine, that could figure the numbers and print the tables. A version of the Difference Engine, built by the Swedish engineers Georg and Edvard Scheutz, found its way to the Dudley Observatory in Albany, New York, where it merrily cranked out numbers until the 1920s. Babbage followed this machine with a programmable successor, the Analytical Engine, which, though never built, is considered by many to be the first design for a modern computer.
In the late 1800s, the mathematician and scientist Lord Kelvin (William Thomson) (1824–1907) tried to understand wave phenomena by building a mechanical analog computer that modeled the tides on English coasts, continuing the nineteenth-century thread of applying mechanical computation to physical phenomena.
In the 1920s, physicist Vannevar Bush (1890–1974) of the Massachusetts Institute of Technology built a Differential Analyzer that used a combination of mechanical and electrical parts to create an analog computer useful for many problems. The Differential Analyzer was especially suited for physics calculations, as its output was a smooth curve showing the results of mathematical modeling. This curve was very accurate, more so than the slide rules that were the ubiquitous calculators in physics and engineering in the first seven decades of the twentieth century.
Beginning during World War II and finishing just after the war ended, the Moore School of the University of Pennsylvania built an electronic digital computer for the U.S. Army. One of the first problems run on it was a model of a nuclear explosion. The advent of digital computers opened up whole new realms of research for physicists.
Physicists like digital computers because they are fast: large problems become tractable, and calculations that are tedious and repetitious by hand can be transferred to the machine. Some of the first subroutines, blocks of computer code executed many times during the run of a program, were inspired by the needs of physics.
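The idea of a subroutine can be sketched in a few lines of modern Python. The projectile-range formula below is a standard textbook illustration chosen for this sketch, not an example from the period: one block of code is written once and then called many times.

```python
import math

def projectile_range(speed, angle_deg, g=9.81):
    """Horizontal range of a projectile on flat ground, ignoring air resistance."""
    angle = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * angle) / g

# One subroutine, invoked repeatedly -- the pattern that once meant
# hours of identical hand calculation.
ranges = [projectile_range(30.0, a) for a in range(15, 76, 15)]
```

Each call repeats the same arithmetic with different inputs, which is exactly the kind of boring, repetitious work the text describes being handed to the computer.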
Although digital computers excel at repetitious tasks, approximation and visualization have had the largest effect on physicists using electronic computers. Analog machines, both mechanical and electronic, produce output that directly models real-world curves and other shapes representing certain kinds of mathematics. Calculating the mathematical solution of physical problems on digital computers meant resorting to approximation. For example, the area under a curve (the integral) is approximated by dividing the space below the curve into rectangles, figuring out each rectangle's area, and adding the small areas together to find the one big area. As computers got faster, such approximations were built from an ever-increasing number of ever-smaller rectangles.
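The rectangle approximation just described can be written out directly. This minimal Python sketch uses midpoint rectangles, one common variant, and shows that more, smaller rectangles give a better answer:

```python
def integrate_rectangles(f, a, b, n):
    """Approximate the area under f on [a, b] with n equal-width rectangles."""
    width = (b - a) / n
    # Evaluate f at the midpoint of each rectangle, then add the small areas.
    return sum(f(a + (i + 0.5) * width) * width for i in range(n))

# The exact area under y = x^2 on [0, 1] is 1/3; the approximation
# improves as the rectangles shrink.
rough = integrate_rectangles(lambda x: x * x, 0.0, 1.0, 10)
fine = integrate_rectangles(lambda x: x * x, 0.0, 1.0, 10000)
```

With 10 rectangles the error is already small; with 10,000 it is smaller still, which is the behavior the paragraph attributes to faster computers.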
Visualization is probably the physicist's task most aided by computers. The outputs of Lord Kelvin's machine and the Differential Analyzer were drawn by pens connected to the computational components of the machine. The early digital computers could print rough curves, supplemented by cleaner curves drawn on a larger scale by big plotters. Interestingly, the plotters produced what appeared to be smooth lines by drawing numerous tiny straight lines, just as a newspaper photograph is really a large number of gray points of different shades. Even these primitive drawing tools were a significant advance: they permitted physicists to see much more than could be calculated by hand.
In the 1960s, physicists took millions of photographs of sub-atomic particle collisions. These were then processed with human intervention. A "scanner" (usually a student) using a special machine would have the photographs of the collisions brought up one by one. The scanner would use a trackball to place a cursor over a sub-atomic particle track. At each point the scanner would press a button, which then allowed the machine to punch the coordinates on a card. These thousands upon thousands of cards were processed to calculate the mass and velocity of the various known and newly discovered particles. These were such big jobs that they were often run on a computer overnight. Physicists could use the printed output of batch-type computer systems to visualize mentally what was really happening. This is one of the first examples of truly large-scale computing. In fact, most of the big calculations done over the first decades of electronic digital computing had some relationship to physics, including atomic bomb models, satellite orbits, and cyclotron experiments.
The advent of powerful workstations and desktop systems with color displays ended the roughness and guessing of early forms of visualization. Now, many invisible phenomena, such as fields, waves, and quantum mechanics, can be modeled accurately in full color. This is helping to eliminate erroneous ideas inspired by the poor visualizations of years past. Also, these computer game–quality images can be used to train the next generation of physics students and their counterparts in chemistry and biology classes, making tangible what was invisible before.
Finally, the latest and perhaps most pervasive of physics-inspired computer developments is the World Wide Web. It was first developed as a way of easily sharing data, including graphics, among researchers in the European particle physics community and interested colleagues outside of it. So whenever a browser is launched, 200 years of physics-driven computer development is commemorated.
See also Astronomy; Data Visualization; Mathematics; Navigation.
James E. Tomayko
Merrill, John R. Using Computers in Physics. Boston: Houghton Mifflin Company, 1976.
"Physics." Computer Sciences. Encyclopedia.com. (November 24, 2017). http://www.encyclopedia.com/computing/news-wires-white-papers-and-books/physics
Physics is the branch of scientific investigation that focuses its attention on fundamental concepts, patterns, and relationships involving matter, energy, space, and time. Other natural sciences, such as chemistry, biology, geology, and astronomy, also deal with these categories in their investigation of material systems like atoms, molecules, life processes, organisms, planets, stars, and galaxies, but physics is concerned with the most basic and universal principles that apply to all of these diverse systems.
It is sometimes convenient to divide physics into several different arenas of concern, such as mechanics (the study of motion), electromagnetism and optics, thermodynamics, quantum physics, atomic physics, nuclear physics, particle physics, and relativity (the study of space, time, and gravity).
Classical mechanics is the study of motion in the manner established by Isaac Newton in the seventeenth century. Among its major contributions is a fruitful method for describing the cause-effect relationship for motion in a quantifiable manner. A force, like the familiar push or pull, functions as the cause of acceleration (any change in the speed or direction of motion), which is its effect. Another major contribution of Newton was his concept and description of the force of gravity that is experienced and exerted by every object possessing the quality of mass. The gravitational force that causes apples to fall earthward is the same kind of force that steers the moon in its orbit around the Earth and the planets in their orbits around the sun.
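The two Newtonian ideas in this paragraph, force as the cause of acceleration and universal gravitation, combine in a short calculation. This sketch (an illustration added here, not part of the original entry) uses standard reference values for the gravitational constant and the Earth's mass and radius to recover the familiar acceleration of a falling apple:

```python
G = 6.674e-11  # gravitational constant, N*m^2/kg^2

def gravitational_force(m1, m2, r):
    """Newton's law of universal gravitation: F = G * m1 * m2 / r^2."""
    return G * m1 * m2 / r ** 2

def acceleration(force, mass):
    """Newton's second law rearranged: a = F / m."""
    return force / mass

EARTH_MASS = 5.972e24    # kg
EARTH_RADIUS = 6.371e6   # m

# Force on, and acceleration of, a 1 kg apple at the Earth's surface.
f = gravitational_force(1.0, EARTH_MASS, EARTH_RADIUS)
a = acceleration(f, 1.0)  # roughly 9.8 m/s^2
```

The same two functions, fed the moon's mass and orbital radius instead, describe the force that steers the moon, which is the point of Newton's unification.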
Electromagnetism encompasses all phenomena in which electric and magnetic fields play a role. In classical physics, fields may be thought of as qualities of space that lead objects with certain properties to experience a force. Any object possessing the property of electric charge, for example, will experience a force in the presence of an electric field. Electromagnetic radiation (light, X-rays, radio waves) may be understood as variations in electric and magnetic fields that travel at the characteristic speed of three hundred million meters per second through space.
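Both claims in this paragraph reduce to simple formulas; the brief sketch below (with illustrative values chosen for this example, not taken from the entry) shows the force on a charge in an electric field and what the characteristic speed of light implies for travel times:

```python
def electric_force(charge, field):
    """Force (newtons) on a charged object in an electric field: F = q * E."""
    return charge * field

SPEED_OF_LIGHT = 3.0e8  # meters per second, the speed of electromagnetic radiation

# Force on one electron's worth of charge (1.6e-19 C) in a 100 N/C field.
f = electric_force(1.6e-19, 100.0)

# Light crossing the roughly 1.5e11 m from the sun to the Earth
# takes about 500 seconds, a bit over eight minutes.
sun_to_earth = 1.5e11 / SPEED_OF_LIGHT
```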
Thermodynamics is concerned with the manner in which energy, especially heat energy, affects the state of a system and its interaction with its environment. Energy, often characterized as the capacity to do work, appears in a diversity of forms and may be changed in either form or location as a consequence of some physical process. In all processes, however, the sum of the energy possessed by a system and its environment remains constant. This principle, called the First Law of Thermodynamics, or the conservation of energy, is thought to apply without exception to all physical phenomena.
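The conservation principle can be checked numerically for a simple case: a dropped object trades potential energy for kinetic energy as it falls, while the sum of the two stays constant. This is a textbook illustration added here, not an example from the entry:

```python
def energies(mass, start_height, height, g=9.81):
    """Potential and kinetic energy (joules) of a dropped object at a given
    height, assuming all lost potential energy has become kinetic energy."""
    potential = mass * g * height
    kinetic = mass * g * (start_height - height)
    return potential, kinetic

# A 2 kg object dropped from 10 m: energy changes form as it falls,
# but the total at every height is the same.
totals = [sum(energies(2.0, 10.0, h)) for h in (10.0, 7.5, 5.0, 2.5, 0.0)]
```

Every entry in `totals` equals the starting potential energy, which is exactly what the First Law demands of this idealized system.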
Quantum theory describes the structure and behavior of systems like atoms, atomic nuclei, and molecules. Extremely small structures behave in a manner different from the predictions of classical mechanics. The quantities of energy possessed by a system or exchanged between systems, for instance, are restricted to certain values only. Furthermore, the outcome of many processes is open to diverse options, each outcome having a calculable probability of occurrence.
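A standard example of such restricted energy values, the "particle in a box" from introductory quantum mechanics (an illustration chosen for this sketch, not drawn from the entry), makes the restriction concrete: a confined particle may hold only energies proportional to the squares of whole numbers.

```python
PLANCK = 6.626e-34  # Planck's constant, joule-seconds

def box_energy(n, mass, length):
    """Allowed energy E_n = n^2 * h^2 / (8 * m * L^2) for a particle
    confined to a one-dimensional box of the given length."""
    return n ** 2 * PLANCK ** 2 / (8 * mass * length ** 2)

ELECTRON_MASS = 9.109e-31  # kilograms

# An electron confined to an atom-sized (1e-10 m) box may hold only these
# energies, which grow as 1, 4, 9, ... times the lowest level.
levels = [box_energy(n, ELECTRON_MASS, 1.0e-10) for n in (1, 2, 3)]
```

No energy between these levels is allowed, in sharp contrast to the continuous range classical mechanics would predict.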
Relativity theory provides a framework for speaking of the interactive relationships among space, time, mass, and gravity. Special Relativity describes the way in which the experience of time and space are interrelated, while General Relativity focuses its attention on the interrelationships among mass, space, gravity, and motion.
See also Cosmology, Physical Aspects
Asimov, Isaac. Understanding Physics. New York: Barnes and Noble Books, 1988.
Kuhn, Karl. Basic Physics: A Self-Teaching Guide, 2nd edition. New York: Wiley, 1996.
Halliday, David; Resnick, Robert; and Walker, Jearl. Fundamentals of Physics, 6th edition. New York: Wiley, 2000.
Feynman, Richard P.; Leighton, Robert B.; and Sands, Matthew L. The Feynman Lectures on Physics. Boston: Addison-Wesley, 1994.
Howard J. Van Till
"Physics." Encyclopedia of Science and Religion. Encyclopedia.com. (November 24, 2017). http://www.encyclopedia.com/education/encyclopedias-almanacs-transcripts-and-maps/physics
"physics." World Encyclopedia. Encyclopedia.com. (November 24, 2017). http://www.encyclopedia.com/environment/encyclopedias-almanacs-transcripts-and-maps/physics
phys·ics / ˈfiziks/ • pl. n. [treated as sing.] the branch of science concerned with the nature and properties of matter and energy. The subject matter of physics, distinguished from that of chemistry and biology, includes mechanics, heat, light and other radiation, sound, electricity, magnetism, and the structure of atoms. ∎ the physical properties and phenomena of something: the physics of plasmas.
"physics." The Oxford Pocket Dictionary of Current English. Encyclopedia.com. (November 24, 2017). http://www.encyclopedia.com/humanities/dictionaries-thesauruses-pictures-and-press-releases/physics