The thought that the bewildering variety of the world might be the result of many different arrangements of certain simple kinds of basic stuff is a very old one. In the sixth century b.c.e., various pre-Socratic philosophers explored such ideas. Thales thought that the fundamental entity might be water, while Anaximenes favored air. Such attempts were both insightful and hopelessly premature. A more sophisticated notion was the atomism introduced by Democritus a century later and promoted with considerable literary skill by the Latin poet Lucretius in the first century b.c.e.
Progress of a recognizably modern kind began with chemist John Dalton's (1766–1844) atomic theory, which introduced in 1803 the notion of atomic weights, derived principally from the properties of gases. Chemist and physician William Prout's (1785–1850) observation in 1815 that most of these weights were near integer multiples of the atomic weight of hydrogen led to what one might call the first true theory of elementary particles, with hydrogen as the conjectured fundamental building block.
In 1897, physicist Joseph Thomson (1856–1940) convincingly demonstrated that there are light, electrically negative particles (subsequently called electrons) that are constituents of what, until then, had been considered to be the indivisible atom. In 1911, physicist Ernest Rutherford (1871–1937) successfully interpreted experiments in which projectiles called alpha particles were significantly deflected by a thin gold foil as showing that the positive charge in the atom was concentrated at its center. Rutherford had discovered the nucleus.
In the rest of the twentieth century there followed a series of discoveries, each of which led in turn to a yet deeper conception of the structure of matter, expressed in terms of still smaller constituents playing the role of "elementary" particles. Each phase of these investigations, often pictured metaphorically as peeling another layer off the nuclear "onion," had a sequential form. The process of discovery took place in two parts. The first half consisted in the revelation of an increasing proliferation of putative elementary entities. An example would be the varieties of different nuclei generating the chemical properties of the ninety-two elements of the periodic table. There is a strong conviction in the human mind (exemplified as much by the pre-Socratic philosophers as by twentieth-century physicists) that the fundamental structure of matter should take a simple form, elegant and economical in its character. Proliferation threatens this conviction, but rescue comes in the second half of the process of discovery. Patterns are discerned linking together the proliferating elements, and these patterns are interpreted as reflecting the ways in which a small number of yet more fundamental constituents can be combined. In this way the next level of structure is revealed. It seems fundamental enough until, in turn, it too begins to proliferate, and the cycle begins again.
Thus, nuclei were first recognized as being made up of two kinds of nuclear particles, protons and neutrons. Then experimentalists began to discover many short-lived cousins of these nuclear particles and a proliferation began to threaten. However, the association of these different forms of nuclear matter into certain patterns (called the eightfold way by its most insightful investigator, Murray Gell-Mann [1929–2019]) eventually led to the identification of the quark level in the structure of matter.
Consideration of symmetry provides an important mathematical tool for the understanding of pattern formation. For example, the beautiful pattern of a snowflake is due to the sixfold symmetry that leaves it unchanged under a rotation of sixty degrees. It turned out that the patterns of nuclear matter were also generated by symmetry principles, though principles of a more abstract kind than those given by simple rotations in space. Gell-Mann identified the relevant symmetry as being associated with what mathematicians call the group SU (3). The SU (3) structure involves certain kinds of transformation applied to a set of three basic objects. Such a mathematical fact did not necessarily imply a physical counterpart but, if it did, the corresponding physical entities would generate the next layer in the nuclear onion. Gell-Mann named these entities quarks.
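The defining property of a symmetry, that the transformed object coincides with the original, can be checked numerically. The following is a minimal, purely illustrative Python sketch of the snowflake's sixfold rotational symmetry (not, of course, the more abstract SU(3) transformations themselves):

```python
import cmath
import math

# Vertices of a regular hexagon as complex numbers (a stand-in "snowflake").
vertices = {cmath.exp(1j * math.pi / 3 * k) for k in range(6)}

# Rotating by sixty degrees is multiplication by exp(i*pi/3).
rotation = cmath.exp(1j * math.pi / 3)
rotated = {rotation * v for v in vertices}

def same_set(a_set, b_set, tol=1e-9):
    """Compare two sets of complex points up to floating-point rounding."""
    return all(any(abs(a - b) < tol for b in b_set) for a in a_set)

# The rotated vertex set coincides with the original: sixfold symmetry.
print(same_set(vertices, rotated))  # True
```

The rotation permutes the vertices among themselves, which is exactly what it means for the figure to be "unchanged" by the transformation.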
There was initially doubt about the physical reality of quarks. The theory requires them to have fractional electric charge (2/3 and -1/3 of the charge on the proton), and no such particles have ever been observed in nature. However, when indirect evidence of their existence came to light, it proved to be very convincing. The experiments involved what is called deep inelastic scattering. This is the analogue of the experiments that enabled Rutherford to discover the nucleus, but conducted at much higher energy. Projectiles, such as electrons, when scattered off protons and neutrons, were discovered sometimes to "bounce back" in just the way that they would if they were hitting pointlike quarks lying within these nuclear particles. Physicists could eventually understand why projectiles behaved this way, but in the case of quarks there was a new feature without any precedent in physical experience. However strong the impact of the projectiles, it never proved powerful enough to actually eject a single quark. Eventually, physicists were forced to conclude that quarks were "confined," that is to say, the forces that bound them inside protons and neutrons were always strong enough to overcome the effect of the impact, however great that might be. No one has ever seen an individual quark.
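The fractional charges can be checked against the integer charges of the nuclear particles. In the standard quark-model assignments (a fact not spelled out in the passage above: a proton is two up quarks and one down, a neutron one up and two down), the fractions sum correctly:

```python
from fractions import Fraction

# Quark charges in units of the proton charge (standard quark-model values).
charge = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}

def total_charge(quarks):
    """Sum the fractional charges of a combination of quarks."""
    return sum(charge[q] for q in quarks)

print(total_charge("uud"))  # proton:  1
print(total_charge("udd"))  # neutron: 0
```

Exact rational arithmetic via `Fraction` avoids any floating-point rounding in the sums.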
The forces that produce quark confinement are generated by the exchange of further particles that, in the relentlessly jokey terminology endemic in particle physics, are called gluons. Further discoveries of exotic kinds of nuclear matter increased the number of types of quark from three to six. These ideas, together with others of a more technical character, constitute what has come to be called the Standard Model.
Only one piece in the jigsaw that defines the Standard Model is still missing. This is the particle proposed by Peter Higgs as the source of mass within the theory. Particle accelerators can yield energies that are just on the border of where this Higgs particle (as it is called) may be expected to show up. Establishing its existence would be extremely satisfying.
The Standard Model describes very well the properties of subnuclear matter but, with its six varieties of quark and with other somewhat inelegant elaborations, there is an air of proliferation about it. Most physicists, therefore, do not feel a final satisfaction with the Standard Model. There are two ways in which one might hope eventually to go beyond it. One is the discovery of a Grand Unified Theory (GUT).
In terms of directly observed phenomena there seem to be four basic forces of nature: strong nuclear forces (holding nuclei together); electromagnetic forces (holding atoms and bulk matter together); weak nuclear forces (causing matter to decay); and gravity. One of the triumphs of the Standard Model was to show that two of these forces, electromagnetic and weak, are in reality aspects of a single phenomenon, a fact that becomes clear experimentally at very high energies. Physicists believe that at even higher energies (such as would be present in the very early universe) these two forces would unite with the strong nuclear force to give a GUT. The detailed form this theory might be expected to take has not been established.
At higher energies still, there is the possibility that gravity and the GUT unite. For technical reasons, however, a theory of this super-unified kind is even harder to formulate than a GUT. The best speculative prospect appears to be superstring theory (or its generalizations), in which quarks and electrons are pictured as modes of vibration of extremely tiny strings oscillating in many dimensions, all but four of which (space and time) are "rolled up" out of empirical sight.
Lessons for Theology
Particle physics is methodologically the most reductionist form of physics. It encourages the thought of constituent reductionism, implying that were a human being to be decomposed into bits and pieces, the ultimate result would be an immense collection of quarks, gluons, and electrons. This observation, however, by no means proves that human beings are nothing but collections of elementary particles, since such a decomposition would kill the person. In fact, quantum physics encourages an antireductionist stance, because it has been shown that there is a counterintuitive mutual entanglement of quantum entities, even when they are spatially separated (the EPR effect). It does not seem that even the subatomic world can be treated purely atomistically.
An important technique of discovery in fundamental physics has proved to be the search for equations endowed with the unmistakable quality of mathematical beauty. Paul Dirac (1902–1984), one of the founding figures of quantum mechanics, once expressed the opinion that it was more important to have mathematical beauty in one's equations than to have them fit experiment. Of course, he did not mean that empirical adequacy is irrelevant to physics, but apparent failure to fit experiment might be due to a number of reasons, such as making an incorrect approximation in solving the equations, or even to the experimental results themselves being wrong. But if the equations were ugly, there was really no hope, for ugliness ran counter to everything that experience of fundamental physical theory had led one to expect. Dirac made his own significant discoveries through just such a quest for mathematical beauty, and the same principle is the guiding strategy followed by the bold proponents of superstring theory. It seems that the physical world is not only rationally transparent to our enquiry, it is also rationally beautiful. Beneath the vast variety of everyday objects, at the subatomic level there is a fundamental structure that is intellectually exciting in its simplicity and profoundly satisfying in the elegance and economy of its order. The reward for doing particle physics is the sense of wonder at its discoveries. The theistic religious believer will readily see the mind of the Creator behind the rationally beautiful order of the physical world.
Another lesson one may learn from particle physics is that human powers of rational prevision are severely limited. Time and again, nature has proved surprising as it resists our prior expectations. In the 1950s, particle physicists who were attempting to make sense of certain weak decays faced profound difficulties. After much fruitless struggle, the situation was transformed and made intelligible when in 1956 two physicists, Tsung Dao Lee and Chen Ning Yang, proposed the abandonment of what had been a cherished belief of particle physicists. Until then, it had been an article of faith that there could be no intrinsic handedness in nature, meaning that fundamental processes should show no preference for right-handed versions over left-handed versions or—putting it another way—that the laws of physics seen in a mirror should look exactly the same as the laws of physics observed directly. This supposed property was called the conservation of parity, and it was believed to be a self-evident truth about nature. Lee and Yang showed that this was not so, a discovery for which they rightly and promptly received the Nobel Prize.
Particle physics teaches us that the physical world is extremely surprising. It would be strange if that were not also true of human encounter with the much deeper mystery of divine reality. Physicists do not favor the question "Is it reasonable?" with its tacit presumption that one knows beforehand what form rationality should take. Rather, they ask the more open question "What makes you think this might be the case?" Theology too can benefit from seeking belief motivated by experience rather than by a priori expectation.
Finally, particle physicists believe in unseen realities (quarks) because such a belief makes sense of great swathes of physical experience. For them, it is intelligibility that affords the clue to existence. This does not seem altogether different from the reasons for theology's belief in the unseen reality of God.
Greene, Brian. The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory. London: Jonathan Cape, 1999.
Pagels, Heinz R. The Cosmic Code: Quantum Physics as the Language of Nature. London: Michael Joseph, 1982.
Pais, Abraham. Inward Bound: Of Matter and Forces in the Physical World. Oxford: Oxford University Press, 1986.
Polkinghorne, John. The Particle Play: An Account of the Ultimate Constituents of Matter. Oxford: W. H. Freeman, 1979.
Polkinghorne, John. Rochester Roundabout: The Story of High Energy Physics. Harlow, UK: Longman; New York: W. H. Freeman, 1989.
Weinberg, Steven. Dreams of a Final Theory: The Search for the Fundamental Laws of Nature. London: Hutchinson, 1993.
particle accelerator, apparatus used in nuclear physics to produce beams of energetic charged particles and to direct them against various targets. Such machines, popularly called atom smashers, are needed to observe objects as small as the atomic nucleus in studies of its structure and of the forces that hold it together. Accelerators are also needed to provide enough energy to create new particles. Besides pure research, accelerators have practical applications in medicine and industry, most notably in the production of radioisotopes. A majority of the world's particle accelerators are situated in the United States, either at major universities or national laboratories. In Europe the principal facility is at CERN near Geneva, Switzerland; in Russia important installations exist at Dubna and Serpukhov.
Design of Particle Accelerators
There are many types of accelerator designs, although all have certain features in common. Only charged particles (most commonly protons and electrons, and their antiparticles; less often deuterons, alpha particles, and heavy ions) can be artificially accelerated; therefore, the first stage of any accelerator is an ion source to produce the charged particles from a neutral gas. All accelerators use electric fields (steady, alternating, or induced) to speed up particles; most use magnetic fields to contain and focus the beam. Meson factories (the largest of which is at the Los Alamos, N.Mex., Scientific Laboratory), so called because of their copious pion production by high-current proton beams, operate at conventional energies but produce much more intense beams than previous accelerators; this makes it possible to repeat early experiments much more accurately. In linear accelerators the particle path is a straight line; in other machines, of which the cyclotron is the prototype, a magnetic field is used to bend the particles in a circular or spiral path.
The early linear accelerators used high voltage to produce high-energy particles; a large static electric charge was built up, which produced an electric field along the length of an evacuated tube, and the particles acquired energy as they moved through the electric field. The Cockcroft-Walton accelerator produced high voltage by charging a bank of capacitors in parallel and then connecting them in series, thereby adding up their separate voltages. The Van de Graaff accelerator achieved high voltage by using a continuously recharged moving belt to deliver charge to a high-voltage terminal consisting of a hollow metal sphere. Today these two electrostatic machines are used in low-energy studies of nuclear structure and in the injection of particles into larger, more powerful machines. Linear accelerators can be used to produce higher energies, but this requires increasing their length.
Linear accelerators, in which there is very little radiation loss, are the most powerful and efficient electron accelerators; the largest of these, the Stanford linear accelerator (SLAC), completed in 1966, is 2 mi (3.2 km) long and produces 20-GeV electrons. (In particle physics, energies are commonly measured in millions, MeV, or billions, GeV, of electron-volts, eV.) SLAC is now used, however, not for particle physics but to produce a powerful X-ray laser. Modern linear machines differ from earlier electrostatic machines in that they use electric fields alternating at radio frequencies to accelerate the particles, instead of using high voltage. The acceleration tube has segments that are charged alternately positive and negative. When a group of particles passes through the tube, it is repelled by the segment it has left and is attracted by the segment it is approaching. Thus the final energy is attained by a series of pushes and pulls. Recently, linear accelerators have been used to accelerate heavy ions such as carbon, neon, and nitrogen.
In order to reach high energy without the prohibitively long paths required of linear accelerators, E. O. Lawrence proposed (1932) that particles could be accelerated to high energies in a small space by making them travel in a circular or nearly circular path. In the cyclotron, which he invented, a cylindrical magnet bends the particle trajectories into a circular path whose radius depends on the mass of the particles, their velocity, and the strength of the magnetic field. The particles are accelerated within a hollow, circular, metal box that is split in half to form two sections, each in the shape of the capital letter D. A radio-frequency electric field is impressed across the gap between the D's so that every time a particle crosses the gap, the polarity of the D's is reversed and the particle gets an accelerating "kick." The key to the simplicity of the cyclotron is that the period of revolution of a particle remains the same as the radius of the path increases because of the increase in velocity. Thus, the alternating electric field stays in step with the particles as they spiral outward from the center of the cyclotron to its circumference. However, according to the theory of relativity the mass of a particle increases as its velocity approaches the speed of light; hence, very energetic, high-velocity particles will have greater mass and thus less acceleration, with the result that they will not remain in step with the field. For protons, the maximum energy attainable with an ordinary cyclotron is about 10 million electron-volts.
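The constancy of the orbital period follows directly from the force balance: qvB = mv²/r gives an orbit radius r = mv/(qB), so the period T = 2πr/v = 2πm/(qB) does not depend on the speed. A brief sketch of that cancellation (the field strength here is an illustrative value, not a parameter of any particular machine):

```python
import math

Q = 1.602e-19   # proton charge, coulombs
M = 1.673e-27   # proton rest mass, kg
B = 1.5         # magnetic field in teslas (illustrative value)

def period(v):
    """Non-relativistic cyclotron period for a proton moving at speed v."""
    r = M * v / (Q * B)          # the orbit radius grows with speed...
    return 2 * math.pi * r / v   # ...but the period, 2*pi*m/(q*B), does not

slow, fast = period(1.0e6), period(1.0e7)
print(abs(slow - fast) < 1e-15)  # True: same period at both speeds
```

This velocity independence is what lets a fixed-frequency field stay in step with the spiraling particles, and its relativistic breakdown is exactly the limit the passage describes.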
Two approaches exist for exceeding the relativistic limit for cyclotrons. In the synchrocyclotron, the frequency of the accelerating electric field steadily decreases to match the decreasing angular velocity of the protons. In the isochronous cyclotron, the magnet is constructed so the magnetic field is stronger near the circumference than at the center, thus compensating for the mass increase and maintaining a constant frequency of revolution. The first synchrocyclotron, built at the Univ. of California at Berkeley in 1946, reached energies high enough to create pions, thus inaugurating the laboratory study of the meson family of elementary particles.
Further progress in physics required energies in the GeV range, which led to the development of the synchrotron. In this device, a ring of magnets surrounds a doughnut-shaped vacuum tank. The magnetic field rises in step with the proton velocities, thus keeping them moving in a circle of nearly constant radius, instead of the widening spiral of the cyclotron. The entire center section of the magnet is eliminated, making it possible to build rings with diameters measured in miles. Particles must be injected into a synchrotron from another accelerator. The first proton synchrotron was the cosmotron at Brookhaven (N.Y.) National Laboratory, which began operation in 1952 and eventually attained an energy of 3 GeV. The 6.2-GeV synchrotron (the bevatron) at the Lawrence Berkeley National Laboratory was used to discover the antiproton (see antiparticle).
The 500-GeV synchrotron at the Fermi National Accelerator Laboratory at Batavia, Ill., with a ring circumference of approximately 4 mi (6 km), was the most powerful accelerator in the world when it was completed in the early 1970s. The machine was upgraded (1983) to accelerate protons and counterpropagating antiprotons to such enormous speeds that the ensuing impacts delivered energies of up to 2 trillion electron-volts (TeV); hence the ring has been dubbed the Tevatron. The Tevatron was an example of a so-called colliding-beams machine, which is really a double accelerator that causes two separate beams to collide, either head-on or at a grazing angle. Because of relativistic effects, producing the same reactions with a conventional accelerator would require a single beam hitting a stationary target with much more than twice the energy of either of the colliding beams.
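The advantage of colliding beams follows from relativistic kinematics: for equal beams colliding head-on, the available centre-of-mass energy is simply twice the beam energy, whereas for a beam striking a stationary proton it grows only as roughly the square root of the beam energy. A sketch of the comparison at Tevatron-scale energies (standard invariant-mass formulas, in natural units with energies in GeV):

```python
import math

MP = 0.938  # proton rest energy m*c^2, GeV

def cm_energy_fixed_target(e_beam):
    """Centre-of-mass energy (GeV) for a beam hitting a stationary proton."""
    return math.sqrt(2 * e_beam * MP + 2 * MP**2)

def fixed_target_equivalent(e_cm):
    """Beam energy (GeV) a fixed-target machine needs for the same e_cm."""
    return (e_cm**2 - 2 * MP**2) / (2 * MP)

# Tevatron: two 1000-GeV beams head-on give e_cm = 2000 GeV.
# A single beam on a stationary target would need thousands of TeV.
print(fixed_target_equivalent(2000.0) / 1000)  # about 2,100 TeV
```

The quadratic growth of the fixed-target requirement is why "much more than twice the energy" is a dramatic understatement at TeV scales.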
Plans were made to build a huge accelerator in Waxahachie, Tex. Called the Superconducting Supercollider (SSC), a ring 54 mi (87 km) in circumference lined with superconducting magnets (see superconductivity) was intended to produce 40 TeV particle collisions. The program was ended in 1993, however, when government funding was stopped.
In Nov., 2009, the Large Hadron Collider (LHC), a synchrotron constructed by CERN, became operational, and in Mar., 2010, it accelerated protons to 3.5 TeV to produce collisions of 7 TeV, a new record. The LHC's main ring, which uses superconducting magnets, is housed in a circular tunnel some 17 mi (27 km) long on the French-Swiss border; the tunnel was originally constructed for the Large Electron Positron Collider, which operated from 1989 to 2000. The LHC was shut down in 2013–15 to make improvements designed to permit it to produce collisions involving protons that have been accelerated up to 7 TeV (and collisions of lead nuclei at lower energies), and in trials in 2015 it produced collisions of 13 TeV, a further record. The LHC is being used to investigate the Higgs particle as well as quarks, gluons, and other particles and aspects of physics' Standard Model (see elementary particles). In 2012 CERN scientists announced the discovery of a new elementary particle consistent with a Higgs particle; they confirmed its discovery the following year.
The synchrotron can be used to accelerate electrons but is inefficient. An electron moves much faster than a proton of the same energy and hence loses much more energy in synchrotron radiation. A circular machine used to accelerate electrons is the betatron, invented by Donald Kerst in 1939. Electrons are injected into a doughnut-shaped vacuum chamber that surrounds a magnetic field. The magnetic field is steadily increased, inducing a tangential electric field that accelerates the electrons (see induction).
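The disparity between electrons and protons follows from the mass dependence of synchrotron radiation: at fixed energy and orbit radius, the energy radiated per turn scales as 1/m⁴. A one-line estimate of the electron-to-proton ratio (the standard radiated-power scaling, applied here as a back-of-envelope figure):

```python
# Synchrotron radiation per turn at fixed energy and radius scales as 1/m^4,
# so an electron radiates (m_p / m_e)^4 times more than a proton would.
MASS_RATIO = 1836.15  # proton mass / electron mass
print(MASS_RATIO ** 4)  # ~1.1e13: about ten trillion times more radiation
```

This factor of roughly 10¹³ is why circular electron machines are limited by radiation loss while proton synchrotrons are not, and why the betatron's induced-field scheme was developed for electrons.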
Accelerators are used in experiments to force high-energy particles to collide with other particles. The way the fragments of particles behave after the collision provides information on the forces within atoms. This is used for research purposes and to generate high-energy X-rays and gamma rays. In a linear accelerator, particles travel in a straight line, usually accelerated by an electric field. In a cyclotron, particles are accelerated in a spiral path between pairs of D-shaped magnets with an alternating voltage between them. In a synchrocyclotron, the accelerating voltage is synchronized with the time it takes the particles to make one revolution. A synchrotron consists of a large circular tube with magnets to deflect the particles in a curve and radio-frequency fields to accelerate them. The most advanced accelerators are colliders, in which beams of particles moving in opposite directions collide, thus achieving higher energy of interaction. See also bubble chamber.