The Development of Computational Mathematics

Overview

In the post-World War II years, computers became increasingly important for solving very difficult problems in mathematics, physics, chemistry, and other fields of science. By making it possible to find numerical solutions to equations that could not be solved analytically, computers helped to revolutionize many areas of scientific inquiry and engineering design. This trend continues as supercomputers are used to model weather systems, nuclear explosions, and airflow around new airplane designs. Since this trend has not yet leveled off, it is still too soon to say what the final result of these computational methods will be, but they have been revolutionary thus far and seem likely to become even more important in the future.

Background

The first attempts to invent devices to help with mathematical calculations date back at least 2,000 years, to the ancestor of the abacus. "Rod numerals," used in India and China, served not only for counting but also for simple calculations. These gave way to the abacus, which was used throughout Asia and beyond for several hundred years. However, in spite of the speed and accuracy with which addition, subtraction, multiplication, and division could be done on these devices, they were much less useful for the more complex mathematics that was being invented.

The next step towards a mechanical calculator was taken in 1642 by Blaise Pascal (1623-1662), who developed a machine that could add numbers. Thirty years later, German mathematician Gottfried Leibniz (1646-1716) invented a machine that could perform all four basic arithmetic operations, built to help him avoid the tedium of manually calculating astronomical tables. In developing this machine, Leibniz stated, "it is unworthy of excellent men to lose hours like slaves in the labor of calculation which could safely be relegated to anyone else if machines were used." Unfortunately, mechanical inaccuracies in Leibniz's machine—unavoidable, given the technology of the times—rendered it undependable for any but the simplest calculations.

In 1822 English inventor Charles Babbage (1792-1871) developed a mechanical calculator called the "Difference Engine." Unlike previous machines, Babbage's invention could solve some equations in addition to performing arithmetic operations. In later years, Babbage attempted to construct a more generalized machine, called the Analytical Engine, that could be programmed to carry out any mathematical operation. However, he failed to build it because of the technological limitations under which he worked. Others tried to carry Babbage's vision through to fruition, also without success, because they, like Babbage, were limited to constructing mechanical devices whose inherent inaccuracies were unavoidable given the technology of the day.

With the development of electronics in the 1900s, the potential finally existed to construct an electronic machine to perform calculations. In the 1930s, electrical engineers were able to show that electromechanical circuits could be built that would add, subtract, multiply, and divide, finally bringing machines up to the level of the abacus. Pushed by the necessities of World War II, the Americans developed massive computers, the Mark I and ENIAC, to help solve ballistics problems for artillery shells, while the British, with their computer, Colossus, worked to break German codes. Meanwhile, English mathematician Alan Turing (1912-1954) was busy thinking about the next phase of computing, in which computers could be made to treat symbols the same as numbers and could be made to do virtually anything.

Turing and his colleagues used their computers to help break German codes, helping to turn the tide of the Second World War in favor of the Allies. In the United States, simpler machines were used to help with the calculations under way in Los Alamos, where the first atomic bomb was under development, while larger computers in Boston and Aberdeen, Maryland, worked out ballistics problems. All of these efforts contributed enormously to the Allied victories over Germany and Japan, and they proved the utility of the electronic computer to any doubters.

The first purely scientific problem taken on by the electronic computers was that of solving Schroedinger's "wave equation," one of the central equations used in the then-new field of quantum mechanics. Although this equation, properly used, could provide exact solutions to many vexing problems in physics, it was so complex as to defy manual solution. Part of the reason for this involved the nature of the equation itself. For a simple atom, the number of calculations necessary to precisely show the locations and interactions of a single electron with its neighbors could be up to one million. For several electrons in the same atom, the number of calculations increased dramatically, pushing such problems beyond the range of even most of today's computers. Attacking the wave equation was one of the first tasks of the "newer" computers of the 1950s and 1960s, although it was not until the 1990s that supercomputers were available that could actually do a credible job of examining complex atoms or molecules.

Through the 1960s and 1970s scientific computers became steadily more powerful, giving mathematicians, scientists, and engineers ever-better computational tools with which to ply their trades. However, these were invariably mainframe and "mini" computers, because the personal computer and the workstation had not yet been invented. This began to change in the 1980s with the introduction of the first affordable and (for that time) powerful small computers. With the advent of powerful personal computers, Sun and Silicon Graphics workstations, and other machines that could fit on a person's desk, scientific computing reached yet another level, especially in the late 1990s, when desktop computers became nearly as powerful as the supercomputers of the 1970s and 1980s. At the same time, supercomputers continued to evolve, putting incredible amounts of computational power at the fingertips of researchers. Both of these trends continue to this day with no signs of abating.

Impact

The impact of computational methods on mathematics, science, and engineering has been nothing short of staggering. In particular, computers have made it possible to numerically solve important problems in mathematics, physics, and engineering that were hitherto unsolvable.

One way to solve a mathematical problem is to do so analytically. To solve a problem analytically, the mathematician attempts, using only mathematical symbols and accepted mathematical operations, to come up with an answer that is a solution to the problem. For example, the problem 0 = x² - 4 has an exact solution that can be arrived at analytically. By rewriting this problem as x² = 4 (the result of adding 4 to each side of the equality), we can determine that x is a square root of 4, which means that x = 2 or x = -2. This is an analytical solution because it was arrived at by simple manipulations of the original equation using standard algebraic rules.
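
To make the contrast with the numerical methods described below concrete, here is how that analytical solution looks when written out in code. This is only an illustrative sketch: the choice of Python and the use of the quadratic formula (which gives the roots of ax² + bx + c = 0 in closed form) are additions for clarity, not part of the original article.

```python
import math

# Analytic solution of 0 = x^2 - 4: the quadratic formula gives the
# roots of a*x^2 + b*x + c = 0 in closed form, so no guessing or
# iteration is needed -- just direct evaluation of a known expression.
a, b, c = 1.0, 0.0, -4.0
discriminant = math.sqrt(b * b - 4.0 * a * c)
roots = ((-b + discriminant) / (2.0 * a),
         (-b - discriminant) / (2.0 * a))
print(roots)  # (2.0, -2.0): both square roots of 4
```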

On the other hand, more complex equations are not nearly so amenable to analytical solution. Schroedinger's wave equation, for example, cannot be solved in this manner because of its extreme complexity. Equations describing the flow of turbulent air past an airplane wing are similarly intractable, as are other problems in mathematics. However, these problems can be solved numerically, using computers.

The simplest and least elegant way to solve a problem numerically is simply to program the computer to take a guess at a solution and, depending on whether the answer is too high or too low, to guess again with a larger or smaller number. This process repeats until the answer is found. In the above example, for instance, a computer would start at zero and would notice that (0)² - 4 = -4. This answer is too small, so the computer would guess again. A second guess of 1 would give an answer of -3, still too small. Guessing 2 would make the equation work, ending the problem. Similarly, computers can be programmed to take this brute-force approach with virtually any problem, returning numerical answers for nearly any equation that can be written.
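
A minimal sketch of this guess-and-check search follows. The function name, the starting guess, the step size, and the step-halving refinement (which keeps the search from overshooting a root forever) are illustrative assumptions, not details from the article.

```python
def brute_force_root(f, guess=0.0, step=1.0, tol=1e-9, max_steps=100_000):
    """Guess-and-check root finding: raise the guess when f(guess) is
    too small, lower it when too large, and halve the step whenever
    the search overshoots, so the guess closes in on a root of f.
    Assumes f increases with x near the root, as in the text's example."""
    for _ in range(max_steps):  # guard against equations with no root
        if abs(f(guess)) <= tol:
            return guess
        new_guess = guess + step if f(guess) < 0 else guess - step
        if f(new_guess) == 0 or (f(new_guess) < 0) != (f(guess) < 0):
            step /= 2  # overshot the root: refine with smaller steps
        guess = new_guess
    raise ValueError("no root found near the starting guess")

# The example from the text: solving 0 = x**2 - 4 starting from zero.
print(brute_force_root(lambda x: x**2 - 4))  # -> 2.0
```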

In other cases, for example, in calculating the flow of fluids, a computer will be programmed with the equations showing how a fluid behaves under certain conditions or at certain locations. It then systematically calculates the different parameters (for example, pressure, speed, and temperature) at hundreds or thousands of locations. Since each of these values will affect those around it (for example, a single hot point will tend to cool off as it warms neighboring points), the computer is also programmed to go back and recalculate all of these values, based on its first calculations. It repeats this process over and over until satisfied that the calculations are as accurate as they can be.

Consider, for example, the problem of calculating the temperature at every point across a circuit board. If the temperature of any single point is taken to be the average of the four points adjacent to it, the computer will simply take those four points, average their temperatures, and assign that value to the point in the middle. However, doing so changes the calculated temperatures of all the surrounding points, because the central point now has a different temperature. When they change, they in turn affect the central point again, and this cycle of calculations continues until the change between successive iterations is too small to matter. This is called "finite difference" computation, and it is a powerful tool in the hands of engineers and scientists.
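
A minimal sketch of this finite-difference iteration is shown below; it repeatedly replaces each interior grid point with the average of its four neighbors (a Jacobi-style sweep) until successive sweeps barely differ. The grid size, the fixed boundary temperatures, and the convergence tolerance are illustrative assumptions, not values from the article.

```python
def relax_temperatures(grid, tol=1e-6):
    """Sweep the grid, setting each interior point to the average of
    its four neighbors, until the largest change in a sweep is too
    small to matter (the boundary values are held fixed)."""
    rows, cols = len(grid), len(grid[0])
    while True:
        new = [row[:] for row in grid]  # copy; edge points stay fixed
        biggest_change = 0.0
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                new[i][j] = (grid[i - 1][j] + grid[i + 1][j] +
                             grid[i][j - 1] + grid[i][j + 1]) / 4.0
                biggest_change = max(biggest_change,
                                     abs(new[i][j] - grid[i][j]))
        grid = new
        if biggest_change < tol:  # successive iterations barely differ
            return grid

# A 5x5 board whose left edge is held at 100 degrees, the rest at 0.
board = [[100.0 if j == 0 else 0.0 for j in range(5)] for _ in range(5)]
for row in relax_temperatures(board):
    print(["%.1f" % t for t in row])
```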

The bottom line is that computational methods have had an enormous impact on mathematics, the sciences, engineering, and our world. By relieving skilled scientists of the drudgery of endless calculations, they have freed them to make ever more important discoveries in their fields. And by making some complex problems solvable for the first time, they have helped us to design better machines, to better understand our world and universe, and to make advances that would have otherwise been impossible.

P. ANDREW KARAM

