Humans have always desired mechanical aids to computation. There is evidence of "computing" devices, such as the present-day abacus, from as early as the thirteenth century c.e. The first computing devices were accumulators capable only of adding and subtracting; because subtraction is nothing more than adding a negative number, these were really single-function machines. Even "adding machines," which were made well into the twentieth century, could perform only that one function.
Nearly all modern computers are digital, meaning that every internal machine state is one of exactly two values: on or off, one or zero, true or false. There is nothing between a zero and a one, such as one-half or one-third. The number of bits used to define a quantity sets the number of different values that the quantity can have. As an example, a quantity represented by an 8-bit binary number can take only one of 256 values. The least significant bit of a binary number is the smallest amount by which two binary numbers can differ. If the least significant bit were one, an eight-bit binary number would represent the integers from zero to 255.
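The relationship between bit width and the number of representable values described above can be checked directly; a minimal Python sketch (the function name is ours, chosen for illustration):

```python
# Number of distinct values an n-bit unsigned binary quantity can take.
def value_count(bits: int) -> int:
    return 2 ** bits

# An 8-bit quantity has 256 possible values; if the least significant
# bit is one, those values are the integers 0 through 255.
print(value_count(8))        # 256
print(0, value_count(8) - 1)  # 0 255
```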
Analog computing, on the other hand, uses physical characteristics to represent numerical values. For example, the slide rule uses distance to represent the logarithms of numbers, and an oscilloscope uses voltage to show the amplitude and frequency of waves. In an analog computer, the internal signals of the computer can assume any value. As an example, a voltage can vary from zero to one volt, with an infinite number of values between the minimum of zero and the maximum of one volt. In a mechanical machine, voltage would be replaced with distance or displacement, such as the turning of a shaft. A pointer could be attached to the shaft as part of a mechanical dial to display an answer.
Many of the early computing devices were digital, such as the previously mentioned abacus: only a whole bead could be pushed along the wire, and it was not possible to move a fraction of a bead to be used in calculations. The historic adding machines were well suited to accounting, where the monetary system was inherently digital. If the least significant bit of an accounting machine were also the smallest unit of currency, the machine would be adequate for most calculations.
The Slide Rule
For accounting, addition, subtraction, multiplication, and division are usually the only mathematical operations required. Scientists and engineers routinely perform much more sophisticated mathematics such as trigonometric functions, logarithms, exponentiation, and many others. The ubiquitous tool of the engineer and scientist for calculations until the development of the hand-held scientific calculator in the early 1970s was the slide rule. The slide rule was invented in the 1600s and uses logarithms.
The slide rule could perform nearly any common mathematical function except addition and subtraction. The interesting characteristic of the slide rule was that the device actually did add and subtract, but what it added and subtracted were logarithms. When logarithms are used to multiply two numbers, the logarithms are added; to determine the answer, the "anti" logarithm of the resulting sum is found. Division involved subtracting logarithms. Trigonometric functions were provided on the slide rule simply by transferring the "trig tables" found in a mathematics handbook onto its scales.
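The logarithmic trick the slide rule performed can be written out numerically; a short sketch, assuming base-10 logarithms (which most slide-rule scales used), with function names of our own choosing:

```python
import math

# A slide rule multiplies by adding logarithms: log(a) + log(b) = log(a*b).
# Taking the antilogarithm (raising 10 to the sum) recovers the product.
def slide_rule_multiply(a: float, b: float) -> float:
    return 10 ** (math.log10(a) + math.log10(b))

# Division subtracts logarithms instead: log(a) - log(b) = log(a/b).
def slide_rule_divide(a: float, b: float) -> float:
    return 10 ** (math.log10(a) - math.log10(b))
```

On a physical slide rule the same additions and subtractions were done by sliding logarithmically ruled scales past one another, and the result could be read to only about three significant figures.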
The major problem of the slide rule was that it was only accurate to about three significant figures, at best. When the hand-held scientific calculator appeared, use of the slide rule disappeared virtually overnight!
Early Calculators and Computers
The slide rule and the adding machine are "calculators": fixed numbers are entered and fixed answers result. Engineers and scientists, however, must often solve problems in which the numbers are constantly changing. Mathematician Isaac Newton (1642–1727), in his study of mechanical motion, discovered the need for a mathematics that could solve problems where the quantities were changing. Newton invented what he called "fluxions," which are called "derivatives" in modern calculus; think of fluxions as describing something in a state of flux. Equations written using variables that are in a constant state of change are called "differential equations." Solving these equations can be very difficult, particularly for the class of equations called "nonlinear," and the most effective way to solve this type of equation is to use a computer.
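The kind of problem a computer solves here can be illustrated with a numerical sketch. Assuming the simple linear differential equation dx/dt = -kx (exponential decay, an equation and constants chosen purely for illustration), Euler's method steps the changing variable forward in small increments of time:

```python
# Euler's method for the differential equation dx/dt = -k * x.
# The equation, step size, and constants are illustrative choices only.
def euler_decay(x0: float, k: float, dt: float, steps: int) -> float:
    x = x0
    for _ in range(steps):
        x += dt * (-k * x)  # advance x by its current rate of change
    return x

# With k = 1 and x0 = 1, the exact value at t = 1 is e^-1, about 0.368.
approx = euler_decay(1.0, 1.0, 0.001, 1000)
```

Mechanical differential analyzers did essentially this accumulation continuously, with rotating shafts and integrating wheels in place of a loop.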
Some of the first true computers, meaning machines that were not calculators or accounting machines, were invented for the purpose of predicting tides. Later machines solved difficult differential equations, using electric motors, gears, cams, and plotting devices to draw the solution of an equation. These early mechanical devices, called "differential analyzers," were "programmed" by installing various gears, shafts, and cams on a large frame, and they were used for solving equations up until the end of World War II.
Similar mechanical computers were used to control various machines such as naval guns. An analog computer would receive information about the ship's location, heading, and speed, the wind direction, and other parameters, as well as operator-entered data concerning the type of projectile, the amount of explosive charge and, most important, the location of the target. The mechanical analog computer would then control the aiming of the gun. Perhaps the best-known analog computer was the one used during World War II to control anti-aircraft guns from radar data.
Mechanical computers are very heavy and slow. After the war, engineers took advantage of the rapidly growing field of electronics to replace the mechanical components of the analog computer. Special amplifiers can add, subtract, and perform calculus operations such as differentiation and integration. The amplifiers were placed in what is called a "feedback" circuit, in which some of the output signal is fed back to the input. The nature of the feedback circuit determined the mathematical operation performed by the amplifier. These amplifiers were called "operational amplifiers" because they perform mathematical operations. Feedback circuits can also perform exponentiation, multiplication, and division, as well as logarithmic and trigonometric functions. Replacing the bulky, massive mechanical components of the analog computer with electronic circuits resulted in much faster analog computation. In the early days of the electronic computer, analog computers were faster than digital computers when solving complex differential equations.
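The behavior of an operational amplifier wired as an integrator can be mimicked in discrete form; a minimal sketch, in which the gain, time step, and input signal are all assumptions made for illustration:

```python
# Discrete approximation of an op-amp integrator: the output is the
# running time-integral of the input voltage, scaled by a gain that a
# real circuit's feedback components would set (here a single constant).
def integrate_signal(samples, dt: float, gain: float = 1.0):
    out, total = [], 0.0
    for v in samples:
        total += v * dt          # accumulate area under the input
        out.append(gain * total)
    return out

# Integrating a constant 1-volt input produces a steadily rising ramp.
ramp = integrate_signal([1.0] * 5, dt=0.1)
```

Swapping the accumulation for a difference between successive samples would model a differentiator instead, which is exactly the sense in which the feedback circuit, not the amplifier itself, determines the operation.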
Longevity of Analog Computers
Analog computation remained in use long after digital computers achieved very high levels of performance. The physical world is mostly analog: parameters such as distance, angle, and speed are all analog quantities. If a simple calculation is required of an analog function, where both the input and output are analog, it is often not worth the expense of a digital microprocessor to perform a task that an operational amplifier can provide. If the calculation is complicated, the advantages of the digital computer will justify the use of a microprocessor.
One of the more common modern applications of analog computers is in process control. As an example, a simple analog circuit using a few operational amplifiers may be used to control the temperature of an industrial process where the use of a digital computer is not warranted.
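Such a control loop can be sketched in software for comparison; a hypothetical proportional temperature controller, in which the setpoint, gain, and first-order plant model are all assumptions chosen for illustration rather than values from any real process:

```python
# Proportional control of a simple first-order temperature model.
# Every constant here is illustrative, not taken from a real process.
def simulate(setpoint=100.0, gain=0.5, ambient=20.0, steps=200):
    temp = ambient
    for _ in range(steps):
        error = setpoint - temp   # what an op-amp subtractor would compute
        heat = gain * error       # proportional "amplifier" output
        # Plant response: heating minus losses to the ambient air.
        temp += 0.1 * (heat - 0.05 * (temp - ambient))
    return temp

final = simulate()  # settles a few degrees below the 100-degree setpoint
```

The steady-state offset below the setpoint is characteristic of purely proportional control, whether it is implemented with a few operational amplifiers or in software.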
The stand-alone analog computer does not exist at the time of this writing. The stored-program digital computer has the distinct advantage that it is programmed entirely in software and does not require any external feedback components. Very powerful software exists for solving differential equations, both linear and nonlinear, and systems based on them. Even though the "analog computer" no longer exists, operational amplifiers are standard electronic components used in a large number of applications, still performing mathematical operations.
see also Abacus; Binary Number System; Digital Computing; Napier's Bones; Slide Rule.
Albert D. Helfrick
Aspray, William, ed. Computing Before Computers. Ames, IA: Iowa State University Press, 1990.
Cortada, James W. The Computer in the United States: From Laboratory to Market, 1930 to 1960. Armonk, NY: M. E. Sharpe, 1993.
Williams, Michael R. A History of Computing Technology. Los Alamitos, CA: IEEE Computer Society Press, 1997.