Digital Computing

There are two basic data transfer and communication systems in computing technology: digital and analog. Analog systems have continuous input and output of data, while digital systems manipulate information in discrete chunks. Although digital devices could use any numeric system to manipulate data, in practice they use the binary number system, consisting of ones and zeros. Information of all types, including characters and decimal numbers, is encoded in the binary number system before being processed by digital devices. In mixed systems, where sensors may deliver information to a digital computer in analog form, such as a voltage, the data must be transformed from an analog to a digital representation (usually binary as well).
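To make that conversion step concrete, the following sketch quantizes an analog voltage into an n-bit binary code, roughly as an analog-to-digital converter would. The 8-bit width and 0 to 5 volt range are illustrative assumptions, not details from the text.

```python
def quantize(voltage, v_min=0.0, v_max=5.0, bits=8):
    """Map an analog voltage onto one of 2**bits discrete binary codes."""
    levels = 2 ** bits
    # Clamp the reading to the converter's assumed input range.
    voltage = max(v_min, min(v_max, voltage))
    # Scale to an integer code; the truncation here is the quantization step.
    return int((voltage - v_min) / (v_max - v_min) * (levels - 1))

# A 3.3-volt sensor reading becomes the 8-bit code 10101000 (decimal 168);
# the small rounding loss is the approximation the article describes.
print(format(quantize(3.3), "08b"))
```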

The chief differences between digital and analog devices relate to accuracy and speed. Since encoding is generally necessary for digital systems, it is not possible to exactly represent data from sensors, oscilloscopes, and other instruments. The information is numeric, but it changes continuously, so what goes into a digital computer is an approximation. An example is the use of floating-point arithmetic to process very large or very small numbers in digital devices. Conversion from their complete form to floating-point representation (a mantissa, or fraction part, scaled by a power of the base) may introduce inaccuracy, since a few of the least significant digits may be lost in fitting the mantissa into the registers of a digital device. When floating-point numbers are used in calculations, the error compounds. As for speed, digital devices work on coded representations of reality, whereas an analog model works from reality itself. This makes digital devices inherently slower because of the conversions and the discrete nature of the calculations involved.
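A minimal Python illustration of this compounding: the decimal fraction 0.1 has no exact binary floating-point representation, so repeatedly adding it drifts away from the exact answer.

```python
# 0.1 cannot be stored exactly in binary floating point, so each
# addition contributes a tiny rounding error that accumulates.
total = 0.0
for _ in range(1000):
    total += 0.1
print(total)           # prints a value slightly off from 100.0
print(total == 100.0)  # False
```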

Analog devices can work on a continuous flow of input, whereas digital devices must explicitly sample the incoming data. Choosing this sampling rate is an important decision affecting the accuracy and speed of real-time systems. One thing digital computers do more easily is evaluate logical relationships. Digital computers use Boolean arithmetic and logic, and their logical decisions are probably as important as their numerical calculations.
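The hypothetical sketch below samples a 5-hertz sine wave at two different rates; at the slower rate every reading is nearly zero and the wave is indistinguishable from no signal at all, which is why the sampling-rate decision matters.

```python
import math

def sample(freq_hz, rate_hz, n):
    """Take n samples of sin(2*pi*freq*t), rate_hz samples per second."""
    return [math.sin(2 * math.pi * freq_hz * k / rate_hz) for k in range(n)]

# Sampled 50 times a second, the 5 Hz wave is clearly visible...
print([round(x, 2) for x in sample(5, 50, 10)])
# ...but sampled only 5 times a second, it vanishes into near-zero readings.
print([round(x, 2) for x in sample(5, 5, 10)])
```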

The first instance of digital computing was the abacus. In fact, the word "digital" may originate from the "digits" of the hand used to manipulate the counters of the abacus, although the name may also have come from the tradition of finger counting that pre-dated it. The origins of the abacus are too ancient to have been recorded, but it appeared in China around 1200 C.E. and in other parts of East Asia within a few hundred years. The abacus is not just a toy. Clear evidence of this came in 1946, when a Japanese arithmetic specialist using an abacus beat a U.S. Army soldier using an electric calculator in a series of calculations. The abacus is digital, with five one-beads on one side of each post and a single bead worth five on the other, each post representing a decimal place.
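As a hypothetical sketch of that one-and-five scheme, each decimal digit on a post can be decomposed into five-beads and one-beads:

```python
def abacus_digit(d):
    """Split a decimal digit into (five-beads, one-beads) on one post."""
    return d // 5, d % 5

# The digits of 1946, one post per decimal place:
for digit in str(1946):
    fives, ones = abacus_digit(int(digit))
    print(f"digit {digit}: {fives} five-bead(s) + {ones} one-bead(s)")
```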

In 1642 French mathematician and philosopher Blaise Pascal (1623–1662) built a machine that was decimal in nature. Each dial of his calculator represented a power of ten. Each tooth of the gears represented a one. He also invented an ingenious carry mechanism, moving beyond the abacus, which required "carry overs" to be done mentally.
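A small sketch of the idea behind such a carry mechanism, with the number stored one decimal digit per dial (the function name and layout are ours, not Pascal's):

```python
def add_with_carry(dials, amount):
    """Add amount to a number stored as decimal dials, least significant first."""
    carry = amount
    for i in range(len(dials)):
        total = dials[i] + carry
        dials[i] = total % 10   # the dial wraps past 9...
        carry = total // 10     # ...and pushes a carry into the next dial.
    return dials

# 996 + 7: the carry ripples across three dials automatically, doing the
# bookkeeping an abacus user would have had to perform mentally.
print(add_with_carry([6, 9, 9, 0], 7))  # [3, 0, 0, 1], i.e. 1003
```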

The first digital computer of the modern sort was the programmable calculator designed, but never built, by British mathematician Charles Babbage (1791–1871). This Analytical Engine used base-10 numbers, with each digit a power of ten and represented by a gear tooth. The first electronic computers used both of the common number systems, binary and decimal. The machine built by American physicist John Vincent Atanasoff (1903–1995) and his graduate student Clifford E. Berry (1918–1963) in the late 1930s could only solve a restricted class of problems, but it used digital circuits in base-2.

The Electronic Numerical Integrator and Computer (ENIAC), designed by American engineers J. Presper Eckert (1919–1995) and John W. Mauchly (1907–1980), is considered the first general-purpose electronic digital machine. It used base ten, simplifying its interface. The first truly practical programmable digital computer, the Electronic Delay Storage Automatic Calculator (EDSAC), designed by Maurice V. Wilkes (1913– ) in Cambridge, England, in 1949, used binary representation. Since then, all digital devices of any practicality have been binary at the machine level, and octal (base 8) or hexadecimal (base 16) at a higher level of abstraction.
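To illustrate that layering, the same machine-level bit pattern reads more compactly in octal or hexadecimal; the value below is purely illustrative.

```python
n = 0b1101101  # a binary value, as stored at the machine level
# Octal groups the bits in threes, hexadecimal groups them in fours:
print(format(n, "o"))  # 155
print(format(n, "x"))  # 6d
print(n)               # 109, the same value in familiar decimal notation
```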

Digital computers are forgiven their small inaccuracies because of their speed. For instance, the integral calculation is figuring out the area under a curve. The digital solution is to "draw" a large number of rectangles below the curve, approximating its height. Each of these rectangles has tiny lines representing its two vertical sides; the larger the number of rectangles, the smaller these lines. These tiny lines approximate the curved line at many points, so adding up the areas of all the rectangles yields an estimate of the integral. The faster the machine, the more rectangles one can program it to create, and the more accurate the calculation.
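A minimal sketch of that rectangle method (the function and interval are arbitrary examples, not from the text):

```python
def integrate(f, a, b, n):
    """Approximate the area under f on [a, b] using n rectangles."""
    width = (b - a) / n
    # Sum height * width for one rectangle at each step along the axis.
    return sum(f(a + i * width) * width for i in range(n))

# The area under y = x*x from 0 to 1 is exactly 1/3; more rectangles
# (which a faster machine can afford) give a closer estimate.
for n in (10, 100, 10_000):
    print(n, integrate(lambda x: x * x, 0.0, 1.0, n))
```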

There is also a time lag inherent in analog-to-digital conversion and vice versa. Many aircraft flight control systems have sensors that generate analog signals. A sensor might transmit a voltage, for instance, whose magnitude carries the information. This voltage is converted to a number in digital form, which the flight computer can then work on, and the answer is finally converted from digital form back into a voltage. Such control systems are able to calculate new values fifty times a second.
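In outline, and with entirely hypothetical sensor values and conversion widths, a few passes of such a loop look like this:

```python
import math, time

def read_sensor():  # hypothetical analog input, arriving as a voltage
    return 2.5 + 0.5 * math.sin(time.time())

def to_digital(volts, bits=8, v_max=5.0):  # analog-to-digital conversion
    return int(volts / v_max * (2 ** bits - 1))

def to_analog(code, bits=8, v_max=5.0):    # digital-to-analog conversion
    return code / (2 ** bits - 1) * v_max

# Fifty cycles a second leaves 20 milliseconds per cycle, which must
# cover both conversions as well as the computation between them.
for _ in range(3):
    code = to_digital(read_sensor())
    command = code              # stand-in for the flight computer's work
    volts_out = to_analog(command)
    time.sleep(1 / 50)
```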

These speed improvements in digital computers became common when the entire processor was integrated on one chip. Data transfers between components on the chip are quite fast because of the small distances between them. Analog-to-digital and digital-to-analog converters can also be chip-based, keeping the conversion delays small.

Some digital computers are now built from more than 1,000 of these processors working in parallel. This makes possible what has been the chief strength of digital computers all along: doing what is impossible for humans. At first, they performed calculations faster, if not better. Then they controlled other devices, some digital themselves, some analog, almost certainly better than people could. Finally, the size and speed of digital computers make it possible to model the wind flowing over a wing, simulate the first microseconds of a thermonuclear explosion, or break a supposedly unbreakable code.

see also Abacus; Analog Computing; Binary Number System.

James E. Tomayko

Bibliography

Williams, Michael R. A History of Computing Technology, 2nd ed. Los Alamitos, CA: IEEE Computer Society Press, 1997.