Number System, Real

The question "How many?" prompted early civilizations to make marks to record the answers. The words and signs used to record how many were almost surely related to our body parts: two eyes, five fingers on one hand, twenty fingers and toes. For instance, the word "digit," which we use for the symbols that make up all our numerals (0, 1, 2, 3, 4, 5, 6, 7, 8, 9), is the Latin word for finger.

These first numbers are now called the set of counting numbers: {1, 2, 3, ...}, and sometimes this set is called the natural numbers. Notice that the counting numbers do not include 0. Whereas some early cultures, including the Egyptians, the Chinese, and the Mayans, understood the concept of zero, the digit 0 did not appear until some time after the other nine digits.

In the earliest development of counting, many languages used "one, two, many," so that the word for three may have meant simply "much." People everywhere apparently developed counting systems based on repeating some group, as in counting "one group of two, one group of two and one, two groups of two." We know that a scribe in Egypt used number signs to record taxes as early as 2500 b.c.e.

Hundreds of unbaked clay tablets have been found showing that the Babylonians, in the region we know today as Iraq, were using marks for one and for ten some 1,700 years before the birth of Christ. These tablets show that the idea of place value was understood and used to write numbers. Number signs were needed not only to count sheep or grain but also to keep track of time.

Many civilizations developed complex mathematical systems for astronomical calculations and for recording the calendar of full moons and cycles of the Sun. These earliest records included fractions, or rational numbers, as well as whole numbers, and used place value in ways similar to how decimal fractions are written today. In a manuscript, possibly from the sixth century, fractions were written with the numerator above and the denominator below, but without the division line between them. The bar used for writing fractions was apparently introduced by the Arabs around 1000 c.e.

Early forms of the Hindu-Arabic numerals, including 0, appeared sometime between 400 c.e. and 850 c.e., though recent evidence suggests that 0 may have been invented as early as 330 b.c.e. The zero sign began as a dot. Zero may have developed late because people did not see it as a meaningful answer when they were solving practical problems.

About 850 c.e., a mathematician writing in India stated that 0 was the identity element for addition, although he also thought that dividing a number by 0 left the number unchanged. Some 300 years later, another Hindu mathematician explained that division by 0 resulted in infinity.
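Restated briefly in modern notation (which these writers did not use), the identity property and the later infinity claim amount to

\[
a + 0 = 0 + a = a \quad \text{for every number } a, \qquad \frac{a}{0} = \infty \ \text{(the later view)},
\]

whereas in the modern real number system division by 0 is simply left undefined.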

Number rods were used by the Chinese as a computational aid by 500 b.c.e.; red rods represented positive numbers, and black rods represented negative numbers. The Koreans continued to use number rods after the Chinese and the Japanese had replaced the counting rods with beads in the form of an abacus.

The book Arithmetica, by Diophantus (c. 250 c.e.), calls an equation such as 4x + 20 = 4 "absurd" because it would lead to x = -4. Negative numbers are mentioned around 628 c.e. in the work of an Indian mathematician, and later they appear in all the Hindu math writings. Leonardo Pisano Fibonacci, writing in 1202, paid no attention to negative numbers, and it was not until the Renaissance that mathematics writers began to take them seriously.
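Worked out in modern notation, which Diophantus did not have, the rejected solution is a short computation:

\[
4x + 20 = 4 \;\Longrightarrow\; 4x = -16 \;\Longrightarrow\; x = -4,
\]

a value Diophantus could not accept, since for him a number had to be a positive quantity.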

The idea of letting a variable, such as a or x, represent a number that could be either positive or negative was developed around 1659. The negative sign as we know it began to be used around 1550, along with the words "minus" and "negative" to indicate these numbers.

The idea of square roots, which leads to irrational numbers such as the square root of 2, apparently grew from the work of the Pythagoreans with right triangles. Around 425 b.c.e., Greeks knew that the square roots of 3, 5, 6, and 7 could not easily be measured out with whole numbers. Euclid, around 300 b.c.e., classified such square roots as irrational; that is, they cannot be expressed as the ratio of two whole numbers.
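The classic argument for the square root of 2, traditionally credited to the Pythagoreans, can be sketched in modern notation (not the form in which the Greeks themselves wrote it): if \( \sqrt{2} = p/q \) with whole numbers \( p \) and \( q \) sharing no common factor, then \( p^2 = 2q^2 \), so \( p \) must be even; writing \( p = 2k \) gives \( q^2 = 2k^2 \), so \( q \) must be even as well, contradicting the assumption that the fraction was in lowest terms. Hence \( \sqrt{2} \) cannot be written as a ratio of two whole numbers.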

The history of the development of human knowledge of the real numbers is not clearly linear. Different people in widely separated places were thinking and writing about mathematics and using a variety of words and notations to describe their conclusions. The development of numbers that are not real (that is, numbers that do not lie on what we today call the real number line) began around 2,000 years ago.

The square root of a negative number, which leads to the development of the complex number system, appears in a work by Heron of Alexandria around 50 c.e. He and other Greeks recognized the problem, and Indian mathematicians around 850 stated that a negative quantity has no square root. Much later, in Italy, after the invention of printing, these roots were called "minus roots."

In 1673, John Wallis said that the square root of a negative number is no more impossible than negative numbers themselves, and it was he who suggested drawing a second number line perpendicular to the real number line and using this as the imaginary axis.

see also Calendar, Numbers in the; Integers; Mathematics, Very Old; Number Sets; Numbers and Writing; Numbers, Complex; Numbers, Irrational; Numbers, Rational; Numbers, Real; Numbers, Whole; Radical Sign; Zero.

Lucia McKay

Bibliography

Eves, Howard. An Introduction to the History of Mathematics. New York: Holt, Rinehart and Winston, 1964.

Hogben, Lancelot. Mathematics in the Making. London: Crescent Books, 1960.

———. The Wonderful World of Mathematics. New York: Garden City Books, 1995.

Steen, Lynn Arthur, ed. On the Shoulders of Giants: New Approaches to Numeracy. Washington, D.C.: National Academy Press, 1990.
