# Codification and Employment of the Principle of Mathematical Induction

## Overview

Over the course of the nineteenth century, mathematicians and philosophers were forced to change their view of the subject matter of mathematics. At the start of the century many scholars considered it a collection of self-evident truths about nature; by 1900 many were convinced that mathematics was principally a creation of the human imagination and, as such, subject to error. This shift in view resulted from the discovery of new geometries and number systems, which called into question whether the traditional mathematical description of nature using Euclidean geometry and the real numbers was necessarily correct and free of potential contradictions. The principle of induction was formalized during this period in an attempt to put arithmetic on a firm logical foundation.

## Background

The principle of induction is simply stated: if P(n) denotes a statement in which n can be any positive integer, if P(n) is true when n=1, and if it can further be proven that P(n+1) is true whenever P(n) is true, then P(n) is true for all positive integers n. As a simple example, let P(n) be the statement "The sum of the first n positive integers is n(n+1)/2." This statement is clearly true when n=1. With simple algebra we can show that adding n+1 to n(n+1)/2 yields (n+1)((n+1)+1)/2, so the statement must be true for every positive integer n, no matter how large.

The principle of induction has been used implicitly many times throughout the history of mathematics. Euclid (c. 300-260 B.C.) assumed it in his famous proof that there is no largest prime number. It was used explicitly by the mathematician Francesco Maurolico in an arithmetic text published in 1575, and by the philosopher and mathematician Blaise Pascal (1623-1662) in 1665 in the derivation of what we now call Pascal's triangle. It was not given a name, however, until 1838, when an article entitled "Induction (Mathematics)" by Augustus De Morgan (1806-1871) appeared in the Penny Cyclopedia. By then it was being used, or at least assumed, by the growing number of mathematicians and scientists employing infinite series to solve problems in applied mathematics. By the end of the century, however, it had become a point of considerable controversy in the attempt to put mathematics on a fully rigorous basis.
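The two parts of the argument can be spot-checked in a few lines of Python. This is only a numerical illustration of the base case and the induction step, not a substitute for the algebraic proof; the function name `sum_formula` is my own:

```python
def sum_formula(n):
    """Closed form asserted by P(n): the sum of the first n positive integers."""
    return n * (n + 1) // 2

# Base case: P(1) holds, since the sum of the first 1 integers is 1.
assert sum_formula(1) == 1

# Induction step, spot-checked numerically: adding n+1 to the value
# given by P(n) yields exactly the value given by P(n+1).
for n in range(1, 1000):
    assert sum_formula(n) + (n + 1) == sum_formula(n + 1)
```

The proof itself rests on the algebra n(n+1)/2 + (n+1) = (n+1)(n+2)/2, which the loop merely samples.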

By the beginning of the nineteenth century mathematical proof had come to be regarded as a model of clear reasoning. The importance of proof varied, however, from one area of mathematics to another. In geometry, the enduring influence of Euclid's classic text, with its emphasis on proofs, had set the tone for future work. In arithmetic and the theory of numbers, the properties of the integers were considered sufficiently well known that less attention was paid to deriving results from axioms and postulates. The situation changed, however, as mathematicians discovered that the famous parallel postulate of Euclid could be replaced by other postulates and still yield a consistent geometry. One could no longer be certain that the postulates of Euclid were self-evident truths about space, or even, as the philosopher Kant had maintained, inherent in the way the human mind thinks about space.

Considerable attention was then devoted to the foundations of arithmetic and the number system. One major thrust was the development of the theory of sets by the German mathematician Georg Cantor (1845-1918), beginning in 1873. At first set theory appeared to offer a means of developing all the important ideas of mathematics within a common framework. The number three, for instance, could be thought of as the set with all the possible sets of three elements as members. Set theory also allowed an approach to discussing infinite quantities. A set was said to contain an infinite number of elements if its elements could be put into a one-to-one correspondence with the elements of a proper subset of itself. Thus, since the set of integers {n} can be put into a one-to-one correspondence with the set of even integers {2n}, it is an infinite set. Unfortunately, paradoxes soon became apparent. One could conceive of a set of all sets. This set would be a member of itself, that is, of the set of all sets. One could as easily consider the set of all sets that are not members of themselves. If this set were a member of itself, it would not meet the requirement for membership in itself; if it were not a member of itself, it would qualify for membership in itself. Serious paradoxes were inevitable if sets were allowed to contain other sets as members.
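Cantor's criterion can be made concrete: the pairing n ↔ 2n matches the positive integers with the even positive integers, a proper subset, with nothing left over on either side. A small Python sketch over the first ten integers (the variable name `pairing` is my own):

```python
# Pair each positive integer n with the even integer 2n.
pairing = {n: 2 * n for n in range(1, 11)}

# The map is one-to-one: distinct n are sent to distinct even numbers...
assert len(set(pairing.values())) == len(pairing)

# ...and every even number in the corresponding range is hit,
# so the correspondence leaves nothing over on either side.
assert set(pairing.values()) == set(range(2, 21, 2))
```

The same pairing extends to all positive integers, which is why the even integers, though a proper subset, are "just as numerous" in Cantor's sense.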

An alternative approach to the foundations of arithmetic was provided in 1889 by Giuseppe Peano (1858-1932), a professor of mathematics at the University of Turin in Italy, who developed a set of five axioms to describe the set of natural numbers, {1, 2, 3, ...}, based on the idea of succession. Peano's axioms can be stated as:

1. 1 is a natural number.
2. The successor of any natural number is a natural number.
3. If the successors of two natural numbers are equal, then the numbers themselves are equal.
4. 1 is not the successor of any natural number.
5. If S is any set, and 1 is an element of S, and if the fact that any natural number belongs to S implies that its successor belongs to S, then S contains all the natural numbers.

Peano proved that each of his axioms was independent of the others. The fifth axiom is a rigorous statement of the principle of mathematical induction.
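The idea of building the natural numbers from succession alone can be sketched in Python. This is a toy model for illustration only, not Peano's notation; the names `ONE`, `succ`, `add`, and `to_int` are my own, and addition is defined recursively just to show that arithmetic can rest on the successor operation:

```python
# The starting element (axiom 1).
ONE = ("1",)

def succ(n):
    """Axiom 2: the successor of any natural number is a natural number."""
    return ("S", n)

def add(a, b):
    """Addition defined recursively on the second argument."""
    if b == ONE:
        return succ(a)           # a + 1 = S(a)
    return succ(add(a, b[1]))    # a + S(m) = S(a + m)

def to_int(n):
    """Interpret a Peano term as an ordinary integer, for checking."""
    return 1 if n == ONE else 1 + to_int(n[1])

two = succ(ONE)
three = succ(two)
assert to_int(add(two, three)) == 5
```

Note how the recursive definition of `add` mirrors the fifth axiom: it is specified for 1 and for the successor of any number, and induction guarantees that this covers every natural number.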

## Impact

Increased concern with the foundations of mathematics was perhaps an inevitable consequence of the growth of professional mathematics that has continued steadily since the seventeenth century. Initially mathematicians communicated through correspondence and the occasional publication of books. In the eighteenth century there were a few paid positions in national academies. By the nineteenth century the need for advanced technical education was appreciated by national leaders and a growing industrialist class in France, Germany, the Netherlands, the United Kingdom, and, somewhat later, the United States. Technical institutes and universities were established in these countries, creating a need for full-time instructors in mathematics. Newly established journals were devoted to pure and applied mathematics as distinct from the physical sciences. Mathematicians developed a separate identity as mathematicians, and there was both the manpower and a forum for the close examination of mathematical ideas.

Traditionally, logic has been divided into two parts: deductive logic, which allows us to deduce the consequences of a given set of statements assumed to be true, and inductive logic, which permits us to infer general truths from specific examples. Euclidean geometry is often taken to be an excellent example of deductive logic. Despite its name, mathematical induction is actually a deductive method: once the base case and the step from each integer to its successor are established, the conclusion about all the integers follows with certainty, not merely with probability.

A desire to avoid the paradoxes to which set theory might lead prompted the emergence of the logistic school of mathematics, the group with which Peano himself identified. The goal of this group was to establish the principles of mathematics on the basis of deductive logic alone, without any dependence on the properties of numbers. The most impressive product of this school is the three-volume Principia Mathematica, first published between 1910 and 1913 by the English mathematicians and philosophers Bertrand Russell (1872-1970) and Alfred North Whitehead (1861-1947). Over a thousand pages long, with very sparse text interrupting many thousands of lines of symbolic manipulation, the work progresses so carefully that the number "one" is not defined until after more than a hundred pages of preliminary results. Russell and Whitehead attempted to avoid the difficulties associated with sets containing other sets as members by developing an elaborate "theory of types," in effect eliminating the possibility that sets could be considered to belong or not belong to themselves. They also resisted assuming the principle of induction, but in the end were forced to adopt an "axiom of reducibility," which most mathematicians found even less acceptable.

The possibility of the logistic approach providing a firm basis for all of mathematics was ruled out permanently in a remarkable paper published in 1931 by the Austrian mathematician Kurt Gödel (1906-1978). Gödel's proof, as it has come to be known, showed that any formal system rich enough to yield the usual rules for the multiplication of integers would necessarily allow the existence of meaningful statements which could neither be proven nor disproven within the system. The proof was based on a clever coding scheme, which made it possible to assign a unique integer to the assertion that a given string of symbols constituted a proof of a particular assertion. Gödel's scheme, which made use of the availability of an infinite number of distinct prime numbers (as established by Euclid using implicit induction), made it possible to construct statements which asserted their own non-provability in any such logical system.
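The flavor of the coding scheme can be shown with a toy version. This is my own simplification: real Gödel numbering encodes entire formulas and proofs, but the core trick is the same. A sequence of positive symbol codes c1, c2, c3, ... is packed into the single integer 2^c1 · 3^c2 · 5^c3 ···, and the uniqueness of prime factorization guarantees the sequence can always be recovered:

```python
PRIMES = [2, 3, 5, 7, 11, 13]

def encode(codes):
    """Pack a short sequence of positive integer codes into one integer."""
    n = 1
    for p, c in zip(PRIMES, codes):
        n *= p ** c
    return n

def decode(n):
    """Recover the sequence by counting how often each prime divides n."""
    codes = []
    for p in PRIMES:
        c = 0
        while n % p == 0:
            n //= p
            c += 1
        if c == 0:
            break
        codes.append(c)
    return codes

# 2**3 * 3**1 * 5**2 = 600; unique factorization recovers [3, 1, 2].
assert decode(encode([3, 1, 2])) == [3, 1, 2]
```

Because every statement and every proof receives such a number, assertions *about* proofs become assertions about ordinary integers, which is what allowed Gödel to build a statement that, in effect, speaks about its own provability.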

The full impact of Gödel's proof is still being examined by mathematicians and philosophers. Many would agree that it has eliminated any possibility of mathematics becoming a "closed book," with all the important results established once and for all. Instead, many mathematicians regard it as showing that mathematics is in its essence an arena for human creativity, not unlike music or poetry, with an endless capacity for invention and discovery.

DONALD R. FRANCESCHETTI
