Advances in Logic during the Nineteenth Century

Overview

The nineteenth century witnessed the formalization of traditional logic along symbolic lines, followed by an attempt to recast the foundations of mathematics in rigorous logical form. The extensive development of mathematical logic was motivated in part by the discovery of new geometries and new number systems, which led many mathematicians to question the logical soundness of traditional mathematical ideas. The attempt to impose the strictest rigor on arithmetic, the most fundamental area of mathematics, would eventually lead to a number of surprising results that caused many philosophers and mathematicians to modify their views of the very nature of mathematics. The same developments would also provide techniques essential to the development of digital computers, artificial intelligence, and modern theories of language.

Background

Logic is the subject that deals with the drawing of correct conclusions from statements assumed to be true. It was first formalized by the Greek philosopher Aristotle (384-322 b.c.) in terms of verbal examples that assumed the grammatical structure of the Greek language. The methods of deductive logic are readily applicable to mathematical reasoning, and the proofs of Euclidean geometry are traditionally presented as a matter of logical deduction. The high esteem in which Aristotle was held during the Middle Ages ensured that his system would be studied and elaborated upon by generations of philosophers and theologians.

German philosopher and mathematician Gottfried Wilhelm Leibniz (1646-1716) was the first modern figure to suggest that the essence of logic could be captured in a set of rules for the manipulation of symbols. In fact, Leibniz went further, envisioning a "universal language," a language so precise that one could not draw incorrect conclusions from it. Although Leibniz worked toward his "calculus of reasoning," as well as on many other scientific problems, over a 35-year period, his logical work did not attract much immediate interest. The first sustained treatment of symbolic logic was provided in 1847 by George Boole (1815-1864), who followed with other publications and a book, An Investigation of the Laws of Thought, in 1854. Boole provided a formalism in which the traditional laws of reasoning could be expressed in purely symbolic terms. He noted that statements could be assigned numerical values of 1 if true and 0 if false, so that the truth values of the various possible logical combinations could be computed in an algebraic way. Boole also noted the connection between logic and the theory of sets. In 1847 Augustus De Morgan (1806-1871) published a book, Formal Logic, that complemented Boole's ideas and indicated how logical combinations, written in symbolic form, could be transformed into equivalent forms.
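
Boole's arithmetic reading of logic is easy to demonstrate with modern tools. The following short Python sketch (a present-day illustration, not Boole's own notation) treats conjunction as multiplication and negation as subtraction from 1, and verifies De Morgan's transformation laws by checking every combination of truth values:

    # Boole's arithmetic of truth values: 1 = true, 0 = false.
    # AND is multiplication, NOT is subtraction from 1, and
    # OR is x + y - xy, which keeps the result equal to 0 or 1.

    def NOT(x):
        return 1 - x

    def AND(x, y):
        return x * y

    def OR(x, y):
        return x + y - x * y

    # Check De Morgan's transformation laws over all truth values.
    for x in (0, 1):
        for y in (0, 1):
            assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))
            assert NOT(OR(x, y)) == AND(NOT(x), NOT(y))

    print("De Morgan's laws hold for every 0/1 combination.")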

In 1879 German mathematician and philosopher Gottlob Frege (1848-1925) published an 88-page booklet entitled Begriffsschrift ("concept script"), which might be regarded as a fresh start toward Leibniz's "calculus of reasoning." In this work he presented an entirely symbolic language that could express the logical possibilities in any situation. Frege made a number of important innovations. In particular, his formalism allowed for statements that include quantifiers, such as "for all" or "for some," and for statements that include variables, that is, symbols that could stand for other concepts. Unfortunately, Frege's notation, which made use of vertical and horizontal lines, proved quite unwieldy and ill suited for publication.
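
In the modern notation that descends from Frege's system (his own two-dimensional diagrams are rarely used today), a statement combining quantifiers and a variable can be written compactly. For example, "every natural number has a successor" becomes

\[
\forall x \, \exists y \; (y = x + 1),
\]

where the variables x and y range over the natural numbers.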

Frege's motivation was, in part, to establish the laws of arithmetic on a firm logical foundation. In 1884, in a publication entitled The Foundations of Arithmetic, Frege attempted to prove that all arithmetical reasoning could be established on the basis of logic alone. Frege defined the equality of numbers in terms of the establishment of a one-to-one correspondence between the elements of two sets. He also proposed that each individual number might be interpreted as a class of sets: the infinite class of all sets having that many members.
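
Frege's definition rests on the observation that two collections can be compared for size by pairing off their elements, with no counting involved. A minimal Python sketch of this idea (a modern illustration of the definition, not Frege's formalism):

    # Two finite collections are equinumerous if their elements can be
    # paired off one to one with none left over; no counting is needed.

    _DONE = object()  # sentinel marking an exhausted collection

    def equinumerous(a, b):
        """Pair off elements of a and b; True if both run out together."""
        it_a, it_b = iter(a), iter(b)
        while True:
            x = next(it_a, _DONE)
            y = next(it_b, _DONE)
            if x is _DONE and y is _DONE:
                return True    # paired off exactly: same number of members
            if x is _DONE or y is _DONE:
                return False   # one collection has members left over

    # The pairing never mentions the number 3, yet it settles the question.
    print(equinumerous({"a", "b", "c"}, {10, 20, 30}))  # True
    print(equinumerous({"a", "b"}, {10, 20, 30}))       # False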

The next major mathematician to be concerned with symbolic logic was Giuseppe Peano (1858-1932). Peano was unaware of the work of Frege when he published The Principles of Arithmetic in 1889, but he acknowledged Frege in subsequent publications. Peano introduced symbol-manipulation rules equivalent to those of Frege but in far more tractable notation, and in particular presented a set of axioms for the natural numbers that concludes with a statement of the principle of induction. Peano's axioms, though largely a restatement of the ideas of Richard Dedekind (1831-1916), are perhaps the most precise formulation of the properties of the integers to emerge in the nineteenth century. Unlike Frege, Peano was able to assemble a group of colleagues who applied his approach in different areas of mathematics, resulting in the publication over the years 1895-1905 of the five-volume Formulary of Mathematics, which made it credible that his approach could form the basis for a complete foundation of mathematics.
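
In modern notation the axioms run roughly as follows (a common streamlined formulation; Peano himself began the sequence at 1 rather than 0, and S here denotes the successor operation):

\[
\begin{aligned}
&(1)\ \ 0 \in \mathbb{N} \\
&(2)\ \ n \in \mathbb{N} \implies S(n) \in \mathbb{N} \\
&(3)\ \ S(n) \neq 0 \quad \text{for all } n \in \mathbb{N} \\
&(4)\ \ S(m) = S(n) \implies m = n \\
&(5)\ \ \bigl[\varphi(0) \wedge \forall n\,(\varphi(n) \Rightarrow \varphi(S(n)))\bigr] \implies \forall n\,\varphi(n)
\end{aligned}
\]

The fifth axiom is the principle of induction: any property that holds for 0 and is passed from each number to its successor holds for all natural numbers.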

Impact

For mathematics the nineteenth century began and ended in an atmosphere of confidence and achievement but was interrupted by periods of doubt and controversy. In 1800 most mathematicians could accept the notion that mathematics was a study of the self-evident properties of space and number, a study that had proven its value to science and engineering. The discovery of non-Euclidean geometries and of number-like objects such as quaternions, matrices, and even Boole's truth values suggested that the principles of mathematics were not as self-evident as had been supposed, and thus there was a new emphasis on rigorous argument. By the close of the century there was confidence again that mathematics was consistent if arithmetical reasoning was consistent, and it seemed that the followers of Frege and Peano would soon be able to derive the principles of arithmetic from logic itself. Some mathematicians even imagined formulating a set of axioms and transformation rules for all of mathematics in a symbolic language such that any meaningful combination of symbols could be determined to be true or false by an automatic procedure. German mathematician David Hilbert (1862-1943) summarized this expectation in 1900 in an address to the Second International Congress of Mathematicians, when he predicted that proofs of the decidability and completeness of mathematics would be found. As will be seen, the exact opposite turned out to be the case.

The nineteenth century had left some unfinished business. The set theory of Boole, De Morgan, and German mathematician Georg Cantor (1845-1918) allowed for sets of different infinite sizes and allowed the possibility of sets containing themselves as members. In a letter written to Frege in 1902, English philosopher Bertrand Russell (1872-1970) stated what has come to be known as Russell's paradox: any set theory that allows a set to be a member of itself (as Frege's theory of numbers did) permits the formation of the set of all sets that are not members of themselves, and it is impossible to classify this set as either a member or not a member of itself.
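
The argument can be stated in a single line. Let R be the set of all sets that are not members of themselves; asking whether R belongs to itself then leads to a contradiction either way:

\[
R = \{\, x \mid x \notin x \,\} \quad\Longrightarrow\quad \bigl( R \in R \iff R \notin R \bigr).
\]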

An attempt to eliminate the paradoxes of set theory from symbolic logic led to an elaborate "theory of types" developed by Russell and the mathematician and philosopher Alfred North Whitehead (1861-1947) in the three-volume Principia Mathematica, first published over the years 1910-13. The Principia also introduced the formal rules of inference lacking in Peano's original work and attempted to replace the principle of induction with a more general principle of "equivalence."

The possibility that a single set of axioms and rules of inference, such as those in Principia Mathematica, could form a firm basis for all of mathematics was ruled out permanently in a remarkable paper published in 1931 by Austrian mathematician Kurt Gödel (1906-1978). Gödel showed that any consistent formal system that included the usual rules of integer arithmetic would necessarily allow the existence of meaningful statements that could neither be proved nor disproved within the system. Gödel's result would have a profound effect on the philosophy of mathematics, discouraging attempts to find a single axiomatic basis for the subject and stimulating a debate between "formalists," who believed that the content of the subject was only what the symbolic manipulations revealed, and "intuitionists," who believed that mathematical objects, no matter how "idealized," were nonetheless real and could be thought about.
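
One ingredient of Gödel's argument can be made concrete. Every formula, read as a string of symbols, can be encoded as a single integer, so that statements about formulas become statements about numbers, and a system of arithmetic can in effect talk about itself. A small Python sketch of such an encoding (the symbol codes below are arbitrary choices for illustration, not Gödel's own scheme):

    # The k-th symbol, with code c, contributes a factor p_k ** c,
    # where p_k is the k-th prime; unique factorization guarantees
    # that distinct symbol strings receive distinct numbers.

    def primes():
        """Generate 2, 3, 5, 7, ... by trial division (fine for a demo)."""
        found = []
        n = 2
        while True:
            if all(n % p for p in found):
                found.append(n)
                yield n
            n += 1

    # An arbitrary, illustrative symbol table; any one-to-one coding works.
    CODES = {"0": 1, "S": 2, "=": 3, "(": 4, ")": 5}

    def godel_number(formula):
        g = 1
        for p, symbol in zip(primes(), formula):
            g *= p ** CODES[symbol]
        return g

    # The formula S(0)=S(0) is now a single natural number.
    print(godel_number("S(0)=S(0)"))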

The possibility that the truth of mathematical statements could be decided by a mechanical procedure was ruled out with the publication of a paper by British mathematician Alan Turing (1912-1954) in 1937. After formalizing the notion of a mechanical procedure for manipulating symbols, Turing was able to demonstrate that there is no general method for telling whether such a procedure, applied to an arbitrary string of mathematical symbols, will give a result in a finite number of steps. Turing's result was disappointing from the standpoint of Hilbert's hope that all mathematical statements would be decidable by a single method. However, in his analysis of mechanical procedure Turing conceived the modern notion of the digital computer as an information-processing machine that could be programmed to carry out tasks expressed in symbolic form.
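
Turing's argument can be paraphrased as a short self-referential program. Suppose a function halts(program, argument) could always answer the question; the following Python sketch (illustrative only, since its premise is exactly what the theorem rules out) shows why no such function can exist:

    # Suppose, for contradiction, that a perfect halting tester existed.
    def halts(program, argument):
        """Hypothetical: True iff program(argument) eventually halts."""
        ...  # no correct general implementation is possible

    def contrary(program):
        """Do the opposite of whatever halts predicts about self-application."""
        if halts(program, program):
            while True:   # predicted to halt, so loop forever
                pass
        else:
            return        # predicted to loop, so halt at once

    # Consider contrary(contrary).  If halts says it halts, it loops;
    # if halts says it loops, it halts.  Either answer is wrong, so no
    # general mechanical procedure halts() can exist.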

The computer would, of course, turn out to affect profoundly almost every area of human activity, and it is not surprising that by the 1950s a serious interest in the development of machine or "artificial" intelligence would begin to occupy philosophers, psychologists, and engineers. These disciplines began to concern themselves with how knowledge could be represented in computer memory, which raised further questions about the representation of knowledge in the human brain. While these issues are far from resolved, much artificial intelligence research is based on the first-order predicate calculus, a clear descendant of Frege's Begriffsschrift. Indeed, one of the first successes of artificial intelligence was a program named "Logic Theorist," which could prove theorems in the system of Principia Mathematica.
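
The flavor of such logic-based reasoning programs can be suggested with a toy example: store facts and implications symbolically, and apply the rule of modus ponens until nothing new follows. (A deliberately simplified propositional sketch in Python; Logic Theorist and modern predicate-calculus provers are far more sophisticated.)

    # A toy knowledge base: known facts and rules "premise implies conclusion".
    facts = {"p"}
    rules = [("p", "q"), ("q", "r"), ("x", "y")]

    # Forward chaining: apply modus ponens until no new fact appears.
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(sorted(facts))  # ['p', 'q', 'r']; 'y' is never derived, 'x' is unknown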

DONALD R. FRANCESCHETTI

Further Reading

Bell, Eric Temple. The Development of Mathematics. New York: McGraw-Hill, 1945.

Boyer, Carl B. A History of Mathematics. New York: Wiley, 1968.

Kline, Morris. Mathematical Thought from Ancient to Modern Times. New York: Oxford University Press, 1972.

Kline, Morris. Mathematics: The Loss of Certainty. New York: Oxford University Press, 1980.

Taton, René. History of Science: Science in the Nineteenth Century. New York: Basic Books, 1965.

Van Heijenoort, Jean. From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931. Cambridge, MA: Harvard University Press, 1967.
