# Infinity in Mathematics and Logic

The notion of infinity, and the problems, both philosophical and mathematical, that arise from it have been a central concern for over two millennia. Any serious thought about the nature of space, time, God (or gods), mathematics, and motion quickly leads to more general concerns regarding the notion, or notions, of infinity intimately tied up with such issues. As a result, it is unsurprising that philosophers throughout history have thought deeply about what infinity is, whether the notion is coherent, whether there are infinite entities (or infinitely many entities), and how we can know about such entities if they exist.

This entry focuses on two aspects of the infinite. The first is infinite divisibility, the idea that an object can, in some sense (and perhaps only ideally), be divided into an infinite collection of smaller and smaller parts. The puzzles that arise from such division are central both to philosophical thinking about notions such as part and whole and to the mathematical analysis of lines, surfaces, and other continuous objects. The second aspect to be addressed is already implicit in the first—the idea that there can be infinitely large collections at all. Much of the history of mathematics and philosophy can be seen as an (often indirect) inquiry into the coherence of such collections and how they differ from finite collections.

As a result, this entry will for the most part ignore other interesting, but less central, issues within the literature on infinity, including infinitesimals, mereological theories containing gunk, unrestrictedly general quantification, nonstandard set theories, and nonmathematical uses of the term "infinite" (e.g., theological understandings of the infinite). The discussion below, then, is not meant to be a comprehensive survey of all aspects of infinity (or even all aspects of this notion as it appears within mathematics, logic, and metaphysics), but is instead intended to provide a basic understanding of two important themes underlying hundreds of years of thought on the topic.

## Infinite Divisibility: Aristotle and Zeno

Given the long pedigree of thought regarding infinity, it seems apropos to begin at (or near) the beginning, with the ancient Greeks. While mathematics in general, and geometry in particular, was central to Plato's philosophy, he has little to say regarding the nature of the infinite. His few comments on the topic occur in the *Philebus*, where he equates the infinite with the unlimited, unbounded, excessive, and indefinite.

Identifying the infinite with such notions is, in hindsight, less helpful than one might hope: The surface of a sphere, which in a certain sense has no boundaries or limits, nevertheless has finite area. Extremely large collections can be thought of as excessive yet finite. Indefiniteness has, recently, been associated more with vague phenomena (such as the boundary between colors) than with the infinite. Thus, Plato's discussion, while interesting, fails to provide clear criteria for distinguishing between finite and infinite collections, relying instead on the idea that we can tell the difference when we see it.

Plato's star pupil, Aristotle, follows his teacher in neglecting to provide a clear definition of the infinite (and in equating it with suspect notions such as unboundedness). Even so, he made an influential contribution by distinguishing between two different types of infinity, or two different ways of conceiving infinite collections. This contribution was due to Aristotle's need to respond to Zeno of Elea's paradoxes.

Zeno presented four paradoxes that, through clever uses of infinity, demonstrated (so it was claimed) that motion was impossible. Here we consider only two: the paradox of Achilles and the paradox of the runner (an adequate response to either can likely be generalized to the others).

The paradox of Achilles is perhaps the best known of Zeno's puzzles. Swift Achilles is to run a race against a tortoise, and the tortoise is given a head start. Zeno argues that, no matter how fast Achilles runs, he can never overtake the tortoise. First Achilles must reach the point at which the tortoise started, call it P_{1}. By the time he does so, however, the tortoise will have traveled some short distance further, to a point we can call P_{2}. So Achilles's next task is to run from P_{1} to P_{2}. By the time he achieves this, the tortoise will have traveled a bit further, to P_{3}. So Achilles's next task is to run to P_{3}. But by then the tortoise will have reached P_{4}, and so on.

Thus, according to Zeno, Achilles can never pass the tortoise and win the race, because, no matter how fast he runs, each time he reaches a point where the tortoise was, the tortoise will have moved a bit farther on. Stating the conclusion more carefully, Zeno's argument does not (and was most likely not intended to) show that Achilles cannot overtake the tortoise; rather, it demonstrates that there is a conceptual puzzle regarding how he does so.

Zeno's paradox of the runner is similar, slightly less well known, but mathematically a bit more elegant. Imagine a runner who must run from point 0 to point 1. Before reaching 1, he must reach the midpoint between 0 and 1 (i.e., point ½). Once he has reached ½, however, he must, before he can reach 1, run to the midpoint between ½ and 1 (i.e., ¾). Then he must run to the midpoint between ¾ and 1 (i.e., ⅞), and then run to 15/16, and 31/32, and so on.

Zeno concluded that, in traveling from 0 to 1, the runner traverses an infinite number of distinct distances.

It is worth noting that this paradox depends on a by now well-known mathematical fact: Some infinitely long lists of numbers (or *infinite series* ) have a finite sum. In particular, the construction of the paradox demonstrates an at least implicit awareness that:

1 = ½ + ¼ + ⅛ + 1/16 + 1/32 + …

or, in more modern notation:

∑_{n=1}^{∞} 1/2^{n} = 1

We can provide an intuitively compelling (although not mathematically rigorous) argument demonstrating this as follows. Set the infinite sum equal to x:

x = ½ + ¼ + ⅛ + 1/16 + 1/32 + …

Multiply both sides by 2:

2x = 1 + ½ + ¼ + ⅛ + 1/16 + 1/32 + …

Subtract the first equation from the second: the infinite tails cancel term by term, and we obtain the desired result:

x = 1
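
The convergence behind the paradox can also be checked computationally. The following sketch (the helper `partial_sum` is my own, not part of the original discussion) uses exact rational arithmetic to show that the partial sums approach 1, with the remaining gap halving at every stage:

```python
from fractions import Fraction

def partial_sum(n):
    """Exact value of 1/2 + 1/4 + ... + 1/2^n."""
    return sum(Fraction(1, 2 ** k) for k in range(1, n + 1))

# The sum of the first n terms is exactly 1 - 1/2^n: the runner's
# remaining distance halves with every stage, so the partial sums
# converge to 1 although no single stage reaches it.
for n in (1, 2, 3, 10):
    print(n, partial_sum(n), 1 - partial_sum(n))
```

For instance, `partial_sum(10)` is 1023/1024, leaving a gap of exactly 1/1024.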

As Aristotle realized, there are two puzzles lurking within Zeno's paradoxes, and only one of them is difficult. First, the worry might be that Zeno's arguments suggest that we can accomplish infinitely many tasks in a finite amount of time (for example, traveling through the infinite sequence of distances 0 to ½, ½ to ¾, ¾ to ⅞ … or inhabiting the infinite sequence of points ½, ¾, ⅞ …). This, however, as Aristotle noted, is a misleading way of characterizing the situation, because if we can divide distances in the way envisioned by Zeno, then we can divide time in the same way (so our minute of time can be divided into the first half of a minute, then the next quarter of a minute, then the next eighth of a minute, …). Rather, according to Aristotle, the puzzle concerns the idea that we can ever complete infinitely many tasks (no matter how they are described). In other words, is the infinite division of either space or time that Zeno envisioned legitimate?

The answer for Aristotle had to be "no," because, as already noted, he equated the infinite with the unbounded or unlimited. As a result it should be impossible to complete infinitely many tasks, because it is impossible to reach the bound or limit of something that, by its nature, is unlimited. Aristotle had little choice but to conclude that there is a mistake lurking within Zeno's argument, and it is in his explanation of this mistake that his potentially/actually infinite distinction makes its appearance.

Zeno's paradoxes result from the fact that line segments are (or at least seem to be) infinitely divisible. Consider an arbitrary (finite) line segment. We can easily divide the line into two halves, producing two parts. Additional divisions, of course, are also possible. The crucial question now arises: How many distinct parts does the line segment contain? In some sense, at least, the correct answer is infinitely many, since for any part we can further subdivide it into two halves, obtaining two more parts. Thus, for any finite number of parts we divide the line into, we can further subdivide those segments to obtain more parts.

Aristotle distinguished this sort of unboundedness—the potentially infinite—from actually infinite collections. On the one hand, a collection is potentially infinite if we can continue to add to it without limit. On the other hand, the actual infinite is, for Aristotle, a completed totality, that is, an unbounded collection that is nevertheless present all at once. In considering the division of a line segment Aristotle writes that "It is always possible to think of a larger number: for the number of times a magnitude can be bisected is infinite. Hence the infinite is potential, never actual; the number of parts that can be taken always surpasses any assigned number" (Physics 207b8).

In understanding Aristotle's potential/actual distinction it is useful to distinguish between three distinct theses:

(1) Any part of a line can be divided into distinct subparts.

(2) A line contains infinitely many distinct parts.

(3) A line contains infinitely small parts.

The first claim, according to Aristotle (and most thinkers since), is undeniable. The third, which asserts the existence of so-called infinitesimals, would be of great importance in the development of the calculus during the seventeenth century. The second claim is, for our purposes, the crucial one. According to Aristotle, claim (2) is ambiguous, having both a true and a false reading.

Aristotle understood the first of these three claims as something like the following: given a time *t* _{1}, any part of a line that exists at *t* _{1} can be divided into distinct subparts at some future time *t* _{2}. Once we recognize the temporal ingredient of claim (1), two distinct readings of (2) become apparent:

(2a) For any number *n*, there is a time *t* such that a line has been (or can be) divided into (at least) *n* parts at *t*.

(2b) There is a time *t* such that a line has infinitely many parts at time *t*.

Although (2b) implies (2a), Aristotle argued, in effect, that (2a) does not imply (2b): Imagine that we have a line segment in front of us, and each hour we divide up each of its parts into two subparts. Then (2a) is true (assuming we live forever and never forget to carry out the divisions) yet (2b) fails, because there is never a particular time at which we have finished dividing. The collection of parts of a line segment is potentially infinite if (2a) is true, and is actually infinite if (2b) holds.

It is important to note that Aristotle's distinction contains both a constructive aspect and a temporal aspect. First, lines (and other objects) are not presented to us already divided into their parts; rather, the division of an object into its components is somehow a construction that we perform on it. Second, these constructions cannot be carried out all at once, but must be carried out one by one in time. This aspect of Aristotle's view is what prevents the potential infinite from collapsing into the actual infinite, because it allows us to distinguish between a series of ever increasing finite collections spread out through time and an infinite collection existing at a particular time.

Aristotle applied the potential versus actual distinction, not just to parts of line segments, but to the natural (counting) numbers 0, 1, 2, 3, … as well (we include 0 and 1 here because they are now considered to be the first two natural numbers, although Aristotle would not have recognized 0, and perhaps not even 1, as a number at all). The natural numbers, for Aristotle, are potentially infinite, because for any number we have counted to, we can always count one (or ten, or one hundred) numbers further. Nevertheless, we will never reach a time when we will have counted out all the numbers.

Aristotle's final contribution lies in his insistence that not only some but all infinities are potential, not actual. Thus, there are no infinitely long line segments, only a potential series of ever longer finite line segments, and no infinitely large collections, only series of larger and larger finite ones.

The denial of the actual infinite provided Aristotle with a solution to Zeno's puzzles: If space and time are only potentially infinite, then the sort of division of space and time necessary for Zeno's construction is illicit. In the paradox of the runner, the runner does not pass through infinitely many different distances (nor does he occupy infinitely many different points, because Aristotle thought that points only exist if a line has in fact been divided into two parts that meet at that point). Rather, given any particular time, there are only finitely many parts into which the distance between 0 and 1 will have been divided. Of course, at a later time we might further subdivide the distance into a larger (but still finite) collection of parts. It does not follow, however, that we have therefore traveled over infinitely many distances, because the parts of the path from 0 to 1 are merely potentially infinite.

For approximately two thousand years Aristotle's view remained unchallenged—philosophers and mathematicians both (for the most part) denied the existence of actual, completed infinities, arguing that the notion of potential infinity could, within both philosophy and mathematics, fulfill any role that the infinite might need to play.

## Rigor at Last: Dedekind Infinity

During the development and rigorization of the calculus (a long, tortuous period stretching from the beginning of the seventeenth century to the end of the nineteenth century) it became evident that the further development of classical mathematics required actual infinities. (The details need not detain us here. Suffice it to say that at least some infinite collections needed to be viewed as complete, because it became necessary to study arbitrary subcollections of these infinite structures.) If Aristotle and his followers were correct, however, and the only viable understanding of the infinite was the potential one, then classical mathematics was facing a crisis.

Fortunately, a number of philosophers and mathematicians stepped into the breach. They were faced with two main mathematical tasks and one philosophical one: first, to provide a rigorous definition of "infinite"; second, to provide a mathematical theory of the infinite that clarified how the behavior of infinite collections differed from the well-known behavior of finite collections; and third, and perhaps most importantly, to provide a philosophical account of the actual infinite that defended its intelligibility. The first two tasks were successfully carried out during the nineteenth century, by Richard Dedekind (1901) and Georg Cantor (1955) respectively.

Before considering definitions of infinity, it is useful to introduce some terminology. The existence of an actually infinite collection involves the idea that such a collection can be presented all at once, as a completed totality—that is, in some sense, as a single thing. Such totalities, considered as single objects, are called *sets*. The central idea behind *set theory* is that, given any collection of objects (or almost any, see the discussion of *Russell's paradox* below), there exists another object—the set containing exactly the original objects. Thus, if we start out with three distinct persons, Alan, Bob, and Carl, we obtain the following sets:

{Alan}, {Bob}, {Carl}, {Alan, Bob}, {Alan, Carl}, {Bob, Carl}, {Alan, Bob, Carl}
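
As a quick illustration (hypothetical code, not part of the original text), the seven sets above can be generated mechanically, since a three-membered collection has 2³ − 1 = 7 nonempty subsets:

```python
from itertools import combinations

people = ["Alan", "Bob", "Carl"]

# All nonempty subsets of a three-element collection: 2^3 - 1 = 7 sets.
subsets = [set(c) for r in range(1, len(people) + 1)
           for c in combinations(people, r)]

for s in subsets:
    print(s)
print(len(subsets))  # 7
```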

Note that the one-membered set {Alan}, called the *singleton* of Alan, is not the same thing as Alan himself, because {Alan} is a set, yet Alan, who is a person, is not. At this point it should be noted, as well, that the collection containing no objects, the so-called *empty set* { } or ∅, although somewhat puzzling (how can there exist a collection formed out of nothing?) is nevertheless accepted as a set, and thus an object, in most accounts of set theory.

Sets are objects, so there is nothing to prevent us from forming collections of these (i.e., sets of sets), obtaining, for example:

{∅, {Alan}, {Bob}, {Alan, Bob}}

This expression names a single object, a set collecting together four other sets.

There are two important relations that can hold between sets and other objects: *membership* and *subsethood*. The members of a set are those objects that were collected together to form the set. Using "*x* ∈ *y* " to express the claim that *x* is a member of *y* (and "∉" for nonmembership), we have:

Alan ∈ {Alan}

{Alan}∈ {∅, {Alan}, {Bob}, {Alan, Bob}}

and:

{Alan} ∉ {Alan}

Alan ∉ {∅, {Alan}, {Bob}, {Alan, Bob}}

Our second notion, subsethood, can be defined in terms of membership. Given two sets A and B, A is a subset of B if and only if every member of A is a member of B. This implies that every set is a subset of itself, and, using "*x* ⊆ *y* " to express the claim that *x* is a subset of *y*, we have:

{Alan, Bob} ⊆ {Alan, Bob, Carl}

{Alan, Bob} ⊈ {∅, {Alan}, {Bob}, {Alan, Bob}}
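
Both relations have direct analogues in many programming languages. The sketch below models them with Python's `frozenset` (an illustrative choice of mine: only hashable objects can be members of a Python set, so sets-of-sets need `frozenset`):

```python
# Membership (the analogue of ∈) is Python's `in`; subsethood (⊆) is `<=`.
Alan, Bob, Carl = "Alan", "Bob", "Carl"
family = frozenset({frozenset(), frozenset({Alan}), frozenset({Bob}),
                    frozenset({Alan, Bob})})

assert frozenset({Alan}) in family       # {Alan} ∈ {∅, {Alan}, {Bob}, {Alan, Bob}}
assert Alan not in family                # Alan ∉ that set: he is not one of its four members
assert {Alan, Bob} <= {Alan, Bob, Carl}  # {Alan, Bob} ⊆ {Alan, Bob, Carl}
assert not {Alan, Bob} <= set(family)    # {Alan, Bob} ⊈ {∅, {Alan}, {Bob}, {Alan, Bob}}
assert {Alan} <= {Alan}                  # every set is a subset of itself
print("all relations check out")
```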

Finally, a *function f* from a set A to a set B, symbolized by:

f: A → B

is a mapping that assigns to each member of A exactly one member of B. For example, there is a function:

f_{1}: {Alan, Bob} → {Alan, Bob, Carl}

which maps Alan to Bob and Bob to Carl (*f* _{1} does not map anyone to Alan). We can express this symbolically as:

f_{1}(Alan) = Bob

f_{1}(Bob) = Carl

Another example is:

f_{2}: {Alan, Bob, Carl} → {Alan, Bob}

mapping Alan to Bob, Bob to Alan, and Carl to Bob:

f_{2}(Alan)= Bob

f_{2}(Bob) = Alan

f_{2}(Carl) = Bob

There are two conditions on functions that will be crucial in what follows. First, a function from A to B is *injective* (or *one-to-one* ) if and only if no two distinct members of A get mapped onto the same member of B. In the examples above, *f* _{1} is injective, but f_{2} is not, because Alan and Carl both get mapped onto Bob. Second, a function from A to B is *surjective* (or *onto* ) if and only if every member of B gets mapped onto by some member of A. f_{2} above is surjective, whereas *f* _{1} is not, because neither Alan nor Bob is mapped onto Alan. A function that is both one-to-one and onto is *bijective*.
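
For finite functions both conditions can be checked mechanically. In this sketch (the helper names `is_injective` and `is_surjective` are mine), a function is represented as a Python dictionary from arguments to values:

```python
def is_injective(f):
    """No two arguments share a value."""
    return len(set(f.values())) == len(f)

def is_surjective(f, codomain):
    """Every member of the codomain is some argument's value."""
    return set(f.values()) == set(codomain)

f1 = {"Alan": "Bob", "Bob": "Carl"}                 # f1: {Alan, Bob} -> {Alan, Bob, Carl}
f2 = {"Alan": "Bob", "Bob": "Alan", "Carl": "Bob"}  # f2: {Alan, Bob, Carl} -> {Alan, Bob}

print(is_injective(f1), is_surjective(f1, {"Alan", "Bob", "Carl"}))  # True False
print(is_injective(f2), is_surjective(f2, {"Alan", "Bob"}))          # False True
```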

We can now consider possible definitions of infinite. One obvious approach suggests itself: to define finite in terms of counting; that is, a set is finite if and only if we can assign 0 to the first member of the set, 1 to the second, 2 to the third, … and at some point reach the last member of the set, to which we assign some natural number. Using the terminology introduced above:

A set A is finite if, and only if, there is a natural number n and a bijective function:

f: A → {0, 1, 2, …, n−1}

Thus, on this definition a set is finite if there is a number n and a bijective function from A to the set of natural numbers less than n. A set is infinite if there is no such function.

For example, the set {Alan, Bob, Carl} is finite because:

f_{3}: {Alan, Bob, Carl} → {0, 1, 2}

f_{3}(Alan) = 0

f_{3}(Bob) = 1

f_{3}(Carl) = 2

is bijective, whereas the set of natural numbers {0, 1, 2, 3, …} is, on this definition, infinite.

While this definition provides the desired results, there is a problem. The definition works by determining whether a set is finite or infinite in terms of whether it can be mapped bijectively onto the natural numbers less than *n*, for some *n*. The reason this works is that we know, for any number *n*, that the set of numbers less than *n* is finite. What could be more obvious? The problem, however, is that the definition lacks the sort of generality required in both mathematics and philosophy. What we want is a criterion that tells us which sets are infinite and which sets are finite. What we have is a criterion that tells us this, assuming that we already know that certain sets of natural numbers are finite.

An appropriately general definition of infinite set was produced by Richard Dedekind in 1888. The definition is based on an insight into infinite collections that traces back to Galileo Galilei (and probably to the ancient Greeks). Galileo noticed that there is a bijective mapping between the natural numbers and the even natural numbers:

f_{4}: {0, 1, 2, 3, …} → {0, 2, 4, 6, …}

provided by mapping each number onto its double:

f_{4}(0) = 0

f_{4}(1) = 2

f_{4}(2) = 4

f_{4}(3) = 6, etc.

Galileo argued that this provided further evidence against the notion of actual infinity, because if both the natural numbers and the even numbers were completed infinities, then the latter would be a part of the former. The existence of the bijective mapping, however, seemed to imply that there was just as much "stuff" in each infinite collection, violating the (at the time sacrosanct) dictum that the part must be less than the whole.
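
Galileo's pairing is easy to exhibit on an initial segment of the naturals (a finite sample can only illustrate, not prove, the claim about the whole infinite collections):

```python
# f4(n) = 2n pairs each natural number with exactly one even number.
naturals = list(range(100))
pairing = {n: 2 * n for n in naturals}

evens = set(pairing.values())
assert len(evens) == len(naturals)         # injective: no two naturals share a double
assert evens == {2 * n for n in naturals}  # onto the sampled even numbers
assert evens < set(range(200))             # yet the evens are a proper part of 0..199
print("as many sampled evens as naturals, though the evens are a proper part")
```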

Dedekind, however, embraced the puzzling nature of this discovery, and proposed an alternative definition that does not rely on prior knowledge that certain sets of natural numbers are finite. Instead, a set is infinite, according to Dedekind, if it can be mapped bijectively onto a proper subset of itself (a subset of a set A is *proper* if it is not identical to A itself):

A set A is *Dedekind infinite* if, and only if, there is a function:

f: A → A

(i.e., a function from A to itself) that is injective but not surjective.

A set is (*Dedekind* -) *finite* if there is no such mapping. With this definition in place, actual, completed infinities are at least partially vindicated, insofar as mathematicians and philosophers now have a general, formal criterion for distinguishing between finite and infinite sets.
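
The contrast Dedekind's definition exploits can be sketched in code: on the natural numbers the successor map n ↦ n + 1 is injective but misses 0, while on a finite set every injective self-map turns out, by brute force, to be surjective as well. (The finite check below examines all 27 self-maps of a three-element set; the infinite half is only sampled, since the full claim is a matter of proof, not computation.)

```python
from itertools import product

# Infinite side: the successor map on N is injective but not surjective.
succ = lambda n: n + 1
sample = range(1000)
assert len({succ(n) for n in sample}) == len(sample)  # injective on the sample
assert all(succ(n) != 0 for n in sample)              # 0 is never a successor

# Finite side: every injective self-map of a finite set is surjective,
# so no finite set can satisfy Dedekind's criterion.
A = [0, 1, 2]
for values in product(A, repeat=len(A)):   # all 27 functions A -> A
    g = dict(zip(A, values))
    injective = len(set(g.values())) == len(A)
    surjective = set(g.values()) == set(A)
    assert injective == surjective
print("finite self-maps: injective iff surjective")
```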

At roughly the same time Bernard Bolzano (2004) produced a competing definition of infinite set: A set A is infinite if there is a nonterminating series of sets B_{1}, B_{2}, B_{3}, … such that each set in the series is a subset of A, each set in the series is a subset of all sets that follow it in the series, and no set in the series is a subset of any set in the series that precedes it. In other words, a set is infinite if it contains a series of subsets that get "bigger" and "bigger" without end. While Bolzano's definition is especially interesting in light of its obvious connection to Aristotle's ideas regarding the infinite, there are legitimate worries regarding whether the definition is of any help, since "nonterminating series" seems synonymous with "infinite series." Thus, this definition, like the one we began with (but unlike Dedekind's), assumes an understanding of the concept that we are trying to define.

With a rigorous definition of infinite set in hand, the next step in securing the notion of actually infinite from Aristotelian worries would be a well worked out theory of the existence and "behavior" of infinite sets. Much of this theory was worked out by Cantor, and will be the subject of the next section. Before examining Cantor's work, however, it should be noted that, even after the work of Dedekind and Cantor, not everyone accepted that infinite totalities existed. A number of influential thinkers denied that Dedekind's definition, interesting or not, applied to any complete collections, returning to Aristotle's distinction between potential and actual infinity (although most took Aristotle's idea of the potentially infinite as being potentially extendable in time to be little more than a metaphor). Post-Dedekind/Cantor views of this sort include Kronecker's finitism, Brouwer's intuitionism, Weyl's constructivism, and the later Wittgenstein.

## Sizes of Infinity: Cantor's Infinite Numbers

Georg Cantor began with the idea that infinite sets, like finite sets, have a corresponding number. Such numbers, which measure how many things are contained in a set, are called *cardinal numbers* (or *cardinalities* ). We represent the cardinal number of a set A as Card(A).

The idea that infinite collections have cardinal numbers, or, equivalently, that there are "infinite" numbers coming after the natural numbers 0, 1, 2, 3 … , was rejected prior to the nineteenth century on the grounds that infinite numbers required actual infinite totalities, and this in turn (so the story went) implied a contradiction. For example, the great seventeenth-century philosopher, logician, and mathematician Gottfried Leibniz wrote that "I proved beyond any doubt that the number or multitude of all numbers implies a contradiction, if taken as a unitary whole. I think that the same is true of the largest number" (1849, p. 535). One notable aspect of this line of thought is the assumption that the existence of infinite numbers implies the existence of a largest infinite number, or a number of all numbers.

This assumption traces to a rather basic intuition: won't any two "unbounded" or "unlimited" collections, whether complete or potential, contain the same number of elements (or members), because both just keep going and going? Cantor's great contribution to both mathematics and philosophy was his discovery that the answer to this question is "no."

Cantor begins, not by asking which number should be attached to a particular set, but by asking what criteria can be given for deciding when two sets have the same number (independently of which particular number or numbers are involved). To illustrate the difference between the two approaches, consider the following situation: you have two baskets of fruit, one containing apples, the other containing oranges, and you need to determine whether the number of apples in the first basket is the same as the number of oranges in the second basket.

There are two strategies. First, you could count the apples, count the oranges, and then determine if the two numbers are the same. This strategy corresponds to the first approach, where we first assign particular numbers to sets and then compare them. On the second approach, you repeatedly remove one apple from the first basket and one orange from the second, until you run out of either apples or oranges. When you reach a stage where you cannot remove a pair consisting of one apple and one orange, there are three options: Either there are apples left in the first basket, in which case there are more apples than oranges; or there are oranges left in the second basket, in which case there are more oranges than apples; or both baskets are empty, in which case the number of apples is the same as the number of oranges. (The second strategy, while perhaps more efficient, is less informative, because you do not, in the end, know how many apples or oranges were in the baskets, but only whether or not there was more of one than the other. A similar difficulty arises with regard to the *continuum hypothesis*, which will be discussed below.) The second "pairing" strategy amounts to nothing more or less than attempting to construct a bijective mapping between the set of apples and the set of oranges.

Cantor's insight was in noticing that, although extending the first strategy (counting and comparing numbers obtained) to infinite sets is not viable until we have a well-worked-out theory of infinite cardinal number (which was exactly what he was attempting to formulate), the second strategy can be applied to infinite sets (almost) as easily as to finite ones.

The following principle, a version of what has come to be called *Hume's Principle* and which we can call *Hume's Principle for Sets*, sums up Cantor's approach:

HPS: For any sets A and B: Card(A) = Card(B) if, and only if, there is a bijective function f: A → B.

We supplement this with the following definition of "less than" for cardinal numbers:

Def_{<}: For any sets A and B: Card(A) < Card(B) if, and only if, there is an injective function f: A → B but no surjective function g: A → B.

This definition agrees with the intuitive one for finite sets. For example, we can verify that Card({Alan, Bob}) < Card({Alan, Bob, Carl}) (i.e., two is less than three) by noting that there is an injective mapping from the first to the second (*f* _{1} above provides one of the six possible injective mappings) but there is no surjective mapping, because one of the three members of the latter set will always be "missed."
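For finite sets like these, both halves of the verification can be carried out by exhaustive search over all nine functions from the pair to the triple (an illustrative sketch, not part of the original text):

```python
from itertools import product

A = ["Alan", "Bob"]
B = ["Alan", "Bob", "Carl"]

# All |B|^|A| = 9 functions from A to B, each represented as a dictionary.
functions = [dict(zip(A, values)) for values in product(B, repeat=len(A))]

injections = [f for f in functions if len(set(f.values())) == len(A)]
surjections = [f for f in functions if set(f.values()) == set(B)]

print(len(injections))   # 6 injective maps, as noted above
print(len(surjections))  # 0 -- no surjection exists, so Card(A) < Card(B)
```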

Consider Galileo's puzzle again. Galileo showed that there was a bijective mapping between the set of natural numbers {0, 1, 2, 3 …} and the set of even natural numbers {0, 2, 4, 6 …}. Thus, Card({0, 1, 2, 3 …}) = Card({0, 2, 4, 6 …}). More surprisingly, Cantor also proved that the set of rational numbers (i.e., all numbers that can be written as a fraction a/b where a and b are both natural numbers) has the same cardinal number as the set of natural numbers. Cantor called this number ℵ_{0}, and a set is *countable* if, and only if, it is either finite or its cardinal number is ℵ_{0} (reflecting the fact that such sets are the same size as some set of natural, or "counting" numbers). Dedekind's definition of infinite set implies that ℵ_{0} is the smallest infinite cardinal number; in other words, there is no infinite set that cannot be mapped surjectively onto the natural numbers.
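
One way to see the countability of the rationals is Cantor's zigzag enumeration, sketched below for the positive rationals: march through the fractions a/b along the diagonals a + b = 2, 3, 4, …, skipping values already produced, so that every positive rational is eventually paired with exactly one natural number. (The generator `rationals` is an illustrative reconstruction, not Cantor's own presentation, and it covers only the positive rationals.)

```python
from fractions import Fraction
from itertools import islice

def rationals():
    """Yield every positive rational exactly once, diagonal by diagonal."""
    seen = set()
    s = 2                          # s = numerator + denominator
    while True:
        for a in range(1, s):
            q = Fraction(a, s - a)
            if q not in seen:      # skip duplicates such as 2/4 = 1/2
                seen.add(q)
                yield q
        s += 1

print(list(islice(rationals(), 10)))
# [1, 1/2, 2, 1/3, 3, 1/4, 2/3, 3/2, 4, 1/5]
```

The nth rational yielded is, in effect, the rational assigned to the natural number n, which is exactly the bijective pairing the text describes.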

Cantor used ℵ_{1}, ℵ_{2}, ℵ_{3} … as names for the second, third, fourth … infinite numbers, and sets with these cardinal numbers are called *uncountable*. Providing names for infinite cardinal numbers is one thing, however; showing that there are sets that have those numbers is something else. Thus, Cantor's next task was to demonstrate that there were infinite sets that do not receive ℵ_{0} as their number, that is, that there are infinite sets "bigger than" the set of natural numbers. He did this in two ways.

The first strategy depends on a second species of number, the *ordinal numbers*. Whereas cardinal numbers measure how many members a set has, ordinal numbers are a measure of particular orderings on that set (compare one, two, three … with first, second, third …). In other words, ordinal numbers attach, not to sets by themselves, but to a set plus an ordering on that set, and the same (infinite) set can correspond to different ordinal numbers if we consider different orderings on it.

More carefully, an ordinal number attaches to a pair consisting of a set A and an ordering ≤ on A. If a set/ordering pair 〈A, ≤〉 is to receive an ordinal number, then the relation ≤ must be a well-ordering (we call such pairs well-ordered sets, and represent the ordinal number of a well-ordered set 〈A, ≤〉 as Ord(A, ≤)).

We begin with the notion of a *totally ordered set*. An ordering ≤ is a total ordering on A (i.e., 〈A, ≤〉 is a totally ordered set) if, and only if, it satisfies the following three conditions (here and below I assume familiarity with the notation of first-order logic):

Antisymmetry: (∀x)(∀y)((x ≤ y ∧ y ≤ x) → x = y)

Transitivity: (∀x)(∀y)(∀z)((x ≤ y ∧ y ≤ z) → x ≤ z)

Comparability: (∀x)(∀y)(x ≤ y ∨ y ≤ x)

More intuitively, a relation on a set A is a total ordering if, and only if: (i) given two distinct objects in A, it cannot be the case that the first is less than or equal to the second and the second is less than or equal to the first (if so, then they would be the same object); (ii) for any three objects in A, if the first is less than or equal to the second, and the second is less than or equal to the third, then the first is less than or equal to the third; and (iii) given any two objects in A, either the first is less than or equal to the second, or the second is less than or equal to the first (this implies that any object in the ordering is less than or equal to itself). Two examples of total orderings are the natural numbers {0, 1, 2, 3 …} on their standard ordering (i.e., 0 ≤ 1 ≤ 2 ≤ 3 ≤ …), and the integers {… −3, −2, −1, 0, 1, 2, 3 …} on their standard ordering (i.e., … ≤ −3 ≤ −2 ≤ −1 ≤ 0 ≤ 1 ≤ 2 ≤ 3 ≤ …).

A totally ordered set 〈A, ≤〉 is a *well-ordered set* if and only if the following additional condition holds:

Well-foundedness: (∀B ⊆ A)(B ≠ ∅ → (∃x ∈ B)(∀y ∈ B)(x ≤ y))

Loosely put, if ≤ is a well-ordering on a set A, then every nonempty subset of A has a least element, and hence there is no "infinitely descending chain" in 〈A, ≤〉; that is, there is no infinite sequence *x* _{1} > *x* _{2} > *x* _{3} > *x* _{4} > … (although there can be infinitely ascending chains *x* _{1} < *x* _{2} < *x* _{3} < *x* _{4} < …). The standard ordering on the natural numbers is a well-ordering, whereas the standard ordering on the integers is not, because the negative integers form an infinitely descending chain.
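The least-element condition can likewise be checked exhaustively when A is finite, as in the following illustrative Python sketch (the helper names are invented). Note that every finite total order automatically passes this test; the integers fail only through infinite subsets, such as the negatives, which no finite check can exhibit:

```python
from itertools import chain, combinations

def has_least(B, leq):
    # B has a least element if some x in B satisfies x <= y for every y in B
    return any(all(leq(x, y) for y in B) for x in B)

def is_well_ordered_finite(A, leq):
    # Finite surrogate for well-foundedness: every nonempty subset has a least element
    A = list(A)
    subsets = chain.from_iterable(combinations(A, r) for r in range(1, len(A) + 1))
    return all(has_least(B, leq) for B in subsets)

print(is_well_ordered_finite(range(5), lambda x, y: x <= y))  # True
print(has_least([-1, -2, -3], lambda x, y: x <= y))           # True: any finite set
                                                              # of negatives has a least
```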

Finally, we need the notion of an *order-preserving* function from one ordered set to another:

Given two ordered sets 〈A, ≤_{1}〉 and 〈B, ≤_{2}〉, and a function *f* : A → B, *f* is *order preserving* if and only if, for any two members *x* and *y* of A, *x* ≤_{1} *y* if, and only if, *f* (*x*) ≤_{2} *f* (*y*).

In other words, if we take two members of A, where the first is, according to the ordering on A, less than or equal to the second, then an order preserving function will map the first object onto a member of B that is, according to the ordering on B, less than or equal to the member of B onto which the second object is mapped.
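The biconditional in this definition can again be tested exhaustively on finite orderings. In the illustrative Python sketch below, scaling by ten preserves the standard order on {0, 1, 2}, whereas negation reverses it:

```python
def order_preserving(A, leq1, leq2, f):
    # f preserves order when x <=1 y holds exactly when f(x) <=2 f(y)
    return all(leq1(x, y) == leq2(f(x), f(y)) for x in A for y in A)

A = [0, 1, 2]
std = lambda x, y: x <= y
print(order_preserving(A, std, std, lambda x: 10 * x))  # True: scaling preserves order
print(order_preserving(A, std, std, lambda x: -x))      # False: negation reverses it
```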

We can now provide an analogue of *Hume's Principle* for ordinals, which we can call the *Order Type Principle For Sets* (OTP):

OTP: For any well-ordered sets 〈A, ≤_{1}〉 and 〈B, ≤_{2}〉: Ord(A, ≤_{1}) = Ord(B, ≤_{2}) if, and only if, there is an order preserving bijective function *f* : A → B.

and provide an analogous definition of *less than* for ordinal numbers:

Def: For any well-ordered sets 〈A, ≤_{1}〉 and 〈B, ≤_{2}〉: Ord(A, ≤_{1}) < Ord(B, ≤_{2}) if, and only if, there is an order preserving injective function *f* : A → B, but no order preserving surjective function *f* : A → B.

Given this definition, Cantor was able to prove that the ordinal numbers, ordered by < as defined above, are themselves well-ordered.

Cantor also proved that the ordinal number of the natural numbers on their standard ordering is the smallest infinite ordinal number, which he called ω. (An ordinal is infinite if it is the ordinal of some well-ordered set 〈A, ≤〉 where A is infinite; an ordinal is countable if it is the ordinal of some well-ordered set 〈A, ≤〉 where A is countable; and so on.)

The standard ordering on the natural numbers, however, is not the only way to order them. Instead, we might move zero from the beginning to the end, so that (on this ordering) 1 is the least natural number, and zero is greater than any other natural number (i.e., 1, 2, 3, 4, 5 … 0). The ordinal corresponding to this ordering is greater than ω, and there is no ordinal less than this one but greater than ω. Thus, this ordinal, called ω + 1 (because it consists of a "copy" of ω followed by a single element) is the second infinite ordinal. Continuing in this way, we can then move 1 from the beginning to the end (i.e., 2, 3, 4, 5 … 0, 1) obtaining ω + 2, and then ω + 3, ω + 4, … We can then consider the ordering consisting of all of the even natural numbers followed by all of the odd natural numbers (i.e., 0, 2, 4, 6, … 1, 3, 5, 7 …), whose ordinal number is ω + ω, and so on.
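These reorderings of the natural numbers can be made concrete as comparison functions. The Python sketch below (the function names are invented for illustration) implements the ω + 1 ordering, on which 0 is the greatest element, and the ω + ω ordering, on which every even number precedes every odd number:

```python
def leq_omega_plus_1(x, y):
    # The ordering 1, 2, 3, ... 0: zero is moved to the end, above every other number
    if x == 0:
        return y == 0
    return y == 0 or x <= y

def leq_omega_plus_omega(x, y):
    # The ordering 0, 2, 4, ... 1, 3, 5, ...: all evens precede all odds
    if x % 2 == y % 2:
        return x <= y
    return x % 2 == 0

print(leq_omega_plus_1(5, 0))            # True: 0 is now the greatest element
print(leq_omega_plus_omega(1000000, 1))  # True: any even precedes any odd
```

Both comparators are total orders on exactly the same underlying set, the natural numbers, which is precisely why the same set can carry different ordinal numbers.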

Because there are many different ways of well-ordering the natural numbers, resulting in different countable ordinals, a natural question to ask is how many different countable ordinals there are (or, equivalently, how many different types of well-ordering can be constructed from the natural numbers). Cantor proved that the cardinal number of the set of natural numbers (what Cantor called the *first number class* ) is less than the cardinal number of the set of countable ordinals (the *second number class* ), and that there is no set whose cardinal number is greater than the former and less than the latter. In other words, ℵ_{1}, the second infinite cardinal number, is the cardinal number of the set of countable ordinals.

We can then go on to ask about the cardinal number of the set of ordinals of well-ordered sets of size ℵ_{1} (i.e., how many different types of well-ordering are there on a set of objects of size ℵ_{1}?). The answer is ℵ_{2}. How many ordinals of size ℵ_{2}? Surprisingly, it's ℵ_{3}! And so on. In this way, Cantor managed to use ordinal numbers to prove the existence of a series of sets of cardinality ℵ_{1}, ℵ_{2}, ℵ_{3}, and even ℵ_{ω} (the first infinite cardinal number that is larger than infinitely many other infinite cardinal numbers). In fact, for any ordinal number α, there is a set whose cardinal number is ℵ_{α}.

Cantor had a second means by which to prove that there are uncountably infinite sets, a method that relies on the notion of *powerset*. The powerset of a set A (or ℘(A)) is the set that contains exactly the subsets of A. For example:

℘({Alan, Bob}) = {∅, {Alan}, {Bob}, {Alan, Bob}}.

Notice that whereas {Alan, Bob} has two members, its powerset has 2^{2} = 4 members. This holds more generally: if κ is the cardinal number of a set A, then 2^{κ} is the cardinal number of ℘(A) (even for infinite sets).
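A Python sketch makes the counting claim vivid for finite sets (the helper name powerset is ours; Python's itertools supplies the subset enumeration):

```python
from itertools import chain, combinations

def powerset(A):
    # Enumerate every subset of A, from the empty set up to A itself
    A = list(A)
    return [set(s) for s in
            chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))]

P = powerset({"Alan", "Bob"})
print(len(P))      # 4: a two-element set has 2^2 = 4 subsets
print(set() in P)  # True: the empty set is always among them
```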

Cantor proved that, for any set A, Card(A) < Card(℘(A)). The method of proof is known as the method of *diagonalization*, and generalizations of it have become immensely important in mathematics. For more technically interested readers, a proof follows (readers not interested in purely mathematical matters may skip the next paragraph).

We can provide an injective function from A to ℘(A) by mapping each member of A onto its singleton (because, for each member of A, its singleton is a subset of A and thus a member of ℘(A)). No mapping from A to ℘(A) can be surjective, however. Let f: A → ℘(A) be an arbitrary function from A to its powerset. Define the set B as follows: for any object *x, x* ∈ B if, and only if, *x* ∈ A and *x* ∉ *f* (*x* ). Assume, for *reductio*, that there is *c* ∈ A such that *f* (*c* ) = B. By the definition of B, we have *c* ∈ B if, and only if, *c* ∈ A and *c* ∉ *f* (*c* ), which implies that *c* ∈ B if, and only if, *c* ∉ B. Contradiction, so *f* cannot be surjective.
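The diagonal construction in the proof can be carried out concretely for a finite set and one particular function *f*, as in the following illustrative Python sketch; the resulting set B is guaranteed to be missed by *f* :

```python
def diagonal_witness(A, f):
    # Cantor's diagonal set B = {x in A : x not in f(x)}
    return {x for x in A if x not in f(x)}

A = {0, 1, 2}
f = {0: set(), 1: {0, 1}, 2: {1, 2}}.get  # one particular map from A into its powerset
B = diagonal_witness(A, f)
print(B)                          # {0}: 0 is not in f(0), while 1 and 2 are in their images
print(all(f(x) != B for x in A))  # True: B differs from every value of f
```

For finite sets the conclusion is no surprise (a set of n elements has 2^n subsets, so no map can be surjective), but the same construction works verbatim for infinite sets, which is the substance of Cantor's theorem.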

Thus, the cardinality of any set is less than the cardinality of its powerset, and, as a result, for any cardinal number ℵ_{κ} , ℵ_{κ} < 2^{ℵκ}. We might wonder why anyone would make all this fuss over powersets, however. Haven't we already seen that we can construct larger and larger sets, and thus obtain larger and larger cardinal numbers, using the ordinal numbers?

There are two aspects of Cantor's diagonalization result that are noteworthy. The first concerns the import of cardinality results for ordinary mathematics. We might wonder why everyday mathematicians (and ordinary nonmathematicians) should worry about different "sizes" of infinity, because neither ordinary folk nor most professional mathematicians run across infinite sets of ordinal numbers in their everyday business. Cantor's result, however, connects the theory of cardinal numbers to more intuitive, everyday mathematical concerns, because the cardinal number of the set of real numbers (i.e., the cardinal number of the set containing all the numbers on the continuous number line) is 2^{ℵ0}. As a result, there are more points on a continuous line than there are natural numbers! Because real numbers and lines are commonly used within basic mathematics, this result provides a direct connection between Cantor's theory and the practice of everyday measurement and mathematics.

The second reason that Cantor's result regarding powersets is interesting is that it introduced one of the great unsolved problems of mathematics. One might wonder, because we have the notation ℵ_{0}, ℵ_{1}, ℵ_{2}, … ℵ_{ω} … for the series of cardinal numbers, why we use a different notation (2^{ℵ0}) for the cardinal number of the powerset of the natural numbers. The reason is simple: Although we know that 2^{ℵ0} is larger than ℵ_{0}, we do not know how much larger. In particular, we do not know whether Cantor's *continuum hypothesis* :

2^{ℵ0} = ℵ_{1}

is true or false. The truth of the continuum hypothesis amounts to the claim that there are no sets that are strictly larger than the set of natural numbers yet smaller than the set of real numbers. (Our account of cardinal numbers failed to settle this problem because our bijection strategy tells us whether one number is larger without necessarily telling us how much larger.)

More generally, for any ordinal number α, we can ask whether:

2^{ℵα} = ℵ_{α+1}

is true. The claim that the above is true for all ordinals α is known as the *generalized continuum hypothesis*.

Despite great effort, Cantor (and others who followed him) failed to settle the issue one way or another. In retrospect, this is not surprising. During the 1940s Kurt Gödel (1986, 1989) proved that if the standard principles of set theory are consistent, then they do not allow one to refute the continuum hypothesis; in other words, adding the continuum hypothesis to standard set theory does not lead to inconsistency. Roughly two decades later Paul Cohen (1963, 1964) proved that the same basic axioms fail to prove the continuum hypothesis as well (again, assuming that standard set theory is consistent). As a result, we can add either the continuum hypothesis or its negation to standard set theory, and either way no contradiction results.

## The Infinitely Large Today: Zermelo-Fraenkel Set Theory

Of course, the Gödel/Cohen result is not all that interesting until one knows what the standard axioms of set theory are. The theory in question is called *Zermelo-Fraenkel set theory* (or ZFC) and consists of the following axioms and axiom schemes (assume here that the quantifiers range only over sets).

First, we have the *axiom of extensionality* :

Extensionality: (∀x)(∀y)(x = y ↔ (∀z)(z ∈ x ↔ z ∈ y))

which says that there cannot be two distinct sets with exactly the same members (i.e., sets are individuated by their members). Next, we have two purely existential axioms, that is, axioms that assert outright the existence of objects. The *empty set axiom* :

Empty Set: (∃x)(∀y)¬(y ∈ x)

states that there is a set that contains no members (i.e., ∅). The *axiom of (Zermelo) infinity* :

Infinity: (∃x)(∅ ∈ x ∧ (∀y)(y ∈ x → {y} ∈ x))

states that there is a set that contains the empty set and the singleton of every set which it contains (i.e., the set contains ∅, {∅}, {{∅}}, {{{∅}}} …).
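The sets whose existence the axiom guarantees can be imitated in Python using frozensets, with the empty frozenset standing in for ∅ (a finite fragment only, of course, since the axiom asserts the existence of an infinite set):

```python
# A finite fragment of the set guaranteed by Zermelo's axiom of infinity,
# with Python frozensets standing in for sets and frozenset() for the empty set
stages = [frozenset()]
for _ in range(3):
    stages.append(frozenset({stages[-1]}))  # form the singleton of the previous stage

for s in stages:
    print(len(s))  # 0, 1, 1, 1: after the empty set, each stage is a singleton
```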

Following these we have what we can call conditional existence axioms, which tell us which sets can be "built up" from previously existing objects. The first of these is the *pairing axiom* :

Pairing: (∀x)(∀y)(∃z)(∀w)(w ∈ z ↔ (w = x ∨ w = y))

The pairing axiom asserts that, given any two objects, there is a set that contains exactly those two objects and nothing else. The pairing axiom guarantees that the singleton of any object exists (because we can just take the pair of an object and itself). Next we have the *union axiom* :

Union: (∀x)(∃y)(∀z)(z ∈ y ↔ (∃w)(w ∈ x ∧ z ∈ w))

which states that, given any set, there is a second set that contains exactly the members of the members of the first. Unsurprisingly, we also have an axiom asserting the existence of the powerset of any set:

Powerset: (∀x)(∃y)(∀z)(z ∈ y ↔ (∀w)(w ∈ z → w ∈ x))

Our next principle, the *axiom of choice*, states that, given any set of nonempty pairwise disjoint sets, there exists a second set that contains exactly one member from each of the sets contained in the original set. The axiom of choice is often replaced with a more easily understood, but provably equivalent, principle called the *well-ordering principle* :

Well-Order: (∀x)(∃R)(〈x, R〉 is a well-ordering)

The well-ordering principle guarantees that for any set, no matter how large, there is a relation that well-orders its members. During the first half of the twentieth century the status of the axiom of choice was highly controversial; since then it has become a standard part of the everyday mathematician's toolkit.

Our final two conditional existential principles take the form, not of single axioms, but axiom schemes, which have infinitely many instances. The first is the *axiom(s) of separation*. Given any condition Φ expressible in the language of set theory:

Separation: (∀x)(∃y)(∀z)(z ∈ y ↔ (z ∈ x ∧ Φz))

This axiom states that, given any set and any condition on objects, there exists a set that contains exactly the members of the original set that satisfy the condition in question. The second schematic principle is the *axiom(s) of replacement*, which consists of all instances of:

Replacement: (∀x)(∃y)(∀z)(z ∈ y ↔ (∃w)(w ∈ x ∧ f(w) = z))

where *f* is any function definable in the language of set theory. Put loosely, given any set and any function, replacement ensures that there is a set containing exactly the objects obtained by applying the function to the members of the original set.

The final axiom of *Zermelo-Fraenkel set theory* does not assert the existence of any sets, but instead imposes a restriction on what sorts of sets exist. The *axiom of foundation* :

Foundation: (∀x)((∃y)(y ∈ x) → (∃z)(z ∈ x ∧ ¬(∃w)(w ∈ x ∧ w ∈ z)))

asserts that any nonempty set (i.e., any set other than ∅) contains as a member some second set that has no members in common with the original set. Although it is difficult to sum up the consequences of this axiom in simple terms, its main purpose is to rule out the existence of sets that contain themselves, such as the (potential) set that has itself as its only member, that is, Ω = {Ω}.
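The condition imposed by foundation can be tested directly on hereditarily finite sets modeled as Python frozensets (an illustrative fragment; the helper name is invented). Fittingly, frozensets cannot even be constructed so as to contain themselves, which loosely mirrors the axiom's effect:

```python
def satisfies_foundation(x):
    # A nonempty set x should contain some member z with no members in common with x
    if not x:
        return True  # foundation only constrains nonempty sets
    return any(not (z & x) for z in x)

empty = frozenset()
a = frozenset({empty})     # the set {emptyset}
b = frozenset({empty, a})  # the set {emptyset, {emptyset}}
print(all(satisfies_foundation(s) for s in (empty, a, b)))  # True
```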

When confronted with such a list, a number of questions naturally arise, including: (1) Might there be a simpler set of principles that does the same job?; (2) Why have we chosen these principles?; (3) Might there be additional principles that we have overlooked? Complete answers to all of these questions are beyond the scope of the present article, but partial answers can at least be given.

Regarding the first question, we can rule out one initially promising simplified theory, *Naive Set Theory*, which has one axiom schema. The principle in question is the *naive comprehension principle* :

Naive Comp: (∃y)(∀z)(z ∈ y ↔ Φ(z))

which states that, for any condition whatsoever (as long as we can express it in our set theoretic language), there is a set containing exactly the objects that satisfy that condition (note the similarity to separation above). The naive comprehension principle entails all of the axioms given above. Unfortunately, however, it also entails a contradiction, as was famously proved by Bertrand Russell (1996).

The reasoning, which has come to be known as Russell's paradox, proceeds as follows. Given naive comprehension, some sets will be members of themselves (such as the set of all sets) and some will not (such as the empty set). Because a set's not being a member of itself is a condition expressible in the language of set theory (i.e., *x* ∉ *x* ), if the following instance of naive comprehension:

(∃*y* )(∀*z* )(*z* ∈ *y* ↔ *z* ∉ *z* )

were true then there would be a set that contains exactly those sets that are not members of themselves. Call this (supposed) set the *Russell Class*, or R (collections too ill-behaved to form a set are called *proper classes*, although this terminology obscures the fact that such "classes" cannot be objects at all). To obtain the contradiction, we need only ask whether R is a member of itself. By the criteria just stated, we can conclude that R is a member of itself if, and only if, it is not a member of itself. The contradiction is evident. (Similar arguments demonstrate that there cannot be a set of all ordinals, a set of all cardinals, or a set of all sets.)
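The final step of the argument can be miniaturized as a truth-value check: if R existed, its characteristic function r would have to satisfy r(R) = not r(R), and no truth value does. A one-line Python illustration:

```python
# If the Russell Class R existed, its characteristic function r would satisfy
# r(R) == (not r(R)); we confirm that no truth value b satisfies b == (not b)
print(any(b == (not b) for b in (True, False)))  # False: there is no consistent answer
```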

Thus, we must reject naive comprehension, and with it any conception of set that allows for the existence of the Russell Class. ZFC provides us with one means to achieve this. The next question, however, is why we have chosen these axioms, because presumably there are other collections of principles that would do the same job. The answer is that the axioms were not chosen at random, nor was their selection based merely on their individual plausibility. Instead, these axioms were selected because they correspond to natural thoughts regarding how one should separate the legitimate sets from those "collections," such as the Russell Class, that are not well-behaved enough to correspond to sets.

There are, roughly speaking, two intuitive pictures of the universe of sets that have motivated the formulation of ZFC. The first, and more influential, is founded on the idea that each set is built up from other sets or objects that are simpler, or at least prior to, the set in question. This notion of set is known as the *iterative conception of set*, and is summarized by George Boolos:

According to the iterative, or cumulative, conception of sets, sets are formed at stages; indeed, every set is formed at some stage of the following "process": at stage 0 all possible collections of individuals are formed … The sets formed at stage 1 are all possible collections of sets formed at stage 0, … The sets formed at stage 2 are all possible collections of sets formed at stages 0 and 1. The sets formed at stage 3 are all possible collections of sets formed at stages 0, 1, and 2 … The sets formed at stage 4 … In general, for any natural number *n*, the sets formed at stage *n* are all possible collections of sets formed at stages earlier than *n*, i.e. stages 0, 1, … , *n* − 1. Immediately after all stages 0, 1, 2, … there is a stage, stage ω. The sets formed at stage ω are, similarly, all possible collections of sets formed at stages earlier than ω, i.e., stages 0, 1, 2, … After stage ω comes stage ω+1: at which … In general, for each α, the sets formed at stage α are all possible collections of sets formed at stages earlier than α. There is no last stage: each stage is immediately followed by another. Thus there are stages ω+2, ω+3, … and so it goes. (1989, p. 88)

Another notion that has been used to justify the particular choice of axioms constituting ZFC (and which is closer to what Cantor originally had in mind) is the *limitation of size conception of set*. The underlying thought is that problematic collections, such as the Russell Class, are not sets because they are, in some sense, too big. Boolos sums up the limitation of size conception as the view that "objects form a set if and only if they are not in one-one correspondence with all the objects there are" (1989, p. 90), in other words, a collection forms a set if, and only if, it is smaller than the universe of all sets or all objects.

The exact role that such conceptions of set can, and should, play within the philosophy and practice of set theory is something of an open question. One particular worry surrounding such intuitive pictures is that the axioms of *ZFC* do not all seem to follow from a single conception. For example, the powerset axiom seems basic on the iterative conception of set but less obvious on a limitation of size understanding, whereas the axiom of replacement seems straightforward on the latter approach but somewhat questionable on the iterative conception. Nevertheless, we can at least recognize that these conceptions played a significant role in the actual choice of axioms included within ZFC.

The final question to ask is whether there might be additional axioms that can be added to the basic theory. The answer, of course, is "yes." We have already seen one such principle, the continuum hypothesis. As we noted, we can add either the continuum hypothesis or its negation to ZFC, and in either case we obtain a consistent theory. If one thinks there is a unique universe of sets (and debate rages over this question) then at most one of these theories can be correct. Even so, the Gödel/Cohen results are enough to guarantee that ZFC alone leaves at least some set theoretic questions unanswered.

There is another type of question to which ZFC provides only a partial answer, namely determining how many sets there are. The axioms of ZFC imply that the universe of sets is larger than any particular set; otherwise the axiom(s) of replacement would provide us with a set of all sets from which, using the axiom(s) of separation, we could obtain the Russell Class and a contradiction. Thus, we can ask: How big must the universe be in order to satisfy the axioms of ZFC?

Before providing the answer, we need a few more definitions. First, the *supremum* of a set of cardinal numbers A (i.e., sup(A)) is the smallest cardinal greater than or equal to each of the cardinal numbers in the set. Second, a cardinal number κ is *regular* if and only if, given any set of cardinals A, if card(A) < κ and, for every cardinal γ ∈ A, γ < κ, then sup(A) < κ (intuitively, a cardinal κ is regular if we cannot reach it by summing up smaller cardinals unless we add up at least κ-many smaller cardinals). Finally, a cardinal κ is *strongly inaccessible* if and only if it is uncountable, it is regular and, for each cardinal γ less than κ, 2^{γ} is also less than κ.

We can now answer the question just raised: any collection of objects that satisfies the axioms of ZFC has a cardinal number at least as big as the first strongly inaccessible cardinal number. (We are fudging here a bit, because we have assumed that only sets have cardinal numbers. One should consult a good textbook to see the formal details.)

If there must be (according to the axioms of set theory) at least strongly inaccessibly many objects, our next question might be: Why should there not be a set that is that big? The answer, according to set theorists, is that there is (or at least, it is worth exploring the assumption that there is). Thus, one of the most common axioms added to ZFC is the *strong inaccessible cardinal axiom*, which is equivalent to the claim that there is a set whose cardinal number is strongly inaccessible.

We need not stop here. Once we have determined how big the universe needs to be in order to satisfy all of the axioms of ZFC plus the strong inaccessible cardinal axiom, we can add axioms that assert the existence of even larger sets (and thus larger cardinal numbers). This process can continue indefinitely.

Thus, recognition of the fact that there is no set of all sets and no largest set leads us to posit the existence of larger and larger sets, resulting in stronger and stronger theories. Principles that assert the existence of extremely large sets are called *large cardinal axioms* (technically, a *large cardinal* is any cardinal number whose existence cannot be proved from the axioms of ZFC alone). Surprisingly, large cardinal axioms often have consequences for less esoteric areas of mathematics, such as the theory of the real numbers, although for the most part they fail to impact the status of the continuum hypothesis. In addition, adopting stronger theories including large cardinal axioms allows us to prove the consistency of weaker theories. As a result, the study of large cardinal axioms is one of the most fruitful areas of research within set theory.

The introduction of large cardinal axioms is one instance of a more general method for generating new set theoretic principles. The method, known as *reflection*, assumes that, for any sentence true of the entire set theoretic universe, there will be a set such that the sentence will also be true when restricted to that set. Thus, letting *SI* be the claim that the universe contains strongly inaccessibly many sets, reflection tells us that, because *SI* is true, *SI* must be true when restricted to some set (i.e., there is a set that contains strongly inaccessibly many other sets as members, or, equivalently, there is a set whose cardinal number is a strongly inaccessible cardinal).

Interestingly, reflection principles imply (and in some cases are equivalent to) strong versions of the axiom of choice. For example, let Φ be the claim that there is no well-ordering on the entire universe of sets. Reflection implies that if Φ is true, then there is some set whose contents cannot be well ordered. The axiom of choice is equivalent to the claim that every set can be well-ordered, however, so, if we accept both the axiom of choice and reflection, then Φ must be false; in other words, there must be a relation that well-orders the entire universe of sets. This thesis, known as *global choice*, is stronger than the axiom of choice.

We can now observe the great irony of the theory of the infinite. The revolution in the mathematics and philosophy of the infinite occurred in late nineteenth century when mathematicians such as Cantor and Dedekind abandoned the Aristotelian view that infinite collections could not, and should not, be thought of as completed totalities (i.e., sets). Their account of infinite totalities, however, and subsequent work on large cardinals, suggests a universe of sets that in some sense is ever expanding, with no upper limit to the size or variety of sets themselves. As a result, Aristotle was at least partially right, because there are collections (such as the collection of all objects, or all sets) that cannot, and should not, be thought of as completed, definite totalities.

Instead, the universe of sets, considered as a whole, is what has come to be called *indefinitely extensible*. Michael Dummett, reflecting on the paradoxes that arise when some intuitive "collections" are treated as sets (such as the paradox Russell discovered), described the situation as follows:

A concept is indefinitely extensible if, for any definite characterization of it, there is a natural extension of this characterization, which yields a more inclusive concept; this extension will be made according to some general principle for generating such extensions, and, typically, the extended characterization will be formulated by reference to the previous, unextended characterization. (1963, pp. 195–196)

Understanding a "definite characterization" as the contents of a set, any time we think we have collected together all the objects, or all of the sets, into a single set, there will turn out to be more objects, or sets, that we somehow missed. To put the point loosely, Cantor's embrace of the actually infinite has led us, in the end, to the recognition that the set theoretic hierarchy itself is (or is something very much like) potentially infinite.

## Actual Infinity and Infinite Divisibility

As we saw at the beginning of this entry, Aristotle's solution to Zeno's paradoxes relied on the fact that we do not accomplish infinitely many things when we run from point 0 to point 1, because it is never the case that all of the parts of this journey (including the infinite sequence of distances 0 to ½, ½ to ¼, ¼ to ⅛, …) are present at one time. This solution relied, in turn, on the distinction between potential and actual infinity and on Aristotle's insistence that the actual infinite is illusory. If, however, our post-Cantorian mathematical view allows, and even embraces, actually infinite collections, then we are left with the possibility that the problematic parts of the runner's path from 0 to 1 are actually present as a completed totality. If so, then, assuming that motion is possible, it follows that we can accomplish infinitely many tasks (and we do so every time we wiggle our little finger!).

The most tempting response to this line of thought is "So what?" We might be surprised to learn that any action (no matter how slight) involves infinitely many tasks, but this astonishing fact is little more than the result of a clever (and perhaps misleading) description of the event. The movement from 0 to 1 might be, on one description, composed of infinitely many smaller motions, but it can also be viewed as a single continuous action, that of moving from 0 to 1. It is this latter fact that explains how we can accomplish the movement, and furthermore do so in a finite amount of time.

A task that consists of infinitely many subtasks carried out in a finite amount of time (such as Zeno's description of the runner traveling from 0 to 1) is called a *supertask*. Once we admit that Zeno was right, and we can sometimes carry out supertasks, problems emerge. There are other easily describable supertasks that seem to lead to paradoxes. Two of the most famous are *Thomson's Lamp* and *Benardete's Paradox*.

Imagine an ideal lamp (indestructible, and able to be switched on or off instantaneously) that, at exactly 12:00, is turned on. ½ minute later, it is switched off, then ¼ minute later it is switched back on, then ⅛ minute later it is switched off, and then 1/16th minute later it is switched on, … The infinite series of switchings will be completed at exactly 12:01. The puzzling question that arises is: Will the lamp be on or off once the supertask is completed? There seems to be no good reason to answer one way rather than another. Nevertheless, a lamp, even an ideal one, must be either on or off.
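The arithmetic behind the puzzle is easy to check: the switching times form the series ½ + ¼ + ⅛ + …, whose partial sums converge to one minute, while the lamp's state simply alternates without converging to anything. An illustrative Python sketch of the first ten switchings:

```python
from fractions import Fraction

state = True          # the lamp is switched on at exactly 12:00
elapsed = Fraction(0)
states = []
for n in range(1, 11):
    elapsed += Fraction(1, 2 ** n)  # wait 1/2, then 1/4, then 1/8 ... of a minute
    state = not state               # and flip the switch
    states.append(state)

print(float(elapsed))  # 0.9990234375: the switching times approach (but never reach) 12:01
print(states[-2:])     # [False, True]: the state keeps alternating, with no limiting value
```

The simulation makes the difficulty plain: the elapsed time has a well-defined limit of one minute, but the sequence of states has no limit at all, which is precisely why the question "on or off at 12:01?" has no obvious answer.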

Thomson's Lamp is similar to Zeno's paradox of the runner in that it divides a unit of time into the first half, the first half of the second half, the first half of the last fourth, the first half of the last eighth, … Benardete's paradox, however, forces us to accept bizarre (if not outright contradictory) conclusions using the mirror image of this division, namely dividing a unit of time into the last half, the last half of the first half, the last half of the first fourth, the last half of the first eighth, … (more intuitively, we can see it as having the structure … + 1/32 + 1/16 + ⅛ + ¼ + ½ instead of ½ + ¼ + ⅛ + 1/16 + 1/32 + …).

José Benardete constructs a number of different versions of his paradox; here we will consider a particularly striking formulation. Imagine an object that exists up until 12:00, when an infinite series of gods take notice of it. The gods are omnipotent and, additionally, they always carry out the actions they decide upon. The first god decides that if the object still exists at ½ minute after 12:00 then he will annihilate it (he will do nothing otherwise). The second god decides to annihilate the object if it still exists at ¼ minute past 12:00 (and again, do nothing otherwise). The third god decides to annihilate the object at ⅛ minute after 12:00 if it still exists, and so on. There is no threat to the existence of the object other than the intentions of each of the infinite series of gods. We can conclude that the object will suddenly cease to exist at exactly 12:00, yet nothing (and in particular, no god) will have caused its destruction.

First of all, assume that the object exists past 12:00. Then there must be some fraction of a minute 1/*x* such that the object existed for (at least) that long after 12:00. If so, however, then one of the gods failed to live up to his intentions, because there will be some god in the list who decided to destroy the object if it still existed at 1/*y* minutes past 12:00, where 1/*y* is less than 1/*x* (because the series ½, ¼, ⅛, 1/16 … tends to zero). But this contradicts the fact that the gods always carry out their intentions. Thus, the object will cease to exist at 12:00.
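The key step, that any positive survival time is undercut by some god's deadline, can be illustrated in Python (the function name is invented): given any positive t, the loop finds an n with 1/2^n less than t.

```python
from fractions import Fraction

def annihilating_god(t):
    # Find the index n of a god whose deadline 1/2**n of a minute falls before time t;
    # such an n exists for every positive t, so no survival time is consistent with the story
    n = 1
    while Fraction(1, 2 ** n) >= t:
        n += 1
    return n

print(annihilating_god(Fraction(1, 100)))  # 7: 1/128 of a minute is less than 1/100
```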

The natural assumption to make is that one or more of the gods must have destroyed it, but we can see this is incorrect as well. Each of the gods decided to destroy the object at a time after 12:00 (and to do nothing otherwise). Because the object did not survive past 12:00, the gods did nothing. Thus, the object blinks out of existence at exactly 12:00 yet nothing acted in such a way as to cause its disappearance. Even if not exactly paradoxical, Benardete's paradox is deeply puzzling.

Of course, unlike Zeno's runner, both of these puzzles begin with an absurd situation. On the one hand we have a lamp that can be turned on and off arbitrarily fast and infinitely many times without malfunction; on the other, we have an infinite collection of gods (most of us nowadays still puzzle over whether there is even one god!). It is tempting to conclude that the proper reaction to these puzzles should be mere amusement but not worry, because they concern situations that are at such a distance from our everyday experience of the world.

This would be a mistake. Both Thomson's Lamp and Benardete's paradox are intended to challenge our understanding of the infinite and infinite divisibility. To dodge such challenges by noting that they require physically impossible situations or events is to miss the point. Surely neither an indestructible lamp nor an infinite pantheon of gods is logically impossible, and if they are logically possible then their behavior is relevant to our understanding of the infinite (which is, after all, a logical, or at least mathematical, concept). Thus, these puzzles, and others like them, are not mere curiosities but instead represent important unsolved problems confronting the coherence of, and our understanding of, the notion of infinity at the heart of mathematics and philosophy.

This entry ends in roughly the same way in which it began: with a discussion of infinite divisibility. This is the topic on which Aristotle focused, and where his notion of the potentially infinite gained its greatest (albeit temporary) success. Even as we abandoned Aristotle's proscription of the actually infinite and embraced modern set theory, puzzles such as Russell's paradox and the nonexistence of a set of all sets prevented us from rejecting the potentially but not actually infinite altogether. Finally, given our acceptance of at least some actually infinite collections, paradoxes similar in structure to those that Aristotle first considered continue to plague our understanding of collections that keep going and going and going …

**See also** Aristotle; Bolzano, Bernard; Brouwer, Luitzen Egbertus Jan; Cantor, Georg; Constructivism and Conventionalism; Continuity; Galileo Galilei; Gödel, Kurt; Intuitionism; Leibniz, Gottfried Wilhelm; Plato; Russell, Bertrand Arthur William; Set Theory; Weyl, (Claus Hugo) Hermann; Wittgenstein, Ludwig Josef Johann; Zeno of Elea.

## Bibliography

### Historical Sources

Aristotle. *The Physics*. Book 8. Translated by Daniel W. Graham. Oxford, U.K.: Clarendon Press, 1999.

Bolzano, Bernhard. *The Mathematical Works of Bernhard Bolzano*. Translated by Steve Russ. Oxford, U.K.: Oxford University Press, 2004.

Cantor, Georg. *Contributions to the Founding of the Theory of Transfinite Numbers*. New York: Dover, 1955.

Dedekind, Richard. *Essays on the Theory of Numbers*. Translated by Wooster Woodruff Beman. New York: Dover, 1963.

Leibniz, Gottfried. *Mathematische Schriften*. 7 vols., edited by C. I. Gerhardt. Berlin: A. Asher, 1849–1863.

Leibniz, Gottfried. *Philosophical Papers and Letters*. 2nd ed. Translated by Leroy Loemker. Dordrecht, Netherlands: Kluwer, 1976.

Plato. *Philebus*. Translated by R. Hackforth. Cambridge, U.K.: Cambridge University Press, 1972.

Russell, Bertrand. *The Principles of Mathematics*. New York: Norton, 1996.

### Contemporary Views on Infinity and Set Theory

Boolos, George. "Iteration Again." *Philosophical Topics* 17 (1989): 5–21. Reprinted in *Logic, Logic, and Logic*, edited by Richard Jeffrey, 88–104. Cambridge, MA: Harvard University Press, 1999.

Cohen, Paul. "The Independence of the Continuum Hypothesis." *Proceedings of the National Academy of Sciences USA* 50 (1963): 1143–1148.

Cohen, Paul. "The Independence of the Continuum Hypothesis II." *Proceedings of the National Academy of Sciences USA* 51 (1964): 105–110.

Enderton, Herbert. *Elements of Set Theory*. New York: Academic Press, 1977.

Gödel, Kurt. *Collected Works*. Vol. 1, *Publications 1929–1936*. Oxford, U.K.: Oxford University Press, 1986.

Gödel, Kurt. *Collected Works*. Vol. 2, *Publications 1938–1974*. Oxford, U.K.: Oxford University Press, 1990.

Kunen, Kenneth. *Set Theory: An Introduction to Independence Proofs*. Amsterdam: North Holland, 1998.

Moore, A. W. *The Infinite*. 2nd ed. New York: Routledge, 2001.

### Zeno's Paradoxes and Supertasks

Benardete, José. *Infinity: An Essay in Metaphysics*. Oxford, U.K.: Clarendon Press, 1964.

Earman, John, and John Norton. "Infinite Pains: The Trouble with Supertasks." In *Benacerraf and His Critics*, edited by Adam Morton and Stephen Stich. Cambridge, MA: Blackwell, 1996.

Kirk, Geoffrey, John Raven, and Malcolm Schofield. *The Presocratic Philosophers*. Cambridge, U.K.: Cambridge University Press, 1983.

*Roy T. Cook (2005)*
