Conventionalism

In the physical sciences, some very basic facts or principles appear to have a status that is difficult to categorize: not simply empirically discovered; not purely analytic (true by virtue of already established meanings); fundamental, but without quite being ordinary physical laws. Incompatible-looking alternative principles are conceivable; sometimes we can even see how an alternative physical framework could be built on them. Such principles are held, by some philosophers, to be true by convention. They are parts of our physical theories that had to be conventionally chosen by us over other incompatible postulates, whether or not we were overtly aware of this element of choice.

The most famous examples of putative conventional truths are to be found in our theories of space and time, and will be discussed below. But some are not directly related to space and/or time: in classical physics, Isaac Newton's famous 2nd law, F = ma, is an example. This law at first sight looks like it cannot be a convention, for surely, as the center of his mechanics, the 2nd law is far from true by stipulation. But in the Newtonian paradigm, the 2nd law served as the ultimate arbiter of the questions (a) whether an external force is, in fact, acting on a given object; and (b) if so, what its magnitude is. While not necessarily immune to rejection or revision in the long run, this principle was nevertheless a postulate that helped constitute the meaning of other terms such as "force," and functioned as something akin to a definition, with a warrant akin to apriority.

Conventionalism of the Duhem-Quine sort holds that one can always maintain the truth of the 2nd law (or any other conventional truth), come what may. The backing for this claim comes from the holism of scientific theory testing (Pierre Duhem) or, more generally, of conceptual frameworks (W. V. O. Quine): since the things we hold true form an interconnected web, any one belief or postulate that faces apparent disconfirmation may be preserved as "true," as long as we are willing to make enough compensatory adjustments among other beliefs. (For example, the 2nd law may be held true in the face of motions that appear not to conform to it, as long as we are willing to postulate the existence of hitherto-undiscovered forces acting on the relevant bodies.)
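
Schematically (my gloss, not Duhem's or Quine's own notation), the preservation move runs as follows: if a body's observed acceleration fails to match what the known forces predict, the 2nd law itself is never blamed; instead a hitherto-unknown force is posited to make up the difference,

    \sum F_{\text{known}} \neq m\,a_{\text{obs}} \quad\Longrightarrow\quad F_{\text{hidden}} := m\,a_{\text{obs}} - \sum F_{\text{known}},

so that the 2nd law holds by construction. The discovery of Neptune from anomalies in the orbit of Uranus is the classic success story of exactly this move.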

However, one can usually imagine circumstances in which unbearable tensions arise in our conceptual frameworks from the insistence on retaining the putative conventional principle, and one is effectively forced to give it up. If this is right, then the original claim of conventionality (that the principle in question is a mere stipulation or definition) looks like something of an exaggeration. Are there in fact any choices in the creation of adequate physical theory that are genuinely free, conventional choices (as, for example, the choice of units is), without being completely trivial (as, again, the choice of units is)? Many philosophers have thought that space-time structures give us true examples of such conventionality. The debates over conventionalism form a significant fraction of twentieth-century philosophy of space and time, and work continues in a wave of recent books and papers (Friedman 1999, Ryckman 2004). We cannot hope to do justice here to the depth and complexity of the arguments of the major thinkers, but will limit ourselves to introducing the main themes and key arguments, directing the reader to further resources in the bibliography.

Conventionalism about Space

Before the nineteenth century all philosophers of nature assumed the Euclidean structure of space; it was thought that Euclid's axioms were true a priori. The work of Nikolai Lobachevsky, Bernhard Riemann, and Carl Friedrich Gauss destroyed this belief; they demonstrated, first, that consistent non-Euclidean constant-curvature geometries were possible, and later that even variably curved space was consistent and analytically describable. But what, exactly, does it mean to say that space is Euclidean or Riemannian? A naïve-realist interpretation can of course be given: there exists a thing, space; it has an intrinsic structure; and that structure conforms to Euclid's (or Riemann's) axioms. But space, so described, is not observable in itself; only the material phenomena governed by physical laws are. When philosophers gave attention to this fact, they realized that our physical theories always contain assumptions or postulates that coordinate physical phenomena with spatial and temporal structures. Light rays in empty space travel in straight lines, for example; rigid bodies moved through space without stresses do not change their length; and so on. So-called axioms of coordination are needed to give meaning and testability to claims about the geometry of space.

The need for axioms of coordination seems to make space for conventionalism. For suppose that, under our old axioms of coordination, evidence starts to accumulate that points toward a non-Euclidean space (triangles made by light rays having angles summing to less than 180 degrees, for example). We could change our view of the geometry of space; but equally well, say conventionalists, we could change the axioms of coordination. By eliminating the postulate that light rays in empty space travel in straight lines (perhaps positing some "universal force" that affects such rays), we could continue to hold that the structure of space itself is Euclidean. According to the strongest sorts of conventionalism, this preservation of a conventionally chosen geometry can always be done, come what may. Henri Poincaré (1902/1952) defended the conventionality of Euclidean geometry; but he also conjectured that it would always be simpler to construct mechanics on the assumption of Euclidean geometry. (Poincaré argued, on the basis of the work of Hermann von Helmholtz and Sophus Lie on the free mobility of rigid bodies, that the possible geometries among which we must make a conventional choice are just three: Euclidean, Riemannian, and Lobachevskian. He thus did not consider the possibility of the variably curved space-time introduced by Albert Einstein in 1912.)
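
A worked version of the trade-off (standard differential geometry, not Poincaré's own presentation): in a space of constant curvature K, a triangle with geodesic sides and area A has angle sum

    \alpha + \beta + \gamma = \pi + K A .

If light-ray triangles are measured with angle sums below π, one may conclude that K < 0 (Lobachevskian geometry); or one may hold K = 0 fixed and posit a universal force that deflects the light rays from the straight lines of the retained Euclidean geometry. The measurements themselves cannot decide between the two bookkeepings.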

Poincaré did not defend a wide-ranging, Duhem-Quine style of conventionalism; rather, his view might better be thought of as neo-Kantian (as was also true of Hans Reichenbach in his first book on space and time [1920]). The Euclidean status of space is a convention that plays the role of a constitutive, a priori axiom with respect to mechanics and the rest of physics. By contrast, arithmetic, although Poincaré held it to be synthetic, was not conventional for him: we have no choice but to regard it as true. But when it comes to the choice between Euclidean and (say) Lobachevskian geometry for real space, Poincaré's defense of the tenability of the Euclidean convention becomes essentially an instance of the Duhem-Quine thesis: by making compensatory adjustments in our physics (specifically, introducing "universal forces" of the right sort), we can continue to hold that space is Euclidean even if direct measurements with rods and light rays do not conform to that geometry. We will explore this idea further via Reichenbach, its most vigorous proponent in the twentieth century.

Reichenbach introduces the basic argument for conventionalism with an example that has become a classic:

In the example, G is a surface on which some two-dimensional beings live; we suppose that the surface is "really" flat almost everywhere, except for a non-flat hump centered around the point A'. The people on G can "know" that they have this hump in their space because of the way the angles of triangles measure out inside the hump (summing to more than 180 degrees, for instance). Using their measuring rods, they regard the segments A'B' and B'C' as equal in length.

Now we suppose that G is actually made of glass, and light shining from above casts shadows of everything on G onto the plane E below. People on E have their own measuring rods and so on. Let us suppose that, as it happens, the measuring rods on E behave exactly like the shadows of the G-rods: declaring AB congruent to BC, for example. Reichenbach has us suppose that there is a heating source under E that causes measuring rods to expand as they approach A, with no heat beyond the limits of the shadow of the hump. If the beings on E know nothing of heat and how it expands measuring rods, what will they conclude? Like the G-people, they will conclude that their space has a non-Euclidean hump in it, centered on A.

The example brings to light the apparent impossibility, at least under the described circumstances, of determining whether one "really" lives in a curved space, or in a flat Euclidean space with certain "universal" forces affecting things like measuring rods, light rays, and so on. (Reichenbach's "universal" forces affect every object in exactly the same way and cannot be shielded against; they are clearly modeled on the force of gravity, for reasons we will see below.) Now Reichenbach makes the key move in arguing for his conventionalism about physical geometry, and it is based on his verificationist empiricism: given that there is no way in principle to determine which of these is the case, we should reject the question itself as a mistake based on false presuppositions. There is no fact of the matter, in the case discussed, about whether G is "really" flat or instead "really" has a hump. A conventional choice must be made concerning whether to keep the geometry flat or not; after that, one can determine the presence (or absence) of universal forces as required.

We should note the irony here: in order to introduce his conventionalism, Reichenbach had to present us with hypothetical cases in which there is a nonconventional fact of the matter about the intrinsic geometry of space, and then argue that we should disbelieve in these facts after all. Realists about space (or space-time) respond to Reichenbach precisely on this point: the fact that we cannot determine the geometry of space beyond any possibility of doubt, owing to the logical possibility of other physical theories postulating a different geometry, does not entail that there is no fact of the matter, unless, of course, one subscribes to the most far-reaching of verificationist views of meaning and truth.

But Reichenbach is not quite so easily dismissed. The "intrinsic" geometries of G and E were introduced as a crutch for the imagination, to get us ready to see how the combination of a geometry G and a set of physical postulates about forces, F, is only testable (hence meaningful) as a whole. Once the point is understood, we realize that the "intrinsic" geometries by themselves had no significance. The combination of G and F together, by contrast, is both meaningful and testable: the E-residents can certainly tell that their world is such that, if it is held to have a Euclidean space, then there are universal forces acting in the A-region; or, if it is held to have no universal forces, then space is non-Euclidean in the region around A. Which they decide to adopt is up to them, and the decision is a conventional one (perhaps based on simplicity or convenience).
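
The point can be put in modern notation (a schematic gloss, not Reichenbach's own formalism): in two dimensions any measured metric can locally be written as a rescaled flat metric,

    g^{\text{obs}}_{ij}(x) = \Omega^2(x)\,\delta_{ij} ,

and the equation can be read in two ways: as a genuinely curved geometry with no universal forces, or as the flat metric \delta_{ij} together with a universal force that stretches every measuring rod by the factor \Omega(x) at the point x. Experiment fixes only the left-hand side; how to split it between G and F is the conventional element.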

Anti-Conventionalist Responses

Roberto Torretti (1983) and others have criticized Reichenbach's notion of a universal force, arguing that (a) gravity does not meet the criteria established by Reichenbach; and (b) physicists would never take such a stipulated, truly unverifiable concept seriously. Recalling the analogy used by Reichenbach (and Poincaré before him) of the deformation of a measuring rod by heat, notice that real material objects respond differently to temperature changes: steel expands markedly when heated, for example, while many ceramics barely change. The differential response of different materials to the physical "force" of heat is crucial to its playing a significant role in physics. Gravity, too, is a force that affects different objects differently: a meter-stick made of steel with ball-shaped ends will change its length little, if at all, when held vertical in a gravity field like the Earth's; but a meter-stick made of foam rubber with steel ball-shaped ends will change significantly in the same gravity field. The force of gravity is indeed universal in the sense of (1) affecting all massive bodies equally per unit of mass, and (2) being unshieldable. But the universal forces Reichenbach discusses would appear to be rather different, affecting all bodies equally on a per-unit-volume basis, so as to change their sizes by exactly the same amount regardless of internal constitution. Torretti argues that there are no forces in real physics that act in such a way.

Interestingly, though it was not known to Reichenbach, there are illustrations of potentially conventional elements discernible in classical Newtonian gravitation theory, though not involving Reichenbachean universal forces. The arguably conventional choices are in fact two-fold (see Friedman 1983). First, one may add an arbitrary (constant) universal acceleration to every body in space: this acceleration changes no observable phenomena, and in fact is implemented simply by adding a term to the gravitational potential Φ. This extra term in the potential generates a universal gravitational force that accelerates every object at the same rate, seemingly a real-science example of Reichenbach's universal forces, but in fact different: the force does not deform any body's shape or behavior relative to other bodies, and hence does not change the Euclidean geometry of space. Physicists customarily choose Φ so as to make such overall acceleration equal to zero.
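
In symbols (a standard observation about the Newtonian potential, stated schematically): replacing

    \Phi(x) \;\longrightarrow\; \Phi'(x) = \Phi(x) - \mathbf{a} \cdot \mathbf{x}

changes the force per unit mass from -\nabla\Phi to -\nabla\Phi + \mathbf{a}; that is, it adds the same acceleration \mathbf{a} to every body whatsoever. Relative accelerations between bodies, which are all that observation tracks, are untouched; the customary stipulation \mathbf{a} = 0 is just that, a stipulation.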

Second, mathematicians in the early twentieth century discovered how to transform Isaac Newton's gravity theory into a curved-space-time theory analogous to Einstein's General Theory of Relativity (GTR, about which more below): in this formulation there are no gravity forces, and instead the local geometry of space-time is curved, non-Euclidean. (Note, however, that this is only true of space-time, not of space on its own; space remains flat, i.e., Euclidean.) Still, the example illustrates the conventionalist point, as the schematic equations below show: we might have had to choose whether to consider space-time flat and let gravity be a universal force explaining why things do not always move on straight-line paths (geodesics); or instead to eliminate the "force" of gravity, allow that our space-time is curved, and hold that all bodies follow geodesics of the curved geometry of space-time. If we imagine that physics had really turned out to show our world perfectly Newtonian, it is easy to see that we might find the conventionalist viewpoint attractive, compared to a realism that denies us the possibility of knowing what sort of space-time we live in, whether we are moving uniformly or instead with a frightful acceleration, and so forth.
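
Schematically, in the geometrized ("Newton-Cartan") formulation, simplified here: one and the same family of trajectories can be written either as forced motion in a flat space-time,

    m\,\frac{d^2 x^i}{dt^2} = -m\,\partial_i \Phi ,

or as free fall along the geodesics of a curved space-time whose connection absorbs the potential,

    \frac{d^2 x^i}{dt^2} + \Gamma^i_{\;tt} = 0 , \qquad \Gamma^i_{\;tt} = \partial_i \Phi .

The observable motions are identical; what differs is whether \Phi is booked as a force or as geometry.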

Conventionalism and GTR

Discussions of conventionalism took a dramatic turn because of the work of Einstein. With its variably curved space-time, GTR obviously posed new challenges and opportunities for both sides of the debate over the conventionality of geometry. In the first half of the twentieth century, GTR was widely viewed as vindicating a significant conventionalism or neo-Kantian "constitutive a priori" element in physics. In addition to Reichenbach, Ernst Cassirer, Moritz Schlick, and Adolf Grünbaum are some notable figures of twentieth-century philosophy who argued for the conventionality of space-time's geometry in the context of GTR (see Ryckman [2004] for an extensive and nuanced discussion of this early interpretive wave). Recent scholars have tended to be skeptical that any nontrivial conventionalist thesis is tenable in GTR; Michael Friedman, Torretti, and Hilary Putnam are prominent examples here.

Grünbaum, a student of Reichenbach's, recast the arguments for conventionalism in a non-epistemological form, more suited to the post-positivist climate of the 1960s and 1970s. He also brought forth a novel argument for the conventionality of geometry, based on the intrinsic metrical amorphousness of a continuous space. If space were composed of discrete atoms or chunks, it would thereby have a built-in metric: the distance between the ends of a meter stick would be determined by the number of space-atoms traversed by the line of its center, for example. But if, as most physical theories postulate, space(-time) is a continuum, then it cannot have any such built-in metric. (Grünbaum seems to be thinking of space as, intrinsically, just a topological manifold.) The metrical properties must be imposed extrinsically, by phenomena and bodies existing in the space (or space-time). And again, we must adopt conventions about which processes, bodies, etc. are taken as constitutive of the geometry of space. See Grünbaum (1973) and Friedman (1983) for extensive discussion.
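
The contrast can be put in a schematic formula (my gloss, not Grünbaum's notation). In a discrete space, length comes for free by counting:

    L(\text{segment}) = N \cdot \ell_0 ,

where N is the number of space-atoms the segment traverses and \ell_0 is the atomic unit. In a continuum there is no counting measure; length must instead be supplied by a metric tensor,

    L(\text{curve}) = \int \sqrt{g_{ij}\,dx^i\,dx^j} ,

and nothing in the bare point-set or its topology singles out one g_{ij} over a smoothly rescaled rival. On Grünbaum's view, the choice of g must therefore be made by convention, by coordinating it with rods, light rays, or other physical processes.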

Einstein's GTR gave impetus to conventionalism in several ways; here we will mention just one. Consider Einstein's Equivalence Principle (EP), which says that a body that is uniformly accelerating (e.g., a rocket ship) may consider itself as "at rest," but in the presence of a gravitational field that pulls everything downward. Conversely, according to the EP, a body freely "falling" under a gravitational force may equally well consider itself as "at rest" in a space without any gravitational forces. (The EP was, we see, implicit in our discussion of the two conventional elements in Newtonian gravity theory above.) A strong reading of this principle leads to the view that the existence or nonexistence of a gravitational field is not a fact "out there" in the world, but rather something that we must arbitrarily decide. However, since gravitational fields (i.e., regions of local curvature of space-time) caused by bodies like planets and stars can be empirically distinguished from gravity-free regions (the EP is only true "locally," and to first approximation in small regions), the apparent freedom to choose turns out to be illusory.
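
A minimal sketch of the uniform case (textbook-standard, not tied to any particular text): transforming to coordinates that accelerate with the rocket,

    x' = x - \tfrac{1}{2} g t^2 ,

turns a force-free body obeying d^2x/dt^2 = 0 into one obeying d^2x'/dt^2 = -g, indistinguishable from a body at rest in a uniform downward gravitational field of strength g. Only over extended regions, where real gravitational fields are inhomogeneous (tidal effects), does the difference become detectable; hence the EP's restriction to small regions.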

Conventionality of Simultaneity

But it was in 1905, rather than 1915, that Einstein gave the greatest boost to conventionalism. In the astounding first few pages of "On the Electrodynamics of Moving Bodies," the paper that introduced the Special Theory of Relativity (STR), Einstein overthrew the Newtonian view of space-time structure, and noted in passing that part of the structure with which he intended to replace it had to be chosen by convention. That part was simultaneity. Einstein investigated the operational significance of the claim that two events at different locations happen simultaneously, and realized that it must be defined in terms of some clock-synchronization procedure. The obvious choice for such a procedure was to use light signals: observer 1 sends a signal at event A; observer 2 (at rest relative to observer 1) receives and reflects it back at event B; and observer 1 receives the return signal at event C. Event B is then simultaneous with the event E that lies temporally midway between A and C.
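
In symbols, writing t_A and t_C for the emission and return times on observer 1's clock, the Einstein convention assigns the reflection event B the time

    t_B = t_A + \tfrac{1}{2}\,(t_C - t_A) ,

that is, it locates B halfway through the round trip.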

Or is it? To suppose that it is, is to assume that the velocity of light on the trip from A to B is the same as its velocity from B to C (or, more generally, that light has the same velocity in a given frame, in all directions). This seems like a very good thing to assume. But can it be verified? Einstein thought not. All ways of directly measuring the one-way velocity of light seemed to require first having synchronized clocks at separated locations. But if this is right, we are going in circles: we need to know light's one-way velocity to properly synchronize distant clocks, but to know that velocity we need antecedently synchronized clocks.

To break the circle, Einstein thought we needed to make a conventional choice: we stipulate that event E is simultaneous with B (i.e., that light's velocity is uniform and direction-independent). Other choices are clearly possible, at least for the purposes of developing the dynamics and kinematics of STR. Following Reichenbach, these are synchronizations with ε ≠ ½ (ε being the proportion of the round-trip time taken by the outbound leg alone, freely specifiable between 0 and 1 exclusive). Adopting one of these ε ≠ ½ choices is equivalent to stipulating that the velocity of light is different in different spatial directions, without offering any physical reason for the difference, which some philosophers and physicists would find objectionable. It is also a recipe for calculational misery of a very pointless kind. But the Einstein of 1905, and many philosophers since then, thought that such a choice cannot be criticized as objectively wrong. Ultimately, they say, distant simultaneity is not only frame-relative but partly conventional. It is important to see how different the situation is from Newtonian physics, in which there is no upper limit to the velocity of causal signals. In Newtonian physics, as long as we prohibit objects from moving "backward" in time, the existence of arbitrarily high velocities means that, given a specific event here, only one instant of time there can be chosen as simultaneous; that is, there is no scope for conventionality of simultaneity at all.
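
Reichenbach's generalized synchrony assigns B the time

    t_B = t_A + \varepsilon\,(t_C - t_A) , \qquad 0 < \varepsilon < 1 ,

which is equivalent to one-way light speeds of c/(2\varepsilon) on the outbound leg and c/(2(1-\varepsilon)) on the return. Every choice yields the same measured round-trip speed c, which is why no round-trip experiment can discriminate among them; Einstein's convention is the symmetric case ε = ½.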

Many philosophers have been skeptical of the conventionality of simultaneity in STR. In 1967 Brian Ellis and Peter Bowman argued that slow clock transport offers a means of synchronizing distant clocks that is independent of the velocity of light. Their idea was this: in STR, of course, when a clock is accelerated from rest in a given frame up to some constant velocity, then decelerated to rest again at a distant location, there are the notorious time-dilation effects that prevent us from regarding the clock as having remained in synch with clocks at its starting point (the accelerated clock will have fallen behind the rest clock, though this can, again, only be directly verified if it is brought back to its starting place for comparison with the rest clock). And calculation of the size of the effect depends on having established a distant-simultaneity convention (i.e., a choice of ε). So it looks as though carrying a clock from observer 1 to observer 2 will not let us break the circle.

But Ellis and Bowman noted that the time-dilation effect tends to zero as clock velocity goes to zero, and that this is independent of ε-synchronization. Therefore an "infinitely slowly" transported clock allows us to establish distant synchrony, and so to measure light's one-way velocity. Infinitely slow transport is not, of course, a practical method for synchronizing clocks. The point is rather this: since we can prove mathematically that the time-dilation effect goes to zero as the velocity of transport approaches zero, we can establish the conceptual point that the one-way velocity of light is nonconventional. Conventionalists were not persuaded, and the outcome of the fierce debate provoked by Ellis and Bowman's paper was not clear (Norton 1986).
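
The limiting behavior is easy to exhibit under standard (ε = ½) synchrony, where the computation is usually done. A clock carried a distance d at speed v takes coordinate time \Delta t = d/v and registers the proper time

    \Delta\tau = \Delta t \sqrt{1 - v^2/c^2} ,

so its lag behind a stay-at-home clock is

    \Delta t - \Delta\tau \approx \frac{d\,v}{2c^2} \;\longrightarrow\; 0 \quad \text{as } v \to 0 .

Ellis and Bowman's contention was that this vanishing of the lag in the slow-transport limit holds for any ε, so the limit can itself be used to define distant synchrony.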

In 1977 David Malament took up the conventionalist challenge from a different perspective. One way of interpreting the claim of conventionalists such as Grünbaum is this: the observable causal structure of events in an STR-world does not suffice to determine a unique frame-dependent simultaneity choice. By "causal structure" we mean the network of causal connections between events; loosely speaking, two events are causally connectable if they could be connected by a material process or light signal. In STR, the "conformal structure" or light-cone structure at all points is the idealization of this causal structure. It determines, for a given event, which events could be causally connected to it (toward the past or toward the future). Grünbaum and others believed that the causal structure of space-time by no means singles out any preferred way of cutting up space-time into "simultaneity slices."

Malament showed that, in an important sense, they were wrong. The causal/conformal structure of Minkowski space-time does pick out a unique frame-relative foliation of events into simultaneity slices. Or, more precisely, the conformal structure suffices to determine a unique relation of orthogonality. If we think of an ε-choice as the choice of how to make simultaneity slices relative to an observer in a given frame, then Malament showed that the conformal structure is sufficient to define a unique, orthogonal foliation, which corresponds to Einstein's ε = ½ choice. But most conventionalists do not view Malament's result as a refutation of their view (Janis 1983), in part because Malament's proof starts from assumptions that are arguably already in violation of the spirit of the conventionalist's view.
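
In outline (a standard gloss on the theorem, not Malament's full proof): relative to an inertial observer with four-velocity u, the relation the conformal structure picks out is Minkowski orthogonality,

    q \;\text{simultaneous with}\; p \quad\Longleftrightarrow\quad \eta_{ab}\,(q-p)^a u^b = 0 ,

whose hyperplanes are exactly the ε = ½ slices. Malament's theorem states that this is the only nontrivial equivalence relation definable from the observer's worldline and causal connectability alone.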

In recent years philosophers have begun to consider whether quantum theories may shed light on the debates concerning simultaneity; see Gunn and Vetharaniam (1995) and Karakostas (1997) for arguments for and against the idea that quantum field theory refutes the conventionalist claim. Bain (2000) shows that, while one can indeed formulate quantum field physics in coordinate systems corresponding to ε ≠ ½ simultaneity by choosing the generally covariant formulation of the theory (i.e., a formulation that is valid, roughly speaking, in any coordinate system whatsoever), doing so requires the introduction of a new mathematical object or "field" whose role is basically to represent the standard orthogonal simultaneity slices. That is, while one can nominally "choose" a simultaneity standard different from ε = ½, the compensatory adjustments one is forced to introduce in order to make the theory work are such that one can see the "true" temporal structure lurking just under the surface. Conventionalists and anti-conventionalists disagree, of course, over whether the scare quotes may be removed from the "true" in this verdict.

See also Philosophy of Physics; Space; Time.

Bibliography

Bain, J. "The Coordinate-Independent 2-Component Spinor Formalism and the Conventionality of Simultaneity." Studies in History and Philosophy of Modern Physics 31B (2000): 201–226.

DiSalle, R. "Spacetime Theory as Physical Geometry." Erkenntnis 42 (1995): 317–337.

Duhem, P. The Aim and Structure of Physical Theory. Translated by P. Wiener. Princeton, NJ: Princeton University Press, 1954.

Ellis, B., and P. Bowman. "Conventionality in Distant Simultaneity." Philosophy of Science 34 (1967): 116–136.

Friedman, M. Foundations of Space-Time Theories: Relativistic Physics and Philosophy of Science. Princeton, NJ: Princeton University Press, 1983.

Friedman, M. Reconsidering Logical Positivism. Cambridge, U.K.: Cambridge University Press, 1999.

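Grünbaum, A. Philosophical Problems of Space and Time. 2nd ed. Dordrecht: Reidel, 1973.
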
Gunn, D., and I. Vetharaniam. "Relativistic Quantum Mechanics and the Conventionality of Simultaneity." Philosophy of Science 62 (1995): 599608.

Janis, A. "Simultaneity and Conventionality." In Physics, Philosophy and Psychoanalysis, edited by R. Cohen, 101–110. Dordrecht: Reidel, 1983.

Karakostas, V. "The Conventionality of Simultaneity in the Light of the Spinor Representation of the Lorentz Group." Studies in History and Philosophy of Modern Physics 28 (1997): 249–267.

Malament, D. "Causal Theories of Time and the Conventionality of Simultaneity." Noûs 11 (1977): 293–300.

Norton, J. "The Quest for the One Way Speed of Light." British Journal for the Philosophy of Science 37 (1986): 118–120.

Poincaré, H. Science and Hypothesis (1902). New York: Dover Books, 1952.

Putnam, H. "An Examination of Grünbaum's Philosophy of Geometry." In Philosophy of Science: The Delaware Seminar, Vol. 2, edited by B. Baumrin, 205–255. New York: Interscience, 1963.

Redhead, M. "The Conventionality of Simultaneity." In Philosophical Problems of the Internal and External Worlds: Essays on the Philosophy of Adolf Grünbaum, edited by J. Earman, A. Janis, G. Massey, and N. Rescher, 103–128. Pittsburgh, PA: University of Pittsburgh Press, 1993.

Reichenbach, H. Philosophie der Raum-Zeit-Lehre. Berlin and Leipzig: Walter de Gruyter, 1928. Translated by Maria Reichenbach and John Freund as The Philosophy of Space and Time (New York: Dover, 1957).

Reichenbach, H. Relativitätstheorie und Erkenntnis apriori. Berlin: Julius Springer 1920. Translated by Maria Reichenbach as The Theory of Relativity and A Priori Knowledge (Berkeley and Los Angeles: University of California Press, 1965).

Ryckman, T. The Reign of Relativity: Philosophy in Physics 1915–1925. New York: Oxford University Press, 2004.

Torretti, R. Relativity and Geometry. Oxford, U.K.: Pergamon Press, 1983.

Carl Hoefer (2005)
