Theories and Theoretical Terms
In mathematical logic, a theory is the deductive closure of a set of axioms (that is, the set of all propositions deducible from a set of axioms). In the early- and mid- twentieth century, philosophers of science, under the influence of Bertrand Russell's work in philosophy of language and philosophy of mathematics, attempted rationally to reconstruct scientific knowledge by representing scientific theories with the powerful conceptual tools provided by the theory of formal languages.
The Syntactic View of Theories
The syntactic view of theories (also called the received view) was developed by Rudolf Carnap, Ernest Nagel, Hans Reichenbach, and other logical empiricists. Like David Hume, these philosophers thought that insofar as scientific theories accurately describe the world, they cannot be known a priori, but they also recognized that some elements of our theoretical knowledge seem to be independent of the empirical facts. For example, Isaac Newton's second law states that the force on a body is proportional to the rate of change of its momentum, where the constant of proportionality is the inertial mass. This law cannot be tested in an experiment, because it is part of what gives meaning to the concepts employed to describe the phenomena. Hence, the logical empiricists argued, physical theories can be split into a part that expresses definitions of basic concepts and relations among them, and a part that relates to the world. The former part also includes the purely mathematical axioms of the theory and, trivially, all the logical truths expressible in the language of the theory. This part of the theory is a priori knowledge and concerns matters purely of convention. The factual content of the theory is confined to the latter part, and hence the fundamental empiricist principle that the physical world cannot be known by pure reason is satisfied.
Empiricists argue that meaning must originate in experience, and the logical empiricists used this criterion to criticize speculative metaphysics and to place limits on legitimate scientific theorizing. However, we can have no direct experience of theoretical entities such as neutrinos or theoretical properties such as spin. How can theoretical terms be meaningful? The logical empiricists tried to use logic to show how the theoretical language of science is related to the everyday language used to describe the observable world. They were motivated by the verification principle, according to which a (nontautological) statement is meaningful if and only if it can be verified in the immediacy of experience, and the verifiability theory of meaning, according to which the meaning of particular terms (other than logical constants) is either directly given in experience or consists in how those terms relate to what is directly given in experience.
The idea is that a physical theory will have a canonical formulation satisfying the following conditions:
1. L is a first-order language with identity, and K is a calculus defined for L.
2. The nonlogical terms of L can be partitioned into two disjoint sets, one of which contains the observation terms, V_O, and the other of which contains the theoretical terms, V_T.
3. There are two sublanguages of L, and corresponding restrictions of K, such that one (L_O) contains no V_T terms and the other (L_T) no V_O terms. These sublanguages together do not exhaust L, of course, since L also contains mixed sentences.
4. The observational language L_O is given an interpretation in the domain of concrete observable entities, processes, events, and their properties. An interpretation of language L (in the model-theoretic sense used here) attributes a reference to each of the nonlogical terms in L at the metalinguistic level. If the axioms of a theory are true under some interpretation, then that interpretation is a model for the theory.
5. The theoretical terms of L are given a partial interpretation by means of two kinds of postulates: theoretical postulates, which define internal relations among the V_T terms and do not feature V_O terms, and correspondence rules or bridge principles, which feature mixed sentences and relate the V_T and V_O terms. (These correspondence rules are also known as "dictionaries," "operational definitions," and "coordinative definitions," depending on the author. All these terms designate a set of rules connecting theoretical terms to observable states of affairs.)
The theoretical postulates are the axioms of the theory, and the purely theoretical part of the theory is the deductive closure of these axioms under calculus K. The theory as a whole, TC, is the conjunction of T and C, where T is the conjunction of the theoretical postulates and C is the conjunction of the correspondence rules.
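The model-theoretic vocabulary in clause 4 can be made concrete with a toy check. The following sketch (my illustration; the domain, relation symbol, and extension are all hypothetical, not from the text) builds an interpretation for a one-axiom theory and tests whether the axiom comes out true under it, that is, whether the interpretation is a model.

```python
# Toy illustration of clause 4's model-theoretic vocabulary: an interpretation
# assigns extensions to the nonlogical terms, and an interpretation under which
# the axioms all come out true is a model of the theory.

from itertools import product

# Interpretation: a domain plus an extension for the single relation symbol R.
domain = {1, 2, 3}
extension_R = {(1, 2), (2, 1), (3, 3)}   # hypothetical extension

def R(x, y):
    return (x, y) in extension_R

# Sole axiom of the toy theory: R is symmetric, i.e. forall x, y (R(x,y) -> R(y,x)).
def axiom_symmetry():
    return all((not R(x, y)) or R(y, x) for x, y in product(domain, repeat=2))

# The interpretation is a model iff every axiom is true under it.
axioms = [axiom_symmetry]
is_model = all(ax() for ax in axioms)
print(is_model)  # True: this extension is symmetric, so the interpretation is a model
```

Dropping (2, 1) from the extension would falsify the axiom, and the same interpretation would no longer be a model.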
The logical empiricists soon abandoned the attempt to give language L_O an interpretation in terms of immediate experience. It was decided instead that it is just as good to opt for a physicalist language, that is, one that refers only to physical objects, properties, and events (Friedman 1999). Initially, it was required that the theoretical terms of L be given explicit definitions (this was Carnap's original goal, but he had abandoned it by the time of his 1936–1937 paper). An example of such a definition of a theoretical term V_T is the following:
∀x (V_T(x) ↔ [Px → Qx]),
where P is some preparation of an apparatus (known as a test condition) and Q is some observable response of the apparatus (so P and Q are describable in V_O terms alone). For example, an explicit definition of temperature can be given as follows: Any object x has temperature t if and only if, when x is put in contact with a thermometer, it gives a reading of t. If theoretical terms could be so defined, this would show that they are convenient devices, can in principle be eliminated, and need not be regarded as referring to anything in the world (this view is called semantic instrumentalism).
It was soon realized that explicit definition of theoretical terms is highly problematic. Perhaps the most serious difficulty is that, according to this definition, if we interpret the conditional in the square brackets as material implication, theoretical terms are trivially applicable when the test conditions do not obtain (because if the antecedent is false, the material conditional is always true). If, in contrast, we interpret the conditional as strict implication, then the theoretical term is applicable only when the test conditions obtain. In other words, either everything never put in contact with a thermometer has temperature t (under material implication), or only those things put in contact with a thermometer are candidates for having temperature t (under strict implication). This is clearly inadequate, since scientists use the language of temperature as if things have a temperature whether anybody chooses to measure it or not.
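The triviality argument can be verified mechanically. In the sketch below (a toy illustration with hypothetical predicates, not from the original text), P stands for "is put in contact with a thermometer" and Q for "gives a reading of t"; under the material conditional, any object for which P is false satisfies the proposed definition vacuously.

```python
# Toy check of why explicit definition via the material conditional
# trivializes: an object never tested (P false) satisfies P -> Q vacuously,
# so the would-be definition of "has temperature t" applies to it.

def material_implies(p, q):
    return (not p) or q

# Candidate explicit definition: T(x) holds iff P(x) -> Q(x).
def has_temperature_t(in_contact_with_thermometer, reads_t):
    return material_implies(in_contact_with_thermometer, reads_t)

# An object never put in contact with a thermometer (P false, Q false):
untested = has_temperature_t(False, False)
print(untested)  # True -- the definition applies vacuously; this is the triviality

# The only way the definition fails is a tested object not reading t:
tested_wrong = has_temperature_t(True, False)
print(tested_wrong)  # False
```

Switching to strict implication removes the vacuous cases but then makes the definition applicable only to tested objects, which is the other horn of the dilemma described above.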
The natural way to solve this problem is to allow subjunctive assertion in explicit definitions. That is, we define the temperature of object x in terms of what would happen if x were put in contact with a thermometer. Here temperature is understood as a dispositional property. Unfortunately, this raises further problems. First, unactualized dispositions, such as the fragility of a glass that is never damaged, seem to be unobservable properties, and they give rise to statements whose truth conditions are problematic for empiricists, namely counterfactual conditionals, such as "If the glass had been dropped, it would have broken," where the antecedent is false. Dispositions are also modal, that is, they involve possibility and necessity, and empiricists since Hume have disavowed objective modality. Like laws of nature and causation, dispositions are problematic for empiricists. Second, no one has ever provided explicit definitions for terms like "space-time curvature," "spin," and "electron," whether dispositional or not, and there are no grounds for thinking that they could be.
However, advocates of the syntactic view did not abandon the attempt to anchor theoretical terms to the observable world. This is the point of the correspondence rules that connect the theoretical terms with the observational ones and so ensure their cognitive meaningfulness. They do not define the former in terms of the latter; rather, together with the theoretical postulates, they offer a partial interpretation for them. The correspondence rules are also intended to specify procedures for applying the theory to the phenomena. Theoretical concepts such as those of vital forces and entelechies were criticized by the logical empiricists because their advocates failed to express them in terms of precise, testable laws.
According to the view developed so far, TC is fully interpreted only with respect to its V_O terms, which refer to ordinary physical objects (such as ammeters, thermometers, and the like) and their states; the V_T terms are only partially interpreted. The models of TC comprise all the possible interpretations of TC in which the V_O terms have their normal meanings and under which TC is true. The problem for the advocate of the syntactic approach is that in general there will be many such models, so there is no unique interpretation for the theory as a whole. Hence, it would seem to make no sense to talk of TC being true or false of the world. Hempel (1963) and Carnap (1939) solved this problem by stipulating that TC is to be given an intended interpretation: theoretical terms are interpreted as (putatively) referring to the entities, processes, events, and properties appropriate to their normal meanings in scientific (and everyday) use.
Thus, if the meaning of the term "electron," say, derives from the picture of electrons as tiny billiard balls or classical point particles, this picture is important in determining what the theory of electrons refers to. Once the explicit-definition project is abandoned, one must accept that the meanings of theoretical statements lacking testable consequences are nonetheless important in determining the referents of the V_T terms. As Suppe put it, "When I give a semantic interpretation to TC, I am doing so relative to the meanings I already attach to the terms in the scientific metalanguage. In asserting TC so interpreted, I am committing myself to the meaning of 'electron' and so on, being such that electrons have those observable manifestations specified by TC" (1977, p. 92).
This version of the syntactic view is committed to the idea that theoretical terms have excess or surplus meaning over and above the meaning given by the partial interpretation in terms of what can be observed. Herbert Feigl explicitly recognized this in 1950 and was thus led to argue for the view that theoretical terms genuinely refer to unobservable entities (scientific realism).
Perhaps the most widespread criticism of the syntactic view is that it relies on the distinction between observational terms and theoretical terms. This distinction is supposed to correspond to a difference in how language works. Observational terms are more or less ostensively defined and directly refer to observable features of the world, while theoretical terms are indirectly defined and refer to unobservable features of the world. Examples of the former presumably include "red," "pointer," "heavier than"; examples of the latter would include "electron," "charge density," "atom." Putnam (1962/1975) and many others have argued that there is no objective line to be drawn between observational and theoretical language, and that all language depends on theory to a degree. Moreover, eliminating theoretical terms, even if it were possible, would not eliminate talk of the unobservable, because it is possible to talk about the unobservable using V_O terms only, for example, by saying that there are particles that are too small to see. (William Demopoulos has argued that this criticism is irrelevant to the project of offering a rational reconstruction of theories.)
Whether or not the distinction between observational and theoretical terms can be drawn in a nonarbitrary way, the syntactic view also faces criticism concerning the correspondence rules. These rules were supposed to have three functions: (a) to generate (together with the theoretical postulates) a partial interpretation of theoretical terms, (b) to give the theoretical terms cognitive significance by connecting them with what can be observed, (c) to specify how the theory is related to the phenomena. There are several problems concerning (c). First, if the correspondence rules are part of the theory, then whenever a new experimental technique is developed in the domain of the theory and the correspondence rules change to incorporate the new connections between theoretical terms and reality, the theory will change. This is counterintuitive. Another problem, raised by Suppe (1977), is that there are probably an indefinite number of ways of applying a theory, and so there ought to be an indefinite number of correspondence rules, but the formulation of the syntactic view requires that there be only finitely many. Furthermore, theories are often applied to phenomena by means of other theories used to establish a causal connection between the states of affairs described by the theory and the behavior of some measuring apparatus. For example, theories of optics are needed to link the occurrences of line spectra with changes in the energy states of electrons. The correspondence rules in this case will incorporate principles of optics to offer mechanisms and explanations for the behavior of measuring devices. Suppe concludes that correspondence rules are not an integral part of the theory as such but rather are auxiliary assumptions about how the theory is to be applied.
Nancy Cartwright (1983, 1989) and many others have argued that the syntactic view is misleading about how scientific theories are applied, because auxiliary assumptions about background conditions are rarely, if ever, sufficient for deriving concrete experimental predictions from a theory. Rather, these authors argue, the connections between abstract theory and concrete experiment are complex, nondeductive, and involve the use of many theories, models, and assumptions that are not part of the original theory.
The Semantic Approach to Scientific Theories
According to the semantic or model-theoretic view of theories, theories are better thought of as families of models rather than as partially interpreted axiomatic systems. Theories are "extralinguistic entities which may be described or characterized by a number of different linguistic formulations" (Suppe 1977, p. 221).
To understand the semantic approach, first consider a modification of the syntactic view due to Ernest Nagel (1961) and Mary Hesse (1966). These authors insist that there are always models for a theory, whether true of the world or not. According to Nagel, "An interpretation or model for the abstract calculus … supplies some flesh for the skeletal structure in terms of more or less familiar conceptual or visualizable materials" (p. 90). He is here thinking of models like the billiard-ball model of a gas. This model supplies an iconic representation for the theory of gases (we interpret "gas molecule" as referring to a billiard ball and then picture the gas accordingly). This concrete picture allows the physicist to visualize the system and may also provide heuristic guidance for the future development of the theory. Hesse does not restrict models of theories to those that feature "familiar conceptual or visualizable materials," like the billiard-ball model. She regards mathematical structures specified by the formalism of a theory as a paradigm type of model. Indeed, she goes so far as to say that a model can be "any system, whether buildable, picturable, imaginable, or none of these, which has the characteristic of making a theory predictive" (1966, p. 19). In this she seems right in that many theories of contemporary physics, such as quantum mechanics, do not admit of models consisting of familiar or visualizable materials.
The origins of the semantic approach can be traced to Evert Beth and Patrick Suppes. The latter coined the slogan "[T]he correct tool for philosophy of science is mathematics, not meta-mathematics" (see, for example, 1961/1969) and thought of theories as set-theoretic structures. Bas van Fraassen (1980, 1989) further elaborated and generalized Beth's approach: Theories are presented by specifying a class of state spaces with laws of coexistence (synchronic constraints) and laws of succession (diachronic constraints), which together specify the allowable trajectories for systems whose states are represented by parameters located in the state space. Examples of laws of coexistence are Boyle's gas law and the Pauli exclusion principle for energy states of electrons and other fermions; examples of laws of succession include the Schrödinger wave equation in quantum mechanics and Hamilton's equations of motion in classical mechanics.
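The state-space picture can be sketched with a toy system (my illustration, with assumed units and a made-up compression schedule, not drawn from the text): the states of a fixed gas sample at fixed temperature are (P, V) pairs, Boyle's law PV = k serves as the law of coexistence, and a simple discrete-time compression rule serves as the law of succession; together they pick out the allowable trajectories through the state space.

```python
# Toy sketch of van Fraassen's state-space picture: a law of coexistence
# says which states are possible at a time; a law of succession says how
# the state evolves; allowable trajectories satisfy both.

K = 100.0  # PV constant for this sample (hypothetical units)

def satisfies_coexistence(P, V, tol=1e-9):
    """Law of coexistence: only states with P*V = K are allowed."""
    return abs(P * V - K) < tol

def succession(state):
    """Law of succession: compress volume by 10% per step,
    with pressure fixed by the coexistence law."""
    P, V = state
    V_new = 0.9 * V
    return (K / V_new, V_new)

# Build a trajectory from an initial allowable state (P=10, V=10, PV=100).
state = (10.0, 10.0)
trajectory = [state]
for _ in range(5):
    state = succession(state)
    trajectory.append(state)

# Every state on the trajectory satisfies the synchronic constraint.
ok = all(satisfies_coexistence(P, V) for P, V in trajectory)
print(ok)  # True
```

A state violating the coexistence law (say, P = 5, V = 10) simply lies outside the space of possible states, however the succession rule is written; this division of labor between synchronic and diachronic constraints is the point of the construction.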
An advantage claimed for the semantic approach is that it is closer to the practice of science, since scientists do not deduce empirical results directly from theories, but rather use theories in conjunction with models that apply to the system in question. Much of the practice of science concerns the development of new models to extend the domain of application of well-known theories. According to Ron Giere (1988) and Bas van Fraassen (1980, 1989), theories are partly linguistic entities insofar as they include various theoretical hypotheses linking models with systems in the real world, but are nonlinguistic insofar as they essentially involve populations of models. Such models "are the means by which scientists represent the world" (Giere 1988, p. 80). Properly speaking, then, a theory comprises the models it uses and hypotheses that assert a similarity between a real system and some aspects of a model (other aspects are left out because of idealization and approximation).
Giere leaves this relation of similarity unanalyzed. For van Fraassen, the relation between theories and the world is one of isomorphism: "To present a theory is to specify a family of structures, its models; and secondly, to specify certain parts of those models (the empirical substructures) as candidates for the direct representation of observable phenomena. The structures which can be described in experimental and measurement reports we can call appearances: the theory is empirically adequate if it has some model such that all the appearances are isomorphic to empirical substructures of that model" (1980, p. 64). The appearances are the representations of the phenomena, in other words, mathematical models of the data (Suppes 1962).
The Reference of Theoretical Terms
Theoretical terms that allegedly refer to unobservable entities cannot be defined ostensively. If the reference of theoretical terms, such as "electron," is fixed by the relevant scientific theory, the sense of such a term fixes its reference (this is called a descriptivist theory of reference). Thomas Kuhn (1962) argued that the sense of many scientific terms—terms such as "atom," "electron," "species," and "mass"—has changed considerably during the course of scientific revolutions. If the references of theoretical terms are fixed by the whole of the theories in which they feature, then any change in the latter will result in a change in the former.
In response, Hilary Putnam, in "Explanation and Reference" (1975), advocated a radically different account of the meaning of theoretical terms. He pointed out that most people have no idea how to link many terms with their references but nonetheless successfully use them to refer to particular kinds of things. They do so by deferring to experts. For example, most people successfully use the word "platinum" even though they lack an explicit definition of it and have no way to distinguish a sample of platinum from samples of similar metals; only a few experts have such detailed criteria.
Putnam advocates a causal theory of reference for natural-kind terms. According to this theory, the referent of "water," for example, is whatever causes the experiences that give rise to talk of water. Reference is fixed not by the description associated with a term, but by the cause of the use of the term. This allows for continuity of reference across theory changes. Even though theories about electrons have changed, and hence the meaning of the term "electron" has changed, the term, Putnam argues, has always referred to whatever causes the phenomena that prompted its introduction, such as the conduction of electricity by metals.
The Ramsey-Sentence Approach to Theories
Frank Ramsey argued that the content of a physical theory is captured in its Ramsey sentence, the result of taking an axiomatization of the form described above and replacing all the theoretical terms with variables and existentially quantifying over the latter. For example, Φ(O_1, …, O_n; T_1, …, T_m) has the Ramsey sentence ∃t_1, …, ∃t_m Φ(O_1, …, O_n; t_1, …, t_m). In effect, the Ramsey sentence of a theory is a statement in higher-order logic that says that the theory has a model consistent with a fixed interpretation of the observational terms. Ramsey thus treated theoretical terms as disguised definite descriptions. The Ramsey sentence and the original theory both imply the same observational sentences involving O-terms, and hence the factual content of the latter is captured by the former. David Lewis (1970) used Ramsey's method to show how new theoretical terms could be defined in terms of antecedently understood theoretical terms, rather than observational terms.
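As a toy worked example (my own construction, not from the text), consider a theory with two observation predicates and one theoretical predicate:

```latex
% Illustrative Ramsey-sentence construction for a toy theory.
% Theory TC, with observation predicates O_1, O_2 and theoretical predicate T_1:
\[
  TC:\quad \forall x\,(O_1 x \to T_1 x) \;\wedge\; \forall x\,(T_1 x \to O_2 x)
\]
% Replace T_1 by a predicate variable t_1 and existentially quantify over it:
\[
  R(TC):\quad \exists t_1\,\bigl[\forall x\,(O_1 x \to t_1 x) \;\wedge\; \forall x\,(t_1 x \to O_2 x)\bigr]
\]
% R(TC) is second order (it quantifies into predicate position) and has the
% same purely observational consequences as TC, e.g. \forall x\,(O_1 x \to O_2 x).
```

R(TC) says only that some property plays the T_1 role, which is why the approach is taken to capture at most the structure, rather than the intrinsic nature, of the theoretical content.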
The Ramsey-sentence approach to theories has been thought to show that scientific knowledge of the unobservable theoretical world is purely structural (Worrall 1989). This raises technical problems discussed in Demopoulos (forthcoming), Demopoulos and Friedman (1985), and Psillos (2000).
Beth, Evert. "Towards an Up-to-Date Philosophy of the Natural Sciences." Methodos 1 (1949): 178–185.
Carnap, Rudolf. Foundations of Logic and Mathematics. Chicago: University of Chicago Press, 1939.
Carnap, Rudolf. "Testability and Meaning." Philosophy of Science 3 (1936): 419–471; 4 (1937): 1–40.
Cartwright, Nancy. How the Laws of Physics Lie. Oxford, U.K.: Oxford University Press, 1983.
Cartwright, Nancy. Nature's Capacities and Their Measurement. Oxford, U.K.: Oxford University Press, 1989.
Demopoulos, William. "Carnap's Philosophy of Science." In The Cambridge Companion to Carnap, edited by R. Creath and Michael Friedman. Cambridge, U.K.: Cambridge University Press, forthcoming.
Demopoulos, William, and Michael Friedman. "Critical Notice: Bertrand Russell's The Analysis of Matter: Its Historical Context and Contemporary Interest." Philosophy of Science 52 (1985): 621–639.
Feigl, Herbert. "Existential Hypotheses: Realistic versus Phenomenalistic Interpretations." Philosophy of Science 17 (1950): 35–62.
Friedman, Michael. Reconsidering Logical Positivism. Cambridge, U.K.: Cambridge University Press, 1999.
Giere, Ronald N. Explaining Science. Chicago: University of Chicago Press, 1988.
Hempel, Carl. "Implications of Carnap's Work for Philosophy of Science." In The Philosophy of Rudolf Carnap, edited by Paul Schilpp. LaSalle, IL: Open Court, 1963.
Hesse, Mary. Models and Analogies in Science. Oxford, U.K.: Oxford University Press, 1966.
Kuhn, Thomas. The Structure of Scientific Revolutions. Chicago: University of Chicago Press, 1962.
Lewis, David. "How to Define Theoretical Terms." Journal of Philosophy 67 (1970): 427–446.
Nagel, Ernest. The Structure of Science. New York: Harcourt, Brace, and World, 1961.
Psillos, Stathis. "Carnap, the Ramsey-Sentence, and Realistic Empiricism." Erkenntnis 52 (2000): 253–279.
Putnam, Hilary. "Explanation and Reference." In his Mind, Language, and Reality. Vol. 2 of Philosophical Papers. Cambridge, U.K.: Cambridge University Press, 1975.
Putnam, Hilary. "What Theories Are Not" (1962). In his Mathematics, Matter, and Method. Vol. 1 of Philosophical Papers. Cambridge, U.K.: Cambridge University Press, 1975.
Ramsey, Frank Plumpton. "Theories" (1929). In his Foundations of Mathematics and Other Logical Essays, edited by R. B. Braithwaite, 212–236. Paterson, NJ: Littlefield and Adams, 1960.
Suppe, Frederick, ed. The Structure of Scientific Theories. 2nd ed. Urbana: University of Illinois Press, 1977.
Suppes, Patrick. "A Comparison of the Meaning and Use of Models in Mathematics and the Empirical Sciences" (1961). In his Studies in the Methodology and Foundations of Science, 10–23. Dordrecht, Netherlands: Reidel, 1969.
Suppes, Patrick. "Models of Data." In Logic, Methodology, and the Philosophy of Science, edited by Ernest Nagel, Patrick Suppes, and Alfred Tarski, 252–267. Stanford, CA: Stanford University Press, 1962.
Van Fraassen, Bas C. Laws and Symmetry. Oxford, U.K.: Oxford University Press, 1989.
Van Fraassen, Bas C. The Scientific Image. Oxford, U.K.: Oxford University Press, 1980.
Worrall, John. "Structural Realism: The Best of Both Worlds?" Dialectica 43 (1989): 99–124.
James Ladyman (2005)