Analysis, Philosophical
Philosophical analysis is a term of art. At different times in the twentieth century, different authors have used it to mean different things. What is to be analyzed (e.g., words and sentences versus concepts and propositions), what counts as a successful analysis, and what philosophical fruits come from analysis are questions that have been vigorously debated since the dawn of analysis as a self-conscious philosophical approach. Often, different views of analysis have been linked to different views of the nature of philosophy, the sources of philosophical knowledge, the role of language in thought, the relationship between language and the world, and the nature of meaning—as well as to more focused questions about necessary and apriori truth. Indeed, the variety of positions is so great that any attempt to extract a common denominator from the multiplicity of views would be sterile and unilluminating.
Nevertheless, analytic philosophy—with its emphasis on what is called "philosophical analysis"—is a clear and recognizable tradition. Although the common core of doctrine uniting its practitioners scarcely exceeds the platitudinous, a pattern of historical influence is not hard to discern. The tradition begins with G. E. Moore, Bertrand Russell, and Ludwig Wittgenstein (as well as Gottlob Frege, whose initial influence was largely filtered through Russell and Wittgenstein). These philosophers set the agenda, first, for logical positivists such as Rudolf Carnap, Carl Hempel, and A. J. Ayer, and then for the later Wittgenstein, who in turn ushered in the ordinary language school led by Gilbert Ryle and J. L. Austin. The second half of the twentieth century then saw a revival of Russellian and Carnapian themes in the work of W. V. Quine, Donald Davidson, and Saul Kripke. Analytic philosophy, with its changing views of philosophical analysis, is a trail of influence, the broad outlines of which we will trace here.
G. E. Moore
We begin with George Edward Moore, whose influence, along with that of his Cambridge classmate Bertrand Russell, was felt from their student days in the last decade of the nineteenth century throughout the whole of the twentieth. As a student Moore, who was to become the great defender of the Common Sense view of the world, was fascinated and perplexed by what he took to be the dismissive attitude toward common sense adopted by some of his philosophical mentors. He was particularly puzzled about the doctrines of absolute idealism that time is unreal (and so our ordinary belief that some things happen before other things must, in some way, be mistaken), that only the absolute truly exists (and so our ordinary conception of a variety of independently existing objects is incorrect), and that the essence of all existence is spiritual (and so our ordinary, non-mentalistic view of material objects is erroneous). Moore was curious how proponents of such doctrines could think themselves capable of so thoroughly overturning our ordinary ways of looking at things. How could anyone by mere reflection arrive at doctrines the certainty of which was sufficient to refute our most fundamental pre-philosophical convictions?
Before long he came to believe one couldn't. On the contrary, one's justification for a general principle of philosophy could never outweigh one's justification for the most basic tenets of the Common Sense view of the world. In essence he held that philosophers have no special knowledge that is prior to, and more secure than, the best examples of what we all pre-theoretically take ourselves to know. The effect of this position was to turn the kind of philosophy done by some of his teachers on its head. According to Moore the job of philosophy is not to prove or refute the most basic propositions, those we have no choice but to accept. It is however a central task of philosophy to explain how we know them. The key to doing so, he thought, was to analyze precisely what these propositions state, and hence what we know, when we know them.
Moore turned his method of analysis on two major subjects—perceptual knowledge and ethics. Although he achieved important results in both, they didn't fulfill his hopes for analysis. For example, despite making a persuasive case in "A Defense of Common Sense" (1925) and "Proof of an External World" (1939) that we do know such elementary truths as I am perceiving this and this is a human hand, he never succeeded in explaining how, precisely, perception guarantees their truth. Moreover, his speculative explorations of different analyses of their contents—briefly canvassed in "A Defense of Common Sense"—didn't advance the case very far. The paucity of these results—in which analysis aims at theoretical reconstructions of the contents of ordinary propositions—contrasts with the modest but much more successful conception of analysis that emerges from his painstaking philosophical practice in papers such as "The Refutation of Idealism" (1903). The burden of that piece is to show that idealists who hold that all of reality is spiritual have no good reason for their view. A crucial step is the isolation and analysis of a premise—roughly, for anything to exist, or be real, is for it to be experienced—that Moore takes to be essential to their argument. His point is that in order to play the role required by the argument, it must be a necessary truth. But, he thinks, the only plausible ground for believing it to be necessary lies in wrongly taking the concept of being experienced to be (analytically) included in the concept of an object existing, or being real—a mistake, he thinks, that is akin to wrongly identifying the sensation of yellow with that of which it is a sensation. Putting aside the accuracy of Moore's depiction of his opponents, or of his contentious views of the distinction between analytic and synthetic propositions, the paper is a beautiful example of the theoretically modest but philosophically illuminating practice of analysis at which Moore excelled—conceptual clarification, the drawing of clear distinctions, avoidance of equivocation, logical rigor, and attention to detail.
Much the same can be said about his use of philosophical analysis in ethics. On the one hand his enormously influential view that good is unanalyzable may be criticized for falling prey to a crippling dilemma. On any understanding of analyzability on which the unanalyzability of good would justify Moore's claim that conclusions about what is good are not derivable from, or supported by, premises that don't contain it, his "open question" argument does not show that good is unanalyzable; whereas on any understanding of analyzability on which his argument does establish that good is unanalyzable, this result does not justify the claim that conclusions about what is good can't be derived from or supported by premises that don't talk about goodness. In this sense his most famous ethical analysis was unsuccessful. Moreover this failure was connected to his official view of analysis, which conferred a privileged status on those necessary, apriori truths that reflect part-whole relations between concepts—roughly those propositions expressed by sentences that can be reduced to logical truths by putting synonyms for synonyms (where pairs of synonyms are thought to be easily recognizable by anyone who understands them)—as opposed to those necessary, apriori truths that do not fall into this category. Far from a source of strength, this theoretically-loaded conception of analysis was, arguably, Moore's Achilles heel.
On the other hand the decidedly more modest, theoretically uncontentious, conception of analysis that emerged from his exemplary analytic practice of unrelenting, conceptual clarification undeniably advanced the subject and served as a model for generations of analytic philosophers to come. It also produced, in the first paragraph of Principia Ethica (1903), what may be the best expression of the guiding spirit of analytic philosophy, and philosophical analysis, ever written.
It appears to me that in Ethics, as in all other philosophical studies, the difficulties and disagreements, of which its history is full, are mainly due to a very simple cause: namely to the attempt to answer questions, without first discovering precisely what question it is which you desire to answer. I do not know how far this source of error would be done away, if philosophers would try to discover what question they were asking, before they set about to answer it; for the work of analysis and distinction is often very difficult: we may often fail to make the necessary discovery, even though we make a definite attempt to do so. But I am inclined to think that in many cases a resolute attempt would be sufficient to ensure success; so that, if only this attempt were made, many of the most glaring difficulties and disagreements in philosophy would disappear. At all events, philosophers seem, in general, not to make the attempt, and, whether in consequence of this omission or not, they are constantly endeavoring to prove that 'Yes' or 'No' will answer questions, to which neither answer is correct, owing to the fact that what they have before their minds is not one question, but several, to some of which the true answer is 'No', to others 'Yes.' (p. vii)
Bertrand Russell
Bertrand Russell's views on philosophical analysis are unique in two respects. They are more explicit, highly articulated, and theoretically fruitful than those of other leading figures; and their historical influence remains unsurpassed. The most well-known of his doctrines about philosophical analysis is his theory of descriptions presented in "On Denoting" (1905). The initial problem to be solved was an ontological one, posed by negative existentials—sentences of the form ⌜α doesn't exist⌝ in which α is a name or description. The puzzle posed by such a sentence is that if it is true then there would seem to be nothing named or described; but if α doesn't stand for anything then it is hard to see how the sentence can be meaningful at all, let alone true. According to Russell the problem arises from false ideas about meaning—(i) the idea that the meaning of α is the entity it names or describes, and (ii) the idea that the meaning of ⌜α doesn't exist⌝ is a proposition that predicates non-existence of that entity. At first blush these ideas seem doubly problematic since, on the one hand, if α doesn't stand for anything then there is nothing for non-existence to be predicated of, and on the other if there is an object with the property of non-existence, it would seem that there must exist an object that doesn't exist, which is a contradiction. Since Russell thought that (i) and (ii) led to these paradoxical results, he rejected both. His theory of descriptions is a proposal for replacing them with a conception of meaning that avoids such paradox.
Russell begins by distinguishing grammatically proper names (like the ordinary names of people and places) from logically proper names (this and that). Whereas the meaning of a logically proper name is its referent, the meaning of a grammatically proper name n for a speaker s is given by some singular definite description, ⌜the F⌝, that s associates with n. When it comes to singular definite descriptions, Russell's view is that they are incomplete symbols, which have no meaning in isolation. By this he means three things: (i) that the objects (if any) they denote are not their meanings, (ii) that the propositions expressed by sentences containing them do not contain constituents corresponding to them, and (iii) that their meanings can be given by rules that explain the systematic contributions they make to the meanings of sentences containing them.
Consider, for example, the negative existential ⌜The F doesn't exist⌝. To understand this sentence is to grasp the proposition it expresses. However, since for Russell its grammatical form is not the same as the logical form of the proposition p it expresses, he found it useful to translate it into a formula of his logical system the syntactic structure of which did match the logical structure of p. (Russell later came to think that he could dispense with propositions themselves as real entities, and get by with his logico-linguistic structures alone, but that may be regarded as a never-fully-worked-out afterthought.) The logical form of ⌜The F doesn't exist⌝ was identified with that of ⌜∼∃x∀y(Fy ↔ y = x)⌝—where the proposition expressed by this formula was seen as having three constituents: negation; the property, expressed by '∃x', of being "sometimes true"; and the propositional function f expressed by the subformula ⌜∀y(Fy ↔ y = x)⌝. This function assigns to any object o the proposition that says of o that it is identical with any object y if and only if y has the property expressed by F. Since o is identical with itself and nothing else, this means that the proposition f assigns to o is one that is true if and only if o, and only o, has the property expressed by F. Finally, to say of a propositional function that it "is sometimes true" is to say that in at least one case it assigns a true proposition to an object. Putting all this together we get the result that the negative existential ⌜The F doesn't exist⌝ expresses a proposition which is true if and only if there is no object which is such that it, and only it, has the property expressed by F. Since this proposition simply denies that a certain propositional function has a certain property, neither the truth nor the meaningfulness of the negative existential that expresses it requires there to be any object with the property of non-existence.
Negative existentials were, in Russell's view, special in that they contain the grammatical predicate exist, which, on his analysis, does not function logically as a predicate of individuals. However his theory was intended to cover all sentences containing descriptions. Whenever ⌜is G⌝ does function as a predicate, the analysis of ⌜The F is G⌝ is ⌜∃x(∀y(Fy ↔ y = x) & Gx)⌝, which may be paraphrased there is something such that it, and only it, is F, and it is also G. In "On Denoting," Russell showed how this analysis could be used to solve several logico-linguistic puzzles, and many other applications have been found since then. With the exception of Gottlob Frege's invention of the logical quantifiers in his Begriffsschrift (1879), one would be hard pressed to identify any comparably fruitful idea in the history of philosophical analysis.
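Russell's own example from "On Denoting" illustrates the pattern. Letting K abbreviate is a present king of France and B abbreviate is bald, "The present king of France is bald" is analyzed as ⌜∃x(∀y(Ky ↔ y = x) & Bx)⌝—there is something such that it, and only it, is a present king of France, and it is bald—while the corresponding negative existential, "The present king of France doesn't exist," is analyzed as ⌜∼∃x∀y(Ky ↔ y = x)⌝. Since France has no king, the first comes out straightforwardly false rather than meaningless, and the second comes out true without there being any non-existent king for it to be about.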
Russell's revival of Frege's logicist program of reducing arithmetic to logic—in Principia Mathematica, written with Whitehead (1910–1913), and Introduction to Mathematical Philosophy (1919)—represented a different, more philosophically ambitious kind of analysis. The task of deriving the axioms of Peano arithmetic from what Russell took to be axioms of pure logic required defining the arithmetical primitives zero, successor, and natural number in purely logical terms. Russell's approach (which he shared with Frege) was both elegant and natural. Let zero be the set whose only member is the empty set; let the successor of a number x (itself a set of sets) be the set of all sets y with the following property: removing some member of y leaves one with a member of x. It follows that the successor of zero (i.e., the number one) is the set of all single-membered sets, the successor of one (i.e., the number two) is the set of all pairs, and so on. Note how natural this is. What is the number two? It is that which all pairs have in common; more precisely, it is the set of which they, and only they, are members. Finally the set of natural numbers is defined as the smallest set containing zero and closed under the operation of successor.
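The set-theoretic idea can also be made vivid computationally. The sketch below is a toy model, not anything from Principia Mathematica: it restricts attention to subsets of a small four-element domain (the genuine Frege-Russell numbers—the classes of all classes of a given size—are not finite collections), and the names DOMAIN, ZERO, and successor are purely illustrative.

```python
# Toy model of the Frege-Russell definitions of the numbers, restricted to
# subsets of a small domain so that everything stays finite and computable.
from itertools import combinations

DOMAIN = frozenset({'a', 'b', 'c', 'd'})

def all_subsets(domain):
    """All subsets of the domain, as frozensets."""
    items = list(domain)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

SETS = all_subsets(DOMAIN)

# Zero: the set whose only member is the empty set.
ZERO = frozenset({frozenset()})

def successor(x):
    """The set of all sets y such that removing some member of y
    leaves one with a member of x."""
    return frozenset(y for y in SETS if any(y - {m} in x for m in y))

ONE = successor(ZERO)   # all single-membered subsets of the domain
TWO = successor(ONE)    # all pairs drawn from the domain

assert all(len(s) == 1 for s in ONE)
assert all(len(s) == 2 for s in TWO)
```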
With these definitions, together with Russell's proposed logical axioms (formulated within his theory of logical types, so as to avoid paradox), the axioms of Peano arithmetic can be derived as theorems. As a result, arithmetical sentences can be viewed as convenient abbreviations of the complex formulas associated with them by the Russellian definitions. Since the sentences of higher mathematics can themselves be viewed as abbreviations of complex arithmetical sentences, it seemed to many that Russell's reduction had succeeded in showing that all of mathematics can be regarded as an elaboration of pure logic and that all problems in the philosophy of mathematics could, in principle, be solved by a correct philosophical account of logic. Thus the reduction, in addition to being recognized as a substantial technical achievement, was viewed by many as a stunning demonstration of the extraordinary philosophical power of Russell's version of logico-linguistic analysis. No matter that his system of logic and theory of types was, in point of fact, epistemologically less secure than arithmetic itself; the program of attacking philosophical problems by associating the sentences that express them with hidden logical forms was considered to have taken a huge step forward.
Russell pushed the program further in Our Knowledge of the External World (1914), in which he applied his method of analysis to Moore's problem of the external world. The problem that perplexed Moore was that, although we know that there are material objects and although our evidence is perceptual, there seems to be a gap between this evidence and that which we know on the basis of it. Whereas material objects are public and independent of us, Moore had come to think of the data provided to us by our sensory impressions as logically private and dependent for their existence on the perceiver.
Russell set out to bridge this gap. His solution was to analyze material-object talk as talk about a system of interrelated private perspectives—a forerunner of the idea that material objects are logical constructions out of sense data. According to this view sentences that appear to be about material objects are really about the sense data of perceivers, and each material-object sentence is analyzable into a conjunction of categorical and hypothetical sentences about sense data. Apart from the obvious, Berkeleyan problems inherent in this view, its portents of the future of philosophical analysis were ominous. Prior to this Russell's main examples of analysis—his theory of descriptions and logicist reduction—were precisely formulated and well worked out. By contrast the supposed analysis of material-object statements was highly programmatic—neither Russell nor anyone else ever attempted to provide a fully explicit and complete analysis of any material-object statement. It was supposed to be enough to sketch the outlines that the presumed analyses were supposed to take.
This programmatic approach also characterized Russell's position in his 1918 lectures "The Philosophy of Logical Atomism," in which he sketched the outlines of an ambitious philosophical system that posited a thoroughgoing parallelism between language and the world. The idea was to use the techniques of logical and linguistic analysis to reveal the ultimate structure of reality. Before, Russell had offered analyses piecemeal—to provide solutions to different philosophical problems as they came up. Now he sought to develop a systematic framework in which philosophy would, for all intents and purposes, be identified with logico-linguistic analysis. However it was his former student, Ludwig Wittgenstein, who pushed this idea the furthest.
Early Wittgenstein
The Tractatus (1922) is an intricate, ingenious, and highly idiosyncratic philosophical system of the general sort Russell had imagined. In it Wittgenstein presents his conception of a logically perfect language, which, he believes, underlies all language and, presumably, all thought. Crucial to the construction of a theory of meaning for this language is the account of its relation to the world, which we are told in the opening two sentences is the totality of facts rather than things. The simplest—atomic—sentences of language correspond (when true) to simple—atomic—facts. The constituents of these facts are metaphysically simple objects and universals named by linguistically simple expressions—logically proper names and predicates. All meaningful sentences are said to be truth functions of atomic sentences, each of which is logically independent of all other atomic sentences. Since atomic facts are similarly independent, all and only the possible assignments of truth values to atomic sentences determine possible worlds, which are possible constellations of atomic facts. The actual world is the combination of all existing atomic facts.
For Wittgenstein what a sentence says is identified with the information it provides about the location of the actual world within the logical space of possible worlds. If S is atomic then S represents the actual world as being one that contains the possible atomic fact the existence of which would make S true. If S is both meaningful and logically complex, then S is a truth function of a certain set As of atomic sentences, and S represents the actual world as containing a constellation of facts that corresponds to an assignment of truth values to As that would make S true. However, in the system of the Tractatus, P is a member of As only if there are situations in which the truth value of S is affected by which truth value is assigned to P—only if there are complete assignments of truth values to As which differ solely in what they assign to P that determine different truth values for S. Since, when S is a tautology, its truth does not depend on the truth values of any atomic sentence in this way, it follows that S isn't a truth function of any non-empty set of such sentences.
For Wittgenstein this means that tautologies don't provide any information about the world, and so, strictly speaking, don't say anything. In this sense tautologies are not fully meaningful, though we may regard them as meaningful in the degenerate sense of arising from meaningful atomic sentences by permitted applications of truth-functional operators. Thinking of them in this way we may take tautologies to be true, so long as we understand that they don't state or correspond to any facts. For Wittgenstein there are no necessary facts for necessary truths to correspond to. Rather their truth is an artifact of our linguistic system of representation. Because of this, he thought, they should be knowable apriori, simply by understanding them and recognizing their form.
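The Tractarian contrast between tautologies and genuinely contingent sentences can be illustrated with a small computation. The sketch below is purely illustrative (Wittgenstein works with truth tables, not programs, and the function names and example sentences are invented here): a sentence is a tautology just in case it comes out true on every assignment of truth values to its atomic sentences, and so excludes no possibilities.

```python
# Enumerate all truth-value assignments to a set of atomic sentences and
# test whether a complex sentence is true on every one of them.
from itertools import product

def truth_table(atoms, sentence):
    """Evaluate `sentence` (a function of an assignment) on every possible
    assignment of truth values to the atomic sentences in `atoms`."""
    rows = []
    for values in product([True, False], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        rows.append((assignment, sentence(assignment)))
    return rows

def is_tautology(atoms, sentence):
    return all(value for _, value in truth_table(atoms, sentence))

# 'p or not p' is true on every assignment: it rules nothing out and, in
# Tractarian terms, says nothing about the world.
assert is_tautology(['p'], lambda v: v['p'] or not v['p'])

# 'p and q' is true on one of four assignments: it locates the actual world
# within the space of possibilities, and so says something.
assert not is_tautology(['p', 'q'], lambda v: v['p'] and v['q'])
```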
Many philosophers found the strikingly simple Tractarian conception of necessity, apriority, and logical truth to be compelling. According to the Tractatus (i) all necessity is linguistic necessity, in the sense of being the result of our system of representing the world, rather than the world itself; (ii) all linguistic necessity is logical necessity, in that all necessary truths are tautologies; (iii) all tautologies are knowable apriori; and (iv) only necessary truths are apriori. In short the necessary, the apriori, and the logically true are one and the same. These truths make no claims about the world but instead constitute the domain of logic, broadly construed. All other truths are contingent and knowable only by empirical investigation. These truths do make claims about the world and constitute the domain of science.
There are no other meaningful sentences, save for the logically or contingently false. According to the Tractatus, all meaningful sentences are either tautologies, contradictions, or contingent, aposteriori statements which are truth functions of atomic sentences that describe possible combinations of the basic metaphysical simples that make up the world. Since virtually all of the traditional statements of ethics, philosophy, and religion seem to fall outside these categories, Wittgenstein concluded that these statements are nonsense. No aspect of his system was more fascinating to readers of the Tractatus than this consequence of his global criterion of intelligibility. Moreover his conclusion was not limited to language. If one assumes, as Wittgenstein clearly did, that all genuine thoughts are in principle expressible by meaningful sentences then his criterion not only fixes the limits of meaning but it also fixes the limits of thought. Since ethical, philosophical, and religious sentences are meaningless, they don't express propositions; since there are no such propositions for us to believe, we have no ethical, philosophical, or religious beliefs.
Where does this leave philosophy and philosophical analysis? The lesson of the Tractatus is that there are no meaningful philosophical claims and hence no genuine philosophical questions for philosophers to answer. What then is responsible for the persistence of the discipline and for the illusion that it is concerned with real problems for which solutions might be found? Linguistic confusion. As Wittgenstein saw it, all the endless disputes in philosophy are due to this one source. If we could ever fully reveal the workings of language, our philosophical perplexities would vanish, and we would see the world correctly. Fortunately, philosophy can help. Although there are no new true propositions for philosophers to discover, they can clarify the propositions we already have. Like Russell, Wittgenstein believed that everyday language disguises thought by concealing true logical form. The proper aim of philosophy is to strip away the disguise and illuminate the form. In short, philosophy is a kind of linguistic analysis that doesn't solve problems but dissolves them. As he put it in his first post-Tractatus paper, "Some Remarks on Logical Form" (1929),
The idea is to express in an appropriate symbolism what in ordinary language leads to endless misunderstandings. That is to say, where ordinary language disguises logical structure, where it allows the formation of pseudo-propositions, where it uses one term in an infinity of different meanings, we must replace it by a symbolism which gives a clear picture of the logical structure, excludes pseudo-propositions, and uses its terms unambiguously.
(p. 163)
Though the Wittgenstein of the Tractatus did not himself practice this form of analysis, the vision of analysis he articulated was one that later philosophers found attractive in its own right, quite apart from the doctrines that led him to it.
Logical Positivism
We now turn to something new—a self-conscious school of philosophy that arose through the collaborative efforts of several like-minded thinkers, including, most prominently, Rudolf Carnap, Moritz Schlick, Hans Reichenbach, A. J. Ayer, and Carl Hempel. The evolving creation of many minds, logical positivism was not monolithic; there was always plenty of disagreement on matters of detail, and even its central doctrines were never formulated in a way that commanded universal assent. The positivists did, however, share a common commitment to the development of certain themes inherited largely from Russell and Wittgenstein. From Russell they took the theory of descriptions as the paradigm of philosophical analysis (so characterized by F. P. Ramsey), the reduction of arithmetic to logic as the key to the nature of all mathematical truth (set out in Hempel's "On the Nature of Mathematical Truth," 1945), and the systematic, empiricist reconstruction of our knowledge of the external world—undertaken in Carnap's The Logical Structure of the World (1928). From Wittgenstein they took the idea of a test of intelligibility, the identification of necessary, apriori, and analytic truth, the bifurcation of all meaningful statements into the analytic versus empirical, the dismissal of whole domains of traditional philosophy as meaningless nonsense, and the goal of philosophy as the elimination of linguistic confusion by philosophical analysis.
The centerpiece of logical positivism was, of course, the empiricist criterion of meaning, which stated roughly that a non-analytic, non-contradictory sentence S is meaningful if and only if S is in principle verifiable or falsifiable—where verifiability and falsifiability are thought of as logical relations RV and RF between observation statements and S. Although the idea initially seemed simple, the devil proved to be in the details. One source of contention was the nature of observation statements. Initially Carnap, Schlick, and others construed them as reports of private sense data of observers. However the dangers of solipsism and phenomenalism soon forced a retreat to reports of (unaided) observation of everyday physical objects. Even then the theoretical/observational distinction proved elusive, with obvious strain on the clarity and plausibility of the criterion of meaning.
Defining the relations RV and RF that were to hold between meaningful (empirical) sentences and observation statements proved even more problematic. Initially it was hoped that the needed relations could be something quite strong—like the notion of being either conclusively verifiable (i.e., logically entailed by some finite, consistent set of observation statements) or conclusively falsifiable (i.e., something the negation of which is conclusively verifiable). However it soon became clear that when RV and RF are defined in this way, many obviously meaningful statements of science and everyday life are wrongly characterized as meaningless. This led to the attempt, illustrated by Ayer's proposal in the Introduction to the second edition of Language, Truth and Logic (1946), to define empirical meaningfulness in terms of a weak notion of verifiability—roughly that of being a statement which, when combined with an independently meaningful theory T, logically entailed one or more observation statements not entailed by T alone. However, as Alonzo Church demonstrated in his 1949 review of Ayer, this criterion was far too promiscuous, classifying no end of nonsense as meaningful.
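Stated schematically (a reconstruction for clarity, not a quotation of any particular positivist formulation): on the strong criterion, S is conclusively verifiable just in case there is a finite, consistent set of observation statements from which S is logically deducible, and conclusively falsifiable just in case ∼S is conclusively verifiable; on Ayer's weak criterion, S is verifiable just in case there is an independently meaningful theory T and an observation statement O such that O is deducible from T together with S but not from T alone. Church's counterexamples exploited the laxness of the weak condition: with suitably chosen auxiliary premises, virtually any sentence can be made to contribute to the deduction of some observation statement.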
There were of course other attempts to secure a workable empiricist theory of meaning, such as Carnap's criterion of translatability into an empiricist language, sketched in his 1936 essay "Testability and Meaning." But as Hempel showed in "Problems and Changes in the Empiricist Criterion of Meaning" (1950), this formulation runs into serious problems over theoretical terms in science. In Hempel's view the source of these problems is that sentences about theoretical entities are meaningful by virtue of being embedded in a network of hypotheses and observational statements, which as a whole makes testable predictions. As W. V. Quine emphasized even more forcefully in "Two Dogmas of Empiricism" (1951), these predictions are the product of all the different aspects of the system working together—in the sense that, given a set of observational predictions made by a theoretical system, one cannot in general match each prediction with a single discrete hypothesis, or small set of hypotheses.
Quine suggests that this is the crucial fact that makes it impossible to devise an adequate criterion of empirical meaningfulness for individual sentences. If for each sentence S we could isolate a set P of predictions made by S alone, and if P exhausted the contribution made by S to the predictions made by the theory as a whole then one could define S in terms of P. However the interdependence of S with other sentences in the system makes this impossible. Thus, Quine maintained, what we have to look for is not the empirical content of each statement taken in isolation, but rather its role in an articulated system that, as a whole, has empirical content. This point effectively marked the end of the positivists' version of verificationism.
Quine
From the Tractatus through logical positivism and beyond, many analytic philosophers identified the apriori with the necessary and attempted to explain both by appealing to the analytic. As they saw it there simply is no explaining what necessity is, how we can know any truth to be necessary, or how we can know anything apriori, without appealing to statements that are, and are known to be, true by virtue of meaning. From this point of view necessary and apriori truths had better be analytic, since if they aren't one can give no intelligible account of them at all. Ironically this theoretical weight placed on analyticity left the doctrines about necessity, apriority, and analyticity advocated by positivists and others vulnerable to a potentially devastating criticism. If it could be shown that analyticity cannot play the explanatory role assigned to it, then their commitment to necessity, apriority, and perhaps even analyticity itself might be threatened. This was precisely Quine's strategy.
He launched his attack in "Truth by Convention" (1936), the target of which is the linguistic conception of the apriori. According to this view all apriori knowledge is knowledge of analytic truths, which in turn is explained as arising from knowledge of the linguistic conventions governing our words. This view was attractive because it provided a seemingly innocuous answer to the question of how any statement could be known without empirical confirmation: A statement can be known in this way only if it is devoid of factual content—that is, only if its truth is entirely due to its meaning. Surely, it was thought, there is no mystery in our knowing what we have decided our words are to mean. But then, it was concluded, there must be no mystery in the idea that the truth of a sentence may follow, and be known to follow, entirely from such decisions. Putting these two ideas together, proponents of the linguistic conception of the apriori thought that they had found a philosophical explanation of something that otherwise would have been problematic.
Quine argued that this is not so. As noted, the proposed explanation rests on two bits of knowledge taken to be unproblematic—(i) knowledge of what our words mean, and (ii) knowledge that the truth of certain sentences follows from our decisions about meaning. However there is a problem here, located in the words follows from. Clearly we don't stipulate the meanings of all the necessary/apriori/analytic truths individually. Rather, it must be thought, we make some relatively small number of meaning stipulations and then draw out the consequences of these stipulations for the truth of an indefinitely large class of sentences. What is meant here by consequences? Not wild guesses or arbitrary inferences with no necessary connection to their premises. No, by consequences proponents of the linguistic apriori meant something like logical consequences, knowable apriori to be true if their premises are true. But now we have gone in a circle. According to these philosophers, all apriori knowledge of necessary truths—including apriori knowledge of logical truths—arises from our knowledge of the linguistic conventions we have adopted to give meanings to our words. However, in order to derive this apriori knowledge from our linguistic knowledge, one has to appeal to antecedent knowledge of logic itself. Either this logical knowledge is apriori or it isn't. If it is then some apriori knowledge is not explained linguistically; if it isn't then it is hard to see how any knowledge could qualify as apriori. Since neither alternative was acceptable to proponents of the linguistic apriori, Quine's attack was a telling one.
Fifteen years later, in "Two Dogmas of Empiricism" (1951), he renewed it. He agreed with the positivists' premise that there is no explaining necessity and apriority without appealing to analyticity. However he challenged the idea that any genuine distinction can be drawn between the analytic and the synthetic without presupposing the very notions they are supposed to explain—a point he sought to drive home by demonstrating the circularity of the most obvious attempts to define analyticity. Hence, he concluded, there is no way of explaining and legitimating necessity and apriority—or analyticity either. For him this meant that there is no genuine distinction to be drawn between the analytic and the synthetic, the necessary and the contingent, or the apriori and the aposteriori. The idea that any such distinctions exist was one of the two dogmas targeted in his article.
In assessing this argument it is important to remember that it was directed at a specific conception of analyticity, which was taken to be the source of necessity and apriority. Although this conception was widely held at the time Quine wrote, it is radically at variance with the post-Kripkean perspective according to which necessity and apriority are, respectively, metaphysical and epistemological notions that are non-coextensive and capable of standing on their own. From this perspective the attempt to explain necessity and apriority in terms of analyticity appears to be badly mistaken. Since Quine's circularity argument shares the problematic presupposition that all these notions are acceptable only if such an explanation can be given, it doesn't come off much better. For this reason Quine should not be seen as giving a general argument against analyticity. At most his argument succeeds in undermining one particular conception that enjoyed a long run among analytic philosophers in the middle fifty years of the twentieth century.
The second dogma attacked by Quine is radical reductionism, the view that every meaningful sentence is translatable into sentences about sense experience. Quine points out that the two dogmas—(i) that there is a genuine analytic/synthetic distinction, and (ii) radical reductionism—are linked in empiricist thinking by verificationism. Roughly speaking, verificationism holds that two sentences have the same meaning if and only if they would be confirmed or disconfirmed by the same experiences. Given this notion of synonymy, one could define analyticity as synonymy with a logical truth. Thus if verificationism were correct then the analytic/synthetic distinction would be safe. Similarly if verificationism, or at any rate a particularly simple version of verificationism, were correct then any empirical sentence would be translatable into the set of observation sentences that would confirm it, and radical reductionism would be saved. For these reasons, Quine concludes, if simple verificationism were correct then the two dogmas of empiricism would be corollaries of it.
By the time Quine wrote "Two Dogmas," verificationism, as a theory of meaning for individual sentences, was already dead, as was radical reductionism. Nevertheless he noted that some philosophers still maintained a modified version of the latter according to which each (synthetic) statement is, by virtue of its meaning, associated with a unique set of possible observations that would confirm it and another that would disconfirm it. Against this Quine argued that verification is holistic, by which he meant that most sentences don't have predictive content in isolation but are empirically significant only insofar as they contribute to the predictive power of larger empirical theories. Since he continued to assume, with the positivists, that meaning is verification, his position was one of holistic verificationism. According to this view the meaning of a theory is, roughly, the class of possible observations that would support it, and two theories have the same meaning if and only if they would be supported by the same possible observations. Since individual sentences don't have meanings on their own, any sentence can be held true in the face of any experience (by making necessary adjustments elsewhere in one's overall theory), and no sentence is immune from revision—since given a theory T incorporating S, Quine thought that one could construct a different, but predictively equivalent, and hence synonymous, theory T′ incorporating the negation of S.
The resulting picture of philosophy and philosophical analysis that emerges from Quine's work is radically at variance with any we have seen. He rejects the doctrine that philosophical problems arise from confusion about the meanings of words or sentences, and with it the conception of philosophy as providing analyses of their meaning. He rejects these views because he rejects their presuppositions—that words and sentences have meanings in isolation and that we can separate out facts about meanings or linguistic conventions from the totality of all empirical facts. For Quine philosophy is continuous with science. It has no special subject matter of its own, and it is not concerned with the meanings of words in any special sense. Philosophical problems are simply problems of a more abstract and foundational sort than the ordinary problems of everyday science.
In later years Quine put less emphasis on holistic verificationism (which is itself beset with problems akin to earlier versions of verificationism), but he did not back away from his skepticism about our ordinary, pre-theoretic conception of meaning. Instead he deepened and extended his attack with his doctrine of the indeterminacy of translation in Word and Object (1960) and its corollary, the inscrutability of reference, in "Ontological Relativity" (1969). Since Quine, the naturalist, could find no place in nature for meaning and reference as ordinarily conceived, he repudiated both in favor of radically deflated, behaviorist substitutes. Thus it should not be surprising that there is no place in his brave new world for philosophical analysis as a distinctive intellectual activity. Nevertheless his actual philosophical practice is hard to distinguish from that of his analytic predecessors. Like them he does little, when arguing for his central doctrines, to delineate their alleged contributions to the observational predictions made by our overall theory of the world.
Later Wittgenstein
In the Philosophical Investigations (1953) Wittgenstein outlines a new, essentially social conception of meaning that contrasts sharply with the one presented in the Tractatus. In the earlier work language was viewed on the model of a logical calculus in which conceptual structure is identical with logical structure, and all meaningful sentences are truth-functions of atomic sentences that represent metaphysically simple objects standing in relations isomorphic to those in which logically proper names stand in the sentences themselves. In the Investigations the picture is quite different. Language is no longer seen as a calculus, derivability by formal logical techniques is accorded no special role in explaining conceptual connections among sentences, and naming is not taken to be the basis of meaning. Instead, meaning arises from socially conditioned agreement about the use of expressions to coordinate the activities and further the purposes of their users. For the later Wittgenstein, to know the meaning of an expression is not to know what it names or how to define it but to know how to use it in interacting with others.
According to this conception of meaning, understanding a word is not a psychological state but rather a disposition to apply it in the correct way over a wide range of cases; where by the correct way we do not mean the way determined by a rule the speaker has internalized. The problem, as Wittgenstein sees it, with appealing to such rules to explain our understanding of words is that rules are themselves made up of symbols that must be understood if they are to be of any use. Obviously this sort of explanation can't go on forever. In the end we are left with a large class of words or symbols that we understand and are able to apply correctly, despite the fact that what guides us and makes our applications correct are not further rules of any sort. When we reach rock bottom we are not guided by rules at all; we simply apply expressions unthinkingly to new cases.
What determines whether these new applications are correct? The mere fact that I am inclined to call something F can't guarantee that I am right. If my use of F is to be meaningful, there must be some independent standard that my application is required to live up to in order to be correct. Wittgenstein thinks this standard can't come from me alone. The reason it can't is that the same argument that shows that the standard of correctness cannot be determined by an internalized rule can be repeated to establish that it can't be determined by any belief, intention, or other contentful mental state of mine. The problem, Wittgenstein thinks, is that in order to perform such a role, any such mental state must itself have gotten its content from somewhere. A regress argument can then be used to conclude that the contents of all my words and all my mental states must, in the end, rest on something other than my mental states. Thus, he suggests, the standard of correctness governing my use of F cannot rest on anything internal to me, but must somehow come from the outside. What more natural place to look for this than in the linguistic community of which I am a part? Hence, he suggests, for me to use F correctly is for me to apply it in conformity with the way it is applied by others. For Wittgenstein this, in turn, implies that F must be associated with public criteria by which someone else can, in principle, judge whether my use of it is correct. Language is essentially public; there can be no logically private language.
This conception of language leads Wittgenstein to a new conception of philosophy and philosophical analysis. He continues to believe that philosophical problems are linguistic, and that philosophical analysis is the analysis of language—but this analysis is no longer seen as a species of logical analysis. According to the new conception there is no such thing as the logical form of a sentence, and one should not imagine that sentences have unique analyses. According to Wittgenstein we do not give an analysis of a sentence because there is anything wrong with it that demands clarification. We give an analysis when something about it leads us into philosophical confusion. The same sentence might even receive different analyses, if people become confused about it in different ways. In such a case each analysis may clear up a particular confusion, even if no analysis clears them all up.
Accompanying this deflationary view of analysis is a highly deflationary conception of philosophy. According to the Investigations the philosophical analysis of language does not aim at, and cannot issue in, theories of any kind. Philosophy, as Wittgenstein says in section 109, "is a battle against the bewitchment of our intelligence by means of language." According to this view the task of philosophy is essentially therapeutic. It is the untangling of linguistic confusions, achieved by examining our words as they are ordinarily used, and contrasting that use with how they are misused in philosophical theories and explanations.
This deflationary conception arises naturally from Wittgenstein's new ideas about meaning, plus certain unquestioned philosophical presuppositions that he brings to the enterprise. These include his long-held convictions (i) that philosophical theses are not empirical, and hence must be necessary and apriori, and (ii) that the necessary, the apriori, and the analytic are one and the same. Because he takes (i) and (ii) for granted, he takes it for granted that if there are any philosophical truths, they must be analytic. To this he adds his new conception of meaning—with its rejection of abstract logical forms, its deflationary view of rule-following and algorithmic calculation, and its emphasis on social conditioning as generating agreement in our instinctive applications of words. Having jettisoned his old conception of meaning as something hidden and replaced it with a conception of meaning that sees it as arising from an unquestioning, socially-conditioned agreement, he has little room in his conceptual universe for surprising philosophical truths. Genuinely philosophical truths, if there are any, can only be necessary and apriori, which in turn are taken to be true in virtue of meaning.
But how are the analytic truths of interest to a philosopher to be established if they are not to be translated into the formulas of a logical calculus and demonstrated by being given rigorous but sometimes also innovative and insightful logical proofs? For the Wittgenstein of the Investigations, the answer is that they don't need to be established, since they are already implicitly recognized by competent language users. To be sure they may sometimes need to be brought into focus by assembling examples of ordinary use that illustrate the constitutive role they play in our language; but there is little room here for surprising philosophical discoveries. Such is the official view of the Investigations.
As with the Tractatus, there is an evident problem here. Wittgenstein's official view of philosophy is at variance with his own philosophical practice. His general theses about language and philosophy (to say nothing of his surprising and, arguably, revisionist views about sensation and other psychological language arising from the private language argument) are by no means obvious or already agreed upon; nor are they the sorts of things that one can just see to be true, once they are pointed out. On the contrary they require substantial explanation and argument, if they are to be accepted at all. As was so often the case throughout the twentieth century, the practice of philosophical analysis—understood as whatever it is that analytic philosophers do—eluded the official doctrines about analysis propounded by its leading practitioners.
The Ordinary Language School
This school, which received great impetus from the Investigations, was shaped by two leading ideas. The first was that since philosophical problems are due solely to the misuse of language, the job of the philosopher is not to construct elaborate theories to solve philosophical problems but to expose linguistic confusions that fooled us into thinking there were genuine problems to be solved in the first place. The second idea was that meaning itself—the key to progress in philosophy—is not to be studied from an abstract scientific or theoretical perspective. Rather philosophers were supposed to assemble observations about the ordinary use of words, and to show how misuse of certain words leads to philosophical perplexity. In retrospect this combination of views seems quite remarkable: All of philosophy depends on a proper understanding of something that there is no systematic way of studying. Fortunately this anti-theoretical approach changed over time with much of the progress in the period being marked by significant retreats from it—including Austin's theory of performatives in How to Do Things with Words and Paul Grice's theory of conversational implicature in "Logic and Conversation" (both originally delivered as the William James Lectures at Harvard, in 1955 and 1967, respectively).
A good example of the standard, anti-theoretical approach is Ryle's Dilemmas (1953), in which he identifies the main aim of philosophy as that of resolving dilemmas. For Ryle a dilemma arises when obvious theories or platitudes appear to conflict with one another. In such cases a view that is unobjectionable in its own domain comes to seem incompatible with another view that is correct when confined to a different domain. When this happens we find ourselves in the uncomfortable position of seeming to be unable jointly to maintain a pair of views, each of which appears correct on its own. Ryle believes that in most cases the apparent conflict is an illusion to be dispelled by philosophical analysis. However, the needed analysis is not a matter of defining key concepts or uncovering hidden logical forms. Although analysis is conceptual, what is wanted is never a sequence of definitions that could in principle be presented one by one. Instead Ryle compares the required analysis to the description of the position of wicket keeper in cricket. Just as one can't describe that position without describing how it fits in with all the other positions in cricket, so, Ryle thinks, one cannot usefully analyze a concept without tracing its intricate connections with all the members of the family of concepts of which it is a part.
His most important application of this method is to psychological language, in The Concept of Mind (1949). There he rejects what he calls the myth of "the Ghost in the Machine," according to which belief and desire are causally efficacious, mental states of which agents are non-inferentially aware. Ryle takes this view to be "entirely false" and to be the result of what he calls "a category-mistake," by which he means that it represents mental facts as belonging to one conceptual type, when they really belong to another. He illustrates this with the analogy of someone who visits different buildings and departments of a university and then asks "But where is the university?" Here the category mistake is that of taking the university to be a separate building or department alongside the others the visitor has seen, rather than being the way in which all the different buildings and departments are coordinated.
Similarly, Ryle maintains, someone who believes that the mind is something over and above the body fails to realize that the mind is not a separate thing, and that talk of the mental is really just talk about how an agent's actions are coordinated. According to this view, to attribute beliefs and desires to an agent is not to describe the internal causes of the agent's action but simply to describe the agent as one who would act in certain ways if certain conditions were fulfilled. This is rather surprising. According to Ryle's ordinary-language ideology, philosophy is not supposed to give us new theories but to untangle linguistic confusions—leaving us, presumably, with a less muddled version of what we pretheoretically thought. Here, however, his aim was to undermine a certain widely-held view of the mind and to provide what, arguably, amounts to a sweeping revision of our ordinary conception of the mental.
J. L. Austin was similarly ambitious. In his elegant classic Sense and Sensibilia, published in 1962 but delivered as lectures several times between 1947 and 1959, he attempted to dissolve, as linguistically confused, phenomenalism, skepticism about knowledge of the external world, and the traditional sense-data analysis of perception. His goal was to show these positions to be incoherent by undermining the presupposition that our knowledge of the world always rests on conceptually prior evidence of how things perceptually appear. For this he employed two main strategies. One was to try to show that certain statements—such as "There is a pig in front of me," uttered in normal circumstances, with the animal in plain sight—are statements about which the claim that knowledge of them requires evidence of how things appear cannot be true. Austin drew this conclusion from the observation that it would be an abuse of language for the speaker in such a situation to say, "It appears that there is a pig in front of me," or "I have evidence that there is a pig in front of me." His other strategy was to argue that appearance statements themselves are parasitic on ordinary non-appearance statements and so cannot be regarded as conceptually prior to the latter.
Neither strategy was successful. The first was rebutted by Ayer in "Has Austin Refuted the Sense-Datum Theory?" (1967), in which he pointed out that the abuse that Austin spotted was, in effect, a matter of Gricean conversational implicature (Don't make your conversational contribution too weak!) from which no conclusion about the possibility of knowledge without evidence can be drawn. The general lesson here is that not all matters of language use (or misuse) are matters of meaning (or truth). Austin's second strategy, though not similarly rebutted, was not developed in enough detail to be compelling. In addition it faced the general difficulty (common to many ordinary-language attempts to undermine skepticism) of appealing to non-skeptical claims about meaning to refute the skeptic. Even if the view of meaning is correct, it may have little argumentative force against a determined skeptic.
By contrast the theory of performative utterances given in How to Do Things with Words (1962) has become an enduring fixture of the study of language. The idea, in its simplest form, is that utterances of sentences like "I promise to come" or "I name this ship The Ferdinand" are, in proper circumstances, not reports of actions but performances of them. Although there have been many disputes about how to develop this idea, there is no question that there is something to it. Austin himself was inclined to think that performative utterances of this sort were attempts, not to state facts, but to perform certain conventionally recognized speech acts.
For a time this idea generated considerable optimism about performative analyses of important philosophical concepts of the sort illustrated by Peter Strawson's 1949 paper, "Truth"—according to which ⌜It is true that S⌝ is analyzed as ⌜I concede/confirm/endorse that S⌝—and R. M. Hare's The Language of Morals (1952)—according to which ⌜That is a good N⌝ is assimilated to ⌜I commend that as an N⌝. However, these views, along with other ambitious attempts to use performative analyses to sweep away age-old philosophical problems, ran into serious difficulties. Chief among them was the point—made by Peter Geach in "Ascriptivism" (1960) and John Searle in "Meaning and Speech Acts" (1962)—that any analysis of the meaning of S must explain the contribution S makes to complex sentences of which it is a constituent. For example, one who utters the conditional ⌜If that is a good N, then it is worth buying⌝ commends nothing, yet good clearly contributes to what the conditional means. Since analyses that focus exclusively on the speech acts performed by utterances of S on its own don't—and often can't—do this, they cannot be taken to be correct accounts of meaning. This reinforced a message noted earlier: not all aspects of language use are aspects of meaning. As this point sank in, the need for systematic theories to sort things out became clear, and the ordinary language era drew to a close.
Later Developments
Many philosophers found what they were looking for in Donald Davidson's attempt to construct, in the 1960s and 1970s, a theory of meaning for natural language modeled on Alfred Tarski's formal definition of truth for the languages of logic and mathematics. According to Davidson it is possible to construct a finitely axiomatizable theory of truth for a natural language L that allows one to derive—from axioms specifying the referential properties of its words and phrases—a T-sentence, ⌜'S' is a true sentence of L if and only if p⌝, for each sentence S of L, which gives its truth conditions. Since such a theory gives the truth conditions of every sentence on the basis of its semantically significant structure, it is taken to count as a theory of meaning for L. The theory is empirically tested by comparing the situations in which speakers hold particular sentences to be true with the truth conditions it assigns to those sentences. On Davidson's view the correct theory of meaning is, roughly, the theory TM such that the conditions in which speakers actually hold sentences to be true most closely match the conditions in which TM, plus our theory of the world, predicts those sentences to be true. Roughly put, Davidson takes the correct theory to be the one according to which speakers of L turn out to be truth tellers more frequently than on any other interpretation of L.
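A standard illustration (Tarski's own stock example, used here for exposition rather than drawn from Davidson's texts) shows the shape such derived theorems take when English is treated as both the object language and the metalanguage:

'Snow is white' is a true sentence of English if and only if snow is white.

Roughly, such an instance is derived from axioms stating, for example, what 'snow' refers to and which things 'white' is true of, together with a rule specifying how subject-predicate sentences are evaluated.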
This bold idea generated a large volume of critical comment, both pro and con, over the next two decades. One important cluster of problems centers around the fact that the T-sentences generated by Davidsonian theories are material biconditionals and so provide truth conditions of object-language sentences only in the very weak sense of pairing each such sentence with some metalanguage sentence or other that has the same truth value.
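The weakness can be made vivid with a simple illustration (again an expository example, not one taken from Davidson): because the biconditional is merely material, the sentence

'Snow is white' is a true sentence of English if and only if grass is green

is itself true (both sides happen to have the same truth value), yet a theory that paired object-language sentences with metalanguage sentences only in this weak sense would clearly not tell us what the former mean. This is one reason the subsequent literature sought further constraints on acceptable truth theories, for example that T-sentences be derivable in a canonical way from axioms tracking the semantic structure of the sentences they concern.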
One popular way of countering this difficulty is to strengthen the theory of meaning by putting it in the form of a theory of truth relative to a context of utterance and a possible world-state. This approach, widely known as possible worlds semantics, was pioneered from the 1940s through the 1970s by Carnap, Saul Kripke, Richard Montague, David Lewis, and David Kaplan, among others. As commonly pursued it involves enriching the formal languages amenable to Tarski's techniques so that they incorporate more and more of the concepts found in natural language—including modal concepts expressed by words like actual, necessary, possible, could, and would; temporal concepts expressed by natural-language tenses; and indexical expressions like I, we, you, he, now, and today. By the end of the century it had become possible to imagine a time when natural languages would be treatable in something close to their entirety by the descendants of the logical techniques initiated by Tarski. Analyses of central philosophical concepts, formulated in terms of possible world-states, had also become commonplace, as illustrated by the highly influential treatment of counterfactual conditionals given by Robert Stalnaker and David Lewis, as well as by Lewis's related analysis of causation.
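A rough formulation conveys the flavor of these analyses (a simplification offered here for exposition, glossing over differences between Stalnaker's and Lewis's accounts and over the vacuous case in which there are no antecedent-worlds): a counterfactual ⌜If it were the case that A, then it would be the case that C⌝ is true at a world-state w just in case C is true at the A-world-states most similar to w. Lewis's analysis of causation then treats one event as a cause of another, roughly, when the two are linked by a chain of such counterfactual dependencies, the simplest case being that in which, had the first not occurred, the second would not have occurred.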
However the most important philosophical development in the last half of the century occurred in Princeton in January of 1970 when Saul Kripke, then twenty-nine years old, delivered the three lectures that became Naming and Necessity. Their impact was profound, immediate, and lasting. In the philosophy of language Kripke's work ranks with that of Frege in the late nineteenth century, and of Russell and Tarski in the first half of the twentieth. Beyond the philosophy of language, it fundamentally changed the way in which much philosophy is done. The most important aspects of the work are (i) a set of theses about the meaning and reference of proper names according to which neither their meanings nor reference-determining conditions are determined by descriptions associated with them by speakers; (ii) a corresponding set of theses about the meaning and reference of natural kind terms such as heat, light, water, and tiger; (iii) a compelling defense of the metaphysical concepts of necessity and possibility; (iv) a sharp distinction between necessity and apriority; (v) forceful arguments that some necessary truths are knowable only aposteriori and some apriori truths are contingent; and (vi) a persuasive defense of the view that objects have some of their properties essentially and others accidentally. Familiar illustrations from the lectures include the claim that it is necessary, but knowable only aposteriori, that Hesperus is Phosphorus, and the claim that one who fixes the reference of 'one meter' by the standard meter bar knows apriori, though only contingently, that the bar is one meter long. In addition to these explicit aspects of the work, Kripke's discussion had far-reaching implications for what has come to be known as externalism about meaning and belief—roughly the view that the meanings of one's words, as well as the contents of one's beliefs, are partly constituted by facts outside oneself. Finally Naming and Necessity played a large role in the implicit but widespread rejection of the view—so popular among earlier analytic philosophers—that philosophy is nothing more than the analysis of language.
See also Analytic and Synthetic Statements; Analytic Feminism; Austin, John Langshaw; Ayer, Alfred Jules; Carnap, Rudolf; Common Sense; Davidson, Donald; Frege, Gottlob; Grice, Herbert Paul; Hare, Richard M.; Hempel, Carl Gustav; Idealism; Kaplan, David; Kripke, Saul; Lewis, David; Logical Positivism; Materialism; Montague, Richard; Moore, George Edward; Philosophy of Language; Presupposition; Quine, Willard Van Orman; Ramsey, Frank Plumpton; Reichenbach, Hans; Russell, Bertrand Arthur William; Ryle, Gilbert; Schlick, Moritz; Searle, John; Strawson, Peter Frederick; Tarski, Alfred; Whitehead, Alfred North; Wittgenstein, Ludwig Josef Johann.
Bibliography
Austin, John Langshaw. How to Do Things with Words. New York: Oxford University Press, 1962.
Austin, John Langshaw. Sense and Sensibilia. London: Oxford University Press, 1962.
Ayer, Alfred Jules. "Has Austin Refuted the Sense-Datum Theory?" Synthese 17 (1967): 117–140.
Ayer, Alfred Jules. Language, Truth and Logic. New York: Dover, 1952.
Carnap, Rudolf. The Logical Structure of the World (1928). Berkeley: University of California Press, 1969.
Carnap, Rudolf. Meaning and Necessity. Chicago: University of Chicago Press, 1947.
Carnap, Rudolf. "Testability and Meaning." Philosophy of Science 3 (1936): 419–71, and 4 (1937): 1–40.
Church, Alonzo. Review of Language, Truth and Logic, 2nd ed. Journal of Symbolic Logic 14 (1949): 52–53.
Davidson, Donald. "Truth and Meaning." Synthese 17 (1967): 304–323.
Davidson, Donald. "Radical Interpretation." Dialectica 27 (1973): 313–328. Reprinted in Inquiries into Truth and Interpretation. Oxford: Clarendon, 2001.
Davies, Martin. Meaning, Quantification, and Necessity. London: Routledge & Kegan Paul, 1981.
Dummett, Michael. "What Is a Theory of Meaning?" In Mind and Language, edited by Samuel Guttenplan. Oxford: Clarendon Press, 1975.
Dummett, Michael. "What Is a Theory of Meaning? (II)." In Truth and Meaning, edited by Gareth Evans and John McDowell. Oxford: Clarendon Press, 1976.
Foster, John A. "Meaning and Truth Theory." In Truth and Meaning, edited by Gareth Evans and John McDowell. Oxford: Clarendon Press, 1976.
Frege, Gottlob. Begriffsschrift. Halle a. S.: Louis Nebert, 1879.
Geach, Peter. "Ascriptivism." Philosophical Review 69 (1960): 221–225.
Grice, Paul. "Logic and Conversation." In Studies in the Way of Words. Cambridge, MA: Harvard University Press, 1989.
Hare, Richard. The Language of Morals. Oxford: Clarendon Press, 1952.
Hempel, Carl. "On the Nature of Mathematical Truth." American Mathematical Monthly 52 (1945): 543–556. Reprinted in The Philosophy of Mathematics. 2nd ed., edited by Paul Benacerraf and Hilary Putnam. Cambridge, U.K.: Cambridge University Press, 1983.
Hempel, Carl. "Problems and Changes in the Empiricist Criterion of Meaning." Revue Internationale de Philosophie 4 (1950): 41–63. Reprinted in Logical Positivism, edited by Alfred Jules Ayer. New York: The Free Press, 1959.
Kaplan, David. "Demonstratives." In Themes from Kaplan, edited by Joseph Almog, John Perry, and Howard Wettstein. New York: Oxford University Press, 1989.
Kripke, Saul. "A Completeness Theorem in Modal Logic." Journal of Symbolic Logic 24 (1959): 1–14.
Kripke, Saul. Naming and Necessity. Cambridge, MA: Harvard University Press, 1980. Originally published in Semantics of Natural Languages, edited by Donald Davidson and Gilbert Harman. Dordrecht: Reidel, 1972.
Kripke, Saul. "Semantical Considerations on Modal Logic." Acta Philosophica Fennica 16 (1963): 83–94. Reprinted in Reference and Modality, edited by Leonard Linsky. London: Oxford University Press, 1971.
Lewis, David, "Causation." Journal of Philosophy 70 (1973): 556–567.
Lewis, David. Counterfactuals. Cambridge, MA: Harvard University Press, 1973.
Montague, Richard. Formal Philosophy. New Haven: Yale University Press, 1974.
Moore, George Edward. Principia Ethica. London: Cambridge University Press, 1903.
Moore, George Edward. "The Refutation of Idealism." Mind 12 (1903): 433–453.
Moore, George Edward. "A Defense of Common Sense." In Contemporary British Philosophy (Second Series), edited by John Henry Muirhead. New York: Macmillan, 1925. Reprinted in Philosophical Papers. New York: Collier 1962.
Moore, George Edward. "Proof of an External World." Proceedings of the British Academy XXV (1939): 273–300.
Quine, Willard Van Orman. "Ontological Relativity." In Ontological Relativity and Other Essays. New York: Columbia University Press, 1969.
Quine, Willard Van Orman. "Truth by Convention." Philosophical Essays for A. N. Whitehead, edited by O. H. Lee. New York: Longmans, 1936. Reprinted in Ways of Paradox. New York: Random House, 1966.
Quine, Willard Van Orman. "Two Dogmas of Empiricism." Philosophical Review 60 (1951): 20–43.
Quine, Willard Van Orman. Word and Object. Cambridge, MA: MIT Press, 1960.
Ramsey, Frank. The Foundations of Mathematics. London: Routledge and Kegan Paul, 1931.
Russell, Bertrand. Introduction to Mathematical Philosophy. New York: Macmillan, 1919. Reprinted New York: Dover, 1993.
Russell, Bertrand. "Knowledge by Acquaintance and Knowledge by Description." Proceedings of the Aristotelian Society 11 (1910–1911): 108–128.
Russell, Bertrand. "On Denoting." Mind 14 (1905): 479–493.
Russell, Bertrand. Our Knowledge of the External World. Chicago and London: Open Court, 1914. Reprinted London: Routledge, 1993.
Russell, Bertrand. "The Philosophy of Logical Atomism." The Monist 28 (1918): 495–527; and 29 (1919): 33–63, 190–222, 344–80; reprinted La Salle, IL: Open Court, 1985.
Russell, Bertrand, and Alfred North Whitehead. Principia Mathematica. Vol. 1. London: Cambridge University Press, 1910.
Russell, Bertrand, and Alfred North Whitehead. Principia Mathematica. Vol. 2. London: Cambridge University Press, 1912.
Ryle, Gilbert. The Concept of Mind. New York: Barnes and Noble, 1949.
Ryle, Gilbert. Dilemmas. Cambridge, U.K.: Cambridge University Press, 1953.
Searle, John. "Meaning and Speech Acts." Philosophical Review 71 (1962): 423–432.
Soames, Scott. Philosophical Analysis in the Twentieth Century, Vol. 1: The Dawn of Analysis. Princeton, NJ: Princeton University Press, 2003.
Soames, Scott. Philosophical Analysis in the Twentieth Century, Vol. 2: The Age of Meaning. Princeton, NJ: Princeton University Press, 2003.
Soames, Scott. "Semantics and Semantic Competence." Philosophical Perspectives 3 (1989): 575–596.
Soames, Scott. "Truth, Meaning, and Understanding." Philosophical Studies 65 (1992): 17–35.
Stalnaker, Robert. "A Theory of Conditionals." In Studies in Logical Theory, American Philosophical Quarterly Monograph Series 2. Oxford: Blackwell, 1968. Reprinted in Ifs, edited by William Harper, Robert Stalnaker, and Glenn Pearce. Dordrecht: Reidel, 1980.
Tarski, Alfred. "The Concept of Truth in Formalized Languages." In Logic, Semantics, and Metamathematics, edited by John Corcoran. Indianapolis: Hackett, 1983.
Wittgenstein, Ludwig. Philosophical Investigations. Translated by G. E. M. Anscombe. Oxford: Blackwell, 1953.
Wittgenstein, Ludwig. "Some Remarks on Logical Form." Proceedings of the Aristotelian Society supp. Vol. 9 (1929): 162–171.
Wittgenstein, Ludwig. Tractatus Logico-Philosophicus. Translated by C. K. Ogden. London: Routledge & Kegan Paul, 1922.
Scott Soames (2005)