Philosophy of Language
What, if anything, can philosophy teach us about language? It is a feature of English that its adjectives come before its nouns, as in green table. This syntactic fact distinguishes English from French. In English there is a difference in sound between words that begin with a b and ones that begin with a p. This phonological fact distinguishes English from other languages. Some varieties of Arabic, for example, have trill sounds. This phonetic feature distinguishes them from English. Are any of these linguistic features philosophically interesting?
It is doubtful whether any philosopher seriously believes that, qua philosopher, they have anything interesting to say about the syntactic, phonetic, and phonological features of languages in general or of English in particular. Why, then, should it be any different for all of the other features of language? For example, that in English a relative pronoun follows the noun phrase it modifies, or that English declarative sentences are of the subject-verb-object variety, are interesting facts about English syntax; but why should any of this be of philosophical interest?
Many theorists claim that philosophers of language are interested in answering questions of the sort: What need someone know in order to understand his or her language? Do they need to know the sorts of facts just mentioned? In some sense of know, they must. Someone who speaks English can normally recognize another as a non-English speaker, as a nonnative English speaker, or as a less than perfectly fluent English speaker simply by virtue of the fact that this speaker employs syntactic structures or phonemes that are not a part of English, or fails to recognize differences between distinct phonemes of English. For example, if someone failed to recognize a difference between an articulation of the words bit and bet, this would constitute partial evidence that the individual in question does not (fully) grasp English. But why is this philosophical? It is not! Still, philosophy does matter to language. Why anyone should think so is a complicated matter, one to which an answer will be sketched in the sections that follow.
It is uncontroversial that linguistic expressions carry meaning. Right now, you are looking at ink marks on a piece of paper. These marks are in English, they have meaning, and should you know these meanings, you can figure out what they say. We spend a lot of our lives exercising our communicative abilities: abilities to produce utterances (spoken, written, felt, etc.) that others can interpret, and abilities to interpret utterances that others have produced. These abilities in assigning meanings to expressions—simple and complex—are required in order to ask for help, read traffic signs, interest others, surf the net, read newspapers, write e-mails, watch movies, comfort others, listen to lectures, order food, read a bus schedule, buy wine, quarrel, and make jokes.
One of the central projects in philosophy of language today is to provide an explicit and systematic account of whatever knowledge we have of the meanings of the expressions of our language, knowledge that enables us to communicate with it. Surrounding this project are a number of subtle philosophical issues.
What is Meaning?
What is the meaning of an expression? Traditional scholarly books and articles all weigh in with one analysis or another about the nature of meaning. Some posit that the meaning of an expression is what it applies to (apple means the set of apples), the idea that we associate with it (God means, say, the idea of a benevolent omnipotent omniscient being), or the characteristic behavior that its uses evince (Fire ! means run for safety), and so on.
Criticisms run that this or that analysis cannot be right, because if meaning were this, then two expressions that differ in meaning would turn out to be synonymous, or a meaningful expression would turn out to be, on the proposed analysis, meaningless. For example, a critic of the view that the meaning of an expression is what it applies to might argue that the two sentences "Cicero was Roman" and "Tully was Roman" are not synonymous even though the referents of Cicero and Tully are the same. A critic of the view that the meaning of an expression is the idea(s) we associate with it might argue that even though someone can associate the idea of warm weather with the word grass, the idea of warm weather is still not part of its meaning. Anyone who denies this should visit Ireland in January.
Though neither argument is definitive (after all, paraphrasing Ludwig Wittgenstein, theories do not get refuted; they just become no longer interesting to defend), they still illustrate how theories of meaning can be, and often are, evaluated. In traditional criticisms, intuitions about what we believe expressions to mean dominate. The question of what the relationship is between theories whose sole aim is to provide an analysis of an important concept and theories whose aim is to explain various phenomena is left open by this to and fro (for more on the analysis of the concept of meaning, see William P. Alston's Philosophy of Language ).
A major shift in the philosophical study of meaning took place about fifty years ago with the abandonment of efforts to analyze the concept of meaning (Quine 1953). But, if it is not an analysis of the concept of meaning that philosophers are after, what, then, warrants evaluations of various claims about what meaning is?
Whatever meaning is, it is relatively uncontroversial what knowledge of it enables us to do: It enables us to understand language. Because we know what the expressions of our language mean, we understand English. In rejecting an account, we are saying in effect that this cannot be what we know that enables us to understand English, because if it were, we could know it and still fail to understand English. Thus, even if you were taught the referent of every English word, you might not understand an English interlocutor. On the referential theory, being asked, "Was Cicero the same man as Tully?" should produce bewilderment, for it is analogous to being asked whether bachelors are unmarried men. But if it is not knowledge of the referent of an expression that enables one to understand it, what does enable one to understand it?
The picture that understanding a word is learning to associate an idea with it goes back at least to the early empiricist Thomas Hobbes. It is a bad theory, for suppose you were told, "Though grass covers Ireland in January, it is not warm there then." Were your understanding of the word grass to include the idea of warmth, you should find this comment linguistically confused, much like being told "Though John is a bachelor, he has a wife!" But if understanding consists neither in knowing the referents of your words, nor the ideas you associate with them, what then might you know that would enable you to understand English?
The picture that dominated theories of meaning throughout most of the last century is (various versions of) linguistic behaviorism (Skinner 1957). Linguistic competence with an expression is knowing how to behave appropriately when confronted with its uses. For example, suppose you are told "Go get a coke!" In virtue of understanding English, what should you do? Should you automatically get a coke? Presumably not, for that would render linguistically competent English speakers all very active. Perhaps you need only know what you are supposed to do. But what are you supposed to do when someone asks you for a coke? Good manners might require that you should do something when asked, but understanding English requires nothing of you. These various critical points are intended to establish that no particular behavior is associated with language understanding, and so they scream out for clarification from anyone who wants to be a behaviorist about linguistic competence, clarification that was never forthcoming (Chomsky 1959).
Meaning is Relational, Extrinsic, Vague, and Conventional
Beginning with a banality, such as that understanding a language requires knowing the meanings of its expressions, is, as philosophers well know, a necessary precaution against a rampant background of skepticism in some philosophical quarters about the notion of meaning. Some of this skepticism stems from the consideration that whatever is alleged to carry meaning does not do so inherently. For instance, there is nothing about English words that requires "Snow is white" to mean that snow is white. In another language, the same words might mean that grass is green, and so it follows that whatever words mean depends partly on the language from which these words originate. But this sort of relativity should not compromise the reality of what words mean. After all, no one is inherently a father. The relational property of fatherhood depends on a relationship to someone else—a child. Likewise, whether or not a string of words means that snow is white depends on this string's relationship to a specific language.
This issue concerning the meaning of words should not be confused with reservations about the reality or truth of conventions. Being married, like fatherhood, is a relational property. But unlike fatherhood, marriage is not grounded in biology. It is, so to speak, a matter of convention or social arrangement who is married to whom. But, extant conventions might easily have been different. Everyone who is currently married might just as easily not have been without suffering any substantial change to their being—rather, only a change in convention. It is a mistake, however, to infer from this possibility that there really is no such thing as marriage. Likewise, from the fact that it is a matter of convention that dog means dog and not cat, it does not follow that there is no fact of the matter about what dog means.
The reality of meaning is equally left uncompromised by considerations about vagueness or borderline cases. Two words translate or paraphrase each other just in case they share the same meaning. In many instances, we are simply unsure whether two words translate or paraphrase each other; and there is no higher source to which we can appeal to settle our doubts. In short, that meaning is relational, extrinsic, vague, or conventional does not compromise its reality.
Language and Use of Language: Semantics and Pragmatics
Of course, linguistic meaning is not our only employment of the concept of meaning. We sometimes speak of another's action as meaningful, as when we ask after its purpose. In asking what meaning lay behind Bill's burning down his house, however, we need not assume that Bill's act is meaningful in the same way as the English sentence "Bill burned down his house" is. For one, it is not conventional meaning we seek in another's act, but rather the underlying intentions. For what reason did Bill carry out his sorry deed?
Similarly, people use words with intent. John might assert "Snow is white" because he wants to alert his listeners to the fact that English is his native tongue. No one would conclude on this basis that the words "Snow is white" mean that English is John's language. We can see clearly that with these speech acts, the notion of meaning enters twice. First, in choosing a vehicle to express our message, we employ words whose conventional meaning best conveys that message. Second, in interpreting a linguistic act, an attributed meaning can and often does exceed this conventional meaning.
An audience can exploit context and individual histories in order to discern an agent's purpose or message. Why did he tell me, "I love you," when he knows that I am fully aware of it? Does he mean to reassure me? Or does he dread losing me, and so means by his words for me to feel guilt about our imminent separation? Such exegetical issues concern us all whenever we try to size up what others mean by their particular use of words. With conventional linguistic meaning, speakers rely on a prior shared comprehension in order to convey a message successfully; with these other sorts of meaning, speakers rely—wittingly or not—on presumed shared beliefs and expectations that enable their audience to discern the nonconventional meaningful aspects of their linguistic acts.
In summary: When theorizing about meaning, it is crucial to distinguish between language and the use of language. Languages, such as English, exist independently (in a sense that requires clarification) of what anyone happens to do with them. If the sentences of this article had never been assembled together in this order, it would have made no difference to the existence of English. English words and sentences would have meant whatever they do. Speakers simply exploit the meanings of these words in their writings, and a reader exploits those same meanings in order to understand what is written. For example, consider sentence (1): Some American musicians are scared of a small Norwegian troll.
Most likely, (1) has never before been written. That, of course, does not prevent it from meaning whatever it does in English. It has its meaning independently of ever having been uttered or thought about. So far our discussion has been primarily concerned with the meaning that sentences have in English (by virtue of being English sentences)—that is, their conventional or literal meaning. The study of the literal meaning of words and sentences is often called semantics.
Conventional meaning, however, is, as we have seen, not the be-all and end-all of communication. We often (maybe always) use sentences to communicate contents quite different from their conventional meaning, as observed in the following conversation. Sam asks Chris in sentence (2): Can you help Alex with his paper tonight? Chris in sentence (3) responds: I'm driving into New York to see Jill. By uttering (3), Chris can succeed in telling Sam that she cannot help Alex with his paper that night. Of course, that's not the literal meaning of sentence (3). The literal meaning of that sentence is that Chris is driving into New York to see Jill. But by uttering (3), Chris can succeed in communicating to Sam more than the literal meaning of the sentence she uttered. The study of how words and sentences can be used to communicate contents that go beyond their literal meaning is often called pragmatics. The goal of pragmatics is to study the various mechanisms that speakers exploit to communicate content that goes beyond literal meaning (for more on the distinction between pragmatics and semantics, see H.P. Grice's Studies in the Way of Words ). But in ascribing conventional meaning, one can incur theoretical costs.
Representational and Compositional Meaning (Semantic) Theories
Philosophers of language and linguists talk about the vehicles that carry meaning as both representational and compositional. Representations represent—so the sentence "Bill Clinton is tall" represents Bill Clinton as tall; however, the sentence "The president of the United States in 1999 is tall" is also true of Bill Clinton, and also represents him as tall, but it does so in a different manner. And it differs not only in using a different vehicle. The Italian sentence "Il presidente degli Stati Uniti nel 1999 è alto" represents Bill Clinton as tall in exactly the same way that "The president of the United States in 1999 is tall" does, even though these two vehicles of representation are distinct. With these two sentences, the vehicles are synonymous—they carry the same meaning—whereas the first two are not synonymous, though both vehicles happen to be true in the same circumstances.
Suppose, for instance, that someone else had been president in 1999; then, the latter two sentences with definite descriptions might be false, but the first sentence with a proper name would still be true. So, whatever meaning is, it would appear to be more fine-grained than a mere symbol-object relationship. If words were merely tags for objects, no two co-tags would differ in meaning. It would seem that vehicles denote objects under representational guises, and these guises are part of what that expression means.
There has been much written about the nature of this guise, yet little of it has been clear. Whatever guises are, we have seen that they must be more fine-grained than the objects to which expressions apply, because expressions with the same referent can differ in meaning; but guises must also be more coarse-grained than the ideas speakers associate with expressions. Two people might use the same expression but associate different ideas with it; for you, snow might connote misery, but for a skier it might connote joy.
Synonymous sentences in the same or distinct languages are supposed to share guises; those that are non-synonymous do not, even if the sentences happen to be about the same objects, events, or states of affairs. Like the shadows in Plato's cave allegory, guises seem to exist somewhere in between linguistic items and the idiosyncratic ideas individual speakers associate with them, on the one hand, and the objects to which they are conventionally attached, on the other.
Guises are what determine whether a linguistic item is about one thing and not another; they are the concepts that enable us to understand the linguistic items we use. The definite descriptions the forty-second president of the United States and the husband of Hillary Clinton pick out the same person, Bill Clinton, but they do so in different ways. The ways in which they pick him out are another way to think about the guises associated with expressions. The former expression picks out Bill Clinton partly by virtue of his having the property of being the forty-second president of the United States; and the latter expression picks him out partly by virtue of his being Hillary Clinton's husband. Thus, these two expressions each represent the same individual, but they do so in different ways—under different guises.
But there is more to the concept of a guise than is evidenced by representational powers. Natural languages are essentially productive and systematic. They exhibit productivity in that there are no obvious upper bounds on the number of creative linguistic acts that can be performed through speech. Novel sentences can be formed by conjoining any two meaningful indicative sentences—as in, "John left, but Mary stayed"—or by prefacing any meaningful indicative sentence with a psychological verb—as in, "Carl believes that Martha is ill" or "Carl fears that Martha is ill."
Because humans lack magical abilities, this capacity to produce and comprehend novel linguistic acts requires explanation. The standard explanation is that speakers of a natural language must have learned rules that enable them to determine the meaning of a complex expression strictly on the basis of its significant parts. The existence of such compositional rules explains our capacity with productive representational systems—by assuming that any unbounded representational system is compositional, we have an explanation for mastery over productive representational systems (for further discussion of compositionality, see Jerry Fodor's and Ernie Lepore's Compositionality Papers ).
The property of compositionality can also be invoked in order to explain the following feature: It is a distinctive feature of English that when a grammatical sentence of the form "A R's B" is meaningful, then if "B R's A" is grammatical, not only is it also meaningful, but its parts are presumed to make the exact same meaningful contribution that they do in the original configuration. This aspect of a representational system is referred to as its systematicity.
The existence of a set of compositional rules accounts for systematicity as well as productivity. Compositionality requires that meaningful expressions compose in systematic ways to produce meaningful complexes. The expressions the red shoe, the table, and fell on mean what they mean regardless of whether they are configured to read "The red shoe fell on the table," or "The table fell on the red shoe." To be more specific: reconsider (1). Its literal meaning and, indeed, the literal meaning of any English sentence, depends on two factors: A) the meaning of the words (i.e., some, American, musicians, are, scared, of, a, small, Norwegian, and troll ); and B) the way in which these words are assembled. Put together as in (4), what results is a sentence entirely different in meaning from (1): (4) Some Norwegian musicians are scared of a small American troll.
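The point about word order can be made concrete with a minimal sketch. The semantic values and the tuple representation below are invented stand-ins, not a serious semantic theory; the sketch only illustrates that the same word meanings, composed in different orders, yield different sentence meanings.

```python
# Hypothetical semantic values for the phrases in the example above.
LEXICON = {
    "the red shoe": "RED-SHOE",
    "the table": "TABLE",
    "fell on": "FALL-ON",
}

def compose(subject, verb, obj):
    """Build a sentence 'meaning' from word meanings plus their mode of
    composition, here modeled crudely as an ordered tuple."""
    return (LEXICON[verb], LEXICON[subject], LEXICON[obj])

m1 = compose("the red shoe", "fell on", "the table")
m2 = compose("the table", "fell on", "the red shoe")

# Same lexicon, same contribution per word, different mode of composition:
assert m1 == ("FALL-ON", "RED-SHOE", "TABLE")
assert m2 == ("FALL-ON", "TABLE", "RED-SHOE")
assert m1 != m2
```

Each word contributes the same value in both configurations; only the assembly differs, and that difference alone distinguishes the two sentence meanings.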
From these apparently obvious facts we can derive the idea that languages have compositional meaning theories. The idea is that the literal meaning of a sentence (its literal or conventional content) is the result of the (literal/conventional) meaning of its parts (the words in it) and the manner in which these parts are put together (their mode of composition).
Furthermore, as we have already noted, in addition to the systematicity of our sentences, speakers are also able to understand and produce indefinitely many sentences—sentences neither they nor anyone else in their community has ever uttered before. This shows that their knowledge of language must be productive; it must extend beyond a fixed lexicon of predefined static elements, and must include a generative system that actively composes linguistic knowledge so as to describe arbitrarily complex structures. The hallmark of productivity in language is recursion. Recursive patterns of complementation, as in (5), and recursive patterns of modification, as in (6) and (7), allow phrases to be nested indefinitely many times within a single sentence: (5) Chris thinks that Kim thought that Robin wanted Sandy to leave; (6) Chris bought a gorgeous new French three-quart covered copper saucepan; (7) Chris is writing a book that describes inventors that have built machines that changed the world that we live in.
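Recursive embedding of the sort in (5) can be mimicked with a toy rule. The uniform "thinks that" frame and the names are illustrative assumptions, simpler than the mixed tenses of (5); the point is only that a single finite rule generates unboundedly many sentences.

```python
def embed(sentence, attitude_holders):
    """Nest a sentence under successive 'X thinks that ...' clauses,
    one level of embedding per attitude holder."""
    for name in attitude_holders:
        sentence = f"{name} thinks that {sentence}"
    return sentence

s = embed("Sandy left", ["Robin", "Kim", "Chris"])
print(s)  # Chris thinks that Kim thinks that Robin thinks that Sandy left
```

Nothing stops the rule from being applied again to its own output, which is exactly the sense in which recursion underwrites productivity.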
Speakers' capacity to formulate and recognize an open-ended array of possible sentences shows how acute a problem it is to coordinate meaning across speakers. When we learn the meanings of expressions of our native language, we must generalize from the finite record of our previous experience to an infinity of other expressions and situations. If we thereby arrive at a common understanding of the meanings of these expressions, it must be because language is structured by substantive and inherent constraints that we are able to exploit. More generally, if our discoveries in the theory of meaning are to help explain how speakers can use language meaningfully, we should expect that the generative mechanisms we postulate as theorists will be compatible with the psychological mechanisms that underlie speakers' abilities.
There are many ways to implement this idea of a compositional meaning theory. One that has been prominent in the philosophical literature is that a theory of meaning for a natural language, L, should consist of a finite set of axioms specifying the meaning of the words and the rules for how they can be composed. These axioms would then permit the derivation of theorems that specify the meaning of complex expressions (such as some American musicians ) and sentences, such as (1)–(7). So understood, a semantic theory is a formal theory from which we can derive the meaning of an infinity of English sentences. The reason why (1)–(7) mean what they mean in English is that their meanings are encoded, so to speak, in the basic axioms of a correct meaning theory for English.
A straightforward way, then, for a philosopher of language to explain productivity and systematicity is to assume that the meanings of particular sentences can be calculated by inference from general facts about meaning in the language. For example, consider the compositional meaning theory presented in (8)–(10): (8) Snow is a noun phrase and refers to the stuff snow; (9) White is an adjective phrase and refers to the property whiteness; and (10) If N is a noun phrase and refers to the stuff S and A is an adjective phrase and refers to the property P, then N is A is a sentence and is true if, and only if, S is P.
From this theory, we can derive (11) as a logical consequence: (11) "Snow is white" is true if, and only if, snow is white. Why should we think of (11) as a characterization of the meaning of the English sentence "Snow is white"? We can because it links up this sentence with a condition in the world stated in objective terms—in this case, the condition that snow is white. As theorists of meaning, we can utilize this kind of theory, which Donald Davidson calls an interpretive truth-theory, to provide a general account of how sentences link up with conditions in the world (Davidson 1967, 2001; Lepore and Ludwig 2005).
We use atomic formulas to axiomatize the meanings for elementary structures in the language and use conditional formulas to describe the meaning of complex structures in the language as a function of the semantics of their constituents. We then reason logically from the axioms to associate particular sentences with particular conditions in the world. As in (8)–(10), this inference will be compositional, in that the conclusions we derive will be inferred through a logical derivation that mirrors the syntactic/grammatical derivation of the sentence.
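As a rough sketch, the toy theory in (8)–(10) can be rendered executable. The "world" against which sentences are evaluated is an invented stand-in, and the derivation of a truth condition simply mirrors rule (10); this is an illustration, not a claim about how such theories are actually formalized.

```python
# (8): noun phrases and the stuff they refer to.
NOUN_AXIOMS = {"snow": "snow"}

# (9): adjective phrases and the properties they refer to.
ADJ_AXIOMS = {"white": "whiteness"}

# A toy world: pairs (stuff, property) that obtain (an assumption of the sketch).
WORLD = {("snow", "whiteness")}

def truth_condition(noun, adjective):
    """(10): 'N is A' is true iff the stuff N refers to
    has the property A refers to."""
    stuff = NOUN_AXIOMS[noun]
    prop = ADJ_AXIOMS[adjective]
    return (stuff, prop) in WORLD

# Deriving the analogue of (11): "Snow is white" is true iff,
# in the toy world, snow has whiteness.
assert truth_condition("snow", "white") is True
```

The derivation is compositional in the intended sense: the conclusion about the whole sentence is reached by a chain of steps that mirrors its grammatical structure, one axiom per constituent plus the conditional rule.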
There are two ways to view interpretive truth-theories such as (8)–(10). We can exploit an interpretive truth-theory to formulate a theory of meaning for a new language. For example, we could be pursuing translation. In this case, we are interested in systematically articulating translations of sentences in the object language in terms of sentences in our own; we understand these translations to be derived by inference from the axioms of the theory. Another way to view interpretive truth-theories (and other sorts of compositional theories of meaning), such as (8)–(10), is as ingredients of the speakers' psychology. On this view, we regard the axioms of a theory of meaning as generalizations that native speakers know tacitly about their own language. When speakers formulate or recognize particular utterances, they reason tacitly from this implicit theory to derive conclusions about specific new sentences. On this understanding, interpretive truth-theories offer an explanation of how speaker knowledge of meaning and inference underlie linguistic competence.
Formalism in Philosophy of Language
The view we just described invites an analogy between the semantics of natural languages and the semantics of the artificial languages of formal logic. The analogy goes back to Gottlob Frege (1879), who took logic to clarify the features of natural language essential for correct mathematical thought and communication. The work of Richard Montague (1974) took the analogy further. Montague explicitly advocated an exact parallel between the semantic analysis of English—what ordinary speakers actually know about their language—and the semantics of intensional higher-order logic. In fact, many techniques originally developed for giving semantics to logical languages turn out to be extremely useful in carrying out semantic analysis.
Indirect Speech Acts
Interpreting a dream partly involves assigning it meaning, but does this imply that dreams are representational in the way that language is? In one sense, they are obviously so. This is the sense in which we might say of any image that it is representational. An image of a horse is of a horse, and not of a sheep. But this is a notion of representation irrelevant to our current concerns in the philosophy of language, because it appeals to a natural (and not a conventional) relation between an image and its corresponding object. If dreams are supposed to be representational in the same sense in which photographs or other sorts of images are, then talk of a compositional theory of interpretation or meaning of dreams is not anything like the sort of theory that one invokes for systems of representations such as natural language. For one, photographic images are neither productive nor systematic, nor are they even fine-grained in the way in which linguistic representational systems are. An image of Bill Clinton is an image of the president of the United States, and nothing short of an election can pull them apart. More famously, an image of John giving Bill a toy is indistinguishable from an image of Bill receiving a toy from John, though these acts, inseparable as they are, are distinct. It is clear that the sort of systematicity that occurs so naturally within bona fide linguistic representational systems cannot be applied to images with the same ease.
We return now to our earlier contrast between literal/conventional meaning and meaning in purpose, or what we might call agent meaning. When a speaker employs a so-called indirect speech act, what he or she means by his or her words must be gleaned partly from background factors. So, for example, suppose Janet says, "It's raining outside." Her words mean that it is raining outside, but she herself might mean for her audience to bring their umbrellas. When Janet spoke she intended her audience to come to believe what she was trying to get across. In order for her words to have meant that her audience is to take their umbrellas, she must have intended her audience to recognize her ulterior motive.
Speaker meaning, in contrast to literal/conventional meaning, then, requires (at least) two sorts of intentions: one about what a speaker is trying to get her audience to believe by her utterance, and another about getting them to recognize what she is trying to do. More specifically, what a speaker means by her words depends on what she intends her audience to come to believe, and on what she intends them to recognize her as intending them to come to believe. Both component intentions, tacit or not, must accompany an utterance in order for the speaker to mean something by what she says. By her utterance of "It's raining," Janet means for her listeners to bring their umbrellas just in case she intends them to come to believe that they are to bring their umbrellas, and she intends them to recognize that she intends them to come to believe this.
Implicit in our discussion is, of course, the assumption that speaker meaning can exceed word meaning. For you to bring your umbrella is not what Janet's words "It's raining" literally/conventionally mean, nor is it implied by anything that these words literally/conventionally mean. Speaker meaning is determined by word meaning alone just in case it is either expressed or implied by what the words used mean; it is not so determined if it is neither expressed nor implied by what the speaker's words literally mean. A simple test separates the two cases: if we try to deny a speaker meaning that is determined by word meaning, we end up making inconsistent claims. Because Janet can consistently assert that it is raining outside without intending for you to bring any umbrella, what she means is neither expressed nor implied by what her words mean (Grice 1989).
Inquiries about speaker meaning not determined by word meaning are about nonlinguistic motives, beliefs, desires, wishes, fears, hopes, and other psychological states that provoke verbal expression. Speaking is an action; it is what we do with meaningful words. This requires reasons, and reasons not entirely about what our words mean. Linguistic and nonlinguistic psychological states both come into play.
Sentence Meaning and Understanding
To sum up: One chief goal of philosophy of language is to show how speaker knowledge of a natural language allows speakers to use utterances of sentences from their language meaningfully. As we have seen, one rough and tentative answer has been: If speakers know a recursive compositional meaning theory for their language, then they can use its rules and axioms to calculate interpretive truth conditions for arbitrarily complex novel sentences. But we have also seen that even if speakers can infer the truth conditions of sentences from their language on the basis of (tacitly) employing a compositional meaning theory for their language, such knowledge alone cannot account for all of what goes on in communication. Communication invariably takes us further than the literal/conventional meaning of our words. How do we go further in a communicative exchange than what our words literally mean?
A preliminary, approximate answer is this: We begin by idealizing the information mutually available to us in a conversation as our common ground (Stalnaker 1973). The common ground settles questions whose answers are uncontroversial, in that interlocutors know the answers, know that they know the answers, and so forth. Meanwhile, the common ground leaves open a set of possibilities about which there is not yet agreement: Maybe there is a matter of fact that could turn out (for all that the interlocutors know) to be one of various ways, or maybe the interlocutors actually do know how it turns out but do not realize that the knowledge is shared—so it could be that the others know, and it could be that they do not—and so forth. We might represent these possibilities in the common ground as a set of possible worlds (situations).
Let the set of possible worlds in which a given sentence is true represent the proposition associated with the sentence. If we adopt this picture, then we can formalize the effect that asserting a formula has on the common ground. When interlocutor A asserts a formula f, he or she introduces into the conversation the information that f is true. Suppose that f expresses the proposition that p. Before A asserts f, the common ground is some set of worlds C. After, the common ground must also take into account f. This formula f restricts the live possibilities by requiring the worlds that are in the common ground to make true the further proposition that p. So, the change that occurs when A asserts f is that the common ground goes from C to C together with the proposition that p. This concise model forms the basis of a range of research characterizing the relationship between truth-conditional semantics (literal/conventional meaning) and conversational pragmatics in formal terms (van Benthem and ter Meulen 1997).
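The update model just described can be made concrete in a few lines. Here propositions are modeled as sets of world labels, with the particular worlds and proposition invented for illustration:

```python
# A minimal sketch of Stalnaker-style common-ground update.
# The world labels and the proposition are invented for illustration.

# Propositions are modeled as sets of possible worlds.
common_ground = {"w1", "w2", "w3"}   # live possibilities C
prop_p = {"w1", "w3", "w4"}          # worlds where p is true

def assert_proposition(common_ground, proposition):
    """Asserting a sentence that expresses `proposition` restricts
    the common ground to the worlds where the proposition is true."""
    return common_ground & proposition

updated = assert_proposition(common_ground, prop_p)
print(sorted(updated))  # ['w1', 'w3']: w2 is eliminated, since p is false there
```

Assertion, on this picture, is simply set intersection: the change from C to C-together-with-p is the move from `common_ground` to `common_ground & prop_p`.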
This idealization obviously has its limits. And it is easy to come up with strange puzzles when one moves (perhaps inadvertently) beyond the limits of these idealizations. Before considering one such puzzle, we digress to discuss perhaps one of the most important results from one of the most important research programs in the philosophy of language in the last half-century.
Saul Kripke and Hilary Putnam on Twin Earth
Imagine a planet exactly like Earth, except that where Earth has water, this other planet, Twin Earth, has another mysterious substance, say, XYZ. To human senses, this substance seems exactly the same as water; nevertheless, it has a fundamentally different chemical structure. Imagine further that it is still the year 1700, and chemical structure has yet to be discovered. Still, we judge that the English word water, on Earth, means water, whereas the Twin English word water, on Twin Earth, means XYZ. Moreover, if an earthling were suddenly teleported to Twin Earth, they would still speak English, and their word water would still mean water—this despite the fact that they might have exactly the same dispositions as Twin Earthers have to accept or reject statements about their new surroundings. In short, the unfortunate earthling would think they were surrounded by lots of water, and would be completely wrong.
What moral should we draw from Putnam's (1975) Twin Earth thought experiments? Should we conclude that when you look at how a speaker is disposed to respond to English sentences, water can be interpreted equally well as water, XYZ, or even the disjunction of the two? These interpretations are different, and they assign distinct truth values to English sentences in meaningful (but ultimately inaccessible) situations. In fact, though, when we say water in English means water, according to Kripke, we are applying a standard based on our recognition that English speakers intend to pick out a particular kind of stuff in their own environment.
As a community, English speakers have encountered this stuff and named it water. And as a community, English speakers work together to ensure first that the community maintains the referential connection between the word water and that stuff, and only secondarily, that the individuals in the community can themselves identify examples of the stuff in particular situations. When as observers we recognize that water means water, we are not summarizing the epistemic abilities of particular speakers. Rather, we are summarizing social commitments and causal connections in the community that have worked across speakers to hook the word water up with the stuff, and keep it that way. What philosophers of language do, ultimately, is to explain how speakers can use language to refer in shared ways to shared aspects of the world.
Kripke (1972) motivates his account with an analogy between words for kinds, such as water, and proper names, such as Richard Feynman. In the case of proper names, we can point to the social practices that initially fix the reference of a name and transmit that reference within the community. A baby boy is born. His parents call him by a certain name. They talk about him to their friends. Others meet him. The name spreads from link to link much like a chain. To use another example: A speaker on the far end of a similar chain, who hears about Richard Feynman, may be referring to him even though he or she cannot remember from whom the name was first heard. The speaker knows Feynman is a famous physicist. A certain passage of communication reaching ultimately to the man himself does reach the speaker. The speaker is then referring to Feynman even though he or she cannot identify him uniquely. He or she does not know what a Feynman diagram is and does not know what the Feynman theory of pair production and annihilation is. Not only that, the speaker would have trouble distinguishing between Gell-Mann and Feynman (Kripke 1980).
The result is that we can judge a speaker's reference with a proper name independently of sentences that the speaker would accept or reject. In the case of common nouns such as water, the word has had its reference since time immemorial. Nevertheless, new speakers still link themselves into chains of reference that participate in and preserve the connection between water and water. So analogously, we take an English speaker's word water to refer to water, independently of sentences the speaker would accept or reject.
Most philosophers of language find the Kripke/Putnam view about the meanings of names and so-called natural kind terms satisfying; it offers a close fit to an intuitive understanding of ourselves. It seems that we really do commit to using our words with the same reference as our community. And when others make claims about the world, it seems that we really do assess and dispute those claims with respect to the common standard in the community.
For example, on the Kripke/Putnam view, we inevitably focus on certain aspects of an agent's verbal behavior and not others when we assign meanings to their utterances. We do so because we locate the theory of meaning as part of a broader science of the mind, which combines a theory of language with a theory of action (including an account of our intentions and social relationships) and a theory of perception (including an account of the limits and failings of our observation). The theory of meaning in itself explains only so much—and, not surprisingly, just because we understand the meaning of someone's sentences, we do not ipso facto understand them.
Crucially, this new view predicts that some statements are necessarily true solely in virtue of the meanings of the words involved. We have already seen that it is a fact about meaning that Richard Feynman names Richard Feynman, or that water names water. We can go further: Hesperus names the planet Venus; Phosphorus names the planet Venus; is names the identity relation. So sentence (12) follows, just as a matter of meaning alone: (12) Phosphorus is identical to Hesperus. Given that Hesperus and Phosphorus are both names for the planet Venus, (12) must be true. There is no way that that planet could have failed to be that planet. Like sentence (12), the other facts that follow from the meanings of our language are necessarily true.
However, on the Kripke/Putnam account, facts about meaning turn out not to be knowable a priori. We discover them. To illustrate, imagine that, early on, the ancient Greeks were in an epistemic situation that left it open whether the bright object that sometimes appeared in the morning sky was the same as the bright object that sometimes appeared in the evening sky. They could not distinguish themselves from their doubles on a Twin Earth where the morning star and the evening star actually were distinct objects (alien satellites, we might suppose). These Twin Earthers would speak a language in which (12) translates into a false sentence—indeed, a necessarily false sentence. For the ancient Greeks, however, the translation of (12) was necessarily true. Eventually, the ancient Greeks advanced their science, and improved their epistemic situation. They realized that, in our case, there is only one celestial object. At the same time, then, they discovered that (12) is necessarily true.
When we reflect on the generality of Twin Earth thought experiments, it is clear that facts about meaning are knowable a posteriori. We can imagine being quite wrong about what our world is like. In these imaginary situations, our empirical errors extend to errors we make about what our words mean. And, of course, we can also imagine disagreeing with others about what the world is like. Though we are committed to use our words with the shared reference of our community, we must be prepared to resolve our dispute by giving up facts that we think are necessarily true—facts that we think characterize the meanings of our words and the contents of our thoughts. With this model of how proper names and common nouns attach to the world before us, we are now ready to return to the puzzles alluded to above in connection with assertion.
Why would a speaker ever assert an identity statement like (12)? The trigger for a puzzle comes from arguments that sentence (12) must be true. If this is so, then consider what happens when A asserts (12). We update the common ground C by intersecting it with the proposition expressed by Hesperus is Phosphorus—the set of all possible worlds (situations)—leaving exactly the same set C. A, therefore, on this model, has done nothing; the interlocutors' information has not changed at all! But obviously this result is absurd. What has gone wrong?
In fact, in assuming that assertions update the context with the proposition they express, we have implicitly assumed that the participants in the conversation have certain and complete knowledge of their language. For example, interlocutors can calculate that Hesperus is Phosphorus expresses a necessarily true proposition only if they can calculate that Hesperus names Venus and Phosphorus names Venus. Of course, under such circumstances, they do not learn anything from the sentence. It is easy to see how this assumption could go unnoticed.
In discussion, we typically assume the reference of our terms—precisely what matters in the "Hesperus is Phosphorus" case—is not at issue. However, consider how to formalize uses of sentences in more realistic situations (as we do so, we must be careful to respect the intuitions of Kripke's and Putnam's thought experiments [Stalnaker 1978]). Suppose an interlocutor B does not know that Hesperus is Phosphorus. What that really means is that B cannot distinguish between two possible situations. In the first, there is only one heavenly body out there, and B's community speaks a language English1 where both Hesperus and Phosphorus are names for that body. In the second, there are two distinct heavenly bodies, and B speaks a language English2 where Hesperus is a name for one of them and Phosphorus is a name for the other. Because these possibilities are open for B, they must both also be represented in the common ground.
Now, we need a correspondingly expressive notion of assertion. When interlocutor A says something, A is committed that it is true according to the standards for reference that prevail in the community. Any assertion that A makes should turn out to be true in the language A speaks. What we have just seen is that any point of evaluation w in the common ground could potentially have its own language Englishw with relevant differences from English as spoken in the real world. Adapting Stalnaker's (1978) terminology, we can associate any utterance u with a diagonal proposition; this proposition is true at a point w if the proposition that u expresses in Englishw is true in w.
In the case of an assertion of Hesperus is Phosphorus, the effect of A's assertion is to intersect the common ground with this diagonal proposition. Concretely, we retain in the common ground worlds of the first kind, where English1 is spoken, Hesperus and Phosphorus are necessarily the same, and A's assertion is necessarily true. However, we discard from the common ground worlds of the second kind, where English2 is spoken, Hesperus and Phosphorus are necessarily different, and A's assertion is necessarily false. (There is substantially more to be said about the relationship between utterance meaning and the information that interlocutors convey.)
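The diagonal update can itself be sketched in the toy set-of-worlds idiom used earlier. The two world descriptions below are invented for illustration; each world fixes both the facts and the language spoken there:

```python
# A toy model of Stalnaker's diagonal proposition. The two world
# descriptions are invented for illustration.

# Each world fixes both the facts and the language spoken there,
# represented here by what the names refer to at that world.
languages = {
    "w1": {"Hesperus": "venus", "Phosphorus": "venus"},      # English1: one body
    "w2": {"Hesperus": "venus", "Phosphorus": "satellite"},  # English2: two bodies
}

def diagonal_true(world):
    """The diagonal of 'Hesperus is Phosphorus': true at w iff, in
    the language spoken at w, the two names co-refer at w."""
    refs = languages[world]
    return refs["Hesperus"] == refs["Phosphorus"]

# Asserting the identity intersects the common ground with the diagonal:
common_ground = {"w1", "w2"}
updated = {w for w in common_ground if diagonal_true(w)}
print(sorted(updated))  # ['w1']: the English2 world is discarded
```

Unlike the flat-footed update model, this one lets the assertion of a necessary truth do real work: it eliminates the worlds where a different language is spoken, which is exactly what interlocutor B learns.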
We have seen that many important philosophical issues have to be settled in advance before a theorist can construct a compositional meaning theory for, say, English in order to account for linguistic competence with English. For example, the theorist is required to be guided by some idea of what counts as getting it right. If the goal is to get a set of axioms from which the theorist can infer the literal meaning of all possible English sentences, he or she needs to have some idea of how to determine that a particular theory implies the correct literal meanings. Here are four interrelated philosophical topics devoted to such methodological issues:
- Semantics-Pragmatics Distinction: Within the totality of communicated content (all the information communicated by an utterance), it has proved exceedingly difficult to distinguish the literal (semantic) content from content generated through various pragmatic mechanisms. Any theory of meaning must incorporate criteria that distinguish different kinds of content and tell us how to classify them. Many debates in philosophy of language are based, in part, on different ways of drawing the semantics-pragmatics distinction.
- Role of Appeals to Intuitions: Most arguments in philosophy of language appeal to intuitions. We appeal to intuitions about what was said, about grammaticality, about inferential connections, and sometimes about what would be true in other possible worlds. No position in the philosophy of language can be defended without various appeals to intuitions. That raises two questions: Why should we think intuitions provide us with reliable evidence? What kinds of intuitions should we rely on?
- The Nature of Meaning: How a philosopher of language goes about constructing a theory of meaning will depend on what he or she thinks meaning is. Are meanings entities? Is meaning reducible to something else? Do we even need to appeal to meaning, or can we leave it out of a theory of communication? The meanings of sentences are often referred to as propositions. What are propositions? These foundational issues have dominated discussion in philosophy of language for centuries.
- The Nature of Languages: There is an ongoing philosophical debate about what languages are, that is, about what kind of objects they are. Some think languages are abstract objects, some think they are social/public objects, and some think they are psychological structures. Some think that natural languages such as English should play an important theoretical role in a meaning theory; others think they are superfluous in a serious meaning theory.
Wider Philosophical Implications
To the noninitiated, research in the philosophy of language can seem technical and without deep philosophical implications. However, any such perception is simply the result of ignorance. Debates in the philosophy of language have wide-reaching implications for all branches of philosophy, and research in those other branches inevitably makes assumptions about issues that belong under the rubric philosophy of language. Indeed, it is not possible to do serious work in any branch of philosophy today without a solid training in the philosophy of language.
The list of such important connections between the philosophy of language and the rest of philosophy could be made very long indeed. Limitations of space require that we restrict attention to a few topics; epistemology will be one of them. Some of the most discussed positions in contemporary epistemology draw in a very direct way on views from the philosophy of language.
David Lewis (1996) claims that the epistemological skeptic (i.e., someone who argues that knowledge is impossible) can be refuted once the correct theory of meaning for know is adopted. According to Lewis, the correct theory for know is one that assigns it a context-sensitive meaning, much as with the expressions I, you, and here. Obviously, once someone claims that the meaning of an expression is context sensitive, they become accountable to the philosophy of language. The theory of meaning for context-sensitive expressions such as I is well evidenced, and so, if know is like them, it will have to stand up to certain qualifying tests all such expressions satisfy.
Putnam (1982) argues that his theory of meaning and reference implies that the skeptic's central argument is incoherent. His argument is based on a philosophical position on the nature of meaning. To the extent that his theory of meaning stands up to the scrutiny of the philosopher of language, skepticism may be refuted.
Kripke (1972) argues, as we saw above, that his theory of proper names refutes the traditional view (going back at least to Immanuel Kant) that necessary truths can only be knowable a priori and contingent truths only a posteriori. According to Kripke, it follows from the theories of meaning for proper names and natural kind terms such as gold and tiger that we can discover necessary truths empirically (many scientific discoveries turn out to be discoveries of necessary truths), and it turns out that we can gain knowledge of contingent facts a priori.
Some of the most discussed positions in contemporary metaphysics also draw in a very direct way on views from the philosophy of language. Kripke (1972) argues that his theory of reference implies that mental states cannot be physical states (i.e., that materialism is false).
Some of the most discussed positions in contemporary value theory also draw in a very direct way on views from the philosophy of language. One of the central strands in contemporary ethics is called expressivism. This is the view that sentences containing moral terms (e.g., good, bad, should, and so on) cannot be true or false; they serve simply to express attitudes. Expressivism is a view about the meaning of words (Ayer 1946).
See also Artificial and Natural Languages; Conditionals; Content, Mental; Contextualism; Davidson, Donald; Frege, Gottlob; Hobbes, Thomas; Intuition; Kant, Immanuel; Kripke, Saul; Language; Lewis, David; Meaning; Montague, Richard; Phonology; Plato; Pragmatics; Propositions; Putnam, Hilary; Reference; Rule Following; Semantics; Semantics, History of; Sense; Syntactical and Semantical Categories; Syntax; Vagueness; Wittgenstein, Ludwig Josef Johann.
Alston, W. Philosophy of Language. Englewood, NJ: Prentice Hall, 1964.
Ayer, A.J. Language, Truth, and Logic (1936), 2nd ed. London: Gollancz, 1946.
van Benthem, Johan, and Alice ter Meulen, eds. The Handbook of Logic and Language. Cambridge, MA: MIT Press, 1997.
Chomsky, Noam. "A Review of B. F. Skinner's Verbal Behavior." Language 35 (1959): 26–58.
Davidson, Donald. Inquiries into Truth and Interpretation. 2nd ed. Oxford: Clarendon Press, 2001.
Davidson, Donald. "Truth and Meaning." Synthese 17 (1967): 304–323.
Fodor, Jerry A. and Ernie Lepore. The Compositionality Papers. Oxford: Oxford University Press, 2002.
Frege, Gottlob. Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens. Halle: L. Nebert, 1879.
Grice, H.P. Studies in the Way of Words. Cambridge, MA: Harvard University Press, 1989.
Kripke, Saul A. Naming and Necessity. Cambridge MA: Harvard University Press, 1980.
Kripke, Saul A. "Naming and Necessity." In Semantics of Natural Language, edited by D. Davidson and G. Harman. Dordrecht: Reidel, 1972.
Lepore, Ernest and Kirk Ludwig. Donald Davidson: Meaning, Truth, Language, and Reality. Oxford: Oxford University Press, 2005.
Lewis, David. "Elusive Knowledge." Australasian Journal of Philosophy 74 (1996): 549–567.
Ludlow, Peter. Readings in the Philosophy of Language. Cambridge, MA: MIT Press, 1997.
Montague, Richard. "English as a Formal Language." In Linguaggi nella società e nella tecnica. Milan: Edizioni di Comunità, 1970.
Montague, Richard. Formal Philosophy: Selected Papers of Richard Montague, edited and with an introduction by Richmond H. Thomason. New Haven, CT; London: Yale University Press, 1974.
Montague, Richard. "The Proper Treatment of Quantification in Ordinary English." In Approaches to Natural Language, edited by K. J. J. Hintikka, J. M. E. Moravcsik, and P. Suppes. Dordrecht: Reidel, 1973.
Putnam, Hilary. "The Meaning of 'Meaning.'" In Philosophical Papers II: Mind, Language, and Reality. Cambridge, U.K.: Cambridge University Press, 1975.
Putnam, Hilary. Reason, Truth, and History. Cambridge, U.K.: Cambridge University Press, 1982.
Quine, W.V.O. "Two Dogmas of Empiricism." In From a Logical Point of View. Cambridge, MA: Harvard University Press, 1953.
Skinner, B. F. Verbal Behavior. New York: Appleton-Century-Crofts, 1957.
Stalnaker, Robert. "Assertion." In Syntax and Semantics 9, edited by Peter Cole. New York: Academic Press, 1978.
Stalnaker, Robert. Context and Content. Oxford: Oxford University Press, 1999.
Stalnaker, Robert. "Presuppositions." Journal of Philosophical Logic 2 (4) (1973): 447–457.
Ernest Lepore (2005)
"Philosophy of Language." Encyclopedia of Philosophy. . Encyclopedia.com. (June 26, 2017). http://www.encyclopedia.com/humanities/encyclopedias-almanacs-transcripts-and-maps/philosophy-language
"Philosophy of Language." Encyclopedia of Philosophy. . Retrieved June 26, 2017 from Encyclopedia.com: http://www.encyclopedia.com/humanities/encyclopedias-almanacs-transcripts-and-maps/philosophy-language