
SEMANTICS

Semantics is the study of meaning. More specifically, semantics is concerned with the systematic assignment of meanings to the simple and complex expressions of a language. The best way to understand the field of semantics is to appreciate its development through the twentieth century. In what follows, that development is described. As will be seen, advances in semantics have been intimately tied to developments in logic and philosophical logic.

Though there were certainly important theories, or proto-theories, of the meanings of linguistic expressions prior to the seminal work of the mathematician and philosopher Gottlob Frege, in explaining what semantics is it is reasonable to begin with Frege's mature work. For Frege's work so altered the way language, meaning and logic are thought about that it is only a slight exaggeration to say that work prior to Frege has been rendered more or less irrelevant to how these things are currently understood.

In his pioneering work in logic Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens, which was published in 1879, Frege literally revolutionized the field. It is well beyond the scope of the present entry to describe Frege's achievements in this work. But it should be said that one of his most important contributions was to achieve for the first time a clear understanding of the semantic functioning of expressions of generality, such as 'every,' 'some' and so on. This made it possible to understand, again for the first time, how sentences containing multiple expressions of generality, such as 'Every skier loves some mountain,' manage to mean what they do. In a series of papers written in the late 1800s, Frege articulated a novel theory of meaning for languages that was to be very influential. These papers included "Function and Concept" (1891), "On Concept and Object" (1892) and most famously "On Sense and Reference" (1892).

Frege made a fundamental distinction between expressions that are unsaturated or incomplete and expressions that are complete. The former he called concept words (perhaps concept expressions would be better) and the latter he called proper names. A sentence like:

1. Frege runs.

can be split up into the part that is unsaturated, the concept word 'runs,' and the complete part, the proper name 'Frege.' All expressions, Frege thought, are associated with a sense and a reference. These both have some claim to be called the meaning of the expression in question, and so it is probably best to think of Frege as claiming that there are two components to the meaning of an expression. The referent of an expression can be thought of as the thing in the world the expression stands for. Thus, the referent of the proper name 'Frege' is Frege himself. And the referent of the concept word 'runs' is a concept, which Frege took to be a function from an object to a truth value. So the concept word 'runs' refers to a function that maps an object o to the truth value true iff o runs. Otherwise, it maps the object to false. By contrast, the sense of an expression Frege thought of as a way or mode in which the referent of the expression is presented. So perhaps Frege can be "presented" as the author of Begriffsschrift. Then the sense of the name 'Frege' is the descriptive condition the author of Begriffsschrift. It is perhaps more difficult to think of senses of concept words, but it helps to think of them as descriptive conditions that present the concept that is the referent in a certain way.

Now Frege thought that the sense of an expression determines its referent. So the sense of 'Frege' is a mode of presentation of Frege, a descriptive condition that Frege uniquely satisfies in virtue of which he is the referent of 'Frege.' Further, in understanding a linguistic expression, a competent speaker grasps its sense and realizes that it is the sense of the expression.

Of course complex linguistic expressions, such as 1 above, also have senses and references. Frege held that the sense of a complex expression is determined by the senses of its parts and how those parts are combined. (Principles of this general sort are called principles of compositionality, and so it could be said that Frege held a principle of compositionality for senses.) Indeed, Frege seems to have held the stronger view that the sense of a complex expression is literally built out of the senses of its parts. In the case of 1, its sense is the result of combining the sense of 'runs' and of 'Frege.' Frege believed that just as the expression 'runs' is unsaturated, so its sense too must be unsaturated or in need of completion. The sense of 'Frege,' by contrast, like the expression itself, is whole and complete (not in need of saturation). The sense of 1 is the result of the whole sense of 'Frege' saturating or completing the incomplete/unsaturated sense of 'runs.' It is the unsaturated sense of 'runs' that holds the sense of 1 together, and this is true generally for Frege. Frege called the sense of a declarative sentence like 1 a thought. Thus in "On Concept and Object" (p. 193) Frege writes:

For not all the parts of a thought can be complete; at least one must be unsaturated or predicative; otherwise they would not hold together.

Similarly, Frege held that the reference of a complex expression is determined by the references of its parts and how they are put together (i.e. he held a principle of compositionality for referents). In the case of 1, the referent is determined by taking the object that is the referent of 'Frege' and making it the argument of the function that 'runs' refers to. This function maps objects to the True or the False depending on whether they run or not. Thus, the result of making this object the argument of this function is either the True or the False. And whichever of these is the result of making the object the argument of the function is the referent of 1. So sentences have thoughts as senses and truth values (the True; the False) as referents.
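Frege's function-argument picture of reference can be made concrete with a small programmatic sketch. In the following Python fragment (the toy domain and the fact about who runs are inventions for illustration, nothing in Frege), a concept word denotes a function from objects to truth values, and the referent of the sentence is computed by applying that function to the referent of the name:

```python
# Sketch: Fregean reference as function application (toy domain invented here).

runners = {"Frege"}  # assumption for illustration: the set of things that run

def runs(obj):
    """Referent of the concept word 'runs': a function from objects to truth values."""
    return obj in runners

frege = "Frege"  # referent of the proper name 'Frege'

# The referent of 'Frege runs.' results from making the object the argument
# of the function; the value is the True or the False.
print(runs(frege))  # True
```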

Concerning Frege's account of sentences containing quantifiers, expressions of generality such as 'every,' 'some' etc., consider the sentence

2. Every student runs.

The words 'student' and 'runs' are both concept words. Thus they have unsaturated senses and refer to concepts: functions from objects to truth values. Now Frege thought that a word like 'every' was doubly unsaturated. To form a whole/complete expression from it, it needs to be supplemented with two concept words ('student' and 'runs' in 2). The sense of 'every' is also doubly unsaturated. Thus the sense of 2 is a thought, a complete sense, that is the result of the senses of 'student' and 'runs' both saturating the doubly unsaturated sense of 'every' (in a certain order). By contrast, the referent of 'every' must be something that takes two concepts (those referred to by 'student' and 'runs' in 2) and yields a referent for the sentence. But as we have seen, a sentence's referent is a truth value. Thus the referent of 'every' must take two concepts and return a truth value. That is, its referent is a function from a pair of concepts to a truth value. In essence, 'every' refers to a function that maps the concepts A and B (in that order) to the True iff every object that A maps to the True, B also maps to the True (i.e. iff every object that falls under A falls under B).
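This doubly unsaturated treatment of 'every' can likewise be sketched programmatically. In the following hedged Python illustration (the domain and the two concepts are invented), 'every' denotes a function taking two concepts and returning a truth value:

```python
# Sketch: 'every' as a function from a pair of concepts to a truth value.
# The domain and the concepts are invented for illustration.

domain = ["Ann", "Bob", "Carla"]

def student(x):          # concept referred to by 'student'
    return x in {"Ann", "Bob"}

def runs(x):             # concept referred to by 'runs'
    return x in {"Ann", "Bob", "Carla"}

def every(A, B):
    """Maps concepts A and B (in that order) to the True iff every object
    that falls under A also falls under B."""
    return all(B(x) for x in domain if A(x))

print(every(student, runs))  # True: sentence 2 comes out true in this toy model
```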

Above it was mentioned that Frege thought that the referent of a complex expression was a function of the referents of its parts and how they are combined (compositionality of reference). Some examples seem to show that this is incorrect. Consider the following:

3. Chris believes that snow is white.

3a. Chris believes that Mt. Whitney is more than 14,000 feet high.

These sentences may well have different referents, that is, truth values. But the embedded sentences ('snow is white'; 'Mt. Whitney is more than 14,000 feet high') have the same referents (the True) and the other parts of the sentences have the same referents as well. But then it would seem that compositionality of reference would require that 3 and 3a have the same reference/truth value. Frege famously gets out of this apparent problem by claiming that 'believes' has the effect of shifting the referents of expressions embedded with respect to it. In 3 and 3a, the shifted referents of the embedded sentences are their usual senses. So in these environments, the sentences have different referents because they express different thoughts outside of contexts involving 'believes' and related devices.

Frege's doctrine of sense and reference constitutes a semantical theory of languages, because it claims that the meanings of linguistic expressions have these two components, and it gives an account of what the senses and referents of different kinds of linguistic expressions are.

Shortly after Frege had worked out his semantical theory of sense and reference, the English philosopher and mathematician Bertrand Russell was working out a theory of the meanings, or information contents, of sentences. While Frege had held that the thought expressed by a sentence, which captures the information the sentence encodes, consisted of senses, Russell (1903) held that the information encoded by a sentence was a proposition, where the constituents of propositions, far from being Fregean senses, were roughly (and for the most part) the things the proposition is about. Thus, whereas Frege held that 1 expressed a thought containing a mode of presentation of Frege and a mode of presentation of the concept of running, Russell held that the proposition expressed by 1 contained Frege himself and the concept of running (though Russell thought of concepts differently from the way Frege did). This contrast has more than historical significance, because current semanticists are classified as Fregean or Russellian depending on whether they hold that the information contents of sentences contain the things those information contents are about (objects, properties and relations: the Russellian view) or modes of presentation of the things those information contents are about (the Fregean view).

In the early part of the twentieth century, the philosophical movement known as Logical Positivism achieved dominance, especially among logically minded philosophers who might have been interested in semantics. The Positivists thought that much of traditional philosophy was literally nonsense. They applied the (pejorative) term "metaphysics" to what they viewed as such philosophical nonsense. The Positivists, and especially Rudolf Carnap, developed accounts of meaning according to which much of what had been written by philosophers was literally meaningless. The earliest and crudest Positivist account of meaning was formulated by Carnap (1932). On this view, the meaning of a word was given by first specifying the simplest sentence in which it could occur (its elementary sentence). Next, it must be stated how the word's elementary sentence could be verified. Any word not satisfying these two conditions was deemed meaningless. Carnap held that many words used in traditional philosophy failed to meet these conditions and so were meaningless.

Carnap called philosophical statements (sentences) that on analysis fail to be meaningful pseudo-statements. Some philosophical statements are pseudo-statements, according to Carnap, because they contain meaningless terms as just described. But Carnap thought that there is another class of philosophical pseudo-statements. These are statements that are literally not well formed (Carnap gives Heidegger's "We know the nothing" as an example).

The downfall of the Positivists' theory of meaning was that it appeared to rule out certain scientifically important statements as meaningless. This was unacceptable to the Positivists themselves, who were self-consciously very scientifically minded. Carnap heroically altered and refined the Positivists' account of meaningfulness, but difficulties remained. Hempel (1950) is a good source for these developments.

At about the same time Carnap was formulating the Positivists' account of meaning, the Polish logician Alfred Tarski was involved in investigations that would change forever both logic and semantics. It had long been thought that meaning and truth were somehow intimately connected. Indeed, some remarks of Wittgenstein's in his Tractatus Logico-Philosophicus ("4.024. To understand a proposition means to know what is the case, if it is true.") had led many to believe that the meaning of a sentence was given by the conditions under which it would be true and false. However, the Positivists had been wary of the notion of truth. It seemed to them a dangerously metaphysical notion (which is why they "replaced" talk of truth with talk of being verified).

Against this background, Tarski showed that truth ('true sentence') could be rigorously defined for a variety of formal languages (languages, growing out of Frege's work in logic, explicitly formulated for the purpose of pursuing research in logic or to be used to precisely express mathematical or scientific theories). Though earlier papers in Polish and German contained the essential ideas, it was Tarski (1935) that alerted the philosophical world to Tarski's important new results.

Tarski himself despaired of giving a definition of true sentence of English (or any other naturally occurring language). He thought that the fact that such languages contain the means for talking about expressions of that very language and their semantic features (so English contains expressions like 'true sentence,' 'denotes,' 'names,' etc.) meant that paradoxes, such as the paradox of the liar, are formulable in such languages. In turn, Tarski thought that this meant that such languages were logically inconsistent and hence that there could be no correct definition of 'true sentence' for such languages.

Nonetheless, Tarski's work made the notion of truth once again philosophically and scientifically respectable. And it introduced the idea that an important element, perhaps the sole element, in providing a semantics for a language was to provide a rigorous assignment, to the sentences of the language, of the conditions under which they are true. (Tarski's 1935 paper for the most part gave definitions of true sentence for languages with fixed interpretations. The now more familiar notion of true sentence with respect to a model was introduced later. See Hodges [2001] for details.)

Carnap's Meaning and Necessity (1947) is arguably the first work that contemporary semanticists would recognize as a work in what is now considered to be semantics. Following Tarski, Carnap distinguishes the languages under study and for which he gives a semantics, object languages, from the languages in which the semantics for the object languages are stated, metalanguages. The object languages Carnap primarily considers are a standard first order language (S1), the result of adding 'N' ("a sign for logical necessity") to that language (S2), and ordinary English. Carnap does not give detailed descriptions of any of these languages, noting that the book

" is intended not so much to carry out exact analyses of exactly constructed systems as to state informally some considerations aimed at the discovery of concepts and methods suitable for semantical analysis" (p. 8).

The heart of Carnap's semantics for these languages is given by rules of designation for predicates and individual constants, rules of truth for sentences and rules of ranges for sentences. The rules of designation state the meanings of the predicates and individual constants using English as the metalanguage. So we have (p. 4):

's' is a symbolic translation of 'Walter Scott'

'Bx' is a symbolic translation of 'x is a biped'

The rules of truth simply provide a Tarski-style definition of truth for sentences of the language (the definition assumes fixed meanings, given by the rules of designation, for predicates and individual constants). In order to specify the rules of range, Carnap introduces the notion of a state-description. For a language, say S1, a state-description in S1 is a set that contains, for every atomic sentence of S1, either it or its negation, but not both; and it contains no other sentences. Carnap comments (p. 9):

it [a state-description in S1] obviously gives a complete description of a possible state of the universe of individuals with respect to all properties and relations expressed by predicates of the system. Thus the state-descriptions represent Leibniz' possible worlds or Wittgenstein's possible states of affairs.

Next Carnap gives a recursive characterization of a sentence holding in a state-description. An atomic sentence holds in a state-description iff it is a member of it. A disjunction holds in it iff one of its disjuncts holds in it, etc. The characterization of holding in a state-description is designed to formally capture the intuitive idea of the sentence being true if the possible world represented by the state-description obtained (i.e. if all the sentences belonging to the state-description were true). Given a sentence S, Carnap calls the class of state-descriptions in which S holds its range. Thus the clauses in the characterization of holding in a state-description Carnap calls rules of ranges. Regarding these rules of ranges, Carnap writes (pp. 9–10):

By determining the ranges, they give, together with the rules of designation for the predicates and the individual constants, an interpretation for all sentences of S1, since to know the meaning of a sentence is to know in which of the possible cases it would be true and in which not, as Wittgenstein has pointed out.

Thus, Carnap regards the rules of ranges together with the rules of designation as giving the meaning of the sentences of S1 (the connection with truth and the rules of truth is that there is one state-description that describes the actual world, and a sentence is true iff it holds in that state-description).

Using these resources, Carnap defines his well-known L-concepts. We here concentrate on L-truth and L-equivalence. Before getting to that, we must say something about Carnap's notion of explication. Carnap believed that one of the main tasks for philosophers was to take a "vague or not quite exact" concept and replace it by a more exact concept that one had clearly characterized. This new concept, called by Carnap the explicatum of the old concept, was intended to be used to do the work the old concept was used to do. Carnap thought that the notion of L-truth was the explicatum of the vague notions of "logical or necessary or analytic truth" (p. 10).

A sentence is L-true in a semantical system (e.g. S1) iff it holds in every state-description in that system. Carnap regarded this as a precise characterization of Leibniz's idea that necessary or analytic or logical truths hold in all possible worlds. Next, Carnap defines the notion of L-equivalence for sentences, predicates and individual constants. Effectively, two names, predicates or sentences are L-equivalent (in a semantical system, e.g. S1) iff they have the same extension at every state-description in that system (so L-equivalent names must name the same individual at every state-description, L-equivalent predicates must be true of the same individuals at every state-description, etc.).
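Carnap's machinery of state-descriptions, ranges and L-truth lends itself to a compact computational sketch. The following Python fragment restricts attention to a toy propositional language with two invented atomic sentences (a simplification; Carnap's S1 is a first order language):

```python
from itertools import product

# Toy propositional language: atoms plus ('not', A) and ('or', A, B).
atoms = ["P", "Q"]

# Represent a state-description by the set of atoms it contains unnegated;
# this fixes, for every atomic sentence, whether it or its negation is in.
state_descriptions = [frozenset(a for a, v in zip(atoms, vals) if v)
                      for vals in product([True, False], repeat=len(atoms))]

def holds(formula, sd):
    """Recursive characterization of holding in a state-description."""
    if isinstance(formula, str):                 # an atomic sentence holds iff
        return formula in sd                     # it is a member of the set
    if formula[0] == "not":
        return not holds(formula[1], sd)
    if formula[0] == "or":                       # a disjunction holds iff
        return holds(formula[1], sd) or holds(formula[2], sd)
    raise ValueError("unknown connective")

def rng(formula):
    """The range of a sentence: the class of state-descriptions in which it holds."""
    return [sd for sd in state_descriptions if holds(formula, sd)]

def L_true(formula):
    """L-truth: holding in every state-description of the system."""
    return len(rng(formula)) == len(state_descriptions)

print(L_true(("or", "P", ("not", "P"))))  # True: holds in all state-descriptions
print(L_true("P"))                        # False: holds in only some
```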

The importance of Carnap's notion of L-equivalence is that he uses it to sketch a semantics for belief ascriptions. In order to do this, Carnap extends his notion of L-equivalence in several ways. First, he extends it so that expressions of different "semantical systems" (roughly, formal languages) may be L-equivalent (in effect, expressions e of system 1 and e' of system 2 are L-equivalent just in case the semantical rules of the two systems together suffice to show that the expressions have the same extension, p. 57). Second, he extends the notion of L-equivalence to apply to sentential connectives, variables (they are L-equivalent iff they have the same range of values) and to quantifiers (they are L-equivalent iff they are quantifiers of the same sort [universal, existential] whose variables are L-equivalent, p. 58). Third, he defines what it is for two expressions of the same or different semantical systems (again, roughly formal languages) to be intensionally isomorphic. Roughly, expressions are intensionally isomorphic just in case they are built up in the same way out of L-equivalent parts. With these tools in hand, Carnap writes (pp. 61–62):

It seems that the sentence 'John believes that D' in S [a fragment of English; see p. 53] can be interpreted by the following semantical sentence:

15-1. 'There is a sentence 𝔖ᵢ in the semantical system S′ such that (a) 𝔖ᵢ is intensionally isomorphic to 'D' and (b) John is disposed to an affirmative response to 𝔖ᵢ.'

Though Carnap's semantics for belief ascriptions was criticized by Alonzo Church (1950), many philosophers were influenced by Carnap's idea that the objects of belief are structured entities built up in the same way out of entities with the same intensions. See, for example, Lewis (1970).

The final important feature of Meaning and Necessity was its semantic treatment of modality. Carnap begins his discussion of modality by mentioning the work of C. I. Lewis (presumably he had in mind especially Lewis and Langford [1932]) in constructing various systems of modal logic. As mentioned above, Carnap considered as an object of semantical investigation a language that was the first order predicate logic (S1) supplemented with the sign 'N' "for logical necessity." He called the resulting language S2. Syntactically, prefixing 'N' to a matrix (either a sentence or a formula with free variables) results in a matrix. A detailed discussion of Carnap's semantics for this modal language would go beyond the scope of the present entry. However, a couple of points are worth making. First, if we just consider the case in which 'N' fronts a sentence (a formula with no free variables) ϕ, to get the rules of range for S2 we would simply add to the rules of range of S1 the following:

N(ϕ) holds in every state-description if ϕ holds in every state-description; otherwise N(ϕ) holds in no state-description.

This is a consequence of Carnap's idea that 'N' is the sign for logical necessity, and the notion of L-truth is the explicatum of the vague notion of logical necessity. Thus a sentence fronted by 'N' should hold at a state-description iff the sentence it embeds holds at every state-description. But then if the sentence fronted by 'N' holds at a state-description, it holds at every state-description. Thus, the above.

But of course, since 'N' could front a matrix with free variables, one could then attach a quantifier to the result. Letting '..u..' be a matrix containing the variable 'u' free, we get things like

(u)N(..u..)

That is, we get quantifying into the sign 'N' for logical necessity. However, Carnap's treatment here results in the above being equivalent to (indeed, L-equivalent to)

N(u)(..u..).

The important point, however, is that Carnap had sketched a semantics for quantified modal logic.

Though virtually all of the crucial analyses and explications in Meaning and Necessity were eventually significantly modified or rejected (the explication of "logical necessity" by the notion of L-truth, understood in terms of holding at all state-descriptions; the treatment of 'N,' the sign of "logical necessity"; and the semantics for belief ascriptions), the work was nonetheless very important in the development of semantics. It provided a glimpse of how to use techniques from logic to systematically assign semantic values to sentences of languages, and began the project of providing a rigorous semantics for recalcitrant constructions like sentences containing modal elements and verbs of propositional attitude.

In the 1950s and early 1960s Carnap's ideas on the semantic treatment of modal logic were refined and improved upon. The result was the now familiar "Kripke style" semantics for modal logic. Kripke's formulations will be discussed here, but it is important to understand that similar ideas were in the air (see Hintikka [1961], Kanger [1957], and Montague [1960a]). Though these works were in the first instance works in logic, as we will see, they had a profound effect on people who were beginning to think about formal semantics for natural languages.

We will concern ourselves with the specific formulations in Kripke (1963). What follows will be of necessity slightly technical. The reader who is not interested in such things can skip to the end of the technical discussion for informal remarks. Assume that we have a standard first order logic with sentential connectives ¬, & and □ (the first and third one-place, the second two-place), individual variables (with or without subscripts) x, y, z, …; n-place predicates Pⁿ, Qⁿ, … (0-place predicate letters are propositional variables), and the universal quantifier (for any variable xᵢ, (xᵢ)). A model structure is a triple ⟨G, K, R⟩, where K is a set, G ∈ K and R is a reflexive relation on K (i.e. for all H ∈ K, H R H). Intuitively, G is the "actual world" and the members of K are all the possible worlds. R is a relation between worlds and is usually now called the accessibility relation. Intuitively, if H R H′ (H′ is accessible from H), then what is true in H′ is possible in H. Again intuitively, the worlds accessible from a given world are those that are possible relative to it.

Putting conditions on R gives one model structures appropriate to different modal logics. If R is merely reflexive, as required above, we get an M model structure. If R is reflexive and transitive (i.e. for any H, H′, H″ ∈ K, if H R H′ and H′ R H″, then H R H″), we get an S4 model structure. Finally, if R is reflexive, transitive and symmetric (i.e. for any H, H′ ∈ K, if H R H′, then H′ R H), we get an S5 model structure. (It should be recalled that for Carnap, state-descriptions, which represented possible worlds, were each accessible from every other (in effect because there was no accessibility relation between state-descriptions); thus, translated into the present framework, Carnap's "models" would be S5 models. Also, in Kripke's semantics, possible worlds (members of K) are primitive; in Carnap's, of course, they are explicated as state-descriptions.) A quantificational model structure is a model structure ⟨G, K, R⟩ together with a function ψ that assigns to every H in K a set of individuals: the domain of H. Intuitively this is the set of individuals existing in the possible world H. Of course, this allows different worlds (members of K) to have different domains of individuals. This formally captures the intuitive idea that some individuals that exist might not have existed, and that there might have been individuals other than those there actually are.

Given a quantificational model structure, consider the set U which is the union of ψ(H) for all H in K. Intuitively, this is the set of all possible individuals (i.e. the set U of individuals such that any individual in the domain of any world is in U). Then Uⁿ is the set of all n-tuples whose elements are in U. A quantificational model on a quantificational model structure ⟨G, K, R⟩ is a function φ that maps a zero-place predicate and a member of K to T or F; and for n > 0, an n-place predicate and a member of K to a subset of Uⁿ. We extend φ by induction to assign truth values to all formula/world pairs relative to a function assigning members of U to variables:

1. Propositional Variable: Let f be a function assigning elements of U to all individual variables. Let P be a propositional variable. Then for any H in K, φ(P, H) = T relative to f iff φ(P, H) = T; otherwise φ(P, H) = F relative to f.

2. Atomic: Let f be as in 1. For any H in K, φ(Pⁿx₁, …, xₙ, H) = T relative to f iff ⟨f(x₁), …, f(xₙ)⟩ ∈ φ(Pⁿ, H); otherwise φ(Pⁿx₁, …, xₙ, H) = F relative to f.

(Note that 2 allows that an atomic formula can have a truth value at a world relative to an assignment to its variables, where some or all of its variables get assigned things not in the domain of the world, since f assigns elements of U to free variables; and φ assigns subsets of Uⁿ to Pⁿ!)

3. Truth functional connectives: Let f be as in 1. Let A and B be formulae. For any H in K, φ(A & B, H) = T relative to f iff φ(A, H) = T relative to f and φ(B, H) = T relative to f; otherwise φ(A & B, H) = F relative to f. (Similarly for ¬.)

4. Modal operator: Let f be as in 1. Let A be a formula. φ(□A, H) = T relative to f iff φ(A, H′) = T relative to f for all H′ ∈ K such that H R H′; otherwise φ(□A, H) = F relative to f.

(Note that according to 4, whether a formula □A is true at a world (relative to f) depends only on whether A is true at all worlds accessible from the original world.)

5. Quantifier: Let f be as in 1. Let A(x, y₁, …, yₙ) be a formula containing only the free variables x, y₁, …, yₙ. For any H in K, and any function g (assigning elements of U to free variables), suppose φ(A(x, y₁, …, yₙ), H) relative to g is defined. Then φ((x)A(x, y₁, …, yₙ), H) = T relative to f iff for every f′ such that f′(x) ∈ ψ(H) and f′ differs from f at most in the value assigned to 'x', φ(A(x, y₁, …, yₙ), H) = T relative to f′; otherwise, φ((x)A(x, y₁, …, yₙ), H) = F relative to f.

(As Kripke notes, that in 5 we consider only functions f′ such that f′(x) ∈ ψ(H) means that quantifiers range over only the objects that exist at the world where the quantified sentence is being evaluated.)
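Clauses 2, 4 and 5 can be rendered as a small evaluator. The following Python sketch hard-codes a toy quantificational model (two invented worlds, invented domains and a single one-place predicate 'P') rather than handling arbitrary formulae; it is an illustration of the clauses, not Kripke's own apparatus:

```python
# Toy Kripke model: everything below is invented for illustration.
K = ["G", "H1"]                                  # worlds; G is the "actual world"
R = {("G", "G"), ("H1", "H1"), ("G", "H1")}      # reflexive accessibility relation
psi = {"G": {"a", "b"}, "H1": {"a"}}             # domains: individuals existing at each world
ext_P = {"G": {"a", "b"}, "H1": {"a"}}           # extension of P at each world

def atomic_P(x_val, world):
    """Clause 2: P(x) is true at a world, relative to an assignment, iff the value
    assigned to x is in P's extension there (the value need not exist at that world)."""
    return x_val in ext_P[world]

def box_P(x_val, world):
    """Clause 4: 'necessarily P(x)' is true at a world iff P(x) is true at every
    world accessible from it."""
    return all(atomic_P(x_val, w2) for (w1, w2) in R if w1 == world)

def forall_P(world):
    """Clause 5: '(x)P(x)' is true at a world iff P(x) holds for every assignment
    of an individual existing at that world (i.e. in psi(world)) to x."""
    return all(atomic_P(d, world) for d in psi[world])

print(forall_P("G"))        # True: both individuals in G's domain are P at G
print(box_P("b", "G"))      # False: 'b' fails to be P at the accessible world H1
```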

Now having gone through Kripke's semantics for quantified modal logic in some detail, let us step back and ask why it was important in terms of thinking of the semantics of natural language. People like Richard Montague, whom we will discuss below, were clearly influenced in their thinking about the semantics of natural language by Kripke's semantics for modal logic (recall too that Montague [1960a] itself contained ideas related to Kripke's). Since at least Carnap's Meaning and Necessity (and perhaps before), philosophers had thought of sentences as semantically associated with propositions and of n-place predicates as semantically associated with n-place relations (properties being one-place relations). Further, they had thought of these propositions and relations as determining truth values and extensions for the sentences and predicates expressing them relative to a "possible world" (which, of course, Carnap represented by a state-description).

Now in Montague (1960b), it is suggested that an n-place relation just is a function from possible worlds to a set of n-tuples (intuitively, the set of n-tuples whose elements stand in the relation in question from the standpoint of the world in question); and that a proposition just is a function from possible worlds to truth values. Generalizing these ideas leads straightforwardly to the possible worlds semantics for natural languages discussed below. Further, Montague claims this way of understanding relations and propositions (which Montague calls predicates; one-place predicates, then, are properties, and zero-place predicates are propositions) is to be found for the first time in Kripke (1963). This, in turn, means that at least Montague saw the seeds of possible worlds semantics for natural languages in Kripke (1963).

This initially seems at least a little bit strange, since nowhere in Kripke (1963) does one find the identification of propositions with functions from possible worlds to truth values or relations with functions from possible worlds to sets of n-tuples. However, it is easy to see why a logician like Montague would see those ideas in Kripke (1963). Consider again a model on a quantificational model structure, forgetting for the moment about functions f that are assignments to free variables and about the fact that the domains of members of K can vary (essentially, this means we are considering a model on a propositional model structure). A model φ on an (M/S4/S5) model structure ⟨G, K, R⟩ assigns to a propositional variable (a zero-place predicate: an atomic formula without any variables) and a member of K either T or F. Now consider a particular propositional variable P. Consider the function fP defined as follows:
For any H in K, fP(H) = T iff φ(P, H) = T; otherwise fP(H) = F.
fP is a function from worlds to truth values and so can be thought of, à la Montague, as the proposition expressed by P (in the model φ on the model structure ⟨G, K, R⟩)! That is, propositions, understood as functions from worlds to truth values, are trivially definable using Kripke's models. Similar remarks apply to n-place relations, understood as functions from possible worlds to sets of n-tuples of individuals. It seems likely that this is why a logician like Montague would take Kripke to have introduced them. Montague, after making the attribution to Kripke, does add (p. 154): "… Kripke employs, however, a different terminology and has in mind somewhat different objectives."
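The triviality of the definition is easy to exhibit. In the Python sketch below (the worlds and the valuation are invented for illustration), the intension fP is read off a Kripke-style model exactly as in the displayed definition:

```python
# Invented toy model: three worlds and a valuation for propositional variable P.
K = ["G", "H1", "H2"]
phi = {("P", "G"): True, ("P", "H1"): False, ("P", "H2"): True}

def intension(prop_var):
    """Define fP from the model: fP(H) = T iff phi(P, H) = T; otherwise F."""
    return lambda world: phi[(prop_var, world)]

f_P = intension("P")         # the proposition expressed by P, a la Montague
print([f_P(w) for w in K])   # [True, False, True]: P's truth value at each world
```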

These functions from worlds to truth values or sets of n-tuples are now generally called intensions. Their values at a world (truth values; sets of n-tuples) are generally called extensions (at worlds). The idea that the primary job of semantics is to assign to expressions of natural languages intensions and extensions of the appropriate sort very much took hold in the wake of work by Kripke and others in the semantics of modal logic.

With the resources Kripke and others had made available in hand, researchers thinking about the semantics of natural languages eagerly made use of them. Thus the late 1960s and early 1970s saw dizzying progress in natural language semantics as the techniques developed for modal logic were applied. Two works from that era that particularly capture the spirit of the times are Lewis (1970) and Montague (1973). The latter will be discussed here, since it is probably the most sophisticated and influential of the works of that period. The particular semantic phenomena Montague was concerned to understand were the workings of verbs of propositional attitude like 'believes,' the workings of intensional verbs like 'worships' and related phenomena (see p. 248, where Montague lists some of his concerns).

We saw above that both Frege and Carnap were also concerned with understanding the semantics of verbs like 'believes.' We are now in a position to say more about why such expressions attract the attention of semanticists. Consider the expression 'It is not the case' in sentences like

4. It is not the case that snow is white.

4a. It is not the case that Mt. Whitney is more than 14,000 feet high.

Whether a sentence fronted by 'It is not the case' is true or false depends only on the extension/truth value of the embedded sentence. Since both the embedded sentences are true, 4 and 4a are both false. Let's put this by saying that 'It is not the case that' creates extensional contexts. As we saw above, 'believes' doesn't create extensional contexts. 3 and 3a can differ in truth value even though the embedded sentences are both true. Let's say that 'believes' creates nonextensional contexts. The same is true of 'Necessarily.' The following differ in truth value even though the embedded sentences have the same extensions/truth values:

5. Necessarily, everything is identical to itself.

5a. Necessarily, Aristotle is a philosopher.

Finally, intensional verbs like 'worship' exhibit similar behavior and we could extend our characterization of creating nonextensional contexts so as to include such verbs. For even though 'Samuel Clemens' and 'Mark Twain' have the same extension (a certain individual), the following two sentences apparently may differ in extension/truth value:

6. Lori worships Samuel Clemens.

6a. Lori worships Mark Twain.

Now semanticists have been puzzled as to how to think of the semantics of expressions that create nonextensional contexts. But the work of Carnap and Kripke suggested the way to understand 'Necessarily.' In particular,

'Necessarily S' is true at a world w just in case the intension of S maps every world (accessible from w) to true.

In other words, whereas 'It is not the case' looks at the extension of the sentence it embeds to determine whether the entire sentence containing it is true, 'Necessarily' looks at the intension of the sentence it embeds to determine whether the entire sentence containing it is true. And given Kripke's semantics, intensions were well defined, respectable entities: functions from worlds to extensions. This made it appear to many that a semantics that assigned intensions to expressions could treat all expressions creating nonextensional contexts. Certainly, Montague had a version of this view.

As indicated above, Montague (1973) wanted to provide semantic treatments of verbs of propositional attitude such as 'believes,' intensional verbs such as 'worships,' and other phenomena. We will concentrate on these phenomena as well as Montague's treatment of quantification. Montague (1973) provides a syntax for a fragment of English. The fragment includes common nouns ('woman'; 'unicorn'), intransitive verbs (including 'run' and 'rise'), transitive verbs (including both intensional transitives and "normal" transitive verbs like 'love'), ordinary names and pronouns, adverbs (including 'rapidly' and 'allegedly'), prepositions, verbs of propositional attitude and modal sentence adverbs ("adsentences"; 'necessarily'). The fragment allows the formation of relative clauses (though they employ the somewhat stilted 'such that,' so that we get things like 'man such that he loves Mary') and so complex noun phrases, as well as prepositional phrases and quantifier phrases ('Every woman such that she loves John'). Thus, Montague's syntactic fragment includes sentences like:

7. Every man loves a woman such that she loves him.

8. John seeks a unicorn.

9. John talks about a unicorn.

10. Mary believes that John finds a unicorn.

11. Mary believes that John finds a unicorn and he eats it.

It should be noted that many sentences of Montague's fragment had non-trivially different syntactic analyses: that is, distinct syntactic analyses that are interpreted differently semantically. So, for example, 8 above has an analysis on which 'a unicorn' is the constituent last added to the sentence and an analysis on which 'John' is the last constituent added. The latter has an interpretation on which it may be true even if there are no unicorns and so John is seeking no particular one. The former requires John to be seeking a particular unicorn. Thus, it is really syntactic analyses of sentences, and not the sentences themselves, that get semantic interpretations.

The next aspect of Montague's semantic treatment of his fragment of English is his intensional logic. Montague's intensional logic is typed. In particular, e and t are the basic types; and whenever a and b are types, ⟨a,b⟩ is a type. Finally, for any type a, ⟨s,a⟩ is a type. For each type, there will be both constants and variables of that type (and hence quantifiers of that type). The key to understanding the syntactic interactions of the expressions of various types is to know that if α is of type ⟨a,b⟩ and β is of type a, then α(β) is of type b. Interpretations assign expressions of the logic various denotations (relative to an assignment of values to variables). Expressions of type e get assigned individuals (possible individuals); expressions of type t get assigned truth values. Expressions of type ⟨a,b⟩ get assigned as denotations functions from denotations of type a to denotations of type b. Finally, expressions of type ⟨s,a⟩ get assigned functions from a world/time pair to a denotation of type a ("an intension of a type a expression"). To take some examples, expressions of type ⟨e,t⟩ get assigned functions from individuals to truth values (the denotations can alternatively be thought of as sets of individuals: those that get mapped to true). Expressions of type ⟨s,e⟩ are assigned functions from world/time pairs to individuals. Such functions Montague called individual concepts. Expressions of type ⟨⟨s,e⟩,t⟩ are assigned functions from individual concepts to truth values (alternatively, sets of individual concepts). Expressions of type ⟨s,t⟩ are assigned functions from world/time pairs to truth values. As indicated above, Montague thought of these as propositions.
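The type discipline itself is mechanical enough to sketch. The short Python fragment below models types as nested pairs and checks the application rule just stated (the encoding is an illustration, not Montague's own formalism):

```python
# Types modeled as nested tuples: "e" and "t" basic; (a, b) plays <a,b>; ("s", a) plays <s,a>.
E, T, S = "e", "t", "s"

def fn(a, b):        # the type <a,b>: functions from type-a to type-b denotations
    return (a, b)

def intens(a):       # the type <s,a>: functions from world/time pairs to type-a denotations
    return (S, a)

def apply_type(func_type, arg_type):
    """The rule in the text: if alpha is of type <a,b> and beta is of type a,
    then alpha(beta) is of type b."""
    a, b = func_type
    if a != arg_type:
        raise TypeError("type mismatch")
    return b

# An intransitive verb denotes a set of individual concepts: type <<s,e>,t>.
iv = fn(intens(E), T)
print(apply_type(iv, intens(E)))   # 't': verb applied to an individual concept
```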

The way Montague provided a semantic interpretation of his syntactic fragment of English was to provide an algorithm for translating English sentences (really, syntactic analyses of English sentences) into his intensional logic. Then the interpretation of the English sentences was given by the interpretation of its translation in intensional logic. Recall again that sentences like 8 above can be true even if there are no unicorns. Thus, a verb like 'seeks' could not have as its denotation (really, its translation into intensional logic could not have as its denotation) a relation between individuals (or a function from individuals to a function from individuals to truth values).

In order to get the proper results, Montague decided to assign to common nouns and intransitive verbs as their denotations sets of individual concepts rather than sets of individuals. Verbs like 'believes' have as their denotations functions from propositions to sets of individual concepts. Since individual concepts essentially function as individuals in Montague's semantics (recall that common nouns like 'man' have as denotations sets of individual concepts), this treatment essentially amounts to holding that verbs of propositional attitude denote relations between individuals and propositions. Quantifiers such as 'Every man' denote sets of properties of individual concepts (properties of individual concepts being functions from world/time pairs to sets of individual concepts). Roughly, 'Every man walks' is true at a world/time pair ⟨w,t⟩ just in case the property of individual concepts that determines, at every world and time, the set of individual concepts denoted by 'walks' is in the set of properties of individual concepts denoted by 'Every man' at ⟨w,t⟩. 'Necessarily' denotes at a world/time pair ⟨w,t⟩ a set of propositions: those that are necessary at ⟨w,t⟩.

Finally, a transitive verb denotes a function from properties of properties of individual concepts (denotations of expressions of type ⟨s,⟨⟨s,⟨⟨s,e⟩,t⟩⟩,t⟩⟩: functions from world/time pairs to sets of properties of individual concepts) to sets of individual concepts. Again, recalling that individual concepts essentially stand in for individuals in Montague's framework, this means that transitive verbs in effect denote relations between individuals and properties of properties of individuals. Note that this means that for 8 to be true at a world/time pair ⟨w,t⟩ is for John to stand in a relation to the property of being a property possessed by a unicorn. This can be the case even if there are no unicorns.

Montague chose to treat all expressions of a given syntactic category the same way semantically. This means that transitive verbs like 'loves' get the odd denotation required by 'seeks' to get 8 right. But don't we want 'John loves Mary' to be true at a world/time pair iff the individual John stands in a relation to the individual Mary? Surely this shouldn't require instead that John stands in a relation to the property of being a property possessed by Mary. Where's the love (between individuals)? Montague essentially requires interpretations to make true meaning postulates for "ordinary" verbs like 'loves,' and these end up ensuring that 'John loves Mary' is true at ⟨w,t⟩ iff John and Mary themselves are properly related.

Montague's semantic account here was very influential. He showed that the resources Kripke and others developed for the semantics of modal logic could be rigorously applied to natural languages, and arguably treat such recalcitrant expressions as 'believes,' 'necessarily,' and 'seeks.' Montague's basic approach was picked up by many philosophers and linguists and much work in semantics through the 1980s and beyond was conducted in this framework. Indeed, much work is still done in this and closely related frameworks.

At about the same time Montague was doing his pioneering work on formal semantics for natural languages, Donald Davidson was developing a very different approach to semantics. Davidson (1967) begins with the idea that a theory of meaning for a natural language must specify how the meaning of a sentence is determined by the meanings of the words in it, and presumably how they are combined (in other writings, Davidson puts the point by saying that the meaning of a sentence must be a function of a finite number of features of the sentence; presumably, one is its syntax). Davidson thought that only a theory of this sort could provide an explanation of the fact that on the basis of mastering a finite vocabulary and a finite number of syntactic rules, we are able to understand a potentially infinite number of sentences. More specifically, Davidson thought a theory of meaning should comprise an axiomatized theory, with a finite number of axioms, that entails as theorems (an infinite number of) statements specifying the meaning of each sentence of the language. Davidson thought that grasping such a theory would allow one to understand all the sentences of the language. Further, as suggested above, such a theory would explain how creatures like us are capable of understanding an infinite number of sentences. It would only require us to grasp the axioms of the theory of meaning, which are finite in number.

It might be thought that the theorems of a theory of meaning of the sort discussed would be all true sentences of the form 's means m,' where 's' is replaced by a structural description of a sentence of the language and 'm' is replaced by a term referring to a meaning. Further, it might be thought that a theory would have such theorems in part by assigning meanings to the basic expressions of the language (such assignments being made by axioms). However, Davidson thinks that we have not a clue as to how to construct such a theory, mainly because we have no idea how the alleged meanings of simpler expressions combine to yield the meanings of the complex expressions of which they are parts. Thus, Davidson concludes, postulating meanings of expressions gets us nowhere in actually giving a theory of meaning for a language.

Davidson's counterproposal as to what a theory of meaning should be like is radical. A theory of meaning must consist of a finite number of axioms that entail, for every sentence of the language, a true sentence of the form 's is true iff p,' where 's' is replaced by some sort of description of a sentence of the language whose theory of meaning we are giving, and 'p' is replaced by some sentence. Henceforth, we will call such sentences T-sentences. Recalling our discussion of Tarski, the language we are giving a theory of meaning for is the object language and the theory of meaning is given in the metalanguage. Thus, the formulation just given requires the metalanguage to have some sort of (presumably standardized) description of each sentence of the object language (to replace 's'); if we imagine 'p' to be replaced by the very sentence that what replaces 's' describes (as Davidson sometimes supposes), the metalanguage must also contain the sentences of the object language. In short, Davidson held that to give a theory of meaning for a language is to give a Tarski-style truth definition for it.
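Though Davidson's theory is a set of axioms from which T-sentences are derived, not a computer program, the finitely stated, recursive character he demands can be illustrated. The Python sketch below (the toy object language and its base clauses are invented) settles truth for infinitely many sentences with a fixed, finite set of clauses:

```python
# Toy object language: base sentences as strings; ('not', A) and ('and', A, B) complex.
base = {"snow is white": True, "grass is green": True}

def true_in_L(sentence):
    """A finite number of recursive clauses fixes truth conditions for the
    infinitely many sentences built from the base vocabulary."""
    if isinstance(sentence, str):
        return base[sentence]                      # base clauses (axioms for atoms)
    if sentence[0] == "not":
        return not true_in_L(sentence[1])          # 'not A' is true iff A is not true
    if sentence[0] == "and":
        return true_in_L(sentence[1]) and true_in_L(sentence[2])
    raise ValueError("unknown form")

print(true_in_L(("and", "snow is white", ("not", "grass is green"))))  # False
```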

Tarski thought that a condition of adequacy for a theory of truth for a (in his case, formal) language L was that the theory have as consequences all sentences of the form 's is true (in L) iff p,' where 's' is replaced by a structural description of a sentence of the object language and 'p' is replaced by a translation of it. Here Tarski clearly seemed to think that for one sentence to translate another is for them to share a meaning. However, in characterizing what is to replace 'p' in his T-sentences, Davidson cannot require 'p' to be replaced by a translation of the sentence the thing replacing 's' describes, assuming anyway that for one sentence to be a translation of another is for them to share the same meaning. For Davidson eschews meanings. After all, a theory of truth was supposed to be a theory of meaning; it would hardly do, then, to appeal to meanings in constructing one's theory of truth. Thus Davidson famously merely requires the T-sentences to be true. But this requirement is very weak, for 'iff' is truth functional in Davidson's T-sentences, and so the sentences require for their truth only that the two sides share a truth value. But then there is nothing in principle yet to prevent having a theory of truth for English that yields not:

12. 'Snow is white' is true (in English) iff snow is white.

but instead

13. 'Snow is white' is true (in English) iff grass is green.

After all, 13 is true! Davidson was aware of this consequence of his view, and explicitly discussed it. He claimed that by itself, the fact that a theory of truth yields 13 as a theorem instead of 12 doesn't cut against it. However, the theory has to get all the other T-sentences coming out true, and Davidson thought it was unlikely that it could do that and yield 13 as a theorem.

Of course, the picture sketched so far needs to be complicated to account for contextually sensitive expressions. It won't do to have as theorems of one's truth theory things such as:

14. 'I am hungry' is true (in English) iff I am hungry.

Davidson himself thought that the way to deal with this was to relativize truth to e.g. a speaker and a time (to handle tense). Others have suggested that a theory of truth for a language containing such contextually sensitive words must define truth for utterances of sentences. For example, see Weinstein (1974).

Further complications are required as well. Natural language contains devices not contained in the relatively austere formal languages for which Tarski showed how to define truth. Natural languages contain verbs of propositional attitude ('believes'), non-indicative sentences and other features. Davidson attempted to provide accounts of many such devices in other papers. Davidson (1968) for example takes up verbs of propositional attitude.

One sometimes hears model theoretic approaches to semantics contrasted with those that offer an absolute truth theory. The contrast is illustrated by comparing Montague and Davidson, since each is perhaps the paradigmatic case of one of these approaches. As we saw, Montague gives a semantics for English sentences by associating them with formulae of intensional logic. He then gives a semantics for the formulae of intensional logic. Now the latter includes a definition of truth relative to an interpretation (and other parameters as well). As discussed, expressions of Montague's intensional logic only have denotations (and intensions) relative to interpretations, which are also sometimes called models. Roughly, then, a model theoretic semantics is one that defines truth relative to models or interpretations. By contrast, as we have seen, Davidson wants a theory of truth simpliciter (actually, truth for L, but truth isn't relativized to models). Thus, Davidson's approach is sometimes called an absolute truth theory approach. I believe it is fair to say that most semanticists today use a model theoretic approach.

The 1960s and 1970s saw an explosion in the sort of model theoretic semantics pioneered by Montague, Lewis and others. Some of the important developments had to do with evolving notions of an index of evaluation. As we saw above, in Montague's intensional logic, expressions are assigned extensions/denotations at world/time pairs (under an interpretation relative to an assignment of values to variables; this will be suppressed in the present discussion for ease of exposition). In particular, formulae are assigned truth values at a pair of a world and time.

Since expressions of Montague's English fragment receive semantic interpretations by being given the interpretation assigned to the expressions of intensional logic they are translated into, exactly similar remarks apply to English expressions and sentences. We shall call these elements at which expressions are assigned extensions (in this case, world/time pairs) indices. (Terminology here varies: Montague called these things points of reference; Lewis [1970] called them indices, which is probably the most common term for them.) It should be obvious why sentences are assigned truth values at worlds. The reason Montague included times in his indices was that his intensional logic included tense operators in order that he could capture the rudimentary behavior of tense in English. Semantically, such operators work by shifting the time element of the index. Thus, where P is a past tense operator, φ a formula, w a world and t a time, Pφ is true at ⟨w,t⟩ iff φ is true at ⟨w,t′⟩ for some t′ prior to t. Similarly, modal operators shift the world element of the index: Necessarily φ is true at ⟨w,t⟩ iff φ is true at ⟨w′,t⟩ for all w′.
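Index shifting can be sketched directly. In the following Python illustration (the worlds, times and valuation of the atomic formula are invented), the tense operator shifts the time coordinate of the index and the modal operator shifts the world coordinate:

```python
# Toy index semantics: worlds, times and the valuation below are invented.
worlds = ["w1", "w2"]
times = [1, 2, 3]
val = {(w, t): (w == "w1" and t < 3) for w in worlds for t in times}

def phi(w, t):
    """Atomic formula: true at exactly the invented indices above."""
    return val[(w, t)]

def past(sentence):
    """Tense operator P: P(phi) is true at <w,t> iff phi is true at <w,t'>
    for some t' prior to t (shifts the time coordinate)."""
    return lambda w, t: any(sentence(w, t2) for t2 in times if t2 < t)

def necessarily(sentence):
    """Modal operator: true at <w,t> iff the sentence is true at <w',t>
    for all worlds w' (shifts the world coordinate)."""
    return lambda w, t: all(sentence(w2, t) for w2 in worlds)

print(past(phi)("w1", 3))          # True: phi held at an earlier time in w1
print(necessarily(phi)("w1", 1))   # False: phi fails at w2 at time 1
```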

So the truth values of formulae of Montague's intensional logic, and so of the English sentences they translate, depend on (or vary with) both a world and a time. Of course, it was noticed that the truth values of some English sentences vary with other features as well, such as who is speaking (if the sentence contains 'I'); who is being addressed (if the sentence contains 'you'); where the sentence is uttered (if the sentence contains 'here') and so on. A natural thought was to build into indices features for all such expressions, so that indices would contain all the features that go into determining extensions of expressions. Thus, indices would be n-tuples of a world, time, place, speaker, addressee and so on. Lewis (1970) is a good example of an "index semantics" with indices containing many features. However, a number of developments resulted in such approaches being abandoned or at least significantly modified.

Hans Kamp (1971) discovered that in a language with standard feature-of-index shifting tense operators and contextually sensitive expressions that are sensitive to that same feature, such as 'now,' one needs two temporal coordinates. The point can be illustrated using a sentence in which 'now' occurs embedded under e.g. a past tense operator (assume 'one week ago' is a past tense operator):

15. One week ago Sarah knew she would be in Dubrovnik now.

When this sentence is evaluated at an index, there must be a time in the index for 'one week ago' to shift. The embedded sentence ('Sarah knew she would be in Dubrovnik now') is then evaluated relative to an index whose time feature has been shifted back one week. But then if 'now' takes that time as its value, we predict that 15 means that one week ago Sarah knew she would be in Dubrovnik then. But the sentence doesn't mean that. So the index must contain a second time, in addition to the one shifted by 'one week ago,' that remains unshifted so that the embedded occurrence of 'now' can take it as its value.
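A hedged programmatic sketch makes Kamp's point vivid. Below, sentences are evaluated at a pair of times, one fixed by the context and one shiftable by tense operators; the numbers and the atomic sentence are invented, with a week compressed to 7 units:

```python
# Double indexing: a sentence is evaluated at (context_time, eval_time);
# the first coordinate is never shifted.

def in_dubrovnik(context_time, eval_time):
    """Atomic: sensitive only to the shiftable evaluation time."""
    return eval_time == 10          # Sarah is in Dubrovnik at time 10 only

def one_week_ago(sentence):
    """Shifts the evaluation time back one week; the context time is untouched."""
    return lambda c, t: sentence(c, t - 7)

def now(sentence):
    """'now' resets the shiftable time to the unshifted time of the context."""
    return lambda c, t: sentence(c, c)

utterance = 10
embedded = now(in_dubrovnik)            # '... she would be in Dubrovnik now'
print(one_week_ago(embedded)(utterance, utterance))      # True: 'now' picks out time 10
# With a single time coordinate, 'now' could only see the shifted time 3,
# and the sentence would wrongly come out false:
print(one_week_ago(in_dubrovnik)(utterance, utterance))  # False: the wrong reading
```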

Kamp's requirement of there being two time coordinates is sometimes called the requirement of double indexing. I emphasize again that the requirement stems from there being in the language an operator that shifts a certain feature (time, in our case) and a contextually sensitive expression that picks up as its value the same feature. The argument above given for double indexing of times, then, assumes that temporal expressions ('One week ago') are index shifting operators. Many, including the present author, doubt this claim. (See King [2003] for discussion.) But similar arguments (involving 'actual' and 'Necessarily') could be given for double indexing of worlds.

At any rate, on the basis of such considerations, it was thought that minimally, one needed two indices, each of which contained (at least) a world and a time. However it was Kaplan (1989) (written in the early 1970s and circulated for years in mimeograph form) that provided the proper theoretical understanding of double indexing. Kaplan forcefully argued that not only do we need two indices for the reasons Kamp suggested as well as others (see section VII of 'Demonstratives'), but we need to recognize that the indices are representing two very different things, with the result that we need to recognize two different kinds of semantic values. One index represents context of utterance. This is the index that provides values for contextually sensitive expressions such as 'I,' 'now,' 'here' and so on. The intuitive picture is that a sentence taken relative to a context of utterance has values assigned to such contextually sensitive expressions. This results in the sentence having a content, what is said by the sentence, taken in that context.

So if I utter 'I am hungry now' on June 12, 2006, the content of the sentence in that context, what I said in uttering it then, is that Jeff King is hungry on June 12, 2006. Now that very content can be evaluated at different circumstances of evaluation, which are what the other index represents. For simplicity, think of circumstances of evaluation as simply possible worlds. Then we can take the sentence 'I am hungry now' and consider its content relative to the context of utterance described above. That content, or proposition, can then be evaluated for truth or falsity at different circumstances of evaluation (possible worlds). It is true at worlds in which Jeff is hungry on June 12, 2006 and false at those where he is not.

This distinction between context and circumstance, which the two indices represent, gives rise to a distinction between two kinds of semantic value (here we confine ourselves to the semantic values associated with sentences). On the one hand, the sentence 'I am hungry now' has a meaning that is common to utterances of it regardless of speaker or time. It is this meaning that determines what the content of that sentence is, taken relative to contexts with different speakers and times. So this meaning, which Kaplan called character, determines a function from contexts to propositional content or what is said. By contrast, there is a sense in which the sentence 'I am hungry now' uttered by me now and by Rebecca tomorrow means different things. This is because the sentence has different contents relative to those two contexts. So content is the other kind of semantic value had by sentences. Contents are true or false at worlds, so contents determine functions from worlds to truth values. In summary, character determines a function from context to content; content determines a function from worlds to truth values. Kaplan's distinction between context and circumstance and the corresponding distinction between character and content has been hugely influential and widely accepted.
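
The two-tier picture lends itself to a simple functional model. The following Python sketch is my own toy rendering, not Kaplan's formal system: a context is a (speaker, time) pair, a circumstance is a possible world modeled as a set of (person, time) pairs recording who is hungry when, and character and content are the corresponding nested functions:

def character(context):
    # Character: a function from contexts to contents.
    speaker, time = context
    def content(world):
        # Content: a function from circumstances (worlds) to truth values.
        return (speaker, time) in world
    return content

context = ("Jeff King", "June 12, 2006")
what_is_said = character(context)          # the content of 'I am hungry now' here
w1 = {("Jeff King", "June 12, 2006")}      # a world where Jeff is hungry then
w2 = set()                                 # a world where nobody is ever hungry
print(what_is_said(w1))  # True
print(what_is_said(w2))  # False

The same character yields a different content at a context with a different speaker or time, which is just Kaplan's point about 'I am hungry now' in my mouth today and in Rebecca's tomorrow.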

Another important feature of Kaplan's (1989) work is his argument that both demonstratives (contextually sensitive words whose use requires the speaker to do something like demonstrate (point at) who she is talking about: 'he,' 'she,' 'this,' 'that') and pure indexicals (contextually sensitive words that don't require such demonstrations: 'I,' 'today,' etc.) are devices of direct reference. If we think of contents of sentences, propositions, as structured entities having as constituents the individuals, properties and relations that are the contents (relative to a context) of the expressions in the sentence, a view Kaplan likes, we can understand the claim that indexicals and demonstratives directly refer as the claim that these expressions contribute to propositions (relative to a context) the individuals they refer to (in the context). Thus, when I say: 'I am hungry,' the indexical 'I' contributes me to the proposition expressed by that sentence in that context.
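
On the structured-proposition picture, such a proposition can be modeled as an ordered pair of the referent and the property. The toy encoding below is an assumption of this sketch (Kaplan provides no such data structure); the point it illustrates is only that the directly referential 'I' contributes the individual herself, not a descriptive condition:

def is_hungry(x):
    # A toy property, modeled as a function from individuals to truth values.
    return x in {"Jeff King"}

def proposition_expressed(speaker_of_context):
    # 'I am hungry' relative to a context: 'I' contributes the speaker
    # herself; 'am hungry' contributes the property.
    return (speaker_of_context, is_hungry)

individual, prop = proposition_expressed("Jeff King")
print(prop(individual))  # True: the structured proposition is true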

Historically, the importance of this direct reference account of indexicals and demonstratives is its anti-Fregean thrust. Recall that for Frege, expressions generally, even those that refer to individuals, contribute to propositions senses that pick out their references and not the references themselves. In claiming that indexicals and demonstratives contribute individuals to propositions rather than senses that pick out those individuals, Kaplan was proposing a radically anti-Fregean account of indexicals and demonstratives. Kaplan's arguments here complemented the anti-Fregean arguments of one of the most influential works in philosophy of language of the twentieth century: Saul Kripke's (1980) Naming and Necessity.

Among other things, Kripke (1980) provided powerful arguments against what he sometimes calls the description theory of names. On the description theory, names are held either to be synonymous with definite descriptions or (more weakly) to have their references fixed by definite descriptions. So, for example, 'Aristotle' might be thought to be synonymous with 'the teacher of Alexander,' and whoever satisfies this description is the referent of 'Aristotle.' Frege's view was thought to be a version of the description theory, since Frege seems to say that the sense of a proper name can be expressed by a definite description (Frege [1892a] note B), in which case the name and description would be synonymous. Kripke argued very compellingly that descriptions were neither synonymous with, nor determined the reference of, proper names. As to synonymy, Kripke pointed out that whereas

16. The teacher of Alexander taught Alexander.

expresses (nearly) a necessary truth,

17. Aristotle taught Alexander.

expresses a highly contingent truth. But if the name and description were synonymous, the two sentences should be synonymous, and so both should be contingent or both should be necessary. But they aren't. Indeed, the name and description seem to function very differently semantically. As Kripke famously noted, whether 17 is true at any possible world depends on the properties of Aristotle at that world. This is because 'Aristotle' is what Kripke called a rigid designator: the expression designates Aristotle at every world where he exists, and never designates any individual other than Aristotle. Hence evaluating the sentence at a world always requires us to check Aristotle's properties there. By contrast, 'the teacher of Alexander' presumably designates different individuals at different worlds, depending on who taught Alexander there. Thus, this expression is non-rigid.
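
The contrast can be put in terms of functions from worlds to designata. In the sketch below (my toy model, with two worlds and invented facts at the second), the rigid designator is a constant function and the description is not, and the two sentences accordingly differ in modal profile:

# Toy facts (hypothetical): at w2 Speusippus, not Aristotle, taught Alexander.
TEACHER_OF_ALEXANDER = {"w1": "Aristotle", "w2": "Speusippus"}
TAUGHT_ALEXANDER = {"w1": {"Aristotle"}, "w2": {"Speusippus"}}

def aristotle(world):
    return "Aristotle"                  # rigid: the same individual at every world

def the_teacher_of_alexander(world):
    return TEACHER_OF_ALEXANDER[world]  # non-rigid: varies from world to world

def taught_alexander(designator, world):
    # Truth at a world of '<designator> taught Alexander'.
    return designator(world) in TAUGHT_ALEXANDER[world]

print(all(taught_alexander(the_teacher_of_alexander, w) for w in ("w1", "w2")))
# True: 16 holds at both toy worlds, as a (near) necessary truth should
print(taught_alexander(aristotle, "w2"))
# False: 17 is contingent, failing at a world where Aristotle never taught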

As to descriptions determining the referents of names, Kripke adduced a number of considerations, but perhaps the most persuasive was the following. Consider a name and any description that allegedly fixes the referent of the name; say 'the man who proved the incompleteness of arithmetic' fixes the referent of 'Gödel.' If we imagine that in fact some man Schmidt satisfies the description, we do not conclude that he is the referent of 'Gödel.' Quite the contrary, we conclude that the referent of 'Gödel,' that is, Gödel, fails to satisfy the description. But then the description does not fix the referent of the name (i.e. the referent is not whoever satisfies the description).

The arguments of Kaplan (1989) and Kripke (1980), together with arguments given by Donnellan, Marcus, Putnam and others turned semantics in a very anti-Fregean direction from the 1970s on. This anti-Fregean strain as applied to singular terms is sometimes called the new theory of reference.

As we saw above, Kaplan claimed that indexicals and demonstratives were directly referential and contributed their referents (relative to a context) to the propositions expressed by sentences in which they occur (interestingly, this is not reflected in Kaplan's [1989] formal system, which makes use of unstructured propositions that have no constituents corresponding to the words in the sentences that express the propositions; but his informal remarks make clear his intent). By contrast, though Kripke (1980) argued against the description theory of names, he cautiously made no positive claims about what names contribute to propositions (the preface to Kripke [1980] makes clear that this caution was intended; see pp. 20–21). In a series of works in the 1980s, most famously Salmon (1986) and Soames (1987), Scott Soames and Nathan Salmon offered powerful arguments in favor of the view that names too were devices of direct reference and contributed only their bearers to propositions expressed by sentences in which they occur. Both Soames and Salmon defended the view that sentences (relative to contexts) express structured propositions, with names (and indexicals and demonstratives) contributing the individuals to which they refer to propositions. Salmon and Soames both also thought that attitude ascriptions such as the following:

18. Nathan believes that Mark Twain is an author.

assert that the subject (Nathan) stands in a certain relation (expressed by 'believes') to a structured proposition (expressed by the embedded sentence). If that is right and if names contribute only individuals to propositions expressed by sentences in which they occur, then (assuming a simple principle of compositionality) 18 expresses the same proposition as

19. Nathan believes that Sam Clemens is an author.

Thus, on the Soames-Salmon view 18 and 19 cannot differ in truth value. Though this seems counterintuitive, Soames (1987) and Salmon (1986) offer spirited defenses of this result. Soames (1987) also offers extremely compelling arguments against the view that propositions are unstructured sets of worlds (or circumstances). Some version of the Soames/Salmon view is widely considered to be the standard direct reference view in semantics. Views such as theirs, which make use of structured propositions and endorse direct reference for names, demonstratives and indexicals, are often called Russellian.

About the same time the new theory of reference was becoming prominent, quite different developments were taking place in semantics. In pioneering work first presented in the late 1960s (as the William James Lectures at Harvard; later published in Grice [1989] as Essay 2), Paul Grice sought to give a (somewhat) systematic account of (as we would now put it) how the production of a sentence with a certain semantic content can convey further information beyond its semantic content. To give an example from Grice, suppose A and B are planning their itinerary for a trip to France and both know A wants to visit C if doing so wouldn't take them too far out of their way. They have the following exchange:

A: Where does C live?

B: Somewhere in the south of France.

Since both are aware that B offered less information than is required for the purposes at hand, and since B can be presumed to be attempting to cooperate with A, B conveys that she doesn't know where C lives, though this is no part of the semantic content of the sentence she uttered. Grice gave an account of how such information (not part of the semantic content of any sentence asserted) can be conveyed. The account depended on the claim that conversational participants are all obeying certain principles in engaging in conversation. The main idea, as illustrated above, is that conversational participants are trying in some way to be cooperative, and so to contribute to the conversation at a given point what is required given the purpose and direction of the conversation. Grice's central theoretical idea was that certain types of information exchange and certain types of regularities in conversations don't have purely semantic explanations. The study of how information gets conveyed that goes beyond the semantic content of the sentences uttered falls within the field of pragmatics (which is why, though Grice's work is extremely important, it has not been discussed at greater length in an entry on semantics).

In a series of papers that (for our purposes anyway) culminated in Stalnaker (1978), Robert Stalnaker, consciously following Grice, was concerned with ways in which in conversations information can be conveyed that goes beyond the semantic contents of sentences uttered as a result of conversational participants obeying certain principles governing conversation. More specifically, Stalnaker developed an account of how context of utterance and semantic contents of sentences (relative to those contexts) produced in those contexts can mutually influence each other.

Of course, how context influences the semantic content of sentences relative to those contexts was already fairly well understood. As discussed above, for example, context supplies the semantic values relative to those contexts for contextually sensitive expressions such as 'I.' Stalnaker sought to understand how the content relative to a context of a sentence uttered can affect the context. Stalnaker began by introducing the notion of speaker presupposition. Stalnaker understood the proposition expressed by a sentence (relative to a context) to be a set of possible worlds (the set of worlds in which the sentence taken in that context is true). Very roughly, the propositions a speaker presupposes in a conversation are those whose truth he takes for granted and whose truth he thinks the other participants take for granted too.

Consider now the set of possible worlds that are compatible with the speaker's presuppositions (the set of worlds in which every presupposed proposition is true). Stalnaker calls this the context set, and it is for him a central feature of a context in which a conversation occurs. (Strictly, every participant in the conversation has his own context set, but we will assume that these are all the same; Stalnaker calls such a context non-defective.) The contents of sentences (relative to a context) affect the context in the following way: if a sentence is asserted and accepted, then any world in the context set in which the sentence (taken in that context) is false is eliminated from the context set. In short, (accepted) assertions function to reduce the size of the context set, or eliminate live options.
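
Since propositions are sets of worlds on this picture, the update rule amounts to set intersection. A minimal sketch (with invented world labels and an invented asserted proposition):

def assert_and_accept(context_set, proposition):
    # An accepted assertion eliminates every world in the context set
    # at which the asserted content is false.
    return context_set & proposition

context_set = {"w1", "w2", "w3", "w4"}  # worlds compatible with what is presupposed
it_is_raining = {"w1", "w3"}            # the asserted proposition: a set of worlds
context_set = assert_and_accept(context_set, it_is_raining)
print(context_set)  # {'w1', 'w3'}: two live options have been eliminated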

Stalnaker uses this idea to explain a variety of phenomena, including how the utterance of sentences with trivial semantic content (relative to a context) can nonetheless be informative. It is important to see that Stalnaker, like Grice, took his account here to be not part of semantics, but rather to be something that presupposed the semantics of sentences (taken in contexts). In short, like Grice's work, it was work in pragmatics. However, Stalnaker's idea that the information conveyed by the utterance of multiple sentences in a discourse can go beyond anything countenanced by traditional semantics, and that it is important to understand the dynamics of conversation in order to understand how information is conveyed, influenced others who went on to develop semantic theories that capture the dynamics of conversation (Lewis [1979] was another important early influence to the same effect).

In the early 1980s, Irene Heim (1982) and Hans Kamp (1981) independently arrived at very similar semantic accounts that were intended to apply to multi-sentence discourses. Kamp's view is called Discourse Representation Theory (DRT), and Heim's view is sometimes called that as well, or File Change Semantics (FCS). To take a simple example of the sort that DRT and FCS were designed to handle, consider a (short) discourse such as:

20. Alan owns a donkey. He beats it.

Using Kamp's formulation, the discourse representation structure (DRS ) associated with the first sentence of 20 would (roughly) look as follows:

x1 x2

x1=Alan

donkey(x2)

x1 owns x2

After the utterance of the second sentence, the DRS associated with the entire discourse would look as follows, where we have simply added one more line (a condition) to the DRS for the first sentence of 20 (we assume that 'He' is anaphoric on 'Alan' and 'it' on 'a donkey'):

x1 x2

x1=Alan

donkey(x2)

x1 owns x2

x1 beats x2

Note that expressions like 'a donkey' introduce variables (called discourse referents) and predicates ('donkey') into DRSs, rather than existential quantifiers. Again very roughly, this DRS (and hence the original discourse) is true in a model iff there is an assignment to the variables of the DRS that results in all its conditions being true in that model. It is the requirement that there be such an assignment that results in default existential quantification of free variables. So though indefinites like 'a donkey' are not existential quantifiers on this view, they have existential force (in this case, anyway) due to default existential quantification of free variables. Aside from the desire to apply semantics at the level of the discourse instead of the sentence, much of the motivation for DRT and FCS came from cases such as 20 (and others) in which a pronoun is anaphoric on another expression (see the entry on anaphora).
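
The truth condition just stated is easy to make precise in a toy model. The sketch below (my construction, with an invented two-element domain and invented facts) checks whether some assignment to the discourse referents verifies every condition of the DRS for 20, which is exactly how the indefinite acquires existential force:

from itertools import product

# A hypothetical model for the discourse.
DOMAIN = {"Alan", "Daisy"}
DONKEY = {"Daisy"}
OWNS = {("Alan", "Daisy")}
BEATS = {("Alan", "Daisy")}

def drs_true(conditions, referents):
    # True iff SOME assignment of domain elements to the discourse
    # referents satisfies every condition (default existential quantification).
    for values in product(DOMAIN, repeat=len(referents)):
        g = dict(zip(referents, values))
        if all(cond(g) for cond in conditions):
            return True
    return False

conditions = [
    lambda g: g["x1"] == "Alan",           # x1 = Alan
    lambda g: g["x2"] in DONKEY,           # donkey(x2)
    lambda g: (g["x1"], g["x2"]) in OWNS,  # x1 owns x2
    lambda g: (g["x1"], g["x2"]) in BEATS, # x1 beats x2
]
print(drs_true(conditions, ["x1", "x2"]))  # True in this model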

DRT and FCS led directly to the development of other semantic accounts designed to capture the dynamics of conversation. In the paper that initiated what is now often called dynamic semantics, Groenendijk and Stokhof (1991) make clear that they see their account as a descendant of DRT, and throughout the paper they compare their Dynamic Predicate Logic (DPL) account to DRT. The basic idea of DPL is that instead of thinking of expressions as having "static" meanings, we should think of meanings as things that, given inputs, produce outputs. A bit more formally, think of the meanings (in models) of formulae of first order logic as given by the sets of assignments to variables that satisfy the formulae. So, for example, the meaning of 'Fx' in a model M is the set of all assignments that assign to 'x' something in the extension of 'F' in M. DPL claims instead that the meaning of a formula of first order logic is a set of pairs of assignments: the first, the input assignment; the second, the output assignment. For "externally dynamic" expressions (e.g. conjunction, existential quantifiers), the two can differ, and interpreting such expressions can therefore affect how subsequent expressions get interpreted: since the output assignments of a dynamic expression may differ from its inputs, and since those outputs may serve as inputs to subsequent expressions, the interpretation of those subsequent expressions may be affected.
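
The relational idea can also be made concrete. The following is a simplified sketch of the dynamic treatment (my encoding, using lists of output assignments rather than sets of pairs, and the toy donkey model again); the key behavior is that a binding introduced by the existential quantifier remains available to later conjuncts:

DOMAIN = {"Alan", "Daisy"}
DONKEY = {"Daisy"}
OWNS = {("Alan", "Daisy")}

def atom(test):
    # An atomic formula is a 'test': it passes an input assignment
    # through unchanged iff the assignment satisfies it.
    return lambda g: [g] if test(g) else []

def exists(var):
    # The existential quantifier is externally dynamic: it outputs one
    # assignment per way of (re)setting var, for later formulae to use.
    return lambda g: [{**g, var: d} for d in DOMAIN]

def conj(phi, psi):
    # Conjunction is relational composition: psi runs on phi's outputs.
    return lambda g: [h for g1 in phi(g) for h in psi(g1)]

# Roughly: Ex(donkey(x)) ; Alan owns x
formula = conj(conj(exists("x"), atom(lambda g: g["x"] in DONKEY)),
               atom(lambda g: ("Alan", g["x"]) in OWNS))
print(formula({}))  # [{'x': 'Daisy'}]: the binding survives past the quantifier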

There is currently much research being done within the framework of dynamic semantics, particularly among linguists. Muskens, van Benthem and Visser (1997) provide a good general overview.

There are many important topics in semantics that could not be covered in the present article. These include the theory of generalized quantifiers, the semantics of conditionals, the semantics of non-declarative sentences, the semantics of metaphor and two-dimensional semantics. Interested readers are encouraged to pursue these matters on their own.

See also Carnap, Rudolf; Conversational Implicature; Davidson, Donald; Frege, Gottlob; Grice, Herbert Paul; Heidegger, Martin; Hempel, Carl Gustav; Hintikka, Jaakko; Kaplan, David; Kripke, Saul; Lewis, Clarence Irving; Lewis, David; Logical Positivism; Marcus, Ruth Barcan; Meaning; Modality, Philosophy and Metaphysics of; Montague, Richard; Pragmatics; Putnam, Hilary; Reference; Russell, Bertrand Arthur William; Syntax; Tarski, Alfred; Wittgenstein, Ludwig Josef Johann.

Bibliography

Ayer, A.J., ed. Logical Positivism. Glencoe, IL: Free Press, 1959.

Beaney, Michael, ed. The Frege Reader. Oxford U.K.; Cambridge, MA: Blackwell, 1997.

Bennett, Michael. "Some Extensions of a Montague Fragment." PhD diss., UCLA, 1974.

Carnap, Rudolf. "The Elimination of Metaphysics through The Logical Analysis of Language" (1932). In Logical Positivism, edited by A.J. Ayer. Glencoe, IL: Free Press, 1959.

Carnap, Rudolf. Meaning and Necessity: A Study in Semantics and Modal Logic. Chicago: University of Chicago Press, 1947.

Church, Alonzo. "A Formulation of the Logic of Sense and Denotation." In Structure, Method, and Meaning: Essays in Honor of Henry M. Sheffer. New York: Liberal Arts Press, 1951.

Church, Alonzo. "On Carnap's Analysis of Statements of Assertion and Belief." Analysis 10 (1950): 9799.

Davidson, Donald. "On Saying That." Synthese 19 (1968): 130146

Davidson, Donald. "Truth and Meaning." Synthese 17 (1967): 304323.

Frege, Gottlob. "Function and Concept" (1891). In The Frege Reader. Oxford U.K.; Cambridge, MA: Blackwell, 1997.

Frege, Gottlob. "On Concept and Object" (1892). In The Frege Reader. Oxford U.K.; Cambridge, MA: Blackwell, 1997.

Frege, Gottlob. "On Sense and Reference" (1892). In The Frege Reader. Oxford, U.K.; Cambridge, MA: Blackwell, 1997.

Grice, Paul. Studies in the Way of Words. Cambridge, MA: Harvard University Press, 1989.

Groenendijk, J., and M. Stokhof. "Dynamic Predicate Logic." Linguistics and Philosophy 14 (1991): 39–100.

Heim, Irene. "The Semantics of Definite and Indefinite Noun Phrases." Doctoral thesis, University of Massachusetts, 1982.

Hempel, Carl. "The Empiricist Criterion of Meaning" (1950). In Logical Positivism. Glencoe, IL: Free Press, 1959.

Hintikka, Jaakko. "Modality and Quantification." Theoria 27 (1961): 110128.

Hodges, Wilfrid. "Tarski's Truth Definitions." The Stanford Encyclopedia of Philosophy, winter 2001 ed. Available from http://plato.stanford.edu/archives/win2001/entries/tarski-truth/.

Kamp, Hans. "Formal Properties of 'Now.'" Theoria 37 (1971): 227273

Kamp, Hans. "A Theory of Truth and Semantic Representation." In Formal Methods in the Study of Language, edited by J. Groenendijk and M. Stokhof. Amsterdam: Mathematical Centre, 1981.

Kanger, Stig. Provability in Logic. Stockholm: Almqvist & Wiksell, 1957.

Kaplan, David. "Demonstratives." In Themes from Kaplan, edited by Joseph Almog, John Perry, and Howard Wettstein. New York: Oxford University Press, 1989.

King, Jeffrey C. "Tense, Modality, and Semantic Values." Philosophical Perspectives 17 (2003): 195–245.

Kripke, Saul. "A Completeness Theorem in Modal Logic." The Journal of Symbolic Logic 24 (1) (1959): 114.

Kripke, Saul. "Semantical Considerations on Modal Logic" (1963). In Reference and Modality, edited by Leonard Linsky. London: Oxford University Press, 1971.

Kripke, Saul. Naming and Necessity (1972). Cambridge, MA: Harvard University Press, 1980.

Lewis, C. I., and C. H. Langford. Symbolic Logic. New York: Century, 1932.

Lewis, David. "General Semantics." Synthese 22 (1970): 1867.

Lewis, David. "Scorekeeping in a Language Game." Journal of Philosophical Logic 8 (1979): 339359.

Montague, Richard. "Logical Necessity, Physical Necessity, Ethics, and Quantifiers" (1960a). In Formal Philosophy; Selected Papers of Richard Montague, edited by Richmond Thomason. New Haven, CT: Yale University Press, 1974.

Montague, Richard. "On the Nature of Certain Philosophical Entities" (1960b). In Formal Philosophy; Selected Papers of Richard Montague, edited by Richmond Thomason. New Haven CT: Yale University Press, 1974.

Montague, Richard. "The Proper Treatment of Quantification in Ordinary English" (1973). In Formal Philosophy; Selected Papers of Richard Montague, edited by Richmond Thomason. New Haven, CT: Yale University Press, 1974.

Muskens, Reinhard, Johan van Benthem, and Albert Visser. "Dynamics." In Handbook of Logic and Language, edited by Johan van Benthem and Alice ter Meulen. Cambridge, MA: MIT Press, 1997.

Russell, Bertrand. Principles of Mathematics. 2nd ed. Cambridge, U.K.: Cambridge University Press, 1903.

Salmon, Nathan. Frege's Puzzle. Cambridge, MA: MIT Press, 1986.

Soames, Scott. "Direct Reference, Propositional Attitudes, and Semantic Content." Philosophical Topics 15 (1987): 4787.

Stalnaker, Robert. "Assertion" (1978). In Context and Content; Essays on Intentionality in Speech and Thought. New York: Oxford University Press, 1999.

Tarski, Alfred. "Der Wahrheitsbegriff in den formalisierten Sprachen." Studia Philosophica 1 (1935): 261405.

Weinstein, Scott. "Truth and Demonstratives." Nous 8 (1974): 179184.

Jeffrey C. King (2005)
