# Models, Mathematical

*Although mathematical models are applied in many areas of the social sciences, this article is limited to mathematical models of individual behavior. For applications of mathematical models in econometrics, see* Econometric Models, Aggregate. *Other articles discussing modeling in general include* Cybernetics, Probability, Scaling, Simulation, *and* Simultaneous Equation Estimation. *Specific models are discussed in various articles dealing with substantive topics.*

Theories of behavior that have been developed and presented verbally, such as those of Hull or Tolman or Freud, have attempted to describe and predict behavior under any and all circumstances. Mathematical models of individual behavior, by contrast, have been much less ambitious: their goal has been a precise description of the data obtained from restricted classes of behavioral experiments concerned with simple and discrimination learning; with detection, recognition, and discrimination of simple physical stimuli; with the patterns of preference exhibited among outcomes; and so on. Models that embody very specific mathematical assumptions, which are at best approximations applicable to highly limited situations, have been analyzed exhaustively and applied to every conceivable aspect of available data. From this work broader classes of models, based on weaker assumptions and thus providing more general predictions, have evolved in the past few years. The successes of the special models have stimulated, and their failures have demanded, these generalizations. The number and variety of experiments to which these mathematical models have been applied have also grown, but not as rapidly as the catalogue of models.

Most of the models so far developed are restricted to experiments having discrete trials. Each trial is composed of three types of events: the presentation of a stimulus configuration selected by the experimenter from a limited set of possible presentations; the subject’s selection of a response from a specified set of possible responses; and the experimenter’s feedback of information, rewards, and punishments to the subject. Primarily because the response set is fixed and feedback is used, these are called choice experiments (Bush et al. 1963). Most psychophysical and preference experiments, as well as many learning experiments, are of this type. Among the exceptions are the experiments without trials—e.g., vigilance experiments and the operant conditioning methods of Skinner. Currently, models for these experiments are beginning to be developed.

## Measures

With attention confined to choice experiments, three broad classes of variables necessarily arise—those concerned with stimuli, with responses, and with outcomes. The response variables are, of course, assumed to depend upon the (experimentally) independent stimuli and upon the outcome variables, and each model is nothing more or less than an explicit conjecture about the nature of this dependency. Usually such conjectures are stated in terms of some measures, often numerical ones, that are associated with the variables. Three quite different types of measures are used: physical, probabilistic, and psychological. The first two are objective and descriptive; they can be introduced and used without reference to any psychological theory, and so they are especially popular with atheoretical experimentalists, even though the choice of a measure usually reflects a theoretical attitude about what is and is not psychologically relevant. Although we often use physical measures to characterize the events for which probabilities are defined, this is only a labeling function which makes little or no use of the powerful mathematical structure embodied in many physical measures. The psychological measures are constructs within some specifiable psychological theory, and their calculation in terms of observables is possible only within the terms of that theory. Examples of each type of measure should clarify the meaning.

*Physical measures.* In experimental reports, the stimuli and outcomes are usually described in terms of standard physical measures: intensity, frequency, size, weight, time, chemical composition, amount, etc. Certain standard response measures are physical. The most ubiquitous is response latency (or reaction time), and it has received the attention of some mathematical theorists (McGill 1963). In addition, force of response, magnitude of displacement, speed of running, etc., can sometimes be recorded. Each of these is unique to certain experimental realizations, and so they have not been much studied by theorists.

*Probability measures.* The stimulus presentations, the responses, and the outcomes can each be thought of as a sequence of selections of elements from known sets of elements, i.e., as a schedule over trials. It is not usual to work with the specific schedules that have occurred but, rather, with the probability rules that were used to generate them. For the stimulus presentations and the outcomes, the rules are selected by the experimenter, and so there is no question about what they are. Not only are the rules not known for the responses, but even their general form is not certain. Each response theory is, in fact, a hypothesis about the form of these rules, and certain relative frequencies of responses are used to estimate the postulated conditional response probabilities.

Often the schedules for stimulus presentations are simple random ones in the sense that the probability of a stimulus’ being presented is independent of the trial number and of the previous history of the experiment; but sometimes more complex contingent schedules are used in which various conditional probabilities must be specified. Most outcome schedules are to some degree contingent, usually on the immediately preceding presentation and response, but sometimes the dependencies reach further back into the past. Again, conditional probabilities are the measures used to summarize the schedule. *[See* Probability.]
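The distinction between a simple random presentation schedule and a contingent outcome schedule can be made concrete with a small simulation sketch. The stimulus labels, reward probability, and function names below are hypothetical, chosen only for illustration:

```python
import random

def simple_random_schedule(n_trials, p_s1=0.7, rng=None):
    """Noncontingent presentation schedule: stimulus S1 is presented with
    a fixed probability on every trial, independent of the trial number
    and of the previous history of the experiment."""
    rng = rng or random.Random(0)
    return ["S1" if rng.random() < p_s1 else "S2" for _ in range(n_trials)]

def contingent_outcome(stimulus, response, rng, p_reward_correct=0.8):
    """Contingent outcome schedule: the probability of reward depends on
    the immediately preceding presentation and response."""
    correct = (stimulus == "S1") == (response == "R1")
    p = p_reward_correct if correct else 1.0 - p_reward_correct
    return "reward" if rng.random() < p else "nonreward"

schedule = simple_random_schedule(10_000)
relative_frequency = schedule.count("S1") / len(schedule)  # estimates p_s1
```

The relative frequency of S1 over many trials estimates the (known) presentation probability, just as observed response frequencies are used to estimate the (unknown) conditional response probabilities.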

*Psychological measures.* Most psychological models attempt to state how either a physical measure or a probability measure of the response depends upon measures of the experimental independent variables, but in addition they usually include unknown free parameters—that is, numerical constants whose values are specified neither by the experimental conditions nor by independent measurements on the subject. Such parameters must, therefore, be estimated from the data that have been collected to test the adequacy of the theory, which thereby reduces to some degree the stringency of the test. It is quite common for current psychological models to involve only probability measures and unknown numerical parameters, but not any physical measures. When the numerical parameters are estimated from different sets of data obtained by varying some independent variables under the experimenter’s control, it is often found that the parameters vary with some variables and not with others. In other words, the parameters are actually functions of some of the experimental variables, and so they can be, and often are, viewed as psychological measures (relative to the model within which they appear) of the variables that affect them. Theories are sometimes then provided for this dependence, although so far this has been the exception rather than the rule.

The theory of signal detectability, for example, involves two parameters: the magnitude, *d’*, of the psychological difference between two stimuli; and a response criterion, *c*, which depends upon the outcomes and the presentation schedule. Theories for the dependence of *d’* and *c* upon physical measures have been suggested (Luce 1963; Swets 1964). Most learning theories for experiments with only one presentation simply involve the conditional outcome probabilities and one or more free parameters. Little is known about the dependence of these parameters upon experimentally manipulable variables. In certain scaling theories, numerical parameters are assigned to the response alternatives and are interpreted as measures of response strength (Luce & Galanter 1963). In some models these parameters are factored into two terms, one of which is assumed to measure the contribution of the stimulus to response strength and the other of which is the contribution due to the outcome structure.
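The estimation of *d’* and *c* from observed hit and false-alarm rates can be sketched as follows, under the standard equal-variance Gaussian assumptions of the theory; the function name and the sample rates are illustrative:

```python
from statistics import NormalDist

def detection_indices(hit_rate, false_alarm_rate):
    """Estimate the sensitivity d' and the response criterion c from the
    hit and false-alarm rates of a yes/no detection experiment, assuming
    equal-variance Gaussian distributions of the internal effect."""
    z = NormalDist().inv_cdf          # inverse of the standard normal cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

d, c = detection_indices(0.84, 0.16)  # symmetric hit/false-alarm rates
```

With symmetric rates the estimated criterion is zero (no response bias), while *d’* measures the separation of the two stimulus distributions.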

The phrasing of psychological models in terms only of probability measures and parameters (psychological measures) has proved to be an effective research strategy. Nonetheless, it appears important to devise theories that relate psychological measures to the physical and probability measures that describe the experiments. The most extensive mathematical models of this type can be found in audition and vision (Hurvich et al. 1965; Zwislocki 1965). The various theories of utility are, in part, attempts to relate the psychological measure called utility to physical measures of outcomes, such as amounts of money, and probability measures of their schedules, such as probabilities governing gambles (Luce & Suppes 1965). Although the utilities of outcomes must clearly be related to learning parameters, little is known about this relation. [*See* Gambling; Game Theory; Utility.]

## The nature of the models

The construction of a mathematical model involves decisions on at least two levels. There is, first, the over-all perspective about what is and is not important and about the best way to secure the relevant facts. Usually this is little discussed in the presentation of a model, mainly because it is so difficult to make the discussion coherent and convincing. Nonetheless, this is what we shall attempt to deal with in this section. In the following section we turn to the second level of decision: the specific assumptions made.

*Probability vs. determinism.* One of the most basic decisions is whether to treat the behavior as if it arises from some sort of probabilistic mechanism, in which case detailed, exact predictions are not possible, or whether to treat it as deterministic, in which case each specific response is susceptible to exact prediction. If the latter decision is made, one is forced to provide some account of the observed inconsistencies of responses before it is possible to test the adequacy of the model. Usually one falls back on either the idea of errors of measurement or on the idea of systematic changes with time (or experience), but in practice it has not been easy to make effective use of either idea, and most workers have been content to develop probability models. It should be pointed out that, as far as the model is concerned, it is immaterial whether the model builder believes the behavior to be inherently probabilistic, or its determinants to be too complex to give a detailed analysis, or that there are uncontrolled factors which lead to experimental errors.

*Static vs. dynamic models.* A second decision is whether the model shall be dynamic or static. (We use these terms in the way they are used in physics; static models characterize systems which do not change with time or systems which have reached equilibrium in time, whereas dynamic models are concerned with time changes.) Some dynamic models, especially those for learning, state how conditional response probabilities change with experience. Usually these models are not very helpful in telling us what would happen if, for example, we substituted a different but closely related set of response alternatives or outcomes. In static models the constraints embodied in the model concern the relations among response probabilities in several different, but related, choice situations. The utility models for the study of preference are typical of this class.

The main characteristic of the existing dynamic models is that the probabilities are functions of a discrete time parameter. Such processes are called stochastic, and they can be thought of as generating branching processes through the fanning out of new possibilities on each trial (Snell 1965). Each individual in an experiment traces out one path of the over-all tree, and we attempt to infer from a small but, it is hoped, typical sample of these paths something about the probabilities that supposedly underlie the process. Usually, if enough time is allowed to pass, such a process settles down—becomes asymptotic—in a statistical sense. This is one way to arrive at a static model; and when we state a static model, we implicitly assume that it describes (approximately) the asymptotic behavior of the (unknown) dynamic process governing the organisms.

*Psychological vs. mathematical assumptions.* Another distinction is that between psychological and formal mathematical assumptions. This is by no means a sharp one, if for no other reason than that the psychological assumptions of a mathematical model are ultimately cast in formal terms and that psychological rationales can always be evolved for formal axioms. Roughly, however, the distinction is between a structure built up from elementary principles and a postulated constraint concerning observable behavior. Perhaps the simplest example of the latter is the axiom of transitivity of preferences: if *a* is preferred to *b* and *b* is preferred to *c*, then *a* will be preferred to *c*. This is not usually derived from more basic psychological postulates but, rather, is simply asserted on the grounds that it is (approximately) true in fact. A somewhat more complex, but essentially similar, example is the so-called choice axiom which postulates how choice probabilities change when the set of possible choices is either reduced or augmented (Luce 1959). Again, no rationale was originally given except plausibility; later, psychological mechanisms were proposed from which it derives as a consequence.

The most familiar example of a mathematical model which is generally viewed as more psychological and less formal is stimulus sampling theory. In this theory it is supposed that an organism is exposed to a set of stimulus “elements” from which one or more are sampled on a trial and that these elements may become “conditioned” to the performed response, depending upon the outcome that follows the response (Atkinson & Estes 1963). The concepts of sampling and conditioning are interpreted as elementary psychological processes from which the observed properties of the choice behavior are to be derived. Lying somewhere between the two extremes just cited are, for example, the linear operator learning models (Bush & Mosteller 1955; Sternberg 1963). The trial-by-trial changes in response probabilities are assumed to be linear, mainly because of certain formal considerations; the choice of the limit points of the operators in specific applications is, however, usually based upon psychological considerations; and the resulting mathematical structure is not evaluated directly but, rather, in terms of its ability to account for the observed choice behavior as summarized in such observables as the mean learning curve, the sequential dependencies among responses, and the like.
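The trial-by-trial change assumed by the linear operator models can be sketched in a few lines; the rate and limit values below are illustrative free parameters, not estimates from any data:

```python
def linear_operator(p, rate, limit):
    """One trial of a linear operator model: the response probability
    moves the fraction `rate` of the remaining distance toward the
    operator's limit point."""
    return p + rate * (limit - p)

# Reward pushes p toward a limit point of 1, nonreward toward 0; the
# limit points are chosen on psychological grounds, while the rates
# remain free parameters to be estimated from data.
p = 0.5
for _ in range(20):
    p = linear_operator(p, rate=0.2, limit=1.0)
```

Repeated application of a single operator drives the probability geometrically toward its limit point, which is what produces the familiar negatively accelerated learning curve.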

## Recurrent theoretical themes

Beyond a doubt, the most recurrent theme in models is independence. Indeed, one can fairly doubt whether a serious theory exists if it does not include statements to the effect that certain measures which contribute to the response are in some way independent of other measures which contribute to the same response. Of course, independence assumes different mathematical forms and therefore has different names, depending upon the problem, but one should not lose sight of the common underlying intuition which, in a sense, may be simply equivalent to what we mean when we say that a model helps to simplify and to provide understanding of some behavior.

*Statistical independence.* In quite a few models simple statistical independence is invoked. For example, two chance events, *A* and *B*, are said to be independent when the conditional probability of *A*, given *B*, is equal to the unconditional probability of *A*; equivalently, the probability of the joint event *AB* is the product of the separate probabilities of *A* and *B*.

A very simple substantive use of this notion is contained in the choice axiom which says, in effect, that altering the membership of a choice set does not affect the relative probabilities of choice of two alternatives (Luce 1959). More complex notions of independence are invoked whenever the behavior is assumed to be described by a stochastic process. Each such process states that some, but not all, of the past is relevant in understanding the future: some probabilities are independent of some earlier events. For example, in the “operator models” of learning, it is assumed that the process is “path independent” in the sense that it is sufficient to know the existing choice probability and what has happened on that trial in order to calculate the choice probability on the next trial (Bush & Mosteller 1955). In the “Markovian” learning models, the organism is always in one of a finite number of states which control the choice probabilities, and the probabilities of transition from one state to another are independent of time, i.e., trials (Atkinson & Estes 1963). Again, the major assumption of the model is a rather strong one about independence of past history. [*See* Markov Chains.]
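The independence asserted by the choice axiom can be illustrated with a short sketch, assuming the common representation in which choice probabilities are proportional to response strengths; the strength values are arbitrary:

```python
def choice_prob(strengths, option):
    """Luce's choice rule: the probability of choosing an alternative is
    proportional to its response strength within the offered set."""
    return strengths[option] / sum(strengths.values())

full_set = {"a": 2.0, "b": 1.0, "c": 3.0}
reduced_set = {"a": 2.0, "b": 1.0}        # alternative c removed

ratio_full = choice_prob(full_set, "a") / choice_prob(full_set, "b")
ratio_reduced = choice_prob(reduced_set, "a") / choice_prob(reduced_set, "b")
```

Removing alternative c changes the absolute choice probabilities of a and b, but their ratio is unchanged, which is the content of the axiom.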

*Additivity and linearity*. Still another form of independence is known as additivity. If *r* is a response measure that depends upon two different variables assuming values in sets *A*_{1} and *A*_{2}, then we say that the measure is additive (over the independent variables) if there exists a numerical measure *r*_{1} on *A*_{1} and *r*_{2} on *A*_{2} such that for *x*_{1} in *A*_{1} and *x*_{2} in *A*_{2}, *r*(*x*_{1}, *x*_{2}) = *r*_{1}(*x*_{1}) + *r*_{2}(*x*_{2}). This assumption for particular experimental measures *r* is frequently postulated in the models of analysis of variance as well as derived from certain theories of fundamental measurement. A special case of additivity known as linearity is very important. Here there is but one variable (that is, *A*_{1} = *A*_{2} = *A*); any two values of that variable, *x* and *x’* in *A*, combine through some physical operation to form a third value of that variable, denoted *x* * *x’*, and there is a single measure *r* on *A* (that is, *r*_{1} = *r*_{2} = *r*) such that *r*(*x* * *x*’) = *r*(*x*) + *r*(*x*’). Such a requirement captures the superposition principle and leads to models of a very simple sort. These linear models have played an especially important role in the study of learning, where it is postulated that the choice probability on one trial, *p*_{n}, can be expressed linearly in terms of the probability, *p*_{n-1}, on the preceding trial. Other models also postulate linear transformations, but not necessarily on the response probability itself. In the “beta” model, the quantity *p*_{n}/(1 − *p*_{n}) is assumed to be transformed linearly; this quantity is interpreted as a measure of response strength (Luce 1959).

*Commutativity.* The “beta” model exhibits another property that is of considerable importance, namely, commutativity. The essence of commutativity is that the order in which the operators are applied does not matter; that is, if *A* and *B* are operators, then the composite operator *AB* (apply *B* first and then *A*) is the same as the operator *BA*. Again, there is a notion of independence—independence of the order of application. It is an extremely powerful property that permits one to derive a considerable number of properties of the resulting process; however, it is generally viewed with suspicion, since it requires the distant past to have exactly the same effect as the recent past. A commutative model fails to forget gradually.
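The contrast can be made concrete with a short sketch: two linear operators with different limit points fail to commute, whereas operators that act multiplicatively on a response-strength measure, as in the “beta” model, commute exactly. All numerical values below are illustrative:

```python
def linear_op(p, rate, limit):
    """Linear learning operator on a response probability."""
    return p + rate * (limit - p)

# Two linear operators with different limit points: order matters.
a_then_b = linear_op(linear_op(0.5, 0.2, 1.0), 0.3, 0.0)  # reward, then nonreward
b_then_a = linear_op(linear_op(0.5, 0.3, 0.0), 0.2, 1.0)  # nonreward, then reward

# Beta-model operators act multiplicatively on the response strength
# v = p / (1 - p), so the order of application is immaterial.
def beta_op(v, factor):
    return v * factor

v_ab = beta_op(beta_op(2.0, 1.5), 0.4)
v_ba = beta_op(beta_op(2.0, 0.4), 1.5)
```

The noncommuting linear operators yield different terminal probabilities depending on the order of reward and nonreward; the multiplicative operators do not, which is precisely why a commutative model cannot weight the recent past more heavily than the distant past.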

## Nature of the predictions

As would be expected, models are used to make a variety of predictions. Perhaps the most general sorts of predictions involve broad classes of models. For example, a certain class of distance-diminishing models (ones that require the behavior of two subjects to become increasingly similar when they are identically reinforced) can be shown to be ergodic under probabilistic reinforcement schedules, which means that these models exhibit the asymptotic properties that are commonly taken for granted. A second example is the combining-of-classes theorem, which asserts that if the theoretical descriptions of behavior are to be independent of the grouping of responses into classes, then only the linear learning models are appropriate.

At a somewhat more detailed level, but still encompassing several different models, are predictions such as the mean learning curve, response operating characteristics, and stochastic transitivity of successive choices among pairs of alternatives. Sometimes it is not realized that conceptually quite different models, which make some radically different predictions, may nonetheless agree completely on other features of the data, often on ones that are ordinarily reported in experimental studies. Perhaps the best example of this phenomenon arises in the analysis of experiments in which subjects learn arbitrary associations between verbal stimuli and responses. A linear incremental model, of the sort described above, predicts exactly the same mean learning curve as does a model that postulates that the arbitrary association is acquired on an all-or-none basis. On the face of it, this result seems paradoxical. It is not, because in the latter model, different subjects acquire the association on different trials, and averaging over subjects thereby leads to a smooth mean curve that happens to be identical with the one predicted by the linear model. Actually, a wide variety of models predict the same mean learning curve for many probabilistic schedules of reinforcement, and so one must turn to finer-grained features of the data to distinguish among the models. Among these differential predictions are the distribution of runs of the same response, the expected number of such runs, the variance of the number of successes in a fixed block of trials, the mean number of total errors, the mean trial of last error, etc. [*See* Statistical Identifiability.]
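The agreement of the two mean learning curves can be checked by simulation. Assuming the all-or-none model's per-trial acquisition probability c is set equal to the linear model's learning rate, the averaged all-or-none error curve approximates the linear model's exact curve g(1 − c)^n, where g is the guessing error probability; parameter values here are illustrative:

```python
import random

def all_or_none_error_curve(n_subjects, n_trials, c=0.25, g=0.5, seed=0):
    """Monte Carlo mean error curve for the all-or-none model: each
    subject guesses (error probability g) until conditioning, which
    occurs with probability c after each trial; errors then cease."""
    rng = random.Random(seed)
    totals = [0.0] * n_trials
    for _ in range(n_subjects):
        learned = False
        for t in range(n_trials):
            if not learned:
                if rng.random() < g:
                    totals[t] += 1
                if rng.random() < c:
                    learned = True
    return [x / n_subjects for x in totals]

# The linear incremental model with rate c predicts the smooth curve
# g * (1 - c)**n exactly, with no averaging needed.
linear_curve = [0.5 * 0.75 ** t for t in range(10)]
mean_curve = all_or_none_error_curve(20_000, 10)
max_gap = max(abs(a - b) for a, b in zip(mean_curve, linear_curve))
```

Although individual all-or-none subjects show flat curves followed by a discontinuity, the group average is smooth and coincides with the linear model's prediction, so finer-grained statistics are needed to tell the models apart.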

The classical topic of individual differences raises issues of a different sort. For the kinds of predictions discussed above it is customary to pool individual data and to analyze them as if they were entirely homogeneous. Often, in treating learning data this way, it is argued that the structural conditions of the experiment are sufficiently more important determinants of behavior than are individual differences so that the latter may be ignored without serious distortion. For many experiments to which models have been applied with considerable success, simple tests of this hypothesis of homogeneity are not easily made. For example, when a group of 30 or 40 subjects is run on 12 to 15 paired-associate items, it is not useful to analyze the data of each subject on each item because of the large relative variability which accompanies a small number of observations. On the other hand, in some psychophysical experiments in which each subject is run for thousands of trials under constant conditions of presentation and reinforcement, it is possible to treat in detail the data of individuals. The final justification for using group data, on the assumption of identical subjects, is the fact that for ergodic processes, which most models are, the predictions for data averaged over subjects are the same as those for the data of an individual averaged over trials.

Another issue, which relates to group versus individual data, is parameter invariance. One way of asking if a group of individuals is homogeneous is to ask whether, within sampling error, the parameters for individuals are identical. Thus far, however, more experimental attention has been devoted to the question of parameter invariance for sets of group data collected under different experimental conditions. For instance, the parameters of most learning models should be independent of the particular reinforcement schedule adopted by the experimenter. Although in many cases a reasonable degree of parameter invariance has been obtained for different schedules, it is fair to say that the results have not been wholly satisfactory.

For a detailed discussion of the topics of this section, see Sternberg (1963) and Atkinson and Estes (1963).

## Model testing

Most of the mathematical models used to analyze psychological data require that at least one parameter, and often more, be estimated from the data before the adequacy of the model can be evaluated. In principle, it might be desirable to use maximum-likelihood methods for estimation. Perhaps the central difficulty which prevents our using such estimators is that the observable random variables, such as the presentation, response, and outcome random variables, form chains of infinite order. This means that their probabilities on any trial depend on what actually happened in all preceding trials. When that is so, it is almost always impractical to obtain a useful maximum-likelihood estimator of a parameter. In the face of such difficulties, less desirable methods of estimation have perforce been used. Theoretical expressions showing the dependency on the unknown parameter of, for example, the mean number of total errors, the mean trial of first success, and the mean number of runs, have been equated to data statistics to estimate the parameters. The classical methods of moments and of least squares have sometimes been applied successfully. And, in certain cases, maximum-likelihood estimators can be approximated by pseudo-maximum-likelihood ones that use only a limited portion of the immediate past. For processes that are approximately stationary, a small part of the past sometimes provides a very good approximation to the full chain of infinite order, and then pseudo-maximum-likelihood estimates can be good approximations to the exact ones. Because of mathematical complexities in applying even these simplified techniques, Monte Carlo and other numerical methods are frequently used. [*See* Estimation.]
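A minimal sketch of the method of moments in this setting: for the simple all-or-none model the expected number of total errors is g/c, so equating that theoretical expression to the observed mean number of errors yields an estimator of the conditioning parameter c when the guessing parameter g is known. All numerical values are illustrative:

```python
import random

def total_errors(c, g, rng):
    """Total errors of one all-or-none subject: an error occurs with
    probability g on each trial until conditioning, which occurs with
    probability c after each trial and ends all errors."""
    errors = 0
    while True:
        if rng.random() < g:
            errors += 1
        if rng.random() < c:
            return errors

rng = random.Random(2)
data = [total_errors(0.2, 0.5, rng) for _ in range(10_000)]
mean_errors = sum(data) / len(data)   # theory: E[total errors] = g / c
c_hat = 0.5 / mean_errors             # moment estimator of c, with g known
```

Equating a theoretical moment to its sample counterpart sidesteps the intractable likelihood of the full chain, at the cost of some statistical efficiency.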

Once the parameters have been estimated, the number of predictions that can be derived is, in principle, enormous: the values of the parameters of the model, together with the initial conditions and the outcome schedule, uniquely determine the probability of all possible combinations of events. In a sense, the investigator is faced with a plethora of riches, and his problem is to decide what predictions are the most significant from the standpoint of providing telling tests of a model. In more classical statistical terms, what can be said about the goodness of fit of the model?

Just as with estimation, it might be desirable to evaluate goodness of fit by a likelihood ratio test. But, a fortiori, this is not practical when maximum-likelihood estimators themselves are not feasible. Rather, a combination of minimum chi-square techniques for both estimation and testing goodness of fit has come to be widely used in recent years. No single statistic, however, serves as a satisfactory over-all evaluation of a model, and so the report usually summarizes its successes and failures on a rather extensive list of measures of fit.

A model is never rejected outright because it does not fit a particular set of data, but it may disappear from the scene or be rejected in favor of another model that fits the data more adequately. Thus, the classical statistical procedure of accepting or rejecting a hypothesis—or model—is in fact seldom directly invoked in research on mathematical models; rather, the strong and weak points of the model are brought out, and new models are sought that do not have the discovered weaknesses. [*See* Goodness Of Fit; more detail on these topics can be found in Bush 1963].

## Impact on psychology

Although the study of mathematical models has come to be a subject in its own right within psychology, it is also pertinent to ask in what ways their development has had an impact on general experimental psychology.

For one, it has almost certainly raised the standards of systematic experimentation: the application of a model to data prompts a number of detailed questions frequently ignored in the past. A model permits one to squeeze more information out of the data than is done by the classical technique of comparing experimental and control groups and rejecting the null hypothesis whenever the difference between the two groups is sufficiently large. A successful test of a mathematical model often requires much larger experiments than has been customary. It is no longer unusual for a quantitative experiment to consist of 100,000 responses and an equal number of outcomes. In addition to these methodological effects on experimentation and on data analysis, there have been substantive ones. Of these we mention a few of the more salient ones.

*Probability matching.* A well-known finding, which dates back to Humphreys (1939), is that of probability matching. If either one of two responses is rewarded on each trial, then in many situations organisms tend to respond with probabilities equal to the reward probabilities rather than to choose the more often rewarded response almost all of the time. Since Humphreys’ original experiment, many similar ones have been performed on both human and animal subjects to discover the extent and nature of the phenomenon, and a great deal of effort has been expended on theoretical analyses of the results. Estes (1964) has given an extensive review of both the experimental and the theoretical literature. Perhaps the most important contribution of mathematical models to this problem was to provide sets of simple general assumptions about behavior which, coupled with the specification of the experimenter’s schedule of outcomes, predict probability matching. As noted above, investigators have not been content with just predicting the mean asymptotic values but have dealt in detail with the relation between predicted and observed conditional expectations, run distributions, variances, etc. Although this experimental paradigm for probability learning did not originate in mathematical psychology, its thorough exploration and the resulting interpretations of the learning process have been strongly promoted by the many predictions made possible by models for this paradigm.
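How a linear model, coupled with a noncontingent reward schedule, predicts matching of the asymptotic mean response probability to the reward probability can be sketched as follows; the parameter values are illustrative:

```python
import random

def probability_learning(pi=0.7, theta=0.1, n_trials=5_000, seed=3):
    """Linear-model account of a probability-learning experiment: on
    each trial the reinforcing event E1 occurs with probability pi,
    pushing the response probability toward 1; otherwise it is pushed
    toward 0. Returns the mean response probability over the second
    half of the trials as an estimate of the asymptote."""
    rng = random.Random(seed)
    p, tail = 0.5, []
    for t in range(n_trials):
        limit = 1.0 if rng.random() < pi else 0.0
        p = p + theta * (limit - p)
        if t >= n_trials // 2:
            tail.append(p)
    return sum(tail) / len(tail)

asymptote = probability_learning()
```

The asymptotic mean response probability settles at the reward probability pi rather than at 1, which is the probability-matching prediction; maximizing would instead require choosing the more often rewarded response nearly always.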

*The all-or-none model.* A second substantive issue to which a number of investigators have addressed mathematical models is whether or not simple learning is of an all-or-none character. As noted earlier, the linear model assumes learning to be incremental in the sense that whenever a stimulus is presented, a response made, and an outcome given, the association reinforced by the outcome is thereby made somewhat more likely to occur. In contrast, the simple all-or-none model postulates that the subject is either completely conditioned to make the correct response, or he is not so conditioned. No intermediate states exist, and until the correct conditioning association is established on an all-or-none basis, his responses are determined by a constant guessing probability. This means that learning curves for individual subjects are flat until conditioning occurs, at which point they exhibit a strong discontinuity. The problem of discriminating the two models must be approached with some care since, for instance, the mean learning curve obtained by averaging data over subjects, or over subjects and a list of items as well, is much the same for the two models. On the other hand, analyses of such statistics as the variance of total errors, the probability of an error before the last error, and the distribution of last errors exhibit sharp differences between the models. For paired-associates learning, the all-or-none model is definitely more adequate than the linear incremental model (Atkinson & Estes 1963). Of course, the issue of all-or-none versus incremental learning is not special to mathematical psychology; however, the application of formal models has raised detailed questions of data analysis and posed additional theoretical problems not raised, let alone answered, by previous approaches to the problem.

*Reward and punishment.* The classic psychological question of the relative effects of reward and punishment (or nonreward) has also arisen in work on models, and it has been partially answered. In some models, such as the linear one, there are two rate parameters, one of which represents the effect of reward on a single trial and the other of which represents the effect of nonreward. Their estimated values provide comparable measures of the effects of these two events for those data from which they are estimated. For example, Bush and Mosteller (1955) found that a trial on which a dog avoided shock (reward) in an avoidance training experiment produced about the same change in response probabilities as three trials of nonavoidance (punishment). No general law has emerged, however. The relative effects of reward and nonreward seem to vary from one experiment to another and to depend on a number of experimental variables.
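The kind of comparison Bush and Mosteller drew can be expressed as a simple computation (a sketch assuming multiplicative operators of the sort used in their model; the α values below are hypothetical illustrations, not their estimates). If a reward trial multiplies the error probability by α_r and a nonreward trial multiplies it by α_n, then one reward trial has the same effect as k nonreward trials when α_nᵏ = α_r:

```python
import math

def equivalent_trials(alpha_reward, alpha_nonreward):
    """Number of nonreward trials whose combined multiplicative effect
    on the error probability equals one reward trial:
    alpha_nonreward ** k == alpha_reward,  so  k = ln a_r / ln a_n."""
    return math.log(alpha_reward) / math.log(alpha_nonreward)
```

For instance, with hypothetical operators α_r = 0.512 and α_n = 0.8, one reward trial is worth three nonreward trials, the ratio of the kind reported for the avoidance-training data.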

When using a model to estimate the relative effects of different events, the results must be interpreted with some care. The measures are meaningful only in terms of the model in which they are defined. A different model with corresponding reward and nonreward parameters may lead to the opposite conclusion. Thus, one must decide which model best accounts for the data and use it for measuring the relative effects of the two events. Very delicate issues of parameter estimation arise, and examples exist where opposite conclusions have been drawn, depending on the estimators used. The alternative is to devise more nonparametric methods of inference which make weaker assumptions about the learning process. A detailed discussion of these problems is given by Sternberg (1963, pp. 109–116). [*See* Learning, *article on* Reinforcement.]

*Homogenizing a group.* If one wishes to obtain a homogeneous group of subjects after a particular experimental treatment, should all subjects be run for a fixed number of trials, or should each subject be run until he meets a specific performance criterion? Typically it is assumed by those who use such a criterion that individual subjects differ; that, for example, some are fast learners and some are slow. It is further assumed that all subjects will achieve the same performance level if each is run to a criterion such as ten successive successes. Now it is clear that for identical subjects, it is simpler to run them all for the same number of trials and perhaps use a group performance criterion. It is, however, less obvious whether it would be better to do this than to run each to a criterion. An analysis of stochastic learning models has shown that running each of identical subjects to a criterion introduces appreciable variance in the terminal performance levels. One can study individual differences only in terms of a model and assumptions about the distributions of the model parameters. When this is done, it becomes evident that very large individual differences must exist to justify using the criterion method of homogenizing a group of subjects.
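The variance introduced by a criterion rule is easy to exhibit in a simulation (an illustrative sketch with hypothetical parameters). Below, identical learners whose success probability grows deterministically by p ← p + θ(1−p) on every trial are each run until five consecutive successes. Because the stopping trial is random, their terminal performance levels spread out, whereas a fixed-trials rule would leave these identical subjects all at exactly the same level:

```python
import random, statistics

def run_to_criterion(theta=0.1, p0=0.2, crit=5, rng=random):
    """Identical learner: success probability p grows deterministically
    each trial; stop after `crit` successes in a row.  Returns the
    terminal p, i.e., the performance level at the moment of stopping."""
    p, streak = p0, 0
    while streak < crit:
        streak = streak + 1 if rng.random() < p else 0
        p += theta * (1.0 - p)
    return p

rng = random.Random(7)
terminal = [run_to_criterion(rng=rng) for _ in range(1000)]
# Under a fixed-trials rule every one of these identical subjects would
# stop at the same p; under the criterion rule the terminal levels vary:
spread = statistics.stdev(terminal)
```

The nonzero spread among literally identical subjects is the point: observed terminal variability under a criterion rule cannot, by itself, be read as evidence of individual differences.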

*Psychophysics.* The final example is selected from psychophysics. With the advent of signal detection theory it became increasingly apparent that the classical methods for measuring sensory thresholds are inherently ambiguous, that they depend not only, as they are supposed to, on sensitivity but also on response biases (Luce 1963; Swets 1964). Consider a detection experiment in which the stimulus is presented only on a proportion π of the trials. Let *p*(*Yǀs*) and *p*(*Yǀn*) be the probabilities of a “Yes” response to the stimulus and to no stimulus respectively. If the experiment is run several times with different values of π between 0 and 1, then *p*(*Yǀn*), as well as *p*(*Yǀs*), which is a classical threshold measure, varies systematically from 0 to 1. The data points appear to fall on a smooth, convex curve, which shows the relation, for the subject, between correct responses to stimuli and incorrect responses to no-stimulus trials (false alarms). Its curvature, in effect, characterizes the subject’s sensitivity, and the location of the data point along the curve represents the amount of bias, i.e., his over-all tendency to say “Yes,” which varies with π, with the payoffs used, and with instructions. Several conceptually different theories, which are currently being tested, account for such curves; it is clear that any new theory will be seriously entertained only if it admits to some such partition of the response behavior into sensory and bias components. This point of view is, of course, applicable to any two-stimulus–two-response experiment, and often it alters significantly the qualitative interpretation of data. [*See* Attention; Psychophysics.]
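One theory that generates such convex curves is the equal-variance Gaussian model of signal detection theory (a sketch below, with hypothetical d′ and criterion values). The observation is normally distributed, with mean 0 on noise trials and mean d′ on signal trials; the subject says “Yes” when the observation exceeds a criterion. Sweeping the criterion (as π, payoffs, or instructions change the bias) traces the curve, while d′ fixes its curvature:

```python
from statistics import NormalDist

def roc_point(d_prime, criterion):
    """Equal-variance Gaussian detection model: noise ~ N(0, 1),
    signal ~ N(d', 1); respond 'Yes' when the observation exceeds
    the criterion.  Returns (false-alarm rate, hit rate)."""
    n = NormalDist()
    p_y_n = 1.0 - n.cdf(criterion)            # p(Y | n), false alarms
    p_y_s = 1.0 - n.cdf(criterion - d_prime)  # p(Y | s), hits
    return p_y_n, p_y_s

# Sweeping the criterion traces the convex curve for a fixed d' = 1:
curve = [roc_point(1.0, c) for c in (2.0, 1.0, 0.5, 0.0, -1.0)]
```

Each point on the curve has a hit rate exceeding its false-alarm rate (for d′ > 0), and lowering the criterion moves the point up along the curve, which is exactly the separation of sensitivity from bias described above.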

Although one cannot be certain about what will happen next in the application of mathematical models to problems of individual behavior, certain trends seem clear. (1) The ties that have been established between mathematical theorists and experimentalists appear firm and productive; they probably will be strengthened. (2) The general level of mathematical sophistication in psychology can be expected to increase in response to the increasing numbers of experimental studies that stem from mathematical theories. (3) The major applications will continue to center around well-defined psychological issues for which there are accepted experimental paradigms and a considerable body of data. One relatively untapped area is operant (instrumental) conditioning. (4) Along with models for explicit paradigms, abstract principles (axioms) of behavior that have wide potential applicability are being isolated and refined, and attempts are being made to explore general qualitative properties of whole classes of models. (5) Even though the most successful models to date are probabilistic, the analysis of symbolic and conceptual processes seems better handled by other mathematical techniques, and so more nonprobabilistic models can be anticipated.

Robert R. Bush, R. Duncan Luce, and Patrick Suppes

[*See also* Decision Making, *article on* Psychological Aspects; Simulation, *article on* Individual Behavior. *Other relevant material may be found in* Attention; Learning; Mathematics; Probability; Psychometrics; Psychophysics; Scaling.]

## BIBLIOGRAPHY

Atkinson, Richard C.; and Estes, William K. 1963 Stimulus Sampling Theory. Volume 2, pages 121–268 in R. Duncan Luce, Robert R. Bush, and Eugene Galanter (editors), *Handbook of Mathematical Psychology.* New York: Wiley.

Bush, Robert R. 1963 Estimation and Evaluation. Volume 1, pages 429–469 in R. Duncan Luce, Robert R. Bush, and Eugene Galanter (editors), *Handbook of Mathematical Psychology.* New York: Wiley.

Bush, Robert R.; Galanter, Eugene; and Luce, R. Duncan 1963 Characterization and Classification of Choice Experiments. Volume 1, pages 77–102 in R. Duncan Luce, Robert R. Bush, and Eugene Galanter (editors), *Handbook of Mathematical Psychology.* New York: Wiley.

Bush, Robert R.; and Mosteller, Frederick 1955 *Stochastic Models for Learning.* New York: Wiley.

Estes, William K. 1964 Probability Learning. Pages 89–128 in Symposium on the Psychology of Human Learning, University of Michigan, 1962, *Categories of Human Learning.* Edited by Arthur W. Melton. New York: Academic Press.

Humphreys, Lloyd G. 1939 Acquisition and Extinction of Verbal Expectations in a Situation Analogous to Conditioning. *Journal of Experimental Psychology* 25: 294–301.

Hurvich, Leo M.; Jameson, Dorothea; and Krantz, David H. 1965 Theoretical Treatments of Selected Visual Problems. Volume 3, pages 99–160 in R. Duncan Luce, Robert R. Bush, and Eugene Galanter (editors), *Handbook of Mathematical Psychology.* New York: Wiley.

Luce, R. Duncan 1959 *Individual Choice Behavior.* New York: Wiley.

Luce, R. Duncan 1963 Detection and Recognition. Volume 1, pages 103–190 in R. Duncan Luce, Robert R. Bush, and Eugene Galanter (editors), *Handbook of Mathematical Psychology.* New York: Wiley.

Luce, R. Duncan; and Galanter, Eugene 1963 Psychophysical Scaling. Volume 1, pages 245–308 in R. Duncan Luce, Robert R. Bush, and Eugene Galanter (editors), *Handbook of Mathematical Psychology.* New York: Wiley.

Luce, R. Duncan; and Suppes, Patrick 1965 Preference, Utility, and Subjective Probability. Volume 3, pages 249–410 in R. Duncan Luce, Robert R. Bush, and Eugene Galanter (editors), *Handbook of Mathematical Psychology.* New York: Wiley.

McGill, William J. 1963 Stochastic Latency Mechanisms. Volume 1, pages 309–360 in R. Duncan Luce, Robert R. Bush, and Eugene Galanter (editors), *Handbook of Mathematical Psychology.* New York: Wiley.

Snell, J. Laurie 1965 Stochastic Processes. Volume 3, pages 411–486 in R. Duncan Luce, Robert R. Bush, and Eugene Galanter (editors), *Handbook of Mathematical Psychology.* New York: Wiley.

Sternberg, Saul 1963 Stochastic Learning Theory. Volume 2, pages 1–120 in R. Duncan Luce, Robert R. Bush, and Eugene Galanter (editors), *Handbook of Mathematical Psychology.* New York: Wiley.

Swets, John A. (editor) 1964 *Signal Detection and Recognition by Human Observers: Contemporary Readings.* New York: Wiley.

Zwislocki, Jozef 1965 Analysis of Some Auditory Characteristics. Volume 3, pages 1–98 in R. Duncan Luce, Robert R. Bush, and Eugene Galanter (editors), *Handbook of Mathematical Psychology.* New York: Wiley.