Learning


In addition to the general articles under this heading, broad fields of learning phenomena are reviewed in Forgetting; Imitation; Imprinting; Thinking. More specific concepts relevant to learning are discussed in Concept formation; Drives; Fatigue; Intelligence and intelligence testing; Motivation; Problem solving; Reasoning and logic. The role of learning in personal development is discussed in Developmental psychology; Intellectual development; Language, article on language development; Moral development; Sensory and motor development; Socialization. The role of learning in society is treated in Adult education; Education; Educational psychology; Intellectuals; Knowledge, sociology of; Literacy; Teaching; Universities. The importance of learning is also emphasized in Mental disorders, treatment of. Theories of learning are discussed in Gestalt theory; Information theory; Learning theory; Models, mathematical. Some applications of learning are discussed in Brainwashing; Communication, mass; Communication, political; Persuasion; Propaganda. The measurement of learning is discussed in Achievement testing; Intelligence and intelligence testing; Response sets. Of direct relevance to learning are the biographies of Bekhterev; Guthrie; Hull; Montessori; Pavlov; Sechenov; Tolman; Watson.

I. Introduction, by Gregory Kimble
II. Classical Conditioning, by W. J. Brogden
III. Instrumental Learning, by Lawrence Casler
IV. Reinforcement, by Stanley S. Pliskoff and Charles B. Ferster
V. Discrimination Learning, by Douglas H. Lawrence
VI. Avoidance Learning, by Richard L. Solomon
VII. Neurophysiological Aspects, by Robert A. McCleary
VIII. Verbal Learning, by Leo Postman
IX. Transfer, by Robert M. Gagné
X. Acquisition of Skill, by Edward A. Bilodeau
XI. Learning in Children, by Lewis P. Lipsitt
XII. Programmed Learning, by Russell W. Burris

I INTRODUCTION

Learning has been defined (Kimble 1961) as a relatively permanent change in a behavioral tendency, which occurs as a result of reinforced practice. One purpose of this definition, as with any definition, is to delimit as precisely as possible a particular realm of discourse. Thus, a word or two appears to be in order with respect to topics that this definition specifically includes and excludes.

Changes in behavior occur as a result of many processes, not all of which are forms of learning. The above definition succeeds in eliminating most if not all of these changes. Behavioral changes occurring as a result of maturation are ruled out by the requirement of dependence upon practice. Changes resulting from motivational fluctuations are temporary and are eliminated by the reference to a permanent change. Changes in behavior that come under the heading of forgetting and experimental extinction are excluded by the reference to reinforced practice. Learning necessitates the appropriate use of reward or punishment. If these operations, collectively called reinforcement, are omitted, learning disappears; experimental extinction or “unlearning” takes place. The reference to reinforced practice is necessary to exclude such changes from the definition of learning.

Turning now to matters that the definition does not exclude, it should be noted that the definition says nothing about the kinds of behavioral changes that qualify as learning. There is, for example, no suggestion that learning always leads to an improvement in behavior: bad habits as well as good habits are encompassed by the definition. Similarly, acquired motives, attitudes, and values come within the scope of the definition as easily as do changes in language habits and motor skills. Finally, the use of the term “tendency” allows the definition to cover cases in which the products of learning do not immediately appear in performance. In this way the definition covers the numerous cases in which an individual learns something that may not be put to practical use for years. Someone might learn as a boy scout that moss typically grows on the north side of trees and that this information may be used to find one’s way out of a forest when lost, but not actually perform any responses based on this information until much later, if ever. The distinction implicit in the previous statements, that between learning and performance, is basic to the psychology of learning. In general, learning refers to the establishment of tendencies; performance refers to the translation of these tendencies into behavior.

Historical background. Although ideas basic to the modern psychology of learning have existed for millennia, especially in the associationistic philosophies, the immediate antecedents of the scientific study of learning are to be found in the work of three scientists writing in the late nineteenth and early twentieth centuries: the German Ebbinghaus, the Russian Pavlov, and the American Thorndike [see Ebbinghaus; Pavlov; Thorndike].

Ebbinghaus. Ebbinghaus fathered the study of verbal rote learning (1885). As materials, he used meaningless three-letter consonant–vowel–consonant sequences, which have come to be called nonsense syllables. GOC, TER, and BIV are examples. Ebbinghaus constructed lists of these materials of various lengths, memorized them under various conditions, and attempted to recall them after various amounts of time. He discovered many of the laws of such learning, which remain valid today. It is of incidental interest that Ebbinghaus also appears to have been the first psychologist to make use of the ideas of statistics and probability.

Pavlov. Using dogs as subjects, Pavlov studied the simple form of learning that we now call classical conditioning (1927). The Pavlovian procedure consisted of presenting the dog with food or an acid solution, which made the animal salivate, shortly after the presentation of some neutral stimulus, such as a tone or light, which did not. After several such pairings, the dog came to salivate at the presentation of the neutral stimulus as if that stimulus had somehow become a substitute for food or acid. Pavlov was able to demonstrate many of the basic phenomena of conditioning. He also developed a quasi-neurological theory to account for such learning.

Thorndike. Thorndike also worked with lower animals, such as dogs, cats, and chickens, and studied what we now call instrumental learning. The most famous of Thorndike’s studies were those in which cats learned to operate a latch to escape from a puzzle box and to obtain a bit of fish outside (Thorndike 1898–1901). On the basis of his observations in these studies, Thorndike was led to develop an influential theory of learning in which three hypotheses were central: (1) learning consists of the formation of connections between stimuli and responses; (2) learning is a gradual rather than a sudden or insightful process; and (3) learning depends not just upon practice but also upon reward and punishment. This last hypothesis Thorndike called the law of effect.

Taxonomy of learning

As these historical materials indicate, the scientific study of learning involves the use of widely different procedures. It is useful, in fact, to differentiate forms of learning in addition to those described above. The most important of these are considered here.

Classical conditioning

Many investigators make a distinction between forms of learning in which the subject’s reactions lead to reward or punishment and those in which such events take place independently of the subject’s behavior. The former arrangement defines instrumental learning; the latter identifies classical conditioning. The four most important aspects of the classical conditioning experiment may be described by referring to the example of Pavlovian conditioning mentioned above.

(1) Unconditioned stimulus (US): any stimulus that at the outset of an experiment produces a regular reaction. In the typical Pavlovian experiment the US is food.

(2) Unconditioned response (UR): the consistent reaction evoked by the US just referred to. In the Pavlovian experiments this was the salivation.

(3) Conditioned stimulus (CS): a neutral stimulus paired with the US for experimental purposes. In the typical Pavlovian experiments, this was a light, buzzer, bell, or ticking metronome.

(4) Conditioned response (CR): after several pairings of the CS and US, a response resembling the UR may be elicited by the conditioned stimulus. This is the conditioned response, conditioned reaction, or conditioned reflex.
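The trial structure just described lends itself to a simple quantitative illustration. The sketch below, in Python, is a minimal rendering of acquisition under repeated CS–US pairings; the linear growth rule and every parameter value are assumptions chosen for demonstration, not a model fitted to Pavlov’s data.

```python
# Minimal sketch of acquisition under repeated CS-US pairings.
# The growth rule and the rate parameter are illustrative assumptions.

def run_acquisition(trials: int, rate: float = 0.3) -> list[float]:
    """Return CR strength after each CS-US pairing.

    On every trial the CS is paired with the US, and CR strength moves
    a fixed fraction of the remaining distance toward its asymptote
    (1.0), giving the negatively accelerated curve typical of
    acquisition.
    """
    strength = 0.0  # the CS starts out neutral, evoking no CR
    history = []
    for _ in range(trials):
        strength += rate * (1.0 - strength)  # one CS-US pairing
        history.append(strength)
    return history

if __name__ == "__main__":
    for trial, s in enumerate(run_acquisition(10), start=1):
        print(f"trial {trial:2d}: CR strength = {s:.3f}")
```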

Studies of the conditioned reflex show that such learning is a very general process. Reactions that have been conditioned, in addition to salivation, include the galvanic skin response, the eyeblink, a blocking of the alpha rhythm of the brain, pupillary dilation, vasodilation and constriction, and secretion of various internal organs. It has also been demonstrated that the range of neutral stimuli to which such conditioning may take place is wide and includes all of the stimuli that the organism ordinarily perceives and some that it ordinarily may not perceive.

In discussing classical conditioning, two items of interpretation appear to be important. First, as the responses listed above suggest, the reactions modifiable by classical conditioning are often emotional reactions. Thus, classical conditioning appears to be the mechanism by which hopes, fears, attitudes, and other emotionally toned reactions are established. Second, it is important to restrict the application of the term “conditioning” to classical conditioning. The extension of the concept of conditioning to almost every form of learning, as some authors have done, leads to confusion.

Simple instrumental learning

Most learning situations differ from classical conditioning in that the organism’s reactions are instrumental to the securing of reward or the avoidance of punishment; hence the name “instrumental learning.” In purely physical terms, there are four possible relationships between a given response and reward and punishment: the response in question may (1) produce reward, (2) avoid a punishment, (3) lead to punishment, or (4) lead to the withholding of reward. The great majority of experimental work in simple instrumental learning involves the first two of these, reward learning and avoidance learning. We shall discuss reward learning here, postponing the treatment of avoidance learning until later.

A common device for the study of reward learning is the Skinner box. A representative version of this apparatus might consist of a chamber about one foot on a side. A lever extends into the box on one side. If the rat (the species most commonly studied in the Skinner box) presses the lever, a bit of food or a small dipper of water is automatically presented.

Investigations of simple instrumental learning employ two general procedures: free responding and discriminative. In the latter method a distinctive signal such as a light or tone or the insertion of the lever into the box is used to indicate when reward is available. If the animal presses the bar in the presence of the discriminative stimulus, reward occurs. Bar presses at other times go unrewarded. In the free-responding situation, there is no discriminative stimulus to indicate when reward is available.

Schedules of reinforcement. Most investigations of learning in the Skinner box employ the free-responding procedure and some version of a partial-reinforcement schedule. The rat does not receive reward for every bar depression but is reinforced on a schedule that is defined either in terms of a temporal interval or in terms of a specified number of responses. Thus, there are both interval and ratio schedules of reinforcement. Moreover, the number of responses required for reinforcement or the temporal interval separating reinforcements may be fixed (regular) or variable (irregular). The combinations of these physical arrangements generate four basic schedules of reinforcement: (1) fixed interval, (2) variable interval, (3) fixed ratio, and (4) variable ratio. In the fixed interval schedule the animal is reinforced for the first response after a standard period of time, perhaps one minute. In the variable interval schedule the animal is rewarded for the first response after some average amount of time, such as one minute, but the intervals separating successive reinforcements vary widely around this average value. Similarly with ratio schedules, the fixed ratio schedule is one in which the animal is reinforced for the nth (for example, the fifteenth) response and the variable ratio schedule provides reward after some average, but varying, number of bar depressions. These schedules of reinforcement produce characteristic behavioral patterns that cannot be described within the scope of this article. The interested reader is referred to Skinner (1938), Ferster and Skinner (1957), or to any standard book on learning (for example, Hall 1966; Kimble 1961). One practical consequence of any partial reinforcement schedule is the establishment of great persistence in behavior. This is particularly true of the variable interval schedule. This schedule, therefore, is widely used in experiments in which many tests must be conducted. The investigation of stimulus generalization, which we shall discuss later, and the influence of drugs upon behavior are important examples.
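The four arrangements can be made concrete by expressing each schedule as a rule that decides whether a given response is reinforced, as in the Python sketch below. It is schematic only; the class names, the sampling of the variable requirements, and the parameter values are assumptions for demonstration, not the procedures of any particular experiment.

```python
# Sketch of the four basic schedules of reinforcement as decision rules.
# Sampling distributions and parameters are illustrative assumptions.
import random

class FixedRatio:
    """Reinforce every nth response (for example, every fifteenth)."""
    def __init__(self, n: int):
        self.n, self.count = n, 0

    def respond(self) -> bool:
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True
        return False

class VariableRatio:
    """Reinforce after a varying number of responses averaging n."""
    def __init__(self, mean_n: int):
        self.mean_n, self.count = mean_n, 0
        self.required = random.randint(1, 2 * mean_n - 1)

    def respond(self) -> bool:
        self.count += 1
        if self.count >= self.required:
            self.count = 0
            self.required = random.randint(1, 2 * self.mean_n - 1)
            return True
        return False

class FixedInterval:
    """Reinforce the first response after a fixed time has elapsed."""
    def __init__(self, seconds: float):
        self.seconds, self.last = seconds, 0.0

    def respond(self, now: float) -> bool:
        if now - self.last >= self.seconds:
            self.last = now
            return True
        return False

class VariableInterval:
    """Reinforce the first response after a varying time averaging t."""
    def __init__(self, mean_seconds: float):
        self.mean, self.last = mean_seconds, 0.0
        self.wait = random.uniform(0.0, 2.0 * mean_seconds)

    def respond(self, now: float) -> bool:
        if now - self.last >= self.wait:
            self.last = now
            self.wait = random.uniform(0.0, 2.0 * self.mean)
            return True
        return False
```

On a FixedInterval(60.0) schedule, for instance, respond(t) returns True only for the first bar press at least one minute after the last reinforcement, mirroring the one-minute example in the text.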

Complex instrumental learning

The simple experimental situations typified by the Skinner box are relatively recent in the history of the scientific study of learning. Earlier investigations tended to use more complex situations, the multiunit maze being, by far, the most popular. The general abandonment of these procedures probably resulted from two difficulties: the complex instrumental learning situations were difficult to subject to automation; and, more importantly, the learning that takes place in such situations is so complex as to defy analysis. On the other hand, investigations of complex instrumental learning did lead to a preliminary statement of certain laws of learning and to the development of certain important concepts. One of the most important items in the first category was suggested by the fact that mazes tend to be learned in a backward order; the correct turns near the goal are learned first, and those near the starting point are learned last. This fact implies the existence of a goal gradient or delay of reinforcement gradient, which means that responses followed immediately by reward are learned more rapidly than those for which reward is delayed.

Habit family hierarchy. The most useful general concept to come from the study of complex instrumental learning is that of habit family hierarchy. Investigators of maze behavior, for example, noticed that the initial behavior of the rat in the maze was not merely random wandering but revealed certain dependabilities. The rat might show a consistent tendency to turn right rather than left, to proceed straight ahead rather than to make a turn at all, to prefer dim alleys to those more brightly illuminated, and to choose paths leading in the general direction of the home cage over those that lead in the opposite direction. Such observations suggested the general proposition that the learner comes to the learning situation with a repertoire of responses (habit family) that vary in strength (hierarchy). This made it possible to view complex instrumental learning as a reorganization of the habit family hierarchy.

Acquisition of motor skill

At the human level, an important form of learning is the acquisition of motor skill. Improvements in handwriting, baseball pitching, piano playing, and bicycle riding are familiar examples. The only caution that should be urged is that it is important to distinguish between skills that emerge as a result of learning and those that appear as a result of maturation. In the young child, walking is an example of the latter type of skill. Although we speak of “learning to walk,” experimental studies have shown that this skill is almost entirely the result of maturation and that practice has relatively little to do with it.

As is the case in all other areas of learning, laboratory study has led to a refinement in methods and to the use of relatively standardized procedures. A very commonly used device for the study of motor learning is the pursuit rotor (rotary-pursuit apparatus). The pursuit rotor resembles a phonograph turntable. Its main feature is a motor-driven disc, which usually turns at the rate of 60 rpm. At a distance of four or five inches from the center, there is a small target approximately the size of a dime. The subject’s task is to keep the point of a hinged stylus in contact with the target while the disc rotates. The measure of performance is the amount of time the subject keeps the point of the stylus in contact with the target.

Massed versus distributed practice. Some of the most important results obtained from pursuit rotor studies deal with the spacing of practice trials. In one investigation (Kimble & Shatel 1952), subjects received fifteen 50-second trials per day for ten days (see Figure 1). One group learned under conditions of massed practice, in which the trials were separated by 10-second rest pauses. The other group learned under conditions of distributed (spaced) practice, the trials being separated by rest pauses of 70 seconds. The results of the investigation were as follows: (1) Massed practice produces a serious interference with the acquisition of pursuit rotor skill. (2) Under either condition of practice, improvement continues for a long time. In Figure 1, it can be seen that even after 150 trials, the subjects under both conditions of practice still continue to show improvement. (3) The initial trials of any day’s session show certain interesting characteristics. One of these is the phenomenon of warm-up, which is most conspicuously displayed in the later sessions of the subjects who learn under distributed practice. The first trials are quite inferior to the final trials of the preceding day, and it may take six or eight trials for the warm-up process to be complete and for the level of performance of the previous day to be reached. (4) Under conditions of massed practice, a different effect may appear at the beginning of each practice session. This is an improvement in performance that apparently occurs as the result of rest and the disappearance of a fatiguelike state produced by massed practice. This phenomenon is most obvious in the early sessions under massed practice. In Figure 1 the straight lines through the massed-practice functions are fitted curves used to estimate what performance would have been on the first trial of a particular session if a day’s rest had not intervened (open circles) and what it would have been on that same trial if there had been no need to warm up (filled circles). The difference between the open and filled circles is a measure of this improvement, technically called reminiscence. It is of interest that in this experiment reminiscence disappears late in learning. (5) If these subjects, who practice with their preferred hand, are tested for performance with their nonpreferred hand and if appropriate control procedures are employed, it is possible to demonstrate that the performance of the nonpreferred hand benefits considerably from practice with the preferred one. This characteristic of motor learning, called transfer of training, is very conspicuous in motor skills.

Rote verbal learning

As was mentioned at the outset of this article, one of the earliest forms of learning to receive scientific study was verbal learning, which Ebbinghaus began to investigate in the late nineteenth century. In its modern form the study of verbal learning takes two major forms, serial learning and paired-associate learning. Serial learning involves the memorization of lists, typically lists of nonsense syllables; paired-associate learning, as the name implies, involves the learning of pairs of items in the way one learns a foreign language vocabulary.

Both of these forms of rote learning are influenced in the same way by certain variables:

(1) The manipulation of distribution of practice leads, as in motor learning, to better performance under spaced practice than under massed practice; but typically the effect is of a much less impressive magnitude than in motor learning. (2) Both proceed much more rapidly with meaningful materials (for example, a list containing the words house, robin, wagon, money, uncle, etc.) than with nonsense materials (for example, a list containing the items TOZ, LUN, GIB, VUR, DEC).

Studies of serial learning have revealed characteristic differences in the ease of learning different portions of the list. The very first items are easiest; those at the end are next easiest; the most difficult items are those just after the middle of the list. This phenomenon, illustrated in Figure 2, may be referred to as a serial position function.

Interference phenomena. The paired-associate learning procedure has been particularly useful in the study of interferences of the kind thought to be responsible for normal forgetting. Suppose a subject learns the following pairs of words:

table–bright

dozen–forest

value–camel

willow–stone

label–graze

He then learns these pairs:

table–lozenge

dozen–tempest

value–blister

willow–horse

label–trial

Note that the stimulus words are the same in the two lists but that the responses are different. Referring to the stimulus and response words in the first list as A and B, respectively, the items in the second list can be referred to as A and C. This A–B, A–C relationship leads to great difficulty in remembering the A–B associations. The establishment of such interferences is commonly thought by psychologists of verbal learning to be the essential condition for forgetting.

Learning to learn

If subjects are required to learn a series of lists of verbal materials, they show a steadily improving ability as a function of the number of lists previously committed to memory (unless, as just noted, the lists are constructed to interfere with each other). The results typically obtained in experiments on learning to learn appear in Figure 3. Among the most important experimental demonstrations of this fact are the investigations of Harlow (1949). Harlow taught monkeys a series of several hundred discrimination problems. During the course of this experiment, the subjects improved to the point where they were solving new discriminations after just one trial.

Basic phenomena of simple learning

Most students of learning assume that the variety of forms of learning considered in the previous section all obey the same basic laws. For this reason, it has seemed expedient to most such students to study the basic properties of learning in simple situations, often with lower organisms as subjects. Thus, realistic presentations of what are regarded as the basic phenomena of learning (this section), as well as of its most fundamental laws (next section), must depend heavily upon studies of classical conditioning and simple instrumental learning.

Acquisition and the learning curve

During the course of practice, a subject’s performance changes in a direction that indicates an increase in the strength of the underlying process. The phenomenon of habit acquisition is often represented in the form of a learning curve. The shape and direction of such functions depend upon the particular measure of learning employed. Idealized but typical functions appear in Figure 4. In what follows, we shall limit ourselves to a report of investigations where increases in the measure plotted reflect increases in the strength of a habit.

Extinction

As mentioned earlier, and as will be developed in more detail later, learning requires the use of reinforcements; for example, allowing the subject the opportunity to obtain food or avoid punishment for performing the response to be learned. The omission of reinforcement leads to a reduction in performance, a phenomenon that Pavlov called experimental extinction and that is now more often referred to simply as extinction.

Spontaneous recovery

If, following extinction, the subject is allowed a period of rest, there frequently occurs a spontaneous increase, or spontaneous recovery, in the strength of the previously extinguished response. This increase resembles the increase called reminiscence, which occurs in motor learning. Many theorists (for example, Hull 1951) regard both as reflecting the dissipation of some type of inhibitory process. What next happens to the strength of the spontaneously recovered response depends upon whether or not it is reinforced. The reintroduction of reinforcement leads to the rapid re-establishment of the full strength of the response. Omission of reinforcement leads to re-extinction. Figure 5 provides a graphic summary of the phenomena described.

Stimulus generalization

Ordinarily, in a conditioning experiment the conditioned stimulus is precisely controlled. If the response is tested with other stimuli, the conditioned reaction may occur but in diminished strength. For example, Guttman and Kalish (1958) trained pigeons to peck at a disc illuminated with a light of 550 mµ and tested the reaction of the pigeons to lights of other colors. The measure of response strength employed was the rate of pecking. These investigators obtained results indicating that there is a generalization gradient (Figure 6 illustrates these findings). In general, such a gradient shows the transfer of response strength to stimuli similar to the training stimulus and a reduction in strength that is proportional to the difference between the training and test stimuli.
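The shape of such a gradient can be illustrated by a function that decays with the distance between training and test stimuli, as in the sketch below. The exponential form and all of its constants are assumptions for demonstration; Guttman and Kalish measured actual pecking rates rather than computing them from a formula.

```python
# Illustrative generalization gradient: response strength declines with
# the difference between training and test stimuli. The exponential
# form and its constants are assumptions, not Guttman and Kalish's data.
import math

def generalized_rate(test_wavelength: float,
                     training_wavelength: float = 550.0,
                     peak_rate: float = 100.0,
                     spread: float = 25.0) -> float:
    """Predicted response rate to a test stimulus (wavelength in mµ)."""
    distance = abs(test_wavelength - training_wavelength)
    return peak_rate * math.exp(-distance / spread)

if __name__ == "__main__":
    for wavelength in (490, 520, 550, 580, 610):
        rate = generalized_rate(wavelength)
        print(f"{wavelength} mµ: rate ≈ {rate:5.1f} responses/min")
```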

Discrimination

The tendency for a response to generalize means that the subject fails to discriminate between similar stimuli. Discrimination between two stimuli may be obtained by presenting the two stimuli either together (allowing the organism to choose one) or in succession (allowing the organism to respond or not) and reinforcing responses to one stimulus and withholding reinforcement for responses to the other, provided that the organism’s sensory mechanisms can detect the difference. Following such training, the subject typically learns to respond to the reinforced stimulus but not to the other. It is clear that the establishment of a discrimination involves all of the basic phenomena discussed so far: responses to the reinforced stimulus are acquired and then generalized to the nonreinforced stimulus. These latter responses are extinguished by nonreinforcement but presumably are subject to spontaneous recovery.

Concept formation. At a level much more complex than that of simple learning, it seems very likely that the formation of concepts entails a process of discrimination learning. A concept obviously involves a tendency to treat diverse things as identical (generalization) but to limit the extent of such indiscriminate reaction.

The laws of learning and performance

The major preoccupation of students of learning has been with the experimental manipulation of a variety of variables in an effort to determine their lawful relationship to learned changes in behavior. As we shall see, it is easy to list variables that have powerful effects upon performance in the learning situation. What is not so easy is to determine with certainty whether the effect is upon learning or performance. To illustrate this difficulty, suppose that two groups of rats learn a maze under conditions that are exactly alike except that one group learns after having been deprived of food for 24 hours and the other group learns after having been deprived of food for only 2 hours. The learning curves obtained on these two groups of subjects would surely be very different (see Figure 7). But is this difference a difference in learning or performance, or both? The obvious way to find out is to subdivide each group at some point when an impressive difference in behavior has been established, testing some previously very hungry animals when they are only moderately hungry and some previously moderately hungry animals when they are very hungry. Under controlled conditions and with the change in motivation, the performance of both groups changes immediately to what it would have been if the new condition of motivation had prevailed from the beginning (see Figure 7). In short, there is no evidence that motivation has any effect upon learning in an experiment of this sort. The influence, which is a powerful one, is entirely upon performance.

The difficulty in the experiment just described exists for every other variable that might be manipulated. Thus, in the sections to follow, we shall present several important regularities emerging from the experimental study of learning; but, except in connection with the first of these, we shall not return to the question whether the effect is on learning or performance. It will be sufficient to say that the current trend in the thinking of psychologists of learning is to assign more and more of these variables a role as determiners of performance rather than of learning.

Number of practice trials

By definition, learning depends upon practice; and it is obvious that the amount of practice must figure in some way in determining the amount of learning. There is considerable argument, however, about the kind of law involved (Kimble 1961, pp. 109–136). Some psychologists have maintained that all learning is, in some sense, insightful and occurs in just one trial; others have insisted that all learning represents the gradual strengthening of some underlying process. The learning-performance issue is a concern chiefly of the theorists who maintain that all learning is basically insightful, for the fact of the matter is that most learning curves reflect gradual improvements in performance. The general way out of this paradox taken by insight theorists has involved the assumption that learning involves numerous subskills that are acquired suddenly but after different amounts of practice, producing the appearance of gradual learning.
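The insight theorists’ resolution can be made concrete with a small simulation. In the sketch below, each subskill is acquired all at once, with some fixed probability on each trial, yet the aggregate proportion mastered rises smoothly; every number in it is an illustrative assumption.

```python
# Sketch of the subskill argument: individually all-or-none acquisition
# can still produce a smooth aggregate learning curve. All parameters
# are illustrative assumptions.
import random

def aggregate_curve(n_subskills: int = 100, n_trials: int = 25,
                    p_acquire: float = 0.15, seed: int = 1) -> list[float]:
    """Proportion of subskills mastered after each trial."""
    rng = random.Random(seed)
    learned = [False] * n_subskills
    curve = []
    for _ in range(n_trials):
        # Each still-unlearned subskill is acquired suddenly, with
        # probability p_acquire, on this trial.
        learned = [ok or rng.random() < p_acquire for ok in learned]
        curve.append(sum(learned) / n_subskills)
    return curve

if __name__ == "__main__":
    for trial, proportion in enumerate(aggregate_curve(), start=1):
        print(f"trial {trial:2d}: proportion mastered = {proportion:.2f}")
```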

Amount of reinforcement

Obviously a practice trial, among other things, provides an occasion for the administration of reinforcement. Several of the best established laws of learning relate to reinforcement in some way. It is known, for example, that the amount of reinforcement is an important variable. Up to some limit, increasing amounts of reinforcement lead to improvements in performance, the characteristic function being negatively accelerated. The same results have been obtained for quality of reinforcement. Subjects learn more rapidly for a highly desirable reinforcer than for a less desirable one.
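One common idealization of such a negatively accelerated function, offered here only as an illustration and not as a law stated in the literature reviewed, is exponential growth toward an asymptote:

```latex
% Illustrative form only: performance P grows with amount of
% reinforcement x toward an asymptote A at a rate governed by k.
P(x) = A\left(1 - e^{-kx}\right), \qquad
\frac{dP}{dx} = Ak\,e^{-kx} > 0, \qquad
\frac{d^{2}P}{dx^{2}} = -Ak^{2}e^{-kx} < 0
```

The positive first derivative and negative second derivative express the two properties in the text: performance improves with increasing amounts of reinforcement, but at a diminishing rate, up to the limit A.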

Delay of reinforcement

It is now well established that the time separating a response and its reinforcer is a very powerful variable in determining the progress of behavioral change in a learning situation. In general, the longer reinforcement is delayed following the execution of the response, the slower the rate of such change. What is most surprising in the results of such studies is that when extraneous sources of reinforcement are eliminated (for example, Grice 1948), it has been found that little if any learning occurs with delays greater than four or five seconds.

Secondary reinforcement. This last fact, of course, raises the question of what mechanisms have been at work in experiments in which subjects have learned with a fairly long delay of reinforcement. The commonly accepted answer is in terms of secondary reinforcement. A discussion of the details of secondary reinforcement would lead us far afield, but fortunately a nontechnical presentation of the argument will suffice. Suppose a rat runs a maze and at the end is restrained in a delay chamber for five minutes before being allowed access to the food used as a reinforcer. Suppose, further, that the rat learns the maze under these conditions, which obviously involve a delay of reinforcement much greater than the four or five seconds mentioned above. How are these two sets of facts to be brought into harmony? The argument in terms of secondary reinforcement goes this way: Cues in the delay chamber come to stand for food because they are always present just before food becomes available. Since these cues stand for food, they have some of the same characteristics as food, including the important characteristic of functioning as (secondary) reinforcers. Thus, the cues in the delay chamber serve as immediate (secondary) reinforcers and promote the progress of learning. The obvious implication of this argument is that if the cues preceding reinforcement varied from trial to trial, so that no stable association could be formed, there would be a serious disruption in the progress of learning.

The delay of reinforcement gradient is basic to the theoretical interpretation of a variety of phenomena in learning. For example, the backward order of elimination of blinds in a complex maze referred to earlier probably reflects the operation of this gradient. A gradient with all the features of the delay of reinforcement gradient also seems to apply to punishment. Miller (for example, 1959) has developed a theory of approach–avoidance conflict that is based on simultaneous operation of gradients based on reward and punishment.

The interstimulus interval

The experiments on delay of reinforcement described in the preceding section were all experiments in instrumental learning. A related variable in classical conditioning is the time between conditioned and unconditioned stimuli, often referred to as the interstimulus interval. Studies of this variable have produced relatively consistent results, which Figure 8 presents graphically. Two features of the interstimulus interval are important: (1) Backward conditioning, in which the unconditioned stimulus precedes the conditioned stimulus, leads to little or no conditioning. (2) The function for forward conditioning, in which the conditioned stimulus precedes the unconditioned stimulus, displays a conspicuous optimal interval; intervals either longer or shorter than the optimum produce inferior conditioning. For many response systems the optimal interval is in the neighborhood of .5 second. Recent investigations suggest that this optimal interval is more limited than was once thought. These studies (for example, Noble & Adams 1963; Noble & Harding 1963) have tended to indicate (1) that for lower animals the optimal interval is longer than .5 second and (2) that its duration may be different at different points in practice.

Other variables

The variables described above are representative of those studied in investigations of simple learning. A complete catalogue of such variables is beyond the scope of this report. Thus, we shall supplement the foregoing review by

simply mentioning certain other variables and only briefly indicating their effect upon performance.

Motivation. As we have seen, motivation ordinarily facilitates performance in the learning situation. In some circumstances, however, motivation may energize tendencies that interfere with the response to be learned. Under these circumstances, motivation, particularly very strong motivation, may appear to interfere with learning.

Distribution of practice. As we have also seen, distribution of practice usually favors rapid learning. In some complex tasks, however, massed practice may aid in the elimination of initial errors and briefly speed the progress of learning.

Intensity of the unconditioned stimulus. The intensity of the unconditioned stimulus in classical conditioning behaves as the amount of reinforcement does in instrumental learning; the greater the intensity of the unconditioned stimulus, the more rapidly conditioning proceeds.

Intensity of the conditioned stimulus. The intensity of the conditioned stimulus usually has little effect on the speed of conditioning; but recent studies (Grice & Hunter 1964) show that when strong and weak conditioned stimuli are used in the same experiment with the same subjects, the effectiveness of this variable increases considerably.

A final point is that the effects of the variables mentioned in this and the preceding section interact; that is, the effect of one depends upon the values of the others. The precise nature of the interactions remains to be worked out for almost all combinations of variables.

The nature of reinforcement

The major theoretical issues in the psychology of learning are traceable to the opinions of E. L. Thorndike. As mentioned earlier, from his studies of cats in a simple instrumental learning situation, Thorndike developed three important hypotheses about the nature of learning: learning is gradual rather than sudden; learning consists in the formation of stimulus-response connections; and learning depends, at bottom, upon the operation of rewards and punishments. This last idea is the one that we shall develop most fully in this section. As Thorndike put it:

The Law of Effect is that: Of several responses made to the same situation, those which are accompanied or closely followed by satisfaction to the animal will, other things being equal, be more firmly connected with the situation, so that, when it recurs, they will be more likely to recur; those which are accompanied or closely followed by discomfort to the animal will, other things being equal, have their connections with that situation weakened, so that, when it recurs, they will be less likely to occur. The greater the satisfaction or discomfort, the greater the strengthening or weakening of the bond. ([1898–1901] 1911, p. 244)

This statement came in for severe criticism on two particular counts: (1) It was criticized as subjective and unscientific in that the terms “satisfaction” and “discomfort” appeared to entail commitments to a mentalism that the psychology of 1911 was struggling to escape. Thorndike, however, was on safer methodological ground than his critics realized, for he offered a means of objectifying these terms that we find perfectly acceptable today. “By a satisfying state of affairs is meant one which the animal does nothing to avoid, often doing such things as attain and preserve it. By a discomforting or annoying state of affairs is meant one which the animal commonly avoids or abandons” ([1898–1901] 1911, p. 245).

(2) The law of effect was criticized as circular: that is, learning to approach some object (for example, food) could define an object as a satisfier that, in turn, could be used to explain the learning that served to define it as a satisfier in the first place. Although Thorndike himself was not particularly clear on how to deal with this criticism, later advocates of his general point of view (for example, Miller & Dollard 1941) provided an answer. In general, this answer is that the transparent circularity in the example above disappears when it is understood that the definition of a satisfier in a defining experiment of the type proposed by Thorndike establishes the object as a general satisfier that will function as a reinforcer in a variety of learning situations. That is, given that food functions as a reinforcer in one situation, it is possible to predict with a fair degree of certainty that it will function in a similar way in a host of others. The law of effect has survived these criticisms and now has become the position by which other interpretations of reinforcement are usually defined.

The law of effect

Before proceeding further with this discussion, it is important to make a distinction between empirical and theoretical versions of the law of effect. The empirical law of effect involves nothing more than the simple factual statement that there are objects, such as food, water, and escape from punishment, that function dependably as reinforcers. The theoretical law of effect, on the other hand, states (using Thorndike’s terminology) that these events are reinforcers because they are satisfiers. Because of its factual status, the empirical law of effect is not an object of dispute. Arguments about the nature of reinforcement involve the theoretical law of effect.

The statement that all reinforcers are satisfiers or annoyers is merely one of several proposals that have been offered as to the ultimate nature of reinforcement. For purposes of exposition, it is convenient to identify three general classes of such proposals, which we shall call tension-reduction theory, stimulational theory, and reactional theory.

Tension-reduction theory

Tension-reduction theory maintains that the essential condition of reinforcement is the alleviation of a state of physiological or psychological tension. In the past, tension-reduction theory was so closely tied to Thorndike’s theory that current usage tends to identify the law of effect with this position and often erroneously equates tension-reduction theory with reinforcement theory.

At different times, and in the hands of different authors, need-reduction, drive-reduction, and drive-stimulus-reduction theories of reinforcement have been offered. Need-reduction theory identifies reinforcement with the satisfaction of some physiological need (for example, the need for food, water, or sex) that, if not met, means that the individual or its species will perish. Although this theory is attractive because of its affinity to biological processes, certain facts make its acceptance impossible: (1) There are many rewards that appear to correspond to no biological need. These include rewards that satisfy such acquired motives as those for affection, dominance, and accomplishment. (2) Certain biological needs appear to involve no correlated reinforcer. One of these is the need for oxygen, which is present at very high altitudes but which does not seem to create a state of tension or drive. There is no evidence that the administration of oxygen under these circumstances is a reward.

Difficulties such as these led certain theorists (Hull 1943) to distinguish between need (a physiological condition) and drive (the psychological experience associated with needs) and to suggest that reinforcement is drive reduction rather than need reduction. We shall apply this distinction presently.

The drive-stimulus-reduction theory of reinforcement (Miller & Dollard 1941) suggests that drives are always intense stimuli and that drive reduction (assumed to be reinforcing) is a matter of stimulus reduction.

Stimulational theory

Stimulational theory maintains that particular stimuli are reinforcing and distinguishes itself from tension-reduction theory in these terms. Thus, food is a reinforcer because of its taste (not because it reduces hunger), and water is a reinforcer because of the stimulational aspects of drinking (not because it reduces thirst).

Reactional theory

Reactional theory stands in opposition both to tension-reduction theory and to stimulational theory, holding that it is the act of eating or drinking, rather than taste or drive reduction, that is essential to reinforcement.

Experimental tests

Disagreements among these various interpretations of reinforcement have led, over the years, to a wide variety of experimental tests designed to establish the validity of one particular interpretation. Typically, tension-reduction theory has provided the point of departure, and adherents of opposing theories have attempted to strengthen their theoretical positions by discrediting tension-reduction theory. For example, Sheffield and Roby (1950) demonstrated that rats will learn to run a simple maze for a reward consisting of saccharine dissolved in water. The significance of this finding derives from the fact that saccharine has no nutritional value, being eliminated from the body chemically unchanged. This suggests that it must be either the sweet taste of saccharine (stimulational theory) or the act of ingestion (reactional theory) that provides for reinforcement. Advocates of tension-reduction theory, however, were able to point out that although saccharine produces no reduction in need it may produce a reduction in drive. Thus, Miller (1963) reported that rats that were allowed to drink a saccharine solution subsequently ate less food than the control subjects, which were allowed to drink only water. Miller’s interpretation was that the consumption of saccharine had led to a reduction of the hunger drive although obviously it had not altered the rats’ need for food.

Similar problems for tension-reduction theory were provided by demonstrations that the opportunity to explore or to manipulate the environment is rewarding for lower animals. For example, Butler (1953) was able to show that rhesus monkeys will learn a discrimination for no other reward than the opportunity to see out of their normally closed cage. Other investigators (Harlow, Harlow, & Meyer 1950) demonstrated that monkeys learned to distinguish between manipulable and nonmanipulable objects apparently for no other reward than the opportunity to manipulate them. Still others (Kish 1955) showed that rats will learn a bar-pressing response to turn on a dim light or that they will learn a simple maze for the opportunity to explore a novel environment. For some of these demonstrations, but not all, tension-reduction theorists were able to deal with the problem by the postulation of a motive to explore or to manipulate and by the assumption that learning depended on the reduction of these drives. Obviously there are certain difficulties with such explanations in that they open the possibility of postulating a new motive for every demonstrable type of reward. On the other hand, it is known that some of these motives, for example, the exploratory motive, increase in strength with deprivation, as many other motives do. Such evidence makes the interpretation somewhat more acceptable in that it lends credence to the concept of an exploratory motive by providing independent evidence for it.

A special threat to tension-reduction theory has recently come in the form of demonstrations that rats will learn a variety of responses (the most common response is bar pressing) for a weak electrical stimulation of a variety of areas of the brain stem. The simplest interpretation of such a result is that such stimulation is somehow pleasant for the rat, and such demonstrations have been interpreted as a support for stimulational theory. On the other hand, Olds (1958) has shown that the effectiveness of brain stimulation depends in part upon the level of the rat’s hunger and sex drives. This opens the possibility that brain stimulation reduces these and other motives.

At the same time that tension-reduction theorists were dealing with these attacks upon their position, they were also providing more positive evidence in support of their own position. A typical experiment is that of Miller and Kessen (1952). These investigators demonstrated that rats learned a simple discrimination for a reward provided by the introduction of food directly into the stomach by way of a fistula. Such learning took place in the absence of taste stimulation emphasized by stimulational theory and ingestive behavior emphasized by reactional theory. This appears to leave the alleviation of hunger (tension reduction) as the only remaining mechanism of reinforcement.

It is apparent that the variety of experimental tests described above did not succeed in establishing any particular theory as the obviously correct theory of reinforcement. This state of affairs has had two important consequences. One consequence is that certain theorists, most notably Collier (see, for example, Collier & Myers 1961), have made a strong case for the view that reinforcement entails a variety of mechanisms, probably all of those emphasized by the more specialized theories of reinforcement.

The other important consequence is the increased appeal of multiprocess theories of learning. Such theories propose that learning itself involves a number of subtypes and that the mechanisms of reinforcement differ for the various forms of learning.

Multiprocess theories

The most popular form of multiprocess theory is a two-process theory that maintains that the mechanisms of reinforcement are different for classical conditioning and instrumental learning. The position is that instrumental learning occurs as a result of reinforcement provided by tension reduction, whereas for classical conditioning all that is necessary is the contiguous occurrence of conditioned and unconditioned stimuli (or, in some versions, conditioned stimulus and unconditioned response).

One of the appealing features of two-process theory is the readiness with which it can be applied to avoidance learning, which is difficult to understand in terms of any single principle of reinforcement. Suppose we consider the following experimental arrangement: A rat is placed in a Skinner box and on each trial a light comes on and five seconds later an electric shock is applied to the animal’s feet through an electrifiable grid in the floor, unless, in the meantime, the rat presses the bar. Rats are able to learn this response quite quickly. Two-process theory deals with this learning as follows. On the early trials, before the rat has learned to press the bar and avoid the shock, light and shock are paired on every trial as in classical conditioning. This pairing leads to a conditioning to the light of a fear reaction. On subsequent trials, the appearance of the light arouses fear in the subject, and this, in turn, leads to a heightened level of activity. In the course of such activity, sooner or later the animal presses the bar, terminating the light, reducing fear, and also avoiding the shock. The reduction in fear, which is contingent upon the cessation of the light, provides reinforcement for the bar-pressing reaction.
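The two processes can be traced in a schematic simulation of the trial sequence just described. In the sketch below, fear of the light grows by pairing on shocked trials (the classical process), and fear reduction strengthens the bar press on avoided trials (the instrumental process); the update rules and every numerical value are assumptions for demonstration, not a fitted model of rat behavior.

```python
# Schematic two-process account of avoidance learning: light precedes
# shock by five seconds unless the bar is pressed. Update rules and
# all numbers are illustrative assumptions.
import random

def avoidance_session(trials: int = 40, seed: int = 2) -> None:
    rng = random.Random(seed)
    fear = 0.0    # classically conditioned fear of the light (process 1)
    habit = 0.1   # instrumental strength of bar pressing (process 2)
    for trial in range(1, trials + 1):
        # Fear energizes activity, raising the chance of a press
        # within the five-second warning period.
        p_press = min(1.0, habit + 0.5 * fear)
        if rng.random() < p_press:
            # Pressing terminates the light; the resulting fear
            # reduction reinforces the bar press.
            habit += 0.15 * fear * (1.0 - habit)
            outcome = "avoided"
        else:
            # Light and shock are paired, as in classical conditioning,
            # so fear of the light grows.
            fear += 0.3 * (1.0 - fear)
            outcome = "shocked"
        print(f"trial {trial:2d}: fear={fear:.2f} habit={habit:.2f} {outcome}")

if __name__ == "__main__":
    avoidance_session()
```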

Psychopathology. Applications of learning theory to psychopathology, for example, those attempted by such theorists as Dollard and Miller (1950), have made important use of a two-process explanation in their descriptions of neurotic symptomatology. Phobias are often interpreted as direct or symbolic representations of classically conditioned fear reactions; and neurotic behavior is viewed as avoidance behavior motivated by fear and reinforced by fear reduction.

Gregory Kimble

[See also Forgetting. Other relevant material may be found in Achievement Testing; Drives; Motivation; Nervous System, article on Brain Stimulation; Stimulation Drives.]

BIBLIOGRAPHY

Butler, Robert A. 1953 Discrimination Learning by Rhesus Monkeys to Visual-exploration Motivation. Journal of Comparative and Physiological Psychology 46:95–98.

Collier, George; and Myers, Leonhard 1961 The Loci of Reinforcement. Journal of Experimental Psychology 61:57–66.

Dollard, John; and Miller, Neal E. 1950 Personality and Psychotherapy: An Analysis in Terms of Learning, Thinking, and Culture. New York: McGraw-Hill. → A paperback edition was published in 1965.

Ebbinghaus, Hermann (1885) 1913 Memory: A Contribution to Experimental Psychology. New York: Columbia Univ., Teachers College. → First published as Über das Gedächtnis. A paperback edition was published in 1964 by Dover.

Ferster, C. B.; and Skinner, B. F. 1957 Schedules of Reinforcement. New York: Appleton.

Grice, G. R. 1948 The Relation of Secondary Reinforcement to Delayed Reward in Visual Discrimination Learning. Journal of Experimental Psychology 38:1–16.

Grice, G. R.; and Hunter, J. J. 1964 Stimulus Intensity Effects Depend Upon the Type of Experimental Design. Psychological Review 71:247–256.

Guttman, Norman; and Kalish, Harry L. 1958 Experiments in Discrimination. Scientific American 198, no. 1:77–82.

Hall, John F. 1966 The Psychology of Learning. Philadelphia: Lippincott.

Harlow, Harry F. 1949 The Formation of Learning Sets. Psychological Review 56:51–65.

Harlow, Harry F.; Harlow, M. K.; and Meyer, D. R. 1950 Learning Motivated by a Manipulation Drive. Journal of Experimental Psychology 40:228–234.

Hillman, Beverly; Hunter, W. S.; and Kimble, G. A. 1953 The Effect of Drive Level on the Maze Performance of the White Rat. Journal of Comparative and Physiological Psychology 46:87–89.

Hull, Clark L. 1943 Principles of Behavior: An Introduction to Behavior Theory. New York: Appleton.

Hull, Clark L. 1951 Essentials of Behavior. New Haven: Yale Univ. Press.

Kimble, Gregory A. 1961 Hilgard and Marquis’ Conditioning and Learning. 2d ed., rev. New York: Appleton. → First published in 1940 as Conditioning and Learning, by Ernest R. Hilgard and Donald G. Marquis.

Kimble, Gregory A.; and Shatel, R. B. 1952 The Relationship Between Two Kinds of Inhibition and the Amount of Practice. Journal of Experimental Psychology 44:355–359.

Kish, George B. 1955 Learning When the Onset of Illumination Is Used as Reinforcing Stimulus. Journal of Comparative and Physiological Psychology 48:261–264.

Miller, Neal E. 1959 Liberalization of Basic S–R Concepts: Extensions to Conflict Behavior, Motivation and Social Learning. Volume 2, pages 196–292 in Sigmund Koch (editor), Psychology: A Study of a Science. New York: McGraw-Hill.

Miller, Neal E. 1963 Some Reflections on the Law of Effect Produce a New Alternative to Drive Reduction. Volume 11, pages 65–112 in Nebraska Symposium on Motivation. Edited by Marshall R. Jones. Lincoln: Univ. of Nebraska Press.

Miller, Neal E.; and Dollard, John 1941 Social Learning and Imitation. New Haven: Yale Univ. Press; Oxford Univ. Press.

Miller, Neal E.; and Kessen, M. L. 1952 Reward Effects of Food Via Stomach Fistula Compared With Those of Food Via Mouth. Journal of Comparative and Physiological Psychology 45:555–564.

Noble, Merrill; and Adams, C. K. 1963 Conditioning in Pigs as a Function of the Interval Between CS and US. Journal of Comparative and Physiological Psychology 56:215–219.

Noble, Merrill; and Harding, G. E. 1963 Conditioning in Rhesus Monkeys as a Function of the Interval Between CS and US. Journal of Comparative and Physiological Psychology 56:220–224.

Olds, James 1958 Effects of Hunger and Male Sex Hormone on Self-stimulation of the Brain. Journal of Comparative and Physiological Psychology 51:320–324.

Pavlov, Ivan P. (1927) 1960 Conditioned Reflexes: An Investigation of the Physiological Activity of the Cerebral Cortex. New York: Dover. → First published as Lektsii o rabote bol’shikh polusharii golovnogo mozga.

Sheffield, Fred D.; and Roby, Thornton B. 1950 Reward Value of a Non-nutritive Sweet Taste. Journal of Comparative and Physiological Psychology 43:471–481.

Skinner, B. F. 1938 The Behavior of Organisms: An Experimental Analysis. New York: Appleton.

Thorndike, Edward L. (1898–1901) 1911 Animal Intelligence: Experimental Studies. New York: Macmillan.

II CLASSICAL CONDITIONING

Classical (or Pavlovian, respondent, or type-S) conditioning refers to any of a group of specific procedures that, when applied to an organism under appropriate conditions, result in the formation of the type of learned behavior known as the conditioned response. The term also refers to phenomena and relationships discovered through experiments using classical conditioning procedures. The adjective “classical” is used to distinguish these procedures from the more recently developed instrumental, or operant, conditioning procedures, which also lead to the formation of conditioned responses.

The Russian physiologist I. P. Pavlov was primarily responsible for the development of the methods and nomenclature of classical conditioning, and he discovered and described many of the most important associated phenomena (Pavlov 1927). The early writings of Pavlov had a profound influence on the development of behaviorism by John B. Watson, who considered classical conditioning to be the basis of acquisition of all learned behavior. The wide acceptance of behaviorism by American psychologists and the availability of two English translations of Pavlov’s Lectures on Conditioned Reflexes (1923) in the late 1920s were followed by an increased and sustained output in the United States of published research on conditioning. However, most of this research was conducted by psychologists who used instrumental techniques more often than they used classical conditioning. Although Pavlov’s methods and data were behavioral in nature, he treated them as bearing directly upon the physiology of the cerebral cortex. His theory of conditioning is, therefore, a theory of brain function. (The present article, however, will deal mainly with its more behavioral aspects.) The Russian work on conditioning since Pavlov has very largely followed the pattern set by him.

Characteristics

The following characteristics are principal features of classical conditioning. A response already within the repertory of the experimental subject is designated the unconditioned response (UR) and the stimulus that evokes it is called the unconditioned stimulus (US). Another stimulus, one that does not elicit the UR or any response similar to it, is designated the conditioned stimulus (CS). The CS and the US are presented repeatedly to the subject, either simultaneously or with the CS preceding but overlapping the US in time. A response similar to the UR that develops to the CS is called the conditioned response (CR). The change in this response to the CS from an initial zero magnitude or frequency to a positive magnitude or frequency following practice constitutes a learned acquisition that is called conditioning.

Pavlov and his collaborators used the dog as their experimental subject. Salivation was the UR to the US of food or dilute acid. The CSs were lights, sounds, or pressures systematically applied to the skin, and the CR was salivation. But Pavlov discovered that the sights and sounds produced accidentally by the experimenter and his apparatus might also become CSs or interfere with the process of conditioning. He found it necessary to develop techniques and apparatus for collecting and measuring the magnitude and latency of the salivary CRs and URs and for controlling the duration, magnitude, and time relations of the CSs and USs. These and similar procedures for isolation of the experimental subject and for control of the environment and measurement of responses are part of the technical procedures of classical conditioning.

Generality

Classical conditioning has very great generality. There is no apparent limit to the kinds of responses that can be conditioned in this manner or to the kinds of events that can serve as CSs. Any response that is evoked consistently by the US can be conditioned, and any stimulus that passes the initial test of neutrality can serve as a CS. Many stimuli fail this test, particularly when human beings are the subjects.

The generality of classical conditioning can be viewed also in terms of the level in the phyletic series and the chronological age of the organism in which conditioning first occurs. The few reports of classical conditioning of one-celled organisms have not been confirmed, and there is serious doubt that it is possible to achieve conditioning in these organisms. The evidence on conditioning of the worm is in similar confusion, and it is very doubtful that organisms without a true nervous system can be conditioned. There is little question, however, that organisms higher in the phyletic series, which possess a true nervous system, can be conditioned. Conditioning of infants of many species, including the human being, has been reported. There have also been reports of successful classical conditioning of the human fetus in the age range of 6.5–8.5 months.

Parameters of classical conditioning

It is possible to measure the latency, duration, and amplitude of a CR and also the frequency of its occurrence within a specified period of time. Ordinarily, only one or perhaps two of these measurements will be made concurrently, and whatever measure is applied to the UR will be applied also to the CR as a basis for comparison. Similar but not identical assessments of conditioning are obtained when different measures are used.

As conditioning develops over practice trials, the latency of the CR decreases, while the duration, the amplitude or magnitude, and the frequency of the CR increase. Eventually, each of these measures approaches an asymptotic level and does not change with further conditioning trials. The rates at which such asymptotic levels are approached vary with the measures used. It is common practice in experiments on classical CRs either to give all subjects an equal number of conditioning trials or to continue conditioning training until the same performance level has been attained by every subject. In the former case, individual differences between subjects show up in the variable level of conditioning attained, and in the latter case the differences appear in variations in the number of training trials required to reach the performance criterion. There are advantages and disadvantages to each of these procedures, but the results of any given experiment will depend upon this methodological consideration as well as upon the measurement procedures.
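
The negatively accelerated growth of these measures toward an asymptote can be illustrated with a minimal linear-operator sketch, in which each reinforced trial moves response strength a fixed fraction of the remaining distance toward its limit. The learning rate and asymptote below are illustrative assumptions, not values drawn from any particular experiment.

    # A minimal linear-operator sketch of acquisition: each reinforced trial
    # moves CR strength a fixed fraction of the way toward an asymptote.
    # The parameters are illustrative, not empirical.

    def acquisition_curve(trials, rate=0.2, asymptote=1.0, start=0.0):
        """Return CR strength after each of `trials` reinforced trials."""
        strength, curve = start, []
        for _ in range(trials):
            strength += rate * (asymptote - strength)  # negatively accelerated
            curve.append(strength)
        return curve

    print(["%.2f" % s for s in acquisition_curve(10)])
    # Strength rises rapidly on early trials and levels off near the asymptote,
    # so further conditioning trials produce little change, as described above.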

Temporal contiguity of stimuli

The rate or amount of conditioning and its relative difficulty are functions of the time relations between the CS and the US. Different names have been given to conditioned responses that are established under different time relations of the conditioned and unconditioned stimuli.

Simultaneous conditioning. The simultaneous conditioned response is developed when the CS and the US are coincident or when the onset of the CS precedes the onset of the US by an amount of time just sufficient for the CR to occur. Exact simultaneity makes it impossible to follow the course of conditioning over each trial since there is no way to distinguish between the CR and the UR. If occasionally the CS is presented alone, the CR can be measured—but then these trials are extinction trials and interfere with conditioning. Since the acquisition process cannot be measured precisely in the simultaneous situation, it is standard procedure for the onset of the CS to precede the onset of the US and for the CR to be considered a simultaneous CR. The length of the interval between the onset of the CS and onset of the US that defines the simultaneous CR depends on the latency of the UR. For very fast striated-muscle responses, this interval ranges from a quarter of a second up to several seconds. For slow, or long-latency, responses, such as those of smooth muscle and glands, the interval varies from a few seconds up to thirty or more seconds.

Delayed conditioning. With the delayed conditioned response the interval between the onset of the CS and the onset of the US is longer than it is for the simultaneous CR, and the delayed CR also has a longer latency. Delayed CRs are subclassified into short delay and long delay CRs, depending upon the interval between onset of the CS and onset of the US.

Trace conditioning. The trace conditioned response is very similar to the delayed CR. The time relations for onset of the CS and onset of the US are identical. The time relations differ only with respect to the termination of the CS. For delayed conditioning, the CS overlaps the US and terminates either with it or some time after its onset. In trace conditioning the CS lasts as long as a CS in simultaneous conditioning, but it terminates a considerable time before the onset of the US. The trace CR has the same latency as the delayed CR, but because of the short duration of the CS it occurs in the time interval between the termination of the CS and the onset of the US. Since it is assumed in classical conditioning that no response occurs in the absence of a stimulus, it is assumed in this situation that there is a trace of the CS to which the CR is made.

Temporal conditioning. With the temporal conditioned response the US is presented alone at a constant rate. The time interval between successive presentations of the US functions as a CS. The CR occurs just prior to the time at which the US is to be presented.

Backward conditioning. For the backward conditioned response, onset of the CS occurs after termination of the US and the UR. The CR occurs, of course, to the CS, after the prior occurrence of the US and the UR.

Pseudo conditioning. The pseudo conditioned response occurs without any paired training trials of the CS and the US. First the US is presented alone for a series of trials. Then, after a short interval of time, the CS is presented alone. If a response similar to the UR occurs to the CS, it is called a pseudo conditioned response.

Effects of temporal variations. The simultaneous CR is acquired most rapidly and with the greatest ease. The first CR may occur on the second or third trial. Delayed and trace CRs are acquired with about equal difficulty if the time between the onset of the CS and that of the US is equal for both. Both are more difficult to establish than the simultaneous CR. For simultaneous, delayed, and trace conditioning the difficulty of conditioning increases as the time between the onset of the CS and that of the US increases. With long delay or long trace conditioning it is impossible to develop a CR unless a simultaneous CR is established first and training trials are then arranged in which the time interval between onset of the CS and onset of the US is gradually increased.
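
These arrangements differ only in the relative timing of CS and US, so they can be summarized schematically. The onset and offset values below are illustrative assumptions chosen to mark the ordinal relations just described, not standard experimental values.

    # Schematic CS-US timing (in seconds from trial onset) for the paradigms
    # described above. The values are illustrative; where the boundary between
    # "simultaneous" and "delayed" falls depends on the latency of the UR.

    PARADIGMS = {
        #               (CS onset, CS offset, US onset)
        "simultaneous": (0.0, 1.5, 0.5),   # CS slightly leads and overlaps the US
        "delayed":      (0.0, 6.0, 5.0),   # longer CS-US interval; CS still overlaps
        "trace":        (0.0, 1.0, 5.0),   # CS ends well before the US arrives
        "backward":     (6.0, 7.0, 0.0),   # CS begins only after the US and UR
    }

    for name, (cs_on, cs_off, us_on) in PARADIGMS.items():
        overlap = cs_on <= us_on <= cs_off
        print(f"{name:12s} CS {cs_on}-{cs_off} s, US at {us_on} s, CS-US overlap: {overlap}")

    # Temporal conditioning has no separate CS at all (the constant interval
    # between US presentations serves as one), and pseudo conditioning involves
    # no pairing: the US is presented alone before the CS is ever tested.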

The temporal CR for short intervals of time is more readily established than the simultaneous CR. It may be acquired even when the interval between presentations of the US is as great as thirty minutes. It has not been studied extensively, but because it can be so easily acquired in standard simultaneous conditioning, it is necessary to vary the intervals between trials in a random sequence in order to prevent the occurrence of temporal conditioning.

The backward CR is more difficult to establish than are forward CRs. The difficulty of backward conditioning increases as the interval between termination of the US and onset of the CS increases. The range over which this relation holds is small, since there is no evidence of backward conditioning for the longer time intervals between the onset of the US and the onset of the CS that are possible in delayed or trace forward conditioning.

Pseudo conditioning appears to depend upon the use of a very-high-intensity US that produces a large-magnitude, diffuse, emotional UR. The pseudo CR is neither as readily established nor as stable as a CR based upon equivalent practice with simultaneous conditioning procedures. Pseudo conditioning may be treated as a variant of backward conditioning, since presentation of the US precedes presentation of the CS. The efficiency of the method is low.

Other factors affecting conditioning

The study of other variables that affect the rate or speed of acquisition has been conducted primarily with simultaneous CRs. However, what evidence there is suggests that the effects of these variables on other types of CRs are similar. Distribution of practice, magnitude of the US, deprivation condition of the organism, physiological condition, neural condition, intensity of the CS, and the number of trials are independent variables that are well established as relating to acquisition.

Distributed versus massed practice. Some degree of distribution of practice provides the most rapid conditioning. The particular optimal distribution depends upon other variables, such as time relations, nature of the US and CS, and nature of the organism. The results universally show massed practice to be inferior to some form of distributed practice in speed of conditioning.

Magnitude of the unconditioned stimulus. Speed of conditioning increases with the magnitude of the US up to a point and thereafter declines with further increases in the magnitude of the US. This has been well established for CRs for which the US is food or electric shock.

Deprivation. The deprivation condition of the organism interacts with the magnitude of the US when the US is food. With very large magnitudes of US and fairly low degrees of deprivation, after only a few conditioning trials the subject may have received its total daily intake and become satiated. When the US is held constant at relatively small magnitudes and the period of deprivation increases, speed or amount of acquisition increases up to a point and then declines with further increase. This relation holds separately for food and for water, but there is probably an interaction between the two.

Deprivation is sometimes treated as a particular physiological condition, but physiological condition usually refers to endocrinological, biochemical, or drug conditions, and the effects of these on conditioning are complex.

Neural conditions. The effects of neural conditions are studied through application of the CS and US at different levels of the nervous system or through the use of conditioning procedures on organisms when varying levels of the nervous system are rendered functionless. Conditioning is possible if the CS is electrical stimulation of the sensory cortex or of the sensory tracts in the spinal cord. There is much evidence that conditioning will not occur if the US is direct stimulation of the motor cortex. The decorticate animal can be conditioned, but the weight of evidence is against the possibility of conditioning the spinal animal.

Stimulus intensity and complexity. Conditioning increases in rate and magnitude as the CS is increased in intensity from the stimulus threshold to the middle range of intensity but not to greater intensities. Conditioning at very high CS intensities has not been tested. Conditioning occurs at a greater magnitude and faster rate with compound CSs than with any one of the single component stimuli, whether the CSs are from the same or from different sense modalities. Variation in the characteristics of CSs makes it possible to study sensory thresholds and discriminations by conditioning procedures.

Transfer

Many instances of positive and negative transfer have been found in studies of classical conditioning. Transfer is usually positive when it is measured as a difference between performances under conditions that involve acquisition procedures. It is most often negative when it is measured as a difference between performance at a terminal level of acquisition and a subsequent performance that results from alteration of some variable present during acquisition. Positive transfer occurs for any subsequent CR elicited by a new CS, by a new US, or by both at the same time. The only instances of negative transfer under the above conditions occur when the UR and CR for the second treatment are the reverse of or are incompatible with those of the first conditioning treatment.

Stimulus generalization, response generalization, and incentive generalization are forms of positive transfer. Stimulus generalization refers to the occurrence of the CR to stimuli similar to the CS in the absence of specific training with those stimuli. Stimulus generalization declines as the degree of similarity along a given dimension (such as frequency of a tone in cycles per second) decreases. The form of stimulus-generalization gradients is not known because of serious technical difficulties in measuring stimulus generalization. Cross-modal stimulus generalization occurs only rarely. Response generalization can be studied only in limited fashion, such as from right to left side of body, but may also involve opposing responses when the CR is prevented from occurring. Incentive generalization refers to positive transfer of a CR that occurs following variation in the US, such as in the kind of food or in the frequency or locus of application of electric shock.

Extinction

The most striking transfer phenomenon is experimental extinction. This form of negative transfer occurs when, after acquisition, the US is omitted and the CS is presented alone under the same schedule as that used during acquisition. There is a progressive decrement in the magnitude and frequency of occurrence of the CR to the zero level. This phenomenon has led to a conception of reinforcement of the CR by the US. Empirical nonreinforcement refers to nothing more than the decrement of conditioning when the US is absent from the training procedure, and empirical reinforcement refers to nothing more than the original acquisition of the CR when the US is present in the training procedure and the reinstatement of the CR by reintroduction of the US following extinction. The theoretical views of reinforcement are many, and much of the study of extinction has been directed toward discovery of the reinforcement functions of the US.

With classical conditioning, there appears to be a positive relation between strength of conditioning and resistance to experimental extinction. The greater the amount of conditioning training and the larger the measures of CR magnitude, the greater the resistance to extinction. Any variable that increases the strength of a CR also increases its resistance to extinction. As the degree of extinction training is increased, the amount of training required to re-establish the CR is increased. There is even continued “silent” extinction beyond the zero level of CR: a greater amount of conditioning training is required to re-establish the CR than that required when just the zero extinction level is attained. If there is successive alternation of conditioning and extinction, each reversal requires fewer training trials, until a single trial of either the conditioning or the extinction procedure is sufficient to provide consistent response or response failure to the CS. Speed of extinction is greater for massed extinction trials than it is for distributed extinction trials. Extinction of a CR to one CS will increase the rate of extinction of the CR to a second CS that has not been given extinction training. This is called secondary extinction and is evidence of a generalization of extinction. Delay, trace, and pseudo CRs show more rapid extinction than do simultaneous CRs.

Although reintroduction of the US to the training procedure is the most efficient way to reverse experimental extinction, there are three other procedures that will also produce reversal. If the subject is removed from the experimental room at the time a zero level of extinction has been reached and is returned at a later time, the CS will evoke a CR. This recovery of the conditioned response is called spontaneous recovery. However, with sufficient extinction training there is no spontaneous recovery.

It is possible also to produce recovery of conditioning following extinction by presenting the US alone for a few trials. The CR will then be evoked by the CS presented alone.

Recovery of the CR following experimental extinction may also be produced by disinhibition. This occurs when a novel stimulus is presented in the laboratory at the termination of an extinction series. Disinhibition is a temporary phenomenon, and its magnitude is a function of the extent of extinction and the intensity of the disinhibiting stimulus.

Retention

The retention of classical CRs has received little study. What evidence there is indicates very high degrees of retention in animals that have experienced no conditioning for several years.

W. J. Brogden

[Directly related are the biographies of PAVLOV and WATSON. Other relevant material may be found in FORGETTING.]

BIBLIOGRAPHY

Hilgard, Ernest R.; and Marquis, Donald G. (1940) 1961 Hilgard and Marquis’ Conditioning and Learning. Revised by Gregory A. Kimble. 2d ed. New York: Appleton. → First published as Conditioning and Learning.

Pavlov, Ivan P. (1923) 1928 Lectures on Conditioned Reflexes: Twenty-five Years of Objective Study of Higher Nervous Activity (Behavior) of Animals. New York: International Publishers. → First published as Dvadtsatiletnii opyt ob’jektivnogo izucheniia vysshei nervnoi deiatel’nosti (povedeniia) zhivotnykh.

Pavlov, Ivan P. (1927) 1960 Conditioned Reflexes: An Investigation of the Physiological Activity of the Cerebral Cortex. New York: Dover. → First published as Lektsii o rabote bol’shikh polusharii golovnogo mozga.

Razran, G. 1961 The Observable Unconscious and the Inferable Conscious in Current Soviet Psychophysiology: Interoceptive Conditioning, Semantic Conditioning, and the Orienting Reflex. Psychological Review 68:81–147.

Stevens, S. S. (editor) 1951 Handbook of Experimental Psychology. New York: Wiley.

III INSTRUMENTAL LEARNING

The concept of instrumental learning is a powerful one, primarily because it assists psychologists in their goals of predicting and controlling, or modifying, behavior. The concept is based on an empirically derived functional relationship between the probability of a response and the previous consequences of that response: while battles between theoreticians have not yet been fully resolved, it is generally useful to consider that a response to a stimulus situation is particularly likely to be strengthened if it is followed by what may loosely be called a satisfying state of affairs.

This presentation will be chiefly concerned with applications and implications of instrumental, or operant, learning that may be of more general interest to social scientists; the technical aspects are covered elsewhere [see Learning, article on reinforcement].

Adherents of this position have maintained that most of the forms of behavior exhibited by infrahuman animals can be effectively analyzed in instrumental terms. A dog approaches when its name is called, horses respond to signals from their riders, cats “know” where to go for food, and cattle avoid an electrified fence, all because of the consequences of previous relevant behavior. Applying the techniques of instrumental conditioning in laboratory settings, experimental psychologists have succeeded in eliciting behavior of great complexity. B. F. Skinner has trained pigeons to engage in behavior that strikingly resembles table tennis. He and his colleagues have also, by reinforcing aggressive behavior, converted usually placid birds into “vicious” killers. The familiar reverse procedure, taming, is also accomplished by instrumental conditioning. Discriminations between colors, shapes, tones, and so forth, can be taught by differential application of reinforcements. For example, an animal can readily be trained to approach a green disc when a low-pitched tone is presented and to approach a red disc in response to a tone of higher frequency. A procedure as simple as this provides a most reliable means of ascertaining sensory capacities of animals. If, say, a food-deprived rat is given food whenever it presses a blue lever but is given no reinforcement when it presses a yellow lever of equal brightness, then equal rates of pressing the two levers would lead us to conclude that the animal cannot distinguish between the two colors. For the sake of simplicity this example has ignored both the possibility that in this instance reinforcement is not effective and the important finding that the kinesthetic and other stimulation resulting from a lever press may itself be reinforcing; recent studies indicate that almost any form of nonnoxious stimulation may possess, under certain specifiable conditions, extremely important reinforcing properties. [See Learning, article on discrimination learning.]
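
The logic of such a sensory test is easy to make concrete. The sketch below simulates differential reinforcement of two levers under two hypothetical subjects, one that can and one that cannot tell the colors apart; all names and parameters are invented for illustration.

    import random

    # Pressing the "blue" lever is reinforced; pressing the "yellow" lever is
    # not. If the subject discriminates the colors, reinforcement strengthens
    # only the blue-lever tendency; if it cannot, reinforcement acts on lever
    # pressing indiscriminately and the two rates stay equal.

    def discrimination_test(can_discriminate, trials=2000, step=0.05):
        tendency = {"blue": 0.5, "yellow": 0.5}
        for _ in range(trials):
            lever = random.choice(["blue", "yellow"])
            reinforced = 1.0 if lever == "blue" else 0.0
            affected = [lever] if can_discriminate else ["blue", "yellow"]
            for key in affected:
                tendency[key] += step * (reinforced - tendency[key])
        return tendency

    print(discrimination_test(True))    # tendencies diverge: discrimination shown
    print(discrimination_test(False))   # tendencies remain equal: none shown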

In terms of potential for generalization, few consequences of the learning process are more influential than what Harry F. Harlow (1949) has called “learning to learn.” A monkey that has learned to make a particular discriminative response in a particular stimulus situation (e.g., to the “odd” stimulus in a three-stimulus array) has apparently learned not only that a particular response is rewarded in this situation but also that situations may have embedded in them stimulus properties which, if reacted to appropriately, lead to reinforcement. Thus, if “oddness” ceases to be relevant or reinforcing in this or other situations, a new principle will be sought (e.g., the largest object or the object on the far left of the array). Furthermore, this principle will be discovered far more readily if the animal has previously learned the value of learning.

There is good reason to believe that even some of the most “basic” forms of animal behavior are acquired through and maintained by instrumental conditioning. Copulation, for example, at least in those mammalian species in which it has been carefully studied, is apparently a product of learning. Rhesus monkeys without previous sexual experience have been observed to engage in considerable trial and error—males mounting males, and so on—before arriving at the usual preference for heterosexual intercourse. It is clear that “normal” sexual behavior becomes preferred simply because, for anatomical reasons, it is most likely to be reinforced. Statistically deviant forms of sexuality can be explained in exactly the same way: homosexual or autoerotic activity will tend to be repeated to the extent that it has been positively reinforced. A critical period for learning may exist: learning during a particular portion of the animal’s life may be more influential than learning that takes place either earlier or later. Such a possibility would seem to hold greater promise for future research than speculations about the “mental health” of the deviant animals. [See Imprinting; Sexual behavior.]

In a similar vein is the finding by Melzack and Scott (1957) that escaping from pain is a function of particular stimulus dimensions present early in life. It is apparently no longer safe to assume that if a stimulus-response bond is of sufficient biological importance, it is genetically transmitted. [See Pain.]

As a final example of operant conditioning in infrahuman animals, let us consider what has been called “neurotic” behavior. After an animal has learned that a particular response (e.g., turning left in a T maze) is followed by positively reinforcing stimulation, the experimenter arranges for a strong electric shock, or other negative reinforcer, to be contingent upon the same response. If the experimenter has skillfully controlled his variables, he will have induced conflict. The frustrated animal will display fairly stereotyped behavior: going round in circles, “freezing,” or showing other signs of acute and stressful indecision. Even if the shock apparatus is disconnected, the animal will continue to exhibit this behavior (which presumably serves to reduce anxiety) and may even starve to death unless “therapeutic” measures are taken. It is the compulsive self-destructive nature of the behavior and its origin in conflict that may justify the designation of “neurotic.” Generalization to the human level must, of course, be undertaken with great caution. It should be noted that neurotic behavior can also be elicited by classical procedures. [See Conflict, article on psychological aspects.]

Instrumental conditioning in humans

B. F. Skinner chose to give to his seminal 1938 book the general title The Behavior of Organisms. No new principles, he suggests, need be invoked in order to study human behavior. There is good reason to believe that the human equivalents of the types of behavior described above are also attained, or at least maintained, by means of operant conditioning. Thus, for example, while a definitive experiment will probably never be conducted, it seems reasonable to assume that all forms of sexual behavior are learned by trial and error, by imitation, or by instruction. And both the tendency to imitate and the tendency to follow instructions can themselves be understood as functions of the organism’s reinforcement history [see Imitation].

Language

Perhaps the most important human skill from the standpoint of psychology is language. The initial babblings of the infant are random or semirandom; some sounds, like “ma,” are easier to produce than others and will therefore be emitted at a relatively higher base rate. These babblings are differentially reinforced. Differential reinforcement simply means, in this case, that the mother—or other socializing agent—will find certain sounds more reinforcing than others and will behave in such a way as to increase the frequency of these sounds. “Ma,” as a response, is strengthened; “ka” and “la” are not. The infant next learns that “ma” is reinforced only under certain conditions. Under other conditions (e.g., when only the father is present), “ma” is either ignored or is negatively reinforced. More complicated utterances can be built up in similar fashion. Thus, while the development of a repertoire of sounds and sound combinations that serve as responses to specific external or internal stimulus situations may be a prodigious feat, it is not a particularly mysterious one (Skinner 1957). Not all language is learned in this rather inefficient fashion. Many utterances result from imitation or from intentional instruction.

Recent experiments have revealed that the verbal behavior of adults can likewise be manipulated. Rates of emission, particular sounds, parts of speech, sentence length, and content are responsive to operant-conditioning procedures in which a smile or a nod provides quite effective reinforcement (see, for example, Portnoy & Salzinger 1964). It might also be pointed out here that operant conditioning is now being used by Lilly (1963) in an effort to teach bottle-nose dolphins to use the English language “intelligently.” [See Language, article on language development.]

Creativity

Another important kind of activity, creativity, is also susceptible to analysis in these terms. Here the key psychological, as distinguished from sociological, variable is novelty of response to a stimulus situation. Numerous studies (e.g., Maltzman et al. 1960) have demonstrated that the individual will learn to make novel responses if he is rewarded for doing so. A “motive” for originality is thus instilled just as readily as the “motive” for conformity is instilled in most of our educational institutions. The learned tendency to produce “original” responses, combined with the learned tendency to view one’s productions critically, may be enough to explain even the highest achievements of human creativity. The fact that computer-type machines can be programmed to “create” music in a large variety of styles points in the same direction. [See Conformity; Creativity.]

Emotional behavior

Operant conditioning also provides a useful framework for studying those most complicated psychological processes, the emotions. Whether they are subjective states or physiological events, or both, emotions may be viewed as (a) responses to antecedent stimuli and (b) stimuli for consequent responses. As responses, emotions are learnable. It is extremely doubtful that the newborn infant has any emotions at all; he almost definitely is not afraid of the dark, or of falling, or of snakes; he does not love his mother, nor does he feel inferior. How are these emotions learned? It appears that no new principles need be invoked. A child observing his mother’s frightened reactions to a thunderstorm may exercise his acquired tendency to imitate by manifesting similar signs of fright. These signs are reinforced by maternal solicitude. Studies indicate high correlations between the fears of children and of their parents (Hagman 1932).

As an extended hypothetical example, consider what is generally called love. The child may learn that the words “I love you” are often followed by more tangible rewards and that his own manifestations of “loving” behavior are followed by rewards from people in his environment. Consequently he may learn to seek out circumstances in which it is appropriate for such kinds of behavior to occur. He wants to love and be loved. The rewards of establishing a love relationship, which go far beyond the food and tactile stimulation that probably served as initial reinforcement, may include temporary freedom from corporal punishment, victory in a sibling rivalry, or erotic stimulation. With such a multitude of possible reinforcements, the child may readily learn those forms of behavior that elicit “loving” behavior from the people in his environment. And one particularly effective method for achieving this is to engage in “loving” behavior oneself. But since the well-socialized child has also probably learned that there are negative reinforcements for deceptive behavior, he must get himself to actually “feel” the emotion he is expressing. This necessary internal state is conditionable, but a classical conditioning model is probably more useful here than the operant model. [See Affection; Emotion; Moral development; Personality, article on personality development.]

Religiosity

One’s religious convictions may likewise be regarded as nothing more than a complex set of learned responses to a complex set of stimuli. The objection that man is born with a knowledge of and reverence for God receives its strongest rebuttal from the practices of the major religions, whose emphasis on Sunday school and other forms of religious instruction seems at odds with such concepts as revelation or innate knowledge. The combination of formal religious training, informal parental inculcation (e.g., answering “God did it” to difficult questions in the natural sciences), and ubiquitous social pressures (e.g., the motto “In God We Trust” on U.S. currency) makes clear why so many children engage in religious behavior. Whether the mediator in such cases is the learned motive to conform or the learned motive to imitate, religious utterances and other religious activities tend to be positively reinforced. The intense emotionality that so often accompanies or defines the “religious experience” may result from the capacity of religion to satisfy such needs as dependency, affiliation, and (perhaps) erotic gratification, needs that may themselves be products of instrumental learning. Thus, to the extent that religiosity is inferred from behavior, principles of conditioning appear to be sufficient for a complete explanation. [See Religion.]

Mental illness

As a final example of the widespread applicability of these principles, we may consider those forms of behavior that characterize “mental illness.” It is possible to regard the disordered or undesired behavior simply as a set of responses that have become progressively stronger because of their reinforcing consequences. This formulation holds true even if the behavior (nail-biting, cigarette smoking, destructive interpersonal relationships, self-degrading activities) appears to have negative consequences. The “reward” in such cases may be temporary relief from anxiety, satisfaction of abnormally strong learned needs to conform or not conform, to confirm a self-concept, and so on. There may be a wide gap in complexity between a rat that makes the correct choice in a T maze and an accident-prone human, but it is possible to understand the behavior of both animals in terms of the same principles. [See Mental disorders; Neurosis.]

Practical applications

In most sciences, including psychology, the goals of prediction and control, or modification, are inextricably related. While the foregoing paragraphs have emphasized the prediction of responses, the following examples refer specifically to the possibility of response modification or control.

Programmed instruction

The widespread adoption of “teaching machines” and other programmed instructional devices in schools all over the world attests to the ability of operant methods to establish the repertoire of stimulus-response bonds deemed necessary by society. Instead of the primary reinforcement of a pellet of food, so often used to control or shape the behavior of infrahuman animals, it has been found that such secondary reinforcers as “the feeling of success” are extremely effective for human students of all ages. There is nothing mysterious about these secondary reinforcers; their emergence can be predicted—or arranged—by virtue of their frequent association with primary rewards. So long as teacher shortages exist, programmed instructional devices will continue to be useful adjuncts to more conventional pedagogic techniques. But these devices are of far more than ancillary value. They provide advantages that are uniquely their own. Each student is permitted to proceed at his own rate and thus avoids either the boredom or the frustration that may result from the single-level approach so often necessary in crowded classrooms. Furthermore, the use of programmed materials largely does away with extrinsic rewards, such as grades and teacher approval, by relying primarily on the reinforcing nature of the learning process itself. While many of their potentialities remain to be developed, it is difficult to conceive of any academic subject matter that cannot be taught, and taught effectively, by means of these methods. [See Learning, article on programmed learning.]

Behavior therapy

Principles of operant conditioning have also been found to be extremely useful in the modification of undesirable behavior. The behavior in question, whether it involves an isolated S–R connection (e.g., fear-responses to heights) or a complex pattern of behavioral tendencies from which some clinicians infer “neurosis” or “psychosis,” can be altered by extinguishing the undesired response while building up a desired response (or set of responses) to the same stimuli. For example, homosexuality—a form of behavior that is notoriously resistant to traditional varieties of psychotherapy—often responds quite favorably to what is called behavior therapy (see, for example, Feldman & MacCulloch 1964). Procedures differ widely, but the following outline of treatment may be of illustrative value. The “patient” (a term which is particularly inappropriate in this context) is requested by the therapist to have a homosexual fantasy. When he signals that the fantasy has reached a peak of excitement, the individual receives a painful electric shock. This procedure is repeated over several sessions, with the result that in subsequent interviews, when it is clear to the patient that shock will not occur, he reports that homosexual thoughts and behaviors are gradually being extinguished. During the extinction process, heterosexual motives and activities are strengthened by means of familiar techniques of reinforcement. Early in the treatment, for example, the individual is directed to masturbate while engaging in heterosexual fantasies; before very long, an association develops between having an orgasm and visualizing a partner of the opposite sex. Within one year most individuals exposed to this form of treatment are behaving in ways acceptable to society and to themselves outside the therapist’s office. New forms of undesirable behavior do not appear, and the proportion of “relapses” is far smaller than that encountered in other forms of treatment. [See Mental disorders, treatment of, article on behavior therapy; Sexual behavior, article on homosexuality; see also Feldman & MacCulloch 1964.]

Obviously, behavior therapy is not simply symptom removal. Starting with the premises that the undesired behavior has been learned and that whatever has been learned can be unlearned, the method proceeds to instill new learnings as efficiently as possible. As a by-product of this counterconditioning, we may expect a reduction in the anxiety engendered by the behavior in question. This reduction, as measured, for example, by the galvanic skin reflex, will, in turn, tend to lower the frequency of pathological behavior. Alternatively the therapist may choose to attack the anxiety more directly by viewing it as a learned response to specifiable interpersonal or other stimuli. [See Anxiety; Personality, article on the field.]

There appears to be no qualitative difference between the treatment of a simple self-destructive habit and of the most complex of neurotic constellations. Although some critics might accuse them of unjustified reductionism, proponents of behavior therapy would allege that the more conventional methods of treatment take longer and have a lower success rate because the necessary learning process is managed inexpertly, being incorrectly regarded as little more than an epiphenomenon of insight, catharsis, and so on.

The apparent therapeutic effectiveness of Skinnerian methods should not blind the reader to the equally stimulating applications of Pavlovian methodology to behavior therapy. The cautionary note should also be added that the number of individuals, the number of conditions treated, and the duration of follow-up studies are not yet sufficient to justify unreserved acceptance of the new methodology. Still, unlike certain other recent innovations in therapy, the use of conditioning is firmly based on a mass of quite unequivocal laboratory data. [See Mental disorders, treatment of.]

Implications

As has been indicated, the principles and practices of instrumental conditioning provide useful tools for the prediction and control of behavior. This practical utility leads to a consideration of a number of quite crucial questions.

First, can a science of psychology exist entirely on the basis of prediction and control, without regard to the task of understanding behavior? The question virtually answers itself if the goals of understanding are made explicit. Although the wish to understand is, for some, based largely on intellectual curiosity rather than on the desire for practical applications, there are only two ways that one can persuasively confirm, test, or demonstrate understanding: by predicting or by controlling the phenomena he claims to understand. Furthermore, some psychologists wish to understand primarily because they wish to predict and control. It might also be pointed out here that the individual who wishes to apply psychological principles to his own betterment can do so by means of, and perhaps only by means of, that intelligent arrangement of stimulus-response contingencies called self-control. Clearly, the specification of empirical regularities in the occurrences of stimuli and responses eliminates the need for prior “understanding.” The thoroughgoing adherent of the instrumental point of view might also claim that explanations of behavior in terms of the functioning of the central nervous system are likewise unnecessary.

Second, are there any forms of behavior that do not make sense within the framework of instrumental learning? Some writers have argued that behavior which is very complex requires a more complicated explanatory model; but complex behavior—including behavior that involves language—yields readily to instrumental analysis. Such analysis of a phenomenon is not, of course, logically identical to a valid causal explanation of it. Some also maintain that there are forms of complex behavior (referred to as “instinctive”) that are genetically determined, but the realm of instinct seems to dwindle as more and more alleged instances yield to analysis in terms of prenatal or early postnatal conditioning. Certainly at the human level the concept of instinct seems no longer useful. On the other hand, there is no need to deny the existence of genetically transmitted unconditioned reflexes. [See Genetics, article on genetics and behavior; Instinct.]

Perhaps the best objection to what might be called instrumental imperialism is that many kinds of behavior fit more readily into the framework of classical, rather than operant, conditioning. But it may be that these two categories are not really distinct from each other. To give a somewhat oversimplified example, Pavlov’s dogs may have learned to salivate to the initially neutral stimulus because the response of salivation was “paid off” by the presentation of food.

A final consideration has to do with the ancient problem of free will versus determinism. If behavior is nothing but responses to stimuli, and if the stimuli determine the responses, then the concept of free will ceases to be necessary. The fact that different people may respond differently to the same stimulus is, of course, beside the point. The stimulus may not be the “same” at all, being contingent upon receptors, thresholds, and previous conditioning. And even if the stimuli are viewed as identical, response differences would be explicable by virtue of individual differences in reinforcement histories or physical abilities.

Because of the number and the complexity of the determining variables, some behavior may be, practically speaking, “unpredictable.” But this practical limitation in no way justifies an explanation of such acts in terms of free will. Analogously, the result of a coin flip, while usually attributed to “chance,” is the inevitable outcome of a set of variables: air currents, the force of the flip, the distance the coin is permitted to drop, and so on. While these variables can be ascertained only with great difficulty, we do not conclude that the coin has manifested free will. The same line of reasoning may be raised against those who invoke the physicists’ principle of indeterminacy in support of the free-will position.

In short, as the instruments, methods, and concepts in the science of psychology become increasingly sophisticated, the number of unpredictable and uncontrollable human acts appears to be shrinking proportionately. The widespread application of conditioning procedures is not without its dangers. But the possible abuses of this powerful tool should not obscure the recognition of its potential advantages. It does not seem unrealistically optimistic to view instrumental conditioning as a way, perhaps the way, to elicit from human beings those forms of creative, satisfying, and socially useful behavior that the less-systematic educational methods have so conspicuously failed to obtain.

Instrumental conditioning is by no means a new method of behavioral development. Indeed, if its principles are valid, they were operating long before they were formulated. But the recent advances that have been reviewed herein suggest that these principles will play an increasingly pivotal role in twentieth-century psychology.

Lawrence Casler

BIBLIOGRAPHY

Feldman, M. P.; and MacCulloch, M. J. 1964 A Systematic Approach to the Treatment of Homosexuality by Conditioned Aversion: Preliminary Report. American Journal of Psychiatry 121:167–171.

Hagman, Elmer R. 1932 A Study of Fears of Children of Pre-school Age. Journal of Experimental Education 1:110–130.

Harlow, Harry F. 1949 The Formation of Learning Sets. Psychological Review 56:51–65.

Lilly, John C. 1963 Productive and Creative Research With Man and Dolphin. Archives of General Psychiatry 8:111–116.

Maltzman, Irving et al. 1960 Experimental Studies in the Training of Originality. Psychological Monographs 74, no. 6.

Melzack, Ronald; and Scott, T. H. 1957 The Effects of Early Experience on the Response to Pain. Journal of Comparative and Physiological Psychology 50:155–161.

Portnoy, Stephanie; and Salzinger, Kurt 1964 The Conditionability of Different Verbal Response Classes: Positive, Negative and Nonaffect Statements. Journal of General Psychology 70:311–323.

Rogers, Carl; and Skinner, B. F. 1956 Some Issues Concerning the Control of Human Behavior. Science 124:1057–1066.

Skinner, B. F. 1938 The Behavior of Organisms: An Experimental Analysis. New York: Appleton.

Skinner, B. F. 1953 Science and Human Behavior. New York: Macmillan.

Skinner, B. F. 1957 Verbal Behavior. New York: Appleton.

IV REINFORCEMENT

The principle of reinforcement is not new. One form of that principle, the law of effect, dates back to Thorndike (1898–1901), who was one of the first systematic experimenters to observe that the development and maintenance of new instrumental performances are closely controlled by their environmental consequences. Thorndike theorized that an organism’s behavior was “stamped in” when it was followed by a satisfying state of affairs. By a satisfying state of affairs, Thorndike meant a condition that the animal did nothing to avoid, and whose maintenance and renewal the animal sought. Although our language has developed in the interest of greater scientific objectivity, and our experimental methods have progressed in the direction of greater precision and analytical prowess, Thorndike’s early observations on trial-and-error learning represent the foundations of modern effect, or reinforcement, theory.

In contrast to the trial-and-error, or instrumental, learning studied by Thorndike, Pavlov (1927) worked with classical conditioning procedures. Perhaps the best known example of Pavlov’s work is salivary conditioning. A stimulus which does not initially elicit salivation (the conditioned stimulus: a bell or metronome, for example) is presented in close temporal conjunction with a substance that does elicit salivation when placed in the mouth (the unconditioned stimulus: food powder or dilute acid, for example). After several paired presentations of the conditioned and unconditioned stimuli —provided sufficient attention is given to the details of the conditioning procedure—the conditioned stimulus gains the power to elicit salivation as a conditioned response. Because the development and maintenance of the conditioned response are closely dependent upon presentation of the unconditioned stimulus, the latter has been called a reinforcing stimulus.

Generalizing from the above considerations, it can be said that both instrumental, or Thorndikean, and classical, or Pavlovian, reinforcers may be looked upon as critical events in a learning episode. Just as the occurrence of reinforcement “strengthens” behavior, so the omission of reinforcement “weakens” behavior. In both instrumental and classical conditioning, the elimination of behavior by removing the reinforcer responsible for its maintenance is called extinction. Space does not permit a detailed comparison between instrumental and classical conditioning procedures. The reader is referred to Kimble’s revision of Hilgard and Marquis’ Conditioning and Learning (1940).

While this discussion has stressed the importance of reinforcement in learned behavior, it should be noted that not all psychologists agree on this point. E. R. Guthrie (1935), for example, developed a theoretical system in which learning does not depend on reinforcement. Although Guthrie agreed with Thorndike that learning consists of the bonding or conditioning of responses to stimuli, Guthrie maintained that simple temporal contiguity of response and stimulus is sufficient. Thorndike, it will be recalled, stated that reinforcement, that is, the satisfying state of affairs, was necessary in addition to stimulus-response contiguity. Perhaps the most extensive stimulus-response reinforcement theory was developed by C. L. Hull (1943). The similarities and differences among the various theories of learning constitute a study in themselves—Hilgard’s Theories of Learning (Hilgard & Bower 1948) should be consulted as a general reference. An indication of the type of research evolving from a theoretical concern with the nature of reinforcement is provided by Birney and Teevan’s Reinforcement (1961), a collection of original papers—some classics—by prominent experimentalists.

Instrumental or operant behavior

Particularly important in the development of knowledge regarding the dynamics of reinforcement has been the work of B. F. Skinner (1938; 1953) and his colleagues. Skinner has adopted a nontheoretical, descriptive approach in his analysis of behavior, and the results of his work have had great practical and systematic significance. The methodology characteristic of Skinner’s work has been analyzed and discussed by Sidman (1960) in his book Tactics of Scientific Research.

Instrumental, or operant, behavior may be defined as behavior that is under the control of its environmental consequences. Opening a door, walking across the street, speaking, etc., are examples of operant behaviors. When the consequence of a behavior serves to increase the frequency or probability of occurrence of the behavior, we refer to the consequence as reinforcement. Positive reinforcement involves the onset of some stimulus as the reinforcing consequence; negative reinforcement involves stimulus termination as the reinforcing consequence. Negatively reinforcing stimuli are often called aversive stimuli; it has been found that the onset of an aversive stimulus contingent upon a behavior will often decrease the probability of occurrence of that behavior. The reinforcement relationships just described are actually quite complex, and no simple statement will adequately summarize all of the detailed facts regarding positive and negative reinforcement. However, types of reinforcers and the ways in which they have been manipulated provide for some of the well-known behavioral paradigms.
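
The distinctions drawn in this paragraph reduce to a small table relating the consequence arranged for a response to its effect on future responding. The sketch below simply restates them; the wording is a paraphrase, not a formal definition.

    # The basic consequence arrangements described above, keyed by what is
    # done with the stimulus and its observed effect on response probability.

    CONTINGENCIES = {
        ("stimulus onset contingent on response",          "probability increases"):
            "positive reinforcement",
        ("stimulus termination contingent on response",    "probability increases"):
            "negative reinforcement (the stimulus is aversive)",
        ("aversive-stimulus onset contingent on response", "probability decreases"):
            "punishment",
    }

    for (operation, effect), name in CONTINGENCIES.items():
        print(f"{operation}; {effect} -> {name}")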

Reward training

In reward training a positive reinforcer is contingent upon the occurrence of a response. Thorndike’s experimental situation is a case in point; modern versions of the procedure involve such arbitrarily selected experimental behaviors as lever pressing, running in mazes and alleys, turning a wheel, jumping a gap, and, in humans, verbal behavior. Typical reinforcers that have been employed are food and water, for an animal appropriately deprived, and the opportunity to engage in sexual activity or to explore a novel environment; money, praise, etc., have been used with humans.

Escape training

Negative reinforcement involves the termination of an aversive stimulus. The behavior which terminates that stimulus is called escape behavior. Arbitrarily selected behaviors like those mentioned above have been used to study the properties of negative reinforcement. The most frequently used negative reinforcer has been electric shock, although reproof, social isolation, etc., have been used with humans.

Avoidance training

While an escape-training paradigm involves the presentation of the aversive stimulus independent of the organism’s behavior, a paradigm can be arranged in which some arbitrary response postpones or avoids the delivery of the aversive stimulus. Any response which does so is an avoidance response. Often a warning stimulus, such as a light or a buzzer, precedes by some predetermined period of time the scheduled occurrence of the aversive stimulus. In this arrangement, called discriminated avoidance, a response occurring between the onset of the warning stimulus and the scheduled onset of the aversive stimulus is the avoidance response. Typically, a response occurring during that interval terminates the warning stimulus and results in the avoidance of the aversive stimulus. Sidman (1953) has carefully studied an avoidance procedure, called nondiscriminated avoidance, in which no warning stimulus occurs. Instead, the aversive stimulus, such as electric shock, is scheduled to occur on a purely temporal basis. A response recycles a timer, and the shock is postponed. Ordinarily, more than one temporal interval is involved in this kind of experiment.
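
The timing logic of the nondiscriminated schedule can be sketched directly. In the sketch below, shocks recur on a shock-shock interval in the absence of responding, and each response postpones the next shock by a response-shock interval; the two interval values and the session length are illustrative assumptions.

    # Sketch of a nondiscriminated (Sidman) avoidance schedule. With no
    # responding, shocks recur every `ss` seconds (shock-shock interval);
    # each response resets the timer so that the next shock is due `rs`
    # seconds later (response-shock interval). All values are illustrative.

    def sidman_schedule(response_times, ss=5.0, rs=20.0, session=60.0):
        """Return the times at which shocks are delivered during the session."""
        responses = sorted(t for t in response_times if 0 <= t <= session)
        shocks, next_shock, i = [], ss, 0
        while next_shock <= session:
            if i < len(responses) and responses[i] < next_shock:
                next_shock = responses[i] + rs   # response recycles the timer
                i += 1
            else:
                shocks.append(next_shock)        # no response in time: shock
                next_shock += ss                 # S-S timer runs again
        return shocks

    print(sidman_schedule([]))                # no responding: shocks every 5 s
    print(sidman_schedule([4, 22, 40, 58]))   # steady responding: no shocks at all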

Punishment training

Punishment training involves the onset of an aversive stimulus contingent upon the occurrence of a response. An effective procedure for studying punishment has been employed by Azrin and is described by Azrin and Holz (1965). Animals are trained to respond through the use of positive reinforcement. A punishment is then applied, and the local, transient, and permanent effects of punishment are studied.

This outline is necessarily brief and cannot do justice to the many detailed findings in the control of behavior through reinforcement contingencies. In order to explore further some of those findings, however, we may consider in more detail the positive reinforcement of operant behavior.

Reinforcement and chaining

Some of the important facts regarding reinforcement can be displayed by considering a laboratory example where a pigeon is trained to step on a treadle in the rear of an experimental chamber, then to peck on an illuminated plastic disk or key on the wall, and, finally, to approach and eat from a grain tray. The first step is the adaptation of a hungry pigeon to the experimental chamber. After the bird is in the chamber for several minutes, the food tray is raised, with the grain illuminated by a small overhead light. The tray is held in place until the bird sees and eats some grain. After the bird has eaten for several seconds, the tray is dropped away, out of reach of the bird, and the light is turned off. The procedure is repeated until the bird responds immediately to the lifting of the tray and the illumination of the grain. By temporally spacing tray presentations, we provide for the extinction of approach behavior when the tray is not in the lifted position.

For the next step, we illuminate the translucent plastic disk on the wall. The pigeon is trained to stay in the vicinity of the disk by means of a procedure known as successive approximation. In this procedure each movement of the bird is noted, and as soon as a movement occurs that brings him a little closer to the disk, the grain tray is immediately lifted for a few seconds and the bird is allowed to eat. When the bird is near the disk his finer movements are observed, and, again by successive approximation, we bring his beak closer and closer to the disk until he pecks it. Each closer approximation to the desired response of pecking the disk is immediately followed by, that is, reinforced with, access to the grain tray. Next, we darken the disk and permit the pigeon to peck. The grain tray is not lifted. Soon the key is illuminated and a peck is followed by the grain tray’s being lifted. Several pecks at the dark disk are allowed, but none of them is reinforced with food. When the disk is illuminated, a peck produces grain. This procedure (that is, discrimination training) results in a rapid decrease in the frequency of pecking at the dark disk, with the maintenance of a high probability of pecking at the illuminated disk. In the final step, another example of successive approximation, we start with a dark disk. A movement by the pigeon in the direction of the treadle is immediately followed by illumination of the disk. The pigeon is allowed to peck the disk and eat from the tray. As before, when he pecks at the dark disk, the grain tray is not lifted. By the illumination of the disk, contingent on some preselected aspect of the bird’s behavior, we get him closer and closer to, and finally stepping on, the treadle. The behavior sequence is complete. The pigeon steps on the treadle; the disk is illuminated; the pigeon pecks the disk; the grain tray is immediately raised for a few seconds; the pigeon approaches the tray, and sees and eats the grain.
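
Successive approximation amounts to a simple rule: reinforce any momentary variant of behavior that comes closer to the target than anything reinforced so far, thereby shifting behavior and tightening the criterion together. The sketch below renders that rule abstractly, with "distance" standing in for the pigeon's distance from the disk; the numbers are invented for illustration.

    import random

    # A sketch of shaping by successive approximation. Behavior varies from
    # moment to moment around its current typical form; any variant closer to
    # the target than the current criterion is "reinforced," which both shifts
    # behavior toward that variant and tightens the criterion.

    def shape(target=0.0, start=100.0, spread=5.0, tolerance=1.0, steps=10000):
        position = start                   # current typical distance from target
        criterion = abs(start - target)
        for _ in range(steps):
            variant = position + random.gauss(0, spread)
            if abs(variant - target) < criterion:
                position = variant                   # reinforced variant recurs
                criterion = abs(variant - target)    # demand a closer try next
            if criterion < tolerance:
                break
        return position

    print(shape())   # ends within about `tolerance` units of the target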

Paradigm of behavior chain

The behavior chain may be written symbolically, as follows (“S : R →” indicates the stimulus, S, in the presence of which a specified response, R, will have a specified effect, that is, →, the production of a new stimulus):

S4 : R4 → S3 : R3 → S2 : R2 → S1 : R1

Here S4 is the vicinity of the treadle, with the disk dark, and R4 is stepping on the treadle; S3 is the illuminated disk and R3 is the peck; S2 is the raised tray, with its light and sound, and R2 is the approach to the tray; S1 is the sight of the grain and R1 is eating.

If the behavioral chain in question is “free running,” that is, if the bird is permitted to run it through over and over (a recycling chain), we might specify another stimulus event, S0. The dropping away of the grain tray, S0, is “produced” by (more exactly, correlated with) R1. Thus, S0 becomes the stimulus event in the presence of which the bird emits R0, approach to the treadle. The approach response R0 produces S4. Thus the chain is closed, forming a continuous behavioral sequence that might be expected to continue as long as deprivation (motivational) variables are effective, all else being equal.
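
Written out as a data structure, the recycling chain is a small state machine: each stimulus sets the occasion for exactly one response, and each response produces the next stimulus. A minimal Python sketch (the labels follow the notation above; running it simply walks the closed loop):

CHAIN = {
    "S4 (near treadle)":   ("R4 step on treadle",  "S3 (disk lit)"),
    "S3 (disk lit)":       ("R3 peck disk",        "S2 (tray raised)"),
    "S2 (tray raised)":    ("R2 approach tray",    "S1 (grain visible)"),
    "S1 (grain visible)":  ("R1 eat grain",        "S0 (tray drops)"),
    "S0 (tray drops)":     ("R0 approach treadle", "S4 (near treadle)"),
}

state = "S4 (near treadle)"
for _ in range(2 * len(CHAIN)):       # run the free-running chain through twice
    response, state = CHAIN[state]
    print(f"{response:<20} ->  {state}")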

Conditioned and primary reinforcers

In analyzing the sequence of events that we have just described, it is important to notice that the actual reinforcers maintaining the specific responses are light and sound rather than the ingestion of food. This is the distinction between conditioned and primary reinforcement. While the primary reinforcement, that is, the ingestion of food, is a necessary condition for maintaining the bird’s over-all performance, the conditioned reinforcers are made instantaneously and precisely contingent on the exact form of behavior that we wish to maintain. Virtually all operant behavior is maintained by conditioned reinforcers such as sounds and lights analogous to those described above, rather than directly by primary reinforcers such as food, water, oxygen, etc.

Principles of reinforcement

The preceding demonstration illustrates the following important principles pertinent to the operation of reinforcement:

(1) The strength of the several components of the response chain—stepping on the treadle, pecking the key, approaching the food tray—is maintained by their immediately reinforcing consequences. This fact can be demonstrated by performing extinction operations within the chain. Suppose we permit the pigeon to step on the treadle; but now, unlike the previous situation, we do not illuminate the disk when the pigeon responds in this way. Since the disk is not illuminated, the pigeon does not peck at it; but ceasing to illuminate the disk also constitutes the removal of conditioned reinforcement for the initial chain link, that is, stepping on the treadle. As a consequence of that removal of reinforcement, we may note first an increase and then certainly a decrease in the disposition of the bird to step on the treadle. The chain has been broken at its initial link. As a matter of fact, we could have broken the chain at any of its links by the simple expedient of removing the immediately reinforcing consequences of any of the responses making up the chain.

(2) Experiments of the kind just described demonstrate clearly the essential relationship among the several reinforcers of the chain. Food, the primary reinforcer that occurs at the end of the behavior sequence, is necessary to maintain the entire chain in strength, but each one of the several arbitrary links is closely controlled by the conditioned reinforcer that it immediately produces.

(3) A stimulus such as the illuminated response key serves a double function. Not only is it a reinforcer for the immediately preceding behavior, but it sets the occasion for the next response in the required sequence. Illumination of the key not only reinforces stepping on the treadle, but it also sets the occasion (that is, serves as a discriminative stimulus) on which a peck at the key will be reinforced. Implicit in the discriminative control by the illuminated disk is the corollary fact that when the disk is not illuminated, responses to it will have no further effect; the animal cannot progress any further in the chain. The animal’s behavior is under the specific control of each stimulus element in the chain, and it is this discriminative control that keeps the sequential emission of the chain going.

(4) The chain is constructed by starting with its final component. After the final component is securely developed, the next to the final one is added. When this is securely developed, another is added, and so on. In other words, the chain is built in a backward sequence. The reason for this procedure is readily appreciated if we take note of the fact that at the beginning of our training procedure we have at our disposal a single strong reinforcer, namely, the grain. The other events that finally serve as the conditioned link reinforcers are initially neutral and arbitrary events, such as light and sound. In order to establish such stimuli as conditioned reinforcers, it is necessary to associate them with already established reinforcers. Therefore, the compound stimulus, consisting of a flash of light illuminating the grain and the slap of the tray being raised, is established as a reinforcer through its association with the grain. It is thus capable of strengthening and maintaining peck responses on the illuminated disk. Note, however, that the illuminated disk is now correlated with the flash of the feeder light and the sound of the tray. By virtue of this association, the illuminated disk itself becomes a conditioned reinforcer and can be used to strengthen and maintain a still earlier member in the chain. Thus, practical considerations dictate the backward development of the sequence of conditioned reinforcers, and it is this development that makes advisable the backward development of a behavioral chain. A detailed review of positive conditioned reinforcement has been published by Kelleher and Gollub (1962).
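
The backward ordering can itself be stated as a short program. The sketch below is illustrative only: the chain is represented as (discriminative stimulus, response) pairs, each response produces the next link’s stimulus, and a link can be trained only when its consequence is already in the pool of effective reinforcers.

# (discriminative stimulus, response) pairs in forward performance order;
# each response produces the next link's stimulus, and the final response
# produces grain, the primary reinforcer.
chain = [
    ("dark chamber, treadle", "step on treadle"),
    ("illuminated disk",      "peck disk"),
    ("tray light and sound",  "approach and eat"),
]
reinforcers = {"grain"}        # at the outset, only the primary reinforcer is effective

# Build backward: training the last link first makes its discriminative
# stimulus a conditioned reinforcer for the next-earlier link.
for sd, response in reversed(chain):
    print(f"train '{response}' in the presence of '{sd}'")
    reinforcers.add(sd)        # sd can now reinforce the preceding link
print("effective reinforcers:", reinforcers)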

Kinds of control through reinforcement

Continuous versus intermittent reinforcement

For many years the typical laboratory experiment involved the continuous reinforcement of the criterion response. Continuous reinforcement refers to a schedule of reinforcement in which the behavior in question is reinforced each time it occurs. It is clear that an analysis of behavior restricted to this experimental program can have only limited applicability to the affairs of men. Men live in complicated societies. Their behaviors are not reinforced by automatic “grain trays,” but are subject rather to the possible whims and fancies of such agencies of society as government bureaus, social groups, religious groups, and, perhaps most important for the day-to-day existence of most of us, other individuals at home, at work, and at play. If there is any outstanding characteristic of people either in groups or as individuals, it is that their behavior is complexly determined. As a consequence of this complex determination, behavior is reinforced on an intermittent basis when individuals interact. For this reason, the general problem of intermittent reinforcement must occupy a central and crucial place in the experimental analysis of behavior, if the latter is to come to grips with the problems of human performance.

Intermittent reinforcement refers to the case in which some, rather than all, of the occurrences of the specified response are followed by a reinforcer. The phrase schedule of reinforcement refers to the particular rule by which reinforcement is made contingent upon some occurrence of a response. Broadly speaking, there are two general schemes whereby reinforcers can be related to response emission. Within either of these schemes, not to mention those cases where they are combined, there are literally thousands of different schedules. Many of the simpler ones and some of the more complex ones have been extensively studied in the laboratory using both animal and human subjects.

Interval and ratio schedules

When a rule specifying the contingency between a response and its reinforcement involves the passage of time, we speak of interval schedules. For example, we may specify that reinforcement will occur on the first response following a fixed period of elapsed time since the last reinforcement. Such a schedule is referred to as a fixed interval schedule of reinforcement. On the other hand, when the contingency involves some number of responses we speak of ratio schedules. We may specify that reinforcement will occur following the emission of the nth response since the last reinforcement. This rule, specifying a fixed number of responses, is ordinarily referred to as a fixed ratio schedule of reinforcement. These are simple cases, but they exemplify the two broad classifications of response-reinforcement contingencies referred to as “interval” and “ratio” schedules. These two broad classifications of reinforcement contingencies produce behaviors that have markedly different properties.
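
Both rules are simple enough to state as code. In the Python sketch below (the function names are ours), the fixed ratio rule arms reinforcement on every nth response, while the fixed interval rule arms it for the first response emitted T or more seconds after the last reinforcement.

def fixed_ratio(n):
    """Reinforce every nth response since the last reinforcement."""
    count = 0
    def on_response():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True            # deliver reinforcement
        return False
    return on_response

def fixed_interval(T):
    """Reinforce the first response after T seconds have elapsed."""
    last = 0.0
    def on_response(t):            # t = elapsed session time in seconds
        nonlocal last
        if t - last >= T:
            last = t
            return True
        return False
    return on_response

fr5 = fixed_ratio(5)
print([fr5() for _ in range(12)])              # True on the 5th and 10th responses
fi10 = fixed_interval(10.0)
print([fi10(t) for t in (3, 9, 11, 12, 25)])   # True at t = 11 and t = 25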

Characteristics of ratio schedules. Fixed ratio schedules are generally characterized by high rates of response emission. As the reinforcer is made contingent upon successively higher and higher response requirements, sharp breaks in responding ordinarily develop. Initially, these breaks appear following a reinforcement and preceding the next ratio run. Later, when the ratio requirement has reached some relatively high number, breaks may occur at various places during the ratio run. An outstanding characteristic of ratio performance is that the organism is either not responding (pausing) or is responding at a relatively constant rate. If the ratio requirement is increased still further, responding becomes relatively sporadic; we refer to this condition as ratio strain.

It is no secret that a type of gambling machine, the slot machine, is designed in accordance with the principles of ratio behavior. The payoff frequency must be great enough so that the gambling behavior does not show marked strain. On the other hand, the exact ratio contingency must not be defined by the emission of a fixed number of responses. If that were the case, we would observe potential gamblers waiting for the other fellow to play until, of course, N − 1 coins had been fed into the machine. Then there would ensue a dash to the machine in order to make the payoff response. Instead, the slot machine is programmed according to a variable ratio schedule of reinforcement. In this case, reinforcement is again contingent upon the emission of a number of responses, but the number of responses required differs from reinforcement to reinforcement. A variable ratio schedule of reinforcement is less susceptible to the development of ratio strain. Although very large numbers of response occurrences may be required for some instances of reinforcement, other instances occur after very few responses. Through judicious selection of a sequence of ratio sizes in a variable ratio program, the slot machine may be made to show a consistent profit.
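
The slot machine’s arithmetic can be checked with a short Monte Carlo sketch. The mean ratio and payout below are invented numbers, chosen only so that the expected return per play is less than the stake:

import random

rng = random.Random(0)
MEAN_RATIO = 20     # roughly one payoff per 20 plays, on the average (assumption)
PAYOUT = 15         # coins returned on each payoff (assumption)
STAKE = 1           # coins fed in per play

coins = 0
countdown = rng.randint(1, 2 * MEAN_RATIO)     # plays remaining until the next payoff
for _ in range(100_000):
    coins -= STAKE
    countdown -= 1
    if countdown == 0:                         # the variable ratio requirement is met
        coins += PAYOUT
        countdown = rng.randint(1, 2 * MEAN_RATIO)   # a new, unpredictable requirement
print("house profit over 100,000 plays:", -coins)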

Characteristics of interval schedules. The properties of interval schedules are different from those of ratio schedules. Interval schedules often show intermediate rates of responding. In the fixed interval case mentioned earlier, one frequently observes a relatively smooth transition from a zero rate of responding immediately after a reinforcement to a fairly high rate of responding preceding the next reinforcement. The characteristic shape of the fixed interval, cumulative-response graph is referred to as a fixed interval scallop. As in the case of variable ratio reinforcement, we can specify a rule that defines the variable interval case. In a variable interval schedule of reinforcement, reinforcement availability is again made contingent upon elapsed time, but, unlike the fixed interval case, the periods of time that must elapse between the reinforcements vary in a random sequence around some selected value. By the careful selection of interval sizes and their exact order of occurrence, one can produce a nearly uniform rate of responding, if one desires to do so. In fact, there have been variable interval response graphs that were so regular that a straightedge would be required to detect deviations from regularity. It can be seen, then, that the schedule of reinforcement, to a considerable extent, serves to control the rate and pattern of response emission. Schedules also serve to determine the characteristics of extinction, when reinforcements are no longer obtainable. A schedule of continuous reinforcement produces a relatively brief extinction curve, whereas a schedule of intermittent reinforcement may produce a protracted extinction curve characterized by a gradual transition from a high rate of responding to a zero rate after variable interval reinforcement, or gradually increasing periods of no responding punctuated by response bursts at a constant rate after ratio reinforcement.
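
A variable interval rule differs from the ratio rules in that reinforcement is armed by a clock rather than by a response count; once armed, it waits for the next response. A minimal sketch, with an arbitrary mean interval of 30 seconds:

import random

def variable_interval(mean_s, rng):
    """Arm reinforcement after a random interval averaging mean_s seconds;
    the first response after arming collects it and starts a new interval."""
    armed_at = rng.uniform(0, 2 * mean_s)
    def on_response(t):
        nonlocal armed_at
        if t >= armed_at:
            armed_at = t + rng.uniform(0, 2 * mean_s)
            return True
        return False
    return on_response

rng = random.Random(2)
vi = variable_interval(30.0, rng)
# A subject responding steadily once per second collects reinforcement at
# irregular moments, averaging about two reinforcements per minute.
wins = sum(vi(float(t)) for t in range(600))
print(wins, "reinforcements in 600 seconds")   # close to 600 / 30 = 20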

Motivation. Schedules of reinforcement, to a large extent, account for some of the properties of behavior that are often referred to as “motivational.” Individuals characterized as highly motivated or “driven” may in fact be individuals who are capable of sustaining high ratio requirements without obvious signs of strain. On the other hand, people who are characterized as lazy or indolent may be, in fact, individuals who are not capable of sustained performance on even a modest ratio requirement. While such characterizations must not be accepted on the basis of face validity, they do have the merit of suggesting methods of changing the behaviors of such individuals. In this way, the research of the animal laboratory can be brought to bear on the problems of human behavior. The reader interested in the details of reinforcement schedules should consult Schedules of Reinforcement by Ferster and Skinner (1957). [See Drives; Motivation.]

Differentiation of new response forms

It has been seen that schedules of reinforcement can be utilized to control the rate and pattern of response emission. Another major function of reinforcement is to create “new” behavior. By the creation of new behavior, we do not mean the creation of something out of nothing, but rather the transition from one form of behavior into another. There are many examples from the world of human affairs. The powerful and accurate play of the professional golfer is created from the fumbling, awkward movements of the beginner. The changes characterizing such a transition in behavior are not simply quantitative in the sense that a change in response rate is quantitative. The professional golfer does not simply move faster or swing his club more often. Rather, his performance is qualitatively different from that of the beginner. It may be seen that the development of new behavior is often a problem in the acquisition of skills. We have already specified the essential process by which skills are acquired; that is, successive approximation. Once we can specify the form of the final behavior that we desire, we can, starting with almost any arbitrary performance, bring about the desired behavior by stages. The instructor, teacher, therapist, or any other individual who is concerned with the creation of new behavior in others must be capable of recognizing closer and closer approximations to the desired performance. In addition to recognizing these closer and closer approximations, he must have at his disposal a conditioned reinforcer that may be presented immediately upon the appearance of an acceptable intermediate performance. Verbal reinforcers such as “good” or “now you have it” are often used with humans. Improved control over the immediacy of reinforcement was one of the major considerations in the development of the new and very promising technique of programmed instruction or, as it is often called—with misplaced emphasis on the hardware—“teaching machines.”

Differential reinforcement. The critical procedure in the development of new behavior involves a process known as differential reinforcement. Differential reinforcement refers to a procedure in which reinforcement is administered upon the occurrence of some behaviors and withheld upon the occurrence of other behaviors. The extremely powerful and precise control that may be gained over behavior through differential reinforcement is responsible for the success of the successive approximation technique. Since reinforcement may be made contingent upon either a qualitative or intensive property of a response, the procedure may be used to change the topographical characteristics of the response or its intensity.

Consider the example of a young child ignored by his parents. In searching for attention, he may emit a wide range of specific behaviors, differing enormously with respect to topography. Any of these behaviors that succeeds in gaining attention from the parents will be strengthened to some extent and become prepotent over the others. Attention is reinforcing. By the process of differential reinforcement, the parents can create a new and strong behavior pattern in the young child. It is no accident that in the practical case the new behavior pattern typically involves some element of aversiveness for the parent. The parent, after all, is a reinforceable organism. Termination of the child’s aversive behavior serves to reinforce the parent’s behavior, which, as we have noted, may likewise serve to reinforce the behavior of the child. It may be readily appreciated how a vicious feedback system may be developed. In order to gain attention from the parent, the child raises his voice or generally displays some other form of temperamental behavior. Because this behavior is aversive to the parent, the parent terminates it by responding to the child. The attention thereby shown to the child reinforces the temperamental display. Through a process of adaptation to the aversive properties of the child’s behavior, or simply because the parent may not want to “give in” so readily, attention may be withheld on some specific instance of a temperamental display. Since an increase in the intensity of a temperamental display will ordinarily establish a new level of aversiveness, it is likely that the parent will respond to that new level with immediate attention. As a result, a more intense form of the temperamental display is differentially reinforced. The end result of such a feedback system is one that most of us have seen at one time or another. The fundamental mistake made by the parent at the outset is to respond to (that is, reinforce) any form of behavior that has aversive properties for him. By withholding reinforcement under these conditions and responding with attention when some form of nonaversive behavior is emitted by the child, the whole problem can be avoided. On the other hand, starting with a child who has already developed in strength some form of aversive behavior, the principle of differential reinforcement may be used in order to short-circuit the development of the feedback system. Simple withholding of reinforcement on all temperamental displays by the child will produce eventual extinction of that form of behavior. Perhaps a more positive approach would be to combine extinction of intense forms of temperamental display with deliberate reinforcement of less and less intense exhibitions by the child. Ultimately, the child will have learned that attention, and hence satisfaction of his needs, will be forthcoming only when his request is stated moderately.
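
The feedback system lends itself to a toy simulation. In the Python sketch below the escalation factor and the adaptation rule are invented; the point is only that attention reinforces whatever intensity of display produced it, while the parent’s tolerance adapts upward each time attention is given.

intensity = 1.0     # typical loudness of the child's display (arbitrary units)
tolerance = 1.0     # level of aversiveness the parent will currently endure

for episode in range(10):
    display = intensity * 1.1       # the child escalates slightly when ignored
    if display > tolerance:
        tolerance = display         # the parent adapts to this level of aversiveness
        intensity = display         # attention differentially reinforces the louder form
        print(f"episode {episode}: attended at intensity {intensity:.2f}")

Each cycle of the loop reinforces a slightly more intense display, so the printed intensities grow without bound; breaking the loop requires exactly the change of contingency described above, withholding attention from the display and reinforcing moderate behavior instead.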

The dynamic properties of reinforcement

We have stressed the importance of immediacy in the effective use of reinforcement. A reinforcement that is delayed after the occurrence of the criterion behavior will ordinarily occur in close temporal proximity to some other behavior. Although there might be some tendency for the criterion behavior to be strengthened, maximal strengthening will occur with respect to the intervening behavior.

Superstition and uncontingent reinforcement

If the intervening behavior that is maximally strengthened is incompatible with the criterion behavior, we might, in fact, note a decrease in the strength of the criterion behavior. Perhaps the most dramatic example of the effect of uncontingent reinforcement is the well-known “superstition” experiment (Skinner 1948).

A hungry pigeon is placed in an experimental chamber, and the grain tray is operated at fairly frequent intervals, but independent of the animal’s behavior. After a period of time, we note the development of some rather strong behavioral patterns during the intervals between reinforcement. Frequently, these behavior patterns appear quite bizarre. The pigeon, for example, may hop about on one leg while fluttering a wing, or the bird may dance furiously from one side of the box to the other while stretching its neck. The important point is that these behaviors have developed as a function of the uncontingent reinforcement: simply because the reinforcement has not been made experimentally contingent upon some specified response does not mean that the reinforcement was without effect. In fact, what will always happen is the strengthening, by the reinforcement, of some chance behavior. By the process of differential reinforcement, the behavior that is accidentally reinforced may show some slow drift in topography. After a period of time, we actually might note a completely different topography from that which we observed earlier. As a general principle, we may state that the more immediate the reinforcement is with respect to the criterion response, the more highly stereotyped the criterion response is likely to be. Less immediate reinforcement will produce somewhat looser control, with a noticeable tendency for the criterion response to change in time. We may, as a matter of fact, make an experimental prediction about the superstition experiment just described. If the uncontingent reinforcers are presented at fairly frequent intervals, we will note the relatively rapid development of some arbitrary and perhaps bizarre behavior that will be fairly resistant to drift, that is, it will maintain a roughly similar topography over long periods of time. If the uncontingent reinforcements are delivered at less frequent intervals, we will note a susceptibility to change and drift in the accidentally reinforced behavior. In the extreme, if the uncontingent reinforcers are delivered at widely spaced intervals, then the drift becomes such a dominating characteristic of the behavior that we fail to notice a long-range strengthening effect of the reinforcement at all. Our conclusion, therefore, is that for reinforcement to be maximally effective it must follow the to-be-reinforced response without delay.
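
The superstition procedure itself is easy to simulate. In the sketch below the repertoire of acts and the strength increments are invented: reinforcers arrive on a fixed clock regardless of behavior, and whatever act happens to be in progress at the moment of delivery is adventitiously strengthened.

import random

rng = random.Random(7)
strength = {"hop": 1.0, "turn": 1.0, "peck floor": 1.0, "stretch neck": 1.0}

def emit():
    """Emit an act with probability proportional to its current strength."""
    acts, weights = zip(*strength.items())
    return rng.choices(acts, weights=weights)[0]

for second in range(1, 601):
    act = emit()                    # the behavior in progress during this second
    if second % 15 == 0:            # reinforcer every 15 seconds, response-independent
        strength[act] += 1.0        # adventitious strengthening of the ongoing act

print("dominant act:", max(strength, key=strength.get), strength)

Because each accidental reinforcement makes the strengthened act more likely to be in progress at the next delivery, one arbitrary act snowballs, just as one arbitrary ritual comes to dominate the pigeon’s interreinforcement behavior.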

Reinforcement and deprivation

A second dynamic property of reinforcement is its relationship to deprivation. Some environmental events will be effective as reinforcers only if the organism has been deprived of some commodity. Food, for example, is effective in controlling the behavior of an animal only if that animal has been made hungry through food deprivation. Similarly, water can be used as a reinforcer only if the animal has been deprived of water. Other kinds of reinforcers, even at the level of lower organisms, can be effective in the absence of deprivation. Electric shock, for example, can serve as a very powerful negative reinforcer without the organism having been deprived of any commodity such as food or water. It is no accident that negative reinforcement is the most popular form of behavior control employed by the average person. It is easy to dispense in its varied forms, and it does not depend for its effect on some prior operation not under the control of the punisher, such as deprivation of the punishee. Many of us have met individuals who through some quirk of behavioral development are themselves positively reinforced by their own dispensation of negative reinforcement.

Novelty as a reinforcer

There is now evidence to indicate that even at the level of the rat, a novel situation may serve to reinforce positively and to maintain exploratory behavior (Montgomery 1954). It is clear, however, that deprivation-independent reinforcers become more important as one ascends the phylogenetic scale. It has been demonstrated quite clearly that at the level of the monkey, behavior may be reinforced and maintained if the monkey has a brief opportunity to look out from an experimental chamber into a room that is occupied by other monkeys or by people (Butler 1953). Curiosity, then, is a motive in higher animals, and curiosity satisfaction is a potent reinforcer.

Generalized reinforcers

Reinforcers such as food and water and oxygen are of obvious importance in the control of lower animals. Of course they can also serve to control the behavior of higher animals. Ordinarily, however, the behavior of higher animals is under the control of nonhomeostatic reinforcers. A child, for example, can be powerfully reinforced by some particular play activity or by some manipulatable novel object, such as a brightly colored toy or a plastic ring. Adult humans can be powerfully reinforced by a wide range of reinforcers that we refer to as generalized reinforcers. These may be defined as specific events or objects that can be used to reinforce a wide range of different behaviors across many motivational systems, both homeostatic and nonhomeostatic. In the life of human beings, the most obvious example of a generalized reinforcer is money. More subtle, but nonetheless just as powerful, are such reinforcers as praise, attention, and improvement in living standard and working conditions. It is interesting to note that “improvement in working conditions” can serve as a reinforcer for the behavior of lower animals also. It has been demonstrated, for example, that a pigeon will peck at one key when the sole consequence of behavior on that key is to change the schedule of reinforcement to a more favorable one on a second key (Findley 1958). A more favorable schedule may be defined either by a higher rate of reinforcement or less work per reinforcement.

Physiological mechanisms

Finally, a recent finding offers considerable promise for the laboratory study of the physiological mechanisms of reinforcement. It has been demonstrated by Olds and Milner (1954) that such animals as rats and cats will work to produce weak electrical stimulation of certain brain loci. The technique holds great promise for the study of the neural substrates of reward. Olds (1962) has recently summarized most of the studies on reward by electrical stimulation of the brain.

Stanley S. Pliskoff
AND Charles B. Ferster

[Other relevant material may be found in Drives; Motivation; Nervous System, article on Brain Stimulation; Stimulation Drives; and in the biographies of Guthrie; Hull; Pavlov; Thorndike.]

BIBLIOGRAPHY

Azrin, N. H.; and Holz, W. C. 1966 Punishment. Pages 380–447 in Werner K. Honig (editor), Operant Behavior: Areas of Research and Application. New York: Appleton.

Birney, Robert C.; and Teevan, Richard C. (editors) 1961 Reinforcement, an Enduring Problem in Psychology: Selected Readings. Princeton, N.J.: Van Nostrand.

Butler, Robert A. 1953 Discrimination Learning by Rhesus Monkeys to Visual-exploration Motivation. Journal of Comparative and Physiological Psychology 46:95–98.

Ferster, C. B.; and Skinner, B. F. 1957 Schedules of Reinforcement. New York: Appleton.

Findley, Jack D. 1958 Preference and Switching Under Concurrent Scheduling. Journal of the Experimental Analysis of Behavior 1:123–144.

Guthrie, Edwin R. (1935) 1960 The Psychology of Learning. Rev. ed. Gloucester, Mass.: Smith.

Hilgard, Ernest; and Bower, Gordon H. (1948) 1966 Theories of Learning. 3d ed. New York: Appleton. → Ernest Hilgard was sole author of the previous editions.

Hilgard, Ernest R.; and Marquis, Donald G. (1940) 1961 Hilgard and Marquis’ Conditioning and Learning. 2d ed., revised by Gregory A. Kimble. New York: Appleton.

Hull, Clark L. 1943 Principles of Behavior: An Introduction to Behavior Theory. New York: Appleton.

Kelleher, Roger T.; and Gollub, Lewis R. 1962 A Review of Positive Conditioned Reinforcement. Journal of the Experimental Analysis of Behavior 5:543–597.

Montgomery, K. C. 1954 The Role of Exploratory Drive in Learning. Journal of Comparative and Physiological Psychology 47:60–64.

Olds, James 1962 Hypothalamic Substrates of Reward. Physiological Reviews 42:554–604.

Olds, James; and Milner, Peter 1954 Positive Reinforcement Produced by Electrical Stimulation of Septal Area and Other Regions of Rat Brain. Journal of Comparative and Physiological Psychology 47:419–427.

Pavlov, Ivan P. (1927) 1960 Conditioned Reflexes: An Investigation of the Physiological Activity of the Cerebral Cortex. New York: Dover. → First published as Lektsii o rabote bol’shikh polusharii golovnogo mozga.

Sidman, Murray 1953 Avoidance Conditioning With Brief Shock and No Exteroceptive Warning Signal. Science 118:157–158.

Sidman, Murray 1960 Tactics of Scientific Research: Evaluating Experimental Data in Psychology. New York: Basic Books.

Skinner, B. F. 1938 The Behavior of Organisms: An Experimental Analysis. New York: Appleton.

Skinner, B. F. 1948 “Superstition” in the Pigeon. Journal of Experimental Psychology 38:168–172.

Skinner, B. F. 1953 Science and Human Behavior. New York: Macmillan.

Thorndike, Edward L. (1898–1901) 1911 Animal Intelligence: Experimental Studies. New York: Macmillan.

V DISCRIMINATION LEARNING

Discrimination learning, or the acquisition of ability to respond differentially to objects in the environment, is of continuing interest to psychologists for both empirical and theoretical reasons. Its study provides an opportunity for exploring the sensory capacities of the nonverbal organism, as well as the possibility of relating the fields of learning, perception, and attention, a possibility that has motivated much of the theoretical speculation available on this subject. Before attempting to evaluate the progress made in this direction, however, it is helpful to outline the most common procedures used in studying the learning of discriminations, most of which have employed laboratory animals.

Simultaneous discrimination procedure. In a simultaneous discrimination procedure, the animal is usually confronted with two stimulus objects on each trial. These objects normally are alike in all respects except for variation on some given attribute such as brightness, color, or shape. The animal indicates its choice between them by some reaction such as picking one up, approaching one, or the like. A “correct” choice is rewarded with something desirable, perhaps a pellet of food. An incorrect choice is nonrewarded or even punished.

Initially, of course, the animal shows only a chance level of accuracy because it has no prior knowledge of which object is related to reward and which to punishment. With continued training, the percentage of successful choices may increase to the point where it is clear the animal has mastered the problem. When the appropriate experimental controls have been used, such performance is clear evidence that the animal is sensitive to the stimulus attribute under investigation. The experimenter may then decrease the difference between the two objects. As he continues to do this, the animal’s accuracy again approaches a chance level. From these data, it is possible to determine the animal’s differential sensitivity to values on this stimulus attribute.

Successive discrimination procedure. In a successive discrimination procedure, only one of the two stimulus objects is presented on a given trial, but each occurs equally often in a sequence of such trials. In a T maze, for example, the stimulus object is displayed at the choice point, and the animal has the alternatives of turning into either the right or left alley. If turning left is the correct and rewarded response to one of the objects, then turning right is the correct and rewarded response in the presence of the second object. Thus, a different reaction must be related to each of the two stimulus objects in order to master the successive discrimination.

Alternatively, the animal’s choice may be between reacting and not reacting. In classical conditioning procedures, for example, the animal is required to make the conditioned response in the presence of one object, the positive stimulus, and to inhibit this response in the presence of a different object, the negative stimulus. This technique has been valuable in determining absolute thresholds, that is, the minimum amount of an attribute that the animal can detect. Once the discrimination is established, the intensity or amount of the positive stimulus is gradually reduced until the animal no longer responds to it.

Generalization and discrimination. Theoretical interest in discrimination learning has tended to concentrate on two opposing tendencies that the animal shows during training. If it has been trained to respond in some definite manner to one stimulus, it will generalize or transfer this tendency to respond to a wide variety of other similar stimuli. This occurs spontaneously, without any specific training with these new stimuli. This tendency to generalize is of considerable interest to the theorist because it is one obvious basis for the transfer of training. But at the same time it poses a problem for him. He must not only indicate the conditions under which this tendency to generalize is aroused, but he must also explain why this generalization occurs for certain new stimuli but not for others.

An opposing tendency, however, is also evident during discrimination training. Using the procedures described above, the animal can be taught to make differential responses to two very similar stimuli, that is, stimuli that would otherwise evoke quite similar responses in accordance with the generalization tendency. But during the course of discrimination training this generalization tendency is suppressed. The animal now acts as though the two stimuli were perceived as being distinctly different. Thus, a focal problem for a theory of discrimination learning is to explain the disappearance or suppression of the tendency to generalize as a result of the training procedures used.

Hull’s theory. One of the more influential accounts of discrimination learning was developed by Clark Hull (1943). This formulation has two unique features. First, it does not attempt to explain the tendency to generalize; instead it assumes that it is an innate and universal characteristic of all organisms. Second, it accounts for the partial suppression of this tendency during discrimination learning by postulating a second and opposing form of generalization.

Hull’s basic formulation is that every time an animal’s response to a stimulus object is followed by a reward, there is an increase in the probability that the animal will respond in the same way to that stimulus object the next time it is presented. This is the excitatory tendency, the tendency to respond. This excitatory tendency, however, is not specific to the rewarded stimulus. It generalizes or spreads to other stimuli in direct proportion to the degree of similarity they have to the rewarded stimulus. This differential spread is conceived of as a generalization gradient. The amount of excitatory tendency is greatest for the rewarded stimulus, but this amount diminishes along a continuum of decreasing stimulus similarity. Basically, this conception is a description of what is empirically observed in transfer-of-training studies.

The suppression of this generalization tendency during discrimination training is accounted for by postulating an opposing tendency. It is assumed that each time the animal’s response to a stimulus is not rewarded there is an increase in the strength of an inhibitory tendency, the tendency to withhold or suppress the response in the presence of that stimulus. This inhibitory tendency also generalizes. It forms a gradient that has a maximum value at the nonrewarded stimulus and that diminishes in magnitude along a continuum of decreasing stimulus similarity.

In these terms, a conditioned discrimination can be conceived of in the following way. On some trials the animal is confronted by one stimulus and on the remainder of the trials by a second, similar stimulus. If it responds appropriately to the first stimulus, the positive one, it is rewarded. Thus, some amount of the excitatory tendency is associated with this stimulus and generalizes, although to a lesser degree, to the second stimulus. However, if the animal responds in the same way when this second or negative stimulus is presented, no reward is given. This results in a certain amount of inhibitory tendency becoming associated with the negative stimulus. This in turn generalizes, although to a lesser degree, to the positive stimulus. It is assumed that these successive interactions between excitatory and inhibitory processes continue during discrimination training until the following two conditions occur: (1) the excitatory tendency associated with the positive stimulus clearly outweighs the inhibitory tendency that has generalized to this stimulus, and (2) the converse is true for the negative stimulus. At this point the animal shows clearcut discrimination behavior by responding appropriately on each trial to the positive stimulus and withholding that response to the negative one.
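
The interaction of the two gradients can be made concrete numerically. In the sketch below both gradients are taken to be Gaussian in shape, a convenient illustrative choice rather than Hull’s own equation; net response strength is the excitatory gradient centered on the positive stimulus minus the inhibitory gradient centered on the negative one.

import math

def gradient(peak, center, width, x):
    """A generalization gradient: maximal at its own stimulus and
    diminishing with distance along the similarity continuum."""
    return peak * math.exp(-((x - center) / width) ** 2)

S_PLUS, S_MINUS = 0.0, 2.0          # positions on a stimulus continuum
for x in (0.0, 1.0, 2.0, 3.0):
    E = gradient(10.0, S_PLUS, 2.0, x)    # excitatory strength, from rewarded trials
    I = gradient(6.0, S_MINUS, 2.0, x)    # inhibitory strength, from nonrewarded trials
    print(f"x = {x:.0f}   E = {E:5.2f}   I = {I:5.2f}   net = {E - I:5.2f}")

The net strength is strongly positive at the positive stimulus and negative at the negative one, the clear-cut discrimination just described; the maximum of the net gradient also lies displaced away from the negative stimulus, which is one way such a formulation generates the transposition effects mentioned below.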

When stated more exactly, this formulation has a wide range of implications concerning the rates of learning to be expected in various discrimination tasks, the types of transfer or transposition behavior that should occur, and the like. A sufficient number of these implications have received empirical support to justify considerable faith in this approach. At the same time, these empirical studies have indicated a number of weaknesses in the basic concepts of the formulation. Perhaps the most important of these weaknesses is the absence of any means of defining the degree of similarity between two stimuli independently of the observed generalization behavior of the animal. This suggestion of circularity in the system has led in large part to a number of alternative formulations of discrimination learning.

The concept of similarity. In the psychological literature, stimulus similarity has been treated as a response-inferred construct. This means that the degree of similarity between two stimulus objects can be inferred only from the behavior of the animal with respect to them. If the stimuli are equivalent in the sense of producing comparable reactions, then, psychologically, they are highly similar for that animal. Phrased in this way, a theoretical account of the concept of similarity must state the conditions under which the animal will treat two stimulus objects as though they were equivalent. This can be done either in terms of assumptions about the make-up of the organism or in terms of the physical properties of the stimulus objects.

Organismic attributes. One possible approach to this concept of similarity stems from the early work of Pavlov (1927). He claimed that similarity was largely a function of the neurological organization of the animal’s cortex. Whether an animal perceives two stimuli as similar, as inferred from the generalization of behavior from one to the other, depends upon the spatial proximity of the sensory projections associated with these stimuli. If these are close together, considerable interaction or generalization can occur. With wider separations, the stimulus objects are perceived as independent or nonsimilar units. These neurological assumptions about the basis of similarity continue to have some influence in the physiological literature but have had little influence on psychological theorizing.

Stimulus attributes. Recent theoretical interest has centered on the possibility of defining similarity in terms of overlap or of common elements in the two stimulus objects. This approach was prominent in the writings of Thorndike ([1913] 1921, chapter 4) and Guthrie (1935) but has been given a much more precise formulation in statistical learning theory (Estes 1959). The basic notion in this approach is that each stimulus object, plus the stimulus context in which it appears, is to be conceived of as potentially a population, or large set, of stimulus elements rather than as a single, unanalyzable unit. On a given trial the animal experiences only a randomly selected sample, or subset, of this potential population of elements that constitutes the object. This trial-to-trial variability in the sample of elements experienced is due to a number of factors; it stems in part from the impossibility of exactly reproducing the physical stimulus situation, in part from moment-to-moment variation in the state of an organism, in part from changes in the postural orientation of the animal with respect to the stimulus object, and so on. In order for the animal to experience all the elements in the population, it must be exposed repeatedly to the same stimulus object.

With two stimulus objects, of course, there are two populations of stimulus elements. Some of these elements may be common to the two populations, the rest being specific to one or the other. The proportion of common elements to the total number of elements in the combined populations is one possible measure of the degree of similarity between the two stimulus objects. The greater the proportion of overlapping elements, the greater is the degree of similarity.

The unique feature of this conception of similarity is that it does away with the need to assume that the animal has an innate tendency to generalize its learned behaviors. If an animal does transfer such a response from a training stimulus to a new one, it is because the populations of elements constituting the two objects have elements in common. These common elements were associated with the learned response during training with the first stimulus object. Consequently, there is a definite probability that they will evoke that same response when they again occur in the context of the new stimulus object. The probability that this generalization will occur increases as the proportion of overlapping elements in the two populations increases; the larger this degree of overlap, the more likely it becomes that any one sampling from the new stimulus object will contain a large number of them.
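
The common-elements measure of similarity, and the way overlap produces generalization, can be written out directly. The set sizes and the sample taken per trial in this sketch are arbitrary:

import random

rng = random.Random(3)
A = set(range(0, 60))       # elements of stimulus object A
B = set(range(40, 100))     # elements of stimulus object B; 20 elements shared

similarity = len(A & B) / len(A | B)        # proportion of common elements
print(f"similarity = {similarity:.2f}")     # 20 / 100 = 0.20

conditioned = set(A)        # training attaches the response to A's elements

# Generalization test: on each trial a random subset of B is experienced,
# and the response occurs in proportion to the sampled elements that were
# conditioned during training with A.
trials = 10_000
total = 0.0
for _ in range(trials):
    sample = rng.sample(sorted(B), 10)
    total += sum(e in conditioned for e in sample) / 10
print(f"mean response strength to B: {total / trials:.2f}")   # about 20/60 = 0.33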

The assumptions underlying this conception of similarity have been stated mathematically in several recent formulations of statistical learning theory, making possible much more precise statements as to when generalization will occur and the amount of such transfer to be expected. As these predictions have considerable empirical support, it would appear that a real advance has been made in understanding the basis for psychological similarity.

Paradoxically, however, the success of this approach in predicting generalization has led to an impasse in attempting to apply the same assumptions to discrimination learning. If taken in its most literal form, this conception of similarity as being due to overlapping stimulus elements implies that an animal can never achieve a perfect discrimination between two similar stimulus objects, since their common elements would always be associated with each of the different responses, an implication that is clearly at variance with empirical observation.

Consequently, if this conception of similarity in terms of common elements is to apply to discrimination learning, as well as to generalization situations, additional assumptions must be made concerning these stimulus populations. One possibility is that their composition of elements is modified during discrimination training. In some sense the number of common elements is reduced, or at least their control over choice behavior is minimized [see Guthrie; Thorndike].

Selective attention. In a different context, the idea that the effective stimuli for behavior are modified or transformed during the course of learning has been discussed under the heading of selective attention. Two conceptualizations of this transformation process have been suggested. One of these involves the enrichment of, or additions to, the effective stimulus; the other emphasizes a reduction in the content of the stimulus.

Enrichment of the effective stimulus. The enrichment idea was clearly formulated by William James (1890, pp. 508 ff. in volume 1 of 1950 edition). He denied that the immediate sensory input from the stimulus object is the direct elicitor of choice behavior. Instead, he suggested that the effective stimulus is the complex of ideas, emotions, and other reactions that are associated with this sensory input. The main implication of this formulation is that the complexes associated with two stimulus objects may have proportionately less overlap and fewer common elements than do the immediate sensory experiences that give rise to them. Consequently, the tendency to generalize is reduced as these differentiating complexes develop during the course of learning. In more recent discussions of discrimination learning, this notion has been formulated somewhat more explicitly in terms of the concept of the “acquired distinctiveness of cues” (Miller & Dollard 1941).

Elimination of irrelevant cues. A more popular approach to the problem, however, is to view the stimulus transformation process as one in which the initial sensory inputs are gradually stripped of all irrelevant or nondifferentiating aspects. Only the differentiating, nonoverlapping aspects remain as the effective stimulus for the choice behavior. The simplest of these formulations postulates the learning of receptor-orienting behaviors. If, for instance, an animal looks at the top halves of two stimulus objects, the sensory input from them is likely to be quite different than if the animal looks at the bottom halves. Consequently, the animal may learn that certain receptor orientations lead to a more accurate discrimination than would be otherwise possible insofar as these sensory inputs contain a minimum number of common aspects and a maximum number of differentiating aspects. Learned orienting behaviors of this sort undoubtedly are involved in many types of discrimination, but the concept would seem to be of quite limited usefulness in situations where there is minimal need or opportunity for visual search procedures.

Stimulus coding. A number of alternative mechanisms have been proposed to account for the selective aspects of attention. These are variously referred to as “filter theory” (Broadbent 1961), “analyzer mechanisms” (Sutherland 1959), or “stimulus coding behaviors” (Lawrence 1963).

While these differ in many details, the general approach can be illustrated in terms of stimulus coding behaviors.

The stimulus coding formulation recognizes that the sensory input at any moment depends as much on the behavior of the organism as it does on the characteristics of the external stimulus object. For instance, the tactual sensations from a piece of sandpaper are quite different depending upon whether the individual merely places a finger tip on it or draws a finger rapidly across the surface. Thus, the effective stimulus to which the individual reacts is a joint function of his own behavior and the characteristics of the stimulus object.

In order to allow for both of these factors in determining the effective stimulus, the stimulus coding formulation assumes that the total sensory input from the discrimination situation is functionally divided into two parts. The first part corresponds to the stable, recurring aspects of the situation and the second part to the changing, variable aspects. In a successive discrimination, the stable part of the input may correspond to the characteristics of the room, the apparatus, and the like; the variable aspects may correspond to the characteristics of the stimulus objects to be discriminated.

It is assumed that the stable aspects of the situation become associated with, and control, an implicit, inferred coding response. When this coding response is elicited, it reacts on, or interacts with, the sensory input from the stimulus object. As a result of this interaction, a new input is generated which is called the “coded stimulus.” When, as in discrimination learning, there are two different stimulus objects but only one coding response, two different coded stimuli are produced. These control the choice behavior.

In this schema, the characteristics of the coded stimuli change whenever there is a change in the coding response even though the stimulus objects remain constant. The coded stimuli are resultants from an interaction between a coding response and the immediate sensory inputs. A change in either the coding response or the immediate sensory inputs modifies the coded stimuli. The range of values the coded stimuli can assume, however, is limited by the actual properties of the sensory inputs. These latter are members of the interaction, and therefore the resultants cannot be independent of them. The implication is that these coded stimuli correspond to parts of, relationships within, or other limited aspects of the sensory input. But even with this restriction, the coded stimuli can vary greatly with changes in the coding response.

To complete the formulation, it is assumed that the coding response varies in a trial-and-error fashion during the initial stages of discrimination learning. Gradually, however, it shifts in that direction which tends to minimize the confusion and overlap between the coded stimuli. This is, of course, the direction that maximizes the accuracy of the discrimination. Thus, a dual learning process is always involved in a discrimination procedure: The animal must discover a coding response that produces highly distinctive coded stimuli, and at the same time learn which overt choice reactions are appropriate to these coded stimuli.
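
The dual process can be caricatured in a few lines. In the sketch below a “coding response” is reduced to a choice of which aspect of the raw input to preserve, an illustrative simplification rather than the formulation’s own machinery, and the coder is selected for the separation it produces between the coded stimuli.

import random

rng = random.Random(5)

def raw(obj):
    """Raw sensory input: (brightness, position); position is irrelevant noise."""
    brightness = 0.2 if obj == "A" else 0.8
    return (brightness, rng.random())

coders = {
    "attend to brightness": lambda s: round(s[0], 1),
    "attend to position":   lambda s: round(s[1], 1),
}

def overlap(code):
    """Proportion of coded stimuli common to the two objects."""
    a = {code(raw("A")) for _ in range(200)}
    b = {code(raw("B")) for _ in range(200)}
    return len(a & b) / len(a | b)

# The coding response that minimizes overlap yields distinctive coded
# stimuli, to which the overt choice reactions can then be attached.
best = min(coders, key=lambda name: overlap(coders[name]))
print("selected coding response:", best)    # 'attend to brightness'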

Formulations of this type offer a mechanism that permits the effective stimulus for behavior to be continuously modified throughout the course of learning. They permit the animal to react initially to the total stimulus input including the common, or overlapping, aspects. Thus, there can be the broad generalization characteristic of such situations. But with experience, the effective stimulus shifts in the direction that corresponds to the differentiating, nonoverlapping aspects of the stimulus objects. This ensures a high level of accuracy for the discrimination and offers a solution to the dilemma encountered in statistical learning theory. On the other hand, it is obvious that these formulations once again make stimulus similarity a direct function of the animal’s own behavior.

Additional aspects. It should be emphasized, however, that even more precise and powerful theories of this type would not do justice to the wide range of empirical effects found during discrimination learning. The experimental literature contains many suggestions of changes in response selectivity, of heightened motivational effects, and of conflict resolution that largely fall outside the bounds of any of the theoretical approaches so far mentioned.

A clear example of changes in response selectivity is provided by the studies on learning set (Harlow 1959). These demonstrate that an animal can solve simultaneous discriminations with incredible rapidity after repeated experience with this class of problems. A sophisticated monkey requires only one or two information trials to reach a high level of mastery, whereas initially it required a prolonged training period to solve an equivalent problem. An analysis of this type of learning suggests that the increase in efficiency is in part due to the elimination of the response biases so prominent in the naive animal, for example, such biases as position preferences, alternation tendencies, or tendencies toward perseveration in a given response.

Motivational changes are apparent in studies on contrast effects. Pavlov first demonstrated these in his studies on positive and negative induction. He found that once a successive discrimination was established by conditioning procedures, a series of presentations of the positive or rewarded stimulus object tended to strengthen and maintain the inhibitory tendencies evoked by the negative stimulus object. This was true even though the negative stimulus object was now followed by reward. Conversely, a series of presentations of the negative or nonrewarded stimulus object tended to strengthen and maintain the excitatory tendency evoked by the positive stimulus object, even though this was no longer rewarded. Descriptively, it is as though the animal has built up a set of contrasts as the result of discrimination training; experience with the positive stimulus object increases the undesirability of the nonrewarded behavior, and experience with the negative stimulus object enhances the desirability of the rewarded behavior. Comparable phenomena have been demonstrated with other types of discrimination training.

Related to these contrast effects are any number of phenomena that can be grouped in terms of conflict resolution. Perhaps the most dramatic of these is the change in behavior that occurs in experimental neurosis (Liddell 1956). After a successive discrimination is well established, the two stimulus objects are made more and more similar until the animal is unable to discriminate between them. Occasionally, under these conditions, the animal begins to show agitated and highly emotional behavior. This emotionality persists for long periods of time both in the experimental situation and in other contexts. Equally impressive are the abortive and stereotypic behaviors exhibited by animals who have been frustrated by being forced to respond in an unsolvable discrimination situation (Maier 1949).

This brief and highly selected survey of the many behavioral changes that occur during discrimination learning is sufficient to indicate the limitations of present theories on this subject. These formulations have been primarily concerned with developing appropriate concepts to deal with generalization phenomena, stimulus similarity, and selective attention. They obviously have not dealt adequately, as yet, with the phenomena of response selectivity, motivational changes, and conflict resolution.

Douglas H. Lawrence

[Other relevant material may be found in Attention; Concept Formation; Models, Mathematical; Perception, article on Perceptual Development; and in the biography of Hull.]

BIBLIOGRAPHY

Atkinson, Richard C.; and Estes, William K. 1963 Stimulus Sampling Theory. Volume 2, pages 121–268 in R. Duncan Luce, Robert R. Bush, and Eugene Galanter (editors), Handbook of Mathematical Psychology. New York: Wiley.

Broadbent, D. E. 1961 Human Perception and Animal Learning. Pages 248–272 in W. H. Thorpe and O. L. Zangwill (editors), Current Problems in Animal Behaviour. Cambridge Univ. Press.

Estes, William K. 1959 The Statistical Approach to Learning Theory. Volume 2, pages 380–491 in Sigmund Koch (editor), Psychology: A Study of a Science. New York: McGraw-Hill.

Guthrie, Edwin R. (1935) 1960 The Psychology of Learning. Rev. ed. Gloucester, Mass.: Smith.

Harlow, Harry F. 1959 Learning Set and Error Factor Theory. Volume 2, pages 492–537 in Sigmund Koch (editor), Psychology: A Study of a Science. New York: McGraw-Hill.

Hull, Clark L. 1943 Principles of Behavior: An Introduction to Behavior Theory. New York: Appleton.

James, William (1890) 1962 The Principles of Psychology. 2 vols. New York: Smith.

Lawrence, Douglas H. 1963 The Nature of a Stimulus: Some Relationships Between Learning and Perception. Volume 5, pages 179–212 in Sigmund Koch (editor), Psychology: A Study of a Science. New York: McGraw-Hill.

Liddell, Howard S. 1956 Emotional Hazards in Animals and Man. Springfield, Ill.: Thomas.

Mackintosh, N. J. 1965 Selective Attention in Animal Discrimination Learning. Psychological Bulletin 64:124–150. → A more recent and more easily accessible account of N. S. Sutherland’s viewpoint and a review of the experimental evidence bearing on it.

Maier, Norman R. F. 1949 Frustration: The Study of Behavior Without a Goal. New York: McGraw-Hill. → A paperback edition was published in 1961.

Miller, Neal E.; and Dollard, John C. 1941 Social Learning and Imitation. New Haven: Yale Univ. Press; Oxford Univ. Press.

Pavlov, Ivan P. (1927) 1960 Conditioned Reflexes: An Investigation of the Physiological Activity of the Cerebral Cortex. New York: Dover. → First published as Lektsii o rabote bol’shikh polusharii golovnogo mozga.

Sutherland, N. S. 1959 Stimulus Analysing Mechanisms. Volume 2, pages 575–609 in Teddington, England, National Physical Laboratory, Mechanisation of Thought Processes: Proceedings of a Symposium. London: H.M. Stationery Office. → This is paper No. 2, session 4A, of the National Physical Laboratory Symposium No. 10. Reprinted in 1961.

Thorndike, Edward L. (1913) 1921 Educational Psychology. Volume 2: The Psychology of Learning. New York: Columbia Univ., Teachers College.

VI AVOIDANCE LEARNING

In avoidance learning the organism comes to behave in an anticipatory or foresightful manner so that unpleasant events no longer occur. It learns to respond to certain cues or danger signals before painful or frightening stimuli arrive, and it performs acts that usually prevent the painful events from occurring. In the following account, empirical generalizations are emphasized in order to demonstrate the wide variety of variables operating in avoidance learning. Some of these generalizations are not yet well established; they represent only the current state of affairs and are subject to change as more experiments are reported. Avoidance learning is an active empirical focus, with hundreds of experiments completed in this area each year, and empirical findings have clearly outstripped adequate theoretical accounts of the avoidance learning process.

Avoidance training experiments have, for obvious humanitarian and ethical reasons, been confined mainly to animal subjects. The few experiments in which human subjects were studied have yielded results quite similar to those obtained with animals, and there is every reason to believe that the variables controlling avoidance learning do not differ greatly across mammalian species.

Two different types of training

Active avoidance. Imagine, if you will, a white rat placed by an experimenter (E) into a small training box. The floor of the box is an electrifiable grid of metal rods. At a height of 10 inches above the floor, hinged as a shelf to the side of the box, is a small platform. The lid of the box is a transparent plastic plate, and above it is suspended a 60-watt lamp. E allows his subject S, the rat, to explore the box for a few minutes. Then training begins. E switches on the light above the rat box for a 30-second period. S continues his sniffing and exploring during the switching on and off of the light. E repeats the procedure 5 times, noting that S does not jump onto the platform. Then E switches on the light, waits for 5 seconds, and turns on the power supply, which electrifies the metal grid floor of the box and shocks S. S squeals, rushes about, leaps into the air, and then, 12 seconds after the shock is turned on, jumps onto the platform, where there is no shock. As S lands, E turns off the light. One minute later, the hinged shelf is momentarily lowered, dropping S onto the grid again. Then 2 minutes later, a second training trial is begun. The light goes on, and 5 seconds later the shock goes on. The rat leaps onto the platform 4 seconds after the shock goes on.

The rat’s performance has improved in two trials of training. We say he is learning to escape from shock. After several escape training trials have gone by, S eventually will respond directly to the onset of the light, and he will jump onto the platform without the stimulus of shock. This jump is called an avoidance response. By this response, S avoids the shock, and since the light is turned off when he jumps onto the platform, he also escapes the light.

S has learned to respond in an anticipatory fashion in such a way that if he jumps quickly at the onset of the light he will never again receive the shock. This type of process is called active avoidance learning. S is taught what to do to minimize pain and distress. Note that he is punished for doing everything else but jumping onto the platform whenever the light goes on. He is, therefore, not being taught anything specific that he should not do. In the active avoidance training procedure, the light is usually called the discriminative stimulus (Sd), cue, or signal; the shock is called the unconditioned stimulus (US). Because the escape responses are instrumental in terminating the shock and the light and because the avoidance responses are instrumental in preventing the shock and terminating the light, they are often called instrumental responses. They change the environment in such a way as to make it more acceptable to the subject. They operate on the environment and so are often called operants.
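The contingencies just described can be summarized in a brief simulation sketch. This is purely illustrative: the trial loop mirrors the procedure above, but the learner (a jump latency that simply shrinks after every reinforced trial) and all numerical values other than the 5-second Sd–US interval are invented assumptions.

```python
import random

# Illustrative sketch of the active avoidance procedure described above:
# the light (Sd) precedes the shock (US) by 5 seconds; jumping onto the
# platform terminates both.  The "learner" is a deliberate assumption --
# its jump latency shrinks after every reinforced trial.

SD_US_INTERVAL = 5.0  # seconds between light onset and shock onset

def run_trials(n_trials=12, seed=1):
    rng = random.Random(seed)
    latency = 12.0  # assumed initial jump latency after light onset, in s
    for trial in range(1, n_trials + 1):
        jump_at = latency * rng.uniform(0.9, 1.1)
        if jump_at <= SD_US_INTERVAL:
            outcome = "avoidance (no shock; light terminated)"
        else:
            outcome = f"escape after {jump_at - SD_US_INTERVAL:.1f} s of shock"
        print(f"trial {trial:2d}: jump at {jump_at:4.1f} s -> {outcome}")
        latency *= 0.8  # assumed effect of reinforcement on latency

run_trials()
```

Run as written, the sketch reproduces the qualitative course of training: early trials end in escape, and once the latency falls below the Sd–US interval the jumps become avoidance responses.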

Passive avoidance

A somewhat different yet very important avoidance training procedure produces passive avoidance learning. Passive avoidance training corresponds to our everyday conception of punishment. In this procedure S is taught specifically what not to do, but he is not taught what to do.

Imagine now that another white rat is placed by E on the platform of the small training box. This S is hungry, and E has placed a food pellet for him on the grid floor. After some hesitation, S scrambles down from the platform and eats the food pellet. E then picks S up and places him on the platform again. This time S jumps down more quickly than he did on the first trial, and he again eats a food pellet. After many trials, S shows stable food-getting behavior; he jumps down quickly and uniformly on each training trial. Now E can get rid of this instrumental response (jumping off the platform) by means of passive avoidance training. E electrifies the grid whenever S jumps down to get a pellet. Eventually, S will stay on the platform rather than jump down. S has learned to avoid shock by avoiding specific action. What S does when on the platform is not being specified by the training procedure, and so anything he does, as long as it does not lead him to the grid floor, goes unpunished.

Active and passive avoidance learning can take place under a wide variety of training procedures, and the characteristics of learning depend heavily on the type of procedure E uses.

Variants of the two types of training

Active avoidance

Six important variants of the active avoidance procedure warrant discussion.

The method of gradual emergence. With the general training conditions described above for active avoidance, S can be made to learn avoidance responses either very suddenly or very gradually, depending on the training technique. For example, when small movements or reflexive responses are selected by E to be the active avoidance responses, learning is slow and tortuous. Forepaw flexion responses in the dog often require several hundred training trials in order to establish them as reliable avoidance responses. The same is true of a tiny toe movement in human Ss. In contrast, massive movements that change the S’s environment are quick to emerge as reliable avoidance responses. Requiring a rat to jump onto a platform, as described above, requiring a dog to jump from one compartment of a box to another, or requiring a human S to push a knob a distance of 2 feet—all three are examples of efficient situations for producing sudden and reliable avoidance learning. A way of interpreting these findings is that medium-probability operants make the best avoidance responses, while high-probability, short-latency respondents (reflexes) make the poorest avoidance responses.

A characteristic of some avoidance responses is their persistence. In general, those training conditions leading to efficient learning also lead to a high degree of resistance to extinction. Such responses as the jumping onto a platform described above can persist over hundreds of trials without a shock being administered. This is not likely to be true of responses like forelimb flexion or toe flexion. The persistence of avoidance responses, as an empirical phenomenon, has extensive implications for studies of human neuroses. Long-lasting phobias, obsessive and compulsive behavior, and neurotic defenses of many types can be fruitfully analyzed in terms of the special experimental conditions that established such behavioral rigidity. Sometimes therapeutic treatments are deduced from such analyses. [See Obsessive-compulsive Disorders; Phobias.]

Other variables influencing the ease of active avoidance learning and, inversely, the ease of extinction are the Sd–US time interval, the intensity of the US, the similarity of the Sd to the US, the immediacy of termination of the US and the Sd after the performance by S of the avoidance response, and the occurrence of events arousing responses that are incompatible with the required avoidance response. Usually, there is an optimum Sd–US interval for each type of avoidance response. If an avoidance response is a long-latency, complex operant, a long Sd–US interval of perhaps 5 to 10 seconds will be optimal. For short-latency, reflexive types of avoidance responses, short Sd–US intervals of around 1 to 2 seconds will be optimal. Usually, for a given type of avoidance response, lengthening the Sd–US interval beyond the optimal interval will facilitate extinction of the response when shock is no longer administered.

Shock intensity influences avoidance learning in complex ways. There is an optimum intensity for each type of response, and intensities lower or higher than the optimum will retard learning and facilitate extinction. The optimum intensity decreases as the complexity of the avoidance response increases. This is known as the Yerkes–Dodson law. When the Sd is frightening, learning is more rapid than when it is neutral. Thus, a mild shock used as an Sd will facilitate learning, as will a frightening buzzer. A nonfrightening light may be less efficient in controlling stable avoidances. Delaying either the termination of shock following an escape response, or the termination of the Sd following an avoidance response, or both, will retard avoidance learning, and even short delays of about 5 seconds may make it impossible for S to learn. Finally, some avoidance responses are incompatible with the innate fear responses of S, and these fear responses can interfere with correct responding as fear increases. For example, when S is a rat, we have to try to eliminate innate “freezing” reactions that often occur when the Sd becomes fear-arousing as a result of its association in time with the painful US. Often a rat will “freeze” when the Sd goes on and thus fail to avoid during the Sd–US time interval. He may escape easily as soon as the shock goes on if the shock is of an appropriate intensity for vigorous behavior arousal. One way of eliminating freezing is to decrease US intensity. Another way is to increase the Sd–US interval. Quite often, however, S fails to avoid in many experimental situations, and these failures have not yet been analyzed sufficiently by psychologists. Rather, they tend to be ignored as accidents or are attributed to unspecified individual differences. They represent an area of ignorance.
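The inverted-U relation asserted by the Yerkes–Dodson law can be pictured with a toy computation. Only the shape and the downward shift of the optimum with response complexity come from the law; the quadratic form and all numbers below are assumptions chosen purely for illustration.

```python
# Schematic picture of the Yerkes-Dodson relation: performance peaks at
# an intermediate US intensity, and the optimum is lower for a more
# complex response.  The quadratic form and the values are assumptions.

def performance(intensity, optimum, width=2.0):
    """Toy inverted-U: best at `optimum`, falling off on either side."""
    return max(0.0, 1.0 - ((intensity - optimum) / width) ** 2)

for label, optimum in [("simple response", 4.0), ("complex response", 2.0)]:
    curve = " ".join(f"{performance(i, optimum):.2f}" for i in range(7))
    print(f"{label:16s} (optimum intensity {optimum}): {curve}")
```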

Most successful avoidance learning in the method of gradual emergence is characterized by a high level of fear and emotionality early in learning, when shocks are still being administered, followed by declining emotionality when the avoidance responses become reliable. Along with these correlated events, the topography of the avoidance responses themselves becomes stereotyped. When this stereotypy occurs, extinction is not easy to produce by constant elicitation of the responses.

The method of prior response shaping. In the method of prior response shaping, the S is first trained to escape the US without a signalizing Sd. After S is an expert escaper, the warning signal or Sd is paired with the US. Using this method, the avoidance responses are very much like the escape responses. In contrast, the method of gradual emergence often produces avoidance responses different in appearance from the escape responses from which they were derived.

Escape with short Sd–US interval. When the Sd–US interval is too short to allow avoidances, except on test trials when the Sd is presented without the US, very poor avoiding is produced along with reliable and short-latency escaping. The S is usually very fearful and emotionally disturbed.

The method of prior fear conditioning. In the method of prior fear conditioning, the Sd and US are paired closely in time on each trial, but there is at first no escape or avoidance response available to S. He merely learns to fear the Sd. After many trials, S is then allowed to terminate the Sd by means of a response in his repertory. If such a response is emitted in the presence of the Sd, the response is quickly adopted by S as a reliable avoidance response. On the other hand, S often does not make the required response, and so he never learns. Failures of this type are frequently produced by this method. Extinction of avoidance responses appears to occur more readily by this method than by the method of gradual emergence. This method is sometimes called the “acquired drive” experiment.

The Pavlovian method. In the Pavlovian procedure, E presents S with an Sd–US sequence repeatedly, but S cannot do anything either to prevent or to terminate the painful US. Instead, the US is omitted on test trials, and sometimes the S will demonstrate a consistent type of anticipatory or preparatory response pattern. Russian physiologists call this method “motor conditioning.” When many test trials are run, we note that the US does not occur no matter what anticipatory responses may be evoked by the Sd. Thus, such trials are like avoidance-response trials in other methods. Despite this, very unstable learning occurs. Often the response consists of constantly varying struggling and diffuse emotional expressions.

The Sidman method. In the Sidman procedure, no Sd is used (although one can be). Instead, the US is regularly presented if S does not perform a particular response desired by E. If S does perform the required response, the US is delayed for a fixed time interval. The avoidance response thus “buys” shock-free time for S. Note that the US–US interval can be varied independently of the response–US interval. If an Sd is used, it can come anywhere in the US–US interval or in the response–US interval. This method can produce very stable avoidance responding and high resistance to extinction. There are, however, many individual failures of rats to learn. These failures can be reduced if the avoidance response is capable of terminating the US. This method is especially interesting because Ss often develop stable response rates, with the avoidance responses appearing at regular intervals. The responses appear to be under the control of some type of “internal time mechanism” that serves as an Sd substitute. As long as this mechanism elicits responses at interresponse intervals shorter than the response–shock interval, S never receives a shock. The method is often viewed as revelatory of the build-up in time of “conditioned anxiety”: whenever the anxiety becomes intolerable during the response–shock interval, S makes another response. The similarity of this phenomenon to human compulsive neuroses is often pointed out.
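The timing contingencies of the Sidman schedule lend themselves to a compact sketch. Below, SS_INTERVAL stands for the US–US interval and RS_INTERVAL for the response–US interval described above; the particular values and response times are invented for illustration.

```python
# Sketch of the Sidman (free-operant) avoidance contingency: shocks
# recur every SS_INTERVAL seconds in the absence of responding, and
# each response postpones the next shock by RS_INTERVAL seconds.

SS_INTERVAL = 5.0   # US-US interval (assumed value)
RS_INTERVAL = 20.0  # response-US interval: shock-free time per response

def shocks_received(response_times, session_length=60.0):
    responses = sorted(response_times)
    shocks, t, next_shock = [], 0.0, SS_INTERVAL
    while next_shock <= session_length:
        # any response emitted before the scheduled shock postpones it
        pending = [r for r in responses if t < r < next_shock]
        if pending:
            t = pending[-1]
            next_shock = t + RS_INTERVAL
        else:
            shocks.append(next_shock)
            t = next_shock
            next_shock = t + SS_INTERVAL
    return shocks

print(shocks_received([]))                       # no responding: a shock every 5 s
print(shocks_received([3.0, 21.0, 39.0, 57.0]))  # regular responding: no shocks
```

Responding at interresponse intervals shorter than RS_INTERVAL yields a shock-free session, which is the sense in which an “internal time mechanism” can sustain stable, regularly spaced responding.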

Passive avoidance

Two major variants of the passive avoidance procedure will be discussed.

Punishment methods. There are two general types of punishment techniques. The first is illustrated by the example given above to describe passive avoidance training in the rat: a painful US is presented contingent upon each jump from the platform to the grid floor of the training box.

The second technique, called the secondary aversive stimulus technique, establishes a previously neutral stimulus as an aversive CS (conditioned stimulus) by pairing it in time with several presentations of a painful or frightening US. When the CS evokes an acquired fear or anxiety reaction, called the “conditioned emotional response” (CER), the CS is then used to punish a specific type of behavior. The example we used above to illustrate passive avoidance learning would have been a secondary aversive stimulus technique if the rat, instead of being shocked for jumping down to the food, had been presented with a CS previously paired with shock.

Punishment procedures may be applied in an attempt to eliminate at least five different types of behavior: (1) an instrumental response previously established by rewards, illustrated by punishing the rat for jumping off the platform to get food; (2) an instrumental response previously established by punishment, illustrated by punishing a shock avoidance response with another frightening stimulus; (3) a consummatory response, such as eating, drinking, or copulating; (4) a complex, instinctive response pattern, such as nest building in birds; and (5) a simple reflexive reaction, such as an eyeblink. The results obtained by punishment procedures differ for each of these five categories.

First, when an instrumental response that has been previously established by rewards is punished, the outcome depends heavily on the following factors (a toy illustration of the delay gradient in point 3 follows this list):

(1) Duration of the punishing stimulus. Short-duration punishments presented after the response has occurred usually produce temporary suppression of the response and are followed by recovery of the response (sometimes to supernormal levels), while long-duration stimuli often suppress the punished behavior for long time intervals.

(2) Intensity of the punishing stimulus. Low-intensity punishing stimuli will suppress behavior temporarily, but the behavior recovers, often to a level more vigorous than that prior to the use of punishment; high-intensity punishing stimuli will often suppress behavior for long periods of time.

(3) Delay of punishment. The sooner the punishing stimulus is applied after an unwanted response has occurred, the more effective the punishment will be, giving us what is known as the “temporal gradient of delay of punishment.”

(4) Repeated exposure. Ss often show some adaptation to repeated punishments, and, therefore, new punishments are often more effective than familiar ones, provided of course that their durations and intensities are equal.

(5) Reward-punishment habituation. If a response is simultaneously rewarded and punished and if the punishment is of low intensity and duration, the punishment sometimes will not only be ineffective in suppressing the response but will also be able to serve as a reward in its own right (this is similar to masochism in neurotic disturbances).

(6) Existence of a rewarded alternative. If alternative response A is punished and alternative B is quickly rewarded, the punishment will be very effective in suppressing A; when there is no rewarded alternative to A, the response will recover more quickly from the suppressing effects of punishment.

(7) Temporal discriminative alternatives. The housebroken dog learns to urinate under condition A (outdoors) and never to urinate under condition B (indoors), provided that frequent punishment for B is followed quickly in time by no punishment for A (this is often referred to as “impulse control” training).

(8) Temporal order of reward and punishment. When a reward is followed regularly by punishment, the behavior leading to the reward is often suppressed; but if exactly the same behavior is evoked by punishment and then rewarded, the behavior can be strengthened, and S may come to tolerate the punishment or even to seek it out.

(9) Species-specific characteristics. A toy snake can be used to punish behavior in monkeys, for example, but it does not bother a rat.

(10) Resistance to extinction of the punished response. Responses that would normally be extinguished quickly in the absence of reward will be suppressed more readily by punishment than will responses normally having a high resistance to regular extinction procedures.
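The temporal gradient in point (3) asserts only that effectiveness declines monotonically with delay. The sketch below makes that decline concrete; the exponential form and the time constant are assumptions chosen purely for illustration.

```python
import math

# Toy illustration of the temporal gradient of delay of punishment:
# relative effectiveness falls off as the response-punishment delay
# grows.  The exponential form and time constant are assumptions.

def punishment_effectiveness(delay_s, time_constant=2.0):
    return math.exp(-delay_s / time_constant)

for delay in (0, 1, 2, 5, 10):
    print(f"delay {delay:2d} s -> relative effectiveness "
          f"{punishment_effectiveness(delay):.2f}")
```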

Second, when an instrumental response that has been previously established as an active avoidance response is punished, the outcome is hard to predict. Sometimes the response intended by E to be suppressed by the punishment is actually energized. Probably this facilitation is produced most reliably when the punishing stimulus is very similar to or the same as that which was used as the US in the earlier avoidance training procedure. A good deal of ignorance exists in this field of study.

Third, when a consummatory response is punished, the suppression is very often long-lasting and emotionally disturbing. Punishment seems to be more effective when applied to this type of behavior than it is when used to suppress instrumental responses, for reasons that are at present quite mystifying.

Fourth, when punishment is used to suppress complex, species-specific, instinctive behavior patterns, the results are often confusing. Sometimes displacement occurs; that is, S shows behavior characteristic of another behavior pattern. Frightening an animal for courting behavior may induce nest-building behavior or other inappropriate acts.

Finally, little is known of the effects of punishment for specific reflexive behavior. This contingency happens frequently in everyday affairs, as when an involuntary act annoys others, but the phenomenon has not been studied systematically in the laboratory.

The CER method. The CER procedure differs from the punishment procedure in a subtle but evidently important way. Like the punishment method, it employs an aversive stimulus; in contrast to the punishment method, however, it does not apply that stimulus to a specific response, and so no specific passive avoidance response is established. Instead, a frightening, secondary aversive stimulus is added to the general surroundings of S for limited periods of time. Often the usual behavior in that environment is depressed in rate, or amplitude, or quickness. A typical example is as follows: a rat is trained to press a lever when he is hungry in order to obtain food pellets. After this lever pressing occurs at a reliable rate, the CER procedure is introduced. A previously neutral stimulus is now associated with shock in a special shock box (repeated CS–US pairings, with no escape or avoidance possible) and comes to evoke a CER. When the CER stimulus is presented in the lever box, the lever-pressing rate often decreases. The CER stimulus is not presented contingent upon S’s lever-pressing response. Rather, it occurs without regard to the behavior being rewarded by food. Despite this, the instrumental behavior is often suppressed. Indeed, the CER technique may often produce suppression as effectively as does the punishment method.
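Suppression of this kind is commonly quantified with a ratio comparing response rates during and just before the CS. The B/(A + B) form below is one widely used convention in the conditioned-suppression literature; the response counts are invented for illustration.

```python
# A common suppression ratio for CER experiments: B / (A + B), where A
# is the number of responses in the pre-CS period and B the number
# during the CS.  About 0.5 means no suppression; 0.0 means complete
# suppression.  The counts below are invented for illustration.

def suppression_ratio(pre_cs_responses, cs_responses):
    total = pre_cs_responses + cs_responses
    return cs_responses / total if total else 0.5

print(suppression_ratio(42, 40))  # ~0.49: the CS arouses little fear
print(suppression_ratio(42, 3))   # ~0.07: strong conditioned suppression
```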

One major value of the CER procedure has been in the assessment of psychologically active drugs. Often, tranquilizers have been shown to minimize the response suppression attendant on the CER stimulus, and the special characteristics of behavior during the suppression period in drugged Ss can be of value in analyzing the action of the drug. The method has also been used to assess the level of fear controlled by a CS, on the assumption that the more the ongoing appetitive instrumental behavior is suppressed, the more fear-arousing is the CS. This dependent variable can then be related to events taking place during fear conditioning or to those occurring during avoidance training in which the same CS is used. For example, the Sd in avoidance training in situation A may not suppress appetitive behavior in situation B, thus leading to the conclusion that the Sd no longer arouses much fear. This finding has been correlated with observations of declining fear during stereotyped avoidance responding and with the finding that conditioned heart-rate elevation decreases during late phases of successful avoidance responding as the behavior becomes stereotyped. Thus, active and passive avoidance procedures can be combined to yield significant interrelationships between emotional reactions and instrumental responding.

Two of the most pressing questions concerning avoidance learning are theoretical ones. What mechanism produces the first avoidance response? What mechanism reinforces avoidance responses? Here, we are still in the dark. One explanatory scheme is cognitive. It argues that S comes to anticipate shock by virtue of Sd–US pairings and that S comes to know how to terminate shock by virtue of escape responses. Then, by an insightful inference, S terminates the Sd by performing an avoidance response. Another explanatory scheme depends heavily on the James–Lange theory of emotion. It argues that the Sd–US pairings lead to conditioned fear (CER) in the presence of the Sd. When the CER is intense enough to be as arousing as the shock itself, then S does in the presence of the Sd what he has learned to do in the presence of the shock. He performs the escape response during the Sd–US interval, and so he comes to avoid. Avoidance responses remove the Sd, thus reducing the anxiety level, and so avoidances are reinforced by anxiety reduction. Finally, another explanatory scheme depends heavily on proprioceptive feedback arising from skeletal movements. It argues that S learns to avoid movements associated with shock, thus leaving S to perform only movements not associated with shock. Certain proprioceptive stimulus patterns acquire aversive properties during the escape training phase or, as in the Sidman method, during the phase of learning wherein the US is frequently presented. Avoidance of aversive proprioceptive stimulus patterns gradually “shapes up” the avoidance behavior. At the moment there seems to be no decisive evidence that would allow us to choose among the major explanatory alternatives. However, many current experiments are aimed at these theoretical systems, probing their strengths and weaknesses in predicting important variables and phenomena.

Richard L. Solomon

[Other relevant material may be found in Anxiety; Defense Mechanisms; Electroconvulsive Shock; Stress.]

BIBLIOGRAPHY

Brady, Joseph V.; and Hunt, Howard F. 1955 An Experimental Approach to the Analysis of Emotional Behavior. Journal of Psychology 40:313-324.

Dinsmoor, James A. 1954 Punishment: I. The Avoidance Hypothesis. Psychological Review 61:34-46.

Estes, William K. 1944 An Experimental Study of Punishment. Psychological Monographs 57, no. 3.

Ferster, Charles B. 1958 Control of Behavior in Chimpanzees and Pigeons by Time Out From Positive Reinforcement. Psychological Monographs 72, no. 8.

Gibson, Eleanor J. 1952 The Role of Shock in Reinforcement. Journal of Comparative and Physiological Psychology 45:18-30.

Gwinn, Gordon T. 1949 The Effects of Punishment on Acts Motivated by Fear. Journal of Experimental Psychology 39:260-269.

Holz, William C.; and Azrin, Nathan H. 1961 Discriminative Properties of Punishment. Journal of the Experimental Analysis of Behavior 4:225-232.

Kamin, Leon J. 1959 The Delay of Punishment Gradient. Journal of Comparative and Physiological Psychology 52:434-437.

Kamin, Leon J.; Brimer, C. J.; and Black, A. H. 1963 Conditioned Suppression as a Monitor of Fear of the CS in the Course of Avoidance Training. Journal of Comparative and Physiological Psychology 56:497-501.

Keehn, J. D. 1959 On the Non-classical Nature of Avoidance Behavior. American Journal of Psychology 72:243-247.

Lichtenstein, P. E. 1950 Studies of Anxiety: I. The Production of Feeding Inhibition in Dogs. Journal of Comparative and Physiological Psychology 43:16-29.

Masserman, Jules H.; and Pechtel, Curtis 1953 Neuroses in Monkeys: A Preliminary Report of Experimental Observations. New York Academy of Sciences, Annals 56:253-265.

Miller, Neal E. 1960 Learning Resistance to Pain and Fear: Effects of Overlearning, Exposure, and Rewarded Exposure in Context. Journal of Experimental Psychology 60:137-145.

Mowrer, Orval H. 1960 Learning Theory and Behavior. New York: Wiley.

Sheffield, Fred D. 1948 Avoidance Training and the Contiguity Principle. Journal of Comparative and Physiological Psychology 41:165-177.

Sidman, Murray 1953 Avoidance Conditioning With Brief Shock and No Exteroceptive Warning Signal. Science 118:157-158.

Solomon, Richard L. 1964 Punishment. American Psychologist 19:239-253.

Solomon, Richard L.; and Brush, Elinor S. 1956 Experimentally Derived Conceptions of Anxiety and Aversion. Volume 4, pages 212–305 in Marshall R. Jones (editor), Nebraska Symposium on Motivation. Lincoln: Univ. of Nebraska Press.

Solomon, Richard L.; Kamin, Leon J.; and Wynne, Lyman C. 1953 Traumatic Avoidance Learning: The Outcome of Several Extinction Procedures With Dogs. Journal of Abnormal and Social Psychology 48:291-302.

Solomon, Richard L.; and Wynne, Lyman C. 1954 Traumatic Avoidance Learning: The Principles of Anxiety Conservation and Partial Irreversibility. Psychological Review 61:353-385.

Turner, Lucille H.; and Solomon, Richard L. 1962 Human Traumatic Avoidance Learning. Psychological Monographs 76, no. 40.

Yerkes, R. M.; and Dodson, J. D. 1908 Relation of Strength of Stimulus to Rapidity of Habit Formation. Journal of Comparative Neurology and Psychology 18:459-482.

VII NEUROPHYSIOLOGICAL ASPECTS

How the brain changes as a result of an organism’s experiences, how it maintains its altered state through time, and how it influences future behavior in a modified but systematic manner are some of the most intriguing mysteries facing modern biology. Direct experimental study of this problem started shortly after the turn of the century, a time when substantial neuroanatomical knowledge had already accumulated, although the electrical techniques that evolved into those of modern-day neurophysiology were then in their simplest, primitive stages. By the 1920s, two pioneering behavioral scientists, Lashley (1929) in the United States and Pavlov (1927) in Russia, were well along with their classical studies of learned behavior in animals and were attempting to relate their findings to the function of the brain.

Lashley, studying instrumental behavior in rats with experimentally created brain lesions, and Pavlov, theorizing about brain function from his studies of conditioned behavior in dogs, both focused their attention on the uppermost layer of the brain—the neocortex. The cortex, as it is more commonly called, gained early attention because of its greater size in man and the other higher animals. This anatomical fact suggested that the cortex might be particularly concerned with such complex neural processes as learning. It was not until the 1950s that investigations concerned with the physiology of the process of learning started to disengage themselves from the belief that the neocortex is exclusively responsible for the fixation of experience that permits new behavior patterns to be acquired.

From the standpoint of the nervous system, experience is some temporospatial pattern of transitory electrical activity in the nerve cells (neurons) of the brain. The basic neurophysiological question, then, is how this evanescent neural activity can modify the circuits of the brain so that they remain uniquely altered after the initiating physiological event has vanished. Any final understanding of this process requires answers to a number of interrelated and interlocking questions. First of all, how are the stimulus events that are to be learned electrophysiologically coded for introduction into the nervous system and transmission throughout the complex pathways of the brain? In what parts of the brain are the relevant coded messages integrated and stored for future use? How do electrical neural events, arising transitorily during initial learning, manage to induce a patterned and relatively permanent change in the brain, a change that is presumed to be chemical or structural? Finally, what is the physical nature of this semipermanent brain cell change, which can persist in neural tissue for years despite the active metabolic turnover in neurons, and how does this cellular change selectively modify the brain’s subsequent functioning so that the organism’s behavior can be an adaptive synthesis of past experience and current environmental demands? These are the questions that have dominated research in the neurophysiology of learning.

Electrophysiological aspects

The nerve impulse

Any speculation about the physical basis of learning has of course been heavily influenced by the existing state of neuro-physiological knowledge. The conspicuous electrical event that was first observed in the early studies of peripheral nerves was the nerve impulse, action potential, or spike, as it is variously called. This bioelectric activity is crucial in neural functioning and can be recorded from the stringlike extensions (fibers or axons) of all nerve cells. It is the nerve impulse, propagated along nerve fibers as the result of rapidly shifting chemical changes, that allows one neuron to influence the activity of other neurons with which it is in contact. At these points of contact (synapses), it is now known that the traveling nerve impulse induces the secretion of minute amounts of biochemical compounds (neurotransmitters) that, in turn, can influence the electrophysiological state of the next neuron in line. In this way, a nerve impulse can be initiated in the adjoining nerve cell.

It was early recognized that the basic nerve impulse was an all-or-none process. If the activation of a nerve cell reaches a given threshold, a spike of a predetermined size will be propagated at a given speed along the cell’s axon. Further, once such a spike has developed and subsided (in several milliseconds at most), either in the initially stimulated neuron or in adjoining neurons via synaptic connections, there ensues a sequence of physiological changes (after-potentials) that systematically influences the “firing” threshold of the cells for a brief time period. Over the span of about a tenth of a second, it is first more difficult and then easier to initiate a second spike. These were the primary facts of high-speed neural function that were available to behavioral scientists of the 1930s and early 1940s as they contemplated the overwhelming plasticity and long-term memory of the complex, multisynaptic brain.

Reverberating circuits and synaptic change

The first attempts to define the neurophysiological mechanisms that might provide the basis of learning were developed from these simple basics of brain function. Many more facts about the functioning of the central nervous system are now available, but the broad outlines of the generally accepted neurophysiological mechanism of learning have not changed in principle. It is still thought that nerve impulses, bombarding some combination of synapses in a pattern that is somehow appropriate to the task to be learned, bring about a change in the functional characteristics of the synapses and, thereafter, that particular pattern of nerve impulses will induce the response sequence that occurred during the original learning. Reverberating neuron circuits are thought capable of supplying the time necessary for the electrical activity to lead to some form of permanent synaptic change that would account for the long-term behavioral changes that can follow a learning experience. Possibly the best-known statement of this point of view was that of the Canadian psychologist Donald O. Hebb, who suggested that the permanent neural changes might be based on the structural growth of appropriate axon endings at the synapse (Hebb 1949). His specific suggestion has yet to be proved or, for that matter, disproved.
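Hebb’s postulate is commonly formalized today as a weight change proportional to the product of presynaptic and postsynaptic activity. The sketch below uses that later convention, not Hebb’s own 1949 wording; the learning rate, the activity values, and the assumption that an unconditioned input drives the cell are all illustrative.

```python
# Minimal sketch of a Hebb-style synaptic change: a synapse is
# strengthened in proportion to the product of presynaptic and
# postsynaptic activity.  All numerical values are assumptions.

LEARNING_RATE = 0.1

def hebbian_update(weights, pre_activity, post_activity):
    """Return synaptic weights after one co-activation episode."""
    return [w + LEARNING_RATE * pre * post_activity
            for w, pre in zip(weights, pre_activity)]

weights = [0.0, 0.0, 0.0]  # synapses onto one cell, initially untuned
pre = [1.0, 0.0, 1.0]      # which input fibers fire on each pairing
for _ in range(5):         # repeated pairing strengthens active synapses
    # assume an unconditioned input drives the cell (+1.0) on every trial
    post = sum(w * p for w, p in zip(weights, pre)) + 1.0
    weights = hebbian_update(weights, pre, post)

print(weights)  # only the synapses carrying the paired input have grown
```

After the loop, the paired inputs alone drive the cell more strongly than before training, which is the functional change that a structural growth of axon endings at the synapse has been presumed to supply.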

Brain waves and electroencephalography

With improved electronic techniques that permit the direct study of brain activity, a variety of characteristics of brain function have been discovered. Neurophysiologists, for example, are looking with increased interest at the variety of slowly shifting electrical potentials that can be recorded throughout the central nervous system. Whereas the small spike of a primary nerve impulse, for example, can subside in less than a millisecond, these larger potentials can oscillate as slowly as several times a second. There are also even slower shifts of D.C. voltage, lasting seconds or even minutes. Certain of these slow potentials oscillate spontaneously even in the resting or sleeping brain. These are the brain waves seen in the well-known electroencephalogram (EEG). Although independently discovered in lower animals by R. Caton in England and A. Beck in Poland in 1875 and 1890, respectively, the EEG did not receive widespread attention until the German psychiatrist Hans Berger reported (1929) that the same slow electrical rhythms were also evident in the human brain. The EEG remained largely a clinical tool, little used in behavioral research, until some of its neural and behavioral correlates started to be better understood because of the classic study made by the Italian neurophysiologist Giuseppe Moruzzi and his American collaborator, H. W. Magoun, in 1949. These investigators, together with many that followed them, showed that the EEG could serve as an indicator of arousal (or the lack of it) in the cortex as a whole or in restricted centers within the brain (Magoun 1958). As we shall see, behavioral scientists have subsequently started to use the EEG to evaluate the level of activity in various brain centers during different stages of learning.

Horizontal versus vertical organization

Aside from the more specific question of which particular structures within the brain may be necessary for learning, there is the prior but related question concerning the general flow of brain activities during complex behavior in general and learning in particular. General attitudes concerning this matter have shifted considerably during the half century or so since the beginning of direct laboratory study of brain mechanisms and learning.

Horizontal organization—association areas

The earliest point of view, and still the most common oversimplification of the facts, placed almost exclusive faith in the importance of the multilayered neocortex, which is conveniently located at the top of the central nervous system. As we have seen, the pattern of the phylogenetic development of the brain conspicuously pointed to the increased size of the cortex as the most likely source of control for such higher mental processes as learning. In its most traditional form, this corticocentric frame of reference also emphasized the horizontal organization of the neural substrate of learning, which is presumed to take place in the transcortical pathways that connect the sensory and motor areas of the neocortex. The so-called association areas, located between the sensory-input and motor-output areas, were assumed to supply pools of synapses where changes in transmission characteristics could afford new patterns of transcortical connections that would provide for the changing patterns of learned behavior. The highly influential Pavlovian theory of learning was of this general form, the transcortical effects being visualized as irradiating neural influences between sensory and motor areas in the cortex.

Vertical organization

Along with the recent growth of interest in the role of subcortical structures in all varieties of behavior, there has developed a newer point of view, which emphasizes the vertical organization of the brain and the recurrent interactions between neural centers at all levels of the central nervous system, including the cortex. Possibly the most convincing evidence of the importance of extracortical pathways in learning has been reported from studies of animal conditioning in which the training was managed exclusively with direct electrical stimulation of sensory and motor areas in the cortex. After suitable pairing of sensory stimulation (CS) and motor stimulation (US), activation of the sensory electrode by itself led to the limb movement (now the conditioned response) that had occurred originally only when the motor electrode was stimulated (Doty 1961). Cutting transcortical connections between the two electrodes did not eliminate the conditioned response. That transcortical pathways are not necessary for such a newly developed neural circuit was further confirmed by studies in which similar electrophysiological conditioning was accomplished even though the sensory electrode was in one cerebral hemisphere, the motor electrode in the other, and the interhemispheric connections (corpus callosum) completely severed (Doty & Giurgea 1961). While the cortex is certainly involved, in one way or another, in a variety of kinds of learned behavior, it appears to be substantially dependent on vertical interconnections that exist at all levels of the central nervous system.

Geography of the learning process

The recording of spontaneous EEG rhythms during simple learning situations, usually one form or another of classical conditioning, has shown that various parts of the brain are differentially active throughout successive stages of learning.

Alpha rhythm and alpha blocking

The use of brain wave changes to trace the shifts of brain activity during learning was initiated by an accidental observation of the French neurophysiologists G. Durup and A. Fessard in 1935. These investigators were studying the EEG alpha rhythm, a moderately slow wave form that can be recorded from the visual cortex of a resting animal during periods of reduced visual stimulation. This slow brain wave is arrested (alpha blocking) and replaced with a faster, lower voltage pattern following the onset of a bright light. This faster EEG pattern is what has since come to be known as the arousal pattern and, as discussed previously, is thought to indicate increased activity in the brain area concerned. Durup and Fessard were photographing examples of alpha blocking when they noticed that the click of the camera shutter started to induce the blocking even before the bright light was presented. They recognized that the click, by virtue of being paired with the light, had acquired the ability to influence the alpha waves. The conditioning-like properties of these paired sensory events attracted the attention of investigators throughout the world, and, with many modifications and elaborations, the study of spontaneous EEG changes during various conditioning procedures has since received widespread study (Morrell 1961a, pp. 444–451).

Localization of the arousal pattern

Early in conditioning, an EEG arousal pattern is seen widely throughout all levels of the brain. With further pairing of the conditioned and unconditioned stimuli, however, these generalized electrical changes start receding to areas, particularly cortical ones, that are related to the unconditioned stimulus and, finally, to areas concerned with the conditioned response. Since the arousal type of EEG is taken to mean that patterned or potentially integrating neural activity is taking place, widespread circuits apparently are active, or available, for use early in learning; as learning progresses, the cells and pathways involved become more localized along with the differentiation and refinement of the conditioned response.

Slow wave changes. During the early period of diffuse brain activity, a subcortical EEG arousal pattern has been particularly noted in the reticular formation and limbic system and is thought by some to be related to attentional and motivational priming of the central nervous system, preparatory to learning, when the organism finds itself facing a novel situation. Two Hungarian investigators, K. Lissák and A. Grastyán, reported (1960) a specific type of slow wave change (theta) in the subcortical hippocampus during learning. They believe that this change, which arises during early training and subsides shortly before conditioning is complete, represents a suppression mechanism that damps distracting activity, both neural and behavioral, during the time that the learned response is being developed. When conditioning is complete, the EEG arousal pattern is seen most consistently in those connections between the thalamus and the neocortex that are topographically appropriate to the final learned response.

“Tagged” brain-wave changes. Comparable findings have been reported recently from a similar, although slightly more elegant, experimental procedure in which tagged brain-wave changes, as they are called, are recorded during conditioning. In this type of study, a flashing or flickering light is used as the conditioned stimulus. If the flicker rate is not too divergent from the 10-per-second rate of the brain’s alpha rhythm, EEG oscillations at about the same rate as the conditioned stimulus can be detected as they shift among different brain structures at various stages of the learning. While such a procedure was first reported about twenty years ago by M. N. Livanov and K. Poliakov in connection with a standard conditioning study, it has recently been put to more elaborate experimental use by the American research team of E. R. John and K. F. Killam (1960). These investigators recorded such frequency-specific EEG changes, as they are called, while cats were learning a differential discrimination problem in which two lights, flashing at different rates, were the positive and the negative stimulus. There were thus two frequency-specific changes to be sought in the brain wave record. The cats had to learn to perform an avoidance response to one flicker rate but not to the other. Under these conditions, the general sequence in which the tagged EEG changes appeared in different brain structures during training was much the same as already discussed above. John and Killam, however, discovered one new phenomenon of considerable interest. They found that when the flicker rate of the stimulus, and thus the tagged EEG rhythm seen in the visual cortex, matched the frequency of the EEG pattern recorded from certain subcortical structures, a correct response was more apt to be made. Discordance between the EEG frequencies at these two sites, on the other hand, was commonly correlated with either an error of omission or commission. It is as though the way the animal “reads” the stimulus or, for one reason or another, is “set” to respond to it is represented by the subcortical frequency, while the temporal events in the visual cortex are tied, of course, to the actual flickering stimulus being presented. The importance of an organism’s expectancies or response set in conditioning situations is not a new idea (Sperry 1955), although it is presently receiving renewed consideration.
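The notion of a frequency-tagged rhythm can be made concrete with a small computation: the Fourier component of a recorded trace at the flicker frequency measures how strongly that site follows the stimulus. The synthetic traces, sampling rate, and flicker rate below are assumptions standing in for real recordings.

```python
import cmath
import math

# Sketch of detecting a "tagged" (frequency-specific) rhythm: compute
# the Fourier component of each trace at the flicker frequency and
# compare magnitudes across recording sites.  Traces are synthetic.

SAMPLE_RATE = 100.0  # samples per second (assumed)
FLICKER_HZ = 8.0     # flicker rate of the conditioned stimulus (assumed)

def power_at(trace, freq):
    """Magnitude of the discrete Fourier component of `trace` at `freq`."""
    comp = sum(x * cmath.exp(-2j * math.pi * freq * k / SAMPLE_RATE)
               for k, x in enumerate(trace))
    return abs(comp) / len(trace)

times = [k / SAMPLE_RATE for k in range(500)]
cortex = [math.sin(2 * math.pi * FLICKER_HZ * t) for t in times]     # follows the flicker
subcortex = [0.7 * math.sin(2 * math.pi * 10.0 * t) for t in times]  # near the alpha rate

print(power_at(cortex, FLICKER_HZ), power_at(subcortex, FLICKER_HZ))
# a large cortical value with a small subcortical one corresponds to the
# "discordant" case associated with errors in the study described above
```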

Evaluation of evidence

While these bioelectric events indicate something about the widely shifting focuses of neural activity that occur during the course of even a simple learning experience, there is no compelling reason to believe that EEG changes of the type just discussed, no matter how meaningful their localization may appear to be, represent electrical changes that are necessarily associated with stimulus recognition or the eventual fixation of memory. For example, when EEG arousal occurs in its immediate vicinity, an individual cortical neuron may show increased transmission of nerve impulses, decreased transmission, or neither (Jasper et al. 1960). While conditioning systematically brings about statistical changes in the activities of single cells in particular brain areas, an invariant relation between the occurrence of a conditioned response and activity in a specific neuron thus far has not been reported. It may be, of course, that the performance of even a specific learned response by a particular subject does not always involve precisely the same individual neurons.

The hippocampus and memory

Although much that we have considered indicates the diffuse nature of the neural substrate of learned behavior, scientists in several disciplines have become interested recently in the possibility that one subcortical structure, the hippocampus, contributes uniquely to the memory process. The hippocampus is in the rhinencephalon, the primitive portion of the forebrain. Recent interest in the hippocampus arose as the result of memory losses observed in human patients who had sustained damage to this brain structure as the result of either surgery or disease (Milner 1959). Such findings in man are particularly convincing since, in contrast to lower animals, one can be more confident that the deficit is in memory per se and not in some performance capacity that simply leaves the animal subject unable to demonstrate what has in fact been remembered. These neurological patients show a striking loss of recent memory, particularly if they are distracted while trying to memorize or have had no chance for repetitious practice. Retrograde amnesia for as long as a year prior to hippocampal damage is also seen in people with this type of brain lesion.

While no details are known about the manner in which the hippocampus might contribute to memory, one group of investigators discovered that electrophysiological activity in the hippocampal area of the cat did differ with correct and incorrect choices in a maze (Adey et al. 1960). The majority of studies employing hippocampally damaged animals, however, have failed to find the expected loss in learning ability when the operated subjects were measured on discrimination tasks or conditioned avoidance tests. It could be, however, that the failure to demonstrate a memory loss in animals with hippocampal lesions is due to the fact that tests of animal learning are typically the type that measure learning across a series of practice trials. This is the kind of repetitious training that, based on human studies at least, might minimize evidence of the surgically produced memory deficit. This explanation receives some support from the fact that mice, with chemically induced hippocampal damage, showed dramatic losses of recent memory when they were tested with a very simple learning task that they could have mastered in only a few practice trials (Flexner et al. 1963).

Mechanisms of memory storage

A variety of behavioral experiments support the notion that the early and late stages of the process of memory fixation are probably based on different physiological mechanisms (Gerard 1961). There is an early stage, which lasts for less than an hour and is easily disrupted by a variety of physiological insults to the brain, such as lowering its oxygen supply, cooling it, or bombarding it with electric current (Glickman 1961). Thereafter, the memory trace, or engram as it is sometimes called, becomes more rigidly fixed and cannot be disassembled so easily. As already discussed, these facts are usually taken to mean that the first evanescent steps in memory fixation are electrical in nature and then subsequently become anchored in some biochemical or structural alteration, most commonly presumed to be at the synapse.

Aside from these two well-recognized stages of memory fixation, sometimes called the dynamic and static stages of memory, there may be two additional critical time periods in the sequence of physical changes that underlie memory fixation. For one thing, chemical interference in the hippocampal area can eliminate a newly learned habit as long as a week after the original learning but not thereafter (Flexner et al. 1963). Finally, generalized brain trauma, such as a concussion, can disrupt the memory for events extending back for months and years prior to the injury. Yet, memory commonly remains intact for the stretch of years preceding this period of retrograde amnesia. There thus could be at least four successive steps in the process of memory storage: (1) an acute process, initiated at the time of learning and completed before an hour has passed; (2) a semiacute second process, at least in the hippocampal area, requiring some number of days; (3) a slowly developing third stage, completed over a period of months or years; and (4) a final, static stage of memory, which is not commonly open to disruption by either experimental or clinical influences. As Deutsch (1962) has pointed out, however, it is not clear that all these stages necessarily involve different physiological processes; they could represent, in part, different degrees of development in some common process. Finally, since it is not uncommon for the memories lost in retrograde amnesia to be retrieved when the patient recovers, it may well be that long-term memories are never really eliminated by generalized trauma to the brain but are only made unavailable for current use.

Dominant focus and postpolarization memory

In 1953, V. S. Rusinov discovered an electrophysiological procedure by which he was able to produce new temporary connections between sensory and motor cells in the cortex. He applied a mild anodal current to a small area of motor cortex in the rabbit and found, during this period of polarization, that a previously indifferent sensory stimulus, such as a novel tone, induced a discrete skeletal movement that was related topographically to the part of the motor area that was polarized. The polarizing current is thought to lower the excitability threshold of the motor cells that it influences and thus permit sensory activity in widespread brain areas to “drain,” so to speak, through the polarized area and initiate motor activity appropriate to the polarized motor cells. Using a term originally suggested by A. A. Ukhtomskii in 1926, such an area of elevated excitability is called a dominant focus.

If a dominant focus is induced in the part of the motor area that, for example, initiates response pattern A (e.g., right foreleg flexion), the focus will help maintain previous conditioning if the conditioned response was pattern A. On the other hand, a conditioned response of pattern B is suppressed by the induction of dominant focus A. Such findings suggest that something like a dominant focus might be involved during the early stages of conditioning. The effect of a dominant focus persists for about thirty minutes after the anodal current is removed (postpolarization period), which matches reasonably well the time course of the early dynamic stage of memory as it is reported by some workers.

Frank Morrell (1961b), an American investigator, studied the activity of individual motor cells within the area of a dominant focus and started to analyze the fiber pathways that are critical for this interesting phenomenon. He demonstrated, further, a degree of specificity for the 30-minute “memory” in the area of a previous dominant focus; the motor area can be activated during this postpolarization period only by a stimulus that has been presented during the period of polarization. Both transcortical and subcortical pathways were found to be necessary for the activation of the dominant focus by an effective stimulus event. Finally, perseverating activity of nerve impulses was not detected during the postpolarization period in the area of the dominant focus. This is contrary to what might have been expected if reverberating circuits of nerve impulses were responsible for the short-term maintenance of memory in the area. If the short-term phase of memory really is electrical in nature, this finding suggests that the process might be based on persistent graded potentials rather than on propagated nerve impulses. It is still possible, of course, that the early memory process is based on some other fragile biological process, which is possibly chemical and which is not apparent to the recording electrode of the neurophysiologist.

RNA and long-term memory. The physical basis of long-term memory is no better understood than that of recent memory. As we have seen, the most popular idea is that some permanent change of a structural, or at least chemical, nature takes place at the synapses, and thereafter the routing or patterning of nerve impulse transmission is altered. No specific structural change at a synapse, associated with learning, has ever been demonstrated experimentally. In recent years, however, there has been growing interest in the possibility that changes in the ribonucleic acid (RNA) molecules of the nerve cell might be responsible for structural changes, at the synapse or elsewhere, that could then serve the purpose of long-term memory. This possibility, suggested by Joseph J. Katz and Ward C. Halstead in 1950, has now received some indirect experimental support, although definite proof of such a mechanism is still lacking (Dingman & Sporn 1964). The general idea is that patterns of neural activity, impinging on a particular nerve cell, would shape the structure of complex RNA molecules within the cell. The RNA, as a regulator of protein synthesis in the cell, would thereafter perpetuate chemically coded protein molecules, thus rendering the cell, for example, maximally sensitive to the temporospatial pattern of neural activity that had originally induced the structural change in the RNA (Hyden 1962). How electrical neural activity could modify the structural pattern of RNA or how structurally coded RNA might then influence the synaptic characteristics of its neuron is still entirely speculative. Nevertheless, changes both in the concentration of RNA and in the specific chemical structure of RNA have been demonstrated in specific brain centers that have been subjected to high levels of neural activity or, more interestingly, have been involved in the learning of new patterns of behavior.

Robert A. McCleary

[Other relevant material may be found in Nervous System; Psychology, article on Physiological Psychology; Senses, article on Central Mechanisms; Stimulation Drives; and in the biographies of Flourens and Lashley.]

BIBLIOGRAPHY

Adey, W. R.; Dunlop, C. W.; and Hendrix, C. E. 1960 Hippocampal Slow Waves: Distribution and Phase Relationships in the Course of Approach Learning. A.M.A. Archives of Neurology 3:74–90.

Deutsch, J. A. 1962 Higher Nervous Function: The Physiological Bases of Memory. Annual Review of Physiology 24:259–286.

Dingman, Wesley; and Sporn, Michael B. 1964 Molecular Theories of Memory. Science New Series 144:26–29.

Doty, R. W. 1961 Conditioned Reflexes Formed and Evoked by Brain Stimulation. Pages 397–412 in Daniel E. Sheer (editor), Electrical Stimulation of the Brain: An Interdisciplinary Survey of Neurobehavioral Integrative Systems. Austin: Univ. of Texas Press.

Doty, R. W.; and Giurgea, C. 1961 Conditioned Reflexes Established by Coupling Electrical Excitation of Two Cortical Areas. Pages 133–151 in Council for International Organizations of Medical Sciences, Brain Mechanisms and Learning: A Symposium. Oxford: Blackwell; Springfield, Ill.: Thomas.

Flexner, Josefa B.; Flexner, L. B.; and Stellar, E. 1963 Memory in Mice as Affected by Intracerebral Puromycin. Science New Series 141:57–59.

Gerard, R. W. 1961 The Fixation of Experience. Pages 21–35 in Council for International Organizations of Medical Sciences, Brain Mechanisms and Learning: A Symposium. Oxford: Blackwell; Springfield, Ill.: Thomas.

Glickman, Stephen E. 1961 Perseverative Neural Processes and Consolidation of the Memory Trace. Psychological Bulletin 58:218–233.

Hebb, Donald O. 1949 The Organization of Behavior: A Neuropsychological Theory. New York: Wiley.

Hyden, H. 1962 A Molecular Basis of Neuron–Glia Interaction. Pages 55–69 in Francis O. Schmitt (editor), Macromolecular Specificity and Biological Memory. Cambridge, Mass.: M.I.T. Press.

Jasper, H. H.; Ricci, G.; and Doane, B. 1960 Micro-electrode Analysis of Cortical Cell Discharge During Avoidance Conditioning in the Monkey. Electroencephalography and Clinical Neurophysiology (Supplement 13): 137–155.

John, E. R.; and Killam, K. F. 1960 Studies of Electrical Activity of Brain During Differential Conditioning in Cats. Pages 138–148 in Society of Biological Psychiatry, Recent Advances in Biological Psychiatry. New York: Grune.

Lashley, Karl S. 1929 Brain Mechanisms and Intelligence: A Quantitative Study of Injuries to the Brain. Univ. of Chicago Press.

Lissak, K.; and Grastyan, E. 1960 The Changes of Hippocampal Electrical Activity During Conditioning. Electroencephalography and Clinical Neurophysiology (Supplement 13): 271–277.

Magoun, Horace W. (1958) 1963 The Waking Brain. 2d ed. Springfield, Ill.: Thomas.

Milner, Brenda 1959 The Memory Defect in Bilateral Hippocampal Lesions. Psychiatric Research Reports 11:43–58.

Morrell, Frank 1961a Electrophysiological Contributions to the Neural Basis of Learning. Physiological Reviews 41:443–494.

Morrell, Frank 1961b Effect of Anodal Polarization on the Firing Pattern of Single Cortical Cells. New York Academy of Sciences, Annals 92:860–876.

Pavlov, Ivan P. (1927) 1960 Conditioned Reflexes: An Investigation of the Physiological Activity of the Cerebral Cortex. New York: Dover. → First published as Lektsii o rabote bol’shikh polusharii golovnogo mozga.

Sperry, R. W. 1955 On the Neural Basis of the Conditioned Response. British Journal of Animal Behaviour 3: 41–44.

VIII VERBAL LEARNING

Research in the area of verbal learning is concerned with the experimental analysis of the acquisition and retention of verbal habits. The major emphasis in both experimentation and theory construction has been on rote learning, that is, the mastery of verbal materials in a prescribed arrangement. Studies of rote learning have provided the central body of empirical facts, analytic tools, and theoretical concepts dealing with verbal learning.

Historical developments

Early experimental studies

The first systematic experimental investigation of rote learning was carried out by the German psychologist Hermann Ebbinghaus, whose treatise Memory (1885) occupies an undisputed position as a classic in the field. Ebbinghaus set out to show that higher mental processes such as memory could be studied under strictly controlled experimental conditions and that they could be precisely measured. His conception of the processes of learning and memory was heavily influenced by the British empiricist doctrine of association by contiguity. In a monumental series of experiments, which were carried out with himself as the only subject, Ebbinghaus introduced procedures and methods of analysis which provided the point of departure for the subsequent development of the entire area of verbal learning. The large majority of Ebbinghaus’ experiments were concerned with the acquisition and retention of series of discrete verbal units. In an effort to develop standardized materials that could be used interchangeably in a large variety of learning tasks, Ebbinghaus devised the nonsense syllable as the unit to be used in the construction of verbal series. (A nonsense syllable is a consonant–vowel–consonant combination devoid of dictionary meaning.) While such materials turned out to be far from equal in difficulty, the introduction of these materials was the first step toward the standardization and classification of the verbal units used in studies of rote learning. To provide a uniform standard of attainment with respect to which differences among tasks and conditions of practice could be evaluated, Ebbinghaus established the concept of a criterion of performance, for example, the errorless reproduction of an entire series. The number of repetitions or the amount of time required to reach this criterion could then be related to such variables as the length of the series or the temporal distribution of practice periods. A fixed criterion of performance also made it possible to evaluate retention after the cessation of practice; specifically, Ebbinghaus measured the amount of retention in terms of the amount of time saved in relearning a series relative to the time required for original acquisition. Using these methods of analysis, Ebbinghaus established a number of basic principles of acquisition and retention. His findings included evidence that the amount of practice time per item increases with the length of the series and that a strong positive relationship exists between the number of repetitions of a task and the degree of retention. Another famous product of Ebbinghaus’ investigations is the classical curve of retention, which is characterized by a steep initial drop in retention immediately upon the end of learning, followed by more gradual losses thereafter.
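
Ebbinghaus’ savings measure reduces to a simple computation: the percentage of the original learning time (or trials) saved when the series is relearned to the same criterion. A minimal sketch, with numbers invented purely for illustration:

```python
def savings_score(original_time, relearning_time):
    """Ebbinghaus' savings measure of retention: the percentage of the
    original learning time (or trials) saved when the series is
    relearned to the same criterion."""
    return 100.0 * (original_time - relearning_time) / original_time

# Hypothetical illustration: a series first mastered in 20 minutes is
# relearned to the same criterion in 8 minutes some time later.
print(savings_score(20, 8))  # 60.0 per cent retained, in savings terms
```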

Ebbinghaus’ approach was soon adopted in other laboratories, and additional techniques for the study of rote verbal learning were developed rapidly. Among the early German investigators, G. E. Müller deserves special mention. In Müller’s laboratory the first systematic studies of the processes of interference in retention were carried out. He was also responsible for many refinements in experimental technique, such as the use of an automatic exposure device for the presentation of learning materials (the prototype of the contemporary memory drum). Within less than two decades after the appearance of Ebbinghaus’ pioneer investigations, the study of rote verbal learning had become a standard procedure in laboratories of experimental psychology.

American functionalists

In the United States the development of the area of verbal learning is historically tied to the functionalist movement, which helped to make the experimental study of learning a central concern of contemporary experimental psychology and which emphasized the application of psychological principles to problems of education. Although those working in the functionalist tradition put the discovery of empirical laws ahead of formal theory construction, there was a strong predilection for the analysis of the learning process in terms of principles of association. A discussion of the influence of the functionalist movement on the psychology of learning is provided by Hilgard ([1948] 1956, chapter 10). The associationist orientation permitted a ready translation of theoretical concepts and experimental operations into the language of stimulus-response psychology, which became widely accepted under the influence of behaviorism. Thus, the prevalent approach to problems of verbal learning became that of an associationistic stimulus-response psychology. With some important exceptions to be noted later, there was no strong commitment to formal theories of behavior.

A pragmatic orientation is apparent in the writings of the experimental psychologists who had a major influence on the development of the field of human learning. An exposition of the prevailing theoretical approach was given in Edward S. Robinson’s Association Theory Today (1932). The broad definition of association as “the establishment of functional relations among psychological activities and states in the course of individual experience” (p. 7) was designed to accommodate the pursuit of a wide range of empirical questions within a common associationist framework. Robinson stressed the multiplicity of the laws of association and the necessity of reformulating such laws as functional relations between these multiple antecedent conditions on the one hand and measures of associative strength on the other.

The emphasis on functional relations led to the formulation of a program of dimensional analysis as a guide to experimental investigation. The essential objectives of such a program were first outlined by McGeoch (1936); a later systematic exposition of this approach may be found in an important article by Melton (1941). Learning situations vary continuously with respect to a manifold of descriptive characteristics or dimensions, and learning tasks can be ordered along these dimensions to provide the framework for the exploration of quantitative functional relations. Among the major axes of reference is the verbal-motor dimension; purely verbal tasks define one extreme and predominantly motor ones the other. A second dimension is defined by the degree to which the subject must discover the correct response and ranges from the acquisition of rote series to problem-solving situations. A third dimension refers to the degree to which mastery of a task requires a response to relational rather than to absolute properties of the stimuli.

Influence of conditioning theory

While dimensional analysis has provided a thread of continuity in empirical research, there have been important attempts to conceptualize the facts of verbal learning within the framework of general psychological theory. During the period between the two world wars an important landmark was the publication of the Mathematico–Deductive Theory of Rote Learning by Clark L. Hull and his associates (1940). In this analysis, the basic phenomena of serial rote learning are deduced from a set of postulates that were derived largely from the theory of classical conditioning. The effective strength of the associations linking the members of a verbal series was conceived as representing the balance of excitatory and inhibitory potentials, in accord with the dual-process interpretation of Pavlovian conditioning. With the aid of assumptions about the conditions governing the growth and decline of excitatory and inhibitory tendencies, specific quantitative predictions were made concerning the shape of the serial-position curve (level of performance as a function of the position of an item in a series), the effects of the temporal distribution of practice trials, and other properties of serial learning. Some of these predictions were confirmed experimentally; however, the scope of the theory is limited, and its influence has been declining with the rapid accumulation of empirical findings that fall outside its boundary conditions.

Another systematic application of principles of classical conditioning to verbal learning was proposed by Eleanor J. Gibson (1940). Gibson’s analysis centered on the concepts of stimulus generalization and differentiation, which were adopted from conditioning theory. Stimulus generalization refers to the tendency for the conditioned response to be elicited by stimuli similar to the conditioned stimulus. The amount of generalization describes a gradient, that is, it is directly related to the degree of similarity between the training stimulus and the test stimulus. Differentiation refers to the reduction of generalization as a consequence of reinforcement of responses to the training stimulus and nonreinforcement of responses to other test stimuli. According to Gibson’s analysis, speed of acquisition is determined by the rate at which differentiation among the stimulus items is achieved during practice. Thus, speed of learning should vary inversely with the degree of interstimulus similarity. In general, the experimental facts are consistent with this prediction. The theory also predicts, in accordance with principles of conditioning, that generalization tendencies recover spontaneously over time, with a consequent loss of differentiation. It follows that long-term retention, like speed of acquisition, should vary inversely with interstimulus similarity. However, this prediction has consistently failed to be confirmed. This lack of support for one of the critical deductions from the theory necessarily calls into question the validity of the basic postulates. The analytic power of the theory is also limited by the failure to consider the role of response generalization along with that of stimulus generalization. A comprehensive evaluation of the theory, which has exerted considerable influence on research in verbal learning ever since its publication, is provided by Underwood (1961).

Gestalt psychology

Although there has been wide agreement among investigators of verbal learning on the usefulness of associationist concepts in the formulation of empirical questions and theoretical issues, such agreement has been by no means general. A quite different approach is represented by exponents of the gestalt school of psychology, whose work in verbal learning was directed primarily at the validation of general principles of their theory. An exposition of this approach may be found in Koffka (1935, chapters 10-13). From the point of view of gestalt theory, learning and retention are governed by the same principles of organization that govern the formation of perceptual units. In the acquisition of verbal tasks, relationships such as similarity and proximity between the component items are considered critical in determining the readiness with which the organization required for mastery is achieved. The organizations developed during learning are, in turn, assumed to be preserved in the nervous system as memory traces, whose subsequent development is likewise governed by principles of organization. Re-exposure to part of an organized pattern activates the trace of the pattern and thus permits the recall of other component parts. Association is, therefore, interpreted as a special case of organization. Experimental studies initiated by gestalt psychologists have sought to demonstrate the applicability to verbal learning and memory of principles of perceptual organization. An example is provided by the studies of the effects of perceptual isolation. When a unique item is embedded in an otherwise homogeneous series, the unique or “isolated” item is recalled better than the average member of the homogeneous series. According to gestalt theory, the traces of the homogeneous items suffer assimilation and lose their identity, whereas the trace of the unique item remains distinctive and accessible. This dependence of recall on the relationship between items is interpreted as analogous to the salience of a perceptual figure against a homogeneous background. Alternative interpretations have been offered, for example, the differential susceptibility of isolated and homogeneous items to generalization. The conditions determining the isolation effect are still under investigation. This example illustrates the fact that crucial experiments permitting a clear-cut decision between gestalt and associationist interpretations have often been difficult to design. [See Gestalt theory.]

Recent developments

The two decades since the end of World War II have witnessed a rapid growth of activity in the field of verbal learning, with several new developments adding greatly to the diversity of experimental methods and theoretical concerns. Perhaps the most important trend is the convergence on common problems of research in psycholinguistics and in verbal learning. This development is reflected in the increased emphasis on the role of natural language habits in the analysis of verbal learning. A large amount of work has centered on the assessment of the effects on acquisition and retention of the associative hierarchies that are developed through linguistic usage. The method of free association (in which the individual is required to respond to each stimulus word with the first other word that comes to mind) and related normative techniques have been used to determine the structure of verbal associations characteristic of a given speech community. Learning tasks constructed on the basis of such norms are then used to evaluate the influence of preexisting associative patterns on the formation of new verbal habits. Within this problem area a focus of special concern has been the study of mediational processes, that is, of the ways in which pre-existing associations serve to facilitate the establishment of connections between initially unrelated terms (see Jenkins 1963). The influence of contemporary linguistic analysis is also reflected in the growing number of investigations concerned with the role of grammatical habits in the acquisition and performance of verbal tasks. Largely under the influence of George A. Miller (e.g., 1962), much of the recent work has been directed at the exploration of the psychological processes suggested by the principles of transformational grammar.

In the experimental study of memory processes, several influences have converged to produce an upsurge of interest in short-term retention. Any a priori distinction between short-term and long-term memory is, of course, arbitrary; in practice the operational difference is between retention intervals of the order of seconds or minutes on the one hand and of hours, days, or weeks on the other. Rapid developments in the theory of communication and in the study of man–machine interaction have brought to the fore the question of man’s capacity to process and to store continuously changing inputs of information; for example, in the performance of monitoring tasks incoming information has to be retained for critical periods of time to permit effective action. Thus, the study of short-term memory is an integral part of the analysis of the nervous system as a limited-capacity channel for the transmission of information. This general approach is well represented by the work of Broadbent (1958), who has introduced a number of influential new techniques for the measurement of short-term memory. The availability of the analytic methods of information theory has, of course, been of considerable value in bringing order to measures of immediate memory that are obtained with a wide variety of materials. Short-term memory also has continuing systematic significance for theories of the physiological basis of memory. A central concept of several influential theories is that of a transitory memory trace, which is assumed to fade or decay rapidly unless it is restored by repetition or rehearsal. The assumption of a short-lived immediate trace is characteristically supplemented by the postulation of a separate and distinct mechanism of long-term storage. A considerable amount of effort is being devoted to experimental tests of the dual-process conception. A systematic question on which agreement does not appear to be in sight as yet is whether the principles governing short-term and long-term memory are continuous or discontinuous. [See Forgetting.]

Recent developments in verbal learning, as in several other special fields of psychology, include the construction of mathematical models for the description of circumscribed sets of data. Among the most influential approaches have been the stochastic models that treat acquisition as a probabilistic process. Very briefly, it is assumed that on any given learning trial (a) the organism samples the environmental events which constitute the stimulus situation; (b) all the stimulus elements sampled become connected to the response occurring contiguously with them; and (c) such association by contiguity occurs in an all-or-none fashion, that is, reaches maximal strength on a single trial. The probability of occurrence of the response increases as more and more stimulus elements are connected with it. While the most important applications of these models have been to discrimination learning and conditioning, they have also been applied to the acquisition of verbal associations. A point of major theoretical significance is the assumption made by stochastic models that association by contiguity occurs in an all-or-none fashion. If verbal stimuli function as single elements, it follows that associations are not built up gradually as a function of practice but, instead, change in probability from zero to one after a single occurrence. The assumption that associations may vary continuously in strength and are built up gradually through practice has been explicit or implicit in associationist interpretations of verbal learning. This assumption, which has been designated as the incremental hypothesis, has been challenged in recent years by exponents of the all-or-none position. Experimental tests to decide between these alternative conceptions have focused on the question of whether there is a growth in associative strength on practice trials prior to the first correct response. While it is fair to say that the evidence thus far has favored the incremental position, the issue cannot be regarded as finally settled (for a review see Postman 1963). The emergence of this controversial issue illustrates the recent trend toward the consideration of hypotheses about the nature of association and memory in the context of studies of verbal learning. [See Models, Mathematical.]
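
The three assumptions listed above can be made concrete in a brief simulation. The sketch below is an illustration only; the population size, sample size, and trial count are arbitrary values chosen for the example, not parameters from any model in the literature:

```python
import random

def simulate_acquisition(n_elements=50, sample_size=10, n_trials=20, seed=1):
    """Minimal stimulus-sampling sketch: on each trial a random subset of
    stimulus elements is sampled, and every sampled element becomes
    connected to the response in an all-or-none fashion.  Response
    probability is the proportion of connected elements."""
    random.seed(seed)
    connected = set()
    probabilities = []
    for _ in range(n_trials):
        probabilities.append(len(connected) / n_elements)  # probability at start of trial
        sample = random.sample(range(n_elements), sample_size)
        connected.update(sample)  # all-or-none connection of the sampled elements
    return probabilities

# The resulting curve rises in a negatively accelerated fashion toward 1.0.
print(simulate_acquisition())
```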

A brief survey of some representative experimental methods and findings in the area of verbal learning now follows. The evidence is grouped under the headings of acquisition, transfer of training, and retention. Detailed reviews and discussions of the relevant literature may be found in McGeoch (1942) and in the collections of papers edited by Cofer (Conference on Verbal Learning . . . 1961), Cofer and Musgrave (Conference on Verbal Learning . . . 1963), and Melton (Symposium . . . 1964).

Acquisition

It will be convenient to consider the analysis of the process of acquisition with reference to specific experimental procedures, each of which focuses on a different type of verbal performance.

Paired-associate learning

In the paired-associate method, the subject’s task is to learn a prescribed response to each of a list of stimulus terms (much as in vocabulary learning). The characteristics of the stimulus and response terms can be varied independently. An important analytic advantage of the method is, therefore, that it permits the assessment of stimulus and response functions in the acquisition of associations. Progress in this task will depend on the extent to which the following requirements are met: (a) the stimulus terms are differentiated from each other; (b) the prescribed responses are available as integrated units in the subject’s repertoire; and (c) stable associative connections are developed between the appropriate stimulus and response terms.
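
A minimal sketch of one common procedure, the anticipation method, may make the task concrete. The word pairs and function names below are invented for illustration and do not come from any particular experiment:

```python
import random

# Hypothetical vocabulary-style list: each stimulus term has one prescribed response.
pairs = {"dax": "house", "zup": "river", "mib": "stone"}

def anticipation_trial(pairs, answer):
    """One study-test pass: each stimulus term is presented alone, the
    learner attempts the response, and the intact pair is then exposed
    as feedback.  Returns True on an errorless (criterion) trial."""
    items = list(pairs.items())
    random.shuffle(items)  # presentation order is varied from trial to trial
    errors = 0
    for stimulus, response in items:
        if answer(stimulus) != response:
            errors += 1
        # feedback step: the correct stimulus-response pair is shown here
    return errors == 0
```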

It is apparent that the requirements of the task with respect to the stimulus and response terms are not the same. Whereas the response terms must be recalled as prescribed, the stimulus terms need only be discriminated from each other and recognized as the occasions for the performance of the appropriate responses. The subject is free, therefore, to attend to only those characteristics of the stimulus which are minimally essential for the placement of the correct responses, that is, the subject can practice “stimulus selection.” Recognition of this fact has led to the distinction between nominal and functional stimuli (Underwood 1963). The nominal stimulus refers to stimulus terms as specified by the experimenter for presentation to the subject; the functional stimulus refers to those characteristics of the stimulus which actually function as cues for the learner. The available evidence indicates that stimulus selection does, in fact, occur within the limits permitted by the requirements of the task; for example, when the stimulus is a compound composed of elements which vary in meaningfulness, there is a strong tendency to select the more meaningful element as the functional cue.

The analysis of the components of the paired-associate task makes it useful to conceive of the total acquisition period as divided into two successive stages, namely, a response-learning stage and an associative stage (Underwood & Schulz 1960, pp. 92-94). During the former, the prescribed responses are established as integrated units available for performance; during the latter, the responses are linked to the appropriate stimuli. If the responses are items in the subject’s pre-experimental repertoire, for example, familiar words, the response-learning stage reduces to a response-recall stage during which the subject learns to restrict his responses to the units in the list. The two stages certainly overlap in time. The distinction is, however, useful in the analysis of the conditions which influence performance in paired-associate learning.

Of the task variables which influence speed of acquisition, two will be singled out on the basis of the magnitude and reliability of their effects. These are (a) meaningfulness and (b) intralist similarity. In current usage, the term “meaningfulness” refers to several scaled characteristics of verbal units, such as the probability of a unit evoking an association within a limited period of time, the number of different associations evoked by the unit, etc. These indices tend to be closely related to each other and to the frequency of occurrence of the unit in the language (for a survey of measures of meaningfulness see Underwood & Schulz 1960). Intralist similarity may be either formal or semantic. Formal similarity is defined by the degree to which overlapping elements, such as letters, are used in the construction of different units included in a list; this variable is characteristically manipulated in lists composed of nonsense items. Semantic similarity refers to the degree of synonymity and applies to lists composed of words.

Meaningfulness. The speed of paired-associate learning varies widely as a function of meaningfulness, but the relationship is much more pronounced when responses rather than stimuli are varied in meaningfulness (Underwood & Schulz 1960). From the point of view of the two-stage analysis of acquisition, it is clear that meaningfulness decisively influences the response-learning stage: the more fully a response unit conforms to prior linguistic usage the more readily it enters into association with new stimuli. There are two factors which may serve to reduce the effectiveness of the variable of stimulus meaningfulness: (a) stimulus selection may counteract the differences that exist in the nominal stimuli; (b) increases in stimulus meaningfulness may facilitate not only the formation of associative linkages with the prescribed responses but also the development of inappropriate associations with other responses. Thus, associative facilitation and interference may increase concurrently as a function of stimulus meaningfulness.

Intralist similarity. The effects of intralist similarity also differ depending on whether stimuli or responses are manipulated. In general, speed of acquisition varies inversely with the degree of similarity of the stimulus terms. As stimuli become less discriminable, the associative phase of learning is retarded. Variations in response similarity, on the other hand, have only small effects on the rate of learning, except for units of low meaningfulness. The usual absence of a large effect is attributable to the balance between two opposed influences: As responses become more similar, the amount of response learning is reduced; at the same time, individual responses become less discriminable from each other and the associative phase is prolonged.

Serial learning

Serial-learning tasks require the reproduction of a series of items in a prescribed order. The experimental procedure typically consists of the paced presentation of the successive members of the series, with the subject required to anticipate each item before it appears. Speed of acquisition varies reliably as a function of the ordinal position of the item in the series. The initial items are usually acquired first, the terminal items next, and the central items last. Thus, when percentage of correct responses during learning is plotted against serial position, a typical bow-shaped curve is obtained. Classical interpretations of serial learning (for example, that of Hull mentioned earlier) were based on the assumption that an individual member of the series serves a dual function during acquisition: it is the response to the immediately preceding item and the stimulus for the immediately following one. The bow-shaped serial position curve was attributed to interferences from incorrect associations among nonadjacent members of the series. Given certain assumptions about the number and strength of such remote associations, it can be shown that the total amount of interference should first increase and then decrease as a function of serial position. Recent experiments have, however, served to call the classical conception of serial learning into question. Some of the critical evidence comes from experiments in which the subject first learns a serial list and immediately thereafter a paired-associate list in which the pairs are composed of adjacent members of the serial list. If the mastery of the serial list depends on the establishment of a chain of sequential associations, pronounced facilitation should be found in the acquisition of the paired-associate list. A conclusive test of this prediction has proved difficult. Performance on the critical transfer task appears to be complexly determined and sensitive to procedural variations. The difficulties encountered by the classical hypothesis have raised questions about the nature of the functional stimulus in serial learning; for example, it has been suggested that each member of the series is associated to its ordinal position in the series rather than to the preceding item. This problem is receiving considerable experimental attention at the present time (Underwood 1963).

Free learning

The two methods discussed above require the establishment of prescribed links between verbal units, and, thus, they focus on the development of sequential associations. By contrast, the method of free learning yields information about the ordering and reproduction of verbal units when no sequential constraints are placed upon the subject. Under this method, a list of items, such as words, is presented to the subject, and he is then permitted to reproduce them in whatever order they occur to him (free recall). The method is useful in the investigation of pre-experimental habits of classifying verbal units and of the process of recall (for general discussions see Deese 1961; Postman 1964). The major determinants of the amount of free recall are (a) the total learning time prior to the test of recall and (b) the number and strength of the pre-experimental associations between the units in the list. The total learning time is the product of the length of the list and the presentation time per item. The number of items recalled after a single exposure of a list remains approximately invariant for a given total learning time; that is, the number of items recalled increases with the length of the list and with the amount of study time per item, but a decrease in the value of one of these variables can be compensated for by an increase in the other. Thus, it is the total amount of time available for practice which is critical rather than the length of the task or the exposure rate per se. With length and rate held constant, the number of words recalled varies with the average degree of associative connection between the items in the list (as determined, for example, by the method of free association). In the absence of external constraints, recall performance reflects pre-existing associations and relations between the component units of the list. As free learning continues beyond a single presentation and recall, stable groupings of items are likely to be formed which are carried over from one test of recall to the next.
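
The total-time relationship described above is simply multiplicative. A schematic sketch, with arbitrary illustrative values rather than empirical data:

```python
def total_learning_time(list_length, seconds_per_item):
    """Total study time: the product that, to a first approximation,
    fixes the number of items recalled after a single presentation."""
    return list_length * seconds_per_item

# Two schedules with equal total time (40 seconds) are predicted to yield
# roughly equal recall: 20 items at 2 sec/item versus 10 items at 4 sec/item.
print(total_learning_time(20, 2), total_learning_time(10, 4))  # 40 40
```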

Transfer of training

Transfer of training refers to the influence of prior learning on the acquisition of new habits. The transfer effects may be positive or negative, depending on whether the earlier training facilitates or inhibits the mastery of the later task. In studies of verbal learning it has been conventional to distinguish between specific and general transfer effects. Specific transfer effects are those attributable to known similarity relations between successive tasks; general transfer effects represent the development of skills which cannot be ascribed to known similarity relations between tasks. General effects will be considered first.

General effects

When unrelated verbal lists are learned in the laboratory, the speed of acquisition typically increases, sharply at first and more gradually thereafter. Such progressive practice effects have been demonstrated for both paired-associate and serial learning. When learning sessions are held daily, the gains in performance are considerably greater within sessions than from one session to the next. A common interpretation of this finding is that the gains within a session are largely a matter of warm-up, that is, the development of postural adjustments, rhythms of responding, and other components of a set appropriate to the learning task. Such adjustments to the requirements of the task may be expected to be lost once the subject leaves the experimental situation. The changes which persist from one session to the next are attributed to “learning-to-learn,” that is, the acquisition of higher-order habits or modes of attack which are relatively stable and persistent. According to this analysis, warm-up both develops and declines more rapidly than do the habits which constitute learning-to-learn. The possibility cannot be ruled out, however, that perceptual-motor adjustments are conditioned to the experimental situation and that components of learning-to-learn are forgotten.

Specific effects

The principles of specific transfer have been investigated primarily as a function of the similarity relations between the stimulus terms and/or the response terms of successive tasks. One general principle is that the amount of transfer, whether positive or negative, increases as the stimulus terms in successive tasks become more similar; stimulus identity is, therefore, the condition of maximal transfer. At a given level of similarity, the sign (positive or negative) and degree of transfer vary with the similarity, or strength of pre-existing associative connection, between successive responses. A large array of experimental findings may be subsumed under the following general principle: as the responses become increasingly dissimilar, the transfer effects shift from positive to negative. Thus, positive transfer is obtained when the successive responses learned to identical or similar stimuli are associatively related; negative transfer results when new unrelated responses are learned to old stimuli. These principles are, however, subject to modification by other factors, such as the readiness with which successive tasks can be differentiated from each other. For example, when old stimuli and old responses are re-paired, there is considerable negative transfer; even though the repertoire of responses remains the same, the identity of both stimuli and responses makes differentiation between successive tasks extremely difficult (for a discussion of methods and designs in the study of transfer see McGeoch 1942).

Retention

Retention refers to the persistence over time of changes produced by practice. It is apparent that retention is an integral component of acquisition; a habit can be mastered only if there is retention from one practice trial to the next. In the present context, however, the term retention refers to measurements of performance which are made after the end of a period of formal practice. The operational distinction between measures of acquisition and of retention is a convenient and, indeed, an essential one for investigation of the conditions of forgetting. The amount of retention is, of course, always inferred from specific measures of performance; the absolute level of performance will vary with the specific method of testing. Thus, after a given amount of practice, tests of recognition usually yield higher retention scores than do tests of recall, although the degree of discrepancy may vary widely as a function of the specific conditions of recall and recognition.

The basic empirical fact which theories of retention have sought to account for is the progressive decline in performance that occurs as a function of time after the end of practice. The position now held most widely attributes forgetting to the interference that develops between successively learned habits. Two major types of interference are distinguished, namely, retroactive inhibition and proactive inhibition. Retroactive inhibition refers to the interference produced by the acquisition of new habits between the end of practice and the test of retention; proactive inhibition occurs when earlier habits interfere with the retention of a more recent task. In both cases, the amount of interference varies with the similarity relations between successive tasks; specifically, the amount of interference is governed by the same conditions of intertask similarity as is negative transfer. Thus, negative transfer in acquisition, retroactive inhibition, and proactive inhibition are complementary manifestations of habit interference. Retroactive and proactive inhibition differ with respect to the development of interference effects over time. Whereas retroactive inhibition is present to its full extent immediately after acquisition of the interfering task, proactive inhibition develops gradually over time. Several specific mechanisms responsible for the observed interference effects have been identified experimentally. These include (a) the unlearning, in the sense of reduced availability, of old associations during the acquisition of new ones; (b) competition between incompatible responses at the time of recall which may cause performance to be blocked or a dominant error to displace a correct response; and (c) failure to differentiate between the members of alternative response systems at the time of recall (for a review of interference theory see Postman 1961).
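
The two paradigms differ only in the placement of the interfering task relative to the task that is tested. The sketch below lays out the conventional two-group designs in schematic form; it is a textbook-style summary, not the procedure of any particular study:

```python
# A and B are successively learned tasks; "rest" is an interval, equal in
# duration to a learning phase, during which no list is learned.
designs = {
    "retroactive inhibition": {
        "experimental": ("learn A", "learn B", "test A"),
        "control":      ("learn A", "rest",    "test A"),
    },
    "proactive inhibition": {
        "experimental": ("learn A", "learn B", "test B"),
        "control":      ("rest",    "learn B", "test B"),
    },
}
# In each case the amount of interference is estimated as the control
# group's retention minus the experimental group's retention.
```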

The principles of retroactive and proactive inhibition have been established in experimental situations in which the conditions of interference can be fully controlled. Interference theory assumes that the same principles apply to forgetting outside the laboratory. For example, the forgetting of a verbal task would be attributed to interference from other verbal habits acquired both prior and subsequent to that task. Proactive inhibition is likely to play a larger role than retroactive inhibition to the extent that the number and strength of prior habits exceed those of subsequent habits.

Regardless of theoretical interpretation, certain facts about the long-term retention of verbal tasks are well supported by the experimental evidence. The single most important determinant of the amount of long-term retention is the degree of original learning, with resistance to forgetting a direct function of the degree of overlearning. It is essential, therefore, to hold the degree of original learning constant whenever the influence of other variables, such as meaningfulness or intra-task similarity, on retention is to be evaluated. The evidence available thus far shows that the effects of such variables are minor relative to the sheer degree of original learning. The practical implication is that overlearning provides the most certain means of insuring the long-term stability of verbal habits.

Leo Postman

[See also Forgetting; Language; and the biographies of Ebbinghaus; Hull; Müller, Georg Elias.]

BIBLIOGRAPHY

Broadbent, Donald E. 1958 Perception and Communication. Oxford: Pergamon.

Conference on Verbal Learning and Verbal Behavior, New York University, 1959 1961 Verbal Learning and Verbal Behavior: Proceedings. Edited by Charles N. Cofer. New York: McGraw-Hill.

Conference on Verbal Learning and Verbal Behavior, Second, Ardsley-on-Hudson, N.Y., 1961 1963 Verbal Behavior and Learning; Problems and Processes: Proceedings. Edited by Charles N. Cofer and Barbara S. Musgrave. New York: McGraw-Hill.

Deese, James 1961 From the Isolated Verbal Unit to Connected Discourse. Pages 11–31 in Conference on Verbal Learning and Verbal Behavior, New York University, 1959, Verbal Learning and Verbal Behavior: Proceedings. Edited by Charles N. Cofer. New York: McGraw-Hill.

Ebbinghaus, Hermann (1885) 1913 Memory: A Contribution to Experimental Psychology. New York: Columbia Univ., Teachers College. → First published as Über das Gedächtnis. A paperback edition was published in 1964 by Dover.

Estes, William K. 1959 The Statistical Approach to Learning Theory. Volume 2, pages 380–491 in Sigmund Koch (editor), Psychology: A Study of a Science. New York: McGraw-Hill.

Gibson, Eleanor J. 1940 A Systematic Application of the Concepts of Generalization and Differentiation to Verbal Learning. Psychological Review 47:196–229.

Hilgard, Ernest R. (1948) 1956 Theories of Learning. 2d ed. New York: Appleton.

Hull, Clark L. et al. 1940 Mathematico–Deductive Theory of Rote Learning: A Study in Scientific Methodology. New Haven: Yale Univ. Press; Oxford Univ. Press.

Jenkins, James J. 1963 Mediated Associations: Paradigms and Situations. Pages 210–245 in Conference on Verbal Learning and Verbal Behavior, Second, Ardsley-on-Hudson, N.Y., 1961, Verbal Behavior and Learning; Problems and Processes: Proceedings. Edited by Charles N. Cofer and Barbara S. Musgrave. New York: McGraw-Hill.

Koffka, Kurt 1935 Principles of Gestalt Psychology. New York: Harcourt.

McGeoch, John A. 1936 The Vertical Dimensions of Mind. Psychological Review 43:107–129.

McGeoch, John A. (1942) 1952 The Psychology of Human Learning. 2d ed., rev. New York: Longmans.

Melton, Arthur W. 1941 Learning. Pages 667–686 in Walter S. Monroe (editor), Encyclopedia of Educational Research. New York: Macmillan.

Miller, George A. 1962 Some Psychological Studies of Grammar. American Psychologist 17:748–762.

Postman, Leo 1961 The Present Status of Interference Theory. Pages 152–179 in Conference on Verbal Learning and Verbal Behavior, New York University, 1959, Verbal Learning and Verbal Behavior: Proceedings. New York: McGraw-Hill.

Postman, Leo 1963 One-trial Learning. Pages 295–335 in Conference on Verbal Learning and Verbal Behavior, Second, Ardsley-on-Hudson, N.Y., 1961, Verbal Behavior and Learning; Problems and Processes: Proceedings. Edited by Charles N. Cofer and Barbara S. Musgrave. New York: McGraw-Hill.

Postman, Leo 1964 Short-term Memory and Incidental Learning. Pages 145–201 in Symposium on the Psychology of Human Learning, University of Michigan, 1962, Categories of Human Learning. Edited by Arthur W. Melton. New York: Academic Press.

Robinson, Edward S. 1932 Association Theory Today. New York: Century.

Symposium on the Psychology of Human Learning, University of Michigan, 1962 1964 Categories of Human Learning. Edited by Arthur W. Melton. New York: Academic Press.

Underwood, Benton J. 1961 An Evaluation of the Gibson Theory of Verbal Learning. Pages 197–223 in Conference on Verbal Learning and Verbal Behavior, New York University, 1959, Verbal Learning and Verbal Behavior: Proceedings. Edited by Charles N. Cofer. New York: McGraw-Hill.

Underwood, Benton J. 1963 Stimulus Selection in Verbal Learning. Pages 33–75 in Conference on Verbal Learning and Verbal Behavior, Second, Ardsley-on-Hudson, N.Y., 1961, Verbal Behavior and Learning; Problems and Processes: Proceedings. Edited by Charles N. Cofer and Barbara S. Musgrave. New York: McGraw-Hill.

Underwood, Benton J.; and Schulz, Rudolph W. 1960 Meaningfulness and Verbal Learning. Philadelphia: Lippincott.

IX TRANSFER

The phrase “transfer of learning,” or “transfer of training,” refers to a class of phenomena that are aftereffects of learning. When some particular performance has been learned by an individual, the capability established by that learning affects to some extent other activities of the individual. The effects of the learning are said to transfer to these other activities. Having learned some performance, the individual may thereby be enabled to exhibit some additional, different performance that he could not do prior to learning. As more commonly used, transfer of training means that the individual is able to learn something else more readily than he could prior to the original learning (positive transfer) or that he is able to learn something else less readily than he could before the original learning (negative transfer).

Transfer of learning, since it is virtually always present as a learning effect, may reasonably be considered an essential characteristic of the learning process. Accordingly, it may be shown to play a role in a wide variety of human activities, including the learning of language, social customs, values, and attitudes and the acquisition of the human skills and knowledge that underlie practically all types of vocational activity. Transfer of learning is of particular importance in formal education. In the opinion of many educators, education should have transfer of learning, or “transferability of knowledge,” as a recognized goal. It is generally agreed that the assessment of outcomes of education and training should include measures of transfer in addition to more direct measures of learning.

In studying the phenomenon of transfer of learning, investigators have employed a wide variety of situations and techniques. The work of experimental psychologists includes the exploration of such questions as (1) the degree of transfer resulting from the establishment of conditioned responses; (2) the specificity and limitations to transfer in the learning of simple perceptual and motor acts; (3) the occurrence of transfer between learned actions performed by different body members, such as the hand and foot; (4) the extent of bilateral transfer, as between actions performed by the left and right hands; (5) the occurrence of positive and negative transfer (called interference) in connection with the learning of verbal associates and sequences; (6) the positive transfer to a variety of specific novel situations resulting from the learning of principles; (7) the acquisition of a capability for transfer to novel discrimination problems in monkeys and human beings; and (8) the relation of transfer to the mediating effects of language in children. An interesting line of investigation has been undertaken by neurophysiologists in determining the conditions of transfer of training in animals with “split brains.”

Educational aspects. In the field of educational research, studies have been concerned with the broad question of transfer of learning among the component subjects and topics that make up the school curriculum. Older studies arose from controversies over the doctrine of formal discipline, which held that certain subjects of the curriculum, such as Latin, geometry, and logic, derived a great part of their value from the general (that is, transferable) discipline they imparted to the mind, thus facilitating the learning of other subjects (Thorndike 1924). While this doctrine is probably quite true in the sense that certain capabilities acquired in school are much more widely transferable than others, the rationale for the choice of particular subjects was not a convincing one. At any rate, educational studies in the older tradition were concerned with such questions as whether transfer of training could be measured from the study of Latin to the study of English or from that of geometry to other fields necessitating the use of logical thinking.

More modern studies of educational transfer have concerned themselves with such questions as the extent to which certain kinds of within-subject learning transfer positively to advanced topics, such as the concept of the number line to later mathematical topics, the discrimination of language sounds to the learning of foreign language utterances, etc. Additionally, there has been concern with the possibilities of negative transfer between the learning of such activities as the formal statements of verbal definitions and the later learning of advanced mathematical principles and between the learning of letter sounds and the acquisition of reading skill. Finally, there is an increasing interest in the exploration of the possibility of designing instruction to develop such highly transferable individual characteristics as thinking strategies, curiosity, and even creativity.

Observing and measuring transfer

The simplest observations of transfer of learning occur in the following ways. The individual is known to have learned a new performance, such as spelling the word nation in response to the oral direction, “Spell nation.” It may now be found that the same child is able to learn to spell such words as motion and lotion much more rapidly than he learned to spell nation. The inference is that the child learns to spell motion more rapidly than he would have if he had not first learned to spell nation. Besides the specific outcome of the original learning, the performance of spelling nation, there has been another aftereffect of learning: some residual capability, which shows itself in the speeding up of the learning to spell a different word. A different example, illustrating negative transfer, is the following. An individual has moved to a new location and must learn two new telephone numbers, those of his office and his home. One is 643-2795, and the other is 297-6534. He has learned single telephone numbers previously, without a great deal of difficulty. But he now finds that he makes many errors in trying to learn these two numbers, tending to substitute one portion of one number for another portion of the other number. Over a period of time, he finds that, in his experience, learning these two new numbers turns out to be more than twice as difficult as learning any one number has been. There appears to be interference between the two learning tasks; in other words, the inference is made that the learning of one number produces a negative transfer effect, which slows down the attainment of recalling the other number.

Design of transfer experiments

While neither of these sequences of observation and inference is unreasonable, it is apparent that certain variables are uncontrolled, and this generates a requirement for a more painstaking method of observing transfer in an experimental sense. Returning to the example of learning to spell, it is clear that for any given individual we do not really know how long it should take him to learn to spell either nation or motion because we do not know where learning begins. Possibly some peculiarity of his past learning makes nation a difficult word and motion an easy one. Accordingly, an experimental design for the measurement of transfer typically includes a pretest to measure the initial capabilities of the individual before learning begins. Still another possibility must be considered: perhaps the increase of facility at the second task (spelling motion) is engendered partly by increased motivation, partly by a “set” to learn, or partly by “warm-up” factors (Thune 1950), rather than by the specific effects of learning the first task (spelling nation). As a consequence, it is usually considered necessary to include in a transfer experiment a control condition for this set of “general” factors.

The typical schema for experimental study of transfer that results from these control requirements is one that uses two groups of individuals, either assumed or shown to be equivalent at the beginning of learning. Table 1 provides an example.

Table 1

Transfer group                    Control group
1. Takes pretest on task 2        1. Takes pretest on task 2
2. Learns task 1                  2. Learns a task of a sort very
                                     different from tasks 1 and
                                     2 but requiring the same
                                     amount of time as task 1
3. Learns task 2                  3. Learns task 2

When this design is used, an average difference in the ease of learning (often, the time to learn) of task 2 in the two groups is taken to be an indication of “specific” transfer of training from the learning of task 1 to the learning of task 2. This and other experimental designs for the study of transfer are discussed by Murdock (1957).

Quantification

The amount of transfer of learning is usually expressed as a percentage (Gagné et al. 1948). Should it be found that a second performance is learned no more readily than if the learning of a first performance had never occurred, transfer is said to be zero, or 0 per cent. If the second performance is found to be fully learned after the first performance has been learned, transfer is 100 per cent. Amounts between these extremes are, of course, often found. It is also possible to express negative transfer in this way; if a second performance takes half again as long to learn following initial learning of a first performance, the amount of transfer can be expressed as −50 per cent. The use of percentages in expressing amount of transfer is, however, largely a matter of convenience and has no particular systematic significance for an understanding of the phenomenon.
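
Under the design of Table 1, one common formula (several variants appear in the literature; see Murdock 1957) expresses per cent transfer as the proportionate saving in the control group’s measure of learning difficulty. A minimal sketch with time-to-learn as the measure; the numbers are illustrative only:

```python
def percent_transfer(control_time, transfer_time):
    """Per cent transfer with time-to-learn task 2 as the measure:
    positive when prior learning of task 1 shortens the learning of
    task 2, negative when it lengthens it."""
    return 100.0 * (control_time - transfer_time) / control_time

print(percent_transfer(30, 30))  # 0.0   -> no transfer
print(percent_transfer(30, 0))   # 100.0 -> task 2 fully learned in advance
print(percent_transfer(30, 45))  # -50.0 -> half again as long to learn
```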

Conditions of positive and negative transfer

The occurrence of transfer of training depends, by definition, on the occurrence of previous learning. For certain performances, such as the learning of a set of verbal associates, the evidence indicates that the amount of positive transfer obtained is directly related to the amount of initial learning (Gagné & Foster 1949; Mandler & Heinemann 1956). As for negative transfer, a somewhat more complex relationship has been found to hold: the amount of interference exhibited in the second task increases as the amount of practice on the initial task is increased to some intermediate amount (McGeoch 1952, pp. 335-339). As practice of the initial task continues, however, the interference with the second task decreases and may under some conditions come to yield positive transfer (Mandler 1954).

Effects of similarity

As a general rule, the amount of transfer (positive or negative) is influenced by the similarity of the performance initially learned to the second performance in which the occurrence of transfer is observed. For example, when a conditioned response is established to a tone of 1,000 cycles, the amount of positive transfer exhibited to tones differing in pitch from this original stimulus bears a direct relation to the degree of physical similarity of the second tone to the original one (Hovland 1937). According to Gibson’s study (1941), when the two performances are such that negative transfer is found, the amount of such interference increases with the degree of similarity between the stimuli of the first and second tasks. There is an apparent paradox to these findings, whose resolution seems to depend upon a careful definition of what aspects of the two performances are being compared in similarity. According to Osgood (1949), the prediction of the direction of transfer (positive or negative) depends upon the differential specification of the stimuli of the two tasks and of their responses. A brief and partial summary of Osgood’s conclusions may be given as follows. When tasks 1 and 2 contain identical responses, transfer is increasingly positive as the similarity of the stimuli of the two tasks increases. When tasks 1 and 2 contain identical stimuli, transfer is increasingly negative as the similarity of the responses of the two tasks decreases.
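
Osgood’s two conclusions can be caricatured as a decision rule. The function below is a deliberately crude sketch of the limiting cases only, with similarity scaled from 0 to 1; Osgood’s full “transfer surface” treats intermediate cases continuously, and nothing here is drawn from his quantitative formulation:

```python
def predicted_transfer(stimulus_similarity, response_similarity):
    """Schematic reading of Osgood (1949), limiting cases only: with
    identical responses, transfer is more positive the more similar the
    stimuli; with identical stimuli, transfer is more negative the less
    similar the responses."""
    if response_similarity == 1.0:
        return stimulus_similarity            # positive, growing with stimulus similarity
    if stimulus_similarity == 1.0:
        return -(1.0 - response_similarity)   # negative, growing as responses diverge
    return None  # intermediate cases require the full transfer surface
```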

Despite the clarifying analyses that have been made, it is nevertheless apparent at the present time that the relation of transfer to the similarity of learned performances remains a perplexing and essentially unsolved problem. Two practical situations may serve as bench marks in consideration of what has yet to be understood about this relationship. (1) An individual first learns to drive a standard-shift automobile with the gearshift attached to the floor. A later model car comes equipped with the same type of transmission but with the gearshift attached to the steering column, so that first gear is now down rather than back, second gear is now up rather than forward and so on. Under these circumstances of apparent difference, the two tasks would nevertheless be judged as similar by almost any driver, and in fact the amount of positive transfer is close to 100 per cent. (2) The second situation is one in which a later model car comes equipped with a four-speed transmission; the gearshift, which is on the floor, must be pushed forward for first gear, backward for second gear, and so on. The situation in the second task is not only dissimilar; it may also be judged to have in it certain elements of reversal relative to the first task. It is a common experience that a considerable amount of negative transfer occurs under these conditions. Reversal of stimulus-response relationships has also been shown in laboratory tasks to produce large amounts of negative transfer (Lewis & Shephard 1950).

Mediational processes

Other limitations of the similarity principle in its present state of development are also shown by studies in which the performance acquired depends upon the learning of a mediational process.

Concept formation. The studies of Kendler and Kendler (1961) have shown that the performance of seven-year-old children who are required to learn a reversal of a discrimination task is markedly superior to the performance of four-year-olds on the same reversed task. These findings suggest that transfer occurs to the second task, which is dissimilar to the original task to the extent of requiring an opposite choice (the choice of a white square as opposed to a black one), because the older children are able to provide an implicit verbal mediator (such as “the opposite”), whereas the amount of transfer for the younger children is very small because they are unable to do this. Logically related are Harlow’s studies of learning in monkeys (1949). These animals, over many practice periods, were able to acquire the capability of choosing “the odd one” of three objects, even though the particular objects used may have been highly dissimilar in physical appearance to those used during the original learning.

Acquisition of principles. The importance of mediational processes for transfer of training is also illustrated by a number of studies concerned with transfer of principles. Principles, whether verbally stated or not, relate classes of stimuli to classes of performance; accordingly, they remove the control of performance from the specific stimuli of the situation. If a principle is learned in connection with some particular performance, then it is to be expected that this principle will make possible broad transfer to an entire class of problem situations. In connection with card-trick and matchstick problems, as well as with other tasks, Katona (1940) showed that the acquiring of principles led to high degrees of transfer to classes of problems that differed from those of original learning in physical appearance. In contrast, learning to solve the original problems without acquiring such principles resulted in only small amounts of transfer to new problems. One meaning of “teaching for transfer” in educational settings appears to be the encouragement of principle learning as opposed to rote learning on the part of pupils.

Mechanisms of transfer

Although referred to by a single class name, it is fairly certain that the various phenomena called transfer of training represent several different kinds of events. The differences among them are to be found in the specific conditions that generate them and, consequently, in the kinds of mechanisms that may be inferred to account for them.

Stimulus generalization. When a conditioned response is established to a signaling stimulus, it is found that the same response, diminished in frequency or strength, is given to other stimuli differing from the initial one along some physical dimension. This finding, the phenomenon of stimulus generalization, has been obtained many times, and the previously cited results of Hovland (1937) are typical. The underlying mechanism in this case appears to be a fairly dependable characteristic of the functioning central nervous system. The effects of stimulation on the nervous system are not highly specific but are generalized. The amount of this generalization may be markedly reduced by further training in which stimulation that is positive in its effects is contrasted with stimulation that is negative, a procedure referred to as discrimination training.
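
As a concrete picture of such a gradient, the sketch below assumes an exponentially decaying gradient; that particular form is a common idealization and is not given in this article. Response strength falls off with the distance of the test stimulus from the training stimulus along a physical dimension, as in the tone experiments cited above; the constant k is invented.

    import math

    def generalized_strength(test_tone, trained_tone, strength=1.0, k=0.005):
        # Response strength decays with distance from the trained stimulus;
        # k sets the steepness of the generalization gradient.
        return strength * math.exp(-k * abs(test_tone - trained_tone))

    # A response conditioned to a 1,000-cycle tone, tested at nearby pitches:
    for tone in (1000, 1100, 1300, 1600):
        print(tone, round(generalized_strength(tone, 1000), 2))

Discrimination training would correspond, in this picture, to a sharpening of the gradient (a larger k).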

Transfer in associative learning. An association of words like ready-joyful or small-klein is frequently described by psychologists as an S— R(stimulus-response) association, in which the first member of the pair is called the stimulus member; through learning, this first member comes to elicit the second, or response, member. This form of learning has an extensive literature of its own that cannot be thoroughly summarized here (see McGeoch 1952; Underwood 1964). The following conclusions, which come from these investigations, however, have particular relevance to the phenomenon of transfer of training and to suggested underlying mechanisms.

One of these conclusions is that under many conditions, positive transfer occurs in the learning of verbal associates when previous learning “predifferentiates” the stimulus members (Gibson 1940; Gannon & Noble 1961). A second finding is that positive transfer is also a common occurrence when the response members of the paired associates have been made familiar through previous learning (Underwood & Schulz 1960). The mechanism of transfer suggested by these two findings is that the learning of paired associates is not simply a matter of associating an S and an R; it is better conceived as the “linking” of two performances, the first of which may be called “recognizing the stimulus member” and the second “uttering the response member.” The thorough learning of these two different performances apparently transfers positively to the subsequent learning of the completed linking, or “association.”

A third finding throws additional light on the process of association. Positive transfer in paired-associate learning is generated by previously learned mediating responses, which appear to serve the function of “coding” the stimulus member into the response member (McGuire 1961; Jenkins 1963). The associate hand to the French word main may, for example, be mediated by the previously learned word manual. Still a third kind of previously learned performance, then, appears to be responsible for positive transfer; the ease of learning pairs of associates is markedly affected by the prior learning of what may be called a “coding performance.”

The fourth conclusion that may be drawn from studies of associative learning has, in one form or another, been the subject of hundreds of experimental investigations. Negative transfer (interference) results when the learning of a pair of associates A-B is followed by the learning of the pair A-C, which has the same stimulus member but a different response member (Postman 1961; Underwood 1964). Although there is a great deal of evidence for this and related findings, it remains true at present that the mechanism by means of which such interference occurs has not been clearly delineated. The idea that there is “response competition” (Postman 1961) is a widely accepted view but seems little more than a renaming of the phenomenon of interference itself. A more promising possibility is the suggestion that the learning of a second response member also requires the erasure of the first from memory; that is, the first is extinguished after the fashion of extinction of a conditioned response (Barnes & Underwood 1959).

A quite different sort of hypothesis concerning the nature of negative transfer as it occurs in associative learning is receiving increasing attention. This is the proposition that the interference of a first task with a second (as in the arrangement A-B, then A-C) does not affect the learning of the second task at all but only its retention (Tulving 1964). This idea is consistent with the more general notion that the learning of each associate occurs in a single trial. Such a view, carried to its logical conclusion, would lead to the belief that negative transfer is essentially a process that reduces the probability of recall of learned associates in the phenomena called proactive interference and retroactive interference.

Transfer by means of concepts

The learning of relational concepts like “middle,” “below,” and “the odd one” and object concepts like “tree” and “door” has the effect of freeing performances from specific stimulus control. Having acquired a concept through learning, the individual is able to deal with a great variety of specific instances of the class that the concept represents. In particular, the individual’s performance can be correctly mediated by a concept even though the specific instance to which he must respond has never been encountered during learning (cf. Kendler 1964; Gagné 1965). Concepts, therefore, are intimately bound up with transfer of training. In order to demonstrate that an individual has learned a concept, one must show that the effects of the learning will apply to previously unencountered members of the class of stimuli that are denoted by the concept’s name. The mechanism by means of which the central nervous system accomplishes this feat of generalization is not well understood at present. This kind of capability is, of course, not restricted to human beings, although human conceptual behavior seems often to involve the use of language (Kendler 1964).

Transfer from principles

If principles are thought of as combinations of concepts, then for a similar reason they too provide the basis for transfer of training from the specific instances of learning to a large class of performances. The learning of a principle must be demonstrated by means of a test of transfer of training; it must be possible to show that the individual is able to apply the principle in a variety of situations that have not been specifically presented during learning (Gagné 1964). Hendrickson and Schroeder (1941) showed that the direct teaching of a principle in verbal form to high school students resulted in positive transfer to the task of hitting a target under water and that the transfer was greater than that produced by direct practice. Katona’s findings (1940) concerning the transfer value of learning principles in the solution of card-trick and matchstick problems have been verified and elaborated by Hilgard and his colleagues (1953). To these laboratory studies concerning the effectiveness of principle learning for transfer must be added a great mass of unrecorded observations of teachers, who do not hesitate to assert that a principle such as is implied by the expression a · b + a · c = a(b + c) accomplishes the job of knowledge transfer better than almost any number of specific examples like 2 · 3 + 2 · 4 = 2 · 7.

Transferability of learning

There are, then, a number of ways in which transfer can come about in behavior, ranging from the relatively specific stimulus generalization of a conditioned response to the very broad applicability of a principle. Whatever particular objective learning may have, it is reasonable to state that it will always be accompanied by an additional outcome of transfer of learning. So far as formal education is concerned, and even more broadly for the functioning of the human individual in society, the transferability of acquired knowledge and skill is often considered a more important goal than any number of specific learning accomplishments. For it is such transfer that makes it possible for the individual to solve new problems, to adjust to new situations, and to make novel inventions. Enthusiasm for transferability as an educational goal needs to be tempered by the reflection that transfer depends upon prior learning.

Robert M. Gagne

[Other relevant material may be found in Concept formation; Forgetting; Response sets.]

BIBLIOGRAPHY

Barnes, Jean M.; and Underwood, Benton J. 1959 “Fate” of First-list Associations in Transfer Theory. Journal of Experimental Psychology 58:97-105.

Gagné, Robert M. 1964 Problem Solving. Pages 293–323 in Symposium on the Psychology of Human Learning, University of Michigan, 1962, Categories of Human Learning. New York: Academic Press.

Gagné, Robert M. 1965 Conditions of Learning. New York: Holt.

Gagné, Robert M.; and Foster, H. 1949 Transfer to a Motor Skill From Practice on a Pictured Representation. Journal of Experimental Psychology 39:342-354.

Gagné, Robert M.; Foster, H.; and Crowley, M. C. 1948 The Measurement of Transfer of Training. Psychological Bulletin 45:97-130.

Gannon, Donald R.; and Noble, Clyde E. 1961 Familiarization (n) as a Stimulus Factor in Paired-associate Verbal Learning. Journal of Experimental Psychology 62:14-23.

Gibson, Eleanor J. 1940 A Systematic Application of the Concepts of Generalization and Differentiation to Verbal Learning. Psychological Review 47:196-229.

Gibson, Eleanor J. 1941 Retroactive Inhibition as a Function of Degree of Generalization Between Tasks. Journal of Experimental Psychology 28:93-115.

Harlow, Harry F. 1949 The Formation of Learning Sets. Psychological Review 56:51-65.

Hendrickson, Gordon; and Schroeder, William H. 1941 Transfer of Training in Learning to Hit a Submerged Target. Journal of Educational Psychology 32:205-213.

Hilgard, Ernest R.; Irvine, R. P.; and Whipple, J. E. 1953 Rote Memorization, Understanding, and Transfer: An Extension of Katona’s Card-trick Experiments. Journal of Experimental Psychology 46:288-292.

Hovland, Carl I. 1937 The Generalization of Conditioned Responses: I. The Sensory Generalization of Conditioned Responses With Varying Frequencies of Tone. Journal of General Psychology 17:125-148.

Jenkins, James J. 1963 Mediated Associations: Paradigms and Situations. Pages 210–257 in Conference on Verbal Learning and Verbal Behavior, Second, Ardsley-on-Hudson, N.Y., 1961, Verbal Behavior and Learning: Problems and Processes, Proceedings. New York: McGraw-Hill.

Katona, George 1940 Organizing and Memorizing. New York: Columbia Univ. Press.

Kendler, Howard H. 1964 The Concept of the Concept. Pages 211–236 in Symposium on the Psychology of Human Learning, University of Michigan, 1962, Categories of Human Learning. New York: Academic Press.

Kendler, Howard H.; and Kendler, Tracy S. 1961 Effect of Verbalization on Reversal Shifts in Children. Science 134:1619-1620.

Lewis, Don; and Shephard, Alfred H. 1950 Devices for Studying Associative Interference in Psychomotor Performance: IV. The Turret Pursuit Apparatus. Journal of Psychology 29:173-182.

McGeoch, John A. 1952 The Psychology of Human Learning. 2d ed., rev. New York: Longmans. → The first edition was published in 1942 by Longmans.

McGuire, William J. 1961 A Multiprocess Model for Paired-associate Learning. Journal of Experimental Psychology 62:335-347.

Mandler, George 1954 Transfer of Training as a Function of Degree of Response Overlearning. Journal of Experimental Psychology 47:411-417.

Mandler, George; and Heinemann, S. 1956 Effect of Overlearning of a Verbal Response on Transfer of Training. Journal of Experimental Psychology 52:39-46.

Murdock, Bennet B., Jr. 1957 Transfer Designs and Formulas. Psychological Bulletin 54:313-326.

Osgood, Charles E. 1949 The Similarity Paradox in Human Learning: A Resolution. Psychological Review 56:132-143.

Postman, Leo 1961 The Present Status of Interference Theory. Pages 152–179 in Conference on Verbal Learning and Verbal Behavior, New York University, 1959, Verbal Learning and Verbal Behavior: Proceedings. New York: McGraw-Hill.

Thorndike, Edward L. 1924 Mental Discipline in High School Studies. Journal of Educational Psychology 15:1-22; 83-98.

Thorndike, Edward L.; and Woodworth, Robert S. 1901 The Influence of Improvement in One Mental Function Upon the Efficiency of Other Functions. Psychological Review 8:247-261; 384-395; 553-564.

Thune, Leland E. 1950 The Effect of Different Types of Preliminary Activities on Subsequent Learning of Paired-associate Material. Journal of Experimental Psychology 40:423-438.

Tulving, Endel 1964 Intratrial and Intertrial Retention: Notes Towards a Theory of Free Recall Verbal Learning. Psychological Review 71:219-237.

Underwood, Benton J. 1964 The Representativeness of Rote Verbal Learning. Pages 47–78 in Symposium on the Psychology of Human Learning, University of Michigan, 1962, Categories of Human Learning. New York: Academic Press.

Underwood, Benton J.; and Schulz, Rudolph W. 1960 Meaningfulness and Verbal Learning. Philadelphia: Lippincott.

X ACQUISITION OF SKILL

In the scientific inquiry into the nature of skill, which has been largely conducted by experimental psychologists, operational definitions of skill are generally stated in terms of overt responses and controlled stimulation. Responses are subdivided into three types: verbal, motor, and perceptual, which typically stress speaking, moving, and judging, respectively. Common verbal tasks require the memorization of a list of words; motor tasks demand precise movements of the limbs and body; and perceptual tasks require discrimination of sensory information. Responses are evaluated or scored by means of errors, rates, pressures, amplitudes, time sharing, and information transmitted. Stimuli, on the other hand, are energy inputs to the operator and are expressed in units, such as frequency, length, time, and weight.

The study of skill has largely been confined to a relatively few laboratory tasks and trainers. The inquiry has been directed far less to the arts, the shop, and the playing field than to identifying variables that cut across many jobs and finding general laws that stand for many specific tasks. For example, practice, rest, feedback, and transfer are prominent variables.

The work of the skills psychologist may be divided into two parts. In his basic research, he seeks relevant variables, discovers empirical laws of relations between variables, and constructs theories to account for the laws. In his applied work, he takes part in selecting personnel for special jobs, helps to design display and control stations, and prescribes some of the training rules of educational programs.

World War II provided the impetus for an accelerated study of motor skill. It was necessary to select from hundreds of thousands of men a limited number to fly airplanes, aim gunnery equipment, etc. A battery of tests to determine psychomotor abilities was developed containing eight apparatus tests: tracking moving targets, setting dials, etc. A candidate’s performance rank on these devices turned out to be very much related to his proficiency during later training for aircrew stations in air force schools. The devices used in the apparatus tests did not resemble air force or civilian hardware, yet they brought out basic learning and performance factors, such as reaction time, speed of movement, and pattern discrimination common to operable equipment. The great success of the apparatus tests in predicting later behavior led to the accelerated growth of both theoretical and applied studies of skills. One of the specific accomplishments was to provide an impetus to the laboratory study of what skills are and of how they are learned.

Tasks involving continuous responses. The major components of a typical research device involving continuous responses include (1) a visual display station, such as an oscilloscope; (2) a mechanical or electronic means for programming the display, such as the movement of a pip of light in some prescribed way; (3) a station with a control, such as an airplane stick, for the operator to compensate for pip movement; and (4) a means of measuring the operator’s response output with respect to the stimulus input. Appropriate stick responses for spatial coordinates x and y balance or neutralize the programmed displacements, and the pip remains quiet, centered, and under control. Ordinarily, several groups of operators or subjects are trained, each group under a special condition. One set of conditions may involve change in the properties of the apparatus; another set may involve change in the methods of training. Obvious display variations are changes in target speed and path complexity; simple control variations involve resistance to movement and amount of pip movement per unit of stick displacement. These and other variations have led to the discovery of quantitative relationships between responses and the conditions of practice. In addition, all of these variations are often treated by a systems approach in which the output of men and machines is expressed as a function of the input. Examples of maturing areas of application are the piloting of aircraft and submarines and target detection and identification in radar.
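
A toy simulation may make the compensatory arrangement concrete. In the sketch below, the program displaces the pip at random, a simulated operator opposes each displacement through a control gain, and output is scored as the proportion of time on target; the gain, tolerance, and random walk are all invented for illustration.

    import random

    def tracking_run(steps=500, control_gain=0.8, tolerance=1.0, seed=1):
        rng = random.Random(seed)
        pip = 0.0
        on_target = 0
        for _ in range(steps):
            pip += rng.uniform(-1.0, 1.0)  # programmed displacement of the pip
            pip += -control_gain * pip     # operator's compensatory stick response
            on_target += abs(pip) <= tolerance
        return on_target / steps           # proportion of time on target

    print(tracking_run())                  # a higher gain yields more time on target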

The training expert gives advice on schedules of practice: when, how long, how often. He decides on practice matters pertaining to individual tasks: their relative emphases and staging. He makes important recommendations on the operator’s data-processing abilities and need for training aids. The operator may be in for long periods of vigilance, and he ought to detect faint or occasional signals; in addition, the operator is expected to make suitable decisions in the available time and to select and execute the proper response for the system. The expert’s most critical analyses center on feedback (information about past performance) and the manner of its representation, since any solution for coding the feedback (which is essential) necessitates selecting a sensory modality and temporal, spatial, and numerical schedules of transformation.

Tasks involving discrete responses. One major premise underlying tasks involving discrete responses is that the next response (R) depends upon the knowledge of results of previous responses (KR), that is, R = f(KR). The relations between KR and R are arbitrary, and transformations always obtain. For example, if a blindfolded person were directed to “Draw a 3-inch line,” he need not be informed of his error after each and every attempt, nor are we compelled to report a +⅛ inch error as “too far by ⅛ inch.” It is possible, of course, to report any numerical error at any time.

As the line-drawing example shows, targets and responses need not be in continuous motion, although the variables of continuous and discrete types of tasks are quite similar. The task of learning to move levers and knobs through a critical distance has afforded a simple situation for studying the conditions regulating learning and performance. In these simple tasks, the simplest train of events is R1, KR1, R2, KR2, …, Rn, KRn; the timing can be anything at all. A few illustrations of typical findings will suffice: (1) the massing of trials produces faster learning than does spacing; (2) the occasional omission of KR does not prejudice the effect of a later KR on the following R; and (3) even day or week intervals between R and KR do not necessarily impair the learning of R. The learning of R, however, is seriously handicapped by (1) KRs which are vague (“You didn’t do very well”) and (2) KRs displaced from their normal position by another R, that is, R1, R2, KR1, R3, etc. Some investigators interpret the primary role of KR as reinforcing in much the same way that food can be used to shape the behavior of the hungry animal. Others interpret the primary role of KR as informational and treat it as a stimulus variable that serves as a representational code for the response or its effect.
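
The transformations of KR mentioned above can be made concrete with a hypothetical sketch of the line-drawing task; the function, thresholds, and wording are invented, but the codings follow the examples in the text.

    def knowledge_of_results(drawn_length, target=3.0, style="precise"):
        # Return the KR delivered after one line-drawing response (inches).
        error = drawn_length - target
        if style == "omit":
            return None                              # KR withheld on this trial
        if style == "vague":
            return "You didn't do very well" if abs(error) > 0.125 else "Good"
        direction = "too far" if error > 0 else "too short"
        return "%s by %s inch" % (direction, abs(error))

    print(knowledge_of_results(3.125))               # too far by 0.125 inch
    print(knowledge_of_results(2.5, style="vague"))  # You didn't do very well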

Work tasks. A man’s output is dependent upon his recent and remote history of responding. His rate of work, for example, depends upon such obvious variables as work periods, rests, and loads. Rate also depends upon anticipated conditions of practice, rest, and load. According to the reactive inhibition theory of work decrement, decrement in performance is attributable to the build-up of reactive inhibition, and recovery in performance is attributable to the decay of inhibition. This theory is quite elaborate and effective; indeed, it explains a great deal more decrement data than do physiological-fatigue theories.

Among work tasks that have been studied are prolonged efforts at cranking, canceling letters of the alphabet, and packing small objects. The investigation of vigilance, a related topic, arose with the introduction of radar—watching radar is associated with infrequent, but critical, stimulation and with losses in performance at the critical moment. Losses in proficiency, however, may be caused by other means, a prominent one being response overload. Overload can be readily brought about by requiring reactions to more than one task. The breakdown in monitoring the incoming signals is intensified by increasing their number, complicating their constitution, or raising their frequency.

Forgetting. A learned series of skilled procedures, such as an instrument check-out sequence, is much more susceptible to forgetting than a response that requires muscular coordination. The forgetting of a motor skill that may occur over periods of extended disuse is quickly overcome by comparatively few trials of retraining. Still, forgetting of even simple motor skills has been demonstrated, the phenomenon being more readily observed in changes in variance than in means.

Recent analyses of a person’s ability to remember a list of words have shown that a person is far less likely to forget than experiments since the 1890s have led us to believe. Recent work on verbal retention has made more use of meaningful material—one word per subject and normative information on word-association structures—and more use of recalling under conditions of controlled retention environments. Retrieval of words from memory seems to depend strongly on free-association processes. Cultural norms have been tabulated which show the probability (p) of any response word to a stimulus word; for example, for the stimulus word thirsty, the p(R1) for water (the most frequent response) is .35, the p(R2) for drink (the second most frequent response) is .30, and so on. If a naïve student is taught the word drink as one of several words in a list and later, in the presence of the word thirsty, fails to recall drink, then water is likely to intrude instead. The illustration shows the effects of language habits established some time ago on present recall behavior (Bilodeau 1966a).
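
The intrusion illustration can be written out directly. In the sketch below, the norms table holds only the two published values quoted above; the function and everything else are invented scaffolding.

    norms = {
        "thirsty": {"water": 0.35, "drink": 0.30},
    }

    def likely_intrusion(stimulus, taught_word):
        # If the taught word is not recalled to the stimulus, the strongest
        # remaining associate in the norms is the likely intrusion at recall.
        associates = dict(norms.get(stimulus, {}))
        associates.pop(taught_word, None)
        return max(associates, key=associates.get) if associates else None

    print(likely_intrusion("thirsty", "drink"))  # water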

The explanations of forgetting are nearly all related to interference theory, either retroactive or proactive. If the reader cannot quite recall the items of yesterday’s breakfast, it may be that this morning’s fare intrudes or otherwise interferes (retroaction); if the failure to recall can be traced to breakfasts prior to yesterday, then proactive agents are to blame. The bulk of the literature favors retroaction as the mechanism of forgetting, but proaction is favored by present-day investigators.

Transfer of training. An individual is never tested or required to perform under the very same conditions which constituted training. There is always at least a small difference; sometimes there is a large one. The inquiry into the effects of these differences is called transfer of training. The objective of any training program is to maximize the amount of transfer, although examination of actual instructional programs might make us wonder. The student might be trained to read to himself, but when tested he might be required to read orally; another’s training might be characterized by his watching, testing by his performing. Generally, it is found that learning almost anything (referred to as Task A) facilitates the learning of almost anything similar (Task B). That is, the transfer is ordinarily positive in sign. Generally, it is less than 100 per cent in quantity. In order to obtain more than 100 per cent transfer, a training trial in Task A must be superior to a training trial in Task B when subsequently evaluated by the skill shown on Task B. Strictly speaking, more than 100 per cent transfer is most difficult to find. It appears that if Task B performance serves as the criterion, it is better to train at B and, if possible, avoid Task A from the start. Task B, however, in the hands of the novice may involve elements of danger or excessive expense, and so Task A may be substituted for Task B after all. For example, though it is probably true that the best training for helicopter piloting involves learning to fly the helicopter itself, the craft is dangerous and costly to operate. The ground trainers are inefficient, but if ten hours in them are actually worth five hours in the air, the 50 per cent transfer figure works to advantage.

The findings on negative transfer (detrimental effect of Task A upon the subsequent performance of Task B) are fairly clear. When Task A interferes with B, the interference is usually small and disappears quickly with additional practice on B. Indeed, there is even evidence to show that reversed forms of the same task involve the same psychomotor factors, and, further, there is no evidence for an individual trait of susceptibility to negative transfer. It can be speculated that to immunize oneself against negative transfer, or even to accelerate the normal processes of positive transfer, an exposure to any and many tasks is desirable. On the other hand, if a small amount of negative transfer includes one fatal error, the small amount should be considered most carefully.

Most psychologists believe that learning is an incremental process which could not take place without transfer. The number of constituent elements in two adjacent learning trials (A1 and A2) and the number of elements they have in common are believed to determine the amount of transfer. Because the events of training and education are never exactly reproduced in later life, a knowledge of the principles of transfer is of top priority for all users of applied skills research.

Composition rules. Skills have been analyzed and then synthesized by methods of probability, correlation, and geometry. A probability and a correlation model are sketched below to show how the reduction of skill to its components is accomplished in principle.

Probability model. Imagine that an operator views two meters whose pointers continually wander from center and that the pointers can be recentered by means of cranks for the left and right hands. Figure 1 represents a polygraph record of the on-off target time for hands A and B. The total time represented is unity and, for simplicity only, the probability (p) for each hand’s being on-target is arbitrarily fixed at .5. Three special cases of coordination are shown: for A and B′ the time that both hands are on the target at the same time is maximum (.5); for A and B″ it is minimum (0); and for A and B‴ it is at the level of chance (.25). In each of these three cases, the hands are equally coordinated in the sense of equal proficiency at their separate tasks (.5); but in the sense of time sharing, the probability of the joint event p(AB) ranges widely from 0 to .5. Somewhat surprisingly, many training situations yield results resembling B‴, or chance time sharing, whatever the value of p. As a rule of thumb, the multiplicative formula p(A) · p(B) for independent events is used to produce a very good estimate of p(AB). A better-than-chance score, then, is a show of positive coordination.

[Figure 1: The stippled portions of the instances of B (that is, B′, B″, B‴) represent the temporal overlap of A and B, the time during which both hands are on target simultaneously. With A and B fixed at probabilities of .5, time sharing is maximum (.5) for B′, minimum (0) for B″, and at chance (.25) for B‴.]

The multiplicative formula has a number of applications. If it is generalized to a three-part profile where, for example, A = .90, B = .90, and C = .20, the following predictions of their joint occurrence [p(ABC)] are possible: the best prediction is .90 × .90 × .20 = .16; instant improvement may be dramatically obtained through any increase in the poorest part at the expense of a better part, .70 × .90 × .40 = .25; the maximum possible score is [⅓(.90 + .90 + .20)]³ = .30; the worst possible score in a rearranged profile is .00. The training objective now becomes one of raising the value of p(ABC). The most common way is to raise the sum of the part probabilities through additional standard practice. A second possibility is to make a change in the profile of the component parts, while holding their sum constant. Still a third method would strive to break the multiplicative rule and replace it with better-than-chance pairing of events.
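
The worked figures above can be checked directly; a minimal sketch (the names are ours):

    from math import prod

    def joint_p(parts):
        # Chance estimate of the joint on-target probability, p(ABC).
        return prod(parts)

    original   = [0.90, 0.90, 0.20]
    rearranged = [0.70, 0.90, 0.40]       # same sum (2.00), flatter profile
    balanced   = [sum(original) / 3] * 3  # equal parts maximize the product

    print(round(joint_p(original), 2))    # 0.16
    print(round(joint_p(rearranged), 2))  # 0.25
    print(round(joint_p(balanced), 2))    # 0.30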

The correlation model. Another method of skills analysis intercorrelates scores (1) from within a single task but from different stages of practice and (2) from different tasks at the same or different stages of practice. The two techniques reveal the amounts of variance (the statistic σ²) common to two or more variables and establish the degree of relationship. The questions at issue involve abilities, pre-experimental experience, integration of components into the total task, and training procedures. To date most of this work has involved correlations among duration scores, such as time on target in tracking tasks.
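
In outline the computation is ordinary product-moment correlation applied across subjects. The sketch below runs it on invented time-on-target scores; the data illustrate the computation only, not the empirical patterns reported next.

    def pearson_r(x, y):
        # Product-moment correlation between two lists of scores.
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        var_x = sum((a - mx) ** 2 for a in x)
        var_y = sum((b - my) ** 2 for b in y)
        return cov / (var_x * var_y) ** 0.5

    # Rows are subjects; columns are trials 1-4 (time on target, seconds).
    scores = [[5, 9, 12, 14], [3, 7, 11, 12], [6, 8, 10, 13], [2, 5, 7, 9]]
    trials = list(zip(*scores))  # one tuple of subject scores per trial

    first_vs_later = [pearson_r(trials[0], t) for t in trials[1:]]
    adjacent = [pearson_r(trials[i], trials[i + 1]) for i in range(3)]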

Intratask analyses show that correlations between the first trial and successively later trials become progressively lower, while correlations between adjacent trials grow progressively higher. These patterns mean that the underlying composition of skill becomes simpler with increasing proficiency; for example, fewer abilities contribute to sophisticated than to naive performance. Intertask analyses show that (1) predictor tests can provide better estimates for final than for initial criterion trials when the operator is skilled at both tasks and (2) the final level of criterion performance can be better predicted by extratask measures than by earlier levels of skill on the criterion-learning task.

Edward A. Bilodeau

[Other relevant material may be found in Attention; Cybernetics; Fatigue; Forgetting.]

BIBLIOGRAPHY

Adams, Jack A. 1964 Motor Skills. Annual Review of Psychology 15:181-202.

Andreas, Burton G. 1960 Experimental Psychology. New York: Wiley. → A college text; an introduction to the laboratory analysis of learning.

Bilodeau, Edward A. 1966a Retention. Pages 315–350 in Edward A. Bilodeau (editor), Acquisition of Skill. New York: Academic Press.

Bilodeau, Edward A. (editor) 1966b Acquisition of Skill. New York: Academic Press. → The book contains a survey of motor and verbal skills learning by a number of leading contributors to the field.

Bilodeau, Edward A.; and Bilodeau, Ina McD. 1961 Motor-skills Learning. Annual Review of Psychology 12:243–280.

Fitts, Paul M. 1964 Perceptual-Motor Skill Learning. Pages 243–285 in Symposium on the Psychology of Human Learning, University of Michigan, 1962, Categories of Human Learning. New York: Academic Press. → Definitions, taxonomies, models and other issues with a communication-computer flavor.

Fleishman, Edwin A. 1962 The Description and Prediction of Perceptual-Motor Skill Learning. Pages 137–175 in Robert Glaser (editor), Training Research and Education. Univ. of Pittsburgh Press. → A survey of work involving correlation and factor analysis.

Scott, Myrtle G. (1942) 1963 Analysis of Human Motion: A Textbook in Kinesiology. 2d ed. New York: Appleton.

XI LEARNING IN CHILDREN

Learning may be defined broadly to encompass relatively permanent behavior changes that result from experience. The experiential requirement usually implies that for the learning organism some changes in the associative properties of certain stimuli have taken place in such a way that the stimuli produce response effects that are different after training than before. Two general classes of empirical investigations in the area of children’s learning may be cited. One type involves experimental manipulation of variables through laboratory investigation. Such studies have varied the number of conditioning trials, schedules of reinforcement, delay and magnitude of reinforcement, and motivational factors and have measured the resulting change in response or some attribute of response (Bijou & Baer 1960; Spiker 1960; Lipsitt 1963; White 1963; Rheingold & Stanley 1963; Munn 1946). The second category includes investigations of such training variables as familial or parental practices pertaining to feeding, toilet training, and effects of deprivation in infancy or in early childhood (McCandless 1961). The concepts of imprinting and trauma are not alien to this type of study, although little solid research with children is available on such matters.

Basic learning processes

Many of the experimental procedures used with animals and human adults have been adapted to the study of child behavior at all age levels. These studies generally indicate that for both classical (Pavlovian) conditioning and operant (Skinnerian) learning processes, the various parameters pertinent to conditioning in animals also control the occurrence and rate of conditioning in children.

Classical conditioning

It has been demonstrated that in classical conditioning in children, the time interval between the initially neutral stimulus (the conditioned stimulus, CS) and the initially effective response-producing stimulus (the unconditioned stimulus, UCS) is pertinent to the rapidity and strength of conditioning: there is an optimal interval of approximately half a second that varies somewhat with age and the nature of the response. A positive relationship has also been demonstrated between the drive or arousal level of the child and the rate of classical conditioning where that level has been variously defined by measures of muscular tension, tests of anxiety, or instruction-induced stress states. As in other organisms, it has been shown that the sensory modalities to which the CS and UCS are directed are pertinent to the conditioning process, as are the number of paired CS–UCS trials administered, and the nature of the tests used to measure the presence and strength of conditioning. In fact, the conclusion that conditioning has occurred often depends on the nature of these tests, including such procedural technicalities as whether interspersed test trials were administered among the training trials, whether conditioning was measured solely during extinction, or whether the response-recording apparatus permitted detection of subtle aspects of the reaction. In short, classical conditioning in the infant and child is a well-documented phenomenon, although knowledge of the variables affecting the phenomenon, including the age of the child, requires additional and extensive investigation. At present it appears quite likely, for instance, that there is an interaction between the age of the child and certain other parameters pertinent to the speed and strength of conditioning. One suggestion is that a younger child may require longer CS–UCS intervals for optimal conditioning than an older one.

Several investigators have recently established classical conditioning in neonates under rather well-controlled experimental conditions. Conditioning of appetitive responses is apparently obtained somewhat more easily than is classical conditioning of avoidance responses, at least under the levels of noxious stimulation that have been utilized. Some investigators have pointed out that fetal response to oral stimulation is developed neurophysiologically very early and that this response, and presumably its conditionability, has great survival value for the organism and the species. It can be pointed out, however, that certain withdrawal reactions to aversive stimuli also develop early and, under adverse circumstances, could play an important role in the survival of the organism. Again, investigations involving both appetitive and aversive processes in infants are much needed.

It may be added that the behavioral phenomenon of habituation has been well documented in infants and children, just as it has been with a wide variety of infrahuman organisms. Habituation is a progressive diminution of response that occurs as a result of repetitive presentation of a given stimulus. There may be, in fact, several different response-decrement phenomena, resulting from different conditions of stimulus presentation and varying histories of the organism, only one or some of which might be properly classified as learning processes.

Operant conditioning

Operant conditioning has also received considerable attention from investigators of child behavior. The systematic reinforcement of a response initially low in the child’s hierarchy of responses importantly affects the rate at which that response will subsequently occur. Operant conditioning may involve the presentation of a “positive” event contingent on the desired response. Examples of such positive reinforcers would be the awarding of candy or the introduction of a signal indicating correctness of response. Alternately, “negative” reinforcement involves termination of an aversive stimulus contingent on occurrence of a response to be learned. Both kinds of reinforcement ultimately yield increases in the response to which the reinforcement is addressed. A third type of response-contingent event is the withdrawal of a positive reinforcer on the occasion of an undesired response. Automatic obliteration of a motion picture, for instance, has been shown to be most effective in terminating or suppressing undesired behavior, such as thumb sucking in children. The operant technique capitalizes on the well-known law of effect and has been shown under many experimental arrangements to exert powerful control over various kinds of behavior, including imitative responses, smiling, emission of expressions of courtesy, and language. It has been found that different types of reinforcement schedules (e.g., intermittent versus continuous) tend to produce different response patterns in children; these patterns result in differential susceptibility to extinction when the reinforcer is ultimately withdrawn.

Discrimination learning

Discrimination learning studies are another type of conditioning investigation having to do with effects of reinforcement in maintaining or generating certain kinds of behavior. These studies typically involve discrete trial procedures rather than techniques that permit the subject to respond at any time. In discrimination learning, the child is rewarded for choosing the correct manipulandum, or correct stimulus, among multiple opportunities present. Numerous parameters have been shown to influence the occurrence and rate of children’s discrimination learning, including some variables that are unique to the articulate organism and, therefore, affect learning differently at different ages and in children of varying intelligence and social circumstances. Such studies of children’s discrimination learning have contributed importantly to the development and extension of behavior theory; this seems to be particularly true in the areas of verbal learning and mediational (cognitive) effects upon performance.

It has been demonstrated in children that presenting the discriminative stimuli simultaneously generally produces more rapid learning than presenting the same stimuli successively, particularly if the response required is directed at or involves the manipulation of those stimuli. However, successive presentation and simultaneous presentation do not produce very different effects if the response is to be made to a locus removed from the stimulus source, such as buttons that are some distance away from the stimuli. Although psychophysical scaling of discriminative stimuli would be required for a meaningful comparison of the ease of discrimination learning in one sense modality with another, it appears that learning is more easily achieved when the discriminative cues involve variations in stimulus size and less easily achieved when the stimuli vary in color. Solid (stereometric) objects tend to be discriminated more rapidly than do two-dimensional representations of the same figures. Greater magnitudes of reward (including more preferred rewards) and lesser delays of reward produce more rapid learning than their opposites; recent data, however, suggest that the relationship may not be a monotonic one and that greater delays may result in better retention. Studies are needed of interactions of such incentive attributes of rewards as size and delay with factors affecting drive or arousal level, such as frustration. In some circumstances, increased delay of reward, for instance, may lead to poorer performance; but in other circumstances it is possible that such delay could increase frustration, which may in turn facilitate some aspect of performance, particularly those responses that are already prepotent.

Much new research on children’s discrimination learning has dealt with effects of variables that have been historically of great interest to the general experimentalist but that only recently have been selected for extensive study by child psychologists. Some of these, for instance, pertain to the relative importance of positive and negative stimuli (i.e., reinforcement versus nonreinforcement) with most results indicating that both types of trial enhance performance.

Orientation behavior. White (1963) has noted that orientation behavior of children, or “observing responses,” has been neglected by most experimenters, perhaps partly because this aspect of behavior is seldom included in formal learning theories. Because visual scanning of stimuli increases sharply at the onset of criterion performance in discrimination learning, an attentional shift important to the production of criterion or solution behavior is not unlikely. Studies of attentional behavior in discrimination learning as well as studies of observing behavior per se are becoming more frequent. In particular, a marked interest has recently arisen in the orientational behavior of human infants, wherein children from birth onward are provided with visual (and other) stimulation to which their reactions are recorded. It has been demonstrated that neonates respond differently to visual stimuli, depending primarily on the complexity of those stimuli, and that shifts in interest (defined in terms of length and frequency of fixation) occur with increasing age and experience. Since visual fixation occurs very early in life and is seemingly controlled by at least crudely specifiable stimulus attributes, there exists the intriguing possibility that attentional responses may be trained or conditioned very early through systematic reinforcement. Some writers, moreover, have suggested that visual orienting behavior of infants and the early changes in such behavior may be analogous to imprinting behavior found in lower organisms. Although much of this remains to be studied, the implication is that young human beings may “reach out” and follow with their eyes much as lower animals fixate upon and remain with objects that they encounter early [see Attention; Imprinting].

Transfer of learning

Much work in children’s discrimination learning has dealt with transfer of training. The phenomenon principally involved is that of generalization, whereby learning a certain response to a given stimulus predisposes the organism to respond to other similar stimuli, and proportionately so the greater the similarity.

Two general types of transfer of training, i.e., nonspecific and specific, have been extensively studied in children. Both of these are pertinent to the generalization of learning from one situation to another, whether from one laboratory situation to another, from the laboratory to “real life,” or from one “real life” situation to another. Nonspecific transfer, variously referred to as “warm-up” or “learning to learn” depending upon the conditions used to induce it, refers to the subject becoming “set” to perform in certain prescribed ways. This type of transfer has to do with the skills involved in manipulating the response objects, viewing the stimuli properly, or merely relaxing and awaiting instructions. Quite possibly, such warm-up may impair as well as facilitate subsequent performance depending upon the requirements of the subsequent task. Specific transfer may also either facilitate or impair performance, and it refers to the influence of earlier task requirements on subsequent task performance, particularly to whether the previously learned task is similar or not to the subsequently learned task.

Many studies of specific transfer in both verbal and motor discrimination learning have been done with children. Much of the paired associate work with children has concentrated on the negative transfer phenomenon, created by retraining the subject to make new responses to stimuli to which other responses had been previously learned. Many of these paired associate studies, however, have dealt with verbal mediation that can produce either positive or negative transfer depending on the specific stimuli and responses involved. Some of these studies of proactive facilitation or interference deal with the phenomena of acquired distinctiveness or acquired equivalence. Acquired equivalence studies have demonstrated that children have more difficulty learning differential responses to stimuli to which they have previously learned similar names, whereas acquired distinctiveness studies have shown that if children have previously learned different names for the discriminative stimuli any subsequent learning of differential responses to the stimuli will be easier.

Transposition and verbal mediation. Considerable attention has focused on a special kind of training transfer in children—that known as transposition behavior. Typically, the child is first trained to select one of two or more stimuli simultaneously presented, such as the larger of a pair of circles; transposition is said to have occurred if the child, when confronted with a transfer task involving presentation of the larger stimulus together with a still larger one, chooses the largest rather than the specific stimulus to which response has been previously reinforced. Lower animals often show transposition when the transfer pair is very similar to the training pair but a breakdown in such transposition when the test pair is more dissimilar. Several studies have shown that the same is true for young children, but total transposition tends to occur with older or more articulate children. Presumably, possession of a concept, such as “larger than,” accounts for extension of transposition to the very dissimilar stimuli. Transposition attracts developmental interest because transposition behavior clearly tends to change with increasing chronological and mental age and because language skill seems to bear an important relationship to the phenomenon.

Corroborative evidence for the importance of verbality in discrimination learning is found in studies comparing “reversal shift” with “nonreversal shift” procedures. The typical experiment involves the presentation of stimuli varying in two dimensions, with one of these dimensions providing the cues pertinent to making the correct response. For instance, both size and brightness might be varied, the child being required to respond to the dark rather than the bright stimulus, regardless of size. A reversal shift would involve a change to making a bright response correct, while a nonreversal shift would make size the pertinent dimension. Nonreversal shift has been found easier for animals and young children; reversal shift is easier for adults and more articulate (or older) children. Presumably, verbal mediational factors, such as the subject informing himself covertly of the pertinent dimension, are crucial [see Concept formation].

The study of children’s cognitive processes has been approached recently by many students of paired associate learning and verbal mediation. While much exploration remains to be done on the mechanisms by which verbal or symbolic responses mediate and control behavior, it has become increasingly apparent that children are excellent subjects for such study. Work with children should enable extensions of behavior theory that would not be possible otherwise. There is the interesting and not unlikely possibility, moreover, that studies of verbal learning in children, including the phenomena of associative clustering, free association, and other aspects of verbal expression, will illuminate important personality processes and anomalies. The suggestion does not seem amiss, for instance, that self-concepts (the responses that humans make to themselves about themselves) may be viewed as covert verbal responses which are learned according to the principles by which other responses are learned and that these self-conceptualizations may act as mediational responses to affect subsequent learning.

Effects of early experience

The study of children’s learning and the lasting effects of such learning necessarily includes any documentation of sequelae of “crucial” life circumstances. Thus, any studies relating parent–child variables or institutional factors as antecedents to behavior of children fall within the scope of the present topic. Effects of traumatic experiences and psychodynamic hypotheses about such effects ultimately refer to learned changes in behavior that reflect familial or other social circumstances. One of the difficulties inherent in the study of the relationship between such early experiences and later behavior is that the behavioral phenomena must occur in natura to be the subject of study, since it is impossible or undesirable to produce such behavior deliberately. Consequently, factors other than those specifically investigated have the opportunity of producing effects on the behavior studied. For instance, it has been demonstrated that infants who were rated as being permissively fed engaged in more “reality play” at preschool age, whereas children who were rated as being rigidly fed engaged in more fantasy. While such a finding is interesting and suggestive and does support clinically held presuppositions about the influences of feeding schedules on children’s behavior, the possibility exists that both rigidity in feeding and the occurrence of fantasy behavior in children are products of a common type of parenthood. This possibility necessarily attenuates the cause–effect relationship one would wish to infer from the data.

The same methodological weakness lies in the cross-cultural approach to collecting developmental data on effects of early-experience factors. For instance, it has been shown that there is a rather high negative correlation between the age of weaning and intensity of guilt feelings among members of a large number of cultures. While it is tempting, on the basis of such data, to conclude that guilt is produced by early weaning or oral frustration, it is possible that those cultures which reinforce guilt responses are also those which happen to wean early. Perhaps both guilt and early weaning are behavioral phenomena produced by some third causative factor. Another study related oral pessimism and optimism to age of weaning (whether before or after four months of age), and found that the oral pessimists tended to have been weaned earlier. A number of studies exist that, like those cited, implicate the age and style of weaning as causatively pertinent social determinants of later behavior, but few of these studies permit more than conjectural conclusions.

Toilet training

Another area of children’s social training which involves a great investment of parental time and produces considerable conflict and anxiety is toilet behavior. Just as certain crucial interactions between parent and child may occur around oral activities when the child is in infancy, so later may the child’s excretory activities become the focus of much parental attention. Studies suggest that toilet-training practices do constitute an important “arena” within which parents and children interact, often unpleasantly, to produce potentially lasting developmental effects. The earlier toilet training starts, the longer it takes to complete. Also, the earlier such training starts, the more annoying, frustrating, and generally unpleasant experiences there are likely to be between the parties involved. Rigid toilet training, along with a constellation of other restrictive parental attributes, seems to be associated with slower development, and mothers who are high in anxiety tend to start toilet training earlier than more relaxed mothers.

Deprivation of social stimulation

While effects of institutionalization and, in general, deprivation of social stimulation remain to a certain extent controversial, the bulk of evidence suggests that such experiences often produce serious emotional and intellectual deficits. The effects are not as controversial as is the specification of the real antecedent events producing these effects, e.g., whether the pertinent variable is separation from a mother or sheer reduction in human or environmental contacts. It does seem reasonable to assume that institutional and deprivational effects consist largely of sequelae to previous unfortunate learning circumstances [see Infancy, article on THE EFFECTS OF EARLY EXPERIENCE].

Lewis P. Lipsitt

[See also Developmental psychology; Intellectual development. Other relevant material may be found in Infancy; Intelligence and intelligence testing; Language, article on LANGUAGE DEVELOPMENT; Perception, article on PERCEPTUAL DEVELOPMENT; Reading disabilities; Sensory and motor development; Socialization; Stimulation drives.]

BIBLIOGRAPHY

Bijou, Sidney W.; and Baer, Donald M. 1960 The Laboratory–Experimental Study of Child Behavior. Pages 140–197 in Paul H. Mussen (editor), Handbook of Research Methods in Child Development. New York: Wiley.

Lipsitt, Lewis P. 1963 Learning in the First Year of Life. Volume 1, pages 147–195 in Lewis P. Lipsitt and Charles C. Spiker (editors), Advances in Child Development and Behavior. New York: Academic Press.

McCandless, Boyd R. 1961 Children and Adolescents: Behavior and Development. New York: Holt.

Munn, Norman L. (1946) 1954 Learning in Children. Pages 374–458 in Leonard Carmichael (editor), Manual of Child Psychology. 2d ed. New York: Wiley.

Rheingold, Harriet L.; and Stanley, Walter C. 1963 Developmental Psychology. Annual Review of Psychology 14:1–28.

Spiker, Charles C. 1960 Research Methods in Children’s Learning. Pages 374–420 in Paul H. Mussen (editor), Handbook of Research Methods in Child Development. New York: Wiley.

White, Sheldon H. 1963 Learning. Pages 196–235 in National Society for the Study of Education, Committee on Child Psychology, Child Psychology. Yearbook, Vol. 62, part 1. Univ. of Chicago Press.

XII PROGRAMMED LEARNING

The term “programmed learning” is used to describe an instructional situation in which materials presented in a controlled sequence require the learner to respond in a way that meets specified criteria of the program objectives. Terms often used synonymously are “programmed instruction,” “automated instruction,” “automatic tutoring,” or even “teaching machines.”

Because of the control over responses and sequence of presentation, the materials are referred to as a “program.” The responses made by the learner may be completing a statement with a word or words, writing an answer to a question, making a selection in a multiple-choice situation, imitating auditory or visual stimuli with oral or motor responses, stating agreement or disagreement, or solving a problem. The program may be presented to the learner through a mechanical device, known as a teaching machine, or in a book, known as a programmed textbook. The materials are programmed so that a tutorial situation is approximated without the immediate presence of a human tutor.

Programmed learning is viewed as a technological advancement in education and training developed in order to meet the increasing complexities in nearly all areas of human learning endeavor. In education, these complexities include the numbers to be educated, the rapidly expanding body of knowledge to be taught, and the special cases within a population—e.g., the intellectually gifted, the retarded, the delinquent, and the illiterate. Problems in management development and training-retraining associated with automation are concerns in business and industry to which the techniques of programmed learning are applicable.

History

Sidney L. Pressey

A device that could administer and score tests automatically was exhibited by Sidney L. Pressey in 1924. In a description of the uses of this device in his educational psychology classes at Ohio State University, Pressey also described the effectiveness of this machine for drill and recitation ([1926] 1960, pp. 35-41). The machine, which looked like a four-key typewriter, presented multiple-choice questions to the student. After the student had completed instruction through lectures and text reading, the machine was used to test his retention. The student pressed the key corresponding to his choice for each item. If the choice was correct, the machine would present the next question; if it was incorrect, the machine would not advance. The machine recorded the total number of key presses for the entire test. Making the student immediately aware of the correctness of each response applied several of Thorndike’s principles of learning more effectively than the ordinary behavior of a human teacher could. Pressey observed that students who were tested by machine for weekly units of work showed higher achievement than students who took conventional tests.
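The contingency the device implemented is simple enough to state precisely. The following sketch, in Python for concreteness, is a hedged reconstruction of that contingency, not a description of the machine itself; the items list and the choose_key callable are illustrative assumptions.

```python
# A minimal sketch of the contingency Pressey's device implemented; it is a
# reconstruction for illustration, not the machine itself. The items list
# and the choose_key callable are assumptions.

def run_pressey_test(items, choose_key):
    """items: list of (question, choices, correct_index) tuples.
    choose_key: callable standing in for the student's key press."""
    total_presses = 0
    for question, choices, correct_index in items:
        while True:
            key = choose_key(question, choices)   # student presses a key
            total_presses += 1                    # every press is recorded
            if key == correct_index:
                break                             # correct: advance to next item
            # incorrect: the same item stays in place
    return total_presses                          # lower total = higher achievement
```

Because the item does not advance until the correct key is pressed, the total count of presses doubles as an error score, which is how one record could serve both testing and drill.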

Educators and trainers gave almost no consideration to the work done by Pressey and some of his students with the machine. After several years of effort modifying the device and applying it to several types of courses at different age levels, Pressey stopped working on the device and stated that education could not stay in a “crude handicraft stage” but would have to begin “quantity production methods” ([1932] 1960, pp. 47-51). He also predicted that new instruments and materials would be developed to facilitate research and sweeping advances in education and learning. Whether because of cultural inertia or other reasons, automated instructional devices failed to become established among educators and psychologists.

B. F. Skinner

More than twenty years later, in the 1950s, B. F. Skinner (1954, pp. 86–97) pointed out that education as a technology of learning did not approximate in its practice those principles observed and confirmed in learning research. Skinner stated that there were two principles of the learning process that had to be considered by those involved in teaching and training. The first, “contingencies of reinforcement,” he described as a serious application of Thorndike’s “law of effect,” since it makes certain that desired responses appear in the student’s behavior and that these responses are immediately reinforced. The second principle maintains that reinforcement should be arranged or “scheduled” so that the learner continues to make responses, i.e., so that the material keeps him interested. Responses that successfully approximated the criteria of learned behavior should be emitted by the learners, and any other responses would be considered evidence of a faulty arrangement of the stimuli presented to the learners.

On the basis of these principles Skinner stated that anyone wishing to control the learning situation so that the desired changes in behavior would occur must consider the following questions: (1) What responses are desired to meet the criterion of learning? (2) What sort of successive approximations in emitted responses will lead to the desired behavior? (3) What reinforcers are available in the particular situation? (4) How can the reinforcements be arranged so that behavior can be maintained in necessary strength?

It was obvious to Skinner that educational practice would have to change radically to be able to construct an instructional situation that would meet these requirements. For example, almost no provision was made for each learner to emit successive approximations of the desired behavior, nor was there any provision for the desired responses to be frequently and immediately reinforced. He observed that the reinforcements used in education were usually indirectly related, at best, to the responses desired for learning and that the contingencies of reinforcement, if considered at all, were arranged most haphazardly. The teacher as the primary reinforcing agent certainly was not adequate in most instructional situations. Some sort of device was needed.

A number of studies followed in which programs and machines were developed and tested, applying the principles described by Skinner. Programs in the areas of physics, remedial reading and vocabulary building, spelling, German, arithmetic, algebra, and psychology were involved. Various machines were designed and built for these programs. Much of this work was done under Skinner’s direction and influence and reported by him a few years later (Skinner 1958, pp. 969-977).

The mechanism, or machine, had a number of features differing from Pressey’s. The learner was required to compose his answer rather than select one from alternatives. Skinner argued that in step-by-step approximations plausible alternative choices presented to the learner can potentially strengthen unwanted responses. The machine would present only one frame, or item, to which the learner responded, and all other frames were out of sight. The machine would not advance to the next frame until the learner responded correctly on the current frame. Coding the correct answers into the machine made this feature automatic. With older subjects it was felt that the learner could himself make the comparison between his response and the correct one and that precoded answers might make the program too rigid. These frames were on a disc, which revolved on a turntable; the frames were exposed one at a time, and the learner composed his answer on a strip of paper exposed in another opening. After making his response, he raised a lever that caused his response to move under a transparent cover and at the same time exposed the correct answer. Lowering the lever caused the disc to expose the next frame. The machine could only control the presentation of the program. This control is most vital in the learning situation, but it is the program or material being exposed that teaches.
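The logic of this linear arrangement, one frame visible at a time, a constructed response, immediate exposure of the answer, and no advance until the response is correct, can be summarized in a short sketch. The code below is an illustrative reconstruction under assumed placeholders (the frames list and the compose_answer callable), not Skinner’s disc-and-lever apparatus.

```python
# A minimal sketch of the linear contingency described above, assuming
# hypothetical frames and a compose_answer callable standing in for the
# learner; it illustrates the logic, not the physical machine.

def run_linear_program(frames, compose_answer):
    """frames: ordered list of (prompt, correct_answer) pairs."""
    for prompt, correct_answer in frames:      # one frame visible at a time
        while True:
            response = compose_answer(prompt)  # learner composes, not selects
            print(correct_answer)              # answer exposed at once (feedback)
            if response == correct_answer:
                break                          # advance only on a correct response
```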

The characteristics of this learning situation can be described as follows: (1) The student is forced to be active in the learning situation. Unlike less-controlled situations, such as lectures, text reading, movies, or television, he is forced to make responses to stimuli as they are presented to him. (2) He must give the correct response before proceeding further. Again, this differs from techniques where the next stimulus can be presented whether or not the student is ready to proceed. (3) Through the step-by-step approximation, it is apparent when the learner is ready for the next step. (4) With hints, suggestions, and promptings the program helps the learner to make the correct response. (5) Immediate reinforcement is given to each appropriate response. The exposure of the answer is reinforcement, and this immediate feedback is sufficient to maintain the strength of the behavior, i.e., “keep him going.”

Norman A. Crowder

Somewhat different approaches to automated instructional devices began to appear in 1958. These differed from Skinner’s mainly in what was termed intrinsic programming: it was not so important that errors be completely omitted from the learner’s responses but that the program should adjust to the correct or incorrect response. Examples of this type of programming are the Tab Item, digital computers that adjust problems automatically according to the learner’s responses, and Crowder’s automatic tutoring devices. Since Crowder and Skinner represent the two major approaches to automated instruction in the developmental period, the comparison will be made between what Crowder has described as intrinsic programming and what has been discussed concerning Skinner’s approach.

Crowder’s intrinsic program goes beyond “knowledge of results” to an evaluation of the communication process between the learner and the program. Crowder stated that it is impossible to understand the learning process with specific material so completely that perfect step-by-step approximation can be constructed. To overcome this handicap he built into the program an evaluation of the learner’s responses in order to make corrections when the learner does not adequately understand each step. A simple example of how this is done is Crowder’s “TutorText” (1960, pp. 286-298). A problem is introduced and the learner makes a choice among the answers that are presented at the bottom of the page in a multiple-choice arrangement. Along with each choice is a page number to which he is referred on the basis of that answer. If his answer is a correct one, he is informed of that fact and is presented with the next step. If his answer is incorrect, his error will be pointed out and explained, and he is referred to the original problem again. The “AutoTutor” is a more complex mechanism, which presents microfilm, motion picture film, or both and records responses and response time on a paper tape. Crowder described what he calls greater flexibility both within and between program steps. “Within-step flexibility” refers to each item of a page or screen presentation, and this is a larger amount of material than is presented in one frame on Skinner’s program. Crowder states that this larger amount of material, or flexibility, is necessary because of the complexity of the material and the complexity of the learners. The “between-items flexibility,” sometimes referred to as “branching,” is necessary because all incorrect responses represent a communication failure that needs correcting, and this can be done only by repeating some items or introducing special items to clear the misunderstanding.
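The branching logic amounts to a page-transition table in which every answer, right or wrong, names the page to be read next. The sketch below is a minimal illustration of that idea with hypothetical page contents; it is not the TutorText itself.

```python
# A minimal sketch of intrinsic ("branching") programming, with hypothetical
# page contents: an error routes the learner through corrective material
# rather than being prevented in advance.

pages = {
    1: ("Problem: 2 + 3 = ?", {"5": 2, "6": 3}),   # choice -> next page
    3: ("Not quite: you seem to have added 2 + 4. Return to the problem.",
        {"continue": 1}),
    2: ("Correct. Next problem ...", {}),           # terminal page
}

def run_tutor_text(pages, start, choose):
    """choose: callable standing in for the learner's selection."""
    page = start
    while True:
        text, choices = pages[page]
        print(text)
        if not choices:                 # no choices listed: sequence ends
            return
        page = choices[choose(text, list(choices))]  # branch on the answer given
```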

Another major difference between the approaches of Skinner and Crowder is the question of response mode. Skinner emphasizes the necessity of the subject constructing his response rather than responding to a multiple-choice situation. Related to this difference is the fact that the steps from one frame to the next represent a wide jump in Crowder’s intrinsic programming as opposed to the small steps in Skinner’s linear programming.

Programmed learning criteria

Regardless of the differences between what has been described as linear programming and intrinsic programming, certain criteria can be established for both, which distinguish programming from other techniques and devices of instruction.

(1) Stimuli to which the learner must respond are presented to him. Active participation is required of the learner in contrast to the situations of the lecture, textbook, and audio-visual aids.

(2) The sequence of the material presented is highly controlled as a result of prior observation of its content within and between steps.

(3) A two-way communication is established, since immediate feedback is given by the program to the learner’s response. The learner is aware of his progress at all times.

(4) Reinforcement or reward (usually this is immediate feedback) is used to keep the learner responding or interested.

(5) The learner responds to the program at his own rate; this then is similar to a tutorial situation.

(6) Learning occurs without a human instructor in the immediate situation.

Another way of contrasting the techniques of automated instruction with the more traditional educational methods is in the emphasis on what pays off for the learner rather than for the instructor. The lecturer, textbook writer, and the director of various audio–visual aids make use of those techniques which work for each in his own medium. In building a program the emphasis is on the learner’s behavior at each step from beginning to end.

Research and development

The first reports of the use of programs in instruction began to appear in 1958, and most of the early reports were based on programs and machines developed and tested under Skinner’s direction and influence. Much of this early effort was reported by Skinner in an article in Science which received a wide audience and gave impetus to the teaching-machine movement (Skinner 1958). In fact, the article was titled “Teaching Machines,” and this was the first time these devices were given this label; the continued use of the term is an indication of the impact of that article.

Effectiveness of programmed instruction

The earliest research yielded some rather dramatic results that indicated a superiority of programmed instruction to more conventional techniques. This superiority was demonstrated in the significant differences found in the amount of time spent in learning and in learner performance.

The research and development in the next few years was phenomenal—a commentary on the value of this early effort and the tremendous need that many scholars felt existed for work in this direction. By 1964 more than two hundred research reports in programming had appeared, directed toward the questions of whether programs do teach and, if so, which of the significant variables in the teaching-learning situation are under the control of the program.

The evidence leaves no doubt that programs teach: they do. Results from programs using the models of Skinner, Crowder, or Pressey, as well as recent variations or combinations of these, contribute to this conclusion. Furthermore, learning occurs whether the program is presented by machine or in a text format. Learners varying in age from preschool to adult and in ability from the retarded and adult illiterate to the advanced graduate student and practicing professional have been the subjects of these observations of program effectiveness. Programs have been used to teach motor, verbal, and perceptual skills at nearly every level of difficulty.

The question of whether programs teach more efficiently and effectively than other possible techniques was one of the first asked in research; in fact, most of the early research was concerned with a comparison of programmed instruction with conventional instruction. All but a few of these studies showed either a significant difference in favor of the program or no difference between the two. These observations were made with programs using the Skinnerian linear, or “shaping” model, the Crowder “intrinsic,” or communication model, and the automated test model of Pressey. Although most of the programs constructed for these studies were of the linear type, enough programs employing the techniques of other models were used to indicate no inherent superiority of a particular set of techniques. It should be pointed out that these studies lacked the precision and thoroughness to warrant much confidence in them; the programs in most cases covered relatively small amounts of instructional material and generally were too crude and hastily developed to be exemplary of a desirable programming technology. Nevertheless, the results were such that most researchers felt that programs provided effective instruction, and because of the control over the teaching-learning situation inherent in the programming approach, their efforts were directed toward isolating those variables which make the instructional situation effective.

Analysis of the learning process

Most of the research has been done by psychologists and educational psychologists with the objective of basing a description of the learning process in instructional situations upon psychological principles of learning. Not since Thorndike had experimental psychologists interested in learning directed a concerted effort toward the application of learning principles in instruction. Because of techniques of behavioral analysis developed by Skinner and because of the respected position he enjoys among experimental psychologists, a great number of psychologists were attracted to analysis of instruction through techniques of behavioral analysis developed in the laboratory. It was the application of the methodology to the applied situations in education and training, rather than a comparison of new and conventional instructional techniques, that attracted most psychologists. The major research, then, was concerned with isolating and describing the critical variables in the teaching-learning situation, using the method of behavioral analysis.

It was natural that many researchers began by attempting to replicate in modified form the earlier learning laboratory experiments and that the first of these were directed toward those variables found to be significant in Skinner’s techniques of behavioral analysis. The techniques to be used for shaping the learner’s responses, the effect of errors on this shaping process, the characteristics of the responses to be made by the learner, and the identification of the reinforcers in these learning situations were the problems covered in the early research; namely, how should the stimulus material be presented to the learner, in what mode and in what relationship to the stimulus should the learner’s response be made, is confirmation of correct responses a reinforcement, and what effect does a high error rate have on learning?

Amount of information. A number of studies have focused upon varying the amount of information to which the learner is to respond. This amount ranged in scope from short statements to one or two paragraphs or even several pages of written material. The results of these studies are not easy to interpret; the amount of information is difficult to measure in terms of length alone and is not independent of the type of information transmitted. Generally the results favor smaller amounts, especially in the early steps of instruction. Since most of the studies used short programs involving a relatively small amount of information, critical tests of this question have yet to be made.

Sequence of information. Related to the amount of information is the problem of how the information is to be sequenced. Should the information be arranged according to “expert” understanding of the specific material? Is there some logical pattern underlying the learning task that can be used in sequencing the material for instruction? Several experiments have failed to show any difference between ordered and random sequencing of the material, but these have been with short programs. A few studies that have been concerned with analyzing the material to identify categories of learning tasks for instructional sequencing appear as the major effort in attacking this area. By basing the instruction on the learner’s present repertory and then proceeding through the material that has been sequenced according to the characteristic responses to be learned, the studies have made a major contribution to the technology of programmed learning.

Mode and importance of response. A relatively large number of experiments have been concerned with the response mode, i.e., overt responses, covert responses, multiple-choice responses, or reading the same material with no required response. In the great majority of studies no difference has been found among the three types of active responses, and evidence does not clearly indicate that active responding is superior to merely reading the material. The results, however, do show a relationship between errors during learning and some criteria of performance at the end of instruction. When response errors are made in the program, evidence suggests that those students required to make a correct response before proceeding further ultimately perform at a higher level. Obviously, a short program with a low error probability would not yield much difference, especially if the performance criteria were not particularly sensitive.

In addition to the mode of response, the question of the relationship of the response to the material has been investigated, i.e., is the response critical to the material presented by the stimulus? Although only a few studies have been directed toward this question, the evidence indicates superior results from those programs in which critical response is required.

The nature of reinforcement. The research area receiving the major emphasis in the early work in programmed learning has been the application of the principle of reinforcement. What is reinforcing in the programming situation? Confirmation of correct responses is not clearly a reinforcer in all programming situations; the responses of some students do extinguish in the presence of confirmation while those of others fail to extinguish in the absence of confirmation. In several studies the effects of prompting (the correct response being shown to the learner, who is then required to repeat it) have been compared to those of confirmation, and in most of these studies prompting led to higher performance than confirmation. Obviously, reinforcement in the programming situation is related to the incentive conditions under which the learner is responding. One study suggests that the appearance of the frame in the machine is itself reinforcing. Other efforts to control responses have made highly desirable behaviors contingent upon making responses in a program. For example, a peer-tutor situation makes use of this by requiring the student to learn in order to teach another student. This has been most successful in teaching adult illiterates to read and write. Nevertheless, the complex relationship of intrinsic and extrinsic reinforcers present in human learning situations makes the task of identifying effective reinforcers extremely difficult indeed.

Errors. One of the clearest results of the research has been recognition of the relationship between the number of errors and performance criteria. Programs with a lower probability of response errors tend to be related to ultimately higher criterion performance. The cause of a high rate of error obviously cannot be separated from other variables in the instructional situation; therefore, attempts to solve this particular problem become somewhat circular. There has been no adequate analysis of the effect of errors in intrinsic programs except an awareness that a learner’s attitude tends to be negative when the error rate is high. Regardless of the type of program, the evidence indicates that errors need to be corrected immediately before proceeding.

Evaluation

Looking back over the concerted research effort to describe the significant variables in the area of programmed learning, one is struck by the high proportion of studies in which no differences have been observed. It is clear that the variables involved in effective instruction have not been isolated and described. Many studies that have registered observable differences are counterbalanced by contradictory evidence in other research or lack sufficient replication to allow extrapolation to general instruction procedure. While the effort has been considerable, the period during which this work has occurred has been a brief one, the programs have covered only small amounts of material, and the instruments for evaluation have lacked precision.

Potential

Although positive contributions to an instructional technology from specific research efforts are few, the fact remains that never has the teaching–learning situation received the attention of so many experimentalists interested in human learning. Programmed learning represents an application of behavioral analysis techniques to the learning of meaningful material. The controlled observations possible through programmed learning are making possible more precise descriptions of behavior in the instructional situation than at any time previously. The necessity of evaluating instruction with stated objectives in behavioral terms and the effectiveness of the principles of active response and immediate reinforcement in instruction have been successfully demonstrated. From these beginnings a technology can be expected to develop which translates, in a systematic and highly generalizable way, the specified terminal behaviors into the form and sequence of the instructional task.

To many psychologists and educators the attraction of programming, and certainly the success of the approach thus far, has been the attention to laboratory research in learning. As noted earlier, however, the research in programming has been concerned with demonstrating in applied learning situations the influence of variables studied in the laboratory; in general, the results of this research have been somewhat equivocal. Gagné (1962, pp. 85-86) has noted that the identification and arrangement of task components are more important in developing efficient and effective instructional situations than many of the variables studied in the learning laboratory, e.g., reinforcement, meaningfulness, distribution of practice, and so forth. Also, Melton (1959, pp. 100-101) has stated that laboratory research has not produced sufficient knowledge of different learning areas to allow an integration of possible generalizations to be highly useful in application. Melton also stated that there is yet no satisfactory taxonomical scheme to describe specific tasks that humans perform.

It is apparent that a number of factors are contributing to the absence of any rapid integration of a science of learning with an educational technology. The limited development of a science of learning, the lack of a taxonomy that allows placement of learning tasks in a dimensional matrix, the mutually exclusive efforts of the experimental and educational psychologists between the 1930s and late 1950s, and the complex interaction of variables in an educational learning situation are some of the obstacles that have held back such an integration. By the mid-1960s, however, these obstacles seemed to be disappearing. Experimental psychologists were introducing into the laboratory problems from applied instructional situations, and experimental and educational psychologists were cooperating on research projects at an increasing rate. Programmed learning introduced a methodology for observing and controlling behavior in an instructional situation which attracted the experimentalist, and the programming technique proved to be an effective instructional instrument which attracted the educational psychologist. Clearly the effort was made to build an educational technology from a science of learning just as an engineering technology was built from basic sciences. Equally clear was the necessity for an area of transitional research to develop a taxonomy of tasks useful to technology and the science of learning.

Breadth of application

By 1965 there were more than a thousand programs published and available for purchase in the United States. Of these, approximately two-thirds were educational programs for courses or units within courses at all levels—elementary through graduate school. Programs were available for teaching beginning reading skills, mathematics at all levels, second-language reading and listening skills, spelling, grammar, punctuation, economic concepts, music fundamentals, statistics, genetics, biology, medicine, and physics. Many more programs were being developed or had been developed and were being used for limited objectives in specific classes. Programs were being used in other special situations, such as educational and vocational counseling, marriage counseling, interpersonal relationships, and the teaching of recreational skills.

Nearly three hundred programs in the field of business and industry were published and available by 1965; these included programs in areas of secretarial skills, management skills, bank teller skills, salesman training, and consumer training. Many more programs had been developed for the exclusive training of personnel of a particular company. Also, the military and the U.S. Public Health Service have developed a large number of programs to train their personnel.

Except in those cases in which the material presented or the response required demands a mechanical device, most programs are available in a text format. Language-listening skills, pitch discrimination, or the control of the responses of a small child are examples of such specific demands. The text format has provided economy and flexibility advantageous to programming’s extended use in education, but the format probably has had a restraining effect on making broader application of the programming technique in education.

Use outside the United States. Because the programming technique was developed in the United States, most research has taken place and the largest number of programs have been published there, but considerable effort in the research and development of programmed materials has been made in other countries. Considerable use of the techniques has been made in many European and Latin American countries, particularly in Great Britain, Germany, Sweden, the Soviet Union, the Netherlands, and Brazil. Much of the work has been done in these countries by following models of efforts in the United States. With the exception of the Soviet Union, a science of learning has not developed in other countries to the extent that it has in the United States, a fact which has limited the use of the programming model elsewhere. Because of a highly developed psychology of learning and its differences from that in the United States, the Russians might be expected to make significant contributions to the programming field.

In 1963, two Unesco-sponsored workshops, one in Nigeria and the other in Jordan, introduced programmed instruction to areas of the world where it may have special significance. The necessity for more efficient and effective methods of education and training is especially great in the so-called developing countries, but this necessity is compounded by the world-wide scarcity of teachers. The self-instruction feature of programming makes its potential obvious.

There is little doubt that programmed learning represents a significant union of the science of learning with the practical problems of learning management. Effective teaching and training devices have been constructed. These early devices undoubtedly are extremely crude in comparison to what may appear in the future. The limit and potential of the use of programming in education and training are far from being determined in this early stage of development. The more important contribution of programmed learning to teaching and training is the introduction of a technique for an experimental analysis of behavior. Through the technique the practical problems of learning management can be brought under control so that careful observation and precise descriptions of learner responses to stimulus materials in an instructional situation can be made.

Russell W. Burris

[Other relevant material may be found in Educational Psychology; Simulation, article on Individual Behavior; and in the biography of Thorndike.]

BIBLIOGRAPHY

Center for Programed Instruction, New York 1962 The Use of Programed Instruction in U.S. Schools: Report of a Survey of the Use of Programed Instructional Materials in the Public Schools of the United States During the Year 1961-1962. New York: The Center. → Compiled and produced by the Center’s Research Division in cooperation with the U.S. Department of Health, Education and Welfare.

Conference on Application of Digital Computers to Automated Instruction, Washington, D.C., 1961 1962 Programmed Learning and Computer-based Instruction: Proceedings. New York: Wiley. → See especially John E. Coulson’s contribution, “A Computer-based Laboratory for Research and Development in Education,” on pages 191–204.

Crowder, Norman A. 1960 Automatic Tutoring by Intrinsic Programming. Pages 286–298 in Arthur A. Lumsdaine and Robert Glaser (editors), Teaching Machines and Programmed Learning: A Source Book. Washington: National Education Association, Department of Audio–Visual Instruction.

Gagné, Robert M. 1962 Military Training and Principles of Learning. American Psychologist 17:83-91.

Gagné, Robert M. 1965 The Conditions of Learning. New York: Holt.

Galanter, Eugene (editor) 1959 Automatic Teaching: The State of the Art. New York: Wiley.

Glaser, Robert (editor) 1962 Training Research and Education. Univ. of Pittsburgh Press.

Green, Edward J. 1962 The Learning Process and Programmed Instruction. New York: Holt.

Holland, James G. 1960 Teaching Machines: An Application of Principles From the Laboratory. Journal of the Experimental Analysis of Behavior 3:275-287.

Lumsdaine, Arthur A. 1961 Student Response in Programmed Instruction: A Symposium on Experimental Studies of Cue and Response Factors in Group and Individual Learning From Instructional Media. Washington: National Academy of Sciences-National Research Council.

Lumsdaine, Arthur A.; and Glaser, Robert (editors) 1960 Teaching Machines and Programmed Learning: A Source Book. Washington: National Education Association, Department of Audio-Visual Instruction.

Mager, Robert F. 1961 Preparing Objectives for Programmed Instruction. San Francisco: Fearon.

Melton, Arthur W. 1959 The Science of Learning and the Technology of Educational Methods. Harvard Educational Review 29:96-106.

Pressey, Sidney L. (1926) 1960 A Simple Apparatus Which Gives Tests and Scores—and Teaches. Pages 35–41 in Arthur A. Lumsdaine and Robert Glaser (editors), Teaching Machines and Programmed Learning: A Source Book. Washington: National Education Association, Department of Audio-Visual Instruction. → First published in Volume 23 of School and Society.

Pressey, Sidney L. (1932) 1960 A Third and Fourth Contribution Toward the Coming “Industrial Revolution” in Education. Pages 47–51 in Arthur A. Lumsdaine and Robert Glaser (editors), Teaching Machines and Programmed Learning: A Source Book. Washington: National Education Association, Department of Audio-Visual Instruction. → First published in Volume 36 of School and Society.

Pressey, Sidney L. 1963 Teaching Machine (and Learning Theory) Crisis. Journal of Applied Psychology 47:1-6.

Schramm, Wilbur L. 1962 Programmed Instruction, Today and Tomorrow. New York: Fund for the Advancement of Education.

Schramm, Wilbur L. 1964 The Research on Programed Instruction: An Annotated Bibliography. U.S. Office of Education, Bulletin No. 35. Washington: U.S. Department of Health, Education and Welfare, Office of Education.

Skinner, B. F. 1954 The Science of Learning and the Art of Teaching. Harvard Educational Review 24:86-97.

Skinner, B. F. 1958 Teaching Machines. Science 128:969-977.

LEARNING


ANALOGICAL REASONING
Dedre Gentner
Jeffrey Loewenstein

CAUSAL REASONING
Joseph P. Magliano
Bradford H. Pillow

CONCEPTUAL CHANGE
Carol L. Smith

KNOWLEDGE ACQUISITION, REPRESENTATION, AND ORGANIZATION
Danielle S. McNamara
Tenaha O'Reilly

NEUROLOGICAL FOUNDATION
Howard Eichenbaum

PERCEPTUAL PROCESSES
John J. Rieser

PROBLEM SOLVING
Richard E. Mayer

REASONING
Thomas D. Griffin

TRANSFER OF LEARNING
Daniel L. Schwartz
Na'ilah Nasir

ANALOGICAL REASONING

Analogy plays an important role in learning and instruction. As John Bransford, Jeffrey Franks, Nancy Vye, and Robert Sherwood noted in 1989, analogies can help students make connections between different concepts and transfer knowledge from a well-understood domain to one that is unfamiliar or not directly perceptual. For example, the circulatory system is often explained as being like a plumbing system, with the heart as pump.

The Analogical Reasoning Process

Analogical reasoning involves several sub-processes: (1) retrieval of one case given another; (2) mapping between two cases in working memory; (3) evaluating the analogy and its inferences; and, sometimes, (4) abstracting the common structure. The core process in analogical reasoning is mapping. According to structure-mapping theory, developed by Dedre Gentner in 1982, an analogy is a mapping of knowledge from one domain (the base or source) into another (the target) such that a system of relations that holds among the base objects also holds among the target objects. In interpreting an analogy, people seek to put the objects of the base in one-to-one correspondence with the objects of the target so as to obtain the maximal structural match. The corresponding objects in the base and target need not resemble each other; what is important is that they hold like roles in the matching relational structures. Thus, analogy provides a way to focus on relational commonalities independently of the objects in which those relations are embedded.

In explanatory analogy, a well-understood base or source situation is mapped to a target situation that is less familiar and/or less concrete. Once the two situations are aligned (that is, once the learner has established correspondences between them), new inferences are derived by importing connected information from the base to the target. For example, in the analogy between blood circulation and plumbing, students might first align the known facts that the pump causes water to flow through the pipes with the fact that the heart causes blood to flow through the veins. Given this alignment of structure, the learner can carry over additional inferences: for example, that plaque in the veins forces the heart to work harder, just as narrow pipes require a pump to work harder.
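The alignment-and-projection step can be made concrete with a small sketch. Here the hand-supplied object correspondence and the toy list of base relations are illustrative assumptions, not part of the theory's machinery: given a one-to-one mapping, relations stated over base objects are carried across as candidate inferences about the target.

```python
# A minimal sketch of alignment and projection, under an assumed object
# correspondence; the relation names and facts are illustrative only.

correspondence = {"pump": "heart", "water": "blood", "pipes": "veins"}

base_facts = [
    ("causes-flow", "pump", "water", "pipes"),
    ("works-harder-when-narrow", "pump", "pipes"),  # known only in the base
]

def project_inferences(base_facts, correspondence):
    """Carry each base relation across the one-to-one object mapping."""
    inferences = []
    for relation, *args in base_facts:
        if all(a in correspondence for a in args):
            inferences.append((relation, *(correspondence[a] for a in args)))
    return inferences

# project_inferences(base_facts, correspondence) yields, among others,
# ("works-harder-when-narrow", "heart", "veins"): the candidate inference
# that narrowed veins force the heart to work harder.
```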

Gentner and Phillip Wolff in 2000 set forth four ways in which comparing two analogs fosters learning. First, it can highlight common relations. For example, in processing the circulation/plumbing analogy, the focus is on the dynamics of circulation, and other normally salient knowledge, such as the red color of arteries and the blue color of veins, is suppressed. Second, it can lead to new inferences, as noted above. Third, comparing two analogs can reveal meaningful differences. For example, the circulation/plumbing analogy can bring out the difference that veins are flexible whereas pipes are rigid. In teaching by analogy, it is important to bring out such differences; otherwise students may miss them, leading them to make inappropriate inferences. Fourth, comparing two analogs can lead learners to form abstractions, as amplified below.

What Makes a Good Analogy

As Gentner suggested in 1982, to facilitate making clear alignments and reasonable inferences, an analogy must be structurally consistent; that is, it should have one-to-one correspondences, and the relations in the two domains should have a parallel structure. For example, in the circulation/plumbing system analogy, the pump cannot correspond to both the veins and the heart. Another factor influencing the quality of an analogy is systematicity: analogies that convey an interconnected system of relations, such as the circulation/plumbing analogy, are more useful than those that convey only a single isolated fact, such as "The brain looks like a walnut." Further, as Keith Holyoak and Paul Thagard argued in 1995, an analogy should be goal-relevant in the current context.

In addition to the above general qualities, several further factors influence the success of an explanatory analogy, including base specificity, transparency, and scope. Base specificity is the degree to which the structure of the base domain is clearly understood. Transparency is the ease with which the correspondences can be seen. Transparency is increased by similarities between corresponding objects and is decreased by similarities between noncorresponding objects. For example, in 1986 Gentner and Cecile Toupin found that four- to six-year-old children succeeded in transferring a story to new characters when similar characters occupied similar roles (e.g., squirrel → chipmunk; trout → salmon), but they failed when the match was cross-mapped, with similar characters in different roles (e.g., squirrel → salmon; trout → chipmunk). The same pattern has been found with adults. Transparency also applies to relations. In 2001 Miriam Bassok found that students more easily aligned instances of "increase" when both were continuous (e.g., speed of a car and growth of a population) than when one was discrete (e.g., attendance at an annual event). Finally, scope refers to how widely applicable the analogy is.

Methods Used to Investigate Analogical Learning

Much research on analogy in learning has been devoted to the effects of analogies on domain understanding. For example, in 1987 Brian Ross found that giving learners analogical examples to illustrate a probability principle facilitated their later use of the probability formula to solve other problems. In classroom studies from 1998, Daniel Schwartz and John Bransford found that generating distinctions between contrasting cases improved students' subsequent learning. As reported in 1993, John Clement used a technique of bridging analogies to induce revision of faulty mental models. Learners were given a series of analogs, beginning with a very close match and moving gradually to a situation that exemplified the desired new model.

Another line of inquiry focuses on the spontaneous analogies people use as mental models of the world. This research generally begins with a questionnaire or interview to elicit the person's own analogical models. For example, Willet Kempton in 1986 used interviews to uncover two common analogical models of home heating systems. In the (incorrect) valve model, the thermostat is like a faucet: It controls the rate at which the furnace produces heat. In the (correct) threshold model, the thermostat is like an oven: It simply controls the goal temperature, and the furnace runs at a constant rate. Kempton then examined household thermostat records and found patterns of thermostat settings corresponding to the two analogies. Some families constantly adjusted their thermostats from high to low temperatures, an expensive strategy that follows from the valve model. Others simply set their thermostat twice a day (low at night, higher by day), consistent with the threshold model.
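The two mental models differ in a way that can be stated as two control rules. The sketch below is one hedged way to express the contrast; the rates and units are arbitrary assumptions chosen only to make the difference explicit.

```python
# A minimal sketch contrasting the two analogical models as control rules;
# all rates and units are arbitrary assumptions.

def furnace_output_valve(setting, room_temp, max_rate=10.0):
    # Valve model: the dial scales how fast heat is produced,
    # regardless of the current room temperature.
    return max_rate * setting / 100.0

def furnace_output_threshold(setting, room_temp, rate=10.0):
    # Threshold model: the furnace runs at one fixed rate whenever
    # the room is below the set point, and is off otherwise.
    return rate if room_temp < setting else 0.0
```

Under the valve rule, turning the dial high makes heat arrive faster, so constant adjustment seems sensible; under the threshold rule it does not, which matches the divergent thermostat records Kempton observed.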

Analogy in Children

Research on the development of analogy shows a relational shift in focus from object commonalities to relational commonalities. This shift appears to result from gains in domain knowledge, as Gentner and Mary Jo Rattermann suggested in 1991, and perhaps from gains in processing capacity, as suggested by Graeme Halford in 1993. In 1989 Ann Brown showed that young children's success in analogical transfer tasks increased when the domains were familiar to them and they were given training in the relevant relations. For example, three-year-olds can transfer solutions across simple tasks involving familiar relations such as stacking and pulling, and six-year-olds can transfer more complex solutions. In 1987 Kayoko Inagaki and Giyoo Hatano studied spontaneous analogies in five- to six-year-old children by asking questions such as whether they could keep a baby rabbit small and cute forever. The children often made analogies to humans, such as "We cannot keep the baby the same size forever because he takes food. If he eats, he will become bigger and bigger and be an adult." Children were more often correct when they used these personification analogies than when they did not. This suggests that children were using humans, a familiar, well-understood domain, as a base domain for reasoning about similar creatures.

Retrieval of Analogs: The Inert Knowledge Problem

Learning from cases is often easier than learning principles directly. Despite its usefulness, however, training with examples and cases often fails to lead to transfer, because people fail to retrieve potentially useful analogs. For example, Mary Gick and Holyoak found in 1980 that participants given an insight problem typically failed to solve it, even when they had just read a story with an analogous solution. Yet, when they were told to use the prior example, they were able to do so. This shows that the prior knowledge was not lost from memory; this failure to access prior structurally similar cases is, rather, an instance of "inert knowledge": knowledge that is not accessed when needed.

One explanation for this failure of transfer is that people often encode cases in a situation-specific manner, so that later remindings occur only for highly similar cases. For example, in 1984 Ross gave people mathematical problems to study and later gave them new problems. Most of their later remindings were to examples that were similar only on the surface, irrespective of whether the principles matched. Experts in a domain are more likely than novices to retrieve structurally similar examples, but even experts retrieve some examples that are similar only on the surface. However, as demonstrated by Laura Novick in 1988, experts reject spurious remindings more quickly than do novices. Thus, especially for novices, there is an unfortunate dissociation: While accuracy of transfer depends critically on the degree of structural match, memory retrieval depends largely on surface similarity between objects and contexts.

Analogical Encoding in Learning

In the late twentieth century, researchers began exploring a new technique, called analogical encoding, that can help overcome the inert knowledge problem. Instead of studying cases separately, learners are asked to compare analogous cases and describe their similarities. This fosters the formation of a common schema, which in turn facilitates transfer to a further problem. For example, in 1999 Jeffrey Loewenstein, Leigh Thompson, and Gentner found that graduate management students who compared two analogical cases were nearly three times more likely to transfer the common strategy into a subsequent negotiation task than were students who analyzed the same two cases separately.

Implications for Education

Analogies can be of immense educational value. They permit rapid learning of a new domain by transferring knowledge from a known domain, and they promote noticing and abstracting principles across domains. Analogies are most successful, however, if their pitfalls are understood. In analogical mapping, it is important to ensure that the base domain is understood well, that the correspondences are clear, and that differences and potentially incorrect inferences are clearly flagged. When teaching for transfer, it is important to recognize that learners tend to rely on surface features. One solution is to minimize surface features by using simple objects. Another is to induce analogical encoding by asking learners to explicitly compare cases. The better educators understand analogical processes, the better they can harness them for education.

See also: Learning, subentry on Transfer of Learning; Learning Theory, subentry on Historical Overview.

BIBLIOGRAPHY

Bassok, Miriam. 2001. "Semantic Alignments in Mathematical Word Problems." In The Analogical Mind: Perspectives from Cognitive Science, ed. Dedre Gentner, Keith J. Holyoak, and Boicho N. Kokinov. Cambridge, MA: MIT Press.

Bransford, John D.; Franks, Jeffrey J.; Vye, Nancy J.; and Sherwood, Robert D. 1989. "New Approaches to Instruction: Because Wisdom Can't Be Told." In Similarity and Analogical Reasoning, ed. Stella Vosniadou and Andrew Ortony. New York: Cambridge University Press.

Brown, Ann L. 1989. "Analogical Learning and Transfer: What Develops?" In Similarity and Analogical Reasoning, ed. Stella Vosniadou and Andrew Ortony. New York: Cambridge University Press.

Brown, Ann L., and Kane, Mary Jo. 1988. "Preschool Children Can Learn to Transfer: Learning to Learn and Learning from Example." Cognitive Psychology 20:493–523.

Chen, Zhe, and Daehler, Marvin W. 1989. "Positive and Negative Transfer in Analogical Problem Solving by Six-Year-Old Children." Cognitive Development 4:327–344.

Clement, John. 1993. "Using Bridging Analogies and Anchoring Intuitions to Deal with Students' Preconceptions in Physics." Journal of Research in Science Teaching 30:1241–1257.

Gentner, Dedre. 1982. "Are Scientific Analogies Metaphors?" In Metaphor: Problems and Perspectives, ed. David S. Miall. Brighton, Eng.: Harvester Press.

Gentner, Dedre. 1983. "Structure-Mapping: A Theoretical Framework for Analogy." Cognitive Science 7:155–170.

Gentner, Dedre, and Rattermann, Mary Jo. 1991. "Language and the Career of Similarity." In Perspectives on Thought and Language: Inter-relations in Development, ed. Susan A. Gelman and James P. Byrnes. London: Cambridge University Press.

Gentner, Dedre; Rattermann, Mary Jo; and Forbus, Kenneth D. 1993. "The Roles of Similarity in Transfer: Separating Retrievability from Inferential Soundness." Cognitive Psychology 25:524–575.

Gentner, Dedre, and Toupin, Cecile. 1986. "Systematicity and Surface Similarity in the Development of Analogy." Cognitive Science 10:277–300.

Gentner, Dedre, and Wolff, Phillip. 2000. "Metaphor and Knowledge Change." In Cognitive Dynamics: Conceptual Change in Humans and Machines, ed. Eric Dietrich and Arthur B. Markman. Mahwah, NJ: Erlbaum.

Gick, Mary L., and Holyoak, Keith J. 1980. "Analogical Problem Solving." Cognitive Psychology 12:306–355.

Gick, Mary L., and Holyoak, Keith J. 1983. "Schema Induction and Analogical Transfer." Cognitive Psychology 15:1–38.

Goswami, Usha. 1992. Analogical Reasoning in Children. Hillsdale, NJ: Erlbaum.

Halford, Graeme S. 1993. Children's Understanding: The Development of Mental Models. Hillsdale, NJ: Erlbaum.

Holyoak, Keith J., and Koh, K. 1987. "Surface and Structural Similarity in Analogical Transfer." Memory and Cognition 15:332–340.

Holyoak, Keith J., and Thagard, Paul R. 1995. Mental Leaps: Analogy in Creative Thought. Cambridge, MA: MIT Press.

Inagaki, Kayoko, and Hatano, Giyoo. 1987. "Young Children's Spontaneous Personification as Analogy." Child Development 58:1013–1020.

Kempton, Willet. 1986. "Two Theories of Home Heat Control." Cognitive Science 10:75–90.

Kolodner, Janet L. 1997. "Educational Implications of Analogy: A View from Case-Based Reasoning." American Psychologist 52 (1):57–66.

Loewenstein, Jeffrey; Thompson, Leigh; and Gentner, Dedre. 1999. "Analogical Encoding Facilitates Knowledge Transfer in Negotiation." Psychonomic Bulletin and Review 6:586–597.

Markman, Arthur B., and Gentner, Dedre. 2000. "Structure Mapping in the Comparison Process." American Journal of Psychology 113:501–538.

Novick, Laura R. 1988. "Analogical Transfer, Problem Similarity, and Expertise." Journal of Experimental Psychology: Learning, Memory, and Cognition 14:510–520.

Perfetto, Greg A.; Bransford, John D.; and Franks, Jeffrey J. 1983. "Constraints on Access in a Problem Solving Context." Memory and Cognition 11:24–31.

Reed, Stephen K. 1987. "A Structure-Mapping Model for Word Problems." Journal of Experimental Psychology: Learning, Memory, and Cognition 13:124–139.

Ross, Brian H. 1984. "Remindings and Their Effects in Learning a Cognitive Skill." Cognitive Psychology 16:371–416.

Ross, Brian H. 1987. "This Is Like That: The Use of Earlier Problems and the Separation of Similarity Effects." Journal of Experimental Psychology: Learning, Memory, and Cognition 13:629–639.

Ross, Brian H. 1989. "Distinguishing Types of Superficial Similarities: Different Effects on the Access and Use of Earlier Problems." Journal of Experimental Psychology: Learning, Memory, and Cognition 15:456–468.

Schank, Roger C.; Kass, Alex; and Riesbeck, Christopher K., eds. 1994. Inside Case-Based Explanation. Hillsdale, NJ: Erlbaum.

Schwartz, Daniel L., and Bransford, John D. 1998. "A Time for Telling." Cognition and Instruction 16:475–522.

Spiro, Rand J.; Feltovich, Paul J.; Coulson, Richard L.; and Anderson, Daniel K. 1989. "Multiple Analogies for Complex Concepts: Antidotes for Analogy-Induced Misconception in Advanced Knowledge Acquisition." In Similarity and Analogical Reasoning, ed. Stella Vosniadou and Andrew Ortony. New York: Cambridge University Press.

Dedre Gentner

Jeffrey Loewenstein

CAUSAL REASONING

A doorbell rings. A dog runs through a room. A seated man rises to his feet. A vase falls from a table and breaks. Why did the vase break? To answer this question, one must perceive and infer the causal relationships between the breaking of the vase and other events. Sometimes, the event most directly causally related to an effect is not immediately apparent (e.g., the dog hit the table), and conscious and effortful thought may be required to identify it. People routinely make such efforts because detecting causal connections among events helps them to make sense of the constantly changing flow of events. Causal reasoning enables people to find meaningful order in events that might otherwise appear random and chaotic, and causal understanding helps people to plan and predict the future. Thus, in 1980 the philosopher John Mackie described causal reasoning as "the cement of the universe." How, then, does one decide which events are causally related? When does one engage in causal reasoning? How does the ability to think about cause–effect relations originate and develop during infancy and childhood? How can causal reasoning skills be promoted in educational settings, and does this promote learning? These questions represent important issues in research on causal reasoning.

Causal Perceptions and Causal Reasoning

An important distinction exists between causal perceptions and causal reasoning. Causal perceptions refer to one's ability to sense a causal relationship without conscious and effortful thought. According to the philosopher David Hume (1711-1776), perceptual information regarding contiguity, precedence, and covariation underlies the understanding of causality. First, events that are temporally and spatially contiguous are perceived as causally related. Second, the cause precedes the effect. Third, events that regularly co-occur are seen as causally related. In contrast, causal reasoning requires a person to reason through a chain of events to infer the cause of an event. People most often engage in causal reasoning when they experience an event that is out of the ordinary. Thus, in some situations a person may not know the cause of an unusual event and must search for it, and in other situations must evaluate whether one known event was the cause of another. The first situation may present difficulty because the causal event may not be immediately apparent. Philosophers have argued that causal reasoning in these circumstances is based on an assessment of the criteria of necessity and sufficiency. A necessary cause is one that must be present for the effect to occur. Event A is necessary for event B if event B will not occur without event A. For example, the vase would not have broken if the dog had not hit the table. A cause is sufficient if its occurrence can by itself bring about the effect (i.e., whenever event A occurs, event B always follows). Often, more than one causal factor is present. In the case of multiple necessary causes, a set of causal factors taken together jointly produces an effect. In the case of multiple sufficient causes, multiple factors are present, any one of which by itself is sufficient to produce an effect.
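These criteria lend themselves to a mechanical check. The sketch below, written in Python, is an illustration constructed for this entry rather than a procedure drawn from the research literature; the event names and observations are invented. It tests whether a candidate cause is necessary or sufficient for an effect across a set of observed situations.

    def is_necessary(cause, effect, observations):
        # Necessary: the effect never occurs without the cause.
        return all(cause in obs for obs in observations if effect in obs)

    def is_sufficient(cause, effect, observations):
        # Sufficient: the effect occurs whenever the cause occurs.
        return all(effect in obs for obs in observations if cause in obs)

    # Hypothetical observations: each set lists the events present on one occasion.
    observations = [
        {"dog_hits_table", "vase_falls", "vase_breaks"},
        {"doorbell_rings"},
        {"doorbell_rings", "dog_hits_table", "vase_falls", "vase_breaks"},
    ]

    print(is_necessary("dog_hits_table", "vase_breaks", observations))   # True
    print(is_sufficient("dog_hits_table", "vase_breaks", observations))  # True
    print(is_necessary("doorbell_rings", "vase_breaks", observations))   # False

The necessity test is also what counterfactual reasoning approximates informally: imagine the cause absent and ask whether the effect would still have occurred.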

The Development of Causal Perception and Causal Reasoning Skills

Causal perception appears to begin during infancy. Between three and six months of age, infants respond differently to temporally and spatially contiguous events (e.g., one billiard ball contacting a second that begins to roll immediately) compared to events that lack contiguity (e.g., the second ball begins to roll without collision or does not start to move until half a second after collision). Thus, the psychologist Alan Leslie proposed in 1986 that infants begin life with an innate perceptual mechanism specialized to automatically detect cause-effect relations based on contiguity. However, the psychologists Leslie Cohen and Lisa Oakes reported in 1993 that familiarity with the role of a particular object in a causal sequence influences ten-month-old infants' perception of causality. Therefore, they suggest that infants do not automatically perceive a causal connection when viewing contiguous events. The question of whether infants begin with an innate ability to automatically detect causality, or instead gradually develop causal perception through general learning processes, remains a central controversy concerning the origins of causal thought.

Although infants perceive causal relationships, complex causal reasoning emerges during early childhood and grows in sophistication thereafter. For example, information about precedence influences causal reasoning during childhood. When asked to determine what caused an event to occur, three-year-olds often choose an event that preceded it, rather than one that came later, but understanding of precedence becomes more consistent and general beginning at five years of age. Unlike contiguity and precedence, information about covariation is not available from a single causal sequence, but requires repeated experience with the co-occurrence of a cause and effect. Children do not begin to use covariation information consistently in their causal thinking before eight years of age. Because the various types of information relevant to causality do not always suggest the same causal relation, children and adults must decide which type of information is most important in a particular situation.

In addition to the perceptual cues identified by Hume, knowledge of specific causal mechanisms plays a central role in causal reasoning. By three years of age, children expect there to be some mechanism of transmission between cause and effect, and knowledge of possible mechanisms influences both children's and adults' interpretation of perceptual cues. For instance, when a possible causal mechanism requires time to produce an effect (e.g., a marble rolling down a lengthy tube before contacting another object), or transmits quickly across a distance (e.g., electrical wiring), children as young as five years of age are more likely to select causes that lack temporal and spatial contiguity than would otherwise be the case. Because causal mechanisms differ for physical, social, and biological events, children must acquire distinct conceptual knowledge to understand causality in each of these domains. By three to four years of age, children recognize that whereas physical effects are caused by physical transmission, human action is motivated internally by mental states such as desires, beliefs, and intentions, and they begin to understand some properties of biological processes such as growth and heredity. Furthermore, conceptual understanding of specific causal mechanisms may vary across cultures and may be learned through social discourse as well as through direct experience.

A fundamental understanding of causality is present during early childhood; however, prior to adolescence children have difficulty searching for causal relations through systematic scientific experimentation. Preadolescents may generate a single causal hypothesis and seek confirmatory evidence, misinterpret contradictory evidence, or design experimental tests that do not provide informative evidence. In contrast, adolescents and adults may generate several alternative hypotheses and test them by systematically controlling variables and seeking both disconfirmatory and confirmatory evidence. Nevertheless, even adults often have difficulty designing valid scientific experiments. More generally, both children and adults often have difficulty identifying multiple necessary or sufficient causes.

Teaching Causal Reasoning Skills

The psychologist Diane Halpern argued in 1998 that critical thinking skills should be taught in primary, secondary, and higher educational settings. Causal reasoning is an important part of critical thinking because it enables one to explain and predict events, and thus potentially to control one's environment and achieve desired outcomes.

Three approaches to teaching causal reasoning skills may be efficacious. First, causal reasoning skills can be promoted by teaching students logical deduction. For example, teaching students to use counterfactual reasoning may help them assess whether there is a necessary relationship between a potential cause and an effect. Counterfactual reasoning requires students to imagine that a potential cause did not occur and to infer whether the effect would have occurred in its absence. If the effect would still have occurred, then there is no necessary causal relationship between the two events.

Second, causal reasoning skills can be promoted by teaching students to generate informal explanations for anomalous events or difficult material. For instance, learning from scientific texts can be particularly challenging to students, and often students have the misconception that they do not have adequate knowledge to understand texts. The psychologist Michelene Chi demonstrated in 1989 that students who use their general world knowledge to engage in causal, explanatory reasoning while reading difficult physics texts understand what they read considerably better than do students who do not draw upon general knowledge in this way. Furthermore, in 1999 the psychologist Danielle McNamara developed a reading training intervention that promotes explanatory reasoning during reading. In this program, students were taught a number of strategies to help them to use both information in the text and general knowledge to generate explanations for difficult material. Training improved both comprehension of scientific texts and overall class performance, and was particularly beneficial to at-risk students.

Third, the psychologist Leona Schauble demonstrated in 1990 that causal reasoning skills can be promoted by teaching students the principles of scientific experimentation. A primary goal of experimentation is to determine causal relationships among a set of events. Students may be taught to identify a potential cause of an effect, manipulate the presence of the cause in a controlled setting, and assess whether or not the effect occurs. Thus, students learn to use the scientific method to determine whether there are necessary and sufficient relationships between a potential cause and an effect. Because the principles of science are often difficult for students to grasp, teaching these principles would provide students with formal procedures for evaluating causal relationships in the world around them.

See also: Learning, subentry on Reasoning; Learning Theory, subentry on Historical Overview; Literacy, subentry on Narrative Comprehension and Production; Reading, subentries on Comprehension, Content Areas.

bibliography

Bullock, Merry; Gelman, Rochel; and Baillargeon, Renee. 1982. "The Development of Causal Reasoning." In The Developmental Psychology of Time, ed. William J. Friedman. New York: Academic Press.

Chi, Michelene T. H., et al. 1989. "Self-Explanation: How Students Study and Use Examples in Learning to Solve Problems." Cognitive Science 13:145-182.

Cohen, Leslie B., and Oakes, Lisa M. 1993. "How Infants Perceive a Simple Causal Event." Developmental Psychology 29:421-433.

Epstein, Richard L. 2002. Critical Thinking, 2nd edition. Belmont, CA: Wadsworth.

Halpern, Diane F. 1998. "Teaching Critical Thinking for Transfer across Domains." American Psychologist 53:449-455.

Hume, David. 1960. A Treatise of Human Nature (1739). Oxford: Clarendon Press.

Kuhn, Deanna; Amsel, Eric; and O'Loughlin, Michael. 1988. The Development of Scientific Thinking Skills. San Diego, CA: Academic Press.

Leslie, Alan M. 1986. "Getting Development off the Ground: Modularity and the Infant's Perception of Causality." In Theory Building in Developmental Psychology, ed. Paul Van Geert. Amsterdam: North-Holland.

Mackie, John L. 1980. The Cement of the Universe. Oxford: Clarendon Press.

McNamara, Danielle S., and Scott, Jeremy L. 1999. Training Reading Strategies. Hillsdale, NJ: Erlbaum.

Schauble, Leona. 1990. "Belief Revision in Children: The Role of Prior Knowledge and Strategies for Generating Evidence." Journal of Experimental Child Psychology 49:31-57.

Sedlak, Andrea J., and Kurtz, Susan T. 1981. "A Review of Children's Use of Causal Inference Principles." Child Development 52:759-784.

Wellman, Henry M., and Gelman, Susan A. 1998. "Knowledge Acquisition in Foundational Domains." In Handbook of Child Psychology: Cognition, Perception, and Language, 5th edition, ed. Deanna Kuhn and Robert Siegler. New York: Wiley.

White, Peter A. 1988. "Causal Processing: Origins and Development." Psychological Bulletin 104:36-52.

Joseph P. Magliano

Bradford H. Pillow

CONCEPTUAL CHANGE

The term conceptual change refers to the development of fundamentally new concepts, through restructuring elements of existing concepts, in the course of knowledge acquisition. Conceptual change is a particularly profound kind of learning: it goes beyond revising one's specific beliefs and involves restructuring the very concepts used to formulate those beliefs. Explaining how this kind of learning occurs is central to understanding the tremendous power and creativity of human thought.

The emergence of fundamentally new ideas is striking in the history of human thought, particularly in science and mathematics. Examples include the emergence of Darwin's concept of evolution by natural selection, Newton's concepts of gravity and inertia, and the mathematical concepts of zero, negative, and rational numbers. One of the challenges of education is how to transmit these complex products of human intellectual history to the next generation of students.

Although there are many unresolved issues about how concepts are mentally represented, conceptual-change researchers generally assume that explanatory concepts are defined and articulated within theory-like structures, and that conceptual change requires coordinated changes in multiple concepts within these structures. New concepts that have arisen in the history of science are clearly part of larger, explicit theories. Making an analogy between the organization of concepts in scientists and children, researchers have proposed that children may have "commonsense" theories in which their everyday explanatory concepts are embedded and play a role. These theories, although not self-consciously held, are assumed to be like scientific theories in that they consist of a set of interrelated concepts that resist change and that support inference making, problem solving, belief formation, and explanation in a given domain. The power and usefulness of this analogy are being explored in the early twenty-first century.

A challenge for conceptual-change researchers is to provide a typology of important forms of conceptual change. For example, conceptual differentiation is a form of conceptual change in which a newer (descendant) theory uses two distinct concepts where the initial (parent) theory used only one, and the undifferentiated parent concept unites elements that will subsequently be kept distinct. Examples of conceptual differentiation include: Galileo's differentiation of average and instantaneous velocity in his theory of motion, Black's differentiation of heat and temperature in his theory of thermal phenomena, and children's differentiation of weight and density in their matter theory. Conceptual differentiation is not the same as adding new subcategories to an existing category, which involves the elaboration of a conceptual structure rather than its transformation. In that case, the new subcategories fit into an existing structure, and the initial general category is still maintained. In differentiation, the parent concept is seen as incoherent from the perspective of the subsequent theory and plays no role in it. For example, an undifferentiated weight/density concept that unites the elements heavy and heavy-for-size combines two fundamentally different kinds of quantities: an extensive (total amount) quantity and an intensive (relationally defined) quantity.

Another form of conceptual change is coalescence, in which the descendant theory introduces a new concept that unites concepts previously seen to be of fundamentally different types in the parent theory. For example, Aristotle saw circular planetary and free-fall motions as natural motions that were fundamentally different from violent projectile motions. Newton coalesced circular, planetary, free-fall, and projectile motions under a new category, accelerated motion. Similarly, children initially see plants and animals as fundamentally different: animals are behaving beings that engage in self-generated movement, while plants are not. Later they come to see them as two forms of "living things" that share important biological properties. Conceptual coalescence is not the same as simply adding a more general category by abstracting properties common to more specific categories. In conceptual coalescence the initial concepts are thought to be fundamentally different, and the properties that will be central to defining the new category are not represented as essential properties of the initial concepts.

Different forms of conceptual change mutually support each other. For example, conceptual coalescences (such as uniting free-fall and projectile motion in a new concept of accelerated motion, or plants and animals in a new concept of living things) are accompanied by conceptual differentiations (such as distinguishing uniform from accelerated motion, or distinguishing dead from inanimate). These changes are also supported by additional forms of conceptual change, such as re-analysis of the core properties or underlying structure of the concept, as well as the acquisition of new specific beliefs about the relations among concepts.

Mechanisms of Conceptual Change

One reason for distinguishing conceptual change from belief revision and conceptual elaboration is that different learning mechanisms may be required. Everyday learning involves knowledge enrichment and rests on an assumed set of concepts. For example, people use existing concepts to represent new facts, formulate new beliefs, make inductive or deductive inferences, and solve problems.

What makes conceptual change so challenging to understand is that it cannot occur in this way. The concepts of a new theory are ultimately organized and stated in terms of each other, rather than the concepts of the old theory, and there is no simple one-to-one correspondence between some concepts of the old and new theories. By what learning mechanisms, then, can scientists invent, and students comprehend, a genuinely new set of concepts and come to prefer them to their initial set of concepts?

Most theorists agree that one step in conceptual change for both students and scientists is experiencing some form of cognitive dissonance: an internal state of tension that arises when an existing conceptual system fails to handle important data and problems in a satisfactory manner. Such dissonance can be created by a series of unexpected results that cannot be explained by an existing theory, by the press to solve a problem that is beyond the scope of one's current theory, or by the detection of internal inconsistencies in one's thinking. This dissonance can signal the need to step outside the normal mode of applying one's conceptual framework to a more meta-conceptual mode of questioning, examining, and evaluating one's conceptual framework.

Although experiencing dissonance can signal that there is a conceptual problem to be solved, it does not solve that problem. Another step involves active attempts to invent or construct an understanding of alternative conceptual systems by using a variety of heuristic procedures and symbolic tools. Heuristic procedures, such as analogical reasoning, imagistic reasoning, and thought experiments, may be particularly important because they allow both students and scientists to creatively extend, combine, and modify existing conceptual resources via the construction of new models. Symbolic tools, such as natural language, the algebraic and graphical representations of mathematics, and other invented notational systems, allow the explicit representation of key relations in the new system of concepts.

In analogical reasoning, knowledge of conceptual relations in a better-understood domain serves as a powerful source of new ideas about the less-understood domain. Analogical reasoning is often supported by imagistic reasoning, wherein one creates visual depictions of core ideas using visual analogs with the same underlying relational structure. These depictions allow the visualization of unseen theoretical entities, connect the problem to the well-developed human visual-spatial inferencing system, and, because much mathematical information is implicit in such depictions, facilitate the construction of appropriate mathematical descriptions of a given domain. Thought experiments use initial knowledge of a domain to run simulations of what should happen in various idealized situations, including imagining what happens as the effects of a given variable are entirely eliminated, thus facilitating the identification of basic principles not self-evident from everyday observation.

Case studies of conceptual change in the history of science and science education reveal that new intellectual constructions develop over an extended period of time and include intermediate, bridging constructions. For example, Darwin's starting idea of evolution via directed, adaptive variation initially prevented his making an analogy between this process and artificial selection. He transformed his understanding of this process using multiple analogies (first with wedging and Malthusian population pressure, and later with artificial selection), imagistic reasoning (e.g., visualizing the jostling effects of 100,000 wedges being driven into the same spot of ground to understand the tremendous power of the unseen force in nature and its ability to produce species change in a mechanistic manner), and thought experiments (e.g., imagining how many small effects might build up over multiple generations to yield a larger effect). Each contributed different elements to his final concept of natural selection, with his initial analogies leading to the bridging idea of selection acting in concert with the process of directed adaptive variation, rather than supplanting it.

Constructing a new conceptual system is also accompanied by a process of evaluating its adequacy against known alternatives using some set of criteria. These criteria can include: the new system's ability to explain the core problematic phenomena as well as other known phenomena in the domain, its internal consistency and fit with other relevant knowledge, the extent to which it meets certain explanatory ideals, and its capacity to suggest new fruitful lines of research.

Finally, researchers have examined the personal, motivational, and social processes that support conceptual change. Personal factors include courage, confidence in one's abilities, openness to alternatives, willingness to take risks, and deep commitment to an intellectual problem. Social factors include working in groups that combine different kinds of expertise and that encourage consideration of inconsistencies in data and relevant analogies. Indeed, many science educators believe a key to promoting conceptual change in the classroom is creating a more reflective classroom discourse. Such discourse probes for alternative student views; encourages the clarification, negotiation, and elaboration of meanings; promotes the detection of inconsistencies; and supports the use of evidence and argument in deciding among or integrating alternative views.

Educational Implications

Conceptual change is difficult under any circumstances, as it requires breaking out of the self-perpetuating circle of theory-based reasoning, making coordinated changes in a number of concepts, and actively constructing an understanding of new (more abstract) conceptual systems. Students need signals that conceptual change is needed, as well as good reasons to change their current conceptions, guidance about how to integrate existing conceptual resources in order to construct new conceptions, and the motivation and time needed to make those constructions. Traditional education practice often fails to provide students with the appropriate signals, guidance, motivation, and time.

Conceptual change is a protracted process calling for a number of coordinated changes in instructional practice. First, instruction needs to be grounded in the consideration of important phenomena or problems that are central to the experts' framework and that challenge students' initial commonsense framework. These phenomena not only motivate conceptual change, but also constrain the search for, and evaluation of, viable alternatives. Second, instruction needs to guide students in the construction of new systems of concepts for understanding these phenomena. Teachers must know what heuristic techniques, representational tools, and conceptual resources to draw upon to make new concepts intelligible to students, and also how to build these constructions in a sequenced manner.

Third, instruction needs to be supported by a classroom discourse that encourages students to identify, represent, contrast, and debate the adequacy of competing explanatory frameworks in terms of emerging classroom epistemological standards. Such discourse supports many aspects of the conceptual-change process, including making students aware of their initial conceptions, helping students construct an understanding of alternative frameworks, motivating students to examine their conceptions more critically (in part through awareness of alternatives), and promoting their ability to evaluate, and at times integrate, competing frameworks.

Finally, instruction needs to provide students with extended opportunities for applying new systems of concepts to a wide variety of problems. Repeated applications develop students' skill at applying a new framework, refine their understanding of the framework, and help students appreciate its greater power and scope.

See also: Categorization and Concept Learning; Learning, subentry on Knowledge Acquisition, Representation, and Organization.

bibliography

Carey, Susan. 1999. "Sources of Conceptual Change." In Conceptual Development: Piaget's Legacy, ed. Ellin K. Scholnick, Katherine Nelson, Susan A. Gelman, and Patricia H. Miller. Mahwah, NJ: Erlbaum.

Chi, Michelene T. H. 1992. "Conceptual Change within and across Ontological Categories: Examples from Learning and Discovery in Science." In Cognitive Models of Science, ed. Ronald N. Giere. Minnesota Studies in the Philosophy of Science, Vol. 15. Minneapolis: University of Minnesota Press.

Chinn, Clark A., and Brewer, William F. 1993. "The Role of Anomalous Data in Knowledge Acquisition: A Theoretical Framework and Implications for Science Instruction." Review of Educational Research 63 (1):1-49.

Clement, John. 1993. "Using Bridging Analogies and Anchoring Intuitions to Deal with Students' Preconceptions in Physics." Journal of Research in Science Teaching 30 (10):1241-1257.

Dunbar, Kevin. 1995. "How Scientists Really Reason: Scientific Reasoning in Real-World Laboratories." In The Nature of Insight, ed. Robert J. Sternberg and Janet E. Davidson. Cambridge, MA: MIT Press.

Gentner, Dedre; Brem, Sarah; Ferguson, Ronald; Markman, Arthur; Levidow, Bjorn; Wolff, Phillip; and Forbus, Kenneth. 1997. "Analogical Reasoning and Conceptual Change: A Case Study of Johannes Kepler." Journal of the Learning Sciences 6 (1):3-40.

Laurence, Stephen, and Margolis, Eric. 1999. "Concepts and Cognitive Science." In Concepts: Core Readings, ed. Eric Margolis and Stephen Laurence. Cambridge, MA: MIT Press.

Lehrer, Richard; Schauble, Leona; Carpenter, Susan; and Penner, David. 2000. "The Interrelated Development of Inscriptions and Conceptual Understanding." In Symbolizing and Communicating in Mathematics Classrooms: Perspectives on Discourse, Tools, and Instructional Design, ed. Paul Cobb, Erna Yackel, and Kay McClain. Mahwah, NJ: Erlbaum.

Millman, Arthur B., and Smith, Carol L. 1997. "Darwin's Use of Analogical Reasoning in Theory Construction." Metaphor and Symbol 12 (3):159-187.

Nersessian, Nancy J. 1992. "How Do Scientists Think? Capturing the Dynamics of Conceptual Change in Science." In Cognitive Models of Science, ed. Ronald N. Giere. Minnesota Studies in the Philosophy of Science, Vol. 15. Minneapolis: University of Minnesota Press.

Pintrich, Paul R.; Marx, Ronald W.; and Boyle, Robert A. 1993. "Beyond Cold Conceptual Change: The Role of Motivational Beliefs and Classroom Contextual Factors in the Process of Conceptual Change." Review of Educational Research 63 (2):167-199.

Posner, Gerald; Strike, Kenneth; Hewson, Peter; and Gertzog, William A. 1982. "Accommodation of a Scientific Conception: Toward a Theory of Conceptual Change." Science Education 66:211-227.

Smith, Carol; Maclin, Deborah; Grosslight, Lorraine; and Davis, Helen. 1997. "Teaching for Understanding: A Study of Students' Preinstruction Theories of Matter and a Comparison of the Effectiveness of Two Approaches to Teaching about Matter and Density." Cognition and Instruction 15 (3):317-393.

Van Zee, Emily, and Minstrell, Jim. 1997. "Using Questioning to Guide Student Thinking." Journal of the Learning Sciences 6:227-269.

Vosniadou, Stella, and Brewer, William F. 1987. "Theories of Knowledge Restructuring in Development." Review of Educational Research 57:51-67.

White, Barbara. 1993. "Thinker Tools: Causal Models, Conceptual Change, and Science Instruction." Cognition and Instruction 10:1-100.

Wiser, Marianne, and Amin, Tamir. 2001. "'Is Heat Hot?' Inducing Conceptual Change by Integrating Everyday and Scientific Perspectives on Thermal Phenomena." Learning and Instruction 11 (4-5):331-355.

Carol L. Smith

KNOWLEDGE ACQUISITION, REPRESENTATION, AND ORGANIZATION

Knowledge acquisition is the process of absorbing and storing new information in memory, the success of which is often gauged by how well the information can later be remembered (retrieved from memory). The process of storing and retrieving information depends heavily on the representation and organization of the information. Moreover, the utility of knowledge can also be influenced by how the information is structured. For example, a bus schedule can be represented in the form of a map or a timetable. On the one hand, a timetable provides quick and easy access to the arrival time for each bus, but does little for finding where a particular stop is situated. On the other hand, a map provides a detailed picture of each bus stop's location, but cannot efficiently communicate bus schedules. Both forms of representation are useful, but it is important to select the representation most appropriate for the task at hand. Similarly, knowledge acquisition can be improved by considering the purpose and function of the desired information.

Knowledge Representation and Organization

There are numerous theories of how knowledge is represented and organized in the mind, including rule-based production models, distributed networks, and propositional models. However, these theories are all fundamentally based on the concept of semantic networks. A semantic network is a method of representing knowledge as a system of connections between concepts in memory.

Semantic Networks

According to semantic network models, knowledge is organized based on meaning, such that semantically related concepts are interconnected. Knowledge networks are typically represented as diagrams of nodes (i.e., concepts) and links (i.e., relations). The nodes and links are given numerical weights to represent their strengths in memory. In Figure 1, the node representing DOCTOR is strongly related to SCALPEL, whereas NURSE is weakly related to SCALPEL. These link strengths are represented here in terms of line width. Similarly, some nodes in Figure 1 are printed in bold type to represent their strength in memory. Concepts such as DOCTOR and BREAD are more memorable because they are more frequently encountered than concepts such as SCALPEL and CRUST.

FIGURE 1

Mental excitation, or activation, spreads automatically from one concept to another related concept. For example, thinking of BREAD spreads activation to related concepts, such as BUTTER and CRUST. These concepts are primed, and thus more easily recognized or retrieved from memory. For example, in David Meyer and Roger Schvaneveldt's 1976 study (a typical semantic priming study), a series of words (e.g., BUTTER) and nonwords (e.g., BOTTOR) are presented, and participants determine whether each item is a word. A word is more quickly recognized if it follows a semantically related word. For example, BUTTER is more quickly recognized as a word if BREAD precedes it, rather than NURSE. This result supports the assumption that semantically related concepts are more strongly connected than unrelated concepts.

Network models represent more than simple associations. They must represent the ideas and complex relationships that comprise knowledge and comprehension. For example, the idea "The doctor uses a scalpel" can be represented as the proposition USE (DOCTOR, SCALPEL), which consists of the nodes DOCTOR and SCALPEL and the link USE (see Figure 2). Educators have successfully used similar diagrams, called concept maps, to communicate important relations and attributes among the key concepts of a lesson.
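To make the network idea concrete, the following Python sketch is a toy construction for this entry, not a published model; the concepts and link strengths loosely echo Figure 1. It stores weighted links and implements a single step of spreading activation, so that activating BREAD primes BUTTER more strongly than CRUST.

    # Toy semantic network: (concept, concept) -> link strength in memory.
    links = {
        ("DOCTOR", "NURSE"): 0.8,
        ("DOCTOR", "SCALPEL"): 0.9,
        ("NURSE", "SCALPEL"): 0.2,
        ("BREAD", "BUTTER"): 0.9,
        ("BREAD", "CRUST"): 0.7,
    }

    def spread_activation(source, amount=1.0):
        # Pass activation from the source to every directly linked concept,
        # scaled by link strength; primed concepts are easier to retrieve.
        primed = {}
        for (a, b), weight in links.items():
            if a == source:
                primed[b] = amount * weight
            elif b == source:
                primed[a] = amount * weight
        return primed

    print(spread_activation("BREAD"))   # {'BUTTER': 0.9, 'CRUST': 0.7}
    print(spread_activation("NURSE"))   # {'DOCTOR': 0.8, 'SCALPEL': 0.2}

Propositions such as USE (DOCTOR, SCALPEL) could be represented in the same spirit by labeling each link with a relation name.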

Types of Knowledge

There are numerous types of knowledge, but the most important distinction is between declarative and procedural knowledge. Declarative knowledge refers to one's memory for concepts, facts, or episodes, whereas procedural knowledge refers to the ability to perform various tasks. Knowledge of how to drive a car, solve a multiplication problem, or throw a football is procedural knowledge; such skills are called procedures or productions. Procedural knowledge may begin as declarative knowledge, but is proceduralized with practice. For example, when first learning to drive a car, you may be told to "put the key in the ignition to start the car," which is a declarative statement. However, after starting the car numerous times, this act becomes automatic and is completed with little thought. Indeed, procedural knowledge tends to be accessed automatically and to require little attention. It also tends to be more durable (less susceptible to forgetting) than declarative knowledge.

Knowledge Acquisition

Listed below are five guidelines for knowledge acquisition that emerge from how knowledge is represented and organized.

Process the material semantically. Knowledge is organized semantically; therefore, knowledge acquisition is optimized when the learner focuses on the meaning of the new material. Fergus Craik and Endel Tulving were among the first to provide evidence for the importance of semantic processing. In their studies, participants answered questions concerning target words that varied according to the depth of processing involved. For example, semantic questions (e.g., Which word, friend or tree, fits appropriately in the following sentence: "He met a ____ on the street"?) involve a greater depth of processing than phonemic questions (e.g., Which word, crate or tree, rhymes with the word late?), which in turn have a greater depth than questions concerning the structure of a word (e.g., Which word is in capital letters: TREE or tree?). Craik and colleagues found that words processed semantically were better learned than words processed phonemically or structurally. Further studies have confirmed that learning benefits from greater semantic processing of the material.

FIGURE 2

Process and retrieve information frequently. A second learning principle is to test and retrieve the information numerous times. Retrieving, or self-producing, information can be contrasted with simply reading or copying it. Decades of research on a phenomenon called the generation effect have shown that passively studying items by copying or reading them does little for memory in comparison to self-producing, or generating, an item. Moreover, learning improves as a function of the number of times information is retrieved. Within an academic situation, this principle points to the need for frequent practice tests, worksheets, or quizzes. In terms of studying, it is also important to break up, or distribute, retrieval attempts. Distributed retrieval can include studying or testing items in a random order, with breaks, or on different days. In contrast, repeating information numerous times sequentially involves only a single retrieval from long-term memory, which does little to improve memory for the information.
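The contrast can be made concrete with a short Python sketch; this is an invented example for this entry, not taken from the studies cited here. It builds a massed schedule, in which each item repeats back-to-back, and a distributed schedule, in which repetitions are interleaved so that each one demands a fresh retrieval from long-term memory.

    import random

    items = ["encoding", "retrieval", "priming", "activation"]

    def massed_schedule(items, repetitions=3):
        # Each item is repeated consecutively: later repetitions demand
        # no new retrieval from long-term memory.
        return [item for item in items for _ in range(repetitions)]

    def distributed_schedule(items, repetitions=3, seed=0):
        # Repetitions are shuffled so study of each item is spread out.
        # (A fuller scheduler would also enforce a minimum gap between
        # repetitions of the same item, or spread them across days.)
        schedule = items * repetitions
        random.Random(seed).shuffle(schedule)
        return schedule

    print(massed_schedule(items))
    print(distributed_schedule(items))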

Learning and retrieval conditions should be similar. How knowledge is represented is determined by the conditions and context (internal and external) in which it is learned, and this in turn determines how it is retrieved: Information is best retrieved when the conditions of learning and retrieval are the same. This principle has been referred to as encoding specificity. For example, in one experiment, participants were shown sentences with an adjective and a noun printed in capital letters (e.g., "The CHIP DIP tasted delicious") and told that their memory for the nouns would be tested afterward. In the recognition test, participants were shown the noun either with the original adjective (CHIP DIP), with a different adjective (SKINNY DIP), or without an adjective (DIP). Noun recognition was better when the original adjective (CHIP) was presented than when no adjective was presented. Moreover, presenting a different adjective (SKINNY) yielded the lowest recognition. This finding underscores the importance of matching learning and testing conditions.

Encoding specificity is also important in terms of the questions used to test memory or comprehension. Different types of questions tap into different levels of understanding. For example, recalling information involves a different level of understanding, and different mental processes, than recognizing information. Likewise, essay and open-ended questions assess a different level of understanding than multiple-choice questions. Essay and open-ended questions generally tap into a conceptual or situational understanding of the material, which results from an integration of text-based information and the reader's prior knowledge. In contrast, multiple-choice questions involve recognition processes, and typically assess a shallow or text-based understanding. A text-based representation can be impoverished and incomplete because it consists only of concepts and relations within the text. This level of understanding, likely developed by a student preparing for a multiple-choice exam, would be inappropriate preparation for an exam with open-ended or essay questions. Thus, students should benefit by adjusting their study practices according to the expected type of questions.

Alternatively, students may benefit from reviewing the material in many different ways, such as recognizing the information, recalling the information, and interpreting the information. These latter processes improve understanding and maximize the probability that the various ways the material is studied will match the way it is tested. From a teacher's point of view, including different types of questions on worksheets or exams ensures that each student will have an opportunity to convey their understanding of the material.

Connect new information to prior knowledge. Knowledge is interconnected; therefore, new material that is linked to prior knowledge will be better retained. A driving factor in text and discourse comprehension is prior knowledge. Skilled readers actively use their prior knowledge during comprehension. Prior knowledge helps the reader to fill in contextual gaps within the text and develop a better global understanding or situation model of the text. Given that texts rarely (if ever) spell out everything needed for successful comprehension, using prior knowledge to understand text and discourse is critical. Moreover, thinking about what one already knows about a topic provides connections in memory to the new information: the more connections that are formed, the more likely the information will be retrievable from memory.

Create cognitive procedures. Procedural knowledge is better retained and more easily accessed. Therefore, one should develop and use cognitive procedures when learning information. Procedures can include shortcuts for completing a task (e.g., using fast 10s to solve multiplication problems), as well as memory strategies that increase the distinctive meaning of information. Cognitive research has repeatedly demonstrated the benefits of memory strategies, or mnemonics, for enhancing the recall of information. There are numerous types of mnemonics, but one well-known mnemonic is the method of loci. This technique was invented originally for the purpose of memorizing long speeches in the times before luxuries such as paper and pencil were readily available. The first task is to imagine and memorize a series of distinct locations along a familiar route, such as a pathway from one campus building to another. Each topic of a speech (or word in a word list) can then be pictured in a location along the route. When it comes time to recall the speech or word list, the items are simply found by mentally traveling the pathway.
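The pairing at the heart of the method of loci can be sketched in a few lines of Python; the route stops and speech topics below are invented for illustration.

    # Memorized locations along a familiar route, in walking order.
    route = ["front gate", "fountain", "library steps", "oak tree"]

    # Speech topics in speaking order; imagine each one vividly placed
    # at the corresponding location along the route.
    topics = ["greeting", "childhood story", "main argument", "call to action"]

    loci = dict(zip(route, topics))

    # Recall: mentally walk the route and read off the topic at each stop.
    for place in route:
        print(f"At the {place}, recall: {loci[place]}")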

Mnemonics are generally effective because they increase semantic processing of the words (or phrases) and render them more meaningful by linking them to familiar concepts in memory. Mnemonics also provide ready-made, effective cues for retrieving information. Another important aspect of mnemonics is that mental imaging is often involved. Images not only render information more meaningful, but they provide an additional route for finding information in memory. As mentioned earlier, increasing the number of meaningful links to information in memory increases the likelihood it can be retrieved.

Strategies are also an important component of meta-cognition, which is the ability to think about, understand, and manage one's learning. First, one must develop an awareness of one's own thought processes. Simply being aware of thought processes increases the likelihood of more effective knowledge construction. Second, the learner must be aware of whether or not comprehension has been successful. Realizing when comprehension has failed is crucial to learning. The final, and most important stage of meta-cognitive processing is fixing the comprehension problem. The individual must be aware of, and use, strategies to remedy comprehension and learning difficulties. For successful knowledge acquisition to occur, all three of these processes must occur. Without thinking or worrying about learning, the student cannot realize whether the concepts have been successfully grasped. Without realizing that information has not been understood, the student cannot engage in strategies to remedy the situation. If nothing is done about a comprehension failure, awareness is futile.

Conclusion

Knowledge acquisition is integrally tied to how the mind organizes and represents information. Learning can be enhanced by considering the fundamental properties of human knowledge, as well as by the ultimate function of the desired information. The most important property of knowledge is that it is organized semantically; therefore, learning methods should enhance meaningful study of new information. Learners should also create as many links to the information as possible. In addition, learning methods should be matched to the desired outcome. Just as using a bus timetable to find a bus-stop location is ineffective, learning to recognize information will do little good on an essay exam.

See also: Learning, subentry on Conceptual Change; Reading, subentry on Content Areas.

bibliography

Anderson, John R. 1982. "Acquisition of a Cognitive Skill." Psychological Review 89:369-406.

Anderson, John R., and Lebiere, Christian. 1998. The Atomic Components of Thought. Mahwah, NJ: Erlbaum.

Bransford, John, and Johnson, Marcia K. 1972. "Contextual Prerequisites for Understanding: Some Investigations of Comprehension and Recall." Journal of Verbal Learning and Verbal Behavior 11:717-726.

Craik, Fergus I. M., and Tulving, Endel. 1975. "Depth of Processing and the Retention of Words in Episodic Memory." Journal of Experimental Psychology: General 104:268-294.

Crovitz, Herbert F. 1971. "The Capacity of Memory Loci in Artificial Memory." Psychonomic Science 24:187-188.

Glenberg, Arthur M. 1979. "Component-Levels Theory of the Effects of Spacing of Repetitions on Recall and Recognition." Memory and Cognition 7:95-112.

Guastello, Francine; Beasley, Mark; and Sinatra, Richard. 2000. "Concept Mapping Effects on Science Content Comprehension of Low-Achieving Inner-City Seventh Graders." RASE: Remedial and Special Education 21:356-365.

Hacker, Douglas J.; Dunlosky, John; and Graesser, Arthur C., eds. 1998. Metacognition in Educational Theory and Practice. Mahwah, NJ: Lawrence Erlbaum.

Jensen, Mary Beth, and Healy, Alice F. 1998. "Retention of Procedural and Declarative Information from the Colorado Drivers' Manual." In Memory Distortions and their Prevention, ed. Margaret Jean Intons-Peterson and Deborah L. Best. Mahwah, NJ: Lawrence Erlbaum.

Kintsch, Walter. 1998. Comprehension: A Paradigm for Cognition. Cambridge, Eng.: Cambridge University Press.

Light, Leah L., and Carter-Sobell, Linda. 1970. "Effects of Changed Semantic Context on Recognition Memory." Journal of Verbal Learning and Verbal Behavior 9:1-11.

McNamara, Danielle S., and Kintsch, Walter. 1996. "Learning from Text: Effects of Prior Knowledge and Text Coherence." Discourse Processes 22:247-287.

Melton, Arthur W. 1967. "Repetition and Retrieval from Memory." Science 158:532.

Meyer, David E., and Schvaneveldt, Roger W. 1976. "Meaning, Memory Structure, and Mental Processes." Science 192:27-33.

Paivio, Allen. 1990. Mental Representations: A Dual Coding Approach. New York: Oxford University Press.

Rumelhart, David E., and McClelland, James L. 1986. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations. Cambridge, MA: MIT Press.

Slamecka, Norman J., and Graf, Peter. 1978. "The Generation Effect: Delineation of a Phenomenon." Journal of Experimental Psychology: Human Learning and Memory 4:592-604.

Tulving, Endel, and Thomson, Donald M. 1973. "Encoding Specificity and Retrieval Processes in Episodic Memory." Psychological Review 80:352-373.

Yates, Frances A. 1966. The Art of Memory. Chicago: University of Chicago Press.

Danielle S. McNamara

Tenaha O'Reilly

NEUROLOGICAL FOUNDATION

Learning is mediated by multiple memory systems in the brain, each of which involves a distinct anatomical pathway and supports a particular form of memory representation. The major aim of research on memory systems is to identify and distinguish the different contributions of specific brain structures and pathways, usually by contrasting the effects of selective damage to specific brain areas. Another major strategy focuses on localizing brain areas whose neurons become activated during particular aspects of memory processing. Some of these studies use newly developed functional imaging techniques to view activation of brain areas in humans performing memory tests. Another approach seeks to characterize the cellular code for memory within the activity patterns of single nerve cells in animals, by asking how information is represented by the activity patterns within the circuits of different structures in the relevant brain systems.

Each of the brain's memory systems begins in the vast expanse of the cerebral cortex, specifically in the so-called cortical association areas (see Figure 1). These parts of the cerebral cortex provide major inputs to each of three main pathways of processing in subcortical areas related to distinct memory functions. One system mediates declarative memory, the memory for facts and events that can be brought to conscious recollection and can be expressed in a variety of ways outside the context of learning. This system involves connections from the cortical association areas to the hippocampus via the parahippocampal region. The main output of hippocampal and parahippocampal processing is back to the same cortical areas that provided inputs to the hippocampus; these areas are viewed as the long-term repository of declarative memories.

The other two main pathways involve cortical inputs to specific subcortical targets that send direct outputs that control behavior. One of these systems mediates emotional memory, the attachment of affiliations and aversions towards otherwise arbitrary stimuli and modulation of the strength of memories that involve emotional arousal. This system involves cortical (as well as subcortical) inputs to the amygdala as the nodal stage in the association of sensory inputs to emotional outputs effected via the hypothalamic-pituitary axis and autonomic nervous system, as well as emotional influences over widespread brain areas. The second of these systems mediates procedural memory, the capacity to acquire habitual behavioral routines that can be performed without conscious control. This system involves cortical inputs to the striatum as a nodal stage in the association of sensory and motor cortical information with voluntary responses via the brainstem motor system. An additional, parallel pathway that mediates different aspects of sensori-motor adaptations involves sensory and motor systems pathways through the cerebellum.

The Declarative Memory System

Declarative memory is the "everyday" form of memory that most consider when they think of memory. Therefore, the remainder of this discussion will focus on the declarative memory system. Declarative memory is defined as a composite of episodic memory, the ability to recollect personal experiences, and semantic memory, the synthesis of the many episodic memories into the knowledge about the world. In addition, declarative memory supports the capacity for conscious recall and the flexible expression of memories, one's ability to search networks of episodic and semantic memories and to use this capacity to solve many problems.

Each of the major components of the declarative memory system contributes differently to declarative memory, although interactions between these areas are also essential. Initially, perceptual information as well as information about one's behavior is processed in many dedicated neocortical areas. While the entire cerebral cortex is involved in memory processing, the chief brain area that controls this processing is the prefrontal cortex. The processing accomplished by the prefrontal cortex includes the acquisition of complex cognitive rules and concepts and working memory, the capacity to store information briefly while manipulating or rehearsing the information under conscious control. In addition, other areas of the cortex contribute critically to memory processing. Association areas in the prefrontal, temporal, and parietal cortex play a central role in cognition, both in the perception of sensory information and in the maintenance of short-term traces of recently perceived stimuli. Furthermore, the organization of perceptual representations in cerebral cortical areas, and connections among these areas, are permanently modified by learning experiences, constituting the long-term repository of memories.

The parahippocampal region, which receives convergent inputs from the neocortical association areas and sends return projections to all of these areas, appears to mediate the extended persistence of these cortical representations. Through interactions between these areas, processing within the cortex can take advantage of lasting parahippocampal representations, and so come to reflect complex associations between events that are processed separately in different cortical regions or occur sequentially in the same or different areas.

These individual contributions and their interactions are not conceived as sufficient to link representations of events to form episodic memories or to form generalizations across memories to create a semantic memory network. Such an organization requires the capacity to rapidly encode a sequence of events that make up an episodic memory, to retrieve that memory by re-experiencing one facet of the event, and to link the ongoing experience to stored episodic representations, forming the semantic network. The neuronal elements of the hippocampus contain the fundamental coding properties that can support this kind of organization.

However, interactions among the components of the system are undoubtedly critical. It is unlikely that the hippocampus has the storage capacity to contain all of one's episodic memories and the hippocampus is not the final storage site. Therefore, it seems likely that the hippocampal neurons are involved in mediating the reestablishment of detailed cortical representations, rather than storing the details themselves. Repetitive interactions between the cortex and hippocampus, with the parahippocampal region as intermediary, serve to sufficiently coactivate widespread cortical areas so that they eventually develop linkages between detailed memories without hippocampal mediation. In this way, the networking provided by the hippocampus underlies its role in the organization of the permanent memory networks in the cerebral cortex.

FIGURE 1

See also: Brain-Based Education.

bibliography

Eichenbaum, Howard. 2000. "A Cortical-Hippocampal System for Declarative Memory." Nature Reviews Neuroscience 1:41-50.

Eichenbaum, Howard, and Cohen, Neal J. 2001. From Conditioning to Conscious Recollection: Memory Systems of the Brain. New York: Oxford University Press.

Schacter, Daniel L., and Tulving, Endel, eds. 1994. Memory Systems 1994. Cambridge, MA: MIT Press.

Squire, Larry R., and Kandel, Eric R. 1999. Memory: From Mind to Molecules. New York: Scientific American Library.

Squire, Larry R.; Knowlton, Barbara; and Musen, Gail. 1993. "The Structure and Organization of Memory." Annual Review of Psychology 44:453-495.

Howard Eichenbaum

PERCEPTUAL PROCESSES

As Eleanor Gibson wrote in her classic text Principles of Perceptual Learning and Development, perceptual learning results in changes in the pickup of information as a result of practice or experience. Perception and action are a cycle: People act in order to learn about their surroundings, and they use what they learn to guide their actions. From this perspective, the critical defining features of perception include the exploratory actions of the perceiver and the knowledge of the events, animate and inanimate objects, and surrounding environment gained while engaged in looking, listening, touching, walking, and other forms of direct observation. Perception often results in learning information that is directly relevant to the goals at hand, but sometimes it results in learning that is incidental to one's immediate goals.

Perception becomes more skillful with practice and experience, and perceptual learning can be thought of as the education of attention. Perceivers come to notice the features of situations that are relevant to their goals and not to notice the irrelevant features. Three general principles of perceptual learning seem particularly relevant. First, unskillful perceiving requires much concentrated attention, whereas skillful perceiving requires less attention and is more easily combined with other tasks. Second, unskillful perceiving involves noticing both the relevant and irrelevant features of sensory stimulation without understanding their meaning or relevance to one's goals, whereas skillful perceiving involves narrowing one's focus to relevant features and understanding the situations they specify. And third, unskillful perceiving often involves attention to the proximal stimulus (that is, the patterns of light or acoustic or pressure information on the retinas, cochleae, and skin, respectively), whereas skillful perceiving involves attention to the distal event that is specified by the proximal stimulus.

Different Domains

Perceptual learning refers to relatively durable gains in perception that occur across widely different domains. For example, at one extreme are studies demonstrating that with practice adults can gain exquisite sensitivity to vernier discriminations, that is, the ability to resolve gaps in lines that approach the size of a single retinal receptor. At the opposite extreme, perceptual learning plays a central role in gaining expertise in the many different content areas of work, everyday life, and academic pursuits.

In the realm of work, classic examples include farmers learning to differentiate the sex of chickens, restaurateurs learning to differentiate different dimensions of fine wine, airplane pilots misperceiving their position relative to the ground, and machinists and architects learning to "see" the three-dimensional shape of a solid object or house from the top, side, and front views.

In the realm of everyday life, important examples include learning to perceive emotional expressions, learning to identify different people and understand their facial expressions, learning to differentiate the different elements of speech when learning a second language, and learning to differentiate efficient routes to important destinations when faced with new surroundings.

In "nonacademic" subjects within the realm of academic pursuits, important examples involve music, art, and sports. For example, music students learn to differentiate the notes, chords, and instrumental voices in a piece, and they learn to identify pieces by period and composer. Art students learn to differentiate different strokes, textures, and styles, and they learn to classify paintings by period and artist. Athletes learn to differentiate the different degrees of freedom that need to be controlled to produce a winning "play" and to anticipate what actions need to be taken when on a playing field.

Finally, perceptual learning plays an equally broad role in classically academic subjects. For example, mathematics students gain expertise at perceiving graphs, classifying the shapes of curves, and knowing what equations might fit a given curve. Science students gain expertise at perceiving laboratory setups. These range widely across grade levels and domains, including the critical features of electrolyzing water in a primary school general science setting, molecular structures in organic chemistry and genetics, frog dissections in biology, the functional relation between wave frequency and diffraction in different media in physics, and the critical features of maps in geology.

The borders separating perceptual learning from conceiving and reasoning often become blurred. And indeed, people perceive in order to understand, and their understanding leads to more and more efficient perception. For example, Herbert A. Simon elaborated on this in 2001 in his discussion of the visual thinking involved in having an expert understanding of the dynamics of a piston in an internal combustion engine. When experts look at a piston or a diagram of a piston or a graph representing the dynamics of a piston, they "see" the higher order, relevant variables, for example, that more work is performed when the combustion explosion moves the piston away from the cylinder's base than when the piston returns toward the base. The ability to "see" such higher-order relations is not just a question of good visual acuity, but it instead depends on content knowledge (about energy, pressure, and work) and on an understanding of how energy acts in the context of an internal combustion engine. In a 2001 article, Daniel Schwartz and John Bransford emphasized that experience with contrasting cases helps students differentiate the critical features when they are working to understand statistics and other academic domains. In a 1993 article, J. Littlefield and John Rieser demonstrated the skill of middle school students at differentiating relevant from irrelevant information when attempting to solve story problems in mathematics.

Classical Issues in Perceptual Learning and Perceptual Development

Perceptual development involves normative age-related changes in basic sensory sensitivities and in perceptual learning. Some of these changes are constrained by the biology of development in well-defined ways. For example, the growth in sensitivity to auditory frequencies during the first year of life is mediated in part by changes in the middle ear and inner ear. Growth in visual acuity during the first two years is mediated in several ways: by the migration of retinal cells into a fovea, through increasing control of convergence eye movements so that the two eyes fixate the same object, and through increasing control of the accommodative state of the lens so that fixated objects are in focus. The role of physical changes in the development of other perceptual skills, for example, perceiving different cues for depth, is less clear.

Nativism and empiricism are central to the study of perception and perceptual development. Stemming from philosophy's interest in epistemology, early nativists (such as the seventeenth-century French mathematician and philosopher René Descartes and the eighteenth-century German philosopher Immanuel Kant) argued that the basic capacities of the human mind were innate, whereas empiricists argued that they were learned, primarily through associations. This issue has long been hotly debated in the field of perceptual learning and development. How is it that the mind and brain come to perceive three-dimensional shapes from two-dimensional retinal projections, perceive distance, segment the speech stream, and represent objects that become covered from view? The debate remains lively in the early twenty-first century, with some arguing that perception of some basic properties of the world is innate, and others arguing that it is learned, reflecting the statistical regularities in experience. For the forms of perceptual learning in which experience does play a role, there is evidence that the timing of the experience can be critical to whether, and to what degree, learning occurs.

The "constancy" of perception is a remarkable feat of perceptual development. The issue is that the energy that gives rise to the perception of a particular object or situation varies widely when the perceiver or object moves, the lighting changes, and so forth. Given the flux in the sensory input, how is it that people manage to perceive that the objects and situations remain (more or less) the same? Research about perceptual constancies has reemerged as an important topic as computer scientists work to design artificial systems that can "learn to see."

Intersensory coordination is a major feature of perception and perceptual development. How is it, for example, that infants can imitate adult models who open their mouths wide or stick out their tongues? How is it that infants can identify objects by looking at them or by touching them and can recognize people by seeing them or listening to them?

The increasing control of actions with age is a major result of perceptual learning, as infants become more skillful at perceiving steps and other features of the ground and learn to control their balance when walking up and down slopes.

In 1955 James Gibson and Eleanor Gibson wrote an important paper titled "Perceptual Learning: Differentiation or Enrichment?" By differentiation they meant skill at distinguishing smaller and smaller differences among objects of a given kind. By enrichment they meant knowledge of the ways that objects and events tend to be associated with other objects and events. Their paper was in part a reaction to the predominant view of learning at the time: that learning was the "enrichment" of responses through their association with largely arbitrary stimulus conditions. The authors provided a sharp counterpoint to this view. Instead of conceiving of the world as constructed by add-on processes of association, they viewed perceivers as actively searching for the stimuli they needed to guide their actions and decisions, and in this way coming to differentiate the relevant features situated in a given set of circumstances from the irrelevant ones.

See also: Attention; Learning Theory, subentry on Historical Overview.

bibliography

Acredolo, Linda P.; Pick, Herb L.; and Olsen, M. 1975. "Environmental Differentiation and Familiarity as Determinants of Children's Memory for Spatial Location." Developmental Psychology 11:495–501.

Adolph, Karen E. 1997. Learning in the Development of Infant Locomotion. Chicago: University of Chicago Press.

Arnheim, Rudolf. 1974. Art and Visual Perception: A Psychology of the Creative Eye. Berkeley: University of California Press.

Aslin, Richard N. 1998. "Speech and Auditory Processing during Infancy: Constraints on and Precursors to Language." In Handbook of Child Psychology, 5th edition, ed. William Damon, Vol. 2: Cognition, Perception, and Language, ed. Deanna Kuhn and Robert S. Siegler. New York: Wiley.

Bahrick, Lorraine E., and Lickliter, Robert. 2000. "Intersensory Redundancy Guides Attentional Selectivity and Perceptual Learning in Infancy." Developmental Psychology 36:190–201.

Baillargeon, Renée. 1994. "How Do Infants Learn about the Physical World?" Current Directions in Psychological Science 3:133–140.

Barsalou, Lawrence W. 1999. "Perceptual Symbol Systems." Behavioral and Brain Sciences 22:577–660.

Bransford, John D., and Schwartz, Daniel L. 2000. "Rethinking Transfer: A Simple Proposal with Multiple Implications." Review of Research in Education 24:61–100.

Bryant, Peter, and Somerville, S. 1986. "The Spatial Demands of Graphs." British Journal of Psychology 77:187–197.

Dodwell, Peter C., ed. 1970. Perceptual Learning and Adaptation: Selected Readings. Harmondsworth, Eng.: Penguin.

Dowling, W. Jay, and Harwood, Dane L. 1986. Music Cognition. New York: Academic Press.

Epstein, William. 1967. Varieties of Perceptual Learning. New York: McGraw-Hill.

Fahle, Manfred, and Poggio, Tomaso, eds. 2000. Perceptual Learning. Cambridge, MA: MIT Press.

Garling, Tommy, and Evans, Gary W. 1991. Environment, Cognition, and Action: An Integrated Approach. New York: Oxford University Press.

Gibson, Eleanor J. 1969. Principles of Perceptual Learning and Development. Englewood Cliffs, NJ: Prentice-Hall.

Gibson, Eleanor J., and Pick, Anne D. 2000. An Ecological Approach to Perceptual Learning and Development. New York: Oxford University Press.

Gibson, Eleanor J., and Walk, Richard D. 1961. "The 'Visual Cliff.'" Scientific American 202:64–71.

Gibson, James J., and Gibson, Eleanor J. 1955. "Perceptual Learning: Differentiation or Enrichment?" Psychological Review 62:32–41.

Goldstone, Robert L. 1998. "Perceptual Learning." Annual Review of Psychology 49:585–612.

Goodnow, Jacqueline J. 1978. "Visible Thinking: Cognitive Aspects of Change in Drawings." Child Development 49:637–641.

Granrud, Carl E. 1993. Visual Perception and Cognition in Infancy. Hillsdale, NJ: Erlbaum.

Haber, Ralph N. 1987. "Why Low-Flying Fighter Planes Crash: Perceptual and Attentional Factors in Collisions with the Ground." Human Factors 29:519–532.

Johnson, Jacqueline S., and Newport, Elissa L. 1989. "Critical Period Effects in Second Language Learning: The Influence of Maturational State on the Acquisition of English as a Second Language." Cognitive Psychology 21:60–99.

Johnson, Mark. 1998. "The Neural Basis of Cognitive Development." In Handbook of Child Psychology, 5th edition, ed. William Damon, Vol. 2: Cognition, Perception, and Language, ed. Deanna Kuhn and Robert S. Siegler. New York: Wiley.

Jusczyk, Peter W. 2002. "How Infants Adapt Speech-Processing Capacities to Native Language Structure." Current Directions in Psychological Science 11:15–18.

Kellman, Philip, and Banks, Martin S. 1998. "Infant Visual Perception." In Handbook of Child Psychology, 5th edition, ed. William Damon, Vol. 2: Cognition, Perception, and Language, ed. Deanna Kuhn and Robert S. Siegler. New York: Wiley.

Littlefield, J., and Rieser, John J. 1993. "Semantic Features of Similarity and Children's Strategies for Identifying Relevant Information in Mathematical Story Problems." Cognition and Instruction 11:133–188.

McLeod, Peter; Reed, Nick; and Dienes, Zoltan. 2001. "Toward a Unified Fielder Theory: What We Do Not Yet Know about How People Run to Catch a Ball." Journal of Experimental Psychology: Human Perception and Performance 27:1347–1355.

Postman, Leo. 1955. "Association Theory and Perceptual Learning." Psychological Review 62:438–446.

Quinn, Paul C.; Palmer, Vanessa; and Slater, Alan M. 1999. "Identification of Gender in Domestic Cat Faces with and without Training: Perceptual Learning of a Natural Categorization Task." Perception 28:749–763.

Rieser, John J.; Pick, Herb L.; Ashmead, Daniel H.; and Garing, A. E. 1995. "Calibration of Human Locomotion and Models of Perceptual-Motor Organization." Journal of Experimental Psychology: Human Perception and Performance 21:480–497.

Saarni, Carolyn. 1998. "Emotional Development: Action, Communication, and Understanding." In Handbook of Child Psychology, 5th edition, ed. William Damon, Vol. 3: Social, Emotional, and Personality Development, ed. Nancy Eisenberg. New York: Wiley.

Saffran, Jenny R.; Aslin, R. N.; and Newport, E. L. 1996. "Statistical Learning by Eight-Month-Old Infants." Science 274:1926–1928.

Saffran, Jenny R., and Griepentrog, G. J. 2001. "Absolute Pitch in Infant Auditory Learning: Evidence for Developmental Reorganization." Developmental Psychology 37:74–85.

Schwartz, Daniel L., and Bransford, John D. 2001. "A Time for Telling." Cognition and Instruction 16:475–522.

Simon, Herbert A. 2001. "Observations on the Sciences of Science Learning." Journal of Applied Developmental Psychology 21:115–121.

Tighe, L. S., and Tighe, T. J. 1966. "Discrimination Learning: Two Views in Historical Perspective." Psychological Bulletin 66:353–370.

Von Hofsten, Claes. 1994. "Planning and Perceiving What Is Going to Happen Next." In The Development of Future-Oriented Processes, ed. Marshall M. Haith, Janette B. Benson, and Ralph J. Roberts. Chicago: University of Chicago Press.

Walk, Richard D. 1966. "Perceptual Learning and the Discrimination of Wines." Psychonomic Science 5:57–58.

Walker-Andrews, Arlene, and Bahrick, Lorraine E. 2001. "Perceiving the Real World: Infants' Detection of and Memory for Social Information." Infancy 2:469–481.

Welch, Robert B. 1978. Perceptual Modification: Adapting to Altered Sensory Environments. New York: Academic Press.

John J. Rieser

PROBLEM SOLVING

Cognitive processing aimed at figuring out how to achieve a goal is called problem solving. In problem solving, the problem solver seeks to devise a method for transforming a problem from its current state into a desired state when a solution is not immediately obvious to the problem solver. Thus, the hallmark of problem solving is the invention of a new method for addressing a problem. This definition has three parts: (1) problem solving is cognitive, that is, it occurs internally in the mind (or cognitive system) and must be inferred indirectly from behavior; (2) problem solving is a process, in that it involves the manipulation of knowledge representations (or the carrying out of mental computations); and (3) problem solving is directed, in that it is guided by the goals of the problem solver.

The definition of problem solving covers a broad range of human cognitive activities, including educationally relevant cognition: figuring out how to manage one's time, writing an essay on a selected topic, summarizing the main point of a textbook section, solving an arithmetic word problem, or determining whether a scientific theory is valid by conducting experiments.

A problem occurs when a problem solver has a goal but initially does not know how to achieve the goal. This definition has three parts: (1) the current state: the problem begins in a given state; (2) the goal state: the problem solver wants the problem to be in a different state, and problem solving is required to transform the problem from the current (or given) state into the goal state; and (3) obstacles: the problem solver does not know the correct solution, and an effective solution method is not obvious to the problem solver.

According to this definition a problem is personal, so that a situation that is a problem for one person might not be a problem for another person. For example, "3 + 5 = ___" might be a problem for a six-year-old child who reasons, "Let's see. I can take one from the 5 and give it to the 3. That makes 4 plus 4, and I know that 4 plus 4 is 8." However, this equation is not a problem for an adult who knows the correct answer.

Types of Problems

Routine and nonroutine problems. It is customary to distinguish between routine and nonroutine problems. In a routine problem, the problem solver knows a solution method and only needs to carry it out. For example, for most adults the problem "589 × 45 = ___" is a routine problem if they know the procedure for multicolumn multiplication. Routine problems are sometimes called exercises, and technically do not fit the definition of problem stated above. When the goal of an educational activity is to promote all the aspects of problem solving (including devising a solution plan), nonroutine problems rather than exercises are appropriate.

In a nonroutine problem, the problem solver does not initially know a method for solving the problem. For example, the following problem (reported by Robert Sternberg and Janet Davidson) is nonroutine for most people: "Water lilies double in area every twenty-four hours. At the beginning of the summer, there is one water lily on the lake. It takes sixty days for the lake to be completely covered with water lilies. On what day is the lake half covered?" In this problem, the problem solver must invent a solution method based on working backwards from the last day. Based on this method, the problem solver can ask what the lake would look like on the day before the last day, and conclude that the lake is half covered on the fifty-ninth day.
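
The working-backwards method can be made concrete with a short simulation. The sketch below is purely illustrative (it is not part of the original discussion, and the variable names are invented); it undoes one doubling at a time, starting from the fully covered lake on day sixty:

```python
# Illustrative sketch of working backwards in the water-lily problem:
# the lilies double every day, so stepping back one day halves the coverage.
coverage = 1.0   # fraction of the lake covered on day 60 (fully covered)
day = 60
while coverage > 0.5:
    coverage /= 2   # undo one day's doubling
    day -= 1
print(day)  # 59: the lake is half covered on the fifty-ninth day
```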

Well-defined and ill-defined problems. It is also customary to distinguish between well-defined and ill-defined problems. In a well-defined problem, the given state of the problem, the goal state of the problem, and the allowable operators (or moves) are each clearly specified. For example, the following water-jar problem (adapted from Abraham Luchins) is an example of a well-defined problem: "I will give you three empty water jars; you can fill any jar with water and pour water from one jar into another (until the second jar is full or the first one is empty); you can fill and pour as many times as you like. Given water jars of size 21, 127, and 3 units and an unlimited supply of water, how can you obtain exactly 100 units of water?" This is a well-defined problem because the given state is clearly specified (you have empty jars of size 21, 127, and 3), the goal state is clearly specified (you want to get 100 units of water in one of the jars), and the allowable operators are clearly specified (you can fill and pour according to specific procedures). Well-defined problems may be either routine or nonroutine; if you do not have previous experience with water jar problems, then finding the solution (i.e., fill the 127, pour out 21 once, and pour out 3 twice) is a nonroutine problem.
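
Because every state and every allowable operator of a well-defined problem is explicitly specified, its problem space can be searched mechanically. The breadth-first search below is a minimal sketch of the water-jar problem above; it is illustrative only, and it assumes that emptying a jar onto the ground is also permitted, as in Luchins's original task:

```python
from collections import deque

CAPACITY = (21, 127, 3)   # jar sizes from the problem above
GOAL = 100                # obtain exactly 100 units in some jar

def successors(state):
    """States reachable in one move: fill a jar, empty a jar (assumed
    allowed), or pour one jar into another until the source is empty
    or the target is full. These are the allowable operators."""
    for i in range(3):
        filled = list(state)
        filled[i] = CAPACITY[i]
        yield tuple(filled)
        emptied = list(state)
        emptied[i] = 0
        yield tuple(emptied)
        for j in range(3):
            if i != j:
                amount = min(state[i], CAPACITY[j] - state[j])
                poured = list(state)
                poured[i] -= amount
                poured[j] += amount
                yield tuple(poured)

def solve(start=(0, 0, 0)):
    """Breadth-first search from the start state to any state holding GOAL."""
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if GOAL in state:
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))

print(solve())
# One solution: fill the 127-unit jar, pour 21 units off into the first
# jar, then pour 3 units off into the third jar twice (emptying it once
# in between), leaving 127 - 21 - 3 - 3 = 100 units.
```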

In an ill-defined problem, the given state, goal state, and/or operations are not clearly specified. For example, in the problem, "Write a persuasive essay in favor of year-round schools," the goal state is not clear because the criteria for what constitutes a "persuasive essay" are vague and the allowable operators, such as how to access sources of information, are not clear. Only the given state is clear: a blank piece of paper. Ill-defined problems can be routine or nonroutine; if one has extensive experience in writing, then writing a short essay like this one is a routine problem.

Processes in Problem Solving

The process of problem solving can be broken down into two major phases: problem representation, in which the problem solver builds a coherent mental representation of the problem, and problem solution, in which the problem solver devises and carries out a solution plan. Problem representation can be broken down further into problem translation, in which the problem solver translates each sentence (or picture) into an internal mental representation, and problem integration, in which the problem solver integrates the information into a coherent mental representation of the problem (i.e., a mental model of the situation described in the problem). Problem solution can be broken down further into solution planning, in which the problem solver devises a plan for how to solve the problem, and solution execution, in which the problem solver carries out the plan by engaging in solution behaviors. Although the four processes of problem solving are listed sequentially, they may occur in many different orderings and with many iterations in the course of solving a problem.

For example, consider the butter problem described by Mary Hegarty, Richard Mayer, and Christopher Monk: "At Lucky, butter costs 65 cents per stick. This is two cents less per stick than butter at Vons. If you need to buy 4 sticks of butter, how much will you pay at Vons?" In the problem translation phase, the problem solver may mentally represent the first sentence as "Lucky = 0.65," the second sentence as "Lucky = Vons - 0.02," and the third sentence as "4 × Vons = ___." In problem integration, the problem solver may construct a mental number line with Lucky at 0.65 and Vons to the right of Lucky (at 0.67); or the problem solver may mentally integrate the equations as "4 × (Lucky + 0.02) = ____." A key insight in problem integration is to recognize the proper relation between the cost of butter at Lucky and the cost of butter at Vons, namely that butter costs more at Vons (even though the keyword in the problem is "less"). In solution planning, the problem solver may break the problem into parts, such as: "First add 0.02 to 0.65, then multiply the result by 4." In solution execution, the problem solver carries out the plan: 0.02 + 0.65 = 0.67, 0.67 × 4 = 2.68. In addition, the problem solver must monitor the problem-solving process and make adjustments as needed.
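
The four phases can be traced in a few lines of code. The sketch below is an illustrative rendering of the phases just described (the variable names are invented for this example):

```python
# Problem translation: each sentence becomes an internal representation.
lucky = 0.65        # "butter costs 65 cents per stick at Lucky"
difference = 0.02   # "two cents less per stick than butter at Vons"
sticks = 4          # "you need to buy 4 sticks of butter"

# Problem integration: relate the facts correctly. Butter costs MORE
# at Vons, even though the keyword in the problem is "less".
vons = lucky + difference

# Solution planning and execution: first find the Vons price,
# then multiply by the number of sticks.
total = sticks * vons
print(f"${total:.2f}")  # $2.68
```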

Teaching for Problem Solving

A challenge for educators is to teach in ways that foster meaningful learning rather than rote learning. Rote instructional methods promote retention (the ability to solve problems that are identical or highly similar to those presented in instruction), but not problem solving transfer (the ability to apply what was learned to novel problems). For example, in 1929, Alfred Whitehead used the term inert knowledge to refer to learning that cannot be used to solve novel problems. In contrast, meaningful instructional methods promote both retention and transfer.

In a classic example of the distinction between rote and meaningful learning, the psychologist Max Wertheimer (1959) described two ways of teaching students to compute the area of a parallelogram. In the rote method, students learn to measure the base, measure the height, and then multiply base times height. Students taught by the A = b × h method are able to find the area of parallelograms shaped like the ones given in instruction (a retention problem) but not unusual parallelograms or other shapes (a transfer problem). Wertheimer used the term reproductive thinking to refer to problem solving in which one blindly carries out a previously learned procedure. In contrast, in the meaningful method, students learn by cutting the triangle from one end of a cardboard parallelogram and attaching it to the other end to form a rectangle. Once students have the insight that a parallelogram is just a rectangle in disguise, they can compute the area because they already know the procedure for finding the area of a rectangle. Students taught by the insight method perform well on both retention and transfer problems. Wertheimer used the term productive thinking to refer to problem solving in which one invents a new approach to solving a novel problem.

Educationally Relevant Advances in Problem Solving

Recent advances in educational psychology point to the role of domain-specific knowledge in problem solving, such as knowledge of specific strategies or problem types that apply to a particular field. Three important advances have been: (1) the teaching of problem-solving processes, (2) the nature of expert problem solving, and (3) new conceptions of individual differences in problem-solving ability.

Teaching of problem-solving processes. An important advance in educational psychology is cognitive strategy instruction, which includes the teaching of problem-solving processes. For example, in Project Intelligence, elementary school children successfully learned the cognitive processes needed for solving problems similar to those found on intelligence tests. In Instrumental Enrichment, students who had been classified as mentally retarded learned cognitive processes that allowed them to show substantial improvements on intelligence tests.

Expert problem solving. Another important advance in educational psychology concerns differences between what experts and novices know in given fields, such as medicine, physics, and computer programming. For example, expert physicists tend to store their knowledge in large integrated chunks, whereas novices tend to store their knowledge as isolated fragments; expert physicists tend to focus on the underlying structural characteristics of physics word problems, whereas novices focus on the surface features; and expert physicists tend to work forward from the givens to the goal, whereas novices work backwards from the goal to the givens. Research on expertise has implications for professional education because it pinpoints the kinds of domain-specific knowledge that experts need to learn.

Individual differences in problem-solving ability. This third advance concerns new conceptions of intellectual ability based on differences in the way people process information. For example, people may differ in cognitive style, such as their preferences for visual versus verbal representations, or for impulsive versus reflective approaches to problem solving. Alternatively, people may differ in the speed and efficiency with which they carry out specific cognitive processes, such as making a mental comparison or retrieving a piece of information from memory. Instead of characterizing intellectual ability as a single, monolithic ability, recent conceptions of intellectual ability focus on the role of multiple differences in information processing.

See also: Creativity; Learning, subentry on Analogical Reasoning; Mathematics Learning, subentry on Complex Problem Solving.

bibliography

Chi, Michelene T. H.; Glaser, Robert; and Farr, Marshall J., eds. 1988. The Nature of Expertise. Hillsdale, NJ: Erlbaum.

Duncker, Karl. 1945. On Problem Solving. Washington, DC: American Psychological Association.

Feuerstein, Reuven. 1980. Instrumental Enrichment. Baltimore: University Park Press.

Hegarty, Mary; Mayer, Richard E.; and Monk, Christopher A. 1995. "Comprehension of Arithmetic Word Problems: Evidence from Students' Eye Fixations." Journal of Educational Psychology 84:76–84.

Hunt, Earl; Lunneborg, Cliff; and Lewis, J. 1975. "What Does It Mean to Be High Verbal?" Cognitive Psychology 7:194–227.

Larkin, Jill H.; McDermott, John; Simon, Dorothea P.; and Simon, Herbert A. 1980. "Expert and Novice Performance in Solving Physics Problems." Science 208:1335–1342.

Luchins, Abraham S. 1942. Mechanization in Problem Solving: The Effect of Einstellung. Evanston, IL: American Psychological Association.

Mayer, Richard E. 1992. Thinking, Problem Solving, Cognition, 2nd edition. New York: Freeman.

Mayer, Richard E. 1999. The Promise of Educational Psychology. Upper Saddle River, NJ: Prentice-Hall.

Nickerson, Raymond S. 1995. "Project Intelligence." In Encyclopedia of Human Intelligence, ed. Robert J. Sternberg. New York: Macmillan.

Pressley, Michael J., and Woloshyn, Vera. 1995. Cognitive Strategy Instruction that Really Improves Children's Academic Performance. Cambridge, MA: Brookline Books.

Sternberg, Robert J., and Davidson, Janet E. 1982. "The Mind of the Puzzler." Psychology Today 16:37–44.

Sternberg, Robert J., and Zhang, Li-Fang, eds. 2001. Perspectives on Thinking, Learning, and Cognitive Styles. Mahwah, NJ: Erlbaum.

Wertheimer, Max. 1959. Productive Thinking. New York: Harper and Row.

Whitehead, Alfred North. 1929. The Aims of Education. New York: Macmillan.

Richard E. Mayer

REASONING

Reasoning is the generation or evaluation of claims in relation to their supporting arguments and evidence. The ability to reason has a fundamental impact on one's ability to learn from new information and experiences because reasoning skills determine how people comprehend, evaluate, and accept claims and arguments. Reasoning skills are also crucial for being able to generate and maintain viewpoints or beliefs that are coherent with, and justified by, relevant knowledge. There are two general kinds of reasoning that involve claims and evidence: formal and informal.

Formal Reasoning

Formal reasoning is used to evaluate the form of an argument, and to examine the logical relationships between conclusions and their supporting assertions. Arguments are determined to be either valid or invalid based solely on whether their conclusions necessarily follow from their explicitly stated premises or assertions. That is, if the supporting assertions are true, must the conclusion also be true? If so, then the argument is considered valid and the truth of the conclusion can be directly determined by establishing the truth of the supporting assertions. If not, then the argument is considered invalid, and the truth of the assertions is insufficient (or even irrelevant) for establishing the truth of the conclusion. Formal reasoning is often studied in the context of categorical syllogisms or "if-then" conditional proofs. Syllogisms contain two assertions and a conclusion. An example of a logically valid syllogism is: All dogs are animals; all poodles are dogs; therefore poodles are animals. A slight change to one of the premises will create the invalid syllogism: All dogs are animals; some dogs are poodles; therefore all poodles are animals. This argument form is invalid because it cannot be determined with certainty that the conclusion is true, even if the premises are true. The second premise does not require that all poodles are dogs. Thus, there may be some poodles who are not dogs and, by extension, some poodles who are not animals. This argument is invalid despite the fact that an accurate knowledge of dogs, poodles, and animals confirms that both the premises and the conclusion are true statements. This validity-truth incongruence highlights the important point that the conceptual content of an argument or the real-world truth of the premises and conclusion are irrelevant to the logic of the argument form.
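
Because validity depends only on the form of the argument, it can be tested mechanically by searching for a counterexample: a model in which every premise is true but the conclusion is false. The following sketch (illustrative only; the function names are invented for this example) applies such a search to the two syllogisms just discussed, representing each term as a subset of a small universe:

```python
from itertools import combinations, product

def subsets(universe):
    """All subsets of the universe, as candidate extensions of a term."""
    return [frozenset(c) for r in range(len(universe) + 1)
            for c in combinations(universe, r)]

def is_valid(premises, conclusion, universe=(0, 1, 2)):
    """An argument form is valid only if the conclusion holds in every
    model in which all the premises hold (a counterexample search)."""
    for dogs, animals, poodles in product(subsets(universe), repeat=3):
        model = {"dogs": dogs, "animals": animals, "poodles": poodles}
        if all(p(model) for p in premises) and not conclusion(model):
            return False  # premises true but conclusion false: invalid form
    return True

def all_are(a, b):   # "All a are b"
    return lambda m: m[a] <= m[b]

def some_are(a, b):  # "Some a are b"
    return lambda m: bool(m[a] & m[b])

# Valid: All dogs are animals; all poodles are dogs; so poodles are animals.
print(is_valid([all_are("dogs", "animals"), all_are("poodles", "dogs")],
               all_are("poodles", "animals")))    # True

# Invalid: All dogs are animals; some dogs are poodles;
# therefore all poodles are animals.
print(is_valid([all_are("dogs", "animals"), some_are("dogs", "poodles")],
               all_are("poodles", "animals")))    # False
```

Note that the search reproduces the validity-truth distinction: the second form is rejected even though its particular premises and conclusion all happen to be true of real dogs, poodles, and animals.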

Discussions of formal reasoning may sometimes refer to the rules of logic. It is common for formal reasoning to be described as a set of abstract and prescriptive rules that people must learn and apply in order to determine the validity of an argument. This is the oldest perspective on formal reasoning. Some claim that the term formal reasoning refers directly to the application of these formal rules.

However, many theorists consider this perspective misguided. Describing formal reasoning as the evaluation of argument forms conveys a more inclusive and accurate account of the various perspectives in this field. There are at least four competing theories about how people determine whether a conclusion necessarily follows from the premises. These theories are commonly referred to as rule-based perspectives, mental models, heuristics, and domain-sensitive theories. People outside the rule-based perspective view the rules of logic as descriptive rules that simply give labels to common argument forms and to common errors or fallacies in logical reasoning. These theories are too complex to be detailed here, and there is currently no consensus as to which theory best accounts for how people actually reason. A number of books and review articles provide comprehensive discussions of these theories and their relative merits; one example is Human Reasoning: The Psychology of Deduction by Jonathan Evans, Stephen Newstead, and Ruth Byrne.

There is a consensus that human reasoning performance is poor and prone to several systematic errors. Performance on formal reasoning tasks is generally poor, but can be better or worse depending upon the particular aspects of the task. People perform worse on problems that require more cognitive work, due to excessive demands placed on their limited processing capacity or working memory. The required cognitive work can be increased simply by having more information, or by the linguistic form of the argument. Some linguistic forms can affect performance because they violate conventional discourse or must be mentally rephrased in order to be integrated with other information.

In addition, people's existing knowledge about the concepts contained in the problem can affect performance. People have great difficulty evaluating the logical validity of an argument independent of their real-world knowledge. They insert their knowledge as additional premises, which leads them to make more inferences than are warranted. Prior knowledge can also lead people to misinterpret the meaning of premises. Another common source of error is belief bias, where people judge an argument's validity based on whether the conclusion is consistent with their beliefs rather than on its logical relationship to the given premises.

The systematic errors that have been observed provide some insights about what skills a person might develop to improve performance. Making students explicitly aware of the likely intrusion of their prior knowledge could facilitate their ability to control or correct such intrusions. Students may also benefit from a detailed and explicit discussion of what logical validity refers to, how it differs from real-world truth or personal agreement, and how easy it is to confuse the two. Regardless of whether or not people commonly employ formal rules of logic, an understanding and explicit knowledge of these rules should facilitate efforts to search for violations of logical validity. Theorists of informal reasoning such as James Voss and Mary Means have made a similar argument for the importance of explicit knowledge about the rules of good reasoning. Errors attributed to limited cognitive resources can be addressed by increasing reasoning skill, and practice on formal reasoning tasks should increase proficiency and reduce the amount of cognitive effort required. Also, working memory load should be reduced by external representation techniques, such as Venn diagrams.

Informal Reasoning

Informal reasoning refers to attempts to determine what information is relevant to a question, what conclusions are plausible, and what degree of support the relevant information provides for these various conclusions. In most circumstances, people must evaluate the justification for a claim in a context where the information is ambiguous and incomplete and the criteria for evaluation are complex and poorly specified. Most of what is commonly referred to as "thinking" involves informal reasoning, including making predictions of future events or trying to explain past events. These cognitive processes are involved in answering questions as mundane as "How much food should I prepare for this party?" and as profound as "Did human beings evolve from simple one-celled organisms?" Informal reasoning has a pervasive influence on both the everyday and the monumental decisions that people make, and on the ideas that people come to accept or reject.

Informal and formal reasoning both involve attempts to determine whether a claim has been sufficiently justified by the supporting assertions, but these types of reasoning differ in many respects. The vast majority of arguments are invalid according to formal logic, so informal reasoning must be employed to determine what degree of justification the supporting assertions provide. Also, the supporting assertions themselves must be evaluated as to their validity and accuracy. Formal reasoning involves making a binary decision based only on the given information. Informal reasoning involves making an uncertain judgment about the degree of justification for a claim relative to competing claims, and basing this evaluation on an ill-defined set of assertions whose truth values are uncertain.

Based on the above characterization of informal reasoning, a number of cognitive skills would be expected to affect the quality of such reasoning. The first is the ability to fully comprehend the meaning of the claim being made. Understanding the conceptual content is crucial to being able to consider what other information might bear on the truth or falsehood of a claim. Other cognitive processes involved in reasoning include the retrieval of relevant knowledge from long-term memory, seeking out new relevant information, evaluating the validity and utility of that information, generating alternatives to the claim in question, and evaluating the competing claims in light of the relevant information.

Successful reasoning requires the understanding that evidence must provide information that is independent of the claim or theory, and that evidence must do more than simply rephrase and highlight the assumptions of the theory. For example, the assertion "Some people have extrasensory perception" does not provide any evidence about the claim "ESP is real." These are simply ways of restating the same information. Evidence must be an assertion that is independent of the claim, but that still provides information about the probable truth of the claim. An example of potential evidence for the claim that "ESP is real" would be "Some people know information that they could not have known through any of the normal senses." In other words, evidence constitutes assertions whose truth has implications for, but is not synonymous with, the truth of the claim being supported.

Without an understanding of evidence and counterevidence and how they relate to theories, people would be ineffective at identifying information that could be used to determine whether a claim is justified. Also, lack of a clear distinction between evidence and theory will lead to the assimilation of evidence and the distortion of its meaning and logical implications. This eliminates the potential to consider alternative claims that could better account for the evidence. People will also fail to use counterevidence to make appropriate decreases in the degree of justification for a claim.

Discussions of informal reasoning, argumentation, and critical thinking commonly acknowledge that a prerequisite for effective reasoning is a belief in the utility of reasoning. The cognitive skills described above are necessary, but not sufficient, to produce quality reasoning. The use of these skills is clearly effortful; thus, people must believe in the importance and utility of reasoning in order to consistently put forth the required effort. The epistemology that promotes the use of reasoning skills is the view that knowledge can never be absolutely certain and that valid and useful claims are the product of contemplating possible alternative claims and weighing the evidence and counterevidence. Put simply, people use their reasoning skills consistently when they acknowledge the possibility that a claim may be incorrect and also believe that standards of good reasoning produce more accurate ideas about the world.

Inconsistent, selective, and biased application of reasoning skills provides little or no benefits for learning. Greater reasoning skills are assumed to aid in the ability to acquire new knowledge and revise one's existing ideas accordingly. However, if one contemplates evidence and theory only when it can be used to justify one's prior commitments, then only supportive information will be learned and existing ideas will remain entrenched and unaffected. The development of reasoning skills will confer very little intellectual benefit in the absence of an epistemological commitment to employ those skills consistently.

General Reasoning Performance

Reports from the National Assessment of Educational Progress and the National Academy of Sciences consistently show poor performance on a wide array of tasks that require informal reasoning. These tasks span all of the core curriculum areas of reading, writing, mathematics, science, and history.

Some smaller-scale studies have attempted to paint a more detailed picture of what people are doing, or failing to do, when asked to reason. People demonstrate some use of informal reasoning skills, but these skills are underdeveloped and applied inconsistently. Children and adults have a poor understanding of evidence and its relationship to theories or claims. Only a small minority of people attempt to justify their claims by providing supporting evidence. When explicitly asked for supporting evidence, most people simply restate the claim itself or describe in more detail what the claim means. It is especially rare for people to generate possible counterevidence or even to consider possible alternative claims.

The inconsistent application of informal reasoning skills could have multiple causes. Some theorists suggest that reasoning skills are domain specific and depend heavily on the amount of domain knowledge a person possesses. Alternatively, underdeveloped or unpracticed skills could lead to their haphazard use. A third possibility is that people's lack of explicit knowledge about what good reasoning entails prevents them from exercising conscious control over their implicit skills.

Inconsistent use of informal reasoning skills may also arise because people lack a principled belief in the utility of reasoning that would foster a consistent application of sound reasoning. People have extreme levels of certainty in their ideas, and they take this certainty for granted. In addition, the application of reasoning skills is not random, but is selective and biased such that prior beliefs are protected from scrutiny. This systematic inconsistency cannot be accounted for by underdeveloped skills, but can be accounted for by assuming a biased motivation to use these skills selectively. Regardless of whether or not people have the capacity for sound reasoning, they have no philosophical basis that could provide the motivation to override the selective and biased use of these skills.

Development of Reasoning Skills

There are only preliminary data about how and when informal reasoning skills develop. Early evidence suggests that the development of reasoning takes a leap forward during the preadolescent years. These findings are consistent with Piagetian assumptions about the development of concrete operational thinking, in other words, thinking that involves the mental manipulation (e.g., combination, transformation) of objects represented in memory. However, younger children are capable of some key aspects of reasoning. Thus, the improvement during early adolescence could result from improvements in other subsidiary skills of information processing, from metacognitive awareness, or from an increase in relevant knowledge.

A somewhat striking finding is the lack of development in informal reasoning that occurs from early adolescence through adulthood. Some evidence suggests that college can improve reasoning, but the overall relationship between the amount of postsecondary education and reasoning skill is weak at best. The weak and inconsistent relationship that does exist between level of education and reasoning is likely due to indirect effects. Students are rarely required to engage in complex reasoning tasks. However, the spontaneous disagreements that arise in the classroom could expose them to the practice of justifying their claims. Also, engagement in inquiry activities, such as classroom experiments, could provide implicit exposure to the principles of scientific reasoning.

There are relatively few programs aimed at developing informal reasoning skills; hence, there is little information about effective pedagogical strategies. Where they do exist, curricula are often aimed at developing general reasoning skills. Yet many believe that effective reasoning skills are domain- or discipline-specific. Nevertheless, given the pervasive impact of reasoning skills on learning in general, it is clear that more systematic efforts are needed to foster reasoning skills at even the earliest grade levels. Of the approaches that have been attempted, there is some evidence for the success of scaffolding, which involves a teacher interacting with a student who is attempting to reason, and prompting the student to develop more adequate arguments. Another approach is to explicitly teach what good reasoning means, what evidence is, and how evidence relates to theories. This approach could be especially effective if classroom experiments are conducted within the context of explicit discussions about the principles of scientific reasoning. Also, if reasoning skills are discussed in conjunction with the content of the core subject areas, then students may develop an appreciation for the pervasive utility and importance of reasoning for the progress of ideas.

A number of theorists have suggested that debate between students with opposing views could foster the basic skills needed for informal reasoning. Debates could give students practice in having to consider opposing viewpoints and having to coordinate evidence and counterevidence in support of a claim. Also, providing justification for one's positions requires some cognitive effort, and the norms of social dialogue could provide the needed motivation. However, interpersonal debates are most commonly construed as situations in which individuals are committed to a position ahead of time, and in which their goal is to frame the issue and any evidence in a manner that will persuade their opponent or the audience that their own position is correct. Students' reasoning is already greatly impaired by their tendency to adopt a biased, defensive, or noncontemplative stance. Debate activities that reinforce this stance and blur the difference between defending a claim and contemplating a claim's justification may do more harm than good. To date, there is no empirical data that compare the relative costs and benefits of using interpersonal debate exercises to foster critical reasoning skills.

See also: Learning, subentry on Causal Reasoning; Learning Theory, subentry on Historical Overview.

bibliography

Baron, Jonathan. 1985. Rationality and Intelligence. Cambridge, Eng.: Cambridge University Press.

Baron, Jonathan. 1988. Thinking and Deciding. Cambridge, Eng.: Cambridge University Press.

Boyer, Ernest L. 1983. High School: A Report on Secondary Education in America. New York: Harper and Row.

Carey, Susan. 1985. "Are Children Fundamentally Different Thinkers and Learners Than Adults?" In Thinking and Learning Skills: Current Research and Open Questions, Vol. 2, ed. Susan Chipman, Judith Segal, and Robert Glaser. Hillsdale, NJ: Erlbaum.

Evans, Jonathan St. B. T.; Newstead, Stephen E.; and Byrne, Ruth M. J. 1993. Human Reasoning: The Psychology of Deduction. Hillsdale, NJ: Erlbaum.

Johnson-Laird, Philip N., and Byrne, Ruth M. J. 1991. Deduction. Hillsdale, NJ: Erlbaum.

Kuhn, Deanna. 1991. The Skills of Argument. Cambridge, Eng.: Cambridge University Press.

Means, Mary L., and Voss, James F. 1996. "Who Reasons Well? Two Studies of Informal Reasoning Among Children of Different Grade, Ability, and Knowledge Levels." Cognition and Instruction 14:139–178.

Nickerson, Raymond S. 1991. "Modes and Models of Informal Reasoning: A Commentary." In Informal Reasoning and Education, ed. James F. Voss, David N. Perkins, and Judith W. Segal. Hillsdale, NJ: Erlbaum.

Perkins, David N. 1985. "Postprimary Education Has Little Impact on Informal Reasoning." Journal of Educational Psychology 77:562–571.

Stein, Nancy L., and Miller, Christopher A. 1991. "I Win–You Lose: The Development of Argumentative Thinking." In Informal Reasoning and Education, ed. James F. Voss, David N. Perkins, and Judith W. Segal. Hillsdale, NJ: Erlbaum.

Voss, James F., and Means, Mary L. 1991. "Learning to Reason via Instruction and Argumentation." Learning and Instruction 1:337–350.

Vygotsky, Lev S. 1978. Mind in Society: The Development of Higher Psychological Processes, ed. Michael Cole. Cambridge, MA: Harvard University Press.

Thomas D. Griffin

TRANSFER OF LEARNING

Imagine that every time people entered a new environment they had to learn how to behave without the guidance of prior experiences. Slightly novel tasks, like shopping online, would be disorienting and dependent on trial-and-error tactics. Fortunately, people use aspects of their prior experiences, such as the selection of goods and subsequent payment, to guide their behavior in new settings. The ability to use learning gained in one situation to help with another is called transfer.

Transfer has a direct bearing on education. Educators hope that students transfer what they learn from one class to another, and to the outside world. Educators also hope students transfer experiences from home to help make sense of lessons at school. There are two major approaches to the study of transfer. One approach characterizes the knowledge and conditions of acquisition that optimize the chances of transfer. The other approach inquires into the nature of individuals and the cultural contexts that transform them into more adaptive participants.

Knowledge-Based Approaches to Transfer

There are several knowledge-based approaches to transfer.

Transferring out from instruction. Ideally, the knowledge students learn in school will be applied outside of school. For some topics, it is possible to train students for the specific situations they will subsequently encounter, such as typing at a keyboard. For other topics, educators cannot anticipate all the out-of-school applications. When school-based lessons do not have a direct mapping to out-of-school contexts, memorization without understanding can lead to inert knowledge. Inert knowledge occurs when people acquire an idea without also learning the conditions of its subsequent application, and thus they fail to apply that idea appropriately. Memorizing the Pythagorean formula, for example, does not guarantee students know to use the formula to find the distance of a shortcut.
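
For instance, the transferable use of the formula is recognizing that a diagonal path across a rectangular lot is the hypotenuse of a right triangle. The sketch below is a hypothetical worked example, not from the original text:

```python
import math

# Hypothetical example: cutting diagonally across a rectangular lot
# instead of walking along its two sides.
east, north = 300.0, 400.0          # side lengths in meters
shortcut = math.hypot(east, north)  # sqrt(300**2 + 400**2) = 500.0
print(shortcut)                     # 500.0 meters along the diagonal
print((east + north) - shortcut)    # 200.0 meters saved by the shortcut
```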

Knowing when to use an idea depends on knowing the contexts in which the idea is useful. The ideas that people learn are always parts of a larger context, and people must determine which aspects of that context are relevant. Imagine, for example, a young child who is learning to use the hook of a candy cane to pull a toy closer. As the child learns the action, there are a number of contextual features she might also learn. There are incidental features (it is Christmas); there are surface features (the candy is small and striped); and there are deep features (the candy cane is rigid and hooked). Instruction for transfer must help the child discern the deep features. This way the child might subsequently use an umbrella handle to gather a stuffed animal instead of trying a candy-striped rope.

When people learn, they not only encode the target idea, they also encode the context in which it occurs, even if that context is incidental. For a study published in 1975, Godden and Baddeley asked adults to learn a list of words on land or underwater (while scuba diving). Afterwards, the adults were subdivided; half tried to remember the words underwater and half on land. Those people who learned the words underwater remembered them better underwater than on land, and those people who learned the words on land remembered them better on land than underwater. This result reveals the context dependency of memory. Context dependency is useful because it constrains ideas to appear in appropriate contexts, rather than cluttering people's thoughts at odd times. But context dependency can be a problem for transfer, because transfer, by definition, has to occur when the original context of learning is not reinstated, for example, when one is no longer in school.

Surface features, which are readily apparent to the learner, differ from incidental features, because surface features are attached to the idea rather than the context in which the idea occurs. Surface features can be useful. A child might learn that fish have fins and lay eggs. When he sees a new creature with fins, he may decide it is a fish and infer that it too lays eggs. Surface features, however, can be imperfect cues. People may overgeneralize and exhibit negative transfer. For example, the child may have seen a dolphin instead of a fish. People may also undergeneralize and fail to transfer. A child might see an eel and assume it does not lay eggs. Good instruction helps students see beneath the surface to find the deep features of an idea.

Deep features are based on structures integral to an idea, which may not be readily apparent. To a physicist, an inclined plane and scissors share the same deep structure of leverage, but novices cannot see this similarity and they fail to use a formula learned for inclined planes to reason about scissors.

Analogies are built on deep features. For example, color is to picture as sound is to song. On the surface, color and sound differ, as do pictures and songs. Nonetheless, the shared relation ("is used to create") makes it possible to compare the common structure between the two. Analogy is an important way people discover deep features. In the 1990s, Kevin Dunbar studied the laboratory meetings of cell biologists. He found that the scientists often used analogies to understand a new discovery. They typically made transfers of near analogies rather than far ones. A far analogy transfers an idea from a remote body of knowledge that shares few surface features, as might be the case when using the structure of the solar system to explain the structure of an atom. A near analogy draws on a structure that comes from a similar body of knowledge. The scientists in Dunbar's study used near analogies from biology because they had precise knowledge of biology, which made for a more productive transfer.

Instruction can help students determine deep features by using analogous examples rather than single examples. In a 1983 study, Mary Gick and Keith Holyoak asked students how to kill a tumor with a burst of radiation, given that a strong burst kills nearby tissue and a weak burst does not kill the tumor. Students learned that the solution uses multiple weak radiation beams that converge on the tumor. Sometime later, the students tried to solve the problem of how a general could attack a fortress: If the general brought enough troops to attack the fortress, they would collapse the main bridge. Students did not propose that the general could split his forces over multiple bridges and then converge on the fortress. The students' knowledge of the convergence solution was inert, because it was only associated with the radiation problem. Gick and Holyoak found they could improve transfer by providing two analogous examples instead of one. For example, students worked with the radiation problem and an analogous traffic congestion problem. This helped students abstract the convergence schema from the radiation context, and they were able to transfer their knowledge to the fortress problem.

Transferring in to instruction. In school, transfer can help students learn. If students can transfer in prior knowledge, it will help them understand the content of a new lesson. A lesson on the Pythagorean theorem becomes more comprehensible if students can transfer in prior knowledge of right triangles. Otherwise, the lesson simply involves pushing algebraic symbols.

Unlike transfer to out-of-school settings, which depends on the spontaneous retrieval of relevant prior knowledge, transfer to in-school settings can be directly supported by teachers. A common approach to help students recruit prior knowledge uses cover stories that help students see the relevance of what they are about to learn. A teacher might discuss the challenge of finding the distance of the moon from the earth to motivate a lesson on trigonometry. This example includes two ways that transferring in prior knowledge can support learning. Prior knowledge helps students understand the problems that a particular body of knowledge is intended to solvein this case, problems about distance. Prior knowledge also enables learners to construct a mental model of the situation that helps them understand what the components of the trigonometric formulas refer to.

Sometimes students cannot transfer knowledge to school settings because they do not have the relevant knowledge. One way to help overcome a lack of prior knowledge is to use contrasting cases. Whereas pairs of analogies help students abstract deep features from surface features, pairs of contrasting cases help students notice deep features in the first place. Contrasting cases juxtapose examples that only differ by one or two features. For example, a teacher might ask students to compare examples of acute, right, and obtuse triangles. Given the contrasts, students can notice what makes a right triangle distinctive, which in turn, helps them construct precise mental models to understand a lesson on the Pythagorean theorem.

Person-Based Approaches to Transfer

The second approach to transfer asks whether person-level variables affect transfer. For example, do IQ tests or persistence predict the ability to transfer? Person-based research relevant to instruction asks whether some experiences can transform people in general ways.

Transferring out from instruction. An enduring issue has been whether instruction can transform people into better thinkers. People often believe that mastering a formal discipline, like Latin or programming, improves the rigor of thought. Research has shown that it is very difficult to improve people's reasoning, with instruction in logical reasoning being notoriously difficult. Although people may learn to reason appropriately for one situation, they do not necessarily apply that reasoning to novel situations. More protracted experiences, however, may broadly transform individuals to the extent that they apply a certain method of reasoning in general, regardless of situational context. For example, the cultural experiences of American and Chinese adults lead them to approach contradictions differently.

There have also been attempts to improve learning abilities by improving people's ability to transfer. Ann Brown and Mary Jo Kane showed young children how to use a sample solution to help solve an analogous problem. After several lessons on transferring knowledge from samples to problems, the children spontaneously began to transfer knowledge from one example to another. Whether this type of instruction has broad effects (for example, when the child leaves the psychologist's laboratory) remains an open question. Most likely, it is the accumulation of many experiences, not isolated, short-term lessons, that has broad implications for personal development.

Transferring in to instruction. When children enter school, they come with identities and dispositions that have been informed by the practices and roles available in their homes and neighborhoods. Schools also have practices and roles, but these can seem foreign and inhospitable to out-of-school identities. Na'ilah Nasir, for example, found that students did not transfer their basketball "street statistics" to make sense of statistics lessons in their classrooms (nor did they use school-learned procedures to solve statistics problems in basketball). From a knowledge approach to transfer, one might argue that the school and basketball statistics were analogous, and that the children failed to see the common deep features. From a person approach to transfer, the cultural contexts of the two settings were so different that they supported different identities, roles, and interpretations of social demands. People can view and express themselves quite differently in school and nonschool contexts, and there will therefore be little transfer.

One way to bridge home and school is to alter instructional contexts so children can build identities and practices that are consistent with their out-of-school personae. Educators, for example, can bring elements of surrounding cultures into the classroom. In one intervention, African-American students learned literary analysis by building on their linguistic practice of signifying. These children brought their cultural heritage to bear on school subjects, and this fostered a school-based identity in which students viewed themselves as competent and engaged in school.

Conclusion

The frequent disconnect between in-school and out-of-school contexts has led some researchers to argue that transfer is unimportant. In 1988, Jean Lave compared how people solved school math problems and best-buy shopping problems. The adults she studied rarely used their school algorithms when shopping. Because they were competent shoppers and viewed themselves as such, one might conclude that school-based learning does not need to transfer. This conclusion, however, is predicated on a narrow view of transfer that is limited to identical uses of what one has learned or to identical expressions of identity.

From an educational perspective, the primary function of transfer should be to prepare people to learn something new. So, even though shoppers did not use the exact algorithms they had learned in school, the school-based instruction prepared them to learn to solve best-buy problems when they did not have paper and pencil at hand. This is the central relevance of transfer for education. Educators cannot create experts who spontaneously transfer their knowledge or identities to handle every problem or context that might arise. Instead, educators can only put students on a trajectory to expertise by preparing them to transfer for future learning.

See also: Learning, subentries on Analogical Reasoning, Causal Reasoning, Conceptual Change.

Bibliography

Boaler, Jo, and Greeno, James G. 2000. "Identity, Agency, and Knowing in Mathematical Worlds." In Multiple Perspectives on Mathematics Teaching and Learning, ed. Jo Boaler. Westport, CT: Ablex.

Bransford, John D.; Franks, Jeffrey J.; Vye, Nancy J.; and Sherwood, Robert D. 1989. "New Approaches to Instruction: Because Wisdom Can't Be Told." In Similarity and Analogical Reasoning, ed. Stella Vosniadou and Andrew Ortony. Cambridge, Eng.: Cambridge University Press.

Bransford, John D., and Schwartz, Daniel L. 1999. "Rethinking Transfer: A Simple Proposal with Multiple Implications." In Review of Research in Education, ed. Asghar Iran-Nejad and P. David Pearson. Washington, DC: American Educational Research Association.

Brown, Ann L., and Kane, Mary Jo. 1988. "Preschool Children Can Learn to Transfer: Learning to Learn and Learning from Example." Cognitive Psychology 3 (4):275–293.

Ceci, Stephen J., and Ruiz, Ana. 1993. "Transfer, Abstractness, and Intelligence." In Transfer on Trial, ed. Douglas K. Detterman and Robert J. Sternberg. Stamford, CT: Ablex.

Chi, Michelene T.; Glaser, Robert; and Farr, Marshall J. 1988. The Nature of Expertise. Hillsdale, NJ: Erlbaum.

Dunbar, Kevin. 1997. "How Scientists Think: Online Creativity and Conceptual Change in Science." In Creative Thought, ed. Thomas B. Ward, Stephen M. Smith, and Jyotsna Vaid. Washington, DC: APA.

Gentner, Dedre. 1989. "The Mechanisms of Analogical Reasoning." In Similarity and Analogical Reasoning, ed. Stella Vosniadou and Andrew Ortony. Cambridge, Eng.: Cambridge University Press.

Gick, Mary L., and Holyoak, Keith J. 1983. "Schema Induction and Analogical Transfer." Cognitive Psychology 15 (1):1–38.

Godden, D. R., and Baddeley, A. D. 1975. "Context-Dependent Memory in Two Natural Environments: On Land and Under Water." British Journal of Psychology 66 (3):325–331.

Lave, Jean. 1988. Cognition in Practice. Cambridge, Eng.: Cambridge University Press.

Lee, Carol. 1995. "A Culturally Based Cognitive Apprenticeship: Teaching African-American High School Students Skills of Literary Interpretation." Reading Research Quarterly 30 (4):608–630.

Moll, Luis C., and Greenberg, James B. 1990. "Creating Zones of Possibilities: Combining Social Contexts for Instruction." In Vygotsky and Education, ed. Luis C. Moll. Cambridge, Eng.: Cambridge University Press.

Nisbett, Richard E.; Fong, Geoffrey T.; Lehman, Darrin R.; and Cheng, Patricia W. 1987. "Teaching Reasoning." Science 238 (4827):625–631.

Novick, Laura R. 1988. "Analogical Transfer, Problem Similarity, and Expertise." Journal of Experimental Psychology: Learning, Memory, and Cognition 14 (3):510–520.

Peng, Kaiping, and Nisbett, Richard E. 1999. "Culture, Dialectics, and Reasoning about Contradiction." American Psychologist 54 (9):741–754.

Schwartz, Daniel L., and Bransford, John D. 1998. "A Time for Telling." Cognition and Instruction 16 (4):475–522.

Daniel L. Schwartz

Na'ilah Nasir

Learning

views updated May 11 2018

LEARNING

Learning can occur in a variety of manners. An organism can learn associations between events in its environment (classical or respondent conditioning), can learn from the reinforcements or punishments that follow its behaviors (operant or instrumental conditioning), and can also learn through observation of those around it (observational learning). Learning principles are of particular importance for school performance.

Classical or Respondent Conditioning

In the early twentieth century, Ivan Pavlov, a Russian scientist, stumbled upon an important discovery for the field of behavioral psychology. While studying digestion in dogs, he discovered that after being fed a few times, the animals would salivate before actually receiving food. The dogs were associating external cues, such as the sound of the food cabinet being opened, with being fed, so they would salivate upon hearing these sounds before they saw the food.

This phenomenon is called classical conditioning or respondent conditioning. Pavlov found that by pairing a previously neutral stimulus, such as the sound of a cabinet being opened, with a stimulus that generates an automatic response, such as presenting meat, which automatically causes dogs to salivate, a dog will come to associate the neutral stimulus with the automatic response. After many pairings of the neutral stimulus (sound) with the automatic stimulus (meat), the neutral stimulus alone, without the automatic or unconditioned stimulus, produced salivation. So, eventually, just hearing the cabinet being opened was sufficient for Pavlov's dogs to begin salivating.

Under natural circumstances, food causes a dog to salivate. This response is not learned or conditioned, so the food is called an unconditioned stimulus, while salivation is called an unconditioned response because it occurs without any prior conditioning or learning. After multiple pairings of the sound with food, however, the sound alone would cause the dogs to salivate. The sound has now become a conditioned stimulus, and the salivation is the conditioned response, because the dogs have been conditioned to salivate to the sound.

Pavlov later found that he did not even need to pair the conditioned stimulus (sound) directly with the unconditioned stimulus (food) in order to cause a conditioned response. He found that if he first conditioned the dogs to salivate at the sound, and then paired the sound with a wooden block, the dogs would eventually salivate at the sight of the block alone even though the block itself was never paired with the food. This is called second-order conditioning; a second neutral stimulus is paired with the conditioned stimulus and eventually becomes a conditioned stimulus as well.

In the 1920s John Watson, an American psychologist, applied the principles of classical conditioning to human beings. He conducted an experiment on an eleven-month-old baby, "Little Albert," in which a startling noise occurred as Albert was presented with a white rat. Startling noises are unconditionally upsetting to infants, causing them to cry and crawl away, whereas young children are not naturally afraid of white rats. After multiple pairings of the rat and the startling noise, however, Little Albert developed a fear of the rat, crying and crawling away even when the loud noise did not occur. Watson thus showed that classical conditioning also works with humans, and that it works for emotional responses as well as physiological ones.

Manipulating Classical Conditioning

Researchers have found many phenomena associated with classical conditioning. In some cases a neutral stimulus very similar to the conditioned stimulus will elicit the conditioned response even if it has never been paired with the unconditioned stimulus before; this phenomenon is known as generalization. An example of generalization in Pavlov's dogs would be the dogs salivating to a sound of a different pitch than the one that was paired with food. Discrimination training can eliminate generalization by presenting the generalized stimulus without the unconditioned stimulus (food).

It is possible to erase the effects of conditioning by presenting the conditioned stimulus without the unconditioned stimulus. In other words, after successfully conditioning a dog to salivate to a sound, experimenters can eliminate the effects of the conditioning by presenting the sound many times without presenting the food. This process is called extinction. For a period following extinction, the original conditioned response might return; this phenomenon is called spontaneous recovery, and it can be eliminated through a new series of extinction trials.
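These acquisition and extinction dynamics can be captured by simple quantitative models. The sketch below uses the Rescorla-Wagner learning rule, a standard later formalization that this article does not itself discuss; the learning rate and the number of trials are illustrative assumptions.

```python
# A minimal sketch of acquisition and extinction using the Rescorla-Wagner
# rule, V <- V + alpha * (lambda - V), where V is the associative strength
# of the conditioned stimulus (the sound) and lambda is 1.0 on trials where
# the unconditioned stimulus (food) follows and 0.0 on sound-alone trials.
# The learning rate alpha = 0.3 is an illustrative assumption.

def rescorla_wagner(trials, alpha=0.3, v=0.0):
    history = []
    for lam in trials:             # one lambda value per trial
        v += alpha * (lam - v)     # error-correction update
        history.append(round(v, 2))
    return history

# Ten sound-food pairings (acquisition), then ten sound-alone trials
# (extinction): strength climbs toward 1.0, then decays toward 0.0,
# mirroring the rise and disappearance of conditioned salivation.
print(rescorla_wagner([1.0] * 10 + [0.0] * 10))
```

Note that this simple rule captures acquisition and extinction but not spontaneous recovery, which is one reason extinction is thought to involve new learning rather than mere erasure of the original association.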

Classical Conditioning and Psychopathology

Behavioral theorists believe that certain psychological disorders are a result of a form of classical conditioning. Watson's experiment on Little Albert suggests that phobias might be learned through pairing a neutral or harmless stimulus with an unconditionally frightening event, thus causing the person to associate fear with the harmless stimulus. Treatment for phobias involves extinguishing the association between fear and the neutral stimulus through so-called systematic desensitization and flooding.

In systematic desensitization, patients are slowly presented with the feared object in stages, beginning with the least-feared situation and ending with a situation that provokes the most fear. The therapist teaches the patient to remain relaxed as the feared object approaches so eventually the patient associates it not with fear but with calmness and relaxation. One example is the case of Little Albert, in which Watson attempted to extinguish the baby's fear of the white rat by giving him food (a stimulus that elicited pleasure) while showing the white rat. In this case, the white rat ceases to be paired with a fear-inducing stimulus and instead becomes linked to a pleasure-inducing stimulus.

In flooding, the therapist also attempts to alter the classically conditioned pairing. In this case, however, the patient agrees to be surrounded by the fear-inducing stimulus and not to attempt to escape the situation. Flooding functions like extinction: the feared stimulus is present without any aversive outcome, so the association between the stimulus and the fear response weakens. After a long period, the patient ceases to be afraid of the stimulus.

Thus, classical or respondent conditioning is a purely behavioral type of learning. Animals or people conditioned in this manner do not consciously learn the associations between the stimuli and the responses. Instead, because the pairings occur repeatedly, the conditioned stimulus elicits the conditioned response unconsciously. In some instances, however, these responses are not automatic; instead, certain outcomes will induce the animals or humans to repeat the behavior while other outcomes cause them not to repeat the behavior.

Operant or Instrumental Conditioning

Operant conditioning, also known as instrumental conditioning, is based on the consequences that follow an organism's behavior. Behaviors that are followed by a reward, or reinforcement, usually increase in frequency, while behaviors that are followed by punishments usually decrease in frequency. The context in which the rewards or punishments are received affects how the association between the behavior and its consequence is learned. In addition, how often reinforcement follows any particular behavior affects how well the association is learned.

The Effect of Reward or Punishment on Behavior

American psychologist Edward Thorndike's Law of Effect states that depending on the outcome, some responses get weakened while other responses get strengthened, and this process eventually leads to learning. Thorndike noted that when an animal was rewarded for a certain behavior, that behavior became progressively more frequent while behaviors that did not elicit a reward weakened and became sporadic, finally disappearing altogether. In other words, unlike classical conditioning, what follows a behavior or response is what is primarily important.

In his mid-twentieth-century experiments with rats and pigeons, American psychologist B. F. Skinner found that animals use their behaviors to shape their environment, acting on the environment in order to bring about a reward or to avoid a punishment. Skinner called this type of learning operant or instrumental conditioning. A reward or reinforcement is an outcome that increases the likelihood that an animal will repeat the behavior. There are two types of reinforcement: positive and negative. Positive reinforcement is something given that increases the chance that the animal or person will repeat the behavior; for example, smiling or praise whenever a student raises her hand is a form of positive reinforcement if it results in increased hand-raising. Negative reinforcement occurs when something is taken away; stopping an electric shock to elicit a behavior from a rat is an example, because whatever behavior the rat exhibited to terminate the shock will increase.

A punishment, on the other hand, is an outcome that decreases the likelihood that the behavior preceding it will recur. For example, spanking or slapping a child is a punishment, as is grounding, because all three can be expected to reduce the occurrence of the behavior that preceded them.

There are a number of ways in which someone can manipulate an animal's or a person's behavior using operant or instrumental conditioning. One of these methods is called shaping and involves reinforcing behaviors as they approach the desired goal. Suppose a person wants to train a dog to jump through a hoop. He would first reward the dog for turning toward the hoop, then perhaps for approaching the hoop. Eventually he might reward the dog only for walking through the hoop if it is low to the ground. Finally, he would raise the hoop off the ground and reward the dog only for jumping through the hoop.
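As a rough formal illustration of this procedure, the toy simulation below reinforces successive approximations and gradually raises the criterion for reward. The numbers, the "jump height" measure, and the learning rule are all hypothetical simplifications, not a model taken from the conditioning literature.

```python
import random

# A toy sketch of shaping by successive approximation. The "dog" produces
# jump heights that vary around its current skill level; any attempt that
# meets the current criterion is reinforced, which nudges skill upward,
# and the criterion is then raised toward the final goal height.
# All parameter values are illustrative assumptions.

def shape(goal=1.0, sessions=1000):
    criterion = 0.05                                 # reward any small approach at first
    skill = 0.0                                      # the dog's typical performance
    for _ in range(sessions):
        attempt = max(0.0, random.gauss(skill, 0.1))
        if attempt >= criterion:                     # reinforce the approximation
            skill += 0.25 * (attempt - skill)        # reinforced behavior recurs
            criterion = min(goal, criterion + 0.01)  # then raise the bar slightly
    return round(skill, 2), round(criterion, 2)

print(shape())  # skill and criterion both end near the goal height
```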

The Role of Context

Context is extremely important for operant conditioning to occur. Both animals and people must learn that certain behaviors are appropriate in some contexts but not in others. For instance, a young child might learn that it is acceptable to scribble with a crayon on paper but not on the wall. Similarly, Skinner found that animals can discriminate between different stimuli in order to receive a reward. A pigeon can discriminate between two different colored lights and thereby learn that if it pecks a lever when a green light is on it will receive food, but if it pecks when the red light is on it will not receive food.

What is more, animals can discriminate between different behaviors elicited by different contexts. For example, a rat can learn that turning around clockwise in its cage will result in getting food but that in a different cage turning counterclockwise will bring a reward. Animals will also generalize to other stimuli, performing the desired behavior when a slightly different stimulus occurs. For instance, a pigeon that knows that pecking a lever when a green light is on will bring food might also peck the lever when a different-colored light is on. Both generalization and discrimination help animals and people learn which behaviors are appropriate in which contexts.

Reinforcement Schedules

The timing of reinforcement can also affect the frequency of the desired response. Delaying reinforcement slows learning down, although research shows that humans can learn from delayed reinforcements, and that it is often difficult to forgo immediate positive outcomes in order to avoid adverse ones later.

The schedule of reinforcement also plays a critical role in affecting response rates. There are two types of reinforcement schedules: interval schedules and ratio schedules. Interval schedules are reinforcement schedules in which rewards are given after a certain period of time. Ratio schedules are schedules in which rewards are given after a specific number of correct responses. As seen below, the time interval or response ratio can either be fixed or variable.

The schedule that elicits the highest rate of responding is the fixed ratio schedule. In this case, the animal knows it will receive a reward after a fixed number of responses, so it produces that number of responses as quickly and as often as possible. This phenomenon also occurs with people; if craftspeople are paid for each object they make, they will try to produce as many objects as possible in order to maximize their rewards.

Generating nearly as high a response rate as the fixed ratio schedule is the variable ratio schedule. In this case, the number of responses needed to produce a reward varies, so the animal or person will emit the desired behavior frequently on the chance that the next response might bring the reward. Lotteries and slot machines operate on a variable ratio schedule, which is why they induce people to keep playing.

Interval schedules tend to produce lower response rates. A fixed interval schedule produces few responses early in the interval, with an increase as the time for the reward approaches. One example in human behavior is the passing of bills in Congress: as elections approach, the number of bills passed increases dramatically, with a swift decline after the election. A variable interval schedule, on the other hand, produces a slow but steady rate of response; for instance, a teacher giving "pop" quizzes at irregular intervals encourages her students to maintain a consistent level of studying throughout the semester.
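Each of the four schedules can be stated precisely as a rule for deciding when a response earns a reward, as in the sketch below. The ratio of ten responses and the sixty-second interval are assumed values for illustration, not figures drawn from the research described above.

```python
import random

# Minimal decision rules for the four reinforcement schedules; a trainer
# would apply one of these each time the animal responds.

def fixed_ratio(response_count, ratio=10):
    # Reward every `ratio`-th response (piecework pay for craftspeople).
    return response_count % ratio == 0

def variable_ratio(ratio=10):
    # Reward each response with probability 1/ratio, so the number of
    # responses per reward varies around `ratio` (slot machines, lotteries).
    return random.random() < 1.0 / ratio

def fixed_interval(now, last_reward_time, interval=60.0):
    # Reward the first response made at least `interval` seconds after the
    # previous reward; responding typically accelerates as that time nears.
    return now - last_reward_time >= interval

def variable_interval(now, available_at):
    # Reward the first response after an unpredictable delay; after each
    # reward the caller draws the next delay, for example with
    # random.expovariate(1 / 60.0) for a sixty-second average ("pop" quizzes).
    return now >= available_at
```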

Although classical or respondent conditioning involves automatic responses to stimuli, operant or instrumental conditioning results from the decision to produce a certain behavior in order to receive a reward or avoid a punishment.

Observational Learning

Learning does not always occur directly as a result of punishment or reinforcement, but can occur through the process of watching others. Children can learn from observing rewards or punishments given to someone else, and do not need to be the recipients themselves. This form of social learning is called observational learning. The terms "imitation" and "modeling" are often used interchangeably and are types of observational learning.

Imitation and Modeling

Imitation may be a powerful means through which infants can learn from those around them. Andrew Meltzoff and M. Keith Moore's classic 1977 study illustrated imitation of tongue protrusion, lip protrusion, and mouth opening by two- to three-week-old infants. For this behavior to occur, infants must match what they see the model doing with what they feel themselves doing; such matching has been demonstrated in infants three days old. Thus, it seems that imitation occurs from birth onward and that infants may learn many new behaviors in this way.

As children grow, they imitate more complex behaviors than simple mouth movements. Albert Bandura has conducted extensive research on observational learning in children. His best-known study of modeling in children involved aggressive behavior. While children observed, models either physically attacked or nonaggressively interacted with a large inflatable doll called Bobo. The children were then given the opportunity to play with Bobo. Those who had observed the aggressive model displayed twice as much aggressive behavior as those who had observed the nonaggressive model. In addition, the children who had observed the aggressive model performed aggressive acts that had not been modeled, illustrating that generalization had occurred. These findings indicate that children can indeed learn what behavior is appropriate in a given situation through observation alone.

Observational learning can have other effects as well. The opposite of the Bobo findings can also occur: observation can inhibit a class of behaviors, making them less likely. Such inhibition often occurs after observing another person being punished for performing a certain type of behavior, such as aggressive behavior in general.

Through his studies on observational learning, Bandura developed his cognitive theory of observational learning. He posited that four mental processes need to be present in order for observational learning to occur. One mental process is that of attention; that is, a child must find the model interesting enough to hold the child's attention. The child must also be able to hold the model's behavior in memory in order to imitate the behavior later. In addition, without sufficient motor control, the child would be unable to mimic the model's behaviors. Finally, motivation is integral in that the child must have a reason to perform the behavior that was modeled.

Bandura's cognitive theory of observational learning is helpful for understanding why children imitate behavior in some cases and not others. In particular, children are more likely to imitate a model when they see the model's behavior rewarded rather than punished. In addition, self-efficacy beliefs play into a child's choice of imitation. If the child believes that she does not have the talent necessary to imitate a particular behavior, she will not attempt to do so. Thus it seems that both cognitive and social factors come into play in observational learning, and that is why Bandura's theory is also called a social cognitive theory of learning.

Observational Learning in Practice

Observational learning can be seen in practice in many settings. First, it seems that children can imitate behaviors they have seen on television, and these are often aggressive behaviors. Many factors determine whether a child will imitate an aggressive model on television. The observing child must first identify with the model in order to consider imitating the model. The consequences of the aggressive behavior are also a factor. In addition, if the child is old enough to realize that aggression on television does not represent reality, he is less likely to imitate the behavior. Finally, what the parents tell the child about the aggressive behavior he is viewing also plays a role in whether or not the child will imitate the behavior.

Observational learning is also important in the learning of sex roles. It has been found that children can learn appropriate behaviors for each sex by reading, watching television, or observing real models.

Another type of behavior that has been found to be learned through observation is prosocial behavior (positive or helpful behavior). Children increase their giving and helping behaviors after observing a peer or adult doing the same and even after viewing such behavior on a television program. In addition, it has been found that modeling of prosocial behavior results in more prosocial behavior in the learner than simple statements that prosocial behavior is good.

Observational learning is often used in therapeutic settings. People can be trained in assertiveness through observation of an assertive therapist. In addition, people can learn to overcome phobias through observation of others interacting calmly with the object of their fear.

In sum, imitation and modeling, both of which are forms of observational learning, begin with simple behaviors in infancy and continue on to complex behaviors in childhood and adulthood. Bandura has theorized that cognitive and social factors interact in observational learning and affect whether an observer will imitate a behavior or not. Observational learning occurs in many settings and has also been used in therapy.

Relationship of Learning to School Performance

The concepts discussed above (such as conditioning, imitation, and modeling) would seem to have little role to play in modern education. Teachers, especially in the later grades, favor so-called constructive approaches to learning, which means that they arrange the environment in such a way that children are allowed to discover relationships on their own. This approach stands in contrast to the concept of conditioning, where the child can be seen as a passive receptacle who absorbs what the teacher presents, without regard to how it fits with the child's preexisting knowledge. Educators continue to debate these two extreme approaches, and some forms of conditioning and imitation, such as drilling multiplication tables, continue to be popular in U.S. schools. Furthermore, in classes for children with special needs, it is still common for classical and operant principles to shape children's behavior. In such classrooms, teachers award points for acceptable behavior and take away points for unacceptable behavior. Children can redeem these points for perks such as extra recess. So, notwithstanding the debate between learning theorists and constructivists, learning principles are still common in classrooms although the application is sometimes not a conscious result of the teacher's planning.

See also: MEMORY

Bibliography

Bandura, Albert. Social Foundations of Thought and Action: A Social Cognitive Theory. Englewood Cliffs, NJ: Prentice-Hall, 1986.

Bandura, Albert, Dorothea Ross, and Sheila Ross. "Transmission of Aggression through Imitation of Aggressive Models." Journal of Abnormal and Social Psychology 63 (1961):575-582.

Domjan, Michael. The Essentials of Conditioning and Learning. Pacific Grove, CA: Brooks/Cole, 1996.

Friedrich, Lynette, and Aletha Stein. "Prosocial Television and Young Children: The Effects of Verbal Labeling and Role Playing on Learning and Behavior." Child Development 16 (1975):27-36.

Hay, Dale, and Patricia Murray. "Giving and Requesting: Social Facilitation of Infants' Offers to Adults." Infant Behavior and Development 5 (1982):301-310.

Meltzoff, Andrew, and M. Keith Moore. "Newborn Infants Imitate Adult Facial Gestures." Child Development 54 (1983):702-709.

Parke, Ross, and Ronald Slaby. "The Development of Aggression." In Paul Mussen, ed., Handbook of Child Psychology, 4th edition. New York: Wiley, 1983.

Schiamberg, Lawrence. Child and Adolescent Development. New York: Macmillan, 1988.

Spiegler, Michael D., and David Guevremont. Contemporary Behavior Therapy, 4th edition. Elmsford, NY: Pergamon, 1990.

Stephen J. Ceci

Rebecca L. Fraser

Maria Gabriela Pereira

Learning

views updated May 18 2018

LEARNING

Learning is the process of forming associations that result in a relatively long-lasting change in the organism. Learning that involves relations between events is called associative learning, and the primary forms of associative learning are called classical and instrumental conditioning. Learning and memory are closely associated phenomena because memory occurs as a consequence of learning.

Older adults typically complain that their memory is not as good as it used to be, even though it may be their ability to learn that is actually most affected. Whereas their memories from the young adult period of their lives may be quite intact, older adults have greater difficulty remembering people's names or the items to pick up at the grocery store. This type of memory involves the formation of new associations and thus involves learning as well as memory. When a memory can be elicited, it indicates that learning has occurred. However, learning can occur and still not be demonstrated as a memory at a later time. The learning may have been poor in the first place (a common problem in normal aging), the memory may have decayed with time (over the older adults' life spans there is a much longer time period available for memory decay to occur), there may be injury or impairment in the brain (more likely in older than in younger adults), or the memory may be temporarily unavailable for retrieval because of the particular state of the person. For instance, older adults have sensory and perceptual deficits that affect memory test performance, and some also get more anxious and fatigued during testing than do younger adults.

Associative learning is most commonly investigated with classical (Pavlovian) and instrumental (Thorndikian) conditioning. Both paradigms involve exposing the organism to relations between events. The history of the study of classical conditioning is relatively long, beginning late in the nineteenth century. Although he received the Nobel prize in 1904 for his research on the physiology of digestion in dogs, Ivan Petrovitch Pavlov had already turned his attention to formally investigating the phenomenon of classical conditioning. Pavlov found that when he presented a neutral stimulus such as a bell shortly before he placed meat powder on the tongue of a dog, the bell would come to elicit a response similar to the one elicited by the meat powder: the dog would salivate when it heard the bell.

Instrumental conditioning was first systematically studied by Edward Lee Thorndike early in the twentieth century. In the case of instrumental conditioning, reinforcement (the consequence) is contingent upon the occurrence of a given behavioral response. For most of the twentieth century, techniques for the investigation of simple associative learning have been available, and these techniques have been applied to the study of normal aging and neuropathology in aging. Despite paradigmatic differences between classical and instrumental conditioning, the existing data in animal studies indicate that age-related differences in instrumental performance usually parallel those found in classical conditioning. In humans, there is far more evidence of deficits in classical than in instrumental conditioning.

Classical conditioning

Pavlov was the first to observe that old dogs classically condition more slowly than young dogs. This initiated the very fruitful investigation of classical conditioning and normal aging. First in Germany and later in the United States, investigators moved from studying the slower autonomic nervous system responses such as salivation to assessment of conditioning in the somatic nervous system using the eyeblink response. The standard format for the presentation of stimuli in classical conditioning was named the "delay" procedure because there is a delay between the onset of the conditioned stimulus (CS) and the onset of the unconditioned stimulus (US). A neutral stimulus such as a tone or light is the CS, and it is presented for a duration of around half a second. While it is still on, the reflex-eliciting US air puff is presented, and the CS and US end together 50 to 100 msec later. Learning occurs when the organism responds to the CS before the onset of the US. This learned response is called the conditioned response (CR). Many additional classical conditioning procedures are used, but most studies of aging have used the delay procedure discussed here.
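Using the approximate figures just given, the timing of a single delay-procedure trial can be written out explicitly. Exact durations vary across laboratories, so the values below are only representative, not the parameters of any particular study.

```python
# Representative timing of one trial in the "delay" eyeblink procedure,
# based on the approximate figures in the text (values vary by laboratory).

CS_ONSET_MS = 0                                    # tone or light comes on
CS_DURATION_MS = 500                               # CS lasts about half a second
US_DURATION_MS = 100                               # air puff overlaps the final 50-100 ms
US_ONSET_MS = CS_ONSET_MS + CS_DURATION_MS - US_DURATION_MS  # = 400 ms
TRIAL_END_MS = CS_ONSET_MS + CS_DURATION_MS        # CS and US end together

def is_conditioned_response(blink_onset_ms):
    # A conditioned response is a blink that begins after CS onset but
    # before the air puff arrives, i.e., an anticipatory blink.
    return CS_ONSET_MS < blink_onset_ms < US_ONSET_MS

print(is_conditioned_response(350))  # True: anticipatory, learned blink
print(is_conditioned_response(450))  # False: reflexive response to the puff
```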

Differences between classical eyeblink conditioning in young and elderly nursing home residents in a single seventy- to ninety-trial conditioning session were first observed in the early 1950s by Russian scientists and reported in the United States by Edward Jerome in the first Handbook of the Psychology of Aging in 1959. These results were replicated and extended to normal, community-residing older adults. The main and striking result was the relative inability of the older subjects to acquire CRs. Several studies in Diana Woodruff-Pak's laboratory demonstrated that age differences in eyeblink conditioning do not begin in old age; rather, the deficits begin to appear in midlife, by the age of fifty years.

The neural circuitry underlying eyeblink classical conditioning in all mammals, including humans, has been almost completely identified. The essential site of the changes that occur during learning resides in the cerebellum; the hippocampus, while not essential, can affect the rate of conditioning. Significant changes occur in the cerebellum around the age of fifty years. Anatomical (volumetric) brain magnetic resonance imaging (MRI), delay eyeblink conditioning, and extensive neuropsychological testing were carried out in Woodruff-Pak's laboratory with healthy older subjects. The correlation between the volume of the cerebellum and eyeblink conditioning performance was exceedingly high. Hippocampal volume and total cerebral volume were also measured, but neither correlated with eyeblink classical conditioning. A similarly high correlation between cerebellar volume and eyeblink conditioning was found in young adults. These volumetric MRI results add to the increasing evidence in humans demonstrating a relationship between the integrity of the cerebellum and eyeblink classical conditioning.

Although age-related changes in eyeblink classical conditioning do not impact the daily life of older adults, these changes have focused researchers' attention on aging in a brain structure that gerontological investigations of cognition have relatively overlooked. Recent findings have demonstrated a role for the cerebellum in such cognitive domains as attention, working memory, visuospatial processing, and language. Age-related changes in the cerebellum may play a role in the aging of these cognitive abilities.

Instrumental conditioning

Instrumental conditioning refers to the type of learning in which the probability of a response is altered by a change in the consequences for that response. When a grandmother smiles and says, "Good boy," right after her grandson feeds the dog, the probability that the boy will feed the dog again is increased. Relatively few attempts have been made to create animal models of learning, memory, and aging using instrumental conditioning, and less research has been carried out on aging and instrumental conditioning than has been carried out on aging and classical conditioning.

To summarize the results on instrumental conditioning in aging animals, it appears that there is a consistent deficit that is restricted to the early association of the response with its consequence, but there is less of a difference once a response is established to a criterion level. This may be analogous to small or nonexistent differences in memory between young and older humans once initial learning is equated.

These observations indicate that only under certain circumstances are age-related effects on learning significant. These effects may be overcome by additional training, and they appear to reflect quantitative rather than qualitative differences. In many instances there is intact memory and relearning of previously learned behaviors by older organisms. This result suggests that recent experience involving activation of the neurological elements contributing to learning and memory may ameliorate age-related differences in learning.

Diana S. Woodruff-Pak

See also Memory.

BIBLIOGRAPHY

Green, J. T., and Woodruff-Pak, D. S. "Eye-blink Classical Conditioning: Hippocampus is for Multiple Associations as Cerebellum is for Association-Response." Psychological Bulletin 126 (2000): 138–158.

Houston, F. P.; Stevenson, G. D.; McNaughton, B. L.; and Barnes, C. A. "Effects of Age on the Generalization and Incubation of Memory in the F344 Rat." Learning and Memory 6 (1999): 111–119.

Port, R. L.; Murphy, H. A.; and Magee, R. A. "Age-related Impairment in Instrumental Conditioning is Restricted to Initial Acquisition." Experimental Aging Research 22 (1996): 73–81.

Woodruff-Pak, D. S. Neuropsychology of Aging. Oxford, U.K.: Blackwell, 1997.

Woodruff-Pak, D. S.; Goldenberg, G.; Downey-Lamb, M. M.; Boyko, O. B.; and Lemieux, S. K. "Cerebellar Volume in Humans Related to Magnitude of Classical Conditioning." NeuroReport 14 (2000): 609–615.

Woodruff-Pak, D. S., and Jaeger, M. E. "Predictors of Eyeblink Classical Conditioning over the Adult Age Span." Psychology and Aging 13 (1998): 193–205.

Woodruff-Pak, D. S., and Steinmetz, J. E., eds. Eyeblink Classical Conditioning. Vol. 1, Applications in Humans. Vol. 2, Animal Models. Boston: Kluwer Academic Publishers, 2000.

Learning

views updated May 14 2018

240. Learning

See also 233. KNOWLEDGE; 338. QUESTIONING; 405. UNDERSTANDING.

academicism, academism
1. the mode of teaching or of procedure in a private school, college, or university.
2. a tendency toward traditionalism or conventionalism in art, literature, music, etc.
3. any attitudes or ideas that are learned or scholarly but lacking in worldliness, common sense, or practicality. academic, n., adj.; academist, n.
academism
1. the philosophy of the school founded by Plato.
2. academicism. academist, n.; academic, academical, adj.
anti-intellectualism
antagonism to learning, education, and the educated, expressed in literature in a conscious display of simplicity, earthiness, even colorful semi-literacy. anti-intellectual, n., adj.
autodidactics
the process of teaching oneself. autodidact, n.
bluestockingism
1. the state of being a pedantic or literal-minded woman.
2. behavior characteristic of such a woman.
clerisy
men of learning as a class or collectively; the intelligentsia or literati.
didacticism
1. the practice of valuing literature, etc., primarily for its instructional content.
2. an inclination to teach or lecture others too much, especially by preaching and moralizing.
3. a pedantic, dull method of teaching. didact, n.; didactic, adj.
didactics
the art or science of teaching.
doctrinism
the state of being devoted to something that is taught. doctrinist, n.
educationist
1. British. an educator.
2. a specialist in the theory and methods of education. Also called educationalist.
Froebelist
a person who supports or uses the system of kindergarten education developed by Friedrich Froebel, German educational reformer. Also Froebelian.
gymnasiast
a student in a gymnasium, a form of high school in Europe. See also 26. ATHLETICS.
Gymnasium
(in Europe) a name given to a high school at which students prepare for university entrance.
literati
men of letters or learning; scholars as a group.
literator
a scholarly or literary person; one of the literati.
lucubration
1. the practice of reading, writing, or studying at night, especially by artificial light; burning the midnight oil.
2. the art or practice of writing learnedly. lucubrator, n.; lucubrate, v.
opsimathy
Rare. 1. a late education.
2. the process of acquiring education late in life.
paideutics, paedeutics
the science of learning.
pedagogics, paedogogics
the science or art of teaching or education. pedagogue, paedagogue, pedagog, n.; pedagogic, paedagogic, pedagogical, paedagogical, adj.
pedagogism
1. the art of teaching.
2. teaching that is pedantic, dogmatic, and formal.
pedagogy, paedagogy
1. the function or work of a teacher; teaching.
2. the art or method of teaching; pedagogics.
pedanticism
1. the character or practices of a pedant, as excessive display of learning.
2. a slavish attention to rules, details, etc.; pedantry. pedant, n.; pedantic, adj.
pedantocracy
rule or government by pedants; domination of society by pedants.
pedantry
pedanticism, def. 2.
polytechnic
a school of higher education offering instruction in a variety of vocational, technical, and scientific subjects. polytechnic, adj.
professorialism
the qualities, actions, and thoughts characteristic of a professor. professorial, adj.
propaedeutics
the basic principles and rules preliminary to the study of an art or science. propaedeutic, propaedeutical, adj.
quadrivium
in the Middle Ages, one of the two divisions of the seven liberal arts, comprising arithmetic, geometry, astronomy, and music. See also trivium.
realia
objects, as real money, utensils, etc., used by a teacher in the classroom to illustrate aspects of daily life.
savant
a scholar or person of great learning.
scholarch
a head of a school, especially the head of one of the ancient Athenian schools of philosophy.
sophist
1. Ancient Greece. a teacher of rhetoric, philosophy, etc.; hence, a learned person.
2. one who is given to the specious arguments often used by the sophists. sophistic, sophistical, adj.
sophistry
1. the teachings and ways of teaching of the Greek sophists.
2. specious or fallacious reasoning, as was sometimes used by the sophists.
Sorbonist
a doctor of the Sorbonne, of the University of Paris.
symposiarch
Ancient Greece. the master of a feast or symposium; hence, a person presiding over a banquet or formal discussion.
symposiast
Rare. a person participating in a symposium.
symposium
learned discussion of a particular topic. Also spelled symposion.
technography
the study and description of arts and sciences from the point of view of their historical development and geographical and ethnic distribution.
theorist
a person who forms theories or who specializes in the theory of a particular subject.
trivium
in the Middle Ages, one of the two divisions of the seven liberal arts, comprising logic, grammar, and rhetoric. See also quadrivium.
tyrology
Rare. a set of instructions for beginners.

Learning

views updated May 23 2018

Learning

The need for experimental proof is a key part of the scientific definition of learning. As outside observers of animal behavior, humans are practically incapable of understanding which cognitive processes, if any, lead to the production of a certain animal behavior. For example, a pigeon may be trained to type the letters "f-o-o-d" on a typewriter when it is hungry.

Although we may be tempted to conclude that the pigeon has learned a new word, this is unlikely. There may be many explanations for the pigeon's behavior. Only with carefully designed experiments and a general theory of learning can we begin to dissect exactly what processes cause the animal to behave this way. Although we may observe animals performing complex tasks in the wild, we cannot conclude that the animal has learned without rigorous experimental tests in a controlled setting.

Even in a strictly monitored experiment, the results of a learning test can be inconclusive. The general model for such experiments is to train an animal to perform a task. This training is accomplished by choosing a natural behavior of the animal and modifying or encouraging that behavior with a treat and a signal. Eventually, the signal alone will cause the animal to perform that behavior in order to receive the treat.

In other words, the unconditioned stimulus, or natural behavior of the animal, is paired with a conditioned stimulus, the chosen signal. When the animal responds to the conditioned stimulus with the appropriate behavior, it is rewarded with a treat, which leads to an eating reward. Thus, a rat that displays the normal ratlike behavior of rearing on its hind legs is suddenly rewarded with a piece of fruit every time it performs the behavior. At the same time that it rears, a red light flashes at the side of the cage. Eventually, flashing the red light will cause the rat to rear on its hind legs, supposedly because it expects to receive its treat. In this case, the rearing behavior is the unconditioned stimulus, the red light is the conditioned stimulus, and the fruit is the reward.

Psychologists and cognitive biologists have long argued over what exactly this hypothetical rat is learning. It is possible that the rat equates the conditioned stimulus, abbreviated as CS, with the treat reward, abbreviated as R. If this is true, then the rat learns that the flashing light means that treats are coming.

Alternatively, the rat may understand that rearing, which is the unconditioned stimulus (US), leads to a treat reward; in this case the light tells the rat when to rear but has no real meaning with regard to the treat. Both of these examples assume that the rat is associating the stimulus, S, with the reward, R. This is called S-R learning. Another explanation is that the rat learned that one stimulus (rearing or the light) leads to another stimulus, the appearance of the treat.

The subtle difference between this interpretation and S-R learning is that in this case, appearance of the food is not the rat's reward. Instead, appearance of the treat is merely another stimulus that causes the rat to eat, and the actual consumption of the food is the rat's reward. Because the treat is considered another stimulus in this theory, it is referred to as S-S learning.

A class of scientists known as behaviorists believed that animals are defined solely by their behaviors, and that behaviors are determined entirely by environmental cues. Thus every animal is born with the ability to learn any new task. Two of this field's main proponents were the psychologists B. F. Skinner and Ivan Pavlov. Pavlov discovered what was called classical conditioning, a method of pairing a conditioned and an unconditioned stimulus so that eventually the conditioned stimulus alone elicits the response.

His famous experiment involved dogs, which salivate when food is presented. If a bell is rung every time the food is presented, eventually the dog will salivate to the sound of the bell even in the absence of food. Skinner used similar principles to describe the behavior of humans. He proposed the theory of operant conditioning. According to this theory, the animal performs a wide variety of activities in its daily life, and some of these activities are rewarded by a reinforcing stimulus. This reinforcement increases the effect of the operant, the behavior directly preceding the reinforcing stimulus. Operant conditioning is widely used to train animals, but it is also a theory of how they learn.

Ethologists such as Nikolaas Tinbergen disagreed with the behaviorists' opinion that the mechanisms of learning are the same for all animals. Ethologists believe that natural animal behaviors are innate, meaning that the animal is born with a neural system that promotes certain species-specific behavior. This is why animals tend to produce species-specific vocalizations and behaviors, even when they are raised in very foreign environments.

Tool use is one example of an innate behavior for several organisms, such as the male satin bowerbird of Australia, which constructs elaborate abstract designs out of twigs, leaves, and dirt to attract the female. He decorates these sculptures by crushing berries and fruits for their pigmented juice and then painting the structure using a wad of bark as a paintbrush. Elaboration of tool use is more easily taught to those species that have the innate tendency to use tools.

The combination of behaviorist and ethologist influences on the study of learning has shaped modern psychology. Animals are still used to test how humans learn, but unique species traits are now taken into consideration when interpreting the data.

see also Behavior; Tool Use.

Rebecca M. Steinberg

Bibliography

Dachowski, Lawrence, and Charles F. Flaherty. Current Topics in Animal Learning: Brain, Emotion, and Cognition. Hillsdale, NJ: Lawrence Erlbaum Associates, 1991.

Lutz, John. An Introduction to Learning and Memory. Prospect Heights, IL: Waveland Press, Inc., 2000.

Schmajuk, Nestor, and Peter C. Holland, eds. Occasion Setting: Associative Learning and Cognition in Animals. Washington, DC: American Psychological Association, 1998.

Learning

views updated May 11 2018

Learning

Learning is the alteration of behavior as a result of experience. When an organism changes its behavior, it is said to have learned. Many theories have been formulated to explain the process of learning.

Early in the twentieth century, learning was primarily described through behaviorist principles, which included the associative, or conditioned, response: the ability of an animal to connect a previously irrelevant stimulus with a particular response.

One form of associative learning, classical conditioning, is based on the pairing of two stimuli. Through an association with an unconditioned stimulus, a conditioned stimulus eventually elicits a conditioned response, even when the unconditioned stimulus is absent. The earliest and best-known documentation of associative learning was demonstrated by Ivan Pavlov, who conditioned dogs to salivate at the sound of a bell.

In operant conditioning, a response is learned because it leads to a particular consequence (reinforcement), and it is strengthened each time it is reinforced. Without practice, any learned behavior is likely to cease; however, repetition alone does not ensure learning, and eventually it produces fatigue and boredom and suppresses responses. Positive reinforcement strengthens a response when a desirable stimulus is presented after the response occurs, while negative reinforcement strengthens a response through the removal of an aversive stimulus. Generally, positive reinforcement is the most reliable and produces the best results. In many cases, once the pattern of behavior has been established, it may be sustained by partial reinforcement, in which only some responses, selected at random, are reinforced.

In contrast to classical and operant conditioning, which describe learning in terms of observable behavior, other theories focus on learning derived from motivation, memory, and cognition. Wolfgang Köhler, a founder of the Gestalt school of psychology, observed the importance of cognition in the learning process when he studied the behavior of chimpanzees. In his experiments, Köhler concluded that insight was key to the problem solving conducted by chimpanzees. The animals did not just stumble upon solutions through trial and error, but rather demonstrated a holistic understanding of problems that they solved through moments of revelation. In the 1920s, Edward Tolman illustrated how learning can involve knowledge without observable performance. The performance of rats that negotiated the same maze on consecutive days without reward improved drastically after the introduction of a goal box with food, indicating that they had developed cognitive maps of the maze before the reward was introduced, even though that learning had not been evident in their behavior.

In the 1930s, Clark L. Hull and Kenneth W. Spence introduced the drive-reduction theory. Based on the tendency of an organism to maintain balance by adjusting physiological responses, the drive-reduction theory postulated that motivation is an intervening factor in times of imbalance. Imbalances create needs, which in turn create drives; both encourage action in order to reduce the drive and meet the need. According to drive-reduction theory, the association of stimulus and response in classical and operant conditioning results in learning only if it is accompanied by drive reduction.
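Hull eventually summarized this interaction in a compact formula. The simplified form below is supplied here as an illustration (the article itself does not quote it): reaction potential, the tendency to perform a learned response, is the product of habit strength built up by reinforced practice and the current level of drive, so even a well-practiced habit produces no behavior when drive is zero.

```latex
% Simplified form of Hull's behavior equation (illustrative):
% reaction potential = habit strength x drive
\[
  {}_{S}E_{R} \;=\; {}_{S}H_{R} \times D
\]
```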

Perceptual learning theories postulate that an organism's readiness to learn is of primary importance to its survival. Perceptual skills are intimately involved in producing more effective responses to stimuli. In the laboratory, perceptual learning has been tested and measured by observing the effects of practice on perceptual abilities. Subjects are given various auditory, olfactory, and visual acuity tests. With practice, subjects improve their scores, indicating that perceptual abilities are not permanent but are modifiable by learning. In studies of animal behavior, the term perceptual learning is sometimes used to refer to those instances in which an animal learns to identify a complex set of stimuli that can be used to guide subsequent behavior. Examples of such perceptual learning include imitation and observational learning, song learning in birds, and imprinting in newborn birds and mammals. Imprinting occurs only during the first 30 or so hours of life. It is a form of learning in which a very young animal fixes its attention on the first object with which it has visual, auditory, or tactile experience and thereafter follows that object.

Observational learning, also known as modeling or imitation, is held to occur as a result of observation and its consequences. Behavior is learned through imitation; however, behavior that is rewarded is more readily imitated than behavior that is punished. Termed vicarious conditioning, this type of learning is present when there is attention to the behavior, retention and the ability to reproduce the behavior, and motivation for the learning to occur.

Current research on learning is highly influenced by computer technology, both in the areas of computer-assisted learning and in the attempt to further understand the neurological processes associated with learning by developing computer-based neural networks that simulate different types of learning.

learning

views updated May 29 2018

learning A process by which an animal's response to a particular situation may be permanently altered, usually in a beneficial way, as a result of its experience. Learning allows an animal to respond more flexibly to the situations it encounters: learning abilities in different species vary widely and are adapted to the species' environment. On a physiological level, learning involves changes in the connections of neurons in the central nervous system (see synaptic plasticity). Numerous different categories of learning have been proposed, including habituation, associative learning (through conditioning), trial-and-error learning, insight learning, latent learning, and imprinting.

Learning in animals

An animal's survival prospects are greatly improved if the animal alters its behaviour according to its experience. Learning increases its chances of obtaining food, avoiding predators, and adjusting to other often unpredictable changes in its environment. The importance of learning in the development of behaviour was stressed particularly by US experimental psychologists, such as John B. Watson (1878–1958) and B. F. Skinner (1904–90), who studied animals under carefully controlled laboratory conditions. They demonstrated how rats and pigeons could be trained, or ‘conditioned’, by exposing them to stimuli in the form of food rewards or electric shocks. This work was criticized by others, notably the ethologists, who preferred to observe animals in their natural surroundings and who stressed the importance of inborn mechanisms, such as instinct, in behavioural development. A synthesis between these two once-conflicting approaches has now been achieved: learning is regarded as a vital aspect of an animal's development, occurring in response to stimuli in the animal's environment but within constraints set by the animal's genes. Hence young animals are receptive to a wide range of stimuli but are genetically predisposed to respond to those that are most significant, such as those from their mother.

Conditioning

The classical demonstration of conditioning was undertaken by Ivan Pavlov in the early 1900s. He showed that if a bell was rung whenever food was presented to a dog, the dog eventually salivated at the sound of the bell alone. By measuring the amount of saliva produced, he showed that the response grew stronger as the animal learnt to associate the sound of the bell with the presentation of food: the dog had become conditioned to respond to the bell.

Such learning is widespread among animals. Pavlov's experiment involved positive conditioning, but negative conditioning can also occur. For example, a young bird quickly learns to associate the black-and-orange markings of the cinnabar moth's caterpillars with their unpleasant taste, and to avoid eating such caterpillars in future.

Trial-and-error learning

This occurs when the spontaneous behaviour of an animal accidentally produces a reward. For example, a hungry cat placed in a box must pull on a string loop to open the door and gain access to food. After various scratching and reaching movements, it accidentally pulls the loop and is released from the box; its behaviour is thus instrumental in securing the reward. On subsequent occasions the cat's attention becomes increasingly focused on the loop, until eventually it pulls the loop straightaway on entering the box.
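
As a rough illustration of this 'law of effect', the following Python sketch strengthens whichever response happens to be rewarded, so that escape tends to become quicker over successive trials. The response names and the learning increment are invented for the example.

```python
import random

# Trial-and-error learning: a response that happens to produce a reward
# is strengthened, so it becomes more likely on later trials.
strengths = {"scratch": 1.0, "reach": 1.0, "pull_loop": 1.0}

def run_trial():
    """Emit responses in proportion to their strengths until the loop is pulled."""
    responses = 0
    while True:
        responses += 1
        choice = random.choices(list(strengths), weights=list(strengths.values()))[0]
        if choice == "pull_loop":        # the instrumental (rewarded) response
            strengths[choice] += 0.5     # law of effect: strengthen it
            return responses

random.seed(1)
for trial in range(1, 11):
    print(f"trial {trial}: escaped after {run_trial()} responses")
```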

Insight learning

Chimpanzees can learn to stack crates or boxes to form a platform or to manipulate poles in order to reach an otherwise inaccessible bunch of bananas. A chimp may apparently solve such a problem suddenly, as if gaining insight after mental consideration of the problem. Such complex learning benefits from previous experience, in this instance by simply ‘playing’ with crates, boxes, or poles.

Imprinting

This is a form of learning found in young animals, especially young birds, in which they form an attachment to their mother in early life, thereby ensuring that they are taken care of and do not wander off. For example, chicks or ducklings follow the first large moving object that they encounter after hatching. This is normally their mother, but artificially incubated youngsters can become imprinted on a wooden decoy or even on a human being, as originally demonstrated in goslings and ducklings by Konrad Lorenz. Imprinting occurs during a particularly sensitive period of development: the attachment formed by an animal to an imprinted individual or object lasts well into its adult life.

Learning

views updated May 21 2018

Learning

Learning produces a relatively long-lasting change in behavior as a result of experience. The ability to learn, to gain from experience, allows animals to adapt to and cope with variable environments and therefore contributes to reproductive fitness.

Habituation and Sensitization

Habituation, the most rudimentary learning process, can occur in single-celled animals as well as in all higher animals. Habituation is the reduction of a response to a stimulus as a result of repeated low-level stimulation. For example, protozoans contract when touched, but repeated touching causes a gradual decrease in this response; the decrease is not the result of fatigue or sensory adaptation but of true learning. In fact, habituation in planaria survives regeneration: when a planarian is split in two, both regenerated planaria exhibit the response learned by the original one. Increased response magnitude, or sensitization, can also occur to a repeated stimulus if it is of high intensity or aversive (unpleasant). Sensitization has been observed only in multicellular organisms with at least a rudimentary nerve network.
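
A toy model of these two processes might treat response magnitude as decaying with repeated weak stimulation and growing with repeated intense stimulation, as in the sketch below; the rate constants are arbitrary values chosen for illustration.

```python
# Habituation vs. sensitization as simple changes in response magnitude.
# Weak repeated stimuli shrink the response; intense (aversive) stimuli
# enlarge it. All rate constants are arbitrary, for illustration only.

def next_response(current, intensity, decay=0.8, boost=0.15, ceiling=2.0):
    if intensity < 0.5:                      # weak, repeated stimulation
        return current * decay               # habituation
    return min(ceiling, current + boost)     # sensitization

response = 1.0
for touch in range(1, 8):                    # e.g., repeatedly touching a protozoan
    response = next_response(response, intensity=0.2)
    print(f"weak stimulus {touch}: response = {response:.2f}")

response = 1.0
for jolt in range(1, 8):
    response = next_response(response, intensity=0.9)
    print(f"intense stimulus {jolt}: response = {response:.2f}")
```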

Animals with central nervous systems can learn through more complex processes that allow them to adapt to a larger variety of environmental circumstances. The main types are classical conditioning, operant conditioning, imitation, and imprinting.

Classical Conditioning

Classical conditioning (also called Pavlovian conditioning after its discoverer Ivan Pavlov) involves the creation of a conditioned reflex. In a classic experiment, a bell (neutral stimulus) is rung just before meat powder (unconditioned stimulus) is squirted into a dog's mouth. The meat powder produces the reflexive response of salivation (unconditioned response). If the bell is rung and followed by a squirt of meat powder into the mouth many times in succession (with a rest period between presentations), eventually salivation will occur to the sound of the bell before the meat powder is squirted into the mouth. The bell is now a conditioned stimulus, and the salivation to the bell is now called a conditioned response; they comprise a new conditioned reflex. The conditioned response will be sustained as long as the ringing of the bell continues to be correlated with the presentation of the meat powder. As in this example, conditioned responses are probably adaptive because they prepare the organism for the forthcoming unconditioned stimulus.
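
The growth and loss of the conditioned response are often summarized by the Rescorla–Wagner learning rule, in which the associative strength V of the conditioned stimulus moves toward the maximum strength λ that the unconditioned stimulus will support: ΔV = αβ(λ − V). The sketch below is a minimal illustration with arbitrary parameter values, not a fit to Pavlov's data.

```python
# Rescorla-Wagner sketch of acquisition and extinction.
# V is the associative strength of the bell (CS); lam is the maximum
# strength supported by the meat powder (US). Parameters are arbitrary.

alpha_beta = 0.3          # combined salience / learning-rate term
V = 0.0

for pairing in range(1, 11):         # bell followed by meat powder
    V += alpha_beta * (1.0 - V)      # lam = 1.0 while the US is present
    print(f"pairing {pairing}: V = {V:.2f}")

for trial in range(1, 6):            # bell alone: extinction
    V += alpha_beta * (0.0 - V)      # lam = 0.0 without the US
    print(f"extinction trial {trial}: V = {V:.2f}")
```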

Operant Conditioning

A second type of conditioning, operant conditioning, does not involve reflexes at all. Rather, certain kinds of voluntary behavior, usually skilled motor behavior, are affected by the consequences that follow. Stimuli associated with particular contingencies do not force a response as in the case of reflexes. Rather, such stimuli alter the likelihood that a behavior will occur. For example, the "open" sign on the door of a restaurant makes it likely someone who is ready for a meal will open the door because of past experience.

In general, pleasant events increase the likelihood of, or reinforce, voluntary (operant) behavior, and unpleasant events weaken or punish operant behavior. New behavior can be created through operant conditioning using a procedure called shaping, or the reinforcement of successive approximations of a target behavior. For example, a dog can learn to roll over if a skillful trainer provides it with food and praise (the reinforcement) for closer and closer approximations of rolling over during a training session.
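
Shaping can be sketched as reinforcing responses that meet a gradually rising criterion. In the toy simulation below, the behavior is reduced to a single number emitted with some variability; responses that exceed the current criterion are reinforced, which pulls typical behavior toward the target. Every name and parameter here is invented for the illustration.

```python
import random

# Shaping: reinforce successive approximations of a target behavior.
# Reinforcement shifts the animal's typical behavior toward whatever
# level was just reinforced; the criterion then rises. Values are illustrative.

random.seed(2)
mean_behavior = 0.0        # e.g., how far the dog currently rolls
target = 10.0              # the full roll-over
criterion = 1.0            # current standard for earning reinforcement

responses = 0
while mean_behavior < target and responses < 10_000:
    responses += 1
    emitted = random.gauss(mean_behavior, 1.5)            # variable behavior
    if emitted >= criterion:                              # a close-enough approximation
        mean_behavior += 0.4 * (emitted - mean_behavior)  # reinforcement shifts behavior
        criterion = min(target, criterion + 0.5)          # raise the bar
print(f"after {responses} responses, behavior shaped up to {mean_behavior:.1f}")
```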

Imitation

Many species also learn through imitation. In general, it is a fast and efficient way of learning functional new behaviors. For example, in England some birds learned to get milk by piercing the caps of milk bottles on doorsteps. Over a period of years, this behavior spread to several other bird species and to other parts of the British Isles. There is disagreement about whether imitation is a special case of operant conditioning or an additional type of learning.

Imprinting

Imprinting is the development of an attachment to the mother or, if the mother is absent, to any moving object close by during a certain brief period in the life of a young animal. For example, a newly hatched goose or duck will become attached to a shoe box, a human being, or any object if it is removed from its nest shortly after hatching. Comparable behavior can be observed in many mammal species, such as sheep, deer, and dogs. The adaptive value of following a mother is obvious. Again, there is disagreement over whether imprinting is a special case of operant conditioning or a unique type of learning.

see also Behavior, Genetic Basis of; Natural Selection; Nervous Systems

Lynda Paulson LaBounty

Bibliography

Abramson, Charles I. Invertebrate Learning: A Laboratory Manual and Source Book. Washington, DC: American Psychological Association, 1990.

Chance, Paul. Learning and Behavior, 4th ed. Pacific Grove, CA: Brooks Cole, 1999.

Mazur, James E. Learning and Behavior, 5th ed. Upper Saddle River, NJ: Prentice Hall, 2001.

Learning

views updated May 23 2018

Learning

Learning is the alteration of behavior as a result of experience. When an organism is observed to change its behavior, it is said to learn. Many theories have been formulated by psychologists to explain the process of learning. Early in the twentieth century, learning was described primarily through behaviorist principles such as associative, or conditioned, response. Associative learning is the ability of an animal to connect a previously irrelevant stimulus with a particular response. One form of associative learning, classical conditioning, is based on the pairing of two stimuli. Through an association with an unconditioned stimulus, a conditioned stimulus eventually elicits a conditioned response, even when the unconditioned stimulus is absent. The earliest and best-known documentation of associative learning was provided by Ivan Pavlov, who conditioned dogs to salivate at the sound of a bell. In operant conditioning, a response is learned because it leads to a particular consequence (reinforcement), and it is strengthened each time it is reinforced. Without practice any learned behavior is likely to cease; however, repetition alone does not ensure learning, and eventually it produces fatigue and boredom and suppresses responding. Positive reinforcement strengthens a response when a pleasant consequence follows it, while negative reinforcement strengthens a response through the removal of an unpleasant stimulus. Generally, positive reinforcement is the most reliable and produces the best results. In many cases, once a pattern of behavior has been established, it may be sustained by partial reinforcement, which is provided only after selected responses.
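
The maintenance of behavior under partial reinforcement can be given a toy illustration: suppose response strength rises with each reinforced response and decays slightly with each unreinforced one. Under the fixed-ratio schedule sketched below (an arbitrary choice for the example), occasional reinforcement is enough to sustain the behavior.

```python
# Toy illustration of behavior sustained by partial reinforcement.
# Strength rises when a response is reinforced and decays when it is not.
# The FR-4 schedule and both rates are arbitrary illustrative values.

strength = 1.0
for response in range(1, 41):
    if response % 4 == 0:                      # fixed-ratio 4: every 4th response
        strength += 0.3                        # reinforcement strengthens
    else:
        strength = max(0.0, strength - 0.05)   # unreinforced responses decay
    if response % 8 == 0:
        print(f"response {response}: strength = {strength:.2f}")
```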

In contrast to classical and operant conditioning, which describe learning in terms of observable behavior, other theories focus on the roles of motivation, memory, and cognition in learning. Wolfgang Köhler, a founder of the Gestalt school of psychology, observed the importance of cognition in the learning process when he studied the behavior of chimpanzees. Köhler concluded that insight was key to the problem solving carried out by his chimpanzees: the animals did not merely stumble upon solutions through trial and error but demonstrated a holistic understanding of problems, which they solved in moments of apparent revelation. In the 1920s, Edward Tolman showed how learning can involve knowledge without observable performance. Rats that negotiated the same maze on consecutive days without reward improved drastically once a goal box with food was introduced, indicating that they had developed cognitive maps of the maze before the reward appeared, even though that learning had not shown in their behavior.
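
Tolman's result separates learning from performance, a distinction that can be caricatured in a few lines: 'map knowledge' grows on every run of the maze, but it shows up as improved performance only once reward supplies the motivation to use it. All of the numbers below are invented for the illustration.

```python
# Latent learning sketch: knowledge of the maze accumulates on every
# trial, rewarded or not, but is expressed in performance (fewer errors)
# only when reward motivates the rat. Values are illustrative.

knowledge = 0.0
for day in range(1, 11):
    rewarded = day >= 7                        # food introduced on day 7
    knowledge = min(1.0, knowledge + 0.15)     # map learned with or without reward
    motivation = 1.0 if rewarded else 0.1
    errors = 20 * (1 - knowledge * motivation)
    label = "reward" if rewarded else "no reward"
    print(f"day {day:2d} ({label}): errors = {errors:4.1f}")
```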

In the 1930s, Clark L. Hull and Kenneth W. Spence introduced drive-reduction theory. Starting from the tendency of an organism to maintain balance by adjusting its physiological responses, drive-reduction theory postulates that motivation intervenes in times of imbalance. Imbalances create needs, which in turn create drives; both encourage action to reduce the drive and meet the need. According to drive-reduction theory, the association of stimulus and response in classical and operant conditioning results in learning only if it is accompanied by drive reduction.
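
A minimal sketch of drive reduction: an imbalance between a set point and the current physiological state creates a drive, and a response is reinforced in proportion to how much it reduces that drive. The set point, increments, and learning rate below are all invented for the example.

```python
# Drive-reduction sketch: habit strength grows only insofar as a
# response reduces the drive created by a physiological imbalance.
# All quantities are arbitrary illustrative values.

SET_POINT = 100.0         # e.g., ideal internal state
state = 60.0              # current state: an imbalance (a need)
habit_strength = 0.0

def drive(s):
    return abs(SET_POINT - s)

for meal in range(1, 6):
    before = drive(state)
    state = min(SET_POINT, state + 15.0)   # the response moves state toward balance
    reduction = before - drive(state)
    habit_strength += 0.01 * reduction     # learning requires drive reduction
    print(f"meal {meal}: drive = {drive(state):5.1f}, habit strength = {habit_strength:.2f}")
```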

Perceptual learning theories postulate that an organism's readiness to learn is of primary importance to its survival, and this readiness depends largely on its perceptual skills. Perceptual skills are intimately involved in producing more effective responses to stimuli. In the laboratory, perceptual learning has been tested and measured by observing the effects of practice on perceptual abilities. Subjects are given various auditory, olfactory, and visual acuity tests. With practice, subjects improve their scores, indicating that perceptual abilities are not permanent but are modifiable by learning. In studies of animal behavior, the term perceptual learning is sometimes used to refer to those instances in which an animal learns to identify a complex set of stimuli that can be used to guide subsequent behavior. Examples of such perceptual learning include imitation and observational learning, song learning in birds, and imprinting in newborn birds and mammals. Imprinting occurs only during the first 30 or so hours of life. It is a form of learning in which a very young animal fixes its attention on the first object with which it has visual, auditory, or tactile experience and thereafter follows that object.

Observational learning, also known as modeling or imitation, is held to occur as a result of observing behavior and its consequences. Behavior is learned through imitation; however, behavior that is rewarded is more readily imitated than behavior that is punished. Termed vicarious conditioning, this type of learning takes place when there is attention to the behavior, retention of it and the ability to reproduce it, and motivation for the learning to occur.

Current research on learning is strongly influenced by computer technology, both in the area of computer-assisted learning and in attempts to better understand the neurological processes of learning by developing computer-based neural networks that simulate different types of learning.
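
As a taste of the neural-network approach mentioned here, the sketch below trains a single artificial neuron (a perceptron) to compute the logical AND function by adjusting its connection weights after each error. It is a minimal illustration of error-driven weight change, not a model drawn from any particular study.

```python
# A single perceptron learning AND by error correction: connection
# weights change only when the unit's output is wrong, loosely echoing
# the idea that learning alters the strength of neural connections.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
rate = 0.1

for epoch in range(20):                   # enough passes for AND to converge
    for (x1, x2), target in examples:
        output = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        error = target - output
        w[0] += rate * error * x1         # strengthen or weaken each connection
        w[1] += rate * error * x2
        bias += rate * error

for (x1, x2), target in examples:
    output = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
    print(f"AND({x1}, {x2}) -> {output} (target {target})")
```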
