Learning Theory: A History

Even before psychology became an experimental science in the 1890s, learning was an important part of it. But in the 1910s psychologists became fascinated with learning concepts and learning theories. The 1930s and 1940s are sometimes called the golden age of learning theory; that was when learning was the heart and soul of psychology. And then gradually the gold began to lose its glitter. The theorists did not seem able to settle their differences of opinion, psychologists began to think that the differences were only a matter of opinion with little empirical significance, and there emerged a growing distaste for the great debates over fundamental issues. In the 1960s new procedures and new phenomena were discovered that led psychologists away from the basic issues that the learning theorists had debated. Learning remains an important part of psychology, but the issues are quite different from the classical ones, and there is little theorizing in the grand style that characterized the golden age.

British empiricism culminated with the fourth edition of Alexander Bain's The Senses and the Intellect (1894). Everything psychological about humans is based on experience and is due to learning, he said. Bain argued that through association, sensations are linked with each other and with responses. He argued further that sensations can arouse ideas, and that when one's ideas of pleasure and pain are aroused, they are particularly likely to produce responses. At the same time, Morgan (1894) reported a number of rather casual learning experiments, interpreting his results very much as Bain had. Morgan had the same reliance upon associationist theory, empiricist philosophy, and the pleasure-pain principle. The difference was that Bain was a philosopher who thought about human knowledge, whereas Morgan was a naturalist who conducted research with animals. Looking back, it appears that Morgan's orientation was compelling because learning theory turned to the study of animals rather than of human learning, and to experimental studies rather than philosophical speculation.

Edward Thorndike

The first systematic experimental study with animals was Edward Thorndike's (1898) puzzle-box experiment. Thorndike simply measured the time it took a cat to pull a string that opened the door of the box so that it could go out and eat. He was struck by the fact that the time scores decreased steadily and smoothly over trials; he never found a sudden improvement in performance. He therefore concluded that the cat was not learning anything about ideas but must be acquiring some sort of direct connection between the stimulus (S) that was present and the response (R) of pulling the string. It must be a direct neural connection between S and R. Thus, at the outset of learning experiments and learning theory, there was a strong commitment to the S-R concept of learning. One attraction of this approach was that it minimized all mentalist concepts; it took the mind out of the picture. It was "scientific."

Thorndike (1911) introduced what he called the law of effect, what we now call the law of reinforcement. Whether an S-R connection is strengthened on a particular trial depends, he argued, upon the environmental effect of the response. If the effect is positive, such as providing the hungry cat with food, then the connection gets stronger. Bain had a similar principle; he said the association will get stronger if the response produces pleasure. But pleasure is a mental concept, and so it had to go. Thorndike's version preserved the reinforcement mechanism but got rid of the mind and everything in it. It also took the control of the organism's behavior away from the organism and put it in the environment. The cat's behavior was totally controlled by the stimuli in the situation and the food.

John B. Watson

John B. Watson (1914) called Thorndike's approach behaviorism; it was the ultimate mechanistic psychology. Everything remotely related to the mind was discarded. Even Thorndike's reinforcement mechanism was tainted because a positive effect looked too much like pleasure. In Watson's psychology just the stimulus and response occurring together would create a connection between them. Everything was habit, or what was called learning by contiguity. Thus began two long lines of theorists—the contiguity people and the reinforcement people. Watson also discarded all motivation concepts; hunger became just an internal stimulus, and emotion just a set of fixed responses to certain kinds of stimuli. Watson's behaviorism was appealing because it was so mechanistic and so conceptually simple. At the same time, Watson suggested that to understand anything and everything psychological, one had to start with learning. The description of the fear conditioning of Little Albert (Watson and Rayner, 1920) stated that that is how our personalities take shape. The message, which was believed by many, if not all, psychologists, was that wherever one wanted to go in the field, one had to start with learning theory.

There was, however, one problem that would be significant historically: In order to deal with certain cognitive-looking phenomena, Watson introduced some miniature responses, responses so small that they were basically unobservable. When individuals first learn to read, they read out loud. As they become practiced enough to read silently, they still move their lips. Ultimately nothing seems to move. But no, Watson asserted, there are still tiny responses in the mouth and throat. And it is the feedback from these small responses that mediates and controls what looks like intelligent or speech-related behavior. So Watson's learning theory, which was so elegantly simple, objective, and scientific, was obliged to hypothesize unobservable little responses.

Edwin Guthrie

Edwin Guthrie (1935) was a contiguity theorist who followed Watson in rejecting motivation and in other ways. He explained the great complexity and unpredictability of behavior in terms of the complexity of the stimulus situation. At any moment there are potentially millions of stimulus elements that one might respond to or have one's behavior conditioned to. Conditioning itself is simple and sudden, but the effective stimulus situation is impossible to control. So Guthrie's learning theory was forced to hypothesize unobservable little stimuli. The same problem in time caused the demise of Clark L. Hull's theory, which, in order to account for what looked like cognitive behavior, had to hypothesize unobservable little motivation terms, entities called rG. The Skinnerians are no better off, for all their claims of objectivity and freedom from theory. They talk about self-reinforcement when the organism does something it is not supposed to, and they talk about conditioned (acquired) reinforcement when it does something that looks cognitive. Thus they are hypothesizing unobservable little reinforcers.

Ivan Pavlov and Edward Tolman

When Russian scientist Ivan Pavlov's work finally became available in English translation in 1927, it seemed vaguely familiar: it reminded readers of Watson's psychology. Pavlov and Watson shared the same view of how important learning is and how it should be studied. The theory included no motivation, no reinforcement, no mind, and it was all very scientific. Pavlov emphasized inhibition, something that American psychologists had largely ignored but in time found fascinating. What was new was the procedure, the pairing of two stimuli; the bell and the food had to occur together. The critical contingency the experimenter had to control was the timing of the stimuli. With Thorndike's procedure the critical contingency was the relationship between the response and its "effect." The procedural contrast was called by different people Pavlovian versus trial and error, or classical conditioning versus instrumental, or respondent versus operant. There was always the uneasy feeling that while the two procedures were easily distinguished by their defining contingencies, perhaps there were not two separate underlying processes involved.

If there is a variable that one never varies, then one will never see its significance. Pavlov knew that his dogs had to be hungry or they would not salivate, so he always worked with hungry dogs. And so he never saw the significance of motivation. The first learning theorist to stress motivation was Edward Tolman (1932). He described a study by his student Tinklepaugh, who was studying monkeys and reinforcing their correct responses with pieces of banana. Occasionally Tinklepaugh would substitute lettuce for the banana; when this happened, the animals threw tantrums and became emotionally upset. Monkeys usually like lettuce and it can certainly be used as a reinforcer, so what had Tinklepaugh encountered here? First, he had the trivial finding that monkeys like bananas better than lettuce. Second, he had discovered that monkeys can anticipate receiving, or expect to receive, a particular kind of food. Thus, he had discovered what we call incentive motivation, motivation that depends upon the expected value of the outcome. Tolman's students also demonstrated effects of drive motivation, motivation that depends on the physiological state of the animal.

Thus, Tolman suddenly introduced two kinds of motivation, one psychological and one physiological, and he had abundant evidence for both kinds. He also challenged other conventional parts of Watsonian behaviorism, such as its mechanistic commitment. He introduced a cognitive language (e.g., expectancies) in place of connections and neurons. Tolman maintained that animals learn not S-R connections but the predictive significance and value of environmental stimuli, sequences of events (what leads to what), and where things are located in space (a "map" of a maze). In the 1940s Tolman developed the theme that animals learn places rather than responses (see, e.g., Tolman, Ritchie, and Kalish, 1946).

Those were exciting times. There were two paradigms, Pavlovian and Thorndikian, to be organized. One could explain all learning with this one, or that one, or with some of each (Mowrer, 1947). Mowrer attributed emotional and motivational learning to Pavlovian mechanisms, and most other learned behavior to reinforcement. There were contiguity theorists and reinforcement theorists. Some people studied motivation and others ignored it. Some were mechanists, and others appeared very cognitive. Some believed in tiny stimuli or responses, twinkles and twitches, and others looked at behavior globally. And new behavioral phenomena were being discovered at an accelerated rate.

Clark L. Hull

Could anyone put it all together? It seemed that Clark L. Hull and his dedicated followers might do it; they certainly tried. The great theory (Hull, 1943) was based on the reinforcement of S-R habits, but habits were only indirectly expressed in behavior. To be manifest, a habit had to be motivated by drive and/or incentive, and had to overcome the different kinds of inhibition that might be present. It was a very complex theory, but its virtue was that its complexity promised to match that of the empirical world. It was also a very explicit theory; everything was spelled out in detail. The theory even appeared to be able to explain away some of the mysterious things Tolman had reported. It was full of promise, and it gathered an enormous amount of attention.

Hull was fortunate to have a number of brilliant, energetic young associates who all agreed that this was the right kind of theory. Their disagreements were over details, and those differences called for further experiments to get everything straightened out. One could fuss over details, but all the Hullians endorsed the basic program. Miller and Dollard (1941) proposed a simpler model that anticipated many features of the great theory. Mowrer (1939) anticipated the all-important mechanism of reinforcement; he said a response is reinforced when it results in the reduction of some source of drive, such as fear. Kenneth Spence was another early associate of Hull's, and he had a multitude of graduate students who were proud to call themselves neo-Hullians and to work out different aspects of the theory. For them, the 1950s looked like the golden age because it was the time of awakening, the time of promise, and the time of pay-off. Many of them moved away from animal learning and into human experimental, social, developmental, and clinical psychology. Learning was the center, but the time had come to apply the principles of behavior far and wide. Watson and Thorndike, the first learning theorists, had promised to build a better world with learning theory, and the neo-Hullians felt that the time had come to make good on that promise.

Two things went wrong. One, which should have been only a minor tactical setback, was that the drive-reduction hypothesis of reinforcement was wrong. That was discovered early on (Sheffield and Roby, 1950), and in his last written work Hull (1952) acknowledged the problem and said the hypothesis might have to be altered. The whole point of theory, according to Hull, is to use it to generate research and then use the research to modify the theory. So the loss of this particular hypothesis should not have hurt the basic Hullian program. But the neo-Hullians were severely wounded and badly discouraged. Furthermore, by about 1970, new difficulties had arisen with the concept of reinforcement itself (Bolles, 1975).

A second difficulty was that during the 1960s there were many problems with incentive motivation. It was based on the little response rG, which had all the conceptual properties of a response (i.e., it was elicited by stimuli, it was conditionable, and it could be motivated). The problem was that it did not seem to be observable. The rG concept was needed to account for Tolman's discovery that animals learn places (Hull held that it was elicited by spatial stimuli), and it was needed to explain a variety of other effects; but it was beginning to look like a fiction, a figment. The Hullians said that when the animal looks to the left, it encounters stimuli that elicit rG and so it moves in that direction. Tolman said that the animal expects food to be off to the west, and since it is hungry and values food, it moves in that direction. If you cannot measure rG, then you have no way to test Hull's view against Tolman's view of the situation. Eventually psychologists figured out that Tolman's theory is untestable because one cannot measure expectancies or values, and that Hull's theory is untestable because one cannot measure rG. Learning theories are basically untestable. The great promise of Hull's theory was slipping away. The golden age was ending.

B. F. Skinner

Some found comfort in B. F. Skinner's approach. It could have been an alternative learning theory but chose to present itself as theoretically neutral. Certainly Skinnerians did not worry about theoretical matters as such. And they were eager to leave learning behind and move into other areas of psychology and into applied problems. Others began to understand that there was something fundamentally flawed in the whole enterprise begun by Thorndike and Watson. Psychology did not become a science because it exorcised the mind and analyzed everything into atomic S-R units; it became a science as it looked systematically at psychological phenomena. If one wants to understand a social phenomenon, then one does not need a basic learning theory; one needs to look at social situations, social motivation, and social behavior strategies. And that is the sort of thing psychologists do now.

See also: GUTHRIE, EDWIN R.; HULL, CLARK L.; MATHEMATICAL LEARNING THEORY; PAVLOV, IVAN; THORNDIKE, EDWARD; TOLMAN, EDWARD C.; WATSON, JOHN B.

Bibliography

Bain, A. (1894). The senses and the intellect, 4th edition. London: Longmans, Green.

Bolles, R. C. (1975). Learning theory. New York: Holt.

Guthrie, E. R. (1935). The psychology of learning. New York: Harper's.

Hull, C. L. (1943). Principles of behavior. New York: Appleton.

—— (1952). A behavior system. New Haven, CT: Yale University Press.

Miller, N. E., and Dollard, J. (1941). Social learning and imitation. New Haven, CT: Yale University Press.

Morgan, C. L. (1894). An introduction to comparative psychology. London: Scott.

Mowrer, O. H. (1939). A stimulus-response analysis of anxiety and its role as a reinforcing agent. Psychological Review 46, 553-564.

—— (1947). On the dual nature of learning: A reinterpretation of "conditioning" and "problem-solving." Harvard Educational Review 17, 102-148.

Pavlov, I. P. (1927). Conditioned reflexes, trans. G. V. Anrep. London: Oxford University Press.

Sheffield, F. D., and Roby, T. B. (1950). Reward value of a nonnutritive sweet taste. Journal of Comparative and Physiological Psychology 43, 461-481.

Thorndike, E. L. (1898). Animal intelligence: An experimental study of the associative processes in animals. Psychological Review Monograph Supplement 2 (8).

—— (1911). Animal intelligence. New York: Teachers College Press.

Tolman, E. C. (1932). Purposive behavior in animals and men. New York: Century.

Tolman, E. C., Ritchie, B. F., and Kalish, D. (1946). Studies in spatial learning: II. Place learning versus response learning. Journal of Experimental Psychology 36, 221-229.

Watson, J. B. (1914). Behavior: An introduction to comparative psychology. New York: Holt.

Watson, J. B., and Rayner, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology 3, 1-14.

Robert C. Bolles