Active and Passive Avoidance Learning: Behavioral Phenomena



Avoidance learning is the behavioral product of an instrumental (operant) training procedure in which a predictable aversive event, typically electric shock, is prevented or omitted contingent upon the occurrence or nonoccurrence of a specified response by the learning organism. Avoidance training occurs in two forms: active and passive. In the active form, the avoidance contingency depends on the occurrence of a specified response on the part of the organism; in the passive form, the avoidance contingency depends on the nonoccurrence (i.e., the suppression) of some specified response. The response to be suppressed may be either spontaneous or learned by virtue of prior reward training. In both forms, however, the avoidance contingency consists of the prevention or omission of a predictable noxious event. Noxious events are defined in terms of the preference relation in which the absence of the event is preferred (measured by choice) to the presence of the event. Usually the noxious event is electric shock, but loud noise, blasts of air, and high and low temperatures have been used.

Avoidance training also utilizes one of two procedures: discrete-trial or free-operant. In the discrete-trial procedure a distinctive stimulus, called a warning signal (WS), signals the organism that the occurrence of the aversive event (e.g., electric shock) is imminent. In most experiments the WS-shock interval is five to sixty seconds in duration. In the active form, making the specified response during the WS-shock interval terminates the WS and prevents the occurrence of the shock. In the passive form, suppression of the specified response during the WS-shock interval prevents the occurrence of the shock. In both forms an intertrial interval (ITI) intervenes between successive presentations of the WS, usually in the range of 0.5 to 5.0 minutes.
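The contingency in the active discrete-trial case can be sketched as a simple scoring function (a hypothetical illustration; the function name and the representation of trials as response latencies are assumptions, not taken from the literature):

```python
def discrete_trial_session(latencies, ws_shock_interval=10.0):
    """Score an active discrete-trial avoidance session.

    latencies: one entry per trial, giving the response latency in seconds
    measured from WS onset, or None if no response occurred on that trial.
    A response before the WS-shock interval elapses terminates the WS and
    prevents the shock (an avoidance); otherwise the shock is delivered.
    Returns (number of avoidance trials, number of shocked trials).
    """
    avoided = sum(1 for lat in latencies
                  if lat is not None and lat < ws_shock_interval)
    return avoided, len(latencies) - avoided
```

For example, with a ten-second WS-shock interval, latencies of 3.2 and 5.0 seconds count as avoidances, while a 12.0-second response, or no response at all, yields a shocked (escape) trial.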

In the active free-operant procedure there are no discrete trials signaled by WSs. Instead, the avoidance contingency is dependent on time. Specifically, two timers control events: a response-shock (R-S) timer (e.g., set for thirty seconds) and a shock-shock (S-S) timer (e.g., set for five seconds). Training starts with the S-S timer operating. Every time it runs out, it restarts and delivers an inescapable shock of some duration (e.g., 0.5 second). The specified response turns off the S-S timer and starts the R-S timer. Every additional response resets the R-S timer to its full value. If the R-S timer runs out, it presents a shock and starts the S-S timer (Sidman, 1953). This procedure has been used only in the active form. A variation of this procedure eliminates the S-S timer and makes shock termination contingent upon the specified response rather than upon a fixed duration of shock.
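The two-timer logic can be made concrete with a small event-driven sketch (the timer values follow the examples above; the response times and the function name are hypothetical):

```python
def sidman_session(response_times, session_len=120, s_s=5, r_s=30):
    """Simulate shock deliveries under a free-operant (Sidman) avoidance
    schedule.  response_times: times (in seconds) at which the subject
    responds.  Returns the list of times at which shocks are delivered."""
    responses = sorted(response_times)
    shocks = []
    t, deadline = 0, s_s        # session starts with the S-S timer running
    i = 0
    while t < session_len:
        next_resp = responses[i] if i < len(responses) else float("inf")
        if next_resp < t + deadline:
            # a response stops whichever timer is running and starts
            # (or resets) the R-S timer at its full value
            t, deadline = next_resp, r_s
            i += 1
        else:
            # the running timer expired: deliver a shock and start the
            # S-S timer, which keeps shocking until a response occurs
            t += deadline
            if t <= session_len:
                shocks.append(t)
            deadline = s_s
    return shocks
```

With no responses, shocks recur every five seconds; a single well-timed response buys thirty shock-free seconds, which is why steady responding under this schedule can postpone shock indefinitely.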

In addition, a free-operant passive procedure known as punishment simply takes a response, which occurs spontaneously or by virtue of prior reward training, and makes shock or some other aversive event contingent on the occurrence of that response. The response is usually suppressed. This is also called passive-avoidance training. It has been used in a procedure in which an animal such as a mouse or rat runs from a brightly lighted elevated platform into a dark compartment where it receives a single electric shock. The tendency to enter the dark compartment is innate, and the single punishment results in subsequent long latencies to reenter the dark compartment. This is called one-trial passive-avoidance training, and it has been used extensively in the study of memory because the learning event is fixed in time, which allows analysis and manipulation of temporally constrained neuropharmacological and endocrine processes associated with learning. Alternatively, a hungry animal may initially be rewarded with food for pressing a lever and subsequently shocked for making that same response. Usually several shocks are required to suppress the lever pressing.

Warner (1932) was the first to use a discrete-trial active-avoidance procedure to study the association span of the white rat (using WS-shock intervals of one to thirty seconds); he used what has become known as a shuttle box, a two-compartment box in which the animal is required to run or jump back and forth between the two compartments to avoid the shock.

These procedures and the behaviors they produce have been of interest to psychologists since the early studies of behaviorism in the United States. John Watson and, especially, Edward Thorndike postulated that learned responses were a product of their consequences. That is, a response occurs, a pleasurable or aversive event ensues, and the response is reinforced (increases) if the event is pleasurable or punished (decreases) if the event is aversive.

Hilgard and Marquis (1940), two early behavioral theorists, had trouble accounting for avoidance learning because it was a product of a procedure where the reinforcing event was the response-contingent absence of an event, not the response-contingent presence of an event:

Learning in this [avoidance] situation appears to be based in a real sense on the avoidance of the shock. It differs clearly from other types of instrumental training in which the conditioned response is followed by a definite stimulus change—food or the cessation of shock [reward training (positive reinforcement) or escape training (negative reinforcement)]. In instrumental avoidance training the new response is strengthened in the absence of any such stimulus; indeed, it is strengthened because of the absence of such a stimulus. Absence of stimulation can obviously have an influence on behavior only if there exists some sort of preparation for or expectation of the stimulation. (pp. 58-59)

This theoretical problem was ostensibly solved by Mowrer (1950), supported by Solomon and Wynne (1954) and Rescorla and Solomon (1967), by postulating that Pavlovian conditioning of fear on early escape trials, in which the WS is paired with shock, provided the acquired motivation to terminate the WS (now a conditioned aversive stimulus), thus providing secondary (acquired) negative reinforcement for the escape-from-fear response (i.e., the avoidance response). Others thought that the fear response was instrumentally reinforced by the termination of shock (Miller, 1951), but the upshot was the same: Reduction of fear by termination of the WS, whether acquired by Pavlovian or instrumental means, was the source of the acquired negative reinforcement for the avoidance response. Thus, two processes were postulated: acquisition of fear during escape trials (by Pavlovian or operant conditioning) and acquisition of the instrumental avoidance response, reinforced by fear reduction. This theoretical interpretation was supported by the results of an elegant experiment by Kamin (1956). Additional research in support of two-process theory used a transfer paradigm in which animals were given Pavlovian conditioning in one situation, and the effects of those conditioned stimuli were observed when they were subsequently superimposed on an operant baseline of responding in another situation (Solomon and Turner, 1960). This two-process theory provides the best account of avoidance learning in its various forms.

Some animals of most species learn the avoidance contingency, whether in the active or the passive form, using discrete-trial or free-operant procedures. Dogs are particularly adept at avoidance learning in an active, discrete-trial shuttle-box procedure and typically show strong resistance to extinction (Solomon and Wynne, 1954). In contrast, rats are particularly difficult to train in an active, lever-press, discrete-trial procedure and require special training procedures (Berger and Brush, 1975). Thus, there are important differences among species and response requirements. Additionally, in all forms of avoidance learning—active and passive, discrete-trial and free-operant—there are enormous individual differences. Some individuals of whatever species learn rapidly and well, whereas others do not (Brush, 1966).

In view of these findings it is not surprising that several investigators have genetically selected for differences in avoidance learning. Bignami (1965) reported the first experiment with Wistar albino rats in which the selectively bred phenotypes were good or poor at avoidance learning in a shuttle box. The resulting strains are known as the Roman High Avoidance and Roman Low Avoidance strains (RHA and RLA, respectively). Training consisted of five daily sessions of fifty trials each. Selection was based on the number of avoidance responses during the first two sessions (many or few) and on good or poor retention from each session to the next. Selection was highly effective because, by the fifth generation, the RHA and RLA animals avoided, respectively, on 68 percent and 20 percent of the trials.

In 1977 Brush reported on the development of the Syracuse High Avoidance and Syracuse Low Avoidance strains (SHA and SLA, respectively). Long-Evans hooded rats were trained for sixty trials in automated shuttle boxes. The data from over twenty generations of selection indicated that shuttle-box avoidance learning is heritable: SHA and SLA animals avoided on 67 percent and 0 percent of the sixty trials of training. Realized heritability (h2, which can range between 0.0 and 1.0; Falconer, 1960) was estimated to be 0.16 in each strain, a value comparable with that found in other selection studies (Brush, Froehlich, and Sakellaris, 1979).
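Realized heritability is estimated as the ratio of the cumulative response to selection to the cumulative selection differential (Falconer, 1960). A minimal sketch of that calculation, with illustrative numbers chosen only to reproduce a value near 0.16 (they are not the actual breeding data):

```python
def realized_heritability(response, selection_differential):
    """Realized heritability h^2 = R / S (Falconer, 1960): the cumulative
    response to selection R (the shift in the mean phenotype across
    generations) divided by the cumulative selection differential S (the
    mean of the selected parents minus the mean of their generation,
    accumulated over the same generations).  Ranges from 0.0 (no additive
    genetic influence on the trait) to 1.0 (the full selection differential
    is recovered in the offspring)."""
    return response / selection_differential
```

For instance, a cumulative phenotypic gain of 8 avoidance responses against a cumulative selection differential of 50 gives h² = 0.16.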

In 1978 Bammer reported on the first six generations of selective breeding of Sprague-Dawley albino rats for high and low levels of avoidance responding in a shuttle box. The resulting strains are known as the Australian High Avoidance and Australian Low Avoidance strains (AHA and ALA, respectively). Training consisted of fifty trials in one or more daily sessions. Realized heritability over the first five generations of selection was 0.18 and 0.27 for the AHA and ALA strains, respectively.

A unidirectionally selected strain, known as the Tokai High Avoider (THA), was bred in Japan from Wistar stock using a lever-press response and a free-operant procedure (S-S = 5 seconds, R-S = 30 seconds, shock duration = 0.5 second). The selection criterion was an avoidance rate of more than 95 percent in the last five of ten daily one-hour training sessions. Selection was successful: THA males and females learn faster and to a higher level of performance than unselected control animals from the original stock.

The fact that so many selective breeding experiments for avoidance behavior have been successful is a clear indicator of the extent to which this kind of behavior is under genetic control. In each experiment the individual variability within each strain becomes less as selection progresses, and it appears not to matter what the details of the training procedures are. For example, SHA animals do better than controls in a free-operant procedure, and THA animals do better than controls in discrete-trial, shuttle-box training. Similarly, AHA animals outperform ALA animals in a discrete-trial avoidance task quite different from the one in which they were selected. Thus, it is clear that avoidance learning is strongly influenced by genetic factors, and many behavioral, physiological, and anatomical correlates of avoidance learning have been identified. Several of those correlates appear to be closely linked, genetically, to the avoidance phenotypes. Researchers are trying to identify the mechanisms by which genes determine avoidance learning. Modern molecular-genetic technology might enable them to identify those genes.


Bibliography

Bammer, G. (1978). Studies on two new strains of rats selectively bred for high or low conditioned avoidance responding. Paper presented at the Annual Meeting of the Australian Society for the Study of Animal Behavior, Brisbane.

Berger, D. F., and Brush, F. R. (1975). Rapid acquisition of discrete-trial lever-press avoidance: Effects of signal-shock interval. Journal of the Experimental Analysis of Behavior 24, 227-239.

Bignami, G. (1965). Selection for high rates and low rates of avoidance conditioning in the rat. Animal Behaviour 13, 221-227.

Brush, F. R. (1966). On the differences between animals that learn and do not learn to avoid electric shock. Psychonomic Science 5, 123-124.

—— (1977). Behavioral and endocrine characteristics of rats selectively bred for good and poor avoidance behavior. Activitas Nervosa Superioris 19, 254-255.

Brush, F. R., Froehlich, J. C., and Sakellaris, P. C. (1979). Genetic selection for avoidance behavior in the rat. Behavior Genetics 9, 309-316.

Falconer, D. S. (1960). Introduction to quantitative genetics. London: Oliver and Boyd.

Hilgard, E. R., and Marquis, D. G. (1940). Conditioning and learning. New York: Appleton-Century-Crofts.

Kamin, L. J. (1956). The effect of termination of the CS and avoidance of the US on avoidance learning. Journal of Comparative and Physiological Psychology 49, 420-424.

Miller, N. E. (1951). Learnable drives and rewards. In S. S. Stevens, ed., Handbook of experimental psychology. New York: Wiley.

Mowrer, O. H. (1950). On the dual nature of learning—a reinterpretation of "conditioning" and "problem solving." In Mowrer's Learning theory and personality dynamics. New York: Ronald Press.

Rescorla, R. A., and Solomon, R. L. (1967). Two-process learning theory: Relationships between Pavlovian conditioning and instrumental learning. Psychological Review 74, 151-182.

Sidman, M. (1953). Two temporal parameters of the maintenance of avoidance behavior by the white rat. Journal of Comparative and Physiological Psychology 46, 253-261.

Solomon, R. L., and Turner, L. H. (1960). Discriminative classical conditioning under curare can later control discriminative avoidance responses in the normal state. Science 132, 1499-1500.

Solomon, R. L., and Wynne, L. C. (1954). Traumatic avoidance learning: The principles of anxiety conservation and partial irreversibility. Psychological Review 61, 353-385.

Warner, L. H. (1932). The association span of the white rat. Journal of Genetic Psychology 39, 57-89.

F. Robert Brush
