Hippocampus

Both clinical neuropsychological studies and animal experiments involving damage to the hippocampal formation indicate that this structure plays a fundamental role in at least the initial establishment of long-term associative memory; however, the exact role of the hippocampus in associative memory, and the physiological and computational mechanisms by which this role is accomplished, remain subjects of intense study and debate. Although by no means proven, evidence favors the hypothesis that the hippocampus acts as a simple interim repository for memories of certain kinds of events, and that other (neocortical) circuitry draws on this repository during a process known as memory consolidation (e.g., Zola-Morgan and Squire, 1990). The following is a brief overview of some current ideas about how the unique circuitry of the hippocampal formation might enable rapid associative memory, and why such an interim repository might be necessary.

Specific events are generally represented in the nervous system as distributed patterns of activity within rather large populations of cells, and rarely by the activity of single cells. The activity pattern may be thought of as a vector, that is, a list of ones and zeros indicating which neurons are firing and which are silent, or a list of positive real numbers indicating the firing rates of the neurons over some short interval. Associative recall, by its most general definition, is the ability of the brain to reconstruct the vector corresponding to a stored event when presented with a vector that is missing some of the original information, has been corrupted somehow by noise, or merely bears some significant resemblance to the original. This ability is often called pattern completion or autoassociative recall. Building on the pre- and postsynaptic conjunction principle for modifying the connection strengths between neurons originally elaborated by Donald Hebb (1949), work in the 1960s and early 1970s by Steinbuch (1961), Willshaw and colleagues (1969), Marr (1971), Kohonen (1972), and others laid the theoretical foundations for how a simple pattern-completion network might operate. The work of Marr was particularly seminal because it outlined several clear principles as to how actual neural circuits might accomplish this.
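The vector view can be made concrete with a minimal sketch. Everything here (the population size, the fraction of active cells, the NumPy encoding) is an illustrative assumption, not something specified in the text; the point is only what "event", "cue", and "pattern completion" mean in vector terms:

```python
import numpy as np

rng = np.random.default_rng(0)

n_cells = 32                                      # illustrative population size
event = (rng.random(n_cells) < 0.25).astype(int)  # 1 = firing, 0 = silent

# A recall cue: the same event with half of its active cells silenced
cue = event.copy()
active = np.flatnonzero(cue)
cue[rng.choice(active, size=len(active) // 2, replace=False)] = 0

# Pattern completion is the problem of reconstructing `event` from `cue`
print("event:", event)
print("cue:  ", cue)
```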

The essence of these principles is illustrated in Figure 1, which may be thought of as an incomplete and crude model for hippocampal regions CA3 and fascia dentata and their neocortical inputs. The axons from the granule cells of the fascia dentata make sparse but strong contacts with the CA3 pyramidal cells. The axons of the pyramidal cells feed back into the pyramidal layer, making contacts that are initially ineffective but can be made effective through "Hebbian" synaptic enhancement. That is, whenever a pyramidal cell is strongly activated by its granule cell input, those synapses it receives from other pyramidal cells activated by the same input become strengthened (in this illustration, the strength goes from 0 to 1). By strengthening the connections among neurons that have been active together during an event, it becomes possible later to recall that event in its entirety, given only a fragment of it. One of Marr's important contributions was the idea that a small population of inhibitory interneurons plays the crucial role of assessing the total number of active inputs and adjusting the threshold of the memory (pyramidal) cells so that only a fixed proportion of them are allowed to fire. This adjustment is accomplished essentially by dividing the total excitation of a given cell by the total number of active inputs. In this way, when an incomplete event is presented, the effective threshold of the pyramidal cells is lowered (because there is less inhibition). If the reduced input is part of a stored event, all of the corresponding synapses will have been strengthened and the correct cells will fire. Incorrect cells will not fire because, on average, fewer of their synapses from the currently active cells will have been enhanced. Thus, provided it is unique, even a small fragment of a stored event will cause activation of the full event. The reader unfamiliar with these principles may benefit from working through the example in Figure 1. For more detailed discussion, see McNaughton and Nadel (1989) or McNaughton and Barnes (1990).
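The mechanism can be rendered as a toy program. This is a sketch of the Hebbian autoassociator of Figure 1, not a biological simulation: the population size, the sparsity, the single-step recall, and the retention of self-connections (so that cue cells support their own firing) are all simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

N = 64    # "pyramidal cells" (illustrative size)
K = 8     # cells active per event (sparse coding)

def make_event():
    v = np.zeros(N)
    v[rng.choice(N, size=K, replace=False)] = 1
    return v

events = [make_event() for _ in range(5)]

# Hebbian storage: a recurrent synapse goes from 0 to 1 whenever its
# pre- and postsynaptic cells fire together in the same event.
W = np.zeros((N, N))
for v in events:
    W = np.maximum(W, np.outer(v, v))

def recall(cue):
    """One-step pattern completion with divisive inhibition."""
    n_active = cue.sum()           # the interneuron's count of active inputs
    drive = (W @ cue) / n_active   # enhanced active inputs / all active inputs
    return (drive >= 1.0).astype(float)  # fire only if every active input is enhanced

# Present a fragment of the first stored event: half its cells silenced
fragment = events[0].copy()
fragment[np.flatnonzero(fragment)[K // 2:]] = 0
print("full event recovered:", np.array_equal(recall(fragment), events[0]))
```

With only a few sparse events stored, a cell outside the stored pattern almost never has enhanced synapses from every cue cell, so only the correct cells reach the normalized threshold of 1; this is the divisive-inhibition scheme described above in miniature.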

There are, of course, constraints on the amount (and kinds) of information that can be stored by such a memory, as can readily be understood by imagining the consequences of attempting to store so much information that all of the synapses have been enhanced. At this point no information is recoverable. One solution is to encode an event with the minimum number of fibers necessary to capture its vital features (sometimes called redundancy removal). The hippocampus constitutes the highest level of association cortex in the nervous system, receiving its input from polymodal association cortex. Much of the processing that occurs in these areas can be thought of as redundancy removal or feature extraction. Moreover, hippocampal cells are often silent for prolonged periods, suggesting that rather few of them are active at any one time. Another solution is to make the patterns to be stored as different from each other as possible, a process known as orthogonalization, because orthogonal vectors are uncorrelated.
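The saturation problem can also be made concrete with a small simulation (parameters again illustrative, reusing the binary-weight scheme sketched above): as more random sparse events are stored, the fraction of enhanced synapses climbs toward 1, at which point every cue drives every cell equally and nothing is recoverable:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 64, 8                 # illustrative sizes; K/N sets the coding sparsity
W = np.zeros((N, N))
stored = 0

for target in (5, 20, 80, 320):
    while stored < target:
        v = np.zeros(N)
        v[rng.choice(N, size=K, replace=False)] = 1
        W = np.maximum(W, np.outer(v, v))   # Hebbian 0-to-1 enhancement
        stored += 1
    # Each event enhances roughly (K/N)^2 of the synapses, so the
    # enhanced fraction approaches 1 as events accumulate.
    print(f"{stored:3d} events stored: {W.mean():.2f} of synapses enhanced")
```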

Although feature extraction is itself an orthogonalization process, it is sometimes necessary to store, as separate one-time events, patterns that differ only in some arbitrary but important way. Because the formation of feature detectors requires "knowledge" of long-term statistical regularities in the input, feature extraction alone cannot separate such patterns. An alternative solution is to interpose an extra layer of very many cells between the input and the memory cells. By spreading the connections from the input to this layer more or less randomly, yet adjusting the thresholds so that about the same total number of intermediate cells as input cells are active, the degree of overlap between input events is reduced. This follows from two facts: in the larger population, the probability that a given cell is active in any one event is reduced, and the probability of its being active in any two events is the square of the single-event probability. Marr called this solution codon formation. It is likely that the granule cells of the fascia dentata perform something like this function.
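A toy version of this expansion idea follows; the layer sizes, the connection probability, and the use of a top-K rule as a stand-in for threshold-setting inhibition are all assumptions of the sketch. Two input events sharing half their active cells are projected through fixed random connections into a ten-times-larger layer, and only the most strongly driven cells fire; the resulting output patterns typically overlap far less than the inputs:

```python
import numpy as np

rng = np.random.default_rng(3)
N_in, N_out, K = 100, 1000, 10   # 10x expansion; K cells active per pattern

# Fixed, random, sparse connectivity from input to the expansion layer
C = (rng.random((N_out, N_in)) < 0.1).astype(float)

def expand(x):
    """Let the K most strongly driven cells fire (a stand-in for an
    inhibitory threshold that keeps the number of active cells fixed)."""
    out = np.zeros(N_out)
    out[np.argsort(C @ x)[-K:]] = 1
    return out

# Two input events sharing half of their active cells
cells = rng.permutation(N_in)
a, b = np.zeros(N_in), np.zeros(N_in)
a[cells[:K]] = 1                     # active cells 0..9 of the permutation
b[cells[K // 2:K + K // 2]] = 1      # active cells 5..14: half shared with a
print("input overlap: ", (a * b).sum() / K)                   # 0.5
print("output overlap:", (expand(a) * expand(b)).sum() / K)   # typically much lower
```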

Although greatly oversimplified, models such as this go a long way toward accounting for the known anatomical and physiological organization of the hippocampal formation. Many theoretical and experimental neuroscientists believe that something like this process goes on in the hippocampus during the original encoding of long-term associative memory. One of the major outstanding questions in the field, however, is why the hippocampus, and presumably the information stored there, is necessary only for a limited period after the initial registration. Estimates of exactly how long vary from hours to years in different species, and for different kinds of memory. At present only educated guesses can be offered in answer to this question. The first is that it is probably both unnecessary and biologically expensive simply to store all experience; however, often it cannot be predicted at the time of an event whether the information is sufficiently important or reliable to be stored permanently. Second, if a set of events does contain some statistical regularity, it may be possible to generate a new "feature detector" for that regularity, and hence to achieve redundancy reduction in permanent memory. In order to assess this, however, some representation of the raw events must be stored for comparison. J. L. McClelland (personal communication) has proposed a somewhat related hypothesis, expressed in terms of the parallel distributed processing or connectionist models of cognitive psychology. Briefly, the argument is that learning appropriate and efficient internal representations for a particular problem or set of events generally requires multiple exposures to the events (at least with currently discovered algorithms). During these exposures a kind of global error term for the performance of the memory can be computed and used to correct (in small steps) the set of synaptic connection strengths (weights). The argument, then, is that the hippocampus makes use of many cells to store a set of memories that later can be "played back" to the neocortex for the purpose of computing an appropriate set of connection weights to store the information with the minimal number of units. Ideas such as these, it is hoped, will eventually provide a firm computational explanation for the process of memory consolidation and the crucial role of the hippocampal formation.
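The replay idea can be caricatured in code. Nothing below is the actual proposal; it is a sketch under strong assumptions, with the "hippocampus" reduced to a verbatim buffer of event vectors and the "neocortex" to a linear network trained by small, error-correcting weight steps over many interleaved replay passes:

```python
import numpy as np

rng = np.random.default_rng(4)

# "Hippocampus": a verbatim store of sparse event vectors
events = (rng.random((20, 32)) < 0.25).astype(float)

# "Neocortex": a linear network that must learn to reproduce the events
W = np.zeros((32, 32))
lr = 0.01   # small steps, so no single replay dominates

for _ in range(500):                    # many replay passes
    for x in rng.permutation(events):   # interleaved presentation order
        error = x - W @ x               # how far the reconstruction is off
        W += lr * np.outer(error, x)    # small corrective weight change

x = events[0]
print("reconstruction error:", float(np.abs(x - W @ x).mean()))
```

After enough interleaved passes, the small steps settle on a single set of weights that serves all the stored events at once, which is the sense in which the buffered hippocampal copies could eventually become dispensable.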

Figure 1. A simple (and incomplete) model for the fascia dentata and CA3 subfields of the hippocampus, illustrating how this system may implement autoassociation using simple "Hebbian" synaptic enhancement. Inputs from fascia dentata essentially impose output patterns in CA3. The synapses of the activated recurrent collaterals that terminate on active pyramidal cells are enhanced (in this simple illustration, the weights go from 0 to 1). Later, input of some fragment of a stored pattern (1) activates a corresponding subset of pyramidal cells and their recurrent collaterals (2). Each pyramidal cell then adds up how many of its currently active recurrent collateral synapses have previously been enhanced, and divides this by the total number of active inputs. If this quotient is equal to or greater than threshold (i.e., 1), the unit fires. If not too many patterns are stored, the result will be the output of the complete original pattern (3). The inhibitory interneuron sums the total input and sets the divisor for the pyramidal cells accordingly. There is some evidence that something of this sort actually happens in the brain.

See also: PARALLEL DISTRIBUTED PROCESSING MODELS OF MEMORY

Bibliography

Hebb, D. O. (1949). The organization of behavior. New York: Wiley.

Kohonen, T. (1972). Correlation matrix memories. IEEE Transactions on Computers C-21, 353-359.

Marr, D. (1971). Simple memory: A theory for archicortex. Philosophical Transactions of the Royal Society of London B262, 23-81.

McNaughton, B. L., and Barnes, C. A. (1990). From cooperative synaptic enhancement to associative memory: Bridging the abyss. Seminars in the Neurosciences 2, 403-416.

McNaughton, B. L., and Nadel, L. (1989). Hebb-Marr networks and the neurobiological representation of action in space. In M. A. Gluck and D. E. Rumelhart, eds., Neuroscience and connectionist theory. Hillsdale, NJ: Erlbaum.

Steinbuch, K. (1961). Die Lernmatrix. Kybernetik 1, 36-45.

Willshaw, D. J., Buneman, O. P., and Longuet-Higgins, H. C. (1969). Nonholographic associative memory. Nature 222, 960-962.

Zola-Morgan, S. M., and Squire, L. R. (1990). The primate hippocampal formation: Evidence for a time-limited role in memory storage. Science 250, 288-290.

Bruce L. McNaughton