Vision


I. OVERVIEW, by Conrad G. Mueller

II. EYE MOVEMENTS, by Lorrin A. Riggs

III. COLOR VISION AND COLOR BLINDNESS, by A. Chapanis

IV. VISUAL DEFECTS, by Albert J. Harris

I. OVERVIEW

Vision is one of man’s most important sensory channels. Although the visual system responds to a very narrow band of the electromagnetic spectrum, radiations having wave lengths ranging from 400 to 700 millimicrons, the loss of this system seriously limits our ability to adapt to our environment. Light reflected from objects and living organisms provides us with many diverse kinds of information necessary for responding to environmental change. Vision is important in many quick, reflex adjustments to the environment. At a more complex level, it is important in responding to facial expressions and other gestures of living organisms.

Visual functions

In order to understand the importance of vision in behavior, we must be concerned with what the visual system can and cannot do. The visual system makes possible many basic discriminations, and it is upon these that we must build in responding to the infinite number of subtle cues provided by the environment.

Absolute sensitivity. The most primitive visual discrimination, of course, is our response to the presence or absence of light. All other uses of vision presuppose that this discrimination is possible. Whether a visual stimulus is seen will depend, first of all, on its intensity, a fact that makes the measurement of threshold one of the basic measurements in studying the visual system. An intensity threshold is a statement of the intensity that is just sufficient to make a light stimulus visible. The numerical value of the intensity threshold will depend upon many specific characteristics of the object viewed. For example, the intensity required to see large objects is less than that required for small objects; the intensity threshold is low for stimuli exposed for a long time, high for stimuli presented for a brief period. The threshold for detecting a stimulus is higher if we look directly at the stimulus figure than if it appears off to the side of our line of sight. [See Psychophysics.]

Some statements about human visibility, such as the last one, seem to violate our intuitive notions about seeing. In many of these cases we can trace the discrepancy to the manner in which we describe what is seen. In general, we are better able to resolve the details of an object if we look directly at it. However, a stimulus must be about ten times more intense to reach the threshold for central vision than to reach the threshold for peripheral vision.

Spatial discrimination. A second basic kind of discrimination that is possible with the visual system is the resolution of spatial arrangements of light. Our ability to see spatial detail is called visual acuity. There are two major types of acuity, one of which is called visible acuity, the other, separable acuity.

Visible acuity. Visible acuity refers to the ability to see small objects against a background. One common technique for measuring it is to determine the finest dark line that is visible against a brighter background. The numerical value of acuity obtained under a given test condition is computed by taking the reciprocal of the threshold width of line, measured in terms of the visual angle subtended by this line at the eye. The angle is measured in minutes of visual angle. The reason for using the angular measure, rather than just stating the width of the line, is that the size of the object alone is not critical in predicting whether an object will be seen; the size is important only in relation to the distance the object is from the eye. Under optimal testing circumstances, man’s visible acuity permits him to detect lines subtending approximately 0.5 seconds of visual angle. This is equivalent to a line less than 1/100 inch in width, viewed at 100 yards.
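The figures quoted above follow from simple visual-angle geometry. The short sketch below, a minimal illustration rather than anything from the original text, checks the claim that a line subtending 0.5 second of arc at 100 yards is less than 1/100 inch wide; the function name and units are chosen here for convenience.

```python
import math

def size_from_angle(angle_arcsec, distance):
    """Width of an object that subtends a given visual angle (in seconds of
    arc) at a given viewing distance; result is in the units of distance."""
    return distance * math.tan(math.radians(angle_arcsec / 3600.0))

# A line at 100 yards (3,600 inches) that subtends 0.5 second of visual angle:
width = size_from_angle(0.5, 3600.0)
print(round(width, 4))   # about 0.0087 inch, i.e., less than 1/100 inch
```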

Separable acuity. The second type of acuity is separable acuity; this refers to the ability to see a repetitive pattern as being striated or checkered. A common procedure for measuring separable acuity is to present a series of parallel lines equally spaced, i.e., a dark line, a bright line, a dark line, etc., all of equal width. The numerical measure of acuity used in this case is also an angular measure, the angular separation between adjacent dark lines (or bright lines) required for just detecting the striation. The threshold angle for separable acuity is about fifty to one hundred times greater than the threshold angle obtained with visible acuity, and there is evidence that the upper limit on separable acuity is imposed by the mosaic structure of the sense cells in the eye. The smallest separable angles one can obtain are between 30 seconds and one minute of visual angle; appropriate computation indicates that this corresponds to the diameter of the sense cells in the eye.
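The "appropriate computation" linking the separable-acuity limit to the receptor mosaic can be sketched with the same geometry, if one assumes a typical round figure of about 17 millimeters for the distance from the eye's optical (nodal) point to the retina. That assumed distance, and the illustration below, are not part of the original text.

```python
import math

# Assumed distance from the eye's optical (nodal) point to the retina;
# about 17 mm is a commonly used round figure for the human eye.
NODAL_DISTANCE_MM = 17.0

def retinal_extent_mm(angle_arcsec):
    """Approximate linear extent on the retina of a given visual angle."""
    return NODAL_DISTANCE_MM * math.tan(math.radians(angle_arcsec / 3600.0))

print(round(retinal_extent_mm(30) * 1000, 1))   # ~2.5 micrometers
print(round(retinal_extent_mm(60) * 1000, 1))   # ~4.9 micrometers
# Both figures are on the order of the diameter of a foveal cone.
```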

Both of these types of acuity are influenced by the intensity of the light background against which the dark lines are seen. Acuity is poor when it is measured at low intensity levels; it is good when the intensity levels are high.

Temporal discrimination. A third kind of visual discrimination of which we are capable involves sensitivity to temporal changes in the stimulus (temporal resolution). In this case interest centers on the ability to detect the temporal alternation of light and darkness. The typical procedure for measuring this ability is to measure what is called the critical fusion frequency. This is done by presenting alternating periods of light and dark and gradually increasing the rate of alternation until a steady light is seen. We can also measure the frequency of alternation that could just be seen as flickering; this is called the critical flicker frequency. These thresholds are essentially the same, although there are subtle differences between them. Both are called CFF. There are certain rates of alternation of light and dark that cannot be seen. We do not usually see the ripples of intensity in household lighting, most of which operates on current alternating at either 50 or 60 cycles per second. We do not usually see the flicker in commercial movies, although we frequently do see it in home movies because the number of alternations per second is lower.
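The measurement procedure just described is essentially an ascending method of limits: the alternation rate is raised in small steps until the observer first reports a steady light. The sketch below shows only that logic; the function `reports_steady` is a hypothetical stand-in for a real observer's responses, not an actual measurement procedure from the text.

```python
def find_cff(reports_steady, start_hz=5.0, step_hz=1.0, max_hz=100.0):
    """Ascending method-of-limits estimate of the critical fusion frequency.

    reports_steady(rate_hz) is expected to return True once the alternating
    light at rate_hz looks steady to the observer.
    """
    rate = start_hz
    while rate <= max_hz:
        if reports_steady(rate):
            return rate           # first rate at which fusion is reported
        rate += step_hz
    return None                   # no fusion within the range tested

# A simulated observer whose fusion point happens to be 42 cycles per second:
print(find_cff(lambda hz: hz >= 42.0))   # 42.0
```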

Our ability to see temporally alternating light patterns depends on the intensity at which we make the measurement. When lights of high intensity alternate with dark periods, the alternation rate at which they fuse will be high, e.g., 50 or more cycles per second. Light of low intensity alternating with darkness may appear steady at repetition rates of 10 or 15 times per second.

The CFF depends upon the size of the stimulus; the larger the size of the stimulus, the higher the fusion rate. The CFF also depends on the region of the visual field in which the alternating stimulus appears; the visual system is better at resolving alternating patterns in the peripheral field than it is in the central field.

Discrimination of wave-length differences. It is important to emphasize that we are not equally sensitive to all wave lengths in the region from 400 to 700 millimicrons. We are relatively insensitive to the extremes of the visible spectrum, i.e., to red and blue; we are maximally sensitive to the middle of this spectrum, i.e., to the yellow and green portions. When we attempt to measure the sensitivity to different wave lengths quantitatively, we discover that we possess two different sensitivity functions. We obtain one of these if we stimulate in the peripheral part of the visual field; this function is most easily obtained if we have first allowed the eye to become accustomed to the dark. We obtain the second if we measure the sensitivity to different wave lengths by presenting the stimulus in the central field of vision.

Duplicity theory: rods and cones. The first of these functions, obtained in the periphery, has a peak sensitivity at about 505 millimicrons and is called the scotopic visibility curve; the second of these functions has a peak sensitivity at about 555 millimicrons and is called the photopic visibility curve. The distinction between these two curves is important in several respects, since they appear to represent the action of two separate visual systems. The scotopic curve is usually attributed to the action of those sense cells called rods. These cells are present in large numbers in the periphery and are completely absent in the most central part of the eye. The photopic curve is felt to be due to the action of sense cells called cones. These are the only sense cells found in the very center of the retina; they are present, but in relatively small numbers, in the periphery. Since we lack color vision in the extreme periphery of the eye and have maximum color discrimination in the central field of vision, the cones and the photopic systems are considered to provide the mechanism for color vision. This view of the separate responsibilities of the rods and cones is called the duplicity theory.

Color blindness. All physical systems that respond differentially to various wave lengths possess, by definition, a differential sensitivity to wave lengths. In this trivial sense, probably all living organisms are sensitive to some wave-length differences. However, we reserve the term “wave-length discrimination” for the special case where the differences in the effect of two wave lengths cannot be eliminated by adjusting the intensity of either one. This turns out to be a rather exacting requirement. On the basis of this requirement, a number of species are said to lack color discrimination. Humans possess this ability as a species, but some individual members of the species do not possess this ability. They are said to be totally color-blind. Individuals who are color-blind perform differently than the normal observer in a number of ways. There are several varieties of color deficiency other than total color blindness. Individuals who have color deficiencies are likely to exhibit abnormal wave-length discrimination curves, and they are likely to see one portion of the visible spectrum as colorless. The portion of the spectrum that will appear colorless will depend on the type of color deficiency.

Color vision. We can measure threshold differences in wave length by selecting a standard stimulus having some reference wave length and adjusting the wave length of a comparison stimulus figure until the two stimuli are just detectably different in hue. If we repeat this procedure many times, using different standard wave lengths, we can determine how our capacity to see differences in hue varies with the region of the spectrum in which we are operating. The threshold differences in wave length vary, in a complex way, with the wave length of the standard stimulus, but the results can be crudely summarized by saying that over most of the visible spectrum we are able to detect differences of approximately two or three millimicrons. It has been estimated that there are about 120 discriminable color steps in the range from 400 to 700 millimicrons.

One of the basic findings of color-mixture experiments is that it is possible to achieve color matches for all wave lengths in the visible spectrum by selecting three fixed wave lengths—called primaries or primary wave lengths—in the visible range and mixing them in different amounts and different combinations. This fact has provided one of the empirical bases for the generally accepted trichromatic, or three-color, theory of color vision.

Light and dark adaptation. One of the most impressive features of the visual system is the range of intensities over which it can operate. If we are properly adapted to the level of illumination, we can easily see, and move among, objects on a moonlit night or see clearly on a sun-drenched beach. The intensities involved in these two situations may differ by a factor of about ten million. The processes that permit this latitude of adjustment are called dark and light adaptation. These processes of accommodation to different illumination levels take time. The process of adaptation to the dark is the slower process; after being exposed to the intensity levels of a sunny beach, it will take us from thirty minutes to an hour to acquire full sensitivity at the level of moon illumination. The reverse process, light adaptation, is by comparison more rapid, requiring approximately five to ten minutes.

When it was said that acuity and flicker discrimination are better at high intensities than at low intensities, it was meant to apply only when we are adapted to these intensities. Most visual performances at high intensities are adversely affected if we are adapted to a very low intensity just prior to making the measurements.

Visual mechanisms

The study of what man can and cannot see has been accompanied by an interest in how he sees, i.e., in the anatomical and physiological basis of vision. Light as a stimulus for human behavior achieves its effectiveness by virtue of the elaborate machinery possessed by the human body for detecting and processing the information contained in the radiant energy in the visible part of the electromagnetic spectrum. The primary sources for this energy are the sun and numerous “artificial” light sources. The great diversity of patterns of light to which we must respond results partly from the variety of light sources but more importantly from the countless gradations of reflections and transmissions of light by the many objects in the environment.

The processes of detecting and analyzing these intricate patterns of light begin with lights (1) entering the eye through the cornea; (2) passing through the pupil, a label for the opening in a structure called the iris; and (3) being modified by the lens, which permits fine adjustments in the optical power of the eye to bring the light to focus on the retina. The retina is a layer of structures on the inside of the eye. This layer contains the light-sensitive receptor cells and an elaborately connected system of nerve cells. It is in this layer that light is detected and converted into activity in the fibers of the optic nerve. The information contained in this optic nerve activity is then transmitted to and further analyzed by the higher centers of the nervous system. It is quite clear that many of the characteristics of visual sensitivity discussed in the previous section result directly from the properties of this physiological system.

The rods and sensitivity. We have already had occasion to refer to the fact that the sense cells in the retina fall into two main groups, the rods and the cones. These cells are separable, both structurally and functionally. A photosensitive material, rhodopsin, has been extracted from the rods, and many of its physical and biochemical properties are well known. When light is absorbed, this substance is changed into retinene, which, in the presence of an appropriate enzyme system, is changed to vitamin A. In the proper biochemical environment, retinene is reformed from vitamin A and in turn regenerates rhodopsin. The fact that these reactions are reversible led early investigators to speak of this set of chemical events as the visual cycle. The importance of these chemical events for vision is now widely accepted and may be illustrated by the fact that the absorption curve for rhodopsin corresponds closely to the human dim-visibility curve and that the absolute thresholds change substantially with vitamin A deficiency, as well as by many other observations.

Although the linkage between the rhodopsin cycle and certain aspects of visual sensitivity is firmly established, the early expectation that changes in concentration of rhodopsin resulting from the presence of light would account directly for the sensitivity changes during adaptation now seems unlikely. The critical word here is “directly.” The concentration changes are in the right direction, and they have the appropriate time course; therefore, most investigators feel that the rhodopsin cycle is still clearly implicated in the processes of light and dark adaptation. The difficulty is that the changes in concentration of rhodopsin resulting from light stimulation fall far short of those required if the threshold changes measured in the human observer are to be explained by the light-trapping quality per se of this photochemical system. Some additional mechanism must be involved to yield the large changes in sensitivity. The exact nature of this additional mechanism (or mechanisms) is not established at the present time.

The evidence derived from the performance of the rod system under optimum test circumstances suggests that the visual sense cell comes very close to being a perfect radiant-energy detector. Stated in another way, it appears that the sensitivity of the rods is such that a single cell can respond when a single light quantum is absorbed. A quantum is the smallest packet of energy that can be radiated or absorbed by any physical system. Many investigators feel that the human rod can, in fact, respond to a single quantum absorption; even the most conservative estimates of rod sensitivity require no more than two or three quantal absorptions. How such small energy exchanges lead to the neural processes required to activate the higher brain centers is not completely understood.
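To convey how small the energies involved are, the energy of a single quantum can be computed from Planck's relation E = hc/λ. The sketch below, added here for illustration, evaluates it at 505 millimicrons, near the peak of rod sensitivity; the constants are standard physical values.

```python
PLANCK_H = 6.626e-34       # joule-seconds
SPEED_OF_LIGHT = 2.998e8   # meters per second

def quantum_energy(wavelength_nm):
    """Energy, in joules, of one quantum (photon) at the given wavelength."""
    return PLANCK_H * SPEED_OF_LIGHT / (wavelength_nm * 1e-9)

# One quantum at 505 millimicrons (nanometers), near the rod sensitivity peak:
print(quantum_energy(505))   # roughly 3.9e-19 joule
```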

The retina and the optic nerve. Absorption of quanta by the sense cell leads to electrical changes in the various layers of the retina and eventually to activity in individual nerve fibers in the optic nerve. The messages in these nerves occur in the form of trains of nerve impulses; each impulse is a brief electrochemical change (of the order of 1/1000 second in duration) that is propagated along the nerve fiber at a uniform speed.

There are about one million individual nerve fibers in one of our optic nerves, and each of the nerve fibers serves a particular region of the retina. The region of sensitivity of a given fiber is called its receptive field, and the receptive field of a particular fiber will overlap with the receptive fields of many, but not all, other fibers.

Because the optical properties of the eye produce an orderly projection of the light in the environment onto the retina and because individual nerve fibers are stimulated only by light in a restricted region of the retina, it follows that there must be considerable specialization among the nerve fibers in the optic nerve. This specialization does exist, but it is not perfect or complete. One reason for the imperfection becomes apparent when we consider the amount of convergence that is forced on the visual system by the anatomical details of retinal structure. There are about 125 million sense cells in the retina of each eye; there are only one million fibers in the optic nerve coming from each eye. These figures mean that in the representation of light patterns there must be a convergence, on the average, of 125 to 1. This figure is only an average, and it must be viewed cautiously when we think about visual function. Some fibers in the optic nerve may serve only a few sense cells, as is the case with fibers coming from the fovea, that part of the retina stimulated by a spot of light placed along the main line of sight of the eye. It is in the foveal region that we possess our greatest visual acuity. Other fibers in the optic nerve may serve several hundred sense cells; this is the common situation for the cells in the periphery of the retina, regions serving us when a spot of light is presented off to the side of our main line of sight. It has been suggested, for example, that the high sensitivity in the periphery may be due in part, if not completely, to these differences in neural connections in the two parts of the retina.

There are many direct lines of evidence showing that visual sense cells do not possess separate and independent information tracks to the higher brain centers. One kind of evidence comes from a conceptually simple experiment. When we record the electrical response of a single optic nerve fiber stimulated by a small spot of light and compare this response with that obtained when we present this light simultaneously with a second light in an adjoining region, we find that the responses are different. This is a simple example of the complex interactions that take place before any message leaves the retina. As the messages concerning patterns of light, shadows, and colors are carried to the higher centers in the brain many additional interactions are introduced.

While we still lack a complete understanding of the neurophysiology of the visual system, rapid advances have been made in the last decade or two, and a number of facts are now available. There is a spatial localization of activity in the brain, linked with the position of the stimulus in the field of view and therefore with the part of the retina stimulated. The kind of response a single nerve cell will give depends on the nature of the stimulation. Sometimes a cell will respond to the onset of the stimulus, sometimes to the termination; sometimes it will signal both the onset and the termination but will give no response to steady illumination. In a given test situation the cell will give one of these responses and do so reliably. For most nerve cells there are clearly defined regions in the visual field that will yield “on” responses and other regions that will yield “off” responses. The interaction of these regions is such that if they are stimulated equally, no response will occur. The organization of these fields leads to a new and different kind of specialization of function. There are cells that will respond to certain shapes and not to others; some cells respond to certain orientations of a figure and not to other orientations; some cells respond to stimuli moving in one direction and not in another. Data obtained by recording the electrical activity of the individual nerve cells in the brain have begun to provide a firm foundation for understanding the physiological basis of form and movement discrimination.

In an analogous way, we find a degree of specialization of function when we study the responsiveness of individual cells to lights of different wave lengths, i.e., to colors. Some cells show maximum sensitivity to one band of wave lengths; other cells have maximum sensitivity to another part of the visible spectrum. Such results are rapidly providing information on which we can build a physiological theory of color vision and color blindness.

Conrad G. Mueller

[Other relevant material may be found in Nervous system, article on Structure and function of the brain; Perception; Psychology, article on Physiological psychology; Psychophysics; Senses; and in the biographies of Helmholtz; Hering; Müller, Johannes.]

BIBLIOGRAPHY

GENERAL REFERENCES

Geldard, Frank A. 1953 The Human Senses. New York: Wiley.

Morgan, Clifford T. (1943) 1965 Physiological Psychology. 3d ed. New York: McGraw-Hill.

Mueller, Conrad G. 1965 Sensory Psychology. Englewood Cliffs, N.J.: Prentice-Hall.

Osgood, Charles E. (1953) 1959 Method and Theory in Experimental Psychology. New York: Oxford Univ. Press.

Stevens, S. S. (editor) 1951 Handbook of Experimental Psychology. New York: Wiley.

Woodworth, Robert S.; and Schlosberg, Harold (1938) 1960 Experimental Psychology. Rev. ed. New York: Holt. → Woodworth was the sole author of the 1938 edition.

READINGS AND SYMPOSIA

Bartley, Samuel H. (1941) 1963 Vision: A Study of Its Basis. Princeton, N.J.: Van Nostrand.

Graham, Clarence H. (editor) 1965 Vision and Visual Perception. New York: Wiley.

Le Grand, Yves (1948) 1957 Light, Colour and Vision. New York: Wiley. → A translation of Lumière et couleurs, first published in 1948 as Volume 2 of Le Grand’s Optique physiologique.

Mechanisms of Vision. 1960 Journal of General Physiology 43, no. 6, part 2:1-195. → Symposium held at the Instituto Venezolano de Investigaciones Científicas, Caracas, Venezuela, July 31-August 3, 1959.

Photoreception. 1958 New York Academy of Sciences, Annals 74, article 2:161-406. → Papers presented at a conference on photoreception held in New York City, January 31-February 1, 1958.

Pirenne, Maurice H. 1948 Vision and the Eye. London: Chapman & Hall.

Symposium on Electrophysiology of the Visual System. 1958 American Journal of Ophthalmology 46, no. 3, part 2:1-182. → Papers presented at a symposium held in Bethesda, Maryland, January 16-17, 1958.

Symposium on Physiological Optics. 1963 Journal of the Optical Society of America 53, no. 1:1-201. → Papers presented at a symposium held in Washington, D.C., March 14-17, 1962.

Walls, Gordon L. 1942 The Vertebrate Eye and Its Adaptive Radiation. Bloomfield Hills, Mich.: Cranbrook Institute of Science.

II. EYE MOVEMENTS

Human vision involves both looking and seeing. The eyes must actively pursue one direction of regard after another in order, for example, for a person to drive a car, read a book, or simply admire a view. This is because the only objects we see clearly are those that are imaged on the fovea, a small spot at the very center of the receptor surface at the back of the eye. The principal function of eye movements is thus to enable us to survey the visual environment, bringing one portion of it after another into line with the central fovea. This function requires a precise and efficient neuromuscular mechanism for linking the two eyes.

The neuromuscular basis of eye movements can best be understood by considering each eye as a globe within a bony orbit or socket. The eye socket is lined with fibrous tissue that is sufficiently elastic to permit extensive rotation of the globe. Horizontal, vertical, and oblique rotations are produced by three pairs of muscles attached to the globe at appropriate points. The two muscles of each pair act reciprocally. For example, the contraction of one horizontal muscle rotates the eye to the right, and the contraction of the other, to the left; but when one is active, the other is relaxed so as not to oppose it.

The muscles of each eye are controlled by nerve fibers proceeding from the third, fourth, and sixth cranial nerves. Midbrain nuclei for these nerves are controlled, in turn, by fibers that run principally from the frontal lobe of the cortex, for voluntary eye movements, and from the occipital lobe, for involuntary and pursuit movements. Eye movements are critically affected by stimulation of the vestibular apparatus of the inner ear. It has been shown that eye movements can be elicited in experimental animals by electrical stimulation of many regions of the brain in the parietal—as well as the frontal and occipital—lobes, in the cerebellum, and in the brain stem.

Saccadic and conjugate movements. Human eye movements have been recorded by a number of experimental techniques. The most common form of ophthalmograph (eye-movement camera) is one in which a continuously moving film registers the momentary location of each eye, by responding to light that is reflected from the front surface of the cornea. Studies of the eye movements required for reading reveal that reading involves a succession of saccadic (jumpy) movements separated by pauses having a duration of about a fifth of a second. As a line of print is read from left to right, about four to eight of these pauses are made, whereupon the reader jumps back to the beginning of the next line and goes on with the same pattern. A good reader pauses only four or five times on each line, and his eye movements are so fast and accurate that they occupy scarcely 10 per cent of the reading time. A poor reader, on the other hand, pauses more times per line. His eyes may also retrace or dwell upon long or unfamiliar words. The inspection of such materials as photographs or geometrical diagrams also involves saccadic eye movements that carry the various parts of the figure into the center of foveal vision for most accurate discrimination. The two eyes exhibit such accurately coordinated (conjugate) movements that a given point of the inspection figure affects corresponding points on each eye, at all times.

Convergent and divergent movements. In another method of recording eye movements a plane mirror is attached to each eye and light reflected from it is focused onto a moving photographic film. This method of recording is sufficiently sensitive to be used for another form of eye movement—namely, the convergence of the eyes that takes place when observing an object that is nearby. Records made by embedding a small mirror in a tightly fitting contact lens worn on each eye reveal that an observer’s eyes alternately converge and diverge in order to look at a near point and a far one, respectively. Measurements have shown that a good observer performs vergence with great accuracy, thus assuring single vision and the stereoscopic fusion of the images seen by the two eyes under various conditions of object distance. It will be noted that vergence requires the eyes to move in opposite directions; vergence movements are thus completely different from the conjugate movements that take place when one looks from point to point of a figure seen at a constant distance from the observer. Vergence is also much slower and smoother than conjugate movements and more automatic in its action.

Fine, involuntary movements. A refinement of the plane-mirror technique permits the recording of still finer eye movements—namely, those that occur in spite of the observer’s attempts to fixate as steadily as possible. These minimal eye movements are tremors; small saccades, or jumps; and irregular drifts. Tremors and drifts appear to be largely independent of visual control, but the saccades serve to bring the eyes back to the center of fixation after they have wandered off.

Other functions of eye movements. It is clear that the eyes are never completely at rest. Optical techniques have been devised to counteract these minimal eye movements and, consequently, to cause an image to remain motionless on the retina of the eye. The result is the washing out and ultimate disappearance of any objects that are imaged in this way. Clearly, then, an additional function of eye movements is to prevent this temporary failure of vision so that prolonged observation can be maintained—for example, in sighting a gun or using a microscope on stationary objects.

However, the eyes are often required to observe moving objects, as in watching a game being played. In other cases, the observer himself may be moving, as in riding in a moving vehicle. Still more complex is the situation in which both occur, as in driving down a road and observing an oncoming car simultaneously. In each case the perception is such that there is a reference frame or stationary world with respect to which both the observer and the observed objects are judged to be moving. The apparent stability of the reference frame must be brought about by a complex sensorimotor feedback system involving postural reflexes, motions of the retinal image, and eye movements. It is obvious that so complex a mechanism must be based on “built-in” connections widely distributed throughout the nervous system and that its proper functioning requires normal patterns of maturation and development.

Factors affecting eye movements. A mechanism as complex as that controlling eye movements must, of course, be subject to malfunction due to congenital neural defects, the effects of drugs, or unusual environmental factors. Some congenital defects are forms of nystagmus, a class of eye movements having a rhythmic character that is at least partly nonvisual in its basis.

A common example of the effects of drugs is the diplopia (double images) associated with alcoholism. This typically results from esotropia (abnormal turning inwards) of the eyes when attempting to fixate distant objects or exotropia (turning outwards) for near ones.

Unusual environmental factors include the rhythmic motion of a ship or aircraft, which may induce motion sickness. Strong vestibular stimulation can also result from placing a person on a rotating platform or chair or from injecting a warm solution into the external canal of one ear. In all these cases, abnormal eye movements are likely to appear. In general, these movements are compensatory—that is, rotating a person to the right causes the eyes to move to the left until a limit is reached, and the eyes then quickly return to the right. The sequence of slow movements to the left followed by rapid ones to the right is reversed when motion of the body is stopped. This reversal of eye movements is known as postrotational nystagmus.

Lorrin A. Riggs

[Other relevant material may be found in Dreams; Perception, articles on Depth perception and Illusions and aftereffects; Reading disabilities; Sleep.]

BIBLIOGRAPHY

Alpern, Mathew 1962 Movements of the Eyes. Volume 3, pages 1-187 in Hugh Davson (editor), The Eye. New York: Academic Press.

Carmichael, Leonard; and Dearborn, Walter F. 1947 Reading and Visual Fatigue. Boston: Houghton Mifflin.

Riggs, Lorrin A.; and Niehl, Elizabeth W. 1960 Eye Movements Recorded During Convergence and Divergence. Journal of the Optical Society of America 50:913-920.

Riggs, Lorrin A.; and Ratliff, Floyd 1951 Visual Acuity and the Normal Tremor of the Eyes. Science 114:17-18.

III. COLOR VISION AND COLOR BLINDNESS

Color vision means the ability to see colors, to perceive and to discriminate objects on the basis of variations in the composition of the radiant energy that they emit, transmit, or reflect.

The richness, variety, and importance of normal color experience are truly impressive. Indeed, archeological findings show that color has played an important role in man’s culture even before recorded history. Ability to see colors contributes immeasurably to ideas of beauty and to the aesthetic appreciation of objects in the everyday world. Color is used routinely in business, industry, science, and medicine to code and identify objects and to communicate information. Color also seems to be associated with a variety of affective responses, feelings, emotions, and moods—liking, disliking, excitement, depression, etc. Although scientific understanding of the basis for these emotional concomitants of color is rudimentary, the consequences of such affective overtones are immensely important even in such practical things as the packaging and marketing of a large number of consumer products and in the enjoyment of them.

The stimulus for color vision. Seeing an object is dependent on light from that object entering the eyes. The light may be generated and emitted by self-luminous objects, such as light bulbs, phosphorescent substances, and the sun; reflected from nonluminous objects, such as table tops, paints, and fabrics; or transmitted through filters, such as certain glasses, plastics, and liquids. In actual fact, all three processes are involved in producing most visible rays of light. For example, a given light ray may have been emitted originally by the filament of a light bulb, filtered while being transmitted through the glass envelope around the filament, and further modified through reflection from a surface.

The light to which our eyes are sensitive consists of a narrow band of radiation in the electromagnetic spectrum. This spectrum extends from the invisible, miles-long radio waves, through the yards-long waves used in television and FM broadcasting and the still shorter infra-red heat waves, across the visible spectrum to the invisible ultra-short ultraviolet waves, and out to the infinitesimally short waves of cosmic radiation. The visible spectrum consists of those waves that are roughly from 385 to 770 mμ (millimicrons) in length. When a beam of white light is dispersed by a prism and separated into its component wave lengths, the visible spectrum appears as a variegated display of vivid colors. Starting with deep violet at the short wave length end of the spectrum, the colors shade imperceptibly into bluish purple, blue, blue-green, green, yellow-green, yellow, orange, and deep red at the long wave length end.

Practically never, however, does an ordinary person see the colors produced by isolated wave lengths of radiation. The light coming from most objects is a mixture of a large number of wave lengths, and it is the particular combination of these wave lengths and the relative amounts of energy in them that give an object its characteristic color. If the distribution of wave lengths in a ray of light is known, its color can be specified exactly. The reverse is not true, however. A given color can be produced by any one of an infinite number of combinations of wave lengths.

The visual system. Decades of intensive research have still not clarified precisely how the eye and its associated neural structures transform radiant energy into color experience. It is known, however, that one of the most important steps in this transformation occurs in the light-sensitive layer of the eye, the retina, which is the innermost of three tunics or coats in the back part of the eyeball. Although the entire retina has an average thickness of only 300 microns (0.3 millimeters), microscopic examination reveals it to be a structure of prodigious complexity. Ten distinct layers contain literally hundreds of millions of nerve cells and fibers. So intricate is the network of interconnections among these elements that anatomists have succeeded only in tracing out some of their grosser features.

Just within the outermost (most rearward) layer of the retina is a layer of rod and cone cells. Although there is little doubt that absorption of light takes place within these rods and cones and is converted by them into nerve impulses, the exact process is not yet completely understood. It is known, too, that the rod and cone cells serve two different functions in vision. The former, of which there are about 120,000,000 in the human eye, are primarily involved in seeing under extremely dim illuminations, below that of full moonlight. These rods are achromatic receptors; that is, they respond only with sensations of white and various shades of gray, no matter what wave lengths stimulate them. The cones, of which there are about 6,500,000 in the normal eye, operate most effectively at high levels of illumination such as are normally encountered in daylight. These are the receptors that provide sharp form acuity and ability to see chromatic colors.

Color vision theories

Although theories of color vision abound in the scientific literature, it is safe to say that no single theory is consistent with all the known facts of normal and abnormal color vision (Judd 1951, pp. 830-836). Most theories agree, however, that there must be several types of photosensitive elements in the normal eye and that these different receptors are differentially sensitive to various segments of the visible spectrum. Further, most theories agree that different colors are experienced because of blends of responses resulting from the stimulation of these different kinds of receptors in various proportions.

Young-Helmholtz theory. The facts of color vision require that there be at least three different kinds of receptors in the eye. It was the physician Thomas Young who first recognized this fact. In 1801 he advanced the notion that there are three fundamental kinds of receptors in the eye, one sensitive primarily to red, one to yellow, and one to blue. Other color sensations, he reasoned, resulted from the additive effects of the outputs from each of these three receptors. A year later, however, Young changed his three receptors to red, green, and violet, on the basis of some new observations carried out on the spectrum by W. H. Wollaston. Young’s theory appears to have been largely ignored for the next half century and it was not until 1852 that the great German physiologist Hermann von Helmholtz resurrected and championed it. Since that time the theory has been commonly referred to as the Young-Helmholtz theory. In proposing three fundamental receptors, the Young-Helmholtz theory is scientifically parsimonious. Three, however, is merely a lower limit and other theorists have postulated as many as seven different kinds of receptors [see Helmholtz; see also Hartridge 1950, pp. 256-293].

Hering’s theory. For all of its attractiveness, the Young-Helmholtz theory has never been able to explain satisfactorily certain facts of color experience. Chief among these are the apparent linkages that appear to exist between certain pairs of colors when either the stimulus conditions or the conditions of the human observer are systematically changed. For example, the discrimination of yellow and blue becomes much worse than that of red and green as the size of the stimulus is decreased. As another example, certain pairs of colors typically drop out in congenital color vision defects and in certain diseases of the eye. To take account of such data the physiologist Ewald Hering proposed in 1874 that there are three independent visual substances in the retina, each capable of reacting in either of two opposite directions through some sort of metabolic or chemical process. He termed these two directions of change assimilation and dissimilation. In his view the three visual substances were black-white, red-green, and yellow-blue, and the color a person saw depended on the way in which a particular substance was responding. Table 1 presents the fundamental color sensations as Hering conceived of them.

Table 1

                        VISUAL SUBSTANCE
DIRECTION OF CHANGE     White-black     Red-green     Yellow-blue
Assimilation            Black           Green         Blue
Dissimilation           White           Red           Yellow

Color sensations other than these, he argued, are produced from mixtures of these fundamental six in various proportions. The Hering, or opponent-process, theory has also had numerous staunch supporters through the years [see Hering; see also, for example, Hurvich & Jameson 1957].

Evaluation and problems. The Young-Helmholtz and Hering theories of color vision, or some variations of them, have been the two most prominent ones in the history of color vision, and they have largely dominated thinking and research in this area for the past century or more. However, the complexities of the visual system have always intrigued theoretically inclined scientists, and literally scores of other theories have been seriously proposed from time to time. None of these alternatives has really stood the test of time. For a brief summary of some other color-vision theories see Hartridge (1950) and Judd (1951).

For years it has been a disconcerting source of embarrassment to all color-vision theories that anatomical and physiological investigations failed to disclose the different kinds of cones that most of them hypothesize. The experimental procedure necessary to show the existence of different kinds of cones is conceptually simple but practically difficult, primarily because of the extremely small size of a cone (from 0.002 to 0.009 mm. in diameter) and the almost immeasurably small amount of pigment contained in each one. Further, to measure the absorption spectrum of the pigment in a single cone requires using a light so small and an intensity so low that random variations in the output of the light itself constitute a significant source of error. These and other technical difficulties have been surmounted only within recent years.

Empirical support. The year 1964 was an exciting one for color-vision theory, possibly one of the most significant years of the past century in this respect. Early that year, Marks, Dobelle, and MacNichol (1964), using extremely sophisticated apparatus, reported that they had examined single parafoveal cones from human and monkey retinas and found three types of receptors with maximum absorption in the yellow, green, and violet regions of the spectrum. The absorption spectra of the cones they found are strikingly similar to the sensitivities that have been postulated for the three receptors in some forms of the Young-Helmholtz theory. These findings were confirmed almost simultaneously by Brown and Wald (1964). Although much more work still needs to be done, there is at last direct and unequivocal evidence that primate color vision is mediated by at least three (and perhaps only three) different kinds of cones, each containing photopigments sensitive to different regions of the spectrum.

In order for us to see color, the differential responses of the cones to stimuli of various wave lengths must somehow be preserved in the inner nervous pathways of the optic system. Less is known about the functioning of these nerve pathways than is known about the ultimate receiving elements themselves, the rods and cones. Recent research, however, has done much to clarify the behavior of the ganglion cells, the so-called third-order neurons, two steps behind the receptor cells in the visual system. These are particularly important for any theory of color vision because it is from the ganglion cells that optic nerve fibers go to the brain. Anatomical studies show that ganglion cells may collect information from single cones (primarily in the fovea), from several rods, or from groups of rods and cones. Because of these intricate connections, we might expect that the ganglion cells would exhibit a more complex type of response than do single rods and cones. Electrophysiological studies confirm this suspicion.

Microelectrode studies of individual ganglion cell responses show that the same cell may be either excited or inhibited, depending on the wave length of the stimulus. Further, in some cases the same wave length of light may produce excitation at one intensity and inhibition at another intensity (MacNichol 1964). Our current thinking, therefore, is that both the Young-Helmholtz and the Hering theories of color vision are correct. The former, or some variation of it, appears to be a good description of the rods and cones. The latter, in some form or other, probably describes how the information from the rods and cones is encoded into complex on-off signals by the color-sensitive ganglion cells for transmission to higher visual centers.

Even less is known about what happens in the brain than is known about the functioning of the retina, and almost anything one might say about the way in which the electrical responses of the retina are transformed eventually into color experience is largely conjectural at this time.

Color and color phenomena

The definition of color. The word color itself is associated with certain semantic difficulties. The term is used to refer to (a) stimuli or things, as when one speaks of the color of grass, flowers, paints, or other objects; (b) sensations, as when one says that a color “looks red”; or (c) characteristics of light that are identified neither with radiant energy nor with sensation (Optical Society of America 1953, p. 221). Further, the layman usually divides visual sensations into two broad classes: (a) those that are black, white, or gray and (b) those that are colored. In the technical literature, however, the word color is used as a collective name that includes not only what the ordinary person refers to as colors, such as red, green, and blue, but the series of blacks, grays, and whites as well. The former are designated chromatic colors (literally, “colored colors”) and the latter achromatic colors (literally, “colorless colors”). Even many technical writers in the field are inconsistent in their use of the word color in this and other respects, sometimes using the term in its everyday common meaning, and sometimes in the more inclusive sense of the color-vision specialist.

Dimensions of color sensation. Color sensations can be classified and ordered without any reference to the characteristics of the stimuli that arouse them. Many such classifications of colors according to their similarities and differences have been attempted by artists, philosophers, and scientists over the past several hundred years. As a result several systems of ordering and classifying colors are now in existence. Almost without exception all such schemata agree that some type of three-dimensional model is needed to represent adequately the full range of color sensations that the normal person experiences. Figure 1 is a diagram that is widely used by psychologists and color scientists for this purpose.

Hue. Perhaps the most important of the three fundamental dimensions of color as a mental phenomenon is hue. It is the essential quality element that leads one to refer to colors by such distinctive names as red, yellow, green, blue, and violet and is what the ordinary person means when he says color. Hue sensations do not, however, occur in discrete groups. Instead they shade imperceptibly from one to another and, indeed, form a complete circle, as illustrated in Figure 1. Starting with red, for example, one can describe color sensations that become progressively yellower, that is, the red first becomes orange-red, then orange, yellow-orange, and finally yellow. From yellow one can proceed by similarly continuous gradations to green, blue, violet, and back to red again. It is interesting that although the hue circle describing our sensations is complete, the hues in the visible spectrum are not. True purples and reddish purples cannot be seen in the spectrum. Most purples we see around us are made up of mixtures of waves from both the long wave-length and short wave-length ends of the spectrum.

Brightness. A second dimension of color sensation is brightness, the quantitative aspect of color sensation. Two common terms that are used to refer to variations in brightness are light and dark. It is easy to imagine two colors of identical hue, say two greens, that differ only in brightness. As with hue, however, the brightness dimension also forms a continuum, shading imperceptibly from very light to very dark hues. In technical terms brightness is used to refer to variations in the intensity of lights, lightness to variations in the intensity of surface colors.

Saturation. The third dimension of color sensation, saturation, is the most difficult of the three to explain in words alone, without reference to actual color samples. Perhaps the best way of defining saturation is to say that it is the percentage of hue in a color. In this sense, it is roughly parallel to the concept of the purity of a chemical compound or the concentration of a chemical solution. In common speech, words such as pale or deep, weak or strong, are used to refer to variations in saturation. Light brown, for example, is a weakly saturated yellow-red of medium lightness, and moderate pink is a weakly saturated light red. In Figure 1 saturation is represented by radii originating at the center of the diagram and extending in all directions from the center. In this diagram, white, gray, and black are colors of zero saturation and no hue. The white-gray-black continuum varies only in brightness.

Psychophysics of color vision. Although one can study color sensations without relating them to particular physical stimuli, it is nonetheless true that sensations of color are most consistently and readily elicited by appropriate kinds of stimuli. Variations in hue are easily obtained by varying the wave length of spectrum lights, or the dominant wave length of mixtures of wave lengths. Variations in brightness are produced readily by increasing or decreasing the amount of radiant energy in a stimulus. Variations of saturation are the direct result of mixing white or gray light in various proportions with the light of isolated wave lengths from the visible spectrum.

The study of the precise relationships between color sensations and the physical stimuli that evoke them is the province of psychophysics. Decades of research have provided a number of exact psychophysical functions that are useful not only for theoretical purposes but for many practical problems involving the control, measurement, specification, and production of colors. Examples of these are curves showing (a) the relative brightness of the spectrum colors when they are equated in the amount of radiant energy they contain, (b) the sensitivity of the eye to differences in wave length throughout the visible spectrum, and (c) the sensitivity of the eye to changes in saturation.

Color mixture. One of the most important of these psychophysical functions concerns color mixtures. Thomas Young, in 1801, deduced from the work of Newton that the full gamut of spectrum colors could be matched with mixtures of three suitably chosen primary colors. This early finding has been refined, amplified, and quantified in curves like those shown in Figure 2. The primary colors in this illustration are monochromatic lights of 480 mμ (blue), 510 mμ (a slightly bluish green) and 600 mμ (an orange-red). These curves show, for example, that a wave length of 575 mμ, which is seen as pure yellow, can be matched by roughly equal amounts of the red and green primaries.

Two supplementary points merit some clarification. First, the three primary colors, red, green, and blue, are not the same primaries used in mixing paints or pigments, usually magenta, yellow, and bluish green. The data in Figure 2 are for additive mixtures; lights are combined by adding them together in suitable proportions. Paints or pigments acquire their characteristic color by absorbing certain wave lengths. For this reason, mixing pigments is referred to as subtractive color mixing.

Second, although mixtures of the three primaries can match all of the hues in the visible spectrum, they cannot match both hue and saturation. The mixtures are almost uniformly less saturated than are the spectrum colors. To achieve complete matching for both hue and saturation, the spectrum colors must usually be desaturated by the addition of one of the primaries. These appear as negative quantities in Figure 2.
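The logic of a color match can be written as a small system of linear equations: the amounts of the three primaries are chosen so that the mixture produces the same three receptor responses as the test light. The sketch below is a minimal illustration using made-up receptor sensitivities, not the data of Figure 2 or any standard colorimetric functions; a negative solution corresponds to the "negative quantities" just mentioned, meaning that primary must be added to the test light rather than to the mixture.

```python
import numpy as np

# Hypothetical responses of three receptor types to one unit of each primary.
# Columns: red, green, blue primaries; rows: three receptor types.
PRIMARY_RESPONSES = np.array([
    [0.90, 0.40, 0.02],
    [0.30, 0.85, 0.10],
    [0.01, 0.05, 0.95],
])

def matching_amounts(test_responses):
    """Amounts of the three primaries whose additive mixture produces the
    same three receptor responses as the test light."""
    return np.linalg.solve(PRIMARY_RESPONSES, np.asarray(test_responses, float))

# A hypothetical test light and the primary amounts that match it:
print(matching_amounts([0.70, 0.60, 0.03]))
# A negative amount means that primary must be mixed with the test light
# instead, i.e., the "negative quantities" referred to above.
```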

The facts of color mixing are of considerable theoretical and practical importance. They form the basis for one of the most popular theories of color vision, the three-receptor theory, which maintains that the normal eye contains three kinds of cones, with sensitivities approximately as shown in Figure 2. In addition, the data of Figure 2 lie at the heart of one of our most important systems of color specification—the CIE (Commission Internationale de l’Éclairage, International Commission on Illumination) system. And, finally, the facts of color mixing find practical application in the production of color photography and color television.

Complementary colors. A special set of color mixtures are those involving complementary colors, first described by Newton in his Opticks in 1704. For every hue we can see there is some other hue that, mixed in the proper proportion with it, will cancel out both hues and leave only a perception of white or gray. Red, for example, mixed with the right amount of blue-green looks white. The same result occurs if we mix green and purple, or yellow and blue in the proper proportions. Complementary colors, therefore, are pairs of colors that yield white or gray when they are mixed additively.

Some wave lengths (those between 492 and 568 mμ) have no complementaries within the visible spectrum; their complementaries lie in the region of the extraspectral hues. Such complementaries are real, that is, they exist in the world of real colors. They merely cannot be found in the spectrum.

Afterimages and contrast. The sense of sight has associated with it a number of interesting phenomena, some of which go almost completely unnoticed by the average person. Afterimages are one of these. Stare steadily at a brightly colored object for about a half minute under ordinary illumination. Then shift your gaze to some light neutral gray surface. In a few seconds you will see an image of the object you stared at, but in its complementary color. This is a negative, or complementary, afterimage. The usual explanation for negative afterimages is that certain cones in the retina become desensitized to the stimulus color during the period of prolonged fixation.

If one looks at a brightly colored object briefly, under very intense light, he will typically see a brief afterimage with approximately the same hue as the original. This is a positive, or homochromatic, afterimage.

Once he has seen afterimages, the average person almost inevitably asks, “Why haven’t I seen them before?” There are probably many reasons why these images so easily escape our attention. One is that we undoubtedly learn to ignore them deliberately. When we turn our eyes from one object to another, we neglect the residual imagery of the first object because it contributes nothing to our perception of the object under scrutiny at the moment. In fact we ordinarily notice afterimages only when they are so intrusive that they cannot be ignored (as, for example, the afterimage that results from looking at the sun unintentionally). In addition, our eyes are normally in motion most of the time. They are not normally at rest long enough for strong afterimages to build up. And, finally, afterimages are usually out of focus and brief in duration. All of these factors undoubtedly contribute to the elusive character of these sensations.

Another color phenomenon having much in common with afterimages is chromatic contrast enhancement, or, sometimes, simply chromatic contrast or color contrast. Chromatic contrast refers to an apparent increase in the perceptual difference between colors when they are placed next to each other. A piece of red paper on a bright green background looks much redder than it would by itself. Similarly a yellow appears yellower and a blue bluer when these colors are together than when they are separate. As these examples suggest, chromatic contrast is most pronounced for complementary or near complementary colors, although it can also be demonstrated for colors that are not complementaries. A gray patch of paper on a colored background tends to take on the hue of the complementary of the background. For example, a gray patch on a red background usually has a greenish tinge. Chromatic contrast has practical usefulness in art, architecture, interior decorating, advertising, and industry because it is an extremely effective way of accentuating objects or of making them stand out from their backgrounds.

Color perception

Modes of color appearance. In everyday life colors are not experienced as isolated color sensations. They appear in various contexts or settings and are identified with things—objects, surfaces, or lights. These more complex visual phenomena are referred to as color perceptions to distinguish them from the more elementary color sensations and psychophysical functions discussed immediately above.

The usually accepted method of describing modes of color appearance is to refer to the actual physical conditions under which colors can be experienced. Five such modes are ordinarily distinguished by color scientists (see Table 2). Color that is perceived as belonging to a source of light, for example, the blue-white of a fluorescent tube, is referred to as color in the illuminant mode. Objects in the field of view that reflect light and cast shadows, and reflecting particles in the atmosphere, sometimes permit the identification of the color of the illumination in the field of view even when one cannot see the source of the light. Color perceived in this way appears in the illumination mode. Color that is perceived as belonging to a surface, such as the surface of a book or lemon, appears in the surface mode. Light passing through a more or less uniform and transparent substance, for example, a decanter of wine, gives rise to the perception of color in the volume mode of appearance. Finally, color is most easily perceived in the film mode by looking at an extended surface color through a small aperture in a screen.

Under ordinary conditions the several modes of color appearance are each remarkably stable and consistent: surfaces are always seen as surfaces, illuminants as illuminants, and so on. Moreover, the viewing conditions that elicit the various modes of appearance are ordinarily so compelling that there is ready agreement among people about the precise nature of their color experiences. This stability makes these modes of color appearance a major factor in enabling man to respond effectively to his physical environment. On the other hand, the fact that the modes of color appearance are so dependent on viewing conditions means that sometimes very simple or even minor changes in external conditions can produce a shift from one mode to another. A simple example is the shift from a surface to a film mode, which results when one almost completely closes his eyes and allows them to defocus in looking at a surface.

Table 2 — Dimensions of perceived color associated with the five modes of color appearance

DIMENSIONS | Illuminant (glow) | Illumination (fills space) | Surface (object) | Volume (object) | Film (aperture)
Hue | * | * | * | * | *
Saturation | * | * | * | * | *
Brightness | * | * |  |  | *
Lightness |  |  | * | * |
Duration | * | * | * | * | *
Size | * | (*) | * | * | (*)
Shape | * | (*) | * | * | (*)
Location | * | (*) | * | * | not in depth
Texture |  |  | * | * |
Gloss (luster) |  |  | * | * |
Transparency | (*) | * | * | * |
Fluctuation (flicker, sparkle, glitter) | * | * | * | * |
Insistence | * | * | * | * | *
Pronouncedness | * | * | * | * | *

Asterisks indicate those modes that have been established for the several dimensions; parentheses indicate those dimensions that are only weakly or doubtfully associated with certain modes.
Source: Optical Society of America 1953, p. 151.

Examples of shifts between other pairs of modes are described by the Optical Society of America (1953, p. 147 ff.).

Attributes of color perception. Each of the principal forms of color appearance is associated with several more elementary dimensions, as shown in Table 2. Hue, saturation, and brightness (for lights) or lightness (for surface colors) have already been defined. Except for the last two, all of the terms are self-explanatory. Insistence means the impressiveness, or attention-catching power, of a perceived color; pronouncedness refers to the quality or “goodness” of a color perception, for example, the whiteness of a white or the greenness of a green.

Color blindness

Approximately 7 or 8 per cent of males of Caucasian stock, and about 0.5 per cent of women, are color blind to some extent. Although the defect is most often an inherited one, it may also be acquired as a concomitant of traumatic injuries to the eye or brain, certain kinds of diseases, for example, jaundice and multiple sclerosis, or as a result of the ingestion of sufficient amounts of certain toxic substances, for example, lead, nicotine, and alcohol. Acquired color blindness is frequently curable; inherited color blindness never is.

Congenital color blindness is inherited as a sex-linked recessive characteristic, and it is this that accounts for the greater incidence of the defect among men than among women. On the average, color-blind men marrying color-normal women have color-normal children, but the daughters of such a union are “carriers” of the defect. A woman carrier who marries a color-normal man transmits the defect to half her sons. Thus the most common channel of inheritance is from grandfather to grandson through the mother. A woman can inherit color blindness if her father is color blind and her mother is a carrier. Half the daughters and sons of such a union are color blind; the other half of the daughters are carriers, while the other half of the sons are color normal. If both parents are color blind, all the children are color blind.
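
These transmission rules are simply the enumeration of the possible sex-chromosome combinations. The short sketch below (Python, purely illustrative; the allele labels and function names are invented for this example rather than taken from the text) lists the children of a carrier mother and a color-normal father and confirms the fractions stated above: half the sons are color blind and half the daughters are carriers.

```python
from itertools import product

# X-linked recessive inheritance: "Xc" carries the color-blindness allele,
# "X" is the normal allele, and "Y" has no corresponding locus.
mother = ["Xc", "X"]   # carrier mother passes on one of her two X chromosomes
father = ["X", "Y"]    # color-normal father passes on either his X or his Y

def phenotype(child):
    if "Y" in child:                       # sons
        return "color-blind son" if "Xc" in child else "color-normal son"
    if child.count("Xc") == 2:             # daughters need two defective alleles
        return "color-blind daughter"
    return "carrier daughter" if "Xc" in child else "color-normal daughter"

for child in product(mother, father):      # the four equally likely combinations
    print(sorted(child), "->", phenotype(child))
```

Running the same enumeration with a color-blind father and a carrier mother reproduces the other case described above, in which half of the daughters as well as half of the sons are color blind.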

Forms of color blindness. The term color blindness is an unfortunate and misleading one because the man in the street usually interprets it to mean that a person so affected is blind to all colors. This is a gross distortion of the facts. Color-vision scientists base the following classification of color-vision defects on color mixture data such as are shown in Figure 2.

Normal trichromats. Normal trichromats have normal color vision. They need mixtures of three suitably chosen primaries to match all the hues of the visible spectrum, and their mixture curves do not differ significantly from those in Figure 2.

Anomalous trichromats. Anomalous trichromats also need mixtures of three primaries to match all the hues of the spectrum, but they require excessive amounts of one of the primaries to achieve satisfactory matches. For this reason they are sometimes called “color weak.” Anomalous trichromats are classified according to which of the three primaries they require in greater than normal amounts. The protanomalous trichromat requires red; the deuteranomalous green; and the tritanomalous blue. Anomalous trichromats have relatively mild defects, which are evident principally in their confusion of certain weakly saturated hues—the pastels and tints.

Dichromats. Most color-blind people are dichromats, having two-color vision, in the sense that mixtures of only two of the primaries shown in Figure 2 are sufficient to match all the spectrum colors. Protanopes depend only on the blue and green primaries, deuteranopes on the blue and red primaries, and tritanopes on the green and red primaries. Although dichromats appreciate far fewer color differences than do the color normal, they can still make certain discriminations accurately and consistently.

Monochromats. Monochromats are very rare. For them all the colors in the spectrum are exactly alike in hue, differing, if at all, in brightness only.

Population differences in color blindness. Extensive color vision surveys among certain ethnic groups reveal that color blindness is decidedly less common among American Indians, Negroes, Papuans, and Fijians than among Caucasians. Various hypotheses have been advanced to account for these findings, but the data do not substantiate any particular explanation.

A. Chapanis

[See also Perception.]

BIBLIOGRAPHY

Brown, Paul K.; and Wald, George 1964 Visual Pigments in Single Rods and Cones of the Human Retina. Science 144:45-46, 51-52.

Chapanis, Alphonse 1965 Color Names for Color Space. American Scientist 53:327-346.

Evans, Ralph M. 1948 An Introduction to Color. New York: Wiley.

Hartridge, Hamilton 1949 Colours and How We See Them. London: Bell.

Hartridge, Hamilton 1950 Recent Advances in the Physiology of Vision. Philadelphia: Blakiston.

Hurvich, Leo M.; and Jameson, Dorothea 1957 An Opponent-process Theory of Color Vision. Psychological Review 64:384-404.

Judd, Deane B. 1951 Basic Correlates of the Visual Stimulus. Pages 811-867 in S. S. Stevens (editor), Handbook of Experimental Psychology. New York: Wiley.

Le Grand, Yves (1948) 1957 Light, Colour and Vision. New York: Wiley. → A translation of Lumière et couleurs, first published as Volume 2 of Optique physiologique.

MacNichol, Edward F. Jr. 1964 Three-pigment Color Vision. Scientific American 211, no. 6:48-56.

Marks, W. B.; Dobelle, W. H.; and MacNichol, E. F. Jr. 1964 Visual Pigments of Single Primate Cones. Science 143:1181-1183.

Optical Society of America, Committee on Colorimetry 1953 The Science of Color. New York: Crowell.

Pickford, Ralph W. 1951 Individual Differences in Colour Vision. London: Routledge.

Wright, William D. 1946 Researches on Normal and Defective Colour Vision. London: Kimpton.

IV. VISUAL DEFECTS

A visual defect may be defined as any condition that reduces the effective functioning of the eyes to a level below what is considered normal.

Visual acuity. Acuity of vision, the ability to distinguish aspects of the visual field, is usually tested by means of alphabet letters or other visual stimuli that are graduated in size. The usual scale takes the standard testing distance of 20 feet as the numerator of a fraction whose denominator indicates the distance at which a person with normal vision could just distinguish the smallest symbol that the person tested is able to distinguish at 20 feet. Thus, the fraction representing normal vision is 20/20; the fraction 20/40 means that the person can distinguish only at 20 feet what a person of normal vision can distinguish at 40 feet.
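
As a rough worked illustration (a standard way of reading the notation, added here as background rather than taken from the scale description above), the Snellen fraction can be converted to a decimal acuity score by simple division:

decimal acuity = testing distance / denominator distance

so 20/20 corresponds to 1.0, 20/40 to 0.5, and 20/200 to 0.1. Equivalently, the denominator divided by 20 indicates how many times larger than the normal threshold a symbol must be for the person to read it at 20 feet; a 20/40 eye needs letters about twice the normal threshold size.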

Degrees of defect in acuity. Acuity of 20/30 is borderline; it is considered a defect for some purposes but is frequently ignored by eye specialists, especially in young children. Acuity of 20/40 or worse is considered a definite handicap that should be corrected. Since many severe defects of acuity can be corrected by the use of eyeglasses to produce 20/20 acuity, in practical situations attention is usually paid to the best acuity that the individual can attain with correction. Children whose acuity is 20/70 or less in the better eye after all necessary medical or surgical treatment and compensating lenses have been provided are considered eligible for placement in special “sight-saving” classes. With modern corrective aids such as telescopic, microscopic, and aspherical lenses, some people who fall within the usually accepted definition of blindness (20/200 acuity or less in the better eye with correction) are still able to read, work, and move about without assistance.

Types of visual defects

Defects due to eyeball structure. The three most common visual defects are myopia (nearsightedness), hypermetropia (farsightedness), and astigmatism. These are usually due to variations from the normal shape of the eyeball.

Myopia. Myopia usually occurs when the eyeball is too long, so that light rays from a distant object focus before reaching the retina, thus blurring the retinal image. Light from a near object focuses at the retina or close to it, so that acuity in near vision may be normal or close to normal, while distance acuity without correction may be 20/200 or worse. Many cases of myopia are progressive during childhood but stabilize during adolescence. Properly fitted lenses can give normal acuity in distance vision to most myopic people.

Hypermetropia. Hypermetropia usually results when the eyeball is too short, so that light, especially when coming from a source near the eye, focuses behind the retina, with the result that, again, the retinal image is blurred. With moderate degrees of hypermetropia, distance acuity is often normal and sometimes superior. In near vision, moderate degrees of hypermetropia tend to be offset by extra accommodation of the lens, thus producing normal or near-normal acuity. However, since this places an extra strain on eye muscles, the farsighted person tends to suffer from eyestrain, headaches, and discomfort when he pays attention to near objects for a long time, as in reading.

Astigmatism. Astigmatism tends to result from uneven curvature of the front part of the eye (in the cornea, the lens, or both), so that light rays coming into the eye are not evenly distributed over the retina. Thus, the intensity of light along a certain line (vertical, horizontal, or at an angle) may be increased, while light along other orientations is diminished. This produces an effect of irregularity in the images of objects, with apparent changes of shape or brightness as the angle from which they are observed changes. Uncorrected astigmatism frequently produces symptoms of eyestrain and discomfort.

Myopia, hypermetropia, and astigmatism can usually be corrected by properly prescribed and fitted lenses. Myopia requires a biconcave, or “minus,” correction. Hypermetropia requires a convex, or “plus,” correction. The strength of the correction is expressed in diopters. Astigmatism requires a cylindrical (toric) lens to compensate for the distortion.
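
For reference (a standard optical definition, not part of the original passage): the power of a lens in diopters is the reciprocal of its focal length measured in meters,

power in diopters = 1 / focal length in meters,

so a +2.00 diopter reading correction corresponds to a converging lens with a focal length of 0.5 meter, while a -4.00 diopter myopic correction corresponds to a diverging lens with a focal length of -0.25 meter.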

Presbyopia and amblyopia. Presbyopia is a condition that results from the gradual loss of elasticity in the lens of the eye, so that accommodation of the lens for near vision becomes progressively less effective. The result is deficient acuity in near vision. This condition usually becomes noticeable between the ages of 40 and 50. A plus correction, in the form of either reading glasses or bifocals, is required. For myopic people who develop presbyopia, the correction may be a reduction in the strength of their minus correction.

Since astigmatism often occurs together with myopia, hypermetropia, or presbyopia, the lenses needed to correct for these combinations must combine the properties needed for each of the defects separately.

Amblyopia is a defect of acuity for which no structural deviation in the shape of the eye can be discovered. It is usually not improved by the use of glasses. It can be caused by deterioration or defect in the retina or in the optic nerve. Amblyopia may range in severity from a very mild diminution of acuity to near blindness. It is important to note that the same degree of acuity defect may be present in amblyopia as in myopia or in hypermetropia; while it is usually not correctable in amblyopia, it is completely or at least partially correctable in the other two conditions. Amblyopia that results from suppression or disuse of one eye, which occurs in many cases of strabismus and heterophoria (see below), can often be arrested or improved by such measures as covering the preferred eye with a patch or by placing a frosted lens before it, thus forcing use of the suppressed eye.

Defects in eye coordination. In order to produce clear vision the eyes have to make four major types of adjustments.

Pupillary reflex. The pupillary reflex is the automatic adjustment of the size of the pupil to the intensity of illumination. By diminishing the size of the pupil, it protects the interior of the eye against very bright light; in dim lighting, it provides a relatively large opening of the pupil. Sluggishness, irregularity, or absence of the pupillary reflex may indicate a neurological condition or a temporary effect of drugs.

Accommodation reflex. The accommodation reflex is the automatic adjustment of the shape of the lens to the distance of the visual target. This adjustment is controlled by varying tension in the circular muscle that surrounds the lens. As mentioned above, the extra need for accommodation when the hypermetropic person looks at near objects tends to produce eyestrain. The diminishing power of accommodation with increasing age is the cause of presbyopia.

Convergence reflex. The convergence reflex is the automatic control of the degree to which the eyes turn in so that both focus on the same target. The eyes are practically parallel when viewing an object more than ten feet away, but they turn in considerably when aimed at a very near target.

Nystagmus. A fourth adjustment is necessary to ensure that the object being observed is at the center of the visual field, where acuity is greatest. Observation of a slowly moving object requires smooth, continuous movements adjusted to the speed and direction of the target. Observation of the details of a stationary object, as in reading, requires that the eyes alternate between pauses (fixations) and quick, jerky movements (saccadic movements). Observation of a rapidly moving object involves a combination of saccadic movements and slower pursuit movements. Nystagmus is the technical term for this alternation of fixations or slow pursuit movements and saccadic movements. It is normal in reading and when watching a rapidly moving object. Chronic nystagmus is a condition, usually congenital, in which saccadic movements occur constantly.

Visual fusion. Visual fusion is the combining into a single perception of the slightly different images sent to the brain by each eye. In normal fusion both accommodation and convergence are properly adjusted to the target. Fusion difficulty may result when acuity in the two eyes is quite unequal or when either accommodation or convergence is inaccurate. It is often associated with a lack of proper balance between the six pairs of eye muscles that turn the eyeballs. When fusion fails to occur, the person may see double; more commonly, the image from one eye is ignored or suppressed. Continuing suppression of one eye over a period of years may result in amblyopia in that eye, and this condition may progress to blindness. The person who comes to depend entirely on his preferred eye is usually not aware of his lack of fusion and may achieve normal acuity with the one eye.

Partial, incomplete, or slow fusion is more likely to interfere with efficient two-eyed vision than a complete absence of fusion, since the resulting visual image is blurred and somewhat variable. This is most likely to interfere with activities requiring rapid, precise focusing, such as reading or following a rapidly moving object, such as a baseball.

Strabismus. Strabismus (often referred to as cross-eyedness or squint) is a condition in which there is a marked deviation of one eye from the line of sight. When the deviant eye turns toward the nose, the condition is called internal, or convergent, strabismus. When the eye turns away from the nose, it is called external, or divergent, strabismus. In alternating strabismus, the eyes take turns in focusing on and turning away from the target, and both maintain their acuity. When the same eye turns away consistently, a progressive amblyopia may develop. In most cases of strabismus, surgical correction of inequalities in the eye muscles is necessary. Nonsurgical treatment may include covering the stronger eye with a patch part of the time, using lenses with a prismatic correction, and using a graded series of orthoptic exercises.

Heterophoria. Heterophoria is a mild lack of balance between the eye muscles, usually not noticeable to the ordinary observer, in which one eye deviates sufficiently from the line of sight to produce some fusion difficulty. The deviation may be inward (esophoria), outward (exophoria), or in the vertical plane (hyperphoria). As with strabismus, one eye may deviate consistently or the two eyes may alternate. There may be double vision, temporary clearing and blurring of vision, or clear vision with suppression of the deviating eye and possible development of amblyopia. Nonsurgical treatment is more often prescribed for cases of heterophoria than for cases of strabismus.

Weakness in the perception of the third dimension (called astereopsis) is closely associated with fusion difficulties. One of the main cues used in the perception of distance and depth is the slight disparity of the images obtained when the two eyes focus on the same target and these images are fused in the brain. The same principle operates in creating the illusion of depth in stereoscopic pictures. People who lack depth perception may experience difficulty in tasks requiring accurate hand-eye coordination, in sports, and in driving cars.

Color blindness. Color blindness refers to difficulty in distinguishing between colors on the part of people who have no difficulty in seeing shapes and forms and can distinguish between shades of gray. Total color blindness is quite rare. Much more common (occurring in 4 to 8 per cent of males) is partial color blindness, involving difficulty in distinguishing certain reds and greens from each other and from gray. In most cases this is a weakness of color vision rather than a total absence of it; strong reds and greens may be perceived, while weaker hues may be indistinguishable from gray. Individuals with a weakness of color vision may live for many years without being aware of the defect. One of the social consequences of awareness of the prevalence of partial color blindness has been the adoption of traffic-signal colors in which there is enough yellow in the red and enough blue in the green to allow the partially color-blind to distinguish between them. There is no known treatment for color blindness.

Night blindness. Night blindness involves slowness in adapting to a low level of illumination. This is particularly troublesome in night driving, as the person with this condition tends to be dazzled by the headlights of oncoming cars and recovers acuity of night vision quite slowly. The condition is thought to be related to a deficiency in the visual purple, a photosensitive substance in the retina, and this in turn may be related to a vitamin A deficiency.

Defects caused by injury or disease. Gradually decreasing acuity may result from a number of progressive eye conditions that, if unchecked, may lead to blindness. A cataract, for example, may involve a gradually increasing opacity of the lens of the eye over a period of many years. A cataract may result from mechanical injury, chemical poisoning, dietary deficiencies, or advancing age. Similarly, the gradually increasing visual defect accompanying the early stages of glaucoma may not be identified as such by the patient. Scarring of the cornea due to injury or penetration by foreign objects may cause more or less complete clouding of vision; in some cases surgical replacement of the clouded portion with a transplant of clear corneal tissue from a donor is possible (Berens 1960).

Implications of visual defects

Educational significance. Children whose corrected vision falls below 20/70 in the better eye generally need special educational aids. Sight-conservation classes provide these children with materials printed in large type, special typewriters, magnifiers, and other aids that enable them to utilize the limited vision they have. With modern magnifying lenses, many children who previously would have been limited to reading Braille can now learn to read regular print. Use of existing vision is encouraged. So far as possible, these children share activities with children who have normal vision. About 1 child in 500 needs sight conservation help in school (National Society …1961; Hathaway 1943).

Lesser degrees of visual defect may also have a significant bearing on success and adjustment in school, particularly with regard to reading and to studies dependent upon reading. A great deal of research has been done on the relation of visual defects to success in learning to read. In general, poor readers are more likely to be hypermetropic than are good readers; myopia is of little significance in the causation of reading problems. Visual conditions that are most significant for reading are eye coordination difficulties involving depth perception, fusion, and lateral eye-muscle balance (Eames 1959; Harris 1961a; 1961b; Robinson 1958). These problems suggest the possibility of inadequacy in the neurological controls of the eye.

Occupational significance. Since World War II many studies have been made of the relationship between visual defects and industrial accidents. In general, it has been found that many workers have eye defects that either are not corrected at all or are corrected with inadequate or outdated lenses; in one plant, 53.7 per cent of all employees were found to be below the visual standards desirable for their jobs (Potter 1958). In addition to detecting defects and securing the best possible corrections, a visual-safety program requires visual-safety equipment in many kinds of jobs—equipment such as side shields and goggles to protect the eyes from splinters or abrasive dust, and absorptive lenses for those exposed to intense light and heat, such as furnacemen. The effectiveness of visual-safety equipment is demonstrated in a report from a large company which revealed that over a ten-year period eye injuries decreased 90 per cent, time lost dropped from 699 days to 22 days, and an estimated 82 eyes were saved from blindness (Sager 1954).

Another aspect of industrial vision work is the use of vision tests in job placement. Minimum visual standards have been worked out for many specialized occupations, and efficiency is increased when this factor is taken into account in personnel placement.

Visual screening tests

The most common type of test for visual acuity is the Snellen chart, in which lines of alphabet letters are printed in a variety of sizes to provide indications of acuity ranging from superior (20/16 or better), through normal (20/20), to severe defect (20/200). The Snellen chart is used at the standard distance of 20 feet, and each eye is tested separately. Several modifications not requiring knowledge of the alphabet are available for testing illiterates and young children—for example, an E chart, which requires distinguishing the direction in which the lines of the E are pointing.

A growing recognition of the limited testing possible with the Snellen-chart type of test has led to the development of several instruments to be used by nonmedical personnel to detect a range of eye defects and to refer the person tested for professional eye examinations. The Snellen chart tests only for acuity in distance vision. The newer visual-screening tests measure acuity in both distance and near vision and also include tests of eye-muscle balance, fusion, depth perception, and color perception (Imus 1949). The use of such instruments in schools and industry can significantly improve the effectiveness of the program for detecting and correcting visual defects.

Albert J. Harris

[See also Blindness and Reading Disabilities.]

BIBLIOGRAPHY

Berens, Conrad 1960 Prevention of Blindness: An Appraisal of Progress and Critical Needs. Sight-saving Review 30:132-138.

Duffy, John L. 1962 Saving Eyes and Dollars at Charlestown Naval Shipyard. Sight-saving Review 32:16-19.

Eames, Thomas H. 1959 Visual Handicaps to Reading. Journal of Education 141:1-35.

Harris, Albert J. 1961a How to Increase Reading Ability: A Guide to Developmental and Remedial Methods. 4th ed., rev. New York: McKay. → First published in 1940 by Longmans.

Harris, Albert J. 1961b Perceptual Difficulties in Reading. Volume 6, pages 282-290 in International Reading Association Conference, Proceedings. Edited by J. Allen Figurel. New York: Scholastic Magazines. → Each volume has a different title. Volume 6: Changing Concepts of Reading Instruction.

Hathaway, Winifred (1943) 1959 Education and Health of the Partially Seeing Child. 4th ed., rev. Published for the National Society for the Prevention of Blindness. New York: Columbia Univ. Press.

Imus, Henry A. 1949 Testing Vision in Industry. Volume 53, pages 266-274 in American Academy of Ophthalmology and Otolaryngology, Transactions. Rochester, Minn.: The Academy.

Lowenfeld, Berthold 1955 Psychological Problems of Children With Impaired Vision. Pages 214-283 in William M. Cruickshank (editor), Psychology of Exceptional Children and Youth. Englewood Cliffs, N.J.: Prentice-Hall.

Morgan, W. Gregory; and Stump, M. Frank 1949 Benefits From Professional Eye Care for Workers With Lowered Visual Performance. Volume 54, pages 99-105 in American Academy of Ophthalmology and Otolaryngology, Transactions. Rochester, Minn.: The Academy.

National Society for the Prevention of Blindness, Advisory Committee on Education of Partially Seeing Children 1961 Helping the Partially Seeing Child in the Regular Classroom. Sight-saving Review 31:170-177.

Potter, J. A. 1958 Seeing Eyes on the Job. National Safety News 78:44, 125-126.

Robinson, Helen M. 1958 The Findings of Research on Visual Difficulties and Reading. Volume 3, pages 107-111 in International Reading Association Conference, Proceedings. Edited by J. Allen Figurel. New York: Scholastic Magazines. → Each volume has a different title. Volume 3: Reading for Effective Living.

Sager, Herman 1954 Protection + Correction. National Safety News 70:28-29, 88-91.

VISION AND PERCEPTION

Perceiving is a constructive act. Using the data supplied by the senses and his knowledge of the world, the perceiver constructs his reality of the moment. Since the sensory systems have a limited capability for acquiring information, the constructed reality will reflect not only the present data but the person's interpretation of the information and its context. The perceiver does not simply record the actions of the physical world but reconstructs that world moment by moment. There are two factors constraining the construction of the percept: data and resource limitations. Data refers to the amount of information that may be acquired by a perceiver. This places the first restriction on the final construction of what is perceived. The influence of past experiences and knowledge on the creation of the percept is what is referred to as resource limitations. This entry will focus on the restrictions on the quality of the percept as influenced by data limitations associated with aging.

The descriptions are based on the average performance of adults as they age. As a presentation of normative information, it is not a prescription for what happens to each person. There is as much heterogeneity in the performance of elderly adults as there is in younger persons. This point is made because it is inappropriate to create a stereotype of an aging person as one who is affected by all of the changes described below. Instead, it is better to view this information as a guide to the potential changes that may occur to varying extents in the population.

In examining changes in vision and perception, it is important to consider the scope of the visual system. The eye is a complex structure whose optical properties, governed by the lens and the shapes of the cornea and eyeball, together with the neural structure and function of the retina, dictate the quantity and quality of the sensory data that are acquired. These data are then relayed to processing centers in the brain, which extract different types of information from the signal pattern, such as color and shape. Alterations in structure or function in one or more of these areas may underlie the age-associated effects discussed in this entry.

Visual pathology

There are several vision disorders that are more common as one ages. These pathological changes alter the structure and function of the eye and can severely limit vision if not treated. The most common disorder is cataracts. A cataract is a pathological increase in lens opacity that severely limits visual acuity. While there is a reduction in lens clarity for nearly all elderly individuals, cataracts affect only one in twenty persons over the age of sixty-five. It is the leading cause of functional blindness in older adults but it can be successfully treated by simply removing the affected lens and replacing it with a prosthetic lens.

Glaucoma is the second leading cause of blindness in the United States. Although the pressure within the eye remains essentially constant until the later decades of life, a pathological increase sometimes occurs. Glaucoma is characterized by both increased intraocular pressure and the resulting atrophy of the nerve fibers at the optic disk. The behavioral symptom of the disorder is a reduction in the perceiver's visual field. That is, a person loses sensitivity in the periphery of his vision. Unfortunately, this alteration in vision often goes unnoticed until the damage is quite advanced. Regular screening for glaucoma by measuring intraocular pressure would permit the detection of the disease at a point when medical intervention could be effective in limiting damage to vision.

Macular degeneration is a deterioration of the retina in the area of central vision that is critical for the perception of fine detail and color. The afflicted person has difficulty with all visual tasks that are ordinarily dependent on central vision, such as reading, face identification, and television viewing. Strong magnification can be used to improve reading ability. Since the peripheral fields are not affected, the person does not have difficulty in walking and moving through his environment.

Visual processing

While diseases of the eye offer clear limitations to the acquisition of information and the accurate perception of the world, they only affect a minority of older adults. There are other changes which occur in vision that are considered normative, that is, they occur to most people. These alterations in structure and function can be shown to have a marked effect on the visual experiences of older persons. An appreciation of these factors can help us to understand the perceptions of elderly adults and to create behavioral interventions to compensate for their effect.

Light sensitivity. Perhaps the most important limit on data acquisition is the reduction in light sensitivity that occurs in adulthood. It is a common experience for a child to be chided by a parent or grandparent to turn on more lights while they read. "You'll ruin your eyes trying to read in that light!" the parent may exclaim. This event illustrates the difference in light sensitivity between the child and the older person. While the child has sufficient sensitivity to light to be able to read easily, the parent would require more light to perform the same task.

Our maximum sensitivity to light starts to decline in the third decade of our lives. Indeed, starting with age twenty, the intensity of illumination must be doubled for every increase of thirteen years for a light to be just seen.
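
Taken at face value, this doubling rule can be restated as a simple formula (a back-of-the-envelope restatement of the claim above, not an additional finding):

required intensity at age a ≈ intensity required at age 20 × 2^((a - 20)/13)

A 59-year-old is three 13-year steps beyond age 20 and would therefore need roughly 2 × 2 × 2 = 8 times as much light as a 20-year-old for a dim stimulus to be just visible.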

One reason for the reduction in sensitivity is that less light actually reaches the retina, the receptive surface of the eye, as we age. It has been estimated that the retinal illuminance of a sixty year old is only one-third that of a twenty year old. The reduction in retinal illuminance can be attributed to several factors including the marked reduction in pupil diameter known as senile miosis. That is, in older adults the pupil simply does not open as wide to capture light. The gradual opacification, or cloudiness, of the lens and the reduction of transparency in the vitreous body also contribute to the reduction in retinal illuminance. Finally, there is evidence of the loss of photoreceptor cells that would reduce light sensitivity.

A simple intervention to compensate for the reduction in light sensitivity is to increase the level of illumination for older adults. However, care must be taken to avoid glare effects, which are more common in elderly adults. Excessively bright light, or light that is scattered by opacities in the lens, can reduce visual performance by dazzling a person or reducing the contrast of an object. One can compensate for glare effects in reading by using high-contrast or large-size material. In driving, however, glare is an issue for nighttime drivers who are exposed to the headlights of oncoming cars. A concern is that it takes substantially longer for people to recover from glare as they grow older. The temporarily impaired driver is at greater risk for an accident.

Acuity. Acuity is the capability to resolve fine detail. It is ordinarily assessed by asking the patient to read letters or symbols that are printed in high contrast. The smallest element that can be resolved accurately is the acuity limit of the observer. We have all noticed as we grew older that optical corrections became common among our peers. Acuity improves from childhood into adolescence and then starts to show a steady decline in early adulthood. Indeed, even when adults are fitted with their best optical correction, a gradual decline with age in peak acuity is noted starting late in the third decade of life. Thus, older adults with corrective lenses can be expected to have more difficulty than their younger counterparts in resolving fine detail.

To focus light on the macula, which is the portion of the retina capable of resolving fine detail, the lens must accommodate or change shape. The flexibility or accommodative power of the lens diminishes with increasing age. At about the mid-forties this loss of accommodative power becomes serious enough to affect the ability to focus on near objects. This loss of accommodative power for near vision is known as presbyopia.

Contrast sensitivity. The measurement of acuity assesses the ability to resolve small details at a high level of contrast. It is also important to determine the ability of a person to resolve objects under lower levels of contrast. In the assessment of contrast sensitivity the minimum contrast required to detect a difference between light and dark regions is determined. A common definition of contrast is (Lmax - Lmin) / (Lmax + Lmin) where Lmax is the maximum luminance in a stimulus display and Lmin is the minimum level of luminance present.
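
As a concrete, hypothetical case: a pattern whose bright regions have a luminance of 60 candelas per square meter and whose dark regions have 40 would have a contrast of (60 - 40) / (60 + 40) = 0.2. Contrast defined in this way always lies between 0, for a completely uniform field, and 1, for a pattern whose dark regions emit no light at all.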

One method of clinical assessment of contrast sensitivity is to have the patient read letters of fixed size that vary in contrast. At the top of the chart the letters are very dark against a light background. The contrast or darkness of the letters is successively reduced in each line of the chart. The lowest contrast at which the person can read the letters accurately marks their contrast sensitivity. A person who can read very light letters has a better contrast sensitivity than one who is successful only with dark letters.

Stimuli composed of gratings, or stripes in which the contrast is sinusoidally modulated, are also used in assessment. At high contrast levels, the grating appears to be composed of fuzzy stripes against a light background. The lighter the stripes, or the lower the level of contrast at which a person can just detect the grating, the better their contrast sensitivity. The width of the stripes is also varied to permit the determination of contrast sensitivity for different stimulus sizes. The variation of stimulus size is described in terms of spatial frequency: the number of stripes falling within a given extent of the retinal image, conventionally expressed as cycles per degree of visual angle, is the spatial frequency of the stimulus.
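
In the conventional notation (a standard definition supplied here for completeness, not drawn from the passage above), a vertical sinusoidal grating of mean luminance L0, contrast C, and spatial frequency f has the luminance profile

L(x) = L0 [1 + C sin(2 π f x)],

where x is horizontal position expressed in degrees of visual angle. Raising C from zero until the stripes are just detectable gives the contrast threshold at that spatial frequency, and contrast sensitivity is the reciprocal of that threshold.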

Contrast sensitivity peaks in adolescence and starts to decline in early adulthood. As would be expected from the acuity data, older adults require very high contrast to resolve small objects, or high spatial frequencies, even at high levels of illumination. This difficulty also extends to low and intermediate spatial frequencies under lower levels of illumination. Measures of spatial contrast sensitivity have been shown to be superior to acuity measures in predicting performance on a wide variety of tasks. Since the accurate processing of lower spatial frequencies is important for reading, face and object recognition, and road sign identification, the reduction of contrast sensitivity to these spatial frequencies places the older perceiver at a disadvantage for quick and accurate responding.

Color perception. The ability to discriminate among colors peaks in the early twenties and declines steadily with advancing age. The discrimination of shorter-wavelength colors, blues and greens, is particularly challenging for older observers. This reduction in color discrimination may be attributed to at least two sources. First of all, the lens yellows with adult aging, causing a selective absorption of shorter wavelengths, so that less light from that region of the spectrum reaches the retina. Secondly, there is evidence of a selective loss of sensitivity in the photoreceptors that are responsive to short wavelengths.

A consequence of the loss of sensitivity to shorter wavelengths is that white light, which is composed of all wavelengths, may appear faintly yellow. Also, blue objects may appear particularly dark and blues and dark greens may be indistinguishable. Such changes in color perception may affect the sartorial choices of older adults.

Depth perception. People live in a three-dimensional world. But we must infer the structure of that world from the two-dimensional array of light on our retinas. The construction of the third dimension is accomplished by using a number of cues, such as interposition, shading, and relative height. Only stereopsis sensitivity has been studied among different adult age groups. Stereopsis is the depth cue derived from the different images projected on the retinas by an object. Objects that are less than twenty feet from the observer will fall at slightly different positions on each retina. The disparity of these images is a cue for depth. The greater the disparity, the closer the object to the perceiver. As with the other vision characteristics that we have reviewed, stereopsis peaks in early adulthood with notable decreases in sensitivity after the fourth decade of life. Reductions in stereopsis sensitivity may affect the ability of a person to perform a number of important tasks such as hitting a curve ball, judging the distance from an object while parking a car, and walking. In the latter case, objects, such as sidewalk cracks and stair treads, whose depth is not appropriately discriminated, may become tripping hazards. More work is needed to fully appreciate the depth perception capabilities of older adults.
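
A rough geometric illustration (a textbook approximation, not drawn from the studies summarized above) shows why stereopsis is mainly a near-distance cue. For an interocular separation a and a viewing distance d, the binocular disparity produced by a small depth difference Δd is approximately

disparity ≈ (a × Δd) / d²  (in radians).

With a of about 0.065 meter, a 2-centimeter depth step viewed from 1 meter yields a disparity of roughly 0.0013 radian, or about 4.5 minutes of arc, which is easily detected; the same step viewed from 5 meters yields only about 0.18 minute of arc, near or below threshold for many observers.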

Motion perception. Objects in motion create a changing pattern of light on our retinas. Our ability to detect and discriminate these shifts of light stimulation is critical for our ability to determine not only the movement of objects but also our body motion and stability. While it is a subject that has generated a lot of interest, few studies of the impact of aging on motion perception have been reported. In one study of individuals from twenty-five to eighty years of age, the investigators reported that there was a linear decline of motion sensitivity with age. As with the decline in light sensitivity, such a pattern of change is suggestive of an age-related neurodegeneration in the visual system. However, several studies comparing the motion sensitivity of young and elderly adults have reported that the deficit in motion sensitivity was restricted to elderly women. That is, these studies reported that only elderly women and not men had poorer motion perception. A reason for such gender effects has not been suggested.

Beyond the detection of motion it is important to be able to judge the speed of an object. Accurate speed judgments permit drivers to merge onto highways and ballplayers to hit a baseball. In general, young adults are quite accurate in their speed judgments. One area where there is a critical failure in speed estimates that affects all ages is in the perception of large objects. A large plane appears to be floating very slowly on to the runway as it lands yet it is travelling at nearly two hundred miles per hour. A train approaching an intersection appears to be moving slowly enough for a driver to avoid a collision yet the seventy-mile-per-hour locomotive slams into the car. It is a strong illusion that large objects appear to move more slowly than their actual speed.

There has been limited work on the ability of older adults to judge the speed of automobiles. The evidence suggests that older people overestimate the speed of slowly moving cars while underestimating the speed of cars travelling at highway speeds. Such an effect may account for the hesitancy of older drivers to cross an active intersection or to merge on a highway. The importance of motion perception in general and speed judgments specifically demands that more work is required to understand the impact of aging on these abilities.

Stimulus persistence. The experience of a visual event does not end when the stimulus is removed. There is a phenomenal persistence of the event, not to be confused with an afterimage. The latter occurs because of the fatigue and recovery of receptors, while persistence is a continuation of information transmission. The duration of the persistence is inversely related to the luminance, contrast, and duration of the stimulus. That is, stronger visual events lead to shorter periods of visible persistence. It may be that weak stimuli persist longer to permit the perceiver to continue to extract needed information from the stimulus event. The cost of prolonged persistence is that separate stimulus events may blend together, yielding indistinct perceptual events. An example of such blending is the fusion of light pulses in a fluorescent light. There are distinct pulses of light and dark intervals emitted by the fluorescent tube. Each light pulse results in a residual persistence of the light in our visual system. Because the rate of flicker is so fast, the persistence of the light is long enough to fill the dark interval, leaving the viewer with the experience of continuous light. The pulse rate of light and dark intervals at which a person perceives the light as continuous is termed the critical flicker fusion (CFF) threshold.

Given the reduced light and contrast sensitivity of elderly adults and the inverse relationship between stimulus strength and persistence, it is to be expected that older perceivers will experience longer persistence. Indeed, the CFF threshold is lower for older observers. This means that an older adult presented with a relatively slowly flickering light will report that it is continuous while a young person will note the flicker. The fact that a stimulus event has weaker temporal integrity for elderly adults suggests that there may be significant misperceptions of sequentially occurring events. Indeed, it has been argued that prolonged stimulus persistence may be at the root of a number of perceptual deficits reported for elderly observers.

Perceptual span. We have noted a number of factors that limit the data available to older perceivers. A direct measure of the impact of these limitations on the construction of a percept may be made by noting the amount of information that a person can acquire in a brief glance. Such a measure is the perceptual span, which is also known as iconic memory. The visual information is available for only a brief period of time, such as a quarter of a second. The span is affected by the strength of the stimulus. Following the theme that has been developed here, young adults are capable of capturing a large amount of data, while older adults have a more limited capacity. The limit on the span of older observers may be related to their reduced light and contrast sensitivities, which result in relatively weak stimuli. This point was supported by a study that compensated for the reduced sensitivity of the elderly participants and found that under this special condition age differences in span were eliminated.

The limit on the perceptual span may be linked to what has been identified as the useful field of view (UFOV). The UFOV is the spatial extent within which highly accurate stimulus detection and identification can be performed. Measurement of the UFOV emphasizes the capability to acquire information in peripheral fields of vision where elderly adults have reduced light sensitivity. The UFOV of elderly participants is three times more restricted than young adults, meaning that they can examine only relatively small areas of the visual field. This restriction in the UFOV has been shown to be related to the incidence of automobile accidents at road intersections.

Conclusion

It has been shown that there are multiple factors that limit visual data acquisition by older perceivers. It also has been demonstrated that in some tasks the sensory limitations may be compensated by using stronger visual stimuli. An appreciation of the nature of the variables that influence our construction of reality will help us to understand the differences in perceptual experience as we age.

Grover C. Gilmore

See also Eye, Aging-Related Diseases; Hearing; Home Adaptation and Equipment; Memory.

BIBLIOGRAPHY

Botwinick, J. Aging and Behavior: A Comprehensive Integration of Research Findings, 2d ed. New York: Springer Publishing Co., 1978.

Corso, J. F. Aging Sensory Systems and Perception. New York: Praeger, 1981.

Corso, J. F. Sensory-Perceptual Processes and Aging. In Annual Review of Gerontology and Geriatrics, vol. 7. Edited by K. Warner Schaie and Carl Eisdorfer. New York: Springer Publishing Co., 1987.

Gilmore, G. C.; Wenk, H.; Naylor, L.; and Stuve, T. Motion Perception and Aging. Psychology and Aging 7 (1992): 654-660.

Kline, D. W., and Schieber, F. Vision and Aging. In Handbook of the Psychology of Aging, 2d ed. Edited by James E. Birren and K. Warner Schaie. New York: Van Nostrand Reinhold Company, 1985.

Owsley, C., and Sloane, M. E. Contrast Sensitivity, Acuity, and the Perception of Real-World Targets. British Journal of Ophthalmology 71 (1987): 791-796.

Owsley, C.; Ball, K.; McGwin, G., Jr.; Sloane, M. E.; Roenker, D. L.; White, M.; and Overley, T. Visual Processing Impairment and Risk of Motor Vehicle Crash Among Older Adults. Journal of the American Medical Association 279 (1998): 1083-1088.

Schieber, F. Aging and the Senses. In Handbook of Mental Health and Aging, 2d ed. Edited by James E. Birren, R. Bruce Sloane, Gene D. Cohen, Nancy R. Hooyman, Barry D. Lebowitz, May H. Wykle, and Donna E. Deutchman. San Diego: Academic Press, 1992.

Scialfa, C. T.; Guzy, L. T.; Leibowitz, H. W.; Garvey, P. M.; and Tyrell, R. A. Age Differences in Estimating Vehicle Velocity. Psychology and Aging 6 (1991): 60-66.

Trick, G. L., and Silverman, S. E. Visual Sensitivity to Motion: Age-Related Changes and Deficits in Senile Dementia of the Alzheimer Type. Neurology 41 (1991): 1437-1440.

vision

vision is the task of understanding the world through our eyes. It is probably the most difficult thing that we do with our brains, yet we do it every waking moment, and it is virtually effortless. Just open your eyes and the universe is there, in all the richness of its shapes and colours, its brightness, distance and movement. But the analysis that underlies seeing involves about one third of the entire human cerebral cortex — more than a billion nerve cells. That is one indication of the magnitude of the task of vision.

Using their eyes, most people can thread a needle, recognize thousands of faces, read a newspaper, drive a car, see an orange as orange whatever the colour of the illuminating light. Some people can fly a jet plane at three times the speed of sound, return a tennis ball served at 200 km an hour, distinguish a thrush from a female blackbird at 100 m, or an early Cubist still life by Picasso from one by Braque. Each of these is an accomplishment of staggering complexity. Even the most sophisticated of computer vision systems, which interpret signals from cameras mounted on robots, seem like idiots compared with the genius of normal human vision. This is another indication of the scale of the task of vision.

Vision involves the detection of light — electromagnetic, non-ionizing radiation, ranging in wavelength from about 400 to about 750 nanometres. The main natural source of light is stars, especially our own sun. Full sunlight appears white, but light consisting of a limited range of wavelengths appears coloured. Short wavelengths look blue, long wavelengths red. Most of the light that enters our eyes does not come directly from the sun but is reflected from the surfaces of objects. Most surfaces (except mirrors and pure white objects) absorb part of the spectrum of light, changing the wavelength composition of the reflected light, thus making the surfaces appear coloured.

Vision has humble origins. In its very simplest form, it probably appeared near the start of life on Earth, with single-celled organisms that produced photopigments — molecules that change shape when they absorb light, and trigger chemical reactions in the cell. The mere detection of light can be useful to organisms, enabling them to regulate their activity according to the time of day or the seasons of the year, and even allowing them to orientate themselves towards or away from the source of light. Eyes — organs for collecting light — exploit the fact that light travels in straight lines. They use a lens, a mirror, or even just a pinhole, to cast an image on to receptor cells containing photopigment (photoreceptors). The crucial feature of an image is that it contains information about individual objects in the scene and their relative positions, thus affording the animal an opportunity to recognize and respond to those objects, as long as it has the apparatus in its head to analyze the information. The other huge value of vision is that it works at a distance, and hence serves to predict the future:

For I dipt into the future, far as human eye could see,
Saw the Vision of the world, and all the wonder that would be.

Alfred, Lord Tennyson, Locksley Hall


All vertebrate eyes are built to a common plan. Rather like cameras, they have a lens system that forms an inverted image on a layer of photoreceptor cells in the back of the retina, which lines the eyeball. In front of the receptors are alternating layers of nerve fibres and cells, forming a complex network through which signals from the receptors are passed. Each photoreceptor absorbs light over a particular band of wavelengths, thus providing, between them, a pattern of activity that can be used to retrieve the brightness and colour of light. Essentially, the photoreceptors pixellate the information in the image, reducing it to a point-by-point description of intensity and wavelength, rather like that on a computer screen. The grain of photographic emulsion in camera film does much the same. But cameras do not see. Vision depends on the interpretation of the patterns of activity from the photoreceptors, across space and time.

Part of the process of interpretation occurs within the retina itself. The essential function of all vertebrate retinas is to reduce the overwhelming flood of information that pours into the eye. In the human eye there are about 120 million rod photoreceptors, which work only in dim light, and 6 million cones, which respond under brighter conditions and are of three types, sensitive to light in the blue, green or red part of the spectrum. Each photoreceptor produces a signal, dependent on the intensity and wavelength composition of the light that it catches. In computer terminology, this translates into many megabytes of information every second. Evolution, ever the master of tricks and short-cuts to efficiency, has discovered ways in which unneeded information is removed, during processing in the retina, so that only the essential skeleton of the message is transmitted to the brain.

First, the overall number of ‘pixels’ is dramatically reduced. The signals are passed from the photoreceptors through several connections, to the last retinal cells in the chain, the ganglion cells, which cover the inner surface of the retina and whose axons stream out through a hole in the eyeball to form the optic nerve. Each ganglion cell in the fovea (the central part of the retina, which we point towards objects when we look directly at them), receives its main input from just one cone photoreceptor, perfectly conserving the fine-grain detail of that part of the image. But, compared with the roughly 125 million photoreceptors, there are a mere 1.5 million or so ganglion cells. Those in the peripheral parts of the retina pool signals from very large numbers of receptors. In effect the output of the retina is like very coarse-grain film for the peripheral parts, and very fine-grain just in the middle. The constant jerky movements of the eye, which occur about 3 times every second, deliver one part of the image after another to the high-resolution fovea.

The second function of the retina is to ‘filter’ the image in space and in time, through procedures somewhat similar to those used to ‘compress’ the information of an entire movie on to a DVD. Everyone is familiar with a phenomenon called dark adaptation: if you go from a bright environment into a dark room it is initially very hard to see anything, but vision gradually improves, over the course of fully half an hour. In other words, the eye changes its sensitivity over time to suit the average brightness of the scene — rather like having camera film that can constantly change its speed to match the light conditions. On a shorter time-scale, the eye transmits signals only when the image has just changed, for example, after an eye movement. Indeed, if the image is held absolutely stationary on a person's retina (by means of optical or electronic techniques), perception fades out completely within a few seconds.
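The idea that signals are sent mainly when the image changes can be caricatured with a simple temporal-difference filter. In this toy sketch (assuming NumPy; the threshold is an arbitrary illustrative value), a perfectly stabilized image soon produces no output at all, while a shifting image keeps driving responses.

```python
import numpy as np

def transient_response(frames, threshold=0.05):
    """Toy temporal filter: a 'cell' fires only where local intensity changes
    between successive frames, so a stabilized image produces no output."""
    previous = frames[0]
    responses = []
    for frame in frames[1:]:
        change = np.abs(frame - previous)
        responses.append(round((change > threshold).mean(), 2))  # fraction of active cells
        previous = frame
    return responses

scene = np.random.rand(64, 64)
moving = [np.roll(scene, shift=i, axis=1) for i in range(5)]   # image shifts (eye movements)
frozen = [scene for _ in range(5)]                             # perfectly stabilized image

print("moving image:", transient_response(moving))   # sustained activity
print("frozen image:", transient_response(frozen))   # activity falls to zero
```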

Our detailed knowledge of the visual system has come largely from the study of animals, and especially from the use of tiny microelectrodes to record impulses from individual nerve cells or fibres. Retinal ganglion cells have been much studied in this way in totally anaesthetized animals (in which the retina, indeed much of the visual pathway, continues, surprisingly, to respond to visual stimulation). Each ganglion cell responds to changes of light intensity over a limited area of the retina, called the cell's receptive field, corresponding to the group of photoreceptors that influence the cell, via the network of connections in the retina. Roughly half the retinal ganglion cells respond with a burst of impulses when the centre of the receptive field is illuminated (ON cells). The other half respond to a decrease in illumination (OFF cells). Thus the output of the retina signals the relative brightness and darkness of each point or patch in the visual field.

Horace Barlow (working on the frog) and Stephen Kuffler (working on the cat) discovered that ganglion cells also ‘filter’ the image in space (as well as in time), to achieve further information-compression. Essentially, the signals from each group of photoreceptors that feed the central part of the receptive field are inhibited by signals from surrounding photoreceptors, a process called lateral inhibition. This means that each ganglion cell signals the difference of illumination, or contrast, between the central and the surrounding part of its receptive field. Any cell whose receptive field happens to view a part of the image with uniform brightness (e.g. the sky on a cloudless day) will be fairly inactive, while those whose receptive fields lie at the boundary of a change of intensity in the image will send strong signals to the brain.
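A centre-minus-surround computation of this kind is easy to sketch. The toy ON-centre 'ganglion cell' below (assuming NumPy; the patch sizes and weighting are arbitrary, not fitted to real receptive fields) stays quiet on a uniform patch but responds when a light/dark boundary crosses its field.

```python
import numpy as np

def centre_surround_response(patch, centre_size=3):
    """Toy ON-centre ganglion cell: mean of a small central region minus the
    mean of the surrounding ring (lateral inhibition)."""
    h, w = patch.shape
    cy, cx = h // 2, w // 2
    half = centre_size // 2
    centre = patch[cy - half:cy + half + 1, cx - half:cx + half + 1]
    surround_mean = (patch.sum() - centre.sum()) / (patch.size - centre.size)
    return centre.mean() - surround_mean     # positive for a bright centre (ON cell)

uniform = np.ones((9, 9))                    # uniform brightness (e.g. a clear sky)
edge = np.ones((9, 9)); edge[:, :4] = 0.0    # a light/dark boundary through the field

print("uniform field:", round(centre_surround_response(uniform), 3))  # ~0: cell quiet
print("edge in field:", round(centre_surround_response(edge), 3))     # non-zero: strong signal
```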

It is almost as if the retina reduces the image to a line drawing of the visual scene. Perhaps this accounts for the fact that simple outlines are so powerful in their ability to evoke rich perception: just think of how much can be seen in a line drawing or etching by Rembrandt or Matisse.

In the retina of old-world monkeys (e.g. rhesus monkeys), assumed to be very similar to the human retina, the ON and OFF classes of ganglion cell can be further sub-divided into two main groups, called P cells and M cells (read on to discover the origin of these terms). P ganglion cells receive input to the central part of their receptive fields from just one, or sometimes two (but never all three), of the colour-selective cone types, and thus are colour-selective in their responses. M cells, which generally have larger receptive fields, receive input from all cone classes: they are not colour selective but are exquisitely sensitive to contrast and hence to movement of images on the retina. To some extent, this division of function between P and M cells is maintained through the visual pathway, and into the domain of visual perception.

The real business of vision is in the brain. Each optic nerve (the second cranial nerve) passes through a hole at the back of the bony orbit (the cavity in the skull that contains the eyeball), and the two nerves meet to form a distinctive cross-shaped structure, the optic chiasma, directly underneath the hypothalamus. (Actually, a small number of fibres branch off at this point to provide information about ambient light level to nerve cells of the suprachiasmatic nucleus, the heart of the body clock mechanism in the brain.)

In the optic chiasma, roughly half the nerve fibres cross over to the opposite side, and the rest continue on to the same side. It was Isaac Newton who first described this anatomical curiosity, and recognized its functional importance:


Are not the Species of Objects seen with both Eyes united where the optick Nerves meet before they come into the Brain, the Fibres on the right side of both Nerves uniting there, and after union going thence into the Brain in the Nerve which is on the right side of the Head, and the Fibres on the left side going into the Brain in the Nerve which is on the left side of the Head. (Opticks, Book 3, Part 1, 14th edition, 1730)


Thus, the arms of the optic chiasma that point towards the brain, called the optic tracts, contain a mixture of fibres from geometrically corresponding halves of the two retinas, which, because of optical inversion of the image, view the opposite half of the visual world. Essentially this arrangement splits the representation of the visual field neatly into two. The right side of the field is viewed by the left cerebral hemisphere, the left side by the right. This fits with a general rule, that the left hemisphere is concerned with everything to the right of the body — the skin of the right side, control of the muscles of the right side, even sounds coming from the right — while the right hemisphere is devoted to the left side of the body.
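The routing rule can be written out explicitly. The sketch below uses the standard nasal/temporal names for the two halves of each retina (terminology not used in the passage above, but standard anatomy): fibres from corresponding retinal halves end up in the hemisphere opposite the half of the visual field they view.

```python
def processing_hemisphere(visual_field_side):
    """The crossing rule: each half of the visual field is analysed by the
    opposite cerebral hemisphere, whichever eye the light entered."""
    return "left hemisphere" if visual_field_side == "right" else "right hemisphere"

def retinal_half(visual_field_side, eye):
    """Because the image is optically inverted, an object in the right visual
    field falls on the nasal (nose-side) half of the right eye's retina and the
    temporal (ear-side) half of the left eye's retina, and vice versa."""
    if visual_field_side == "right":
        return "nasal" if eye == "right" else "temporal"
    return "temporal" if eye == "right" else "nasal"

for side in ("left", "right"):
    for eye in ("left", "right"):
        print(f"{side:>5} visual field, {eye:>5} eye -> {retinal_half(side, eye):8s} retina,",
              processing_hemisphere(side))
```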

This means that damage to the visual pathway on one side of the brain causes blindness or partial blindness in both eyes, on the opposite side of the visual field. Interruption of one optic tract causes total blindness in the opposite half of the visual field — hemianopia. Nothing at all is visible to one side of a precise vertical line through the middle of whatever the patient is looking at. Remarkably, patients with this condition are sometimes unaware that they are half-blind: they complain of not being able to read normally, or not being able to drive as well as they used to! This points up a sensible but surprising property of vision — that it is concerned with what we can see, and not with what we cannot see. Think of how indifferent we are to the fact that we cannot see behind our heads. Equally, we are normally unaware that most of the visual field (except that part falling on the fovea) is represented in the brain with very poor detail and colour.

In Dickens' Pickwick Papers, Sam Weller says:
Yes I have a pair of eyes … and that's just it. If they was a pair o' patent double million magnifyin' gas microscopes of hextra power, p'raps I might be able to see through a flight o' stairs and a deal door; but bein' only eyes, you see, my wision's limited.

Indeed, our ‘wision’ is limited — by the resolution of the optics of our eyes and the structure of the retina, by the range of wavelengths to which our photoreceptors are sensitive, and by the capacity of our brains to fathom, from the mere shadows that flit across the retina, what is there in the outside world. But mercifully we are normally blissfully unaware of those limitations of sight.

This leads to a more general conclusion. Visual experiences are externalized, i.e. they happen outside the body, not inside the head. The visual properties of objects appear to belong to them, not to be the products of the brain. We are hardly even aware of our eye movements, which cause the image to jerk and slew continuously across the retina. The task of vision is to inform about the outside world, not about the nature of vision.

The nerve fibres in the optic tract (the axons of retinal ganglion cells) terminate in two main areas of the brain. A minority project to a structure called the superior colliculus (the upper little hill), which can be seen as a bump, one on each side, on the roof of the midbrain, as well as to nearby tiny clusters of nerve cells (in the pretectum). This general region, the mammalian vestige of the principal visual centre in amphibia, reptiles, birds and fish, is concerned mainly with visual reflexes. It contains regions that regulate the size of the pupil of the eye in bright and dim conditions, and that make the eye involuntarily follow large moving objects. The main function of the two superior colliculi is to control the automatic tendency of the eyes, the head and the body, to turn towards objects of interest — so-called orienting responses. They are, in fact, centres for sensory integration, since they receive input from the ears and the skin as well as the eyes, all helping to guide such reactions.

The bulk of the fibres of the optic tract reach the lateral geniculate (meaning knee-shaped) nucleus (the LGN) in the thalamus (an egg-shaped mass of grey matter through which virtually all information passes on its way to the cerebral cortex). In monkeys, the LGN has six layers. The information from the two eyes remains separate, each eye sending its fibres to three of the layers. The lower two layers are called magnocellular, because the nerve cells in them are relatively large. The neurons of the magnocellular layers receive input from the fibres of the M class of ganglion cells (that is why they are called M cells), and hence they are also sensitive to contrast and motion, but not colour. The upper four, parvocellular layers (two for each eye) contain relatively small nerve cells, and receive input (one-to-one connections in some cases) from the axons of P ganglion cells. Hence the parvocellular layers transmit information about colour and fine detail.

The fibres of the roughly 1.5 million cells in the LGN fan backwards and upwards in a bundle of white matter called the optic radiation, which passes to the back of the hemisphere to reach the region of cerebral cortex called the primary visual cortex (or striate cortex, or area 17, or V1). During the First World War, the British neurologist Gordon Holmes examined the visual deficits of soldiers who had suffered shrapnel injuries to this region. If a tiny fragment had entered the back of the skull on one side, there was a corresponding blind patch, a scotoma, in the opposite side of the visual field. This implies that there is a kind of ‘map’ of the retinal image across the surface of the primary visual cortex. Indeed, individual nerve cells in the grey matter receive input, directly or via the network of connections in the cortex, from a limited group of cells in the LGN. Thus each cortical cell also has its own receptive field — a patch of retina, and hence visual field, through which it responds to appropriate visual stimuli.

Nerve cells in the middle layers of the cortex, where the incoming fibres mainly terminate, respond to brightening or darkening of a particular spot in the visual field, very much like neurons in the LGN. Indeed there are separate sub-layers receiving input from P-type and M-type cells. Input from the two eyes is still kept separate at this point, with axons from the right- and left-eye layers of the LGN terminating in a remarkable alternating pattern. Each eye's input occupies regions that form curving, branching ocular dominance stripes, each about 0.3 mm wide, running across the middle layers of the cortex. Alternate stripes are dominated by the right eye, then the left, forming a pattern similar to a fingerprint impressed on the visual cortex. Neighbouring stripes have input from roughly the same point in the visual field, seen through the two eyes.

Extraordinary things happen as the information is passed up and down within the grey matter, to the many other neurons in the cortex. David Hubel and Torsten Wiesel won the Nobel Prize in 1981 for their pioneering work on the physiology of the visual cortex. They discovered, first in cats and later in monkeys, that these neurons respond not just to light or dark spots, like the neurons that drive them, but selectively to lines or edges, falling on, or moving over, the receptive field. Each cell prefers a line stimulus, at a particular orientation, and the preferred orientation varies from cell to cell. Somehow, the property of orientation selectivity is created by the combination of all the nerve fibres that converge on each cell. These orientation-selective cells are arranged into a beautiful system of columns, presumably created by the fact that most connections within the cerebral cortex run up and down radially within the grey matter. The selective neurons within each column, perhaps 0.1 mm across, running from the surface down to the white matter, all prefer the same orientation. And the preferred orientation shifts progressively from column to column, across the cortex.
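Orientation tuning of this sort can be mimicked with simple template matching. In the toy sketch below (assuming NumPy), a 'cell' correlates the stimulus with a bar-shaped template at its preferred orientation, so a bar at 45 degrees excites the 45-degree cell far more than bars at other angles. This is only a caricature of the tuning, not Hubel and Wiesel's actual circuit.

```python
import numpy as np

def oriented_bar(orientation_deg, size=15):
    """A light bar at a given orientation on a dark background (toy stimulus)."""
    y, x = np.mgrid[-size // 2 + 1:size // 2 + 1, -size // 2 + 1:size // 2 + 1]
    theta = np.deg2rad(orientation_deg)
    dist = np.abs(-x * np.sin(theta) + y * np.cos(theta))   # distance from the bar's axis
    return (dist < 1.5).astype(float)

def simple_cell_response(stimulus, preferred_deg):
    """Toy 'simple cell': correlate the stimulus with a zero-mean bar template
    at the cell's preferred orientation."""
    bar = oriented_bar(preferred_deg)
    template = bar - bar.mean()
    return float((stimulus * template).sum())

cell_preference = 45                                  # degrees
for stim_angle in (0, 45, 90, 135):
    r = simple_cell_response(oriented_bar(stim_angle), cell_preference)
    print(f"bar at {stim_angle:3d} deg -> response {r:7.1f}")
```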

Orientation-selective neurons remain perhaps the best example of feature detection — the notion that sensory neurons are ‘programmed’ (partly through innate control of the ‘wiring’ of the pathway, partly through the effects of sensory experience early in life) to respond to particular information-rich features of the sensory world. The primary visual cortex starts the process of ‘dissecting’ the retinal image, so as to encode its essential structure. In normal conditions, these cells respond to the boundaries of objects in space, or to elements of the texture of surfaces, presumably describing these features to the rest of the brain. This is the beginning of a process that has been called inverse optics — inferring from the flat retinal image the true shapes and distribution of the objects that generated the image, ‘reversing’ the optical process that made the image.

Hubel and Wiesel also discovered that the vast majority of these orientation-selective neurons are also ‘binocularly driven’: they have receptive fields in roughly corresponding positions on both retinas, and are remarkably similar in their preferences for visual stimuli, whichever eye is open. Thus, in normal viewing conditions, these cells will be stimulated simultaneously through both eyes, by the two images of individual objects in space. This presumably accounts for the fact that we see only one, fused visual world, despite the fact that two eyes are viewing it.

Because our two eyes are horizontally separated in the head, when we view a three-dimensional scene, their retinal images are not absolutely identical. Binocular parallax, as it is called, creates tiny differences in the relative positions on the two retinas of the images of individual objects that lie at different distances from the eyes. Sir Charles Wheatstone first described, in 1838, the fact that we can interpret these minute differences between the two retinal images to perceive the solidity of objects and their relative distances in space. This skill, called stereopsis or stereoscopic vision, is a wonderful example of inverse optics. The brain has evolved mechanisms for analysing not just the individual retinal images, but also the differences between them, so as to understand the world.

Now, it turns out that, although the two receptive fields of individual visual cortical cells, on average, lie on geometrically corresponding points in the two retinas, there is a little variation in their relative positions. This, combined with the fact that the responses of neurons are often strongly enhanced when both receptive fields are stimulated simultaneously, means that individual such cells respond best to the boundaries of objects at particular distances, behind or in front of whatever the eyes are fixating. Thus, the processing that underlies stereopsis appears to start with the binocular neurons of the primary visual cortex.
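The underlying computation can be illustrated by simple patch matching between the two images: slide a patch from the left eye's image across the right eye's image and find the horizontal shift (disparity) at which they agree best. This is a minimal sketch, assuming NumPy; real disparity-selective neurons only loosely resemble it.

```python
import numpy as np

def best_disparity(left_row, right_row, patch=7, max_shift=5):
    """Toy stereo matching: compare a patch of the left image with shifted
    patches of the right image and return the horizontal shift (disparity)
    that matches best (smallest sum of squared differences)."""
    centre = len(left_row) // 2
    half = patch // 2
    template = left_row[centre - half:centre + half + 1]
    scores = []
    for d in range(-max_shift, max_shift + 1):
        window = right_row[centre - half + d:centre + half + 1 + d]
        scores.append((d, float(((template - window) ** 2).sum())))
    return min(scores, key=lambda s: s[1])[0]

scene = np.random.rand(50)                 # a random one-dimensional 'scene'
true_disparity = 3                         # a nearer object shifts between the two eyes
left_image = scene
right_image = np.roll(scene, true_disparity)
print("estimated disparity:", best_disparity(left_image, right_image))   # -> 3
```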

The existence of a visual area in the back of the cerebral hemispheres was known in the nineteenth century. But at that time, the vast continent of uncharted cortex in between the major sensory and motor regions was thought simply to combine information, in some ill-defined way. It was called association cortex. Work on monkeys, starting in the 1960s, has shown that the entire association cortex of the rear part of the hemispheres is in fact devoted exclusively to the analysis of vision. It is divided into a huge patchwork of individual areas, each containing a representation of all or part of the visual field. These are known as extrastriate visual areas, to distinguish them from the striate cortex — the primary visual cortex. Virtually all the fibres from the LGN, carrying information from the eyes, reach only the striate cortex, and these other visual areas receive their input mainly from cortico-cortical connections, forming a complex network, with fibres running back and forth linking the striate cortex to the other areas.

While damage to the primary visual cortex leads to blindness in the corresponding area of the visual field, injury in extrastriate areas generally leads to more subtle deficits in perception. It must, however, be said that people rendered clinically blind by damage to the striate cortex can nevertheless sometimes respond unconsciously to a visual stimulus, by moving their eyes towards it, particularly if it moves rapidly or is of very high contrast. Indeed, some can ‘guess’ reliably the direction of movement of the stimulus and whether a flashed line is vertical or horizontal, even though they deny actually seeing it. This curious residual visual capacity, called ‘blindsight’, may be mediated by surviving connections from the eyes to other parts of the brain, perhaps via the superior colliculus.

Broadly speaking, the extrastriate areas of the cortex form two major processing ‘streams’, both originating in the striate cortex. The ‘ventral stream’, which runs downwards into the lower parts of the temporal lobe, is dominated by the P-cell system, and thus contains information about colour and fine detail, while the ‘dorsal stream’, monopolized by M-cell input, runs up into the parietal lobe, and is concerned with the analysis of movement, and the detection of the position of objects in space. The ventral and dorsal streams have been dubbed ‘what’ and ‘where’ systems, although this is an over-simplification.

The ventral stream does seem to be mainly concerned with the recognition of objects, and it feeds signals to parts of the brain, especially the hippocampus, thought to be responsible for conscious visual memory. Neurons in some areas within the ventral stream have remarkable properties. In an area called V4, for instance, some cells respond selectively to surfaces of a particular colour, regardless of the spectral composition of the illuminating light. This correlates with the fact that we see the colours of objects as more or less constant, whatever the illumination — a phenomenon called colour constancy. To achieve this property, these neurons must somehow take account of the wavelength composition of light reflected from surrounding surfaces, a ‘computation’ that cannot be done in the primary visual cortex. Further south, in parts of the temporal lobe, are populations of nerve cells that respond selectively to the appearance of monkey or human faces, somehow detecting the combination of features that define a face. Even deeper into the ventral stream, cells can ‘learn’ to respond specifically to one stimulus out of a series of objects or abstract shapes that the animal is shown as part of a memory task. This all suggests that the ventral stream is concerned with identifying and remembering objects.
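A classical way to approximate such a computation is the grey-world trick: estimate the illuminant from the average light reflected by the whole scene and discount it channel by channel. The sketch below (assuming NumPy) is a textbook illustration of the idea, not a model of V4 itself; the reflectance and illuminant values are invented.

```python
import numpy as np

def grey_world_correct(image_rgb):
    """Estimate the illuminant as the scene-average of each colour channel,
    then rescale the channels so that the scene average comes out neutral.
    A crude stand-in for 'taking account of surrounding surfaces'."""
    illuminant = image_rgb.reshape(-1, 3).mean(axis=0)
    return image_rgb * (illuminant.mean() / illuminant)

surface = np.array([0.2, 0.6, 0.3])              # invented 'greenish' surface reflectance
scene = np.random.rand(32, 32, 3) * 0.5 + 0.25   # invented surrounding reflectances

for name, light in (("white light  ", np.array([1.0, 1.0, 1.0])),
                    ("reddish light", np.array([1.0, 0.7, 0.5]))):
    image = np.empty((33, 32, 3))
    image[:32] = scene * light                   # surroundings under this illuminant
    image[32] = surface * light                  # our surface under this illuminant
    raw = image[32, 0]
    corrected = grey_world_correct(image)[32, 0]
    print(name, "raw:", np.round(raw / raw.sum(), 2),
          "-> corrected:", np.round(corrected / corrected.sum(), 2))
```

The raw (retinal) signal from the surface changes with the illuminant, while the corrected values stay essentially the same, which is the essence of colour constancy.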

This work on monkeys has underpinned the recent study of visual areas in the human brain, making use of the new imaging techniques of Positron Emission Tomography (PET) and functional Magnetic Resonance Imaging (fMRI), which essentially detect the small local changes of blood flow associated with activity in neurons. There may be as many as 50 different extrastriate visual areas in humans, and those occupying the lower part of the occipital and temporal lobes also seem to be concerned with the analysis of colour, faces and the identification of objects. Damage in these regions, caused, for instance, by stroke, causes various selective deficits in visual understanding, such as central achromatopsia, a form of colour blindness, or prosopagnosia, the inability to recognize individual faces. In extreme cases, damage to the ventral stream leads to the frightening condition of visual agnosia, in which patients simply cannot recognize familiar objects, despite all their basic visual functions being normal.

The dorsal stream in monkeys also has areas with distinctive physiological properties. One, called the middle temporal area (MT) or V5, seems deeply involved in the analysis of motion. Neurons here almost all respond to movement in a particular direction, and they probably also play a part in stereoscopic vision. Neighbouring areas are concerned with analysing the flow of patterns across the retina produced by movements of the head or the whole body through space. Even higher up, in the parietal lobe, cells respond to the positions and movements of objects in ways that imply that they are concerned with guiding hand and eye movements. Again, similar functional areas have been found in the upper parts of the human occipital lobe and the parietal lobe. Damage in these regions can produce such conditions as akinetopsia (deficiency in the perception of motion) and visual neglect (failure to attend to objects on the opposite side of visual space).

It has been argued that the dorsal stream is more concerned with unconscious visually-guided reactions, such as manipulating objects with the hands, while the ventral stream underlies the conscious perception of objects. Evidence for this view comes from the fact that some individuals with ventral stream damage, while unaware of the differences between particular objects, can nevertheless shape their hands correctly when asked to pick them up. Equally, some patients with dorsal stream damage make clumsy hand movements when they try to pick up objects that they can recognize perfectly well.

The huge adaptive value of vision has driven its explosive evolution. Its machinery dominates our brains; its impressions dominate our subjective lives. Indeed, for the sighted, it is hard to imagine life without it. Language is full of visual metaphors that bear testimony to the fact that vision is the main route to the mind: ‘I see what you mean’; ‘A person of vision’; ‘My point of view’; ‘A picture is worth a thousand words’. Moreover, vision not only underpins our understanding of the world around us but also sets the scale of beauty and ugliness. The view from a mountaintop, the skyline of New York, sunset in the south of France, Botticelli's Birth of Venus (see Venus). It is seeing that makes those things breathtaking. Vision rules our aesthetic lives.

Vision has been a favourite topic of some of the most eminent individuals in the history of science, including such physicists as Isaac Newton, James Clerk Maxwell, Thomas Young, Hermann von Helmholtz and Ernst Mach. Arguably, we know more about vision than any other high-level function of the brain. Yet much remains mysterious. How does the brain arrive at reliable interpretations of objects? How is the identity of every object we can distinguish represented in the brain? How is the subjective experience of seeing related to, and generated from, the activity of neurons? Indeed, what, if anything, does conscious experience add to the purely computational process of vision?

Colin Blakemore

Bibliography

Gregory, R. L. (2001) Eye and brain: the psychology of seeing, 5th edition. Oxford University Press, Oxford.
Hubel, D. H. (1988) Eye, brain and vision. Scientific American Library/W.H. Freeman, San Francisco.
Zeki, S. (1999) Inner vision. Oxford University Press, Oxford.


See also blindness; blindness, recovery from; colour blindness; consciousness; eye movements; eyes; illusions; sensory receptors.


Vision

views updated Jun 11 2018

Vision

Vision is sight, the act of seeing with the eyes. In humans, sight conveys more information to the brain than either hearing, touch, taste, or smell, and contributes enormously to memory and other requirements for our normal, everyday functioning. Because we see objects with two eyes at the same time, human vision is binocular, and therefore stereoscopic. Vision begins when light enters the eye, stimulating photoreceptor cells in the retina called rods and cones. The retina forms the inner lining of each eye and functions in many ways like film in a camera. The photoreceptor cells produce electrical impulses which they transmit to adjoining nerve cells (neurons), which converge at the optic nerve at the back of the retina. The visual information coded as electrical impulses travels along nerve tracts to reach each visual cortex in the posterior of the brain's left and right hemispheres. Each eye conveys a slightly different, two-dimensional (flat) image to the brain, which has the amazing ability to decode and interpret these images into a clear, colorful, three-dimensional view of the world.


Our 3-D view of the world

Because our eyes are separated by about 2.6 in (6.5 cm), each eye has a slightly different horizontal view. This phenomenon is called "binocular displacement." The visual image reaching the retina of each eye is a two-dimensional, flat image. In normal binocular vision, the blending of these two images into one single image is called stereopsis, which produces a three-dimensional view (one with a sense of depth), and allows the brain to accurately judge an object's depth and distance in space in relation to ourselves and other objects.

Depth and distance perception is also available without binocular displacement and is called monocular stereopsis. Even with one eye closed, a car close to us will appear much larger than a same-sized car a mile down the road, and two rails of a railway line appear to draw closer together the further they run off into the distance. The ability to unconsciously and instantaneously assess depth and distance enables us to move about in space without continually bumping into objects or stumbling over steps.
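The size cue rests on simple geometry: the visual angle an object subtends at the eye falls roughly in inverse proportion to its distance. Here is a minimal worked example in Python; the car width is an assumed, illustrative figure.

```python
import math

def visual_angle_deg(object_size_m, distance_m):
    """Visual angle subtended at the eye by an object of a given size
    at a given distance (the basis of the size cue to distance)."""
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

car_width = 1.8                       # metres (illustrative value)
for distance in (10, 100, 1609):      # nearby, down the street, about a mile away
    angle = visual_angle_deg(car_width, distance)
    print(f"car at {distance:5d} m subtends {angle:7.3f} degrees")
```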

Ocular dominance

Studies strongly indicate there is a critical period during which normal development of the visual system takes place—a period when environmental information is permanently encoded within the brain. Although the exact time frame of the critical period is not clear, it is believed that by age six or seven years, visual maturation is complete. Animal studies show that if one eye is completely covered during the entire critical period, neurons in the visual pathway and brain connected to the covered eye do not develop normally. When that eye is finally uncovered, only neurons relating to the eye that was not covered function in the visual process. This is an example of "ocular dominance," when cells activated by one eye dominate over the cells of the other.


Memory

Just as vision plays an important role in memory, memory plays an important role in vision. The brain accurately stores an incredible amount of visual data which it draws upon every time the eyes look at something. For example, imagine fishing in a quiet stream and the cork on your line begins to bob up and down in the water. Although you cannot see under the water, your brain—from previous knowledge—remembers that a fish tugging at the worm on the hook will cause the floater to bob and tells you to pull the line in.


Electrochemical messengers

The entire visual pathway—from the retina to the visual cortex—is paved with millions of neurons. From the time light enters the eye until the brain forms a visual image, vision relies upon the process of electrochemical communication between neurons. Each neuron has a cell body with branching fibers called dendrites and a single long, cylindrical fiber called an axon. When a neuron is stimulated, electrical impulses travel along its axon. The point where information passes from one cell to the next is a gap called a synapse; there the neuron releases chemicals called neurotransmitters, which carry the signal on to an adjacent cell. This synaptic transmission of impulses is repeated until the message reaches the appropriate location in the brain. In the retina, approximately 125 million rods and cones transmit information to approximately one million ganglion cells. This means that many rods and cones must converge onto one single cell. At the same time, however, information from each single rod and cone "diverges" on to more than one ganglion cell. This complicated phenomenon of convergence and divergence occurs along the entire optic pathway. The brain must transform all this stimulation into useful information and respond to it by sending messages back to the eye and other parts of the brain before we can see.

Our eyes adapt to an incredible range of light intensities—from the glare of sunlight on glistening snow to the glow of moonlight on rippling water. Although the pupil regulates to some degree the amount of light entering the eye, it is the rods and cones which allow our vision to adapt to such extremes. Rod vision operates in dim light, beginning near total darkness, and covers roughly five orders of magnitude of intensity. Cones function in bright light and are responsible for color vision and visual acuity.

When light hits the surface of an object, it is either absorbed, reflected, or transmitted—as it is through clear glass. The amount of pigment in an object helps us determine its color. The amount of light absorbed by an object is determined by the amount of pigment, or color, contained in that object. The more heavily pigmented the object, the darker it appears, because it absorbs more light. A sparsely pigmented object, which absorbs very little light and reflects a lot back, appears lighter.


Color vision

Human color perception is dependent on three conditions: first, whether we have normal color vision; second, whether an object reflects or absorbs light; and third, whether the source of light transmits wavelengths within the visible spectrum. Rods contain only one pigment, which is sensitive to very dim light and facilitates night vision but not color. Cones are activated by bright light and allow us to see colors and fine detail. There are three types of cones, containing different pigments which absorb wavelengths in the short (S), middle (M), or long (L) ranges. Cones are often labeled blue, green, and red, because they detect wavelengths in those regions of the color spectrum. The peak wavelength absorption of the S (blue) cone is approximately 430 nm; the M (green) cone, 530 nm; and the L (red) cone, 560 nm.

The range of detectable wavelengths for all three types of cones overlaps, and two of them—the L and M cones—respond to light across most of the visible spectrum. Most of the light we see consists of a mixture of all visible wavelengths, which results in "white" light, like that of sunshine. However, cone overlap and the amount of stimulation the cones receive from varying wavelengths produce the fabulous range of vivid colors and gentle hues present in normal color vision. Approximately 8% of all human males experience abnormal color vision, or color blindness.
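The overlap can be illustrated by modelling each cone class as a broad sensitivity curve centred on the peak wavelengths quoted above. In the sketch below, the Gaussian shape and the 80 nm bandwidth are assumptions chosen only to show the qualitative pattern of overlapping responses, not measured values.

```python
import math

# Peak sensitivities from the text; the bandwidth is an assumed, illustrative value.
CONE_PEAKS_NM = {"S (blue)": 430, "M (green)": 530, "L (red)": 560}
BANDWIDTH_NM = 80

def cone_responses(wavelength_nm):
    """Very rough relative response of each cone class to monochromatic light,
    modelled as a Gaussian around the peak wavelength."""
    return {name: math.exp(-((wavelength_nm - peak) / BANDWIDTH_NM) ** 2)
            for name, peak in CONE_PEAKS_NM.items()}

for wl in (450, 550, 600):   # bluish, greenish, and orange-ish light
    summary = ", ".join(f"{name}: {r:.2f}" for name, r in cone_responses(wl).items())
    print(f"{wl} nm -> {summary}")
```

Each wavelength stimulates all three cone classes to some degree; it is the ratio of the three responses, not any single signal, that the brain interprets as a colour.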

Actually, we do not "see" colors at all. A leaf, for example, appears green because it absorbs long and short wavelengths but reflects those in the middle (green) range, stimulating the M cones to transmit electrochemical messages to the brain, which interprets the signals as the color green.


Optic pathway

Only about 10% of the light which enters the eye actually reaches the photoreceptors in the retina. This is because light must pass first through the cornea, pupil, lens, and the aqueous and vitreous humors (the liquid and gel-like fluids inside the eye), then through the blood vessels of the lining of the eye, and finally through two layers of nerve cells (the ganglion and bipolar cells of the retina).


Visual field

The entire scene projected onto the retinas of both eyes is called the "visual field."


Optic chiasma

Synaptic transmission of impulses from retinal cells follows the optic nerve to the optic chiasma, an x-shaped junction in the brain where half the fibers from each eye cross to the other side of the brain. This means that some visual information from the right half of each retina (from the left visual field) travels to the right visual cortex, and visual information from the left half of each retina (from the right visual field) travels to the left visual cortex. Information from the right half of our environment is processed in the left hemisphere of the brain, and vice versa. Damage to the optic pathway or visual cortex in the left brain—perhaps from a stroke—can cause complete loss of the right visual field. This means only information entering the eye from the left side of our environment is processed, even though information still enters the eye from both visual fields.


Visual cortex

Each visual cortex is about 2 in (5 cm) square and contains about 200 million nerve cells which respond to very elaborate stimuli. In primates, there are about 20 different visual areas in the visual cortex, the largest being the primary, or striate, cortex. The striate cortex sends information to an adjacent area which in turn transmits to at least three other areas about the size of postage stamps. Each of these areas then relays the information to several other remote areas called accessory optic nuclei. It is thought that the accessory optic nuclei play a role in coordinating movement between the head and eyes so that images remain focused on the retina when the head moves.


Visual acuity

Visual acuity, keenness of sight and the ability to distinguish small objects, develops rapidly in infants between the ages of three and six months and declines as people approach middle age. Good visual acuity is often called 20/20 vision. Optometrists test visual acuity when we have our eyes examined, and poor acuity is often correctable with glasses or contact lenses. As with every other aspect of vision, visual acuity is highly complex, and is influenced by many factors.


Retinal eccentricity

The area of the retina on which light is focused influences visual acuity, which is sharpest when the object is projected directly onto the central fovea—a tiny indentation at the back of the retina composed entirely of cones. Acuity decreases rapidly toward the retina's periphery. It was initially believed this was because cones decrease in number with distance from the fovea, becoming sparse in the retina's far periphery, where rods predominate. However, recent studies indicate it may result from the decreasing density of ganglion cells toward the retina's periphery.


Luminance

Luminance is the intensity of light reflecting off an object, and it influences visual acuity. Dim light activates only rods, and visual acuity is poor. As luminance increases, more cones become active and acuity rises sharply. Pupil size also affects acuity. When the pupil expands, it allows more light into the eye; however, because light is then projected onto a wider area of the retina, optical aberrations can blur the image. A very narrow pupil can also reduce acuity, because it greatly reduces retinal luminance. Optimal acuity seems to occur with an intermediate pupil size, but the optimum size varies depending on the degree of external luminance. The difference in luminance reflected by each object in an image produces varying degrees of light, dark, or color. Contrast between a white page and black letters enables us to read. The greater the contrast, the more acute the visual image.
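Contrast of this kind is usually quantified relative to the background. Below is a minimal example using the Weber definition of contrast; the luminance figures are assumed, illustrative values, not measurements.

```python
def weber_contrast(target_luminance, background_luminance):
    """Weber contrast: (L_target - L_background) / L_background.
    Commonly used for a small target (letters) on a large background (page)."""
    return (target_luminance - background_luminance) / background_luminance

# Illustrative luminance values in cd/m^2 (assumed, not measured):
white_page, black_ink, grey_ink = 100.0, 5.0, 60.0
print("black ink on white page:", weber_contrast(black_ink, white_page))   # -0.95
print("grey ink on white page :", weber_contrast(grey_ink, white_page))    # -0.40
```

The black letters give a contrast close to the maximum possible magnitude, which is why ordinary print is so easy to resolve compared with pale grey text.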


Accommodation

Accommodation is the eye's ability to adjust its focus to bring about clear, sharp images of both far and near objects. Accommodation begins to decline around age 20 and is so diminished by the mid-fifties that sharp close-up vision is seldom possible without corrective lenses. This condition, called presbyopia, is the most common vision problem in the world.
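Accommodation is conveniently measured in diopters: focusing an object at distance d requires roughly 1/d diopters of extra lens power compared with viewing something far away. The small worked example below uses that relation; the age-related figures in the comment are rough textbook approximations, not values taken from this article.

```python
def accommodation_demand_diopters(distance_m):
    """Extra optical power (in diopters) the lens must add to refocus from a
    very distant object to one at the given distance: 1 / distance."""
    return 1.0 / distance_m

# A young adult eye can typically add about 10 D of power; by the mid-fifties
# this often falls to 1-2 D (rough textbook figures), which is why near
# objects can no longer be brought into focus without corrective lenses.
for distance in (0.10, 0.25, 0.50, 1.0):
    print(f"object at {distance:4.2f} m needs "
          f"{accommodation_demand_diopters(distance):4.1f} D of accommodation")
```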

Common visual problems

Strabismus

Strabismus is a misalignment of the eyes in which the visual axes are not parallel, so that the two eyes point at slightly different places and can produce two images of a single object. In one form (cross-eyes) one or both eyes turn inward toward the nose. In another form, wall-eyes, one or both eyes turn outward. A person with strabismus does not usually see a double image—particularly if onset was at a young age and remained untreated. This is because the brain suppresses the image from the weaker eye, and neurons associated with the dominant eye (ocular dominance) take over.

While the causes of strabismus are not fully understood, it appears to be hereditary, often obvious soon after birth. In many cases, strabismus is correctable. However, the critical period (probably to age six or seven years) involved in normal neuronal development of vision makes it necessary that the problem be detected and treated as early as possible.


Amblyopia

Amblyopia, or lazy eye, is the most common visual problem associated with strabismus. Amblyopia involves severely impaired visual acuity and is the result of suppression and ocular dominance; it affects an estimated four million people in the United States alone. One study suggests it causes blindness in more people under 45 years of age than all other ocular diseases and injuries combined.


Other common visual problems

Slight irregularities in the shape or structure of the eyeball, lens, or cornea cause imperfectly focused images on the retina. Resulting visual distortions include hyperopia (far-sightedness, the inability to focus on close objects), myopia (near-sightedness, in which distant objects appear out of focus), and astigmatism (which causes distorted visual images). All of these distortions can usually be rectified with corrective lenses.


Valuable vision

Our memory and mental processes rely heavily on sight. There are more neurons in the nervous system dedicated to vision than to any other of the five senses, indicating vision's importance in our lives. The almost immediate interaction between the eye and the brain in producing vision makes even the most intricate computer program pale in comparison. Although we seldom pause to imagine life without sight, vision is the most precious of all our senses. Without it, our relationship to the world about us, and our ability to interact with our environment, would diminish immeasurably.

See also Blindness and visual impairments; Brain; Color; Depth perception; Vision disorders.


Resources

books

Hart, William M., Jr., ed. Adler's Physiology of the Eye. St. Louis: Mosby Year Book, 1992.

Hubel, David H. Eye, Brain, and Vision. New York: Scientific American Library, 1988.

Leibovic, K. N., ed. Science of Vision. New York: Springer-Verlag, 1990.

Lent, Roberto, ed. The Visual System: From Genesis to Maturity. Boston: Birkhauser, 1992.

Moller, Aage R. Sensory Systems: Anatomy and Physiology. New York: Academic Press, 2002.

von Noorden, Gunter K. Binocular Vision and Ocular Motility: Theory and Management of Strabismus. St. Louis: The C.V. Mosby Company, 1990.

Marie L. Thompson

KEY TERMS


Accommodation: Changes in the curvature of the eye lens to form sharp retinal images of near and far objects.

Cones: Photoreceptors for daylight and color vision; they come in three types, each detecting visible wavelengths in the short, medium, or long (blue, green, or red) range.

Ganglion cells: Neurons in the retina whose axons form the optic nerves.

Ocular dominance: Cells in the striate cortex which respond more to input from one eye than from the other.

Optic pathway: The neuronal pathway leading from the eye to the visual cortex. It includes the eye, optic nerve, optic chiasm, optic tract, geniculate nucleus, optic radiations, and striate cortex.

Rods: Photoreceptors which allow vision in dim light but do not facilitate color.

Stereopsis: The blending of two different images into one single image, resulting in a three-dimensional image.

Suppression: A "blocking out" by the brain of unwanted images from one or both eyes. Prolonged, abnormal suppression will result in underdevelopment of neurons in the visual pathway.

Synapse: Junction between cells where the exchange of electrical or chemical information takes place.

Visual acuity: Keenness of sight and the ability to focus sharply on small objects.

Visual field: The entire image seen with both eyes, divided into the left and right visual fields.

Vision

views updated May 29 2018

Vision

Definition

Vision is sight, the act of seeing with the eyes. Sight conveys more information to the brain than hearing, touch, taste, or smell, and it contributes enormously to memory and other requirements of normal human functioning.

Description

Because humans see objects with two eyes simultaneously, vision is binocular, and therefore stereoscopic. Vision begins when light enters the eye, stimulating photoreceptor cells in the retina called rods and cones. The retina forms the inner lining of each eye and functions in many ways like film in a camera. The photoreceptor cells produce electrical impulses which they transmit to adjoining nerve cells (neurons), which converge on the optic nerve at the back of the retina. The visual information, coded as electrical impulses, travels along nerve tracts to reach the visual cortex in the posterior of each of the brain's left and right hemispheres. Each eye conveys a slightly different, two-dimensional image to the brain, which decodes and interprets these images into a colorful, three-dimensional view of the world. The process is so fast that its timing can be registered only with scientific instruments, not by ordinary human observation.

Function

Because human eyes are separated by about 2.6 in (6.5 cm), each eye has a slightly different horizontal view. This phenomenon is called binocular displacement. The visual images reaching each eye's retina are two-dimensional. In normal binocular vision, the blending of these images into one single image is called stereopsis.
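
As a rough, illustrative calculation of how this separation produces two different views, the sketch below computes the angle between the two eyes' lines of sight to an object straight ahead; the 6.5 cm (2.6 in) separation is taken from the text, while the viewing distances chosen are hypothetical examples.

import math

EYE_SEPARATION_CM = 6.5  # interpupillary distance quoted in the text

def convergence_angle_deg(distance_cm):
    """Angle (degrees) between the two eyes' lines of sight to a point straight ahead."""
    half_angle = math.atan((EYE_SEPARATION_CM / 2) / distance_cm)
    return math.degrees(2 * half_angle)

for distance in (25, 100, 1000):  # hypothetical viewing distances in centimeters
    print(f"object at {distance} cm: about {convergence_angle_deg(distance):.2f} degrees")

The angular difference between the two views shrinks rapidly with distance, which is one reason binocular depth cues are most useful for nearby objects.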

Depth perception is also available with one eye alone, through monocular cues. For example, even with one eye closed, a nearby car appears much larger than a car of the same size a mile away. The ability to assess depth and distance unconsciously and instantaneously allows humans to move without continually bumping into objects and supports eye-hand coordination.

Ocular dominance

Studies strongly indicate that there is a critical period during which normal development of the visual system takes place and environmental information is permanently encoded within the brain. Although the exact time frame is not clear, it is believed that visual maturation is complete by age six or seven years. Animal studies show that if one eye is covered during the critical period, neurons in the visual pathway and brain connected to the covered eye do not develop to optimal performance. When that eye is uncovered, only neurons relating to the unrestricted eye function in the visual process. This is an example of "ocular dominance," in which cells activated by one eye dominate the cells of the other. Ocular dominance itself is not an abnormal development.

Memory

Just as vision plays an important role in memory, memory plays an important role in vision. The brain accurately stores visual data, which it draws upon every time the eyes look at something.

Electrochemical messengers

The entire visual pathway, from the retina to the visual cortex, is paved with millions of neurons. From the time light enters the eye until the brain forms a visual image, vision relies upon electrochemical communication between neurons. Each neuron has a cell body with branching fibers called dendrites and a single long, cylindrical fiber called an axon. When a neuron is stimulated, it sends an electrical impulse along its axon; at the end of the axon, the impulse triggers the release of chemicals called neurotransmitters. The point where information passes from one cell to the next is a gap called a synapse, and the neurotransmitters carry the signal across it to the adjacent cell. This synaptic transmission of impulses is repeated until the message reaches the appropriate location in the brain. In the retina, approximately 125 million rods and cones transmit information to approximately 1 million ganglion cells, so on average the signals from well over a hundred rods and cones must converge onto a single ganglion cell. At the same time, information from each single rod and cone "diverges" onto more than one ganglion cell. This complicated phenomenon of convergence and divergence occurs along the entire optic pathway. The brain must transform all this stimulation into useful information and respond to it by sending messages back to the eye and other parts of the brain before we are able to see.
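
A back-of-the-envelope calculation, using only the counts quoted above, shows the average degree of convergence; this is an illustrative sketch, not a model of the retina's actual wiring, which varies from region to region.

photoreceptors = 125_000_000  # rods and cones, from the text
ganglion_cells = 1_000_000    # from the text

average_convergence = photoreceptors / ganglion_cells
print(f"on average, about {average_convergence:.0f} photoreceptors feed each ganglion cell")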

Although the pupil regulates to some degree the amount of light entering the eye, it is the rods and cones that enable vision to adapt to extremes. Vision enabled by rods begins in dim light. Cones function in bright light and are responsible for color vision and visual acuity.

When light hits the surface of an object, it is absorbed, reflected, or transmitted through it. The amount of light absorbed by an object is determined by the amount of pigment the object contains. The more heavily pigmented the object, the darker it appears, because it absorbs more light. A sparsely pigmented object, which absorbs little light and reflects much of it back, appears lighter.

Color vision

Humans have three types of cone pigments, commonly labeled blue, green, and red. The combined responses of these three types compose every color impression humans experience. Human color vision extends only about 30 degrees from the macula; beyond that, red and green become indistinguishable. This is because cones are sparse in the periphery of the retina, where the predominant rods detect light and motion but not color. For example, a red object entering the field of view from the periphery at first appears colorless; only as it moves toward the center of the field do the eyes eventually pick up its red color.

Perception of color is dependent on three conditions. First, whether people have normal color vision; second, whether an object reflects or absorbs light; and third, whether the source of light transmits wavelengths within the visible spectrum. Rods contain only one pigment which is sensitive to very dim light, and which facilitates night vision but not color. Cones are activated by bright light and let us see colors and fine detail. There are three types of cones containing different pigments that absorb wavelengths in the short (S), middle (M), or long (L) ranges. The peak wavelength absorption of the S (blue) cone is approximately 430 nm; the M (green) cone 530 nm; and the L (red) cone 560 nm.

The ranges of detectable wavelengths for the three types of cones overlap, and two of them, the L and M cones, respond to wavelengths throughout the visible spectrum. Most of the light we see consists of a mixture of all visible wavelengths, which results in "white" light, like that of sunshine. The overlap among the cones and the amounts of stimulation they receive from varying wavelengths produce the vivid colors and gentle hues present in normal color vision.
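
One way to picture these overlapping sensitivities is to model each cone type as a bell-shaped curve centered on the peak wavelengths quoted above; the curve shape and the 50 nm width used here are illustrative assumptions, not measured absorption spectra.

import math

CONE_PEAKS_NM = {"S (blue)": 430, "M (green)": 530, "L (red)": 560}  # peaks from the text
WIDTH_NM = 50  # assumed width of each toy sensitivity curve

def relative_sensitivity(peak_nm, wavelength_nm, width_nm=WIDTH_NM):
    """Toy bell-shaped sensitivity curve centered on a cone's peak wavelength."""
    return math.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * width_nm ** 2))

test_wavelength = 550  # a yellowish-green light, in nanometers
for cone, peak in CONE_PEAKS_NM.items():
    print(f"{cone} cone: {relative_sensitivity(peak, test_wavelength):.2f} of maximum response")

Because all three cone types respond to some degree, the brain can compare their relative outputs to arrive at a single color sensation.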

Optic pathway

Only about 10% of the light which enters the eye reaches the photoreceptors in the retina. This is because light must first pass through the cornea, the aqueous humor, the pupil, the lens, and the vitreous humor (the liquid and gel-like fluids inside the eye), then through the blood vessels of the lining of the eye, and finally through two layers of nerve cells (ganglion and bipolar cells) in the retina.

Visual discrimination

The retina has the ability to distinguish between visual stimuli; the greater this ability, the greater the sensitivity of vision. The retina distinguishes visual stimuli in three ways: light discrimination (brightness sensitivity), spatial discrimination (the ability to recognize shapes and patterns), and temporal discrimination (sensitivity to changes over time). Human temporal discrimination is limited; this is why, for example, people can watch television without noticing the flicker and scan lines that would otherwise distort the picture.

Optic chiasma

Vision functions in the brain are divided into two areas: the afferent (sensory) system and the efferent (motor) system. Synaptic transmission of impulses from retinal cells follows the optic nerve (an extension of the brain) to the optic chiasma, also referred to as the optic chiasm, an x-shaped junction in the brain where half the fibers from each eye cross to the other side of the brain. Consequently, visual information from the right half of each retina travels to the right visual cortex, and visual information from the left half of each retina travels to the left visual cortex. Information from the right half of our environment is processed in the left hemisphere of the brain, and vice versa. Damage to the optic pathway or visual cortex in the left brain—perhaps from a stroke—can cause loss of the right visual field. As a result, only information entering the eye from the left side of our environment is processed, even though information still enters the eye from both visual fields.
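
The routing rule described above can be summarized as a simple lookup; this is only a schematic of the crossing described in the text, not an anatomical model.

def processing_hemisphere(visual_field_side):
    """Return which hemisphere processes a stimulus from the given side of the visual field."""
    # Light from the right half of the environment falls on the left half of each
    # retina and is routed, via the optic chiasma, to the left visual cortex (and vice versa).
    return "left hemisphere" if visual_field_side == "right" else "right hemisphere"

for side in ("left", "right"):
    print(f"stimulus in the {side} visual field -> processed by the {processing_hemisphere(side)}")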

Visual cortex

Each visual cortex is about 2 in (5 cm) square and contains about 200 million nerve cells which respond to elaborate stimuli. In primates, there are about 20 different visual areas in the visual cortex, the largest being the primary, or striate, cortex. The striate cortex sends information to an adjacent area which in turn transmits to at least three other areas about the size of postage stamps. Each of these areas then relays the information to several other remote areas called accessory optic nuclei.

Visual acuity

Visual acuity, the keenness of sight and the ability to distinguish small objects, develops rapidly in infants between the ages of three and six months and declines as people approach middle age. Optometrists and ophthalmologists test visual acuity during a routine examination, and poor acuity is often correctable with glasses, contact lenses, or refractive laser surgery. Visual acuity is highly complex and is influenced by many factors.

Retinal eccentricity

The area of the retina on which light is focused influences visual acuity, which is sharpest when the object is projected directly onto the central fovea, a tiny indentation at the back of the retina composed entirely of cones. Acuity, like the density of cones, decreases rapidly toward the retina's periphery. Recent studies indicate that this may result from the decreasing density of ganglion cells toward the retina's periphery.

Luminance

Luminance, the intensity of light reflecting off an object, also influences visual acuity. Dim light activates only rods, and visual acuity is poor. As luminance increases, more cones become active and acuity rises. Pupil size also affects acuity. When the pupil expands, it allows more light into the eye; however, because the light is then projected onto a wider area of the retina, optical irregularities can occur. Two factors are key regarding pupil size: how much light reaches the retina (more is better, up to a point) and whether that light falls mainly on rods or on cones. In bright illumination, for example, the pupil naturally constricts, and the cones that are stimulated under those conditions provide high visual acuity. A very narrow pupil can reduce acuity because it greatly reduces retinal luminance, but a small pupil (for example, a "pinhole") will increase acuity in people with refractive errors. Optimal acuity seems to occur with an intermediate pupil size, but the optimum size varies depending on the degree of external luminance.
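
A small illustration of why pupil size has such a large effect on retinal luminance: the light admitted scales with the pupil's area, which grows with the square of its diameter. The diameters used below are hypothetical values chosen for the example, not clinical measurements.

import math

def pupil_area_mm2(diameter_mm):
    """Area of a circular pupil, in square millimeters."""
    return math.pi * (diameter_mm / 2) ** 2

small_mm, large_mm = 2.0, 8.0  # hypothetical constricted and dilated pupil diameters
ratio = pupil_area_mm2(large_mm) / pupil_area_mm2(small_mm)
print(f"an {large_mm:.0f} mm pupil admits about {ratio:.0f} times as much light as a {small_mm:.0f} mm pupil")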

Accommodation

Accommodation is the eye's ability to adjust its focus in order to bring about sharp images of both far and near objects. Accommodation begins to decline around age 20 and is so diminished by the mid-50s that sharp close-up vision is seldom possible without corrective lenses. This condition, called presbyopia, is the most common vision problem in the world.

Role in human health

Human memory and mental processes rely heavily on sight. More neurons in the nervous system are dedicated to vision than to any of the other senses, an indication of vision's importance. The almost immediate interplay between the eye and the brain in producing vision makes even the most intricate computer program pale in comparison. Although sighted individuals might seldom pause to imagine life without sight, vision is widely considered the most valued of the human senses. Without it, a person's relationship to the surrounding world and ability to interact with the environment are seriously diminished.

Common diseases and disorders

Color-blindness

Approximately 8% of all human males experience abnormal color vision, also called color "blindness" or color deficiency. A color-deficient woman carries the X-linked recessive gene on both of her X chromosomes, so she passes it to every son, and each son will be color-blind. Color-blindness is caused when one of the pigments in a person's photoreceptors is abnormal. Red-deficient individuals are easier to categorize because that wavelength range has minimal overlap with the other primary colors.

Various diseases and conditions can also cause color-blindness. These defects usually occur in one eye and can be intermittent, while congenital defects are present in both eyes and remain constant.

Strabismus

Strabismus is a misalignment of the eyes in which a single object is imaged along two different lines of sight. It results from a lack of parallelism of the visual axes of the eyes. In one form (known colloquially as cross-eyes), one or both eyes turn inward toward the nose. In another form (known colloquially as walleyes), one or both eyes turn outward. A person with strabismus does not usually see a double image, particularly if the condition began at a young age and went untreated. This is because the brain suppresses the image from the weaker eye, allowing neurons associated with the dominant eye (ocular dominance) to take over.

Amblyopia (known colloquially as lazy eye) is the most common visual problem associated with strabismus. Amblyopia involves severely impaired visual acuity, and is the result of suppression and ocular dominance; it affects an estimated 4 million people in the United States and is a common cause of blindness in younger people.

Strabismus appears to be hereditary and is often obvious soon after birth. In many cases, strabismus is correctable. Because the critical period for normal neuronal development of vision extends only until a child reaches the age of six or seven years, it is crucial that the problem be detected and treated as early as possible.

Other common visual problems

Slight irregularities in the shape or structure of the eyeball, lens, or cornea cause imperfectly focused images on the retina. Resulting visual distortions include hyperopia (far-sightedness, or the inability to focus on close objects), myopia (near-sightedness, in which distant objects appear out of focus), astigmatism (which causes distorted visual images), and presbyopia (the age-related loss of near focus). These distortions can usually be rectified with corrective lenses or refractive surgery.

KEY TERMS

Accommodation— The eye's ability to focus clearly on both near and far objects.

Cones— Photoreceptors for daylight and color vision; they occur in three types, each detecting visible wavelengths in the short, medium, or long (blue, green, or red) range of the spectrum.

Ganglion cells— Neurons in the retina whose axons form the optic nerves.

Ocular dominance— The tendency of cells in the striate cortex to respond more to input from one eye than from the other.

Optic pathway— The neuronal pathway leading from the eye to the visual cortex. It includes the eye, optic nerve, optic chiasm, optic tract, geniculate nucleus, optic radiations, and striate cortex.

Rods— Photoreceptors which allow vision in dim light but do not facilitate color.

Stereopsis— The blending of the two eyes' slightly different images into a single, three-dimensional image.

Suppression— A "blocking out" by the brain of unwanted images from one or both eyes. Prolonged, abnormal suppression will result in underdevelopment of neurons in the visual pathway.

Synapse— Junction between cells where the exchange of electrical or chemical information takes place.

Visual acuity— Keenness of sight and the ability to focus sharply on small objects.

Visual field— The entire image seen with both eyes, divided into the left and right visual fields.

Resources

BOOKS

Billig, Michael D., Gary H. Cassel, and Harry G. Randall. The Eye Book: A Complete Guide to Eye Disorders and Health. Baltimore: The Johns Hopkins University Press, 1998.

Hart, William M., Jr., ed. Adler's Physiology of the Eye. St. Louis: Mosby Year Book, 1992.

Lent, Roberto, ed. The Visual System: From Genesis to Maturity. Boston: Birkhauser, 1992.

Zinn, Walter J., and Herbert Solomon. Complete Guide to Eyecare, Eyeglasses & Contact Lenses, 4th ed. Hollywood, FL: Lifetime Books, 1996.

ORGANIZATIONS

National Eye Institute of the National Institutes of Health. 9000 Rockville Pike, Bethesda, MD 20892. (301) 496-5248. 〈http://www.nei.nih.gov〉.

The Lighthouse National Center for Education. 111 E. 59th Street. New York, NY 10022. (800) 334-5497. 〈http://www.lighthouse.org〉.

Vision

views updated May 23 2018

Vision

Different levels of vision correlate with the different types of eyes found in various species. The simplest eye receptors are those of planarians, flatworms that abound in ponds and streams. Planarians are moderately cephalized and have eye cups (or eyespots) located near the ganglia, dense clusters of nerve cells from which ventral nerve cords run along the length of the body.

The walls of the cup are formed by layers of darkly pigmented cells that block light, with the receptor cells inside. When light shines on the cup, the photoreceptors can be stimulated only through an opening on one side of the cup where there are no pigmented cells. Because the mouth of one eye cup faces left and slightly forward while the other faces right and forward, light coming from one side can enter only the eye cup on that side.

This allows the ganglia to compare rates of nerve impulses from the two cups. The planarian will turn until the sensations from the two cups have reached an equilibrium and decreased. The observable behavior of the planarian is to turn away from the light source and seek a dark place under an object, an adaptation that protects it from predators.

As evolution progressed and cephalization increased, vision became more complex as well. Invertebrates have two kinds of true image-forming eyes: compound eyes and single-lens eyes. Compound eyes are found in insects and crustaceans (phylum Arthropoda) and in some polychaete worms (phylum Annelida). The compound eye has up to several thousand light detectors called ommatidia, each with its own cornea and lens. This allows each ommatidium to register light from a tiny part of the field of view. Differences in light intensity across the many ommatidia result in a mosaic image.

Although the image is not as sharp as that of a human eye, the compound eye has greater acuity at detecting movement, an important adaptation for catching flying insects or avoiding predators. This ability to detect movement is partly due to the rapid recovery of the photoreceptors in the compound eye. Whereas the human eye can distinguish up to 50 flashes per second, the compound eye recovers from excitation rapidly enough to distinguish flashes at a rate of 330 per second. Compound eyes also allow for excellent color vision, and some insects can even see into the ultraviolet range of the spectrum.

The second type of invertebrate eye, the single-lens eye, is found in jellyfish, polychaetes, spiders, and many mollusks. Its workings are similar to those of a camera: the single lens focuses light onto the retina, a bilayer of photosensitive cells, allowing an image to be formed.

Vertebrate vision also uses a single-lens eye, but it evolved independently and differs from the single-lens eyes of invertebrates. Vertebrate eyes can detect an almost countless variety of colors, can form images of objects that are miles away, and can respond to as little as one photon of light. But because it is actually the brain that "sees," one must also understand how the eye generates sensations in the form of action potentials and how the signals travel to the visual centers of the brain.

Structure of the Vertebrate Eye

The globe of the eyeball is composed of the sclera, a tough, white outer layer of connective tissue, and the choroid, a thin, pigmented inner layer. The sclera becomes transparent at the cornea, at the front of the eye, where light can enter. The anterior choroid forms the iris, the colored part of the eye. By changing size, the iris regulates the amount of light entering the pupil, the hole in the center of the iris. Just inside the choroid is the retina, which forms the innermost layer of the eye and contains the photoreceptor cells.

Information from the photoreceptor cells of the retina passes through the optic disc, where the optic nerve attaches to the eye. The optic disc can be thought of as a blind spot in a vertebrate's field of vision because no photoreceptors are present in the disc. Any light that is focused on the lower outside part of the retina, the area of the optic disc, cannot be detected.

The eye is actually composed of two cavities, divided by the lens and the ciliary body. The anterior, smaller cavity is between the lens and the cornea; the posterior, larger one is behind the lens, within the eyeball itself.

The ciliary body is involved in constant production of the clear, watery aqueous humor that fills the anterior cavity of the eye. (Blockage of the ducts from which the aqueous humor drains can lead to glaucoma and eventually blindness, as the increased pressure compresses the retina.) The posterior cavity is lubricated by the vitreous humor, a jellylike substance that occupies most of the volume of the eye. Both humors function as liquid lenses that help focus light on the retina.

The lens itself is a transparent protein disc that focuses images on the retina. Many fish focus by moving the lens forward and backward, camera-style. In humans and other mammals, focusing is achieved through accommodation, the changing of the shape of the lens. For viewing objects at a distance, the lens is flattened; for viewing objects up close, the lens becomes almost spherical.

Accommodation is controlled by the ciliary muscle, which contracts to pull the border of the choroid layer of the eye toward the lens, causing the suspensory ligaments to slacken. With less tension on it, the naturally elastic lens becomes rounder. For viewing at a distance, the ciliary muscle relaxes, allowing the choroid to expand, which places more tension on the suspensory ligaments and pulls the lens flatter.
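
A rough numerical sketch of why the lens must become rounder (optically stronger) for near objects, using the standard thin-lens relation 1/f = 1/d_object + 1/d_image. The fixed lens-to-retina distance and the object distances below are illustrative assumptions, and the real eye's optics (cornea plus lens) are considerably more complicated.

LENS_TO_RETINA_M = 0.017  # assumed image distance, roughly the depth of a human eyeball

def required_focal_length_m(object_distance_m, image_distance_m=LENS_TO_RETINA_M):
    """Focal length needed to focus an object at the given distance onto the retina."""
    return 1.0 / (1.0 / object_distance_m + 1.0 / image_distance_m)

for label, distance_m in (("distant object, 10 m", 10.0), ("near object, 0.25 m", 0.25)):
    print(f"{label}: requires a focal length of about {required_focal_length_m(distance_m) * 1000:.1f} mm")

The near object demands a shorter focal length, that is, a more strongly curved (rounder) lens, which is exactly the change accommodation produces.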

Signal Transduction

The photoreceptor cells in the retina are of two types: rods and cones. The rods are more sensitive to light but are not involved in distinguishing color; they function in night vision, and then only in black and white. Cones require greater amounts of light to be stimulated and therefore do not contribute to night vision. However, they are involved in distinguishing colors.

The human retina has approximately 125 million rod cells and approximately 6 million cone cells. The rods and cones account for nearly 70 percent of all sensory receptors in the body, emphasizing the importance of vision in a human's perception of the environment. The numbers of photoreceptors are partly correlated with the nocturnal or diurnal habits of a species, with nocturnal mammals having the greatest numbers of rods.

In humans, the highest density of rods is in the lateral regions of the retina. Rods are completely absent from the fovea, the center of the visual field. This is why it is harder to see a dim star at night if you look at it directly than if you look at it at an angle, which allows the starlight to fall on the rod-populated regions of the retina. The sharpest day vision, however, is achieved by looking directly at an object, because cones are most dense in the fovea, at approximately 150,000 cones per square millimeter. Some birds actually have more than one million cones per square millimeter, enabling species such as hawks to spot mice from very high altitudes.

Photoreceptor cells have an outer segment made of folded membrane stacks in which visual pigments are embedded. Retinal is the light-absorbing molecule, synthesized from vitamin A and bonded to opsin, a membrane protein in the photoreceptor. The opsins vary in structure from one type of photoreceptor to another, and the light-absorbing ability of retinal is affected by the specific identity of its opsin partner.

The chemical response of retinal to light triggers a chain of metabolic events that changes the membrane voltage of the photoreceptor cells. Light hyperpolarizes the membrane by decreasing its permeability to sodium ions, so less neurotransmitter is released by the cells in light than in darkness. This decrease in chemical signaling to the cells with which the photoreceptors synapse serves as the message that the photoreceptors have been stimulated by light.

The axons of rods and cones synapse with neurons called bipolar cells, which in turn synapse with ganglion cells. Horizontal cells and amacrine cells help integrate the information before it is transmitted to the brain.

The axons of the ganglion cells form optic nerves that meet at the optic chiasma near the center of the base of the cerebral cortex. The nerve tracts are arranged so that what is in the left field of view of both eyes is transmitted to the right side of the brain (and vice versa).

The signals from the rods and cones follow two pathways: the vertical pathway and the lateral pathway. In the vertical pathway, information goes directly from the receptor cells to the bipolar cells and then to the ganglion cells. In the lateral pathway, horizontal cells carry signals from one photoreceptor to other receptor cells and to several bipolar cells. When rods or cones stimulate horizontal cells, these in turn stimulate nearby receptors but inhibit more distant receptor and bipolar cells that are not illuminated. This process, termed lateral inhibition, sharpens the edges of objects in our field of vision and enhances contrast in images.
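
A minimal numerical sketch of how lateral inhibition can exaggerate an edge, using a toy one-dimensional "retina" in which each cell's output is its own input minus a fraction of its neighbors' inputs; the 0.4 weighting and the step-edge stimulus are illustrative assumptions, not physiological values.

# A dim region (1.0) meeting a bright region (5.0).
stimulus = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]

def lateral_inhibition(levels, weight=0.4):
    """Each cell's response is its own input minus a fraction of its neighbors' inputs (rounded for readability)."""
    responses = []
    for i, center in enumerate(levels):
        left = levels[i - 1] if i > 0 else center
        right = levels[i + 1] if i < len(levels) - 1 else center
        responses.append(round(center - weight * (left + right), 2))
    return responses

print(lateral_inhibition(stimulus))  # [0.2, 0.2, -1.4, 2.6, 1.0, 1.0]

The cell just on the bright side of the boundary responds more strongly than its bright neighbors, while the cell just on the dim side responds more weakly than its dim neighbors, so the contrast at the edge is enhanced.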

The information received by the brain is highly distorted. Although the anatomy and physiology of vision have been extensively studied, there is still much to learn about how the brain converts a coded set of spots, lines, and movements into the perception and recognition of objects.

see also Nervous System; Growth and Differentiation of the Nervous System.

Danielle Schnur

Bibliography

Bradbury, Jack W., and Sandra L. Vehrencamp. Principles of Animal Communication. Sunderland, MA: Sinauer, 1998.

Campbell, Neil A., Lawrence G. Mitchell, and Jane B. Reece. Biology: Concepts and Connections, 3rd ed. Menlo Park, California: Benjamin/Cummings Publishing Company, 1993.

Vision

views updated May 29 2018

Vision

The eyes are the windows on the world. Vision is found widely in many different classes of animals and may have evolved independently at different times. Vision, which involves perception of light and dark, is distinct from simple light sensitivity, such as that displayed by germinating plant sprouts that respond to the sun's direction.

Eyecups

The complexity of eyes varies markedly in different groups of animals. Nonfocusing eyecups are found in the planarians, the medusas (jellyfish) of cnidarians, some snails, and some other invertebrates. Light enters a depression lined with pigment-containing, light-sensitive cells. Neurons connected to these cells carry messages to the rest of the nervous system. Because there is no focusing system, the general direction and intensity of light can be detected, but there can be no perception of form or image.

Compound Eyes

Most adult insects and crustaceans, as well as the horseshoe crab and the extinct trilobite, have compound eyes, constructed of as few as one (in some ants) to as many as thirty thousand (in some dragonflies) individual units called ommatidia. Each ommatidium is covered with a cornea, formed from the insect exoskeleton, and has its own crystalline cone within. Both structures focus light on the retinula (light-sensitive) cells at the base. The amount of light entering the ommatidium may be controlled by increasing or decreasing the amount of screening pigments within. The individual ommatidia do not usually cast clear images on the retinula cells, but rather just a spot of color. The individual retinula cells then send this information to the brain, which puts all of the spots together to form a mental image.

Although the details of insect visual processing are unknown, there appear to be multiple levels of processing, as there are in vertebrate visual systems. Finally, insects usually have three ocelli, non-image-forming simple eyes, on the tops of their heads. These seem to awaken insects for their daily activities.

Camera Eyes

Vertebrates (including humans) and cephalopods (such as the octopus) have so-called camera eyes. Camera eyes have muscular rings called irises to control the amount of light that can reach the light-sensitive cells at the back of the eye; the ability to control the amount of light is called visual adaptation. Human eyes have a cornea on the outer surface that provides about 70 percent of the eye's focusing power, and they have an adjustable lens that provides the rest of the focusing power and allows accommodation, or change of focus, for near or far objects. Light entering the eye passes first through the cornea, then through the pupil of the iris, then through the lens and the vitreous humor, a clear jellylike substance that gives the eye its shape. The light is finally absorbed by the retina, the layer of light-sensitive cells lining the back of the eye.

Light Transduction

Despite the differences in structure, eyes generally use the same set of biochemical tools to transduce light into a neural signal. A carotenoid compound (such as the chemical relatives of vitamin A), linked to a protein in the retinal cell membrane, captures the light energy. The light alters a chemical bond in the carotenoid, which then changes its shape, causing the membrane to alter its electrical state. The change in electrical state then will cause the retinal cell to release a chemical (called a neurotransmitter) which will excite an adjacent nerve cell. The carotenoid plus an associated protein is referred to as the visual pigment. (Interestingly, carotenes are also used by plants to help them capture the energy of the sun in photosynthesis.)

Image Processing

The visual image detected by the retina is not recorded whole and passed unchanged to the brain. Instead, the image is processed, with highlighting and integration of some features along the way. The degree of image processing varies among different types of animals. For example, toads have a "worm detector": when the signals the optic nerves send to the visual-processing area of the brain form a linear pattern, the brain says "worm," and the toad aligns itself with the worm and snaps it up.

The eyes of some animals have fields of vision with little or no overlap between the two eyes, giving them a 360-degree view of the world. Such wide fields of view are seen often in prey animals, allowing higher vigilance against predators. Some ground birds, for example, have eyes that have absolutely no overlap. In contrast, other animals have eyes with highly overlapping fields of vision. This allows stereoscopic vision, in which an object is viewed from two different points. Integration of these images, along with information about the relative direction in which the two eyes are pointing, allows depth perception, a critical tool for predators. It is also important for monkeys and other tree-dwelling primates, for instance, in order to know how far that next branch is so that they do not fall out of their trees!

Ultraviolet and Polarized Light

Across the animal kingdom, the visual spectrum extends from around 350 nanometers (ultraviolet) through all the colors most humans see to the infrared, around 800 nanometers (one nanometer equals one-billionth of a meter). Among vertebrates, elaborate color vision is found in the primates (including humans), birds, lizards, and fish. Most other mammals, including bulls, lack the ability to see red or other colors.

Insects are less able to see red than humans are, but they do see colors, and some insects can detect ultraviolet light. Bees, for instance, can see the hidden ultraviolet color patterns of black-eyed susans and other flowers, allowing them to home in on these flowers more easily.

Another unusual light quality that insects can detect is the plane of light polarization. Light polarization means that all of the rays arriving at the retinal cells are vibrating in the same plane; light typically becomes polarized when it is reflected off surfaces. Insects' retinas are arranged so that they detect changes in polarization. This makes it possible for honeybees to determine the direction of the sun even on cloudy days. The sun's direction in the sky is a critical piece of information communicated in the bee dance that a scout bee will do to communicate the location of nectar or pollen sources to other bees in the hive.

see also Eye.

David L. Evans

Bibliography

Drickamer, Lee C., Stephen H. Vessey, and Elizabeth M. Jakob. Animal Behavior, 5th ed. Dubuque, IA: McGraw-Hill, 1996.

Romoser, William S., and J. G. Stoffolano, Jr. The Science of Entomology, 4th ed. Boston: McGraw-Hill, 1998.

Saladin, Kenneth S. Anatomy and Physiology: The Unity of Form and Function, 2nd ed. Boston: McGraw-Hill, 2000.

Vision

views updated Jun 08 2018

Vision

The process of transforming light energy into neural impulses that can then be interpreted by the brain.

The human eye is sensitive to only a limited range of radiation, consisting of wavelengths between approximately 400 and 750 nanometers (billionths of a meter).

The full spectrum of visible color is contained within this range, with violet at the low end and red at the high end. Light is converted into neural impulses by the eye, whose spherical shape is maintained by its outermost layer, the sclera. When a beam of light is reflected off an object, it first enters the eye through the cornea, a rounded transparent portion of the sclera that covers the pigmented iris. The iris constricts or dilates to control the amount of light entering the pupil, a round opening at the front of the eye. A short distance beyond the pupil, the light passes through the lens, a transparent oval structure whose curved surface bends and focuses the light wave into a narrower beam, which is received by the retina. When the retina receives an image, it is upside down because light rays from the top of the object are focused at the bottom of the retina, and vice versa. This upside-down image must be rearranged by the brain so that objects can be seen right side up. In order for the image to be focused properly, light rays from each of its points must converge at a point on the retina, rather than in front of or behind it. Aided by the surrounding muscles, the lens of the eye adjusts its shape to focus images properly on the retina so that objects viewed at different distances can be brought into focus, a process known as accommodation. As people age, this process is impaired because the lens loses flexibility, and it becomes difficult to read or do close work without glasses.

The retina, lining the back of the eye, consists of ten layers of cells containing photoreceptors (rods and cones) that convert the light waves to neural impulses through a photochemical reaction. Aside from the differences in shape suggested by their names, rod and cone cells contain different light-processing chemicals (photopigments), perform different functions, and are distributed differently within the retina. Cone cells, which provide color vision and enable us to distinguish details, adapt quickly to light and are most useful in adequate lighting. Rod cells, which can pick up very small amounts of light but are not color-sensitive, are best suited for situations in which lighting is minimal. Because only the rod cells are active at night or in dim lighting, it is difficult to distinguish colors under these circumstances. Cones are concentrated in the fovea, an area at the center of the retina, whereas rods are found only outside this area and become more numerous the farther they are from it. Thus, it is more difficult to distinguish colors when viewing objects at the periphery of one's visual field.

The photoreceptor cells of the retina generate an electrical force that triggers impulses in neighboring bipolar and ganglion cells. These impulses flow from the back layer of retinal cells to the front layer containing the fibers of the optic nerve, which leaves the eye through a part of the retina known as the optic disk. This area, which contains no receptor cells, creates a blind spot in each eye; its effects are offset by using both eyes together and also by an illusion the brain creates to fill in this area when one eye is used alone. Branches of the optic nerve cross at a junction called the optic chiasm, located in front of the pituitary gland and underneath the frontal lobes, and then ascend into the brain itself. The nerve fibers extend to a part of the thalamus called the lateral geniculate nucleus (LGN), and neurons from the LGN relay their visual input to the primary visual cortex of both the left and right hemispheres of the brain, where the impulses are transformed into simple visual sensations. (Objects in the left visual field are processed only by the right brain hemisphere, and vice versa.) The primary visual cortex then sends the impulses to neighboring association areas, which add meaning or "associations" to them.

Further Reading

Hubel, David H. Eye, Brain, and Vision. New York: Scientific American Library, 1988.

vision

views updated Jun 27 2018

vi·sion /ˈvizhən/ • n. 1. the faculty or state of being able to see: she had defective vision. ∎ the ability to think about or plan the future with imagination or wisdom: the organization had lost its vision and direction. ∎ a mental image of what the future will or could be like: a socialist vision of society. ∎ the images seen on a television screen. 2. an experience of seeing someone or something in a dream or trance, or as a supernatural apparition: the idea came to him in a vision. ∎ (often visions) a vivid mental image, esp. a fanciful one of the future: he had visions of becoming the Elton John of his time. ∎ a person or sight of unusual beauty. • v. [tr.] rare imagine. DERIVATIVES: vi·sion·al /-zhənl/ adj.; vi·sion·less adj.