Hearing

The auditory system changes as a consequence of the aging process, as well as a result of exposure to environmental agents and disease. The cumulative effect of these factors over the life span is a significant hearing loss among a large proportion of adults aged sixty-five years and older. Hearing loss associated exclusively with the aging process is known as presbycusis. Deterioration of the auditory system with age leads to changes not only in hearing sensitivity, but also to a decline in processing of speech stimuli, particularly in less-than-ideal listening environments. However, there is large individual variability in the auditory abilities of older people, as well as substantial gender differences in auditory performance. Thus, generalizations about hearing in aging and the impact of decline in the auditory sense must be considered with an understanding of the range of individual differences that may occur.

Prevalence of hearing loss

Hearing loss is the fourth most common chronic health condition reported by individuals who are sixty-five years and over (National Center for Health Statistics). Among males, 33 percent aged sixty-five to seventy-four years, and 43 percent aged seventy-five years and over report a hearing impairment. Comparable figures for females are 16 percent and 31 percent for sixty-five to seventy-four year olds and those seventy-five and older, respectively. A much higher prevalence rate of hearing loss among older people is reported in studies that actually test hearing sensitivity in the population. For example, 83 percent of the 2,351 participants in the Framingham Heart Cohort Study, ages fifty-seven to eighty-nine years, had some degree of hearing loss at one frequency in the speech range (Moscicki, Elkins, Baum, and McNamara). Using a criterion of average hearing sensitivity exceeding 25 decibels Hearing Level (dB HL) as indicating significant hearing loss, the overall prevalence rate in large population-based studies of older adults is about 46 percent.

Source of hearing problems and effects on the auditory system

The principal causes of significant hearing loss among older people are noise exposure, disease, heredity, and senescence. Exposure to industrial noise exceeding 90 dBA for an eight-hour workday over a period of time is known to cause permanent, high-frequency hearing loss. Additionally, a single exposure to very intense sound (exceeding 130 dBA) can also cause a permanent hearing loss that affects either all frequencies or selective, high frequencies. Diseases specific to the ear that affect adults include otosclerosis, Meniere's disease, and labyrinthitis. More than one hundred different abnormal genes causing sensorineural hearing loss have been identified. Although hereditary hearing loss accounts for about 50 percent of congenital childhood deafness, it is also thought to play a role in progressive hearing loss during later adulthood (Fischel-Ghodsian). At least one report describes a strong family pattern of presbycusis, particularly in women (Gates et al.). Finally, age-related deterioration of structures in the auditory system appears to occur among individuals with no significant history of noise exposure, otologic disease, or familial hearing loss.

The auditory system is housed within the temporal bone of the skull and consists of the outer ear, middle ear, inner ear, nerve of hearing (N. VIII), and central auditory nervous system. Evidence from anatomical studies of temporal bones and physiologic studies of auditory system function in older individuals suggests that age-related changes can occur at each level of the auditory system.

The outer ear consists of the pinna and the ear canal, which collect and amplify acoustic energy as it is transferred toward the tympanic membrane (eardrum). Changes commonly observed in the outer ear of older individuals include an enlargement of the pinnae, an increase in cerumen (earwax) production in the ear canal, and a change in the cartilage support of the ear canals. These factors can affect the sound field-to-eardrum transfer function and thereby alter sound transmission that is received at the tympanic membrane. Excessive cerumen, found in approximately 40 percent of an elderly population, can add a slight-to-mild high frequency conductive overlay to existing hearing thresholds.

The middle ear contains the three tiny bones, or ossicles (malleus, incus, and stapes), that are linked together as the ossicular chain. The principal function of the middle ear is to transmit acoustic energy effectively from the ear canal to the inner ear without an energy loss. The two middle ear muscles, the tensor tympani and stapedius, contract in response to loud sound to protect the inner ear from damage. With aging, the ligaments, muscles, and ossicles comprising the ossicular chain may degenerate, presumably causing a conductive hearing loss. Electrophysiologic measures of middle ear function (tympanometry) further indicate that the middle ear stiffens with age, thereby reducing the transmission of acoustic energy through the middle ear (Wiley et al., 1996).

The inner ear is composed of a fluid-filled bony labyrinth of interconnected structures including the cochlea. The cochlea contains the sensory end organ for hearing (the organ of Corti), which supports the inner and outer hair cells. These microscopic sensory hairs are essential for processing sound. The cochlea analyzes the frequency and intensity of sound, which is transmitted to the nerve of hearing by the inner hair cells. At the same time, the outer hair cells initiate a feedback mechanism resulting in the presence of acoustic energy in the ear canal (otoacoustic emissions). One prominent change in the inner ear with age is a loss of inner and outer hair cells in the basal turn of the cochlea (Schuknecht). Age-related loss of inner hair cells in this region produces a high frequency hearing loss and has been called sensory presbycusis. The loss of outer hair cells is expected to alter the feedback mechanism, possibly causing hearing loss and limited capacity to finely tune the frequency of sound. Electrophysiologic measures of outer hair cell function indicate that thresholds of otoacoustic emissions increase linearly with increasing age, although this age effect is confounded by the presence of hearing loss among older subjects (Stover and Norton). Another prominent change in the inner ear with aging is a decrease in the volume of vascular tissue, the stria vascularis, lining the outer cochlear wall. The stria vascularis maintains the chemical balance of the fluid in the cochlea, which in turn nourishes the hair cells. A loss of the vascular tissue produces a permanent hearing loss affecting most frequencies, called strial presbycusis (Schuknecht, 1993).

Approximately thirty-five thousand neurons comprise the afferent auditory branch of the eighth cranial nerve (N. VIII) in young, healthy adults. The auditory branch of N. VIII recodes the frequency, intensity, and timing information received from the hair cells and transmits it to the nuclei of the central auditory nervous system. With age, there is a loss of auditory neurons that accumulates over the life span. Considerable evidence demonstrates that the neuronal population comprising the auditory nerve is markedly reduced in aged human subjects compared to younger subjects. The effect on hearing, called neural presbycusis, is a mild loss of sensitivity but a considerable deficit in discriminating attributes of sound, including speech.

The nuclei of the central auditory nervous system transmit acoustic signals to higher levels, compare signals arriving at the two ears, recode the frequency of sound, and code other characteristics of the temporal waveform. Final processing of acoustic information is carried out in the primary auditory cortex, located in the superior temporal gyrus. There is a substantial reduction in the number of neurons in each nucleus of the central auditory nervous system with age, with the most prominent decline occurring in the auditory cortex (Willott). These alterations are thought to affect processing of complex acoustic stimuli, including distorted speech signals and sequences of tonal patterns.

Auditory performance

Hearing sensitivity decreases with increasing age among both men and women. A longitudinal study of hearing thresholds among individuals screened for noise exposure, otologic disease, and hereditary hearing loss showed that hearing thresholds decline progressively above age twenty years in men, and above age fifty years in women (Pearson et al.). The decline in hearing thresholds of the men was more than twice as fast as that of the women, at certain ages. Women showed the greatest decline in hearing sensitivity in the low frequencies, whereas men showed the greatest decline in the higher frequencies. For the unscreened population, the average thresholds of older men, sixty-five years of age, show normal hearing sensitivity in the low frequencies, declining to a moderate hearing loss (42 dB HL) at 3000 cycles per second (Hz) and above (Robinson). For women, the average hearing thresholds at age sixty-five years indicate a slight hearing loss (16–25 dB HL) from 500 through 4000 Hz, and a mild hearing loss (30 dB HL) at 6000 Hz. The type of hearing loss typically is sensorineural, indicating that the site of lesion is the sensory mechanism of the inner ear or the nerve of hearing.

Hearing sensitivity in the ultra-high audiometric frequencies, above 8000 Hz, shows an age-related decline beginning in middle age that is greater than the decline in the lower audiometric frequencies (250–8000 Hz) (Wiley et al., 1998). These extended high-frequency thresholds are highly correlated with thresholds at 4000 Hz and 8000 Hz, suggesting that early monitoring of extended high-frequency thresholds among young and middle-aged adults may be useful for predicting the onset of presbycusis and for recommending preventive measures.

The ability to detect changes in temporal (timing) characteristics of acoustic stimuli appears to decline with age. Gap detection is the ability to detect a brief silent interval in a continuous tonal stimulus or noise, and reflects the temporal resolving power of the ear. Elderly listeners generally show longer gap detection thresholds than younger listeners (Schneider and Hamstra). Older listeners also require longer increments in tone duration to detect a change in a standard tone duration, compared to younger listeners (Fitzgibbons and Gordon-Salant, 1994). Finally, older listeners' performance for discriminating and identifying tones in a sequence is poorer than that of younger listeners, for tones of equivalent duration (Fitzgibbons and Gordon-Salant, 1998). Taken together, these findings indicate that older listeners have limited capacity to process brief changes in acoustic stimuli. This limitation could affect discrimination of the rapid acoustic elements that comprise speech.

Older people demonstrate difficulty understanding speech. In quiet listening environments, the speech recognition problem is attributed to insufficient audibility of the high-frequency information in speech by older people with age-related, high-frequency hearing loss (Humes). Substantial difficulty recognizing speech in noise also characterizes the performance of older listeners. Some studies have shown that the difficulties in noise are largely associated with the loss of sensitivity (Souza and Turner); other studies suggest that there is an added distortion factor with aging that acts to further diminish performance (Dubno, Dirks, and Morgan). The findings in noise are highly variable across studies and are largely dependent upon the speech presentation level, type of speech material (i.e., nonsense syllables, words, sentences), and availability of contextual cues.

In everyday communication situations, speech can be degraded by reverberant rooms and by people who speak at a rapid rate. Reverberation refers to a prolongation of sound in a room, and causes elements of speech to mask later-occurring speech sounds and silent pauses. With rapid speech, there is a reduction in the duration of pauses between words, vowel duration, and consonant duration. Time compression is an electronic or computer method to simulate rapid speech. Age effects are evident for recognition of both reverberant and time-compressed speech, which are independent and additive to the effects of hearing loss (Gordon-Salant and Fitzgibbons). Moreover, multiple speech distortions of reverberant and time-compressed speech, or either time-compressed or reverberant speech in noise, are excessively difficult for older people. Because both of these types of distortions involve a manipulation of the temporal (time) speech waveform, the recognition problem of older people may reflect a deficit in processing the timing characteristics of sound. An alternative hypothesis is that an age-related cognitive decrement in rapid information processing limits the older person's ability to process speech presented at a fast rate (Wingfield et al.). It should be noted, however, that older people are able to perform quite well on many speech recognition tasks if given adequate contextual cues (Dubno, Ahlstrom, and Horwitz, 2000).
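
As a rough, hedged illustration of the discard-and-abut idea behind time compression, the sketch below shortens a waveform by keeping alternating segments and concatenating them; the segment lengths, the roughly 50 percent compression, and the use of random noise as a stand-in for speech are illustrative assumptions only, and practical systems also smooth the joins between segments.

    # Crude sketch of time compression by periodic sampling: keep a short
    # segment, discard the next, and abut what remains. Segment lengths and
    # the noise stand-in for speech are arbitrary illustrative choices.
    import numpy as np

    def time_compress(signal, fs, keep_ms=30.0, discard_ms=30.0):
        keep = int(fs * keep_ms / 1000)
        frame = keep + int(fs * discard_ms / 1000)
        pieces = [signal[i:i + keep] for i in range(0, len(signal), frame)]
        return np.concatenate(pieces)

    fs = 16000
    speech_like = np.random.randn(fs)          # stand-in for one second of "speech"
    shortened = time_compress(speech_like, fs)
    print(len(shortened) / len(speech_like))   # about 0.5: half the original duration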

Impact of age-related hearing loss

Hearing impairment affects the quality of life for older individuals. The principal effects are related to the social and emotional impact of communication difficulties resulting from significant hearing loss (Mulrow et al.). Anecdotal reports of an association between dementia and hearing loss, or between depression and hearing loss, have not been replicated in well-controlled studies with large samples.

Older men and women adjust differently to their hearing loss. Women admit communication problems more often than men and assign greater importance to effective communication than men (Garstecki and Erler). This finding could be associated with differences in marital status between older men and women; older women are more likely to be widowed and thus rely to a greater extent on social interactions outside of the family. Men appear to adjust better to their hearing loss, as reflected by fewer reports of anger and stress associated with their hearing loss compared to reports of women. On the other hand, older men have a higher rate of denial of negative emotional reactions related to their hearing loss than women.

Remediation

Age-related sensorineural hearing loss cannot be ameliorated with medication or surgery. Rather, the principal form of treatment is amplification using a hearing aid. Analog and digital hearing aids are designed to amplify either all or selective frequencies based on an individual's hearing loss, with the goal of bringing all speech sounds into the range of audibility for the hearing-impaired listener. People with sensorineural hearing loss also experience a reduced tolerance for loud sounds. As a result, most hearing aids incorporate amplitude compression circuits to limit the output level of amplified sound without producing distortion. Hearing aids are quite effective for amplifying sound without producing discomfort. Thus, it is not surprising that older hearing-impaired people demonstrate significant benefit from hearing aids for understanding speech in quiet and noisy listening environments and for reducing their perceived hearing handicap (Humes, Halling, and Coughlin). However, there is wide individual variability in the magnitude of hearing aid benefit. The same amplification that hearing aids provide for a target speech signal is applied as well to noise, including the voices of people talking in a background. As a result, older hearing aid users often report less benefit from their hearing aids in noisy environments than in quiet environments. Only about 20 percent of older individuals with hearing loss purchase hearing aids. The prevailing reasons for lack of hearing aid use among older people are stigma, cost, and limited perceived benefit, particularly in noise. Another possible reason for hearing aid rejection by older people is that personal hearing aids do not overcome the older person's difficulties in understanding reverberant speech or rapid speech.
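
The amplitude-compression idea can be sketched as a simple input/output rule: full gain below a kneepoint and reduced gain above it, so that loud sounds are not over-amplified. The gain, kneepoint, and compression ratio in the sketch below are arbitrary illustrative values and do not represent any actual fitting prescription.

    # Minimal sketch of hearing-aid amplitude compression: linear gain below a
    # kneepoint, compressed gain above it. The 25 dB gain, 50 dB SPL kneepoint,
    # and 3:1 ratio are illustrative assumptions, not values from the article.
    def aided_output_db(input_db, gain_db=25.0, knee_db=50.0, ratio=3.0):
        if input_db <= knee_db:
            return input_db + gain_db                            # linear region
        return knee_db + gain_db + (input_db - knee_db) / ratio  # compressed region

    for level in (40, 60, 80, 100):
        print(f"input {level} dB SPL -> output {aided_output_db(level):.0f} dB SPL")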

Frequency-modulated (FM) systems are amplification devices that can be beneficial for older listeners when they are located at a distance from a speaker. These can be independent systems or they can be components attached to a hearing aid and used as a selectable circuit. An FM system includes a microphone/transmitter placed near the speaker that broadcasts the sound, via FM transmission, to a receiver/amplifier located on the user. The amplified sound is unaffected by room acoustics, including noise and reverberation. This type of device is particularly helpful for older listeners in theaters, houses of worship, or classrooms, where a long distance between the speaker and the listener can aggravate the detrimental effects of poor room acoustics for older listeners with hearing loss.

Older individuals with bilateral, severe-to-profound hearing loss generally have widespread damage to the cochlea and derive minimal benefit from hearing aids and FM systems for recognizing speech. These individuals are potential candidates for a cochlear implant, a surgically implanted device that delivers speech signals directly to the auditory nerve via an electrode array inserted in the cochlea. Considerable success in receiving and understanding speech, with and without visual cues, has been reported for older cochlear implant recipients (Waltzman, Cohen, and Shapiro).

Regardless of the type of device used by the older hearing-impaired person, a successful remediation program includes auditory training, speechreading (lipreading) training, and counseling. The emphasis in these programs is training the older person to take full advantage of all available contextual cues, based on the consistent finding that older people are able to surmount most communication problems if contextual cues are available. Another principle of these programs is training effective use of nonverbal strategies (e.g., stage managing tactics for optimal viewing and listening) and verbal strategies (e.g., requesting the speaker to talk more slowly).

Prevention

Hearing sensitivity of older individuals in nonindustrialized societies is significantly better than that of older individuals in industrialized societies. This finding strongly suggests that there are preventable risk factors in industrialized societies for apparent age-related hearing loss. Exposure to intense noise and administration of ototoxic drugs are two well-known risk factors for acquired sensorineural hearing loss. The Baltimore Longitudinal Study of Aging has shown that elevated systolic blood pressure is associated with significant risk for hearing loss in men (Brant et al.). In the Beaver Dam epidemiological study, smoking was identified as a significant risk factor for sensorineural hearing loss among the 3,753 participants (Cruickshanks et al., 1998). Nonsmoking participants who lived with a smoker were also more likely to have a hearing loss than those not exposed to smoke in the home. The identification of these modifiable risk factors suggests that an effective program of prevention or delay of adult-onset hearing loss would include use of ear protection in noisy environments, control of hypertension, elimination of cigarette smoking, and monitoring the use of potentially ototoxic medications.

Sandra Gordon-Salant

See also Brain; Home Adaptation and Equipment; Vision and Perception.

BIBLIOGRAPHY

Brant, L. J.; Gordon-Salant, S.; Pearson, J. D.; Klein, L. L.; Morrell, C. H.; Metter, E. J.; and Fozard, J. L. "Risk Factors Related to Age-Associated Hearing Loss in the Speech Frequencies." Journal of the American Academy of Audiology 7, no. 3 (1996): 152–160.

Cruickshanks, K. J.; Klein, R.; Klein, B. E.; Wiley, T. L.; Nondahl, D. M.; and Tweed, T. S. "Cigarette Smoking and Hearing Loss: The Epidemiology of Hearing Loss Study." Journal of the American Medical Association 279, no. 21 (1998): 1715–1719.

Dubno, J. R.; Ahlstrom, J. B.; and Horwitz, A. R. "Use of Context by Young and Aged Adults with Normal Hearing." Journal of the Acoustical Society of America 107, no. 1 (2000): 538–546.

Dubno, J. R.; Dirks, D. D.; and Morgan, D. E. "Effects of Age and Mild Hearing Loss on Speech Recognition." Journal of the Acoustical Society of America 76, no. 1 (1984): 87–96.

Fischel-Ghodsian, N. "Mitochondrial Deafness Mutations Reviewed." Human Mutation 13, no. 4 (1999): 261–270.

Fitzgibbons, P. J., and Gordon-Salant, S. "Age Effects on Measures of Auditory Duration Discrimination." Journal of Speech and Hearing Research 37, no. 3 (1994): 662670.

Fitzgibbons, P. J., and Gordon-Salant, S. "Auditory Temporal Order Perception in Younger and Older Adults." Journal of Speech, Language, and Hearing Research 41, no. 5 (1998): 10521060.

Garstecki, D., and Erler, S. F. "Older Adult Performance on the Communication Profile for the Hearing Impaired: Gender Difference." Journal of Speech, Language, and Hearing Research 42, no. 3 (1999): 735–796.

Gates, G. A.; Couropmitree, N. N.; and Myers, R. H. "Genetic Associations in Age-Related Hearing Thresholds." Archives of Otolaryngology–Head and Neck Surgery 125, no. 6 (1999): 654–659.

Gordon-Salant, S., and Fitzgibbons, P. J. "Temporal Factors and Speech Recognition Performance in Young and Elderly Listeners." Journal of Speech and Hearing Research 36, no. 6 (1993): 1276–1285.

Humes, L. E. "Speech Understanding in the Elderly." Journal of the American Academy of Audiology 7, no. 3 (1996): 161167.

Humes, L. E.; Halling, D.; and Coughlin, M. "Reliability and Stability of Various Hearing-Aid Outcome Measures in a Group of Elderly Hearing-Aid Wearers." Journal of Speech, Language, and Hearing Research 39, no. 5 (1996): 923–935.

Moscicki, E. K.; Elkins, E. F.; Baum, H. M.; and McNamara, P. M. "Hearing Loss in the Elderly: An Epidemiologic Study of the Framingham Heart Study Cohort." Ear and Hearing 6, no. 4 (1985): 184–190.

Mulrow, C. D.; Aguilar, C.; Endicott, J. E.; Velez, R.; Tuley, M. R.; Charlip, W. S.; and Hill, J. A. "Association Between Hearing Impairment and the Quality of Life of Elderly Individuals." Journal of the American Geriatrics Society 38, no. 1 (1990): 45–50.

National Center for Health Statistics. "Current Estimates from the National Health Interview Survey, 1995." Vital and Health Statistics 10 (1998): 79–80.

Pearson, J. D.; Morrell, C. H.; Gordon-Salant, S.; Brant, L. J.; Metter, E. J.; Klein, L. L.; and Fozard, J. L. "Gender Differences in a Longitudinal Study of Age-Associated Hearing Loss." Journal of the Acoustical Society of America 97, no. 2 (1995): 1196–1205.

Robinson, D. W. "Threshold of Hearing as a Function of Age and Sex for the Typical Unscreened Population." British Journal of Audiology 22, no. 1 (1988): 5–20.

Schneider, B. A., and Hamstra, S. J. "Gap Detection Thresholds as a Function of Tonal Duration for Younger and Older Listeners." Journal of the Acoustical Society of America 106, no. 1 (1999): 371–380.

Schuknecht, H. F. Pathology of the Ear, 2d ed. Philadelphia: Lea & Febiger, 1993.

Souza, P. E., and Turner, C. W. "Masking of Speech in Young and Elderly Listeners with Hearing Loss." Journal of Speech and Hearing Research 37, no. 3 (1994): 655–661.

Stover, L., and Norton, S. J. "The Effects of Aging on Otoacoustic Emissions." Journal of the Acoustical Society of America 94, no. 5 (1993): 2670–2681.

Waltzman, S.; Cohen, N.; and Shapiro, B. "The Benefits of Cochlear Implantation in the Geriatric Population." Otolaryngology–Head and Neck Surgery 108, no. 4 (1993): 329–333.

Wiley, T. L.; Cruickshanks, K. J.; Nondahl, D. M.; Tweed, T. S.; Klein, R.; and Klein, B. E. K. "Tympanometric Measures in Older Adults." Journal of the American Academy of Audiology 7, no. 4 (1996): 260268.

Wiley, T. L.; Cruickshanks, K. J.; Nondahl, D. M.; Tweed, T. S.; Klein, R.; and Klein, B. E. K. "Aging and High-Frequency Hearing Sensitivity." Journal of Speech, Language, and Hearing Research 41, no. 5 (1998): 10611072.

Willott, J. F. Aging and the Auditory System. San Diego: Singular Publishing Group, 1991.

Wingfield, A.; Poon, L. W.; Lombardi, L.; and Lowe, D. "Speed of Processing in Normal Aging: Effects of Speech Rate, Linguistic Structure, and Processing Time." Journal of Gerontology 40, no. 5 (1985): 579–585.

Gordon-Salant, Sandra. "Hearing." Encyclopedia of Aging. 2002.

Hearing

Hearing is an especially important avenue by which we gain information about the world around us; one reason is that it plays a primary role in speech communication, a uniquely human activity. Clearly, then, our ability to perceive our environment and therefore to interact with it, both in a physical and verbal or abstract sense, is dependent in large measure upon our sense of hearing. [See Language, article on Speech Pathology; Perception, article on Speech Perception.]

The study of the auditory system is carried on by a variety of disciplines, including psychology, physics, engineering, mathematics, anatomy, physiology, and chemistry. This article deals primarily with work in psychology, although it will be necessary to refer to other areas for a more complete understanding of certain phenomena. The peripheral hearing mechanism is reviewed from the standpoint of anatomy, hydromechanical action, and electrical activity. The basic subjective correlates of auditory stimuli are discussed, together with current research and theory. In all cases, we are concerned with the normal rather than pathological auditory system.

The peripheral hearing mechanism. When one speaks of the ear, the image that first comes to mind is the flap of cartilaginous tissue, or the pinna, fixed to either side of the head. The presumed function of the pinna is to direct sound energy into the ear canal, or external auditory meatus. In some animals, such as the cat, the pinna may be directionally oriented independently of the head. For all practical purposes, however, man does not possess this ability. It has been shown that because of the particular shape of man’s pinnae, sound arriving at the head is modified differentially depending on its direction of arrival. This may well provide a cue for the localization of a sound source in space.

The external meatus and the eardrum. The external meatus is a tortuous canal about one inch in length, closed at the inner end by the eardrum, or tympanic membrane. The meatus forms a passageway through which sound energy may be transmitted to the inner reaches of the ear. The meatus has the general shape of a tube closed at one end; it tends to resonate at a frequency of about 3,000 cycles per second. Because of this resonance the pressure of sound waves at the eardrum, for frequencies in this vicinity, is twenty times greater than that at the pinna. The meatus, therefore, serves as a selective amplification device, and, interestingly enough, it is primarily in this frequency range that our hearing is most sensitive.
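
The resonance figure quoted above is roughly what a quarter-wavelength calculation for a tube closed at one end predicts; the sketch below assumes a canal length of about 2.5 cm and a sound speed of 343 m/s, typical values that are not stated in the article.

    # Quarter-wavelength resonance of a tube closed at one end, a rough model of
    # the external auditory meatus. The canal length (~2.5 cm) and speed of sound
    # (343 m/s) are assumed typical values, not figures from the article.
    speed_of_sound = 343.0   # m/s
    canal_length = 0.025     # m, about one inch

    resonant_frequency = speed_of_sound / (4.0 * canal_length)
    print(f"estimated canal resonance: {resonant_frequency:.0f} Hz")   # ~3400 Hz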

The middle ear. The eardrum marks the boundary between the outer and middle ear. At this point variations in sound pressure are changed into mechanical motion, and it is a function of the middle ear to transmit this mechanical motion to the inner ear, where it may excite the auditory nerve. This transmission is effected by three small bones, the auditory ossicles, which form a bridge across the middle ear. The ossicles are named for their shapes: the malleus (hammer), which is attached to the eardrum; the incus (anvil), which is fixed to the malleus; and the stapes (stirrup), which articulates with the incus and at the same time fits into an oval opening of the inner ear. The ossicles not only provide simple transmission of vibratory energy but in doing so actually furnish a desirable increase in pressure. The ossicles are held in place by ligaments and may be acted upon by two small muscles, the tensor tympani and the stapedius. The function of these muscles is not clear, but it has been suggested that their contraction, together with changes in mode of ossicular motion, serves at high levels of stimulation to reduce the effective input to the inner ear.

The inner ear. The “foot plate” of the stapes marks the end of the middle ear and the beginning of the inner ear. The inner ear actually consists of two portions that, although anatomically related, serve essentially independent functions. Here we are concerned only with the cochlea, which contains the sensory end organs of hearing. The cochlea, spiral in shape, is encased in bone and contains three nearly parallel, fluid-filled ducts running longitudinally. The middle of the three ducts has as its bottom a rather stiff membrane known as the basilar membrane. On this membrane is the organ of Corti, within which are the hair cells, or the sensory receptors for hearing. The hairs of the hair cells extend a short distance up into a gelatinous plate known as the tectorial membrane. When the basilar membrane is displaced transversely, the hairs are moved to the side in a shearing motion and the hair cells are stimulated.

Displacement of the basilar membrane is brought about by fluid movement induced by the pistonlike action of the stapes in the oval window. Since fluid is essentially noncompressible, its displacement is possible because of the existence of a second opening between the cochlea and the middle ear, the round window. When the stapes moves inward, a bulge is produced in the basilar membrane and the round window membrane moves outward. The bulge or local displacement of the basilar membrane is not stationary but moves down the membrane away from the windows. If the movement of the stapes is periodic, such as in response to a pure tone, then the basilar membrane is displaced alternately up and down. Thus, when a pure tone is presented to the ear, a wave travels continuously down the basilar membrane. The amplitude of this wave is not uniform but achieves a maximum value at a particular point along the membrane determined by the stimulus frequency. High frequencies yield maxima near the stapes; lower frequencies produce maxima progressively farther down the membrane.

Electrical potentials. Many of the mechanical events just described have an electrical counterpart. The cochlear microphonic, an electrical potential derived from the cochlea, reflects the displacement of the basilar membrane. The endocochlear potential represents static direct current voltages within various portions of the cochlea, whereas the summating potential is a slowly changing direct current voltage that occurs in response to a high-intensity stimulus. Also observable is the action potential, which is generated by the auditory neurons in the vicinity of the cochlea. Neural potentials reflecting the activity of the hair cells are transmitted by the eighth cranial nerve to the central nervous system.

Psychoacoustics. Although it is true that one of the principal functions of man’s auditory system is the perception of speech, it does not necessarily follow that the exclusive use of speech stimuli is the best way to gain knowledge of our sense of hearing. In the study of hearing, the use of simple stimuli predominates and the common stimulus is the sine wave, or pure tone.

Traditionally, psychophysics has investigated problems in (1) detection of stimuli, (2) detection of differences between stimuli, and (3) relations among stimuli. Psychoacoustics has followed a similar pattern.

Threshold effects. It has been shown that a number of factors are influential in determining the minimum magnitude (often called intensity or amplitude) of an auditory signal that can be detected. Specifically, absolute thresholds are a function primarily of the frequency and duration of the signal. Under optimum conditions, sound magnitudes so faint as to approach the random movement of air molecules, known as Brownian movement, may be heard; the displacement of the basilar membrane in these cases is a thousand times less than the diameter of a hydrogen atom. Masked thresholds, those obtained in the simultaneous presence of a signal and other stimuli that mask its effect, depend on the frequency and relative magnitude of each stimulus.

In addition, previous auditory stimulation will affect subsequent absolute thresholds. Generally the effect is to lower auditory sensitivity, although in some instances sensitivity may be enhanced. Pertinent factors here include the magnitude, frequency, and duration of the “fatiguing” stimulus as well as the interval between the presentation of the fatiguing and test stimuli.

Differential thresholds. There are as many studies dealing with the detection of differences between two stimuli as there are ways in which stimuli may be varied. Only a few examples, therefore, will be cited here. With pure tones, thresholds for hearing frequency differences become greater as frequency is increased, and smaller as magnitude is increased. Differences as small as one part in a thousand are detectable. Differential thresholds for magnitude depend upon the same parameters, but in a more complex way.

Noise stimuli, those sound waves lacking a periodic structure, may be varied with respect to magnitude, bandwidth, and center frequency, but differential thresholds with noise are generally predictable from the pure tone data.

Signal detection theory. Recently, there has come into psychophysics, principally by way of psychoacoustics, a new way of thinking about detection data. This new approach makes use of signal detection theory and offers some novel ideas. First, it offers a way to measure sensory aspects of detection independently of decision processes. That is, under ordinary circumstances, the overt response to the signal depends not only upon the functioning of the receptor but also on the utility of the response alternatives. If it is extremely important that a signal be detected, a subject is more likely to give a positive response regardless of the activity of the receptor.

Second, it rejects the notion of a threshold; that is, a threshold in the sense that a mechanism exists which is triggered if some critical stimulus level is exceeded. One basis for such rejection is clear from the previous paragraph.

The theory of signal detectability substitutes for the concept of a threshold the view that detection of a stimulus is a problem in the testing of statistical hypotheses. For example, the testing situation can be so structured that two stimulus conditions exist: the signal is present and the signal is absent. Clearly, there are four alternatives: (1) the listener can accept the hypothesis that the signal was present when, in fact, it was; (2) he can reject this hypothesis under the same conditions; (3) he can accept the hypothesis that the signal was absent when, in fact, it was; (4) he can reject this hypothesis. By making certain assumptions about the characteristics of the stimulus and proceeding under the ideal condition that the observer makes use of all information in the signal, the probabilities associated with these alternatives may be mapped out. It has been shown that an actual observer behaves as if he were performing in this fashion, and his performance may therefore be compared to that ideally obtainable.
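
One common way to summarize these four outcomes is the sensitivity index d′ of signal detection theory, computed from the hit and false-alarm rates under equal-variance Gaussian assumptions; the sketch below uses hypothetical rates and illustrates only the bookkeeping, not any particular experiment described here.

    # Hit and false-alarm rates converted to z-scores under the equal-variance
    # Gaussian model of signal detection theory, yielding the sensitivity index
    # d' and the response criterion c. The rates used here are hypothetical.
    from statistics import NormalDist

    def detection_indices(hit_rate, false_alarm_rate):
        z = NormalDist().inv_cdf                  # inverse standard normal CDF
        d_prime = z(hit_rate) - z(false_alarm_rate)
        criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
        return d_prime, criterion

    d, c = detection_indices(hit_rate=0.85, false_alarm_rate=0.20)
    print(f"d' = {d:.2f}, criterion = {c:.2f}")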

Suprathreshold phenomena. With signals that are easily audible, it is generally conceded that there are three primary perceptual dimensions to hearing: pitch, loudness, and quality. Considerable effort has been expended in searching for the stimulus correlates and physiological mechanisms associated with these dimensions.

Pitch and theories of hearing. With a simple pure tone, pitch is usually associated with the frequency of vibration of the stimulus. High frequencies tend to give rise to high pitches. Historically, there have been two general types of theories of hearing: “place” theories and “volley” theories (often called frequency theories).

The most commonly suggested mechanism of pitch is based on a place hypothesis, which holds that the pertinent cue for pitch is the particular locus of activity within the nervous system. It seems likely that stimulation of specific neurons or groups of neurons is related to the displacement patterns of the basilar membrane. The chief alternative to the place hypothesis is the volley or rate of neural discharge hypothesis, which holds that the rate or frequency with which neural discharge occurs within the auditory nerve is the determinant of pitch; the higher the frequency, the higher the pitch. The frequency of neural discharge, in turn, is synchronous with the stimulus frequency.

Any result in which pitch is influenced by a parameter other than frequency is not in accord with the neural discharge hypothesis. Such results include changes in pitch brought about by differences in the magnitude of the stimulus, by masking, fatigue, or auditory pathology. On the other hand, the place hypothesis cannot readily explain how a pitch corresponding to a particular frequency is perceived when, in fact, little or no energy exists in the stimulus at that frequency. Such a situation exists for several pitch phenomena: the residue, periodicity pitch, time separation pitch, and Huggins’ effect.

Loudness. Loudness is related to the magnitude of the stimulus, but not exclusively so. Frequency and duration of the stimulus are secondary factors in determining loudness. The loudness of a stimulus depends upon prior acoustic stimulation in somewhat the same manner that absolute threshold does. Generally, loudness decreases following adaptation, and the pertinent parameters are the same as those that influence threshold shifts.

Quality. Quality, or timbre, is a complex perceptual quantity that appears to be associated with the harmonic structure of the sound wave. The greater the number of audible harmonics, the richer or fuller the sound will appear. The converse also appears to be true. Relatively little work has been done in this area.

Scaling and harmonics. Psychophysical scaling, or the assessment of the relation between the magnitude of the stimulus and the magnitude of the sensation, has been of interest for many years. New methods, whose chief virtues are simplicity and relative freedom from bias, have recently stimulated additional research. Auditory dimensions that have been studied include loudness, pitch, duration, volume, density, and noxiousness.

The principal finding is that in nearly all cases the relation between stimulus and sensation is a power function.
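
For loudness, the power function takes the form S = k·Iⁿ (Stevens' law); the exponent of about 0.3 used in the sketch below is a conventional textbook value for the loudness of a moderate-level tone, assumed here rather than taken from the article.

    # Stevens' power law, S = k * I**n, illustrated for loudness. The exponent
    # 0.3 and k = 1 are conventional assumed values, not figures from the text.
    def sensation_magnitude(intensity_ratio, exponent=0.3, k=1.0):
        return k * intensity_ratio ** exponent

    # A tenfold increase in intensity (a 10 dB step) roughly doubles loudness:
    print(sensation_magnitude(10.0) / sensation_magnitude(1.0))   # ~2.0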

Nonlinear effects. In simple systems in which response magnitude is a nonlinear function of stimulus input, harmonics are generated when the system input is in the form of a simple sinusoid. When the input is two sinusoids, or pure tones, then in addition to harmonics, components exist whose frequency is equal to the sum and the difference of the input frequencies. Such effects are seen when the auditory system is driven at moderate and high intensities. That is, additional tones called aural harmonics and sum and difference tones are perceived corresponding to the predicted frequencies. This seems to indicate that the auditory system behaves in a nonlinear fashion over the upper part of its dynamic range.
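
A toy model makes the combination tones concrete: passing two pure tones through a slightly quadratic nonlinearity produces output components at the harmonics and at the sum and difference frequencies. The input frequencies (700 and 1000 Hz) and the size of the quadratic term below are arbitrary choices for illustration.

    # Two pure tones (700 and 1000 Hz, arbitrary choices) pass through a mildly
    # quadratic "system"; the output spectrum then contains the second harmonics
    # (1400, 2000 Hz) plus difference (300 Hz) and sum (1700 Hz) tones.
    import numpy as np

    fs = 8000
    t = np.arange(fs) / fs                      # one second of samples
    x = np.sin(2 * np.pi * 700 * t) + np.sin(2 * np.pi * 1000 * t)
    y = x + 0.3 * x ** 2                        # linear term plus quadratic distortion

    spectrum = np.abs(np.fft.rfft(y)) / len(y)
    freqs = np.fft.rfftfreq(len(y), d=1 / fs)
    print(freqs[(spectrum > 0.01) & (freqs > 0)])   # 300, 700, 1000, 1400, 1700, 2000 Hz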

Binaural hearing. Under most conditions, stimuli arising from a single sound source are represented somewhat differently at each of the two ears. The auditory system makes use of these subtle differences in such a fashion that we are able to localize the sound in space. The binaural system is especially sensitive to small temporal disparities in the two inputs, being capable of discriminating differences as small as 0.000008 second. Intensity differences at the two ears also play a role in localization.

Although localization effects are the most dramatic event in binaural hearing, other interesting binaural phenomena occur. For example, less energy is required for threshold if both ears, rather than just one ear, are stimulated. Similarly, the same loudness may be achieved binaurally with less energy than it could be monaurally.

Our sense of hearing provides us with information relative to vibratory or acoustic events. This information relates to the magnitude, frequency, duration, complexity, and spatial locus of the event. The peripheral auditory system is an elegantly designed hydromechanical structure. The sensory cells themselves and the complexities of their innervation are of considerable importance, but are less well understood. In total, hearing is an extremely versatile sensory process with exquisite sensitivity.

Arnold M. Small, Jr.

[Other relevant material may be found in Attention; Nervous system; Psychophysics; Senses; and in the biography of Helmholtz.]

BIBLIOGRAPHY

Conference on the Neural Mechanisms of the Auditory and Vestibular Systems, Bethesda, Md., 1959 1960 Neural Mechanisms of the Auditory and Vestibular Systems. Springfield, Ill.: Thomas. → The first 16 chapters deal with the auditory system.

Geldard, Frank A. 1953 The Human Senses. New York: Wiley.

Harvard University, Psycho-acoustic Laboratory 1955 Bibliography on Hearing. Cambridge, Mass.: Harvard Univ. Press. → Contains more than ten thousand titles.

Helmholtz, Hermann L. F. von (1862) 1954 On the Sensations of Tone as a Physiological Basis for the Theory of Music. New York: Dover. → First published as Die Lehre von den Tonempfindungen als physiologische Grundlage für die Theorie der Musik. A classic which in many ways is as important today as when it was written.

Jerger, James (editor) 1963 Modern Developments in Audiology. New York: Academic Press. → Especially valuable for its readable review of signal detection theory and its coverage of the effect of acoustic stimulation on subsequent auditory perception.

Von Békésy, Georg (1928–1958) 1960 Experiments in Hearing. New York: McGraw-Hill. → A compilation of Georg von Békésy’s writings on cochlear mechanics, psychoacoustics, and the ear’s conductive processes.

Wever, Ernest G. 1949 Theory of Hearing. New York: Wiley. → A review of theories of hearing with a special attempt to show the cogency of a volley theory.

Wever, Ernest G.; and Lawrence, Merle 1954 Physiological Acoustics. Princeton Univ. Press. → Emphasis on the mechanics of the middle ear.

"Hearing." International Encyclopedia of the Social Sciences. 1968.

hearing

Sounds are rapid variations in pressure, which are propagated through the air away from a vibrating object, such as a loudspeaker cone or the human vocal cords. Our sense of hearing allows us to detect and identify the myriad sounds present in our environment, and to determine their whereabouts. In humans and other animals with a poorly developed sense of smell, hearing plays a particularly important role in alerting the listener to novel events in the environment. Through speech and music, human hearing also makes an extremely important contribution to social communication.

When the prongs of a tuning fork vibrate back and forth in a regular manner, a periodic sound is produced. For such a pure tone, the simplest type of sound, the pressure increases and then decreases following a smooth wave pattern (a sinusoidal function). The number of complete cycles per second is known as the frequency of the tone and is measured in Hertz (Hz). More commonly, natural sounds contain a number of different frequency components, the variation in intensity across the frequency range being referred to as the spectrum of the sound. The fundamental frequency of a complex tone corresponds to its perceived pitch, whereas the full spectrum determines the timbre, or sound quality. Thus, the same note played on two different musical instruments may sound different, as a result of differences in the additional frequencies in their spectra.
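
In numerical form, a pure tone is just a sampled sinusoid, and a complex tone is a sum of them; the sketch below builds both, with arbitrary frequencies, amplitudes, and sampling rate chosen only for illustration.

    # A sampled pure tone, p(t) = A * sin(2*pi*f*t), and a two-component complex
    # tone whose 220 Hz fundamental sets the pitch while the added harmonic
    # changes the timbre. All numerical values are arbitrary illustrative choices.
    import numpy as np

    fs = 44100                                   # samples per second
    t = np.arange(0, 0.5, 1 / fs)                # half a second of time points

    pure_tone = 0.5 * np.sin(2 * np.pi * 440 * t)           # 440 Hz pure tone
    complex_tone = (0.5 * np.sin(2 * np.pi * 220 * t)       # fundamental (pitch)
                    + 0.25 * np.sin(2 * np.pi * 440 * t))   # second harmonic (timbre)
    print(len(pure_tone) / fs, "seconds of audio")          # 0.5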

Young, healthy humans can hear sound frequencies from about 40 Hz to 20 kHz, although the upper frequency limit declines with age. Other mammals can hear frequencies that are inaudible to humans, both lower and higher. Some bats, for example, which navigate by echolocation, both emit and hear sounds with frequencies of more than 100 kHz. In general, there is a good match between the sound frequencies to which an animal is most sensitive and those frequencies it uses for communication. This is true in humans, who are most sensitive over a broad range of tones that cover the spectrum of human speech.

Compared with total atmospheric pressure, airborne sound waves represent extremely small pressure changes. The amplitude of the pressure variation in a sound directly determines its perceived loudness. Because the range of sound pressures that can be heard is so large, a logarithmic scale of decibels (dB) is used to measure sound intensity. On this scale, 0 dB is around the lowest sound level that can be heard by a human listener, whereas sound levels of 100 dB or more are uncomfortably loud and may damage the ears. At pop concerts and in discos the sound level can be much higher than this!

The design of the ear changed substantially between aquatic and terrestrial vertebrates, but has remained very similar among mammals (except for specializations for different parts of the frequency spectrum). The human ear is subdivided into the external, middle, and inner ear. The visible part of the ear comprises the skin-covered cartilaginous external ear. This includes the pinna on the side of the head and the external auditory meatus, or ear canal, which terminates at the eardrum. As they travel into the ear canal, sounds are filtered so that the amplitude of different frequency components is altered in different ways depending on the location of the sound source. These spectral modifications, which are not perceived as a change in sound quality, help us to localize the source of the sound. They are particularly important for distinguishing between sounds located in front of and behind or above and below the listener, and for localizing sounds if you are deaf in one ear, or when listening to very quiet sounds, inaudible to one ear. Because of its resonance characteristics, the external ear also amplifies the sound pressure at the eardrum by up to 20 dB in humans over a frequency range of 2–7 kHz.

Lying behind the eardrum is an air-filled cavity known as the middle ear, which is connected to the back of the throat via the eustachian tube. Opening of this tube during swallowing and yawning serves to maintain the middle ear cavity at atmospheric pressure. Airborne sounds pass through the middle ear to reach the fluid-filled cochlea of the inner ear, where the process of transduction — the conversion of sound into the electrical signals of nerve cells — takes place. Because of its greater density, the fluid in the cochlea has a much higher resistance to sound vibration than the air in the middle ear cavity. To prevent most of the incoming sound energy from being reflected back, vibrations of the eardrum are mechanically coupled to a flexible membrane (the oval window) in the wall of the cochlea by the three smallest bones in the body (the malleus, incus, and stapes — together known as the ossicles). These delicately suspended bones improve the efficiency with which sound energy is transferred from the air to the fluid in the cochlea and therefore prevent the loss in sound pressure that would otherwise occur due to the higher impedance of the cochlear fluids. This is achieved primarily because the vibrations of the eardrum are concentrated on the much smaller footplate of the stapes, which fits into the oval window of the cochlea. The smallest skeletal muscles in the body are attached to the ossicles, and contract reflexly in response to loud sounds or when their owner speaks. These contractions dampen the vibrations of the ossicles, thereby reducing the transmission of sound through the middle ear. As with the external ear, the efficiency of middle ear transmission varies with sound frequency. Together, these structures determine the frequencies to which we are most sensitive.
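
The pressure gain produced by concentrating eardrum vibration onto the much smaller stapes footplate can be estimated from the ratio of the two areas, with a small extra factor from the ossicular lever. The areas and lever ratio in the sketch below are typical textbook values assumed for illustration; they are not given in the article.

    # Rough estimate of middle-ear pressure gain from the eardrum-to-footplate
    # area ratio and the ossicular lever. The effective eardrum area (~55 mm^2),
    # footplate area (~3.2 mm^2), and lever ratio (~1.3) are assumed textbook
    # values, not figures taken from the article.
    import math

    eardrum_area_mm2 = 55.0
    footplate_area_mm2 = 3.2
    lever_ratio = 1.3

    pressure_gain = (eardrum_area_mm2 / footplate_area_mm2) * lever_ratio
    gain_db = 20 * math.log10(pressure_gain)
    print(f"pressure gain ~{pressure_gain:.0f}x ({gain_db:.0f} dB)")   # ~22x, ~27 dB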

The inner ear includes the cochlea, the hearing organ, and the semicircular canals and otolith organs, the sense organs of balance. Both systems employ specialized receptor cells, known as hair cells, for detecting mechanical changes within the fluid-filled inner ear. Projecting from the apical surface of each hair cell is a bundle of around 100 hairs called stereocilia. Deflection of the bundle of hairs by sound (in the cochlea) or head motion or gravity (in the balance organs) leads to the opening of pores in the membrane of the hairs that allow small, positively-charged ions to rush into the hair cell and change its internal voltage. This causes a neurotransmitter to be released from the base of the hair cell, which, in turn, activates the ends of nerve fibres that convey information from the ear towards the brain. Although there are some differences between the hair cells of the hearing and balance organs, they work in essentially the same way.

The mammalian cochlea is a tube which is coiled so that it fits compactly within the temporal bone. The length of the cochlea — just over 3 cm in humans — is related to the range of audible frequencies rather than the size of the animal. Consequently, this structure does not vary much in size between mice and elephants. It is subdivided lengthwise into two principal regions by a collagen-fibre meshwork known as the basilar membrane. Around 15 000 hair cells, together with the nerves that supply them and supporting cells, are distributed in rows along its length. Vibrations transmitted by the middle ear ossicles to the oval window produce pressure gradients between the cochlear fluids on either side of the basilar membrane, setting the membrane into motion. The hair cells are ideally positioned to detect very small movements of the basilar membrane. There are two types of hair cells in the cochlea. The inner hair cells form a single row, whereas the more numerous outer hair cells are typically arranged into three rows.

In the nineteenth century, the great German physiologist and physicist Hermann von Helmholtz proposed that our perception of pitch arises because each region of the cochlea resonates at a different frequency (rather like the different strings of a piano). The first direct measurements of the response of the cochlea to sound were made by Georg von Békésy a century later, on the ears of human cadavers. He showed that very loud sounds induced a travelling wave of displacement along the basilar membrane, which resembles the motion produced when a rope is whipped. Von Békésy observed that the wave built up in amplitude as it travelled along the membrane and then decreased abruptly. For high-frequency sounds, the peak amplitude of the wave occurs near the base of the cochlea (adjacent to the middle ear), whereas the position of the peak shifts towards the other end of the tube (the apex) for progressively lower frequencies. This indeed occurs because the basilar membrane increases in width and decreases in stiffness from base to apex. These observations, which led to von Békésy winning the Nobel Prize, established that the cochlea performs a crude form of Fourier analysis, splitting complex sounds into their different frequency components along the length of the basilar membrane.
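
The base-to-apex frequency layout can be described by Greenwood's place-frequency function for the human cochlea; the constants in the sketch below are Greenwood's published human values, assumed here rather than drawn from this article.

    # Greenwood place-frequency map, F(x) = A * (10**(a*x) - K), where x is the
    # fractional distance from the apex (x = 0) to the base (x = 1). The human
    # constants A = 165.4 Hz, a = 2.1, K = 0.88 are assumed from Greenwood's work.
    def greenwood_frequency(x, A=165.4, a=2.1, K=0.88):
        return A * (10 ** (a * x) - K)

    for x in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"x = {x:.2f} (apex to base): ~{greenwood_frequency(x):.0f} Hz")
    # Low frequencies map near the apex, high frequencies near the base (~20 kHz).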

More recently, much more sensitive techniques, which can measure vibrations of less than a billionth of a metre, have revealed that motion of the basilar membrane is dramatically different in living and dead preparations. In animals in which the cochlea is physiologically intact, the movements of the basilar membrane are amplified, giving rise to much greater sensitivity and sharper frequency ‘tuning’ than can be explained by the variation in width and stiffness along its length. This amplifying step most likely involves the living outer hair cells, which, when stimulated by sound, actively change their length, shortening and lengthening up to thousands of times per second. These tiny movements appear to feed energy back into the cochlea to alter the mechanical response of the basilar membrane. Damage to the outer hair cells, following exposure to loud sounds or ‘ototoxic’ drugs, leads to poorer frequency selectivity and raised thresholds of hearing. The active responses of the outer hair cells are probably responsible for the extraordinary fact that the ear itself produces sound, which can be recorded with a microphone placed close to the ear and used to provide an objective measure of the performance of the ear.

Vibrations of the basilar membrane, detected by the inner hair cells, are transmitted to the brain in the form of trains of nerve impulses passing along the 30 000 axons of the auditory nerve (which mostly make contact with the inner hair cells). Each nerve fibre responds to motion of a limited portion of the basilar membrane and is therefore tuned to a particular sound frequency. Consequently, the frequency content of a sound is represented within the nerve and the auditory regions of the brain by which fibres are active. For frequencies below about 5 kHz, the auditory nerve fibres act like microphones, in that the impulses tend to be synchronized to a particular phase of the cycle of the stimulus. This property, known as phase-locking, allows changes in sound frequency to be represented to the brain by differences in the timing of action potentials and is thought to be particularly important for pitch perception at low frequencies and for speech perception. The intensity of sound is represented within the auditory system by the rate of firing of individual neurons — the number of nerve impulses generated per second — and by the number of neurons that are active.

Auditory signals are relayed through various nuclei (collections of nerve cell bodies) in the brain stem and thalamus, up to the temporal lobe of the cerebral cortex. At each nucleus, the incoming fibres that relay information to the next group of nerve cells are distributed in a topographic order, preserving the spatial relationships of the regions of basilar membrane from which they receive information. This spatial ordering of nerve fibres establishes a neural ‘map’ of sound frequency in each nucleus. The extraction of biologically important information — ‘What is the sound? Where did it come from?’ — takes place in the brain. As a result of the complex pattern of connections that exist within the auditory pathways, many neurons, particularly in the cortex, respond better to complex sounds than to pure tones. Indeed, in certain animals, including songbirds and echolocating bats, physiologists have discovered neurons that are tuned to behaviourally important acoustical features (components of bird song or bat calls). But auditory processing reaches its zenith in humans, where different regions of the cerebral cortex appear (according to studies involving imaging techniques) to have specialized roles in the perception of language and music.

The ability to localize sounds in space assumes great importance for animals seeking prey or trying to avoid potential predators, and also when directing attention towards interesting events. Although sounds can be localized using one ear alone, an improvement in performance is usually seen if both ears hear the sound. Such binaural localization depends on the detection of tiny differences in the intensity or timing of sounds reaching the two ears. At the beginning of the twentieth century, Lord Rayleigh demonstrated that human listeners can localize sounds below about 1500 Hz using the minute differences between the time of arrival (or phase) of the sound at the two ears, which arise because the sound arrives slightly later at the ear further from the sound source. He also showed that interaural intensity differences, which result from the acoustical ‘shadow’ cast by the head, are effective cues at higher frequencies. Using these cues, listeners can distinguish two sources separated by as little as 1° in angle in the horizontal plane.
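
The interaural time differences Rayleigh identified can be estimated from head geometry with Woodworth's spherical-head approximation; the head radius and sound speed in the sketch below are conventional assumed values, not measurements reported in the article.

    # Woodworth's spherical-head estimate of the interaural time difference (ITD):
    # ITD = (r / c) * (sin(theta) + theta), with r the head radius, c the speed of
    # sound, and theta the source azimuth. r = 8.75 cm and c = 343 m/s are assumed.
    import math

    def itd_seconds(azimuth_deg, head_radius=0.0875, c=343.0):
        theta = math.radians(azimuth_deg)
        return (head_radius / c) * (math.sin(theta) + theta)

    for az in (0, 30, 90):
        print(f"azimuth {az:>2} deg: ITD of about {itd_seconds(az) * 1e6:.0f} microseconds")
    # A source directly to one side (90 deg) gives an ITD of roughly 650 microseconds.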

Studies in animals have shown that neurons in auditory nuclei of the brain stem receive converging signals from the two ears. By comparing the timing of the phase-locked nerve impulses coming from each side, some of these neurons show sensitivity to differences in the sound arrival time at the two ears of the order of tens of microseconds, whereas other neurons are exquisitely sensitive to interaural differences in sound level. As well as facilitating the localization of sound sources, binaural hearing improves our ability to pick out particular sound sources, which helps us to detect and analyze them, particularly against a noisy background (aptly termed the ‘cocktail party effect’).

Andrew J. King



See also deafness; ear, external; eustachian tube; hearing aid; sense organs; sensory integration; tinnitus.

Colin Blakemore and Shelia Jennett. "hearing." The Oxford Companion to the Body. 2001. Encyclopedia.com. http://www.encyclopedia.com/doc/1O128-hearing.html

Hearing

Hearing is the process by which humans, using ears, detect and perceive sounds. Sounds are pressure waves transmitted through some medium, usually air or water. Sound waves are characterized by frequency (measured in cycles per second, cps, or hertz, Hz) and amplitude, the size of the waves. Low-frequency waves produce low-pitched sounds (such as the rumbling of distant thunder) and high-frequency waves produce high-pitched sounds (such as a mouse squeak). Sounds audible to most humans range from as low as 20 Hz to as high as 20,000 Hz in a young child (the upper limit, especially, decreases with age). Loudness is measured in decibels (dB), a measure of the energy content, or power, of the waves, which increases with amplitude. The decibel scale begins at 0 for the lowest audible sound and increases logarithmically, meaning that a sound of 80 dB is not just twice as loud as a sound of 40 dB but has 10,000 times more power. Sounds of 100 dB are so intense that they can severely damage the inner ear, as many jack-hammer operators and rock stars have discovered.
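
The logarithmic character of the decibel scale can be checked with a few lines of arithmetic. The sketch below simply applies the standard relation between a decibel difference and a power ratio (ten times the base-10 logarithm of the ratio), which underlies the 10,000-fold figure quoted above.

```python
import math

def power_ratio_from_db(delta_db):
    """Convert a difference in decibels to a ratio of sound power."""
    return 10 ** (delta_db / 10.0)

def db_from_power_ratio(ratio):
    """Convert a power ratio back to a difference in decibels."""
    return 10.0 * math.log10(ratio)

print(power_ratio_from_db(80 - 40))   # 10000.0 -> an 80 dB sound carries 10,000x the power of a 40 dB sound
print(db_from_power_ratio(2))         # ~3.01   -> doubling the power adds only about 3 dB
```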

The ear is a complex sensory organ, divided into three parts: external (outer) ear, middle ear, and inner ear. The outer and middle ear help to protect and maintain optimal conditions for the hearing process and to direct the sound stimuli to the actual sensory receptors, hair cells, located in the cochlea of the inner ear.

Outer Ear and Middle Ear

The most visible part of the ear is the pinna, one of two external ear structures. Its elastic cartilage framework provides flexible protection while collecting sound waves from the air (much like a funnel or satellite dish); the intricate pattern of folds helps prevent the occasional flying insect or other particulate matter from entering the ear canal, the other external ear component. The ear (auditory) canal directs the sound to the delicate eardrum (tympanic membrane), the boundary between external and middle ear. The ear canal has many small hairs and is lined by cells that secrete ear wax (cerumen), another defense to keep the canal free of material that might block the sound or damage the delicate tympanic membrane.

The middle ear contains small bones (auditory ossicles) that transmit sound waves from the eardrum to the inner ear. When the sound causes the eardrum to vibrate, the malleus (hammer) on the inside of the eardrum moves accordingly, pushing on the incus (anvil), which sends the movements to the stapes (stirrup), which in turn pushes on fluid in the inner ear, through an opening in the cochlea called the oval window. Small muscles attached to these ossicles prevent their excessive vibration and protect the cochlea from damage when a loud sound is detected (or anticipated). Another important middle ear structure is the auditory (eustachian) tube, which connects the middle ear to the pharynx (throat). For hearing to work properly, the pressure on both sides of the eardrum must be equal; otherwise, the tight drum would not vibrate. Therefore, the middle ear must be connected to the outside.

Sometimes, when there are sudden changes in air pressure, the pressure difference impairs hearing and causes pain. In babies and many young people, fluid often builds up in the middle ear and pushes on the eardrum. The stagnant fluids can also promote a bacterial infection of the middle ear, called otitis media (OM). OM also occurs when upper respiratory infections (colds and sore throats) travel to the middle ear by way of the auditory tube. Sometimes the pressure can be relieved only by inserting drainage tubes in the eardrum.

Inner Ear

The inner ear contains the vestibule, for the sense of balance and equilibrium, and the cochlea, which converts the sound pressure waves to electrical impulses that are sent to the brain. The cochlea is divided into three chambers, or ducts. The cochlear duct contains the hair cells that detect sound. It is sandwiched between the tympanic and vestibular ducts, which are interconnected at the tip. These ducts form a spiral, giving the cochlea a snail shell appearance. Inside the cochlear duct, the hair cells are anchored on the basilar membrane, which forms the floor of the cochlear duct and the roof of the tympanic duct. The tips of the hair cells are in contact with the tectorial membrane, which forms a sort of awning. When the stapes pushes on the fluid of the inner ear, it creates pressure waves in the fluid of the tympanic and vestibular ducts (like kicking the side of a wading pool). These waves push the basilar membrane up and down, which then pushes the hair cells against the tectorial membrane, bending the "hairs" (stereocilia). When stereocilia are bent, the hair cell is excited, creating impulses that are transmitted to the brain.

How does the cochlea differentiate between sounds of different pitches and intensities? Pitch discrimination results from the fact that the basilar membrane has different vibrational properties along its length: the base (nearest the oval window) vibrates most strongly to high-frequency sounds, and the tip to low frequencies. The hair cells along the length of the cochlea each make their own connection to the brain, just as the keys on an electric piano are each wired to a particular note. Loud (high-amplitude) sounds cause the basilar membrane to vibrate more vigorously than soft (low-amplitude) sounds. The brain thus distinguishes loud from soft sounds by differences in the intensity of nerve signaling from the cochlea.
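
The "wired piano key" picture corresponds to a frequency-to-place map along the basilar membrane. A commonly used empirical description of that map for the human cochlea is the Greenwood frequency-position function; the sketch below uses its standard published constants, which are not part of this entry and are included only as an illustration.

```python
# Greenwood frequency-position fit for the human cochlea.
# x is the position as a fraction of basilar-membrane length,
# from the apex (x = 0, the tip) to the base (x = 1, nearest the oval window).
A, a, k = 165.4, 2.1, 0.88   # standard human constants (assumed here)

def greenwood_frequency(x):
    """Approximate best frequency (Hz) at relative position x."""
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} -> ~{greenwood_frequency(x):7.0f} Hz")
```

With these constants the apex (x = 0) comes out near 20 Hz and the base (x = 1) near 20 kHz, matching both the tonotopic arrangement described above (base = high frequencies, tip = low frequencies) and the 20 Hz to 20,000 Hz range quoted earlier in this entry.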

Hair cells themselves do not make the impulses that are transmitted to the central nervous system (CNS); they stimulate nerve fibers to which they are connected. These nerve fibers form the cochlear branch of the eighth cranial (vestibulocochlear) nerve. In the CNS, the information is transmitted both to the brainstem, which controls reflex activity, and to the auditory cortex, where perception and interpretation of the sound occur. By comparing inputs from two ears, the brain can interpret the timing of sounds from right and left to determine the location of the sound source. This is called binaural hearing.

see also Brain; Neuron

Harold J. Grau


Grau, Harold J. "Hearing." Biology. 2002. Encyclopedia.com. http://www.encyclopedia.com/doc/1G2-3400700206.html

Hearing

The ability to perceive sound.

The ear, the receptive organ for hearing, has three major parts: the outer, middle, and inner ear. The pinna, or outer ear (the part of the ear attached to the head), funnels sound waves through the outer ear. The sound waves pass down the auditory canal to the middle ear, where they strike the tympanic membrane, or eardrum, causing it to vibrate. These vibrations are picked up by three small bones (ossicles) in the middle ear named for their shapes: the malleus (hammer), incus (anvil), and stapes (stirrup). The stirrup is attached to a thin membrane called the oval window, which is much smaller than the eardrum and consequently receives more pressure.

As the oval window vibrates from the increased pressure, the fluid in the coiled, tubular cochlea (inner ear) begins to vibrate the membrane of the cochlea (basilar membrane), which, in turn, bends fine, hairlike cells on its surface. These auditory receptors generate miniature electrical potentials that trigger nerve impulses, which then travel via the auditory nerve, first to the thalamus and then to the primary auditory cortex in the temporal lobe of the brain. There the impulses arrive as auditory but as yet meaningless sensations; they are relayed to association areas of the brain, which convert them into meaningful sounds by examining the activity patterns of the neurons, or nerve cells, to determine sound frequencies. Although the ear changes sound waves into neural impulses, it is the brain that actually "hears," or perceives, the sound as meaningful.

The auditory system contains about 25,000 cochlear neurons that can process a wide range of sounds. The sounds we hear are determined by two characteristics of sound waves: their amplitude (the difference in air pressure between the peak and baseline of a wave) and their frequency (the number of waves that pass by a given point every second). Loudness is influenced by a complex relationship between the wavelength and amplitude of the wave: the greater the amplitude, the faster the neurons fire impulses to the brain, and the louder the sound that is heard. Loudness of sound is usually expressed in decibels (dB). A whisper is about 30 dB, normal conversation is about 60 dB, and a subway train is about 90 dB. Sounds above 120 dB are generally painful to the human ear; the loudest rock band on record was measured at 160 dB.

DECIBEL RATINGS AND HAZARDOUS LEVELS OF NOISE

Decibel level    Example of sounds
30               Soft whisper
35               Noise that may prevent the listener from falling asleep
40               Quiet office noise level
50               Quiet conversation
60               Average television, sewing machine, lively conversation
70               Busy traffic, noisy restaurant
80               Heavy city traffic, factory noise, alarm clock
90               Cocktail party, lawn mower
100              Pneumatic drill
120              Sandblasting, thunder
140              Jet airplane
180              Rocket launching pad

Above 110 decibels, hearing may become painful; above 120 decibels, noise is considered deafening. Above 135 decibels, hearing becomes extremely painful and hearing loss may result if exposure is prolonged; above 180 decibels, hearing loss is almost certain with any exposure.

Pitch (how high or low a tone sounds) is a function of frequency. Sounds with high frequencies are heard as having a high pitch; those with low frequencies are heard as low-pitched. The normal frequency range of human hearing is 20 to 20,000 Hz. Frequencies of some commonly heard sounds include the human voice (120 to approximately 1,100 Hz), middle C on the piano (256 Hz), and the highest note on the piano (4,100 Hz). Differences in frequency are discerned, or coded, by the human ear in two ways: frequency matching and place. The lowest sound frequencies are coded by frequency matching, in which the firing rate of auditory nerve fibers duplicates the frequency of the sound. Frequencies in the low to moderate range are coded both by frequency matching and by the place on the basilar membrane where the sound wave peaks. High frequencies are coded solely by the placement of the wave peak.
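
A quick way to see why frequency matching gives way to place coding is to compare each sound's cycle length with how fast a single nerve fibre can realistically fire. The sketch below assumes a ballpark maximum firing rate of about 1,000 spikes per second; that limit, and the verdict printed for each sound, are illustrative assumptions rather than figures from this entry.

```python
# Why frequency matching runs out at high frequencies: a single neuron
# cannot fire much faster than ~1000 spikes/s (assumed ballpark figure),
# so above roughly 1 kHz one fibre cannot reproduce the waveform
# cycle-by-cycle on its own, and place coding has to take over.
MAX_FIRING_RATE = 1000.0   # spikes per second (assumption for illustration)

examples = {
    "human voice (lowest component)": 120,
    "middle C": 256,
    "highest piano note": 4100,
    "upper limit of hearing": 20000,
}

for name, freq in examples.items():
    period_ms = 1000.0 / freq
    verdict = ("rate can follow the waveform" if freq <= MAX_FIRING_RATE
               else "place (and pooled timing) coding needed")
    print(f"{name:<32} {freq:>6} Hz  period {period_ms:6.2f} ms  -> {verdict}")
```

On this rough account, the human voice and middle C fall in the range where firing rates can follow the waveform directly, while the top of the piano and the upper limit of hearing must rely chiefly on which place along the basilar membrane responds.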

Loss of hearing can result from conductive or sensorineural deafness or damage to auditory areas of the brain. In conductive hearing loss, the sound waves are unable to reach the inner ear due to disease or obstruction of the auditory conductive system (the external auditory canal; the eardrum, or tympanic membrane; or structures and spaces in the middle ear). Sensorineural hearing loss refers to two different but related types of impairment, both affecting the inner ear. Sensory hearing loss involves damage, degeneration, or developmental failure of the hair cells in the cochlea's organ of Corti, while neural loss involves the auditory nerve or other parts of the cochlea. Sensorineural hearing loss occurs as a result of disease, birth defects, aging, or continual exposure to loud sounds. Damage to the auditory areas of the brain through severe head injury, tumors, or strokes can also prevent either the perception or the interpretation of sound.


"Hearing." Gale Encyclopedia of Psychology. 2001. Encyclopedia.com. http://www.encyclopedia.com/doc/1G2-3406000304.html

HEARING

A legal proceeding where an issue of law or fact is tried and evidence is presented to help determine the issue.

Hearings resemble trials in that they ordinarily are held publicly and involve opposing parties. They differ from trials in that they feature more relaxed standards of evidence and procedure, and take place in a variety of settings before a broader range of authorities (judges, examiners, and lawmakers). Hearings fall into three broad categories: judicial, administrative, and legislative. Judicial hearings are tailored to suit the issue at hand and the appropriate stage at which a legal proceeding stands. Administrative hearings cover matters of rule making and the adjudication of individual cases. Legislative hearings occur at both the federal and state levels and are generally conducted to find facts and survey public opinion. They encompass a wide range of issues relevant to law, government, society, and public policy.

Judicial hearings take place prior to a trial in both civil and criminal cases. Ex parte hearings provide a forum for only one side of a dispute, as in the case of a temporary restraining order, whereas adversary hearings involve both parties. Preliminary hearings, also called preliminary examinations, are conducted when a person has been charged with a crime. Held before a magistrate or judge, a preliminary hearing is used to determine whether the evidence is sufficient to justify detaining the accused or discharging the accused on bail. Closely related are detention hearings, which can also determine whether to detain a juvenile. Suppression hearings take place before trial at the request of an attorney seeking to have illegally obtained or irrelevant evidence kept out of trial.

Administrative hearings are conducted by state and federal agencies. Rule-making hearings evaluate and determine appropriate regulations, and adjudicatory hearings try matters of fact in individual cases. The former are commonly used to garner opinion on matters that affect the public—as, for example, when the Environmental Protection Agency (EPA) considers changing its rules. The latter commonly take place when an individual is charged with violating rules that come under the agency's jurisdiction—for example, violating a pollution regulation of the EPA, or, if incarcerated, violating behavior standards set for prisoners by the Department of Corrections.

Some blurring of this distinction occurs, which is important given the generally more relaxed standards that apply to some administrative hearings. The degree of formality required of an administrative hearing is determined by the liberty interest at stake: the greater that interest, the more formal the hearing. Notably, rules limiting the admissibility of evidence are looser in administrative hearings than in trials. Adjudicatory hearings can admit, for example, hearsay that generally would not be permitted at trial. (Hearsay is a statement by a witness who does not appear in person, offered by a third party who does appear.) The Administrative Procedure Act (APA) (5 U.S.C.A. § 551 et seq.) governs administrative hearings by federal agencies, and state laws largely modeled upon the APA govern state agencies. These hearings are conducted by a civil servant called a hearing examiner at the state level and known as an administrative law judge at the federal level.

Legislative hearings occur in state legislatures and in the U.S. Congress, and are a function of legislative committees. They are commonly public events, held whenever a lawmaking body is contemplating a change in law, during which advocates and opponents air their views. Because of their controversial nature, they often are covered extensively by the media.

Not all legislative hearings consider changes in legislation; some examine allegations of wrongdoing. Although lawmaking bodies do not have a judicial function, they retain the power to discipline their members, a key function of state and federal ethics committees. Fact finding is ostensibly the reason for turning congressional hearings into public scandals. Often, however, critics will argue that these hearings are staged for attacking political opponents. Throughout the twentieth century, legislative hearings have been used to investigate such things as allegations of Communist infiltration of government and industry (the House Un-American Activities Committee hearings) and abuses of power by the executive branch (the Watergate and Whitewater hearings).

cross-references

Administrative Law and Procedure.

"Hearing." West's Encyclopedia of American Law. 2005. Encyclopedia.com. http://www.encyclopedia.com/doc/1G2-3437702106.html

198. Hearing

See also 111. DEAFNESS; 132. EAR; 309. PERCEPTION; 380. SOUND.

acoumetry
the measurement of acuteness of hearing. acoumeter, n. acoumetric, adj.
anaudia
loss or absence of the power of hearing.
audiclave
an instrument that aids hearing.
audiology
1. the branch of medical science that studies hearing, especially impaired hearing.
2. the treatment of persons with impaired hearing. audiologist, n.
audiometer
an instrument for testing hearing. Also called sonometer. audiometry, n. audiometric, adj.
audiometry
a testing of hearing ability by frequencies and various levels of loudness. audiometrist, audiometrician, n. audiometric, audiometrical, adj.
auditognosis
Medicine. the sense by which sounds are understood and interpreted.
otocleisis
a closure of the hearing passages.
otomyasthenia
Medicine. a weakness of the ear muscles causing poor selection and amplification of sounds. otomyasthenic, adj.
otophone
1. an external appliance used to aid hearing; a hearing aid.
2. Medicine. a tube used in the auscultation of the ear.
otosis
a defect in hearing causing a false impression of sounds made by others.
paracusis
defective sense of hearing. Also paracousia.
phonism
a sound or a sensation of hearing produced by stimulus of another sense, as taste, smell, etc.
sonometer
audiometer. sonometry, n. sonometric, adj.

"Hearing." -Ologies and -Isms. 1986. Encyclopedia.com. http://www.encyclopedia.com/doc/1G2-2505200209.html

hearing

hearing The sense by which sound is detected. In vertebrates the organ of hearing is the ear. In higher vertebrates, variations in air pressure caused by sound waves are amplified in the outer and middle ears and transmitted to the inner ear, where sensory cells in the cochlea (see organ of Corti) register the vibrations. The resulting information is transmitted to the brain via the auditory nerve. The ear can distinguish between sounds of different intensity (loudness) and frequency (pitch).

"hearing." A Dictionary of Biology. 2004. Encyclopedia.com. http://www.encyclopedia.com/doc/1O6-hearing.html

hearing

hear·ing / ˈhi(ə)ring/ • n. 1. the faculty of perceiving sounds: people who have very acute hearing. ∎  the range within which sounds may be heard; earshot: she had moved out of hearing. 2. an opportunity to state one's case: I think I had a fair hearing. ∎ Law an act of listening to evidence in a court of law or before an official, esp. a trial before a judge without a jury.

"hearing." The Oxford Pocket Dictionary of Current English. 2009. Encyclopedia.com. http://www.encyclopedia.com/doc/1O999-hearing.html

hearing

hearing Process by which sound waves are experienced. Sound waves enter the auditory canal of the ear and vibrate the eardrum. The vibrations are transmitted by three small bones to the cochlea, where receptors generate nerve impulses that pass via the auditory nerve to the brain to be interpreted.

"hearing." World Encyclopedia. 2005. Encyclopedia.com. http://www.encyclopedia.com/doc/1O142-hearing.html

hearing

hearing: see ear.

"hearing." The Columbia Encyclopedia, 6th ed. 2016. Encyclopedia.com. http://www.encyclopedia.com/doc/1E1-X-hearing.html
