Hearing (or audition) is one of the traditional five senses, and refers to the ability to detect sound. In humans and other vertebrates, hearing is performed primarily by the auditory system: sound is detected by the ear and transduced into nerve impulses that are perceived by the brain.
- Hearing in animals
- Hearing in humans
- Hearing tests
- Hearing underwater
- References
- See also
- External links
Hearing in animals
Not all sounds are normally audible to all animals. Each species has a range of normal hearing for both loudness (amplitude) and pitch (frequency). Many animals use sound to communicate with one another, and in these species hearing is particularly important for survival and reproduction. In species that use sound as a primary means of communication, hearing is typically most acute for the range of pitches produced in calls and speech.
Frequencies audible to humans are called audio or sonic. Frequencies higher than audio are referred to as ultrasonic, while frequencies below audio are referred to as infrasonic. Some bats use ultrasound for echolocation while in flight. Dogs can hear ultrasound, which is the principle of 'silent' dog whistles. Snakes sense infrasound through their bellies, and whales, giraffes and elephants use it for communication.
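The three bands can be summarized in a small sketch. This is illustrative only: the 20 Hz and 20 kHz cutoffs are the conventional limits of human hearing, and the function name is invented for this example.

```python
def classify_frequency(hz):
    """Classify a frequency relative to the conventional human audible band."""
    if hz < 20:
        return "infrasonic"   # below human hearing, e.g. elephant rumbles
    elif hz <= 20_000:
        return "sonic"        # audible to humans, e.g. speech and music
    else:
        return "ultrasonic"   # above human hearing, e.g. bat echolocation

print(classify_frequency(10))      # an elephant rumble -> infrasonic
print(classify_frequency(440))     # concert pitch A -> sonic
print(classify_frequency(50_000))  # a bat call -> ultrasonic
```

A dog whistle sits just above the human cutoff (roughly 23-54 kHz), which is why it is "silent" to people but audible to dogs.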
The physiology of hearing in vertebrates is not yet fully understood. The molecular mechanism of sound transduction within the cochlea and the processing of sound by the brain in the auditory cortex are two areas that remain largely unknown.
Hearing in humans
Humans can generally hear sounds with frequencies between 20 Hz and 20 kHz, and can discriminate small differences in loudness (intensity) and pitch (frequency) across that large range of audible sound. This range of frequency detection varies significantly with age, occupational hearing damage, and gender; some individuals can hear pitches up to 22 kHz and perhaps beyond, while others are limited to about 16 kHz. In most adults, the ability to hear sounds above about 8 kHz begins to deteriorate in early middle age.
The visible portion of the outer ear in humans is called the auricle or the pinna. It is a convoluted cup that arises from the opening of the ear canal on either side of the head. The auricle helps direct sound to the ear canal. Both the auricle and the ear canal amplify and guide sound waves to the tympanic membrane or eardrum.
In humans, amplification of sound ranges from 5 to 20 dB for frequencies within the speech range (about 1.5–7 kHz). Since the shape and length of the human external ear preferentially amplify sound in the speech frequencies, the external ear also improves the signal-to-noise ratio for speech sounds.
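As a rough arithmetic check on those figures, a gain expressed in decibels can be converted to a linear ratio of sound pressures with the standard relation ratio = 10^(dB/20). The function below is a minimal sketch, not from the article:

```python
def db_to_pressure_ratio(db_gain):
    """Convert a sound pressure level gain in decibels to a linear pressure ratio."""
    return 10 ** (db_gain / 20)

# The 5-20 dB gain quoted above corresponds to a pressure amplification of
low = db_to_pressure_ratio(5)    # about 1.78x
high = db_to_pressure_ratio(20)  # exactly 10x
```

So the outer ear's passive amplification raises the sound pressure reaching the eardrum by a factor of roughly 1.8 to 10 in the speech band.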
The eardrum is stretched across the front of a bony air-filled cavity called the middle ear. Just as the tympanic membrane is like a drum head, the middle ear cavity is like a drum body.
Much of the middle ear's function in hearing is to convert sound waves in the air surrounding the body into vibrations of the fluid within the cochlea of the inner ear. Sound waves move the tympanic membrane, which moves the ossicles, which in turn move the fluid of the cochlea.
The cochlea is a snail-shaped, fluid-filled chamber, divided along almost its entire length by a membranous partition. The cochlea propagates mechanical signals from the middle ear as waves in fluid and membranes, then transduces them into nerve impulses that are transmitted to the brain. The adjacent vestibular system, not the cochlea, is responsible for the sensations of balance and motion.
Central auditory system
Sound information, encoded as spikes on the afferent fibers of the auditory nerve, travels via the cochlear nuclei to the superior olivary complex in the brainstem, and then via the lateral lemniscus to the inferior colliculus of the midbrain, being further processed at each waypoint. The information eventually reaches the medial geniculate body in the thalamus, and from there it is relayed to the auditory cortex. In the human brain, the primary auditory cortex is located in the temporal lobe. The primary auditory cortex connects to other cortical regions involved in the identification and localization of sound sources and in the perception of speech, music, and language. Such association areas typically also receive input from the visual cortex and other sensory cortical areas.
Representation of loudness, pitch, and timbre
Nerves transmit information through discrete electrical impulses known as action potentials. As the loudness of a sound increases, the rate of action potentials in the auditory nerve fibers increases; conversely, at lower sound intensities (low loudness), the rate of action potentials is reduced.
Different repetition rates and spectra of sounds, that is, pitch and timbre, are represented on the auditory nerve by a combination of rate-versus-place and temporal-fine-structure coding. That is, different frequencies cause a maximum response at different places along the organ of Corti, while different repetition rates of low enough pitches (below about 1500 Hz) are represented directly by repetition of neural firing patterns (known also as volley coding).
Loudness and duration of sound (within small time intervals) may also influence pitch to a small extent. For example, for sounds higher than 4000 Hz, as loudness increases, the perceived pitch also increases slightly.
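The volley-coding idea mentioned above can be illustrated with a toy sketch. This is a simplified model under stated assumptions (the function is hypothetical, and real fibers fire stochastically rather than on a fixed schedule): each fiber phase-locks to the tone but fires only on every n-th cycle, so no single fiber must fire faster than its physiological limit, yet the pooled population marks every cycle of the stimulus.

```python
def volley_spike_times(freq_hz, n_fibers, n_cycles=10):
    """Toy volley-coding sketch: fiber i fires on cycles i, i+n_fibers, i+2*n_fibers, ...
    Returns the pooled spike times (microseconds) across all fibers."""
    period_us = 1_000_000 // freq_hz  # stimulus cycle period in microseconds
    spikes = []
    for fiber in range(n_fibers):
        # Each fiber phase-locks to the tone but skips n_fibers - 1 cycles
        for cycle in range(fiber, n_cycles, n_fibers):
            spikes.append(cycle * period_us)
    return sorted(spikes)

# A 1000 Hz tone over 10 cycles: each of 4 fibers fires at only 250 Hz,
# yet the pooled spike train has one spike per 1000-microsecond cycle.
times = volley_spike_times(1000, 4)
print(times)  # [0, 1000, 2000, ..., 9000]
```

The pooled inter-spike interval reproduces the stimulus period, which is how repetition rates below about 1500 Hz can be represented directly in neural firing patterns.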
Localization of sound
Humans are normally able to hear a variety of sound frequencies, from about 20 Hz to 20 kHz. The ability to estimate where a sound is coming from, sound localization, depends on the hearing ability of each of the two ears and on the exact quality of the sound. Since the ears lie on opposite sides of the head, a sound reaches the closer ear first, and its amplitude is larger in that ear.
The shape of the pinna (outer ear) and of the head itself results in frequency-dependent variation in the amount of attenuation that a sound receives as it travels from the source to the ear; furthermore, this variation depends not only on the azimuthal angle of the source but also on its elevation. This variation is described by the head-related transfer function (HRTF). As a result, humans can locate sound in both azimuth and elevation. Most of the brain's ability to localize sound depends on interaural (between-ear) intensity differences and interaural temporal or phase differences. In addition, humans can estimate the distance of a sound source, based primarily on how reflections in the environment modify the sound, as in room reverberation.
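The interaural time cue can be put in rough numbers. The sketch below uses Woodworth's spherical-head approximation, which is not from this article; the head radius and speed of sound are typical assumed values, chosen only for illustration.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees Celsius (assumed)
HEAD_RADIUS = 0.0875    # m, a typical adult head radius (assumed)

def interaural_time_difference(azimuth_deg):
    """Woodworth's spherical-head approximation of the interaural
    time difference (seconds) for a distant source at a given azimuth."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

itd_front = interaural_time_difference(0)   # source straight ahead: no delay
itd_side = interaural_time_difference(90)   # source to one side: maximal delay
```

A source directly to one side yields an interaural delay on the order of 600–700 microseconds, and the auditory brainstem resolves differences far smaller than that, which is why azimuthal localization is so precise.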
Hearing and language
Human beings develop spoken language within the first few years of life, and hearing impairment can hinder not only a child's ability to talk but also the ability to understand the spoken word. By the time it is apparent that a severely hearing-impaired (deaf) child has a hearing deficit, problems with communication may already have caused issues within the family and hindered social skills, unless the child is part of a Deaf community where sign language is used instead of spoken language (see Deaf culture). In many developed countries, hearing is evaluated during the newborn period in an effort to prevent the inadvertent isolation of a deaf child in a hearing family.
Although sign language is a full means of communication, literacy depends on understanding speech: in the great majority of written languages, the sound of the word is coded in symbols. An individual who hears and learns to speak and read will retain the ability to read even if hearing later becomes too impaired to hear voices, but a person who never heard well enough to learn to speak is rarely able to read proficiently. Most evidence points to early identification of hearing impairment as key if a child with very insensitive hearing is to learn spoken language. Listening also plays an important role in learning a second language.
Hearing tests
Hearing can be measured by behavioral tests using an audiometer. Electrophysiological tests can provide accurate measurements of hearing thresholds even in unconscious subjects. Such tests include auditory brainstem responses (ABR), otoacoustic emissions, and electrocochleography (ECochG). Technical advances in these tests have allowed hearing screening for infants to become widespread.
Hearing underwater
Hearing threshold and the ability to localize sound sources are reduced underwater, where the speed of sound is faster than in air. Underwater hearing is by bone conduction, and localization of sound appears to depend on differences in amplitude detected by bone conduction.
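One reason localization degrades underwater can be sketched numerically: for a fixed ear separation, the maximum interaural time difference shrinks in proportion to the speed of sound. The ear separation and sound speeds below are illustrative assumptions, not figures from the article.

```python
HEAD_WIDTH = 0.175  # m, assumed ear-to-ear distance

def max_itd(speed_of_sound):
    """Maximum interaural time difference (seconds) for a source
    directly to one side, ignoring diffraction around the head."""
    return HEAD_WIDTH / speed_of_sound

itd_air = max_itd(343)     # air: about 510 microseconds
itd_water = max_itd(1500)  # seawater: about 117 microseconds
```

Because sound travels roughly 4.4 times faster in water than in air, the timing cue available to the brain shrinks by the same factor, on top of the loss of normal outer- and middle-ear function when hearing is by bone conduction.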
References
- Brugge, John F. and Howard, Matthew A. "Hearing." In Encyclopedia of the Human Brain, Elsevier, 2002, pp. 429–448. ISBN 0-12-227210-2.
- Kung, C. "A possible unifying principle for mechanosensation." Nature 436(7051):647–654, 4 August 2005.
- Morton, C. C. and Nance, W. E. "Newborn hearing screening: a silent revolution." New England Journal of Medicine 354(20):2151–2164, 18 May 2006.
- Shupak, A., Sharoni, Z., Yanir, Y., Keynan, Y., Alfie, Y., and Halpern, P. "Underwater hearing and sound localization with and without an air interface." Otology & Neurotology 26(1):127–130, January 2005.
See also
- Audiograms in mammals
- Auditory illusion
- Auditory brainstem response (ABR) test
- Auditory scene analysis
- Auditory system
- Cochlear implant
- Equal-loudness contour
- Hearing impairment
- Missing fundamental
- Music and the brain