The auditory centre in the brain is a biological supercomputer that never takes a break. It analyses and processes incoming sound patterns around the clock and protects us from sensory overload. But how does it work? How does the brain manage to filter the right sounds out of the auditory chaos that surrounds us? The answer lies in a highly complex evolutionary process, developed and refined over millions of years.
The most important organ of hearing is the brain
Once our ears have received incoming sounds and they’ve been converted into electrical impulses in the cochlea, the real work of hearing begins. The brain now has to rapidly unravel the highly complex nerve impulses, analyse them, and interpret them correctly.
In the first step, the auditory centre breaks down the complex waveforms that reach it as neurological signals into their main components: pitch (frequency) and volume (amplitude). It then compares the analysed waveforms with stored patterns (memory). This allows the brain to recognise the origin of a sound and the meaning that should be attached to it. For example, it determines whether a sound is speech or a danger signal. The sound of the wind or the hum of voices in a restaurant might be suppressed in our consciousness, while the voice of the person we’re talking to is filtered and amplified, allowing us to understand it better. This permanent and automatic assessment is essential, since we’re unable to focus on all the sounds around us.
A recent study has shown that the ability to choose between relevant and irrelevant noises is localised in the left hemisphere of the brain. And although the brain usually decides "automatically" which sounds we should perceive, we are also able to influence it consciously by focusing on a specific sound. We’re able to listen to an individual voice in a room full of people, for example, or to a single instrument in an orchestra.
Danger ahead? Our auditory centre never sleeps!
Regardless of how long or how deeply we sleep, our auditory centre (referred to by experts as the auditory cortex) never shuts down completely. However, to allow the rest of the brain and body to recover, the brain simply fades out almost all incoming sounds. These may include rain against the window, as well as the noisy rattle of trains going past at regular intervals. However, we wake up immediately if a potentially important sound reaches our ears. For example, a mother wakes when she hears her baby cry. Or we wake up at night when the silence is broken by an unusual sound such as a scream or a crash.
The firewall in our head
The auditory cortex is thus a kind of natural filter that protects us from sensory overload day and night. This internal firewall is essential for our health and well-being, especially in a world where all of our senses are constantly bombarded by stimuli.
However, the ability of the auditory centre to fulfil this important filtering function depends on well-functioning ears. Only when the brain is supplied with complete and intact acoustic information can it recognise which sounds are important and which should be suppressed. This is why it is essential to look after your hearing and protect it at all times, for example by wearing appropriate hearing protection in noisy workplaces. If you become aware of hearing loss at any time, you should consult a hearing specialist as soon as possible. Even a delay of one or two years may mean it is already too late for a hearing aid to be effective, because the brain unlearns its ability to hear when it is not supplied with sufficient sounds over a long period of time.
How are your cocktail party skills?
Loud environments with a mixture of different sounds always present a challenge in terms of hearing. This is especially true in places where there are lots of people chatting, and where there is background music playing, such as in busy restaurants and bars. Spending time in such places can be extremely frustrating for people with (even limited) hearing loss, simply because they are unable to follow a conversation comfortably. The so-called cocktail party problem is a common early sign of hearing loss.
The speech that reaches our ears is nothing other than a complex mixture of soundwaves. Unravelling and deciphering this mixture is a major task for the brain. There are massive obstacles, including the problem of unit formation: How does the auditory centre manage to subdivide a spoken sentence in such a way that its components can be reconciled with memorised patterns?
You might initially assume that the units would be individual words. But this is not the case, since in almost all languages words flow into each other smoothly during speech. Nor are letters appropriate units, as they are pronounced differently depending on the word and the level of emphasis. Other difficulties involved in the understanding of speech include variations in pitch, depending on the mood and voice of the speaker, accents, dialects, and different speeds. All of these factors have a significant impact on the sound of the spoken word, resulting in very different patterns from person to person, even though the spoken sentence is identical.
The brain apparently copes effortlessly with all this when interpreting language, and works at a tremendous speed. In fact, at a normal rate of speech, we perceive up to 14 speech signals per second. Another fascinating fact is that, even if the rate of speech increases to as many as 60 signals per second, we are still able to understand the content.
Even modern computer technology cannot compete. To date, there is no voice recognition program that recognises spoken language as rapidly as the human brain.
Our own voice sounds loudest
In order for us to be able to focus on what is essential, the auditory centre in the brain is able to distinguish between important and unimportant noises and direct our concentration to a specific source, such as the person we’re talking to. A study carried out in the U.S. demonstrated that this is achieved by increasing or decreasing activity in specific areas of our auditory centre while listening or speaking. Something amazing happens when we talk: so that we can hear our own voices against a background of other voices or in noisy environments, which is essential for adapting dynamically to the existing conditions, the brain makes sure that our own speech is always loud enough for us to hear.
The auditory centre: Small but powerful
Where is this miraculous auditory centre that does all this for us around the clock? And how big is it? You’ll scarcely believe the answer. The auditory cortex is no bigger than your thumbnail and it’s "hidden" in a coil of the cerebral cortex. In fact, we have two auditory centres, one in the left and one in the right hemisphere of the brain. Each comprises eleven different auditory fields, which are together responsible for the entire range of perceptible sound frequencies. Recent experiments have shown that there is apparently a division of labour between the left and right auditory centres. The left auditory cortex, for example, plays the main role in interpretation — that is, the recognition of acoustic signals. Scientists have also been able to demonstrate that the two sides of the auditory centre are constantly engaged in a lively exchange.
The senses work together
It is not only the two auditory centres in our brain that communicate: All our senses are interconnected, including hearing and sight. Research has shown that every person has a natural ability to lipread. As soon as we see mouth movements that are caused by speaking, our auditory centre is activated, even if there is no sound at all. We all know this from experience: It’s easier to understand what someone is saying when we’re looking at them. Other interpersonal communication channels, such as facial expressions, have no impact at all on the auditory cortex.