Do we hear sounds as they are, or do our expectations about what we are going to hear shape the way sound is processed? 

Using computational neuroscience models, Bournemouth University’s Dr. Emili Balaguer-Ballester and colleagues are trying to map the way the brain processes sound.

For example, it takes hundreds of milliseconds for sound to be processed along the neurons from the ear to the brain, yet we can recognize the sex of a speaker or a melody after just a few milliseconds. The researchers are combining magneto- and electroencephalography (MEG and EEG) to map brain activity, simultaneously recording the electromagnetic currents that occur naturally in the brain and brain stem. This yields very detailed temporal information about how sound is processed, which the researchers use to develop models that show that processing in detail.

“Almost 80% of connections between central and pre-cortical areas during sound processing seem to be top-down, i.e. from the brain to the auditory peripheral system, and not bottom-up, which is perhaps unexpected,” says Balaguer-Ballester. “As sound comes from an external stimulus, it would be fair to assume that most of our processing is driven by what we hear, but that is apparently not the case. What your brain expects to hear can be as important as the sound itself.”

Central processing disorders can lead to problems such as delayed language development in children, so it is important to pinpoint which neural parameter is altered in order to treat the cause of the alteration appropriately, the authors say.

Citation: Balaguer-Ballester E, Clark NR, Coath M, Krumbholz K, Denham SL (2009) Understanding Pitch Perception as a Hierarchical Process with Top-Down Modulation. PLoS Comput Biol 5(3): e1000301