Friday, March 23, 2012

Why We Hear What We Hear, Part 3

Four Key Points

Here are four key points about the auditory periphery.

  • The auditory periphery analyzes all signals in a time/frequency tiling whose bands are known as ERBs (equivalent rectangular bandwidths) or Barks (see the sketch after this list).
  • Due to the mechanics of the cochlea, first arrivals have a very strong, seemingly disproportionate influence on what you actually hear. This is genuinely useful in the real world.
  • Signals inside an ERB mutually compress.
  • Signals outside an ERB do not mutually compress.
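
The first bullet can be made concrete with a small sketch. The ERB formulas below are the published Glasberg & Moore (1990) ones; the Python code and function names around them are mine, for illustration only:

    import math

    def erb_bandwidth_hz(f_hz):
        """Glasberg & Moore (1990) equivalent rectangular bandwidth at f_hz."""
        return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

    def erb_number(f_hz):
        """ERB-rate scale: roughly how many ERBs lie below f_hz."""
        return 21.4 * math.log10(4.37 * f_hz / 1000.0 + 1.0)

    for f in (100, 500, 1000, 4000, 10000):
        print(f"{f:>6} Hz: ERB is {erb_bandwidth_hz(f):6.0f} Hz wide, "
              f"band number {erb_number(f):4.1f}")

    # About 40 ERB-wide bands tile the audible range:
    print(f"~{erb_number(20000) - erb_number(20):.0f} ERBs from 20 Hz to 20 kHz")

The bands get wider in hertz as frequency rises, so the tiling is roughly uniform on a perceptual scale rather than a linear one.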

Within a given ERB, the mechanics of the cochlea emphasize the first-arrival information, where the first arrival is the first signal to appear within approximately the last 200 milliseconds. The compression of signals inside an ERB takes effect starting about 1 millisecond after a sound arrives, so the leading edge of a sound sends more information to the brain. This turns out to be very useful to us in judging both perceived direction and diffuse sensation.
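
One way to picture this onset emphasis is a toy model in which the response to a tone burst starts out linear and compression then engages with the roughly 1 millisecond time constant mentioned above. To be clear, the reference level and compressive slope below are illustrative assumptions of mine, not physiological measurements:

    import math

    TAU_MS = 1.0     # compression engages over ~1 ms (from the text above)
    REF_DB = 30.0    # assumed level below which the response stays linear
    SLOPE = 0.2      # assumed compressive growth rate, dB out per dB in

    def excitation_db(t_ms, level_db):
        """Effective excitation at t_ms after the onset of a steady burst."""
        linear = level_db                                  # uncompressed response
        compressed = REF_DB + SLOPE * (level_db - REF_DB)  # steady-state response
        engaged = 1.0 - math.exp(-t_ms / TAU_MS)           # 0 at onset, -> 1 later
        return linear + engaged * (compressed - linear)

    for t in (0.0, 0.5, 1.0, 2.0, 5.0):
        print(f"t = {t:3.1f} ms: excitation = {excitation_db(t, 80.0):5.1f} dB")

For an 80 dB burst, the first fraction of a millisecond is represented at nearly full strength while the steady state settles some 40 dB lower; that difference is the disproportionate weight of first arrivals described above.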

The partial loudnesses from the cochlea are integrated somewhere at the very edge of the CNS such that some memory of the past is maintained for up to 200 milliseconds. Level-roving experiments show that when delays approaching 200 milliseconds exist between two sources, the ability to discern fine differences in loudness or timbre is reduced. It is well established that you need very quick, click-free switching between signals when trying to detect very small differences between them; otherwise you lose part of your ability to distinguish loudness differences.
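
A minimal way to model that fading memory is a leaky integrator running over the partial loudness in each band. The roughly 200 millisecond figure comes from the paragraph above; the frame rate and the exponential form of the forgetting are assumptions of mine for the sketch:

    import math

    FS = 1000    # assumed frame rate of the loudness signal, frames/second
    TAU = 0.2    # ~200 ms memory, per the text above

    def integrate(partial_loudness):
        """Running loudness estimate with exponential forgetting."""
        alpha = math.exp(-1.0 / (FS * TAU))  # per-frame retention factor
        state, out = 0.0, []
        for x in partial_loudness:
            state = alpha * state + (1.0 - alpha) * x
            out.append(state)
        return out

    # A 100 ms burst followed by 300 ms of silence:
    trace = integrate([1.0] * 100 + [0.0] * 300)
    for ms in (100, 200, 300, 400):
        print(f"{ms:3d} ms: integrated loudness = {trace[ms - 1]:.3f}")

By 200 milliseconds after the burst ends, the trace has decayed well below its peak, which is one way to see why delayed or roved comparisons are harder: the second sound is judged against a faded memory rather than the original loudness.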

Final Steps

There are two steps remaining in what you hear, both of them executed by the brain in a fashion I do not care to even guess about. The first step is analysis of the somewhat-integrated partial loudnesses into what I call “auditory features”. There is a great deal of data loss at this juncture: only about 1/1000th of the information present at the auditory nerve remains after feature reduction; the rest is either folded into the features that survive or discarded. At this level, feature analysis is extremely plastic and can be guided by learning, experience, cognition, reflex, visual stimuli, state of mind, comfort, and all other factors. The features last a few seconds in short-term memory.

In the second step, the information from feature analysis is again reduced in size, by a factor of roughly 100 to 1000, and turned into what I refer to as “auditory objects”. These are things that one can consciously describe, control, interpret, and so on. Words are examples of things made of successive auditory objects. This process is as plastic as plastic can be: you can redirect it cognitively, or it can be steered by visual stimuli, guesses, and unconscious stimuli, including randomness. It is the final step before auditory input can be converted to long-term memory. Interestingly, this process can promote a feature to an object if you consciously focus on that feature. It is also the process most affected by every other stimulus, including those generated internally. It is at these last two stages, where short-term loudness is reduced first to features and then to objects, that we integrate the results from our senses and our knowledge, regardless of our intent or situation.
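
Multiplying the two reduction ratios through gives a sense of the total compression from auditory nerve to conscious object; a back-of-the-envelope calculation using only the ratios stated in this post:

    # ~1/1000 of the auditory-nerve information survives feature
    # extraction, and features are reduced a further 100 to 1000 times
    # on the way to auditory objects.
    feature_ratio = 1 / 1000
    object_low, object_high = 1 / 1000, 1 / 100
    print(f"surviving fraction: {feature_ratio * object_low:.0e}"
          f" to {feature_ratio * object_high:.0e}")
    # -> 1e-06 to 1e-05 of what the auditory nerve carries

In other words, only about one part in a hundred thousand to one part in a million of what leaves the cochlea survives to the level you can consciously describe.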
