
Topic: Audition

Slide 1: Module 1: Introduction


Module 1: Introduction

Slide 2: If a Tree Falls in the Forest
Module 1: Introduction
Subtopic 1: If a Tree Falls in the Forest

I'm sure that you are familiar with the philosophical question: if a tree falls in the forest and no one is around to hear it, does it still make a sound? Well, this question gets us right into the thick of determining what sound is. Is it strictly the product of an external stimulus, like a falling tree? In that case, you would say that the tree falling in the forest makes a sound, but that no one hears it. On the other hand, we may consider sound to be the result of our own sensory processing, in which case you could argue that if no one was there to hear the falling tree, then the tree would make no sound at all.

Slide 3: Sound Waves and Auditory systems
Module 1: Introduction
Subtopic 2: Sound Waves and Auditory systems

If we're going to answer this question logically, we can all agree that a falling tree would produce sound
waves. And just like the visual system can interpret light waves to collect information about stimuli in the
environment, the auditory system can translate sound waves from vibrating objects into the psychological
experience of audition.

Slide 4: Sound Waves and Auditory systems
Module 1: Introduction
Subtopic 2: Sound Waves and Auditory systems

This means that, at least from a psychological point of view, the falling tree would certainly produce sound
waves, but the sound waves themselves do not make sound unless an auditory system is present to
translate those sound waves into the perceptual experience of audition.

Slide 5: Module 2: The Auditory Mechanisms of Different Species
Module 2: The Auditory Mechanisms of Different Species

Slide 6: Introduction to Auditory Mechanisms
Module 2: The Auditory Mechanisms of Different Species
Subtopic 1: Introduction to Auditory Mechanisms

Auditory mechanisms vary across species according to specific needs: whether they live in water, on land, or in the air; whether they need to communicate over long distances; whether they need to detect high- or low-frequency sounds; and so on. Let's examine some of the different designs of auditory mechanisms that have formed in different species through the process of evolution.

Slide 7: Sound Frequency
Module 2: The Auditory Mechanisms of Different Species
Subtopic 2: Sound Frequency

One way that the hearing abilities of various species differ is the range of frequencies that can be
detected. For example, if you've ever tried blowing a dog whistle, you know that blowing the whistle doesn't produce any audible sound to your own ears, but you will certainly have a dog's attention! The dog whistle produces a sound at a high frequency that is beyond the range of human ears but well within the range of the dog's auditory system.

Slide 8: Sound Frequency Perception in Vertebrates
Module 2: The Auditory Mechanisms of Different Species
Subtopic 3: Sound Frequency Perception in Vertebrates

Humans can perceive sounds that lie anywhere between 20 and 20,000 Hz, a respectable auditory range.
Relatively speaking, whales, dolphins and dogs have a wider hearing range, while frogs and birds have a
much narrower range of frequencies that they can detect. At the lower frequency detection extreme are
fish, while at the higher frequency detection extreme are bats and rodents. So if a fish and a bat were
able to communicate, the conversation would have to be based on the narrow range of frequency overlap
available to both species at about 2000 Hz. This would presumably sound very high-pitched to the fish
and very low-pitched to the bat.

Slide 9: Environmental Impacts on Auditory Structure
Module 2: The Auditory Mechanisms of Different Species
Subtopic 4: Environmental Impacts on Auditory Structure

Audible frequency range is determined in part by the evolution of the structures of the auditory system.
One key structure is the basilar membrane which contains the hearing receptors; sounds of different
frequencies are processed along different areas of the basilar membrane.

Slide 10: The Basilar Membrane
Module 2: The Auditory Mechanisms of Different Species
Subtopic 5: The Basilar Membrane

The basilar membrane varies in length across species; it is shortest in amphibians and reptiles, longer in
birds, and longest in mammals. A longer basilar membrane allows processing of a wider range of
frequencies. And so, mammals can discriminate the widest range of frequencies while most other
species cannot discriminate frequencies over 10,000 Hz.

Slide 11: Checkpoint 1

Slide 12: Module 3: The Stimulus - Sound Waves
Module 3: The Stimulus: Sound Waves

Slide 13: Introduction to Sound Waves
Module 3: The Stimulus: Sound Waves
Subtopic 1: Introduction to Sound Waves

To fully appreciate audition, we first need to understand the stimulus that is being processed. Like light,
sound travels in waves, although sound waves travel much slower and require some medium to travel
through. Sound waves are initiated by either a vibrating object, like our vocal cords or a guitar string, a
sudden burst of air, like a clap, or by forcing air past a small cavity, like a pipe organ. This causes the air
molecules surrounding the source of the sound to move, causing a chain reaction of moving air particles.

Slide 14: Responding to Changes in Air Pressure
Module 3: The Stimulus: Sound Waves
Subtopic 2: Responding to Changes in Air Pressure

This chain reaction is much like the ripples you observe when you throw a stone into a pond. The point
where the stone hits the pond produces waves that travel away in all directions, much like the alternating
bands of more and less condensed air particles that travel away from the source of a sound.

Slide 15: The eardrum responds to air pressure changes
Module 3: The Stimulus: Sound Waves
Subtopic 2: Responding to Changes in Air Pressure

These alternating bands of more and less compressed air molecules interact with the eardrum to begin
auditory processing. A band of compressed air molecules causes your eardrum to get pushed slightly
inwards, whereas a band of less dense air particles causes the eardrum to move outwards.

Slide 16: Sine Waves
Module 3: The Stimulus: Sound Waves
Subtopic 3: Sine Waves

The changes in air pressure over time that make up a sound wave can be graphed as a sine wave, as
shown here. In our survey of the neurophysiology of vision, we examined three physical characteristics of
a wave: amplitude, wavelength, and purity, which translated into the three psychological perceptions of
our visual world. In audition, the same three physical characteristics, when applied to sound waves,
translate into the three psychological properties of loudness, pitch, and timbre.

Slide 17: Amplitude: Measure of loudness
Module 3: The Stimulus: Sound Waves
Subtopic 3: Sine Waves

Variations in the amplitude or height of a sound wave affect the perception of loudness. Since waves of
greater amplitude correspond to vibrations of greater intensity, higher waves correspond to louder
sounds. Humans are sensitive to a very wide range of different sound amplitudes, and because of this,
loudness is measured using a logarithmic scale of decibels (dB). In this scale, the perceived loudness of
a sound doubles for every 10 dB increase.
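
To make the logarithmic scale concrete, here is a minimal Python sketch (illustrative only; the function names are mine, and the 20-micropascal reference pressure is the standard sound-pressure-level convention rather than anything stated in the lecture):

```python
import math

def pressure_to_db(p, p0=20e-6):
    """Sound pressure level in dB, relative to the standard reference
    pressure p0 = 20 micropascals (roughly the threshold of hearing)."""
    return 20 * math.log10(p / p0)

def relative_loudness(db_a, db_b):
    """Perceived loudness of sound A relative to sound B, using the
    lecture's rule of thumb: loudness doubles every 10 dB."""
    return 2 ** ((db_a - db_b) / 10)

# A 120 dB rock concert is 60 dB above a 60 dB conversation,
# so it sounds roughly 2**6 = 64 times louder.
print(relative_loudness(120, 60))  # 64.0
```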

Slide 18: Amplitude: Measure of loudness
Module 3: The Stimulus: Sound Waves
Subtopic 3: Sine Waves

A normal conversation takes place at around 60 dB, a whisper at around 27 dB, and sitting in the front
row at a rock concert means you get to hear the music at around 120 dB. As enjoyable as this may be,
even brief exposure to sounds this loud can cause physical pain and permanent damage. Many listeners
crank up their personal music listening devices such that they are effectively listening to their music at
120 dB.

Slide 19: Frequency: Measure of pitch
Module 3: The Stimulus: Sound Waves
Subtopic 3: Sine Waves

Sound waves also vary in the distance between successive peaks, called the wavelength; equivalently, they vary in frequency, the number of times per second that the wave completes one full cycle from one peak to the next. Frequency is the property that determines the perception of pitch, and it is measured in Hertz (Hz), or cycles per second. So, if many wave peaks are condensed into one second, then this sound will be of a high frequency, and result in the perception of a high-pitched sound.
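
Since frequency is the number of cycles per second, the period of one cycle and the physical spacing between peaks follow directly. Here is a small Python sketch of those relationships (the 343 m/s speed of sound in air is an assumed typical value, not a figure from the lecture):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C (assumed typical value)

def period_s(frequency_hz):
    """Duration of one full cycle, in seconds: the inverse of frequency."""
    return 1.0 / frequency_hz

def wavelength_m(frequency_hz):
    """Distance between successive peaks: higher frequencies pack
    more cycles into each second, so the peaks sit closer together."""
    return SPEED_OF_SOUND / frequency_hz

# Concert A (440 Hz): 440 peaks pass each second, each cycle lasting
# about 2.3 ms, with peaks spaced about 0.78 m apart in the air.
print(period_s(440), wavelength_m(440))
```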

Slide 20: Frequency: Measure of pitch
Module 3: The Stimulus: Sound Waves
Subtopic 3: Sine Waves

We learned that what we call the visible spectrum of light is only a small portion of the total spectrum of
light waves; similarly, the audible zone of frequencies that humans can detect represents only a portion of
the possible frequencies that can be produced.

Slide 21: Timbre: Measure of complexity/purity
Module 3: The Stimulus: Sound Waves
Subtopic 3: Sine Waves

The third physical property of sound is purity, which affects our perception of timbre. So far, we've been discussing simple sound waves consisting of only a single frequency of vibration. However, most of the sounds we hear every day are complex sounds that are composed of multiple sound waves that vary in frequency. Timbre refers to the complexity of a sound.

Slide 22: Timbre: Measure of complexity/purity
Module 3: The Stimulus: Sound Waves
Subtopic 3: Sine Waves

For example, when you pluck a guitar string, it vibrates as a whole, which produces the fundamental tone, but it also vibrates in shorter segments along the string, producing the overtones. The final sound you hear is a mixture of the fundamental tone and all the overtones, and this combination is the timbre. So a piccolo and a bassoon may both play the same note, but because each instrument produces a unique combination of the fundamental frequency and overtones, they still sound different to us, even when they produce the same fundamental frequency and amplitude.
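
The mixture of a fundamental and its overtones can be expressed directly as a sum of sine waves. The sketch below (in Python; the overtone amplitudes are made up for illustration, not measurements of real instruments) shows how two "instruments" playing the same note can still produce different waveforms, and hence different timbres:

```python
import math

def complex_tone(t, fundamental_hz, overtone_amps):
    """Air-pressure value at time t for a complex tone: a fundamental
    plus overtones at integer multiples of its frequency. The relative
    amplitudes in overtone_amps are what shape the timbre."""
    sample = math.sin(2 * math.pi * fundamental_hz * t)
    for n, amp in enumerate(overtone_amps, start=2):
        sample += amp * math.sin(2 * math.pi * n * fundamental_hz * t)
    return sample

# Same 440 Hz note, same fundamental amplitude, but different overtone
# mixes -- so the resulting waveforms (and timbres) differ.
t = 0.0005
print(complex_tone(t, 440, [0.5, 0.25]))  # "instrument" 1
print(complex_tone(t, 440, [0.1, 0.8]))   # "instrument" 2
```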

Slide 23: Checkpoint 2

Slide 24: Module 4: The Ear
Module 4: The Ear

Slide 25: Introduction to the Ear
Module 4: The Ear
Subtopic 1: Introduction to the Ear

Now that we understand the stimulus that we hear, let's look at the instrument that is used to detect the
sound waves and convert them into something that the brain can interpret. This instrument is of course
the ear.

Slide 26: The Structure of the Ear
Module 4: The Ear
Subtopic 2: The Structure of the Ear

The ear can be divided into the external, middle, and inner ear, and each area conducts sound in a different way. Incoming changes in air pressure are channelled through the external ear to the middle ear, where they are amplified so that they can be detected as changes in fluid pressure by the inner ear. These changes in fluid pressure are then finally converted to auditory neural impulses.

Slide 27: The External Ear
Module 4: The Ear
Subtopic 3: The External Ear

Let's begin this journey by examining the external ear, which is made up of the pinna, the ear canal, and the eardrum. The pinna is what you probably think of when referring to your ears; it is the folded cone that collects sound waves in the environment and directs them along the ear canal. Since the ear canal narrows as it approaches the eardrum, it functions to amplify the incoming sound waves, much like a horn. The eardrum, a thin membrane that forms the back wall of the ear canal, vibrates at the frequency of the incoming sound wave.

Slide 28: The Middle Ear
Module 4: The Ear
Subtopic 4: The Middle Ear

The middle ear begins on the other side of the eardrum, which connects to the ossicles, the three
smallest bones in the body. These ossicles are named after their appearance and consist of the hammer,
anvil, and stirrup.

Slide 29: Ossicles amplify signal sent to the oval window
Module 4: The Ear
Subtopic 4: The Middle Ear

The amplification of the vibrating waves continues here in the middle ear. The eardrum is about 20 times larger in area than the oval window, and the ossicles that connect them act as a lever system that amplifies the vibrations even more. This additional amplification is necessary because the changes in air pressure originally detected by the external ear are about to be converted to waves in the fluid-filled inner ear.

Slide 30: The Inner Ear
Module 4: The Ear
Subtopic 5: The Inner Ear

The vibrating oval window connects to the cochlea of the inner ear. The cochlea is a fluid-filled tube,
about 35 mm long, coiled like a snail shell. The cochlea contains the neural tissue that is necessary to
transfer the changes in fluid to neural impulses of audition.

Slide 31: The Cochlea
Module 4: The Ear
Subtopic 6: The Cochlea

The oval window is actually a small opening in the side of the cochlea, and when the oval window is
made to vibrate, it causes the fluid inside the cochlea to become displaced. The round window, located at
the other end of the cochlea, accommodates for the movement of the fluid by bulging in and out
accordingly.

Slide 32: Basilar Membrane
Module 4: The Ear
Subtopic 7: Basilar Membrane

Inside the cochlea is a flexible membrane, called the basilar membrane, that runs the length of the
cochlea like a carpet. So, when the basilar membrane is pushed downwards, the fluid inside the cochlea
causes the round window to bulge out, and when the basilar membrane is forced upwards, the round
window bulges inwards.

Slide 33: Basilar Membrane
Module 4: The Ear
Subtopic 7: Basilar Membrane

Although the cochlea itself gets narrower towards the end, the basilar membrane actually gets wider
towards the end. Because the length of the basilar membrane varies in both flexibility and width, sounds
of different frequencies cause different regions of the membrane to vibrate. Higher frequency sounds
cause the end nearest the oval window to vibrate whereas lower frequency sounds cause the end nearest
the round window to vibrate.

Slide 34: Hair Cells
Module 4: The Ear
Subtopic 8: Hair Cells

The basilar membrane houses the auditory receptors, which are called hair cells. As the membrane
moves in response to the waves in the fluid, the hair cells also move, and this movement is finally
converted to neural impulses that the brain can understand.

Slide 35: Checkpoint 3

Slide 36: Module 5: Auditory Pathway - From Receptors to Auditory Cortex
Module 5: Auditory Pathway: From Receptors to Auditory Cortex

Slide 37: Introduction to the Auditory Pathway
Module 5: Auditory Pathway: From Receptors to Auditory Cortex
Subtopic 1: Introduction to the Auditory Pathway

When activated, the hair cells along the basilar membrane release a neurotransmitter. The hair cells form
synapses with bipolar cells, whose axons make up the cochlear nerve, a branch of the main auditory
nerve. Although the outer hair cells outnumber the inner hair cells by about 4 to 1, it is the inner hair cells
that mainly contribute to the signal in the cochlear nerve.

Slide 38: Cochlear Nerve and Hair Cells
Module 5: Auditory Pathway: From Receptors to Auditory Cortex
Subtopic 2: Cochlear Nerve and Hair Cells

There are some important differences between the inner and outer hair cells. Each inner hair cell
communicates with roughly 20 afferent fibers, which means that the signal from each inner hair cell has
exclusive rights to 20 direct links to the brain! The outer hair cells, on the other hand, have to share one
direct link to the brain with about 30 other outer hair cells.

The axons that synapse with the outer hair cells are thin and unmyelinated, whereas the axons that carry
information from the inner hair cells are thick and myelinated. The arrangement of these connections
suggests that even though there are far fewer inner hair cells than outer hair cells, the inner hair cells are
primarily responsible for transmitting the auditory signal to the brain.

Slide 39: Cochlear Nucleus
Module 5: Auditory Pathway: From Receptors to Auditory Cortex
Subtopic 3: Cochlear Nucleus
The neurotransmitter released by the hair cells is capable of triggering EPSPs in the cochlear nerve
fibers, which then send this signal to the cochlear nucleus in the hindbrain. The cochlear nucleus has
separate dorsal and ventral streams.

Slide 40: The Dorsal and Ventral Stream
Module 5: Auditory Pathway: From Receptors to Auditory Cortex
Subtopic 7: The Dorsal and Ventral Stream

This dorsal and ventral stream is reminiscent of the way that information is processed by our visual
system. In the visual system, we learned that the ventral stream processes object recognition and the
dorsal stream processes the location of an object. Similar processes occur with the auditory system.

Slide 41: Topographical Organization
Module 5: Auditory Pathway: From Receptors to Auditory Cortex
Subtopic 8: Topographical Organization

Another similarity in how the brain processes visual and auditory information has to do with how the raw
information is organized along the neural pathways. Recall that the spatial organization of our visual
world is maintained at all levels along our visual pathway. So, for example, neighbouring locations in
space fall on neighbouring regions on our retinas, and this spatial organization is still true at the level of
our LGN, primary visual cortex, and extrastriate cortex. We learned that this type of organized neural
representation of our visual world is called topographical.

Slide 42: Tonotopic Organization
Module 5: Auditory Pathway: From Receptors to Auditory Cortex
Subtopic 9: Tonotopic Organization

Well, the same principle applies to our auditory sense, only here it is called tonotopic organization. Recall
that frequency is coded along different regions of the basilar membrane because sounds of different
frequencies displace the hair cells in these different regions. The hair cells connect to the cochlear nerve
such that neighbouring regions of hair cells remain together, and this organization is maintained all the
way through the auditory pathway to the primary auditory cortex.

Slide 43: Frequency and the Basilar Membrane
Module 5: Auditory Pathway: From Receptors to Auditory Cortex
Subtopic 10: Frequency and the Basilar Membrane

So, if we looked at how the primary auditory cortex responded to sounds of different frequencies, we'd see that the region of the basilar membrane that is closest to the oval window and responds best to high-frequency sounds is represented at one end of area A1, whereas the other end of the basilar membrane, closest to the round window and responding best to low-frequency sounds, is represented at the other end of area A1. With this type of organization, information about similar frequencies is processed together.

Slide 44: Checkpoint 4

Slide 45: Module 6: Auditory Localization
Module 6: Auditory Localization

Slide 46: Introduction to Auditory Localization
Module 6: Auditory Localization
Subtopic 1: Introduction to Auditory Localization

In addition to being able to identify what the source of a sound is, we're also able to localize where a
sound is coming from in space through auditory localization. Like the depth perception component of
visual localization, our skills in auditory localization rely on the fact that our sense organs are separated in
space.

Slide 47: There is no spatial map for audition
Module 6: Auditory Localization
Subtopic 1: Introduction to Auditory Localization

The process is a little different from visual localization. In vision, we learned that the location of an object
in the environment directly corresponds to the image of the object on the retina. In audition, there is no
such direct representation of the spatial arrangement of objects. Auditory localization is calculated from
the neural representations of incoming sound. With vision, we saw that retinal disparity occurs because
each eye sees a slightly different image, which gives us cues for the perception of depth. Similarly, the
fact that our ears are located on opposite sides of our head results in interaural differences in sound that
give us cues for auditory localization. The first interaural cue is the difference in time it takes for the sound
to reach each ear.

Slide 48: Time difference to arrive at each ear
Module 6: Auditory Localization
Subtopic 2: Two Cues to Auditory Localization

The slight difference in time it takes for a sound to arrive at each ear is a fraction of a millisecond. This may seem like a trivial difference, but it varies systematically with the direction of the incoming sound; specific neurons in the superior olivary complex respond to these slight differences in the timing of arrival of the action potentials from each ear in response to the same sound.
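
A simple geometric model makes it easy to see why these timing differences stay below a millisecond. The Python sketch below assumes a straight extra path between the ears (the 0.21 m ear separation and 343 m/s speed of sound are assumed typical values, not figures from the lecture):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air (assumed)
EAR_SEPARATION = 0.21   # m, a typical adult head width (assumed)

def interaural_time_difference_ms(azimuth_deg):
    """Extra travel time to the far ear for a distant sound source,
    using the simple model ITD = d * sin(azimuth) / v.
    0 degrees = straight ahead; 90 degrees = directly to one side."""
    extra_path = EAR_SEPARATION * math.sin(math.radians(azimuth_deg))
    return 1000.0 * extra_path / SPEED_OF_SOUND

print(interaural_time_difference_ms(0))   # 0.0 ms: front/back is ambiguous
print(interaural_time_difference_ms(90))  # ~0.61 ms: the maximum possible ITD
```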

Slide 49: Intensity difference at each ear
Module 6: Auditory Localization
Subtopic 2: Two Cues to Auditory Localization

For very close sounds, there is a detectable loss of intensity because the sound wave has to travel farther
to reach one ear than the other.

Slide 50: Intensity difference at each ear
Module 6: Auditory Localization
Subtopic 2: Two Cues to Auditory Localization

However, for sounds that are farther away, this difference is less detectable; instead, the ears rely on the difference in intensity caused by the head, which casts a "sound shadow" that diminishes the intensity at the distal ear, much as light is diminished within the shadow you cast in sunlight.

Slide 51: Intensity difference at each ear
Module 6: Auditory Localization
Subtopic 2: Two Cues to Auditory Localization

Since input from each ear travels to both sides of the brain, these differences in intensity can be directly
compared to calculate the location of the sound. Some neurons in the superior olivary complex respond
specifically to these intensity differences from each ear while others respond specifically to the interaural
difference in arrival times for the sounds.

Slide 52: Sounds in front/behind have little difference
Module 6: Auditory Localization
Subtopic 2: Two Cues to Auditory Localization

When a sound is directly in front or directly behind you, it strikes both ears at the same time and you will
have difficulty locating the source of the sound. In this case, rotating your head will cause slight changes
in the sound intensity reaching each ear, and you will once again be able to localize the sound.

Slide 53: Pinna Cues
Module 6: Auditory Localization
Subtopic 2: Two Cues to Auditory Localization

Another type of acoustical cue to sound direction is produced by the characteristic folds and ridges of our pinnae. The pinna diffracts incoming sound waves, making significant changes to the frequency content of the sound that reaches the inner ear: some frequencies become amplified, while others are attenuated. These changes are collectively called pinna cues and are required for accurately localizing the elevation of a sound source. Because everyone has a unique ear shape, pinna cues are particular to the individual, and are sometimes called "earprints." When pinna cues are altered (by placing plastic molds into the pinna cavities), there is a dramatic disorienting effect on localization ability, despite the fact that the interaural difference cues are still available. Interestingly, over the course of weeks, subjects can adapt to the new pinna cues, and localization becomes normal again. What's even more fascinating is what happens when the plastic molds are removed. To find out more, I suggest you read the paper by Hofman et al. (1998, Nature Neuroscience).


Slide 54: Checkpoint 5

Slide 55: Module 7: Echolocation in Bats
Module 7: Echolocation in Bats

Slide 56: Introduction to Echolocation in Bats
Module 7: Echolocation in Bats
Subtopic 1: Introduction to Echolocation in Bats

Have you ever heard the expression "blind as a bat"? Well, this is just a myth, because many bats are in fact very visually adept. However, most bats are also able to use an entirely different system, based on hearing, that allows them to identify prey, navigate their way through thick forests, and catch up to two mosquitoes per second in total darkness!

Slide 57: Introduction to Echolocation in Bats
Module 7: Echolocation in Bats
Subtopic 1: Introduction to Echolocation in Bats

In fact, one study found that a bat with a 40 cm wingspan was able to fly through a 14 x 14 cm opening in
a grid in total darkness!

Slide 58: Echolocation
Module 7: Echolocation in Bats
Subtopic 2: Echolocation

So how can bats hunt and navigate so successfully without using their sense of vision? They use a
system of echolocation, through which a bat is able to form a perceptual image of the objects in the
surrounding environment by emitting a sound and then analyzing the time and frequency information that
is contained in the returning echoes.

Slide 59: Echolocation in Bats
Module 7: Echolocation in Bats
Subtopic 3: Echolocation in Bats

The bat first emits a burst of sound waves of a very high frequency, which bounce off the object and return to the bat's ears. The bat's brain analyzes the slight differences in the frequency content and timing of the returning sound waves to determine the characteristics of objects in its environment.

Slide 60: Measure timing and frequency of return
Module 7: Echolocation in Bats
Subtopic 3: Echolocation in Bats

An object that is close to the bat will return echoes sooner than an object that is farther away; objects that are moving will produce echoes that are Doppler-shifted compared to stationary objects; and objects that are textured will produce echoes that vary slightly in their return times relative to echoes from objects that are smooth, much as a textured object alters the reflection of light in the visual world.
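
The distance and motion calculations implied here are straightforward. Below is a minimal Python sketch (the speed of sound and the example numbers are assumptions for illustration; real bat sonar processing is far more sophisticated):

```python
SPEED_OF_SOUND = 343.0  # m/s in air (assumed)

def target_distance_m(echo_delay_s):
    """The call travels out and back, so the one-way distance is
    half of the round-trip delay times the speed of sound."""
    return SPEED_OF_SOUND * echo_delay_s / 2.0

def echo_frequency_hz(emitted_hz, closing_speed_ms):
    """Frequency of the echo from a target approaching at closing_speed_ms
    (positive = closing), using the standard two-way Doppler formula
    for a moving reflector."""
    return emitted_hz * (SPEED_OF_SOUND + closing_speed_ms) / (SPEED_OF_SOUND - closing_speed_ms)

# An echo returning after 10 ms comes from about 1.7 m away; a moth
# closing at 5 m/s shifts a 50 kHz call upward by roughly 1.5 kHz.
print(target_distance_m(0.010))                 # ~1.7 m
print(echo_frequency_hz(50_000, 5.0) - 50_000)  # ~1480 Hz upward shift
```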

Slide 61: Evolution at Work
Module 7: Echolocation in Bats
Subtopic 4: Evolution at Work

So, bats have evolved a very efficient system for navigation and prey detection in an environment they
may otherwise be unable to exploit. In response to the selective pressure exerted by the bat's abilities at
echolocation, some prey have evolved a sense of hearing designed especially for the detection of bat
calls.

Slide 62: Co-Evolution
Module 7: Echolocation in Bats
Subtopic 5: Co-Evolution in Bats and Moths

The interaction of the selection pressures of predator and prey is called co-evolution. Adaptation of traits
of one species can directly affect the adaptation of traits in another species.

Slide 63: Co-evolution in Bats and Moths
Module 7: Echolocation in Bats
Subtopic 5: Co-Evolution in Bats and Moths

Across many generations of predation and selection by echolocating bats, moths have evolved the ability to hear sounds that match the frequency range used by most bats when they're hunting insects using echolocation. Being able to detect the bat has certainly helped the moth, because its chances of survival increase significantly when it can hear the bat coming in advance and can respond by engaging in a defensive flight pattern.

Slide 64: Dr. Paul Faure
Module 7: Echolocation in Bats
Subtopic 6: Dr. Paul Faure

So does this mean that the moth has the upper hand in this co-evolutionary arms race between moths
and bats? Well, this is an area that Dr. Paul Faure from the Department of Psychology, Neuroscience &
Behaviour here at McMaster has spent time investigating.

Slide 65: Co-evolution & Conclusion
Module 7: Echolocation in Bats
Subtopic 6: Dr. Paul Faure

Dr. Faure's work demonstrates a beautiful example of co-evolution, where the selection pressures of a predator can drive the evolution of a defensive trait in the prey, which can then result in further selection pressures on the predator, and so on. In this case, predator and prey can interact indefinitely, locked in an arms race across many, many generations.

Slide 66: If a Tree Falls in the Forest
Module 7: Echolocation in Bats
Subtopic 7: If a Tree Falls in the Forest

The next time someone asks, "if a tree falls in the forest and no one is around to hear it, does it still make a sound?", you can thoughtfully reply that the falling tree certainly does produce alternating bands of compressed and less dense air molecules that travel away in waves; whether you are close enough for these waves to be directed by your pinna, to vibrate your eardrum, and ultimately to be converted into the neural impulses of audition is another question entirely.

Slide 67: Conclusion
Module 7: Echolocation in Bats
Subtopic 8: Conclusion

Because our experience of the world is so heavily dominated by vision, we often fail to consider the important role of audition. Audition helps us to localize objects in the environment and to focus our attention on important events. As we have learned, there are many similarities as well as some important differences in how visual and auditory input is processed in the brain. In our next lecture, we'll examine a special application of audition: the psychology of music.

