
SOUND TRANSMISSION AND ABSORPTION

SONOMETER
A sonometer is a device for demonstrating the relationship between the
frequency of the sound produced by a plucked string and the tension,
length and mass per unit length of the string. It was invented by
Pythagoras. These relationships are usually called Mersenne's laws after
Marin Mersenne (1588-1648), who investigated and codified them. For
small-amplitude vibration, the frequency is proportional to:
a. the square root of the tension of the string,
b. the reciprocal of the square root of the linear density of the string,
c. the reciprocal of the length of the string.
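These three proportionalities combine into the familiar expression f = (1/2L)·√(T/μ) for the fundamental of a stretched string. The following is a minimal sketch of that relation; the function name and numerical values are purely illustrative.

```python
import math

def string_frequency(tension_n, length_m, linear_density_kg_per_m):
    """Fundamental frequency of a stretched string (Mersenne's laws):
    f = (1 / 2L) * sqrt(T / mu)."""
    return (1.0 / (2.0 * length_m)) * math.sqrt(tension_n / linear_density_kg_per_m)

# Illustrative values: a 0.5 m wire of linear density 0.001 kg/m under 40 N of tension
f = string_frequency(tension_n=40.0, length_m=0.5, linear_density_kg_per_m=0.001)
print(f"Fundamental frequency: {f:.0f} Hz")  # ~200 Hz

# Doubling the tension raises f by sqrt(2); doubling the length halves f
print(f"{string_frequency(80.0, 0.5, 0.001):.0f} Hz")  # ~283 Hz
print(f"{string_frequency(40.0, 1.0, 0.001):.0f} Hz")  # ~100 Hz
```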

It is a simple instrument used to verify the laws of stretched strings and to
determine the frequency of a tuning fork. It consists of a long hollow rectangular
wooden box (W), called the sound box, having three openings on one of its sides.
A metal hook or peg P1 is rigidly fixed at one end, with a frictionless pulley P2
attached to the other end. One end of a long metal wire of uniform cross-section,
tied firmly to the peg, passes over two wedge-shaped bridges A and B and then
over the pulley. The wire can be stretched by adding a suitable load to the weight
hanger H, which is attached to the other end. C is a movable bridge whose
position can be adjusted between A and B so that any desired length of the wire
can be set into vibration.

WHAT IS A SOUND WAVE?

A sound wave is the pattern of disturbance caused by the movement of energy
travelling through a medium (such as air, water, or any other liquid or solid
matter) as it propagates away from the source of the sound.

The source is some object that causes a vibration, such as a ringing telephone
or a person's vocal cords.

The vibration disturbs the particles in the surrounding medium; those particles
disturb those next to them, and so on.

The pattern of the disturbance creates outward movement in a wave pattern,
like waves of seawater on the ocean. The wave carries the sound energy through
the medium, usually in all directions, and less intensely as it moves farther
from the source.

SOUND WAVES: LONGITUDINAL AND TRANSVERSE

Sound is transmitted through gases, plasma, and liquids as longitudinal waves,
also called compression waves. Through solids, however, it can be transmitted
as both longitudinal waves and transverse waves.

Longitudinal sound waves are waves of alternating pressure deviations from the
equilibrium pressure, causing local regions of
compression and rarefaction, while transverse waves (in solids) are waves of
alternating shear stress at right angles to the direction of propagation.

PROPAGATION OF SOUND
Sound is a sequence of pressure waves which propagates through compressible
media such as air or water. (Sound can propagate through solids as well, but
there are additional modes of propagation.) During their propagation, waves can
be reflected, refracted, or attenuated by the medium. The following examines
what effect the characteristics of the medium have on sound.
All media have three properties which affect the behavior of sound
propagation:
1. A relationship between density and pressure. This relationship,
affected by temperature, determines the speed of sound within the
medium (see the sketch after this list).
2. The motion of the medium itself, e.g., wind. Independent of the
motion of sound through the medium, if the medium is moving, the
sound is further transported.
3. The viscosity of the medium. This determines the rate at which
sound is attenuated. For many media, such as air or water,
attenuation due to viscosity is negligible.
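As a minimal sketch of point 1, the speed of sound in dry air can be approximated from temperature alone using the ideal-gas relation c ≈ 331.3·√(1 + T/273.15) m/s; the constant 331.3 m/s (speed at 0 °C) is the conventional value, and the function name is illustrative.

```python
import math

def speed_of_sound_air(temp_celsius):
    """Approximate speed of sound in dry air:
    c = 331.3 * sqrt(1 + T / 273.15) m/s (ideal-gas approximation)."""
    return 331.3 * math.sqrt(1.0 + temp_celsius / 273.15)

print(f"c at  0 degC: {speed_of_sound_air(0):.1f} m/s")   # ~331.3 m/s
print(f"c at 20 degC: {speed_of_sound_air(20):.1f} m/s")  # ~343.2 m/s
```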


WHAT IS PITCH?
Pitch is a perceptual property that allows the ordering of sounds on a
frequency-related scale. Pitches are compared as "higher" and "lower" in
the sense associated with musical melodies, which require sound whose
frequency is clear and stable enough to distinguish from noise. Pitch is a
major auditory attribute of musical tones, along with duration, loudness,
and timbre.
WHAT IS AN OCTAVE?
In music, an octave (Latin octavus: eighth) or perfect octave is the interval
between one musical pitch and another with half or double its frequency. An
octave is defined as a 2:1 ratio of two frequencies.
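A quick sketch of the 2:1 definition: the number of octaves between two frequencies is the base-2 logarithm of their ratio. The frequencies used below are illustrative.

```python
import math

def octaves_between(f_low_hz, f_high_hz):
    """Number of octaves between two frequencies: log2 of their ratio."""
    return math.log2(f_high_hz / f_low_hz)

print(octaves_between(440.0, 880.0))   # 1.0 -> one octave (2:1 ratio)
print(octaves_between(440.0, 1760.0))  # 2.0 -> two octaves (4:1 ratio)
```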
FREQUENCY
This is the number of vibrations or pressure fluctuations per second. It is
measured in hertz (Hz). The frequency can be expressed as
f = 1 / T        (1)
where
f = frequency (s⁻¹, Hz)
T = time for completing one cycle (s)
Example - Frequency
The time for completing one cycle for a 500 Hz tone can be calculated
using (1) as
T = 1 / (500 Hz)
= 0.002 s
The range for human hearing is 20 to 20,000 Hz. With age, 12,000-13,000 Hz
becomes the upper limit for many people.


WAVELENGTH:
This is the distance travelled by the sound during the period of one
complete vibration.
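Since the wavelength is the distance covered in one period, it follows that lambda = c / f. A minimal sketch, assuming a speed of sound of about 343 m/s in air at 20 °C:

```python
def wavelength_m(speed_m_per_s, frequency_hz):
    """Wavelength = distance travelled during one complete vibration: lambda = c / f."""
    return speed_m_per_s / frequency_hz

print(f"{wavelength_m(343.0, 500.0):.3f} m")  # ~0.686 m for a 500 Hz tone
print(f"{wavelength_m(343.0, 20.0):.1f} m")   # ~17 m at the low end of hearing
```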

Velocity of sound in air, plane waves, energy density, sound intensity,
decibel, sound pressure level: refer to the Xerox handout.
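Although the detailed treatment is left to the handout, the standard decibel definitions can be sketched as follows; the reference values (20 µPa for pressure, 1 pW/m² for intensity) are the conventional ones, not taken from the handout, and the function names are illustrative.

```python
import math

P_REF = 20e-6   # reference sound pressure, 20 micropascals
I_REF = 1e-12   # reference sound intensity, 1 pW/m^2

def sound_pressure_level_db(p_pa):
    """Sound pressure level: Lp = 20 * log10(p / p_ref)."""
    return 20.0 * math.log10(p_pa / P_REF)

def sound_intensity_level_db(i_w_per_m2):
    """Sound intensity level: LI = 10 * log10(I / I_ref)."""
    return 10.0 * math.log10(i_w_per_m2 / I_REF)

print(f"{sound_pressure_level_db(1.0):.0f} dB SPL")   # ~94 dB for a pressure of 1 Pa
print(f"{sound_intensity_level_db(1e-6):.0f} dB")     # 60 dB for 1 uW/m^2
```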
WHAT ARE HARMONICS?
A harmonic is an overtone accompanying a fundamental tone at a fixed interval,
produced by vibration of a string, column of air, etc., in an exact fraction of
its length.
SPEECH AND MUSIC FREQUENCIES
Just as there are similarities and differences between speech and music
spectra, there are also similarities and differences between the perceptual
requirements for speech and for music. Compared with music, speech tends to
have a well-controlled spectrum with well-established and predictable
perceptual characteristics.
In contrast, musical spectra are highly variable and the perceptual
requirements can vary based on the musician and the instrument being
played. This is an overview of five salient differences between speech and
music that have direct ramifications for hearing aid fittings.


SPEECH VS. MUSIC SPECTRA:


Speech:

Speech, regardless of language, has to be generated by a rather uniform set of
tubes and cavities.
The human vocal tract is approximately 17 cm from the larynx (vocal cords) to
the lips. The vocal tract can be either a single tube, as is the case for oral
consonants and vowels, or a pair of parallel tubes when the nasal cavity is
open, as in [m] and [n].
For example, the frequencies of the resonances of the vocal tract (called
formants) are governed primarily by constrictions in the mouth and the length
of the vocal tract tube. Vocal tract lengths cannot change significantly.
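As a rough illustration of how formant frequencies follow from vocal tract length, the tract is often idealised as a uniform tube closed at the glottis and open at the lips, whose resonances are f_n = (2n - 1)·c / 4L. This simplification is a standard acoustics approximation rather than something stated in the text above, and the function name is illustrative.

```python
def tube_resonances_hz(length_m, count, speed_m_per_s=343.0):
    """Resonances of a uniform tube closed at one end and open at the other:
    f_n = (2n - 1) * c / (4L)."""
    return [(2 * n - 1) * speed_m_per_s / (4.0 * length_m) for n in range(1, count + 1)]

# A ~17 cm vocal tract gives formant-like resonances near 500, 1500 and 2500 Hz
print([round(f) for f in tube_resonances_hz(0.17, 3)])  # [504, 1513, 2522]
```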

Music:

In contrast to the relatively well-defined human vocal tract output, there is
no consistent, well-defined, long-term music spectrum.
The outputs of various musical instruments are highly variable, ranging from a
low-frequency preponderance to a high-frequency emphasis.

PHYSICAL OUTPUT VS. PERCEPTUAL REQUIREMENTS OF THE LISTENER:
Speech :

In speech, there are slight differences between various languages in the
proportion of audible cues that are important for speech perception.
The important frequencies do vary slightly from language to language, but
generally, for speech, most of the sounds important for speech clarity derive
from bands above 1000 Hz, whereas most of the perceived loudness of speech
comes from bands below 1000 Hz.
Clarity in speech (which has more to do with auditory perception) is derived
from the higher frequencies.
Speech energy is more dominant in the lower frequencies. That is, the auditory
perception of speech has a significantly different weighting than does the
physical output from a speaker's mouth. Despite the differences between the
physical output of speech and the frequency requirements for optimal speech
understanding, the differences are constant and
predictable: low-frequency loudness cues and high-frequency clarity cues.
Music :

Regardless of the physical output of the musical instrument, the perceptual
needs of the musician or listener may vary depending on the instrument. A
stringed-instrument musician needs to be able to hear the exact relationship
between the lower-frequency fundamental energy and the higher-frequency
harmonic structure.
Not only does a violinist generate a wide range of frequencies,
but the violinist needs to be able to hear those frequencies.
In contrast, a woodwind player such as a clarinetist needs to be
able to hear the lower frequency inter-resonant breathiness.
When a clarinet player says "that is a good sound," they are
saying that the lower frequency noise in between the resonances of
their instrument has a certain level. High frequency information
is not very important to a clarinet player (other than for loudness
perception).
One can, therefore, say that a clarinet player has a low
frequency phonemic requirement, despite the fact that the
clarinet player can generate as many higher frequency sounds
as can the violinist.
LOUDNESS SUMMATION, LOUDNESS, AND INTENSITY:
Speech :

The "source" of sound in the human vocal tract is the vibration of the vocal
cords.
This simply means that not only is there the fundamental energy (typically
120-130 Hz for men and 180-220 Hz for women) but there are evenly spaced
harmonics at integer multiples of the fundamental.
For a man's voice with a fundamental frequency of 125 Hz, there are harmonics
at 250 Hz, 375 Hz, 500 Hz, and so on.
Therefore the minimal spacing between harmonics in speech is on the order of
at least 100 Hz. In other words, no two harmonics would fall within the same
critical band, with the result that there is minimal loudness summation:
soft-sounding speech is less intense and loud-sounding speech is more intense.
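The harmonic spacing described above can be sketched directly; the 125 Hz fundamental is the example given in the text, and the spacing between adjacent harmonics is simply the fundamental itself.

```python
def harmonics_hz(fundamental_hz, count):
    """Harmonics at integer multiples of the fundamental, as in voiced speech."""
    return [fundamental_hz * n for n in range(1, count + 1)]

male_voice = harmonics_hz(125.0, 6)
print(male_voice)                              # [125.0, 250.0, 375.0, 500.0, 625.0, 750.0]
spacing = male_voice[1] - male_voice[0]
print(f"Harmonic spacing: {spacing:.0f} Hz")   # 125 Hz, i.e. above ~100 Hz
```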

Music :
Some musical instruments are speech-like in the sense that they generate
mid-frequency fundamental energy with
evenly spaced harmonics. Oboes, saxophones and violins are in this category.
Other instruments, such as the bass and cello, have much lower fundamentals
whose closely spaced harmonics can fall within the same critical band,
producing loudness summation. That is, for the bass and cello, there is a poor
correlation between measured intensity and perceived loudness.
THE "CREST FACTOR" OF SPEECH AND MUSIC:
The crest factor is a measure of the difference in decibels
between the peaks in a spectrum and the average or RMS
(root mean square) value. A typical crest factor with speech is
about 12 dB. That is, the peaks of speech are about 12 dB
more intense than the average values.
Typical crest factors for musical instruments are on the order
of 18-20 dB.
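A minimal sketch of the crest factor as 20·log10(peak / RMS) of a sampled signal; a pure sine wave comes out at about 3 dB, while the text quotes roughly 12 dB for speech and 18-20 dB for musical instruments. The function name is illustrative.

```python
import math

def crest_factor_db(samples):
    """Crest factor in dB: 20 * log10(peak / RMS) of the signal samples."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(peak / rms)

# One full period of a sine wave: peak / RMS = sqrt(2), so ~3 dB
sine = [math.sin(2.0 * math.pi * n / 100.0) for n in range(100)]
print(f"{crest_factor_db(sine):.1f} dB")  # ~3.0 dB
```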
DIFFERENT INTENSITIES FOR SPEECH AND MUSIC:
Typical outputs for normal-intensity speech can range from 53 dB SPL for the
[th] as in 'think' to about 77 dB SPL for the [a] in 'father'. Shouted speech
can reach 83 dB SPL. Music can be on the order of 100 dB SPL, with peaks and
valleys in the spectrum of +/- 18 dB.
HUMAN EAR CHARACTERISTICS:

The ear is a transducer converting sound pressure waves into signals which are
sent to the brain.
The three principal parts of the human auditory system are the outer ear, the
middle ear, and the inner ear.
The outer ear is composed of the pinna and the auditory canal, or auditory
meatus. The auditory canal is terminated by the tympanic membrane, or eardrum.

The middle ear is an air-filled cavity spanned by three tiny bones,
collectively called the ossicles. The three tiny bones are the malleus, the
incus, and the stapes.
The malleus is attached to the eardrum and the stapes is attached to the oval
window of the inner ear. Together these three bones form a mechanical,
lever-action connection between the air-actuated eardrum and the fluid-filled
cochlea of the inner ear. The inner ear terminates in the auditory nerve,
which sends impulses to the brain.



Source: http://www.music.miami.edu/programs/mue/research/mescobar/thesis/web/Psychoacoustics.htm
