
Define Sound. Explain the most basic components of a sound wave with
appropriate diagrams.
Sound is defined as a set of pressure fluctuations that propagates as waves through a solid,
liquid, gas or other elastic medium. We commonly define sound as the set of sound waves that
are picked up by our ears and interpreted by the brain, but that definition is incomplete,
because in nature there is a wide range of sounds that we cannot perceive.
Sound is produced as a result of the vibration of a material body, such as the vibration caused
by plucking the string of a guitar, hitting the membrane of a drum, or the vibrations produced
in the vocal folds as we speak. In all these cases the initial vibration is transmitted along the
atoms, molecules and particles present in the medium, generating sound waves. For sound to
be transmitted, the medium through which it spreads must be elastic; otherwise the molecules
and atoms could not vibrate or transmit the disturbance, and the sound would not be
transmitted.

The most basic components of a sound wave are as follows:

Amplitude: The maximum extent of a vibration or displacement of a sinusoidal oscillation,
measured from the position of equilibrium. In the diagram below, the letter 'a' denotes the
amplitude of a sine wave.

Frequency: Frequency is measured as the number of wave cycles that occur in one second. The
unit of frequency measurement is Hertz (Hz for short). A frequency of 1 Hz means one wave
cycle per second.
Wavelength: Wavelength is the distance between two identical adjacent points in a wave. It is
typically measured between two easily identifiable points, such as two adjacent crests or troughs
in a waveform, most accurately measured in sinusoidal waves, which have a smooth and
repetitive oscillation.
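These three components can be related numerically. The following minimal sketch (the 343 m/s speed of sound and the 440 Hz tone are illustrative assumptions, not values from the text) computes wavelength from frequency and samples one cycle of a sine wave:

```python
import math

# Illustrative values (assumptions): speed of sound in air and a 440 Hz tone.
speed_of_sound = 343.0   # m/s, approximate speed of sound in air at 20 C
frequency = 440.0        # Hz: cycles per second
amplitude = 1.0          # peak displacement from equilibrium (arbitrary units)

# Wavelength is the distance the wave travels during one complete cycle.
wavelength = speed_of_sound / frequency

# Sample one full cycle of the sine wave at eight evenly spaced instants.
period = 1.0 / frequency
samples = [amplitude * math.sin(2 * math.pi * frequency * (n / 8) * period)
           for n in range(8)]

print(f"wavelength: {wavelength:.3f} m")  # ~0.780 m for a 440 Hz tone
print(f"peak value: {max(samples):.3f}")  # the peak equals the amplitude
```

Raising the frequency shortens the wavelength; scaling the amplitude changes only the wave's height, not its rate of repetition.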

What is a microphone? Explain different types of microphones and their working principles
with diagrams.
A microphone, colloquially nicknamed mic or mike, is a transducer that converts sound into an
electrical signal.
Microphones are used in many applications such as telephones, hearing aids, public address
systems for concert halls and public events, motion picture production, live and recorded audio
engineering, sound recording, two-way radios, megaphones, radio and television broadcasting,
and in computers for recording voice, speech recognition, VoIP, and for non-acoustic purposes
such as ultrasonic sensors or knock sensors.
Several different types of microphone are in use, which employ different methods to convert the
air pressure variations of a sound wave to an electrical signal. The most common are:

Dynamic Microphone: Dynamic microphones have a "moving coil" inside them. The coil is
attached to a diaphragm and suspended in the center of a magnet. As the diaphragm moves
back and forth, the coil moves in and out of the magnetic field, inducing an alternating current
in the wire connected to the coil. This design is similar to a speaker, which has a voice coil
attached to a diaphragm or speaker cone: as current is applied to the voice coil, a magnetic
field is created which moves the speaker cone back and forth to produce sound.
Dynamic microphones are often preferred for use on stage, because they are quite sturdy and do
not require external power.
Ribbon Microphone: Ribbon microphones capture sound energy in a similar way to a dynamic
microphone. However, instead of a moving coil connected to a diaphragm, the active element in
a ribbon microphone is a thin strip of aluminum or another conductive material. This metal
ribbon is suspended between two magnets, and the magnetic field is disturbed as the ribbon
moves back and forth inside the microphone capsule. This movement within the magnetic
field induces the alternating current that is then transmitted down the microphone cable.
Ribbon microphones must be handled and used with much more care and attention than
dynamics, because of the thin metal ribbon inside the capsule: they do not handle large
amounts of energy as well as a dynamic.

Condenser Microphone: Condenser microphones share some features with dynamics and
ribbons, with new twists added. A condenser microphone is based on a moving plate, a
charged fixed plate, and the capacitance generated between them. The diaphragm of a
condenser microphone is much thinner than that of a dynamic microphone and is coated with
a metal, usually gold, so that it acts as an electrically conductive plate.
In a condenser microphone, a fixed back plate sits behind the diaphragm, separated by a small
gap. Both plates are charged, and as the diaphragm moves back and forth relative to the back
plate, the capacitance between them changes, producing an alternating current. This current
reflects the changes in capacitance, and thus the sound, which is characteristic of condenser
microphones.
Explain digital recording chain with the help of appropriate diagrams.

To understand how a computer represents sound, consider how a film represents motion. A
movie is made by taking still photos in rapid sequence at a constant rate, usually twenty-four
frames per second. When the photos are displayed in sequence at that same rate, it fools us into
thinking we are seeing continuous motion, even though we are actually seeing twenty-
four discrete images per second. Digital recording of sound works on the same principle. We
take many discrete samples of the sound wave's instantaneous amplitude, store that information,
then later reproduce those amplitudes at the same rate to create the illusion of a continuous wave.
The job of a microphone is to transduce (convert one form of energy into another) the change in
air pressure into an analogous change in electrical voltage. This continuously changing voltage
can then be sampled periodically by a process known as sample and hold. At regularly spaced
moments in time, the voltage at that instant is sampled and held constant until the next sample is
taken. This reduces the total amount of information to a certain number of discrete voltages.

Time-varying voltage sampled periodically
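The sample-and-hold step described above can be sketched in a few lines of Python (the 100 Hz tone and the 1,000 samples-per-second rate are assumed values for illustration):

```python
import math

def sample_and_hold(signal, duration, rate):
    """Sample a continuous-time signal at regular instants.

    `signal` is a function of time in seconds; the result is one
    voltage per sampling instant, held until the next sample.
    """
    n_samples = int(duration * rate)
    return [signal(n / rate) for n in range(n_samples)]

# A 100 Hz tone sampled at 1,000 samples per second for 10 ms (assumed values).
tone = lambda t: math.sin(2 * math.pi * 100 * t)
held = sample_and_hold(tone, 0.01, 1000)
print(len(held))  # 10 discrete voltages now stand in for the continuous signal
```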


A device known as an analog-to-digital converter (ADC) receives the discrete voltages from the
sample and hold device, and ascribes a numerical value to each amplitude. This process of
converting voltages to numbers is known as quantization. Those numbers are expressed in the
computer as a string of binary digits (1 or 0). The resulting binary numbers are stored in memory
— usually on a digital audio tape, a hard disk, or a laser disc. To play the sound back, we read
the numbers from memory, and deliver those numbers to a digital-to-analog converter (DAC) at
the same rate at which they were recorded. The DAC converts each number to a voltage, and
communicates those voltages to an amplifier to increase the amplitude of the voltage.
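A minimal sketch of the quantization step follows (the 16-bit depth and the normalized reference voltage are assumptions; real converters vary):

```python
def quantize(voltage, bits=16, v_ref=1.0):
    """ADC step: map a voltage in [-v_ref, +v_ref] to a signed integer
    code. At 16 bits there are 65,536 possible levels."""
    levels = 2 ** (bits - 1) - 1
    clipped = max(-1.0, min(1.0, voltage / v_ref))
    return round(clipped * levels)

def dequantize(code, bits=16, v_ref=1.0):
    """DAC step: convert a stored integer code back to a voltage."""
    return code / (2 ** (bits - 1) - 1) * v_ref

code = quantize(0.5)
print(code)              # the integer actually stored in memory
print(dequantize(code))  # close to 0.5, within one quantization step
```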
In order for a computer to represent sound accurately, many samples must be taken per second—
many more than are necessary for filming a visual image. In fact, we need to take more than
twice as many samples as the highest frequency we wish to record. (For an explanation of why
this is so, see Limitations of Digital Audio on the next page.) If we want to record frequencies as
high as 20,000 Hz, we need to sample the sound at least 40,000 times per second. The standard
for compact disc recordings (and for 'CD-quality' computer audio) is to take 44,100 samples per
second for each channel of audio. The number of samples taken per second is known as
the sampling rate.
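The sampling-rate arithmetic above can be checked directly; the folding formula below is the standard aliasing rule, included here for illustration:

```python
def nyquist_limit(sampling_rate):
    """Highest frequency that can be represented without aliasing."""
    return sampling_rate / 2

def alias_frequency(f, sampling_rate):
    """Apparent frequency of a pure tone after sampling: frequencies
    above the Nyquist limit fold back below it."""
    f = f % sampling_rate
    return min(f, sampling_rate - f)

rate = 44100  # the compact disc standard mentioned in the text
print(nyquist_limit(rate))           # 22050.0 Hz
print(alias_frequency(30000, rate))  # a 30 kHz tone folds down to 14100 Hz
```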
This means the computer can only accurately represent frequencies up to half the sampling rate.
Any frequencies in the sound that exceed half the sampling rate must be filtered out before the
sampling process takes place. This is accomplished by sending the electrical signal through
a low-pass filter which removes any frequencies above a certain threshold. Also, when the digital
signal (the stream of binary digits representing the quantized samples) is sent to the DAC to be
re-converted into a continuous electrical signal, the sound coming out of the DAC will contain
spurious high frequencies that were created by the sample and hold process itself. (These are due
to the 'sharp edges' created by the discrete samples, as seen in the above example.) Therefore,
we need to send the output signal through a low-pass filter, as well.
The digital recording and playback process, then, is a chain of operations, as represented in the
following diagram.
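The chain of operations can also be sketched end-to-end in code. This is a simplified model: the anti-aliasing and reconstruction filters are omitted, and the 16-bit depth is an assumption.

```python
import math

def digital_recording_chain(signal, duration, rate, bits=16):
    """Sketch of the chain: sample-and-hold -> ADC (quantize) ->
    storage -> DAC (dequantize). The low-pass filters before the ADC
    and after the DAC are omitted here for brevity."""
    levels = 2 ** (bits - 1) - 1
    # Sample and hold: discrete voltages at regularly spaced instants.
    samples = [signal(n / rate) for n in range(int(duration * rate))]
    # ADC: quantize each voltage to an integer code (the stored data).
    stored = [round(max(-1.0, min(1.0, s)) * levels) for s in samples]
    # DAC: read the codes back at the same rate and convert to voltages.
    return [code / levels for code in stored]

tone = lambda t: math.sin(2 * math.pi * 440 * t)
out = digital_recording_chain(tone, 0.01, 44100)
print(len(out))  # 441 reconstructed voltages for 10 ms of audio
```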

Discuss microphones types in accordance to their polar patterns with diagrams.


"Polar pattern" refers to a microphone's directionality or pickup pattern – the three-dimensional
space surrounding the capsule where it is most sensitive to sound. There are six main polar
patterns: omnidirectional, cardioid, supercardioid, hypercardioid, ultradirectional and figure-8.
Most microphones are designed with a specific pattern and are therefore best suited for
specific applications.

Omnidirectional Microphones
Perfect for: interviews, moving subjects
Omnidirectional mics are the easiest to understand. Simply put, omnidirectional mics record
audio from every direction (360 degrees). Typically you will want to use an omnidirectional mic
when recording audio that you can't control very well (like ambience, a press conference, or a
moving talking head). Omnidirectional mics are the most flexible mics, but they are also the
noisiest, so using one is a balancing act on set. In a filmmaking context, you will almost
exclusively see omnidirectional pickup patterns in lav mics.
Bidirectional Microphones:
A bidirectional microphone is a mic designed to pick up audio equally from the front and back
of the mic. Typically, bidirectional microphones are used for radio interviews or podcasting.
You will probably not find much use for a bidirectional microphone on a film set, but they can
sometimes be used as a backup mic for talk shows when placed on the host's desk.
Unidirectional Microphones
Unidirectional mics are most sensitive to sound arriving from directly in front – the angle
referred to as 0 degrees – and less sensitive in other directions. This makes unidirectional
microphones effective at isolating the desired on-axis sound from both unwanted off-axis sound
and ambient noise.
Within the unidirectional category, there are three main polar patterns: cardioid, supercardioid,
& hypercardioid.
Cardioid
Cardioid is by far the most commonly used directional polar pattern. Cardioid mics have a
wide on-axis pick-up area and maximum rejection at 180 degrees off-axis – resulting in high
gain before feedback when stage monitors are placed directly behind them. In many cases,
ambient noise is reduced by approximately two-thirds compared to an omni counterpart,
which makes them a staple for live performance and touring.
Applications: Live sound, Studio recording (particularly in less than ideal acoustic environments)
Supercardioid & Hypercardioid
Both super and hypercardioid options offer narrower front pickup angles than the cardioid – 115
degrees for the supercardioid and 105 degrees for the hypercardioid – alongside greater rejection
of ambient sound. Additionally, while the cardioid is least sensitive at the rear (180 degrees off-
axis), the supercardioid is least sensitive at 125 degrees and the hypercardioid at 110 degrees.
When placed properly they can provide more 'focused' pick-up than the cardioid pattern, with
better rejection of ambient noise and feedback.
It is also worth noting that hypercardioids are significantly more sensitive to sound from behind
than supercardioids, so although the front and side pickup is far 'tighter', you'll need to be
particularly careful with mic placement. Singers with a tendency to move the mic while
performing should be aware that the smallest movement of a hypercardioid is far more likely
to affect the mic's performance compared to a cardioid or supercardioid.
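The directional patterns discussed above are often modeled by the first-order formula gain = p + (1 − p)·cos(θ); the coefficients below are standard textbook values, used here as illustrative assumptions:

```python
import math

# First-order polar patterns: gain(theta) = p + (1 - p) * cos(theta).
# The p coefficients are standard textbook values (an assumption here).
PATTERNS = {
    "omnidirectional": 1.0,
    "cardioid": 0.5,
    "supercardioid": 0.37,
    "hypercardioid": 0.25,
    "figure-8": 0.0,
}

def gain(pattern, theta_deg):
    """Relative sensitivity at an angle off-axis (0 degrees = front)."""
    p = PATTERNS[pattern]
    return p + (1 - p) * math.cos(math.radians(theta_deg))

print(round(gain("cardioid", 180), 3))       # 0.0: full rejection at the rear
print(round(gain("hypercardioid", 110), 3))  # near zero: the null is ~110 deg
print(round(gain("hypercardioid", 180), 3))  # -0.5: noticeable rear pickup
```

A negative gain means the rear lobe picks up sound with inverted polarity; its magnitude is what matters for rejection.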

What is acoustic balance in a sound studio? Why is it needed for a sound studio?
Today, the amazing power and low cost of digital technology has moved most recording
projects out of the large studio into smaller spaces and homes. Once all the gear is set up and the
first recordings are undertaken, the stark reality comes to light: what seems to sound good at
home sounds terrible in the car and on your friends' hi-fi system. The problem is simple: no
matter how good the microphone, preamp or recording system, unless the acoustics are balanced,
your recordings will not translate well when played back on other systems. Hard surfaces such as
gypsum walls, windows and hardwood floors cause primary reflections which make placing
instruments in the stereo field difficult. Secondary reflections introduce flutter echo and a reverb
trail which interfere with your mix. Low frequencies collide – either in or out of phase – causing
peaks and valleys (room modes and comb filtering) which create hot spots depending on where
you are sitting in the room.

Describe Mono, Stereo, and Surround sound.


Mono
Monaural sound is an audio system in which audio signals are mixed and then routed through a
single audio channel. Considered less expensive for audio reproduction and recording, as only
basic devices are required, it is mainly used in hearing aids, cellphone and telephone
communications, public address systems and radio communications. Mono is the preferred audio
format when the focus is on clarity of a single amplified sound or voice. Monaural sound is also
known as monophonic sound or simply mono.
Monaural sound is the basic format for sound output. In the case of monaural sound
reproduction, only a single microphone and loudspeaker are required. It is also the same for
monaural sound recording. In the case of multiple headphones or loudspeakers, the sound signals
are mixed together and then fed through a common signal path. Mono can still be found in most
FM radio, NICAM stereo-oriented TV and VCR formats, compact audio cassettes and
MiniDiscs; however, it is not used in audio CDs or 8-track tapes.
One of the biggest benefits of monaural sound is that the same sound reaches all listeners:
unlike stereo systems, monaural systems do not convey any sensation of location or depth.
This property is taken advantage of in well-designed mono systems for speech reinforcement
and speech intelligibility. Compared to a stereophonic audio signal, a monaural signal also has
better signal strength for the same power.
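The "mixed and then routed through a single audio channel" process can be sketched as a simple downmix (the sample values below are arbitrary illustrations):

```python
def downmix_to_mono(left, right):
    """Mix the two stereo channels into one mono channel by averaging:
    the simplest form of 'mixed and routed through a single channel'."""
    return [(l + r) / 2 for l, r in zip(left, right)]

# Arbitrary illustrative sample values for the two channels.
left = [0.25, 0.5]
right = [0.75, -0.5]
print(downmix_to_mono(left, right))  # [0.5, 0.0]
```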

Stereo
In stereophonic sound, sound is recorded on two different channels and then mixed or blended
back together, for an observable effect in playback. This is in contrast to monophonic sound,
which involves only one channel. Stereophonic sound is also known as stereo sound or stereo.
Common methods for creating stereophonic sound involve the placement of two or more
microphones for capturing different sound channels, and sophisticated mixing using state-of-the-
art sound equipment. The result is that the listener can hear audio in a kind of distributed field.
One of the enduring issues with stereo sound is how to deliver it through audio hardware. As the
"hi-fi" systems of the pre-digital age emerged, they were designed with stereo features. In
general, multiple speakers are set up to offer stereo sound experiences. The application of stereo
to digital sound is another field, where engineers look to accommodate this more refined sound
in new equipment and devices.
Surround Sound
Surround sound is a technology that is used for enriching the quality of audio reproduction for
listeners by using additional audio channels. Unlike screen channels, the sound produced by
surround sound technology is from a 360° radius in the two-dimensional plane. Surround sound
uses multiple channels, with each channel having a dedicated speaker within the system.
Surround sound provides listeners with excellent audio ambiance and richer and fuller sound.
Surround sound is a technique that allows the perception of sound spatialization to be enhanced
by manipulating sound localization. This can be achieved by using discrete and multiple audio
channels.
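As an illustration of "each channel having a dedicated speaker", the conventional 5.1 layout can be sketched as follows (the speaker angles are the commonly cited placement values, assumed here for illustration):

```python
# A common 5.1 surround layout. The speaker angles (degrees off the
# listener's front axis) are the conventional values, assumed here.
SURROUND_5_1 = {
    "front-left": -30,
    "center": 0,
    "front-right": 30,
    "surround-left": -110,
    "surround-right": 110,
    "lfe": None,  # low-frequency effects channel: no fixed direction
}

def channel_count(layout):
    """Describe a layout as 'full-range channels . LFE channels'."""
    full_range = sum(1 for angle in layout.values() if angle is not None)
    lfe = sum(1 for angle in layout.values() if angle is None)
    return f"{full_range}.{lfe}"

print(channel_count(SURROUND_5_1))  # 5.1
```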
There are various formats and techniques for producing surround sound. Its reproduction can
also be varied through positioning and the addition of audio channels. True and virtual surround
sound systems exist. The latter uses fewer speakers, although the audio seems to be emanating
from multiple speakers. Surround sound not only helps in faithfully reproducing sound of the
original audio, but is also capable of significantly improving the system's dynamic range and
tonality, even at low to moderate audio volumes.
Surround sound is mostly used in movies, television and video games, which helps provide an
immersive audio experience.
