
LAB FOUR

REVIEW
Understand the relationship between the classic tape studio and the
contemporary virtual studio of programs such as ProTools,
Audacity, and Audition.
Be able to use the basic functions of an audio editor:

use cut, copy, and paste to move portions of a sound around within the file

change the zoom to view more, or less, detail


Be able to change the default environment in ProTools:

Delete (or add) timebase rulers

Show/hide edit groups

Understand how to use (or turn off) the Smart Tool

Be able to load (Import) audio files into a ProTools session

Understand how (and why) to bounce tracks.

PROJECT TWO: PROCESSING


Due in Week Six.

ASSIGNMENT
Follow the instructions given under Procedure (below), and submit
the text file via WebCT under Assignments > Assignment 2 Processing.
(This information is also available in the Assignments section
of WebCT, including a clickable link to the soundfiles).


PROCEDURE
Download the files to your folder
Point your browser here and download the files; you will be
required to log in using your SFU ID and password.
http://www.sfu.ca/sca/courses/fpa147/sounds/assignment2audi
o.zip
The .zip archive should extract itself into a folder called
"assignment2audio".
A. Listen to the files
Open the files in an audio editor.

Note: The audio editor must have some graphic analysis tools
(explained later) and the ability to generate test tones at specific
frequencies.
Also note that ProTools is not suitable for this assignment, since it has
no analysis capabilities.

Listen to each file and answer the following questions, using
terminology from the lectures:
Amplitude (envelope)
Describe the envelope shape. Is the sound continuous? Does
it have a sharp or slow attack? What is its steady state, if
any? Is there an internal rhythm to the sound? etc.
Frequency
Is the sound harmonic, inharmonic, or noise-based? Can the
sound be described as being low, midrange, or high
frequency?
Timbre
How would you describe the sound: bright? dull? any other
words? Where do you think most of the spectral energy
is in the sound, and is this different from how you described
its frequency?


B. Analyze the files


Use a spectrograph to analyze each sound.
Where is the dominant energy of the sound (in terms of
frequency)?
Generate a sine wave (of at least 2 seconds) at that frequency.
Do you hear it as being part of the sound?
Use a sonogram to analyze each sound.
Does the sound change over time?
Does the sonogram show you anything about this sound that
the spectrograph did not?
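The test-tone step above can be sketched in a few lines of Python with NumPy (an illustration only; the 440 Hz frequency is a hypothetical stand-in for whatever dominant frequency your spectrograph reveals, and your editor's own tone generator does the same job):

```python
import numpy as np

SAMPLE_RATE = 44100   # CD-quality sampling rate (samples per second)
DURATION = 2.0        # seconds, as the assignment requires
FREQUENCY = 440.0     # Hz; substitute the frequency you measured

# Time axis: one value per sample across the full duration.
t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

# The sine wave itself, scaled to a safe amplitude below full scale.
tone = 0.5 * np.sin(2 * np.pi * FREQUENCY * t)
```

Writing `tone` to a file (or playing it) then lets you compare the pure frequency against the sound being analyzed.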
C. Process the files
For each sound, do the following:
Reverse the sound
How does reversing the sound alter its character? Is it still
recognizably the same sound?
Lower the frequency by one octave
How does lowering the frequency alter its character? Is it
still recognizably the same sound?
Remove the dominant spectral frequency band
Remove as much of the spectrum as possible, except for the
dominant spectral frequency band.
How does changing the timbre (spectral energy) alter its
character? Is it still recognizably the same sound?

FORMAT/PRESENTATION
Upload the file to WebCT, under Assignments > Assignment 2 Processing.

EVALUATION
The project will be judged by the quality of observations, and the
ability to apply terminology learned in class.


ABOUT THIS LAB'S TASKS


The material in the previous labs dealt mostly with how to do
things. While this lab, and future labs, will continue to provide
information about how to operate the software, we will also begin
to learn why to do certain things. Because we will be focusing upon
aural results, you will be required to complete a number of tasks
and listen to their results.

SIGNAL PROCESSING
Project Two is an introduction to signal processing, which is the
fundamental tool within electroacoustic music to apply variation to
sound material. Follow the instructions in the Assignments section
to download the soundfiles for this assignment; these files will be
used in this lab for demonstration purposes as well.

SIGNAL PROCESSING
IN AUDIO EDITORS
Most audio editors allow for digital signal processing as well as
editing of audio material. In Amadeus, for example, there is a
dedicated menu entitled Effects for this task; in Audacity, the menu
is named Effect, while in Audition, there is an Effects menu; in
ProTools, there is an AudioSuite menu which provides access to
processing plugins (discussed in later labs).
In all cases, processing is accomplished by selecting material
in an open audio file by highlighting it, and then choosing an
available process from the menus.
The process will only be applied to the selected material. If nothing is
selected, the process will be applied to the entire audio file.

Because of the nature of audio editors, the process is nondestructive until you save your work. In other words, if you
process a file, listen to it immediately and dislike the result, you can
undo the process by selecting Undo (Command Z) from the Edit
menu on the Mac, or Undo (Alt Backspace, or Control Z) from the
Edit menu on WinXP.


TASK: REVERSAL
Reversal is one of the first processes that you should try when
experimenting with your sound object. Many of the other available
processes have some meaning within nature (natural reverberation
or filtering, for example); reversal is a completely abstract process
that was impossible before musique concrète.
Open up the five files for Project Two in your audio editor.
Select the first file (assign2_1.aif), and reverse the contents of the
soundfile.

Listen to both the original and the processed versions. How
much change is apparent between the two? What exactly has
changed? Has the timbre changed? (No.) Have the frequencies
changed? (No.) Has the amplitude envelope changed? (Yes.)

Consider the amplitude envelope of the original; is it
symmetrical or not? What happens when you reverse a
symmetrical envelope versus one that has a more percussive
shape (strong attack, immediate decay)?

Try this process with all the sounds from the assignment, and
compare the originals to the processed versions.
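Digitally, reversal is nothing more than reading the samples in the opposite order, which is why the frequencies and timbre survive while the envelope flips. A minimal sketch (the sample values are hypothetical):

```python
import numpy as np

# A short hypothetical "percussive" envelope: sharp attack, fast decay.
samples = np.array([1.0, 0.8, 0.5, 0.3, 0.1, 0.0])

# Reversal: the same values in the opposite order (slow swell, abrupt stop).
reversed_samples = samples[::-1]
```

The set of sample values is unchanged, so the spectrum is unchanged; only their order, and hence the amplitude envelope, differs.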

TASK: PITCH SHIFT


Changing the playback speed of a recorded tape was a prime
method of processing in musique concrète, and it has an important
correlation to a fundamental method of acoustic musical
development: transposition. Of course, changing the tape speed
produced not only pitch shift but also a corresponding time change;
lowering the pitch also slowed the material down.
By using digital signal processing, it is possible to change the
pitch of a file without changing the time it takes for the file to play.
This involves some complex mathematics, analysis, and resynthesis
of the soundfile; while it is a very important process that we will
use later on within the creative projects, for this lab we will limit
the pitch shifting to one that links frequency and time.
Close the file (assign2_1.aif) without saving, and reopen it to
return to the original, unprocessed version. Locate the pitch shift
process on your audio editor. This process can be a bit tricky to find
in different audio editors, since the analysis/resynthesis version
might be more obvious.


We have lowered the frequencies contained within our file by
half. Interestingly, all three programs accomplished the exact same
task, but described it in different ways. In Amadeus, we changed the
frequency by a factor of .5; in other words, we made each frequency
play at .5, or half, its original frequency. In Audacity, we changed
the frequency by 50 percent; in other words, we made each
frequency play at 50%, or half, its original frequency. In Audition,
we lowered each frequency by 12 semitones, which is a musical
measurement of one octave; in other words, we made each
frequency one octave lower, or half its original frequency.
How were the frequencies lowered, without knowing which
frequencies were contained within the sound? Was some sort of
analysis done on the soundfile? In short, no; recalling the
information given in Unit 3 (Digital Sound), we lowered the
frequencies by doubling their length. Not only will this lower the
frequencies, but it will also double the length of the sound.
Any internal rhythms within the sound will thus become twice as
slow.
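A sketch of this length-doubling transposition in Python with NumPy (the sample rate and test frequency are arbitrary choices; real editors interpolate rather than simply repeat samples, as the interpolation discussion later in this lab explains):

```python
import numpy as np

# One second of a 200 Hz sine at a small, arbitrary sample rate.
SAMPLE_RATE = 8000
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
original = np.sin(2 * np.pi * 200 * t)

# Use every sample twice: the file doubles in length (1 s -> 2 s)
# and every frequency is halved (200 Hz -> 100 Hz).
octave_down = np.repeat(original, 2)
```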
Undo and Redo
One of the great advantages to digital audio is the ability to use
standard computer operations, like Undo and Redo. In this case,
select Undo from the Edit menu (in all three programs) to return to
the original version, then select Redo from the Edit menu to return
to the edited version. Learn the keyboard shortcuts for these
commands in order to quickly switch between the two versions.
After you have lowered the frequency of your file, compare
the lengths of the processed version to the original version (you
will have to look at the time lines). Notice that the processed
(slowed down) version is twice as long as the original.

Listen to the original and then to the processed version.

What is the difference between the two sounds? It should
sound, obviously, lower and slower. In those sounds that lack
distinctive pitch content (and are either inharmonic or noise-based
sounds), it is sometimes difficult to hear the processed file as
being one octave lower; it does, however, clearly sound lower.
Does this sound appear to be one octave lower?
Similarly, for those sounds that lack clear rhythmic material, it is
difficult to hear that the processed files are twice as slow as the
originals; however, they clearly sound slower. For those sounds
that have an internal rhythm, this becomes more apparent;
individual events are now much easier to distinguish.
In most cases, the nature of the sound itself will have changed.
For example, objects will tend to sound bigger: the reason is that
larger objects make lower sounds and move more slowly.

Try this process with all the sounds from the assignment, and
compare the originals to the processed versions.

Series Transformation
With any process, we can exceed its parameter range by
executing the process more than once on the processed file. This
is the concept of series transformation.

Select the processed region (or the entire soundfile, if it has
been processed) and pitch shift it down another octave.

You should still be able to use the Undo/Redo capabilities to
move between the original version, the lowered-by-one-octave
version, and the lowered-by-two-octaves version. Notice that the
second process has resulted in a file that is twice as long as the
former region and four times as long as the original.

Listen to this new version.

Does it still bear some resemblance to the original? Is its
character still intact?
Notice that the sound is beginning to deteriorate; it is losing
high frequencies and starting to sound harsh. If you processed this
file once more, the digital artifacts would be even more noticeable.
Excessive Processing: Artifacts
The lack of high frequencies results when all frequencies in the
sound are lowered by several octaves. For example, if the original
had a full frequency bandwidth (20 Hz to 20 kHz), the first
transposition down one octave would result in a bandwidth of 10 Hz
to 10 kHz; the second would result in 5 Hz to 5 kHz, the third 2.5 Hz
to 2.5 kHz, and so on. The diagram below shows this concept
graphically. The original sound's spectrum (top) is not only
transposed, but also compressed.
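The arithmetic above can be checked directly; a tiny sketch of the shrinking bandwidth:

```python
# Each octave drop halves both edges of the spectrum.
low, high = 20.0, 20000.0  # full-bandwidth original, in Hz
for step in range(1, 4):
    low, high = low / 2, high / 2
    print(f"after {step} octave(s) down: {low} Hz to {high} Hz")
```

After three transpositions the bandwidth has collapsed to 2.5 Hz to 2.5 kHz, exactly as listed above.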


[Figure: Series pitch shifting results in a lower frequency bandwidth.]


Furthermore, each successive transposition results in a loss of
detail. Consider the fact that for a given second of the original
sound, 44,100 numbers (samples) were used to represent the signal.
The first transposition (down one octave) would take those
44,100 samples but use them over two seconds. The computer
needs to fill in every second sample. After three transpositions
(down four octaves), those original 44,100 samples are spread out
over eight seconds; for every original sample, the computer needs
to fill in seven samples!

[Figure: Series pitch shifting results in a loss of detail. The lowest waveform is transposed down four octaves; notice the straight lines.]
The simplest way of filling in the missing samples is through
interpolation, a process that calculates intermediary numbers
based upon straight lines. For example, given two samples of 100
and 200, an intermediary sample would be 150, which would occur
after a one-octave transposition down. However, in the case of a
four-octave downward transposition, the resulting samples would
be 100 114 128 143 157 171 185 200, a straight line between the
original numbers. As you can see from the diagram above, these
straight lines predominate. Remember that higher frequencies have
shorter waveforms, and the detail of a given waveform constitutes
its higher frequencies; therefore, the lower waveform in the
example lacks high frequencies. Furthermore, the straight lines
result in the brittle or harsh sound: the digital artifact of extreme
pitch shifting.
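The interpolation example can be reproduced with NumPy's linspace (rounding accounts for the one-off differences from the figures in the text):

```python
import numpy as np

# One-octave drop: a single intermediary sample between 100 and 200.
midpoint = (100 + 200) / 2  # 150.0

# Four-octave drop: seven intermediary samples on a straight line.
line = np.linspace(100, 200, 8).round().astype(int)
print(list(line))  # [100, 114, 129, 143, 157, 171, 186, 200]
```

Every filled-in sample lies exactly on the straight line between the two originals, which is precisely the detail-destroying behaviour described above.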
What does this all mean?
Excessive pitch shifting results in degradation of the signal.

Pitch Shifting Voice

Repeat the above task using a voice-based recording, such as
"Playground Rhyme", listening to the result of a one-octave
pitch shift.
In the pitch shift of natural sounds such as assign2_1.aif, the
result was a new abstract sound that suggested a larger object
making the sound. However, the pitch shift of children speaking
results in a sound that suggests drunken sailors or perhaps tipsy
giants. Just as the natural objects became larger through pitch
shifting (since larger objects make lower frequency sounds and
move more slowly), the children also become larger. Adults have
lower voices than children do because their vocal cords are larger
and their heads (the resonator for the human voice) are larger.
Transposing speech lower results in the impression of a huge
resonating head, what can be called the "monster effect". Similarly,
transposing speech higher results in a smaller resonating body with
tiny vocal cords, which you may think of as the "chipmunk effect".

When pitch shifting voice by more than half an octave up or down, be
very sensitive to the resulting allusion. The effect can quickly become
tiring and overused!

TIMBRAL PROCESSING VIA EXTERNAL PROCESSES


You may have noticed that within the processing menus of your
audio editor, processes may be listed in a way that suggests they
are not part of the program itself. This is because these particular
processes are, in fact, external to the program, and based upon the
concept of external plug-ins.

AUDIO PLUG-INS
Plug-ins are used throughout digital audio applications. They are
small applications that are created by third-party developers (not the
original software designer, nor you) that follow a predesigned
format. The benefit of using plug-ins is that they are an extension of
the original program, without any upgrade to the program itself. In
other words, it is possible to continually add new plug-ins to a
program, requiring only a restart of the program for them to
appear. Plug-ins must be located in a special directory on your
computer. Consult either the manual of your audio editor or the
plug-in itself. Quite often, the plug-ins come with an installer, a
small program that will place the plug-in in the required location.
Certain programs, like Audition, allow you to select the directories
that contain the plug-ins, allowing you to keep them in the same
directory as the program itself, for example.
Amadeus, Audacity, and Audition all allow for plug-ins; if you
have any installed, they appear within the Effects menu in Amadeus
(as submenus, under Audio Unit and VST Plug-ins); in Audacity,
they are found at the bottom of the Effect menu; in Audition, they
are within the Effects menu, under the VST submenu.
Plug-ins come in different formats; on the Mac, these include
the following:


Premiere: initially used within the video-processing program
Adobe Premiere. Premiere plug-ins are the oldest format, and
are getting harder to find.

VST: designed by Steinberg for use within its Cubase
software. Arguably the most popular audio plug-in format,
since they work on both the Mac and PC.

Audio Unit: built into the OSX operating system, these are
essentially free, very high quality processes that are
automatically loaded by any program that can use the Audio
Unit format.

MAS: designed by Mark of the Unicorn for use within its
Digital Performer software.

TDM: designed by Digidesign for use with its hardware-based ProTools systems.


AudioSuite (AS): designed by Digidesign for use within all of
its ProTools systems. A separate version of AudioSuite is RTAS,
the Real-Time AudioSuite, which allows for dynamic control
over parameters within ProTools.

On the PC, one standard plug-in format is DirectX, designed
by Microsoft as a general-purpose multimedia tool. Audition can
use both DirectX plug-ins and VST plug-ins.
Plug-ins as virtual hardware
Within audio programs, plug-ins are mainly used as signal
processors (although many plug-ins exist that work as synthesis
engines). In this respect, they function as digital versions of what
was originally a hardware device: for example, a graphic equalizer.
The benefit of plug-ins in electroacoustic music, in comparison to
their analogue predecessors, is not only their affordability, but also
their interchangeability. For example, in an analogue studio, if you
were unhappy with your graphic equalizer, you had no choice but
to buy another piece of hardware (often an expensive proposition).
With digital audio, many of the standard (as well as novel) signal
processes are available as inexpensive, or free, plug-ins. As was
already mentioned, the Macintosh operating system, OSX, contains
within it high quality signal processors, available to any audio
program that can access Audio Units.

TASK: TIMBRAL PROCESSING


Filtering and equalization are also fundamental processes that you
will explore more fully in your creative projects. In electroacoustic
music, we are more often interested in filtering than in equalization
because it produces effects that are more noticeable.
Close any files that you used in the previous task (without
saving them), and reopen them in order to return to the original,
unprocessed versions.
Listen to each of the original five sounds from the assignment.
Does each sound consist primarily of high, mid, or low frequencies,
or a combination of them? Recognizing the spectral qualities of a
sound will allow you to determine which type of filtering and/or
equalization will be most effective for that sound.


The Graphic Equalizer


One of the first timbral processes to explore is the graphic equalizer.
This process allows us to affect the entire spectrum of a sound, and
view our results while we listen to them.

Remember from Unit 3 (Digital Audio) that equalizers affect a portion


of the spectrum, but allow the entire spectrum to pass, while filters
can remove portions of the spectrum entirely.

The Graphic EQ allows us to change the frequency content
over the entire spectrum. The frequency spectrum is equally
divided by the number of bands of equalization. For example, a 31
band EQ covers ten octaves, with each octave being divided into
three bands.

[Figure: The Audio Unit Graphic EQ found in Amadeus and the Mac version of Audacity.]
Each band allows for either boosting the frequency at that
band, or cutting it; the amount of boost/cut is dependent upon the
actual plug-in. In the above example, the boost/cut is 20 decibels
(dB).

Click on one of the sliders in the midrange, and drag it to the
top (either +20 or +18 dB), then press Preview to listen to your
sound.

Can you hear a change in the sound?



Return the previous slider to zero (the middle), and try
another, and then another, always previewing the sound.

Do the same with the other files from the assignment. (You
will have to close the plug-in window, select another open file,
and then re-select the process).

Notice how you can emphasize, or bring out, certain frequencies
within each sound. Also, notice that different frequencies will stand
out at different times within each sound, and between different
sounds. In other words, a single setting within the Graphic
Equalizer may be effective for one sound, but not another.
Let's explore some of the other equalizers and filters available to us.
Low Shelf
As described in Unit 3 (Digital Audio), the Low Shelf is actually
an equalizer, since it allows the entire frequency spectrum to pass.
It only affects the low frequencies, specifically those below the
cutoff frequency.

[Figure: A low shelf EQ, with a 9 dB boost and a cutoff of about 60 Hz.]


The bass control on your home or car stereo is actually a low
shelf EQ, with a fixed cutoff frequency; note that on such a system,
you can amplify or attenuate the low frequencies, but they will
always remain within the frequency spectrum.
In the plug-in, the frequencies can be cut or boosted by an
impressive 40 dB, a substantial amount. The cutoff frequency has an
equally impressive range (for a Low Shelf EQ) of 10 to 200 Hz. What
is unknown, but very important, is
the rolloff, or slope, of the filter. For example, with a 24 dB per
octave slope and a cutoff frequency of 100 Hz, frequencies between
100 Hz (the cutoff) and 50 Hz (one octave below the cutoff) will be
gradually reduced, with only a 24 dB reduction at 50 Hz. Since the
maximum gain reduction is 40 dB, this slope would continue to that
point, reaching it somewhere around 31 Hz (not quite 25 Hz, which
would be two octaves below the cutoff; a 24 dB/octave slope would
result in a 48 dB reduction at this point). In other words, it would
take about one and two-thirds octaves to reach the desired gain
reduction of 40 dB with a rolloff/slope of 24 dB per octave (which is
a very good slope in itself).
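The slope arithmetic above is easy to verify; a sketch assuming a 24 dB/octave rolloff below a 100 Hz cutoff:

```python
import math

SLOPE = 24.0    # dB of reduction per octave below the cutoff
CUTOFF = 100.0  # Hz

def reduction_db(freq):
    """Reduction in dB at a frequency below the cutoff."""
    return SLOPE * math.log2(CUTOFF / freq)

print(reduction_db(50.0))  # one octave below the cutoff: 24.0 dB

# The frequency at which the full 40 dB reduction is reached:
freq_full = CUTOFF / 2 ** (40.0 / SLOPE)
print(round(freq_full, 1))  # about 31.5 Hz
```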

Listen to all of the soundfiles for assignment two (click on the
Preview button) through the Low Shelf EQ; try boosting each
sound by 20 dB, and cutting by 40 dB. Move the cutoff
frequency slider as the sound is playing.

Notice that the lower the cutoff frequency, the less the apparent
effect upon the sound. This is due to the slope of the filter, and the
fact that the bandwidth upon which the EQ operates is being
reduced.
Some sounds may appear not to have any change at all, even with a
40 dB boost. Why do you think this might be? (Hint: how did
you originally describe the unaffected sound in terms of spectral
energy?)
Audition has neither a Low Shelf nor a High Shelf EQ. This is of
little concern, since these equalizers are of little use to us and are
used here merely for pedagogical reasons.
High Shelf
The high shelf will effectively do the opposite of the low shelf:
it will lower or boost all the frequencies above the cut-off frequency.
In this case, the frequency range is from 10 kHz to 22.05 kHz. Note
that only extremely high frequencies, in the upper octave of our
hearing range, will be affected.
Because of the limit of the cutoff frequency, only a limited
number of frequencies will be affected. As a result, in most cases,
there will be very little in terms of audible change to the sound.
Low Pass
Notice that the Low Pass filter has a cutoff frequency, but no
gain control. The reason for this is that the amount of reduction in a
filter is absolute: the slope of the filter continues to 0, rather than
levelling off at a flat shelf.


[Figure: A low pass filter's effect upon frequency content.]


As the diagram above shows, the filter's slope continues until
all energy is removed at a certain point. This point depends on the
cutoff frequency, and the slope of the filter.

Set the resonance to 0 dB, and preview the sound while
moving the cutoff frequency. Compare this to the High Shelf
(with a negative gain). What is the difference?

The difference between a low pass filter and a high shelf is
that the low pass filter completely removes the upper frequencies,
whereas the high shelf merely reduces them.
The resonance of a filter is the amount of feedback of the filter's
output back to its input. Increasing the resonance, also known as
Q, results in several alterations to the spectrum:
1. It increases the filters slope;
2. It actually lowers the energy below the cutoff frequency;
3. It increases the energy at the cutoff frequency.
Low Pass filters with high resonance sound similar to
Bandpass filters with a narrow bandwidth (discussed shortly), albeit
with more low frequency energy. A characteristic processing effect
involves sweeping a resonant lowpass filter's cutoff frequency. In
fact, this is the principle employed by the wah-wah pedal.
Highly resonant filters can begin to produce out-of-control
feedback. In analogue filters, this led to a filter oscillating, or ringing,
which was a pleasant sound. In digital systems, the result is much
less pleasant.

Set the resonance to 10 dB, and preview the sound while
sweeping the filter's cutoff frequency. Notice how different
frequencies become highlighted.


Equalizing sounds with high gain and high resonance (Q) can result
in quite piercing sounds that may be hard to utilize with other
sounds.

Band Pass
Set the bandwidth to 12000 cents, preview the sound, and
move the cutoff frequency slider around.
You should hear no change in the sound. The reason is that
the bandwidth is so wide, every frequency is passing through. The
cent, the unit of bandwidth for the AU Bandpass plug-in, is the
division of the octave into 1200 equal parts; each pitch has a
difference of 100 cents from the next (e.g., C to C#). This is different
from frequency, which is non-linear. The bandwidth range, above,
is from one pitch to ten octaves.
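Since a cent is 1/1200 of an octave, a bandwidth in cents converts to a frequency ratio of 2 raised to (cents/1200); a quick sketch:

```python
def cents_to_ratio(cents):
    """Frequency ratio spanned by a bandwidth given in cents."""
    return 2 ** (cents / 1200.0)

print(cents_to_ratio(100))    # one semitone (C to C#): about 1.0595
print(cents_to_ratio(1200))   # one octave: 2.0
print(cents_to_ratio(12000))  # ten octaves: 1024.0 -- wide enough to pass everything
```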

[Figure: A bandpass filter's effect upon frequency content.]


Set the bandwidth to 100 cents, preview the sound, and
move the cutoff frequency slider around.
This should result in the highlighting of different frequency
regions in the sound, similar to the Low Pass filter with resonance.
However, note the lack of low frequencies as compared to the Low
Pass filter, which passed all the frequencies below the cut-off. Also
notice that the sound will never distort with this particular
bandpass filter, since it has no gain; frequency content within the
bandwidth is never amplified.


WORKING BY EAR VERSUS VISUALIZING SOUND


So far, you have been working completely by ear. You have
made changes to the cutoff frequency of various filters, and have had
to listen to the resulting change in the sound to determine whether
the process had any effect at all.
Hopefully, you will have realized that in order for a filter or EQ
to have any effect on a sound, the sound needs spectral energy in
that region. In other words, if a sound consists primarily of high
frequencies, it doesn't matter whether you boost or cut its gain using
a Low Shelf EQ, since that EQ will not have any effect upon the
sound.
Several audio editors now incorporate various analysis tools to
aid with our processing. These tools analyze the sound in various
ways, and graphically display information that is of use to
us. The two types of analysis that we will discuss are the
spectrograph, and the sonogram.
Below is an example of a spectrograph. It displays the
frequency content of a sound at a particular instant. As we now know,
the frequency content of a sound can dramatically change over the
course of the sound; therefore, several spectrographs should be
taken at different points within a sound to get an idea about its
frequency content at different times.
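What a spectrograph computes can be sketched with NumPy's FFT (the 150 Hz test sine is a hypothetical stand-in for one instant of a real sound):

```python
import numpy as np

SAMPLE_RATE = 44100
t = np.arange(4096) / SAMPLE_RATE          # one short analysis window
snapshot = np.sin(2 * np.pi * 150 * t)     # hypothetical sound at one instant

spectrum = np.abs(np.fft.rfft(snapshot))   # magnitude per frequency bin
freqs = np.fft.rfftfreq(len(snapshot), d=1 / SAMPLE_RATE)

dominant = freqs[np.argmax(spectrum)]      # the bin with the most energy
print(round(float(dominant), 1))           # close to 150 Hz
```

An editor's spectrograph does essentially this for whatever instant you select, then draws the result.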

[Figure: A spectrograph of a sound, showing its mainly low frequency content.]


In this particular spectrograph, taken from Amadeus, the
sound's energy is all contained in the low frequencies. The frequency
scale at the bottom shows the frequency range of this particular
analysis, while the amplitude scale is relative to the highest
amplitude in the sound. Most of the energy of this sound occurs
between 100 and 200 Hz, with little, if any, energy occurring above
400 Hz. Thus, a High Shelf or High Pass filter will have little effect
upon this sound. However, a Low Pass or Low Shelf certainly will
affect this sound, as will a judicious use of a Bandpass filter with a
narrow bandwidth.
Below is another analysis, this time of a different sound.
Notice that its energy is centred around 1000 Hz, with its
bandwidth limited to approximately 800 to 2400 Hz. While both a
Low Pass and High Pass might affect this sound, due to the sound's
limited bandwidth, a more effective filter would be the Bandpass.

[Figure: A spectrograph of a different sound, displaying more midrange frequency content.]

Furthermore, since we now know the actual frequency content
of the sound, it is possible to tune the filter's centre frequency
right on the sound's highest energy (between 1000 and 1100 Hz).
The spectrograph does not give us any indication of how the
sound's timbre may change over time. The sonogram is a tool that
can provide this information.
Below is a sonogram of the same sound as above. Frequency is
displayed up and down, with lower frequencies lower down in the
graph. Time is displayed left to right, and amplitude is displayed
via colour (in this example, lighter colours have the lowest
amplitude, darker colours the highest).

[Figure: A sonogram of a sound, taking time (left to right) into account.]


This particular sonogram displays less detail about the sound
at any given point, but shows us the dynamic nature of the sound.
For example, we can see that the energy is fairly constant in a given
frequency range (the blue portions), although it lacks a consistent
fundamental.

MORE ON PLUG-INS: USING VST PLUG-INS


The VST format is very common in audio programs, mainly
because of its cross-platform concept. One of the concepts behind
VST is its real-time nature, which attempts to model the traditional
use of effects in the analogue studio. On most analogue mixers,
input strips, which control a single track of audio, contain an insert
send and return. The input signal can be sent to an external
processor, such as an equalizer or reverberation unit, and the
processed signal can be returned to the same input strip. In effect,
the process is inserted into the strip without taking up any other
input channels.


[Figure: A process inserted into the strip.]


The benefit of this technique is that it allows processing of a
signal without using another input strip. The negative effect is that
you lose the original signal; only the processed signal is available. It is
useful when only the processed signal (e.g., an equalized signal) is
desirable.
The other method is to use an effects send, which sends an
additional signal to an external processor. The output of the
processor is then returned to a separate input strip.

[Figure: A process on a separate effects bus.]


The benefit of this technique is that it allows dynamic mixing
of the two signals. It is useful for processes such as reverberation,
where it is desirable to control the relative amounts of the original
signal (the dry signal) with the processed signal (the wet signal).
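The wet/dry balance that an effects send makes possible is just a weighted sum of the two signals; a sketch with hypothetical sample values:

```python
import numpy as np

dry = np.array([1.0, 0.5, -0.5, -1.0])  # hypothetical original samples
wet = np.array([0.2, 0.4, 0.4, 0.2])    # hypothetical reverberated samples

mix = 0.25                              # 25% wet, 75% dry
output = (1 - mix) * dry + mix * wet    # the blended signal
```

Raising `mix` toward 1.0 drowns the original in the effect; lowering it keeps the process subtle.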
In ProTools, the effects send model is used in its AudioSuite
plug-ins: the original remains unchanged, while the processed
version is separated. In its default setting, the processed version
replaces the original. However, you can easily place the original in
an adjacent track at the same time location to have control over
both. There will be more on this concept in later labs.
The VST plug-in uses the insert/return model. Therefore, you
only have access to the processed version of the signal. However,
many of the plug-ins allow for a mixing of the original with the
processed sound within the plug-in itself. Furthermore, because of
the real-time nature of VST plug-ins, they can calculate the process
while playing the original; this allows the user to preview the
process before making a final decision and changing the contents of
the soundfile itself.

NAMING YOUR PROCESSED FILE - SAVE AS VS. SAVE


Once you have processed your file, you have to decide whether to
save your file. For the second project, this is not necessary, since
you are merely exploring different processes; however, for
subsequent assignments, and in normal electroacoustic working
methods, you will need to save your changes.
However, if you save it directly (choosing Save from the File
menu), you will overwrite your original file. In most cases, this is
not a good idea because you will probably want to return to your
original sound to process it in a different way or to use the original
in its unaltered state.
After processing a file, choose Save As, rather than Save, to save a
new processed file.

In choosing a name for your processed file, it is also a good
idea to incorporate the original file name in some way. The name
can remind you which one was the original file when you are
listening to the processed version at a later point in time.
For example, several weeks from now in the creative projects,
you may feel that the sound to which you added reverb needs a
longer delay time, or that a great transposed sound could use a
companion or two at a different transposition level. Naming these
sounds "Process 1", "Feb 19a", or "crazy ping" will not give you
any hints about what the original sound was!
If your original sound was "river", then "river-T-8" would let
you know that the original "river" file has been transposed (T) by
an interval of minus eight; "river T3" would be a related
transposition process, but by an interval of plus three.

Some useful shorthand for your processes might be:


rev = reverse
T = transposition, with an additional number for the interval
rvb = reverberation; an additional number might refer to the
delay time
LPF, HPF, PF = low pass filter, high pass filter, peak filter; an
additional number might refer to the cut-off frequency.
slow-2, fast-1.5 = speed change; an additional number
indicates the amount of speed change.
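The shorthand above could even be automated; a hypothetical helper (the function name and joining convention are my own, not part of any program used in this lab):

```python
def processed_name(source, *processes):
    """Join a source file name with process abbreviations, e.g. river-T-8."""
    return "-".join([source, *processes])

print(processed_name("river", "T-8"))          # river-T-8
print(processed_name("river", "rev", "rvb2"))  # river-rev-rvb2
```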

WHERE TO GET VST PLUG-INS


There are many demo plug-ins and even free ones available on the
Internet. I did a search for "free vst plug", and over 17,000 sites
came up.
Although Amadeus does not come with any VST plug-ins, the
download site does have a separate link to some free plug-ins:
(http://www.hairersoft.com/AmadPlugs.html).
Some demo plug-ins will work for only ten seconds at a time,
and then insert a one-second sound drop-out. You can work
around this limitation by processing only audio files that are less
than ten seconds long. However, even working in this way, you
still might get that one-second drop-out in the middle of your
processed file. If this happens, undo the process (Command Z, or
select Undo from the Edit menu) and retry it. If you time it right
(between the drop-outs), you should be able to get a
usable process.

TO DO THIS WEEK
Begin Assignment 2: Processing.
