The Perfect Mix

There are many ways to get your songs to final form. What matters is not how you
get there, but that you do get there. Let's pretend you are enrolled in one of the
world's fine universities and you are writing a Master's thesis. This is not just "any"
piece of drudge paperwork, but the culmination of your education. You know you have
to write in excellent form, watch out for tiny grammatical imperfections, and make
sure substance and style flow well. In short, you have to rewrite and edit, a lot. It
may take several experiments to get this just right. You might be working for weeks,
not going out to the clubs with your buds, even sending hopeful significant others
away. Why? Because the darn paper is important and you have to do well. Apply that
same attitude to your mix and you will have a great mix.

Tweak's axiom: the value underlying successful production is the same in all fields:
art, architecture, music, quantum mechanics, even political science and business.
Beauty has a tone. It's not a tone you hear with your ears or see with your eyes, but
one you realize on reflection (that is, when you stand back and ask, "What is this?").
When you sense the passion of the creator coming at you from the work of art they
made for you, you begin to sense that the piece at hand is great.

Let's assume, for this article, that final form means a beautifully polished piece of
music in 16-bit, 44.1 kHz digital audio (i.e., the "Red Book" CD audio standard) or a
standard .wav or .aif file, perhaps at a higher resolution for later mastering. You need
to start, of course, with a fully or almost finished song. This is the point where the
writing ends and the TweakMeistering begins. I'm going to give you some hard-earned
tips on mixing and mastering in the old analog style.

Mixdown and mastering, traditionally speaking, are two very separate processes.
Mixdown is the art of leveling, equalizing, and effecting the various sources from many
tracks down to a stereo mix. Mastering is the process of taking the stereo mix and
putting it in final, album-ready form. Recent software and hardware developments make
these processes easier and less expensive than they have ever been in the history of
making music. Because much of the time we can stay in the digital domain, we can add
processing to our heart's content while maintaining a high signal-to-noise ratio and
achieving optimum dynamics for the piece at hand.
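
To put some rough numbers on the "Red Book" target, the dynamic range and usable bandwidth of a 16-bit, 44.1 kHz file follow directly from the format. A quick sketch in Python:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of linear PCM: 20*log10(2^bits),
    i.e. roughly 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

def nyquist_hz(sample_rate: int) -> float:
    """Highest frequency a given sample rate can represent."""
    return sample_rate / 2

print(round(dynamic_range_db(16), 1))  # ~96.3 dB of theoretical dynamic range
print(nyquist_hz(44100))               # 22050.0 Hz, just past the audible range
```

That roughly 96 dB figure is why staying digital lets you stack processing while keeping noise far below the music.
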
Volume Balancing and The Final Mix

Step 1: The order for establishing volume balancing

1. Drums.
The drums provide the driving beat in a song, whether it comes from the kick drum or
the snare. You want the drums loud enough in the mix that the listener can feel that
energy.

2. Bass.
You need to give the bass enough volume that it cuts through the drums. I find a volume
just above the drums is a good place to start. Before I get into my EQ work, I want to
hear the volume difference between the two instruments. The parts of the drum kit that
come into play here are mostly the kick drum and, to a lesser extent, the toms.

3. Guitars.
Layer the rhythm guitars on top of the bass. Then, layer the lead guitar on top of the
rhythm guitar.

4. Piano / keyboard.
The keyboard might be used for synth pads or a melodic line. Place it where it
would sound best in relation to the guitars. For example, a pad would go under the
guitars while a melodic line would go on top of rhythm guitar. There is no definitive
answer to each situation as each song is different and each mix can be different but
still sound good. Let your ears tell you where to ultimately place the volume for the
piano/keyboard.

5. Vocalists.
Start with the lead vocalist. Set their volume so they can be heard clearly along with,
yet above, the other instruments; listeners have to hear them clearly over everything
else. Next, bring in the backup singers and set them behind the lead vocalist.
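
The balancing order above can be sketched as building up a set of fader offsets one group at a time. The dB values here are purely illustrative starting points, not recommendations:

```python
# Hypothetical starting faders, built in the order the text describes:
# drums first, then bass just above them, then the layers on top.
faders_db = {}
faders_db["drums"] = 0.0           # reference level: loud enough to feel
faders_db["bass"] = 1.0            # "just above the drums"
faders_db["rhythm_guitar"] = 0.5   # layered on top of the bass
faders_db["lead_guitar"] = 1.5     # on top of the rhythm guitar
faders_db["keys"] = 0.0            # placed by ear relative to the guitars
faders_db["lead_vocal"] = 3.0      # clearly above the instruments
faders_db["backing_vocals"] = 1.0  # set behind the lead vocalist

# Python dicts preserve insertion order, so the build order
# doubles as the balancing order.
print(list(faders_db))
```

The point is the sequence, not the numbers: each level is chosen relative to what is already in the mix.
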
Highlighting Instruments

Once the overall volume balance is set, look for the instruments that will be used to
drive the song. For example, one song is led by the acoustic guitar while the next is
led by the piano. Make note of this and mark it on your song list so you know to give
those instruments some extra bump during those songs.
Step 2: Creating the General Mix

The purpose of establishing a general mix is getting good sounds from your channels.
This means removing or minimizing bad frequencies, increasing the frequencies that
benefit the sound, and re-evaluating your volume balance. The general mix process
comes before you ever touch the audio effects.

Your goal in creating the general mix is creating a foundational mix that you can build
upon. Your goal, at this point, is not creating the final mix. Think of it as building a
wooden table. Before you can put the table together, you need to get the right material.

Keep in mind how you want the instrument / vocal to sound and how you want the
whole band to sound. Doing this, you’ll be performing a bit of natural blending but know
the general mix is considered a rough mix, especially at the first pass.

The general mix process can produce an unexpected volume change. Frequencies play
a part in volume. As you boost and cut frequencies at significant degrees, the volume
of that sound may increase or decrease. That’s when you’ll need to tweak your channel
volumes.
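
The interaction between frequency and volume can be shown numerically. In this sketch (pure Python, with a two-tone test signal standing in for a real track), boosting just one frequency component by 6 dB raises the RMS level of the whole signal:

```python
import math

def rms(samples):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

sr = 8000  # sample rate for the test signal
n = sr     # one second

def tone(freq, gain=1.0):
    return [gain * math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

# A signal with a 100 Hz component and a 1 kHz component.
before = [a + b for a, b in zip(tone(100), tone(1000))]
# "EQ boost": raise only the 1 kHz component by 6 dB (roughly x2 in amplitude).
after = [a + b for a, b in zip(tone(100), tone(1000, gain=2.0))]

print(round(rms(before), 3), round(rms(after), 3))
# The overall level rises even though only one "band" was boosted,
# which is why faders need rechecking after significant EQ moves.
```
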

It’s important you progress through your general mix channels in the same order as
you did in your volume balancing. They are listed below in the same order. Keep in mind
the idea that each sound builds upon the sound that came before.

IMPORTANT:
Please note I can't give you a lot of specifics on mixing. That is to say, I can't say "always
turn the acoustic guitar up 3 dB at 800 Hz." There are no definites in mixing.
There are frequency ranges that are typically beneficial, but not always. It depends on
the sound of the instrument/vocal and the desires of your final mix. Therefore, use this
portion as a generic road map as if you are leaving California and driving to Texas. You
know the city where you are starting and you know the city where you want to arrive.
You have to do the work so the map is specific to your trip.
Follow the order of instruments

1. Drums.
Start with the kick drum. Don’t assume you need to boost the low-end frequencies.
There’s a lot you can do in the low end and mid-range to affect clarity. You need to
start by having a sound in mind you want from the kick drum. Do you want it to punch
through the mix or do you want it to have a subtle emphasis? You can mix the rest of
the drum kit after the bass. I’d wait to mix the cymbals until right before the singers.
The cymbals are great for accenting parts of the mix.

2. Bass.
You can do a lot here depending on your desires. You can boost the lows for a fatter
sound or you can focus around the pluck for a bit of slap bass sound. It’s all in how it
should fit into the mix. Use a sweepable EQ control to sweep down to 250 Hz and boost
to hear what you can do.
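
If you want to experiment with that kind of sweep outside a console, a parametric "peaking" band can be sketched from the widely used Audio EQ Cookbook (RBJ) biquad formulas. This is a minimal illustration, not production DSP code; the Q and gain values are just examples:

```python
import math

def peaking_coeffs(fs, f0, gain_db, q):
    """Biquad coefficients for a peaking EQ (RBJ Audio EQ Cookbook)."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    # Normalize by a0 so the recursion below can assume a0 == 1.
    return [b0 / a0, b1 / a0, b2 / a0], [a1 / a0, a2 / a0]

def biquad(samples, b, a):
    """Direct-form I: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out

# Audition a +6 dB bump at 250 Hz on a bass-range test tone:
fs = 48000
tone = [math.sin(2 * math.pi * 250 * i / fs) for i in range(fs // 10)]
b, a = peaking_coeffs(fs, 250, 6.0, q=1.4)
boosted = biquad(tone, b, a)
```

Sweeping is then just a matter of calling `peaking_coeffs` with different `f0` values and listening to (or measuring) the result.
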

3. Electric Guitars.
Electric guitars can be a bit troublesome in the mix. It all depends on whether they are
playing lead or rhythm and what patches or pedals the guitarist is using for the song.
They can sound bright and clear or heavy and full of distortion. Combine that with the
number of electric guitars you have on stage and you definitely need a plan.

Let's start with a heavily distorted guitar. This will have plenty of low-end frequencies
potentially clashing with the bass. Go for cutting the lows and seeking definition of the
electric guitar just above the bass. You are layering instruments with only the overlap
that benefits the mix. Look at focusing around the 500 Hz – 600 Hz mark.

Contrast all of that with a lead guitar playing a clear line. You'll want to focus on the
mid-range and highs to get that lead line to cut through the mix. Look to the 1.5 kHz
range for a bit of boost and then jump to the 4+ kHz range for added bite.

4. Acoustic Guitars.
How you EQ an acoustic guitar varies greatly from one guitar to the next. The worship
leader's Breedlove might naturally sound a lot brighter than the other guitarist's
Washburn. The guitar that had old strings last week and not much of a high-end sound
now has new strings and an overly bright, crisp sound. What do you do?

Mixing the acoustic, you also need to consider what other instruments are available. A
small band might be comprised of only a guitarist and a couple of singers. When this is
the case, you want a nice full sound where you are only EQ'ing to clean up a bad
frequency range or two. A large band with several instruments means you need to focus
on fitting the acoustic guitar into the available open space: up front in the mix if it's
the lead instrument, or back in the mix if another instrument is leading.
5. Piano / Keyboard.
Developing the general mix with the piano is the easiest part of mixing. Don’t do
anything! Not at first. The piano is a wonderful sounding instrument that can
produce sounds in the full spectrum of audio. By its very nature, it will be through
musical arrangement that the pianist plays melody lines in the octave range(s) that
accentuate the piano in the mix. Use minimal EQ changes to highlight the melody line
and/or add depth to the piano. As long as the song is arranged properly, you can hear
the frequency areas where the melody is breaking through and boost those frequencies.
Keyboards, when played with similar sounds to a piano, can be treated like a piano. It’s
when you deal with pads / synths that you might have difficulty. Listen for frequency
areas in your mix where there is an empty space. Imagine a painting with mountains
and trees and a small clearing. That audio clearing is where you can place the synths.
There are a variety of voicings a keyboard can produce so you need to deal with each
one independently.

6. Vocals.
The vocals should be the last item you set for the general mix. Lead vocal clarity can
be achieved by using the HPF to drop the super-low frequencies, using the upper
mid-range to boost clarity, and using the high end to add a bit of air / breathiness to the
voice. Make sure you listen for sibilance and eliminate it if it occurs.

Background vocals can feel like the hardest part of the mix. You want them to be heard
but you don't want them at the same layer as the lead vocal. Background vocals benefit
a song when they bring depth and power to it. Depth could be a vocal line that supports
the lead singer, or a harmony line. Power can come from adding voices only to a chorus
or during key verses.
Rinse and Repeat

You shouldn’t expect to get the general mix right the first time you go through all the
instruments. Feel free to go back and make changes as necessary. At this point, they
should be minor in comparison to your first round of EQ changes. Keep in mind how
you want it to sound and make changes to match that. Don't try to perfect the mix or
to do a lot of blending.

Step 3: Creating the Distinct Mix

You've created your general mix. It should have a good overall sound and be ready for
the next step. Creating a distinct mix takes your general mix to the next level. This
happens through the blending of instruments and vocals, the contrasting of instruments
and vocals, and the use of effects.

There are three goals in this distinct mixing process:

1. Placing Sounds in Proper Places

Placing sounds in the proper places in the mix is also known as having them sit in
the right places. I’m not sure where that phrase came from but it reminds me of an
orchestra. The conductor stands on the podium and knows when he points to a specific
section all the performers of that instrument will be sitting in the same section.

“Listen to a song with your eyes closed and try pointing out each musician.

In a good mix, they are all sitting in the proper places.”


Instrument Placement

The instruments are the foundational part of the music. For this reason, the
instruments need to be sitting in their proper places to support the vocals. They also
need to support each other in the mix. This supporting can be done via volume and
via EQ’ing. Please note that the below includes EQ frequency recommendations which
should be used as a place to start, not a goal. If you find a different frequency cut/boost
gives the sound you want, then use it.

Kick Drum
The kick drum is the foundation for all the instruments. It needs to be well present in
the mix and distinct enough that you can pick it out from all the other instruments.
Listen to your general mix and determine if it needs a volume bump and/or an EQ
change. Regarding the EQ changes, you can get it to cut through the mix by boosting
the low end (40 Hz – 80 Hz) and/or focusing on the sound of the beater head hitting
the drum head which comes in the 2 kHz to 6 kHz range.
Bass
A lot of what you can do here will depend on the quality of the bass player. For example,
a bass player who has mad skillz and knows how to groove with the song might have
a bit more prominence in the mix. I want them to help carry the vibe of the song but,
like in the church worship environment, in an understated way. I could do this with a
boost in the 1 kHz-6 kHz range for presence and a bit in the 200 Hz range for power. I
might even tweak for clarity.
In the case of the I-only-play-three-notes bassist, I’m more likely to keep them way
back in the mix. They will still be playing a supporting role but I won’t bring out their
attack…or maybe I will. It depends.

Snares and Toms
The snare should have a good amount of bite. It will be the instrument that others will
follow for the tempo so it needs to shine through to support them. The problem is that
it’s easy to get too much bite in your snare if you’re not careful.
Look for adding bite and depth by boosting in the 400 Hz – 800 Hz range. You want
a sound that conveys, “hello, I’m your friendly-neighborhood snare drum.” Boost too
much and it’s “LOOK AT ME, I’M THE SNARE DRUM!” You might also find a cut on the
low end helps separate it out.

You've got floor toms and rack toms. Each has a distinct sound. Start with the floor tom
and try boosting at the 140 Hz mark. Once you find the right sound, move through the
higher toms by boosting around 20 Hz higher for each (160 Hz, 180 Hz, etc.). The key is
first getting that floor tom where it sounds right. The floor tom is a great accent drum
in the kit, so make sure it sits nicely in the mix between the kick drum and the bass.

Electric Guitars
Place the electric guitar above the bass. You'll want to give it depth around 500 Hz and
presence around the 1.5 kHz mark. The amount of presence can be affected by whether
it's played as a lead instrument or a rhythm instrument. Cut the low end if you find it's
blending too much with the lower-end instruments. If it still seems lacking in some way,
revisit the 4 kHz – 7 kHz range to give it some more bite.

Acoustic Guitars
Placing the acoustic guitar in the mix, I go for something that supports a lead guitar,
sitting it just under the lead guitar or alongside it and using EQ to separate the two.
For example, I'd cut the highs on the rhythm instrument so the lead instrument could
shine through.

Consider all the instruments in the mix and first decide if the acoustic guitar needs to
support a lead instrument. When this is the case, start with a cut to the acoustic guitar’s
highs and consider boosting the mid-range warmth and presence of the guitar.
When you don’t have a lead instrument like a piano or lead guitar, then aim for more
presence in the 2 kHz – 8 kHz range and add some high end. Do be careful that you
don’t add so much high end that the guitar sounds boxy or gets caught up with the
cymbals.
Piano / Keyboard
Tuck this either behind the acoustic guitar or on top of it, depending on how it’s being
used. The piano can be played for melodic lines and it can also be played by chunking
out chords with embellishments. Played for the melody, like a lead instrument, I’ll place
it on top of the guitar with a cut to the highs on the guitar. Then, I'll listen to see what
else is needed. I might not have to add anything to the piano EQ. On the other hand, I
might need to boost the upper mid-range and/or high frequencies to have it stand out.

In the case of the piano played for chunking out chords, I’ll boost the mid-range in the
600 – 1.2 kHz range to warm it up and get it in-line with the guitar.
Synths: if the keyboard is played with acoustic pads, then I'll listen to the mix for
an open frequency range and place the pads in there. I’m not going to restrict the
pads to this range, just like how you shouldn’t restrict any instrument to that degree.
However, I am going to put the emphasis in that range. The pads are to support the other
instruments, not to dominate them or muddy them.
Vocal Placement

Lead Vocals
Considering the state of your mix now, how does the lead vocal sound? It should be
high enough above the mix that the congregation can sing along and clear enough that
they understand the words.
Start with any necessary volume adjustment. Next, listen for clarity. You can create a
clearer sound by cutting the low end and then turning towards the mid-range. Start
with a 4 dB boost at 1.5 kHz and turn your mid-range sweep up through 8 kHz until
you find the optimal clarity. If you don’t hear it, apply a 4 dB cut and sweep again. Each
person’s voice is different so whereas one singer might need a 6 dB boost for clarity,
another might need a 4 dB cut at a different frequency.

Background Vocals
Where do you sit them in the mix? It depends. In the case where they sing to add
power to the song, you might want their volume levels the same as the lead singer. In
the case where they sing for support and maybe harmony, you want them below the
lead singer in volume.
Background singer EQ should focus on blending all of the background singers
together. Cut the frequencies that don’t fit and boost where they do. For example, if
singer #1 has a rough sound in the 1.5 kHz range, then cut it. Now how do they sound in
comparison to the others? If they don’t blend, what can you do to the singer or the
others so they do blend? There is no one-size-fits-all answer. Some singers naturally
harmonize well together (BONUS!) and others simply don’t.
2. When to Blend Versus When to Contrast

Throughout the mixing process, you are blending and contrasting instruments and
vocals. You are blending background singers. You are contrasting the bass against the
kick drum. However, there are times when you have an option between contrasting and
blending.

The two times you'll have the blend/contrast choice are with vocals and with instruments.

There are times when two singers will lead a song. Do you blend or contrast their
voices? It depends. When only one sings lead, the others are singing background.
However, if two of them sing lead, I have options. Jeff and Leshon naturally harmonize,
so I tweak the EQ so the two singers create one sound. When Kevin and Leshon sing
together, they don't have naturally blended voices, so I contrast them. Not actively
contrasting; that sounds bad. Rather, I EQ them so each has a distinct voice when
singing together.

Working with acoustic guitars is much the same. I have three different worship bands
and two of them have two acoustic guitarists that play rhythm. Depending on the
musicians, the guitars, and the song, I may blend their guitars for one sound or contrast
them wherein I bring out the best sounds in each guitar and sculpt the two so they sit
next to each other in the mix yet with enough frequency differences they sound unique.
Deciding between blending and contrasting comes down to asking the question:

“what sounds best for this song?”


3. Adding Effects

Effects are the final piece in creating your distinct mix. Effects can include standard
effects like reverb and delay. They can also extend to compression-as-an-effect as well
as all of the effects available on digital audio workstations. Considering the amount of
information I’ve covered so far in this guide, I’m going to leave this up to you with one
helpful tip…only use effects to benefit your mix. Don’t use them to fix a bad EQ job. Go
back and fix your EQ problem – you might even find what you really need is a different
mic’ing setup.

EQ, or equalization, has been mentioned sporadically throughout this document without
really being tackled in any detail. Talking about effects moves us nicely into this realm.

Before diving into EQ, however, we should first understand the audio spectrum and the
frequency ranges that surround musical material and content.
Audio Spectrum Explained

The audio spectrum is the range of frequencies that humans can hear.

The audio spectrum range spans from 20 Hz to 20,000 Hz and can be effectively
broken down into seven different frequency bands, with each having a different
impact on the total sound.

The seven frequency bands are:

Sub-bass | Bass | Low midrange | Midrange | Upper midrange | Presence | Brilliance

Frequency Band     Frequency Range

Sub-bass           20 to 60 Hz
Bass               60 to 250 Hz
Low midrange       250 to 500 Hz
Midrange           500 Hz to 2 kHz
Upper midrange     2 to 4 kHz
Presence           4 to 6 kHz
Brilliance         6 to 20 kHz
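
The band table maps naturally to a small lookup function. The boundary values below are exactly those in the table; band naming varies between sources, so treat the labels as this document's convention:

```python
# Upper edge (Hz) of each band, in ascending order, per the table above.
BANDS = [
    (60, "Sub-bass"),
    (250, "Bass"),
    (500, "Low midrange"),
    (2_000, "Midrange"),
    (4_000, "Upper midrange"),
    (6_000, "Presence"),
    (20_000, "Brilliance"),
]

def band_of(freq_hz: float) -> str:
    """Name the frequency band a given frequency falls into (20 Hz - 20 kHz)."""
    if not 20 <= freq_hz <= 20_000:
        raise ValueError("outside the audible spectrum")
    for upper, name in BANDS:
        if freq_hz <= upper:
            return name
    raise AssertionError("unreachable")

print(band_of(100))    # Bass
print(band_of(3000))   # Upper midrange
print(band_of(12000))  # Brilliance
```
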
Frequency Bands Detailed

Sub Bass: 20 to 60 Hz
The sub bass provides the first usable low frequencies on most recordings. The deep
bass produced in this range is usually felt more than it is heard, providing a sense of
power. Many instruments struggle to enter this frequency range, with the exception of
a few bass heavy instruments, such as the bass guitar which has a lowest achievable
pitch of 41 Hz. It is difficult to hear sound at low volume levels in the sub-bass range
due to the Fletcher-Munson curves (equal-loudness curves). It is recommended that
little or no equalization boost be applied to this region unless you are using very
high-quality monitor speakers.

Bass: 60 to 250 Hz
The bass range determines how fat or thin the sound is. The fundamental notes of
rhythm are centered on this area. Most bass signals in modern music tracks lie around
the 90-200 Hz area. The frequencies around 250 Hz can add a feeling of warmth to the
bass without loss of definition. Too much boost in the bass region tends to make the
music sound boomy.

Low Midrange: 250 to 500 Hz


The low midrange contains the low order harmonics of most instruments and is
generally viewed as the bass presence range.
Boosting a signal around 300 Hz adds clarity to the bass and lower-stringed
instruments. Too much boost around 500 Hz can make higher-frequency instruments
sound muffled. Beware that many songs can sound muddy due to excess energy in this
region.

Midrange: 500 Hz to 2 kHz


The midrange determines how prominent an instrument is in the mix. Boosting around
1000 Hz can give instruments a horn like quality. Excess output at this range can
sound tinny and may cause ear fatigue. If boosting in this area, be very cautious,
especially on vocals. The ear is particularly sensitive to how the human voice sounds
and its frequency coverage.

Upper Midrange: 2 to 4 kHz


Human hearing is extremely sensitive at the high midrange frequencies, with the
slightest boost around here resulting in a huge change in the sound timbre.
The high midrange is responsible for the attack on percussive and rhythm instruments.
If boosted, this range can add presence. However, too much boost around the 3 kHz
range can cause listening fatigue. Vocals are most prominent at this range so as with
the midrange, be cautious when boosting.

Presence: 4 kHz to 6 kHz


The presence range is responsible for the clarity and definition of a sound. It is the
range on which most home stereos center their treble control.
Over-boosting can cause an irritating, harsh sound. Cutting in this range makes the
sound more distant and transparent.

Brilliance: 6 kHz to 20 kHz


The brilliance range is composed entirely of harmonics and is responsible for the
sparkle and air of a sound. A boost around 12 kHz makes a recording sound more hi-fi.
Be cautious over-boosting in this region as it can accentuate hiss or cause ear fatigue.
Equalization Techniques

Improve instrument clarity


Equalization can help enhance an instrument's sound.

Harmonic and Fundamental frequency rules


• Boosting a sound's harmonics gives the impression of more presence and brightness
• Decreasing a sound's harmonics gives the impression of a dull, less dazzling sound
• Boosting a sound around its fundamental frequency will promote warmth and depth
• Decreasing a sound around its fundamental frequency will promote a colder, less
powerful sound
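
The fundamental/harmonic distinction is easy to make concrete: the harmonics of a pitched note sit at integer multiples of its fundamental. Taking the bass guitar's low E (about 41 Hz, as noted earlier) as an example:

```python
def harmonics(fundamental_hz: float, count: int):
    """First `count` members of the harmonic series (fundamental included)."""
    return [fundamental_hz * n for n in range(1, count + 1)]

# Low E on a bass guitar: the fundamental is ~41 Hz, but much of what we
# hear as "the bass" is the energy at 82, 123, 164 Hz and above --
# which is why boosting low harmonics can add warmth without sub-bass mud.
print(harmonics(41, 4))  # [41, 82, 123, 164]
```
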

When applying equalization, it is advisable to use large amounts of cut or boost
initially. This helps give a better idea of the frequencies being affected. The human ear
can quickly become used to an equalized sound, so quick successive adjustments of the
gain knob until the frequency sounds about right work best.
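
When judging "large amounts" of cut or boost, it helps to keep the dB-to-amplitude arithmetic in mind: every 6 dB is roughly a doubling or halving of amplitude. A quick sketch:

```python
import math

def db_to_gain(db: float) -> float:
    """Convert a dB change to a linear amplitude multiplier."""
    return 10 ** (db / 20)

def gain_to_db(gain: float) -> float:
    """Convert a linear amplitude multiplier to a dB change."""
    return 20 * math.log10(gain)

print(round(db_to_gain(6), 2))    # 2.0  -- a +6 dB boost roughly doubles amplitude
print(round(db_to_gain(-6), 2))   # 0.5  -- a -6 dB cut roughly halves it
print(round(gain_to_db(10), 1))   # 20.0 -- ten times the amplitude is +20 dB
```
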

Golden Frequency Rules

• It is the harmonics that give each individual instrument its character, or timbre, and
sets it apart from all the rest

• Boosting a sound around its fundamental frequencies will promote warmth, depth
and place it in front of the mix. In contrast, decreasing the fundamentals will promote
a cold, thin sound which will be pushed back in the mix

• Boosting a sound around its harmonics gives the impression of more presence,
brightness and air. In contrast, decreasing the harmonics generally gives a dull, less
dazzling impression

• The human ear is more sensitive at the midrange frequencies, where speech and
voice communication occur. As a result, very small energy changes in the midrange
frequencies cause much more noticeable effects than do larger changes in the very low
and/or very high frequency ranges
Create room and balance

The equalizer can be useful for creating space in the mix by balancing frequencies. If
certain instruments occupy a similar frequency band, they will end up masking each
other within that particular area of the audio spectrum, often resulting in a muddy
sound.

To allow elements to best fit together, there has to be some juggling of frequencies
so that each instrument has its own predominant frequency range. For example, if a
kick drum is heavy and powerful around 80 Hz but getting muddied up by the bassline,
attenuating the bassline around this frequency will free up valuable mix room, allowing
the kick to shine through. The result is a mix that sounds more clear and distinct.

Frequency Output
When considering equalization, it is helpful to understand each instrument's overall
frequency output so you know which frequencies to work with.

Instrument Enhancement Frequencies


All instruments contain a range of frequencies. Boosting or cutting these frequencies will produce a different
sounding timbre (character), no matter what the instrument.
Muddiness and honk

The main problems in a mix are usually excess muddiness and honk.

Before beginning to equalize any instrument, it is important to listen to the sound in
order to identify any unwanted frequencies.

Muddiness normally comes from bass heavy instruments, such as the kick drum, bass
guitar, and the lower end of the piano. The frequencies responsible are usually centered
between 100-400 Hz. However, simply cutting these frequencies will make the sound
thin as this area does contribute to the body and rhythm of a mix.

The best way to deal with muddiness is to scan the lower frequencies with a high Q
setting and a moderate boost level of about 8dB. Once excess muddiness is found, cut
the signal right down, and then slowly bring up the gain control until there is a good
balance between body and muddiness.
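
The sweep procedure above can be sketched as stepping a narrow boost through the mud region at logarithmically spaced centers (log spacing matches how we hear pitch). The range is the text's 100-400 Hz region; the number of audition points is an arbitrary choice:

```python
def sweep_centers(lo_hz: float, hi_hz: float, steps: int):
    """Logarithmically spaced center frequencies for an EQ sweep."""
    ratio = hi_hz / lo_hz
    return [lo_hz * ratio ** (i / (steps - 1)) for i in range(steps)]

# Audition points through the "mud" region; at each one you would park a
# high-Q, roughly +8 dB boost, listen, and note where the mud jumps out
# before switching that band to a cut.
centers = [round(f) for f in sweep_centers(100, 400, 9)]
print(centers)
```
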

Exercise caution when cutting frequencies and do not cut the same frequency on all of
the sounds.

The idea is to fill up the audio spectrum; applying cut to several sounds in the same
frequency area will just leave a hole in the mix. If too much bass is lost from the
sound, try adding some moderate boost to the sub-bass region, around 60 Hz, to
compensate.

The honk area is focused between 500-3000 Hz and determines how honky and
prominent an instrument is in the mix. Excess output at this range can sound cheap,
boxy and can cause unwanted ear fatigue. If boosting in this area, be very cautious,
especially on vocals. Human hearing is extremely sensitive at these frequencies with
the slightest boost or cut resulting in a huge change in the sound.

If there are any irritating frequencies, sweep through the sound with a medium curve at
about 10dB cut. Once the frequencies are located adjust the amount of cut as desired.
It may be necessary to compensate for any cutting in this area by applying a small
amount of boost around the 5-8 kHz area to liven up anything that may have been
severely affected by the cutting. This will help to preserve the overall brightness.
Outlined below are tips on equalizing common instrument types to get the best
potential sound. Remember that these are only guidelines. Every instrument is unique
and has different sound characteristics. Note that low cut applies to the low frequencies
which are relatively useless for the instrument sound and can be rolled away, if desired,
to lighten the mix and free up mix room.

Kick Drum Tips

• Boost around 5 kHz to increase attack

Frequency        Description
50-80 Hz         Bottom, boom
100-200 Hz       Roundness
250-800 Hz       Mud, clarity
2500-5000 Hz     Attack

Low Cut: None

Snare Drum Tips

• Cut around 60-120 Hz to thin the snare
• Boost around 5-6 kHz for a snappy sound

Frequency        Description
100-300 Hz       Fat, full
900 Hz           Boing
5000 Hz          Snap
6000-9000 Hz     Presence

Low Cut: 80 Hz

Hi Hats/Cymbals Tips

• Cut around 800 Hz if the cymbals sound clangy
• Boost around 3 kHz to add brightness

Frequency        Description
250-500 Hz       Clang, mud
1000-5000 Hz     Presence, attack
8000 Hz          Sparkle, hardness
9000 Hz          Presence
15,000 Hz        Air

Low Cut: 200 Hz

Toms Tips

• Cut around 100 Hz to eliminate a muddy sound
• To increase attack, try a small boost around 6 kHz

Frequency        Description
200-400 Hz       Fullness
5000-7000 Hz     Attack

Low Cut: None
Vocal Tips

• Cut around 100 Hz to eliminate a muddy sound
• Boost around 6000 Hz to increase clarity

Frequency        Description
150 Hz           Fullness
1000-5000 Hz     Presence
6000 Hz          Clarity, sibilance
10,000 Hz        Brightness, air

Low Cut: 80 Hz

Violin Tips

• Boost around 6000 Hz to add crunch

Frequency        Description
250-500 Hz       Bottom, boom
800-1000 Hz      Roundness
5000 Hz          Mud, clarity
7000-10,000 Hz   Attack
12,000 Hz        Air

Low Cut: 100 Hz

Piano Tips

• Cut around 80 Hz to eliminate excess boom
• Boost around 4 kHz to increase presence

Frequency        Description
100 Hz           Bottom
100-250 Hz       Body
250-1000 Hz      Mud
1000-5000 Hz     Presence, honk
10,000 Hz        Brightness

Low Cut: None
Bass Guitar Tips

• Roll off around 300 Hz to increase clarity if muddy
• Boost around 60 Hz to increase body

Frequency        Description
50-80 Hz         Bottom, body
100-500 Hz       Fullness, mud
700 Hz           Attack
1000-6000 Hz     Presence, pluck

Low Cut: None

Electric Guitar Tips

• Cut around 100 Hz to increase mix clarity

Frequency        Description
150 Hz           Body
300-500 Hz       Fullness, mud
1500-3000 Hz     Presence
6000 Hz          Brightness

Low Cut: 80 Hz

Acoustic Guitar Tips

• Boost around 5 kHz for increased presence

Frequency        Description
80 Hz            Bottom
200-300 Hz       Body
2000-5000 Hz     Brightness

Low Cut: 80 Hz
12 COMMON EQ’ING MISTAKES

For the audio engineer, equalization is one of the most important tools, but I'm not
telling you anything you don't know. We carve space for our favored frequencies in much
the same way a sculptor slashes statues out of stone.

Like all our most important processes, however, equalization has the potential to be
abused, and rather quickly at that. What follows is a list, in no particular order, of twelve
common EQ mistakes.

I provide the following disclaimer: if you find you’ve committed any of these sins, don’t
be hard on yourself—so have I! So have we all.

1. Adding Top-End to Every Track

We often make this mistake in the beginning of our careers; I know I did. I’d listen to
my favorite records, note their vibrancy and brilliance, and think I’d have to push the
top-end on every track to get in the same ballpark.

Before I knew it, I’d invariably mixed a tin can. Like a poor craftsman, I tended
to blame my tools: if only I had expensive plug-ins instead of these measly stock
processes! Imagine the sweetness I could impart to every track!

But piles and piles of Pultec emulations didn’t sweeten the pot; they only brought
bitterness, harshness, an ear-shredding mess. Death by a million treble boosts
produced another problem too: all those shelves must begin somewhere in the frequency
spectrum—often lower than I'd intended. At 6, 7, and 8 kHz, my mixes suffered from
an abundance of problems in these ranges.

It took a lot of wasted money and not-so-wasted time to realize that shelving
everything got in the way of the very brightness I sought. It turns out, though, that if
I can resist the desire, trust in the finished product, and see the statue hidden in the
marble, then I can uncover the true secret to satisfactory brightness: you only need
one or two elements to brighten a whole track. A vocal at 16 kHz here, some overheads
at 10 kHz there, and maybe (just maybe) a slight Baxandall boost on an important
buss—but leave the rest alone. You’ll be surprised at what happens if you trust in the
innate brightness of the material.

Here’s another tip: if your mix sounds too dull in comparison to your favorite mastered
track, drop the level of the reference so it matches on the Loudness or RMS meter. You
might be surprised what the loudness is adding in terms of high-end presence.
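That level-matching tip is easy to rough out in code. In the NumPy sketch below, the noise burst standing in for a hot mastered reference and the -18 dBFS target are both invented for illustration; the point is simply scaling the reference down to your mix's RMS so you compare tone rather than loudness:

```python
import numpy as np

def rms_db(x):
    """RMS level of a signal in dB relative to full scale."""
    return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)))

def match_rms(reference, target_db):
    """Scale the reference so its RMS lands at target_db (dBFS)."""
    gain_db = target_db - rms_db(reference)
    return reference * 10.0 ** (gain_db / 20.0)

rng = np.random.default_rng(0)
hot_reference = 0.5 * rng.standard_normal(44100)  # stand-in for a loud master
my_mix_level = -18.0                              # hypothetical mix RMS in dBFS
matched = match_rms(hot_reference, my_mix_level)
```

With the levels matched, any remaining difference in "brightness" is genuinely spectral, not just the louder track fooling your ears.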

11. Adding Too Many Resonant EQs in One Track

You hear a pesky snare resonance and you cut it. Then you hear another, due to the
focus you’ve put into listening, so you cut it. Now you hear another. Then another.
Soon, you’ve created a series of notches that sucks the life out of the snare. Does this
sound familiar?

Here’s the simple fix: Stop at two notches, and let it rest for a while. If the sound still
bothers you after you’ve moved on for ten or fifteen minutes, add another notch if you
must.

8. Using EQ as a Hammer for a Screw

Sometimes EQ isn’t the right job. You can use a hammer for a screw, sure, but you’ll
strip the paint off the wall surrounding it. It’s better to bang on a different item, such
as a nail. Similarly, sometimes a well-executed pan move, or a level change, minimizes
the need for frequency tailoring.
10. Not Using a Reference for EQ

Some people shy away from using reference mixes as musicians might shy away from
learning music theory, or painters might avoid traditional brush technique; they feel it
robs something from their artistry, their originality. This, I’ve often argued, is to their
detriment, for reference mixes are not employed to turn us into forgers—the very
differences of the performance you’re mixing will mitigate that concern right off the bat.

Instead, think of a reference mix as a compass (or a constellation) for navigating an


ocean at midnight. When the hour is dark, you sometimes don’t know left from right,
north from south. But your compass knows. The North Star, virtually immutable in the
sky as we’ve come to find it, also serves as an indicator.

So it is in the weeds of mixing; if we’ve worked for so long and so hard that we don’t
know what a good snare drum sounds like anymore, it helps to have a good snare
drum on hand to reference. Otherwise, we might deprive our snare of proper frequency
treatment.

I try to have a few references on hand when I'm mixing a track. The first is the
client's choice—the thing he or she wants the song to sound like. The others are my
own. They could be general (what I want the song to sound like) or specific (what I
want the kick to sound like).

They are level matched many times throughout the mix, but one thing about them
remains constant: after I've made huge decisions in my mix, I refer back to these
references to make sure my goals—the ones I've set for myself—are still being met.
This keeps me honest.

Choose reference audio that matches the style and material you are mixing
2. High Passing Just ‘Cuz

Similar to slapping on a compressor “just ’cuz,” needless high-passing is a life drain.


Yes, sometimes nasty bits of thump and hum swim in the depths below 100 Hz, and for
these, a cut can help. But resonances of a most pleasing, chest-pumping variety also
lurk down there. You don’t want to lose some life-affirming low end just because you
saw someone say to cut everything below 100 Hz in a tutorial, do you?

Indeed, strange pieces of advice often crop up around high-passing, such as “find the
instrument’s lowest note and high pass there.” The thinking, I believe, is that within the
context of the mix, there’s nothing of value below that frequency.

Okay, but here’s a hypothetical: Say your client paid top dollar to record her trumpet in
the finest recording studio in the world. Sure, we’re taught to mitigate needless room
sound, but is this particular room sound—the finest in the world, mind you—needless?
Could vital information denoting this sweet, sweet location lurk below the instrument’s
lowest note?

Let’s take the hypothetical even further: what if this trumpet player only blasted high
C’s for the whole song? Should we cut everything below that frequency? I’d say no.
You’d lose too much of the space.

As always, context is everything when taking tips into account, and if there's anything
I want to impress upon you, it's that high-passing is all about context—you don't just
do it willy-nilly. So if you need to high-pass, here are two tips:

First, protect any vital resonances. You can do this by adding a parametric boost just
below the low-cut, so that its downwards slope raises the low-cut a little bit.

Secondly, make sure your monitoring situation is accurate when dealing with low-end.
Know the frequency range of your monitors and reference cans, audition any low-cut
filters with and without your sub (if you have a sub), and make sure you know your
room (this will come into play later).
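To make the "protective boost" idea concrete, here's a hedged SciPy sketch. The 100 Hz low-cut, the 120 Hz "chest-thump" resonance, and the +4 dB boost amount are all invented for illustration; the sketch just verifies that a peaking boost parked near the cut hands some of the filtered region back:

```python
import numpy as np
from scipy import signal

fs = 44100
cutoff = 100.0     # low-cut frequency, as in the tip tables above
resonance = 120.0  # hypothetical resonance worth protecting

# Plain 2nd-order Butterworth high-pass (the "low cut")
b_hp, a_hp = signal.butter(2, cutoff, btype="highpass", fs=fs)

# Protective peaking boost near the cut (Audio EQ Cookbook peaking EQ, +4 dB)
amp = 10 ** (4.0 / 40)
w0 = 2 * np.pi * resonance / fs
alpha = np.sin(w0) / (2 * 1.5)  # Q of 1.5
b_pk = np.array([1 + alpha * amp, -2 * np.cos(w0), 1 - alpha * amp])
a_pk = np.array([1 + alpha / amp, -2 * np.cos(w0), 1 - alpha / amp])
b_pk, a_pk = b_pk / a_pk[0], a_pk / a_pk[0]

# Compare magnitude responses at the resonance frequency
freqs = np.array([resonance])
_, h_hp = signal.freqz(b_hp, a_hp, worN=freqs, fs=fs)
_, h_pk = signal.freqz(b_pk, a_pk, worN=freqs, fs=fs)
plain_db = 20 * np.log10(np.abs(h_hp[0]))
protected_db = 20 * np.log10(np.abs(h_hp[0] * h_pk[0]))  # cascade = product
```

In a DAW you'd do the same thing by ear: set the low-cut, then nudge a bell just above it until the body you liked comes back.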
3. EQing in Solo

This is probably the biggest mistake I could mention here. Others could mention it
too—heck, you’re smart; you’ve probably come across similar articles and seen this
mentioned; maybe you’re even expecting me to go into it, and are wondering why it’s
appeared eight hundred words in.

Indeed, the problems of EQ’ing in solo have long been storied: You start by
correcting a track with no reference of how it fits into the bigger picture, surely incurring
unnecessary problems down the line. Quite often you over-process the track or
overextend certain frequencies, and when the solo’d material is placed back in context,
the resulting track fights the vocal, the snare, or some other important element.

Now, here’s the question: If we all know that this is bad practice, why do we fall into
this rabbit hole? Why does every article and tutorial surrounding EQ mistakes inevitably
bring it up?

Because soloing falls within our basest instincts as engineers: when there’s a problem
in our studio, we troubleshoot it by testing one item at a time, often starting with a
cable. Likewise, when we hear a problem in a specific instrument, we want to home in
on it, and a great way to do that is to hit solo. Then we sweep for the problem (another
issue we’ll cover), but upon fixing it, we notice something else.

Then, of course, the creative solutions start pummeling us… “Ooh I could add harmonic
distortion to the low-mids and warm those up, that would be nice...” “…hmm, seems a
bit out of hand, how about some multiband compression…” “…Hey, aren’t I forgetting
something?”

Yes! You’re forgetting to mix! When you’re making a salad, you don’t spend hours
cutting up a single carrot! You mix ingredients together. The same applies here: hit that
solo button to confirm your deepest fears, and go ahead and let yourself deal with that
one troublesome resonance. But then immediately put the mix back in—or at least fold
in a couple of other tracks to give you some context. You’ll wind up chasing your tail
otherwise.
4. Not Trimming Fat off the Low-Mids

This isn’t the biggest mistake you can make, but it might be one that separates the
wheat from the chaff, so to speak. Certainly, in my experience, an excess of information
in the low-mids makes a tune sound less “radio ready.”

Yes, we can be rescued in this sin by our saviors, the mastering engineers; they exert
some finesse in helping us out of the low-mid tub. They’ve indubitably helped me, and
I’ve tried to pass the favor along to others.

But still, why not fix it yourself?

It’s roughly that 200 Hz to 600 Hz area I’m talking about here: if you place an
inarguably inferior mix next to a proven song in the same genre, you’ll surely notice
not only the comparable dullness of the top, but the fat around the lower middle. This
band, if not divvied correctly among instruments, can take away from the precision
of transients, the power of your harmonic backups—be they guitars or synths—and
contribute some indistinct qualities to the vocal as well.

Many factors contribute to this mistake, but I'll lay two down right off the bat: an inferior
monitoring/listening environment, and an underutilization of reference tracks.

The first problem is self-explanatory: if what you're hearing isn't correct, you'll never
know whether you're making the right moves.

Interestingly enough, this problem can be solved, at least in part, by addressing the
second issue. Yes, you should acquire decent monitors (easier to do at cheaper
price-points these days). You should also hang up some room treatment and, as a last
resort, use some sort of DSP compensation.

But you can also train yourself to understand the inadequacies of your monitoring
system with reference tracks you know particularly well. It goes back to that
“childhood” mix I talked about a few articles ago. If you play a mix you know from your
childhood through your monitors and take note of where you hear palpable differences
(such as, “Hey, this sounds quieter in the low-mids than I remember”) you’ll be clued
into how your room or system isn’t accurate in that circumstance.

The fix? Better monitors, more room treatment, and good reference mixes and
referencing practices.
5. Using Static EQ When Dynamic EQ Would Preserve Vibrancy

Dynamic EQs—as opposed to multiband compressors—seem to be in vogue these
days. Though the line between the two processes blurs, it's easy to see why a
dynamic EQ would come in handy: without messing up other bands, you can
effortlessly select a specific range of frequencies and process those to your liking.
Yes, sometimes a dynamic EQ is actually preferable to a static one, and yet,
sometimes engineers resolutely hold to their fixed EQ.

Here’s an example of when it might be wiser to grab a dynamic EQ: say you have a
build up in the vocal of that dreaded harmonic resonance point, 2 to 3 kHz. You try a
fixed EQ in that range, but wind up draining the singer’s luster.

Do you compromise on the cut, living with some harshness? You could—or you could
switch to a dynamic EQ. With such a process, you have more control over how the EQ
starts behaving. If the singer only hits that resonance during louder passages, a
dynamic EQ could help tame these frequencies when the vocalist starts belting.

Conversely, a dynamic EQ can help with a more constant problem: when we cut
those troublesome frequencies with a fixed attenuation, they're always heard at
lower levels. But a dynamic EQ gives you a time constant—an attack and release; you
can set the EQ to let a little of the meddlesome resonance through, tricking the ear
into thinking nothing is unnaturally missing, but simultaneously addressing the issue.
Gone, but not forgotten.

The same applies to lower frequency bands—a bass with an uneven bloom in the four
hundred range, for instance. Tubby guitars, of the sort referenced in the preceding
tip, can also be addressed with a dynamic EQ.
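Here's a rough, hard-knee sketch of the idea in Python/SciPy. This isn't any particular plug-in's algorithm; it's just a band-pass plus an envelope follower that attenuates a hypothetical 2-3 kHz trouble band only while it gets loud, exactly the "tame it when she belts" behavior described above:

```python
import numpy as np
from scipy import signal

fs = 44100
# Band-pass isolating a hypothetical 2-3 kHz trouble band
sos = signal.butter(2, [2000, 3000], btype="bandpass", fs=fs, output="sos")

def dynamic_band_cut(x, threshold=0.1, max_cut=0.8, attack=0.99, release=0.9995):
    """Attenuate the trouble band only while its envelope exceeds the threshold."""
    band = signal.sosfilt(sos, x)
    y = np.empty_like(x)
    env = 0.0
    for n in range(len(x)):
        # one-pole envelope follower: fast rise (attack), slow fall (release)
        coeff = attack if abs(band[n]) > env else release
        env = coeff * env + (1 - coeff) * abs(band[n])
        cut = max_cut if env > threshold else 0.0  # hard knee, for brevity
        y[n] = x[n] - cut * band[n]
    return y

t = np.arange(fs // 2) / fs
quiet = 0.05 * np.sin(2 * np.pi * 2500 * t)  # soft passage: should pass untouched
loud = 0.5 * np.sin(2 * np.pi * 2500 * t)    # belted passage: should be tamed
out_quiet = dynamic_band_cut(quiet)
out_loud = dynamic_band_cut(loud)
rms = lambda s: float(np.sqrt(np.mean(s[10000:] ** 2)))
```

A commercial dynamic EQ adds soft knees, per-band filters, and look-ahead, but the skeleton is this simple: a detector deciding how much of one band to remove, moment to moment.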

6. Using Dynamic EQ Instead of Static EQ

The inverse problem also rears its head; as dynamic EQs are all the rage these days,
they can be abused. An engineer can apply them all over a channel or buss, thus
promoting a weird, inorganic kind of multiband compression. As referenced in earlier
articles, improper multiband compression can lead to a host of problems.

The key, as always, is to listen. If the dynamic EQ you've enabled has had an
unnatural effect on either the overall resonance or dynamic interplay of the track,
that's an indicator it isn't the right tool for the job.
7. Using Linear-Phase EQ Incorrectly (and Vice-Versa)

This is a common mistake people make when starting out, because it's hard to know
the difference between your standard EQ and its linear-phase sibling without an
explanation; also, your DAW tends to provide both, and sometimes they look the same!
Perhaps you're wondering: why and when should I use one over the other?

Your typical channel EQ, unless otherwise indicated, causes phase shifts when
singling out frequencies. That is to say, it not only moves frequencies in level, but in
time. This might seem unwanted, but have no fear: the resulting delay is often
desirable—the particular phase distortion an EQ imparts can very well be tied to its
sonic signature.

Desirable or not, what these EQs do lack is literal transparency, even in their cleanest
iterations. The time differential ensures this. A linear-phase EQ, on the other hand,
takes these delays into account, recombining the signals at the output in a way that
mitigates this delay. The result? An EQ often described as “lowering the fader on a
frequency” rather than imparting any color.

But there are issues with linear-phase EQs too. They can bring about a horrid little
noise called "pre-ringing." Also, since they attempt to realign any time disparities at
the output stage, they can introduce overall latency—especially if they don't speak
nicely with your DAW. On a stereo mastering session this can be fine, as you're
mostly working with one stereo track. On a mix, you might get away with a few
linear-phase EQs (depending on the DAW), but piling them on can become
problematic, even with good delay compensation.

Here’s an example: have you ever worked with two kicks in a production, EQ’d one of
them, and noticed there was something funky about the way they hit together after-
ward? A peculiar flamming you couldn’t get rid of? This could very well be the cul-
prit—especially if your EQ is switchable within the plug-in itself. Check to make sure!
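You can see the latency trade-off in actual numbers. The SciPy sketch below (the 2047-tap length and the gentle -6 dB high shelf are arbitrary choices, not a recommendation) designs a linear-phase FIR and confirms its group delay is a constant (numtaps - 1) / 2 samples, which is precisely the delay your DAW has to compensate for:

```python
import numpy as np
from scipy import signal

fs = 44100
numtaps = 2047  # odd tap count -> symmetric, type-I linear-phase FIR

# A gentle linear-phase "EQ" curve: flat, easing down to -6 dB above 8 kHz
freq = [0.0, 6000.0, 8000.0, fs / 2]
gain = [1.0, 1.0, 0.5, 0.5]
fir = signal.firwin2(numtaps, freq, gain, fs=fs)

# Linear phase means a constant group delay of (numtaps - 1) / 2 samples
w, gd = signal.group_delay((fir, [1.0]), w=512, fs=fs)
expected_delay = (numtaps - 1) / 2  # 1023 samples, roughly 23 ms at 44.1 kHz
```

Sharper low-frequency resolution demands even more taps, hence more latency and more pre-ringing: there's no free lunch, which is why minimum-phase EQs still rule on individual mix channels.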
9. EQ’ing When You’ve Lost Perspective

My wife often comes in while I'm mixing to ask me if I'm hungry yet. I reply that
I'm not, and that'll be my answer to breakfast. When next she arrives with the same
question, and I give her the same retort, I'm invariably surprised to learn she is
inquiring about a very late lunch; so much time has passed that I have failed to notice
the needs of my own body (or marriage).

Working in this way—as I suspect you might, with or without the wife—how does one
keep anything like perspective? This is a hefty problem, to be sure. A deadline is a
deadline, but that doesn't change how our ears acclimate, react, and in many ways
worsen over hours of exertion. EQ mistakes, in such cases, are involuntary and
frequent in their rate of accumulation.

The best safeguard against this problem is to take breaks frequently—a fifty-minute
timer with a ten-minute allotment for breaks is not a bad idea at all. Such respites,
especially if submerged in silence, can restore us to sanity.

However, many are the moments when my clock signifies the time, and yet I choose
to ignore it. I have a groove going; I'm not about to sacrifice that groove. This too
must be factored into your decisions, as there is nothing so bad as losing one's mojo.
Thankfully we have a second tool to help us keep perspective, and that tool is a
reference mix. Or multiple reference mixes. Read on.
12. Excessive Frequency Sweeping While Looking for the Right EQ Curve

Ah, frequency sweeping! This is the practice of boosting a range of frequencies and
moving it around to locate the right center point, either for boosting or for cutting.
There are two schools of thought here. Some engineers advocate avoiding the
sweep altogether because it changes your perspective (they point out that for every
"right" frequency you isolate, you audition hundreds of wrong ones). Others don't care,
preferring the speed sweeping affords.

There is truth in both arguments. Me? I go for a halfway approach, because my
perception can absolutely be altered by excessive sweeping. But I'm not afraid of the
practice, within reason—after all, we have all these aforementioned tools to keep our
perspectives sharp. So I sweep at first, but when I'm close to the right frequency, I
stop sweeping, and do the following:

I set up the boosts or cuts as I think they should be, but do so in bypass. Then I
instantiate them and listen. If I'm wrong, I know right away and put it all back in
bypass. I rinse and repeat until I'm on the money. It seems like a slower process, but
it trains your ear to move faster the more you do it.

Conclusion

Why twelve EQ mistakes? Why not ten or fifteen? I could be enigmatic and say I've
given you one for every note of the tempered scale, but that would be rather
pretentious of me. What we have here are simply all the mistakes I could think of. I am
sure there are more. (Frequency masking! See? There's another one!)

However, watching out for these twelve mistakes will serve you well. Pay attention to
these potential pitfalls, and you’ll be less in danger of falling down the rabbit hole.
What is Sound Layering?

Making music is like painting a picture. You start with the basic outline and shape, then
gradually fill in the canvas with color, texture, light and shade, before adding some
finishing touches. Making the right choices at all stages of the creative journey — from
songwriting to tracking and arranging — can greatly speed up the mixdown process,
result in a more polished track, and avoid the need to ‘fix it in the mix’.

When layering sounds, there are two major considerations at play: each sound’s
frequency profile, and its amplitude over time. A plucked bass guitar note, for example,
will typically have most of its information in the low-end of the frequency spectrum,
with some upper-frequency information at the pluck part of the sound. Its envelope
shape will normally have a fast attack and a slowly decaying tail. A gong smash, on the
other hand, has a broad-band frequency make-up, with a slow attack and long sustain.

Frequency Layering

For centuries, arrangers and composers have thought about frequency make-up
of the different instruments they are scoring for. Acoustic instruments have a set
frequency range, governed by their physical size and resonant tendencies (among some
other factors). Composers use these practical limitations to their advantage. When
writing music for a string quartet, for example, a composer instinctively knows each
instrument’s limitation: the double bass can reach very low frequencies, while the
violin’s range extends into the highest registers. They score instrumental parts so that
certain orchestral sections complement each other — call and response between the
lower and higher parts of the ensemble, for example, or rousing glissandos that begin
in the lower registers and end up with the highest notes.

Extending this concept to your compositions—even if your sounds are electronic—is a
great way to manage the frequency make-up of your mix, and bring a professional
sound to your finished tracks. Carefully choose instruments with non-competing
frequency profiles to fill out your sonic canvas and keep the listener's ear engaged.
And don't overuse the same instrument across the whole musical range, especially if
you have a lot going on in your arrangement.
Arrangement Space

Busy mixes can be captivating and exciting, but overloading your tracks with instrumentation can be
overwhelming to the listener (let alone hard to mix). There are several ways to deal with busy mixes, but the
first step is always to ask yourself: 'Do I really need this part?' To be able to answer this faithfully is the true
test of a good producer: emotionally detaching yourself from the music you have toiled over, and judging it
from an objective standpoint, to leave behind the purest artistic statement.

Experimentation with the placement of your musical parts within your arrangement is a good way to keep
hold of passages you really love, but which threaten to clutter up your songs. It’s a common mistake to load
all your musical ideas near the front of the song, so you get to the peak of musical energy quickly and have
nowhere to go. If you have fallen into this trap, space out the musical motifs to allow the musical energy to
build slowly and dynamically.

Another trick is to distill your musical ideas into lighter phrases. Maybe you have a complicated, two-bar
guitar part that can be pared down to a few licks that do exactly the same job in the mix. Because there’s less
sonic information, you will end up with more space, more dynamic range, and more impact in your mix.

Of course, if you’ve tried moving or adapting troublesome parts, and they still doesn’t belong, you might
have no other option than to scrap them — the delete button is sometimes your best friend. Don’t throw your
good ideas away forever, though — create an ideas folder for when you need inspiration down the line.

Dynamics in Music

The dynamic character of any sound can be defined by its attack, decay, sustain and release — you'll find
these parameters on the amplitude envelope of a synthesizer, and on dynamics processors. By layering sounds
with different dynamic characteristics, you can change their shape, to help drums punch without
overwhelming the mix, make leads sound more pronounced without them being over-dominant, and keep
sustaining tones sounding consistent.
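If you want to experiment with these shapes outside a synth, an ADSR envelope is easy to sketch. The Python below uses piecewise-linear segments for simplicity (real synths often use exponential curves), building one punchy transient layer and one slow, sustaining pad layer of the kind you might stack:

```python
import numpy as np

def adsr(attack, decay, sustain, release, hold, fs=44100):
    """Piecewise-linear ADSR envelope (times in seconds, sustain level 0-1)."""
    a = np.linspace(0.0, 1.0, int(attack * fs), endpoint=False)   # rise to peak
    d = np.linspace(1.0, sustain, int(decay * fs), endpoint=False)  # fall to sustain
    s = np.full(int(hold * fs), sustain)                          # held note
    r = np.linspace(sustain, 0.0, int(release * fs))              # fade to silence
    return np.concatenate([a, d, s, r])

# A punchy transient layer vs. a slow, sustaining pad layer
punch = adsr(attack=0.002, decay=0.08, sustain=0.0, release=0.01, hold=0.0)
pad = adsr(attack=0.3, decay=0.1, sustain=0.7, release=0.5, hold=1.0)
```

Multiply any sample by one of these arrays and you've reshaped its dynamic character, which is exactly what a transient designer does with more finesse.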

Dynamic layering is time-consuming work. Let's say you have a hard-hitting kick drum, but your beat lacks
low-end energy when you play it on a full-range sound system. You don't want to lose the kick character you
have already; you just need to beef up the low rumble. It's not quite as simple as just placing a subby sample
over every kick hit. First, you will need to tune the sample to make sure it fits in the key of your song. Next
you need to shape the sound to make sure it doesn't overwhelm your existing kick, or mask any of the other
instruments. There are several tools you can use to do this, the first of which is a spectrum analysis meter
(there are some good free ones online, and your DAW will most likely have one). Next is a compressor; you
can use extreme attack and release settings to completely change the profile of the amplitude envelope. You
can even route another instrument's output to the compressor's side-chain input to have your new sample's
dynamics shaped by the signal of another. You can also use a transient designer tool, which gives you
comprehensive control over all of the above.
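The tuning step can be sketched numerically. Below is a naive Python example: the 50 Hz sub one-shot and the F1 target note are invented for illustration, and linear-interpolation resampling is the crudest possible repitcher (a real sampler does much better), but it shows the core idea of retuning by playback-speed ratio and verifying the new fundamental:

```python
import numpy as np

F1_HZ = 43.65  # equal-tempered F1 (A4 = 440 Hz); assumes a song in F

def retune(sample, detected_f0, target_hz):
    """Repitch a one-shot via naive linear-interp resampling (length changes)."""
    speed = target_hz / detected_f0  # read the sample faster or slower
    idx = np.arange(0, len(sample) - 1, speed)
    return np.interp(idx, np.arange(len(sample)), sample)

fs = 44100
t = np.arange(fs) / fs
sub = np.sin(2 * np.pi * 50.0 * t)  # stand-in for a 50 Hz sub one-shot
tuned = retune(sub, detected_f0=50.0, target_hz=F1_HZ)

# sanity-check the new fundamental with an FFT peak-pick
spectrum = np.abs(np.fft.rfft(tuned))
measured_hz = float(np.argmax(spectrum)) * fs / len(tuned)
```

Note that pitching down this way also lengthens the sample, one reason samplers offer time-stretch modes as well.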

Remember that layering sounds, especially low-frequency ones, can result in strange EQ inconsistencies,
caused by the emphasis and cancellation of certain frequencies when they come into and drift out of phase
with each other. Close attention to the frequency profile of your sounds is required — you might need to
make micro timing adjustments, whether in your sampler or on the timeline.
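The cancellation effect is easy to demonstrate. In this NumPy sketch, two identical 60 Hz layers (a made-up stand-in for stacked kick fundamentals) sum to double the level when aligned, and to nearly nothing when one is shifted by half a cycle:

```python
import numpy as np

fs = 44100
t = np.arange(fs // 10) / fs
layer_a = np.sin(2 * np.pi * 60 * t)  # 60 Hz fundamental of one kick layer

aligned = layer_a + np.sin(2 * np.pi * 60 * t)           # layers in phase
opposed = layer_a + np.sin(2 * np.pi * 60 * t + np.pi)   # half-cycle offset

rms = lambda s: float(np.sqrt(np.mean(s ** 2)))
```

At 60 Hz a half cycle is only about 8 ms, so tiny timing offsets between layered samples can swing the combined low end between "huge" and "hollow"; hence the micro-timing adjustments.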
Use a frequency analyzer to determine where in the sonic spectrum each instrument is
most intense, and ring-fence that frequency so that only one instrument is active
there. Other methods to manage this are to syncopate the kick and bass lines against
each other so they aren't always playing together.

The final piece of advice — something you've definitely heard before — is to listen to
your favorite tracks and take note of how instruments like the ones in your
compositions are layered. Do all the instruments play at the same time? Are certain
tones noticeably filtered? There's nothing wrong with taking your layering cues from
the pros!
