2. Understanding synthesizers
2.1 What is a Synthesizer?
2.2 The characteristics of sound and its perception
2.2.1 Pitch
2.2.2 Loudness
2.2.3 Timbre
2.3 Different types of synthesis
2.3.1 Additive synthesis
2.3.2 Subtractive synthesis
2.3.3 Frequency Modulation
2.3.4 Wavetable synthesis
2.3.5 Linear arithmetic synthesis
2.3.6 Physical modelling
2.3.7 Other types of synthesis
CHAPTER I
In the beginning… a brief history of modern synths’ ancestors
Synthesis is a process through which compounds are obtained, possibly by artificial means, from simpler elements.
If we consider the matter carefully, the words we often use to describe the world around us carry much more information than we realise. In one of its meanings, the word "synthetic" has today become a synonym for something produced artificially, that is, by means of processes which imitate natural ones.
Why, then, dwell upon a word such as "synthesis"? What is the aim of this introduction to a subject concerning electronic music and the modern musical instruments commonly known as synthesizers?
Because, as will be discussed later, before speaking about synthesizers it is necessary to understand what the basis of reproducing and synthesizing sounds is, and which processes allow us to replicate the peculiar acoustic sensations linked to certain physical phenomena (the plucking of a string, the impact of two objects, air passing through a tube).
This topic has aroused man's curiosity and study since ancient times. It should therefore not seem odd to deal with synthesizers starting from times when the concept of electricity could not even be imagined. A great part of what has been done since the beginning of the 20th century influenced the evolution of modern keyboards, synths and, generally speaking, of electronic music.
Though electricity is the modern man's child, mechanics is an ancient science, and the automation and reproduction of sounds and music by means of machines is a matter dating back to the ancient Greeks.
The Greek engineer Ktesibios, who lived in the 3rd century B.C., designed the Hydraulos (see figure 1.1) in order to answer the question "How can you play more than one instrument at a time?". It was a machine basically made up of pipes placed inside a tub filled with water by means of a pump. The pressure was set by the weight of the water, and a series of mechanical levers and devices would push the air through separate pipes, according to the same principle that governs modern pipe organs. Organs can actually be considered the first rudimentary example of additive synthesizers, which as a matter of fact obtain their sound by combining a series of harmonics that vary over time. All this let a single person play music of a harmonic complexity which had been impossible to reach until then with a single instrument.
Let us remain in ancient Greece: the Aeolian harp is conversely the first example of a completely automated instrument. It was made up of two bridges crossed by strings, and it was the wind itself which would generate music by passing through them.
Figure 1.1 – the Hydraulos
After a fairly great temporal jump, we reach the 15th century, when the hurdy-gurdy, the ancestor of the barrel organ, was invented. This was, to a certain extent, the ancestral forerunner of something which can be defined as an instrument with a sequencer, that is, a means of automatically reproducing a melody or a pattern of notes.
If we temporarily diverge from musical instruments, we can consider a device which had a huge impact on the modern age: in 1641 the young Pascal invented the first adding machine, the Pascaline, which is to be considered the forerunner of modern computers and therefore of modern digital synths.
We then arrive in the 18th century, when experiments on electricity attracted greater and greater interest, as did the machines which used it. The first real electrical instruments in history, the Denis d'or (the Golden Denis) and the Clavecin Électrique (electric harpsichord) by Jean-Baptiste de Laborde, date back to the middle of the century. In the latter, using a small keyboard just like that of a harpsichord, you could control a series of hammers which, being charged with static electricity, would ring small bells.
A few years later the Panharmonicon appeared. This was a mechanical instrument provided with a keyboard which played automated flutes, clarinets, trumpets, violins, drums and other instruments. It was created by Johann Maelzel, who even persuaded Beethoven to compose for his invention. Even though the Panharmonicon was a mechanical and not an electric instrument, the idea behind it is the same one found inside modern sampling instruments.
Going back to the first electrical instruments, the conception of the electromechanical piano is due to Hipps (whose first name is unknown). This instrument was essentially composed of a keyboard which would activate electromagnets. These in turn would activate dynamos (small electrical current generators), the devices actually responsible for sound production. The same dynamos would, almost a century later, be used in Cahill's Telharmonium.
Elisha Gray, an inventor of the telephone (just like Bell, who however obtained the patent first), created the electroharmonic telegraph. Here, for the first time, oscillators appeared: as a matter of fact Gray discovered how to build a self-vibrating circuit, substantially a frequency oscillator. This system, originally intended to transmit music through telephone wires, was adapted by Gray to be used independently of a telephone, with a speaker to output sounds. For the record, Alexander Bell himself released a similar instrument which he called the electric harp.
In 1898 Thaddeus Cahill was granted a patent named "Art of and Apparatus for Generating and Distributing Music Electrically". His idea was to create an electric device by means of which music could be played and distributed to offices, hotels and houses through telephone wires. Thus the Telharmonium, or Dynamophone, came to light, which, like the electromechanical piano described above, would generate sound by means of dynamos. These produced alternating current, giving birth to a sinusoidal wave; in this case the dynamo is called an alternator. The sound was generated by means of electromagnets and very large tone wheels. This instrument is considered by many the first additive synthesis device.
We will close this brief survey of the ancestors of modern synths with the device that perhaps comes closest to them: the Singing Arc, the first fully electronic instrument ever made. It was conceived by William Duddell, who started from the technology used in the carbon arc lamp, a precursor of the incandescent bulb. The problem with this light-emitting device was that it was also a source of a great deal of noise, from a low hum to an annoying high-frequency whistle. Duddell, a physicist who had been asked to investigate the origin of the noise these lamps produced, discovered that the more current you applied to the lamp, the higher the sound frequency you obtained. To demonstrate the phenomenon he connected a keyboard to the lamp, which thus came to be called the Singing Arc. During a convention of electrical engineers in London this keyboard was connected to every lamp in the building, and it was discovered that not only did they all play together, but all the lamps connected to the same electric circuit in other buildings played simultaneously as well. A way to transmit music at a distance had thus been found.
No further development would follow, nor did Duddell request a patent for his device. He started touring the country to show his Singing Arc, which soon became a downright novelty.
And so we close this short historical journey through the evolution of the mechanical and electrical instruments which most deeply influenced the development of modern synths and, more generally, of every analogue and digital instrument applied to electronic music. Before resuming the journey from the 20th century onwards, in the next chapter we will look at the exact structure of a synth and at which algorithms and techniques are used in sound production.
CHAPTER II
Understanding Synthesizers
As I stated at the beginning of the first chapter, to understand the history and evolution of this peculiar "family" of instruments it is necessary to know what sound generation is based on, and to become clearly aware of the techniques used and of the algorithms chosen in every type of synth.
What basically characterises a sound, from a strictly physical point of view, are frequency and amplitude. The former, measured in Hz (cycles per second), is bound to the vibration speed of the source object; the latter, measured in decibels, is bound to the width, or more properly to the energy, of the oscillation itself.
It is now necessary to point out the link between sound as a physical phenomenon and the corresponding auditory sensation. So, the features of pitch, bound to frequency, of loudness, bound to amplitude, and of timbre, which indicates the colour, or quality, of a sound, will now be taken into consideration.
2.2.1 Pitch
Pitch is the quality of a sound which makes some sounds seem “higher” or “lower” than
others. Pitch is determined by the number of vibrations produced during a given period of
time, corresponding to the frequency of the sound signal.
An average person can hear a sound from about 20 Hz to about 20,000 Hz. Above and below this range there are ultrasound and infrasound, respectively. The upper frequency limit drops with age.
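The mapping from pitch to frequency can be made concrete with a small sketch. It assumes twelve-tone equal temperament, A4 tuned to 440 Hz, and MIDI note numbering; none of these conventions are introduced in the text, so treat them purely as illustration:

```python
def midi_to_hz(note: int) -> float:
    """Frequency of a MIDI note in equal temperament.
    A4 (MIDI note 69) is tuned to 440 Hz; each semitone
    multiplies the frequency by the twelfth root of two."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

print(midi_to_hz(69))  # 440.0 (A4)
print(midi_to_hz(81))  # 880.0 (A5: one octave up doubles the frequency)
```

The octave doubling in the output is exactly the "higher"/"lower" perception the section describes: equal pitch steps correspond to equal frequency ratios, not equal frequency differences.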
2.2.2 Loudness
Loudness is the amount or level of sound, corresponding to the amplitude of the sound wave. A change of loudness in music is called dynamics, and loudness is often measured in decibels (dB). Sound pressure level (SPL) is a decibel scale which uses the threshold of hearing as its zero reference point. While there is technically no upper limit to amplitude, sounds begin to damage the ears at about 85 dB, and sounds above approximately 130 dB (the so-called threshold of pain) cause pain. In this case, too, the range depends on the individual involved and on age.
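The SPL scale mentioned above can be written down directly. The reference pressure of 20 µPa used here is the conventional threshold of hearing (a standard value, not given in the text):

```python
import math

P_REF = 20e-6  # reference pressure in pascals: the threshold of hearing

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level in dB relative to the threshold of hearing."""
    return 20.0 * math.log10(pressure_pa / P_REF)

print(spl_db(20e-6))            # 0.0 dB: the zero reference point itself
print(round(spl_db(40e-6), 1))  # 6.0 dB: doubling the pressure adds ~6 dB
```

The logarithmic scale is what lets a single range of numbers span everything from a whisper to the 130 dB threshold of pain.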
2.2.3 Timbre
In music, timbre is the quality of a musical note which allows you to distinguish between different sound sources producing sound at the same pitch and loudness.
The vibration of sound waves is quite complex: most sounds vibrate at several frequencies simultaneously, and the additional frequencies are called overtones or harmonics. The relative strength of these overtones helps determine a sound's timbre.
Though the phrase "tone colour" is often used as a synonym for timbre, colours of the optical spectrum are not generally associated with particular sounds; the sound of an instrument is more likely described with words like warm, harsh, dull, brilliant, pure or rich.
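The role of overtones can be illustrated with a short additive sketch: two tones at the same pitch differ in timbre only because their harmonic amplitudes differ. The amplitude lists below are invented for illustration, not measured spectra:

```python
import math

def harmonic_tone(t: float, f0: float, amplitudes) -> float:
    """Sample, at time t, a tone built from sinusoidal partials at
    integer multiples of the fundamental f0; the relative amplitudes
    of the partials shape the timbre."""
    return sum(a * math.sin(2 * math.pi * (k + 1) * f0 * t)
               for k, a in enumerate(amplitudes))

# Same pitch (220 Hz), different overtone balance, different timbre:
mellow = [1.0, 0.2, 0.05]            # energy mostly in the fundamental
bright = [1.0, 0.8, 0.6, 0.5, 0.4]   # strong upper harmonics
sample = harmonic_tone(0.001, 220.0, bright)
```

Both waveforms repeat at 220 Hz, so the pitch is identical; only the spectral balance, and hence the perceived colour, changes.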
2.3 Different types of synthesis
Just as, by means of analysis, it is possible to obtain the main parameters characterising the spectrum of a sound signal, so the aim of synthesis is to artificially produce a sound with some required timbre features, by generating a variation of electrical voltage through either analogue or digital techniques.
In recent decades a series of synthesis techniques have been devised and formalised. They differ from one another in their mode of use and in the range of timbres which can be obtained.
The synthesis techniques which will be examined are the most significant from a historical and practical point of view. It is however important to underline that some of these techniques are not exclusive; that is, they can be combined to obtain new, more interesting sounds, as happens today in the software and hardware of modern synthesizers.
devices, called filters. Figuratively speaking, this technique resembles the action of a sculptor who, starting from a formless piece of marble, subtracts material to define the required shape.
This method can be used to effectively recreate natural instrument sounds as well as textured, surreal sounds. Obviously the filters are crucial: the better the filters, and the wider the choice of available filters, the better the end result will be.
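As a minimal sketch of the subtractive idea, here is a one-pole low-pass filter. Real synthesizer filters (resonant, multi-pole ladder designs and so on) are far more elaborate, but the principle is the same: start from a harmonically rich signal and carve the spectrum down:

```python
def one_pole_lowpass(samples, coeff):
    """Each output sample moves a fraction `coeff` (0..1) of the way
    toward the input, smoothing fast changes and thus attenuating the
    high-frequency content of the signal."""
    out, y = [], 0.0
    for x in samples:
        y += coeff * (x - y)
        out.append(y)
    return out

# A sharp step at the input: the filter rounds off the edge.
print(one_pole_lowpass([1.0, 1.0, 1.0, 1.0], 0.5))
# [0.5, 0.75, 0.875, 0.9375]
```

The sharp edge of the step, which carries the high-frequency energy, is the part the filter removes; the sculptor analogy in the text is exactly this carving away of spectral material.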
note.
A common addition to wavetable synthesis is that each instrument waveform contains a loop region. This region starts after the attack segment of the digital audio and repeats while the instrument's note is sustained. Then the release segment of digital audio finishes off the note.
Using envelopes and modulators, these waveforms can be processed and layered to form complex and interesting sounds. It is clear how, for this type of synth, physical memory is crucial to house the waveforms, and that is why this technique has been used massively only in recent times.
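The attack-then-loop behaviour described above can be sketched as follows (the four-sample "waveform" is a toy stand-in for real instrument data, and release handling is omitted):

```python
def render_note(waveform, loop_start, loop_end, n_samples):
    """Play a stored waveform: run once through the attack segment,
    then cycle the loop region for as long as the note is sustained."""
    out, pos = [], 0
    for _ in range(n_samples):
        out.append(waveform[pos])
        pos += 1
        if pos >= loop_end:   # end of loop region: jump back and repeat
            pos = loop_start
    return out

# Samples 9, 7 form the attack; samples 5, 3 are the sustain loop.
print(render_note([9, 7, 5, 3], loop_start=2, loop_end=4, n_samples=8))
# [9, 7, 5, 3, 5, 3, 5, 3]
```

The loop is what keeps memory requirements bounded: an arbitrarily long sustained note only needs the short looped region to be stored, which is why the technique became practical once memory grew cheap enough to hold the waveforms at all.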
air). Resonators simulate the instrument's response to the exciter, and usually define how the instrument's physical elements vibrate.
This type of sound generation can be extremely complex and requires heavy computation, so many currently available physical-modelling synths use short-cuts or watered-down methods which enable them to respond in real time.
One curious fact is that the Nord Lead (a famous series of synthesizers by Clavia) uses PM synthesis to emulate an analogue synthesizer.
CHAPTER III
From first analog synths to modern software instruments
Having dealt with what was done at the beginning of the 20th century and with the concepts underlying modern synthesizers and their sound generation, I am now going to speak about their origin and evolution, from the very first experiments until today.
Synthesizer, also known as the Olson-Belar Sound Synthesizer. This huge and unwieldy system (see figure 3.1) was controlled by a punched paper roll, similar to a player-piano roll. A keyboard was used to punch the roll, and each note had to be individually described by a number of parameters (frequency, volume, envelope, etc.). The output was fed to disk recording machines, which stored the results on lacquer-coated disks. Programming this machine must have been a laborious and time-consuming process, but it caught the attention of electronic music pioneers such as Milton Babbitt.
The passage from organs to synths was however gradual, and it produced lots of very interesting hybrids. Though even nowadays organs and synths are classified as distinct musical instruments, the sound generation occurring inside them is based on common principles.
In 1967, for example, Tsutomu Katoh, the founder of Keio (which would subsequently be called Korg), asked Fumio Mieda, an engineer who wanted to develop musical keyboards, to design and realise a keyboard to be built and sold. Thus the Korg Organ was born (see figure 3.2), which, unlike other organs on the market, had programmable voices.

Figure 3.2 – the Korg Organ
Robert Moog, who would become one of the most legendary synthesizer producers, created the first playable, configurable modern music synth, and displayed it at the Audio Engineering Society convention in 1964. Starting out as a curiosity, it had caused a real sensation by 1968.
The Moog Modular synthesizers (see figure 3.3) became the first embodiment of the modern analogue synth. To get a sound you had to plug each module into another. You could not memorize sounds (you had to draw the connections on a sheet of paper) and the keyboards were monophonic (only one note at a time), but you could create an infinite number of sounds.

Figure 3.3 – Moog Modular model 3C

Though Donald Buchla got started a few months ahead of Moog, the first Buchla Box would appear later. The two designs are very similar: both are modular and use voltage control for the oscillators and amplifiers, but the Buchla Box deliberately avoids keyboard controllers.
Other early commercial synthesizer manufacturers included ARP, founded by Alan Robert Pearlman, who also started with modular synthesizers before producing all-in-one instruments, and the British firm Electronic Music Studios (EMS).
One major innovation by Moog came in 1971, when they made a synthesizer with a built-in keyboard and without a modular design. The analogue circuits were retained, but made interconnectable with switches in a simplified arrangement called "normalization". Though this design was less flexible than modularity, it made the instrument more portable and much easier to use. This first prepatched synthesizer, the Minimoog (see figure 3.4), became very popular, with over 12,000 units sold. It also deeply influenced the design of nearly all subsequent synthesizers.
Korg, too, created a similar, very small monophonic instrument, whose success convinced the company to invest substantial resources in developing further synthesizers.
So, by the beginning of the 1970s, even if the market demand for organs such as Baldwin, Hammond and Lowrey was clearly higher, the market for synth keyboards was starting its expansion, thanks above all to miniaturized solid-state components that let synthesizers become self-contained and portable. They began to be used in live performances, soon becoming a standard part of the popular-music repertoire (Giorgio Moroder's "Son of My Father", 1971, became the first #1 hit to feature a synthesizer).
means that it used a sensor for each key. It also supported a primitive sound-settings memory based on a bank of micropotentiometers.
When microprocessors first appeared on the scene in the early 1970s, they were costly and difficult to apply. The first practical polyphonic synth, which was also the first to fully apply a microprocessor as a controller, was the Sequential Circuits Prophet-5 (see figure 3.6), in 1977. For the first time, musicians had a practical, programmable, polyphonic (5-voice) synthesizer that allowed all knob settings to be saved in computer memory (32 digital memory slots recording all the synth's parameters).

Figure 3.6 – Sequential Circuits Prophet-5

The Prophet-5 was also physically compact and lightweight, unlike its predecessors. This basic design paradigm became a standard among synthesizer manufacturers, slowly pushing out the more complex (and more difficult to use) modular design.
So, each generated digital sample corresponds to a sound pressure value at a given sampling frequency (typically 44,100 samples per second). In the most basic case, each digital oscillator is modelled by a counter: for each sample the counter is advanced by an amount that depends on the frequency of the oscillator (see par. 2.3, Different types of synthesis, for further details).
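The counter described above is usually called a phase accumulator. A minimal sketch, with the phase expressed in cycles (a convenient but arbitrary convention):

```python
import math

SAMPLE_RATE = 44100  # samples per second, as in the text

def sine_oscillator(freq_hz, n_samples):
    """Digital oscillator as a counter: every sample, the phase is
    advanced by an increment proportional to the frequency, wrapping
    around once per cycle of the waveform."""
    phase, inc = 0.0, freq_hz / SAMPLE_RATE  # increment in cycles/sample
    out = []
    for _ in range(n_samples):
        out.append(math.sin(2.0 * math.pi * phase))
        phase = (phase + inc) % 1.0  # wrap: 1.0 means one full cycle
    return out
```

Higher frequencies simply mean bigger increments, so the counter wraps more often; replacing the `sin` call with a table lookup gives a basic wavetable oscillator built on the same counter.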
The world of synthesizers then became dominated by the FM synthesis model, which uses sine-wave oscillators; to be sufficiently stable, these need to be generated digitally.
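In its classic two-operator form, FM evaluates a sine carrier whose instantaneous phase is pushed around by a second sine; the modulation index controls how many strong sidebands, and hence how bright a timbre, appear. A sketch, with illustrative operator frequencies and index values chosen for the example:

```python
import math

def fm_sample(t, carrier_hz, mod_hz, index):
    """One sample of two-operator FM: a sine carrier whose phase is
    modulated by a sine at mod_hz, scaled by the modulation index."""
    return math.sin(2 * math.pi * carrier_hz * t
                    + index * math.sin(2 * math.pi * mod_hz * t))

# With index = 0 the modulator has no effect and a pure sine remains;
# raising the index adds sidebands and brightens the timbre.
pure = fm_sample(0.25 / 440, 440.0, 110.0, 0.0)
rich = fm_sample(0.25 / 440, 440.0, 110.0, 5.0)
```

Because everything reduces to evaluating sines at precisely controlled phases, the technique is a natural fit for the stable digital oscillators the text mentions.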
The first patent covering FM sound synthesis was licensed in 1980 to the Japanese manufacturer Yamaha, which in 1983 produced the first FM digital keyboard, the DX-7 (see figure 3.7). It was about the same size and weight as the Prophet-5 and was reasonably priced. There were no more front-panel knobs, but a set of countless preset banks (sounds programmed in the factory). The quality and precision of the sounds increased, and so did the sound tuning, compared to the previous synthesizer generation. The DX-7 was a smash hit and can be heard on thousands of pop recordings from the 1980s onwards.

Figure 3.7 – Yamaha DX-7 digital synth

When Yamaha later licensed its FM technology to other manufacturers, almost every personal computer in the world came to contain an audio input-output system with a built-in four-oscillator FM digital synthesizer.
Another very important invention dates back to 1983: MIDI, a digital control interface which made synthesizers more usable and versatile. The so-called General MIDI (GM) standard was devised in the late 1980s to serve as a consistent way of describing the set of synthesized tonalities available to an electronic digital instrument (or a personal computer) for the playback of a musical score. This interface and communication protocol (together with the .mid file format) are important standards still in use today.
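At the wire level, MIDI is a stream of small byte messages. For example, a Note On message is three bytes: a status byte carrying the message type and channel, then the note number and the velocity. This layout comes from the MIDI 1.0 specification rather than from the text above:

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a raw MIDI Note On message: status byte 0x90 combined
    with the 4-bit channel, then 7-bit note number and velocity."""
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, note, velocity])

# Middle C (note 60) on channel 0, struck at velocity 100:
print(note_on(0, 60, 100).hex())  # 903c64
```

The fact that the protocol transmits note numbers and gestures, rather than audio, is precisely what made it a universal control interface between instruments of different manufacturers.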
Roland, another great Japanese electronic-instrument manufacturer, truly entered its golden era with the advent of digital synthesis. It was 1987 when Roland released the D50 (see figure 3.8), a synth destined to become Roland's standard keyboard: great velocity and aftertouch sensitivity, with splits and layers. It was also expandable, with options for ROM and RAM cards, a good MIDI implementation, a programmer, and eventually third-party expansion boards. But the most important reason for its success, toppling the DX-7 from the throne it had occupied for four years, was the fact that it sounded amazing, thanks to the new LA synthesis developed by Roland itself in those years.

Figure 3.8 – Roland D50

The enormous popularity of the D50 caused a whole support industry to spring up. For many users it was unnecessary to learn how to program the instrument: they simply plugged in their favourite sounds and played.
The possibility of digitizing a sound source created a new breed of electronic instruments: the samplers. A sampler is a device that can record and store audio-signal samples, generally recordings of existing sounds, and play them back at a range of pitches. An early form of sampler was the Mellotron, which used individual pre-recorded tape strips, one under each key of the keyboard.
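The "range of pitches" part can be sketched as naive resampling: stepping through the stored sample faster or slower than it was recorded shifts the pitch, and, as on early samplers, also changes the duration. No interpolation is done in this toy version:

```python
def play_at_ratio(sample, ratio):
    """Crude sampler-style pitch shift: read the stored sample at
    `ratio` times the original rate (nearest-neighbour lookup), so
    ratio=2.0 plays one octave up and lasts half as long."""
    out, pos = [], 0.0
    while int(pos) < len(sample):
        out.append(sample[int(pos)])
        pos += ratio
    return out

print(play_at_ratio([0, 1, 2, 3, 4, 5], 2.0))  # [0, 2, 4]: one octave up
print(play_at_ratio([0, 1, 2, 3, 4, 5], 1.0))  # unchanged playback
```

Real samplers interpolate between stored samples and often keep one recording per key zone precisely to limit how far any single sample must be stretched this way.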
Sampling can also be used in combination with other synthesizer effects, processing the sound with filters, reverbs, ring modulators and the like.
By the end of 1985 sampling had been around for six years. The affordable end of the market was dominated by the Ensoniq Mirage, while Kurzweil and E-mu dominated the high end. Roland tried to fill the gap between them with the S-50 sampler, a very good and innovative instrument: 16-voice polyphonic, multitimbral, velocity- and pressure-sensitive, and offering splits, layers, and velocity crossfading.
The S-50 could have been a winner, but Roland had come to the market a bit too late. Akai launched their S900 the same year (1986) and established it as the de facto standard (see figure 3.9).
So the history of synthesizers has continued in recent years, with old and new leading manufacturer names such as Yamaha, Roland, Korg, Kurzweil and Alesis. Most modern synthesizers are now completely digital, including those which model analogue synthesis using digital techniques. These keyboards are now, in many cases, real workstations, integrating ever wider sound banks, advanced sequencing functions, MIDI recording, sampling and even mastering and video-signal transmission capabilities.
In spite of the modern plug-in approach, they try to retain the look and feel of the old analogue hardware models, for example through the use of the old jack patch cables (yes, even this has been emulated!). These features do not bewilder musicians familiar with particular procedures, and let them keep working with the more versatile and comfortable virtual release of their favourite synthesizers.

Figure 3.11 – Arturia Moog Modular V soft-synth

But soft-synths are not only an emulation of old analogue synths. In actual fact, today you can have at your disposal impressive banks of sounds of every kind, which renowned software samplers such as Native Instruments Kontakt and Steinberg HALion can manage in real time with an ease and efficiency that was not even possible with hardware samplers.
The virtual revolution has not only recreated and improved some synths which made the history of music; it has also made them affordable to a greater and greater number of musicians, since they cost noticeably less and, thanks to their remarkable ease of use, allow a more casual approach. Besides, with further increases in computing power, many synthesis algorithms may be improved later on, which will make sound generation more and more powerful and versatile.