
Basics of Digital Recording

CONVERTING SOUND INTO NUMBERS


In a digital recording system, sound is stored and manipulated as a stream of
discrete numbers, each number representing the air pressure at a particular time.
The numbers are generated by a microphone connected to a circuit called an
ANALOG TO DIGITAL CONVERTER, or ADC. Each number is called a
SAMPLE, and the number of samples taken per second is the SAMPLE RATE.
Ultimately, the numbers will be converted back into sound by a DIGITAL TO
ANALOG CONVERTER or DAC, connected to a loudspeaker.

Fig. 1 The digital signal chain
Figure 1 shows the components of a digital system. Notice that the output of the
ADC and the input of the DAC consist of a bundle of wires. These wires carry the
numbers that are the result of the analog to digital conversion. The numbers are in
the binary number system in which only two characters are used, 1 and 0. (The
circuitry is actually built around switches which are either on or off.) The value of a
character depends on its place in the number, just as in the familiar decimal system.
Here are a few equivalents:
BINARY             DECIMAL
0                = 0
1                = 1
10               = 2
11               = 3
100              = 4
1111             = 15
1111111111111111 = 65535
Each digit in a number is called a BIT, so that last number is sixteen bits long in its
binary form. If we wrote the second number as 0000000000000001, it would be
sixteen bits long and have a value of 1.
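Binary-to-decimal conversion is easy to try for yourself. Here is a short Python sketch (the choice of Python is ours, purely for illustration) that reproduces the table above:

    # Parse each binary string as a base-2 integer, matching the table above.
    for bits in ["0", "1", "10", "11", "100", "1111", "1111111111111111"]:
        print(f"{bits} = {int(bits, 2)}")

    # And back again: decimal 65535 written as a 16-bit binary string.
    print(format(65535, "016b"))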
Word Size
The number of bits in the number has a direct bearing on the fidelity of the signal.
Figure 2 illustrates how this works. The number of possible voltage levels at the
output is simply the number of values that may be represented by the largest
possible number (no "in between" values are allowed). If there were only one bit in
the number, the ultimate output would be a pulse wave with a fixed amplitude and
more or less the frequency of the input signal. If there are more bits in the number
the waveform is more accurately traced, because each added bit doubles the
number of possible values. The distortion is roughly the value of the least
significant bit expressed as a percentage of the average signal value. Distortion in digital systems
increases as signal levels decrease, which is the opposite of the behavior of analog
systems.

Fig. 2 Effect of word size
The number of bits in the number also determines the dynamic range. Moving a
binary number one space to the left multiplies the value by two (just as moving a
decimal number one space to the left multiplies the value by ten), so each bit
doubles the voltage that may be represented. Doubling the voltage increases the
power available by 6 dB, so we can see the dynamic range available is about the
number of bits times 6 dB.
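As a quick check of that rule of thumb, here is a small Python sketch (our own illustration, not part of the original text) that computes the dynamic range for a few common word sizes:

    import math

    # Each bit doubles the largest representable value, and doubling
    # voltage adds about 6 dB, so dynamic range is roughly bits x 6 dB.
    for bits in (8, 16, 20, 24):
        levels = 2 ** bits
        dyn_range_db = 20 * math.log10(levels)
        print(f"{bits:2d} bits: {levels:>10,} levels, about {dyn_range_db:.0f} dB")

Sixteen bits works out to about 96 dB, the familiar figure quoted for CD audio.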
Sample Rate
The rate at which the numbers are generated is even more important than the
number of bits used. Figure 3 illustrates this. If the sampling rate is lower than the
frequency we are trying to capture, entire cycles will be missed, and the decoded
result would be too low in frequency and might not resemble the proper waveform
at all. This kind of mistake is called aliasing. If the sampling rate were exactly the
frequency of the input, the result would be a straight line, because the same spot on
the waveform would be measured each time. This can happen even when the sampling
rate is exactly twice the frequency of the input, if the input is a sine or similar waveform.
The sampling rate must be greater than twice the frequency measured for accurate
results. (The mathematical statement of this is the Nyquist Theorem.) This implies
that if we are dealing with sound, we should sample at least 40,000 times per
second.
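A small Python sketch (an illustration of ours, with made-up frequencies) makes the effect concrete: a 30 kHz cosine sampled at 40,000 times per second produces exactly the same numbers as a 10 kHz cosine, so the decoded result comes out at the wrong, lower pitch:

    import math

    rate = 40_000        # samples per second
    true_freq = 30_000   # Hz, above half the sample rate (20 kHz)
    alias_freq = rate - true_freq   # 10 kHz: the pitch we would actually hear

    # The two columns of samples are identical, number for number.
    for n in range(6):
        t = n / rate
        print(f"{math.cos(2 * math.pi * true_freq * t):+.3f}  "
              f"{math.cos(2 * math.pi * alias_freq * t):+.3f}")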

Fig. 3 Effects of low sample rates
The Nyquist rate (twice the frequency of interest) is the lowest allowable sampling
rate. For best results, sampling rates twice or four times this should be used. Figure
4 shows how the waveform improves as the sampling rate is increased.

Fig. 4 Effect of increasing sample rate
Even at high sample rates, the output of the system is a series of steps. A Fourier
analysis of this would show that everything belonging in the signal would be there
along with a healthy dose of the sampling rate and its harmonics. The extra junk
must be removed with a low pass filter that cuts off a little higher than the highest
desired frequency. (An identical filter should be placed before the ADC to prevent
aliasing of any unsuspected ultrasonic content, such as radio frequency
interference.)
If the sampling rate is only twice the frequency of interest, the filters must have a
very steep characteristic to allow proper frequency response and satisfactorily reject
the sampling clock. Such filters are difficult and expensive to build. Many systems
now use a very high sample rate at the output in order to simplify the filters. The
extra samples needed to produce a super high rate are interpolated from the
recorded samples.
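As a rough illustration of interpolation, here is a Python sketch of ours using simple linear interpolation (real players use far more sophisticated digital filters; this is only a stand-in for the idea):

    def oversample_2x(samples):
        # Double the sample rate by inserting a linearly interpolated
        # value between each pair of recorded samples.
        out = []
        for a, b in zip(samples, samples[1:]):
            out.extend([a, (a + b) / 2])
        out.append(samples[-1])
        return out

    print(oversample_2x([0, 4, 8, 4, 0]))
    # [0, 2.0, 4, 6.0, 8, 6.0, 4, 2.0, 0]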
By the way, the circuits that generate the sample rate must be exceedingly accurate.
Any difference between the sample rate used for recording and the rate used at
playback will change the pitch of the music, just like an off speed analog tape.
Also, any unsteadiness or jitter in the sample clock will distort the signal as it is
being converted from or to analog form.
Recording Digital Data
Once the waveform is faithfully transformed into bits, it is not easy to record. The
major problem is finding a scheme that will record the bits fast enough. If we
sample at 44,100 Hz, with a sixteen-bit word size, in stereo, we have to
accommodate 1,411,200 bits per second. This seems like a lot, but it is within the
capabilities of techniques developed for video recording. (In fact, the first digital
audio systems were built around VCRs. 44.1 kHz was chosen as a sample rate
because it worked well with them.)
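The arithmetic is straightforward; here it is as a back-of-the-envelope Python sketch of ours:

    sample_rate = 44_100   # samples per second
    word_size = 16         # bits per sample
    channels = 2           # stereo

    bits_per_second = sample_rate * word_size * channels
    print(bits_per_second)                    # 1411200
    print(bits_per_second * 3600 / 8 / 1e9)   # roughly 0.64 GB per hour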
To record on tape, a very high speed is required to keep the wavelength of a bit at
manageable dimensions. This is accomplished by moving the head as well as the
tape, resulting in a series of short tracks across the tape at a diagonal.
On a Compact Disc, the bits are microscopic pits burned into the plastic by a
laser. The stream of pits spirals just like the groove on a record, but is played from
the inside out. To read the data, light from a gentler laser is reflected off the surface
of the plastic (from the back; the plastic is clear) into a light detector. The pits
disrupt this reflection and yield up the data.
In either case, the process is helped by avoiding numbers that are hard to detect,
like 00001000. That example is difficult because it will give just a single very short
electrical spike. If some numbers are unusable, a larger maximum (more bits) must
be available to allow recording the entire set. On tape, twenty bits are used to
record each sixteen-bit sample; on CDs, twenty-eight bits are used.
Error Correction
Even with these techniques, the bits are going to be physically very small, and it
must be assumed that some will be lost in the process. A single bit can be very
important (suppose it represents the sign of a large number!), so there has to be a
way of recovering lost data. Error correction is really two problems: how to detect
an error, and what to do about it.

Fig. 5 Effects of data errors
The most common error detection method is parity computation. An extra bit is
added to each number to indicate whether the count of 1s in the number is even or odd. When
the data is read off the tape, if the parity bit is inappropriate, something has gone
wrong. This works well enough for telephone conversations and the like, but does
not detect serious errors very well.
In digital recording, large chunks of data are often wiped out by a tape dropout or a
scratch on the disk. Catching these problems with parity would be a matter of luck.
To help deal with large scale data loss, some mathematical computation is run on
the numbers, and the result is merged with the data from time to time. This is
known as a Cyclic Redundancy Check Code, or CRCC. If a mistake turns up in
this number, an error has occurred since the last correct CRCC was received.
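The particular check codes used on tape and disc are involved, but the flavour of the idea can be shown with a generic 8-bit CRC in Python (a sketch of ours, using the common CRC-8 polynomial; real recorders use different, longer polynomials and merge the check words into the data stream):

    def crc8(data: bytes, poly: int = 0x07) -> int:
        # Divide the data by a generator polynomial (here x^8 + x^2 + x + 1)
        # and keep the 8-bit remainder as the check value.
        crc = 0
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
        return crc

    check = crc8(bytes([12, 200, 13, 198, 15, 190]))    # a made-up run of data
    print(check == crc8(bytes([12, 200, 13, 198, 15, 190])))  # True: data intact
    print(check == crc8(bytes([12, 200, 13, 199, 15, 190])))  # False: one byte wrong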
Once an error is detected, the system must deal gracefully with the problem. To
make this possible, the data is recorded in a complex order. Instead of word two
following word one, as you might expect, the data is interleaved, following a
pattern like:
words 1,5,9,13,17,21,25,29,2,6,10,14,18,22,26,30,3,7,11,15,19,23,27,31, etc.
With this scheme, you could lose eight words, but they would represent several
isolated parts of the data stream, rather than a large continuous chunk of waveform.
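Here is that stride-of-four interleave as a Python sketch (our illustration; real systems use deeper, multi-stage interleaving):

    def interleave(words, stride=4):
        # Write the words out of order so that a burst of damage lands on
        # isolated samples rather than a continuous chunk of waveform.
        return [w for start in range(stride) for w in words[start::stride]]

    print(interleave(list(range(1, 33))))
    # [1, 5, 9, ..., 29, 2, 6, ..., 30, 3, 7, ..., 31, 4, 8, ..., 32]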
When a CRCC indicates a problem, the signal can be fixed. For minor errors, the
CRCC can be used to replace the missing numbers exactly. If the problem is more
extensive, the system can use the previous and following words to reconstruct a
passable imitation of the missing one. One of the factors behind the price
differences among digital systems is the sophistication available to reconstruct
missing data.
The Benefits of Being Digital
You may be wondering about the point of all of this, if it turns out that a digital
system is more complex than the equivalent analog circuit. Digital circuits are
complex, but very few of the components must be precise; most of the circuitry
merely responds to the presence or absence of current. Improving performance is
usually only a matter of increasing the word size or the sample rate, which is
achieved by duplicating elements of the circuit. It is possible to build analog
circuits that match digital performance levels, but they are very expensive and
require constant maintenance. The bottom line is that good digital systems are
cheaper than good analog systems.
Digital devices usually require less maintenance than analog equipment. The
electrical characteristics of most circuit elements change with time and
temperature, and minor changes slowly degrade the performance of analog circuits.
Digital components either work or don't, and it is much easier to find a chip that has
failed entirely than one that is merely 10% off spec. Many analog systems are
mechanical in nature, and simple wear can soon cause problems. Digital systems
have few moving parts, and such parts are usually designed so that a little vibration
or speed variation is not important.
In addition, digitally encoded information is more durable than analog information,
again because circuits are responding only to the presence or absence of something
rather than to the precise characteristics of anything. As you have seen, it is
possible to design digital systems so that they can actually reconstruct missing or
incorrect data. You can hear every little imperfection on an LP, but minor damage
is not audible with a CD.
The aspect of digital sound that is most exciting to the electronic musician is that
any numbers can be converted into sound, whether they originated at a microphone
or not. This opens up the possibility of creating sounds that have never existed
before, and of controlling those sounds with a precision that is simply not possible
with any other technique.
For further study, I recommend Principles of Digital Audio by Ken C. Pohlmann,
published by McGraw-Hill, Inc. (ISBN 0-07-050469-5).
Peter Elsea 1996





Source: http://artsites.ucsc.edu/ems/music/tech_background/TE-16/teces_16.html





The Mathematics Behind
Digital Technology

Nowadays, when you watch TV or listen
to a CD, you'd be forgiven for taking
the high quality of the audio and video for
granted. It seems a long time since we played
warped, scratched and dusty vinyl records, or
struggled to watch Match of the Day through
a blizzard of atmospheric 'snow'.
Today's digital technology reproduces with a
high degree of fidelity the images which were captured
at the filming location or the sound recorded
in the studio.
This Entry introduces a branch
of mathematics which underpins modern
communication technology and ensures that,
among other things, the television pictures
we watch and the recorded sounds we hear
are of optimum quality. It's known as coding
theory.
Going Digital





These days we use digital technology to
record, communicate and replay. The
analogue or real-life signals are converted
into long strings of binary digits when they
are recorded, stored and communicated, and
then converted back into an analogue signal
when played back to the viewer or listener.
One of the benefits of using binary numbers
is that we can use some very clever
mathematics to check whether the signals are
correct at each stage of the process and, in
many cases, automatically correct them
where they aren't. In practice, it's a
very complicated business, and the actual
methods used would be far beyond the scope
of this Entry. However, we'll examine one of
the principles involved, and by using simple
examples we'll describe the mathematics
behind one type of binary code, and how
errors can be automatically detected and
corrected.
Each digital signal consists of a series of
numbers. In the case of a CD, for example,
each number might indicate the sound level
at one particular point in time along one of
the stereo channels. The signal is read by
a laser which scans the binary information
burnt into the microscopic indentations in
the track of the disc. If the track is at a
particular depth at a given point, it reads 0,
whereas if it's indented (known as a pit),
then it reads 1. In the course of playing the
entire CD, the laser will detect many millions
of these numbers.
Let's say that each number we read can have
one of 16 values at any point in time (in fact,
standard compact disc technology uses a
more complex and entirely different coding
system, which we'll mention later). In binary,
these values will be represented by the
numbers 0000, 0001, 0010, through to 1111
(ie the decimal values 0 to 15). Binary digit is
abbreviated to 'bit', so we call this a 4-bit
message.
Now, consider the binary number 1001,
representing a value of 9. This could
represent, for example, the sound level on a
Nigel Kennedy CD through the left-hand
stereo channel at exactly 1 minute and 14
seconds into his rendition of the third
movement of 'Spring' from Vivaldi's Four
Seasons. What happens if there's an error
when we read it? Maybe the CD is slightly
warped, or there's a minute manufacturing
fault in the track depth, or maybe someone
jogs the CD player at that moment. If there's
just a small error, then one of the bits of our
number will be misread: a 0 will be
represented as a 1, or a 1 as a 0, in our binary
message.
Depending on which bit was incorrect, our
number 1001 could be read by the CD player
as any of the following: 0001, 1101, 1011 or
1000. But the trouble is, these are still valid
numbers in our set. Instead of 9, they
represent 1, 13, 11 or 8 respectively. Our CD
player can't know that anything is wrong - it
would just process it as if the violinist had
played a sound according to level 13, say,
rather than level 9 at that point, and so this
would corrupt the signal we hear. In practice,
a single error wouldn't usually be noticeable
to the listener, but in some cases it might be
heard as a small 'pop' or crackle on the
soundtrack. With lots of errors throughout
the recording, however, we would notice a
deterioration in sound quality, not unlike the
familiar effects of dust or scratches on a vinyl
LP.
So, if an error occurs with this code, the CD
player can't tell, as any error condition is the
same as another valid number in our code. In
order to tell if there's an error, we need to
add some additional information to our
message.
Error Detection
The simplest way is to add an extra bit to
the end of the code number - a check bit -
and choose its value according to a rule. For
example, we might say that every word in our
code has to have an even number of 1s. So
0000 would have a 0 appended to the end
(giving it zero 1s in total), 0001 would have a
1 appended (making two 1s in total), and so
on. Our new code looks like this:




Message bits Check bit Codeword
0000 0 00000
0001 1 00011
0010 1 00101
0011 0 00110
0100 1 01001
0101 0 01010
0110 0 01100
0111 1 01111
1000 1 10001
1001 0 10010
1010 0 10100
1011 1 10111
1100 0 11000
1101 1 11011
1110 1 11101
1111 0 11110
The first four digits of each codeword are our
original message, and each complete
codeword has an even number of 1s, or, as
mathematicians would say, has even parity.
Now, see what happens if an error occurs in
one bit. No matter where it occurs, it
invalidates our rule for the check bit. It will
change a 0 into a 1 or a 1 into a 0, and we end
up with an odd number of 1s as a result. So
we know if there's an error and we can
programme our equipment to check for this
condition and take appropriate action. In the
case of some communication systems, when
we detect an error, we might ask for the
signal to be retransmitted.
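In code, the whole scheme is only a few lines. Here's a Python sketch of ours (the function names are just for illustration):

    def add_parity(message: str) -> str:
        # Append a check bit so the codeword has an even number of 1s.
        return message + ("1" if message.count("1") % 2 else "0")

    def parity_ok(codeword: str) -> bool:
        return codeword.count("1") % 2 == 0

    print(add_parity("1001"))   # '10010', as in the table above
    print(parity_ok("10010"))   # True
    print(parity_ok("10011"))   # False: a single flipped bit is detected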
For the CD player, however, this isn't
possible. The music would sound very
strange and we'd lose a fair amount of
rhythm if we asked the laser to go back and
read each error again. If it were a
manufacturing fault causing the error, then
the laser may never be able to read the
correct value in any case. All is not lost,
though. All we need is a clever way to not
only detect an error but to make an informed
guess as to what the correct value might have
been.
In the case of the previous code, we may
know that something is wrong, but it's not
possible to tell which bit was in error. If we
receive the message 10101, say, containing
three 1s, then we know there's an error, but,
assuming only one bit was in error, the
intended message may have been any of five
values: 00101, 11101, 10001, 10111 or 10100
(in decimal: 2, 14, 8, 11 or 10). We don't
know which one of these is correct.
Error Correction and Hamming Codes
For automatic error correction we need to
add more check bits. We'll illustrate this
using one common set of codes used for 4-bit
messages, where three check bits are
appended, making each into a 7-bit
codeword. These are known as Hamming
codes, named after American mathematician
Richard Hamming (1915-1998).
There are many ways to encode the check
bits. In the following example, we'll derive
them according to these three rules:
1. The first, second, fourth and fifth bits
have even parity
2. The first, third, fourth and sixth bits
have even parity
3. The second, third, fourth and seventh
bits have even parity
This allows us to construct the following
code. The columns show the binary number
which is our message, the check bits we add
to it, as calculated by the above rules, and the
final codeword which we would transmit
(and hopefully receive).
Message bits Check bits Codeword
0000 000 0000000
0001 111 0001111
0010 011 0010011
0011 100 0011100
0100 101 0100101
0101 010 0101010
0110 110 0110110
0111 001 0111001
1000 110 1000110
1001 001 1001001
1010 101 1010101
1011 010 1011010
1100 011 1100011
1101 100 1101100
1110 000 1110000
1111 111 1111111
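If you'd like to check the table for yourself, here's a short Python sketch of ours which regenerates it from the three rules:

    def hamming_encode(message: str) -> str:
        # Append three check bits so that each rule's group of bits
        # has even parity.
        m1, m2, m3, m4 = (int(b) for b in message)
        c1 = m1 ^ m2 ^ m4   # rule 1: bits 1, 2, 4 and 5
        c2 = m1 ^ m3 ^ m4   # rule 2: bits 1, 3, 4 and 6
        c3 = m2 ^ m3 ^ m4   # rule 3: bits 2, 3, 4 and 7
        return message + f"{c1}{c2}{c3}"

    for value in range(16):
        msg = format(value, "04b")
        print(msg, hamming_encode(msg))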
The reason why this code is clever is a
property known as the minimum distance of
the code. Unlike our earlier code where an
error in one bit merely turned it into another
valid codeword, each of these codewords in a
Hamming code differs from any other
codeword by at least three bits. If there is an
error in any one of the seven bits, then we
can identify the nearest codeword which is
only one bit different from it.
If we received the code 1010111, say, then this
isn't a valid codeword in our list. If we search
through the allowable codes, we can see it's
only one bit different from 1010101, so it's
reasonable if we correct it to this. As the
minimum distance of our code is three bits,
we know there is no other codeword which is
only one bit different.
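Finding the nearest codeword can be done by brute force, since there are only 16 valid codewords to compare against. Here's a Python sketch of ours (the next section shows a much neater method):

    def hamming_distance(a: str, b: str) -> int:
        return sum(x != y for x, y in zip(a, b))

    # The 16 valid codewords from the table above.
    codewords = [
        "0000000", "0001111", "0010011", "0011100",
        "0100101", "0101010", "0110110", "0111001",
        "1000110", "1001001", "1010101", "1011010",
        "1100011", "1101100", "1110000", "1111111",
    ]

    received = "1010111"
    print(min(codewords, key=lambda cw: hamming_distance(cw, received)))
    # '1010101': the only codeword one bit away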
Automating the Error Correction Process
In the case of our Hamming code example,
we can automate this process using a neat bit
of mathematics. It involves a bit of modulo-2
matrix multiplication. Now, we appreciate
that maths doesn't float everyone's boat, so if
you really don't want to see the clever bit,
look away now.
Remember those rules we used to construct
our Hamming Code? We first need to write
them in the form of a binary matrix. There is
one row for each rule, and a column for each
bit of the code. We will therefore have three
rows and seven columns. Each cell has a 1 if
the rule checks that bit and 0 if it doesn't. In
our example, our first rule was that the first,
second, fourth and fifth bits would have even
parity, so the first row is 1101100, and so on.
Our completed parity check matrix looks like
this:
1 1 0 1 1 0 0
1 0 1 1 0 1 0
0 1 1 1 0 0 1
This is where the matrix multiplication
comes in. If we multiply our received
message by this matrix, it returns a three-bit
result. If that result is 000, no error has been
detected; otherwise it matches one of the
columns of the parity check matrix, and the
column it matches is the bit which is in error.
So, using the received code 1010111 once
again, we perform the following multiplication:
1 1 0 1 1 0 0       1
1 0 1 1 0 1 0       0       0
0 1 1 1 0 0 1   ×   1   =   1
                    0       0
                    1
                    1
                    1

(working modulo 2: each row of the matrix is
multiplied bit by bit with the received word,
and each sum keeps only its final binary digit)
The result 010 is the sixth column of our
parity check matrix, indicating that the sixth
bit of the received message 1010111 is in
error. When we correct this bit, we transform
the message into the nearest valid codeword,
1010101.
If we multiply every codeword we receive by
the parity matrix and then correct the bit
indicated by the result, then we will
automatically convert the received message
into a string of the nearest valid codewords.
This corrects every codeword which has no
more than one error in it.
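And here's the whole decoding step as a Python sketch of ours, using the parity check matrix above:

    H = [  # one row per parity rule
        [1, 1, 0, 1, 1, 0, 0],
        [1, 0, 1, 1, 0, 1, 0],
        [0, 1, 1, 1, 0, 0, 1],
    ]

    def correct(received: str) -> str:
        # Multiply the received word by H (mod 2); a non-zero result
        # picks out the column of H matching the bit in error.
        bits = [int(b) for b in received]
        syndrome = [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]
        if any(syndrome):
            columns = [[row[i] for row in H] for i in range(7)]
            bits[columns.index(syndrome)] ^= 1   # flip the offending bit
        return "".join(str(b) for b in bits)

    print(correct("1010111"))   # '1010101': the sixth bit is flipped back
    print(correct("1010101"))   # already valid, returned unchanged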
Multiple Errors
The Hamming (7,4) code isn't capable of
correcting more than one error bit, but other
coding mechanisms exist which do.
The more errors we wish to detect, the more
check bits we have to use and the longer our
message becomes as a result. There is a
trade-off between the efficiency or rate of a
code (ie what proportion of the code is the
actual message) and its error-correcting
ability.
Different codes are suitable for different
applications. In the case of a CD, we would
wish to make the code as efficient as we can,
so that we can store as much real data on the
disk as possible. The precision manufacturing
process ensures that errors are not
widespread, and any clusters of localised
errors, perhaps indicating a pressing defect
or a scratch, are minimised by a process of
interleaving the numbers such that data
from the same logical location are spread out
across different physical locations of the disk.
On the other hand, consider the example of a
space probe sending back photographs from
a distant planet. This will transmit a weak
signal, one which will be susceptible to many
errors caused by background noise. This
application lends itself to a coding system
which corrects multiple errors, ensuring that
the received picture is as high a quality as
possible. It will necessarily be an inefficient
code, however, with maybe five or six times
as many parity check bits as message bits,
and it will take far longer to transmit as a
result.
Back to those CDs
In practice, compact discs don't use
Hamming codes; they use something known
as a Reed-Solomon code. This code is too
complex to describe here, but it has the
advantage of being applicable to the problem
of detecting and correcting long bursts of
errors, as may be caused by a scratched CD.



Source: http://www.bbc.co.uk/dna/place-london/plain/A85655046
