
2013

Fractal Sound. Research Folder

Robert Braileanu
University of West London

Contents

1. Introduction
2. Pierre Fatou
   Pierre Joseph Louis Fatou (born 28 February 1878 in Lorient, France; died 10 August 1929 in Pornichet, France)
3. Gaston Julia
   Gaston Maurice Julia (born 3 February 1893 in Sidi-bel-Abbès, Algeria; died 19 March 1978 in Paris, France)
4. Natural Fractals3
   4.1. Astronomy
      4.1.1. Galaxies
      4.1.2. Rings of Saturn
   4.2. Bio/Chem
      4.2.1. Bacteria Cultures
      4.2.2. Chemical Reactions
      4.2.3. Human Anatomy
      4.2.4. Molecules
      4.2.5. Plants
   4.3. Other
      4.3.1. Clouds
      4.3.2. Coastlines & Borderlines
      4.3.3. Data Compression
      4.3.4. Special Effects
5. Other Fractals4
   A Simple Explanation Of Fractal Geometry
   5.1. The Cantor Set
   5.2. The Koch Curve
   5.3. The Sierpinski Triangle
6. Algorithmic Composition5
   6.1. Introduction
   6.2. Pre/Non-Computer Practices
   6.3. Use Of The Computer
   6.4. Closing
7. Max MSP
   7.1. Introduction
   7.2. Language
9. References

Data CD content

Order   Content                                                      Type
1       Michael Hogg - Slow Deep Mandelbrot Zoom                     Video
2       John Cage - Atlas Eclipticalis                               Audio
3       Lejaren Hiller - Illiac Suite for String Quartet - Part 1    Audio
4       Iannis Xenakis - ST/10=1,080262                              Audio
5       Fractal Sound Generator                                      MaxMSP application

1. Introduction

The purpose of this document is to provide additional information regarding matters discussed in the main written document, Fractal Sound; it is an integral part of this project, aiding the reader in fully engaging with the subject. The information provided here is in the form of website excerpts, which have been referenced accordingly in Section 9; a superscript index identifies the reference number in the list (e.g. 1). Furthermore, original ideas based on materials gathered as part of this project are included in this document. This document has only been submitted as a physical copy; however, should a soft copy be required, a digital version can be produced at the reader's request. Contact details can be found on the opening page of the main document.
This collection of research material has been produced for logistical reasons. A limited word count applies to the main body of work and, as such, this document includes additional information which could not be included in the main written account. A notation system is in place towards the end of the executive summary, linking the two documents together. It is recommended that the system is acknowledged and used appropriately, as it provides a simple means of fully grasping the ideas being discussed.

2. Pierre Fatou

Pierre Joseph Louis Fatou
Born: 28 February 1878 in Lorient, France
Died: 10 August 1929 in Pornichet, France

Pierre Fatou entered the École Normale Supérieure in Paris in 1898 to study
mathematics. He graduated in 1901 and then decided that the chance of obtaining a
mathematics post was so low that he would apply for a position in the Paris
Observatory.
Having been appointed to the astronomy post, Fatou continued to work on
mathematics for his thesis. He submitted his thesis in 1906 which was on integration
theory and complex function theory. Fatou proved that if a function is Lebesgue
integrable, then radial limits for the corresponding Poisson integral exist almost
everywhere. This result led to generalisations by Privalov, Plessner and Marcel Riesz.
Although not giving a complete solution, Fatou's work also made a major contribution
to finding a solution to the related question of whether conformal mapping of Jordan
regions onto the open disc can be extended continuously to the boundary. In 1907
Fatou received his doctorate for this important work.
The book [2] presents a beautiful historical account of the global theory of iteration of
complex analytic functions. Fatou enters this history in a rather complicated way and
the book does an excellent job in explaining an interesting episode in the history of
mathematics.
In 1915, the Académie des Sciences in Paris gave the topic for its 1918 Grand Prix.
The prize would be awarded for a study of iteration from a global point of view. The
author of [2] suggests that mathematicians such as Appell, mile Picard, and Koenigs
had put forward the idea to the Acadmie des Sciences because they were hoping for
developments of Montel's concept of normal families. Fatou wrote a long memoirs
which did indeed use Montel's idea of normal families to develop the fundamental
theory of iteration in 1917. Although we do not know for certain that he was intending
to enter for the Grand Prix, it seems almost certain that he undertook the work with
that in mind.
Given that the topic had been proposed for the prize, it is not surprising that another
mathematician would also work on the topic, and indeed Julia also produced a long
memoir developing the theory in a similar way to Fatou. The two, however, chose
different ways to go forward. During the latter half of 1917 Julia deposited his results in sealed envelopes with the Académie des Sciences. Fatou, on the other hand,
published an announcement of his results in a note in the December 1917 part
of Comptes Rendus. It later became evident that they had discovered very similar
results.
Julia wrote a letter to Comptes Rendus concerning priority which was published on 31
December 1917. Julia had asked the Académie des Sciences to inspect his sealed
envelopes and Georges Humbert had been asked to carry out the task. In the same 31
December 1917 part of Comptes Rendus, Georges Humbert had a letter reporting on
Julia's papers. Almost certainly as a result of these letters Fatou did not enter for the
Grand Prix and it was awarded to Julia. Fatou did not lose out completely, however,
and even though he had not entered for the prize, the Académie des Sciences gave him
an award for his outstanding paper on the topic.
Fatou was given the title of "astronomer" in 1928 and, as an astronomer, he also made
contributions to that topic. Using existence theorems for the solutions to differential equations, Fatou was able to prove rigorously certain results on planetary orbits which Gauss had suggested but only verified with an intuitive argument. He also studied the
motion of a planet in a resistant medium with the intention of explaining how twin
stars would form with the capture of one moving in the atmosphere of the other.
We have mentioned some of his important mathematical work above. We should also
mention his work on Taylor series where he examined the convergence and the
analytic extension of the series. Perhaps Fatou's most famous result is that a harmonic
function u > 0 in a ball has a nontangential limit almost everywhere on the boundary. 1

3. Gaston Julia

Gaston Maurice Julia
Born: 3 February 1893 in Sidi-bel-Abbès, Algeria
Died: 19 March 1978 in Paris, France

Gaston Julia's parents were Delors Delavent and Joseph Julia. Two generations
before, the family had left the Spanish Pyrenees to become established in Algeria after
the French colonised the area. Joseph Julia, who was a mechanic, was working in
Sidi-bel-Abbès when his son was born. Gaston became interested in mathematics and
music when he was young. He entered school when he was five years old, and was
taught by Sister Théoduline. She gave young Gaston certain principles which he
followed throughout his life, in particular to always aim at being top in everything he
did. She also encouraged Gaston's mother to provide financial support to allow her
son to have a good schooling, something that was very difficult to achieve given that
the family were very poor. Gaston studied with the Frères des Écoles Chrétiennes
(Brothers of the Christian Schools) from the age of seven. His outstanding abilities
were quickly spotted, and his teachers encouraged Gaston's parents to try to get a
scholarship to allow him to study at high school.
In 1901, when Gaston was eight, the family moved to Oran, a city on the
Mediterranean coast in northwest Algeria 70 km north of Sidi-bel-Abbès. There
Gaston's father earned his living repairing agricultural machinery. Gaston entered the
Lycée in Oran, and his parents wanted him to begin his studies in grade 5. However,
the teachers pointed out that pupils in that grade had already studied German for one
year while Gaston had no knowledge of the language. However, Gaston requested that
they give him a month in the class to prove that he could catch up. Learning on his
own from books, he soon caught up and was allowed to remain in this class. By the
end of one year he was the best pupil in German as well as in every other subject that
he studied. He graduated with distinction in the baccalaureate examinations in
science, modern languages, philosophy and mathematics.
Julia won a scholarship which allowed him to go to Paris and spend the year 1910-11
at the Lycée Janson-de-Sailly where he took classes in higher mathematics. Despite
his outstanding abilities, Julia did not find life easy. First, he was still young and had
left the familiar country in which he was brought up for the very different life in
France. Second, he contracted typhoid fever before he had even begun his studies and
was taken to hospital. It was November of 1910 before he was well enough to embark
on a course which normally took two years but which he had to complete in the
remaining eight months. Despite these difficulties he was still able to reach a higher
standard than any other student. Somehow, he was also able to continue his interest in
music, playing on a violin his mother had given him, and it was during this time that
he fell in love with the music of Bach, Schubert, and Schumann. Throughout his life
these continued to be his favourite composers. He sat the entrance examinations for
the École Normale Supérieure and the École Polytechnique and was placed first in both
entrance examinations. He could choose either university but decided to enter the
École Normale on the grounds that it was the stronger of the two establishments for
mathematics.
Entering the École Normale Supérieure in 1911, Julia had just completed the
examinations for his first degree in mathematics when political events in Europe
interrupted his studies. Matters came to a head in July 1914 with various declarations
of war, and on 3 August Germany declared war on France. Events had been moving
quickly and Julia received his call up papers one day later. He trained with the
57th Infantry Regiment at Libourne and was soon made a corporal, then a sub-lieutenant. He saw action on the western front with the 144th Infantry Regiment when
sent to the Chemin des Dames ridge. Kaiser Wilhelm II of Germany had his birthday
on 27th January and the German troops wished to mark the occasion with successes.
Accordingly, on 25 January they launched a strong attack on the French lines where
Julia and his men had just arrived. The following is a report of what happened to Julia that day:
January 25, 1915, showed complete contempt for danger. Under an extremely violent bombardment, he succeeded despite his youth (22 years) to give a real example to his
men. Struck by a bullet in the middle of his face causing a terrible injury, he could no
longer speak but wrote on a ticket that he would not be evacuated. He only went to the
ambulance when the attack had been driven back. It was the first time this officer had
come under fire.
Many on both sides were wounded in the action called the 'attack of the Creute farm'
in which the Germans captured the remaining allied positions on the plateau. Julia's
injury was an extremely painful one and many unsuccessful operations were carried
out in an attempt to repair the damage. Eventually, in 1918, he resigned himself to the
loss of his nose and he had to wear a leather strap across his face for the rest of his
life. Between these painful operations he had carried on his mathematical researches
often in his hospital bed. He undertook research at the Collège de France, beginning in 1916, and in 1917 he submitted his doctoral dissertation Étude sur les formes binaires non quadratiques à indéterminées réelles ou complexes, ou à indéterminées conjuguées. The examiners of his thesis were Émile Picard, Henri Lebesgue and Pierre Humbert, with Picard as president of the examining committee.

In 1918 Julia married Marianne Chausson, one of the nurses who had looked after him
while he was in hospital. Marianne was the daughter of the romantic composer Ernest
Chausson, who had died in 1899 in a freak accident on his bicycle. Gaston and
Marianne Julia had six children: Jérôme, Christophe, Jean-Baptiste, Marc, Daniel, and
Sylvestre.
When only 25 years of age, Julia published his 199-page masterpiece Mémoire sur l'itération des fonctions rationnelles which made him famous in the mathematics centres of his day. The beautiful paper, published in Journal de Math. Pure et Appl. 8 (1918), 47-245, concerned the iteration of a rational function f. Julia gave a precise description of the set J(f) of those z in C for which the nth iterate f^n(z) stays
bounded as n tends to infinity. He received the Grand Prix of the Academy of
Sciences for this remarkable piece of work.
In November 1919 he was invited to give the prestigious Peccot Foundation lectures at the Collège de France and was appointed as Maître de Conférences at the École Normale Supérieure. At the same time he was appointed répétiteur in analysis at the École Polytechnique, examiner at the École Navale, and professor at the Sorbonne. This appointment to a professorship at the Sorbonne came without a specific chair, but in 1925 he was named to the Chair of Applications of Analysis to Geometry at the Sorbonne. In 1931 he was appointed to the Chair of Differential and Integral Calculus, then in 1937 he was appointed to the Chair of Geometry and Algebra at the École Polytechnique when Maurice d'Ocagne retired.
Seminars were organised in Berlin in 1925 to study Julia's work on iteration and
participants included Richard Brauer, Heinz Hopf and Kurt Reidemeister. H Cremer
produced an essay on his work which included the first visualisation of a Julia set.
Although he was famous in the 1920s, his work on iteration was essentially forgotten
until Benoit Mandelbrot brought it back to prominence in the 1970s through his
fundamental computer experiments. However, Julia was very active mathematically
over a wide range of different topics which is perhaps best summarised by looking
briefly at the six volumes of his collected works which were published between 1968
and 1970 edited by Jacques Dixmier and Michel Hervé. Of course the volumes were
published before Julia's death so he was able to write the Preface to the volumes
himself. In addition to the Preface, Volume 1 contains a list of Julia's 232 publications
from 1913 to 1965. These 232 publications consist of 157 research papers, 30 books,
and 45 articles on the history of science or miscellaneous topics.
Volume 1 contains works on iteration and its applications.
Volume 2, in three parts, consists of articles on (i) J points of functions of one variable, (ii) J points of functions of several variables, and (iii) Series of iterates.
Volume 3 contains four parts: (i) Functional equations and conformal mapping; (ii)
Conformal mapping; (iii) General lectures; and (iv) Isolated works in analysis on
Implicit function defined by the vanishing of an active function, and on certain series.
Volume 4 is again in four parts: (i) Functional calculus and integral equations; (ii) Quasianalyticity; (iii) Various techniques of analysis; and (iv) Works concerning Hilbert space.
Volume 5 contains works on (i) Number theory; and (ii) Geometry, mechanics, and
electricity.
Volume 6 contains Julia's miscellaneous writings.
What about the 30 books? Let us mention Éléments de géométrie infinitésimale (1927), Cours de cinématique (1928), and Exercices d'Analyse (4 vols.) (1928-38). Reviewing the first of the four volumes of Exercices d'Analyse, Einar Hille writes: This book is a worthy descendant of a long line of French Exercices sur le calcul infinitésimal. Such collections of problems are intended primarily for the students who prepare themselves for the licence or the agrégation and contain problems of the type
set in these examinations. A thorough knowledge of the theory is expected as well as
skill in calculation and the training is directed towards developing both qualities in
the students. The present book contains a small number of carefully chosen problems,
each problem followed by one or more complete solutions. About two-thirds of the
first volume is devoted to the applications of analysis to geometry. An admirable
account of the theory of Fourier series (pp. 120-190) is eminently suitable as outside
reading for first year graduate students. This part of the book will probably be found
the most useful one to the general mathematical public outside of France.
The classic Principes Géométriques d'Analyse (1930) was reviewed by Virgil Snyder, who wrote: The present volume has for its purpose the development and explanation of those geometric concepts which are employed in connection with rational, and particularly linear, transformations of a complex variable z, and the consequent transformations of uniform and of multiform functions of z.
Two years later Julia produced a second volume of Principes Géométriques d'Analyse, which was reviewed by W. Seidel: This book presents a continuation of the first volume of the author dealing with those
aspects of the modern theory of functions of a complex variable which are derivable
from simple geometrical principles. As the author himself points out in the preface to
the first volume, the most important of these principles is the conformal
correspondence between two regions of planar character or two Riemann surfaces
realized by an analytic function. The book serves the excellent purpose of unifying by
means of geometric concepts various branches of the theory of functions which have
hitherto been scattered in the literature. The presentation throughout is lucid,
rigorous, and elegant.
Another classic text, Introduction Mathématique aux Théories Quantiques, also appeared in two volumes, the first in 1936 and the second in 1938. Francis Murnaghan, reviewing the first volume, wrote: This book is the sixteenth of the well known series, 'Cahiers Scientifiques,' and is the
first of a series which proposes to give the mathematical foundation of quantum
mechanics. In this first volume the essential difficulties of quantum mechanics (some
of which concern the fact that Hilbert space is not finite dimensional) are merely
foreshadowed, the attention being directed in the main to vector analysis in a space of
finite dimensions. However, the treatment is sophisticated and designed, as far as
possible, to carry over to the infinite dimensional case.
The second volume was reviewed by Marshall Stone: The topics included in the book are presented from a purely mathematical point of
view in a clear and lively style. The applications to the theory of matrices and
equations, which are largely implicit, in certain of the more abstract treatments, are
elaborated here with a wealth of detail which renders them unusually accessible to
the student. The author's approach to the modern theory of operators is obviously a
cautious one, presumably because of his desire to keep the reader on ground which
shall appear as nearly familiar as possible at every stage.
Further books by Julia include L'espace hilbertien (1949) and Éléments d'algèbre (1959).

Julia received many honours for his outstanding mathematical contributions. He was elected to the Academy of Sciences on 5 March 1934, filling the place left vacant by the death of Painlevé in the previous year. He was elected President of the Academy in 1950. He was also elected to the Uppsala Academy in Sweden, the Pontifical Academy of Rome, and many other European Academies. He was also President of the French Mathematical Society. In 1950 he was made an officer of the Légion d'Honneur.2


4. Natural Fractals3

4.1. Astronomy

4.1.1. Galaxies

Looking at the structure of our universe, you will find it to be very self-similar. It is composed of gigantic superclusters, which are in turn composed of clusters. Every cluster is composed of galaxies, which are in turn composed of star systems such as the solar system, which are further composed of planets with moons revolving around them. Truly, every scale of the universe shows the same clustering patterns. Cluster fractals, such as the Cantor Square below, are indeed useful in modeling the universe:

Cluster fractals are formed by repeatedly cutting out pieces of a polygon. The fractal above is obviously not a good model on its own, and making it more random helps a lot. The fractal dimension of such fractals can be found very easily using the similarity method. In the Cantor Square, for example, there are 4 smaller squares, the sides of each of which are 1/3 of the entire picture. The fractal dimension will thus be log 4 / log 3 = 1.26. This is remarkably close to the fractal dimension of the universe according to one of the experiments, where it was found to be about 1.23. The fact that this is a fraction is yet another indication of the universe's fractal geometry.

Another way the universe can be modeled is by using IFS fractals that resemble galaxies, such as the one on the left.
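The similarity method used above is easy to reproduce in code. Below is a minimal Python sketch (the helper name is illustrative, not taken from any particular library) that computes D = log N / log(1/r) for a shape made of N copies, each scaled down by a factor r, and applies it to the Cantor Square.

import math

def similarity_dimension(copies, scale):
    """Similarity dimension D = log(N) / log(1/r) for a self-similar shape
    made of N copies, each scaled down by the factor r."""
    return math.log(copies) / math.log(1.0 / scale)

# Cantor Square: 4 sub-squares, each 1/3 the size of the whole picture.
print(similarity_dimension(4, 1 / 3))  # ~1.26, close to the measured ~1.23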


4.1.2. Rings of Saturn

Saturn is perhaps most famous for the ring it has around itself. Originally, it was believed that the ring was a single one. After some time, a break in the middle was discovered, and scientists considered it to have two rings. However, when Voyager 1 approached Saturn, it discovered that the two rings were also broken in the middle, and the four smaller rings were broken as well. Eventually, it identified a very large number of breaks, which continuously broke even small rings into smaller pieces. The overall structure is amazingly similar to... the Cantor Set, which is formed by continuously cutting out the middles of segments:

If you put circles through the points in the last picture above, you will get a simple
model of the rings of Saturn:


4.2. Bio/Chem

4.2.1. Bacteria Cultures

Some of the most amazing applications of fractals can be found in such distant areas as the
shapes of bacteria cultures. A bacteria culture is all the bacteria that originated from a single ancestor and are living in the same place. When a culture is growing, it spreads outwards in different directions from the place where the original organism was placed. Just like plants, the spreading bacteria can branch and form patterns which turn out to be fractal. The spreading of
bacteria can be modeled by fractals such as the diffusion fractals, because bacteria spread
similarly to nonliving materials.
If you are familiar with fractals, you will probably bet money on the fact that the above picture is
a fractal. You would be right, but we still need a real mathematical proof to be sure of that. The
way to do it is quite simple: just place the culture on a piece of graph paper and count the
number of occupied squares. This kind of data will let you calculate the fractal dimension of the
culture using the box-counting method. In an example experiment performed by Tohey
Matsuyama and Mitsugu Matsushita, the fractal dimension of a culture of Salmonella
anatum was found to be about 1.77. The fact that the dimension is a fraction is enough to prove
that the culture is a fractal.
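The box-counting procedure described above can be sketched as follows. This is only an illustration of the method, not the exact procedure used by Matsuyama and Matsushita; the array standing in for the photographed culture is made up here.

import numpy as np

def box_counting_dimension(image, box_sizes):
    """Count, for each box size, how many grid boxes contain part of the
    culture, then fit the slope of log(count) against log(1/box size)."""
    counts = []
    for size in box_sizes:
        occupied = 0
        for i in range(0, image.shape[0], size):
            for j in range(0, image.shape[1], size):
                if image[i:i + size, j:j + size].any():
                    occupied += 1
        counts.append(occupied)
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Toy stand-in for a black-and-white photograph of a colony on graph paper.
culture = np.random.rand(256, 256) > 0.7
print(box_counting_dimension(culture, [2, 4, 8, 16, 32]))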
4.2.2. Chemical Reactions

If you know some chemistry, you are probably familiar with the concept of forward and
backward reactions. Most reactions are accompanied by a backward reaction, in which the
products turn back into the reactants. At equilibrium, the rates of these reactions become equal
and the overall composition of the system does not change. However, the fact that is usually missed here is that when talking about the rates of reactions we are talking about average rates, since the rates depend on the movement of particles, which involves a lot of chance. Sometimes,
however, the rates become different for a short interval of time and the composition of the
system changes. As you might guess, these changes would be very chaotic... Aha! In one of the
lessons, we have already established the connection of chaos and fractals. Maybe if we view
every three consecutive concentrations of a substance as coordinates of a point in space... we can
get something that is fractal in shape! Such a fractal would be a strange attractor, because we know that this is the type of fractal based on changing numbers.
Indeed, fractal shapes were found after graphing many different systems,
even such common ones as hydrogen and oxygen reacting to make water.
One of the scientists who tried to study this mathematically was Otto
Rössler. He came up with three formulas that could model chemical reactions. When these three formulas are used to create a strange attractor, they create the famous 3-dimensional Rössler Attractor:
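The three formulas referred to are the Rössler system dx/dt = -y - z, dy/dt = x + ay, dz/dt = b + z(x - c). A minimal sketch that traces the attractor with simple Euler steps is given below; the parameter values a = 0.2, b = 0.2, c = 5.7 are the ones commonly used for the classic picture and are assumed here rather than taken from the text above.

def rossler_points(a=0.2, b=0.2, c=5.7, dt=0.01, steps=100_000):
    """Trace the Rossler attractor with a basic Euler integration."""
    x, y, z = 0.1, 0.0, 0.0
    points = []
    for _ in range(steps):
        dx = -y - z
        dy = x + a * y
        dz = b + z * (x - c)
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        points.append((x, y, z))
    return points

trajectory = rossler_points()
print(trajectory[-1])  # the orbit settles onto the attractor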


4.2.3. Human Anatomy

If you are still not convinced that fractals, being a math topic, are very important in real life,
your opinion might change after finding out that you yourself are made of fractals!
THE LUNGS
The first place where this is found is rather obvious to anyone who knows fractals: the pulmonary system, which you use to breathe. The pulmonary system is composed of tubes, through which the air passes into microscopic sacs called alveoli. The main tube of the system is the trachea, which splits into two smaller tubes, called the bronchi, that lead to the two lungs. The bronchi are in turn split into smaller tubes, which are even further split. This splitting continues further and further until the smallest tubes, called the bronchioles, which lead into the alveoli. This description is similar to that of a typical fractal, especially a fractal canopy, which is formed by splitting lines:

The endpoints of the pulmonary tubes, the alveoli, are extremely close to each other. The
property of endpoints being interconnected is another property of fractal canopies.
THE ALVEOLI
Further supporting evidence that your lungs are fractal comes from measurements of the alveolar area, which was found to be 80 m² with light microscopy and 140 m² at higher
magnification with electron microscopy. From the geometric method we know that the increase
in size with magnification is one of the properties of fractals!
THE BLOOD VESSELS
Similarly to the bronchial tubes, splitting can also be found in blood vessels. Arteries, for example, start with the aorta, which splits into smaller blood vessels. The smaller ones split as well, and
the splitting continues until the capillaries, which, just like alveoli, are extremely close to each
other. Because of this, blood vessels can also be described by fractal canopies.
THE BRAIN


The surface of the brain, where the highest level of thinking takes place, contains a large number of folds. Because of this, a human, who is the most intellectually advanced animal, has the most folded surface of the brain as well. Geometrically, the increase in folding means an increase in fractal dimension. Instead of 2, which is the dimension of a smooth surface, the surface of a brain has a dimension greater than 2. In humans it is the highest, lying between 2.73 and 2.79. Here's another topic for science fiction: super-intelligent beings with a fractal brain of dimension up to 3!
MEMBRANES
Surface folding similar to that of the brain has been found in many other membranes, such as those inside the cell: the mitochondria, which are used for obtaining energy, and the endoplasmic reticulum, which is used for transporting materials. The same kind of folding is found in the nasal membrane, which allows smells to be sensed better by increasing the sensing surface. However, in humans this membrane is less fractal than in other animals, which makes humans less sensitive to smells.
The fractal dimensions of some anatomical structures are given below. Note that all the dimensions are greater than you would expect them to be, and most are fractions, which implies that the structures are fractal.

Anatomical Structure                    Fractal Dimension
Bronchial Tubes                         very close to 3
Arteries                                2.7
Brain                                   2.73 - 2.79
Alveolar Membrane                       2.17
Mitochondrial Membrane (outer)          2.09
Mitochondrial Membrane (inner)          2.53
Endoplasmic Reticulum                   1.72

In addition to the anatomical structures above, fractals can be found in the body on smaller scales, in various molecules.

4.2.4. Molecules

In addition to anatomical structures, fractals were found in living organisms on even smaller
scales in molecules.

DNA
As you probably already know, DNA is a long sequence of nucleotides that code all
the genetic information about us. The nucleotides can be either adenine, guanine, cytosine, or thymine (abbreviated A, G, C, and T). One of the fractal patterns that has been studied is in the sequence of nucleotides, in what is called the DNA walk.
The DNA walk is a graphical representation of the DNA sequence in which you move
up if you hit C or T and down if you hit A or G. For example, for the sequence CATG
you will get the following picture:

Fractal patterns were found in many DNA walks. These patterns are remarkably
similar to Brownian motion. The fractal below is a model of a fractal DNA walk:
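The DNA walk described above takes only a few lines of code; the sketch below is an illustrative implementation.

def dna_walk(sequence):
    """Cumulative DNA walk: step +1 for C or T, -1 for A or G."""
    height = 0
    walk = []
    for base in sequence.upper():
        height += 1 if base in "CT" else -1
        walk.append(height)
    return walk

print(dna_walk("CATG"))  # [1, 0, 1, 0] - the zig-zag drawn for CATG above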

CHROMATIN
Chromatin is a fibrous material inside a cell's nucleus that contains the genetic
material. As you can see below, chromatin tends to cluster:


As we have seen in the example of galaxies, clusters are fractal in shape. Recently,
scientists have found ways to measure the fractal dimension of chromatin.
Interestingly, experiments performed a couple of years ago at the Mount Sinai
research center in New York showed that the fractal dimension of chromatin might be
somehow connected with cancer. Current experiments are attempting to detect breast
cancer by measuring the fractal dimension. Talk about useful applications!
PROTEINS AND POLYMERS
A polymer is a molecule that is composed of a series of "building blocks" (called monomers)
connected to one another in a chain. If you take a polymer, you will find that its monomers are
not connected in a straight line. Instead, the angles between the monomers can be different and
the entire molecule can twist into pretty complicated shapes. The same is true for proteins, which
are formed by amino acids bonding together in a chain. Twisting, as well as folding and breaking, often implies by itself that the shape is fractal. Proteins and many other polymers are, indeed,
fractal and various methods exist for finding their fractal dimension. For some interesting
proteins the results are shown below. Note that the dimensions are much higher than 1, which
you would expect from a linear chain. This is another proof that proteins are fractal.
Protein                                      Fractal Dimension
Lysozyme (egg-white)                         1.614
Hemoglobin (oxygen carrier in the blood)     1.583
Myoglobin (muscle protein)                   1.728

SPECTRUM
If you hold a substance in a flame, the flame will turn some color that is characteristic of
that substance. If you then let the light from that flame pass through a spectroscope, the light will
break into several colors of the rainbow. Shortly after the discovery of fractals, Harter found
spectra of some molecules that remarkably resembled the Cantor Set. The picture below is a
simulation of a spectrum that is perfectly fractal:


4.2.5. Plants

Most plants show some form of branching. This happens when the main stem (or trunk) splits into a number of branches. Each of those branches splits into smaller branches, and this kind of splitting continues down to the smallest branches. You have probably noticed that a tree branch looks similar to the entire tree and a fern leaf looks almost identical to the entire fern. This property, called self-similarity, is one of the most important properties of fractals. Because of the numerous ways branching can be achieved geometrically, there are several ways of creating models of plants as well. One classic way of creating fractal plants is by means of L-systems, introduced by Aristid Lindenmayer; many of the fractal plants built with them, such as those collected in the book The Algorithmic Beauty of Plants, have become classic examples. Here are some of them, in addition to several other ones:
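The string-rewriting step behind an L-system can be sketched in a few lines. The rule set below is one commonly quoted "fractal plant" (F = draw forward, + and - = turn, [ and ] = start and end a branch); it is given here as an assumed example rather than a rule set taken from the text above.

def expand_lsystem(axiom, rules, iterations):
    """Rewrite every symbol of the string using the rule table, repeatedly."""
    current = axiom
    for _ in range(iterations):
        current = "".join(rules.get(symbol, symbol) for symbol in current)
    return current

# A commonly quoted "fractal plant" rule set.
plant_rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}
print(expand_lsystem("X", plant_rules, 2))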

Another way of creating fractal plants is using fractal canopies or Pythagoras trees.
Fractal canopies are formed by splitting lines, which is very similar to branching.
Pythagoras trees, such as the one below, do the same more realistically by using
squares and triangles instead of lines:


One of the properties of fractal canopies is the endpoints being interconnected. This is
especially interesting in its similarity to broccoli, where the branches' endpoints form
an interconnected surface:

The final way of creating plant models is by using IFS fractals such as the Barnsley
Fern below, which resemble plant shapes:
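The Barnsley Fern itself is produced by an iterated function system: four affine maps applied at random with fixed probabilities. A compact sketch using the commonly published coefficients (assumed here) is shown below; plotting the returned point cloud reveals the fern.

import random

# Four affine maps (a, b, c, d, e, f) and their selection probabilities.
MAPS = [
    ((0.00, 0.00, 0.00, 0.16, 0.0, 0.00), 0.01),
    ((0.85, 0.04, -0.04, 0.85, 0.0, 1.60), 0.85),
    ((0.20, -0.26, 0.23, 0.22, 0.0, 1.60), 0.07),
    ((-0.15, 0.28, 0.26, 0.24, 0.0, 0.44), 0.07),
]

def barnsley_fern(points=50_000):
    """Play the chaos game on the four maps to build the fern point cloud."""
    x, y = 0.0, 0.0
    cloud = []
    for _ in range(points):
        coeffs = random.choices([m for m, _ in MAPS],
                                weights=[p for _, p in MAPS])[0]
        a, b, c, d, e, f = coeffs
        x, y = a * x + b * y + e, c * x + d * y + f
        cloud.append((x, y))
    return cloud

fern = barnsley_fern()
print(len(fern))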

4.3. Other

4.3.1. Clouds

Clouds look very irregular in shape. At some point in your life you have probably looked at them, wondering how their diverse shapes are capable of resembling so many common objects, animals, and people. Yet, for the purpose of this website, the word "irregular" automatically triggers the word "fractal." Yes, indeed, clouds are fractal in shape, just like most other objects in nature. Let us first look at experimental evidence that can prove this.
Usually, to prove that something is a fractal it is enough to find its fractal dimension. For something like a cloud it is best to do this using the geometric method. Obviously, it is not done by measuring the actual cloud, but by measuring its 2D projection, which is its shadow. We can make several measurements of the cloud's perimeter using different magnifications. This is achieved by using different sized "yardsticks." If, let's say, our yardstick is 1 kilometer long, the magnification is higher and the measurement will be more exact than one made with a 10-kilometer yardstick. We also know that in fractals, more detail adds additional irregularities, which add to the measurement. If we graph log(magnification) against log(perimeter) we should get a line with a positive slope, since the perimeter of a fractal increases with magnification. Indeed, when we graph this for clouds, we get something like this:

By adding 1 to the slope (see geometric method) we find the fractal dimension. According to
the findings of Lovejoy in 1981, the fractal dimension for most clouds is about 1.164.
Now, having proved that clouds are fractals, it would be good to try using fractals to generate
computer models of them. We know for sure that, since clouds are very irregular, we have to use
fractals that are random and have Brownian self-similarity. The best ones to use are plasma
fractals. To make plasma fractals look like clouds, we can use a color map which uses colors
similar to ones on a real cloud photograph. The pictures we can generate using this method are
something like this:

We can control how fragmented the clouds are by changing a parameter in plasma
fractals called roughness.


4.3.2. Coastlines & Borderlines

Benoit Mandelbrot, the founder of fractals, first noticed the properties of fractals on the coast of Britain. He realized that no matter how small a piece of the coast is, it will still have its own bays, harbors, and capes. Basing himself on Richardson's data, he was able to show that many coasts, as well as borderlines, are fractal.
Richardson searched many encyclopedias to find data about the lengths of certain
borderlines. He found enormous differences in data from different countries. For
example, Portugal claimed its border with Spain to be 1214 km, while Spain claimed
it to be 987 km. Portugal, as the smaller country, would likely measure its border more accurately. Thus, we see that the increase in accuracy increased the measurement... which is one of the properties of fractals! This happens because fractals are figures with an infinite amount of detail, and measuring more accurately adds more of these details, which adds to the overall size. Mandelbrot claimed that the difference between the two measurements was due to the fact that Spain used a "yardstick" that was bigger than Portugal's. If, for example, Spain measured the border with a 2 kilometer yardstick, its measurement would be less exact than Portugal's, which used a
1 kilometer yardstick. If we graph log(total length) against log(length of yardstick),
we get lines with negative slopes since the total length decreases with the increase of
the size of the yardstick:

Using this graph, we can find the fractal dimensions of coasts and borders by using a modified version of the geometric method. Since the magnification used in the geometric method is equal to 1 / (size of yardstick), the identity log(1/x) = -log(x) tells us that the slopes of the lines above are the negated slopes of the lines in the geometric method. A little bit of algebra shows that, in order to get the fractal dimension, you need to subtract the slope from 1. Note that in the above diagram, the lines for more irregular coasts, such as the coast of Britain, are steeper than the lines for smoother coasts, such as the coast of South Africa. This is due to the fact that the
more irregular a curve is, the higher its fractal dimension.
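The procedure just described - fitting the slope of log(total length) against log(yardstick length) and subtracting it from 1 - looks like this in code. The lengths below are made-up illustrative numbers, not Richardson's data.

import numpy as np

def coastline_dimension(yardsticks_km, measured_lengths_km):
    """Fit log(length) against log(yardstick); D = 1 - slope (the slope is
    negative for a fractal coast, so D comes out greater than 1)."""
    slope, _ = np.polyfit(np.log(yardsticks_km), np.log(measured_lengths_km), 1)
    return 1.0 - slope

# Illustrative numbers only: shorter yardsticks give longer measured coasts.
print(coastline_dimension([100, 50, 25, 12.5], [2800, 3400, 4100, 5000]))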
Simple models of coasts can be made with base-motif fractals that use polygons for
the bases. Such fractals are also called Koch Islands. Below are three examples of
the Koch Snowflake which uses a triangle, a Quadric Koch Island which uses a
square, and the Gosper Island which uses a hexagon:

These pictures, obviously, are terrible at modeling real-life coasts since they are too
perfectly symmetrical and self-similar. The solution to this problem is to use fractals
with Brownian self-similarity, such as the plasma fractals. This gives the coasts
randomness, which makes them more realistic. Below is an example of a coastline
made using a plasma fractal:

4.3.3. Data Compression

In December 1992, Microsoft released a compact disk entitled the Encarta Encyclopedia. It
contains thousands of articles, 7000 photographs, 100 animations, and 800 color maps. All of
this is in less than 600 megabytes of data. How was it possible? The answer lies in the
mathematics of fractal data compression.
Consider the Mandelbrot Set. A color full-screen GIF image of it occupies about 35 kilobytes.
However, all you need to store it is the formula z = z^2 + c, which takes no more than 7 bytes.
That's a 99.98% compression - talk about efficiency! Well, maybe if it works for the Mandelbrot Set... it could work for a flower diagram, a map of Africa, or a photo of Kennedy as well! The goal is to find functions, each of which produces some part of the image. For a complex image
that is not a fractal, you might need hundreds of such functions. Yet, it would still take up less
space than hundreds of thousands of colored pixels. IFS are the functions usually used for
compressing data. The mathematical foundation of the image compression was established by
Michael Barnsley, who is the founder of IFS fractals as well.

4.3.4. Special Effects

Computer graphics was one of the earliest applications of fractals. Indeed, fractals can achieve realism and beauty, and they require very little storage space because they compress easily.
Very beautiful fractal landscapes were published as far back as Mandelbrot's Fractal Geometry of Nature. Although the first algorithms and ideas are owed to the discoverer of fractals himself, the artistic field of using fractals was started by Richard Voss, who generated the landscapes for Mandelbrot's book. This sparked the imagination of many artists and producers of science fiction movies. A little later, Loren Carpenter generated a computer movie of a flight over a fractal landscape. He was soon hired by the computer graphics division of Lucasfilm, which later became Pixar. Fractals were used in the movie Star Trek II: The Wrath of Khan to generate the landscape of the Genesis planet, and also in Return of the Jedi to create the geography of the moons of Endor and the outline of the Death Star. The success of fractal special effects in these movies led to fractals becoming very popular. Today, numerous software packages allow anyone who knows only some computer graphics and fractals to create such
art. For example, we ourselves were able to generate all landscapes throughout this website, such
as the one below.


5. Other Fractals4

A Simple Explanation Of Fractal Geometry


While classical Euclidean geometry works with objects which exist in integer dimensions, fractal geometry deals with objects in non-integer dimensions. Euclidean geometry is a description of lines, ellipses, circles, etc. Fractal geometry, however, is described in algorithms -- a set of instructions on how to create a fractal.
The world as we know it is made up of objects which exist in integer dimensions: zero-dimensional points, one-dimensional lines and curves, two-dimensional plane figures like circles and squares, and three-dimensional solid objects such as spheres and cubes. However, many things in nature are better described with a dimension that is part of the way between two whole numbers. While a straight line has a
dimension of exactly one, a fractal curve will have a dimension between one and two, depending on
how much space it takes up as it curves and twists. The more a fractal fills up a plane, the closer it
approaches two dimensions. In the same manner of thinking, a wavy fractal scene will cover a
dimension somewhere between two and three. Hence, a fractal landscape which consists of a hill
covered with tiny bumps would be closer to two dimensions, while a landscape composed of a rough
surface with many average sized hills would be much closer to the third dimension.
A More Complete Explanation of Fractal Geometry and Fractal Dimensions
Fractal dimensions can be demonstrated by first defining a fractal set as Nn = C / rn^D, where Nn is the number of fragments with linear dimension rn, C is some constant, and D defines the fractal dimension. If this equation is rearranged with simple algebra, the outcome is D = ln(N_{n+1} / N_n) / ln(r_n / r_{n+1}). Given a line of unit length, we can divide it in varying ways and do different things with each segment. For the first example (figure 1a), the segment is divided into two parts, making r1 = 1/2. One of the parts is kept and the other is disposed of, so N1 = 1. If we divide the remaining segment into two parts and again only keep one of the fragments, then r2 = 1/4 and N2 = 1. If this process is repeated (iterated), D turns out to be zero, which gives the equivalent of the Euclidean point. Regardless of the number of iterations, at order n, Nn = 1. Hence, D will always be zero. This way of thinking makes sense because if you were to take a line segment and continually divide it into two, keeping only one of the pieces, the length of the line segment will approach zero as the order approaches infinity.
A Euclidean line, which exists in the first dimension, can be demonstrated just as easily. This example is modeled in figure 1b. The line segment is again divided into two parts; however, we keep all the fragments, so r1 = 1/2 and N1 = 2. Iterating again, we get r2 = 1/4 and N2 = 4. Hence, D = ln(2) / ln(2) = 1. This also makes sense because we never remove any part of the line, so it will always remain of unit length.


In the first two examples, the results were both Euclidean figures with dimensions of zero and one, respectively. It is, however, just as easy to create a line segment with a fractal dimension between zero and one. In figure 1c we divide the line segment into three equal parts and keep only the two end pieces. After the first iteration, we get r1 = 1/3 and N1 = 2. When this process is repeated, we get r2 = 1/9 and N2 = 4. Therefore, D = ln(2) / ln(3) = 0.6309. To show how to generate line segments with a varying fractal dimension, we start with a line segment of unit length and divide it into five equal parts (figure 1d). By keeping only the two end pieces and the center piece, we get r1 = 1/5 and N1 = 3. Iterating again, we get r2 = 1/25 and N2 = 9. In this example D = ln(3) / ln(5) = 0.6826. As this process is iterated, the resulting infinite set of points is called a dust. This term will be explained later.

Figure 1. Demonstration of fractal dimensions with Euclidean line segments.


Fractal dimensions are not limited to being between zero and one. We can also apply the same method to the Euclidean square to produce figures with a fractal dimension between zero and two. For each of the following examples, each square is divided into nine squares of equal size, so the linear dimension of each piece is r1 = 1/3. The iteration continues n times. To demonstrate the Euclidean point (figure 2a), we keep only one square with each iteration, making N1 = Nn = 1. In the next example (figure 2b), we keep only the top three squares with each iteration, making N1 = 3 and N2 = 9. Through this process we recover a Euclidean line with a dimension of one. The last Euclidean figure which can be derived from this example is the plane (figure 2c). To accomplish this, we keep all the squares with each iteration.
To produce a figure with a fractal dimension, we keep only the two pieces in the upper left and lower right corners with each iteration (figure 2d), making N1 = 2 and N2 = 4. Hence, at the second order, D = ln(2) / ln(3) = 0.6309. On the other hand, if we remove only the center piece with each iteration, as in figure 2e, we get N1 = 8 and N2 = 64. This example produces a fractal dimension of 1.8928.

Figure 2. Demonstration of fractal dimensions with Euclidean planes.



Calculating Fractal Dimensions


Now you understand what fractal dimensions are and where they come from, but how are they calculated? For certain objects with which you have dealt all of your life, such as squares, lines, and cubes, it is easy to assign a dimension. You intuitively feel that a square has two dimensions, a line has one dimension, and a cube has three dimensions. You might feel this way because there are two directions in which you can move on a square, one direction on a line, and three directions in a cube, but what about fractals? Sometimes you can move in a certain number of directions and sometimes you can move in a different number of directions. This is what causes fractal dimensions to be non-integers.
To derive a formula which will work for all figures, let's first look at how to calculate the dimensions of the figures which we already know. A line can be divided into n = n^1 separate pieces. Each of those pieces is 1/n the size of the whole line and each piece, if magnified n times, would look exactly the same as the original. Repeating the process for a square, we find that it can be divided into n^2 pieces. The same concept holds true for a cube: we need n^3 pieces to reassemble a cube. Each of the pieces would be 1/n the size of the whole figure. The exponent in each of these examples is the dimension. For fractals, we need a generalized formula, which can be derived from what we already know. The steps below assume you have a working knowledge of logarithms and basic algebra.
Note: ln denotes log base e and may be referred to as the natural logarithm. Because of the way in which this formula ends up, it is independent of the base used for the logarithms.
for a line: ln(number of divisions) = ln(n^1)
for a square: ln(number of divisions) = ln(n^2)
for a cube: ln(number of divisions) = ln(n^3)
If you look back, each figure was divided into pieces that, when magnified n times, revealed the starting figure. Because of this, we divide ln(number of divisions) by the natural logarithm of the magnification factor. The resulting formula gives the dimension, represented by D.
D = ln(number of divisions) / ln(magnification factor)
for a line: D = ln(n^1) / ln(n) = 1
for a square: D = ln(n^2) / ln(n) = 2
for a cube: D = ln(n^3) / ln(n) = 3
Each of these examples was easy because the magnification factor was always n. For fractals, however, the magnification factor is a constant which varies from fractal to fractal. Because you are not yet familiar with specific fractals, we cannot examine specific cases now. Under the section Individual Fractals, the dimension of each individual fractal will be examined in more detail.
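The general formula D = ln(number of divisions) / ln(magnification factor) can be written directly as a small function; the calls below reproduce the line, square, and cube cases for n = 3.

import math

def dimension(divisions, magnification):
    """D = ln(number of self-similar pieces) / ln(magnification factor)."""
    return math.log(divisions) / math.log(magnification)

print(dimension(3, 3))   # line:   3 pieces at magnification 3 -> 1.0
print(dimension(9, 3))   # square: 9 pieces at magnification 3 -> 2.0
print(dimension(27, 3))  # cube:  27 pieces at magnification 3 -> 3.0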

What Are Fractals?


For the most part, when the word fractal is mentioned, you immediately think of the stunning pictures you have seen that were called fractals. But just what exactly is a fractal? Basically, it is a rough geometric figure that has two properties: first, most magnified images of fractals are essentially indistinguishable from the unmagnified version. This property of invariance under a change of scale is called self-similarity. Second, fractals have fractal dimensions, as described above. The word fractal was coined by Benoit Mandelbrot: "I coined fractal from the Latin adjective fractus. The corresponding Latin verb frangere means 'to break': to create irregular fragments. It is therefore sensible - and how appropriate for our needs! - that, in addition to 'fragmented', fractus should also mean 'irregular', both meanings being preserved in fragment."
Graphical Representation Of Fractals
Graphically, fractals are images created out of the process of a mathematical exploration of the space in
which they are plotted. For this page, a computer screen will represent the space which is being
explored. Each point in the area is tested in some way, usually an equation iterating for a given period of
time. The equations used to test each point in the testing region are often extremely simple. Each
particular point in the testing region is used as a starting point to test a given equation in a finite period
of time. If the equation escapes, or becomes very large, within the period of time, it is colored white. If it doesn't escape, or stays within a given range throughout the time period, it is colored black. Hence, a fractal image is a graphical representation of the points which diverge, or go out of control, and the points which converge, or stay inside the set. To make fractal images more elaborate and interesting, color is added to them. Rather than simply plotting a white point if it escapes, the point is assigned a color relative to how quickly it escaped. The images produced are very elaborate and possess non-Euclidean geometry. Fractals can also be produced by following a set of instructions, such as remove the
center third of a line segment. A more complete explanation of how to generate fractal images, specific
to individual fractals, follows.
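The point-testing procedure just described is the escape-time algorithm. A minimal sketch for the Mandelbrot set is given below, with an assumed escape radius of 2 and an iteration cap of 100; points that never escape are printed as '#'.

def escape_time(c, max_iter=100):
    """Iterate z = z*z + c from z = 0; return the step at which |z| exceeded 2,
    or max_iter if it stayed bounded (the point is taken to be in the set)."""
    z = 0j
    for step in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return step
    return max_iter

# Crude character plot of the region -2 <= Re(c) <= 1, -1 <= Im(c) <= 1.
for row in range(21):
    im = 1 - row * 0.1
    line = ""
    for col in range(61):
        re = -2 + col * 0.05
        line += "#" if escape_time(complex(re, im)) == 100 else " "
    print(line)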

Below are several sections, each dealing with an individual fractal. Of course, not all of the fractals in the world are listed below, only ones which are well known or show an important point which everyone should know. With each fractal, there is a picture, followed by some information about it. For many of the fractals, there is also a link to a C/C++ or BASIC program which will generate a picture of the fractal. For more working source code, visit the Appendix Of Source Code. Even if you are not interested in these specific fractals, it is strongly encouraged that you read through each one, because many topics other than the specific fractal are reviewed. For example, strange attractors and several applications of fractals to real-life situations are discussed.


5.1. The Cantor Set

Figure 3. The Cantor set


The Cantor set is a good example of an elementary fractal. The object first used to demonstrate fractal dimensions, figure 1c, is actually the Cantor set. The process of generating this fractal is very simple. The set is generated by the iteration of a single operation on a line of unit length. With each iteration, the middle third of each line segment of the previous set is simply removed. As the number of iterations increases, the number of separate line segments tends to infinity while the length of each segment approaches zero. Under magnification, its structure is essentially indistinguishable from the whole, making it self-similar.
To calculate the dimension of the Cantor set, we first realize that its magnification factor is three, or the fractal is self-similar if magnified three times. Then we notice that the line segments decompose into two smaller units. Using the formula given in the section entitled Calculating Fractal Dimensions, we get:
D = ln(2) / ln(3)
D = 0.6931 / 1.0986
D = 0.6309
The Cantor set has a dimension of 0.6309.
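
As a small illustration of this construction, the following C sketch removes middle thirds from the unit interval and prints the segment count and segment length at each step, together with the dimension calculated above. The number of iterations shown is an arbitrary choice.

```c
/* Minimal sketch of the Cantor construction: starting from [0, 1], each
   iteration removes the middle third of every remaining segment.  The
   number of iterations shown here is an arbitrary choice. */
#include <math.h>
#include <stdio.h>

#define MAX_SEGS 64   /* enough for up to 6 iterations: 2^6 = 64 segments */

int main(void) {
    double lo[MAX_SEGS] = {0.0}, hi[MAX_SEGS] = {1.0};
    int count = 1;
    for (int it = 1; it <= 4; it++) {
        int new_count = 0;
        double nlo[MAX_SEGS], nhi[MAX_SEGS];
        for (int i = 0; i < count; i++) {
            double third = (hi[i] - lo[i]) / 3.0;
            nlo[new_count] = lo[i];          nhi[new_count++] = lo[i] + third;
            nlo[new_count] = hi[i] - third;  nhi[new_count++] = hi[i];
        }
        for (int i = 0; i < new_count; i++) { lo[i] = nlo[i]; hi[i] = nhi[i]; }
        count = new_count;
        printf("iteration %d: %d segments, each of length %.6f\n",
               it, count, hi[0] - lo[0]);
    }
    printf("fractal dimension = ln(2)/ln(3) = %.4f\n", log(2.0) / log(3.0));
    return 0;
}
```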

5.2. The Koch Curve

Figure 4. The Koch Curve


So far, all of the examples in this document have dealt with removing pieces from various geometric
figures. Fractals, and fractal dimensions, can also be defined by adding onto geometric figures. The Koch
curve is named after Helge von Koch, who introduced it in 1904. The generation of this fractal is simple. We begin with a
straight line of unit length and divide it into three equally sized parts. The middle section is replaced
with an equilateral triangle whose base is then removed. After one iteration, the length is increased by a factor of
four-thirds. As this process is repeated, the length of the figure tends to infinity as the length of the side
of each new triangle goes to zero. Assuming this could be iterated an infinite number of times, the result
would be a figure which is infinitely wiggly, having no straight lines whatsoever.
To calculate the dimension of the Koch Curve, we look at the image of the fractal and realize that it has a
magnification factor of three and that, with each iteration, each segment is divided into four smaller pieces. Knowing this,
we get
D = ln(4) / ln(3)
D = 1.3863 / 1.0986
D = 1.2619
The Koch Curve has a dimension of 1.2619.
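
The construction can be sketched in a few lines of C. The recursion depth below is an arbitrary choice, and the program simply prints the vertices of the curve together with the total length (4/3)^n after n iterations.

```c
/* Minimal sketch of the Koch construction: each segment is split into
   thirds and the middle third is replaced by the two sides of an
   equilateral triangle, so one segment becomes four.  The routine
   prints the vertices after a chosen number of iterations; the depth
   here is an arbitrary choice. */
#include <math.h>
#include <stdio.h>

static void koch(double x1, double y1, double x2, double y2, int depth) {
    if (depth == 0) {                 /* base case: emit the segment start */
        printf("%.4f %.4f\n", x1, y1);
        return;
    }
    double dx = (x2 - x1) / 3.0, dy = (y2 - y1) / 3.0;
    double ax = x1 + dx,      ay = y1 + dy;        /* one-third point  */
    double bx = x1 + 2 * dx,  by = y1 + 2 * dy;    /* two-thirds point */
    /* Apex of the equilateral triangle erected on the middle third. */
    double px = ax + dx * 0.5 - dy * sqrt(3.0) / 2.0;
    double py = ay + dy * 0.5 + dx * sqrt(3.0) / 2.0;
    koch(x1, y1, ax, ay, depth - 1);
    koch(ax, ay, px, py, depth - 1);
    koch(px, py, bx, by, depth - 1);
    koch(bx, by, x2, y2, depth - 1);
}

int main(void) {
    int depth = 3;
    koch(0.0, 0.0, 1.0, 0.0, depth);  /* unit line, as in the text */
    printf("1.0000 0.0000\n");        /* final endpoint */
    printf("total length after %d iterations = (4/3)^%d = %.4f\n",
           depth, depth, pow(4.0 / 3.0, depth));
    return 0;
}
```
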
The Koch Snowflake
As would be expected, the Koch Snowflake is generated in very much the same way as the Koch Curve.
The only variation is that, rather than using a line of unit length as the initial figure, an equilateral triangle
is used. It is iterated in the same way as the Koch Curve. The length of the resulting figure tends to
infinity as the length of the side of each new triangle goes to zero. Iterated an infinite number of times,
the Koch Snowflake, like the Koch Curve, has absolutely no straight lines in it. This fractal, if magnified
three times in any area, also displays the property of self-similarity.
As mentioned above, the magnification factor of this fractal is three, and as with the Koch Curve, the
number of divisions in each magnification is four. With this we get:
D = ln(4) / ln(3)
D = 1.3863 / 1.0986
D = 1.2619
The Koch Snowflake has a dimension of 1.2619.

5.3. The Sierpinski Triangle

Figure 6. The Sierpinski Triangle


Unlike the Koch Snowflake, which is generated with infinite additions, the Sierpinski Triangle is created
by infinite removals. Each triangle is divided into four smaller triangles by joining the midpoints of its
sides, and the central, upside-down triangle is removed. As this process is iterated an infinite number of
times, the total area of the set tends to zero as the size of each new triangle goes to zero.
After closer examination of the process used to generate the Sierpinski Triangle and the image produced
by this process, we realize that the magnification factor is two. With each magnification, there are three
divisions of the triangle. With this data, we get:
D = ln(3) / ln(2)
D = 1.0986 / 0.6931
D = 1.5850
The Sierpinski Triangle has a dimension of 1.5850.
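
A minimal C sketch of this removal process is given below; it recurses into the three corner triangles at each step and reports how many triangles remain. The recursion depth and the coordinates of the starting triangle are arbitrary choices.

```c
/* Minimal sketch of the Sierpinski construction described above: each
   triangle is split at the midpoints of its sides into four smaller
   triangles and the central one is discarded, leaving three.  The
   recursion depth is an arbitrary choice for illustration. */
#include <stdio.h>

static long kept = 0;

static void sierpinski(double ax, double ay, double bx, double by,
                       double cx, double cy, int depth) {
    if (depth == 0) {                       /* base case: report the triangle */
        printf("(%.3f,%.3f) (%.3f,%.3f) (%.3f,%.3f)\n", ax, ay, bx, by, cx, cy);
        kept++;
        return;
    }
    double abx = (ax + bx) / 2, aby = (ay + by) / 2;   /* midpoints of sides */
    double bcx = (bx + cx) / 2, bcy = (by + cy) / 2;
    double cax = (cx + ax) / 2, cay = (cy + ay) / 2;
    /* Recurse into the three corner triangles; the middle one is removed. */
    sierpinski(ax, ay, abx, aby, cax, cay, depth - 1);
    sierpinski(abx, aby, bx, by, bcx, bcy, depth - 1);
    sierpinski(cax, cay, bcx, bcy, cx, cy, depth - 1);
}

int main(void) {
    int depth = 3;
    sierpinski(0.0, 0.0, 1.0, 0.0, 0.5, 0.866, depth);
    printf("%ld triangles kept after %d iterations (3^%d)\n", kept, depth, depth);
    return 0;
}
```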

6. Algorithmic Composition5

6.1. Introduction

"Since I have always preferred making plans to executing them, I have gravitated
towards situations and systems that, once set into operation, could create music with
little or no intervention on my part. That is to say, I tend towards the roles of planner
and programmer, and then become an audience to the results" -Brian Eno (Alpern,
1995).
Algorithmic composition, sometimes also referred to as "automated composition,"
basically refers to "the process of using some formal process to make music with
minimal human intervention" (Alpern, 1995). Such "formal processes," as we will see,
have been familiar to music since ancient times. The title itself, however, is relatively
new, the term "algorithm" having been adopted from the fields of computer science
and information science around the halfway mark of the 20th century (Burns, 1997).
Computers have given composers new opportunities to automate the compositional
process. Furthermore, as we will explore, several different methods of doing so have
developed in the last forty years or so.
To begin with the title itself, Webster's dictionary defines an "algorithm" simply as "a
predetermined set of instructions for solving a specific problem in a limited number of
steps." The "problem" composers are faced with, of course, is creating music; the
"instructions" for creating this music according to the definition are "predetermined,"
suggesting that intervention on the part of the human composer is superseded once the
compositional process itself is set into motion, as hinted at as well in the above Brian
Eno quote. Thus, "automated composition" also suitably describes this kind of music,
since "automation" refers to "anything that can move or act of itself."

6.2. Pre/Non-Computer Practices

ancient Greeks, canon, Mozart, John Cage, serialism

The idea of utilizing formal instructions and processes to create music dates back in
musical history as far as the ancient Greeks. Pythagoras believed in a direct
relation between the laws of nature and the harmony of sounds as expressed in music:
"The word music had a much wider meaning to the
Greeks than it has to us. In the teachings of
Pythagoras and his followers, music was
inseparable from numbers, which were thought to
be the key to the whole spiritual and physical
universe. So the system of musical sounds and
rhythms, being ordered by numbers, exemplified
the harmony of the cosmos and corresponded to
it" (Grout, 1996; italics added).

Thus, theoretical applications of numbers (i.e. "data," in a sense) and various
mathematical properties derived from nature were the formalisms, or "algorithms,"
upon which the ancient Greek musicians had constructed their musical
systems. Ptolemy and Plato also wrote about this practice.
Ptolemy, the "most systematic of the ancient theorists of music," was also a leading
astronomer of his time; he believed that mathematical laws "underlie the systems both
of musical intervals and of the heavenly bodies," and that certain modes and even
certain notes "correspond with particular planets, their distances from each other, and
their movements" (Grout, 1996). This idea was also given poetic form by Plato in the
myth of the "music of the spheres," the unheard music "produced by the revolutions of
the planets" (Grout, 1996), and the notion was later invoked by writers on music
throughout the Middle Ages, including Shakespeare and Milton (Grout, 1996).
These ancient Greek "formalisms," however, are rooted mostly in theory, and their
strict application to musical performance itself is probably questionable since Greek
music was almost entirely improvised (Grout, 1996). Thus, while Greek mathematical
conjectures certainly created the musical system of intervals and modes with which
the musician operated and probably also guided and influenced his/her performance
practice in some ways, the musician was by no means entirely removed from the
decision-making process. Ancient Greek music was not "algorithmic composition" in
any pure sense, therefore, but it is undoubtedly important historically in music for its
tendency towards formal extra-human processes.
An extra layer of abstraction would later be achieved with the birth of "canonic"
composition in the late 15th century:
"The prevailing method was to write out a single
voice part and to give instructions to the singers to
derive the additional voices from it. The instruction
or rule by which these further parts were derived
was called a canon, which means 'rule' or 'law.' For
example, the second voice might be instructed to
sing the same melody starting a certain number of
beats or measures after the original; the second
voice might be an inversion of the first or it might
be a retrograde [etc.]" (Grout, 1996).

These "rules" of imitation and manipulation are indeed the "algorithm" by which
performers unfolded the music. In this case, then, as opposed to the previous one of
the ancient Greeks, we can see a clear removal of the composer from a large portion
of the compositional process: the composer himself only invents a kernel of music, a
single melody or section, from which an entire composition is automatically
constructed.
Mozart, too, used automated composition techniques in his Musikalisches
Wurfelspiel ("Dice Music"), a musical game which "involved assembling a number of
small musical fragments, and combining them by chance, piecing together a new
piece from randomly chosen parts" (Alpern, 1995). This very simple form of
"algorithmic" composition leaves creative decisions in the hands of chance, letting the
role of a dice to decide what notes are to be used.
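
To make the idea concrete, here is a toy C sketch of such a dice game; the fragment names and the number of bars are invented placeholders rather than Mozart's actual tables.

```c
/* Toy illustration of the dice-game idea: a piece is assembled by rolling
   two dice for each bar and using the total to pick one pre-composed
   fragment from a table.  The fragment names and table below are invented
   placeholders, not Mozart's actual tables. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    /* One hypothetical fragment name per possible dice total (2..12). */
    const char *fragments[13] = {
        "", "", "frag-A", "frag-B", "frag-C", "frag-D", "frag-E",
        "frag-F", "frag-G", "frag-H", "frag-I", "frag-J", "frag-K"
    };
    srand((unsigned)time(NULL));
    for (int bar = 1; bar <= 8; bar++) {
        int roll = (rand() % 6 + 1) + (rand() % 6 + 1);   /* two dice */
        printf("bar %d: dice total %2d -> %s\n", bar, roll, fragments[roll]);
    }
    return 0;
}
```
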
There are more modern examples, as well, of algorithmic composition without the use
of the computer. John Cage, for example, like Mozart, utilized randomness in many
of his compositions, such as in Reunion, performed by playing chess on a photoreceptor-equipped chessboard: "The players' moves trigger sounds, and thus the piece
is different each time it is performed" (Alpern, 1995). Cage also delegated the
compositional process to natural phenomena, as in his Atlas Eclipticalis (1961), which
was composed by laying score paper on top of astronomical charts and placing notes
simply where the stars occurred, again delegating the compositional process to
indeterminacy (Schwartz, 1993).
The twelve-tone method and serialism, furthermore, were movements of the post-World War II era that tried to completely control all parameters of music and to
objectify and abstract the compositional process as much as possible. Decisions over
everything from notes to rhythms to dynamic markings were often subject to pre-composed "series" and "matrices" of values, which, in effect, "automated" many of
these parameters by determining the order in which each must occur in a piece. These
series and matrices were, then, the "algorithms" that superseded the human creative
process. Serialism can thus be labeled "algorithmic" or "automated" composition in a
rather pure sense, especially when it strives to integrate as many musical parameters
as possible. Olivier Messiaen's 1949 piano etude, Mode de valeurs et d'intensités, for
example, had a thirty-six-pitch series, each pitch of which was given specific
rhythmic, dynamic, registral, and attack characteristics with which to be used in the
composition (Kostka, 1995).

6.3. Use Of The Computer

2 early pioneers: Lejaren Hiller, Iannis Xenakis


3 general approaches: stochastic, rule-based, artificial intelligence (AI)

Computers introduced incredible new capacities for algorithmic composition
purposes. Ada Lovelace, who wrote about Charles Babbage's "calculating engine," the precursor of
computers, had this to say about the possibilities of automated composition (Alpern,
1995) in the 19th century:
"Supposing, for instance, that the fundamental
relations of pitched sounds in the science of harmony
and of musical composition were susceptible of
such expression and adaptations, the engine might
compose elaborate and scientific pieces of music of
any degree of complexity or extent" (Alpern, 1995).

And so it happened, as Lovelace had predicted, that the computer (or modern
"calculating engine") brought scientists and composers together to construct such
"elaborate" pieces of music out of new algorithmic programming methods.
The earliest instance of computer generated composition is that of Lejaren
Hiller and Leonard Isaacson at the University of Illinois in 1955-56. Using
the Illiac high-speed digital computer, they succeeded in programming basic material
and stylistic parameters which resulted in the Illiac Suite (1957). The score of the
piece was composed by the computer and then transcribed into traditional musical
notation for performance by a string quartet. What Hiller and Isaacson had done in
the Illiac Suite was to (a.) generate certain "raw materials" with the computer, (b.)
modify these musical materials according to various functions, and then (c.) select the
best results from these modifications according to various rules (Alpern, 1995). This
"generator/modifier/selector" paradigm was also later applied to MUSICOMP, one of
the first computer systems for automated composition, written in the late 1950s and
early 1960s by Hiller and Robert Baker, which realized Computer Cantata: "Since
[MUSICOMP] was written as a library of subroutines, it made the process of writing
composition programs much easier, as the programmer/composer could use the
routines within a larger program that suited his or her own style" (Alpern, 1995; italics
added). This idea of building small, well-defined compositional functions, i.e.
"subroutines", and assembling them together would prove efficient and allow the
system a degree of flexibility and generality (Alpern, 1995), which has made this
approach a popular one, as we will see, in many algorithmic composition systems
even into the present day.
Another pioneering use of the computer in algorithmic composition is that of Iannis
Xenakis, who created a program that would produce data for his "stochastic"
compositions, which he had written about in great detail in his book Formalized
Music (1963). Xenakis used the computer's high-speed computations to calculate
various probability theories to aid in compositions like Atrées (1962) and Morsima-Amorsima (1962). The program would "deduce" a score from a "list of note densities
and probabilistic weights supplied by the programmer, leaving specific decisions to a
random number generator" (Alpern, 1995). "Stochastic" is a term from mathematics
which designates such a process, "in which a sequence of values is drawn from a
corresponding sequence of jointly distributed random variables" (Webster's
dictionary). As in the previous example of the Illiac Suite, these scores
were performed by live performers on traditional instruments.
With Xenakis, it should be noted, however, "the computer has not actually produced
the resultant sound; it has only aided the composer by virtue of its high-speed
computations" (Cope, 1984): in essence, what the computer was outputing was not the
composition itself but material with which Xenakis could compose. In contrast, the
work of Hiller and Isaacson attempted to simulate the compositional process itself
entirely, completely delegating creative decisions to the computer.
Already in these first two examples, Xenakis and Hiller, we find two different
methodologies that exist in computer-generated algorithmic composition: (1.)
"stochastic" vs. (2.) "rule-based" systems. As we will see, there is also a third
category, (3.) which we can label AI, or artificial intelligence systems.
Stochastic approaches, already somewhat touched upon, are the simplest. These
involve randomness and can be as simple as generating a random series of notes, as
seen already in the case of Mozart's Dice Music and in the works of John Cage, though
a great amount of conceptual complexity can also be introduced to the computations
through the computer with statistical theory and Markov chains. Basically, many of
the creative decisions in the stochastic method are merely left to chance, essentially
the same as drawing notes out of a hat. Another example of non-computer-oriented
"stochastic" composition can be found in Karlheinz Stockhausen's Klaveirstucke
XI in that the sequence of various fragments of music are to be performed by a pianist
in random sequence. A different slant to usages of unexpectedness is that of
applying chaos theory to algorithmic composition (Burns, 1997). These applications
employ various nonlinear dynamics equations that have been deduced from nature and
other chaotic structures such as fractals to relay different musical information:
"In recent years [the '70s and '80s], the behaviour
of systems of nonlinear dynamical equations when
iterated has generated interest into their uses as
note generation algorithms. The systems are
described as systems of mathematical equations,
and, as noted by Bidlack and Leach, display
behaviours found in a large number of systems in
nature, such as the weather, the mixing of fluids,
the phenomenon of turbulence, population cycles,
the beating of the human heart, and the lengths of
time between water droplets dripping from a leaky
faucet" (Alpern, 1995).

This is a large and mathematically complex field of algorithmic composition; the
interested reader is referred to the article by Jeremy Leach ("Nature, Music, and
Algorithmic Composition," Computer Music Journal, 1995) as a good starting point for
more in-depth investigation.
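
As a minimal illustration of this idea, the following C sketch iterates the logistic map, a standard example of a nonlinear equation (it is not one of the systems named above), and reads the resulting values off as MIDI note numbers. The seed, the parameter, and the mapping onto a two-octave pitch range are all arbitrary choices.

```c
/* Minimal sketch of the idea in the quotation above: iterate a simple
   nonlinear equation and read the sequence of values off as notes.  The
   logistic map x -> r*x*(1-x) is used here as a standard example, and
   the mapping of values onto a two-octave MIDI range is arbitrary. */
#include <stdio.h>

int main(void) {
    double x = 0.4;          /* seed value        */
    double r = 3.9;          /* chaotic parameter */
    int low = 60, span = 24; /* map onto MIDI notes 60..84 */
    for (int i = 0; i < 16; i++) {
        x = r * x * (1.0 - x);                  /* one iteration of the map */
        int note = low + (int)(x * span);       /* scale x in (0,1) to a pitch */
        printf("event %2d: x = %.4f -> MIDI note %d\n", i + 1, x, note);
    }
    return 0;
}
```
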
A second approach to algorithmic composition using the computer is that of "rule-based" systems and formal grammars: "An elementary example of a rule-based
process would center around a series of tests, or rules, through which the program
progresses. These steps are usually constructed in such a way that the product of the
steps leads to the next new step" (Burns, 1997). Non-computer parallels to rule-based
algorithmic composition that have been previously mentioned include the 15th-century
canon of the Renaissance period as well as the post-WWII twelve-tone method and
integral serialism. Rather than delegating decisions to chance as in the stochastic
methods just described, rule-based systems pre-compose a "constitution," so to speak, or
a "grammar," by which the compositional process must behave once set into motion,
"grammar" being a term borrowed from linguistic theory which designates the formal
system of principles or rules by which the possible sentences of a language are
generated (Burns, 1997). Like Hiller's MUSICOMP, these efforts usually take the
form of a computer program or a unified system of subroutines, and often also involve
databases of various rules either collected from compositional techniques of the past
or newly invented. One example of using a "rule-based" method of algorithmic
composition is that of William Schottstaedt's automatic species counterpoint program
that writes music based on rules from Johann Joseph Fux's Gradus ad Parnassum, a
counterpoint instruction book from the early 18th century aimed at guiding young
composers to recreate the strictly controlled polyphonic style of Palestrina (1525-1594) (Grout, 1996):
"The program is built around almost 75 rules, such
as 'Parallel fifths are not allowed' and 'Avoid
tritones near the cadence in lydian mode.'
Schottstaedt assigned a series of 'penalties' for
breaking the rules. These penalties are weighted
based on the fact that Fux indicated that there
were some rules that could never be broken, but
others did not have to be adhered to as
vehemently. As penalties accumulate, the program
abandons its current branch of rules and backtracks
to find a new solution" (Burns, 1997).

Another example is that of Kemal Ebcioglu's automated system called CHORAL,
which generates four-part chorales in the style of J. S. Bach according to over 350
rules (Burns, 1997).
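
The rule-and-penalty idea can be sketched very simply. The toy C program below checks two invented voices against a single rule (no parallel fifths) and accumulates a weighted penalty for each violation; it is a simplified placeholder, not Schottstaedt's actual program.

```c
/* Toy illustration of the rule-and-penalty idea described above: two
   voices are checked for parallel perfect fifths and a weighted penalty
   is accumulated for each violation.  The melodies and the single rule
   below are simplified placeholders, not Schottstaedt's actual program. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Two hypothetical voices as MIDI note numbers, one note per beat. */
    int upper[] = {67, 69, 71, 72, 74, 76, 74, 72};
    int lower[] = {60, 62, 63, 65, 67, 69, 65, 64};
    int n = 8, penalty = 0;
    const int PARALLEL_FIFTH_PENALTY = 10;      /* weight for a "hard" rule */

    for (int i = 1; i < n; i++) {
        int prev = abs(upper[i - 1] - lower[i - 1]) % 12;
        int curr = abs(upper[i] - lower[i]) % 12;
        /* Rule: two consecutive perfect fifths (7 semitones) with both
           voices moving in the same direction count as parallel fifths. */
        if (prev == 7 && curr == 7 &&
            (upper[i] - upper[i - 1]) * (lower[i] - lower[i - 1]) > 0) {
            penalty += PARALLEL_FIFTH_PENALTY;
            printf("parallel fifths at beat %d (penalty now %d)\n", i, penalty);
        }
    }
    printf("total penalty for this candidate: %d\n", penalty);
    return 0;
}
```
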
One last unique approach to algorithmic composition using the computer
is that of artificial intelligence (AI) systems. These systems are like rule-based
systems in that they are programs, or systems of programs, based on some pre-defined
grammar; however, AI systems have the further capacity of defining their own
grammar, or, in essence, a capacity to "learn." An example of this is David Cope's
system called Experiments in Musical Intelligence (EMI). Like the previous
examples of Schottstaedt and of Ebcioglu's CHORAL, EMI is based on a large database
of style descriptions, or rules, of different compositional strategies. However, EMI
also has the capacity to create its own grammar and database of rules, which the
computer itself deduces based on several scores from a specific composer's work that
are input to it. EMI has been used to automatically compose music that already evokes,
somewhat successfully, the styles of Bach, Mozart, Bartók, Brahms, Joplin, and many
others.
Another interesting branch of AI techniques is that of "genetic programming," a very
recent technique in the field of computer science for "automatic programming" of
computers (Alpern, 1995). Rather than basing its grammar on scores input to the
computer as in EMI, genetic programming generates its own musical materials as
well as forming its own grammar. The composer must also program a "critic" function,
therefore, which then listens to the numerous automatically produced outputs at
various stages of the processing to decide which are "fit" or suitable for final output
(the composer having final say, then, as to which of these to discard and which to
save). Below is a more in-depth description of the different processes involved in
genetic programming methods:
"[Genetic programming] is a method which actually
uses a process of artificially-created natural
selection to evolve simple computer programs. In
order to perform this process, one uses a small set
of functions and terminals, or constants, to
describe the domain one wishes an evolved
program to operate in. For example, if the human
programmer wishes to evolve a program which can
generate or modify music, one would give it
functions which manipulate music, doing things
such as transposition, note generations, stretching
or shrinking of time values, etc. Once the functions
have been decided on, the genetic programming
system will create a population of programs which
have been randomly generated from the provided
function set. Then, a fitness measure is determined
for each program. This is a number describing how
well the program performs in the given problem
domain. Since the initial programs are randomly
generated, their performance will be very poor;
however, a few programs are likely to do slightly
better than the rest. These will be selected in pairs,
proportionate to their fitness measure, and then a
new population of programs will be created from
these individuals, and the whole process will be
repeated, until a solution is reached (in the form of
a program which satisfies the critic), or a set
number of iterations has passed. Operations which
may be performed in generating this new
population include reproduction (passing an
individual program on into the next generation
unchanged), crossover (swapping pieces of code
between two 'parent' programs in order to create
two unique 'children'), mutation, permutation, and
others" (Alpern, 1995).

The composer, thus, provides the system with a library of functions, or subroutines, as
we have already seen in the case of Hiller's MUSICOMP and other systems, which
can do various things to the generated musical materials: however, in this case, the
composer does not define the way in which these functions will be used; the
composer merely defines for the computer what is desirable in an output (i.e. designs
a "critic") and the computer in turn tries to automatically achieve these results using
the provided subroutines. This form of "algorithmic composition" (using AI or
genetic programming) can thus be seen as an extreme case, abstracted even from its
own "algorithm", since both the output it produces and the formal process by which it
operates are automatically constructed.
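
The generate/evaluate/select/vary loop described in the quotation can be sketched compactly. To stay short, the C program below evolves fixed-length note sequences with a plain genetic algorithm rather than evolving programs as genetic programming proper does, and its "critic" simply rewards stepwise melodic motion; the melodies, rates, and fitness rule are all invented placeholders.

```c
/* Minimal sketch of the evolutionary loop described above.  It evolves
   fixed-length note sequences with a genetic algorithm (not full genetic
   programming over programs); the "critic" below is an invented
   placeholder that rewards stepwise motion and penalises large leaps. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define POP  20
#define LEN  16
#define GENS 40

static int critic(const int *m) {                 /* fitness: higher is better */
    int score = 0;
    for (int i = 1; i < LEN; i++) {
        int leap = abs(m[i] - m[i - 1]);
        if (leap >= 1 && leap <= 2) score += 2;   /* reward stepwise motion */
        else if (leap > 7) score -= 1;            /* penalise large leaps   */
    }
    return score;
}

int main(void) {
    int pop[POP][LEN];
    srand((unsigned)time(NULL));
    for (int i = 0; i < POP; i++)                  /* random initial population */
        for (int j = 0; j < LEN; j++) pop[i][j] = 60 + rand() % 13;

    for (int g = 0; g < GENS; g++) {
        int next[POP][LEN];
        for (int i = 0; i < POP; i++) {
            /* Tournament selection of two parents by critic score. */
            int a = rand() % POP, b = rand() % POP;
            int pa = critic(pop[a]) >= critic(pop[b]) ? a : b;
            a = rand() % POP; b = rand() % POP;
            int pb = critic(pop[a]) >= critic(pop[b]) ? a : b;
            int cut = rand() % LEN;                /* one-point crossover */
            for (int j = 0; j < LEN; j++)
                next[i][j] = (j < cut) ? pop[pa][j] : pop[pb][j];
            if (rand() % 100 < 20)                 /* occasional mutation */
                next[i][rand() % LEN] = 60 + rand() % 13;
        }
        for (int i = 0; i < POP; i++)
            for (int j = 0; j < LEN; j++) pop[i][j] = next[i][j];
    }

    int best = 0;                                  /* report the fittest result */
    for (int i = 1; i < POP; i++)
        if (critic(pop[i]) > critic(pop[best])) best = i;
    printf("best score %d, melody:", critic(pop[best]));
    for (int j = 0; j < LEN; j++) printf(" %d", pop[best][j]);
    printf("\n");
    return 0;
}
```
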
Besides the three various methods of algorithmic composition using the computer that
I have described (stochastic, rule-based, and AI), further distinction also occurs in
the type of musical output different algorithmic composition systems produce. Some
systems specify score information only (i.e. pitch, duration, and dynamic material) to
be realized by whatever acoustic or electronic instruments, as seen already in the early
cases of Hiller and Xenakis and which is also true in the case of Cope's EMI
compositions (the MIDI scores of which are fed into a Disklavier or other MIDI sound
device for output) and most others mentioned in this paper. Other systems, however,
do not create scores and focus instead on electronic sound synthesis or manipulation
of recorded sounds (i.e. musique concrète), or on a combination of these activities.
Sound synthesis algorithms, furthermore, "have been used in a variety of ways, from
the calculation of complex waveforms (building sounds), to the evolution of timbre
development over time" (Burns, 1997). A last approach is to combine both score and
electronic sound synthesis in the system's output, controlling both structural content
and its own timbral realization.
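
As a minimal illustration of "building sounds" by calculating a waveform directly, the C sketch below sums three harmonics of a 220 Hz tone, fading the upper partials over time so that the timbre evolves. The frequencies, duration, and raw-file output are arbitrary choices made for this sketch.

```c
/* Minimal sketch of "building sounds" by calculating a waveform directly:
   three harmonics of a 220 Hz tone are summed, with the upper harmonics
   fading over time so the timbre evolves.  Writing raw 16-bit samples to
   "tone.raw" is an arbitrary choice for illustration. */
#include <math.h>
#include <stdio.h>

int main(void) {
    const double sr = 44100.0, freq = 220.0, dur = 2.0;
    const double two_pi = 6.283185307179586;
    FILE *out = fopen("tone.raw", "wb");
    if (!out) return 1;
    long total = (long)(sr * dur);
    for (long n = 0; n < total; n++) {
        double t = n / sr;
        double fade = 1.0 - t / dur;               /* upper partials die away */
        double s = 0.6 * sin(two_pi * freq * t)
                 + 0.3 * fade * sin(two_pi * 2 * freq * t)
                 + 0.1 * fade * sin(two_pi * 3 * freq * t);
        short sample = (short)(s * 32000.0);       /* scale to 16-bit range */
        fwrite(&sample, sizeof sample, 1, out);
    }
    fclose(out);
    printf("wrote %ld samples to tone.raw\n", total);
    return 0;
}
```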

6.4. Closing

As for new developments in the field today, automatic listening programs seem to be
a new trend and focus: not only does the computer automatically compose, it is also
being designed to listen and respond to music being performed around it, a field of
music that is labelled "live electronics":
"Another tendency is to use the computer as an
accompanist who listens to what is being played
and responds appropriately in real-time. Here, the
human input is used to generate rules on which the
machine will base its output. This is seen in such
programs as Cypher (Rowe, 1993) and IBL-Smart
(Widmer, 1994)" (Jacob, 1996).

Another slant on "automatic listening" is that of Jonathan Berger and Dan Gang
(Berger, 2004) who have created computational models of perception and cognition of
music using AI approaches that have given new insights into the creative properties
inherent in listening and, furthermore, into the process of creativity itself. These new
techniques could also potentially improve algorithmic composition, it would seem,
since the "critic" functions that we have seen in examples of genetic programming
could gain much improvement from their insights into how humans listen to music:
the computer could, then, better judge itself as to the quality of its output.
Aesthetically speaking, the more recent and complicated brands of algorithmic
composition that utilize the computer are still in their infancy and much improvement
is, perhaps, left to be desired. As Cope himself remarks, for example, in regard to
Hiller's early experiments with the Illiac, many "directions in 'computer control' have
not proven to be great artistic successes" (Cope, 1984). These various new directions
with the computer (i.e. stochastic, rule-based, and AI) have been, nevertheless,
extraordinarily important for they have "opened the door to new vistas in the
expansion of the computer's development as a unique instrument with significant
potential" (Cope, 1984). They have also broadened our conception of music and how
it can be realized, as well as given us rare opportunities to test different compositional
theories, listen to them in action, and also then try to improve upon them. Thus, not
only has the composer been able to do new things with the computer through
algorithmic means, s/he has also been able to investigate him/herself more closely and
to gain new insights not only into his/her own compositional processes but into the
techniques and strategies of composers throughout history. These experiments are
thus intellectually stimulating and important in their own right for these reasons, and
time will tell whether they can also produce many "great artistic successes."

7. Max MSP

7.1. Introduction

Max is a visual programming language for music and multimedia developed and maintained by San
Francisco-based software company Cycling '74. During its 20-year history, it has been used by
composers, performers, software designers, researchers, and artists for creating recordings,
performances, and installations.
The Max program itself is modular, with most routines existing in the form of shared libraries. An API
allows third-party development of new routines (called "external objects"). As a result, Max has a large
user base of programmers not affiliated with Cycling '74 who enhance the software with commercial
and non-commercial extensions to the program. Because of its extensible design and graphical interface
(which represents the program structure and the GUI as presented to the user simultaneously), Max has
been described as the lingua franca for developing interactive music performance software.6

7.2. Language

Max is named after the late Max Mathews, and can be considered a descendant of MUSIC, though its
graphical nature disguises that fact. As with most MUSIC-N languages, Max/MSP/Jitter distinguishes
between two levels of time: that of an "event" scheduler, and that of the DSP (this corresponds to the
distinction between k-rate and a-rate processes in Csound, and control rate vs. audio rate in
SuperCollider).
The basic language of Max and its sibling programs is that of a data-flow system: Max programs (called
"patches") are made by arranging and connecting building-blocks of "objects" within a "patcher", or
visual canvas. These objects act as self-contained programs (in reality, they are dynamically-linked
libraries), each of which may receive input (through one or more visual "inlets"), generate output
(through visual "outlets"), or both. Objects pass messages from their outlets to the inlets of connected
objects.
Max supports six basic atomic data types that can be transmitted as messages from object to object: int,
float, list, symbol, bang, and signal (for MSP audio connections). A number of more complex data
structures exist within the program for handling numeric arrays (table data), hash tables (coll data), and
XML information (pattr data). An MSP data structure (buffer~) can hold digital audio information within
program memory. In addition, the Jitter package adds a scalable, multi-dimensional data structure for
handling large sets of numbers for storing video and other datasets (matrix data).
Max is typically learned through acquiring a vocabulary of objects and how they function within a
patcher; for example, the metro object functions as a simple metronome, and the random object
generates random integers. Most objects are non-graphical, consisting only of an object's name and a
number of arguments/attributes (in essence class properties) typed into an object box. Other objects
are graphical, including sliders, number boxes, dials, table editors, pull-down menus, buttons, and other
objects for running the program interactively. Max/MSP/Jitter comes with about 600 of these objects as
the standard package; extensions to the program can be written by third-party developers as Max
patchers (e.g. by encapsulating some of the functionality of a patcher into a sub-program that is itself a
Max patch), or as objects written in C, C++, Java, or JavaScript.
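
As a rough analogy (in plain C, not Max code) of what a tiny metro-and-random patch does, the sketch below fires a "bang" on a regular schedule and turns each bang into a random MIDI note. The interval, range, and number of events are arbitrary choices, and the loop only simulates the scheduler's timestamps rather than actually waiting.

```c
/* Plain-C analogy (not Max code) of a tiny patch in which a metro object
   fires a bang at a regular interval and a random object turns each bang
   into a number, here read as a MIDI note.  The loop below stands in for
   the metro's scheduler and prints timestamps rather than actually
   waiting; interval, range, and event count are arbitrary choices. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void on_bang(void) {                /* what the "random" box would do */
    int note = 60 + rand() % 24;           /* random note in a two-octave range */
    printf("bang -> MIDI note %d\n", note);
}

int main(void) {
    srand((unsigned)time(NULL));
    int interval_ms = 250;                 /* like a "metro 250" object */
    for (int i = 0; i < 8; i++) {          /* eight scheduled events */
        printf("t = %4d ms: ", i * interval_ms);
        on_bang();
    }
    return 0;
}
```
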
The order of execution for messages traversing through the graph of objects is defined by the visual
organization of the objects in the patcher itself. As a result of this organizing principle, Max is unusual in
that the program logic and the interface as presented to the user are typically related, though newer
versions of Max provide a number of technologies for more standard GUI design.
Max documents (called patchers) can be bundled into stand-alone applications and distributed free or
sold commercially. In addition, Max can be used to author audio plugin software for major audio
production systems.
With the increased integration of laptop computers into live music performance (in electronic music and
elsewhere), Max/MSP and Max/Jitter have received quite a bit of attention as a development
environment available to those serious about laptop music/video performance.7

9. References

1. J J O'Connor, E F Robertson. 2000. Fatou Biography. [ONLINE] Available at: http://www-history.mcs.st-and.ac.uk/Biographies/Fatou.html. [Accessed 05 December 13]. (Website)
2. J J O'Connor, E F Robertson. 2008. Julia Biography. [ONLINE] Available at: http://www-history.mcs.st-and.ac.uk/Biographies/Julia.html. [Accessed 05 December 13]. (Website)
3. Oracle Education Foundation. 1999. Fractal Applications. [ONLINE] Available at: http://library.thinkquest.org/26242/full/ap/ap.html. [Accessed 12 December 13]. (Online Article)
4. Oracle Education Foundation. 1999. Fractals and Fractal Geometry. [ONLINE] Available at: http://library.thinkquest.org/3493/frames/fractal.html. [Accessed 12 December 13]. (Online Article)
5. John A. Maurer. 1999. The History of Algorithmic Composition. [ONLINE] Available at: https://ccrma.stanford.edu/~blackrse/algorithm.html. [Accessed 08 January 14]. (Website)
6. Place, T. and Lossius, T. 2006. Jamoma: A Modular Standard for Structuring Patches in Max. In Proc. of the International Computer Music Conference 2006, pages 143-146, New Orleans, US. (Online Article)
7. Cycling '74. 2014. Max 5 Help and Documentation. [ONLINE] Available at: http://cycling74.com/docs/max5/vignettes/intro/docintro.html. [Accessed 14 December 13]. (Online User Manual)