
sender; corruption; receiver;

composition as the noisy channel model;

a junior level exam by


derek kay jeppsen
dvd
1. inertia; – (performance, december 5, 2009)

cd
2. humanity=nostalgia;
a. performed by Zachary Hazen
b. performed by Laurel Grinnell
3. disorder etude;
4. curation1;
introduction:
The works that comprise this collection, composed over the past 13 months, are
examples of some of the musical concepts with which I am enamored. When compiling
pieces which I thought showed the most progression of depth and gave an insight into my
musical and philosophical background, I began noticing a pattern. It seems that the impetus for
my pieces is simple miscommunication. There is most likely an irrelevant psychological
reason for this, but it certainly informs my views and aesthetics within music.
The noisy channel model gives us an objective way of studying communication. Bits of
information (any information) are transmitted from one person or thing (the sender) to
another (the receiver), but the pathway on which they travel is tenuous. Any sort of corruption
can take place in between the two agents. With the help of stochastics, we can model a way
of reverse engineering the sent (intended) message to derive full comprehensibility.
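As a concrete (and purely illustrative) sketch of that reverse engineering, the decoding step can be written as a search for the candidate sent message that maximizes P(sent) × P(received | sent). The candidate messages, the prior, and the toy channel model below are invented for the example and are not part of any piece in this portfolio.

```python
# Minimal noisy-channel decoding sketch (illustrative only): pick the candidate
# "sent" message that maximizes P(sent) * P(received | sent).
# The candidates, prior, and channel model below are invented placeholders.

def channel(received, sent):
    # toy channel: likelihood decays with the number of differing characters
    diffs = sum(a != b for a, b in zip(received, sent)) + abs(len(received) - len(sent))
    return 0.5 ** diffs

def decode(received, candidates, prior):
    best, best_score = None, 0.0
    for sent in candidates:
        score = prior[sent] * channel(received, sent)
        if score > best_score:
            best, best_score = sent, score
    return best

candidates = ["inertia", "inertial", "insertion"]
prior = {"inertia": 0.5, "inertial": 0.3, "insertion": 0.2}
print(decode("inertla", candidates, prior))  # -> "inertia"
```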
In this portfolio, you will find four pieces which each deal with their own noisy channel
models, in which the listener can reconstruct intentions or meaning:
1. inertia; - a sent message from an electronic composer to an audience (the public at large)
which is corrupted by confusion about what kind of music comes out of a laptop.
2. humanity=nostalgia; - a sent message from a piece of music to a text file, corrupted by
optical character recognition
3. disorder etude; - a sent message from a recording to time itself, corrupted by the
randomization of time
4. curation1; - a sent message from a composer's subjective will to a machine, corrupted by the
barriers to creating a map of the composer's will
inertia;
for:
1. inertia.pd (real-time loop sampler)
2. voice
inertia;
1. introduction;
inertia; is the first piece that I created inside the Pure Data environment. It was initially
a study on how to properly play back samples without using the [tabplay~] object, which
calculates the sample rate and length of the file to be played back, and allows the sample to
be triggered with one “bang” message. My “inertia sampler” records input of up to five
samples, and plays them back with the [tabread4~] object, which does not have its own
oscillator, nor any sample rate calculation of its own. With this more mathematical approach,
[tabread4~] can play back samples at various speeds (including in reverse), although the inertia
sampler does not utilize these possibilities at this time.
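The distinction matters because an index-driven reader is just a table lookup: whatever advances the read position determines the speed and direction of playback. The following is a simplified Python analogue of that idea (it uses linear rather than [tabread4~]'s four-point interpolation, and is not the inertia sampler itself).

```python
# Sketch of index-driven table playback (the idea behind [tabread4~]-style reading).
# Linear interpolation is used here for brevity; [tabread4~] itself interpolates over 4 points.

def read_table(table, speed, n_out):
    """Read n_out samples from 'table', advancing the read index by 'speed'
    per output sample. speed=1.0 is original pitch, 0.5 is an octave down,
    negative values play in reverse."""
    out = []
    pos = 0.0 if speed >= 0 else len(table) - 1.0
    for _ in range(n_out):
        i = int(pos)
        frac = pos - i
        if 0 <= i < len(table):
            nxt = table[i + 1] if i + 1 < len(table) else table[i]
            out.append(table[i] * (1 - frac) + nxt * frac)
        else:
            out.append(0.0)  # read head has run off the table
        pos += speed
    return out

sample = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
print(read_table(sample, 0.5, 8))   # half speed (an octave down)
print(read_table(sample, -1.0, 8))  # reverse playback
```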

2. methodology;
I added a simple controller for the keyboard's “a” and “s” keys to control the beginning
and ending of the recording of the samples. This allows for a quick turnaround of the
samples, requiring only two keys to not only gather a sample, but also to allow for the
samples to be overwritten and replaced with the same mechanism.
The samplers are extremely basic, consisting of only a playback mechanism that plays
the sample, can loop the sample, and does basic amplitude enveloping in an attempt to mask
end-of-sample discontinuities. The amplitude envelope, consisting of a 30 millisecond ramp
down and 30 millisecond ramp up, contributes to the perceived rhythm of the loops of the
sample as well. For example, a sample that lasts roughly 100 milliseconds (such as the ones
in section ii) spends 60ms in the ramping process and 40ms with its full amplitude.
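The timing arithmetic above generalizes to any loop length; a small sketch (using the 30 ms ramps stated here, with everything else illustrative) makes the tradeoff explicit.

```python
# Sketch of the loop-envelope timing described above: every pass of a loop spends
# 30 ms ramping up and 30 ms ramping down, and whatever remains at full amplitude.

RAMP_MS = 30  # ramp length stated in the text

def envelope_breakdown(loop_ms):
    """Return (ramping_ms, full_amplitude_ms) for one pass of a loop."""
    ramping = min(2 * RAMP_MS, loop_ms)
    return ramping, max(loop_ms - ramping, 0)

print(envelope_breakdown(100))   # (60, 40): the roughly 100 ms loops of section ii
print(envelope_breakdown(1000))  # (60, 940): a longer, section i style loop
```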
One of the other main features of the patch is a subpatch, [pd pansequence], which
makes stochastic choices about the angle of pan on each individual sampler/channel
independently. It can choose from 7 divisions across the pan spectrum, and can choose to
move it smoothly to that position over any length of time between 250 and 2000 milliseconds.
When the pan reaches its destination, the patch then chooses the next set of variables for it to
move to.
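A plain-Python sketch of that behaviour, under the parameters given above (seven pan positions, glides of 250-2000 ms), might look like the following; the 250 ms update interval and the set_pan callback are assumptions for illustration, not part of the patch.

```python
# Sketch of the stochastic pan sequence described above: choose one of 7 pan
# positions and a glide time between 250 and 2000 ms, glide there, then choose again.
# The update interval and the set_pan callback are illustrative assumptions.

import random
import time

PAN_POSITIONS = [i / 6 for i in range(7)]  # 0.0 = hard left, 1.0 = hard right

def pan_sequence(set_pan, choices=2):
    current = 0.5
    for _ in range(choices):
        target = random.choice(PAN_POSITIONS)
        glide_ms = random.uniform(250, 2000)
        steps = max(int(glide_ms / 250), 1)
        for k in range(1, steps + 1):
            set_pan(current + (target - current) * k / steps)  # linear glide
            time.sleep(0.25)
        current = target

pan_sequence(lambda v: print(f"pan -> {v:.2f}"))
```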
The other function of this patch is a tap sequencer, which takes its input from the “/”
key, after it has been initialized by the “,” key, and stops recording the tap rhythm when the “.”
key is entered. The sequencer can record up to ten seconds of rhythms input by the means
of the “/” key. It is stored on a table as a square wave with only two possible values: one or
zero. This is then looped infinitely, but does not affect anything in the patch until it is routed to
the two variables that can read the table: a pan controller, and a sample-on controller.
When the tap sequencer is running and routed to the pan controller (which controls the
pan on each separate sampler), the tapped sequence serves as a control for a series of
[shuffle] objects that make two choices: which side to pan the sound to, and how far to that
side. These choices are made immediately in the tap sequence, translating a tap from the “/”
key into a choice and immediately initiates movement to a new spot on the pan spectrum.
When the tap sequence is routed to sample start, it re-triggers the sample in the rhythm of the
tap sequence.
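Reduced to its essentials, the tap sequencer is a ten-second table of ones and zeros that is written by taps and read in a loop; a sketch follows. The 10 ms table resolution and the routing callback are assumptions, not values from the patch.

```python
# Sketch of the tap sequencer described above: tap times are quantized into a
# ten-second table of ones and zeros, which is then read in a loop and routed
# either to the pan controls or to sample start.

TABLE_LEN_MS = 10_000
RESOLUTION_MS = 10  # assumed quantization grid

def record_taps(tap_times_ms):
    """Build the 0/1 table from tap times (ms after the ',' key initialized recording)."""
    table = [0] * (TABLE_LEN_MS // RESOLUTION_MS)
    for t in tap_times_ms:
        if 0 <= t < TABLE_LEN_MS:
            table[t // RESOLUTION_MS] = 1
    return table

def play_loop(table, on_tap, passes=1):
    """Loop over the table; call on_tap() at every 1 (routed to pan or sample start)."""
    for _ in range(passes):
        for step, value in enumerate(table):
            if value:
                on_tap(step * RESOLUTION_MS)

table = record_taps([0, 500, 750, 2000])
play_loop(table, lambda ms: print(f"tap at {ms} ms"))
```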
3. aesthetics;
This piece does not have any prescriptive melodies or harmonies; no specific notes or
rhythms need to be performed. This is not to say that it is a piece to be wholly improvised,
but that it is a piece that allows itself to change as needed. The piece, as it is composed, is a
form that contains a run-through of the abilities of the sampler with the performer able to react
to themselves, an audience, a venue, etc. The instrumentation need not be voice, but it is
somewhat tailored to vocals because of the role of performer in their tactile use of the laptop
keyboard.
The choice to use vocals for the piece was somewhat happenstance, but entirely
natural. The quickest and easiest way that I had figured out to test new code, and to not have
to awkwardly switch back and forth between playing and programming, was to have a vocal
microphone on a stand as the input. It fit well because of my ability to make a near-
continuous wall of sound by taking samples without the attack or release of my voice. This
quickly became a strong point of the piece, as it allowed me to move quickly on the creation
of the sampler, and it helped form the project as it was being created.
I have, on occasion, toyed with the idea of writing concrete parts (with prescriptive
harmonies and melodies) but ultimately found it to not be rewarding. I enjoy the idea of
another person being able to learn the patch and to make a drastically different sound with it,
as the sound that I make with the patch is one that is subjectively pleasing to me.
The most important aesthetic aspect of the piece is the ability for the audience to
understand the process that they are witnessing. The piece is designed in a way that makes
it easy to understand that the main function of the laptop in the piece is to take vocal samples
and loop them against each other in real time. There is, in this patch, no modulation of the
sound of the input and it is obvious that the samples are being taken live because the
audience sees the performer singing into the microphone.
The idea that I needed to reach out as a composer to the audience and make a
gesture to help everyone understand what laptop composers do came about because of the
reactions I receive when people ask me what instrument I play and I respond “laptop.” This
generally initiates a conversation where I attempt (but ultimately fail) to state in a clear and
concise way what kind of work I do in music. This piece helps bridge the gap because of the
transparency of the method.

4. conclusion;
The most shocking and educational role that this piece played for me, in addition to
being my first functional performance patch in Pure Data, was in affirming for me that
compositional technique is extremely difficult to define in contemporary electronic music. The
main compositional issues that I faced were answering questions about what language can
be used to describe the compositional aspects of a piece that does not originate in a
traditional manner. This piece includes the compositional techniques of stochasticism and
indeterminacy, and when using my voice, the harmonies created in the piece are modal, and
the piece retains elements of ametric and chronometric time. No one school of composition
or term can be used to define or describe the whole of the piece.
inertia;

key: long loops / short loops / pan control (in the original score these are distinguished by
graphic symbols, and each section is notated as a timeline diagram across five channels,
v1-v5; the diagrams are not reproduced in this text version)

section i:
section i is characterized by taking samples into each channel and looping them to
create harmonies pleasing to the performer, keeping five voices at all times even if
samples are rotated out for new ones. Input is single notes or glissandi in a
comfortable range. section i (as with each of the sections) can last any length.

section ii:
section ii is defined by taking samples with higher pitch (at the top of the performer's
vocal range) that are very short (trying for 80-100ms) and keeping the loop function on
each channel.

section iii:
section iii is defined by activating the random pan sequencer and subsequently routing
the tap sequencer to the pan automation.

section iv:
section iv is defined by returning to longer samples with the tap sequencer still
controlling the pan of the samples.

section v:
section v is defined by using the tap sequencer to control the startpoints and pan
simultaneously and slowly making the attacks more sparse until ending the piece.
humanity=nostalgia;
for:
1. XXXXXXXXXXXX (undefined)
humanity=nostalgia;

1.introduction;
“humanity=nostalgia;” is a piece that addresses the concept of system failure. It came
about because of an interesting idea that I had that would re-purpose my scanner/copier, and
make it try to do something I already knew it could not do. The experiment typifies a
relationship that I (and I would assume, many other electronic musicians) have with the
technology that we are constantly surrounding ourselves with.
When we use technology to be creative, we are always walking the line between what
is accepted by the system, and what we want as an artist. Sometimes a compromise can be
reached, and other times the system is capable of offering an adequate solution. Sometimes,
the system will break down entirely with new, interesting results. This piece addresses an
alternative to these: setting out to do something that technology will fail at, and try to pick up
the pieces and still make something out of it.

2.methodology;
This piece was originally for solo violin, and was written as an assignment for a
Composition Laboratory class. In this form, the point of the piece was to explore the timbres
of a violin within the format of the class: the piece was to be sight read by a professional
violinist and take less than three minutes. Because of these constraints, I wanted to make a
technical, yet simple piece. It included a lot of extended techniques and wide, varying
dynamics, but was incredibly simple as far as pitch content, staying within the first position
of the violin.
This piece came about when I purchased a new scanner/copier. When I first set it up, I
was testing all of its capabilities, and found that it performs Optical Character Recognition
(OCR). The scanner has the ability to take printed documents, scan them, and produce an
editable text file out of them. This is done by line detection, a method for reverse engineering
what the symbols are on the paper being scanned (usually aided by a noisy channel
modeler). After trying this procedure on a few documents and finding that it worked
surprisingly well, I then put in the score of the violin piece. What it gave me is now the title-
page and score of the piece. The only editing done to the file after its conception was the de-
capitalization of my name on the title page.
The instrumentation of the piece is still somewhat violin-esque, but only because of the
ability for the scanner to reproduce the markings of “pizz” and “naturale.” I was interested,
however, in how other instrumentalists with other backgrounds would interpret the notation.

3.aesthetics;
Included in this portfolio are performances of the piece by Zachary Hazen and Laurel
Grinnell. The performances were chosen to illustrate the subjective nature of the piece, and
how wildly variable interpretations of the symbols in the score can be. The piece is also
gratifying to me visually, as it contains disjunct, out-of-context symbols that lose their original
and proper meanings. The performers of the piece then have the opportunity to restore
meaning to the symbols, and they are at liberty to do so.

4.conclusion;
This piece has been a very rewarding experience for me, as it has allowed me the
opportunity to work in indeterminate acoustic music, which is a field that I generally do not
work in. I have had the idea to rework the piece, doing different interpretations of it with
different systems (electronic and acoustic) but have never really been inspired to make new
versions.
------
-----.
-.
- :r-
-- -
----

hum
anity
=nos
talgi
a
derek kay
jeppsen
~0:=' = ;
Allegretto COn rubato

~=
.. . . ------- . ~ .....
..'---"" ----- ~ ------- .. ..
.------ -----
. ... .
ifz p f ifz p ppp

r~.
~ t~imm
f ifz P mf- f= p

u--
mf pp ppp ff

P ff

=r~~-~~
25 tempo di valse moderato

. = P . . izz. izz.

32~ ~= C"C"C" C"·


-...; --. ~ > > > > > > > >
=============--- p fl· > <:> >
II
~nh" 0 naturale

~Ssl=isss<~~
> <::> > > <:> >
P
disorder etude;
for:
1. a sample
2. pure data
disorder etude;

1. introduction
The disorder etude is given as a first use of a utility that I have built, which loads any
.wav or .aif file, analyzes the attacks of the file, and builds a map of them. This then allows a
randomization of the successive events of the piece, and the file can be played back in any
order of attacks. This was first built in the spirit of making a stochastic “break beat” machine
(break beats are the common samples taken from drum breaks and repurposed by electronic
musicians).
Two additional signal processing techniques add more corruption of the time elements
in the sample. The first is a spectral delay machine, which is a form of delay which works in
the frequency domain, making it possible to apply a short delay to specific frequencies in the
frequency spectrum. The second is a scrub controller, which allows the user to define a point
in the sample to move to, and how fast the audio should be played to get to that point.
The input of this performance of the disorder etude; is a performance that I engineered
of Ladrang Geger Sakutho Slendro Nem, a classic Central Javanese composition. It was
performed by the Music 361 class at San Diego State University, and I was invited to engineer
a recording, and to perform real-time electronics with the group (which also had a drum set
player in addition to the standard gamelan ensemble).
The real-time electronics used for the piece consisted of two processes. The first is a
live-timestretching algorithm, which could use granular synthesis to repeat small chunks
(grains) of audio, making it sound as though a note was held for longer than it was on the
recording, and then snap back to what was happening live. The second was an eight-tap
delay (eight repetitions of the signal) that was controlled by analyzing the predominant
frequency of the input (the gamelan), and adjusting the delay to make the reverb of an
imaginary room with eight perfectly reflective surfaces at distances that would constitute
the wavelength of the predominant frequency's fundamental.
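Since the patch itself was lost (as noted just below), the following is only a reconstruction of the arithmetic that description implies: if the nth imaginary surface adds n wavelengths of path length, the nth tap is delayed by n periods of the detected fundamental. The tap layout is my reading of the description, not the patch's actual logic.

```python
# Sketch (a reconstruction, not the lost patch): derive eight delay-tap times from a
# detected fundamental frequency. One plausible reading of the description above is
# that the nth imaginary surface adds n wavelengths of path length, i.e. n periods of delay.

def eight_tap_delays_ms(fundamental_hz):
    """Return eight delay times (ms), one period of the fundamental apart."""
    period_ms = 1000.0 / fundamental_hz
    return [n * period_ms for n in range(1, 9)]

# e.g. a gamelan tone near 220 Hz -> taps roughly every 4.5 ms
print([round(t, 2) for t in eight_tap_delays_ms(220.0)])
```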
A few days after the recording of the gamelan performance, I built a patch in Pure Data
to search my hard drive for five randomly selected .wav files and return one second of each
five files every second. This patch corrupted the file system of my hard drive, and I lost all of
the patches that I had built in a six month period, including the patch used for Geger, so it
would be impossible for me to give more in-depth examples of systems included in it.

2. aesthetics
The patch that constitutes the instrument for disorder etude; was originally research
into the same concepts that inspired inertia; (also included in this portfolio), in the sense that
the patch was designed as an instrument to take something that happens live, and show an
audience exactly what the computer is doing with that audio. These types of constructions
interest me in the sense that the audience can follow precisely what is happening, without it
being predictable, or beyond their ability to comprehend.
The way that the patch works with time is by taking something that is meaningful and
complete and then showing it in a way that makes it into a completely different reality, an
acoustically impossible reality, although it retains its original acoustic profile. This is ultimately
the goal of this construct.
One of the resulting features of the patch is the concept of “style mining,” which is
derived from the practice of “data mining,” or taking a large amount of data, and processing it
in a way that derives new information. Musically, this process is performed by the patch,
which renders out new meaning for the input, as it creates a new awareness of the reality of
the piece, by putting it in a new context. This topic is discussed in more depth in the essay
included with the piece curation1;.

3. construction
The patch is made up of three discrete systems: 1) the sample loading and playback
engine, 2) the randomized playback engine, and 3) the spectral delay processor. The sample
loading and playback engine is extremely simple in nature, as it just loads a .wav or .aif file,
and has controls for playing, stopping, and scrubbing the file. The scrubbing mechanism
calculates how far away the playback is from the destination, and lets the user control how
long it will take to scrub to that point. The scrubbing engine is modeled after a tape scrubber,
which re-pitches the content depending on how fast the head (in this case, the [tabread4~]
object) is moving.
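The scrub calculation is essentially a rate computation: the distance to the destination divided by the time the user allows, expressed relative to normal playback speed. A sketch, with an assumed sample rate:

```python
# Sketch of the scrub calculation described above: the read head moves from its
# current position to a destination over a user-chosen time, and the resulting
# playback rate (relative to normal speed) is what re-pitches the audio,
# exactly as moving a tape head faster or slower would.

SAMPLE_RATE = 44_100  # assumed

def scrub_rate(current_sample, target_sample, scrub_ms):
    """Playback rate needed to reach target_sample in scrub_ms.
    1.0 = normal speed/pitch, 2.0 = double speed (up an octave), negative = backwards."""
    distance = target_sample - current_sample
    samples_of_output = SAMPLE_RATE * scrub_ms / 1000.0
    return distance / samples_of_output

# Jumping 2 seconds ahead over a 1 second scrub plays at double speed:
print(scrub_rate(0, 2 * SAMPLE_RATE, 1000))    # 2.0
# Jumping 1 second back over a 2 second scrub plays backwards at half speed:
print(scrub_rate(SAMPLE_RATE, 0, 2000))        # -0.5
```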

Illustration 1: the loading and playback engine

In disorder etude; the file is loaded, and then must be played once in its entirety for the
analysis of the randomized playback engine to take place. This offers the listener the ability
to hear the sample in its original form before hearing it in the randomized form. The analysis
consists of an algorithm that determines when an attack has taken place, and then places a
marker on the file in that spot. Once these markers have been placed, then the randomized
playback engine can choose to play the file from any of these points.
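The text does not specify the detector itself, so the following is only one plausible sketch of such an analysis: mark an attack wherever the short-term amplitude jumps past a threshold, with a refractory period so a single attack is not marked twice. The window size, threshold, and refractory time are assumptions, not the patch's values.

```python
# Sketch of a simple attack (onset) detector of the kind described above:
# mark a sample index whenever the short-term amplitude jumps past a threshold,
# then ignore further crossings for a short refractory period.

def find_attacks(samples, sample_rate, window=256, threshold=0.1, refractory_ms=50):
    marks, prev_level, last_mark = [], 0.0, -10**9
    refractory = int(sample_rate * refractory_ms / 1000)
    for start in range(0, len(samples) - window, window):
        level = sum(abs(s) for s in samples[start:start + window]) / window
        if level - prev_level > threshold and start - last_mark > refractory:
            marks.append(start)          # attack marker: playback may start here
            last_mark = start
        prev_level = level
    return marks

# toy signal: silence, a burst, silence, another burst
sig = [0.0] * 1000 + [0.8] * 500 + [0.0] * 1000 + [0.6] * 500
print(find_attacks(sig, 44_100))
```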
Illustration 2: random playback engine

The random playback controls have three separate modes of playback: 1) by scrubbing
to the next point chosen, 2) by triggering attacks at the interval of a pulse, or 3) by playing
back the attack points in a random order while retaining their original (natural) timing.
The spectral delay unit is based on one that has been published by Johannes Kreidler
in his book loadbang (published online at http://www.pd-tutorial.com/), but with a different
method of controlling which frequencies are delayed and at what length. This is done by a
sine wave generator which undergoes a few mathematical transformations to determine
the delay at certain frequencies. The delay window in the unit shows, on the x axis,
frequency (20 Hz to 20 kHz, linear scale) and on the y axis, amount of delay (0 milliseconds
to 105 milliseconds).

Illustration 3: the spectral delay

The determination of these sine waves can be controlled manually via these controls,
or they can be randomized over a period of 0-15 seconds, creating a smooth transformation
of the delay spectrum.
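A sketch of that mapping, matching the window described (20 Hz to 20 kHz on a linear x axis, 0 to 105 ms of delay on y): each band's delay is read off a sine function of its position on the axis. The number of bands and the sine's rate and phase are illustrative choices, not the unit's actual parameters.

```python
# Sketch of a sine-shaped delay curve over the spectrum, as described above:
# each frequency band gets a delay between 0 and 105 ms determined by a sine
# function of its position on a linear 20 Hz - 20 kHz axis.

import math

MAX_DELAY_MS = 105.0
F_LO, F_HI = 20.0, 20_000.0

def delay_curve(n_bands=16, cycles=2.0, phase=0.0):
    """Return [(band_center_hz, delay_ms), ...] for a sine-shaped delay spectrum."""
    out = []
    for b in range(n_bands):
        x = (b + 0.5) / n_bands                      # 0..1 across the linear axis
        freq = F_LO + x * (F_HI - F_LO)
        delay = MAX_DELAY_MS * 0.5 * (1 + math.sin(2 * math.pi * cycles * x + phase))
        out.append((round(freq), round(delay, 1)))
    return out

for freq, delay in delay_curve(n_bands=8):
    print(f"{freq:>6} Hz -> {delay:5.1f} ms")
```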
4. conclusions
The sounds produced by this patch come together on the idea of realigning the musical
time of a piece to something entirely different. It is gratifying, to me subjectively, to have
events take place that can be in any order, without any loss of consistency or intensity. It was
an extremely difficult patch to build, as the construction of even a stereo table for the sample
to be loaded to gets complicated quickly. The components that make up the patch are, to
date, the most polished utilities I have built in Pure Data.
The technical challenges, however, do not make up the piece. The label of “etude” has
been given to the piece because it has been an intention of mine to write specific music to go
into this piece, but I have not made it that far in the process. Plans for collaborations do exist
that will bring this concept to fruition.
disorder etude;

score for this performance:


1. load the sample and play/analyze
2. start random order playback
3. adjust to natural timing
4. readjust to pulsar timing and find a better pulse speed
5. engage the spectral delay by increasing “dry/wet” control
6. randomize spectral delay in successively faster increments
7. increase feedback to “100”
8. randomize spectral delay in a long increment
9. increase pulsar timing to circa twice as fast as set in step 4
10. enable scrub controller
11. disable random playback
12. use scrub controller and spectral delay randomization, one after the other
13. stop
curation1;
for:
1. subjective human
2. pure data
3. digital samples of mrindangam
4. two analog synthesizers

curation1;

1. the two engines: compositional/interfaces, and audio


The piece relies on two separate engines, the compositional and interface engine and
the audio engine. The compositional/interface engine is constructed entirely in Pure Data, is
an interface for the performer to run the piece, and the composition is mostly housed within
the patch. The audio engine is Propellerhead's Reason, which is a “soft synth” (short for
“software synthesizer”) music creation program. Reason is used as the audio engine for the
piece because of its simplicity and stability. Of course, all of the things that are happening in
the audio engine are achievable with Pure Data, but would require creation of instruments
and constructs that already exist in Reason. Such a thing may happen to the piece at a later
date, but for the sake of not spending more time on building audio systems (taking away from
the compositional focus), the piece does not contain any custom audio systems; all audio
processing takes place in Reason.
The instruments that are used in the piece are (two instances of) Reason's Subtractor,
a subtractive synthesis model, and the Kong drum designer, an analog and digital sample
modeling instrument (in the case of curation1; it is used to trigger custom digital samples).
One of the major facets of the piece is the ability to randomize most of the controls that
are on the Subtractor, which randomizes the timbre of the first melodic voice, the “unlogiced
voice.” These are fairly standard controls/parameters that could be extended to any other
software or hardware synthesizer with a way to communicate with Pd (MIDI, OSC, or any
other possible method).
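As a sketch of what that extension might look like over MIDI: pick a random value for each controller in a chosen list and send it as a control-change message. The CC numbers below are placeholders (not Subtractor's actual mapping), and send_cc() stands in for whatever MIDI or OSC output is available.

```python
# Sketch of randomizing a synth's timbre over MIDI control-change messages.
# The CC numbers below are placeholders, not Reason Subtractor's actual mapping,
# and send_cc() stands in for a real MIDI or OSC output.

import random

TIMBRE_CCS = {
    "filter cutoff": 74,   # placeholder CC assignments
    "filter resonance": 71,
    "amp attack": 73,
    "amp release": 72,
    "osc mix": 70,
}

def send_cc(cc_number, value):
    # stand-in for a real MIDI out; here we just log what would be sent
    print(f"CC {cc_number:3d} <- {value}")

def randomize_timbre():
    """Send a random 0-127 value to every controller in the list."""
    for name, cc in TIMBRE_CCS.items():
        send_cc(cc, random.randint(0, 127))

randomize_timbre()
```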
The Kong drum machine is utilized as a digital sample player, and holds nine rhythmic
samples that I made of a mrindangam, a drum that is used in Indian Carnatic music. The
drum is, compositionally, a good fit for the piece as it is generally played in a style that is
linear, that is to say, it is played with sequential attacks and generally not with any
simultaneities. This feature coincides with the rhythmic generation system that is contained in
curation1; which tends to make entirely linear patterns.
There are various other sonic treatments in the Reason patch, most of which deal with
compression/limiting, equalization, and distortion. These are modules that are necessary for
the sonic realities of the piece, making it possible to have an acceptable mix always
happening even when the instruments change tessitura, or in the case of the first voice,
timbre. Some of these modules exist to serve the composer's subjectivity as well (primarily, the
distortion and processing on the mrindangam samples, and the slow attack on the master
limiter to make a “punchier” sound).

2. inspirations-large dimension
The following section of this essay is provided as a thorough background of what
inspired this work into being, and to give a certain level of understanding of the aesthetic of the
composer. The vast works by philosophers and theorists on what art is, at its core, are of
grave importance to me, and the specific works that inspired this piece set the stage for the
piece's function and what it is at a larger dimension.


With this in mind, it becomes imperative to state that the Western traditions of
philosophy and music are strongly represented in the composition, and that this essay will
refer to concepts and philosophies as a way of reinforcing and furthering the concepts of that
culture. Thus, it is necessary for the reader to understand that terms which appear universal
will obviously not be. When reference is made to “aesthetic” or “art,” it is done so as defined
in Western culture. While a more lengthy conversation on the cultural implications of this type
of work will have to wait for another time, it seems important that a distinction be made at the
outset.
The fundamental ideas of this piece are inspired by aesthetic and theoretical works in
the music theoretical and philosophical field of phenomenology, chiefly James Tenney's Meta
+ Hodos and Martin Heidegger's Der Ursprung des Kunstwerkes. After the fact, Iannis
Xenakis' Formalized Music has proven extremely valuable in exposing a more thorough
aesthetic in which this piece comfortably resides. In these works, different ideas are
espoused about art and aesthetic that deal mainly with a physical perception of art (in
Xenakis' case, working in composition toward an objective physicality of sound) and an
attempt to slough off some outdated understandings of what art and music really are.
In addition to these works, much reference will be made to the fields of computational
linguistics and information theory, which have inspired a new system for explaining and
documenting curation1; as well as clarifying the more general processes employed by many
systems that exist in the piece (i.e. Markov processes controlling melodic generation,
weighted probability, weak artificial intelligence).

2a. Heidegger's model of artistic being


In Der Ursprung des Kunstwerkes, Heidegger seeks a distinction not of what is art or
what is not, but rather, what work art does. In this sense, the progress of art is completely left
open by Heidegger, as his generalizations lead to a greater awareness of what art can do.
Heidegger tackles art as he would any other perceived object, with emphasis on where the art
of the object comes from. Art, in his view, is still just an object. He explains that “Beethoven's
Quartets lie in the storerooms of the publishing house like potatoes in a cellar.” 1 Heidegger
acknowledges that the object itself is not art, but rather, the work of the art is what makes it
art.
Heidegger's main distinction between what is art, and what is not, is based on the
work involved with the object. For instance, one could have a pair of work shoes, which are
made with the idea of work and wear in mind. When these shoes do their work (of protecting
the feet) then there is a resulting wear, a breakdown of the actual object. They stop being
able to perform their function at some point, having lost their reliability, their equipmental
quality, and must be discarded or re-purposed. 2 The distinction between this work, and a work
of art, is that the art never deteriorates because of its work.

1. Heidegger, Martin. “The Origin of the Work of Art,” in Philosophies of Art and Beauty: Selected Readings in
Aesthetics From Plato to Heidegger, ed. Albert Hofstadter and Richard Kuhns (Chicago: University of Chicago
Press, 1976), 652.
2. Ibid., 664.

Throughout the essay, the example of Vincent Van Gogh's A Pair of Shoes (1886)
illustrates the way that art works with its creator. This simple painting shows, for
Heidegger, what he means when he says that the work of the art is completely detached from
the object which holds the art. One can see the shoes, and experience the “truth” (or
“unconcealedness”) put forth by them (Heidegger uses the Greek “aletheia”), as the shoes
are not the actual shoes of a peasant laborer, but rather, the essence of the form “peasant
shoes.” The way the artist “set(s)” the truth in his work is to put it on a pedestal, and to allow
the surrounding truth of the shoes to manifest. 3
Heidegger refers to this as the work “setting up” a “world.” 4 This world is the world
that surrounds the work, the openness needed to have a world apart from our reality which
shows the truth and reality of the work. Heidegger gives the example of a temple, which is
itself, rock, metal, and other building materials. Each of these materials is, by itself,
worldless, giving no truth about anything but itself. The work that it takes to bring them
together is non-destructive, in the sense that they are all present, and not decaying because
of their work (which is being a temple).5
In this, Heidegger explains that “The work moves the earth itself into the Open of a
world and keeps it there. The work lets the earth be an earth.” 6 Essentially, the work moving
the earth into the open refers to the work of the temple creating its own spot in our reality,
which is beyond our reality. Its meaning is much more than a building. It is the meaning that
distinguishes itself from a building, and that meaning is a world of comprehension,
recognition, and sensibilities that accompany the world of the temple. This is what Heidegger
means by truth.
Heidegger's notion of art is based on the truth that is portrayed in the “Open” of the
“earth.” This truth, he claims, is the work that makes art into art, not merely an object. A
model of how Heidegger proposes that this work takes place is portrayed thusly:

[diagram from the original document not reproduced in this text version]

While this model seems philosophically attainable, and proves to be quite correct (in
that scope), it seems to ignore entirely what the perceiver of art actually brings to the work of
art. The subjectivity of all parties to art (in music: composer, performer, and listener) is
certainly a difficult subject, and one that Heidegger does not approach. The essay is, of
course, not an unlimited source for all that makes up art, but the creation of art does not
inherently explain how art is received, how it is understood, or what affect or information is put
forth for a viewer to respond to.
In Heidegger's model, we could merely say that a listener may not have enough
reference to identify the truth that has been unconcealed, which then leads them to ignore
that truth, or in other cases, actively reject it. We could also find hypothetical situations where
the composer unconceals an unimportant, or unwanted truth. For the performer's subjectivity,
we can find a parallel where the performer could misrepresent the truth being unconcealed or
could not understand that truth well enough to unconceal it. This gap in mutual subjectivity
can lead to the disregard of the truth, a destruction of art and its function. As will be shown
below, the only way to truly know how much of the performer's and composer's subjectivity
goes into that unconcealing is by placing them in situations where we can derive (via
perceptual subtraction) how much their influence is changing the piece.

3. Ibid., 666.
4. Ibid., 672.
5. Ibid., 674.
6. Ibid., 674.
With Heidegger's model of creation in mind, we can begin to ask the real questions of
how art exists once it has been created. Creation itself cannot stand to be the end game in
art, as it takes perception and reception to truly function. One could say that the model that
Heidegger lays out is a fine model for how art can come to being in the first place, and that we
need to identify a model for the next steps of art's work. These questions are the difficult,
large scale questions that curation1; attempts to answer.

2b. Xenakis' model of composition


The subjectivity of the composer is certainly one of the most obvious questions posed
by listening to any music. Possibly the most employed tactic for creating a “master” out of a
composer is to label them with the term “genius.” Of course, it is hard to understand what it
is that the masters have done to achieve the title without understanding a status quo from
which they rose. This status quo is the heart of Western art music's subjectivity.
Schenkerians can bemoan the loss of the musically educated public all that they would like,
but it still does not make a difference in the fact that the public no longer shares the subjective
knowledge that they once did with the masters. Without knowing where the third and fourth
movements of Beethoven's Fifth Symphony should break from each other, the novelty of an
attacca movement is lost.
The majority of Western art music has relied on some kind of prior knowledge of the
listener to attain full understanding of the music, but what is even more interesting is that, on
deeper contemplation, this is less about the conventions of listening and more about the
conventions of the composer. It is the composer's subjective knowledge of what the listener
will identify in the music that allows them to play with the audience's expectations.
What, then, is the proper role of subjectivity in composition? Had Beethoven's Fifth
Symphony been orchestrated for six cannons and a bowed saw, the attacca between
movements three and four would probably not resonate as much as an innovation. The
subjective knowledge of what was an acceptable instrumentation was employed at the
inception of the piece.
In Formalized Music, Xenakis lays out a sequence by which he believes a work of
music is derived. This process can be applied, more or less, to composition as a whole as a
tool for analyzing some of the features of a composed work. According to Xenakis, these are
the Fundamental Phases of a Musical Work7:
1. Initial conceptions (intuitions, provisional or definitive data);

2. Definition of the sonic entities and of their symbolism communicable with
the limits of possible means (sounds of music instruments, electronic sounds,
noises, sets of ordered sonic elements, granular or continuous formations, etc.);

3. Definition of the transformations which these sonic entities must undergo
in the course of the composition (macrocomposition: general choice of logical
framework, i.e., of the elementary algebraic operations and the setting up of
relations between entities, sets, and their symbols as defined in 2.); and the
arrangement of these operations in lexicographic time with the aid of succession
and simultaneity);

4. Microcomposition (choice and detailed fixing of the functional or
stochastic relations of the elements of 2.), i.e., algebra outside-time, and algebra
in-time;

5. Sequential programming of 3. and 4. (the schema and pattern of the work
in its entirety);

6. Implementation of calculations, verifications, feedbacks, and definitive
modifications of the sequential program;

7. Final symbolic result of the programming (setting out the music on paper
in traditional notation, numerical expressions, graphs, or other means of solfeggio);

8. Sonic Realization of the program (direct orchestral performance,
manipulations of the type of electromagnetic music, computerized construction of
the sonic entities and their transformations).

7. Xenakis, Iannis, Formalized Music (Bloomington: Indiana University Press, 1971), 22.

Of course, Xenakis' description is biased to the language of his own music, but if we render
out the mathematical connotations, what we get is a clear division of the elements of
composition which are subjective in the process, and of the objective within the process.
Steps 1, 2, 3, and 4, show themselves to be highly subjective. Steps 3 and 4 even
include in them the word “choice,” which points directly to the composer's subjectivity.
Xenakis' piece Pithoprakta is for two trombones, percussion, and strings. The choice of
instrumentation is most likely linked somehow to the dialog that the piece contains (i.e. what it
will explore). Ultimately, though, the instrumentation is a decision based on practicalities, or
what the instruments offer for that dialog, which are not inherent features of that instrument.
For instance, the strings in the piece are needed for bars 52-57 to perform pizzicato glissandi,
but there is no reason that the same material could not be played on other
instruments, with different effects.
Steps 5 through 8 show objective processes that are needed to make the art-object.
The objective nature of these steps can obviously change (Xenakis has called this out in the
parentheses of steps 7 and 8), but their end is the necessity to, as Heidegger says, “set up”
the work. This is the end-game of the piece's creation, or from above, the Heidegger model
of artistic being. But it is obvious, from reading the first four steps, that the subjectivity of the
composer is what is necessary to set the entry into composition.
The subjective choices that Xenakis makes at the outset of his compositions give an
excellent example of the gray areas that arise out of this process. Firstly, the initial
conceptions from the first step also have with them the objective definitions of “provisional or
definitive data.” This goes to show that even though the process itself is subjective, Xenakis
attempts to also leave it open for objective concepts. In his case, the impetuses for his pieces
tend to be scientific or mathematical. He is, after all, searching to make music that is ruled by
the same mathematical (stochastic) concepts as the events in our everyday lives. 8
No matter the derivation of the concepts which are involved in the actual piece being
created, the impetus for composition is subjectivity. Taken to an extreme, the choice to
embark on the path of creation itself is a subjective one. There is no objective truth that makes
it so that one must compose. It is not a given function of humanity to create art-objects. It is,
rather, our ability. To put objective material into the choice to compose does not change the
fundamental function of the composer, as that itself has already compromised the objectivity
of the piece.
Xenakis' process does not adhere fully to either a subjective or objective direction, but
rather walks a fine line between both. The end result of this is pieces that are in some ways
subjective (composing for accepted instrumentation) and are in some ways objective
(macrocomposition takes place within mathematical constructs). The end result is a beautiful
mix of both: a composition that uses familiar means (subjectivity for the listener) to introduce
an objective compositional approach (stochasticism), with a result that is approaching an
objective scientific process, which aims to take data on how the audience hears the piece.
The antithesis of this result is the aim of curation1;. It is to proceduralize the
subjectivity of the composer, to take objective statistics of that process by 1) allowing the
performer a subjective role which is subservient to the composer's, then 2) allowing the
performer an equal role to the composer, then 3) allowing the performer zero subjectivity, with
only the composer's subjectivity informing the choices made for the piece. As we will see
below, the large scale construction of the piece instigates discrete perceptual values for each
movement which are the statistics of the subjectivity contained in the method of each
movement.

3. philosophical interlude: subjectivity and objectivity


In curation1; the concepts of objectivity and subjectivity are of primary importance, and
it is necessary to cursorily define them here. There is really no need to catalog the extensive
conversation in Western culture about these concepts, but we can distinguish the concepts in
general from their application within this work. In Tenney's Meta + Hodos, he gives a clear
definition of what he calls the “subjective set,” as “expectations or anticipations which are the
result of experiences previous to those that are occasioned by the particular piece of music
now being played.”9 Beyond this, Tenney offers only one example of how the subjective set
works in perception, and it is one of the practice of direct quotation.
This omission of discourse on the topic was the primary provocation of this piece. This
somewhat taboo subject is confronted head-on by the piece in an attempt to dissect
subjectivity itself, to help it find a home in musical discourse.

8. Ibid., 9.
9. Tenney, James, Meta + Hodos and META Meta+ Hodos (Oakland: Frog Peak Music, 1986), 44.

Subjectivity has been the scapegoat of Western culture's decline in musical literacy for
some time now. It has been the assertion of the masses that musical taste must be entirely
subjective, or else everyone would share the same taste. Our commercial culture has
generated much music that does not contribute any meaningful discourse, or in Heidegger's
terms, does not unconceal any truth (besides the socio-economic truths pertaining to effective
advertising and inspirational consumerism). Still, subjectivity is invoked to defend the rights of
those who are the audience for these advertisements to continue to (willingly) subject
themselves to ineffective art. The confusion of subjectivity with a lack of musical
understanding (or desire to attain that understanding) is made possible due to the lack of
testability, its massive scope and applicability, and a misunderstanding of the physicality of
listening.
It is counter to the laws of physics to make the argument that two people can hear
different things in a recording or live performance. The same sound waves encounter the
eardrums of any number of audience members, and the difference in physical sound that two
audience members would experience at any musical exhibition is negligible (and,
unless extreme circumstances occur, musically irrelevant). It is, of course, our cognition and
experience of music that varies from person to person. This is done through applying
subjective information, or information that would not necessarily be a part of any other
person's experience.
Relying on one's own experience to exercise judgment on musical experiences is
perfectly natural and universally human, and the more one can interact (cognitively) with a
piece, the more rewarding the piece becomes. John Cage, quoted in Joseph Byrd's 1967
review of Variations IV, explains the new role of the listener: “... we must arrange our music,
we must arrange our Art, we must arrange everything, I believe, so that people realize that
they themselves are doing it, and not that something is being done to them.” 10 This attitude of
letting the audience know that they are an element of the piece is a strong connection that
can be built with the listener, and one that yields high subjective interaction.
The discourse of unearthing the subjectivity of the composer is extremely important.
Without the composer's subjectivity, how do we even get to the “origin” of art that Heidegger's
theory revolves around? The impetus for composition is itself a subjective cognitive process
involving a judgment of which art needs to exist that currently does not. The analysis of
composition as a subjective suggestion seems a bit dull on its own, but if we extend that
subjectivity to the role of the listener, we get to see a true objective end.
It is this subjectivity of the composer that really translates into the success of a piece.
The subjective processes of determining the elements of a piece are what determine its
cultural function, its shape (in the musical sense), and the truth which it unconceals about our
reality. This is where subjectivity creates style, which creates an audience who identify with
the subjective content of the composer's work. Even without being able to trace what that
subjectivity is, listeners can still identify which music they like, and which they do not. This is
a function of the composer's subjective input to the piece, and the listener's subjective input to
the physical act of listening. If the amount of overlap between the two is maximized, this is
bound to translate into the listener having a more fulfilling experience with the music.

10. Byrd, Joseph. “Variations IV,” in Writings about John Cage, ed. Richard Kostelanetz (Ann Arbor: University of
Michigan Press, 1993), 135.

These concepts result in a theory of genre. The slander that the concept of genre
currently endures is the same process that subjectivity endures, as both have been
repurposed in an act of false egalitarianism. It is believed by some that music should not be
labeled by its attributes, because that limits the music's scope, and hence, the music's
audience. What is not understood is that the music itself declares a style, and those styles
contained in a piece are what define its subjective features; this is what is given by the
composer, and interacted with by the listener.
Jan LaRue's Guidelines for Style Analysis lays the groundwork for a study in genre, if
only his taxonomy were applied to higher hierarchical levels (such as periods of a composer's
works, or more modernly, entire albums, or works across a genre). This, along with James
Tenney's conceptions of “parametric intensity” could provide a full view of what binds together
a genre, or finds unity across seemingly disparate works.
curation1; plays with musical features that are constructed in a way to make the style
of the piece largely ambiguous. For instance, the percussive voice is derived from the timbre
and technique of the mrindangam, a drum used in Indian Carnatic music. The digital samples
of the drum are also heavily processed to suit a subjective desire of the composer (uses of
distortion and compression to derive a sound most commonly found in the electronic genre of
Drum and Bass), and the rhythms played with the samples are determined by an algorithm
that fills a sequencer with random samples at any given timepoint (which is akin to the
algorithmic approach to composition taken by Xenakis). These three features, contained in
one voice, are from disparate genres in which each of these features is most expected, and
all three are very unlikely to be found in the same piece.
It seems that middle- and small-dimension features are the most flexible hierarchical
degrees with which the composer can exert these kinds of subjective choices. As long as each
feature, on these hierarchical levels, can function as an expected feature for the audience, or
conversely, as a novelty of the piece within the style, then the listener will be able to relate to
those subjective choices by the composer. These flexibilities are what distinguish one piece
from another inside of a particular genre.
In curation1; the middle- and small-dimension characteristics are defined based on
constructs that are inspired by the study of information theory, and of computational and
stochastic linguistics. These inspirations and actualities identify the piece to be (generally)
similar in genre to Xenakis' compositions, and to the technique and style employed by the
IDM duo Autechre (since they began employing algorithmic composition on their album Confield).
4a. interdisciplinary inspirations – stochastic linguistics
During the creation of this piece, it has been the composer's privilege to study
(cursorily) the fields of stochastic linguistics and information theory. The overlap of these
fields with music seems a natural fit, seeing as the fields of linguistics and music have been
theoretically linked since the publishing of Lerdahl and Jackendoff's A Generative Theory of
Tonal Music (1983)11, and information theory (at least the probability concepts therein) has
been employed to create music for the past 60 years, most notably by Iannis Xenakis (who
explains the theory surrounding his information theory inspirations in Musiques Formelles
(1963) and in the expanded English edition, also cited in this work as Formalized Music (1971)).
All of these disciplines have sizable overlap and give a new perspective for a modern
understanding of music, as well as the generation of music.

11. Clarke, Eric F. “Theory, Analysis and the Psychology of Music: A Critical Evaluation of Lerdahl, F. and
Jackendoff, R., A Generative Theory of Tonal Music,” Psychology of Music 14 (1986), 4,
http://pom.sagepub.com.libproxy.sdsu.edu/content/14/1/3.full.pdf+html (accessed Dec. 4, 2010).
The idea of algorithmic composition was an expected end for Xenakis considering his
other trade, architecture. Using his mathematical background to compose was his large-
dimension subjective choice (see 2b. above). The processes he employed are a synthesis of
the objectivity of his mathematical approach, and the subjectivity of a composer which is
crucial in giving form to a new reality. His process, however, now looks extremely similar to
processes employed by the field of computational linguistics, which would use the same
formulas as he did to create so-called “weak artificial intelligence,” to make it appear that a
computer is using language.
Weak artificial intelligence, sometimes also known as “expert systems,” is the kind of
computational device which uses the power of computation to serve a specific function.
Linguists use this concept to construct search algorithms, text-to-speech generators, and
grammatical checkers. These practices rely heavily on the field of information theory, which
uses probability to decipher ambiguities, which abound in linguistic reception and production.
The work of Claude Shannon, an information scientist who created the concept of digital
communications, gives us a model for how to reconstruct messages that have been corrupted
(or veiled), which then can be applied to any digital use of information 12.
The process by which this is done is discussed further below with respect to curation1;'s
“logiced voice” and how it generates melodic contour. Although the small-dimension
actualities of the piece are derived from these techniques, the piece derives a portion of its
large-dimension inspiration from this theory of weak AI. The piece is, after all, about the link
between the composer's subjectivity and the subjectivity of the audience. What better way to
represent this subjectivity than by creating an automated system that composes in the style of
the composer? This is achieved with a simple recognition of what subjective input makes it
into the piece (methods and logical processes which the composer could go through in the act
of composition), and then allowing a computational process to control what is actually chosen in
that framework (on the small-dimension/surface level).
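The “logiced voice” itself is discussed further below in the essay, but as a generic illustration of the kind of process named in section 2 (a Markov chain with weighted probabilities choosing notes from a mode), a minimal sketch follows; the mode, the transition weights, and the phrase length are invented here and are not the tables used in curation1;.

```python
# Generic illustration of a first-order Markov chain with weighted probabilities
# choosing notes from a mode. The mode, the transition weights, and the phrase
# length below are invented for the example.

import random

MODE = ["C", "D", "E", "G", "A"]  # a pentatonic collection, for illustration

# transition weights: from each note, how strongly each next note is favored
WEIGHTS = {
    "C": [1, 4, 2, 2, 1],
    "D": [3, 1, 4, 1, 1],
    "E": [2, 3, 1, 3, 1],
    "G": [1, 1, 3, 1, 4],
    "A": [4, 1, 1, 3, 1],
}

def generate_contour(start="C", length=8):
    note, phrase = start, [start]
    for _ in range(length - 1):
        note = random.choices(MODE, weights=WEIGHTS[note])[0]
        phrase.append(note)
    return phrase

print(generate_contour())
```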
The ability of the computer to decide the actualities of a performance of the piece also
brings with it a certain understated objectivity. The probabilistic processes are used as a way
of helping to “set up” a “world” for the piece, bringing it to fruition in a perpetually different
manner but within parameters that create certain invariances. This is not improvisation (as
the computer does not choose which rules it obeys), but rather an aleatoric expression, which
carries with it the objectivity of not having any prior inclinations to form any idioms involved in
any genre.
The use of randomization in the piece is the only attempt at offering an objective
approach in the composition. It acts as a way of making objective phrasing, melodies, and
counterpoint, which are not necessarily the most “musical” choices (in an historical sense),
but are decidedly unable to become an inside joke to a group of elite listeners. This allows for
the piece to be experienced objectively without listeners being afraid of being outside of the
group of listeners for which the piece is intended (as there is no specific group in mind).

12. Anouschka Bergmann, Kathleen Currie Hall, and Sharon Miriam Ross, eds. Language Files 10 (Columbus:
The Ohio State University Press, 2007), 592.

The process of determining the subjective parameters of the piece is a process of
uncovering what the composer believes the resulting music should be. This can be extended
to a few levels of the inspirational phase of any work: this piece should exert this feature, all
pieces should explore this question of perception, this music should be composed in
chronometric time instead of musical time, etc. In curation1; each of the parameters
needed to be created within a programming environment (Pure Data), so each parameter
needed to come from a specific need for the piece, as defined by the composer.

4b. a comparison with the work of composer Michel Philippot


An early piece with a similar construction is by Michel Philippot, whose 1960
composition, Composition pour double orchestre is based on a process which Xenakis calls
an “imaginary machine.”13 Xenakis includes, in Formalized Music, comments by Philippot
about the imaginary machine, a system where composition takes place based on a flow chart
of the decisions that he would normally go through in the act of composing.
Philippot catalogs a number of decisions that must be made, reduces them to binary (yes
or no) decisions, creates the flow chart of the decisions, and then goes through the process of
the machine. Philippot then rewrites the flow chart to delete results which he believes are not
part of his musical taste, and re-runs the flow chart.
Philippot states that he is creating a piece of music that he himself would make without
the aid of this system, but that his motivation for systematizing his compositional processes
was the belief that it would yield some insight into himself as a composer: “At the end of the
experiment I possessed at most some insight into my own musical tastes, but to me, the
obviously interesting aspect of it... was the analysis of the composer, his mental processes,
and a certain liberation of the imagination.”
Although very similar to the processes and intentions of curation1; there are a few
important differences. Philippot's process allows the composer to rewrite the flow chart if the
“musical intentions” of the composer were not met by the output. The process used for
curation1; does not allow re-checking the output of the machine. The composer had the
ability to build a system in the first place, and this becomes the piece. The choices made
pertaining to the functionality of the machine (what it is capable of) are set. That is a natural
facet of the piece.
A more important difference is that Philippot's process is binary. The machine can
only accept or reject something. In curation1; the choices made by the system are in the form
of parametric choices. The computer uses a higher degree of logic, not choosing from a
choice of yes or no, but choosing, for instance, a note from a mode (scale) instead. This
makes the processes more active in curation1; which allows more objective (probabilistic)
decision making. These processes will be discussed further below.

5. actualities and algorithms: small- and middle-dimension composition


The small- and middle-dimension features of curation1; are defined by three specific
voices: the “unlogiced voice,” the “logiced voice,” and the “percussive voice,” and one binding
inspiration for their construction: to build a machine to follow the same subjective logic that
the composer works through when composing. The ultimate goal of such an endeavor (if it
were actually possible to fully succeed) would be to define a specific style of the composer, a
compendium of what art the composer wants to bring into being, without the need for the
composer to have to articulate what they are looking for.

13. Xenakis, Iannis, Formalized Music (Bloomington: Indiana University Press, 1971), 39.

Illustration 1: sequencer view of the "percussive voice"

For this reason, the choices made about what the piece is able to generate (what
absolute parameters are programmed into the piece) are entirely subjective means,
but with largely objective ends. The parameters included are (mostly) musical features that
are pleasing to the composer, and are not necessarily chosen for any reason beyond that
goal. The ranges of those parameters can have multiple functions, from strictly physical
necessity, to entirely subjective. We will now dissect the voices individually, and define their
functions and possibilities. First, we will define the “percussive voice.”

5a. the percussive voice, small- and middle-dimensions


The percussive voice is sonically defined by the samples that are loaded into each
channel (shown by the nine rows). These samples were derived from sampling a
mridangam, and were then heavily processed with a modified attack envelope (making the
initial attack, the strike tone of the drum hit, proportionally louder than the decay phase, or
ringing, of the drum), distortion (provided by Reason's Scream4 distortion model), and
compression, which provides needed mixing functionality.
The surface rhythms ultimately executed by the percussive voice (the small-
dimension features) are chosen by an algorithm which generates a rhythmic sequence whenever it is
called for by the other controls in the patch. An illustration of the Pure Data objects needed to
construct the rhythmic pattern generator is included to show how simple the algorithm is.

As can be seen from the illustration, there are more objects in the sub-patch dealing with
environmental issues than there are performing the actual generation. The process of generation
proceeds as follows:

Illustration 2: "pd algorythm" sub-patch - rhythmic pattern generator (left paths) and
clearing function (right paths)
1. receive instruction to generate a sequence (via the sub-patch [inlet], reception of the [addpat]
variable, or a bang from the 2nd outlet of [pd allclear], which controls the clearing of the
sequencer)
2. repeat this process “x” times ([kalashnikov], the number of timepoints contained in the
sequencer)
3. choose a random floating-point number between 0 and 20, then truncate it ([randomF
20] then [i])
4. if this number is in the range 1-9, add a hit on this timepoint (“x” of step 2) (the [moses 1] →
[moses 10] construction is a range filter)
5. if the number is out of the range 1-9, do nothing (resulting in a rest on timepoint “x”)
If we calculate these ratios, we find that there is a 9 in 20 (45%) chance that there will
be a note on any given timepoint, and an 11 in 20 (55%) chance that there will be a rest on
any given timepoint. Furthermore, given that a note occurs, there is a 1 in 9 (11.11%) chance that any
particular sample will be given the note.
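For readers without the patch in front of them, a minimal sketch of this generation logic might read as follows in Python (the function and variable names are illustrative, not objects taken from the patch, and it assumes [randomF 20] produces a float between 0 and 20):

    import random

    TIMEPOINTS = 32   # steps in the sequencer grid
    TRACKS = 9        # one row per drum sample

    def generate_pattern():
        # For each timepoint, draw a float between 0 and 20 and truncate it
        # ([randomF 20] then [i]); values 1-9 place a hit on that sample track
        # ([moses 1] -> [moses 10] acting as a range filter), anything else is a rest.
        pattern = [[0] * TIMEPOINTS for _ in range(TRACKS)]
        for step in range(TIMEPOINTS):
            draw = int(random.uniform(0, 20))
            if 1 <= draw <= 9:
                pattern[draw - 1][step] = 1   # tracks are numbered 1-9
        return pattern

This reproduces the ratios above: 45% of timepoints receive a hit, and, given a hit, each of the nine samples is equally likely to receive it.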
This math may seem a dull or insensitive process, but it is up to the subjectivity of
the performer to make the algorithmic generation musical by manipulating the other (higher-
level) controls that determine how fast the sequence is played. The composer's subjectivity is
reflected in the sense of accentual openness in a simple 32-timepoint pattern. The composer
and computer share the same outlook on the 32-step sequencer, that it is a grid which can be
filled in any way, as the grid is simply 32 evenly spaced musical timepoints.
The computer need not define what timepoints make up the grid, as they are chosen in
the next hierarchical level of controls (which are located in the interface of the patch and
accessible by the performer during the first two movements).
The interface controls that deal with the middle-dimension features of the sequencer
are as follows:

Illustration 3: middle-dimension controls for "percussive voice"

1. the step resolution (i.e. the length of each step on the sequencer)
2. “add pattern”, “clear”, and “new on clear” (which allow a one-shot manual application of
each function to the grid at any given time)
3. tempo modulation (which actually modulates the master tempo of the patch) and the
“addpattern”/”newpattern” controls (which either allow a new pattern to be generated
when modulating, or add a pattern to the existing one at the time of the modulation)
4. the “new pattern every” and “add pattern on” controls (which, respectively, generate a
new pattern after “x” repetitions of the pattern, and add an additional pattern after
“x” repetitions of the pattern)
The tempo modulation control takes the form of a ratio and a send button. The ratio
on which it operates is “x in the space of the last eight timepoints on the grid.” This means
that when the clock reaches the last eight timepoints, a random drum sample
(from the nine used by the voice) is chosen to play the polymetric pulse of the tempo being modulated to,
and when the pattern returns to the beginning of the sequencer (step one), everything
modulates to that tempo. The “newpattern” and “addpattern” controls allow the performer or
computer to modulate and, respectively, generate a brand new pattern or add an additional
pass of the generation algorithm to the pattern that already exists.
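As a rough arithmetic sketch of what that ratio implies (my own reading of the control, not code from the patch): fitting x pulses into the time previously occupied by eight timepoints scales the step rate by x/8.

    def modulated_step_rate(current_rate_bpm: float, x: int) -> float:
        # "x in the space of the last eight timepoints": x new pulses occupy the
        # duration of 8 old steps, so the step rate scales by x/8. For example,
        # a step rate of 160 BPM with x = 9 would modulate to 180 BPM.
        return current_rate_bpm * x / 8.0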
These controls are the determiners of the style of the surface rhythms played by the
percussive voice. They do not include large-dimension elements, or small-dimension controls
such as where any given note will be played. The former are controlled subjectively by the
composer (in the form of direct choice), and the latter by the subjectivity of the composer (in
the form of the weak artificial intelligence systems which make the small-dimension choices in
all engines contained in the piece).
5b. the unlogiced voice: small- and middle-dimensions
The rhythmic sequences created for the percussive voice are also used (in two of three
modes) for the “unlogiced voice,” a subtractive synthesis model whose melodic contour is
controlled by randomization within a selected musical mode. The three procedural modes in
which the voice operates are titled “correlated,” “unified,” and “pulsar.”

Illustration 4: unlogiced voice: middle-dimension controls

The pulsar mechanism is the most straightforward operation. When activated, the
voice plays on the pulse given by the master clock and generates its melodic contour
randomly within the selected musical mode. The mode, however, can be constricted in the
order shown in Illustration 4; this ordering of the mode is a subjective choice of the
composer. For example, if the performer slides the “constrict mode” slider to the left, they will
eliminate the option for the third scale degree (^3) to be chosen, then the sixth (^6), the
fourth (^4), etc. This operation works in both the pulsar and unified operations.
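A sketch of this constriction, assuming the elimination order continues beyond the three degrees named above (the tail of the list below, and all names, are my assumptions, not the patch's own):

    import random

    # Order in which degrees disappear as the "constrict mode" slider moves left;
    # ^3, ^6 and ^4 are given in the text, the remainder is assumed.
    CONSTRICTION_ORDER = [3, 6, 4, 5, 2]

    def available_degrees(constriction_level):
        # Remove the first `constriction_level` entries of the order from the
        # full set of seven scale degrees.
        removed = set(CONSTRICTION_ORDER[:constriction_level])
        return [deg for deg in range(1, 8) if deg not in removed]

    def pulsar_degree(constriction_level):
        # Pulsar mode: on each master-clock pulse, choose a degree at random
        # from whatever the constricted mode still allows.
        return random.choice(available_degrees(constriction_level))

With this assumed ordering, a constriction level of two leaves ^1, ^2, ^4, ^5, ^7 and a level of four leaves ^1, ^2, ^7, matching the constrictions called for in the performer's score.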
If “unified” is selected, the same random generation and mode constriction applies, but
the rhythm is derived from the 32 step sequencer of the percussive voice. This mode of
operation has been created by the composer to fulfill an aesthetic of rhythmic coherence
between the percussive content and the melodic content of the piece.
If “correlated” is selected, the generation of which scale degree is played is no longer
randomized; instead, it correlates to the actual tracks of the percussive voice's 32-step
sequencer. The scaling of this data is as follows: if track one has a note on a specific
timepoint, then there will also be a melodic note on the first scale degree at that timepoint.
There are nine tracks on the percussive voice's sequencer; tracks one through seven
correlate to scale degrees one through seven, and tracks eight and nine correlate to
scale degrees one and two, respectively.
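A sketch of that mapping (the function name and data layout are mine):

    def correlated_degrees(step_column):
        # `step_column` holds the nine hit/rest values (1 or 0) of one sequencer
        # timepoint. Tracks 1-7 trigger scale degrees 1-7; tracks 8 and 9 wrap
        # around to degrees 1 and 2.
        track_to_degree = {t: ((t - 1) % 7) + 1 for t in range(1, 10)}
        return [track_to_degree[t + 1] for t, hit in enumerate(step_column) if hit]

For a timepoint where tracks three and nine have hits, this returns degrees three and two.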

Illustration 5: curation1;'s visualization of random timbre controls

In the middle-dimension, the unlogiced voice's main source of movement is timbre.
The timbre controls are constructed as a replica of some of the controls in Reason's
Subtractor subtractive synthesis modeler. curation1; can control the Subtractor's oscillators,
the mix and FM of the two oscillators, the filters, the filter envelope, the amplitude envelope, and the mod
envelope. These modulations are all controlled by randomization, of which the performer can
control three parameters: 1) after how many pulses of the master clock randomization
happens, 2) how long it takes to reach the generated number (in milliseconds), and 3)
whether to send the generated numbers or not.
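A compact way to picture that randomization loop (the parameter names below merely stand in for the Subtractor controls the patch reaches; they are not the actual send names in the patch):

    import random

    PARAMS = ["osc_mix", "fm_amount", "filter_freq", "filter_env", "amp_env"]

    def maybe_randomize(pulse_count, pulses_per_change, ramp_ms, send):
        # Called once per master-clock pulse. Every `pulses_per_change` pulses,
        # draw a new random target (normalized 0-1) for each parameter and return
        # it together with the ramp time the glide toward it should take; if the
        # performer has disabled sending, do nothing.
        if send and pulse_count % pulses_per_change == 0:
            return {p: random.random() for p in PARAMS}, ramp_ms
        return None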

Illustration 6: Reason's SubTractor synthesizer

There are a few additional functionalities relating to the change to a random timbre
that are available on the “pd timbrenature” sub-patch. For instance, the performer or
computer can decide to “send” the ideal length of change. This means that the change
between two timbres will take the full amount of time between the two timbres, moving
smoothly and continuously. If the “line resolution” is set to less than this, the timbre change will happen
in less time, meaning it will move smoothly to the next timbre and then stay there until the next
change. Conversely, if it is set to longer than the “ideal” length, the Subtractor's settings will
move more slowly and never reach the generated settings. This technique can also be used
(by making the setting considerably longer) to have the timbre modulate, but not move very far from
where it is at any given point.
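One way to quantify that behaviour (my framing of it, not a calculation performed in the patch): the fraction of the distance toward the new timbre that the glide actually covers is the ratio of the time between changes to the chosen line resolution, capped at one.

    def fraction_reached(time_between_changes_ms, line_resolution_ms):
        # Equal to the "ideal" length: exactly 1.0, arriving just as the next
        # change occurs. Shorter: still 1.0, but the glide arrives early and
        # holds. Much longer: only a small fraction is covered, so the timbre
        # drifts without straying far from where it already is.
        return min(1.0, time_between_changes_ms / line_resolution_ms)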

Illustration 7: pd timbrenature sub-patch, middle-dimension


timbre control

In the second movement, the performer is also given control over which individual settings
are being randomized, in the sense that the performer can stop the
randomization of any of the parameters individually; there are also two large groups of
parameters which can be enabled together. For example, the score to the first
movement asks the performer to choose either the “all parameters” group or the “smooth
parameters” group and, depending on that choice, asks the performer to disengage
either the randomization of the “mod envelope” or the “filters,” and then reset it.
The final controls for the middle-dimension level of the unlogiced voice are its tessitura
and mode. The tessitura is controllable by a simple slider, which selects which octave the
voice is in at any given point. Both the unlogiced and logiced voices are constrained to a
single-octave tessitura, except in one specific circumstance (though both can be manually
moved by the performer or computer). When the unlogiced voice is set to the “unified”
rhythmic method, and the mode is constricted to anything other than the full mode, the
seventh scale degree is allowed to appear either above the first scale degree (a major or minor seventh) or below
it (a major or minor second).
The mode selector also applies to both melodic voices, and simply allows the
performer to choose which of the “church modes” the voices operate in. The performer can
also choose the tonic note of the mode. The patch initializes to C phrygian, which is a strictly
subjective decision of the composer.
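A sketch of how a degree might be realized as a pitch under these constraints (the semitone tables are standard church-mode spellings; placing the tonic at middle C is my own choice of register for illustration, not something fixed by the patch):

    MODES = {
        "ionian":     [0, 2, 4, 5, 7, 9, 11],
        "dorian":     [0, 2, 3, 5, 7, 9, 10],
        "phrygian":   [0, 1, 3, 5, 7, 8, 10],
        "lydian":     [0, 2, 4, 6, 7, 9, 11],
        "mixolydian": [0, 2, 4, 5, 7, 9, 10],
        "aeolian":    [0, 2, 3, 5, 7, 8, 10],
        "locrian":    [0, 1, 3, 5, 6, 8, 10],
    }

    def degree_to_pitch(degree, tonic=60, mode="phrygian", octave_shift=0,
                        seventh_below=False):
        # Map a scale degree (1-7) to a MIDI pitch inside a single-octave
        # tessitura; `octave_shift` stands in for the tessitura slider.
        # `seventh_below` models the one exception described above: in unified
        # mode with a constricted mode, ^7 may sound a second below ^1 instead
        # of a seventh above it.
        pitch = tonic + MODES[mode][degree - 1] + 12 * octave_shift
        if degree == 7 and seventh_below:
            pitch -= 12
        return pitch

In the default C phrygian, degree_to_pitch(7) yields the B-flat a minor seventh above the tonic, while seventh_below=True yields the B-flat a major second below it.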

5c. the “logiced” voice: small- and middle-dimensions


The third voice in curation1; is the so-called “logiced voice.” This moniker is derived
from how it generates its melodic content. It was the first voice built in the piece, and its use
of rhythm is designed to test the theories of James Tenney, as put forth in Meta + Hodos. The
rhythms it employs are controlled in the small-dimension by randomization within
specific musical time durations.

Illustration 8: rhythm controller for "logiced voice"


There are 14 possible durations for the voice to choose from on the small-dimension
level (see Illustration 8). On the middle-dimension, however, there are four modes by which
the surface rhythms are chosen. The “pulsar,” “random,” “triple meter” and “duple meter”
controls, plus the constrictor slider, each allow their own degree of randomization when
selected. For example, during the first movement the performer is instructed
to constrict the rhythmic selection to “something less than [a] whole [note],” and then select
either the “triple meter” or “duple meter” mode. If “triple meter” were selected, and the
selection constricted to “twhole” or less, the duration of any individual note generated
by the voice would be one of the following: triplet whole note, dotted quarter note, triplet
half note, quarter note, triplet quarter note, or triplet eighth note. Random, triple meter, and
duple meter each generate a new duration for each note played; in pulsar mode, the rhythm
is a continuous pulse of a chosen duration.
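As a sketch of that constricted triple-meter pool (the six durations listed above, expressed as fractions of a whole note; the names and structure are mine, not the patch's):

    import random
    from fractions import Fraction

    TRIPLE_METER_POOL = {
        "triplet whole":   Fraction(2, 3),
        "dotted quarter":  Fraction(3, 8),
        "triplet half":    Fraction(1, 3),
        "quarter":         Fraction(1, 4),
        "triplet quarter": Fraction(1, 6),
        "triplet eighth":  Fraction(1, 12),
    }

    def next_triple_meter_duration():
        # Triple meter draws a fresh duration for every note; "random" and
        # "duple meter" would draw from their own pools (not reproduced here),
        # while pulsar simply repeats a single chosen value.
        return random.choice(list(TRIPLE_METER_POOL.values()))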
The melody generated for the voice is a first-order Markov process, which derives
whichever note will come next from a weighted probabilistic construction. This technique
is similar to many functions in the field of computational linguistics, where certain search
algorithms and spelling-corrector functions need at least one word of prior context to function
properly. An illustration is provided as a Finite State Machine-style
representation of the process (Illustration 9). From whichever note has been chosen (the note that
we are currently on in the melody), the melody can move, with the probabilities listed, to each next note.
For instance, the highest probability in the system is the 60% probability that the second
scale degree will be followed by the first. This reads in the
illustration as “P(^1, ^2)=60%,” as in, “the probability that scale degree one will follow scale
degree two is 60%.”
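A minimal sketch of this first-order process (only the two constraints named here, the 60% move from ^2 to ^1 and the prohibition on ^1 following ^5, are fixed by the text; every other weight below is a placeholder standing in for the values of Illustration 9):

    import random

    # TRANSITIONS[current][candidate] = probability that `candidate` follows
    # `current`. Rows for the remaining degrees are omitted here.
    TRANSITIONS = {
        2: {1: 0.60, 3: 0.15, 4: 0.10, 5: 0.05, 6: 0.05, 7: 0.05},
        5: {1: 0.00, 2: 0.25, 3: 0.20, 4: 0.25, 6: 0.20, 7: 0.10},
    }

    def next_degree(current):
        # Draw the next scale degree given only the current one (first-order
        # Markov), weighting the draw by the transition probabilities.
        row = TRANSITIONS[current]
        degrees, weights = zip(*row.items())
        return random.choices(degrees, weights=weights, k=1)[0]

Because the draw is weighted, a zero weight (as for ^1 after ^5) simply never occurs.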
Each of these probabilities was subjectively chosen by the composer, as a way of
avoiding certain cliches (for instance, V cannot be followed by I) and of promoting
the generation of fewer major chords. These choices are not necessarily noticeable in the piece, as
these stochastic melodies are nearly always operating with randomized rhythm as well.
The timbre of the logiced voice is static, and the piece has no construct to ensure or
deny any individual timbre for the logiced voice. The performer could change settings of the
synthesizer manually, either before or during a performance, if they saw fit. The synthesizer
patches that have been used for the performances of this piece have been generated by the
unlogiced voice's timbre randomization, and applied to the logiced voice, but there is no
construct in the piece for this function. It is, ultimately, a comfortable ambiguity in the piece.

Illustration 9: Markov process for "logiced voice" melodic generation



6. recapitulation of systems
A table of systems is given as a recapitulation of sections 5a – 5c. The input of the
performer in the second movement and the computer in the third is entirely on the middle-
dimension level.
Voice / Scale / Feature

Percussive voice
  Small-dimension: Algorithmic Surface Rhythms
  Middle-dimension: Step Resolution; Pattern Controls; Tempo Modulator

Unlogiced Voice
  Small-dimension: Randomized Melodic Content (unified and pulsar modes); Randomly Determined Pattern (correlated mode); Timbre Randomization Generation
  Middle-dimension: Mode Constriction; Tessitura Selection

Logiced Voice
  Small-dimension: Timbral Parameter Randomization Destinations; Stochastic Melodic Construction; Randomization of Rhythms
  Middle-dimension: Mode of Rhythmic Randomization; Tessitura Selection

Other Elements (accessible to performer in movement2;)
  Middle-dimension: Mode Selection; Master Clock Tempo

Table 1: table of systems and their subsequent hierarchical levels

7. theoretical interlude: testing Tenney's theory of “clang”


In James Tenney's Meta + Hodos, he lays out a new framework of theoretical ideas as
a way of understanding the complexities of 20th century music. The reason for taking on such
a task is, as Tenney says, the problem of “a nearly complete hiatus between music theory and
musical practice.”14 The elements employed by music had certainly changed in the first half of
the 20th century, and Tenney's goal is to form a theory which is inclusive of all music, via a
14. Tenney, James, Meta + Hodos and META Meta+ Hodos (Oakland: Frog Peak Music, 1986), 4.

theory of perception.
Tenney proposes a new standard for formal units of “sound,” “sound-configuration,”
and “musical idea,” which treats all as the fluid units of measurement that they are: the
“clang.” This is a way of stripping the tonal harmonic conceptions of form, harmony, and
melody of their loaded nature. The next step up in Tenney's hierarchy is the “sequence,”
which is a unit formed by successive clangs.15
The delineation of musical figures can then be seen as the perception of any musical
idea, rather than a delineation reliant on a traditional conception of harmony, dynamics, etc. This
opens the question of how such perception operates: Tenney cites certain Gestalt psychologists whose interest was in how perception itself
works, and explains that similarity and proximity are the primary factors
used in the act of perception to delineate groups of events.16 This grouping is the definition
(while perceiving) of the clang and sequence.
Beyond these two factors are the four “secondary factors” that Tenney believes are at
work in cohesion and segregation: 1) the factor of intensity, 2) the repetition-factor,
3) the objective set, and 4) the subjective set. Of these four, Tenney seems to focus
most on the factor of intensity as the one most able to initiate cohesive clangs and to finalize
them.17 In other words, the intensity level of a clang can be determined by the intensity
of the elements that make it up, and a sequence of these clangs can then have a map of
changing intensity from clang to clang.
Tenney's concept of “clang” is put to the test by the small- and middle-dimension
features of curation1; by creating, for example, a voice that is highly musical, adhering to a
mode and using musical time, but containing no instructions beyond this. The constructs, at a
higher level, relate to the composer's subjective sense of what the voice needs in order to operate, but
the voice does not adhere to any musical style or idiom. This is the pure test of the clang, as it is only
by chance that any sort of similarity/dissimilarity or proximity/distance arises in this voice,
contributing to any way of parsing a motive or phrase.
If it is the changes in small-dimension parametric intensities that define these musical
constructs of motive or phrase, then the algorithmically generated melody and randomized
rhythm will tend to create places to parse them. It is down to the subjectivity of the listener
whether this happens or not, and up to the performer and/or computer to give output
that allows the audience to test the concept. At any given moment, these musical constructs may appear,
or they may not.
The middle-dimension constructs of the piece are all modeled after the aforementioned
concept of parametric intensity, creating the differences in perception from sequence to
sequence. This is a way of testing Tenney's concepts by moving them to different
hierarchies in the musical framework, to see if parametric intensities create the same
perceptual markers (in the sense of defining the form of a piece) of similarity and proximity on
these levels.
This test is also employed on the large-dimension scale, as the only real differences in
the piece's movements are based on the three sources of middle-ground choices: 1)
subjectivity of the composer, 2) the subjectivity of the performer, or 3) randomization thereof.
15. Ibid., 22.
16. Ibid., 32.
17. Ibid., 37.

8. large-dimension construction: objective ends through subjective means


In the large dimension, the piece is constructed in a way that outputs three separate
values to compare, one for each movement. These three perceptual values are the overall
shapes of the piece's construction on the middle-dimension level, which result from the
instructions that are given, via the interface, to the composition to generate small-dimension
(surface) melody, rhythm, and timbre. The three values, taken together, become the material for the comparisons described below.
Since each movement successively loses a layer of subjectivity, the difference
(as in mathematical subtraction) reveals something about the composer's and performer's
respective subjective sets. For instance, the first movement contains three layers of
subjectivity: 1) the composer's subjectivity as reflected by the score, 2) the performer's
subjectivity as called for by the score, and 3) the composer's subjectivity in the
compositional engine itself.

Illustration 10: large-dimension construction of movement1;

The second movement contains two of these three layers of subjectivity, deleting the
score from the piece at the onset of the second movement. This makes the subjectivity of the
composer equal to the subjectivity of the performer, each being exerted on only one
construction of the piece at this level.

Illustration 11: large-dimension construction of movement2;

The third movement then deletes the performer's last subjective input of the piece, their
ability to control the interface. All that remains at this point is the subjectivity of the composer,
which is exerted by the composition engine as it takes over and randomizes the control of the
middle-dimension parameters and their intensities. Probabilistically speaking, the output is
the aggregate of what the system is capable of without the interference of subjective musical
information from the composer or performer in middle-dimension constructs.

Illustration 12: large-dimension construction of movement3;

As the piece changes from movement to movement, the resulting perception of change
becomes a measurement of the subjectivity that each construction contains. If we take the
perceptual difference between these movements, we can derive meaningful data about
which subjectivity exerts which force on the piece. In a mathematical sense: the subjectivity
of the composer (as exerted by the composition engine) is equal to movement 3, the
subjectivity of the performer (as exerted by their input to the interface of the composition) is
equal to movement 2 minus movement 3, and the subjectivity of the composer (as exerted by the
score and resulting form of movement 1) is equal to movement 1 minus movement 2.
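Written out as simple equations, with M1, M2 and M3 standing for the perceptual output of each movement, the paragraph above amounts to:

    subjectivity of the composer (engine) = M3
    subjectivity of the performer         = M2 - M3
    subjectivity of the composer (score)  = M1 - M2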
Ultimately what this gives the listener is a full view of the subjective information that the
music results from. This exploration is the true goal of curation1; which defines for the
audience exactly what it is that the composer believes is needed in the composition, shows
what the musical preferences or curiosities are of the performer, and shows an aggregate of
what the machine is capable of by randomizing its middle-dimension controls.

9. personal conclusions and future projections


The questions asked by curation1; have no simple answers, and there is certainly no
unanimous answer, as the subjectivity of the listener is put to the test. The questions for the
listener are not ones that need to be specifically stated, and it is unclear whether someone who
is not privy to the philosophies and theories that inform the piece would ever be able to
understand what the piece is asking of them. Such is the essence of music theory and
composition. It is not necessary to communicate such a technical or niche objective to the
listening public.
The choices made about the sonic results of the piece were made as an
organic growth of the compositional systems employed throughout its creation. This does not
ensure any sort of pleasing musical experience per se, but a pleasing musical experience
can result from the piece; this hinges entirely on the subjective input of the listener, as this
piece has taught me. Ultimately, the truth that the art unconceals for me, personally, is a new
understanding of subjectivity and its role in composition.
The role of the composer's subjectivity is one that is detached from the listener's
subjectivity, and when operating in a mode of artistic creation, it is obvious (to me, at least)
that the composer is attempting to form art that they think will please themselves. With the
openness of that subjective end, it is obvious that a composer can never fulfill all of
their subjective desire; it is an endless pursuit of personal sonic gratification.
It is the role of the composer, not of neuroscientists, linguists, and computer scientists,
to explain the phenomenon that is music, and this is done by composing what they believe a
furthering of their culture's tradition consists of. Nothing more than output from
composers is needed to take data on how music truly works. Their own reactions lead to technical
innovation, pushing through to new techniques and styles that gratify their own subjective
means. Their audiences give their reactions to the music as judgments of what is
subjectively worthwhile to them, and form meaningful communities around the places where their
subjectivities overlap.
More practically, building the systems in Pure Data has shown me that this attempt at
achieving subjective gratification has another direct correlation to the field of computational
linguistics. The field of “data mining” consists of research based on taking a large set of data
(a “corpus”) and deriving from it new statistical information that would be unknown otherwise.
The third movement of curation1; inches toward a musical parallel, which I will refer to as
“style mining.”
When the computer controls all of the middle-dimension features of the interface via
randomization, the result is a completely different musical output which is, probabilistically, an
aggregate of the possibilities inside the system. What results from this aggregate is a purer
representation of what the system was built for, as it contains nothing but the subjectivity
of the composer, and what it is that I, personally, was looking to achieve with the piece. The
changes of parameters in this fashion are much more musically disjunct, but in a way that a
human performer would possibly never choose. The changes in musical features that are
“accidentally” possible are exhilarating.
If this concept is extended (a current plan is to build the third movement of the piece
into an extended version, possibly curation1.5;), then one can envision a logical end with
many more systems that gratify more musical possibilities, that is, many more major subjective
influences from the composer. If a compendium of the composer's style were possible to
construct, and an automated system built to make middle-dimension and small-dimension
choices inside of that compendium, then an aggregate music of all the personally pleasing
musical features could result.
Such research already exists in the domain of music listening with Pandora's Music
Genome Project.18 But what if these systems could be automated for the creation of music?
No longer would people tune into internet radio stations to hear music that an algorithm tells
them they would like; rather, music would be generated for them based on what they like,
as their own personal manifestation of taste, endlessly, and with the added interest of
serendipitously combining musical elements that interest them.
curation1; is a far cry from this goal, but its philosophical contemplations and design are
the perfect entryway into research on the subject. I will continue to conduct this research,
defining what the subjective goals of the composer are, and applying them in systems that
make it possible to find new information about that subjectivity and to create music based on
what we all want from music: to please us on the personal level.

18. Pandora.com, “About the Music Genome Project,” (Pandora Media Inc., n.d.)
http://www.pandora.com/corporate/mgp (accessed Dec. 9, 2010).

Bibliography
Bergmann, Anouschka, Kathleen Currie Hall, and Sharon Miriam Ross, eds. Language Files
10. Columbus: The Ohio State University Press, 2007.

Byrd, Joseph. “Variations IV,” in Writings about John Cage, ed. Richard Kostelanetz, 134-135.
Ann Arbor: University of Michigan Press, 1993.

Clarke, Eric F. “Theory, Analysis and the Psychology of Music: A Critical Evaluation of Lerdahl,
F. and Jackendoff, R., A Generative Theory of Tonal Music.” Psychology of Music 14
(1986): 3-16, http://pom.sagepub.com.libproxy.sdsu.edu/content/14/1/3.full.pdf+html
(accessed Dec. 4, 2010).

Heidegger, Martin. “The Origin of the Work of Art,” in Philosophies of Art and Beauty:
Selected Readings in Aesthetics From Plato to Heidegger, ed. Albert Hofstadter and
Richard Kuhns, 650-708. Chicago: University of Chicago Press, 1976.

Pandora.com, “About the Music Genome Project.” Pandora Media Inc.: n.d.
http://www.pandora.com/corporate/mgp (accessed Dec. 9, 2010).

Tenney, James. Meta + Hodos and META Meta + Hodos. Oakland: Frog Peak Music, 1986.

Xenakis, Iannis. Formalized Music. Bloomington: Indiana University Press, 1971.



appendix: screen shots of full interfaces

Illustration 13: user interface at step 0)



Illustration 14: user interface at step 6a1)



Illustration 15: audience interface


curation1;

performer's score

mvmt. 1:

1. force: start machine by starting the clock


2. force: add pattern to the sequencer
3. choice: increase the pulse of the grid once the beat starts
● range of parameter: 1/8, or 1/8T
4. choice: increase tempo via modulator
● range of parameter: 180-220
● if math doesn't work out that way, go back with 5 modulator and newpattern
selected and go again
5. subjective: run the “new pattern every:” setting every 4 bars until you are happy;
deselect and stay with it
6. choice: select either “correlated” or “unified” for the “unlogiced voice”
● if it is unified, constrict the mode to “^1, ^2, ^7”
• choose tessitura
• click “done”
● if it is correlated then change the tessitura three times
7. force: initialize the “logiced voice” by selecting the toggle
8. choice: choose an octave, then press “done”
● range: octaves 2-5
9. choice: constrict the rhythm selector, then press “done”
● something shorter than “whole”
10. choice: select method for rhythm
● “triple meter” or “duple meter”
11. force: click the “1/4” bang
12. force: move the unlogiced voice up an octave
13. force: change the unlogiced voice from “correlated” or “unified” to the other
● 13a. choice: if going to unified; force: constrict to “^1, ^2, ^7, ^5, ^4”
● 13b. choice: if going to correlated; subjective: change the tessitura 3 times
14. a) choice: set “new pattern every:” to 6 or less and activate
b) choice: set “add pattern on bar:” to something less than “new pattern every:”
setting
15. force: set "x steps for send" (timbre) to 4
16. force: in timbrenature section, send the ideal length between change
17. force: select send randoms
18. choice: engage timbre randomization parameters
● range: “smooth parameters” or “all parameters”
19. force: deselect either “mod env” (if “smooth parameters” was selected previously) or
“filter” (if “all parameters” was selected previously) and then reset that parameter
20. force: deselect “new pattern every:” and “add pattern on bar:”
21. force: clear the pattern from the grid
22. force: choose random rhythm for the “logiced voice”
23. subjective: let it solo as long as it makes you happy, then push “done”

mvmt. 2
free form improvisation with the controller for the same amount of time that mvmt. 1 took

mvmt. 3
step away
