
TIMESCULPT IN OPENMUSIC

Karim Haddad
karim.haddad@ircam.fr

Abstract

In the last decade, my compositional work has focused essentially on musical time. I have relied extensively on computer-aided composition, and most particularly on OpenMusic. In this article we will examine some compositional strategies dealing with musical durations, setting aside considerations of pitch in order to keep the focus on our subject, except where these relate to our main discussion¹.

I. How "time passes in OpenMusic ..."

In OpenMusic, time is represented (expressed) in many ways. Time can be:

- a number (milliseconds, for instance)
- a rhythm tree (OpenMusic's internal representation of musical rhythm notation [2]; see the JIM 2002 article)
- a conventional graphical symbol (a quarter note) in the OpenMusic Voice editor
- an "unconventional" graphical object (an OpenMusic temporal object).

Another important issue is how musical time is conceived internally (i.e. implemented as a structural entity) [3].

Most computer environments and numerical formats (MIDI, for instance) represent the musical flow of time decomposed into two orders:

- the date of the event (often called onset)
- the duration of the event (called duration)

We can already notice that this conception diverges from our traditional musical notation system, which represents time event + duration in a compactly readable form². The only reason for this is that the computer representation of time is generally done not with symbols but with digits³. That is why I believe that today's composers should become familiar with this "listing" representation of musical events.
¹ One can argue that these fields are inseparable, most particularly rhythm and duration. We will consider in this article that duration is of a different order than rhythm (think of Pierre Boulez' "temps strié" and "temps lisse" in Penser la musique aujourd'hui [1], a widely accepted view nowadays).
² We may also notice that in this conception tempo is meaningless.
³ Time is sequentially expressed, not iconically symbolized (as a whole entity, like a measure containing rhythm figures, with a tempo assignment and a time signature).
Since the MIDI standard is integrated in OpenMusic, this representation is common to most musical objects (see CHORD-SEQ); and since some OpenMusic libraries generate CSOUND [3] instruments that use this "time syntax", one must learn this particular representation in order to deal accurately with these objects, especially if one intends to use controlled synthesis.

a) dx->x and x->dx

dx->x and x->dx are frequently used functions, and they are very practical when it comes to durations and rhythm. They are generic functions found under the menu function > kernel > Series:

- dx->x computes a list of points from a list of intervals and a <start> point.
- x->dx computes a list of intervals from a list of points.

Starting from a list of durations in milliseconds and a starting point, dx->x will output the corresponding list of time points (dates). Vice versa, x->dx will output a list of durations starting from a sequence of time dates.
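
In list terms, the two functions are a cumulative sum and a pairwise difference. A minimal Python sketch of the same behavior (the marker values are hypothetical; this is not OpenMusic code):

    def dx_to_x(start, intervals):
        """Cumulate intervals (durations) into points (onset dates), like dx->x."""
        points = [start]
        for dx in intervals:
            points.append(points[-1] + dx)
        return points

    def x_to_dx(points):
        """Differentiate points (onset dates) into intervals (durations), like x->dx."""
        return [b - a for a, b in zip(points, points[1:])]

    markers = [0, 350, 900, 1200, 2050]        # hypothetical marker dates in ms
    durations = x_to_dx(markers)               # -> [350, 550, 300, 850]
    assert dx_to_x(0, durations) == markers    # the two functions are inverses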

In order to illustrate this mechanism, we will start with a marker file produced with AudioSculpt [4] or SND [5]. Such a file represents a list of time events, either automatically generated or placed by hand.

In the example below, the sound file was marked manually.

Illustration 1
Once the analysis file is imported into OpenMusic, we quantify it using omquantify, as shown in the next figure:

Illustration 2
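
omquantify performs a full rhythmic transcription (measures, tempi, tuplets). As a rough intuition only, the sketch below snaps durations in milliseconds to the nearest sixteenth note at a given tempo; it is a deliberately naive stand-in, not the omquantify algorithm:

    from fractions import Fraction

    def naive_quantify(durs_ms, tempo=60, grid=4):
        """Snap each duration (ms) to the nearest 1/grid of a quarter note
        at the given tempo; results are fractions of a quarter note."""
        quarter_ms = 60000 / tempo
        return [Fraction(round(d / quarter_ms * grid), grid) for d in durs_ms]

    print(naive_quantify([350, 550, 300, 850]))
    # -> [Fraction(1, 4), Fraction(1, 2), Fraction(1, 4), Fraction(3, 4)]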

Looking carefully at this rhythmical result, we may notice a wide variety of durations (in this case it could be considered a series of durations). We may also consider it a sequence of accelerando/decelerando profiles (modal time durations). In either case, it is a rich rhythmical material that can be developed.

Illustration 3

This is one way to "notate" a sound file symbolically and integrate it into a score⁴.

⁴ It is also very practical for coordinating musicians with tape.


We can of course extend this "symbolic" information and treat it as compositional material, for example by applying contrapuntal transformations to it: recursion, diminution or augmentation (by simple multiplication, om*), retrograde (the reverse function), permutations, etc.
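
On duration lists these transformations are elementary; a sketch of the ones just mentioned, with hypothetical values:

    durations = [350, 550, 300, 850]              # durations in ms (hypothetical)

    diminution   = [d // 2 for d in durations]    # what om* with factor 1/2 would give
    augmentation = [d * 2  for d in durations]    # om* with factor 2
    retrograde   = durations[::-1]                # the reverse function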

b) Another aspect of rhythm manipulation I use is at the antipodes of the preceding example. Instead of extracting rhythm from a physical source (such as a sound file), I directly use combinatorial processes on rhythmical structures through the internal definition of rhythm in OpenMusic, Rhythm Trees [2]. Since this description has a wonderful capacity for creating any imaginable rhythm, be it simple or complex, and since the RT standard is both syntactically and semantically coherent with musical structure, rhythm manipulation and transformation is very efficient using these structures.

It is for this reason that I came to write the omtree library for OpenMusic, which was basically a personal collection of functions. These allow 1) basic rhythm manipulations, 2) practical modifications and 3) some special transformations such as proportional rotations, filtering, substitution, etc.

The whole structure of “...und wozu Dichter in dürftiger Zeit?...” for twelve instruments and electronics is written starting from the following generic measure:

Illustration 4

which corresponds to the following rhythm tree:

(? (((60 4) ((21 (8 5 -3 2 1)) (5 (-3 2 1)) (34 (-8 5 3 -2 1))))))

Rotations are performed on the main proportions, which are based on the Fibonacci series: rotations of the D elements (durations) and of the S elements (subdivisions). The first rotation is:
Illustration 5

The corresponding rhythm tree is:

(? (((60 4) ((5 (2 1 -3)) (34 (5 3 -2 1 -8)) (21 (5 -3 2 1 8))))))

This is generated by the following patch:

Illustration 6
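
The rotation itself can be stated compactly outside the patch. In this sketch (my own illustration in Python, not the omtree code), a measure is a list of (D, S) pairs; each subdivision list S is rotated internally, then the list of pairs is rotated as a whole, and the (60 4) time signature is left aside:

    def rotate(lst, n=1):
        """Rotate a list left by n positions."""
        n %= len(lst)
        return lst[n:] + lst[:n]

    def rotate_measure(pairs):
        """Rotate each (D, S) pair's subdivision list, then the pairs themselves."""
        return rotate([(d, rotate(s)) for d, s in pairs])

    measure = [(21, [8, 5, -3, 2, 1]), (5, [-3, 2, 1]), (34, [-8, 5, 3, -2, 1])]
    print(rotate_measure(measure))
    # -> [(5, [2, 1, -3]), (34, [5, 3, -2, 1, -8]), (21, [5, -3, 2, 1, 8])]

The printed pairs match the rotated rhythm tree above.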

The result is a six-voice polyphony. The pitches are also organized following an ordered rotation and distributed over the six voices as a heterophony.
Illustration 7

After quantification, this appears in the score as follows (excerpt):

Illustration 8: “...und wozu Dichter in dürftiger Zeit?...” for twelve instruments and electronics

III. Topology of sound in the symbolic domain


1) Form

We might consider sound analysis as an open field for investigation in the time domain, where interesting and unprecedented time forms and gestures can be found and used as basic (raw) material for a musical composition. This new approach is made possible by fast computation. But since sound analysis is a vast field of research (where one finds multiple kinds of analysis and different visual or raw data formats), one must take into consideration the nature of each analysis and its original purpose. We can say that sound analysis is a rich source from which symbolic data can be retrieved and remodeled following the needs of the composer, in an open perspective for composition.

Sound analysis data can also be considered as potential correlated vectors. Depending on the analysis type, these can be streams of independent linear data or, more interestingly, arrays of data. The type of data found is directly related to the nature of the sound. It can be regarded abstractly as a pseudo-random flow, or considered as coherent, interrelated orders of data, depending again on the type of analysis chosen.

We will use sound analysis as a basis for producing musical material, most particularly in the time domain. It is important to note that the following approach is not a “spectral” one in the traditional sense⁵; on the contrary, it should be considered a spectral-time approach. The frequency domain will be translated into the time domain and vice versa, following the compositional context, as we will see further on⁶.

This “translation” is made possible by the wide variety of analysis types (additive, modres resonance models, etc.) and by the many available data formats, whether visual or in the form of numerical data.

This material will be used according to the musical project. Different orders of “translation” into the symbolic field can be used. Form can be extracted literally from the analysis data, or it can be applied from a given symbolic material. This mixing of sources (symbolic and analytic) will be coordinated so that, compositionally, the sources fuse together. That is where tools become very important.
We can consider OpenMusic as a black box in which analysis and the symbolic fields are connected in order to produce such a fusion in the field of musical time.

2) No one to speak their names (now that they are gone)

The structure of "No one to speak their names (now that they are gone)", for two bass clarinets, string trio and electronics, is based on a stereo AIFF sound file 2.3 seconds long.

⁵ Meaning that form and strategies are primarily based on pitch.


⁶ This was the initial approach in Stockhausen's well-known article “...wie die Zeit vergeht...” [6]
Illustration 9

Considering the complex nature of this sound file (friction mode on a tam-tam), it has been segmented into 7 parts (cf. figure). This segmentation is done according to the dynamic profile of the sound file.

Illustration 10

We may consider a sound as an array of n dimensions (as shown in the figure above), carrying potential information that can be translated into time information. At first sight, it is natural to construct this array under the additive model (time, frequency, amplitude and phase). This is a rather straightforward description that can be used for processing directly in the time domain or for an eventual resynthesis. Other sound analyses/descriptions are available, such as spectral analysis, LPC analysis, modres analysis and others. The modres analysis was chosen here. All the analyses described above are discrete windowing analyses in which the time domain as such is absent; most of them have time addressing, but the last one (modres) is an array of dimension 3 (frequency, amplitude and bandwidth/π).

Illustration 11: frequency, dynamics and bandwidth data
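
Seen as data, one analysis window is simply a list of modes, each mode a point in the three planes just named. A minimal sketch in Python (the values are invented, not taken from the actual tam-tam analysis):

    from dataclasses import dataclass

    @dataclass
    class Mode:
        frequency: float   # Hz
        amplitude: float   # linear amplitude
        bandwidth: float   # bandwidth / pi

    # one hypothetical segment of the modres description
    segment = [
        Mode(87.3, 0.91, 0.42),
        Mode(142.6, 0.55, 1.10),
        Mode(233.1, 0.34, 2.87),
    ]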

Given a sound analysis/description and an array of n dimensions, it is possible to translate it into the time domain from array to array; i.e., the analysis data can be read in any plane (vertically, diagonally, etc.) or in any combination of arrays. This "translation" is of course arbitrary: it is meant as a translation into the symbolic field, the score, which is in its turn an array of a totally different class. Although this operation seems arbitrary (and somehow it is), there are two arguments which, in my opinion, make it pertinent:

First, since (as we will see later) the sound array is processed in a completely interdependent way, taking into account all the proportional weight relations contained within it, the coherence of the sound resonance will somehow be "reflected" in the symbolic domain through specific analogical modes (dynamics, durations and pitch). These modes are not meant to be literally associated one by one (i.e., an exact correspondence of parametrical fields is not necessary); on the contrary, in this piece they are permutated.

The second important point is that this translation establishes a strong, significant relation between the electronic part and the acoustical part of the piece. This strategy seems to me one of the strongest and most coherent bindings in the mixed-music repertoire.
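
Returning to the reading paths mentioned above, here is a toy illustration with a made-up array indexed [time][partial][parameter], parameter 0 standing for frequency:

    # a made-up 3-dimensional analysis array indexed [time][partial][parameter]
    data = [[[100.0 * (p + 1) + t, 1.0 / (p + 1)] for p in range(4)]
            for t in range(4)]

    vertical = [data[0][p][0] for p in range(4)]     # all frequencies in window 0
    horizontal = [data[t][0][0] for t in range(4)]   # partial 0 across windows
    diagonal = [data[t][t][0] for t in range(4)]     # a diagonal reading path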

Moreover, if we visualize the given data in a three-dimensional graph (see Illustration 10), we will see many folds ("plis" [6]) of different densities. These are directly related to the polyphonic gesture representing the point-counterpoint technique used in the score (cf. score example, Illustrations 18-19).

Illustration 12

As we can see above, there are two parallel musical processes: the electronic part (tape), which is also entirely constructed from the initial material (the tam-tam sound file), and the score part. The semantic bridge is shown as a dashed arrow. It is through analysis that both domains communicate with each other⁷. In the case of resynthesis, another bridge could be established in the other direction (from the symbolic to the sound domain), but this is not the case in the present composition.

⁷ Analysis can be thought of as another potential aspect of a sound file or, in other terms, as an alternative reading/representation of a sound.

In "No one to speak their names (now that they are gone)'', using the modres analysis, the
bandwidth array has been chosen to order each sets of pitches in each fragments
following its bandwidth. For each parsed pitch segments we will again establish a
proportional relation: all pitches / highest pitch.
These proportions will be used as snapshots of seven sonic states in a plane of a three-
dimensional array (x, y, z), each state being the sum of all energetical weights within one
window. We will use them to determine our durations all through the composition. The

7Analysis could be thought of as another potential aspect of a sound file, or in other terms, it is an alternative
reading/representation of sound
durations are of two orders:

– Macro durations, which represent metric time and determine a subliminal pulse illustrated by dynamics. Measures are calculated following the proportions computed from the last segment.

– Local durations, consisting of the effective durations given to the four instruments. These are distributed following the same proportions around measure bars, creating asymmetrical crescendo-decrescendo pairs (see the sketch below).
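
To make the proportional translation concrete, here is a minimal sketch. It assumes that each segment arrives as (pitch in midicents, bandwidth) pairs and that the proportions are scaled to fill a segment length in milliseconds; both assumptions are mine, illustrating the "all pitches / highest pitch" relation rather than reproducing the actual patch:

    def segment_durations(partials, total_ms):
        """Order one segment's partials by bandwidth, then map the
        pitch / highest-pitch proportions onto durations summing to total_ms."""
        ordered = sorted(partials, key=lambda p: p[1])       # order by bandwidth
        highest = max(pitch for pitch, _ in ordered)
        proportions = [pitch / highest for pitch, _ in ordered]
        scale = total_ms / sum(proportions)
        return [round(pr * scale) for pr in proportions]

    # three hypothetical partials given as (midicents, bandwidth) pairs
    print(segment_durations([(6000, 0.8), (6700, 0.3), (7200, 1.5)], 4000))
    # -> [1347, 1206, 1447]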

Illustration 13: Analysis translated into a polyphony of durations

The main persistent concept throughout the work, with regard to pitch/bandwidth/dynamic weights, is the notion of symmetry. As we have seen in the example above, we can use it as a compositional element.

Starting from one mode of resonance, assigned to durations following our proportional translation:
Illustration 14

we apply to it a new axis of symmetry, from which all durations start and then continue in an asymmetric mirroring, as illustrated below:

Illustration 15: -35 degree symmetry axis


This was calculated by the patch below:
Illustration 16
The resulting durations can be seen below.

Illustration 17
Durations are not the only elements calculated from the analysis. Starting from measure 28, pitches are extracted from the analysis and distributed over all four instruments following an order based on bandwidth over amplitude, a weight criterion running from the most important to the least (the result of the patch seen in Illustration 13):

Illustration 18: Excerpt from the electric guitar and string trio version.

Illustration 19
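
The ordering criterion itself reduces to a sort key. In the sketch below the partials are hypothetical, and the weight is my reading of "bandwidth over amplitude":

    # hypothetical partials given as (midicents, amplitude, bandwidth) triples
    partials = [(6000, 0.91, 0.42), (6700, 0.55, 1.10), (7200, 0.34, 2.87)]

    # order from the most important to the least by bandwidth / amplitude
    by_weight = sorted(partials, key=lambda p: p[2] / p[1], reverse=True)
    pitches = [p[0] for p in by_weight]   # distribution order for the instruments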

IV. Hypercomplex minimalism

1) Sound analysis for controlling instrumental gestures

As opposed to the examples we have already seen, where the data consisted of three-dimensional arrays of information and was therefore complex, we will see here a concrete use of a simpler, two-dimensional sound data array.

Ptps analysis is a pitch-estimation analysis (pitch and time arrays). When applied to a noise source or an inharmonic sound, the analysis output yields interesting profiles.

Illustration 20: PTPS analysis

This data will be used, after its decomposition into n fragments, as a means of controlling musical gesture.
Illustration 21: Fragmented analysis in OpenMusic
These fragments will be considered as potential fields in the dynamical control of instrumental gestures, as sketched below.
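
Below is a sketch of this fragmentation, together with one possible mapping of a fragment onto a control range (bow pressure between 0 and 1, say). Both the (time, pitch) data layout and the mapping are assumptions, not the actual patch:

    def fragment(curve, n):
        """Split an analysis curve, a list of (time, pitch) pairs,
        into n roughly equal fragments."""
        size = max(1, len(curve) // n)
        return [curve[i:i + size] for i in range(0, len(curve), size)]

    def to_control(frag, lo=0.0, hi=1.0):
        """Rescale a fragment's pitch values into the lo..hi control range."""
        values = [pitch for _, pitch in frag]
        vmin, vmax = min(values), max(values)
        span = (vmax - vmin) or 1.0
        return [(t, lo + (pitch - vmin) / span * (hi - lo)) for t, pitch in frag]

    curve = [(t / 100.0, 200.0 + (t * 37) % 120) for t in range(400)]  # fake ptps data
    fields = [to_control(f) for f in fragment(curve, 8)]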

Illustration 22: Bow pressure, placement and speed control of the double bass part in “In lieblicher Blaue...” for amplified bass saxophone, amplified double bass and electronics

These “potential” fields will afterwards be filtered and assigned according to the musical context. As we have mentioned above (section 2), the relevance of this technique lies in the fact that all the sound sources used for analysis, or in the tape part of the piece, are issued from samples pre-recorded by the musicians using special playing modes (multiphonics, breathing, and others).
Illustration 23: Excerpt from “In lieblicher Blaue...” for amplified bass saxophone, amplified double bass and electronics

One must also take into consideration that musical events are proportionally balanced between complex gestures in the process of sound production and minimal activity in note and rhythm production; i.e., we can distinguish two layers of activity: the “traditional” score notation, and the control-processing notation.

2) Adagio for String quartet

Again in this work, the use of a sound file was the starting point of the whole piece. However, the technique is completely different: instead of using an external analysis program, all the work was done in OpenMusic.
OpenMusic's handling of sound files is limited to playing them and representing them, in the SoundFile object, as the time/amplitude curve common to most sound editors. My intention was to use limited, reduced data in order to have a closer affinity with the symbolic field, given the instrumental connotation of the string quartet.

I therefore used the BPC object and downsampled the amplitude curve in order to obtain a satisfying global overview of the amplitude profile.
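
Here is a sketch of such a downsampling, assuming for simplicity a 16-bit mono WAV file rather than the stereo AIFF used for the piece. Keeping each window's signed extreme preserves the two phases discussed just below:

    import struct
    import wave

    def downsample_envelope(path, n_points=64):
        """Reduce a soundfile's amplitude curve to about n_points break-points,
        keeping the signed extreme of each window so both phases survive."""
        with wave.open(path, "rb") as w:
            raw = w.readframes(w.getnframes())
        samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
        window = max(1, len(samples) // n_points)
        return [max(samples[i:i + window], key=abs)
                for i in range(0, len(samples), window)]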
Illustration 24

The amplitude curve having two phases, the downsampling accentuated the difference between the positive and negative phases, creating a somewhat double-choir polyphony.
Illustration 25
I then determined four axes intersecting the curves. Durations were then quantified starting from these segments (see Illustration 24).

Illustration 26
In order to check the result, the two polyphonies were synthesized using Csound, and the result was placed in OpenMusic's Maquette object.

Illustration 27
Illustration 28: Beginning of the Adagio for string quartet.
V. Conclusion

In the compositional processes presented here, we can distinguish two fundamental operatives: data and algorithms.

Data in itself can be assimilated to a conceptual idea. It represents the deterministic drive of "what must be" in a given lapse of time decided by the composer's deus ex machina.

The algorithms can be seen as the executive part of the composer's idea. They are as deterministic as the given data proposal, but add to it a dynamic decisional potential that models the propositional data to its own creative role.

These two operatives are elements of a wider dynamic program: since the computational part (analysis and processing) was executed with different computer programs such as OpenMusic, Diphone [7], etc., these can all be considered parts of a unique program, the composition itself.
Indeed, it is legitimate nowadays to consider a work of art not only under the aspect of its factual performance, its aesthetic reality, but also under the deconstructive knowledge of its own constitution. I personally adhere to Hegel's [8]⁸ thesis and perspectival view that the work of art has arrived at its finality, and that the modern understanding of art (from Descartes to Kant) can no longer be understood as it was before. Neither the post-modernist attitude nor techno-classicism accomplishes the destiny of modern art; rather, a conscious study of the state of the art of its own medium is necessary, similar to the Renaissance revolution. The French composer and philosopher

8 "In allen diesen Beziehungen ist und bleibt die Kunst nach der Seite ihrer höchsten Bestimmung für uns ein
Vergangenes." ( X, 1, p.16)
"In all its relations its supreme destination, Art is and stays to us something that has been." (X, 1, p.16) [14]
Hughes Dufourt states: “La musique, en changeant d'échelle, a changé de langage.”⁹ [9] The techniques of composition and sound exploration must be integrated totally, not only into the praxis of composition but into its understanding and, better, as a whole part of composition itself.

⁹ “In changing its scale, music has also changed its language.”

Bibliography

1. Pierre Boulez, “Penser la musique aujourd'hui”.
2. Carlos Agon, Karim Haddad and Gerard Assayag, “Representation and Rendering of Rhythmic Structures”, 2002.
3. Carlos Agon, “doctorat...”.
4. Karlheinz Stockhausen, “...wie die Zeit vergeht...”, 1956.
5. Massimo Cacciari, “Icone della Legge”. Adelphi Edizioni, 1985.
6. Hughes Dufourt, “L'oeuvre et l'histoire”. Christian Bourgois Éditeur, 1991.
7. Gilles Deleuze, “Le Pli - Leibniz et le baroque”. Les Éditions de Minuit, 1988.
8. G. W. F. Hegel, Complete Works.
9. Diphone, Xavier Rodet and Adrien Lefevre, IRCAM.
10. Csound, Barry Vercoe, MIT.
11. OpenMusic, Carlos Agon and Gerard Assayag, IRCAM.

Works by Karim Haddad mentioned in this article

No one to speak their names (now that they are gone), 11' (*), 2002
electric guitar, string trio and electronics
First performance: October 2001, Paris
Territoires polychromes Festival
Yvap Quartet

In lieblicher Blaue..., 7', 2003
double bass, bass saxophone, tape and electronics
First performance: February 2003, Béziers
Reina Portuando, Daniel Kientzy & Jean-Pierre Robert

"...und wozu Dichter in dürftiger Zeit?...", 15' (*), 2003
for twelve instruments and electronics
First performance: April 2003, Paris
Radio France
Ensemble 2e2m, conducted by Paul Méfano

Adagio for string quartet, 15' (*), 2004
string quartet
First performance: September 2004, Paris
Radio France
Diotima String Quartet
Nicolas Miribel, Eichi Chijiwa, Franck Chevalier and Pierre Morlet
