
SOURCE

https://pantherfile.uwm.edu/kdschlei/www/files/a-history-of-spatialmusic.html
http://cec.concordia.ca/econtact/Multichannel/spatial_music.html
eContact! 7.4

A History Of Spatial Music


by Richard Zvonar, PhD

Historical Antecedents:
from Renaissance Antiphony To Strings In The Wings
The spatial relationship between musical performers has always been an integral part of
performance practice, and this takes many forms: folk songs alternate dialog between
women's and men's voices, military bands march through the town square, alpenhorns and
talking drums pass messages across miles, carillons chime from church towers. Antiphonal
performance ("call and response") is extremely ancient, having been practiced in the chanting
of psalms by Middle Eastern Jews in biblical times, and there is further evidence of the
practice in the Roman Catholic church as early as the fourth century.
The earliest published works using space as a compositional element date from the mid-16th
century at the Basilica San Marco in Venice. One distinctive feature of this church was the
presence of two organs, facing each other across the basilica. The maestro di cappella,
Flemish composer Adrian Willaert, took advantage of this arrangement and began to
compose antiphonal works for two spatially separated choirs and instrumental groups, a
technique known as cori spezzati. Willaert's 8-part Vespers (1550) is the earliest-known work
of this type, featuring 'dialog form' and echo effects. The practice was adopted and extended
by Willaert's pupil Andrea Gabrieli and others, and it became a hallmark of Venetian
musical practice. Works using up to five choirs were performed.
The practice was carried elsewhere throughout Europe and became common in England.
Spem in alium by Thomas Tallis was composed in honor of Queen Elizabeth's 40th birthday
in 1573, featuring 40 separate vocal parts arranged into eight 5-voice choirs. The practice
continued throughout the Baroque period, after which it gradually tapered off. The high point
may have been Orazio Benevoli's Festal Mass for the dedication of Salzburg Cathedral in
1628, calling for 53 parts (16 vocal and 34 instrumental) plus two organs and basso continuo.
From the late Baroque through the Classical period there seems to have been little interest in
spatial antiphony, but beginning in the Romantic period one finds a number of examples of
spatial placement being used for special theatrical effects. A prime example is found in the
Tuba Mirum section of Hector Berlioz's Requiem (1837), where four separate brass
ensembles make their dramatic entrances, as from the four points of the compass, heralding
the entrance of the choir. Similarly, off-stage brass ensembles were employed in Giuseppe
Verdi's Manzoni Requiem (1874) and in Gustav Mahler's Symphony No. 2 (1895).

With the advent of the 20th century one finds a new approach to spatial music, with
performer locations being used as a way to articulate contrasting strata of musical activity.
American experimentalists, such as Charles Ives, and European Modernists, such as the
Italian Futurist Luigi Russolo, pursued interests in collage form and simultaneity, and were
creating musical works reflective of the clamor of the modern industrial world.
Ives in particular began to work with multi-layered musical forms using independent
ensembles, such as The Unanswered Question (1908), where the strings are placed offstage in
opposition to the on-stage trumpet soloist and woodwind ensemble. Ives was influenced in
this by his father George Ives, a Civil War bandmaster and music teacher, whose own
experiments with spatial music included directing two marching bands to move through the
town square from different directions.
Ives's spatial experiments were later carried to their logical conclusion in the music of Henry
Brant, whose catalog of works contains more spatially articulated works than not. Brant's
Antiphony I (1953) called for five spatially separated orchestras. Voyage Four (1963), a 'total
antiphony', required three conductors to direct percussion and brass on stage, violins on one
side balcony, violas and celli on the other, basses on floor level at the rear, woodwinds and a
few strings in the rear balconies, and several individual performers in the audience.
Windjammer (1969) had a stationary horn soloist and several wind players who moved along
prescribed routes while playing.

Electroacoustic Concert Music:
Music by Wire and Loudspeakers in the Concert Hall
The invention of sound recording, radio, and telephony introduced a new musical era as
audible sound was first displaced from its source. A range of experiments in musical
instrument design and music presentation were tried. Thaddeus Cahill's Telharmonium (1900–1906) was the first electronic musical instrument, delivering music on a subscription basis to
homes and offices via telephone lines. The instrument was huge, weighing 200 tons, and
requiring thirty railroad flatcars to transport it. Cahill's endeavor was not commercially
successful, but within a few years other inventors had followed his lead. In 1923 Leon
Theremin invented the instrument which bears his name, a radical design which was
controlled by moving one's hands in space near two antennae. The sound of the instrument
issued from a loudspeaker, and Theremin's ensemble performances are probably the first
example of multichannel loudspeaker music.
Also during the 1920s and 1930s composers including Paul Hindemith began to experiment
with the use of phonographs as musical instruments, anticipating present-day DJ practice by
nearly 80 years! Best known among these early works is John Cage's Imaginary Landscape
No. 1 (1939) which called for variable speed phonograph turntables and recordings of test
tones. Cage was also quick to adopt the performance possibilities of radio broadcast.
Imaginary Landscape No. 4 (1951) used 12 radios, 24 performers and a conductor who beat
4/4 time. At each radio, one performer controlled the frequency and the other controlled the
volume.
Although tape recording had been invented in 1927 and had been commercialized in
Germany in the early 1930s, there was no general access to this technology until after the end
of World War II. As a consequence, early compositional work with recorded sound was
dependent on optical film sound tracks and phonograph disk recorders. In 1948 Pierre
Schaeffer, an engineer at the Radiodiffusion-Télévision Française (RTF), presented the first
musical works created with disk recorders. This music was created from recordings of
everyday sounds, such as the clang of kitchen implements or the chugging of a locomotive;
Schaeffer called it musique concrète. The first performances used multiple phonograph
turntables, but soon tape recorders became the technology of choice. In collaboration with the
young composer Pierre Henry, Schaeffer created a repertoire of works for tape,
including Symphonie pour un Homme Seul (1950). Multitrack recorders were not yet
available, so the RTF composers often used multiple mono tape decks, with up to five tape
signals routed to a 4-channel speaker system. The speakers were arranged in a tetrahedral
configuration, with Front Left and Right, Back, and Overhead. To facilitate distribution of the
sound, Schaeffer conceived of a mechanism called the potentiomètre d'espace (1951), which
used induction coils to control the signal routing. The user interface was highly theatrical,
consisting of four large hoops surrounding the performer, whose arm movements regulated
the spatialization.
In the early 1950s readily available tape recorders were quickly adopted by composers
worldwide, particularly for the ease of editing sound through tape splicing. In New York John
Cage and a group of colleagues established the Project for Music for Magnetic Tape. Among
the works produced by the laborious splicing of often minuscule snippets of tape was Cage's
Williams Mix (1952) for 8 mono tapes running at 15 ips, each playing through its own
loudspeaker. The speakers were distributed in equidistant locations around the auditorium.
This was one of the first of Cage's works to use chance operations for randomized selection
of musical materials, in order to create an audio collage from a large library of categorized
sounds. Altogether the Project resulted in three works, including Earle Brown's Octet and
Morton Feldman's Intersection (also for eight mono tapes and eight loudspeakers).
Among the composers who flocked to Schaeffer's musique concrète studio at the RTF was a
German wunderkind named Karlheinz Stockhausen. After completing a short Etude for mono
tape, Stockhausen returned home to Köln and composed a series of works at the Studio für
Elektronische Musik of the Westdeutscher Rundfunk (WDR). In 1956 he completed Gesang der
Jünglinge for electronic sounds and the recorded voice of a boy soprano. This work is
generally considered the first piece for multitrack tape, using a 4-track machine plus a second
mono machine for a fifth track of playback. Stockhausen's original plan was for the fifth
speaker to be suspended above the audience, but for logistical reasons this was not possible
and the premiere featured a panoramic arrangement of speakers across the stage. Following
the premiere the composer remixed the piece for subsequent surround-sound quadraphonic
playback.
Probably the first true quadraphonic composition was Stockhausen's Kontakte (1960) for
electronic sounds. The channel arrangement was designed for Front, Left, Right, and Back
speaker positions. In order to create the effect of sounds orbiting the audience Stockhausen
used a turntable system with a rotating loudspeaker mechanism, surrounded by four
microphones to enable the re-recording of spinning sounds. Kontakte also exists in a later
version which combines live piano and percussion performance with the quad tape, and it
forms the musical core of the composer's "happening"-like theater piece Originale (1961).
Spatial movement of sound has remained a key concern throughout Stockhausen's later
career, and it has not been limited just to his electronic works. Gruppen (1957) for three
orchestras and Carré (1960) for four orchestras and four choruses both explore the spatial
movement of musical materials from ensemble to ensemble. Stimmung (1968) for six
vocalists was amplified through six equally-spaced loudspeakers surrounding the audience,
placing the listener at the sonic center of the ensemble.

International Expositions:
Programmed Multimedia Environments
As the Atomic Age of the 1950s segued into the Space Age of the 1960s popular culture
became increasingly saturated with images of the future, travel to other planets, and
ultimately of inner psychic journeys. International expositions became natural venues for
lavishly-funded multimedia extravaganzas, while at the same time the art Happenings of
Allan Kaprow and other post-Expressionist artists provided models for psychedelic acid tests
and environmental theater.
An early manifestation of the post-Sputnik obsession with space as environmental
entertainment was the Vortex multimedia program at the Morrison Planetarium in San
Francisco. Vortex (1957-59), created by Visual Coordinator Jordan Belson and Audio
Coordinator Henry Jacobs, presented a series of thirteen programs of projected light and
sound. Vortex made use of the planetarium's elaborate lighting system, which featured "all
known systems of projection." Sound was played back through thirty-six to forty
loudspeakers, and "actual movement and gyration of sound was made possible by means of a
special rotary console." The musical part of the program included works by Karlheinz
Stockhausen, Vladimir Ussachevsky, Toru Takemitsu, and Luciano Berio. The program was so
popular it was invited to participate in the Brussels World's Fair in 1958.
The major multimedia installation of the Brussels World's Fair was the Philips Pavilion,
featuring the tape composition Poème électronique (1958) by Edgard Varèse. Philips, as a
major manufacturer of lighting, optical, and audio equipment had commissioned a
multimedia environment to showcase these technologies in the form of an "electronic poem."
The pavilion was an eccentrically-shaped structure made up of hyperbolic paraboloid shells,
designed by architect (and composer) Iannis Xenakis of the firm of Le Corbusier. The interior
walls, designed in the shape of a large stomach, formed an unbroken projection surface which
received projected images and colored washes from a battery of film projectors and lighting
instruments. The audio portion of the environment was Varèse's tape composition, exactly
480 seconds long and synchronized with the visual effects by an elaborate multitrack tape
system. The audio source was on sprocketed 35mm 3-track tape, with one track holding the
main musical material and the other two containing "reverberant and stereophonic" effects.
Each track was distributed dynamically to 425 speakers via an 11-channel sound system with
twenty 120-watt amplifier channels. Loudspeakers were grouped in threes and fours and the
movement of sound from location to location was achieved through a switching system
controlled by a second 15-track sprocketed tape. Nine different "Sound Routes" were
programmed.
EXPO 70 in Osaka, Japan was home to several multichannel sound installations featuring the
music of avant-garde composers. Having come into his own as a composer in the twelve
years since he had designed the Philips Pavilion, Iannis Xenakis presented his 12-channel tape
composition Hibiki Hana Ma at the Japanese Steel Pavilion, projecting the sound through a
system of 800 speakers situated around the audience, overhead, and under the seats. At the
same time in the German Pavilion, Karlheinz Stockhausen and a group of 20 soloists
performed two concerts a day for 183 days in a blue steel spherical auditorium 28 meters in
diameter, holding an audience of 600. Stockhausen controlled the sound projection from a
station in the center of the sphere, distributing sound in circular and spiral paths through a set
of 55 loudspeakers arranged in seven rings from top to bottom of the sphere. The soloists
themselves were situated on six small balconies high on the walls, and the audience was
seated on a sound-transparent grid so that they were completely enveloped in the music from
above and below. Stockhausen's comment on this experience was:
To sit inside the sound, to be surrounded by the sound, to be able to follow and experience
the movement of the sounds, their speeds and forms in which they move: all this actually
creates a completely new situation for musical experience. 'Musical space travel' has finally
achieved a three-dimensional spatiality with this auditorium, in contrast to all my previous
performances with the one horizontal ring of loudspeakers around the listeners.
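The circular and spiral paths Stockhausen describes can be illustrated with a generic pairwise equal-power panner over a ring of equally spaced loudspeakers. This is a modern sketch of the general technique, not a reconstruction of the pavilion's actual rotary controls; the function name and conventions are illustrative:

```python
import math

def ring_gains(azimuth, n_speakers):
    """Equal-power gains for a virtual source at `azimuth` (radians)
    on a ring of n_speakers equally spaced loudspeakers.

    Only the two speakers adjacent to the source position receive
    signal (pairwise panning); a cosine/sine crossfade keeps the
    total radiated power constant as the source sweeps the ring.
    """
    spacing = 2 * math.pi / n_speakers
    pos = (azimuth % (2 * math.pi)) / spacing
    i = int(pos) % n_speakers       # speaker just behind the source
    j = (i + 1) % n_speakers        # next speaker around the ring
    frac = pos - int(pos)           # fractional position between them
    gains = [0.0] * n_speakers
    gains[i] = math.cos(frac * math.pi / 2)
    gains[j] = math.sin(frac * math.pi / 2)
    return gains
```

Sweeping `azimuth` over time traces a circle; slowly crossfading between rings at different heights would approximate the spiral trajectories through the sphere's seven rings of speakers.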
Whereas the Japanese Steel and German pavilions were designed as multichannel concert
venues for the presentation of a particular artist's work, the Pepsi Cola Pavilion was created
as an adaptable multimedia instrument, which could take on different personalities according
to its programming by a variety of artists. The pavilion was a collaborative design by an
American group of artists and engineers called Experiments in Art and Technology (E.A.T.),
which had come together in 1966 on the occasion of a project called "9 Evenings: Theatre
and Engineering."
The dome's 37 loudspeakers were arranged in a rhombic pattern and could be driven by
combinations of up to 32 inputs deriving from 16 monaural tape recorders and 16 microphone
preamps. An input switching system permitted combinations of four inputs each to be routed
to 8 signal processing channels, each of which contained in series amplitude modulation,
frequency modulation, and high-pass filtering. The outputs of this system then passed through
a programmable switching matrix and thence to the speaker system of 12" speakers powered
by 50 watts each. In addition to the fixed speaker installation the pavilion offered a large
number of Handsets which could be carried around the space. Each Handset picked up audio
material by means of an electronic induction system, so that 11 zones within the space
represented different sonic environments which could be "tuned-in" to by the bearers of the
device.
In addition to the sound system, the pavilion had a rich array of optical and environmental
effects such as laser beams and dense fog. The interior of the dome had a mirror finish,
providing an ever-changing distorted image as performers and the public moved about the
space. Because the pavilion was designed as an adaptable instrument, much depended on the
choices of the programmers. In all there were twenty-four artist/technologists chosen for this
task. They ranged from musicians and sound artists to light (or "lumia") artists to
dancer/choreographers.
Art Environments and Guerrilla Electronics
In marked contrast to the precision-controlled spectacle of the Philips Pavilion or the live
interaction of Stockhausen's sphere, the John Cage and Lejaren Hiller collaboration, the
multimedia environment HPSCHD, was a study in programmed environmental chaos.
HPSCHD was performed at the University of Illinois at Urbana on May 16, 1969 with seven
harpsichord soloists, fifty-one computer-generated tapes, eighty slide projectors, seven film
projectors, and an audience of 9,000. There were a total of 58 speaker channels, amplifying
the harpsichordists and the tapes. The tape parts were of computer-synthesized harpsichord
sounds, based on tuning systems which divided the octave into equal parts numbering from
five to fifty-six. The result of all these fifty-one tapes and seven soloists playing continuously
was a dense microtonal sound mass, playing without a stop for five hours. The corresponding
visual display was equally dense. A number of plastic sheets hung overhead as projection
screens, each one hundred by forty feet, with a continuous 340-foot circular screen running
around the perimeter. NASA had loaned forty films and 5,000 slides, resulting in a
preponderance of space travel imagery.
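The tunings behind those tape parts, equal divisions of the octave into n steps for n from 5 to 56, are simple to compute. A minimal sketch (the function name and base frequency are illustrative, not drawn from the HPSCHD materials):

```python
def edo_frequencies(n, base=261.63):
    """Frequencies of one octave divided into n equal steps.

    Returns n + 1 values from `base` up to the octave (2 * base),
    each step a constant ratio of 2 ** (1 / n).
    """
    return [base * 2 ** (k / n) for k in range(n + 1)]
```

For n = 51 each step is about 23.5 cents, far finer than the 100-cent semitone, which is why the superimposed tapes blur into a microtonal mass.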
David Tudor was another member of John Cage's circle and a frequent collaborator. Known
earlier in his career as a virtuoso organist and pianist, and as a foremost interpreter of
avant-garde keyboard music during the 1950s, Tudor became enraptured by the vagaries of
unstable electronic circuits and devoted the rest of his life to interactive environmental sound
installations based on complex networks of resonant circuits. Rainforest is perhaps his
best-known work. It
existed in several versions, the first in 1968 being a sound-score for a dance work by Merce
Cunningham. It was based on the process of activating small resonant objects by means of
audio transducers, so that each object produced a sound determined by its physical materials,
and modified by the resonant nodes of those materials. Rainforest IV (1973) was a
"collaborative environmental work, spatially mixing the live sounds of suspended sculptures
and found objects, with their transformed reflections in an audio system." The sound system
used varied from four to eight independent speakers, but because the sculptural objects were
themselves emitting sounds there were actually a larger number of discrete sources. In
Tudor's words, "The idea is that if you send sound through materials, the resonant nodes of
the materials are released and those can be picked up by contact microphones or phono
cartridges and those have a different kind of sound than the object does when you listen to it
very close where it's hanging. It becomes like a reflection and it makes, I thought, quite a
harmonious and beautiful atmosphere, because wherever you move in the room, you have
reminiscences of something you have heard at some other point in the space."
Although Tudor died in 1996, his work continues to be researched and performed by a devoted
group of younger colleagues. Rainforest IV was performed at Lincoln Center's Summer
Festival '98, and in Toronto and Oakland, California to mark the work's 25th anniversary, and
in May 2001 as part of the symposium The Art of David Tudor at the Getty Center in Los
Angeles.

Classic Studio Composition:
Quadraphony and Beyond
Beginning in 1960 many electronic music studios began to standardize their playback
systems on a quadraphonic array of speakers, generally positioned in the four corners of a
room. This follows naturally from typical rectangular room geometry, is a logical next step
from two-channel stereo, and coincides neatly with the availability of four-track tape
recorders. Although commercial quad was a relatively brief blip in the home audiophile
market (due largely to the limitations of vinyl) tape-based 4-channel music both anticipated
and survived commercial quad by a number of years on either side. Indeed, there developed a
sort of quadraphonic common-practice which was supported by a world-wide network of
venues as well as compositional tools (both hardware and, later, software).
The earliest practitioners of quad panning were normally forced to "roll their own" quad
panners (such as Lowell Cross's "Stirrer," which could rotate four channels with a single
joystick) or to develop special fader and pan-pot techniques. As modular analog synthesizers
became available in the mid-1960s it became feasible to perform spatial modulation using
voltage controlled amplifiers (VCAs). For example, the Buchla Series 200 Electronic Music
Box (1970) offered the Quadraphonic Monitor/Interface Model 226, which according to the
brochure, "Provides for monitoring, interfacing, and final program formatting in four-channel
systems. Built-in provisions for quad mixing, duplications, and overdubbing simplify the
manipulation of quadraphonic material." Composer Morton Subotnick, whose musical ideas
helped to shape the design of Don Buchla's instruments, has even claimed that his work with
quad helped get him a recording contract (!), though his electronic piece Sidewinder (1970)
remains his only commercially released quadraphonic work.

While most of the electronic music of this period was realized with analog electronics, there
was significant concurrent development using digital
mainframe computers. Because of the cost (there were no garage sale computers in the 1950s
and '60s) this work was supported only in such large institutions as AT&T's Bell Telephone
Labs in New Jersey, where Max V. Mathews and his colleagues invented computer sound
synthesis. Mathews' first program, Music I, was completed in 1957 and by the early '60s had
begun to attract interest elsewhere. John Chowning was a graduate student at Stanford
University when he read of Mathews' work. In the summer of 1964 Chowning visited
Mathews at Bell Labs, was given a box of computer punch cards containing the code for
Music IV, and returned to Palo Alto to make the best of it. By the fall he and a computer
science colleague had the program up and running and began research on the computer
simulation of sounds traveling through space. By 1966 he had completed a program which
allowed him to "draw" a sound trajectory, and which would compute the necessary energy
distributions between four loudspeakers as the sound traversed a particular path.
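The idea of computing energy distributions among four loudspeakers can be sketched with a standard equal-power scheme. This is a generic illustration of the technique under the usual constant-power assumption, not Chowning's actual program:

```python
import math

def quad_gains(x, y):
    """Equal-power gains for four corner speakers, keyed FL, FR, BL, BR.

    (x, y) is the virtual source position with each coordinate in
    [-1, 1]: x = -1 is hard left, y = +1 is fully front. Bilinear
    weights are square-rooted so the summed power stays constant
    anywhere along a trajectory.
    """
    gains = {}
    for name, sx, sy in [("FL", -1, 1), ("FR", 1, 1),
                         ("BL", -1, -1), ("BR", 1, -1)]:
        w = ((1 + sx * x) / 2) * ((1 + sy * y) / 2)
        gains[name] = math.sqrt(w)
    return gains
```

Sampling a drawn trajectory as a sequence of (x, y) points and applying these gains per audio block reproduces the basic behavior the paragraph describes.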
At this point Chowning turned his attention to digital synthesis of musical tones, a significant
effort which resulted in the powerful FM (frequency modulation) synthesis technique used in
the best-selling Yamaha DX7 keyboard instrument. When he returned to his work in spatial
sound manipulation, armed with a new and richer sonic palette, the results were astounding.
He completed a quadraphonic work, Sabelith, and in 1970 he delivered his seminal paper
"The Simulation of Moving Sound Sources" at the Audio Engineering Society convention, a
study which included an analysis of the effects of Doppler shift on the perception of the speed
of a moving sound, as well as of local and global reverberation effects. Two years later he
completed the classic Turenas (1972), one of the monuments of computer music.
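The Doppler cue analyzed in that paper follows the standard moving-source formula; this sketch is a generic illustration of the physics, not code from the paper:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def doppler_shifted_frequency(f_source, radial_velocity):
    """Perceived frequency of a source moving relative to a fixed listener.

    `radial_velocity` is in m/s along the line to the listener:
    positive means approaching (pitch rises), negative means
    receding (pitch falls). Valid for speeds below SPEED_OF_SOUND.
    """
    return f_source * SPEED_OF_SOUND / (SPEED_OF_SOUND - radial_velocity)
```

The faster a simulated source passes the listener, the steeper the pitch glide from approach to recession, which is precisely the cue listeners use to judge the speed of a moving sound.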
Stanford in the 1970s was THE computer music facility, and the Center for Computer
Research in Music and Acoustics (CCRMA) played host to composers from around the
world. Among them was Roger Reynolds, who had been working with analog technology at
University of California San Diego on a series of multichannel pieces using the human voice
as primary source material. Reynolds proceeded from the principle that, since the voice is the
most familiar sound in our environment, our ears are acutely sensitive to any variations or
anomalies in its location and quality of sound ("intimate endearments, rage at a distance …").
The early Voicespace pieces, such as Still (1975), were produced at the Center for Music
Experiment at UCSD using analog recording and mixing technology as well as analog
reverberators and a voltage controlled spatial location system. Eclipse (Voicespace III) (1979)
and The Palace (Voicespace IV) (1980) were produced at Stanford, relying on then-new
capabilities for digital editing and advances in digital reverberation algorithms.
Eclipse was created as part of a collaborative intermedia work with artist Ed Emshwiller,
presented at the Guggenheim Museum in New York. The work took advantage of the
museum's enormously tall central atrium, with its helical ramp. Emshwiller's imagery was
projected on a series of screens while Reynolds' music sounded from seven speaker positions
including the extreme top of the atrium. The Palace was perhaps more conventional in its
quadraphonic presentation. Singer Philip Larson performed alternating dramatic baritone and
countertenor passages from a seated position on-stage, while his digitally recorded speaking
voice sounded from the four speakers surrounding the audience. His voice had been
"analyzed and then processed in such a way as to emphasize their natural harmonic content
and give them a supra-human scale." The reverberation algorithm created the illusion of an
impossibly huge space, resonating and sustaining the pitches of Larson's voice in an
acoustically surreal manner.
Reynolds' research into, and composition of, spatial music continues using a wide range of
technologies and techniques. Transfigured Wind II (1984) for solo flute, quadraphonic tape,
and orchestra uses computer analysis and resynthesis techniques to fragment, distend, and
otherwise transform the sound of the flute and to project these sounds through the space of
the concert hall. The processing was accomplished at IRCAM, the French government-sponsored
research facility in Paris. More recent works have increased the channel count,
have explored a diversity of technical means, and have brought the means of production back
home to San Diego with the establishment of the TRAnSIT (Toward Real-time Audio
Spatialization Tools) project. This work is documented on the recently-released Watershed
DVD, on Mode Records, with several works mixed down to 5.1 format.
The title piece Watershed IV (1996), for solo percussion and computer-mediated interactive
spatialization system, is performed with a six-channel surround system. It employs a spatial
processing system developed by the TRAnSiT team, consisting of a Silicon Graphics (SGI)
computer running custom-designed software and a digitally controlled audio matrix from
Level Control Systems (LCS).
The direct sound of the instruments and the reverberation return from a digital reverb unit are
mixed and distributed through the LCS matrix to a six-channel speaker system. Four of the
speakers are arrayed in an equally-spaced panorama across the stage with the percussion
setup at the center, while the additional two speakers serve as left and right surrounds.
Certain of the microphone signals from Steve Schick's panoply of percussion instruments are
routed to the audio inputs of the SGI computer, where they are used to trigger a set of
preprogrammed spatialization effects.
The specific type of spatialization changes throughout the piece: sometimes the sound of the
percussion is projected into various sound spaces (with varying degrees of reverberation and
at different apparent relationships to the listeners), sometimes the listeners themselves seem
to be in the performer's position at the center of the percussion kit, and at other times
individual instruments are heard to be flying around the room.
The disc also includes an excerpt from the 8-channel tape part of The Red Act Arias
(1997), a series of works based on the Greek tragedy of Clytemnestra and Agamemnon. The
piece in its entirety is a 47-minute composition for narrator, orchestra, chorus, and 8-channel
computer processed sound. The computer part features massive panning of the layered
sounds of fire and water, implemented by Reynolds' assistant Tim Labor using the cmusic
"space" unit generator. The work was commissioned by the BBC Proms Festival and was
premiered in 1997 at the Royal Albert Hall in London by the BBC Symphony Orchestra and
Singers, conducted by Leonard Slatkin. The next stage in The Red Act project was Justice
(2000), a theatrical work for soprano, actress, percussionist, tape, and real-time computer
spatialization, which was created with director Tadashi Suzuki and premiered at the 2nd
Theatre Olympics in Shizuoka, Japan in May 1999 (the first fully-staged version premiered
the following year in the Great Hall of the Library of Congress in Washington, DC).
Multichannel Diffusion as Performance Practice
In contrast to works such as John Chowning's Turenas and Roger Reynolds' Transfigured
Wind, which are essentially deterministic works reflecting a "classical" studio philosophy,
there is a community of composer/performers worldwide who embrace a philosophy of live
diffusion of recorded works through an "orchestra" of multiple speakers. The lineage of this
work can be traced directly to the first performances of musique concrète using a four-channel
speaker system by Pierre Henry in the early 1950s, and indeed Henry himself
continues to be at the forefront of this practice. Now in his late 70s, Henry has had a long
career as composer of works for concert, film, broadcast, and record album. In recent years
he has been discovered by a new generation of DJs or live remixers, who revere him as the
grandfather of their craft.
Henry's typical practice is to play back prerecorded mono or stereo source materials and to
route these through a special "diffusion desk" to a battery of loudspeakers. In contrast to the
familiar practice by classical tape music composers of using matched speakers in a quad or
greater surround configuration, Henry uses a heterogeneous selection of speakers, chosen for
their individual tonal characteristics and positioned to take advantage of variations in the
acoustics within the performance venue. It is common for the majority of the speakers to be
positioned toward the front of the room, functioning as an on-stage orchestra, though there
may also be speakers to the sides and rear of the audience, as well as suspended
overhead. In Henry's diffusion (and indeed this is true of most of his diffusionist colleagues)
the main interest is not in moving sound around in space but rather in articulating the
music by performing different passages through different-sounding arrays of speakers.
Thus a sound which is massive and threatening in character might be sent to a pair of very
large cabinets positioned far upstage, and then gradually introduced into a larger number of
speakers surrounding the audience, while simultaneously increasing the subwoofer feed.
Similarly, a delicate sound might be circulated through a battery of very small tweeters
suspended overhead.
A typical example of such a performance took place in Montreal in 1999 as part of a festival
produced by the Canadian organization ACREQ. Pierre Henry's masterwork L'Apocalypse de
Jean (1968) was diffused through a speaker system with 24 full-range channels, plus 6
subwoofer channels, by his long-time assistant Nicolas Vérin. The source recording in this
case was actually the commercial CD of the work!
As might be expected, Henry's greatest influence is in his own country and wherever there
exists a cultural connection. Hence we find prominent groups in France (Groupe de
Recherches Musicales (GRM) in Paris with their Acousmonium and the Groupe de Musique
Expérimentale de Bourges with the Gmebaphone), in Belgium, and in Quebec (ACREQ and
the Canadian Electroacoustic Community). There is also considerable activity in the United
Kingdom, with a particular locus of activity in Birmingham, England.
The Birmingham ElectroAcoustic Sound Theatre (BEAST) is based at the music department of
the University of Birmingham and is directed by Professor Jonty Harrison. "The BEAST
system uses up to thirty channels of loudspeakers, separately amplified and arranged in pairs,
each pair having characteristics which make them appropriate for a particular position or
function. They include custom built trees of high frequency speakers suspended over the
audience, as well as ultra-low frequency speakers." The basic core of the system is eight
speakers, referred to as the "Main Eight." These are the Main, Wide, Distant (all in the stage
area in front of the audience) and Rear pairs. Added to that are optional Side Fills, Stage
Centre, Front/Back, Punch, Front Roof and Rear Roof, Stage Edge, Very Distant, and Mixer
sets. It is typical to use 24 channels in a 100-seat hall. A special diffusion desk was built by
DACS Ltd. with 12-in 32-out signal routing. As in Pierre Henry's work, it is typical for
BEAST composers to use a two-channel source. The stereo perspective of the source is
generally preserved, so that the left channel signal is split and sent through all the channel
strips feeding the left-hand speakers, while the right channel feeds all the right-hand speakers.
The faders are grouped so that the Main Eight lie together naturally under the hands, with Bass
Bins, Tweeters, and extra Stage faders to the left and Side Fill and Overhead faders to the
right.
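The stereo-preserving fan-out just described can be sketched in a few lines. The pair names echo the "Main Eight" terminology above, but the `route_stereo` function and the fader levels are invented for illustration and are not BEAST's actual software.

```python
# Sketch of BEAST-style routing: the left channel of a stereo source feeds
# every left-hand speaker and the right channel every right-hand speaker,
# with one fader level per speaker pair. Pair names follow the "Main Eight"
# described above; the function and gain values are illustrative only.

def route_stereo(left, right, faders):
    """Fan a stereo source out over named speaker pairs.

    `faders` maps a pair name (e.g. "main", "distant") to its fader
    level; the stereo perspective of the source is preserved, since
    left only ever feeds left-hand speakers and right only right-hand.
    """
    feeds = {}
    for pair, level in faders.items():
        feeds[pair + "_L"] = level * left
        feeds[pair + "_R"] = level * right
    return feeds

# Open the Main pair fully and bring the Distant pair in quietly.
print(route_stereo(1.0, 1.0, {"main": 1.0, "distant": 0.3}))
```

Note that nothing here pans between left and right: the diffuser's choices are which pairs sound and at what level, exactly as the fader grouping suggests.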
The BEAST system and its French cousin the Acousmonium are frequently taken on tour.
Recent performances have taken place at festivals in Scotland, Holland, Germany, and
Austria, many of which were in celebration of the 50th anniversary of musique concrète.
Despite some differences in style between the groups, there is a certain commonality of
philosophy concerning sound diffusion. Given that the practice of diffusion has its roots in
the manipulation of the physical sound sources of musique concrète, it follows that this
physicality also manifests itself in the physical gesture of diffusion. Therefore the diffusion
hardware is primarily manual; there is very little interest in the use of automation systems in
the performance of this work. Associated with this is the concept of "spectro-morphology,"
that is, the association of the sonic shape of the source material with the performance gesture
used in diffusion. Putting it more simply, the sound "tells you where it wants to go."
Although we see that multichannel diffusion as a performance practice is primarily European
(and to an extent, Canadian), there are some examples to be found in the United States. Two
of the most established are in San Francisco. Composer Stan Shaff's Audium, A Theatre of
Sound-Sculptured Space, is the more venerable of the two, having presented its first public
concert in 1960 and existing in its present location since 1975. The dome-shaped 49-seat
theater houses 169 loudspeakers of various sizes. Like the European diffusion systems,
Audium is controlled manually with a custom-designed diffusion desk. The source music in
this case comes from 4-track tape and consists of Shaff's own highly personal compositions.
The routing system is a sort of binary tree structure, so that each signal can be switched into a
particular subsystem and circulated among a subset of the total complement of speakers.
Audium is open to the public on weekends.
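The binary-tree routing described for Audium can be modeled as a chain of two-way switches: each switch setting sends the signal down one of two branches, and a full path of settings selects a subsystem and ultimately a speaker group. The two-level tree and speaker-group labels below are invented for illustration; Shaff's actual routing is far larger.

```python
# Hedged sketch of binary-tree signal routing, as described for Audium.
# Each internal node is a two-way switch; leaves are speaker groups.
# The tree shape and labels here are made up for illustration.

class Node:
    def __init__(self, label=None, left=None, right=None):
        self.label, self.left, self.right = label, left, right

def route(node, path):
    """Follow a sequence of switch settings (0 = left branch, 1 = right
    branch) down the tree and return the speaker group the signal reaches."""
    for bit in path:
        node = node.right if bit else node.left
    return node.label

# A two-level tree: front/rear first, then floor/ceiling subsystems.
tree = Node(left=Node(left=Node("front-floor"), right=Node("front-ceiling")),
            right=Node(left=Node("rear-floor"), right=Node("rear-ceiling")))

print(route(tree, [0, 1]))  # prints front-ceiling
```

The appeal of such a structure for manual performance is that a handful of switches controls routing to a large speaker complement, with each signal confined to one subsystem at a time.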
Surround Traffic Control (STC, operating as part of Recombinant Media Labs at Asphodel
Records) is an independent group active in multichannel sound control and multimedia since
the mid 1970s. An earlier dodecaphonic speaker configuration (a cube intersected by an
equilateral triangle) has been expanded to the present 16.8. Various technologies are used for
control and diffusion, including both a full range of commercial and proprietary software
(Trajectory Algorithmic Generation or TAG, written in Max/MSP) and an infrared conducting
interface based on the Buchla Lightning, controlling a set of Yamaha 02R mixers. Sound
sources include tape compositions by members of the group, live electroacoustic
performances by the ISO Orchestra, and a variety of invited guest performers, including
recent residencies by Deadbeat, Monolake, and Matmos. During the late 1990s the group
mounted a series of public performances wherever and whenever resources made it possible
(including the Ars Electronica Festival in Linz, Austria (1999) and the Expo '98 in Lisbon)
but projects are increasingly developed in-house as their own facilities evolve.
BIBLIOGRAPHY

Brant, Henry. "Space as an Essential Aspect of Musical Composition." Contemporary
Composers on Contemporary Music, ed. E. Schwartz and B. Childs. New York, 1967, 221.
Chadabe, Joel. Electric Sound: The Past and Promise of Electronic Music. Upper Saddle
River, NJ: Prentice Hall, 1997.
Chaudhary, Amar and Adrian Freed. Visualization, Editing and Spatialization of Sound
Representations using the OSE Framework.
Chowning, John. "The Simulation of Moving Sound Sources." Journal of the Audio
Engineering Society 19/1 (1971): 2-6.
Cope, David H. New Directions in Music. Dubuque, Iowa: Wm. C. Brown Co. Inc., 1976.
Cott, Jonathan. Stockhausen: Conversations with the Composer. New York: Simon and
Schuster, 1973.
Davies, Hugh. International Electronic Music Catalog. Cambridge, Massachusetts: The MIT
Press, 1968.
Driscoll, John and Matt Rogalsky. "David Tudor's 'Rainforest': An Evolving Exploration of
Resonance." Leonardo Music Journal, Volume 14, 2004.
Ernst, David. The Evolution of Electronic Music. New York: Schirmer Books, 1977.
Harvey, Jonathan. The Music of Stockhausen. Berkeley and Los Angeles: University of
California Press, 1975.
Harrison, Jonty. Sound, Space, Sculpture: some thoughts on the 'what', 'how' and (most
importantly) 'why' of diffusion...and related topics.
Humon, Naut et al. "Sound Traffic Control: An Interactive 3-D Audio System for Live Musical
Performance." ICAD '98 Proceedings, University of Glasgow, 1998.
Kaup, Arnold et al. Volumetric Modeling of Acoustic Fields for Musical Sound Design in a
New Sound Spatialization Theatre.
Klüver, Billy, ed. Pavilion by Experiments in Art and Technology. New York: E.P. Dutton &
Co., Inc., 1972.
Kostelanetz, Richard. John Cage. New York: Praeger Publishers, 1970.
Kostelanetz, Richard. John Cage (ex)plain(ed). New York: Schirmer Books, 1996.
Kurtz, Michael. Stockhausen: A Biography. London and Boston: Faber and Faber, 1992.
Leitner, Bernhard. Sound:Space. New York: New York University Press, 1978.
Maconie, Robin. The Works of Stockhausen. Marion Boyars, 1976.
Moore, F.R. Elements of Computer Music. Englewood Cliffs, NJ: Prentice Hall, 1990.
Oliveros, Pauline. "Acoustic and Virtual Space as a Dynamic Element of Music." Leonardo
Music Journal, Number 5, 1995.
Ouellette, Fernand. Edgard Varèse. New York: The Orion Press, 1968.
Reynolds, Roger. Explorations in Sound/Space Manipulation.
Reynolds, Roger. Mind Models: New Forms of Musical Experience. New York: Praeger
Publishers, 1975.
Roads, Curtis. The Computer Music Tutorial. Cambridge, Massachusetts: The MIT Press,
1996.
Treib, Marc. Space Calculated in Seconds: The Philips Pavilion, Le Corbusier, Edgard
Varèse. Princeton, NJ: Princeton University Press, 1996.
Wörner, Karl H. Stockhausen: Life and Work. Berkeley and Los Angeles: University of
California Press, 1973.
Xenakis, Iannis. Les Polytopes. Balland, 1975.

© 1999, 2005 Richard Zvonar
