
Constructing Media Theory

The Keywords of Media Theory began life as an assignment in the course Theories of Media, taught by W. J. T. Mitchell

at the University of Chicago. The course sought to explore the concept of media and mediation in very broad terms,

looking not only at modern technical media and mass media, but at the very idea of a medium as a means of

communication, a set of institutional practices, and a "habitat" in which images proliferate and take on a "life of their

own." As a result of this goal, the course dealt as much with ancient as with modern media, with writing, sculpture,

and painting as well as television and virtual reality. In methodology, the course drew from media studies, cinema and

film, art history, and literature, but its goal was to establish connections between these fields that would allow for a

theory-driven, comparative approach to the study of media and their history.

The Keywords assignment was designed to enable this goal by establishing a group of terms that the class could use

to bridge their own interdisciplinary backgrounds. Thus, it would serve as a base for the pursuit of an interdisciplinary,

theoretical, and historical study of mediums, media, and mediation—both a jumping-off point and a place to return to

when discussions became too ethereal. Above all, though, we have hoped to provide a sense of the fluctuation of

these terms, their differing connotations, contingent valences, and multiple meanings. The resulting resource offers a

rare opportunity to document the collective learning accomplished by the course over a period of several years.

Within the course, keywords essays are considered a major resource. Used alongside other readings, these essays are

constantly referenced, and students often cite them in their own work. The students learn from each other,

demystifying the aura of the media theory avatars they are reading. The recursivity of learning among students is

mirrored by the recursivity of the online interface. A tiled interface provides a picture of the entire field of terms in

debate, no one term emphasized over another. Like tiles, these terms are our building blocks, and the interface is

extendable like the technique of tiling (one term after another). The format disrupts traditional alphabetical listings,

based on the model of the index, by providing a visual form that does not separate the alphabet spatially. Within the

essays, keywords are hotlinked within the body of the essay as well as by a quicklinks menu. These hotlinked terms

lead the reader from one essay to the next in a crawling network of terms.

Raymond Williams' essay "From Medium to Social Practice" has served as our touchstone, based on its differentiation

between a materialist conception of medium and medium as social practice, and the implication of historical

transformation from one to the other. Our keywords are the language that does the work of transformation—every

term evokes both materialist usages and issues of social practice. We can employ these terms as bridges between the

two sides of media, material technique and social practice, and in doing so, we can see these conceptions that

Williams sets in opposition as two sides of the same coin, tendencies we must always account for and fruitful avenues of

investigation.

We have come a long way from our print models, from Raymond Williams' Keywords (1976) to Formless: A User's

Guide, by Rosalind Krauss and Yve-Alain Bois (1997). The online form of the project, in allowing us to teach and

publish with new media, has proven an exciting pedagogical arena. In relation to our subject, media theory, it has

prompted us to think about the interface as a mediating element, whether it is the tiled interface presented here for

the Keywords Glossary, or the form of books themselves, as in the case of our printed forebears. The project has offered
students a rare opportunity to test out the world of publishing, and to consider how academic study gets transformed

into textual and visual forms that can teach others. Our students serve not only the academic community at the

University of Chicago, where we are developing use of the Keywords Glossary in the Media Aesthetics sequence of the

Common Core, but also readers nationally and internationally. Programs at universities such as Brown, Wisconsin, and Toronto list us as a

resource, and the project has forged a community of thinkers. Through language, of course, communities come to

agree and disagree, and thus we have enacted our own community formation project for the study of media theory.

W. J. T. Mitchell, Eduardo de Almeida, and Rebecca Reynolds, February 2004

film

The Oxford English Dictionary offers seven distinct definitions of the word film. To situate film in the broader context of media, and to handle the range of ways in which film can be understood as a medium, this article begins by making explicit two central definitions of the word.
First, film is the material basis for media such as photography and motion pictures. The Oxford
English Dictionary defines film in this capacity as "a thin, flexible, transparent material consisting
essentially of a plastic base or support (formerly of celluloid, now commonly of cellulose acetate)
coated on one side with one or more layers of emulsion and sold as a rolled strip and as separate
sheets; also, a single roll of this material, allowing a small number of exposures for use in still
photography or a large number for use in cinematography." [1] Second, film is often used
synonymously with words such as "movie" or "motion picture." The Oxford English Dictionary
defines film in this capacity as "a cinematographic representation of a story, drama, episode,
event, etc." [2]
Indeed, defining what a film is has proven to be one of the central preoccupations of film discourse. The French
film critic André Bazin published a four-volume work whose title, What is Cinema?, defines its
subject. It is from the first volume that his seminal essay "The Ontology of the Photographic Image"
derives. In the essay Bazin argues that to understand what cinema is one must attend not only to
the ontology of the actual film but also the psychological conditions governing its reception. For
Bazin, the invention of photography constitutes the most important event in the history of film. It
was a moment when "a shift in the material, ontological basis of images forced a reconfiguration of
aesthetics and psychology in the ever-variable 'balance between the symbolic and realism.'" [3]
Realism, for Bazin, is "an automatic effect of photographic technology drawing on an irrational
psychological desire" [4] and so it is psychologically, rather than aesthetically, that photography,
and therefore cinema, satisfies "our appetite for illusion by a mechanical reproduction in the
making of which man is no part." [5] Noël Carroll, in the Encyclopedia of Aesthetics, attempts to
determine how one is to differentiate the film image from pro-filmic reality,
painting, still pictures, and theater. His preliminary definition is constructed by situating film in
opposition to other media in an attempt to isolate what is unique to film. He asserts that a film
image is a "detached display," meaning that "seeing a film image is not just like seeing reality." [6]
He notes that while paintings are also detached displays, films differ in that they offer the
possibility for "the impression of movement." [7] A stage play, however, is both a detached display
and offers the possibility for the impression of movement, and so he revises his necessary
conditions so that "X is a motion picture image (1) only if X is a detached display, (2) only if X is a
member of the class of things wherein the impression of movement is possible, and (3) only if a
token performance of X is generated by means of a template that is itself a token." [8] The template
of such a performance, itself a token, is the physical filmstrip, which
exists on its own after the token performance is over. This final definition centers not only on film's
material existence, but also the ways in which film is unlike other, potentially overlapping, media.
However, other film theorists, including Carroll, have argued that conceiving of film as a unique,
singular medium is wrong. For Carroll the notion of an artistic medium is vague, "referring
sometimes to the physical materials out of which artworks are constructed, sometimes to the
implements that do the constructing, and sometimes to formal elements of design that are
available to artists in a given practice." [9] With regard to film, Carroll notes that photochemical
emulsion is not necessary for film—some films consist of images painted directly
onto the filmstrip—and neither is a camera. "Cinema can be made without cameras, a point
reinforced by the existence of scratch films." [10] Carroll argues that "each artform is a multiplicity
of media, and that the relevant media are open to physical re-invention." [11] Film, therefore,
should not be understood as a unique, or even single, medium, but as many media. Whereas
Carroll, in writing about the multiplicity of media contained within film, is concerned particularly with
the content of film, other scholars have formulated theories of film that encompass elements
beyond the film proper. Jean-Louis Baudry developed the idea of film as an apparatus, in which film is
understood not only through its content but also with respect to the camera, the film stock, the
editing, the projection, and so on: all the elements of film technique and technology that go into
producing a film. In “Ideological Effects of the Basic Cinematographic Apparatus” Baudry argues
that “the specific function fulfilled by the cinema [is] as support and instrument of ideology.” [12]
Crucial to his understanding of film as an ideological instrument is the spectator. “The ideological
mechanism at work in the cinema seems thus to be concentrated in the relationship between the
camera and the subject.” [13] Cinema, then, is a “sort of psychic apparatus of substitution,
corresponding to the model defined by the dominant ideology” that constitutes its subject to serve
the dominant ideology. [14] A key component to Baudry’s argument about the ideological effects
of the cinematographic apparatus is that film projection enables the illusion of continuous action
even though film is composed of discrete images, which, in most films, form a string of minutely
different photographic frames. It is easy to think of photography as the antecedent to film, and of the
relationship between the two as natural or evolutionary. Indeed, the discovery, in the 16th century,
that salts of silver darken when exposed to light enabled the knowledge and technology
necessary for the first daguerreotypes in 1839. These daguerreotypes eventually gave
way to the first modern photographs in the middle of the 19th century, culminating in the late 1880s, when
Eastman introduced celluloid as the material basis for photography. Celluloid-based photography
would go on to serve as the material basis for most modern films.
But the history of film is not a clean, linear narrative. The years of early cinema are marked by
disparate, competing technologies, all of which can lay claim to ushering in the modern film and
each of which has informed and shaped what eventually became standardized, in the early 20th
century, as the modern cinema. Though most technologies were photography-based, the
19th-century Mutoscope and Zoetrope, for example, were devices that functioned
in ways principally similar to film projection. Instead of a filmstrip moving in front of a flickering
light, these devices quickly rotated sequences of images viewed through slits or peepholes to create the illusion of moving
images. And just as no single technology is the immediate precursor to film, the relationship
between film and photography is not simply evolutionary—the advent of cinema did not spell the
demise of photography. As much as the technologies overlap, there are crucial differences
between film and photography, including, for example, the possibility of synchronous sound in
film. As Paolo Cherchi Usai writes, "The history of cinema did not begin with a 'big bang.' No single
event—whether Edison's patented invention of the Kinetoscope in 1891 or the Lumière brothers'
first projection of films to a paying audience in 1895—can be held to separate a nebulous pre-
cinema from cinema proper. Rather there is a continuum which begins with early experiments and
devices aimed at presenting images in sequence and includes not only the emergence in the 1890s
of an apparatus recognizable as cinema but also the forerunners of electronic image-making." [15]
Just as the origins of film are marked by the existence of competing image technologies, so too is
the current film world. The rise of digital imaging technologies over the last few decades is
challenging film as the material basis for cinema. Increasingly films are being edited with non-
linear editing programs, which require the analog film stock to be digitized so that the film can be
edited on computers. Computer Generated Images (CGI) have replaced older, analog special
effects techniques, some of which, such as superimposition, date back to the early twentieth
century and are almost as old as cinema itself. Lev Manovich has argued that "conceptually,
photorealistic computer graphics had already appeared with Félix Nadar's photographs in the
1840s and certainly with the first films of Georges Méliès in the 1890s. Conceptually, they are the
inventors of 3-D photorealistic computer graphics." [16] Then there is digital video, a medium on
which feature-length films are slowly being recorded. While analog film is still the basis for most
major studio movies, the digital has invaded cinema to the point that, for Manovich, "[cinema] is no
longer an indexical media technology but, rather, a subgenre of painting...The photographic and
cinematic aspects of cinema are being replaced by the painterly and the graphic." [17] The
influence of digital media is altering, perhaps fundamentally, what it means to be a film. Or, as
Manovich puts it: "The moving-image culture is being redefined once again." [18]

Joel Witmer, Winter 2007


reality, hyperreality (1)

The Oxford English Dictionary defines reality foremost as "the quality of being real or having an
actual existence" and supplements this with a definition of real as "having objective existence,"
and finally to exist as having "place in the domain of reality." These conventional definitions of
reality represent a larger problem in the attempt to locate the real on the most basic level, for they
are wholly circular, a set of signifiers reflecting back at each other and lacking the grounding necessary
to render meaning. This problem is not unique to the word ‘reality’; indeed, almost all words and
signs are only able to refer back to the internal exchange of other signs in order to produce a
theoretical anchor. The slippage of reality, its elusiveness encountered even in a basic search for a
definition, is an element of the hyperreal – a condition in which the distinction between the ‘real’
and the imaginary implodes. There is no static definition of hyperreality, and the interpretations
employed by theorists vary on some of the most essential terms. That said, this article will attempt
to extrapolate a common understanding of the hyperreal based on the work of several theorists. A
general understanding of hyperreality is important for it is an issue at the crux of several critical
debates within the study of media including semiotics, objects and space, the spectacle,
performativity, the examination of mass media, Platonism, resistance, and the structure of reality.
The concepts most fundamental to hyperreality are the simulation and the simulacrum (see
simulation/simulacra (2)). The simulation is characterized by a blending of ‘reality’ and
representation, where there is no clear indication of where the former stops and the latter begins.
The simulacrum is often defined as a copy with no original, or as Gilles Deleuze (1990) describes it,
"the simulacrum is an image without resemblance" (p. 257). Jean Baudrillard (1994) maps the
transformation from representation to simulacrum in four ‘successive phases of the image’ in
which the last is that "it has no relation to any reality whatsoever: it is its own pure simulacrum"
(SS p.6). (see mimesis, representation) Deleuze, Baudrillard, and several other theorists trace the
proliferation and succession of simulacra to the rise of hyperreality and the advent of a world that
is either partially or entirely simulated. Fredric Jameson (1990) contends that one of the
conditions of late capitalism is the mass reproduction of simulacra, creating a "world with an
unreality and a free-floating absence of 'the referent'" (p. 17). Although theorists highlight different
historical developments to explain hyperreality, common themes include the explosion of new
media technologies, the loss of the materiality of objects, the increase in information production,
the rise of capitalism and consumerism, and the reliance upon god and/or ‘the center’ in Western
thought. Essentially, certain historical contingencies allow for the wide scale reproduction of
simulacra so that the simulations of reality replace the real, producing a giant simulacrum
completely disconnected from an earlier reality; this simulacrum is hyperreality.

One of the fundamental qualities of hyperreality is the implosion of Ferdinand de Saussure’s (1959) model of the
sign (see semiotics) (p. 67). The mass simulacrum of signs becomes meaningless, functioning as
groundless, hollow indicators that self-replicate in endless reproduction. Saussure outlines the
nature of the sign as the signified (a concept of the real) and the signifier (a sound-image).
Baudrillard (1981) claims the Saussurian model is made arbitrary by the advent of hyperreality
wherein the two poles of the signified and signifier implode upon each other, destroying meaning and
causing all signs to become unhinged and point back to a non-existent reality (p. 180). Another basic
characteristic of the hyperreal is the dislocation of object materiality and concrete spatial relations
(see objecthood). Some of these problems are explored in Paul Virilio’s The Lost Dimension, in
which he argues that modern media technologies have created a "crisis of representation" where the
distinctions between near and far, object and image, have imploded (p. 112). Virilio locates the
‘vacuum of speed’ as the historical development that produces technologies that overturn our
original understanding of spatial relations by altering our perceptions. This machinery "gives way
to the televised instantaneity of a prospective observation, of a glance that pierces through the
appearances of the greatest distances and the widest expanses" (p. 31). These ethereal qualities
of hyperreality demand a drastic revision of media theory surrounding the spectacle. This theory was
famously articulated by Guy Debord (1977) who argued through neo-Marxian criticism that the
spectacle has become central to capitalist modes of reproduction (p. 24). Hans Magnus Enzensberger also
attempted his own ‘socialist theory of the media’ and proposed theories of domination and
potential resistance based on a liberal/Marxist critique (1996). Yet, the world of hyperreality
overturns any hope of a Marxist understanding of mass media, for the entire web of human
meaning-making activities has been transformed into the symbolic exchange of empty signs; the
modes of production have been liquefied and leukemized into the giant political economy of
exchanging signs. Steven Best and Douglas Kellner present the hyperrealist argument against
Debord and his colleagues: "this is not to say that 'representation' has simply become more
indirect or oblique, as Debord would have it, but that in a world where the subject/object distance
is erased […] and where signs no longer refer beyond themselves to an existing, knowable world,
representation has been surpassed […] an independent object world is assimilated to and defined
by artificial codes and simulation models" (DBT, web).

The system of monetary exchange is an
example of the hyperreal that should help to prevent any definitional confusion. Traditional
explanations of the history of money will return to earlier societies in which people traded goods
and tools that presumably had similar amounts of labor invested within their
production/acquisition. At some point, a common good was substituted as a ground for exchange,
and then later pecuniary units were produced in order to simulate the common exchange. At first
the monetary units had inherent value in that they were made of precious metals, but they were
eventually replaced with worthless paper units, and many contemporary economies are now
substituting these papers for credit information stored in computer databanks. During the process
of countless successive copies the essential reality of exchange has long since been lost, with
commodities now completely disconnected from their use value, their production cost, and even
their function. Moreover, the foundational lie of exchange has long since been forgotten under the
weight of countless simulacra: that there was never any trade grounded in reality, and that symbolic
exchange is precisely that which can refer only to other signs for meaning and definition.

The next important intersection between the theory of hyperreality and media studies is
performativity. Although the problem of performance is not one unique to modernity, it does seem
as though it has been exacerbated in the hyperrealist environment with the proliferation of
identities and recognizable performative actions. Social performance is a copy that instantaneously
reproduces itself by being viewed and is thus disseminated to others, who will potentially incorporate the
performative action into their own technologies of the self. Jeffrey T. Nealon, in his book Alterity Politics,
interprets the work of Butler and Derrida to argue that basic performances underlie all social
agency, "agency is necessarily a matter of response to already given codes" (p. 23). But where are
the originals, the carved wooden blocks that produced so many performative copies? The
‘originals’ are constantly referenced through discursive performance, mostly as ‘human nature’ or
some equivalent concept. Performances based on gender, race, sexuality, ethnicity, and a number
of smaller modes of action consistently refer back to a fabricated biological essence, a ‘truth’ of the
body. Yet as my own performance in this course revealed, gender (and by implication the entire
structure of human nature) is entirely performative, lacking any grounding in biological or otherwise
human essence. My ability simply to change the gender of my everyday performance elucidated
the lack of any biological grounding for gender or sex, and illuminated all social performance as
media simulacrum.

The role of performance within mass media must thus be studied in the following two ways: first, as something reproduced among wide-scale audiences, and second, as a forged
‘unreality’ that implies the ‘realness’ of everyday performance. The first form of analysis is
obvious: commonly portrayed performances of categories such as race or gender normalize those modes of
behavior and train audiences to take on, improve, and master those performative identities, thus
replicating the simulacra. Umberto Eco (1983) touches on this aspect of simulations in his book
Travels in Hyperreality, where he notes that the simulacrum not only produces illusion, but
"stimulates demand for it" (p. 44). In the second instance of media criticism, Baudrillard’s
metaphor of Disneyland should be employed: the constructed realm of fantasy exists to imply
that the rest of the world is real (1994, p. 12). The obviously unreal performances of characters in
television and movies should be examined in light of their significant role in persuading
populations that their own social performances are ‘real,’ and providing the most foundational
‘other’ to stabilize all identities.

Deleuze helps to connect hyperreality to another strain of media
theory originating in one of the oldest known media theorists, Plato. Suspicion of media
technologies is not a uniquely modern phenomenon; indeed, Plato advanced a critique of the
written word through the dialogue of Socrates in the Phaedrus (quite similar to that of Baudrillard in
CPS). Plato, in his Allegory of the Cave, posits the existence of truth in ideal forms, accessible
not in reality but through the philosopher’s ideas and intellectual pursuit of the forms. Plato
presents a clear understanding of simulations in the allegory; although he concedes that any artistic
reproduction of ideal forms would constitute representation, he is clear that it entails the copy of
an original, true form. Deleuze argues that Plato contrasts these legitimate copies to fearful
simulacra, "Plato divides in two the domain of images-idols: on one hand there are copies-icons, on
the other there are simulacra-phantasms" (p. 256). It is thus that Deleuze is able to claim that with
the arrival of hyperreality Platonism has been reversed, for any original truth or ideal forms that
provided the anchor for representation have since been permanently lost in the reproduction of
simulacra and the construction of a hyperreality without any connection to the real.

The role of resistance in relation to hyperreality differs greatly among theorists. Some thinkers are fairly
optimistic: Marshall McLuhan, for example, portrays media technologies as a generally benign force,
expanding and evolving toward a society with great communicative potential. This interpretation
directly clashes with Baudrillard, who sees the mass media as inherently non-communicative, a
quality that allows them to exert social control over mass populations. In his earlier work
Baudrillard’s proposal for resistance is radical but clear: obliterate the transmitters, destroy the
world of media technologies through revolutionary action and resume normal face to face
conversation (1981: p. 170). Yet in his later work, Baudrillard borders more on nihilism, with the
closest articulation of resistance being his advocacy of mass indifference to simulacra (IL 1994: 60-
61). Eco is far more hopeful about the possibilities for resistance. Eco, in a move theoretically
similar to Enzensberger, advocates what he calls the guerrilla solution, modeled on the metaphor
of guerrilla resistance; he claims that revolutionaries and critical theorists can use grassroots
television programming to spread their subversive message (pp. 142-143). My own performance
proposed a strategy of resistance adopted from the work of Judith Butler, to reverse certain
performative signs in a subversive manner around the body so as to expose, reveal, and de-
familiarize specific media technologies: to dress in drag in order to denaturalize simulated norms
of sex and gender.

The conceptual use of hyperreality is consistent enough within the literature to
give space for a common working definition for media theory, but the contrasting term ‘reality’ is
used in far too many divergent ways to arrive at a unified understanding. However, it may be
helpful for readers to conclude this article with a few brief theories of reality as a starting point for
further study. For Lacan, the term real is defined in opposition to that which is encompassed by
the symbolic and the imaginary (see symbolic, real, imaginary). The real is what eludes
representation, what cannot be either symbolized (in terms of Saussure’s notion of signifiers) or
imagined and perceived within the images of the conscious and unconscious (Sheridan 1978: p.
280). Gilles Deleuze and Félix Guattari (1983) understand desire to be based upon the lack of the
object, yet nonetheless a productive force that renders into reality the fantasy of that object.
‘Reality’ is thus nothing more than a "group fantasy" reified by ‘desiring-machines,’ for "desire
produces reality, or stated another way, desiring-production is one and the same thing as social
production" (p. 30). For a definition of reality in contrast to hyperreality, Baudrillard represents
many of the hyperrealists with his claim that the real is "fictional," a phantasy generated by
"doubling the signs of an unlocatable reality" (1994: p. 81). Baudrillard concludes on reality that it
is nothing more than a fairy tale, it is "now impossible to isolate the process of the real, or to prove
the real" (1994: p. 21). Nicholas Oberly Winter 2003

time, space

The first definition of "time" in the Oxford English Dictionary is "a space or extent of time" (OED).
The first definition of "space" is "denoting time or duration" (OED). These circular definitions
demonstrate the congruity between time and space as concepts. While long related through
motion (cf. movement), the congruity of "time" and "space" reaches its scientific apotheosis in the
early twentieth century with the single concept of "space-time" in physics and mathematics.
Before Albert Einstein and Hermann Minkowski conceived of "space-time," time and space were
aligned as separate but interdependent media. Time and space in and of themselves are media in
that they fulfill the Oxford English Dictionary's definition of "medium" as "pervading or enveloping
substance; the substance or 'element' in which an organism lives; hence ... one's environment,
conditions of life" (OED 4b). Time and space are also elements that fundamentally determine and
affect multiple forms of media. Conversely, media transform the human experience and
perception of time and space. In The Art of Memory, Frances Yates elucidates a classical example
of the perceptual effects of time and space through media. The medium of memory utilizes both
time and space. "Artificial memory," a mnemonic technology, was used to remember a speech as
it unfolded in time through the medium of architecture. [see memory (2)] The speech was mapped
onto specific familiar places or "loci" through which the orator navigated in his or her mind. In
"artificial memory," the temporal and the spatial were inextricable. The Enlightenment ushered in
new concepts of time and space to match its championing of scientific thought, empiricism, and
rationality. The work of French mathematician and philosopher René Descartes (1596-1650) posits
understanding as existing in the mind alone and casts doubt on experience produced through
corporeal sense perception. Descartes' conception of time and space reflects this primacy of the
mind. Descartes maintains that space is infinite and unlimited, and that time is the means with
which the human mind accounts for duration (Trusted 69-70). Scientist Isaac Newton's (1642-
1727) theories of mechanics are predicated on the ideals of absolute space and time. Due to
instabilities in the earth's movement, human beings necessarily depend on "relative time,"
although an absolute time outside of this relativity exists. Likewise, absolute space exists because,
while objects may be moved in relation to each other, space itself cannot be moved (Trusted 98-
100). In 1766, German writer Gotthold Ephraim Lessing's Laocoön applied the Enlightenment
sensibilities of Newton and Descartes to media theory. Lessing states that pictorial representation
(see drawing and painting) should strive for spatial purity and that poetry must represent time, or
the changing moment. Lessing's distinction between media as privileged representations of either
space or time, and these divisions of time and space as theoretical absolutes in the Enlightenment
project, were undermined in the early twentieth century by Albert Einstein's theory of relativity of
1905 and 1916, and Hermann Minkowski's formulation of a non-Euclidean geometric space.
According to Einstein, space and time are relative to the observer. They are continuous only within
the coordinate system in which they are operating. Therefore, multiple types of "space-time" are
conceptually possible outside of the "space-time" experience of human beings (Nerlich 1-11). In
1908, the German mathematician Hermann Minkowski presented Space and Time, a text that posited a four-
dimensional space, with time as the fourth dimension. Minkowski was a crucial influence on
Einstein's revision of the theory of relativity and the further definition of his own concept of "space-
time" as a single entity rather than two separate entities (Nerlich 1-11). Einstein's and Minkowski's
conceptions of space radically transformed the media of visual arts and literature. As Linda
Dalrymple Henderson notes in The Fourth Dimension and Non-Euclidean Geometry in Modern Art,
modernist movements in the visual arts became concerned with the representation of four-
dimensional spatial realities as determined by time. From Cubism, which still maintained a
figurative approach, to the abstractions of Russian suprematism and De Stijl, modern art
attempted to depict and form changing spatial and temporal realities. Likewise, as Joseph Frank
proposes in The Idea of Spatial Form, modernist writers intend their work to be read as a distinct
moment in time as opposed to a chain of events. Lessing's categories had switched sides: the
pictorial arts now took up the representation of time, and literature the representation of space. In the late 19th and early
20th century, the development of the film medium was another radical influence on changing
perceptions of space and time. Film produced the movement through time of photographic
recordings of space. French film theorist André Bazin's What is Cinema? argues for an utter
realism in cinema justified by film's unique ability to represent time itself. "The cinema is
objectivity in time ... Now, for the first time, the image of things is likewise the image of their
duration, change mummified as it were" (Bazin 14-15). In his book Gramophone, Film, Typewriter,
Friedrich Kittler equates this realization of images moving through time with Lacan's imaginary,
while the materiality of the film stock itself and each individual frame belongs to the Lacanian real
(Kittler 122). (see symbolic-imaginary-real) Film's projection produces an illusion of spatial depth
and movement through time in the viewer. Like perspective in painting and composition within the
photograph, the two-dimensional image on the screen is made perceptually three-dimensional and
identifiable through the placement of objects and light patterns (Bordwell and Thompson 176). In
the 1970s, French film theorists such as Christian Metz and Jean-Louis Baudry likewise associated
film with the Lacanian imaginary. These 'apparatus theories' compared the sense of
empowerment that emerges in Lacan's mirror stage with the illusions of control over space and
time which the cinema, particularly classical Hollywood cinema, produced. Other so-called "time-
based" media that construct similar illusions of three-dimensional space through different
technologies such as television's cathode ray tube, video's tape encoding, and digital video's
digital codes have emerged since the development of film (although pre-filmic media such as
writing are also dependent on movement through time). In addition, computer technologies and
the common use of personal computers since the 1990s have effected further change in the
presentation of time and space in media. N. Katherine Hayles points out in her essay "The
Condition of Virtuality" that time lags produce the effects of three-dimensional space on the
computer screen. Hayles writes, "three-dimensional representations take many more [time] cycles
[which are determined by the computer's internal clock] than do two-dimensional maps. (see
virtuality) Hence the user experiences the sensory richness of three-dimensional topography as a
lag in the flow of the computer's response" (90). Interestingly, three-dimensional spatial
experience, called into question by certain movements in modern art, remains the dominant form
of spatial replication in cinema, television, video, and computers. The sense of control over time
and space in cinema examined by French apparatus theory seems eminently applicable to the
three-dimensional worlds of computer video games in which the player controls the movements of an
avatar through space and time as simulated on the television monitor or computer screen. In
addition to the perceptual accommodations to space and time in media such as television, film,
and computers, theorists of the 'postmodern,' the period of the late twentieth and early twenty-
first centuries, postulate that experiences of time and space outside individual media have become
increasingly compressed. In The Condition of Postmodernity, geographer David Harvey argues that
a "time-space compression" initiated through social factors such as economic globalization, the
quick dispersal of information through communications networks, and the accelerated pace of
consumption dialectically produces and is produced by a "postmodern" sensibility with deep socio-
economic roots. In Understanding Media, Marshall McLuhan describes a similar process as
"implosion" as people are more closely unified through networks in the electronic age. This
unification through implosion, for McLuhan, produces a positive sensory connection that allows for
a "global village" to emerge. "The immediate prospect for literate, fragmented Western man
encountering the electric implosion within his own culture is his steady and rapid transformation
into a complex and depth-structured person emotionally aware of his total interdependence with
the rest of human society" (McLuhan 50). Harvey, on the other hand, is less sanguine about our
ability to adapt to postmodern shifts in time and space. "There is an omnipresent danger that our
mental maps will not match current realities" (Harvey 305). Media open and shape our experiences
of time and space. Experientially, if not literally, human beings can operate on radically different
time-space coordinates through media from painting to writing to film. Media also shape time and
space experienced in "daily life" from forming space through architectural intervention to
standardizing time to enable trade and communication. Theories of time and space, whatever
their diagnosis, must account for the radical physical restructuring of time and space which has
taken place over the last century. Technological developments in communications media (from
the telegraph to the telephone to email), travel (the airplane), and the dissemination of information
(television to the internet) are perceptually reducing and conflating the lived experience of time
and space, as discussed by Harvey and McLuhan. Transformations in the perceptions of time and
space form an understanding of the world. The purity of poetry and painting as posited by Lessing
was an Enlightenment ideal, just as the attempt to represent the fourth dimension of time was a
modernist goal. An understanding of time and space in the late-twentieth, early-twenty-first
century is determined by the media which structure and augment lived space and time. The
French philosopher Henri Bergson, in the early twentieth century, formulated the notion that time
is always in a state of flux, or becoming, through changes in space, and that fixed concepts of
being are patently false. "Thanks to the third dimension of space, all the images making up the
past and future are ... not laid out with respect to one another like frames on a roll of film ... But let
us not forget that all motion is reciprocal or relative: if we perceive them coming towards us, it is
also true to say that we are going towards them" (Bergson 142). Absolute time and space are
historical products as are the concepts of "time-space compression" and "implosion," but their
mediated and mediating reality is nonetheless crucial to the formation of lived experience.

Laurel Harris, Winter 2003

discourse

The prevailing sense of "discourse" is defined by the OED as "A spoken or written treatment of a
subject, in which it is handled or discussed at length; a dissertation, treatise, homily, sermon, or
the like." While previous, archaic definitions of discourse have been "process or succession of time,
events, actions, etc." or "the act of understanding," discourse is most simply understood today as a
sort of unit of language organized around a particular subject matter and meaning. This can be
contrasted to other ways in which language has been broken down into much smaller units of
analysis, such as into individual words or sentences in studies of semantics and syntax.
Furthermore, as opposed to the linguistic conception of language as a generally stable, unified,
abstract symbolic system, discourse denotes real manifestations of language--actual speech or writing.
In addition, the idea of discourse often signifies a particular awareness of social influences on the
use of language. It is therefore important to distinguish between discourse and the Saussurean
concept of the parole as a real manifestation of language (Saussure, 11-17). Saussure's distinction
between langue and parole is as follows: langue is a linguistic system or code which is prior to the actual
use of language and which is stable, homogenous and equally accessible to all members of a
linguistic community. Parole is what is actually spoken or written, and varies according to
individual choice. Thus while discourse is also what is actually spoken or written, it differs from
parole in that it is used to denote manifestations of language that are determined by social
influences from society as a whole, rather than by individual agency. Because the form that
discourse takes cannot be solely the product of individual choice, the word entails a meaningful
ambiguity between generality and specificity (Fairclough, 24). Discourse can refer either to what is
conventionally said or written in a general context, or to what is said or written on a particular
occasion of that context. One example of discourse in our culture is one that posits that being cold
and wet can cause a person to develop a cold--a belief which doctors reject as unscientific. Yet if I
specifically were to say, "I have a cold because I got caught in the rain last night," this would
also be an example of discourse. My words would reflect my own particularity by stating the fact
that I, as an individual, had been caught in the rain last night--yet at the same time my words are
determined by a social commonplace. The ambiguity exists between generality and specificity
because the idea of discourse implies that the specific is also always general.

Yet while discourse
most often denotes an instance of language, it is also important to note that in other frameworks,
discourse is not necessarily a linguistic phenomenon; it can also be conceptualized as inhabiting a
variety of other forms, such as visual and spatial (Fairclough, 22). For example, in his analysis of
the development of the modern penal system Foucault cites the medical and juridical discourse
about the necessity of rehabilitating criminals--but he also cites the actual structure of prisons,
designed to maximize surveillance, as contributing to the discourse of this conceptualization of
criminality (Foucault, 1975, 233-9).

The idea of discourse thus emphasizes that language is a social
and communal practice, never external to or prior to society (as some conceptualizations of
linguistics, such as Saussure's, may seem to assume). In semiotics, one way to conceptualize
discourse, then, is to see it as a reflection of its particular context in a particular part of society.
According to linguist Michael Halliday, discourse is "a unit of language larger than a sentence and
which is firmly rooted in a specific context. There are many different types of discourse under this
heading, such as academic discourse, legal discourse, media discourse, etc. Each discourse type
possesses its own characteristic linguistic features" (Martin and Ringham, 51). This definition of
discourse emphasizes the way in which social context--who is speaking, who is listening, and when
and where the instance of language occurs--determines the nature of enunciations. It is clear how
legal discourse and media discourse will demonstrate fundamentally different conventions of style,
wording, and other "linguistic features."

A more complex understanding of discourse emphasizes
that formal conventions of the mode of expression are not the only aspect of language that is
determined by the social. Underlying beliefs and worldviews, specific to the social context, are
seen to be mediated by discourse. According to the Dictionary of Semiotics, discourse, "in strictly
semiotic terms," does not refer to the literal or "narrative" level of language but to the interaction
between "the figurative dimension, relating to the representation of the natural world" and "the
thematic dimension, relating to the abstract values actualized in an utterance" (Martin and
Ringham, 51). As evident in the previous example of the discourse that states that coldness and
wetness can cause a cold, discourse entails underlying assumptions about the nature of the world
and of particular social values and beliefs.

In contemporary continental philosophy, this
understanding of discourse as the covert embodiment of social values is taken on a more critical,
political level--discourse is seen by some philosophers as a means of the legitimization of social
and political practices. The Italian Marxist Antonio Gramsci wrote about ideology as "a conception of
the world that is implicitly manifest in art, in law, in economic activity and in all manifestations of
individual and collective life" (Gramsci, 330). For him, discourse mediates ideological justifications
of the status quo that come to be accepted as "common sense." Similarly, anthropologist Pierre
Bourdieu wrote that the ultimate objective of a discourse is the "recognition of legitimacy through
the misrecognition of arbitrariness" (Bourdieu, 163). Through the proliferation of discourse, beliefs
and ideas that are actually socially and historically specific are legitimized by their seemingly
universal and natural appearance. An example of this sort of discourse might be advertising
discourse in capitalist society. Advertisements may portray luxury products as naturalized needs;
this discourse thereby reinforces a consumption-driven culture.

Using a similar theory of discourse
as ideology, Louis Althusser sees discourse as naturalizing "subject-positions," or social roles.
Althusser writes, "Like all obviousnesses, including those that make a word 'name a thing' or 'have
a meaning'... the 'obviousness' that you and I are subjects... is an ideological effect, the
elementary ideological effect" (Althusser, 171-2). If subject positions are an ideological effect, then
individuals are given social identities that are established by discourse, a discourse that at the
same time naturalizes such subject positions and conceals this very process.

The discursive
production of the subject has been theorized in other ways that do not utilize the concept of
ideology. For Foucault, discourse is a medium through which power and norms function. Foucault
describes how, in modernity, scientific discourses such as the "human sciences" which claim to
reveal human nature actually establish norms and prescribe optimum modes of conduct. These
discourses also establish ways of identifying, understanding, and managing "deviant" subjects. By
describing and categorizing individuals in detail, these discourses exert an unprecedented amount
of power over the individual's comportment and relationship to herself (Foucault, 1978, 92-114 and
1999, 39). For example, in The History of Sexuality, Foucault describes how psychological
discourses actually produced a new understanding of personhood by creating the concept of
sexuality as a fundamental marker of identity. Whereas previously, non-heterosexual acts were
simply seen as against nature, under the new discourse they became psychologically deviant,
indicative of a whole array of other psychological disturbances. The idea of the homosexual, the
invert, and the sadomasochist developed, thereby constituting a new experience of the individual
as a sexual being and, through its most minute descriptions of the meanings of sexuality, a tighter
control over the subjective experiences of individuals.

While borrowing the Foucauldian concepts of
power and the norm, Judith Butler takes a slightly different stance on the way in which discourse
produces the subject. Butler is particularly interested in the embodiment of gender--a process that
she calls performativity. Butler claims that gender identity is actually an ongoing process of "citing"
gender norms that permeate society, mediated by a heteronormative discourse that describes
masculinity and femininity as stable, natural, and mutually exclusive. In fact, a gender identity only
seems to naturally emanate from the subject, while what is actually occurring is an ongoing
reiteration and performance of gendered comportment that never fully achieves the gender ideal. If
one fails even to approximate gender norms, one fails to be socially recognized as fully human. For
Butler, discourse actually demarcates the necessary conditions for the embodiment of personhood
(Butler, 171-180).

Understood as a medium, then, discourse functions as a powerful tool through
which linguistic conventions, social and political beliefs and practices, ideologies, subject positions,
and norms can all be mediated. Yet as we have seen, discourse does not simply serve as a
connecting link between a stable, exterior society and the individual. All of these social values
emanate from individuals who enunciate a discourse that is at the same time not completely their own,
a discourse which in turn implants and reinforces the notions it contains. Discourse always consists
of both input and output, and is always at once an extension of our culture and of ourselves.

Sulin Carling

The Fixation of Belief

Charles S. Peirce

Popular Science Monthly 12 (November 1877), 1-15.

Few persons care to study logic, because everybody conceives himself to be
proficient enough in the art of reasoning already. But I observe that this
satisfaction is limited to one's own ratiocination, and does not extend to that
of other men.
We come to the full possession of our power of drawing inferences, the last
of all our faculties; for it is not so much a natural gift as a long and
difficult art. The history of its practice would make a grand subject for a
book. The medieval schoolman, following the Romans, made logic the
earliest of a boy's studies after grammar, as being very easy. So it was as
they understood it. Its fundamental principle, according to them, was, that all
knowledge rests either on authority or reason; but that whatever is deduced
by reason depends ultimately on a premiss derived from authority.
Accordingly, as soon as a boy was perfect in the syllogistic procedure, his
intellectual kit of tools was held to be complete.

To Roger Bacon, that remarkable mind who in the middle of the thirteenth
century was almost a scientific man, the schoolmen's conception of
reasoning appeared only an obstacle to truth. He saw that experience alone
teaches anything -- a proposition which to us seems easy to understand,
because a distinct conception of experience has been handed down to us
from former generations; which to him likewise seemed perfectly clear,
because its difficulties had not yet unfolded themselves. Of all kinds of
experience, the best, he thought, was interior illumination, which teaches
many things about Nature which the external senses could never discover,
such as the transubstantiation of bread.

Four centuries later, the more celebrated Bacon, in the first book of his
Novum Organum, gave his clear account of experience as something which
must be open to verification and reexamination. But, superior as Lord
Bacon's conception is to earlier notions, a modern reader who is not in awe
of his grandiloquence is chiefly struck by the inadequacy of his view of
scientific procedure. That we have only to make some crude experiments, to
draw up briefs of the results in certain blank forms, to go through these by
rule, checking off everything disproved and setting down the alternatives,
and that thus in a few years physical science would be finished up -- what
an idea! "He wrote on science like a Lord Chancellor," indeed, as Harvey, a
genuine man of science said.

The early scientists, Copernicus, Tycho Brahe, Kepler, Galileo, Harvey, and
Gilbert, had methods more like those of their modern brethren. Kepler undertook to
draw a curve through the places of Mars; and to state the times occupied by the planet in
describing the different parts of that curve; but perhaps his greatest service
to science was in impressing on men's minds that this was the thing to be
done if they wished to improve astronomy; that they were not to content
themselves with inquiring whether one system of epicycles was better than
another but that they were to sit down to the figures and find out what the
curve, in truth, was. He accomplished this by his incomparable energy and
courage, blundering along in the most inconceivable way (to us), from one
irrational hypothesis to another, until, after trying twenty-two of these, he
fell, by the mere exhaustion of his invention, upon the orbit which a mind
well furnished with the weapons of modern logic would have tried almost at
the outset.

In the same way, every work of science great enough to be well
remembered for a few generations affords some exemplification of the
defective state of the art of reasoning of the time when it was written; and
each chief step in science has been a lesson in logic. It was so when
Lavoisier and his contemporaries took up the study of Chemistry. The old
chemist's maxim had been, "Lege, lege, lege, labora, ora, et relege."
Lavoisier's method was not to read and pray, but to dream that some long
and complicated chemical process would have a certain effect, to put it into
practice with dull patience, after its inevitable failure, to dream that with
some modification it would have another result, and to end by publishing
the last dream as a fact: his way was to carry his mind into his laboratory,
and literally to make of his alembics and cucurbits instruments of thought,
giving a new conception of reasoning as something which was to be done
with one's eyes open, in manipulating real things instead of words and
fancies.

The Darwinian controversy is, in large part, a question of logic. Mr. Darwin
proposed to apply the statistical method to biology. The same thing has been
done in a widely different branch of science, the theory of gases. Though
unable to say what the movements of any particular molecule of gas would
be on a certain hypothesis regarding the constitution of this class of bodies,
Clausius and Maxwell were yet able, eight years before the publication of
Darwin's immortal work, by the application of the doctrine of probabilities,
to predict that in the long run such and such a proportion of the molecules
would, under given circumstances, acquire such and such velocities; that
there would take place, every second, such and such a relative number of
collisions, etc.; and from these propositions were able to deduce certain
properties of gases, especially in regard to their heat-relations. In like
manner, Darwin, while unable to say what the operation of variation and
natural selection in any individual case will be, demonstrates that in the long
run they will, or would, adapt animals to their circumstances. Whether or
not existing animal forms are due to such action, or what position the theory
ought to take, forms the subject of a discussion in which questions of fact
and questions of logic are curiously interlaced.

II
The object of reasoning is to find out, from the consideration of what we
already know, something else which we do not know. Consequently,
reasoning is good if it be such as to give a true conclusion from true
premisses, and not otherwise. Thus, the question of validity is purely one of
fact and not of thinking. A being the facts stated in the premisses and B
being that concluded, the question is, whether these facts are really so
related that if A were, B would generally be. If so, the inference is valid; if
not, not. It is not in the least the question whether, when the premisses are
accepted by the mind, we feel an impulse to accept the conclusion also. It is
true that we do generally reason correctly by nature. But that is an accident;
the true conclusion would remain true if we had no impulse to accept it;
and the false one would remain false, though we could not resist the
tendency to believe in it.

We are, doubtless, in the main logical animals, but we are not perfectly so.
Most of us, for example, are naturally more sanguine and hopeful than logic
would justify. We seem to be so constituted that in the absence of any facts
to go upon we are happy and self-satisfied; so that the effect of experience
is continually to contract our hopes and aspirations. Yet a lifetime of the
application of this corrective does not usually eradicate our sanguine
disposition. Where hope is unchecked by any experience, it is likely that our
optimism is extravagant. Logicality in regard to practical matters (if this be
understood, not in the old sense, but as consisting in a wise union of
security with fruitfulness of reasoning) is the most useful quality an animal
can possess, and might, therefore, result from the action of natural selection;
but outside of these it is probably of more advantage to the animal to have
his mind filled with pleasing and encouraging visions, independently of their
truth; and thus, upon unpractical subjects, natural selection might occasion a
fallacious tendency of thought.

That which determines us, from given premisses, to draw one inference
rather than another, is some habit of mind, whether it be constitutional or
acquired. The habit is good or otherwise, according as it produces true
conclusions from true premisses or not; and an inference is regarded as valid
or not, without reference to the truth or falsity of its conclusion specially,
but according as the habit which determines it is such as to produce true
conclusions in general or not. The particular habit of mind which governs
this or that inference may be formulated in a proposition whose truth
depends on the validity of the inferences which the habit determines; and
such a formula is called a guiding principle of inference. Suppose, for
example, that we observe that a rotating disk of copper quickly comes to
rest when placed between the poles of a magnet, and we infer that this will
happen with every disk of copper. The guiding principle is, that what is true
of one piece of copper is true of another. Such a guiding principle with
regard to copper would be much safer than with regard to many other
substances -- brass, for example.
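Peirce's criterion, that a habit of inference is good according as it produces true conclusions in general, can also be put in rough quantitative terms. The sketch below is a hypothetical illustration, not drawn from the text; the population figures for the "mixed" class are assumed for the sake of the example. It scores the guiding principle "what is true of one specimen is true of another" by the long-run rate at which it yields true conclusions, once for uniform copper and once for a more variable class of substances.

    # Hypothetical illustration: score a guiding principle of inference by the
    # long-run frequency with which it yields true conclusions.
    import random

    def long_run_truth_rate(population, trials=100_000, seed=0):
        """Observe one member of `population`, infer that a second member shares
        the same property, and return the fraction of correct inferences."""
        rng = random.Random(seed)
        correct = 0
        for _ in range(trials):
            observed = rng.choice(population)
            actual = rng.choice(population)
            correct += observed == actual
        return correct / trials

    # Copper disks all stop between the poles of a magnet, so generalizing from
    # one disk to another almost never misleads; a mixed, more variable class of
    # substances (figures assumed) makes the same habit far less safe.
    copper_stops = [True] * 100
    mixed_substances = [True] * 60 + [False] * 40

    print(long_run_truth_rate(copper_stops))      # 1.0
    print(long_run_truth_rate(mixed_substances))  # about 0.52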

A book might be written to signalize all the most important of these guiding
principles of reasoning. It would probably be, we must confess, of no
service to a person whose thought is directed wholly to practical subjects,
and whose activity moves along thoroughly-beaten paths. The problems that
present themselves to such a mind are matters of routine which he has
learned once for all to handle in learning his business. But let a man
venture into an unfamiliar field, or where his results are not continually
checked by experience, and all history shows that the most masculine
intellect will ofttimes lose his orientation and waste his efforts in directions
which bring him no nearer to his goal, or even carry him entirely astray. He
is like a ship in the open sea, with no one on board who understands the
rules of navigation. And in such a case some general study of the guiding
principles of reasoning would be sure to be found useful.

The subject could hardly be treated, however, without being first limited;
since almost any fact may serve as a guiding principle. But it so happens
that there exists a division among facts, such that in one class are all those
which are absolutely essential as guiding principles, while in the others are
all which have any other interest as objects of research. This division is
between those which are necessarily taken for granted in asking why a
certain conclusion is thought to follow from certain premisses, and those
which are not implied in such a question. A moment's thought will show
that a variety of facts are already assumed when the logical question is first
asked. It is implied, for instance, that there are such states of mind as doubt
and belief -- that a passage from one to the other is possible, the object of
thought remaining the same, and that this transition is subject to some rules
by which all minds are alike bound. As these are facts which we must
already know before we can have any clear conception of reasoning at all, it
cannot be supposed to be any longer of much interest to inquire into their
truth or falsity. On the other hand, it is easy to believe that those rules of
reasoning which are deduced from the very idea of the process are the ones
which are the most essential; and, indeed, that so long as it conforms to
these it will, at least, not lead to false conclusions from true premisses. In
point of fact, the importance of what may be deduced from the assumptions
involved in the logical question turns out to be greater than might be
supposed, and this for reasons which it is difficult to exhibit at the outset.
The only one which I shall here mention is, that conceptions which are
really products of logical reflection, without being readily seen to be so,
mingle with our ordinary thoughts, and are frequently the causes of great
confusion. This is the case, for example, with the conception of quality. A
quality, as such, is never an object of observation. We can see that a thing
is blue or green, but the quality of being blue and the quality of being green
are not things which we see; they are products of logical reflections. The
truth is, that common-sense, or thought as it first emerges above the level of
the narrowly practical, is deeply imbued with that bad logical quality to
which the epithet metaphysical is commonly applied; and nothing can clear
it up but a severe course of logic.

III

We generally know when we wish to ask a question and when we wish to pronounce a judgment, for there is a dissimilarity between the sensation of
doubting and that of believing.

But this is not all which distinguishes doubt from belief. There is a practical
difference. Our beliefs guide our desires and shape our actions. The
Assassins, or followers of the Old Man of the Mountain, used to rush into
death at his least command, because they believed that obedience to him
would insure everlasting felicity. Had they doubted this, they would not
have acted as they did. So it is with every belief, according to its degree.
The feeling of believing is a more or less sure indication of there being
established in our nature some habit which will determine our actions.
Doubt never has such an effect.

Nor must we overlook a third point of difference. Doubt is an uneasy and dissatisfied
state from which we struggle to free ourselves and pass into the state of belief; while the latter is a calm and
satisfactory state which we do not wish to avoid, or to change to a belief in anything else. On the contrary,
we cling tenaciously, not merely to believing, but to believing just what we
do believe.

Thus, both doubt and belief have positive effects upon us, though very
different ones. Belief does not make us act at once, but puts us into such a
condition that we shall behave in some certain way, when the occasion
arises. Doubt has not the least such active effect, but stimulates us to
inquiry until it is destroyed. This reminds us of the irritation of a nerve and
the reflex action produced thereby; while for the analogue of belief, in the
nervous system, we must look to what are called nervous associations -- for
example, to that habit of the nerves in consequence of which the smell of a
peach will make the mouth water.

IV

The irritation of doubt causes a struggle to attain a state of belief. I shall term this struggle inquiry, though it must be admitted that this is sometimes
not a very apt designation.

The irritation of doubt is the only immediate motive for the struggle to
attain belief. It is certainly best for us that our beliefs should be such as
may truly guide our actions so as to satisfy our desires; and this reflection
will make us reject every belief which does not seem to have been so
formed as to insure this result. But it will only do so by creating a doubt in
the place of that belief. With the doubt, therefore, the struggle begins, and
with the cessation of doubt it ends. Hence, the sole object of inquiry is the
settlement of opinion. We may fancy that this is not enough for us, and that
we seek, not merely an opinion, but a true opinion. But put this fancy to the
test, and it proves groundless; for as soon as a firm belief is reached we are
entirely satisfied, whether the belief be true or false. And it is clear that
nothing out of the sphere of our knowledge can be our object, for nothing
which does not affect the mind can be the motive for mental effort. The
most that can be maintained is, that we seek for a belief that we shall think
to be true. But we think each one of our beliefs to be true, and, indeed, it is
mere tautology to say so.

That the settlement of opinion is the sole end of inquiry is a very important
proposition. It sweeps away, at once, various vague and erroneous
conceptions of proof. A few of these may be noticed here.

1. Some philosophers have imagined that to start an inquiry it was only necessary to utter a question whether orally or by setting it down upon
paper, and have even recommended us to begin our studies with questioning
everything! But the mere putting of a proposition into the interrogative form
does not stimulate the mind to any struggle after belief. There must be a
real and living doubt, and without this all discussion is idle.

2. It is a very common idea that a demonstration must rest on some ultimate and absolutely indubitable propositions. These, according to one school, are
first principles of a general nature; according to another, are first sensations.
But, in point of fact, an inquiry, to have that completely satisfactory result
called demonstration, has only to start with propositions perfectly free from
all actual doubt. If the premisses are not in fact doubted at all, they cannot
be more satisfactory than they are.

3. Some people seem to love to argue a point after all the world is fully
convinced of it. But no further advance can be made. When doubt ceases,
mental action on the subject comes to an end; and, if it did go on, it would
be without a purpose.

If the settlement of opinion is the sole object of inquiry, and if belief is of the nature of a habit, why should we not attain the desired end, by taking as
answer to a question any we may fancy, and constantly reiterating it to
ourselves, dwelling on all which may conduce to that belief, and learning to
turn with contempt and hatred from anything that might disturb it? This
simple and direct method is really pursued by many men. I remember once
being entreated not to read a certain newspaper lest it might change my
opinion upon free-trade. "Lest I might be entrapped by its fallacies and
misstatements," was the form of expression. "You are not," my friend said,
"a special student of political economy. You might, therefore, easily be
deceived by fallacious arguments upon the subject. You might, then, if you
read this paper, be led to believe in protection. But you admit that free-trade
is the true doctrine; and you do not wish to believe what is not true." I have
often known this system to be deliberately adopted. Still oftener, the
instinctive dislike of an undecided state of mind, exaggerated into a vague
dread of doubt, makes men cling spasmodically to the views they already
take. The man feels that, if he only holds to his belief without wavering, it
will be entirely satisfactory. Nor can it be denied that a steady and
immovable faith yields great peace of mind. It may, indeed, give rise to
inconveniences, as if a man should resolutely continue to believe that fire
would not burn him, or that he would be eternally damned if he received
his ingesta otherwise than through a stomach-pump. But then the man who
adopts this method will not allow that its inconveniences are greater than its
advantages. He will say, "I hold steadfastly to the truth, and the truth is
always wholesome." And in many cases it may very well be that the
pleasure he derives from his calm faith overbalances any inconveniences
resulting from its deceptive character. Thus, if it be true that death is
annihilation, then the man who believes that he will certainly go straight to
heaven when he dies, provided he have fulfilled certain simple observances
in this life, has a cheap pleasure which will not be followed by the least
disappointment. A similar consideration seems to have weight with many
persons in religious topics, for we frequently hear it said, "Oh, I could not
believe so-and-so, because I should be wretched if I did." When an ostrich
buries its head in the sand as danger approaches, it very likely takes the
happiest course. It hides the danger, and then calmly says there is no
danger; and, if it feels perfectly sure there is none, why should it raise its
head to see? A man may go through life, systematically keeping out of view
all that might cause a change in his opinions, and if he only succeeds --
basing his method, as he does, on two fundamental psychological laws -- I
do not see what can be said against his doing so. It would be an egotistical
impertinence to object that his procedure is irrational, for that only amounts
to saying that his method of settling belief is not ours. He does not propose
to himself to be rational, and, indeed, will often talk with scorn of man's
weak and illusive reason. So let him think as he pleases.

But this method of fixing belief, which may be called the method of
tenacity, will be unable to hold its ground in practice. The social impulse is
against it. The man who adopts it will find that other men think differently
from him, and it will be apt to occur to him, in some saner moment, that
their opinions are quite as good as his own, and this will shake his
confidence in his belief. This conception, that another man's thought or
sentiment may be equivalent to one's own, is a distinctly new step, and a
highly important one. It arises from an impulse too strong in man to be
suppressed, without danger of destroying the human species. Unless we
make ourselves hermits, we shall necessarily influence each other's opinions;
so that the problem becomes how to fix belief, not in the individual merely,
but in the community.

Let the will of the state act, then, instead of that of the individual. Let an
institution be created which shall have for its object to keep correct
doctrines before the attention of the people, to reiterate them perpetually,
and to teach them to the young; having at the same time power to prevent
contrary doctrines from being taught, advocated, or expressed. Let all
possible causes of a change of mind be removed from men's apprehensions.
Let them be kept ignorant, lest they should learn of some reason to think
otherwise than they do. Let their passions be enlisted, so that they may
regard private and unusual opinions with hatred and horror. Then, let all
men who reject the established belief be terrified into silence. Let the people
turn out and tar-and-feather such men, or let inquisitions be made into the
manner of thinking of suspected persons, and when they are found guilty of
forbidden beliefs, let them be subjected to some signal punishment. When
complete agreement could not otherwise be reached, a general massacre of
all who have not thought in a certain way has proved a very effective
means of settling opinion in a country. If the power to do this be wanting,
let a list of opinions be drawn up, to which no man of the least
independence of thought can assent, and let the faithful be required to
accept all these propositions, in order to segregate them as radically as
possible from the influence of the rest of the world.
This method has, from the earliest times, been one of the chief means of
upholding correct theological and political doctrines, and of preserving their
universal or catholic character. In Rome, especially, it has been practised
from the days of Numa Pompilius to those of Pius Nonus. This is the most
perfect example in history; but wherever there is a priesthood -- and no
religion has been without one -- this method has been more or less made
use of. Wherever there is an aristocracy, or a guild, or any association of a
class of men whose interests depend, or are supposed to depend, on certain
propositions, there will be inevitably found some traces of this natural
product of social feeling. Cruelties always accompany this system; and when
it is consistently carried out, they become atrocities of the most horrible
kind in the eyes of any rational man. Nor should this occasion surprise, for
the officer of a society does not feel justified in surrendering the interests of
that society for the sake of mercy, as he might his own private interests. It
is natural, therefore, that sympathy and fellowship should thus produce a
most ruthless power.

In judging this method of fixing belief, which may be called the method of
authority, we must, in the first place, allow its immeasurable mental and
moral superiority to the method of tenacity. Its success is proportionately
greater; and, in fact, it has over and over again worked the most majestic
results. The mere structures of stone which it has caused to be put together
-- in Siam, for example, in Egypt, and in Europe -- have many of them a
sublimity hardly more than rivaled by the greatest works of Nature. And,
except the geological epochs, there are no periods of time so vast as those
which are measured by some of these organized faiths. If we scrutinize the
matter closely, we shall find that there has not been one of their creeds
which has remained always the same; yet the change is so slow as to be
imperceptible during one person's life, so that individual belief remains
sensibly fixed. For the mass of mankind, then, there is perhaps no better
method than this. If it is their highest impulse to be intellectual slaves, then
slaves they ought to remain.

But no institution can undertake to regulate opinions upon every subject. Only the most important ones can be attended to, and on the rest men's
minds must be left to the action of natural causes. This imperfection will be
no source of weakness so long as men are in such a state of culture that
one opinion does not influence another -- that is, so long as they cannot put
two and two together. But in the most priest-ridden states some individuals
will be found who are raised above that condition. These men possess a
wider sort of social feeling; they see that men in other countries and in
other ages have held to very different doctrines from those which they
themselves have been brought up to believe; and they cannot help seeing
that it is the mere accident of their having been taught as they have, and of
their having been surrounded with the manners and associations they have,
that has caused them to believe as they do and not far differently. Nor can
their candour resist the reflection that there is no reason to rate their own
views at a higher value than those of other nations and other centuries; thus
giving rise to doubts in their minds.

They will further perceive that such doubts as these must exist in their
minds with reference to every belief which seems to be determined by the
caprice either of themselves or of those who originated the popular opinions.
The willful adherence to a belief, and the arbitrary forcing of it upon others,
must, therefore, both be given up. A different new method of settling
opinions must be adopted, that shall not only produce an impulse to believe,
but shall also decide what proposition it is which is to be believed. Let the
action of natural preferences be unimpeded, then, and under their influence
let men, conversing together and regarding matters in different lights,
gradually develop beliefs in harmony with natural causes. This method
resembles that by which conceptions of art have been brought to maturity.
The most perfect example of it is to be found in the history of metaphysical
philosophy. Systems of this sort have not usually rested upon any observed
facts, at least not in any great degree. They have been chiefly adopted
because their fundamental propositions seemed "agreeable to reason." This is
an apt expression; it does not mean that which agrees with experience, but
that which we find ourselves inclined to believe. Plato, for example, finds it
agreeable to reason that the distances of the celestial spheres from one
another should be proportional to the different lengths of strings which
produce harmonious chords. Many philosophers have been led to their main
conclusions by considerations like this; but this is the lowest and least
developed form which the method takes, for it is clear that another man
might find Kepler's theory, that the celestial spheres are proportional to the
inscribed and circumscribed spheres of the different regular solids, more
agreeable to his reason. But the shock of opinions will soon lead men to
rest on preferences of a far more universal nature. Take, for example, the
doctrine that man only acts selfishly -- that is, from the consideration that
acting in one way will afford him more pleasure than acting in another. This
rests on no fact in the world, but it has had a wide acceptance as being the
only reasonable theory.

This method is far more intellectual and respectable from the point of view
of reason than either of the others which we have noticed. But its failure
has been the most manifest. It makes of inquiry something similar to the
development of taste; but taste, unfortunately, is always more or less a
matter of fashion, and accordingly metaphysicians have never come to any
fixed agreement, but the pendulum has swung backward and forward
between a more material and a more spiritual philosophy, from the earliest
times to the latest. And so from this, which has been called the a priori
method, we are driven, in Lord Bacon's phrase, to a true induction. We have
examined into this a priori method as something which promised to deliver
our opinions from their accidental and capricious element. But development,
while it is a process which eliminates the effect of some casual
circumstances, only magnifies that of others. This method, therefore, does
not differ in a very essential way from that of authority. The government
may not have lifted its finger to influence my convictions; I may have been
left outwardly quite free to choose, we will say, between monogamy and
polygamy, and, appealing to my conscience only, I may have concluded that
the latter practice is in itself licentious. But when I come to see that the
chief obstacle to the spread of Christianity among a people of as high
culture as the Hindoos has been a conviction of the immorality of our way
of treating women, I cannot help seeing that, though governments do not
interfere, sentiments in their development will be very greatly determined by
accidental causes. Now, there are some people, among whom I must
suppose that my reader is to be found, who, when they see that any belief
of theirs is determined by any circumstance extraneous to the facts, will
from that moment not merely admit in words that that belief is doubtful, but
will experience a real doubt of it, so that it ceases to be a belief.

To satisfy our doubts, therefore, it is necessary that a method should be found by which our beliefs may be determined by nothing human, but by
some external permanency -- by something upon which our thinking has no
effect. Some mystics imagine that they have such a method in a private
inspiration from on high. But that is only a form of the method of tenacity,
in which the conception of truth as something public is not yet developed.
Our external permanency would not be external, in our sense, if it was
restricted in its influence to one individual. It must be something which
affects, or might affect, every man. And, though these affections are
necessarily as various as are individual conditions, yet the method must be
such that the ultimate conclusion of every man shall be the same. Such is
the method of science. Its fundamental hypothesis, restated in more familiar
language, is this: There are Real things, whose characters are entirely
independent of our opinions about them; those Reals affect our senses
according to regular laws, and, though our sensations are as different as are
our relations to the objects, yet, by taking advantage of the laws of
perception, we can ascertain by reasoning how things really and truly are;
and any man, if he have sufficient experience and he reason enough about
it, will be led to the one True conclusion. The new conception here involved
is that of Reality. It may be asked how I know that there are any Reals. If
this hypothesis is the sole support of my method of inquiry, my method of
inquiry must not be used to support my hypothesis. The reply is this: 1. If
investigation cannot be regarded as proving that there are Real things, it at
least does not lead to a contrary conclusion; but the method and the
conception on which it is based remain ever in harmony. No doubts of the
method, therefore, necessarily arise from its practice, as is the case with all
the others. 2. The feeling which gives rise to any method of fixing belief is
a dissatisfaction at two repugnant propositions. But here already is a vague
concession that there is some one thing which a proposition should
represent. Nobody, therefore, can really doubt that there are Reals, for, if he
did, doubt would not be a source of dissatisfaction. The hypothesis,
therefore, is one which every mind admits. So that the social impulse does
not cause men to doubt it. 3. Everybody uses the scientific method about a
great many things, and only ceases to use it when he does not know how to
apply it. 4. Experience of the method has not led us to doubt it, but, on the
contrary, scientific investigation has had the most wonderful triumphs in the
way of settling opinion. These afford the explanation of my not doubting the
method or the hypothesis which it supposes; and not having any doubt, nor
believing that anybody else whom I could influence has, it would be the
merest babble for me to say more about it. If there be anybody with a living
doubt upon the subject, let him consider it.

To describe the method of scientific investigation is the object of this series of papers. At present I have only room to notice some points of contrast
between it and other methods of fixing belief.

This is the only one of the four methods which presents any distinction of a
right and a wrong way. If I adopt the method of tenacity, and shut myself
out from all influences, whatever I think necessary to doing this, is
necessary according to that method. So with the method of authority: the
state may try to put down heresy by means which, from a scientific point of
view, seem very ill-calculated to accomplish its purposes; but the only test
on that method is what the state thinks; so that it cannot pursue the method
wrongly. So with the a priori method. The very essence of it is to think as
one is inclined to think. All metaphysicians will be sure to do that, however
they may be inclined to judge each other to be perversely wrong. The
Hegelian system recognizes every natural tendency of thought as logical,
although it be certain to be abolished by counter-tendencies. Hegel thinks
there is a regular system in the succession of these tendencies, in
consequence of which, after drifting one way and the other for a long time,
opinion will at last go right. And it is true that metaphysicians do get the
right ideas at last; Hegel's system of Nature represents tolerably the science
of his day; and one may be sure that whatever scientific investigation shall
have put out of doubt will presently receive a priori demonstration on the
part of the metaphysicians. But with the scientific method the case is
different. I may start with known and observed facts to proceed to the
unknown; and yet the rules which I follow in doing so may not be such as
investigation would approve. The test of whether I am truly following the
method is not an immediate appeal to my feelings and purposes, but, on the
contrary, itself involves the application of the method. Hence it is that bad
reasoning as well as good reasoning is possible; and this fact is the
foundation of the practical side of logic.

It is not to be supposed that the first three methods of settling opinion present no advantage whatever over the scientific method. On the contrary,
each has some peculiar convenience of its own. The a priori method is
distinguished for its comfortable conclusions. It is the nature of the process
to adopt whatever belief we are inclined to, and there are certain flatteries to
the vanity of man which we all believe by nature, until we are awakened
from our pleasing dream by rough facts. The method of authority will
always govern the mass of mankind; and those who wield the various forms
of organized force in the state will never be convinced that dangerous
reasoning ought not to be suppressed in some way. If liberty of speech is to
be untrammeled from the grosser forms of constraint, then uniformity of
opinion will be secured by a moral terrorism to which the respectability of
society will give its thorough approval. Following the method of authority is
the path of peace. Certain non-conformities are permitted; certain others
(considered unsafe) are forbidden. These are different in different countries
and in different ages; but, wherever you are, let it be known that you
seriously hold a tabooed belief, and you may be perfectly sure of being
treated with a cruelty less brutal but more refined than hunting you like a
wolf. Thus, the greatest intellectual benefactors of mankind have never
dared, and dare not now, to utter the whole of their thought; and thus a
shade of prima facie doubt is cast upon every proposition which is
considered essential to the security of society. Singularly enough, the
persecution does not all come from without; but a man torments himself and
is oftentimes most distressed at finding himself believing propositions which
he has been brought up to regard with aversion. The peaceful and
sympathetic man will, therefore, find it hard to resist the temptation to
submit his opinions to authority. But most of all I admire the method of
tenacity for its strength, simplicity, and directness. Men who pursue it are
distinguished for their decision of character, which becomes very easy with
such a mental rule. They do not waste time in trying to make up their
minds what they want, but, fastening like lightning upon whatever
alternative comes first, they hold to it to the end, whatever happens, without
an instant's irresolution. This is one of the splendid qualities which generally
accompany brilliant, unlasting success. It is impossible not to envy the man
who can dismiss reason, although we know how it must turn out at last.

Such are the advantages which the other methods of settling opinion have
over scientific investigation. A man should consider well of them; and then
he should consider that, after all, he wishes his opinions to coincide with the
fact, and that there is no reason why the results of those three first methods
should do so. To bring about this effect is the prerogative of the method of
science. Upon such considerations he has to make his choice -- a choice
which is far more than the adoption of any intellectual opinion, which is
one of the ruling decisions of his life, to which, when once made, he is
bound to adhere. The force of habit will sometimes cause a man to hold on
to old beliefs, after he is in a condition to see that they have no sound
basis. But reflection upon the state of the case will overcome these habits,
and he ought to allow reflection its full weight. People sometimes shrink
from doing this, having an idea that beliefs are wholesome which they
cannot help feeling rest on nothing. But let such persons suppose an
analogous though different case from their own. Let them ask themselves
what they would say to a reformed Mussulman who should hesitate to give
up his old notions in regard to the relations of the sexes; or to a reformed
Catholic who should still shrink from reading the Bible. Would they not say
that these persons ought to consider the matter fully, and clearly understand
the new doctrine, and then ought to embrace it, in its entirety? But, above
all, let it be considered that what is more wholesome than any particular
belief is integrity of belief, and that to avoid looking into the support of any
belief from a fear that it may turn out rotten is quite as immoral as it is
disadvantageous. The person who confesses that there is such a thing as
truth, which is distinguished from falsehood simply by this, that if acted on
it should, on full consideration, carry us to the point we aim at and not
astray, and then, though convinced of this, dares not know the truth and
seeks to avoid it, is in a sorry state of mind indeed.

zoography

According to the OED, "zoography" refers generally to the description of animals and specifically to both the scientific
classification (zoology) and the pictorial representation of animals; the latter meaning also can stand for pictorial art in
general. [1] Taken directly from ancient Greek, its first recorded written use dates to 1593, referring to early work in scientific description and, it seems, to the known animal world in general. It was not used in writing to refer to the pictorial representation of animals until 1656. [2] The former meaning seems to have remained the same throughout its history, though the word's usage has become quite rare over the past hundred years. [3]

However, using "zoography" to refer generally to the pictorial arts has seen something of a revival in media discourse,
a usage pioneered by Derrida in Of Grammatology. In the midst of discussing Rousseau’s essay on the origin of language, Derrida digs up the archaic term zoography to emphasize a certain violence done to our perceived reality in both writing and painting, as argued by Rousseau. The following quote makes this clear:

"Here painting—zoography—betrays being and speech, words and things themselves because it freezes them. Its
offshoots seem to be living things but when one questions them, they no longer respond. Zoography has brought
death. The same goes for writing. No one, and certainly not the father, is there when one questions. Rousseau would approve without reservations. Writing carries death. One could play on this: writing as zoography as that painting of the living which stabilizes animality, is, according to Rousseau, the writing of the savages… Writing would indeed be the
pictorial representations of the hunted beast: magical capture and murder.” [4]

Though Derrida uses zoography simply to explore Rousseau’s theory of writing, which cannot be assumed to be Derrida’s own, he puts the term to unusual use, suggesting that media involves the “capture” of reality and that its
subsequent stasis outside of context equals a “death” of reality. He also makes clear Rousseau’s assertion that word
and image are both founded in this problematic mimesis, and this foundation is found in the “primitive” cave art that
uses animals as its sole subject/media of representation. It’s also worth noting that Derrida identifies his usage of
"zoography" as actually that of the ancient Greeks; he quotes the dialogue Cratylus and its judgment “writing is
unfortunately like painting (zoography)” and formally references it before applying the term to Rousseau. [5] Thus,
Derrida makes the term "zoography" relevant to media theory through its connotative link between word and image in
their fundamental limitations and inherent “violence,” one that is as old as Western philosophy itself.

More recent media theory has utilized zoography in a more historical context. W. J. T. Mitchell references Derrida’s use
of zoography and its connotations of imperfect mimesis in the chapter “Surplus Value of Images” in his book What Do
Pictures Want? In his discussion of the origins of mimesis, he mentions “zoographic” images as both the source of
writing and the subject of the very first paintings, citing Derrida’s use of zoography as the source of the term and
concept; however, he focuses more on the place of animals in the history of images and culture than on the term's connotations of both physical and metaphorical violence. [6] Also, Hermann Kalkofen mentions zoography in his essay
“Irreconcilable Views,” a study of the use of contradictory, multiple views in perspectival pictures. An art historian,
Kalkofen is interested in zoography as a historical artistic discipline that concerned itself solely with the depiction of
living beings and was separate from painting of inanimate objects, or scenography, until the third century BC in
Ancient Greece. [7] He goes on to discuss the disparate perspectives of subject, a living being, and background within
Renaissance painting through the terms "zoography" and "scenography." [8] It is clear Mitchell and Kalkofen consider
zoography more of a historical focus on a specific period of painting and do not consider it an inherent judgment of
media as Derrida seems to.

Unlike Derrida, Mitchell, and Kalkofen, Akira Mizuta Lippit’s piece of film criticism “The Death of an Animal” posits
zoography as a metaphysical perspective on “animal,” implicitly rejecting its specificity to a particular type or period of
painting. Lippit’s essay concerns the portrayal of animal death throughout the history of Western cinema, scrutinizing
both graphic depictions of butchery and hunting and mainstream fantasy of animal death embodied in the disclaimer
“No animals were harmed in the making of this film.” While discussing a segment from Chris Marker’s Sans Soleil, he
proposes a dichotomy of zoography and ethnography, as related in the following quote:

"In this scene from Sans Soleil, Marker’s partition functions not only as the divide between differing views of life and
death, animal and human being, East and West, but also between the rhetoric of zoography and ethnography. Like the
children that “stare through the partition,” Marker’s look blurs the lines that separate humanity from animals: children
mourn the animals as if they were people; a hunter kills a giraffe like a condemned man, as if it were guilty. The
exchange of human and animal features takes place across the threshold of the imaginary partition." [9]

By attaching the term "zoography" to a cultural/semiotic understanding of human and animal nature independent of
pictorial and cinematic mediation, Lippit abstracts zoography and its twin connotations of visual (or in this case
cinematic) representation and scientific categorization and applies it as a critical tool for the analysis of media.
Zoography the method becomes zoography the mindset, a metaphysical outlook in which animals must be rounded up
and killed in order to preserve humanity’s imaginary “difference” that produces a long narrative of pictorial and
eventually cinematic representations of animal death, usually at the hands of humans. What Derrida alludes to in Of
Grammatology to describe the mimetic “violence” of symbols Lippit fashions into a theory of the Western human-
animal relationship.

Following Lippit’s reasoning, zoography can describe, or rather explain, any symbolic representation of animals, from cave painting to poetry to computer animation, that identifies “animal” as other and uses the “violence” of mimesis to maintain this metaphysical divide. This usage of zoography strays from its etymological roots as naming a general practice of painting living things, for it can be presumed that Lippit would consider any representation of animals that does not exhibit this philosophy to fall outside zoography, despite its fulfillment of the term’s most basic requirements. However, it can also be argued that his usage marks the culmination of the term’s semiotic development within the narrative of modern media theory, or at least stands as an equal within its discourse.
To conclude, zoography symbolizes something different for every media theorist who has utilized it. Its fundamental definition, the pictorial representation of living things, remains essentially the same for Derrida, Mitchell, and Kalkofen, who differ only in the historical and theoretical frameworks within which they place it; Lippit, by contrast, treats zoography itself as such a framework. Derrida uses zoography to stand both for a historical genre of painting and writing and for an abstract action of mimetic violence against reality, while Mitchell and Kalkofen consider zoography only as a focus of painting specific to a historical period. Lippit rejects historical and medium specificity and abstracts the term into a constructed metaphysical division that preserves “humanity” through the symbolic destruction of “animality.” Yet in spite of such disagreement, all four theorists use zoography to invoke a visceral reaction to life and reality hardwired into the fundamental nature of Western art. Whatever the theoretical structure attached to zoography, it is always the ur-art, the first and fundamental art that shaped the semiotic and metaphysical peculiarities of Western culture.

Matthew Landback, Winter 2007

taste

Although the Oxford English Dictionary lists more than eight definitions of taste, the word is usually talked about in
relation to physical and aesthetic perception. We talk about people either having good taste or bad taste when we look
at the personal media choices that they make. Often the judgment is visual. Are their clothes fashionable? Is their house
nicely decorated? But we also make judgments based on their consumption of more traditional media as well. Do they
read books from the canon of dead white men? Are they conversant in independent film directors? Taste is usually
described as a binary opposition between “good” and “bad,” and it is nearly impossible to agree on a suitable definition
for either.

The Oxford English Dictionary defines the aesthetic view as “mental perception of quality; judgment, discriminative
faculty.”[1] Perhaps because of its subjective nature, many philosophers and thinkers have taken up the question of
what taste entails, and whether or not we can come to an objective agreement on the judgment of it. Older
philosophical works tend to focus on the moral and ethical foundations that lead to good taste, while more modern art
theory seems to rely on a solid education in the history of aestheticism, as well as familiarity with the mediums of
form, color, line, style, and musicality. But it is important to remember that taste is primarily one of the five bodily
senses, and its use as a term for aesthetic judgment is a metaphor. The Oxford English Dictionary describes it as, “The
act of tasting, or perceiving the flavour of a thing with the organ of taste.” [2] The quantity and quality of food have always been signifiers of wealth, and so the opportunity for experiencing good taste has always been associated with
prosperity.

David Hume’s essay on art, “Of the Standard of Taste,” [3] outlines his views. Pronouncements of taste are, on the
whole, sentimental, which means that they lack truth value. But he does attempt to construct a definition, saying it is
the joint opinion of true critics. By true critics he means people who are well versed in the history and ideas of
whatever they are choosing to pass judgment on, so he makes clear that there must be different sets of critics who
have different areas of expertise. Therefore, one man cannot be educated enough to have good taste in all matters.
Art should be evaluated on whether or not it fulfills its purpose; for example, poetry should please the imagination. There is no “art for art’s sake.” According to Hume, there are two mediating factors for the qualified critic: their character and moral differences. Moral judgment cannot be separated from aesthetic judgment. If a work of art is
representing human action, the critic must evaluate the morality of the action that is being depicted. He allows for
cultural differences, saying that we are more likely to appreciate that which we are used to, but the true critic must
rise above that, a difficult task.

Edmund Burke also takes up the question in A Philosophical Enquiry into the Origin of Our Ideas of the Sublime and
Beautiful, his 1757 treatise on aesthetics. [4] According to Burke, the Beautiful [link] is aesthetically pleasing and
nicely formed, and the Sublime is so compelling that it has the power to destroy us. But Burke’s view of beauty is not
broken down into the traditional categories of proportion, fitness, or perfection, but rather analyzed through its causal structure. According
to Aristotelian physics and metaphysics, causation can be divided into formal, material, efficient, and final causes
(which may complicate the idea, rather than illuminate it). The formal cause of beauty is love; the material cause is a
pleasing sensation of an object such as delicacy or softness; the efficient cause is whether the object fulfills its
usefulness, and the final cause is divine providence.

Immanuel Kant’s Critique of Judgment [5] is one of the most prominent discussions of taste and lays the foundations
of modern aesthetics. In it, Kant makes the claim that there are four possible aesthetic judgments: the agreeable, the
good, the sublime, and the beautiful. Kant has an absolute notion of what is the good because it is what is ethical,
which for him is fixed in moral law, meaning that there is no judgment involved. The agreeable is purely a sensory
judgment, such as “I am comfortable.” The last two are “subjective universal” judgments, meaning that they are
made with the belief that other people ought to agree with the judgment. This sense of “ought to” comes from his
belief in a “sensus communis,” which is a community of taste. This refers to the term Aristotle had for the part of the
psyche that combined all the sensory perceptions into a coherent whole.
Kant explains that something should be considered beautiful if it has the form of finality, meaning that it is apparent
that it has been designed with a purpose, although he does not mean that it should have a practical function. The
aesthetic of something sublime, however, inspires fear because it lies beyond human comprehension. The sublime
has a quality of transcendent greatness because nothing else can be compared to it. The aesthetic branch of
philosophy owes much to Kant, not only for his ideas, but for making aesthetic debate valid and for providing the
framework for it.

The debate over aesthetic theory and taste is closely tied to the philosophy of art, and art critics have had much sway
in defining and shaping art trends, and have also grounded the debate in secular terms. Clement Greenberg’s
canonical essay, “Avant-Garde and Kitsch,” [6] defined the role of high culture in society and the qualifications needed
to possess good artistic taste. “Taste” for him here seems to be defined as the “style or manner exhibiting aesthetic
discernment; the style or manner favored in any age or country” [7]. He defines the avant-garde as the historical
agency which functions to keep culture alive in the face of capitalism, celebrating artists such as Braque and Pollock
for their role in taking art beyond mere representation. “The true and most important function of the avant-garde was
not to experiment, but to find a path along which it would be possible to keep culture moving in the midst of
ideological confusion and violence” [8]. Greenberg shares Kant’s “form of finality” theory, saying that for an object to
have aesthetic validity, it cannot be arbitrary but must “stem from some worthy constraint or original” [9].

Subject matter and content in art suddenly became very kitsch. Kitsch, a word Greenberg appropriated from the
Germans, was his term for the second new cultural phenomenon to appear in the industrial West. This is the popular,
commercial art that was seen on magazine covers, movie posters, and advertisements. Greenberg writes, “Kitsch is a
product of the industrial revolution which urbanized the masses of Western Europe and America and established what
is called universal literacy” [10]. As peasants left the countryside, they left behind folk art as well, and a new market
was created for a new commodity. This new culture, according to Greenberg, is mechanical, relying on formulas and accumulated experience rather than innovation. It is fake experience and sensation. Greenberg clearly shares Hume’s view that only the educated few are able to cultivate good taste. It is important to note, however, that Greenberg did
not hold the masses accountable for their affinity for kitsch. To appreciate high culture, one had to be very educated
and have a fair amount of leisure time, luxuries that the working class was not afforded.

In “Towards a Newer Laocoon,” [11] Greenberg writes what he calls a “historical apology for abstract art,” [12] and
suggests that certain media are superior art forms. For Greenberg, the ultimate and purest form of art is music, and other arts better achieve their goals when they strive to imitate the ideals of music. “Because of its absolute nature, its remoteness
from imitation, its almost complete absorption in the very physical quality of its medium…music had come to replace
poetry as the paragon art” [13]. As proof that the art of forms is superior to representative art, he points to Oriental
and children’s art, which conforms to his ideal of purity. Greenberg expected that his standard of taste was valid only
in his historical context and that it would be replaced in the future by other standards.

This turned out to be the case, as subsequent art movements, especially Pop Art, purposely turned much of this
argument on its head. The avant-garde artist Andy Warhol started to create mass-produced art from mass-produced
items. He appropriated “lower” forms of art, such as advertisements and Hollywood images and presented them in the
form of high art. Warhol declared that he wanted to “be a machine,” [14] which removed his artistic authority as a
taste-maker in creating art. “I want everybody to think alike.” [15]

But everybody thinking alike creates problems, as people in power want to differentiate their aesthetic views from the
masses. So writes Pierre Bourdieu, in his 1979 work, Distinction: A Social Critique of the Judgement of Taste. [16] In
it, he claims that only those in power can define the concept of taste. The working class aesthetic is a “dominated”
aesthetic, meaning that it can only define itself in relation to the dominant aesthetics of those in power. If those two
aesthetics become too similar, the upper class will work to disassociate itself. This can be seen most clearly in mass
media. For example, if a critically acclaimed movie becomes hugely popular in mass culture, critics (even those who
gave the movie a favorable review) will tend to form a derisive opinion of the movie as a way to distinguish
themselves and prove that their taste functions on a higher level. With Bourdieu’s argument, taste is not an organic
judgment, but is almost entirely socially produced.

It is here worthwhile to return to the primary definition of taste as one of the five senses and the ability to detect
flavor in food. The fact that this became a metaphor for aesthetic judgment is indicative of our inclination to conflate
consumption with positive mental associations. Names of affection in the English language are often based on baked
goods, while classic taste sensations have come to describe personality types (sweet, bitter). Perhaps it speaks to a
collective oral fixation, but taste is also the only physical sense that invites much debate. Sight, hearing, smell, and touch are objective to a degree, but the palate invites exploration and innovation. It is also of interest that royals used to employ tasters to make sure that their food was not poisoned; literal bad taste can be deadly,
and metaphoric bad taste is deadly for culture.

It is also crucial to state that these theories of taste take a “rational” view and do not contend with “instinctual” judgments of taste, because one cannot create a reasoned theory about an instinct. Thus, it is necessarily from an incomplete set of data that this enquiry into aesthetics proceeds. Presumably, Kant, Hume, Burke, and Greenberg
would argue that snap judgments are actually well reasoned if they are made by a person who is well versed in
whatever he or she is judging, because experience has trained them to recognize quality. Bourdieu would argue that it
is not instinctual at all, but merely a response to a social environment that has trained the viewer to respond to a
certain aesthetic.

Judgment of aesthetic value has many facets, but the thread holding the many taste theories together seems to be an
insistence on careful thought. Intellectual and interpretive ability is highly valued by philosophers and art critics, as is knowledge of history and of one’s own historical context. Because these criteria are usually unattainable for the working class, taste is certainly subject to social pressures and, perhaps most importantly, functions to create a cultural hierarchy.

Alexandra Squiteri, Winter 2007

taste

The concept of "taste" is central to both the gustatory and aesthetic realms. Taste, when defined as one of the five
senses of the body, is not typically engaged in discussions of media theory, though it is particularly relevant to such
discussions by virtue of its location in the tongue and mouth, the organs of taste and speech. In contrast, theorists
widely engage the concept of taste as aesthetic judgment within media discourse, analyzing the process of the
discernment of the relative artistic worth of specific media, as well as the ultimate trustworthiness or danger of these
discernments.

The bodily sense of taste has historically occupied a low position within the hierarchy of the five human senses.
Aristotle's classical hierarchy of the senses deems "sight" the highest of the senses, followed in order by hearing,
smell, taste, and touch (Jutte 61). Philosophers have privileged the "distance" senses such as sight and hearing over
the "bodily" sense of taste due to notions that distance from the object perceived yields objectivity (which in turn
might lead to knowledge), and proximity to the object perceived yields subjectivity (which implies the risk of self-
indulgence) (Korsmeyer 361). This sense hierarchy is not uncontested. Theorists have argued that this hierarchy is
not a universal "given," but a social construct influenced by philosophy, human evolution, and technological progress
(Jutte 61). Certainly, taste's place within the hierarchy of senses is prone to change as aspects of culture and forms
of media change.

Gustatory taste is necessarily tied to the organs of taste, the tongue and mouth. Taste is the "faculty or sense by
which that particular quality of a thing...is discerned, the organs of which are situated chiefly in the mouth" (OED).
The "organs of taste," are not only mediums through which to discern the flavors of particular foods, but are also
mediums for the articulation of sound. In his "Course in General Linguistics," Ferdinand de Saussure notes the
importance of the vocal apparatus in language and speech.

In his outline of the speaking-circuit, Saussure details the speech act, emphasizing the intimate relationship between
the psychological, physiological, and physical aspects of communication. The vocal apparatus, including the mouth
and tongue (those organs most central to "taste"), plays a key part in this speaking circuit. The vocal apparatus
is the physiological component of the circuit where "the brain transmits an impulse corresponding to the image to
the organs used in producing sound" (Saussure 12). The vocal apparatus is inseparable from the technology of
speech; it is necessary to "sound" and "oral articulation," and cannot be defined apart from "acoustical impression"
(8). Interestingly, some non-Western cultures relate speaking directly to sensing. Kathryn Linn Geurts notes that the
Anlo speakers of West Africa emphasize the "sensorial aspects of speech" (175). Speaking is part of "a broader category of experience they call sesetonume (feeling in the mouth)" (175), a category which also includes such taste-
related experiences as eating, drinking, and kissing.

Although philosophers deemed gustatory taste too subjective and primal for employment in the rational study of
aesthetics, it has served as a metaphor for aesthetic judgment since the sixteenth century. This metaphor captures
the "immediacy of the aesthetic phenomenon of savoring and enjoying experienced qualities" (Korsmeyer 360). The
concept of taste as a faculty of aesthetic judgment is nuanced.

The Oxford English Dictionary definition of taste that best captures its meaning within philosophies of aesthetics is
"the sense of what is appropriate, harmonious, or beautiful; esp. discernment and appreciation of the beautiful in
nature or art; spec. the faculty of perceiving and enjoying what is excellent in art, literature, and the like." "Taste"
may also refer merely to the preferences of individuals; taste in this sense is "the fact or condition of liking or
preferring something; inclination, liking for; appreciation" (OED). This definition of taste corresponds in part with the
theories of Enlightenment philosophers, who posited that taste is a "feeling"; a "taste for something just is the
subjective pleasure that one takes in it" (Korsmeyer 358).

Theories of taste first gained prominence in 18th century Great Britain in the same documents often credited for the
rise of modern aesthetic theory, Joseph Addison's Spectator papers On the Pleasures of the Imagination (Dickie
565). The theory of taste that proliferates in this century is concerned with the individual's "experience and
appreciation of beauty in art and nature" (566), especially as opposed to platonic notions of the experience of beauty
"as an objective property of things" (565). The most notable 18th century philosophers of taste include Addison,
Francis Hutcheson, David Hume, the Third Earl of Shaftesbury, and Immanuel Kant. These philosophers, with the
exception of Shaftesbury, all adhered to Locke's Empiricist tradition, viewing perception as a passive act which
"simply reveals the nature of the perceived world" (568).

The concept of "disinterestedness" is central to 18th century formulations of aesthetic taste. Critics credit the Third
Earl of Shaftesbury for developing the notion of disinterestedness, a type of attention or impartial attitude
characterized by lack of desire or concern for personal gain (Korsmeyer 360). In his Critique of Judgment (1790),
Kant develops a theory of aesthetic judgment which privileges "disinterested pleasure" as the "first moment" in the
judgment of beauty. This "disinterested pleasure" is based on a subjective feeling of pleasure, unique in that it
neither holds nor breeds desire for the object (Ginsborg). French sociologist Pierre Bourdieu critiques Kant's concept
of disinterested pleasure, asserting that it fails to provide a universal basis for a standard of taste, as Kant claims.
Bourdieu argues that Kant's conception of aesthetic judgment is culturally-specific, viable only for those wealthy
enough to spend time on "contemplation" and the practice of "disinterestedness" (Korsmeyer 361).

The notion of disinterestedness has implications for the ways we view media; scholar Jerome Stolnitz compellingly
argues that disinterestedness has "transformed habits of seeing and judging" (Dickie 607). We now conceive of and
view works of art as "autonomous" and "self-contained" due to our awareness of the disinterested attitude (607).
Interestingly, the concept of disinterestedness has an inverse correspondence to the gustatory sense of taste. While
disinterestedness requires an emotional and physical distance from the object, the experience of the sense of taste
implies direct bodily contact with the object.

McLuhan's theory that the technology of the "electric age" extends the whole human sensorium and central nervous
system has implications for personal disinterestedness and the ability to perceive and judge art. He argues that the
extension of the sensorium will involve us in an intimate relationship with all of humanity, so that it will be "no
longer possible to adopt the aloof and dissociated role of the literate Westerner" (4). If dissociation becomes
impossible, the ability to engage media in a "disinterested" fashion, and the very definition of "disinterestedness,"
may also shift.

Art critic Clement Greenberg plays a central role in the twentieth century discourse on taste. In "Avant-Garde and
Kitsch," Greenberg provides "ideal types" for high and low art; "avant-garde" and "kitsch" begin to function as a way
to classify culturally valuable and culturally worthless media. Greenberg portrays the small avant-garde community
as the protector and defender of "high art." The masses, in contrast, gravitate towards "kitsch," a form of media
characterized by its crass pillaging of canonized culture (12). Perhaps the most important aspect of avant-garde
media in Greenberg's analysis was its purity.

In "Towards a Newer Laocoon," Greenberg expresses the notion that the most culturally valuable, avant-garde art is
that which is most pure. Pure art is not a vehicle for an idea or a subject, but is an end in and of itself; it is a
medium which serves to reveal and realize the qualities of its particular medium (28). For Greenberg, the avant-
garde painting at the time exemplified pure art; it strove not to conceal its medium or give the illusion of three-
dimensionality on a two-dimensional plane, but to "progressively surrender to the resistance of its medium" (34).

Pure art also makes an immediate impression on the senses. It strives always to "express with greater immediacy
sensations, the irreducible elements of experience" (30). In Greenberg's estimation, the nature of the impressions
on the senses should correspond to the nature of the medium; thus, pure plastic art will produce "the emotion of
'plastic sight'" (34). Greenberg uses notions of purity as well as 18th century ideals of "disinterestedness" in order to
construct a theory of "discerning taste." This process of discerning involves the immediate experience of art,
followed by a detached evaluation of the felt impact of the experience (16). The emphasis on immediacy and
detachment recalls earlier hierarchical classifications of the senses: philosophers had associated taste with immediacy
and the ability to feel, while they related vision and sight to distance and the ability to reflect (Korsmeyer 361).

Greenberg's insistence that pure art is "immediate" and privileges the "medium" over content parallels Marshall
McLuhan's famous line, "the medium is the message." In fact, McLuhan notes that cubism (an example of
Greenberg's "pure art") exemplified the notion of "the medium is the message" (13). McLuhan, like Greenberg,
warns that a medium's content may potentially "blind us to the character of the medium" (McLuhan 9). For
Greenberg, the repercussions of a blindness to medium entail the deterioration of a discerning taste, while for
McLuhan, repercussions of blindness are a numbness and an inability to properly discern the effects of media on
society (9).

Marshall McLuhan's ideas regarding media's influence on sense ratios are relevant both to a discussion of taste as a
gustatory faculty and taste as a critical faculty of aesthetic discernment. McLuhan postulates that technology, indeed
all forms of media, at once "extend" and "amputate" the senses (44). To the extent that a person is engaged with a
technology, the strength of her senses in relation to one another will also change. For example, "if sound...is
intensified, touch and taste and sight are affected at once" (44). Thus, according to McLuhan's theory of sense
ratios, the gustatory sense of taste is altered according to levels of engagement with various media.

The changes in sense ratios and the extension and amputation of the senses which technology brings about will
necessarily affect perception and aesthetic judgments. For instance, in the latter part of the 18th century,
philosophers considered taste "a response to associations provided by the external senses" (Korsmeyer 358). In this
conception, if external senses change, tastes must change accordingly. The dependence of taste upon changeable
senses will ultimately influence any attempt to form an "objective" or all-encompassing hierarchy of media, or a
cultural canon.

Some contemporary theorists have asserted that classifications and hierarchies of media are merely social products
and social tools. In his ethnography of French culture, Bourdieu argues that an individual's notions of what is
"tasteful" are determined more by her social standing than any special ability to experience and appreciate the
sensory, aesthetic, or social world (Bourdieu 1). Greenberg also acknowledges the socially constructed nature of
artistic discernment, noting that "superior culture is one of the most artificial of all human creations" (19).

Notions of gustatory and aesthetic taste are intimately bound with theories of media. The organs of taste function as
mediums for the articulation of sound; the tongue and mouth are the technology of speech. The faculty of aesthetic
taste serves to discern between the culturally superior and inferior media. Furthermore, this discernment, or faculty
of taste, engages the human sensorium as its medium of perception. Ultimately, as McLuhan hypothesizes, the
experience of gustatory and aesthetic taste will be determined by media. Noelle Baer

television

Floating somewhere between cinema and radio, yet comfortably furnished in 98% of American homes according to
the 2000 Census, is the television. Undeniably, television is a dominant cultural force that has helped make "media"
a somewhat unavoidable household presence in the life of a modern human, a presence as ubiquitous as a couch and
taken just as casually for granted. What is television? The question
seems facile, but the term, with all its meanings and power, amounts to much more than the Magnavox or Sony TV
set in one's living room.

In simplest terms, the OED defines television as "a system for reproducing an actual or recorded scene at a distance
on a screen by radio transmission, usually with appropriate sounds; the vision of distant objects obtained thus." The
term originates from "tele" for "far away" and "vision" for "a thing seen" or "the act of seeing." The term was coined
rather straightforwardly to describe a new audiovisual medium that allowed one to see (and hear) something
hundreds of miles away from the comfort of one's own home.

What complicates the definition of television is the instability of its placement within a larger system of media. From
its conception, television was advertised as a natural outgrowth of the radio and cinema (Crafton 150) due mainly to
technological precedent but also due to its situational and cultural antecedent in radio. The television was seen as a
logical replacement for the radio as the new family hearth around which to gather for news and entertainment.
When television was still in experimental stages of invention, AT&T, in an attempt to "claim" television as a natural
outcrop of telephone technology rather than radio, saw the medium as an opportunity to "illustrate two-way long-
distance telephone conversations" as well as "to present current events and entertainment in formats
indistinguishable from the Movietone Newsreel and the Vitaphone short." (Crafton 151) Yet television is not merely
radio with pictures, nor is it cinema in the living room. Television is somewhere murkily in between, sometimes
conflated or conjoined with cinema and radio, though more often antagonistically related. After the popularity of
television burgeoned, the film industry went to great lengths to differentiate film from television through exciting
technologies like Technicolor and Cinemascope--the latter providing the most lasting distinction between cinema and
television on a strictly visual level, widescreen v. 4:3 (or 16:9, in many modern televisions) aspect ratio.
The tension between cinema and television has become much more pronounced than the antagonism between
television and radio. Whereas television and radio are distinguished by the former eliciting sight and sound and the
latter strictly emanating sound, television and cinema are both audiovisual media on a relatively flat screen,
differentiated more by size, location, and the technological specification that creates the spectacle (television is
electronic whereas cinema is technically mechanical). The tension has sparked a plethora of arguments concerning
form and content that can be characterized mostly by cinema represented as the artistic, larger-than-life,
spectacular, hot medium (McLuhan 1964) and television as the commercialized, intimate, cool medium. In fact, both
media can be seen as helping to create, or at least evolve, the other: "technogenesis" or "the process by which one
cultural technology contributes to the construction of the other." (Stokes 4) Movies often take television as their
target of criticism (Videodrome, Murder by Television), highlighting the antagonism analyzed in On Screen Rivals.
Through such films, cinema can be seen as attempting to contain television within a diegetic narrative and therefore
containing the threat that television poses of stealing the cinematic audience.

Defining McLuhan's theorization of television in Chapter 31 of Understanding Media brings to light the medium's
controversial role in media discourse. For McLuhan, television is a "cool medium," one with which the viewer
participates actively, rather than passively as one watches a movie in a dark theatre. The construction of television
as a cool medium is partly due to its technology and visuality compared with film: the low-definition, blurry,
mosaic-like image of the scanned television picture, as opposed to the crisp photographic image of film, invites the viewer to
"complete" the picture, complete the medium, which forms a highly participatory circuit completely unlike in the
movie theatre, as the viewer has nothing to "fill in" with a movie that is already so much larger-than-life. Yet the
"filling in" is complicated by the fact that the motion is filled in by both mediums, and more pronouncedly in cinema,
which does not have interlaced graphics. "The TV image requires each instant that we "close" the spaces in the mesh
by a convulsive sensuous participation that is profoundly kinetic and tactile, because tactility is the interplay of the
senses, rather than the isolated contact of skin and object." (McLuhan 273) McLuhan then declares television the
fulfillment of the romanticized notion of synaesthesia: while cinema is audiovisual, television is "tactile" and
"textured," yet another way to pull the viewer into a circuit of participation. Though television is not as detailed
visually as cinema, McLuhan also declares that the "TV mosaic" has the power "to transform American innocence into
depth sophistication." (McLuhan 282). Television is, ironically enough when we consider the lack of intelligent
programming, the medium of "depth." McLuhan treads into more controversial territory with this statement, leading
to one of the fiercest debates of television: its valuation. Is television good, bad, or both for those 98% of Americans
and billions of worldwide viewers?

In analyzing television's depth and content, Adorno severely criticizes such quasi-mystical technological positivism.
Examining the phenomenon of television from a sociopsychological standpoint, Adorno denounces the medium's
oversimplification of characterization that completely exteriorizes and stereotypes characters and situations to the
point where the concept of suspense or unpredictability is abandoned but maintained superficially. (Adorno) Directly
contrary to McLuhan's theorization that television is unlike film partly because characters are not what they seem,
Adorno calls to attention the "threat of inducing people to mechanical simplifications by ways of distorting the world
in such a way that it seems to fit into preestablished pigeonholes." (Adorno 255) On a more epistemological level,
Adorno is addressing content, which McLuhan dismisses somewhat under his end-all-be-all "the medium is the
message." Adorno charges that the content is dangerous, specifically because the medium addresses the psyche on
multiple levels and, pacifying the psyche with lighthearted fare, such as sitcoms, more subtly imbues the unguarded
viewer with cultural biases and stereotypes, despite the protest of consciousness. In conclusion, Adorno addresses
the task of theorists in confronting the menace of television as an unwitting psychological, technological force: "We
propose to concentrate on issues of which we are vaguely but uncomfortably aware, even at the expense of our
discomfort's mounting, the further and the more systematically our studies proceed. The effort here required is of a
moral nature itself: knowingly to face psychological mechanisms operating on various levels in order not to become
blind and passive victims." (Adorno 259)

Between two such eminent theorists lies the fundamentally ambivalent nature of television: is it a tool of cultural
connection that links humans in the "electronic implosion" (McLuhan) with fellow humans to create a "global village,"
or is television a damaging psychological mechanism that has the potential to prey on the subconscious on a level
unprecedented in other media? The debate has been continuing for decades and is nowhere near a conclusion.

One difficulty in debating television is its sheer omnipresence and ability to envelop both the masses and the
intelligentsia in its circuit. How can one step outside of the circuit to examine the impact of television when one's
own psyche is inevitably wound together with the television as part of culture and coming-of-age? At this point in
time, we are approaching generations of adults raised on television and raising their children on television. It is a
rarity to meet someone who did not watch television as a child. The rest of us, those 98% or more, felt the presence
of Sesame Street, Fraggle Rock, and older shows like Mary Tyler Moore and Cheers (and a host of others; the sheer
list of sitcoms to draw from the mind raised on television is daunting) in our childhood as vividly as afternoons
outside playing hide-and-seek. How do we step outside this influence, this massive overdose of pop culture and
audiovisual stimulation since birth, to analyze television's influence on our lives? Is it the medium itself that is
potentially damaging as a whole, or is it simply the content, as Adorno indicates? With over 500 channels to choose
from, content seems to be a flimsy argument, one that has worn thin over time: we can watch movies (both theatrically
released and made-for-TV), documentaries, surgeries, news, sitcoms, Britcoms, talk shows, music shows, music
videos, wedding videos, dramas, soap operas, live operas, live plays, sports, pornography, makeover shows,
zoological explorations...can we really criticize content when the content not only spans the globe but every possible
genre? Today we even have a fireplace channel so that a couple can curl up before the TV and enjoy a warm, cozy,
fluorescent hearth. What would McLuhan have had to say about such an inversion 40 years after his declaration that
television had replaced the hearth as the focus for family gatherings?

Daunting, but inescapably powerful, television is a cultural force that only grows stronger, grafting itself onto
iPods, which now play video; computers can now be hooked up to cable modems and play TV channels. Television
is finding its way into waiting rooms, onto buses and planes and hypnotizing children in the backseat of the family
SUV. Its omnipresence can be seen as another outgrowth of television and cinema's mutual cultural antagonism.
Whereas cinema used Cinemascope, surround sound, and other enveloping technologies to enhance the cinematic
experience and make going to the movies continually alluring, television retaliated by expanding its scope--now it
not only populates living rooms, but kitchens, iPods, etc. Television is now in bars and clubs and malls, whereas
cinema is still confined to the theatre or multiplex, or has even been absorbed by television. With the threatening power
to influence and indoctrinate, or at the very least, penetrate and inundate, bright young minds of over 98% of
children, television is as important to understand as a social function as family dynamics and parenting methods. As
such, the debate over the good, bad and polemical, but unavoidable, phenomenon of television could very well last
for another 50 years before the medium is pinned down to some satisfaction. Alexandra Ensign

video

All round were chellovecks well away on milk plus vellocet and synthemesc and drencrom and other veshches which take you
far far far away from this wicked and real world into the land to viddy Bog And All His Holy Angels And Saints in your left
sabog with lights bursting and spurting all over your mozg.
--- Anthony Burgess, A Clockwork Orange

The word 'video' was first used in the 1930s to describe the visual channel, as opposed to the auditory channel, in
early television experiments (Barbash). A 'video' track was first recorded in 1927 by John Logie Baird, who created a
system called Phonovision that used discs to hold images, in a way similar to recording audio on a phonograph: by
tracing a path in a disc with a rapidly moving needle, a low-quality image could be reproduced on a cathode ray
tube. Thus the medium 'video' has two connotations. It can be used to describe a visual channel of
information or to describe a recording medium that stores electromagnetic information.

Video comes from the Latin verb videre 'to see' (OED). Burgess undoubtedly uses this etymology to coin the word
'viddy' in the vocabulary of ultra-violent London teens in A Clockwork Orange. 'Seeing' is often used interchangeably
with 'knowing' in highly visual Western society. Yet seeing and knowing are completely different acts. Burgess's
dystopia arises from the confused notion that the two are synonyms. This is encapsulated by the word 'viddy.'

The phenomenon of 'medium nesting' can be used to separate video from film and television, which previously nested
older media, most notably photography and radio ("Video Killed the Radio Star"). Video's unique qualities can be
discerned from an examination of its origin, comparison, and divergence from these older media. The myth of the
'real' in video and its predecessors reaches back to the phonograph's tendency to capture 'noise' (Kittler). The video
camera captures visual noise. When coupled with technological mysticism and complacent trust in science there is a
danger of grafting a false 'realness' onto the medium. This 'realistic effect' occurs because "the 'real' is never more
than an unformulated signified, sheltering behind the apparently all-powerful referent" (Barthes). Surveillance
videos, pornographic videos, and documentary videos are all exploitations of this false objectivity. None would need
much rhetorically induced credibility if brought in front of a jury. This trust is hubristic for two reasons. First, video is often
of very low quality. The resolutions of photography and film have always surpassed video. Video is constructed of
three colors (red, green, blue/magenta, yellow, cyan) which are displayed at different brightnesses. Video currently
is most often recorded onto magnetic tape which can degrade after time or be erased (due to magnetization)
altogether. It can stretch and induce 'vertical rolling' during playback. All of these can influence what we 'see' and
then 'know' upon a later viewing. Second, video can be edited quite easily. If the time/date appears in the corner of
a video it is assumed to be accurate. Other temporal ellipses are as easy (or easier) to create in video as in film. The
time qualities of video, television, and film separate them as mediums. Television consists of previously recorded
video and film and live broadcasting. A live television broadcast (the nightly news) is 'immediate.' But television is
always 'being broadcast,' so even recordings become 'immediate' in that the receiver has little control over what
is mediated. Commercials capitalize on this quality in order to incite desires for food, clothing, information, sex, etc.
The television medium prescribes immediate capitalistic pharmakon in order to visually manifest what the viewer
should be.

Film is the opposite of television's immediateness. It is not accessible in the normal home. Much like a play, the film
creates a spectacle meant (usually) to entertain an audience in a theater. The fictionality of film and its ability to
trick through montage is well known and accepted by its audience that uses it to construct cohesive visual
narratives. Commercial films are created from screenplays demonstrating nesting of theater and writing media. The
film then becomes a timeless entity crystallized within the confines of celluloid, like writing carved into a clay tablet.
The theatricality of film is regulated by time. One knows that a film will be projected whether one is there or not,
and that the film will be projected at some later time. Aside from showing up in the first place, film is about a lack of
control for the ordinary spectator. One goes to a theater, watches a film (and in so doing forgets reality), but at its
conclusion is left back in the theater, in reality. Film as a medium allows a timeless spectacle to exist while simultaneously
refuting this as a future experience: it shows what one can never be.

Video often nests both television's immediacy (the previously recorded broadcast) and film's timelessness (the film
transferred to video). Video that is broadcast is simply television, as previously discussed. Video and film are much more difficult
to separate (see bibliography). Video can be projected with an LCD projector in a cinema causing it to become
theatrical. However, video is more often viewed alone in an intimate home setting. One has much control over when
and how video is played. Furthermore, there is an obvious physical difference between film and video: one is celluloid
and the other is not. Video can be trapped on tape or digitized. Both can be readily copied (even with anti-copy
protections) and are much more freely accessible than film. Video can be played on many different types of
monitors. Lastly, video is much more 'ethereal' than film. It is stored digitally as binary code or directly through
magnetism. Video is self-reflexive (through both recording and playback characteristics) and has the ability to show
one as they are.

Moving images could be recorded for quite some time before videotape, either by filming them or by using novel
devices like Phonovision. The first use of tape as a recording medium was in 1956, when the Ampex VRX-1000
became the first commercial videotape recorder. Its quality was very poor. The first consumer videotape recorder was
9 feet long and weighed 900 pounds. It was not portable (obviously), but was offered for sale at $30,000 in 1963. Once recording
could take place the format of television radically changed. Images could be seen again and again. The consumer
had to wait until 1975 when Sony released the Sony Betamax Combination TV/VCR. The next year the stand alone
Betamax VCR was released and sold fairly well. In 1977 RCA introduced the VHS VCR. This was much cheaper and
allowed twice as much recording time compared to the Betamax (4 vs. 2 hours). Essentially it was due to economics
and good timing that VHS is now synonymous with 'video' tape (Betamax tapes had much better image quality).
DVD has been the only recent contender to VHS (ignoring all computer video formats) having sold readily since its
release in 1997. It would be difficult to create a comprehensive list of 'dead' video technologies (such as PixelVision).
Regardless of the specific storage mechanism (so long as it is autonomous from film), video is important because of
its mass-consumer appeal. Video has changed television and film as each strives to become separate from the others
and fill a niche market. Video is cheap and can be left recording for much longer times than film. Surveillance
cameras and amateur videos tend to do just that.

Historically it was much easier to edit film than video; instead of video's anonymous magnetic strip (although one
form of early video actually consisted of small, film-like images) film has little 'photographs' that run through the
projector at 24 frames per second. To change the film one physically 'cut' it and 'spliced' this series of photographs
in somewhere else. Video could be recorded over easily. Recently there has been an explosion of new digital editing
techniques that allow editing of video to be done much like that of film (frame by frame at 29.97 frames per second)
on computers. Now it is actually easier and much less expensive to manipulate video than film. Film is even 'going
digital.' It is altogether likely that video tape (VHS) will become a 'dead medium' in the near future and give way to
digital video. Digital video, which is stored as binary code, will not lose quality if stored on a computer.
Furthermore, video is increasingly striving towards the picture quality and speed of film. Commercial video cameras
are now available that record at 24 frames per second. As film and video tape are digitized they become digital
video and, subsequently, fit video's first definition again: the 'image track' as opposed to the 'audio track.'

The best recent example of video's accessibility is the spectacle of the 9/11 footage. The footage was stored on
video and re-broadcast over and over again until the entire country felt as though they had actually experienced the
tragedy. They were in truth "far far far away from this wicked and real world" and "into the land to viddy." Video
cannot perfectly replicate experience. Rather, it constitutes a different experience of fantasy and pseudo-reality [see
reality/hyperreality, (2)].

McLuhan and Krauss compare video as a medium to narcissism. In a sense the obsessive rewatching of 9/11 videos
makes the violence pornographic and serves as a type of auto-eroticism upon reviewing. McLuhan would say that
the American public has through a real but, as a whole, distant amputation (of buildings and human life), amputated
itself still further through the numbness and closure that the video medium as 'extension' grants. America looks into
its 'pool' of video footage in order to counter the irritant of emotional amputation, until through increasing numbness
it is unable to recognize itself anymore. Almost all Americans felt as though they were in New York or Washington,
DC on 9/11 when in truth they were staring into their narcissistic pools. Soon they could not even recognize
themselves. That is, they felt that they were actual witnesses to the events when in truth they were just
sitting and watching what they were to become through false experience. America was viddy-ing. Krauss claims
narcissism as the main distinguishing factor of video art. She observed that through the 'feedback coil of video'
"consciousness of temporality and of separation between subject and object are simultaneously submerged. The
result of this submergence is, for the maker and the viewer of most video art, a kind of weightless fall through the
suspended space of narcissism." The music video is an excellent example of video art creating a fantasy pool. A
music video beseeches autoerotic participation. One becomes sexually aroused, violent, and wants to buy
something. The music video unrealistically presents desires that can be fulfilled in a realistic manner. One gets up
and dances with Dionysian bliss, turning up the volume, while being overcome by the rapid succession of visual
stimuli. The music video has the potential to replicate a similar experience every time it is played (either by the
spectator on tape or by MTV).

Video art has had to overcome both this narcissistic and film/television/video medium identity crisis. What
distinguishes a video artist from a film artist or a photographer? Is their work any different from what you see on
television? The first video artists worked in 1965, alongside the availability of the first portable consumer video recorders. Nam June
Paik is widely cited as being the first. He experimented with the physical medium of video tape by manipulating it
with magnets. This work was an attempt to demonstrate how to 'see' video. Many claim that video art was just
another outlet for artists resisting the materiality of painting and merely served a theatrical role. This is exemplified
by Dan Graham's more conceptual, idea-oriented work that contained non-manipulated timescales. Bill Viola in the
1970s stood for less self-referential work and dealt primarily with content (his work also tended to be theatrical).
Video art began to get less exclusive in the 1980s and leave the museum context altogether. Documentary
videographers and video artists constitute the current tendency to create concept-driven work that explores editing
technique or allows grass-roots political activism. This sort of work is described by Ryan as a kind of Moebius strip, a
formula for self-videotaping. When you watch it back again you will declare, "wow, it's like making it with yourself."
Perhaps this is the sort of narcissism that Krauss and McLuhan had in mind for video. Because it is so self-reflexive,
video art has the potential, even the tendency, to be both 'boring' (Ryan) and narcissistic. (see mirror)

Defining video as a visual, electromagnetically recorded channel gives an incredibly broad definition of the medium.
Perhaps this is why it has been going through a continual identity crisis with film and television. The work of video
artists frequently takes advantage of video's characteristics as a medium. Some of these are realism, cheapness,
accessibility, fantasy, and dreamlike temporal disruption. It must be remembered that every different type of video
storage medium allows more specific characteristics. Perhaps the most dramatic effect of video is its ability to distort
what is 'seen' in a visual-centric society into what is 'known.' Through their blending, these media allow 'viddying' to occur - a
massively replicated sociological pseudo-experience intrinsically tied to narcissism. Justin Cassidy Winter 2003

rhizome

Overview:
In media theory, rhizome is an evolving term that stems from the theories of Gilles Deleuze and Felix Guattari. It
has been offered as an explanatory framework for network (both human and machine) theory and hypertext,
although a strict reading of Deleuze and Guattari does not support these interpretations. Their rhizome is non-
hierarchical, heterogeneous, multiplicitous, and acentered. The term has been applied broadly outside of media
theory, as Deleuze and Guattari intended.

Origins of the Rhizome:


The word rhizome originates in botany, and for many people the most common rhizomes encountered that follow
such a definition are pieces of ginger [1] seen in the produce section of a local supermarket, or irises [2] planted in a
garden. The Oxford English Dictionary defines rhizome as "a prostrate or subterranean root-like stem emitting roots
and usually producing leaves at its apex; a rootstock." [3] The OED dates rhizome from the middle of the 19th century;
it is derived from a Greek word meaning "to take root."

Rhizome's entry into the world of theory began with the psychologist Carl Jung. His introduction to Memories,
Dreams, Reflections includes the following reference to rhizome:

Life has always seemed to me like a plant that lives on its rhizome. Its true life is invisible, hidden in the
rhizome. The part that appears above the ground lasts only a single summer. Then it withers away--an
ephemeral apparition. When we think of the unending growth and decay of life and civilizations, we cannot
escape the impression of absolute nullity. Yet I have never lost the sense of something that lives and
endures beneath the eternal flux. What we see is blossom, which passes. The rhizome remains. [4]
Though Jung paved the way with this quote, the figures responsible for rhizome as a term in media theory are
French philosopher Gilles Deleuze (inspired by Jung) and clinical psychoanalyst Félix Guattari, who together
developed an ontology based on the rhizome in works such as Rhizome: Introduction (1976) and A Thousand
Plateaus (1980). In the latter volume, Deleuze and Guattari begin the discussion of rhizome with an expansion of its
traditional botanical definition, noting that "even some animals are [rhizomes], in their pack form. Rats are rhizomes.
Burrows are too, in all their functions of shelter, supply, movement, evasion, and breakout." [5]

Deleuze and Guattari arrive at the rhizome by way of analyzing the book. They describe the book, and any work of
literature in general, as an assemblage, and set out to discuss what sorts of assemblages are possible. The first type
is the "root-book," meaning a mode of thought associated with the tree. The tree image is Deleuze and Guattari's
chief contrast against the rhizome, and much about the rhizome can be understood through this opposition. The tree
plainly represents a hierarchy, but it also refers to binary systems, because every new branch ties back in some
essential way to the root that makes all growth possible. Thus, 0/1, +/-, and Y/N are all "tree" structures, as is a
traditional organizational chart in which every position descends from a single head.

The tree, the authors explain, has become the dominant ontological model in Western thought, exemplified in such
fields as linguistics (e.g. Chomsky), psychoanalysis, logic, biology, and human organization. All these are modeled as
hierarchical or binary systems, stemming from the tree or root from which all else grows. One of Deleuze and
Guattari's criticisms of the tree is that it does not offer an adequate explanation of multiplicity. A political implication
of the tree is that it reinforces notions of centrality of authority, state control, and dominance; it is perhaps no
coincidence that this theory challenging the tree emerged in France shortly after the events of 1968. Deleuze and
Guattari thus posit another type of book, the rhizome (they claim that A Thousand Plateaus is one); but rhizome as
book is merely one example of, and a metaphor for, the ontology they develop. Unlike the tree, whose branches
have all grown from a single trunk, the rhizome has no unique source from which all development occurs. The
rhizome is both heterogeneous and multiplicitous. It can be entered from many different points, all of which connect
to each other. The rhizome does not have a beginning, an end, or an exact center. "The rhizome is reducible neither
to the One nor the multiple...it is comprised not of units but of dimensions, or rather directions in motion."6 This
inter- or trans-dimensionality is an important component of the rhizome, and separates it from a mere a system
made up of components or structure made up of points.
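
The structural contrast can be made concrete in computational terms. The following sketch is purely illustrative and
is not drawn from Deleuze and Guattari; the node names are hypothetical. It opposes a tree, in which every element
descends from a single root, to a rhizome-like graph, in which connections are symmetric, no element is the origin,
and the whole can be entered from any point:

    # Illustrative sketch only: a tree is navigated downward from a single root.
    tree = {
        "root": ["branch_a", "branch_b"],
        "branch_a": ["leaf_1", "leaf_2"],
        "branch_b": ["leaf_3"],
    }

    # A rhizome-like structure has no root: connections are symmetric,
    # and any node can serve as an entry point into the whole.
    rhizome = {
        "a": {"b", "d"},
        "b": {"a", "c", "d"},
        "c": {"b", "d"},
        "d": {"a", "b", "c"},
    }

    def reachable(graph, start):
        """Every node of the rhizome can be reached from any starting point."""
        seen, frontier = set(), [start]
        while frontier:
            node = frontier.pop()
            if node not in seen:
                seen.add(node)
                frontier.extend(graph[node])
        return seen

    print(reachable(rhizome, "c"))  # {'a', 'b', 'c', 'd'} -- no privileged center

In this toy graph, severing any single connection still leaves every node reachable by another path, which echoes
the point made in the following paragraph that an injured rhizome will merely form a new line elsewhere.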

Rhizome is "defined solely as a circulation of states,"7 that are able to operate by means of multiplicity, variation and
expansion. Deleuze and Guattari describe a mode of organization in which "all individuals are interchangeable,
defined only by their state at a given moment--such that the local operations are coordinated and the final, global
result synchronized without a central agency."8 Although a rhizome can be broken or injured in one location, it will
merely form a new line, a new connection that will emerge elsewhere.
The rhizome and the tree are at odds, because the rhizome represents a structure that threatens the authority of
the tree's hierarchy. The two are not completely repellent, however, because the rhizome is able to infiltrate the tree;
fluidity and openness infect the closed, unchanging, and static. Alan Taylor elaborates this opposition between tree
and rhizome, concluding that Deleuze and Guattari's social project is to "invite the reader to become a rhizome, for
only the rhizome can defeat the tree. The rhizome deterritorializes strata, subverts hierarchies. The rhizome can be
'novel.' It can create 'strange new uses' for the trees that it infiltrates. Most importantly, though, the rhizome
engenders 'lines of flight.' It allows for the re-opening of flows that the tree shuts down...The rhizome offers some
hope of bringing about a kind of 'liberation' from structures of power and dominance." [9] Christa Bürger notes that the
tree "is meant to indicate the essence of the enemy: classical thought [which] operates dualistically and
hierarchically." [10]

Rhizome, Cyberspace, al-Qaeda:


Stefan Wray has noted that the earliest attempts to apply postmodern theory to cyberspace largely ignored Deleuze
and Guattari's rhizome, but by the mid-1990s the concept was prevalent in the literature of internet theory. [11]
Hypertext theory in particular has used the rhizome as an explanatory structure. [12] Guattari died in 1992, and
Deleuze in 1995; neither had the opportunity to see the development of the World Wide Web as we know it now.

However, the characterization of the Web as a rhizome leaves out aspects of the concept described by Deleuze and
Guattari. As Tim Berners-Lee et al. originally explained, "the common URI syntax reserves the "/" as a way of
representing a hierarchical space." [13] (For example, the URI
http://chicagoschoolmediatheory.net/projectsglossary.htm actually describes a tree-like structure, with
http://chicagoschoolmediatheory.net/ at the base.) In addition, the Web operates on the internet, itself a structure
with a tree-like Root whose centralized features have been cited as ripe for domination. [14] This aspect of the internet
(and therefore the Web) as a locus of political power was widely acknowledged in recent objections to continuing
U.S. control of the Internet. [15] These are disanalogies to the idea of the Web as rhizome: the former example shows the
hierarchical nature of the Web, while the latter reminds us of the traditional institutions that lie beneath the
interfaces of the internet.
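
The point about URI syntax can be illustrated with a short, hedged sketch; the URI below is hypothetical, not one
cited by Berners-Lee. Each "/" in the path descends one level of a hierarchy rooted at the host, which is exactly the
kind of structure the rhizome lacks:

    from urllib.parse import urlparse

    # Illustrative only: reading a URI's slash-separated path as a descent
    # through a hierarchy rooted at the host -- a tree, not a rhizome.
    uri = "http://example.org/projects/glossary/rhizome.htm"  # hypothetical URI
    parsed = urlparse(uri)
    levels = [parsed.netloc] + [segment for segment in parsed.path.split("/") if segment]
    print(levels)  # ['example.org', 'projects', 'glossary', 'rhizome.htm']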

The Critical Art Ensemble has commented on the notion of internet as rhizome, saying that network technologies
have reinforced existing power structures by allowing them to become "nomadic." [16] If electronic technologies such as
the Web have become rhizomatic, according to CAE, it is primarily to the benefit of those with power (e.g.
corporations) who seek to reinforce their domination. CAE seems to accept traditional Marxist critiques of capitalism,
but rejects the idea that previous modes of resistance remain relevant now that elite power is electronicized,
decentralized, and globalized. Resistance movements must now become like a rhizome in order to be effective.

CAE's theory has been used to explain the operation of resistance movements, such as the Zapatistas. [17]
Counterterrorism intelligence analyst and popular blogger Jeff Vail recently started using the term to describe mobile
political networks and military configurations; he refers to al-Qaeda as "rhizomatic," and implies that the U.S.
military ought to become more rhizomatic in order to effectively fight the group. [18] Like the Web, organizations such
as al-Qaeda and the Zapatistas have hierarchical aspects. Nor should one think that such human organizations are
based on a relatively recent model; the French Revolution probably involved similar rhizomatic aspects. While the
rhizome is a useful analogy, it does not truly describe these organizations. Like most ontologies, it offers a
perspective that can be instructive even if it cannot be perfectly applied to any particular object.

Other Uses of Rhizome:


Rhizome has been used often in art and literary theory, as a keyword search for the term in any good academic
article index (e.g. MLA International Bibliography) will reveal. In art history, rhizome has been used to describe the
repetition of ornamental patterns on sculpture and architecture from the Indian Subcontinent. (e.g. "The attenuated
upper cylinder (rising to 30 m), built in brick, was faced with stone carved with frothy floral scrolls, spectacular lotus
rhizomes and complex geometric ornament.") [19]

http://rhizome.org is "an online platform for the global new media art community." The name of the site and
organization is "a metaphor for the organization's non-hierarchical structure."20 Since they began a partnership with
the New Museum of Contemporary Art (in New York City), Rhizome.org has begun offering a variety of rhizome-
named products such as Rhizome Raw (mailing list), Rhizome Exhibitions (online shows), and the Rhizome
Commissions Program ($$$ for artists). The point of mentioning all this is to demonstrate that rhizome is
increasingly being used as a proper noun for a new media organization. However, they are also a good resource for
anyone interested in new media. [21]

Finally, http://www.rhizomes.net hosts Rhizomes: Cultural Studies in Emerging Knowledge, a peer-reviewed online
journal (ISSN 1555-9998) published by the Department of English at Bowling Green State University. Rhizomes
seeks to publish works written "in the spirit of Deleuzian approaches." Mark Gartler

montage

The Oxford English Dictionary defines montage as “the process or technique of selecting, editing, and piecing
together separate sections of films to form a continuous whole; a sequence or picture resulting from such a process”
[1]. Another definition it gives for montage is “the act or process of producing a composite picture by combining
several different pictures or pictorial elements so that they blend with or into one another; a picture so produced”
[1].

It is helpful to first understand montage within a historical framework, beginning with the early Soviet films, where
montage began. It is of vital importance to keep in mind that this revolutionary cinema was just that: made at the
height of the Russian Revolution, and many of the films had a specific ideology that they were trying to use the
medium to convey. With the purpose to incite came one of the most powerful tools in cinema: montage.
This tool is highly poetic because it can be used to convey in the cinema the parallel of the literary metaphor.
For example, just as the masses are being shot down in the movie, a Soviet director might quickly cut to a shot of a
bull being slaughtered (this example is from Eisenstein’s Strike), thus distributing the message that the masses were
being slaughtered like animals.
Furthermore, montage could be used to establish rhythm in a film, with the change in cuts giving excitement
through the dynamic editing. Essential to the aims of Soviet cinema was the power to incite, something that
stressed disjunction and fragmentation. Thus, Soviet cinema was very much against a “smooth” film; rather, it would
be purposely disjointed at times to arouse the audience. When sound cinema finally came to Russia, the directors
purposely mismatched the sound with the image to create an even further disruption. Soviet cinema insisted on
shaping the cinematic medium to meet the message it was intent on conveying. Thus, while it may come across to
American viewers as heavy-handed propaganda, Soviet cinema attacks the notion of a smooth cinema that makes
the audience passive, and as a result it brought many new devices to the still-new cinematic language.

Not all montage has to be like the Soviet style, however. Montage can be used as a device for establishing spatial
and temporal relationships within a movie. In fact, most action and suspense movies rely on the power of montage
to create excitement. A famous director who uses montage to create suspense is Alfred Hitchcock. In his movie,
Psycho, when the protagonist is murdered, Hitchcock cuts between the killer’s knife and the woman in the shower,
so the audience realizes the woman is being murdered without the knife ever being shown penetrating her.

The concept of montage has its parallels in other art mediums; an example of this is collage. The Oxford English
Dictionary defines collage as “an abstract work of art in which photographs, pieces of paper, newspaper cuttings,
string, etc., are placed in juxtaposition and glued to the pictorial surface” [2]. While collage is in fact a technique for
making art, much of its power resides in its theoretical implications, such as the importance of juxtaposition to the
collage technique: things are combined that normally would not be. This technique is often used to take elements of
everyday, nonartistic life and create art out of them. Furthermore, this juxtaposition of multiple seemingly different things challenges the unity of
conventional art forms and thus challenges the grand narrative of art.

Christine Poggi remarks on the collage: “The intrusion of everyday, nonartistic materials into the domain of high art
challenged some of the most fundamental assumptions about painting inherited from both the classical and the more
recent avant-garde traditions. The invention of collage put into question prevailing notions of what and how works of
art signify, what materials artists may use, and what constitutes unity in a work of art” [3].

When thinking about the collage, it can be helpful to learn about artists who use this technique. One specific artist
that is renowned for his experimentation with the collage is Pablo Picasso. Picasso viewed art as a re-presentation of
nature rather than an imitation. Thus Picasso would use materials from popular culture and give them new meaning,
as in his guitar collage.

Poggi remarks on this: “Rather than describe these materials as ‘bits of reality,’ as has sometimes been done, one
might more accurately describe them as already circulating cultural signs, confiscated by Picasso in order to be
redeployed in the world of high art. Within their new context, the prior meanings of these elements is partly effaced
and new meanings are superimposed” [4]. Thus mass culture becomes the world from which Picasso gets his
material (literally), and his brilliance lies in his successful re-combination of mass culture to draw interesting
connections between seemingly different artifacts of modernity. Poggi adds, “for Picasso, the making of collages and
constructions was never a process that implied creation ex nihilo, but the recombination of the separable, semantic
elements of preexisting cultural codes... That these elements appear as the remnants of a prior discourse, quoted
out of context, allows for the partial depletion of their everyday meanings or uses, and opens them to a dialectical
process of recognition and critical reinterpretation” [5].

Interestingly, when thinking of such things as bluescreens and digital image manipulation, it is evident that collage
has taken on a further meaning in the temporal visual mediums. The concept of collage can now be used as a means
of manipulation of images without the viewer realizing it.

Also important to the collage is the pun; Francis Frascina explains: “From the earliest examples of collage, the role
of humor and irony, notably through the uses of visual and verbal puns, has been an important element. In both the
realm of “puns” (historical relations between signs from different periods) and the realm of etymology (historical
relations between signs from different periods), two similar but distinct signifiers are brought together and the
“surface” relationship between them invested with meaning through the inventiveness and rhetorical skill of the
practitioner. Such possibilities... can act to destabilize notions of “fixed” meanings or dominant distinctions between
“real” and “false” connections” [6].

The concept of the double meaning is essential for the collage; as the Group Mu Manifesto states: “Each cited
element breaks the continuity or the linearity of the discourse and leads necessarily to a double reading: that of the
fragment perceived in relation to its text of origin; that of the same fragment as incorporated into a new whole, a
different totality. The trick of collage consists also of never entirely suppressing the alterity of these elements
reunited in a temporary composition” [7].

Collage is not the only parallel technique to montage, however. The use of collage and montage can be seen in
poetry, such as in the writing of T.S. Eliot, where sets of seemingly arbitrary words are strung together to create
new meanings. Furthermore, the collage/montage device can even be seen in the music of today, where producers
create “mash-ups” of songs to create a new unique hybrid song. Thus, while collage and montage are usually
associated with visual art, they actually apply to a wide array of mediums. Jared Leibowich Winter 2007

memory (2)

Memory is a basic human ability that allows us to recall past events and knowledge. It is a concept whose
importance stems from the fact that our understanding of time is one-directional, moving forward, and all of our
current actions depend upon past knowledge and future expectations. It is no accident that our word for memory is
tied to the ancient Greek myth of Mnemosyne, the mother of the Muses who was "said to know everything,
past, present, and future." [1] Indeed, memory is essential to our existence, and it is impossible to overstate its
importance; without memory we would not be able to perform basic functions or have abstract thought. Moreover, it
is through our concept of the past that we are able to create our own identities and communicate with others.
Through memory we are able to learn how to create and comprehend what is presented; in this respect it is the
starting point of media.

Generally, memory can be understood as follows: 1. The process and/or processes of remembering past thoughts
and actions. 2. A recollection, or remembrance. 3. A device in which data or program instructions may be stored and
from which they may be retrieved (generally in computers or artificial intelligence). [2] It is the action or series of
actions involved in remembering, the recollection itself, and the aid that stores the past. It is both internal and external,
natural and artificial.

As previously stated, memory is what allows us to perform tasks and remember past events, and it is those past
events that help to shape us. It is through our memory that we are able to remember our own personal history, the
history of our society, and thus come to terms with our place in society. In the chapter entitled "Narrative, Memory,
and Slavery," from W. J. T. Mitchell's Picture Theory , memory in terms of both the ability to remember (definition 1)
and the recollection itself (definition 2) is considered with regard to how we are able to understand the self, society,
and the memory of slavery and oppression. In the first few pages he states that "my subject . . . is not 'slavery
itself,' but the representation of slavery in narrative memory." [3] Mitchell makes a point that in recalling memories
of the past there is a separation, a distancing, between the self and former self, yet despite this division the two are
interrelated. The former self determines the present self. He also notes the importance of the current self,
suggesting that the gaps and filters in memory are both unconscious and conscious attempts to forget past incidents
in order to protect the current self.

In recalling the past and sharing past experiences, the individual narrating the story of his life is both considering his
own memories on an individual basis and passing on his memories to other people. He is re-articulating his
personhood through his past, the only thing that a slave could truly own. Thus memory as Mitchell sees it is "a
technology for gaining freedom of movement in and the mastery over the subjective temporality of consciousness
and the objective temporality of discursive performance." [4] Memory allows for the telling of stories and the
understanding of them. The stories that we share, both fictitious and real, are nothing but a layering of
information, a dependency of new events and actions on previously learned events and actions. Narratives
require us to remember the characters and the events and actions that took place prior to the point that we are at in
the story. Without the capacity to remember the early part of the story, the events that are unfolding have no
importance to us, the choices the characters make have no consequence, and we are unable to hold any interest in
the story. As illustrated in the movie Memento, where the main character Leonard "Lenny" Shelby has a damaged
short-term memory that makes it impossible for him to create new memories, without such memories the ability to
remain attentive to a conversation, television show, or book is lost.
The film Memento is an interesting point to consider, especially with regard to the idea of memory and identity as
illustrated in Mitchell's discussion of narrative. In the movie Leonard attempts to gain control over his situation by
devising a system of recording to substitute for his memory. He relies on information he has written down on paper,
photographs, and tattoos to record his situation, yet despite the scrutiny with which he attempts to document his life, he is completely ill-equipped to make decisions or act with a clear purpose. He lacks full awareness of his situation and is ignorant of other people's motives. Furthermore, he is unable to have affection for those around him.
Leonard remains caught in the past, because he cannot develop a concept of himself separate from his former self;
he is forever caught in past goals that are no longer relevant. Mitchell observes that "memory, like description, is
the servant of narrative and of the narrator's identity," [5] while Leonard exemplifies the fact that without memory's
help identity cannot be fully constructed.

There is an understanding that our past has made us, and that our memories belong to us. There is a sanctity to
our memories; they have value. It is not just the comprehension and enjoyment of narration that is dependent upon memory, but the very act of narration itself that is bound up in the idea of memory. Whether it be a simple
conversation where one person is telling another about his day or a masterpiece in literature, the process of
remembering (definition 1) so that it is possible to share and the specific recollection (definition 2) being shared is
present and necessary. Furthermore, many books begin with the premise that their events are not taking place but have already taken place. Jane Eyre, The End of the Affair, Brideshead Revisited, Romeo and Juliet, Beloved, The Woman Warrior -- even the Bible -- are all records of past events; they are memories. In many of these books, moreover, there is a sense that those memories are sacred. The subtitle of Waugh's Brideshead Revisited reads "the
Sacred and Profane Memories of Captain Charles Ryder," cluing the reader into the fact that they are reading a
recollection of past events and that there is something special about them. They are the memories of a specific
person, and belong to him alone.

The understanding that the process of memory occurs naturally within the human mind is quite modern and goes
against the classical belief that viewed memory as an art. While modern psychologists attempt to understand the
way that the mind sorts through the information it receives, retaining certain pieces unconsciously without any
control or reason, memory as it was conceived in antiquity was planned and controlled. Mitchell argues that memory is itself a medium because, "since antiquity, memory has been figured not just as a disembodied, invisible power, but
as a specific technology, a mechanism, a material and semiotic process subject to artifice and alteration." [6]
Memory was a technique that allowed an orator to remember long speeches with perfect accuracy. It was a medium
that combined "the same modalities (space and time), the same sensory channels (the visual and aural), and the
same codes (image and word)." [7] Memory was understood through associations with place and position and with images found in architecture, relying on sight. It was through "seeing the places, seeing the images stored on the places, with a piercing inner vision" [8] that an orator was able to recite his speech. Thus the sharing, the externality, was important not only in the process of remembering but also in the motivation for remembering.

Memory in the story given to us by Cicero was a rhetorical skill that relied on visual stimuli as a device for
remembering. Like the memory in our computers (definition 3), architecture stored information, as if something within the external world could hold on to our memories. This idea is later echoed in the legend of the origin of painting. "According to the legend, drawing was discovered by the daughter of a Corinthian potter. About to be
separated from her lover, she discovered that she could preserve his likeness by tracing the outline of his shadow
cast on the wall." [9] Later, her father, a potter, transferred this shadow drawing into a sculpture. In both
instances of this story, "these Arts seem to have proceeded out of a desire of prolonging the memory of the
deceased, or else of them whose absence would be most grievous unto us without such a remembrance." [10]
Thus art can be, and was, viewed as a type of memory, a physical device by which the ephemeral could be made eternal. Like the memory found in our computers, architecture, painting, and, of course, writing are devices by which data can be stored. Writing has been previously alluded to with regard to narration and literature; however, the idea of written language as storage has not yet been fully discussed.

Writing can be viewed as a record of the past, a storage device used to grasp hold of past events, actions, or
thoughts, yet this tool has problematic implications for some. In Plato's Phaedrus, Socrates argues against writing, for those who write "will not use their memories, they will trust to the external written characters and not remember themselves." [11] Socrates fears not only that humans will lose their capacity to remember, but also that writing lacks the power of the spoken word [see speech]. On the page the words and ideas "are tumbled about . . . and know not to whom they should reply, to whom not," [12] and he fears that they will be misunderstood. Separated from the speaker, words are vulnerable to misunderstanding. Written language is lifeless, dead, a sentiment later echoed by Friedrich Kittler. Kittler viewed writing, and indeed many media, as a storage device, a
record of the past. To him a "book . . . coincides with the realm of the dead," [13] a time that is no more, except
within these records.

Information and the retention of that information is of the utmost importance in our lives; thus our dependency on memory cannot be overestimated. It forms the building blocks with which we are able to understand the world around us and identify ourselves as beings separate from those around us. In many instances, though not always with positive connotations, media act as devices for memory. Though memory may not be a medium like television, film, or paint, it is arguably the starting point for much of our media and communication systems. Maya Ganguly, Winter 2002

kanji

The Oxford English Dictionary defines kanji as “the corpus of borrowed and adapted Chinese ideographs which form
the principal part of the Japanese writing system.” The same dictionary defines ideographs as “a character or figure
symbolizing the idea of a thing, without expressing the name of it.” In this sense, the OED definition of kanji is only
partially correct. Because of their history and contemporary use, kanji occupy a troublesome position within Western
semiotic scholarship. For example, The Kodansha Kanji Learner’s Dictionary does not once call kanji ideographs.
Instead, the preferred term is character or unit. This essay seeks to analyze kanji against C.S. Peirce’s theories of
semiotics to establish what attributes of the different signs kanji possess.

C.S. Peirce in Logic as Semiotic: The Theory of Signs writes about the formation of signs. “A sign, or representamen,
is something which stands to somebody for something in some respect or capacity.” (PP, 99) That sign is in a triadic
relationship with an Object and an Interpretant. “I define a sign as anything which is so determined by something
else, called its Object, and so determines an effect upon a person, which effect I call its interpretant, that the latter is
thereby mediately determined by the former.” (EP2, 478, as cited online). As part of the triadic relationship of the
Sign to its Object, the Interpretant translates the sign and thus explains the object. “The Sign determines an
interpretant by using certain features of the way the sign signifies its object to generate and shape our
understanding.”

Peirce constructs “three trichotomies” of signs from the triadic relationship. Under the ‘first trichotomy’ a sign is called a qualisign, a sinsign, or a legisign, according to whether the sign in itself is a mere quality (qualisign), an actual existent thing or event (sinsign), or a general law (legisign). (PP, 101) The ‘third trichotomy’ outlines three
different types of signs in relation to their Interpretants. A rheme sign “is a Sign of Qualitative Possibilities,” that is, one understood to be capable of expressing certain categories of Objects. (PP 103) A Dicent sign describes an actual
existence, and necessarily includes a rheme sign to show how it is indicating that actual existence. An argument sign
is interpreted as representing its Object in its character as a sign. The above two trichotomies become
clearer with an explanation of the ‘second trichotomy.’

The ‘second trichotomy’ describes signs called an icon, an index, or a symbol. This final trichotomy is useful in
exploring both the polyvalent nature of kanji and the complex organization of Peirce’s Sign/Object relations.

Kanji as an icon:
When a kanji becomes a representamen to a reader of Chinese or Japanese, the context of its relation to the viewer may determine how fully it is understood as an icon. Suppose the kanji is seen illuminated against a Tokyo night sky, standing by itself. Suppose that kanji were a 1,000 ft tall neon ‘大’ or ‘手’. For the right viewer, these signs are
icons.

In Prolegomena to an Apology for Pragmaticism (1906), Peirce defines icon signs as “partaking in the characters of the object.” (CP 531) In a 1904 paper, he says that icons are fit to be used as signs if they possess the quality of the signified. (EP, 307) Peirce’s most confusing and potentially most useful definition reads: “An Icon is a Representamen whose representative quality is a Firstness of it as a First. That is, a quality that it has qua thing renders it fit to be a representamen. Thus, anything is fit to be a Substitute for anything that it is like.” (PP 104) So a sign is a successful icon if the first
impression upon viewing it brings to mind the character being expressed by the sign.

Let us examine how the neon 大 and 手 are icon signs in the sense of the above definitions. 大 is the kanji which
means ‘great’ or ‘big’ in Japanese and is pronounced ôkii or dai. A 1,000 ft. tall 大 is an icon in that the characteristic
bigness is directly translated from the object to the sign. The first impression of the viewer, even before recognition
of the symbol’s meaning sets in, is that this 大 is huge. Some may argue that the sign was presented to the viewer under circumstances that force its meaning of bigness to correspond to the bigness of its physical setting.
The sign, however, is substitutable for anything that it is like, which clarifies that its size makes it an icon sign and
not an index sign.

The iconic nature of 山 does not come from its size, but from its possession of the qualities of the object it signifies.
山 is pronounced yama and means ‘mountain.’ The origin of the kanji is from a pictograph, a pictorial sign. The three
prongs represent peaks and the flat bottom orients the readers who can determine the meaning without any
phonetic references. While 山 has evolved within the Japanese language to be neither a pictogram nor an ideogram,
it still retains value as an icon sign from the characteristic shape of a mountain that makes it intelligible to some
non-Japanese readers.

Kanji as Index Signs:
Index signs represent only by their real connection to their Object and not by any resemblance to it. (EP 460)
“By being really and in its individual existence connected with the individual object, when I call the sign an Index.”
(CP 531) The connection, spatial or temporal, to its object is important for an index sign. The connection links the
object with the “senses or memory of the person” for which it serves as a sign. (PP 109)

How kanji work as index signs requires an investigation of the internal elements of the characters. Kanji are composed of different internal units called radicals. The kôki jiten (康熙字典; Chinese: Kāngxī Zìdiǎn), a character dictionary compiled in China in 1716, lists 214 radicals. (Kodansha 958) The number of radicals increases when
you factor in the variants, slightly modified versions of the parent radical. For example, the parent radical “hito”
(person) is written: 人. The variant radical of hito is hitoben (亻). Radicals are the ideographic units from which kanji
are built.

There are two ways that the physical presence of radicals acts as an index sign within a kanji: semantically and phonetically. Kanji with semantic readings of radicals resemble ideographs in that their meaning can be guessed
by the connection of the radicals to each other. The kotoba radical, 言, can be found in kanji like 証, 詩, and 議. 言
(kotoba) means ‘word’ and when juxtaposed against 正, (sei; correct), produces 証 (akashi; evidence). The kotoba
radical placed next to 寺 (tera; temple) produces 詩 (uta; poem) from the connotation of poems sung at a temple.
When connected to 義 (gi; righteous), the product is 議 (gi: deliberation) because a righteous debate is a
deliberation.

The physical presence of radicals may also determine the connection of the kanji character to its phonetic sound.
Phonetic radicals, regardless of the different meanings of the internal radicals paired against each other, determine
the kanji’s phonetic reading. 義, 議, 儀, 嶬, and 曦 are all pronounced gi because the 義 gi radical is used as the
phonetic radical. Whether or not a radical will be the phonetic radical is usually determined by its placement in regard to right/left or top/down orientation, its size within the kanji character, or its prominence as a reading radical.
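
As a rough illustration of the composition just described, the sketch below (hypothetical, written for this entry and not taken from any cited source) models the essay's own examples as simple lookups: semantic composition pairs the kotoba radical with a second unit, while a shared phonetic radical fixes the reading gi.

# A toy model of the radical structure described above; the data only
# restates the examples given in this entry.
semantic = {
    ("言", "正"): ("証", "evidence"),      # word + correct
    ("言", "寺"): ("詩", "poem"),          # word + temple
    ("言", "義"): ("議", "deliberation"),  # word + righteous
}

# Kanji that take 義 as their phonetic radical are all read "gi".
phonetic_gi = {"義", "議", "儀", "嶬", "曦"}

def guess_reading(kanji):
    # Guess a reading from the phonetic radical (toy rule only).
    return "gi" if kanji in phonetic_gi else None

print(semantic[("言", "正")])   # ('証', 'evidence')
print(guess_reading("儀"))      # gi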

Kanji as Symbol Signs:


All of spoken and written language can be argued to be symbol signs under Peirce’s trichotomy. “Symbols, which
represent their objects, independently alike of any resemblance or any real connection, because dispositions or
factitious habits of their interpreters insure their being so understood.” (EP, 460) A symbol always needs the
reader’s interpretation since it does not resemble the object nor is it possible to pick up additional information from
the event. Instead, the interpretation needs to be fixed by habit. “A Symbol is a Representamen whose
representative character consists precisely in its being a rule that will determine its Interpretant. All words,
sentences, books, and other conventional signs are Symbols.” (PP 112) Education systems, through instruction on
reading and writing, establish the habits that determine the interpretation of symbols.

Kanji play a central role in the modern Japanese language. The evolution to their current form is a complex
history of acculturation and reformation. As language units, and thus symbols, they have undergone a specific
selection process. On November 16, 1946, the Japanese Ministry of Education released an official tôyô kanji list of 1,850 characters to be taught in Japanese schools and used in official documents. In 1989, the Japanese government approved an expanded list called the Jôyô Kanji List, with 1,945 characters. On that list, 737 characters have only on readings and 40 have only kun readings. This leaves 1,168 (about 60%) with both types of readings. As can be seen, only a select few of the many thousands of kanji survived to be used in the modern Japanese educational system and thus find a place within the reader’s habit formation. According to Sachiko Matsunaga, only 11.7 percent of kanji on the tôyô kanji list originate from pictographs used in China from 1200-1045 BCE. Kevin Mulholland
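
For readers who want to verify the breakdown quoted above, a quick calculation (assuming the counts given in this entry) runs as follows.

# Breakdown of the 1,945-character Joyo Kanji List as quoted above.
total = 1945
on_only = 737        # characters with only on readings
kun_only = 40        # characters with only kun readings
both = total - on_only - kun_only
print(both)                           # 1168
print(round(both / total * 100, 1))   # roughly 60.1 percent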

graphic novel

According to the now legendary story, comic book artist and writer Will Eisner first linked the words 'graphic' and
'novel' in 1978. Eisner used the term 'graphic novel' to describe a number of his comic book tales collected under a
single title, forming a book-length publication. Eisner's creative word-play failed to convince the publishers to whom
he first offered his work, yet the designation 'graphic novel' has since entered the lexicon of literary genres (Weiner
17). The Oxford English Dictionary currently carries a definition of 'graphic novel': "a full-length (esp. science fiction
or fantasy) story published in comic-strip format" (OED).

Eisner likely invented the term in order to distinguish his new work from comic books, artifacts of popular culture
with a long history of stigmatization for their alleged less-than-edifying content. Most notable is the attack on the
American comic book industry in the 1950s, significantly propelled by Fredric Wertham's Seduction of the Innocent
(1954, Fig. 1), during which the government took a serious look at the content of comic books and other popular
culture media. Eisner was no doubt familiar with comic book criticism. An acknowledged master of the art, Eisner
worked on comics in some form until his death in 2005.

Figure 1. Comic book illustration reproduced in Seduction of the Innocent with caption: "Children told me what the
man was going to do with the red-hot poker."

Like the OED, most commentators try to define 'graphic novel' in terms of attributes relative to comic books. For example, graphic
novels are described as longer or sturdier than the average comic book. Graphic novels are said to cost x% more
than comic books. Critics debate whether the term implies certain content: can an anthology of comics be
considered a graphic novel, or does a work have to be envisioned and published as a graphic novel to be one?
Furthermore, does a graphic novel have to be about a subject worthy of serious consideration, such as the Holocaust or coming of age, or can it be about superheroes, elves, and supernatural beings?

The most essential definition of 'graphic novel' will not be predicated on issues of size, shape, cost or content.
Rather, a definition of graphic novel should consider the implications of linking the words 'graphic' and 'novel' to
describe this specific medium.

Storytelling and the Novel


In Narrative and Graphic Storytelling, Eisner writes that "the story is the most critical component in a comic . . . the
intellectual frame on which all artwork rests" (Eisner, 1996 2). In this sense, all comics can be considered novels -
even the single panel cartoon or comic strip; the defining characteristic of a novel is not length, but the realism of
storytelling. "The fundamental aspect of the novel," argues E. M. Forster, "is its story-telling aspect" (Forster 44). Forster describes the story as the "backbone of a novel," the perpetually ticking clock that propels our page turning: "we want to know what happens next." According to Forster, time inside the novel is undeniable; time
allows for experience, the essence of a novel's realism. A 300-page comic book is not necessarily more of a "graphic
novel" than a four panel Peanuts strip (Fig. 2) because each displays Ian Watt's definition of formal realism: "the
premise, or primary convention, that the novel is a full and authentic report of human experience" (Watt 32). Both
Eisner's A Contract With God (1978) and any given Charles Schulz creation convey the "primary criterion" of truth
to individual experience that the novel embodies (Watt 13).

Figure 2. Peanuts,
October 2, 1950.

Individual experience, writes Watt, "is always unique," thus the novel is "the logical literary vehicle of a culture
which, in the last few centuries, has set an unprecedented value on originality . . . it is therefore well named" (Watt
13). 'Novel' implies the originality that writers have brought to the medium of comic books, seeing the potential for
their innovative stories to make a greater impact within single publications rather than through an extended series of
comic issues. In other words, the rise of the graphic novel in the latter half of the 20th century and into the next
corresponds to an increase in the demand for fresh approaches to the communication of individual experience.

A consideration of the term 'graphic novel' cannot avoid referencing, to some extent, the variety of narrative types
published in the last twenty years. In Maus (Pantheon Books, 1987), Art Spiegelman relates the story of his father's
survival as a Jew in World War II Nazi Germany, in addition to illustrating his own experience composing a graphic
novel on the difficult subject. Watchmen and Batman: The Dark Knight Returns (both DC Comics, 1986), each place
the well-known 'superhero' narrative within a postmodern historical setting in order to focus on the psychological
intricacies of their less-than-invincible protagonists. Graphic novels like Daniel Clowes's Ghost World (Fantagraphics Books, 1998) and Jimmy Corrigan: the Smartest Kid on Earth (Fantagraphics, 2000) by Chris Ware offer contemporary impressions of the 'coming-of-age' tale through their visually compelling formats. The list of successful graphic novels grows each year, yet as the medium advances in popularity, it retains originality as its main impetus for developing stories.

Time and Space Through Sequential Art


Paul Ricoeur writes that "two kinds of time are found in every story told: on the one hand, a discrete, open, and
theoretically undefined succession of incidents (one can always ask: and then? and then?); on the other hand, the
story told presents another temporal aspect characterized by the integration, the culmination and the ending in
virtue of which a story gains an outline" (427). Graphic novels convey both the linear and narrative formations of
time through the 'sequential art' that is unique to the comic book format.

Novel storytelling is combined with the sequential art of the comic book to create the graphic novel. Scott McCloud,
in Understanding Comics: the Invisible Art, defines comics as "juxtaposed pictorial and other images in deliberate
sequence, intended to convey information and/or to produce an aesthetic response in the viewer" (McCloud 9).
Likewise, Eisner designates comics as a "sequential art," referring mainly to the artist's task of arranging "the
sequence of events (or pictures) so as to bridge the gaps in action" (Eisner, 1985 38). The story of a graphic novel is
told through a progression of frames, technically referred to as 'panels.' Panels "secure control of the reader's
attention and dictate the sequence in which the reader will follow the narrative" (Eisner, 1985 40).

Comic book panels are comparable to the frames of a film, or images on a television screen; as McCloud suggests,
"In comics, as in film, television, and 'Real Life,' it is always now. This panel and this panel alone represents the
present. Any panel before this - that last one for instance - represents the past. Likewise, all panels still to come -
this next panel, for instance - represent the future" (McCloud 104) (Fig. 3).

Figure 3. McCloud, Understanding Comics: 104.

However, in film, television, and photography only one frame is available at a time for the viewer's consumption. In
sequential art, McCloud writes, "the past is more than just memories for the audience and the future is more than
just possibilities! Both past and future are real and visible all around us! Wherever your eyes are focused, that's
now. But at the same time your eyes take in the surrounding landscape of past and future!" (McCloud 104) (Fig. 4)

Figure 4. McCloud, Understanding Comics: 104.

Time in the sequential art of the comic book is a distortion of how time is experienced in human understanding. St.
Augustine envisioned the three forms of time - past, present, and future - as contemporaneously related: "The time
present of things past is memory; the time present of things present is direct experience; the time present of things
future is expectation" (Augustine Book XI). Augustine's conception of time is opposed to the eternal presence of
God, who is capable of experiencing past, present, and future simultaneously. Under human understanding, "all time
is forced to move on by the incoming future; that all the future flows from the past; and that all, past and future, is
created and issued out of that which is forever present" (Augustine Book XI). As Patrick Maynard argues,
photography, like sequential art, forces us to think about the arrangement of time within seemingly motionless and
momentary depictions of space. The difficulty, in photographs and comic books, is determining what degree of time
is represented by the single comic book frame or photographic image: "For example, if we attempt . . . to picture
time as a one-dimensional flow . . . with "now" as an index that - like the photofinish razor slit - scans that strip as
"now" shifts to ever later times - we have the awkward situation that, since all times get indexed as "now" . . ., to
identify the present we'd need to know where the "now" index was . . ." (Maynard 208).

It remains ambiguous whether we experience comic book frames as a series of 'nows,' instantaneous moments, like
snapshots of frozen actions, or as a continuous stream of movement comparable to the fusion of frames that
compose the movie-viewing experience. How comic books are read, or consumed, is a direct function of the
exclusivity of sequential art.

Also unlike the frames of other media, the comic book panel must be exclusive; film and television display
continuous action, yet the comic book contains only a fraction of that continuous action within its panels. The
exclusive nature of the comic book panel forces the sequential artist to be fully aware of how a story is to be laid out
- movies may be composed scene-by-scene, or even shot-by-shot, but comic book storytelling is depicted frame-by-
frame. Marshall McLuhan notes that the exclusivity of depiction in the comic book affects the method of its
consumption. McLuhan writes that the modern comic strip and comic book "provide very little data about any
particular moment in time, or aspect in space, of an object. The viewer, or reader, is compelled to participate in
completing and interpreting the few hints provided by the bounding lines" (161). Like television, which McLuhan
notes also has a "very low degree of data about objects" in its "mosaic mesh of dots," the comic book requires the
participation of the reader (161). For McLuhan, comic books and television both have a "participational and do-it-
yourself character" (165).

Graphic Storytelling
Graphic novels combine text and pictures equally in order to convey a narrative. Both words and images are
essential to the graphic novel, thus creating the need for a compatible relationship between visual representation, through talented art and design, and dialogue or descriptive writing. W.J.T. Mitchell writes extensively on the
associations and connections between words and images, noting that "the domains of word and image are like two
countries that speak different languages but that have a long history of mutual migration, cultural exchange and
other forms of intercourse" (Mitchell 49) (see the entry on the comic book for further discussion on words and
images as language). In the successful graphic novel, words and images should be on equal planes; one should not be privileged over the other. One cannot ask of a graphic novel, "which is more important, the words or the pictures?" as both domains combine to form an inseparable text (Fig. 5).

Figure 5. A panel from Frank Miller's Sin City demonstrates the ability of words and pictures to cooperatively form a
coherent text.

Jan Baetens writes that words and pictures are two types of language: verbal and visual. Pictures organized within
comic book panels must be perceived as a whole text if a story is to be conveyed to the reader. Through a model of
what Baetens calls "relatedness," the panels of a graphic novel are "related to one another so as to form . . . a
unified whole" (Saraceni 167). In graphic novels, the 'text' is formed by the sequence of panels; panels are analogous to the sequence of sentences in a text of just words: "sentences and panels represent the most identifiable units into which language-based texts and comics are respectively arranged" (Saraceni 169).

The reader's progression through the pages of a graphic novel is a function of the sequential artist's ability to convey
"real life through the spatial perception of time and space" consistently through the comic book panel. As mentioned
above, both the past and future of a story are represented simultaneously on the pages of a graphic novel.
Consequently, the sequential artist must depend on the reader's conditioned response to the written word in order to
communicate a coherent story. McCloud notes that, fortunately, "comics readers are also conditioned by other media
and the "Real Time" of everyday life to expect a very linear progression. Just a straight line from point A to point B"
(McCloud 106).

Ricoeur explains the process of reading as a process of transfiguration which takes place between the text and the
consumer. In Ricoeur's words, "the meaning or the significance of a story wells up from the intersection of the world
of text and the world of the reader" (430). Through reading, the graphic novel consumer is able to access the
"fictitious universe of the work," and 'reading' becomes synonymous with 'living' in the imaginary mode of existence
(Ricoeur 432).

An Authentic Medium?
Roger Sabin questions the authenticity of this supposedly recent medium, pointing out that book-length comics have
been in circulation since the 1940s. Sabin also indicates that "the idea of the 'graphic novel' was hype . . . it meant
that publishers could sell adult comics to a wider public by giving them another name, specifically by associating
them with novels, disassociating them from comics" (165). Yet even if the format has been employed in the past,
the advent of the term 'graphic novel' indicates a new recognition of its possibilities for the mediation of storytelling.
The graphic novel has thrived because of its ability to combine the visual language of sequential art with the
structured realism of the novel. Jon Thompson
