Media Philosophy, Cognitive Science, and the Future of the Humanities
(International Summit Forum)
Conference Papers
〔Conference Manual〕
School of Chinese Language and Literature
Beijing Normal University
October 27-29, 2018
Arrangements (活动一览)

Breakfast: buffet at the Jian Wei Restaurant (first floor of the Jingshi Hotel)

Morning: Forum, 8:30-12:00 and 8:30-12:30 (on the respective days), Conference Hall No. 6, Jingshi Hotel; Workshop 2, 9:00-11:30, Meeting Room 5058, Area C, Main Building of the School of Chinese Language and Literature; otherwise free

Lunch: Xi Bei Restaurant or Lan Hui Restaurant (on forum/workshop days); otherwise free

Afternoon: Forum, 14:00-18:00, Conference Hall No. 6, Jingshi Hotel; Workshop 1, 14:30-17:30, Conference Hall No. 6, Jingshi Hotel; otherwise free
FORUM

Mary Ann Doane: The Concept of Immersion: Mediated Space and the Location of the Subject

Myung-koo Kang: How a Gaze Can Become Violence: Representations of the North Korean Sports Team at the Pyeongchang Olympics

Liu Chao: The Effect of Mortality Salience on Guilt and Shame and Its Neurocognitive Mechanism

Tony D. Sampson: Transitions in Human–Computer Interaction: From Data Embodiment to Experience Capitalism

Briankle G. Chang: Spectral Media
Fang Weigui         Exploring a New Paradigm for the Relational Configuration of the Humanities and Science .............. 1
Siegfried Zielinski  Makers of Astonishment: The Diversity of Media Thinking .............. 6
Hansen              How Does the Mind Participate in (Artificial) Communication? Alternative Paths of Thinking (with) Machines .............. 19
Luo Yuejia          The Cognitive and Neural Bases of Emotion and Cognitive Function .............. 42
Mary Ann Doane      The Concept of Immersion: Mediated Space and the Location of the Subject .............. 44
Myung-koo Kang      How a Gaze Can Become Violence: Representations of the North Korean Sports Team at the Pyeongchang Olympics .............. 62
Vagt                Outsourcing Intelligence .............. 73
Xu Yingjin          Why Does Artificial General Intelligence Need Husserl's Theory of "Intentionality"? .............. 83
Jiang Yi            The Blurred Boundaries between Cognitive Science and the Humanities .............. 105
Shunya Yoshimi      Cultural Continuity and the Redefinition of the Humanities: The Role of the University in the Globalized Society of the 21st Century .............. 121
Liu Chao            The Effect of Mortality Salience on Guilt and Shame and Its Neurocognitive Mechanism .............. 123
Tony D. Sampson     Transitions in Human–Computer Interaction: From Data Embodiment to Experience Capitalism .............. 125
Briankle G. Chang   Spectral Media .............. 147
Appendix            Participating Scholars .............. 152
Exploring a New Paradigm for the Relational Configuration of the Humanities and Science
Opening Address at the Fifth “Ideas and Methods” International Summit Forum

Fang Weigui
First of all, I would like to welcome you to Beijing Normal University in the best
season, as we join here to participate in the Fifth International Summit Forum “Ideas
and Methods”. This series of Forums, known for its small scale and high quality, has
been well received by academia at home and abroad, and has widely promoted scientific
in-depth exchange and exploration with regard to many key issues in the contemporary
ideological field. This year, the theme of our forum is “Media Philosophy, Cognitive
Science, and the Future of the Humanities” – a theme that entails the observation and
reflection of profound changes in current life and intellectual discourse. I will briefly
address the topic here, based on my own understanding.
From the perspective of academic history, the separation and entanglement of
the Humanities and Science have a long history. In the 1950s,
C. P. Snow and F. R. Leavis fought over “the Two Cultures”, kicking off the
confrontation between the Humanities and Science. Since then, the estrangement and
hostility between the two discourses have continued. This February, The Chronicle of Higher
Education published an article by Harvard scholar Steven Pinker. Entitled “The
Intellectual War on Science”, this article explicitly attacks the demonizing criticism
that the humanities level against science and its utilization, and the author regards it
as a “war” launched by humanistic discourse on science. In Pinker’s view, humans must
rely on the progress of modern science in order to solve the emerging problems. Pinker's
defense of science is, in a sense, a response to the criticism raised years ago by Leon
Wieseltier, The New Republic's literary editor. Wieseltier opposed Pinker's scientism,
arguing that the humanities, despite being vilified by science, were always
indispensable.
However, this controversy is not a simple continuation of the debate of 60 years ago,
but contains new contexts and ideas. Today, science and technology are unrivaled both
in terms of their concepts and their practice. On the one hand, frontier explorations in
the field of science, compared with those of the humanities, often tend to attract more
attention from the public and the media. This raises the concern that scientific explorations
are endowed with an independent and unquestioned importance, while the necessary
introspection regarding their philosophical premises and ethical requirements is lacking.
On the other hand, the accelerating technological change of our period has profoundly shaped the
lifestyle of contemporaries. The revolutionary progress of AI – “artificial intelligence”
– and information technology is constantly updating people's communication methods
and self-awareness. In particular, the rapid expansion of mobile media clients like
Facebook, WeChat, and WhatsApp has reshaped the informational means that people use to
communicate on a daily basis worldwide. How to understand the enormous impact of
the technological force on human language expression, social relations, cultural habits
and even the evolution of genes is obviously not a subject that can be solved by a single
discipline.
In response to this challenge, of course, one cannot simply expect the Humanities
or Science to return to the integral wholeness of ancient times. Instead, it is important to
re-understand the meaning and functional boundaries of the two in a transdisciplinary
sense through a mutual perspective. As Derrida puts it, “The future of the humanities
depends on how we decide borders.” So, how to think about the possibility and
significance of boundary changes in new contexts determines the way we imagine the
future of the humanities. Derrida's consciousness of this issue is taken up by Catherine
Malabou who proposes a concept of “plasticity” (plasticité) to revisit the internal and
external relations of the humanities. In her view, the humanities can only reconstitute
themselves by overstepping boundaries, just as humans create themselves by crossing
Kant's “transcendent” boundary. Thus, she stresses that the future of the humanities is
not simply that they transform themselves into a science, but instead, it matters to
transform the most closely related sciences (such as the neuroscience of brain) into a
part of the humanities. In her research she reflects that through neuroscience
“transcendence” becomes an empirical state, thereby laying an empirical foundation for
the subject-form defined by ontology and metaphysics. Malabou thus breaks the
boundary between transcendence and experience, expanding the boundaries of the
humanities while activating the intellectual power with which the humanities
respond to reality.
Unlike Malabou’s approach from the perspective of the humanities, Pinker revisits
human nature from another point of departure: scientific research. He hopes to absorb
neuroscience, evolutionary biology, genetics, artificial intelligence and other scientific
fields through psychology, drawing on the achievements of the philosophy of mind to
reinterpret the constitutive principle of human nature. As the title of his best-selling
book The Better Angels of Our Nature shows, he is, from a scientific point of view,
optimistic about the improvement of human nature. If these two ways of thinking are
compared, based on transdisciplinary research, it is easy to see that the theory of human
nature reached by the humanities is quite different from that offered by science.
Malabou does not readily assess the possible improvement of human nature, while
Pinker's optimistic theory of human nature largely weakens the interpretative effect of
the humanistic tradition. Therefore, the critical issue at present is, in these divergent
cognitive situations, how to examine the remolding of traditional human nature through
expanding the interpretative boundaries of the Humanities and Science. And once
implemented in specific social situations, what kind of institutional composition and
ethical life does it really mean?
In modern society, the Humanities and Science, as two worldviews, are put into
practice in varied ways. The presentation of science, in the form of technology in
everyday life, can be measured by the corresponding social value or benefit, whereas
the shaping of secular feelings, by means of ethics, cannot yet be transformed into any
visible social value or benefit. More importantly, the humanities cannot simply be
counted as the opposite of science, since the value of the humanities lies in transcending
the mode of existence, science included, that takes social benefit as its index. Therefore, if
science does not exist as a cognitive factor intrinsic to the humanities, the
spiritual value of the humanities will be greatly diminished, and they will struggle to
cope with the actual situation of social life. In this sense, any sentimental appeals for
the humanistic spirit, if not implemented as the reproduction of humanistic knowledge,
will ultimately become unreal and unsustainable.
Exploring a New Paradigm for the Relational Configuration of the Humanities and Science
Opening Address at the Fifth “Ideas and Methods” International Summit Forum

Fang Weigui

Ladies and Gentlemen,

First of all, welcome to Beijing Normal University, in Beijing's finest season, for the Fifth “Ideas and Methods” International Summit Forum. This series of academic conferences, with its small-scale, high-quality style, has met with a favorable response in academic circles at home and abroad and has widely promoted in-depth exchange and exploration of many key issues in contemporary thought. The theme of this year's forum is “Media Philosophy, Cognitive Science, and the Future of the Humanities”. We chose this topic on the basis of our observation of, and reflection on, the profound changes in present-day forms of human life and intellectual discourse. In what follows, I will outline my own understanding of the considerations behind this theme.

However, this controversy is not a simple continuation of the debate of sixty years ago; it carries a new historical context and new intellectual content. Today, science and technology occupy an unrivaled, dominant position at both the conceptual and the practical level. On the one hand, compared with humanistic scholarship, frontier explorations in the sciences tend to attract more attention from the public and the media. In this attention, scientific exploration is endowed with an independent and unquestionable importance, while the necessary scrutiny of its philosophical premises and ethical requirements is lacking. On the other hand, ever-accelerating technological progress has profoundly shaped the way contemporaries live; the revolutionary advances of artificial intelligence and information technology, in particular, are constantly renewing people's modes of interaction and self-understanding. The rapid expansion of mobile media clients such as Facebook, WeChat, and WhatsApp has reshaped, worldwide, the informational means of people's daily communication. How to understand the enormous impact of this technological force on human language, social relations, cultural habits, and even genetic evolution is obviously not a task that any single discipline can accomplish.

Regrettably, the transdisciplinary research now popular in academia is often confined to the interior of either the humanities or the sciences; research that truly crosses the boundary between the two remains quite limited. Many humanities scholars know little about frontier achievements in the sciences and care even less about innovations in scientific method; often the one book they delight in citing is Thomas Kuhn's The Structure of Scientific Revolutions. Their critique of technological reality frequently amounts to redecorating the moral imagination of humanism with new terms borrowed from the philosophy of technology, while knowing little about the substantive transformations of science and technology themselves. This state of understanding has directly affected the way general education is carried out at universities. The general-education curricula of many institutions attend only to the historical integrity of humanistic knowledge and rarely recognize the importance of scientific consciousness for shaping the character of the modern citizen. Hence, in the various general-education reading lists one encounters, classic works in the history of science are rare. General education in this sense clearly does little to bridge the separation and confrontation between the humanities and science, and the reproduction of humanistic knowledge that follows from it is even less able to answer the sharp challenge that contemporary science and technology pose to the theory of human nature.

Of course, to respond to this challenge one cannot simply hope that the humanities and science will return to the integral state of human nature of ancient society. What matters is how, in a transdisciplinary sense and through a mutual gaze, we come to a new understanding of the boundaries of meaning and function of the two. As Derrida puts it: “The future of the humanities depends on how we decide borders.” How we think, in new contexts, about the possibility and significance of shifting boundaries thus determines the way we imagine the future of the humanities. Catherine Malabou's reflections take up Derrida's consciousness of this problem. She proposes the concept of “plasticity” (plasticité) to rethink the relation between the inside and the outside of the humanities. In her view, the humanities can recreate themselves only by overstepping their limits, just as human beings create themselves by crossing Kant's “transcendental” boundary. She therefore stresses that the future of the humanities lies not in simply turning themselves into a science, but in transforming the most closely related sciences (such as the neuroscience of the brain) into a part of themselves. In her research, neuroscience allows us to rethink how the “transcendental” becomes something that can be experienced, thereby giving an empirical basis to the form of the subject defined by ontology and metaphysics. Malabou thus breaks down the boundary between the transcendental and the empirical, expanding the boundaries of the humanities and activating the intellectual energy with which the humanities respond to reality.

Unlike Malabou, who starts from the humanities, Pinker rethinks human nature from the side of scientific research. He hopes to absorb neuroscience, evolutionary biology, genetics, artificial intelligence, and other scientific fields through psychology, while drawing on the achievements of the philosophy of mind, in order to reinterpret the constitutive principles of human nature. As the title of his best-selling book The Better Angels of Our Nature: Why Violence Has Declined indicates, he is optimistic, from a scientific point of view, about the improvement of human nature. If one compares these two paths of thought, it is not hard to see that the transdisciplinary research conducted from the two directions of the humanities and science ultimately points toward very different theories of human nature. Malabou does not lightly pass judgment on the possibility of improving human nature, whereas Pinker's optimistic theory of human nature largely weakens the interpretive force of the humanist tradition. The important question, therefore, is how, amid these divergent cognitive situations, we are to examine the remolding of the tradition of thinking about human nature that the humanities and science carry out by expanding their interpretive boundaries, and what kind of institutional composition and ethical life this really means once it is implemented in concrete social situations.

In modern society, the humanities and science, as two worldviews, are realized at the level of actual life in different ways. If the presence of science in everyday life, in the form of technology, can be measured by a corresponding social value or benefit, the shaping of secular feeling by the humanities, by way of ethics, cannot be converted into any such visible value or benefit. More importantly, the humanities cannot simply be regarded as the opposite of science; the value of the humanities lies precisely in transcending the mode of existence, science included, that takes social benefit as its index. Hence, once science can no longer exist as a cognitive factor internal to the humanities, the spiritual value of the humanities is bound to be greatly diminished, and they will struggle to cope with the actual circumstances of social life. In this sense, any appeal to the humanistic spirit that flaunts mere sentiment will, if it cannot be realized as a new reproduction of humanistic knowledge, in the end prove hollow and unsustainable.
Makers of Astonishment: The Diversity of Media Thinking
Siegfried Zielinski

With the comprehensive conquest of humankind by instruments and technologies, it has become difficult to understand the world we inhabit without reference to media. Whatever the discipline, the materiality of media must be taken into account when confronting its own questions. This prompts us to ask: can a dedicated academic territory be marked out for the many varieties of media research? And how can media studies, in the course of becoming a discipline, avoid turning closed, rigid, and self-circling, and thereby losing its critical force? Media should not be understood simply as means of communication, nor should media research take itself as its ultimate end; it must actively draw on the methods of thought of other fields in order to correct and reflect on itself, or it will end in an empty self-referential loop, cut off from the possibility of encountering the other. The author's media research is grounded in an atomist worldview and deeply inspired by Foucault's method of genealogy. The atomist tradition attributes the formation of the world to chance encounters brought about by the minute swerve of atoms. In this picture, all things are in ceaseless flux; contingency takes priority over all humanly constructed meaning and constitutes the fundamental cause of the world's existence. History, accordingly, is not monolithic but resembles a labyrinth: forking paths within forking paths, full of ruptures, deviations, accidents, and turns. The stable and continuous historical narratives that people construct retrospectively from the present often conceal the polyphony of history as it actually was. Foucault's genealogy teaches us to observe history from multiple perspectives and to respect those others who differ from us. Media research is not a matter of rigid division and arbitrary grasping, but of encountering and playing with the other, of excavating unfinished possibilities from the ruins of the past, of opening the future by liberating by-gone presents, and of continually bringing astonishment. The author reviews his own academic career: how he moved step by step from media history to media archaeology and genealogy, to the excavation of deep time, and on to his recent reflections on variantology. Compared with the concept of the heterogeneous, the variant is lighter and more dynamic. Variantology temporarily gathers phenomena that are utterly different or even mutually exclusive, yet it does not generate a standardized, closed system; the gathered elements can disperse again as needed. It attends to change, deviation, and difference without implying exclusion or discrimination. It crosses the boundary between East and West, gathering a series of possible genealogies into an imagined whole. Finally, the author attempts to divide the students of media thought since the Second World War into seven generations according to their differing intellectual approaches.
Siegfried Zielinski
1.
Past centuries have provided us with plenty of those who prophesy and plenty of
those who warn against the conquest of the last refuges of the anthropos by
instruments and technical systems. Catholic mathematician Johann Zahn (1641-1707)
believed the artificial eye (oculus artificialis) wielded such enormous power that the
optical apparatus—a robust telescope with a projection chamber attached—could even
extract impure spots from the supposedly pure and divine sun, that it could, in other
words, outwit astrophysical reality. Hegelian Ernst Kapp (1808-1896) was urging as
early as 1877 that culture itself must be reconceived from a technological perspective
as organ projection and that the structure of language was so intimately bound up with
the nature of the state that the development of electronic communications networks and
the kinematic concept of disciplinary full-closure represented the becoming-apparatus
of the actual late 19th century state. Friedrich Nietzsche (1844-1900) insisted toward
the end of his life that, the more his own psycho-physical powers of hand-writing
deteriorated, the more his typewriter would become co-author of his texts. His pencil
was smarter than he was, Albert Einstein (1859-1955) is purported to have joked.
Bertolt Brecht (1898-1956) knew already in the 1920s that art without technology was
sheer absurdity. Walter Benjamin (1892-1940) assumed—if somewhat more
serenely—that the typewriter would alienate the pen-holding hand of the litterateur only
“once the precision of typographical forms were immediately assimilated into the
conception of his books […] and the innervations of the commanding fingers had
replaced the familiar hand.” 1 Catholic iconoclast Marshall McLuhan (1911-1980)
wanted us to seek the agent of our sensibility and our understanding in the medium and
nowhere else, though this first pop star in the global market of media thinking largely
left open what he actually meant by this mysterious portent, the medium. Friedrich
Knilli (*1930), who was raised among the cutting and sewing machines of his uncle’s
garment factory in the Austrian city of Graz and later studied mechanical engineering,
1 Walter Benjamin, “Lehrmittel. Prinzipien der Wälzer oder die Kunst, dicke Bücher zu machen”
(Teaching Aids: The Principles of Tomes, or the Art of Making Thick Books) in Gesammelte
Werke, vol. 4, 1 (Frankfurt/Main: Suhrkamp), p. 105. My translation.
came to understand the powerful materiality of the medial through the vibrating
membranes of loudspeakers in Austria and Germany’s early radio studios and then
developed from this his own psycho-physical and aesthetic concept of the total sound
spectacle [totales Schallspiel]. That was about the same time that Jacques Lacan (1901-
1981) began insisting emphatically that even the unconscious was structured like
language. It was also at about this time that structure itself began to win the upper hand
over the Subject in disciplines ranging from ethnology to history and literature and even
early theories of cinema. No longer were we Subjects, but projects that in the ideal
circumstance projected worlds of our own—as Vilém Flusser (1920-1991) consistently
emphasized in his own unique way. In an equally eschatological gesture, Friedrich
Kittler (1943-2011) claimed, on the basis of his “technical a priori,” that all that was
expressed and all that our eyes and ears received as symbolic material was first and
foremost technology and that it always would remain technology, even at the vanishing
point of its development.
These diverse concerns, from prophetic and cautionary voices alike, have each
gained acceptance in various ways. What we refer to as our world is no longer thinkable
without the medial. Mathematicians and physicists, medievalists, philologists of all
kinds, theologians, philosophers, biologists and art critics all know that they must deal
with media—or at least with materials that are contingent on media—when they trawl
through the containers, archives and contemporaneous utterances that have been
produced in their respective fields, in their endeavor to understand and to impart. All
of them equally must learn to read, interpret and calculate medial surfaces and
materialities, as well as the metaphysical messages intimately linked with these—
messages that articulate and transport symbolic bodies and their networks.
Perhaps it is too soon to answer such questions with any certainty. It is however
not too soon to pose them resolutely. For the tendency toward the establishment of
media theory (in Germany, this has even been immoderately propagated as media
science) as its own discipline with its own laws, hierarchies, canons, power structures,
conceptualities and clearly defined origins is quite strong. The disastrous consequence
of such a circumscription, for instruction and increasingly also for research, is that the
loops of self-reflection embarked upon by both apprenticed and established media
experts assume ever more audacious forms and contents. Students now earnestly
believe that the only legitimate content of media can be nothing but other media, and
they write and act accordingly. Ever since critical thought was cast out from the
humanities, the medial has been confirmed and celebrated – and decelerated – as the
communicative potential: for control and correction but also for culture. The difference
engine has become an engine of management and design, even an engine of careers. To
posit that nothing anymore can exist and thrive independently of mediation-machines
tends to inflate those very machines into all-powerful, self-sufficient centrifuges
positioned right at the center of what journalists still blithely refer to as society, where
they whirl away, organizing their own academic circles around themselves.
Elaborated media thinking needs in its immediate vicinity the depths and gravity
of other modes of thought that are not oriented toward medial phenomena, with which
it may periodically connect, by which it may be stimulated, urged on, occasionally
reined in and reminded of its place. The study of medial sensations and structures is not
an end in itself, or else it would devolve into the very paradoxy in the void which
Baudrillard never tired of criticizing. Ultimately, technical means of communication
only serve to make encounter impossible. It is the imaginary that saves us in the ongoing
acid test between the real and the symbolic; yet it punishes us at the same time, given
its semblant character. It was Jacques Lacan, borrowing an exceptional media concept
from Lucretius,3 who so admirably formulated this in a number of variants.
Let’s dwell for a moment, then, with this ancient thinker who has been so
tremendously important in my own intellectual passages through medial phenomena.
The clinamen is “the smallest deviation possible” that may take place “we know not
2 See FLUSSERIANA—An Intellectual Toolbox, eds. Siegfried Zielinski, Peter Weibel and Daniel
Irrgang (Minnesota 2015), p. 17. The book is a tri-lingual publication (English, German,
Portuguese).
3 Nam si abest quod ames, praesto simulacra tamen sunt / For if what you love is distant, its
images are present. (Titus Lucretius Carus, De rerum natura / On the Nature of Things)
when, we know not where,” as Louis Althusser puts it, citing De rerum natura, that
incomparable natural historical poem written by Lucretius in the last century prior to
the Common Era. The clinamen causes an atom to “swerve” from its vertical plunge
into the void, where “there occurs an encounter between one atom and another, and this
event becomes advent on condition of the parallelism of the atoms, for it is this
parallelism which, violated on just one occasion, induces the gigantic pile-up and
collision-interlocking of an infinite number of atoms, from which a world is born”—a
world, in other words, as an aggregate of atoms that is created through a chain reaction
set off by the first swerve and the first encounter.4 Althusser, along with ancient Greek
natural philosopher Epicurus, was convinced that the origin of any world, thus any
reality and any meaning, is due to a deviation; that deviation and not reason is the cause
of the origin of the world.5
Considered from the perspective of deep time, my own media research has at its
core been powerfully shaped by those thinkers, poets and naturalists known to the
histories of science and mind as Atomists. Before Socrates and of course beyond the
great dividers, Plato and Aristotle, the Atomists conceive of the world fundamentally
as turmoil, a ceaselessly streaming exchange of the smallest particles, energies and
signals, a world that does not yet require such severings as that between subject and
object, active and passive, matter and mind, between the receiver on one side and the
sender on the other. Anaxagoras, Anaximander, Democritus, Empedocles, Epicurus,
Lucretius and others thought the world as perpetually colliding objects, as the billiard-
reality of interobjectivity, two and a half thousand years before this concept again
acquired effective power under the banner of things becoming independent; they
thought chaos, its complex regularities and its incalculabilities; they thought the world
as porous objects that articulate themselves and thus reveal themselves to our
perception, just as much as we in turn are realized for them, become ecstatic for them
and step out of ourselves. It was Martin Heidegger who rediscovered this world in the
20th century and ontologically fundamentalized it with unnecessary severity. And for
French philosophers of becoming and of energetic dialogue, too—from Gilles Deleuze
to Félix Guattari and, in a different form, from Alain Badiou via Jean-Luc Nancy to
Jacques Rancière, with whom I had the pleasure of teaching on the same faculty for a
number of years6—this world full of motion and events is the only thinkable one, or
better: the only attractive one with respect to a basic idea: that the world which is known
to us has only a single raison d’être, which consists in the fact that it is changeable and
that it is constantly changing.
4 Louis Althusser, “The Underground Current of the Materialism of the Encounter” in Philosophy
of the Encounter: Later Writings, 1978-87, eds. François Matheron and Oliver Corpet, trans. G.M.
Goshgarian (London, New York: Verso, 2006).
5 Ibid.
6 I am referring here to the European Graduate School (EGS) in Saas-Fee, Switzerland.
2.
Michel Foucault was a master of the kind of writing that makes us operatively
conscious of where what we call our civilization comes from, why and how we have
evolved into powerful beings; and he managed to pose these questions in such a way
that we are able to critically examine what we call history even as we write it. Deriving
it from an anti-historical concept developed by Nietzsche, Foucault designates this
process as genealogy. It enables us to understand developments as labyrinthine, as
movements associated with digressions and impasses, and it assumes a many-eyed
seeing and a many-tongued writing.
7 Tracy B. Strong, Friedrich Nietzsche and the Politics of Transfiguration (Urbana, Chicago:
University of Illinois Press, 2000), p. 54.
8 Friedrich Nietzsche, Kritische Studienausgabe (Berlin: de Gruyter, 1980), p. 170. My
translation.
now at the start of the 21st century we need another move into the open.9 This is the
title of Berlin-based philosopher Dietmar Kamper’s invitation to unabashed exchange
among architects, artists, musicians, philosophers and media specialists—an invitation
to a debate that need not lead to resolutions but to an intellectual adventure, that may
not necessarily rule out academic chairmanships but that ultimately may not need them.
I have in my arsenal of language no better phrase for describing what this project
toward a genealogy of media thinking is really about. The avant-garde is nothing but
the reinterpretation of by-gone presents, and genealogy proves above all to be an
operation with a lofty aim: namely, that of re-opening the windows and doors onto that
nervous, heterotopic place of possibilities that the thinking of media once occupied, and
organizing passages through the boredom that has taken root there, and recalling the
gardens10 which poachers from the most disparate disciplines and schools of thought
have in passing laid out there and cultivated.
3.
9 Umzug ins Offene is the title of an edited volume: Umzug ins Offene. Vier Versuche über den
Raum (Move into the Open: Four Experiments About Space), eds. Tom Fecht and Dietmar
Kamper (Berlin: Springer, 2000).
10 “…They see an entanglement of spaces that emerges as it would were we somewhere like a
cinema auditorium. But the oldest example of a heterotopia may well be the garden…” Michel
Foucault, Les hétérotopies/Le corps utopique (Heterotopias/The Utopian Body), Two Radio
Lectures, France Culture, December 7th and 21st 1996 (INA, Paris 2004).
Baltrušaitis’s fantastical writings on anamorphic art and the mirror11 or Gustav René
Hocke’s superb 1957 work on mannerism in European art, Die Welt als Labyrinth (The
World As Labyrinth). 12 But archaeology first emerges as a notable thematic and
methodological paradigm in the humanities as a discourse effect of the work of historian,
sociologist and philosopher Michel Foucault. The Birth of the Clinic: An archaeology
of medical perception (Paris: PUF, 1963), The Order of Things: An archaeology of
human sciences (Paris: Gallimard, 1966) and The Archaeology of Knowledge (Paris:
Gallimard, 1969) led a variety of disciplines, some with notable hesitation, to conduct
analyses of historical phenomena which sought to interweave aspects of the political,
the cultural, the technical and the social—to conduct, in other words, interdiscursive
analyses. At the Technical University of Berlin, where I studied, new research projects
were being articulated on topics as diverse as the history of female labor (as in Karin
Hausen’s social history of the sewing machine), the intellectual and social history of
mathematics and the history of computing machines (as in Herbert Mertens and
Hartmut Petzold’s early studies). The periodical Wechselwirkung (Interaction), founded
in 1979 in Berlin, provided a unique platform for this particularly active
interdiscursivity between the sciences of nature and the sciences of mind. Such diverse
archaeologies and genealogies evolved as academic attempts to intervene on the often
encrusted systems of knowledge and organization in the established disciplines and to
aggravate and alter these by means of critical, transdisciplinary reflection.
My first media critical publications emerged from just such a milieu, as did my
early writings on the history of medial attractions like the Arbeiter-Radio-Bewegung
(Workers’ Radio Movement) of the interwar period between 1919 and 1933. In today’s
terms, historicized in relation to hegemonic media apparatuses, one might deem this the
first hacker movement, vested in an aura similar to that which surrounded the self-styled
Guerrilla Television of the electronic avant-garde of the late 1960s and early 1970s.13
The epic gesture of intervening action that we had learned to extrapolate above all from
Bertolt Brecht and his radio heuristic, but also from the hopeful potential in the writings
of Walter Benjamin, was just as important to us as was work on the utopian possibilities
we saw in a collectivity in which, as a rule, there would be no exclusions and no
hegemonic hierarchies in the exchanges among its members. Jürgen Habermas was as
11 See Baltrušaitis, Anamorphoses, ou magie artificielle des effets merveilleux (Paris: Olivier
Perrin, 1955), English translation: Anamorphic Art, trans. W.J. Strachan (Harry N. Abrams, 1977);
and Le miroir: Essai sur une légende scientifique: révélations, science-fiction et fallacies (The
Mirror: Essay on a Scientific Legend: Revelations, Science Fiction and Fallacies) (Paris: Éditions
du Seuil, 1978).
12 Hocke, Die Welt als Labyrinth. Manier und Manie in der europäischen Kunst. Beiträge zur
Ikonographie und Formgeschichte der europäischen Kunst von 1520 bis 1650 und der Gegenwart
(The World as Labyrinth: Manner and mania in European art. Contributions to the iconography
and formal history of European art from 1520 to 1650 and the present) (Reinbek: Rowohlt, 1957).
13 See the part history, part instruction manual by Michael Shamberg and Raindance Corporation
14 There is a chapter dedicated to this in my book […After the Media]: News from the Slow-
Fading Twentieth Century (Minnesota: Univocal, 2013), pp. 173ff.
as a professor of audiovisions, which was the original title of a book I published in 1989.
This context resulted in 1991 in the project we called “One Hundred—20 Short Films
on the Archaeology of Audiovision,” which was our contribution to a celebration of the
first one hundred years of cinema history. In tandem with this, I prepared essays in the
form of theses for the Austrian magazine Eikon, one of which is presented here for the
first time in American English.
In the extreme rush of networked bustle, which incorporated critique as well, I
began to discover, in an indispensable counter-movement, ever more of that dimension
of the medial that I would go on to tamper with intensively for a good 20 years to come,
much to my great intellectual pleasure: the deep time of the nexus of art, science and
technology. In variantology I came up with a new thinking and playing field, one in
which I was able to investigate this exhilarating context as a unique poetics of relations.
15 This resulted in the five volumes of Variantology which I was able to publish through Walther
König in Cologne between 2005 and 2011, in collaboration with a rotating pool of editors and on
the basis of a worldwide economy of friendship with the contributing authors.
In contrast to the heterogeneous, with its heavy inflections of ontology and biology,
the variant is more interesting, in methodological and epistemological respects, as a
mode of lightness and movement. As such, the variant is equally at home in
experimental science as it is in diverse artistic practices,16 most forcefully in music.
Variation, versioning, digression—in playing and interpretation—are an obvious part
of the vocabulary as well as the everyday practice of composers and interpreters alike.
In a narrower sense, the variant designates a modulation, say from minor to major tonal
series, brought about by a change in the interval.
The semantic field that I am trying to open by means of this concept has a primarily
positive connotation. To be different, to diverge, to shift, to alternate are themselves
alternative translations for the Latin verb variare. Its connotation topples over into the
negative only when used by the speaking subject as a means of exclusion and
discrimination—which the word itself does not actually abide. To vary something that
is present is an alternative to its destruction, an alternative that played a remarkably
sustaining role in the diverse avant-gardes of the 20th century, in politics as well as art.
And, of course, an attractive medial format also inheres in the concept, a format one
relates to as one would to a sensation. Long before the cinema, the variety show was
experimenting with combining diverse stage practices into a colorful whole that would
come together only in the time of a given performance.
4.
16 For a powerful contemporary example in the visual arts, see Allen Ruppersberg, One of
Many—Origins and Variants (Cologne: Buchhandlung Walther König, 2005).
17 We have elsewhere dealt extensively with the dimensions of deep time. See, for instance, Deep
Time of the Media (Boston: MIT Press, 2006); the German original was published in 2002.
I have attempted a thought experiment in operationally grouping past and present
media researchers and protagonists by generation—not least in order to temporally
locate my own position in the context of this still fledgling genealogy of our field of
intellectual energies. I have started from the presumption that we are presently well into
the seventh generation of explicit media thinkers.18 Given the accelerated development
of the interdiscursive field in the second half of the 20th century, I decided to key the
shifts in generation to decade markers. The generational groupings are determined not
by the age of the thinkers but by the distinctive contributions each has individually
made to this heterogeneous field of knowledge. I have paid special attention to
discourse effects that have been observed in Europe and that have also had a
discernible impact in Germany, for instance.
Early thinkers through the end of WWII: Theodor W. Adorno, Rudolf Arnheim,
W. Ross Ashby, André Bazin, Walter Benjamin, Henri Bergson, Bertolt Brecht, Karl
Bühler, Claude Cahun, Ernst Cassirer, Germaine Dulac, Sergei Eisenstein, Gisèle
Freund, René Fülöp-Miller, Aleksei Gastev, Siegfried Giedion, Fritz Heider, Max
Horkheimer, Harold Innis, Ernst Kapp, Siegfried Kracauer, Lev Kuleshov, Harold
Lasswell, Kazimir Malevich, Filippo T. Marinetti, Solomon Nikritin, John von
Neumann, Charles S. Peirce, Luigi Russolo, Ferdinand de Saussure, Hermann
Scherchen, Claude Shannon, Wilbur Schramm, Alan Turing, Dziga Vertov, Paul
Watzlawick, Hermann Weyl, Fritz Winckel…
First mid- and post-war generation (explicitly active since the 1940s/50s): Günther
Anders, Peter Bächlin, Roland Barthes, Max Bense, John Berger, Maya Deren, Jean-
Luc Godard, Richard Hoggart, Danièle Huillet, E. Katz/J.G. Blumler, Harry Kramer,
Marshall McLuhan, Werner Meyer-Eppler, Abraham Moles, Raymond Queneau,
Gilbert Simondon, Hans Heinz Stuckenschmidt, Wolf Vostell, Roman Wajdowicz, The
Whitney Brothers, Norbert Wiener…
Second generation (explicitly active since the 1960s): Dieter Baacke, Nanni
Balestrini, Gianfranco Baruchello, Konrad Bayer, Gilbert Cohen-Séat, Guy Debord,
Umberto Eco, Vilém Flusser (in Brazil), Otto F. Gmelin, Jürgen Habermas, Helmut
Heißenbüttel, Walter Höllerer, Friedrich Knilli, Ferdinand Kriwet, Gerhard Maletzke,
Denis McQuail, Christian Metz, Franz Mon, Frieder Nake, Georg Nees, Ted Nelson,
Nam June Paik, Pier Paolo Pasolini, Wolfgang Ramsbott, Jasia Reichardt, Gerhard
Rühm, Marc Vernet, Paul Virilio, Peter Weibel, Oswald Wiener, Raymond Williams…
Third generation (explicitly active since the 1970s): Jean-Louis Baudry, Hans
Belting, René Berger, Gábor Bódy, Jean-Louis Comolli, Gilles Deleuze, Mary Ann
Doane, Franz Dröge, Hermann Klaus Ehmer, Thomas Elsaesser, Hans Magnus
18 For the distinction between explicit and implicit media thinkers, see Zielinski, […After the
Media]: News from the Slow-Fading Twentieth Century (Minneapolis: Univocal, 2013), esp. ch. 3,
p. 173ff. The implicit media thinkers are not contained in the list.
Enzensberger, VALIE EXPORT, Friede Grafe, Félix Guattari, Hans Ulrich Gumbrecht,
Stuart Hall, Stephen Heath, Knut Hickethier, Horst Holzer, Stuart Hood, Eberhard
Knödler-Bunte, Gerhard Lischka, Laura Mulvey, Friederike Pezold, Marcelin Pleynet,
Hans Posner, Erwin Reiss, Michel Serres, Kristin Thompson, Sven Windahl, Peter
Wollen…
Fourth generation (explicitly active since the 1980s): Anne-Marie Duguet, Peter
Bexte, Friedrich Kittler, Teresa de Lauretis, Vilém Flusser (in Europe), Florian Rötzer,
Dietmar Kamper, Avital Ronell, Jean Baudrillard, Sybille Krämer, Arthur and
Marilouise Kroker, Werner Künzel, Miklós Peternák, Jean-François Lyotard, Pierre
Lévy, Hartmut Petzold, Hans-Ulrich Reck, Irit Rogoff, Gerburg Treusch-Dieter, Georg
Christoph Tholen, Michael Wetzel, Hartmut Winkler, Christina von Braun, Joachim
Paech, Siegfried Zielinski…
Fifth generation (explicitly active since the 1990s): Marie-Luise Angerer, Peter
Berz, Manuel Castells, Régis Debray, Manuel DeLanda, Bernhard Dotzler, Timothy
Druckrey, Lorenz Engell, Wolfgang Ernst, Matthew Fuller, Ulrike Gabriel, Miriam
Hansen, Donna Haraway, N. Katherine Hayles, Hans-Christian von Herrmann, Erkki
Huhtamo, Brenda Laurel, Thomas Y. Levin, Geert Lovink, Lev Manovich, Dieter
Mersch, Brian Massumi, Alla Mitrofanova, Claus Pias, Nils Röller, Henning
Schmidgen, Bernhard Siegert, Andrey Smirnov…
Sixth generation (explicitly active in the 2000s and beyond): Arianna Borrelli,
Knut Ebeling, Alexander Galloway, Erich Hörl, Ute Holl, Yuk Hui, David Link, Mara
Mills, Jussi Parikka, Matteo Pasquinelli, Patricia Pisters, Raqs Media Collective, Gao
Shiming, Hito Steyerl, Frederik Stjernfelt, Eugene Thacker, Tiqqun, Joanna Zylinska,
et al.
***
心灵怎样参与(人工)交流?
——(以)机器思考的替代路径
汉 森(Mark Hansen)
我的论文将首先讨论目前军事无人机升级项目中与日俱增的决策自动化趋
势,细审此一趋势可以看出,它并未能够为自动决策提供技术支撑,反将此类项
目推向僵局,凸显出计算机器及其运算过程缺乏应变性。简言之,应变性才是仿
真过程的关节点。我将此种僵局归因于人工智能研究乃至于机器学习算法研究中
的个人主义,进而试图借助法国哲学家吉尔伯特·西蒙东的相关理论,寻求一种
替代性的趋近于(以)机器思考的路径,藉由研究人机联合过程中适度的科技个
体性生成之可能性,补充完善西蒙东的思考。实现此种个性生成的关键在于“关
系化的自主性”这一概念,关系自主异于实质自主,指机器凭借自身的可操作性
而获得自主性,就当前的算法系统(超乎所有具体网络之上)而言,关系自主指
通过处理社会与情感数据活动获得自主性。为探明这一发展的潜力,助益目前愈
发广泛的人机集合协同现象之理论化,更贴切地预测未来真正意义上的机器智能,
我主张机器的关系自主来自于其对人类应变性数据的处理,其实质是人类出借给
机器的虚拟应变性,我将借助对 HBO 科幻连续剧《西部世界》中一些场景的分
析具体论证此一观点。
Mark Hansen
Abstract: My paper will begin with the escalation of the drive to autonomize
decision-making in contemporary military drone development programs. By submitting
this drive to critical inspection, I will suggest that, far from forming a technical fix for
the problem of automating decision, this line of development isolates the impasse to
any such project: the fact that computational machines and processes lack contingency.
Put simply, contingency constitutes a cog in the process of simulation. I shall link this
impasse to the individualist ontology of artificial intelligence research, up to and
including contemporary work on machine learning algorithms, and, with the help of
the French philosopher Gilbert Simondon, seek to suggest an alternative path toward
thinking (with) machines, one that expands on Simondon’s work by investing in the
possibility of a joint human-machinic process of properly technological individuation.
The key to such a process is, I shall argue, the concept of relational autonomy; in
contrast to all notions of substance autonomy, relational autonomy stipulates that
machines acquire autonomy through their operationality, and that in the case of
contemporary computational systems (above all the web), this means they acquire
autonomy by processing social and affective data. To explore the potential of this
development, both for contemporary theorization of the co-functioning of humans and
machines in larger, technically-distributed assemblages, and for future speculation
regarding truly machinic intelligence, I will argue that relational autonomy of machines
is obtained by their processing of human contingency – a virtual contingency that we
humans lend to them. I will try to make these issues concrete by analyzing some scenes
from the recent HBO science fiction series, Westworld.
其他的事:人工智能、机器人和社会
冈克尔(David J. Gunkel)
我们正处于被机器人入侵的时代。现在,机器无处不在,几乎可以做任何事
情。我们在网络上和他们聊天,和他们一起玩数码游戏,并且依赖他们日益见长
的本领来组织和管理我们日常生活的方方面面。 面对机器入侵,最关键的问题
是我们如何理解并应对由此带来的全新的社会机遇和挑战。本研究将分成三个步
骤或行动。第一步将重新评估我们定义和理解事物的典型方式。这将以工具理论
为目标,并且重新审视这一理论,因为工具理论将事物,特别是技术产品,仅仅
视为服务于人类利益和目标的工具而已。第二步将探讨人工智能、学习算法和社
交机器人的最新进展给这种标准的默认理解带来怎样的机遇和挑战。最后,作为
结语,第三部分将说明后果,阐述这一发展对我们意味着什么,还有哪些我们可
以与之交流和互动的实体,以及哪些新的社会现状和环境开始规定 21 世纪的生
活方式。
David J. Gunkel
We are, it seems, in the midst of a robot apocalypse. The invasion, however, does
not look like what we have been programmed to expect from decades of science fiction
literature and film. It occurs not as a spectacular catastrophe involving a marauding
army of alien machines descending from the heavens with weapons of immeasurable
power. Instead, it takes place, and is already taking place, in ways that look more like
the fall of Rome than Battlestar Galactica, with machines of various configurations and
capabilities slowly but surely coming to take up increasingly important and influential
positions in everyday social reality. “The idea that we humans would one day share the
Earth with a rival intelligence,” Philip Hingston (2014) writes, “is as old as science
fiction. That day is speeding toward us. Our rivals (or will they be our companions?)
will not come from another galaxy, but out of our own strivings and imaginings. The
bots are coming: chatbots, robots, gamebots.”
And the robots are not just coming. They are already here. In fact, our
communication and information networks are overrun, if not already run, by machines.
It is now estimated that over 50% of online traffic is machine generated and consumed
(Zeifman 2017). This will only increase with the Internet of things (IoT), which is
expected to support over 26 billion interactive and connected devices by 2020 (by way
of comparison, the current human population of planet earth is estimated to be 7.4
billion) (Gartner 2013). We have therefore already achieved and live in that future
Norbert Wiener (1950) had predicted at the beginning of The Human Use of Human
Beings: Cybernetics and Society: “It is the thesis of this book that society can only be
understood through a study of the messages and the communication facilities which
belong to it; and that in the future development of these messages and communication
facilities, messages between man and machines, between machines and man, and
between machine and machine, are destined to play an ever-increasing part” (p. 16).
What matters most in the face of this machine incursion is not resistance—insofar
as resistance is already futile—but how we decide to make sense of and respond to the
new social opportunities or challenges that these things make available to us. The
investigation of this matter will proceed through three steps or movements. The first
part will critically reevaluate the way we typically situate and make sense of things. It
will therefore target and reconsider the instrumental theory, which characterizes things,
and technological artifacts in particular, as nothing more than tools serving human
interests and objectives. The second will investigate the opportunities and challenges
that recent developments with artificial intelligence, learning algorithms, and social
robots pose to this standard default understanding. These other kinds of things challenge
and exceed the conceptual boundaries of the instrumental theory and ask us to reassess
who or what is (or can be) a legitimate social subject. Finally, and by way of conclusion,
the third part will draw out the consequences of this material, explicating what this
development means for us, the other entities with which we communicate and interact,
and the new social situations and circumstances that are beginning to define life in the
21st century.
meaning that something becomes what it is or acquires its properly “thingly character”
in coming to be put to use for some particular purpose. A hammer, one of Heidegger's
principal examples, is for building a house to shelter us from the elements; a pen is for
writing an essay like this; a shoe is designed to support the activity of walking.
Everything is what it is in having a “for which” or a destination to which it is always
and already referred. Everything therefore is primarily revealed as being a tool or an
instrument that is useful for our purposes, needs, and projects.1
To put it in colloquial terms (which nevertheless draw on and point back to Heidegger’s
example of the hammer): “It is a poor carpenter who blames his tools.”
This way of thinking not only sounds level-headed and reasonable, it is one of the
standard assumptions deployed in the field of technology and computer ethics.
According to Deborah Johnson’s (1985) formulation, "computer ethics turns out to be
the study of human beings and society—our goals and values, our norms of behavior,
the way we organize ourselves and assign rights and responsibilities, and so on" (p. 6).
Computers, she recognizes, often "instrumentalize" these human values and behaviors
in innovative and challenging ways, but the bottom line is and remains the way human
agents design and use (or misuse) such technology. Understood in this way, computer
systems, no matter how automatic, independent, or seemingly intelligent they may
become, "are not and can never be (autonomous, independent) moral agents" (Johnson,
2006, p. 203). They will, like all other things, always be instruments of human value,
decision making, and action.
answering questions put to it, and it will only pass if the pretense is reasonably
convincing. A considerable proportion of a jury, who should not be experts about
machines, must be taken in by the pretense. They aren’t allowed to see the machine
itself—that would make it too easy. So the machine is kept in a faraway room and the
jury are allowed to ask it questions, which are transmitted through to it” (p. 495).
According to Turing's stipulations, if a machine is capable of successfully simulating a
human being in communicative interactions to such an extent that human interlocutors
(or “a jury” as Turing calls them in the 1952 interview) cannot tell whether they are
talking with a machine or another human being, then that device would need to be
considered intelligent (Gunkel 2012b).
At the time that Turing published the paper proposing this test-case, he estimated
that the tipping point—the point at which a machine would be able to successfully play
the game of imitation—was at least half-a-century in the future. "I believe that in about
fifty years’ time it will be possible to programme computers, with a storage capacity of
about 10⁹, to make them play the imitation game so well that an average interrogator
will not have more than 70 per cent chance of making the right identification after five
minutes of questioning" (Turing, 1999, p. 44). It did not take that long. Already in 1966
Joseph Weizenbaum demonstrated a simple natural language processing (NLP)
application that was able to converse with human interrogators in such a way as to
appear to be another person. ELIZA, as the application was called, was what we now
recognize as a “chatterbot.” This proto-chatterbot2 was actually a rather simple piece of
programming, “consisting mainly of general methods for analyzing sentences and
sentence fragments, locating so-called key words in texts, assembling sentence from
fragments, and so on. It had, in other words, no built-in contextual framework of
universe of discourse. This was supplied to it by a 'script.' In a sense ELIZA was an
actress who commanded a set of techniques but who had nothing of her own to say"
(Weizenbaum, 1976, p. 188). Despite this rather simple architecture, Weizenbaum's
program demonstrated what Turing had initially predicted:
Since the debut of ELIZA, there have been numerous advancements in chatterbot
design, and these devices now populate many of the online social spaces in which we
live, work, and play. As a result of this proliferation, it is not uncommon for users to
assume they are talking to another (human) person, when in fact they are just chatting
up a chatterbot. This was the case for Robert Epstein, a Harvard University PhD and
former editor of Psychology Today, who fell in love with and had a four-month online
“affair” with a chatterbot (Epstein, 2007). This was possible not because the bot, which
went by the name “Ivana,” was somehow intelligent, but because the bot’s
conversational behavior was, in the words of Byron Reeves and Clifford Nass (1996),
“close enough to human to encourage social responses” (p. 22). And this approximation
is not necessarily “a feature of the sophistication of bot design, but of the low bandwidth
communication of the online social space,” where it is much easier to convincingly
simulate a human agent (Mowbray, 2002, p. 2).
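The keyword-and-script mechanism Weizenbaum describes (scanning a sentence for so-called key words and assembling a reply from fragments, with all domain content supplied by an external script) can be sketched in a few lines. The rules below are invented for illustration; they are not taken from Weizenbaum's original DOCTOR script:

```python
import re

# An ELIZA-style script: each rule pairs a key-word pattern with a
# response template; "(.*)" captures a fragment of the user's sentence
# that is spliced back into the reply. The rules carry all the
# "knowledge" -- the engine itself knows nothing about any topic.
SCRIPT = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # used when no key word matches

def respond(sentence: str) -> str:
    """Scan the script for the first matching key word and fill its template."""
    for pattern, template in SCRIPT:
        match = pattern.search(sentence)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(respond("I feel lonely tonight"))  # -> Why do you feel lonely tonight?
print(respond("The weather is fine."))   # -> Please go on.
```

Because the engine merely reflects fragments of the user's own words back, a richer script and the low bandwidth of text chat are enough to sustain the "pretense" that Turing described, which is why the architecture still underlies many present-day chatterbots.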
Both AlphaGo and Tay are AI systems using connectionist architecture. AlphaGo,
as Google DeepMind (2015) explains, “combines Monte-Carlo tree search with deep
neural networks that have been trained by supervised learning, from human expert
games, and by reinforcement learning from games of self-play.” In other words,
AlphaGo does not play the game of Go by following a set of cleverly designed moves
described and defined in code by human programmers. The application is designed to
formulate its own instructions from discovering patterns in existing data that has been
assembled from games of expert human players (“supervised learning”) and from the
trial-and-error experience of playing the game against itself (“reinforcement learning”).
Although less is known about the exact inner workings of Tay, Microsoft explains that
the system “has been built by mining relevant public data,” i.e. training its neural
networks on anonymized data obtained from social media, and was designed to evolve
its behavior from interacting with users on social networks like Twitter, Kik, and
GroupMe (Microsoft 2016a). What both systems have in common is that the engineers
who designed and built them have no idea what these things will eventually do once
they are in operation. As Thore Graepel, one of the creators of AlphaGo, has explained:
“Although we have programmed this machine to play, we have no idea what moves it
will come up with. Its moves are an emergent phenomenon from the training. We just
create the data sets and the training algorithms. But the moves it then comes up with
are out of our hands” (Metz, 2016, p. 1). Consequently, machine learning systems, like
AlphaGo, are intentionally designed to do things that their programmers cannot
anticipate or completely control. In other words, we now have autonomous (or at least
semi-autonomous) things that in one way or another have “a mind of their own.” And
this is where things get interesting, especially when it comes to questions of social
responsibility and behavior.
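The "reinforcement learning from games of self-play" described above can be illustrated with tabular value learning on a toy take-away game. Everything below (the game, the negamax-style update, the parameter values) is a deliberately simplified sketch for illustration only; it bears no resemblance to AlphaGo's actual architecture of deep networks and Monte-Carlo tree search:

```python
import random

random.seed(0)

# Toy game: a heap of stones; players alternate removing 1-3 stones and
# whoever takes the last stone wins. Optimal play leaves the opponent a
# multiple of 4.
HEAP, MOVES = 10, (1, 2, 3)
Q = {}  # Q[(stones, take)] = estimated value of that move for the mover

def best(stones):
    """Greedy move: highest estimated value among legal moves."""
    return max((m for m in MOVES if m <= stones),
               key=lambda m: Q.get((stones, m), 0.0))

def train(episodes=20000, alpha=0.5, eps=0.3):
    """Self-play: one shared Q-table plays both sides of every game."""
    for _ in range(episodes):
        stones = HEAP
        while stones > 0:
            legal = [m for m in MOVES if m <= stones]
            move = random.choice(legal) if random.random() < eps else best(stones)
            after = stones - move
            # Reward: +1 if this move wins outright; otherwise minus the
            # opponent's best estimated value in the resulting position.
            target = 1.0 if after == 0 else -max(
                Q.get((after, m), 0.0) for m in MOVES if m <= after)
            key = (stones, move)
            Q[key] = Q.get(key, 0.0) + alpha * (target - Q.get(key, 0.0))
            stones = after

train()
print(best(10))  # learned opening move; optimal play takes 2, leaving 8
```

A single table plays both sides, so every game generates a training signal for winning and losing moves alike; no move is ever "described and defined in code" by the programmer, who supplies only the game rules and the update procedure, which is the point Graepel makes about AlphaGo's moves being an emergent phenomenon of training.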
AlphaGo was designed to play Go, and it proved its ability by beating an expert
human player. So who won? Who gets the accolade? Who actually beat the Go
champion Lee Sedol? Following the dictates of the instrumental theory, actions
undertaken with the computer would be attributed to the human programmers who
initially designed the system and are capable of answering for what it does or does not
do. But this explanation does not necessarily hold for an application like AlphaGo,
which was deliberately created to do things that exceed the knowledge and control of
its human designers. In fact, in most of the reporting on this landmark event, it is not
Google or the engineers at DeepMind who are credited with the victory. It is AlphaGo.
In published rankings, for instance, it is AlphaGo that is named as the number two
player in the world (Go Ratings, 2016). Things get even more complicated with Tay,
Microsoft’s foul-mouthed teenage AI, when one asks the question: Who is responsible
for Tay’s bigoted comments on Twitter? According to the standard instrumentalist way
of thinking, we would need to blame the programmers at Microsoft, who designed the
application to be able to do these things. But the programmers obviously did not set out
to create a racist Twitterbot. Tay developed this reprehensible behavior by learning
from interactions with human users on the Internet. So how did Microsoft answer for
this? How did they explain things?
word, is our fault. Later, on 25 March 2016, Peter Lee, VP of Microsoft Research,
posted the following apology on the Official Microsoft Blog: “As many of you know
by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the
unintended offensive and hurtful tweets from Tay, which do not represent who we are
or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to
bring Tay back only when we are confident we can better anticipate malicious intent
that conflicts with our principles and values” (Microsoft, 2016b). But this apology is
also frustratingly unsatisfying or interesting (it all depends on how you look at it).
According to Lee’s carefully worded explanation, Microsoft is only responsible for not
anticipating the bad outcome; it does not take responsibility for the offensive tweets.
For Lee, it is Tay who (or “that,” and words matter here) is named and recognized as
the source of the “wildly inappropriate and reprehensible words and images” (Microsoft,
2016b). And since Tay is a kind of “minor” (a teenage AI) under the protection of her
parent corporation, Microsoft needed to step in, apologize for their “daughter’s” bad
behavior, and put Tay in a time out.
Although the extent to which one might assign "agency" and "responsibility" to
these mechanisms remains a contested issue, what is not debated is the fact that the
rules of the game have changed significantly. As Andreas Matthias (2004) points out,
summarizing his survey of learning automata:
In other words, the instrumental theory of things, which had effectively tethered
machine action to human agency, no longer adequately applies to mechanisms that have
been deliberately designed to operate and exhibit some form, no matter how
rudimentary, of independent action or autonomous decision making. Contrary to the
usual instrumentalist way of thinking, we now have things that are deliberately designed
to exceed our control and our ability to respond or to answer for them.
In July of 2014 the world got its first look at Jibo. Who or what is Jibo? That is an
interesting and important question. In a promotional video that was designed to raise
capital investment through pre-orders, social robotics pioneer Cynthia Breazeal
introduced Jibo with the following explanation: “This is your car. This is your house.
This is your toothbrush. These are your things. But these [and the camera zooms into a
family photograph] are the things that matter. And somewhere in between is this guy.
Introducing Jibo, the world’s first family robot” (Jibo 2014). Whether explicitly
recognized as such or not, this promotional video leverages a crucial ontological
distinction that Jacques Derrida (2005) calls the difference between “who” and “what”
(p. 80). On the side of “what” we have those things that are mere instruments—our car,
our house, and our toothbrush. According to the usual way of thinking, these things are
mere instruments or tools that do not have any independent status whatsoever. We
might worry about the impact that the car’s emissions have on the environment (or
perhaps stated more precisely, on the health and well-being of the other human beings
who share this planet with us), but the car itself is not a socially significant subject. On
the other side there are, as the video describes it, “those things that matter.” These things
are not things, strictly speaking, but are the other persons who count as socially and
morally significant Others. Unlike the car, the house, or the toothbrush, these Others
have independent status and can be benefitted or harmed by our decisions and actions.
Jibo, we are told, occupies a place that is situated somewhere in between what are
mere things and those Others who really matter. Consequently Jibo is not just another
instrument, like the automobile or toothbrush. But he/she/it (and the choice of pronoun
is not unimportant) is also not quite another member of the family pictured in the
photograph. Jibo inhabits a place in between these two ontological categories. It is a
kind of “quasi-other” (Ihde, 1990, p. 107). This is, it should be noted, not unprecedented.
We are already familiar with other entities that occupy a similar ambivalent social
position, like the family dog. In fact animals, which since the time of René Descartes
have been the other of the machine (Gunkel, 2012a, p. 60), provide a good precedent
for understanding the changing nature of things in the face of social robots, like Jibo.
“Looking at state of the art technology,” Kate Darling (2012) writes, “our robots are
nowhere close to the intelligence and complexity of humans or animals, nor will they
reach this stage in the near future. And yet, while it seems far-fetched for a robot’s legal
status to differ from that of a toaster, there is already a notable difference in how we
interact with certain types of robotic objects” (p. 1). This occurs, Darling continues,
because of our tendencies to anthropomorphize things by projecting into them cognitive
capabilities, emotions, and motivations that do not necessarily exist in the mechanism
per se. But it is this emotional reaction that necessitates new forms of obligation in the
face of things. “Given that many people already feel strongly about state-of-the-art
social robot ‘abuse,’ it may soon become more widely perceived as out of line with our
social values to treat robotic companions in a way that we would not treat our pets”
(Darling, 2012, p. 1).
Jibo, and other social robots like it, are not science fiction. They are already or will
soon be in our lives and in our homes. As Breazeal (2002) describes it, “a sociable robot
is able to communicate and interact with us, understand and even relate to us, in a
personal way. It should be able to understand us and itself in social terms. We, in turn,
should be able to understand it in the same social terms—to be able to relate to it and
to empathize with it…In short, a sociable robot is socially intelligent in a human-like
way, and interacting with it is like interacting with another person” (p. 1). In the face
of these socially situated and interactive entities we are going to have to decide whether
they are mere things like our car, our house, and our toothbrush; someone who matters
like another member of the family; or something altogether different that is situated in
between the one and the other. In whatever way this comes to be decided, however,
these things will undoubtedly challenge the way we typically distinguish between who
is to be considered another social subject and what remains a mere instrument or tool.
Although things are initially experienced and revealed in the mode of being
Heidegger calls Zuhandenheit (e.g. instruments that are useful or handy for our
purposes and endeavors), things do not necessarily end here. They can also, as
Heidegger (1962) explains, be subsequently disclosed as present-at-hand, or
Vorhandenheit, revealing themselves to us as objects that are or become, for one reason
or another, un-ready-to-hand (p. 103). This occurs when things, which had been
virtually invisible instruments, fail to function as they should or are designed to get in
the way of their own instrumentality. “The equipmental character of things,” Silvia
Benso (2000) writes, “is explicitly apprehended via negativa when a thing reveals its
unusability, or is missing, or ‘stands in the way’” (p. 82). And this is what happens with
things like chatterbots, machine learning applications, and social robots insofar as they
interrupt or challenge the smooth functioning of their instrumentality. In fact, what we
see in the face of these things is not just the failure of a particular piece of equipment—
e.g. the failure of a bot like “Ivana” to successfully pass as another person in
conversational interactions or the unanticipated and surprising effect of a Twitterbot
like Tay that learned to be a neo-Nazi racist—but the limit of the standard
instrumentalist way of thinking itself. In other words, what we see in the face of
chatterbots, machine learning algorithms, and social robots are things that intentionally
challenge and undermine the standard way of thinking about and making sense of things.
Responding to this challenge (or opportunity) leads in two apparently different and
opposite directions.
But this strict re-application of instrumentalist thinking, for all its usefulness and
apparent simplicity, neglects the social presence of these things and the effects they
have within the networks of contemporary culture. We are, no doubt, the ones who
design, develop, and deploy these technologies, but what happens with them once they
are “released into the wild” is not necessarily predictable or completely under our
control. In fact, in situations where something has gone wrong, like the Tay incident,
or gone right, as was the case with AlphaGo, identifying the responsible party or parties
behind these things is at least as difficult as ascertaining the “true identity” of the “real
person” behind the avatar. Consequently things like mindless chatterbots, as Mowbray
(2002) points out, do not necessarily need human-level intelligence, consciousness,
sentience, etc. to complicate questions regarding responsibility and social standing.
Likewise, as Reeves and Nass (1996) already demonstrated over two decades ago with
things that were significantly less sophisticated than these recent technological
innovations, we like things. And we like things even when we know they are just things.
“Computers, in the way that they communicate, instruct, and take turns interacting, are
close enough to human that they encourage social responses. The encouragement
necessary for such a reaction need not be much. As long as there are some behaviors
that suggest a social presence, people will respond accordingly… Consequently, any
medium that is close enough will get human treatment, even though people know it’s
foolish and even though they likely will deny it afterwards” (p. 22). For this reason,
reminding users that they are just interacting with “mindless things” might be the
“correct information,” but doing so is often as ineffectual as telling movie-goers that
the action they see on the screen is not real. We know this, but that does not necessarily
change things. So what we have is a situation where our theory concerning things—a
theory that has considerable history behind it and that has been determined to be as
applicable to simple devices like hand tools as it is to complex technological systems—
seems to be out of sync with the actual experiences we have with things in a variety of
situations and circumstances. In other words, the instrumentalist way of thinking may
be ontologically correct, but it is socially inept and out of touch.
This shift in perspective, it is important to point out, is not just a theoretical game;
it has been confirmed in numerous experimental trials and practical experiences with
things. The computer as social actor (CASA) studies undertaken by Reeves and Nass
(1996), for example, demonstrated that human users will accord computers social
standing similar to that of another human person, and this occurs as a product of the
extrinsic social interaction, irrespective of the actual composition (or “being” as
Heidegger would say) of the thing in question. These results, which were obtained in
numerous empirical studies with human subjects, have been independently verified in
two recent experiments with robots, one reported in the International Journal of Social
Robotics (Rosenthal-von der Pütten et al., 2013), where researchers found that human
subjects respond emotionally to robots and express empathic concern for machines
irrespective of knowledge concerning the actual ontological status of the mechanism,
and another that used physiological evidence, documented by electroencephalography,
of the ability of humans to empathize with what appears to be “robot pain” (Suzuki et
al., 2015). And it appears that this happens not just with seemingly intelligent artifacts
in the laboratory setting but with just about any old thing that has some social presence,
like the very industrial-looking Packbots that are being utilized on the battlefield. As P.
W. Singer (2009, p. 338) has reported, soldiers form surprisingly close personal bonds
with their units’ Packbots, giving them names, awarding them battlefield promotions, risking their own lives to protect the machines, and even mourning their “deaths.”
This happens, Singer explains, as a product of the way the mechanism is situated within
the unit and the social role that it plays in field operations. And it happens in direct
opposition to what otherwise sounds like good common sense: They are just things—
instruments or tools that feel nothing.
Once again, this decision sounds reasonable and justified. It extends consideration
to these other socially aware and interactive things and recognizes, following the
predictions of Wiener (1950, p. 16), that the social situations of the future will involve
not just human-to-human interactions but relationships between humans and machines
and machines and machines. But this shift in perspective also has significant costs. For
all its opportunities, this approach is inevitably and unavoidably exposed to the charge
of relativism—“the claim that no universally valid beliefs or values exist” (Ess, 1996,
p. 204). To put it rather bluntly, if the social status of things is relational and open to
social negotiation, are we not at risk of affirming a kind of social constructivism or
moral relativism? One should perhaps answer this indictment not by seeking some
definitive and universally accepted response (which would obviously reply to the
charge of relativism by taking refuge in and validating its opposite), but by following
Slavoj Žižek’s (2000) strategy of “fully endorsing what one is accused of” (p. 3). So
yes, relativism, but an extreme and carefully articulated version of it. That is, a
relativism (or, if you prefer, a “relationalism”) that can no longer be comprehended by
that kind of understanding of the term which makes it the mere negative and opposite
of an already privileged universalism. Relativism, therefore, does not necessarily need
to be construed negatively and decried, as Žižek (2006) himself has often done, as the
epitome of postmodern multiculturalism run amok (p. 281). It can be understood
otherwise. “Relativism,” as Robert Scott (1976) argues, “supposedly, means a
standardless society, or at least a maze of differing standards…Rather than a
standardless society, which is the same as saying no society at all, relativism indicates
circumstances in which standards have to be established cooperatively and renewed
repeatedly” (p. 264). In fully endorsing this form of relativism and following through
on it to the end, what one gets is not necessarily what might have been expected, namely
a situation where anything goes and “everything is permitted.” Instead, what is obtained
is a kind of socially attentive thinking that turns out to be much more responsive and
responsible in the face of other things.
These two options anchor opposing ends of a spectrum that can be called the
machine question (Gunkel 2012a). How we decide to respond to the opportunities and
challenges of this question will have a profound effect on the way we conceptualize our
place in the world, who we decide to include in the community of socially significant
subjects, and what things we exclude from such consideration and why. But no matter
how it is decided, it is a decision—quite literally a cut that institutes difference and
makes a difference. We are, therefore, responsible both for deciding who counts as
another subject and what does not and, in the process, for determining the way we perceive
the current state and future possibility of social relations.
Notes
1 A consequence of this way of thinking about things is that all things are initially
revealed and characterized as media or something through which human users act. For
more on this subject, see Heidegger and the Media (Gunkel and Taylor, 2014).
2 Identification of these two alternatives has also been advanced in the phenomenology
of technology developed by Don Ihde. In Technology and the Lifeworld, Ihde (1990)
distinguishes between “those technologies that I can take into my experience that
through their semi-transparency they allow the world to be made immediate” and
“alterity relations in which the technology becomes quasi-other, or technology ‘as’ other to which I relate” (p. 107).
3 Although the term “chatterbot” was not utilized by Weizenbaum, it has been applied
retroactively as a result of the efforts of Michael Mauldin, founder and chief scientist
of Lycos, who introduced the neologism in 1994 in order to identify a similar NLP
application that he eventually called Julia.
4 “Wizard of Oz” is a term that is utilized in Human Computer Interaction (HCI) studies
to describe experimental procedures where test subjects interact with a computer system
or robot that is assumed to be autonomous but is actually controlled by an experimenter
who remains hidden from view. The term was initially introduced by John F. Kelly in
the early 1980s.
References
Benso, S. (2000). The Face of Things: A Different Side of Ethics. Albany, NY:
State University of New York Press.
Bessi, A. and E. Ferrara (2016). Social Bots Distort the 2016 U.S. Presidential
Election Online Discussion. First Monday 21(11).
http://firstmonday.org/ojs/index.php/fm/article/view/7090/5653
Darling, K. (2012). Extending Legal Protection to Social Robots. IEEE Spectrum,
10 September 2012. http://spectrum.ieee.org/automaton/robotics/artificial-
intelligence/extending-legal-protection-to-social-robots
Epstein, R. (2007). From Russia, With Love: How I Got Fooled (And Somewhat
Humiliated) by a Computer. Scientific American Mind Oct/Nov: 16-17.
Harman, G. (2002). Tool Being: Heidegger and the Metaphysics of Objects. Peru,
IL: Open Court Publishing.
Hingston, P. (2014). Believable Bots: Can Computers Play Like People? New
York: Springer.
Johnson, D. G. (1985). Computer Ethics. Upper Saddle River, NJ: Prentice Hall.
Johnson, D. G. (2006). Computer Systems: Moral Entities but not Moral Agents.
Ethics and Information Technology 8: 195-204.
McLuhan, M. and Q. Fiore (2001). War and Peace in the Global Village.
Berkeley, CA: Gingko Press.
Mowbray, M. (2002). Ethics for Bots. Paper presented at the 14th International
Conference on System Research, Informatics and Cybernetics. Baden-
Baden, Germany. 29 July-3 August.
http://www.hpl.hp.com/techreports/2002/HPL-2002-48R1.pdf
Peterson, A. (13 August 2013). On the Internet, No One Knows You’re a Bot.
And That’s a Problem. The Washington Post.
https://www.washingtonpost.com/news/the-switch/wp/2013/08/13/on-the-
internet-no-one-knows-youre-a-bot-and-thats-a-
problem/?utm_term=.b4e0dd77428a
Reeves, B. and C. Nass. (1996). The Media Equation: How People Treat
Computers, Television, and New Media Like Real People and Places.
Cambridge: Cambridge University Press.
Singer, P. W. (2009). Wired for War: The Robotics Revolution and Conflict in the
Twenty-First Century. New York: Penguin Books.
Wiener, N. (1950). The Human Use of Human Beings: Cybernetics and Society.
Boston: Da Capo Press.
Žižek, S. (2000). The Fragile Absolute or, Why Is the Christian Legacy Worth
Fighting For? New York: Verso.
Cognitive and Neural Bases of Emotion and Cognitive Function
Luo Yuejia
Emotion is a complex psychophysiological phenomenon that reflects the interaction between mental states and the individual’s internal and external environments. Cognitive function is the intelligent processing by which humans come to know things and acquire knowledge, involving learning, memory, language, thought, spirit, emotion, and a range of voluntary, psychological, and social behaviors. This talk reviews the research group’s recent series of studies on emotion and attention, working memory, conflict, inhibition, and decision-making. Emotion and cognition, the central topics of the talk, are interdependent and mutually interacting. For example, facial expressions are of great significance for human social behavior, and our findings propose a three-stage processing hypothesis for different expressions; attention to emotion shows a negativity bias, arising at the early perceptual stage, the late stimulus-evaluation stage, and the stage of action preparation; the influence of emotion on executive function is manifested in conflict and inhibition; negative emotion selectively affects the cortical regions serving spatial working memory; and changes in the FRN and P3 components reveal how anxiety influences the process of decision-making. These studies uncover the interaction between emotion and executive function and its underlying neural bases, in an effort to supplement or revise existing theories or to propose new views, and will help deepen our understanding of the brain mechanisms of emotion and cognition.
Luo Yuejia
The Concept of Immersion: Mediated Space and the Location of the Subject
The term “immersion” is increasingly used to describe new technologies of image and sound and their relation to subjectivity, spatiality, and location. The use of the term transports the subject into the film and expresses a bleeding of the film from the screen into the auditorium, so that the relation between space and location becomes nebulous. This paper takes two cinematic technologies, IMAX and digital surround sound, as its objects in order to explore the discourse of immersion. IMAX has consistently been aligned with the aesthetic category of the sublime and, from a Kantian standpoint, with the conceptualization of infinity. This suggests that the concept of infinity is no longer confined to expressing depth and recession within a humanist field of vision, but has come to be bound up with scale, extension, and imperceptible networks. The logic of the IMAX sublime, perhaps the technological sublime par excellence, operates within the discursive system of immersion. It produces the illusion that the subject simultaneously enjoys depth and immediate perception of its own body, and it conceals a violent displacement and dislocation of the subject, who seems enabled to face a world extended to infinity. The digital surround sound examined in this paper relates to the cinematic trope of the “turn”: the turn that demarcates space must be delegated to the character or the camera, so as to avoid having the spectator turn, which would sacrifice the perspective on the world/space of the film. Once sound enters the auditorium, a tension arises between the localization of sound and the direction of viewing. Yet these technologies evoke metaphors of absorption and envelopment, extending the scope of the film (and its narrative) beyond the screen and into the space of the audience. By locating the spectator anywhere in another space, immersion weakens the imperative of location, an imperative that derives from the limits of the subject’s body or, as Henri Lefebvre argues, from space as divided and analyzed in specific historical and social contexts, not to mention produced space.
Mary Ann Doane
Today, one feels a whiff of nostalgia reading Barthes’s description of his relation
and nonrelation to the screen in the movie theater, at the idea of a designated place for
the viewing of moving images that might constitute in itself a distraction. In contrast,
screen culture now has become strikingly heterogeneous and pervasive. Screen sizes
now range from the miniature touch-screen of the iPhone and iPad to the immense scale
of IMAX. Images are mobile and transportable, savable and recyclable, called up at
1 Rustle of Language, p. 239.
will and often ephemeral. They can be viewed virtually anywhere. And to that extent,
where they are viewed becomes less and less significant, even in the case of IMAX,
which, although it requires a specialized theater, projection, and screen, heavily weights
Barthes’s first form of fascination with the image—that of engulfment. In IMAX, the
bloated image exceeds the screen, swells, distends and infiltrates the space of the
spectator. If Hollywood’s promise was that of taking the viewer to another place by
denying his or her own location in the theater, IMAX holds out the allure of annihilating
that location, hence the pervasive and persistent discourse of “immersion.” The concept
of immersion suggests a transport of the subject into the image but also a bleeding of
the image beyond the screen into the auditorium so that the very question of place or
location becomes nebulous. References to immersion are ubiquitous in advertising for
IMAX, 3-D, digital sound surround and virtual reality. The concept is symptomatic of
larger questions concerning subjectivity, spatiality and mediation. I will focus here on
two technologies that have been consistently and emphatically allied with the discourse
of immersion: IMAX and digital sound surround systems.
With IMAX, size is the central and defining characteristic, so much so that the
films themselves must entail subjects of a certain grandeur and ungraspability, self-
reflexively conjuring up narratives of magnitude. IMAX seems to have fulfilled the
early cinematic aspirations associated with the phrase, “Bigger Than Life.” The
emergent history of IMAX was hence dominated by nature and exploration films,
seemingly transcending the comparatively minute human scale of characters and plots.
The earliest IMAX films were significantly shorter than traditional feature length films,
ranging from 17 minutes to half an hour, at least partially determining the avoidance of
fiction and the classical narrative, whose norms at that point in cinematic history
required a certain duration. IMAX emerged from and found a home in world fairs and
expositions as a performance of the capabilities of image technologies—the films were
less about subjects than the very fact of the technology. Migrating to specialized venues
associated with museums and science centers, the films were presented as an
educational experience, often touristic (and imperialistic).2
The advertising rhetoric for IMAX reiterates and refashions that for widescreen in
the 1950s and focuses on the concept of “immersion.” “You,” i.e. the spectator, are not
observing the space revealed on the screen—you are inside of it. For John Belton, the
“illusion of limitless horizontal vision” in Cinerama and Cinemascope intensified the
2 See Charles Acland, “IMAX Technology and the Tourist Gaze,” Cultural Studies 12, no. 3 (1998).
spectator’s sense of immersion or absorption in the space of the film (much of the
advertising for these processes emphasized the spatial relocation of the spectator from
his/her seat to the world provided by the cinema).3 [Figure 1] IMAX ads as well insist
that the spaces of film and spectator are confused and entangled. Objects or persons in
the film reach out of the screen into the space of the audience or the spectator is sucked
into the world of the film, erasing all borders between representational space and the
space of the viewer. [Figures 2, 3] In this scenario, there is no “offscreen space.” All of
the world has become media and as a consequence, there is no mediation.
The paradox of IMAX is that its development and expansion in theaters coincided
with the accelerating minimization of screen size—on computers, laptops, notepads and
culminating in handheld mobile devices such as the iPhone. Films are now viewable on
the smallest of screens as well as the largest. Although David Lynch, in defense of the
large screen, has categorically insisted (using various expletives) that if you view a film
on an iPhone, you simply haven’t seen the film,4 the mobility of images is a pervasive
cultural phenomenon that must be confronted. Perhaps it is not so much a question of
whether it is the “same image,” but how technologies with such extreme differences of
scale can inhabit the same media network. What is the work of “scale” in contemporary
media and how does it configure or reconfigure space, location and subjectivity?
At first glance, the iPhone, unlike IMAX, would not seem to provide an immersive
experience. Immersion connotes a transport of the subject into the image and the iPhone
appears to give its user an unprecedented control over the screen. But if immersion,
with its alliance with water, fluids, liquidity, indexes an absorption in a substance that
is overwhelming and all-encompassing, there is a sense in which the user of the iPhone
could be described as immersed. In fact, this has been the social anxiety concerning
iPhones—young people, absorbed in their iPhones, are lost to the world. They no longer
have face-to-face conversations; they are no longer where they are. They have fled the
real. This fear of the danger of iPhones is reminiscent of historical diatribes against the
movies for their irresistible influence on young and malleable minds, particularly in
relation to images of sex and violence. In the case of the iPhone, what is feared is a
form of temporal and spatial immersion, absenting oneself from a specific time and
location. The geography of the iPhone is that of “elsewhere,” the elsewhere of an
unmappable, uncognizable network.
3 (Belton, 1992, p. 197).
4 On YouTube, paid for by Apple.
within their own critical language. Haidee Wasson describes the experience of IMAX
in these terms: “With IMAX you find yourself moving into and out of great heights and
depths, traveling downward to the bottom of the sea or upward to the stars” or “IMAX
engulfs its spectators, stretching the limits of human vision through its expansive screen
and immersive aesthetic.”5 For Charles Acland, “IMAX films soar. Especially through
the simulation of motion, they encourage a momentary joy in being placed in a space
shuttle, on a scuba dive, or on the wing of a fighter jet.” Immersion is used not only to
describe the experience of IMAX, but of new technologies such as virtual imaging. It
is the lure, the desire, the alleged fascination of the industry itself. But what does it
mean to be immersed? And why is it the focus of a contemporary desire? Obviously
figural, the tropology denies the physical location of the spectator. I propose to read the
concept of immersion as symptomatic, as a claim that points to a work of spatial
restructuring in a screen-saturated social economy.
IMAX is about excess—one of its movie theater intros deploys the traditional
movie countdown from 10 to 1 (which gradually enlarges the numbers until they
become gigantic) and inserts the words “See more, hear more, feel more,” ending with
the IMAX slogan, “Think Big.” The largest IMAX screen is in Sydney, Australia, and
is approximately eight stories high. IMAX screens can be ten times the size of a
traditional cinema screen. The clarity and resolution of the image is made possible by
a frame size that dwarfs that of conventional 70mm film (three times larger). With the
perforations placed horizontally rather than vertically, the film must run through the
projector at extremely elevated speeds. The very high resolution of the image allows
spectators to be positioned closer to the screen. In a typical IMAX theater, the seats are
set at a significantly steeper angle and all rows are within one screen height, whereas
in a conventional movie theater rows can be within eight to twelve screen heights.
As Allan Stegeman points out in an article claiming that IMAX and other large screen
formats can compete effectively with high definition television, “An Imax image
occupies 60° to 120° of the audience’s lateral field of vision and 40° to 80° of the
vertical field of view, and an Omnimax image occupies approximately 180° in the
audience’s horizontal field of vision, and 125° vertically—the large-screen format
effectively destroys the viewer’s awareness of the film’s actual frame line.”
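The seating figures above can be made concrete with a little trigonometry. The sketch below is an illustration added here, not part of the paper; it uses an idealized flat screen and a viewer centered on it, ignoring IMAX’s curved screens and steep seating rake, to estimate the vertical visual angle a screen subtends at a distance measured in screen heights:

```python
import math

def vertical_view_angle(distance_in_screen_heights: float) -> float:
    """Vertical visual angle (in degrees) subtended by a flat screen of
    height 1, for a viewer level with the screen center at the given
    distance, expressed in multiples of the screen height."""
    h = 1.0
    d = distance_in_screen_heights * h
    # Half the screen height over the viewing distance gives the half-angle.
    return math.degrees(2 * math.atan((h / 2) / d))

# IMAX seating: every row within one screen height of the screen.
print(round(vertical_view_angle(1.0), 1))   # roughly 53 degrees
# Conventional theater: a row ten screen heights back.
print(round(vertical_view_angle(10.0), 1))  # under 6 degrees
```

On this simplified geometry, a seat one screen height away has the screen filling roughly 53° of vertical vision, within the 40°–80° range Stegeman cites for IMAX, while at ten screen heights the image occupies under 6°, which is why the frame line of a conventional screen remains so conspicuously visible.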
It is this annihilation of the frame line that I would like to focus on here. While
Cinemascope claimed to compete with the spectator’s peripheral vision, IMAX and
other large formats exceed the eye in all dimensions so that the image appears to be
uncontained. The frame in cinema is not only a technical necessity adjudicating the
relation to temporality (24 frames per second) and the production of an illusion of
5 Janine Marchessault and Susan Lord, Fluid Screens, Expanded Cinema (Digital Futures) (University of Toronto Press, 2011), Kindle locations 1661-1662.
motion, but also a link between cinema and the history of Western painting, particularly
in its inscription of perspective as a rule of space. The frame demarcates the space of
the representation as a special place, one which obeys different dictates for legibility.
Or, as Jacques Derrida has pointed out, the frame as parergon, is neither part of the
work nor outside the work but gives rise to the work (23). The frame is the condition
of possibility of representation. In the history of cinema, the frame lends to the film’s
composition a containment and a limit that rivaled the limit of the two-dimensional
surface of the screen. Both could be contested but the frame and the screen were
themselves activated to produce the concepts of off-screen space and depth of field as
domains of the imaginary.
If the frame constitutes a limit—a fully visible limit—in the experience of the
spectator in conventional cinema, what does it mean to remove that limit by using
technology to exceed the physiological limits of the spectator’s vision? IMAX clearly
has limits, but they are not of a visible order in the spectator’s experience. It strives
against limits, as seen in this ad from the IMAX corporation: [Figure 4] “People say our
screen curves down at the edges. It doesn’t. That’s the earth.” The limit of the IMAX
screen merges with that of the earth, which is to say that it has no artificial or cultural
limit. What is the lure of this idea of boundlessness?
In the history of aesthetic theory, this concept has been most frequently associated
with that of the sublime in its philosophical formulation. In Edmund Burke’s analysis,
in which “sublime objects are vast in their dimensions,” (113) the eye is given a
privileged position, standing in metonymically for the entire body (“as in this discourse
we chiefly attach ourselves to the sublime, as it affects the eye”).6 For Burke, the
sublime is associated with passion, awe, and terror and with a pain that proves to be
pleasurable. And this abstraction of pain from pleasure is in many instances a bodily
phenomenon—both terror and pain “produce a tension, contraction or violent emotion
of the nerves.” (120) This is the sublime, as long as any possibility of actual danger is
removed.
6 This is true even though, for Burke, language retains its superiority over figurative painting,
which is restricted by its mimeticism. Edmund Burke. A Philosophical Enquiry into the Origin
of our Ideas of the Sublime and Beautiful (Oxford World's Classics) (p. 128). Kindle Edition.
only be produced through a detour and it is the detour that causes pain preparatory to
the pleasure of discovering the power and extension of reason.
Hence, the concept of the sublime grapples with the notion of infinity and its
representability, although this is not the term Kant would have used. Yet, there is
another way of thinking and representing infinity that is not usually articulated with the
sublime. Renaissance perspective, inherited by the cinema, constitutes infinity as a
point—a perpetually receding point—the vanishing point—which mirrors the position
of the subject contemplating the painting. Like Kant’s reason in at least one respect, it
acts as an imprimatur of a mastery that takes form by going beyond, even annihilating,
the subject’s sensory and spatio-temporal localization, all the
singularities/particularities of embodiment in a finite body limited by the reach of its
senses. At least this reading of perspective is that of apparatus theory in film studies,
the legacy of Jean-Louis Baudry, Jean-Louis Comolli, and others in the 1970s. And it
is that of Erwin Panofsky as well. Panofsky analyzed Renaissance perspective as the
symptom and instantiation of a new concept—that of infinity, embodied in the vanishing
point.7 Yet, this was a representational infinity that confirmed and reassured the human
subject, replacing a theocracy with an individualizing humanism. In a way, it could be
seen as a secularization of the sublime.
The Oxford English Dictionary defines the sublime as “Set or raised aloft, high up”
and traces its etymology to the Latin sublimis, a combination of sub (up to) and limen
(lintel, literally the top piece of a door). The sublime is consistently defined by
philosophers in relation to concepts of largeness, height, greatness, magnitude. For
Burke, visual objects of “great dimension” are sublime. Kant claims, “Sublime is the
name given to what is absolutely great” and “the infinite is absolutely (not merely
comparatively) great.” The sublime is associated with formlessness, boundlessness, and
excess beyond limit. It is not surprising in this context that IMAX has been analyzed
by invoking the concept of the sublime (Haidee Wasson and Alison Griffiths refer to
Burke’s sublime in particular), especially insofar as the terror associated with the
sublime, for both Burke and Kant, must be experienced from a position of safety. The
7 See Erwin Panofsky, Perspective as Symbolic Form, trans. Christopher S. Wood (New York:
Zone Books, 1997).
8 Joselit, p. 293.
sublime is an aesthetic category and it is inevitably chained to affect—whether awe,
terror, pleasure, or fear—and most frequently a combination of these. The advertising
and the analysis of IMAX are obsessed with its involvement of the subject in a gripping
experience—hence, the discourse of immersion. IMAX is described as above all a
visceral experience, requiring a form of bodily participation. Unlike the disembodiment
of the classical perspectival system, the body seems to be what is above all at stake in
discourses on IMAX. The IMAX sublime, if there is such a thing, here deviates from
Kant’s, for whom the sublime was sublime only on condition that it exceed the sensuous
and proclaim the irrelevance of the subject’s spatio-temporal presence in favor of the
infinite grasp of reason. The discourse of immersion would seem to rescue the body
from its nullification by both Renaissance perspective and the Kantian sublime, making
us once again present to ourselves.
But I would like to argue that immersion as a category is symptomatic and one has
to ask what this body is. The body here is a bundle of senses—primarily vision, hearing
and touch. But this appeal to the body as sensory experience, as the satiation of all the
claims for its pleasure, does not revive an access to spatio-temporal presence or
localization. Instead, it radically delocalizes the subject once again, grasping for more
to see, more to hear, more to feel in an ever expanding elsewhere. IMAX emerged from
the world fairs and expos that constituted exhibitionistic displays of the ever expanding
powers of technology (what David Nye has called the “technological sublime”). It is
telling that one of the works of this early tendency toward magnification of the scale of
the image and proliferation of screens was the Eameses’ iconic Powers of Ten. This
pedagogical film illustrates a movement from a couple having a picnic in Chicago to
the edge of the universe and back to the interior of the body by exponentially increasing
the “camera’s” distance from the couple, reversing the trajectory, and decreasing that
distance to the point of inhabiting the body itself. (Figures 5 and 6—clips) The human
body would seem to be central to this demonstration, primarily as a marker of scale and
as the threshold of a trajectory from the gigantic to the infinitesimally small. Yet, the
film is instead an allegory of the nullification of the body and its location, acting only
as a nostalgic citation of a time when the human body was the ground and reference for
measurement, replacing it with a mechanical mathematical formula for the progressions
of scale. The limits of the “camera’s” trip in both directions are, of course, the limits of
human knowledge—at the moment. But the film suggests that this movement is
infinitely extendable and it is not accidental that technologies of knowledge and
technologies of the image are inseparable here. Human vision, with the aid of imaging
technologies, is infinitely extendable and knowledge is embedded in that vision. But I
have spoken only of the represented body, not of the spectatorial body. The spectatorial
eye is fully aligned with the technological eye—not with the vision of the represented
“characters”—the man and the woman, and its travels are limited only by the current
state of technologies of imaging/knowledge. Yet, there is not only a sense that it is
disembodied or delocalized but that it is potentially everywhere. The logic of the IMAX
sublime—perhaps the technological sublime par excellence—operates under the
umbrella of the discourse of immersion, producing an illusion that depth and ready
access to the body are still with us, and concealing its radical delocalization and
dislocation of a subject seemingly empowered in the face of a world defined as infinite
extension.
While both classical and contemporary film theory have productively dissected
the relation between the visible and the invisible in cinema through a concentration on
off-screen space as the preeminent “blind space,” much less attention has been paid to
that other dimension of invisibility—that which is behind, the “other side” of bodies
and of things. Because the film image is two-dimensional, the activation of perspective
and overlapping figures are clearly involved in the production of the effect of three-
dimensionality but this is true of a painting or a photograph as well. The cinema has an
added advantage—movement, which aids in carving out the space of the diegesis. The
“turn”—both of character’s bodies and the body of the camera—is a crucial trope in
this respect. The “turn” in classical Hollywood films is often activated in the service of
scenes of misrecognition, where it reveals a mistaken identity. For the turn makes
visible that which was concealed—the “other side”—an other side that does not
materially exist in the two-dimensional realm of cinema but is continually evoked,
imagined, assumed. The turn is a constant reiteration of otherness and the limits of
knowability, a denial of the sufficiency of the screen as surface. Knowledge resides
somewhere else—behind, on the other side. But the turn also confirms that there is
another side, in what could be labeled a “virtual dimension.” Nevertheless, given the
physical immobility of the spectator, the necessity of facing forward to see the screen,
that turn must be delegated to someone or something else—character, figure, camera.
Navigable space is on the side of the screen. What are the effects of this delegation to
figure or camera of a bodily gesture that is critical to the subject’s relation to space, of
a body’s fundamental capability, as Henri Lefebvre has pointed out, “of indicating
direction by a gesture, of defining rotation by turning round, of demarcating and
orienting space” (170)? In Lefebvre’s analysis, space cannot be conceived of as an
empty container, ready and able to accept any content. Space is, above all, occupied:
“there is an immediate relationship between the body and its space, between the body’s
deployment in space and its occupation of space… each living body is space and has
its space: it produces itself in space and also produces that space.”9 Yet, in the context
of the cinema, the spectator’s body is incapacitated, rendered useless, deprived of its
role of demarcating space through gesture and movement. As has so often been pointed
9 Lefebvre, 1991, pp. 170-171.
out, the spectator must become immobilized, bodiless, his or her senses reduced to those
characterized by distance—vision and hearing. Space is not lived—at least in the sense
of the ordinary or everyday experience of space in its relation to the body—but
abstracted, alienated. The turn that helps to demarcate and define space is, in the cinema,
a represented turn, and the space is a represented space. But there is another turn at
issue here, one which must be prohibited. One thing the spectator must not do is turn
around to look at the back of the auditorium. The turn that demarcates and orients space
must be relocated on the side of the screen.
But again, this conceptualization of front and back and the turn concerns the space
of the diegesis and not that of the spectator. Both the turning character and the turning
camera mark out the space of the diegesis and delineate its volume. Yet, the spectator’s
space is defined differently. Vision is directional and the spectator who turns around
and no longer faces the film will miss a part of it, making that particular turn taboo,
prohibited. Nevertheless, the space behind the spectator has not been entirely neglected.
Often it has been activated by theorists in intriguing ways. Baudry, deploying Plato’s
allegory of the cave in which the prisoners are chained since infancy, allowed only to
look ahead at the screen of shadows, cites Plato’s imaginary scenario of turning around:
“Suppose one of them were set free and forced suddenly to stand up, turn his head, and
walk with eyes lifted to the light; all these movements would be painful, and he would
be too dazzled to make out objects….” In Baudry’s analogy, it is the turn toward the
projector that breaks the illusion of the apparatus but it also connotes a certain violence,
a dazzlement of vision. And Christian Metz’s transcendental identification with the
camera and with the pure act of perception becomes in the screening an identification
with that other part of the apparatus—the projector, “an apparatus the spectator has
behind him, at the back of his head, that is, precisely where fantasy locates the ‘focus’
of all vision.” (253) In a discussion of the way in which Renaissance perspective, from
the outset, was linked to the concept of infinity, Hubert Damisch refers to infinity as
“an idea of what’s behind one’s head.” (121) (Figure 10) Hence, the non-place of this
“behind” in the theater is not empty, but instead replete with the subject’s relations to
illusion, the real, fantasy and infinity as well as answerable to a certain taboo against
the gaze in support of representation. The separation between “front” and “back” spaces
in relation to media has also been conceptualized as a structure of the social availability
of knowledge and ignorance by Anthony Giddens. Giddens claims that the “front”
space of society constitutes an open, accessible space for the general public, a place of
transparency and visibility. But the “back” space is “the locus of social information that
is hidden.” (qtd. in Sterne, 151) According to Jonathan Sterne, “Giddens and John
Thompson both argue that the rise of the mass media has coincided with the growth of
forms of communication that entail very small front spaces (relatively little available
information) in relation to relatively large back spaces (lots of unknown factors).”
(151) All of the arguments of 1970s film theory about concealing the apparatus and
hiding the work of the production of a film would seem to confirm this assertion. It is
arguable that the “back spaces” of digital media are larger still. The spatial categories
of front and back are aligned with a form of social engineering of the availability of
information. The back spaces are those which are withheld, secret, deliberately opaque.
But 1970s film theory was primarily interested in cinema as a visual medium, with
only occasional references to sound. Although one cannot see what is behind one’s head,
one can hear it. And this three-dimensionality of sound is increasingly referenced by
film theorists. For instance, in a consideration of cinema and the ear, Thomas Elsaesser
and Malte Hagener claim that “hearing is always a three-dimensional, spatial perception,
i.e. it creates an acoustic space, because we hear in all directions” and quote Mirjam
Schaub, “the main ‘anthropological’ task of hearing […] [is] to stabilize our body in
space, hold it up, facilitate a three-dimensional orientation and, above all, ensure an all-
round security that includes even those spaces, objects and events that we cannot see,
especially what goes on behind our backs. Whereas the eye searches and plunders, the
ear listens in on what is plundering us. The ear is the organ of fear.” (131) The ear is
associated with a sense of balance and with contributing strongly to the apprehension
of the body’s location in space. Cinematic space is molded as much by sound as by the
dialectic of onscreen and offscreen space. Sound, as the material displacement or
vibration of airwaves, affects the entire body and not just the ears. Michel Chion
similarly stresses the fact that hearing “is omnidirectional. We cannot see what is
behind us, but we can hear all around.” (Acousmetre, 17) Although all of these
considerations allude to a phenomenological conceptualization of hearing and are part
of what Jonathan Sterne terms the “audio-visual litany,” that is, the string of
characteristics that are supposed to be natural to sound and hence dehistoricized, it is
significant that these specific traits are becoming more fundamental in recent years to
our understanding of cinematic sound. This is partially a function of the increasing
mobility of sound—it accompanies us everywhere and, in the theater, it has begun to
invade the space previously erased or at least reduced by classical cinema, the space of
the auditorium. But what does it do there?
One of the major debates in 1930s attempts to grapple with sound centered on
the question of sound perspective. Sound perspective refers to the spectator’s
sense of a sound’s location in space and is determined by a number of factors including
volume, frequency, the balance with other sounds and the amount of reverberation. It
can be an effect of microphone placement or of post-production manipulations. In
conflict in the debate were the values of spatial realism (the localizability of an event,
the matching of image and sound) and the intelligibility of dialogue (which would be
lost at a certain distance if strict sound perspective were maintained). As Rick Altman
has shown, intelligibility of dialogue generally won out (except in very special cases),
undermining the perceived necessity of spatial fidelity of sound to image. What was
lost were all the qualities, including reverberation, that might be used to spatialize a
sound. The debate was settled according to James Lastra, by “close-miking and a certain
‘frontality.’” (82) As Emily Thompson has pointed out, radio and other modern
deployments of sound, including soundproofing and the use of a directional flow of
sound in theaters, were a crucial reference point: “…this kind of sound was everywhere.
In its commodified nature, in its direct and nonreverberant quality, in its emphasis on
the signal and freedom from noise, and in its ability to transcend traditional constraints
of time and space, the sound of the sound track was just another constituent of the
modern soundscape.” (284) The technical possibility of producing reverberation in the
studio, independently of the space of the original recording, freed sound from “any
architectural location in which a sound might be created: it was nothing but an effect, a
quality that could be meted out at will and added in any quantity to any electrical signal.”
(283) In a sense, sound was both everywhere and nowhere. What was at stake in these
debates were the limits of acceptability of the spacelessness of sound. A spaceless
sound is one that can be more easily disengaged from its specific geographical,
historical and political location and subjected to circulation as a commodity.
The sound perspective debates of the 1930s have somewhat uncannily re-emerged
with the production of new multi-channel systems, sound surround, digital sound and
the consequent proliferation of speakers throughout the auditorium. With respect to
questions of sound space, there are at least two ramifications of these changes. One
would be the accelerated annihilation of the sense of the specific space of the
auditorium in which a film is projected. Michel Chion claims that the choices of
architecture and building plans for new movie theaters have “mercilessly vanquished”
reverberation—“the result is that the sound feels very present and very neutral, but
suddenly one no longer has the feeling of the real dimensions of the room, no matter
how big it is.” (100) It is arguable that, perhaps with the exception of ostentatious
picture palaces that called attention to themselves, movie theaters have always been
designed to reduce a sense of their own specific spatial properties in order to “host” any
number of diegetic spaces proposed by a stream of ever-changing films. In order to
allow audiences to “go elsewhere,” theaters must become nonspaces or “nonplaces,” to
adopt Marc Augé’s term for airports, shopping malls and any institutional space
that is eminently recognizable in a generic sense that has nothing to do with its specific
location. But for Chion, this process has intensified—theatrical sound has become so
“pure” and neutral that it has reduced any distinction between cinema sound and a good
home stereo system. Collective sound has been displaced by personal sound. This
pursuit of spatial anonymity characterizes the space of cinematic exhibition. But the
second ramification of the proliferation of multi-channel systems and sound-surround
concerns the space produced by the film, its diegetic space. For the multiplication of
potential sound sources exacerbates the issue of the localizability of sound. It appears
to demand a greater precision in matching sound and space and hence, in a sense, to
respatialize sound. Chion defines as the “superfield” the space produced in multi-
channel films by ambient sounds that surround the visual space and that “can issue from
loudspeakers outside the physical boundaries of the screen.” (150) According to Chion,
the fact that these sounds are more precisely located spatially releases contemporary
narrative film from the classical obligation of providing an establishing shot (typically
used to orient the spectator in relation to the use of close-ups and medium shots that
fragment that space). This results in a contemporary filmic style of fast editing and more
insistent use of close-ups because the “superfield provides a continuous and constant
consciousness of all the space surrounding the dramatic action.” (151) Modern
soundtracks endow the image track with a greater recognizability. Yet, echoing the
sound perspective debates of the 1930s, many sound technicians have been reticent
about “too much” sound realism (spatialization), about overuse of the speakers spread
over the auditorium, due to the potential distraction of the spectator’s attention away
from the screen. If sound has traditionally been used to tell us where to look, what is
visually important, its leakage into the auditorium presents a potential difficulty. Again,
harking back to the 1930s debates, this is particularly true in the case of dialogue, which
must be both intelligible and “present,” intimately bound to the image of the person,
whether visible or invisible, i.e. just over there, on the other side of the frame, in what
has traditionally been specified as the most significant form of off-screen space in
narrative film. Current sound practice tends to locate dialogue in the speakers behind
the screen, just as classical practice dictated. Ambient noise—leaves rustling, train
whistles in the distance, birds, rain, etc.—and music, forms of sound that can be more
easily dissociable and independent of the image, are those more likely to be channeled
to the speakers in the auditorium.
While even the classical film attempted to absorb its audience, bring the spectator
into the diegesis, this rhetoric seems to have become more insistent with each “new”
technology. According to Kerins, in films using immersive sound, “the audience is
literally placed in the dramatic space of the movie, shifting the conception of cinema
from something ‘to be watched from the outside’—with audience members taking in a
scene in front of them—to something ‘to be personally experienced’—with audience
members literally placed in the middle of the diegetic environment and action.” (130)
The problem is that, unlike the characters, the spectators continue to face forward. The
true blind space is still behind them. The taboo nature of this space is indicated very
clearly by the fact that sound designers continue to be wary of over-localizing or over-
spatializing sounds to the extent that the spectator is distracted and pulled away from
the image/screen. This is evidenced most tellingly in what they refer to as the “exit door
effect” or “the exit sign effect,” in which, hypothetically, the spectator would try to
localize a sound and turn away from the screen in order to identify its source. Kerins
suggests that the exit door effect is no longer as pressing a concern after more than two
decades of multichannel sound and the “training” or “recalibration” of audiences, but
he does so in the context of a discussion about why, despite the potential of surround
sound, directors continue to be extremely conservative in their use of it. Outside of a
few instances, the rear speakers are generally used for ambient effects that do not call
out for a specific localization.
Dolby’s website introduces Dolby Atmos (short for atmosphere), its most recent
technical development in sound, with the promise to the movie-going public that it will
“Feel Every Dimension,”—not just hear every dimension but feel its bodily impact.
Dolby Atmos is based on Audio Objects governed by metadata rather than on channels,
more precisely locating and scaling sounds and purportedly capable of working with
any theater’s configuration of speakers. [Figures 11 and 12, clips] The examples of
sounds are those taken, tellingly, from a sublime nature—birds, a waterfall, a
thunderstorm, etc. and the sublimity of the cinematic image corresponds to that of the
sound. Sound is “seen” as its source is pinpointed in the movement from speaker to
speaker in the auditorium, tracing the path of a helicopter seed. In the scene from Life
of Pi, the directionality of the sounds of fish flutters is reversed from left-right to right-
left on the cut from the tiger’s POV to Pi’s, violating the classical sound editing rule of
staggering sound cuts and image cuts to conceal the fact of the cut. “You,” according
to Dolby, are the subject of a constant movement—“you” are “propelled into the story”
and “you” are “transported into a powerfully moving cinema experience”—a reiteration
of the discourse of immersion characterizing the advertising of IMAX and 3-D. In fact,
immersion now has a technical definition in relation to sound: immersive sound is “the
term used to describe sound that emanates from sources beyond the horizontal plane by
means of enhanced spatial properties such as additional height and overhead speakers
and localized apparent sound sources within the auditorium.” (This is from an article
entitled “The Spectrum of Immersive Sound” appearing in Film Journal International
in 2014 [Bill Cribbs and Larry McCrigier]). While this definition strikes one as dry and
technical, without the affective valence of the usual discussions of immersion, the
article begins with the description of an immersive sound experience: “Imagine
stepping from life and being totally immersed in the story during your next cinema
experience. Hearing everything as if you were actually there in the scene. Close your
eyes. You're at a cafe in Paris, around you dishes are clanking and patrons are engaged
in conversations. A woman is shouting from a third-floor window and birds are chirping
in the trees. High overhead, a jet cruises by, and you subconsciously note that it's
departing to the east. You hear the familiar footsteps of your date approaching behind
you. You hear all these details exactly where they belong. This is the goal of Immersive
Sound, the next big advance in cinema technology.” The fact that “you” are asked to
close your eyes is symptomatic of the continuing tensions between 3D sound and 2D
image localization. To “hear all these details exactly where they belong” requires
denying the visual space that does not support (or supports only figuratively) the sound
space. The Dolby Atmos website situates the difference of this technology in a more
powerful bass and overhead sound that “heightens the realism of your cinematic
experience.” And, finally, your own location is made irrelevant: “no matter where you
sit in the theatre…”, you will have access to this moving experience.
Why this insistent rhetoric, refining and reasserting the immersion of the
spectator in the diegesis? Why does it, beyond the promises of classical cinema,
produce a contract that pledges the film will enter the space of the auditorium and
envelop the spectator? Why the insistence upon “enlarging” the diegesis (the space of
fantasy), as if it were not large enough already? Why deny the crucial (and necessary)
incommensurability of the space of the spectator and that of the diegesis? It would be a
mistake to try to understand surround sound as separate from IMAX and 3-D, other
attempts to expand the space of the diegesis in as many directions as possible. Surround
sound and multi-channel systems, by moving sound into the space of the auditorium,
assist in this annihilation of the frame line.
Perhaps the most striking delocalization masquerading as localization is the map
posted in urban space that specifies “You are here.” [Fig. 13] This is a rather large point
representing “you,” but it is still a point, a point that, in mathematics, is without
extension and takes up no space. And the same is true, despite its mobility, of the point that
represents “you” in Google Maps as you navigate an unknown territory. This is not only
surveillance—“they” know where you are—but also the reduction of your spatiality, the fact
that the body is itself a space, to a point. This only seems “natural” because we are
accustomed to thinking of ourselves as points within a network. Social media
ameliorate this by purportedly giving “you” an identity—but where are you when you
post on social media?
So, why have I emphasized the “turn” and its function both in classical cinema and
the cinema of today? The turn in classical cinema had a quite precise effect—that of
indicating the lost dimension of the image—within the diegesis. There was no question
of the spectator herself turning, looking away from the screen. That turn invokes the
possibility of another space, the missing space, behind the spectator. Sound surround,
in its most current uses, hopes to make this space palpable, to conquer the otherwise
and formerly taboo space of the rear of the theatre, the back of the spectator. Perhaps
the “last” territory. But it must do so very carefully, with restraint. For the spectator,
turning and looking behind is not just a refusal of the screen but an acknowledgement
of the existence of an exit.
How a Gaze Can Become Violence:
Representations of the North Korean Sports Team at the Pyeongchang Olympics
Myung-koo Kang
This paper seeks to explain how South Korea’s mainstream media views North
Korea within a frame of violence. People can choose the frames through which they
see things, but South Korea’s mainstream media has chosen a perspective of exclusion
and hatred toward the North, a way of seeing that produces arguments from authority
in the real world and forces the South Korean people into silence. The paper therefore
reviews the news coverage of the North Korean sports team, delegation, and cheer
squad that visited the South under the name of the North Korean delegation to take part
in the Pyeongchang Winter Olympics, in order to show how mainstream South Korean
media displays and reproduces hatred toward them.
First, the anticipation of war is an anxiety about an unrealized future; it is not a
reasonable inference drawn from facts and evidence, and wishing otherwise only
deepens the anxiety. Is this anticipation of war real or fantasy? It can be said that the
anticipation generates a real, regulatory power that exceeds our ways of feeling and
representation; it can also be called a fantasy, since it does not exist in reality. Yet it is
real in that the anticipation of war produces an inner fear, exceeding the psychological
boundaries of individuals in a collective setting and making a fictive situation operate
as an existing “danger.”
Second, an examination of a series of daily statements and newspapers shows how
different ways of seeing were used to incite hatred: the appearance of the North Korean
delegation, and especially the looks of its women leaders, was objectified by the camera.
For example, reports emphasized Special Envoy Kim Yo Jong’s pregnancy and freckles,
rumors of a romance between Hyun Song Wol and Kim Jong Un, and the captured
bodies of the North Korean cheer squad.
Moreover, the cheer squad at the Olympics was composed of people already
selected by the regime for purposes of display; they knew they would become objects
of viewing, and presumably were constantly aware of this fact. Under the gaze of the
North Korean regime, they were also ceaselessly watched by cameras, whether during
concerts or on a break at the beach. The delegation and cheer squad were subjected to
this voyeuristic gaze; although the cheer squad knew they would be exposed to South
Korean cameras, they remained helpless, exposed to the lens as captured bodies.
As argued above, the violence of the gaze rests on viewers’ anxiety and their sense
of crisis about the conditions of their own existence. It is therefore necessary to
transform the present antagonistic coexistence into a coexistence of peace and
solidarity; only then can a secure life for the people of the Korean Peninsula be
guaranteed. The reason the cheer squad had to exist in a realm of abnormality and
illegality is precisely that they were part of North Korea’s propaganda strategy: their
natural behavior and bearing, their casual laughter and conversation, were a
“normality” the mainstream media found hard to accept, as if the media, gazing at them
through hidden-camera lenses, sought to discover abnormal, deviant, awkward, and
strange behavior in them.
Myung-koo Kang
Introduction
This essay aims to explain how the mainstream media in South
Korea views North Korea within the framework of a gaze of violence. One can look
through eyes of love and consideration for others, or with a gaze of exclusion and
disgust. To begin with the conclusion: mainstream media in South Korea views North
Korea through a lens of exclusion and hate. Such a way of seeing others results in an
argument from authority in the real world, which amounts to collective punishment
because it mobilizes the South Korean people into tamed agreement and forces silence.
A way of seeing objects is not merely a way of looking at things and people other
than oneself, but is also the result of how the individual and society internalize and
act on those objects. The way one sees things does not just reveal an individual’s
desires, but also those constructed by society as a group. This paper attempts to reveal how
mainstream South Korean media produces and reproduces hatred of North Korea by
reviewing the various journalistic reports made about the North Korean sports team,
representative delegation, and cheer squad during their visit to South Korea to join the
Pyeongchang Winter Olympics as a unified Korea team.
Is this anticipation of war real or fantasy? It is possible to say that this anticipation
exerts a real, regulatory power over how we feel and behave. However, we can also say
it is a fantasy because it does not exist in reality, only through the words of the media
and people. Yet, it is real because the anticipation of war creates an internal fear,
exceeding its reach beyond psychological states of individuals to that of group settings.
Mainstream South Korean media’s perspective of North Korea’s team of athletes to the
Pyeongchang Olympics was made possible through the fantasy of an anticipation of
war, and demonstrated an undeniably real effect.
It is no secret that the continued state of division between South Korea and
North Korea has been based on an antagonistic coexistence. The peninsula is the most
heavily militarized place in the world, and it is a fact that, now more than ever, the possibility of
military altercations has increased because of North Korea’s nuclear and missile testing.
Furthermore, it is also true that a geopolitical structure was formed between maritime
powers (U.S., Japan, South Korea) and continental powers (China, Russia, and North
Korea) after Obama’s policy shift declaration, “pivot to Asia.” The antagonistic
coexistence of the South and the North thus extends beyond a state of confrontation
onto the stage of the post-Cold War.
Such an antagonistic view wields the effect of truth when it is expanded to a global
level of antagonistic coexistence between sea and land powers. After North Korea’s
nuclear testing, it has become common to hear claims that an American preemptive
strike is imminent or that it is necessary. These claims are developed on the rationale
that “the responsibility lies in North Korea’s belligerent behavior, and the South has
continually pointed this out and sent warnings.”
The deployment of THAAD in South Korea has seen a continued stream of TV
news and mainstream newspaper articles warning of the imminent threat of war. After
the Pyeongchang Olympics commenced, and troupe leader Hyun Song Wol and Special
Envoy Kim Yo Jong visited, this type of reporting became more frequent and
widespread.
Panelist Statements

Shin In-kyun:
It has become highly likely that the U.S. will make decisions unilaterally, be it
military or diplomatic, regardless of the South Korean government’s intentions.
“I think we should have the view that war could happen at any time regardless of
the intentions of our government.”
“Doesn’t it really feel like the U.S. will attack North Korea right now?”
“The U.S. Airforce has removed the reflectors off of F-22s so that they will not
show up on the radar. This suggests this is a real battle.”

Park Sang-ryul (Panel Host):
“Is it possible to suggest that preparation is being made for the next step with the
real intention for immediate action if necessary?”

Park Ga-young (Panel Host):
“They have prepared as many military options as we expected, and perhaps more.”
If we transition to a “peace regime” as they (North Korea) desire,
it is unknown how much longer the North Korean people will have to
suffer through a hell of human rights abuses. If the U.S.-R.O.K alliance
is broken and a North Korea-led unification occurs, just like Kim Jong
Un seems to want given his 12-time use of the word “unification” during
his New Year’s Address, our daughters may face the same fate and
sacrifice of the majority of North Korean defector women who
experienced trafficking and prostitution in China. Yet in this country,
the leftist camp that cries out for human rights generally remain silent
when it comes to North Korean human rights. It is a waste of breath to
state that this is hypocrisy.
(Dong-A Ilbo, Kim Soon-deok Column, February 12, 2018, emphasis
added by author)
We call the warning and danger of an event that has not occurred
“premediation.” Grusin has previously conceptualized it as premediated terrorism that
is witnessed in person and news that reports on the warning and dangers of terrorism.1
In a way, this kind of warning has become habitual in South Korean society, and even
the average person living on the Korean Peninsula has become accustomed to threats
of war. Events that symbolically “do not occur” go unreported in the media. But war,
which should not happen in the real world, is understood as something that could
immediately actualize. This real possibility has been a constant staple in news reporting
since the Korean War. Reporting around North Korean nuclear development is based
on the hypothetical situation of North Korea launching a nuclear attack on the U.S.
mainland (very similar to the assumption that convicted Hussein’s regime of possessing
WMD and led to the Iraq War), and action plans, countermeasures, and strategies are
constantly discussed with this as a premise. It is noteworthy that a situation that has not
been realized (though anything is possible at any time) has had the affect of warning and
anxiety now rooted in the Korean people, and it is upon this affect that disgust of North
Korea exerts its influence.
Daily statements and reports by the New Korea Party, Chosun Ilbo, and multiple
news channels mocked the Pyeongchang Olympics as the Pyongyang Olympics. A
1 In the book Premediation: Affect and Mediality After 9/11 (Palgrave, 2010), Grusin uses the concept
of “premediation” to critically explain how predictions and warnings about terror or dangers lead to
actual situations, and how viewers become accustomed to these predictions, resulting in an
international phenomenon of helplessness and lethargy.
selection of these will be examined to review the various ways of seeing, and how they
instigate hatred.
[Figure 1] Special Envoy Kim Yo Jong, a screen capture from TV Chosun
Figure 1 is a screenshot of the February 9 TV Chosun news which sent out the
following subtitle, “Freckles and Moles Visible on Skin,” and aired under the news
subheading “All-Black Fashion and Color Makeup” seen on the top left corner.
Additionally, a video clip of Vice Department Director Kim shaking hands with Hyun
Song Wol was played repeatedly while close commentary was made about Kim’s
clothing and body. The reporter’s voice read the following, “Many experts have noticed
that Kim’s stomach is slightly enlarged, as seen under her coat. The Korean National
Intelligence Service has previously revealed that Kim Yo Jong gave birth in 2015, and
she wore a similar coat during that time. Judging by her posture of leaning her back out
and waist in, it appears that she is five months pregnant. An obstetrician specialist
confirmed this. However, the specialist could not be certain without a direct
examination.”2
2 At the time of the reporting, Kim Yo Jong’s pregnancy had not been confirmed. It was
confirmed by the speaker of the Blue House a few days later. However, it was clearly beyond
rational reporting guidelines to report on such a private issue as the pregnancy of the North
Korean special envoy when she herself had not mentioned anything of it.
2) Reporting on Hyun Song Wol
[Figure 2] Above photo captures troupe leader Hyun Song Wol at the inter-Korean
talks on January 15
Hyun Song Wol, head of the Moranbong Band, enters the meeting
location with a light smile on her lips. She wore two-piece business
attire in navy with black high heels. It is a different look from the
military uniform she wore when she canceled the performance in Beijing.
“She dressed up her hair with a flower pin.”
Despite rumors that had circulated about romantic involvement
with Kim Jong Un, today she wore a ring on her left ring finger. The
green leather purse seen when she took out her notebook is a product of
the famed European luxury brand “H,” and is estimated to be 20 million
Korean Won if an original.
Today’s meeting, a “working-level contact of art troupes” took
place as the North’s counter proposal to our “senior working-level talks”.
(TV Chosun, January 15, 2018. “Hyun Song Wol Appears with Luxury
Bag and Wedding Ring”)
As seen in the quote above, the focus on the rumor of her romantic relationship
with Kim Jong Un, ring on her left hand, hair pin, and luxury alligator bag put Hyun
Song Wol in the sole context of being a woman instead of as a delegate to the art troupe
working-level meetings. Statements belittled and treated her lightly, and reporting on
the ring seemed to express a certain disappointment about the falsity of the rumor
involving her with Kim Jong Un.
3) Captured Bodies of the North Korean Cheer Squad
(North Korean cheer squad waving the Korean Unification Flag, Yonhap News,
January 18)
The two figures above were common images after the start of the Pyeongchang
Olympics. First, the four different screenshots in the lower figure are the opening scenes
that main broadcasters such as KBS, MBC, JTBC, and Yonhap News used to report the
first day the North Korean cheer squad arrived in the South. Every broadcaster began
their report with a first shot of the North Korean cheer squad’s legs or calves. It goes
without saying that this is a view that objectifies women’s bodies. They were first
commodified by North Korea which only selected beautiful women in order to be the
object of exhibition in the South, and they were commodified a second time in the South
by the South Korean media. It seems these images were not strange at all, and were
reproductions of repeated images.
What must be questioned in the reappearance of the above three images is how it
was possible for such news broadcasters to report so carelessly on the North Korean
delegation, despite ongoing efforts to decrease war threats via the Pyeongchang
Olympics. It is possible to criticize and place blame on commercialized journalism or
the ways of seeing that objectify women, but they are difficult to accept as sufficient
explanations for such journalism practices. There is a need to clarify and shed light on
the different devices ingrained in the ways mainstream media sees things.
3. Arrangement of Normality and Abnormality
As evident in the three examples above, the North Korean delegation’s
appearance, and especially the physical appearance of its women leaders, was objectified
from a disciplinary lens. Sophisticated fashion and appearance, beauty, and smiling
faces are the normal. Most media in general tend to have views that objectify and
display women. However, the way the South Korean media cameras looked at the North
Korean delegation this time surpassed the usual level of objectification, and did not
hide views expressing confusion or inability to accept what was being seen. Firstly,
Kim Yo Jong and Hyun Song Wol’s manners, facial expressions, and fashion were
rather normal relative to the expectations of the South Korean media. The media
searched for deviation and anomalies in their smiling faces, sophisticated and calm
movements and answers but found them to be very normal.
Kim Yo Jong and Hyun Song Wol’s ways of behavior and speech should have
been accessories of the abusive North Korean regime symbolized by missiles and
military assemblies, like robots ready to shout “Great Leader” at any time. Instead they
gave normal handshakes and greetings, laughing and conversing. When those who are
expected to be abnormal act normally, one must look even closer to find abnormalities.
This is the reason for mainstream media’s focus on Kim Yo Jong’s pregnancy and
freckles, Hyun Song Wol’s romance rumors, luxury bag, and scarf, among others. In
fact, Yonhap News even aired a recording of cheer squad members waiting in line
inside the women’s restroom.3 When something breaks expectation and is abnormally
normal, one tends to rationalize it by pushing it past the borders of normalcy. Luxury
goods, sophisticated fashion, but juxtaposed with freckles and a pregnant female body
(not as a productive body but rather as a body having conceived a dangerous child of
the Baekdu bloodline) positions the information as abnormal. And with this, an
irrational report is justified.
The normal behaviors and manners of the North Korean delegation, the Blue
House and ruling party treating them as normal diplomatic partners, and welcoming
South Korean citizens were all abnormal actions causing instability to South and North
Korea’s symbiotic antagonism. This can only be perceived as a threat by South Korea’s
conservative powers, whose existence is one of the axes sustained by this
antagonistic relationship. Symbiotic antagonism is maintained not only through
military and political power, but also through the mentality of a divided system
(anti-communism).
3 The reporter’s action of following the cheer squad into the women’s restroom with a hidden
camera to take pictures is what is strange and abnormal. What sort of abnormality was it that the
camera hoped to capture?
They were under the gaze of the North Korean regime while also constantly under the
gaze of the camera, be it during the concert or even when they visited the beach for a
break. One’s actions and words cannot be very natural or free when one is conscious of
being under someone else’s gaze.4 They cannot laugh freely, nor can they not laugh. It
is difficult to speak, but it is also impossible not to speak. The media continued to
maintain a voyeuristic gaze at this group while they were in defenseless circumstances.
Why is the North Korean cheer squad watching South Korean TV in their
accommodations something strange and newsworthy? TV Chosun narrated the
following while broadcasting this point.
“At night, they watch our TV shows. This act doesn’t seem to be
secretive, as two people sit side by side watching TV.” “(When those
people turn the TV on, they see our channels, right?) When I checked
the rooms the day before yesterday, all the (South Korean) channels
worked. They work fine but who knows what happened.”
The report states the strangeness of North Koreans watching South Korean TV
when it should be prohibited. The report itself reveals that the footage was captured by
zooming in and peeping through an open window with a telephoto lens. The
significance of this voyeuristic gaze needs to be discussed at this point.
4 One wonders what would have happened if the cheer squad, while on break at Gyeongpodae
Beach, had prohibited cameras and rejected interviews for the sake of their privacy (one should
have the right to lie on a beach without the watchful eyes of others).
Methods to look at a counterpart who is difficult to face directly include sneaking
peeks and taking glances. The most representative is the gaze of misogyny. As
misogynists, men select physically weak women as the target of their hate and disgust
in order to disguise and hide their vulnerabilities. These men generally cannot face
women as equal agents. They threaten and attack others in order to hide their
weaknesses and deficiencies. Being considerate of others and helping them is not
possible for such men; it is unimaginable to help and be considerate of others when one
is oneself empty and vulnerable. They hide their own vulnerabilities and ignore those
of others. And because they must hide their deficiencies and their position, they sneak
peeks at others.
Those that sneak peeks believe their targets are abnormal and impure.
The North Korean delegation and cheer squad were subject to this kind of
voyeuristic gaze, and though the cheer squad knew they would be exposed to South
Korean cameras, they were left in a defenseless condition. They had no option but to
be exposed in front of the camera as captured bodies: bodies that had to laugh, but not
mindlessly; could not get caught watching TV; made sure not to be seen in possession
of a luxury bag; and had to put on makeup without the choice of not putting it on. This
is how the South Korean mainstream media’s gaze at the North Korean delegation and
cheer squad became abusive and violent.
5. Concluding remarks
As examined above, the violence of gazes is based on the viewer’s anxiety and
feelings of danger to one’s conditions of existence. It is necessary to change the current
antagonistic coexistence to a coexistence of peace and togetherness because only then
will a safe and secure life be guaranteed for those living on the Korean Peninsula. North
Korea’s nuclear threat has been a favorable condition providing political soil for the
vested interests in North Korea’s governing system and South Korea’s conservative
powers that have depended on antagonistic coexistence to maintain their power. The
Abe administration in Japan also safely overcame political crises thanks to North
Korea’s missile launches.
The joint entrance under the Korean Unification Flag, the unified ice hockey team,
and the cheer squad at the Pyeongchang Olympics had to dwell in the spheres of
abnormality and illegality because they were read as North Korea’s propaganda strategy.
Their natural actions and behaviors, casual laughter and conversations were a “normal”
that the mainstream media found difficult to accept. It was as if the media gazed at them
through the lens of a hidden camera, searching for abnormal, deviant, awkward, and
strange actions.
智力外包(Outsourcing Intellect)
瓦格特(Christina Vagt)
Christina Vagt
1. Mistrust of intellect
In times of climate change denial and other conspiracy theories circulating in
nationalist politics and media, French philosopher Bruno Latour cautions against a
(postmodern) criticism that eliminates its offspring. Postmodern theory appears in the
eyes of Latour, himself one of its key protagonists, as a self-diminishing movement,
and its danger lies in the old (Western) tradition of distrusting immutable facts by
presenting them as ideologically biased.2 Is it criticism itself that produces these effects?3
Latour’s answer to this stated misery of criticism is to turn towards a new
empiricism, an empiricism that is located somewhere between Martin Heidegger’s
Thing romanticism and Alfred North Whitehead’s process ontology; an empiricism
that promises to return agency to the things and quasi-objects that they have lost through
criticism and politics.
Latour’s text is still relevant today, even 15 years after being first published,
maybe today even more than when he wrote it. But in my opinion, the actual problem
of criticism today is rooted less in a mistrust of immutable facts than in a structurally
much more profound mistrust of the rationality of human action, and, derived from
that, a deep mistrust of the comprehensibility and governability of a human-made
world.
Over the course of the 19th Century, the world that predates all asserted facticity
transforms into a whimsical hybrid of organisms and symbols, of will and intellect.
Arthur Schopenhauer is one of the first to articulate his mistrust of human reason from
the perspective of the will to live. The will belongs to life, to the organism, it sits in
every network of roots as much as in the seeds that it drives towards the surface of the
earth. But the will is “unanschaulich” (non-intuitable): nondescript, timeless, and discreet,
without any representation. Yet even though it only knows affirmation and negation, it founds all
1 Schopenhauer, Die Welt als Wille und Vorstellung (World as Will and Presentation), Book 1, §1.
2 Cf. the marketing text of Diaphanes Press for the German translation of Bruno Latour’s text...
3 Latour, “Why Has Critique Run Out of Steam?” (2004).
community, because the will governs all cooperation. The intellect, by contrast, is self-
centered, oriented only towards the individual that has to form it. The intellect is as
much modifiable as it is limited: unaware of itself, calculating, it insists on taming the
will, but at the same time it is confined to the sphere of visibility and enlightenment. The
dilemma of the intellect is already present in the philosophy of Schopenhauer, before
the third narcissistic blow, in the form of Freudian psychoanalysis, hit the human subject.
Already in Schopenhauer, the intellect entertains a distorted relationship with the will,
just as the will is hindered in its drive by the intellect. The brain is simply a parasite of
the living organism, and only a genius can suppress the will almost entirely. The
mistrust of the intellect therefore precedes the mistrust of facts. Since at least
Schopenhauer, the intellect has been under the suspicion, at least within a certain European
tradition of thought, of being nothing more than an erratic function of the organic,
constantly interrupted by the ‘true’ continuum of life.
4 Herbert A. Simon: The Sciences of the Artificial, Boston 1969, p. 3.
5 Carnap was the first philosopher to present the philosophy of mind as a computational program
(Glymour, Ford, Hayes: The Prehistory of Android Epistemology, in: Android Epistemology, MIT
Press 1995, pp. 3-23, here p. 18).
6 Herbert A. Simon: Models of My Life, Boston 1991, p. 202.
7 Simon: The Sciences of the Artificial (as n. 4), p. 3.
intellect as being characterized through education and artificiality, rather than through the
living beings that bring it forth, and they share the insight into its limitations. For
Schopenhauer, it is the task of philosophy to present the concrete world in abstract terms,
to summarize the complexity in which it appears to the individual in abstract and
general terms:
“Thus it will on the one hand separate and on the other hand unite,
in order to deliver, for the sake of knowledge, any and all of the manifold
things in the world (…). Philosophy will be, accordingly a summa of the
most general judgements whose immediate cognitive ground is the
world itself in its totality, without the exclusion of anything: thus
everything that is to be found within human consciousness. It will be a
complete replication, as it were a mirroring, of the world in abstract
concepts, which is only possible by uniting the essentially identical
within one concept and separating out that which is different in
another.”8
While Schopenhauer’s model lingers within the antagonistic relation between will
and intellect of 19th-century philosophy, Simon builds his model of “bounded rationality”
after World War II in the new medium of computer simulations, with the declared
goal of dispelling any suspicion of an élan vital (a vitalistic life force) at the heart of
intellectual processes. 9 Simulation itself was nothing new, Simon writes, but the
spectrum of systems that can be simulated grew enormously through digital computers and
their degree of abstraction. No other simulation technique, be it thought experiments or
wind-tunnel setups, is as “protean”, as adaptive, and as capable when it comes to
functional description, and therefore as mathematical.10
Simon and Newell call their computer program the Logic Theorist. Unlike
traditional Operations Analysis programs, it does not search for the optimal solution of
a decision problem by running through all possibilities; it discards the majority of
possibilities at the outset without exhaustively testing them, and pursues the remaining
possibilities only as long as it takes to find a satisfactory solution. Facing complex
problems, the Logic Theorist reaches its goal faster. Within one year, the computer is
8 »daher wird sie teils trennen, teils vereinigen, um alles Mannigfaltige der Welt überhaupt (...)
dem Wissen zu überliefern. Die Philosophie wird demnach eine Summe sehr allgemeiner Urteile
sein, deren Erkenntnisgrund unmittelbar die Welt selbst in ihrer Gesamtheit ist, ohne irgend etwas
auszuschließen; sie wird sein eine vollständige Wiederholung, gleichsam Abspiegelung der Welt
in abstrakten Begriffen, welche allein möglich ist durch Vereinigung des wesentlich Identischen in
einen Begriff und Aussonderung des Verschiedenen zu einem andern.« (Schopenhauer: Die Welt
als Wille und Vorstellung I, p. 104.)
9 Newell, Simon: Simulation of Human Thought, p. 7.
10 Simon: The Sciences of the Artificial (as n. 4), p. 18.
able to solve the first 25 theorems of the Principia Mathematica, and in some cases
even in a more elegant way than its human predecessors.11
Simon and Newell present the program in 1956 at the founding conference of
artificial intelligence at Dartmouth College to John McCarthy, Claude Shannon, Oliver
Selfridge, Marvin Minsky and others. The Logic Theorist is considered the first
computer program able to imitate human problem-solving behavior, and therefore the
beginning of artificial intelligence. According to Simon and Newell, with the Logic
Theorist it became clear that the computer is not just a metaphor or analogy for the
brain:
“We are not talking of a crude analogy between the nervous system
and computer ›hardware‹. The inside of a computer does not look like
a brain any more than it looks like a missile when it is calculating its
trajectory. There is every reason to suppose that simple information
processes are performed by quite different mechanisms in computer and
brain. […] However, once we have devised mechanisms in a computer
for performing elementary information processes that appear very
similar to those performed by the brain (albeit the quite different
mechanisms at the next lower level), we can construct an explanation of
thinking in terms of these information processes that is equally valid for
a computer so programmed and for the brain.”12
According to Simon, both computers and human brains process information in a
goal-oriented way, because that processing serves the adaptation of a system to its outer
environment. Hence a crucial distinction: the inner environment is represented by a set
of alternative, defined actions, while the outer environment is represented by known or
unknown parameters, just like a human decision maker who will never have 100
percent of the information about the environment. As an example, Simon mentions the
optimization of nutrition: which foods can guarantee a desired amount of calories,
taking both dietary guidelines and cost efficiency into account?13
While the inner environment is bounded by food prices, nutrition rates and
requirements, the system’s relation to its outer environment can be optimized through
the cost-benefit function. Hypothetically there is an unlimited number of possible
foods to choose from, but the program reaches its goal quickly by means of linear
programming. Obviously, planning a menu based on this kind of optimization will only
take characteristics like the taste or sustainability of foods into account if they are counted
in as parameters. The more parameters are taken into account, the longer the calculation
will take.
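Simon's nutrition example can be made concrete with a small sketch. He discusses it in terms of linear programming; the brute-force version below, with invented foods, prices, and calorie figures, merely exhibits the structure of the problem and makes visible why each added parameter enlarges the search space.

```python
# Illustration of Simon's diet example: pick portions of foods meeting a
# calorie requirement at minimal cost. The food data are invented.
from itertools import product

# name: (calories per portion, cost per portion)
foods = {"bread": (250, 0.5), "beans": (350, 0.8),
         "milk": (150, 0.4), "cheese": (400, 1.5)}

def cheapest_menu(foods, min_calories, max_portions=4):
    best, best_cost = None, float("inf")
    names = list(foods)
    # try every combination of 0..max_portions portions of each food;
    # each extra food (or extra parameter) multiplies the search space
    for counts in product(range(max_portions + 1), repeat=len(names)):
        cal = sum(c * foods[n][0] for c, n in zip(counts, names))
        cost = sum(c * foods[n][1] for c, n in zip(counts, names))
        if cal >= min_calories and cost < best_cost:
            best, best_cost = dict(zip(names, counts)), cost
    return best, best_cost

menu, cost = cheapest_menu(foods, min_calories=2000)
```

Adding a taste score or a sustainability constraint would mean another term per food in the loop and, in general, more combinations to examine, which is exactly the point about parameters and computation time.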
The Logic Theorist, like a human decision maker, is programmed to discern
between inner and outer environment on a symbolic level, a fact which makes it
intelligent, because »intelligence is the work of symbol systems.«14
A programmed digital computer has the necessary and sufficient means to not only
crunch numbers but to interact with all kinds of thinkable symbols in an intelligent way.
The relation between brain and computer is therefore not a metaphorical one.
Cognition does not occur by means of calculation; it is calculation, according to Simon.
With computer simulations, the computer stops being a metaphor for the brain, because
it demonstrates how computers can produce human behavior.15 A computer simulation
of thinking thinks, Simon writes in an almost Heideggerian tone, because computer and
brain work with the same material, namely symbols.16 Thinking, unlike other activities
such as digestion, occurs in the form of an environment-oriented optimization via symbol
processing. In this understanding, the discourse on human rationality is freed from all
the substance ontology that it has carried since René Descartes, and the goal from now on
is to produce rationality as an organizational function through the design of symbol-
processing machines, almost as Schopenhauer had conceived it: not in terms of
philosophical world description, but still as »Abspiegelung der Welt in abstrakten
Begriffen« (a total reflection of the world in abstract concepts), as computer technology.
Before working for the RAND Corporation in Santa Monica, Simon had studied
mathematical decision and business theories as a political scientist, so-called Operations
Analysis. With his study Administrative Behavior from 1947, he laid the
foundations for a behavioral-economic critique of the classic model of the homo
economicus, by demonstrating that human actors in larger organizations and
administrations act rationally only to a certain degree.17 The all-knowing, profit-oriented,
and rational businessman of (neo-)classical economics now appeared to be nothing but
an idealization that no longer had anything in common with the reality of the different
actors in modern organizations. The homo economicus, according to Simon, was nothing
more than »the idealization of human rationality enshrined in modern economic
theories«.18
Human behavior therefore is not determined by rationality, but keeps a certain
flexibility in order to be able to cope with a complex environment of which it has only
partial knowledge.19 With his decision theory of bounded rationality, for which he
14 Herbert A. Simon: The Sciences of the Artificial, 3rd ed., Cambridge, MA 1996, p. 23.
15 Cf. Roberto Cordeschi: Steps Toward the Synthetic Method, in: Philip Husbands, Owen
Holland and Michael Wheeler (eds.): The Mechanical Mind in History, Boston 2008, pp. 219-258,
here p. 231.
16 Cf. Herbert A. Simon: Machine as Mind, in: Peter Millican and Andy Clark (eds.): Android
received the Nobel Prize in economics in 1978, Simon contributed significantly to the
entry of psychological theories and factors into the realm of management theory
and economic models. But economics itself was not the goal of Simon’s research; it
appears to have been just the perfect environment in which to study the bounded rationality
of human behavior. And artificial intelligence was a promising way to optimize a rationality
that depends on the interaction between inner and outer environment. From the beginning,
the theory of bounded rationality wanted to be more than simply economic analysis
or theory; it wanted to be a new way of governing in the form of design and
programming.
According to Benjamin Seibel, the decision and game theories developed at
RAND were at the heart of the neoliberal transformation of statehood under Ronald
Reagan.20 In Seibel’s analysis of cybernetic governance, this political technology is
motivated by the desire to de-subjectify political sovereignty through mathematical
procedures. I would like to add that it is not just mathematical procedures: the
cybernetic vision of automating civil governance processes meets with a behavioral
design strategy.21 Simon, who served as a political advisor under Lyndon B. Johnson and
Richard Nixon, addresses not politicians and managers but a new type of engineer-
designer, whom he sees as needing to learn economic cost-benefit analysis.22 The
heuristics of the Logic Theorist, running through possible alternatives until a satisfying
solution is found, was to be applied whenever an optimum solution was not attainable. Simon
calls this problem-solving heuristic ›satisficing‹, and it serves as an alternative to
rational decision theories: »Decision makers can satisfice either by finding optimum
solutions for a simplified world, or by finding satisfactory solutions for a more realistic
world.«23
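The quoted dichotomy, optimum solutions for a simplified world versus satisfactory solutions for a more realistic world, can be contrasted in a few lines. The options, values, and aspiration level below are invented for illustration.

```python
# Simon's 'satisficing' versus optimizing, as a minimal sketch: an
# optimizer must examine every alternative; a satisficer stops at the
# first one that meets an aspiration level.

def optimize(options, value):
    """Examine everything, return the best (exhaustive search)."""
    return max(options, key=value)

def satisfice(options, value, aspiration):
    """Return the first option that is good enough, and how many
    alternatives had to be examined to find it."""
    examined = 0
    for o in options:
        examined += 1
        if value(o) >= aspiration:
            return o, examined
    return None, examined

options = [3, 9, 4, 17, 8, 25, 6]
value = lambda x: x                       # trivial value function
best = optimize(options, value)           # must scan all 7 options
good, n = satisfice(options, value, 15)   # stops as soon as one suffices
```

The satisficer's answer is worse than the optimizer's, but it is reached with fewer evaluations; when the space of alternatives is large or open-ended, that difference is what makes decision possible at all.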
If rationality, according to Simon, is not primarily an exclusive quality of
human reason but rather one of inner organization in relation to an environment, then
it depends directly on the system’s design. ‘Satisficing’ can be called the first artificial-
intelligence programming heuristic, as well as a general design and governing maxim.
John von Neumann had reformulated the problem of how to design a hydrogen
bomb in such a way that it could be simulated on the architecture of the ENIAC at the
University of Pennsylvania.24 Simon and Newell reformulated the problem of human
intelligence and decision making in such a way that it could run on the architecture of
the JOHNNIAC at RAND. According to Seibel, the governmental orientation of US
neoliberalism was not just a political reaction to the social welfare reforms under
Lyndon B. Johnson, as Foucault figured, but rather the result of computer-technological
modelling and governing that had to deal with the complex global political
20 Seibel: Cybernetic Government (as n. 10), pp. 202-203.
21 Seibel: Cybernetic Government (as n. 10), p. 201. See also Jeannie Moser and
Christina Vagt (eds.): Verhaltensdesign. Ästhetische und technologische Programme der 1960er
und 1970er Jahre, Bielefeld 2018.
22 Cf. Simon: The Sciences of the Artificial (as n. 4), p. 70.
23 (eds.): Nobel Lectures, Economics 1969-1980, Singapore 1992, pp. 343-371, here p. 350.
24 Cf. Peter Louis Galison: Computer Simulations and the Trading Zone, in: Gabriele
situation of the Cold War, in which traditional decision makers could not be trusted to
reach objective decisions. With Simon pointing out the limits of capacity and complexity,
the problem of decision-making shifts towards the design of conditions that could
guarantee decidability in the face of limited resources.25
The cost-benefit calculus forms the core of the behavioral-economic governing
programs that thoroughly submit everything, even the non-economic, to economic
analysis. As Foucault states in his analysis of US neoliberalism, the behavioral
economists at the University of Chicago developed, already during the 1930s, the
methodology for calculating the cost-benefit ratio of everything; after that, even
something like racism could be reformulated as a problem of supply and demand, and its
economic cost could be expressed in dollars. According to Foucault, the behavioral
economics of Gary Becker and others epitomizes biopolitics, a specific form of power
that aims at the control of a population through the governance of normalizing statistics:
make live and let die.26
Behavioral economics, together with the new techniques of artificial intelligence,
forms a new experimental field of political technologies, in which decision making, ergo what
was formerly called intellect, is unhinged from its subjective and qualitative context in
order to be outsourced to economic-technological systems. According to
behavioral economics, there is no need for a homo economicus anymore, because
rationality is to be found not in humans but in the organizational structures of overriding
importance. Artificial intelligence as it was developed and described by Simon and
others in the aftermath of World War II has been administered in policy making as well as in
corporate management ever since: an expression of this behavioral-design shift of the
political itself.
The already mentioned text by Bruno Latour, Why Has Critique Run Out of Steam?,
was first published in 2004 and is clearly affected by the U.S. presidency of
George W. Bush and his “war against terror” in the aftermath of 9/11. It is saturated with
a mistrust of human reason in general and of the humanistic theories of the 20th and 21st
centuries in particular. Latour attempts to reformulate the Kantian problem of criticism
in the (new) language of artificial intelligence. With a reference to Alan
Turing’s Computing Machinery and Intelligence, Latour ends his essay just as Turing
ended his, with the “surprising result that we don’t master what we, ourselves, have
fabricated, the object of this definition of critique”.27 In Latour’s text, the lack of
empiricism, and in its wake the critique of facticity in modern and postmodern theory,
is the main factor behind the feebleness of humanistic criticism in the times of homeland
25 »In der quantitativen Übersetzung trat das Regieren als ökonomische Tätigkeit hervor, deren
Resultate in einem Kosten-Nutzen-Kalkül evaluiert werden konnten.« (In the quantitative
translation, governing emerged as an economic activity whose results could be evaluated in a
cost-benefit calculus.) (Seibel: Cybernetic Government (as n. 10), p. 203.)
26 Cf. Michel Foucault: Geschichte der Gouvernementalität II. Die Geburt der Biopolitik, ed.
Michel Senellart, Frankfurt am Main 2004, pp. 300-330.
27 Bruno Latour: “Why Has Critique Run Out of Steam?”, Critical Inquiry, Winter 2004, p. 347.
security, while Turing in 1950, in the face of nuclear weapons technology, simply reflects on
whether something like a “critical mass” exists in the context of human theory production.
»Is there a corresponding phenomenon for minds, and is there one for machines?
There does seem to be one for the human mind. The majority of them seem to be ›sub-
critical,‹ i.e. to correspond in this analogy to piles of sub-critical size. An idea presented
to such a mind will on average give rise to less than one idea in reply. A smallish
proportion are super-critical. An idea presented to such a mind may give rise to a
whole ›theory‹ consisting of secondary, tertiary and more remote ideas. Animals’ minds
seem to be very definitely sub-critical. Adhering to this analogy we ask, ›Can a machine
be made to be super-critical?‹«.28
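Turing's analogy can be read, loosely, as a branching process: each idea provokes on average r follow-up ideas. The toy calculation below, with invented values of r, shows the divide he describes: the expected cascade from one idea stays finite in a sub-critical mind and grows without bound in a super-critical one.

```python
# A branching-process reading of Turing's 'critical mass' analogy.
# r is the average number of follow-up ideas each idea provokes; the
# values 0.5 and 1.5 are purely illustrative.

def expected_ideas(r, generations):
    """Expected total number of ideas after the given number of
    generations, starting from one idea (geometric series 1 + r + r^2 + ...)."""
    return sum(r ** k for k in range(generations + 1))

sub = expected_ideas(0.5, 50)   # sub-critical: converges towards 1/(1 - 0.5) = 2
sup = expected_ideas(1.5, 50)   # super-critical: explodes into a whole 'theory'
```

For r < 1 the series converges to 1/(1 - r), so a sub-critical mind replies with a bounded handful of ideas; for r > 1 it diverges, which is the "whole theory consisting of secondary, tertiary and more remote ideas."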
When Latour’s text was translated from English into German, a momentous error
occurred. The crucial sentence “A smallish proportion are super-critical” was omitted,
so that the German translation now states that sub-critical minds (and machines)
give rise to super-critical theories.29
All philological persnicketiness aside, this editorial error demonstrates once more
how the space of the symbolic and intelligible is governed by chain reactions of
signifiers, and not by supposedly stable relationships between signs and things, or even
meaning; a view prevailing not only among the criticized postmodern theorists
but, as I have tried to show, also within the heart of behavioral economics and
governmentalities and the beginnings of artificial intelligence. According to Simon, the
condition of possibility for a ‘satisficing’ organization of intelligent systems in complex
environments is the ability to make significant decisions. Facticity, on the other hand,
occurs on a different ontological level, because it is bound to sociality and its norms,
and to the symbolic organizational structure. And because of this inherent social fabric
of facticity, it will always be subject to metonymical displacements and
communication noise.
Recently, artificial intelligence has ascended from an almost esoteric research
project and relic of the Cold War to a billion-dollar business under the new names of
machine learning and smart technologies. In the meantime, nationalist and racist
movements drive politics in Europe and the United States, while the first artificial
intelligence has run for office in a political campaign in Japan. In the face of the actual
political situation in Europe and the United States, with the resurgence of ethno-nationalist
movements that had not reached a ‘critical mass’ since the 1930s, theoretical
navel-gazing about postmodernism’s lack of morals and facticity is in danger of
descending into sub-critical spheres. Once political decision making is completely
reduced to the economic cost-benefit calculus, the political as a zone of conflict and negotiation
runs the danger of being eradicated, or of being reduced to the production of affects. The
misery of criticism does not lie in the assumed postmodern lack of empiricism, but in
28 Turing quoted in Bruno Latour: Why Has Critique Run Out of Steam? From Matters of Fact to
Matters of Concern, in: Critical Inquiry 30/2 (2004), pp. 225-248, here p. 248.
29 „Der Verstand der meisten Menschen scheint ‚unkritisch‘ zu sein, dh. er entpricht bei dieser
Analogie den Reaktoren unterkritischer Größe. Eine einem solchen Verstand mitgeteilte Idee ruft
eine ganze ‚Theorie‘ hervor, bestehend aus sekundären, tertiären und noch fernerliegenden
Ideen.“ (Turing quoted by Latour in the German translation, p. 59.)
the helplessness of intelligent systems that are confronted with a political madness that
operates within a completely rational cost-benefit paradigm.
通用人工智能为何需要胡塞尔的“意向性”理论?(Why Does Artificial General Intelligence Need Husserl’s Theory of “Intentionality”?)
徐英瑾(Xu Yingjin)
Artificial intelligence has to possess intentionality, because a mind requires
intentionality, provided that it has the capacity to revise its beliefs in accordance with
environmental changes. However, mainstream Anglophone philosophical theories of
intentionality fail to “illuminate” the problems of artificial general intelligence (AGI):
these mainstream approaches either appeal to external environmental factors and thus
cannot reach internal modes, or are unable to account for gradual transitions between
different cognitive states. Accordingly, the required theory of intentionality must be
able to suspend mental contents from judgments about the external world, and must
treat psychological modes as objects that admit of gradual transitions into one another.
These two desiderata naturally lead us to Husserl’s “phenomenological epoché” and to
an inferentialist interpretation of his notion of “Noema”, both of which can themselves
be given an algorithmic explication through the Non-Axiomatic Reasoning System
(NARS).
Xu Yingjin
1. Introduction
However, besides the controversy involved in the first premise, which we will address
in section 3, at least the second premise of this argument is doubtful, since there is a
relatively new tendency to interpret the Husserlian notion of “Noema” not in terms
of Minskian frames or Fregean “senses” but by virtue of Robert Brandom’s
inferentialism, and it is this reading that attributes more dynamic features to Husserl’s
theory of intentionality (more on this in section 5). Therefore, mainstream naturalized
phenomenologists’ marginalization of Husserl (which is in sharp contrast with their
preference for Heidegger and Merleau-Ponty) is not warranted.
But the preceding claim itself does not imply that the relevance of Husserl to AGI
is self-evident. The revelation of this relevance requires some further arguments, which
are supposed to be provided in this article. To be more specific, these arguments are
supposed to be supporting the following sub-claims, which constitute the route-map of
this research:
The main purpose of this research is not only to persuade naturalism-
oriented AI/AGI researchers to acknowledge the value of Husserl’s phenomenology,
but also to reconstruct Husserl’s phenomenology from a new perspective, namely, one
different from mainstream naturalized phenomenology in that it keeps its distance
from 4E-ism. Explorations in this direction will hopefully rescue Husserl’s reputation
from the shadows of Heidegger and Merleau-Ponty, who have long been favored by
mainstream naturalized phenomenologists.
2. Intentionality is required by intelligence
Here we will sidestep the complicated problem of how to strictly define the term
“intelligence” and begin with a simpler question: given that no reasoning system can
reach conclusions that are practically useful without premises encoding empirical
contents, and that prejudices are usually (albeit perhaps not inevitably) involved in
these premises, what kind of reasoning machine do we need to build if it is supposed to
bear the mark of “intelligence”? Prima facie we have four options on the table:
Option 1: To build a system which reasons with premises which are all
true and is capable of revising its beliefs in accordance with new environmental
changes.
Option 2: To build a system which reasons with premises which are not all
true and is capable of revising its beliefs in accordance with new environmental
changes.
Option 3: To build a system which reasons with premises which are not all
true and is not capable of revising its beliefs in accordance with new
environmental changes.
Option 4: To build a system which reasons with premises which are all
true and is not capable of revising its beliefs in accordance with new
environmental changes.
Option 1 is quite weird in the sense that it looks unnecessary for a system to revise
its beliefs if its starting premises are all true. Surely the set of all true premises of a
system could be fairly small, so that it would still be necessary for such a system to enlarge
the scope of its true beliefs in order to be more adaptive to the environment. But to include
more new true beliefs does not mean that the older ones have to be revised, unless
they can be proven to be untrue. Thus, option 1 remains weird. Option 3 is weird
too, since it is not practically useful to build a machine which can only transfer
falsities from premises to conclusions, rather than a machine which can automatically
recognize falsities and separate them from truths. As to option 4, it is theoretically a bit
more acceptable than options 1 and 3, since a system with no false starting premises would
theoretically require no revision of its beliefs. But it is still practically too challenging
to build such a system, given that no programmer, who is anything but an omniscient
being, can guarantee that all the premises she feeds into the system will never be proven
untrue in the future, unless the premises in question encode only trivial truths and
hence bear no potentially interesting implications. Hence, only one
option, namely option 2, is left on the table. That is to say, any intelligent system,
whether artificial or natural, has to be able to revise its initial beliefs, some of which
will be proven to be untrue.
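Option 2, reasoning from premises that are not all true while revising beliefs against new evidence, can be sketched minimally as follows. The toy belief store and the crude revision rule are invented for illustration; real belief-revision frameworks (AGM-style theories, or the NARS discussed later in this article) are far subtler.

```python
# A minimal sketch of option 2: a system that starts from premises which
# are not guaranteed to be true, and revises them when new environmental
# evidence contradicts them. Beliefs are plain strings; negation is
# modeled crudely by a "not " prefix.

class Reviser:
    def __init__(self, premises):
        self.beliefs = set(premises)      # may contain falsehoods

    def observe(self, fact):
        """Incorporate evidence: retract any stored belief whose
        negation has just been observed, then add the observation."""
        if fact.startswith("not "):
            negation = fact[4:]
        else:
            negation = "not " + fact
        self.beliefs.discard(negation)    # revision: drop the refuted belief
        self.beliefs.add(fact)

agent = Reviser({"swans are white", "ice floats"})
agent.observe("not swans are white")      # new environmental evidence
```

The essential feature, per the argument above, is that the initial premise set is not sacrosanct: observation can overturn it, which is precisely what an option-4 system cannot do.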
And it is this option that makes the modelling of intentionality an indispensable
part of the modelling of any artificial agent, if it is supposed to be minimally intelligent.
Here is the argument for saying so:
• Hence, from (2)&(5), it can be inferred that the requirement of the variety
of psychological modes will eventually lead to the modelling of full-fledged
intentionality in artificial systems.
We believe that this argument, which is sound, can make any reasonable AGI
scientist seriously consider the problem of modelling intentionality, no matter whether
the term “intentionality” has to be construed in a Husserlian manner. However, some
readers may still ask: if the modelling of intentionality is so urgent for the design of any
intelligent system, why do most AI scientists seem to be dismissive of this issue?
The answer is fairly simple: they are mostly AI scientists rather than AGI scientists;
or, to put it the other way around, most AI systems they have built are too specific to certain
tasks to satisfy the general requirement of option 2. Actually, these systems are merely
intended to satisfy option 4, according to which the premises fed into the system are at least
87
supposed to be all true. An exemplary case to footnote this point is Edward
HYPERLINK "https://en.wikipedia.org/wiki/Edward_Feigenbaum"Feigenbaum’s
expert system (which is fairly representative of GOFAI), namely, a system usually
designed to emulate the decision-making processes of human experts in a certain
domain of knowledge. Such a system is routinely composed of a knowledge base, which
represents empirical state of affairs which are supposed to be facts, as well as an
inference engine, which applies the inference rules to given “facts” to yield new “facts”.
But such a system can work well only when the “facts” stored in its knowledge base do
encode genuine facts and hence are immune to further revision, and this condition is itself
hard to satisfy, since progress in any domain of human scientific inquiry will
routinely force human experts to update what they believed, whereas it is technically
challenging to make an expert system update its knowledge base automatically, as
a human expert would with less effort. Surely an AI scientist may try to design
an expert system which literally has the capacity of automatically acquiring genuine
knowledge from a large body of information including falsities, but this move is
tantamount to the adoption of option 2, which eventually leads such a designer to the
modelling of intentionality, as the preceding 7-step argument predicts.
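The knowledge-base/inference-engine architecture just described can be sketched in a few lines. The following toy forward-chainer is our own illustration (the fact strings, rule format, and `infer` function are invented for this sketch; it is not Feigenbaum's code), and its point is the limitation discussed above: the engine simply propagates whatever "facts" it is given, with no mechanism for retracting a premise that later turns out to be false.

```python
# Toy expert system: a knowledge base of "facts" plus a forward-chaining
# inference engine. All names here are hypothetical illustrations.

facts = {"metal(x1)"}                     # knowledge base: premises assumed true
rules = [
    ({"metal(x1)"}, "conducts(x1)"),      # if all antecedents hold, add the consequent
]

def infer(facts, rules):
    """Apply every rule until no new 'fact' can be derived.
    Nothing here can revise a premise that is later proven false:
    the system embodies option 4, not option 2."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

print(sorted(infer(facts, rules)))  # ['conducts(x1)', 'metal(x1)']
```

If a false premise were fed in, the engine would happily derive false consequences from it; any revision would have to come from outside the system, which is exactly the predicament of option 4.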
3. Mental contents cannot be treated externalistically in AGI/AI
• From (1) & (2), it can be inferred that any attempt to model
intentionality in accordance with externalism has to encode the secondary
intension from an omniscient being’s perspective.
• Hence, from (4) & (3), it can be deduced that semantic externalism
cannot provide a feasible framework for AI.
Some readers may doubt the acceptability of step 2 by denying the necessity of
introducing an omniscient being’s perspective for fixing the secondary intension. They
may contend that a higher-level ascriber who knows more than the agent in question may
suffice for ascribing the secondary intension to the target representation. But the
question is: how much more does such an ascriber need to know? Advocates
of two-dimensionalism simply cannot say that “the ascriber only needs to know that
the chemical composition of water is H2O” in the twin-earth case, since it would be too
ad hoc to explain why this ascriber is so lucky as to acquire the right piece of knowledge,
among others, for picking out the right sort of secondary intension just in this case.
Given that luck will routinely undermine the reliability of ascribing the secondary
intension, luck has to be precluded from such processes, and the best way to preclude it is
to appeal to an idealized ascriber who delivers semantic knowledge steadily and reliably.
Obviously only an omniscient being can perfectly satisfy this condition, whereas no
artificial system can simulate such a being.
Some readers may also doubt the acceptability of step 4. Although for GOFAI, as
they may contend, it looks necessary to deliberately avoid introducing an omniscient
being’s perspective by constructing “micro-worlds” (namely, partial representations of
worlds which can be processed by a certain configuration of computing machinery),
GOFAI is not the only game in town. It seems that both connectionist and enactivist
systems escape the problem posed by step 4 by avoiding building such micro-
worlds.
But we don’t think so. Actually, even in a connectionist system, it still makes sense
to view the “neuronal activation space” as another form of micro-world, although the
elements of such worlds are points, regions, or trajectories rather than symbols, as in their
GOFAI counterparts. Moreover, according to AI scientist Ian Goodfellow et al., in a
deep learning system (which is an updated form of connectionism), increasing amounts
of raw data, equivalent to fragments of certain micro-worlds, do go hand in hand with
the increasing complexity of the micro-world-building mechanisms. Hence, just as in
GOFAI, in connectionism too there is no place for an omniscient being unconstrained
by any micro-world-building mechanism.
The key observation is that the world is its own best model. It is
always exactly up to date. It always contains every detail there is to be
known. The trick is to sense it appropriately and often enough…. To
build a system based on the physical grounding hypothesis it is
necessary to connect it to the world via a set of sensors and actuators.
Typed input and output are no longer of interest. They are not physically
grounded.
The moral of our analysis of Searle’s treatment of psychological modes is that the
desire/belief distinction cannot be treated in terms of directions of fit, which assume
that these modes are based on relationships between mental entities and external entities
(otherwise it would make no sense for him to talk about the direction of either “mind-to-world”
or “world-to-mind”). Moreover, even seemingly world-oriented actions like
“carrying out X” can also be viewed as something based on (although perhaps not
reducible to) internal states and hence still more relevant to the agent’s internal mental life.
This perspective-based analysis of psychological modes is perfectly compatible with
the internalist treatment of mental contents proposed in the last section,
whereas Searle’s perspective-free view conflicts with it. Hence, if the conclusion
of the last section is sound, Searle’s treatment of direction of fit cannot be acceptable.
about “having a mental representation in the belief box”. Hereafter we will call this treatment of
psychological modes the “box-approach”.
Obviously the strengths of both beliefs and desires are gradable: it makes perfect
sense to say that I have a strong belief that p or a weak desire that q. But the semantic
problem involved here is that the meanings of many psychological verbs, when
supplemented with adverbial expressions indicating strength, mutually overlap
or are even synonymous with each other. For instance, is there really a substantial
difference between “A very weakly believes that p is the case” and “A very weakly
suspects that p is the case”? If there is no substantial difference between them, then the
most natural explanation for the lack of this difference seems to be that the scope of the
so-called “belief-box” is continuous with that of the “suspicion-box”. But this explanation
quickly makes Fodor’s box-metaphor, which assumes the discreteness of boxes, fade.
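The worry can be made vivid with a deliberately naive sketch: if attitude strength is modeled as a single continuous degree, any discrete "boxes" have to be carved out by stipulated cut points (the thresholds and labels below are invented purely for illustration), and nothing psychologically abrupt corresponds to crossing one of them.

```python
# Illustrative sketch: attitude "boxes" imposed on a continuous degree of
# commitment. The cut points 0.7 and 0.3 are stipulations, not facts.

def attitude(degree):
    """Map a degree of commitment in [0, 1] to an everyday attitude verb."""
    if degree >= 0.7:
        return "believes"
    if degree >= 0.3:
        return "suspects"
    return "disbelieves"

# Near a stipulated boundary, a tiny shift in degree flips the label,
# although nothing discrete has happened in the underlying state:
print(attitude(0.31), attitude(0.29))   # suspects disbelieves
```

The discreteness lives entirely in the labels, not in the modeled state, which is just the point at which the box-metaphor fades.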
Now sympathizers of either Searle or Fodor may still contend that neither
philosopher is interested in algorithmically realizing artificial intentionality; rather,
both philosophers have their own independent arguments against the very possibility of doing so,
e.g., Searle’s “Chinese Room Argument” and Fodor’s argument against high-level
modularity as a requisite of the computational theory of cognition. But we don’t think
this objection is relevant to our argument. Our point is: no matter whether their global
hostility towards the algorithmic reconstruction of intentionality is warranted, their
theories of natural intentionality are flawed; hence, any AI scientist who adopts their
general view of how intentionality works cannot model intentionality successfully.
Now we will give some further reasons to explain why Searle’s and Fodor’s
theories are problematic for AGI. In the last section, we explained why the
perspective-free view of the “world” assumed by enactivism-oriented AI cannot be
coherently modelled. Since a similar view is assumed in Searle’s notion of
“direction of fit”, this notion itself cannot be algorithmically modelled either. As to
Fodor’s box-approach, a variant of it has actually been adopted by mainstream AI
scientists in a branch of AI labeled “context modelling”. The aim of context
modelling is to build a computer system which can automatically handle data
differently according to different contexts, and this goal is relevant to the
issue of intentionality in the sense that each type of psychological state can be more
abstractly viewed as a type of context (e.g., the belief-context, the desire-context, etc.).
Hence, if the box-approach in a theory of intentionality is flawed, the similar approach in the
modelling of contexts, namely, an approach according to which each context is treated
as a “box”, cannot bring about satisfactory results either.
And the following examples may show that even the box-approach in context
modelling is defective, an observation which conversely reinforces our current
doubt about the validity of the similar approach in a theory of intentionality. A typical AI-
oriented (but still philosophical) formulation of the box-approach in context modelling
is given by Fausto Giunchiglia and Paolo Bouquet (hereafter G&B):
metaphor can be given two very different interpretations. According to
the first, a “box” is viewed as part of the structure of the world;
according to the second, a “box” is viewed as part of the structure of an
individual's representation of the world.
It is not hard to see that G&B’s expressions like “each box has its own laws and
draws a sort of boundary between what is in and what is out” predict that inter-box
transitions have to be abrupt. Since the second type of “box” in G&B’s narrative
obviously refers to psychological modes, inter-mode transitions cannot be gradual in
G&B’s framework either.
The general moral of this section and the last one is that mainstream philosophical
theories of intentionality are not illuminating for AGI, because they either appeal to
external environmental factors which cannot be internally modelled, or they cannot
handle gradual transitions among different cognitive states. Now is the right time to
introduce Husserl to solve these problems.
First of all, we will show how Husserl could explain intentionality without
introducing external factors by reinterpreting his notion of “phenomenological epoché”
or “phenomenological reduction”. The core text relevant to this notion is as follows:
The theory of categories must start entirely from this most radical
of all ontological distinctions — being as consciousness and being as
something which becomes “manifested” in consciousness,
“transcendent” being — which, as we see, can be attained in its purity
and appreciated only by the method of the phenomenological reduction.
In the essential relationship between transcendental and transcendent
being are rooted all the relationships already touched on by us repeatedly
but later to be explored more profoundly, between phenomenology and
all other sciences - relationships in the sense of which it is implicit that
the dominion of phenomenology includes in a certain remarkable
manner all the other sciences. The excluding has at the same time the
characteristic of a revaluing change in sign; and with this change the
revalued affair finds a place once again in the phenomenological sphere.
Figuratively speaking, that which is parenthesized is not erased from the
phenomenological blackboard but only parenthesized, and thereby
provided with an index.
Some readers may wonder how one could be entitled to presuppose the
omnipresence of an implicit speaker (as step 3 requires) without introducing subjective
idealism. But an AI/AGI-related point of view can easily explain how. Obviously, no
AI/AGI system can be built without a certain programming language, and the
organization of each programming language has to encapsulate how the world works
from the perspective of a specific designer. Therefore, nothing mysterious is
involved in presupposing such an “implicit speaker” if the preceding procedures are
construed in an AI/AGI context. And this interpretation can even make Husserl’s
notion of “epoché” perfectly compatible with metaphysical physicalism (which is the
metaphysical assumption of most AI scientists), since the irreducibility of an “implicit
speaker” in any algorithmically reconstructed micro-world implies neither that the
physical world does not exist independently of how cognitive systems
perceive it, nor that cognitive activities fail to supervene on corresponding
physical events. Or, in Husserl’s own terms in the preceding citation, speculations about
the metaphysical nature of the world are “not erased from the phenomenological
blackboard but only parenthesized”. Hence, a Husserlian AI programmer does not need
to shoulder the burden of modelling the world beyond the horizon of an omnipresent
“implicit speaker”.
• But it makes no sense to talk about abrupt transitions among these
components, given that they constitute a continuum in which the “present” can
only be seen as an ideal limit, “just as the continuum of species red converges
towards an ideal pure red”.
But what is a noema? Unfortunately, even within Husserl scholarship there is a
debate over different interpretations of the noema. For example, according to the Fregean
interpretation (supported by Føllesdal, Dreyfus, and McIntyre, among others), the noema is a
meaning-encoding entity between the mental act and the external object, and the relevant
object becomes the referent of the relevant mental act just because the noema specifies the
way in which the referent is referred to. By contrast, a competing interpretation of the noema
(supported by Sokolowski, Drummond, among others) contends that noemata are not mediating
entities between mental acts and external objects but just the external objects considered
in phenomenological reflection, or “experienced objects” for short.
The first interpretation of the noema looks less promising from the perspective of AGI,
because it imposes a huge programming burden of modelling the sandwich-like structure
of “act-noema-object”, and on top of the formidable work of specifying each noematic
meaning as a contextually invariant manner of fixing referents, how to harmonize these
meanings with contextually emerging factors would be another tricky problem. By
contrast, since no contextually invariant entities are assumed in the second
interpretation of the noema, it may afford a more elegant way to model intentionality.
However, even the second interpretation is rendered problematic by its key
phrase “experienced objects”. Given that the specific perspective involved in any piece
of experience stands by nature in contrast with the object itself, which is perspective-free,
this gap cannot be easily bridged by appealing to a compound expression like
“experienced objects”, which can only be unpacked as a weird phrase like “perspective-
free entities seen through the lens of a specific perspective” (but how could any entity remain
perspective-free when viewed from a certain point of view?). Hence, the burden
of modelling perspective-free external entities is still left on the table if this compound
expression is literally put into practice.
This reading of the noema fits with Husserl’s following comment on the nature of the
phenomenological “object”, which is synonymous with the “noematic X” in his context:
Everywhere ‘object’ is the name for eidetic concatenations of
consciousness; it appears first as the noematic X, as the subject of sense
pertaining to different essential types of sense and posita. Moreover, it
appears as the name, ‘actual object’, and is then the name for certain
eidetically considered rational concatenations in which the sense-
conforming, unitary X inherent in them receives its rational position.
More importantly, in NARS, psychological modes are characterized without
appealing to the box-approach. Rather, belief, the most primitive psychological mode,
is first implicitly expressed in terms of the strength or weight of the pathways connecting
one Narsese node to another. For instance, if a pathway connecting the node S with
the node P is highly weighted, this means that the system strongly believes that all Ss are
normally Ps. As to the weight-values of the pathways, they come from the interactions
between acquired evidence and the corresponding Narsese sentence (by the way, each
piece of evidence is regarded as a Narsese term in NARS). That is to say, the more
evidence for a Narsese belief is at hand, the more firmly the system holds that belief. This
evidence-based treatment can easily handle psychological modes like suspicion and
disbelief (both of which involve the role of positive/negative evidence) as mutually
transformable states.
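The evidence-based treatment can be made concrete with the two-component truth values for which NARS is known. The sketch below follows Pei Wang's published definitions (frequency w+/w and confidence w/(w+k), with w the total evidence), but it is a simplification for illustration, not the actual NARS implementation.

```python
# Illustrative, NARS-style evidential truth value (simplified sketch).
# K is the "evidential horizon" constant; K = 1 is a common choice.

K = 1.0

def truth(w_plus, w_minus):
    """Frequency and confidence for a statement like 'S -> P',
    given counts of positive and negative evidence."""
    w = w_plus + w_minus
    frequency = w_plus / w if w else 0.5    # proportion of positive evidence
    confidence = w / (w + K)                # how firmly the value is held
    return frequency, confidence

# Belief, suspicion, and disbelief come out as points on one continuum,
# transformable into one another as evidence accumulates:
print(truth(9, 1))   # strong belief:  (0.9, 0.909...)
print(truth(1, 1))   # mere suspicion: (0.5, 0.666...)
print(truth(1, 9))   # near-disbelief: (0.1, 0.909...)
```

On this picture there are no discrete boxes to cross: new evidence merely shifts the frequency and confidence values, so the transition from suspicion to belief is gradual by construction.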
Step 2. The system applies the general knowledge in the preceding pool to its
current state to find out whether it is “healthy” enough. If it is, then no
desire will be produced; if not, it executes the next step.
Step 3. Due to its inferential capacity, the system finds out that if a
precondition p were true, it could “live” much better.
Step 4. But the system finds that it cannot believe that p is true now, since
it lacks enough positive evidence.
Step 5. The system then attaches the label “primitive goal”
to p and calculates how much evidence is needed to make it true.
Step 6. Since the needed evidence is not actually present, the system
attaches the label “derived goal” to each operation that would make a
certain piece of relevant evidence occur.
Step 7. The foregoing reasoning will drive the system into proper actions.
Step 8. The system will evaluate the gap between the newly acquired
evidence and the p-requiring evidence after each run of actions, until the gap is
reduced to a certain level, which means that the desire is satisfied.
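The control flow of the steps above can be rendered schematically as follows. All class and method names, and the toy one-number "health" model, are hypothetical, introduced only to make the loop explicit; this is a sketch of the described procedure, not NARS code.

```python
# Schematic, runnable rendering of Steps 2-8 (our own illustration).

class ToyAgent:
    def __init__(self, health, threshold=1):
        self.health = health           # stands in for the system's current state
        self.threshold = threshold     # Step 8: acceptable residual evidence gap
        self.evidence = 0              # positive evidence gathered for p so far
        self.evidence_needed = 3       # Step 5: evidence required to believe p

    def healthy(self):                 # Step 2: is the current state good enough?
        return self.health >= 10

    def act(self):                     # Steps 6-7: a derived-goal operation that
        self.evidence += 1             # makes one piece of relevant evidence occur

    def desire_cycle(self):
        if self.healthy():
            return "no desire produced"
        # Steps 3-5: the precondition p would let the system "live" better,
        # but p cannot yet be believed, so it is labeled a primitive goal.
        while self.evidence_needed - self.evidence >= self.threshold:  # Step 8
            self.act()
        self.health += 10              # p now holds; the system "lives" better
        return "desire satisfied"

agent = ToyAgent(health=2)
print(agent.desire_cycle())   # desire satisfied
print(agent.desire_cycle())   # no desire produced
```

The point of the sketch is that "desire" here is nothing but a label attached in the course of evidence-driven inference, with no separate desire-box anywhere in the architecture.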
6. Metaphilosophical observations as concluding remarks
But why AGI? Why not just formal tools from logic or statistics, given that all AI
systems have to rely on them? The primary reason is that a workable AGI system has
to be something more than these formal tools. For instance, it has to have a proper
cognitive architecture and hence be minimally relevant to human intentionality,
whereas formal tools need not be. Meanwhile, due to its reliance on algorithmic
details, any AGI narrative, albeit perhaps pitched at a high level, still has to be “analytic” in
the most general sense of the term. Hence, due to this duality, AGI can provide a
perfect platform for interpreting Husserl.
Another reason not to appeal to formal logic is that by “formal logic” most people
just mean Fregean logic, which is actually more suitable for characterizing semantic
externalism, since the ontological status of external referents (e.g., objects or truth-
values) has to be assumed in the Fregean theory of meaning; otherwise it would make
no sense for a Fregean to view meanings as mapping mechanisms correlating symbols
with referents. In this sense, Fregean logic would be a very cumbersome tool for
modelling Crowell’s inferentialist interpretation of the noema, from which naïve
externalism has to be precluded. By contrast, if we appeal to AGI rather than to “logic”,
then the novelty of the term “AGI” itself gives us more space to introduce some
form of non-Fregean logic, e.g., the Narsese logic. And this treatment naturally
separates Husserl’s own position from Føllesdal’s and Dreyfus’ Fregean interpretation
of Husserl, in which the Fregean view of logic is still assumed.
认知科学与人文科学的模糊边界
江 怡
我们知道,认知科学至今都没有一个单一可接受的定义。根据不同的定
义,认知科学领域被划分为七个或四个。罗伯特· J. 斯坦顿(Robert J.
Stainton)在他主编的《认知科学的当代争端》一书的序言中提出了一种有所
争议的区分,即区分为四个分支:行为科学和脑科学部分,如心理语言学、神
经科学和认知心理学;社会科学部分,如人类学和社会语言学;形式学科部
分,如逻辑、计算机科学和人工智能;哲学部分,如心灵哲学和语言哲学。根
据斯坦顿的说法,认知科学的标志是,它规定了所有这些分支的方法和结果,
试图提供对心灵的全面理解。(Stainton, p. xiii)在这些分支中,我们发现
只有两个领域在传统上被看作是属于人文学科,即语言学和哲学,虽然有两个
学科使得语言学成为一门交叉学科,即社会学和心理学。从斯坦顿的划分中,
我们还可以看到自然科学在认知科学中占据着支配地位。所以,这里的问题就
是:人文科学在认知科学中会有什么作用?或者说,人文科学是否对认知科学
有所贡献?
无论认知科学包含了多少领域,其中有一个强烈的自然科学立场,使得认
知科学成为具有自然科学指向的学科。认知科学基于自然科学之上,这是自然
的,也是必要的,因为它在性质上就是经验的,在朝向上是实验的。认知科学
的目的是要更好地理解人类心灵,正确地观察这个世界。虽然理解心灵是极其
复杂的,处于多学科之中,但认知科学是讨论不同学科相互作用的话题的最好
选择。认知科学中不欢迎思辨和形而上学。虽然哲学讨论中也存在这样一种倾
向,即自然主义,但哲学却很少被包含在认知科学中,除了从性质上就具有经
验特征的心灵哲学和语言哲学。所以,在这种意义上,认知科学就在一定程度
上有根据地被理解为属于自然科学。
但是,人文科学会如何呢?如果认知科学的性质就是更好地理解人类心
灵,它就应当包含某些人文科学,因为以某些特殊的方式探讨人类心灵,这对
人文学科来说是自然的,也是必要的。那么,人文科学如何探索人类心灵呢?
通过思辨还是沉思?或者只是论证?在哲学史上始终存在哲学与科学之间的明
显区分,在当代欧洲大陆哲学中也是如此。根据这种区分的观点,哲学必须远
离科学,由此哲学就可以保持与世界的独特地位。但在当代分析哲学传统中,
哲学家们更愿意指向一种科学的模式,它以可观察和实验的方式改变了哲学。
以一种更为公众的和常识性的方式加以重建的,不仅仅是哲学,还包括其他人
文科学,如文学、历史、宗教和艺术。模式化支配了文学创作。历史变成了对
历史证据和文献的研究。宗教也努力接受科学的理论。艺术的发展则依赖于实
验。如我们所知,实验哲学在最近十几年里得到了发展。所有这些都表明了这
样一个观念:在哲学与科学之间如今很难做出区分了。
如果这个观念是可以接受的,我们如何对待这个观念就是一个我们必须解
决的问题。我这里想要强调的是,哲学与科学之间特别在当代不存在严格的区
分。如果认知科学是属于自然科学的,哲学就会有某些部分是具有自然科学导
向的,特别是心灵哲学和语言哲学。在分析哲学史上,一直有这样一种宣传,
即哲学应当基于科学而得到重建。即使是今天,更多的心灵哲学家和语言哲学
家试图根据心理学、神经科学、人工智能以及实验科学中的其他学科探索人类
心灵的性质。然而,相反,当代欧洲大陆哲学,如现象学、诠释学和后现代主
义哲学等,则反对这个观念。在他们看来,哲学应当保持其自身不同于科学的
对人类心灵的地位。但问题在于:随着科学突飞猛进的发展,我们如何在心灵
和语言上保持哲学的特殊形式?
由于哲学与科学之间不存在严格的区分,如今我们就无法完全离开科学而
讨论哲学问题。科学已经渗入到了哲学之中。这不仅包括了科学的思维方式,
而且包括来自科学的术语用词,这些都强烈地影响到哲学的讨论。即使是大陆
哲学家也会关心科学的发展,虽然他们的解释不同于科学家。没有人会认为哲
学可以与某种方式与科学对立。相反,哲学在认知科学中也具有一种作用。卡
罗琳·索贝尔和保罗·李在他们的《认知科学:一种跨学科方法》中指出,
“哲学在我们研究如何理解我们所面对的宇宙中始终起到了非常重要的作用,
对我们理解自身也是如此。”(Sobel and LI, p. 343)以往的哲学家们始终
努力解决从古希腊以来就提出的身心问题。这个问题是科学家们探索心灵独特
性质的起点,由此发现心灵的特征和心灵与身体之间的关系。而当科学家们在
根据最新的科学技术发展中发现某些无法解决的问题时,他们就会求助于哲学
家的帮助。例如,如何解释感受质(qualia)的性质?如何描述现象意识?我
们在什么意义上可以解释道德?雷尼(Regina A. Rini)在《道德与认知科
学》中描述了关于道德判断的认知科学理论与哲学上的道德理论之间的互动关
系。根据这种描述,大多数哲学家都否认认知科学在道德哲学中的作用。某些
哲学家则主要赋予认知科学消极的作用。对这些哲学家来说,哲学研究是无法
用科学研究取代的。例如,道德哲学家试图回答一些实质性的伦理问题,如我
们可以追求的最为有价值的目标是什么?我们应当如何解决这些目标之间的冲
突?存在某些我们不可为的方式吗,即使这样做会促进最好的结果?什么是好
的人类生活形式?我们可以如何获得这种形式?一个正义的社会是如何组织
的?显然,这些问题是无法仅仅根据某些实验成果和经验数据得到回答的。科
学家们在努力达到他们更好地理解人类心灵的目标时,他们会寻求来自哲学家
们的帮助。在这种意义上,哲学研究不仅是科学家们的出发点,也是他们从事
科学研究的终点。
综上所述,我们无法看到哲学与科学之间的严格区分,在这种意义上,认
知科学与人文科学之间也不存在清晰的边界。这个边界是模糊的,无法划定
的。
The Fuzzy Boundary between Cognitive Science and the Humanities
Jiang Yi
No matter how many fields are involved in cognitive science, there is a strong
commitment to the natural sciences which makes cognitive science natural-science-directed.
It is natural and necessary that cognitive science is based on the natural sciences, for it
is empirical in nature and experimental in orientation. The aim of cognitive science is to
understand the human mind better and to observe the world correctly. Though understanding
the human mind is complicated and spans a variety of disciplines, cognitive science is
the best setting for discussing this topic through the interaction of disciplines.
Speculation and metaphysics are not welcome in cognitive science. Though there is such a
trend in philosophical discussions, namely so-called naturalism, philosophy is not much
involved in cognitive science, with the exception of the philosophy of mind and of language,
which are empirical in character. So in this sense, cognitive science can to some extent
justifiably be regarded as part of the natural sciences.
But what about the humanities? If the nature of cognitive science is a better
understanding of the human mind, it should also contain some humanities, for it is natural
and necessary for the humanities to explore the human mind in their particular ways. Here is the
question for the humanities: how do the humanities explore the human mind? By speculation or
meditation, or just by argumentation? There has been a clear distinction between
philosophy and science in the history of philosophy, as well as in contemporary
Continental philosophy. According to this distinction, philosophy must keep away from
science, whereby philosophy can preserve its peculiar position towards the world. But in
the contemporary analytic tradition, philosophers prefer to orient themselves towards the
scientific model, which modifies philosophy in an observational and experimental way. Not only
philosophy but other humanities such as literature, history, religion, and the fine arts have been
reconstructed in a much more public and commonsense way. Models dominate literary writing.
History has turned into a study of historical evidence and documents. Religion is also engaged
with scientific theories. The fine arts develop in reliance on experiments. And as we know,
experimental philosophy has arisen in recent decades. All of this points to one idea: it
is hard to draw a distinction between philosophy and science today.
If this idea is acceptable, what we can do with the distinction is the problem we
have to solve. I would like to stress here that there is no sharp distinction between
philosophy and science, particularly in contemporary times. If cognitive science is
part of the natural sciences, philosophy too has some parts that are natural-science-directed,
especially the philosophy of mind and of language. It has been a watchword in the history of analytic
philosophy that philosophy should be reconstructed on the basis of the sciences. Even today,
many philosophers of mind and language attempt to explore the nature of the human mind
according to developments in psychology, neuroscience, artificial intelligence, and
other disciplines among the experimental sciences. By contrast, the
contemporary Continental philosophies such as phenomenology, hermeneutics, and post-
modernist philosophy reject this idea. For them, philosophy should keep its own position
on the human mind, different from that of the sciences. But the problem is: how can we preserve a
distinctively philosophical approach to mind and language while the sciences are developing so rapidly?
Philosophers have long been engaged with the problem of mind and body, posed since the ancient
Greeks. The problem is the starting point for scientists to explore the unique nature of mind
by finding features of the mind and its relation to the human body. And scientists appeal for
philosophers’ help when they find unsolvable puzzles in the latest developments
in science and technology. For example, how is the nature of qualia to be explained? How is
phenomenal consciousness to be described? In what sense can we explain morality? In
Morality and Cognitive Science, Regina A. Rini describes the interaction of cognitive-
scientific theories of moral judgment with moral theory in philosophy. According to this
description, most philosophers deny that cognitive science plays much of a role in moral
philosophy. Some assign cognitive science a primarily negative role. For those
philosophers, philosophical research is irreplaceable by scientific research. For instance, moral
philosophers try to answer substantive ethical questions, such as: what are the most
valuable goals we could pursue? How should we resolve conflicts among these goals?
Are there ways we should not act even if doing so would promote the best outcome?
What is the shape of a good human life, and how could we acquire it? How is a just society
organized? It is evident that those questions cannot be answered just on the basis of some
experimental achievements and empirical data. Scientists will ask philosophers for help
as they approach their goal of understanding the human mind.
In this sense, philosophical research is not only the starting point for scientists but also the
end of their exploration in scientific research.
To conclude, we cannot find a sharp distinction between philosophy
and the sciences, and in this sense there is no clear boundary between cognitive science and the
humanities either. The boundary is fuzzy and cannot be drawn.
作为文化技术的媒介——从书写平面到数字界面
克莱默(Sybille Krämer)
20 世纪 80 年代之后,媒介根本主义(media fundamentalism)形成潮流。麦
克卢汉、基特勒、德里达以来的众多学者将媒介视为高度自律的文化动因,认为
媒介创制了其所传达的意义。这一立场源自尼采、福柯等对人类主体概念的消解。
但在媒介根本主义中,媒介实际上沿袭了过去人类主体的自我中心主义。单纯将
媒介视为意义的创造者、过分张扬其建构性和自律性,毋宁说贬低了传播活动的
创造性价值。有鉴于此,本文欲跳脱媒介根本主义的束缚,探寻一种三元的媒介
哲学。媒介犹如信使,连结相异的两方,其根本功能在于使不可见者得以被感知。
它具有本雅明所谓“间接的直接性”(mediated immediacy):交流顺畅意味着媒
介消隐,后者的物质性只有在断裂、失序处才被察觉。信使并不像言语行为理论
的说话者那样为自己所说的内容负责,他仅是传话的第三方,不可避免要受其余
两方的制约。因此,媒介在使用中,一方面要顾及它所传送的意义,一方面要重
塑其内容,使之适应媒介自身的结构与物质性,从而处于自律与他律的持续互动
中。平面化技术(the technique of flattening)是这种三元媒介哲学的一个范例。
人类通过设想现实中并不存在的、可供书写刻画的纯平面,为思维赋予了可见、
可操作的外在形式,这无论对于审美还是认知都意义重大。以认知为例,《美诺
篇》中的小男孩通过在绘图过程中不断试错,成功画出了两倍于前的正方形;高
斯通过观察算式中数字的空间排布,迅速算出了从 1 到 100 的数字总和。其中,
平面扮演了思维的试验场、参与者、推动器,无形的智识活动一旦落实于平面就
变得直观、有序。二维平面是一维时间与三维空间之间的中介,是时间连续性与
空间同时性之间相互转化的枢纽。这一转化同时伴随着重构,如拼音文字对口语
的空间化不止是单纯的记录,亦包含对语言本身的分析。此外,平面也是个体与
社会之间的中介,是推理与直观两种认识能力之间的中介。平面媒介代表了欧洲
启蒙精神对于明晰、可控的追求,然而当数字化时代来临,书写平面演化为彼此
联通的人机界面,一种全新的深度模式死而复生。电脑好似黑箱。人工智能在海
量数据中通过自我学习获取的能力,连其开发者都捉摸不透。平面化技术极力想
要消除神秘之物与不可知物,而如今,这二者重新回到了我们身边。那么,我们
能否重审启蒙在当下的崭新含义,设想某种“数字化的启蒙”呢?
Media as Cultural Techniques: From the Writing Surface to the Digital Interface
Sybille Krämer
1.
Media create what they transmit. Marshall McLuhan’s “the medium is the
message,” Friedrich Kittler’s “only that which is switchable is at all,” and Jacques
Derrida’s “there is nothing outside the text” paved the way for an interpretation of
media as more or less autonomous agents of social and cultural life. Media construct
and constitute what they present. That is the foundational idea of a theoretical
movement during the last two decades of the 20th century. As a result, media were
admitted as legitimate objects of intellectual work in the humanities. Although there is
a wide range of differences in how media were equipped with autonomous power and
treated as an instance of ultimate grounding, I would like to gather the proponents of this
movement in media theory under one label. The transformation of media into a quasi-
autonomous cultural agency will thus be referred to as “media fundamentalism.”
2.
To avoid any misunderstanding: the use of the term “messenger” is not an attempt
to personalize media. Nothing is as easily replaced by symbolic and/or technical means
as the messenger function. What matters here is only that the fundamental purpose of
media lies in mediating between heterogeneous worlds that are not accessible to one
another. The messenger function is usually defined as enabling or extending
communication between unconnected sides. However, if the role of media is
connection and transmission, then mediation has to be understood not only as enabling
communication but as a process of “making perceptible” (Wahrnehmbarmachen).
Media establish connections that make what is hidden visible and what is absent
present. The basic, primordial function of media is not representation but
presentation, in the sense of making something available to be looked at. The reason for
stressing the pivotal role of perceptibility is that a messenger does not speak in the
terms of speech act theory. Speech act theory assumes that speakers not only speak but
are responsible for the content of what they say. Yet a messenger is discursively
powerless, because he or she is not responsible for what the messenger was instructed
to tell. Rather, the messenger makes apparent, or presents and recalls, what was told by
someone else and what happened somewhere else. Making perceptible is the basic
principle of being a medium, a third in between heterogeneous, distant fields.
3.
To make the invisible perceptible means radically to transform it. The medium
transfigures the information to be transmitted into a configuration of data that has to
conform to the constraints of the medium itself. This metamorphosis into the code of
the medium constitutes the formative part of media by virtue of which they not only
convey information but rather at the same time shape, condition, and finally even
constitute what they transmit.
We see: Distancing from media fundamentalism does not imply an invalidation
and renunciation of the generative aspect of medial functions. The relationship between
generation and transmission or production and mediation should be understood not as
mutually exclusive but rather as mutually dependent. This constructive power of the
medium is apparent in the trace the medium leaves behind on the content of what it
mediates.
When media function smoothly, their physical materiality remains below the
threshold of perception. A “good medium” is invisible when in use. The content of a
speech is heard, but not the sound waves. We do not read single letters but a meaningful
text. The image must be turned around in order to see the canvas on which it is painted.
Media make something present through the process of their own withdrawal. The user
only becomes aware of the materiality of the medium when there is disorder and
disruption. All media, and not only digital media, thus have an immersive power.
They have the ability to make what they mediate seem unmediated. The German author
and philosopher Walter Benjamin called this “mediated immediacy.”
The fact of the disappearing medium is also indicated by the etymological origin
of the word “medium.” The Latin “terminus medius” of syllogistic logic is found in both
premises of a syllogism and establishes their connection, but it is extinguished in the
concluding sentence, the “conclusio.” It thus becomes apparent that the terminus medius
enables syllogistic reasoning precisely by withdrawing itself in the conclusion. The topos of the “dying
messenger”—the runner in Plutarch’s tale who delivered the message of the Greeks’
victory over the Persians in 490 BC—alludes to this issue like a media theory avant la
lettre.
This leads to a preliminary conclusion: every use of media occurs in the field of
tension between the heteronomy of what is being mediated and the autonomy that
allows the content to be transfigured into a representational structure that is aligned to
the physicality and structure of the medium itself.
4.
Let us get now more concrete by looking at a class of graphic media, such as tables,
writing, graphs, diagrams, and maps. These media all involve the application of
inscribed and illustrated surfaces, which will be referred to as the cultural technique of
flattening.
We live in a three-dimensional world, yet we are constantly surrounded by
inscribed and illustrated surfaces. Artificial flatness is an everyday phenomenon, and
has been so throughout cultural history. From an empirical perspective, there are no pure surfaces. By
drawing, writing, or storing, however, we act as if these surfaces have no depth: what
matters can be seen on the surface. Seen from an anthropological perspective, the
cultural technique of flattening is a relevant evolutionary tendency in our symbolic and
technical practices; it extends from cave paintings and skin tattoos to the invention of
writing, diagrams, and maps to computer screens, tablets, and smartphones. Not to
forget that ‘to be as flat as possible’ has become a maxim of nearly all technical devices nowadays.
The fullness of the real world as well as the phantasms of fictional worlds thus
obtain an observable and manipulable form; things that are not yet or that can never be
(such as images of logically impossible objects) are made perceptible too.
Artificial flatness has a productive aesthetic and cognitive power: writing down
music changes what we can do with music; producing choreography modifies the
nature of dance; theatre and film mostly depend on scripts; and so on. In what follows, we
focus on the cognitive, on the epistemic use of flatness.
to be performed, so to speak, by ‘paper and pencil’. Every symbolic structure can be
restructured, and every configuration can be reconfigured.
Inscribed surfaces are used not only as instruments for visualizing information but
also as tools for operating and exploring the inscribed and visualized. When we do not
know our way around a foreign city, we can become oriented with the help of a map or
navigational device. We can transfer this operative principle into the realm of the
cognitive. Written and graphic notations help us to navigate spaces of knowledge in
much the same way. The cartographic impulse, which is familiar in the context of
moving in real spaces, can thus be transferred to intellectual activities in knowledge
spaces. The transformation of the cartographic impulse into moving in intellectual
landscapes is the reason for the effectivity of cognitively applied artificial flatness.
5.
Plato’s MENO dialogue is designed to show that knowledge is not a kind of entity
that is transferable from one person to another through language and telling, because
it has to be produced by the knowing individual him- or herself. This is demonstrated
using the situation of a mathematically uneducated slave boy. Socrates draws a two-
foot square in the sand and tells the youth to double the area.
The boy first doubles the length of the sides of the square, but he recognizes that
this fourfold increase is too much. He then increases the length of the sides to three feet,
but, as he can see, this also produces a square that is more than twice as large. The
boy is puzzled and admits his perplexity: “I don’t know,” he confesses to Socrates.
With the aid of further Socratic questions, in which Socrates does not communicate the
technique of doubling a square, and further geometrical drawings, the boy finally
recognizes that it is possible to double the area by constructing another square from the
diagonal.
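The arithmetic of the scene can be verified in a few lines. The following sketch is an illustration added here, not part of Plato's text or the source; lengths are in feet:

```python
import math

side = 2.0                                 # Socrates' square: two-foot sides
area = side ** 2                           # 4 square feet; the target is 8

# First guess: doubling the side quadruples, not doubles, the area.
assert (2 * side) ** 2 == 4 * area         # 16 square feet

# Second guess: a three-foot side still overshoots the target.
assert 3.0 ** 2 > 2 * area                 # 9 > 8

# The insight: the square built on the diagonal has exactly double the area.
diagonal = math.sqrt(2) * side             # length of the original diagonal
assert math.isclose(diagonal ** 2, 2 * area)   # 8 square feet
```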
What does this “diagrammatic primal scene” reveal? The first step is that the
engagement with the drawing involves the realization not of knowledge but rather of a
lack of knowledge. An intellectual mistake literally becomes visible, and the
perceptibility of this false assumption paves the way for the generation of positive
knowledge. The surface becomes the experimental field of this mathematical insight,
insofar as the drawing is always also revisable: everything that is illustrated can be
drawn differently. It is also clear that the act of working with diagrams is embedded in
dialogue. Image and text, or drawing and speech, are interconnected. There is no such
thing as a singular, context-independent diagram.
The Meno scene is not a singular diagrammatic event in Plato.
6.
(1) 1+2+3+4+5+…+97+98+99+100
Rearranging the terms into pairs of first and last, (1+100) + (2+99) + (3+98) + … + (50+51), resulted in an optical situation that showed that the sum in each set of brackets
was equal, namely 101.
Due to the fact that there were 50 such sets of brackets, the answer was 50 × 101 = 5050.
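The bracketing trick is easy to restate as code. The following sketch is an illustration added here, not from the source; it checks both the equal brackets and the total:

```python
n = 100
# Pair the first and last remaining terms: (1, 100), (2, 99), ..., (50, 51).
pairs = [(k, n + 1 - k) for k in range(1, n // 2 + 1)]

assert all(a + b == n + 1 for a, b in pairs)   # every bracket sums to 101
assert len(pairs) == 50                        # fifty brackets
assert sum(a + b for a, b in pairs) == 5050 == sum(range(1, n + 1))
```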
7.
Back to our general reflection: the interplay of time and space, the spatialization
of time makes it possible to write programs, notate musical scores, and prepare design
drawings that can be seen, read, and realized by others. Temporal performances thus
solidify into stable and transmissible spatial configurations, which can become fluid
through their implementation and then solidify once again into new stable structures.
And it is already apparent here that operative flatness facilitates not only the transfer
between space and time but also the mediation between the individual and the social,
because the inscribed surface introduces a form of visibility and operativity that is
always in the “we-mode” (Modus des Wir). It organizes mutual perceptions and
experiences: a contribution to the social and cultural mind outside the head!
[1] Vitruvius, De Architectura, 9,1,1ff.
However, there is yet another issue that illustrates the mediating aspect of
inscribed surfaces. Reasoning and intuition are—at least since Immanuel Kant—two
distinct and irreducible sources of knowledge, yet written notations, scientific diagrams,
and graphs constitute an intermediate world that makes it possible to connect reasoning and
intuition. This can be illustrated using the example of the natural scientist,
mathematician, and philosopher J. H. Lambert (1728-1777).
Lambert wanted to calculate the deviation of the magnetic needle from the
geographic North Pole over time and in relation to Paris. To this end, he plotted the
observed data as points on a coordinate plane with the axes of space and time. He then
connected these points by drawing a curved line. What is important is that this line
embodied the general law of deviation. General laws cannot be seen. The induction
problem raises the difficult question of how something general can be derived at all
from something singular. Lambert solved this problem haptically by connecting the
points with a line and interpreting the line itself as the representation of a law. The
drawing hand thus fills in the gaps between the observed and the unobserved, and the
individual drawing provides a visualization of a general law. Lambert used the
inscription surface not only as an instrument of recording and storage, but also as an
instrument of analysis. New insights emerge through the interaction of point, line, and
plane. The paper becomes a mental laboratory that mediates between singular
perceptions and general concepts, between observation and theory. We do not think on
paper but with paper.
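Lambert's procedure, plotting observations and letting a drawn line stand in for the law, can be sketched in code. The data below are invented for illustration (they are not Lambert's measurements), and simple linear interpolation stands in for his hand-drawn curve:

```python
# Hypothetical observations: year vs. magnetic declination in degrees.
years = [1580, 1620, 1660, 1700, 1740]
declination = [11.3, 6.0, 0.2, -5.5, -10.1]

def law(year):
    """The drawn curve: linear interpolation between neighbouring points."""
    for (x0, y0), (x1, y1) in zip(
        zip(years, declination), zip(years[1:], declination[1:])
    ):
        if x0 <= year <= x1:
            return y0 + (y1 - y0) * (year - x0) / (x1 - x0)
    raise ValueError("year outside the observed range")

# The line fills the gap between observations: a value can now be read
# off for a year at which nothing was ever measured.
assert abs(law(1680) - (-2.65)) < 1e-9
```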
8.
Insofar as inscribed surfaces evolve into networked interfaces and graphical
user interfaces control our interactions with computers, a new kind of depth, in the form
of an expanding universe of interacting machines and protocols behind the screen,
comes into being. Rhizomatically, behind ‘smart usability’ sprawls an invisible
and uncontrollable region of a resurgent “secret,” a black box in the literal sense. Each
software develops a "virtual machine" that is hidden from those working with the
software. The skills that computers acquire inductively from huge datasets through the
self-learning programs (deep learning) of Artificial Intelligence remain opaque even to
their developers with regard to “how” the acquired rules and routines work. And the multiple data
traces left by users on the net and on social media, and commercially used by profiling
algorithms and behavioral prediction algorithms, are usually beyond the reach of their
creators.
From a media perspective, the European Enlightenment was connected with the
promise of transparency and control offered by the device of artificial flatness. But if
surfaces evolve into interconnected interfaces and transfigure into black boxes, we
witness a return of the withdrawn, of the secret, of the unknown, which the cultural
technique of flattening tried to eliminate. Do we have to think about a new idea of
enlightenment, to create a ‘digital enlightenment’?
Cultural Continuity and the Redefinition of the Humanities: The Role of the
University in the Globalized Society of the 21st Century
Shunya Yoshimi
In an age of information explosion, the concept of the “university” inevitably faces
questioning and redefinition. There are now more than 15,000 universities worldwide
(about 780 in Japan, 200 in South Korea, 1,800 in China, 4,200 in the United States,
1,000 in Russia, and so on). How can these universities and their students survive this
age of information explosion? The boundless expansion of the Internet allows us to
reach the summit of the pyramid of knowledge with ease via Google, Wikipedia, e-book
systems, and many other routes, so that the concept of “academic knowledge” is
fiercely challenged by the development of the Internet society.
I will then discuss the shifting position of the discourses of the humanities and
social sciences in today’s high-tech society. In the summer of 2015 we had a heated
debate on the significance or non-significance of the humanities and social sciences
in university education; on this occasion I will restate and further clarify the reasons
why university education must preserve a place for them. The importance of the
humanities and social sciences lies in encouraging people to criticize established value
systems, thereby making it possible to discern emerging values and the direction of
society. For example, any collaboration between the humanities and social sciences
and engineering and the natural sciences must be founded on the design of the “time”
structure of the future society.
Shunya Yoshimi
In this lecture, I will first remind you of the similarity between the 16th and
the 21st century, especially in terms of the communication and transportation revolutions.
In the 16th century, the age of Discovery and the Printing Revolution, information
exploded and people began to access far more knowledge than in the previous
century. In the 21st century, the age of Globalization and the Digital Revolution, people
are now beginning to access a huge amount of knowledge.
Then, I will discuss the changing location of the discourses of the humanities and
social sciences in a highly advanced technological society. In the summer of 2015,
we had big debates on the significance or non-significance of the humanities and social
sciences in university education. While explaining the points of those debates, I am
going to shed light on the reasons why we should not give up the humanities and social
sciences in university education. They are very “useful” because they can lead
people to think about the new values and purposes of society by criticizing the
already established value system that has been taken for granted. So, for example,
the collaboration between the humanities/social sciences and engineering/natural sciences
needs to be based on the design of the structure of “time” of the future society.
Effect of Mortality Salience on Guilt and Shame and Its Neurocognitive Mechanism
Xu Zhenhua, Liu Chao
Terror management theory holds that when people confront death, their thoughts,
attitudes, and behavior change. Many studies have shown that mortality salience
influences social behavior, but how it influences emotional experience, and the neural
mechanism behind that influence, remain unclear. This study focuses on two
self-conscious emotions, guilt and shame, and uses fMRI to explore how mortality
salience affects the processing of guilt and shame. Participants first completed an
online questionnaire in which they wrote down personally experienced guilt-inducing,
shame-inducing, and emotionally neutral events. After arriving at the laboratory, they
randomly received either a death prime (mortality salience group) or a negative-emotion
prime (control group), and then recalled the guilt, shame, and neutral events. We found
that, for both guilt and shame, participants in the mortality salience group showed
stronger activation of the ventromedial prefrontal cortex. Further analysis showed that,
compared with recalling neutral events, mortality salience strengthened the functional
connectivity of the ventromedial prefrontal cortex with the precuneus and the middle
temporal gyrus when guilt events were recalled, whereas it weakened the functional
connectivity of the ventromedial prefrontal cortex with the precuneus and the posterior
cingulate cortex when shame events were recalled. Mortality salience thus modulates
guilt and shame through different mechanisms.
Xu Zhenhua, Liu Chao
Transitions in Human–Computer Interaction: From Data Embodiment to
Experience Capitalism
Tony D. Sampson
This article proposes a new critical theory of human–computer interaction
(critical HCI) in order to re-examine the assumptions and omissions of the field. As
Harrison has argued, HCI is shifting from a cognitivist theoretical framework toward
a phenomenological understanding of user experience. This is both a third research
paradigm of HCI and, as Susanne Bødker has pointed out, a third wave of HCI.
Although this intense attention to user experience has opened up many new paths of
academic research, this article focuses on the distinctive link, in digital environments,
between the traditional HCI discipline, grounded in task-based digital work and use
contexts, and business activities with a growing interest in consumer experience.
Critical HCI will address the question of experience in two interrelated ways. On the
one hand, it will explore the role that market logic plays in putting user experience to
work. On the other hand, it will engage with ontological understandings of experience,
which the field of HCI has hitherto grasped by way of a phenomenological matrix. In
conclusion, the article reconsiders “experience” by drawing on A. N. Whitehead,
arguing that “experience” links ontological concerns to the broader philosophical
concept of “experience capitalism.”
Tony D. Sampson
Here we find a significant and potentially reciprocal overlap between established media
theory critiques of the political economy in which digital communication technologies
are operative and the need for critical-HCI. On the other hand, critical-HCI needs to fully
engage with ontological understandings of experience hitherto realized in HCI by way
of a phenomenological matrix (Harrison et al 2007). The idea is to test the limits of this
matrix by drawing on an alternative philosophy of experience, which, I argue, helps
critical-HCI to more effectively approach ontological transitions to new technological
contexts of interaction. This means bringing in an old thinker (A.N. Whitehead) to
consider experience in novel ways that relate ontological concerns to this broader
political concept (and persistence) of experience capitalism.
matrix with a catchall name for this recent shift in focus: the experience paradigm.
However, following my earlier approach to efficiency analysis in each paradigm, I will
similarly argue here that experience is not simply the defining factor of a third paradigm
of computer interaction, but can be traced through all three paradigms as they each
endeavour to capture the variations of experience in different ways. So unlike Bødker
(2006), for example, who argues for a discontinuity between a second paradigm related
to computer work efficiency and a third all about online consumer experience, I note a
continuity apparent in the efficiency analysis of work and consumption in which
experiences are similarly put to work. Indeed, in addition to the contextual political and
philosophical discussion below, the article will also set out a nascent agenda for a
critical-HCI events-based analysis of each paradigm focused on an alternative concept
of experience informed by Whitehead.
The origins of the experience economy have been traced back to Alvin Toffler’s
1970 book, Future Shock, and a chapter therein titled “The Experience Makers” which
prophesies where the economy is heading after the exhaustion of the service industries
(Pine and Gilmore, 2013). It is here that Toffler (1970, 208-09) first introduces the idea
of the experience industries.
[The experience industries are] a revolutionary expansion of certain
industries whose sole output consists not of manufactured goods, nor
even ordinary services, but pre-programmed ‘experiences’. The
experience industry could turn out to be one of the pillars of super-
industrialism, the very foundation, in fact, of the post-service
economy… the experience industry of the future and the great
psychological corporations, or psych-corps... will dominate.
A similar theme emerges in the field of consumer research in the early 1980s
where Holbrook and Hirschman (1982, 132-40) argue for “an experiential view” of
consumption focused on the symbolic, hedonic (the pursuit of fantasies, feelings, and
fun), and aesthetics of the consumption experience. It was in 1999, nonetheless, that
Pine and Gilmore (2010), seemingly unaware of Toffler’s futurology, introduce a
notion of the experience economy that can now be concretely related to the current
digital landscape. Accordingly, the twenty-first-century expansion of the UX industry (a
convergence of interaction design and marketing akin to Toffler’s psych-corps) can
indeed be grasped as a major component of a political economy of experience marked
by a shift from commodities, factory goods, and services to the added value of
experiential consumption increasingly associated with industrial scale operations in a
digitalized environment.
Following the experience economy model, the added value of digital experiences
can, on one hand, include conventional commodities, goods and services readily
transformed into new experiences realized through design, branding and marketing.
The point is that the experience economy is more attuned to the idea that it is the
experience itself that often captivates user-consumer attention, leading to emotional
engagements and the all-important purchase intent (Norman 2004). On the other hand,
at its most deep-seated, there is a commodification of experiences that do not
refer back to a tangible product or service. The design of smartphone interactions with
social media is apposite here. The value extracted from user interactions with social
media apps, for example, does not appear to relate in any palpable way to a conventional
product, but instead extracts value from the experience of social interaction. It is this
digital transformation of commodity production that arguably leads to a business need
to realize value in newly mediated interactions and experiences related to social context.
It is indeed the work of the UX industry, composed of UX consultants, interaction
designers, information architects, ethnographers, behavioural psychologists, big data
researchers, coders, biofeedback experts, network strategists and online marketers to
produce the sensory environments in which shared experiences can be captured,
cultivated and exploited.
The UX industry is able to draw on the resourceful expertise of a range of
specialists to prime sensory environments in which experiences might occur, but no one
person or business enterprise produces experience. To be sure, the broader concept of
experience capitalism emerges from research into (and extracting value from) what is
already in action. Borrowing from Langlois and Elmer’s (2013) approach to corporate
social media, we might say that what experience capitalism does is more closely aligned
to the patterning of experience, and I might add, significantly focused on the relational
aspects of interaction and the capacity of machines to learn from social context rather
than individual subjective experience. Here we can see how Pine and Gilmore’s (2010)
Erving Goffman inspired theatre productions are perhaps expanded to a point where the
capture of the performance of experience moves beyond any one locatable subjective
viewpoint to the massive-scale automations of experience gathering. As these big data
captures become more pervasively realized through the invention of ubiquitous
computer technologies, the subjective experience (described by Goffman as the
presentation of self) is, as Greenfield (2006) argues, increasingly teased out into the
public domain. That is to say, human subjectivity is not the producer of experience
(indeed, as I will contend below, it never has been). On the contrary, experience
capitalism persists in a world full of social media apps, relational databases, sensors
and computerized things that process experiences in which subjectivities are constantly
being made.
We can see the extent to which this economic shift toward experience steadily
dovetails with the three paradigms of HCI. Ostensibly, the pragmatic concerns of
early designers of computing systems demonstrated very little regard for the user
experience beyond a Tayloristic concern with bodily fatigue associated with
inefficiencies in the workplace. However, the eventual introduction of social factors
into ergonomics followed by a conceptual move to a second paradigm underpinned by
cognitive psychology and the centrality of the information metaphors of mind/computer
coupling, transitions increasingly toward a focus on user need, for example, through
usability studies. The subsequent development of user related services, like user testing,
heralds a distinctive trend toward incorporating elements of use initially focused on
cognitive processes of memory, attention and perception, but latterly incorporating user
motivation, frustration and satisfaction, requiring some knowledge of emotions,
feelings and affect. This trend can perhaps be seen as a precursor to third paradigm
concerns with the processing of felt experience, including previously marginalized
research questions, such as what is fun (Harrison et al 2007).
To fully understand the bridge that spans HCI and the experience economy, there
is a need to look more closely at two components of third paradigm research. Firstly,
there are fresh concerns with the role emotions, affect and feelings play in the
processing of experience. Secondly, the research focus shifts towards exploring new
pervasive contexts of computing use. It is my contention here that while much attention
has been given to the undoubted importance of these two components of third paradigm
HCI (e.g. Boehner et al 2007), there is a further need to explore how each becomes
interwoven with the experience economy.
The third paradigm marks the significant appearance of emotion in HCI research
as it emerges from its marginal positioning in the cognitive paradigm. Most notably this
interest in emotion stems from the HCI related affective computing research carried out
by Rosalind Picard (1997) at MIT, as well as the work of HCI and UX guru, Don
Norman (2004), whose influential emotional design thesis borrows from neuroscientific
ideas concerning the so-called emotional brain thesis to inform a model of experience
processing. According to Norman (2004: 21-24) experience is processed through three
interconnected levels: reflective (cognitive), behavioural (use) and visceral (affective).
This approach does not however go unchallenged in HCI. To be sure, Harrison et al
(2007) draw attention to a “wide range of [opposing] approaches to emotion”
including challenges to the “central role” it is assumed to play in cognition as a kind of
“information flow.” In contrast, there is a rejection of the “equation of emotion with
information” in favour of an “interpretation and co-construction of emotion in action
[and interaction]” (Harrison et al 2007). The transition from second to third paradigm HCI
research plays a key role in how these opposing conceptions of emotional experience
take shape. To begin with, the accusation against Norman’s model of experience
processing is that it (a) remains stuck with one foot firmly in the cognitive paradigm
and its tendency to reduce experience to the internal processor (and rationality) of the
individual user’s mind (i.e. the cognitive mind/computer metaphor), and (b) tends to
counterpoise cognition and emotion. A second kind of emotional experience therefore
emerges which is referenced back to Wittgenstein, and argues that emotions are not the
opposite of cognition, but like cognition, they are made in social and cultural
interactions. Indeed, Boehner et al (2007) argue for a culturally grounded understanding
of emotional experience in HCI research that recognizes the dynamics of shared
experience socially constructed in action and interaction.
Following fairly recent discourses from the technology sector, we can see how the
digitized experience economy has the potential to considerably expand beyond the
current wave of social computing to the Internet of Things (IoT). We may indeed
already have one foot firmly standing in a future wherein experiential data, mostly
captured today by way of conventional computing devices like PCs, mobile tablets and
smart phones, are being gathered from interactions with pervasive computing in every
conceivable location, everywhere and at any time. To be sure, experiences are already
being captured through interactions with everyday things like cars and so-called
wearables (fitness gadgets and training shoes, watches etc.), and now other things, like
kettles, mirrors, speakers, furniture, pavements, and streetlamps are fast becoming
computational devices. There are a number of implications for the growth of the
experience economy (and the focus of HCI research) in terms of the changing
spatiotemporal experience of computing. To begin with, the disappearance of the
conventional graphical user interface (GUI) and dissolving of computer power into
these everyday objects will alter the way the subject/object relation with technology is
approached. Encounters with IoT will be triggered by non-task interactions, fleeting
moments of contact, often hidden from users, and even accidentally engendered
interaction. Furthermore, biometric detection systems could potentially capture data
about the affective valence of the body. Here the capacity of facial recognition software,
for example, to detect emotional responses to environmental stimuli comes into play.
Secondly, pervasive computing challenges the way cognitive processes, like memory,
perception and attention, have been conventionally studied in HCI. For instance,
although generally considered as an augmentation of memory, media technology can
capture past experiences, lost to memory in the complex passage and variation of events,
so that they can be prompted back into action in the present. In other words, via machine
learning technologies, forgotten experiences can work in the background to generate
inferred experiential performances (Blackwell, 2015) that become perceptible in the
here and now of the experience economy. Thirdly, although the capture of entangled
experiences relating to animals, landscape and climate is already yielding a kind of
nonhuman experiential data, the pervasive operational level of computing may well
threaten the status of an assumed human-centred, conscious experience (Hansen 2015).
Harrison et al (2007) contend that the changing digital environment draws our
attention to the importance of embodiment in third paradigm HCI research. How we
come to “understand the world, ourselves, and interaction” in these new contexts
crucially derives, they argue, “from our location in a physical and social world as
embodied actors” (Harrison et al 2007). Embodied interaction has become one of the
major concerns of HCI, as such, and to understand it researchers have turned to
phenomenology. Dourish (1999; 2004), for example, sees these new contexts as
intimately linked to the technological changes he first observed in the latter part of the
twentieth century. To begin with, in the 1970s, GUI technology introduced a
visualization of computing that prompted a representational turn in the study of
interaction typified by cognitive task based testing and mental models utilized in the
cognitive paradigm. Yet by the 80s the growth in digital network communication adds
new importance to the social in interaction design, prompting a trend in research toward
analysing distributed notions of cognition. Subsequently, in the 90s, when computing
first begins to break out of the screen and make its way into the physical environment
in the shape of tangible technologies, attention is drawn toward the limits of the
cognitive approach. It is indeed these two latter developments in the context of
computer use (social and tangible) that, Dourish (2004, 15-22) argues, require a new
HCI framework focused on embodiment and grasped through the twentieth century
phenomenological tradition.
Embodiment is defined in a way that makes it useful to the HCI researcher because
it provides a “property of being manifest in and of the every-day world” in which
interactions take place (Dourish 1999). This property is not, however, simply restricted
to physical things, like computers or mobile devices, but can include participatory
patterns, like conversations between “two equally embodied people” set against “a
backdrop of an equally embodied set of relationships, actions, assessments and
understandings” (Dourish 1999). This backdrop owes an initial debt to Husserl’s
phenomenology, insofar as it is seen as part of a transition away from an experience of
the world grasped through the realm of abstract ideas (idealism) to one derived from
the experience of concrete phenomena. However, importantly, more attention is given
to Heidegger and Merleau-Ponty in third paradigm HCI research. In the first instance,
Heidegger famously tried to escape Husserl’s “mentalistic model that placed the focus
of experience in the head” (Dourish, 1999). This is, evidently, important to the third
paradigm’s similar transition from the cognitive realm of mental modelling to
embodied interaction whereby interaction is no longer considered in the head (or mind),
“but out in the world… that is already organised in terms of meaning and purpose”
(Dourish 2004, 108). Indeed, Heidegger’s ontological worldview is not taken as a given
but arises through interaction (Dourish 1999).
Dourish is not the first to utilize Heidegger for HCI purposes. Below he uses
Winograd and Flores’s (1986) adoption of the phenomenological distinction between
“ready-to-hand” and “present-at-hand” to explain a distinctly first paradigm experience.
of it as an object of my activity, the mouse is present-at-hand (Dourish
2011, 109).
This switching between automatic interaction and mindful attention suggests that
the mouse only really exists because of the way it becomes present-at-hand through
embodied interaction. The point is that the mindful activity of using the mouse is
constitutive of ontology, not independent of it (Dourish, 1999). The mouse comes into
being in the mind because, it would seem, it is part of an embodied experience of being
in the world. Indeed, this notion of mindful embodiment is developed further, Dourish
(2004, 114) notes, by Dreyfus (1996) who brings in the phenomenology of perception
developed by Maurice Merleau-Ponty (1962). Here we find that perception itself is
an active process, carried out by an embodied subject. As a result, third paradigm HCI
research begins to focus on a somewhat dualistic distinction between the “physical
embodiment of a human subject, with legs and arms, and of a certain size and shape”
and a “cultural world” from which subjects extract meaning (Dourish 2004, 114).
From this stance the importance of developing “bodily skills and situational responses,”
alongside mindful acts (or “cultural skills”), which in turn respond to the user’s
embeddedness in this “cultural world,” comes to the fore (Dourish 1999). It is in
between bodily and mindful interactions that abilities and understandings of computing
are developed. There is also a considerable social component to this notion of
interaction. On one hand then, we find the presence of the phenomenological body of
the user-subject, who, on the other hand, simultaneously becomes the “objective body”
experienced and understood by others in the cultural worlds they encounter (Dourish
2004, 115). From this point on, HCI researchers start to draw on Merleau-Ponty’s
phenomenal perception of embodied and cultural worlds to develop, for example, “a
taxonomy of embodied actions for the analysis of group activity” (Dourish 2004, 115;
Robertson 1997).
Although phenomenology escapes Husserl’s mental prison of the head to explain how
experience emerges from human interaction with the world, human perception remains
stubbornly (and problematically) central to the phenomenologist’s ontology. Whether it is
in the head or embodied in the world, HCI phenomenology similarly begins with the
notion that it is the human who has the experience. In other words, where the action is
can be grasped ontologically as it is sensed (in the head, in the hand or through some
other bodily interaction) by the human. So why use Whitehead to challenge such a
position and what tools can we take from this radical departure from the
phenomenological tradition?
Part Two: A Whiteheadian Adventure in HCI
Of course, HCI researchers may well want to question the value of an approach to
HCI that side-lines the human, or more specifically, human consciousness. However,
this stance is important to critical-HCI because the transient perception of the subject-
user of the here and now of experience only represents a small slice of the passage of
events occurring in the actual world. Arguably therefore the focus on human perception
neglects to grasp the full extent of the shift to the experience economy and changes to
the technological infrastructure that newly redefine where the action is. This is not,
however, an approach that is dead set against perception. But perception needs to be
seen as only taking into account what occurs (Stengers 2014, 147). This is not the same
as saying that perception produces reality. Perception does not decide if things are more
or less real! That is to say, embodied interaction only goes as far as declaring mere
instants of percipient, and sometimes specious, events in experience. What the
adventure profoundly tells us is that it is, inversely, the process of reality that produces
subjectivity.
Analytical Tools for Non-Bifurcated Experience
Whitehead was determined not to limit his philosophical outlook to theories that
made such a bifurcation happen. He looked, as such, to develop new concepts of
experience that are not exclusively the property of human perception, but rather
inclusive and interlocked with the actual world humans are a part of. Of course, this is
a complex task. It is necessary, first, to undo the subject-predicated philosophies
developed over epochs of human consciousness; to completely disengage from the
solipsistic sense that humans are the masters of subjectivity when it comes to observing
real material substances or the formulation of ideas that describe them. It also means
overcoming the language games we have absorbed into our minds that explain our
subjective experience of the real world in such limited ways. Second, and clearly related
to HCI, we need to challenge the rigidity of subject-object relations as the only way to
think about the ontology of spatial interaction, and, third, Whitehead prompts us to
move beyond purely spatial concepts of interaction to radically approach experience in
terms of the passage of events.
The Whiteheadian adventure asks us to test the limits of language and redesign it
in a similar way to which the tools of physics are intended to better probe the dynamics
of the actual world. As Whitehead contends, language was designed to handle a static
world and fails, as such, to express the dynamics of reality (Urban 1951, 304). For
example, in his endeavour to refuse bifurcation Whitehead criticized the orthodox
concept of “having an experience” of an object since it is erroneously determined by
the mould of the subject-predicate. That is to say, the subject (the knower) is always
situated by the experience of the object (the known). As Victor Lowe (1951, 106)
argues, the subject-predicate mould is “stamped on the face of experience” so that the
experient is the subject who is always qualified by the sensations of the objective world.
This is how language traps experience in the unidirectional relation between the private
subject and the public object.
[W]e cannot determine with what molecules the brain begins and
the rest of the body ends. Further, we cannot tell with what molecules
the body ends and the external world begins. The truth is that the brain
is continuous with the body, and the body is continuous with the rest of
the natural world. Human experience is an act of self-origination
including the whole of nature, limited to the perspective of a focal region,
located within the body, but not necessarily persisting in any fixed
coordination with a definite part of the brain.
Clearly, this is not experience limited to any privileged sense organ (the brain or
the sensation of a body), or indeed, a higher level of consciousness (the all-perceiving
mind with the capacity for language). Although Whitehead (1967, 78) concedes that
human consciousness may well be an exhibit of the “most intense form of the plasticity
of nature,” there is no dichotomy between the human and what is experienced, and
ultimately, in this nonbifurcated sense-making assemblage, nature is closed to mind.
Space is Interaction
The first paradigm may well have been onto something that the second and third have gone
on to ignore. Instead of concentrating on perceptive locations of interaction in space –
i.e. the points in space where hands (and minds) meet the mouse – ergonomic experts
engaged in capturing (and breaking down) computer tasks into discrete activities in
time. Albeit an oversimplification of a passage of time lacking in the thickness required
by Whitehead’s theory of events (Stengers 2014, 52), the first paradigm ergonomic
study of interaction is not limited to a notion of perception fixed to a geometric grid.
Like third paradigm HCI, the Whiteheadian adventure endeavours to escape from
the same Cartesian structures that underpin the second cognitive paradigm. To do this
Whitehead borrows from William James’s concept of pure experience to make a contra-
Cartesian move (Stengers 2014, 70). But we must first clearly distinguish here between
the phenomenological contra-Cartesian position Dourish (2004, 127; 191) takes in
Where the Action Is and Whitehead’s event analysis. On one hand, Dourish (2004, vii)
is critical of the cognitive paradigm’s convention of grasping interaction through a
mind-computer metaphor that seems to have lost its relation to a body. As we have seen,
embodied interaction is not just information in the mind; it is also experienced in the
hand. On the other hand though, Whitehead does not regard mind or body as the
situation where interaction occurs, but rather draws attention to how both are composed
in a passage of events. The “I” of the mind (and the body to which it seems to belong)
does not determine who we are, since in the duration of events both body and mind are
swept up in the present before slipping into the past. So, unlike in Descartes’s dualism,
the mind does not determine who we are. Again, this is not the command post of experience
we find in the phenomenological matrix. To be sure, the mind always comes later! The
experience does not therefore belong to the mind. The mind’s perceptual judgements,
as well as its apparent capacity for memory and attention, can only testify to the passage
of events from its percipient foothold in the duration of events (Stengers 2014, 75).
including how it is sensed through a clicking noise even if it is not seen, as well as the
haptic physicality and perception of shape or even viewed under a microscope as a mass
of molecules, and so on. Abstract objects are not experienced merely in the now either.
They provide a uniqueness and continuity that presents the foothold the mind needs in
the events that pass it by; there is the mouse and there it is again! It is not, as such, an
object in a given space. It is a mouse-event or pattern of interaction that produces the
subjective reality of the mouse. Ontologically, the mouse is not therefore hidden from
consciousness, but it is declared in the percipient encounter with events (Stengers 2014,
46). To put this another way, it is not the abstract properties of the concrete object that
declare the mouse; rather, the mouse is an abstract object perceived in the unified
concrescence of the events that declare it. The subject who perceives the mouse is not
the author of the event, or indeed, the author of the many variations in mouse-events.
But we must not simply replace subject/object with object/event relations. We need to
think of interaction as a society or a nexus of events in passage that provide ingression
to objects so that the object is expressed in the event and the event expressed in the
object (Whitehead 2004, 144-52). As Stengers (2014, 52) puts it, every duration of an
event “contains other durations and is contained in other durations.” This is the
relational temporal thickness of Whitehead’s event that cannot be grasped in individual
points in time or space. It follows that we need to recall that making the subject the author
of this kind of mouse-event reintroduces bifurcation. The human mind (however
exceptional its plasticity in nature) cannot experience the whole event. The subject does
not decide on events (whether the mouse is here or not here), as such. The events decide
the subject. The subject’s point of view (this percipient window on experience) belongs
to an “impersonal web” of events (Stengers 2014, 65). To put it another way, events are
not a privileged conscious point of view the user adopts. Users may well occupy the
here, but it is their relation to the now that sweeps them up in a complex flow of events
in which they might mistake the observational present for something that exceeds the
mere foothold the mind has in all of this complexity.
To counter the phenomenal mind, which finds meaning in the symmetry of the
here and now, Whitehead introduces us to the asymmetry of the here and now. Yes, the
percipient event locates us in the here but this here does not move in tandem with the
now. The durational now scoops up the here producing infinite variation. It is indeed,
as Stengers (2014, 67) points out, the and in the here and now that really matters in
terms of meaning making. This is what relates the asymmetrical sense of an
observational present (the here) to the now in durational passage. This is Whitehead’s
cogredience (Whitehead 2004, 108-09), which would later be developed more fully in
process philosophy as the vector-like concept of prehension.
Prehending HCI
The need for prehension begins with a problem regarding how humans confuse what
is perceived here with real things that are supposed to exist at a distance, as there.
Prehension, according to Lowe (1951, 97), therefore provides the “thread” of process
and reality. It is the vector that makes events into concrescent unities and, as such,
analyzable. Prehension takes us beyond the here and now of phenomenality
by otherwise looking to how the there becomes the here. Unlike the idealist’s answer
to this problem, wherein the abstraction of space by the mind results in a solipsistic
subjective perception, we find a production of reality in which what is felt is always
becoming (Whitehead 1985, 236-43): the past (objective datum, what is prehended) is
alive and well in the present derivation (subjective form, how it is prehended).
Prehensions thus
provide a way of grasping how what is there becomes something here. In other words,
a prehension is the relation established between events in which the past has a stake in
the composition of what is new. Again, it is not simply the here and now (immediate
present) that matters to Whitehead, but how prehension sweeps past events up into a
unity (or nexus) in which something there becomes something here (causal efficacy).
Following Whitehead’s nonbifurcated event analysis then, the mouse cannot be said to
be in or out of mind because the past (what is prehended as the mouse) is always in the
now (this is how the mouse becomes a subjective form). In short, the mouse is
experienced as a flow of events (a process) whereby the past event flows into the present
event.
The use of prehension in critical HCI might also help researchers to go beyond
Dourish’s criticism of the second cognitive paradigm by not only radically inverting
the notion that action in the world necessarily comes after concrete experiences of
objects (the mouse) followed by an abstraction (the mouse in hand or mind), but also
questioning the very concept of social context. Indeed, as Blackwell (2015) argues,
much of the study of situated and embodied interaction misses the new technical
landscape in which social context is engendered by machine learning systems. Such
systems operate on “‘grounded’ data, and their ‘cognition’ is based wholly on
information collected from the real world” (Blackwell 2015). These systems directly
interact with social context insofar as they collect data from social media, cookies,
and relational databases, making the user experience increasingly inferred and akin to
Toffler’s forecast of a pre-programmed experience industry. For Blackwell, the
critical issue at stake now is that by making humans into “data sources” in the service
of machine learning systems, it is no longer simply a problem of grasping human
cognition as situated in the machine, but instead we need to recognize the inhumane
character of a ‘cognition’ emerging from a new technological context. Prehension can,
as such, help us to reconceive of a user experience beyond the subjective relations of
a Euclidean objective world of the here and now, by looking to a spatiotemporal
concept of interaction defined by what is experienced over there (by a machine)
becoming experienced here (by the human). These are concerns in critical HCI that
considerably overlap with similar concerns in media theory.
At first glance, this may seem like a plausible explanation for what happens when
capitalism, weaponized by the latest operations of digital technology, captures and
commodifies experience. Nonetheless, what I argue here is that the notion of the loss
of human experience in digital culture, suggested by Hansen, glosses over Whitehead’s
more profound and constraining concept of nonbifurcated actual experience,
something Hansen (2015) reduces to this “worldly production of experience” in which
the ontology of duration appears to be full of gaps and ruptures between human
consciousness and technologically produced experience. As Greg Seigworth (2015)
similarly argues in a recent talk:
can be done and done more comprehensively in Hansen’s view by, say,
technical machines of various sorts). But such a conception creates a
rather troubling kind of ahistorical suspension or hiatus in any sense of
what might be longer stretches of temporal continuity – durations
persisting alongside any array of ruptures / gaps / delays – within the
ontological itself.
To be sure, the experiential gap that Hansen offers up seems to break all the rules
of Whiteheadian nonbifurcation. The point is that human experience is not increased or
lessened; it is not a case of less or more consciousness in twenty-first-century media,
nor, for that matter, is experience something that can simply fall through an experiential
gap. On the contrary, experience is generative in the circuitries of the capitalist
economy, which records and patterns interactions as they occur in spatiotemporal
occasions. Indeed, the experience of the there, and there it is again, mouse-event is
transformed in pervasive digital media, but only in respect to the novel digital objects
that now ingress with the thickness of durational passage.
such a task and focus instead on the far more dystopic grip of experience capitalism in
which the mere foothold of the mind in the durational thickness of events is captured in
a twenty-first-century media circuitry. We may choose to ponder our asymmetrical
experiences in this circuitry, but the most pressing critical issue, it would seem, is the
extent to which capitalism experiences us! Although it seemingly overlaps with critical
concerns from some quarters of HCI and media theory, this circuitry presents a very
different politics of experience to those that are founded on a perceived loss of human
judgement in the face of a new dehumanizing technological context. The power of
experience capitalism, weaponized by data gathering and machine learning, is not to be
found in the human’s experiential exclusion from an inhumane world of inferred
interaction. On the contrary, although there is more work to be carried out to fully grasp
the folded nature of human-computer interaction and its relation to experience
capitalism, this is a power that seems to tap directly into the often improvised
experiences and events in which subjectivity is produced. The power of experience
capitalism is therefore found in a capacity to prehend past events so that they become
part of the composition of what is experienced as new.
References
Blackwell, A (2015) Interacting with an inferred world: the challenge of machine
learning for humane computer interaction. The Fifth Decennial Aarhus
Conference on Critical Alternatives.
http://dl.acm.org/citation.cfm?id=2882878
Bødker, S (2006) “When second wave HCI meets third wave challenges.” NordiCHI
'06 Proceedings of the 4th Nordic conference on Human-computer interaction:
changing roles. 1-8. http://dl.acm.org/citation.cfm?id=1182476
Boehner, K, DePaula, R, Dourish, P & Sengers, P (2007) How emotion is made and
measured. International Journal of Human-Computer Studies (65), 275-291.
Dewey, J (1951) The philosophy of Whitehead. In: Schilpp, P.A (ed.) The philosophy
of Alfred North Whitehead. Tudor Publishing Company, New York.
Dourish, P (1999) Embodied interaction: exploring the foundations of a new approach
to HCI. Researchgate.
https://www.researchgate.net/publication/228934732_Embodied_interaction_Ex
ploring_the_foundations_of_a_new_approach_to_HCI
Dourish, P (2004) Where the action is, MIT Press, Cambridge. (Originally published
by MIT Press in 2001).
Harrison, S, Tatar, D, & Sengers, P (2007) The three paradigms of HCI. Paper
presented at the Conference on Human Factors in Computing Systems.
https://people.cs.vt.edu/~srh/Downloads/TheThreeParadigmsofHCI.pdf
Langlois, G & Elmer, G (2013). The research politics of social media platforms.
Culture Machine Vol 14 Open Humanities Press.
http://www.culturemachine.net/index.php/cm/article/viewArticle/505
Norman, DA (2004) Emotional design: why we love (or hate) everyday things. Basic
Books, New York.
Pine, J & Gilmore, JH (2011) The experience economy. Harvard Business School
Press, Boston. (Originally published by Harvard Business School Press in 1999).
Pine, J & Gilmore, JH (2013) The experience economy: past, present and future. In:
Jon Sundbo and Flemming Sørensen (eds) Handbook on the experience
economy. Edward Elgar Publishing, Northampton: 21-44.
Stengers, I (2014) Thinking with Whitehead: a free and wild creation of concepts
(Trans Michael Chase) Harvard University Press, Cambridge (originally
published in French in 2002).
Whitehead, AN (1985) Process and reality (corrected edition), Free Press, New York.
(Originally published in 1929 by Macmillan, New York).
幽灵般的媒体
张正平(Briankle G. Chang)
媒介是至高无上的,它统御至上:媒介无处不在,而且正如人们所熟知,我
们的生命依赖媒介而生。媒介又是崇高的:媒介不仅支撑着我们的日常生活,而
且正如生活会溢出生命那般超乎我们的理解。媒介的重要性远胜一切,因为它无
所不在且无所不能。可以说,在最普遍的意义上,任何两者之间的存在均为媒介。
媒介是“第三者”,是“中间物”(Medium)——你我也为媒介之一种。无论何
时何地,只要稍加留心,媒介始终“与你同行”。的确,任何两者之间总会有一
个第三者存在。该第三者会进一步成为其它“两者”中的一者,由另一“第三者”
来发挥伴随或媒介作用。由此看来,媒介总是另一媒介的媒介,或旋即消失,或
恍然出现,居于此间(in medias res)。同样,我们只能在媒介之中、以媒介
之法对媒介展开思考。
媒介,媒也(Media mediates)。然而,这种同义反复式的表述依然符合有
关媒介/中介(mediation)的事实——这个简单的事实是,在作为媒介的过程之
中,在行其所能之时,媒介消失了。媒介,无论以何种形式存在,都会成功地全
身而退,化入所谓媒介效果(media effects)之中。有效的媒介都是无形的,
唯有失效的媒介才会显现。这一点已经被海德格尔等人多有论及,同时,我们也
可在智能手机或笔记本电脑失灵(或不受我们控制)时感知到媒介的这种特性。媒
介存在的前提是它允诺自我消除,隐入幕后,以至消失不见(dis-appear)。媒
介的终结意味着媒介自身的终结(the end of media is the end of media),
媒介作用的终结(the end of mediation),一切传送、传递以及一切通信与交
流的终结(the end of all correspondence and exchange)。这是“邮递员之
死”(the death of the postman),又或许会如孟德斯鸠所说的在普遍意义上
通信的“绚烂终结”(“brilliant end” of the post in general)。
从媒介的视角看来,我们的世界更近乎于莱布尼茨式而非笛卡尔式的。世界
并非由锐角事物构成,而是由在不同感知力层面上或展开或折叠的“花园”
(gardens)和“鱼塘”(fishponds)构成;而且,是在一个乐观主义者的“圆
房子”(an optimist “round house”)内——一个受万物和谐与正义主导的
天堂,其中,原子及其聚合物从不同的角度相互关联。这幅屡被提及的“万物互
联”图,正如当今世界一样基于媒介、中介、万物秩序(order of things)而存
在,从而预示着一个有关连接、中断与再连接、再中断的“完美世界”(best
possible world)。
本文认为,一切“部分-整体”关系之中都存在两种不同类型的“连接”
(connection),正是二者的归并(conflation)决定了我们大部分关于媒介的
常规理解:以内容导向为主,由此在输送、编码/解码以及传播的过程中始终以
人质的形象出现,且永生不灭。在这部分的结尾,我将思考两种或多或少具体些
的例证——以太(aether)和链接器/耦合器(the coupler),从而尝试厘清在
理解媒介的过程中常常被混淆的那两种连接形态。在论文的结尾部分,我将对“网
络”(network)这一理念展开反思。 “网络”持续性地赋予我们作为节点(node)
的功能而后又剥夺之,而且如德勒兹所指出的那样,我们对“节点”的理解是一
种幻觉(hallucinations),对之既无法避免、又浑然不觉。
Briankle G. Chang
Spectral Media
Media is sovereign; it reigns supreme: not only is it everywhere, but our life, as we
know it, depends on it. Media is also sublime: not only does it support our daily life,
but it also exceeds our comprehension in the same fashion that life itself always exceeds
the life lived. More than anything, because it could be everything and is everywhere, we
will do well to begin by saying that, in its most general sense, media is whatever stands
between any two things. It is a “third,” a someone or something that, whenever and
wherever we look, “walks always beside you”—a medium, so to speak, that we
ourselves are as well. Indeed, between any two, there is always a third, which is in
turn one of the two accompanied or mediated by its other(s). Seen in this light, a media
is always a media of media, presently absent or absently present in medias res. Seen in
this light too, we can begin thinking about media only in the midst of media but also
with media.
Media mediates. This tautology, however, doesn’t fail to betray a fact about
mediation, the simple fact that in mediating, in doing what it does, media disappears.
Media, whatever form it may take, succeeds in withdrawing itself, into its work we call
media effects. When media works, it does not appear to work and it appears only
when it fails to perform, as amply discussed by Heidegger and others, but also reflected
and recognized by us when our smartphones or laptops fail to work (or work according
to their own will), as it were. The premise of media is that it promises to erase itself, to
recede to the background, to dis-appear. The end of media is the end of media; it is the
end of mediation, of all transmission, all delivery, all correspondence and exchange; it
is the death of the postman, the “brilliant end” of the post in general, as Montesquieu
would call it.
From a media point of view, our world is more Leibnizian than Cartesian; it is a
world made not of sharp-angled objects, but of “gardens” and “fishponds” folding and
unfolding across levels of perceptibility and according to multiple perspectives from
which atoms and aggregates of atoms relate to one another, all the while within an
optimist “round house,” a paradise governed by universal harmony and just retribution.
The oft-invoked image of the “internet of things” as emblematic of the present day
world is itself based on a media, or mediated, order of things, which in turn is predicated
on an image of a “best possible world” of connection, interruption, and ceaselessly
interrupted reconnection.
In this paper, I argue that there are two distinct types of “connection” that underlie
any part-whole relation, and it is the conflation of the two that determines much of our
common understanding of media as largely content-driven and, consequently, kept
hostage to the self-perpetuating images of transmission, encoding/decoding, and circulation.
To that end, I will consider two more or less concrete figures, aether and the coupler,
that help illustrate the two modalities of connection often con-fused in our conceptions
of media. I will end by offering some reflections on the idea of network that continually
makes and unmakes us as a node whose perceptions, as Deleuze suggests, are inevitably
and unknowingly hallucinations.
Figure 3: Robert Morris, Ring with Light, 1965/66
Participants (与会学者)
Briankle G. Chang
Associate Professor of Department of Communication
University of Massachusetts Amherst
张正平(美国马萨诸塞大学传播系教授)
Mary Ann Doane
Professor Emerita of Department of Film & Media
University of California, Berkeley
多 恩(美国加州大学伯克利分校电影与传媒系荣誉教授)
Fang Weigui
Distinguished Professor of School of Chinese Language and Literature
Beijing Normal University
Research Fellow of the Centre for Literary Theory
Changjiang Scholar (Ministry of Education of China)
方维规(北京师范大学文学院特聘教授,文艺学研究中心研究员,长江学者)
David J. Gunkel
Presidential Teaching Professor of Communication Studies
Department of Communication
Northern Illinois University
冈克尔(美国北伊利诺伊大学传播系教授)
Mark Hansen
James B. Duke Professor of Literature
Department of Art, Art History & Visual Studies
Duke University
汉 森(美国杜克大学艺术、艺术史与视觉研究系荣誉教授)
Jiang Yi
Professor of School of Philosophy and Sociology
Shanxi University
Changjiang Scholar (Ministry of Education of China)
江 怡(山西大学哲学社会学学院教授,长江学者)
Myung-koo Kang
Professor Emeritus of Media and Cultural Studies
Director of Asia Center
Seoul National University
姜明求(韩国首尔大学传播系教授,亚洲研究中心主任)
Sybille Krämer
Professor of Theoretical Philosophy
Institute of Philosophy
Free University of Berlin
克莱默(德国柏林自由大学哲学系教授)
Liu Chao
Professor of State Key Laboratory of Cognitive Neuroscience and Learning
Beijing Normal University
Young Top-notch Talent of Ten Thousand Talent Program
刘 超(北京师范大学认知神经科学与学习国家重点实验室教授,万人计划青
年拔尖人才)
Luo Yuejia
Distinguished Professor of College of Psychology and Sociology
Founding Director at the Research Center of Brain Disorder and Cognitive Science
Shenzhen University
罗跃嘉(深圳大学特聘教授,脑疾病与认知科学研究中心主任)
Tony D. Sampson
Reader in Digital Culture and Communication
School of Arts and Digital Industries
University of East London
桑普森(英国东伦敦大学艺术与数字产业系教授)
Christina Vagt
Associate Professor for European Media Studies
Department of Germanic & Slavic Studies
University of California, Santa Barbara
瓦格特(美国加州大学圣塔芭芭拉分校日耳曼与斯拉夫研究系教授)
Joseph Vogl
Professor of German Literary, Cultural and Media Studies
Institute of German Literature
Humboldt University of Berlin
Permanent Visiting Professor at the Department of German
Princeton University
福格尔(德国柏林洪堡大学德语文学系教授,美国普林斯顿大学德语系常任客
座教授)
Xu Yingjin
Professor of School of Philosophy
Fudan University
Changjiang Young Scholar (Ministry of Education of China)
徐英瑾(复旦大学哲学学院教授,青年长江学者)
Shunya Yoshimi
Professor of Interfaculty Initiative in Information Studies
Vice President, University of Tokyo
吉见俊哉(日本东京大学信息学研究科教授,副校长)
Siegfried Zielinski
Professor for Archaeology & Variantology of the Arts & Media
Berlin University of Arts
Michel-Foucault-Professor for Techno-Aesthetics and Media Archaeology
European Graduate School in Saas Fee
齐林斯基(柏林艺术大学媒体理论教授,瑞士欧洲研究院米歇尔·福柯讲席教
授)