
Picture Pluperfect

April 12, 2012

It is easy to mistake what we do online as centrally about exposure and transparent exhibitionism. The explosive popularity of Pinterest, a social network based around collecting and sharing beautiful images, seems to suggest just the opposite: Rather than reflecting the naked truth of ourselves, we also embrace something a bit more pristine.

Wandering on Pinterest can prompt disorienting vertigo, a dizzying sugar high, with so much that is adorable and clean and sweet. Other networks, like Instagram, can similarly hurt the teeth. When digitally connected, we are increasingly scrolling beautiful.
Instead of thinking of social media as a clear window into the selves and lives of its users,
perhaps we should view the Web as being more like a painting.
On the walls of many wealthy 18th and 19th century European palaces were hung so-called
picturesque landscape paintings. Disarmingly charming, these paintings, by such artists as
Claude Lorrain and J.M.W. Turner, often depicted central Italy and were admired for their
beauty; indeed, they were painted to be more beautiful than the landscapes themselves. In
1792, English artist and cleric William Gilpin wrote about the distinction between that which
is beautiful when viewed in person versus that which is best captured by its representation:
"the most essential point of difference between the beautiful, and the picturesque [is] that particular quality, which makes objects chiefly pleasing in painting."
Gilpin's essay struggles to capture the aesthetic essence of the picturesque but makes clear the distinction between beauty as it naturally exists and constructed beauty. What is beautiful to the eye in the ephemeral stream of (mostly) unmediated experience may be different from what is beautiful in its mediated, documented form. While photography was invented well after Gilpin's essay was written, many of us have had similar experiences of a photo imbuing additional beauty that one might not have appreciated without the

photograph. This is the essence of the picturesque: something that is more pleasing in a
mediated representation.

So popular was the picturesque ideal in the late 18th century that it set off a type of tourism
in which wealthy vacationers took to the European countrysides in search of landscapes
reminiscent of picturesque paintings. Given how the picturesque privileged constructing
beauty over appreciating scenery in its natural form, some tourists carried with them a device
designed to allow them to view landscapes as if they were picturesque paintings. It was
sometimes called the "black mirror" or, more commonly, the "Claude glass," named after
Claude Lorrain. The device was typically pocket-size, with convex, gray-colored glass. When
viewers looked into it, the convex shape pushed more scenery into a single focal point and
the color of the glass changed the tones to be more pleasing to the eye by the standards of the
contemporary picturesque paintings, which had a limited color palette. The constructed
image was thought to be even more beautiful than reality.
What's most striking about the Claude glass was how it was used: Tourists would stand with their back to the landscape and look at a reflection of it rather than look directly at the landscape they had traveled to see. The Claude glass may be a long-forgotten piece of technology, but in that regard it's a perfect metaphor for much of the modern Web. Imagine the tourist using the Claude glass, turning away from the world and toward a gadget providing an idealized image. And now consider us, positioning ourselves precisely away from the complexities of reality and staring into our glowing, well-connected digital Claude-glass screens.
This is contrary to the belief that the Internet means the end of anonymity, the death of
privacy, or the development of a space that requires pure and naked exhibitionism. As we do
offline, our self-presentations online are always creative, playful, and thoroughly mediated
by the logic of social-media documentation. The Claude glass metaphor describes an Internet that's more than beautiful: one that is picturesque.
You may have already noticed that picturesque paintings of the past look hauntingly like
the faux-vintage Instagram and Hipstamatic photos propagating social media streams today.
Less a matter of capturing an accurate representation, the faux-vintage photo uses lighting and coloring cues similar to those the picturesque painters employed centuries ago.

Painting on left by Claude Lorrain, Landscape with a Draughtsman Sketching Ruins, 1630. Image on right taken with Instagram.

Like Instagram, Pinterest is a social network defined by the picturesque. It captures not the users' reality but instead bathes them within a stream of adorable, immaculate, charming, and precious photos to scroll through, click on, repin, and appreciate. On typical pinboards, the full palette of reality (the ugly, the gritty, the dark, and the complex) is rounded into a version pleasing to the eyes.
But the popularity of Pinterest, like that of the Claude
glass, goes beyond the simple enjoyment of pretty images.
The wealthy 18th century tourists enjoyed more than just
the view, the reflections, and the paintings. More
fundamentally, they enjoyed demonstrating their refined
taste, distinct from the lower and middle classes as well
as the new rich.
What we pin, post, and like allows us to demonstrate our refined tastes, to declare publicly what we deem picturesque. Part of the pleasure of using Pinterest stems from declaring "this shirt, photograph, or coffee mug represents who I am," designing a self as Steve Jobs would a phone. Identity is performed not as through a transparent window but through the logic of mediated and curated imagery. Pinterest lets us immerse ourselves in ourselves, awash in a never-ending torrent of our own taste. Thus, the danger of Pinterest, as Bon Stewart has argued, is that it might foster an uncreative, Stepford Wife version of the self based on the currency of the repinnable.
And this is how we use Facebook, as well. Many of Facebook's commentators and users, however, have reified the myth, propagated by Zuckerberg, that the site is "the story of your life" and "who you really are." This isn't the case: You know that your Facebook profile is more beautiful, clever, and interesting than you really are; blemishes downplayed and mistakes deleted. There, your Friday night seemed more exciting, your insights more witty, your home better furnished, your film selections more exotic, your pets more adorable, and your food more delicious. The myth of frictionless sharing exposing everything about the real you is debunked the moment you introduce the friction of turning "private listening" on in Spotify when listening to Bon Jovi.

On Pinterest, we do not collectively fail at creating a "real self" as we do on Facebook. The


curated Pinterest pinboards of perfectly prepared cupcakes, flawless bathroom designs, and
precious haircuts make no claim to represent the full complexities of reality or self-identity.
And Instagram, too, makes obvious an image-mediated unreality that's precisely the opposite of Facebook's claim to be the sum of one's whole life. The fantasy escapism of Facebook is less honest. Facebook is where the fiction of the picturesque is often passed off and regarded as fact.
Simply, if Pinterest is the picturesque landscape painting hung on the wall, then Facebook is
where we pretend the painting is a window.
Remember that Zuckerberg's bank account hinges on pretending Facebook is fact. His data on us is only as valuable as it is accurate. Commentators get mileage out of falsely claiming "the death of anonymity" because sensationalist scare tactics garner clicks, eyeballs, attention, and, again, money. And we propagate the myth out of fear of seeming inauthentic against the obvious evidence we are all posers. Facebook is a lot like identity performance offline; our online and offline identities were never that separate to begin with. We propagate the myth of identity as being natural, authentic, and spontaneous and forget what thinkers like Erving Goffman and Judith Butler have painstakingly illustrated: Identity, on and offline, is a performance. Both Pinterest and Facebook evoke the Claude glass of the past and the logic of the digitally picturesque, the beauty of what is constructed, modified, performed rather than given. The mistake is not in engaging in fantasy but buying into the illusion of the real.

Speaking in Memes
October 24, 2012

Biden-laughs and Ryan-abs, Big Birds and binders and bayonets: There is something fascinating when an event as stodgily ceremonial as the presidential campaign is run through the lulz-filter of social media, secreting a hallucination of phrases and images and videos and, of course, gifs. An army is at the ready to spin off a gag at every turn, to propagate the joke to maximum scope; digital arpeggiations of candidate goofs and campaign blunders are transmitted from host to host through a mere caress of the touch-sensitive screen. Watching debates with that second screen of fast-moving social media streams and text-input boxes begging our thoughts has positioned many of us as hunters for the most shareable, memeiest content, ready to pounce at something, anything, and in the process, changing the overall narrative of an event. We've developed a kind of meme literacy, a habit of intuiting in real time the potential virality of a speech act, to hear retweets inside words.
Retweets, reposts, reblogs, repins, and remixes lead to reporting. The Meme Election 2012 isn't just a matter of what's found in some sticky gif'd-out corner of Tumblr; it also dominates everyday Facebook feeds and news blogs. And because journalists are disproportionately connected digitally, popular memes also burrow into mainstream-media narratives as a measure of what has captured people's attention. Whether you watched the conventions and debates on one screen or three, there's a good chance you encountered discussion of Internet memes afterward.
The definition of meme can be debated, but the short of it is that a meme is a unit of culture, a parallel to the biological gene in Richard Dawkins's original coinage. Many have since adapted the term to describe how cultural products pass virally from person to person by multiplying themselves throughout the social body. Technically, any shared image is a meme regardless of how viral it has become, but when we say "meme," we generally mean a successful one.
It's that success of memes in influencing the political narrative that has garnered so much attention this election cycle. Memes themselves have become a meme. As Amanda Hess said, the trajectory of U.S. election coverage is "unmoored from campaign headquarters and D.C. bureaus and placed into the hands of the loudest crowds and their swiftest microbloggers."

Hess also noted that it is difficult for the candidates to manufacture virality. Instead, meme politics often actively resist the campaigns' intentions. The memes that proliferate, on and offline, are not what any of the campaigns planned. Obama gave "Romnesia" a hard sell but failed to spark a fire. Perhaps the most successful intentional meme was Obama's reference to bayonets in the final debate, though even this did not proliferate like the others. Instead, after major political events, what goes most viral are not the zingers carefully constructed by teams of hired writers. Mitt Romney likely had no clue that Big Bird or binders would not only get attention but would generate Twitter accounts with tens of thousands of followers, Tumblrs with thousands of reblogs, and countless Facebook updates, as well as dominate the blogosphere and mainstream political reporting the next morning. When Clint Eastwood made a speech to an invisible Obama in an empty chair, it was not Obama's advisers who created the @invisibleobama Twitter account or the Eastwooding performative Internet meme.
Campaigns can't plan memes. Instead, the campaigns can merely react to them. Savvy staffers quickly jump in as a meme begins to go viral and try to capture the moment with an image. For instance, Obama's team quickly tweeted in response to Eastwood, "This chair is taken," which has been retweeted more than 50,000 times. Or, as is increasingly popular, campaigns will buy a Twitter hashtag, as the Democratic Party did when it paid to promote #malarkey after the vice presidential debate. By joining in, campaigns can reinforce memes favorable to their candidate, attempt to look "with it" by being aware of the meme economy, and reassert their own traditional influence over the political narrative, influence that memes, even if just for an instant, threaten.
But nearly any attempt on the part of the campaigns to manufacture virality fails. The memorable memes are those that seem to authentically emerge from the bottom up, their very spontaneity serving as evidence of something genuine. When Obama mentioned Big Bird in the second debate, it fell flat. Another round of Sesame Street images didn't go reviral, and new hashtags didn't begin to retrend. The idea of something going reviral is almost a self-contradiction.
Analyses of memes that examine their specific content at face value often miss that virtually all election-related memes are inherently a critique of the election in general. In a moment when trust and favorability in politics are near an all-time low, the political statements we make about the presidential election increasingly need to account for the absurdity of the process, from the behavior of the campaigns themselves to the mainstream coverage of them. One of the most common narratives about presidential conventions, commercials, and debates is what silly performances they are. We all know that style is as important as substance, that the winner of a debate isn't the one with the strongest logic, and that both candidates are telling such a slanted story that accepting anything uttered as fact is a sure sign of naiveté. Presidential debates are rightly mocked as the mere recital of many scripted mini-speeches rather than the back-and-forth exchange of ideas the term "debate" should conjure.
Because of this frustration, many stand ready to find any bit of authenticity, any deviation from the script, and scream it to the crowd, hashtag and all. Romney's Big Bird statement was surely prepared with one meaning in mind, but the digitally connected masses instantly saw another story: that Big Bird had the potential to transcend the script. To simply repeat the top-down narratives provided by the campaigns would be to accept the idea that the campaigns aren't theater. Memes inject some authenticity into a political process seen as problematically overperformed.
That's not to say that there are not serious political positions at stake in the memes: Big Bird LOL did lead to an important discussion about the funding of public television. But it remains significant that memes cast these issues within a sarcastic rebuttal of the performativity of modern political discourse itself. Consuming, liking, and sharing election memes places politics at an ironic distance (as Dave Perry tweeted me), making a political statement while simultaneously mocking the political process. In this way, political memes say more about the people sharing them than they do about any specific campaign issue; the meme is personal is political.
This memeified election thus marks a clash between exemplars of the top-down and the bottom-up: a presidential race filled with official campaign releases and big-media discourse vs. social media. Presidential politics have long seemed too distant, a contest in which any individual voter has little say, especially if one doesn't live in a swing state. That democracy-defining act of user-generated content, voting, is rigidly delimited and bureaucratic, but above all, it feels inconsequential. In contrast, social-media sites like Facebook and Twitter have deeply infiltrated our culture exactly because they provide voice. The Meme Election allows for cathartic release in response to a political system that has made us feel as if we don't matter. More than just a voting booth, we have social media to vote viral, every day, making us at least feel a bit more significant. Posting a funny "women in a binder" photo to our Facebook wall the day after the debate can make us feel like we are participating in something bigger on our own accord.
If memes are about rejecting a passive consumption of the election, they are equally about
asserting individual autonomy in choosing what to share and repositioning a major news
event as a statement about ourselves. The meme serves as an antidote to an electoral process
that increasingly sits out of place in a society that demands more agency, more
personalization, more individual voice. A new flavor of an old treat, memes allow the
individual to put the political process to work in the construction and maintenance of our
own identities.

That memes are not centrally conceived and controlled means that when we share them, we
share them as autonomous actors declaring what we think is creative or funny, not what the
campaigns or traditional mainstream media outlets think matters. Malcolm Harris makes a
similar point in his provocative discussion of how the Web was used strategically as a protest
tool during Occupy Wall Street.
As long as reporting's framed as a rumour, then it can only be false if the rumour fails to resonate. As long as people repeat it, the rumour becomes a self-fulfilling story … The rumour offered something a band-confirmed appearance wouldn't have: an event, something that might or might not happen.
What is essential here is that what goes viral isn't what is most accurate but rather the sort of information individuals want to be a part of, that demonstrates we are "in the know" and offers us the best opportunities to add our own two cents along the way in comments and likes. Look: I know about the Binders Full of Women Tumblr! I found the funniest Big Bird captioned photo! I have just the best GIF of Biden laughing you'd ever want to see!
***
The death of a meme is as interesting as its life. The logic of meme virality is rife with internal contradictions. The conditions I've outlined for the success of the election-season meme, that it is seen as emerging spontaneously, authentically, from the bottom up, allowing the individual to declare their identity, to participate in a distant system, and to ironically mock the performativity of the political process, are also why they burn out as quickly as they burned bright. Like a forest fire, memes use up the fuel that allows them to proliferate.
These graphs of the trending popularity of various memes from the 2012 election show, as
you might expect, a steep rise and sudden decline. As election memes become popular,
emerging from the bottom up, the campaigns join the party as quickly as they can, yet this
accelerates the memes death, exhausting its spontaneous, authentic energy. The meme
moves inside the mainstream narrative, traditional media, the campaigns themselves. At this moment, sharing those memes no longer demonstrates that you are "in the know." Instead, sharing them begins to demonstrate that you are late to the alternative narrative and are a blind trend follower. Hence the decline in meme popularity is precipitous.
As PJ Rey noted, had Obama zinged Romney with Big Bird at the end of the first debate, that moment might have been the most discussed, potentially changing the whole narrative about his performance that night. But by the time Obama referenced Big Bird in the second debate, 13 days after the meme went viral, Big Bird was, in the words of the Portlandia sketch, "over." Social media was poised to flare at any mention of the 47% in the first debate, yet Obama ignored the dry kindling and waited two weeks, when much of the phrase's potential viral energy was exhausted.
These lessons from the 2012 election allow us to reflect on the life cycle of the meme and the ecology of attention in general. Going viral can mean short-term attention at the expense of long-term attention. The term "virus" is problematic because it references only proliferation, whereas a better epidemiology of the meme would also account for how the rapid explosion of attention might preclude durability. The meme's very success ends up making it part of the script, and no longer its alternative. We stop retweeting and reblogging a meme when its ability to express a unique, authentic identity diminishes into the mere performance of mob conformity.
I wonder if viral success is necessarily such a good thing. Even when attention is desired, can too much be harmful? Might viral success actually portend long-term failure? Viral attention has an ecology; it's something that can be exhausted. I'm reminded of Jenna Wortham's story on how massive attention for crowd-funded projects can have the opposite effect of what is intended. As one entrepreneur said, "Going viral was crippling." Social movements that trade heavily in the meme economy seem to have faced a similar fate.
Perhaps Obama's viral success in 2008 is partly why he cannot generate the same energy in 2012. And the Occupy movement was very successful in getting viral attention: the phrase "99%," the Casually Pepper Spraying Cop, and even the image of the tent as a symbol of what Occupy stood for. However, Occupy seems to have followed the logic of the meme outlined here: burn bright and fast. Of course, there were many reasons why Occupy saw the success it did as well as why it does not garner as much media attention anymore. Exactly opposed to predictions that Occupy could endure on the sustenance of memes, it seems the opposite might be true. Live by the meme and die by the meme.
In any case, is this the new normal for massive cultural events? Traditional media narratives
are still overwhelmingly dominant, but the cacophony of voices from the bottom do
occasionally congeal into something of a competing narrative, one that is about participation,
authenticity, and, of course, about saying something of ourselves as much as it is about the
election. Alternatively, we know that meme-time runs fast; as important as the meme's initial proliferation is the declaring of a meme as "over," clearing the way for the next sensation, Schumpeter-style. Now that memes themselves have achieved meme status this election cycle, will that prompt the next step, an inevitable outcry against the meme itself as "over"?

The Disconnectionists
November 13, 2013

Unplugging from the Internet isn't about restoring the self so much as it is about stifling the desire for autonomy that technology can inspire

Once upon a pre-digital era, there existed a golden age of personal authenticity, a time before
social-media profiles when we were more true to ourselves, when the sense of who we are
was held firmly together by geographic space, physical reality, the visceral actuality of flesh.
Without Klout-like metrics quantifying our worth, identity did not have to be oriented toward
seeming successful or scheming for attention.
According to this popular fairytale, the Internet arrived and real conversation, interaction, identity slowly came to be displaced by the allure of the virtual, the simulated second life that uproots and disembodies the authentic self in favor of digital status-posturing, empty interaction, and addictive connection. This is supposedly the world we live in now, as a recent spate of popular books, essays, wellness guides, and viral content suggests. Yet they have hope: By casting off the virtual and re-embracing the tangible through disconnecting and undertaking a purifying digital detox, one can reconnect with the real, the meaningful, one's true self that rejects social media's seductive velvet cage.
That retelling may be a bit hyperbolic, but the cultural preoccupation is inescapable. How and when one looks at a glowing screen has generated its own pervasive popular discourse, with buzzwords like "digital detox," "disconnection," and "unplugging" to address profound concerns over who is still human, who is having true experiences, what is even real at all. A few examples: In 2013, Paul Miller of tech-news website The Verge and Baratunde Thurston, a Fast Company columnist, undertook highly publicized breaks from the Web that they described in intimate detail (and ultimately posted on the Web). Videos like "I Forgot My Phone" that depict smartphone users as mindless zombies missing out on reality have gone viral, and countless editorial writers feel compelled to moralize broadly about the minutiae of when one checks their phone. But what they are saying may matter less than the fact that they feel required to say it. As Diane Lewis states in an essay for Flow, an online journal about new media,


The question of who adjudicates the distinction between fantasy and reality, and how, is
perhaps at the crux of moral panics over immoderate media consumption.
It is worth asking why these self-appointed judges have emerged, why this moral
preoccupation with immoderate digital connection is so popular, and how this mode of
connection came to demand such assessment and confession, at such great length and detail.
This concern-and-confess genre frames digital connection as something personally debasing, socially unnatural despite the rapidity with which it has been adopted. It's depicted as a dangerous desire, an unhealthy pleasure, an addictive toxin to be regulated and medicated. That we'd be concerned with how best to use (or not use) a phone or a social service or any new technological development is of course to be expected, but the way the concern with digital connection has manifested itself in such profoundly heavy-handed ways suggests in the aggregate something more significant is happening, to make so many of us feel as though our integrity as humans has suddenly been placed at risk.
***
The conflict between the self as social performance and the self as authentic expression of
ones inner truth has roots much deeper than social media. It has been a concern of much
theorizing about modernity and, if you agree with these theories, a mostly unspoken
preoccupation throughout modern culture.
Whether it's Max Weber on rationalization, Walter Benjamin on aura, Jacques Ellul on technique, Jean Baudrillard on simulations, or Zygmunt Bauman and the Frankfurt School on modernity and the Enlightenment, there has been a long tradition of social theory linking the consequences of altering the natural world in the name of convenience, efficiency, comfort, and safety to draining reality of its truth or essence. We are increasingly asked to make various bargains with modernity (to use Anthony Giddens's phrase) when encountering and depending on technologies we can't fully comprehend. The globalization of countless cultural dispositions has replaced the pre-modern experience of cultural order with an anomic, driftless lack of understanding, as described by such classical sociologists as Émile Durkheim and Georg Simmel and in more contemporary accounts by David Riesman (The Lonely Crowd), Robert Putnam (Bowling Alone), and Sherry Turkle (Alone Together).
I drop all these names merely to suggest the depth of modern concern over technology replacing the real with something unnatural, the death of absolute truth, of God. This is especially the case in identity theory, much of which is founded on the tension between seeing the self as having some essential, soul-like essence versus its being a product of social construction and scripted performance. From Martin Heidegger's "they-self," Charles Horton Cooley's "looking glass self," George Herbert Mead's discussion of the "I" and the "me," Erving Goffman's dramaturgical framework of self-presentation on the "front stage,"


Michel Foucault's "arts of existence" to Judith Butler's discussion of identity performativity, theories of the self and identity have long recognized the tension between the real and the pose. While so often attributed to social media, such status-posturing performance, "success theater," is fundamental to the existence of identity.
These theories also share an understanding that people in Western society are generally uncomfortable admitting that who they are might be partly, or perhaps deeply, structured and performed. To be a "poser" is an insult; instead, common wisdom is to "be true to yourself," which assumes there is a truth of your self. Digital-austerity discourse has tapped into this deep, subconscious modern tension and brings to it the false hope that unplugging can bring catharsis.
The disconnectionists see the Internet as having normalized, perhaps even enforced, an
unprecedented repression of the authentic self in favor of calculated avatar performance. If
we could only pull ourselves away from screens and stop trading the real for the simulated,
we would reconnect with our deeper truth. In describing his year away from the Internet,
Paul Miller writes,
Real life, perhaps, was waiting for me on the other side of the web browser … It seemed then, in those first few months, that my hypothesis was right. The internet had held me back from my true self, the better Paul. I had pulled the plug and found the light.

Baratunde Thurston writes,


my first week sans social media was deeply, happily, and personally social […] I bought a new pair of glasses and shared my new face with the real people I spent time with.
Such rhetoric is common. Op-eds, magazine articles, news programs, and everyday discussion frame logging off as reclaiming real social interaction with your real self and other real people. The "R" in "IRL." When the digital is misunderstood as exclusively virtual, then pushing back against the ubiquity of connection feels like a courageous re-embarking into the wilderness of reality. When identity performance can be regarded as a by-product of social media, then we have a new solution to the old problem of authenticity: just quit. Unplug: your humanity is at stake! Click-bait and self-congratulation in one logical flaw.
The degree to which inauthenticity seems a new, technological problem is the degree to which
I can sell you an easy solution. Reducing the complexity of authenticity to something as
simple as ones degree of digital connection affords a solution the self-help industry can sell.
Researcher Laura Portwood-Stacer describes this as "that old neoliberal responsibilization we've seen in so many other areas of ethical consumption, turning social problems into personal ones with market solutions and fancy packaging."
Social media surely changes identity performance. For one, it makes the process more explicit. The fate of having to live onstage, aware of being an object in others' eyes rather than a special snowflake of spontaneous, uncalculated bursts of essential essence, is more obvious than ever, even perhaps for those already highly conscious of such objectification. But that shouldn't blind us to the fact that identity theater is older than Zuckerberg and doesn't end when you log off. The most obvious problem with grasping at authenticity is that you'll never catch it, which makes the social media confessional both inevitable and its own kind of predictable performance.
To his credit, Miller came to recognize by the end of his year away from the Internet that digital abstinence made him no more real than he always had been. Despite his great ascetic effort, he could not reach escape velocity from the Internet. Instead he found an inextricable link between life online and off, between flesh and data, imploding these digital dualisms into a new starting point that recognizes one is never entirely connected or disconnected but deeply both. Calling the digital performed and virtual to shore up the perceived reality of what is offline is one more strategy to renew the reification of old social categories like the self, gender, sexuality, race, and other fictions made concrete. The more we argue that digital connection threatens the self, the more durable the concept of the self becomes.
***
The obsession with authenticity has at its root a desire to delineate "the normal" and enforce
a form of "healthy" founded in supposed truth. As such, it should be no surprise that digital-austerity discourse grows a thin layer of medical pathologization. That is, digital connection
has become an illness. Not only has the American Psychiatric Association looked into making
"Internet-use disorder" a DSM-official condition, but, more influentially, the
disconnectionists have framed unplugging as a health issue, touting the so-called digital
detox. For example, so far in 2013, The Huffington Post has run 25 articles tagged with
"digital detox," including "The Amazing Discovery I Made When My Phone Died," "How a
Weekly Digital Detox Changed My Life," and "Why We're So Hooked on Technology (And How
to Unplug)." A Los Angeles Times article explored whether the presence of digital devices
contaminates the purity of Burning Man. "Digital detox" has even been added to the Oxford
Dictionary Online. Most famous, due to significant press coverage, is Camp Grounded, which
bills itself as a "digital detox tech-free personal wellness retreat." Atlantic senior editor Alexis
Madrigal has called it "a pure distillation of post-modern technoanxiety." On its grounds the
camp bans not just electronic devices but also real names, real ages, and any talk about one's
work. Instead, the camp has laughing contests.


The wellness framework inherently pathologizes digital connection as contamination,
something one must confess, carefully manage, or purify away entirely. Remembering Michel
Foucault's point that diagnosing what is ill is always equally about enforcing what is healthy,
we might ask what new flavor of "normal" is being constructed by designating certain kinds of
digital connection as a sickness. Similar to madness, delinquency, sexuality, or any of the
other areas whose pathologizing toward normalization Foucault traced, digitality (what is
online, and how one should appropriately engage that distinction) has become a
productive concept around which to organize the control and management of new desires
and pleasures. The desire to be heard, seen, informed via digital connection, in all its
pleasurable and distressing, dangerous and exciting ways, comes to be framed as unhealthy,
requiring internal and external policing. Both the real/virtual and toxic/healthy dichotomies
of digital-austerity discourse point toward a new type of organization and regulation of
pleasure, a new imposition of personal techno-responsibility, especially on those who lack
autonomy over how and when to use technology. It's no accident that the focus in the viral "I
Forgot My Phone" video wasn't on the many people distracted by seductive digital
information but on the woman who forgets her phone, who is free to experience life: the
healthy one is the object of control, not the zombies bitten by digitality.
The smartphone is a machine, but it is still deeply part of a network of blood; an embodied,
intimate, fleshy portal that penetrates into ones mind, into endless information, into other
people. These stimulation machines produce a dense nexus of desires that is inherently
threatening. Desire and pleasure always contain some possibility (a possibility that is by no
means automatic or even likely) of disrupting the status quo. So there is always much at stake
in their control, in attempts to funnel this desire away from progressive ends and toward
reinforcing the values that support what already exists. Silicon Valley has made the term
"disruption" a joke, but there is little disagreement that the eruption of digitality does create
new possibilities, for better or worse. Touting the virtue of austerity puts digital desire to
work strictly in maintaining traditional understandings of what is natural, human, real,
healthy, normal. The disconnectionists establish a new set of taboos as a way to garner
distinction at the expense of others, setting their "authentic" resistance against others'
unhealthy and inauthentic being.
This explains the abundance of confessions about social media compulsion that intimately
detail when and how one connects. Desire can only be regulated if it is spoken about. To
neutralize a desire, it must be made into a moral problem we are constantly aware of: Is it
okay to look at a screen here? For how long? How bright can it be? How often can I look? Our
orientation to digital connection needs to become a minor personal obsession. The true
narcissism of social media isn't self-love but instead our collective preoccupation with
regulating these rituals of connectivity. Digital austerity is a police officer downloaded into
our heads, making us always self-aware of our personal relationship to digital desire.


Of course, digital devices shouldn't be excused from the moral order; nothing should or
could be. But too often discussions about technology use are conducted in bad faith,
particularly when the detoxers and disconnectionists and digital-etiquette police seem more
interested in discussing the trivial differences of when and how one looks at the screen
than in the larger moral quandaries of what one is doing with the screen. But the
disconnectionists' selfie-help has little to do with technology and more to do with enforcing
a traditional vision of the natural, healthy, and normal. Disconnect. Take breaks. Unplug all
you want. You'll have different experiences and enjoy them, but you won't be any more
healthy or real.


View From Nowhere


October 9, 2014

On the cultural ideology of Big Data.

"What science becomes in any historical era depends on what we make of it."
Sandra Harding, Whose Science? Whose Knowledge? (1991)
Modernity has long been obsessed with, perhaps even defined by, its epistemic insecurity, its
grasping toward big truths that ultimately disappoint as our world grows only less knowable.
New knowledge and new ways of understanding simultaneously produce new forms of
nonknowledge, new uncertainties and mysteries. The scientific method, based in deduction
and falsifiability, is better at proliferating questions than it is at answering them. For
instance, Einstein's theories about the curvature of space and motion at the quantum level
provide new knowledge and generate new unknowns that previously could not be pondered.
Since every theory destabilizes as much as it solidifies in our view of the world, the collective
frenzy to generate knowledge creates at the same time a mounting sense of futility, a tension
looking for catharsis: a moment in which we could feel, if only for an instant, that
we know something for sure. In contemporary culture, Big Data promises this relief.
As the name suggests, Big Data is about size. Many proponents of Big Data claim that massive
databases can reveal a whole new set of truths because of the unprecedented quantity of
information they contain. But the "big" in Big Data is also used to denote a qualitative
difference: that aggregating a certain amount of information makes data pass over into Big
Data, a "revolution in knowledge," to use a phrase thrown around by startups and mass-market social-science books. Operating beyond normal science's simple accumulation of
more information, Big Data is touted as a different sort of knowledge altogether, an
Enlightenment for social life reckoned at the scale of masses.
As with similarly inferential sciences like evolutionary psychology and pop-neuroscience, Big Data can be used to give any chosen hypothesis a veneer of science and the
unearned authority of numbers. The data is big enough to entertain any story. Big Data has
thus spawned an entire industry (predictive analytics) as well as reams of academic,
corporate, and governmental research; it has also sparked the rise of data journalism like


that of FiveThirtyEight, Vox, and the other multiplying explainer sites. It has shifted the
center of gravity in these fields not merely because of its grand epistemological claims but
also because it's well-financed. Twitter, for example, recently announced that it is putting $10
million into a "social machines" Big Data laboratory.
The rationalist fantasy that enough data can be collected with the right methodology to
provide an objective and disinterested picture of reality is an old and familiar one: positivism.
This is the understanding that the social world can be known and explained from a value-neutral, transcendent "view from nowhere" in particular. The term comes from Positive
Philosophy (1830-1842), by Auguste Comte, who also coined the term sociology in this image.
As Western sociology began to congeal as a discipline (departments, paid jobs, journals,
conferences), Emile Durkheim, another of the field's founders, believed it could function as
a "social physics" capable of outlining "social facts" akin to the measurable facts that could
be recorded about the physical properties of objects. It's an arrogant view, in retrospect:
one that aims for a grand, general theory that can explain social life, a view that became
increasingly rooted as sociology became focused on empirical data collection.
A century later, that unwieldy aspiration has been largely abandoned by sociologists in favor
of reorienting the discipline toward recognizing complexities rather than pursuing universal
explanations for human sociality. But the advent of Big Data has resurrected the fantasy of a
social physics, promising a new data-driven technique for ratifying social facts with sheer
algorithmic processing power.
Positivisms intensity has waxed and waned over time, but it never entirely dies out, because
its rewards are too seductive. The fantasy of a simple truth that can transcend the divisions
that otherwise fragment a society riven by power and competing agendas is too powerful, and
too profitable. To be able to assert convincingly that you have modeled the social world
accurately is to know how to sell anything, from a political position to a product to one's own
authority. Big Data sells itself as a knowledge that equals power. But in fact, it relies on pre-existing power to equate data with knowledge.
***
Not all data science is Big Data. As with any research field, the practitioners of data science
vary widely in ethics, intent, humility, and awareness of the limits of their methodologies. To
critique the cultural deployment of Big Data as it filters into the mainstream is not to argue
that all data research is worthless. (The new Data & Society Research Institute, for instance,
takes a measured approach to research with large data sets.) But the positivist tendencies of
data science, its myths of objectivity and political disinterestedness, loom larger than any
study or any set of researchers, and they threaten to transform data science into an


ideological tool for legitimizing the tech industrys approach to product design and data
collection.
Big Data research cannot be understood outside the powerful nexus of data science and
social-media companies. That nexus is where the commanding view-from-nowhere ideology of Big
Data is most transparent; it's where the algorithms, databases, and venture capital all meet.
It was no accident that Facebook's research branch was behind the now infamous emotional-manipulation
study, which was widely condemned for its lax ethical standards and
intellectual hubris. (One of the authors of the study said Big Data's potential was akin to the
invention of the microscope.)
Equally steeped in the Big Data way of knowing is Dataclysm, a new book-length expansion
of OkCupid president Christian Rudder's earlier blog-posted observations about the
anomalies of his dating service's data set. "We are on the cusp of momentous change in the
study of human communication," Rudder proclaims, echoing the Facebook researchers'
hubris. Dataclysm's subtitle sets the same tone: "Who we are (when we think no one is
watching)." The smirking implication is that when enough data is gathered behind our
backs, we can finally have access to the dirty hidden truth beyond the subjectivity of not only
researchers but their subjects as well. Big Data will expose human sociality and desire in ways
those experiencing it can't.
Because digital data collection on platforms like OkCupid seems to happen almost
automatically (the interfaces passively record all sorts of information about users' behavior),
it appears unbiased by messy a priori theories. The numbers, as Rudder states multiple
times in the book, are right there for you to conclude what you wish. Indeed, because so many
numbers are there, they speak for themselves. With all of OkCupid's data points on love and
sex and beauty, Rudder claims he can lay bare "vanities and vulnerabilities that were perhaps
until now just shades of truth."
For Rudder and the other neo-positivists conducting research from tech-company campuses,
Big Data always stands in the shadow of the bigger data to come. The assumption is that
there is more data today and there will necessarily be even more tomorrow, an expansion
that will bring us ever closer to the inevitable pure data totality: the entirety of our everyday
actions captured in data form, lending themselves to the project of a total causal explanation
for everything. Over and over again, Rudder points out the size, power, and limitless potential
of his data only to impress upon readers how it could be even bigger. This long-held positivist
fantasy (the complete account of the universe that is always just around the corner)
thereby establishes a moral mandate for ever more intrusive data collection.
But what's most fundamental to Rudder's belief in his data's truth-telling capability (and
his justification for ignoring established research-ethics norms) is his view that data sets
built through passive data collection eliminate researcher bias. In Rudder's view, shared by


other neo-positivists who have defended digital experimentation on humans without consent, the
problem with polling and other established methods for large-scale data gathering is that
these have well-known sources of measurement error. As any adequately trained social
scientist would confirm, how you word a question and who poses it can corrupt what a
questionnaire captures. Rudder believes Big Data can get much closer to the truth by
removing the researcher from the data-collection process altogether. For instance, with data
scraped from Google searches, there is no researcher prodding subjects to reveal what they
wanted to know. "There is no ask. You just tell," Rudder writes.
This is why Rudder believes he doesn't need to ask for permission before experimenting on
his site's users to, say, artificially manipulate users' match percentages or systematically
remove some users' photos from interactions. To obtain the most uncontaminated data, users
cannot be asked for consent. They cannot know they are in a lab.
While the field of survey research has oriented itself almost completely to understanding and
articulating the limits of its methods, Rudder copes with Big Data's potentially even more
egregious opportunities for systematic measurement error by ignoring them. "Sometimes,"
he argues, "it takes a blind algorithm to really see the data." Significantly downplayed in this
view is how the way OkCupid captures its data points is governed by the political choices and
specific cultural understandings of the site's programmers. Big Data positivism myopically
regards the data passively collected by computers as objective. But computers don't
remember anything on their own.
This naive perspective on how computers work echoes the early days of photography, when
that new technology was sometimes represented as a vision that could go beyond vision,
revealing truths previously impossible to capture. The most famous example is Eadweard
Muybridge's series of photographs that showed how a horse really galloped. But at the same
time, as Shawn Michelle Smith explains in At the Edge of Sight: Photography and the
Unseen, early photography often encoded specific and possibly unacknowledged
understandings of race, gender, and sexuality as real. This vision beyond vision was in fact
saturated with the cultural filter that photography was said to overcome.
Social-media platforms are similarly saturated. The choices that go into designing these
sites (what data they collect, how it is captured, how the variables are arranged and stored,
how the data is queried and why) are all full of messy politics, interests, and insecurities.
Social-science researchers are trained to recognize this from the very beginning of their
academic training and learn techniques to try to mitigate, or at least articulate, the resulting
bias. Meanwhile, Rudder gives every first-year methods instructor heart palpitations by
claiming that there are times "when a data set is so robust that if you set up your analysis
right, you don't need to ask it questions; it just tells you everything anyways."


Evelyn Fox Keller, in Reflections on Gender in Science, describes how positivism is first
enacted by distancing the researcher from the data. Big Data, as Rudder eagerly asserts,
embraces this separation. This leads to perhaps the most dangerous consequence of Big Data
ideology: that researchers whose work touches on the impact of race, gender, and sexuality
in culture refuse to recognize how they invest their own unstated and perhaps unconscious
theories, their specific social standpoint, into their entire research process. This replicates
their existing bias and simultaneously hides that bias to the degree their findings are
regarded as objectively truthful.
By moving the truth-telling ability from the researcher to data that supposedly speaks for
itself, Big Data implicitly encourages researchers to ignore conceptual frameworks like
intersectionality or debates about how social categories can be queered rather than
reinforced. And there is no reason to suppose that those with access to Big Data often tech
companies and researchers affiliated with them are immune to bias. They, like anyone,
have specific orientations toward the social world, what sort of data could describe it, and
how that data should be used. As danah boyd and Kate Crawford point out in Critical
Questions for Big Data,
regardless of the size of a data, it is subject to limitation and bias. Without those
biases and limitations being understood and outlined, misinterpretation is the
result.
This kind of short-sightedness allows Rudder to write things like "The ideal source for
analyzing gender difference is instead one where a user's gender is nominally irrelevant,
where it doesn't matter if the person is a man or a woman. I chose Twitter to be that neutral
ground" without pausing to consider how gender deeply informs the use of Twitter.
Throughout Dataclysm, despite his posture of being separate from the data he works with,
Rudder's politics continually intervene, not merely in his explanations, which often
refer to brain science and evolutionary psychology, but also in how he chooses to measure
variables and put them into his analyses.
In a society deeply stratified on the lines of race, class, sex, and many other vectors of
domination, how can knowledge ever be said to be disinterested and objective? While
former Wired editor-in-chief Chris Anderson was describing the supposed "end of theory"
thanks to Big Data in a widely heralded article, Kate Crawford, Kate Miltner, and Mary Gray
were correcting that view, pointing out simply that Big Data is theory. It's merely one that
operates by failing to understand itself as one.
***
Positivism has been with us a long time, as have the critiques of it. Some research
methodologists have addressed and incorporated these critiques: Sandra Harding's Whose
Science? Whose Knowledge? argues for a new, "strong objectivity" that treats including a
researcher's social standpoint as a feature instead of a flaw, permitting a diversity of
perspectives instead of one false view from nowhere. Patricia Hill Collins, in Black Feminist
Thought, argues that partiality, and not universality, is the condition of being heard.
Big Data takes a different approach. Rather than accept partiality, its apologists try a new
trick to salvage the myth of universal objectivity. To evade questions of standpoint, they
lionize the data at the expense of the researcher. Big Data's proponents downplay both the
role of the measurer in measurement and the researcher's expertise (Rudder makes
constant note of his mediocre statistical skills) to subtly shift the source of authority. The
ability to tell the truth becomes no longer a matter of analytical approach but one of
sheer access to data.
The positivist fiction has always relied on unequal access: science could sell itself as morally
and politically disinterested for so long because the requisite skills were so unevenly
distributed. As scientific practice is increasingly conducted from different cultural
standpoints, the inherited political biases of previous science become more obvious. As
access to education and advanced research methodologies becomes more widespread, it
can no longer support the positivist myth.
The cultural ideology of Big Data attempts to reverse this by shifting authority away from
(slightly more) democratized research expertise toward unequal access to proprietary, gated
data. (Molly Osberg points out in her review of Dataclysm for the Verge how Rudder explains
in the notes how he gathered most of his information through personal interactions with
other tech-company executives.) When data is said to be so good that it tells its own truths
and researchers downplay their own methodological skills, that should be understood as an
effort to make access to that data more valuable, more rarefied. And the same people
positioning this data as so valuable and authoritative are typically the ones who own it and
routinely sell access to it.
Data science need not be an elitist practice. We should pursue a popular approach to large
data sets that better understands and comes to terms with Big Data's own smallness,
emphasizing how much of the intricacy of fluid social life cannot be held still in a database.
We shouldnt let the positivist veneer on data science cause us to overlook its valuable
research potential.
But for Big Data to really enhance what we know about the social world, researchers need to
fight against the very cultural ideology that, in the short term, overfunds and overvalues it.
The view from nowhere that informs books like Dataclysm and much of the corporate and
commercialized data science must be unmasked as a view from a very specific and familiar
somewhere.


Fear of Screens
January 25, 2016

"Why would anyone want to believe that people who are communicating with phones have forgotten what friendship is?"
The Sender, 1982

THE sudden appearance of mobile and social digital technologies has brought new
pleasures; we hail them as a new kind of magic that can make us both more, and more than,
human. But the prominence of these technologies can also feel toxic, threatening, and
inhuman, sparking fears about their effects on users who seem increasingly enticed by and
dependent on their expediency. This has spawned a genre of concerned critique that has
surfaced everywhere from weekend New York Times op-eds to such academic journals
as Cyberpsychology. Rather than seek to describe specific changes or uneven distributions in
how we relate and communicate, this genre instead takes a medicalized view of digital
connectivity and seeks to diagnose the threats it poses to our very humanity. These critiques
begin with a received definition of what makes us human: having authentic selves and
real emotions, moral sensitivity and deep social connection. Our capacity to experience
these "real" truths and depths of feeling is posited as inborn and inherently fragile; at any
moment insidious technologies can disturb the delicate balance and strip us of our humanity,
throwing organic order into cyber chaos.
Science and technology professor Sherry Turkle has emerged as the most high-profile voice
among these disconnectionists. While her most recent book, Reclaiming Conversation,
praised unironically as "self-help" by fellow disconnectionist Jonathan Franzen in a New
York Times book review, makes some concessions to the pleasure and
usefulness of using smartphones, it is also replete with rhetoric that understands screens as
necessarily toxic and represents humans who use this technology as broken. She describes
(mostly young) adults on their phones as impaired in nearly every facet of behavior that is
thought to make a human a human. People using phones purportedly have a limited capacity
for solitude, sadness, empathy, and deep relationships. Parents have it particularly rough:
"We catch ourselves not looking into the eyes of our children or taking the time to talk with
them just to have a few more hits of our email," she writes. Parents who look at their phones
too much create kids who are awkward and withdrawn. And children using phones, of
course, are the most broken of all. According to Turkle, they can't sustain attention or engage
in deep reading of books; they can't express themselves, find friends, or form attachments.
They can't exercise executive function, listen, make eye contact, or respond to body language,
and they are generally uninterested in each other. They talk at each other in short bursts of
minutia, they hurt each other and don't know it, and they've moved from an emotional to an
instrumental register.
Sometimes implicitly and sometimes overtly, this theme runs throughout the entirety
of Reclaiming Conversation: Look at all these damaged subhumans who have fallen for
technology's addictive and noxious appeal! the book insists. Look at the victims of the digital
toxin who need curing! Turkle asks imploringly, Have we forgotten what conversation is? What
friendship is? But more important than these apocalyptic questions, and more interesting than
Turkle's or any other disconnectionist's answers to them, are the conceptual leaps needed to
ask them in the first place. How do you look at everyday people
using digital devices to communicate with one another and suppose that they may not even
know what conversation and friendship are?
Turkle's questions are very different from asking, say, how digital connection
has changed conversation or friendship, and which of those changes are better or worse for
whom. Instead she raises the stakes of digital connection directly to the threatened end of
your human spirit. Why this presumption of doom?
These questions, and the concern behind them, are prevalent because they seem to have an
almost intuitive appeal. Who hasn't wondered about their dependency on digital
convenience, on the constant contact and unprecedented visibility on social media? But how
intuitive, really, is her claim that we are all broken? That young people, especially, have been
made digital subhumans?
Turkle's claims may feel commonsensical in part because they are self-flattering: They let us
suspect that we are the last humans standing in a world of dehumanized phone-toting drones.
That everyone is becoming mindless robot assholes makes for a good, immediately accessible
routine for a certain kind of bemused comedy: Throughout the book, Turkle cites such
comedians as Louis CK, Stephen Colbert, Aziz Ansari, and Jerry Seinfeld. These routines are
as comforting as they are funny, because they point toward a simple solution. Once we recast
our insecurities as the phone's fault, all we need to do to fix them is be more mindful of our
digital intake, as Turkle and many others have recently begun to recommend.
To make its case, Reclaiming Conversation appeals to science and empirical observation, but
the evidence it offers is convincing only to the extent that you share the presupposition that
screens are inhuman and antisocial. Turkle cherry-picks from social science literature on
social media and mobile devices, while suppressing the general thrust of that research: that
the relationship between digital connection and sociality is multivalent and complicated.
How could it not be? Sociologist Jenny Davis has already written about the methodological
shortcomings of Reclaiming Conversation, arguing that the findings from a study on a tech-disconnection
camp that Turkle relies on throughout show that screens themselves have no
effect on empathy, exactly the opposite of what its authors (and Turkle) report. The camp
study, relying on shoddy methods and inaccurate conclusions, exemplifies how cultural fears
and emotional appeals can facilitate the spread of unsubstantiated claims, cloaked in
"science."
This cycle of claim and counterclaim is not new. For as long as there have been social media
and mobile devices, there have also been articles and books aimed at lay audiences arguing
that we're trading real life for something digital. And then come the replies from researchers
who have found that the relationship is much more complicated: that people who text more
often also meet face to face more; that the contemporary technologies of social isolation were,
and are, the television and the automobile, not smartphones; that there's been a recent
reversal of the long post-World War II trend toward social isolation.
To be sure, each of these findings comes with a long list of caveats, with the correlations
holding true only under certain conditions and in certain cases. But regardless, Turkle's case
does not hinge on empirical proof of the damage suffered by us, the dehumanized; it hinges
on presuppositions that predetermine her conclusions. Turkle sees a world of connectivity as
devoid of connection because she misunderstands digitality itself.
***
DIGITAL connection is deeply interwoven through social life; it is made of us and is thus as
infinitely complex as we are. Anything social is inherently shaded with both good and bad. It
may be good or bad for some and not others, at some times and not others, in some places
and not others. Reclaiming Conversation, like too much other writing about new
technologies, is invested in the false question of whether the Internet is centrally good or bad,
as if technology were a separate thing that could be subtracted from social life rather than
being part and parcel of it.
This oversimplification pre-empts her critique, so that she asks not what technology
(including language itself) affords or discourages, and how and under what circumstances,
but "what do we forget when we talk through machines?" This slanted question elides the
issue of how communication is always mediated by power, space, bodies, language,
architecture, and other factors, as well as by the particular medium through which it occurs.
To prescribe one form of media, to privilege speaking over writing over texting, would
require deep description and analysis of the context: who is speaking, to what ends, and why.
Turkle too often assumes screen-mediated communication comes in only one flavor, which
cannot grasp the complexities of our always augmented sociality, to say nothing of how
screens are differently used by those with different abilities.


Throughout Reclaiming Conversation, Turkle makes the unqualified and unsupported
assumption that "real" conversation, connection, and personhood must happen without the
screen. She refuses to understand digital connection as itself human and part of this world,
seeing it instead as an appendage of the separate, virtual world of machines and robots. This
frames digitality as inherently antihuman, pitting society and technology as opposites.
Her digital dualism is plain when she describes how we have used technology to create a
"second nature, an artificial nature," or when she discusses a "world of screens," or when she
laments the pull of the online world away from the "real world" of humans. We turn to our
phones instead of each other, she says, as though our phones do not contain each other. She
worries that online, we are tempted to present ourselves "as we would like to be," as if such
virtuality and self-presentation hasn't always been basic to the traditional, real world of
human bodies.
Digital dualism allows Turkle to write as though she is championing humanity, conversation,
and empathy when ultimately she is merely privileging geography. Again, this can feel
intuitive, because this fetishization of contiguity has a long tradition and is echoed in our
everyday language: Each time we say "IRL," "face-to-face," or "in person" to mean connection
without screens, we frame what is real or who is a person in terms of their geographic
proximity rather than other aspects of closeness: variables like attention, empathy, affect,
and erotics, all of which can be experienced at a distance. We should not conceptually preclude
or discount all the ways intimacy, passion, love, joy, pleasure, closeness, pain, suffering, evil,
and all the visceral actualities of existence pass through the screen. "Face to face" should
mean more than breathing the same air.
Of course, geographic proximity is important to whether we call something "close" or "in
person" or "face to face." At times it is perhaps the most important variable. But it certainly
should not be the only one. To make co-presence solely dependent on proximity in space
devalues the many other moments of closeness that happen to be mediated by a screen.
Physicality can be digitally mediated: What happens
through the screen happens through bodies and material infrastructures. The sext or the
intimate video chat is physical, of and affecting bodies. Video chat brings faces to other
faces. You are aware of, learning from, assessing, stimulated by, and speaking through bodies
and the spaces around them, as details of those spaces filter in and are noticed or
foregrounded. This screen-mediated communication is face-to-face, in person, physical, and
close in so many important ways, and distant in only one.
Likewise, being geographically close does not necessarily assure the other qualities of
proximity. You can be in the same room with someone, but that doesnt mean you are actively
caring for or about them: Maybe you are not listening; perhaps you are there out of
obligation. You can be distant in all the ways you were close in the video conversation, not
in the same place at all.
Turkle claims she is championing real human connection by downplaying the ways people
are close at a distance and distant when close. What she is implicitly claiming is that
geography is the only form of proximity that counts, and she finds support for this idea in its
supposed profitability: "The more the business world appreciates the importance of
composure, attention, and face-to-face communication to its own financial interests," she
writes, "the more distance it will take from technologies that disrupt them." As she does in
the book, Turkle is willing to endorse the scripted, commercially motivated conversations
Starbucks urges employees to have with customers simply because they occur
between people who are geographically close, not because they produce or are the product
of empathy. It should seem altogether perverse that these hollow Starbucks interactions get
called "face-to-face," "real," and "in person," whereas the intimate video chat is called "distant,"
"virtual," and "inhuman."
***
TOO much of Reclaiming Conversation's argument about a generation broken by digitality
rests on its presumption that geography is the only way humans can be close. Lurking behind
this is Turkle's effort to assert the moral supremacy of what she experienced as traditional
(like printed books, which, when they were newly popular, were seen to be destabilizing in
many of the same ways that digital technologies are today). This packages the status quo as
a solution to those problems with digitality that we all recognize and occasionally experience.
Some people really are annoying with their phones; sometimes smartphones really do feel
compulsive.
To address these problems, Reclaiming Conversation doesn't advocate fully getting rid of
phones or social media. Though Turkle speaks highly of "digital detoxes," she regards them
as mere stopgap measures. What she proposes instead sounds more moderate: "It is not
about giving up our phones but about using them with greater intention," Turkle says.
Throughout the book, she asks for this intention, that we be more "deliberate" and
"mindful," that we find "balance" and "moderation" in our digital connection so that we can
newly enact a more "self-aware" relationship with our devices.
While this may seem like sensible middle ground, it is asking that we make our relationship
to digital connection hyper-present in our lives, a constant preoccupation if not an
obsession. It makes connection and disconnection a ritual practice to be tracked and
confessed. The constant mindfulness and self-awareness she prescribes as the "healthy" or
"normal" way to use your phone is also a form of internalized social control, leveraging the
fiction of a stable authentic self to enforce boundaries around one's behavior.
When Turkle says she is "a partisan for conversation," she means the kind of talk that is
presumed to help people discover what they have hidden from themselves so they can find
their "inner compass." She writes that a "virtuous circle" links conversation to the capacity
for self-reflection and that in solitude "we learn to concentrate and imagine, to listen to
ourselves." She calls such self-consciousness "a path toward realism."
The prescription of this type of self-awareness assumes that there is some stable internal
entity that is who you really are; it frames self-discovery in terms of who you are not and
what you won't do. It regulates and prohibits behavior in the name of this "true" self. Turkle
writes that we are torn between "our desire to express an authentic self" and the pressure to
show our best selves online. Instead of promoting the value of authenticity, Turkle
complains, social media encourages performance, which she construes as a form of lying:
"In theory, you know the difference between your self and your Facebook self. But lines blur
and it can be hard to keep them straight. It's like telling very small lies over time. You forget
the truth because it is so close to the lies."
But performing the self is not lying; it is the essence of the self, as the history of identity theory,
from George Herbert Mead to Erving Goffman to Judith Butler, can attest. The idea that
self-performance is somehow a new product of being online is as false as the idea that one can
have any sort of self that is not in some way performed. Such a view undermines the pleasures
and potentials of identity fluidity and performance and instead demands a more intense
relationship to a static self that is "true" and "normal."
There is another way we can handle our phones, one that doesn't call for a misguided
mindfulness that misperceives technology as inherently toxic: Don't be rude to others, with
or without your phone. Be mindful of people rather than screens. Focus less on your
relationship to your device and more on your relationship to human beings. This includes not
feeling entitled to someone's attention just because they are geographically near, and it
especially includes not putting forward your nonuse of a phone as proof of your superiority
and others' subhumanity. Reading Reclaiming Conversation, I often felt that if Turkle were
more mindful of others, she wouldn't be so quick to see them as broken.
Rather than constant self-regulation through "mindfulness" and "balance," we might assess
our relationship to digital connection in terms of our autonomy. Are we really addicted to
phones, or do contemporary work demands make it impossible to disconnect? In what ways
is our control over how connected we are a privilege, especially when considering those for
whom digital connection is prohibitively expensive or who cannot procure reliable internet
access?
From this point of view, both connection and disconnection can be appreciated for their own
sakes. When connection is not treated as a controlled substance, it can transcend its relation
to productivity. Time away need no longer be seen as a kind of necessary recharging, as if
humans were batteries. Whether we are pleasurably zoned out in front of a screen or a
campfire, we might waste time for wastefulness' sake, to burn it, to put it to no future
productive use.
The false sense that your health and humanity are at stake in when and how you look at your
screen suggests that we are already too mindful about how we are connected. We have too
many self-conscious rituals of disconnection. If being mindful means being preoccupied with
a phony sense of "balance" and "moderation," anchoring oneself to a fictitious "real" identity,
and judging constantly who is "normal" and who is "broken," then we may need something
more mindless.