Government by social network? US president Barack Obama with Facebook founder Mark Zuckerberg. Photograph:
Mandel Ngan/AFP/Getty Images
By Evgeny Morozov, Saturday 19 July 2014
In this context, Google's latest plan to push its Android operating system on
to smart watches, smart cars, smart thermostats and, one suspects, smart
everything, looks rather ominous. In the near future, Google will be the
middleman standing between you and your fridge, you and your car, you
and your rubbish bin, allowing the National Security Agency to satisfy its
data addiction in bulk and via a single window.
This "smartification" of everyday life follows a familiar pattern: there's
primary data a list of what's in your smart fridge and your bin and
metadata a log of how often you open either of these things or when they
communicate with one another. Both produce interesting insights: cue
smart mattresses one recent model promises to track respiration and
heart rates and how much you move during the night and smart
utensils that provide nutritional advice.
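The distinction is easy to make concrete. In the hypothetical records below
(the device and field names are invented for illustration, not drawn from any
real product), the first record is primary data about what the fridge
contains, while the second is metadata about how and when it was used:

    from datetime import datetime, timezone

    # Hypothetical smart-fridge records; all field names are invented
    # for illustration.
    primary_data = {
        "device": "fridge",
        "contents": ["milk", "eggs", "leftover curry"],  # what is inside
    }
    metadata = {
        "device": "fridge",
        "event": "door_opened",  # how and when the device was used
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Both are revealing: the contents say what you eat; the access
    # patterns say when, how often, and (aggregated) who is home.
    print(primary_data)
    print(metadata)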
In addition to making our lives more efficient, this smart world also presents
us with an exciting political choice. If so much of our everyday behavior is
already captured, analyzed and nudged, why stick with unempirical
approaches to regulation? Why rely on laws when one has sensors and
feedback mechanisms? If policy interventions are to be, to use the
buzzwords of the day, "evidence-based" and "results-oriented,"
technology is here to help.
This new type of governance has a name: algorithmic regulation. Inasmuch
as Silicon Valley has a political programme, this is it. Tim O'Reilly, an
influential technology publisher, venture capitalist and ideas man (he is to
blame for popularizing the term "web 2.0"), has been its most enthusiastic
promoter. In a recent essay that lays out his reasoning, O'Reilly makes an
intriguing case for the virtues of algorithmic regulation, a case that
deserves close scrutiny both for what it promises policymakers and for the
simplistic assumptions it makes about politics, democracy and power.
To see algorithmic regulation at work, look no further than the spam filter in
your email. Instead of confining itself to a narrow definition of spam, the
email filter has its users teach it. Even Google can't write rules to cover all
the ingenious innovations of professional spammers. What it can do,
though, is teach the system what makes a good rule and spot when it's
time to find another rule for finding a good rule, and so on. An algorithm
can do this, but it's the constant real-time feedback from its users that
allows the system to counter threats never envisioned by its designers. And
it's not just spam: your bank uses similar methods to spot credit-card fraud.
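The mechanism described here, a classifier retrained continually by user
feedback rather than governed by hand-written rules, can be sketched in a
few lines. The toy naive Bayes filter below is a minimal illustration of
that feedback loop under simplifying assumptions (whitespace tokenization,
add-one smoothing); it is not Google's actual system, and every name in it
is hypothetical:

    from collections import defaultdict
    import math

    class FeedbackSpamFilter:
        """Toy naive Bayes filter retrained by user feedback."""

        def __init__(self):
            self.word_counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
            self.label_counts = {"spam": 0, "ham": 0}

        def _tokens(self, message):
            # Crude tokenization; a real filter would do far more.
            return message.lower().split()

        def learn(self, message, label):
            # Each user action ("mark as spam" / "not spam") becomes a
            # training example, so the filter adapts to tricks its
            # designers never anticipated.
            self.label_counts[label] += 1
            for word in self._tokens(message):
                self.word_counts[label][word] += 1

        def is_spam(self, message):
            # Log-probability under each class, with add-one smoothing
            # and a crude vocabulary-size estimate.
            total = sum(self.label_counts.values()) or 1
            vocab = (len(self.word_counts["spam"])
                     + len(self.word_counts["ham"])) or 1
            scores = {}
            for label in ("spam", "ham"):
                score = math.log((self.label_counts[label] + 1) / (total + 2))
                n_words = sum(self.word_counts[label].values())
                for word in self._tokens(message):
                    count = self.word_counts[label].get(word, 0)
                    score += math.log((count + 1) / (n_words + vocab))
                scores[label] = score
            return scores["spam"] > scores["ham"]

    # Usage: every correction feeds straight back into the model.
    f = FeedbackSpamFilter()
    f.learn("win a free prize now", "spam")
    f.learn("lunch at noon tomorrow?", "ham")
    print(f.is_spam("free prize waiting"))  # True once feedback accrues

The point of the sketch is the learn() call: the rules are never fixed in
advance but are re-derived from whatever users report, which is precisely
what makes the approach adaptive.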
Such systems, however, are toothless against the real culprits of tax
evasion: the super-rich families who profit from various offshoring
schemes or simply write outrageous tax exemptions into the law.
Algorithmic regulation is perfect for enforcing the austerity agenda while
leaving those responsible for the fiscal crisis off the hook. To understand
whether such systems are working as expected, we need to modify
O'Reilly's question: for whom are they working? If it's just the tax-evading
plutocrats, the global financial institutions interested in balanced national
budgets and the companies developing income-tracking software, then it's
hardly a democratic success.
With his belief that algorithmic regulation is based on "a deep
understanding of the desired outcome", O'Reilly cunningly disconnects the
means of doing politics from its ends. But the how of politics is as important
as the what of politics; in fact, the former often shapes the latter.
Everybody agrees that education, health, and security are all "desired
outcomes", but how do we achieve them? In the past, when we faced the
stark political choice of delivering them through the market or the state, the
lines of the ideological debate were clear. Today, when the presumed
choice is between the digital and the analog, or between dynamic
feedback and static law, that ideological clarity is gone, as if the very
choice of how to achieve those "desired outcomes" were apolitical and didn't
force us to choose between different and often incompatible visions of
communal living.
By assuming that the utopian world of infinite feedback loops is so efficient
that it transcends politics, the proponents of algorithmic regulation fall into
the same trap as the technocrats of the past. Yes, these systems are
terrifyingly efficient, in the same way that Singapore is terrifyingly efficient
(O'Reilly, unsurprisingly, praises Singapore for its embrace of algorithmic
regulation). And while Singapore's leaders might believe that they, too,
have transcended politics, it doesn't mean that their regime cannot be
assessed outside the linguistic swamp of efficiency and innovation, by
using political, not economic, benchmarks.
As Silicon Valley keeps corrupting our language with its endless
glorification of disruption and efficiency (concepts at odds with the
vocabulary of democracy), our ability to question the "how" of politics is
weakened. Silicon Valley's default answer to the how of politics is what I
call solutionism: problems are to be dealt with via apps, sensors, and
feedback loops, all provided by startups. Earlier this year Google's Eric
Schmidt even promised that startups would provide the solution to the
problem of economic inequality: the latter, it seems, can also be
"disrupted". And where the innovators and the disruptors lead, the
bureaucrats follow.
The intelligence services embraced solutionism before other government
agencies. Thus, they reduced the topic of terrorism from a subject that had
some connection to history and foreign policy to an informational problem
of identifying emerging terrorist threats via constant surveillance. They
urged citizens to accept that instability is part of the game, that its root
causes are neither traceable nor reparable, that the threat can only be preempted by out-innovating and out-surveilling the enemy with better
communications.
Speaking in Athens last November, the Italian philosopher Giorgio
Agamben discussed an epochal transformation in the idea of government,
"whereby the traditional hierarchical relation between causes and effects is
inverted, so that, instead of governing the causes (a difficult and
expensive undertaking), governments simply try to govern the effects".
For Agamben, this shift is emblematic of modernity. It also explains why the
liberalization of the economy can co-exist with the growing proliferation of
control (by means of soap dispensers and remotely managed cars) into
everyday life. "If government aims for the effects and not the causes, it will
be obliged to extend and multiply control. Causes demand to be known,
while effects can only be checked and controlled." Algorithmic regulation is
an enactment of this political programme in technological form.
The true politics of algorithmic regulation become visible once its logic is
applied to the social nets of the welfare state. There are no calls to
dismantle them, but citizens are nonetheless encouraged to take
responsibility for their own health. Consider how Fred Wilson, an influential
US venture capitalist, frames the subject. "Health is the opposite of
healthcare," he said at a conference in Paris last December. "It's what
keeps you out of the healthcare system in the first place." Thus, we are
invited to start using self-tracking apps and data-sharing platforms and
monitor our vital indicators, symptoms and discrepancies on our own.
This goes nicely with recent policy proposals to save troubled public
services by encouraging healthier lifestyles. Consider a 2013 report by
Westminster council and the Local Government Information Unit, a
thinktank, calling for the linking of housing and council benefits to claimants'
visits to the gym with the help of smartcards. The smartcards might not be needed:
many smartphones are already tracking how many steps we take every day
(Google Now, the company's virtual assistant, keeps score of such data
automatically and periodically presents it to users, nudging them to walk
more).
The numerous possibilities that tracking devices offer to health and
insurance industries are not lost on O'Reilly. "You know the way that
advertising turned out to be the native business model for the internet?" he
wondered at a recent conference. "I think that insurance is going to be the
native business model for the internet of things." Things do seem to be
heading that way: in June, Microsoft struck a deal with American Family
Insurance, the eighth-largest home insurer in the US, in which both
companies will fund startups that want to put sensors into smart homes and
smart cars for the purposes of "proactive protection".
An insurance company would gladly subsidize the costs of installing yet
another sensor in your house as long as it can automatically alert the fire
department or make your front porch lights flash if your smoke detector
goes off. For now, accepting such tracking systems is framed as an extra
benefit that can save us some money. But when do we reach a point where
not using them is seen as a deviation or, worse, an act of concealment
that ought to be punished with higher premiums?
Or consider a May 2014 report from 2020health, another thinktank,
proposing to extend tax rebates to Britons who give up smoking, stay slim
or drink less. "We propose 'payment by results', a financial reward for
people who become active partners in their health, whereby if you, for
example, keep your blood sugar levels down, quit smoking, keep weight
off, [or] take on more self-care, there will be a tax rebate or an end-of-year
bonus," they state. Smart gadgets are the natural allies of such schemes:
they document the results and can even help achieve them by constantly
nagging us to do what's expected.
The unstated assumption of most such reports is that the unhealthy are not
only a burden to society but that they deserve to be punished (fiscally for
now) for failing to be responsible. For what else could possibly explain their
health problems but their personal failings? It's certainly not the power of
food companies or class-based differences or various political and
economic injustices. One can wear a dozen powerful sensors, own a smart
mattress and even do a close daily reading of one's poop (as some
self-tracking aficionados are wont to do), but those injustices would still be
nowhere to be seen, for they are not the kind of stuff that can be measured
with a sensor. The devil doesn't wear data. Social injustices are much
harder to track than the everyday lives of the individuals whose lives they
affect.
In shifting the focus of regulation from reining in institutional and corporate
malfeasance to perpetual electronic guidance of individuals, algorithmic
regulation offers us a good old technocratic utopia of politics without
politics. Disagreement and conflict, under this model, are seen as
unfortunate byproducts of the analog era, to be solved through data
collection, and not as inevitable results of economic or ideological
conflicts.
However, a politics without politics does not mean a politics without control
or administration. As O'Reilly writes in his essay: "New technologies make
it possible to reduce the amount of regulation while actually increasing the
amount of oversight and production of desirable outcomes." Thus, it's a
mistake to think that Silicon Valley wants to rid us of government
institutions. Its dream state is not the small government of libertarians (a
small state, after all, needs neither fancy gadgets nor massive servers to
process the data) but the data-obsessed and data-obese state of
behavioral economists.
The nudging state is enamored of feedback technology, for its key founding
principle is that while we behave irrationally, our irrationality can be
corrected if only the environment acts upon us, nudging us towards the
right option. Unsurprisingly, one of the three lonely references at the end of
O'Reilly's essay is to a 2012 speech entitled "Regulation: Looking
Backward, Looking Forward" by Cass Sunstein, the prominent American
legal scholar who is the chief theorist of the nudging state.
And while the nudgers have already captured the state by making
behavioral psychology the favorite idiom of government bureaucracy
(Daniel Kahneman is in, Machiavelli is out), the algorithmic regulation lobby
advances in more clandestine ways. They create innocuous non-profit
organizations like Code for America, which then co-opt the state under the
guise of encouraging talented hackers to tackle civic problems.
Such initiatives aim to reprogramme the state and make it feedback-friendly,
crowding out other means of doing politics. For all those tracking
apps, algorithms and sensors to work, databases need interoperability,
which is what such pseudo-humanitarian organizations, with their ardent
belief in open data, demand. And when the government is too slow to move
at Silicon Valley's speed, they simply move inside the government. Thus,
Jennifer Pahlka, the founder of Code for America and a protégée of O'Reilly,
became the deputy chief technology officer of the US government while
pursuing a one-year "innovation fellowship" from the White House.
Cash-strapped governments welcome such colonization by technologists,
especially if it helps to identify and clean up datasets that can be profitably
sold to companies who need such data for advertising purposes. Recent
clashes over the sale of student and health data in the UK are just a
precursor of battles to come: after all state assets have been privatized,
data is the next target. For O'Reilly, open data is "a key enabler of the
measurement revolution".
This "measurement revolution" seeks to quantify the efficiency of various
social programmes, as if the rationale behind the social nets that some of
them provide was to achieve perfection of delivery. The actual rationale, of
course, was to enable a fulfilling life by suppressing certain anxieties, so
that citizens can pursue their life projects relatively undisturbed. This vision
did spawn a vast bureaucratic apparatus, and the critics of the welfare state
from the left (most prominently Michel Foucault) were right to question its
disciplining inclinations. Nonetheless, neither perfection nor efficiency was
the "desired outcome" of this system. Thus, to compare the welfare state
with the algorithmic state on those grounds is misleading.
But we can compare their respective visions for human fulfilment and the
role they assign to markets and the state. Silicon Valley's offer is clear:
thanks to ubiquitous feedback loops, we can all become entrepreneurs and
take care of our own affairs! As Brian Chesky, the chief executive of
Airbnb, told the Atlantic last year, "What happens when everybody is a
brand? When everybody has a reputation? Every person can become an
entrepreneur."
Under this vision, we will all code (for America!) in the morning,
drive Uber cars in the afternoon, and rent out our kitchens as restaurants
(courtesy of Airbnb) in the evening. As O'Reilly writes of Uber and similar
companies, "these services ask every passenger to rate their driver (and
drivers to rate their passenger). Drivers who provide poor service are
eliminated. Reputation does a better job of ensuring a superb customer
experience than any amount of government regulation."
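Stripped to its core, the "regulation by reputation" that O'Reilly
celebrates is little more than a moving average with a cut-off. The sketch
below is a deliberately crude illustration; the threshold and window are
invented for the example and are not any platform's actual policy:

    from statistics import mean

    DEACTIVATION_THRESHOLD = 4.6  # hypothetical minimum average rating
    RATING_WINDOW = 100           # hypothetical: only recent trips count

    def is_deactivated(ratings: list[int]) -> bool:
        """True if a driver's recent average rating falls below the bar."""
        recent = ratings[-RATING_WINDOW:]
        return bool(recent) and mean(recent) < DEACTIVATION_THRESHOLD

    # "Drivers who provide poor service are eliminated."
    print(is_deactivated([5, 5, 4, 3, 4, 4, 5]))  # True: average ~4.29

The crudeness is the point: a single number, computed by a platform,
quietly takes over the work a regulator would otherwise do.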
The state behind the "sharing economy" does not wither away; it might be
needed to ensure that the reputation accumulated on Uber, Airbnb and
other platforms of the "sharing economy" is fully liquid and transferable,