
The rise of data and the death of politics

Tech pioneers in the US are advocating a new data-based approach to governance: "algorithmic regulation". But if technology provides the answers to society's problems, what happens to governments?

Government by social network? US president Barack Obama with Facebook founder Mark Zuckerberg. Photograph:
Mandel Ngan/AFP/Getty Images
By Evgeny Morozov, Saturday 19 July 2014

On 24 August 1965 Gloria Placente, a 34-year-old resident of Queens, New York, was driving to Orchard Beach in the Bronx. Clad in shorts and sunglasses, the housewife was looking forward to quiet time at the beach. But the moment she crossed the Willis Avenue Bridge in her Chevrolet Corvair, Placente was surrounded by a dozen patrolmen. There were also 125 reporters, eager to witness the launch of the New York police department's Operation Corral, an acronym for Computer Oriented Retrieval of Auto Larcenists.
Fifteen months earlier, Placente had driven through a red light and
neglected to answer the summons, an offence that Corral was going to
punish with a heavy dose of techno-Kafkaesque. It worked as follows: a
police car stationed at one end of the bridge radioed the license plates of
oncoming cars to a teletypist miles away, who fed them to a Univac 490
computer, an expensive $500,000 toy ($3.5m in today's dollars) on loan
from the Sperry Rand Corporation. The computer checked the numbers
against a database of 110,000 cars that were either stolen or belonged to
known offenders. In case of a match the teletypist would alert a second
patrol car at the bridge's other exit. It took, on average, just seven seconds.

Compared with the impressive police gear of today (automatic number plate recognition, CCTV cameras, GPS trackers), Operation Corral looks quaint. And the possibilities for control will only expand. European officials have considered requiring all cars entering the European market to feature a built-in mechanism that allows the police to stop vehicles remotely. Speaking earlier this year, Jim Farley, a senior Ford executive, acknowledged that "we know everyone who breaks the law, we know when you're doing it. We have GPS in your car, so we know what you're doing. By the way, we don't supply that data to anyone." That last bit didn't sound very reassuring, and Farley retracted his remarks.
As both cars and roads get "smart," they promise nearly perfect, real-time
law enforcement. Instead of waiting for drivers to break the law, authorities
can simply prevent the crime. Thus, a 50-mile stretch of the A14 between
Felixstowe and Rugby is to be equipped with numerous sensors that would
monitor traffic by sending signals to and from mobile phones in moving
vehicles. The telecoms watchdog Ofcom envisions that such smart roads, connected to a centrally controlled traffic system, could automatically impose variable speed limits to smooth the flow of traffic, but also direct the cars "along diverted routes to avoid the congestion and even [manage] their speed".
Other gadgets, from smartphones to smart glasses, promise even more
security and safety. In April, Apple patented technology that deploys
sensors inside the smartphone to analyze if the car is moving and if the
person using the phone is driving; if both conditions are met, it simply
blocks the phone's texting feature. Intel and Ford are working on Project Mobii, a face recognition system that, should it fail to recognize the face of the driver, would not only prevent the car being started but also send the picture to the car's owner (bad news for teenagers).
The car is emblematic of transformations in many other domains, from
smart environments for "ambient assisted living", where carpets and walls detect that someone has fallen, to various masterplans for the smart
city, where municipal services dispatch resources only to those areas that
need them. Thanks to sensors and internet connectivity, the most banal
everyday objects have acquired tremendous power to regulate behavior.
Even public toilets are ripe for sensor-based optimization: the Safeguard
Germ Alarm, a smart soap dispenser developed by Procter & Gamble and
used in some public WCs in the Philippines, has sensors monitoring the
doors of each stall. Once you leave the stall, the alarm starts ringing and
can only be stopped by a push of the soap-dispensing button.

In this context, Google's latest plan to push its Android operating system on
to smart watches, smart cars, smart thermostats and, one suspects, smart
everything, looks rather ominous. In the near future, Google will be the
middleman standing between you and your fridge, you and your car, you
and your rubbish bin, allowing the National Security Agency to satisfy its
data addiction in bulk and via a single window.
This "smartification" of everyday life follows a familiar pattern: there's
primary data a list of what's in your smart fridge and your bin and
metadata a log of how often you open either of these things or when they
communicate with one another. Both produce interesting insights: cue
smart mattresses one recent model promises to track respiration and
heart rates and how much you move during the night and smart
utensils that provide nutritional advice.
In addition to making our lives more efficient, this smart world also presents us with an exciting political choice. If so much of our everyday behavior is already captured, analyzed and nudged, why stick with unempirical approaches to regulation? Why rely on laws when one has sensors and feedback mechanisms? If policy interventions are to be, to use the buzzwords of the day, "evidence-based" and "results-oriented", technology is here to help.
This new type of governance has a name: algorithmic regulation. In as much as Silicon Valley has a political programme, this is it. Tim O'Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularizing the term "web 2.0"), has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O'Reilly makes an intriguing case for the virtues of algorithmic regulation: a case that deserves close scrutiny, both for what it promises policymakers and for the simplistic assumptions it makes about politics, democracy and power.
To see algorithmic regulation at work, look no further than the spam filter in
your email. Instead of confining itself to a narrow definition of spam, the
email filter has its users teach it. Even Google can't write rules to cover all
the ingenious innovations of professional spammers. What it can do, though, is teach the system what makes a good rule, and spot when it's time to find another rule for finding a good rule, and so on. An algorithm
can do this, but it's the constant real-time feedback from its users that
allows the system to counter threats never envisioned by its designers. And
it's not just spam: your bank uses similar methods to spot credit-card fraud.
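
The underlying mechanics are simple enough to sketch. Here is a toy illustration in Python (no particular provider's filter; every name and number in it is invented) of how user reports, rather than fixed rules, continuously redraw the boundary of what counts as spam:

```python
from collections import defaultdict
import math

class FeedbackSpamFilter:
    """Toy filter whose rules are written by its users, not its designers."""

    def __init__(self):
        # Per-label word counts, updated continuously by user reports.
        self.counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
        self.totals = {"spam": 0, "ham": 0}

    def report(self, message, label):
        """Every 'mark as spam' (or 'not spam') click is a training signal."""
        for word in message.lower().split():
            self.counts[label][word] += 1
            self.totals[label] += 1

    def spam_score(self, message):
        """Log-odds that a message is spam, given what users have taught us."""
        score = 0.0
        for word in message.lower().split():
            # Laplace smoothing so unseen words don't zero out the estimate.
            p_spam = (self.counts["spam"][word] + 1) / (self.totals["spam"] + 2)
            p_ham = (self.counts["ham"][word] + 1) / (self.totals["ham"] + 2)
            score += math.log(p_spam / p_ham)
        return score

# The loop that matters: each report immediately reshapes the rules for everyone.
f = FeedbackSpamFilter()
f.report("win a free prize now", "spam")
f.report("lunch at noon tomorrow", "ham")
print(f.spam_score("free prize inside"))  # positive: leans spam
print(f.spam_score("see you at lunch"))   # negative: leans ham
```

The point of the sketch is the loop, not the statistics: every click of "mark as spam" is a small regulatory act that instantly rewrites the rules for everyone else.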

In his essay, O'Reilly draws broader philosophical lessons from such technologies, arguing that they work because they rely on "a deep understanding of the desired outcome" (spam is bad!) and periodically check if the algorithms are actually working as expected (are too many legitimate emails ending up marked as spam?).
O'Reilly presents such technologies as novel and unique (we are living through a digital revolution, after all), but the principle behind "algorithmic regulation" would be familiar to the founders of cybernetics, a discipline that, even in its name (it means "the science of governance"), hints at its great regulatory ambitions. This principle, which allows the system to maintain its stability by constantly learning and adapting itself to the changing circumstances, is what the British psychiatrist Ross Ashby, one of the founding fathers of cybernetics, called "ultrastability".
To illustrate it, Ashby designed the homeostat. This clever device consisted of four interconnected RAF bomb control units: mysterious-looking black boxes with lots of knobs and switches that were sensitive to voltage fluctuations. If one unit stopped working properly (say, because of an unexpected external disturbance), the other three would rewire and regroup themselves, compensating for its malfunction and keeping the system's overall output stable.
Ashby's homeostat achieved "ultrastability" by always monitoring its internal state and cleverly redeploying its spare resources. Like the spam filter, it didn't have to specify all the possible disturbances, only the conditions for how and when it must be updated and redesigned. This is no trivial departure from how the usual technical systems, with their rigid if-then rules, operate: suddenly, there's no need to develop procedures for governing every contingency, for (or so one hopes) algorithms and real-time, immediate feedback can do a better job than inflexible rules out of touch with reality.
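
Ashby's principle is compact enough to caricature in code. The sketch below (a loose toy with invented parameters, not a model of the original hardware) captures the essential move: the system monitors only whether its essential variables stay within bounds, and blindly rewires itself whenever they do not:

```python
import random

def step(state, weights):
    """One synchronous update: each unit takes a weighted sum of all units."""
    n = len(state)
    return [sum(weights[i][j] * state[j] for j in range(n)) for i in range(n)]

def rewire(n):
    """Ashby's 'uniselector': when in trouble, jump to random new connections."""
    return [[random.uniform(-0.5, 0.5) for _ in range(n)] for _ in range(n)]

# Four units, echoing the four interconnected bomb-control boxes.
n = 4
disturbance = [1.0, -1.0, 0.5, -0.5]   # an "unexpected external disturbance"
state, weights, rewirings = list(disturbance), rewire(n), 0

while True:
    state = step(state, weights)
    if max(abs(v) for v in state) < 1e-6:   # essential variables back in range:
        break                                # the system is (ultra)stable
    if max(abs(v) for v in state) > 10:      # a unit has saturated:
        weights, rewirings = rewire(n), rewirings + 1   # rewire blindly...
        state = list(disturbance)            # ...and absorb the disturbance again

print(f"stable after {rewirings} random rewirings")
```

Note what the code never contains: a list of possible disturbances. It specifies only the bounds of viability and the rule for trying again.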
Algorithmic regulation could certainly make the administration of existing
laws more efficient. If it can fight credit-card fraud, why not tax fraud?
Italian bureaucrats have experimented with the redditometro, or income meter, a tool for comparing people's spending patterns (recorded thanks to an arcane Italian law) with their declared income, so that authorities know when you spend more than you earn. Spain has expressed interest in a similar tool.

Such systems, however, are toothless against the real culprits of tax evasion: the super-rich families who profit from various offshoring schemes or simply write outrageous tax exemptions into the law.
Algorithmic regulation is perfect for enforcing the austerity agenda while
leaving those responsible for the fiscal crisis off the hook. To understand
whether such systems are working as expected, we need to modify
O'Reilly's question: for whom are they working? If it's just the tax-evading
plutocrats, the global financial institutions interested in balanced national
budgets and the companies developing income-tracking software, then it's
hardly a democratic success.
With his belief that algorithmic regulation is based on "a deep
understanding of the desired outcome", O'Reilly cunningly disconnects the
means of doing politics from its ends. But the how of politics is as important as the what of politics; in fact, the former often shapes the latter.
Everybody agrees that education, health, and security are all "desired
outcomes", but how do we achieve them? In the past, when we faced the
stark political choice of delivering them through the market or the state, the
lines of the ideological debate were clear. Today, when the presumed choice is between the digital and the analog, or between dynamic feedback and static law, that ideological clarity is gone, as if the very choice of how to achieve those "desired outcomes" was apolitical and didn't force us to choose between different and often incompatible visions of communal living.
By assuming that the utopian world of infinite feedback loops is so efficient
that it transcends politics, the proponents of algorithmic regulation fall into
the same trap as the technocrats of the past. Yes, these systems are terrifyingly efficient, in the same way that Singapore is terrifyingly efficient (O'Reilly, unsurprisingly, praises Singapore for its embrace of algorithmic regulation). And while Singapore's leaders might believe that they, too, have transcended politics, it doesn't mean that their regime cannot be assessed outside the linguistic swamp of efficiency and innovation, by using political, not economic, benchmarks.
As Silicon Valley keeps corrupting our language with its endless glorification of disruption and efficiency (concepts at odds with the vocabulary of democracy), our ability to question the "how" of politics is weakened. Silicon Valley's default answer to the how of politics is what I call solutionism: problems are to be dealt with via apps, sensors, and feedback loops, all provided by startups. Earlier this year Google's Eric Schmidt even promised that startups would provide the solution to the problem of economic inequality: the latter, it seems, can also be "disrupted". And where the innovators and the disruptors lead, the bureaucrats follow.
The intelligence services embraced solutionism before other government agencies. Thus, they reduced the topic of terrorism from a subject that had some connection to history and foreign policy to an informational problem of identifying emerging terrorist threats via constant surveillance. They urged citizens to accept that instability is part of the game, that its root causes are neither traceable nor reparable, and that the threat can only be pre-empted by out-innovating and out-surveilling the enemy with better communications.
Speaking in Athens last November, the Italian philosopher Giorgio Agamben discussed an epochal transformation in the idea of government, "whereby the traditional hierarchical relation between causes and effects is inverted, so that, instead of governing the causes, a difficult and expensive undertaking, governments simply try to govern the effects".
For Agamben, this shift is emblematic of modernity. It also explains why the liberalization of the economy can co-exist with the growing proliferation of control, by means of soap dispensers and remotely managed cars, into everyday life. "If government aims for the effects and not the causes, it will be obliged to extend and multiply control. Causes demand to be known, while effects can only be checked and controlled." Algorithmic regulation is an enactment of this political programme in technological form.
The true politics of algorithmic regulation become visible once its logic is applied to the social nets of the welfare state. There are no calls to dismantle them, but citizens are nonetheless encouraged to take responsibility for their own health. Consider how Fred Wilson, an influential US venture capitalist, frames the subject. "Health is the opposite of healthcare," he said at a conference in Paris last December. "It's what keeps you out of the healthcare system in the first place." Thus, we are invited to start using self-tracking apps and data-sharing platforms and to monitor our vital indicators, symptoms and discrepancies on our own.
This goes nicely with recent policy proposals to save troubled public services by encouraging healthier lifestyles. Consider a 2013 report by Westminster council and the Local Government Information Unit, a thinktank, calling for the linking of housing and council benefits to claimants' visits to the gym, with the help of smartcards. The smartcards might not be needed: many smartphones are already tracking how many steps we take every day (Google Now, the company's virtual assistant, keeps score of such data automatically and periodically presents it to users, nudging them to walk more).
The numerous possibilities that tracking devices offer to health and
insurance industries are not lost on O'Reilly. "You know the way that
advertising turned out to be the native business model for the internet?" he
wondered at a recent conference. "I think that insurance is going to be the
native business model for the internet of things." Things do seem to be
heading that way: in June, Microsoft struck a deal with American Family
Insurance, the eighth-largest home insurer in the US, in which both
companies will fund startups that want to put sensors into smart homes and
smart cars for the purposes of "proactive protection".
An insurance company would gladly subsidise the costs of installing yet another sensor in your house, as long as it can automatically alert the fire department or make front porch lights flash in case your smoke detector goes off. For now, accepting such tracking systems is framed as an extra benefit that can save us some money. But when do we reach a point where not using them is seen as a deviation, or, worse, an act of concealment that ought to be punished with higher premiums?
Or consider a May 2014 report from 2020health, another thinktank,
proposing to extend tax rebates to Britons who give up smoking, stay slim
or drink less. "We propose 'payment by results', a financial reward for
people who become active partners in their health, whereby if you, for
example, keep your blood sugar levels down, quit smoking, keep weight
off, [or] take on more self-care, there will be a tax rebate or an end-of-year
bonus," they state. Smart gadgets are the natural allies of such schemes:
they document the results and can even help achieve them by constantly
nagging us to do what's expected.
The unstated assumption of most such reports is that the unhealthy are not only a burden to society but that they deserve to be punished (fiscally for now) for failing to be responsible. For what else could possibly explain their health problems but their personal failings? It's certainly not the power of food companies or class-based differences or various political and economic injustices. One can wear a dozen powerful sensors, own a smart mattress and even do a close daily reading of one's poop, as some self-tracking aficionados are wont to do, but those injustices would still be nowhere to be seen, for they are not the kind of stuff that can be measured with a sensor. The devil doesn't wear data. Social injustices are much harder to track than the everyday lives of the individuals whose lives they affect.
In shifting the focus of regulation from reining in institutional and corporate malfeasance to perpetual electronic guidance of individuals, algorithmic regulation offers us a good old technocratic utopia of politics without politics. Disagreement and conflict, under this model, are seen as unfortunate byproducts of the analog era, to be solved through data collection, and not as inevitable results of economic or ideological conflicts.
However, a politics without politics does not mean a politics without control
or administration. As O'Reilly writes in his essay: "New technologies make
it possible to reduce the amount of regulation while actually increasing the
amount of oversight and production of desirable outcomes." Thus, it's a
mistake to think that Silicon Valley wants to rid us of government
institutions. Its dream state is not the small government of libertarians (a small state, after all, needs neither fancy gadgets nor massive servers to process the data) but the data-obsessed and data-obese state of behavioral economists.
The nudging state is enamored of feedback technology, for its key founding
principle is that while we behave irrationally, our irrationality can be
corrected if only the environment acts upon us, nudging us towards the
right option. Unsurprisingly, one of the three lonely references at the end of
O'Reilly's essay is to a 2012 speech entitled "Regulation: Looking
Backward, Looking Forward" by Cass Sunstein, the prominent American
legal scholar who is the chief theorist of the nudging state.
And while the nudgers have already captured the state by making behavioral psychology the favorite idiom of government bureaucracy (Daniel Kahneman is in, Machiavelli is out), the algorithmic regulation lobby advances in more clandestine ways. They create innocuous non-profit organizations like Code for America, which then co-opt the state under the guise of encouraging talented hackers to tackle civic problems.
Such initiatives aim to reprogramme the state and make it feedback-friendly, crowding out other means of doing politics. For all those tracking apps, algorithms and sensors to work, databases need interoperability, which is what such pseudo-humanitarian organizations, with their ardent belief in open data, demand. And when the government is too slow to move at Silicon Valley's speed, they simply move inside the government. Thus, Jennifer Pahlka, the founder of Code for America and a protégée of O'Reilly, became the deputy chief technology officer of the US government while pursuing a one-year "innovation fellowship" from the White House.
Cash-strapped governments welcome such colonization by technologists, especially if it helps to identify and clean up datasets that can be profitably sold to companies who need such data for advertising purposes. Recent clashes over the sale of student and health data in the UK are just a precursor of battles to come: after all state assets have been privatized, data is the next target. For O'Reilly, open data is "a key enabler of the measurement revolution".
This "measurement revolution" seeks to quantify the efficiency of various
social programmes, as if the rationale behind the social nets that some of
them provide was to achieve perfection of delivery. The actual rationale, of
course, was to enable a fulfilling life by suppressing certain anxieties, so
that citizens can pursue their life projects relatively undisturbed. This vision
did spawn a vast bureaucratic apparatus, and the critics of the welfare state from the left (most prominently Michel Foucault) were right to question its disciplining inclinations. Nonetheless, neither perfection nor efficiency was the "desired outcome" of this system. Thus, to compare the welfare state
with the algorithmic state on those grounds is misleading.
But we can compare their respective visions for human fulfilment and the
role they assign to markets and the state. Silicon Valley's offer is clear:
thanks to ubiquitous feedback loops, we can all become entrepreneurs and
take care of our own affairs! As Brian Chesky, the chief executive of
Airbnb, told the Atlantic last year, "What happens when everybody is a
brand? When everybody has a reputation? Every person can become an
entrepreneur."
Under this vision, we will all code (for America!) in the morning, drive Uber cars in the afternoon, and rent out our kitchens as restaurants, courtesy of Airbnb, in the evening. As O'Reilly writes of Uber and similar companies, "these services ask every passenger to rate their driver (and drivers to rate their passenger). Drivers who provide poor service are eliminated. Reputation does a better job of ensuring a superb customer experience than any amount of government regulation."
The state behind the "sharing economy" does not wither away; it might be needed to ensure that the reputation accumulated on Uber, Airbnb and other platforms of the "sharing economy" is fully liquid and transferable, creating a world where our every social interaction is recorded and assessed, erasing whatever differences exist between social domains. Someone, somewhere will eventually rate you as a passenger, a house guest, a student, a patient, a customer. Whether this ranking infrastructure will be decentralized, provided by a giant like Google or rest with the state is not yet clear, but the overarching objective is: to make reputation into a feedback-friendly social net that could protect the truly responsible citizens from the vicissitudes of deregulation.
Admiring the reputation models of Uber and Airbnb, O'Reilly wants governments to be "adopting them where there are no demonstrable ill effects". But what counts as an "ill effect", and how to demonstrate it, is a key question that belongs to the how of politics that algorithmic regulation wants to suppress. It's easy to demonstrate "ill effects" if the goal of regulation is efficiency, but what if it is something else? Surely there are some benefits (fewer visits to the psychoanalyst, perhaps) in not having your every social interaction ranked?
The imperative to evaluate and demonstrate "results" and "effects" already
presupposes that the goal of policy is the optimization of efficiency.
However, as long as democracy is irreducible to a formula, its composite
values will always lose this battle: they are much harder to quantify.
For Silicon Valley, though, the reputation-obsessed algorithmic state of the
sharing economy is the new welfare state. If you are honest and
hardworking, your online reputation would reflect this, producing a highly
personalized social net. It is "ultrastable" in Ashby's sense: while the
welfare state assumes the existence of specific social evils it tries to fight,
the algorithmic state makes no such assumptions. The future threats can
remain fully unknowable and fully addressable on the individual level.
Silicon Valley, of course, is not alone in touting such ultrastable individual solutions. Nassim Taleb, in his best-selling 2012 book Antifragile, makes a similar, if more philosophical, plea for maximizing our individual resourcefulness and resilience: don't get one job but many, don't take on debt, count on your own expertise. It's all about resilience, risk-taking and, as Taleb puts it, "having skin in the game". As Julian Reid and Brad Evans write in their new book, Resilient Life: The Art of Living Dangerously, this growing cult of resilience masks a tacit acknowledgement that no collective project could even aspire to tame the proliferating threats to human existence; we can only hope to equip ourselves to tackle them individually.

"When policy-makers engage in the discourse of resilience," write Reid and


Evans, "they do so in terms which aim explicitly at preventing humans from
conceiving of danger as a phenomenon from which they might seek
freedom and even, in contrast, as that to which they must now expose
themselves."
What, then, is the progressive alternative? "The enemy of my enemy is my
friend" doesn't work here: just because Silicon Valley is attacking the
welfare state doesn't mean that progressives should defend it to the very
last bullet (or tweet). First, even leftist governments have limited space for
fiscal manoeuvres, as the kind of discretionary spending required to
modernize the welfare state would never be approved by the global
financial markets. And it's the ratings agencies and bond markets, not the voters, who are in charge today.
Second, the leftist critique of the welfare state has become only more
relevant today when the exact borderlines between welfare and security
are so blurry. When Google's Android powers so much of our everyday life,
the government's temptation to govern us through remotely controlled cars
and alarm-operated soap dispensers will be all too great. This will expand
government's hold over areas of life previously free from regulation.
With so much data, the government's favorite argument in fighting terror (if only the citizens knew as much as we do, they too would impose all these legal exceptions) easily extends to other domains, from health to climate change. Consider a recent academic paper that used Google search data to study obesity patterns in the US, finding significant correlation between search keywords and body mass index levels. "Results suggest great promise of the idea of obesity monitoring through real-time Google Trends data", note the authors, which would be "particularly attractive for government health institutions and private businesses such as insurance companies".
If Google senses a flu epidemic somewhere, it's hard to challenge its hunch: we simply lack the infrastructure to process so much data at this scale. Google can be proven wrong after the fact, as has recently been the case with its flu trends data (which was shown to overestimate the number of infections, possibly because of its failure to account for the intense media coverage of flu), but so is the case with most terrorist alerts. It's the immediate, real-time nature of computer systems that makes them perfect allies of an infinitely expanding and pre-emption-obsessed state.
Perhaps the case of Gloria Placente and her failed trip to the beach was not just a historical oddity but an early omen of how real-time computing, combined with ubiquitous communication technologies, would transform the state. One of the few people to have heeded that omen was a little-known American advertising executive called Robert MacBride, who pushed the logic behind Operation Corral to its ultimate conclusions in his unjustly neglected 1967 book, The Automated State.
At the time, America was debating the merits of establishing a national data centre to aggregate various national statistics and make them available to government agencies. MacBride attacked his contemporaries' inability to see how the state would exploit the metadata accrued as everything was being computerized. Instead of "a large scale, up-to-date Austro-Hungarian empire", modern computer systems would produce "a bureaucracy of almost celestial capacity" that can "discern and define relationships in a manner which no human bureaucracy could ever hope to do".
"Whether one bowls on a Sunday or visits a library instead is [of] no
consequence since no one checks those things," he wrote. Not so when
computer systems can aggregate data from different domains and spot
correlations. "Our individual behavior in buying and selling an automobile, a
house, or a security, in paying our debts and acquiring new ones, and in
earning money and being paid, will be noted meticulously and studied
exhaustively," warned MacBride. Thus, a citizen will soon discover that "his
choice of magazine subscriptions can be found to indicate accurately the
probability of his maintaining his property or his interest in the education of
his children." This sounds eerily similar to the recent case of a hapless
father who found that his daughter was pregnant from a coupon that Target,
a retailer, sent to their house. Target's hunch was based on its analysis of products (for example, unscented lotion) usually bought by other pregnant women.
For MacBride the conclusion was obvious. "Political rights won't be violated
but will resemble those of a small stockholder in a giant enterprise," he
wrote. "The mark of sophistication and savoir-faire in this future will be the
grace and flexibility with which one accepts one's role and makes the most
of what it offers." In other words, since we are all entrepreneurs first and
citizens second, we might as well make the most of it.
What, then, is to be done? Technophobia is no solution. Progressives need
technologies that would stick with the spirit, if not the institutional form, of
the welfare state, preserving its commitment to creating ideal conditions for
human flourishing. Even some ultrastability is welcome. Stability was a laudable goal of the welfare state before it encountered a trap: in specifying the exact protections that the state was to offer against the excesses of capitalism, it could not easily deflect new, previously unspecified forms of exploitation.
How do we build welfarism that is both decentralized and ultrastable? A form of guaranteed basic income, whereby some welfare services are replaced by direct cash transfers to citizens, fits the two criteria.
Creating the right conditions for the emergence of political communities
around causes and issues they deem relevant would be another good step.
Full compliance with the principle of ultrastability dictates that such issues cannot be anticipated or dictated from above, by political parties or trade unions, and must be left unspecified.
What can be specified is the kind of communications infrastructure needed
to abet this cause: it should be free to use, hard to track, and open to new,
subversive uses. Silicon Valley's existing infrastructure is great for fulfilling
the needs of the state, not of self-organizing citizens. It can, of course, be
redeployed for activist causes, and it often is, but there's no reason to accept the status quo as either ideal or inevitable.
Why, after all, appropriate what should belong to the people in the first place? While many of the creators of the internet bemoan how low their creature has fallen, their anger is misdirected. The fault is not with that amorphous entity but, first of all, with the absence of robust technology policy on the left: a policy that can counter the pro-innovation, pro-disruption, pro-privatization agenda of Silicon Valley. In its absence, all these emerging political communities will operate with their wings clipped. Whether the next Occupy Wall Street would be able to occupy anything in a truly smart city remains to be seen: most likely, they would be out-censored and out-droned.
To his credit, MacBride understood all of this in 1967. "Given the resources
of modern technology and planning techniques," he warned, "it is really no
great trick to transform even a country like ours into a smoothly running
corporation where every detail of life is a mechanical function to be taken
care of." MacBride's fear is O'Reilly's master plan: the government, he
writes, ought to be modelled on the "lean startup" approach of Silicon
Valley, which is "using data to constantly revise and tune its approach to
the market". It's this very approach that Facebook has recently deployed to
maximize user engagement on the site: if showing users more happy
stories does the trick, so be it.

Algorithmic regulation, whatever its immediate benefits, will give us a political regime where technology corporations and government bureaucrats call all the shots. The Polish science fiction writer Stanislaw Lem, in a pointed critique of cybernetics published, as it happens, at roughly the same time as The Automated State, put it best: "Society cannot give up the burden of having to decide about its own fate by sacrificing this freedom for the sake of the cybernetic regulator."
