Good evening, and thanks for having me here. Today I want to look at how our relationship to the world changes when we're surrounded by devices that anticipate our needs and act on them. That means this talk sits at the intersection of the Internet of Things, user experience design, and machine learning, and although people have dealt with each of those disciplines before, I don't think they've ever been combined in quite the ways they are now, or with the current enthusiasm. And, to be clear: I am neither a fan nor a critic of these technologies. I think they're too complex to be reduced that way, and to maximize their positive impact we have to actively engage with them, which is what this talk is trying to do.
The talk is divided into several parts: it starts with an overview of how I think Internet of Things devices are primarily components of services rather than self-contained experiences, moves on to how predictive behavior enables key components of those services, and finishes by exploring some speculative ideas about the kind of impact they're going to have on us, as individuals and as a society. At its core is an argument that everything is going to be connected to the Internet, that those things will each try to predict our immediate future, and that this is going to fundamentally change our relationship to the world.
A couple of caveats:
- My current work in this field focuses almost exclusively on the consumer internet of things, so I see most things through that lens.
- I want to point out that few if any of the issues I raise are new. Though the terms internet of things and machine learning are hot right now, the ideas have been discussed in research circles for decades. Search for ubiquitous computing, ambient intelligence, and pervasive computing and you'll see a lot of great thought in the space. If you're really ambitious, you can read the Artificial Intelligence and Cybernetics work of the 50s and 60s, and you'll be surprised by the prescience of the people working in this space when the entire world's compute power was about as much as my key fob.
- There are a lot of ideas here, and I will almost certainly under-explain something. For that I apologize in advance. My goal is to give you a general sense of how the pieces connect, rather than an in-depth explanation of any one of them.
- Finally, most of my slides don't have words on them, so I'll make the complete deck with a transcript available as soon as I'm done.

Let me begin by telling you a bit about my background. I'm a user experience designer. I was one of the first professional Web designers. This is the navigation for a hot sauce shopping site I designed in the spring of 1994.

I've also worked on the user experience design of a lot of consumer electronics products from companies you've probably heard of.

Now, a quick aside. What is User Experience Design? UX design is not graphic design, interface design, ergonomics, industrial design, or product design, but it includes aspects of all of those things.
UX design is a humanistic problem-solving approach that brings together the needs of people and businesses to create technological products that are valuable for both groups. It's much more about process than making things look good.
The field is about 20 years old. This is how it looked about 15 years ago.
Diagram by Jess McMullin.

It's a little more complex today, but it's roughly the same thing.
Diagram by Corey Stern.

I wrote a couple of books based on my experience as a designer. One is a cookbook of user research methods, and the second describes what I think are some of the core concerns when designing networked computational devices. I'm also married to one of the authors of this book, so thinking about the impact of the design of connected devices on people is kind of a family business.

I also started a couple of companies. The first, Adaptive Path, was primarily focused
on the web, and with the second one, ThingM, I got deep into developing hardware.

Today I work for PARC, the famous research lab that invented the personal computer, object-oriented software, the tablet computer, and the laser printer, as a principal in its Innovation Services group. We help companies reduce the risk of adopting novel technologies using a mix of social research, design, and business strategy.

PARC also started thinking about what we now call the IoT long before most other companies.
It was at PARC in 1971 that Dick Shoup, an early PARC researcher, wrote that eventually processors would be as common, and as invisible, as electric motors. This clearly outlines the destiny of the connected computer: eventually it will become as boring and as common as electric motors are today.

In the late 80s, also at PARC, Mark Weiser coined the term ubiquitous computing to describe a future when the number of computers surpassed the number of people using them. In this chart from 20 years ago, he predicted that would happen around 2005. He didn't live to see that crossover, but he was basically right (the iPhone launched in 2007) and we now live in the world he envisioned.
Essentially, what we now see as a novel phenomenon has been foreseen by people in the industry for many decades. The questions have never been about where we're going, but about when we'll get there, and how.

But the end vision doesn't appear all at once. We've only started the transition to the ubiquitous computing world, and as such, we're seeing a lot of bad ideas about what the Internet of Things is and isn't. Essentially, everything that can be connected to the Internet will be, and that includes a lot of things that shouldn't be. There are so many bad ideas right now that there are entire Tumblrs dedicated to mocking stupid IoT ideas. One is about dumb smart things in general and the other is just about dumb smart refrigerators.

Most of these things are bad ideas because simply connecting existing stuff to the internet does not produce customer value.

Simple connectivity helps when you're trying to maximize the efficiency of a fixed process, but that's not a problem most people have. We've been able to simply connect devices to a computer since a Tandy Color Computer could turn lights off and on over X10 in 1983. The problem is that that wasn't very useful then, and it's not very useful now. If you replace the Tandy with an iPhone and the lamp with a washing machine

or an egg carton, you still have the same problem, and it's a user experience problem.
The UX problem is that end users have to connect all the dots to coordinate between a wide variety of devices, and to interpret the meaning of all of these sensors, to create personal value. For many simply connected products there is so little efficiency to be had relative to the cognitive load that it's just not worth it. What's worse, the extra cognitive load is exactly opposite to what the product promises, and customers feel intensely disappointed, perhaps even betrayed, when they realize how little they get out of such a product. That makes most such products effectively WORSE than useless.
That promise gap is what distinguishes a gadget from a tool, why this egg carton is funny, and why Quirky, which made it, filed for bankruptcy after burning through hundreds of millions of dollars.

How do you create a tool that reduces cognitive load instead of creating it, that exchanges people's precious time for significant value? One approach is to couple cloud-based services with predictive machine learning models to anticipate what behaviors will maximize the chances of a desirable outcome in a given situation.

When I talk about services, I'm talking about thinking of hardware devices as physical representatives of cloud services, which makes them very different from traditional consumer electronics. Historically, a company made an electronic product, say a turntable; it found people to sell it, it advertised it, and people bought it. That was traditionally the end of the company's relationship with the customer until that person bought another thing, and all of the value of the relationship was in the device. With the IoT, the sale of the device is just the beginning of the relationship, and the physical thing holds almost no value for either the customer or the manufacturer.

Value now shifts to services, and the devices, software applications, and websites used to access them (their avatars) become secondary. A camera becomes a really good appliance for taking photos for Instagram, while a TV becomes a nice Instagram display that you don't have to log into every time, and a phone becomes a convenient way to check your friends' pictures on the road.
Hardware, physical things, become simultaneously more specialized and devalued as users see through each device to the service it represents. The avatars exist to get better value out of the service.

Amazon really gets this. Here's a telling older ad from Amazon for the Kindle. It's saying: look, use whatever device you want. We don't care, as long as you stay loyal to our service. You can buy our specialized devices, but you don't have to.

When the Fire was released 5 years ago, Jeff Bezos even called it a service.

Amazon Dash is a service that's enabled by dedicated devices. A Dash button is a networked computer whose only purpose is to be an avatar for a macaroni and cheese service.

Most large-scale IoT products are service avatars. They use specialized sensors and actuators to support a service, but have little value (or don't work at all) without the supporting service. SmartThings, which was acquired by Samsung, clearly states its service offering right up front on its site. The first thing it says about its product line is not what the functionality is, but what effect the service will achieve for its customers. The hardware products' functionality, how they will technically satisfy the service promise, is almost an afterthought.

Compare that to X10, their spiritual predecessor, which has been in the business for 30 years. All that X10 tells you is what the devices are, not what the service will accomplish for you. I don't even know if there IS a service. Why should I care that they have modules? I shouldn't, and I don't.

I think the real value connected services offer is their ability to make sense of the
world on our behalf, to reduce cognitive load by enabling people to interact with
devices at a higher level than simple telemetry, at the level of intentions and goals,
rather than data and control. Humans are not built to collect and make sense of huge
amounts of data across many devices, or to articulate our needs as systems of
mutually interdependent components. Computers are great at it.

They do this through processes that have many names, but I'll lump them all under Machine Learning, which is a big part of what used to be called Artificial Intelligence. Many of the core ideas here go back to the 1950s, and it's the basis of every email spam filter, so if you've had your spam automatically filtered, you've experienced the value of machine learning.
A big part of Machine Learning is pattern recognition. We humans evolved very sophisticated faculties to rapidly identify visual images in all kinds of difficult conditions. You look at a picture of an orange on a red plate and you can tell instantly that it's not a sunset, but until recently that was really, really hard for a computer. Because of a combination of Moore's Law and some breakthroughs, computers have gotten much better at pattern recognition in the last couple of years.
For a computer, recognizing something starts with a process where some basic attributes of an image are extracted, such as the shape of boundaries between clusters of pixels, or the dominant color of a patch of an image. These are called features in machine learning. By examining lots and lots of examples of features in an image, a machine learning system builds a statistical model of what a given cluster represents.
Basic forms of this kind of image recognition have been used industrially for decades. Most of the oranges that come from the Central Valley are scanned 360 times to separate ones with blemishes from ones without. Lego has a completely automated factory that injection-molds a million Lego bricks an hour, examines every single piece, and automatically sorts, bags, and boxes them, all using computer vision. That's relatively old.
Images from: Region-based Convolutional Networks for Accurate Object Detection and
Semantic Segmentation, R. Girshick, J. Donahue, T. Darrell, J. Malik, IEEE Transactions on
Pattern Analysis and Machine Intelligence
Real-Time Image and Video Processing: From Research to Reality by Kehtarnavaz and
Gemadia
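
To make the idea of a feature concrete, here's a minimal sketch in Python of that pipeline: extract crude features, then fit a statistical model over many examples. The data, labels, and feature choice are all invented for illustration; real recognizers use far richer features.

# Minimal sketch: crude color features feeding a statistical model.
# All data and labels are invented toy examples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(image):
    # Reduce an HxWx3 image to a 3-number feature vector:
    # the mean value of each color channel.
    return image.reshape(-1, 3).mean(axis=0)

rng = np.random.default_rng(0)
# Fake 'orange' images skew bright, fake 'sunset' images skew dark.
oranges = [rng.integers(150, 256, size=(32, 32, 3)) for _ in range(100)]
sunsets = [rng.integers(0, 150, size=(32, 32, 3)) for _ in range(100)]

X = np.array([extract_features(img) for img in oranges + sunsets])
y = np.array([0] * 100 + [1] * 100)  # 0 = orange, 1 = sunset

model = RandomForestClassifier(random_state=0).fit(X, y)
print(model.predict_proba(X[:1]))  # class probabilities for one image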

What's new is a class of systems that understand the content of images. They don't just look at features, but clusters of features, and clusters of clusters of features, and they can now distinguish an orange from the setting sun, or a person from an airplane, or a polar bear from a dalmatian.
This is why Facebook asks you to say who is in an image. It's not just for you, it's for their face recognizer.
Now here's the interesting part: we're built to identify patterns in visual phenomena, but we're pretty bad at identifying them in other kinds of situations. For example, if you've ever tried to understand someone's food sensitivities, it's really hard to extract what that person is reacting to, even if you keep very careful track of what they've eaten. We're just not built for it. It was never sufficiently important, evolutionarily, so we didn't evolve an organ for it. Computers, on the other hand, don't care, and now that we've found really good ways to find patterns in visual images, these same techniques can find patterns in anything.
Instead of a matrix of pixels, what if you had a matrix of medical prescriptions, with each row as the history of one person's prescriptions: from the first time that person went to the doctor for a problem, through when they were prescribed certain things, to when they got better, or didn't. The same kind of system could learn the typical pattern for prescribing, say, a wheelchair. It would essentially see the general shape of the sequence for the prescription of a chair over time and across many people.
Then if you saw a wheelchair being prescribed outside of the typical pattern, you could identify it. That's called anomaly detection. That's in fact exactly how we built a system to identify Medicare fraud for the state of California. People are terrible at that stuff, but computers are great.
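
To sketch the mechanics of that (a toy illustration with made-up numbers, not the actual fraud system):

# Toy sketch of anomaly detection over prescription histories.
# Each row summarizes one patient: (days from first visit to the
# wheelchair prescription, number of visits before it). Invented data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
typical = rng.normal(loc=[180.0, 12.0], scale=[30.0, 3.0], size=(500, 2))

detector = IsolationForest(random_state=0).fit(typical)

# A wheelchair prescribed 2 days after the first visit, on visit 1,
# falls far outside the learned shape of the sequence.
print(detector.predict([[2.0, 1.0]]))   # -1 means anomalous
print(detector.predict(typical[:3]))    # mostly 1, i.e., typical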

When one of the dimensions is time and another is the outcome of a series of actions, you can make a pattern recognizer that associates a sequence of actions with a set of statistical probabilities for possible outcomes, based on data collected across a wide variety of similar situations. In other words, because people and machines behave in fairly consistent ways, these machine learning systems can increasingly predict the future and attempt to adapt the current situation to create a more desirable outcome.
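
As a toy sketch of that idea, with actions and outcomes invented for illustration: encode each sequence of actions, label it with its outcome, and fit a model that returns outcome probabilities for a new sequence.

# Toy sketch: associate action sequences with outcome probabilities.
# Each row crudely encodes a sequence as counts of actions A, B, C;
# the label records whether the outcome was desirable. All invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[3, 0, 1], [0, 4, 2], [2, 1, 0],
              [1, 3, 3], [4, 0, 0], [0, 2, 4]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = desirable outcome

model = LogisticRegression().fit(X, y)

# For a new, in-progress sequence, estimate the odds of each outcome;
# a device could then act to steer toward the better one.
print(model.predict_proba([[2, 0, 1]]))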

The interesting thing is that this is not just theory.

Prediction and response is at the heart of the value proposition of many of the most compelling IoT services, starting with the Nest. The Nest says that it knows you. How does it know you? It predicts what you're going to want based on your past behavior.

Amazon's Echo speaker says it's continually learning. How is that? Predictive machine learning based on your actions and your words.

The Birdi smart smoke alarm says it will learn over time, which is again the same
thing.

Jaguar: learning AND intelligent.

The Edyn plant watering system adapts to every change. What is that adaptation?
Predictive machine learning.

Canary, a home security service.

Cocoon, another home security system, knows. How does it know? Machine learning.

Here's foobot, an air quality service.


[I also like how one of its implicit service promises is to identify when your kids are
smoking pot.]

Silk's Sense adapts.

Mistbox sprays water into your air conditioner to reduce your energy bill. You'd think that's a pretty simple process, but no, it's always learning.

A number of companies are making chips that make machine learning much cheaper and more power-efficient, which means that it's going to be very easy to install it in every device, from street lights to medical equipment to toys. It's not just likely, it's inevitable. Here's one that was announced a couple of weeks ago.

Here's a Kickstarter for an AI Butler that posted earlier this month. What does it do? I don't know, but it learns.

The ideal scenario these things paint is pretty seductive. Imagine a world of espresso machines that start brewing as you're thinking it's a good time for coffee, office lights that dim when it's sunny to save energy, and mac and cheese that never runs out. The problem is that although the value proposition is of a better user experience, it's unspecific in the details. Previous machine learning systems were used in areas such as predictive maintenance and finance. They were made by and for specialists. Now that these systems are for general consumers, we have some significant questions. How exactly will our experience of the world, our ability to use all the collected data, become more efficient and more pleasurable?
We're still early in our understanding of predictive devices, so right now there are more problems than solutions. I want to start by articulating the issues I've observed in our work.

We've never had mechanical things that make significant decisions on their own. As devices adapt their behavior, how will they communicate that they're doing so? Do we stick a sign on them that says adapting, like the light on a video camera says recording? Should my chair vibrate when adjusting to my posture? How will users, or just passers-by, know which things adapt? I could end up sitting uncomfortably for a long time, waiting for my chair to change, before realizing it doesn't adapt on its own. How should smart devices set the expectation that they may behave differently in what appear to be identical circumstances?
How do we know HOW intelligent these devices are? People already often project more smarts onto devices than those devices actually have, so a couple of accurate predictions may imply a much better model than actually exists. How do we know we're not just homesteading the uncanny valley here?
Chair by Raffaello D'Andrea, Matt Donovan and Max Dean.

The irony in predictive systems is that they're pretty unpredictable, at least at first. When machine learning systems are new, they're often inaccurate, which is not what we expect from our digital devices. 60%-70% accuracy is typical for a first pass, but even 90% accuracy isn't enough for a predictive system to feel right, since if it's making decisions all the time, it's going to be making mistakes all the time, too. It's fine if your house is a couple of degrees cooler than you'd like, but what if your wheelchair refuses to go to a drinking fountain next to a door because it's been trained on doors and it can't tell that's not what you mean in this one instance? For all the times a system gets it right, it's on the mistakes that we judge it, and a couple of such instances can shatter people's confidence. Anxiety is a kind of cognitive load, and a little doubt about whether a supposedly smart system is going to do the right thing is enough to turn a UX that's right most of the time into one that's more trouble than it's worth. When that happens, it's lost you.
Photo CC BY 2.0 photo 2011 Pop Culture Geek taken by Doug Kline:
https://www.flickr.com/photos/popculturegeek/6300931073/

The last issue comes as a result of the previous two: control. How can we maintain some level of control over these devices, when their behavior is by definition statistical and unpredictable?
On the one hand, you can mangle your device's predictive behavior by giving it too much data. When I visited Nest once, they told me that none of the Nests in their office worked well because they're constantly fiddling with them. In machine learning this is called overtraining. On the other hand, if I have no direct way to control it other than through my own behavior, how do I adjust it? Amazon's and Netflix's recommendation systems, which are machine learning systems for predicting what you may like, give you some context about why they recommended something, but what do I do when my only interface is a garden hose?
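
The Nest anecdote is easy to reproduce in miniature. Strictly speaking, the sketch below shows contradictory training data rather than textbook overtraining, but it captures what constant fiddling does to a model (all data invented):

# Toy sketch: consistent behavior is learnable; constant fiddling is not.
import numpy as np
from sklearn.linear_model import LogisticRegression

hour = np.array([[8], [8], [8], [8], [18], [18], [18], [18]])
consistent = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # warm mornings, cool evenings
fiddled = np.array([1, 0, 1, 0, 0, 1, 0, 1])     # settings changed on a whim

# Trained on consistent behavior, the model is confident about 8am...
print(LogisticRegression().fit(hour, consistent).predict_proba([[8]]))
# ...trained on fiddled behavior, it can only shrug: roughly 50/50.
print(LogisticRegression().fit(hour, fiddled).predict_proba([[8]]))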

As interesting as these issues are, I think that, more importantly, what they represent is that we're entering into a new relationship with our device ecosystem, a sea change in our relationship to the built world.

Think of a sewing machine. It's very complex, but it still only acts in response to us.

Computers acting autonomously erode this simple tool/user relationship.
At the dawn of computing in the late 1940s, cyberneticists like Norbert Wiener philosophized about the increasingly complex relationship between people and computers, and how it was fundamentally different from the way we interact with other kinds of machines. Developers working in supervisory control of manufacturing machines and robotics have had to deal with these questions pragmatically for about 30 years, but thanks to the Internet of Things, this is now a problem that everyone will have to grapple with going forward.
Here's a diagram by the greats Tom Sheridan and Bill Verplank from 1978, in which they illustrate four ways that semi-autonomous computers and humans can work together to solve a problem.

By 2000, Sheridan had expanded these ideas to create this framework, which defines a spectrum of responsibility between people and computers. It ranges from humans doing all the work (this is you writing an essay) to computers doing all the work completely autonomously (this is your car's fuel injection controller). Of course the goal is to get a system to level 9 or 10. That's the maximum reduction in cognitive load. However, for a system to qualify for that, it has to be very stable, its effects need to be highly predictable and, equally importantly, its role needs to be adequately embedded in society. It needs to be OK for a computer to take on that level of responsibility. At the airport we trust the monorail computers to work without human intervention, but we don't trust the plane's autopilot to do that, even though (as I understand it) planes can basically fly themselves these days.
Predictive IoT devices generally fall between 5 and 7 on this scale right now. The problem is that this is the exact range where you're maximizing someone's cognitive load but not necessarily doing all the work for them, so the result of the automation had better be worth it. This fundamentally undermines what we expect from our tools, and when a tool is trying to anticipate what we're trying to do, it fundamentally changes our working relationship with it.
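
For reference, here is that spectrum written out as a data structure. The wording is my paraphrase of Sheridan's levels from memory, not a quotation:

# Sheridan's levels of automation, paraphrased.
SHERIDAN_LEVELS = {
    1: "Human does everything; the computer offers no assistance",
    2: "Computer offers a complete set of action alternatives",
    3: "Computer narrows the alternatives down to a few",
    4: "Computer suggests a single alternative",
    5: "Computer executes that suggestion if the human approves",
    6: "Computer gives the human limited time to veto before acting",
    7: "Computer acts, then necessarily informs the human",
    8: "Computer acts, and informs the human only if asked",
    9: "Computer acts, and informs the human only if it decides to",
    10: "Computer decides and acts entirely on its own",
}

# Predictive IoT devices mostly sit here, with the human still in the loop:
for level in (5, 6, 7):
    print(level, SHERIDAN_LEVELS[level])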

Danny Hillis of the Long Now talks about how we have gone past the Enlightenment idea that we could understand and control everything, for which we built tools that reflected that view. In his view, we are no longer in control of our tools as much as we are entangled with them.
Anne Galloway, a New Zealand researcher who looks at the intersection of animals and digital technology, calls it the end of human exceptionalism. Others would say it's just the Postmodern condition: the recognition that the complexity of the world is beyond our ability to control, and that we have to learn to coax and coexist, rather than command and control.

Because sooner than we expect, we'll be living with hundreds of devices and services trying to model us and predict what will be good for us, and most of them will require our attention. They will want us to verify things, to upload things, to confirm things. They will want us to validate their existence. And they will be wrong a lot. If you have 100 devices and each device is 99% accurate (and most predictive algorithms rarely achieve that level of accuracy, at least not at first), then on average one is always wrong.
So how do we engage with this world? How do we approach wrangling all these thinking tools?
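
The arithmetic behind that claim is worth a moment; these are my own back-of-envelope numbers:

# Back-of-envelope: 100 devices, each right 99% of the time.
n, accuracy = 100, 0.99
expected_wrong = n * (1 - accuracy)  # on average, 1 device is wrong
p_any_wrong = 1 - accuracy ** n      # ~0.63: odds that at least one
                                     # is wrong at any given moment
print(expected_wrong, round(p_any_wrong, 2))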

You can think about working surrounded by a bunch of apprentice assistants, as in a medieval guild.

Or you can take an animist view and assume everything in the world has a consciousness. Phil Van Allen of Art Center has recently started advocating an approach like this. Well, maybe not like THIS.
Image from Miyazaki's Princess Mononoke.
Phil Van Allen: https://medium.com/@philvanallen/rethink-ixde489b843bfb6#.6jszlfw9p

I'd like to explore farming as a metaphor, and not because of the superficial irony of using pre-Enlightenment technology to talk about a post-Enlightenment problem. I really want to create a useful way of thinking about the challenge of smart tools so we can design a better relationship with them from the beginning.

Farming is one of our oldest technologies, one of the most advanced, and one of the most brutal on the land, people, and animals involved. But it got us here.
Also, an admission: I'm a city kid; my family has been living in cities for many generations. I have not raised so much as a single edible plant or owned a pet. I do have children, but I don't think it's the same. But the Long Now asked me to do something brand new and for a general audience, and this is where I ended up, so if this talk hasn't gone off the rails for you yet, it'll probably go off the rails now.

For me, farming is a useful metaphor for how to simultaneously manipulate the state of many autonomous, independent, similar things for your gain. A farmer doesn't raise an ear of corn, she raises a field of corn, and she is not in control of her crops as much as she is in symbiosis with them.
She reduces the complexity of farming by planting many copies of the same plant, and by dividing her land into regions for each kind of plant. Right now it's as if each smart device is a totally different plant that requires a totally different technique to work with it.
She selects crops that thrive in a specific set of conditions and that can synergistically use the same raw material, to maximize the value of that material. What if we had multiple algorithms using the information from the same sensors (say, all the cameras and temperature sensors in your environment) and then fusing their results?
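
Here's one way that shared-raw-material idea might look in code, as a sketch with an invented task and toy data: two different models read the same sensor features, and their probability estimates are fused.

# Sketch: two models share the same sensor features; their
# predictions are fused by averaging probabilities. Toy data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))  # e.g., camera + temperature statistics
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # invented target: "room occupied"

fused = VotingClassifier(
    estimators=[("lr", LogisticRegression()),
                ("rf", RandomForestClassifier(random_state=0))],
    voting="soft",  # average the two models' probability estimates
).fit(X, y)
print(fused.predict_proba(X[:1]))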

A farmer uses specialized tools to work on many plants at the same time, whether it's a plow, a harvester, or a scarecrow. That's why she chooses many of the same thing. In the algorithm analogy, how can we group large numbers of algorithms and work on them all at once?
She expects pests. Right now everyone is shocked when their smart fridge starts posting spam because it's been hacked. That's kind of like a fungus infection: farmers have tools for that and try to maintain good practices to minimize it, but when it happens, no one is surprised.
She doesn't expect to extract the value from a crop immediately (that may take months or years), yet she knows she will have to maintain it that whole time regardless. Right now we expect our digital products to work immediately, and we think they're defective or not worthwhile if they don't. What if we designed things so that they would only be useful after we had lived with them for a long time, but then they'd be REALLY useful?

Another aside: machine learning algorithms are pattern recognizers, so they need to know which patterns are important. Whenever you mark email as spam using your email program, you are doing what's called training the algorithm to understand what you consider spam.
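
A minimal sketch of that loop, with toy data (real filters are far more elaborate):

# Sketch: marking mail as spam adds a labeled example; refitting on
# the growing set is the 'training'. All examples invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = ["cheap pills now", "meeting at noon",
          "win money fast", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = marked as spam by the user

vec = CountVectorizer()
model = MultinomialNB().fit(vec.fit_transform(emails), labels)

# Each new 'mark as spam' click appends an example and the model is
# refit, gradually shaping what this user's filter considers spam.
print(model.predict(vec.transform(["cheap money now"])))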

Similarly, when you make a choice using virtually any digital device or service, you're training an algorithm. Facebook asks you to label people in your pictures to train its algorithms to associate a set of facial features with the person you labeled.
What happens when you train a single animal? What are your mechanisms of control? What are your expectations?
Well, you expect that it will require time, and that it will require a combination of both positive and negative reinforcement. You expect that it will regularly misbehave and that you will have to reinforce what you teach it. Conversely, you can expect that it will probably learn a bit from other animals without you having to tell it everything, and that its behavior will surprise you in good ways in addition to bad ways.
Image source: http://countingsheep.info/permalamb.html (Anne Galloway's Counting Sheep project)

But what happens when a farmer has a lot of animals to control? She can't train all of them individually, so over the last 10,000 years she's developed some tools for managing them.
First, she selects animals that work well in groups. Our algorithms are currently built one at a time and the expectation is that our interaction with them will be individual. That doesn't scale. We need algorithms that are experienced well together, or else we're not herding sheep, we're herding cats.
Next, she has a crook. When you need to assert control, you need a clear way to do that which works on a wide variety of animals, and we need consistent ways to assert immediate control over a wide variety of smart devices.
She has a dog, which is a smarter entity that also needs to be trained, but once trained can itself be used to autonomously control multiple other independent entities.
She can hand off the work to an assistant. Farming has a whole class of people who can take responsibility for all of the animals and who can work together; responsibility can be delegated. As Tom Coates of Thington points out, most IoT systems are not built for many people to control them simultaneously, even though their effects are often experienced in shared environments.

Today we don't have an Internet of Things, we have many AOLs of things. They've been intentionally made mutually incompatible, and although some may be cute on their own, when you have a lot of them and they have to be dealt with individually, it's a big problem.

I think in 1,000 years, maybe even 100 years from now, this entire discussion will seem absurd, like arguing about whether iron is a good thing or a bad thing. We'll see it as just the way the world is. Our bodies are going to be semi-autonomous components that we have some control over, in an ecosystem that combines other biological and digital semi-autonomous components. Everything is going to have some control over, and be controlled by, other things.
Some of them are smarter than others, some are more autonomous than others, some are even smarter than we are in certain ways; some have positive symbiotic relationships, some are parasites. The boundaries between minds and bodies, between natural and artificial, and between human and non-human will have been eroded. Our world will have reconfigured itself around the assumption that everything is much more permeable and much less clearly delineated than we had fooled ourselves into believing. We are not as gods. We are, and always have been, animals in an ecosystem.
And it won't all be good. There will probably be terrible things that happen to people's bodies, minds, and societies. There may also, hopefully, be good things.
Image: Camille Pissarro, Shepherdesses, 1887

This is the looking glass that we've made, and it's time for us to step through and explore the field beyond, because we have no choice but to engage with it, to make it what we want it to be, what we need it to be. It is not androids who will dream of electric sheep; it will be us.

Thank you.
