
Could we start by you saying your name and position?

Barbara Grosz. I'm a faculty member in computer science at Harvard. My official title is Higgins Professor of Natural Sciences.

Can you tell us how your work in AI began?

I had passed my qualifying exams at Berkeley in computer science. I actually did research in theoretical computer science for that, having to do with compilers and what you would call system verification. I had also previously done some work in numerical analysis, and neither of those was a field I was really excited about continuing in. So I was searching for a thesis
topic, and I was fortunate to meet Alan Kay, who many people know as the inventor of Smalltalk and the Dynabook, which led to the laptops we all carry around now. Terry Winograd had just finished his dissertation at MIT, and Alan was sure that once he got Smalltalk to work, and it was just on paper at that point, it was just an idea, that it would be easy to write a dialogue system in Smalltalk. So he proposed to me that I take as my thesis topic writing a program that could read a children's story and tell it back from one of the characters' points of view. And Alan
was always one for big, ambitious projects so I actually spent a year reading children's stories,
reading literature about reading and children and studying natural language processing, which I
hadn’t studied officially. At that point, two people from SRI came to visit me, Ann Robinson and
Bill Paxton, and they said if you’re crazy enough to work on children’s stories, why don’t you
come work with us on the SRI speech understanding system because speech is easier than
text, and the kinds of task dialogues we’re doing are easier than children’s stories. So they were
right about the children’s stories and the tasks. I had actually at that point come to realize that
children’s stories are really mechanisms for socialization and getting children to be part of the
culture and not very simple to read. And the things that they used to teach children from were so
poorly written that it was no wonder the kids couldn’t read. They weren’t right about speech; it
took many many decades for speech to be conquered. So I actually left Alan’s group and went
to work in the AI center at SRI on a couple of projects that had to do with understanding
dialogue in task situations.

Was your work in speech and AI and such influenced at all by popular culture?

This was a long time ago. It was before 2001, but I didn't read science fiction, so the answer is no. I was a math major as an undergraduate. There was no computer science major at that point. There were a few computer science graduate programs but no undergraduate programs. I
really loved the solutions, I loved mathematics, I loved the solutions in theoretical computer
science and numerical analysis but I didn’t like the problems so much. What really drew me into
natural language processing which is really where I started is that I loved the problems and I
thought that I had many more intuitions about cognition and language than I did about satellites
which is really what numerical analysis was focused on at the time, so that's what drew me. And
also I think that this was the reason I went to work with Alan; he and I had in common two
things. One was wanting to make computer systems easier for people to use and the other was
that we were both appalled at how computers were being used in education. In those days the idea of computers in education was, and this is a caricature but it's not too far off: 2+3? The kid types four, and the computer says no, wrong, try again. The kid types seven, and it says no, wrong, try again. I should say, when I went to college I was going to be a seventh grade math teacher, so I was always interested in math and education. Anyway, I knew that that's not how you could teach kids, so that was something else that Alan and I had in common.

Let's fast-forward to recent times; what's your present work in AI?

The early work I did was in dialogue processing, and there I constructed the first computational model of dialogue. That was really a tripartite model. It had a linguistic structure, it had an attentional structure, what we called attentional state, that tracked the focus of attention, and it had something called intentional structure, which was tracking the purposes of the dialogue. And developing that was what the first decades of my career were about. That intentional model required that we
be able to model the plans of multiple agents. Up to that point in AI, the work in planning was all focused on what a single robot could do, and even people in the area of multi-agent systems relied on this individual-robot planning kind of technology. Well, let me back up and say there was work in speech act theory and dialogue that used those same single-agent models, and Candace Sidner and I argued, based on the work she had done earlier in dialogue and also what I just described to you about my work, that those models wouldn't suffice, that you needed a model of a collaborative plan. And so the next
decades of my research were focused on developing models of collaborative planning and examining the decision making that takes place under that: examining issues like interruption, when interruptions are okay, and information sharing. And that all leads to my
focus right now which is taking those models of collaboration and I guess as the trendy phrase
goes, using them for social good. In particular, I have a joint project with a pediatrician at
Stanford to develop computational tools for care coordination. So the healthcare world talks
about teams but none of the technologies supports teamwork. So we’re developing technology
to support teamwork.

What do you feel is the computer science community's responsibility for communicating to the public about what computing and robotics do?

So I have always argued since I became a senior researcher that we have a responsibility to the
public to explain what we do. To explain not only the promise of our technology but also the
limitations. As I said, I started out wanting to be a teacher, so I want to teach the world about the science and technology. I think organizations help here; CRA, and we're here at a CRA meeting, is one of those. I think the American Association for the Advancement of Science is another. So
participating in those organizations… different people have different talents. Some people are
great at getting up and talking to a thousand people, some people love to be on television. I’m
not one of them. Some people write, so I think one of the important things I did for educating the
public was to write a blog post for The Atlantic in 2012 arguing that we should not be attempting to replace humans but instead should be building systems to complement humans. So I think different people have different ways of contributing to that, but it is part of what we're responsible
for as scientists.

Are there topics that you think we as a community have done a better or worse job of communicating to the public?

So the answer is not that I can think of right now; I have reactions of the moment. I think if I had a criticism of the community, it's that it's too eager to talk about what it can do and too reluctant to talk about what it should do. And I think that's true for an interesting reason, which is that until very recently it wasn't clear that we could do very many interesting things. And so there's now been this boom, and computing has entered all areas of life. When what computer systems did was help banks process money better, or help scientists get satellites into space, there was a very constrained part of public life that they dealt with. One of the wonderful things is that smaller computers mean everybody has one. I mean, computing has been terrific for really increasing communication and building a global village, as it were, but we didn't register that the audience for our technology was now the general public and that we had to be careful about what we said technologies could do.

I love this distinction between telling the public what we can do and what we should do. Thinking out loud, I wonder if we don't tell the public what we can't do, what is hard to do.

I think this is especially true now with certain emerging technologies that have some intelligence in them. I will go off in that direction for the moment, or also with technologies that are about communicating and making communication more fluent, rather than technology that is basically selling us things. In each of those cases it's again what I was saying: it's amazing we can do it, but we need to stop and say, what should we do and how should we do it? We actually know from research in artificial intelligence that it's much harder to know what you don't know than what you do know, and, as several people have pointed out, this is very important in the machine learning community: there's what you know you don't know, and then there are the unknown unknowns, what we don't know we don't know, and people are not very good at those either. There is also evidence that
we are not very good at predicting the consequences of what we build and design and we see
that in law as well as in technology. So, people in the government, people in the legal
profession, they write laws and then whoops I didn’t mean for it to do this! I think we have to
develop a culture of looking for the unintended consequences, of thinking about how people
might use systems in ways we never intended. You know it reminds me of something that
Charlie Rosen, who started the artificial intelligence center at SRI said to me. He said “I won’t
believe we have a really intelligent system until it can figure out that my shoe is a good
hammer." So there's the kind of leap you might want in certain circumstances. We also have
to start predicting the leaps that people will make in the way they’ll use technology. And I want
to put a plug in here for something which I know you care about also, which is that this is something that we can't do alone as technologists, as computing people, as computer scientists. This
requires bringing in people who have thought about societies, about people, about how people
interact, about how people think, so the full range of the social sciences and the humanistic social sciences, and also people who think about ethics and what's right and what might not be right in certain circumstances. So computing is everywhere now, and it needs to bring everywhere into how it designs its systems.

So my first question is, thinking about computing technology so far, how do you think it has changed the way people work?

So I’m gonna start with the amazingly positive things they do. We can have collaborations
around the world. People whose jobs have these pauses for a moment can actually use those
pauses to communicate with other people. So I’m always fascinated by how many people I see
doing construction on the street and there's a break, and they’re communicating with somebody,
or maybe they’re reading the news, I don’t know what they’re doing on their smartphones. It’s
made work easier in many ways; it's gone from making it easier to write a document yourself to making it easier to write a document with many people. One of the things that Alan Kay and I aspired to
do, which you’ll probably laugh about, was to give children in school the ability to write essays
without having to erase. So when I was a child, if I made a mistake I erased, and if you made
too many erasures your grade would get lowered so you would start handwriting the whole thing
over; I had a big lump on my finger from it. This is ridiculous; it takes away from the thought part of writing. So kids don't do that anymore, and it just makes it easier to write. Think about how science is done now; it's completely changed.

If you look at computer technology advancing even more, any thoughts about how you think that
might change labor further?

So I don't make predictions about the future. I always tell the story of when I was finishing my
PhD thesis, having a conversation with one of my colleagues who was also finishing his, and he
said to me “It’s good we don't care about money, because AI will never amount to anything that
will make any money." Let me say instead what I think is important. Because of this way in which computing technology has become completely intermeshed with the way we lead our lives, it is really important that systems, especially in the workplace, and I want to focus on that, are designed in a way that the jobs that people have are good jobs and meaningful jobs. And I can give you an existing counter-example, which is the field of healthcare. Electronic
health record systems have made the practice of being a doctor worse. They don't support
clinical care, and they put an enormous data entry burden on physicians. So all of what they do
down the pipe to improve healthcare by collecting data is fantastic, but they weren't designed
to enhance the delivery of health care. They were designed for the billing department or the
insurance companies, and you can see that. And they could have been designed, and in fact I
think there are several startup companies trying to break into a market that's really political,
but they could have been designed to work with physicians to deliver health care. So another
example I can give you is in customer service which irritates almost all of us. Many of the
customer service functions have been taken over by computer systems and then when the
computer system fails you get a human being. But they have been designed largely in this way, that you first get a machine and then you get a person, which makes for a bad experience for the customer unless their request is so simple the machine can answer it, and it makes for a not very good job for the person. If they had been designed instead for the person to answer the phone first and for the computer to support them, it would be a better experience all around. Would you
have saved as much money for the companies? We don’t know the answer to that. I would love
somebody to try and design a system that way.

That's what you said in your blog, though. Because what you're saying is essentially, instead of replacing that human greeting process with a computer, design the computer to help the person do a better job.

So the article I wrote was the year that we were celebrating Turing's birth, really, the hundred
year anniversary of his birth and I was asked to speak in Scotland and then I was asked to write
this blog, and it basically started by asking what question Turing would pose now. I
have no idea what question Turing would pose now, but the world of computing has changed
and we can go through how it has changed enormously but the most important way is that it’s
permeating everywhere, it’s not one person on a machine. It’s not like the telescopes that
astronomers use. And in that context I argued that I don’t know what he would pose, but I know
what question I would like to pose which is could we build a computer system that would
function over a long period of time, not just one shot, as an effective teammate. It's not that you would think it is a human being; it wasn't going to cover up its personality, but it would behave so well that it is as if it were a human being. You wouldn't ask, why is that machine doing that?
So the blog argues for that as really the goal for artificial intelligence in particular, but I think really for computing technology overall. I would now generalize it to say that if we're going to put computing systems into the work that people do, we should design them to enhance that work. That doesn't mean we won't lose jobs. We may well lose jobs, but the jobs that remain should be good ones and the new jobs that get created should be good ones.

Could you talk about ways that we could design computers so they empower physicians versus ways that they could disempower physicians?

So I think the first step is to be very clear in the moment about what computer systems are good
at. So for example in the moment they’re much better than human beings, and probably always
will be better than human beings, at taking a huge amount of data and assessing what's in it, drawing a certain kind of conclusion from the data, basically statistical inference; no human can do that as quickly as they can, or with data that large. But there are things they're not
good at; they’re not good at counterfactual reasoning. They aren’t human.

Can you define counterfactual reasoning for our students? It’s a good point.

It's what-if reasoning, or hypothetical reasoning might be the easier way to explain it. If I go down this path, what will happen? What if I go down that path? Well, you presumed this; what if this were the case instead? So it's that whole range of thinking, not just about how the world is right now, but
how it might be. Looking at the work to be done, especially in health care, and then figuring out
what the right thing is to give to the computer system. So I mentioned that I was doing work with
this pediatrician, his patient base is children with complex conditions, so they could be genetic in
origin, they could be a result of an accident, but they have many many different problems to be
dealt with at once; it's not simple. How do you support that care? The computer systems can keep track of what multiple different doctors are doing and try to make sure that communication is working. They can't take over communication, but they can help parents in tracking things well. If you look at the other end of the age spectrum, which
anyone who has elderly parents will have looked at, there are things that computing
technologies can do with helping people keep in touch, with helping elders keep in touch with
the family, but there are also many issues to it. Some people have proposed computing
babysitters or computers that would be the companions for older people. I think that’s a huge
ethical mistake.

Can you say more about why that’s a mistake?

Yeah, because we know that social interaction with other human beings is the key to a better life for older people, and it's essential for learning. Children learn within the context of
social interaction.

I’m interested in cases where computational technology affords people more agency and also
where an advanced computational system might actually erode someone’s agency.

So I'll go back to the healthcare domain: how should we design systems to help in medical
decision making? If we design them to complement what physicians can do, by delivering to
them information relevant to the case at hand, then they will enhance human agency. If instead
we design them so they deliver a diagnosis, and then the physician has to contravene that
diagnosis, we have removed human agency. And the same thing is true in the justice system, and this goes back to an earlier part of our conversation: there are predictive systems now. If the judge uses those systems' conclusions as the answer for what is going to happen, then that has removed human agency, and it has probably done harm, at least in some cases. If instead, the
system presents its results as a piece of evidence to be weighed with everything else the judge
is weighing, then you won’t have that decrease in human agency. And I think we’re really at a
point now where we have to think very carefully about the limitations of the computer system.
And here the limitation isn’t just people talking about biased algorithms, it’s the data they’re
working with; how biased is that data? What does it mean? To take it away from some of the more controversial areas: certainly until very recently in the US, gender information wasn't a part of biomedical experiments. Everything was done on men, not just human beings but on
male mice. Why? Well women have hormones. Well yes, women have hormones. And it was
done on relatively young people, not on old people. If you give old people the same dosage as young people, you can injure them. So this is all data that you wouldn't want to generalize from. And of course that's not all we're going to learn over time! It's not as if we have seen all of the options, so we have to be very careful about what powers we attribute to computer
systems. That said, you know my friends that have Roombas that clean their carpets, they just
love them! So there are places to let the computer do its thing.

So when we think about ascribing autonomy to a system, what are the benefits of that and what
are the pitfalls of that?

Maybe you're too young to remember pet rocks. Also, Clifford Nass and Byron Reeves talk about how easy it is to get even people who are knowledgeable to treat machines as though they were human. When I did my very early data collection of dialogues between people and computers, it was before there was a label "Wizard of Oz," but it was essentially a Wizard of Oz experiment. I did them in 1972 or 1973; that label came in 1980. At the end I would tell people, actually, I'm sorry, it wasn't a computer system. They were really disappointed. So also you've probably seen
these psychology experiments where you just have a few balls moving around, and people attribute agency to them. So I don't think we're gonna block that; it's just what we do. And
there are certainly benefits to assuming that the Roomba can do its thing. So I think again it’s
also a matter of design. So, Vijay Kumar today said intellectual honesty is important. We have to
be honest about what the systems can do and can't do so that at least people have an opportunity to block an inference that might be harmful for them to make about the autonomy of the intelligent system with which they're interacting. Now, you know, people's fear of
autonomous robots, that's nothing new. This goes back at least to the Golem of Prague and
Frankenstein. I’m told in Japan people don’t have the same kind of fear of robots: they think
they’re cuddly. So who knows where this comes from. But I think as systems become more
autonomous people become more concerned about what they can and can’t do and I think that
really ups the ante about them being honest about their limitations.

When these systems do harm or make mistakes, how do we think about responsibility?

So this is a very big question now and as with many such questions I don’t have a simple
answer, and I don't think that there will be a general answer, and I don't think that there's a one-size-fits-all answer. I think that part of the design process is to understand where responsibility
lies for the behavior of the system. If this system is going to make this kind of decision, then who's going to be held responsible when it gets it wrong? It should be thought about.

Do we do this today, do you think?

I don't think so. I mean, even, to take one of my favorite examples, Elon Musk should not have been allowed to call the system he put in his car an autopilot, because it suggests to people that it has an autonomy that it does not have the skill to carry out.

So in the blogosphere we hear a lot of talk these days about artificial general intelligence; I'm just curious about your thoughts on that prospect.

So remember, I make no predictions about the future. I'm going to start by saying that I think
focusing on trying to build a system that has the same intelligence as a human being is the
wrong focus. My quip on this is that we actually know how to replicate human intelligence.
They’re called children. And the problem with them is they have only human intelligence. So we
don’t need to do that. From a curiosity point of view, there are many other questions that I think
are more compelling including the question of how do you build an artificial system that could
work well with and support people. So it's a fine philosophical question, and that's really what Turing was asking, is it possible, but I don't think it is how we should focus our scientific and
technological development efforts. I also think if we look at AI and where the whole field is right
now it’s all in specialized areas and to get to the kind of intelligence we have is much, much
harder, so I’m saying on the one hand we’re not close and on the other hand why should we
spend our resources on this? It's the wrong use of human intellectual resources, it's the wrong use of funding resources. We should be working on smarter systems that do better, that do in individual realms the things that we consider intelligent in people. Let's
go there.
