
We’d like to start just by asking you to say your name and your affiliation.

My name is Moshe Vardi and I’m Professor of Computer Science at Rice University.

How did your work and interest in computational technology and AI begin?

My work has, in some sense, been touching on AI for a long time. I mean, going back all the
way to my PhD dissertation, which was about using logic to specify database management systems.
So, it was always around logic and reasoning; in some sense logic has been the theme of my
research from the very beginning. It was not thought of so much as AI. We thought at the time of
database management systems, but really, if I look over my whole career, it was about
automated reasoning.

Were you inspired, as you dived deeper and deeper into advanced computer technologies, were
you inspired in any way by popular culture, like stories or movies or such?

So, when I was in college, I was in the stacks. And I saw a small book called I, Robot. And I
saw, well, what is this book? And I pulled it out and I started reading. And I finished the book,
standing in the stacks, okay. And I just discovered Asimov and I loved it. And it actually was
how I discovered science fiction. I had no exposure to science fiction before that. It was how I
stumbled on science fiction. But it was clear that, you know, your mind segments
things: it was fiction, and that was it. And it always seemed incredibly far from the reality of the
research we were doing, incredibly far from it. And in some sense to me, the kind of turning point was in
2011 when IBM's Watson won Jeopardy. Because, before that, even when Deep Blue won in
chess, I said okay, it was clearly brute-force search, somehow. I said, okay, it's a natural
progression, but it did not seem like real intelligence, even though the program played very well.
But Watson somehow seemed like a glimpse of intelligence there. And it was the point at which
I started thinking more deeply about the possible implications of AI.

It’s interesting that you were inspired by popular culture, Asimov, and also by a real news story
as well; this combination. Can you talk a little bit about your recent and current work in AI?

So, my current work in AI: again, logic has been the theme of my research for many years, but
the other thing is that in AI there have been different schools of thought. One school was about symbolic
reasoning, and the other was about probabilistic reasoning. Today, in my research, I'm trying to
combine the two. So, how do we combine logic and probabilistic reasoning in the same
framework? This gives us a principled approach, for example, to do probabilistic inference.
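To give a rough sense of what combining logic and probability can look like, here is a minimal illustrative sketch, not Vardi's actual research code: probabilistic inference is cast as weighted model counting over a tiny propositional knowledge base, using brute-force enumeration. The variable names, weights, and rule are invented for this example.

```python
# Minimal illustrative sketch: probabilistic inference cast as weighted model
# counting over a tiny propositional knowledge base, by brute-force enumeration.
# The variables, weights, and rule below are invented for this example.
from itertools import product

VARS = ["rain", "sprinkler", "wet_grass"]
# Prior probability that each "root" variable is true.
WEIGHT = {"rain": 0.3, "sprinkler": 0.4}

def world_weight(world):
    """Weight of one truth assignment: product of the priors it agrees with."""
    w = 1.0
    for var, p in WEIGHT.items():
        w *= p if world[var] else 1.0 - p
    return w

def knowledge_base(world):
    """Logical constraint: the grass is wet iff it rained or the sprinkler ran."""
    return world["wet_grass"] == (world["rain"] or world["sprinkler"])

def probability(query, evidence=lambda w: True):
    """P(query | evidence, knowledge base) via weighted model counting."""
    num = den = 0.0
    for bits in product([False, True], repeat=len(VARS)):
        world = dict(zip(VARS, bits))
        if knowledge_base(world) and evidence(world):
            w = world_weight(world)
            den += w
            if query(world):
                num += w
    return num / den

if __name__ == "__main__":
    # P(rain | the grass is observed to be wet) is roughly 0.52 here.
    print(probability(query=lambda w: w["rain"],
                      evidence=lambda w: w["wet_grass"]))
```

Real systems in this area replace the enumeration with far more scalable counting techniques; the point of the sketch is only the framing of logical constraints plus probabilistic weights.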

Is this, would you say, one of the essential connections we need in order to approach human
intelligence? Both of these?

You know, it’s clear that we are living in a world with a lot of uncertainty; and that means that
when you have machines that are going to make decisions, or recommendations, they will
always come with some degree of certainty. And what my work is trying to do is to do it in a
principled way. In fact, I've followed something known as PAC – Probably Approximately Correct
– which is trying to quantify the actual uncertainty, to quantify the certainty of your
inference.
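To make the "Probably Approximately Correct" flavor of guarantee concrete, here is a generic sketch assuming a simple Monte Carlo setting; it is textbook material (a Hoeffding bound), not the specific algorithms from this line of research.

```python
# Illustrative sketch of an (epsilon, delta) "Probably Approximately Correct"
# style guarantee for a Monte Carlo estimate. Generic textbook material
# (a Hoeffding bound), not the specific algorithms from this research area.
import math
import random

def samples_needed(epsilon, delta):
    """Hoeffding bound: with this many i.i.d. samples of a [0, 1] quantity,
    the empirical mean is within epsilon of the true mean with probability
    at least 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

def pac_estimate(sample_once, epsilon=0.05, delta=0.01):
    """Estimate E[sample_once()] to within epsilon, with confidence 1 - delta."""
    n = samples_needed(epsilon, delta)
    return sum(sample_once() for _ in range(n)) / n

if __name__ == "__main__":
    # Toy example: probability that two dice sum to at least 10 (true value 6/36).
    roll = lambda: float(random.randint(1, 6) + random.randint(1, 6) >= 10)
    print(pac_estimate(roll))
```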

The first topic we have is really about the question of publics and communicating AI to publics. And
we have kind of a positive first question, which is, can you think of an example where an AI
system has been accurately or usefully communicated to the public? And how it was
communicated?

Most of the time it’s not done that way. I mean, for some reason, we live in the age of hype.
Okay, I mean, starting from Watson in 2011. If you look, most of the funding today
for AI R&D comes from the commercial side. It’s almost impossible for these corporations to
give you a dispassionate and nuanced description of what happened because it is so much tied
to their commercial interest. You know, I don’t know if it’s fair to blame the corporation, you
know, this is the style of today. You have to sensationalize everything. The media tend to
sensationalize it. A very important result came out about three years ago. A pure algorithmic result: graph isomorphism can be solved in quasi-polynomial time, which is a beautiful
mathematical result, a computer algorithmic result. And the hype was, this hard problem can be
solved practically. And none of that was true. I mean, somehow, I think even the researchers
tend to exaggerate the importance of what they are doing, and so we are drowning in hype.

What is the researcher’s responsibility in communicating about AI to the public?

You know, it’s actually a bit of a dilemma. And the dilemma is because – I talk a lot to the media
– and a couple of years ago I was in fact in a news conference, a press conference with two of
my colleagues, that had to do with the future of AI. And if you try to be very measured, and very
nuanced, you know, it’s like what you do in photography: it gets washed away; if there is too
much light, nuances get washed away. So, the media, you know, the reporters, what they want
are sound bites. If you give them sound bites, they will use them. I actually know how to give sound bites, and I was
quoted by about one thousand newspapers. And I feel sorry for my two colleagues, who were
trying to be careful academics, and it’s just not dramatic enough. The public likes drama. If you
tell them, ‘well, on the one hand, on the other hand; it may be good; it may not be good,’ that’s
not a story. So, unfortunately, the nature of media today, is, if you don’t sensationalize it, they
will choose something else. They will always go to the more sensational story. If you try to tell
them a careful story, boring. They will go to another sensational story. So, it’s a very delicate
balance between ‘I do want to reach out’ and that involves some element of sensationalism, but
still how do you do it in a somewhat responsible way? It’s a delicate balance.

Do you think your colleagues in AI know what you just said? Or do you think people aren’t so
aware of what you just said?

Maybe a decade ago, maybe more than a decade ago, I had a one-day training in how to talk to
the media, which was amazingly useful. Actually, I think we should do it more broadly. The
mainstream media is declining today because much of the advertising has gone to digital
advertising, so this was some reporters who lost their jobs in newspapers, and they created an
outfit to train people on how to talk to the media. This was just half a day, but it was
immensely useful to tell you how the whole thing works, and we should do it more broadly,
teach people how to talk to the media.

The next topic we have is about labor. I know that is something you have thought about a lot.
The first question is, up until now, how would you say AI systems have been changing the way
people work?

It depends whom you are talking about, okay. So, you’re using AI all the time. Every time you go
to Google and do a search, you use AI, okay. Every time, all these recommendation systems
you are using all the time, they’re all AI. If you look at, I would say, professionals, most of the
time, AI and technology is making us more productive. Just our ability to communicate with
people all around the world. And in fact, it makes us so productive that if you ask almost any
professional today- go to the conference here and ask people ‘How many hours do you work
per week?’ I don’t think you will find anybody who says ‘Oh, I work nine to five every day.’ You
won’t find those people, because they catch up on email in the evening, they do their writing on the
weekend, all over the world; and one reason is because we’re so productive. Technology has
made us more productive. And so, for us, technology is really an augmenting technology. It
makes us more productive, and the nature of our business is such that if you are an academic, if
you are more productive, it doesn’t mean we need fewer academics. It means, okay, I get to
publish more papers. No university says, okay, if our professor used to publish three papers per
year, and now they publish four papers per year, we need fewer professors. Nobody says that.
Okay, great, we’re more productive.

But if you take people who work in, let’s say, manufacturing, and, you know, there was a panel
here on how we can augment people rather than replace them, but if you make these people
more productive, the manufacturer may decide, ‘okay, I need fewer employees. Maybe I can sell
more, but it’s not clear that I can sell more. Maybe it’s not clear that there’s demand for more, I
may need fewer people.’ So, technology, how technology has been impacting work, it partly
depends where in the workforce you are. And labor economists have talked about what they call
the polarization of the workforce. So, they look at the workforce by the skill level. You can think,
salary is a decent proxy for skill. And what they see is at the high end of the skill level, there is,
technology augments people. And people with technological skill are in much demand today.
The salary goes up, and these people are doing very well. At the very low end of the skill
spectrum, think of the person who rearranged this room. We don’t have robots that can
rearrange the room from one configuration to the other. This requires a tremendous amount of
skill; the state of the art in robotics is just very far from it. But how much are we paying the people
who are rearranging the room? We are paying them somewhere around minimum wage. So,
they have not been impacted by technology much. Where technology has the most impact,
adverse impact, is in the middle of the skill spectrum, in what are called ‘routine jobs,’ where
the pay is more than minimum wage, where it makes sense economically, where you have an
incentive to automate. Manufacturing has been the sweet spot for automating. The jobs are
routine, and we used to pay them twenty, twenty-five dollars an hour. These were nice, middle-
class wages. This is where automation has had the biggest impact.

When you look over the years, you find that the definition of routine is changing, because
routine depends on what machines can do. So over time, the routine part has been moving up
the skill ladder, so to speak, up the skill spectrum. And suddenly it means more and more
people are being affected by this. But if you look over, I would say, thirty years, the biggest
impact has been so far on manufacturing.

I think you have also answered partially the next question, which is thinking about future trends,
the next few decades, how do we think AI changes the way people work? Part of what you’re
saying is, in fact, the word ‘routine’ will keep expanding.

One is the word ‘routine’ will keep expanding, but the reality is there are lots of people who do
all kinds of projections. And you should take all of them with a boulder of salt. Because one thing
to understand about the economy is that the economy is just a very complex system of feedback
relationships. So, what happens, you take something, and you automate it. Well, ultimately you
need fewer workers; you increase productivity. If you increase productivity, prices will decline. If
prices will decline, demand will go up. How does it all balance in the end? The answer is, we
don’t know. It could be there will be more jobs that are created than jobs that are destroyed. Or
it could be the other way around. Really nobody knows, okay. What we have seen so far was
that the jobs that were created were high-skilled jobs. That’s why people today with high skill are
doing very well. The jobs that were destroyed were middle skilled jobs. And now what happens
to the people who lost their jobs? They can try to side-skill, which means find other jobs with the
same skill level, but there are fewer of them. They can try to up-skill. Up-skilling is very, very difficult
for people to do, especially if we are talking about someone who is, let’s say, in their forties or their
fifties; you tell them, well, you can go and become a coder. Not so simple to take a miner
and make them a coder. Or they can down-skill. Down-skilling also means, for them, loss of
income. At some point they say it’s not worth it for me to work. And that’s why we have seen
huge declines in what’s called Labor Force Participation Rate, which is the rate at which people
participate in the workforce.

So, when that declines that means more people are giving up essentially?

More and more people are giving up. It has had huge impact on men. The Labor Force
Participation Rate for men has been in decline now for about fifty years. Books have been written
about it. Nobody fully understands why men are dropping out of the workforce. Sometimes economists
have perverse explanations. Somebody looked to see what these men are doing. They find that
especially young men, when they’re out of the workforce, are playing video games. So, the
conclusion was, ‘they dropped out of the workforce so they can play video games.’ Which, I
think, is just completely perverse. But now, if you look especially at men without a college
education, which means high school or less, about one in five such men is not working, which is
a level we would expect in an economy in deep depression. When we talk about 20% of
men not working, these are men between 25 and 54. And this is huge. A book came out a
couple of years ago, Men without Work, and the author said this is a national crisis, to have so
many men without work. It is a national crisis, it has huge social impact, it goes beyond just the
job market. I mean these men lose their social status, they lose their income. It has huge impact
on marriage. Marriage is declining, because these men have less to offer in the matrimonial
marketplace, so to speak. So, this has huge impact on society.

I think you’re answering my next question, which is turning from unemployment to the question
of dignity; what tensions can we expect in regards to human dignity when AI systems under
development engage with particular labor markets?

[...] You go back to Socrates and Aristotle and Plato. They were asking these questions,
which to them were among the most fundamental questions in philosophy, which they called, ‘What
is the good life?’ But really that’s not a very good translation; it is really, ‘What is the meaningful life?’ And
in the end the answer is that the meaningful life is a life in which a person is needed, and a person has
the feeling of making some kind of a contribution. And when you take this away from people,
you rob them of their dignity. Think of the movie It’s a Wonderful Life, when
Jimmy Stewart thinks he failed and he’s about to commit suicide, and the angel comes
and shows him what impact he has had on people’s lives, and he decides not to commit suicide
because he realized he made an impact on people’s lives. His life suddenly gets meaning. And
the meaning is not just internal meaning. We are social animals. The meaning is, we made a
difference in other people’s lives.

The next topic we have is about power negotiations, about the whole idea of transference of
power, or agency, between people and machines or people and people. So, the first question is,
can you think of an example or a hypothetical AI system where power is transferred from a
human being to a system, an AI system?

So, I don’t quite see the AI system getting the power. But, more, we are seduced by technology. By
the entertainment value of technology, by the convenience of technology. Somebody has
recently written about the Tyranny of Convenience, okay. If I have to think, okay, should I get
into my car? I just have to get something from the drugstore. I can get into my car, and I’ll drive,
and, you know, it will probably take me half an hour to get to the drugstore and come back.
Or I can sit down, and I can click on something, and in five minutes I’ll order it. I will order it
online. Especially since Amazon had a brilliant idea, Amazon Prime: you already paid for the shipping
in some sense; now I feel I should order to get the value of my Amazon Prime. So, you order it.
Now the outcome may be that this store I might have gone to buy at may close.
And if you look at our urban geography, part of its diversity is the retail establishments
that we have. What happens when these things close? In some sense, if we want to see what
happens we can look, and it doesn’t even require technology: look at what happened when Wal-Mart
came to small towns. They go outside of town, and they open a huge store. And the parking is easy,
not like in the downtown strip. And they can offer more selection. What happened? Main Street
was destroyed, basically. This has huge impact, social impact, because Main Street also gave
you, if you think about it, the civic leadership of these small towns; it was the merchants of
Main Street, and when this was gone, the town lost a layer of civic leadership.
As for the fact that technology can destroy community: I read a story years ago. It stuck in my mind.
This is after Franco died in Spain. And Spain was a very backward country compared to the rest
of Europe. And now suddenly Spain becomes democratic; it joins the EU. And one of the things
was that many villages did not have running water. And so, they said, wow, we are in the twentieth
century now. In this place, they still go to the river to wash; they go to the well to bring water. We
have to bring water into the houses. And of course, it was much more convenient. But in these
villages, the social structure was destroyed. And the reason is, because the men used to go
together to bring water from the well, and that was where people met, and associated, and
schmoozed; it was the social gathering place. And the women went to the river to wash.
And suddenly, when you took this away, everybody can do everything now in their home, and social
relationships broke down. Think of what happened in America when you introduced the garage
door opener. Okay, before that, people would pull up in their car, get out of the car, and people arrive at
the same time. Your neighbor is also arriving at the same time. You exchange some words; it’s
not very much, but at least you have some interaction with your neighbor. Now, days can go by
and you have no interaction with your neighbor. People used to sit on the porch. Now they’re inside
the house watching TV. So, yes, technology has this impact. Technology, for a while, throughout
the twentieth century, has been destroying social relationships, and we as a society have not
thought hard enough about how we compensate for that.

So, let me ask the opposite question, because those are very strong examples; they’re
compelling. But, can you think of examples where technology or AI has actually helped
community relationship or group empowerment?

You know, I mean there is this phenomenon, for example, where people suddenly go to look for
high school friends they have not seen for ages, right. And it used to be just too difficult to find them.
Now you go and look, you Google them, you go on Facebook, and you have renewed contact. You know, my
wife just now connected with someone she has not seen for fifty years. Somehow, he
looked her up on Facebook. This is from when they were teenagers, and suddenly there is this
value. You know, I mean, the truth is I go now to many places and people come to me and say,
‘I’m your Facebook friend and I feel as if I know you.’ You know, we’ve never met, but we share
information. So yes, there is value in these things. You know, I mean, I think we
need some more social science tied to this that says what have we lost and what have we
gained.

My last question in this category is a bit more nuanced. It’s around decision-making and, the
question is, can you give an example of how an AI system might undermine human decision-
making power?

Like a recommender system. I’m not sure I would say they are taking away our decision-making
power. You know, even if they give you a set of options, so you think you’re making the
decision, somebody, a machine, made a decision about what options to show you, okay.
You bought a book; other people who bought this book bought other books, alright. And somebody
made the decision to show you some set of books. So, even if you get to choose, the set of
choices has been framed for you by a machine. And so, we may think, well, I’m making a choice.
But it’s a bit of an illusion here: you get to make a choice, but someone has framed for you
exactly the set of choices that can be made.

It’s, it’s interesting. I can imagine that situation both undermining and empowering me because
that choice may include books I did not think of reading.

Absolutely.

At the same time.

Absolutely.

Me and archetypes like me are all getting these same choices.

[...] I complained to a librarian, because when we had the card catalogue, I would discover
books by proximity of cards, because they were indexed by subject matter. And in some sense
Amazon is kind of recreating this idea, and I wish our library catalogue would give me something
similar to that. So yes, look, I mean, in terms of giving us access to information, just look at the
ease, you know. I’m looking at the trees and I’m like, is this really a poplar tree or not? And years
ago, I would have said, I have no idea; I have to go to the library to find out. Too much work.
Now I can just take a picture and go to my room and upload the picture and bang, you know, in
five minutes you have the answer. So, in terms of access, to people like us who are in the
information business, technology has been an enormous boon, I mean, just in the ability to find
information. I still enjoy it every time I go and find so quickly an answer to a question that had
been bugging me.

Good, good empowerment example. Our next category is about autonomy.

Yeah.

And there’s so much to talk about with the idea of robot autonomy, car autonomy.

Yeah.

So that word is swirling around. First question we have is, what do you perceive as valuable in
the concept of machine autonomy?

So, I have to say, I have a hard time fully understanding the concept of autonomy. And I’ve
debated with some of my colleagues who are very passionate now about lethal autonomous
weapon systems. And I’ve tried to understand better the concept of autonomy, because first of all,
autonomy has to do somehow with causality. Who made the decision to do X, right? You say it’s
autonomous: the system made it, it’s the system’s decision that caused the things to happen.
And causality is already a very complex thing to define exactly. So, I’ve asked them, for
example, I said, give an example: take the Terminator. Was the Terminator autonomous? So,
the Terminator, if you remember the story: Skynet and the Terminator go back in time to, you
know, make sure that this human gets killed so she doesn’t give birth to the boy, and so on
and so forth. So, was the Terminator autonomous? The decision was made by Skynet. Of
course, the Terminator then goes and proceeds, to us it looks like (I mean, of course it’s
an actor, but in the story), it proceeds incredibly autonomously, even though the big decision was
made by Skynet. But all the details of how to do it, that’s left to the Terminator to figure out. And, if
you watch the movie, you say, well, yeah, the Terminator looks pretty autonomous to me. But
some people say, “No, no, the decision was made by Skynet.” So, I’m actually having a hard
time grasping this, partly because autonomy has to do with agency and with free will. And these
are very thorny concepts. You know, I mean, people are debating even the whole concept of free
will; it is very, very debatable. Do we have free will? Did my brain make me do it? I mean, you can
go and dive deeply into this. So, you know, we have an intuitive sense of what autonomy is,
but if you dig into it, you know, if I take my car and say, okay, drive me to Rice, but I
don’t need to hold the steering wheel, is the car now autonomous? I say, no, the car is just doing
what I told the car to do, okay. So, even now, many functions in the car, you know, I don’t
control the wheels directly, alright. There are lots of levels of mediation. Think of an even more
sophisticated thing like fly-by-wire. There are lots of levels between what the pilot does and
what happens in the wing. So, I’m having a hard time defining autonomy now, given the
complexity of technology.

What I hear is the idea that the system’s engineering is so complex that it’s a slippery slope,
there’s not a clear boundary where we can say something is either on this side—not
autonomous, or on that side—autonomous?

An autonomous car just means that I don’t hold the steering wheel, but I told the car where to go.
But when they talk about autonomy in the context of weapons, they think about the kill decision
being made by the machine, not necessarily the means. Let’s say Skynet told the Terminator to kill
someone; does that take away its autonomy? I think that, in my opinion, the discussion right now is
very muddled. And we’re not consistent in how we use the word autonomy, and I don’t have very
sharp opinions on autonomy.

Your example about the car and your example about the weapon make me feel like we in
society are using the word autonomous in some cases very tactically and in some cases very
strategically.

Very strategically.

So, we’re actually very inconsistent with the definition.

We are very inconsistent.

Of the word.

I think we need to have a much clearer definition of what exactly we mean by autonomy. And
this is a very good, nice piece of terminology, this distinction between the tactical decision and the
strategic decision, so to speak. What level of autonomy do we really care about, the strategic
one or the tactical one?

So, I like that boundary problem. Now I am going to talk about autonomous systems. What I
mean is whether they’re tactically or strategically autonomous, like self-driving cars. What are
your thoughts about ascribing responsibility when they cause harm? And I’m not talking about a
weapon-system that’s intended to cause harm.

Yeah.

But a system that’s designed not to cause harm, but then does cause harm.

Going back again to the notion of lethal autonomous weapons. So, you look today at the military
situation. And people somehow have a very simplistic view of the robot that holds the weapon
and shoots someone, but today you really deal with very large, complex systems. And ascribing
causality becomes very, very complicated. So, if you want to, let’s talk today about an
artillery battery. Okay. And now there is an officer of the artillery battery. And this is a war. Now,
clearly you can imagine [missed @ 5:26, 2nd video] all day, holds a gun and shoots and kills an
enemy soldier. You could say, okay, you killed that person. It’s very clear to us. But now let’s
look at an artillery battery. So, an officer gave a command. Okay, that’s a command. It goes back
to the guns. At the gun, somebody has to load the gun. Somebody has to aim the gun.
Somebody has to give the fire command. Somebody has to pull the trigger. Eventually, let’s
suppose the projectile went out and people were killed. Who killed them? You cannot ascribe it
to an individual. The answer was, well, collectively all these people worked together and other
people were killed. But if you tried to break it down, the responsibility, you have to do what we
do in some situations and say how much are you responsible. [missed @ 6:18, 2nd video] an
absolute responsibility. But if this happens in a court, in a legal setting, there would be some
debate. You would say okay, this person is responsible for 10% of this. This one is for 5% of
this. You have to break responsibility down in a very refined way to understand what happened. The
same way, if you have now an automated vehicle (I prefer to say automated vehicle, AV,
rather than autonomous vehicle), if an accident was caused, we would
have to go into a very refined setting. How much was the other side at fault? How much may
the road have been at fault? Or the technology was at fault? Or the technology
was not operated properly? You can imagine there are many, many factors, and the answer would
then be, maybe there would be ten different responsible parties and we would divide the
responsibility between them in a very refined way.

Let me push back on it a little bit.

Please do.

And I’ll use the example of Watson; you said you were inspired by Watson.

Yeah.

One of the children of Watson, of course, was the medical one, Dr. Watson.

Yes.

The medical diagnostic system. So, I’m imagining a situation where a doctor is using Watson to
make diagnostic...

Yeah.

To get diagnostic help.

Yes.

And ascribing a certain amount of agency to Watson. That is, giving Watson a certain amount
of authority.

Yes.

And so, the doctor ends up making a misdiagnosis, and already this happens in medicine without
Watson.

Yes.

So, this is not exactly new.

Right.

But today we have malpractice all the time.

Yes.

Lawsuits.

Yeah.

And so, I’m wondering, how do we make sense of this AI system that may be doing online
machine learning...

Yeah.

Has evolved over time. The doctor is dependent on it. I’m wondering, is there an unusual way
we have to treat this situation, or do we treat it as if Watson is a person? Or do we treat it as if
Watson is just an encyclopedia that he is using?

I think if it went to court, it would require a deep analysis of what happened. Did the patient give the
doctor all the relevant details? If I didn’t tell you that I’m allergic to sulfur and I was prescribed
sulfur, then I’m partly at fault. The patient may not have given full information. You know, it would
partly depend, when a machine tells the doctor what to do, on what the norms are. Do we
expect the doctor to just blindly say, well, this is what the system told me to do so that is
what I’m going to do, I’m not really playing an independent role? Or would the oath of
medicine have to say: the machine has to provide, here are the options, here are the
explanations, and you make the final decision? We’re not there yet to make this happen. Now
suppose the machine makes the wrong recommendation. What was it? Was the software at
fault? Or maybe the software had to be trained, and the training was not
sufficient; not enough examples were given to the machine to train on. Was the training at fault? I
think you have to do a very deep analysis to understand what’s happening. I read some time
ago there was a horrific accident in Belgium, where a ferry went from a port in
Belgium to the UK. And it turned out the hold doors, where the cars are, were not properly
closed. And it filled up with water and it sank. And hundreds of people drowned. And they said,
who’s the idiot that didn’t close the doors? Let’s find the person. You know, who knows, throw
them into the sea. And they did an analysis, and the analysis was that it was a combination of
like fifteen different things. One sensor was at fault and there was a backup sensor, but it was
not regulated. And you find it was a very complex system, and really, for the whole thing to
happen, you know, it’s like tossing a coin ten times and they all had to come out
heads for this to happen. And so now you say, who’s at fault? Well, a little bit of this, a little bit of
this, a little bit of that, a little bit of this. We like simplistic answers: who’s at fault? We want to
catch someone. But the answer is almost always going to be a very complicated answer, I believe.

Then the one thing I would suggest as a question to you is this: I completely see what you’re
saying. The irony is, if there are millions of AI systems helping us make millions of decisions, then
there will be many cases where we get fifteen tails in a row. And a highly unlikely, really
complex outcome will sometimes become quite common.

If you buy a car, and let’s suppose there is some mechanical failure of the car and some
accident happens. Basically, by now we have enough legal experience to say how
we do this. And basically, we say it’s the manufacturer’s responsibility. We say the car
dealer is not responsible unless the car dealer did something unusual. And the manufacturer
has what we call strict liability. So, we don’t even go and examine exactly what happened
in the factory and how it happened. You bought the car, it’s
a GM car, GM is responsible. And we have simplified away all kinds of very deep analysis because
we don’t want to do this analysis every time. We say the car failed, it’s a GM car, GM is
responsible. And I think that is what will happen over time. First of all, so far, for historical reasons,
software has been exempted from strict liability. In fact, not only that, software has been exempted
from almost any liability. Almost any software, we use it, we say use it at your own risk. And we will
have [missed @ 11:49, 2nd video] to come to agree that this is something that is a huge part of our
lives. There should be no exemption for software from the same rules we have for cars. Okay.
When you take a car, you don’t have to tear off shrink-wrap wrapping of the car, and there’s no
shrink-wrap license that says, well, use this car at your own risk. Okay. You don’t, go ahead [missed
@ 12:10, 2nd video] check the car. You say no. The manufacturer is liable, end of story. And I
think, over time, we will get more experienced with using this kind of automated decision
making, medical decision making. And most likely we will say, we can’t afford to make such a
complicated analysis every time, and we will say the system made a mistake. End
of story. The vendor, you know, whoever developed the system, the vendor is responsible. End
of story. Without doing this really complicated analysis to say, well, five percent for the
software vendor, five percent for the hardware vendor, and so on, because it’s just too
tedious to do this analysis for every litigation. We just don’t have enough experience yet with this kind
of litigation to simplify things in such a way.

I like your [missed @ 12:57] with the car. My only question is, one interesting thing about AI
is you can end up with agents that have social interactions with people. I wonder if we can be as
simplistic as we are with a manufactured object if it’s the social interaction that results in a
decision that results in litigation.

You know, the Microsoft chatbot that learned racism from reading Twitter. But again, we
could say, well, it’s not really the bot’s responsibility; these people taught the bot. Or we would just
say, to make life simpler, whoever makes a bot is responsible. And yes, there are complicating
factors of that kind, but we’re going to get to a simpler legal situation. But the answer is, the law,
we are very far behind on these cases. Not just in AI. Just generally in the whole legal
framework for software and tort and liability. You know, the law has not
caught up to the reality of the marketplace already. And I’m guessing that as these machines
become more prevalent in our lives and litigation becomes more common, we will see more and
more of this situation. In fact, there was a case a couple of years ago, I think it was a case with
Toyota, with some failure, I forget. I think maybe it was braking or speeding. And there was a
litigation.

Brakes.

They brought expert witnesses who examined the firmware. And the
conclusion was that the firmware was of such poor quality that, without finding the explicit
causality, Toyota was found liable. Just because the experts said, we looked at the firmware,
the software essentially is crappy, and therefore you’re liable. Even though we cannot exactly
ascribe it and say this happened because of this.

End of story.

End of story. You’re liable. And this is kind of one step towards strict
liability, which I believe is where we should be going.

That’s the end of my main questions. We have one final optional question for you.

Yep.

We’re curious if you want to share with us your thoughts on the development of general artificial
intelligence, AGI?

I have to say that I’m of two minds on this. Because on one hand I’m a materialist.
That means that I think we’re machines. We’re a machine developed by evolution, you
know, over billions of years in some sense. But ultimately we’re just a machine. A machine
made from proteins, DNA, what have you; we’re just a machine. And if it’s a machine, I don’t
see any general reason why we shouldn’t be able to build another kind of, non-biological,
machine that would be just as intelligent as a human being. So, philosophically, I don’t see any
boundary here. At the same time, when you ask people what general artificial intelligence is, I
cannot even get a satisfactory definition. In fact, if you talk to psychologists and you ask them to
define intelligence, you find they don’t have an agreement. What is intelligence? Is it one thing,
a unitary thing, or is it just a portfolio of capabilities that we have evolved over the years,
very nicely packaged in one form factor, okay? And so, if you go back to Turing’s
paper from 1950 about intelligent machines, Turing realized the difficulty of defining intelligence.
So, what he tried to do, he tried to avoid the problem by defining what he called the Imitation
Game. Now, actually, I think this was the weakest part of the paper, because it takes the ability to hold a
conversation as the test of intelligence, and at best that is only one aspect of intelligence. Okay,
there are many other aspects of intelligence. But I could imagine that we would define it not as one
Turing test, but say, well, here is what people can do, and we can define a portfolio of
capabilities, and maybe there will be, I don’t know, twenty, a hundred, two hundred
capabilities; and imagine that we are able to do all of them at the same level as human beings,
and package them nicely together. At that point, would the distinction between weak and
strong AI essentially disappear? So, I’m not sure whether there is some so-called general artificial
intelligence, AGI, artificial general intelligence, or whether we just need to do enough intelligent
tasks well and package them all together, and have them integrate well with each
other. And I suspect that that’s probably the way AI will progress. Not by achieving some
mythical AGI, but by making enough progress on weak AI, on enough
of those tasks, and packaging them together, so that the end result would be a system that behaves very
intelligently in a variety of ways.

That was my last question. Is there anything you would like to add that we have not talked
about?

You know, you touched before on the issue of human dignity. And I think this is going to
come back to us, I mean, these questions of the essence of humanity. And so, you know, on one
hand, as I said, I’m a materialist, so I think humans are just
part of nature, and, I mean, I don’t see a distinction between us and other animals other than that we
have evolved intelligence. Dogs can smell better than us; other animals run faster.
We are much smarter than anyone else. But at the same time, we all behave, in terms of our
morality, as if we are above nature. We’re halfway between nature and God. Okay. We’re
somewhere in the middle. So, for example, we say human life is precious. Why? Well, it’s a
moral principle. It’s been a very good principle for society to treat human beings as very special.
And part of what the progress of artificial intelligence is going to do is to kind of challenge our
specialness. I see people talking about human rights for robots, which I think is
a horrible idea, okay. Because it degrades human rights for humans. I have no problem, the
same way we have laws against cruelty to animals, with laws against
cruelty to robots, okay. That’s fine with me. But I think this illogical thing that we have done, which
is to treat human beings somehow as sacred, as a special category in the universe, deserving
special treatment, has been the basis of even [missed @ 19:37, 2nd video]. All human beings
are born with inalienable rights; where does that come from? It’s again this idea that we treat human
beings as very, very special. And this is to me the basis of human morality. And we should not
toy with it. I don’t think we should give robots human rights. We should not give corporations
human rights. We are eroding human rights by giving them away to other things, whether they are
corporations, robots, or gorillas. We should keep humans [missed @ 20:06, 2nd video]. I know
some people say it’s speciesism. But to me this is the basis of human morality
and we should not give it up lightly.

So, if we want to maintain a kind of distance between human and other, so that we can maintain
the specialness of humans, I can see how we can avoid, for instance, giving robots human rights. But
on the design side of it, we sometimes make robots have faces and smile and laugh; we
can manipulate somebody’s emotions better that way. And sometimes the researcher says, well,
it’s more efficient, it’s easier to communicate with people if I have my robot be humanlike. But if
we make our robots more and more humanlike, engineered that way, even if they aren’t in
essence human, aren’t we eroding that same discourse?

So, there was this British show, now I forget what it was called. What was the show
called? There was a British show, just recently, in the last couple of years.

Humans?

Humans. I think it was called Humans. In which the robots, played by humans of course, are so
human that it becomes difficult to say which ones are which. And I think we need to think very
hard before doing this. I look at the people who do these humanlike robots, and I don’t
quite understand why we need to make robots that look just like humans. You
know, it could be that we could decide, I have not thought about it fully, but I could imagine that
we should not let technology be our master. We should be the master of technology. We talk
about CPSR, and they had a very nice slogan: technology is driving the future; who’s holding
the steering wheel? I think we should be holding the steering wheel. Just because something is
technologically feasible does not mean we should necessarily do it. Usually we follow every
possible technological path. But so far there’s one exception, I think, which was also somehow
a threat to our notion of human dignity, and this is human cloning. Everybody is kind of
freaked out by human cloning, and we so far as a society said, don’t go there. Why? Because we
see there the threat to the sacredness of humanity. And we should have a dialogue: should we
have robots that are so human that one cannot make the distinction between them and humans?
And if you ask me, I would say no. We should distinguish between robots and humans. We
should keep humans special.

So, I should stop, but I have to ask. So, Google had this agent that they demonstrated a year
ago.

Yeah.

That would call for you to a restaurant.

Yeah.

And make a reservation for you.

Yeah.

And it did not divulge that it’s a robot.

Yeah.

It had a natural sounding voice.

Yes. Yes. Yes. Duplex, Duplex.

Did you listen? There was a big negative response to Duplex.

People freaked out. I think it was the same kind of thing: oh my God.

It’s what you’re talking about, I think.

Yes.

It’s a similarity.

In fact, Toby Walsh wrote an article, a viewpoint article in CACM, a couple of years ago. Basically,
it said there should be a law that you should always be able to tell whether you are talking to a human or to a
robot. Okay. And, you know, it’s one thing to say we have systems with such
human-sounding voices; or should we have a law that says, if a robot is using a human voice, it
should always start by saying “This is a robot talking to you”? I think people should know. I think
this is something we would need. This issue is fast upon us; technology is running very fast.
And, you know, we’re facing the same issue with what’s called deep fakes, the ability to simulate
reality in such a way that we’re not going to be able to tell the difference between what’s real
and what’s not real. We need to start this conversation now, and we need to bring in
technologists and social scientists and philosophers. My worry
is that if we’re not doing this, we’ll find that the technology will run ahead, and we are
witnessing now that our ability to foresee the social implications of technology is very, very poor.
Technology is running ahead; look what we have done with social media. Facebook said, frictionless sharing,
what can go wrong? Now we see what can go wrong. Maybe we need to sometimes slow down
and think about where we are going with technology.

Thank you very much Moshe.

My pleasure.
