
An Ethical Theory for Autonomous and Conscious Robots


Soraj Hongladarom
Department of Philosophy
Chulalongkorn University
s.hongladarom@gmail.com

AUTHOR'S NOTE: ROUGH FIRST DRAFT – PLEASE DO NOT QUOTE. THANK YOU.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.
AP-CAP 2009, October 1–2, 2009, Tokyo, Japan.
Copyright 2009 by Author(s)

ABSTRACT
The main question of this paper is which ethical theory is most suitable as a foundation for equipping autonomous or conscious robots with a system that enables them to perform ethical action or to avoid unethical action. The two main theories in existence, namely the deontological and the consequentialist ones, are algorithmic in the sense that both rely on a set of a priori rules that are intended to define ethical decision making. In doing this, both theories claim to be universally applicable. However, there are a number of deficiencies in both theories when it comes to designing an ethical robot. First of all, both theories disregard the clear role that context and embodiment play in how decisions are made. Secondly, by being algorithmic, both theories do not seem able to cope with changing circumstances. By contrast, a kind of teleological theory might do a better job, since it relies on the very design and function of a particular robot as a necessary ingredient in ethical decision making.

I
Autonomous robots have become a presence thanks to today's advancing technology. Many have found applications in a variety of fields, such as space exploration, floor cleaning, lawn mowing, and waste water treatment. According to the article on the topic in Wikipedia.org, autonomous robots are capable of the following: (1) gaining information about the environment; (2) working for an extended period without human intervention; (3) moving either all or part of themselves throughout their operating environment without human assistance; and (4) avoiding situations that are harmful to people, property, or themselves unless those are part of their design specifications (Wikipedia.org 2009). Thus autonomous robots are machines that are capable of working independently with minimal human supervision. That would indeed be necessary if robots are, for example, to explore space or distant planets, or environments where humans are not able to go for safety or other reasons.

Many scholars have thought about the ethical implications of autonomous robots. Firstly, if these robots are autonomous, then what would guarantee that they will not run amok and create harms to human beings? Autonomous robots have made considerable inroads into the military, and the ethical implications are obvious. Military autonomous robots are extremely powerful, and since they by definition operate with no or very minimal human supervision, the problem is how to control their lethality. Hence many scholars have called for ethical safeguards to be installed into the design of autonomous robots in the first place (e.g., Arkin and Moshkina 2009). Not only in the military: the use of autonomous robots in many civilian fields is also fraught with potentially disastrous consequences that require close ethical guidelines and supervision.

These attempts to install ethical guidelines in order to create ethical and autonomous robots are commendable, but they are in fact not the main concern of this essay. A deeper question concerns which sets of ethical guidelines should be installed in the robots. It cannot be assumed that everyone agrees on every aspect or every detail of such guidelines. It is clear in any case that any guideline should give priority to the safety of human beings. Any ethical robot needs first of all to ensure the safety of human beings and to protect them in any way it can. But beyond that it seems an open question what kind of ethical considerations or guidelines an autonomous robot should adopt. That much seems to depend on the function of each particular robot, including its mission, the location it finds itself in, the time of its operation, and so on.

The discussion so far has focused exclusively on unconscious robots. Although autonomous robots are being researched and developed at a fast pace, theorists have hardly talked about conscious ones. The difference between an autonomous robot and a conscious one is quite simple: the former may be able to perform its tasks with minimal or no human supervision, but it is the latter that is capable of being fully autonomous in the philosophical sense. That is, a conscious robot would be able to think to itself and to understand meaning. That has been the holy grail of artificial intelligence ever since computers were invented many decades ago. Perhaps the most serious stumbling
block against developing a conscious robot is that even now consciousness is a very poorly understood phenomenon. Scientists and philosophers are debating whether consciousness can be located in a specific region of the brain, or whether it is something emerging out of the collaboration of various parts of the brain, or, for the dualist, whether it is another kind of phenomenon altogether. Such lack of understanding naturally leads to difficulty in developing a conscious robot. Nonetheless, the picture is not hard to imagine. A conscious robot would be one, like the famous HAL in the novel 2001: A Space Odyssey (Clarke 1968), which (or who?) is capable of ruminating to itself (himself), pondering the various pros and cons of its future decisions. In short, a conscious robot would be one that is capable of representing reality to itself through a system of representation, most likely a natural language.

If there are indeed conscious robots (and let us presume that the actual development of these robots is not too distant in the future), then a whole host of philosophical novelties will have emerged. Foremost among them would be the question of moral agency and moral autonomy. Ken Himma has argued that moral agency presupposes consciousness; in other words, consciousness is a necessary condition for there to be any kind of what he calls "artificial moral agency," the kind of moral agency for artifacts such as robots (Himma 2009). Thus, if the development of conscious robots lies not too far ahead in the future, then we can anticipate many conceptual problems that will arise through reflection on what kind of ethical theory would be most appropriate to deal with these issues. What, in other words, should an ethical theory be like in order for it to help us best comprehend the phenomena of moral agency of autonomous and conscious robots?

Having stated the main question of the paper, I would like to argue that it is a teleology-based theory focused on the development of moral character that is best suited for the emerging trend of autonomous and conscious robotics. The main competitor to this type of theory, namely the Kantian, deontological theory, fares less well. One reason is that the latter theory does not take into much consideration the important fact that our bodies and the concrete situations in which we find ourselves are integral to our beings and our identities. Secondly, robots, including autonomous ones, differ from human beings in one very significant respect: they are artifacts in a way that we human beings are not. If it is the case that concrete situations or 'lived' bodies are inseparable from, or cannot consistently be considered apart from, one's rationality, then to consider robots and humans to be on a par in terms of autonomous rationality alone seems quite deficient.

II
As mentioned before, the difference between an autonomous and a conscious robot is not as great as it might appear. The difference can be found also in the biological world. No one doubts that microbes are not conscious organisms, but they are certainly autonomous (in the engineering sense, though of course not in the philosophical sense) because they are capable of functioning without any supervision. A microbe moves about; when it finds its food it proceeds to digest it; when the time comes it divides itself; and so on. Thus it functions much like a present-day robot. On the other hand, no one doubts that humans are conscious beings, so conscious robots would perform much like a typical human being does (witness the various depictions of conscious robots in science fiction movies). Let us assume that both autonomous and conscious robots do exist. The question then becomes what kind of ethical theory is best suited to explain the moral behavior of these robots. This question is also important in the case of autonomous robots in the engineering sense, because there is an obvious need to have a clear idea of the ethical guidelines for these robots, and as we have seen at the beginning of this paper there is a lack of theorization on the specific content of such guidelines and their justification. As most works on the ethics of autonomous robots argue for the need to have an ethical system installed, there is a lack of works that discuss what such an ethical system needs to consist of and what the reasoning behind it is. This gives rise to a question which is clearly specific to autonomous (and by extension to conscious) robots: since autonomous robots did not exist in nature before they were invented, are there any specific ethical requirements that are unique to them? What exactly are those requirements? And the last question, which is most pertinent to this essay, is what kind of ethical theory is best suited to deal with the situation.

At first sight it might appear that the deontological theory would do a better job at explaining the moral behavior of conscious or autonomous robots. This is because these robots would be capable of thinking and reasoning for themselves. According to the deontological theory, then, moral reasoning follows a universal logic that is integral to rationality itself. For Kant, one knows how one should decide to act in any situation because one follows a set of maxims, which inform one of what one should be doing in a particular situation. The maxims do not tell the subject what specific action she needs to take, but they give a general guide as to how the subject should make the decision. Since this arises out of pure reasoning, the outcome of the reasoning should be the same for every rational being, just as the outcome of reasoning in ordinary logic should be the same no matter what the context is. Thus, if a robot is really autonomous and conscious (for Kant, by the way, one cannot be autonomous unless one is conscious, which is contrary to the engineering sense of autonomy outlined here), then the robot should be able to do its own ethical reasoning.

A problem with this approach is that it does not give serious attention to the role that context plays in forming particular ethical decisions. According to Kant, lying is wrong no matter what. But at least we could
imagine situations where ‘lying’ is not only permissible its ability and importantly will not cheat. So it seems that
but encouraged, such as when one is tortured and knows different designs of robots could imply at least different
that the torturer will not be able to uncover the lie for a details of ethical judgment system. This attention to the
certain amount of time. In separating ethical thinking needs of different designs or different functions of robots
from the context, Kantians or deontologists believe that is lacking in the Kantian system.
they can get to the essence of ethics, namely the pure
Another aspect of the Kantian system is that it
logic of ethical deliberation, without having to contend
tends to reduce the agent or the moral deliberator to only
with the changing contexts. But let us imagine what it
the essence of the agent as a rational being only, as the
would be like for a robot to be installed with the
contexts that surround the agent are not relevant to the act
deontological ethical system. Presumably the robot would
of ethical reasoning. Thus the Kantian system would be
have to be able to think of maxims that it has to follow in
usable for human beings, conscious, thinking robots, or
every situation it needs to make an ethical judgment. This
any other thinking beings including abstract ones. This
could be practically cumbersome. Furthermore it might
makes it difficult to be of real use in concrete situations
not furnish the robot with a decision making system that
because our own ethical judgments are very much shaped
is appropriate for all tasks.
and informed by the contexts and even the physical
Talking about maxims, one is reminded here of bodies that we find ourselves in. For example, robots that
Asimov’s well known three maxims for robots, but of are only capable of talking and being conscious but have
course situation has become much more complex and no means to perform any action (perhaps consisting only
robots are involved in much more varied tasks than only of a computing unit, a speaker and a microphone that
killing or not killing humans. The typical Kantian allow it talk and listen) might presumably be conscious
maxim—Act only in accordance with that maxim through and ‘autonomous’ in the sense that it thinks for itself. The
which you can at the same time will that it become a Kantian universal law would then have to cover this robot
universal law—would mean that the robot has to too. Nonetheless, it’s hard to imagine what kind of ethical
formulate a maxim every time it reaches a decision and system should be installed in the robot since it is not
has to calculate whether that generated maxim could capable of performing any physical action. The only
become a universal law. As such the Kantian maxim is a action it can do is verbal, so if there is a need for an
second-order maxim, that is, maxim used for generating ethical system for this robot, then it will only be needed
lower-order maxims, which then govern action that the for preventing verbal abuses by the robot. Thus this
robot will undertake. The key component in the Kantian shows that an ethical system needs to be tailored to the
maxim is the injunction that the maxim must be able to design and the function of particular robots, again
become a universal law. But robots and humans are something the Kantian system is rather ill equipped for.
fundamentally different. For one thing robots are
If the Kantian, deontological system is perhaps
designed to serve certain purposes, whereas human
inadequate as a theory for designing ethical autonomous
beings have evolved naturally from other species of
or conscious robots, then how about its main rival, the
organisms (and the Asimov maxims are reflections of this
consequentialist one? The two theories, even though they
fact). The ‘universality’ in the Kantian maxim then
are different and opposite in many ways, in fact do share
becomes problematic because it is unclear whether this is
one important feature in common in that both are rule
universal only to human beings or not. If the putative
dependent. The Kantian system is obviously rule
universal law here is intended to cover all rational beings,
dependent since it depends on the maxim and the
and not only human beings, then one would have to fact
universal law. The consequentialist or the utilitarian
the conceptual difficulty of accounting for the fact that
theory is also rule dependent since it relies on the rule of
conscious, autonomous robots and (conscious,
maximization of utilities as the principle for ethical
autonomous) humans are fundamentally different. One
judgment and decision making. The well known ‘maxim’
would then have to abstract away all the differences and
for the utilitarian theory is “Greatest happiness for the
focus only on the putative sameness, such as the ability to
greatest number;” that is, any action that gives greater
reason. However, it is possible that robots that are being
happiness to the greater number of people is to be
developed or will be developed in the future serve a large
preferred over other action that gives less. To install this
variety of purposes, some of which may require the
system in an autonomous or conscious robot would mean
robots to be conscious or thinking in one way (such as
that the robot has to be able to calculate the possible or
robots who know how to tie up shoe laces) rather than
probable utilities of its action. Hence the situation would
another (such as robots playing chess). Kantian ethics
not be too much different from one installed with the
assumes that these specific functions (tying shoe laces or
Kantian system. In the case of the Kantian system, the
playing chess) are only instances of the same ethical
robot would need to know the extend of ‘universal’—
reasoning system, but it seems that these activities at least
whether it covers only human beings, or both human and
have some role to play in the kind of ethical system that
conscious robots, or humans and all robots no matter they
is relevant. For example, in tying shoe laces, perhaps the
are conscious or not, or human, robots and animals, or
ethical system would tell the robot to do it in the way that
other possible scenarios. In the case of the utilitarian
would make the result look best or most beautiful (if we
theory, the robot then has to calculate utilities, but to do
can say that such result can be beautiful). And in playing
that it has first to be able to interpret what counts as
chess, it would imply that the robot plays it to the best of
utilities. And since different beings (robots, humans, animals) might perceive different things to be beneficial to them, the task becomes more complicated than it first appears.

The rule dependence of both the Kantian and consequentialist theories implies that both are algorithmic in their ethical reasoning. The problem with this is that it discounts the role of the embodiment of the robots (or humans, for that matter) as well as the contexts that surround them. As the designs and functions of robots seem to matter very significantly for how a robot is expected to behave and what its ethical action should be, both the Kantian and consequentialist theories seem deficient, since they tend to discount these factors.

III
The deficiencies of the two leading ethical theories show that an alternative is needed. A theory that is suitable for installing in autonomous or conscious robots should be teleological in character. This kind of theory contrasts with the two leading theories in that it is not algorithmic. That is, the teleological theory does not depend on formulating and following strict rules when it comes to ethical decision making. Realizing that contexts and situations are always varied, the theory takes these contexts and situations into account and formulates what should be done in accordance with those circumstances.

A leading theory of this kind is, of course, the virtue ethics theory derived from Aristotle. What is distinctive about this theory is that it emphasizes the development of a set of virtues or qualities that are desirable in an ethical being or organism, so that the being can decide on the spot, when a situation arises, what it should be doing at that particular instance. This may sound difficult for a robot designer, for it might seem as if a list would be required that provides all the possible scenarios the robot might have to face and instructs it to act accordingly. But to do that would be nothing more than providing the robot with an algorithm for making decisions. The whole idea of the teleological ethical system is that the desirable end result in the robot's ethical decision making is that the robot can decide by itself what it should do at a particular moment, or whenever it faces a need to make a decision. For Aristotle and the virtue ethicists, this means that the robot has to be installed with a set of 'virtues.' In fact the word 'virtue' comes from the Latin virtus, meaning moral perfection, which ultimately comes from the word vir, or man. This shows that the root meaning of 'virtue' is derived from the characteristics that best define a human being.

So the task for designers of ethical and autonomous robots is to search for the set of robot virtues and install that set into the robot so that the robot becomes virtuous. Defining such a list in any substantial detail would easily entail another paper or a whole research project, but at least we can begin by pointing out some obvious ones. In any case, the idea is to start from the defining characteristics of robots themselves. Human virtues have their basis in what constitutes a human being, and the typical Greek mentality would think of an ideal human, a model of perfection that ordinary humans should strive to reach. Taking a cue from this ancient Greek thought, designers might want to consider conceptualizing a model of perfection for ethical robots based on the ontology of the robots themselves. Here the function of particular robots plays a crucial role. A robot that does everything it has been designed for, and does it well, is one that approaches its model of perfection. The model of perfection will then be the basis on which the ethical judgment system of the robot lies. Since the functions and designs of different robots vary, basing ethical judgments on those functions and designs, so that the judgments vary accordingly, would then solve the problem of algorithmic ethical decision making that we saw earlier in the case of the two main ethical theories.

To imagine a concrete example, we might think of autonomous and perhaps conscious robots working on a remote planet. Communication with earth would obviously be difficult due to the vast distance and other factors. So the robots have to work independently and autonomously. Imagine further that the robots are designed to explore the geological and other features of the planet and to work together as a team. Here ethics becomes clearly relevant. Firstly, the robots should perform their function well, as they have been designed to do. Furthermore, there might arise some unexpected circumstances, such as sudden dust storms, where the robots are expected to know how to fend for themselves. Here ethical robots would seek to protect themselves from the storm so that they live to perform their tasks another day. Then they have to report their data accurately. It might be too much now to think of robots capable of deliberately making false reports (that may be too far into the future), but when the robot is autonomous and conscious and its function is to send accurate reports back to earth, one of its virtues needs to be honesty, or in more concrete terms the ability to give out surveyed data accurately. Furthermore, when the robots work as a team, the ethical values needed for successful teamwork (such as coordination, the ability to communicate and understand one another, etc.) become important. Hence these abilities become virtues for these robots.

One possible objection against the teleological theory concerns situations that the robot has not been designed for. Since the algorithmic theories are supposed to be universal, in principle robots installed with an ethical system based on these theories should know how to make decisions when they are faced with novel, unexpected situations. However, since these theories are too abstract and operate more on the meta-level, it becomes difficult to conceptualize how this could be realized in practice. The teleological theory, on the other hand, equips the robots with a set of robot virtues so that, even in novel and unexpected situations, the robots should perform satisfactorily, as they are designed, through the ethical judgment system, to handle these situations by themselves based on their original design. Hence, if the robots performing a geological survey of a distant planet were to encounter a situation that lies totally outside the range of what has been envisaged for them, their set of virtues, especially one that gives them an option on how to react in a way that is appropriate to the situation, could be of much help.
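The function-relative proposal sketched in this section can be illustrated very schematically in code. The following is a minimal sketch, with every name, virtue, and numeric weight being an illustrative assumption of mine rather than anything the theory prescribes: a robot's virtue set is derived from its designed function, and candidate actions, including ones in situations never anticipated by the designers, are ranked by how well they express those virtues rather than by a single universal rule.

```python
# A schematic sketch of function-relative, virtue-based decision making.
# All names and numeric weights are illustrative assumptions only.

from dataclasses import dataclass, field


@dataclass
class Robot:
    function: str                                # what the robot is designed to do
    virtues: dict = field(default_factory=dict)  # virtue name -> weight, derived from the design

    def score(self, action):
        # An action is rated by the degree to which it expresses the robot's
        # virtues, so the same action can score differently for differently
        # designed robots -- the judgment is design-relative, not universal.
        return sum(self.virtues.get(name, 0.0) * degree
                   for name, degree in action["expresses"].items())

    def choose(self, actions):
        return max(actions, key=self.score)


# The planetary-survey robots of the example: their virtues follow from their design.
surveyor = Robot(
    function="geological survey",
    virtues={"accuracy": 1.0,           # honesty: report surveyed data accurately
             "self_preservation": 0.8,  # survive to perform the task another day
             "teamwork": 0.6},          # values needed for working as a team
)

# A sudden dust storm: a situation not explicitly listed in any rule the robot carries.
actions = [
    {"name": "keep_surveying",
     "expresses": {"accuracy": 0.3, "self_preservation": -1.0}},
    {"name": "shelter_and_resume",
     "expresses": {"self_preservation": 1.0, "teamwork": 0.4}},
]

best = surveyor.choose(actions)  # the virtue weights, not a universal law, decide
```

A chess-playing robot would simply carry a different virtue set (fair play, say, instead of self-preservation), which is the point pressed above against the Kantian and utilitarian systems: the ethical judgment system varies with the design and function of the particular robot.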

IV
In conclusion, I have tried to argue that a teleological ethical theory is more suitable for designing ethical autonomous (and also conscious) robots. Much work remains in spelling out, in practical detail, what ethical design according to this principle would look like. In the end, robots that are capable of thinking for themselves, relying not on algorithmic rules but on their own judgment, should prove to perform better, and to approach the model of perfection for robotkind more closely, than their algorithmic counterparts.

REFERENCES
Arkin and Moshkina. 2009. Lethality and autonomous
robots: an ethical stance. Available at
http://www.cc.gatech.edu/ai/robot-lab/online-
publications/ArkinMoshkinaISTAS.pdf. Retrieved
September 15, 2009.
Clarke, Arthur C. 1968. 2001: A Space Odyssey. New
American Library.
Himma, Ken. 2009. Artificial agency, consciousness,
and the criteria for moral agency: what properties
must an artificial agent have to be a moral agent?
Ethics and Information Technology 11: 19-29.
Wikipedia.org. 2009. Autonomous robots. Available
at http://en.wikipedia.org/wiki/Autonomous_robot.
Retrieved September 18, 2009.
