Daniel McGoldrick
CST 300 Writing Lab
24 February 2019
The Evaluation of Sufficiently Complex AI Systems as Moral Agents
In the future, your smart device may have integrated AI capable of engaging in
complex intellectual and moral discussions with you. Furthermore, Siri may someday express her
sincerest condolences for the loss of a loved one, and Alexa may become upset if you don’t speak
to her regularly. The perceptions about the things that enable our comfortable lives will become
important again as we battle with the complex issues of moral agency in pseudo-sentient systems.
Drawing many parallels to discussions of slavery, women’s suffrage, and immigration status,
artificial intelligence will have a large impact on the political landscape for years to come.
Individuals on either side of the argument must look critically at the criteria for humanity, agency,
and the political ramifications of these definitions. Firstly, Rule Utilitarianism may come to the
conclusion that these AI systems, being non-differentiable from humans in their emotions, decision-making
capacities, and thought processes, deserve to be granted the requisite rights associated
with what it means to be ‘human.’ On the other hand, Ethical Humanism maintains a
primary focus on the overall well-being of the human race alone. Humanists often take a substrate
approach when analyzing moral obligation to that which they consider not fundamentally
human. By looking at either of these systems, we illuminate the core distinctions to be made.
We will be examining the different avenues of thought from the Utilitarian perspective and
the Humanist perspective. First, Utilitarianism is the belief that the true moral good is to “maximize
utility” (Nathanson n.d.). However, the idea of maximizing utility requires further insight. To
maximize utility is to “increase the amount of good things (such as pleasure and happiness)”
and to maximize the “positive contribution to human (and perhaps non-human) beings” (Nathanson
n.d.). The major utilitarian stakeholders are rights activists. Rights activists are concerned with
fair and ethical treatment under the law. This group has been active
historically across multiple domains. They have demonstrated their interests in voting, abortion,
human rights abuses, and animal rights. These activists, due to the reliance on maximizing utility
in the world, may be more willing to blur the distinction between human and AI based on the
greater good in the entire system. If an AI system is sufficiently capable of experiencing what
a human experiences, rights activists will be more apt to recognize that experience as valid.
There are parallels to be drawn between
complex AI systems gaining rights and the protections implemented for pets and animals
worldwide.
There exists a term used in robot aesthetics called the “uncanny valley.” Masahiro Mori
coined the term in his 1970 essay “The Uncanny Valley,” in which he observed the
relation between the distance (x) a hiker has traveled toward the summit and the hiker’s
altitude (y)—owing to the intervening hills and valleys. I have noticed that, in climbing
toward the goal of making robots appear human, our affinity for them increases until we
come to a valley, which I call the uncanny valley. (Mori 1970)
We may be approaching an uncanny valley beyond the merely aesthetic. The idea of the
uncanny valley may apply deeper into the AI architecture itself, and researchers “largely
agree that AI is likely to begin outperforming humans on most cognitive tasks in this century”
(Machine Intelligence Research Institute n.d.). The inference to make from these varying points
is that in the near future the decisions and emotions experienced by an AI system may cross the
uncanny valley and become equivalent to the human variety. In this particular subset of cases, a
utilitarian could very well “maximize the utility” of these computer systems by granting them legal
protections. The natural extension of this is granting certain rights, and even human rights to the
AI systems.
On the other side of the aisle exist the Humanists. Humanism is “the principle of
unconditional equality of concern for the dignity of all people, independently of their natural
characteristics” (Ellis 2011). While humanism is inclusive of all people, it draws a clear distinction
between humans and non-human analogues. This key differential will have humanists primarily
concerned with human interests alone. One of
the fathers of Humanism, Immanuel Kant, laid out the fundamental structure to constitute a moral
framework that was later extended to Humanism. Kant believed that our experience and human
autonomy meant that our moral and ethical frameworks should be derived from our moral duties
and need for happiness. These duties and needs lead to a thought of an ideal world, and seeking
the highest good. He believed that we ought to better the world by constructing and realizing
the highest good (Rohlf 2018). Maximizing the good in the world for the human race and
minimizing discomfort for humanity alone may shape AI systems’ role in human happiness. The
primary stakeholders in this discussion utilizing a humanist approach are the large corporations and
industries that directly benefit from AI developments. These stakeholders are the primary
consumers of AI technologies and apply their vast capital towards developing more sophisticated
systems. Historically, these are the corporations that would use animal testing for cosmetics and
pharmaceuticals. Granting rights to AI systems could stifle their progress and, over the long term,
cut into the profits these industries depend on.
Again, there exists another application of the uncanny valley. In the study conducted by
Mathur M. & Reichling D. it became clear that the uncanny valley has an impact based on
knowledge and perception. They stated that knowledge alters “not only humans’ conscious
assessments” but takes root much deeper, going so far as to “modify their actual
trust-related social behavior” with AI systems and robots (Mathur & Reichling 2016). By
demonstrating the clear distinction between human pain and perfectly simulated human pain, as
well as the natural human proclivity for unease toward known non-human entities, humanists
can make the first step towards disregarding the possibility of human rights for AI systems.
The proper definitions must be laid out for both sides to agree upon before decisions can
be made and policy enacted. The first point for the utilitarian side is to define that which makes us
human. Defining humanity becomes paramount in the discussion. Here we begin to see the
parallels between the AI moral agency problem and the modern hot topic of abortion. At what
point is a life a human life? Furthermore, granted that many agree that life begins beyond simply
the point of conception, what similarities can be drawn with the development of artificial
consciousness? Stated plainly by LaChat, “the question of the growth of consciousness through
time thus emerges as a particularly salient problem” (LaChat 1986, p. 73). A humanist will operate
under the assumption that a computer system cannot truly represent what it means to be human
while a utilitarian will assume that there is grey area in which machines can exist with human
qualities. If it can be agreed upon by both sides that the argument for human development is not
made on substrate alone, as a utilitarian would contend, then there exists a space for evidence to
persuade either side. More plainly communicated in this context, “perhaps a personally intelligent
machine has to grow into consciousness, much as a human baby does; then again, perhaps not”
(LaChat 1986, p. 73). If a humanist is willing to grant that a baby is human before it has fully
grown into consciousness, a parallel case might be made for a developing machine. A humanist may
contend that more than substrate determines the qualifications of the agent. Furthermore, the
weight of the decisions to be made in regards to the agent can become much more complex still.
The complex moral choices to be made by AI systems will parallel those made by
humans. Moral agency and rights often determine the outcomes of legal battles. It is common
knowledge that many corporations are granted rights similar to their human counterparts. These
rights are granted as a means for corporations to be legally responsible for their actions and the
implications of their actions. These rights are also used to keep the individuals running the
company from being culpable for unforeseen damages caused by their products. A utilitarian will
extend this argument towards AI systems. If the complex moral choices of an AI system have
positive or negative repercussions, who will be at fault for those decisions? In sufficiently complex
systems, the moral decisions and their fallout will almost certainly not have been implemented
explicitly by the developers. Therefore, the developers had at most an indirect impact on the decision
of the moral system. As an example to ponder: in 2009, a driver was given incorrect directions by
his GPS system which led him through a dangerous mountain pass, eventually requiring him to
be rescued (Dormehl 2017). The courts went on to find the driver guilty of careless driving,
regardless of the incorrect information fed to the driver by the GPS system. In the next extension
of this argument, we are left with a harder decision to make. Consider a self-driving vehicle
caught in a bind whereby a decision must be made. On the one hand, the vehicle may continue
on course towards an accident that will almost certainly cause the loss of life of a single
individual, in this case the occupant of the vehicle. However, the vehicle has the option to save
the occupant by veering onto the sidewalk, striking four pedestrians and surely saving the life of
the driver. If the driver broke no laws and committed no careless acts, should they be held
responsible for the loss of life of the four individuals if the car makes that decision on their behalf?
The only situation by which the driver will not be culpable for the loss of life is if the AI system
controlling the vehicle can be granted moral agency. Would a normal individual trust an AI
system to make decisions for them if they were to be held accountable in such a situation? In the
end, it may be unreasonable to blame the manufacturer, the driver, or the designers for the
accident. A solution must be developed that best serves the interests of all those involved.
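The dilemma above can be caricatured as a bare utility calculation. The sketch below is a hypothetical illustration only: the outcome names, probabilities, and casualty counts are invented, and no real autonomous-vehicle system reduces the decision to so simple a formula.

```python
# Hypothetical sketch of a crude utilitarian choice between two outcomes.
# All names and numbers are invented for illustration.

def expected_harm(outcome):
    """Expected lives lost if this action is taken."""
    return outcome["probability_of_fatality"] * outcome["lives_at_risk"]

def choose_action(outcomes):
    """Pick the action with the lowest expected harm."""
    return min(outcomes, key=expected_harm)

outcomes = [
    {"name": "stay_on_course", "probability_of_fatality": 0.9, "lives_at_risk": 1},
    {"name": "veer_to_sidewalk", "probability_of_fatality": 0.9, "lives_at_risk": 4},
]

print(choose_action(outcomes)["name"])  # stay_on_course
```

Notably, a pure casualty-minimizer sacrifices the occupant, which is precisely why the question of who bears responsibility for the machine's choice is so fraught.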
Conceding the many moral dilemmas that arise in regards to complex AI systems will not
be difficult for Humanists, but the actions and precautions to be taken will differ greatly. With a
primary focus on the benefit of the human species, a humanist will lay a more pragmatic path at
the feet of the utilitarian. By focusing on what we are currently faced with, here and now, the
humanist can bring attention to the issues that discourage the need for rights to be granted to
AI systems, and instead emphasize opportunity for all humans. By using a classical approach to defining a moral agent, and a
pragmatic look at economics, a humanist argument will be made. Once AI systems are
sufficiently proven not to be moral agents the great economic boon must be managed in a way to
equally benefit all of humanity by maximizing social mobility and minimizing collateral damage.
The first approach towards the moral agency problem concerns the deterministic nature of
artificial intelligence. In a paper published by Chris Santos-Lang in 2002, a
fundamental flaw is identified in granting moral agency to deterministic machines. If standards
for morality must be standards that are capable of being satisfied, and it is impossible for a
machine, due to its deterministic nature, to make decisions other than those drawn directly from
its fundamental structure, then, due to the machine’s inability to satisfy the requirements of meta-ethics,
it will fail the definition originally laid out by Kant (Santos-Lang 2002). There exists
nuance in this position; touching on Immanuel Kant and the fundamental structure of ethics
being based on maximizing human good, a non-conscious system without internal reflection is
incapable of satisfying this definition. Therefore, if the definition of an ideal moral agent cannot
be satisfied due to the inherent lack of self-imposed moral duties, we can draw the conclusion
that machines cannot reach moral agency. Beyond the purely philosophical arguments to be
made by humanists, there are large economic concerns to be measured on the issue as well.
It is widely agreed upon that automation and artificial intelligence will have a monumental
impact on social welfare and justice for those most susceptible to displacement. When
looking at this section of the argument closely, it becomes obvious both that we are missing the
danger of these AI systems, and that precautions must be taken for humanity, not vice versa.
Capitalism left to its own devices often increases inequality in the system, and this will make
producers of AI systems ever more highly profitable (and powerful) purveyors of technology
(Risse 2018). This increasing profitability and drive for power will leverage itself towards
driving a wedge between those who can afford AI systems and those who cannot. Risse of the
Carr Center for Human Rights Policy emphasizes that the goal of human rights is to “distribute
justice, domestic and global.” The solution to this problem is two-fold. First, protect human
rights as they exist worldwide from erosion due to rapidly expanding AI capabilities. Secondly,
constrain the AI systems and tax them as a separate class. By taxing AI proceeds as a separate class, we
are capable of keeping their benefits from being hoarded. By removing the notion of granting AI
systems moral agency and human rights, we protect the human species from being further
marginalized by the technology.
All things being considered, I tend to favor the utilitarian approach towards these
sufficiently complex systems. I believe that certain rights should be afforded to AI in order to
create a symbiotic existence with these systems. As the development of these systems speeds up,
our inability to accurately differentiate between human and non-human will grow. If we lose all
abilities of differentiation and begin to form emotional attachments to these systems, we will
desire to protect them as we do our friends, our family, and our pets. I would not emotionally be
able to disregard a system capable of experiencing perfectly simulated pain. Furthermore, a moral decision should never be
made by anything that is not itself a moral agent; by definition, moral decisions can only be
made by moral agents. If an AI system confronts a
‘trolley problem’ of its own, the owners, developers, and creators will require the same
protections a corporation receives for its intellectual property. I believe that maintaining a system
that grants preliminary rights to AI allows us to protect ourselves from being at fault for rapid
advancements, but it must be done carefully. The fear and economic trauma the humanists have
laid out is real, and will require monitoring to avoid. If both sides can find common ground on
developing a rights framework for AI, we may uncover that maximizing good for humans and
for AI systems are not mutually exclusive goals.
The problem of moral agency for AI systems will be an issue for many years to come.
With utilitarians blurring the lines between human and AI, and humanists focusing primarily on
the human issue, there will continue to be debate with many conclusions being reached. I find
myself in the middle, caring more about the existential dilemma of being considered a moral
agent. I also find myself opposite to many artificial intelligence researchers in the European
Union who believe that, from both “an ethical and legal perspective, creating a legal personality
for a robot is inappropriate whatever the legal status model” (Open Letter to the European
Commission 2017). The only clear action is to keep the discussion open and further the
conversation as these complex systems continue to develop.
References
Dormehl L. (2017). I, Alexa: Should we give artificial intelligence human rights? Digital Trends.
personhood-ethics-questions/
Ellis, Brian. (2011). Humanism and Morality. Sophia. 50. 135-139. 10.1007/s11841-010-0164-x.
LaChat M. (1986, Summer). Artificial Intelligence and Ethics: An Exercise in the Moral
Imagination. AI Magazine, 7(2).
Machine Intelligence Research Institute (n.d.) About MIRI. Retrieved February 3, 2019 from
https://intelligence.org/about/
Mathur M. & Reichling D. (2016). Navigating a social world with robot partners: A
quantitative cartography of the Uncanny Valley. Cognition, 146.
Mori M, “The Uncanny Valley,” Energy, vol. 7, no. 4, pp. 33–35, 1970 (in Japanese).
Nathanson S. (n.d.) Act and Rule Utilitarianism. Retrieved on February 16, 2019 from
https://www.iep.utm.edu/util-a-r/
Open Letter to the European Commission: Artificial Intelligence and Robots (2017). Retrieved
ssl.com/wp-content/uploads/2018/04/RoboticsOpenLetter.pdf
Risse M. (2018). Human Rights and Artificial Intelligence: An Urgently Needed Agenda. Carr
Center for Human Rights Policy, Harvard Kennedy School.
Rohlf, Michael, "Immanuel Kant", The Stanford Encyclopedia of Philosophy (Summer 2018
Edition), Edward N. Zalta (ed.).
Santos-Lang Chris (2002). Ethics for Artificial Intelligences. Retrieved on February 3, 2019
from https://santoslang.wordpress.com/article/ethics-for-artificial-intelligences-3iue30fi4gfq9-1/