
To: Professor Douglas

From: Ryan Dopler


Subject: Artificial Intelligence Research Paper
Date: April 16, 2016

The Ethics of Artificial Intelligence

Introduction
The technological field has advanced to something far beyond what people could have imagined just half a century ago. The technological revolution has changed the lifestyle of societies just as the Industrial Revolution changed the lifestyle of Europe. Who would have imagined the Internet and computers in most homes, back when a single computer could barely fit inside an entire building, much less intelligent machines? Artificial intelligence is an intriguing technology that will shape the human lifestyle of the future. Restricting research and progress in the field is hardly a feasible task in today's world. More realistically, we should monitor the technology and keep it on a safe and measured path of progress.

History
Artificial intelligence folklore has been traced back to the times of Ancient Egypt. But the "birth of artificial intelligence," as some would call it, came in 1956 at the Dartmouth Conference. The conference was built on two earlier ideas: the principle of feedback and the Logic Theorist. The principle of feedback was observed by Norbert Wiener, who theorized that all intelligent behavior was the result of a feedback mechanism. An example would be a temperature control system that simply checks the temperature of the room, compares the reading to the desired temperature, and adjusts the flow of heat to bring the room to the desired temperature.
Then in 1955, Newell and Simon developed the Logic Theorist, a program that represented every problem as a tree. The program would attempt to solve a problem by selecting the branch that seemed most likely to lead to the correct solution. In 1956, John McCarthy organized the Dartmouth Conference to draw interest and talent to the field of artificial intelligence (Lieto and Radicioni, 2016).
Finally, almost a decade after the Dartmouth Conference, centers for artificial intelligence research began to form at Carnegie Mellon and MIT, and further advancements were made in the field. The General Problem Solver (GPS) was developed based on Wiener's feedback principle; the GPS was capable of solving a greater range of common-sense problems. As the field progressed, the LISP language was created, and LISP became the language of choice among artificial intelligence developers. Then in 1963, the Department of Defense's Advanced Research Projects Agency (ARPA) gave MIT a 2.2 million dollar grant to be used in researching "Machine-Aided Cognition," or artificial intelligence. This move by the United States government was meant to ensure that the country had the technological advantage over the Soviet Union (Miner, 1993).
Over the next few decades, steady advancements were made. Programs were able to solve algebraic story problems (STUDENT) and understand simple English sentences (SIR). The 1970s brought forth the advent of the expert system, which was capable of predicting the probability of a solution under set conditions. Because of the amount of storage space available, such a program could store the solution associated with each conditional statement. Machine vision also emerged in the 1970s: machines were able to differentiate between shapes, color, shading, and texture. By 1985, hundreds of companies offered machine vision systems to perform quality control on assembly lines. The 1980s showed us that artificial intelligence technology had real-life uses. The US military put artificial intelligence-based hardware to the test during Desert Storm, where the technology was used in missile systems and other areas of combat.
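To give a concrete sense of the conditional, rule-based style of reasoning described above for expert systems, here is a minimal Python sketch. The rules and confidence values are invented for illustration and are not taken from any historical system.

# Minimal sketch of an expert-system style rule base: each rule pairs a set
# of conditions with a conclusion and a confidence value. The rules and the
# numbers below are invented for illustration only.
rules = [
    ({"fever", "cough"}, ("flu", 0.7)),
    ({"fever", "rash"}, ("measles", 0.6)),
    ({"cough"}, ("cold", 0.4)),
]

def infer(observed):
    """Return conclusions whose conditions are all satisfied, best first."""
    matches = [conclusion for conditions, conclusion in rules if conditions <= observed]
    return sorted(matches, key=lambda c: c[1], reverse=True)

print(infer({"fever", "cough"}))   # [('flu', 0.7), ('cold', 0.4)]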
The present state of the art can be found at MIT in the humanoid robotics group. One example is Coco, the newest member of the group. Coco is fully mobile, which helps with social interaction and intelligence. Independence from "a human caregiver" allows Coco to exhibit behaviors "closer to their evolutionary origins." When something to be avoided is sensed, Coco has the capability to move farther away from the object; likewise, Coco is able to move closer to objects to investigate and explore the world. Coco's team has also installed a vestibular system to help keep its eyes level to the ground while it is in motion. Though not yet a fully interactive robot, Coco is starting to bear a slight resemblance to an interactive being.
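A highly simplified Python sketch of the approach-and-avoid behavior described for Coco appears below. The distance thresholds and motion choices are assumptions made for illustration only, not details of MIT's actual system.

# Highly simplified approach/avoid behavior of the kind described for Coco.
# The sensor values and thresholds are assumptions for illustration only.
TOO_CLOSE = 0.3   # metres; back away below this distance (avoidance)
CURIOUS = 1.5     # metres; move closer beyond this distance (exploration)

def choose_motion(distance_to_object):
    if distance_to_object < TOO_CLOSE:
        return "move away"          # avoidance behavior
    if distance_to_object > CURIOUS:
        return "move closer"        # investigate and explore
    return "hold position"

for d in (0.2, 0.8, 2.0):
    print(d, "->", choose_motion(d))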

Ethics
The ethics of artificial intelligence falls into two separate but dependent issues. The first is whether or not research in the area of artificial intelligence is ethical, and to what extent the research should be limited, if at all. Secondly, if the research were unlimited and only loosely monitored, what would happen if a true artificial intelligence, close to that of a human, were to exist?
Many different parties would be affected by the existence or nonexistence of artificial intelligence. The most obvious are the AI machines themselves; after all, it is their existence or creation that is the issue at hand here. The inventors and researchers also have a stake in the technology. Many of the researchers have made innovating in artificial intelligence the purpose of their lives; they spend day in and day out working to improve the technology. Placing limitations on these people's creativity would be disastrous for them.
In the extreme view, the whole human race is a stakeholder. People in general could easily become dependent on machines. In many ways, our society already is; computers run far more of our daily tools than most people could imagine. Human dependence on machines is only one of the many problems that could stem from a higher form of artificial intelligence. The most common fear is artificial intelligence surpassing human intelligence, largely because movies such as The Matrix have shown possible outcomes. Even though these outcomes have been "Hollywoodized," a similar situation is still possible.
Not only has Hollywood portrayed negative outcomes, but movies such as A.I. and Bicentennial Man have also tackled the questions of how humans would interact with artificial intelligence and what rights it would have. This leads us to the second of the ethical issues: what are the rights of each stakeholder?
Starting with the creators: the researchers have every right to expand their knowledge and to be creative and productive. As in any other situation, researchers do not have the right to create something that is destructive to the human race. Or do they? We have, after all, allowed the creation of weapons of mass destruction, such as the atomic bomb. The primary difference is that humans will always control the use of such weapons (Waltz, 1996). When dealing with artificial intelligence, we are potentially dealing with an entity that could be out of human control.

Since the technology is headed down the path of creating an artificial intelligence in the true sense, we, as a society, must look into what rights such a creation would have. This technology does not fall into an already predetermined category. The machines would be neither just machine nor animal; at the same time, they would very much have parts of both. The question then becomes: when does something become similar enough to be thought of as an animal or a human? At what point do we draw the distinction between elements and electricity, and life? When the technology advances to the point that machines can have feelings and emotions, the human mind will naturally think differently of the "machine." Yet to group artificial intelligence with animals, no matter what technological advances are made, is a leap. To group it with humans and justify doing so would be almost impossible. (Hardesty, 2016)
After thousands of years, we are just now beginning to question the ancient teaching that animals have no souls and thus do not have to be treated with any humane care. People came to depend on animals for survival, and there is little that one could do now to change an entire society's views toward animals. People are simply too dependent on the role that animals play in our lives.
Humankind is the last stakeholder. Humans have always produced innovative inventions to make their own lifestyle more comfortable, and they will always have the right to create new and innovative resources to make life as easy as possible. At the same time, humans also have the right to continue to exist, which is why the researchers have an ethical obligation to the rest of humanity.
There are many professional issues that surround the field of artificial intelligence. Companies are after the newest and best product to take to market, and they are constantly pushing researchers to find the newest gadget to interest the public. The researchers should have an ethical standard to ensure safety within the field. In technological advancement, there is a fine balance between overprotection and the loss of innovation and creativity. (Innovation and Entrepreneurship v.195, 2015)
Currently, there are no laws or government policies pertaining to artificial intelligence. Within corporations, there are bound to be policies regarding the types or focus of research in artificial intelligence. But the general public does not seem to be concerned with the advancements in this field. (Hengstler, Enkel, and Duelli, 2016)

Conclusion
There are three possible actions to be taken. The first is to stop research in this field. This would get rid of the ethical issue altogether: the creation of artificial intelligence would no longer be in question, and the question of what rights an artificial intelligence would have would be gone as well.
The second choice would be to have completely open research with no regulations. Companies would benefit by having faster advancements, which would generate more profits. The researchers would be free to explore all of the options and uses that could be thought of for the machines. And since the artificial intelligence does not yet exist, its rights cannot be violated. This would be the best option when taking into account the individual rights of all parties involved and the fairness of the decision.
But in order to gain the greatest common good with the fewest negative consequences, research should continue with the addition of advisory or ethics boards that review and evaluate the direction of progress. This would limit the potential damage that could be done, yet still allow researchers freedom and creativity. (Ashrafian, 2015)

The first two options are simply not realistic in our society. People have a need to be creative, and people also have a need to feel safe. The third option allows for both: it allows for unlimited creativity and innovation while at the same time having others review the work to ensure the safety of society. Simply put, other people will often see far different possibilities for a design than the original creator does. Humans are capable of amazing feats. We are easily capable of our own destruction; taking precautions against that is acting responsibly toward future generations.

Sources
Ashrafian, Hutan. Science & Engineering Ethics 21.1 (Feb. 2015): 29-40. DOI: 10.1007/s11948-013-9513-9.

Cadar, Cristian. "Proceedings of the Eleventh European Conference on Computer Systems." N.p., n.d. Web. 19 Apr. 2016.

Čerka, Paulius, Jurgita Grigienė, and Gintarė Sirbikytė. "Liability for Damages Caused by Artificial Intelligence." Computer Law & Security Review 31.3 (2015): 376-89. Web.

Hardesty, Larry. "Energy-friendly Chip Can Perform Powerful Artificial-intelligence Tasks." MIT News. MIT, 2 Feb. 2016. Web. 22 Mar. 2016.

Hengstler, Monika, Ellen Enkel, and Selina Duelli. "Applied Artificial Intelligence and Trust: The Case of Autonomous Vehicles and Medical Assistance Devices." Zeppelin University, Dr. Manfred Bischoff Institute of Innovation Management of Airbus Group, Am Seemooser Horn 20, Feb. 2016. Web. Mar. 2016.

Lieto, Antonio, and Daniele P. Radicioni. "From Human to Artificial Cognition and Back: New Perspectives on Cognitively Inspired AI Systems." Cognitive Systems Research 39 (2016): 1-3. Web.

Miner, Jack. "An Overview of Nonmonotonic Reasoning and Logic Programming." N.p., 1993. Web. 22 Mar. 2016.

"QuAIL." NASA, 10 Dec. 2015. Web. 28 Mar. 2016.

Springer Verlag. "Personal and Ubiquitous Computing." Springer, n.d. Web. 19 Apr. 2016.

Waltz, David L. "Artificial Intelligence." NEC Research Institute, 1996. Web. 22 Mar. 2016.
