Cox 1

Jamie Dalan Cox

Professor Parry

ENGL 2010

25 February 2019

The Future Potential of AI

Artificial Intelligence, otherwise known as AI, is becoming the center of conversation among computer scientists, politicians, and the public. AI is defined as giving a computer the ability to adapt and to think the way humans do. The whole concept may be difficult to wrap your mind around. In the last couple of years AI has developed at an exponential rate. We have come from computers struggling to beat humans at a simple game of Pong to beating world champion chess players. The question many ask is when AI will exceed human intelligence and cause irreversible damage to human life. I will compare and contrast different viewpoints on the issue and provide my own opinion throughout this paper.

In researching this particular issue, I found that many ideas and opinions conflict on whether AI will become a threat to humans. A scholarly article by Devdatt Dubhashi and Shalom Lappin puts an interesting idea out into the open. Their position suggests that the idea of a superintelligence destroying humanity is little more than fanciful (43). The concern is not that one day we will be unable to reverse AI; it is about how it will affect us. They argue that most development within AI is supervised by professionals and programmers. As Dubhashi and Lappin inform us, "Work in technology driven by AI generally seeks to solve particular tasks, rather than to model general human intelligence" (44). AI is developed for situational circumstances and not as a general form of intelligence: for example, learning how to spot objects on the road for self-driving cars. There is still a lot of research and development to go into
the process of creating general intelligence for a computer. But Dubhashi and Lappin are not dismissing the issue as pure fiction; in fact, they concede that the argument around AI rests on a logical possibility (45). Throughout their article, however, they suggest that the danger isn't something to worry about: programmers and scientists know what they are doing in this specialized field of research.

In "The Evitability of Autonomous Robot Warfare," Noel E. Sharkey suggests that AI and robotic development will result in catastrophes around the world if it is not regulated or stopped. In the first quote from his article he says, "We could be moving into the final stages of the industrialization of warfare towards a factory of death and clean-killing where hi-tech countries fight wars without risk to their own forces" (Sharkey 788). He is saying that we are headed toward a future where humans won't fight their wars anymore. We will develop robotic technology and AI to conquer and win wars, and we won't need to worry about the bloodshed of our soldiers. Sharkey backs this claim up with the use of drones by the United States in warzones like Pakistan, Yemen, Somalia, and the Philippines (788). His claim does have some backing: according to The New York Times, "Since 2009, the government said, 473 strikes had killed between 2,372 and 2,581 combatants" (Shane). All of those combatants were killed by robotic drones, although the drones are operated by humans, so this isn't completely AI-controlled; humans still have to step in and give the robot the functions to complete its task. The issue that arises, though, is when we come to a future where a robot functions completely by itself without the intervention of a human. Will it be able to tell the difference between a civilian and an enemy? Sharkey argues that AI wouldn't be able to tell the difference; he brings up the point that a lethal autonomous robot won't be able to discriminate (788).


Edward A. Lee brings up interesting points on this issue as well. His article "Is Software the Result of Top-Down Intelligent Design or Evolution?" asks whether we evolve together with AI or whether AI is simply a top-down design. He explains that AI will evolve with humans and that the relationship will be a symbiotic one. A rather interesting quote from him says, "Humans today are strongly dependent on software systems, just as software systems are dependent on humans" (Lee 35). He is saying that humans give AI purpose, and without us giving it a purpose it does nothing. An example he gives is "memes," which he describes as propagating and always changing as humans evolve their culture and thinking; if we don't evolve our ideas and thinking, then memes become unfunny and uninteresting to us (35). Lee highlights that we will reach a point of symbiosis with technology where it becomes part of us. AI relies on us to give it information in order to function, and without us feeding it information it won't progress any further (36). It is true that, as of now, humans have control over AI, and without us providing the information and databases it is unable to function.

These are three different viewpoints on how AI functions and how, with time, it will actually come into play. Some are optimistic while others are not. I think that AI is something we should embrace and shouldn't be scared of. We have to realize we are far from the future life depicted in Terminator. AI can only progress as far as we are able to push it. The real problem, in my opinion, is the military use of robots with AI implementations. I can see a future where governments will make deadly devices to kill and win wars. I feel there should be policies and regulations put into place to stop the development of robotic AI before it reaches the point where it can harm civilians and cause damage that cannot be undone.


Works Cited
Dubhashi, Devdatt, and Shalom Lappin. "AI Dangers: Imagined and Real." Communications of the ACM, vol. 60, no. 2, Feb. 2017, pp. 43–45. EBSCOhost, doi:10.1145/2953876.

Lee, Edward A. "Is Software the Result of Top-Down Intelligent Design or Evolution? Considering the Potential Danger to Individuals of Rapid Coevolution." Communications of the ACM, vol. 61, no. 9, Sept. 2018, pp. 34–36. EBSCOhost, doi:10.1145/3213763.

Shane, Scott. "Drone Strike Statistics Answer Few Questions, and Raise Many." The New York Times, 21 Dec. 2017, www.nytimes.com/2016/07/04/world/middleeast/drone-strike-statistics-answer-few-questions-and-raise-many.html.

Sharkey, Noel E. "The Evitability of Autonomous Robot Warfare." International Review of the Red Cross, vol. 94, no. 886, June 2012, pp. 787–799. EBSCOhost, doi:10.1017/S1816383112000732.