
Zarubin 1

Sergiy Zarubin
CST300
February 18, 2018

Lethal Autonomous Weapon Systems (LAWS)

Even before humanity was able to create its first digital machine, people's imaginations ran wild with the possibilities. One of the most prominent early examples is the work of Isaac Asimov (1982) and his Robot series. First published in the fifties, when the notion of a portable computer was science fiction, the series conceived of an artificial intelligence (AI) and the challenges it would face interfacing with humanity. Asimov imbued his artificial protagonists with the Three Laws of Robotics, which stipulate that a robot cannot harm a human or allow a human to come to harm, must obey human commands, and must preserve its own existence. The plot of the novels in the series revolves around robots trying to come to terms with, and resolve, the inherent contradictions within these laws. Asimov's work, in my opinion, was so influential that it shapes, on a subconscious level, all our discussions about the development and use of AI.

There is another work of fiction that influences us whether we want it to or not: the Terminator (1984) series. The evil AI Skynet sends its mobile assault unit, the Terminator, into the past to hunt down and kill the mother of its archnemesis, John Connor, who in the future is knocking on Skynet's door. That future world has been decimated by a nuclear war, started by Skynet to wipe out humanity. The image of the relentless killing machine, never tiring, never sleeping, never giving up its pursuit, was designed to instill a fear of machines in the audience. Whether we want it or not, that fear is rooted in us so profoundly that it guides our words, decisions, and arguments in today's discussions around the use of AI in the military.



On May 13, 2014, De Boisboissel (2015) writes, the UN held a four-day informal meeting to discuss the issues surrounding the development of military AI, or lethal autonomous weapon systems (LAWS), in response to the "Stop Killer Robots" campaign (p. 1). Garcia (2017) reports on a formal follow-up meeting that took place in November 2017. During this latest meeting, not only did many stakeholders declare clear positions on the use of LAWS, but the international community also began to consider the possibility of an AI arms race between technologically developed countries.

Garcia (2017) identifies several interest groups that emerged during the UN meeting. The first consisted of representatives of the top three military powers on the planet: the USA, China, and Russia. This group expressed interest in the further development of LAWS definitions and rejected any legally binding agreements that would limit the research and development of such weapons (para. 6).

Amitai and Oren Etzioni (2017) explain the position of the US military. The use of LAWS would provide US troops with clear military advantages on the battlefield. Machines could access more areas and traverse virtually any terrain. They could work in hazardous conditions: deep waters, high altitudes, and radioactive or toxic environments that are inaccessible to humans. The Etzionis also point out the considerable cost savings the US military would realize by using LAWS. Robots could replace humans in "dull," repetitive, or "dangerous" tasks, reducing the number of enlisted personnel without a loss of combat efficiency and cutting overall costs by more than half. Moreover, the ability of machines to process and share information could drastically reduce the time needed to train and deploy combat-ready units (para. 4-10). The arguments above constitute a factual claim.

Claims of value are also used to justify the use of LAWS. Multiple sources point out that machines could act more "humanely" than flesh-and-blood soldiers. Machines are not susceptible to stress, anger, panic, or rage; therefore, they would not take out rage on the civilian population or sexually assault civilians in acts of revenge. Robots would not loot or pillage and would act with a higher morality than human troops, which could potentially lead to a decrease in collateral damage among the civilian population (Etzioni & Etzioni, 2017, para. 13-14; Bailey, 2015, para. 12).

Multiple authors agree that the very arguments that make the use of LAWS so appealing to the military create problems for humanity in general. For example, the reduction in costs to the military, monetary and otherwise, makes it easier for a country possessing LAWS to go to war. Subsequently, that will lead to an AI arms race between the major military powers. They also point out that, at our current level of technology, it will be hard for machines to distinguish between combatants and non-combatants, which will lead to higher civilian casualties before proper targeting solutions can be developed. All authors agree that if LAWS are deployed on the battlefield, it will be hard to assign fault if the unspeakable happens. Whom would you blame, they ask, for making the wrong decision: the machine, the developer, or the operator? Who will be ultimately responsible for the act of killing? (Baker, 2015, pp. 211-212; Bailey, 2015; Etzioni & Etzioni, 2017; Garcia, 2017; Simpson & Müller, 2016). Garcia points out that technology giants, the scientific community, and various non-governmental organizations (NGOs) took a firm position in favor of banning the research, development, and use of LAWS.

We can see the contradictions. On one side, we would like to lower the expenses of our military and keep our people out of harm's way. At the same time, we can see how much easier it would be for us to go to war if we did not have to count our losses in lives. We can also see that machines are not influenced by the human emotions of fear and rage and would not commit war crimes while guided by such feelings. On the other hand, machines are free of compassion and empathy. They will execute any order given to them without question.

Let's look at the first argument presented: that the use of LAWS will decrease the financial and personal cost to the military and the country. We can recognize that the Utilitarian Approach is being used to argue this ethical issue. Would the use of LAWS benefit our country and our citizens? It definitely would. We would spend less money on the military, and we would be able to save the lives of our soldiers.

Let's step back and look at the utilitarian framework. Henry R. West (2004) describes utilitarianism as an ethical framework in which actions should be judged by the amount of happiness they produce or the amount of unhappiness they reduce, not by whether the action is intrinsically right or wrong. Although utilitarianism in one form or another has been present since ancient times (the Chinese philosopher Mo Tzu, c. 420 B.C., Aristotle, and Epicurus, c. 306 B.C., spread messages similar to the utilitarian one), the birth father of modern utilitarianism is John Stuart Mill (Scarre, 1996, pp. 1-47).

If we look at the original argument of reducing the costs of military conflict to our side, we can apply the utilitarian framework and see that the fewer soldiers dying or getting injured on the battlefield, the happier our society will be. The less money we spend on the military complex, the more money we can spend on elevating happiness in our society. The fewer people engaged in dull, repetitive tasks, the happier they are. It is hard to argue against this: we all want to be happy.

What if we take the Rights Approach? According to Brown University's "A Framework for Making Ethical Decisions" (n.d.) webpage, the Rights Approach is a modification of Kant's duty-based framework, influenced by the works of John Locke. In essence, the Rights Approach stipulates that for an act to be ethical, the rights of all stakeholders should be protected (para. 11).

If we start using LAWS in the military, decreasing the likelihood of massive casualties on our side, it might become easy for us to decide to go to war, overwhelming our enemies with swarms of machines. There is a distinct possibility that we would become an international bully, going into one skirmish after another and relying on force rather than diplomacy. In that case, we would be disregarding the rights of the side opposing us. Moreover, what if that side were not as technologically advanced? We could hardly call such an outcome ethical under the Rights Framework, because the invasion violates the rights of at least one side.

When we look at the arguments for and against the research and development of LAWS, it may look like the side arguing for LAWS development and deployment has the stronger case. Their arguments and appeals are easy to understand, and they do not rely on "what ifs." We should remember, however, that most of the major military powers are already engaged in armed conflicts with technically inferior nations: the U.S. uses drones extensively in the Middle East, the Russian Federation is involved in Ukraine, and China uses its troops to quash rebellions inside its own country. The monetary cost of deploying one soldier in Afghanistan is about $850,000, according to Etzioni and Etzioni (2017), whereas the cost of using one autonomous robot would be around $230,000 (para. 4). We can easily see that the lower price would allow a higher number of mobile units to be deployed in a conflict zone.
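The arithmetic behind that last point is straightforward; here is a quick sketch using only the two figures cited above (the 100-soldier budget is an arbitrary illustrative choice, not a figure from the source):

```python
# Worked example of the Etzionis' (2017) cost figures: ~$850,000 to
# deploy one soldier in Afghanistan vs. ~$230,000 per autonomous robot.
cost_per_soldier = 850_000  # USD, Etzioni & Etzioni (2017), para. 4
cost_per_robot = 230_000    # USD, same source

# The budget that fields 100 soldiers...
budget = 100 * cost_per_soldier           # $85,000,000

# ...fields more than three and a half times as many robots.
robots_fielded = budget // cost_per_robot
print(robots_fielded)  # 369
```

In other words, for every soldier the budget supports, it could support roughly three to four robots, which is why the cost argument carries so much weight with the military.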

De Boisboissel (2015) describes several scenarios in which LAWS would be used. He also illustrates a basic flowchart of AI decision making, both including and excluding human input in the control loop. In some cases, keeping a human in the control loop could lead to mission failure because of slow human response times. In one of his examples, a surface-to-air battery is overwhelmed by a swarm of incoming threats; a human operator could freeze up, whereas a LAWS would come up with a targeting solution quickly (sec. 6-8).
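The structure of this argument can be sketched in a few lines of code. The response times below are purely illustrative assumptions of mine, not figures from De Boisboissel; the point is only that a serial human confirmation step caps how many threats can be engaged within a fixed window:

```python
# Illustrative sketch of the human-in-the-loop argument: a defensive
# battery facing many simultaneous threats, each of which must be
# engaged before a fixed window closes. All timings are hypothetical.

ENGAGEMENT_WINDOW_S = 5.0  # assumed seconds before threats get through
HUMAN_CONFIRM_S = 2.0      # assumed seconds per human confirmation
MACHINE_DECIDE_S = 0.1     # assumed seconds per autonomous decision

def threats_stopped(num_threats: int, seconds_per_decision: float) -> int:
    """Count how many threats one serial decision-maker engages in time."""
    engaged = 0
    clock = 0.0
    for _ in range(num_threats):
        clock += seconds_per_decision
        if clock <= ENGAGEMENT_WINDOW_S:
            engaged += 1
    return engaged

swarm = 40  # incoming threats
print(threats_stopped(swarm, HUMAN_CONFIRM_S))   # human in the loop: 2
print(threats_stopped(swarm, MACHINE_DECIDE_S))  # autonomous loop: 40
```

Under these assumed numbers, the human-confirmed loop stops only 2 of 40 threats before the window closes, while the autonomous loop stops all 40, which is exactly the trade-off the flowchart is meant to capture.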

At the same time, what if the enemy were to use a fleet of civilian aircraft to shield itself from air defenses? Would a LAWS be able to distinguish between military and civilian aircraft, something that even human operators have trouble with?

The issue of the use of AI in modern warfare is complex. Banning the use of AI in the military would lead to a higher number of casualties among military personnel. Nor can we expect the major military powers simply to abandon all research into military AI; the U.S., China, and Russia would develop this technology in secret, without international oversight. The concern about a new arms race is therefore moot, as such a race has already begun. What we can do instead is allow the development and use of AI technology in the military under broad international oversight.

We can expect the military to use robots and AI systems in its operations more and more. Let's consider non-lethal applications of AI. Hessman (2016) describes the automated guided vehicles (AGVs) that Amazon uses in its warehouses. AGVs allow for much more efficient use of space, and they can carry up to 3,000 pounds of equipment without supervision or human input (para. 5-10). We can imagine that in a few years such systems will be deployed in arms depots across the country, since working with high explosives and ammunition is extremely dangerous for people. Self-driving vehicles could replace human convoys delivering supplies, decreasing the number of casualties from improvised explosive devices (IEDs). AI flight vehicles could be used for medical evacuation and reconnaissance.



Baker (2015), in his book Key Concepts in Military Ethics, explains the issues we could face in using autonomous systems in the military. It is hard for modern AI systems to comply with the discrimination/distinction principle and the principle of proportional and necessary force. For example, an AI would struggle to distinguish between a civilian law-enforcement officer or a hunter and a rebel fighter in civilian clothes. In another example, that of an enemy combatant surrendering, it would be hard for an AI to distinguish between a ruse and a genuine attempt to surrender (pp. 210-211).

The international community, as part of the oversight effort, could mandate the use of AI in gathering and analyzing data about the use of force by military personnel, evaluating and learning the nuances of human behavior and decision making in life-and-death situations. That way we could postpone the deployment of LAWS on the battlefield while providing the international community and humanitarian organizations with visibility into conflict zones.

In conclusion, we have to admit to ourselves that the use of AI in the military is a far more complex and nuanced issue than it first appears. We cannot deal in absolutes here. Banning the use of AI in the military equates to a loss of lives. On the other hand, allowing unchecked military research in the AI field without international oversight is also unacceptable. We have to find a balance between the two opposing positions. Quoting Baker (2015): "There is no doubt that the field of autonomous weapons systems will see extensive debate in the coming years" (p. 212).

References

Asimov, I. (1982). The complete robot (1st ed.). Garden City, N.Y.: Doubleday.

Baker, D. (Ed.). (2015). Key concepts in military ethics. Retrieved from

http://ebookcentral.proquest.com

Bailey, R. (2015). Let slip the robots of war: Lethal autonomous weapon systems might be more

moral than human soldiers. Reason Magazine, 46(11), 16.

Brown University. (n.d.). A framework for making ethical decisions. Retrieved February 18, 2018, from https://www.brown.edu/academics/science-and-technology-studies/framework-making-ethical-decisions

Cameron, J., & Hurd, G. A. (1984). The Terminator. Los Angeles: Hemdale.

De Boisboissel, G. (2015). Uses of lethal autonomous weapon systems. In 2015 International Conference on Military Technologies (ICMT) (pp. 1-6).

Etzioni, A., & Etzioni, O. (2017). Pros and cons of autonomous weapons systems. Military Review, 97(3), 72.

Garcia, D. (2017, December 13). Governing Lethal Autonomous Weapon Systems. Retrieved

January 18, 2018, from https://www.ethicsandinternationalaffairs.org/2017/governing-

lethal-autonomous-weapon-systems/

Hessman, T. (2016, September 07). A Brave New World of Warehouse Robots. Material

Handling & Logistics, p. 7.



Scarre, G. (1996). Utilitarianism (Problems of philosophy). London; New York: Routledge.

Simpson, T., & Müller, V. (2016). Just War and Robots’ Killings. The Philosophical Quarterly,

66(263), 302-322.

West, H. (2004). An introduction to Mill's utilitarian ethics. Cambridge, U.K. ; New York:

Cambridge University Press.