
Jada Kimple

March 29, 2018

Research Project: Should we build robots that feel human emotion?

Should we build robots that feel human emotion? By robots, we mean humanoids: robots that resemble humans. Some feel that humanoids are not moral agents, while others disagree. Some believe humanoids can socially interact with people, while others see it differently. These differing opinions stem from disagreements about whether we should build robots that feel human emotions.

Many people argue about the morality of robots, and in many circumstances robots can be seen as moral. The article “When Is a Robot a Moral Agent?” by John P. Sullins, published by SSU ScholarWorks in December 2006, examines the circumstances under which robots count as moral agents. According to Sullins, a robot does not need personhood to be moral, but it must meet three criteria. Sullins states, “I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. The second is when one can analyze or explain the robot’s behavior only by ascribing to it some predisposition or intention to do good or harm. And finally, robot moral agency requires the robot to behave in a way that shows an understanding of responsibility to some other moral agent.” In order to be considered moral, a robot must be able to make moral judgments based on the concepts of right and wrong, and it must be able to be held accountable for its actions in some way. This reveals that Sullins thinks a robot must have some capability of acting with reference to right and wrong in order to be considered a moral agent. Mark Coeckelbergh says otherwise. In September 2010, “Moral appearances: emotions, robots, and human morality” by Mark Coeckelbergh was published by Springer. According to Coeckelbergh, a robot cannot be moral where emotions are concerned. Coeckelbergh states, “If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots satisfy these conditions. Thus, at most, robots could be programmed to follow rules, but it would seem that such ‘psychopathic’ robots would be dangerous since they would lack full moral agency.”

Robots are programmed to do as they are told and cannot function without being specifically programmed. Robots lack the consciousness and feelings needed to have emotions of any kind. This reveals that robot morality cannot exist if it depends on emotions. Coeckelbergh did note, though, that drawing robots into our social-moral world is less problematic than it might first seem, since human morality also relies on such appearances. As you can see, Coeckelbergh and Sullins hold two different views, though Peter M. Asaro would agree with both in different ways. In December 2006, “What Should We Want From a Robot Ethic?” by Peter M. Asaro was published by IRIE. Asaro believes there are at least three things we might mean by “ethics in robotics”: the ethical systems built into robots, the ethics of the people who design and use robots, and the ethics of how people treat robots. He writes, “I shall argue that what we should want from a robot ethic is primarily something that will prevent robots, and other autonomous technologies, from doing harm, and only secondarily something that resolves the ambiguous moral status of robot agents, human moral dilemmas, or moral theories.” Robot ethics, then, comes before robot moral agency: people must stop and prevent robots from doing harm. This reveals that a robot cannot be a moral agent if it may not make a decision by itself. Asaro, though, is undecided on whether this is possible. He is on both sides, and would agree and disagree with different parts of both Sullins and Coeckelbergh.

People have different views on whether robots can socially interact with people. In “Social Robots that Interact with People,” Breazeal argues that for a robot to interact with people, it needs to do so on a cognitive and emotional level; it must be able to communicate verbally and through nonverbal signals. Breazeal states, “A deep understanding of human intelligence and behavior across multiple dimensions (i.e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach where the design of social robot technologies and methodologies are informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.” For robots to engage with us, they will need to do so on a cognitive and emotional level. This involves a wide range of social-cognitive skills, including a theory of other minds, so that a robot can understand human behavior well enough to assist people. This reveals that for a social robot to interact with people successfully, it must be designed from a psychological viewpoint. Takayuki Kanda has a different perspective. In “Human-Computer Interaction,” Kanda argues that robots could form relationships with children and that children might learn from robots as they learn from other children. Kanda states, “Two English-speaking ‘Robovie’ robots interacted with first- and sixth-grade pupils at the perimeter of their respective classrooms. Using wireless identification tags and sensors, these robots identified and interacted with children who came near them. The robots gestured and spoke English with the children, using a vocabulary of about 300 sentences for speaking and 50 words for recognition. The children were given a brief picture-word matching English test at the start of the trial, after 1 week and after 2 weeks. Interactions were counted using the tags, and video and audio were recorded. In the majority of cases, a child’s friends were present during the interactions. Interaction with the robot was frequent in the 1st week, and then it fell off sharply by the 2nd week. Nonetheless, some children continued to interact with the robot.” As seen here, these robots interacted with the children just as their peers would, though the study also showed the limits of children’s attention span for a robot. This reveals that robots can interact with children and stand in as peers, though young children, especially when around friends, lose interest in interacting with a robot.

Many disagree on whether robots are moral agents and on whether they can interact with people for long. Developing a robot that resembles a human would never replace humans, but it could advance them. If robots were able to feel emotion, they could help many people psychologically. NASA wants to send a humanoid robot to another planet, because robots can enter any type of environment and operate like a human. If a robot can do such a thing for NASA, a robot should definitely be built to feel human emotions, to help humans emotionally.

Works Cited

● Coeckelbergh, M. “Moral Appearances: Emotions, Robots, and Human Morality.” Ethics and Information Technology, vol. 12, 2010, p. 235. https://doi.org/10.1007/s10676-010-9221-y

● Breazeal, C., A. Takanishi, and T. Kobayashi. “Social Robots that Interact with People.” Springer Handbook of Robotics, edited by B. Siciliano and O. Khatib, Springer, Berlin, Heidelberg, 2008.

● Kanda, Takayuki. “Human-Computer Interaction.” L. Erlbaum Associates Inc., Hillsdale, NJ, 6 Jan. 2004, dl.acm.org/citation.cfm?id=1466551.

● Capurro, Rafael, and Michael Nagenborg. Ethics and Robotics. AKA, 2009.

● Nørskov, Marco. Social Robots: Boundaries, Potential, Challenges. Ashgate Publishing, 2016.
