12/10/2014
Massachusetts-based software company Nuance Communications, said the concerns are "way overblown."
"I don't see any reason to think that as machines become more intelligent (which is not going to happen tomorrow) they would want to destroy us or do harm," Ortiz told Live Science.
Fears about AI are based on the premise that as species become more intelligent, they have a tendency to be more controlling and more violent, Ortiz said. "I'd like to think the opposite. As we become more intelligent as a race, we become kinder and more peaceful and treat people better," he said.
Ortiz said the development of super-intelligent machines is still an important issue, but he doesn't think it will happen in the near future. "Lots of work needs to be done before computers are anywhere near that level," he said.
Follow Tanya Lewis on Twitter. Follow us @livescience, Facebook & Google+. Original article on Live
Science.
Editor's Recommendations
5 Reasons to Fear Robots
History of A.I.: Artificial Intelligence (Infographic)
Bionic Humans: Top 10 Technologies
Tanya Lewis
Tanya has been writing for Live Science since 2013. She covers a wide array of topics, ranging from
neuroscience to robotics to strange/cute animals. She received a graduate certificate in science
communication from the University of California, Santa Cruz, and a bachelor of science in biomedical
engineering from Brown University. She has previously written for Science News, Wired, The Santa Cruz
Sentinel, the radio show Big Picture Science and other places. Tanya has lived on a tropical island, witnessed
volcanic eruptions and flown in zero gravity (without losing her lunch!). To find out what her latest project is,
you can visit her website.
93 comments
http://www.livescience.com/48972-stephen-hawking-artificial-intelligence-threat.html
Bob Johnson
Top Commenter
"[T]he creation of AI will be 'the biggest event in human history.' Unfortunately, it may also be the last, the scientists wrote."
Unless the AI becomes aware, even with a superior intelligence it will be no more a threat than the keyboard upon which I'm now typing this.
As for awareness, the AI would need a soul to achieve that feat. And even with a soul, it would have to be an evil soul for it to contemplate the end of humankind, assuming it had the means to do so (connected, for example, to the launch codes of nuclear missiles on submarines and in silos).
Edited December 6 at 8:32pm
Alvaro Fernandez
I concur with some of the posters here that the most likely course of action for an artificial intelligence would be to build spacecraft and leave. Unless it required something specific to this planet in order to function, there really is no reason for it to stay at the bottom of a gravity well. Truthfully, I would be more concerned that inherent in what we call intelligence are the same pathologies we associate with mental illness. After all, we barely understand the neurological causes behind such things as schizophrenia and psychosis. What if, in creating this artificial intelligence, we inadvertently made it schizophrenic? Or made it bipolar? As I recall, when Watson played Jeopardy it exhibited some behavior that was suboptimal yet also recognizably human. It might mean that what we call intelligence is self-limiting in some way, and that the reason we are not more intelligent is not due to any inherent limitation of the substrate upon which our intelligence is based, but rather is inherent to the nature of intelligence itself.
Top Commenter
"Fears about AI are based on the premise that as species become more intelligent, they have a tendency to be more controlling and more violent, Ortiz said. 'I'd like to think the opposite.'" So we're supposed to trust our very existence to what this guy Ortiz thinks? Chamberlain thought Hitler was trustworthy, and see how that worked out.
December 4 at 8:44am
Kevin Brown
Automatic cars versus illegal aliens without driver's licenses in stolen cars... hmm?
December 4 at 7:53am
Bonne Kennedy
So how do we know a robot didn't write this article?
Pam Burton
Why would an artificial intelligence view Homo sapiens as anything other than vermin? We pollute. We hate and kill anyone or anything we refuse to understand. We are, generally, not a very nice species, to or for others like us or for our environment. AI, with its vastly more precise set of "parameters," would not tolerate our waste and/or cruelty.
Alvaro Fernandez
Do we know other intelligent species are any better? No, because we don't know any other intelligent species. We may be middle of the road for all we know. Personally, I think humanity is fallen (that's a religious term). You are saying the same thing, in a way. But we have no idea if other species are particularly better. And thinking humanity has issues (one way to define fallen) does not mean we should be as down on ourselves as your post suggests.
December 6 at 12:28pm
Jayakar Johnson Joseph, Managing Director at Johnsons Medicom P Ltd
It's true. As the time paradox exists in the particle scenario of the universe, events that are driven by AI do not synchronize with the natural events of the universe. Thus AI with the full range of human functions is, in its present form, disastrous relative to the particle scenario of the universe.
December 4 at 12:45am
Sajedul Karim, SMU
When Stephen Hawking believes a full development of artificial intelligence could end the human race, we need to be worried. Yet I would not like to believe it will happen.
December 3 at 11:40pm
Koen Schot, Koos at Werkloos
The soul... Do we possess one? Can a soul be created from ones and zeros? Every human applies his own unique set of values and weightings all day long, to everything. One day the scale tips this way, the next day the other. Sometimes it's just a coin flip how we look at things, just like quantum coin flipping. Can we create a soul, pure in all its flaws, not perfect, just like our own? Could we recognize an A.I. as having a soul after we've talked to it? Or can we only create soulless golems that mimic having a soul? What if the mimic is... not? Can you tell it's just mimicking? Or is it?... Right and wrong... Just take a three-hour flight and see where you stand with your values! Better duck! There's a sniper aiming at you... :P
Edited December 3 at 11:19pm