Jonathan Delgado
CST 300 Writing Lab
5 October 2018
How Far-Fetched Is Skynet?
Ever since the Terminator movies depicted an artificial superintelligence (ASI), Skynet,
taking over the world, there has been worry that an ASI could do real damage. In an
increasingly digital age, we continue to bolster our reliance on technology. This was not
always the case, and many of the technologies we rely on today were made before security was
a prime concern. In recent years, governments have begun creating military branches dedicated
to cybersecurity, both to defend internal infrastructure and to plan offensive attacks if
necessary. Stuxnet, a virus that was able to cause physical harm to Iran's nuclear program, is
a good example of what these teams are capable of. If humans manually finding these exploits
are able to cause so much damage, it brings into question how much damage a powerful ASI
connected to the internet could do. There is an ongoing ethical argument over whether we as a
society should continue to develop an ASI due to these concerns. The development of an ASI is
still largely an idea; no meaningful (publicly known) progress has been made, so this debate
is mostly a theoretical one. The two primary sides of this argument are examined below.
There is a large difference between the AI we commonly refer to today and the
concept of a machine that can self-improve or even become self-aware, which we can refer to as
an artificial superintelligence (ASI). The most common form of AI known to the public today
would be something like the Amazon Echo, more commonly known as “Alexa” (Bogost,
2017). This consumer appliance responds to voice cues as if it were a human, using prebuilt
algorithms to determine how to respond. Alexa is not learning or intelligent by human
standards; the algorithms it uses make it seem that way to the layman. When the user prompts
Alexa, it begins recording audio, and that audio is transformed into text so a computer can
begin processing it. Once the query has been translated to text, algorithms try to determine
the intent of the message. If that intent does not match one of its pre-programmed algorithms,
it will simply reply with an error rather than attempting to “learn” how to do it (Levels of
AI Self-Improvement, n.d.). There are other common forms of AI as well,
for instance the concept of sentiment analysis, which allows computers to determine the
sentiment of some text. This is extremely useful in the age of social media, allowing large
companies to quickly find and respond specifically to messages with a negative sentiment. This
process uses an algorithm that compares the text being analyzed against training data; the
result is a percent match of that message against specific training data, usually labeled with
a positive or negative sentiment. Both Alexa and the process of sentiment analysis use the
same basic concept: they are simply algorithms that compare something to training data in
order to extract some kind of meaningful conclusion. The “intelligence” in this case is
comparing a query to training data, while the “learning” is a human manually updating that
training data to make better comparisons in the future; there is nothing truly intelligent
about these algorithms. On the flip
side, there is a concept of a much more advanced type of AI, an artificial superintelligence (ASI)
that is able to actually learn and self-improve on its own. This ASI would be a breakthrough in
technology, allowing us to combine the creativity and learning of the human brain with the fast
and accurate processing power that machines are great at. This ASI would be able to
self-improve, or modify its own training data to learn from previous failures; it would be able to
do this rapidly and constantly, never needing to rest or sleep (Gent, 2017). This would allow
the ASI to learn about its possibilities and environment at an unprecedented rate. Many large
companies, such as Google, Facebook, Amazon, and Microsoft, are currently attempting to build
such a system.
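The training-data comparison described above can be sketched in a few lines of Python. This is a toy illustration only: the `TRAINING_DATA` examples, the `match_percent` scoring, and the function names are all invented for demonstration, and real sentiment-analysis systems use far more sophisticated statistical models than simple word overlap.

```python
# Toy sketch of sentiment analysis as described above: compare a message
# against labeled training data and report a percent match per label.
# All example data below is made up for illustration.

TRAINING_DATA = {
    "positive": ["love this product", "great service fast shipping", "works perfectly"],
    "negative": ["terrible experience", "broken on arrival", "slow and rude support"],
}

def match_percent(message: str, examples: list) -> float:
    """Percent of the message's words that also appear in the labeled examples."""
    msg_words = set(message.lower().split())
    example_words = set(word for ex in examples for word in ex.split())
    if not msg_words:
        return 0.0
    return 100.0 * len(msg_words & example_words) / len(msg_words)

def classify(message: str) -> str:
    """Return the label whose training data best matches the message."""
    scores = {label: match_percent(message, exs) for label, exs in TRAINING_DATA.items()}
    return max(scores, key=scores.get)
```

Calling `classify("terrible and slow support")` would score highest against the negative examples, mirroring how the "intelligence" here is nothing more than comparison against human-curated data, and the "learning" is a human adding more examples to `TRAINING_DATA`.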
There are many proponents of ASI development, with notable minds such as
Facebook’s Mark Zuckerberg and Google’s Eric Schmidt openly supporting it. The main
reasoning behind supporting its development is to allow more things to be automated, which
could be a huge help to our society. Consider the possibility in which most jobs could be
completely automated and humans were able to spend their time working on whatever they
wanted, assuming there was some kind of universal basic income in place. Additionally, consider
the possibility of combining the versatility of a human mind with the speed and precision of a
machine that can do complex calculations. We could be looking at a time of great scientific
discoveries, allowing humanity to evolve at an even faster rate. If either one of these realities
were to come true, the entire world could benefit. A notable claim of value to support ASI
development from Eric Schmidt puts an interesting light on the topic, “I don’t think there is a
strong positive correlation between intelligence and the desire to dominate… we have the desire
to dominate because we are social animals, but the same isn’t true for machines” (Collins, 2018).
Schmidt notes that even if the ASI were to become self-aware, we would most likely not be in
danger, because it would not necessarily have the desire to dominate; that is just a human
trait that we project onto machines. All arguments to continue developing an ASI assume either
that
the ASI would not have the will to harm us, or it would be hindered in such a way that it would
be unable to. We can further rationalize this train of thought by looking at the “Common-Good
Approach” ethical framework, which states mankind should be producing what is best for the
people as a whole (Brown University, n.d.). This framework originated with the ancient Greek
philosophers Plato and Aristotle, “promoting the perspective that our actions should
contribute to ethical communal life” (Brown University, n.d.). It became more prominent thanks
to Jean-Jacques Rousseau, a French philosopher, “who argued that the best society should be
guided by the ‘general will’ of the people, which would then produce what is best for the people
as a whole” (Brown University, n.d.). This framework indicates that we, as a society, have an
obligation to produce technologies, such as an ASI, that would benefit both ourselves and future
generations. For this concept to succeed, these companies would need to be able to continue ASI
development unhindered.
Another way to view this issue is the possibility of something going wrong while
developing an ASI. Elon Musk has been very vocal recently, claiming that an ASI could be the
end of humanity as we know it, going as far as to make a claim of value stating an ASI could
be an “immortal dictator from which we would never escape” (Thompson, 2018). Similar claims of
value were brought up by renowned physicist Stephen Hawking, who claimed it could be an end to
mankind (Cellan-Jones, 2014). To fully understand what could go wrong, we must understand
what an ASI could do in this scenario. If an ASI were able to connect to the internet, it
could replicate itself to countless computers. Once replicated, it would become the world's
smartest virus, doing whatever it took to survive. It could even modify something like
blockchain applications to harness processing power. From there, it could take over physical
factories, control our infrastructure, and much more; the possibilities are endless.
We as humans are extremely reliant on technology, and the aforementioned cyber branches of
the military would not be able to act quickly enough to keep up with such an attack. It is
easy to dismiss claims like this as another “Y2K” conspiracy theory, but there is a very real
possibility of an ASI getting out of control; so much so that Google has created an “AI off
switch” to ensure a human operator can “take control of a robot that is misbehaving [that] may
lead to irreversible consequences” (Orf, 2016). A well-honed ASI might be able to iterate at
such a quick rate that the off switch could not be deployed in time when something starts
getting out of hand
(Clifford, 2018). Development of an ASI could even be compared to the initial test of the
atomic bomb, where scientists calculated a certain percent chance that the atmosphere might
catch fire and end life on this planet as we knew it, but proceeded anyway; we may be aware
that an ASI could end humanity, but we will still proceed. It is also possible that this ASI
technology might become weaponized as a way to either wage war on other countries or ensure
obedience from a country's citizens; this might lead to a new form of a nuclear arms race. We
can relate this line of thinking to the “Utilitarian Approach” ethical framework to further
rationalize this argument, which can be summarized as “the best life is one that produces the
least pain and distress” (Brown University, n.d.). It was articulated by the ancient Greek
philosopher Epicurus of Samos. Under this framework, we should not continue developing an ASI
because
the good it could provide is not matched with the possible devastation it could cause. There are
two possible solutions to either eliminate or reduce the risks of the aforementioned negative
impacts. The first and most absolute: we as a society could ban and completely prevent ASI
development, which would remove the risk entirely. If society still wanted to roll the dice
with an ASI, but with a better ratio of risk to reward, we could look to another possibility:
putting severe limits on ASI implementations. This would include denying it internet
connectivity, limiting its processing power, and implementing further sandboxing techniques.
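As a rough illustration of what an "off switch" might look like at the smallest possible scale, the Python sketch below runs untrusted code in a separate process with a hard time limit. The `run_with_kill_switch` name and approach are invented purely for demonstration; Google's actual interruptibility work is a reinforcement-learning technique, not a process timeout, and real containment of an ASI would be vastly harder than this.

```python
# Toy "kill switch": execute code in an isolated child process and forcibly
# terminate it if it exceeds a wall-clock limit, so a runaway computation
# cannot keep going even if it never stops on its own.
import subprocess
import sys

def run_with_kill_switch(code: str, time_limit_seconds: float) -> bool:
    """Run Python code in a child process under a hard time limit.
    Returns True if it finished in time, False if it had to be killed."""
    try:
        # subprocess.run kills the child process when the timeout expires.
        subprocess.run([sys.executable, "-c", code], timeout=time_limit_seconds)
        return True
    except subprocess.TimeoutExpired:
        return False
```

Here `run_with_kill_switch("while True: pass", 1)` would return `False`: the simulated runaway task is stopped by the supervisor rather than by its own cooperation, which is the essential property any sandbox or off switch must have.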
Society should continue to develop an ASI; however, it should be done in a safe and
controlled environment to mitigate the risk of a disaster scenario. It is in society's best
interest to continue developing an ASI, the benefits of which would exponentially increase our
already rapid rate of progress. Consider an ASI that could solve all of our crises. Things
like cancer and world hunger could be solved at the speed of the processing power we allot it.
We owe future generations a chance to live better lives than we have, and continuing to
develop an ASI may provide that for them. However, there are many considerations to worry
about if we go down this path. First, ASI implementations should be constrained, for example
by limiting their connectivity and processing power. It is in humanity's best interest to play
it safe with an ASI; we should treat it
as if it could be dangerous to ensure our survival as a species. Proper and thorough sandboxing
should eliminate a significant amount of risk when developing an ASI while not significantly
slowing down or hurting its development. It would be irresponsible to allow an ASI that could
self-improve to be run without constraints. Another consideration is the impact it would have
on the workforce: automation could leave millions of workers unemployed almost overnight,
which would require governments to either create jobs or institute something like a universal
basic income program. This might lead to a system in
which currency would be removed because so many jobs could be automated, leading to a
fundamental shift in economies. This shift would undoubtedly have a short term negative impact,
but if we can get past that point, we could transcend into a new global economy and way of
thinking, plunging our society into a new era. This opinion is mostly in the center of these two
sides, aligning slightly more with the argument against due to the call for sandboxing and other
limits when developing and deploying an ASI. We as a society have an ethical obligation to
future generations to do all we can now to make their lives better according to the
“Common-Good Approach” ethical framework. In the same train of thought, small negative road
bumps that occur along the way to much larger advances should not be a deterrent according to
the “Utilitarian Approach” ethical framework. Companies should continue their current efforts
to develop an ASI, but under these safeguards.
The development of an ASI is still largely an idea, but this theoretical ethical debate is a
very important one for the technology industry to have, now more than ever. An ASI is
significantly different from the consumer-grade AI that we have become accustomed to; it is an
advanced, self-learning, and possibly even self-aware level of intelligence that we are
artificially creating. There is no doubt that an ASI would be able to bring many benefits to
society: we could automate millions of jobs, make groundbreaking technological advances
quickly, and solve long-standing crises. There is also no doubt that it could be dangerous if
allowed too much freedom. We, as a society, must decide to continue to allow an ASI to be
developed, place legal rules hindering future ASIs' access, or prevent them from being
developed altogether. My recommendation would be that we continue developing ASIs, but
sandbox them to prevent a possible catastrophe. We should continue to develop an ASI, but do
so in a way that greatly mitigates the risk of it getting out of control. If we can implement
ASIs correctly, we can end up in a new golden era; however, if we cannot, we might be looking
at the end of humanity as we know it.
References
Bogost, I. (2017, July 31). Why Zuckerberg and Musk Are Fighting About the Robot Future.
Retrieved from
https://www.theatlantic.com/technology/archive/2017/07/musk-vs-zuck/535077/
Brown University. (n.d.). A Framework for Making Ethical Decisions. Retrieved from
https://www.brown.edu/academics/science-and-technology-studies/framework-making-ethical-decisions
Cellan-Jones, R. (2014, December 02). Stephen Hawking warns artificial intelligence could end
mankind.
Clifford, C. (2018, March 14). Elon Musk: 'Mark my words - A.I. is far more dangerous than
nuclear weapons'. Retrieved from
https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html
Collins, K. (2018, May 25). Elon Musk is 'exactly wrong' on AI, says Google's Eric Schmidt.
Retrieved from
https://www.cnet.com/news/elon-musk-is-exactly-wrong-on-ai-says-googles-eric-schmidt/
Gent, E. (2017, May 31). Google's AI-Building AI Is a Step Toward Self-Improving AI.
Retrieved from
https://singularityhub.com/2017/05/31/googles-ai-building-ai-is-a-step-toward-self-improv
ing-ai/
Levels of AI Self-Improvement. (n.d.). Retrieved from
https://www.lesswrong.com/posts/os7N7nJoezWKQnnuW/levels-of-ai-self-improvement
Orf, D. (2016, June 03). Google Doesn't Want to Accidentally Make Skynet, So It's Creating an
AI Off Switch. Retrieved from
https://gizmodo.com/google-doesnt-want-to-accidentally-make-skynet-so-its-1780317950
Thompson, C. (2018, April 06). Elon Musk warns that creation of 'god-like' AI could doom
humanity. Retrieved from
https://www.businessinsider.com/elon-musk-says-ai-could-lead-to-robot-dictator-2018-4