
The idea of Artificial Intelligence was first put forward by the great scientist Alan Mathison Turing. An innovator as well as a scientist, Turing designed a machine that he hoped could imitate humans. A Turing machine takes an incoming statement and judges whether it is true or false (Zarkadakis 166). In theory, this machine can calculate logical answers mathematically (Zarkadakis 166). Ideally, a Turing machine could take any incoming message and eventually produce an outcome. Yet in reality, there can come a point where the Turing machine faces a new true-or-false question after every last one, and the process repeats forever (Zarkadakis 166). Then the machine cannot tell whether the statement is true or not (Zarkadakis 166). The dream of the ultimate statement answerer is therefore, in this situation, not practical (Zarkadakis 166). Turing also imagined a sonnet-writing machine, picturing an exchange that goes like this:
Interrogator: In the first line of your sonnet which reads "Shall I compare thee to a summer's day," would
not "a spring day" do as well or better?
Witness: It wouldn't scan.
Interrogator: How about "a winter's day"? That would scan all right.
Witness: Yes, but nobody wants to be compared to a winter's day.
Interrogator: Would you say Mr. Pickwick reminded you of Christmas?
Witness: In a way.
Interrogator: Yet Christmas is a winter's day, and I do not think Mr. Pickwick would mind the comparison.
Witness: I don't think you're serious. By a winter's day one means a typical winter's day, rather than a
special one like Christmas. (Hey 256)

Yet this machine is pure imagination, and successfully building it would not mean it can think as humans do. Besides the machine, Turing also offered the famous Turing test, which focuses on whether or not a machine can act like a human by imitating human behavior (Henderson 274). It is said that the day AI passes the Turing test will be doomsday for human beings. However, a computer program posing as a 13-year-old Ukrainian boy passed the Turing test in 2014 (Hulick 57). Yet its success is not accepted by everyone, because when Eugene was asked, "Did you see Jubilee?", he responded, "Try to guess! Actually, I don't understand why you are interested. I know you are supposed to trick me," which could be the answer to any question it cannot answer directly (Hulick 57). Thus some scientists argue that the Turing test tests AI too narrowly, so instead they prefer identifying AI through its logic (Henderson 280). It is unarguable that the Turing test cannot help us tell whether a machine can think by the exact same process as a human does. Therefore, although Alan Turing contributed a great deal in bringing up the idea of AI, the Turing machine and the Turing test are too superficial for creating an intelligent machine that can think the same way humans do.
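The limit Turing ran into can be illustrated with a small programming sketch. The names halts and paradox below are hypothetical labels chosen for this illustration; no real halts function can exist, which is exactly the point:

```python
def halts(program, arg):
    """A hypothetical universal decider: it would answer True if
    program(arg) eventually halts and False if it loops forever.
    Turing showed no such always-correct program can exist."""
    raise NotImplementedError("no universal decider can exist")

def paradox(p):
    # Do the opposite of whatever the decider predicts about p(p).
    if halts(p, p):
        while True:      # the decider said "halts", so loop forever
            pass
    return               # the decider said "loops", so halt at once

# Whatever halts(paradox, paradox) would answer, paradox(paradox)
# does the opposite -- so the decider's answer cannot be correct.
```

Feeding paradox to itself forces the imagined decider into a contradiction, which is why the "ultimate statement answerer" stays a dream.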
Scientists have been trying to make AI smarter and smarter, but fast and smart does not necessarily mean that the AI thinks the same way we do. Take chess-playing machines (like Deep Blue) as an example. These machines do not think the same way a human player does (Henderson 32). No human player would calculate every possible move before they move, and no one could (Henderson 32). Yet this is exactly how the computer plays, and how it beats human players (Henderson 32). There is also the famous thought experiment of the "Chinese Room". This experiment locks a person who does not speak Chinese at all in a room with a database of Chinese symbols and instructions (Hey 58). It holds that, by following the instructions, the person will be able to give right answers to questions in Chinese posed by a person outside the room (Hey 58). In this case, one can pass a Chinese-understanding Turing test without understanding Chinese at all (Hey 58). With larger and larger memories, computers can store more and more data as knowledge (Hey 67), letting computer programs become more capable of imitating the human decision-making process (Hey 67). In 2007, Ferrucci's

team started building a machine for the game Jeopardy! (Hey 295). Yet it was hard for them to let the computer recognize where to search (Hey 295). The questions have many different points to start from, and the computer does not know how to judge which one it should search with (Hey 295). Therefore, the computer has to search through a great many sources for possible answers (Hey 295). Searching from all the different starting points takes a lot of time (Hey 295), but the team finally found a way to reduce it: let the computer run all the different searches at the same time (Hey 295). The resulting machine, Watson, which won the game against humans, used more than 100 different technologies to understand every single clue, decide how to find the answer, and list all the possible answers (Hulick 8). Watson used more than 100,000 sample questions to train for its Jeopardy! competition (Hulick 8). Yet this is not how humans think. So some scientists looked into exactly how humans think. Neurons, which are in charge of the human thinking process, handle information by deciding whether incoming signals pass a certain threshold or not (Hey 89). Deep learning is the best technology today for computer learning (Hulick 17). Simulating the neurons in the human brain, artificial neural networks (ANNs) take inputs and form outputs from them (Hulick 17). There are more than a billion connections in the biggest ANNs in AI today (Hulick 17). By processing huge amounts of data, an ANN can find patterns and combine them into higher-level meaning (Hulick 17). But does any human need a ton of data to learn to make a simple decision? Probably not. Therefore, even though scientists try by every conceivable means to make their machines think like humans, the machines that currently exist still cannot reach the height of human intelligence.
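The threshold behavior of neurons described above can be sketched as a single artificial neuron; the weights and threshold here are arbitrary values chosen only for illustration:

```python
def artificial_neuron(inputs, weights, threshold):
    # Sum the incoming signals, each scaled by its connection weight.
    total = sum(x * w for x, w in zip(inputs, weights))
    # The neuron "fires" (outputs 1) only if the total passes the threshold.
    return 1 if total >= threshold else 0

# Two incoming signals active: 0.5 + 0.4 = 0.9 passes the 0.8 threshold.
print(artificial_neuron([1, 0, 1], [0.5, 0.2, 0.4], 0.8))  # 1
# Only one signal active: 0.5 falls short, so the neuron stays silent.
print(artificial_neuron([1, 0, 0], [0.5, 0.2, 0.4], 0.8))  # 0
```

An ANN wires millions of such units together and tunes the weights from data; the sketch shows only the pass-or-fail decision each unit makes.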
But what makes machine intelligence unable to reach human intelligence? If we want to build machines that can think the same way humans do, first we need to know how humans think (Stuart 2). If we can understand a human mind through introspection, psychological experiments, or brain imaging, we will be able to compare the input and output of the machines (Stuart 2). Yet it is hard for machines to think rationally, since the information they obtain is neither 100% certain nor formal (Stuart 2). Also, the difference between "principle" and "real life" makes it even harder to solve questions rationally (Stuart 2). Minsky offered that human thinking can be divided into six levels: Inborn, Instinctive Reactions; Learned Reactions; Deliberative Thinking; Reflective Thinking; Self-Reflective Thinking; and Self-Conscious Emotion (130). Animals, including humans of course, are born with "instincts" like dodging and seeking food in order to survive (Minsky 133). Yet if the environment changes, they may need to adopt new habits and change their reactions to certain things (Minsky 133). And when something brand-new happens, animals try random actions; once they learn which reaction is the right one for the situation, that reaction gets reinforced (Minsky 133). Therefore, when the same thing happens again, it is likely the animal will have the same reaction again (Minsky 133). Our minds then rethink what they have done or what we are going to do (Minsky 142-143), letting us form an idea of whether we are doing a right or a wrong thing (Minsky 142-143). Are we confused (Minsky 142-143)? Are we on the right track (Minsky 142-143)? Even further, when we ask ourselves still more complicated questions like "What would he have thought of me after I do that?", we are in the sixth level of our mind, the level of Self-Conscious Reflection (Minsky 146). We set a goal of what we "should be", do what we do, and then examine and reflect on whether we met the goal (Minsky 146). As human beings, we can think back and forth about what we were thinking earlier; we can make random decisions that even we, who made them, cannot explain. However, every brain activity we go through daily, everything we take as normal, is not so easy for a machine to accomplish. As Lake says, "For most interesting kinds of natural and man-made categories, people can learn a new concept from just one or a handful of examples, whereas standard algorithms in machine learning require tens or hundreds of examples to perform similarly" (1332). Even a child can learn a brand-new concept by
comparing and generalizing the new idea against known ones (Lake 1332). Yet man-made machines, especially the most advanced ones like the "deep learning" systems, require a ton of data to learn new things (Lake 1332). Furthermore, we learn a wider range of things than machines do when we are given the same material to learn from (Lake 1332). We can create new ideas based on the given one, and probably on other information we have gathered in our minds through life (Lake 1332). Humans have the ability to learn a rich amount of information from only a small amount of data (Lake 1333). However, learning a more complicated model, in any learning theory, requires more data instead of less (Lake 1333). Machines under current technology can do no more than store new concepts and deal with them exactly as programmed. Therefore, as Lake mentions in his article, "A central challenge is to explain these two aspects of human-level concept learning: How do people learn new concepts from just one or a few examples? And how do people learn such abstract, rich, and flexible representations? An even greater challenge arises when putting them together: How can learning succeed from such sparse data yet also produce such rich representations?" (1333). We always take our ability to "think smoothly" for granted (Minsky 216). Yet we overlook the past we have been through (Minsky 216). Back when we were infants, we learned how to pick up things, recognize what is edible, how to talk, and how to make decisions (Minsky 216). Our ability to think humanly is built bit by bit in our infancy (Minsky 216). Compared to the human mind, man-made machines are too far behind in depth and efficiency.
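The contrast Lake describes can be made concrete with a toy sketch: a learner that classifies a new item from a single labeled example per category. This is only the simplest possible setting for one-example learning, a nearest-neighbor illustration, not a model of how humans or Lake's own programs actually do it:

```python
def classify(item, examples):
    # examples holds exactly one labeled sample per category,
    # e.g. {"small": (1, 1), "large": (9, 9)}.
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Pick the label whose single example is closest to the new item.
    return min(examples, key=lambda label: distance(item, examples[label]))

examples = {"small": (1, 1), "large": (9, 9)}
print(classify((2, 3), examples))  # small
```

A deep-learning system would instead need many labeled samples of "small" and "large" before it could draw the same boundary, which is the asymmetry the paragraph above points to.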
