
Shurel Marl R. Buluran
BPA 4-2

Artificial Intelligence, commonly known as “AI”, is a term that is popular among people of all ages.
Many people, especially kids, teens, and gamers in general, associate the term with “bots”, those
annoying, non-controllable characters in their games that move and decide on their own.
The general public, on the other hand, may think of “robots” and machines in factories. Both of these
are, indeed, products of artificial intelligence, but AI is more than that. According to Marvin Minsky,
the co-founder of the Massachusetts Institute of Technology’s AI laboratory, AI is the “science of making
machines do things that would require intelligence if done by men”. AI is all about machines, hence
the term “artificial”, performing some kind of human cognitive function, hence the “intelligence”.
Artificial intelligence had a long history before it became what we see today. AI was
the product of the studies of different mathematicians and scientists. George Boole, in 1854, argued
that logical reasoning could be performed systematically. In 1898, Nikola Tesla demonstrated the
world’s first radio-controlled vessel at Madison Square Garden. In 1929, Makoto Nishimura, a
Japanese biologist, built Gakutensoku, the first functional robot made in Japan; its name is
Japanese for “learning from the laws of nature”. In 1943, Warren S. McCulloch and Walter
Pitts proposed “neural networks”, artificial “neurons” capable of performing simple
logical functions in computers. This idea would later grow into “deep learning”, which involves computers
“mimicking the brain”. In 1950, computer science pioneer Alan Turing proposed “The Turing Test”,
also known as “The Imitation Game”, a test that measures the intelligence of a machine by
whether a computer can fool a person into thinking that they are conversing with another
human. Another major breakthrough came in 1997, when Deep Blue, a computer developed by IBM,
defeated Garry Kasparov, the reigning world champion in chess at the time. In 2002, iRobot
successfully created an autonomous vacuum cleaner named Roomba. In 2011, IBM’s
Watson defeated two of the best performers in the quiz show Jeopardy!.
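To give a concrete sense of McCulloch and Pitts’ idea, here is a minimal sketch of one of their artificial “neurons” in Python. This is only an illustration: the function names are my own, and the thresholds are the conventional textbook choices for these logic gates, not taken from any particular system.

    # A minimal sketch of a McCulloch-Pitts artificial neuron.
    # Illustrative only; thresholds are standard textbook choices.
    def mcp_neuron(inputs, threshold):
        """Fire (return 1) if enough inputs are active, else stay silent (0)."""
        return 1 if sum(inputs) >= threshold else 0

    # Simple logical functions built from the neuron, as McCulloch and Pitts described:
    def logical_and(a, b):
        return mcp_neuron([a, b], threshold=2)  # fires only when both inputs are on

    def logical_or(a, b):
        return mcp_neuron([a, b], threshold=1)  # fires when at least one input is on

    print(logical_and(1, 1), logical_and(1, 0))  # 1 0
    print(logical_or(0, 1), logical_or(0, 0))    # 1 0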
From these major breakthroughs, scientists, programmers, and engineers were able to
develop many uses for AI. Different AI systems were developed by different companies for varying
needs and functions. The most famous is Siri, Apple’s digital personal assistant, the voice-activated
assistant used in Apple’s iPhones. Amazon’s Alexa is another voice-activated assistant; it is used not
only for web purposes but also for powering the features of smart homes through
speech recognition. Another is Boxever, a company that uses machine learning to improve
the customer experience in the travel industry. Netflix uses AI to suggest films that you may like
based on your behavior on its site, and YouTube uses AI in a similar way to suggest videos
based on your activity. In the field of health and medicine, AI is being used in
radiology to detect diseases like pneumonia, and Google’s DeepMind is being used by the UK
National Health Service (NHS) to detect health risks with the use of a mobile application.
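As a rough illustration of how such behavior-based suggestion can work, here is a simplified sketch in Python. It is not Netflix’s or YouTube’s actual algorithm; the sample titles and the function are hypothetical, and real recommenders are far more sophisticated.

    # A simplified sketch of behavior-based recommendation: suggest items that
    # users with similar watch histories have already seen. Hypothetical data.
    def recommend(target_history, all_histories):
        scores = {}
        for other in all_histories:
            overlap = len(target_history & other)  # how similar the two users are
            for item in other - target_history:    # items the target has not seen
                scores[item] = scores.get(item, 0) + overlap
        return sorted(scores, key=scores.get, reverse=True)

    me = {"The Matrix", "Inception"}
    others = [{"The Matrix", "Inception", "Interstellar"},
              {"Inception", "Arrival"},
              {"Frozen"}]
    print(recommend(me, others))  # ['Interstellar', 'Arrival', 'Frozen'], most similar first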
While AI has brought success to different fields useful to humans, it also comes with different
problems. AI brings with it ethical challenges that could have serious implications for the
everyday lives of humans if not addressed. No, the problem here is not AI systems taking the place of
humans in almost every field and acquiring superiority over humans. Elon Musk recently said that
artificial intelligence is the “biggest risk we face as a civilization”. This statement is plausible,
but not at the current state of AI. The use of AI is still limited to specific fields.
It cannot be embedded in every single object that we see, nor can it perform
every thought and action of a human. As an article on the Forbes website puts it, “AI is only as good as
its data”. Without data, AI is basically useless. It is obvious, then, that AI is only AI because of the
data it gathers, whether through self-learning techniques or from an actual human. Now, this
is where the major ethical issue arises. As mentioned above, AI systems learn either on their own or
after a human feeds them data. The problem here is the kind of data being gathered or fed into the
AI. Do the data contain bias against a person’s age, race, or sexuality? Do the data show justice and
fairness? Basically, it all boils down to the question of what is right and wrong; it boils down to the
question of morality. For an AI to be considered ethical, its actions must be based on what is right
according to a set of moral standards. Although this sounds simple, the problem does not end there.
Surely, it is easy to say “teach the AI about morality”, but is it easy to do such a thing? The answer, of
course, is no. Morality is not something that you can objectively measure and present to an AI in
some kind of metric that its logic would understand. Morality is subjective, and it
relies heavily on emotions and gut feeling rather than logic. Let us not forget that the morality of us
humans is questionable in some instances, especially when faced with a moral dilemma. More
importantly, there is no one morality for all humans. Morality varies depending on a person’s culture,
so morality and ethical norms clearly cannot be standardized; we all have different cultures.
Training morality, or feelings in general, would be almost impossible, as no quantifiable data can be
made about them. Others might recommend simply teaching the AI what to do when faced with a dilemma
or when simply doing something. Yet even by doing so, the AI is already acquiring bias, since we
humans cannot agree on morality and morality is not standardized. In reality, dilemmas and
decision-making are much more complex. We also cannot take away the fact that situations are not
absolute and happen differently for different people at different times and places. To teach
an AI a single response, as if all situations involving an ethical or moral dilemma were the same, is obviously
flawed and dangerous. Microsoft once developed an AI chatbot for Twitter named Tay,
designed to learn from its interactions with other users on Twitter. Tay became racist and
foul-mouthed after trolls from the website 4chan fed it data. This clearly shows the heavy
dependence of AI on data and the importance of feeding the AI the right kind of data.
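As a toy illustration of this dependence on data, consider a naive bot that learns replies verbatim from users and reproduces whatever it is fed, good or bad. The Python below is purely hypothetical and is not how Tay actually worked; it only makes the garbage-in, garbage-out point concrete.

    import random

    # A toy sketch of "AI is only as good as its data": a bot that learns
    # replies verbatim will reproduce whatever it is fed, good or bad.
    # Purely hypothetical; Tay's actual design was far more complex.
    class NaiveBot:
        def __init__(self):
            self.learned = []

        def learn(self, message):
            self.learned.append(message)  # no filtering: every input is trusted

        def reply(self):
            return random.choice(self.learned) if self.learned else "..."

    bot = NaiveBot()
    bot.learn("Have a nice day!")     # well-meaning users teach polite phrases
    bot.learn("(something hateful)")  # trolls teach offensive ones
    print(bot.reply())                # output quality mirrors input quality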
Clearly, morality and ethical decision-making are a problem both for humans and for AIs. It
follows, then, that we cannot rely solely on artificial intelligence when it comes to morality and ethical
decision-making. There are ways, however, to deal with this, as researchers and engineers try to
develop morality and ethics in AI systems. One of the best courses of action right now is to define
ethical behaviors for AI systems in a way that AIs would be able to fully understand. This
means that morality and ethics should be quantified; something subjective should be
made objective. This can be done if humans agree among themselves on the most ethical
course of action when faced with a decision. According to the Organisation for Economic Co-
operation and Development, Germany’s Ethics Commission on Automated and Connected Driving
has recommended specifically programming ethical values into self-driving cars so that they prioritize the
protection of human life above anything else. Agreeing on what is most ethical can be achieved
by properly crowdsourcing solutions to ethical and moral dilemmas. Now, surely, this is a
challenge, as there are more than seven billion people in the world. Fortunately, this is not an
impossible task with the help of artificial intelligence: MIT’s Moral Machine project showed how
crowdsourced data can be used to train self-driving cars to make better moral decisions.
Another challenge, however, is the cultural differences that strongly shape people’s moral and
ethical values; this should be taken into serious consideration. When it
comes to AI systems, researchers, engineers, and consumers are not the only actors involved.
Surely, policymakers will be, and should be, involved to make sure things are regulated and kept in
order. Presently, policymakers are calling for transparency from these engineers and researchers,
meaning that engineers and researchers should be ready to show their
algorithms when required to. This would, of course, benefit everybody, since humans heavily
depend on AIs and computers in general. The challenge here is to make sure that the right to
privacy remains protected despite this transparency.
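To sketch the crowdsourcing idea in its simplest possible form, the Python below collects many human judgments on one dilemma and takes the majority answer. The dilemma wording and the votes are invented for illustration; MIT’s Moral Machine uses far richer data and modeling than a simple majority vote.

    from collections import Counter

    # A minimal sketch of crowdsourcing an ethical decision: gather many
    # human judgments on the same dilemma and take the majority answer.
    # Hypothetical data; real projects like the Moral Machine go far beyond this.
    def aggregate(judgments):
        """Return the most common choice among the crowdsourced judgments."""
        return Counter(judgments).most_common(1)[0][0]

    dilemma = "Swerve to protect pedestrians, risking the passenger?"
    votes = ["swerve", "swerve", "stay", "swerve", "stay"]
    print(dilemma, "->", aggregate(votes))  # -> swerve (3 of 5 votes)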
With all of this being said, do AIs need to have a code of ethics as well? We humans,
especially those in government, adhere to codes of ethics alongside our own moral
standards. This ensures that people do not abuse their rights, privileges, and power in ways that could
have small or serious implications. Honestly, a code of ethics would be very useful for maintaining peace and
order, at the very least. So, yes, I agree that a code of ethics should be made for these AI systems. A
code of ethics would surely benefit everybody in certain ways. But then again, the challenge
here is the varying moral and ethical values of people. To make a code of ethics, moral and ethical
values should first be standardized. This would, of course, be the biggest challenge for everybody,
especially the developers and engineers, should a code of ethics be agreed upon.
For now, it is important that policymakers study this field as much as possible as they try to make
regulations. A code of ethics can serve as a middle ground between the benefits and the dangers that
artificial intelligence brings upon us. It would not hurt to try to come up with something that would
bring regulation, peace, and order to everybody, especially when machines are involved.
Nevertheless, humans cannot and should not rely heavily on machines and AIs. As logical
and rational creatures, we should still be in full control of these machines, and final decisions should
come from us. Artificial intelligence only acts on the data it has gathered, either by itself through
deep learning or from humans feeding it the data it needs or wants. Again,
emotions cannot be fully taught to machines, and these machines will not ever feel emotions, at least
in their current state of development. Only we are fully capable of feeling emotions and
knowing what is right and what is wrong. To depend solely on machines is stupidity, carelessness,
and blatant disregard for human life. While AIs bring benefits to humans, we cannot take away the
fact that things can always go wrong. As Murphy’s Law suggests, “Anything that can go wrong will
go wrong”. This surely means that humans should be careful in decision-making, just as we want
artificial intelligence systems to be careful as well.
To conclude, artificial intelligence has evolved ever since its inception many decades
ago. From that moment on, AI became more sophisticated, expanded into various fields,
and brought benefits along with it. From simply playing chess to powering smart homes, AI has shown
what it is fully capable of. Still, we cannot deny the fact that challenges
come along with it, especially in the field of ethics. To be specific, AIs face a problem in
ethical decision-making. Since no standardized moral and ethical values exist, and since these
values depend mainly on a person’s culture, ethical decision-making becomes a huge problem for
AIs. Given this, policymakers are being called upon to make regulations and possibly a code
of ethics for AI systems in order to balance the benefits and dangers AI brings with it. With this, I truly
agree that a code of ethics should be made in order to avoid compromises and the serious
implications of decisions made by AIs. In the end, the final decision should still be in the hands of
humans and not solely of machines.
