
Slide 1

Artificial intelligence is the science and engineering of constructing a machine that possesses the computational ability to achieve goals in the real world. Superintelligence is any intellect that vastly outperforms human brains in practically every field, whether in scientific creativity, general wisdom, or social skills.
Slide 2
Recently, regulators at the National Highway Traffic Safety Administration agreed to modify regulations to allow automakers to test their self-driving cars on public roads. This has led people to raise questions about their safety, and some oppose allowing self-driving cars on the road. On the other hand, the government and officials at the National Highway Traffic Safety Administration have given the move their full support, in the hope of reducing accidents.
This has raised some ethical quandaries. (Read from the slide)
Slide 3
The trolley problem:
Before self-driving cars become widespread, carmakers have to solve the ethical dilemma of algorithmic morality. One of the most common ways of probing a self-driving car's ethics is to make it solve the trolley problem, and that raises some difficult issues. How should the car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if that means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random? The answers to these ethical questions matter because they could strongly affect how self-driving cars are accepted in society. Who would buy a car programmed to sacrifice its owner?
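To make the dilemma concrete, the three choices above can be written as a toy decision rule. This is purely an illustrative sketch for the slide; the function, option format, and policy names are all invented here and do not reflect any real autonomous-vehicle system.

```python
# Toy sketch of "algorithmic morality": choosing among unavoidable-crash
# outcomes under three hypothetical policies. Invented for illustration only.
import random

def choose_action(options, policy):
    """Pick a crash outcome under a given (hypothetical) moral policy.

    options: list of dicts like {"occupant_deaths": int, "pedestrian_deaths": int}
    policy:  "minimize_total" | "protect_occupants" | "random"
    """
    if policy == "minimize_total":
        # Utilitarian: fewest deaths overall, even if occupants are sacrificed.
        return min(options, key=lambda o: o["occupant_deaths"] + o["pedestrian_deaths"])
    if policy == "protect_occupants":
        # Occupants first: minimize occupant deaths, then pedestrian deaths.
        return min(options, key=lambda o: (o["occupant_deaths"], o["pedestrian_deaths"]))
    if policy == "random":
        # Choose between the extremes at random.
        return random.choice(options)
    raise ValueError(f"unknown policy: {policy}")

options = [
    {"occupant_deaths": 1, "pedestrian_deaths": 0},  # swerve: sacrifice the occupant
    {"occupant_deaths": 0, "pedestrian_deaths": 3},  # stay on course: hit pedestrians
]
print(choose_action(options, "minimize_total"))     # selects the swerve option
print(choose_action(options, "protect_occupants"))  # selects the stay-on-course option
```

The point of the sketch is that each policy is trivially easy to code; the hard part, as the slide argues, is deciding which policy society should accept.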
Slide 4
Technology links both the natural and human sciences: how we create it belongs to the natural sciences; how we use it belongs to the human sciences.
Slide 5
My first claim is that moral judgements in the human sciences can be evaluated using logical reasoning. Moral judgements in the human sciences involve a discussion of how we ought to live our lives, the distinction between right and wrong, the justification of our ethics, and the implications of moral actions. Reason allows a rational view: it lets us assess the outcomes of our actions and arrive at an objective judgement that can be evaluated as morally correct or not. It depends on a solid framework that can be used to formulate a moral code. We appeal to a commonly agreed moral principle and then try to show, by logical reasoning, whether a particular action falls under it. If it does, it is morally right.
My counterclaim is that moral judgements in the human sciences can differ because of moral relativism, and reason does not affect moral relativism. One of the key concerns of the human sciences is that information is gathered by studying human behavior, and human behavior depends on moral values. According to moral relativism, our values are determined by the society we live in, so different people have different sets of moral values. This means that the commonly agreed moral principles used to justify our moral actions through logical reasoning differ from community to community and culture to culture.
Example: (read from the slide)
Using logical reasoning, it is morally correct to punish Tom. If we all share the same underlying moral principles, there is plenty of scope for reasoning. However, what if Tom thinks there is nothing wrong with cheating? Then we cannot say that the wrongness of cheating is a commonly agreed moral principle, and so we cannot use logical reasoning.
Slide 6
My second claim is that, in the human sciences, moral intuitions are sometimes influenced by emotions.
Human sciences such as sociology depend greatly on decisions made by intuition and on how those decisions affect human society. Intuition is connected with emotion because our emotional state affects our intuition; like emotion, it gains knowledge without the need for conscious reasoning. Moral intuitions are our instinctive responses to what is morally right or wrong. An example is utilitarianism, the ethical philosophy in which the happiness of the greatest number of people in society is considered the greatest good.

My counter-claim is that emotions may hinder our moral intuition.
Intuition generally relies on pattern recognition and points to solutions that have worked well for the pattern currently perceived; we usually identify these patterns unconsciously. Emotion is an innate, powerful, and principally unconscious process. Our emotions can distort that pattern recognition, which may then lead to incorrect moral intuition. For example: according to your moral intuition, the first thing that comes to mind is to punish person A. However, as soon as emotions get involved, you start doubting your moral intuition.
Conclusion:

Logical reasoning can be used to evaluate moral judgements; however, it has its limits.

Emotions affect our moral decisions. They can strengthen our moral values, or they can overrule the logical reasoning used to evaluate moral judgements. Nevertheless, emotions play an important role in moral intuitions, which serve as a faster way to gain a perspective on a moral conflict.

Logical reasoning lends certainty to decision making, since it is unbiased and follows a framework most of the time.

Therefore, emotions can provide the basis for ethical decisions, which can then be confirmed and evaluated by logical reason.

To evaluate the extent to which AI is ethically justifiable using reason, we may say that it reduces human error, which drastically reduces accidents and deaths on the roads. However, to judge morally whether AI is ethically justifiable, we need to consider all its aspects. Would you buy a self-driving car that will save only you, rather than the other victims, in an accident? Would that be ethically justifiable? Who makes the decisions for self-driving cars: programmers, governments, or companies?
Reasoning logically, one may say that programmers and companies would be better placed to decide for the car, because in the event of an accident the company or the programmers, and not the occupant of the car, would be responsible for the deaths of the victims. However, is that ethically justifiable? Should you buy a car on the basis that you would not be blamed for any death?
Self-driving cars make calculated decisions, while a decision made by a human driver in an emergency would be called an instinctive reaction. In an accident where death is unavoidable, is a calculated decision or an instinctive reaction better? A calculated decision would mean causing deliberate harm to people; an instinctive reaction may prove unreliable, as it could lead to the deaths of many people. AI has its advantages and disadvantages.
However, it is difficult to
