key things concerned with is that the information is gathered by studying human
behavior, and human behavior depends on moral values. According to moral
relativism, our values are determined by the society we live in, so different
people have different sets of moral values. This means that the commonly agreed
moral principles used to justify our moral actions through logical reasoning
differ from community to community and culture to culture.
Example: (read from the slide)
Using logical reasoning, it is morally correct to punish Tom. If we all share the same
underlying moral principles, there is likely to be plenty of scope for reasoning.
However, what if Tom thinks that there is nothing wrong with cheating? Then we
cannot say that there is a commonly agreed moral principle against cheating, and so
we cannot use logical reasoning.
Slide 6
My second claim is that in human sciences, moral intuitions are sometimes influenced
by emotions.
Human sciences like sociology depend greatly on decisions made by intuition and on
how these decisions affect human society. Intuition is connected with emotion, as
our emotional state affects our intuition. It is also similar to emotion in that it
provides knowledge without the need for conscious reasoning. Moral intuitions are
our instinctive responses to what is morally right or wrong. An example is
utilitarianism, an ethical philosophy in which the happiness of the greatest number
of people in society is considered the greatest good.
Emotions affect our moral decisions. They can strengthen our moral values, or
they can overrule the logical reasoning used to evaluate moral judgements.
Moreover, emotions play an important role in moral intuitions, which serve as
a faster way to gain a perspective on a moral conflict.
Therefore, emotions can provide the basis for ethical decisions,
which can then be confirmed and evaluated by logical reasoning.
To evaluate the extent to which AI is ethically justifiable using reason, we may say
that it reduces human error, which drastically reduces accidents and deaths on
roads. However, to judge whether AI is ethically justifiable, we need to
consider all aspects of AI. Would you buy a self-driving car that would save only
you, rather than the other victims, in an accident? Would that be ethically
justifiable? Who makes the decisions for self-driving cars: programmers,
governments, or companies?
Reasoning logically, one may say that programmers and companies would be
better at deciding for the car, because in the case of an accident, the company or
the programmers would be responsible for the deaths of the victims, not the
occupant of the car. However, is that ethically justifiable? Should you buy a car on
the basis that you would not be blamed for any death?
Self-driving cars make calculated decisions, while a decision
made by a human driver in an emergency would be called an instinctive reaction. In an
accident where death is unavoidable, is a calculated decision or an
instinctive reaction better? A calculated decision would mean causing deliberate
harm to people. An instinctive reaction can prove very unreliable,
as it may lead to the deaths of many people. AI has its advantages and disadvantages.
However, it is difficult to