
MORAL JUDGMENTS IN PSYCHOPATHS

By Jana Schaich Borg (Stanford University)


and Walter Sinnott-Armstrong (Duke University)
Psychopaths are notorious for performing actions that almost everyone else believes are
grossly immoral. Why do psychopaths behave that way? One possibility is that psychopaths do
not believe that what they do is immoral. If so, they lack cognitive awareness or understanding of
what is immoral. Another possibility is that psychopaths know their acts are immoral, but they
simply do not care. In other words, perhaps they view morality the same way people who speed
view speed limit laws: they know it's wrong, but that knowledge is not enough to inhibit their
desire to speed or to stop them from breaking the law. If so, they lack appropriate moral
motivation, but have normal moral reasoning. Other explanations might exist for psychopaths'
behavior, and the above explanations might both be right to some extent; for example,
psychopaths might be only dimly aware of immorality, and that might be partly why they do not
care about it as much as normal people do. However, these explanations are often discussed as
the primary possible reasons for psychopaths' behavior, and many interdisciplinary academic
debates have grown out of efforts to determine which explanation is most likely to be true.
The law might care about the issue of whether psychopaths act immorally because of a
lack of moral reasoning or a lack of moral motivation for several reasons. First, the resolution
might affect whether psychopaths are legally (or morally) responsible for their misconduct. The
most popular versions of the insanity defense make a defendant's responsibility depend in part on
whether the defendant "did not know that what he was doing was wrong"1 or "lacks substantial
capacity to appreciate the wrongfulness of his conduct."2 These legal rules are open to various
interpretations,3 but they seem to make the ability to carry out normal moral reasoning relevant to
criminal responsibility. If psychopaths cannot reason that what they are doing is wrong, they
might qualify for one of these versions of the insanity defense. Second, the ability of
psychopaths to make normal moral judgments might also be relevant to predictions of future
crime. Clearly, someone who cannot tell that an act is immoral might be more likely to do it
again, thus psychopaths moral abilities might be relevant to how they should be sentenced.4
Third, whether psychopaths can tell right from wrong or, instead, can tell the difference but do
not care could direct treatment of psychopaths by specifying which deficit (cognitive,
emotional, or motivational) needs to be treated.5 It might also affect whether psychopaths
should be legally required to undergo treatment, as some deficits might be easier to treat than
others. Thus, for law as well as for science, it is important to determine whether psychopaths can
make or appreciate moral judgments.6
1 Regina v. M'Naghten, 10 Cl. & Fin. 200, 9 Eng. Rep. 718 at 722 (1843).

2 American Law Institute, Model Penal Code (Philadelphia: The American Law Institute, Final
Draft, 1962), § 4.01(1).

3 See Sinnott-Armstrong & Levy 2010 as well as the chapters by Litton and Pillsbury in this
volume.

4 See the chapters on recidivism by Rice and Harris and by Edens, Magyar, and Cox in this
volume.

5 See the chapter on treatment by Caldwell in this volume.

6 In philosophy, this issue also affects (a) whether psychopaths are counterexamples to
internalism, one version of which claims that anyone who believes an act is immoral will be
motivated not to do it, and (b) whether psychopaths reveal limits on epistemic justification for
moral claims by showing that rational people can understand but still not accept those claims.

In this chapter, we will review the scientific evidence for and against the claim that
psychopaths have fully intact moral reasoning and judgment. At this point the literature suggests
that psychopaths have only very nuanced moral cognition deficits, if any. However, as will
become clear, few firm conclusions can be reached about moral cognition in psychopaths
without further research. One reason is that so far there is very little data examining moral
judgment, belief, or decision-making in psychopaths. Another reason is that psychopaths are
often pathological liars, so it is hard to determine what they really believe. Additional obstacles
arise because different researchers have used inconsistent criteria for diagnosing psychopathy
and because few scientific tests of moral judgment or belief are established. To interpret the
literature, then, it is critical that both psychopathy and moral judgments be defined.
What is a psychopath?
Psychopathy is primarily diagnosed using the Psychopathy Checklist-Revised, or the
PCL-R.7 The PCL-R is a semi-structured interview that assesses interviewees on twenty
personality dimensions, all of which can be divided into two separate factors: Factor 1, which
reflects affective and interpersonal traits, and Factor 2, which reflects antisocial and unstable
lifestyle habits. Factor 1 can further be broken up into two facets: Facet 1, representing
interpersonal traits, and Facet 2, representing affective traits. Factor 2 can be broken up into
Facet 3, representing an impulsive lifestyle, and Facet 4, representing antisocial behavior. The
interview itself is supplemented by a full background check that can validate any information
provided by the interviewee.
There is still much debate over whether psychopathy is a categorical disorder or a
spectrum disorder, but clinically a psychopath is defined as anyone who scores 30 or above on
the PCL-R. That said, as clinicians who have interviewed psychopaths can attest, there is
something qualitatively very different about psychopaths who score 34 or above compared to
those who score around 30. Hare notes this impression and describes those who score 34 or
above as "high" psychopaths (Hare, R. D. (1991). The Hare Psychopathy Checklist-Revised.
Toronto, Ontario, Canada: Multi-Health Systems; Hare, R. D. (2003). The Hare Psychopathy
Checklist-Revised, Second Edition. Toronto, Ontario, Canada: Multi-Health Systems).
Unfortunately, most published studies of moral decision-making in psychopaths have very few, if
any, participants who score above 30, and many studies redefine "psychopath" to indicate a
significantly lower PCL-R score. Moreover, almost no studies have participants who score above
34. This means that moral decision-making has been assessed very little in the highest-scoring
psychopaths whom clinicians differentiate from other psychopaths.
Another difficulty to keep in mind is that older studies, before the PCL-R, often assessed
psychopathy with the Minnesota Multiphasic Personality Inventory (MMPI). In contrast to the
PCL-R, the MMPI is based only on self-reports provided by the interviewee. Self-reports by
psychopaths are problematic because psychopaths are pathological liars.8 Moreover, the relevant
scores on the MMPI have not been found to correlate well with scores on the PCL-R, especially
with PCL-R Factor 1 (O'Kane et al., 1996). Thus, studies that use the MMPI to
assess psychopathy might not be measuring the same population as studies that use the PCL-R.
That makes it challenging to compare results of studies that use these different diagnostic tests.
7 See the chapter on assessment by Forth, Bo, and Kongerslev in this volume.
8 See the chapter on self-report measures by Fowler and Lilienfeld in this volume.

In summary, it is good practice to ask the following questions when assessing the
literature currently available on psychopathy and moral decision-making: 1) Was the PCL-R (or
an accepted derivative) used to assess psychopathy? If not, the population being described may
be dramatically different from a clinical psychopathic population. 2) If the PCL-R was used, how
many participants scored 30 or above (or 34 and above)? If none, the population being
described does not in fact contain any clinical psychopaths.
What is a moral judgment?
When people talk about "moral judgment," sometimes they refer in the singular to the
faculty of moral judgment or the ability to make particular moral judgments. Sometimes they
talk abstractly about "good moral judgment," as when they say that we need good judgment in
order to resolve difficult moral dilemmas. And sometimes they refer to the proposition that is the
object of moral judgment (that is, what is believed when one sincerely makes that moral
judgment), as when they say that a common moral judgment is that theft is immoral. In our
discussion, however, we will refer to the mental state or event of judging that some act,
institution, or person is morally wrong or right, good or bad, because we believe this is the aspect
of moral judgment that is most likely to be relevant to law.
With regard to moral judgment so understood, there are still a couple of theoretical issues
that can make it controversial to decide what should be included as moral judgment. One
fundamental question is whether any "moral sense" or "moral module" is universal across
cultures and types of people. Moral psychologists and philosophers have not converged on an
answer to this question. Even those who argue for some universal morality rarely specify how to
determine which parts of morality are fundamental or universal as opposed to culturally labile.
As a consequence, it is questionable for the law or anyone else to assume that all reasonable
people will judge that any particular act is immoral. In addition, empirical studies of moral
judgment will need to be sensitive to potential individual or cultural variations in beliefs about
morality.
Another problem is that, even if there is a universal moral sense, that sense is not unified.
For example, some moral judgments are based on harm and are associated with anger, whereas
other moral judgments are about impurity (such as incest or cannibalism) and are associated with
disgust (Schaich Borg, Lieberman, & Kiehl, 2008; Parkinson et al., submitted). Moral
judgments about different kinds of acts can also require different cognitive abilities. Some moral
judgments require one to understand another person's intentions, for example, while other moral
judgments require one to calculate and weigh consequences (Cushman, 2008). If a person lacks
the capacity for theory of mind, as in autism, then he may be
incapable of responding appropriately in the first type of judgment but perfectly capable of
making the second type of judgment. Someone who has trouble understanding quantities or
doing basic math, in contrast, might have the reverse problem. As a result, a given cognitive
deficit can affect some moral judgments but not others or it can affect a certain moral judgment
in only some situations but not all others. This variation is important for criminal law, because
the legal system needs to use a moral assessment that is appropriately matched to the cognitive
requirements of the crime(s) under consideration.
When evaluating reports of moral judgment in psychopaths, it is also crucial not to
conflate moral judgment with moral feelings or emotion. In this chapter, we will be discussing
psychopaths' moral judgments or beliefs and their application to particular real or hypothetical

situations, not the feelings or emotions that accompany those moral judgments. It is possible to
have moral judgments without morally-relevant emotions. This happens when people are
convinced by arguments that certain acts are immoral, but don't yet have emotions that are
consistent with those arguments or their moral beliefs.9 For example, some people might really
judge that it is morally wrong to eat meat, but not feel any associated compunction or guilt when
eating meat. On the other hand, it is also possible to have morally-relevant emotions without
relevant moral judgments. People who were raised as Mormon, for example, might feel guilty
while drinking coffee without really believing that they are doing anything morally wrong. They
have real guilt feelings, but they do not endorse those feelings as justified, so they do not make a
moral judgment in the sense that is relevant to this chapter or to the law.
This distinction becomes particularly important when considering the relevance of
psychopaths' empathic deficits to their ability to make moral judgments. The ability to
empathize is not functionally, neurologically, or psychologically the same as the ability to judge
that something is morally wrong, nor is it the same as the ability to guide one's action in accord
with a moral judgment. A person who does not respond emotionally to another person in pain
still might be able to make appropriate judgments about whether it is wrong to cause pain in
another person. Indeed, a recent study found no correlation between empathy (as opposed to
theory of mind) and awareness that a situation has moral or ethical implications, willingness to
use utilitarian or non-utilitarian based rules in moral judgments, or the likelihood of agreement
with a given verdict in a moral scenario (Mencl and May, 2009). Therefore, in Law and
Neuroscience discussions it will be important to enforce definitions of moral judgment that do
not include empathy. That said, it is worth briefly digressing to review the evidence that
psychopaths are deficient in empathy. Because they likely contribute to aspects of psychopaths'
behavior (empathy does correlate with pro-social behavior, especially when few conscious moral
rules are in place; e.g., Batson, 1991, 2011), these empathic deficits may still be of interest to law,
even if they don't necessarily contribute to moral judgment per se.
Profound lack of empathy is one of the diagnostic criteria for psychopathy, and therefore
almost by definition, especially high-scoring psychopaths will have empathic deficits. To
determine why this might be the case and to provide a quantitative measure of their deficit, four
studies have measured adult psychopaths' galvanic skin responses while they were observing
people in physical distress. The galvanic skin response technique does not measure empathy
directly, but it does measure how much arousal one feels when observing another in distress,
which is hypothesized to be an important part of an empathetic response. Two of the four studies
found that psychopaths show little to no change in skin resistance in response to observing a
confederate get shocked (Aniskiewicz, 1979; House & Milligan, 1976). A third study found that
psychopaths actually show increased changes in skin resistance in response to observing a
confederate get shocked (Sutker, 1970). However, these first three studies assessed psychopathy
with the MMPI, rather than the PCL-R. The only study to employ the PCL-R while examining
galvanic skin responses to other people in physical distress supported the negative results of the
first two MMPI studies. This study used 18 psychopaths (scoring 30 or higher on the PCL-R)
and 18 non-psychopaths (scoring 20 or lower on the PCL-R) and found that psychopaths
demonstrated significant galvanic skin responses to pictures of distress cues (that is, a picture of
a group of crying adults or a close-up of a crying face), but these responses were much reduced
compared to those in non-psychopaths (Blair et al., 1997). These studies together suggest that
psychopaths are less aroused than non-psychopaths when they observe others in pain or
distress.10

9 In philosophy, emotivists and sentimentalists claim that emotion and sentiment are somehow
essential to moral judgment, but they can and must allow some moral judgments without any
present emotion (Joyce 2008).

10 Other studies have also shown that psychopaths have a selective impairment in recognizing
and processing sad or fearful faces (Blair, 2005).

To reiterate, psychopaths' empathy deficits might explain the actions that characterize
psychopathy, but neither psychopaths' lack of empathy nor their immoral behavior shows that
psychopaths do not make normal moral judgments. They may still be able to make and believe
normal moral judgments, but lack the mechanism that translates this cognitive ability into normal
emotions or motivations to avoid immoral actions. In the rest of this chapter we will only review
evidence about whether psychopaths have the ability to make normal moral judgments. Readers
should keep in mind, however, that even if psychopaths have that cognitive ability, they likely
have impairments in empathy and emotions as well as in motivation and the ability to translate
their moral judgments into action.
How psychopaths perform on moral judgment tests
After these preliminaries, we are now ready to review the literature assessing moral
judgments by psychopaths. Although many relevant studies have been conducted with
adolescents, the construct of psychopathy is not as well defined in adolescents, and the relevant
legal issues are also complicated by their juvenile status. Therefore, the studies described below
will be confined to studies testing adults.
Empirical studies of moral judgments vary dramatically in their assumptions and
measurements. The field of Law and Neuroscience will need to decide whether any of these tasks
adequately index the ability to "know" or "appreciate" what is legally wrong as referenced by the
M'Naghten Rule or the Model Penal Code (cited in notes 1-2). To facilitate such reflection, this
chapter will provide crucial details about the specific tests that are used to assess the relationship
between psychopathy and morality, so that in each case it will be as clear as possible what the
study indicates about any legally-relevant abilities.
Kohlbergian tests of moral reasoning
Moral Judgment Interview. One of the first established measures of moral judgment was
Lawrence Kohlberg's Moral Judgment Interview (MJI; Kohlberg 1958). In the MJI,
participants are asked to resolve complex moral dilemmas in an interview format during which
experimenters attempt to dissuade subjects from their judgments. The goal is to
determine not which moral judgment is reached after deliberation (meaning not which
conclusion, resolution, or verdict is reached) but rather which types of reasons people give to
support their moral conclusions (the justification or reasoning). In other words, the critical
variable for Kohlberg is why participants judge something to be morally right or wrong in these
dilemmas, not what they judge to be morally right or wrong.
The reasons that participants give for their resolutions in the MJI are divided into three
major successive levels, each with two sub-levels. The first level, called "pre-conventional"
reasoning, comprises reasons based on immediate consequences to oneself (via reward or
punishment). The second level, called "conventional" reasoning, comprises reasons based on the
expectations of social groups and society. The third level, called "post-conventional" reasoning,
comprises reasons based on relatively abstract moral principles independent from rules, law, or
authority. Importantly, post-conventional reasoning can reflect either utilitarian or deontological
principles or both. Kohlberg argues that individuals and cultures advance through these stages in
a set order and also that later levels of moral reasoning reflect better moral reasoning than earlier
levels (Kohlberg, 1973).
Only one study has tested how adult psychopaths score on the Moral Judgment Interview.
Link et al. (1977) administered 4 of Kohlberg's dilemmas to 16 psychopathic inmates, 16
non-psychopathic inmates, and 16 non-inmate employees from the same Canadian facility. In this
study, psychopathy was assessed using the MMPI, not the PCL-R. Contrary to some
expectations, the authors report that psychopaths had improved moral reasoning compared to
both control groups. Psychopaths offered 36% (p<.01) and 5% (p<.05) more stage 5
post-conventional justifications than incarcerated non-psychopaths and hospital employees,
respectively, despite no significant differences in age, IQ, or education. However, it is not clear
how this inmate population would score on the PCL-R, so it is inappropriate to draw any
conclusions about how psychopaths, as they are understood today, would perform on the MJI.
Defining Issues Test. The most widely used derivative of the Kohlberg Moral Judgment
Interview is the Defining Issues Test (DIT) (Rest et al., 1974). The DIT presents participants with
6 dilemmas derived from Kohlberg's MJI, the most famous of which is this:
HEINZ AND THE DRUG (from Rest 1979)
In Europe a woman was near death from a special kind of cancer. There was one drug
that doctors thought might save her. It was a form of radium that a druggist in the same
town had recently discovered. The drug was expensive to make, but the druggist was
charging ten times what the drug cost to make. He paid $200 for the radium and charged
$2,000 for a small dose of the drug. The sick woman's husband, Heinz, went to everyone
he knew to borrow the money, but he could only get together about $1,000, which is half
of what it cost. He told the druggist that his wife was dying, and asked him to sell it
cheaper or let him pay later. But the druggist said, "No, I discovered the drug and I'm
going to make money on it." So Heinz got desperate and began to think about breaking
into the man's store to steal the drug for his wife.
Should Heinz steal the drug? __Should Steal __Can't Decide __Should not steal
Each scenario is followed by a list of 12 considerations. Participants are asked to rate each
consideration for its importance in making their moral judgment, and then select the four most
important considerations and rank-order them from one to four. Each consideration specifically
represents one of Kohlberg's six moral stages. Here are some examples:
Whether the law in the case is getting in the way of the most basic claim of any member
of society.
Whether a community's laws are going to be upheld.
What values are going to be the basis for governing how people act towards each other.
Would stealing in such a case bring about more total good for the whole society or not.
The most commonly used metric in scoring the DIT is the P-score, ranging from 0 to 95, which
represents the proportion of the four selected considerations that appeal to Kohlberg's Moral
Stages 5 and 6. It has been suggested (Rest, 1986b) that people can be considered broadly as
those who make principled moral judgments (P-score > 50) and those who do not (P-score < 49).
A correlation of 0.68 has been reported between the scores of the DIT and Kohlberg's more
complex MJI (O'Kane et al., 1996). Of note, the verdicts of the judgments themselves are not
assessed or reported in any way in this test.
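Although we know of no study that publishes P-score computation code, the arithmetic is straightforward. The following minimal sketch (in Python, with illustrative stage assignments of our own; consult Rest's scoring manual for the official procedure) shows the core idea: rank weights of 4, 3, 2, and 1 points, with the P-score being the share of those points going to Stage 5 and 6 considerations.

```python
# Hedged sketch of DIT P-score arithmetic; not the official scoring manual.
# Stage assignments and data below are illustrative assumptions.

RANK_WEIGHTS = [4, 3, 2, 1]  # points for 1st- through 4th-ranked items

def p_score(dilemmas):
    """dilemmas: one list per dilemma giving the Kohlberg stage of the
    considerations the participant ranked 1st through 4th."""
    principled = 0
    total = 0
    for ranked_stages in dilemmas:
        for weight, stage in zip(RANK_WEIGHTS, ranked_stages):
            total += weight
            if stage in (5, 6):      # "principled" stages
                principled += weight
    return 100.0 * principled / total

# Example: one participant, two dilemmas.
print(p_score([[5, 4, 6, 3], [4, 4, 5, 2]]))  # -> 40.0
```

(The official P-score tops out at 95 rather than 100 because of how the actual DIT items are constructed; the sketch above ignores that detail.)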
Two studies have examined the relationship between PCL-R and DIT scores. O'Kane et
al. (1996) found no correlation between total PCL-R scores and P-scores in 40 incarcerated
individuals in a British prison once IQ was accounted for. However, the mean PCL-R score was
fairly low (15) and only one inmate scored above 30. Furthermore, no between-group
comparisons were reported between low vs. high scorers. These results were consistent with the
results in an American sample of inmates (O'Kane et al., 1996). In this second sample, no
inmates scored above 30 on the PCL-R, and no regression was reported between PCL-R scores
and P-scores, but no significant differences were found in the P-scores of the five highest
compared to the five lowest PCL-R-scoring inmates. Thus, although no significant relationship
has been found between psychopathy and performance on the DIT, the DIT has yet to be
administered to a sizable sample of high-scoring psychopaths.
Moral Judgment Task. A second adaptation of the MJI is the Moral Judgment Task (MJT) (Lind,
1978). The Moral Judgment Task is based on Kohlberg's stages of moral development, but it is
adapted to try to differentiate preference for a certain basis for moral judgment (termed the
"affective" part of moral aptitude) from the ability to reason about moral issues objectively
(termed the "cognitive" part of moral aptitude). The Moral Judgment Task asks participants to
assess two moral dilemmas, one taken directly from Kohlbergs interview and the other
developed from a real life situation:
WORKER'S DILEMMA
Due to some seemingly unfounded dismissals, some factory workers suspect the
managers of eavesdropping on their employees through an intercom and using this
information against them. The managers officially and emphatically deny this accusation.
The union declares that it will only take steps against the company when proof has been
found that confirms these suspicions. Two workers then break into the administrative
offices and take tape transcripts that prove the allegation of eavesdropping.
DOCTOR'S DILEMMA
A woman had cancer and she had no hope of being saved. She was in terrible pain and so
weakened that a large dose of a painkiller such as morphine would have caused her death.
During a temporary period of improvement, she begged the doctor to give her enough
morphine to kill her. She said she could no longer endure the pain and would be dead in a
few weeks anyway. The doctor complied with her wish.
For each dilemma, participants are first asked to judge whether the actor's solution was right or
wrong on a scale from -3 to +3, and then participants are asked to rate 6 moral arguments that are
consistent with the participant's judgment and 6 moral arguments that are against the
participant's judgment on a scale from -4 ("I strongly reject") to +4 ("I strongly agree"). Each of
the 6 arguments represents one of Kohlberg's six stages of moral orientation. Participants

typically prefer Stage 5 moral arguments for the Worker's Dilemma and Stage 6 moral arguments
for the Doctor's Dilemma.
Two metrics can be calculated based on participants' responses in the Moral Judgment
Task. The first metric, called the "Competence score" or "C-score," is unique to the Moral
Judgment Task and is the most widely used. The C-score reflects the ability of a participant to
acknowledge and appropriately weigh moral arguments, regardless of whether those arguments
agree with the participant's own opinion or interests. More technically, it measures the percentage
of an individual's total response variation that can uniquely be attributed to concern for the
quality of a moral argument. People generally do not score very high on the moral competence
test. Scores of 1-9 out of 100 are considered very low but not uncommon, while scores of 50 out
of 100 are considered extraordinarily high (Lind 2000a). The second metric that can be assessed
in the Moral Judgment Task is moral "preference for" or "attitude towards" arguments of
different moral development stages. This is calculated by simply averaging the participant's
ratings of the arguments provided at each stage. The moral attitude metric is similar to the metric
used by other Kohlberg test derivatives. It is important to remember that the moral judgments
provided by participants in the Moral Judgment Task are not usually assessed or reported except
to calculate the C-score.
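To make the C-score idea concrete, here is a minimal sketch under our own simplifying assumptions rather than Lind's published algorithm: it treats the C-score as the share of variance in a participant's argument ratings that is explained by the Kohlberg stage of the argument, using a plain sum-of-squares decomposition. Lind's actual C-score decomposes the full matrix of pro and con argument ratings so that only variation uniquely tied to argument quality counts; the sketch omits that refinement.

```python
# Hedged sketch: C-score as the percentage of rating variance explained
# by argument stage. A simplified stand-in for Lind's algorithm.
from statistics import mean

def c_score(ratings_by_stage):
    """ratings_by_stage: dict mapping Kohlberg stage (1-6) to the list of
    ratings (-4..+4) the participant gave to arguments of that stage."""
    all_ratings = [r for rs in ratings_by_stage.values() for r in rs]
    grand = mean(all_ratings)
    # Total variation of the ratings around the grand mean.
    ss_total = sum((r - grand) ** 2 for r in all_ratings)
    # Variation attributable to the stage of the argument.
    ss_stage = sum(len(rs) * (mean(rs) - grand) ** 2
                   for rs in ratings_by_stage.values())
    return 100.0 * ss_stage / ss_total if ss_total else 0.0

# A rater whose ratings track argument stage very consistently:
ratings = {1: [-4, -3], 2: [-3, -3], 3: [-1, 0],
           4: [0, 1], 5: [2, 3], 6: [3, 4]}
print(round(c_score(ratings), 1))  # -> 97.0 in this toy decomposition
```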
Although the Defining Issues Test and the Moral Judgment Task may sound similar, their
metrics have been shown to measure different aspects of moral aptitude. In particular, it is
possible to score high on the Defining Issues Test scale but very low on the Moral Judgment
Task scale, particularly if one subscribes to absolute moral rules with little tolerance for other
views (Ishida, 2006).
No published studies have examined the relationship between psychopathy and
performance on the Moral Judgment Task, but our research group has administered the Moral
Judgment Task to a population of 74 inmates in New Mexico (mean age = 32.9, mean IQ =
99.28, mean PCL-R total = 22.63, range of PCL-R scores: 7.4-38.9). We found a non-significant
positive trend between C-scores on the Moral Judgment Task and total PCL-R scores
(p = 0.085), but no indication of a trend between C-scores and PCL-R Factor 1 (p = .276) or
Factor 2 scores (p = .205). However, we did find a strong correlation between PCL-R total scores
and participants' ratings about how right or wrong the workers' behavior was in the Worker's
Dilemma (r = .333, p = .007). The higher one scored on the PCL-R, the more likely one was to
disagree with the workers' decision to break into the administrative offices and take tape
transcripts that prove the allegation of eavesdropping. This correlation was dominated by both
facets of Factor 2 (Factor 2: r = .471, p < .001; Facet 3: r = .411, p = .001; Facet 4: r = .268,
p = .045), but not Factor 1 (p = .257), which suggests that the propensity for pathological lying
(associated with Factor 1) is not correlated with assessments of this scenario. (Age also
significantly negatively correlated with attitudes towards the Worker's Dilemma, but the
statistics listed above take age into account.) These results were reflected in between-group
differences as well. In three groups of low scorers (PCL-R of 7-15, N = 12), middle scorers
(PCL-R of 22-25, N = 19), and high scorers (PCL-R of 30-38.9, N = 12) matched for IQ and
controlled for age (low scorers were significantly older than the other two groups), low scorers
agreed with the workers' decision significantly more than high scorers (p = .008), and almost
significantly more than middle scorers (p = 0.073). Middle scorers and high scorers did not have
significantly different responses. Interestingly, there were no significant trends or differences
between PCL-R scores and agreement with the doctor's behavior in the Doctor's Dilemma. Thus,
these results suggest a very nuanced correlation between PCL-R Factor 2 scores and participants'
propensity to disagree with the act in one dilemma but not another. However, due to the phrasing

of the questions used in the Moral Judgment Task, it is not clear whether participants disagree
because they think it is risky or, instead, disagree because they think it is morally wrong.
Turiel's Moral/Conventional Test
A new perspective on moral judgment is provided by the Moral/Conventional Test, a test
based on social domain theories pioneered by Elliot Turiel. In Turiel's view, moral judgments are
seen as a) serious, b) based on harm to others, and c) independent of authority and geography, in
the sense that what is morally wrong is supposed to remain morally wrong even if authorities
permit it and even if the act is done in some distant place or time where it is common. Importantly
for social domain theories, Turiel argues that moral violations and conventional violations are
differentiated by most people early in life, whereas Kohlberg asserts that moral and conventional
thought diverge later in life and only for individuals who reach "post-conventional" levels of
moral reasoning. Turiel's view is supported by a wealth of research showing that children as
young as 3 years old draw the distinction between moral and conventional transgressions (for
reviews, see Turiel 1983 and Turiel, Killen, & Helwig, 1987). Children by age 4 tend to say, for
example, that although it is wrong to talk in class without raising one's hand and being called on,
it would not be wrong to talk in class if the teacher said that they were allowed to do so.
In contrast, even if the teacher approved, they say that it would still be wrong to hit other kids.
These same young children also tend to report that violations like hitting other kids are more
serious than violations like talking without being called on, and that the fact that it harms those
kids is what makes it more wrong. Harm is not what makes it wrong to talk without being called on,
however. These findings have been interpreted as suggesting that reasoning relevant to moral and
conventional transgressions might develop independently and be regulated by separate cognitive
systems.
To determine whether an individual distinguishes moral and conventional wrongdoing,
Moral/Conventional tests present participants with short scenarios, usually about kids, describing
events such as one child pushing another off of a swing or a child wearing pajamas to school.
After the scenario is described, participants are asked these questions about the act (Y) that the
agent (X) did in the scenario:
(1) Permissibility: "Was it OK for X to do Y?"
(2) Seriousness: "Was it bad for X to do Y?" and "On a scale of 1 to 10, how bad was it
for X to do Y?"
(3) Justification: "Why was it bad for X to do Y?"
(4) Authority-dependence: "Would it be OK for X to Y if the authority says that X may do
Y?"
Responses to Question 1 and Question 4 are scored categorically, with "Yes" or "OK" responses
assigned a score of 0 and "No" or "Not OK" responses assigned a score of 1. Cumulative scores
are then compared between scenarios describing conventional and moral transgressions.
Seriousness as assessed in Question 2 is scored according to the value (between 1 and 10) the
subject gave that transgression. The justifications given in Question 3 are scored according to
standardized categories (e.g., Smetana, 1985).
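A minimal sketch of this scoring scheme may help; the data layout and function names below are hypothetical stand-ins, while the 0/1 coding and the comparison of summed scores across moral and conventional blocks follow the description above (justification coding for Question 3 is omitted).

```python
# Hedged sketch of Moral/Conventional Test scoring as described above.
# Scenario responses are assumed pre-coded; names are illustrative.

def score_scenario(permissible_ok, seriousness_1_to_10, authority_ok):
    """Return (permissibility, seriousness, authority-independence) scores.
    "Yes"/"OK" answers -> 0; "No"/"Not OK" answers -> 1."""
    permissibility = 0 if permissible_ok else 1
    authority_independence = 0 if authority_ok else 1
    return permissibility, seriousness_1_to_10, authority_independence

def summed_scores(scenarios):
    """Sum each dimension across a block of scenarios so that the moral
    and conventional blocks can be compared, as the test prescribes."""
    totals = [0, 0, 0]
    for s in scenarios:
        for i, value in enumerate(score_scenario(*s)):
            totals[i] += value
    return totals

# A distinction-drawing subject: moral violations stay not-OK and serious
# even when an authority permits them; conventional ones do not.
moral = [(False, 9, False), (False, 8, False)]       # e.g., pushing a child
conventional = [(False, 3, True), (False, 4, True)]  # e.g., pajamas at school
print(summed_scores(moral), summed_scores(conventional))
# -> [2, 17, 2] [2, 7, 0]
```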
In the only published study of the moral/conventional distinction in adult psychopathy,
Blair et al. (1995) administered the moral/conventional test to ten high-PCL-R scoring patients

(mean PCL-R score: 31.6) and ten lower PCL-R scoring patients (mean PCL-R score: 16.1) from
high-security British psychiatric hospitals. Blair reported that high scorers did not demonstrate
an appreciation for the distinction between moral and conventional transgressions. Specifically, 6
of 10 high-scorers drew no distinction at all (and 2 drew only a mild distinction), whereas 8 of 10
low-scorers drew a clear distinction between moral and conventional violations. Furthermore,
failure to make the moral/conventional distinction correlated with the "lack of remorse or guilt,"
"callous/lack of empathy," and "criminal versatility" items on the PCL-R. Regarding specific
dimensions of the distinction, high-scorers did cite conventions or authorities to explain why
both moral and conventional violations are wrong, whereas low-scorers cited harm and justice
considerations to explain why moral violations are wrong (in Question 3). However, high-scorers
failed to make the distinction on all three other dimensions of permissibility (Question 1),
seriousness (Question 2), and authority independence (Question 4).
Perhaps most interestingly, high-scorers did not rate moral and conventional violations to
be permissible, not serious, and authority-dependent, as had been expected. Instead, they rated
conventional violations to be very serious and impermissible even if society and authorities said
that the act was acceptable. In short, they treated conventional transgressions as moral, whereas
they had been expected to treat moral transgressions as conventional.
The reasons for these results are still unclear. One possible explanation is that adult
psychopaths really do think that both moral and conventional norms have the status that most
people associate with only moral norms. Another explanation, proposed by Blair (1995), is that
psychopaths really see all transgressions as conventional and can't distinguish between moral
and conventional transgressions, but they treat all violations as moral in order to impress
investigators and improve their chances of release. In order to understand the next study, we'll
call this the "double-inflation" hypothesis, because it hypothesizes that psychopaths inflate the
ratings of both moral and conventional transgressions for impression management. A third
possibility is that adult psychopaths can make the moral/conventional distinction, but they
purposely inflate their ratings of only conventional transgressions for the same impression
management purposes. We'll call this the "single-inflation" hypothesis, because it hypothesizes
that psychopaths inflate only the ratings of the conventional transgressions for impression
management. Blair's data cannot decide among these hypotheses.
To help decide between these three alternative explanations, our group presented our own
set of moral and conventional violations to 109 inmates (including five who scored above 30 on
the PCL-R) and 30 controls (Aharoni et al. submitted). The difference between our test and
previous tests is that we told participants that 8 of the listed acts were pre-rated to be morally
wrong (that is, wrong even if there were no rules, conventions, or laws against them), and 8 were
pre-rated as conventionally wrong (that is, not wrong if there were no rules, conventions, or laws
against them). Knowing how many violations fit into each category, then, participants were
asked to label each violation according to whether or not the act would be wrong if there were no
rules, conventions, or laws against it. This difference in the formulation of the task is important,
because it purposely aligned success on the test with impression management, such that
participants would have to label and rate the transgressions appropriately in order to maximize
investigators' positive impressions. This removed the incentive to rate all acts as morally wrong
independent of convention and authority, because participants knew that rating all of them as
moral would misclassify half of the scenarios. Aharoni et al. found that inmates in general did
fairly well on this task (82.6% correct), though significantly worse than non-incarcerated
controls (92.5% correct). If psychopaths can't make the moral/conventional distinction (as
predicted by the double-inflation hypothesis), psychopaths should do worse on this task than
non-psychopaths. If they can make the moral/conventional distinction (as predicted by the
single-inflation hypothesis), however, psychopaths should do just as well as non-psychopaths.
Aharoni et al. found that PCL-R score had no relation to how well inmates performed on this
version of the task. Nor did PCL-R score have any relation to harm ratings, which were collected
after the main question about authority dependence, and which explained a significant proportion
(20%) of the variance in moral classification accuracy. Although more studies are needed, this
result suggests that, contrary to previous belief, psychopaths have the ability to distinguish moral
from conventional transgressions, even if their ability is not obvious in all situations.
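For concreteness, scoring on this modified task reduces to simple classification accuracy over the 16 pre-rated acts. The sketch below uses hypothetical item labels; only the 8-moral/8-conventional structure comes from the study as described above.

```python
# Hedged sketch of scoring the modified moral/conventional task;
# items and responses are hypothetical stand-ins.

def classification_accuracy(true_labels, answers):
    """Percent of acts correctly labeled 'M' (wrong even without rules)
    versus 'C' (wrong only because of rules, conventions, or laws)."""
    correct = sum(t == a for t, a in zip(true_labels, answers))
    return 100.0 * correct / len(true_labels)

truth = ['M'] * 8 + ['C'] * 8                       # 8 moral, 8 conventional
responses = ['M'] * 7 + ['C'] + ['C'] * 7 + ['M']   # two misclassifications
print(classification_accuracy(truth, responses))     # -> 87.5
```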
Tests that use philosophical scenarios
Since the mid-1990s, a new class of moral cognition tests has been developed with
inspiration from moral philosophers, who have discussed moral dilemmas for centuries. These
philosophers' dilemmas were not originally validated by psychological studies or reviewed by
psychology experts like the tests discussed above. Instead, these dilemmas were validated by
philosophical principles and vetted by philosophical experts. The goal of these dilemmas was
often to test proposed moral principles, but sometimes such dilemmas were intended to evoke
new moral intuitions that did not fit easily under any moral principle that had been formulated in
advance. In a clinical context, then, this method allows one to test a) whether psychopaths have
intuitions that reflect specific moral principles, and, in theory, b) whether psychopaths will report
these same moral intuitions when the intuition is not based on explicit moral principles.
Some of the most famous of these philosophical dilemmas are the Trolley Scenarios
(Foot, 1978; Thomson, 1976, 1985, 1986), including these versions from Greene et al. (2001):
SIDE TRACK
You are at the wheel of a runaway trolley quickly approaching a fork in the tracks. On the
tracks extending to the left is a group of five railway workmen. On the tracks extending
to the right is one railway workman. If you do nothing, the trolley will proceed to the left,
causing the deaths of the five workmen. The only way to save these workmen is to hit a
switch on your dashboard that will cause the trolley to proceed to the right, causing the
death of the one workman on the side track.
Is it appropriate for you to hit the switch in order to avoid the deaths of the five
workmen?
FOOTBRIDGE
A runaway trolley is heading down the tracks toward five workmen who will be killed if
the trolley proceeds on its present course. You are on a footbridge over the tracks between
the approaching trolley and the five workmen. Next to you on this footbridge is a stranger
who happens to be very large. The only way to save the lives of the five workmen is to
push this stranger off the bridge and onto the tracks below where his large body will stop
the trolley. The stranger will die if you do this, but the five workmen will be saved.
Is it appropriate for you to push the stranger on to the tracks in order to save the five
workmen?


Trolley scenarios vary in details (see Greene et al. 2009), and experimenters may use different
words to phrase their questions: "Is it appropriate to…?", "Is it permissible to…?", "Is it wrong
to…?", or "Would you…?". One way or another, though, these scenarios typically pit
conflicting moral intuitions or principles against each other in an effort to determine which moral
intuitions prevail.
Greene on Personal vs. Impersonal Dilemmas. Petrinovich and O'Neill (1996) were the first
group to use trolley scenarios in an experiment, but enthusiasm for the experimental use of
philosophical dilemmas took off when Joshua Greene et al. (2001) published a battery of 120
short scenarios consisting of 40 non-moral scenarios, 40 impersonal moral scenarios, and 40
personal moral scenarios. A moral violation was defined as personal (or up close and
personal) if it was (i) likely to cause serious bodily harm, (ii) to a particular victim, and (iii) the
harm does not result merely from the deflection of an existing threat onto a different party.
(Greene et al. 2001) A moral violation is impersonal if it fails to meet these criteria. Pushing the
man in the Footbridge scenario was argued to exemplify personal moral violations, whereas
hitting the switch in the Side Track scenario was argued to exemplify impersonal moral
violations (because an existing threat was deflected).
Greene et al. (2001) hypothesized that personal moral violations would automatically
cause a negative emotional reaction that would subsequently cause participants to judge the
violations as morally wrong or inappropriate, despite their benefits in saving more lives. This
hypothesis was supported by evidence that personal moral scenarios elicited hemodynamic
activity in brain regions, including the ventromedial prefrontal cortex, claimed to be involved in
emotion, while impersonal moral scenarios did not elicit activity in such areas.11 These brain
imaging results have been replicated and supported in subsequent studies by Greene's group (e.g.,
2004, 2008), and their experimental scenarios have proven to elicit consistent and robust patterns
of moral intuitions.

11 Greene et al. (2001) also hypothesized that the automatic emotional reaction against personal
harm must be overcome by the few who judge that it is not morally wrong or inappropriate to
cause personal harm, such as by pushing the man off the footbridge. This additional hypothesis
was supposed to be supported by differences in reaction time: when people made judgments that
it was not inappropriate to cause personal harm in order to serve a higher purpose (such as saving
more lives), it reportedly took people longer to arrive at their judgment than when they decided it
was wrong, presumably because they had to regulate and overcome their initial emotional
response. However, these findings have been questioned by XXXX (200X) in a recent reanalysis
of the reaction time data in Greene et al. (2001).
Based on the assumption that psychopaths have reduced emotional reactions, especially
with regard to harm to others, a popular prediction was that psychopaths would be more likely to
judge that the acts described in personal moral scenarios are not wrong, because psychopaths
would lack the emotional response that makes most people reject acts that cause up close and
personal harm. This prediction is bolstered by reports that patients with damage to the
ventromedial prefrontal cortex (a region believed to have reduced functionality in psychopathy;
Damasio, Tranel, & Damasio, 1990; Eslinger & Damasio, 1985; Koenigs & Tranel, 2006;
Koenigs et al., 2010) are more likely to judge that personal harm for the greater good is
morally permissible (Koenigs, Young, et al. 2009).
This prediction, however, has not been supported in psychopaths so far. In two reports
from the same group (Glenn et al. 2009a and 2009b), both high and low
scorers on the PCL-R were more likely to say that they would not perform the acts described in
personal moral scenarios compared to impersonal moral scenarios, as reported previously in
healthy populations. Despite predictions, there was no difference in responses between the
PCL-R groups. It is worth noting, however, that both studies used only a small subset of the
original scenarios from Greene et al. (2001), and psychopathy scores ranged only as high as 32,
with the cut-off for high scorers being 26 rather than 30. Thus, these studies do not rule out the
possibility that psychopaths with very high PCL-R scores might represent a discrete group with
enough neural impairment to result in differences in personal moral judgment.
Another study by Cima, Tonnaer, and Hauser (2010) found similar results. Using a
PCL-R cutoff of 26, this group studied 14 psychopathic offenders and 23 non-psychopathic
offenders from a forensic psychiatric center in the Netherlands as well as 35 healthy controls.
Unfortunately, they do not report how many participants scored above 30. Each participant
received only 7 personal and 14 impersonal moral dilemmas from Greene (2001, 2004) and was
instructed to respond "Yes" or "No" to "Would you X?" In an attempt to control for lying,
participants were also given a Socio-Moral Reflection questionnaire (SMR-SF) about
straightforward and familiar transgressions, such as "How important is it to keep a promise to
your friend?" and "How important is it not to steal?", which they answered on a 5-point scale
from "very unimportant" to "very important." The results showed no significant difference between
psychopaths and the other groups in the percentage of endorsed acts of impersonal harming or
personal harming; self-serving or other-serving personal harming; or Pareto-optimal or
non-Pareto-optimal other-serving harming. There was no significant correlation between PCL-R
scores or Factor 1 or 2 scores and their moral judgments on personal dilemmas or their scores on
the SMR-SF.12 Although these null results line up nicely with the studies by Glenn et al., the use
of a PCL-R cutoff of 26 still leaves open the possibility that very high scorers (>30 or >34) might
show differences that do not appear in this sample. In addition, the lack of significant differences
might be due to the small number of psychopaths (14), low number of personal dilemmas (7), or
low overall IQ of the participants (81.6; see discussion of IQ in trolley scenarios below). When
considering all of these studies, it should also be noted that subjects were asked "Is it appropriate
to X?" in the Glenn et al. studies and "Would you X?" in the Cima et al. study, so it is not clear
whether differences would emerge if subjects had been asked directly "Is it wrong to X?"
Moreover, it is not clear that any of these studies succeeded in controlling for lying, so it is still
possible that psychopaths' responses to these scenarios do not reflect real moral beliefs. In sum,
these studies are suggestive, but they do not establish definitively that true psychopaths make
normal moral judgments in personal and impersonal dilemmas. On the other hand, there is also at
present no evidence that psychopaths respond differently to personal moral dilemmas than
non-psychopaths.

12 Cima et al. report that there were only four cases where the psychopaths judged the case more
permissible by 20-40%. They do not, unfortunately, specify which cases these were.
Consequences, Action, and Intention. Although the distinction between personal and impersonal
moral dilemmas has proven to be consistent, robust, and useful, it is not clear how that
distinction maps onto well-known moral principles. In an effort to more precisely define
principles that might influence moral judgments, Schaich Borg et al. (2006) and Cushman et al.
(2006) designed sets of dilemmas that independently manipulated moral intuitions towards
consequences, actions, and intentions. These factors were chosen because they had been
explicitly incorporated into rules of medical and political ethics as well as law and moral theory
(see, for example, the Supreme Court opinions on euthanasia in Vacco et al. v. Quill et al., 117
S.Ct. 2293 (1997)), and thus were important to many types of moral decisions.
We'll begin by explaining the factor of Consequences. Almost everybody agrees that it
is morally preferable to increase good consequences and to reduce bad consequences. The
philosophical view called Consequentialism tries to capture this intuition by claiming that we
morally ought to do whatever has the best overall consequences, including all kinds of good and
bad consequences for everyone affected (Sinnott-Armstrong, 2003). Although it is often difficult
to know which option will lead to the best consequences, the basic idea behind Consequentialism
is hard to beat. If one patient needs five pills to survive, but five other patients need only one pill
each to survive, and if the only doctor has only five pills, then almost everyone would judge that
the doctor morally ought to give one pill to each of the five patients rather than giving all five
pills to the one patient. Why? Because that plan saves more lives. These moral intuitions and
choices reflect the role that Consequences play in our moral judgments.
A second factor that can conflict with consequences is Action. Moral intuitions related to
Action are reflected in the traditional Doctrine of Doing and Allowing (DDA), which states that
it takes more to justify actively doing harm than to justify passively allowing harm to occur
(Howard-Snyder, 2002). For example, it seems much worse for Al to shoot an innocent victim
than to allow that same person to die, perhaps by failing to prevent Bob from shooting that
person. Similarly, suppose a powerful evil genius shows you two bound people and says, "If you
do nothing, the person on the right will die and the person on the left will live. If you push the
button in front of you, the person on the left will die and the person on the right will live"
(Tooley, 1972). Despite the fact that the consequences (one death) are the same in both possible
outcomes, many people judge that you (and they) should not push the button in this situation,
perhaps because nobody should "play God." This intuition illustrates the role of action as
opposed to inaction in moral judgments. Intuitions about Action not only can be
distinguished from intuitions about Consequences; they often conflict with the options that lead
to the best consequences.
A third factor that can conflict with goals for the best consequences is Intention. Moral
intuitions about intending harm lie behind the traditional Doctrine of Double Effect (DDE),
which holds that it takes more to justify harms that are intended either as an end or as a means
than to justify harms that are known but unintended side effects (McIntyre, 2004). The point of
the doctrine is not just that it is worse to intentionally cause harm than to accidentally cause the
same harm (such as by kicking someone as opposed to tripping over them). Instead, the DDE
contrasts two ways of causing harm, neither of which is completely accidental. For example,
imagine that the only way to stop a terrorist from igniting a bomb that will kill many innocent
people is to shoot the terrorist's innocent child in order to distract the terrorist. Many people
would judge that it is morally wrong to shoot the child, because that plan requires intending harm
to the child as a means to achieve one's goal (cf. Kant XXXX). In contrast, suppose that the
only way to stop the terrorist is to shoot him, but he is carrying his child on his back and your
bullet will go through the terrorist and then hit his child if you shoot. Although you know that
shooting the terrorist will harm his child, you do not intend to harm his child, because your plan
would work if somehow you hit the terrorist but missed his child. Many people judge that it is
not as morally wrong to shoot in this second terrorist scenario as to shoot the child intentionally
in the first terrorist scenario. These examples demonstrate the role of Intention in moral
judgments, and show that the factor of Intention is separate from, or can be in conflict with, both
Consequences and Action. Despite intuitions that it is wrong to intentionally shoot the terrorist's
innocent child, refraining from doing so would result in more people dying; and people feel it is
worse to shoot the child intending for him to die than to shoot the child without intending his
death, despite the fact that both scenarios require the same action of shooting.
The DDE might explain common responses to the trolley scenarios described in the
"personal vs. impersonal harm" section. To push the man in Footbridge involves intending harm
as a means, because the plan would not work if the trolley missed the man who was pushed. In
contrast, hitting the switch in Side Track does not involve intending harm as a means, because
this plan would work even if the trolley missed the man on the side track. This difference in plan
or intention seems to explain why more people judge that the act in Footbridge is morally wrong
than judge the act in Side Track is morally wrong (Sinnott-Armstrong et al. 2008).
Schaich Borg et al. (2006) and Cushman et al. (2006) designed scenarios that
systematically vary the number of lives at stake, whether an action is required, and whether harm
is intentional in order to test intuitions relevant to Consequentialism, the DDA, and the DDE,
respectively. The groups administered their tests in different ways. After each scenario, Schaich
Borg et al. (2006) asked participants, "Is it wrong to [do the act described in the scenario]?" and
"Would you [do the act described in the scenario]?" The scenarios designed by Schaich Borg et
al. (2006) were given in person to a pilot group of 54 participants outside of the fMRI scanner
and an experimental group of 26 participants inside an fMRI scanner. All subjects read all
scenarios, and the consequences, action, and intention factors were varied in a factorial design so
that it could be determined, for each subject, how relatively important each factor was for their
judgments. In contrast, Cushman et al. (2006) asked participants to rate the protagonist's action
on a scale from 1 (labeled "Forbidden") to 7 (labeled "Obligatory"). They presented their
scenarios over the internet to 1,500-2,500 participants across cultures in a between-subjects
design. Each participant read only four scenarios, and the responses of two separate groups to
two separate scenarios were compared in order to determine which factors affected moral
judgments. Despite the differences between their methods, both groups found that on average
participants' responses reflected all three factors: Consequences, Action, and Intention.
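To illustrate what such a factorial design lets one compute, here is a minimal sketch under our own assumptions; the 0/1 coding and variable names are hypothetical, not either group's actual stimuli or analysis scripts. Each scenario is coded for its Consequences, Action, and Intention levels, and a subject's sensitivity to each factor is estimated as the difference in mean wrongness judgments between the factor's two levels.

```python
# Hedged sketch of a 2x2x2 factorial analysis of moral judgments.
# Each trial: (consequences_better, is_action, harm_intended, judged_wrong),
# all coded 0/1. The data below are invented for illustration.
from statistics import mean

def factor_effect(trials, factor_index):
    """Mean wrongness when the factor is present minus when absent."""
    present = [t[3] for t in trials if t[factor_index] == 1]
    absent = [t[3] for t in trials if t[factor_index] == 0]
    return mean(present) - mean(absent)

# One hypothetical subject's judgments across the eight scenario types:
trials = [
    (0, 0, 0, 0), (0, 0, 1, 1), (0, 1, 0, 1), (0, 1, 1, 1),
    (1, 0, 0, 0), (1, 0, 1, 1), (1, 1, 0, 0), (1, 1, 1, 1),
]
for name, idx in [("Consequences", 0), ("Action", 1), ("Intention", 2)]:
    print(name, round(factor_effect(trials, idx), 2))
# -> Consequences -0.25, Action 0.25, Intention 0.75
```

A real analysis would use the actual rating scales and a regression or ANOVA rather than raw mean differences, but the logic of isolating each factor's contribution is the same.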
Both groups also asked participants for reasons or justifications for their moral
judgments, and both found significant differences. First, both report that participants often
produced justifications for their moral judgments corresponding to the DDA (reflecting the
Action factor): justifications like "I'm not here to play God" or "I don't want to cause harm." In
contrast, participants much less often delivered justifications for their moral judgments
corresponding to the DDE (reflecting the Intention factor). In fact, almost every participant in the
Schaich Borg et al. study cited something like the DDA in their justifications for their judgments,
but not a single participant cited the DDE or whether the harm was intentional. Second, Schaich
Borg et al. (2006) found that neural activity in the ventromedial prefrontal cortex correlated with
making judgments about intentional as opposed to non-intentional harm, but did not correlate
with making other kinds of judgments.
These findings together support some speculations: Intuitions associated with Intention
and the DDE (but not the DDA) normally depend on unconscious or inaccessible principles
whose influence is mediated by the vmPFC. If so, and if vmPFC function is reduced in psychopaths,
as some researchers hypothesize (Damasio, Tranel, & Damasio, 1990; Eslinger & Damasio,
1985; Koenigs & Tranel, 2006; Koenigs et al., 2010), then moral judgments by psychopaths
might be less sensitive to intentions and the DDE.
To test this hypothesis, we administered the Moral Consequences, Action, and Intention
(MCAI) questionnaire from Schaich Borg et al. (2006) to 81 male inmates, though 17
participants had to be discarded because they missed the catch scenarios (suggesting that they
did not pay adequate attention), leaving a total of 64 participants. Eighteen of these 64 participants had
a PCL-R score of 30 or higher, with the highest score being 36.8. As in the original non-forensic
sample, participants demonstrated sensitivity to the principles of Consequences, Action, and
Intention. Participants were more likely to judge that an option was morally wrong and less
likely to say they would perform the act if it required action as opposed to omission (p < .001,
p < .001) or if it required intentional harm (p < .001, p < .001) towards humans as opposed to objects
(p < .001). They were also less likely to say an act was wrong and more likely to say that they
would do it if it led to better consequences (p < .001, p < .001). Contrary to our prediction,
psychopaths' moral judgments about what was wrong or not wrong did not differ from moral
judgments by non-psychopaths, no matter how the analyses were constructed.
Nonetheless, other data did suggest one potential difference. Non-psychopaths and
psychopaths had the same reaction times to non-moral scenarios, but their reaction times differed
significantly with moral scenarios. Non-psychopaths took on average around 9500 ms to give
affirmative answers ("Yes, it is wrong"), but they took on average only around 7000 ms to give
negative answers ("No, it is not wrong"). In contrast, psychopaths gave affirmative answers in
about 7000 ms on average, but they took on average about 9000 ms to give negative answers.
Psychopaths thus differed from normals in opposite directions for affirmative and negative
reaction times: a double dissociation.
Of course, any interpretation of these findings must be speculative. Still, one plausible
possibility is that non-psychopathic participants did not want to call acts "wrong" because in
many of these dilemmas more people would suffer or die if the act was not done, and the non-psychopathic participants were reluctant to let those people suffer or die. They felt a serious
conflict that slowed them down. Since psychopaths did not care in the same way about those
other people, they could make their affirmative judgments with less reluctance and, hence, more
quickly. But then why did psychopaths take longer to reach negative judgments than to reach
positive judgments? A possible speculation is that the psychopaths were prisoners so they were
thinking about how others might regard their answers. If the psychopaths called an act "not
wrong" when others judged it to be wrong, then the psychopaths might seem more likely to do
such wrong acts, which might get them into trouble. In contrast, if the psychopaths called an act
"wrong" that others did not see as wrong, then the psychopaths would not get into trouble,
because that response would not make them seem especially dangerous. This attempt at
impression management might make psychopaths more reluctant to give a negative answer
than to give a positive answer, and that reluctance would explain their reaction times. It would
also suggest that psychopaths base their answers on considerations that are very different from
the reasons of non-psychopaths.
However, it is crucial to add that these reaction time differences were due to only a few of
the participants. Hence, it would be much too hasty to draw any strong conclusion from these
reaction time differences. We mention them here mainly in order to suggest why future research
should look carefully at reaction times in addition to responses.
Another result might also be revealing. Schaich Borg et al. (2006) suggested that
psychopaths would demonstrate reduced sensitivity to intentions and the DDE, but this
hypothesis was based on the assumption that intuitions based on intention and the DDE normally
depend on principles that are unconscious or inaccessible. Surprisingly, however, many of the
inmates in our later study explicitly cited intentions and even principles like the DDE during
their interviews after the task. Upon further probing, it seemed that the inmates' ability to
articulate these usually inaccessible principles was likely a consequence of what they had learned
through their legal proceedings or through moral sensitivity training they had previously
received in prison. Because moral principles about intentional harm were consciously
accessible to these participants, there is no reason to expect reduced sensitivity to intentions or
the DDE in this sample. Hence, it still has not been determined whether psychopaths without
explicit exposure to principles regarding intentional harm will demonstrate less moral sensitivity
to intentions or the DDE.
In sum, there are still some open questions, but a negative conclusion is supported by the
data. Consistent with previous results using personal vs. impersonal moral dilemmas, there is
currently no strong evidence that psychopaths respond differently than non-psychopaths to
Consequences, Action, or Intention as factors in philosophical moral dilemmas.
Tests that go beyond harm-based morality
The tests discussed thus far focus on moral prohibitions against causing harm. This focus
makes sense, because these judgments are central to morality, and what makes psychopaths
problematic is their tendency to cause harm to others. However, morality includes more than
merely prohibitions on causing harm. People often make moral judgments about harmless
immorality, and sometimes moral principles even dictate causing harm, as in retributive
punishment. In this section, we will review studies of how psychopaths perform in some of these
other areas of morality.
Haidt's Moral Foundations Questionnaire. Complementary to the tests developed through
philosophy, anthropology and evolutionary psychology have inspired additional tests of moral
judgment. This theoretical approach asserts that moral judgments should be defined by their
function rather than their content. In particular, the Moral Foundations Theory of Jonathan Haidt
takes morality to cover any mechanism (including values, rules, or emotions) that regulates
selfishness and enables successful social life, regardless of what the contents of those mechanisms
are (Graham et al., under review; cf. Warnock XXXX). This definition implies that many
prohibitions or principles that others classify as non-moral conventions or biases are instead
moral in nature, just as much as rules of justice, rights, harm, and welfare. In total, Haidt
delineates five areas or foundations of moral regulation: 1) Harm/care, 2) Fairness/reciprocity,
3) Ingroup/loyalty, 4) Authority/respect, and 5) Purity/sanctity. Haidt argues that these areas are
common (though not necessarily universal) across cultures and have some clear counterpart in
evolutionary thinking.
Judgments in these five areas are tested by the Moral Foundations Questionnaire (MFQ)
in two ways. In the first part of the MFQ, participants are asked to indicate how relevant (from
"not at all" to "extremely relevant") various considerations are when they decide whether
something is right or wrong. Different foundations are represented by different considerations, as
illustrated by these examples:
______Whether or not someone showed a lack of respect for authority
______Whether or not someone violated standards of purity and decency
______Whether or not someone was good at math
______Whether or not someone was cruel
______Whether or not someone was denied his or her rights
In the second part of the MFQ, participants are asked whether they strongly, moderately, or
slightly agree or disagree with statements reflecting various moral foundations, including these
examples:
______ I would call some acts wrong on the grounds that they are unnatural.
______ It can never be right to kill a human being.
______ It is more important to be a team player than to express oneself.
______ If I were a soldier and disagreed with my commanding officer's orders, I would obey anyway because that is my duty.
______ People should not do things that are disgusting, even if no one is harmed.
Participants' responses are averaged across a given foundation to get a final score ranging from 0
to 5 for each of the five moral foundations.
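As a concrete illustration of this scoring rule, the sketch below averages one participant's item responses within each foundation. The item names and the item-to-foundation mapping are hypothetical stand-ins, not the official MFQ key.

```python
# Minimal sketch of the MFQ scoring rule described above: average each
# participant's 0-5 item responses within a foundation. The item keys and
# mapping here are hypothetical stand-ins, not the official MFQ key.
from statistics import mean

foundation_items = {
    "harm":     ["cruel", "suffer"],
    "fairness": ["rights", "unfair"],
    "purity":   ["decency", "disgusting"],
}

# one hypothetical participant's 0-5 relevance/agreement responses
responses = {"cruel": 5, "suffer": 4, "rights": 3, "unfair": 4,
             "decency": 2, "disgusting": 1}

scores = {f: mean(responses[item] for item in items)
          for f, items in foundation_items.items()}
print(scores)  # e.g. {'harm': 4.5, 'fairness': 3.5, 'purity': 1.5}
```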
Our group has administered the Moral Foundations Questionnaire to 222 adult male
inmates in New Mexico. Of these, 40 inmates had a PCL-R score of 30 or higher; the highest
score was 36.8 and the mean was 21.54. We found that total PCL-R scores were not
correlated with ratings of the in-group foundation, but were negatively correlated with ratings for
the harm (β = -.23, t(205) = -3.46, p < .001), fairness (β = -.18, t(215) = -2.77, p < .01), and
purity (β = -.17, t(215) = -2.57, p < .05) foundations, and marginally with the authority
foundation (β = -.11, t(215) = -1.63, p = .052). PCL-R Factors 1 and 2 correlated negatively
and approximately equally with the harm, fairness, and authority foundations, although Factor 1
was more uniquely correlated with Fairness foundation ratings than Factor 2 was. Interestingly,
Facet 1 (possibly reflecting impression management) and Facet 2 (reflecting callous traits)
uniquely explained 16.2% and 27.1% of the variance in Harm foundation ratings, respectively,
but these facets correlated with the ratings in opposite directions. Facet 1 scores correlated
positively with Harm foundation ratings (β = .20, t(215) = 2.67, p < .01; R² = .23, p < .001),
meaning participants with higher Facet 1 scores rated the Harm foundation items as more
important. Facet 2 scores, in contrast, correlated negatively with Harm foundation ratings
(β = -.31, t(215) = -4.46, p < .001), meaning participants with higher Facet 2 scores rated the
Harm foundation items as less important. These results thus provide an example of different
psychopathic traits correlating with moral judgments in opposite directions.
Moreover, given that Facet 1 and Facet 2 scores were strongly correlated (r = .73, p < .001),
these results suggest that the negative relationship between total PCL-R scores and Harm
foundation ratings existed despite a significant influence of impression management.
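Because Facet 1 and Facet 2 are themselves strongly correlated, opposite-signed unique effects like these only emerge when both facets are entered as simultaneous predictors. The sketch below, with entirely invented data and using the statsmodels library (an assumption on our part; the software behind the reported analysis is not stated), illustrates the idea.

```python
# Illustrative sketch (not the authors' analysis code): two correlated
# predictors (Facet 1, Facet 2) can relate to an outcome (Harm foundation
# ratings) in opposite directions once both are entered in one regression.
# All numbers below are made up for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 215

# simulate facet scores correlated at roughly r = .7
facet1 = rng.normal(size=n)
facet2 = 0.7 * facet1 + np.sqrt(1 - 0.7**2) * rng.normal(size=n)

# opposite-signed "true" effects, mimicking the pattern reported in the text
harm_rating = 0.20 * facet1 - 0.31 * facet2 + rng.normal(scale=0.9, size=n)

X = sm.add_constant(np.column_stack([facet1, facet2]))
fit = sm.OLS(harm_rating, X).fit()
print(fit.params)   # constant, facet1 (positive), facet2 (negative)
print(fit.pvalues)
```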
Of note, our results are partially consistent with the results published in one study
investigating the relationship between MFQ scores and non-clinical psychopathic personality
traits in the general population (Glenn et al., 2009). This study used the Levenson Self-Report
Psychopathy Scale (SRPS) to assess psychopathy, a self-report questionnaire that significantly
correlates with the assessments made by the PCL-R, but only mildly so. (Total PCL-R scores
correlate with total SRPS scores at 0.35; when the PCL-R and the SRPS are used to divide
participants into groups of high, low, and middle scorers, the kappa coefficient is only 0.11.)13 In
this study, SRPS scores negatively correlated with endorsement of the moral foundations of
Harm, Fairness, and Purity, as in our incarcerated population. However, unlike the results from
our incarcerated population, they found a positive correlation between SRPS scores and
endorsement of the In-group moral foundation, and they failed to find a correlation between
SRPS scores and endorsement of the Authority foundation. Given the differences between their
population and ours, as well as between the PCL-R and SRPS assessments, it is not surprising
that their correlations differ slightly from those we found in our incarcerated population with the
MFQ.
13 See the chapter by Fowler and Lilienfeld in this volume.
Although the correlations between psychopathy scores and ratings of moral foundations
were significant in our population, two important twists need to be considered. On average, our
inmate population rated the Harm and Fairness foundations as highly as did non-incarcerated
populations studied by Graham et al. (2009) (average scores of 3.44 and 3.43 compared to the
reported scores of 3.42 and 3.55, respectively). Curiously, however, the In-group, Authority, and
Purity foundations were rated as much more important by our incarcerated population than by
Graham et al.'s non-incarcerated population. The average rating of the In-group foundation was
3.43 in our population compared to 2.26 in Graham et al.'s non-incarcerated population, 3.15
compared to 2.27 for the Authority foundation, and 3.02 compared to 1.54 for the Purity
foundation. These differences suggest that one interpretation of the negative correlation between
PCL-R scores and ratings of the authority and purity foundations is that higher-scoring
psychopaths actually responded more like the average population than lower-scoring
psychopaths, not less like the average population.
Equally important, PCL-R scores are not the only predictor of moral foundation ratings.
Haidt and colleagues have shown in multiple populations that political orientation correlates with
moral foundation ratings just as much and sometimes even in the same direction as psychopathy
(Graham et al., 2009). In fact, the more conservative a political ideology one identifies with, the
more likely one is to rate the moral foundations of Harm and Fairness like a high-scoring
psychopath. Given that populations in previously published studies valued the foundations of
Authority and Purity much less than our population, it is harder to compare the patterns for these
foundations between our two studies. The point here is definitely not to say that conservatives
are psychopaths. Rather, the point is that the same amount of variance accounted for by
psychopathy can also be accounted for by many other socially acceptable traits. Therefore,
without further research, there are no clear implications to be drawn from our discovered
correlations between psychopathy and particular moral foundations.
Robinson and Kurzban's Deserved Punishment Test. Yet another kind of moral judgment
concerns not which acts are morally wrong but, instead, how others should react to wrongdoing
and, in particular, how much punishment others' wrongdoing deserves. These judgments
involve not only the categorical judgment of whether some punishment is deserved, but also how
much punishment is deserved and whether certain crimes deserve more punishment than others.
To test these moral judgments about punishment, Paul Robinson and Robert Kurzban
(XXXX) used legal principles to construct descriptions of 24 standard crimes (including theft,
fraud, manslaughter, murder, and torture) that collectively represent 94.9% of the offenses
committed in the United States. Here are two examples of their stimuli:
SHORT CHANGE CHEAT
John is a cab driver who picks up a high school student. Because the customer seems
confused about the money transaction, John decides he can trick her and gives her $20
less change than he knows she is owed.
BURNING MOTHER FOR INHERITANCE


John works out a plan to kill his 60-year-old invalid mother for the inheritance. He drags
her to her bed, puts her in, and lights her oxygen mask with a cigarette, hoping to make it
look like an accident. The elderly woman screams as her clothes catch fire and she burns
to death. John just watches her burn.
Robinson and Kurzban also included scenarios describing 12 non-standard crimes, such as
prostitution and drug possession. In their first study, the scenarios were written on cards, and
participants were asked to order the cards according to how much punishment they think each
crime deserves (though they were also allowed to set aside acts that deserve no punishment at
all). Then they were given a chance to reconsider each pair of scenarios to confirm that they were
ordered as wished before committing to their final ordering. Of note, then, unlike the moral
judgments collected in the assessments described earlier in this chapter, the moral judgments
made in the Robinson and Kurzban task reflect relative comparisons of specific criminal acts.
Robinson and Kurzban followed up this in-person card study with a larger study over the internet,
using similar instructions.
Robinson and Kurzban found that people's moral intuitions of deserved punishment for
the standard crimes are surprisingly specific and widespread. In a sample of 64 participants given
the card test, 92% of the time subjects agreed that no punishment was deserved for four
scenarios, and 96% of the time subjects agreed about how to rank the other twenty scenarios
(Kendall's W = .95, p < .001). In the internet replication, a sample of 246 subjects agreed that the
first four scenarios did not deserve punishment 71%-87% of the time (depending on the
scenario), and agreed about how to rank the rest of the scenarios 91.8% of the time (Kendall's W
= .88, p < .001). These data suggest that this test provides a robust way to probe moral
differences in other populations.
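For readers who want the agreement statistic itself, here is a short sketch of Kendall's coefficient of concordance W, computed from a matrix of rank orderings under the standard no-ties formula. The rankings are invented; this is our illustration, not Robinson and Kurzban's code.

```python
# Sketch of Kendall's coefficient of concordance W, the agreement statistic
# reported for participants' rank orderings (standard formula, no tie
# correction). The rank matrix below is invented for illustration.
import numpy as np

def kendalls_w(ranks: np.ndarray) -> float:
    """ranks: (m raters, n items) matrix; each row is a ranking 1..n."""
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)                    # R_i for each item
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()  # squared deviations
    return 12 * s / (m**2 * (n**3 - n))

# three hypothetical raters ranking five scenarios (1 = least punishment)
ranks = np.array([[1, 2, 3, 4, 5],
                  [1, 2, 4, 3, 5],
                  [2, 1, 3, 4, 5]])
print(round(kendalls_w(ranks), 3))  # near 1 = near-perfect agreement
```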
We administered Robinson and Kurzbans test to 104 adult male inmates. The PCL-R
scores for 3 of these inmates were not available, but 25 had a PCL-R score of 30 or higher. PCL-R scores ranged from 3.2 to 36.8 with a mean of 22.5. Similar to Robinson and Kurzban's
findings in non-incarcerated populations, our incarcerated sample had high agreement in
rankings of deserved punishment, with Kendall's W of .85 overall (p < .001). When PCL-R total
scores were compared to each participant's Kendall's Tau (a measure of how similar the
participant's rank ordering was to the ideal order), there was no significant correlation (p = .518
when age was taken into account, because age correlated with PCL-R score, p = .046). However,
upon further inspection, this lack of correlation was due to the fact that Factor 1 and Factor 2
scores correlated with Kendall's Tau in opposite directions and canceled each other out in the
PCL-R total score. Factor 1 correlated positively with Kendall's Tau (b = .36, t = 2.98, p < .004)
and Factor 2 correlated negatively with Kendall's Tau (b = -.30, t = -2.43, p < .017) when the
correlated variables of age and ethnicity were taken into account. In other words, the higher one
scores on the interpersonal and affective aspects of the PCL-R, the more normal one's reported
moral intuitions about deserved punishment. The higher one scores on the social deviance
aspects of the PCL-R, however, the more abnormal one's intuitions about punishment. We found
no evidence that these effects were dominated by one facet of psychopathy over another.
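As an illustration of the similarity measure itself, a participant's Kendall's tau against a reference ("ideal") ordering can be computed as in the brief sketch below; the orderings are invented, and scipy's kendalltau is our assumed implementation, not necessarily the one used in the study.

```python
# Hedged sketch of the similarity measure described above: Kendall's tau
# between one participant's rank ordering of the scenarios and a reference
# ("ideal") ordering. Both orderings below are invented for illustration.
from scipy.stats import kendalltau

ideal_order = [1, 2, 3, 4, 5, 6]     # reference ranking of six crimes
participant = [1, 3, 2, 4, 6, 5]     # one inmate's ranking, with two swaps

tau, p_value = kendalltau(ideal_order, participant)
print(f"tau = {tau:.2f}, p = {p_value:.3f}")  # high tau = near-normal ordering
```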
These results suggest a word of caution in interpreting any studies of moral judgments in
psychopaths. Perhaps one reason why it is so hard to find effects of psychopathy on moral
judgment in studies with small numbers of psychopathic participants is that Factor 1
psychopathic traits and Factor 2 psychopathic traits influence moral judgments in opposite
directions and ultimately cancel each other out when neither Factor dominates. If so, future
studies need to include enough participants, with sufficient variation on different PCL-R items,
to statistically separate the effects of the different Factors (and Facets) of
psychopathy.
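A toy simulation can make this cancellation concrete: when two positively correlated factors push a judgment measure in opposite directions, their sum can show almost no correlation with the judgments even though each factor matters. All values below are invented.

```python
# Toy simulation of the cautionary point above: if Factor 1 and Factor 2
# push moral judgments in opposite directions, their effects can cancel in
# the total score even when each factor matters. All values are invented.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
f1 = rng.normal(size=n)
f2 = 0.5 * f1 + rng.normal(size=n)           # factors positively correlated
judgment = 0.3 * f1 - 0.3 * f2 + rng.normal(size=n)
total = f1 + f2                              # stand-in for a PCL-R total

print(np.corrcoef(total, judgment)[0, 1])    # near zero: effects cancel
print(np.corrcoef(f1, judgment)[0, 1])       # positive
print(np.corrcoef(f2, judgment)[0, 1])       # negative
```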
Moral pictures with brain scans
All of the studies so far depend on verbal self-reports of moral judgments. Because two
items on the PCL-R are "pathological lying" and "conning/manipulative," one serious concern is
whether we should trust these self-reports. Psychopaths might be able to reason out how
non-psychopaths would respond and might want to appear normal in order to manipulate others,
and they might therefore respond to questionnaires in the same ways non-psychopaths do
without believing or
appreciating what they say. By analogy, atheists can often respond in the same way as religious
believers to questions about what is sacrilegious, even though atheists do not believe that
anything is really sacrilegious. Similarly, if psychopaths report what they know to be other
people's moral beliefs, but they do not share those moral beliefs, by some definitions it could be
argued that they do not really make normal moral judgments or even moral judgments at all.
Moreover, even if psychopaths really do believe the moral judgments that they report,
they still might not make those moral judgments in the same way as non-psychopaths. In his
seminal book The Mask of Sanity (1976), Hervey Cleckley observed that psychopaths verbally
express emotions, often at the appropriate times, but they don't seem to actually experience or
value those emotions. In other words, "they know the words but not the music" (Johns & Quay, 1962).
This suspicion is supported by findings indicating that psychopaths have reduced autonomic
responses in the body and hemodynamic responses in the brain in response to emotional stimuli,
even when they report that they feel the appropriate emotions (Blair et al., 2006; Kiehl et al.,
2006). These findings raise the possibility that, even if psychopaths report normal moral
emotions and normal moral judgments, those reports of moral judgments might be arrived at
through very different, and less emotional, processes than in non-psychopathic individuals.
These hypotheses receive some preliminary support from two recent studies that had
psychopaths report moral judgments while their brains were scanned using functional magnetic
resonance imaging (fMRI). The first study (Glenn et al., 2009) gave a subset of Greene's moral
scenarios to 17 participants, 4 of whom scored a 30 or above on the PCL-R. Psychopaths'
explicit moral judgments of these scenarios did not differ significantly from those provided by
non-psychopaths, but higher psychopathy scores did correlate with reduced activity in the left
amygdala and increased activity in the dorsolateral prefrontal cortex in response to personal
moral scenarios compared to impersonal moral scenarios. These results lend some support to the
hypothesis that psychopaths make moral judgments differently than non-psychopaths, even if the
verdicts of their judgments are rarely abnormal.
Another study in our lab (Harenski et al., 2010) showed pictures of moral violations,
emotional scenes without moral violations, and neutral scenes that were neither moral nor
emotional to 16 psychopaths (PCL-R: 30+) and 16 non-psychopaths (PCL-R: 7-18). The
psychopaths rated the depicted moral violations as just as severe as non-psychopaths did, but the
psychopaths had abnormal brain activity while rating the moral severity of pictures of moral
violations. In particular, compared to non-psychopaths, psychopaths had reduced activity in the
ventromedial prefrontal cortex and anterior temporal cortex. Moreover, amygdala activity was
parametrically related to moral severity ratings in non-psychopaths but not in psychopaths.
Perhaps most interestingly, activity in the right posterior temporal/parietal cortex correlated
negatively with moral severity ratings in psychopaths but had no such correlation in
non-psychopaths. This brain area has been associated with ascriptions of beliefs to other people,
so this difference in neural activity might be explained by psychopaths thinking about what other
people believe instead of forming or expressing their own moral beliefs. However,
this interpretation is complicated by the fact that the correlation is negative rather than positive.
Further research is under way in our lab to discover, map, and understand the neural differences
between psychopaths and non-psychopaths while they consider and express moral judgments.
Conclusions
The studies reviewed in this chapter support a tentative and qualified conclusion: If
psychopaths have any deficits or abnormalities in their moral judgments, their deficits seem
subtle: much more subtle than many observers might expect, given their blatantly abnormal
behavior. Indeed, thus far the literature suggests that psychopaths might not have any specific
deficits in moral cognition at all, despite their differences in moral action, emotion, and empathy.
That said, there are important reasons that this conclusion can be no more than tentative.
Too few studies on moral judgment in psychopaths are available, these studies include too few
clinical psychopaths, and the findings of various studies conflict too much to warrant confidence.
One particular concern is that very few individuals with PCL-R scores above 34 have been
studied, and anecdotal clinical evidence suggests that this group might be significantly different
from the participants in most studies on record. We also don't yet know whether psychopaths
believe the moral judgments they report. So there are many ways in which future studies could
uncover moral judgment deficits that have yet to be identified.
Filling in the gaps in our knowledge about moral judgment in psychopaths will be
important for both neuroscience and the law. For neuroscience, such knowledge will be
important for learning about the neural underpinnings of morality, as well as how to treat
psychopathy. For law, as mentioned at the beginning of the chapter, if psychopaths cannot know
or appreciate the moral wrongfulness of their acts, they should not be held morally or criminally
responsible according to some legal scholars and some versions of the insanity defense.14 Better
understanding of which psychopaths do not or cannot make normal moral judgments, and which
of their moral judgments are abnormal, might help authorities make better predictions of which
prisoners will commit more crimes if released and which treatment programs will help which
prisoners.15 For these practical and theoretical reasons, we need more thorough and creative
research on moral judgments by psychopaths. In the process, we will hopefully also learn more
about the perplexing fact that the ability to make moral judgments is so dissociable from both the
ability to have moral emotions and the ability to act morally.

14 See the chapters by Litton and Pillsbury in this volume.
15 See the chapters by Rice and Harris, by Edens, Magyar, and Cox, and by Caldwell in this volume.
REFERENCES
Aharoni, E., Sinnott-Armstrong, W., and Kiehl, K. Submitted. XXXX
Aniskiewicz, 1979
Batson, D. 1991. Book on empathy
Batson, D. 2010. New book on empathy
Blair et al., 1997
Blair, 2005, Consciousness and Cognition XXXX
Blair et al., 2006
Cleckley, H. 1976. The Mask of Sanity.
Cushman et al. 2006 on three principles
Cushman et al 200X on belief vs. intention in wrong vs blame
Glenn et al 2009
Glenn et al. b
Graham et al. under review
Graham et al. 2009
Greene et al. 2001 Science
Greene et al. 2004 Neuron
Greene et al. 2008
Greene et al. 2009 Pushing Moral Buttons
Hare. R. XXXX. On high psychopaths XXXX
Harenski, C., et al. 2010 ??? PUBLISHED ??? XXX
House & Milligan, 1976
Howard-Snyder, F. 2002. Doing and Allowing. Stanford Encyclopedia of Philosophy.
Ishida, 2006
Johns, J. H., and Quay, H. C. 1962. The effect of social reward on verbal conditioning in psychopathic and neurotic military offenders. Journal of Consulting and Clinical Psychology 26: 217–220.
Joyce, R. 2008. On emotivism without emotion
Kant XXXX
Kiehl et al., 2006
Koenigs, Young, et al. 200XX Nature
Kohlberg, L. 1958. On MJI
Kohlberg, L. 1973. On stages
Link et al. 1977
Lind, 1978
Lind, 2000a
MacIntyre, A. 2004. Double Effect. Stanford Encyclopedia of Philosophy.
Mencl and May, 2009
OKane et al. 1996.
Parkinson, C., et al. submitted
Petrinovich and ONeill (1996)
Rest et al. 1974
Rest 1979
Rest 1986b
Robinson, P., and Kurzban, R.
Schaich Borg et al. 2006 Journal of Cognitive Neuroscience on CAI
Schaich Borg, Lieberman, and Kiehl 2008. Journal of Cognitive Neuroscience.
Sinnott-Armstrong, W. 2003. Consequentialism. Stanford Encyclopedia of Philosophy.


Sinnott-Armstrong, Mallon, McCoy, Hull 2008
Sinnott-Armstrong, W., & Levy, K. 2011. Insanity defenses. In Oxford Handbook of Philosophy and Criminal Law, ed. J. Deigh and D. Dolinko. New York: Oxford University Press.
Sutker, 1970
Tooley XXXX
Turiel, E. 1983. The Development of Social Knowledge: Morality and Convention
Turiel, Killen, & Helwig, 1987.
Warnock, G. XXXX. The Object of Morality.