Human-Level AI Requires
Compassionate Intelligence
Cindy L. Mason
¬ℑφ: ¬(ƒ(φ) ∨ ¬ƒ(φ)), that is, the set of all propositions φ for which the system "shrugs its shoulders": it answers negatively to both ƒ(φ) and ¬ƒ(φ). (Alternatively, you may choose to implement the concept of "unknown" feelings by answering both positively, by creating a "don't know" state, etc.) The idea of the unknown feelings operator is to describe an agent that has feelings but is neither positive nor negative toward φ; humans might refer to this as indifference, being neutral, or undecided. It is important to distinguish between an agent that has feelings but simply does not know what they are and an agent with no feelings. In the latter case, the operator ℑ is undefined.
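The three-way distinction above, positive, negative, and unknown feelings, together with agents for which the feelings operator is undefined, can be sketched as a small data structure. This is a minimal illustration under assumed names (Feeling, Agent, feels are invented for the example), not an implementation from the paper.

```python
from enum import Enum

class Feeling(Enum):
    POSITIVE = "positive"   # f(phi) holds
    NEGATIVE = "negative"   # not-f(phi) holds
    UNKNOWN = "unknown"     # neither: the agent "shrugs its shoulders"

class Agent:
    """Toy agent with a feeling store. An agent constructed with
    has_feelings=False models the case where the operator is undefined,
    as opposed to an agent that has feelings it does not know."""
    def __init__(self, has_feelings=True):
        self.feelings = {} if has_feelings else None

    def feels(self, phi):
        # Unlisted propositions come back UNKNOWN: the agent answers
        # negatively to both f(phi) and not-f(phi).
        if self.feelings is None:
            raise TypeError("feelings operator undefined for this agent")
        return self.feelings.get(phi, Feeling.UNKNOWN)

a = Agent()
a.feelings["sunsets"] = Feeling.POSITIVE
print(a.feels("sunsets").value)   # positive
print(a.feels("taxes").value)     # unknown: indifferent, neutral, or undecided
```

Querying an agent built with `has_feelings=False` raises an error rather than returning UNKNOWN, keeping the two cases distinct.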
The presence or absence of ℑ in an agent's mental state is a means by which agents may be divided into two camps: those that have the capacity to reason with feelings and those that do not. Presently, most agents fall into the latter category.

Communicated Feelings

In an agent with compassionate intelligence, propositions in the feeling system may occur not only as a result of affective inference and knowledge, as discussed in the previous section, but also as a result of communication. That is, A_I believes ƒφ_J as a result of a previous communication of ƒφ by A_J. The set of propositions an agent has feelings about includes not only locally generated propositions but also propositions shared by other agents. It follows that agents may reason about another agent's feelings about a proposition as well as its beliefs. This is the heart of compassion and of indifference: to consider and take into account, or not, the feelings of another agent when engaged in reasoning, planning and scheduling, decision making, and so on.
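A communicated feeling, ƒφ arriving from agent A_J and becoming the belief ƒφ_J in agent A_I, can be sketched as a simple message exchange. The class and method names here are hypothetical, invented only to illustrate the bookkeeping.

```python
class CompassionateAgent:
    """Toy agent that keeps its own feelings separate from its beliefs
    about other agents' feelings (illustrative names, not the paper's)."""
    def __init__(self, name):
        self.name = name
        self.feelings = set()   # propositions this agent feels: f(phi)
        self.beliefs = set()    # includes beliefs about others' feelings

    def tell_feeling(self, other, phi):
        """Communicate f(phi) to another agent."""
        other.receive_feeling(self.name, phi)

    def receive_feeling(self, sender, phi):
        # The receiver now believes f(phi)_J: agent `sender` feels phi.
        self.beliefs.add(("feels", sender, phi))

    def believes_feels(self, other_name, phi):
        return ("feels", other_name, phi) in self.beliefs

a_i, a_j = CompassionateAgent("I"), CompassionateAgent("J")
a_j.feelings.add("rainy_days")
a_j.tell_feeling(a_i, "rainy_days")
print(a_i.believes_feels("J", "rainy_days"))   # True
```

Note that `a_i` holds a belief about `a_j`'s feeling without necessarily feeling anything about "rainy_days" itself; the two stores stay independent.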
If agent A_I believes ƒφ_J where J ≠ I (it believes something about the feelings of another agent), then agent A_I believes that agent A_J feels φ. (It is possible that A_I also feels φ, that is, ƒφ_I; in this case, the agents feel the same.) It is also possible that A_I feels ƒφ_J, that is, ƒƒφ_J; in this case, the agent might be called empathetic.

When agents reason with feelings, as when reasoning with defaults or assumptions, the inferences an agent holds that depend on those feelings may be retracted when the feelings change. A special problem arises when A_I believes that A_J feels φ, but A_J does not feel φ. The situation is remedied once A_I receives notification of the change by A_J, but not without some cost.

Agents reasoning with affective inference may require increased computation and communication, depending on the degree of persistence of agent state (agent personality) or the dynamics of the domain (e.g., frequent sampling of hardware devices measuring the affective state of drivers or pilots). In general, TMSes can be computationally intensive, and, as with traditional non-affective inferencing, alternative solutions and improvements will be needed. Due to space constraints we do not address many issues, nor can we discuss topics completely or in depth.
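The feeling-dependent retraction just described is essentially truth maintenance with affective premises. Below is a minimal, hypothetical sketch of a justification store in which a conclusion supported by a communicated feeling is retracted when that feeling changes; it is not the paper's Irrational-TMS, only an illustration of the bookkeeping involved, and all names are invented.

```python
class IrrationalTMS:
    """Toy justification store: conclusions may be supported by feeling
    premises as well as ordinary beliefs; retracting a premise retracts
    every conclusion that depends on it."""
    def __init__(self):
        self.premises = set()
        self.justifications = {}   # conclusion -> set of supporting propositions

    def assert_premise(self, p):
        self.premises.add(p)

    def justify(self, conclusion, supports):
        self.justifications[conclusion] = set(supports)

    def holds(self, conclusion):
        # A proposition holds if it is a premise, or if all of its
        # supports hold (assumes an acyclic justification graph).
        if conclusion in self.premises:
            return True
        supports = self.justifications.get(conclusion)
        return supports is not None and all(self.holds(s) for s in supports)

    def retract(self, p):
        self.premises.discard(p)

tms = IrrationalTMS()
tms.assert_premise("feels(J, likes_hiking)")          # communicated by A_J
tms.justify("plan(hiking_trip)", {"feels(J, likes_hiking)"})
print(tms.holds("plan(hiking_trip)"))                  # True
tms.retract("feels(J, likes_hiking)")                  # A_J's feelings change
print(tms.holds("plan(hiking_trip)"))                  # False: plan retracted
```

The cost mentioned above shows up here as the recomputation of `holds` over the dependency graph after each change notification, which is why persistent agent state or rapidly sampled affective inputs make the maintenance expensive.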
Summary

We have introduced several new concepts regarding the challenges AI researchers face in reaching human-level AI, namely, computing with compassionate intelligence and using affective inference as a means of common sense default reasoning. To accomplish this we introduced the idea of an Irrational-TMS, in which psychologically based justifications, as well as traditional logic, may be used to support beliefs or feelings about a proposition. Meta-cognition is central to each of these. We believe that compassionate intelligence is a necessary part of future applications involving social interaction or healthcare.

Thanks

Thanks to Aaron Sloman for early support when I first began working on EOP and for his paper with Monica Croucher entitled "Why Robots Will Have Emotion" [Sloman and Croucher 1981]. Thanks to John McCarthy for years of arguing that only made me more convinced of the need for affective inference, and for access to his great library, where I found many books by common sense philosophers. Thanks also to Henry Lieberman, to Waldinger for introducing me to Vipassana 9 years ago, to Michael Cox for his insight, and to the many kind people who helped over the years, as well as the staff at AAAI, without whom none of this would come together.

References

Ariel, E., and Menahemi, A. 1997. Doing Time, Doing Vipassana. Karuna Films.

Begley, S. 2007. Train Your Mind, Change Your Brain: How a New Science Reveals Our Ability to Transform Ourselves. New York: Ballantine.

Mason, C. 1995. Introspection As Control in Result-Sharing Assumption-Based Reasoning Agents. In Proceedings of the First International Workshop on Distributed Artificial Intelligence, Lake Quinalt, Wash.

Sloman, A., and Croucher, M. 1981. Why Robots Will Have Emotions. In Proceedings of the 7th International Joint Conference on Artificial Intelligence, Vancouver, Canada.