
Date: Tue, 29 Sep 1998 22:08:01 -0300 (EST)

From: "416720" <mcg-talk@mcwg>


To: MCG <mcg-talk@mcwg>
Subject: [MCG] Trust: Defeating Descartes' demon
List:
Why should one be able to *increase* security by allowing
subjectivity to be dynamically represented in trust relationships?
The solution to this apparently paradoxical question can benefit from
the "method of systematic doubt", devised some 400 years ago by
Descartes, to defeat what has been called "Descartes' demon" -- who,
today, would be called "hackers", "viruses", "fraudsters", etc.
It is important to understand the question also in the terms of
recent postings at the mcg-talk, "Re: Subjectivity and its
representation".
After all, is it not subjectivity itself that is subverting my
system's security? Would it not be better to advocate "objective
security" -- that would be equal to all and at all times?
Further, the question links to a timely evaluation of the "objective
security" view point -- which seems also to be the backbone of the
report issued today by the National Academy of Sciences in the US,
called "Trust in Cyberspace" -- the final report of the Committee on
Information Systems Trustworthiness, Computer Science and
Telecommunications Board. A free series of images of all report pages
can be seen at
http://www.nap.edu/readingroom/enter2.cgi?0309065585.html
The central point of the NAS report is where it defines the concepts
it uses, and we will check two of them. At the Glossary and in the
text, the report defines:
"Trust is the concept that a system or other trusted entity will
provide its intended functionality with a stated level of confidence
(or assurance), which may be measured but sometimes is inferred on
the basis of testing or other information."
where I first note that the "definition" is circular on two counts:
1. "trust is the concept that a .. trusted entity will provide its
intended functionality ...", where the report tries to define trust
in terms of a trusted entity's trusted functionality, and
2. "trust is the concept ... with a stated level of confidence (or
assurance)...", where confidence is just a synonym for trust and
assurance is a related word, both according to Webster's Thesaurus,
so the report tries to define that trust is trust since trust is
defined in terms of confidence (or assurance), ie, trust.
The given "definition" is thus of little use, since it defines trust
in terms of the very notion it is supposed to define. It also leads to
a picture of trust as something uncertain, vague and imprecise, as the
last phrase "which may be measured but sometimes is inferred on the
basis of testing or other information" denotes.

Further, the "definition" stresses an objective view of trust because
nowhere is the observer's role made clear, in any of the various
possible functions the "definition" allows for. "Trusted entity" by
whom? "Intended" by whom? "Measured" by whom? etc.
However, "trust is in the eye of the beholder" as studied in
http://mcwg.org/mcg-mirror/trustdef.htm and the concept of trust cannot be
harmonized with an objective definition. Of course, if we want to
deal with real-world security then we must use real-world concepts of
trust and not define "de novo" virtual, theoretical and objective
models which do not represent what we see ... and cannot deal with
real threats. Virtual security is perhaps not very useful, even
in virtual reality.
To recall, in trustdef.htm, trust is formally defined as "that which
is essential to a communication channel but which cannot be
transferred from a source to a destination using that channel", which
is abstract enough to accommodate widely different applications of the
concept of trust, including the linguistic cases. Thus, trust is
defined by the equivalence class of all possible expressions that can
be formally derived from the abstract definition of trust, when
different instances and observers are taken into account.
To exemplify, in Internet Information Systems, the following
instances can be directly derived from the given abstract definition
of trust, maintaining consistency and without circularity or
ambiguity:
trust: "trust is that which an observer has estimated with quasi-zero
variance at time T, about an entity's behavior on matters of x",
trust: "trust is that which an observer has estimated with
high-reliance at time T, about an entity's behavior on matters
of x",
trust: "trust is an open-loop control process of an entity's
response on matters of x".
As the last instance of an equivalent trust definition shows, trust
must be viewed as an open-loop control process. "Trust but verify" is
the principle one must use -- not "postulate an intended
functionality and trust" as implied by the NAS report. Trust is not
surveillance either -- exactly because one must trust ... since one
cannot measure.
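As a toy sketch (my own illustration, not part of the trustdef.htm formalism; all names are hypothetical), the observer-relative instances above can be made concrete in code: a trust value is always indexed by observer, entity, matter x and time T, so two observers of the same entity may legitimately hold different estimates -- there is no "objective" global value to appeal to.

```python
from dataclasses import dataclass
import statistics

@dataclass(frozen=True)
class TrustEstimate:
    """An observer-relative trust estimate: who estimates, about whom,
    on what matter, at what time -- never a global 'objective' value."""
    observer: str
    entity: str
    matter: str    # "matters of x"
    time: float    # time T of the estimate
    value: float   # estimated behavior, e.g. fraction of honored promises
    variance: float

def estimate(observer, entity, matter, time, observations):
    """Derive a trust estimate from past observations (hypothetical
    format: a list of 0/1 outcomes on matters of x). 'Trust but verify'
    then means acting on the estimate while continuing to observe."""
    return TrustEstimate(
        observer, entity, matter, time,
        value=statistics.mean(observations),
        variance=statistics.pvariance(observations),
    )

# Two observers of the same entity may legitimately diverge:
alice = estimate("Alice", "ServerB", "uptime", 1.0, [1, 1, 1, 1])
bob = estimate("Bob", "ServerB", "uptime", 1.0, [1, 0, 1, 1])
assert alice.value != bob.value  # trust is in the eye of the beholder
```

Note that the first instance above ("quasi-zero variance") is just the special case where an observer's own observations have converged.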
So, going further into the "objective trust" concept used by the NAS
paper, the pièce de résistance of the report is presented as the
redefinition of an archaic form once used to designate trust (cf.
Webster) -- "trustworthiness", which since 1829 has meant "worthy of
confidence". The NAS report defines:
"Trustworthiness is assurance that a system deserves to be trusted
-- that it will perform as expected without disruptions, human and
operational error, and hostile attacks."
This second "redefinition" of trust is thus not a new definition of
the word "trustworthiness" (contrary to what the NAS report says).
Also, the redefinition is circular (it "defines" that for a system to
be worthy of trust the system must deserve to be trusted) and also
forces an objective view of trust, as given by "will perform as
expected". Expected by whom? Equally to whom? Here, "expected" cannot
come from an objective fiat but should have been relative to an
observer, also taking into account that different observers will
necessarily diverge in their assessments.
Further, the "redefinition" assumes a level of ideal performance
(without disruptions, error or attack effects) which is clearly
neither measurable (in reference to what?) nor useful to gauge a
system's relative performance in regard to what it is intrinsically
capable of presenting at maximum level, or even at operationally
expected levels -- taking all modelled operational factors into
account, including attacker's capabilities.
To put the discussion of "trustworthiness" in a broader context, a
discussion thread at the mcg-talk list (called "relative strength of
trust statements" and initiated by Alfarez) has recently discussed
the meanings that could be assigned to two different forms of trust
expressions:
A) A trusts B (on matters of X)
B) A believes B is trustworthy (on matters of X)
The conclusion was led by Tony Bartoletti with the following
arguments:
>If we take "trustworthy" to mean exactly "is worthy of trust", then the
>question comes down to this: Is "trust" being used as a verb in the
>active (present) sense, or the hypothetical sense.
>
>That is, suppose you replaced the word "trust" with "fly" (like with
>an airline.) It is potentially two different things to say
>
>A. I fly (in) 747s.
>
>B. I believe 747's are fly-worthy. (but my fear of heights precludes me
>from stepping foot in one.)
>
>Taking a step in the opposite direction, if we took "trusts" as a
>hypothetical, as in "A trusts B (at the moment) in matters of X, and
>might act in the future to rely upon B accordingly if some situation
>arose where this would matter", then there is almost no difference
>between your statements A and B.
>
>Put another way, a 10mm bundle of nylon rope is more than strong enough
>to support me, should I need to be hanging from the underside of a tall
>bridge. So I might say "I trust the rope to safely hold my weight".
>This is the hypothetical usage (B) as in "I believe the rope is worthy
>of my trust." On the other hand, if I were to be actually hanging from
>this bridge at the moment, I would be in the very act of trusting the
>rope to hold my weight. That is, I am actively relying upon this
>"belief".
>
>I believe it is this suggestion of hypothetical (passive?) vs active
>that would lead most people to feel there is a difference in strength
>between A and B.
>
and, after some discussion, was summarized by Alfarez as:

>
>Intuitively, I think one would consider statements of the form a) "Alice
>trusts Bob" to be stronger than b) "Alice believes Bob is trustworthy" for I
>think a) is inherently 'active' and b) is inherently 'hypothetical', in
>Tony's sense. Thus in making trust statements, this subtle difference may
>affect Joe Public's level of confidence in Bob.
>
This conclusion further indicates the passive attitude taken when
"trustworthiness" is used as the NAS' report backbone, as if trust
could be passive and objective.
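The active/hypothetical distinction from the thread can be sketched as a toy data model (my own illustration; the names and the numeric "strength" ordering are hypothetical, chosen only to encode Tony's and Alfarez's conclusion):

```python
from dataclasses import dataclass

@dataclass
class TrustStatement:
    truster: str
    trustee: str
    matter: str
    # True:  "A trusts B (on matters of X)" -- actively relying now
    # False: "A believes B is trustworthy"  -- hypothetical reliance
    relying_now: bool

def strength(s: TrustStatement) -> int:
    """Toy ordering per the thread's conclusion: the active form,
    where the truster is currently relying, reads as stronger."""
    return 2 if s.relying_now else 1

active = TrustStatement("Alice", "Bob", "X", relying_now=True)
hypothetical = TrustStatement("Alice", "Bob", "X", relying_now=False)
assert strength(active) > strength(hypothetical)
```

In Tony's rope example, `relying_now=True` corresponds to actually hanging from the bridge, `relying_now=False` to merely believing the rope would hold.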
Regarding this attitude, and answering the opening question of this
message, it is interesting to go back in time some 400 years. The
citation is from Bertrand Russell's work, available in the Net at
http://csmaclab-www.uchicago.edu:80/philosophyProject/sellars/russell/rus2.html
and the comments within [] are mine. By using this example, I am
following what has been called the AM (Anthropomorphic Metaphor) case
in the mcg-talk list and which postulates that since software agents
should be at least as capable as "perfect clerk" human agents then it
is useful to consider human-like actions as models to be copied,
rather shamelessly (this is not new but follows the example of Turing
when devising the Turing-Machine model).
Descartes (1596-1650), the founder of modern philosophy, invented a
method which may still be used with profit -- the method of
systematic doubt [Security work can profit well from such method, as
we will see]. He determined that he would believe nothing which he
did not see quite clearly and distinctly to be true [What a nice
security maxim to be used in the Internet!]. Whatever he could bring
himself to doubt, he would doubt, until he saw reason for not
doubting it [ditto, maxim #2]. By applying this method he gradually
became convinced that the only existence of which he could be quite
certain was his own [This means that one can only be certain of one's
own server or client but anything else can be an illusion in
the Internet -- maxim #3]. He imagined a deceitful demon, who
presented unreal things to his senses in a perpetual phantasmagoria;
it might be very improbable that such a demon existed [this demon
exists! It is called today: hackers, virus, fraudsters, etc.], but
still it was possible, and therefore doubt concerning things
perceived by the senses [ie, by the software/hardware and by the
user] was possible.
But doubt concerning his own existence was not possible [your client
or server exists], for if he did not exist, no demon could deceive
him. If he doubted, he must exist; if he had any experiences
whatever, he must exist. Thus his own existence was an absolute
certainty to him [this means that trust MUST begin as self-trust]. 'I
think, therefore I am,' he said (Cogito, ergo sum); and on the
basis of this certainty he set to work to build up again the world
of knowledge which his doubt had laid in ruins [ie, which the mere
existence of real-world hackers, viruses, fraudsters, etc are
conspiring to set in ruins]. By inventing the method of doubt, and
by showing that subjective things are the most certain, Descartes
performed a great service to philosophy [and to Internet security!],
and one which makes him still useful to all students of the subject
[and us, alike].
Thus, by following the Cartesian method of doubt, we concluded that
subjective things are the most certain! This provides the following
direct conclusions:
1. it answers the NAS report in the negative regarding its
usefulness to model trust relationships and their application in
cyberspace and 3D-world situations. In fact, following the circular
definitions given in the NAS report (ie, even if such were possible),
security would decrease and not increase, since trust is treated as
passive and objective in the NAS report.
2. it shows that we must continue to invest time and effort into a
real-world understanding, treatment and representation of trust, with
all its needed subjectivity and interactive behavior. In our dynamic
subjectivity (and in the "subjectivity", or locality, of our servers
and clients) lie the roots of any certainty we may aspire to have at
any moment.
On the contrary, if one neglects the Cartesian method, Descartes'
demon will win rather easily as he breaks into a stronghold that
someone else decided was safe for you, "objectively" and in some
past.
Cheers,
Ed Gerck
______________________________________________________________________
Dr.rer.nat. E. Gerck
egerck@mcwg
http://mcwg
