
Byzantine Fault Tolerant Public Key Authentication in Peer-to-Peer

Systems

Vivek Pathak∗ and Liviu Iftode

Department of Computer Science,
Rutgers, the State University of New Jersey,
110 Frelinghuysen Road,
Piscataway, NJ 08854-8019 USA

∗ Corresponding author. Email Address: vpathak@cs.rutgers.edu

We describe Byzantine Fault Tolerant Authentication, a mechanism for public key authentication in peer-to-peer
systems. Authentication is done without trusted third parties, tolerates Byzantine faults and is eventually correct
if more than a threshold of the peers are honest. This paper addresses the design, correctness, and fault tolerance
of authentication over insecure asynchronous networks. An anti-entropy version of the protocol is developed
to provide lazy authentication with logarithmic messaging cost. The cost implications of the authentication
mechanism are studied by simulation.
Key words: Public Key Authentication, Peer-to-peer Systems, Byzantine fault tolerance.

1. Introduction

Public key authentication is a fundamental problem of digital security. Authenticated public keys are required for boot-strapping shared secret encryption methods and for verifying the integrity of digital signatures [1,2]. Trusted third parties and webs of trust are the two established methods of public key authentication.

The trusted third party model uses a centralized offline public key certifying authority that is trusted by all the participants. This authentication architecture is hierarchically extended to create Public key infrastructure [3]. While trusted third parties are acceptable in a client-server computing model, they are not suitable to the users of peer-to-peer systems for a number of reasons. It may be impossible to find sufficiently trusted parties in heterogeneous systems. Trusted third parties must support certificate revocation to prevent misuse of compromised private keys [4–6]. The off-line advantage of certificate authorities is reduced by the overhead of maintaining fresh revocation information. Considering the consistency and timely propagation issues imposed by the mechanism, and the administrative burden placed by the security policy, it may not be possible to scale up the centralized authentication mechanism for securing large systems like the Internet.

PGP [7,8] creates a web of trust that allows peers to authenticate public keys by making manual decisions on key ownership and trustworthiness. Although PGP supports peer-to-peer key authentication, its dependence on human evaluation of key ownership and peer trustworthiness prevents its large scale usage in autonomous peer-to-peer systems. It also appears that its manual usage is limited to sophisticated users [9].

The increasing usage of large peer-to-peer systems motivates the creation of a Byzantine fault tolerant public key authentication mechanism. The proposed mechanism involves a distributed system of mutually authenticating semi-trusted parties and tolerates Byzantine faults. Authentication is eventually correct if no more than ⌊(n − 1)/3⌋ of the n parties are malicious or faulty. Authentication does not require predefined trusted third parties and enables secure communication in heterogeneous groups. Since it allows the autonomous light-weight mutual authentication of strangers, Byzantine fault tolerant authentication is well suited for peer-to-peer systems.


The remainder of this paper is organized as follows: Section 2 presents the system model including the trust model and its impact on system security. It outlines authentication and trusted group management. Section 3 specifies the protocols implementing distributed authentication and managing group membership. Proof of correctness and protocol analysis are done in Section 4. Section 5 describes an anti-entropy algorithm for efficient authentication. Section 6 describes an application, Section 7 describes and discusses simulation results. Section 8 discusses this authentication approach and Section 9 compares it with traditional methods. Section 10 concludes the paper.

2. Model

Public key authentication by trusted third parties is traditionally done through certificate authorities that digitally sign a public key certificate. Verification of the certificate makes a statement of the following form: If the certificate authority is honest and capable, then the private key corresponding to the certified public key belongs to the given identity. Here the concept of identity is very general. It may span applications, individuals, and organizations. In contrast, our mechanism authenticates network end-points or peers. Authentication is a proof of possession of the private key under honest majority and other assumptions as described below.

2.1. Network
Consider a distributed system of mutually semi-trusting peers. They are interconnected by an asynchronous network and are identified by their network identifiers. The network does not guarantee message ordering or delivery. However, no part of the network becomes permanently disconnected. The network is assumed to return delivery failure notifications. In particular, a notification is expected if a message is sent to a non-existent end point. Similarly, if an end-point does not implement the authentication mechanism, this fact can be detected by receiving a connection failure message.

Figure 1. Adversaries and assumptions. (Peers connected over the Internet; message delivery is guaranteed under retransmission; disjoint transmission paths exist; an active adversary and a faulty end-point are shown.)

We assume that disjoint message transmission paths exist to each peer from some of its peers. Further, if the man in the middle attack can be mounted for more than a fraction φ of peers, then the peer is considered to be faulty and is not authenticated by this method. Faulty end points, as shown in Figure 1, are not authenticated by our protocol. The authentication mechanism is designed to detect and ignore faulty peers.

2.2. Honest majority
Correctness of authentication depends on the existence of honest peers that faithfully execute the protocol. Informally, they tell the truth about their network identity and public key. While none of the peers inherently trusts any other peer, each believes that the honest peers are in a majority. We define honest peer and honest majority as follows:

DEFINITION 1 An honest peer protects the privacy of its private key and executes the authentication protocol correctly. A set of n peers has honest majority if the number of malicious or faulty peers t < (1 − 6φ)/3 · n.
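As a small illustration of this definition, the following sketch (an illustrative helper, not part of the protocol) computes the largest tolerable number of malicious or faulty peers for given n and φ, and the smallest group size that tolerates a given t; the latter corresponds to the 3t/(1 − 6φ) + 1 group size used later in Section 4.4.

```python
import math

def max_faulty(n: int, phi: float) -> int:
    """Largest t satisfying t < (1 - 6*phi)/3 * n (Definition 1); requires phi < 1/6."""
    bound = (1 - 6 * phi) / 3 * n
    return max(math.ceil(bound) - 1, 0)   # strict inequality

def min_group_size(t: int, phi: float) -> int:
    """Smallest n that keeps honest majority with t malicious or faulty peers."""
    return math.floor(3 * t / (1 - 6 * phi)) + 1

if __name__ == "__main__":
    print(max_faulty(31, 0.02))      # 9 faulty peers are tolerable in a 31-peer group
    print(min_group_size(9, 0.02))   # 31 peers are needed to tolerate 9 faulty peers
```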
Dishonest peers may behave in an arbitrary fashion, either because of being faulty or because of being malicious adversaries. It is assumed that the system of mutually authenticating peers has honest majority.

2.3. Adversaries
The computational power of adversaries is polynomially bounded. Hence with very high probability, the adversary cannot forge digital signatures or invert the encryption transformations. We consider active and passive adversaries. The passive adversary has unlimited power to eavesdrop on any message. While the active adversaries have unlimited power to inject arbitrary messages into the network (since we do not address denial of service type of attacks, the spoofing power is not large enough to break the network or the parties processing the forged messages), they cannot prevent message delivery for more than a small fraction φ of the honest parties. Clearly this is weaker than the classical network is the adversary approach.
The weakened active adversary is appropriate in wireless networks because of physical difficulties in silencing radio transmissions. Its use in Internet applications is justified by considering the difficulty of preventing message delivery to a large number of end-points. Practical experience with Internet based systems also suggests that message injection or spoofing is the preferred form of attack.

2.4. Authentication
Challenge response protocols can authenticate public keys in the absence of man in the middle attack. Since we allow for a limited number of such attacks, a public key can be authenticated by multiple challenge response exchanges originating from different end-points.
The authentication protocol (Figure 2) consists of three phases: Challenge response, Distributed authentication and Byzantine agreement. During challenge response, the peer to be authenticated is challenged with encrypted nonces by a set of peers. Since the nonce can be recovered only by the possessor of the private key, a correct response is a proof of possession.
In the distributed authentication phase, peers forward their proofs to other peers. A peer B can authenticate a peer A after it receives a number of valid proofs from different peers. If all the participants are honest, there will be consensus on validity. In this common operating case, the protocol terminates with B becoming convinced that the public key is authentic.
If there are conflicting claims on authenticity, B can deduce that either A or some of the peers are malicious or faulty. The protocol proceeds to Byzantine agreement where the sent and received messages of different parties are validated. As all the messages are digitally signed, malicious behavior can be discovered by this procedure. The messaging cost of authentication motivates optimization of the common case when all trusted parties are indeed honest. The public key infection protocol implements optimistic authentication that hides latency by proceeding before a public key is authenticated. Public keys and their authentication proofs are propagated efficiently by an anti-entropy public key infection algorithm.

2.5. Trusted Groups
Each peer has a probationary group, trusted group, and untrusted group of peers as shown in Figure 3. Peers gain knowledge of each others' public keys depending on their communication patterns. Newly discovered peers are added to the probationary group. Successful authentication moves a peer in the probationary group to the trusted group. Malicious peers are moved from the trusted group to the untrusted group. (Continuous addition of malicious peers can cause the untrusted group to grow without limit. Therefore, peers may forget malicious behavior of the very distant past.) Peers are also deleted from trusted groups for lack of liveness and for periodic pruning of the trusted group. This is done to improve authentication performance.

3. Architecture

Byzantine fault tolerant authentication is implemented by executing the Authentication protocol and the Membership control protocol at each peer.
Figure 2. Authentication protocol example: A peer A is authenticated by B using its trusted peers. D
is a malicious peer that tries to prevent authentication of A.

The protocols are described as per the notation given in Table 1. A bootstrapping procedure is also provided for system initialization. All protocol messages have timestamps, source and destination identifiers, and digital signatures. Peers ignore messages with invalid signatures and maintain a most recent received time-stamp vector to guard against replay.

Table 1
Notation
Ki        Public key of the principal i.
Ki^−1     Private key of the principal i.
Ki(x)     A string x encrypted with the public key of i.
ri        A pseudo random number generated by peer i.
{x, y, z} A message containing three strings x, y and z.
{x}i      A message signed by i.
T(i)      Trusted group of peer i.

3.1. Authentication Protocol
The authentication protocol (Figure 4) consists of the following steps:

• Admission request
The protocol begins when B encounters an unauthenticated public key KA. It announces the key to its trusted group and asks them to verify its authenticity.

• Challenge response
Each peer Pi challenges A by sending a random nonce encrypted with A's supposed public key in the signed challenge message.

1. Admission request
A peer A makes a key possession claim by notifying the peer B. If A has an expired authen-

ticated public key KA , it includes the proof of its possession P = {A, KA }A⋆ . B announces
the claim to the group.
A→B : {A, B, admission request, {A, KA [, P]}A }A
For each trusted peer Pi of B
B → Pi : B[i] = {B, Pi , authentication request, {A, KA [, P]}A }B

2. Challenge response
Each peer challenges A with an encrypted nonce, and A responds with the signed response.
A also stores the challenge response pair {CiA , RiA } from its interaction with peer Pi as VA [i]
for use in Byzantine agreement.
At each trusted peer Pi of B
Pi → A : CiA = {Pi , A, challenge, KA (ri )}Pi
A → Pi : RiA = {A, Pi , response, ri }A
3. Distributed authentication
Each peer returns the proof-of-possession {CiA , RiA } to B. B saves the pair in a local variable
VB [i] and determines the public key to be authentic (or inauthentic) if there is consensus on
validity (or invalidity) in the proofs received. If there is no consensus, B calls for Byzantine
agreement.
At each trusted peer Pi of B
Pi → B : {CiA , RiA }Pi

4. Byzantine agreement
B asks the peer A for the challenges it received, and its responses to them. It then compares
the proofs received from the peers and those received from A. It also notifies the peers of the
received proofs so that malicious parties are eliminated from the trusted group.
B→A : {B, A, proof request}B
A→B : {A, B, proof, VA }A
If A is not proved malicious
For each trusted peer Pi of B
B → Pi : {B, Pi , byzantine fault, B, VB }B
For each trusted peer Pj of Pi
Pi → Pj : {Pi , Pj , byzantine agreement, B, Vj }Pi

Figure 4. Authentication Protocol
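The core of steps 2 and 3 can be sketched as follows. The sketch uses the third-party Python cryptography package for RSA encryption and signatures; the function names and the omission of message framing (timestamps, identifiers, the signature on the challenge) are simplifications assumed here, not the paper's implementation.

```python
# Sketch of one challenge-response exchange (steps 2-3 of Figure 4).
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

def make_challenge(claimed_pub):
    """Pi draws a random nonce r_i and encrypts it under A's claimed key."""
    nonce = os.urandom(32)
    return nonce, claimed_pub.encrypt(nonce, OAEP)

def answer_challenge(own_priv, ciphertext):
    """A recovers the nonce with its private key and signs the response."""
    nonce = own_priv.decrypt(ciphertext, OAEP)
    return nonce, own_priv.sign(nonce, PSS, hashes.SHA256())

def check_proof(claimed_pub, sent_nonce, returned_nonce, signature):
    """Pi accepts the proof of possession iff the nonce matches and the
    response is signed by the claimed key; it is rejected otherwise."""
    if returned_nonce != sent_nonce:
        return False
    try:
        claimed_pub.verify(signature, returned_nonce, PSS, hashes.SHA256())
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    key_a = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    nonce, challenge = make_challenge(key_a.public_key())   # honest case
    r, sig = answer_challenge(key_a, challenge)
    print(check_proof(key_a.public_key(), nonce, r, sig))   # True
```

In the honest case the nonce round-trips unchanged and the proof is accepted; a spoofer that does not hold the corresponding private key cannot recover the nonce, so its response is rejected.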



A can recover the nonce only if it holds the private key KA^−1. It returns the nonce in a signed response message. The challenge response message pair is a proof of possession for the public key. At the end of the challenge response phase, each peer gets a proof of possession for KA. (Note that since KA is not yet authenticated, the digital signature is not verified on the response message.) Each challenger waits for an application specific time-out. It deletes the proof if duplicate responses are received.

• Distributed authentication
The peers respond to B's authentication request by sending their proofs of possession to B. If all peers are honest, then there will be consensus on the validity of proofs. In this case, B gets the authentication result and the protocol terminates.

• Byzantine agreement
If there are differing authentication votes, then either A or some of the peers are malicious or faulty. To detect if A is malicious, B sends the proof request message to A. The response consists of all challenge messages received, and the responses sent by A. If A is honest it can prove that it received the messages because they were signed by the sending peer. It can also show a correct response. If A is not provably malicious, then some of the peers must be malicious or faulty. This leads B to announce a byzantine fault to the group. Now each group member will send the byzantine agreement message to others. At the end of this phase, the honest peers will be able to recognize the malicious peers causing the split in authentication votes.

Figure 3. Group structure. (Peers move from probationary membership to trusted membership through admission requests and authentication; faulty or malicious peers are moved from trusted to untrusted membership, and peers may also be deleted from the trusted group.)

3.2. Bootstrapping
The bootstrapping procedure is provided to cold-start the system. This is in contrast with the situation when trusted groups already exist and a peer joins some of them. Bootstrapping initializes the authentication system by creating a trusted group consisting of the bootstrapped peers. The peers authenticate each other by requesting admission into this trusted group. It should have honest majority to function correctly.

3.3. Membership Control Protocol
Membership control (Figure 5) serves three purposes. It preserves honest majority of trusted groups, maintains consistency of the trusted group definition among sets of frequently communicating peers, and prevents excessive growth of the trusted group size to limit the cost of authentication. The group operations of the protocol are described below:

Addition to trusted groups
Each peer maintains a list of to-be-sent authentication proofs for each probationary peer. It lazily pushes these proofs to its trusted peers. Thus the probationary peer becomes trusted at each trusted peer. A peer may pull proofs because lazy push may delay a required authentication. Peers pull the proofs by sending authentication request messages.

1. Push proofs
A peer D periodically pushes the proof-of-possession {CDA , RDA } to peers that have not yet
received its proof.
For each trusted peer Pj that has not been sent the proof
D → Pj : {CDA , RDA }D

2. Pull proofs
A peer B has some, but not all proofs of authenticity. It can ask any peer Pj for the proof to
arrive at the authenticity, and hence trusted group membership decision for a probationary
peer A.
For each trusted peer Pj that has not sent a proof
B → Pj : {B, Pj , authentication request, (A, KA )}B
Pj → B : {CjA , RjA }Pj

Figure 5. Membership Control Protocol
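A compact sketch of the bookkeeping behind the push and pull operations of Figure 5; the data structures and method names are assumptions made for exposition, and transport, signatures and timeouts are omitted.

```python
from collections import defaultdict

class MembershipState:
    def __init__(self, my_id):
        self.my_id = my_id
        self.trusted = set()                  # T(i)
        self.probationary = set()
        self.untrusted = set()
        self.my_proofs = {}                   # probationary peer -> my {C, R} pair
        self.pending_push = defaultdict(set)  # probationary peer -> trusted peers
                                              # that still need my proof
        self.received = defaultdict(dict)     # probationary peer -> {sender: proof}

    def record_own_proof(self, peer, proof):
        """Store my challenge-response pair for a probationary peer and mark
        every trusted peer as not yet having received it (lazy push)."""
        self.my_proofs[peer] = proof
        self.pending_push[peer] = set(self.trusted)

    def push_round(self, send):
        """Periodically push outstanding proofs; 'send' is the messaging hook."""
        for peer, targets in self.pending_push.items():
            for target in list(targets):
                send(target, ("proof", peer, self.my_proofs[peer]))
                targets.discard(target)

    def pull(self, peer, send):
        """Ask trusted peers that have not yet supplied a proof for 'peer'."""
        for target in self.trusted - set(self.received[peer]):
            send(target, ("authentication request", peer))
```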

Deletion from trusted groups
Peers are deleted from trusted groups for malicious activity or lack of liveness. A malicious message causes execution of the Byzantine agreement phase that captures malicious behavior except for the lack of liveness. Deletion occurs as a result of byzantine agreement [10] on the maliciousness of the proof. If the group has honest majority, all the honest trusted peers delete the malicious peer from their trusted groups and add it to the untrusted group. (It is possible that honest peers may be deleted from dishonest groups. This causes them to join other groups, most of which have honest majority.)
A peer that fails to respond to messages in a timely manner is considered failed due to lack of liveness and is deleted from the trusted group. Performance of authentication is maintained by preserving a suitable group size. Thus honest peers voluntarily delete themselves from trusted groups by randomly selecting a trusted peer and ceasing to respond to its messages. The probability of deletion is chosen as a function of trusted group size in order to create suitably sized trusted groups.
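The deletion probability is left unspecified beyond being a function of trusted group size; the sketch below assumes a simple form that is zero at or below a target size and grows with the excess, which is an assumption of this sketch rather than the paper's choice.

```python
import random

def prune_probability(group_size: int, target_size: int = 32) -> float:
    """Assumed probability of voluntarily withdrawing from one trusted peer."""
    if group_size <= target_size:
        return 0.0
    return min(1.0, (group_size - target_size) / group_size)

def maybe_withdraw(trusted: set, ignored: set, target_size: int = 32, rng=random) -> None:
    """Pick one trusted peer at random and stop answering its messages, so that
    it eventually deletes us from its trusted group for lack of liveness."""
    if trusted and rng.random() < prune_probability(len(trusted), target_size):
        ignored.add(rng.choice(sorted(trusted)))
```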
Group migration
Authentication depends on the honest majority of the trusted group. However, trusted group membership is granted on key authentication and does not guarantee that the authenticated peer will not act maliciously in the future. In particular, it is possible that a number of covertly malicious peers join a trusted group so that they are in majority. Trusted groups are periodically flushed to provide proactive security against unobservable loss of honest majority. This process also guards against the Sybil attack [11], which relies on a malicious peer creating multiple fake identities. The progress made by multiple fake identities is lost by group migration.

4. Analysis

This section analyzes the correctness of authentication in honest majority groups. It is also shown that the group dynamics resulting from membership control creates honest majority groups with high probability. Previous studies [12] have focused on proving boolean correctness assertions on two and three party authentication protocols. The authentication protocol described here is open ended in the number of peers and

incremental in its approach. Therefore, a direct case by case analysis of the protocol is developed below.

4.1. Challenge Response
Consider a peer B with a trusted group of peers {P1, . . . , Pi, . . . , Pn}. Let A request admission into the trusted group of B. Each peer Pi sends a proof of possession {CiA, RiA}Pi to B, where

CiA = {Pi, A, challenge, ci}Pi
RiA = {A, Pi, response, ri}A

Let the proof of possession be valid if ci = KA(ri) and both CiA and RiA are properly signed.

CLAIM 1 If Pi and A are honest, the proof of possession is valid, and the communication path Pi–A does not lose messages, then KA is authentic with very high probability.

PROOF: By contradiction, let KA be inauthentic. Since Pi is honest, it transmits a correct challenge containing ci = KA(ri) to A, and does not disclose its nonce ri.
Since the network path does not lose messages, the challenge will be delivered to A and the response delivered to Pi. Thus, if a single response is received, A must be the responder. (If multiple responses are received, they are marked invalid by the protocol.) Since it computes ri = KA^−1(ci), it knows the private key, a contradiction. □

4.1.1. Attacks on Challenge Response
The challenge response protocol can be attacked in a number of ways. Messages may be spoofed and originate from sources other than their apparent origin X. Man in the middle attacks may cause a peer X′ to impersonate X, and protocol attacks could be launched by a peer X not following the prescribed protocol.
Let a proof of possession be P-invalid if the challenge is not properly signed, A-invalid if the response is not properly signed, K-invalid if ci ≠ KA(ri), and faulty if it is valid but KA is not owned by A. Messages exchanged between the trusted peers are safe from spoofing and man in the middle attacks since they are signed by authenticated public keys. Considering the various possibilities of attacks on the protocol, the effect on correctness of challenge response is analyzed below. A summary is provided in Table 2.
We consider Spoofing, Impersonation and Protocol attacks on the authentication architecture. Spoofing is defined as the attack where an adversary A′ assumes the identity of a peer A. This attack is detected by the challenge response mechanism. Impersonation is a man in the middle type of attack where an adversary M impersonates A while communicating with B, and B while communicating with A. In accordance with the mechanics of the attack, A and B cannot communicate directly without passing through M. We define protocol attacks as the set of attacks that are mounted by providing incorrect responses (or lack of responses) to various protocol messages. A number of other protocol attacks like replay, type flaws and encapsulation are rendered ineffective by the use of timestamps, message identifiers and digital signatures respectively. In general, source and destination identifiers are part of the message definition when the identity of the communicating parties matters.
The adversary mounts a successful attack if at least one of the following goals is satisfied:

G1 Violate authentication
The adversary convinces an honest peer that the public key of A is KA′ when it is not.

G2 Violate honest majority
The adversary creates an adverse selection of group members that lack honest majority.

Table 2
Effect of attacks during challenge response. A is authenticated by B and its peers Pi.

Sender under attack   A                                   B                    Pi
Spoofing              delay                               K-invalid            K-invalid
Man in the middle     faulty                              faulty               faulty
Incorrect response    P-invalid, A-invalid or K-invalid   delay or K-invalid   delay or A-invalid
No response           delay                               delay                delay
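The proof categories defined above can be checked mechanically. In the sketch below the signature and encryption checks are passed in as callables and are assumptions of the sketch, not the paper's implementation.

```python
def classify_proof(challenge, response, verify_sig_Pi, verify_sig_A, encrypts_to):
    """Return 'P-invalid', 'A-invalid', 'K-invalid' or 'valid'.

    challenge = {'nonce_ct': ..., 'sig': ...}   signed by Pi
    response  = {'nonce': ...,    'sig': ...}   signed by A
    encrypts_to(nonce, ct) should hold iff ct == K_A(nonce).
    """
    if not verify_sig_Pi(challenge):
        return "P-invalid"              # challenge not properly signed
    if not verify_sig_A(response):
        return "A-invalid"              # response not properly signed
    if not encrypts_to(response["nonce"], challenge["nonce_ct"]):
        return "K-invalid"              # c_i != K_A(r_i)
    return "valid"                      # may still be 'faulty' under a MITM
```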

Consider the case of malicious peers that are not trusted by honest parties. They can attack challenge response in one of the following ways:

• Spoofing
A malicious peer A′ may try to impersonate an honest peer A by sending the admission request message. If A is already part of the trusted group, then each trusted peer has its correct authenticated public key KA. Since A′ cannot produce the required proof P = {A′, KA′}A without the knowledge of KA^−1, each honest peer will ignore the invalid request.
Each peer will challenge A if it does not belong to the trusted group. The peer A responds to the challenges because it is honest. As it does not have KA′^−1, the response will be invalid. If the adversary also sends a spoofed response, then the honest peer H will receive two distinct responses, one valid and one invalid. H considers the incorrect message and A′ is not authenticated. Therefore, none of the goals are satisfied.

• Impersonation
By the limited power of the active adversary, some of the challenges must reach A. Thus some peers will see valid proofs for KA′ while the others will get invalid proofs. The authentication will fail because the majority of peers contact A. Thus G1 is not satisfied.
Because each peer can prove that it issued the proper challenge, and received the corresponding response, none of them is proved malicious. Thus G2 is not satisfied.

• Protocol attack
The malicious peer A′ can only choose the responses for the messages it sends. This limits it to the admission request message and the response message. Because a receiving peer B shall verify the validity of the digital signature, the message format and the correct recipient, the only choice for the peer is to create a message of the following form:

{X, B, admission request, {X, KX[, P]}X}X

This is equivalent to spoofing if X ≠ A′. If X = A′, then the malicious peer will receive challenge messages from the peers. If it responds with an incorrect source or destination identifier, incorrect signature or incorrect format, then the message will be discarded as ill-formed. Thus it can at best send a message of the following form to the peer Pi:

RiA = {A′, Pi, response, ri′}A′

Since ri ≠ ri′ causes invalidity, the peer Pi sends the received response to its peer B, which does not find a consensus. Thus G1 is not satisfied.
The protocol now goes into the Byzantine agreement phase. B asks A′ to send its copies of the challenge response proofs. A′ cannot produce a valid challenge since it does not know KPi^−1 and cannot create a new challenge. Its only option is either to send nothing or to send the available copy. Since it cannot prove that it received a correct challenge and recovered its response, the trusted peer is not proved to be malicious. Thus, G2 is not satisfied.

If the malicious peer already belongs to a trusted group, then the following attacks are possible:

• Spoofing
Spoofing by trusted peers is restricted to parties outside the trusted group. This is because trusted peers have an authenticated public key of the peer being spoofed, and can detect an incorrect signature on the spoofed message. A malicious trusted peer

may send spoofed challenge messages to a probationary peer. These messages will not count in distributed authentication because of the invalid signature on the challenge. Further, if the probationary peer sends this invalid proof to any trusted peer, the insider is provably malicious and will be deleted in the byzantine agreement phase. Neither G1 nor G2 is satisfied.

• Impersonation
Consider a trusted peer Pi being impersonated by Pi′ due to a man in the middle attack. Pi belongs to the trusted group because of successful authentication. The adversary Pi′ cannot convince the trusted group that it is Pi because it does not know KPi^−1. Since badly signed messages are ignored, Pi appears to be non-live to some of the peers and G1 fails. Also Pi is not proved to be malicious and G2 fails.

• Protocol attack
A malicious insider can delay the entry of a peer A into the trusted group by not sending a correct authentication request message. This does not satisfy either of the goals because A can find other honest peers.
The challenge response phase can be affected by the malicious insider in the following way: it can fail to send the challenge or send a number of challenges. However, A cannot be proved malicious by any such strategy because it can remember the challenges and produce them to other trusted peers. These challenges will be sent in the Byzantine fault phase when an honest peer observes lack of consensus on the authenticity of KA.
A malicious insider can delay sending the proof of possession. However, it cannot construct a bad response signed by A. As the malicious peer may only delay the authentication of A, neither G1 nor G2 is satisfied.

Therefore, the challenge response protocol provides correct authentication and preserves the honest majority of trusted groups.

4.2. Distributed Authentication
Distributed authentication ends with a consensus of valid if every participant is honest and there are no attacks. Given the presence of attacks and malicious peers, distributed authentication of A by B is correct as follows.

CLAIM 2 A peer A is not mis-authenticated if it is honest.

PROOF: Mis-authentication requires consensus on faulty proofs. Because A is not malicious or faulty, consider two possibilities: B is malicious or A′ spoofs messages.
If B is malicious it can either inform its peers of an incorrect KA′ or fail to send correct messages to its peers. If B sends incorrect key(s), the honest peers will receive authentication request messages and send challenges to A. Because A responds by decrypting according to its correct private key, the proofs will be K-invalid. If B does not send the correct messages, honest peers will send no challenges and some proofs will be missing.
If A′ spoofs responses for A, then the honest peers will delete their proofs because A will respond too. Since there must be missing or K-invalid proofs, there cannot be a consensus on faulty proofs. □

CLAIM 3 Honest majority is preserved at every honest peer.

PROOF: If B is honest, lack of consensus leads to byzantine agreement. It compares proofs sent by peers with the proofs sent by A. If A is provably malicious because of sending a K-invalid proof to some peer and a valid one to another, or because of sending P-invalid proofs on the request of B, the protocol ends with A being marked malicious. No honest peer is deleted.
If A is not provably malicious, the protocol moves into the second phase. This implies either some trusted peers are malicious or there is a man in the middle attack.
Each peer sends the proofs it has (for authenticity of KA) to its trusted peers. Malicious peers could send conflicting proofs of possession to their

peers. These actions are detected by byzantine agreement as follows: Consider an honest peer Pj receiving the byzantine agreement message from other peers. Let t of the peers be malicious; they may offer arbitrary proofs. Secondly, φn of the peers may not be able to reach A and may have faulty proofs to offer. Finally, proofs from another φn peers may be faulty because the peer Pj has a compromised path to them. In the worst case these three sets of peers are disjoint, and 2φn + t of the proofs can be missing. Thus, every peer eventually gets at least n − 2φn − t proofs.
However, the malicious peers could respond eagerly, causing 2φn + t of the n − 2φn − t received proofs to be faulty. Therefore, a majority of the proofs are identical and correct at every honest peer if

n − 4φn − 2t > 2φn + t

i.e.

t < (1 − 6φ)/3 · n

Therefore, using a majority vote after Byzantine agreement allows the peers to form trusted groups that contain only the honest peers that are not in the path of a man in the middle attack. This preserves the honest majority.
If B is malicious and sends conflicting requests to the peers, its signed authentication request messages will cause it to be detected by Byzantine agreement on the requests received. Again, by deletion of the malicious peer B, honest majority is preserved. □
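The counting in this proof can be checked directly: at least n − 2φn − t proofs arrive, of which up to 2φn + t may be faulty, and the remainder must outnumber them. A minimal check of the resulting condition:

```python
def majority_after_agreement(n: int, t: int, phi: float) -> bool:
    """Worst-case counting from the proof of Claim 3."""
    received = n - 2 * phi * n - t     # proofs that eventually arrive
    faulty = 2 * phi * n + t           # worst-case faulty proofs among them
    correct = received - faulty        # identical, correct proofs
    return correct > faulty            # same as n - 4*phi*n - 2*t > 2*phi*n + t

if __name__ == "__main__":
    print(majority_after_agreement(31, 9, 0.02))    # True:  9 < (1 - 0.12) * 31 / 3
    print(majority_after_agreement(31, 10, 0.02))   # False: 10 exceeds the bound
```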
4.3. Group Evolution
Admission requests are caused by the need for secure communication. If the peers A and B intend to communicate securely, they will check if A ∈ T(B) and B ∈ T(A). In this case, the problem is trivially solved.
Otherwise, A will request admission to T(B) and B will request admission to T(A). If both A and B are honest, the admission requests will succeed in the common operating case when their groups are also honest, as shown in Figure 6. If one of the requests fails, then either Byzantine agreement will correct the groups as described earlier, or periodic pruning of trusted groups will ensure honest majority as described in the following section. In either case the honest peers can eventually authenticate each other.

Figure 6. Dynamics of authenticated communication. (A new communication path between A and B leads to requests for membership of each other's trusted groups, which results in an authenticated communication path between the two trusted groups.)

4.4. Formation of Honest Majority Groups
Since honest members form trusted groups by following the membership control protocol, any provably malicious peers are deleted from trusted groups. On the other hand, if malicious peers can successfully masquerade as honest peers, then the continuous group migrations cause the distribution of covertly malicious parties to be the same as a random selection. Therefore, honest majority groups are formed with a probability greater than that of random selection.
A trusted group with 3t/(1 − 6φ) + 1 peers has honest majority if t peers are malicious or faulty. Because the value of φ does not change the behavior

of random selection, consider the honest majority group with 3t + 1 peers. Let the fraction of dishonest parties be 1/3 − ε, where 0 < ε ≤ 1/3. Under the assumption of independent random selection of members, the probability P(i) of choosing T with i dishonest members is given by the following binomial probability:

P(i) = C(3t + 1, i) (1/3 − ε)^i (2/3 + ε)^(3t+1−i)

where C(·, ·) denotes the binomial coefficient. The probability Ph = Σ_{i=0}^{t} P(i) of selecting an honest majority group is computed numerically and rapidly converges to 1 as ε approaches 1/3.
In practice the untrusted sets preserve information about past malicious activities and impede the free assimilation of dishonest parties into trusted sets. Thus, two honest parties A and B can have an expectation (1 − Ph)^k < 1/2^k of being incorrectly authenticated if they continue communication through k group migrations.
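The numerical evaluation of Ph mentioned above can be reproduced with a few lines of plain Python; the chosen values of t and ε are arbitrary examples.

```python
from math import comb

def p_honest_majority(t: int, eps: float) -> float:
    """P_h = sum_{i=0..t} C(3t+1, i) (1/3 - eps)^i (2/3 + eps)^(3t+1-i)."""
    n = 3 * t + 1
    p_bad = 1.0 / 3.0 - eps    # probability that a selected member is dishonest
    return sum(comb(n, i) * p_bad**i * (1 - p_bad)**(n - i) for i in range(t + 1))

if __name__ == "__main__":
    for eps in (0.01, 0.05, 0.15, 0.30):   # P_h approaches 1 as eps approaches 1/3
        print(eps, round(p_honest_majority(t=10, eps=eps), 4))
```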
5. Public Key Infection

The protocols implementing distributed authentication are expensive in messaging cost. Trusted groups execute protocols that include broadcast messages, leading to an O(n²) cost for mutual authentication in trusted groups. In order to reduce the messaging cost, we use an epidemic algorithm called Public key infection for lazy propagation of protocol messages, as shown in Figure 7. Because each protocol message is protected by an unforgeable digital signature, it is possible to store and forward messages through intermediate peers. This section discusses the protocol design, correctness, and the performance analysis in terms of messaging and space requirements.

Figure 7. Overview of Public Key Infection. (Layered view: the application obtains authenticated public keys from the authentication protocol; authentication protocol messages enter a message cache and are spread among peers by the public key infection protocol through anti-entropy sessions.)

Table 3
The cache record data structure.
lt    The Lamport timestamp
ct    The causal timestamp at source
src   Source
dest  Destination
mesg  The protocol message

Consider a lazy messaging layer underlying the authentication protocol discussed earlier. This layer maintains a cache of the undelivered messages and provides eventual consistency in the following sense: the outcome of the lazy protocol will approximate the outcome of the eager protocols described earlier. Thus, before the two protocols achieve the same assignment of public keys to peers, we shall be in the state of optimistically trusting the authenticity of public keys. If the optimistic trust is broken, we mark the offending peer untrusted as required by the authentication protocol.
Public key infection does not require knowledge of physical time. However, each peer maintains a numeric logical timestamp, lts, called the Lamport timestamp. It is maintained by setting it to the maximum of the local timestamp and the incoming message timestamp [13]. This timestamp provides a partial order on all events in the distributed system. Each peer also maintains a causal timestamp, cts, which is simply a local event counter incremented on send and receive events. This timestamp is sent with each outgoing message, allowing peers to maintain a timestamp vector ctv of causal timestamps.

Table 4
Data structures at the Key Infection layer.
lts     The Lamport timestamp.
ctv[i]  The last causal timestamp known for peer i.
ltv[i]  The Lamport time of the most recent message originating from peer i.
srv[i]  The stable read timestamp. Its value is the largest (known) Lamport time such that all messages with a smaller timestamp have been received at peer i.

Key infection works by deferring the transmission of messages sent by the authentication protocol. The following steps are taken when a protocol message m with source s and destination d is pushed to the key infection layer: Firstly, the causal timestamp ctv[s] and the Lamport timestamp lts are incremented to record the change of state (Table 4). Secondly, the record {lts, ctv[s], s, d, m} is inserted into the message cache (Table 3). It holds the outgoing message and timestamps to achieve eventual delivery.

Anti-entropy sessions
Anti-entropy sessions between peers transfer the protocol messages in an epidemic manner. The timing of anti-entropy sessions can either be application dependent, to take advantage of piggy-backing on application messages, or could depend on a timeout to prevent too much divergence from the eager protocol execution. The following steps define a session between the peers s and d.

1. Exchange time stamps
The Lamport timestamps are exchanged and used to update the Lamport timestamp vector ltv. The local Lamport time is also maintained using the usual algorithm. The causal timestamp vectors are compared to decide which messages should be sent to the other peer.

2. Send cached messages
For any locally cached record r, if the following holds:

r.ct > ctv_d[r.src]

it can be inferred that d has not seen the message stored in r. The causal timestamp is later than the last causal timestamp known to d. Thus, all records satisfying the message exchange condition have to be transmitted to the peer d to enable eventual delivery at the destination.

3. Receive cached messages
The peer d computes a set of records to be transmitted by the same logic. On receiving a record r, the receiver inserts r in the cache and updates the component ltv[r.src] if the following condition holds: ltv[r.src] < r.lt. Thus the Lamport time vector reflects the latest known Lamport time for each peer.

4. Exchange stable read time stamp
The stable read time stamp is computed as min_i(ltv[i]). The peer s assigns the value to srv[s]. The updated vector srv is now transmitted to d.

5. Delete received messages
The exchange of stable read timestamps advances its components in the usual manner. Consider a cached message r such that:

r.lt < srv[r.dest]

Clearly, since the value srv[r.dest] is computed at r.dest as the minimum of the Lamport timestamps received from the peers, this implies that a message with a greater Lamport time was received from the message source as well. By the assumption of ordered message delivery, the older timestamped message has already been received. Thus, the peer can delete the records that satisfy the deletion condition.
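Putting steps 1–5 together, one anti-entropy session between peers s and d can be sketched as follows. Record fields follow Table 3 and the vectors follow Table 4; direct access to the other peer's vectors stands in for the timestamp exchange of step 1, and transport, signatures and the encrypted timestamps of the next paragraph are omitted. The helper names are assumptions of this sketch.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Record:             # Table 3: the cache record
    lt: int               # Lamport timestamp
    ct: int               # causal timestamp at the source
    src: str
    dest: str
    mesg: object

class InfectionPeer:
    def __init__(self, name):
        self.name = name
        self.cache = []              # undelivered protocol messages
        self.ctv = defaultdict(int)  # last causal timestamp known per peer
        self.ltv = defaultdict(int)  # latest Lamport time seen per peer
        self.srv = defaultdict(int)  # stable read timestamps

    def records_for(self, other):
        # Step 2: send r if the other peer has not seen it (r.ct > ctv_d[r.src])
        return [r for r in self.cache if r.ct > other.ctv[r.src]]

    def receive(self, records):
        # Step 3: insert records and advance the Lamport/causal vectors
        for r in records:
            self.cache.append(r)
            if self.ltv[r.src] < r.lt:
                self.ltv[r.src] = r.lt
            if self.ctv[r.src] < r.ct:
                self.ctv[r.src] = r.ct

    def stable_read(self):
        # Step 4: srv[self] = min_i ltv[i]
        return min(self.ltv.values()) if self.ltv else 0

    def garbage_collect(self):
        # Step 5: drop records already covered by the destination's stable read
        self.cache = [r for r in self.cache if r.lt >= self.srv[r.dest]]

def anti_entropy_session(s, d):
    to_d, to_s = s.records_for(d), d.records_for(s)
    d.receive(to_d)
    s.receive(to_s)
    s.srv[s.name] = s.stable_read()
    d.srv[d.name] = d.stable_read()
    for peer, ts in d.srv.items():           # exchange stable read timestamps
        s.srv[peer] = max(s.srv[peer], ts)
    for peer, ts in s.srv.items():
        d.srv[peer] = max(d.srv[peer], ts)
    s.garbage_collect()
    d.garbage_collect()
```

Deleted records are exactly those whose Lamport time falls below the destination's stable read timestamp, mirroring the deletion condition of step 5.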

Encrypted timestamps
The epidemic algorithm operates in a semi-trusted environment. Thus, it is necessary to ensure the integrity of timestamps. We create encrypted timestamps by requiring the peer i to generate the pair {t, Ki^−1(t)} instead of the timestamp component t. Thus, the protocol processing outlined above would always pass timestamp values as pairs. The receivers would be required to verify the correctness before acting on a timestamp value. By the assumption of non-invertibility, the secure timestamp can be generated only by the peer i. Thus, for an honest peer i, it is impossible to forge the timestamp component representing the state at i. Since the authentication protocol requires correct operation only from the honest peers, secure timestamps are sufficient to preserve the correctness of the authentication protocol.

5.1. Complexity and coverage
Let the peers do an anti-entropy session with a randomly chosen peer every unit time. Consider a message transmitted at the first round of exchanges. If |T| = n, then we know the fraction f of initially uninfected peers is (n − 1)/n. Now at round i, if fi is the fraction of uninfected peers, only a fraction fi² of the peers remains uninfected with the update at round i + 1. Thus, on the average:

f_{i+1} = f_i² = f^{2^i}

Thus the number of uninfected peers drops doubly exponentially with time. Since the number of exchanges initiated by a peer is one per unit time, the number of messages sent and received by a peer is in O(1 + 1/n).
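The recurrence can be iterated directly to see how quickly the uninfected fraction collapses; the universe size below is an arbitrary example.

```python
def uninfected_fraction(n: int, rounds: int) -> float:
    """Fraction of peers still uninfected after the given number of rounds,
    iterating f_{i+1} = f_i**2 from f = (n - 1) / n."""
    f = (n - 1) / n
    for _ in range(rounds):
        f = f * f
    return f

if __name__ == "__main__":
    for i in (5, 10, 15, 20):    # doubly exponential decay with the round number
        print(i, uninfected_fraction(10_000, i))
```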
5.2. Size of the message cache
Let µ be the rate at which messages are being created at the authentication protocol layer. Thus the cache gets µ new messages from the higher protocol every unit time. Now let us consider the situation at round i with respect to the messages created during the first round. Since the fraction of uninfected peers is the same as the probability Pd of the destination being uninfected, we have:

Pd = f^{2^i}

Now assume that the destination gets infected at round i⋆. Again by anti-entropy propagation of its stable read timestamp, we have the probability P that a peer is infected with the message but not with the stable read timestamp of the destination:

P = (1 − f^{2^i}) f^{2^{i−i⋆}}

Since infection with the update but not with the timestamp ensures that the message is not deleted, the expected fraction of cached messages is (1 − f^{2^i}) f^{2^{i−i⋆}}. Again, the cached messages can be created at any previous exchange round. Thus we have the summation for the log size N:

N = Σ_{i=1}^{∞} Σ_{i⋆=1}^{i} n µ (1 − f^{2^i}) f^{2^{i−i⋆}}        (1)

Relating the series to the integral ∫ e^{e^x} dx, which is evaluated using integration by parts, we have the following relation on the log size:

N < µ (e log e / 2) n log((n + 1)/n)

We know that log((n + 1)/n) is O(log n / n). Also, the rate of message insertion is O(n) because messages are sent to all members of the trusted group. Hence the number of cached messages is in O(n log n).

6. Application

We have implemented the Byzantine fault tolerant authentication protocol as a standalone library to make it available to a variety of applications. Our first application target is an electronic mail authentication system implemented through a self authenticating mail (SAM) client.
Electronic mail is one of the most popular applications on the Internet. Although it has gained wide acceptance both for business and personal use, its usage is limited by the lack of security in the mail transport protocol. Unlike conventional mail that can be signed by hand, electronic mail does not come with any inbuilt authentication mechanism. Thus, it is possible to forge the sender identity and modify contents en-route. To be at par with paper documents, email must provide these minimal authentication capabilities in a convenient and unobtrusive manner.

As the email client is in first hand contact with mail users, we need to address manual overrides over protocol decisions. Thus, the system will allow a manual override for all trustworthiness decisions. It can be observed that in the absence of the autonomous authentication protocol, such a system would be equivalent to PGP in terms of the trust model.
Compatibility with existing email infrastructure is part of the design goals. We use the SMTP extensions to enforce backward compatibility. The extension fields will carry protocol messages as part of the mail header. Since the clients that cannot interpret the extended field will ignore it, the protocol data is an overhead for clients that cannot use it. We eliminate the overhead of sending the extension fields to non-SAM clients in the following manner. Identification of SAM end-points is enabled by requiring each application client to send messages with the extension field. If there are no outstanding protocol messages, then the sender sends an empty field for the extension. Thus all SAM enabled addresses are identifiable on receipt of an email message.
A cache of recent deductions, like the authenticated public key and the authentication protocol capability of a peer, is kept in the SAM client. The act of receiving an email message makes a SAM client aware of the capabilities at the sender. As the end result of protocol execution is the authentication of the public keys at mail addresses, the data associated with an end point includes the known public key and the number of group migrations since it was first authenticated. This information helps to assign a confidence value to the authentication. We expect to gain real life experience on the utility of Byzantine fault tolerant authentication by implementing self authenticating mail.
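A small illustration of carrying protocol data in a mail header as described above. The header name X-SAM-Protocol and the base64 framing are assumptions of this sketch, not the field actually used by the SAM client.

```python
import base64
from typing import Optional
from email.message import EmailMessage

def attach_protocol_data(msg: EmailMessage, payload: Optional[bytes]) -> None:
    """Always emit the extension header so that receivers can identify a
    SAM-capable sender; send it empty when no protocol messages are pending."""
    msg["X-SAM-Protocol"] = base64.b64encode(payload).decode() if payload else ""

if __name__ == "__main__":
    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = "a@example.org", "b@example.org", "hello"
    msg.set_content("body text")
    attach_protocol_data(msg, b"serialized authentication message")
    print(msg["X-SAM-Protocol"][:16])
```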
7. Simulation

We devised a simulation system to investigate authentication cost in various scenarios. The simulation system is a stripped down version of the authentication module implemented in the library. All the message transfers are replaced by function calls that update counters to simulate system activity. The simulation is designed to replicate the authentication protocol in a universe of peers. The trust relationships between the peers can be preset during bootstrapping. Application level message exchange can be simulated to trigger authentication of unknown peers. The simulation consists of about 1500 lines of C++ code and uses the libc pseudo random number generator to make application level choices.
The scenarios of interest are bootstrapping and the authentication of new peers on an ongoing basis. The flow of time is uniformly measured in epochs, with each epoch consisting of the time needed for one cryptographic operation and one message transfer. This way of measuring time allows extrapolation of the simulation results to systems having various tradeoffs of processing power and network latency. (The extrapolation should assign zero message transfer time and the measured cryptographic operation time to calculate the cost of Public key infection.) It also allows us to state all simulation results in terms of the number of messages exchanged by a peer.

7.1. Bootstrapping Cost
The bootstrapping procedure requires the trusted peers to authenticate each other as if they were mutually probationary. This process was simulated with respect to the bootstrapping set size, the topology of the trust graph connecting the bootstrapping peers, and the proportion of malicious peers in the universe. We simulated the bootstrapping process on a universe of 1000 peers with trusted group sizes from 6 to 56.
We chose different types of trusted groups as shown in Figure 8. Bidirectional trust was used to create clusters of mutually trusting peers. This is realistic in a geographically local or in a small worlds type of trusted environment. These clusters of bidirectional trust were perturbed by randomly selecting some of the members from the universe. This makes the trust unidirectional and imposes additional cost on the peers. The cost increases because it becomes less likely that the trusted peer can return a cached proof of possession.

Figure 8. Average Bootstrapping cost per Peer. (Number of messages versus the number of bootstrapping trusted peers, for bidirectional trust, 35% and 50% randomly trusted, and fully random trust.)

Figure 9. Effect of Malicious Peers on the Average Bootstrapping Cost. (Number of messages versus the fraction of malicious peers in the bootstrapping set, for group sizes 8, 16, 31 and 56.)

The effect of malicious peers was studied for various group sizes. The malicious peers were randomly distributed in the universe. Their actions were not provably malicious on challenge response, leading to the execution of Byzantine agreement. The trusted groups were created with 15% random selection. The results shown in Figure 9 indicate that the incurred cost increases rapidly with increasing group size and increasing proportion of malicious peers. This is expected because each malicious peer forces an overhead of O(|T|²) message transfers on each of its peers.

7.2. Authentication Cost
We investigated the incremental cost incurred by group members and the probationary peers for various group sizes. This simulation was conducted on a universe of 2000 peers with statistics collected in 1000 runs of 1000 application messages each. The application messages were targeted to an unauthenticated peer with probability 0.1. As shown in Figure 10, the cost of authenticating a new peer is independent of group size for the trusted peers. The cost increases linearly with group size for probationary members. It is slightly cheaper to authenticate peers in groups with randomized selection because some of the challenge response steps are avoided if the probationary peer is already trusted by some group members. The role of malicious peers is investigated in Figure 11. It shows a rapid increase in the authentication cost in trusted groups with an increasing proportion of malicious peers. The effect of malicious peers is greater on larger groups, as expected from the quadratic messaging cost of the byzantine agreement phase of the authentication protocol.

8. Discussion

Our model supports an incremental growth of trust. Optimistic authentication allows the trust to increase by the successful authentication of a public key through many group migrations. Since most of the peers are in honest groups, it becomes increasingly unlikely that a long sequence of dishonest groups is selected. Thus, our protocols provide soft authentication, which contrasts them from the traditional notion of authentication.
The traditional model with its all-or-none approach provides stronger authentication with

Figure 10. Cost of Authentication. (Number of messages per authentication versus the number of trusted peers, showing group cost and peer cost for bidirectional trust, 35% and 50% randomly trusted, and fully random trust.)

Figure 11. Authentication with Malicious Peers. (Number of messages per authentication versus the proportion of malicious peers, showing group cost and peer cost for group sizes 100, 200 and 300.)

weaker fault tolerance. We trade off the authentication strength and use stronger network assumptions to provide an autonomous and fault tolerant authentication mechanism. It can be argued that the public key infrastructure approach forces the system designers to take a boolean approach to security. This has allowed most of the Internet traffic to remain insecure even though the computational power and software engineering needed to secure it are available. Byzantine fault tolerant authentication is useful because it allows the co-operative formation of self authenticating systems.
The authentication mechanism has not addressed denial of service type of attacks. A possible solution for this problem would require keeping track of the cost incurred on behalf of other peers and making peers untrusted if they launch such attacks.

9. Related Work

Alternative approaches to provide fault tolerant authentication are known. There is a class of protocols relying on threshold cryptography that uses key shares with the following property: without a quorum of participants, it is impossible to create a digitally signed public key certificate [14]. As a consequence, unless the number of malicious parties is as large as the quorum, false authentication is impossible. Threshold cryptography requires the existence of a trusted dealer that initializes the key shares. In this way, it depends on the honesty of the dealer. Threshold cryptography has been used in COCA, a fault tolerant public key authentication service [15], and as the basis of a number of other secure services [16–18]. Threshold cryptography is also used to implement proactive recovery. To compromise such a system, the adversary is required to compromise the quorum within its vulnerability window or lose any previous progress due to a re-randomization of key shares [19]. Although better than static key shares, the scheme cannot recover from the compromise of a quorum because the same long term shared secret is recycled among the trusted parties. In contrast, distributed authentication is proactively secure in the sense that it holds no long term secrets.
Role based access control in a distributed system has been studied earlier [20]. It uses roles instead of identities for granting access and

therefore avoids the issue of identity authentication [21]. Reputation based distributed trust has been investigated by a number of previous works. The Free Haven project uses a proactive mechanism based on recommendations to protect the anonymity of the users [22]. NICE allows the creation of trustworthy peer groups through a trust evaluation mechanism based on reputation [23]. Both systems aggressively eliminate (overtly) malicious parties to preserve their correctness. Byzantine fault tolerant authentication builds upon this idea of peer reputation. It does not require external identity authentication and works on provable observations rather than recommendations.
PGP has applied decentralized trust to authentication in distributed systems [7,8]. However it requires human evaluation of trustworthiness, which limits its applicability for unsophisticated users and autonomous systems [9]. Trusted ambient communities [24] is an approach that incrementally builds trusted groups by observing the behavior of securely initialized peers. It is more permissive of malicious peers than our authentication mechanism, which needs verifiable challenge response proofs. Cryptographic identifiers [25] provide authenticated identities by selecting a network identity related to the public key. This mechanism does not handle the man in the middle attack. The approach is simple but constrained by the need to acquire specific network identifiers. A randomized approach to setting up weakly secure peer-to-peer networks is used in Smart Dust [26]. Although it does not focus on authentication, it shares distributed trust and a weakened adversary model with Byzantine fault tolerant authentication.

10. Conclusion and Future Work

Byzantine fault tolerant distributed authentication provides a new approach to tackle the authentication problems of distributed and peer-to-peer systems. The salient features are the lack of total trust and single points of failure. Our approach allows a natural growth of trust without requiring trustworthy hierarchies of delegating and recommending parties as done in other trust management systems [27,28]. Our approach is made feasible by weakening the network is the adversary model.
The rise of peer-to-peer systems on the Internet, and the popularity of wireless communication, are the prime motivations behind the change of the underlying model. Although weaker than the traditional model in terms of adversary power, it is stronger in terms of fault tolerance. Byzantine fault tolerant authentication was implemented as a library which will be used to build a secure e-mail system. The system will support a set of e-mail clients that will authenticate each other through an underlying distributed trust mechanism.

11. Acknowledgments

This work is supported in part by the National Science Foundation under CCR-0133366 and ANI-0121416. The authors thank Dr. Marios D. Dikaiakos for his constructive feedback and suggestions. The authors are thankful to the anonymous reviewers for their thoughtful comments that helped improve the quality of this paper.

REFERENCES

1. W. Diffie, M. Hellman, New Directions in Cryptography, IEEE Trans. Info. Theory 22 (1976) 644–654.
2. R. L. Rivest, A. Shamir, L. Adleman, A method for obtaining digital signatures and public-key cryptosystems, Communications of the ACM 21 (2) (1978) 120–126.
3. CCITT, The Directory Authentication Framework, Recommendation X.509 (1988).
4. M. Naor, K. Nissim, Certificate Revocation and Certificate Update, in: Proceedings 7th USENIX Security Symposium (San Antonio, Texas), 1998.
5. B. Fox, B. LaMacchia, Certificate Revocation: Mechanics and Meaning, in: R. Hirschfeld (Ed.), FC'98: International Conference on Financial Cryptography, Vol. 1465 of Lecture Notes in Computer Science, Springer-Verlag, 1998, pp. 158–164.
6. W. Aiello, S. Lodha, R. Ostrovsky, Fast digital identity revocation, in: Advances in Cryptology - CRYPTO '98, 18th Annual International Cryptology Conference, Santa Barbara, California, USA, August 23-27, 1998, Proceedings, Vol. 1462 of Lecture Notes in Computer Science, Springer, 1998, pp. 137–152.

7. P. Zimmermann, The Official PGP User's Guide, MIT Press, Cambridge, Massachusetts, 1995.
8. S. Garfinkel, PGP: Pretty Good Privacy, O'Reilly & Associates, Inc., Cambridge, MA, 1995.
9. A. Whitten, J. D. Tygar, Why Johnny can't encrypt: A usability evaluation of PGP 5.0, in: Proceedings of the 8th USENIX Security Symposium, 1999.
10. L. Lamport, R. Shostak, M. Pease, The byzantine generals problem, ACM Transactions on Programming Languages and Systems (TOPLAS) 4 (3) (1982) 382–401.
11. J. Douceur, The sybil attack, in: Proc. of the IPTPS02 Workshop, Cambridge, 2002.
12. P. Syverson, I. Cervesato, The Logic of Authentication Protocols, Lecture Notes in Computer Science 2171.
13. L. Lamport, Time, clocks, and the ordering of events in a distributed system, Commun. ACM 21 (7) (1978) 558–565.
14. S. Goldwasser, S. Micali, R. L. Rivest, A Digital Signature Scheme Secure Against Adaptive Chosen-Message Attacks, SIAM J. Comput. 17 (2) (1988) 281–308.
15. L. Zhou, F. B. Schneider, R. van Renesse, COCA: A Secure Distributed On-line Certification Authority, Tech. Rep. 2000-1828, Department of Computer Science, Cornell University, Ithaca, NY, USA (December 2000).
16. C. Cachin, Distributing trust on the Internet, in: International Conference on Dependable Systems and Networks (DSN 2001), Göteborg, Sweden, IEEE, 2001.
17. M. K. Reiter, The Rampart Toolkit for Building High-Integrity Services, in: Dagstuhl Seminar on Distributed Systems, 1994, pp. 99–110.
18. L. Zhou, Z. Haas, Securing Ad Hoc Networks, IEEE Network Magazine 13 (6).
19. R. Canetti, R. Gennaro, A. Herzberg, D. Naor, Proactive Security: Long-term protection against break-ins, RSA CryptoBytes 3 (1) (1997) 1–8.
20. J. Bacon, K. Moody, W. Yao, Access Control and Trust in the Use of Widely Distributed Services, Lecture Notes in Computer Science 2218 (2001) 295+.
21. R. S. Sandhu, E. J. Coyne, H. L. Feinstein, C. E. Youman, Role-based access control models, IEEE Computer 29 (2) (1996) 38–47.
22. R. Dingledine, M. J. Freedman, D. Molnar, The free haven project: Distributed anonymous storage service, Lecture Notes in Computer Science 2009.
23. S. Lee, R. Sherwood, S. Bhattacharjee, Cooperative Peer Groups in NICE, in: INFOCOM, 2003.
24. S. U. V. Legrand, D. Hooshmand, Trusted Ambient community for self-securing hybrid networks, INRIA Research Report 5027 (2003).
25. G. Montenegro, C. Castelluccia, Crypto-based identifiers (CBIDs): Concepts and applications, ACM Trans. Inf. Syst. Secur. 7 (1) (2004) 97–127.
26. R. Anderson, H. Chan, A. Perrig, Key infection: Smart trust for smart dust, in: Proceedings of IEEE International Conference on Network Protocols (ICNP 2004), 2004.
27. M. Blaze, J. Ioannidis, A. D. Keromytis, Trust Management and Network Layer Security Protocols, in: Cambridge Security Protocols International Workshop, 1999, pp. 103–118.
28. R. Yahalom, B. Klein, T. Beth, Trust relationships in secure systems—a distributed authentication perspective, in: Proceedings of the 1993 IEEE Symposium on Research in Security and Privacy, 1993, pp. 150–164.
