Vivek Pathak∗ and Liviu Iftode
Department of Computer Science
Rutgers, the State University of New Jersey
110 Frelinghuysen Road
Piscataway, NJ 08854-8019 USA
We describe Byzantine Fault Tolerant Authentication, a mechanism for public key authentication in peer-to-peer
systems. Authentication is done without trusted third parties, tolerates Byzantine faults and is eventually correct
if more than a threshold of the peers are honest. This paper addresses the design, correctness, and fault tolerance
of authentication over insecure asynchronous networks. An anti-entropy version of the protocol is developed
to provide lazy authentication with logarithmic messaging cost. The cost implications of the authentication
mechanism are studied by simulation.
Key words: Public Key Authentication, Peer-to-peer Systems, Byzantine Fault Tolerance.
has honest majority if the number of malicious or faulty peers t < ((1 − 6φ)/3) n.

Dishonest peers may behave in an arbitrary fashion, either because of being faulty or because of being malicious adversaries. It is assumed that the system of mutually authenticating peers has honest majority.

2.3. Adversaries
The computational power of adversaries is polynomially bounded. Hence, with very high probability, the adversary cannot forge digital signatures or invert the encryption transformations.

We consider active and passive adversaries. The passive adversary has unlimited power to eavesdrop on any message. While the active adversaries have unlimited power to inject arbitrary messages into the network³, they cannot prevent message delivery for more than a small fraction φ of the honest parties. Clearly this is weaker than the classical network-is-the-adversary approach.

The weakened active adversary is appropriate in wireless networks because of the physical difficulty of silencing radio transmissions. Its use in Internet applications is justified by the difficulty of preventing message delivery to a large number of end-points. Practical experience with Internet-based systems also suggests that message injection or spoofing is the preferred form of attack.

2.4. Authentication
Challenge-response protocols can authenticate public keys in the absence of man-in-the-middle attacks. Since we allow for a limited number of such attacks, a public key can be authenticated by multiple challenge-response exchanges originating from different end-points.

The authentication protocol (Figure 2) consists of three phases: challenge response, distributed authentication, and Byzantine agreement. During challenge response, the peer to be authenticated is challenged with encrypted nonces by a set of peers. Since the nonce can be recovered only by the possessor of the private key, a correct response is a proof of possession.

In the distributed authentication phase, peers forward their proofs to other peers. A peer B can authenticate a peer A after it receives a number of valid proofs from different peers. If all the participants are honest, there will be consensus on validity. In this common operating case, the protocol terminates with B becoming convinced that the public key is authentic.

If there are conflicting claims on authenticity, B can deduce that either A or some of the peers are malicious or faulty. The protocol proceeds to Byzantine agreement, where the sent and received messages of the different parties are validated. As all the messages are digitally signed, malicious behavior can be discovered by this procedure.

The messaging cost of authentication motivates optimization of the common case when all trusted parties are indeed honest. The public key infection protocol implements optimistic authentication that hides latency by proceeding before a public key is authenticated. Public keys and their authentication proofs are propagated efficiently by an anti-entropy public key infection algorithm.

2.5. Trusted Groups
Each peer has a probationary group, a trusted group, and an untrusted group of peers, as shown in Figure 3. Peers gain knowledge of each other's public keys depending on their communication patterns. Newly discovered peers are added to the probationary group. Successful authentication moves a peer in the probationary group to the trusted group. Malicious peers are moved from the trusted group to the untrusted group.⁴ Peers are also deleted from trusted groups for lack of liveness and for periodic pruning of the trusted group. This is done to improve authentication performance.

3. Architecture

Byzantine fault tolerant authentication is implemented by executing the Authentication protocol

³ Since we do not address denial-of-service attacks, the spoofing power is not large enough to break the network or the parties processing the forged messages.
⁴ Continuous addition of malicious peers can cause the untrusted group to grow without limit. Therefore, peers may forget malicious behavior of the very distant past.
[Figure: peers A, B, C, D, and E exchange encrypted-nonce challenge response pairs; the public key of A is authenticated to B by distributed authentication; B identifies the malicious peer D through Byzantine agreement if there is no consensus on authenticity.]

Figure 2. Authentication protocol example: A peer A is authenticated by B using its trusted peers. D is a malicious peer that tries to prevent authentication of A.
• Admission request
The protocol begins when B encounters an unauthenticated public key K_A. It announces the key to its trusted group and asks them to verify its authenticity.

• Challenge response
Each peer P_i challenges A by sending a random nonce encrypted with A's supposed public key in the signed challenge message.
1. Admission request
A peer A makes a key possession claim by notifying the peer B. If A has an expired authenticated public key K_A⋆, it includes the proof of its possession P = {A, K_A}_A⋆. B announces the claim to the group.

    A → B : {A, B, admission request, {A, K_A[, P]}_A}_A
    For each trusted peer P_i of B
    B → P_i : B[i] = {B, P_i, authentication request, {A, K_A[, P]}_A}_B

2. Challenge response
Each peer challenges A with an encrypted nonce, and A responds with the signed response. A also stores the challenge response pair {C_iA, R_iA} from its interaction with peer P_i as V_A[i] for use in Byzantine agreement.

    At each trusted peer P_i of B
    P_i → A : C_iA = {P_i, A, challenge, K_A(r_i)}_P_i
    A → P_i : R_iA = {A, P_i, response, r_i}_A

3. Distributed authentication
Each peer returns the proof of possession {C_iA, R_iA} to B. B saves the pair in a local variable V_B[i] and determines the public key to be authentic (or inauthentic) if there is consensus on validity (or invalidity) in the proofs received. If there is no consensus, B calls for Byzantine agreement.

    At each trusted peer P_i of B
    P_i → B : {C_iA, R_iA}_P_i

4. Byzantine agreement
B asks the peer A for the challenges it received, and its responses to them. It then compares the proofs received from the peers with those received from A. It also notifies the peers of the received proofs so that malicious parties are eliminated from the trusted group.

    B → A : {B, A, proof request}_B
    A → B : {A, B, proof, V_A}_A
    If A is not proved malicious
    For each trusted peer P_i of B
    B → P_i : {B, P_i, byzantine fault, B, V_B}_B
    For each trusted peer P_j of P_i
    P_i → P_j : {P_i, P_j, byzantine agreement, B, V_j}_P_i
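As a concrete illustration of the challenge response step, the sketch below runs one round with textbook RSA over tiny fixed primes. This is an illustrative toy, not the paper's implementation: a real deployment needs a full cryptographic library, padding, and the signed message wrappers shown in the protocol listing; the variable names merely mirror the protocol notation.

```python
# One challenge-response round (step 2 above) with textbook RSA.
# NOT secure crypto; it only shows why a correct response proves
# possession of the private key K_A^{-1}.
import random

p, q = 61, 53                      # toy primes
n = p * q                          # public modulus
e = 17                             # public exponent: K_A = (n, e)
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent, known only to A

def encrypt(m):                    # challenge carries K_A(r_i)
    return pow(m, e, n)

def decrypt(c):                    # only the holder of d recovers the nonce
    return pow(c, d, n)

nonce = random.randrange(2, n)     # r_i chosen by the challenger P_i
challenge = encrypt(nonce)         # C_iA = {P_i, A, challenge, K_A(r_i)}
response = decrypt(challenge)      # honest A returns r_i in R_iA

# The challenger accepts the proof of possession iff the nonce matches.
assert response == nonce
```

An eavesdropper who sees only `challenge` cannot produce `response` without inverting the encryption, which the polynomially bounded adversary of Section 2.3 cannot do.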
A can recover the nonce only if it holds the private key K_A⁻¹. It returns the nonce in a signed response message. The challenge-response message pair is a proof of possession for the public key. At the end of the challenge response phase, each peer gets a proof of possession for K_A.⁵ Each challenger waits for an application-specific time-out. It deletes the proof if duplicate responses are received.

• Distributed authentication
The peers respond to B's authentication request by sending their proofs of possession to B. If all peers are honest, then there will be consensus on the validity of the proofs. In this case, B gets the authentication result and the protocol terminates.

• Byzantine agreement
If there are differing authentication votes, then either A or some of the peers are malicious or faulty. To detect if A is malicious,

⁵ Note that since K_A is not yet authenticated, the digital signature is not verified on the response message.

Figure 3. Group structure

3.2. Bootstrapping
The bootstrapping procedure is provided to cold-start the system. This is in contrast with the situation when trusted groups already exist and a peer joins some of them. Bootstrapping initializes the authentication system by creating a trusted group consisting of the bootstrapped peers. The peers authenticate each other by requesting admission into this trusted group. It should have honest majority to function correctly.

3.3. Membership Control Protocol
Membership control (Figure 5) serves three purposes. It preserves the honest majority of trusted groups, maintains consistency of the trusted group definition among sets of frequently communicating peers, and prevents excessive growth of the trusted group size to limit the cost of authentication. The group operations of the protocol are described below:

Addition to trusted groups
Each peer maintains a list of to-be-sent authentication proofs for each probationary peer. It lazily pushes these proofs to its trusted peers. Thus the probationary peer becomes trusted at each trusted peer. A peer may pull proofs because lazy push may delay a required authentication. Peers pull the proofs by sending authentication request messages.
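The group transitions described in Sections 2.5 and 3.3 (probationary to trusted on successful authentication, trusted to untrusted on proven malice, with pruning for liveness) can be captured in a small state machine. The class and method names below are illustrative, not part of the protocol specification.

```python
class PeerDirectory:
    """Sketch of one peer's probationary, trusted, and untrusted groups."""

    def __init__(self):
        self.probationary = set()
        self.trusted = set()
        self.untrusted = set()

    def discover(self, peer):
        # Newly discovered peers are added to the probationary group.
        if peer not in self.trusted and peer not in self.untrusted:
            self.probationary.add(peer)

    def authenticated(self, peer):
        # Successful authentication promotes a probationary peer.
        self.probationary.discard(peer)
        self.trusted.add(peer)

    def proven_malicious(self, peer):
        # Provably malicious peers move to the untrusted group.
        self.trusted.discard(peer)
        self.probationary.discard(peer)
        self.untrusted.add(peer)

    def prune(self, peer):
        # Peers may be dropped for lack of liveness or periodic pruning,
        # and very old malicious behavior may eventually be forgotten.
        self.trusted.discard(peer)
        self.untrusted.discard(peer)


d = PeerDirectory()
d.discover("A"); d.authenticated("A")
d.discover("D"); d.authenticated("D"); d.proven_malicious("D")
print(sorted(d.trusted), sorted(d.untrusted))  # ['A'] ['D']
```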
1. Push proofs
A peer D periodically pushes the proof of possession {C_DA, R_DA} to peers that have not yet received its proof.

    For each trusted peer P_j that has not been sent the proof
    D → P_j : {C_DA, R_DA}_D

2. Pull proofs
A peer B has some, but not all, proofs of authenticity. It can ask any peer P_j for the proof to arrive at the authenticity, and hence trusted group membership, decision for a probationary peer A.

    For each trusted peer P_j that has not sent a proof
    B → P_j : {B, P_j, authentication request, (A, K_A)}_B
    P_j → B : {C_jA, R_jA}_P_j
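A minimal sketch of the push and pull operations above. The data structures and names are illustrative assumptions; in the protocol itself every message is signed as shown in the listing.

```python
# Lazy push / on-demand pull of proofs of possession (illustrative sketch).

class Peer:
    def __init__(self, name):
        self.name = name
        self.proofs = {}       # probationary peer -> proof of possession
        self.pushed_to = {}    # probationary peer -> peers already sent

    def store_proof(self, subject, proof):
        self.proofs[subject] = proof
        self.pushed_to.setdefault(subject, set())

    def lazy_push(self, trusted_peers):
        # Periodically push each proof to trusted peers that lack it.
        for subject, proof in self.proofs.items():
            for peer in trusted_peers:
                if peer.name not in self.pushed_to[subject]:
                    peer.store_proof(subject, proof)
                    self.pushed_to[subject].add(peer.name)

    def pull(self, subject, trusted_peers):
        # Ask peers for a missing proof (an authentication request),
        # since lazy push may delay a required authentication.
        for peer in trusted_peers:
            if subject in peer.proofs:
                self.store_proof(subject, peer.proofs[subject])
                return peer.proofs[subject]
        return None


d, b, pj = Peer("D"), Peer("B"), Peer("Pj")
d.store_proof("A", ("C_DA", "R_DA"))
d.lazy_push([pj])              # D pushes its proof to Pj
proof = b.pull("A", [pj])      # B later pulls the proof from Pj
print(proof)                   # ('C_DA', 'R_DA')
```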
incremental in its approach. Therefore, a direct case-by-case analysis of the protocol is developed below.

4.1. Challenge Response
Consider a peer B with a trusted group of peers {P_1, ..., P_i, ..., P_n}. Let A request admission into the trusted group of B. Each peer P_i sends a proof of possession {C_iA, R_iA}_P_i to B, where

    C_iA = {P_i, A, challenge, c_i}_P_i
    R_iA = {A, P_i, response, r_i}_A

Let the proof of possession be valid if c_i = K_A(r_i) and both C_iA and R_iA are properly signed.

CLAIM 1. If P_i and A are honest, the proof of possession is valid, and the communication path P_i–A does not lose messages, then K_A is authentic with very high probability.

PROOF: By contradiction, let K_A be inauthentic. Since P_i is honest, it transmits a correct challenge containing c_i = K_A(r_i) to A, and does not disclose its nonce r_i.

Since the network path does not lose messages, the challenge will be delivered to A and the response delivered to P_i. Thus, if a single response is received, A must be the responder.⁷ Since it computes r_i = K_A⁻¹(c_i), it knows the private key, a contradiction. □

4.1.1. Attacks on Challenge Response
The challenge-response protocol can be attacked in a number of ways. Messages may be spoofed, originating from sources other than their apparent origin X. Man-in-the-middle attacks may cause a peer X′ to impersonate X, and protocol attacks could be launched by a peer X not following the prescribed protocol.

Let a proof of possession be P-invalid if the challenge is not properly signed, A-invalid if the response is not properly signed, K-invalid if c_i ≠ K_A(r_i), and faulty if it is valid but K_A is not owned by A. Messages exchanged between the trusted peers are safe from spoofing and man-in-the-middle attacks since they are signed by authenticated public keys. Considering the various possibilities of attacks on the protocol, the effect on the correctness of challenge response is analyzed below. A summary is provided in Table 2.

We consider spoofing, impersonation, and protocol attacks on the authentication architecture. Spoofing is defined as the attack where an adversary A′ assumes the identity of a peer A. This attack is detected by the challenge-response mechanism. Impersonation is a man-in-the-middle type of attack where an adversary M impersonates A while communicating with B, and B while communicating with A. In accordance with the mechanics of the attack, A and B cannot communicate directly without passing through M. We define protocol attacks as the set of attacks that are mounted by providing incorrect responses (or lack of responses) to various protocol messages. A number of other protocol attacks like replay, type flaws, and encapsulation are rendered ineffective by the use of timestamps, message identifiers, and digital signatures respectively. In general, source and destination identifiers are part of the message definition when the identity of the communicating parties matters.

The adversary mounts a successful attack if at least one of the following goals is satisfied:

G1 Violate authentication
The adversary convinces an honest peer that the public key of A is K_A′ when it is not.

G2 Violate honest majority
The adversary creates an adverse selection of group members that lacks honest majority.

Consider the case of malicious peers that are not trusted by honest parties. They can attack challenge response in one of the following ways:

• Spoofing
A malicious peer A′ may try to impersonate an honest peer A by sending the admission request message. If A is already part of the trusted group, then each trusted peer has its correct authenticated public key K_A. Since A′ cannot produce the required proof

⁷ If multiple responses are received, they are marked invalid by the protocol.
Table 2
Effect of attacks during challenge response. A is authenticated by B and its peers P_i.

Sender under attack    A                                    B                     P_i
Spoofing               delay                                K-invalid             K-invalid
Man in the middle      faulty                               faulty                faulty
Incorrect response     P-invalid, A-invalid or K-invalid    delay or K-invalid    delay or A-invalid
No response            delay                                delay                 delay
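The validity classes summarized in Table 2 can be expressed as a small checker. The function and parameter names are illustrative assumptions; signature verification and the encryption K_A(·) are passed in as callables so the sketch stays independent of any particular cipher.

```python
# Sketch of the proof-of-possession validity classes defined in
# Section 4.1.1. A proof that passes every check may still be 'faulty'
# if K_A is not actually owned by A; that case is only detectable by
# the distributed phases, not by a local check.

def classify_proof(challenge_signed_ok, response_signed_ok,
                   c_i, r_i, encrypt_with_K_A):
    """Return the validity class of a proof of possession {C_iA, R_iA}."""
    if not challenge_signed_ok:
        return "P-invalid"        # challenge not properly signed
    if not response_signed_ok:
        return "A-invalid"        # response not properly signed
    if c_i != encrypt_with_K_A(r_i):
        return "K-invalid"        # response does not match the challenge
    return "valid"


enc = lambda r: r + 1             # stand-in for K_A(.), purely illustrative
print(classify_proof(True, True, enc(7), 7, enc))    # valid
print(classify_proof(True, True, 99, 7, enc))        # K-invalid
print(classify_proof(True, False, enc(7), 7, enc))   # A-invalid
```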
However, the malicious peers could respond eagerly, causing 2φn + t of the n − 2φn − t received proofs to be faulty. Therefore, a majority of the proofs are identical and correct at every honest peer if

    n − 2φn − t > 2(2φn + t), i.e. t < ((1 − 6φ)/3) n

Therefore, using a majority vote after Byzantine agreement allows the peers to form trusted groups that contain only the honest peers that are not in the path of a man-in-the-middle attack. This preserves the honest majority.

If B is malicious and sends conflicting requests to the peers, its signed authentication request messages will cause it to be detected by Byzantine agreement on the requests received. Again, by deletion of the malicious peer B, honest majority is preserved. □

4.3. Group Evolution
Admission requests are caused by the need for secure communication. If the peers A and B intend to communicate securely, they will check if A ∈ T(B) and B ∈ T(A). In this case, the problem is trivially solved.

Otherwise, A will request admission to T(B) and B will request admission to T(A). If both A and B are honest, the admission requests will succeed in the common operating case when their groups are also honest, as shown in Figure 6. If one of the requests fails, then either Byzantine agreement will correct the groups as described earlier, or periodic pruning of trusted groups will ensure honest majority as described in the following section. In either case the honest peers can eventually authenticate each other.

[Figure: an authenticated communication path forms between peers A and B.]

Figure 6. Dynamics of authenticated communication.

4.4. Formation of Honest Majority Groups
Since honest members form trusted groups by following the membership control protocol, any provably malicious peers are deleted from trusted groups. On the other hand, if malicious peers can successfully masquerade as honest peers, then the continuous group migrations cause the distribution of covertly malicious parties to be the same as a random selection. Therefore, honest majority groups are formed with a probability greater than that of random selection.

A trusted group with 3t/(1 − 6φ) + 1 peers has honest majority if t peers are malicious or faulty. Because the value of φ does not change the behavior
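The fault tolerance bound t < (1 − 6φ)n/3 and the group size 3t/(1 − 6φ) + 1 used above can be checked numerically. The function names below are illustrative; both assume φ < 1/6.

```python
import math

# Sketch of the honest-majority bound: a group of n peers tolerates t
# malicious or faulty peers when t < (1 - 6*phi) * n / 3, where phi
# bounds the fraction of honest parties whose message delivery the
# active adversary can block (phi < 1/6 assumed).

def max_faulty(n, phi):
    """Largest t satisfying the strict inequality t < (1 - 6*phi)*n/3."""
    bound = (1 - 6 * phi) * n / 3
    return max(math.ceil(bound) - 1, 0)

def min_group_size(t, phi):
    """Smallest group with honest majority given t faulty peers:
    3*t / (1 - 6*phi) + 1 peers, rounded up."""
    return math.ceil(3 * t / (1 - 6 * phi)) + 1


print(max_faulty(100, 0.0))     # 33: the classical t < n/3 when phi = 0
print(max_faulty(100, 0.05))    # 23: tolerance shrinks as phi grows
print(min_group_size(10, 0.0))  # 31
```

Setting φ = 0 recovers the familiar Byzantine agreement threshold n > 3t; a larger φ leaves less room for faulty peers because the adversary can silence some honest responders.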
[Figure: the protocol stack — the Application layer consumes Authenticated Public Keys from the Authentication Protocol, which exchanges Authentication Protocol Messages through the Message Cache.]

Table 3
The cache record data structure.

lt      The Lamport timestamp
ct      The causal timestamp at source
src     Source
dest    Destination
mesg    The protocol message
encrypted timestamps by requiring the peer i to generate the pair {t, K_i⁻¹(t)} instead of the timestamp component t. Thus, the protocol processing outlined above would always pass timestamp values as pairs. The receivers would be required to verify the correctness before acting on a timestamp value. By the assumption of non-invertibility, the secure timestamp can be generated only by the peer i. Thus, for an honest peer i, it is impossible to forge the timestamp component representing the state at i. Since the authentication protocol requires correct operation only from the honest peers, secure timestamps are sufficient to preserve the correctness of the authentication protocol.

5.1. Complexity and coverage
Let the peers do an anti-entropy session with a randomly chosen peer every unit time. Consider a message transmitted at the first round of exchanges. If |T| = n, then we know the fraction f of initially uninfected peers is (n − 1)/n. Now at round i, if f_i is the fraction of uninfected peers, only f_i² peers remain uninfected with the update at round i + 1. Thus, on the average:

    f_{i+1} = f_i² = f^(2^i)

Thus the number of uninfected peers drops doubly exponentially with time. Since the number of exchanges initiated by a peer is one per unit time, the number of messages sent and received by a peer is in O(1 + 1/n).

5.2. Size of the message cache
Let µ be the rate at which messages are being created at the authentication protocol layer. Thus the cache gets µ new messages from the higher protocol every unit time. Now let us consider the situation at round i with respect to the messages created during the first round. Since the fraction of uninfected peers is the same as the probability P_d of the destination being uninfected, we have:

    P_d = f^(2^i)

Now assume that the destination gets infected at round i⋆. Again, by anti-entropy propagation of its stable read timestamp, we have the probability P that a peer is infected with the message but not with the stable read timestamp of the destination:

    P = (1 − f^(2^i)) f^(2^(i−i⋆))

Since infection with the update but not with the timestamp ensures that the message is not deleted, the expected fraction of cached messages is (1 − f^(2^i)) f^(2^(i−i⋆)). Again, the cached messages can be created at any previous exchange round. Thus we have the summation for log size N:

    N = Σ_{i=1..∞} Σ_{i⋆=1..i} nµ (1 − f^(2^i)) f^(2^(i−i⋆))        (1)

Relating the series to the integral ∫ e^(e^x) dx, which is evaluated using integration by parts, we have the following relation on the log size:

    N < µ (e log e / 2) n log((n+1)/n)

We know that log((n+1)/n) is O(log n / n). Also, the rate of message insertion is O(n) because messages are sent to all members of the trusted group. Hence the number of cached messages is in O(n log n).

6. Application

We have implemented the Byzantine fault tolerant authentication protocol as a standalone library to make it available to a variety of applications. Our first application target is an electronic mail authentication system implemented through a self-authenticating mail (SAM) client.

Electronic mail is one of the most popular applications on the Internet. Although it has gained wide acceptance both for business and personal use, its usage is limited by the lack of security in the mail transport protocol. Unlike conventional mail that can be signed by hand, electronic mail does not come with any inbuilt authentication mechanism. Thus, it is possible to forge the sender identity and modify contents en route. To be at par with paper documents, email must provide these minimal authentication capabilities in
a convenient and unobtrusive manner. As the email client is in first-hand contact with mail users, we need to address manual overrides over protocol decisions. Thus, the system will allow a manual override for all trustworthiness decisions. It can be observed that in the absence of the autonomous authentication protocol, such a system would be equivalent to PGP in terms of the trust model.

Compatibility with the existing email infrastructure is part of the design goals. We use SMTP extensions to enforce backward compatibility. The extension fields will carry protocol messages as part of the mail header. Since clients that cannot interpret the extended field will ignore it, the protocol data is an overhead for clients that cannot use it. We eliminate the overhead of sending the extension fields to non-SAM clients in the following manner. Identification of SAM end-points is enabled by requiring each application client to send messages with the extension field. If there are no outstanding protocol messages, then the sender sends an empty field for the extension. Thus all SAM-enabled addresses are identifiable on receipt of an email message.

A cache of recent deductions, such as the authenticated public key and the authentication protocol capability of a peer, is kept in the SAM client. The act of receiving an email message makes a SAM client aware of the capabilities at the sender. As the end result of protocol execution is the authentication of the public keys at mail addresses, the data associated with an end-point includes the known public key and the number of group migrations since it was first authenticated. This information helps to assign a confidence value to the authentication. We expect to gain real-life experience on the utility of Byzantine fault tolerant authentication by implementing self-authenticating mail.

7. Simulation

We devised a simulation system to investigate authentication cost in various scenarios. The simulation system is a stripped-down version of the authentication module implemented in the library. All the message transfers are replaced by function calls that update counters to simulate system activity. The simulation is designed to replicate the authentication protocol in a universe of peers. The trust relationships between the peers can be preset during bootstrapping. Application-level message exchange can be simulated to trigger authentication of unknown peers. The simulation consists of about 1500 lines of C++ code and uses the libc pseudo-random number generator to make application-level choices.

The scenarios of interest are bootstrapping and the authentication of new peers on an ongoing basis. The flow of time is uniformly measured in epochs, with each epoch consisting of the time needed for one cryptographic operation and one message transfer. This way of measuring time allows extrapolation of the simulation results to systems having various tradeoffs of processing power and network latency.⁸ It also allows us to state all simulation results in terms of the number of messages exchanged by a peer.

7.1. Bootstrapping Cost
The bootstrapping procedure requires the trusted peers to authenticate each other as if they were mutually probationary. This process was simulated with respect to the bootstrapping set size, the topology of the trust graph connecting the bootstrapping peers, and the proportion of malicious peers in the universe. We simulated the bootstrapping process on a universe of 1000 peers with trusted group sizes from 6 to 56.

We chose different types of trusted groups as shown in Figure 8. Bidirectional trust was used to create clusters of mutually trusting peers. This is realistic in a geographically local or in a small-worlds type of trusted environment. These clusters of bidirectional trust were perturbed by randomly selecting some of the members from the universe. This makes the trust unidirectional and imposes additional cost on the peers. The cost increases because it becomes less likely that the trusted peer can return a cached proof of possession.

⁸ The extrapolation should assign zero message transfer time and the measured cryptographic operation time to calculate the cost of public key infection.
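The extension-field scheme of Section 6 can be sketched with Python's standard email package. The header name X-SAM and the payload encoding below are hypothetical illustrations; the text only specifies that protocol messages travel in a mail header field that non-SAM clients ignore, and that the field is sent empty when no protocol messages are outstanding.

```python
from email.message import EmailMessage

# Illustrative sketch of the SAM extension field. "X-SAM" is an assumed
# header name; the paper does not fix the field name or encoding.

def attach_sam_field(msg, protocol_messages):
    # Every SAM client sends the extension field, empty when there are
    # no outstanding protocol messages, so SAM end-points remain
    # identifiable while non-SAM clients simply ignore the header.
    msg["X-SAM"] = ";".join(protocol_messages) if protocol_messages else ""
    return msg

def is_sam_sender(msg):
    # A received message marks its sender as SAM-enabled iff the
    # extension field is present, even with an empty value.
    return "X-SAM" in msg


msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.org"
msg["Subject"] = "hello"
msg.set_content("plain mail body")
attach_sam_field(msg, [])        # no outstanding protocol messages

print(is_sam_sender(msg))        # True: the empty field still marks SAM
```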
[Figure: plots of the number of messages exchanged. Figure 8 plots cost against the number of bootstrapping trusted peers (0–60) for bidirectional trust, 35% and 50% randomly trusted, and fully random trust. Figure 9 plots cost (log scale) against the fraction of malicious peers in the bootstrapping set for group sizes 8, 16, 31, and 56.]

Figure 8. Average Bootstrapping Cost per Peer.

Figure 9. Effect of Malicious Peers on the Average Bootstrapping Cost.

[Figure: log-scale plots of cost against the number of trusted peers (Figure 10) and against the proportion of malicious peers (Figure 11).]

Figure 10. Cost of Authentication.

Figure 11. Authentication with Malicious Peers.
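The doubly exponential spread analyzed in Section 5.1 (f_{i+1} = f_i²) can be reproduced with a short randomized simulation in the spirit of this section's experiments. The parameters and function name below are illustrative, not taken from the paper's simulator.

```python
import random

# Sketch of anti-entropy spread: every round, each peer exchanges state
# with one uniformly random partner, and infection travels in either
# direction of the exchange. The fraction of uninfected peers roughly
# squares each round, so full coverage takes only a handful of rounds.

def simulate_rounds(n, seed=1):
    random.seed(seed)
    infected = {0}                      # the message starts at one peer
    fractions = []                      # uninfected fraction after each round
    while len(infected) < n:
        new = set(infected)
        for peer in range(n):
            partner = random.randrange(n)
            if peer in infected or partner in infected:
                new.add(peer)
                new.add(partner)
        infected = new
        fractions.append(1 - len(infected) / n)
    return fractions


fractions = simulate_rounds(1000)
# Coverage of 1000 peers completes in a small number of rounds, and the
# uninfected fraction shrinks monotonically toward zero.
print(len(fractions), fractions[:3])
```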
weaker fault tolerance. We trade off the authentication strength and use stronger network assumptions to provide an autonomous and fault tolerant authentication mechanism. It can be argued that the public key infrastructure approach forces system designers to take a boolean approach to security. This has allowed most of the Internet traffic to remain insecure even though the computational power and software engineering needed to secure it are available. Byzantine fault tolerant authentication is useful because it allows the co-operative formation of self-authenticating systems.

The authentication mechanism has not addressed denial-of-service attacks. A possible solution for this problem would require keeping track of the cost incurred on behalf of other peers and making peers untrusted if they launch such attacks.

9. Related Work

Alternative approaches to provide fault tolerant authentication are known. There is a class of protocols relying on threshold cryptography that uses key shares with the following property: without a quorum of participants, it is impossible to create a digitally signed public key certificate [14]. As a consequence, unless the number of malicious parties is as large as the quorum, false authentication is impossible. Threshold cryptography requires the existence of a trusted dealer that initializes the key shares. In this way, it depends on the honesty of the dealer. Threshold cryptography has been used in COCA, a fault tolerant public key authentication service [15], and as the basis of a number of other secure services [16–18].

Threshold cryptography is also used to implement proactive recovery. To compromise such a system, the adversary is required to compromise the quorum within its vulnerability window or lose any previous progress due to a re-randomization of key shares [19]. Although better than static key shares, the scheme cannot recover from the compromise of a quorum because the same long-term shared secret is recycled among the trusted parties. In contrast, distributed authentication is proactively secure in the sense that it holds no long-term secrets.

Role-based access control in a distributed system has been studied earlier [20]. It uses roles instead of identities for granting access and
therefore avoids the issue of identity authentication [21]. Reputation-based distributed trust has been investigated by a number of previous works. The Free Haven project uses a proactive mechanism based on recommendations to protect the anonymity of the users [22]. NICE allows the creation of trustworthy peer groups through a trust evaluation mechanism based on reputation [23]. Both systems aggressively eliminate (overtly) malicious parties to preserve their correctness. Byzantine fault tolerant authentication builds upon this idea of peer reputation. It does not require external identity authentication and works on provable observations rather than recommendations.

PGP has applied decentralized trust to authentication in distributed systems [7,8]. However, it requires human evaluation of trustworthiness, which limits its applicability for unsophisticated users and autonomous systems [9]. Trusted ambient communities [24] is an approach that incrementally builds trusted groups by observing the behavior of securely initialized peers. It is more permissive of malicious peers than our authentication mechanism, which needs verifiable challenge-response proofs. Cryptographic identifiers [25] provide authenticated identities by selecting a network identity related to the public key. This mechanism does not handle the man-in-the-middle attack. This approach is simple but constrained by the need to acquire specific network identifiers. A randomized approach to setting up weakly secure peer-to-peer networks is used in Smart Dust [26]. Although it does not focus on authentication, it shares distributed trust and a weakened adversary model with Byzantine fault tolerant authentication.

10. Conclusion and Future Work

Byzantine fault tolerant distributed authentication provides a new approach to tackle the authentication problems of distributed and peer-to-peer systems. The salient features are the lack of total trust and of single points of failure. Our approach allows a natural growth of trust without requiring trustworthy hierarchies of delegating and recommending parties as done in other trust management systems [27,28]. Our approach is made feasible by weakening the network-is-the-adversary model.

The rise of peer-to-peer systems on the Internet, and the popularity of wireless communication, are the prime motivations behind the change of the underlying model. Although weaker than the traditional model in terms of adversary power, it is stronger in terms of fault tolerance. Byzantine fault tolerant authentication was implemented as a library which will be used to build a secure e-mail system. The system will support a set of e-mail clients that will authenticate each other through an underlying distributed trust mechanism.

11. Acknowledgments

This work is supported in part by the National Science Foundation under CCR-0133366 and ANI-0121416. The authors thank Dr. Marios D. Dikaiakos for his constructive feedback and suggestions. The authors are thankful to the anonymous reviewers for their thoughtful comments that helped improve the quality of this paper.

REFERENCES

1. W. Diffie, M. Hellman, New Directions in Cryptography, IEEE Trans. Info. Theory 22 (1976) 644–654.
2. R. L. Rivest, A. Shamir, L. Adleman, A method for obtaining digital signatures and public-key cryptosystems, Communications of the ACM 21 (2) (1978) 120–126.
3. CCITT, The Directory Authentication Framework, Recommendation X.509 (1988).
4. M. Naor, K. Nissim, Certificate Revocation and Certificate Update, in: Proceedings 7th USENIX Security Symposium (San Antonio, Texas), 1998.
5. B. Fox, B. LaMacchia, Certificate Revocation: Mechanics and Meaning, in: R. Hirschfeld (Ed.), FC'98: International Conference on Financial Cryptography, Vol. 1465 of Lecture Notes in Computer Science, Springer-Verlag, 1998, pp. 158–164.
6. W. Aiello, S. Lodha, R. Ostrovsky, Fast digital identity revocation, in: Advances in Cryptology - CRYPTO '98, 18th Annual International Cryptology Conference, Santa Barbara, California, USA, August 23-27, 1998, Proceedings, Vol. 1462 of Lecture Notes in Computer Science, Springer, 1998, pp. 137–152.
7. P. Zimmermann, The Official PGP User's Guide, MIT Press, Cambridge, Massachusetts, 1995.
8. S. Garfinkel, PGP: Pretty Good Privacy, O'Reilly & Associates, Inc., Cambridge, MA, 1995.
9. A. Whitten, J. D. Tygar, Why Johnny can't encrypt: A usability evaluation of PGP 5.0, in: Proceedings of the 8th USENIX Security Symposium, 1999.
10. L. Lamport, R. Shostak, M. Pease, The Byzantine generals problem, ACM Transactions on Programming Languages and Systems (TOPLAS) 4 (3) (1982) 382–401.
11. J. Douceur, The Sybil attack, in: Proc. of the IPTPS02 Workshop, Cambridge, 2002.
12. P. Syverson, I. Cervesato, The Logic of Authentication Protocols, Lecture Notes in Computer Science 2171.
13. L. Lamport, Time, clocks, and the ordering of events in a distributed system, Commun. ACM 21 (7) (1978) 558–565.
14. S. Goldwasser, S. Micali, R. L. Rivest, A Digital Signature Scheme Secure Against Adaptive Chosen-Message Attacks, SIAM J. Comput. 17 (2) (1988) 281–308.
15. L. Zhou, F. B. Schneider, R. van Renesse, COCA: A Secure Distributed On-line Certification Authority, Tech. Rep. 2000-1828, Department of Computer Science, Cornell University, Ithaca, NY USA (December 2000).
16. C. Cachin, Distributing trust on the Internet, in: International Conference on Dependable Systems and Networks (DSN2001), Göteborg, Sweden, IEEE, 2001.
17. M. K. Reiter, The Rampart Toolkit for Building High-Integrity Services, in: Dagstuhl Seminar on Distributed Systems, 1994, pp. 99–110.
18. L. Zhou, Z. Haas, Securing Ad Hoc Networks, IEEE Network Magazine 13 (6).
19. R. Canetti, R. Gennaro, A. Herzberg, D. Naor, Proactive Security: Long-term protection against break-ins, RSA CryptoBytes 3 (1) (1997) 1–8.
20. J. Bacon, K. Moody, W. Yao, Access Control and Trust in the Use of Widely Distributed Services, Lecture Notes in Computer Science 2218 (2001) 295+.
21. R. S. Sandhu, E. J. Coyne, H. L. Feinstein, C. E. Youman, Role-based access control models, IEEE Computer 29 (2) (1996) 38–47.
22. R. Dingledine, M. J. Freedman, D. Molnar, The Free Haven project: Distributed anonymous storage service, Lecture Notes in Computer Science 2009.
23. S. Lee, R. Sherwood, S. Bhattacharjee, Cooperative Peer Groups in NICE, in: INFOCOM, 2003.
24. S. U. V. Legrand, D. Hooshmand, Trusted Ambient community for self-securing hybrid networks, INRIA Research Report, 5027 (2003).
25. G. Montenegro, C. Castelluccia, Crypto-based identifiers (CBIDs): Concepts and applications, ACM Trans. Inf. Syst. Secur. 7 (1) (2004) 97–127.
26. R. Anderson, H. Chan, A. Perrig, Key infection: Smart trust for smart dust, in: Proceedings of IEEE International Conference on Network Protocols (ICNP 2004), 2004.
27. M. Blaze, J. Ioannidis, A. D. Keromytis, Trust Management and Network Layer Security Protocols, in: Cambridge Security Protocols International Workshop, 1999, pp. 103–118.
28. R. Yahalom, B. Klein, T. Beth, Trust relationships in secure systems—a distributed authentication perspective, in: Proceedings of the 1993 IEEE Symposium on Research in Security and Privacy, 1993, pp. 150–164.