
ANALYSIS OF LINEAR COMBINATION ALGORITHMS IN CRYPTOGRAPHY

PETER J. GRABNER, CLEMENS HEUBERGER, HELMUT PRODINGER, AND JÖRG M. THUSWALDNER
Abstract. Several cryptosystems rely on fast calculations of linear combinations in groups. One way to achieve this is to use joint signed binary digit expansions of small weight. We study two algorithms, one based on non-adjacent forms of the coefficients of the linear combination, the other based on a certain joint sparse form specifically adapted to this problem. Both methods are sped up using the sliding windows approach combined with precomputed lookup tables. We give explicit and asymptotic results for the number of group operations needed assuming uniform distribution of the coefficients. Expected values, variances and a central limit theorem are proved using generating functions.
Furthermore, we provide a new algorithm which calculates the digits of an optimal expansion of pairs of integers from left to right. This avoids storing the whole expansion, which is needed with the previously known right-to-left methods, and allows an online computation.
1. Introduction
In many public key cryptosystems, raising one or more elements of a given group to large powers plays an important role (cf. for instance [2, 8]). In practice, the underlying groups are often chosen to be the multiplicative group of a finite field $\mathbb{F}_q$ or the group of points of an elliptic curve (elliptic curve cryptosystems).
Let $P$ be an element of a given group, whose group law will be written additively throughout the paper. What we need is to form $nP$ for large $n \in \mathbb{N}$ in a short amount of time. One way to do this is the binary method (cf. [12]). This method uses the operations of doubling and adding $P$. If we write $n$ in its binary representation, the number of doublings is fixed by $\lfloor \log_2 n \rfloor$ and each one in this representation corresponds to an addition. Thus the cost of the multiplication depends on the length of the binary representation of $n$ and the number of ones in this representation. The goal of the methods presented in this paper is to decrease the cost by finding representations of integers containing few nonzero digits.
If addition and subtraction are equally costly in the underlying group, it makes sense to work with signed binary representations, i.e., binary representations with digits $\{0, \pm 1\}$. The advantage of these representations is their redundancy: in general, $n$ has many different signed binary representations. Let $n$ be written in a signed binary representation. Then the number of non-zero digits is called the Hamming weight of this representation. Since each non-zero digit causes a group addition ($1$ causes addition of $P$, $-1$ causes subtraction of $P$), one is interested in finding a representation of $n$ having minimal Hamming weight. Such a minimal representation was exhibited by Reitwiesner [10]. Since it has no adjacent non-zero digits, this type of representation is often called non-adjacent form or NAF, for short. On average, only one third of the digits of a NAF is different from zero. Morain and Olivos [9] first observed that NAFs are useful for calculating $nP$ for large $n$ quickly.
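The binary method just described admits a compact sketch in Python. The group is passed in abstractly through the `add`, `dbl` and `zero` arguments (an illustrative interface of our own, not notation from the paper); the toy check uses the additive group of the integers, where $nP$ is ordinary multiplication.

```python
def binary_method(n, P, add, dbl, zero):
    """Compute n*P by the binary method: one doubling per bit of n,
    plus one addition of P per 1-bit (n > 0)."""
    result = zero
    for bit in bin(n)[2:]:           # most significant bit first
        result = dbl(result)
        if bit == '1':
            result = add(result, P)
    return result

# Toy group: (Z, +), so "n*P" is ordinary multiplication.
print(binary_method(29, 7, lambda a, b: a + b, lambda a: 2 * a, 0))  # prints 203
```

The cost is visible in the structure: one doubling per bit and one addition per one-bit, which is exactly what the NAF-based methods below improve on.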
Recently, Solinas [11] considered the problem of computing $mP + nQ$ at once, without computing each of the summands separately. Using unsigned binary representations of $m$ and $n$ this can
Date: August 5, 2003.
2000 Mathematics Subject Classification. Primary: 11A63; Secondary: 94A60, 68W40.
This author is supported by the START-project Y96-MAT of the Austrian Science Fund.
This author is supported by the grant S8307-MAT of the Austrian Science Fund.
This author is supported by the grant NRF 2053748 of the South African National Research Foundation.
This author is supported by the grant S8310-MAT of the Austrian Science Fund.
be done with the help of the operations of doubling and adding $P$, $Q$, or $P + Q$. We are again interested in diminishing the number of additions. Assume that the additions of the three quantities are equally costly. If we write the binary representations $m = \sum_j a_j 2^j$ and $n = \sum_j b_j 2^j$ in the form
\[ \begin{pmatrix} a_J & \cdots & a_0 \\ b_J & \cdots & b_0 \end{pmatrix}, \]
the cost of the calculation of $mP + nQ$ depends on the number of non-zero columns in this joint representation. This number is called the joint Hamming weight of this (joint) representation. If addition and subtraction are equally costly, again by using signed representations of $m$ and $n$, one can reduce the joint Hamming weight considerably (note that for signed representations we have to deal with the addition of $P$, $Q$, $P + Q$ and $P - Q$). One way to do this consists in writing $m$ and $n$ in their NAF. However, in the above mentioned paper, Solinas found an even cheaper way of representing $m$ and $n$: the so-called Joint Sparse Form. It turns out that his construction yields the minimal joint Hamming weight among all joint expansions of two numbers. In Grabner et al. [3] this concept was simplified and extended to the joint representation of $d \ge 2$ numbers, and its properties were studied in detail. The representation used in [3] is therefore called Simple Joint Sparse Form, or SJSF for short.
The detailed definitions of joint NAFs and SJSF for $d$ integers will be given in Section 2 and Section 3, respectively.
In all these algorithms we determined $nP$ and $mP + nQ$ by doubling and adding some quantities. There is a modification of these algorithms by using so-called windows or window methods (cf. for instance Gordon [2, Section 3] or Avanzi [1]). This is a rather easy concept. We will explain it for the case of the computation of $mP + nQ$ with $m, n$ written in binary representation. First select a window size $w$. Then precompute all sums of the form $rP + sQ$ such that $r$ and $s$ have a binary representation of length at most $w$. Now we can compute $mP + nQ$ by multiplying by $2^w$ and adding one of the precomputed values. Of course, this makes the algorithms faster at the cost of precomputation tables. There are many ways to refine this concept and to consider adaptions which are suitable to special representations. An easy modification consists in jumping over zero vectors at the beginning of a window. If we use window methods where zero digits are forced after a bounded number of non-zero digits, we may adapt the size of the window after each step in order to exploit these zeros. The latter modification is possible for instance in the case of the SJSF. We will explain all this in more detail when we apply windows to our algorithms later.
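A minimal Python sketch of this window method for $mP + nQ$ (unsigned binary digits, fixed window size $w$; the integer group stands in for the abstract group, and the brute-force table construction is purely illustrative):

```python
def joint_multiply(m, n, P, Q, w=2):
    """Compute m*P + n*Q in the toy group (Z, +), scanning the binary digits
    of m and n jointly, w columns at a time, with a precomputed table of
    r*P + s*Q for 0 <= r, s < 2^w."""
    table = {(r, s): r * P + s * Q
             for r in range(2 ** w) for s in range(2 ** w)}
    J = max(m.bit_length(), n.bit_length())
    J += (-J) % w                        # pad the length to a multiple of w
    result = 0
    for j in range(J - w, -1, -w):       # windows from the top down
        for _ in range(w):
            result = 2 * result          # w doublings per window
        r = (m >> j) & (2 ** w - 1)
        s = (n >> j) & (2 ** w - 1)
        if (r, s) != (0, 0):
            result = result + table[(r, s)]   # one table addition
    return result
```

For example, `joint_multiply(29, 13, 5, 3, w=2)` returns `29*5 + 13*3 = 184`; larger `w` trades additions for a bigger table, which is exactly the trade-off analyzed below.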
In the present paper we are concerned with the joint representation of $d$-tuples of integers. In Section 2 we dwell upon joint NAFs with windows. In particular, we give a detailed analysis of the average cost of calculating linear combinations $n_1 P_1 + \cdots + n_d P_d$ by examining the joint Hamming weight of joint NAFs. We give expressions for the average cost, its variance as well as its distribution. This extends and refines the work of Avanzi [1].
In Section 3 we perform a detailed runtime analysis of the SJSF of $d$ integers using window methods. Contrary to the joint NAF, the SJSF guarantees that after at most $d$ non-zero columns (or digits) there occurs a zero column. It is natural to adapt the size of the windows dynamically in a way that we can expect zero columns at the beginning of each new window. In this way we can exploit the existing zeros in an optimal way. In this case it is a nontrivial problem to compute the size of the precomputation tables. We give an asymptotic formula for their size.
From the way we calculate the linear combinations $n_1 P_1 + \cdots + n_d P_d$ we see that we proceed through the representations of $n_1, \ldots, n_d$ starting from their most significant digit down to their least significant digit, or, in other words, from left to right. Unfortunately, as Avanzi [1] and Solinas [11] both regret, the known algorithms for the SJSF produce the representations from right to left. This has the disadvantage that we need to calculate the whole SJSF representation from right to left before we can start to apply it from left to right in order to compute our linear combinations. Especially if we have to deal with long representations this requires a large amount of memory.
Our last section, however, is devoted to a remedy to this unfortunate situation by providing a transducer (with 32 essential states) which constructs a minimal joint representation from left to right for $d = 2$. This is done by first writing each of the numbers $m$, $n$ separately in a representation which gives us some freedom in changing their digits locally reading from the left. Because of its resemblance to the well-known greedy expansion we call this representation the alternating greedy expansion. Starting from this expansion we succeeded in constructing a minimal joint representation of $m$, $n$ from left to right.
2. Joint non-adjacent form

The present section is devoted to the complexity analysis of the joint distribution of integers in non-adjacent form. Recall that a NAF is a signed binary representation of an integer $x$ of the shape
\[ x = \sum_{j=0}^{J} x_j 2^j \quad\text{with } x_j \in \{0, \pm 1\} \]
such that $x_j x_{j+1} = 0$ for all $j \in \{0, \ldots, J-1\}$. For a given integer it is possible to compute its NAF with the help of the easy Algorithm 1.
Algorithm 1 Calculation of the non-adjacent form.
Input: an integer $x$
Output: $(x_j)_{0 \le j}$, the non-adjacent form of $x$
  j ← 0
  while x ≠ 0 do
    x_j ← x mod 2
    if x_j = 1 and (x − x_j)/2 ≡ 1 (mod 2) then
      x_j ← −x_j
    end if
    x ← (x − x_j)/2
    j ← j + 1
  end while
Note that Algorithm 1 is the same as the algorithm for computing the simple joint sparse form for $d = 1$ (see Section 3). This algorithm can easily be interpreted as a three state transducer (cf. Figure 1).

[Figure 1. Transducer to compute the NAF from right to left.]

Using this transducer, an easy calculation yields that the expected value of the Hamming weight of an expansion of length $J$ is $J/3$.
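Algorithm 1 translates directly into Python. The sketch below also checks the non-adjacency property and, empirically on a small range, that roughly one third of the digits are nonzero (the $J/3$ average just mentioned).

```python
def naf(x):
    """Non-adjacent form of x >= 0, least significant digit first (Algorithm 1)."""
    digits = []
    while x != 0:
        d = x % 2
        if d == 1 and ((x - d) // 2) % 2 == 1:
            d = -1                        # pick -1 so that the next digit is 0
        digits.append(d)
        x = (x - d) // 2
    return digits

for x in range(1, 1000):
    ds = naf(x)
    assert sum(d * 2 ** j for j, d in enumerate(ds)) == x           # value recovered
    assert all(ds[j] * ds[j + 1] == 0 for j in range(len(ds) - 1))  # NAF condition

weights = [sum(d != 0 for d in naf(x)) for x in range(2 ** 12)]
print(sum(weights) / len(weights) / 12)   # empirically close to 1/3
```

For instance, `naf(29)` is `[1, 0, -1, 0, 0, 1]`, i.e. $29 = 1 - 4 + 32$ with no two adjacent nonzero digits.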
Let $d \in \mathbb{N}$. In what follows we will investigate $d$-tuples of NAFs. Such a $d$-tuple will be called a joint NAF. Joint NAFs can be regarded as finite sequences of $d$-dimensional vectors. We will write $d$-dimensional vectors in boldface. For the coordinates of a vector we will use the notational convention
\[ \mathbf{x} = (x^{(1)}, \ldots, x^{(d)}). \]
We will set up an easy probabilistic model. Define the space
\[ \mathcal{N}_d := \bigl\{ (\ldots, \mathbf{x}_1, \mathbf{x}_0) \in (\{0, \pm 1\}^d)^{\mathbb{N}_0} \bigm| \forall j \in \mathbb{N}_0,\ k \in \{1, \ldots, d\}:\ x_j^{(k)} x_{j+1}^{(k)} = 0 \bigr\}, \]
whose elements will be called infinite joint NAFs. On $\mathcal{N}_d$ we define a probability measure as the image of the Haar measure on $\mathbb{Z}_2^d := (\{0,1\}^d)^{\mathbb{N}_0}$ via the map $\mathbb{Z}_2^d \to \mathcal{N}_d$ given by Algorithm 1. The joint Hamming weight of a joint NAF $(\mathbf{x}_j)_{j \ge 0}$ is the number of $j \in \mathbb{N}_0$ with $\mathbf{x}_j \ne \mathbf{0}$. In order to derive results on the distribution of the Hamming weight of NAFs we need information on the number of nonzero entries in a vector $\mathbf{x}_j$. Thus we set
\[ (2.1)\qquad A(\mathbf{x}) := \bigl\{ k \in \{1, \ldots, d\} \bigm| x^{(k)} \ne 0 \bigr\}. \]
Let $\mathbf{x}, \mathbf{y} \in \{0, \pm 1\}^d$ satisfy $x^{(k)} y^{(k)} = 0$ for all $k \in \{1, \ldots, d\}$. We define the random variable $\mathbf{X}_j$ to be the $j$-th column of an infinite joint NAF. Then, keeping track of Algorithm 1 for each of the coordinates, we derive
\[ (2.2)\qquad \mathbb{P}(\mathbf{X}_{j+1} = \mathbf{y} \mid \mathbf{X}_j = \mathbf{x}) = 2^{-d - \#A(\mathbf{y}) + \#A(\mathbf{x})} \]
and
\[ (2.3)\qquad \mathbb{P}(\mathbf{X}_0 = \mathbf{y}) = 2^{-d - \#A(\mathbf{y})}. \]
As mentioned above, we are only interested in the Hamming weight of joint NAFs. Thus it suffices to consider the random variables $\#A(\mathbf{X}_j)$ rather than $\mathbf{X}_j$ itself. Using (2.2) we easily derive
\[ p_{k,\ell} := \mathbb{P}\bigl( \#A(\mathbf{X}_{j+1}) = \ell \bigm| \#A(\mathbf{X}_j) = k \bigr) = 2^{-(d-k)} \binom{d-k}{\ell}. \]
These quantities will be helpful in order to study the number of group additions required for multiple exponentiation algorithms which are used in cryptography (cf. Avanzi [1]). As mentioned in the introduction, such algorithms can be accelerated by using window methods (cf. for instance Gordon [2]). Suppose we want to compute the linear combination $x^{(1)} P_1 + \cdots + x^{(d)} P_d$ in an Abelian group $G$ using joint NAFs with window length $w$. Then we need a table of precomputed values given by
\[ \mathrm{PreComp}_{d,w} := \Bigl\{ \sum_{j=0}^{w-1} 2^j \bigl( y_j^{(1)} P_1 + \cdots + y_j^{(d)} P_d \bigr) \Bigm| (\mathbf{y}_0, \ldots, \mathbf{y}_{w-1}) \in \{0, \pm 1\}^{dw} / \pm 1,\ \mathbf{y}_0 \ne \mathbf{0} \Bigr\}. \]
It is clear that larger windows lead to fewer group additions, at the cost of larger precomputation tables on the other hand. From Avanzi [1] we know that
\[ \#\mathrm{PreComp}_{d,w} = \frac{I_w^d - I_{w-1}^d}{2} \qquad\text{with}\qquad I_w := \frac{2^{w+2} - (-1)^w}{3}. \]
This follows easily by noting that $I_w$ is equal to the number of NAFs of length $w$, which can be computed by analyzing Algorithm 1.
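The count can be confirmed by brute force for small parameters: the sketch below enumerates all length-$w$ NAF strings, forms all $d$-tuples with nonzero first column, and compares the result (up to the $\pm 1$ symmetry) with the formula.

```python
from itertools import product

def nafs(w):
    """All NAF digit strings (x_0, ..., x_{w-1}) of length w."""
    return [t for t in product((-1, 0, 1), repeat=w)
            if all(t[j] * t[j + 1] == 0 for j in range(w - 1))]

def precomp_size(d, w):
    """#PreComp_{d,w}: d-tuples of length-w NAFs, first column nonzero, mod +-1."""
    tuples = [y for y in product(nafs(w), repeat=d) if any(t[0] != 0 for t in y)]
    return len(tuples) // 2

I = lambda w: (2 ** (w + 2) - (-1) ** w) // 3
assert all(precomp_size(d, w) == (I(w) ** d - I(w - 1) ** d) // 2
           for d in (1, 2, 3) for w in (1, 2, 3))
print(precomp_size(2, 2))   # prints 8
```

Here $I_{w-1}^d$ counts the tuples whose first column is zero (each coordinate then starts with the digit $0$), which is where the difference $I_w^d - I_{w-1}^d$ comes from.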
We now want to examine Algorithm 2, which produces joint NAFs using windows. In particular, we want to derive distribution results for the random variable $W_{n,w}$ which counts the number of group additions in $G$ when this algorithm is applied to $(\mathbf{X}_{n-1}, \ldots, \mathbf{X}_0)$ (i.e., $W_{n,w}$ counts the number of windows that are opened by Algorithm 2; in other words, it counts how many group additions are required in order to compute a linear combination of group elements using the joint NAF). Since $w$ is fixed we will write $W_n$ instead of $W_{n,w}$.
To this matter we study the bivariate generating function
\[ F(y, z) := \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} \mathbb{P}(W_n = m)\, y^m z^n. \]
In order to get a closed expression for this function we note first that in view of Algorithm 2 each $d$-tuple of NAFs can be written using the regular expression
\[ (2.4)\qquad (0^* N B^{w-1})^*\, 0^*\, (\varepsilon \cup N B^{v}), \qquad 0 \le v \le w-2. \]
Here $B := \{0, \pm 1\}^d$, $N := B \setminus \{\mathbf{0}\}$ and $\varepsilon$ is the empty word. In addition, each coordinate has to satisfy the NAF condition. Note that each occurrence of $N$ in this regular expression causes a group addition in Algorithm 2. Thus, we have to label each occurrence of $N$ with $y$. Labelling each digit with $z$ will lead to the desired function. As usual, we encode the Markov chain defined by $p_{k,\ell}$ by matrices. Denote the $(d+1)$-dimensional identity matrix by $I$
Algorithm 2 Calculating linear combinations using joint NAF with windows.
Input: $P_1, \ldots, P_d \in G$; $X \in \{0, \pm 1\}^{d(J+1)}$, a joint NAF of $\mathbf{x} \in \mathbb{N}^d$; $w \in \mathbb{N}$; $\mathrm{PreComp}_{d,w}$
Output: $P = x^{(1)} P_1 + \cdots + x^{(d)} P_d$
  P ← 0
  j ← J
  while j ≥ 0 do
    while j ≥ 0 and x_j = 0 do
      P ← 2P
      j ← j − 1
    end while
    if j < 0 then break end if
    if j ≥ w then
      j ← j − w
    else
      w ← j + 1
      j ← −1
    end if
    for k = 1, 2, ..., d do
      f^(k) ← Σ_{r=0}^{w−1} x^{(k)}_{j+1+r} 2^r
    end for
    s ← largest integer s ≥ 0 such that 2^s | f^(k) for all k ∈ {1, ..., d}
    for k = 1, 2, ..., d do
      f^(k) ← f^(k) / 2^s
    end for
    P ← 2^{w−s} P
    P ← P + Σ_{k=1}^{d} f^(k) P_k          ▹ this can be looked up for free in PreComp_{d,w}
    P ← 2^s P
  end while
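As a working sketch of Algorithm 2, again in the toy group $(\mathbb{Z}, +)$ and with the table lookup simulated by direct evaluation, including the leading-zero skipping and the shifting out of the common factor $2^s$ (the helper `naf` is as in the earlier sketch):

```python
def naf(x):
    """Non-adjacent form of x >= 0, least significant digit first."""
    digits = []
    while x:
        d = x % 2
        if d == 1 and ((x - d) // 2) % 2 == 1:
            d = -1
        digits.append(d)
        x = (x - d) // 2
    return digits

def linear_combination(ns, Ps, w):
    """Compute sum n_k * P_k from the joint NAF of the n_k with windows of
    length w, following the structure of Algorithm 2 (group = (Z, +))."""
    jn = [naf(n) for n in ns]
    J = max(len(a) for a in jn) - 1
    col = lambda j: [a[j] if j < len(a) else 0 for a in jn]
    P, j = 0, J
    while j >= 0:
        while j >= 0 and all(c == 0 for c in col(j)):
            P = 2 * P                       # skip zero columns by doubling
            j -= 1
        if j < 0:
            break
        width = min(w, j + 1)               # window covers columns j ... j-width+1
        f = [sum(col(j - width + 1 + r)[k] * 2 ** r for r in range(width))
             for k in range(len(ns))]
        s = 0
        while all(v % 2 == 0 for v in f):   # strip the common factor 2^s
            f = [v // 2 for v in f]
            s += 1
        P = 2 ** (width - s) * P
        P += sum(fk * Pk for fk, Pk in zip(f, Ps))   # the one table addition
        P = 2 ** s * P
        j -= width
    return P

print(linear_combination([29, 13], [5, 3], 2))   # prints 184 = 29*5 + 13*3
```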
and set
\[ P := z\,(p_{k,\ell})_{0 \le k, \ell \le d}, \qquad Z := z\,([\ell = 0]\, p_{k,\ell})_{0 \le k, \ell \le d}, \qquad G := z\,([\ell > 0]\, p_{k,\ell})_{0 \le k, \ell \le d}, \qquad L := yI, \]
where Iverson's notation, popularized in [5], has been used: $[P]$ is defined to be $1$ if condition $P$ is true, and $0$ otherwise. By inspecting (2.4) we get
\[ (2.5)\qquad F(y, z) = (1, 0, \ldots, 0)\, S(y, z)^{-1}\, T(y, z)\, (1, \ldots, 1)^T \]
with
\[ S(y, z) := I - (I - Z)^{-1} L G P^{w-1}, \qquad T(y, z) := (I - Z)^{-1} \bigl( I + L G (I - P^{w-1})(I - P)^{-1} \bigr). \]
We mention that $S^{-1}$ represents the expression $(0^* N B^{w-1})^*$ in (2.4), while $T$ represents the possible tails after this expression. Note that the addition occurring in $T$ encodes the two alternatives in (2.4). The vectors at the left and right hand side of the matrix expression for $F$ can be explained as follows. Because of
\[ \mathbb{P}(\mathbf{X}_0 = \mathbf{y}) = \mathbb{P}(\mathbf{X}_{j+1} = \mathbf{y} \mid \mathbf{X}_j = \mathbf{0}) \]
we always start with the digit vector $\mathbf{0}$. On the other hand, we can end up with an arbitrary digit vector.
One can easily imagine that the expression (2.5) becomes more and more complex for increasing dimensions $d$. Using Mathematica, we computed $F$ explicitly for $1 \le d \le 5$. To give an
impression of the expressions obtained, we include the resulting generating function for $d = 1$:
\[ F(y, z) = \frac{4y(-1+z)z^w - (-2)^w \bigl( -6 + 2yz^w + z(6 + y(-3 + z^w)) \bigr)}{(-1+z)\bigl( -4y(-1+z)z^w + (-2)^w ( -6 + 2yz^w + z(3 + yz^w) ) \bigr)}. \]
Since these expressions become very large for $d \ge 2$, we refrain from writing them down here and refer to the file which is available at [4].
As usual, $\mathbb{E}(W_n)$ and $\mathbb{E}(W_n(W_n - 1))$ are computed by taking the first and second derivative with respect to $y$, respectively, and setting $y = 1$. From this we can easily calculate the variance $\mathbb{V}(W_n)$.
In view of (2.5) it is clear that for each choice of $(d, w)$ we obtain a rational function $F$. The main term in the asymptotic expansion of $\mathbb{E}(W_n)$ and $\mathbb{V}(W_n)$ comes from the dominant double and triple pole of $F$, respectively. We state exactly those results which fit into one line; the others (main terms and constant terms for $\mathbb{E}(W_n)$ and $\mathbb{V}(W_n)$ and $1 \le d \le 5$) are available at [4]. For $d = 1$, we have
\[ \mathbb{E}(W_n) = \frac{3(-2)^w}{(-2)^w(4 + 3w) - 4}\, n + \frac{3(-2)^w w \bigl( -8 - (-2)^w + 3(-2)^w w \bigr)}{2 \bigl( (-2)^w(4 + 3w) - 4 \bigr)^2} + O(\rho_w^n), \]
\[ \mathbb{V}(W_n) = \frac{12 \bigl( 5(-8)^w - 4^w - 4(-2)^w \bigr)}{\bigl( (-2)^w(4 + 3w) - 4 \bigr)^3}\, n + O_w(1) \]
for some $|\rho_w| < 1$. For $d = 2$, we get
\[ \mathbb{E}(W_n) = \frac{3(-2)^w \bigl( 8 + 9(-2)^w \bigr)}{-16 + 16 \cdot 4^w + 24(-2)^w w + 27 \cdot 4^w w}\, n + O_w(1), \]
\[ \mathbb{V}(W_n) = \frac{48 \bigl( 1 + (-2)^w \bigr)(-2)^w \bigl( -1 + (-2)^w \bigr) \bigl( 128 + 464(-2)^w + 560 \cdot 4^w + 261(-8)^w \bigr)}{\bigl( -16 + 16 \cdot 4^w + 24(-2)^w w + 27 \cdot 4^w w \bigr)^3}\, n + O_w(1), \]
and for d = 3,
E(W
n
)
9(2)
w
(16 + 36(2)
w
+ 37 4
w
+ 21(8)
w
)
64 64(2)
w
+ 64(8)
w
+ 64 16
w
+ 144(2)
w
w + 324 4
w
w + 333(8)
w
w + 189 16
w
w
n.
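These closed forms can be checked against the entries of Table 1 with exact rational arithmetic; the following sketch evaluates the main terms for $d = 1$ and $d = 2$ at $w = 1, 2, 3$ and compares with the tabulated fractions.

```python
from fractions import Fraction as F

def e1(w):   # main term of E(W_n)/n for d = 1
    return F(3 * (-2) ** w, (-2) ** w * (4 + 3 * w) - 4)

def v1(w):   # main term of V(W_n)/n for d = 1
    return F(12 * (5 * (-8) ** w - 4 ** w - 4 * (-2) ** w),
             ((-2) ** w * (4 + 3 * w) - 4) ** 3)

def e2(w):   # main term of E(W_n)/n for d = 2
    return F(3 * (-2) ** w * (8 + 9 * (-2) ** w),
             -16 + 16 * 4 ** w + 24 * (-2) ** w * w + 27 * 4 ** w * w)

assert [e1(w) for w in (1, 2, 3)] == [F(1, 3), F(1, 3), F(2, 9)]
assert [v1(w) for w in (1, 2, 3)] == [F(2, 27), F(2, 27), F(2, 81)]
assert [e2(w) for w in (1, 2, 3)] == [F(5, 9), F(11, 27), F(32, 117)]
```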
We list these main terms for the pairs $(d, w)$ with $1 \le d \le 5$ and $w \le 3$ in Table 1.
Since the generating function $F(y, z)$ fits into the general scheme of H.-K. Hwang's quasi-power theorem (cf. [7]), the random variable $W_n$ satisfies a central limit theorem:
\[ \lim_{n \to \infty} \mathbb{P}\Bigl( W_n \le \mathbb{E}(W_n) + x \sqrt{\mathbb{V}(W_n)} \Bigr) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\, dt. \]
Theorem 1. Let $w \ge 1$, $d \ge 1$ and $W_{n,w}$ be the random variable counting the number of group additions when calculating $X_1 P_1 + \cdots + X_d P_d$ using Algorithm 2, where $X_1, \ldots, X_d$ are independent random variables uniformly distributed on $\{0, \ldots, 2^n - 1\}$. Then
\[ \lim_{n \to \infty} \mathbb{P}\Bigl( W_{n,w} \le \mathbb{E}(W_{n,w}) + x \sqrt{\mathbb{V}(W_{n,w})} \Bigr) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\, dt, \]
where $\mathbb{E}(W_{n,w})$ and $\mathbb{V}(W_{n,w})$ are given in [4] for $d \in \{1, 2, 3, 4, 5\}$ and in Table 1 for $d \le 5$ and $w \le 3$.
We remark that the main terms of the expected values for $d \in \{1, 2, 3\}$ are given by Avanzi [1]. In contrast to his approximate approach, our generating function approach allows us to extract lower order information as well as higher moments and to prove a central limit theorem.
3. Simple Joint Sparse Form

In this section we will adapt the window method for exponentiation studied in [1] to the $d$-dimensional Simple Joint Sparse Form introduced in [3]. We shortly summarize the definition of this joint expansion of elements of $\mathbb{Z}^d$.
(d, w)   (1/n) E(W_{n,w})              (1/n) V(W_{n,w})
(1, 1)   1/3       ≈ 0.333333          2/27                ≈ 0.074074
(1, 2)   1/3       ≈ 0.333333          2/27                ≈ 0.074074
(1, 3)   2/9       ≈ 0.222222          2/81                ≈ 0.024691
(2, 1)   5/9       ≈ 0.555556          4/27                ≈ 0.148148
(2, 2)   11/27     ≈ 0.407407          80/2187             ≈ 0.036580
(2, 3)   32/117    ≈ 0.273504          2464/177957         ≈ 0.013846
(3, 1)   19/27     ≈ 0.703704          1064/6561           ≈ 0.162170
(3, 2)   131/297   ≈ 0.441077          196210/8732691      ≈ 0.022468
(3, 3)   1082/3645 ≈ 0.296845          1822366/199290375   ≈ 0.009144
(4, 1)   65/81     ≈ 0.802469          41776/295245        ≈ 0.141496
(4, 2)   3469/7533 ≈ 0.460507          242550560/15832158831            ≈ 0.015320
(4, 3)   22976/74115 ≈ 0.310005        95487386176/15078376202625       ≈ 0.006333
(5, 1)   211/243   ≈ 0.868313          644320/5845851      ≈ 0.110218
(5, 2)   32297/68283 ≈ 0.472987        7065325354906/648539908339455    ≈ 0.010894
(5, 3)   11961398/37601091 ≈ 0.318113  963646563298519282/218773676422818911097 ≈ 0.004405

Table 1. Asymptotic means and variances of W_{n,w}/n for small d, w.
In [3] it is shown that for each $d$-tuple of integers $(x^{(1)}, \ldots, x^{(d)})$ there is a unique joint expansion $(\mathbf{x}_J, \ldots, \mathbf{x}_2, \mathbf{x}_1, \mathbf{x}_0)$, i.e.,
\[ x^{(k)} = \sum_{j=0}^{J} x_j^{(k)} 2^j \quad\text{with } x_j^{(k)} \in \{0, \pm 1\}, \]
such that
\[ (3.1)\qquad A(\mathbf{x}_{j+1}) \supsetneq A(\mathbf{x}_j) \ \text{ or } \ A(\mathbf{x}_{j+1}) = \emptyset, \qquad 0 \le j < J, \]
and $\mathbf{x}_J \ne \mathbf{0}$, where $A(\mathbf{x})$ has been defined in (2.1). This is called the Simple Joint Sparse Form of $x^{(1)}, \ldots, x^{(d)}$.
3.1. Algorithms and probabilistic model. Algorithm 3 can be used for the computation of the Simple Joint Sparse Form of $d$ integers. This algorithm was described in [3]; we will need it to derive the transition probabilities for the probabilistic model we will use.
From (3.1) it is clear that the Simple Joint Sparse Form can have at most $d$ consecutive non-zero digit-vectors. When applying windows to the computation of $x^{(1)} P_1 + \cdots + x^{(d)} P_d$ in an Abelian group using the SJSF, it is natural to use these guaranteed $\mathbf{0}$ digit-vectors. Therefore we will consider Algorithm 4.
Let $\mathfrak{S}_d$ be the space of all infinite SJSFs, i.e.,
\[ (3.2)\qquad \mathfrak{S}_d = \bigl\{ (\ldots, \mathbf{x}_1, \mathbf{x}_0) \in (\{0, \pm 1\}^d)^{\mathbb{N}_0} \bigm| \forall j \in \mathbb{N}_0:\ A(\mathbf{x}_j) \subsetneq A(\mathbf{x}_{j+1}) \ \text{ or } \ A(\mathbf{x}_{j+1}) = \emptyset \bigr\}. \]
We now define a probability measure on $\mathfrak{S}_d$ as the image of the Haar measure on $\mathbb{Z}_2^d = (\{0,1\}^d)^{\mathbb{N}_0}$ under the map $\mathbb{Z}_2^d \to \mathfrak{S}_d$ given by Algorithm 3. This measure induces the uniform distribution on all possible input vectors in $\{0, \ldots, 2^N - 1\}^d$. Analyzing the congruence conditions used in Algorithm 3 yields (for $\mathbf{x}, \mathbf{y} \in \{0, \pm 1\}^d$ satisfying $A(\mathbf{x}) \subsetneq A(\mathbf{y})$ or $\mathbf{y} = \mathbf{0}$)
\[ (3.3)\qquad \mathbb{P}(\mathbf{X}_{j+1} = \mathbf{y} \mid \mathbf{X}_j = \mathbf{x}) = 2^{-(d + \#A(\mathbf{y}) - \#A(\mathbf{x}))} \]
Algorithm 3 $d$-dimensional Simple Joint Sparse Form.
Input: integers $x^{(1)}, \ldots, x^{(d)}$
Output: $(x_j^{(k)})_{1 \le k \le d,\ 0 \le j}$, the Simple Joint Sparse Form of $x^{(1)}, \ldots, x^{(d)}$
  j ← 0
  A_0 ← {k | x^(k) odd}
  while ∃k: x^(k) ≠ 0 do
    x_j^(k) ← x^(k) mod 2,  1 ≤ k ≤ d
    A_{j+1} ← {k | (x^(k) − x_j^(k))/2 ≡ 1 (mod 2)}
    if A_{j+1} ⊆ A_j then
      for all k ∈ A_{j+1} do
        x_j^(k) ← −x_j^(k)
      end for
      A_{j+1} ← ∅
    else
      for all k ∈ A_j \ A_{j+1} do
        x_j^(k) ← −x_j^(k)
      end for
      A_{j+1} ← A_j ∪ A_{j+1}
    end if
    x^(k) ← (x^(k) − x_j^(k))/2,  1 ≤ k ≤ d
    j ← j + 1
  end while
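Algorithm 3 in Python, together with a check of the defining property (3.1) and of the reconstruction of the input values:

```python
def sjsf(xs):
    """Simple Joint Sparse Form of a tuple of nonnegative integers
    (Algorithm 3); returns digit columns, least significant first."""
    xs = list(xs)
    d = len(xs)
    cols = []
    A = {k for k in range(d) if xs[k] % 2 == 1}
    while any(xs):
        digits = [x % 2 for x in xs]
        B = {k for k in range(d) if ((xs[k] - digits[k]) // 2) % 2 == 1}
        if B <= A:
            for k in B:                 # flip so the next column becomes 0
                digits[k] = -digits[k]
            A = set()
        else:
            for k in A - B:             # flip so the next support grows strictly
                digits[k] = -digits[k]
            A = A | B
        xs = [(x - e) // 2 for x, e in zip(xs, digits)]
        cols.append(digits)
    return cols

for x in range(50):
    for y in range(50):
        cols = sjsf((x, y))
        assert [sum(c[k] * 2 ** j for j, c in enumerate(cols))
                for k in range(2)] == [x, y]
        supp = [frozenset(k for k in range(2) if c[k]) for c in cols]
        assert all(s2 > s1 or not s2 for s1, s2 in zip(supp, supp[1:]))
```

The final assertion is exactly (3.1): from one column to the next (toward more significant positions) the support either grows strictly or becomes empty.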
Algorithm 4 Calculating linear combinations using Simple Joint Sparse Forms with windows.
Input: $P_1, \ldots, P_d \in G$; $X \in \{0, \pm 1\}^{d(J+1)}$, the SJSF of $(x_1, \ldots, x_d)$; an integer $w \ge 1$; $Q(Y) = \sum_{\ell=0}^{L} 2^{\ell} \bigl( y_\ell^{(1)} P_1 + \cdots + y_\ell^{(d)} P_d \bigr)$ for all SJSF $Y = (\mathbf{y}_L, \ldots, \mathbf{y}_0) \in \mathrm{PreComp}_{d,w}$
Output: $P = x_1 P_1 + \cdots + x_d P_d$
  P ← 0
  j ← J
  while j ≥ 0 do
    while j ≥ 0 and X_j = 0 do
      P ← 2P
      j ← j − 1
    end while
    if j < 0 then break end if
    find i such that X_i = 0 and such that there are exactly w − 1 zero digit vectors amongst the digit vectors X_j, X_{j−1}, ..., X_{i+1}, or i = −1 if no such index exists
    find k minimal with i < k ≤ j and X_k ≠ 0
    P ← 2^{k−i−1} (2^{j−k+1} P + Q((X_j, X_{j−1}, ..., X_k)))
    j ← i
  end while
and
\[ (3.4)\qquad \mathbb{P}(\mathbf{X}_0 = \mathbf{y}) = 2^{-(d + \#A(\mathbf{y}))}. \]
We are interested in the random variable $W_n = W_{n,w}(X)$, which counts the number of group additions when applying Algorithm 4 to $(\mathbf{X}_{n-1}, \ldots, \mathbf{X}_0)$. Since $W_n$ depends only on
\[ (\#A(\mathbf{X}_{n-1}), \ldots, \#A(\mathbf{X}_0)), \]
we compute the corresponding transition probabilities
\[ (3.5)\qquad p_{k,\ell} := \mathbb{P}\bigl( \#A(\mathbf{X}_{j+1}) = \ell \bigm| \#A(\mathbf{X}_j) = k \bigr) = \begin{cases} \binom{d-k}{\ell-k}\, 2^{-(d-k)} & \text{for } \ell > k, \\[2pt] 2^{-(d-k)} & \text{for } \ell = 0. \end{cases} \]
In order to study $W_n$ we introduce the generating function
\[ (3.6)\qquad F(y, z) = \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} \mathbb{P}(W_n = m)\, y^m z^n. \]
We denote $N = \{0, \pm 1\}^d \setminus \{\mathbf{0}\}$. A regular expression for a window containing $w - 1$ interior $\mathbf{0}$'s is given by
\[ N N^* (0 N^*)^{w-1} \]
(we remark here that the conditions (3.1) have to be satisfied). Since adjacent windows can be separated by an arbitrary number of $\mathbf{0}$'s and windows at the end of the expansion can be incomplete, the generating function $F(y, z)$ can be calculated by
\[ (3.7)\qquad F(y, z) = (1, 0, \ldots, 0) \Bigl( I - (I - V)^{-1} L U \bigl( (I - U)^{-1} V \bigr)^{w} \Bigr)^{-1} (I - V)^{-1} \Bigl( I + \bigl( L U + L U (I - U)^{-1} V + \cdots + L U \bigl( (I - U)^{-1} V \bigr)^{w-1} \bigr) (I - U)^{-1} \Bigr) (1, \ldots, 1)^T, \]
where
\[ U = z\,([\ell > 0]\, p_{k,\ell})_{0 \le k, \ell \le d}, \qquad V = z\,([\ell = 0]\, p_{k,\ell})_{0 \le k, \ell \le d}, \qquad L = \mathrm{diag}(y, 1, \ldots, 1). \]
We remark here that the entry vector $(1, 0, \ldots, 0)$ simulates a $\mathbf{0}$ in position $-1$, which corresponds to the fact that $\mathbb{P}(\mathbf{X}_0 = \mathbf{y})$ can be computed by setting $\mathbf{x} = \mathbf{0}$ in (3.3).
For $d \le 20$ we computed the function $F(y, z)$ using Mathematica. Only the result for $d = 2$ fits on one line:
\[ F(y, z) = \frac{4 - (4 - 3y)z^3 + z \bigl( 4 + y \bigl( 3 - 2^{3-2w} (z(1+z)^2)^w \bigr) \bigr) + z^2 \bigl( -4 + y \bigl( 6 - 4^{1-w} (z(1+z)^2)^w \bigr) \bigr)}{(1 - z) \Bigl( 4 - z^3 + z \bigl( 7 - 2^{3-2w} y (z(1+z)^2)^w \bigr) + z^2 \bigl( 2 - 4^{1-w} y (z(1+z)^2)^w \bigr) \Bigr)}. \]
As usual, the generating functions of $\mathbb{E}(W_n)$ and $\mathbb{E}(W_n(W_n - 1))$ are computed by differentiating $F(y, z)$ with respect to $y$ and setting $y = 1$. Since these functions are all rational, the main term in the asymptotic expansion of the moments of $W_n$ comes from a double resp. triple pole at $z = 1$; the other terms are exponentially smaller in magnitude. For $d = 1, \ldots, 20$ we computed the means and variances of $W_n$. Table 2 gives the asymptotic main terms of means and variances for $1 \le d \le 7$.
Since the generating function $F(y, z)$ fits into the general scheme of H.-K. Hwang's quasi-power theorem (cf. [7]), the random variable $W_n$ satisfies a central limit theorem.
By the same means we compute the expectation and variance of the number $W_{n,0}$ of additions using SJSFs without windows (see Table 3).
We summarize our results in the following theorem.
Theorem 2. Let $w \ge 1$, $d \ge 1$ and $W_{n,w}$ be the random variable counting the number of group additions when calculating $X_1 P_1 + \cdots + X_d P_d$ using Algorithm 4, where $X_1, \ldots, X_d$ are independent random variables uniformly distributed on $\{0, \ldots, 2^n - 1\}$. Then
\[ \lim_{n \to \infty} \mathbb{P}\Bigl( W_{n,w} \le \mathbb{E}(W_{n,w}) + x \sqrt{\mathbb{V}(W_{n,w})} \Bigr) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\, dt, \]
where $\mathbb{E}(W_{n,w})$ and $\mathbb{V}(W_{n,w})$ are given in [4] for $1 \le d \le 20$ and in Table 2 for $d \le 7$.
d    (1/n) E(W_{n,w})               (1/n) V(W_{n,w})
1    2/(3(w+1))                     2(w+7)/(27(w+1)^3)
2    3/(2(3w+1))                    9/(16(3w+1)^2)
3    112/(39(7w+1))                 784(1225w − 137)/(59319(7w+1)^3)
4    960/(179(15w+1))               8640(82175w − 10751)/(5735339(15w+1)^3)
5    63488/(6327(31w+1))            63488(3549810031w − 337187183)/(253275687783(31w+1)^3)
6    4128768/(218357(63w+1))        12386304(5319844735149w − 322156222637)/(10411213601145293(63w+1)^3)
7    1065353216/(29681427(127w+1))  1065353216(1110439852652223895w − 40282349901979031)/(26148954556492040001483(127w+1)^3)

Table 2. Asymptotic means and variances of W_{n,w}.
3.2. Counting the precomputed values. We now count the number of elements in the set $\mathrm{PreComp}_{d,w}$ of precomputed values. The following computations will show that $\#\mathrm{PreComp}_{d,w}$ increases exponentially in $w$ and hyperexponentially in $d$. The set $\mathrm{PreComp}_{d,w}$ consists of all finite sequences of digit vectors satisfying (3.1), containing at most $w - 1$ zero digit vectors, and which start and end with a non-zero digit vector. Furthermore, we normalize the elements of $\mathrm{PreComp}_{d,w}$ by requiring that the first non-zero entry in the first column equals $+1$. Since any admissible sequence of digit vectors can be followed by an arbitrary number of zero digit vectors, we have
\[ \#\mathrm{PreComp}_{d,w} = \#\Bigl( \bigl\{ X \in (\{0, \pm 1\}^d)^* \bigm| X \in N N^* (0 N^*)^{w-1} \text{ and } X \text{ a SJSF} \bigr\} / \pm 1 \Bigr), \]
where $N = \{0, \pm 1\}^d \setminus \{\mathbf{0}\}$. Thus we have for $w \ge 1$
\[ \#\mathrm{PreComp}_{d,w} = \frac{1}{2}\, (1, 0, \ldots, 0)\, C (I - C)^{-1} \bigl( B (I - C)^{-1} \bigr)^{w-1} (1, \ldots, 1)^T, \]
where
\[ B = \bigl( [\ell = 0] \bigr)_{0 \le k, \ell \le d}, \qquad C = \Bigl( [\ell > k] \binom{d-k}{\ell-k}\, 2^{\ell} \Bigr)_{0 \le k, \ell \le d}. \]
In order to study the behaviour for large $d$, we study the matrix $B(I - C)^{-1}$ in more detail. Since $\operatorname{rank} B = 1$ it is clear that all rows of $B(I - C)^{-1}$ are equal. It can be proved by induction that the $k$-th entry in this row equals
\[ \binom{d}{k} 2^k a_k, \]
where $a_k$ satisfies the recursion
\[ (3.8)\qquad a_n = \sum_{k=0}^{n-1} \binom{n}{k} 2^k a_k \quad (n \ge 1), \qquad a_0 = 1. \]
d    (1/n) E(W_{n,0})       (1/n) V(W_{n,0})
1    1/3                    2/27
2    1/2                    1/16
3    23/39                  2800/59319
4    115/179                210368/5735339
5    4279/6327              7565047808/253275687783
6    152821/218357          263523314106368/10411213601145293
7    21292819/29681427      577533922219434967040/26148954556492040001483

Table 3. Asymptotic means and variances of the number of additions when using the SJSF without windows.
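The means in Table 3 equal the asymptotic density of nonzero columns, which is $1 - \pi_0$ for the stationary distribution $\pi$ of the Markov chain (3.5); this can be confirmed with exact arithmetic (the computation below is a check of ours, not taken from the paper):

```python
from fractions import Fraction as F
from math import comb

def sjsf_mean_density(d):
    """(1/n) E(W_{n,0}) = 1 - pi_0, where pi is the stationary distribution
    of the Markov chain with transition probabilities (3.5)."""
    pi = [F(1)] + [F(0)] * d          # pi_k expressed as multiples of pi_0
    for l in range(1, d + 1):         # states l >= 1 are entered only from k < l
        pi[l] = sum(pi[k] * comb(d - k, l - k) * F(1, 2 ** (d - k))
                    for k in range(l))
    return 1 - 1 / sum(pi)

assert [sjsf_mean_density(d) for d in (1, 2, 3, 4)] == \
    [F(1, 3), F(1, 2), F(23, 39), F(115, 179)]
```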
Since
\[ \lambda_d = \sum_{k=0}^{d} \binom{d}{k} 2^k a_k = (2^d + 1)\, a_d \]
is the only non-zero eigenvalue of $B(I - C)^{-1}$, we get
\[ \#\mathrm{PreComp}_{d,w} = \frac{1}{2}\, (\lambda_d - 1)\, \lambda_d^{w-1}. \]
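The eigenvalue $\lambda_d$ and the resulting table sizes are easy to reproduce:

```python
from math import comb

def a(n, memo={0: 1}):
    """Sequence (3.8): a_0 = 1, a_n = sum_{k<n} C(n,k) 2^k a_k."""
    if n not in memo:
        memo[n] = sum(comb(n, k) * 2 ** k * a(k) for k in range(n))
    return memo[n]

def lam(d):
    return (2 ** d + 1) * a(d)

def precomp(d, w):
    return (lam(d) - 1) * lam(d) ** (w - 1) // 2

assert [lam(d) for d in (1, 2, 3, 4)] == [3, 25, 603, 38641]
assert [precomp(d, 1) for d in (1, 2, 3, 4)] == [1, 12, 301, 19320]
```

The two assertions match the bases and leading coefficients listed in Table 4 below.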
In order to study the asymptotic behaviour of $a_n$ we substitute $a_n = n!\, 2^{\binom{n}{2}} b_n$. This is motivated by the fact that the dominating summand in (3.8) occurs for $k = n - 1$; this gives $a_n \approx n\, 2^{n-1} a_{n-1}$. We get the recursion
\[ (3.9)\qquad b_n = \sum_{k=0}^{n-1} \frac{1}{(n-k)!}\, 2^{\binom{k+1}{2} - \binom{n}{2}}\, b_k \]
for $b_n$. Since the term for $k = n - 1$ on the right hand side equals $b_{n-1}$, the sequence $b_n$ is monotonically increasing. It remains to show that $b_n$ is bounded. For this purpose we estimate
\[ b_n \le b_{n-1} \Bigl( 1 + \sum_{k=0}^{n-2} \frac{1}{(n-k)!}\, 2^{\binom{k+1}{2} - \binom{n}{2}} \Bigr) \le b_{n-1} \Bigl( 1 + \sum_{k=2}^{n} \frac{1}{k!}\, 2^{-\frac{1}{2} n (k-1)} \Bigr), \]
where we have used $\binom{n}{2} - \binom{k+1}{2} \ge \frac{1}{2} n (n - k - 1)$. Using $e^x - 1 \le x e^x$ for $x > 0$ and extending the finite sums on the right hand side to infinite sums, we obtain
\[ b_n \le b_{n-1} \exp\bigl( 2^{-\frac{1}{2} n} \bigr) \le b_0 \exp\Bigl( \sum_{k=1}^{n} 2^{-\frac{k}{2}} \Bigr). \]
Thus $b_n$ is bounded and we can form the generating function
\[ (3.10)\qquad f(z) = \sum_{n=0}^{\infty} b_n z^n. \]
Inserting the recursion (3.9) into (3.10) yields
\[ (3.11)\qquad f(z) = 1 + \sum_{n=1}^{\infty} \sum_{k=0}^{n-1} \frac{1}{(n-k)!}\, 2^{\binom{k+1}{2} - \binom{n}{2}}\, b_k z^n = 1 + \sum_{\ell=1}^{\infty} \frac{1}{\ell!}\, 2^{-\binom{\ell}{2}}\, z^{\ell} \sum_{k=0}^{\infty} b_k \bigl( 2^{-(\ell-1)} z \bigr)^k. \]
The inner sum in (3.11) is just $f(2^{-(\ell-1)} z)$. Furthermore, the summand for $\ell = 1$ equals $z f(z)$. This yields
\[ (3.12)\qquad f(z) = \frac{1}{1 - z} \Bigl( 1 + \sum_{\ell=1}^{\infty} \frac{1}{(\ell+1)!\, 2^{\binom{\ell+1}{2}}}\, z^{\ell+1} f(2^{-\ell} z) \Bigr), \]
_
1 +

=1
1
( + 1)!2
(
+1
2
)
z
+1
f(2

z)
_
,
from which we conclude that f has a meromorphic continuation to the whole complex plane with
simple poles at z = 2

, N. The residue of f(z) at z = 1 equals


c = 1 +

=1
1
( + 1)!2
(
+1
2
)
f(2

)
= 1.57298 62035 88985 42167 40408 30458 77385 46604 92965 . . . .
Putting everything together we conclude that
(3.13) a
n
cn!2
(
n
2
)
.
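Numerically, the constant $c$ is easy to approximate from the recursion (3.9), since $b_n$ increases to $c$ with error of order $2^{-n}$:

```python
from math import comb, factorial

def b_sequence(N):
    """b_0, ..., b_N from the recursion (3.9)."""
    b = [1.0]
    for n in range(1, N + 1):
        b.append(sum(b[k] / factorial(n - k)
                     * 2.0 ** (comb(k + 1, 2) - comb(n, 2))
                     for k in range(n)))
    return b

print(b_sequence(60)[-1])   # approaches c = 1.5729862035...
```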
Summing up, we have

Theorem 3. Let $d, w \ge 1$. Then the number of precomputed values $\#\mathrm{PreComp}_{d,w}$ needed in Algorithm 4 is given by
\[ (3.14)\qquad \#\mathrm{PreComp}_{d,w} = \frac{1}{2}\, (\lambda_d - 1)\, \lambda_d^{w-1} \qquad\text{with}\qquad \lambda_d \sim c\, d!\, 2^{\binom{d+1}{2}}. \]
In order to compare the number of additions when computing linear combinations using the SJSF with and without windows, we introduce the notation
\[ \#\mathrm{PreComp}_{d,0} = \#\bigl( ( \{0, \pm 1\}^d \setminus \{\mathbf{0}\} ) / \pm 1 \bigr) - d = \frac{3^d - 1}{2} - d \]
for the number of precomputations needed for the SJSF without windows (the $d$ elements $P_k$ themselves need not be precomputed). Table 4 shows the number of precomputed values in the various situations.
d    #PreComp_{d,w}          #PreComp_{d,0}
1    3^{w−1}                 0
2    12 · 25^{w−1}           2
3    301 · 603^{w−1}         10
4    19320 · 38641^{w−1}     36

Table 4. Number of precomputed values.
Table 5 shows the minimal values of $n$ such that
\[ \#\mathrm{PreComp}_{d,w} + \mathbb{E}(W_{n,w}) \le \#\mathrm{PreComp}_{d,w-1} + \mathbb{E}(W_{n,w-1}). \]
d    w = 1     w = 2            w = 3
1    1         19               110
2    65        1 793            112 001
3    1 249     1 081 666        1 793 670 987
4    62 748    4 602 740 129    511 331 633 697 389

Table 5. Threshold numbers n depending on the window size w and dimension d.
4. Calculating a minimal expansion from left to right

It is a major disadvantage of Joint Sparse Form representations of pairs of integers that they can only be computed reading the binary expansion from right to left (cf. [3, 11]). However, computing linear combinations using these representations requires the digits from left to right. Therefore, the full representation has to be precomputed and stored.
However, as in the one dimensional case (cf. [6]), it is possible to compute some minimal joint expansion from left to right using a transducer automaton.
The idea is as follows: we want to work with redundant expansions of numbers that, when proceeding from left to right, always leave alternatives. For instance, if the number is 29, and we started with $1????$ (where $?$ stands for an arbitrary digit $0, \pm 1$ not yet computed), then the next two positions are already forced, $111??$, and only then does one have choices: $11101$ resp. $1111\bar{1}$; had we considered 31 instead, we would have had no choice at all. That is undesirable, since we want to perform local changes in order to create as many $\binom{0}{0}$'s as possible, and so we must always have alternatives. It is therefore wise to start the representation of 29 as $1?????$. A good strategy is to alternately over- resp. undershoot the number, which means that, ignoring intermediate zeros, the digit $\bar{1}$ follows $1$ and vice versa. With that condition, the process is essentially unique, but we can still write 4 as $100$ or as $1\bar{1}00$, etc. We found it easier to work with the second variant. The representation of 29 would then be $100\bar{1}01$. We call this representation the alternating greedy expansion.
Definition 1. $\varepsilon \in \{-1, 0, 1\}^{\mathbb{N}_0}$ is called an alternating greedy expansion of $n \in \mathbb{Z}$, if the following conditions are satisfied:
(1) Only a finite number of the $\varepsilon_j$ are nonzero.
(2) $n = \sum_{j \ge 0} \varepsilon_j 2^j$.
(3) If $\varepsilon_j = \varepsilon_\ell \ne 0$ for some $j < \ell$, then there is a $k$ with $j < k < \ell$ such that $\varepsilon_k = -\varepsilon_j$.
(4) For $\underline{j} := \min\{j : \varepsilon_j \ne 0\}$ and $\overline{j} := \max\{j : \varepsilon_j \ne 0\}$, we have $\operatorname{sign}(n) = \varepsilon_{\underline{j}} = \varepsilon_{\overline{j}}$.
Proposition 1. For any integer $n$, there is a unique alternating greedy expansion. It can be computed by Algorithm 5.
The proof is easy.
This alternating greedy expansion of single integers can be computed from the standard binary expansion by a transducer automaton as shown in Figure 2.
[Figure 2. Transducer automaton for computing an alternating greedy expansion from left to right. The symbol marking the edge labels denotes the end of the input sequence.]
Now we think about a pair of integers $(x, y)$, both given in their alternating greedy expansions. When we parse this two-rowed representation from left to right, we claim the following: if we
Algorithm 5 Alternating Greedy Expansion.
Input: a positive integer $n$
Output: the alternating greedy expansion $\varepsilon$ of $n$
  m ← n
  ε ← 0
  while m ≠ 0 do
    k ← ⌊log₂ |m|⌋
    if m = 2^k then
      ε_k ← 1
      Return
    else
      ε_{k+1} ← sign(m)
      m ← m − ε_{k+1} 2^{k+1}
    end if
  end while
If we see $\binom{a}{b}\binom{c}{d}\binom{e}{f}\cdots$, then either $a = b = 0$, and we have found a $\binom{0}{0}$; or, if not, $\binom{a}{b}\binom{c}{d}$ can be rewritten such that a $\binom{0}{0}$ is produced; or, if this is not possible, $\binom{a}{b}\binom{c}{d}\binom{e}{f}$ can be rewritten such that a $\binom{0}{0}$ is produced. So, after at most three digits, a $\binom{0}{0}$ has been produced. In other words, a representation of length $J$ has the property that at least $\lfloor J/3 \rfloor$ digits are equal to $\binom{0}{0}$. Recall that the representation of Solinas [11] (SJF) resp. the representation in [3] (SJSF) have this property. Table 6 contains all the necessary information. (The obvious variants obtained by interchanging the top and the bottom row or by exchanging $1 \leftrightarrow \bar1$ are not explicitly stated.) Note that in a few instances one would have some choice. E.g., if we see $\binom{1}{0}\binom{0}{1}\binom{\bar1}{\bar1}$, we decided to replace it by $\binom{1}{0}\binom{0}{0}\binom{\bar1}{1}$, but we could have chosen $\binom{0}{0}\binom{1}{1}\binom{1}{\bar1}$ or $\binom{0}{0}\binom{1}{0}\binom{1}{1}$ with the same effect.
[Table 6 (not reproducible here): Rules for modifying a pair of alternating greedy expansions to a minimal joint expansion, listed as read blocks and the write blocks that replace them.]
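The minimal joint Hamming weight that these rewriting rules aim for can also be computed by brute force, which is convenient for spot-checking small cases. The following sketch is our own illustration, not part of the paper: it treats the pair still to be represented as a state and runs Dijkstra's shortest-path algorithm, where one step strips the least significant column $\binom{d_x}{d_y}$ at cost 1 if that column is nonzero:

```python
import heapq

def min_joint_weight(x, y):
    """Minimal joint Hamming weight over all joint expansions of the
    nonnegative pair (x, y) with digits in {-1, 0, 1}: shortest path from
    (x, y) to (0, 0), where one step removes the least significant column."""
    dist = {(x, y): 0}
    heap = [(0, x, y)]
    while heap:
        d, a, b = heapq.heappop(heap)
        if (a, b) == (0, 0):
            return d
        if d > dist[(a, b)]:
            continue                     # stale heap entry
        # the lowest digit must have the same parity as the number
        for da in ((0,) if a % 2 == 0 else (-1, 1)):
            for db in ((0,) if b % 2 == 0 else (-1, 1)):
                nxt = ((a - da) // 2, (b - db) // 2)
                nd = d + ((da, db) != (0, 0))
                if nd < dist.get(nxt, nd + 1):
                    dist[nxt] = nd
                    heapq.heappush(heap, (nd,) + nxt)

print(min_joint_weight(3, 1))   # 2: e.g. the columns (1/0)(1/1)
print(min_joint_weight(29, 0))  # 3: 29 = 32 - 4 + 1
```

There are no zero-cost cycles (a free step halves both components), so Dijkstra terminates on the finite state space reachable from $(x, y)$.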
Clearly, this can be realized by a transducer automaton, too.
Combining the two transducer automata, we get a single transducer automaton that calculates a low-weight expansion from the binary expansions of two integers. The resulting transducer automaton is shown in Table 7 and in Figure 3 (note that the final state has not been drawn in Figure 3).
[Figure 3 (not reproducible here): Transducer automaton, with states numbered 1 to 33, for calculating a minimal joint expansion from the binary expansions.]
We now prove that this transducer indeed calculates a minimal joint expansion. To this end, we denote by $h(x, y)$ and $\tilde h(x, y)$ the joint Hamming weights of the SJSF and of the output of the transducer for $x$ and $y$, respectively. We set
$$w_{kn} := \#\{(x, y) \mid 0 \le x, y < 2^n,\; h(x, y) = k\}, \qquad \tilde w_{kn} := \#\{(x, y) \mid 0 \le x, y < 2^n,\; \tilde h(x, y) = k\},$$
$$F(Y, Z) := \sum_{k, n \ge 0} w_{kn} Y^k Z^n, \qquad \tilde F(Y, Z) := \sum_{k, n \ge 0} \tilde w_{kn} Y^k Z^n.$$
[Table 7 (not reproducible here): Transducer automaton for calculating a minimal joint expansion from the binary expansions from left to right. For each state 1, ..., 33 and each input column $\binom{0}{0}$, $\binom{1}{0}$, $\binom{0}{1}$, $\binom{1}{1}$, or the end-of-sequence symbol, the table lists the output digits and the successor state.]
Let $\tilde m^{(\varepsilon,\eta)}_{ij} := Y^h$, where $h$ is the joint Hamming weight of the output of the transition from state $i$ to state $j$ with input $\binom{\varepsilon}{\eta}$ in the transducer. If there is no such transition, we set $\tilde m^{(\varepsilon,\eta)}_{ij} := 0$. We set $\tilde M^{(\varepsilon,\eta)} := (\tilde m^{(\varepsilon,\eta)}_{ij})_{1 \le i, j \le 32}$ and $\tilde M := \sum_{\varepsilon, \eta \in \{0,1\}} \tilde M^{(\varepsilon,\eta)}$. Then it is easily seen that
$$\tilde F(Y, Z) = (1, 0, \dots, 0)\, (I - Z \tilde M)^{-1}\, (\tilde M^{(0,0)})^2\, (1, 0, \dots, 0)^T.$$
The factors $\tilde M^{(0,0)}$ ensure that we return to state 1 writing all accumulated digits. An explicit calculation yields

F(Y, Z) =
1+(23Y )Z+(1+13Y 9Y
2
)Z
2
+Y (10+37Y 18Y
2
)Z
3
2Y
2
(1623Y +8Y
2
)Z
4
+8Y
3
(4+3Y )Z
5
(1+Z)(1+Z+2Y Z
2
)(1+Z+8Y Z
2
+16Y
2
Z
3
)
.
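As a sanity check on this closed form, one can expand $\tilde F(Y, Z)$ as a power series in $Z$. The following sketch (our own illustration, representing bivariate polynomials as coefficient dictionaries) verifies two necessary identities: the coefficients of $Z^n$ must sum to $4^n$, since every pair $0 \le x, y < 2^n$ is counted exactly once, and the coefficient of $Y^0$ must be 1, since only $(x, y) = (0, 0)$ has joint weight 0:

```python
from collections import defaultdict

def pmul(p, q):
    """Multiply polynomials in Y and Z, stored as {(deg_Y, deg_Z): coeff}."""
    r = defaultdict(int)
    for (a, b), c in p.items():
        for (d, e), f in q.items():
            r[(a + d, b + e)] += c * f
    return {k: v for k, v in r.items() if v}

# numerator and denominator of F(Y, Z) as given in the text
num = {(0, 0): -1, (0, 1): 2, (1, 1): -3,
       (0, 2): -1, (1, 2): 13, (2, 2): -9,
       (1, 3): -10, (2, 3): 37, (3, 3): -18,
       (2, 4): -32, (3, 4): 46, (4, 4): -16,
       (3, 5): -32, (4, 5): 24}
den = pmul(pmul({(0, 0): -1, (0, 1): 1},
                {(0, 0): -1, (0, 1): 1, (1, 2): 2}),
           {(0, 0): -1, (0, 1): 1, (1, 2): 8, (2, 3): 16})

N = 8  # expansion order in Z
num_z = [defaultdict(int) for _ in range(N + 1)]
den_z = [defaultdict(int) for _ in range(N + 1)]
for (i, j), c in num.items():
    if j <= N:
        num_z[j][i] += c
for (i, j), c in den.items():
    if j <= N:
        den_z[j][i] += c

# power-series division: solve num = den * F order by order in Z;
# F[n] is the polynomial sum_k w_{kn} Y^k, stored as {k: w_{kn}}
F = []
for n in range(N + 1):
    acc = defaultdict(int, num_z[n])
    for m in range(1, n + 1):
        for i, c in den_z[m].items():
            for k, d in F[n - m].items():
                acc[i + k] -= c * d
    F.append({k: -v for k, v in acc.items() if v})  # constant term of den is -1

for n, Fn in enumerate(F):
    assert sum(Fn.values()) == 4 ** n   # every pair 0 <= x, y < 2^n counted once
    assert Fn.get(0, 0) == 1            # only (0, 0) has joint weight 0
print(sorted(F[1].items()))  # [(0, 1), (1, 3)]: three pairs below 2 have weight 1
```

For instance, the coefficient of $Z$ comes out as $1 + 3Y$: among the four pairs with $0 \le x, y < 2$, only $(0, 0)$ has weight 0 and the other three have weight 1.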
A similar calculation using the right-to-left transducer described in [3] yields
$$F(Y, Z) = \tilde F(Y, Z).$$
This implies $w_{kn} = \tilde w_{kn}$ for all nonnegative $k$ and $n$. Since $\tilde h(x, y) \ge h(x, y)$ for all $x$, $y$, this proves that $\tilde h(x, y) = h(x, y)$, as required.
So we have proved the following theorem.
Theorem 4. Let $x$, $y$ be nonnegative integers with binary expansions $x = \sum_{j=0}^{J} x_j 2^j$ and $y = \sum_{j=0}^{J} y_j 2^j$. Then the output $(\varepsilon_{J+1} \ldots \varepsilon_0)$ of the transducer in Table 7 when reading $\binom{x_J}{y_J} \cdots \binom{x_0}{y_0}$ is a joint expansion of $x$ and $y$ of minimal joint Hamming weight.
References
[1] R. Avanzi, On multi-exponentiation in cryptography, 2003, preprint, available at http://citeseer.nj.nec.com/545130.html.
[2] D. M. Gordon, A survey of fast exponentiation methods, J. Algorithms 27 (1998), no. 1, 129–146.
[3] P. J. Grabner, C. Heuberger, and H. Prodinger, Distribution results for low-weight binary representations for pairs of integers, preprint, available at http://www.opt.math.tu-graz.ac.at/~cheub/publications/Joint_Sparse.pdf.
[4] P. J. Grabner, C. Heuberger, H. Prodinger, and J. Thuswaldner, Analysis of linear combination algorithms in cryptography: online resources, http://www.opt.math.tu-graz.ac.at/~cheub/publications/ECLinComb/.
[5] R. L. Graham, D. E. Knuth, and O. Patashnik, Concrete mathematics. A foundation for computer science, second ed., Addison-Wesley, 1994.
[6] C. Heuberger, Minimal expansions in redundant number systems: Fibonacci bases and greedy algorithms, preprint.
[7] H.-K. Hwang, On convergence rates in the central limit theorems for combinatorial structures, European J. Combin. 19 (1998), 329–343.
[8] N. Koblitz, A. Menezes, and S. Vanstone, The state of elliptic curve cryptography, Des. Codes Cryptogr. 19 (2000), no. 2-3, 173–193. Towards a quarter-century of public key cryptography.
[9] F. Morain and J. Olivos, Speeding up the computations on an elliptic curve using addition-subtraction chains, Inform. Théor. Appl. 24 (1990), 531–543.
[10] G. W. Reitwiesner, Binary arithmetic, Advances in Computers, Vol. 1, Academic Press, New York, 1960, pp. 231–308.
[11] J. A. Solinas, Low-weight binary representations for pairs of integers, Tech. Report CORR 2001-41, University of Waterloo, 2001, available at http://www.cacr.math.uwaterloo.ca/techreports/2001/corr2001-41.ps.
[12] J. von zur Gathen and J. Gerhard, Modern computer algebra, Cambridge University Press, New York, 1999.
(P. Grabner) Institut für Mathematik A, Technische Universität Graz, Steyrergasse 30, 8010 Graz, Austria
E-mail address: peter.grabner@tugraz.at
(C. Heuberger) Institut für Mathematik B, Technische Universität Graz, Steyrergasse 30, 8010 Graz, Austria
E-mail address: clemens.heuberger@tugraz.at
(H. Prodinger) The John Knopfmacher Centre for Applicable Analysis and Number Theory, School of Mathematics, University of the Witwatersrand, P. O. Wits, 2050 Johannesburg, South Africa
E-mail address: helmut@maths.wits.ac.za
(J. Thuswaldner) Institut für Mathematik und Angewandte Geometrie, Montanuniversität Leoben, Franz-Josef-Straße 18, 8700 Leoben, Austria
E-mail address: Joerg.Thuswaldner@unileoben.ac.at