
Topics in analytic number theory, Lent 2013.

Lecture 24: Selberg's theorem


Bob Hough
March 13, 2013
Reference for this lecture: Selberg, Contributions to the theory of the Riemann zeta-function, in Collected Papers, vol. 1.
Recall from last lecture our approximate formula for the logarithm to the right of the half-line.

Theorem. Assume RH. Let $2 \le x \le t$ and let $\sigma = \frac{1}{2} + \frac{A}{\log x} < 1$ with $A > A_0$. Set $s = \sigma + it$. There exists $\theta$, $|\theta| \le 1$, such that
$$\log \zeta(s) = \sum_{p^n \le x} \frac{1}{n p^{ns}} + \sum_{x < p^n \le x^2} \frac{\log \frac{x^2}{p^n}}{n p^{ns} \log x} \tag{1}$$
$$\qquad + \frac{2\theta}{\log x}\left|\,\sum_{p^n \le x} \frac{\log p}{p^{ns}} + \sum_{x < p^n \le x^2} \frac{\log p \log \frac{x^2}{p^n}}{p^{ns} \log x}\,\right| + O\!\left(\frac{\log t}{\log x}\right).$$

We may deduce an immediate corollary.


Corollary 1 (RH implies Lindelöf). Assume RH. Let $t > 2$ be large. We have
$$|\zeta(1/2 + it)| \le \exp\left(O\left(\frac{\log t}{\log \log t}\right)\right).$$

Proof. Let $x = \log t$. Since $|\zeta(1/2 + it)| \ll |\zeta(1/2 + 1/\log x + it)|\, \exp(O(\log t/\log x))$ we have
$$\log |\zeta(1/2 + it)| \le O\left(\frac{\log t}{\log x}\right) + \log |\zeta(1/2 + 1/\log x + it)|.$$
But
$$\Re \log \zeta(1/2 + 1/\log x + it) \le O\left(\sum_{p^n \le x} \frac{1}{n p^{n/2}} + \frac{2}{\log x} \sum_{p^n \le x} \frac{\log p}{p^{n/2}} + \frac{\log t}{\log x}\right) \ll \frac{\log t}{\log x},$$
since both prime sums are $\ll \sqrt{x} = \sqrt{\log t}$. As $\log x = \log \log t$, the corollary follows.
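The proof rests on the elementary bound $\sum_{p^n \le x} \frac{1}{n p^{n/2}} \ll \sqrt{x}/\log x$, which with $x = \log t$ is far smaller than the error term $\log t/\log \log t$. The following is a numerical sketch of this bound only (the cutoffs and the constant 4 are illustrative choices, not optimized):

```python
import math

def primes_up_to(x: int) -> list:
    """Primes up to x by the sieve of Eratosthenes."""
    is_prime = [True] * (x + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, math.isqrt(x) + 1):
        if is_prime[i]:
            for j in range(i * i, x + 1, i):
                is_prime[j] = False
    return [p for p in range(2, x + 1) if is_prime[p]]

def prime_power_sum(x: int) -> float:
    """Return the sum over prime powers p^n <= x of 1/(n * p^(n/2))."""
    total = 0.0
    for p in primes_up_to(x):
        n, pn = 1, p
        while pn <= x:
            total += 1.0 / (n * math.sqrt(pn))
            n += 1
            pn *= p
    return total

for x in (10**3, 10**4, 10**5):
    s = prime_power_sum(x)
    # check the claimed order of growth: s << sqrt(x)/log x
    assert s <= 4 * math.sqrt(x) / math.log(x)
    print(x, s, 4 * math.sqrt(x) / math.log(x))
```

The $n = 1$ terms dominate; the higher prime powers contribute only $O(1)$.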

Selberg's theorem

We now turn to Selberg's theorem, which describes the distribution of values of $\log \zeta(1/2 + it)$. The formal set-up is the following. We work on the probability space
$$(\Omega, \mu) = ([1,2], dt),$$
the unit interval with standard measure. We work with a continuous family of random variables $\{X_T(t)\}_{T \ge 10^6}$, which are functions $[1,2] \to \mathbb{C}$. We'll consider
$$X_T(t) = \frac{1}{\sqrt{\log \log T}} \log \zeta\left(\frac{1}{2} + itT\right),$$
so we are studying $\log \zeta(1/2 + it)$ in the interval $[T, 2T]$.
Let $C_0(\mathbb{C})$ be the space of continuous functions of compact support in $\mathbb{C}$. We say that $X_T$ converges weak-* to a measure $\mu$ on $\mathbb{C}$ if for each $f \in C_0(\mathbb{C})$
$$\lim_{T \to \infty} \int_1^2 f(X_T(t))\, dt = \int_{\mathbb{C}} f(z)\, d\mu(z).$$
Let
$$\mu_G = \frac{1}{\pi} e^{-|z|^2}\, dz$$
denote the standard complex Gaussian measure on $\mathbb{C}$.

Theorem 24.1 (Selberg). The family $X_T$ converges weak-* to $\mu_G$.

Thus typically $\log \zeta(1/2 + it)$ varies on a scale of $\sqrt{\log \log T}$ in the interval $t \in [T, 2T]$.

In the proof we'll use that the Gaussian distribution is determined by its moments.

Theorem 24.2 (Method of moments). Let $X_T$ be a continuous family of random variables on a probability space $(\Omega, \mu)$. A sufficient condition for the weak-* convergence $X_T \to \mu_G$ is that, for each $j, k \in \mathbb{Z}_{\ge 0}$,
$$\lim_{T \to \infty} \int_\Omega X_T^j\, \overline{X_T}^k\, d\mu = \delta_{j=k}\, j! = \int_{\mathbb{C}} z^j \bar{z}^k\, d\mu_G(z).$$
See e.g. Billingsley, Probability and Measure, Chapter 5.


We also use the following lemma.

Lemma 24.3. Suppose $1 \le m, n \le Y$. Then
$$\int_1^2 \left(\frac{m}{n}\right)^{itT} dt = \begin{cases} 1 & m = n \\ O(Y/T) & m \ne n. \end{cases}$$

Proof. If $m = n$ this is obvious. If $m \ne n$ then we use the following observation. Let $f \in \mathbb{R}$, $f \ne 0$. Then $\int_1^2 e^{ift}\, dt = O(1/|f|)$. To see this, note that $e^{ift}$ has periods of length $\asymp \frac{1}{|f|}$. Split $[1,2]$ into some full periods and one partial period. Over each full period the integral is $0$, and over the partial period use $|e^{ift}| = 1$.

Now if $m \ne n$, say without loss of generality that $m > n$. Then $m - n \ge 1$, from which it follows that $\log \frac{m}{n} = \log\left(1 + \frac{m-n}{n}\right) \ge \log\left(1 + \frac{1}{Y}\right) \gg \frac{1}{Y}$. Thus the above observation applies with $f \gg \frac{T}{Y}$.
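Lemma 24.3 is easy to probe numerically: the integral evaluates in closed form as $\int_1^2 (m/n)^{itT}\, dt = (e^{2ifT} - e^{ifT})/(ifT)$ with $f = \log(m/n)$, and since $|f| \ge \log(1 + 1/Y) \ge 1/(2Y)$ for $m \ne n$, one gets the explicit bound $4Y/T$. A minimal sketch (the parameters $Y = 20$, $T = 10^5$ are arbitrary illustrative choices):

```python
import cmath
import math

def pair_integral(m: int, n: int, T: float) -> complex:
    r"""Evaluate I = \int_1^2 (m/n)^{itT} dt via the antiderivative.

    Since (m/n)^{itT} = e^{ifTt} with f = log(m/n), for m != n we have
    I = (e^{2ifT} - e^{ifT}) / (ifT), while for m == n the integrand is 1.
    """
    if m == n:
        return 1.0 + 0.0j
    f = math.log(m / n)
    return (cmath.exp(2j * f * T) - cmath.exp(1j * f * T)) / (1j * f * T)

Y, T = 20, 1e5
worst = 0.0
for m in range(1, Y + 1):
    for n in range(1, Y + 1):
        if m == n:
            assert abs(pair_integral(m, n, T) - 1) < 1e-12  # diagonal gives exactly 1
        else:
            # |log(m/n)| >= log(1 + 1/Y) >= 1/(2Y), so |I| <= 2/(T|f|) <= 4Y/T
            worst = max(worst, abs(pair_integral(m, n, T)))
assert worst <= 4 * Y / T
print(worst, 4 * Y / T)
```

The worst case occurs for adjacent integers $m, n$ near $Y$, where $|\log(m/n)|$ is smallest.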
Sketch proof of Selberg's theorem. We set $\log x = \frac{\log T}{\log \log \log T}$, so that $x < T^\varepsilon$ for any fixed $\varepsilon > 0$ once $T$ is large. The proof proceeds in three steps.

Step 1
In the first step we take moments to show that the sum over primes which approximates $\log \zeta(1/2 + A/\log x + it)$ tends to a Gaussian limit. It will then remain to show that the distribution of $\log \zeta(1/2 + it)$ is similar. Set
$$X_{1,T}(t) = \frac{1}{\sqrt{\log \log T}} \sum_{p \le x} \frac{1}{p^{1/2 + itT}}.$$
We have
$$\int_1^2 X_{1,T}(t)^j\, \overline{X_{1,T}(t)}^k\, dt = \frac{1}{(\log \log T)^{\frac{j+k}{2}}} \sum_{\substack{p_1, \dots, p_j \le x \\ q_1, \dots, q_k \le x}} \frac{1}{\sqrt{p_1 \cdots p_j\, q_1 \cdots q_k}} \int_1^2 \left(\frac{q_1 \cdots q_k}{p_1 \cdots p_j}\right)^{itT} dt.$$
Since $x^k, x^j \le T^\varepsilon$, by Lemma 24.3 the integral is $O(T^{-1+\varepsilon})$ unless $p_1 \cdots p_j = q_1 \cdots q_k$. This can only happen if $j = k$. So if $j \ne k$ then the moment is $\ll T^{-1+O(\varepsilon)}$, which tends to zero.
When $j = k$, we neglect the error and take only those terms with $p_1 \cdots p_j = q_1 \cdots q_j$. For a fixed $p_1, \dots, p_j$ the number of possible $q_1, \dots, q_j$ is $j!$ if all of the $p_i$ are distinct, and is smaller otherwise. The moment becomes
$$\frac{j!}{(\log \log T)^j} \sum_{p_1, \dots, p_j \le x} \frac{1}{p_1 \cdots p_j} + O\left(\frac{1}{(\log \log T)^j} \sum_{p_1, \dots, p_{j-1} \le x} \frac{1}{p_1^2\, p_2 \cdots p_{j-1}}\right),$$
where the second sum bounds the terms with at least one $p_i$ repeated (we allow the constant in the big $O$ to depend on $j$). Since each sum over $p_i$ contributes a factor of $\log \log x$, the error is lower order. It follows that the moment becomes
$$(1 + o(1))\, \frac{j!\, (\log \log x)^j}{(\log \log T)^j} = (1 + o(1))\, j!.$$
This concludes the first step.
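The factor $\log \log x$ contributed by each prime sum is Mertens' estimate $\sum_{p \le x} \frac{1}{p} = \log \log x + M + O(1/\log x)$, with $M \approx 0.2615$. A quick numerical sanity check of this input (a sketch only; the cutoff $x = 10^5$ is an arbitrary choice):

```python
import math

def prime_reciprocal_sum(x: int) -> float:
    """Return sum_{p <= x} 1/p over primes p, via a sieve of Eratosthenes."""
    is_prime = [True] * (x + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, math.isqrt(x) + 1):
        if is_prime[i]:
            for j in range(i * i, x + 1, i):
                is_prime[j] = False
    return sum(1.0 / p for p in range(2, x + 1) if is_prime[p])

x = 10**5
M = 0.261497  # Mertens' constant
s = prime_reciprocal_sum(x)
# Mertens' theorem: s = log log x + M + O(1/log x)
print(s, math.log(math.log(x)) + M)
```

Already at $x = 10^5$ the two quantities agree to within about $10^{-3}$.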


Before the final two steps we record a simple lemma regarding convergence of distributions.

Lemma 24.4. Suppose $X_T(t)$ and $\tilde{X}_T(t)$ are two families of complex-valued random variables on $[1,2]$, and $\mu$ is a measure on $\mathbb{C}$. Each of the following conditions is sufficient to guarantee the simultaneous weak-* convergence $X_T \to \mu \iff \tilde{X}_T \to \mu$:

i. $\lim_{T \to \infty} m(t : X_T(t) \ne \tilde{X}_T(t)) = 0$

ii. $\lim_{T \to \infty} \sup_t |X_T(t) - \tilde{X}_T(t)| = 0$

iii. $\lim_{T \to \infty} \int_1^2 |X_T(t) - \tilde{X}_T(t)|^2\, dt = 0$.

Proof. Exercise.
Step 2
In the second step we show that
$$X_{2,T}(t) = \frac{1}{\sqrt{\log \log T}} \log \zeta\left(\frac{1}{2} + \frac{A}{\log x} + itT\right)$$
converges to $\mu_G$. We will reduce this to the convergence of $X_{1,T}$ by using the expression (1) and repeatedly applying Lemma 24.4. We have
$$\sqrt{\log \log T}\, |X_{1,T}(t) - X_{2,T}(t)| \le \left|\sum_{p \le x} \left(\frac{1}{p^{1/2 + itT}} - \frac{1}{p^{1/2 + A/\log x + itT}}\right)\right| + \left|\sum_{p^n \le x,\, n > 1} \frac{1}{n\, p^{n(1/2 + A/\log x + itT)}}\right|$$
$$+ \left|\sum_{x < p^n \le x^2} \frac{\log \frac{x^2}{p^n}}{n\, p^{n(1/2 + A/\log x + itT)} \log x}\right| + O\left(\frac{2}{\log x} \left|\sum_{p^n \le x} \frac{\log p}{p^{n(1/2 + A/\log x + itT)}} + \sum_{x < p^n \le x^2} \frac{\log p \log \frac{x^2}{p^n}}{p^{n(1/2 + A/\log x + itT)} \log x}\right|\right) + O\left(\frac{\log T}{\log x}\right).$$
The last term may be handled by item ii, since $(\log T/\log x)/\sqrt{\log \log T} = (\log \log \log T)/\sqrt{\log \log T} \to 0$.

To the remaining terms we apply iii. Since the computations are similar we'll just show how to bound the first part of the last sum. We have
$$\int_1^2 \frac{1}{\log \log T} \left|\frac{1}{\log x} \sum_{p^n \le x} \frac{\log p}{p^{n(1/2 + A/\log x + itT)}}\right|^2 dt$$
$$= \frac{1}{(\log x)^2 \log \log T} \sum_{\substack{p_1^{n_1} \le x \\ p_2^{n_2} \le x}} \frac{\log p_1 \log p_2}{p_1^{n_1(1/2 + A/\log x)}\, p_2^{n_2(1/2 + A/\log x)}} \int_1^2 \left(\frac{p_2^{n_2}}{p_1^{n_1}}\right)^{itT} dt$$
$$= \frac{1}{(\log x)^2 \log \log T} \sum_{p^n \le x} \frac{(\log p)^2}{p^{n(1 + 2A/\log x)}} \ll \frac{1}{\log \log T}$$
[we've discarded the terms with $p_1^{n_1} \ne p_2^{n_2}$, which are very small by Lemma 24.3; the last step uses $\sum_{p^n \le x} (\log p)^2/p^n \ll (\log x)^2$].
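The bound above uses $\sum_{p^n \le x} (\log p)^2/p^n \ll (\log x)^2$; in fact the $n = 1$ terms dominate, and partial summation with the prime number theorem gives $\sum_{p \le x} (\log p)^2/p \sim \frac{1}{2}(\log x)^2$. A numerical sketch of this input (the cutoff $x = 10^5$ is an arbitrary choice):

```python
import math

def primes_up_to(x: int) -> list:
    """Primes up to x by the sieve of Eratosthenes."""
    is_prime = [True] * (x + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, math.isqrt(x) + 1):
        if is_prime[i]:
            for j in range(i * i, x + 1, i):
                is_prime[j] = False
    return [p for p in range(2, x + 1) if is_prime[p]]

x = 10**5
s = sum(math.log(p) ** 2 / p for p in primes_up_to(x))
ratio = s / (0.5 * math.log(x) ** 2)
# partial summation predicts s ~ (log x)^2 / 2, so the ratio should be near 1
print(s, ratio)
```

The convergence of the ratio to 1 is slow (the secondary term is of size $1/\log x$ relative to the main term), but the order of magnitude is already clear at this range.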
Step 3
We now pass between the distribution of $\log \zeta(1/2 + A/\log x + itT)$ and that of $\log \zeta(1/2 + itT)$. We have
$$\log \zeta\left(\frac{1}{2} + itT\right) - \log \zeta\left(\frac{1}{2} + \frac{A}{\log x} + itT\right) = -\int_{1/2}^{\frac{1}{2} + \frac{A}{\log x}} \frac{\zeta'}{\zeta}(\sigma + itT)\, d\sigma$$
$$= -\frac{A}{\log x}\, \frac{\zeta'}{\zeta}\left(\frac{1}{2} + \frac{A}{\log x} + itT\right) + \int_{1/2}^{\frac{1}{2} + \frac{A}{\log x}} \left(\frac{\zeta'}{\zeta}\left(\frac{1}{2} + \frac{A}{\log x} + itT\right) - \frac{\zeta'}{\zeta}(\sigma + itT)\right) d\sigma.$$
The first term may be bounded by the second line of (1) (see Lemma 23.3).

In the second term we use the partial fraction representation of $\zeta'/\zeta$ (writing $\rho = \frac{1}{2} + i\gamma$ for the zeros, as we assume RH) to write the integral as
$$\sum_{\substack{\rho:\, \zeta(\rho) = 0 \\ |tT - \gamma| < 1}} \int_{1/2}^{\frac{1}{2} + \frac{A}{\log x}} \left(\frac{1}{\frac{A}{\log x} + i(tT - \gamma)} - \frac{1}{\sigma - \frac{1}{2} + i(tT - \gamma)}\right) d\sigma + O\left(\frac{\log T}{\log x}\right).$$
The error term is negligible, and we may rewrite the integral as
$$\sum_{\substack{\rho:\, \zeta(\rho) = 0 \\ |tT - \gamma| < 1}} \int_{1/2}^{\frac{1}{2} + \frac{A}{\log x}} \frac{\sigma - \frac{1}{2} - \frac{A}{\log x}}{\left(\frac{A}{\log x} + i(tT - \gamma)\right)\left(\sigma - \frac{1}{2} + i(tT - \gamma)\right)}\, d\sigma. \tag{2}$$

Let $\varepsilon = \varepsilon(T) = \frac{1}{\log \log \log T}$. We split the sum over zeros into three sets:
$$\sum_{\rho:\, |tT - \gamma| < 1} (\dots) = \sum_{\rho:\, |tT - \gamma| < \frac{\varepsilon}{\log T}} (\dots) + \sum_{\rho:\, \frac{\varepsilon}{\log T} \le |tT - \gamma| < \frac{A}{\log x}} (\dots) + \sum_{\rho:\, \frac{A}{\log x} \le |tT - \gamma| < 1} (\dots) = \Sigma_1 + \Sigma_2 + \Sigma_3.$$
We also distinguish two sets of $t \in [1,2]$. Let
$$E_1(T) = \left\{t \in [1,2] : \exists \rho,\ |\gamma - tT| < \frac{\varepsilon}{\log T}\right\},$$
$$E_2(T) = \left\{t \in [1,2] : \#\left\{\rho : |\gamma - tT| < \frac{A}{\log x}\right\} \ge \frac{A \log T}{\varepsilon \log x}\right\}.$$
Since the density of zeros at height $T$ is $\asymp \frac{\log T}{2\pi}$, we expect both sets $E_1$ and $E_2$ to be small, which we now show.

Lemma 24.5. We have
$$\lim_{T \to \infty} m(E_1(T)) = 0, \qquad \lim_{T \to \infty} m(E_2(T)) = 0.$$

Proof. We have
$$E_1(T) \subset \bigcup_{\rho:\, T - \frac{\varepsilon}{\log T} < \gamma < 2T + \frac{\varepsilon}{\log T}} \left(\frac{\gamma - \varepsilon/\log T}{T},\ \frac{\gamma + \varepsilon/\log T}{T}\right),$$
so that, using Theorem 21.3 for the number of zeros,
$$m(E_1) \le \frac{2\varepsilon}{T \log T}\, \#\left\{\rho : T - \frac{\varepsilon}{\log T} < \gamma < 2T + \frac{\varepsilon}{\log T}\right\} \ll \varepsilon.$$
To estimate $E_2$ we may write
$$m(E_2) \le \int_{t \in E_2} \frac{\#\left\{\rho : |\gamma - tT| \le \frac{A}{\log x}\right\}}{\frac{A \log T}{\varepsilon \log x}}\, dt \le \frac{\varepsilon \log x}{A \log T} \int_{t \in [1,2]} \sum_{\rho:\, |\gamma - tT| < \frac{A}{\log x}} 1\, dt.$$
Swapping the sum and the integral, each zero with $\gamma \in \left[T - \frac{A}{\log x},\, 2T + \frac{A}{\log x}\right]$ contributes at most $\min\left(2, \frac{2A}{T \log x}\right)$, so
$$m(E_2) \ll \frac{\varepsilon \log x}{A \log T} \cdot \frac{A}{T \log x} \left(N\left(2T + \frac{A}{\log x}\right) - N\left(T - \frac{A}{\log x}\right)\right) \ll \frac{\varepsilon}{T \log T} \cdot T \log T \asymp \varepsilon,$$
so that $m(E_2) \ll \varepsilon \to 0$.
Applying this last lemma and i. of Lemma 24.4 we may work under the assumption that
$$t \in E_1(T)^c \cap E_2(T)^c,$$
since we could change the value of $\log \zeta(1/2 + itT)$ on either $E_1$ or $E_2$ without changing the limiting distribution. Then $\Sigma_1$ is empty. On $\Sigma_2$ we bound the integrand of (2) by $\ll \frac{\log T}{\varepsilon}$, the length of the integral by $\ll \frac{1}{\log x}$ and the number of zeros in the sum by $\ll \frac{\log T}{\varepsilon \log x}$, for a bound
$$\Sigma_2 \ll \frac{(\log T)^2}{\varepsilon^2 (\log x)^2} \asymp (\log \log \log T)^4.$$
This is dominated by the normalizing factor $\frac{1}{\sqrt{\log \log T}}$, since $(\log \log \log T)^4 = o(\sqrt{\log \log T})$.


Finally, for the integrand on $\Sigma_3$ we may put in the bound
$$\left|\frac{1}{\sigma - \frac{1}{2} + i(tT - \gamma)}\right| \ll \left|\frac{1}{\frac{A}{\log x} + i(tT - \gamma)}\right|,$$
valid since $|tT - \gamma| \ge \frac{A}{\log x}$ there, so that
$$\Sigma_3 \ll \frac{1}{(\log x)^2} \sum_{\rho:\, |tT - \gamma| < 1} \frac{1}{\left|\frac{A}{\log x} + i(tT - \gamma)\right|^2}.$$
Applying Lemma 23.3, the contribution of $\Sigma_3$ is bounded by the error in the second line of (1), completing the proof.
