
THE FRONTIERS COLLECTION

Helmut Satz

ULTIMATE HORIZONS
Probing the Limits of the Universe
THE FRONTIERS COLLECTION

Series editors
Avshalom C. Elitzur
Université Grenoble I Centre Équation, Labo. Verimag, Gières, France
e-mail: avshalom.elitzur@weizmann.ac.il
Laura Mersini-Houghton
Department of Physics & Astronomy, University of North Carolina, Chapel Hill,
North Carolina, USA
e-mail: mersini@physics.unc.edu
T. Padmanabhan
Inter University Centre for Astronomy and Astrophysics (IUC),
Pune University Campus, Pune, India
e-mail: paddy@iucaa.ernet.in
Maximilian Schlosshauer
Institute for Quantum Optics and Quantum Information Austrian Academy of
Sciences, Portland, Oregon, USA
e-mail: schlossh@up.edu
Mark P. Silverman
Department of Physics, Trinity College, Hartford, Connecticut, USA
e-mail: mark.silverman@trincoll.edu
Jack A. Tuszynski
Department of Physics, University of Alberta, Edmonton, Alberta, Canada
e-mail: jtus@phys.ualberta.ca
Rüdiger Vaas
University of Giessen, Giessen, Germany
e-mail: ruediger.vaas@t-online.de

For further volumes:
http://www.springer.com/series/5342
THE FRONTIERS COLLECTION

Series Editors
A. C. Elitzur L. Mersini-Houghton T. Padmanabhan
M. Schlosshauer M. P. Silverman J. A. Tuszynski R. Vaas

The books in this collection are devoted to challenging and open problems at the
forefront of modern science, including related philosophical debates. In contrast to
typical research monographs, however, they strive to present their topics in a
manner accessible also to scientifically literate non-specialists wishing to gain
insight into the deeper implications and fascinating questions involved. Taken as a
whole, the series reflects the need for a fundamental and interdisciplinary approach
to modern science. Furthermore, it is intended to encourage active scientists in all
areas to ponder over important and perhaps controversial issues beyond their own
speciality. Extending from quantum physics and relativity to entropy, conscious-
ness and complex systems—the Frontiers Collection will inspire readers to push
back the frontiers of their own knowledge.
Helmut Satz

ULTIMATE HORIZONS
Probing the Limits of the Universe

Helmut Satz
Fakultät für Physik
Universität Bielefeld
Bielefeld
Germany

This work appears in a parallel German edition “Gottes unsichtbare Würfel” (“God’s
invisible dice”), published by C. H. Beck Verlag.

ISSN 1612-3018 ISSN 2197-6619 (electronic)


ISBN 978-3-642-41656-9 ISBN 978-3-642-41657-6 (eBook)
DOI 10.1007/978-3-642-41657-6
Springer Heidelberg New York Dordrecht London

Library of Congress Control Number: 2013953242

© Springer-Verlag Berlin Heidelberg 2013


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or
information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed. Exempted from this legal reservation are brief
excerpts in connection with reviews or scholarly analysis or material supplied specifically for the
purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the
work. Duplication of this publication or parts thereof is permitted only under the provisions of
the Copyright Law of the Publisher’s location, in its current version, and permission for use must
always be obtained from Springer. Permissions for use may be obtained through RightsLink at the
Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt
from the relevant protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of
publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for
any errors or omissions that may be made. The publisher makes no warranty, express or implied, with
respect to the material contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


In memory of my mother
who dared to venture into the unknown
in search of a better life for her sons
Preface

Confronted with the choice between paradise and knowledge, man, according to
the Bible, chose knowledge. Were these really alternatives? It came to be that the
gaining of knowledge and the wider horizon outside the garden of Eden brought to
many as much pleasure and satisfaction as any paradise they could imagine.
Humans have always wanted to explore the world they live in, and they have
always wanted to know what lies beyond the horizons that limit their view. The
search for richer pastures, better climates, easier communication—all these cer-
tainly played a part in this, but behind it all there was an inherent human sense of
curiosity. This curiosity triggered a journey starting some 200,000 years ago in a
remote corner of Africa and has driven us to navigate all the oceans, to conquer the
entire Earth, to probe the heavens and to penetrate ever more deeply into inter-
stellar space, to study ever more distant galaxies. At the other end of the scale,
high-energy particle accelerators allow us to resolve the structure of matter to an
ever higher degree, to look for its ultimate constituents and study how they interact
with each other to form our world. Are there limits, is there an end to this drive, at
the large scale as well as at the small?
In the last hundred years, modern physics and cosmology have shown that there
exist regions forever beyond our reach, hidden from us by truly ultimate horizons.
These regions we can access in our imagination only; we can speculate what they
are like and whether perhaps some sign of their existence, some indication of their
nature can ever reach our world.
Such hidden regions exist in those remote parts of the universe where, from our
point of view, space expands faster than the speed of light. Closer to us, they are
found in black holes, where gravity is strong enough to retain even light within its
horizon of ultimate attraction. And in the realm of the very small, quarks remain
forever confined to their colorful world of extreme density; they can never be
removed from it. But given the Big Bang origin of the universe, our world in its
very early stages was immensely hot and dense; and given the spectrum of all the
particles created in high-energy collisions, we can try to reconstruct ever earlier
stages. The evolution of the universe, with cooling and expansion, then defines
horizons in time, thresholds through which the universe had to pass to reach its
present state. What were the earlier stages like?


Although it is not possible to transmit information across the “event horizons”
that form the borders of these forbidden regions, still sometimes strange signals
may appear, providing us with hints of the existence of those other worlds. Such
striking phenomena can become possible through quantum effects; “Hawking–
Unruh” radiation provides one example expected to arise in a variety of cases,
whenever there exists an event horizon. And looking at the multitude of
“elementary” particles produced in high-energy accelerators, we can speculate that
they originally came from a simpler, more symmetric world, which in the course
of the evolution experienced transitions, like the freezing of water or the
magnetization of metals, to form the many-faceted and less symmetric world we
see today.
The aim of this book is to tell the story of how the different horizons, on Earth
and in the heavens, on large and on small scales, now and in the past, were
discovered and used to define our view of the world. It is a story of the evolution of
this view, which started before “science,” and which is much more than just
“something for scientists.” It started with philosophers wondering what matter
was made of, and how; with sailors daring to find out if the world ends somewhere;
with astronomers trying to determine our position among the stars, to estimate the
size of the Earth by looking at the Sun and using the newly developed geometry.
With Edgar Allan Poe, the Big Bang appeared in literature before it was com-
monplace in physics and cosmology; and aspects of both black holes and worm-
holes were part of the stories of Lewis Carroll before they became significantly
appreciated in science. Many of the ideas, even today’s, have come up here and
there in the course of time. The ways of treating them, and the tools used for that
were different, of course, and changed over the centuries. But what remained was
that desire to see what lies beyond, and to find out whether there is a limit to what
we can reach and understand.
We begin by looking at the various horizons partitioning our world and then
show how different forbidden regions arise in the universe, and when and how they
can emit signatures as testimony to their presence and their nature. The mysterious
light emerging from an event horizon, or the equally mysterious clusters in a new
and strange ether, they may well remain all that we can ever see of what is hidden
beyond the ultimate horizons.
This book is not meant to give a systematic presentation of the recent devel-
opments in physics or cosmology. Its aim is to tell a story that began a long time
ago and that will certainly not come to an end very soon. And it covers devel-
opments that sometimes, as in the age of Vasco da Gama and Columbus, or in the
time of Einstein, Planck, Bohr and Heisenberg, revolutionize the world in two or
three decades. At other times, between Ptolemy and Copernicus, it takes a mil-
lennium to add a couple of epicycles to the accepted scheme of things. The
problem is, in the words of the renowned Austrian theorist Walter Thirring, that
“to do something really new, you have to have a new idea,” and that does not
happen so very often. It does not suffice to play on the keyboard of the available
theoretical formalisms; this just leads to many melodies and not to any convincing
and lasting new harmony.

I have tried to present things in a way that does not require any mathematics. That
is, as I indicate in the section on Notation, a two-sided issue. Even Einstein sometimes
presented the special theory of relativity in terms of people on a train versus people
on the ground. It can be done, and it is indeed helpful to convey the basic ideas.
For a full understanding of the ultimate conclusions, however, mathematics
becomes essential. To travel a middle road, I have at times added inserts, in which
some aspects of the basic mathematical formulation are indicated. But I hope that
the presentation remains understandable even if you skip these.
One unavoidable aspect appears if one tries to present things in as readable a
way as possible: some points and concepts are mentioned more than once.
Although strictly speaking logical, the reminder “as already discussed in the
previous Chapter” is in fact often not what the reader wants; it seems better to just
briefly recall the idea again. So I offer my apologies for a number of repetitions.
And another apology is probably also needed. When forced to choose between
scientific rigor and simplifying an idea enough to make it understandable, I gen-
erally took the latter path. I thought it better to try to have readers follow my train
of thought, even if they will later need corrections, than to lose them in technical
details they cannot follow. My inspiration here came from the words of the great Danish
physicist Niels Bohr, who noted that Wahrheit (truth) and Klarheit (clarity) are
complementary: the more precisely you enforce one, the less precise the other
becomes.
Finally, it is my pleasure to express sincere thanks to all who have helped me
with this endeavor. Obvious support came from my colleagues here in Bielefeld, in
Brookhaven, at CERN, in Dubna and elsewhere. They have been of crucial
importance in forming my view of things. And last, but far from least, profound
thanks go to my wife, who has patiently borne with me during all these years.

Bielefeld, May 2013 Helmut Satz


Contents

1 Horizons
   1.1 The Horizon of Accessibility
   1.2 Forbidden Rooms in the Universe
   1.3 Ultimate Constituents
   1.4 The End of the Earth
   1.5 The Roof of Heaven

2 The Vanishing Stars
   2.1 The Speed of Light
   2.2 Why Is the Sky Dark at Night?
   2.3 The Big Bang
   2.4 Cosmic Inflation
   2.5 The Absolute Elsewhere

3 The Secret Glow of Black Holes
   3.1 The Escape Velocity
   3.2 Tidal Effects
   3.3 The Sea of Unborn Particles
   3.4 Invisible Light on the Horizon

4 The Visions of an Accelerating Observer
   4.1 Gravity and Acceleration
   4.2 A Total End of Communication
   4.3 The Temperature of the Vacuum
   4.4 Lightning in Empty Space
   4.5 Quantum Entanglement

5 The Smallest Possible Thing
   5.1 Why Does the Sun Shine?
   5.2 The Strong Nuclear Interaction
   5.3 The Weak Nuclear Interaction
   5.4 The Quarks
   5.5 The Standard Model
   5.6 The Confinement Horizon

6 Quark Matter
   6.1 Quarks Become Deconfined
   6.2 Collective Behavior
   6.3 The Ultimate Temperature of Matter
   6.4 The Little Bang
   6.5 Universal Hadrosynthesis
   6.6 How Hot is the Quark–Gluon Plasma?

7 Hidden Symmetries
   7.1 The Ising Model
   7.2 Shadow Particles
   7.3 Local Symmetries
   7.4 Primordial Equality

8 The Last Veil
   8.1 Ultimate Horizons in Time
   8.2 Ultimate Horizons in Space
   8.3 The End of Determinacy
   8.4 Hyperspace
   8.5 Cosmic Connections

Notes on Notation

Further Reading

Author Index

Index
1 Horizons

Beyond the horizon, behind the Sun,
at the end of the rainbow,
life has only begun.
Bob Dylan

We live in a finite world. Even from the highest mountain or from an airplane, our
view always ends at a horizon, beyond which we cannot see. Moreover, horizons are
elusive. We see them, we’re surrounded by them, we try to reach them, and when we
get “there”, they have moved to somewhere else. Yet they always confront us with
the challenge to find out what lies beyond; at all times humans have wondered that.
And nowhere is the challenge quite as present as at the sea, where water and sky
touch in that sharp horizontal line. Already more than three thousand years ago, on
the eastern shores of the Mediterranean Sea, the Phoenicians built navigable sailing
vessels (Fig. 1.1), and they were familiar with astronomical orientation. Their ships
explored the entire Mediterranean and passed beyond the limits of their world, the
Pillars of Hercules, today’s Strait of Gibraltar. A thousand years ago, the ships of
the Vikings set out into the unknown northern seas and reached what turned out to
be a new continent. And the systematic exploration of all the lands beyond all the
horizons began when the Portuguese sailors of Henry the Navigator dared to find
out if the Earth ended somewhere. The inquisitive curiosity to discover if and how
the known world continues—this was surely one of the driving forces that made
mankind conquer the whole Earth and go on beyond. Once all earthly horizons were
surpassed, the sky became the limit, receding back further and further. At first, man
could only look up, then telescopes gave him the power to see further, and today,
there are human footsteps on the moon and our probes in space penetrate ever more
distant stellar regions. Are there still regions in the universe which will remain forever
beyond our reach?
Each horizon forms a boundary not only in space, but also in time. If in ancient
times a traveller saw a distant mountain range at the horizon, he knew that it would
take many hours to see what might lie on the other side. His horizon of vision,
of cognition, thus had a spatial dimension in miles and a temporal one in hours,


Fig. 1.1 Phoenician sailing vessel

determined by his walking speed. This temporal limit also inspired men to find ways
to transcend it faster. A horse could help to bring our traveller more quickly to the
mountains, and for ages that was the solution. Stage coaches defined the travel time
and comfort. Postal relay stations were established, where tired riders and exhausted
horses could be replaced, and in this way, news was distributed with remarkable
speed (Fig. 1.2). Such post rider systems existed already in ancient Egypt, Persia and

Fig. 1.2 Post rider in 1648, announcing the end of the Thirty Years’ War in Europe

China three thousand years ago, and in the Roman empire, post rider relays could
cover 300 km in a twenty-four hour period. Post riders and post carriages determined
the speed of communication until the nineteenth century, and it was the Pony Express
that brought the American West within reach. More than 400 horses and over ten
days were needed to transport a bag of letters from coast to coast.
If we combine the spatial and the temporal aspects of horizons, we obtain an
interesting new form of limit.

1.1 The Horizon of Accessibility

For illustration, let’s go back to the time of the post riders, with a 300 km per day
coverage. In that case, to send a message to some person in a place 900 km away
would have taken at least three days. For that length of time, the person to be reached
was simply beyond our accessibility horizon. Of course, the longer we are willing to
wait, the greater becomes the region with which we can communicate. The resulting
partition of space and time into accessible and inaccessible regions is shown in
Fig. 1.3. It is, however, a relative thing—the size of the region accessible to us after
a given time also depends on the speed of the messenger; the faster the messenger,
the further back the horizon recedes.
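The arithmetic behind Fig. 1.3 is simple enough to spell out. Here is a minimal sketch in Python, assuming the round figure used in the text (a post rider covering 300 km per day); the function names are purely illustrative:

```python
# Accessibility horizon: a place becomes reachable once the messenger
# has had time to cover the distance, so the horizon grows linearly.

def horizon_km(speed_km_per_day: float, days: float) -> float:
    """Radius of the accessible region after waiting a given number of days."""
    return speed_km_per_day * days

def days_needed(speed_km_per_day: float, distance_km: float) -> float:
    """Minimum waiting time before a place at distance_km becomes accessible."""
    return distance_km / speed_km_per_day

rider_speed = 300.0                    # km per day, as in the text
print(days_needed(rider_speed, 900))   # 3.0 days for the 900 km example
print(horizon_km(rider_speed, 2))      # 600 km accessible after two days
```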
Today’s means of transportation reduce the days, weeks or months of former times
to just a matter of hours. A hundred years ago, a trip from Europe to the Far East meant
many weeks on a steamboat; today it takes ten hours or less by plane. In fact, if it
comes down to simply exchanging information with the “other side of the mountain”,
we don’t need a messenger; telephones can do that almost instantaneously, and
satellite stations connect us to all parts of the Earth. For communication, our temporal

Fig. 1.3 The accessibility horizon in post rider times: time [days] versus distance [km], with the horizon line separating the accessible from the inaccessible region

distance to far away regions has thus become largely a matter of how fast we can
transmit a signal there.
But we know that there is a limit to the speed with which we can transfer infor-
mation: the speed of light, some 300,000 km/s. There is no way to send a signal
faster than that. Learning this was certainly one of the crucial steps in our study of
nature and the universe. On Earth, the effect of the finite speed of light is practi-
cally negligible. To send a radio message half way around the globe (over a distance
of 20,000 km) takes about 1/15 of a second, so for everyday purposes, it’s almost
instantaneous. But the stars we see are very far away, and with the given finite speed
of light, that really matters. What is happening here and now can be known in distant
stellar worlds only much later, and what we know of them is their remote past. The
light of the stars that we see now was emitted millions of years ago, and we don’t
know if these stars still exist today, and if they do, where they are. So there are
horizons seemingly beyond our reach.
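The same bookkeeping applies with light as the messenger. A small sketch, using the rounded values quoted in the text (300,000 km/s; 20,000 km for half way around the globe):

```python
# Signal delay at the speed of light, the fastest messenger there is.

C_KM_PER_S = 300_000.0                 # speed of light, rounded as in the text
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def delay_s(distance_km: float) -> float:
    """Travel time of a light signal over the given distance."""
    return distance_km / C_KM_PER_S

print(delay_s(20_000))                 # ~0.067 s: half way around the globe
light_year_km = C_KM_PER_S * SECONDS_PER_YEAR
print(delay_s(100 * light_year_km) / SECONDS_PER_YEAR)   # 100 years from a star
```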
Nevertheless, even that inaccessibility seems to be just a question of time. If we
wait long enough, even the light that distant stars emit now will eventually arrive on
Earth. Just as we could define an accessibility radius for the post rider, we can also
do this for radio signals travelling at the speed of light. Then, a place 900 km away
was out of reach for us for three days; here and now, we have regions we cannot
communicate with for some fractions of a second. What is different, however, besides
the sheer scale of things, is that by going from man to horse to train to plane, the
messenger speed increased, and so did the range to the horizon at a given time; its
size was relative. For the radio signal, on the other hand, travelling with the speed
of light, no further speed-up is possible. This is the end of the line, the ultimate
horizon at any specific time, or in physics terminology, the event horizon. Whatever
lies beyond this horizon is out of our reach—with that reach defined in terms of both
space and time.
In astronomical dimensions, the size of the space-time region beyond our reach
of course grows considerably. Given the present human life span, a star 100 light
years away cannot today send us a signal we will live to receive, nor can we send
it one which it will get in our lifetime. This, however, is our personal problem; our
great-grandchildren could in principle receive the signal sent today from that star.
So if we consider the ultimate accessibility limit given by the speed of light, shown
in Fig. 1.4, we can label the accessible region as “future”, the inaccessible one as
“elsewhere”. The distant star * is now in the “elsewhere”, we have no way of reaching
it. But if we wait a while — quite a while in fact — then in the future a radio beam
from our position will reach it, and its signal will reach us.
So the existence of the event horizon means our contact with the world around us
is a question of space and time. The further away something is in space, the longer
the time needed to send it a signal, or to receive one it sent. It is the event horizon that
forms the border between future and elsewhere. What is now at some point outside
of our reach, in the elsewhere, will in the future become accessible for us.
But there are instances where this is no longer true. Today’s physics and cosmology
provide a more stringent form of limit: a truly final horizon, the absolute event
horizon. It defines those hidden regions of the world with which no communication

Fig. 1.4 The event horizon: time [days] versus distance [billion km]; the event horizon line separates the accessible “future” from the inaccessible “elsewhere”, in which the distant star * initially lies

will ever be possible for us, not now and not at any time in the future. From such
disconnected regions, we can never ever receive a signal, no matter how long we
wait and what tools we use. How is that possible? This question leads to some of the
most striking phenomena encountered in our present view of the world. Not being
able to communicate with a region of the universe must mean that light from “there”
can never reach us. This can happen either if there are regions which are somehow
moving away from us faster than the speed of light, or if there are regions which do
not allow light to leave them. Both in fact exist.
Was our universe always there? If not, how old is it? Modern cosmology tells us
of a beginning, a Big Bang about 14 billion years ago, producing immensely hot and
dense primordial matter, which has subsequently expanded to become our universe.
The time of the Big Bang is specified, but spatially it is not defined: 14 billion years
ago it began “everywhere”, the primordial world was not a hot little sphere, which
then exploded. That means that if there were, at that time, regions far away from
where our part of the world started, then they could not, until today, send us a signal.
Light emitted by them has simply not yet had the time to reach us. The world that we
see is a result obtained by combining the speed of light and the age of the universe.
Anything beyond the limits that this defines is simply outside of our reach: we have
no sign of it. But this is still the observable world now. The longer we wait, the more
of the primordial world will become visible—or so it seems; the light from more
distant stars is “on its way to us”. But while we are waiting, the universe does not
hold still. Recent astronomical observations have shown that its expansion is
accelerating. If this expansion is rapid enough, there will be stars whose light can
never reach us, which will remain forever beyond our horizon. And some of the stars
from which we are presently receiving light will eventually, through the expansion
of the universe, be pushed beyond our event horizon: they will fade away and be
gone for us.

But this cosmic event horizon is still “ours”; a distant galaxy we can see will have
its own cosmic event horizon, which will reach further out than ours. In other words,
our accessible worlds will overlap in part, but they will not be identical. And at our
horizon, or at that of any other galaxy, absolutely nothing happens. It’s again that
elusive thing: the closer we get to it, the further away it moves.
Besides these fleeting limitations to our outreach, there are, however, also more
definite ones. In many old fairy tales, there is a castle with many rooms. You may
visit them all, except one, which you should never ever enter: if you do, you will
suffer a horrible fate. It turns out that this can also happen in outer space.

1.2 Forbidden Rooms in the Universe

If you enter a black hole, you will never come out again to tell what you saw and
what happened to you. At the horizon of the black hole, if you try to avoid falling
into it, you will certainly experience some rather unpleasant effects. And this will
not just be your fate—it will happen to anyone who would dare to try.
Black holes are “dead” stars of huge mass, but small size. A star starts its career
as a gaseous cloud, which gravity contracts more and more. When it has become
compact enough, the fusion of hydrogen to helium lets it shine, but eventually all
the fuel is burnt and gravity compresses the remaining ashes of the stellar mass to an
ever smaller sphere. At the end we have an object of such a high gravity that it pulls
everything in its vicinity into its range of attraction, even light. Since no signal from
such a black hole can reach the outside, it appears to be completely decoupled from
our world. We can never see what is inside, and for anything within its interior, we
are behind an insurmountable event horizon.
Thus, in the vast expanses of space, of the cosmos, there are indeed regions
remaining forever beyond our horizon. But also at the other end of the scale, in
the microcosmos, in the very small, we find an ultimate limit. Just as there is an
end to our reach in the limit of large scales, there is one as we try to divide things
into ever smaller entities. Since antiquity, man has tried to picture the complex
world we find around us as the result of a combined effort of many identical, simple
building blocks, interacting according to basic laws. Complexity thus is thought to
be a random child of simplicity, evolving through patterns defined on a higher level.
This “reductionism” has been immensely successful in understanding the structure
of matter. Depending on how the building blocks are packed, we have solids, liquids
or gases; their constituents are molecules arranged in decreasing orderliness. The
molecules themselves are made of atoms, which in turn consist of positively charged
nuclei surrounded by negatively charged electrons, bound by electromagnetic forces
to form electrically neutral entities. If we heat the system enough, or apply a very
strong electric field, such as a stroke of lightning, the atoms break up into their
charged constituents, forming a fourth state of matter, the plasma. Our view of the
states of matter, with solids, liquids, gases and plasmas, thus agrees very well with
that of antiquity, having earth, water, air and fire (Fig. 1.5). And already in antiquity

Fig. 1.5 The four states of matter in antiquity: fire, air, water, earth

the philosophers, in the Greek as well as in the Hindu–Buddhist world, thought
it necessary to have a fifth form, a quintessence, as a stage for the others, a medium
in which they exist: the void, empty space.
The existence of different states of matter leads to features very reminiscent of
horizons. For a trout, the surface of the water forms its horizon of existence, apart
from short leaps up to catch flies; the shore as well is a definite limit to its living
space. In general, the boundary surfaces between the different states of matter (air–
water, water–ice and so on)—in physics terminology: phase boundaries—separate
worlds of different structure. In ice, the molecules are arranged by firm bonds to
form an orderly crystal pattern, a regular lattice with a periodic structure and of well-
defined symmetry. In water, that lattice is no longer present; the bonds soften and
become flexible. They now allow the molecules to move around in any direction, yet
still restrain them to a rather small spatial volume. In the gaseous state, the bonds
dissolve completely and we now have a system of balls colliding and scattering
off each other, but otherwise free to move around in the entire container. So the
same basic constituents in different order patterns give rise to the different states
of matter, and the boundaries between such states form horizons between worlds of

different order. But such horizons are again of a fleeting nature: they can be shifted,
lakes can dry up, land can become flooded. In all these cases, however, the states
remain divisible into their constituents; we can isolate such a constituent and
consider it individually. In fact, we can continue with the division, breaking up the
molecule into atoms, the atom into a nucleus and electrons. Nuclei in turn consist
of nucleons, that is, protons and neutrons; by binding different numbers of these,
we obtain the nuclei of the different elements, from hydrogen to uranium and even
heavier transuranium elements, artificially created by man. For this binding, strong
nuclear forces come into the game, overcoming the electric repulsion between the
positive protons. Also these basic constituents of matter can in fact exist in vacuo:
electrons, nuclei, protons and neutrons can be isolated and have a mass and a size.
So in a way they are the true building blocks of matter; however, the experimental
study of the forces between individual nucleons has shown that they are not really
the end of the line.

1.3 Ultimate Constituents

If we collide two protons, such a collision produces a multitude of similar particles.
It is not that the protons are “broken up”: they are also still there, in addition to all
the other newly created ones. An understanding of such interactions ultimately led to
further substructure: a nucleon is a bound state of three quarks, bound by an extremely
strong nuclear force—bound so strongly that an infinite amount of energy would be
needed to split a nucleon into quarks. So we can never isolate a single quark. The
Roman philosopher Lucretius had concluded over two thousand years ago that the
ultimate constituents of matter should not have an independent existence, that they
can only exist as parts of a larger whole. And indeed this feature is today the basic
property of quarks, whose bound states form our elementary particles (Fig. 1.6). The
quarks are forever confined to their world, quite different from ours, a world that
does not have a vacuum, in which there is no empty space, in which they always
remain in close contact with their neighbors. They can never escape from this world
of extreme density, just as nothing can ever escape from the interior of a black hole.

Fig. 1.6 The chain of reduction for the structure of matter: matter → atoms → nucleus & electrons → protons & neutrons → quarks, with the confinement horizon crossed at the last step



Moreover, given the expansion of the universe, the strange world of the quarks
was not always a feature only of the very small. If we let the film of the evolution
of the universe run backwards until we get to times close to the Big Bang, we find
galaxies being compressed, less and less empty space existing, matter reaching ever
greater densities. And when we are close enough to the beginning, the overall density
of the entire universe will be higher than that inside a single nucleon, there will be
no more void, and the universe will consist of primordial quark matter. The world
as we know it, clusters of material in empty space, is gone; one of the primordial
temporal horizons of the universe is thus the birth of the vacuum. Human imagination
has carried us back even further than that. Electrons and quarks still have intrinsic
masses, and so, following again Lucretius, we can ask where they came from. We
can picture an even younger universe, in which such masses did not yet exist, only
energy. The appearance of intrinsic masses thus defines yet another, even earlier
horizon of the nascent universe.
So wherever we look, be it on Earth or in space, on large or on small scales,
now or in the past, even back to the very beginning: we always seem to encounter
horizons, and beyond these, further horizons. We have always been searching for the
last horizon, and the perseverance in keeping up this search is perhaps one of the
features that made mankind what it is today. Is there an end to our search? Before
turning to the stellar dimensions of the cosmos beyond what we can see, or to the
microcosmos at scales below what we can see, it seems natural to look at the world
around us and remember how its limits were discovered.

1.4 The End of the Earth

Around 1400 A.D., this end had a name: Cape Bojador, the cape of fear, the cape
of horrors, the cape of no return. That is where you might risk falling off the face
of the Earth, and of all the horrible things that could happen to those who went to
sea in the days of old, that was the worst. They had to face a multitude of dangers.
Uncounted men did not return, uncounted mothers and wives wept for sons and
husbands. “How much of the salt in the sea comes from the tears of Portugal?”
asked the great Portuguese poet Fernando Pessoa. Cliffs, storms, killer waves, sea
serpents, giant octopuses and other monsters of the deep—more horrifying than all
these was the thought of falling over the edge of the Earth (Fig. 1.7), of disappearing
into nothing, without a grave, without a cross, without the blessings of the church.
Somewhere the world must presumably end, and one should not really sail that far.
From our modern point of view, Cape Bojador is the western tip of Africa; but
then the world looked different. In the year 1419, the Portuguese Prince Henrique,
Infante of Portugal and “Henry the Navigator” for posterity, became governor of the
Algarve, and he dedicated his life to finding out what was beyond Bojador. First, he
had collected all reports about the approach to the unknown regions, to establish a
theoretical basis for further action. At the same time, he supported the development
of a new type of ship, the caravelle, which in matters of navigation was a great

Fig. 1.7 Sailing off the edge of the Earth

improvement over all other vessels existing at the time. Finally, in the year 1433,
Henry gave the orders to sail south and check reality. Fifteen times, ships set out to
see what, if anything, was to be found beyond Cape Bojador. They either returned
without being able to tell anything about the beyond (“the horror made us turn back”),
or they were never heard of again. At last, in 1434, on his second try, captain Gil
Eanes and his brave crew succeeded: they sailed around the end of the Earth and
thereby showed that it was no such thing.
The subsequent events are well-known: Following his course, Bartolomeu Dias
reached the Cape of Good Hope in 1488, and noted that the coast of Africa there
turned north again. Given this information, Vasco da Gama left Portugal in 1497 with
the aim of reaching India. This turned out to be quite straightforward: in Malindi,
in what today is Kenya, he met the Arab nautical expert Ahmed ibn Majid, who
provided him with maps and a local pilot. And some weeks later, on May 18, 1498,
the Portuguese fleet reached the Malabar coast of India, where Vasco da Gama
proceeded to present his credentials and royal Portuguese greetings to the Raja of
Calicut. Some years earlier, in 1492, Christopher Columbus, in the service of the
Spanish crown, had reached “West India”, on the other side of the Earth. In spite
of considerable evidence to the contrary, such as the lack of cities and the failure
of the natives to understand the Arab interpreters of the Spanish fleet, Columbus
insisted all his life that it was India that he had found. But when Fernando Magellan

not much later sailed from Europe through the strait at the southern tip of what was
in fact the “new” American continent, and his expedition continued westward and
finally returned via India, it was clear to all: the Earth is a globe. There is no mystical border, beyond
which unknown forces operate.
The Earth as a flat disk of finite size: even in the time of Henry the Navigator that
was actually more of a maritime legend of old than accepted reality. As early as four
centuries before Christ, Aristotle had argued that the Earth must be a sphere, since
viewed from the coast first the hull and only later the sails of departing ships would
disappear. Moreover, the shadow of the Earth at a lunar eclipse was always circular.
And in spite of intermediate objections, this knowledge was not forgotten. The Earth
as a flat disk from which you could fall off: in educated circles that was never
very credible. The most influential theologian of the Middle Ages, Thomas Aquinas,
summarized the situation 200 years before Henry the Navigator quite precisely:

Astrologus demonstrat terram esse rotundam per eclipsim solis et lunae.


The astronomer proves through solar and lunar eclipses that the Earth is round.

Even the size of the terrestrial sphere was quite well known. More than 200 years
before Christ, the Greek astronomer Eratosthenes had used solar measurements in
Egypt to determine it. He compared the positions of the Sun precisely at noon in the
city of Syene (today’s Aswan) with that in Alexandria. The two cities lie on the same
longitude, so that they do not have a time shift. He noted that when the Sun was at the
zenith, directly overhead, in Syene (point a in Fig. 1.8), in Alexandria (point b) it was
an angle α of 7.2◦ off the zenith line (i.e., a line orthogonal to the surface of the Earth).
Simple geometry shows that α is also the angle between the lines from the center of the
Earth to Syene and to Alexandria, respectively. The observed angle of 7.2◦ is just 1/50
of the full circle of 360◦ , so that 50 times the distance L between the two cities would

Fig. 1.8 Eratosthenes’ determination of the Earth’s circumference: the Sun stands at the zenith in Syene (point a) and an angle α off the zenith in Alexandria (point b), the two cities separated by the distance L

give the circumference of the Earth. The separation distance had been determined by
royal step-markers of the Egyptian court, men who would walk from one city to the
other in steps of as equal a length as possible. They had found the distance between
the two cities to be 5,000 stadia, about 750 km. The full circumference of the Earth
must thus be 50 times that distance, 50 × 750 = 37,500 km. Today’s measurements
give 40,000 km for the polar circumference, attesting to both the logical reasoning
of Eratosthenes and the precision of the royal step-markers.
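Eratosthenes’ reasoning reduces to a single proportion: the measured angle is to the full circle as the arc between the two cities is to the full circumference. A minimal sketch with the numbers from the text:

```python
# Eratosthenes: alpha / 360 degrees = L / circumference.

alpha_deg = 7.2    # zenith angle of the Sun at Alexandria (from the text)
arc_km = 750.0     # Syene-Alexandria distance, 5,000 stadia (from the text)

circumference_km = (360.0 / alpha_deg) * arc_km
print(circumference_km)   # 37500.0 km, against today's polar value of ~40,000 km
```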
So, all that was known at the time of Henry the Navigator, but it was theory.
200 years before Vasco da Gama and Columbus, in 1291, the brothers Ugolino and
Guido de Vivaldo from Genoa in Italy had left their city on board two well-armed
ships, the Allegranza and the Sant’Antonio, along with a crew of 300 men, with
the aim of reaching India via the Atlantic. So the idea of such a passage had also
been around for a while—theirs was the first known try. The Genoese sailed south
along the Moroccan coast, and the last message from them came from a place about
a hundred miles before Bojador. Nothing was ever heard of them again.
Many things can interfere between our ideas and the real world, and the early
explorers—Gil Eanes, Vasco da Gama, Christopher Columbus, Fernando Magel-
lan—had established where they matched. Their achievements were a crucial step
in making observation, not contemplation, the way to determine our ultimate picture
of the world. After them, our terrestrial world was finite, was a sphere. For mankind
ever after, that was not theory, not thinking, not imagination, but reality.

1.5 The Roof of Heaven



And that inverted Bowl we call The Sky,
Whereunder crawling, coop’t we live and die,

wrote the Persian astronomer, mathematician and, last but not least, poet Omar
Khayyam around 1100 after Christ. Is the sky indeed something like a roof over
the Earth, and if so, what is above that roof? The idea of a “firmament” above us,
on which the Sun, the Moon and the stars are attached, ran into problems from the
beginning, because up there everything is in motion. So not only Sun and Moon
would have to move along fixed tracks on the firmament, but all the planets as well.
Once it was established that the Earth was a sphere, the geocentric view of the world
meant that it was the stationary center surrounded by concentric moving spheres.
The Earth is the center of the universe, and all the heavenly bodies are attached
to spheres around it. These in turn rotate in different directions and with different
rotation speeds. God lives behind the last and largest of the spheres and, as “prime
mover”, keeps them rotating.
This is indeed a task for a god. While it is quite easy to picture the Sun on one
sphere around the Earth, and the Moon on another, to account for their positions
relative to us, precision measurements of the relative Sun–Moon positions began to
pose problems, and the relative motions of the planets led to immense complexity.
Thus, as seen by a terrestrial observer, the planets, such as Mars, did loops in the
sky… Nevertheless, astronomers of the time were up to the task. The culminating
geocentric scenario was developed by Claudius Ptolemy, a Roman citizen of Greek
origin living in Alexandria, Egypt, in the second century A.D.; his work is generally
known by its Arab title Almagest, since it was preserved, as were many other Greek
works, in Arab translation. In this picture, the planets still move around the Earth, but
in order to account for their observed orbits, they perform smaller circles (epicycles)
around a larger circular path. The entire world is still surrounded by a rotating
firmament, on which the most distant “fixed stars” are attached. The path
traced out by the heavenly bodies is a beautifully intricate one, shown in Fig. 1.9.
Complex as it is, the corresponding tables did allow remarkably accurate predictions
of stellar positions and remained in good service for over a thousand years.
But with time and further observations, things became more and more involved
and apparently ad hoc: the epicycles of Ptolemy had to be determined specifically for
each planet, the center of the large circle was shifted from the Earth, and more. The
complexity of the formalism had become so great that King Alfonso X of Castile,
who was a great patron of astronomy in the thirteenth century and had a compilation
made of Ptolemy’s works, based on Arab translations, is supposed to have said that
“if the Lord Almighty had consulted me before embarking on Creation, I would
have suggested something simpler”. Hence it seemed not unreasonable to step back
and ask if there might not be a more appropriate way to account for the observations.
This is where Nicolaus Copernicus came in, around 1510 A.D., when he proposed
the Sun as the center of the observable stellar world. He did acknowledge some
hints from antiquity; the Greek astronomer Aristarchos of Samos had suggested a
heliocentric universe already more than two centuries B.C. Aristarchos had estimated
the Sun to be much larger and heavier than the Earth, and thought it more reasonable

Fig. 1.9 The orbit of Mars around the Earth, according to Ptolemy

for the smaller body to circle around the larger. But Copernicus now developed a
mathematical model, in which the different planets circled around the Sun at different
distances and moreover rotated around their own axes. It was still a world of spheres,
with a final outer sphere for the fixed stars, centered at the Sun and containing within
it the circular orbits of the planets. In the aesthetic and religious thinking since
antiquity, circles and spheres were considered as the symbol of universal harmony,
and so their use as a basis seemed natural to Copernicus. Nevertheless, the Earth was
now no longer the stationary center, the fixed point of the universe. It rotates about
its own axis once a day and around the Sun once a year.
In its time, the model of Copernicus met with no serious criticism and was
apparently received favorably even by the Roman clergy. This does not imply, how-
ever, that it was accepted in the present sense. It was rather considered an abstract
construct, a mathematical scheme to calculate the motion and position of the heav-
enly bodies, and even at that, it was not perfect. It was left for Johannes Kepler to
replace the circular orbits by ellipses to obtain precise agreement. And for much of the
common world, a heliocentric universe with a rotating Earth was simply nonsense.
Martin Luther is quoted as saying about Copernicus “that fool is turning astronomy
upside down…”.
Johannes Kepler, some hundred years later, had one great advantage: he had access
to detailed astronomical measurements by Galileo Galilei and by Tycho Brahe.
Developments in telescope construction had made these possible and so provided a
solid empirical basis requiring a precise mathematical description. Kepler, as well
as Galileo, considered the heliocentric universe as the true description of the cos-
mos, not just a model to compute the positions of planets. As a result, strong protest
came from both the catholic and the protestant churches. Moreover, his work was
carried out during the Thirty Years’ War between the two Christian factions in

Germany, and Kepler, refusing to take sides, had to flee several times from persecu-
tion. Nevertheless, he remained deeply religious.
For posterity, he remains, perhaps above all, a brilliant mathematician, and thus able
to construct a mathematical theory accounting for the data at his disposal, known
today as Kepler’s laws of planetary motion. These laws described with great precision
the elliptical orbits of the planets around the Sun, without, however, explaining why
they moved in this way. Kepler believed that there must be some force of the Sun,
acting over large distances and counterbalancing a centrifugal outward push, to keep
the planets in orbit. At his time, that was speculation—to be made into a physical
theory almost 80 years later, by Isaac Newton, who wanted to explain as well as
describe.
The required abstraction was that the same forces that act on Earth also govern
the motion in the heavens. On Earth, “falling bodies” were a common phenomenon,
rain fell from clouds, apples fell from trees, arrows and cannonballs rose and then
fell. Correcting some Aristotelean misconceptions, Galileo Galilei had already estab-
lished that the falling of all objects follows a universal law: the distance a body has
fallen grows with the square of the time and is the same no matter what the mass of
the body is. To be sure, a feather falls slower than a stone, but this is because it tends
to “float” in the air. A stone of the weight of a feather falls the same distance in
the same time as a heavier stone.
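Galileo’s law can be put into a few lines: the distance fallen grows with the square of the elapsed time, and the mass never enters. A minimal sketch; the value of g is the modern one, which Galileo of course did not have:

```python
# Galileo's law of falling bodies: s = (1/2) g t^2, independent of the mass.

G_ACCEL = 9.81  # m/s^2, modern value near the Earth's surface

def distance_fallen(t_seconds: float) -> float:
    """Distance fallen from rest after t seconds, air resistance ignored."""
    return 0.5 * G_ACCEL * t_seconds ** 2

for t in (1.0, 2.0, 3.0):
    print(t, distance_fallen(t))   # 4.9, 19.6, 44.1 m: quadratic growth
```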
The observations of Galileo soon led to what is today called classical mechanics—
the beginning of physics as we now understand it. Isaac Newton, in his celebrated
Philosophiae Naturalis Principia Mathematica formulated the theory describing the
effect of forces on material bodies and on their motion. In antiquity, the natural state
of a body was thought to be “at rest”; any motion seemed to require some action on
the body, a cause for getting it to move. Galileo, and following him more succinctly
Newton, replaced this by noting that rest means something different for someone on
a boat floating on a river and for an observer on the banks of the river. So a first kind
of relativity principle appeared: all states of constant relative motion with respect to
each other are equivalent, none is more natural than the other. Or, in Newton’s terms,
a body in uniform motion will remain that way unless acted upon by some force.
That introduced the concept of force as the agent resulting in a change in the state of
being of anything, as the reason for acceleration, as the origin of action and reaction.
One immediate outcome of this was the theory of gravitation, of the forces between
celestial bodies. Gravity was the first universal force to be encountered by humans.
To be sure, there were many other forces, of wind, of the sea, of an ox pulling a plow,
of a bowstring shooting an arrow. But they were dependent on time, circumstance
and cause, whereas gravity was always there, everywhere and at all times. A stone
released would fall to the ground, in the same way, no matter who released it, where
and when. There seemed to be a mysterious attraction of things to the Earth. It
was Newton’s great achievement to relate this everyday force to that determining
celestial structure and motion. Newton’s theory of gravitation states that a massive
object attracts any other massive object with a force that is proportional to the product

Fig. 1.10 The Copernican picture of a universe with an ultimate horizon, a final outer firmament holding the stars

of their masses and inversely proportional to the square of their separation distance,
$$F = G\,\frac{M_1 M_2}{r^2}$$

where $M_1$ and $M_2$ are the masses, $r$ their separation, and $G$ Newton's universal
constant of gravitation. The force of gravity is always attractive, and it acts over
immense distances without any apparent connection between the interacting objects,
and, as it seemed, instantaneously. It holds the Earth and the other planets in orbits
around the Sun, with the centrifugal force of their motion just balancing the attraction
of gravity. In the same way, it binds the Moon to the Earth. We know today that it is
this force that holds galaxies together and that determines the large-scale structure
of our universe. And yet it is the same force that determines the change of motion
of the objects of our everyday world, the falling of apples, the rising of airplanes,
the orbits of the satellites providing our communication. Gravity is thus the most
universal force in the world, operative from our human scale to that of the entire
universe.
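To attach numbers to the formula, here is a minimal sketch evaluating it for the Earth–Moon pair; the constant G and the masses are standard reference values, not taken from the text:

```python
# Newton's law of gravitation: F = G * M1 * M2 / r^2.

G = 6.674e-11             # m^3 kg^-1 s^-2, gravitational constant
M_EARTH = 5.972e24        # kg
M_MOON = 7.342e22         # kg
R_EARTH_MOON = 3.844e8    # m, mean Earth-Moon separation

def gravitational_force(m1: float, m2: float, r: float) -> float:
    """Attractive force in newtons between two masses a distance r apart."""
    return G * m1 * m2 / r ** 2

print(gravitational_force(M_EARTH, M_MOON, R_EARTH_MOON))   # ~2.0e20 N
```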
So, at this point, astronomers had a consistent theoretical explanation for the
structure and motion of the observable world: the Earth, the Moon, the Sun, the
other planets and their moons. The Sun is its center, and the force holding everything
in place in the heavens, gravity, is the same force giving mass and weight to all
objects on Earth, making apples fall from trees and preventing stones from jumping
into the sky. Behind all this, there still was the outer sphere, holding the fixed stars
(Fig. 1.10), and beyond that sphere…what was there? In Greek philosophy, nothing,
infinite and eternal nothing. But off and on, the possibility of a universe without a last
sphere was brought up. Instead, beyond the solar system, there could be an infinity
filled homogeneously with fixed stars; such a scenario had been considered by the

English astronomer Thomas Digges in 1576. Thoughts of this kind were always on
the verge of being heretic, in the eyes of the church. The Italian philosopher Giordano
Bruno not only believed that the universe is infinite, but that it is filled with an infinity
of worlds just like our own. This was clearly in violent contradiction to the dogma
of one world made by one creator according to the scripture. And so on February 17,
1600, Giordano Bruno was burned at the stake in Rome.
2 The Vanishing Stars

Were the succession of stars endless, then the background of the
sky would present us a uniform luminosity—since there could
be absolutely no point, in all that background, at which there
would not exist a star.
Edgar Allan Poe, Eureka, 1848

In spite of Giordano Bruno’s fate, the limits of the universe continued to occupy the
minds of many scientists and philosophers. Is there indeed some ultimate celestial
sphere? And if so, what is in that forbidden “room” beyond it? The existence of a
final firmament, to which the fixed stars are attached, did in fact answer one rather
curious question. Why is the sky dark at night? If there were no such sphere, if
instead a world of stars continues on and on, homogeneously, with the same density,
forever outward, then every spot in the sky will be filled with shining stars, some
closer, some further out, and further yet. Copernicus insisted on a fixed outer sphere
with a finite number of stars and thus avoided the problem. Kepler had realized the
difficulty and therefore also ruled out the possibility of an infinite universe. Still the
question kept reappearing and is today known as Olbers’s paradox, after the German
astronomer Heinrich Olbers, who formulated it most succinctly in 1823. It is an
excellent illustration of how a well-posed question can lead to progress in thinking
and understanding. To answer it, however, we first have to address one of the basic
issues of physics: what is light?

2.1 The Speed of Light

But what and how great should we take the speed of light to be? Is it instantaneous
perhaps, or momentary? Or does it require time, like other movements? Could we
assure ourselves by experiment which it may be?


The question had been around for quite a while when Galilei, in his Renaissance
treatise on the Two New Sciences, made his alter ego Salviati ask it. Already Aristotle
had complained more than 300 years before Christ that

Empedocles says that the light from the Sun arrives first in the intervening space
before it comes to the eye or reaches the Earth.

He, Aristotle, was sure that this was completely wrong, that “light is not a move-
ment”, and his belief dominated western thinking for almost 2,000 years. The speed
of light is infinite—even great scientists and philosophers like Johannes Kepler and
René Descartes were more than convinced of that. Descartes said that “it is so certain,
that if one could prove it false, I am ready to confess that I know nothing at all of
philosophy”.
Galilei, of course, proposed the right way to resolve also this issue: experiment.
He even tried it himself, but at that time terrestrial techniques were not up to the task.
A distant assistant had to cover and uncover a lamp, and Galilei tried to measure the
time it took him to see that. He correctly noted that light did travel faster than sound.
But to determine its speed, one needed longer times and hence larger distances, and
these were then to be found only in astronomical domains. The problem was, in fact,
twofold. Is the speed of light finite, and if so, what is its value?
The first question was answered several decades later by Ole Rømer, a truly multi-
talented man from Aarhus in Denmark. His real name would have been Ole Pedersen,
but with so many Pedersens around, his father had started to call himself Rømer, after
the island of Rømø, where they came from. Ole had studied physics, mathematics
and astronomy in Copenhagen and eventually married the daughter of his professor
there. In between, he had worked for King Louis XIV in Paris and took part in the
design of the fountains of Versailles. After this interlude, he returned to Denmark
for an appointment as “royal mathematician”, where he introduced the first national
system of weights and measures, as well as the Gregorian calendar. And besides all
this, he became Chief of the Copenhagen Police, responsible for the installation of
the first street lights there. In Paris, he had worked as an assistant for the astronomer
Giovanni Domenico Cassini, and Cassini had made a remarkable observation. The
planet Jupiter, fifth around the Sun and largest of all, had a Moon, called Io (named
after a nymph seduced by the Roman god Jupiter, in his Greek avatar form of Zeus),
which circled around it approximately once every 42 h, in contrast to the 28 days our
earthly Moon takes for its orbit. That meant that seen from the Earth, there would be
many “eclipses” of Io at any stage of the Earth’s orbit around the Sun; the geometry
is shown in Fig. 2.1. One could thus measure the time at which Io disappears behind
Jupiter, and do this for a series of eclipses. This provided a determination of the time
between successive eclipses, giving a prediction for the next.
And the striking observation first made by Cassini was that the onset of an eclipse
fell more and more behind schedule the further away the Earth was from Jupiter.
Cassini was not sure, but thought that perhaps light takes some time to reach us.
Eventually, he seems to have rejected this conclusion. Rømer, instead, combined
a number of different measurements, extrapolated them to eliminate interference
Fig. 2.1 Ole Rømer’s basis for the determination of the speed of light

effects, and found that the delay in time of eclipse onsets seen from the point of
greatest Earth–Jupiter separation (point a) compared to those seen from the smallest
distance (point b) was about 22 min. From this he now concluded that the speed of
light is indeed finite and that the 22 min is the time it needs to traverse the diameter
of the orbit of the Earth around the Sun.
To obtain the actual value of the speed of light from these measurements, the size
of the orbit of the Earth around the Sun had to be known. How far did light have to
travel in these 22 min it took between the two extreme points? This distance, divided
by 22 min, would then be the speed of light. The relevant information to determine
the distance from Earth to Sun was actually available at that time, due mainly to
the studies of Cassini. The first numerical value for the speed of light, however, was
apparently obtained by the Dutch physicist Christiaan Huygens in 1678, two years
after Ole Rømer had announced his conclusions.
Kepler had, in his “third law” of celestial motion, concluded that the time for a
planet to orbit the Sun was related to the distance between this planet and the Sun;
from this, the relative distances of all planets from the Sun were known. In particular,
the distance between Mars and the Sun was found to be about 1.5 times that of the
Earth and the Sun. To arrive at an actual value for the Earth–Sun distance, some astro-
nomical distance had to be measured in terrestrial units, and this “calibration” had
in fact been carried out by Cassini and his collaborator Jean Richer. They measured
simultaneously the position of Mars relative to the fixed star background, Cassini in
Paris and Richer in French Guiana. This gave them an angle and a known terrestrial
distance, the 4,000 km between Paris and Guiana, and geometry then determined
the distance between Mars and Earth. They found it to be about 73 million km. At
the point of closest approach of Mars and Earth, that led to 146 million km for the
distance between Earth and Sun. Since light travelled, according to Rømer, twice that
distance in 22 min, Huygens noted that its speed must be about 220,000 km/s. This
result, obtained over 300 years ago by a combination of logical thinking, abstraction
and rudimentary measurements, is certainly one of the great achievements of the
Fig. 2.2 The mirror arrangement used by Fizeau and Foucault for a terrestrial determination of the speed of light

human mind; it is only about 25 % too low according to today’s precision value,
measured using radio signals between spacecraft positioned in the solar system.
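Huygens’ arithmetic can be retraced in a few lines from the two historical inputs quoted above (a minimal sketch; the modern light-travel time across the orbit’s diameter is closer to 17 min, which accounts for most of the discrepancy).

```python
# Retracing Huygens' 1678 estimate from Roemer's eclipse delay.
earth_sun_km = 146e6      # Cassini/Richer value for the Earth-Sun distance, km
delay_min = 22            # Roemer's delay across the diameter of Earth's orbit

c_estimate = 2 * earth_sun_km / (delay_min * 60)   # km/s
print(f"estimated speed of light: {c_estimate:,.0f} km/s")   # ~221,000 km/s
```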
The first terrestrial measurements were carried out in Paris by Hippolyte Fizeau
and Léon Foucault around 1850, improving the attempt of Galileo by reflecting light
in a clever arrangement of mirrors. Foucault, with his celebrated pendulum, had in
fact also provided for the first time direct proof of the rotation of the Earth around
its axis. But he now modified an older apparatus devised by Fizeau to measure on
Earth the time light needs to go from one point to another. The set-up is illustrated
in Fig. 2.2. Two mirrors are placed as far apart as possible, at a distance d; they now
play the role of Galileo and his assistant. One of the two mirrors is rotating at a speed
ω, the other is stationary. A beam of light is directed at the rotating mirror, and that
reflects it to the stationary one. When it now returns to the rotating mirror, it has
travelled between the two mirrors a total distance 2d. During the travel time, the
rotating mirror has turned an angle θ, so it reflects the beam back not at the source
of light, but at a detector placed at an angle 2θ away. Knowing d, θ and the rotation
speed ω gives the speed of light as c = 2d ω/θ. The results of Fizeau and Foucault
were within 1 % of the present value, 299,792.458 km/s.
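Inverting the relation c = 2dω/θ shows how delicate the measurement was; the mirror distance and rotation rate below are plausible illustrative values, not the actual settings of Fizeau or Foucault.

```python
import math

# Rotating-mirror method: during the round trip 2d the mirror turns by
# theta = omega * (2d/c), so measuring theta yields c = 2*d*omega/theta.
c = 299_792_458.0              # speed of light, m/s
d = 20.0                       # mirror separation, m (illustrative)
omega = 2 * math.pi * 500      # mirror spinning at 500 turns per second, rad/s

theta = omega * 2 * d / c
print(f"deflection angle: {math.degrees(theta) * 3600:.0f} arcseconds")   # ~86''
```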
So, the light from the Sun did have to travel through the intermediate space before
reaching the Earth, as Empedocles had supposed 2,500 years ago. But what is this
light travelling through what we think is empty space? What is it that is moving at
300,000 km/s?
This question led to another basic and universal phenomenon of the inanimate
world: electromagnetism. Initially, electricity and magnetism entered as two quite
separate and distinct features. The first appearance of electricity in the life of humans
was lightning, for a long time thought to express the wrath of the gods in a frightful
way, and beyond human understanding. A more mundane version was observed by
the ancient Egyptians, more than 3,000 years ago; they were familiar with electric
fish which could produce remarkable bolts of electricity to stun their prey. This
source of electricity was supposedly used already in those days for the treatment of
neural illnesses. In ancient Greece, it was noted that rubbing amber with a catskin
made it attract feathers and other light objects—and it was this feature that gave the
name to the mysterious force, with elektron as the word for amber in ancient Greek.

But it took still more than 1,500 years until these various and seemingly unrelated
phenomena began to be understood, and only in the last 100 years has electricity
dramatically changed human life.
Magnetism was more well-defined from the beginning. Several millennia ago it
was noticed in China that a certain kind of stone attracts iron, and if suspended by a
string, it would orient itself along a north–south axis. Making use of this, the ancient
Chinese constructed the first magnetic compass for navigation. In ancient Greece,
Thales of Miletus described the effect, and since the stones showing such behavior
there came from a province called Magnesia, he called it magnetic. In English, it
became “leadstone” and finally “lodestone”, presumably because it could be used to
lead travellers in the desired direction.
Both electricity and magnetism became part of natural science only less than
300 years ago. It was discovered that there exist two different forms of electricity,
arbitrarily denoted as positive and negative; each form could be produced by rubbing,
for example, and each kind can exist on its own. If two metal balls were prepared to
have different “charges”, like and like repelled each other, while positive and negative
showed attraction—both by invisible means across the distance of their separation.
Charles Augustin de Coulomb in France showed in 1785 that these reactions followed
a law very similar to that proposed by Newton for the equally invisible action at a
distance provided by gravity (Fig. 2.3). Coulomb’s law gives for the electric force
$$F = K\,\frac{q_1 q_2}{r^2},$$

Fig. 2.3 Three forms of action at a distance: the gravitational attraction between the Earth and the Moon, the electric attraction between positive and negative charges, and the magnetic attraction between opposite poles, accompanied by the repulsion between like poles

where q1 and q2 measure the amount of charge on each ball and r their separation;
the constant K plays the role of Newton’s universal constant of gravitation, except
that it is now positive (repulsion) for like and negative (attraction) for unlike charges.
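The sign convention can be made explicit in a few lines of code; the charges and the separation below are arbitrary illustrative values.

```python
K = 8.988e9   # Coulomb's constant, N m^2 / C^2

def coulomb_force(q1, q2, r):
    """F = K*q1*q2/r^2: positive means repulsion, negative means attraction."""
    return K * q1 * q2 / r**2

print(coulomb_force(+1e-6, +1e-6, 0.1))   # like charges: about +0.9 N, repulsive
print(coulomb_force(+1e-6, -1e-6, 0.1))   # unlike charges: about -0.9 N, attractive
```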
While positive and negative electric charges could exist independently and could
be produced separately, magnets were curious animals. They had a north pole and
a south pole, and given two magnets, north and south attracted each other, while
north/north or south/south meant repulsion. But there was no way to get just one
pole. Cut a magnet in two in the middle, and you had two new magnets, each with
its north and its south pole. And until today, physicists are still wondering if there
isn’t some way to create a monopole. The magnetic force was not quite of the inverse
square form encountered in Coulomb’s law of electric interaction or Newton’s law of
gravity, since each pair of magnets experienced both attraction, between the opposite
poles, and repulsion, between the equal poles. Nevertheless, the interaction between
two magnets, as well as that between metals and magnets, was again by some invisible
means over the distance of separation.
So both electric and magnetic interactions showed a mysterious feature already
encountered in the case of gravitation: an interaction over a distance, without any
apparent connection between the interacting objects. How such an interaction could
arise was something that had puzzled people at all times. Was there some invisible
medium filling all of space to provide a connection? The beginning of an answer was
provided by the British physicist Michael Faraday, who proposed that each charge
would be surrounded by an electric field, radiating out starlike lines of force emerging
from the source in all directions (Fig. 2.4). And this field would “feel” the presence
of other charges and react accordingly: the lines of force would bend either towards
the other charge or away from it, depending on the sign.

Fig. 2.4 Lines of force emerging from isolated sources of positive and negative electricity (top) and from neighboring like and unlike sources (bottom)
Moreover, in the early 1800s, Hans Christian Oersted in Copenhagen discovered that there was a strange connection between electricity and magnetism. It was known that certain materials—today’s conductors—allow a rapid spreading of electric
charge: they result in the flow of an electric current between opposite charges, form-
ing an electric circuit. Now Oersted observed that a magnet would align itself in a
direction orthogonal to the line of current flow, as if the current had created magnetic
lines of force around its flow axis. So one could imagine unending lines of force
corresponding to magnetic fields, closed loops having neither beginning nor end.
This would explain why cutting a magnet in two simply produced two magnets, and
did not yield an isolated pole.
In the course of the nineteenth century, extensive studies showed that electric and
magnetic forces are indeed closely intertwined: electric currents produced magnetic
fields and moving magnets induce electric currents. This suggested a unified theory
of electromagnetic fields, and it was the great British physicist James Clerk Maxwell
who created it, with his famous equations. Through Maxwell, electricity and mag-
netism were unified to electromagnetism. And in addition, he provided the basis for
an understanding of how the interaction of electromagnetic sources could occur over
distances. Maxwell showed that a changing electric field generates a magnetic field,
just as a changing magnetic field would through induction create an electric field. So
the combination of the two, electromagnetic fields, now gained an independent exis-
tence, without the need of currents or magnets. And one simple solution of Maxwell’s
equations was that of travelling waves, like an excitation travelling down a string, or
a wave travelling across a pool of water. The action over a distance could thus occur
through the exchange of electromagnetic signals in the form of such waves. They
propagate through space at a fixed speed, which can be measured and was found to
be the familiar speed of light. The fundamental question “what is light?” was therefore
now answered: it is an electromagnetic wave travelling through space, and the differ-
ent colors of light simply correspond to different possible wavelengths. Beyond the
range of visible light, we recognize today electromagnetic radiation on both sides,
with radio waves of longer wavelength (beyond the infrared) and X-rays of shorter
wavelength (beyond the ultraviolet). And in a way, it also answered the question
of how distant charges could interact: through the exchange of an electromagnetic
signal.
But the answer was not really complete. If distant charges communicated by
electromagnetic waves travelling between them: what was being excited to form
such waves? In our everyday world, it can be a string, the surface of water, the
density of air. But what is it in empty space that is vibrating? And so the ether
entered the world of physics, an invisible medium filling all of the so-called empty
space. This satisfied those who thought that truly empty space was “unnatural”,
such as the French philosopher Blaise Pascal, who believed that “nature abhors a
vacuum”. When Evangelista Torricelli in Italy succeeded in removing all the air
from a vessel, Pascal noted that the absence of air does not mean empty. For light,
the ether was first introduced by Robert Hooke, in 1665; he pictured a pulse of light
like a stone thrown into a pool of water, with concentric waves spreading out. Just
as a tsunami wave is formed by an earthquake at the bottom of the sea far out in the
ocean and then travels towards some shore, so a change in the electromagnetic state
somewhere would be communicated across space to a distant receiver in the form
of an electromagnetic tsunami wave in the ether. This ether turned out to be one of
the most-travelled dead-end roads of physics. From the time of Hooke to the time of
Einstein, a great number of well-known physicists tried their hand at it, and always
with rather limited success. Is the ether stationary, or is it comoving with stars? Is
there an ether-wind due to the Earth moving through it? Is matter perhaps only a form
of vortices in the ether? The presence of an ether resolved the puzzle of an action at
a distance, but to do so, it had to be a material substance and yet, at the same time,
not seriously affect the motion of the stars. One of the most celebrated experiments
to find it was carried out in the 1880s by the American physicists Albert Michelson
and Edward Morley. If light was travelling through the ether everywhere at its fixed
speed, then it would have to be slower if measured in the direction of the Earth’s
motion than if perpendicular to it. They devised an interferometer constructed such
as to have two beams of light, one along and one perpendicular to the motion of the
Earth, travel the same distance and by means of a mirror arrangement meet again at a
given point (see Fig. 2.5). The slowing effect of the Earth’s motion would throw them
out of phase, so that a valley in the wave of one would hit a peak in that of the other
beam, causing interference. Much to their frustration, Michelson and Morley found
no effect whatsoever; all waves arrived completely in phase. No matter how they
positioned their apparatus, the speed of light seemed always to be exactly the same.
So there was no evidence for any form of ether, and after numerous attempts to find
a way out, it was finally banned from physics by Albert Einstein, almost 20 years
later. It is now definitely ruled out, at least as far as electromagnetism is concerned.


Fig. 2.5 The Michelson–Morley experiment to detect the presence of an ether. A beam of light
is directed at a partially transmitting mirror M, from where part of it is reflected to mirror 1 and
then on to the detector, another part to mirror 2 and then to the detector. The direction from mirror
1 to the detector is chosen to be north-south, that from the light source to mirror 2 east-west, and
both mirrors 1 and 2 are equidistant from the central mirror M. The motion of the Earth (east-west)
relative to the ether was predicted to modify the speed of the corresponding light beam and thereby
lead to interference patterns between the two beams arriving at the detector

However, even today it is not so clear what the role of a cosmological constant or
dark energy is; we shall return to these somewhat ether-like ideas later on.
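A rough estimate shows what Michelson and Morley were up against. With classical velocity addition, a round trip of length L along the supposed ether wind takes (2L/c)/(1 − β²) and one across it (2L/c)/√(1 − β²), with β = v/c. A sketch with an illustrative arm length (their real apparatus folded the path with multiple reflections):

```python
import math

c = 3.0e8             # speed of light, m/s
v = 3.0e4             # speed of the Earth around the Sun, m/s
L = 11.0              # effective optical arm length, m (illustrative)
wavelength = 550e-9   # visible light, m

beta2 = (v / c) ** 2
t_along = (2 * L / c) / (1 - beta2)              # round trip along the "wind"
t_across = (2 * L / c) / math.sqrt(1 - beta2)    # round trip across it

fringe_shift = (t_along - t_across) * c / wavelength
print(f"expected shift: {fringe_shift:.2f} fringes")
# ~0.2 fringe; rotating the apparatus by 90 degrees doubles the expected
# effect. No shift at all was observed.
```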
Maxwell’s equations implied a unique speed for electromagnetic waves travelling
through empty space, the universal speed of light. This is in fact much more dramatic
than it seems at first sight: such a behavior is simply not in accord with our everyday
experience. A car moving at 100 km/h, as seen by a stationary observer, has a relative
speed of only 70 km/h for someone moving in the same direction at 30 km/h. And two
cars, both travelling at 100 km/h in the same direction, are not moving at all relative
to each other. If someone in the compartment of a moving train drops a coin, it falls
straight down: train, passenger and coin, though all are travelling at high speed for
an observer on the ground, are at rest relative to each other. Light is not like that.
If a stationary and a moving observer measure the same beam of light, they both
find the same value for its speed. No matter how fast you move, the speed of light
you measure is always that 300,000 km/s. By moving faster, you can neither start
to catch up with a light beam, nor run away from it. And ten different observers,
all moving at different speeds, find that, although their relative speeds differ, that of
a given light beam is always the same universal value. In the framework in which
Newton formulated his laws, this was simply impossible. In a fixed space with a
universal time, the speed of light would change for observers moving at different
speeds. To make a constant speed of light possible, the ideas of space and time had to
be fundamentally modified. To keep a universal speed of light, the scales for distance
and time must become dependent on the observer. Let me measure the speed of light
in a laboratory here on Earth, and an astronaut measures it in a space ship moving
at high speed relative to the Earth: if we both get the same result, then his standard
meter and his standard second, as seen by me here on Earth, must have taken on
different values than mine—and they do. The resulting milestone in physics was
Albert Einstein’s theory of relativity, more exactly, the special theory of relativity.
The “special” is an a posteriori modification, indicating that it holds in a restricted
spatial region of the universe only. The extension to the entire cosmos, including the
role of gravity, followed 10 years later with the general theory of relativity, and again
it was Einstein who did it.
To formulate his special theory of relativity, Einstein combined a principle pro-
posed by Galileo Galilei 400 years earlier with the recently discovered universal
speed of light. Galileo had insisted that the laws of physics be the same for all
observers in uniform motion relative to each other. In other words, if I measure the
time it takes a stone to fall to the ground from a height of one meter, once in the
laboratory and once on a high speed train, the results should be identical. Einstein
realized that if this was to hold and at the same time a universal speed of light was
to be maintained for all observers in uniform relative motion, our ideas of space
and time would have to be modified, space and time would have to be related, and
their scales have to depend on the speed of the observer (see Box 1). In Newton’s
world, there was a unique time, the same everywhere, and one could talk about two
events occurring at the same time. In a relativistic world, synchronization over large
distances is not possible, and what is first for one observer, may be later for another.

Another striking result of relativity theory was the conclusion that no material
body could ever move at the speed of light. According to Newton’s law of force,
an increase of force must increase the acceleration of a mass and hence eventually
bring its speed to arbitrarily high values, faster than the speed of light. Einstein
showed that in the regime in which relativistic effects cannot be neglected, that is,
at speeds lower but comparable to that of light, Newton’s law becomes modified.
Only part of the force serves to increase the speed; an ever larger fraction goes
into increasing the mass, the inertia of the accelerated body. In our everyday world,
the speeds encountered are so far below that of light that we can safely ignore the
speed corrections and work with a speed-independent inertial mass. But in modern
high-energy particle accelerators, such as the Large Hadron Collider at the European
Laboratory for Nuclear Research CERN in Geneva, Switzerland, one brings protons
to speeds of 95 % of the speed of light, and then the effective mass of these particles is
more than three times their mass at rest. And so it is evident that we can never bring
a material body to move at the speed of light—it would require an infinite force to
do that. No massive object can ever catch up with a beam of light in empty space;
light remains the fastest agent in the universe.

Box 1. Relativistic Motion


If an observer moving in a spaceship at a high speed v with respect to a
laboratory on Earth finds that the speed of light is the same as ours, it must
mean that from our point of view his length measure is shorter than ours, or his
clock runs slower than ours, or both. Actually, it is indeed both: a given length
d0 , a standard meter, has that value for us here as well as for the observer in
his moving space ship. But his moving meter stick, as seen by us, becomes
shortened to the length d,

$$d = d_0 \sqrt{1 - (v/c)^2},$$
where c denotes again the speed of light. And a fixed time interval t0 on the
spaceship clock, if we measure it from here on Earth, appears dilated to become
a longer interval t,

$$t = \frac{t_0}{\sqrt{1 - (v/c)^2}}.$$
Evidently, the faster the space ship moves, the greater is the effect, both in
the contraction of the length scales and the dilation of the time scales.
As a consequence, Newton’s law of force becomes modified as well; it now
reads

$$F = \frac{m_0}{\sqrt{1 - (v/c)^2}}\,a,$$

so that the inertial mass m0 of a body at rest is at speed v increased to

$$m = \frac{m_0}{\sqrt{1 - (v/c)^2}}.$$
At low speed, as long as we can ignore the (v/c)² term, we recover both the speed-independent inertial mass m0 and Newton’s force law F = m0 a.
If we consider the force F to be gravity, we see from the relativistic form
of Newton’s law that the inertial mass of a body, i.e., its resistance to a force,
is not its rest mass, but rather a mass including the energy of motion. Einstein
formulated this in his celebrated relation between mass and energy,
$$E = mc^2,$$
which means in particular that energy offers an inertial resistance to any force.
Even photons, which have no rest mass, will thus be affected by gravity as if
they had a mass determined by their energy. So we can weigh the photons
trapped in a container: an empty container is lighter than one containing a gas
of photons.
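Evaluating these formulas at the 95 % figure quoted above for LHC protons reproduces the factor of three; this is a minimal numerical check, nothing more.

```python
import math

def gamma(beta):
    """Lorentz factor 1/sqrt(1 - (v/c)^2) from Box 1."""
    return 1.0 / math.sqrt(1.0 - beta**2)

beta = 0.95                                    # v = 95% of the speed of light
g = gamma(beta)
print(f"gamma = {g:.2f}")                      # ~3.20: moving mass > 3x rest mass
print(f"moving meter stick: {1/g:.3f} m")      # length contraction d = d0/gamma
print(f"1 s on board reads: {g:.2f} s here")   # time dilation t = gamma*t0
```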

So we now know that the light from the stars we see today has been travelling
for many years, waves of electromagnetic energy moving through an empty space
containing no ether, at a speed of some 300,000 km/s, no matter who measured it.
We are therefore prepared to return to the puzzle we had started with.

2.2 Why Is the Sky Dark at Night?

The paradox is today named after Heinrich Olbers; he was not the first to realize it; Kepler had done so earlier, concluding that the succession of stars is not endless. With
Edgar Allan Poe, the problem entered the literary world, leading to pictures that a
century later became science, such as an expanding universe starting from a Big
Bang. As an earthly illustration of the problem, one can consider an infinite forest:
wherever you look horizontally, your line of vision hits a tree. Olbers, in 1823, did
state most clearly the assumptions which had led to the paradox:
• The universe is infinite in all directions and has existed forever as it is now.
• The stars are distributed with the same density throughout the universe, they
have existed forever, and they have a finite size and brightness.
Given these conditions, the whole sky should be as bright as a typical star; it
should never get dark at night. So something must be wrong somewhere, and that
something leads us directly to the forefront of modern cosmology and its view of the
origin of the universe.
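A standard back-of-the-envelope estimate, implicit in these two assumptions, makes the problem quantitative. Divide the universe into thin spherical shells centred on the Earth, with n stars per unit volume, each of luminosity L; a shell of radius r and thickness dr then delivers a flux

$$\mathrm{d}F = \underbrace{n\,4\pi r^2\,\mathrm{d}r}_{\text{stars in the shell}} \times \underbrace{\frac{L}{4\pi r^2}}_{\text{flux per star}} = n\,L\,\mathrm{d}r,$$

independent of r. Every shell contributes the same amount, and infinitely many shells add up to an unboundedly bright sky; allowing for nearer stars covering more distant ones caps the total at the surface brightness of a typical star.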
If the age of the universe is finite, if there was a Big Bang starting everything a
certain number of years ago, then the universe we can see today will also be of finite
size, because light has only had those years to travel. To be sure, the numbers are
huge, but they are not infinite. Moreover, the stars had to form sometime after the
Big Bang, so their number is also finite. In other words, a finite age of the universe
allows us to see only a finite spatial part of it, and in that part only a finite number of
stars can have appeared since the Big Bang. That is why the sky is dark at night—a
late answer to Heinrich Olbers, requiring both a finite speed of light and a Big Bang
origin of the universe. A simple question can lead you a long way…
But how can we be sure that this view of things is really correct? The origin
of the universe, in fact the question whether it has an origin, has been the subject
of much dispute, scientific, philosophical and religious. There are two main reasons
why today most scientists tend to believe in the Big Bang theory—but let us approach
them slowly and step by step.
A well-known effect in the physics of everyday phenomena is that the pitch of a
sound you hear is modified if the source of the sound is moving. The sound of a race
car engine seems higher pitched as the car approaches and lower as it moves away,
leading to a characteristic tonal flip as it moves past you. In earlier days, the change in
tone of the whistle of a passing railroad engine was the typically cited example. The
phenomenon is known as the Doppler effect, after the Austrian physicist Christian
Doppler. The tone you hear is caused by sound waves of a certain wavelength, and
when the source of the sound approaches you, the distance between wave peaks, the
wavelength, is shortened, giving a higher sound, and when it moves away, it becomes
longer and hence results in a lower sound. The same “Doppler effect” also occurs
for light waves, so that one can in fact check if a given far-away star is stationary or
moving. Stars emit light of certain characteristic wavelengths (“spectral lines”), and
if this light is Doppler-shifted when it arrives at the telescope on Earth, its source must
be moving. Let’s say a star is emitting light of a fixed wavelength λ0, as measured by an observer stationed on that star. For an observer moving away from the star with a speed v, that light will appear to have a longer wavelength λ = λ0/√(1 − (v/c)²), i.e., it will be shifted in the direction from blue towards red, it will experience a redshift.
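Plugging numbers into the relation quoted here shows how the shift grows with speed; the hydrogen line below is just a convenient example of a spectral line.

```python
import math

def observed_wavelength(lam0, beta):
    # The relation quoted in the text: lambda = lambda0 / sqrt(1 - (v/c)^2)
    return lam0 / math.sqrt(1 - beta**2)

lam0 = 656.3   # hydrogen-alpha line, nm
for beta in (0.01, 0.1, 0.5):
    print(f"v = {beta:.2f} c  ->  lambda = {observed_wavelength(lam0, beta):.1f} nm")
```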
The American astronomer Edwin Hubble, working in the 1920s at the Mount
Wilson Observatory in California, had studied the light from very distant stars. From
measurements of redshifts it was already known that they all seem to be moving away
from us at different speeds. Hubble made the striking observation that the further
away they are, the faster they recede. The Doppler shift, and hence the speed of the
stars’ motion, was rather well measurable—the crucial factor for reaching Hubble’s
conclusion was the determination of the distance of the stars in question. To measure
the distance of fairly nearby objects in the sky, such as planets, one could use the
parallax method employed by Cassini and Richer to determine the distance between
Mars and the Earth. However, for the very remote stars Hubble was after, the parallax angle became far too minute to be measurable. The solution came through
the extension of a very simple phenomenon. The brightness of a given light source
decreases the further one is away from it. Since light is emitted spherically from its
source, the light incident on a given surface becomes less and less with distance.
The surface of these spheres grows as d², with d denoting that distance, and therefore the light per unit area decreases as 1/d². So if we know the original brightness of the source
and its apparent brightness at some distance, then the difference between the two
measurements determines d. Now it so happened that the inherent brightness of
the stars Hubble was studying, the so-called Cepheid variables, had recently been
determined; they were what astronomers today call standard candles. Measuring
their apparent luminosity as observed at Mount Wilson, Hubble had at least a good
estimate of their distance, enough to show him that their speed of recession v became
ever greater, the further they were from Earth, with d measuring that distance. The
law v = H0 d was named after him, as was the crucial constant H0 . By today’s
measurement, his value of H0 was off a bit, but the idea was right and changed our
view of the universe. In fact, no matter where he looked, the stars appeared to move
away in every direction, so it seemed that the whole universe was expanding. Could
that be the case? In Box 2 we look in a little more detail why one might think that.
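The inverse-square reasoning behind such distance estimates fits in a few lines; the luminosity and flux below are invented purely for illustration, not Hubble’s actual data.

```python
import math

def distance(L, F):
    """Invert the inverse-square law F = L / (4*pi*d^2) for the distance d."""
    return math.sqrt(L / (4 * math.pi * F))

L = 1.0e29     # known (standard-candle) luminosity, watts (illustrative)
F = 1.0e-12    # measured apparent flux, watts per square metre (illustrative)
d_m = distance(L, F)
print(f"distance: {d_m / 9.461e15:,.0f} light-years")
```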

Box 2. The Expansion of Space


To simplify matters, we take space to have only two dimensions instead of three,
a “flat” world. Consider three stars in this world, numbered 1–3, positioned at
an arbitrary starting time t = 0 as shown in Fig. 2.6, with a separation distance
d0 between 1 and 2, as well as between 2 and 3.
Now let us assume that the space in this world expands with time t by a
factor Rt in each direction, so that any distance s0 at t = 0 becomes st = Rt s0
at time t. The separation between stars 1 and 2 thus becomes dt = Rt d0 , and
so their speed of separation is
$$v_t(12) = \frac{d_t - d_0}{t} = \frac{(R_t - 1)}{t}\,d_0 = H_t d_0,$$
defining Ht = (Rt − 1)/t as our “Hubble” constant. The relation tells us that
the rate of separation grows with the initial separation distance d0 . To check
that this is really true, we can look at the speed of separation of points 1 and 3, which are initially further apart, namely r0 = √2 d0, as obtained from the triangle relation r0² = d0² + d0². The rate of separation of 1 and 3 thus becomes

$$v_t(13) = \frac{r_t - r_0}{t} = \frac{(R_t - 1)}{t}\,r_0 = H_t r_0 = \sqrt{2}\,H_t d_0 \approx 1.4\,H_t d_0.$$
t t

The separation velocity is thus a factor 1.4 larger than that between the closer
stars 1 and 2. We have so far not said how the expansion of space takes place. If
it happens at a constant rate, with Rt = H0 t + 1, we get the time-independent form

$$v = H_0 d$$
of what is now known as Hubble’s law, with H0 for the Hubble constant. From
Fig. 2.6 it is also directly evident that stars 1 and 3, compared to 1 and 2, have
to separate by a larger distance in the same time interval and hence must have
a higher speed of separation.
At this point, we can also clarify a little what is meant by the acceleration of
the expansion. The crucial feature is the scale factor Rt , defining how much a
meter stick expands in a given time t. For Rt = H0 t + 1, the expansion rate is
constant: the stick expands in one minute the same amount now as next year.
If the expansion increases with time, for an accelerating expansion, the meter
stick will grow more in one minute next year than it does now—or less, for a
decelerating expansion.
The same forms as discussed here in two dimensions hold of course as well
in a three-dimensional space.
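The toy model of Box 2 can also be simulated in a few lines: scale every position by Rt, and the recession speed of each star comes out proportional to its initial distance, which is just Hubble’s law.

```python
import math

# Box 2 toy model: space expands by a factor R_t in a time t,
# so every initial distance d0 becomes d_t = R_t * d0.
stars = {"star 2": (1.0, 0.0), "star 3": (1.0, 1.0)}   # positions relative to star 1
R_t, t = 1.1, 1.0                                      # 10% expansion per time unit

for name, (x, y) in stars.items():
    d0 = math.hypot(x, y)
    v = (R_t - 1) / t * d0        # v = H_t * d0 with H_t = (R_t - 1)/t
    print(f"{name}: d0 = {d0:.2f}, recession speed = {v:.3f}")
# star 3 is sqrt(2) ~ 1.4 times further from star 1 and recedes 1.4 times faster.
```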

Hubble’s discovery really came at a very opportune moment. The most up-to-date theory of the universe had just appeared at that time, in 1916: Albert
Einstein’s general theory of relativity, linking the effect of the force of gravity to the
nature of space and time. A ball tied to a string will fly in a circle—but if you only
look at its motion, it could just as well be rolling freely in a curved container. The
role of the force can thus be replaced by force-free motion in a curved space. Near
massive stellar objects, such as the Sun, the force of gravity would in this way distort
the surrounding space to such an extent that even a ray of light passing near it would
be deflected from its straight-line path. Einstein’s theory was tested in celebrated
observations during a solar eclipse in 1919, carried out by the British astronomer
Arthur Eddington and his collaborators, and these showed that the positions of stars
whose light passed close to the Sun appeared in fact shifted by the amount predicted
by Einstein, bringing him world-wide acclaim. However, at the time he formulated
his theory of gravitation, the general belief was that the universe was static, nei-
ther expanding nor contracting, and so Einstein needed some force to counteract the
attractive force of gravity acting on all the matter in the universe. For this, there was
no immediate candidate, and the problem has remained somewhat enigmatic until
today. Einstein reluctantly solved it by introducing a “cosmological fluid”, filling
the entire universe uniformly and providing the pressure needed to balance gravity.
It had to have rather strange properties—not affecting any phenomena in the uni-
verse, other than gravity, so that it remained undetectable in all other ways. And it
had to be tuned very precisely in order to just balance gravity. In a sense, it was a
late counterpart of the ether introduced earlier to provide a medium for electromag-
netic waves, and this presumably made it particularly undesirable to Einstein. And
when Hubble discovered that the universe was in fact expanding, Einstein called his
introduction of a cosmological constant, as the fluid is now generally denoted, his
biggest blunder. Had he stuck to his original equations, without such a constant, he

Fig. 2.6 The separation of stars due to the expansion of space, starting from a given initial time t = 0 (black) to a final time t (blue)
could have in fact predicted the expansion of the universe before it was discovered.
Today, cosmologists are not so sure if it really was a blunder—dark energy, which
we will encounter later in the context of an inflation scenario for the Big Bang, may well turn out to be the modern version of Einstein’s cosmological
constant, or even of the ether of still earlier times.
In any case, in 1922 a Russian theorist, Alexander Friedmann, presented a gen-
eral solution of Einstein’s equations and showed that they can readily accommodate
expanding or contracting universes. And when Hubble a little later found his expan-
sion, the scene was set.

2.3 The Big Bang

The theory itself was initiated in 1927 by Georges Lemaitre, who had studied math-
ematics and physics at the University of Louvain in Belgium and at the same time
prepared for Catholic priesthood; with success on both counts: he received his doctor-
ate in physics in 1920 and was ordained as a priest in 1923. In 1926, when Einstein’s
equations had just been seen to describe so well the forces in and the structure of the
universe, Lemaitre independently derived Friedmann’s expanding solution and used
it to account for the observations of Hubble: he concluded that our visible universe
is continuously expanding. Looking the other direction in time, it must then have
originated in a very dense, hot, energetic “primordial medium”, which led to the
creation of our world. For the Catholic priest Lemaitre, such a creation must have
seemed very natural, even though it was a long way away from the dogma applied to
Giordano Bruno or Galileo Galilei. But Einstein apparently was not so happy with the
results of Lemaitre; “your calculations are correct, but your physics is abominable”,
he is supposed to have written to him.
Nevertheless, over the years the Big Bang theory continued to gain support, and
the perhaps decisive step came in 1964, when the American astronomers Arno Pen-
zias and Robert Wilson discovered what is now known as the cosmic background
radiation. It is present throughout the universe as a direct relic of the Big Bang,
and it can be measured in the different regions of the sky. Its discovery is one of
the truly serendipitous findings of science. Penzias and Wilson were working for
the Bell Telephone Company, and they were trying to establish a viable method of
microwave communication, by reflecting such signals off high-up balloon satellites.
This required the elimination of all other interfering sources of radiation, up to a
remarkable precision. Even the detector was cooled to a temperature of a few kelvin,
to prevent its “heat” from producing radiation. And when they had eliminated all
known sources, including bird droppings on the antennas, there still remained a mys-
terious background radiation of some three kelvin. It was there day and night, and
in all directions. From some friends they heard that in nearby Princeton University,
Robert Dicke and collaborators were finishing work on background radiation pro-
duced by and remaining from the Big Bang. Penzias and Wilson got in touch with
them, discussed their findings and concluded that they had indeed found this left-over
flash of the Big Bang. Their work was published in 1965 in Astrophysical Journal
Letters, in the same issue as the theoretical work of Robert Dicke, Jim Peebles and
David Wilkinson, predicting that a form of primordial light should still exist today.
So there is more to consider than just the light from the stars. While the Big
Bang, in the absence of air, could of course not really “bang”, it did “flash”, leading
to the emission of light, and this light is still there as the microwave background
radiation observed by Penzias and Wilson. The primordial matter initially was a
medium of interacting constituents, a plasma of quarks, electrons, photons and more.
Eventually, as the medium expanded and cooled, the quarks combined to form protons
and neutrons, and these in turn combined with electrons to form electrically neutral
atoms. From this time on, from the decoupling era, about 300,000 years after the Big
Bang, the photons were “on their own”; in the absence of any charged constituents,
they no longer interacted with the medium, and they don’t interact with each other.
From their point of view, the universe contained nothing but light passing freely
into the expanding space. From our point of view, the photons of the microwave
background radiation are the most primordial signals of the Big Bang we can ever
get. Before decoupling, the plasma of charged constituents was opaque to light, so
from this plasma we cannot get any direct information. The time of decoupling,
of the formation of electrically neutral atoms, is thus for us an ultimate horizon in
time—there is no way we can get any direct information from earlier times.
When the microwave background radiation was emitted, that is, when the photons
became decoupled from any matter, they formed a gas of an effective temperature of
about 3,000 K. As a result, the wavelength of the radiation was in the yellow part of
our spectrum, so that then the sky was not dark at all—it was in fact bright yellow. But
the universe kept on expanding, by about a factor 1,000 since the age of formation
of atoms. Since its volume increased, its density of energy became lower and lower,
and this in turn meant that its effective average temperature also decreased. Through
the expansion, the hot universe of the decoupling era has by today cooled down to
about 3 K. As a result, the wavelength of the radiation became longer and longer, so
that with about 7 cm it is now in the microwave region, far below the visible range. In
a way then, the sky is dark at night for us only because we cannot see this microwave
radiation remaining from the Big Bang. If we could put on the right kind of glasses,
we could see the glow of the sky at night…a glow not of stars everywhere, but the
afterglow of the Big Bang itself.
At this point we should note that the light of the stars is, of course, also affected
by the expansion of the universe. The Doppler effect that we mentioned above will
“redshift” that light, move it to ever longer wavelengths. So up there, in addition to
the cosmic background radiation, there is more light than just that of the stars we
see. The stars that are moving away from us emit light of wavelengths beyond our
visibility range—again, we would have to put on special glasses to see the light from
all those stars pushed away from us at an increasing rate by the expansion of space.
This redshift is thus an additional reason for the darkness of the night sky.
However, the afterglow of the Big Bang also leads to a striking problem. The
microwave radiation we receive today from different regions of the sky was emitted
in the decoupling era from regions of the universe that had no causal communication,
which were outside each other’s event horizon. The reason for this is that decoupling
occurred so early in the evolution of the universe and hence so long ago, with immense
expansion since then. Two markers separated by a distance of 1 km appear to an
observer 1 km away from each marker to subtend an angle of 60° (see Fig. 2.7). For an observer 5 km away from each marker, the angle has decreased to only a little more than 10°. At decoupling, only regions separated by no more than 300,000 light-
years could communicate with each other, and if they are now 10¹⁰ light-years away,
they appear to us only some fraction of a degree apart in the sky. We have here for the
moment neglected the expansion of the universe, which additionally enhances the
effect. In other words, if we measure the microwave background radiation at a certain
angle in the sky, and then at another angle only a few degrees away, the sources of the
two radiation measurements had no chance to communicate at the emission time. So
why do both show the same temperature? The microwave radiation we observe was
emitted from millions of sources, of spatial regions, which up to decoupling had no
way to “tune” their radiation. It is like a gigantic orchestra, without a conductor and
with many, many musicians who have no possibility of getting in tune—yet they all
end up playing the same melody. If the decoupling of photons and matter, due to the
formation of electrically neutral atoms, had occurred at different times in different
regions, the temperature of the background radiation should be correspondingly
different. But all regions behaved as if some imaginary omnipotent conductor had
lowered his baton and indicated “decouple now”. This horizon problem is one of
the big puzzles of today’s cosmology, and it is not really resolved to everyone’s
satisfaction, in spite of some very interesting proposals. We will soon have a look at
one of them in a little more detail.
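The shrinking angles follow from elementary trigonometry; the two lines below reproduce the numbers of Fig. 2.7.

```python
import math

def subtended_angle_deg(separation, distance):
    # Two markers 'separation' apart, seen by an observer 'distance' away
    # from each of them: sin(angle/2) = (separation/2) / distance.
    return 2 * math.degrees(math.asin(0.5 * separation / distance))

print(f"{subtended_angle_deg(1.0, 1.0):.1f} degrees")   # 60.0 (equilateral case)
print(f"{subtended_angle_deg(1.0, 5.0):.1f} degrees")   # 11.5, as in Fig. 2.7
```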

Fig. 2.7 The radiation emitted from a fixed spatial region covers an ever smaller angle of observation with time (left). As a result, the microwave background radiation we receive today comes from regions that were causally disconnected at decoupling time (right)

But first let us dwell a little more on the cooling process, which since decoupling
has brought the temperature of the microwave background radiation from its initial
3,000 K to today’s 3 K. The frequency of light emitted from a hot body decreases
as its temperature is lowered. This occurs through the interaction of the light with
the atoms of the system, which maintain the temperature of the medium. They exist
in various states of excitation, and correspondingly emit and absorb photons on
moving from one state to another. As the medium is cooled, the atoms absorb more
high frequency photons and emit more low frequency photons, leading to an overall
shift towards lower frequencies, i.e., longer wavelengths, for the radiation. How then
can a “cosmic redshift” occur in the universe, where there is so much empty space
and so few atoms to regulate the temperature? The origin of the cosmic cooling is a
little bit like the Doppler effect we encountered earlier for waves emitted by moving
sources. We see those waves “stretched” in wavelength as the source is moving away
from us. A similar thing happens to a solitary wave travelling through an expanding
space—the distance between crest and valley in the wave becomes stretched, the
wavelength longer, the more the space expands. And if the space has expanded by a
factor thousand since the emission of the cosmic light, the frequency of the light has
decreased by this factor and the wavelength increased. So the cosmic redshift does
not tell us that the source of the radiation is locally moving, but rather that the space
through which it travels is expanding.
The expansion of the universe is encoded in “Hubble’s law”, stating that the
velocity of a distant galaxy, relative to Earth, is proportional to its distance from
Earth. The crucial scale factor is the “Hubble constant” H0 , for which the best present
value is about 22 km/s per million light-years. So a galaxy one million light-years
away is receding from us at a velocity of 22 km/s, while one two million light-
years away is receding at 44 km/s. If the expansion of the universe takes place at a constant rate, the inverse of the Hubble constant gives us the age of the universe:
13.8 billion years; the details are shown in Box 3.

Box 3. The Age of the Universe


Hubble’s law, v = H0 d, gives us the recession velocity v of a distant star, with
d specifying its distance from Earth and H0 the Hubble constant.
For a constant rate of expansion of space, v = d/t0, where t0 is the time since the Big Bang, assuming both the star and
the Earth were effectively born shortly afterwards.
Compared to the present distance, the separation of star and Earth at their
birth is negligible, d ≈ 0 shortly after the Big Bang. So it follows that
v = d/t0 = H0 d, and from this that t0 = 1/H0 is the time since the Big Bang,
the age of the Universe.
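With the value of H0 quoted in the text, the arithmetic of Box 3 can be checked directly; the small difference from the quoted 13.8 billion years is only a matter of rounding in H0.

```python
# Age of the universe as t0 = 1/H0, following Box 3.
km_per_Mly = 9.461e18          # kilometres in one million light-years
H0 = 22.0 / km_per_Mly         # 22 km/s per million light-years, in 1/s

age_years = (1.0 / H0) / 3.156e7    # divide by seconds per year
print(f"1/H0 = {age_years / 1e9:.1f} billion years")   # ~13.6, i.e. about 14
```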

Many aspects, both observational and theoretical, enter the determination of the expansion and its time dependence. One crucial feature is the overall mass of
the universe. If it is large enough, gravity could eventually stop the expansion and the
universe will start to contract again. The result would then be a final “Big Crunch”.
If the mass of the universe is sufficiently small, the expansion can overcome gravity
and the expansion will accelerate with time. The critical boundary between the two extremes results in a constant rate of expansion. The overall mass contained in the uni-
verse is not easily determined, since in addition to the visible content there is a large
amount of invisible dark matter, which manifests itself only through gravity. And
then, even more elusive, there is most likely an overall background of dark energy,
which permeates the entire universe and hence affects its expansion rate. According
to recent results that led to the award of the 2011 Nobel prize in physics, the vote
goes to an acceleration increasing with time and hence assigns an important role to
the mysterious dark energy. In any case, when all is said and done, the best value for
the age of the universe today remains at about 14 billion years.
It is perhaps interesting to elaborate here a little on the nature of the expansion
following the Big Bang. First, we should, however, note that the “reason”, the initial
cause of the bang, is not really known. One very impressive attempt to describe the
very early stages of the universe was first proposed by the American cosmologist
Alan Guth in 1980.

2.4 Cosmic Inflation


Whenever we measure something, we need a reference, a “zero”. The height of a
mountain is measured “above sea level”, the depth of the ocean floor “below sea
level”. Mount Everest is the highest mountain on Earth only if we use the average
sea level as reference, giving it a height of some 8,800 m. The volcano Mauna Kea
on Hawaii rises more than 10,000 m above the floor of the ocean at its position—
so it is indeed the tallest mountain on Earth. But let us now imagine a dammed
river: on the high side, upstream, the level of water is quite different than on the low
side, downstream. And this difference in water levels corresponds to a difference
in potential energy that can be used, for example, to create electricity by the water
rushing down the dam. So the transition from one level to the other can happen very
abruptly, and it can liberate energy. In cosmic inflation, the entire universe we can
see today was a small bubble of extremely hot matter just after the Big Bang, small
enough to be causally connected and in uniform thermal equilibrium; its ground
state, the reference point, was far above ours today. The bubble expanded, cooled
and thereby was driven to a critical point, over the dam, down the waterfall. In this
process, the space of the medium expanded dramatically in an extremely short time,
and its new reference point became our physical vacuum of today. Since the medium
had been in equilibrium before, it remained uniform even after the expansion of
space had broken it up into causally disconnected regions. So that is why, according
to inflation theory, we measure the same microwave radiation from all parts of the sky:
before inflation, the sources not able to communicate with each other at decoupling
time were originally all in the same pot, in which they could adjust to each other’s
tune, and this information was conserved in the transition. Moreover, in descending
from the upper to the lower level, energy was liberated, and this energy, “dark energy”
in today’s terminology, permeates the entire universe; it drove and continues to drive
the expansion of the universe.
But even cosmic inflation can only show that, given certain conditions, a hot
expanding early universe can be formed in an extremely short time. It does not
explain the origin of these conditions, so that, for the time being, the beginning of
the world seems well beyond our science. The subsequent evolution depends, as we
mentioned, on the strength of gravity, on the overall mass of the universe. The Big
Bang provided the expansion, gravity counteracts this, dark energy may modify it,
and, whatever the final verdict on the role of the different components, the universe
continues to expand. This expansion is not an “explosion”, throwing debris into some
empty space. Rather, space was made in the Big Bang, and it is space itself that is
expanding. So a better analogy for our present universe is that of raisins in a cake
dough, after some time in the oven. As the cake “rises”, any given raisin notes that all
its neighboring raisins are moving further and further away. And the dough between
the raisins, that is “space”. For the concept of expansion, it does not matter how much
dough there is or if there is an end to it. Similarly, in the Big Bang, the primordial
matter as such was not localized at some point in space. We can only see that part
of the universe from which light has been able to reach us in 14 billion years, and
that part was indeed localized. Whatever more there was (and now is), we simply
cannot tell. But we can speculate that there is more; we can’t see it now, but it seems
that if we, mankind, wait long enough, light from there will arrive, so that we, taken
generically, should be able to see it then, at a sufficiently later time.
Unfortunately (or fortunately, depending on your point of view), that is not true.
We can use Hubble’s law to see how far away a distant star has to be at present so that
for us it is moving away at the speed of light. Using the value of the Hubble constant
given above, we find that the critical distance is 14 billion light-years. A star further
away from us than that is now moving, relative to us, faster than the speed of light,
and any signal it may send will never be able to reach us. So there is an absolute
cosmic horizon.
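For readers who wish to check this number, a few lines of Python suffice. The sketch below is illustrative only; it assumes a Hubble constant of about 70 km/s per megaparsec, a typical present-day value, and simply evaluates the distance c/H0 at which the recession speed reaches that of light.

```python
# Illustrative check of the cosmic horizon distance c/H0,
# assuming H0 = 70 km/s/Mpc (a typical value, taken here as an assumption).
c = 2.998e8            # speed of light, m/s
Mpc = 3.086e22         # one megaparsec, m
ly = 9.461e15          # one light-year, m

H0 = 70e3 / Mpc        # Hubble constant, 1/s
print(f"c/H0 = {c / H0 / ly / 1e9:.1f} billion light-years")
# -> about 14 billion light-years, the absolute cosmic horizon
```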

2.5 The Absolute Elsewhere

For us and all our descendants, the universe presently further away than 14 billion
light-years is forever beyond any communication; we cannot send “them” a signal,
nor ever receive one from “there”. Our world thus remains in principle bounded by
the “Hubble sphere” with a radius of 14 billion light-years. But this specific limit
applies only to us here on Earth. A distant star will have its own Hubble sphere, and
that will cover a different region of space—which may or may not have an overlap
with ours. There is more “out there”, but our capability to communicate with it has
an absolute limit.
But, you may say, how can something move faster than the speed of light? And
indeed, nothing can “outrun” a light beam. The new feature entering in cosmic
dimensions is the expansion of space. The far-away star will emit a light beam, and,
measured on that star, it will start on its path towards us with exactly the universal
speed of light. The problem is that while it is travelling, the space of the universe
expands, and if this expansion is fast enough, the light beam will never reach the
Earth. So Hubble’s law is not saying that the distant star is “running away” from us;
from the point of view of other stars near it, it is stationary. And whatever region
the light beam passes on its way, any observer there will see it moving with the
universal speed of light. The light beam is thus a little like a worm crawling through
the expanding cake, from one raisin towards another. Any observer it passes will
see it crawling with its standard worm speed, but in the meantime, the rising cake
stretches the space it has to traverse, and if the cake rises fast enough, the poor
worm will never reach the next raisin. Even during inflation, it was space that was
undergoing the abrupt expansion—on a sufficiently small local level, nothing was
moving faster than the speed of light.
If the Hubble constant were really a constant, our Hubble sphere would have been
the limit of our universe since the Big Bang. Slight time variations of H0 even now
are under discussion by the experts, and immediately after the Big Bang, as we saw,
there may well have been a very short period of a much more rapid “inflationary”
expansion. For our overall picture, we will skip over the evolution of the very early
universe and assume that our Hubble radius has been “in effect” almost since the Big
Bang. That means that any part of the universe beyond our Hubble horizon shortly
after the Big Bang was then and ever afterwards outside our world, unreachable
for us. It was then expanding away from us faster than the speed of light, and has
continued to accelerate since then.
What about a star formed just inside our Hubble sphere not long after the Big
Bang? The expansion of the space environment of that star proceeded, as seen by
us, with an effective speed slightly less than that of light, and so the light of the star
could still eventually reach the Earth. But the expansion rate continued to increase,
and shortly afterwards became greater than that of light. From our point of view,
at that instant the light of the star went off, it disappeared from our world. But the
light it had emitted before crossing our Hubble limit continued to travel through
the expanding space. And when it finally reaches us, its source star is far, far away
outside our world, in our absolute elsewhere.
To find out how far, we ask: if a light signal was sent out from Earth shortly after
the Big Bang, how far has it travelled in the time since then, until today? For a static
universe, that distance would be the speed of light times the age of the universe:
about 14 billion light-years. But the expansion makes the distance much larger, as
our worm discovered above inside the expanding cake dough. Taking the expansion
rate to be that of constant acceleration, the light beam has travelled three times the
static distance since the Big Bang: 42 billion light-years; the calculation is shown in
Box 4.
Box 4. How Far Has Light Travelled Since the Big Bang?
For a static universe, the distance travelled by a light beam between an initial
time t_i and a final time t_f would be d = c(t_f − t_i), where c is the speed of
light. But the universe expands in that time interval by a factor (t_f/t_i)^{2/3}, as is
predicted by an acceleration not changing with time. In this case, the stretching
of space makes the distance travelled longer, so that it now becomes

d = c ∫_{t_i}^{t_f} dt (t_f/t)^{2/3} = 3 c t_f

if we take the initial time t_i = 0 to be that of the Big Bang. So when the
light reaches us, it has travelled three times the distance it would traverse
in a stationary world; one unit c t_f for “local” travel, two units thanks to the
expansion of space.
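The integral in Box 4 can also be checked numerically. The following sketch is illustrative; since the integrand is singular at t = 0, it starts the integration at a tiny positive time instead, and the result then approaches 3ct_f.

```python
# Numerical check of Box 4: d = c * integral of (tf/t)^(2/3) dt from ti to tf.
from scipy.integrate import quad

c, tf, ti = 1.0, 1.0, 1e-12          # units with c = tf = 1; ti small, not zero
d, _ = quad(lambda t: c * (tf / t) ** (2.0 / 3.0), ti, tf)
print(f"d / (c*tf) = {d:.4f}")       # -> 2.9997, approaching 3 as ti -> 0
```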

As a result, the most distant stars we see are now much further away from us than
the speed of light times the age of the universe. When the light we receive from them
today was emitted by them, they were much closer to us than they are today, just
inside our Hubble sphere. But during the time of travel of the light beam, the universe
expanded, and so our distance to them today is, as we just saw, a combination of the
time of travel of the light and the expansion of the universe during that time. The
most distant star whose light we see today is therefore now 42 billion light-years
away from us. Provided it still exists, of course…this we can never find out.
From a philosophical point of view, this form of an ultimate spatial limit, of an
ultimate horizon of our universe, is really quite satisfying. The “last outer sphere”
in older cosmologies always led to a number of unanswerable questions. What is
the origin of such an ultimate sphere? What is it made of, what happens if a signal
sent by us hits it? And finally, the forbidden question: what is behind this last limit,
this end of the universe? In today’s cosmology, the limit exists only in the eye of
the beholder. At that imagined surface in space 14 billion light-years away from us,
there is nothing special, no discontinuity, no great wall of any kind, and there is no
reason to expect that beyond this limit, things are different. Only we can no longer
check that. The limit exists for us, for our eyes only, not for other observers in far
away parts of the universe. The world according to Thomas Digges, some 450 years
ago, is also ours today, except that we now know it had a beginning and that our
probing must reach an end.

Box 5. The Doll in the Doll


In the mechanics of Newton, instantaneous interactions over large distances
were implicitly considered possible—and from our present view, this means
that effectively the speed of light was taken to be infinite. For a vast range of
natural phenomena, this assumption is satisfied to high precision: as long as
things move with a velocity much less than that of light, Newtonian mechanics
remains correct. It is only when particles move with velocities close to that of
light, as they do for example in today’s large particle accelerators, that relativity
theory, more specifically, Einstein’s special theory of relativity, becomes the
correct description. The resulting relativistic mechanics contains Newton’s
non-relativistic mechanics as the limiting form obtained for small velocities.
The mechanics of special relativity in turn remain correct only as long as the
force of gravity remains comparatively weak. A light beam on Earth is not
measurably bent downward, and for the motion of a particle in one of the
mentioned accelerators, the effect of gravity can also be totally ignored. It is
only on a cosmic scale, for forces between galaxies or light passing massive
stars, that the deformation of space through interaction plays a role. At that
point, Einstein’s general theory of relativity gives the relevant explanation. In
the limit of small scales and weak gravity, it gives the special theory as an
excellent approximation. So in a way, it’s like the Russian babushka dolls: the
biggest, general relativity, contains a smaller one, special relativity, and this in
turn contains a still smaller one, Newtonian mechanics.
3 The Secret Glow of Black Holes

But oh, beamish nephew, beware of the day,
If your Snark be a Boojum! For then
You will softly and suddenly vanish away,
And never be met with again!

Lewis Carroll, The Hunting of the Snark

There are many curious things in our universe, but black holes must be among the
most curious. You cannot see them, you cannot hear them, you cannot feel them,
and if you ever meet one, you won’t be able to tell anyone about it afterwards. In
fact, there will be no afterwards for you. So a black hole is one of those rooms in our
universe that you should never even think of entering.
The idea that such things might exist in our rational world was first announced
in 1783 by John Michell, a natural philosopher in England, educated at Queens’
College, Cambridge, and in his later years parish priest in the small community of
Thornhill in West Yorkshire. As was often the case in natural science, he had the
right vision, even though his details were faulty. If a stone is thrown up into the air,
it falls back to Earth. The faster it is thrown, the higher it rises. When it leaves our
hand, it has energy of motion, kinetic energy, and when it comes to a stop somewhere
up there, it has no more of that, but lots of potential energy, which it converts back
into motion by falling down. So how fast does it have to be thrown in order to not
fall back down? In our modern age of space ships and satellites, that is almost an
everyday question.

3.1 The Escape Velocity


A bullet shot upwards from the surface of the Earth has to have a certain speed so that
it does not return; the idea was already known at the time of Newton. Just as objects
of different mass fall the same distance in the same time (barring air resistance and
such), the escape velocity from Earth is the same for all objects, about 11 km/s (if
you want to follow the derivation, see Box 6). This is the velocity a bullet has to
have to escape Earth once it has left the gun; its kinetic energy is then sufficient to
overcome the potential energy provided by the force of gravity pulling it back.

Box 6. The Escape Velocity


On the surface of a star of mass M and radius R, a body of mass m is attracted
by the force of gravity

F = G Mm/R²,

where G is the universal constant of gravitation. As a result, it has a negative
potential energy

V = −G Mm/R.

To escape from the star, it has to be shot upward with a speed v sufficient to
have the kinetic energy

T = (1/2) mv²,

which is needed to overcome the potential energy of gravitational attraction.
From

(1/2) mv² = G Mm/R

one finds that the escape velocity is

v_escape = √(2GM/R).

Using the known values for the mass and the radius of the Earth, this gives a
terrestrial escape velocity v_escape ≈ 11 km/s.

Applying this argumentation (incorrectly) to light, one finds that to restrain it
from escaping from a star, the stellar mass and radius have to satisfy

c² < 2GM/R,

where c is the velocity of light. We know today that this derivation is not right,
but we also know that

R_0 = 2GM/c²

is the correct Schwarzschild radius of a black hole.
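To get a feeling for these formulas, one can evaluate them with standard values of the constants; the short sketch below is purely illustrative.

```python
# Escape velocity and Schwarzschild radius, using the formulas of Box 6.
import math

G = 6.674e-11                        # gravitational constant, m^3/(kg s^2)
c = 2.998e8                          # speed of light, m/s
M_earth, R_earth = 5.972e24, 6.371e6 # mass (kg) and radius (m) of the Earth
M_sun = 1.989e30                     # mass of the Sun, kg

v_esc = math.sqrt(2 * G * M_earth / R_earth)
print(f"escape velocity from Earth: {v_esc/1e3:.1f} km/s")  # -> about 11.2 km/s

R0 = 2 * G * M_sun / c**2            # Schwarzschild radius of a solar mass
print(f"Schwarzschild radius of the Sun: {R0/1e3:.1f} km")  # -> about 3 km
```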
Newton had pictured light to consist of corpuscles, small particles flying with the
speed of light. Now the force of gravity is proportional to the mass of the planet, in
our case Earth, and inversely proportional to the square of the distance from its center. Starting
from Newton’s idea of light corpuscles, Michell imagined a stellar object very much
more massive than the Earth, and very much smaller, and considered its effect on
light. On such a star of extremely dense matter, the force of gravity would be very
much stronger, so that it seemed possible that even light emitted on its surface would
“fall back down”. Since the mass of the light particles did not matter, only that of
the star and its size, one could even specify the properties of this hypothetical object
that would not allow light to leave it. Michell quite correctly concluded, in a letter
written in 1784 to his friend Henry Cavendish in Cambridge, that a stellar object
having the size of the Sun but a mass 250,000 times heavier would do the trick:
it would, he thought, make light rising from its surface fall back down. Cavendish
submitted the letter to the Royal Society and thus, as it turned out, established the
Yorkshire country parson Michell as the inventor of the black hole.
As we already noted, almost all the details of Michell’s picture were, given our
hindsight, incorrect. Light does not consist of localized little particles having a mass,
like pebbles, and even if it did, the kinetic energy he used was non-relativistic and
therefore not correct for anything moving at the speed of light. And we know today
that light captured by strong gravitation does not “fall back down”; it rather circles
around the source of the gravity holding it back. But the idea that gravity could
become so strong that it could even imprison light in some spatial region—that
was absolutely correct and, even from our present view from higher shoulders, a
magnificent insight into the workings of nature.
Only twelve years later, Pierre-Simon, marquis de Laplace, of France, had similar
thoughts. He was certainly one of the greatest mathematicians of all times; modern
mathematics and mathematical physics is not conceivable without his contributions.
Laplace had pointed out the possibility that a star might collapse under the force of
gravity and that this could produce light-retaining and hence dark stellar objects. A
German academician, Franz Xaver von Zach, to whom Laplace had sent a sketch
of these ideas, asked him to supply a mathematical proof, which Laplace readily
did. As a result, the first chapter on what we now call black holes was published in
the year 1799, in German, in a rather obscure journal of a learned society in Gotha,
Germany, by one Peter Simon Laplace. It was entitled

Proof of the theorem that the attractive power of a heavenly body
can be so large that light cannot stream out of it.

It was, as things turned out, way ahead of its time. But there is a curious little
sideline to this early work, still puzzling today. Using Newtonian mechanics to cal-
culate when the escape velocity of a body from a very massive star would exceed
the velocity of light, Laplace effectively found that it would occur when the ratio
of stellar mass to stellar radius exceeded a well-defined value (this is also shown
in more detail in Box 6). And this number, it turns out, is just what today general
relativity predicts: the result is correct, even though the derivation is not. So the basic
idea and a mathematical “proof” existed at the end of the eighteenth century. But the
concept of such dark stars seemed so removed from reality that no one pursued it
any further. It remained a crazy fluctuation of the human mind for another 150 years.
The universe was filled with shining stars—so the thought that there might be others,
dark stars, absorbing all light and returning none, seemed weird, to say the least.
In Einstein’s formulation of gravity, massive stars distort the space and time around
them. It was this effect that led to the apparent shift in the position of the stars
during the solar eclipse studied by Arthur Eddington in his celebrated confirmation
of Einstein’s theory. The black holes came back into our world with the German
astrophysicist Karl Schwarzschild, who was professor in Göttingen and later director
of the Astrophysical Observatory in Potsdam. Only a month after Albert Einstein had
published his general theory of relativity, giving the equations governing the force
of gravitation, Schwarzschild provided the first exact solution of these equations,
published in 1916. Shortly after its publication, Schwarzschild died of a disease
contracted as a German soldier at the Russian front in World War I. But his solution
of Einstein’s equations provided the theoretical basis for black holes and their curious
properties.
Experienced from far away, the effect of a black hole is similar to that of any
massive star; but the closer one gets, the stronger the distortion becomes, and within
a certain distance from its center the force of gravity overcomes the power of the
light to continue on—it is now “sucked” in. This distance is the event horizon of the
black hole, often referred to as its Schwarzschild radius R_s. It grows with the mass
M of the stellar object, with

R_s = 2GM/c²,
where G is the gravitational constant and c the speed of light. Anything entering
the region of space it defines is forever gone from the outside world, it can never
ever escape again from the inside or even send a signal. This horizon is particularly
dangerous, since it has no noticeable distinguishing features to warn intruders, no
edges, walls or the like. You notice nothing special as you enter, nothing will stop
you, but once you’re in, you’ll never get out again.
We can perhaps obtain a first picture of an invisible horizon by putting a coin on
a flat table with surface friction, and moving it closer and closer to a magnet placed
in the center (Fig. 3.1): up to a certain distance, the friction holds the coin in place,


Fig. 3.1 Attraction of an iron coin towards a magnet on a flat table: inside the blue region, the
friction of the table surface is overcome by the magnetic field and the coin is drawn to the magnet
at the center
but beyond that, the magnetic attraction is strong enough to pull it to the magnet.
That certain distance is the effective radius of magnetic force, the horizon, and it
evidently does not come from any changes in the surface of the table at that point. To
improve this analogy with our case, we have to make the magnet as small as possible,
pointlike, but let its magnetic strength become larger and larger the closer we get to
the center. The reason for such a representation is that the interior of the black hole,
although we can never see it, is expected to be empty everywhere, except at a tiny
point of incredibly high density at the center. And at this point, the force of gravity,
according to the laws of classical general relativity, becomes infinitely strong. We
emphasize “classical” here, because “at a point”, quantum effects are expected, and
what these will do is not yet known.
In visualizing the black hole structure, however, we have to remember that accord-
ing to general relativity, the geometry is modified by the force of gravitation, and so
concepts such as radius or circumference have to be treated with care. The Schwarz-
schild radius defines the region of space occupied by the black hole, as seen by
a far-away observer. For this observer, it specifies the size of the black hole, the
region from which no light can emerge. But as we get closer, as the force of gravity
increases, the space itself becomes more and more modified. Two straight lines, or
two light beams, that are parallel far away from the hole will begin to move towards
each other as they get closer to it, and at its center they will meet. Similarly, also
the conventional idea of a radius as the distance between the center and surface of a
sphere becomes meaningless. So let us translate the above picture of a magnet and
a coin on a table into the geometric view of general relativity. Our table is now no
longer flat; it has a deep hole in the center, and the surface of the table is sloping
down towards that hole. When we put a massive object onto that sloping surface
friction will hold it for a while. But once it gets close enough to the center, it will
slide into the hole. So on the flat table, we were able to define a radius R for the
effective magnetic field: once the coin got closer to the center than a distance R, it
was drawn to the magnet. We can thus, on the table, draw a circle of circumference
U = 2π R, defining the size of our “magnetic hole”. But while, on the table, the coin
was also a distance R from the magnet at the center, this is no longer the case in the
geometric space-time description of general relativity. The hole now becomes deeper
and deeper, the further the test mass moves in, and the distance it would have to slide
to reach the center: that distance becomes infinite. In other words, the circumference
and distance from the center are no longer related in terms of the flat space equation
U = 2π R. The Schwarzschild radius Rs defines the size of the black hole region as
seen from far away, but the path length L from the Schwarzschild circumference of
a black hole to its center appears to become infinite (Fig. 3.2).
We note at this point a curious aspect. For the coin, the motion always takes place
in a two-dimensional space, it moves on the surface. For us, to visualize what is
happening, it is easier to imagine this surface embedded in a three-dimensional
space, with a dip towards the center of the table. But we should remember that for
the coin, the third dimension is not real, it can never get out of its two. So for it, the
third dimension is something generally referred to as hyperspace; we shall return to
this in more detail in the last chapter.
Fig. 3.2 Attraction of a mass towards a black hole: once the mass has passed the blue
Schwarzschild boundary, it can no longer get back out and will continue on its path
to the singularity at the center of the black hole

And this has a most astonishing effect. We now replace the coin falling into the
black hole by a clock emitting signals at the rate of ten per second. An observer
hovering outside the hole in a spaceship (continuously firing its engines to avoid
being sucked in) then finds that the interval between the signals grows greater and
greater, until finally, as the clock enters the interior, no more signals are received. It
seems as if time is coming to a halt, as if it takes forever for the clock to fall in. The
signals emitted by the clock inside the hole are no longer able to climb out. But for
the falling clock itself, it is only a short time until it reaches the central region of the
hole.
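The slowing of the signals can be made quantitative. The sketch below is illustrative and uses the standard Schwarzschild result for a clock held at a fixed distance r from the center (the freely falling clock of the text requires a more involved treatment): the interval between received signals is stretched by a factor 1/√(1 − R_s/r), which grows without bound as the clock approaches the horizon.

```python
# Stretching of signal intervals from a clock hovering at radius r
# outside a black hole: factor 1/sqrt(1 - Rs/r), diverging as r -> Rs.
import math

def stretch(r_over_rs: float) -> float:
    return 1.0 / math.sqrt(1.0 - 1.0 / r_over_rs)

for r in (2.0, 1.1, 1.01, 1.001):
    print(f"r = {r:6.3f} Rs: intervals stretched {stretch(r):7.1f}-fold")
# To the distant observer, the clock appears to freeze at the horizon.
```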
And here we now have to modify—more than just a little—the claim that nothing
happens to you at the surface of a black hole you happen to fall into. The force
of gravity grows inversely with the square of the distance from its origin, and that gives rise to
well-known consequences.

3.2 Tidal Effects

The gravitational attraction of the Moon is greatest at the point of the Earth nearest
the position of the Moon at any given time. So if there is an ocean at this point, the
water will be drawn to the Moon and will rise, moving away from the bounding shores
and leading to low tide there. On the opposite point of the Earth, the force is lowest,
compared to other areas in that region, so again the water rises, for the second daily
low tide. Consider now the fate of someone falling into a black hole. The smaller the
hole, the larger the force of gravity at the Schwarzschild radius. And this means that a
“typical” black hole, of some ten solar masses, will exert tidal forces that would tear
a human into pieces even before he or she reaches the Schwarzschild radius, owing
to the difference in force between head and feet. For gigantic black holes, of millions
of solar masses, however, the tidal force is not noticeably different just outside and
just inside the black hole. So what we wanted to say when we claimed that nothing
special happens at the Schwarzschild horizon is that the tidal forces can be deadly
even before reaching the horizon, or they can still let you live for a while inside the
black hole; which it is depends on the mass of the black hole. Nevertheless, once
inside and falling towards the center of the hole, any extended object will eventually
reach a point at which the tidal force becomes strong enough to tear it apart. So the
end of anything inside a black hole is always pulverization, or rather, as the experts
call it, “spaghettification”, the creation of long strings pointing toward the center.
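The mass dependence of these tidal forces is easy to estimate with the Newtonian expression Δa ≈ 2GMh/r³ for the difference in pull across a body of height h; the numbers below are an illustrative sketch, not a full relativistic calculation.

```python
# Rough tidal acceleration across a 2 m tall body at the Schwarzschild radius,
# using the Newtonian estimate delta_a = 2*G*M*h / Rs^3 (illustrative only).
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
h = 2.0                                   # head-to-feet distance, m

for n in (10, 1_000_000):                 # stellar vs. super-massive hole
    M = n * M_sun
    Rs = 2 * G * M / c**2
    print(f"{n:>9} solar masses: tidal ~ {2*G*M*h/Rs**3:10.3g} m/s^2")
# ten solar masses: ~2e8 m/s^2, deadly well before the horizon;
# a million solar masses: ~0.02 m/s^2, not even noticeable there.
```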
To understand the origin of the so-called singularity at the center of a black hole,
the point where the force of gravity seems to become infinite, we have to return to
the formation of such stellar objects. Once the nuclear fuel of the star is used up,
so that the heat pressure keeping the force of gravity in check is no longer present,
it is predicted by general relativity to collapse, to implode, in a very short time.
The end of the collapse depends on the mass of the star. As the stellar volume
continues to decrease, the stellar density of course increases. And at a certain point
it will reach nuclear density. A nucleus is a stable entity of a given size because the
forces that cause the nucleons to cluster together turn repulsive once they get too
close to each other. So the nucleus is in effect a compromise between the attractive
and the repulsive components of the effective force inside the nucleus. Given in
addition the gravitational pressure in stellar objects, some further compression is
possible, but eventually one reaches a limiting critical density; we shall return to
these aspects shortly. Given the mass M of a black hole candidate, we can use its
Schwarzschild radius R_s = 2GM/c² to determine its density M/V_s, where the
Schwarzschild volume is V_s = 4πR_s³/3. This density decreases as 1/M²; if the mass
is too small, the density reached for the Schwarzschild volume would be larger than
the critical nuclear density, so that nucleon repulsion can stop the collapse before
it has proceeded that far. In such a case, the end of the stellar evolution will be a
neutron star, consisting of neutrons compressed to the maximum density. This will
presumably be the fate of stars having masses up to a few times that of the sun.
For a large enough stellar mass, however, of some ten solar masses or more, the
Schwarzschild radius is around 30 km, and the density remains below the critical
nuclear density. Now the collapse continues, and all matter eventually ends up at
a “point” in the center. How “large” this point actually is, and how the nucleonic
repulsion is eventually overcome, brings us today to the limits of our understanding.
General relativity as a classical theory predicts that it really is a point. But we know
from all other realms of physics that for very short distances, the classical theory has
to be modified to include the uncertainty effects of quantum phenomena. Quantum
theory, with its uncertainty principle, forbids us to specify simultaneously the location
and the energy of an object. So to speak about a specific mass at a point is in quantum
theory not really possible. What is needed is the extension of general relativity to
the quantum domain, quantum gravity, and that has so far remained a challenge to
theoretical physics.
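The 1/M² law for the density at the Schwarzschild radius makes the two possible fates easy to see numerically. The sketch below is illustrative and compares with a nuclear density of roughly 2×10¹⁷ kg/m³, a commonly quoted rough value taken here as an assumption.

```python
# Density at the Schwarzschild radius, M / (4/3 * pi * Rs^3), falls as 1/M^2.
import math

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
rho_nuclear = 2e17                    # kg/m^3, rough nuclear density (assumed)

for n in (2, 10, 100):
    M = n * M_sun
    Rs = 2 * G * M / c**2
    rho = M / (4.0 / 3.0 * math.pi * Rs**3)
    side = "above" if rho > rho_nuclear else "below"
    print(f"{n:3d} solar masses: {rho:.2e} kg/m^3, {side} nuclear density")
# light stars hit nuclear density first and end as neutron stars;
# heavy ones stay below it and can collapse to black holes.
```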
The history of black holes as part of our picture of the universe is indeed also quite
curious. The decisive work of Einstein and Schwarzschild is not yet a hundred years
old, and at the beginning such objects were generally considered as figments of human
imagination, mathematical curiosities, but certainly not reality. That a collapsing star
might lead to something like a black hole was considered as most unlikely, since the
irregularities in its structure would presumably cause it to fragment in some chaotic
way and not contract to the singularity of the theory with perfect spherical symmetry.
Only in the 1960s did Stephen Hawking and Roger Penrose succeed in showing that
no matter how irregular the collapse, the result would be a perfect black hole with
a Schwarzschild radius determined by its mass alone. Subsequently it became clear
that black holes could, in fact, have only three properties: mass, spin and charge; their
radius then followed from the Einstein equations. John Wheeler of Princeton, who is
generally credited as the first to call them black holes, summarized this situation by
noting that “black holes have no hair”: they only have the three mentioned attributes.
At this point, we cannot resist noting that the smallest constituents of matter,
elementary particles, also have no hair, but only mass, spin and charge. We shall
return to them in Chap. 5, but we note already here the crucial difference between
the two worlds. In black hole physics, we have a classical formulation, the Einstein
equations, as well as their solution, but no quantum extension. In elementary particle
theory, there is a counterpart to the Einstein equations, quantum chromodynamics
(QCD), but not (yet) one to their solution. So we have today only estimates of the
size of the elementary particles, and we don’t know if their radius is an event horizon
for the quarks of which they consist.
How can we see if in our universe there really are black holes? Since they emit no
light, we can’t see them; we can only note the effect of their gravitational force on
their surroundings. One proposal in this vein accounts for the motion of matter in our
galaxy as the consequence of an extremely massive black hole at its center. Another
possibility relies on the feature that black holes absorb anything their gravitational
force can get a hold of. If a black hole and an ordinary star are fairly close together—
this can happen if in a system of two stars rotating about each other one collapses
to become a black hole—then gaseous material from the shining star will be
sucked into its black hole companion, forming a particular radiation pattern. The
binary star system Cygnus X-1 appears to provide such evidence.
In any case, the experts in the field are today convinced that our universe contains
billions of black holes, in sizes ranging from super-massive (up to a billion times the
mass of the Sun) to stellar size (around 10 times the mass of the Sun), distributed
throughout the billions of galaxies filling our world. There appear to be unbelievably
many rooms we may never enter. We can see their effect on the motion of stars, we
can see them “suck in” luminous clouds of gaseous matter, but that’s the best we can
do, we can never get any direct signal from them—or could we after all? Stephen
Hawking, in one of the great strokes combining the large and the small, relativity
theory and quantum physics, pointed out that, in principle, there is a chance. Black
holes can radiate. First thoughts in that direction had come from the Russian physicist
Yakov Zel’dovich, who argued that spinning black holes would radiate; Hawking then
showed that in fact they all will, whether spinning or not. But it is indeed a strange
kind of radiation, and in order to understand it, we have to first consider in more
detail what the “empty space” that we assume the black holes to exist in really is. It
is not just “nothing”; we now know that it is a remarkable virtual world.
3.3 The Sea of Unborn Particles

A dream cannot disappear, once it was dreamt. But if the person who has dreamt it,
does not keep it—where does it go? Here in Fantásia, in the depths of our Earth, the
forgotten dreams are deposited in fine layers, one on top of the other. All of Fantásia
rests on foundations of forgotten dreams.

Michael Ende, The Never-Ending Story

Bastian Balthasar Bux, the little boy in Michael Ende’s Never-Ending Story, had a
big problem: where did he come from, who was he? He finally found his solution
and salvation in the mine Minroud, where all the forgotten dreams of mankind were
carefully stored. In today’s physics, there is a time-mirrored counterpart to this mine,
a sea containing all the future dreams, not yet dreamt. This “Dirac sea” contains all
possible particles that have not yet become reality. It seems worthwhile looking at it
a little more closely.
When the philosophers of old thought of empty space, of the vacuum, as a fifth
state of matter, a quintessence, they were, by our present thinking, absolutely on
the right track. The only thing that keeps the vacuum from becoming matter is the
lack of energy. Matter means that there is some mass there, and Einstein’s celebrated
formula E = mc2 tells us that energy and mass are just different ways of talking
about the same concept. In the desert, there are flowers that persist as a form of
grey dust for months and months; but when rain finally does fall, they immediately
blossom in a most striking way. Similarly, the vacuum remains empty space until
eventually somehow, in some way, from somewhere, energy is deposited. Then a
pair of physical particles appear, for example, an electron and its positively charged
counterpart, the positron. If the deposited energy exceeds that of their two masses,
the two particles become reality. Empty space plus energy thus means matter. The
overall charge of the pair has to remain zero, since the vacuum had no charge, and this
feature has to be conserved in the creation process. The same is true for the overall
momentum, which puts some constraints on how the energy is to be deposited.
Using these ideas to construct a physical theory, the British physicist Paul Dirac
proposed in 1930 that the vacuum is something like a sea of submerged particles. They
lack the energy to emerge to the surface and become real; they remain virtual, until we
somehow provide the needed energy and allow them to enter reality. Dirac originally
introduced such a picture in order to cure a problem he encountered in formulating a
relativistic equation for the motion of electrons. His equation gave him not only the
wanted electrons, but also a solution with an apparently negative energy; he therefore
banished these “anti-electrons” (today’s positrons) into the vacuum, below the energy
level for real existence. Only two years later, the American experimentalist Carl
Anderson discovered the missing link, the positron, and thus restored the symmetry
of the world given by Dirac’s equation. So today we can imagine the vacuum as a
medium containing infinitely many virtual particles of positive and negative charge,
all the possible particles of the world, lacking only the energy to become real: empty
space as a Dirac sea.
We can continue with this picture even further. How far are they submerged? That
depends on their mass. If we define the energy level of empty space as our zero line,
then we have to supply an energy of two electron masses, 2m_e, in order to lift an
electron–positron pair out of the sea. It has to be a pair, since the electric charge of
empty space is zero, and adding energy will not change that; so after our creation
process, it still has to be zero. If we are after bigger fish and want to produce a
proton–antiproton pair, we need twice the proton mass, 2m_p. So we have to pay by
weight.
At this point, quantum theory becomes essential. We will return to it in more detail
in Chap. 5; here we only appeal to its perhaps most profound feature, the uncertainty
principle, formulated in 1927 by the German physicist Werner Heisenberg. To resolve
an object of a certain small size, we need to look at it with a wavelength of a
comparable size; that’s why very short wavelength light is needed to observe the
lattice structure of crystals, for example. Short wavelength means high frequency,
and energy as well as momentum of a light beam increase with frequency. So if we
want to locate a very small object very precisely, we need to look at it with light
of very short wavelength and thus of very high frequency. This light therefore has
rather high momentum and gives the particle a considerable kick. As a result, we find
that the determination of the position and the momentum of a particle have opposing
requirements: to achieve a better localization, we need shorter wavelength light,
which in turn gives the particle a bigger kick and hence modifies its momentum by a
larger amount. If we want to measure its momentum as precisely as possible, we have
to use light of long wavelength, which rules out a good localization. The outcome
of all this is that particles, objects on a microscopic scale, as seen by an observer,
seem to acquire a particular form of claustrophobia; they insist: “don’t fence me in”.
If you give them all the space in the world, they quietly sit there. But as you reduce
their space, they start moving around, and the more you reduce it, the faster they
move. In more scientific terms, the more precisely you specify the position of a
particle, the less you know about its momentum, and vice versa. Spatial position and
momentum are “complementary”, and the product of their uncertainties can never
fall below a certain value, the Planck constant ℏ. In a similar way, energy and time
become complementary, and so the uncertainties in their determination affect each
other. From the uncertainty principle we can therefore conclude that there are limits
to a statement such as “space is empty, the energy is zero”.
In our case of the submerged pairs, as we just saw, the energy uncertainty is 2m,
where m is the mass of the particle; 2m is the energy difference between empty space
and space containing two particles. We can therefore claim that the space is truly
empty only for time intervals larger than ℏ/(2m). For short instants, shorter than this
limit, the pair can appear fleetingly at the surface and then submerge again. During
this short time, it has borrowed the energy needed to appear from its surroundings;
but this is a very short-term loan and has to be repaid in a time ℏ/(2m). Physicists
speak about this phenomenon as a quantum fluctuation, resulting in pair creation and
subsequent annihilation. However, because of the smallness of Planck’s constant,
the lifetime of the pair in the real world is so exceedingly small (about 10⁻²² s) that
we can hardly hope to see them. Nevertheless, we will find that there are situations
where this phenomenon becomes relevant. If they happen to appear in the presence
of some outside agent that captures and removes one of the two, the other has nothing
to annihilate with and must thus become “real”. The price in energy has to be paid
by the outside agent.
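The time scale of such a fluctuation follows directly from the uncertainty relation. The sketch below is illustrative; it writes the borrowed energy as 2mc², so that the lifetime is ℏ/(2mc²), which is what the shorthand ℏ/(2m) above means in units where c = 1.

```python
# Lifetime of a virtual pair from the uncertainty relation: tau = hbar/(2*m*c^2).
hbar = 1.055e-34                     # reduced Planck constant, J s
c = 2.998e8                          # speed of light, m/s

for name, m in (("electron-positron", 9.109e-31),
                ("proton-antiproton", 1.673e-27)):
    tau = hbar / (2 * m * c**2)
    print(f"{name} pair: tau ~ {tau:.1e} s")
# electron pairs: some 6e-22 s, the order quoted in the text;
# heavier pairs surface for even shorter instants.
```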
So, in quantum theory, the vacuum turns out to be more than just empty space,
nothing. And there is in fact even a direct experimental test for this, proposed in 1948
by the Dutch physicist Hendrik Casimir. The fluctuations of the vacuum appear not
only as virtual pairs of particles but also in the form of electromagnetic waves. On
an atomic scale these are quantized, that is, they appear only in discrete packages,
multiples of a fundamental quantum of energy. For waves inside a metal box, the
lowest possible energy state corresponds to a standing wave whose wavelength is
just the size of the box; correspondingly, higher energy states are given by shorter
wavelength standing waves, see Fig. 3.3. If we now place two uncharged parallel
metal plates extremely close to each other, with a separation of some hundred times
atomic size, then in the space between the plates no excitations of wavelength longer
than this scale are possible, whereas on the outside such longer wavelengths exist.
As a result, the additional outside waves press the two plates together, with a force
as large as one atmosphere. After a number of preliminary studies, in 2001 a group
at the University of Padua in Italy finally succeeded in measuring the Casimir effect
between two parallel plates, showing that the fluctuations of the vacuum are indeed
real and press the plates together. Such an effect is evidently to be expected if the
plates are in a box heated to a certain temperature, so that it contains actual pho-
tons corresponding to that temperature. But here the remarkable fact is that even in
vacuum, at zero temperature, the photons of virtual quantum fluctuations can exert
such a pressure. This indicates that on a quantum level the vacuum is not simply
“empty”.
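For the parallel-plate geometry, the standard expression for the Casimir pressure is P = π²ℏc/(240a⁴), with a the plate separation. As an illustrative check (the separation of about a hundred atomic sizes mentioned above is here taken as 10 nm), this indeed gives a pressure of the order of one atmosphere.

```python
# Casimir pressure between parallel plates: P = pi^2 * hbar * c / (240 * a^4).
import math

hbar, c = 1.055e-34, 2.998e8
atm = 1.013e5                        # one atmosphere, Pa

a = 10e-9                            # plate separation: ~100 atomic sizes
P = math.pi**2 * hbar * c / (240 * a**4)
print(f"P = {P:.2e} Pa = {P/atm:.1f} atm")   # -> about 1.3 atmospheres
```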
We have thus found another, very special horizon: the vacuum itself. It sepa-
rates our world of “real” things from the “virtual” world, made up of all possible

Fig. 3.3 The Casimir effect


particles that could surface into reality if the necessary energy is made available from
somewhere.
A particularly interesting case is provided by black holes, which on a classical
level could only exert their force of gravity to make their presence known. This is
where in 1975 Stephen Hawking came in, with what became one of the most striking
predictions in astrophysics: he showed that such virtual pairs could in principle allow
black holes to send radiation into the outside world. But a word of caution: the careful
reader will have noticed that we seem to have some reservations, since we said “in
principle”. It means, of course, that so far Hawking radiation has not been seen, and
that we moreover even think we know why.

3.4 Invisible Light on the Horizon

Imagine a black hole sitting somewhere in interstellar space. As we found out, in the
actual universe the physical space is not really “empty”, it still contains the remnants
of the radiation emitted at the time of the Big Bang. Let us ignore this background
radiation for the moment and assume space to be totally empty; this is in fact not
realistic, which, as we shall see, will prevent us from observing Hawking radiation.
The environment of our black hole then is Dirac’s sea of unborn particles. Electron–
positron pairs jump out of the sea for an instant and then annihilate to disappear
again. But if this happens near the surface of the black hole, during their appearance
the tidal force of gravity may tear them apart, catch one of the pair and suck it into its
regime of no return. The other is then left on its own, to suffer existence—its partner
for annihilation is gone, it has to remain real. So, of the virtual pair we have one
disappearing in the black hole and the other left in real space. This means that, on a
quantum level, black holes will radiate, they will appear to emit the left-over partners
of the virtual pairs brought to reality by the force of gravity. In other words, from our
point of view, black holes are not completely “dead”: they do send out the quantum
signals of the broken pairs. This is today known as Hawking radiation—never (as
yet) observed, but an ever remaining challenge to human imagination. In truly empty
space, a black hole should thus appear to an outside observer as surrounded by a
halo of radiation.
But how is this possible? We have seen that no information can pass the event
horizon of the black hole from the inside. So what price does the Hawking radiation
have to pay to “get out”? It must be such that it cannot tell us anything about the
inside of the black hole. In the terminology of physics, that means that it must be
stochastic, random, thermal. Let us see what that means.
Transferring information means sending out an ordered sequence of signs, so
that the order tells something to the receiver. These signs can be words, letters,
sequences of numbers, or lines in a barcode, as used to specify the price of goods
in a supermarket. The information is always encoded in the specific order of the
signs. The sinking ship sends radio signals · · · – – – · · · for help, where “three
short” stands for the letter “S” and “three long” for “O”. And the secret codes of the
military were secret only as long as the enemy did not understand the meaning of
the order. This indicates how a signal can be devoid of any meaning. We take the
set of whole numbers from one to ten and form sequences by randomly choosing
numbers out of that set. We cannot do this as humans, because we may have unknown
biases, preferences for the three, or for prime numbers, for example. But we can use
what is known as a random number generator, which chooses in a truly random
fashion. Such a device is used every week to pick the numbers to determine the
winning combination in the lottery (at least we believe so…). So if we send out
signals prepared in this way, the only information the receiver can obtain from them
is how large our set of numbers is. And that our source is able to construct random
sequences.
A radiating thermal source follows such a pattern. If its overall average energy
is fixed, it can emit an immense number of different signals, waves of different
wavelengths or frequencies, constrained only by the available energy. We call a
system thermal if it chooses signals out of this vast set in a random way, by throwing
dice, so to speak. For a given energy, there will be more states in some energy intervals
than in others, so if we make a distribution over the energies of the signals received,
it will have a peak somewhere, and we use this peak to define the temperature of
the system. This (or the corresponding average energy of the medium) is the only
information we can get from such thermal radiation.
The radiating system just discussed is an idealized case. A real star, for example,
will contain different elements, hydrogen, helium and more. And their atoms will
emit and absorb light of certain well-defined and well-known frequencies. So looking
at the light from such a star, we know more than just its temperature—the spectral
lines corresponding to emission or absorption for certain atoms tell us that the star
in fact contains such elements. Our idealized star is what physicists call a black
body—it emits and absorbs equally well for all frequencies; it has no lines. And
although black body radiation was studied long before black holes entered the scene,
the Hawking radiation from black holes is in fact thermal radiation of this type.
Just as in the case of black body radiation, Hawking radiation is determined by the
temperature of the black hole. Let’s assume for the moment that we are looking at
black holes having no spin and no charge; the more general cases have been solved
as well. Since its mass M is then its only property, its temperature must be specified
by this mass. Hawking has shown that it is in fact inversely proportional to M; he
obtained

T_BH = ℏc³/(8πkGM)

for the temperature of a black hole. Here G is the universal constant defining the scale
for the gravitational force, ℏ Planck's constant, c the universal speed of light, and
k Boltzmann's constant. The presence of these four constants tells us that Hawking
radiation is a thermal effect (hence the k) caused by gravitation (the G), based on an
interplay of quantum theory (the ℏ) and relativity theory (the c). In a classical world,
with ℏ = 0, T_BH = 0: there is no Hawking radiation. In the next chapter, we will
show a simple derivation of Hawking’s result. Here we just wonder, for a moment,
how hot a black hole should be, according to Hawking. Using his formula for T_BH
and choosing as the black hole mass ten solar masses, we find that the Hawking
temperature is about one billionth of a kelvin—not particularly hot. Moreover, the
wavelength of any radiation is inversely proportional to the temperature of the emitter;
that’s why the emitted radiation shifts from infrared to ultraviolet as the temperature
is increased. And so the wavelength of the typical Hawking radiation just considered
becomes of the size of the black hole, some tens of kilometers.
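These numbers follow directly from Hawking's formula; the illustrative sketch below evaluates it for ten solar masses, together with the Schwarzschild radius that sets the scale of the emitted wavelengths.

```python
# Hawking temperature T = hbar*c^3 / (8*pi*k*G*M) for a ten-solar-mass hole.
import math

hbar, c, G, k = 1.055e-34, 2.998e8, 6.674e-11, 1.381e-23
M = 10 * 1.989e30                    # ten solar masses, kg

T = hbar * c**3 / (8 * math.pi * k * G * M)
Rs = 2 * G * M / c**2
print(f"T = {T:.1e} K")          # -> ~6e-9 K, of order a billionth of a kelvin
print(f"Rs = {Rs/1e3:.0f} km")   # -> ~30 km, the length scale of the glow
```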
So any black hole will emit thermal quantum radiation. It is “thermal” in that it is
random: by measuring many emitted electrons, we can only specify the temperature
and hence the mass of the black hole, nothing more. It is “quantum” because only
the virtual quantum fluctuations gave a pair the chance to escape for the briefest of
moments from the Dirac sea, to be caught by the gravity of the black hole. But even
the black hole has to pay the price in energy: the radiated electron carries away one
electron mass, and this is now missing from that of the black hole. So after one such
emission, the black hole mass has become M − m_e. Since M is so vastly larger than
an electron mass, it is of course hardly noticeable. But if repeated often enough, even
small acts can lead to large consequences: eventually the black hole will evaporate. It
will become hotter and smaller as its mass decreases, and in the end, it will disappear
and be gone.
Fortunately, at least for lovers of black holes, this whole story is based on an
untenable assumption. We have just seen that the Hawking temperature for a stellar
black hole of some ten solar masses turns out to be around a billionth of a kelvin. And
so we now have to recall that we had assumed our black hole to sit in empty space. In
reality, it finds itself immersed in the background radiation from the Big Bang, with a
temperature of 3 K, much higher than its own thermal radiation. So instead of being a
hot object in a cold environment, evaporating by the emission of Hawking radiation,
a black hole is in fact a cold object in a (relatively) hot medium, growing, becoming
more massive, and hence also bigger, by the absorption of the cosmic background
radiation. This growth will stop only when and if the continuing expansion of space
has reduced the temperature of the microwave background radiation to values well
below the Hawking temperature. The temperature of the background radiation has
dropped by a factor of a thousand since the horizon of last scattering, fairly soon after
the Big Bang. So to have it drop by a factor of 10⁹ to bring it near to the Hawking
temperature of a stellar mass black hole—that will really still take quite a while.
To have a higher Hawking temperature, the mass of the black hole must be much
smaller, and in the stellar world, that is not really possible. To form a black hole, the
force of gravity “contracting” a star has to be stronger than all possible other forces
resisting a contraction. The last line of resistance is, as we have seen, that leading
to neutron stars. It comes into play when gravity has overcome the electromagnetic
forces that lead to the formation of atoms, squeezing the electrons into the nucleus to
make neutrons out of its protons. At this point, the stellar matter consists of only neu-
trons, and these offer a strong resistance to being compressed further. This resistance
is encoded in the exclusion principle, formulated in 1925 by the Austrian physicist
Wolfgang Pauli. It provides the underlying reason for the stability of all matter and
thus is another one of the crucial results of quantum physics. Only sufficiently heavy
stellar objects can collapse to the volume defined by the Schwarzschild radius and
yet have a density below the nucleonic compression limit provided by the exclusion
principle. As a result, light stars, of some two or three solar masses, end as neu-
tron stars; only more massive ones can collapse further to form black holes. And
for these, the Hawking temperature is many orders of magnitude below that of the
cosmic background radiation.
That means that the observation of Hawking radiation from “normal” black holes
is out of the question. But physicists don’t give up all that easily. What if there
were, in our universe, black holes of smaller mass, somehow produced very early,
shortly after the Big Bang, and then left over? The density of matter at that time was
presumably high enough to form regions that could collapse under gravity to form
black holes of smaller size than those from stellar collapse. To have the Hawking
radiation overcome the background radiation, the mass of such a “primordial” black
hole has to be sufficiently small. On the other hand, if it were formed more or less
at the time of the Big Bang, it has had a long time to emit Hawking radiation and
thereby evaporate. The question then is whether there ever were black holes whose
mass was just right, between too heavy and too light, and whether they have managed
to survive until today. So for the time being, Hawking radiation remains for us the
secret glow of black holes. And it leaves us with a conceptual puzzle. Quantum
effects arise typically at microscopic scales, for the very small. Even if there were
no microwave background radiation: is it possible to produce a quantum fluctuation
with a wavelength of more than ten kilometers? The relation between the large and
the small remains enigmatic.
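One can also turn Hawking's formula around and ask how light a primordial black hole would have to be for its temperature to exceed that of the 3 K background. The sketch below is illustrative; the answer comes out near 5×10²² kg, somewhat less than the mass of the Moon.

```python
# Black hole mass whose Hawking temperature equals the cosmic background:
# M = hbar*c^3 / (8*pi*k*G*T), inverting Hawking's formula.
import math

hbar, c, G, k = 1.055e-34, 2.998e8, 6.674e-11, 1.381e-23
T_cmb = 2.7                          # background temperature, K
M_moon = 7.35e22                     # lunar mass, kg, for comparison

M = hbar * c**3 / (8 * math.pi * k * G * T_cmb)
print(f"M = {M:.1e} kg = {M/M_moon:.2f} lunar masses")
# only primordial holes lighter than this would be hotter than their
# surroundings and could evaporate today.
```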
4 The Visions of an Accelerating Observer

Weighing the heart of Hunefer, Egypt, 1300 B.C.


One of the basic features of any body is its weight—and already the earliest
civilizations had means to determine that. The oldest scales were found in Egyptian
tombs and are more than 5,000 years old. Lifting objects up from the surface of the
Earth required a different amount of force for different objects, so that the identifi-
cation of weight was quite straightforward. The mass of a body, more specifically
its inertial mass, is a more subtle concept: it specifies its resistance to being set in
motion. To picture the difference between mass and weight, consider a pendulum,
constructed by hanging a ball on a string (Fig. 4.1): the mass M of the ball specifies
how hard we have to push to make it move, its weight W the strength of the string
to support it.
Is the mass of a body its weight? This is one of the questions which gave rise to
modern physics. In general, the two are evidently not the same. The force needed to
move the ball is the same on Earth, on the Moon, or in outer space; the gravitational
attraction in these different situations is very different, however, and so the weight of
the ball will depend on where it is measured. Nevertheless, we can consider inertial
Fig. 4.1 Mass M as resistance to motion, weight W as a result of gravity

and gravitational mass as reflections of the same thing, provided gravity treats a
massive object in the same way as any other force does. The crucial test for this
is whether bodies of different mass fall from a given height in the same time. The
acceleration of gravity is proportional to the ratio of weight to mass (see Box 7).
As Galileo had shown in his celebrated experiments, this acceleration is indeed the
same for all bodies, and so we can choose to define mass and weight in such units
that they become equal. That is what we usually do on Earth; but we have to
keep in mind that this is our choice for our own terrestrial environment. A stone having
an inertial mass of 10 kg has this mass on Earth, on the Moon, or in outer space;
it specifies the force needed to set it in motion. With our choice of units, on Earth
it also weighs 10 kg. But on the Moon, its weight is much less, and in outer space,
it has none. Still, even on the Moon or on another planet, objects of different inertial
mass will fall the same distance in the same time. So inertial mass and gravitational
mass are indicators of the same thing: resistance to a force. This is often referred
to as Galileo’s principle of equivalence; gravitational attraction is a force acting on
massive bodies in the same way as any other force, and so we can equate mass and
weight on Earth by choosing suitable units, such as kilograms, for both.

Box 7. Inertial versus Gravitational Mass

The inertial mass m_i is the resistance an object offers to a force trying to set
it in motion,

F = m_i a.

The gravitational mass m_w is its weight in the gravitational field of a body, say
the Earth, of mass M,

F = G m_w M/R²,

where G is Newton's gravitational constant and R the radius of the Earth.
Hence we have

a = d²r/dt² = (GM/R²)(m_w/m_i),

which leads to

r = (GM/2R²)(m_w/m_i) t².

If bodies of different weight fall the same distance r in the same time t,
m_w/m_i is a constant for all, and we can choose (say on Earth) m_w = m_i,
making mass equal to weight.
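The cancellation in Box 7 can be made explicit in a couple of lines: once m_w = m_i, the fallen distance contains no trace of the body's mass. The sketch below is purely illustrative.

```python
# With m_w = m_i, the distance fallen, r = (G*M / 2*R^2) * t^2,
# is the same for every body, whatever its mass.
G = 6.674e-11
M_earth, R_earth = 5.972e24, 6.371e6

g = G * M_earth / R_earth**2         # ~9.8 m/s^2; no body mass enters
for t in (1.0, 2.0, 3.0):
    print(f"after {t:.0f} s: fallen {0.5 * g * t**2:6.2f} m, for any mass")
```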

Galileo thus established the equivalence of inertial and gravitational mass some four
hundred years ago. About one hundred years ago Albert Einstein formulated a similar
but more general equivalence.

4.1 Gravity and Acceleration

Here on Earth, it is the force of gravity that gives us weight. If we were in a rocket
moving through space at constant speed, somewhere far removed from any stellar
source of gravity, we would have no weight. But if the commander of our rocket
decided to turn the engines on again to accelerate, we would suddenly be “pushed
back into our seats”. Einstein concluded from this that a solitary space traveller
confined to a box without windows could never tell if his box was sitting on Earth
or if it was in a spaceship undergoing an acceleration equal to that of gravity on
Earth. So Einstein’s equivalence says that for an observer on a star of given gravity,
nature would behave in the same way as for an observer in a rocket travelling with
the same acceleration. In other words, we can simulate the effect of a massive star by
the acceleration of a rocket engine. If you close your eyes, you won’t know which
it is.
On the other hand, if you don’t close your eyes but bring along some measuring
devices, you may be able to beat Einstein and identify your location. For one thing, on
the surface of the star, gravity leads to tidal effects: the force on your head is slightly
less than that on your feet, since they are closer to the center of the star. Given
a sufficiently sensitive detector, this could tell you where you are. So Einstein’s
equivalence assumes the star to be so big that tidal effects can be ignored or not
detected; the bigger the star, the further you are from its center and the smaller the
difference in gravitational pull between your head and your feet. But there is an
additional identifying effect due to one of the most striking predictions made in
modern physics. Like Hawking radiation, it is a relativistic quantum effect, and like
Hawking radiation, it remains a prediction, for the time being. But unlike Hawking
radiation, as unexpected as that may be, the effect proposed by the Canadian physicist
William Unruh in 1976, at essentially the same time as Hawking’s work, sounds like
genuine science fiction.
Imagine you are on a spaceship travelling through empty interstellar space. As
before, we will neglect the microwave background radiation and assume space to be
truly empty, containing nothing. Your space ship is travelling at constant speed, you
put out detectors and verify: there is truly nothing out there, it is the physical vacuum,
empty and cold. Now you turn on the engines and accelerate the space ship, holding
Fig. 4.2 The world line of a bullet (left side) compared to that of a spaceship (right side)

it at constant acceleration. And your detectors now tell you that you are travelling
not through empty space, but through a hot medium. There are particles out there,
photons and electrons, hitting the thermometer of your detector and indicating that
the temperature of empty space is not zero. If you stop the acceleration, empty space
is once again just that.
So what is real? Is the void out there cold and empty, or is it a hot gas? Hot
enough to cook a steak, says Bill Unruh. To understand what is happening, we have
to consider in a little more detail the motion of objects in space and time, their “world
lines”, in the terms of relativity theory. Imagine a marksman shooting at a target,
and draw this in a space-time diagram, as in Fig. 4.2. The stationary marksman and
the stationary target have world lines parallel to the time axis, whereas the bullet,
as seen by the marksman, traverses both space and time. We have here considered
the bullet to leave the gun with a fixed muzzle velocity v and continue at this speed
until it reaches the target. This constant speed is what makes the world line straight,
with x = vt. Now consider instead a spaceship, starting from rest on a launch pad
and then accelerating until it has left the gravitational field of the Earth, to cruise
through interstellar space at constant speed. Its world line has the form also shown
in Fig. 4.2, as seen from the stationary launch pad. It bends over in the accelerating
phase and then becomes straight when the space ship travels with constant velocity.
Let us now see what happens if the spaceship continues to accelerate. It becomes
faster and faster, but, as we know, it can never reach the speed of light, since it
is massive, and no massive body can move with the speed of light. For constant
acceleration, its world line can be calculated, and in Fig. 4.3, the form of this world
line is illustrated. Here we have measured the time multiplied by the speed of light, so
that the straight line emerging from the origin at a 45° angle defines a light beam sent
out by an observer at the origin just as the spaceship is launched. The solution of the
equation of motion of the spaceship shows that if the distance between the launching
pad and the origin of the light beam is just c²/a, where a is the acceleration of the
spaceship and c as usual the speed of light, then as time goes on, the spaceship and
light beam get ever closer to each other, but the two will never meet. A further look
at Fig. 4.3 reveals some strange features. For the observer at the origin (or for any
other observer further away) the spaceship is never visible, and there is no possible
communication. The space traveller can send a signal that the observer will eventually
receive, but he will never get a reply. For the spaceship, the observer and anything
Fig. 4.3 The world line of a spaceship undergoing constant acceleration a; the light beam is sent out at the launch time by an observer at a distance d = c²/a from the launch pad

within the light cone defined by the beam emerging from the origin is beyond some
kind of horizon, the so-called Rindler horizon, named after the Austrian physicist
Wolfgang Rindler. For the passengers of the spaceship, it is almost like the surface of
a black hole: you can throw something in, but you will never get anything out. And
the crew at the launch pad can communicate with the spaceship for a little while, but
once their world line (the dashed red line in Fig. 4.3) crosses the light cone, that is,
passes beyond the Rindler horizon, they can never reach the spaceship anymore.
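To get a feeling for the scale, a one-line Python sketch (our own) evaluates the distance d = c²/a just given: for an acceleration equal to terrestrial gravity, the Rindler horizon lies roughly one light-year behind the launch point.

    c, g = 2.9979e8, 9.81              # speed of light (m/s), terrestrial gravity (m/s^2)
    d = c**2 / g                       # distance of the Rindler horizon
    print(d / 9.461e15)                # in light-years: ~0.97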
It is the appearance of this horizon, removing part of space from the reach of the
spaceship, that forms the basis of the strange and mysterious Unruh effect. To get a
little closer to what it claims, let us imagine a couple at the launch site, Alice and
Bob in the customary jargon of relativists giving names to A and B…Alice gets onto
the spaceship, Bob remains on the ground. They have promised to send each other
signals once a second. And while Bob does register the signals sent by Alice at just
the promised rate, Alice notes that the time intervals between Bob’s signals rapidly
increase, and after some time, they cease to come.

4.2 A Total End of Communication


Once Bob has passed beyond the Rindler horizon of the spaceship, he can no longer
send any message to Alice. She can radio him, but she will never get a reply; in fact,
she will never even know if he is still alive. So whatever relation there was between
the two before the launch, it is now destroyed forever, as illustrated in Fig. 4.4.
It is evident from this that the world as seen by an observer undergoing constant
acceleration is quite different from that seen by an inertial observer. And it suggests
that if we can find observable features resulting from this difference, we could use
them to determine the difference between an accelerating rocket and the surface of
a star.
Everything we have considered so far was classical relativistic physics. That’s why
we could talk about empty space (ignoring the microwave background). But as we
have seen, quantum theory turns this empty space into the sea of unborn particles.
They appear as fluctuations, but with such a short lifetime that we can never see
them. A black hole may grab one partner of such a pair and suck it inside its event
Fig. 4.4 The fate of the entanglement of Alice and Bob after the launch of the spaceship. The lower dashed blue line shows a signal sent by Bob shortly before he crossed the Rindler horizon of the spaceship; it will eventually reach Alice. The dashed red line shows the signal sent by Alice at the same time; it reaches Bob when he is already beyond her Rindler horizon. The upper dashed blue line shows Bob’s futile reply: it will never reach her

horizon, leaving the other outside and now real, with the price for creation paid by the
energy, the mass of the black hole, which is correspondingly reduced. Such Hawking
radiation thus breaks the coupling of the virtual pair. The accelerating spaceship can
in fact do a similar trick. While passing through the vacuum, it can take one (Alice) of
the virtual fluctuating pair aboard and leave the other (Bob) behind. Given the right
scales, the one left behind will soon fall beyond the Rindler horizon of the spaceship
and hence it can never reach its now travelling partner anymore to annihilate. Both
partners of the virtual pair have now become real; and who pays for this lift from
below to above the level of reality? Just as the mass of the black hole is decreased
by Hawking radiation, the spaceship now has to fire its engines a little more to make
up for the energy needed for the pair creation. And the end effect is that an observer
on board the spaceship notes that his thermometer, because of the collisions with the
fluctuations, now tells him that empty space is not really cold and empty.

4.3 The Temperature of the Vacuum

The spaceship travelling through what we, as inertial observers, consider to be the vacuum, takes aboard electrons and photons from the quantum fluctuations, and so its thermometer registers these hits as the presence of a hot medium. Unruh calculated the temperature of such a medium and found it to be

    T_U = ℏa / (2πkc),

where a denotes the acceleration of the spaceship. The presence of the three fundamental physical constants shows that the effect is of quantum nature (Planck’s constant ℏ), that it is relativistic (the speed of light c), and that it is thermal radiation (Boltzmann’s constant k). If we ignore quantum physics and set ℏ = 0, or relativity theory by letting c → ∞, the Unruh temperature becomes zero, the effect goes away. Unruh
radiation is visible only to the accelerating observer; an inertial observer somewhere near the path of the spaceship in the same region of space finds it to be completely empty. Evidently what we see is determined to some extent by what we do.
But we have so far not considered Bob anymore. The partner of the quantum
fluctuation that is left behind soon passes the Rindler horizon of the spaceship; it is
beyond any communication with its partner in the fluctuation and cannot annihilate
anymore. It has benefitted from the energy provided by the engines of the spaceship
and thus become real. And while the spaceship is forever beyond the event horizon
of the inertial observer at the origin of Figs. 4.3 and 4.4, the electron “Bob” does
appear after some time in the world of this observer, bearing witness to the passage
of the invisible spaceship. This form of Unruh radiation must also be thermal—just
as Hawking radiation can only indicate the mass of the black hole, so Unruh radiation
can only tell us what the acceleration of the spaceship is, nothing more. No message
from Alice… the Rindler horizon is ultimate.
We have already indicated that Unruh radiation also remains a prediction, for the time being. If you want to cook your steak by accelerating it, you have to attain an acceleration of about 10²³ m/s² in order to reach 300 °C, that is, some 10²² times the acceleration of gravity at the surface of the Earth. And similar to Hawking radiation, the wavelength of Unruh radiation for the terrestrial g = 9.8 m/s² becomes immense, of several light-years. So while such radiation is in principle of great interest, it also raises again the problem already encountered for Hawking radiation: the question whether quantum effects of stellar dimension make sense. We will soon encounter quantum theory as the correct description of the very small – whether this remains true for the very large is an issue to be resolved by a quantum theory of gravitation, so far still lacking.
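These orders of magnitude follow directly from Unruh’s formula; here is a rough check, a Python sketch of our own using standard SI constants:

    import math
    hbar, k, c = 1.0546e-34, 1.3806e-23, 2.9979e8
    a_steak = 2 * math.pi * k * c * 573 / hbar     # acceleration for T_U = 573 K (300 °C)
    print(f"a(300 °C) = {a_steak:.1e} m/s^2")      # ~1.4e23 m/s^2
    g = 9.8
    T_g = hbar * g / (2 * math.pi * k * c)         # Unruh temperature at terrestrial g
    lam = hbar * c / (k * T_g)                     # thermal wavelength, = 2*pi*c^2/g
    print(f"T_U(g) = {T_g:.1e} K, wavelength = {lam/9.461e15:.1f} light-years")  # ~6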
The phenomenon described by Unruh is in fact quite general. If instead of an accelerating rocket in interstellar space, we consider an observer hovering in a spaceship some distance above the event horizon of a black hole, then he must also accelerate constantly in order to avoid falling into the hole. Using Newton’s equation of force (see Box 1 in Chap. 3), the necessary acceleration is a = GM/R², where G is the universal gravitational constant, M the mass of the black hole, and R the event horizon. With the Schwarzschild form of the horizon, R = 2GM/c², this becomes a = c⁴/(4GM), and inserting this into the formula for the Unruh temperature we obtain T_U = ℏc³/(8πkGM), i.e., the temperature of Hawking radiation. Such radiation is therefore a special case of the Unruh effect—one which holds for all observers in constant acceleration. For black holes, it so happens that the acceleration is that of gravity.
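As a numerical illustration, a Python sketch of our own (M denotes one solar mass, constants in SI units) shows that both routes give the same answer, about 6 × 10⁻⁸ K for a solar-mass black hole:

    import math
    hbar, k, c, G = 1.0546e-34, 1.3806e-23, 2.9979e8, 6.674e-11
    M = 1.989e30                                   # one solar mass, kg
    a = c**4 / (4 * G * M)                         # acceleration at the horizon
    T_unruh = hbar * a / (2 * math.pi * k * c)     # generic Unruh formula
    T_hawking = hbar * c**3 / (8 * math.pi * k * G * M)   # Hawking temperature directly
    print(f"{T_unruh:.2e} K  {T_hawking:.2e} K")   # both ~6.2e-8 K, equal by construction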
And it is not the only other such case. Some eighty years ago, Werner Heisenberg
and his student Hans Euler took up a proposal of Friedrich Sauter suggesting that
the vacuum as such cannot exist in the presence of sufficiently strong electric fields.
The underlying picture is quite similar to what happens at the surface of a black hole.
There the force of gravity tears a virtual pair apart, with one partner disappearing
into the hole, leaving the other in what before was empty space and now is no longer
empty. If a vacuum is subjected to a sufficiently strong electric field, then the field
can tear a virtual pair apart.
4.4 Lightning in Empty Space


The customary bolts of lightning that we are all familiar with arise when, through a
strong electric field difference between a cloud and the Earth, the atoms in between
are torn apart, ionized, establishing a conduction path for electricity to flow from
the cloud to the Earth. In empty space, if it were truly empty, this would not be
possible, because of the absence of any charge carriers. But the vacuum as a Dirac
sea does contain charge-carrying particles of all kinds, they are just submerged and
have to be brought to the surface, which requires immensely more energy than that
needed to ionize an atom. While the latter occurs for some x volts/cm, the lightning in vacuum requires some 10¹⁶ V/cm—which is why it has so far not been observed
in the laboratory. The process was described around 1950 in the context of quantum
field theory by the American theorist and Nobel Laureate Julian Schwinger and is
today generally known as the Schwinger effect. It predicts that the energy of the
electric field will be diverted to produce electron–positron pairs, as long as the field
is strong enough to do so. Schwinger obtained
 
    P(E) ∼ exp( −π m²c⁴ / (eEℏc) )
for the probability of such “spontaneous” pair production, where e is the electric
charge and m the electron mass. It is evident that the factor c⁴ forces the electric field
E to be huge in order to make the probability very different from zero. Attempts to
measure the effect continue, using strong lasers. We want to note here that it is yet
another instance of Unruh radiation. This gives for the radiation probability
   
    P(T) ∼ exp( −mc² / (kT_U) ) = exp( −2π mc³ / (ℏa) ),
where a is the relevant acceleration. The acceleration in turn is obtained from the Coulomb force on the emerging electron–positron pair, a = 2eE/m; using this, the Unruh probability becomes the Schwinger result. In terms of space-time lines, the situation
is illustrated in Fig. 4.5. From the moment they emerge from the sea, the electron
and the positron, initially partners in a virtual pair, have no contact with each other
and move apart in their own worlds.
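The size of the required field can be read off from the exponent itself; in this Python sketch of our own, the “critical” field is simply the one for which the exponent becomes of order one:

    hbar, c, e, m = 1.0546e-34, 2.9979e8, 1.602e-19, 9.109e-31   # SI constants
    E_crit = m**2 * c**3 / (e * hbar)         # field where the exponent is ~ -pi, in V/m
    print(f"{E_crit:.1e} V/m = {E_crit/100:.1e} V/cm")   # ~1.3e16 V/cm, as quoted above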

Fig. 4.5 The world lines of an electron and a positron, brought from virtuality to reality by a strong electric field: the Schwinger effect
We have thus encountered several instances in which the vacuum is not just empty
space; below that seeming nothingness is a vast complex array of virtual pairs, waiting
only for some energy in order to become real. In the case of black holes, the surface
gravity does it; for Unruh radiation, the engines of the spaceship provide it; and in
the Schwinger effect, it comes from the strong electric field. All these phenomena
are, according to generally accepted physics, substantiated predictions. Nevertheless,
various reasons have so far prevented their experimental verification. In contrast, the
virtual wave oscillations of empty space, responsible for the Casimir effect, have
after much search been observed—so, for the others, there is still hope.
There is yet another spooky issue that appears when the two partners of a quantum
fluctuation are separated. Initially invented by Einstein in order to disprove quan-
tum theory, it is today generally known as the Einstein–Podolsky–Rosen paradox,
although it is not necessarily, depending on your point of view, paradoxical.

4.5 Quantum Entanglement

In a more prosaic context, the phenomenon was nicely illustrated by the physicist
John Bell from Northern Ireland, working at the European Organization for Nuclear
Research, CERN, in Geneva. Bell had an Austrian collaborator, Reinhold Bertlmann,
who never wore two socks of the same color. So Bell concluded that when he saw a
red sock on one leg of Dr. Bertlmann, he would know, immediately, faster than the
speed of light, that the sock on the other leg would not be red. And so Bertlmann’s
socks became the symbol for entanglement…
Consider now a vacuum fluctuation consisting of an electron–positron pair. Elec-
trons have a property denoted as spin—one can picture them as little magnets with
a polar axis pointing either up or down (see Fig. 4.6). As long as no one makes
any measurement, the orientation of the spins remains undetermined, and in a given
measurement, each orientation is equally likely. The measurement determines the
orientation, and if a subsequent measurement is made, the result is confirmed. If there
is now a source of energy bringing the pair into reality, the relative orientation of
the two spins is preserved; since the vacuum as such has no spin, the two spins must
just add up to zero, i.e., they must point in opposite directions: they are somehow
“entangled”. Let them fly apart, as far as you want; they will remain entangled. If we
now make a measurement of one of the two, if the spin is found to point up, the other
one, far away and unmeasured, must point down, and if it is measured, it does point
down. So making the measurement somehow affects the distant partner and fixes its
spin. This “spooky action at a distance” was what Einstein thought to be impossible,
thus pointing out what he considered to be an internal contradiction of quantum the-
ory. He thought that the way out was to assume that both electron and positron had
an intrinsic fixed spin orientation, fixed even before either was measured, a property
simply unknown to us. Such “hidden variables” are forbidden by quantum theory,
and so the paper by Albert Einstein, Boris Podolsky and Nathan Rosen introducing
the problem in 1935 was considered a true challenge to the theory. It took quite a few
Fig. 4.6 The electron–positron pair in the vacuum has overall spin zero; when the pair is brought into existence, this overall spin must be preserved, even when the partners are separated. Note that only a measurement will determine the specific spin orientation; initially, both are equally likely. But the measurement of the positron’s (e+) spin instantaneously fixes the orientation of the distant electron

years until the crucial experimental test was carried out—and it left quantum theory
victorious.
The idea came from John Bell, inspired perhaps by the socks of Dr. Bertlmann. He
considered two detectors, each tuned to measure one of three different spin positions,
say 1, 2 and 3. Together, there are thus nine different possible detector settings. The
electron enters one detector, the positron the other. Each particle is assumed to have
an intrinsic preference for one of the three positions. When the passing particle hits a
preferred setting, the corresponding detector flashes a green light, if not, a red light.
How often do both detectors give the same result, i.e., both green or both red light?
Since both give the same color (red) if they are tuned to the same unfavorable or to two
different unfavorable settings, there are evidently only four possible configurations
for which the two detectors give different lights. In other words, there are five of the
nine configurations for which the detectors give the same color, four with different
colors. Thus if such an experiment were performed sufficiently many times, and if
the results were independent, the same colors should have appeared at least 5/9 of
the time. It is of course possible that the intrinsic program of the particle is such that
it likes all settings, or none. But this would just increase the number of same color
results. Hence John Bell’s famous inequality states that

the probability of getting the same color from both detectors is greater than or equal to 5/9,

and there should not exist any source of electron–positron pairs for which the detec-
tors flash, on the average, the same color less than 5/9 of the time.
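The 5/9 bound can be verified by brute-force counting. In this illustrative Python sketch (our own; the “program” encodes which of the three settings a particle likes, and entangled partners are assumed to carry the same program), no program yields same colors less than 5/9 of the time:

    from itertools import product
    worst = 1.0
    for program in product([False, True], repeat=3):    # all 8 possible hidden programs
        same = sum(1 for s1, s2 in product(range(3), repeat=2)
                   if program[s1] == program[s2])       # same color on both detectors
        worst = min(worst, same / 9)
    print(worst)                                        # 5/9 ≈ 0.556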
So what would it tell us if an experiment shows a smaller fraction of same color
signals? A somewhat more popular exposition of the effect was given by David
Harrison of the University of Toronto, Canada. He proposed considering a group of
people that are male or female, short or tall, blue- or brown-eyed. That leaves eight
possible combinations:
short men, blue eyes      short men, brown eyes      tall men, blue eyes      tall men, brown eyes
short women, blue eyes    short women, brown eyes    tall women, blue eyes    tall women, brown eyes

and for these Bell’s inequality says that the number of short men plus the number of tall persons with brown eyes, male or female, will always be greater than or equal to the number of men with brown eyes. Both in this example, and in the spin situation
above, the inequality is the consequence of fixed intrinsic properties of the objects,
people or electrons. Quantum theory denies that the objects it describes have such
features at the microscopic quantum level; the state a quantum system finds itself in
is created by the measurement of the relevant observable.
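The population version is a matter of counting and can be checked mechanically; a small Python sketch of our own, with randomly chosen category counts, never finds a violation:

    import random
    for trial in range(10_000):
        n = {(h, s, e): random.randint(0, 100)          # random count per category
             for h in ("short", "tall") for s in ("man", "woman")
             for e in ("blue", "brown")}
        short_men  = n[("short", "man", "blue")] + n[("short", "man", "brown")]
        tall_brown = n[("tall", "man", "brown")] + n[("tall", "woman", "brown")]
        brown_men  = n[("short", "man", "brown")] + n[("tall", "man", "brown")]
        assert short_men + tall_brown >= brown_men      # Bell's inequality holds
    print("no violation in 10,000 random populations")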
The work of Bell triggered numerous experimental studies, aimed at deciding whether Einstein’s doubts or quantum theory, with its noted “non-local” action-at-a-distance features, was correct. Instead of electrons and positrons, these experiments
generally used photons; the role of the spin of the electrons was then played by the
polarization of the photons. This is, roughly speaking, the direction, orthogonal to
the axis of flight, in which the electromagnetic wave is waving. If one causes a string
tied down at one end to oscillate, these oscillations occur in a given direction in a
plane orthogonal to the line of the string. And the point of interest is that if a pair of
such photons is produced by one specific system, such as an atom emitting them in
going from a higher to a lower excited state, then the polarizations of these photons
would be correlated, entangled. For example, if one of the pair was measured to be polarized up–down relative to its axis of flight, then the opposite photon would be polarized left–right, no matter how far away it might be at the time of the measurement; and it would be so whether it was subsequently measured or not. This feature allowed a formulation of Bell’s inequality similar to the one we looked at above for electrons.
If we want to show that the result of flipping a coin is 50:50 for heads and tails,
we have to flip many times: five heads in a row does not prove anything. On the
other hand, five hundred heads in a row presumably indicates a weighted coin. So
we have to have a large number of flips, and we have to assure that our coin is
perfectly balanced. Both features entered into the experimental attempts to check
Bell’s inequality, and today, with only some tiny specks of doubt, the conclusion is
that it is violated. In other words, the measurement of one determines the state of the
other, instantaneously and over a large distance—the largest up to now was more than
one hundred kilometers. So the quantum entanglement created in their production
is never subsequently destroyed. The spooky action at a distance, so distasteful to
Einstein, is a fact to be accommodated in today’s picture of the physical world.
It should be noted, however, that the existence of quantum entanglement does
not contradict relativity theory. The instantaneous setting of the polarization of the
second photon, achieved by the measurement of that of the first, cannot send any
information from one detector to the other. The result of the first measurement is
unpredictable, and only once it is made can the result be compared to that of the
second. And this comparison, the transfer of information, is subject to the laws of
special relativity.
At the end of this part of our story, we return to our solitary accelerating spaceship.
The electron it measures as the Unruh radiation in empty space is one partner of an entangled pair, a vacuum excitation brought to reality by the energy provided by the engines
of the spaceship. The other member of the pair passes through the Rindler horizon,
and after that it can no longer communicate with its former companion. Are they still
entangled? From the point of view of the inertial observer, that is difficult to decide:
he cannot send the result of his measurement to the spaceship. But the spaceship
crew can measure their electron and send this information to the inertial observer
in his stationary laboratory, to see if the separation by a causal horizon has affected
quantum entanglement. In the case of a black hole, however, it would seem that
the fate of the partner inside, falling to the singularity, must destroy any quantum
correlation.
5 The Smallest Possible Thing

So there must be an ultimate limit to bodies, beyond perception by our senses. This limit is without parts, is the smallest possible thing. It can never exist by itself, but only as primordial part of a larger body, from which no force can tear it loose.
Titus Lucretius Carus: De rerum natura, ca. 55 B.C.

The Greeks called them “atomos”, indivisible. The idea that all matter should con-
sist of discrete, similar, smallest units is magnificent in two ways. It implies that
the complexity we see around us can be reduced to an underlying simplicity, and it
implies as well that the immense complexity of our world can arise starting from
simple building blocks. In antiquity, the concepts of reduction and composition were
totally philosophical, not at all related to any observations, measurements or experi-
ments. Nevertheless, this contemplation of the nature of things led to conclusions that
reappeared two thousand years later in our modern formulation of physics. In ancient
Greece, the ideas started with Leucippus (or Leukippos) and his student Democritus,
in the fifth century B.C. Continuing their line of thinking, Lucretius argued that if
the smallest constituents of matter could exist as independent “particles”, if they
could somehow be isolated, then one would be led to ask what they are made of. To
avoid this dilemma, he ruled out their independent existence, allowing them only as
parts of something larger, from which they would never be able to escape. We are
now there: in the terminology of present elementary particle physics, the quarks as
ultimate constituents have to remain forever bound to other quarks, they can never
exist independently.
Modern atomism is generally considered to have started around 1800, with John
Dalton, who was professor at a Quaker college in England. He defined matter to
consist of elements, pure substances, and these in turn were made up of small par-
ticles, “atoms”. The atoms of any given element were taken to be identical in size,
mass and whatever other properties they might have. In contrast, atoms of different
elements would differ in size, mass and other features. In accord with their Greek
name, atoms were assumed to be indivisible; they were to be the final constituents
of matter, and they could not be created or destroyed. Combining atoms of different
elements in fixed ratios, one obtained chemical compounds, and in chemical reac-
tions, the atomic combinations of these compounds would be rearranged, i.e., sepa-
rated or recombined in different ratios.
The word “atom” for the ultimate constituents in Dalton’s view of matter has
remained in use until today, even though it was from the outset problematic on at least
three counts. Dalton’s atoms have size and mass, so that one can ask the question of
Lucretius: what are they made of? Since there were already at the time of Dalton many
different elements, there would have to be a large number of different species of atoms
(more than a hundred by today’s count). This was also not what Lucretius was talking
about: he guessed that there might be three different fundamental particles—which
again sounds remarkably modern. And finally the masses of the different atoms, the
atomic weights, satisfied curious whole-number ratios: carbon to hydrogen was 12,
nitrogen to hydrogen 14, oxygen to hydrogen 16, and so on. That suggested that
perhaps the atoms of the more massive elements consisted of some combination of
hydrogen atoms.
In the subsequent years, new elements were being discovered at the rate of about
one a year, and they generally followed the pattern of having atomic weights that
were integer multiples of that of hydrogen. This suggested organizing the different
elements in an orderly fashion, starting with hydrogen and then going up with the
atomic weight measured in units of that of hydrogen. The result was what we today
call the periodic table of the elements, first introduced in 1869 by the Russian chemist
Dmitri Mendeleev (Fig. 5.1); at that time, some 60 elements were known. Inciden-
tally, Mendeleev brought order not only to the world of elements; at one time in his
career he was director of the Russian Bureau of Weights and Measures, and in this
capacity he defined the standards for Russian vodka (not less than 40 % alcohol…),
which apparently still hold today. The periodic table led to the prediction of many
new elements; moreover, it was found that neighboring elements often shared similar
properties. But above all, it presented a challenge to understanding the underlying
structure.
The idea of atoms as being indivisible thus became less and less credible. The
final blows came around the beginning of the twentieth century, when the British
physicist J. J. Thomson announced that, during the investigation of cathode rays,
he had discovered a new particle, very much smaller than a hydrogen atom and
negatively charged. From studies using various different elements, he concluded
that such particles, subsequently named electrons, must be contained in all atoms.
Some ten years later, Ernest Rutherford discovered that atoms in fact consist of
a positively charged nucleus somehow coupled to the electrons. Rutherford was
then working at the University of Manchester in England, and he showed that the
scattering of so-called alpha particles, helium atoms from which both electrons had
been removed, on a gold target could be explained by their electric interaction with
a positively charged gold nucleus. This nucleus was also considerably smaller than
the corresponding atom, but gave rise to most of its mass and was much larger than
electrons. So the atom was not, as J. J. Thomson had first thought, a plum pudding
of positive matter with negative electrons as raisins in it, but rather something like a
Fig. 5.1 Mendeleev’s periodic table of 1869

planetary system, with the nucleus as a sun, the electrons as planets, lots of empty
space in between, and electromagnetic attraction in place of gravity.
Since the atoms of the different elements increased in mass, compared to that of
hydrogen, it seemed natural to associate the nucleus with an increasing number of
more massive positive charges, eventually named protons. Such nuclei would then be
encircled by a swarm of electrons, sufficient in number to neutralize the atom. This
led to a problem already realized by Rutherford. The number of positively charged
protons, as measured in his scattering experiments, did not suffice to account for
the atomic masses of atoms heavier than hydrogen. In other words, the nucleus must
consist of more than the number of protons he had determined. The puzzle was finally
resolved when James Chadwick, in 1932 at the University of Cambridge in England,
identified a neutral particle of about the proton mass, the neutron. So the modern-
day atom had become definitely composite, and its building blocks were known:
a central nucleus consisting of nucleons—protons and neutrons—which essentially
defines the atomic weight, surrounded by a cloud of electrons, to neutralize the atom
as a whole (Fig. 5.2).
Such a model of atomic structure had, however, one serious shortcoming. Planets
can rotate around the Sun “at no expense”, they are kept from flying away by the
gravitational attraction of the Sun and stopped from falling into the Sun by the
centrifugal force of their orbital motion. If for some reason a planet were to slow
Fig. 5.2 Planetary model of the atom

down, it would descend into a smaller orbit. If an orbiting space shuttle wants to return
to Earth, it turns on its engines “in reverse” to reduce its rotation speed and spiral
into a landing. Rotating electrons, on the other hand, because they are electrically
charged, emit electromagnetic radiation. By their rotation, they would continuously
radiate, lose energy, and eventually fall into the nucleus. The atom would collapse.
So how could one stop the electrons from radiating away all their energy? What
happens in the process of radiation? Much of natural science has come into being
because the “right” question was asked. If you’re insistent enough in asking why the
sky is dark at night, you arrive at the Big Bang origin of the universe. As it turned out,
the nature of radiation and the stability of atoms became similarly fruitful questions.
The search for the right interpretation of radiation started a new way of thinking,
born in December 1900, when Max Planck presented his new law of radiation at
a meeting of the German Physical Society in Berlin. It was based on his studies
of thermal radiation from so-called black bodies, having opaque and non-reflective
surfaces, and it allowed electrons to radiate energy only in discrete, finite packages,
called quanta. Correspondingly, the energy an electron could have was parceled into
discrete units, with a minimum size per quantum, and then multiples thereof. The
universal measure specifying this size is today called the Planck constant, h. The
quanta of radiation were so small that on a large scale, the granularity was quite
unnoticeable. The light we see does not appear to consist of little bunches. But on an
atomic level, their size became crucial.
Planck’s discovery had immense consequences on a conceptual level, some of
which neither he himself nor such great contemporaries as Einstein were willing
to accept. It basically meant that on sufficiently small scales, things don’t change
smoothly, but only in discrete steps or jumps. On the scales of our visible world, this
did not matter: a particle could be at rest, it could be slowly set into motion, with
all speeds possible. Our macroscopic world is, as we would call it today, in good
approximation analog. But on an atomic scale, this is no longer true. Everything
happens in discrete units, nature becomes digital.
This became very evident when it was found that it was not really possible to
separate the ideas of particle and of wave. Photons sometimes acted like particles,
hitting an electron and kicking it out of the atom. And electrons sometimes behaved
like waves: an electron beam sent through a suitable apparatus shows something like
optical diffraction patterns. Now if electrons indeed can be treated as waves, their
discrete energy structure implies that only waves of a discrete set of wavelengths
can exist. And if we now picture an electron orbiting around a nucleus, only certain
orbits are possible: those whose circumferences can be constructed as multiples of
the discrete wavelengths. As a result of such wave–particle duality, we obtain the
level structure of atoms.
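The standard Bohr-model arithmetic makes this concrete. In this Python sketch of our own, the quantization condition just described (a whole number of wavelengths per orbit) is combined with the Coulomb attraction of the nucleus:

    import math
    hbar, m_e, e, eps0 = 1.0546e-34, 9.109e-31, 1.602e-19, 8.854e-12   # SI constants
    for n in (1, 2, 3):
        r_n = 4 * math.pi * eps0 * (hbar * n)**2 / (m_e * e**2)        # allowed radius
        E_n = -m_e * e**4 / (2 * (4*math.pi*eps0)**2 * hbar**2 * n**2) / e  # energy, eV
        print(f"n={n}: r = {r_n:.2e} m, E = {E_n:.2f} eV")
    # n = 1 reproduces the Bohr radius, ~5.3e-11 m, and the familiar -13.6 eV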
Within thirty years of Planck’s discovery, the impact of his revolutionary finding
on the mechanics of particles and their electromagnetic interactions had led to a revi-
sion of physics: quantum mechanics and quantum electrodynamics replaced classical
mechanics and electrodynamics. They were formulated and solved, their experimen-
tal consequences elucidated, by the great physicists of the last century: Niels Bohr,
Max Born, Louis de Broglie, Paul Dirac, Werner Heisenberg, Erwin Schrödinger and
numerous others. Even Albert Einstein, later so unhappy about God playing dice,
made an essential contribution to this development by explaining how radiation can
excite or break up atoms—the so-called photoelectric effect we just noted above.
It was for this, not for his unique pioneering work in relativity theory, that he was
awarded the Nobel Prize in physics.
So once again, a “doll in a doll” had appeared in physics. We have seen that
the physics of relativity, necessary at large scales and high velocities, contains the
limit of Newtonian mechanics valid in our everyday world. And now each quantum
theory contained a “classical limit”, again bringing us back to our familiar world.
If you didn’t put your glasses on, the classical theory was fine. Only when it really
came down to the minutest detail did quantum effects come into play. But while in
relativity physics our everyday world is the limit of small scales and low speeds, in
quantum theory it becomes the limit of large scales and long times. And one caveat
has remained until today: the extension of general relativity to its quantum form has,
in spite of great efforts by the leading physicists of our time, not been achieved.
Quantum gravity is still the challenge for the next generation.
One fundamental feature of the new physics, as already mentioned, was that
the microscopic world now had a discrete structure; everything came in quanta. A
second such basic feature, intimately related to quantization, was that one could
not observe things with arbitrary precision. We have already encountered this in the
uncertainty principle established by Werner Heisenberg: if the speed with which a
particle moves can only change abruptly, by jumps, then its position can also not be
specified precisely. We thus expect that the electrons in the cloud around the nucleus
are not exactly like the orbits of the planets around the sun. Each orbit corresponds to
an electron having a certain energy, and if energies are quantized, so are the orbits. If
an electron in the cloud interacts with an incoming photon, it cannot smoothly move
into a larger orbit: it can only jump into the next-level orbit; for that, the photon has
to have enough energy. And if it is in an excited state, it cannot smoothly slow down,
like the orbiting satellites. It can only “fall” down to one of the lower orbits, and the
photons emitted in these discrete transitions are the spectral lines present in the light
from stars. Moreover, the number of electrons in each orbit is limited, which brings
us to the third revolutionary feature of quantum physics, alongside with quantization
and measurement uncertainty.
This third feature was discovered by the Austrian physicist Wolfgang Pauli, when
he tried to understand the structure of the different atoms: the exclusion principle.
All the particles making up the mass in our world, protons, neutrons and electrons,
insist on retaining their individuality when they meet others of the same kind. So
what specifies their individuality? They have their specific mass, they all have a
specific electric charge, +1, −1, or zero; and they all have an intrinsic spin which
can take on two values, i.e., they are little magnets pointing either up or down. In
the mathematical classification scheme of these structures, up/down comes out as
spin one-half, s = 1/2, since the number of possible orientations is given by 2s + 1.
And if we now consider two otherwise identical particles, say two electrons, and
try to bring them into the same orbit around a nucleus, then this turns out to be
possible only if the spin of one points down, the other up. It is not possible to have
two completely identical electrons in the same orbit—which means that each orbit
can contain at most two electrons, one with its spin pointing up, the other with it
pointing down. Only particles with a spin structure of this type have to obey the
exclusion principle; they are generally denoted as fermions, in honor of the great
Italian physicist Enrico Fermi, who left an indelible impact on theoretical as well as
on experimental physics, and beyond that: he also played a major role in establishing
the first man-made nuclear chain reaction, which eventually became the basis for
the construction of the atomic bomb. Since protons, neutrons and electrons are all
fermions, fermions are the species of particle that make up matter. We shall later find
that there exist other particles not having such a spin structure and thus not subject to
an exclusion principle. We have in fact already encountered the best known of them,
the photon as mediator of the electromagnetic interaction. Such particles are referred
to as bosons, named after the Indian theorist Satyendranath Bose, who together with
Einstein studied their field properties. And the role of the photon is the one quite
generally played by bosons, as we shall see shortly: fermions are the fundamental
constituents of matter, and bosons mediate their interaction. The exclusion principle
for fermions as the basic constituents of matter is thus what assures the stability of
the universe; were it not operative, then gravity could contract everything without
any limit: the world would collapse.
In particular, one can also use the exclusion principle to predict the entire struc-
ture of the electron orbits around the nucleus of the atom. Heavier atoms contain
more positive protons and hence also more electrons; the inner orbits are thus filled,
and the more electrons there are, the larger the outer orbits have to be. It is thus
evident that heavier atoms are larger than lighter ones, and since transitions can
occur only between the discrete levels of the different orbits, one can also predict
the entire spectrum, the frequencies of all light emitted from radiating matter. The
discrete energy levels associated with the different orbits imply that the result would
be discrete lines, spectral lines, in the frequency distribution of the observed light.
On the atomic level, we thus do understand the structure of matter—nuclei of given
charge, containing a specified number of protons and neutrons, surrounded by elec-
trons in well-specified orbits.
Nevertheless, our search for the smallest possible thing has turned out to be less
than a real success. The “indivisible” atoms consist of nuclei and electrons; they are
certainly divisible, electrons can be removed to create positively charged ions. The
nuclei are made up of positively charged protons and uncharged neutrons; therefore
they have an overall positive charge depending on how many protons they contain,
and they are also divisible. The nucleus is surrounded by a cloud of negatively
charged electrons, in number equal to that of the protons, so that the atom as a whole
is electrically neutral. It is held together by the electromagnetic attraction between
the positive nucleus and the negative electrons. Nucleons and electrons have size,
mass and an independent existence, so that they don’t really qualify as the ultimate
constituents of the form Lucretius had imagined. Moreover, this picture contains
another really serious problem: the positively charged protons will repel each other
by electromagnetic interaction. What keeps them together? Just as the dark sky at
night led to the Big Bang, and the stability of atoms to quantum mechanics, this
problem led to the discovery of nuclear forces. And when the answer was found, it
turned out that the question was really of a similarly simple nature.

5.1 Why Does the Sun Shine?

The structure of the universe at different scales is determined by different forces. Gravitation is the most universal of all forces—it affects all things, even light, and
it is always attractive. It is the force that holds things together, Earth and Moon,
our solar system, our galaxy, and it is our only resort against that mysterious drive
of the Big Bang resulting in the expansion of space, which started with the creation of
the universe. The electromagnetic force provides light and thus allows us to see the
world we live in. When the world as we know it first appeared, there were positive
and negative charges, and like charges repelled, unlike attracted. Photons were the
messengers of this communication, and they became our tool of vision. The atoms, as
basic building blocks of matter, consist of a positive nucleus surrounded by a cloud
of negative electrons. This seemingly coherent picture contains one big problem
from the outset: what keeps the positively charged protons together in the nucleus?
Electric interaction would make them fly apart. And what binds the neutral neutrons
to the protons to form such a nucleus? So it seems that another force is needed to
achieve nuclear binding.
Gravitation is much weaker than the electric force: two positively charged particles
of proton mass will repel each other, overriding their minute gravitational attraction.
The force binding them together into a nucleus must therefore be very much stronger
than the electromagnetic repulsion, creating a hierarchy of forces: nuclear,
electromagnetic, gravitational. Actually, the need for such a third, much stronger
force had been lurking in the background for ages—lurking not in the dark, but
rather in the light. What provides the power to make the Sun and the stars shine?
We have already noted that stars are born when gravitation compresses gaseous
matter in the universe into ever denser media. The gas to be found “out there” is largely
hydrogen, the simplest of the atoms. When such a hydrogen gas is compressed more
and more, its temperature increases, and this heat could make a star shine. That
was the predominant explanation given by theoretical physicists in the nineteenth
century, with Hermann von Helmholtz in Germany and Lord Kelvin in Britain as the
main proponents. The light of the Sun was thus assumed to be gravitational energy
converted into radiation. But there was a problem. The physicists of the time could
use the measured solar radiation on Earth, combine it with the mass and the size of
the Sun, to calculate the age of the Sun. And they got some 30 million years.
On the other hand, biologists and geologists came to a different conclusion. Looking at the evolution of both Earth formations and living beings, they—with Charles Darwin, the author of the Origin of Species, as the main proponent—concluded that the Earth must be at least 300 million years old. The physicists, however, had no source of energy that could have provided sunshine for such a length of time, and so Lord Kelvin suggested that not only was Darwin’s estimate of the Earth’s age wrong, but his entire theory of evolution as well. The final solution to the puzzle shows that, contrary to
what many believed in the more mechanistic period of human thinking, the seem-
ingly more rigorous arguments—energy conservation vs. the evolution of frogs—are
not always the right ones in the end.
The answer was essentially another outcome of the famous E = mc² of Albert Einstein, noting that mass is a form of energy. In 1919, the British chemist Francis William Aston discovered that the mass of four hydrogen nuclei, i.e., of four individual nucleons, was greater than that of a helium nucleus, being a bound state of four nucleons. So the binding—today we speak of fusion—of hydrogen nuclei to form a helium nucleus liberates energy. In fact, it turns about 0.8 % of the mass into energy, and this, as was pointed out almost immediately by Sir Arthur Eddington, would allow the Sun to shine for about 100 billion years. With the age of the universe at 14 billion years, that assures us sunlight for quite a while yet. . .
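Both numbers are easy to reproduce; in this Python sketch of our own, the atomic masses and solar data are standard values, and, like Eddington, we pretend the Sun could burn all of its hydrogen:

    c = 2.9979e8
    m_H, m_He = 1.00783, 4.00260          # atomic masses in units of u
    f = (4 * m_H - m_He) / (4 * m_H)      # fraction of mass turned into energy
    print(f"{f:.2%}")                     # ~0.71 %, the "about 0.8 %" of the text
    M_sun, L_sun = 1.989e30, 3.828e26     # solar mass (kg) and luminosity (W)
    t = f * M_sun * c**2 / L_sun / 3.156e7         # lifetime in years
    print(f"{t:.1e} years")               # ~1e11 years: Eddington's 100 billion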
Nevertheless, the starting point of the nineteenth century physicists, with gravi-
tation to create heat, was in principle correct. On the other hand, today’s best value
for the age of the Earth is 4.5 billion years, even older than what the geologists and
biologists had determined. The trick of nature was that when the nucleons were com-
pressed to extreme densities, a new “nuclear” force took over, binding four nucleons
to a new nucleus of lower weight. The energy liberated in this process makes the
light for all life on Earth.

5.2 The Strong Nuclear Interaction

The new force of strong interactions sets in only when nucleons get very close to
each other. A proton and a neutron one picometer (10⁻¹² m) apart do not even see each other. But at a distance of one femtometer (10⁻¹⁵ m), their interaction can result
in sunlight or in a hydrogen bomb. Strong really means strong.
The force of gravitation is always attractive, it decreases with the square of the
distance of separation, and it is effective over very large ranges: it keeps our Earth in
its solar orbit and holds galaxies together. The electric force has the same dependence
on separation, but it is attractive between opposite and repulsive between identical
electric charges. Initially, little was known about nuclear forces, except that they act
only at very short distances, but that they are then very strong. In physics, much
of the twentieth century activity was devoted to learning more about them. Since
relativity theory forbids instantaneous action at a distance, any force must have a
messenger, travelling from object A to object B to carry out the interaction. For the
electromagnetic force, the photon is the messenger, and its speed, the speed of light,
the fastest way to interact. Its “mass” must be zero, since no massive object can
travel at the speed of light. And barring interference, its range is arbitrarily large—
we see the light from very distant stars, and we can communicate by radio over
large distances. The photon has to get “there”, however, so the interaction is not
simultaneous, it takes time. The situation is illustrated in Fig. 5.3; such space-time
diagrams have turned out to be very useful to picture elementary particle interactions,
where they were introduced by the American theorist Richard Feynman.
The range of the nuclear force, on the other hand, is only of the size of the nucleon:
two nucleons much further apart than a nucleon diameter don’t interact any more.
Using this information, the Japanese theorist Hideki Yukawa predicted in 1935 the
existence of a new, strongly interacting particle. He pictured the mechanism for
strong interactions to be of the same form as found in electromagnetic interactions,
i.e., with a messenger conveying the interaction signal; see Fig. 5.4. To obtain a short
range, however, this messenger had to be massive, since general arguments relate
the range of the force to the inverse of the messenger mass. Yukawa estimated the
necessary mass to be about 1/10 of the nucleon mass. Since this put it somewhere
between electron and nucleon, he called it meson, intermediate. Today, we call it the
π-meson, or pion. Was this just a formal trick, to describe the nuclear interaction, or
was the pion real?
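Yukawa’s estimate can be retraced in one line; the standard argument, sketched here in Python (our own), sets the range of the force equal to the messenger’s Compton wavelength:

    hbar, c = 1.0546e-34, 2.9979e8
    R = 1.4e-15                            # range of the nuclear force, about 1 fm
    mc2 = hbar * c / R / 1.602e-13         # messenger rest energy, converted to MeV
    print(f"mc^2 ~ {mc2:.0f} MeV")         # ~141 MeV; the pion indeed has 139.6 MeV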
Fig. 5.3 Two airplanes A and B are approaching each other; at a certain point a, A turns back and sends a radio signal (photon) to B, telling it to turn back as well, which it does at point b. The signal sent by A at time ta travels at the speed of light and is received by B at a later time tb
Fig. 5.4 A proton and a neutron are bound together, i.e., prevented from separating, by the exchange of a messenger meson, which acts like a string holding the two nucleons together

Before we turn to the experimental search for the pion, we have to return for
a moment to Einstein’s E = mc². If mass is energy, then it seems possible to
create or destroy particles, and we have encountered such phenomena already in our
consideration of the Dirac sea of unborn particles. The surface gravity of a black
hole, in the case of Hawking radiation, or the engines of the accelerating space ship,
for Unruh radiation, provide the energy needed to bring a virtual pair out of the sea,
into reality: a pair of massive particles is created, energy is converted into mass. And
in high-energy collisions of strongly interacting particles, this phenomenon assumes
a very prominent role, since the collision energy, if sufficiently large, could lead to
the creation of new particles.
To study the interaction of such minute objects as nucleons, physicists had (and still
have) basically one tool: hit them against each other and see what happens. The big
particle accelerators of today are the final outcome of this approach. In the first half
of the last century, before the advent of accelerators, there was only one possibility:
cosmic rays. The interstellar space contains many solitary travellers, particles of
all kinds emitted in some distant collisions or explosions. Some of these reach the
Earth, where most of them collide with nuclei in the atmosphere and are stopped. But
some do make it down to our world. If we install here a scintillator, i.e., a material
that will emit light when it is hit by such a particle, then bursts of light will signal
the arrival of these interstellar travellers. At the European Organisation for Nuclear
Research (CERN) in Geneva, Switzerland, the floor of the reception center is made
of such scintillating material, so that guests waiting to be admitted stand on a ground
sparkling with the light of these glowworms from outer space. In the 1930s, physicists
developed the technique of using photographic emulsions to study such collisions. To
avoid atmospheric loss as much as possible, the photographic plates were carried to
some high mountain or sent even higher in balloons. And finally, in 1947, the British
experimentalists Donald Perkins and Cecil Powell established that nucleon–nucleon
collisions indeed produce the meson predicted by Yukawa ten years before. The pion
they found was seen to exist in three charge states, π⁺, π⁻ and π⁰. This gave the pion
exchange between the nucleons a new possibility: the proton could emit a positive
pion, turning into a neutron, and the neutron, by absorbing the incoming pion, would
become a proton. In contrast to the simple transfer of information between two given partners, the exchange of massive charged messengers made it possible to modify the state of both sender and receiver.
Today, the study of nucleon–nucleon collisions is a routine matter, using proton
accelerators. The great-grandfather of these was the cyclotron, first built by Ernest
Lawrence in California in 1929. The basic idea is to accelerate charged particles by
means of electric pulses, while keeping them on a circular orbit with the help of
magnets (Fig. 5.5). The first accelerator reached an energy of one million electron
volts (1 MeV = 10⁶ eV). In comparison, the most powerful accelerator in the world
today, the Large Hadron Collider (LHC) at CERN in Geneva, provides a collision
energy of 7 TeV; a tera electron volt is one million million electron volts (1 TeV =
10¹² eV). Making charged particles, such as protons or electrons, move in a circular
path causes them to emit electromagnetic radiation—we recall, that was one of the
problems in determining the structure of atoms and eventually led to the advent of
quantum mechanics. But here it means that the smaller the size of the orbit, the
more they radiate and thereby the more energy they lose. So one has to optimize
the kinetic energy of the beam versus the size of the machine, and that means the
bigger the better, albeit also the more expensive. While the cyclotron of Lawrence
was only a little more than a meter in circumference, the LHC reaches 27 km.
And while Lawrence built his machine at the Berkeley physics department with the
help of graduate students, the LHC cost around three billion Euros, provided by a
collaboration of more than a hundred nations, and is put to use by more than five
thousand physicists from all over the world.
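A back-of-the-envelope sketch shows why higher energy forces a larger ring; it uses the textbook bending relation r = p/(qB) and assumes the standard LHC dipole field of about 8.3 T, a value not given in the text.

```python
# Hedged estimate of the LHC bending radius from r = p/(qB), with the
# momentum of an ultrarelativistic proton taken as p = E/c.
E = 7e12 * 1.602e-19   # 7 TeV per proton, converted to joules
c = 2.998e8            # speed of light, m/s
q = 1.602e-19          # proton charge, coulombs
B = 8.3                # assumed dipole field, tesla

p = E / c              # proton momentum, kg m/s
r = p / (q * B)        # bending radius, meters
print(f"bending radius ~ {r/1000:.1f} km")
# ~2.8 km of bending radius; with straight sections, the ring is 27 km around
```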
Whether through cosmic ray interactions or in accelerator experiments, a most
striking feature appeared when two protons collided at high enough energy: the
interaction led to a burst of newly produced particles. This does not mean that the
protons were broken up into fragments—they were still there in the shower that
emerged, but in addition, so were many other particles. It is in this way that Yukawa’s
meson was discovered; the initial state of two colliding protons led to a final state of
two nucleons plus one meson. And the higher the collision energy, the more mesons

Fig. 5.5 Schematic top view of a cyclotron; the (blue) magnets keep the circulating charged particles in their orbit, the (red) voltage systems provide acceleration. The two components have to be coordinated to keep the particles on track
would appear; at the LHC, a single proton–proton collision produces on the average
more than fifty additional (“secondary”) particles. So such collisions are something
like a solar engine in reverse: in the sun, mass is turned into energy, and here, collision
energy is converted into massive particles.
Obviously, one tried to see what kind of particles made their appearance in such
collisions. Were they mesons or nucleons, what was their mass, their electric charge,
their intrinsic spin? And it turned out that these collisions had indeed opened a
Pandora’s box of different beasts. Now, besides protons and neutrons, there appeared
multitudes of excited nucleonic states, and the same held for mesons. Most of them
were very short-lived, quickly decaying into the more basic pions and/or nucleons;
we shall return to this feature shortly. But they were all definitely identifiable as
objects of given mass, charge and spin. Every year, every self-respecting particle
physics laboratory could claim the discovery of quite a number of new “elementary
particles”. The great Wolfgang Pauli is supposed to have said, “if I had foreseen
this, I would have gone into botany”. The resulting situation could perhaps best be
summarized in the words of Yogi Berra, coach of the New York Yankees baseball
team: “it’s déjà vu all over again”. Looking for “the” atom, one found more than a
hundred, and the same kind of proliferation now occurred for elementary particles.
Ever-growing lists were published by the Institute of Physics—today, there must be
thousands of entries in the Particle Data Group Booklet, and no one is keeping count anymore.
What they publish is, in a way, the periodic table of elementary particles, which, by
this mere fact, are not so elementary.
Some regularities distinguished elementary particles from atoms. One found
nucleons and heavier nucleon-like states that would eventually decay into a nucleon
and one or more mesons. But the collisions never made particles decaying into two
or more nucleons; the relative energy of the constituents was always far too large
for something like a binding of nucleons to nuclei. And the elementary particles did
have a little bit of hair, in contrast to black holes: besides mass, size, spin and elec-
tric charge, they had a nucleonic charge (did the particle decay into a nucleon and
mesons, or only into mesons?), which was also conserved in the reaction. Moreover,
in the course of the years, it was found that even mesons could differ in nature: for
the pion, one discovered a strange counterpart, pion-like in all ways, but heavier and
with a different decay structure: the K-meson, or kaon. This required the invention
of a new “charge”, appropriately called strangeness, and in the collision process,
this charge, just like its electric and nucleonic partners, was always conserved. As a
result, there also had to be strange and antistrange nucleons, the hyperons, such as
the neutral Λ or the Σ with three charge states ±1, 0. These hyperons were formed
when the collision of two nucleons led to the production of an additional kaon,
p + p → p + K + hyperon. With a strangeness of −1 for the hyperon and +1
for the kaon, this ensured vanishing overall strangeness in such associated pro-
duction processes, since the initial state of the two protons carried no strangeness.
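This bookkeeping can be spelled out in a short sketch; the quantum-number assignments follow the text, with the standard values assumed for the Λ hyperon.

```python
# Each hadron carries (electric charge Q, baryon charge B, strangeness S);
# all three sums must agree between initial and final state.
particles = {
    "p":      (+1, +1,  0),   # proton
    "K+":     (+1,  0, +1),   # kaon
    "Lambda": ( 0, +1, -1),   # neutral hyperon (standard assignments)
}

def conserved(initial, final):
    total = lambda names: tuple(sum(q) for q in zip(*(particles[n] for n in names)))
    return total(initial) == total(final)

# associated production p + p -> p + K+ + Lambda:
print(conserved(["p", "p"], ["p", "K+", "Lambda"]))  # True
```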
Protons, neutrons and the different hyperons were then grouped together as baryons,
“the heavy ones”, and the set of all the “elementary particles” produced through
strong interactions, mesons and baryons, as hadrons, “the strong ones”. In the spirit
of Dirac’s sea of all the unborn particles, for each hadron, there exists an antihadron,
with all charges reversed. It thus became possible to think of antimatter, consisting
of antinuclei made up of antinucleons, i.e., antiprotons and antineutrons, surrounded
by antielectrons, i.e., by positrons. Such antimatter is very difficult to produce, of
course, since when an antinucleon meets a nucleon, of which there are many in our
world, the two almost always annihilate, destroy each other, to form many mesons.
For cosmologists, therefore, the asymmetry of our world, made of matter and not of
antimatter, poses a major unsolved problem. Nevertheless, in 1995, antihydrogen
was produced for the first time, at CERN in Geneva.
One other feature provides a definite distinction between the different hadrons, and
this feature, moreover, shows that there is another, weaker side to nuclear interactions.
While protons are stable particles that exist forever even in an isolated state, this is
not the case for any of the other hadrons. They all have a finite lifetime, some
shorter, some longer. The first evidence for this came quite early, with the discovery
of radioactivity by the French physicist Henri Becquerel at the end of the nineteenth
century. Atomic nuclei can exist in the form of different isotopes, having the same
charge but different mass. Thus a cesium nucleus is defined as having 55 protons,
i.e., an electric charge of +55. The most common form has 78 neutrons, so that the
nucleus consists of 133 nucleons altogether. There are, however, also less common
forms, cesium isotopes, with a nucleus containing either more or fewer neutrons. It
was found that a specific cesium isotope of 82 neutrons would decay after some
time into a barium atom, containing 56 protons and 81 neutrons, i.e., the new atomic
nucleus now had one more proton. In this decay the overall number of nucleons in
the nucleus remained constant, but the charge increased by one. It appeared that for
stable nuclei, the ratio of protons to neutrons had to lie in a given narrow range of
values—too many or too few neutrons caused decay into another element. However,
the overall electric charge of the system, nucleus plus decay products, has to remain
constant, and the decay is in fact accompanied by the emission of an electron. So
somehow it must be possible for a neutron to turn into a proton and an electron,
n → p + e− .
This form of radiation, now called beta-decay (since in earlier stages, electrons were
referred to as beta-particles), was indeed observed, and when later on it became
possible to produce isolated neutrons, it was found that they are really unstable, with
a mean lifetime of about 15 min. After that time, on the average, they turn into a
proton and an electron, some earlier, some later. Inside a stable nucleus, such as for
cesium with 78 neutrons, this decay is not possible, since the binding of the nucleons
reduces the available energy per neutron to below what is needed to produce a proton
and an electron. Two aspects made this neutron decay mysterious. Proton, neutron and
electron are all particles of spin one-half—so how could a proton and an electron form
a neutron? The neutron spin definitely ruled out the possibility that it is a bound state
of a proton and an electron. Moreover, given the mass of the neutron (939.57 MeV), of
the proton (938.27 MeV and thus 1.29 MeV lighter), and of the electron (0.51 MeV),
the energy of the electrons emitted in neutron decay should have a fixed value of
some 1.2 MeV, the energy difference of the two masses as calculated in relativistic
kinematics. But instead it was observed that the electrons were emitted over a whole
range of energies, from zero to 1.2 MeV, with a distribution peaked well below this
value. The resolution of this puzzle is yet another great achievement of Wolfgang
Pauli, who correctly concluded that in the decay there must be another, albeit very
elusive partner, which Enrico Fermi later named neutrino, “little neutron”. To get
the overall spin right, it had to have spin one-half as well, and to get the spectrum
right, it had to be essentially massless. The neutron decay is therefore apparently
given by
n → p + e− + ν,
with ν denoting the neutrino. We will find that there will in fact be further corrections
to the decay form before we finally have it right…But we can now already conclude
that besides strong, electromagnetic and gravitational forces, there is yet another
kind.
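The kinematic puzzle can be made explicit with the masses quoted above: if the decay really were two-body, n → p + e−, the electron energy would be completely fixed. A minimal sketch:

```python
# Two-body decay n -> p + e- in the neutron rest frame: the electron
# energy is then fixed by the masses alone. Masses in MeV, as quoted above.
m_n, m_p, m_e = 939.57, 938.27, 0.51

E_e = (m_n**2 + m_e**2 - m_p**2) / (2 * m_n)   # total electron energy
print(f"E_e = {E_e:.2f} MeV, kinetic = {E_e - m_e:.2f} MeV")
# -> about 1.3 MeV total; the observed continuous spectrum was the puzzle
```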

5.3 The Weak Nuclear Interaction

This force is responsible for the neutron decay as above, and it was denoted as weak,
since it allowed the neutron to live on the average for 15 min, while a typical unstable
hadron decaying by the strong interaction, such as the excited nucleon Δ, decays
into a ground state nucleon and a pion on the average in the exceedingly short time
of about 10⁻²³ s. Further studies showed that the messenger of the strong nuclear
interaction, the pion, shares the fate of the neutron: it cannot exist in isolated form for
very long times and was found to decay into what looked like a heavy electron, now
called the μ, or muon. The muon differs from the electron only by its mass, which
is around 100 MeV and thus two hundred times that of the electron; just like electrons
and positrons, there are positive and negative muons. Again, to keep the spin right, the
decay is accompanied by a neutrino; for a positive pion, we thus have
π+ → μ+ + ν.
It would seem reasonable that the neutrino emitted here is the same as that obtained
in neutron decay—that a neutrino is a neutrino, no matter where it came from—
but it turned out that this is not the case. Electrons are accompanied by electron
neutrinos, muons by muon neutrinos, and these neutrino species are not the same, they
remember their origin: we have to write νe and νμ . That there are indeed two distinct
neutrino species was discovered at Brookhaven National Laboratory in New York
by Leon Lederman, Melvin Schwartz and Jack Steinberger—and it brought them
the 1988 Nobel Prize in physics. Moreover, neutrinos as spin one-half objects have
their antiparticles, antineutrinos: ν̄, while μ+ and μ− are particle and antiparticle,
just like electron and positron.
While the lifetime of the neutron is a quarter of an hour, that of a charged pion
is only some 3 × 10⁻⁸ s. This raises an obvious question: how can we call such a
fleeting object a particle? One answer is given by noting that in 3 × 10⁻⁸ s, light
travels nine meters. So if we can photograph a pion passing through a photographic
detection apparatus at 1/3 the speed of light, it will leave a track of three meters before
decaying. And at yet higher speeds, relativistic time dilation effects will increase the
pion’s lifetime as measured in the laboratory. So such short-lived beasts are after all
still visible.
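The estimate is easy to reproduce; this sketch uses the lifetime just quoted and includes the relativistic dilation factor.

```python
import math

c, tau = 3e8, 3e-8     # speed of light (m/s), charged-pion proper lifetime (s)

def track_length(beta):
    gamma = 1 / math.sqrt(1 - beta**2)   # relativistic time dilation factor
    return beta * c * gamma * tau        # mean path length in the lab, meters

print(f"{track_length(1/3):.2f} m")   # ~3.2 m, close to the 3 m quoted above
print(f"{track_length(0.99):.1f} m")  # a fast pion already travels ~60 m
```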
Electrons, muons and neutrinos do not participate in strong nuclear interactions,
they are not hadrons, “strong ones”; instead, they form their own club and are denoted
as leptons, “weak ones”. And if we associate with them, and only with them, a lepton
charge ±1, with +1 for e−, μ−, ν and −1 for e+, μ+, ν̄, then this lepton charge is
found to be conserved in weak interactions, i.e., it remains zero in hadron decay,
leading finally to
n → p + e− + ν̄,
as the correct form for the decay of the neutron; the lepton charges of e− and ν̄ just
cancel each other. And the lepton club has a rather curious membership requirement.
Gravity is the most universal of all forces, it affects everything, even light, although
for small masses it is also by far the weakest. Electromagnetic interactions occur
between all charged particles, whether leptons (electrons and muons) or hadrons
(charged mesons and baryons). And strong interactions couple all types of hadrons,
independent of the electric charge or their spin, i.e., both fermions and bosons.
The weak interaction, however, only takes place between fermions, particles of an
up-down spin structure and hence of spin one-half.
To obtain a theory of weak interactions, physicists again looked to the pattern
provided by electromagnetism. To see what this leads to here, it is helpful to note that
in the spirit of Dirac, the creation of an antiparticle is equivalent to the annihilation
of a particle. Given the mentioned form of neutron decay, there must thus also exist a
process in which a neutron and a neutrino combine to form a proton and an electron,
n + ν → p + e−.
Note that both sides have lepton charge +1. This process is readily converted into
the messenger form we have encountered for both electromagnetic (Fig. 5.3) and
strong interactions (Fig. 5.4), as is shown in Fig. 5.6. Here, however, the form is a
little more general. While in the two previous examples, both sender and receiver
remained the same (planes or nucleons) after the exchange, the interaction shown in
Fig. 5.6 turns the neutron into a proton, the neutrino into an electron. The messenger
boson in this process is denoted as W; it can exist in three charge states, ±1 and
0. Since the range of the weak interaction is extremely short—for quite some time
it was considered to be point-like—the mass of this W must be extremely large,
and so it eluded experimental detection for many years. Its discovery was the result
of elaborate large-scale experiments at CERN; success came in 1983, showing a
W mass of 80 GeV, almost a hundred proton masses, and brought the 1984 Nobel
prize to Carlo Rubbia as leader of the experimental team and Simon van der Meer
for the construction of the accelerator facility used. With the help of the W , we can
now also picture neutron decay as the decay of the neutron into a proton and a W −
(see Fig. 5.6); the latter subsequently turns into an electron and an antineutrino.
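The inverse relation between messenger mass and range can be made quantitative with the Yukawa estimate R ≈ ħc/(Mc²); the constant ħc ≈ 197 MeV·fm and the pion mass of about 140 MeV are standard values assumed here.

```python
HBARC = 197.3   # hbar * c in MeV * fm (standard constant)

def yukawa_range_fm(mass_mev):
    # distance over which a messenger of this mass can be exchanged
    return HBARC / mass_mev

print(f"pion (140 MeV):  {yukawa_range_fm(140):.2f} fm")    # ~1.4 fm, nuclear size
print(f"W (80 000 MeV): {yukawa_range_fm(80_000):.4f} fm")  # ~0.002 fm, nearly point-like
```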
Fig. 5.6 Two related forms of weak interaction, both mediated by a W boson: neutron–lepton scattering, n + ν → p + e− (left), and neutron decay, n → p + W−, with W− → e− + ν̄ (right)

The weak interaction story came closer to completion when in the late 1970s
Martin Perl and collaborators found in experiments at Stanford, California, that
there existed a further lepton, the τ , very much like electron and muon, but much
heavier still. The electron has a mass of about 0.5 MeV, the muon 100 MeV, and the τ
1.8 GeV; each has an electric charge of −1, a lepton charge of +1, a spin of one-half,
and its “own” neutrino. We had simply claimed distinct neutrinos for electron and
muon, supported by the Nobel prize for Lederman, Schwartz and Steinberger. Let’s
have a look at how they did it. The accelerator at Brookhaven produced pions, which
then decayed, π+ → μ+ + ν. If the neutrinos thus formed now interact with nucleons,
they can again produce leptons, through reactions of the kind ν + n → p + μ− or
ν + n → p + e−. With incoming neutrinos from pion decay, the second reaction
form was never observed. The neutron could identify the incoming neutrino as being
of the μ kind, and so it led only to muon production, never to electron production.
Therefore the neutrinos are always labelled by the lepton they came with: νe, νμ and ντ.
We can now summarize the constituents involved in weak nuclear interactions.
The fermions are

e−   μ−   τ−
νe   νμ   ντ

together with the corresponding antileptons, i.e., e+ , μ+ , τ + and their three anti-
neutrinos. The bosons mediating the weak interactions are the heavy bosons W±
and the corresponding neutral form, denoted as Z0; the boson masses are all around
80–90 GeV, giving the weak interaction its characteristic extremely short range. Before
returning to the world of strong interactions, we note that in isolated form only the
electron and the neutrinos are stable; both μ and τ undergo further weak decay into
electrons and neutrinos, of the form μ− → e− + ν̄e + νμ, with a lifetime of the
order of 10⁻⁶ s, a hundred times larger than that of the pion. The leptons therefore
form a hierarchy, with the heavier decaying into the lighter, so that at the end only
electrons, positrons and their neutrinos remain.
Fig. 5.7 The mass distribution N(M) of a proton and a pion produced in proton–proton collisions, as a function of the combined mass M(p+π) [GeV]

Parallel to the efforts to achieve an understanding of the weak nuclear interaction,
intense efforts were devoted to finding an underlying structure for the zoo of strongly
interacting particles, nucleons and mesons. Both formed an ever-growing number of
resonances, i.e., bound states that in a very short time decayed into other states,
but which had nevertheless well-defined features, mass, spin, charge, etc. We have
already noted one example, the excited nucleon Δ, which decays into a nucleon and
a pion, such as
Δ++ → p + π+.
The lifetime of such a resonance is of the order of 10⁻²³ s, and so one may again
wonder why such a state can be considered as something like a particle. Let us
therefore look at its formation in a little more detail. In the collision of two protons,
one finds frequently a final state of a proton, a neutron, and a positive pion—the
pion is the newly produced “secondary” particle. If we now measure the energy of
the combined proton–pion system, we do not obtain a smooth range of all possible
values, but instead a sharp peak at a combined mass value of about 1.24 GeV—the
mass of the Δ (see Fig. 5.7). And the angular distribution of the pion relative
to the proton has precisely the form it should have if the decaying system had a
well-defined angular momentum—the spin of the Δ. The lifetime of the state is just
long enough for a light signal to pass from one side of a nucleon state to the other,
enough to coordinate its properties.
The decay process of the Δ illustrates another label already indicated, the nucle-
onic, or more generally the baryonic charge, which is +1 for all baryons, −1 for
the antibaryons, and zero for all mesons. This baryonic charge was also found to
be conserved, i.e., the initial and final states of all hadronic processes had to have
the same baryonic charge. Whereas, as we just saw, a pion can decay weakly into a
muon and a neutrino, such a decay is not possible for a baryon, since leptons have
baryon charge zero. On the other hand, a nucleon and an antinucleon form a state of
baryon charge zero, so that they can annihilate each other to form a number of pions,
p + p̄ → π+ + π0 + π−.
From the hundreds of hadronic resonances it became at least aesthetically clear that
a more elementary substructure of the so-called elementary particles was needed,
just as such a need had arisen for hundreds of atom species. But in contrast to the
periodic table of the elements, the organization of the many particle species with
their different characteristics proved to be considerably more complex. It was more
than just adding components arithmetically, and as it turned out, the components one
finally arrived at were such that once they were put together, one could not take them
apart any more.

5.4 The Quarks

The quarks thus finally led to the end of the line of reduction, as called for by Lucretius
over two thousand years ago: they have no independent existence. One problem for
any inventor of a substructure of the hadrons was evident from the beginning: all
their electric charges observed so far were integers, whole numbers, and no meson
state had a charge bigger than ±1, no baryon state (including antibaryons) bigger
than ±2. Similar constraints held for the baryonic charge (always 0, ±1) and the
strangeness (always 0, ±1, ±2, ±3). That looked almost like a “no go” situation,
and in a world of purely additive constituents, in fact it is. It was saved by making
use of something like vector addition: if I go north for three kilometers, then east for
four, I am only five kilometers from my start, not seven. What if the subconstituents
of the elementary particles were combined in such a way? It worked, but there were
some further prices to pay—prices which Lucretius would have happily agreed to.
The quark model started in the early 1960s. The American theorist Murray
Gell-Mann and his Japanese collaborator Kazuhiko Nishijima, and independently,
Yuval Ne’eman in Israel, tried to arrange the different mesons in an octet pattern, a
set of eight particles, based on an assumed substructure of these mesons. They then
found that a similar structure, using decuplets, a set of ten, worked also for nucle-
ons. Initially, the whole idea was a bit like the heliocentric world of Copernicus: a
nice mathematical scheme, good for calculations, but not really reality…But then, in
1964, Gell-Mann, and independently his colleague George Zweig at the California
Institute of Technology, went one step further and proposed the existence of actual
“objects” with specific properties, objects which then combined such as to repro-
duce the octets and decuplets. Gell-Mann named them quarks, with a post factum
connection to James Joyce, while Zweig called them aces. Quarks won.
The price to be paid was revealed in several steps. To simplify matters, let us
ignore for the moment the hadrons endowed with the new strange charge and just
consider the “normal” mesons and nucleons of our everyday world. The quark model
proposed for this case two species, called “up” and “down”. These u and d quarks are
taken to be essentially pointlike and massless; the masses of the observed hadrons
are to be obtained through the kinetic energy of the bound quarks. Out of the u and
d quarks, together with their antiquarks ū and d̄, we now have to make everything.
Let’s jump to the rather striking solution: in order to make things work out, the
quarks had to have a fractional electric charge Q, something so far never encountered
Fig. 5.8 The quark composition of a positive pion π+: a u quark (B = +1/3, Q = +2/3, S = 0) bound to a d̄ antiquark (B = −1/3, Q = +1/3, S = 0)

in nature. The u has an electric charge Q = 2/3, while the d has Q = −1/3; their
antiquarks have the corresponding opposite charges. Both u and d have a baryonic
charge B = 1/3, their antiquarks B = −1/3; this too is totally novel. All quarks
have intrinsic spins of ±1/2, and in addition, they can rotate around each other, so
that they are in bound states of a certain orbital angular momentum. Given these
tokens, we can now play. Binding a u with a d̄ in a state of no angular momentum
and with spins in opposite directions gives us a meson of baryon charge zero ((1/3)
+ (−1/3)), electric charge +1 (2/3 + 1/3), and overall spin zero: a positive pion,
π + (Fig. 5.8). Combining two us with one d with no angular momentum, two spins
up, one down, yields a particle of baryon charge 1, electric charge 1, spin 1/2: a
proton. And if we now add a third strange quark species s to this set, of electric
charge −1/3 and strangeness −1, together with its antiquark s̄, we recover the octet
and decuplet structures of mesons and baryons. In fact, this scheme accounted for
all the strongly interacting particles at that time, as far as the observed quantum
numbers were concerned, and it even predicted some which were not there at the
outset, but were later discovered. In this way, the quark model worked. On such a
level, however, the masses of all the different hadrons were not yet predictable; since
there exist mesons, quark–antiquark pairs according to the model, as well as baryons,
quark triplets, of a whole spectrum of different masses, the hadron masses could not
be simply sums of quark masses. The hadron masses had to be a consequence of the
kinetic energy of the quarks, as already mentioned. The construction scheme used
for atomic nuclei, just adding nucleons, could not work here; the hadron mass must
arise from some form of interaction between quarks of little or no mass of their own.
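The vector-like composition rules can be written out in a few lines; the quark quantum numbers are those given in the text.

```python
# Quarks as (electric charge Q, baryon charge B); antiquarks flip both signs.
u = (+2/3, +1/3)
d = (-1/3, +1/3)

def anti(q):
    return tuple(-x for x in q)

def compose(*constituents):
    # add the quantum numbers component by component
    return tuple(round(sum(v), 10) for v in zip(*constituents))

print(compose(u, anti(d)))  # (1.0, 0.0): charge +1, baryon charge 0 -> the pi+
print(compose(u, u, d))     # (1.0, 1.0): charge +1, baryon charge 1 -> the proton
```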
But are there quarks? In the chain of reduction so far, from atoms to nuclei and
electrons to protons and neutrons, the final constituents were always found to exist
in the real world. So not surprisingly, the success of the quark model triggered an
intensive search for isolated quarks. Could a nucleon be split into quarks? These, with
fractional electric charge and fractional nucleon charge, would show quite unusual
features. The search turned out to be totally unsuccessful. Even more refined methods
than hitting protons with each other did not lead anywhere. A particularly efficient
way of producing elementary particles was to collide an electron with its antiparticle,
the positron, at high energy. The two would then annihilate, depositing a bubble of
intense energy at some point of space. This energy would then turn into mass in
the form of another species of particle together with its antiparticle; it would bring
some pair of fish out of the Dirac sea. In this way, one could produce practically
any known elementary particle–antiparticle pair. But the search for a quark and an
antiquark remained fruitless. We shall see later on that this experiment was really on
the right track, and electron–positron annihilation does in fact lead to quark–antiquark
production. But these are formed in their own world, beyond an event horizon not
letting them out, and so they could not be seen in our world. This conclusion was
codified by saying that quarks are “confined”; they cannot exist alone, but only as
part of some larger body, from which no force could tear them loose. After more
than two thousand years, physics finally agreed with the ideas of Lucretius.
The binding of quarks brings up a further problem, which we have so far ignored.
How do quarks interact? Today’s thinking is, as we have already seen in the case of
weak interactions, very much patterned after electromagnetic interactions. Particles
carry an electric charge, and one charged particle interacts with another by sending a
photon as a messenger. So what is the counterpart of the charge for strong interactions,
and what is the messenger? The labels u, d and s identify the species of quark, but
not the charge leading to the strong interaction. These labels are today referred to
as flavor—they specify the different kinds of hadrons that can exist, not how their
constituent quarks are “bound” together.
We have already noted that the composition of quarks to form elementary particles
is not simply additive, but more like an addition of vectors. Closely related to this is
the observation that in order to obtain the observed particles, the strong interaction of
quarks required three strong force charge states, together with three for the antipar-
ticles, instead of the plus and minus of electromagnetism. The proton contains one
d and two u quarks, with two spins up and one down. But there also exists a nucleon
state composed of three u quarks with all spins up, the Δ++ already encountered
above. It has charge +2 and decays eventually into a proton and a positive pion. So
it has to be possible to confine in the space of a nucleon three seemingly identical
quarks, objects of spin 1/2. The Pauli exclusion principle mentioned above explicitly
forbids this. The only way out is to give all three quarks different charges and thus
make them not identical. The resulting threefold strong interaction charge is called
color, with red, blue and green as the most popular choice. The idea here is that just
as the superposition of different colors leads to white, colorless, the combination of
the three different colored quarks would produce a colorless nucleon. Color only
exists in the world of the quarks, behind their confinement horizon; no particles in
our world show any color.
Just as the photons couple to the electric charges, the messengers of the strong
interaction have to “see” the color charge of the quarks that interact. Since they
glue quarks together, they were named gluons, and since the gluon connecting a red
with a blue quark has to couple to both these colors, they are two-colored. One of
the resulting couplings is schematically illustrated in Fig. 5.9. Altogether, there are
nine possible color combinations, red–antired, red–antiblue, red–antigreen, and so
on; the color emitted by a quark is the anticolor for the one receiving the message.
One of the nine is in fact eliminated, since the combination r r̄ + bb̄ + g ḡ becomes
Fig. 5.9 The quark structure of the proton: its quarks (u, d) enclosed by the confinement horizon, surrounded by the physical vacuum

colorless. In the colored world, we then have quarks of three colors, antiquarks of
three anti-colors, and gluons of eight different color combinations.
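The counting can be checked in two lines; this sketch simply enumerates the color–anticolor combinations described above.

```python
from itertools import product

colors = ["red", "blue", "green"]
combinations = list(product(colors, colors))   # each color paired with an anticolor
print(len(combinations))        # 9 combinations in all
print(len(combinations) - 1)    # 8 gluons, after removing the colorless one
```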
So far, we have a model to account for the pattern of the hadrons observed
up to 1974, a list of all possible states according to electric and baryonic charge,
strangeness and spin. But we cannot yet calculate any of the hadron masses: the
nucleon is made up of three essentially massless and pointlike quarks, but its mass
is almost 1 GeV. So the mass must somehow arise from the fate of the quarks in
their bag of confinement. To calculate it, we need a theory, in the same way as we
need quantum electrodynamics to calculate the energy levels of the hydrogen atom.
And this theory should, of course, also incorporate the impossibility of breaking up
a hadron into its quark constituents—it should show that the quarks are bound so
tightly that “no force can tear them loose”.
In the early 1970s, the time was ripe for such a theory. Combining the new features
observed in the strong interaction with the structure of quantum electrodynamics,
Murray Gell-Mann and his German collaborator Harald Fritzsch proposed what they
called quantum chromodynamics, “QCD” in contrast to “QED”, since the electric
charge was now replaced by the color charge of the quarks. Similar work appeared
at the same time by the American theorists David Gross and Frank Wilczek. They,
and independently David Politzer, found that this theory predicted a striking novel
feature. It was expected that the quarks would resist being pulled apart—after all,
they were confined to their hadronic prison. But what Gross, Wilczek and Politzer
found was that when quarks approach each other very closely, they no longer interact
at all, they behave like free particles; this is now known as asymptotic freedom, and
its discovery brought the 2004 Nobel prize to the trio Gross, Politzer and Wilczek.
It was as if the gluons connecting quarks form an elastic string: when the quarks are
near each other, it hangs loosely and is unnoticeable. But if they try to move apart, the
restraining force of the string gets stronger and stronger—it defines the confinement
horizon.
On the other hand, the proof of quark confinement has so far resisted all the
attempts of the theoretical physics community. Extensive work carried out on large-
scale computers provides strong support for the idea that quantum chromodynamics
indeed confines the quarks to their world of color. But the actual solution of the
“confining” equations remains a tantalizing challenge—made all the more tempting
by the promise of a sizeable reward to the winner. The Clay Mathematics Institute
in Cambridge, Massachusetts, USA, announced in the year 2000 a list of seven open
“Millennium Prize Problems”, their choice of the most important unsolved problems
in mathematics, with the commitment to pay a million dollars, the Millennium Prize,
for the solution of any of them. One in that list is the proof of confinement in QCD-
like theories, and its million dollars are still available. In fact, only one of the seven
has actually been solved by now, but the winner, a Russian mathematician, declined
the award.
That would seem to prevent us from ever calculating the masses of the different
hadrons—which after all, was one of the crucial reasons for introducing first the
quark model and then quantum chromodynamics. How do the different meson and
nucleon masses arise from the interaction of massless quarks? While there is indeed
no analytic solution to this problem, both model studies and numerical calculations
show that the scheme is right also for this issue. In the case of a hydrogen atom, for
example, the excitation spectrum can be calculated in terms of the various possible
orbits of an electron around a proton. In a similar way, the quark model provides
the possible quantum states of a triplet of massless quark constituents contained in
a bag, a volume of hadronic size, to get the nucleon spectrum: the lowest orbital
pattern gives the ground state nucleon, the higher ones the excited states with their
larger masses. The nucleon masses are thus effectively determined by the energies
of the bound quarks. And quark–antiquark states give the corresponding masses
for the meson spectrum, with one caveat to be noted shortly. As already indicated,
this picture—using a bag to contain the quarks—has since been complemented by
numerical calculations, based on a discrete formulation of QCD provided by the
American theorist and Nobel laureate Kenneth Wilson. Today these studies reproduce
all hadron masses with astonishing precision. Practically all the mass present in
the universe arises from nucleons and nuclei, which in turn consist of nucleons.
The origin of the mass observed in our world is thus accounted for: it is due to the
interaction of essentially massless quarks and gluons, contained inside a volume
of hadronic size.
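As an illustration of the scale, here is a hedged bag-model estimate; the lowest-mode eigenvalue 2.04 for a massless quark in a spherical bag is a standard result assumed here, not derived in the text.

```python
HBARC = 197.3   # hbar * c in MeV * fm
X0 = 2.04       # lowest bag-model eigenvalue for a massless confined quark

def bag_mass_mev(radius_fm, n_quarks=3):
    # kinetic energy of n massless quarks confined to a sphere of radius R
    return n_quarks * X0 * HBARC / radius_fm

print(f"{bag_mass_mev(1.0):.0f} MeV")  # ~1200 MeV for R = 1 fm: the nucleon ballpark
```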
Unfortunately, however, the beauty of quantum chromodynamics was, from the
very beginning, not really perfect. A theory based on massless and pointlike u and d
quarks of three different color charges, and interacting through gluons of eight color
charges, such a theory would in a way be an ideal, a beautiful solution to strong
interactions. It could in principle describe all the “normal” mesons (excluding pions,
we return to them shortly) and the nucleons, in a consistent fashion without any
dimensional input from somewhere else. It would predict (almost) all mass ratios,
as well as the relations of masses and radii; it would then be left to the observer
to choose a scale. But unfortunately such a theory had two basic shortcomings: it
would also contain massless mesons, and these would not be in accord with the
short-range nature of the strong force. The lightest hadron, the pion, does have a
small mass, but one which is large enough to define correctly the range of the strong
force. To circumvent the problem, to make pions massive, the quarks had to be given
an intrinsic non-zero mass—enter: an outside scale. And while the mass of proton
and the neutron are not very different, they do differ; to take that into account, one
had to give the u and d quarks small and in fact different intrinsic masses; today’s
values are around 2–3 and 3–7 MeV, respectively. These intrinsic quark mass values
are, from the point of view of QCD, parameters added from the outside, foreign to
the theory, which thereby is no longer the final one. We shall return to this problem
again; here we note only that the advent of strange hadrons made it even worse: to
get their masses right, the s quark had to have an intrinsic mass of about 100 MeV,
almost as heavy as the pion.
And, as it turned out, that was not the end of the story: it transpired that there are
in fact more different species of quarks than those needed to account for the hadrons
observed up to the 1970s, just as the electron and the muon were not all the species
in the realm of weak interactions. The discovery of the next quark species was later
sometimes called the November revolution of 1974. The Chinese-American experimentalist
Sam Ting, professor at the Massachusetts Institute of Technology, had spent much of
his scientific life studying the annihilation of an electron–positron pair into hadrons,
or of hadrons into an electron–positron pair. In 1974 he hit the jackpot: the otherwise
smooth production distribution in the mass of the produced e+ e− pair showed, at
a value of about 3 GeV, a most dramatic peak, signalling the appearance of a new
particle. His results were obtained by looking at e+ e− pairs produced in proton–
nucleus collisions at Brookhaven National Laboratory. At the same time, the team
of Burt Richter at the Stanford Linear Accelerator in California found a peak at just
that mass value, studying electron–positron annihilation into hadrons—essentially
the reverse process of what Ting and his colleagues looked at. A new meson had
been discovered: the J/ψ, with a mass of over 3 GeV, formed by the binding of a
new quark and its antiquark. Its flavor was subsequently labelled charm (c), the new
meson J/ψ and its higher excited states, found soon afterwards, charmonia. Ting’s
name remains forever with the discovery: J is also the Chinese character for Ting.
So one now had, as basic building blocks, four quark species, or flavors, as they
are commonly referred to, and of course their antiquarks: the u and the c of electric
charge +2/3, the d and the s with −1/3. All quarks had baryon number 1/3 and
spin 1/2; in addition, the s had strangeness −1, the c charm +1. Those of the “first
generation”, the u and d, were almost massless, while the “second generation” had
100 MeV for the s and about 1.3 GeV for the c. The story continued: in 1977, Leon
Lederman and his group at Fermilab near Chicago found the next peak in the e+ e−
mass spectrum, at about 9.5 GeV, the upsilon (ϒ). It led to the bottom quark b, with
charge −1/3 and a mass of about 4.2 GeV, bringing the number of quark flavors
to five. Its missing partner for the “third generation”, the top quark, came around
1995, with the predicted charge +2/3, but a huge mass of about 180 GeV. From our
present point of view, that completes the quark family, with the electric charges and
masses as indicated in Table 5.1. All the quarks and antiquarks have an intrinsic spin
of 1/2; the quarks have a baryon charge of 1/3, the antiquarks have the opposite;
and all come in three color charge states (“red, green and blue”). With the 6 × 3
quarks, the same number of antiquarks, and eight colors of gluons as mediators of
the interaction, quantum chromodynamics provides a complete description of the
Table 5.1 The three quark generations; Q denotes electric charge, B baryon charge

  Q = +2/3, B = 1/3:   u (m = 2–3 MeV)   c (1300 MeV)   t (175 000 MeV)
  Q = −1/3, B = 1/3:   d (m = 3–7 MeV)   s (100 MeV)    b (4200 MeV)

strong “nuclear” interactions and the spectrum of the strongly interacting particles,
the hadrons.
The fly in the ointment, as we have already mentioned, is of course the presence
of the intrinsic quark masses, introduced ad hoc from the outside. A theory of only
massless u and d quarks would contain no dimensional scale, so that it would hold
for a world “in general”, a sort of blueprint for nature. The masses of the hadrons
would be completely given by the energy of the quarks and gluons that they are
made up of. The actual physical scale in our world could then be fixed by measuring
the mass of any hadron—that would provide the “calibration”. But it was clear from
the outset that such a dream could not work. For one thing, massless quarks would
give the same mass for protons and neutrons, and small as their mass difference is, it
is not zero. Then there is the problem of the pion, as we have seen; let’s consider it in
a little more detail. The formation of massive hadrons in a world of dimension-free
constituents and interactions is a little like the onset of magnetization in a metal
such as iron. At high temperatures, the spins of the atoms are randomly oriented,
there are as many pointing up as down, leading to an average spin value of zero. As
the temperature is lowered, at a certain point (the “Curie point”), the spins begin to
align in a certain direction, either up or down. Which of the two does not matter, but
one direction it has to be. Whereas at high temperatures the system as a whole is
symmetric, invariant under flipping, this symmetry is destroyed when magnetization
sets in. Magnetization thus breaks a symmetry present in the system. In a similar
way, the formation of massive hadrons out of massless quarks also breaks a symmetry
present in quantum chromodynamics. And it turns out that in this case, besides the
wanted massive hadrons, there would be a further massless hadron; we will return
to this aspect in Chap. 7. Since the pion has a mass much less than all other hadrons,
it was a candidate for this so-called Goldstone particle; it is named after the British
theorist Jeffrey Goldstone, who first showed the necessity for its existence in the
world of hadrons, after his Japanese colleague Yoichiro Nambu had found it in
the study of superconductivity. It was indeed light, but its mass was not zero, and
the only way to accommodate this was to assign to the quarks a little bit of mass. In
this way, the pion mass, the range of the strong nuclear force and the proton–neutron
mass difference came out right.
That was, of course, only the beginning. The existence of much heavier K-mesons,
and then of the still more massive charm, bottom and top states, obviously required
the “input” of quarks of a corresponding heavy mass. While the nucleon mass remains
essentially the interaction energy of the three quarks it contains, the mass of the heavy
quark bound states (“quarkonia”) is almost entirely due to that of the heavy charm
or bottom quarks they are made of. So the quark masses remain an outside input for
quantum chromodynamics—the theory itself cannot determine them. And that seems
to indicate that to describe everything in a consistent way, quantum chromodynamics
would have to become a part, a subsection, of some “larger” theory. For this, there
is another, equally valid reason. Although the ground state mesons, such as pions or
kaons, are stable under strong interactions, they, and all other mesons, can decay
through weak interactions, similar to the decay π+ → μ+ + νμ we have already
encountered. Apparently quarks undergo weak interactions, even though leptons do
not interact strongly. In particular, the W messenger of the weak interaction can
change the flavor of a quark, turn it from u to d: u → d + W+. And since such possi-
bilities exist also for the heavier quark species, strangeness, charm and beauty are not
conserved in weak interactions; for example, s → u + W− destroys strangeness. As
a consequence, at the end of all decay chains, no heavy flavors remain. In addition,
when the intrinsic charges allow it, quarks can also decay electromagnetically, for
example in the form π0 → 2γ.
electromagnetic interactions as well.
The three quark generations as the constituents of the strong interaction now
completely match the three lepton generations, consisting of electron, muon, τ and
neutrinos, found in the study of weak nuclear interactions (Table 5.2). This provides
a first indication for a road towards a unified description of the different forces of
nature. The bosons mediating the interaction are the eight gluon states for the strong
interaction and the three W-boson states for the weak interaction.
What about electromagnetic interactions, and what about gravity? While the lat-
ter has so far resisted the efforts of numerous great theorists, electromagnetism was
included as early as the 1960s, when the American theorists Sheldon Glashow and
Steven Weinberg and the Pakistani Abdus Salam independently found ways to for-
mulate what is now known as the unified theory of electroweak interactions, showing
that weak and electromagnetic forces are different facets of a single, more funda-
mental electroweak force.

5.5 The Standard Model

Today, the electroweak theory is combined with quantum chromodynamics to form


the standard model of elementary particle physics, which provides the theoretical
basis for strong, electromagnetic and weak interactions in the subatomic world. The
basic constituents are quarks and leptons, six fermions each, and their interactions

Table 5.2 The basic constituents for strong, electromagnetic and weak interactions

  quarks:    u    c    t
             d    s    b
  leptons:   e    μ    τ
             νe   νμ   ντ
are mediated by boson fields, gluons for the strong interaction, photons for the elec-
tromagnetic and the W bosons for the weak interaction.
As we have noted several times, one of the essential puzzles in this picture was
from the outset the origin of the different intrinsic quark and lepton masses, as well
as those of the heavy W bosons. The favorite explanation now invokes an additional,
otherwise undetectable field, which, like Einstein’s cosmological constant, pervades
all of space and by clustering around quarks and leptons leads to their “effective”
masses. The theoretical basis for such an approach has been studied over the past fifty
years, with a number of major players coming to similar conclusions: Peter Higgs;
Robert Brout and François Englert; Gerald Guralnik, Richard Hagen and Tom Kibble.
Finding evidence for such a “Higgs” field was the main aim of experiments both at
CERN and at the large American accelerator Fermilab near Chicago. Only last year,
in 2012, CERN claimed first signs of the existence of a “Higgs boson”, the mediating
particle for such a field—that’s why we have listed all the pioneers of the game. If
confirmed, we have with the standard model a viable, albeit not so simple description
of three of the four fundamental forces of nature, with only gravity still left out.
To avoid confusion, it is perhaps worthwhile noting at this point that the issue of
mass enters in two completely distinct ways. The inertial masses, observed in the
universe as a measure of the resistance to force, to putting things into motion, as well
as the masses entering gravity: these masses essentially arise from the interaction
of practically massless quarks bound by massless gluons to form massive nucleons.
Nucleons in turn form nuclei, which provide the mass of the atoms and the mass as we
see it in our world. The fine-structure of the nuclear interaction then requires that the
quarks are not really completely massless, but that they have a minute intrinsic mass,
to reproduce the difference in the masses of protons and neutrons or the small but
finite pion mass. These intrinsic quark masses do not contribute in any significant way
to the inertial or gravitational mass found in the universe. The masses of hydrogen or
helium nuclei, as the building blocks of all observable masses, are affected very little
by any changes in the intrinsic quark mass. The role of the Higgs field as “the origin
of mass”, as sometimes suggested, is thus to be considered with some care—that
mass is not the mass we normally mean.
In the world as we find it, protons, neutrons and electrons are the directly measur-
able ultimate constituents of matter. All other particles, whether hadrons or leptons,
are unstable. In fact, when the muon was first discovered, the American Nobel laure-
ate Isidor Rabi noted “who ordered that?”. It would seem that a world without heavy
leptons and without heavy flavor quarks, a “first generation” world of only u and d
quarks and only the electron and its neutrino, such a world would not be very different
from the one we have now. All the additional particles appear only in rather elaborate
experiments and seem somehow “superfluous”. Why do they exist? Why did nature
decide to add these seemingly unnecessary complicating aspects? So whether the
standard model is aesthetically something to be considered as final—that is another
question, whose answer is a matter of taste. Without a Higgs field, we have twelve
different fermion mass values to account for (six quarks, six leptons); given a Higgs
field, it remains to understand twelve different constants coupling this field to the
different constituents. Why do the coupling strengths differ so much to lead to what
we call strong, electromagnetic and weak interactions? If we count particles, antipar-
ticles and all charge states, we have 36 quark states, eight gluon states, twelve lepton
states, four electroweak bosons (the photon, W+, W− and Z0), and the Higgs. Nevertheless, wherever applicable, the
standard model agrees extremely well with the experimental results. Somehow one
cannot help but think of the Ptolemaic picture of the world, also highly precise, and
the Spanish king Alfonso, who felt that had he been consulted, God might have come
up with a somewhat simpler scheme. The German experimentalist Helmut Faissner,
who contributed significantly to the neutrino research at CERN, had a somewhat
similar view: he was sure, he said, that the theorists who developed the standard
model were sufficiently clever to understand it; but he was not sure, he said, if the
Lord could follow their arguments. So it seems that we cannot really exclude the
possibility that the standard model only defines our present horizon of understanding,
and that there may be more beyond this horizon. First attempts were made a while
ago by the Indian physicist Jogesh Pati and his Pakistani colleague Abdus Salam,
of electroweak fame. They considered quarks and leptons in turn as composite
objects, consisting of point-like and massless preons, but such proposals have so far
not really led anywhere, neither in theory nor in experiment.
What theorists have concentrated on instead is whether the present plethora of
fundamental constituents—quarks, leptons and all the different bosonic interaction
messengers—might not all have arisen from some more symmetric, unified, basic
theory. Is it possible that everything started much more simply, from a presently
concealed grand law of nature, from which some abrupt effects in the evolution have
produced the “zoo” of today’s “elementary” constituents? Could it be that the cooling
of the universe broke an initially very symmetric world into so many pieces—and
can one, by looking at the pieces, reconstruct the original? More recently, there have
been numerous attempts at further unification; we will return to these later.
For the study of the horizons of matter in the present microcosmos, the heavy
quarks c, b and t in fact do not play an essential role. While the “normal” hadrons,
made up of u, d and s quarks, are all of hadronic size—a size specified by the range
of the strong interaction—the new heavy quark–antiquark mesons are, because of the
large quark masses, very much smaller. And in any thermal medium, in matter, where
the presence of the different species is governed by their masses, heavy hadrons such
as quarkonia remain extremely rare beasts, with more than a hundred million pions
for each quarkonium present at relevant temperatures.
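That rarity follows from a simple Boltzmann estimate, exp(−m/T); the masses and the temperature below are standard values assumed here rather than quoted in the text.

```python
import math

T      = 160.0    # temperature in MeV, near the limit of hadronic matter
m_pi   = 140.0    # pion mass, MeV
m_jpsi = 3100.0   # J/psi mass, MeV

suppression = math.exp(-(m_jpsi - m_pi) / T)
print(f"one J/psi per {1/suppression:.1e} pions")  # ~1e8: "a hundred million"
```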
So, with the quarks, we have, albeit without the blessing of a mathematical proof,
reached an end in our search for the smallest possible constituents of matter. We
have found that they exist in their own world, from which we cannot remove them.
This world is, however, quite different from that inside a black hole, which also does
not allow us to take anything out. We can throw something into the black hole, it
disappears, and we have no way of finding out what happened to it. In the case of the
colored quark world, we can send a probe in and study the effect this has. We can
study the interior of hadrons with electromagnetic probes, and since these can get
back out, we now have in fact quite a bit of information about the hadron structure.
Extensive experimental and theoretical studies have shown that on short distance
scales, quantum chromodynamics correctly describes the interaction of hadrons with
other hadrons as well as with electromagnetic probes. Most of what we know today
about high energy collisions makes sense only in terms of interactions of the quarks
within the hadrons. But these quarks can never get out, and we can never get them
out.

5.6 The Confinement Horizon

How can we try to separate a quark–antiquark pair? We will soon see how one can
attempt that in accelerator experiments—but to start, let’s use a more powerful tool.
We found that the strong surface gravity of a black hole can suck in one of the partners
of a virtual particle–antiparticle vacuum fluctuation, leaving the other to escape as
Hawking radiation. This fate can strike any fluctuation, not only photons or electron–
positron pairs: it can also happen to a virtual quark–antiquark pair. But while the
electromagnetic fluctuations can be split into their two partners (for illustration,
let’s stick to the electron–positron pair), the quark and the antiquark form a colorless
state. If one of the two were now to be sucked in, that would turn the black hole into
a colored hole and at the same time lead to the emission of physical colored quark
radiation. So it’s now a fight of gravitational attraction vs. color confinement, and
the absence of colored quarks in our world indicates who wins. There is only one
way out: when the confinement energy of the string between quark and antiquark has
reached the energy of a further quark–antiquark pair, that new pair will be pulled out
of the Dirac sea to become real. One of the newcomers will accompany the quark
into the black hole, the two together entering as a hadron and thus assuring further
color neutrality, while the other will do the same for the antiquark flying off into
space. So the strong interaction form of Hawking radiation, to be emitted by black
holes, consists of hadrons, never of quarks. Black holes always remain black.
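How far the string can stretch before this happens follows from a rough estimate; the string tension of about 1 GeV/fm and the energy cost of a light quark–antiquark pair are standard ballpark values assumed here.

```python
SIGMA = 1.0       # string tension, GeV per fm (standard ballpark value)
PAIR_COST = 0.7   # rough energy of a light quark-antiquark pair, GeV

breaking_distance_fm = PAIR_COST / SIGMA   # where E = sigma * r reaches the pair cost
print(f"string breaks after ~{breaking_distance_fm:.1f} fm, about a hadronic radius")
```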
To study this way of preserving color neutrality through further pair production
in more detail, we return to the basic experiment for the production of new hadrons,
the annihilation of an electron and a positron. We are quite sure today that this does
indeed result in the production of a quark and an antiquark; the energy set free in the
annihilation is converted into a virtual photon, which then turns into a quark and an
antiquark, flying apart in opposite directions, but contained in their world of overall
color neutrality—that’s why we can’t see them. Let’s try to follow the evolution of
the escape attempt in a set of sketches (see Fig. 5.10).
At first, as long as the quark and antiquark are still very close together, nothing
stops them from separating. But with increasing separation distance, the elastic string
of the gluon takes hold and shows them that there are limits to their world. In classical
physics, they would fly apart until all kinetic energy has been turned into potential
energy of the stretched string, and they would then proceed to oscillate like a yo-
yo. But the colored bubble in which they are contained is a quantum system, and
therefore the separating quarks are a little like the separating capacitor plates we
considered in our discussion of lightning in empty space. The space between quark
and antiquark also contains further virtual quark–antiquark pairs, submerged in the
Fig. 5.10 The evolution of electron–positron annihilation, e+ + e− → γ∗ → q + q̄, into multiple hadrons

QCD counterpart of the Dirac sea, and just waiting for enough energy to allow
them to surface. So when the energy of the stretched string reaches that value, the
energy needed to bring a virtual quark–antiquark pair to reality, the string “breaks”
and the new pair is there. The initially outgoing quark now switches partners and
couples to the antiquark of the new pair, and likewise the outgoing antiquark. The
only difficulty is that the constituents of the quark–antiquark pair are essentially at
rest in the laboratory, so that the outgoing initial quarks really have to drag their
new partners along. And this relative momentum between them will soon cause a
reiteration of the previous pattern. In any case, the newly formed pairs are now free
to separate and do so: our initial one colored world, at rest in the laboratory where
the annihilation took place, has turned into two such worlds, moving apart.
Quark and antiquark fly apart also in each of the newly formed worlds, and so the
pattern now continues. In each of these two new worlds, the “fast” quark stretches the
string binding it to its new partner, it hits its confinement horizon, and to go on, it has
to create again a new pair. We see that in order to move on, to pass its confinement
limit, the leading quark has to pay a price: it has to leave behind a bubble of enough
energy to form a (comparatively slow) new quark–antiquark pair, which emerges as
a hadron.
As long as there is energy available, this goes on and on. As a result of the
electron–positron annihilation, there thus emerges a cascade of produced hadrons.
These move at an ever increasing speed, but each has the same intrinsic energy, i.e.,
the same mass. The American theorist James Bjorken has named this result of the
annihilation process the “inside-out” cascade: first, the slowest hadrons would appear,
then faster and faster ones. But the only evidence we have of the passing quarks in
their colored world are the hadrons they leave behind in order to go on.
At each step then, the initial quark (or antiquark) loses part of its energy in order
to form the new pair, allowing it to recouple and thus get away. We, as observers in
the laboratory in which the electron–positron annihilation took place, can only see
the resulting hadrons as glowworms in the vacuum. The electron and the positron
approach each other and annihilate, and then, if we could do a real slow motion
movie of what is happening, first some slow hadrons appear, then faster, and finally
really fast ones. The quark and antiquark produced in the annihilation, and the new
quarks and antiquarks they create by flying apart—these we never see, they remain
behind their color horizon. So we can look at the cartoon of the annihilation also
in another way. The initial quark and antiquark produced by the virtual photon fly
apart. When they hit the end of their string, i.e., when they reach their confinement
horizon, they have to create a new quark–antiquark pair in order to continue, and they
now rearrange their bonds: the initial quark grabs the new antiquark, and vice versa.
The problem is that the initial quark and antiquark are moving very fast in opposite
directions in the lab where the annihilation occurred, while the newly formed ones
are very slow there. So the primary quark tries to accelerate its new antiquark partner,
and for that, it has to pay. As soon as the separation distance between primary quark
and secondary antiquark reaches the confinement distance, the two have to create a
new pair in order to go on, i.e., they have to emit a hadron. So we can picture what is
happening as the continuing acceleration of the secondary antiquark, accompanied
by a deceleration of the primary quark, and by hadron radiation whenever the con-
finement horizon is reached; see Fig. 5.11. And this hadron radiation we can measure.
So the scenario we witness is much like that seen by the stationary observer in the
presence of a constantly accelerating space traveller: we can’t see this traveller, we
only see the Unruh radiation his passage triggered and which can reach our world.
How can we check if this is really what is happening? The energy that each of
the leftover quark–antiquark pairs gets is the energy of the stretched string needed
to bring a pair of quarks to the surface of the sea of virtuality, and this is the same
at every step. This bubble of energy now becomes a hadron in the real world, and

Fig. 5.11 Hadron production in electron–positron annihilation: the secondary quark (red) is being
constantly accelerated by the primary antiquark (blue) and thus emits hadrons as Unruh radia-
tion when it tunnels through the confinement horizon (green); the same holds for the secondary
antiquark/primary quark

it has to do this in a completely random way—it cannot tell us anything about the
colored world it came from. Random, as we have seen, means throwing dice. Let us
assume we throw two dice: then the chance of getting a sum of twelve is 1/36, since
each die has a probability of 1/6 of turning up a six. In contrast, the chance of
getting seven is 6/36 = 1/6. Translated into hadron formation, this would say that it is
six times more likely to produce a light hadron, corresponding to throwing the sum
of seven, than a heavy hadron with a sum of twelve. If the bubbles really hadronize
randomly, we can predict the ratios of the different species produced.
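As a quick check of this dice arithmetic, one can simply enumerate all 36 equally likely outcomes; a minimal Python sketch:

from itertools import product

# count how many of the 36 equally likely two-dice outcomes give each sum
counts = {}
for a, b in product(range(1, 7), repeat=2):
    counts[a + b] = counts.get(a + b, 0) + 1

print(counts[12] / 36)          # 1/36: only 6+6 gives twelve
print(counts[7] / 36)           # 6/36 = 1/6: six combinations give seven
print(counts[7] / counts[12])   # 6.0: "seven" is six times more likely than "twelve"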
In the Unruh radiation picture, the bubble energy is specified in terms of the
temperature of the radiation, which in turn is determined by the strength of the string
binding. This temperature fixes the relative abundances of the different hadron
species observed, pions, kaons and nucleons. These relative rates should be the same,
no matter what the energy of the initial electron–positron pair was. The momenta
of the initial quarks don’t matter at all, they only trigger the radiation. The relative
abundance of, say, pions to kaons, produced in electron–positron annihilation, should
be the same if the annihilation energy is 10 or 100 GeV. And indeed those relative
abundances, measured for annihilation processes at energies differing by orders of
magnitude, are always the same, and their values are correctly predicted by the Unruh
temperature obtained from the tension of the string connecting quark and antiquark.
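In the literature on this thermal hadronization picture, the radiation temperature is usually quoted as T = √(σ/2π) in natural units, with σ the string tension. A small numerical sketch, taking the commonly used value σ ≈ 0.19 GeV² as an assumed input:

import math

sigma = 0.19                          # string tension in GeV^2 (assumed input value)
T = math.sqrt(sigma / (2 * math.pi))  # Unruh-type hadronization temperature
print(round(T * 1000))                # ≈ 174 MeV

Depending on the precise value adopted for σ, this comes out in the range of roughly 160 to 175 MeV, which is indeed the temperature scale found in the measured abundances.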
So the quarks remain forever behind the confinement horizon, hidden dice in their
colored world; but they do leave us with a glow of thermal hadrons as an indication
of their presence, and these hadrons—in contrast to the Hawking radiation of black
holes hidden under the microwave background—we can indeed observe and study.
If quarks, and not nucleons, are the basic building blocks of matter, and if these
quarks do not have an independent existence: what does that imply for the states of
matter at densities so extreme that the substructure of the nucleons comes into play?
That will be the topic of Chap. 6.
6 Quark Matter

Stacking cannon balls


Shortly after the Big Bang, the world was very different. Today, after fourteen billion
years of expansion, our universe is on the average rather empty. There are vast
interstellar spaces devoid of anything, and, once in a while, a few stars, a galaxy,
or even a cluster of galaxies; then again, for many light years, nothing. But in our
imagination, we can try to figure out how the universe reached this stage, and what
it was like in earlier times: we just let the expansion film run backwards. Space
now contracts, and the density of matter and energy increases the further back we
go. Stars become clouds of hot gas, fusing into each other, atoms dissolve to form
again a plasma of electrons and nuclei. At this point in the evolution after the Big
Bang—the universe is now about 300,000 years young and its average temperature
is about 3000 K—the microwave background radiation was born. But we continue
to let the film run in reverse, the universe gets ever younger, hotter and denser. It
contains photons, electrons and positrons, and of course neutrinos, but now strong
interactions also come into play as a dominant factor, so that much of the energy
takes the form of strongly interacting hadrons, mesons, nucleons and antinucleons.
What happens when this hot hadronic gas is compressed further? What is the nature
of the primordial matter the universe then finds itself in?
To get a first idea, we imagine that the hadrons are hard little balls of small but
finite size, all identical, and consider the behavior of such a system when its density
is increased. Curiously enough, that question turns out to be more complicated than
it sounds, and it does not even have a unique answer. If the increase of density is
handled by the window decorator at the local fruit store and the balls are oranges,
we all know the result. But that solution is reached only through orderly stacking,
by the planning of the stallholder. In the end, each orange in the interior of the stack
is surrounded by twelve other oranges. That this is the highest possible density was
originally guessed by Johannes Kepler in 1611. Kepler had been attracted to this
problem by some correspondence with the English astronomer and mathematician
Thomas Harriot. Harriot worked as an assistant for Sir Walter Raleigh, who was
looking for the best way to stack cannon balls on ships. Kepler’s “guess” was the
orderly pyramid still used today by the market stallholder, giving for the density per
volume π/√18 ≈ 0.74. In other words, in the interior of the stack, 74 % of the space
is filled with oranges or cannon balls, 26 % remains empty.
In 1831, the German mathematician Carl Friedrich Gauss finally proved that
Kepler’s guess was right: it does indeed give the highest possible density for orderly
packing. But if we just pour balls from a crate into a container, we can never reach
such a density, since the randomly falling balls will never arrange themselves in such
an orderly pattern. The maximum density of the disordered medium is much more
difficult to calculate—in fact, that this density is always less than that of the Kepler
pyramid was established only recently by the American mathematician Thomas C.
Hales, using a computer-aided proof. And when the crate is filled, we can still increase
the density a little by shaking or tapping the container for a while. At least on Earth—
in space, without gravity pulling the balls down, the effect of disturbance is not so
clear. Statistical physics specialists speak of close packing (the cannon ball solution)
and random close packing; the latter is what you get by randomly filling the crate
and tapping until that has no more effect. Computer studies give for random packing
a density limit of 0.64—about 10 % less dense than the orderly stacked pile.
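Both benchmark densities are easy to state numerically; a minimal sketch (the random value 0.64 is simply quoted from the computer studies mentioned above, not derived here):

import math

kepler = math.pi / math.sqrt(18)        # ordered (cannon-ball) close packing
random_close = 0.64                     # random close packing, from computer studies
print(round(kepler, 4))                 # 0.7405: 74 % filled, 26 % empty
print(round(random_close / kepler, 2))  # ≈ 0.86 of the ordered density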
Since we know the size of nuclei and how many nucleons a nucleus contains,
we know their average density: about 0.16 nucleons/fm³. Using this information, we can try
to see how densely nuclei are packed. With the nucleon radius found in scattering
experiments, about 0.8 fm, the nuclear “filling factor” becomes about 0.5, the nucleus
is half full, half empty; its density is less than both ordered and random close packing.
So the nucleons can still rattle around a little inside the nucleus. But they are definitely
“jammed”, showing another characteristic phenomenon occurring in dense media of
hard balls. It simply means that the balls have indeed only a very restricted region
of space to move around in; they cannot roam through the entire nucleus. While the
neighbors of a given nucleon generally do not touch it, they also don’t leave it enough
space to get out of the local environment.
Heavy nuclei present us with the densest matter normally present on Earth. A
little further on, we will look at recent attempts to surpass this limit in terrestrial
experiments. In the universe, denser matter is presumably to be found in neutron
stars, dead stars which through gravity have collapsed as far as they can. The Pauli
exclusion principle, which we encountered in the previous chapter, forbids having
more than two neutrons in the same spatial region, and so the stars cannot collapse
to a point. In the case of black holes of masses higher than those of neutron stars,
gravity seems to become strong enough to overcome this constraint, although in the
absence of a theory of quantum gravity, it is not really clear what happens there at the
singularity in the center. Estimates give for the core of neutron stars densities up to
three to five times that in nuclei, reaching almost the density of black holes. This is
one of the reasons why astrophysicists, for quite some time, have been contemplating
that these cores might be in a new state of matter.

6.1 Quarks Become Deconfined

We found, in the previous chapter, that in our case the little balls are hadrons and,
as such, really bound states of quarks and antiquarks or quark triplets, which can be
squeezed into each other. For nucleons, this is true only up to a point, but mesons can
effectively overlap each other completely; and in the primordial hadron gas, most
of the hadrons are mesons, since the formation of a nucleon or antinucleon requires
considerably more energy than that of a meson. So let us assume we have a meson
gas and slowly start compressing it. At the beginning, a particular quark inside a
given hadron sees its partner, the antiquark, and their maximum allowed separation
defines their confinement horizon. Inside the bubble, the partners see each other as
colored; from the outside, the bubble is colorless. But with increasing density, the
hadrons begin to overlap and the small bubbles begin to fuse into larger ones. Our test
quark now sees nearby several other quarks as well, besides just its original partner.
And the poor quark really has no way to remember which of the quarks it sees was
its original partner within some prior hadronic state. So from this density on, the
concept of a hadron, of a color-neutral elementary particle, ceases to make sense.
The medium simply consists of quarks, and these are so dense that any partitioning
into hadrons becomes meaningless. Somewhere along the line, there must have been
a transition from hadronic matter to quark matter (Fig. 6.1).
To see what this means, we recall that normal matter can exist either as an electric
insulator or as a conductor. The insulator is made up of electrically neutral atoms,
while in the conductor, a crystal structure of positive ions is surrounded by a cloud of
effectively free electrons. If we apply a voltage to such a conductor, the free electrons

Fig. 6.1 States of strongly interacting matter: hadronic matter (left) and quark matter (right). The circle in the right picture shows the quarks inside a hadronic radius around the marked quark


will flow as an electric current. Compressing an insulator sufficiently can effectively
liberate electrons and make the material undergo an “insulator–conductor transi-
tion”. So the transition from hadronic matter, consisting of color-neutral hadrons,
to a plasma of deconfined color-charged quarks is in its nature something like the
insulator–conductor transition of strong interaction physics. If we could apply a color
voltage, color currents would flow in quark matter.
An illustration of what happens directly at the transition point is given by water
near the boiling point. Below 100 °C, we have bubbles of air in water, and just
above that temperature, there are still droplets of water in the air. As the temperature
approaches 100 °C from below, the air bubbles fuse together to grow in size; coming
from above, the water droplets begin to condense. In the case of a hadronic medium,
just below the critical density, we have bubbles of colored states in the (infinite)
physical vacuum. Above the transition, we still have some (finite-sized) bubbles
of physical vacuum in the (infinite) world of colored states. Which is the relevant
horizon is therefore determined by the density of the matter.
These two phenomena, the onset of conductivity and the fusion of bubbles, are
in fact brought together in a rather new and quite general field of research, the study
of percolation. It is based on a very simple question: at what density do randomly
distributed objects of a given size form a connected pattern? As an example, let us
throw beer coasters onto a table, allowing them to overlap. How many coasters do
we need to get a connected path from one side of the table to the other? A more
poetic version has lilies floating on a pond (Fig. 6.2); how many lilies are needed to
allow an ant to walk from one side of the pond to the other without getting its feet
wet? If we replace the coasters in the first case by metal coins and apply a voltage to
the opposite edges of the table, then the onset of connectivity will also be an onset of
conductivity, of current flow. The striking feature in all these cases is that the onset
of connectivity, percolation, is very abrupt; for a wide range of densities, not much
changes, and then suddenly the transition from disconnected to connected occurs in
a very narrow range of the number of coasters or coins per table area, or the number
of lilies per pond surface. This is in fact what gave it its name. If you pour water

Fig. 6.2 Lilies on a pond: isolated lilies, lily clusters, percolation


into a filter filled with ground coffee, initially the ingoing water is absorbed by the
coffee; you continue pouring, and then, suddenly, once a critical ratio of water to
coffee is reached, the water passes through freely, it percolates.
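How abrupt the onset is can be seen in a toy version of the coaster experiment: throw n overlapping discs onto the unit square and test whether a connected chain of discs links the left edge to the right edge. The sketch below is only illustrative; the disc radius and the number of trials are chosen for speed, not precision:

import random

def spans(n, r):
    """One trial: do n randomly placed discs of radius r form a connected
    chain from the left edge of the unit square to the right edge?"""
    centers = [(random.random(), random.random()) for _ in range(n)]
    # two discs are connected when their centers are closer than 2r
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx = centers[i][0] - centers[j][0]
            dy = centers[i][1] - centers[j][1]
            if dx * dx + dy * dy < 4 * r * r:
                adj[i].append(j)
                adj[j].append(i)
    # breadth-first search, starting from all discs touching the left edge
    frontier = [i for i in range(n) if centers[i][0] < r]
    seen = set(frontier)
    while frontier:
        i = frontier.pop()
        if centers[i][0] > 1 - r:    # a disc touching the right edge is reached
            return True
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                frontier.append(j)
    return False

random.seed(1)
for n in (80, 120, 140, 160, 200):
    p = sum(spans(n, 0.05) for _ in range(200)) / 200
    print(n, p)   # the spanning probability rises steeply somewhere near n ≈ 140

For a wide range of n almost nothing happens, and then, within a narrow window, the spanning probability shoots up from nearly 0 to nearly 1, just as described above.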
Percolation has numerous applications in various branches of natural science: the
onset of conductivity, the boiling of eggs, the making of pudding, the control of
forest fires, the formation of galaxies, and many more. In all these cases, we want
to know the point at which a system suddenly somehow becomes “connected”. So,
in a way, percolation reflects what one might call a paradigm shift in
physics. The traditional approach asked for the fundamental constituents and their
interactions, and then proposed to somehow determine from these the possible states
of matter and the transitions between them. Implicit in such a scenario is the idea
that the nature of the constituents and the interactions are crucial for the final result.
The laws of percolation, on the other hand, do not distinguish between the molecules
in gelatine, the coins on the table, the clusters of trees in the forest, or the stars in a
galaxy. The new idea is that the laws of collective behavior transcend those of the
specific constituent interactions. We shall return to this shortly.
Here we note that percolation can also help us to estimate when and how the
hadron–quark transition occurs. We are now in three-dimensional space, the hadrons
are spheres of fixed radius, with overlap allowed; then the disappearance of the
vacuum, the empty space into which they are randomly placed, is found to occur at a
density of about 1.2 hadrons per hadronic volume. At that point, 71 % of space is filled
by the overlapping hadrons, isolated bubbles of vacuum making up the remaining
29 %. With about 0.8 fm for the typical hadron radius, that puts the percolation point
at 0.6 hadrons/fm³, almost four times the density found in heavy nuclei. So we expect that at
such a density, the transition from hadronic matter to a plasma of quarks will occur,
and that it takes place quite abruptly, in the same way the coffee suddenly flows out
of the filter.
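The numbers quoted here fit together, as a short check shows. For randomly placed, overlapping spheres, the fraction of space left uncovered is exp(−nV ), a standard result of continuum percolation, with n the sphere density and V the volume of one sphere:

import math

n_c = 1.2                            # critical hadrons per hadronic volume, quoted above
print(round(1 - math.exp(-n_c), 2))  # ≈ 0.70 of space covered, close to the 71 % above

r = 0.8                              # typical hadron radius in fm
V = (4 / 3) * math.pi * r ** 3       # hadronic volume, about 2.14 fm^3
print(round(n_c / V, 2))             # ≈ 0.56 hadrons/fm^3, i.e. roughly 0.6/fm^3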
The passage from hadronic matter to quark matter is often described as the liber-
ation of the quarks from their confinement prison. We see now that this is somewhat
misleading. A quark is bound to remain behind the confinement horizon, inside the
“bag”, because it has to stay close to its antiquark partner, to form a meson, or to two
other quarks to form a nucleon. Confinement means that it cannot get away from
these and therefore is restricted in its spatial mobility. It becomes deconfined, i.e., it
can move around freely as soon as there are enough other partner choices available.
In quark matter, in a dense medium of many quarks, a given quark can move around
freely, as far as it wants to go, because it never gets away from the others. Wherever
it goes, there are many other quarks nearby, and that is why no force holds it back.
Now it’s the vacuum bubbles that are confined, which occur only intermittently in the
world of colored quarks. Some imaginary creature, living in the physical vacuum,
would find that the infinite room it had to roam around before the transition now has
just become bounded.
The ground state for hadrons, i.e., the state of lowest possible energy from their
(and our) point of view, is the physical vacuum, empty space. For the quarks that is
not the case: their state of lowest energy is inside the confinement bag and different
from the physical vacuum. Just as water exerts a pressure on the air bubbles present
near the boiling point, keeping the air molecules inside the bubble, so the vacuum
creates an effective pressure on the bag containing the quarks. The quarks feel this
pressure, the hadrons do not; so the bag pressure of the vacuum is what distinguishes
the lowest energy states for hadrons on one hand and quarks on the other. The zero
mark is different on the two sides, just as our own weight in air is different from what
it is in water.
How can we find out more about the transition from hadronic to quark matter,
and about the new state of deconfined colored quarks? We have today a fundamental
theory of strong interactions, quantum chromodynamics (QCD); so why don’t we
just use it to calculate the transition and the properties of quark matter? That, as
we have just alluded to, brings in a fundamental new issue of physics, and not
only of physics. Just as Newton’s gravity accounts for the attraction of two massive
bodies, or Coulomb’s law for the interaction of two electric charges, so does quantum
chromodynamics describe the interaction of two colored quarks. Matter, on the other
hand, is something more complex.

6.2 Collective Behavior

Knowing all there is to know about the anatomy of an ant, its skeleton, its organs, its
nervous system, and whatever else, is of little help in understanding the workings of an
ant colony. The physics of the helium atom is well understood, all excitation levels are
correctly predicted by quantum mechanics—and yet this tells us almost nothing about
the behavior of liquid helium, of superconductivity, of superfluidity. And even the
psychology of humans and their interactions on an individual level does not predict
crowd behavior. A system of very many components develops its own dynamics,
which goes beyond that shown by individuals or isolated small groups. In physics,
the problem is well known: given mechanics, we need statistical mechanics; given
dynamics, we still have to find the corresponding thermodynamics. This extension
from a two-body situation to the case of arbitrarily many interacting components is
not a “derivation”; it requires the introduction of new and essential concepts. And as
we saw in the case of percolation, these may transcend the borders between different
dynamics.
The basic idea of how to address this problem was proposed 150 years ago by
the Austrian physicist Ludwig Boltzmann. He imagined that for a given case—
many particles in a box, with fixed overall energy—some imaginary superhuman
being, some sort of Maxwellian demon, could calculate all the almost infinitely many
possible states of the system. In other words, he would specify for each particle its
position and momentum, and list them all in some huge catalogue. Boltzmann then
formulated as the fundamental postulate of statistical physics the assumption of equal
a priori probabilities: the system will be in each of these many states with the same
probability. If we now divide the set of all these states into subsets, the postulate
implies that the system will (almost) always find itself in the subset containing the
largest number of states. There are an immense number of different states in which
the air molecules are evenly distributed throughout a room, and only one for which
they all cluster in one given corner. Suffocation by statistical fluctuation is thus not
really a danger. The crucial measure to specify the state of the system is its entropy
S, which is determined by the number of states possible for a many-body system
contained in a box of volume V at a fixed overall energy E. Since an individual state
is specified by all the positions and all the momenta of all the many particles, the total
number Ω(E, V ) of such states is absolutely huge; therefore the entropy is defined as
the logarithm of this huge number: S(E, V ) = k ln Ω(E, V ). The proportionality is
fixed by k, the Boltzmann constant; together with the speed of light, the gravitational
constant and Planck’s constant, it is one of the fundamental constants of nature in
physics. Entropy provides the basis for all of thermodynamics: it is a fundamental
law that it may never decrease. If the overall state of a system changes, it will always
be in the direction of more possible microscopic states, never the other way around.
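A one-line estimate shows just how safe we are. For N independent molecules, the chance that all of them happen to sit in, say, the left half of the room is 2^(−N); with a realistic N the resulting number is absurdly small:

import math

N = 6e23                        # roughly a mole of air molecules, for illustration
log10_p = N * math.log10(0.5)   # log10 of the probability 2**(-N)
print(log10_p)                  # about -1.8e23: the probability is 10 to that power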
Using the subsequently developed formalism of statistical mechanics (see Box 8
for more details), one can study the evolution of systems as a function of the condi-
tions they are put in. Given a large number of water molecules in a box, one finds
at low overall energy (which in thermodynamics translates to low temperature) that
the optimal choice is ice, whereas for increased energy (higher temperature), a dis-
ordered liquid leads to more states and thus becomes the state in which the medium
finds itself. And for even higher values, it becomes a gas. For each transition, one
can then try to determine the critical conditions at which the system changes from
one state to the other.

Box 8. Entropy, Temperature and Pressure

We consider an ideal gas contained in a box of volume V ; ideal means
that we can ignore the interactions between gas particles. The total energy
of all particles thus becomes E, consisting of the kinetic energies of all the
individual components. We denote by Ω(E, V ) the (immense) number of all
the possible states of such a system, that is, all their possible positions and
velocities; we remain in classical mechanics, neglecting all quantum effects.
The entropy of the system is then
S(E, V ) = k ln Ω(E, V ),
where k specifies the Boltzmann constant, connecting thermodynamics
(entropy) with mechanics (number of mechanical states). Both E and V de-
pend on the size of the system; in thermodynamics, however, we would like
to describe many-body systems in general, without reference to their size. To
achieve that, the energy E and the volume V are replaced by the temperature
T and the pressure P, according to the following recipe. We change the energy
a little, keeping the volume constant, and ask how S(E, V ) is changed; this
rate of change defines the temperature. Then we change the volume a little,
keeping the energy constant; that gives us the pressure. In the transition from
mechanics to thermodynamics, we thus replace the quantities E and V , made
up of the contributions of the individual particles, by average values of the
entire system: the temperature T specifies the average energy per particle, the
pressure P the average collision energy per area.
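Stated as formulas, this recipe is just the standard thermodynamic one:

1/T = (∂S/∂E)_V ,    P/T = (∂S/∂V )_E ,

i.e., the rate of change of the entropy with energy at fixed volume defines (the inverse of) the temperature, and its rate of change with volume at fixed energy gives the pressure divided by the temperature.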
The definitions just given hold quite generally. For our ideal gas, however,
everything can be explicitly calculated, leading to the entropy
S(T, V ) = (4π²/90) d V T³,
with d specifying how many different particle species there are; we have d = 1
if there is only one species. The constant factor 4π²/90 arises from counting
the number of possible states in coordinate space and velocity. The relation
for the entropy still contains the volume; dividing this out, we get the entropy
density s(T ) = S(T, V )/V and the pressure
P(T ) = (T /4V ) S(T, V ) = (π²/90) d T⁴.
The entropy is the fundamental quantity of thermodynamics; since systems
can change, if at all, only in the direction of more final configurations, it can
only remain constant or increase.

Up to some thirty years ago, Boltzmann’s original starting point, getting all the
possible states lined up, was completely out of the question, even for systems of
moderate size. The connection between few- and many-body physics therefore had
to be made on the basis of simplifying assumptions. The main idea here was divide
and conquer: if the range of the interaction between the constituents is not very
large, one can assume that the big system consists of a sum of small elements that are
only weakly correlated. In most cases this approach worked very well and gave us
statistical thermodynamics in all its glory. The limiting case of such an idealization
is the perfect gas, in which the constituents don’t interact at all—they are subject
only to the total energy and volume restrictions. The entropy then takes on a very
simple form. If we use the temperature T to specify the average total energy E in
the given volume V , we obtain
s ∼ d T³,
for the entropy density s = S/V , i.e., the entropy per volume. The factor d specifies
how many different kinds of constituents there are; the more different species of con-
stituents we have, the more possible states exist, and hence the larger is the entropy.
Things become problematic for conventional statistical thermodynamics only if the
effective interaction between the constituents is very strong and of long range; in
particular, it breaks down at the transition points from one state to another. At these
critical points, the system realizes how big it is and refuses to be treated as a sum
of little systems; each constituent is now correlated to all others. So for the quarks
in the critical regime from confinement to deconfinement, the conventional methods
of statistical mechanics also turned out to be simply inapplicable. A solution to the
difficulty came with the advent of larger and ever more efficient supercomputers.
One could now indeed create huge numbers of possible configurations and deter-
mine when the probabilities for given states were maximal; Boltzmann’s dream had
become reality. The framework for the new techniques was provided by the American
theorist Kenneth Wilson in 1974, who received the Nobel prize for his pioneering
work in the study of critical phenomena. The corresponding computer methods were
introduced to the field only a few years later by his colleague Michael Creutz from
Brookhaven National Laboratory. So now there was a way to address the relevant
questions, computer simulation, and it was immediately used to see what happens
for hot and dense matter.
In other words, although the critical behavior in quantum chromodynamics could
not be solved through analytical mathematics, the new computer studies came in and
today provide a rather complete view of the behavior of matter in strong-interaction
physics. In particular, it was indeed found that an increase of temperature resulted
in a sudden transition from a gas of hadrons to a state of deconfined quarks and
gluons. Increasing the temperature of the hadron gas makes the individual hadrons
move at ever increasing speed, and so eventually the collisions between them will
result in the production of further hadrons, just as proton–proton collisions did. In the
relativistic regime, an energy input goes only in part into more kinetic energy of the
constituents; another large part is used to produce new particles. As a consequence,
the density of the hadronic medium will increase with temperature, and when it has
become sufficiently high, the hadronic bubbles containing the colored quarks will
fuse to form a large system of deconfined colored quarks. A medium of unbound
charges is generally called a plasma; an electromagnetic plasma contains positive
and negative charges together with photons; they are what gives us light from the
plasma inside a neon tube. Similarly, the deconfined state of strongly interacting
matter contains gluons as well as the quarks of different color charges. In contrast to
the photons in the electromagnetic case, however, the gluons themselves also carry
a color charge, and so they can interact with each other as well as with the quarks.
The deconfined medium in strong-interaction physics is therefore quite generally
referred to as the quark–gluon plasma (QGP), and the temperature at which strongly
interacting matter undergoes the transition from a hadronic to a quark–gluon state
as the deconfinement temperature.
Curiously enough, an upper limit for the temperature of hadronic matter was
expected even before the advent of the quark model. We saw in the previous chapter
that the collision of two energetic hadrons leads to the production of an ever increasing
number of secondaries. And not only their number increases, but also their kind: we
get more and more different species of resonances of ever heavier masses. When the
collision energy is relatively low, it is only possible to produce nucleon resonances
decaying into a nucleon and one pion—a typical form is denoted as Δ. This exists
in four charge states, Δ⁺⁺, Δ⁺, Δ⁰ and Δ⁻, each decaying into the possible states
of one nucleon and one pion, Δ⁺⁺ → p + π⁺, Δ⁺ → p + π⁰ or n + π⁺, etc.
With increasing energy, heavier nucleon resonances could be formed, decaying into
a growing number of pions. And while the Δ has a spin of 3/2, corresponding to a
rotational angular momentum of one unit, combined with the spin of the nucleon,
the heavier resonances could have higher and higher spins as well as more and
more decay channels. The number n(M) of allowed states thus increases with the
resonance mass M, and different theoretical models suggested that this increase is
stronger than any power of M, leading to the exponential form
n(M) ∼ exp(bM),
where b is a constant. We have already seen that in high energy collisions a significant
part of the collision energy goes into creation of new particles; but with such a growth
in the number of available possible particles, the fraction going into particle creation
becomes ever larger, the fraction available for kinetic energy—the motion of the
produced particles—ever smaller. The underlying reason for this is quite simple. A
given cake, divided among four eaters, gives each a quarter of the cake, and if the
cake is made bigger, each gets a bigger piece. If, however, the number of eaters grows
with the size of the cake, this is not necessarily so. In fact, if the number of eaters
grows just as fast as the size of the cake, each eater gets a piece of cake of fixed
size, no matter how large the cake becomes. The German theorist Rolf Hagedorn,
working at CERN, concluded in 1965, in a bold conjecture, that this ultimately leads
to a limit for the possible kinetic energy of the particles. And since the energy of
particle motion is effectively given by the temperature, Hagedorn proposed that this
would provide an upper limit to the range of temperatures.
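A compact way to see why such a growth caps the temperature is to weight each resonance of mass M with its Boltzmann factor; schematically, suppressing all prefactors, the partition sum behaves as

Z(T ) ∼ ∫ dM n(M) e^(−M/T) ∼ ∫ dM e^((b − 1/T) M),

which converges only as long as T < 1/b. Near that point, any added energy goes into ever heavier and more numerous species rather than into faster motion.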

6.3 The Ultimate Temperature of Matter

Just as there is a lower limit, T0 = −273 °C = 0 K, there should be an upper limit,
which with the mentioned resonance growth became TH = 1/b, and once b was
determined, led to TH ≈ 150 MeV, or about 10¹² K. Just as no matter can be colder
than T0 , so no matter could be hotter than TH , the Hagedorn temperature. In one
way, this conclusion was correct; in our world, matter cannot get any hotter. But,
as the Italian theorists Nicola Cabibbo and Giorgio Parisi pointed out not long after
Hagedorn’s proposal, it is possible that at TH strongly interacting matter undergoes
a transition into a different world—one no longer consisting of hadrons. If the con-
stituents of this world were unbound quarks instead of hadrons, the temperature of
such a medium could increase arbitrarily much above TH . And so today Hagedorn’s
temperature is generally interpreted as the transition temperature of strongly inter-
acting matter, dividing the hadronic state from a quark–gluon plasma. Nevertheless,
Hagedorn’s idea still maintained some of its validity: the hot plasma exists beyond
the confinement horizon, in a different colored world. In our world, with the physical
vacuum as ground state and using our conventional thermometers, it is not possible
to measure any temperature exceeding TH .
To see how such a transition might take place, we recall what happens when we
boil water, making it evaporate. As we heat the water, its temperature rises. As we
reach the boiling point, bubbles of vapor begin to form, and now the temperature
no longer increases, even though we continue to supply more heat. The heat is now
Fig. 6.3 The heating and evaporation of water; Tb denotes the boiling temperature

needed to convert more and more of the water into vapor, and only when all the
water has evaporated does more heat increase the vapor temperature. The situation
is illustrated in Fig. 6.3; the amount of heat needed to convert the water at the boiling
point into vapor is the latent heat of evaporation.
To study the transition from hadronic matter to a quark–gluon plasma, we proceed
in a similar fashion. In the framework of quantum chromodynamics, one can cal-
culate how the temperature varies as the system is heated, i.e., as its energy density
is increased. Initially, the relation between temperature and energy density is that
of a gas of pions. With increasing temperature, more hadron species, excited meson
states as well as nucleon–antinucleon pairs, come into play and thereby increase the
hadron density. But still a considerable part of the volume of the box is just vacuum.
Then, quite suddenly, the temperature stops increasing for increasing energy density:
we have reached the hadronic evaporation point, the onset of deconfinement. The
medium now contains bubbles of quark–gluon plasma, and until all confined hadrons
are converted into quark–gluon vapor, a further increase of energy input does not
lead to an increase of temperature—it goes into the latent heat of deconfinement.
It is needed to melt the hadrons into their quark and gluon constituents and at the
same time increase their density to fill all of space. At the end of the deconfinement
process, the medium is now a quark–gluon plasma, a dense, colored medium, con-
taining at most some isolated vacuum bubbles. The computer studies of the strong
interaction thermodynamics, based on quantum chromodynamics, indeed provide
precisely this form of behavior; in Fig. 6.4 we show the results. The temperature at
which the transition takes place is found to be about 170 MeV, in the customary units
of particle physics; it corresponds to about 2 × 10¹² K. At this point, the average
density of hadrons has reached a value that corresponds to that at which hadronic
bubble fusion would result in percolation. If we continue to increase the energy
input, quarks and gluons eventually form an ideal gas, since at high density and short
interquark distances, the asymptotic freedom of quantum chromodynamics sets in:
the quark–quark interaction becomes ever weaker. Since in such an ideal gas the
energy density and the temperature are connected by the famous Stefan–Boltzmann
relation, ε ∼ d T⁴, it is useful to consider the energy density divided by T⁴. Here
d counts the number of quark and gluon states, so that we know what behavior to
Fig. 6.4 The temperature of strongly interacting matter, shown for increasing energy density (plotted as ε/T⁴). Here ε denotes the energy density, Tc the deconfinement temperature. The Stefan–Boltzmann limit at high energy density is indicated by SB

expect at high energy density. The results from numerical quantum chromodynamics
studies are summarized in Fig. 6.4.
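The size of the jump visible in Fig. 6.4 can be estimated by simply counting states on the two sides of the transition; a sketch assuming two massless quark flavors, where the factor 7/8 accounts for Fermi rather than Bose statistics:

d_pions = 3               # pi+, pi-, pi0: the dominant hadron species
d_gluons = 8 * 2          # 8 color states x 2 polarizations = 16
d_quarks = 2 * 2 * 2 * 3  # 2 flavors x quark/antiquark x 2 spins x 3 colors = 24
d_qgp = d_gluons + (7 / 8) * d_quarks
print(d_qgp)              # 37 effective degrees of freedom in the plasma
print(d_qgp / d_pions)    # ≈ 12: roughly the factor by which ε/T⁴ jumps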
Expansion cosmology tells us how the value of the overall energy density of
the universe changed and still changes with time, as a function of the age of the
universe. Looking back, we can therefore check how long after the Big Bang the
confinement energy density that we just found was reached, i.e., when the universe
came to the end of its quark era. The result is 10⁻⁵ s: the universe changed from its
early colored stage into our present world at the age of about 10 μs. So that point
in time is a genuine historical horizon: the birth of “nothing”. Before, the universe
was a medium of colored quarks and gluons, with their own specific mark of zero
energy; afterwards, the physical vacuum defined this point. Before, it did not make
sense to say “here is something, there is nothing”—there were quarks everywhere,
there was no nothing. Ever since then, the physical vacuum forms the background,
“nothing” rules and color has to remain behind the confinement horizon.
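The 10 μs figure can be reproduced with the standard radiation-era relation t [s] ≈ 2.42/(√g∗ (T /MeV)²), where g∗ counts the effective number of particle species; its value just above the transition is taken here as an assumption:

import math

g_star = 61.75            # effective degrees of freedom above Tc (assumed value)
T = 170.0                 # transition temperature in MeV
t = 2.42 / (math.sqrt(g_star) * T ** 2)
print(t)                  # ≈ 1.1e-05 s, i.e. about ten microseconds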
The quark era of the universe is long past, and neutron stars are far away and
difficult to investigate. It is thus not surprising that here, too, human curiosity
once more won out, asking whether it might be possible to create such quark matter
on Earth. How could one compress matter enough to reach the extreme densities
necessary to form a quark phase? We would like to create, in the laboratory, something
like the primordial deconfined state of matter of the early universe.

6.4 The Little Bang

To achieve that, there seemed to be only one way, if any, an idea that gained more
and more interest in the early 1980s. When two heavy nuclei collide “head-on” at
sufficiently high energy, in their collision they might form, albeit for a short time
only, a bubble of the colored primordial plasma.

One of the main proponents of a new research program to try this, the Nobel-
laureate theorist Tsung-Dao Lee from Columbia University in New York, explained
the idea to the famous Chinese painter Li Keran, who in 1989 composed a picture of
two fighting bulls with the title “Nuclei, as heavy as bulls, through collision generate
new states of matter” (Fig. 6.5).
At that time, as well as today, the text evidently allowed various interpretations;
but today the bulls also exist as a beautiful life-size sculpture, close to the campus
of Tsinghua University in Beijing. The justification for a research program devoted
to the empirical study of quark matter is quite different from most others in recent
times. It is not the search for a well-defined and theoretically predicted entity, such
as the elusive Higgs boson, the ultimate aim of several presently ongoing experiments
in Europe as well as in the USA. It is also not the purely exploratory study of the
strong interaction in the last century: what happens if we collide two protons at ever
higher energy? Instead, it is almost alchemistic in nature: can we find a way to make
gold? Is it possible, through the collisions of two heavy nuclei at high enough energy,
to make strongly interacting matter of high energy density and study its behavior in
the laboratory?
The idea is quite straightforward. At the start of the program, beams of energetic
nuclei were made to hit stationary nuclear targets. In the modern versions, however,
at CERN, near Geneva in Switzerland, and at Brookhaven National Laboratory, near
New York in the USA, two beams of nuclei, moving in opposite directions, collide
“head-on” in the detectors of the experiment. The incoming nuclei appear as pancakes
to an observer stationed in the laboratory, since their high momenta, through the

Fig. 6.5 Li Keran’s painting “Nuclei, as heavy as bulls, through collision generate new states of
matter”
Lorentz contraction of scales in special relativity, compress them in the longitudinal
direction. In the collision, they do not, however, just stop each other: the situation
is more like shooting two beams of water at each other. Some drops indeed hit each
other at the first instant and cease their forward motion, leading to a sideways splash.
Others are only partially stopped and thus spray sideways somewhat later, retaining
some of their motion along the beam direction. In much the same way, the passage
of the two colliding nuclei results in a jet stream of deposited energy,
of overlapping fireballs containing a hot excited medium. These highly compressed
bubbles form a dense medium expanding along the collision direction: this is the
candidate for the desired quark–gluon plasma (Fig. 6.6). It subsequently expands,
cools off and eventually leads to the production of hadrons, to be detected in the
laboratory.
The opinions on the feasibility of such an endeavor were mixed. The great Amer-
ican theorist Richard Feynman, one of the founding fathers of quantum electrody-
namics, was as always ready with a concise judgement and said “if I throw my watch
against the wall, I get a broken watch, not a new state of matter”. Actually, the prob-
lem had two sides. One was the aspect Feynman had addressed: is it possible to create
through collisions something we would call matter? The other was the question of
whether experimentalists would be able to handle the large number of hadrons that
would be produced in such collisions. Both were indeed serious, but the promise of
finding a way to carry out an experimental study of the stuff that made up the primor-
dial universe—that promise was enough to get a first experimental program going,
in 1986, at Brookhaven National Laboratory (BNL) and at CERN. To minimize the
costs, both labs used existing injectors, existing accelerators, existing detectors and,
as someone pointed out, existing physicists not needing additional salaries: one big
recycling project. In any case, the second of the two questions mentioned above was
indeed—and resoundingly—answered in the affirmative. At the highest present col-
lision energy, one single collision of two heavy nuclei produces some thousands of
new particles, and the detectors, the analysis programs and the experienced physicists
can handle even that. The beauty of these little bangs is seen in Fig. 6.7; the golden
lines are the tracks of the different hadrons eventually emerging from the collision
region.
This success got the new program off the ground, and today, the high-energy
nuclear collision research program involves more than two thousand physicists
throughout the world, in theory and in experiment. Nuclear collision experiments are

Fig. 6.6 The little bang in theory: quark–gluon plasma formation in high-energy nuclear interac-
tions, left: before, right: after the collision
Fig. 6.7 The little bang in experiment: particle production in the collision of two lead nuclei at
CERN

presently being carried out with new accelerators at CERN and at BNL, and further
facilities are in the planning and construction stage at Darmstadt in Germany and
at Dubna in Russia. Incidentally, instead of nuclear collisions, one often refers to
collisions of heavy ions. That is actually not quite correct: removing one or more
electrons from an atom makes it an ion, removing all of them leaves the nucleus.
The terminology “heavy ion” came into use during a time when “nuclear” was not
considered a politically correct word…
The first and conceptually more serious question is not yet answered as clearly,
and in the physics community there are still adherents of Feynman’s point of view.
The energy set free in the present collisions is the highest ever reached on Earth.
Thousands of new particles are produced in a rather small volume, so that also the
initial density of constituents is extremely high. We shall see shortly that this is
in fact also confirmed by the fate suffered by probes traversing the early stages of
the produced systems. And it is only possible to understand what is happening in
those stages in terms of quarks, gluons and their interactions. All these observations
are correct beyond any reasonable doubt. But does that allow us to speak of quark
matter? What are the essential features of matter? And how can one show that the
media produced in high-energy nuclear collisions share these features, both in the
early stages and in the later evolution? Those are the basic questions presently being
addressed by the ongoing experiments—in the hope that the truly conclusive final
answers are just around the corner.

One striking feature observed in the study of particle production by high-energy
collisions does indeed suggest that these collisions lead to a hot quark medium,
which then cools off, neutralizes its color and becomes part of our world. This is
the behavior of the abundances of the hadron species produced in nuclear collisions:
one finds with remarkable precision that a great number of different species follow
a common pattern of hadronization.

6.5 Universal Hadrosynthesis

Conceptually, the idea is quite similar to the primordial nucleosynthesis that
cosmologists use to establish that the Big Bang theory is the correct description
of the evolution of the early universe. At one stage of this evolution, the universe was
a hot gas of nucleons, electrons, photons and neutrinos. Given the temperature of this
medium, the relative abundances of protons and neutrons are determined by thermo-
dynamics. In other words, one can determine the ratio of protons to neutrons—there
are fewer neutrons, since they are slightly heavier than protons. Initially, this medium
was too hot for nuclei to exist: the kinetic energy of the constituents flying around
was so high that any fleeting bound state of two nucleons was immediately destroyed
again. After the first three minutes, the universe had cooled down enough to allow
nucleons to bind, and nucleosynthesis set in. Given the ratio of protons to neutrons,
one can predict the relative abundances of hydrogen, deuterium and helium—and
those, apart from local modifications from star formations, still hold today for the
average universe. The observation of these abundances provides one of the three
pillars of the Big Bang theory, with Hubble’s law of expansion and the microwave
background radiation as the other two.
We can now apply this idea to the hadronization of the quark–gluon plasma.
Since this confinement transition occurs at a critical temperature Tc ≈ 170 MeV,
as we found above, we can predict the relative abundances of the different hadron
species present in a hadronic medium of that temperature. And if nuclear collisions
indeed lead to a quark plasma which then hadronizes, we can predict the composition
of the system, i.e., the relative abundances of the different types of species formed in
the hadronization. We can therefore predict how many of the tracks on Fig. 6.7 are
the traces of pions, how many kaons, how many nucleons, and so on. In this case, the
test is much more stringent than it was for primordial nucleosynthesis. There, it was
mainly a question of hydrogen, deuterium and helium, and it happened gradually, with
deuterium appearing before helium. Here, the experiment provides the measurement
of more than ten different hadron species, and if they all appear at the same known
transition temperature, their relative rates are predicted. So hadrosynthesis is the
nucleosynthesis counterpart of the little bang, and extensive lists of the relative
production rates are calculated. About 86 % of the observed hadrons are expected
to be pions, 4 % kaons and antikaons, 4 % nucleons, 4 % antinucleons, and so on. In
particular, predictions are also made for those hadron species that are produced very
rarely, with rates of much less than one percent of all produced particles. And the data
from nuclear collisions agree remarkably well with all these predictions. Moreover,
this agreement does not depend on the collision energy of the incident nuclei—once
that energy is high enough to make a hot plasma, it will always proceed to cool off and
turn into hadrons at the same temperature, resulting in the same relative abundances
of the different species. In this aspect, the transition from the colored quark world
to the physical world is thus similar to the condensation of water vapor; no matter
how hot the vapor initially was, it will always condense when it has cooled down to
100 °C.
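A bare-bones version of such an abundance estimate weights each species with its thermal Boltzmann factor; in this approximation the number density of a hadron of mass m and degeneracy g is proportional to g m² T K₂(m/T). The sketch below compares the lightest hadron with a heavy one, the Ω baryon; it ignores resonance feed-down and chemical potentials, so its numbers are indicative only:

import math
from scipy.special import kn       # modified Bessel function K_n(x)

T = 0.170                          # hadronization temperature in GeV

def weight(g, m):
    # Boltzmann-approximation number density, up to a common constant
    return g * m ** 2 * T * kn(2, m / T)

pi_plus = weight(1, 0.140)         # one pion charge state, spin 0
omega = weight(4, 1.672)           # Omega baryon: spin 3/2, so 4 spin states
print(round(omega / pi_plus, 4))   # ≈ 0.006: the heavy species is at the sub-percent level

This is the pattern described above: the heavier the species, the rarer it is, with the rates fixed by the hadronization temperature alone.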
This universal distribution of the different hadron species also throws some light
on the underlying process of hadron formation. The hot plasma is a medium of
freely moving colored quarks at very high density—each quark sees in its immediate
vicinity, less than one femtometer away, many other quarks and antiquarks. As it
cools off, the density decreases, and eventually it drops to a point where a given quark
reaches its confinement horizon, where the next antiquark is almost one femtometer
away. The resulting situation is quite similar to that encountered above in our study
of electron–positron annihilation. For the cooling quark–gluon plasma, the price
of separation is the same as it was in annihilation: the formation of a new quark–
antiquark pair, left to escape into the physical world. In both cases, the energy for this
pair creation is determined by the tension of the string between quark and antiquark,
and thus universal. And again in both cases, hadron abundances should not carry
information about the previous colored world, should be random. Hence given such a
general scheme of hadron formation, the observed relative abundances of the hadrons
produced in nuclear collisions should agree with those found in electron–positron
annihilation, and indeed they do.
Looking for the smallest possible thing, we have found that the reduction chain
in the structure of matter finally ends the way that Lucretius had proposed over two
thousand years ago. Matter is made of quarks, and the early universe was a quark–
gluon plasma; but quarks and gluons remain forever confined to their world of color
and can never cross the confinement horizon. The only signals we can ever receive
from that world are the hadrons emitted whenever we try to “stretch” it, either in
collision cascades or by cooling a quark plasma. And these hadrons are thermal—
they carry no information about the previous colored world except that of the state
of the medium from which they were emitted at the time when they were emitted,
the hadronization temperature. Or could they after all tell us a little more? Just as
the hot early universe was expanding, the medium formed in the little bang was also
initially very hot and then expanded. This expansion will give a “boost” to the finally
emerging hadrons, and if we can find a way to relate the size of this boost to the initial
energy density, then we could estimate how hot it was at the beginning. In contrast to
the Big Bang, there was no singularity at that beginning; nevertheless, the subsequent
expansion may well parallel that of the universe in its early quark state. Hubble’s
law indicates that the further away a galaxy, the faster it appears to recede from our
observation site. In nuclear collisions, the situation is quite similar: as the two nuclei
pass each other, they first leave behind a fireball at rest in the laboratory; later further
fireballs appear, moving away, and again, moving faster the further away they are.

The cascade of fireballs formed in a nuclear collision is thus indeed a little bang
analogue of the expanding universe, with the initial collision configuration playing
the role of the inflationary mechanism after the Big Bang.
In the analysis of nuclear collisions, we have some possibilities that black holes
don’t offer. We cannot extract quarks from quark matter, just as we could not split
hadrons into quarks or get something out of a black hole. But we can send a probe into
quark matter and see what happens to it—if we choose the right kind of probe, it can
get back out, and by studying what happened to it in the passage through the colored
world, we can perhaps learn something about this world. Strong interactions are not as
universal as gravity. If a colored quark–antiquark pair in the interior should annihilate
into an electron and a positron—that happens very rarely, since the electromagnetic
interaction is so much weaker than the strong one, but it does happen—then these
can get out: they are not strongly interacting and therefore not subject to any color
restrictions. So the experimental study of the little bang provided by high-energy
collisions of heavy nuclei has access to essentially three kinds of probes of quark
deconfinement.
One is given by the hadron radiation emitted when the quark matter has cooled
off enough to reach the confinement transition; a second is the rare electromagnetic
radiation emitted by quark–antiquark annihilation in the plasma of charged quarks.
A third, to which we will turn shortly, is the study of the fate of probes having passed
through the medium in its earlier hot stages. The first has, as we just saw, shown
that there is indeed a universal pattern of what happens when quark matter turns
into hadron matter, when the physical vacuum appears—that occurs indeed at the
confinement temperature calculated in quantum chromodynamics. It has, however,
so far not yet shown us very much about the earlier stages. In fact, if we were only
able to measure the normal hadrons produced in the collisions, hadronization would
play the role of the last scattering horizon in the Big Bang evolution. We cannot
look back further than that, because before that time the photons we see interacted
with the medium, and in the interaction the information about previous stages was
destroyed, was lost to thermalization. The microwave background radiation reflects
the state of the universe at the time the photons decoupled from matter. Similarly,
the hadrons we measure in nuclear collisions reflect the state of the medium at the
confinement point. Here, however, we have some tools that allow us to go back to
earlier stages.
And so the hope of the experimentalists at CERN and Brookhaven is that the other
tools, electromagnetic radiation from the quark plasma and the possibility of sending
probes through the produced medium, will reveal some of the properties of the hot,
early, colored medium. The hope is that these tools can answer a very challenging
question.
6.6 How Hot is the Quark–Gluon Plasma?
The proposed procedure is to some extent based on the success of similar methods
applied in astrophysics, in the study of stellar matter, also beyond our reach. The
temperature and the composition of distant stars are determined largely through the
spectral analysis of the light they emit. The interior of these stars is generally so hot
that it is a plasma of electrons and nucleons, emitting a continuous spectrum of light.
The frequency of this light is proportional to the energy density of the inner medium.
In the cooler outer corona of the star, atoms can survive, and the passing light from
the interior excites their electrons from the ground state to higher level orbits. The
photons doing this are thereby removed from the continuous spectrum, and this shows
up: there are absorption lines, whose position indicates the element present in the
corona, and whose strength measures the energy of the light from the interior. To
take the simplest case: if the corona contains hydrogen atoms, then the frequencies
needed to excite these to the different excitation levels are candidates for absorption
lines. In the case of relatively cool stars, the photons will not be energetic enough
to do much except to bring the atoms into their lowest excited state. Sufficiently hot
stars, on the other hand, will generally result in jumps to higher excitation states. So
by looking at which excited states are the target of the photons, we can tell what the
temperature of the stellar core is—see Fig. 6.8.
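For hydrogen, the positions of these absorption lines follow from the textbook level energies E_n = −13.6/n² eV. The following few lines of Python, a sketch added here and not part of the original text, show why the line pattern acts as a thermometer: cool stars mainly absorb out of the ground state, hot stars out of levels that are already thermally excited.

# Hydrogen level energies: E_n = -13.6 eV / n^2
def absorption_energy(n_from, n_to):
    """Photon energy (in eV) absorbed when an electron jumps from n_from to n_to."""
    E1 = -13.6  # ground-state energy in eV
    return E1 / n_to**2 - E1 / n_from**2

# cool star: absorption mostly out of the ground state (Lyman alpha, 1 -> 2)
# hot star: absorption out of thermally excited levels (Balmer alpha, 2 -> 3)
print(f"1 -> 2: {absorption_energy(1, 2):.2f} eV")  # about 10.2 eV
print(f"2 -> 3: {absorption_energy(2, 3):.2f} eV")  # about 1.9 eV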
Twenty-five years ago, my Japanese colleague Tetsuo Matsui and I proposed that
a similar method could be applied to study the early interior of the medium produced
in high-energy nuclear collisions. Here one would observe the mass spectrum of
electron–positron pairs, instead of the photon spectrum from stars. Ideally, this spec-
trum would arise from the annihilation of quark–antiquark pairs in the hot plasma; in
practice, a number of competing sources come into play and have to be eliminated.
Moreover, thermal radiation is emitted at all stages of the evolution, so that it becomes
difficult to single out that coming from the quark–gluon plasma. This has so far made the identification of the thermal radiation from quark matter rather difficult.

Fig. 6.8 Stellar spectra as a measure of the temperature of a star's interior: intensity versus frequency, for a hot star and a cool star

But on
top of the smooth curve found for the mass distribution of the electron–positron pairs,
there are some very pronounced sharp peaks at well-defined positions. They are the
signals of quarkonium production in nuclear collisions, and they can play the role
which the atoms in the corona had in the stellar spectral analysis.
Quarkonia are rare species of hadron, bound states of the heavy quarks we men-
tioned in Chap. 5, whose flavors are denoted as charm and bottom. These heavy
quarks are indeed that, with masses of about 1.3 GeV for charm and 4.2 GeV for
bottom, in contrast to the almost massless u and d quarks making up nucleons and
much of the meson sector. Forming bound states of charm quarks and antiquarks
leads to charmonia, with the J/ψ of mass 3.1 GeV as the lowest state, while bottom quarks give rise to bottomonia and the Υ, with 9.5 GeV, as the ground state. In both
cases, the ground states cannot decay into mesons with heavy flavor quantum num-
bers. The mass of two D mesons, each consisting of one light and one charm quark,
is 3.8 GeV and hence much bigger than the mass of the J/ψ. The situation is the same for the Υ, so that the ground states of quarkonia are always below the decay
threshold. The same holds for the first few excited states, but with increasing mass,
these get closer and closer to the threshold and thus are less and less tightly bound. As
a result of the large quark masses, quarkonia turn out to be very small, much smaller
than normal hadrons, and they are also much more tightly bound. Moreover, they
are, in the evolution history of the medium produced in the collision, some kind of
primordial animals: they were there first, right at the instance of the collision, before
any thermal medium such as the quark–gluon plasma was formed, and very long
before any normal hadrons could make their appearance. Because of their small size
and tight binding, they can also survive in the hot plasma, unless that gets really hot.
It’s a bit like ice cubes in very cold vodka or aquavit – if the temperature of the drink
is below the freezing point of water, the alcohol content keeps it liquid, and the cubes
don't melt. But once the drink gets warm enough, above 0 ◦ C, the ice cubes do melt. Similarly, the really tightly bound Υ is expected to survive even if the temperature
in the hot quark plasma reaches values of twice the transition temperature. And each
quarkonium state would have its own melting temperature, depending on how tightly
it is bound. Since the higher excited states are less tightly bound than the ground
states, they are the first to melt with increasing temperature, the ground states the last.
That’s why we thought that the spectral lines of the quarkonia, as observed in
nuclear collisions, could tell us something about how hot the interior of the pro-
duced medium was. If the initial plasma temperature was sufficiently high, almost
everything would have melted and we would see only very weak quarkonium lines.
For a lower initial plasma temperature, some of the more tightly bound quarkonium
states in the core would survive, and by identifying which were still there, we could
specify the conditions of the quark–gluon plasma. The strength of the quarkonium
signals would thus serve as a thermometer of the quark matter produced in nuclear
collisions—see Fig. 6.9.
Such a spectral analysis of high-energy nuclear collision media was started with the first experiments and is presently still in progress. Numerous complicating factors have made it more difficult than expected to reach definite conclusions.

Fig. 6.9 Quarkonium mass distributions (intensity versus quarkonium mass) as a function of the temperature of the surrounding quark–gluon plasma, with curves for a hot plasma, a cool plasma and no plasma; “no plasma” is equivalent to zero temperature

Fig. 6.10 The quarkonium thermometer, indicating the melting points of the different quarkonium states, with charmonia on the left and bottomonia on the right; the temperature scale is in units of the hadrosynthesis temperature Tc

But essential features of the test have been observed: for both charmonia and bottomonia, the excited states disappear at lower energy densities than the ground states. In the case of charmonia, there are the χc and the ψ′ as excited states above the ground state J/ψ; for the bottomonium family, we have the Υ′, Υ″, χb and χ′b above the Υ. In principle
(and very soon also in practice) the statistical mechanics of quark matter allows the
calculation of the dissociation points of all the different quarkonium states, and some
indicative results already exist. They provide us with a “quarkonium thermometer”
of the kind shown in Fig. 6.10, where the different melting points are indicated in
units of the quark–hadron transition temperature Tc .
Thus the charmonium ground state J/ψ is expected to become dissociated at about 2Tc, while the bottomonium ground state Υ remains present until 4Tc. If these
different melting points can be determined in nuclear collisions and are found to
agree with the calculated values, we will have direct quantitative evidence for the
production of quark matter. Recent results from the Large Hadron Collider at CERN
indicate that in particular bottomonium spectra may indeed allow such a comparison
of theory and experiment.
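The logic of reading off such a thermometer can be condensed into a few lines of code. The sketch below is an illustration added here, not from the book; only the J/ψ (about 2 Tc) and Υ (about 4 Tc) melting points are quoted in the text, while the remaining values are rough placements in the spirit of Fig. 6.10.

def surviving_states(T, melting):
    """Quarkonium states still bound in a plasma at temperature T (in units of Tc)."""
    return sorted(state for state, T_melt in melting.items() if T_melt > T)

# illustrative melting points in units of Tc; only J/psi and Upsilon from the text
melting = {"psi'": 1.1, "chi_c": 1.2, "J/psi": 2.0,
           "Upsilon''": 1.2, "chi_b": 1.3, "Upsilon'": 1.6, "Upsilon": 4.0}

# which spectral peaks survive brackets the initial plasma temperature
print(surviving_states(1.5, melting))  # the loosely bound states have melted
print(surviving_states(3.0, melting))  # only the Upsilon ground state is left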
7 Hidden Symmetries
Beauty is a manifestation of concealed laws of nature, which otherwise would never have been discovered.
Johann Wolfgang von Goethe, Maxims and Reflections

Tracing back the evolution of the universe closer and closer to the Big Bang, we
have encountered the confinement horizon, the threshold at which the quarks were
combined to form physical hadrons, at which the physical vacuum first appeared, at
which empty space entered the stage. Before that point, at still earlier times and cor-
respondingly higher temperatures, the world was filled densely with colored quarks
and gluons. And the transition from that world to our present one can, as we saw, be
studied experimentally, at least up to a point, in high-energy nuclear collisions. But
quarks and gluons were not the only constituents around in the early universe: there
were the leptons, electrons and neutrinos, as well as photons and the heavy vector
bosons W ±, Z 0. And just as the confined quarks of today were once in another,
deconfined state, so one expects the leptons and their force particles to have under-
gone some transition from a still much earlier, more symmetric state to the one we
observe today. What happened at that transition? Since both leptons and quarks still
confront us with the task of understanding how their intrinsic masses were brought
into the game, we may wonder if they appeared through such a transition. These
aspects, one believes today, can best be understood in terms of symmetries inherent
in the underlying theory, and their “breaking” in the course of the evolution after the
Big Bang.
Symmetry is appreciated by man as much as by nature. It appeals profoundly to
our aesthetic sense, it is perhaps the essence of what we call beauty. It is a projection
of a divine world onto the more mundane one we live in; throughout the ages,
man imagined perfection in the form of heavenly spheres. Reflecting this perfection
into the real world, nature uses symmetry to construct spheres, crystals, snowflakes,
tigers, butterflies, flowers, leaves and so much more of what we see around us. Here,
if anywhere, we feel that we understand a little the blueprint of creation.

Fig. 7.1 Three forms of symmetry

Symmetry has over the years become the dominant guiding principle in physics,
and the fundamental conservation laws—of energy, momentum, angular momentum,
charge, and more—have been shown to be the consequence of the invariance of
nature’s behavior under certain symmetry operations. More recently, symmetry has
also turned out to be the ideal tool to account for the different stages of the universe
in its early evolution after the Big Bang. The underlying idea is very simple: nature
was inherently more symmetric than it appears now; this symmetry, openly present
at the beginning, became somehow hidden in the further evolution. When we see
the half moon above in the sky, we know that that’s not all, that it is really round.
The Bible notes that in the beginning, the Earth had no form; so originally it was
symmetric, the form came later, the symmetry became hidden. Let us therefore begin
by looking at the symmetries we see and at those that might be concealed but are
nevertheless still inherently there.
The basic aspect of symmetry is most readily seen by looking at geometric figures
and considering which operations we can perform on them that will leave them
unchanged. In Fig. 7.1, we illustrate three common kinds of symmetry. The tree in
(Fig. 7.1a) remains the same if we reflect it about the vertical dashed line through
its middle; the star in (Fig. 7.1b) allows this and five other reflections, as well as
rotations in the plane of the figure, by angles of 60, 120, 180, 240 and 300◦ . The
circle in (Fig. 7.1c), finally, remains unchanged under rotations by whatever angle
and by reflections about whatever axis passing through its center. So when we say
that a geometric figure has a certain symmetry, we mean that there exist specific
operations which we can apply to it and which leave it unchanged, invariant.
The first two examples are denoted as discrete symmetries, since here the figure is
left unchanged under a fixed (“discrete”) number of operations; the last, in contrast,
is a continuous symmetry, with infinitely many possible operations, rotations by any
angle and reflections about any axis through the middle. From Fig. 7.1a we conclude
that reflections and rotations are not the same: no rotation in the plane will reproduce
the tree, other than the trivial one by 360◦ . A perhaps even more convincing way is
to stand in front of a mirror and extend your right arm. No matter how you turn, you
will never reproduce the picture that you see of yourself in the mirror: the person there holds the left arm out.

Fig. 7.2 Inverting all charges will leave the system unchanged, invariant
Besides the symmetries of individual geometric objects, we can imagine more
complex situations, involving several components. The ring shown in Fig. 7.2 holds
three positive and three negative charges, in an alternating pattern. If we invert all
charges, plus to minus, minus to plus, the system will not change: it is invariant under
the global transformation of full charge inversion. It is not invariant, however, under
the inversion of just one charge, whichever that might be, or under that of a certain
number of them, less than all. Such local operations change the configuration of the
system, and in fact simple geometric patterns or configurations remain invariant only
under global transformations. To have local invariance, it becomes necessary that
the components of the system interact with each other, creating an intimate relation
between local symmetry and interacting constituents. If we bring in an attraction
between opposite and a repulsion between like charges, then it becomes possible
that the one charge we invert sends a message to its neighbors, causing them to
invert as well, and so on, until the full original configuration is restored.
There are other crucial invariances of the world around us. When we formulate the
laws of physics, we believe that they should be the same for phenomena in Europe
as in Australia. The time it takes for a stone to fall one meter will be the same here
as it is at the antipodes, provided both places are at the same altitude, and it will be
the same today as yesterday or tomorrow. So the laws of physics must be invariant
under translations in space and time. And not only under those: the time it takes for
a stone to fall that meter is the same for a stationary observer as for one in a train
moving at a constant speed. So we can add the Lorentz transformations of relativity
theory to the set of the continuous symmetries leaving the laws of our physical world
invariant. And these laws indeed provide such a general framework, of which any
specific event is just one possible realization.
So there exist a variety of different transformations that can define symmetries,
discrete and continuous, for single objects or multicomponent systems, global and
local. But at least as interesting as the different forms of symmetry are the different
ways that an existing symmetry can suddenly disappear, be somehow hidden. A
perfect, “honest” roulette table has an intrinsic symmetry: each of the 37 numbers
from zero to 36 is equally possible. The number on which the spinning ball finally
comes to rest can be any one of these 37 numbers; the game as such is invariant
under the 36 rotations. But when the ball is played, “faites vos jeux”, it chooses one
number, and by this choice, by the position of the ball, the state in which the system
finds itself breaks its intrinsic symmetry. So the symmetry inherent in the system
and that of the specific state it may be in, these two aspects are here different: the
symmetry of the system, the law of roulette requires that all its possible states are
equally likely; the completion of a spin will choose one of these.
Nature has many ways of playing such games. Take water—apart from possible
effects of gravity, the structure of the liquid is the same in all directions, it is isotropic
and homogeneous. The laws of physics describing the interaction of the water mole-
cules must therefore be invariant under rotations about any angle and translations
in any direction. But if we lower the temperature below the freezing point, ice is
formed, crystals and plates with certain geometric structures appear, and they break
the symmetry of liquid water. Similarly, isotropic, homogeneous water vapor can
suddenly turn into snowflakes of a complex geometric pattern defining some dis-
crete symmetries. Why are snowflakes allowed to break the inherent continuous
symmetry of the laws governing the behavior of molecules in water vapor? How can
ice break it? More generally, if the state of a system shows a certain symmetry at
high temperatures, what happens to that symmetry when we lower the temperature
and reach a new, less symmetric state; where does the symmetry go?
We see here already that a topic of particular interest is the possible breaking
of a symmetry for a system consisting of many constituents, such as water and its
transition to ice—in contrast to the symmetry of a given single object. Any symmetry
can be broken by brute force: we can break off a branch on the left side of the tree,
making that different from the right, or we can simply remove one of the points of the
star. This is called explicit symmetry breaking, and for single objects that is the only
way. But when water freezes or evaporates, the system itself appears to change its
state of symmetry. Such behavior is today often called emergent, it emerges without
any specific “breaking” action from the outside. When the temperature of water is
decreased by one degree, from 5 to 4 ◦ C, nothing happens; but the change from
+ 0.5 to −0.5 ◦ C turns it into crystalline ice. How can that happen?
For physicists, another favorite and much used example to study the symmetry of
complex systems is magnetism, as observed in matter such as iron. We can picture the
material to consist of atoms having a certain intrinsic rotation, a spin around their own
axis; they are like tiny magnetic dipoles, somewhat like the Earth with its North and
South Poles. In general, for each atom that spin can point in any direction; however,
the spins of neighboring atoms interact with each other, and this has consequences.
At high temperature, there is much thermal agitation of the individual atoms, and
this washes out most of the remnant interactions. The individual spins are flipped
around randomly, making the material on the average isotropic, it shows the same
structure in all directions. If we imagine determining the average spin orientation by
going through the entire system, atom by atom, we get on the average zero, since
there are spins pointing in all directions, and for every specific spin orientation there
is one of opposite direction. If we turn all the spins by one specific angle, the system
would not really change, and the average spin value would remain the same: zero.
In other words, the system is invariant under an overall rotation of all the individual
spins. And so the equations governing its behavior, the laws of the spin interactions,
must also show this rotational invariance. But if we now lower the temperature, we
reach a certain critical point, the Curie point, named after the French physicist Pierre
Curie—who, incidentally, later on shared the Nobel prize in physics with his wife
Marie for the pioneering work they carried out in the study of radioactivity. Pierre
Curie found that once the temperature of the material fell below this critical value, the
spins of the individual atoms began to align with each other, they preferred to join and
point in the same direction, spontaneously, without any outside agent telling them
to do so. The average spin value was now no longer zero, it had some finite value,
indicating that most of the spins were pointing in a common direction, whatever that
might be—the orientation was spontaneously chosen. In other words, the rotational
symmetry of the state of the system was somehow lost. The spin interaction laws
remained the same as before, they continued to be invariant under rotations, but the
state of the system no longer was.
So nature has added some fine print to what rotational invariance really means.
It does not mean that the actual state of system has to remain unchanged under
rotations; it only means that each given overall spin orientation is as likely as any
other. Above the Curie temperature, thermal agitation is strong enough to prevent
the individual spins from aligning, and here both the system and its actual state show
rotational invariance. Below that temperature, however, the alignment forces become
strong enough to overcome thermal randomization and thus put the system into a
state of spontaneously broken symmetry. We should emphasize at this point that the
spin alignment appears indeed without anyone triggering it—hence spontaneous. If,
at any temperature, we subject the system to an external magnetic field, the spins
will, of course, try to line up in the direction of that field. This alignment is a case of
the explicit symmetry breaking already mentioned above; it is induced by an outside
agent and does not just emerge from the system itself.
Parenthetically, before continuing, we should add here that the world we have just
described is really rather idealized. In actual magnetic materials, one finds extended
spatial regions, domains, of different overall spin orientations (Fig. 7.3). The Curie
temperature of iron is some 1,000 K, so that room temperature is well below its
Curie point. But normal iron consists of many such domains of differently aligned
spin orientations, and so its overall magnetization is usually very small or absent. If,
however, an outside magnetic field is applied, even briefly, then the domains all align in the direction of this field, and they stay this way, forming a permanent magnet, which persists even after the removal of the outside field.

Fig. 7.3 Magnetic domain structure: random spin orientation above the Curie point (a), domain formation below the Curie point (b), domain alignment below the Curie point after application of an external field (c)
But let us nevertheless remain a while in our idealized world; it has led to concepts
that are basic to much of our present thinking. Here, as in many other cases, physicists
like to simplify matters as much as possible, in order to understand the essence of
what is happening; so we look at a model, a cartoon of the real world. Much of
today’s knowledge of spontaneous symmetry breaking stems from a problem that a
physics professor in Hamburg gave to one of his students for a doctoral dissertation
in 1920, and even though the student, Ernst Ising, did not really solve it, it made him
immortal.

7.1 The Ising Model

The model retains only the main features we want to consider: a grid on which at
each site a little spin of unit length points either up or down (si = ±1 at each grid
point i). For two dimensions, the result is illustrated in Fig. 7.4. The spins are allowed
to interact only with their nearest neighbors, and this interaction is assumed to be
such that they prefer to align; in other words, two adjacent spins, if left alone, will
point in the same direction. To get them to point in opposite directions, we have to
force them, use work. By construction, everything is flip-invariant—if we invert all
spins, nothing changes. That’s already all there is to the Ising model; for a more
mathematical formulation, see Box 9.
We now picture the fate of such a system in a hot environment, where there is
enough energy available to flip the spins around and cause them to become randomly
oriented. The two opposing effects, the intrinsic desire to align and the thermal
tendency to randomize, fight each other. At high temperatures, the randomization
wins, at low, the alignment energy. This model has had a most profound impact on
all of statistical physics in the past century. As mentioned above, the topic was first
proposed by Wilhelm Lenz, professor at the University of Hamburg, as a problem
for a doctoral thesis for his student Ernst Ising. Ising solved it in 1925 for a one-
dimensional system—missing the point, in our hindsight. In one dimension, nothing
really happens: the thermal agitation always wins, with a new, completely aligned
state only at zero temperature. The problem, however, was really very profound, and its final solution in 1944 brought the Norwegian theorist Lars Onsager the Nobel prize.

Fig. 7.4 The Ising model in two space dimensions: (a) at high temperature with random spin orientations, as many “up” as “down”; (b) at some finite temperature below Tc with partial alignment, more “up”; (c) at T = 0 with complete spin alignment “up”

He showed that up to a particular temperature Tc, the spins were randomly
oriented, and if he calculated the average spin orientation, the so-called magnetization
of the system, it was zero. Below this point, spins began to align—either up or down,
but they made a choice, so that the average spin value below Tc was no longer zero; it
became finite and increased to +1 or to −1 for T = 0. Onsager completely calculated
all thermodynamic observables above, below and at the transition, making this the
first case of fully calculable critical behavior in physics, and in fact one of the very
few so far. Onsager’s solution was for the problem we have shown here, with two
space dimensions; up to today, the case of three (or more) dimensions has resisted
all attempts to solve it analytically.

Box 9. The Ising Model


For the two-dimensional case, the situation is illustrated in Fig. 7.4. The overall
energy due to the interaction of the n spins is
E = −J (s₁s₂ + s₂s₃ + · · · + sₙ₋₁sₙ)   (7.1)
since only next neighbors are assumed to interact. Therefore the sum contains
the interaction terms of all n² next-neighbor pairs on the lattice. The individual spins point either up (+1) or down (−1), and the energy of the given form is lowest (E = −J n²) if all point in the same direction. It increases for more random configurations, up to E = +J n² for alternating up and down spins. The coefficient J simply specifies the units for measuring the energy: the energy per spin pair in the all-up or all-down state is E/n² = −J, and it is +J for the alternating case. It is clear from the given form that the energy
is invariant under up-down flips for all spins; if si → −si for all spins i, E
does not change. The system is therefore symmetric under a global up–down
flip. On the other hand, there are two possible states of lowest energy, two
possible ground states, all up or all down. This can be specified by calculating
the magnetization m, defined as the average spin value, giving m = +1 or
m = −1. So the state of the system at lowest energy, the ground state, is no
longer invariant under flipping. To specify the state, it is therefore not sufficient
to know its energy; one needs an additional order parameter to indicate “up”
or “down”, and the magnetization fulfills that function.
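The model of Box 9 is easy to bring to life on a computer. The following Python lines are a minimal Metropolis Monte Carlo sketch, added here for illustration and not part of the original text; grid size, sweep count and temperatures are arbitrary choices, with J and Boltzmann's constant set to one. Below Onsager's critical temperature Tc = 2/ln(1 + √2) ≈ 2.27, the magnetization settles near +1 or −1; above it, near zero.

import math
import random

def ising_magnetization(n=20, T=2.0, sweeps=1000, seed=1):
    """Metropolis simulation of the 2D Ising model on an n x n periodic grid;
    returns the magnetization per spin after the given number of sweeps."""
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(n)]  # random "hot" start
    for _ in range(sweeps * n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        # sum of the four nearest neighbors, with periodic wrap-around
        nb = s[(i+1) % n][j] + s[(i-1) % n][j] + s[i][(j+1) % n] + s[i][(j-1) % n]
        dE = 2.0 * s[i][j] * nb  # energy cost of flipping spin (i, j), with J = 1
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            s[i][j] = -s[i][j]   # accept the flip
    return sum(map(sum, s)) / n**2

Tc = 2.0 / math.log(1.0 + math.sqrt(2.0))  # Onsager's exact result, about 2.27
for T in (1.5, Tc, 3.5):
    print(f"T = {T:.2f}: m = {ising_magnetization(T=T):+.2f}")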

In a way, the Ising model, as it is called in spite of its story, is for statistical physics
something like Newton’s falling apple for mechanics. The solution, the determination
of the equations of motion, of trajectories of falling apples, flying cannonballs and
the like, required a new kind of mathematics. And so Isaac Newton in England and
Gottfried Wilhelm Leibniz in Germany developed this calculus. It allowed physicists
not only to study infinitesimally small changes, but also to sum up infinitely many
such small changes to specify trajectories. The counterpart for statistical physics
is unfortunately still lacking; we really have no good way of calculating the spe-
cific collective behavior of very many identical interacting components. Onsager’s
solution in two dimensions takes up a chapter in most textbooks and thus illustrates
the difficulties our conventional mathematics encounters with such problems. One
way out, applied only in the past few decades, is computer simulation: large-scale,
high-performance computers allow today numerical calculations even of critical
behavior in a great variety of systems, including the Ising model. And a very general
theoretical framework, renormalization theory, allows relations to be found between
different critical features. But the dream of some new, more suitable form of math-
ematics nevertheless remains alive.
Let us look at the grid of the thermally fluctuating spins, remaining as before in
two space dimensions. There is an immense variety of possible configurations for
the array, from all up or all down to any mixture. For all up or all down, there is
only one state each, while for the case of equally many up as down there exists a huge number of possible states, a sizeable fraction of all 2^N configurations, if N is the number of grid points. We saw in the previous chapter that such systems, if they are isolated from the rest
of the world, are found in one of the set of states containing the largest number
of members. In statistical mechanics, this number specifies the so-called entropy
of the system, which tells us something about how ordered or disordered it is. The
fundamental postulate of statistical mechanics says that nature always behaves such
as to maximize entropy, and in any changes, the entropy must always increase or
remain constant. If I drop a glass, it will either survive the drop or shatter irreversibly
into more pieces. Lifting up the pieces and dropping them will never recreate the
glass. There are many ways to break the glass, only one to put it back together. In
our spin system, the chance of finding all up or all down, compared to a completely
random configuration of zero average spin, is 1 : 10³⁰ even for a rather small 10
times 10 grid consisting of 100 points. The number of possible orientations decreases
if we insist on having some clusters of aligned spins favoring a certain direction, i.e.,
having a non-zero magnetization m, such as in Fig. 7.4b. The probability distribution
for the system thus has the form shown in Fig. 7.5a: it is peaked at m = 0, the state
of rotational symmetry.
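These numbers are easy to verify directly; the following short check, added here and not from the book, uses Python's exact integer arithmetic.

from math import comb

N = 100                     # a 10 x 10 grid
total = 2 ** N              # all possible spin configurations
balanced = comb(N, N // 2)  # configurations with as many "up" as "down"
print(f"total:    {total:.1e}")     # about 1.3e+30
print(f"balanced: {balanced:.1e}")  # about 1.0e+29, against just two fully aligned states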
So far, however, we have ignored the fact that the spins like to be aligned; in other
words, two aligned spins correspond to a state of lower energy than two opposing
spins. If there is enough overall energy, that is, if the temperature is sufficiently high, this does not matter: the price in energy is paid for by the thermal energy of the system.

Fig. 7.5 The spin distribution P(m) in the Ising model, as a function of the magnetization m, above the Curie point (a), where the average spin value is zero, and below (b), for the spin choice “up”; here the magnetization is m̄, but the choice “down” (dashed lines) is equally probable

If we now lower the temperature, there is less and less such thermal
energy available, reducing the favored role of the large number of random states: the
price for non-alignment becomes more and more difficult to pay. And at a certain
temperature Tc , the roles are interchanged, the lower energy of an aligned state
is now worth more, the temperature cannot afford a larger number of non-aligned
configurations. Now the probability is highest for some non-zero magnetization, as
shown in Fig. 7.5b. So, as long as disorder dominates, symmetry wins; when it is
spontaneously broken, some form of order appears. For this reason, the average spin
value, the magnetization, is generally referred to as the order parameter. It is zero
for a disordered state and abruptly becomes finite when order sets in.
Thermal systems are thus always engaged in a battle between entropy and energy,
between disorder and order. At high temperature, disorder and entropy win, at low
temperature, order and energy do. So the intrinsic spin interaction laws make their
appearance only at low temperature, when the symmetry of disorder becomes bro-
ken. Finally, at T = 0, there are only two states left: all up or all down, complete
order, minimal entropy. And it is one or the other forever: to turn a state of all up
into one of all down requires much work, for which there is simply no energy avail-
able. Incidentally, the zero entropy of the spin state at T = 0 does not violate the
laws of thermodynamics, which forbid entropy ever to decrease. To achieve a low
temperature, we need cooling, and the operation of a refrigerator requires work, electric power; the combination of spin system, freezing compartment, refrigerator and
power supply does not have a decreasing entropy; only the spin system alone does.
Its entropy goes into the heat emitted by the refrigerator.
We noted two types of symmetry: discrete and continuous. And we have seen that
in the simplest discrete case, the Ising model, spontaneous symmetry breaking put
the system into one of two states, up or down. The ultimate form of these states, at
temperature zero, is that of the lowest possible energy: all spins are now aligned.
One therefore refers to these as the ground states of the system. The laws governing
the behavior of the system, defining the Ising model, are invariant under up-down
flipping, and so the system finds itself in one of two equivalent or degenerate ground
states; which one, it has to choose on its own, spontaneously. There is a classical illus-
tration of this situation, based on a dilemma first proposed by the Persian philosopher
Al Ghazali in the eleventh century; he argued that a thirsty man faced by two equally
tempting glasses of water would die of thirst, since he had no way of deciding from
which to drink. It became more popular in the version of the French priest Jean
Buridan, in the early fourteenth century professor at the University of Paris, as the
successor of William of Ockham, famous for his razor. Buridan replaced the thirsty
man by a donkey, standing between two equidistant piles of hay; not having a way
to decide from which to feed, it would starve. Of course, just like the thirsty man,
Buridan’s donkey survives; even the tiniest motion will bring the man or donkey
closer to one or the other of the two choices, spontaneously breaking the two-fold
symmetry. From a physics point of view, a more suitable analogue is a ball at rest on
a ridge between two symmetric trenches, see Fig. 7.6. The ball is in a very unstable
situation, and any disturbance will make it roll into one of the trenches and thereby
break the symmetry. And once it is down, it remains where it is, lacking the energy to get back up or over into the other trench.

Fig. 7.6 Spontaneous breaking of a two-fold symmetry: the ball rolls down the hill into one or the other of the two trenches
We now make our model just a little more complex, replacing the up-down of the
Ising model by one in which the spin can have three possible orientations, as in a
three-pronged (“Mercedes”) star. At high temperature, the state of system governed
by such a law is again fully symmetric, each direction is equally probable, and
the average spin value again zero. But at low temperatures, there are now three
possibilities, three degenerate ground states. Again, these are separated from each
other by ridges, so that the ball has to choose one of the three and then remain there.
We can go on this way, making yet more complex spin models; the number of
allowed spin directions, and hence of symmetry operations, determines the number
of degenerate ground states, to be reached once the temperature can no longer pay
the price demanded to keep the state of the system fully invariant. And in all these
cases, the ball can neither return to the symmetric starting point, nor transit to any of
the other degenerate ground states, which are separated from each other by “energy
walls”.
But what happens when the symmetry becomes continuous, when the spin can take
on any orientation in space? The resulting system is generally called the Heisenberg
model, since it was first used by Werner Heisenberg to study ferromagnetism; it
is invariant under all rotations in three-dimensional space, the spins can now point
anywhere. Buridan’s donkey is now surrounded by a circular ring of hay; where
should he start to feed? Is there a difference between spontaneously breaking a
discrete and a continuous symmetry? There is, and just to remind you that these
questions are far from trivial: the answer to the first case, discrete symmetry, brought
the Nobel prize to Lars Onsager, as we have already mentioned. The answer to the
second, continuous symmetries, brought it to Yoichiro Nambu of Japan. In both cases,
what they provided was not just some “discovery”, it opened a new field of physics,
a new way of thinking. Returning to our picture of the ball: it is now balanced on a
peak surrounded by a circular trench, see Fig. 7.7. Again, any slight disturbance will
make it roll down, but the (now infinitely many) different degenerate ground states
are no longer separated by any dividers.
Buridan’s donkey starts to feed somewhere, the symmetry is broken. But it can
now, with no restrictions, move on to a place a little further over, where the hay seems
better. In fact, it can move around the whole circle at no expense, with no constraints
of any kind. And so can the ball roll around the entire trench without requiring any energy, provided it rolls very slowly, so that its motion does not require work.

Fig. 7.7 Spontaneous breaking of a continuous symmetry
The crucial change in going from discrete to continuous symmetry breaking is thus
that there are now infinitely many degenerate ground states, which are in no way
separated from each other. The system can slide over at no expense. And this means
that in a state of broken continuous symmetry, we have an overall alignment of the
spins, an increase of the order parameter from zero to a finite value, and in addition a
slowly travelling wave caused by a gently changing alignment of the spin state; see
Fig. 7.8.
In the quantum world, waves correspond to particles, and so the slow wave
becomes a massless particle, appearing as soon as a continuous symmetry is sponta-
neously broken: the Nambu–Goldstone boson. Yoichiro Nambu had introduced such
soft fleeting waves in the context of superconductivity; Jeffrey Goldstone, a British
theorist, transferred this to field theory. There it was to have an even more profound
effect, eventually providing the origin of pions. The general conclusion was that any
breaking of a continuous symmetry will lead to the appearance of such new, massless
particles; how many depends on the dimension of the broken symmetry. In the case
we just considered, there were three space dimensions, three axes of rotation, three
directions for the spin waves to travel; as a result, there are three distinct Nambu–
Goldstone bosons. Letting the system choose a particular state is allowed, this is
just spontaneous symmetry breaking; normally, for discrete symmetries, the system
makes its choice, and that’s it. But in the case of continuous symmetries, it can change
its mind and move over into another equivalent ground state. It is this possibility, not present for discrete symmetries, that introduces a completely new kind of “particle” into the world.

Fig. 7.8 Continuously changing order parameter for a spontaneously broken continuous symmetry, leading to a slowly progressing spin wave

So far, we have encountered matter particles, quarks and leptons, and
force particles, gluons, photons and W vector bosons, which mediated the interaction
between the matter particles. Now a third kind enters: massless, spinless particles,
massless scalar bosons in physics speech. They are a bit like the shadow of a moving
cloud; the cloud appears as a change of state in the water vapor in the air, and its
shadow can move over the ground without any energy expenditure.

7.2 Shadow Particles

These particles have turned out to be of considerable importance, particularly in the
world of fundamental interactions, because their occurrence is a general consequence
of spontaneously broken continuous symmetries—in no way restricted to the spin
model we have looked at here. Of obvious interest to us is the theory of strong
interactions, of quarks and gluons. The laws of quantum chromodynamics (QCD)
remain invariant under a number of symmetry operations, and the question is whether
our actual world also shows these symmetries. If they are not there in the world as
we see it today, it could of course just mean that the laws of QCD are simply
wrong. But until proven otherwise, we’ll assume that they are right; if not, we’ll try
to modify them a little…
Many of the symmetries inherent in QCD are indeed present in our world, exactly
or at least in good approximation. So what happens if we start with an ideal version
of QCD, in which there are only u and d quarks of the specified quantum numbers,
and their antiquarks, all of mass zero? The proton and neutron then become different
charge states of one kind of particle, a nucleon. Similarly, there are different charge
states of mesons, ±1 and 0. These different charge states, both of nucleons and
of mesons, have in reality slightly different masses—but we’ll neglect that for the
moment. One can then imagine that there is a hypothetical space in which the axes
are labelled by the quantum numbers of the quarks. In this space, we can rotate
around, just as we could in the normal three-dimensional space in the case of the
spins; but here we rotate from a u quark to a d quark, for example. The laws of
QCD are invariant under these rotations—so what about nature? The invariance is
in accord with the existence of the usual spectrum of nucleons and mesons, and
moreover it predicts that an interaction such as that of a proton and a positive meson
is the same as that of a neutron and a negative meson. Experiment finds that to be the
case. But the invariance also predicts that for each nucleon state there exists a mirror
image nucleon state, the same in all aspects except that it is spinning in the opposite
direction, one clockwise, the other anticlockwise. So one would expect to observe
both nucleons and their mirrored counterparts, as well as antinucleons and mirrored
antinucleons. These mirrored states have never been found, so that the symmetry of
QCD under the relevant subset of rotations in the quantum number space is apparently
broken spontaneously in the present state of our world. Such rotations are continuous
operations much like those in normal three-dimensional space, with three distinct
axes. So the spontaneous breaking of the corresponding symmetry must produce
three of the scalar massless Nambu–Goldstone modes. The symmetry is generally
referred to as chiral symmetry, with chiral derived from the Greek word for hand; the
different left- and right-handed characters on opposite sides of a mirror here are the
nucleon and its mirrored partner. The order parameter for the symmetry in question
is the mass of the quarks—QCD is chirally symmetric as long as this mass is zero.
In the spontaneous breaking of chiral symmetry, the quark mass suddenly takes on a
finite value. This value is determined by the theory, by QCD itself; it arises because
gluons tend to cluster around each quark, creating something like a cloud around it,
and this “dressing” is what makes up the new, finite quark mass. In other words, our
ideal QCD tells us that in the symmetric world there are massless quarks and gluons;
the world of spontaneously broken chiral symmetry then becomes one of massive
quarks and in addition, three further shadow particles, one for each axis in the space
of chiral rotations. Recalling that the mass of nucleons is around 900 MeV and that
they consist of three quarks, we estimate the quark mass in the broken symmetry
state to be about 300 MeV, and this fits: the mass of the “normal” mesons, made of
a quark–antiquark pair, is indeed around 600 MeV. The role of the massless shadow
particles would then fall to the pions, forming massless “undressed” quark–antiquark
states.
Here we have, once again, generously overlooked some shortcomings of our ideal
QCD world, although, as already seen in Chap. 5, that leads to problems. The masses
of proton and neutron are not exactly the same, although the difference of about
1.3 MeV is only a one per mil effect. The case of the pion is more serious: massless
pions would make the range of the strong interaction infinite! On the other hand,
140 MeV is not zero, although it is in fact much less than the typical meson mass of
600 MeV. Whatever, the conventional way out is to replace our ideal QCD, prior to
chiral symmetry breaking, by a theory in which the quarks have very small but not
truly vanishing masses, with some 2–3 MeV for the u and 3–6 MeV for the d. Note that
these are masses introduced ad hoc into the laws of QCD—they are something quite
different from the inertial mass quarks acquire through chiral symmetry breaking.
The latter, as we had already noted, is created by the QCD interaction itself, by gluons
clustering around the naked quarks; it can be calculated in QCD, while the former
are something just put into the theory. In our ideal QCD, the quarks were massless
before and weighed some 300 MeV after chiral symmetry breaking, the same for
both u and d. In the modified version, they have the small mass values, of a few
MeV, already before chiral symmetry breaking, and again masses of some 300 MeV
afterwards, but now with a slight difference between u and d. These differences then
make the neutron just a little heavier than the proton, as indeed observed. And in such
a pseudo-symmetric theory, where the input quark masses break the chiral symmetry
already a little in an explicit way, one can calculate the effect this has on the mass of
Nambu–Goldstone pions: there are now corrections to zero, due to quark–antiquark
interactions, and they give the pions their observed mass. This mass in turn then
gives the strong interactions their observed range.
And we now also have another way of describing the transition from the col-
ored world of deconfined quarks to the world of hadrons, in which the quarks are
confined to color-neutral bound states living in the physical vacuum. In an ideal world,
the onset of confinement is given by the spontaneous breaking of chiral symmetry.
Above the critical temperature, we have a medium of unbound massless colored
quarks and gluons; below, the gluons have only two remaining functions: through
clustering around the quarks, they give these their effective inertial mass, and they
then bind them to form color-neutral massive hadrons. In the real world, with small
but finite intrinsic quark masses, we no longer have genuine critical behavior, but the
required input quark masses are small enough to retain most of the critical features
of the strongly interacting medium. The confinement/deconfinement transition thus
becomes the other side of spontaneous chiral symmetry breaking, and essentially all
the mass of the universe becomes a result of spontaneous chiral symmetry breaking.
The pion is thus a special kind of particle, a quark–antiquark state forming a
Nambu–Goldstone boson—which in the limit of vanishing input quark mass would
render it massless. The quark-constituents of the pion are not dressed by gluons,
they only have their naked intrinsic masses. So all is well that ends well—except
that to make it end well, we had to give the u and d quarks of QCD quite ad hoc
finite intrinsic masses. We don’t have any physical reason for the intrinsic quark
mass values, they are simply chosen to give the right results. And so they leave us
with an obvious unanswered question: where do these mass parameters come from,
and why are they what they are? The pursuit of that question requires that we first
turn to something already hinted at.

7.3 Local Symmetries

The symmetries we have encountered so far, whether discrete or continuous, were
always global, in the sense that the same operation was applied to all the constituents.
In the Ising model, the laws of the interaction do not change if we flip all spins to
their opposite value, they remain invariant under such a global flip. If we flipped
only one spin, or a finite number, for that matter, that would modify the value of
the interaction energy: such an operation does not leave it invariant. Similarly, in the
case of the ideal ferromagnet, only if each spin is rotated by the same angle is the
value of the interaction energy unchanged. Global symmetries leave a world of many
interacting participants invariant, since in the given action, everybody is treated the
same way.
But besides such egalitarian situations, there also exist others which have an even
more symmetric character, a form which remains unchanged already under local
operations, where—in sociological terms—even individual manipulations leave the
system unchanged. We have already indicated that this can only occur in a world of
interacting constituents. Since we want these interactions to be in accord with special
relativity—forbidding instantaneous actions at a distance—this implies the presence
of a field, such as that created by the lines of force emerging from an electric charge.
If we now carry out an operation on one of the constituents in such a medium, it must
emit a wave travelling out to inform the medium of this and to assure that the overall
status of the system remains unchanged.

Fig. 7.9 Local change of an up-quark into a down-quark, mediated by an interaction through a W+ gauge boson

Imagine we have a box containing an up-
quark and its antiquark, a system of total charge zero and total baryon number zero. If
we now locally transform the up-quark into a down-quark, a wave in the interaction
field must transmit this information to the antiquark, in order to preserve the overall
charge and baryon number of the system; see Fig. 7.9. In quantum-mechanical terms,
this wave corresponds to a particle, the force or messenger particle introduced above. It arises here simply as a consequence of the local invariance of a relativistic
theory. Such local transformations, local modifications of the settings of some object
or device, are generally known as gauging, like gauging a scale or a thermometer.
We now insist that the physics of the overall system remains unchanged under such
gauging, that it is gauge invariant. The associated particles needed to assure that are
therefore referred to as gauge bosons. That they are bosons, i.e., objects of integer
spin, is simply a consequence of the fact that all matter particles, quarks as well as
leptons, are spin one-half objects, fermions, and that feature can only be maintained
if the exchange particles have integer spin. And they must be massless, because
otherwise different gauging at different locations would lead to different masses.
The force particles, which we had introduced above simply to obtain an interac-
tion proceeding with a finite speed, to provide communication between interacting
constituents, thus now acquire a much more general nature. They are the conse-
quence of the local or gauge invariance of any relativistic field theory. Their number
is determined by the number of intrinsic degrees of freedom of the matter particles
in that theory, and they must be massless. Thus electrodynamics, with one degree of
freedom, the charge, leads to one gauge boson, the photon. Quantum chromodynam-
ics, with three quark colors, has eight gluons as gauge bosons (red-blue, red-green,
etc., with red-red + blue-blue + green-green = white excluded, not changing any-
thing). And in weak interactions, the three possible electric charge states provide the
three gauge bosons, W ± , Z 0 . The interaction pattern of a relativistic field theory is
thus specified. In the standard model, we have three generations of matter fermions,
two pairs of such fermions (quarks and leptons) per generation, eight strong gauge
bosons, three weak gauge bosons, and the photon—adding up to 24 altogether, with
a small reminder that gravity is still waiting on the outside. And in an ideal, fully
symmetric world, all particles, fermions as well as bosons, are massless.
In the real world, that is indeed correct for photon and gluons as gauge bosons. It
is not so bad for the first generation quarks (u and d) and leptons (e and ν) as matter particles. It is definitely off for the higher generations of both quarks and leptons,
and it is equally bad for the vector bosons of the weak interaction. Particularly the
latter aspect is truly disturbing: it destroys the gauge invariance of the theory. So the
task for theorists was specified: find a theory that has all the symmetries of strong,
electromagnetic and weak interactions for massless constituents, but with a dynamics
such that at lower temperatures spontaneous local symmetry breaking would lead to
the observed mass values for bosons as well as for fermions.
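The bookkeeping of this paragraph can be tallied in a few lines (a trivial check, added here for illustration):

generations = 3
matter = generations * (2 + 2)  # two quarks and two leptons per generation: 12
gauge = 8 + 3 + 1               # gluons, weak bosons (W+, W-, Z0) and the photon: 12
print(matter + gauge)           # 24 altogether, with gravity still waiting outside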
The pattern outlined here is of very general nature. High temperature means
much available energy, many ways to randomize configurations, much disorder, high
entropy, and as a result the full intrinsic symmetry of the theory. When the temperature
decreases, the details of the interaction, which at high temperatures were washed
out, come into play on their own. And this will often lead to spontaneous symmetry
breaking: the system finds itself in one of several, for continuous symmetries even
infinitely many, ground states that are all equally likely, because of the inherent
symmetry of the system. When water vapor cools down, it can form snowflakes or
hail. When liquid water cools down, it freezes and forms ice crystals—just as the
cooling of ferromagnetic matter leads to magnets. And cosmologists today picture
the evolution of the universe in a similar way. The diversity we see today was not
always there, the earlier world was less diverse, more symmetric. Cooling breaks
symmetries; in the cold state, the symmetries become concealed. The dream thus
is—and for the time being, much of it is still a dream—that very shortly after the
Big Bang, symmetry reigned.

7.4 Primordial Equality

All the complexity, the different constituents and the different interactions, all that
appeared only as the universe expanded and cooled off, so that its inherent symme-
tries were, one by one, spontaneously broken. The study of the required mathematical
structure is both complicated and far from finished—and it is not at all clear where
it will lead. The dreams of a final theory, to use the words of the American Nobel
laureate Steven Weinberg, start from a primordial “Urfeld”, in which quantum gravity
is combined with the quantum fields of the standard model to form a theory of
everything (TOE). This stage undergoes the first, as yet undetermined transition,
leading to gravitation as a distinct interaction described by classical general rel-
ativity. The remaining standard model sector is made up of constituents obtained
through a grand unification of quarks and leptons, one species subject to one uni-
versal electronuclear interaction. At the next step, the quarks and leptons become
distinct species subject to the distinct strong and the electroweak interactions. Nev-
ertheless, the constituents here are still massless—so there has to be a point in the
evolution at which intrinsic masses make their first appearance. Let’s call this
next step the Higgs transition. Even then, we still have a world without a vacuum, of
immense density. In the last of the primordial transitions, chiral symmetry breaking,
the quarks combine to hadrons, to form the particles that make the inertial matter of
[Figure: timeline from the Big Bang, with time (in seconds) and temperature (in Kelvin): t = 0, T = ∞: TOE; t ≈ 10⁻⁴³ s, T ≈ 10³² K: gravity separates from the electronuclear interaction; t ≈ 10⁻³⁵ s, T ≈ 10²⁷ K: grand unification ends, electroweak and strong interactions separate; t ≈ 10⁻¹² s, T ≈ 10¹⁵ K: intrinsic mass formation; t ≈ 10⁻⁵ s, T ≈ 10¹² K: quark confinement, leaving the weak, electromagnetic and hadronic interactions of today.]

Fig. 7.10 The possible evolution of the early universe after the Big Bang, passing through the
various spontaneous symmetry-breaking transitions

today’s universe, and to provide the stage for everything, the quintessential vacuum.
The overall scheme is illustrated in Fig. 7.10, noting the different evolution stages.
We can label them either by the time after the Big Bang, or by the temperature
the medium has cooled down to by then. It does not seem possible to create in the
laboratory the actual matter of the early universe as it existed on the other side of
any of the primordial transition horizons; the only exception is the possibility of
producing a quark–gluon plasma in high-energy nuclear collisions, a small bubble
of primordial strongly interacting matter. That is one reason why this endeavor is
so challenging. For all the other thresholds, we can only check whether the dynamics
needed to make them occur, to trigger the spontaneous symmetry breaking, can be
detected in high-energy elementary particle interactions. Thus
observing the W⁺, W⁻ and Z⁰ was a crucial test for the electroweak transition; it remains to
find a gauge-invariance-preserving dynamics that allows a transition from a state of
massless gauge bosons to one of massive gauge bosons.
The effects of spontaneous symmetry breaking are quite diverse. The change in
geometric symmetry is perhaps the most obvious: water remains the same under any
rotation, ice breaks this and shows a crystal structure of hexagonal symmetry. In the
case of the Ising model, we found that symmetry breaking indicated a transition from
disorder to order, and in retrospect, we can of course also consider water to be more
disordered than ice. The inertial matter of our world today is formed of protons and
neutrons, so we have decided to call them particles. The antiparticles—antiprotons
and antineutrons—can exist equally well, and are in fact produced regularly in high-
energy collision experiments. But our universe appears to be one of particles, a
universe of matter, and we have no evidence of an antimatter version somewhere
else. So the symmetry of the fundamental equations, remaining unchanged in the
“flipping” from matter to antimatter, must have been spontaneously broken at some
point in the evolution of the early universe, to make it one of matter rather than
antimatter. It is not clear yet when that occurred—somewhere between the grand
unification and the intrinsic mass formation transitions, or at one of these.
The most symmetric view of the early universe, as we already indicated, would
have it at the very beginning consisting of massless constituents, all subject to one
universal interaction. Masses, different interaction forms, all that came later. The
original universe was made perfect, although perhaps somewhat boring in its com-
plete symmetry. So subsequently, light and darkness, morning and evening, earth
and water were separated. The beauty of variety appeared through a succession of
spontaneous symmetry breakings, and the further on we go, the more difficult it
becomes to reconstruct the original symmetry. The subtlety in the reconstruction of
such a genesis is that we have to find a theory having the intrinsic full symmetry
and yet containing the interaction forms that allow the necessary symmetry breaking
in the course of expansion and cooling. One of the crucial links in such a chain is
the creation of mass from massless constituents. We have seen that the inertial mass
of the universe is a consequence of the spontaneous breaking of chiral symmetry in
QCD. In the symmetric phase, quarks and gluons both move freely through space;
at the transition point, the gluons become restricted to only two functions: they clus-
ter around each quark, giving it its dynamically generated inertial mass, and they
then bind these massive quarks to hadrons, color-neutral triplets or quark–antiquark
pairs. We now want to extend such mass formation through symmetry breaking to
the more general case of the standard model, to produce also the intrinsic masses
of the heavy weak interaction vector bosons and those of the fermions (quarks and
leptons). In other words, QCD through gluon clouds around the quarks gives us the
mass of the apple; we now want to find a way to give a (small) intrinsic mass also
to the seeds. To keep using the same trick, it is therefore tempting to imagine some
more fundamental, all penetrating field, which at a certain point clusters around the
constituents of the standard model, giving them their mass. Such a field, the Higgs
field, and the associated Higgs boson are in present thinking the decisive elements.
Let us therefore consider the horizon at which mass first made its appearance.
Formally, this step in an evolution based on a sequence of spontaneous symme-
try breakings is quite general and straightforward. We start from an extremely hot
gauge-invariant world of massless fermions, quarks and leptons, and the correspond-
ing gauge bosons. As the temperature is lowered, we encounter what originally was
called the electroweak transition, since it first appeared in the unification of weak
and electromagnetic interactions. But we now believe that at that point both the
weak bosons and the fermionic matter fields (quarks as well as leptons) obtain their
mass; so it seems more appropriate to call it the Higgs transition. While the idea of
mass generation through spontaneous symmetry breaking is quite simple, the actual
realization here turns out to be quite complex.
To obtain some intuitive feeling for how and when mass could be created, we
can imagine a sponge, which when dry and in a gaseous medium, such as air, is
very light. Let’s say it is weightless, and take the gas to be water vapor. If we now
lower the temperature so that the water vapor condenses into water, the sponge will
absorb much of the liquid and thereby gain considerable weight. So massless objects
can become massive by absorbing some of a surrounding medium, as soon as that
medium is in a phase suitable for absorption—liquid, not gas. In QCD, the transition
occurred at the point of chiral symmetry breaking: the medium was now ready to
cluster around the quarks, giving them their intrinsic mass. In the standard model
scenario, the Higgs field plays the role of such a medium. Quarks and leptons are
the introduced sponges; they have effectively no weight of their own, they acquire it
through absorption of the surrounding medium at the point where this has become
“liquid”. The Higgs field thus plays in a way the role of a new ether: it permeates
the entire universe in a uniform way, as a new kind of ground state; no Michelson–
Morley experiment could ever detect motion relative to the Higgs field. It is simply
there, everywhere, and once the temperature falls below that of the Higgs transition,
it “liquifies” and leads to clustering around the weak bosons as well as around the
fermions, and hence results in the formation of their masses.
So, to achieve the creation of mass, we proceed to introduce a further field; it
has to be coupled to the existing fields in a suitable way. For the fermions, mass
comes from Higgs clustering; they simply absorb some of the omnipresent Higgs
field. Photons and gluons are not coupled to the Higgs field, nothing happens, they
remain massless. For the weak vector bosons, the situation is more complex—they
are to become massive and yet preserve the gauge invariance of the theory. To assure
this, the proposed Higgs field is scalar and has four components, corresponding to
the four charge components of electroweak theory, +, − and 0 for the weak, 0 for the
electromagnetic sector. The Higgs interaction is formulated such as to have a con-
tinuous symmetry in the space of these charges, but with an interaction form leading
to spontaneous symmetry breaking. When this occurs, at the electroweak transition
point, three components +, − and 0 combine with the weak gauge bosons, mak-
ing them massive. This coupling is a rather tricky mathematical procedure, since the
weak vector bosons are to become massive and yet leave the theory gauge-invariant—
something like having your cake and eating it too. It’s the Higgs mechanism that does
it…see Box 10. A fourth component is left over—it remains there as the ubiquitous
field whose excitations provide the by now almost famous Higgs boson.

Box 10. The Higgs Mechanism

In the symmetric, massless world before the Higgs transition, we have

• fermions (spin 1/2): six massless lepton species (e⁺, μ⁺, τ⁺ and their neutrinos)
and six massless quark species (u, d, c, s, b, t);
• bosons (spin 1): photons (γ) and three species of massless weak gauge bosons
(W⁺, W⁻, W⁰); eight different massless gluon colors;
• the corresponding antiparticles of the fermions; the boson sets already contain the
antiparticles.

To these matter fields (fermions) and force fields (bosons) there now
comes a scalar (spinless) massless Higgs field of four different components
(H⁺, H⁻, H⁰, H̄⁰). As a result of the spontaneous symmetry breaking in the Higgs
transition, the weak gauge bosons are to acquire mass, the photon is to remain
massless. This is achieved by forming three independent combinations of the
Higgs fields (two charged and one neutral), and these interact with the gauge
fields W⁺, W⁻, W⁰ to create the observed masses of the weak gauge bosons W⁺, W⁻ and Z⁰.
The remaining neutral Higgs component interacts with leptons and quarks,
to produce their intrinsic masses, and with itself, to produce the mass of the
physical Higgs boson.
Massless vector bosons, such as photons, have two degrees of freedom, corre-
sponding to the two possible (“transverse”) polarizations (clockwise or coun-
terclockwise with respect to the direction of flight). Massive vector bosons,
however, can also be polarized along the flight direction and thus have three
degrees of freedom. For each charge state, the two degrees of the massless Ws
thus combine with the one further degree of the massless Higgs field to produce
the three degrees of freedom (transverse and longitudinal polarization) for the
massive vector bosons.
The actual interaction form of the different fields is relatively complex; here
we just want to provide a rough idea of the process. We might add here that
the minute mass now attributed to the neutrino is not thought to be due to the
Higgs mechanism; its origin remains unclear.

This standard version of the standard model is then able to account for most
aspects of elementary particle physics. Even the baryon asymmetry of our present
universe may have arisen at this point, and also through spontaneous symmetry
breaking. The equations governing baryon interactions—on the most fundamental
level those describing the behavior of massless quarks—are certainly symmetric
under an interchange of baryons and antibaryons, here of quarks and antiquarks.
Thus it seems not unreasonable that up to the intrinsic mass formation transition
of Fig. 7.10, the world contained quarks and antiquarks in equal numbers. For a
suitable form of interaction, one might have two degenerate ground states, one giving
a baryon-dominated world, the other one antibaryon-dominated. The mass creation
transition then forced the universe to choose one of the two, and we decided to call
that baryonic, relegating the other, not-chosen one to antibaryonic.
In spite of its successes, the standard model still leaves us with a basic problem:
why do different quarks and leptons result in clusters of different size? Why are
there sponges of different absorption power? How does the Higgs field distinguish
u quarks from b quarks, giving the latter their so much bigger mass? Why are there
different generations of quarks and leptons? The answers to those questions remain
to be given… Perhaps the question points to some intrinsic law of diversity: if
different forms are possible, at least some of them will eventually also appear. A law
that can certainly draw on much support in the plant and animal kingdoms.

Another aspect of the scenario is, however, now closer to being answered. Does
the Higgs boson really exist? We have seen that after the electroweak transition,
there remained a Higgs field, in addition to all the other, familiar quarks, leptons
and gauge bosons. In quantum mechanics, any field is associated with a possible
excitation, a particle. And since the other three components of the scalar Higgs field
led to masses of the size of the weak vector bosons, we imagine that also the mass
of such a possible Higgs boson would lie in that range. So how can we check if it
exists?
Picture a long line of guests at a royal party. The king arrives, and as he passes
along the line, everyone bows, creating a wave of bent backs as the king moves
on. This wave corresponds to a travelling field disturbance caused by the motion of
the introduced charge in the medium. How can we assure ourselves that the field is
really there, that it is not the king or the sponge that cause what we see? To check
that, we remove the outside cause of the disturbance and instead disturb the field
directly, itself. At soccer games today, this phenomenon is referred to as la ola, “the
wave” in Spanish. The spectators create a wave-like movement around the stadium,
by standing up or raising their arms “as if the king passed by”, started by someone at
some given point, obviously without the presence of any king. A number of different
soccer and American football teams claim to have been the first to produce such
emergent wave behavior. In any case, it is evidently not so easy to initiate, and
once again, it results in the breaking of a symmetry. When the first person stands
up, it is not yet clear whether the wave will circle around the stadium clockwise
or counterclockwise. The second one to participate will break this symmetry… But
what is evident is that the most convincing way to establish the existence of a Higgs
field is to produce a travelling disturbance of the field itself, and this disturbance is
the Higgs boson so much sought after in recent years.
Essentially all particles predicted by symmetry considerations of the underlying
field theories were subsequently discovered in high-energy collision experiments.
For the Higgs search, there are a number of likely configurations. If an electron and
a positron annihilate at comparatively low energy, they will form a massive virtual
photon which then produces a quark–antiquark pair. We looked at this in detail
in Chap. 6. Once the annihilation energy becomes sufficiently high, the result can,
however, also be a virtual heavy weak boson, a Z⁰, with a mass of almost 100 GeV. If
this Z⁰ is energetic enough, it may radiate a Higgs boson. If we know the annihilation
energy and can measure that of the emitted Z 0 , a peak in the “missing mass” would
indicate the production of a Higgs of the peak mass. The lifetime of the Higgs is
predicted to be so short that only such decay analyses have a chance of finding
indications of its existence. But even then, the chances for Higgs production at very
high energy colliders are of the order of one in many billions. So high energies,
immensely many collisions (meaning very high current beams), highest precision
detectors and unbelievably intricate analysis programs are essential requirements,
necessary but not sufficient. It now seems that, in addition, fortune was on the side of
the experimental groups at the CERN Large Hadron Collider. The two large “Higgs
Search” collaborations there, each with thousands of physicists, reported in July
2012 “the observation of a new particle” with a mass of about 125 GeV. Further tests
are needed to assure that this newcomer is indeed finally the Higgs boson.
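To make the “missing mass” method mentioned above concrete, here is the standard kinematic relation behind it (textbook physics, in units where c = 1, not specific to this book): if the total annihilation energy is √s and the emitted Z⁰ has energy E_Z and momentum p_Z, energy–momentum conservation fixes the recoil mass m_X via

    m_X² = (√s − E_Z)² − p_Z² = s − 2√s E_Z + m_Z².

A peak in this recoil-mass distribution signals the production of a particle of mass m_X, here hopefully the Higgs boson.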
So it may well be that we are now yet another step closer to the Big Bang, to having
an idea of the structure of the universe at these very early times. As mentioned, it does
not seem likely that we can in the foreseeable future produce media of such temperatures
and densities in the laboratory; the quark–gluon plasma is probably the hottest and
densest medium we can ever produce in terrestrial laboratories. So to speculate about
the world beyond the earlier horizons, we can only try to obtain information about
the nature of very high energy collisions and then use that information to derive the
possible structure of the medium.
We have already indicated in Fig. 7.10 some further conceivable steps. As you get
closer to the Big Bang, the energy density, and hence also the constituent density,
becomes ever greater. As a consequence, the strong interaction coupling becomes
ever weaker, since the asymptotic freedom of quantum chromodynamics implies
that the interaction strength decreases with decreasing separation distance of the
constituents. One can therefore imagine that at a very high temperature, much
higher than that of even the Higgs transition, the strong and electroweak interac-
tions become equal. At this point, the so-called grand unification (GUT) scale, one
expects that only one interaction form will remain, apart from gravity. All the subse-
quent distinct forms, strong, electromagnetic and weak, are then due to the various
later spontaneous symmetry breakings. The value of the GUT scale noted in Fig. 7.10
is obtained from extrapolation of the running QCD coupling to the electroweak cou-
pling. There are various candidates for such grand unified theories, but so far none
as the obvious choice. Some attempts at unification beyond the standard model take
fermions and bosons as specific states of a more general prototype (“supersymme-
try”); they have led to the prediction of a multitude of as yet unobserved forms of
fundamental constituents, intensely searched for in connection with the search for
the Higgs boson. The situation becomes even more diffuse if we go back still further.
This will bring us to the point at which quantum effects on gravity are no longer
negligible, to the so-called Planck scale. It is obtained from a combination of the
fundamental constants of nature, gravitation (G), special relativity (c), and quan-
tum theory (h); thus r_Pl = (hG/c³)^{1/2} ≈ 4 × 10⁻³⁵ m is the Planck length, and
t_Pl = (hG/c⁵)^{1/2} ≈ 10⁻⁴³ s the Planck time. They are expected to define the limit
for classical general relativity; for shorter scales, or for a still younger universe, one
presumably needs a quantum theory of gravity. It is thus tempting to ask for a unified
theory of the universe for all interactions, including gravity, the theory of everything
(TOE) already mentioned. But for the time being it appears safe to say that whatever
phenomena lie beyond the Higgs horizon, they are presently, again in Weinberg’s
words, “dreams of a final theory”.
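(A rough check of the Planck-scale numbers quoted above, inserting the standard values h ≈ 6.6 × 10⁻³⁴ J s, G ≈ 6.7 × 10⁻¹¹ m³ kg⁻¹ s⁻² and c = 3 × 10⁸ m/s: hG/c³ ≈ 1.6 × 10⁻⁶⁹ m², whose square root is indeed about 4 × 10⁻³⁵ m; dividing by c then gives about 1.3 × 10⁻⁴³ s.)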
8 The Last Veil

There was the Door to which I found no Key:
There was the Veil through which I might not see.
Omar Khayyam, Rubáiyát

So indeed the world we can call ours appears to be finite. Most of the universe remains
forever beyond human reach, inaccessible to our probing both in space and in time—
as far as we can tell, based on the science of today. But as we have also seen, every
horizon inevitably raises the question “what lies beyond”, and so humans will not
stop searching for that key, or for a way to look through the veil. In this last chapter,
we will first summarize the different horizons we have encountered, on both large
and small scales. After that, we will have a look at the possibilities of transgressing
even the ultimate horizons of today. Such possibilities arise if we consider all the
phenomena the laws of physics might allow, even though so far they have never
been observed.
The universe is not eternal, it has not always been there. The Big Bang was the
beginning, modern science tells us; the age of the universe is finite, some 14 billion years.
Almost all religions start with a creation of the world, so that perhaps the human
mind is here inherently in harmony with nature. Not only matter started with the Big
Bang, also space and time did. The cosmic horizon in time then is ultimate, final:
for us, there does not seem to be a way to investigate a world before the Big Bang.
It would be, in the words of Stephen Hawking, like asking what is north of the north
pole. Nor can we study what the Big Bang banged into. It was certainly not something
like an explosion into “empty space”. At the instant of the Big Bang, there was no
empty space. We’re quite sure, as discussed in Chap. 6, that the physical vacuum
in our present sense made its first appearance about 10 millionths of a second after
the Big Bang. Before that, everything was primordial matter of extreme density; the
further we go back in time, the denser it gets, and at THE moment of the Big Bang,
the density was infinite. Mathematicians speak of a singularity when something
becomes infinite; they imagine this as dividing a given number, say one, by smaller
and smaller numbers. The result gets bigger and bigger, and when you divide by
zero, you get infinity. So infinity is not really a number, it is the vision of a number
you get if you keep on going. In this terminology, the Big Bang was a singularity
in time. You go back to a millionth of second after it occurred, and then to half that
age, and half that, and so on and on. Each time, the density of the universe almost
doubles, and as the time interval approaches zero, it diverges. This singularity forms
an impenetrable barrier in the past. We never get there. So what about the future?
We have seen that the density of the universe will determine its future. If it is
sufficiently large, gravity can overcome the expansion forces of the Big Bang and
eventually win, causing the universe to contract again, to terminate the Big Bang
episode by a Big Crunch, with everything once more collapsing to a singularity.
Such a universe would just have been some kind of fluctuation, it might have hap-
pened, it might happen again. With this in mind, perhaps, the British philosopher and
mathematician Bertrand Russell had God murmuring “Yes, it was a good play; I will
have it performed again”. But today’s studies seem to indicate that the expansion
will not stop, it will continue and even become more rapid. If this goes on, the world
will become ever cooler, the stars will burn out and stop shining, and the end will be
dark and cold, the Big Freeze.
There is no reason to imagine that the universe as a whole is spatially finite. It
seems reasonable to follow Thomas Digges and have it just going on and on, in the
same way. Cosmologists now call this the Copernican Principle: we, at our specific
location, are in no way special. There is no reason to expect that the spatial regions
not accessible to our probing are any different from those we can see, or that we are
different from them. Such a view is, of course, quite in accord with Ockham’s razor,
with simplicity as the basic guiding line in science. On the average, such a principle
appears to be in agreement with all we have observed—in space. In time, we believe
today that this is not the case, extrapolating from the human level to the Big Bang.
But similar to the reincarnation philosophy of Hinduism, some cosmologists—here
the British theorist Fred Hoyle was perhaps the dominant figure—suggested that the
universe is in a steady state of being, not of evolution, passing through an unending
number of equivalent forms, so that on the average, it would be the same also in time.
It was Hoyle who, in an ironical sense, coined the phrase “Big Bang” for the other
extreme, the possibility he thought to be unacceptable. Above all, it was the cosmic
background radiation that eventually made the Big Bang theory appear as the more
appropriate.
Starting from the Big Bang origin, the universe has passed through a number of
very different stages, expanding and cooling, creating empty space, atoms, stars,
galaxies, the Earth, plants, animals and humans. With time, everything has changed
dramatically, so after the initial singularity, there are several well-defined stages.

8.1 Ultimate Horizons in Time


It is perhaps instructive to begin by considering, as an example from our normal
world, the transitions that, through cooling, turn a hot electromagnetic plasma into
ice. Our starting point for this is a plasma of positively charged atomic nuclei and
negatively charged electrons. To eventually arrive at water, we take the nuclei to be
those of hydrogen and of oxygen, in the ratio two to one; the number of electrons is
such as to keep the entire system electrically neutral. At sufficiently high temperature,
the kinetic energy of the constituents is higher than the binding energy of atoms;
that’s why we have a plasma. Lowering the temperature enough (we assume constant
pressure throughout the entire evolution) allows atomic binding, and so the system
turns into a gas of hydrogen and oxygen atoms. A further decrease of temperature
makes a coupling to H₂O molecules possible, so we now have water vapor. Below
T = 100 °C, this condenses to form liquid water, which for T < 0 °C freezes to form
ice. The entire temperature evolution is illustrated in Fig. 8.1.
The transition between water and ice is, as noted in the previous chapter, a well-
known case of spontaneous symmetry breaking: the rotational symmetry in the liquid
is broken by the crystal structure of ice. The liquid–vapor transition leads to an abrupt
change in the density of the medium, and such changes are structurally very similar to
spontaneous symmetry breaking. The transitions from molecular to atomic gas and
to the formation of a plasma out of the latter are not quite as abrupt. But to identify the
gas-to-plasma transition, we can apply a voltage to the sides of the container; we then
notice a spontaneous onset of electrical conductivity at the ionization temperature.
Moreover, we should note that if we carry out the cooling such that the medium
is in equilibrium at all times (adiabatically, in physics language), then none of the
different states retains any information of the previous stages. If I am given a glass
of water, there is no way for me to determine if that water was ice or vapor half an
hour ago. There is no way to see through the veils formed by changes in the states
of matter.
The early stages in the temporal evolution of the universe are, according to the
present scenario, also determined by spontaneous changes induced by cooling. At
the first in the hierarchy of such transitions (see Fig. 8.2), gravity and the unified

[Figure: the states of matter as the temperature T decreases: plasma → (atomic binding / ionisation) → atomic gas → (molecular binding / break-up) → vapor → (condensation / evaporation) → water → (freezing / melting) → ice.]

Fig. 8.1 The states of matter attainable from water


[Figure: Big Bang → Planck era → inflation → grand unification → quark–lepton plasma (quarks, gluons, leptons, electroweak bosons, Higgs) → Higgs transition → quark–gluon plasma (quarks, gluons, leptons, photons, weak bosons) → quark confinement → electromagnetic plasma (hadrons, leptons, photons) → photon decoupling → physical universe (atoms, photons, neutrinos).]

Fig. 8.2 The temporal evolution of the early universe after the Big Bang, passing through various
transition horizons to arrive successively at different states of matter

electronuclear force become different forms of interaction, ending what is generally
called the Planck epoch, during which the quantum nature of the universe was essen-
tial for all forms of interaction. Rather soon afterwards, inflation and the resulting
dramatic expansion of space fragmented the matter of the universe into causally dis-
connected pieces—the origin of the horizon problem of today’s cosmology. At the
next critical point, the strong and the electroweak forces part ways, making this the
horizon at which grand unification ends; from here on, the strong interaction begins
to dominate. Hence now quarks and leptons become distinct, but still massless basic
constituents. This is followed by the Higgs horizon, beyond which the electroweak
force splits into weak and electromagnetic sectors, with heavy weak vector bosons
and massless photons. With the Higgs mechanism as the crucial feature here, this is
also the point at which quarks and leptons acquire their intrinsic masses. Moreover,
here at the very latest, the baryon asymmetry of our present universe originated; but
it may have earlier origins as well. For the leptons, this point is already the end of
the chain, while quarks still undergo one further transition: the colored quarks now
become confined to form color-neutral hadrons, and as a result, the physical vacuum
makes its first appearance. This confinement horizon is characterized by the spon-
taneous breaking of the chiral symmetry of the quark interaction; here the quarks
acquire their dynamically generated effective mass. It is this mass (some 300 MeV
per u or d quark), not the intrinsic Higgs-induced bare quark mass (some 2–6 MeV
per u, d), that makes the mass of the nucleons and hence also the overall inertial
mass observed in the universe.
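(A quick sum shows how this works out: three constituent quarks at some 300 MeV each give about 0.9 GeV, close to the observed nucleon mass of 0.94 GeV, whereas the bare quark masses alone would contribute barely 10 MeV, about one percent.)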
In the further evolution, nucleons now combine to form nuclei (nucleosynthesis),
and these eventually combine with electrons to form electrically neutral atoms. From
this point on, the universe was electrically neutral, so that the corresponding last
scattering horizon defines the time at which the photons remaining in the universe
were on their own—without any interaction partner. From then on, they formed a
non-interacting background gas, the cosmic background radiation, which due to the
expansion of the universe cooled from the initial 3,000 Kelvin (at last scattering)
to the 2.7 Kelvin we observe today. In this sense, the last scattering horizon is that
impenetrable last veil—we can never look back to earlier times, since in all previous
stages, the photons participated in the interaction and thus kinetically lost whatever
information they might have carried from still earlier stages.
In Fig. 8.2, we have illustrated the temporal evolution of the universe after the Big
Bang. It is amusing to imagine some superhuman creature taking a box of the matter
of our world and heating it, putting more and more energy into it. The medium in
this box should then pass through the different stages, cross the different transition
horizons—up to what point? How far back could we go?
The scheme we have shown is largely our present view of things, as based on
the extrapolation of physical concepts found to be correct elsewhere. The onset of
quark confinement and the corresponding vacuum horizon are presently studied in
high energy nuclear collision experiments—the quark-gluon plasma just above the
critical temperature is the hottest, densest matter we can access experimentally. The
Higgs horizon, at a temperature a thousand times higher than that of quark confinement,
is—at least with present techniques—not attainable in the sense of matter of such
densities. What we can do here is to check whether the interaction pattern assumed
to lead to it, in particular the Higgs mechanism for generating intrinsic masses, is
indeed observed in elementary high-energy interactions. And that has just now been
shown to be the case—at least that is what the CERN experiments seem to indicate.
What happened prior to the Higgs horizon, the form of grand unification, a possible
theory of everything, all that so far remains unsolved even for the interaction forms
defining the theory. At this time the land “beyond the standard model” is still a land
with little or no experimental claim marks, well hidden behind more than just one
veil…
While, in our present thinking, there seems to be no reason to assume that the universe is
finite in space, the part that we can actually ever see certainly is. And even within
our universe, the interior of black holes remains beyond our reach.

8.2 Ultimate Horizons in Space


We can never find out what is happening now outside our present Hubble radius of
some 14 billion light years. So what we can justly call our universe is a bubble of
that size, immersed in a sea of unknown size, a terra incognita which will forever
[Figure: the Earth at the center of a sphere of Hubble-radius extent, our universe, surrounded by the absolute elsewhere; small bubbles inside denote black holes.]

Fig. 8.3 The spatial extension of our universe, up to the Hubble radius. The small bubbles denote
black holes contained in our part of the universe

remain that. And the inside of our bubble, like a Swiss cheese, contains black holes
as further bubbles inaccessible to our probing—at least, if the prober wants to come
back to bring information to the outside. Schematically, the situation is illustrated in
Fig. 8.3. We are now indeed at the center of our universe—but of ours, not of THE
universe. The black holes indicate their presence to us through gravitational effects.
In addition, they can—at least in principle—emit that mysterious Hawking radiation;
we shall come back to this shortly, since it is perhaps the only phenomenon so far
combining quantum and gravity phenomena.
Black holes are regions of inaccessibility created by gravitational forces. Einstein’s
principle of equivalence states that the effect of such forces is (on a classical level) not
distinguishable from that due to acceleration. So an observer undergoing constant
acceleration encounters the counterpart of the event horizon of a black hole; for
him, there is a Rindler horizon that he can never reach. He can send signals into
the forbidden region, but none from there will ever reach him. And just as quantum
effects lead to Hawking radiation at the Schwarzschild horizon of black holes, the
existence of the Rindler horizon turns the empty space of the accelerating observer
into a glowing thermal medium.
So we see that once the phenomena of the quantum world come into play, the
nature of things starts to be really quite different. The flow of time, the motion of
objects, their energy levels—all these are now discrete, occur in jumps.

8.3 The End of Determinacy


We can no longer say that in a certain place there is a particle—we can only indicate
how probable that is. The behavior of nature becomes stochastic: now indeed God
begins to play dice. And in this world, we find the perhaps most spurious of all
horizons, that separating the real and the virtual. Empty space, the physical void
with “nothing” in it, becomes a sea of unborn particles, waiting for an energy input
to jump into the real world. So the vacuum becomes for us something like a pond of
dark water: we see nothing, but if we throw our fishing line into it, we can sometimes
pull out some of the creatures hidden below the surface. In this sense, the ever more
powerful particle accelerators are fishing rods allowing fish of different size and
species to be pulled out of this sea. Ever bigger and ever rarer: the latest catch is
the elusive Higgs boson, a catch rightfully acclaimed. And also the spaceship of the
accelerating observer is fishing in that sea.
Yet even the power of such accelerators has its limits. For many years, intense
experimental efforts were made to break up the smallest observed carriers of matter,
nucleons, into their ultimate constituents, quarks. All these experiments failed: the
ultimate building blocks of nucleons are bound so strongly that no force can tear them
loose, as had been proposed by the Roman philosopher Lucretius more than 2,000
years ago. Today, the theory of the strong interaction, of nuclear forces, predicts this
as a fundamental property of the quark interactions. The only way to remove a quark
from its partners inside a nucleon is to create matter so dense that each quark has the
choice of many other partners. So it is not high energy that can do it, but rather high
density—which is why one now uses the big accelerators also to collide the largest
nuclei possible, so that in the collision they form for a short, fleeting moment in time
such a superdense medium. That would recreate in the laboratory the matter which
made up the universe when it was still on the other side of the confinement horizon,
when the present physical vacuum did not yet exist.
As we have noted several times, each horizon presents for humanity a challenge to
find out what lies beyond. Is this also possible for some of the other ultimate horizons
we have shown to exist, or does that push us into the regime of science fiction? Is it
possible to obtain information from regions outside our Hubble radius? Can we go
back in time and study the world in its earlier stages? Here we encounter a curious
bifurcation point, a fork in the road of human thinking. Based on the experimental
observations made so far, the answer to the questions just noted is no. But if we ask
if the laws of physics, as we have them today, really rule out our exotic requests, the
answer is not so definite. And it is these speculative possibilities that have triggered
so much work and excitement in the past decades.
The crucial weak point, the most striking missing link in our picture of the world
is the connection of the largest scales, gravity, and the smallest, quantum phenomena.
The past century has brought three great achievements in physics. Special relativity
is based on the finite universal speed of light, c, and leads to the equivalence of
mass and energy. General relativity shows how the presence of stellar masses curves
space and introduces a new kind of geometry; we symbolize this aspect by the
gravitational constant G. Finally, the microscopic world of quantum theory parcels
energy into finite-sized packages, quanta, whose size is defined through Planck’s
constant h. Our quantum view of the world, quantum field theory, applies for the
time being to realms where the role of gravity can be neglected. And on the other
hand, cosmology, general relativity, the expansion of the universe, all these consider a
classical world, basically without quantum effects. Quantum gravity, the combination
of the three great areas, is still missing: it is, as someone has noted, a name, an idea
in search of a theory. The only apparently firm result incorporating all three of the
mentioned constants of nature, and the Boltzmann constant k as well, is that of the
Hawking radiation of a black hole, predicted to be kT = ℏc³/(8πGM), where
M is the mass of the hole. And this radiation is, for reasons we have mentioned,
hidden for many, many eons below the microwave background radiation.
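(To get a feeling for the numbers, an illustrative estimate: for a black hole of one solar mass, M ≈ 2 × 10³⁰ kg, inserting ℏ ≈ 1.05 × 10⁻³⁴ J s, G ≈ 6.7 × 10⁻¹¹ m³ kg⁻¹ s⁻², c = 3 × 10⁸ m/s and k ≈ 1.4 × 10⁻²³ J/K gives T ≈ 6 × 10⁻⁸ K, many orders of magnitude below the 2.7 K of the cosmic background radiation.)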
But the basic question remains: What happens when the density of constituents becomes so
great that position–momentum uncertainties are essential, what happens at scales
below the Planck length r_Pl = (hG/c³)^{1/2}? What happens at times shorter than the
Planck time, t_Pl = r_Pl/c? Then we do need something, to be called quantum gravity.
The evolution of the universe before inflation, in the Planck era, requires such a
combination of micro and macro worlds. Speculations in this field of thought have
led to some exciting shadows behind the veil, shadows which, in the future, may or
may not turn out to be a reflection of reality.

8.4 Hyperspace

The stage for one such dream is obtained by adding further space dimensions. We
live in a world of three dimensions in space and one in time. Let us put the time aside
for a moment. In our three space dimensions, we can imagine a simpler world of only
two space dimensions, occupied by “flat” creatures living there. They could never
see “out” of their two-dimensional world, just as we are constrained to our three
space dimensions. But they could check whether their world is distorted, just as we
can check whether space in our three-dimensional world is “curved”, for example as
an effect of gravity as described in general relativity. If the two-dimensional world
were really flat, two parallel light beams would never cross. But if it contained a
center of severe gravity somewhere, that would make the surface seem to “bulge”
there, and now two light beams would cross at the bottom of the bulge (see Fig. 8.4).
Evidently this is quite similar to the deviation of the starlight observed by Sir Arthur
Eddington in his celebrated confirmation of Einstein’s general relativity. Quite gen-
erally, planar geometry would fail within the bulge: the angles of triangles would
add up to more than 180°, just as they do on the surface of a globe. So the flat people,
within their 2-d world, could detect curvature. And to help them understand such
curvature, they could imagine that their world is embedded in a larger one, of three
dimensions: in hyperspace. The additional dimension is, from their point of view,
purely hypothetical, it is not real, they can never “enter” it, and any light beam will
pass in the two dimensions of their real space. But if they take the hypothetical third
dimension to be flat, then their 2-d world becomes a curved surface in that 3-d space,
as illustrated in Fig. 8.4.
We used a similar method in Chap. 3, to illustrate the effect of a strong magnet
on a metallic coin; adding another, “fictitious” dimension often helps to illustrate the
modifications of space due to force. So let us return to the picture of the 2-d world in
3-d hyperspace. In Fig. 8.5 we illustrate a beam of light passing from point A to

Fig. 8.4 The effect of space curvature as observed by “flat” creatures in a two-dimensional world
(a) and as seen when embedded in a hyperspace of three dimensions (b)

point B in the 2-d world—it always remains “in” its 2-d surface, also when that
becomes curved. The travel time of the beam is thus the length of the curved path
divided by the speed of light. If, however, it became possible by some miraculous
means to construct a tunnel through the hyperspace, then the path and hence the
length of time needed for the passage would be correspondingly shortened. For the
inhabitants of the 2-d world, the result would be most striking. For them, the speed
of light is determined by its value in their curved surface world. So the light signal
through the tunnel would appear to have travelled from A to B at superluminous
speed. And if point B were outside the Hubble radius for point A, out of reach by
“normal” means, the hyperspace tunnel would allow a signal to get there. So then A
could transgress its ultimate spatial horizon! Moreover, this scheme is quite general:
if any two points in our own world, say on Earth and on a distant star X, are a hundred
light years apart in “real” space, but somehow only one meter in a hyperspace, a tunnel
through the latter would allow almost instantaneous communication with that star.
The crucial question thus is whether such tunnels through hyperspace are merely
a figment of the imagination of science fiction writers, or whether they could exist
in the world defined by the laws of physics.


Fig. 8.5 A beam of light (solid red line) passing from point A to point B in a 2-d world. A possible
tunnel (pink tube) through the third hyperspace dimension would clearly provide a shortcut for its
passage (dashed red line)

8.5 Cosmic Connections

The search for tunnels of this kind has therefore triggered much study in the past
decades. The first possibility in physics was introduced in 1935 by Albert Einstein and
Nathan Rosen; it is today known as the Einstein–Rosen bridge and provides a tunnel
between two distinct universes. Something like such a tunnel had appeared much
earlier in a more poetic form, in Lewis Carroll’s Alice in Wonderland: Alice passed
through a rabbit hole from her normal environment into another, fantastic world.
Wormholes, as these paths through hyperspace are now generally
called, are indeed possible solutions of the equations of general relativity.
But these solutions suffer from a number of difficulties that severely hinder their
application as useful tunnels of passage. They connect regions showing some form
of singularity, and even that only for a brief instant of time, as a fluctuation; moreover,
they are generally of almost vanishing thickness, so that no Alice would fit through.
This has led to investigations of conditions needed to keep them open for longer and
for larger dimensions, to move them out of the range of quantum fluctuations.
One puzzle we have already encountered was what made the Big Bang bang—
why did the early universe expand? In particular, what forces were responsible for
the extremely rapid expansion leading to inflation, what could cause the increasing
expansion still observed today, and are the two related? It has been suggested that the
origin of the presently observed expansion is a novel medium filling all of space, dark
energy. Normally the expansion of a medium leads to a reduction of its pressure; for
dark energy, the opposite holds, so that the more it expands, the greater the pressure
becomes. This would account for the fact that with increasing “size”, the expansion
of the universe accelerates further. What could this mysterious medium be? Its only
function is to make the universe expand—it is not subject to any of the other forces,
in particular it is not affected by gravity.
In any case, cosmologists have proposed that if somehow the interior of a worm-
hole could be filled with such dark energy, its negative pressure might hold the
wormhole open wider and for a longer time. So the same force that leads to the
expansion inherent in the Big Bang might also allow apparently superluminous cos-
mic connections between different regions of space and thus permit us to transgress
the ultimate horizons of our conventional world.
This has led to a curious problem, also seemingly more fiction than science. If
achievable, such hyperspace tunnels allow not only superluminous travel in space,
they also make possible travel in time. In fact, simply the possibility of travel faster
than the speed of light allows communication with the past, since an absolute future
and an absolute past are definable for an observer only based on a finite universal
speed of light. For a superluminous observer, there will be events for which past
and future become inverted. And this causes severe problems with our concepts of
causality. If I could travel back in time and kill my parents before I am born, how
can my existence be explained? So the existence of both superluminous travel in
general and wormholes in particular are difficult to accommodate in a world based
on chronology and causality. Omar Khayyam, with whom we opened this chapter
[Figure: pie chart showing dark energy 74 %, dark matter 22 %, visible matter 4 %.]

Fig. 8.6 The fractional contributions of media in the universe to the overall energy required to
account for the present accelerated expansion

looking for what might lie beyond the ultimate horizons, provided already a thousand
years ago a poetic exclusion of time travel into the past,
The Moving Finger writes; and, having writ,
Moves on: nor all thy Piety nor Wit
Shall lure it back to cancel half a line.
So one aim of a future quantum theory must be the clarification of these problems.
In the words of Stephen Hawking, “we must make the world safe for historians”.
One thing we do have to keep in mind is that possible solutions of the equations
of general relativity are not per se also solutions of the general laws of physics. As
we have noted, the bridge between quantum physics and gravity does not yet exist,
and once it is found, its results may destroy the bridge of Einstein and Rosen and
all related considerations. Quantum physics might rule out singularities in space and
in time, and thereby also rule out the possibility of wormholes of any kind. What
we presently see behind that veil may well therefore be illusions, distorted shadows
of the real world. And another aspect also has to be borne in mind, while we are
waiting for quantum gravitation. To formulate a theory of everything, this quantum
gravitation, once it exists, still has to be made to match the quantum field theory of
the standard model, or whatever extensions beyond, for the interactions of quarks
and leptons.
Nevertheless, there remain also less speculative issues that require an answer from
a quantum gravity yet to come. We have seen that the expansion of the universe is
determined by its density, specifically its energy density. The latest measurements of
the accelerating expansion allow an estimate of the present energy density, and this
estimate creates several problems. We can determine fairly well all the visible matter
in the universe, stars, galaxies, interstellar gas and the like. All this accounts for only
4 % of what is needed. And in fact gravity estimates of galaxies indicate that besides
the visible matter, there must be a large amount of invisible dark matter—matter
which cannot be seen, but still contributes to the gravitational pattern of galaxies. In
contrast to black holes, this dark matter neither emits nor absorbs electromagnetic
radiation. And even if we include the amount required here, if we add up all matter
we can detect in the universe, this only amounts to some 26 % of the energy density
needed to get the acceleration; more than three quarters is still missing (see Fig. 8.6).
The remainder exists presumably in the form of the dark energy introduced above
as the origin for the Big Bang expansion in the first place. It cannot be seen, it is not
subject to gravity, it just makes the space of the world expand.

But once we do have a final theory, a theory of everything: what do we mean by
final, by everything? We have seen that, given quantum mechanics, the structure and
the spectrum of atoms, for example, of the helium atom, could be calculated. But this
does not account for the behavior of liquid helium, with superfluidity and the like.
To understand “everything”, there is not only the problem of the large and the small.
That problem is the issue for reductionism, in size, in structure and in interaction of
the constituents. But there is also the problem of the few and the many. A new field
of study has appeared in recent decades, emergent behavior, based on the observa-
tion that a system of many interacting components may lead to forms of collective
behavior not derivable from the two-body interaction. In some simpler cases—we
have seen that in the case of spontaneous symmetry breaking and the resulting phase
transitions—the given form of the two-body interaction allows the calculation of
the equation of state of the macroscopic system. Lars Onsager showed this for the
case of a two-dimensional spin system, the Ising model. But in general, the transi-
tion from dynamics to thermodynamics, in particular to the critical behavior found
in phase transitions, is not at all straightforward. In the physics of strong nuclear
interactions, quantum chromodynamics provides another instance of this problem.
Energetic scattering processes, involving short-distance two-body interactions, are
calculable, and the results agree with the corresponding experimental observations
up to remarkable precision. And through numerical studies, the spectrum, the masses,
charges, etc. of all the hadrons have also been calculated. However, for the thermo-
dynamics of quarks and gluons, as we saw in Chap. 6, much still has to be clarified.
The confinement–deconfinement transition, from hadronic matter to a quark–gluon
plasma, has been established; but the nature of the deconfined medium, the quark–
gluon plasma, in the region hopefully accessible to nuclear collision experiments,
that remains yet to be understood.
Keeping these aspects in mind, we see that once we have determined the interaction
form in the standard model, the behavior of bulk matter in the relevant regime is still
another issue. Is the Higgs transition the point where the baryon asymmetry of our
universe was born? What are the possible states of matter above the transition point?
And the same question of course reoccurs for any still earlier transitions. What is
the equation of state, the phase diagram for matter in the grand unification regime?
For experimental studies, the answers to these questions seem to lie well beyond
accessibility. But once the theoretical problems on a local, few-body level are solved,
we can at least try to see what collective behavior this might imply.
We have mentioned emergent behavior. Difficult though it may be to extend the
few-body dynamics of specified constituents to the statistical limit, it still presumes
that the nature of these constituents and the form of their interaction remain relevant,
are determining for the result. However, the ultimate idea of emergent behavior
transcends such a reductionist basis. Percolation theory, for example, provides a
basis for galaxy formation, for magnetization, for quark deconfinement—but so it
does also for the spread of forest fires, for the behavior of animal swarms, for the
distribution of subterranean oil deposits. So there seem to emerge general patterns
of behavior that do not depend on the nature of the constituents and their form of
interaction. These present us with yet another quite striking and conceptually very
different form of grand unification.
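To give a feeling for how little the nature of the constituents matters here, consider a minimal numerical sketch (illustrative Python, with freely chosen grid size and trial counts, not part of the original text): occupy the sites of a square grid at random with probability p, and ask whether an occupied cluster connects the top of the grid to the bottom.

    import random

    def spans(grid, n):
        # breadth-first search from the occupied sites of the top row;
        # True if some occupied cluster reaches the bottom row
        seen = {(0, c) for c in range(n) if grid[0][c]}
        stack = list(seen)
        while stack:
            r, c = stack.pop()
            if r == n - 1:
                return True
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    stack.append((nr, nc))
        return False

    def spanning_probability(p, n=40, trials=100):
        # fraction of random grids, occupied with probability p, that span
        hits = 0
        for _ in range(trials):
            grid = [[random.random() < p for _ in range(n)] for _ in range(n)]
            hits += spans(grid, n)
        return hits / trials

    for p in (0.50, 0.55, 0.59, 0.63, 0.70):
        print(p, spanning_probability(p))

Whatever the sites represent, quarks, trees or oil pockets, the spanning probability jumps from near zero to near one around the same critical value, p ≈ 0.59 for this lattice; only the geometry of connection matters, not the identity of the constituents.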
Some of what today still remains behind different ultimate horizons may then
obtain a visible shape through future studies; mirages might become reality. But
in spite of the monumental achievements of theoretical physics, astrophysics and
cosmology, the ultimate test of science is and must continue to be a comparison with
observable nature. It is a great challenge to find out what the possible worlds are
for the most general theoretical schemes. However, there is no law of nature stating
that what is possible must also exist. So in the end, ultimate horizons are the visible
borders that define for us the limits of the real in the sea of the imaginable.
A Notes on Notation

Each subject of human endeavor seems to have a language particularly suited to it.
More than a thousand years ago, Charlemagne, Carolus Magnus, leader of the Holy
Roman Empire in the ninth century, is supposed to have said: I speak Spanish to
God, Italian to the ladies, French to the men, and German to the horses. Even today,
the horses seem to appreciate German.
Mathematics, it was said, is the language God uses if he wants to speak to humans.
This may well be true, although he is most likely more polyglot and able to commu-
nicate as well through music or poetry. Nevertheless, if he really wants to make a
point, it seems that he does resort to mathematics. How else can we understand that
the arrangement of the petals on all flowers follows a sequence published around 1200
A.D. by the Italian mathematician Leonardo da Pisa, now known as Fibonacci, to
describe the growth of a population of rabbits. Just for fun, we recall it: 0, 1, 1, 2, 3,
5, 8, 13, 21, 34, … It was known already in antiquity, related to the proportio divina,
the divine proportion between different length scales. Today it is a popular item on
mathematics tests: how does it go on? What law governs the pattern? And for the
more advanced: what is the limit obtained for two successive Fibonacci numbers,
13/8, 21/13, 34/21, …? These are not artifacts only of the human mind—they are
found, for example, in the arrangements of the centers of sunflowers, up to 144/89
or 233/144.
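A minimal sketch in Python makes the convergence visible: the ratio of successive Fibonacci numbers approaches the golden ratio $(1+\sqrt{5})/2 \approx 1.618$ (an illustration, not from the text):

```python
# Successive Fibonacci ratios converge to the golden ratio.
a, b = 1, 1                       # F(1), F(2)
for n in range(3, 13):
    a, b = b, a + b               # advance to F(n-1), F(n)
    print(f"F({n})/F({n-1}) = {b}/{a} = {b / a:.6f}")

print("golden ratio =", (1 + 5 ** 0.5) / 2)   # 1.618033...
```

The last line printed, 144/89 ≈ 1.617978, is exactly the sunflower ratio mentioned above.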
Here we only want to recall some mathematical notation helpful for us. Multiple products are generally written as powers, so that $2 \times 2 \times 2$ becomes $2^3$. Since the cosmos involves very large and the microcosmos very small numbers, powers of ten provide a convenient formulation. Thus one thousand becomes $10^3$, one million $10^6$, with the exponent counting the number of zeros following the one: $10^3 = 1000$, and so on. Similarly, one thousandth becomes $10^{-3}$, one millionth $10^{-6}$, etc.; here the negative exponent counts the number of places between the decimal point and the one: $10^{-3} = 0.001$. If we want to concentrate only on the exponent, we consider the logarithm: $\log(10^6) = 6$. This form is referred to as the logarithm to base 10, since it counts the powers of 10. Another, perhaps even more frequently arising form is the natural logarithm, $\ln$. Its base is the Euler number $e \approx 2.718\ldots$, so that $\ln x = 3$ means $x = e^3$; it is named after the Swiss mathematician Leonhard Euler
and can be written in the form
$$e = 1 + \frac{1}{1} + \frac{1}{1 \times 2} + \frac{1}{1 \times 2 \times 3} + \frac{1}{1 \times 2 \times 3 \times 4} + \cdots$$
Together with $\pi$, it is perhaps the most important number in mathematics, and it has an immense number of applications in a wide range of fields. Just one illustration: any rate of growth of a population that is proportional to the number of its members becomes exponential, i.e., it has the form
$$N(t) = N(0)\,e^{rt},$$
with growth rate $r$. Thus if at a given time a country has $N(0)$ inhabitants, and if the population increase (births minus deaths) is 1 % per year, the overall population will grow with time $t$ as
$$N(t) = N(0)\,e^{0.01t},$$
where $t$ is measured in years. This means that the population will have doubled after some 70 years.
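The 70-year figure is just the doubling time obtained by solving $e^{0.01t} = 2$, i.e. $t = \ln 2 / 0.01 \approx 69.3$ years; a two-line check (an illustrative sketch):

```python
import math

r = 0.01                     # 1 % growth per year
print(math.log(2) / r)       # doubling time: ~69.3 years
```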
In daily use, it is convenient to have names for the powers of ten, analogous to our common thousand or million, but more systematic. So a thousand meters becomes a kilometer, a thousand liters a kiloliter, a thousand volts a kilovolt. The most commonly used prefixes here are:
kilo (k) $10^3$
mega (M) $10^6$
giga (G) $10^9$
tera (T) $10^{12}$
peta (P) $10^{15}$
while at the other end of the scale we have
centi (c) $10^{-2}$
milli (m) $10^{-3}$
micro (µ) $10^{-6}$
nano (n) $10^{-9}$
pico (p) $10^{-12}$
femto (f) $10^{-15}$
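In code, this bookkeeping reduces to a lookup table. The helper below is a hypothetical illustration (the function name and table are my assumptions, not from the text):

```python
import math

# Map prefix exponents to symbols, ordered from largest to smallest.
PREFIXES = [(15, "P"), (12, "T"), (9, "G"), (6, "M"), (3, "k"),
            (-2, "c"), (-3, "m"), (-6, "µ"), (-9, "n"),
            (-12, "p"), (-15, "f")]

def with_prefix(value: float, unit: str) -> str:
    """Rewrite a value using the largest prefix that fits it."""
    exp = math.floor(math.log10(abs(value)))
    for e, symbol in PREFIXES:
        if exp >= e:
            return f"{value / 10 ** e:g} {symbol}{unit}"
    return f"{value:g} {unit}"

print(with_prefix(7.0e12, "eV"))   # 7 TeV
print(with_prefix(1.4e8, "eV"))    # 140 MeV
```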
The common measure of energy (or mass, following Einstein) in elementary particle physics is the electron volt (eV); it is the amount of energy a single electron gains when it traverses a potential difference of one volt. The mass of an electron is roughly 0.5 MeV, that of a pion some 140 MeV, and that of a proton about 1 GeV.
This is to be compared to the collision energies of present-day accelerators: the Large Hadron Collider (LHC) at the European Organization for Nuclear Research CERN in Geneva is today the most energetic such facility, with a top energy of some 7 TeV for proton–proton collisions. If this energy, in a collision, were totally converted into pions, it could lead to tens of thousands of such mesons, indicating why
these interactions result in multiparticle production. And when the LHC is used for
nuclear collisions, using lead nuclei instead of protons as projectiles, the collision
energy reaches the PeV regime, with correspondingly higher multiparticle produc-
tion. This is, as we noted above, one of the reasons one hopes to make primordial
matter in such collisions.
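As a rough order-of-magnitude check, dividing the collision energy by the pion rest mass bounds the pion yield (an illustrative estimate with the numbers quoted above):

```python
# Upper bound on pion multiplicity: all collision energy into pion rest mass.
E_collision = 7.0e12    # proton-proton collision energy in eV (7 TeV)
m_pion      = 1.4e8     # pion rest mass in eV (~140 MeV)

print(f"maximal pion count: {E_collision / m_pion:,.0f}")   # ~50,000
```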
We have at various times made use of the basic constants of nature; let us therefore list here their numerical values. The constant connecting the energy of mechanics with the temperature of thermodynamics is the Boltzmann constant,
$$k \approx 8.6 \times 10^{-5}\ \mathrm{eV/K};$$
with the approximate equality sign $\approx$ we indicate that the value is given only to the accuracy of the number of significant digits shown ($a = 1.3728$ becomes $a \approx 1.37$). The universal speed of light in vacuum is
$$c \approx 3.0 \times 10^{8}\ \mathrm{m/s}.$$
Newton's constant of gravity is
$$G \approx 6.7 \times 10^{-11}\ \mathrm{m^3/(kg\,s^2)}.$$
Finally, Planck's constant is given by
$$h \approx 4.1 \times 10^{-15}\ \mathrm{eV\,s}.$$
Here one should also mention the often-used reduced Planck's constant, $\hbar = h/2\pi$.
Finally, we note, for excursions into the literature, that in particle physics it is often convenient to measure velocities in multiples of the speed of light, energies in units of the reduced Planck constant per second, and temperatures in units of the Boltzmann constant. This is commonly abbreviated by $c = \hbar = k = 1$.
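As a closing worked example (an illustrative sketch using the rounded values above; the temperature chosen and the variable names are mine, not the book's):

```python
# Rough unit conversions with the rounded constants listed above.
k_B = 8.6e-5        # Boltzmann constant in eV/K

# A temperature of about 2e12 K (roughly the deconfinement scale)
# corresponds to an energy per particle of about k*T:
T = 2.0e12          # kelvin
print(f"k*T = {k_B * T / 1e6:.0f} MeV")        # ~170 MeV

# Conversely, the electron rest mass of 0.5 MeV as a temperature:
print(f"T = {0.5e6 / k_B:.1e} K")              # ~5.8e9 K
```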
Appendix B: Further Reading
While there does not seem to be any one book addressing specifically the different horizons emerging in our attempt to find the limits of the universe, there exist quite a few excellent general accounts of specific topics, far too many to list here. The following are just those I found particularly understandable and illuminating.
Cosmology, Gravity and Black Holes:
Schutz, B.: Gravity from the Ground Up. Cambridge University Press, Cambridge (2003)
Smolin, L.: Three Roads to Quantum Gravity. Weidenfeld and Nicolson, London
(2000)
Thorne, K.S.: Black Holes and Time Warps. W. W. Norton & Co., New York
(1994)
Rees, M., Begelman, M.: Gravity’s Fatal Attraction, 2nd edn. Cambridge Univer-
sity Press, New York (2010)
Interactions and Elementary Particles:
Davies, P.C.W.: The Forces of Nature, 2nd edn. Cambridge University Press, Cambridge (1986)
Close, F.: Particle Physics: A Very Short Introduction. Oxford University Press,
Oxford (2004)
Symmetry:
Close, F.: Lucifer’s Legacy: The Meaning of Asymmetry. Oxford University Press,
Oxford (2000)
Zee, A.: Fearful Symmetry. Macmillan Publishing, New York (1986)
Author Index
A
Alfonso X of Castile, 13
Anderson, Carl, 51
Aristarchos, 13
Aristotle, 11, 20
Aston, Francis William, 78

B
Becquerel, Henri, 82, 83
Bell, John, 67
Berra, Yogi, 82
Bertlmann, Reinhold, 67
Bohr, Niels, 75
Boltzmann, Ludwig, 108
Born, Max, 75
Bose, Satyendranath, 76
Brahe, Tycho, 14
Broglie, Louis de, 75
Brout, Robert, 96
Bruno, Giordano, 17, 19, 34
Buridan, Jean, 133, 134
Bux, Bastian Balthasar, 51

C
Cabibbo, Nicola, 112
Carroll, Lewis, 156
Casimir, Hendrik, 53
Cassini, Giovanni Domenico, 20
Cavendish, Henry, 45
Chadwick, James, 73
Columbus, Christopher, 10, 12
Copernicus, Nicolaus, 13
Coulomb, Charles Augustin de, 23
Creutz, Michael, 111
Curie, Pierre, 128

D
Dalton, John, 71
Darwin, Charles, 78
Democritus, 71
Descartes, René, 20
Dias, Bartolomeu, 10
Dicke, Robert, 33, 34
Digges, Thomas, 17, 40
Dirac, Paul, 51
Doppler, Christian, 30

E
Eanes, Gil, 10, 12
Eddington, Arthur, 32, 78, 154
Einstein, Albert, 26, 27, 32, 46, 61, 67, 78, 156
Ende, Michael, 51
Englert, François, 96

F
Faissner, Helmut, 97
Fermi, Enrico, 76, 84
Feynman, Richard, 79, 116
Fizeau, Hippolyte, 22
Foucault, Léon, 33
Friedmann, Alexander, 33
Fritzsch, Harald, 91

G
Galilei, Galileo, 14, 15, 27, 33
Gama, Vasco da, 10, 12
Gauss, Carl Friedrich, 104
Gell-Mann, Murray, 88, 91
Ghazali, Al, 133
Glashow, Sheldon, 95
Goldstone, Jeffrey, 94, 135
Gross, David, 91
Guralnik, Gerald, 96
Guth, Alan, 37

H
Hagedorn, Rolf, 112
Hagen, Richard, 96
Hales, Thomas C., 104
Harriot, Thomas, 104
Harrison, David, 68
Hawking, Stephen, 50, 54, 147
Heisenberg, Werner, 52, 65, 75, 134
Helmholtz, Hermann von, 78
Henry the Navigator, 1, 9, 11, 12
Higgs, Peter, 96
Hooke, Robert, 25
Hubble, Edwin, 30
Hunefer, 59
Huygens, Christiaan, 21

I
Ising, Ernst, 130

J
Joyce, James, 88

K
Kelvin, Lord (William Thomson), 78
Kepler, Johannes, 20, 104
Khayyam, Omar, 156
Kibble, Tom, 96

L
Laplace, Pierre-Simon de, 45
Lederman, Leon, 84, 93
Lee, Tsung-Dao, 115
Leibniz, Gottfried Wilhelm, 131
Lemaître, Georges, 33
Lenz, Wilhelm, 130
Leucippus, 71
Li, Keran, 115
Lucretius, 8, 9, 72, 77, 88, 90, 119, 153
Luther, Martin, 14

M
Magellan, Ferdinand, 10, 12
Matsui, Tetsuo, 121
Maxwell, James Clerk, 25
Mendeleev, Dmitri, 72
Michell, John, 43
Michelson, Albert, 26
Morley, Edward, 26

N
Nambu, Yoichiro, 94, 134, 135
Ne’eman, Yuval, 88
Newton, Isaac, 15, 131
Nishijima, Kazuhiko, 88

O
Ockham, William of, 133
Oersted, Hans Christian, 25
Olbers, Heinrich, 19, 29, 30
Onsager, Lars, 131, 134, 158

P
Parisi, Giorgio, 112
Pascal, Blaise, 25
Pati, Jogesh, 97
Pauli, Wolfgang, 56, 75, 82, 84
Peebles, Jim, 34
Penrose, Roger, 50
Penzias, Arno, 33
Perkins, Donald, 80
Perl, Martin, 86
Pessoa, Fernando, 9
Planck, Max, 74
Podolsky, Boris, 67
Poe, Edgar Allan, 19, 29
Politzer, David, 91
Powell, Cecil, 80
Ptolemy, Claudius, 13

R
Raleigh, Sir Walter, 104
Richer, Jean, 21
Richter, Burt, 93
Rindler, Wolfgang, 63
Rømer, Ole, 20, 21
Rosen, Nathan, 67
Rutherford, Ernest, 72

S
Salam, Abdus, 95, 97
Sauter, Friedrich, 65
Schrödinger, Erwin, 75
Schwartz, Melvin, 84
Schwarzschild, Karl, 46
Schwinger, Julian, 66
Steinberger, Jack, 84

T
Thomson, J. J., 72
Ting, Sam, 93
Torricelli, Evangelista, 25

U
Unruh, William, 61

V
Vivaldo, Guido and Ugolino de, 12

W
Weinberg, Steven, 95, 140
Wheeler, John, 50
Wilczek, Frank, 91
Wilkinson, David, 34
Wilson, Kenneth, 92, 111
Wilson, Robert, 33

Y
Yukawa, Hideki, 79

Z
Zach, Franz Xaver von, 45
Zel’dovich, Yakov, 50
Zweig, George, 88
Index
A
Accessibility horizon, 3
Action at a distance, 23
Age of the universe, 36
Annihilation, 52, 98, 99
Antimatter, 83

B
Baryon, 82
Beta-decay, 83
Boson, 76
Bottom, 93
Bottomonia, 122

C
Charm, 93
Charmonia, 93, 122
Chiral symmetry, 137
Close packing, 104
Color, 90
Computer methods, 111
Computer simulation, 111, 132
Confinement, 90
Confinement horizon, 98
Continuous symmetry, 126, 135
Copernican principle, 148
Cosmological constant, 32
Critical behavior, 111
Critical points, 110
Curie point, 128

D
Dark energy, 37, 157
Dark matter, 37, 157
Deconfinement, 105
Degenerate ground states, 133
Dirac sea, 51, 52
Discrete symmetry, 126
Doppler effect, 30, 34, 36

E
Electricity, 22
Electromagnetism, 22
Electron, 72
Electroweak interaction, 95, 150
Electroweak transition, 142
Emergent behavior, 158
Entanglement, 67
Entropy, 109, 132
Escape velocity, 43
Ether, 25
Event horizon, 4
Exclusion principle, 75

F
Fermion, 76
Flavor, 90

G
Gauge boson, 139
General theory of relativity, 27, 32, 41
Global symmetries, 138
Gluon, 90, 111
Goldstone particle, 94
Grand unification, 140, 150
Gravitation, 15
Gravity, 15
H
Hadron, 82
Hadrosynthesis, 118
Hawking radiation, 55
Hawking temperature, 56
Heavy ions, 117
Higgs boson, 142, 143, 145, 146
Higgs field, 142
Higgs mechanism, 143
Higgs transition, 142
High-energy nuclear interactions, 116
Horizon problem, 35
Hubble’s law, 31, 36
Hyperspace, 47, 154

I
Inflation, 37
Inside-out cascade, 100
Interaction over a distance, 24
Ising model, 130, 131

J
Jupiter, 20

L
Latent heat, 113
Latent heat of deconfinement, 113
Leptons, 85
Little bang, 114
Local symmetries, 138

M
Magnetism, 23
Magnetization, 131
Meson, 79
Muon, 84

N
Neutrino, 84
Nucleon, 73
Nucleosynthesis, 118
Nucleus, 115

O
Order parameter, 131

P
Pair creation, 52
Percolation, 106
Periodic table in year 1869, 73
Periodic table of the elements, 72
Pion, 79
Planck constant, 74
Positron, 51

Q
Quantum chromodynamics, 91, 108
Quantum gravity, 49, 146, 153, 154, 157
Quark confinement, 91
Quark–gluon plasma, 111
Quarkonia, 122
Quarks, 88

R
Radioactivity, 83
Redshift, 30
Rindler horizon, 63

S
Schwarzschild radius, 46
Schwinger effect, 66
Singularity, 49
Special theory of relativity, 27, 41
Speed of light, 19
Spin wave, 135
Spontaneous symmetry breaking, 129, 142
Standard model, 95, 151
Strong interaction, 78
Symmetry, 125, 126
Symmetry breaking, 128

T
Theory of everything, 140, 146, 158
Tidal effects, 48

U
Uncertainty principle, 52
Unruh radiation, 65

W
Wave–particle duality, 75
Weak nuclear interaction, 84, 86
Wormholes, 156