
Atomic Theory:

Heraclitus = everything changes, Parmenides = change is logically impossible

how can matter change yet maintain its existence?

answer = it is composed of indestructible units called atoms

Dalton = determines that each element corresponds to a unique atom

develops system of chemical symbols based on atomic mass

compounds = atoms linked as molecules

opens way for new laws of physics

Loschmidt estimates size and mass of atoms (10^-8 centimeters and 10^-24 grams)

Matter:

four states of matter:


1. solid
2. liquid
3. gas
4. plasma

a change in state is called a phase transition

phase determined by mean temperature of material

atomic theory = explaining macroscopic phenomena through the behavior of microscopic atoms, in particular:

temperature = velocity of atoms

pressure = momentum of atoms

Ideal Gas Law:

ideal gas law is a statement of the relation between pressure, volume (or density) and temperature

it is given by PV=kT

the ideal gas law depends on atoms behaving in a purely kinetic fashion; it fails at extremes of temperature

kinetic theory requires that the number of atoms be large; when the number is small, statistical fluctuations dominate their behavior
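
in standard form (a sketch, with the constants written out): PV = NkT, where N is the number of atoms and k is Boltzmann's constant, 1.38 x 10^-23 J/K

as a check, N = 6 x 10^23 atoms at T = 300 K in a volume V = 0.025 m^3 gives P = NkT/V of roughly 10^5 Pa, about one atmosphere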

Triple point :

phase diagram is a graphical tool to display the behavior of a substance through the various states

where the three normal phases of matter meet in a phase diagram is called the triple point

phase diagrams are useful for seeing equilibrium regions and understanding the different behavior of a substance at different temperatures and pressures

Thermodynamics:

thermodynamics is the study of heat, work, temperature and energy

central to thermodynamics are three laws: the first is the law of conservation of energy

the 2nd law deals with the concept of entropy, a measure of the disorder of a physical system

entropy is measured globally, i.e. local systems can lower their entropy, but only through the transfer of heat and entropy to their surroundings

entropy leads to irreversible processes which are unexplained by Newtonian physics and its time-independent laws

temperature is the parameter that relates a system to its entropy; the 3rd law constrains temperature to a lower limit of absolute zero, which can never be reached
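
for reference, the standard statistical definition (not given above) is Boltzmann's relation S = k ln W, where W counts the number of microscopic arrangements consistent with the macroscopic state; more arrangements = more disorder = higher entropy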

Arrow of Time :

thermodynamics exposed some cracks in determinism and forced a closer look at the meaning of time

the microscopic world is time reversible, the macroscopic world is not

Poincare's theorem states that Nature is divided into a multitude of states; less ordered states are far more numerous than ordered ones

the laws of chance require that systems move towards high entropy states

it is not impossible for events to reverse themselves, just very, very, very improbable

entropy determines the arrow of time, the origin of entropy may be cosmological

Relativity :

relativity resolves the failures of Newtonian physics at extreme velocities and energies

a new type of science for its time, since it required sophisticated technology to test

relativity redefined fundamental quantities, such as mass and length, to be variable

these parameters did not become uncertain (quite the opposite) only relative

relativity is broken into two parts:


1. special relativity involving inertial frames
2. general relativity involving accelerated or gravitational frames

Special Theory of Relativity :

experiments with the electromagnetic wave properties of light find contradictions with the Newtonian view

Michelson-Morley experiment shows speed of light is constant regardless of motion of observer (!)

Einstein makes the constant speed of light the key premise of special relativity

special relativity interprets light as a particle called a photon

photon moves at speed of light and has zero mass

speed of light is an absolute limit, objects with mass must move at less than speed of light

space and time are variable concepts in relativity

time dilation = passage of time slows for objects moving close to the speed of light

Likewise, space is shortened in high velocity frames, which is called Lorentz contraction
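
in standard form (a sketch; the notes do not write the formulas out): t = t_0 / sqrt(1 - v^2/c^2) and L = L_0 sqrt(1 - v^2/c^2); at v = 0.87c the factor is about 2, so moving clocks run at half speed and moving lengths are halved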

relativity leads to some strange consequences, such as the twin paradox

however, all these predictions have been confirmed numerous times by experimentation

Spacetime:

relativity links where and when (space and time) into a 4 dimensional continuum called spacetime

positions in spacetime are called events

trajectories through spacetime are called world lines

determinism is hardened with the concept of spacetime since time now becomes tied to space

just as all space is `out there', so is all time

Mass-Energy Equivalence:

if space and time are variable notions, then momentum must also be relative

in order to preserve conservation of energy, mass must be connected to momentum (i.e. energy)

mass increases as one nears the speed of light, which explains the limit to the speed of light for material objects

mass-energy equivalence is perhaps the most fundamental discovery of the 20th century

photons have momentum, i.e. pressure = solar sails
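
in equation form (standard results, not spelled out above): E = mc^2, and the relativistic mass is m = m_0 / sqrt(1 - v^2/c^2), which grows without bound as v approaches c; one kilogram of mass corresponds to about 9 x 10^16 joules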

Spacetime and Energy:

relativity unifies space, time, mass and energy

explanation provided by general relativity, where a complete theory of gravity is provided by using the geometry of spacetime

Equivalence Principle:

the equivalence principle equates the effects of acceleration and gravity

although a simple and common sense assumption, the equivalence principle has strange consequences

such as: photons will be affected by gravity, even though they have zero mass

General Relativity :

general relativity combines special relativity with the equivalence principle

general relativity first resolves the problem of the instantaneous transfer of gravity under Newton's theory

remembering that mass changes with motion, and that mass causes gravity, Einstein links mass, gravity and the geometry of spacetime

gravity as geometry of spacetime returns physics to classic levels of the ancient Greeks

however, spacetime is not Euclidean

matter tells spacetime how to curve, and spacetime tells matter how to move (orbits)

two classical tests of general relativity:

the first is the deflection of starlight by the Sun's gravity as measured by the 1919 solar eclipse expedition

the 2nd test was the prediction of time dilation in a gravitational field, first shown by atomic clocks

the effects of general relativity require sensitive instruments under the condition of weak fields, i.e. ordinary gravity and speeds well below the speed of light

strong fields are found in extreme situations such as near neutron stars or black holes

Black Holes:

as gravity increases the escape velocity increases

when escape velocity exceeds the speed of light a black hole forms

since photons have zero mass, a better definition of a black hole is given by curvature

a black hole is an object of infinite curvature, a hole in spacetime

the Schwarzschild radius defines the event horizon, the point of no return around the black hole
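
in standard form (a sketch, not written out above): setting the escape velocity v_esc = sqrt(2GM/R) equal to c gives the Schwarzschild radius R_s = 2GM/c^2; for one solar mass R_s is about 3 km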

a black hole is still visible by its distortion on local spacetime and the deflection of starlight

the structure of a black hole contains only an event horizon and a singularity

the size of a black hole is set by its mass

spacetime is severely distorted near the event horizon, and extreme effects are seen

Computability:

Nature = computational process, i.e. clockwork Universe

more like a program than a clock

formalism, a philosophy of mathematics, states that there is a computational procedure to prove the truth or falsity of any mathematical statement

instead of demonstrating that the Universe was computable, Turing's machine finds that there exist problems that are undecidable, i.e. uncomputable

i.e. there are flaws in the mathematical Universe

Paradoxes:

examples of these flaws are found in paradoxes, contradictions in logic

there are visual paradoxes as well as symbolic ones

modern science hinges on formal logical systems as means to understand the patterns of Nature

logic systems are composed of rules and symbols, abstract representations of structure that we use to describe the patterns of Nature

examples of limitations to formal systems are the domino problem and chess problems

the study of paradoxes is very useful to modern science since it quickly demonstrates where the problems in a formal system lie

Incompleteness:

formalism requires that statements be provable true or false in a finite number of steps

Godel demonstrates that there exist mathematical statements that are undecidable

his theorem is extended to any formal system, like science

for any system, there exist statements which must be taken on faith

Artificial Life:

artificial life studies investigate the problem of highly deterministic, yet undecidable systems

in computational worlds it is possible to demonstrate that randomness and uncertainty are built into a system

the Turing test and Penrose hypothesis indicate that human thought transcends rationalism

complex systems are defined as those systems which are very sensitive to initial conditions, giving them the appearance of random behavior

examples in Nature are many

Deterministic Chaos:

the key to the failure of determinism is the false assumption that determinism requires perfect predictability

perfect predictability would be similar to the linear correspondence of points on a line, when in fact most systems are nonlinear

in fact, in many systems the error in our knowledge grows faster than our ability to measure a parameter

this is called a chaotic system

outcomes appear random due to poor knowledge of initial conditions

if the error grows exponentially, then the system will remain unpredictable even when operating under deterministic rules

Irreversibility:

Epicurus' clinamen = deterministic rules but irreversible Universe

in fact, our laws of Nature are really abstractions of patterns that are only statistical in nature

irreversibility leads to an important, necessary requirement for a complex Universe: states near nonequilibrium

nonequilibrium allows for self-organization

complex structures in the Universe (e.g. lifeforms) are due to the behavior of ensembles

reductionism must fail for complex systems, like weather forecasting

Evolution:

core theory for biology is evolution

gene = atom, gene pool = macroscopic object

natural selection is how evolution operates

populations evolve, not individuals (which came first the chicken or the egg? evolution says the egg)

evolution = genes mutate, individuals are selected, populations evolve

Malthus shows that organisms produce more offspring than can survive

populations grow faster than food supply = struggle for existence

Darwin proposed mechanism of natural selection based on observations in South Pacific wildlife

uniformitarianism = evolution is a long term process

The five components of evolution are:


1. nonconstancy of species (individuals are unique)
2. all organisms descend from common ancestors
3. gradualness of evolution
4. multiplication of species (diversity)
5. natural selection

Selection:

increased reproductive capability = natural selection (not weeding)

survival is not the only factor; sexual selection and enhanced characteristics also contribute

selection can occur in many ways, such as stabilizing, disruptive, and directional selection

extremes are controlled by energy requirements

gradualism (slow divergence) vs. punctuated equilibrium (rapid changes)

Human Evolution:

unique human characteristics =


1. stereoscopic vision
2. high mobility (upright stance)
3. opposable thumbs

developed about 20 million years ago in East Africa

IQ was not a selected trait but a byproduct of increased brain capacity due to larger body size and other factors

This illustration compares the crania of a female gorilla, Australopithecus africanus, and Homo
sapiens. The dark area at the bottom of the skull is the foramen magnum, the hole through which
the spinal column passes. It has a forward position in australopithecine skulls, a strong indication

that they were bipedal. Note also that both the shape of the jaw and the teeth of australopithecines
are very similar to those of modern humans. Australopithecines do not have the rectangular-shaped
jaw or the large canine teeth of apes.

DNA tracing confirms Africa origin to human species

the hominid family tree is `bush-like' with numerous hominid species existing at the same time

Which came last?


1. culture
2. upright walking
3. language
4. reasoning
5. tool making

Thomson Atom:

Thomson uses CRT to discover electrons, a small piece of Dalton's atoms

determines that cathode rays are streams of electrons with negative charge using electric plates to deflect them

develops plum pudding model of atom, electrons embedded in positively charged gel

electric current = flow of electrons, metals have abundance of free electrons

Rutherford Atom:

Rutherford determines true nature of atom and the phenomenon of radioactivity (decay of unstable nuclei)

unstable nuclei emit alpha particles (He nuclei) and beta particles (electrons)

designs experiment to use alpha particles as atomic bullets; new technology provides new senses other than human ones

Rutherford beamed alpha particles through gold foil and detected them as flashes of light, or scintillations, on a fluorescent screen

the gold foil was only 0.00004 centimeter thick, meaning only a few thousand atoms thick

if the Thomson model of atoms was correct, then the alpha particles should pass through with relatively little deflection

The expectation is that they will strike the fluorescent screen directly behind the foil

what was observed was the following:

most alpha particles were observed to pass straight through the gold foil

a few were scattered at large angles

some even bounced back toward the source

only a positively charged and relatively heavy target particle, such as the proposed nucleus, could account for such large scattering angles

the results are best explained by a model for the atom with a tiny, dense, positively charged core called a nucleus

the Rutherford atomic model has been alternatively called the nuclear atom, or the planetary model of the atom

Planck's curve:

two laws of radiation:

1. Stefan-Boltzmann law: the amount of energy emitted from a body increases sharply with higher temperature
2. Wien's law: the peak of emission moves to bluer light as temperature increases
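
in standard form (the notes name the two laws above but not the formulas): Stefan-Boltzmann law: energy emitted per unit area = sigma T^4; Wien's law: lambda_peak = 0.0029 m K / T, so the Sun at about 5800 K peaks near 500 nm, in visible light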

Planck combined the laws with a special hypothesis (the quantum hypothesis) to produce Planck's law of radiation

Planck's solution, while fitting the data, produces a contradiction for atomic theory

specifically, photons should gain more energy from atoms than they lose, thus atoms should emit endless amounts of radiation, the ultraviolet (UV) catastrophe

Stellar Spectroscopy:

the Planck curve had an immediate impact on stellar astronomy; the brightness and color of stars are determined by their temperature

spectroscopy allows astronomers to determine the chemical make-up of stars

Fraunhofer matches solar spectral lines to materials in the lab

Lockyer discovers a new element in the Sun, helium

but why are spectral lines formed? photons should be emitted at any wavelength

Kirchhoff's Laws:

Kirchhoff showed that there are three types of spectra emitted by objects:

1. Continuous spectrum - a solid or liquid body radiates an uninterrupted, smooth spectrum (a Planck curve)

2. Absorption spectrum - a continuous spectrum that passes through a cool gas has specific spectral lines removed, seen as dark lines

3. Emission spectrum - a radiating gas produces a spectrum of discrete spectral lines

spectral lines appear discrete or quantized, leading to the next science revolution, quantum physics

Planck's constant:

an accelerating electron produces EM radiation (light), loses energy and spirals into the nucleus, i.e. atoms should collapse

Planck makes `quantum' assumption to resolve this problem

a quantum is a discrete, and smallest, unit of energy

all forms of energy are transferred in quanta, not continuously

electron transition from orbit to orbit must be in discrete quantum jumps

experiments show that there is no `inbetween' for quantum transitions = new kind of reality

despite strangeness, experiments confirm quantum predictions and resolve the UV catastrophe

Wave-Particle Dualism:

The wave-like nature of light explains most of its properties:


1. reflection/refraction
2. diffraction/interference
3. Doppler effect

however, a particle description is suggested by the photoelectric effect, the release of electrons by a metal surface struck by light

wavelike descriptions of light fail to explain the lack of the photoelectric effect for red light

the particle and wave properties of light together are called wave-particle dualism and continue the strange character of quantum physics

wave-particle dualism is extended to matter particles, i.e. electrons act as waves

Bohr Atom:

classical physics fails to describe the properties of atoms, Planck's constant served to bridge the gap

Bohr proposed a quantized shell model for the atom using the same basic structure as Rutherford, but with electrons restricted to discrete orbits

Bohr's calculations produce an accurate map of the hydrogen atom energy levels

changes in electron orbits require the release or gain of energy in the form of photons

Bohr's atom perfectly explains the spectra in stars as gaps due to the absorption of photons of particular energies

larger formulations explain all the properties outlined by Kirchhoff's laws

Heisenberg and Schroedinger formalize Bohr's model and produce quantum mechanics

quantum mechanics is an all encompassing science that crosses over into many fields

de Broglie Matter Waves:

early quantum physics did not ask the question of `why' quantum effects are found in the microscopic world

One way of thinking of a matter wave (or a photon) is to think of a wave packet.
A normal wave extends indefinitely, having no beginning and no end. A composition
of several waves of different wavelength can produce a localized wave packet.

the wave packet interpretation requires the particle to have no set position

the momentum of a particle is inversely proportional to the wavelength of the particle
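
in equation form (de Broglie's relation, a standard result): lambda = h / p, with h = 6.6 x 10^-34 J s, so an electron moving at 10^6 m/s has a wavelength of about 7 x 10^-10 m, the size of an atom, which is why wave effects dominate at atomic scales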

Lastly, the wave nature of the electron makes for an elegant explanation of quantized orbits around the nucleus

only certain wavelengths will fit into an orbit, so quantization is due to the wavelike nature of particles

the wavelike nature also means that a particle's existence is spread out, a probability field

the idea of atoms being solid billiard ball type objects fails with quantum physics

quantum effects fade on larger scales since macroscopic objects have high momentum values and thus extremely small wavelengths

Probability Fields:

the wave interpretation requires a statistical or probability mathematical description of the position of a particle

where the wave represents the probability of finding the particle at a particular point

for higher orbits the probability field becomes distorted

meaning of existence has an elusive nature in the quantum world

Young Two-Slit Experiment:

the two slit experiment is key to understanding the microscopic world

waves can interfere, for light this will make a series of light and dark bands

matter particles, such as electrons, also produce interference patterns due to their wave-like nature

so with a high flux of either photons or electrons, the characteristic interference pattern is visible

if we lower the intensity of light, or the flux of electrons (the electric current), we should be able to see individual particles arrive at the screen one at a time

each photon makes a dot on the screen, but where is the interference pattern?

the interference pattern is still there, it simply takes some time for enough photons, or electrons, to accumulate on the screen

interference, or a wave phenomenon, is still occurring even if we only let the photons, or electrons, through one at a time

so what are the individual particles interfering with? apparently, themselves

in order for a particle to interfere with itself, it must pass through both slits

this forces us to give up the common sense notion of location

Role of the Observer:

since the quantum world can not be observed directly, we are forced to use instruments as extensions of our senses

however, quantum entities are so small that even contact with one photon changes their position and momentum

1st hint that the observer is an important piece of any quantum experiment; we can not isolate the observer from the experiment

the two slit experiment is a good test of the role of the observer in the quantum realm

any experimental design that attempts to determine which slit a photon has passed through (a test for particle nature) destroys the interference pattern (the wave nature)

this is a breakdown of objective reality

each quantum entity has dual potential properties, which become an actual characteristic if and when it is observed

Quantum Wave Function:

a wave packet interpretation for particles means there is an intrinsic fuzziness assigned to them

the wave function is the mathematical tool to describe quantum entities

the wave function expresses likelihood *until* a measurement is made

Superposition:

quantum physics is a science of possibilities rather than the exactness of Newtonian physics

quantum objects and quantities become actual when observed

key proof of quantum superpositions is the phenomenon of quantum tunneling

the position of the electron, the wave function, is truly spread out, not uncertain

observation causes the wave function to collapse to an actual value

quantum existence is tied to the environment, opposite to the independence of macroscopic objects

Uncertainty Principle:

the uncertainty principle states that the position and velocity cannot both be measured, exactly, at the same time

the uncertainty principle derives from the measurement problem, the intimate connection between the observer and the observed

the change in velocity of a particle becomes more ill defined as the wave function is confined to a smaller region of space

the wave nature to particles means a particle is a wave packet, the composite of many waves

many waves = many momentums, observation makes one momentum out of many

exact knowledge of complementary pairs (position and momentum, energy and time) is impossible

complementarity also means that different experiments yield different results (e.g. the two slit experiment)

therefore, a single reality can not be applied at the quantum level

the mathematical form of the uncertainty principle relates complementary pairs to Planck's constant
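
in its usual form (a sketch, not written out in the notes): delta-x times delta-p >= h / 4 pi, so the more sharply the position is pinned down, the more uncertain the momentum becomes (and similarly for the energy/time pair)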

knowledge is not unlimited; built-in indeterminacy exists, but only in the microscopic world, which all collapses to the familiar determinism of the macroscopic world

It is often stated that of all the theories proposed in this
century, the silliest is quantum theory. Some say the only
thing that quantum theory has going for it, in fact, is that it
is unquestionably correct.
- R. Feynman

Quantum Mechanics:

quantum mechanics is to the microscopic world what classical mechanics and calculus are to the macroscopic world

it is the operational process of calculating quantum physics phenomena

its primary task is to bring order and prediction to the uncertainty of the quantum world; its main tool is the wave function

the key difference between quantum and classical mechanics is the role of probability and chance

quantum objects are described by probability fields, however, this does not mean they are indeterminate

Schrodinger's Cat and Quantum Reality:

an example of the weirdness of the quantum world is given by the famous Schrodinger cat paradox

the paradox is phrased such that a quantum event determines if a cat is killed or not

from a quantum perspective, the whole system state is tied to the wave function of the quantum event

the paradox in some sense is not a paradox, but instead points out the tension between the microscopic and macroscopic descriptions in this scenario

quantum objects exist in superposition, many states, as shown by interference

the observer collapses the wave function

Macroscopic/Microscopic World Interface:

events in the microscopic world can happen *without* cause = indeterminacy

phenomena such as tunneling show that quantum physics leaks into the macroscopic world

decoherence prevents a macroscopic Schrodinger cat paradox

new technology allows the manipulation of objects at the quantum level

future research will investigate areas such as quantum teleportation and quantum computing

Fission/Fusion:

since quantum events do not have a "cause", this also means that all possible quantum events must occur

without cause and effect, conservation laws can be violated, although only on very short timescales

violation of mass/energy conservation allowed for the understanding of the source of nuclear power in the Universe

fission is the splitting of atomic nuclei, either spontaneously or by collision (induced)

fusion is the merger of atomic particles to form new particles

quantum tunneling and uncertainty are required for these processes

and quantum physics, even though centered on probabilities, is our most accurate science in its predictions

Antimatter:

symmetry in quantum physics led to the prediction of opposite matter, or antimatter

matter and antimatter can combine to form pure energy, and the opposite is true: energy can combine to form matter/antimatter pairs

spacetime diagrams provide a backwards-in-time interpretation for antimatter, a symmetry in space and time

the quantum world leads to new ways of looking at existence and reality

Copenhagen Interpretation:

wave-particle duality is a manifestation of quantum entities

The Copenhagen Interpretation has three primary parts:

o The wave function is a complete description of a wave/particle. Any information that cannot be derived from the wave function does not exist.

o When a measurement of the wave/particle is made, its wave function collapses. In the case of position, the particle is found at one definite place.

o If two properties are related by an uncertainty relation, no measurement can simultaneously determine both to arbitrary precision.

probabilities in the macroscopic world reflect a lack of knowledge

the quantum world is pure probability

Hidden Variables Hypothesis:

macroscopic physics states that all variables are there, just hard to measure

Copenhagen Interpretation states that variables are not there, randomness is fundamental

indeterminacy was unpopular (not platonic)

Bell hypothesis is that quantum variables exist, but are hidden, special forces required

hidden variables are not testable, poor science

Many-Worlds Hypothesis :

collapse of the wave function still presents a problem for deterministic physics

solution is to not collapse the wave function, rather split reality

the many worlds hypothesis allows for the existence of all quantum states; observation splits the world

macroscopic systems exhibit irreversible behavior (entropy) that prevents the reconnection of past worlds

many worlds does not allow communication between the worlds, but their existence can be tested by reversible mind experiments (nano-AI's)

Emergence:

the origin of new, original concepts from nothing is a property of emergence

classical physics has few emergent properties, but quantum physics is dominated by emergent structure

the wave function contains a whole that is greater than the sum of the parts

Holism:

holism is a philosophy that the whole is primary and often greater than the sum of the parts

a holist is concerned with relationships not the pieces

quantum physics is difficult to reconcile with reductionism; it requires a holistic view of Nature

the particle or wave aspect of a quantum entity requires a dialogue with the environment

numerous experiments have shown that quantum interactions produce results that are not predictable from the individual parts

the rules of the quantum world follow logic, but a logic of both/and rather than the logic of either/or

Neutrinos :

the peak of strangeness for quantum objects is found in the neutrino

proposed in 1930, the neutrino is a fundamental particle with very small mass and no electric charge

interacting only through the weak force, the neutrino is extremely elusive

difficult to observe and requiring advanced technology, studies of the neutrino have revealed important new physics

Elementary Particles :

particle physics is the search for the fundamental building blocks of Nature, a reductionist goal

elementary particles should be structureless, resulting in simple interactions

more advanced technology led to the discovery of hundreds of new particles, forcing the search for a more fundamental scheme

Generations of Matter:

the two most fundamental types of particles are quarks and leptons

the quarks and leptons are divided into 6 flavors corresponding to three generations of matter

quarks (and antiquarks) have electric charges in units of 1/3 or 2/3

leptons are a separate class since they do not interact with quarks by the strong force

leptons have charges in units of 1 or 0

the up and down quark, electron and neutrino (leptons) work together to form normal, everyday matter

note that for every quark or lepton there is a corresponding antiparticle. For example, there is an up quark and an anti-up quark

Fundamental Forces :

Matter is affected by forces or interactions (the terms are interchangeable)

there are four fundamental forces in the Universe:


o gravitation (between particles with mass)
o electromagnetic (between particles with charge/magnetism)
o strong nuclear force (between quarks)
o weak nuclear force (that changes quark types)

Bosons (Force Carriers):

certain particles play an important role in the transfer of force, the bosons or force carriers

the use of virtual particles to carry force resolves the action at a distance problem

Baryons and Mesons:

the large number of new particles discovered in the 1950's is resolved by quark model

quarks are fundamental building blocks of baryons and mesons, coming together as triplets (baryons) or pairs (mesons)

quarks have fractional charge and bind through the exchange of gluons of the strong force

the many particles of atomic nuclei become a simple combination of quarks

unlike electric charge, quarks bind by exchanging color charge of three colors, blue, red and green

gluons carry color to convert quarks

due to their fractional charge nature, quarks cannot exist in isolation

the strong force binds quarks like a rubber band force

if energy is used to split a quark pair, new quarks are produced; this is how matter was produced in the early Universe

Quantum Electrodynamics :

the combination of light and charged particles is understood through quantum electrodynamics (QED)

central to QED is the idea that virtual photons carry electromagnetic force

however, virtual means they cannot be seen or detected because their existence violates conservation of energy, allowed only for very short times by the uncertainty principle

QED led to the unification of the electromagnetic and weak forces, implying that all forces are one force at sufficiently high energies (i.e. in the early Universe, shortly after it formed)

Quantum Chromodynamics:

similar to QED, QCD describes the forces that bind quarks

instead of virtual photons, the strong force is transferred by gluons

each gluon can carry one of three color charges, red, blue or green, so that particles built of quarks must be color neutral

note that the strong force overcomes the electromagnetic or gravitational forces only over very short distances, the scale of a nucleus

Quantum Gravity:

Understanding of the fundamental forces of Nature will require a unification of quantum physics and general relativity

the development of a quantum theory of spacetime, or quantum gravity, will begin with the discovery of how quantum physics and gravity connect

exploring quantum gravity will require technology that is well beyond our current means

theoretical work in quantum gravity asks questions about quantum sized black holes and a fuzzy event horizon

while there is no current working quantum gravity theory, the path to a TOE is through quantum gravity

partial predictions from quantum gravity ideas indicate hope for a new direction in physics

Gravitational Radiation:

ripples in spacetime are gravity waves

to be generated, gravity waves require rapid motion of high density matter, like a supernova

we hope to detect gravity waves with new technology in this century

Hawking Radiation:

another example of quantum gravity phenomenon is Hawking radiation

Hawking radiation explores the behavior of particle production near the event horizon of quantum sized black holes

since pair production is symmetric, matter and anti-matter are formed together

energy from the black hole gravitational field is converted into matter

Hawking radiation explains why there are no quantum sized black holes filling the Universe: they have all evaporated
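
a rough sketch of the standard result (not derived in these notes): the Hawking temperature is T = hbar c^3 / (8 pi G M k), inversely proportional to the black hole's mass, so a solar mass black hole radiates at only about 10^-7 K while microscopic black holes are hot enough to evaporate quickly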

Theory of Everything :

the Standard Model is our current theory of the matter/energy worldview, and has had great success

missing is a full formulation of gravity and particle physics, quantum gravity

limitations to the Standard Model suggest a more encompassing theory awaits formulation

Supergravity:

a theory that brings gravity, relativity and quantum physics together is called a Theory of Everything (TOE)

one recent attempt is called supergravity, which explains the microscopic world as extra dimensions

using pure geometry is a popular feature of TOE's since they become fundamental in their mathematical formulation

String Theory:

another example of a TOE is string theory, the explanation of quantum entities as tiny loops or membranes

the various subatomic particles are explained as different vibration modes of the tiny strings

the rules for string interactions look a lot like spacetime and relativity

Brane World Scenario:

A combination of string theory and supergravity leads to an eleven dimensional description of the Universe.

Each brane universe is composed of 4D spacetime and 6D quantum space.

Strong, weak and electromagnetic forces are carried by open strings, each attached to their branes

Gravity is carried by closed strings, free to travel between branes

SpaceTime:

while space and time appear separate, relativity shows that space and time are linked and malleable

space as a void is rejected on logical grounds, and must be filled either by an ether (Aristotle) or some other medium

Newton proposes Absolute Space, a continuum in its own right marked and measured with a coordinate system

Newtonian Time:

time becomes less simple than Newtonian space since it lacks markers and is not observed directly by our senses

additional tools, such as a spacetime diagram marked by events, extend our senses

time as a 4th dimension does not imply it has similar properties to the other three

causality requires a special connection

The Present or `Now':

relativity's view of time as one part of a 4D continuum requires the notion of time's passage to be an illusion

our inability to perceive all of spacetime is signified by the boundary of time marked by the 'now'

memory allows us access to the past, but knowledge of the future is very limited

the physical world is unchanging; the notion of becoming is an experience that is an illusion to humans

relativity destroyed the concept of 'now', replacing it with an observer dependent spacetime

temporal flux is replaced by motion through spacetime

Time Travel:

time travel, while a popular story line, violates most conservation laws

in addition, time travel violates most of the logic and consistency in language and math

Grandfather Paradox:

the grandfather paradox deals with the impossibility of going back in time and killing your grandfather

an analysis of the grandfather paradox starts with a simple mechanical example of the paradox using billiard balls and a wormhole

collisions reproduce the paradox as the entering ball is forbidden to interfere with itself

the error in the paradox is the missing ball from the wormhole at the start, i.e. spacetime is a continuum

attempts to modify the problem only lead to the same solutions

spacetime appears to have a built-in chronology protection effect much like an event horizon

Multiverse:

parallel universes provide an avenue for some sort-of time travel, as well as quantum phenomena

loops of time-like structure that form temporary universes, also called alternative histories

Consistent Histories:

consistent histories designs Nature to prevent time paradoxes

requires loss of free will and an active Universe, i.e. not rational

Cosmology:

cosmology is the study of the Universe as a whole, its components and evolution

while pursued since the beginning of science, it is only in the last few decades that we have obtained the tools to study the Universe as a whole

early cosmology was based on local phenomenon, unexplained = supernatural

later cosmology was based on creation myths, stories that had their own internal logical sense

the last stage links science (observation/experimentation) to cosmological ideas

Early Cosmology:

the Greek cosmology is the first scientific model of the Universe; while clearly incomplete, it is a logical framework

note that their model also makes predictions for a fifth element, quintessence, as does any modern theory

this is a kinematic model, it explains motion as well as composition

medieval cosmology focused on religious interpretation

Dante's cosmology dominates Western thinking for centuries

the Renaissance brings a parallel breakthrough in art and science, new cosmologies

the principle of sufficient reason guides modern cosmology

we take on faith that the origin of the Universe is within the scope of current (or future) science

Creation Event:

is there an origin to the Universe? most past philosophies have assumed no creation event

Hindu, Babylonian, Egyptian and Mayan cosmologies all assumed cyclicity

Judeo-Christian cosmology presupposes a creation and a separate Creator

Deism dominates Newtonian thinking until the 20th century and is required to explain Creation fro

Olber's Paradox:

the oldest and most basic cosmological observation is the darkness of the night sky, known as Olber's paradox

Olber's paradox indicates that the Universe has a finite age, and implies a Creation

Static Universe:

an infinite, Newtonian Universe is unstable to gravitational collapse

the static Universe falls with the discovery of Hubble's law, all galaxies have a positive redshift

an expanding Universe must have a Creation or Alpha point, and is a dynamic reality

the search for Hubble's constant consumed most of our technology in the late 20th century
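
in equation form (standard, not written out above): Hubble's law is v = Ho x d, where v is a galaxy's recession velocity, d its distance and Ho Hubble's constant (roughly 70 km/s per megaparsec), so a galaxy 100 Mpc away recedes at about 7,000 km/s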

Expanding Universe:

expansion is not from a `center', but all of spacetime expands

the Universe must be described with a geometry that includes a description of the curvature of the Universe

galaxy redshift is the effect of photons being stretched, not motion

Geometry of the Universe :

general relativity allows for spacetime to be curved, thus the whole Universe may have a non-flat geometry

three possible shapes are allowed, flat, positive or negative curvature

different tests are available to determine the curvature of the Universe, such as measuring the angles of very large triangles

note that the curvature or geometry of the Universe does not determine how it is connected, which is its topology

a finite Universe, if wrapped, would appear infinite like a box of mirrors

topologies need not be simple, for example a Moebius strip

or a Klein bottle

deep space observations indicate that the Universe is simply connected

however, a large Universe may be connected in complex ways that are not visible to our limited observations

even simple topologies lead to complex connections

and all this is connected in 4D spacetime, not simply in 3D space

the key to understanding the shape of the Universe is its history and dynamics

Measuring Curvature:

determining the global curvature of the Universe, called k, should in principle be easy

a positive (k=+1), flat (k=0) and negative (k=-1) Universe make specific predictions for the number counts of distant galaxies

in practice, the property of lookback time makes curvature measurement a very difficult problem

knowledge of some standard yardstick is required, and the distances observed make the timescales involved enormous

Density of the Universe:

the future of the Universe is determined by whether we are in an open or closed scenario

the amount of mass is an important parameter to an open/closed Universe, but not the only parameter

the radius-time diagram displays, in graphical fashion, the future of the Universe

gravity, i.e. matter, determines the rate of expansion

modern cosmology is a search for two numbers, Ho and qo
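
in symbols (standard definitions, not spelled out in the notes): Ho = (dR/dt)/R measured today, the current expansion rate, and qo = -R (d^2R/dt^2) / (dR/dt)^2, the deceleration parameter, which measures how quickly the expansion is slowing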

Cosmological Models:
In modern cosmology, the different classes of Universes (open, flat or
closed) are known as Friedmann universes and described by a simple
equation:
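
In standard textbook form, the Friedmann equation reads:

H^2 = ( (dR/dt) / R )^2 = (8 pi G / 3) rho - k c^2 / R^2 + Lambda c^2 / 3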

In this equation, `R' represents the scale factor of the Universe (think of it
as the radius of the Universe in 4D spacetime), and H is Hubble's constant,
how fast the Universe is expanding. Everything in this equation is a
constant, i.e. to be determined from observations. These observables can
be broken down into three parts: gravity (matter density), curvature and
pressure or negative energy given by the cosmological constant.
Historically, we assumed that gravity was the only important force in the
Universe, and that the cosmological constant was zero. Thus, if we
measure the density of matter, then we could extract the curvature of the
Universe (and its future history) as a solution to the equation. New data
has indicated that a negative pressure, or dark energy, does exist and we
no longer assume that the cosmological constant is zero.
Each of these parameters can close the Universe in terms of turn-around
and collapse. Instead of thinking about the various constants in real
numbers, we prefer to consider the ratio of the parameter to the value that
matches the critical value between open and closed Universes. For
example, if the density of matter exceeds the critical value, the Universe is
closed. We refer to these ratios as Omega (subscript M for matter, k for
curvature, Lambda for the cosmological constant). For various reasons due
to the physics of the Big Bang, the sum of the various Omega must equal

one. And for reasons we will see in a later lecture, the curvature Omega is
expected to be zero, allowing the rest to be shared between matter and the
cosmological constant.
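
For reference, the standard definitions (not written out above) are: the critical density is rho_crit = 3 Ho^2 / (8 pi G), about 9 x 10^-27 kg/m^3 for Ho near 70 km/s/Mpc (roughly five hydrogen atoms per cubic meter); Omega_M = rho_matter / rho_crit; and the sum rule is Omega_M + Omega_Lambda + Omega_k = 1.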

The search for the value of matter density is a much more difficult
undertaking. The luminous mass of the Universe is tied up in stars. Stars
are what we see when we look at a galaxy and it is fairly easy to estimate the
amount of mass tied up in stars, gas, planets and assorted rocks. This
gives an estimate of what is called the baryonic mass of the Universe,
i.e. all the stuff made of baryons = protons and neutrons. When these
numbers are calculated it is found that Omega for baryons is only 0.02, a very
open Universe. However, when we examine the motion of objects in the
Universe, we quickly realize that most of the mass of the Universe is not
seen, i.e. dark matter, which makes this estimate of Omega much too low.
So we must account for this dark matter in our estimate.

Cluster Masses:

measurements in the early 60's of the motion of galaxies in clusters discovered a discrepancy

there was too much gravity for the number of galaxies counted = dark matter problem

further measurements showed that dark matter is the dominant form of matter in the Universe, making up far more mass than the luminous matter around us

Dark Matter:

while we have not identified dark matter, we can study its properties

we characterize the influence of dark matter by studying the ratio of mass to light (M/L) over various scales

dark matter forms the halos around galaxies and the intracluster space between galaxies

it is increasingly important on large scales; the early hope was that dark matter would be sufficient to close the Universe

Baryonic Dark Matter:

the key problem for the 21st century is to determine the nature of dark matter

searches for dark matter have divided into two paths, one looking for baryonic dark matter candidates, the other for non-baryonic candidates

while stellar remnants and low mass objects certainly exist, they do not appear to exist in the numbers needed to be viable dark matter candidates

Non-Baryonic Dark Matter:

dark matter is so unusual that it seems plausible that it is not composed of normal matter

known particles and new particles are considered

some solutions do not use new particles but instead consider exotic early Universe effects

Dark Energy:

recent work with distant supernovae demonstrates that the Universe is not slowing down due to gravity, but is accelerating

the SN observations require a cosmological constant, one like that which dominated the very early Universe and which dominates the expansion again today

SN, cluster and CMB observations produce a narrow range of values for Omega M, Lambda and k

the Benchmark model has values of 0.7, 0.3 and 0 for cosmological constant, matter and curvature

thus, we live in a flat Universe that will expand forever (open in the dynamical sense)

Birth of the Universe :

without a theory of quantum gravity we are unable to describe the earliest moments of the Universe

our physics begins after the Planck time

we are very far from understanding `why' the Universe started

Unification:

a key process in the early Universe is the unification of the fundamental forces

when forces break their symmetry, interesting things happen

the development of the macroscopic world begins with the first symmetry breaking era, between qu

later breaks set down the behavior of matter

Cosmic Singularity :

extrapolation of the Universe to t=0 implies a singularity

the properties of the Universe come from the `nothing' that is the quantum vacuum

the activity of the quantum vacuum allows for the creation of virtual pairs, a source of properties

the quantum vacuum acts as the blackboard for the Universe, a vast ocean of potential from which the properties of the Universe are extracted

Planck Era :

our current physics begins at the Planck era, when the Universe was atomic sized

time and space overlap during the Planck era

at the end of the era 4D spacetime will unfold and 6D quantum space will compactify

Spacetime Foam :

even though the 10D Universe has simplified into 4D spacetime, the pressures and energies of this era churn spacetime into a foam

quantum-sized black holes and wormholes are the first entities in the Universe

these numerous primordial quantum black holes have all dissolved now due to Hawking radiation

this era ends with the arrival of GUT matter

Symmetry Breaking:

symmetry is the key to understand the changes in the early Universe

when there are no particles or unique forces or photons, symmetry relations determine the character of the Universe

symmetry provides order to the early chaos, but also provides key moments when the Universe undergoes dramatic change

symmetry breaking leads to phase changes, particular moments when the Universe adopts new properties

at each symmetry break chaos increases, entropy marches forward

Inflation:

there are two major problems for the Big Bang model
o the flatness problem
o the horizon problem

values of Omega near 1 are unstable and require a mechanism to drive Omega to exactly 1

opposite sides of the Universe should not be connected; they are outside each other's horizon

an era of inflation solves both these problems, the rapid expansion of the Universe during the GUT era

inflation occurs at faster than the speed of light, but there is no motion through space since it is the space underneath the matter that expands

the end result is the formation of many bubble universes inside a large Multiverse

both horizon and flatness problems are resolved by inflation

inflation forces curvature to zero and requires a cosmological constant for a low matter density Universe

GUT matter :

after inflation, matter first appears in the form of extremely massive GUT matter

while matter is created, its lifetime is very short due to the high temperatures in the early Universe

as the Universe cools, less massive particles can be produced

10^-6 seconds after the Big Bang, ordinary matter comes into existence

Quarks and Leptons :

the characteristic of the strong force and the rule of no free quarks produces a runaway matter production process

quark and anti-quark pairs (mesons) are produced in large numbers which will later merge to form baryons (protons and neutrons)

Baryogenesis :

in the very early Universe, radiation dominates over matter

as the Universe expands and cools, matter begins to dominate over radiation

even during these early times, entropy controls the progression of the structure of the Universe

baryon production ends the matter production phase; however, matter and anti-matter should be made in equal amounts

note that anti-matter is rare today, so where did it all go?

Matter versus Anti-Matter :

pair production must proceed in a symmetric fashion

symmetry means that at the end of the matter production era, the sea of matter and anti-matter particles should annihilate each other completely

since there is clearly matter in the Universe, this implies some mechanism to produce more matter than anti-matter

how big is this flaw in the symmetry? it is the ratio of cosmic background photons to matter, about one part in 10 billion

an asymmetry must occur in the baryon number due to the dynamic nature of the expanding Universe

i.e. it is not in equilibrium

CP Violation:

the asymmetry between matter and anti-matter must occur in the quantum world, which is then magnified into the macroscopic world

certain decay processes have internal asymmetries which are restored in a global context

CP violation is one such asymmetry which is only temporary in today's world, but during the rapid expansion of the early Universe it left behind a small excess of matter

Nucleosynthesis :

once normal matter forms, the temperatures are still high enough for fusion

the fusion of protons and neutrons into heavier elements is called nucleosynthesis

to build higher mass nuclei requires time, but the Universe is still expanding and cooling

the abundance of various light elements will be dependent on the number of protons available, i.e. the baryon density

current estimates place Omega for baryons at about 0.02

stellar thermonuclear fusion produces elements 4 to 26; heavier elements require supernova explosions

neutron capture during a SN produces the rest of the periodic table

neutron capture is rate sensitive; faster capture allows heavy elements to form before they decay

Cosmic Background Radiation :


One of the foremost cosmological discoveries was the detection of the
cosmic background radiation. The discovery of an expanding Universe by
Hubble was critical to our understanding of the origin of the Universe,

known as the Big Bang. However, a dynamic Universe can also be
explained by the steady state theory.
The steady state theory avoids the idea of Creation by assuming that the
Universe has been expanding forever. Since this would mean that the
density of the Universe would get smaller and smaller with each passing
year (and surveys of galaxies out to distant volumes shows this is not the
case), the steady-state theory requires that new matter be produced to
keep the density constant.

The creation of new matter would violate the conservation of matter
principle, but the amount needed would only be one atom per cubic meter
per 100 years to match the expansion rate given by Hubble's constant.
The discovery of the cosmic microwave background (CMB) confirmed the
explosive nature to the origin of our Universe. For every matter particle in
the Universe there are 10 billion more photons. This is the baryon number
that reflects the asymmetry between matter and anti-matter in the early
Universe. Looking around the Universe its obvious that there is a great
deal of matter. By the same token, there are even many, many more
photons from the initial annihilation of matter and anti-matter.
Most of the photons that you see with your naked eye at night come from
the centers of stars. Photons created by nuclear fusion at the cores of stars
then scatter their way out from a star's center to its surface, to shine in the
night sky. But these photons only make up a very small fraction of the
total number of photons in the Universe. Most photons in the Universe are
cosmic background radiation, invisible to the eye.
Cosmic background photons have their origin at the matter/anti-matter
annihilation era and, thus, were formed as gamma-rays. But, since then,
they have found themselves scattering off particles during the radiation
era. At recombination, these cosmic background photons escaped from the
interaction with matter to travel freely through the Universe.
As the Universe continued to expanded over the last 15 billion years, these
cosmic background photons also `expanded', meaning their wavelengths
increased. The original gamma-ray energies of cosmic background photons
has since cooled to microwave wavelengths. Thus, this microwave radiation
that we see today is an `echo' of the Big Bang.

The discovery of the cosmic microwave background (CMB) in the early
1960's was powerful confirmation of the Big Bang theory. Since the time of
recombination, cosmic background photons have been free to travel
uninhibited by interactions with matter. Thus, we expect their distribution
of energy to be a perfect blackbody curve. A blackbody is the curve
expected from a thermal distribution of photons, in this case from the
thermalization era before recombination.

Today, based on space-based observations because the microwave region of
the spectrum is blocked by the Earth's atmosphere, we have an accurate
map of the CMB's energy curve. The peak of the curve represents the
mean temperature of the CMB, 2.7 degrees above absolute zero, the
temperature the Universe has dropped to 15 billion years after the Big
Bang.
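
As a quick check with Wien's law (standard formula, not in the original text), lambda_peak = 0.0029 m K / T gives a peak wavelength of about 1 millimeter for T = 2.7 K, squarely in the microwave band, which is why the relic radiation is observed today as microwaves.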

Where are the CMB photons at the moment? The answer is `all around
you'. CMB photons fill the Universe, and this lecture hall, but their
energies are so weak after 15 billion years that they are difficult to detect
without very sensitive microwave antennas.

Ionization:
The last stage in matter production is when the Universe cools sufficiently
for electrons to combine with the proton/neutron nuclei and form atoms.
Constant impacts by photons knock electrons off of atoms which is called
ionization. Lower temperatures mean photons with less energy and fewer
collisions. Thus, atoms become stable at about 300,000 years after the Big
Bang.

These atoms are now free to bond together to form simple compounds,
molecules, etc. And these are the building blocks for galaxies and stars.

Radiation/Matter Dominance :
Even after the annihilation of anti-matter and the formation of protons,
neutrons and electrons, the Universe is still a violent and extremely active

environment. The photons created by the matter/anti-matter annihilation
epoch exist in vast numbers and have energies at the x-ray level.
Radiation, in the form of photons, and matter, in the form of protons,
neutrons and electrons, can interact by the process of scattering. Photons
bounce off of elementary particles, much like billiard balls. The energy of
the photons is transferred to the matter particles. The distance a photon
can travel before hitting a matter particle is called the mean free path.

Since matter and photons were in constant contact, their temperatures
were the same, a process called thermalization. Note also that the matter
can not clump together by gravity. The impacts by photons keep the matter
particles apart and smoothly distributed.

The density and the temperature of the Universe continue to drop as it
expands. At some point, about 300,000 years after the Big Bang, the
temperature has dropped to the point where ionization no longer takes
place. Neutral atoms can form, atomic nuclei surrounded by electron clouds.
The number of free particles drops by a large fraction (all the protons,
neutrons and electrons form atoms). And suddenly the photons are free to
travel without collisions, this is called decoupling.

The Universe becomes transparent at this point. Before this epoch, a
photon couldn't travel more than a few inches before a collision. So an
observer's line-of-sight was only a few inches and the Universe was opaque,
matter and radiation were coupled. This is the transition from the
radiation era to the matter era.

CMB Fluctuations :
The CMB is highly isotropic, uniform to better than 1 part in 100,000. Any
deviations from uniformity are measuring the fluctuations that grew by
gravitational instability into galaxies and clusters of galaxies.
Images of the CMB are a full sky image, meaning that it looks like a map
of the Earth unfolded from a globe. In this case, the globe is the celestial
sphere and we are looking at a flat map of the sphere.
Maps of the CMB have to go through three stages of analysis to reveal the
fluctuations associated with the early Universe. The raw image of the sky
looks like the following, where red is hotter and blue is cooler:

The above image has a typical dipole appearance because our Galaxy is
moving in a particular direction. The result is one side of the sky will
appear redshifted and the other side of the sky will appear blueshifted. In
this case, redshifting means the photons are longer in wavelength = cooler
(so backwards from their name, they look blue in the above diagram).
Removing the Galaxy's motion produces the following map:

This map is dominated by the far-infrared emission from gas in our own
Galaxy. This gas is predominately in the plane of our Galaxy's disk, thus
the dark red strip around the equator. The gas emission can be removed,
with some assumptions about the distribution of matter in our Galaxy, to
reveal the following map:

This CMB image is a picture of the last scattering epoch, i.e. it is an image
of the moment when matter and photons decoupled, literally an image of
the recombination wall. This is the last barrier to our observations about
the early Universe, where the early epochs behind this barrier are not
visible to us.

The clumpiness of the CMB image is due to fluctuations in temperature of
the CMB photons. Changes in temperature are due to changes in density
of the gas at the moment of recombination (higher densities equal higher
temperatures). Since these photons are coming to us from the last
scattering epoch, they represent fluctuations in density at that time.
The origin of these fluctuations is primordial quantum fluctuations from
the very earliest moments of the Universe, echoed in the CMB at recombination.
Currently, we believe that these quantum fluctuations grew to greater
than galaxy-size during the inflation epoch, and are the source of structure
in the Universe.

CMB:
When we look out in the sky, we're actually looking backwards in time.
Light from more distant objects takes longer to reach us and thus we are
observing now how they appeared in the past. We can see back a few
billion years with the light of galaxies. The microwave light of the
background shines from long ago in an infant universe 300,000 years old
(the epoch of "last scattering") and illuminates the particle soup that

existed before this time. This soup has a very smooth consistency and is
composed of fundamental particles like electrons, protons, helium nuclei,
neutrinos.

The obvious questions are: how did the universe go from a smooth particle soup to a complex system of galaxies and large scale structure? And can we use the fact that we're seeing the surface of this soup in the microwave background to help us answer that question?
If there are small wrinkles, hills and valleys, early on in the universe, matter will tend to fall into the valleys, eventually producing dense regions that become the sites of galaxies.

We represent these wrinkles by a sort of "top view" where the color coding
refers to the density of matter (dark regions have more matter, light
regions less).

Needless to say, this is a bit of an idealization for illustrative purposes.


Cosmologists actually run computer simulations to track how matter collects into valleys. For example, a simulation running forward in time shows how particles collect and enhance initially small wrinkles.
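
The qualitative behavior of those simulations can be captured, at the linear level, by the standard result that a small density contrast grows in proportion to the scale factor in a matter-dominated universe. A minimal Python sketch of that scaling (the initial contrast of 10^-5 is an assumed, CMB-level value):

# Linear growth of a small density wrinkle in a matter-dominated universe:
# delta(a) ~ delta_initial * (a / a_initial), valid only while delta << 1.
a_rec = 1.0 / 1100.0     # scale factor at recombination (z ~ 1100)
delta_initial = 1e-5     # assumed CMB-level density contrast at recombination

for a in [a_rec, 0.01, 0.1, 1.0]:
    delta = delta_initial * (a / a_rec)
    print(f"scale factor a = {a:.4f}   density contrast ~ {delta:.1e}")

The fact that this naive baryon-only scaling barely reaches a contrast of a percent by today is one of the classic arguments for dark matter, whose fluctuations begin growing earlier.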

One question that remains unanswered is the origin of such large scale wrinkles in the first place. Inflation theory proposes that a period of rapid expansion took very small scale fluctuations, at the level of the particle soup, and stretched them to cosmic proportions.

Here the blue bands are snapshots of the wrinkles in the density of the universe at various times. As time goes on, matter falls into these wrinkles and starts to build heavier and heavier objects. The crucial period when this process of gravitational attraction and infall can occur is related to an important concept in cosmology called the horizon. Like the horizon on the Earth, it is the point beyond which we're unable to look. Unlike the Earth's horizon, this distance is increasing with time because light from more distant regions has had more time to reach us. Heuristically, if there is a large clump in the universe, matter only 'knows' to fall toward it once it comes inside the horizon.

A useful property of the microwave background is that when we look out across widely separated angles, we're looking at wrinkles on such large scales that this process of infall hasn't yet begun. We're looking at the primordial wrinkles themselves.

Small variations in the temperature of the background radiation from point to point on the sky are called anisotropies. These anisotropies were first detected by the COBE satellite in 1992. The current MAP version of the CMB is:

COBE and MAP have thus told us what the large scale ripples in the background radiation temperature look like. However, there is much to be gained by examining the fine details of the ripples. Recall that on large scales, the temperature ripples reflect the primordial ripples themselves. That is because on scales larger than the horizon there hasn't been enough time for matter to collect in the valleys and for the process of structure formation to start. When we look at scales smaller than the horizon, we see the process of structure formation at work.

The goal of the current generation of experiments is to understand this process in detail by looking at the small scale ripples in the background radiation temperature.
What we see on small scales is actually sound. The photons behave as a gas, just like air. Ordinary sound waves are travelling compressions and rarefactions of a gas, which we hear as they strike our ear drums. The photon gas also carries sound waves, as gravity tries to compress the gas and pressure resists it. The reason we see it rather than hear it is that when the gas is compressed it becomes hotter. We see the sound waves as hot and cold spots on the sky.

The result is a spectrum of sound waves that are useful in determining the
origin, evolution and fate of objects in the universe.
We think that fluctuations may have originated from a period of rapid
expansion called inflation. Whether or not this actually happened can be
"heard" in the microwave background. The fundamental tone of a musical
system is related to its physical size - here the horizon size at last
scattering.

There is also a pattern of overtones at integer multiples of the fundamental frequency.

In music, the pattern of overtones helps us distinguish one instrument from another: it is a kind of signature of the instrument that makes the sound. In the same way, the pattern of overtones in the sound spectrum of the microwave background ripples acts as a signature of inflation. Inflation's signature is that the overtones follow a pure harmonic series with frequency ratios of 1:2:3...
COBE told us what the large-scale fluctuations in the background look like, but cosmologists today are more interested in the small-scale fluctuations. Astronomers divide up the sky into angular degrees, so that 90 degrees is the distance from the horizon to a point directly overhead. COBE measured temperature ripples from the 10 degree to 90 degree scale. This scale is so large that there has not been enough time for structures to evolve. Hence COBE sees the so-called initial conditions of the universe. At the degree scale, on the other hand, the process of structure formation imprints information in the ripples about conditions in the early universe.
Since the COBE discovery, many ground and balloon-based experiments have shown that the ripples peak at the degree scale. What CMB experimentalists do is take a power spectrum of the temperature maps, much as you would if you wanted to measure background noise. The angular wavenumber of the power spectrum, called a multipole l, is related to the inverse of the angular scale (l=100 is approximately 1 degree). Recent experiments, notably the Boomerang and Maxima experiments, have shown that the power spectrum exhibits a sharp peak of exactly the right form to be the ringing or acoustic phenomenon long awaited by cosmologists.
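
For reading such a power spectrum, the rough conversion between multipole and angular scale, and the near-harmonic spacing of the acoustic peaks predicted by inflation, can be written down in a few lines. A minimal Python sketch (the first-peak multipole of roughly 200 is an assumed, approximate value):

# Rule-of-thumb conversions for reading a CMB power spectrum.
# Approximation: angular scale in degrees ~ 180 / l, so multipoles of a few
# hundred correspond to the degree scale discussed in the text.
def multipole_to_degrees(l):
    return 180.0 / l

l_first_peak = 200     # assumed, approximate multipole of the first acoustic peak
for n in range(1, 4):  # inflation predicts near-harmonic overtones l_n ~ n * l_1
    l_n = n * l_first_peak
    print(f"peak {n}: l ~ {l_n}, angular scale ~ {multipole_to_degrees(l_n):.2f} degrees")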

Origin of Structure :

As we move forward in time from the beginning of the Universe we pass through the inflation era, baryogenesis, nucleosynthesis and radiation decoupling. The culmination is the formation of the structure of matter, the distribution of galaxies in the Universe.
During the radiation era, the growth of structure is suppressed by the tight interaction of photons and matter. Matter was not free to respond to its own gravitational force, so density enhancements from the earliest times could not grow.
Density enhancements at the time of recombination (having their origin in
quantum fluctuations that expanded to galaxy-sized objects during the
inflation era) have two routes to go. They can grow or disperse.

The `pressure effects' that density enhancements experience are due to the expanding Universe. The space itself between particles is expanding, so each particle is moving away from every other. Only if there is enough matter for the force of gravity to overcome the expansion do density enhancements collapse and grow.
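
One way to make this competition concrete is to compare the gravitational free-fall time of a region, roughly 1/sqrt(G*rho), with the expansion timescale 1/H. A minimal Python sketch with illustrative numbers (a Hubble constant of 70 km/s/Mpc is assumed):

import math

# Compare the gravitational free-fall time of an overdense region,
# t_ff ~ 1 / sqrt(G * rho), with the expansion timescale 1 / H.
# Collapse can only win when t_ff is shorter than 1 / H.
G = 6.674e-11              # m^3 kg^-1 s^-2
H = 70e3 / 3.086e22        # s^-1, assuming a Hubble constant of 70 km/s/Mpc

rho_mean = 3 * H**2 / (8 * math.pi * G)    # critical (mean) density
for overdensity in [1, 10, 1000]:
    t_ff = 1.0 / math.sqrt(G * overdensity * rho_mean)
    print(f"overdensity {overdensity:5d}: t_ff * H ~ {t_ff * H:.2f}")
# Only regions well above the mean density have t_ff * H << 1 and can
# collapse against the expansion.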

Top-Down Scenario:
Structure could have formed in one of two sequences: either large structures the size of galaxy clusters formed first and later fragmented into galaxies, or dwarf galaxies formed first and then merged to produce larger galaxies and galaxy clusters.
The former sequence is called the top-down scenario, and is based on the principle that radiation smoothed out the matter density fluctuations to produce large pancakes. These pancakes accrete matter after recombination and grow until they collapse and fragment into galaxies.

This scenario has the advantage of predicting that there should be large
sheets of galaxies with low density voids between the sheets. Clusters of
galaxies form where the sheets intersect.

Bottom-Up Scenario:
The competing scenario is one where galaxies form first and merge into
clusters, called the bottom-up scenario. In this scenario, the density
enhancements at the time of recombination were close to the size of small
galaxies today. These enhancements collapsed from self-gravity into dwarf
galaxies.

Once the small galaxies are formed, they attract each other by gravity and
merge to form larger galaxies. The galaxies can then, by gravity, cluster
together to form filaments and clusters. Thus, gravity is the mechanism to
form larger and larger structures.

Hot Dark Matter vs. Cold Dark Matter :


Each scenario of structure formation has its own predictions for the
appearance of the Universe today. Both require a particular form for dark
matter, a particular type of particle that makes up the 90% of the Universe
not visible to our instruments. These two forms of dark matter are called
Hot and Cold.

HDM produces large, smooth features since it travels at high velocity. Massive neutrinos move at near the speed of light yet interact very weakly with matter, so they smooth out density enhancements, erasing small-scale clumps and leaving only large features.

CDM, on the other hand, is slow moving and, therefore, clumps into small
regions. Large scale features are suppressed since the small clumps grow
to form small galaxies.
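
The hot/cold distinction is essentially a statement about thermal velocity: a particle is 'hot' if the thermal energy of the Universe is comparable to or larger than its rest-mass energy while structure is forming, and 'cold' if its rest mass is far larger. A minimal Python sketch with assumed particle masses and an assumed thermal energy of about 0.3 eV (roughly the value near recombination):

import math

# Hot vs. cold dark matter: compare a particle's rest-mass energy with the
# thermal energy near recombination (kT ~ 0.3 eV, an assumed round value).
kT = 0.3   # eV
candidates = {
    "light neutrino (assumed mass 0.1 eV)": 0.1,
    "heavy WIMP (assumed mass 100 GeV)": 100e9,
}

for name, mass_eV in candidates.items():
    if kT >= mass_eV:
        verdict = "relativistic -> HOT: free-streams and erases small clumps"
    else:
        v_over_c = math.sqrt(3 * kT / mass_eV)   # non-relativistic thermal speed
        verdict = f"slow (v/c ~ {v_over_c:.0e}) -> COLD: clumps on small scales"
    print(f"{name}: {verdict}")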
There is strong evidence that galaxies formed before clusters, in the sense
that the stars in galaxies are 10 to 14 billion years old, but many clusters
of galaxies are still forming today. This would rule against the top-down
scenario and support the bottom-up process.

Large Scale Structure :

Galaxies in the Universe are not distributed evenly, like dots on a grid. Surveys of galaxy positions, i.e. maps of galaxies, have shown that galaxies have large scale structure in terms of clusters, filaments and voids.
The clusters, filaments and voids reflect the initial fluctuations at recombination, plus any further evolution as predicted by HDM or CDM models. CDM and HDM models make particular predictions that can be tested by maps or redshift surveys that cover hundreds of millions of lightyears.

Interestingly enough, the real distribution of galaxies from redshift surveys is exactly in between the HDM and CDM predictions, such that a hybrid model of both HDM and CDM is needed to explain what we see.

The mapping of large scale structure also has an impact on determining whether the Universe is open or closed. Galaxies on the edges of the filaments will move in bulk motion towards concentrations of other galaxies and dark matter. These large scale flows can be used to determine the density of large regions of space, then extrapolated to determine the mean density of the Universe.

Anthropic Principle :
The success of science in understanding the macroscopic, microscopic and
cosmological worlds has led to the strong belief that it is possible to form a
fully scientific explanation of any feature of the Universe. However, in the
past 20 years our understanding of physics and biology has noted a
peculiar specialness to our Universe, a specialness with regard to the
existence of intelligent life. This sends up warning signs from the
Copernican Principle, the idea that no scientific theory should invoke a
special place or aspect to humans.
All the laws of Nature have particular constants associated with them, the
gravitational constant, the speed of light, the electric charge, the mass of
the electron, Planck's constant from quantum mechanics. Some are
derived from physical laws (the speed of light, for example, comes from
Maxwell's equations). However, for most, their values are arbitrary. The
laws would still operate if the constants had different values, although the
resulting interactions would be radically different.
Examples:
gravitational constant: Determines the strength of gravity. If lower, then stars would have insufficient pressure to overcome the Coulomb barrier to start thermonuclear fusion (i.e. stars would not shine). If higher, stars burn too fast and use up their fuel before life has a chance to evolve.

strong force coupling constant: Holds particles together in the nucleus of the atom. If weaker, then multi-proton nuclei would not hold together and hydrogen would be the only element in the Universe. If stronger, all elements lighter than iron would be rare. There would also be less radioactive decay, which heats the core of the Earth.

electromagnetic coupling constant: Determines the strength of the electromagnetic force that couples electrons to the nucleus. If weaker, then no electrons would be held in orbit. If stronger, electrons would not bond with other atoms. Either way, no molecules.
All the above constants are critical to the formation of the basic building blocks of life. And the range of possible values for these constants is very narrow, only about 1 to 5% for the combination of constants. Outside this range, life (in particular, intelligent life) would be impossible.

It is therefore possible to imagine whole different kinds of universes with different constants, all equally valid within the laws of Nature. For example, a universe with a lower gravitational constant would have a weaker force of gravity, where stars and planets might not form. Or a universe with a stronger strong force would inhibit thermonuclear fusion, making the luminosity of stars much lower: a darker universe, where life would have to evolve without sunlight. Why don't those universes exist? Why does our Universe, with its special values, exist rather than another? Is there something fundamental to our physics that makes the present values of the physical constants expected?

Cosmological Constants:
The situation became worse with the cosmological discoveries of the 1980's. The two key cosmological parameters are the cosmic expansion rate (Hubble's constant, which determines the age of the Universe) and the cosmic density parameter (Ω), which determines the acceleration of the Universe and its geometry.
The cosmic density parameter determines the three possible shapes for the Universe: a flat Universe (Euclidean or zero curvature), a spherical or closed Universe (positive curvature) or a hyperbolic or open Universe (negative curvature). Note that this curvature is similar to spacetime curvature due to stellar masses, except that the entire mass of the Universe determines the curvature.
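
The dividing line between these geometries is the critical density, rho_crit = 3H^2/(8*pi*G); the density parameter Ω is just the actual mean density divided by this value. A minimal Python sketch (a Hubble constant of 70 km/s/Mpc is assumed):

import math

# Critical density of the Universe: rho_crit = 3 H^2 / (8 pi G).
# Omega is the measured mean density divided by this number.
G = 6.674e-11              # m^3 kg^-1 s^-2
H = 70e3 / 3.086e22        # s^-1, assuming a Hubble constant of 70 km/s/Mpc
m_H = 1.67e-27             # kg, mass of a hydrogen atom

rho_crit = 3 * H**2 / (8 * math.pi * G)
print(f"critical density ~ {rho_crit:.1e} kg/m^3")
print(f"                 ~ {rho_crit / m_H:.1f} hydrogen atoms per cubic meter")

The answer, a few hydrogen atoms per cubic meter, shows just how empty even a 'critical' universe is.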

The description of the various geometries of the Universe (open, closed, flat) also relates to their futures. There are two possible futures for our Universe: continual expansion (open and flat) or turn-around and collapse (closed). Note that flat is the specific case of expansion to zero velocity.

Current measured values for the density parameter range from 0.1 to 1, which produces a new dilemma for modern cosmology, the flatness problem.

The flatness problem relates to the density parameter of the Universe, Ω. Values for Ω can take on any number, but it has to be between 0.01 and 5. If Ω were less than 0.01, the Universe would be expanding so fast that the Solar System would fly apart. And Ω has to be less than 5 or the Universe would be younger than the oldest rocks. The measured value is near 0.2. This is close to an Ω of 1, which is strange because Ω = 1 is an unstable critical point for the geometry of the Universe.

Values of Ω slightly below or above 1 in the early Universe rapidly evolve to values much less than 1 or much greater than 1 (like a ball rolling off the top of a hill). So the fact that the measured value of 0.2 is so close to 1 suggests that our measured value is too low, and that the Universe will turn out to have a value of Ω exactly equal to 1.

The flatness problem, therefore, is that some mechanism is needed to get a value of Ω very, very close to one in the early Universe (within one part in a billion billion).
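
The instability can be made quantitative: in a radiation-dominated universe the deviation |Ω - 1| grows roughly as the square of the scale factor, and in a matter-dominated universe roughly as the scale factor itself. Running this scaling backwards from a present-day deviation of order one shows the required tuning. A minimal Python sketch (the scale factors assumed for matter-radiation equality and for nucleosynthesis are rough, illustrative values):

# Flatness problem: run the growth of |Omega - 1| backwards in time.
# Assumed scalings: |Omega - 1| grows roughly as a   in the matter era,
#                   |Omega - 1| grows roughly as a^2 in the radiation era.
deviation_today = 0.8    # |Omega - 1| today if the measured Omega is ~0.2
a_equality = 3e-4        # assumed scale factor at matter-radiation equality
a_bbn = 3e-9             # assumed scale factor at nucleosynthesis (T ~ 0.1 MeV)

deviation_eq = deviation_today * a_equality              # back through the matter era
deviation_bbn = deviation_eq * (a_bbn / a_equality)**2   # back through the radiation era
print(f"|Omega - 1| at equality:        {deviation_eq:.1e}")
print(f"|Omega - 1| at nucleosynthesis: {deviation_bbn:.1e}")
# Omega had to equal one to roughly a part in 10^14 at nucleosynthesis, and far
# more tightly at still earlier epochs; this is the origin of the 'one part in
# a billion billion' quoted above.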

Anthropic Principle:
So the philosophical dilemma is that the constants of the Universe on the microscopic (atomic constants), macroscopic (electromagnetic forces) and cosmological levels all appear to be extremely fine-tuned in order for life and intelligence to evolve.
This concern with how conscious creatures, such as ourselves, came to be in the Universe is called the anthropic principle, and it has three forms: weak, strong and final.
Weak Anthropic Principle: The observed values of all physical and cosmological quantities are not equally probable, but take on values restricted by the requirement that there exist sites where carbon-based life can evolve, and by the requirement that the Universe be old enough for it to have already done so.
Strong Anthropic Principle: The Universe must have those properties
which allow life to develop within it at some stage in its history. Because:
1. There exists one possible Universe `designed' with the goal of
generating and sustaining `observers'. or...
2. Observers are necessary to bring the Universe into being
(participatory universe). or...
3. An ensemble of other different universes is necessary for the
existence of our Universe
Final Anthropic Principle: Intelligent information-processing must come
into existence in the Universe, and, once it comes into existence, it will
never die out.
The weak version of the anthropic principle just says that our existence
allows us to infer values of certain fundamental constants. Our existence is
an indicator of what values these constants have. But the strong version
claims not just that our existence allows us to infer the values of the
constants, but that it is moreover the explanation of why they have just the
values that they do.

Anthropic Principle and Circular Reasoning :


The usual criticism of any form of the anthropic principle is that it is guilty
of a tautology or circular reasoning.

With respect to our existence and the Universe, the error in reasoning is that because we are here, it must be possible that we can be here. In other words, we exist to ask the question of the anthropic principle; if we didn't exist, then the question could not be asked. So there is nothing special about the anthropic principle, it simply states that we exist to ask questions about the Universe.
An example of this style of question is whether life is unique to the Earth. There are many special qualities to the Earth (proper mass, distance from the Sun for liquid water, position in the Galaxy for heavy elements from nearby supernova explosions). But none of these characteristics are unique to the Earth. There may exist hundreds to thousands of solar systems with similar characteristics where life would be possible, if not inevitable. We simply live on one of them, and we would not be capable of living on any other world.
This solution is mildly unsatisfying with respect to the physical constants, since it implies some sort of lottery system for the existence of life, and we have no evidence of previous Universes.

Anthropic Principle and Many-Worlds Hypothesis:


Another solution to the anthropic principle is that all possible universes
that can be imagined under the current laws of Nature are possible, and do
have an existence as superpositions.

This is the infamous many-worlds hypothesis, used to explain how the position of an electron can be fuzzy or uncertain. It's not uncertain: it actually exists in all possible positions, each one having its own separate and unique universe. Quantum reality is explained by invoking an infinite number of universes in which every possible realization of the position and energy of every particle actually exists.

With respect to the anthropic principle, we simply exist in one of the many
universes where intelligent life is possible and did evolve. There are many
other universes where this is not the case, existing side by side with us in
some super-reality of the many-worlds. Since the many-worlds hypothesis
lacks the ability to test the existence of these other universes, it is not
falsifiable and, therefore, borders on pseudo-science.

Anthropic Principle and Inflation :


The solution to the anthropic principle appears to lie in the very early Universe, moments after the Big Bang: the inflation era. Our old view of the Universe was one of Newtonian expansion, at less than the speed of light.

However, now we know that, because of symmetry breaking at the GUT unification point, spacetime and matter separated and a tremendous amount of energy was released. This energy produced an overpressure that was applied not to the particles of matter, but to spacetime itself. Basically, the particles stood still as the space between them expanded at an exponential rate.

During inflation, the Universe expanded by a factor of 10^54, so that our horizon now only sees a small piece of what was the total Universe from the Big Bang.
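
A factor of 10^54 corresponds to a definite number of e-foldings of the exponential growth a proportional to e^(Ht). A minimal Python sketch (only the expansion factor quoted above is used):

import math

# Number of e-foldings N implied by the quoted inflationary expansion factor:
# a_final / a_initial = e^N  =>  N = ln(expansion factor).
expansion_factor = 1e54
N = math.log(expansion_factor)
print(f"e-foldings during inflation: N ~ {N:.0f}")   # about 124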

Our visible Universe, the part of the Big Bang within our horizon, is
effectively a `bubble' on the larger Universe. However, those other bubbles
are not physically real since they are outside our horizon. We can only
relate to them in an imaginary, theoretical sense. They are outside our
horizon and we will never be able to communicate with those other bubble
universes.

Inflation's answer to the anthropic principle, in any form, is that many bubble universes were created from the Big Bang. Our Universe had the appropriate physical constants that led to the evolution of intelligent life. However, that evolution was not determined or required. There may exist many other universes with similar conditions in which the emergent property of life or intelligence did not develop.

Hopefully a complete Theory of Everything will resolve the `how' questions on the origin of physical constants. But a complete physical theory may still lack the answers to `why' questions, which is one of the reasons that modern science is in a crisis phase of development: our ability to understand `how' has outpaced our ability to answer whether we `should'.

Origin of Life:
The Earth's crust became stable about 3.9 billion years ago. Life appeared around 3.6 to 3.9 billion years ago, which is quite fast in astronomical terms. Microfossils found in ancient rocks from Australia and South Africa demonstrate that terrestrial life flourished by 3.5 billion years ago. Older rocks from Greenland, 3.9 billion years old, contain carbon with an isotopic signature that could only have come from living organisms. The early atmosphere of the Earth was a secondary atmosphere produced by volcanic outgassing, very CO2-rich with little free O2.

The Earth lies at the correct distance from the Sun for liquid water to exist. The evolution of life requires two elements: energy and a medium for growth. Sunlight serves as the source of energy for most life (a counterexample is bacteria that grow in ocean trenches, powered by heat from thermal vents). Sunlight provides the energy needed for food manufacture (biochemical energy storage) and molecular construction (genetic material, cell walls, etc.). Indirectly, sunlight provides a warm temperature, which means higher chemical reaction rates for simple life. More complex life requires sunlight for vision and a stable environment.
Chemical Evolution:
Liquid water provides a universal solvent and warm environment for
chemical evolution. It is a vehicle for dissolved substances (it circulates).
And it provides the raw material for protein construction.

When the primordial soup is exposed to energy, organic compounds are produced, as shown by the Miller-Urey experiment.
Amino acids are small, highly reactive molecules composed of 20 to 30 HCNO atoms. When amino acids link together in strings they form proteins. Proteins govern chemical reaction rates and form the structural material for cell parts.

Most importantly, they can form microspheres when heated, which serve to separate chemical reactions and processes. The problem is that, given the vastness of the Earth's oceans, it is statistically very improbable that these early proteins would ever link up. The solution is that the huge tides from the Moon produced inland tidal pools, which would fill and evaporate on a regular basis to produce high concentrations of amino acids, which then linked up into macromolecules.
With the construction of large macromolecules, such as proteins and nucleic acids, the Earth was poised for the next stage of biochemical evolution. Living organisms are the supreme example of active matter. They represent the most developed form of organized matter and energy that we know. They exemplify growth, adaptation, complexity, unfolding form, variety and unpredictability, almost appearing to be a class apart from matter and energy, defying the laws that enslave normal matter and energy.
Every organism is unique, both in form and development. Unlike physics
where one studies classes of identical objects (e.g. electrons, photons),
organisms are all individuals. Moreover, collections of organisms are
unique, species are unique, the evolutionary history of the Earth is unique,
the entire biosphere is unique. On the other hand, a cat is a cat, a cell is a
cell, there are definite regularities and distinguishing features that permit
organisms to be classified.

Each level of biology has new and unexpected qualities, qualities which cannot be reduced to the properties of the component parts; this is known as holism. A living organism consists of a large range of components
differing greatly in structure and function (heart, liver, hair). Yet, the
components are arranged and behave in a coherent and cooperative
fashion as though to a common agreed plan. This endows the organism
with a discrete identity, makes a worm a worm, a dog a dog.
No living thing exists in isolation. All organisms are strongly coupled to their inanimate environment and require a continual throughput of matter and energy, as well as the ability to export entropy. From a physical and chemical point of view, every organism is strongly out of equilibrium with its environment. In addition, life on Earth is an intricate network of mutually interdependent organisms held in a state of dynamic balance. The concept of life is fully meaningful only in the context of the entire biosphere.
A large number of complex chemical reactions is the underlying process that we call life. The ingredients for life are:
1. energy source
2. supply of nutrients (building blocks)
3. self-regulating mechanisms
The first two criteria were supplied by the conditions of the early Earth environment. The third criterion was provided by the endpoint of chemical evolution, where long chains of nucleic acids formed and developed into RNA and DNA.

RNA and DNA are molecular codes for the production of proteins. They have the unique property of being self-replicating (when an RNA molecule splits, free nucleotides attach along the strand, producing an exact copy of the original chain). The beginning of biochemical evolution was when RNA and DNA evolved to coat themselves in protein shells. These coated RNA and DNA packages are called viruses. A virus is halfway between life and non-life, being non-living in isolation, but adopting living characteristics in interaction with other viruses or cells.
The next stage in biochemical evolution was for various viruses to take on specialized tasks (energy production, protein production, etc.). These individual elements would combine to form the first cell. Our earliest evidence of cellular life comes from fossil bacteria.

With the development of cells, life took on an explosive evolution into more
diverse forms, invading new environments (sea, lakes, land).

Photosynthesis:
Oxygen is a very small component of outgassing on the Earth, yet O2 is a significant fraction of our current atmosphere (thank goodness). Also note that O2 is highly reactive and combines quickly with rock and soil to form oxides (rust). Thus, the current amount of O2 requires a constant process of replenishment. That process is photosynthesis.

The first photosynthesizing organisms used UV light as an energy source, since there is more energy associated with short wavelength light than long wavelength light (want proof? leave your shirt off for an hour at the beach). This occurred about 3.5 billion years ago and the immediate by-product was the ozone layer, which blocks UV light. This resulted in the first mass extinction, the death of all UV photosynthesizing cells. Only organisms which were able to utilize the visible portion of the spectrum survived = green plants and plankton.
