01.02 An introduction
01.03 The geological time scale
01.04 Rock types
01.05 Sedimentary basins
01.06 Oil and Gas formation
01.07 Oil and Gas Traps
01.08 Approximately Mathematical
01.09 Chapter 1 - Questions
02.01 Chapter 2 - Seismic Acquisition
02.02 The ideal seismic source
02.03 Onshore seismic sources
02.04 Other Land sources
02.05 Recording the data - Onshore
02.06 Offshore seismic sources
02.07 Recording the data - Offshore
02.08 Chapter 2 - Questions
03.02 Waves
03.04 Wavefronts
03.07 Ghost Reflections
03.09 Diffractions
04.10 3D Geometry
04.11 3D binning
05.05 Digitisation
05.06 Aliasing
05.07 Multiplexed data
05.11 SEG-D
05.12 SEG-Y
06.03 Frequencies
06.10 Convolution
06.12 Phase
07.03 Transcription
08.02 Resample
08.06 FK Filtering
08.08 FX Deconvolution
08.13 Chapter 8 - Questions
09.08 Velocity QC
10.09 Residual Statics (1)
12.05 Zero Phasing
Chapter 1 - In the beginning ...
As the beginning seems an appropriate place to start, Chapter 1 provides a tiny bit of
geological and mathematical background to Seismic Processing. Please make sure that you
have read Navigating through this course before going any further!
We start with a brief introduction to the whole process of Seismic Exploration, a mention of
its history, and an explanation of the types of seismic trace displays used in this course.
As a prelude to a four page instant geological background, nothing less than the history of
the world is used to familiarise you with the formation names used by geologists. Although
not strictly related to Seismic Processing, some background to the overall geological setting
of seismic data is useful.
Specific rock types, and their formation, are discussed, with particular emphasis on the
sedimentary rocks that are the foundation of the world's Oil & Gas production.
Since all of the world's Oil & Gas reserves are found in sedimentary basins, this page
illustrates the formation of a typical basin. The structures produced during basin
formation provide the setting for both the generation and subsequent "trapping" of Oil &
Gas deposits.
The chemical and physical processes necessary for the production of Oil & Gas are
discussed, together with an example of an actual sedimentary basin cross-section.
Once the Oil & Gas has been produced, the conditions must exist for the "trapping" of the
Oil & Gas in a suitable structure. Examples of the major types of trap are shown.
An introduction
Seismic Exploration
Although this course will concentrate on the processing of reflection seismic data for hydrocarbon
exploration, it's probably worth reviewing the historical context and the other uses of seismic
techniques.
The Chinese had a device as early as 100 AD for detecting earth tremors - probably the
first seismic receiver! The first use of explosives to delineate structures under the earth was
in the 1920's and 30's in the Southern US and South America. The techniques used
developed fairly slowly over the next twenty or so years until the advent of tape recording
in the 1950's, and digital computer processing in the 1960's. Since then the technology has
increased exponentially, with the ups and downs of the world oil price controlling the
overall research effort.
This course deals exclusively with seismic reflection data processing. The recording of
refracted seismic waves is used for shallow investigations and will be mentioned in passing
(we can't avoid recording refractions with the reflections). Reflection techniques can also
be used for very shallow site investigations (either for placing an oil rig, or for engineering
work), or very deep penetration into the earth for examination of the limits of the earth's
crust. Other geophysical techniques (gravity, magnetics etc.) will not be discussed.
Like most real-world activities, seismic exploration has not featured well in the movies.
The 1953 film "Thunder Bay" has James Stewart throwing sticks of dynamite from a boat
in a Louisiana bayou, and the 1976 re-make of "King Kong" used the excuse of seismic
exploration as a reason for visiting the appropriate island. Neither of these is
recommended viewing! (I won't mention the single shotgun blast at the beginning of
"Jurrasic Park" producing a 3D image, though it would be interesting to develop such a
technique!)
In simplistic terms, seismic exploration can be thought of as the sonic equivalent of
radar. An energy source produces sound waves that are directed into the ground.
These waves pass through the earth and are partially reflected at every boundary
between rocks of different types. The response to this reflection sequence is
received by instruments on or near the surface, and recorded on magnetic tape for
computer processing. The process is repeated many times along a seismic "line"
(generally a straight line on the surface), and the resultant processed data provides
a structural picture of the sub-surface. Sophisticated processing techniques can be
used (usually in conjunction with calibration data recorded down a well) to turn the
resultant seismic section into a direct indicator of rock types and (possibly) detect
the presence of hydrocarbons (oil & gas) within the earth.
The data recorded from one "shot" (one detonation of an explosive or implosive
energy source) at one receiver position is referred to as a seismic trace, and is
recorded as a function of time (the time since the shot was fired). As this time
represents the time taken for the energy to travel into the earth, reflect, and then
return to the surface, it is more correctly called "two-way time" and the vertical
scale is generally measured in milliseconds (one thousandth of a second - 0.001
seconds). During the processing sequence these traces are combined together in
various ways, and modified by some fairly complex mathematical operations, but
are always referred to as "traces". The display of many traces side-by-side in their
correct spatial positions produces the final "seismic section" or "seismic profile"
that provides the geologist with a structural picture of the sub-surface.
For reasons of efficiency and data redundancy, the results from each shot are
actually recorded at many different receiver positions. Receivers are placed at
regular intervals around or to one side of the shot position typically extending over
some 3 kilometres or more of the surface. The resultant collection of traces from
one shot is generally recorded together and referred to as a "Field Record". Each
shot position is numbered, and the position of each shot (normally the
"shotpoint") is accurately mapped. This "shotpoint map" shows the position of the
seismic cross-section on the surface. For conventional so-called two-dimensional
(2D) seismic data (we'll talk about 3D later...) this cross-section is assumed to lie
directly below the surface "line".
Since the early 1960's seismic data has been recorded and processed digitally.
What we visualise as a seismic trace is actually nothing more than a string of
numbers, where each number represents the amplitude (or height) of the seismic
trace at a particular time. Traces are typically recorded with a sample period (the
time interval between numbers) of 1-2 milliseconds (0.001 - 0.002 seconds), and
recorded to a total two-way time of (typically) 5 or 6 seconds. Each trace thus
consists of some 2,500 - 6,000 numbers recorded on tape. As each of these
numbers can take up to 4 bytes of computer storage, and as one field record can
consist of (for example) 240 seismic traces recorded every 25 metres on the
surface, the data storage problems are, even now, considerable. Six seconds of
data recorded at a 2 millisecond sample period, with one field record of 240 traces
shot every 25 metres = 115,200,000 bytes per kilometre (recorded every 5-6
minutes offshore, or 1.3 gigabytes per hour!)
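If you prefer to see that arithmetic spelled out, here's a rough sketch in Python (the numbers are
the ones used above; real recording adds tape headers and so on, so actual volumes will be a
little higher):-

# A sketch of the data-volume arithmetic above: 2 ms samples, 6 second records,
# 240 traces per shot, a shot every 25 metres, 4 bytes per sample.
sample_period = 0.002          # seconds
record_length = 6.0            # seconds
bytes_per_sample = 4
traces_per_shot = 240
shot_interval = 25.0           # metres

samples_per_trace = int(record_length / sample_period)                   # 3000
bytes_per_shot = samples_per_trace * bytes_per_sample * traces_per_shot  # 2,880,000
shots_per_km = 1000.0 / shot_interval                                    # 40
print(bytes_per_shot * shots_per_km)    # 115,200,000 bytes per kilometre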
In all of these cases we will assume that values above the zero line
represent positive numbers, whilst those below are negative
numbers.
When we are showing longer traces, or
collections of traces, we will resort to more
conventional displays.
The geological time scale

Era          Period        Epoch          Began (years ago)   Typical life
CENOZOIC     Quaternary    Recent         10,000
                           Pleistocene    2,500,000           Humanoids
             Tertiary      Pliocene       12,000,000          Grazing & carnivorous
                           Miocene        26,000,000          mammals
                           Oligocene      38,000,000
                           Eocene         54,000,000
                           Paleocene      65,000,000
...
PALAEOZOIC   Devonian                     395,000,000         Amphibians & Insects
             Silurian                     430,000,000         Land Plants
             Ordovician                   500,000,000         Fish
             Cambrian                     570,000,000         Shellfish
The age of the Earth is approximately 4,650 million years (4,650,000,000). Life first
appeared some 600 million years ago, with humanoids appearing about 3 million years ago.
The Earth's history is conveniently divided into two major time divisions, with the
boundary marking the point when life first appeared: the Cryptozoic (hidden life), or
Precambrian; and the Phanerozoic (obvious life), which begins with the Cambrian.
Until comparatively recently, with the advent of radiometric dating and similar processes,
the only way to establish the relative ages of rocks was by means of the fossilised remains
within those rocks. This led to difficulties in dating rocks older than the earliest fossil
remains (greater than 600 million years old, or Precambrian).
The time period over the remaining geologic time scale has been sub-divided into three eras
(Palaeozoic, Mesozoic and Cainozoic), each sub-divided into named periods and ages, as
well as further sub-divisions referred to by the name of a characteristic fossil.
Most, but not all, of the major oil discoveries are in rocks formed during the last 200
million years (Triassic and younger).
Rock types
Rocks can be classified into three main types, depending on the chemistry of their formation.
Igneous rocks
Igneous rocks were formed by the cooling and subsequent solidification of a molten mass of rock
material, known as magma. These rocks are still being produced by active volcanoes.
Plutonic (coarse-grained) rocks, such as granite and syenite, were formed from a magma
buried deep within the crust of the Earth which cooled very slowly during the initial stages
of the Earth's solidification, thus permitting large crystals of individual minerals to form.
Volcanic rocks, typified by basalt and rhyolite, were formed when the molten magma rose
from a depth and filled cracks close to the surface, or when the magma was extruded upon
the surface of the Earth through a volcano. The subsequent cooling and solidification of the
magma were very rapid, resulting in the formation of fine-grain minerals or glasslike
rocks.
As igneous rocks are composed almost entirely of silicate minerals, they are often classified
by their silica content. The major categories are referred to as acid (granite and rhyolite)
and basic (gabbro and basalt).
Metamorphic rocks
Metamorphic rocks are those whose composition and texture have been altered by heat and pressure
deep within the Earth's crust. Dynamothermal, or regional, metamorphism refers to those rocks
where both heat and pressure have caused changes. Where the changes have been produced by the
heat of an intrusion of igneous rock, the metamorphism is termed thermal, or contact.
Metamorphism occurs in both igneous and sedimentary rocks, and the resultant rock
depends on the amount of heat and pressure they have been subjected to. Shale is
metamorphosed to slate in a low-temperature environment, but if heated to temperatures
high enough for its clay minerals to re-crystallise as mica flakes, shale becomes
metamorphosed into a phyllite. At even higher temperatures and pressures, shale and
siltstone completely re-crystallise, forming schist or gneiss, rocks in which the alignment of
mica flakes produces a laminated texture called foliation.
Among the non-foliated metamorphic rocks, quartzite and marble are the most common.
Quartzite is typically a tough, hard, light-coloured rock in which all the sand grains of a
sandstone or siltstone have been re-crystallised into a fabric of interlocking quartz grains.
Marble is a softer, more brittle, varicoloured rock in which the dolomite or calcite of the
original sedimentary material has been entirely re-crystallised.
Sedimentary rocks
Sedimentary rocks are formed from the weathered debris derived, by the slow processes of erosion,
from upland regions containing other rock types.
These weathering products are ultimately transported, by water, wind or ice to the seas,
lakes or lowland areas where they settle out and accumulate to form clastic (fragmented)
sediments. As the sediments are transported, the individual grains are battered and tend to
become more rounded and sorted according to size by the varying strengths of current and
wave action.
By the time it reaches the sea, and is distributed over the sea-floor, the sediment is
commonly well-enough sorted to be distinguishable as shingle, sand, silt or mud. Although
mixtures are also present, these are the deposits that later become hardened to form the
sedimentary rocks, conglomerate, sandstone, siltstone and mudstone or shale respectively.
During this process, trapped organic material gives rise to the oil and gas we are trying to
find!
Two other types of sediment are additionally deposited in the marine environment in which
they are created. An abundance of calcium carbonate secreting organisms, including
certain algae, corals and animals with shells, can give rise to limestone, a rock composed
almost entirely of the mineral calcite. An important variety is dolomite, where the basic
mineral constituent is a calcium-magnesium carbonate and which most commonly results
from changes to the rock due to percolating waters after deposition.
The intense evaporation of restricted or isolated sea areas can give rise to the accumulation
of sea salts on the sea floor. They may reach considerable thickness and include anhydrite
(calcium sulphate), calcium carbonate, rock salt (sodium chloride) and, eventually,
potassium salts. They are collectively referred to as evaporites.
At the edges of the continental land masses, this eroded and precipitated material,
deposited in the shallows of ancient seas, caused the sea bottom to subside, leading to the
gradual build-up of sedimentary basins.
Sedimentary basins
Sedimentary basins were formed over hundreds of millions of years by the action of the
deposition of eroded material and the precipitation of chemicals and organic matter in the
sea water. External geological forces then distort and modify the layered strata. The
following pictures show (vertically exaggerated) the formation of a typical basin.
Sediment collects on the sea-bed, the weight causing subsidence. Different materials
collected at different times, so producing the regular "layering" of strata in the basin.
Volcanic action, or the movement of land masses, causes faults to appear in the basin.
These same forces cause rotation of the overall basin forming a new mountain range.
Erosion of the highlands, and additional subsidence forms yet another area of low-lying
land that is filled with water forming another ancient sea.
Additional sedimentation takes place, causing an "unconformity" in the underlying strata.
Finally, land mass movement causes folding and distortion of the basin.
In reality the above processes may have occurred again and again and in any order during
the formation of a sedimentary basin, giving rise to thick, complex structures in the
sediments.
Oil and Gas formation
Sedimentary basins exist around the world at the edges of ancient continental shelves.
These basins can be of complex structure, with many different layers within the sediments
deposited in a particular geological age.
The temperature increases with depth within the Earth's crust, so that sediments, and the
organic material they contain, heat up as they become buried under younger sediments.
As the heat and pressure increase, the natural fats and oils present in buried algae, bacteria
and other material link and form kerogen, a hydrocarbon that is the precursor of
petroleum.
As this source rock becomes hotter, chains of hydrogen and carbon atoms break away and
form heavy oil. At higher temperatures the chains become shorter and light oil or gas is
formed. Gas may also be directly formed from the decomposition of kerogen produced
from the woody parts of plants. This woody material also generates coal seams within the
strata.
If the temperature and pressure get too high, the kerogen becomes carbonised and does
not produce hydrocarbons.
The oil and gas produced by these processes may be in any combination and are almost
always mixed with water. The minute particles of hydrocarbon are produced within the
pores of permeable rocks (for example sandstone) and, being lighter than the surrounding
material, move up through the rock until prevented from doing so by an impermeable rock.
Although the initial source rock may only contain minute amounts of hydrocarbon, as the
particles of oil, gas and water move, or migrate*, through the pore space within younger
permeable rocks, they coalesce into larger volumes.
By the time this movement is stopped by the presence of a cap of impermeable rock (or
when they reach the surface) the total hydrocarbon volume may be large enough to
produce an oil or gas field that will be profitable to develop. The ultimate profitability of
such a field depends, of course, on external economic forces and world demand as much as
on ease of extraction.
* Don't confuse the migration of hydrocarbons with the use of the term migration in seismic
processing - we'll be talking about that later!
Oil and Gas Traps
There are many types of trap associated with hydrocarbon accumulations; here are a few
examples.
Some of the possible traps associated with a
salt intrusion. The actual trap may form
above the salt plug, in which case it may be
easier to image on the seismic data than
those on the sides of the plug.
There are obviously many more types of trap possible (some still to be found) but you
should now have a basic idea of the kinds of structures that may be "interesting" on a
seismic section. We'll now move on to some very basic mathematics!
Approximately Mathematical
Although seismic processing utilises some of the most sophisticated numerical algorithms
known, the processing geophysicist can probably get by with simply understanding what
the processes do; not how they do it!
The single most common mathematical thread that runs through seismic processing is that
of "Least-Squares", the process of minimising the errors involved in making some
approximation to our data. This technique is used again and again within the seismic
processing sequence and is worth spending a few moments on. If you are quite happy with
this technique, or simply bored or terrified at the prospect then please feel free to click the
"Next Page" button above.
In order to explain the principle of Least-Squares, here's a set of numbers representing the
time of my journey to work over several days:-
Day 1 - 25 minutes
Day 2 - 37 minutes
Day 3 - 28 minutes
Day 4 - 35 minutes
I would like to establish the "best-estimate" for my journey time, without having any
concept of the meaning of "average", but with a knowledge of some basic maths.
I'll start by making a guess at the best time - say 30 minutes. In order to see how good a
guess this is, I'll simply subtract it from each actual value:-
Day 1 - 25: 25 - 30 = -5
Day 2 - 37: 37 - 30 = 7
Day 3 - 28: 28 - 30 = -2
Day 4 - 35: 35 - 30 = 5
The total error could be described as -5 + 7 + -2 + 5 = 5, but this does not take into account
the fact that some of the numbers are positive, and others negative. I'm really only
interested in the magnitude of the error, not it's sign, so I'll do the simplest thing
(mathematically) I can do and square the numbers before adding them. This gives me (for
a guess of 30) a total error squared of 25 + 49 + 4 + 25 = 103. My best guess will be where
this number is a minimum.
Guess 26 - error squared = 207; guess 36 - error squared = 187; and so on ...
Here's a plot of the error squared against guess for a whole range of numbers:-
It's fairly obvious that the Total Error-Squared reaches a minimum value at the base of the
curve, and this corresponds to the position of the "best-fit". In order to calculate its value,
we need to express our calculations in general terms.
If we call our initial data values X1, X2 etc. (N values in all), and our guess G, then any given
value's error-squared is:-
(X1 - G)², (X2 - G)², and so on, giving a Total Error-Squared of:-
E = (X1 - G)² + (X2 - G)² + ... + (XN - G)²
or (by simplifying):-
E = (X1² + X2² + ... + XN²) - 2G(X1 + X2 + ... + XN) + NG²
Now, the only (slightly) heavy bit. In order to minimise this total error-squared, we need to
differentiate the right-hand side of the above with respect to G (the thing we're trying to
find). This will give us the slope of the resultant curve, and the "turn-over" point where
this equals zero:-
dE/dG = -2(X1 + X2 + ... + XN) + 2NG
which we set equal to zero to find the minimum error position for G:-
G = (X1 + X2 + ... + XN) / N
This appears to tell us that the best guess for G is the sum of all of the individual values
(25+37+28+35=125), divided by the number of values (125/4=31.25). A very long-winded
way of proving that the best-fit of a constant value to a set of numbers is the AVERAGE!
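If you'd like to check this with a few lines of Python (the numbers are the journey times used
above, and NumPy is assumed purely for convenience):-

import numpy as np

# The four journey times from the text
times = np.array([25.0, 37.0, 28.0, 35.0])

def total_error_squared(guess):
    return np.sum((times - guess) ** 2)

print(total_error_squared(30))   # 103, as above
print(total_error_squared(26))   # 207
print(total_error_squared(36))   # 187

# Search a range of guesses for the minimum total error-squared
guesses = np.arange(20.0, 40.0, 0.05)
errors = [total_error_squared(g) for g in guesses]
print(guesses[int(np.argmin(errors))])   # ~31.25 - the position of the minimum
print(times.mean())                      # 31.25 - the average, as the algebra shows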
Of course, this process can be extended to "best-fit" any mathematical function to any set
of numbers of any dimension; the solution in each case having the minimum error-
squared. Here's our data points once again with a whole set of higher-order curves fitted:-
The red line is the average we just computed, the green is a straight line, the blue quadratic
and the brown cubic. The brown (cubic) curve fits four points exactly, but is hardly the
most meaningful "fit" in this case. (It implies that, next week, it'll take me a month to get
to work, I suppose the traffic's going to be really bad!).
We will come back to the concept of least-squares much later in this course!
Chapter 2 - Seismic Acquisition
This Chapter presents a very brief look at Seismic Acquisition, with particular emphasis on
those topics relevant to the seismic processor. If you've spent the last 15 years on a land
crew in the Amazon, or an equivalent length of time in the middle of the Arctic Ocean
collecting marine data (or are doing either of these as you read this), you might want to
skip this bit!
What goes into the ground comes back out again! The characteristics of the ideal seismic
source are discussed.
The use of explosives onshore - still the most common source for "Land" recording.
Other land sources are mentioned, and vibratory sources (the second most common source)
are examined in some detail.
The logistics and mechanics of recording data onshore can be enormous. A brief
explanation of the standard recording device is followed by a wider review of recording
techniques.
From land to sea! The differences in the marine environment, and details of Air-Guns, the
most common offshore seismic source.
Highlights the differences between onshore and offshore recording and addresses the
general problems associated with Marine Acquisition.
Page 02.08 - Questions
A set of questions to test your knowledge of this very brief introduction to Seismic
Acquisition.
The ideal seismic source
Changes in the speed (velocity) of sound and the density within particular rocks cause
reflection and refraction of the sound waves produced by a seismic source. Specifically,
variation of these parameters at an interface between two different rock types causes a
reflection of some of the seismic energy back towards the surface. It is the record of these
reflections against time that produce our seismic section.
A seismic reflector can only reflect back to the surface an image of the energy pulse it
receives. If we send a complex pulse into the ground, that pulse will be superimposed on
every reflector we record. For this reason we wish to make the actual seismic source as
close as possible to a single pulse of energy - a spike.
A spike of energy sent into the earth produces a set of clear reflections.
A more complex energy pulse produces confused reflections.
In practice an ideal spike is impossible to achieve. As we will see later, a spike implies that
an infinitely wide range of frequencies needs to be present in the source, all released over an
infinitesimally small time range.
Onshore seismic sources
There are enormous logistical problems associated with Onshore Seismic Exploration.
Once the line position is marked, the shooting and recording equipment can be transported
onto the line. Oil & Gas deposits tend to be in some of the more inhospitable regions of the
Earth, so the actual terrain conditions may limit the available shooting / recording
positions as well as define the costs of the acquisition.
Dynamite and other explosives are still used for roughly half of all onshore seismic
exploration. One kilogram of seismic explosive releases about 5 MJ of energy almost
instantaneously, enough energy to keep your electric fire burning for almost an hour. The part of this
energy which can usefully be converted into a compressional wave of seismic energy
depends on the depth of the shot in the ground, and the local ground conditions.
Seismic shots are normally placed below the near-surface highly weathered layers of the
earth. This improves the coupling of the source to the ground, and avoids problems with
the very variable (slow) acoustic velocities in the weathering layer. Tests may be necessary
in a new exploration area to determine the optimum shot depth and charge size.
It may be necessary to fire an array of shots for each shotpoint, with the number and position of
each shot designed to enhance the downward-going energy and to attenuate the energy
going in other directions.
Charges can vary from a few grams to several tens of kilograms, depending
on the depth of the target reflectors.
The explosive used is very stable, almost impossible to detonate without the
correct electrical detonator, and often comes in small "cans" which can be
combined together to make larger charges.
If we assume that the shooting medium is consistent in all directions, we can make some
generalisations about the effect of an explosive charge on the surrounding rock.
In the immediate vicinity of the charge extremely high pressures will exist very briefly at
the shot instant, pulverising the neighbouring earth materials and displacing these to form
a physical cavity within the shot hole. Beyond this actual cavity there will exist a region
where movement is so great that materials are stressed beyond their elastic limits. This
part of the sub-surface will be permanently altered.
Other Land sources
Many other sources have been used on land: metal plates hit with hammers (still used for very
shallow refraction surveys), weights dropped from various devices, explosive charges and
offshore seismic sources in water containers, even shotguns fired into the ground.
Modified hardcore-tamping equipment has been used for shallow surveys, and ground penetrating
radar is used by archaeologists (and some police forces) for detecting near-surface disturbances.
Out of all of these, by far the majority of non-explosive onshore surveys are conducted using a
vibratory source on the surface, the most common being the vibroseis method developed by Conoco.
Vibroseis
In a Vibroseis survey, specially designed vehicles lift their weight onto a large plate, in contact with the
ground, which is then vibrated over a period of time (typically 8-16 seconds), with a sweep of
frequencies.
(Examples of an up-sweep and a down-sweep.)
Regardless of the sweep direction, vibroseis
trucks normally operate in synchronised
groups, so increasing the total amount of
vibratory energy input into the ground.
On the first page of this Chapter, it was pointed out that the ideal seismic source is a spike, or as close
to it as possible. Explosives meet this criterion very well, but vibroseis is obviously very different - it's
akin to the "chirp" used by radar systems - very long in duration but carefully controlled and very
repeatable.
Because the vibroseis sweep is so carefully controlled (and directly recorded for each shot), its effects
can be readily removed during the data processing. The technique, involving correlation of the
recorded data with the recorded sweep, reduces the apparent source to a symmetrical wavelet
containing the same frequencies as the sweep. The vibroseis correlation is now often performed
directly in the field and will be discussed when we come to look at correlation.
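As a rough illustration of what that correlation does, here's a sketch in Python. The sweep
parameters (8-80 Hz over 12 seconds), the reflector times and the 2 ms sample period are made-up
values for illustration only, not those of any particular survey:-

import numpy as np

dt = 0.002                                   # 2 ms sample period
t = np.arange(0, 12, dt)                     # 12 second sweep
f0, f1 = 8.0, 80.0
# Linear up-sweep: instantaneous frequency rises from f0 to f1 over the sweep
sweep = np.sin(2 * np.pi * (f0 + (f1 - f0) * t / (2 * t[-1])) * t)

# Pretend the earth consists of two reflectors, at 0.5 s and 1.2 s
trace = np.zeros(int(14.0 / dt))             # listening time = sweep length + 2 s
for rtime, rc in [(0.5, 1.0), (1.2, -0.6)]:
    i = int(rtime / dt)
    trace[i:i + len(sweep)] += rc * sweep    # each reflector returns a full sweep

# Correlating the recorded trace with the sweep collapses each sweep-length
# arrival into a short, symmetrical (Klauder) wavelet at the reflection time
correlated = np.correlate(trace, sweep, mode="valid") / len(sweep)
print(np.argmax(np.abs(correlated)) * dt)    # ~0.5 s, the stronger reflector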
Recording the data - Onshore
Onshore seismic data is recorded using a simple (normally electro-magnetic) device known
as a geophone.
The metal spike is pushed into the ground, and any upward-
travelling energy from the seismic shot is recorded as the electric
current generated by the movement of the coil relative to the
magnet.
The amount of energy recorded by the geophone is, of course, minute. Where conditions
allow (for example on relatively flat ground), several geophones may be grouped together,
or place in strings around the central (nominal) receiver position. This not only improves
the total signal output from the group, but also "tunes" the geophones so that energy from
below is enhanced whilst that from the side (ground-roll) is attenuated. We'll see more on
this when we discuss marine seismic sources.
In some cases, where the surface elevation varies rapidly, or there is very loose near-surface
material, it may be necessary to drill holes and bury the geophones below ground level. In
other cases (for example along a shopping street in the middle of a town) the geophones
may be mounted on metal cones and just placed in contact with the ground. Both of these
techniques have their own problems which must be addressed in the processing of the data.
Land recording crews must also cope with extreme environmental problems - from desert to
frozen tundra!
The geophone groups may be laid out for several kilometres prior to the start of shooting.
As the shot position advances down the line, different sections of the recording groups are
made "live" by the recording instruments to maintain a similar range of offsets (distances
from shot to receiver) for each shot.
At some stage the groups of geophones must be physically moved in order to maintain the
"live" section.
Offshore seismic sources
Most Offshore seismic exploration does not
have the logistical problems associated with
Onshore exploration, but the operational
difficulties more than compensate for this!
Although explosives were once used as an energy source for offshore exploration, the
environmental repercussions, and the need for rapid firing and repeatability have brought
about the design and construction of new sources.
The most common offshore (or Marine) source in use today is a variety of Air-Gun, first
produced in the 1960's. These guns use compressed air (at typically 2,000 to 5,000 psi) to
produce an explosive blast of air into the water surrounding the gun. The latest of these,
with the movable shuttle that releases the air on the outside, is the Sleeve-Gun.
Air enters through the pipe (A) and is fed into the main chamber (D) and the air-spring
return chamber (C).
Once the Solenoid valve (B) opens, air is allowed into the firing chamber (E), and the
pressure differential forces the outside sleeve to the left with great force, releasing the air
from the main chamber. The resultant air-bubble produces a shock-wave in the
surrounding water.
A single air-gun produces a pulse of energy (or signature) that looks something like this (the
upper plot shows the time-function, and the lower shows the frequency content of the
signature):-
We can't do much about the depth (see Ghosts in the next Chapter), but, if we build an
array of guns, made of different chamber sizes, and fire these simultaneously, we gain
several advantages.
1. We obviously increase the total amount of energy being directed into the ground for
one "shot".
2. The different chamber sizes will produce different bubble responses, and these will
tend to cancel out.
3. We improve the directivity of the source. Other than directly below the source
array, some frequencies will be attenuated by the spatial design of the array.
Here's a typical gun array, and the sleeve guns being deployed from the rear of the vessel.
Once again, here's a plot showing the time and frequency response of the entire array.
We have mentioned before the concept that an array or group of shots or geophones can
improve the spatial response of our source and/or receivers.
This plot shows the hemisphere of energy emanating from the source array shown above
(viewed from below).
The grey line shows the direction of the seismic line (the direction the boat is moving in)
with the arrow showing the vertical output from the array. The colours show the total
energy, red being the highest.
Change the frequency of interest and see how, although the downward energy is always
maximum, the energy at other angles is attenuated by the array design.
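If you want to experiment with the idea, here's a rough sketch in Python of the far-field response
of a simple in-line array of identical, simultaneously-fired guns. The gun count, spacing and
frequency are illustrative values only - this is not the array shown above:-

import numpy as np

def array_response(n_guns=6, spacing=3.0, freq=60.0, v_water=1500.0):
    # Relative amplitude versus take-off angle (0 degrees = straight down)
    angles_deg = np.arange(-90, 91)
    theta = np.radians(angles_deg)
    # Extra travel time between adjacent guns for energy leaving at angle theta
    delay = spacing * np.sin(theta) / v_water
    n = np.arange(n_guns)[:, None]
    # Add up the (unit-amplitude) contributions of each gun and normalise
    resp = np.abs(np.exp(2j * np.pi * freq * n * delay).sum(axis=0)) / n_guns
    return angles_deg, resp

angles, resp = array_response(freq=60.0)
print(resp[angles == 0])     # always 1.0 straight down
print(resp[angles == 60])    # attenuated off to one side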
Although the actual "shooting" of Marine data is made simple by modern Air-gun arrays,
there are still some operational difficulties associated with these sources. The high
operating pressures are very dangerous. Air at 80-100 psi is used in industry, with an
appropriate abrasive, to remove paint from metal. Air at 2000-5000 psi will remove almost
anything (including skin) without any abrasive.
The guns must be properly maintained - any gun failure will damage the desired array
output response and re-introduce bubbles into the signature. Deployment of the arrays is
made relatively simple by the use of floats etc., but the position of each array (multiple
arrays may be used for alternate shooting etc.) must be carefully monitored in all three
dimensions. We'll discuss the positioning of arrays in more detail when we discuss shooting
geometry.
Again, just like on land, many different types of seismic source have been used in the
marine environment. However, as almost all modern data is acquired with variations of
air-guns, we will restrict our discussion to those and move on to the recording instruments.
Recording the data - Offshore
The recording of offshore seismic data is complicated by the fact that all of the recording
equipment must be encased in an oil-filled cable or streamer (about 10 cm in diameter) that
is towed behind the vessel.
Unlike land geophones, the hydrophones used in marine recording normally use a piezo-
electric device to record the incoming energy. These hydrophones are connected together
in groups (just like land recording) and may be placed every metre or so along a 3000
metre streamer.
The front-end of the streamer is connected to the vessel by a complex system of floats and
elastic stretch-sections, which are designed to eliminate any noise reaching the streamer
from the vessel. The end of the streamer farthest from the vessel is connected by similar
stretch-sections to a tail-buoy. This buoy may contain its own GPS receiver and radar
reflector so that its position can be established.
Towing several kilometres of streamer raises a number of operational problems, including:
2. Determining the position of (possibly) multiple gun arrays, and every hydrophone
group in multiple streamers (vessels are now on the drawing board with up to 20
streamers!).
3. Getting all of the seismic signals from all of the recording groups (in all of the
streamers) back to the recording system on the vessel.
All of these problems are solved by the complex system of mechanical and electronic
systems inside or on the outside of the streamer itself. Here's some of them:-
Combinations of acoustic transceivers
(transmitter-receivers), operating at
frequencies above the normal seismic
frequencies can be used to establish
distances from one part of one streamer to
another and to the source array.
Other equipment, either in the streamer or in the instrument room on the vessel, can be
used for real-time processing of the seismic data. Other, more complex processing, can be
done while the vessel is turning around between seismic lines. (This can take some time -
remember the 3 kilometres of equipment out the back!).
Other techniques can be, and are used for the recording of marine data. Ocean-Bottom
Cables, where the receivers are actually placed in contact with the seabed, are becoming
increasingly common. These allow for the recording of S-Waves (remember, these don't
travel through water), and can be fixed in position to allow for the re-recording of the data
in future years. This technique of repeated recording, known as 4D recording (3-dimensional
seismic data recorded at time intervals), can be used to measure changes in an Oil or Gas field caused by the actual
extraction of the hydrocarbon.
Although the "per-kilometre" rate for offshore acquisition is much less than that for
onshore, there are still considerable costs involved. A single, modern, seismic streamer
costs about $1,000,000, and the running costs of personnel and equipment are also high.
The general parameters associated with waves, in both time and space.
The two types of waves encountered in seismic exploration, their differences, and all their
possible names!
One way of looking at seismic energy. A very brief mention of the wave equation used (in
an approximate form) in some of the more complex processing.
The other way of looking at the energy, and the one used most of the time. Some of the
physics associated with reflections and refractions.
More on ray paths. Reflection coefficients, and the arithmetic of reflection and refraction.
One of the more common problems associated with marine data. The duplicate reflections
produced by energy that has reflected back down from the sea surface.
The biggest single problem for the processing geophysicist! Events on our seismic section
produced by the energy reflecting more than once.
Page 03.09 - Diffractions
Another problem that can sometimes be useful data. Energy that is reflected back in all
directions from abrupt changes in the physical parameters of the Earth.
Waves
Making Waves
Whenever an acoustic source is detonated on or near the surface of the Earth, an acoustic wave is
produced that propagates away from the source.
Apart from effects very close to the source, this wave moves through the medium without
causing a net movement of the material - the medium (more or less) returns to its normal
state once the wave has passed through.
Waves can travel through a body (body waves), or along the surface of a body (surface or
interface waves), and, although we are not directly interested in surface waves they can be
used for shallow investigations (refraction studies) or they can cause problems by masking
body waves within seismic data.
Here are some of the parameters associated with a wave recorded as a function of time:-
A simple sinusoidal wave can be described by three parameters. Its amplitude, or maximum excursion
from the zero level, its frequency, or the number of "cycles per second" of the wave, and its phase -
the offset of the maximum value from time 0 measured in degrees along the cycle (1 cycle = 360
degrees). For more complex waves, the RMS or root-mean square amplitude may be more useful -
we'll come back to that later.
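Here's a small sketch in Python of those three parameters, and of the RMS amplitude, for a simple
sinusoid (the amplitude, frequency and phase used are arbitrary illustration values):-

import numpy as np

dt = 0.002                               # 2 ms sample period
t = np.arange(0, 1.0, dt)                # one second of "trace"
amplitude, frequency, phase_deg = 2.0, 25.0, 90.0

wave = amplitude * np.cos(2 * np.pi * frequency * t - np.radians(phase_deg))

print(wave.max())                        # 2.0 - the (peak) amplitude
print(np.sqrt(np.mean(wave ** 2)))       # ~1.414 - the RMS amplitude (peak / sqrt(2))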
The wavenumber is usually measured in "cycles per 1000 metres" and represents the frequency of the
wave in space. For a wave measured in both time and space (on, for example, a seismic section) the
velocity relationship (velocity = frequency / wavenumber) gives the horizontal velocity of the wave through the earth.
Here's another time-domain wave; use the scroll bars to change frequency, amplitude and
phase:-
Even restricting our discussion to body waves, we still have to consider two wave types - P & S Waves.
P and S Waves
Energy that is applied exactly at right-angles to an elastic body produces an elastic body
wave in which the particle motion is in the direction of propagation - a P-Wave.
Primary wave
Compressional wave
Longitudinal wave
Push-pull wave
Pressure wave
Dilatational wave
Rarefaction wave
Irrotational wave
A wave in which the particle motion is at
right-angles to the direction of
propagation is known as an S-Wave.
Secondary wave
Shear wave
Transverse wave
Rotational wave
Distortional wave
Equivolumnar wave
Tangential wave
The (somewhat gross) assumptions made in seismic processing allow only for P-Waves,
travelling through isotropic* and homogeneous* material.
* isotropic - the physical properties of the material don't depend on the direction of the wave -
it's the same in all directions.
* homogeneous - the physical properties of the material are uniform throughout the material.
Wavefronts
In much the same way as light, we can either consider the seismic energy as a wavefront, or
as a series of rays emanating from the shot.
For a wavefront, imagine a sphere of energy, expanding from the shot in all directions as
time increases. We are not normally interested in the upper part of this sphere, except for
where it interferes with the down-going energy - see the following pages on Ghosts and
Multiples. We can therefore concentrate on the lower half of this sphere - its expansion in
time being dependent on the velocity of the medium through which it is moving.
We can also view the exploding wavefront in plan view on one reflector. The highlighted
area shows the different regions of the horizontal layer that are "illuminated" by the shot
energy at different times (the buttons show the time in seconds).
As the wavefront expands, of course, the energy originally present in the shot is spread over a
larger and larger area as time increases.
This, coupled with energy losses due to friction within the rocks, causes a progressive loss of
energy with time that must be compensated for early in the processing.
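As a very rough illustration of that compensation, here's a sketch in Python of a simple spreading
(divergence) correction. The 1/time amplitude decay and the plain "multiply by time" gain used
here are simplifications for illustration only - real corrections use something closer to time
multiplied by the square of the velocity:-

import numpy as np

dt = 0.002
t = np.arange(dt, 6.0, dt)                  # two-way time (avoiding t = 0)

rng = np.random.default_rng(0)
trace = rng.standard_normal(len(t)) / t     # a fake trace whose amplitude decays as 1/t

corrected = trace * t                       # scale amplitudes back up with time

early, late = slice(0, 500), slice(-500, None)
print(np.abs(trace[early]).mean() / np.abs(trace[late]).mean())          # large ratio
print(np.abs(corrected[early]).mean() / np.abs(corrected[late]).mean())  # close to 1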
Remember that the above diagrams are greatly simplified - the actual expanding wavefront
will be like a very distorted sphere, with the distortion changing as it passes through each
major reflector.
Each point on each reflector will also act like a seismic source, with a sphere of energy
being radiated from the reflecting point, some of which will reach the surface. To put it in a
nutshell, here, in rectangular co-ordinates, is the equation relating the spatial and temporal
dependence of a seismic wave (P is the pressure and V the velocity of the medium):-
∂²P/∂x² + ∂²P/∂y² + ∂²P/∂z² = (1/V²) ∂²P/∂t²
If this gives you a headache (it does me!) don't worry. I just included it to show the kinds
of books I read and, although it will be referred to in passing (THE WAVE EQUATION), I
will not be explaining it in great detail.
Perhaps, because of its complexity, the processing geophysicist can concern himself
99.95% of the time with the relative simplicity of the ray path assumptions which we will
now discuss!
Ray Paths
Ray paths, or lines drawn on a cross-section showing the path of the energy from the shot
to the receiver, are a useful way of showing the total travel-time of a seismic "ray".
Now we have lots of reflectors/refractors.
As well as the ray path shown here, we will
also have reflections from each interface.
Finally, here's an interactive model. Click close to the surface to move the shot position, or
on a different horizon to map reflecting rays from that horizon. Use the buttons to change
the structure in the model and shoot some rays.
Pretty disgusting isn't it! Note that, on the more complex models, some of the ray paths
just stop. These have reached a critical angle, and no longer refract through a layer.
Just keep this model, and the complexity of the ray paths in mind during the later parts of
this course.
More Ray Paths
We can also use our ray path models to determine the actual amplitude of the reflections
returned from any layer. Referring back to our initial ray path diagram, the reflection
coefficient (RC) at an interface between layer 1 (the layer the energy is travelling in, with
velocity V1 and density D1) and layer 2 (the layer beyond the interface, with velocity V2 and
density D2) is:-
RC = (V2 * D2 - V1 * D1) / (V2 * D2 + V1 * D1)
A reflection coefficient of, for example, 0.3 implies that 30% of the energy reaching a
reflecting interface is returned towards the surface. The remaining 70% of the energy
passes through the interface. This remaining energy (1 - RC) is known as the transmission
coefficient. As we are only considering P-waves, these terms should more correctly be
referred to as the P-wave reflection and transmission coefficients.
The product of velocity and density for any layer is often referred to as the acoustic
impedance of that layer. The above equation then becomes simply difference divided by
sum of the acoustic impedances (AI's). If we use the normal units of metres/second for
velocity and grams/cc for density then the units for AI are somewhat complicated. For this
reason the term "AI units" is often used.
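Here's a small sketch in Python of that arithmetic. The sandstone-like values in the first example
are purely illustrative; the second example uses the water and air values quoted on the Ghosts
page later in this chapter:-

def reflection_coefficient(v1, d1, v2, d2):
    # Normal-incidence P-wave RC for energy passing from layer 1 into layer 2
    ai1, ai2 = v1 * d1, v2 * d2              # acoustic impedances
    return (ai2 - ai1) / (ai2 + ai1)

rc = reflection_coefficient(2000.0, 2.0, 3000.0, 2.3)    # illustrative interface
print(rc, 1 - rc)        # reflection and transmission coefficients (~0.27, ~0.73)

# The sea surface seen from below (water into air)
print(reflection_coefficient(1500.0, 1.025, 350.0, 0.0013))    # about -0.9994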
Generally speaking, both velocities and densities increase with depth. If density is roughly
proportional to velocity then for interfaces where the velocity increases across the interface
the reflection coefficient will be positive. RC's are negative when moving from a high
velocity to a low velocity.
What does a negative reflection coefficient imply? It simply means that the energy is
reflected in the opposite sense; a positive pressure wave becomes a negative one. Or, in
terms of the seismic trace:-
(An incoming "wavelet"; its reflection where the RC is positive; and its reflection, reversed in
polarity, where the RC is negative.)
The above RC equation is, of course, an approximation. In fact it only holds true for rays at
right-angles to an interface (which the ones above are obviously not!). We'll come back to
this point much later during our discussions on AVO (Amplitude Versus Offset).
We can also use ray paths to examine some of the problem rays associated with seismic
acquisition and processing. These are generally rays that have reflected more than once
before returning to the receiver.
Ghost Reflections
One of the commonest forms of undesirable ray associated with marine seismic acquisition
is the ghost reflection. The sea surface is an almost perfect reflector. Energy travelling
from the shot (under the water) is travelling through a medium with a velocity of about
1500 metres per second, and a density close to 1.025 grams/cc. The air above the sea
surface has an acoustic velocity of about 350 metres per second, with a density of about
0.0013 grams/cc. Plugging these into the "RC" equation gives a reflection coefficient from
below the sea surface of about -0.9994, or almost perfect reflection (with sign reversal).
Here's the ray paths showing the possible routes for ghost reflections:-
1. Direct ray
Given that the actual ray paths will have more curvature than those shown in the diagram,
the ray paths from the shot to the surface and back down again will be almost vertical. If
we assume that they are vertical, we can calculate the time difference (in milliseconds)
between the direct ray and the ghosted ray as approximately 4/3 times the depth of the shot
in metres (we're assuming a water velocity of 1500 m/s). The same calculation can be done
for the receiver.
If, for example, our shot depth is 6 metres, then the time difference between the direct and
ghosted rays will be about 8 milliseconds. If we assume that the shot energy consists of a
set of constant frequency cosine waves added together (we'll see later that this is a valid
assumption), we can consider the effect of the ghost on the individual frequency
components.
(The 40 Hz, 80 Hz and 125 Hz components of the source, each shown with its delayed, sign-reversed
ghost added to the original.)
You can see that the component frequencies of 40 and 80 Hz are actually increased by the
ghost, whereas the 125 Hz component (in this case) is completely removed! The direct ray
and the ghost are exactly 180 degrees out of phase, and one removes the other.
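Here's that calculation as a few lines of Python (a 6 metre source depth, as in the example above):-

import numpy as np

depth = 6.0                                  # metres
v_water = 1500.0
delay = 2.0 * depth / v_water                # 0.008 s, i.e. the 8 ms above

for freq in (40.0, 80.0, 125.0):
    # Combined response of the direct arrival (+1) and the delayed, reversed ghost (-1)
    response = abs(1 - np.exp(-2j * np.pi * freq * delay))
    print(freq, round(response, 2))
# 40 Hz -> 1.69 (boosted), 80 Hz -> 1.81 (boosted), 125 Hz -> 0.0 (removed)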
Multiple Reflections
Given that the sea surface is an almost perfect reflector, and that the seabed can also be an
equally good reflector, it's not surprising that these cause other problems for the processing
geophysicist.
The water layer is one of the prime candidates for the generation of multiple reflections (or
multiples), which occur when the energy reflects more than once on any reflector. It should
be stressed that the water layer is not the only source of multiples. Any shallow layer, with
sufficiently strong velocity contrast, can become a source of multiples on both marine and
land data.
Here's the ray paths showing the possible routes for multiple reflections from the water
layer:-
The amplitude of the multiples will normally be sign-reversed (again due to the sea-surface
reflection) and will appear as an image of the primary reflector, usually some constant time
below it. Subsequent multiples will also exhibit this "periodicity". The amplitude of the
multiple can be anything up to a complete reversed image of the primary!
Multiples generated by the water layer will
cause "peaks" in the frequency spectrum
at frequencies which are a multiple of
(roughly) 750/(Water Depth).
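Here's that relationship in a few lines of Python (the 150 metre water depth is just an
illustration):-

water_depth = 150.0     # metres
v_water = 1500.0
period = 2.0 * water_depth / v_water     # two-way time in the water layer
print(period)                            # 0.2 s between successive multiples
print(1.0 / period)                      # 5 Hz spacing of spectral peaks (= 750 / depth)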
The possible multiple paths for rays are endless; here are a few possibilities:-
None of those shown above are generated in the second layer - try to imagine the total
combinations possible! In practice, each pair of reflectors is a multiple generator - it's
simply a question of how strong the multiple is!
We will examine different techniques for multiple removal later in the course.
Diffractions
Two other types of ray commonly appear on our seismic records: refractions in the early part
of our records, caused by the energy refracting along boundaries within the rapidly changing
shallow velocity layers; and diffractions. We'll look at refractions briefly in
various parts of this course but they generally don't cause the processing geophysicist any
problems - they are removed from the front-end of the records at an early stage of the
processing. As well as being easy to remove, these refractions can also give us important
information about the shallow velocities on land data - more on this later as well.
The seabed here is very hard, and has (to use a technical
term) "lumps" of some kind scattered about on it. These
lumps are, effectively, single points that reflect energy back
in all directions.
This diffracted energy appears at the same time, and with the same average velocities
(in fact, a whole range of velocities) as our primary reflectors, and
can only be removed by its apparent dip (the slope of the event on
the section).
Any isolated point, either the
"lumps" shown above, or, for
example, the end of an interface
that truncates against a geological
fault, will scatter energy from all
directions back towards its
source.
All of the data recorded also suffers from other forms of undesirable "noise". Some of this
may appear as continuous events on our section (coherent noise), or as a background to all
of our data (random noise). Both of these types of noise are attenuated during the
processing, but the final level of noise may still be sufficient to mask primary reflections on
our seismic section - in some areas it's difficult to find any signal!
Chapter 4 - Geometry
We need to start the processing sequence by telling the computer where all our seismic data
came from!
We need to supply the X, Y and Z co-ordinates of every shot and geophone station for the
line. Luckily, in many cases, we can rely on a regular shooting pattern to simplify the
input.
The simplest and most regular geometry - a review of the important parameters necessary
to specify regular shooting.
We overlap the data from many shots to provide a redundancy of sub-surface information.
The calculation of the fold (the level of sub-surface redundancy) is simple for regular
shooting, and this page will calculate it for you!
We no longer have regular shooting! Shots and geophones can be positioned anywhere in
three dimensions!
The height above sea level of all of our shots and geophones is as important as their position
in the other two dimensions.
A simplified look at some of the calculations necessary to correct for data shot and
recorded at different elevations.
Although 2D seismic data provides a cross-section of the earth, it is not always all in the
same plane. The need for 3D acquisition and processing.
Marine 2D Geometry
We'll start with the simplest geometry - a marine vessel with just one source and streamer
shooting a conventional (2D) seismic line.
This diagram, typical of that provided to a processing centre by the acquisition crew,
provides all of the basic geometric information necessary to process the data. It may be
necessary, however, to check the logs provided for each seismic line to ensure that none of
these parameters varies dramatically from one part of the survey to another.
The depth of the cable is critical for two reasons. One, we need to (eventually)
correct all of the data to Mean-Sea-Level (MSL), and two, remember the
Ghost problem from Chapter 3? The cable ghost in this case will be at about
750/7 or 107 Hz - well above our normal seismic frequencies.
The group interval (the distance between hydrophone group centres) defines
the basic cable geometry. This, together with the information that we have
240 recording channels, enables us to check the overall cable length (240 X
12.5 = 3000 metres). The actual "live" part of the cable will be 12.5m less
than this - from the centre of group 1 to the centre of group 240.
The (incorrectly marked!) CDP position actually refers to the position of the
mid-point between the shot and the centre of the first recording group. This
would be the "common-depth-point" (CDP) if we only had perfectly horizontal
reflectors. This will become important when we decide how to label the shot
positions on our final section.
The depth of the shot is obviously as important as the depth of the streamer,
and may affect the signature of the source array (remember that the bubble
period of an Air-Gun depends on its depth). The ghost frequency here will
be about 150 Hz - even more above the frequencies of interest.
Probably the single most important number, and the one that most errors are
made on, the offset or distance from the shot to the centre of the nearest
recording group. Although (for 2D recording) we assume that the streamer is
in a straight line behind the vessel, we may have a lateral, or cross-line offset if
the shot and streamer are not in the same "plane" (for example, the shot
being on the port side of the vessel, and the streamer on the starboard side).
The distance from the navigation recording antenna to the shot is not of great
importance to the seismic processor, but must be compensated for when the
navigation data is processed.
All of these parameters are important, and all are required to process the data correctly.
The paper (or electronic) logs from the acquisition must be checked to ensure that none of
these parameters vary widely along or between lines - if they do then we may need to
compensate for it in the processing.
It's about time we looked at some seismic! Here's one shot record from marine geometry
similar to that shown above:-
The annotation down the left-hand side shows the two-way time in seconds, and the
numbers across the top are the channel numbers, or numbers corresponding to the group
numbers on the cable. What are the differences between this record and the geometry shown
above?
Well, firstly we only have 120 channels, and, as the energy arrives first on channel 1, it's
safe to assume that channel 1 is nearest the shot. Note also that channel 116 (5 from the
right) looks to be a dead trace. This may well be a dead or bad channel throughout this
line, we'll need to check other shots and make sure we exclude it from the processing (it
may appear dead, but could just contain random noise).
The shallow part of the record, at longer offsets (further from the shot), becomes dominated
by refractions. These travel almost directly down from the shot, along an interface between
two layers at high velocity, and then back up to the receivers to arrive before the energy
coming directly from the shot through the water. We'll remove most of this refraction
"noise" in the processing.
The line-up of events on the left hand side of the section at about 1 second are reflections,
although they probably include some multiples (note the ringing associated with short-
period multiples). We'll discuss these in some detail later, in particular the curvature of
these reflections.
There is one other parameter, missing from all of the above diagrams, that is vital for the
processing of "regular" seismic data (normally 2D marine data).
We'll now go on to discuss this parameter, and its relevance to the processing sequence.
Multi-Fold Geometry
The other parameter that is important for regular shooting is the distance between shots.
This may be known as the shotpoint interval, the shot interval, the shot move-up, etc. It's
basically (in the case of marine data) the distance the vessel moves before firing the gun
array again.
The blue line on the sub-surface shows the sub-surface coverage for
this one shot - exactly half of the spread length.
This was, until the 1950's, the way that all marine data was
shot. The section produced from the placement of all the
shot records side-by-side was known as a 100% section (as
all of the sub-surface was covered).
If we reduce the distance between shots, we obtain multiple coverage of the sub-surface
points.
In this simple case, moving the shot by "S/X" covers each point under the ground "X/2"
times.
This is known as the fold, CMP fold or CDP fold, and provides a redundancy of data that
can be used to attenuate random noise, and to provide additional information on the
reflecting horizon.
Another way of specifying the fold refers everything to the 100% section mentioned above.
For example, 6-fold data is sometimes referred to as a "600%" section.
Although the theoretical fold is simple to calculate, the actual fold may be complicated if
the shot interval is not a multiple of half of the recording group interval - we'll look at this
next. In the meantime it's probably worth mentioning one additional constraint on the
shotpoint interval for marine data.
The shotpoint interval can be reduced if we are recording less than 6 seconds of data, although we
then increase the risk of the energy from one shot interfering with the next, and may need multiple
source arrays to allow for the gun recycle time.
The calculation of CDP fold is normally quite simple for regular shooting.
CMPINT = GINT / 2
NFOLD = ( NCHAN * GINT ) / (2 * SPINT )
Number of CMPs on line = ( NCHAN / NFOLD ) * ( Number of shotpoints -
1 ) + NCHAN
As the computation can be complicated, here's a quick program to compute the fold using
the above formulae. Plug in the first three values (any units, as long as they are consistent),
and press the button to compute the CMP interval and fold.
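If the interactive version isn't available (or you'd rather do the sums yourself), here's a minimal Python sketch of the same formulae - the function name and the example values are illustrative only, not part of any processing system:

    def cmp_fold(nchan, gint, spint, nshots):
        """CMP interval, nominal fold and number of CMPs for regular 2D
        shooting (any units, as long as they are consistent)."""
        cmpint = gint / 2.0                      # CMP (mid-point) interval
        nfold = (nchan * gint) / (2.0 * spint)   # nominal CMP fold
        ncmps = (nchan / nfold) * (nshots - 1) + nchan
        return cmpint, nfold, ncmps

    # Example: 240 channels, 25 m groups, 25 m shotpoint interval, 1001 shots
    print(cmp_fold(240, 25.0, 25.0, 1001))       # -> (12.5, 120.0, 2240.0)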
All of this assumes totally regular shooting. Any shots missing from the sequence, or any
consistent bad traces recorded from the receiver groups, will reduce the overall fold of
data.
Land Geometry
Seismic data recorded on land does not normally follow the same regular pattern as that
shot at sea. It may be impossible to arrange our shots and / or geophones in a straight line,
or to place them with totally regular spacing.
The above problems can be simply stated. We need to know the precise position in 3
dimensions of every shot and every geophone station for our line, together with information
on which geophone groups are live for each shot.
The enormous amount of additional
information required for the processing of
land data was, until fairly recently,
supplied on hand-written documents.
Elevations and shot depths
The elevation problem is a simple one. If
our shots and geophones are not all at the
same elevation (i.e. not marine data), then
the reflections from any event will be
distorted.
The distortion is (more or less) the same for every reflector - it does not change with time.
For this reason it is known as a static correction (we'll mention dynamic corrections later),
and correcting for this is a major consideration in land data processing.
The elevations, or
heights above sea level,
are still often measured
in the field by
conventional surveying
methods.
Here's an (old) hand-written chart from
the field giving (at the top) details of the
elevations and static calculations.
In order to remove the effects of the different elevations of shots and geophones (and other
changes in the near surface), we need to time-shift or static correct all of our data to a new
datum at an early stage in the processing.
This datum may be a fixed datum, at a fixed elevation (for example, sea level), or may be a
floating datum - a smooth line through the mean of the shot and geophone elevations.
Either way we need to consider these initial Field Static computations in some detail.
Field Statics
In order to establish the time corrections (statics) for every shot and geophone station in
our line, it's necessary (once again) to look at some ray paths.
Given a weathering (low velocity) layer of depth "d", we can consider three types of ray
paths that will be recorded on the early part of our seismic field records.
There will be direct rays, that travel either through the air (air waves) at very low
velocity, or through the top of the weathering layer with a horizontal velocity of V1. There
will be reflected rays, travelling at velocity V1 and reflecting from the boundary at
the base of the weathering layer, and refracted waves, reaching critical angles and
travelling along the base of the weathering layer with (predominantly) velocity V2.
Here's how they will look on our shot
record.
Here's the first part of a record shot as part of an LVL (low velocity layer) survey. This
survey, shot especially to establish the shallow information necessary to static correct our
data, is usually shot with a low energy source (maybe even the hammer and plate shown
before!), and low geophone group spacing.
The "picks" of first arrivals (usually made by the processing geophysicist "helping" the
automatic picking of the computer) can be fitted with straight lines (probably least-
squares), and the various velocities established. Three distinct velocities (and hence three
shallow layers) can be identified on this record.
Once the velocities have been established, the intercept times (found by extrapolating the
lines back to zero-offset) can be used to establish the depths of the layers. For the two layer
case shown at the beginning of this page the depth can be calculated from:-
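In its standard two-layer form (using the V1 and V2 defined above, and the weathering depth "d") this is:

    d = \frac{t \, V_1 V_2}{2\sqrt{V_2^2 - V_1^2}}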
Where "t" is the intercept time.
All the above assumes two things. 1) that the events are horizontal, and 2) that the
horizontal velocities we calculate are also applicable vertically - there is no anisotropy*.
The first problem can be solved by shooting into both ends of the spread and averaging the
results (this "cancels out" the effects of dip). The second problem can only be solved by
other techniques such as "uphole-surveys" where geophones are placed down the hole and
the uphole times used to calculate true vertical velocities.
* Anisotropy - where the seismic velocity depends on the direction in which it is measured.
This only touches on the complex area of field statics. Other calculations need to be made
to tie uphole information to the LVL results and the whole process, although often done in
the field, can get very complex. As a guide to the sorts of velocities we may meet in LVL
surveys, and later in the normal processing sequence, here's the theoretical velocities of
some of the substances the seismic wave has to go through:-
All of the above static calculations, as usual, imply some level of approximation. The LVL's
are only run periodically along the line and the velocities and depths are interpolated for
every geophone and shot station.
Because of these approximations, and errors in elevations, shot depths and positioning, the
field statics are almost always (to some degree) in error. This necessitates the use of
residual static calculations in the processing sequence where the seismic data itself is used to
refine (and sometimes recalculate) the field statics. More on this (much) later.
Crooked Lines
There are obvious situations in onshore exploration where we are unable to shoot straight
lines. In these cases a "crooked line" can be shot.
The scattered mid-points are then analysed
to determine the optimum "CMP"
(common mid-point) line that passes
through as many mid-points as possible.
If we stretch out our line of CMPs (or, as they are more commonly but incorrectly called, CDPs!), we
can see the spread of mid-points around our desired "line". We can also see that only those
mid-points within the pink rectangle will be included in our processing - maybe our choice
of binning parameters should be modified?
If we check the fold of our CMPs we can see that, excluding the taper-on and taper-off of
fold at the ends of the line, we reach our nominal 15 fold for most of the line, only
dropping to about 7 fold in areas where we have excluded lots of mid-points. A decision to
increase the bin "width" would have to depend on our knowledge of the area. Would the
noise attenuation we achieve by widening the bins be offset by the errors introduced by
including data from further off the line? If we have a major North-South structure in the
area then the cross-dip within each arbitrary CMP may cause problems. If the data is of
very high quality, maybe we can measure this cross-dip and compensate for it when we
combine our mid-points together. All of these decisions have to be made every time a
crooked line is processed, and may best be decided by test runs using different parameters.
There are as many problems caused by crooked-line shooting and processing as are solved
by the technique. It is, however, the only technique possible in many areas.
Before moving into other dimensions, here's an example 60-channel land shot record.
This is from a high-resolution survey (only 1 second of data), with the shot obviously in the
centre (a split-spread), and is of very high quality for land data.
The problems with 2D
Although 2D seismic data is still common (especially in frontier areas), there is increasing
use of 3D seismic acquisition and processing, which solves some of the problems associated
with 2D seismic data. What are these problems?
The first is "side-swipe" - energy reflected from structures out of the plane of the 2D line. This problem affects all 2D seismic data to some degree. Although, as we have seen, we can
"tune" our shot arrays to reduce the energy going sideways, some energy (at some
frequency) will get through.
The "canyon" shown above could, of course, be a buried canyon deep in the sedimentary
section, and could be just as likely in both onshore and offshore surveys. Similar effects
can occur close to any steep reflector - the sides of buried sea cliffs, the edges of salt domes,
etc.
The only way to solve both of these problems is to cover the entire survey area with a series
of very closely spaced seismic lines, producing a 3D volume of seismic data.
3D Geometry
In order to produce a 3D volume of data, we need to acquire seismic data with a line
spacing similar to the spacing between the CMP's along each line - typically 12.5 to 50
metres.
Both land and marine data are acquired using multiple sources and geophone arrays, to
facilitate the acquisition of the large volumes of data required. The geometry for land data
can be extremely complex, essentially shooting multiple crooked lines at once!
This shows the mid-point coverage for one
sail-line (6 CMP lines) for just a few
hundred shots.
We will come back to the binning problems on the next page, but will now briefly consider
the problems of data volumes associated with 3D data.
Here's a new area ready for exploration. The types of surveys that may be required for the
development of the area are:-
1. A 2D reconnaissance survey
2. A detailed 2D survey
3. A 3D survey
The initial 2D survey will determine the predominant direction of dip in the area, which
will be used for the subsequent detailed 2D survey (to minimise the side-swipe problem).
Although not strictly necessary, the 3D lines are often also orientated in this direction.
The volume of navigation data increases by a
similar amount, with an enormous amount
of navigation data processing required
before the seismic processing can really
begin.
3D binning
It's almost impossible to visualise the
computations necessary for the binning of
3D data. The area needs to be divided into
regularly spaced rectangular bins, and all
of the mid-points in each bin cross-indexed
by line, shot and group number. We may
have 120 mid-points, from different shots
and different lines in just one 3D CMP bin,
and several tens of thousands of bins in the
survey.
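As a purely illustrative sketch of that bookkeeping (in Python, with made-up variable names, and assuming the mid-point positions have already been computed and the bin grid is aligned with the co-ordinate axes):

    from collections import defaultdict

    def bin_midpoints(midpoints, x0, y0, dx, dy):
        """Assign each mid-point to a rectangular bin and cross-index it.

        midpoints : list of (x, y, line, shot, channel) tuples
        x0, y0    : origin of the bin grid
        dx, dy    : bin dimensions (for example 12.5 m x 25 m)
        """
        bins = defaultdict(list)
        for x, y, line, shot, channel in midpoints:
            ix = int((x - x0) // dx)         # in-line bin index
            iy = int((y - y0) // dy)         # cross-line bin index
            bins[(ix, iy)].append((line, shot, channel))
        return bins

    # The fold of any bin is then simply len(bins[(ix, iy)]).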
The total fold in any bin
is normally examined
graphically. This is also
normally produced on
the shooting vessel.
Here's an enlargement of the area marked above - now you can see the individual bins.
We will see later that not only is the total fold important, but that we also need a good
spread of source-receiver offsets in each bin. The second button shows the fold for just the
far offsets (the last third of the streamer). Note that some lines (shown in white) have no
contributions from these offsets.
The third button shows the azimuth, or the average direction of each mid-point from the
centre of the bin. If this shows large areas with a constant azimuth, we may need to adjust
our binning parameters to avoid the overall spatial "shift" that this may introduce.
As we mentioned before, 3D binning is just like crooked line binning, with multiple lines
being binned at once! The only advantage is that the final output bins normally follow a
regular grid. As with crooked line binning, it may be necessary to test various binning
parameters on the seismic data itself, and make a qualitative decision as to the optimum
binning parameters.
Remember, for the correct processing of either 2D or 3D land or marine data, that we must
get the geometry correct or all that follows is a waste of time (and money).
Chapter 5 - Recording the data
This Chapter looks at how the voltages produced by the energy arriving at the geophones
(or hydrophones) are stored on magnetic tape for future processing.
There's "many a slip.." as the old proverb says between geophone and tape, and we'll try to
address some of the problems and limitations associated with digital recording.
For those of you unfamiliar with the number systems used by computers, the first three
pages of this Chapter provide a brief introduction to number systems ...
Some of the problems of recording data in a digital form are covered, with audio-visual
output!
One particular problem, that of the digital representation of the different frequencies
present in the data (and their reconstruction) is discussed in some detail.
The seismic data arrives at all our geophone groups at once! How the multiplexing of data
solves the problem when no temporary storage is available.
A quick look at recording filters, with a digression into dBs and Octaves.
The remaining pages in this Chapter cover the various standard tape formats currently in
use in seismic field recording and processing. This page presents an overview of these
formats.
SEG-A, B, C and D are the most common formats for field data still in use. This page looks
briefly at SEG-A, B and C ...
... before moving on to an in-depth look at SEG-D, the most common format at present.
Finally SEG-Y, a format more often used for processed data but becoming popular for field
data that has been partially processed in the field.
Number systems
A number system is defined by the base (or radix) it uses, the
base being the number of different symbols required by the
system to represent any of the infinite series of numbers.
The binary system, based on the number 2, was used by some tribes and, together with the
systems based on 8 and 16, is used today in computer systems.
Here's an example of a large decimal number (as an exercise, copy this onto your own
cheque and send it to the author of this course!). The text expansion of the number gives
some clue to the way that the position of a symbol in a number defines its value.
The number 21698.00 can be expressed in the following way:-

  2 x 10^4 = 2 x 10000 = 20000
+ 1 x 10^3 = 1 x 1000 = 1000
+ 6 x 10^2 = 6 x 100 = 600
+ 9 x 10^1 = 9 x 10 = 90
+ 8 x 10^0 = 8 x 1 = 8
= 21698.00
Like all number systems, the decimal system uses positional notation to provide the
exponent (the power of 10) that the symbol needs as a multiplier; the "hundreds", "tens"
and "units" used in primary schools.
We can use any other integer as a base, here's 21698 decimal expressed in octal (to the base
eight):-
  5 x 8^4 = 5 x 4096 = 20480
+ 2 x 8^3 = 2 x 512 = 1024
+ 3 x 8^2 = 3 x 64 = 192
+ 0 x 8^1 = 0 x 8 = 0
+ 2 x 8^0 = 2 x 1 = 2
= 21698
So 21698₁₀ = 52302₈; we'll use subscripts to indicate the base, but normally omit them for
decimal numbers.
Octal numbers need just 8 symbols. Unlike decimal numbers we don't use 8 and 9. Any
number base needs as many symbols as the base, and "x" digits in this base will store base^x
numbers (from 0 to base^x - 1). For example, 3 decimal digits will store 10^3 or 1000 numbers
(0-999). 3 octal digits will store 8^3 numbers (0-777₈ or 0-511₁₀).
Conversion from one base to another is fairly simple - converting back to decimal is done
like the example shown above, multiply each digit (from the right) by successive powers of
the base.
To convert a decimal number to another base, divide by the base and keep remainders
(until you can't do it any more). Here's 21698 again, this time converted to hexadecimal
(base 16):-
21698 ÷ 16 = 1356 remainder 2 = 2₁₆
 1356 ÷ 16 =   84 remainder 12 = C₁₆
   84 ÷ 16 =    5 remainder 4 = 4₁₆
    5 ÷ 16 =    0 remainder 5 = 5₁₆
We then read the last column upwards to give 54C2₁₆ = 21698₁₀. We need 16 symbols to
specify hexadecimal numbers so, by convention, we use the letters A-F to represent the
decimal numbers 10-15.
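The "divide and keep remainders" recipe translates directly into a few lines of Python (a quick illustrative sketch - Python's own hex() and bin() functions do the same job):

    def to_base(n, base, digits="0123456789ABCDEF"):
        """Convert a non-negative decimal integer to the given base by
        repeated division, reading the remainders in reverse order."""
        if n == 0:
            return "0"
        out = []
        while n > 0:
            n, remainder = divmod(n, base)
            out.append(digits[remainder])
        return "".join(reversed(out))

    print(to_base(21698, 8))     # 52302
    print(to_base(21698, 16))    # 54C2
    print(to_base(21698, 2))     # 101010011000010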
Let's now move on to the number system used in computers - the binary system, or
numbers to the base 2.
Binary numbers
Most electrical equipment (computers included!) only really
understands two states - on or off.
We can convert a number to base 2 (which needs just 2 symbols) in the same way as any
other base. Once again here's the conversion of 21698₁₀ to binary:-
You can see that binary numbers are a lot longer than the decimal equivalent. Each binary
digit (or bit) can only store one of two possible values, so the fifteen bits shown here will
only store a number up to 2^15 - 1, or 32767₁₀.
A group of 8 bits, or one byte, is generally the smallest unit that can be accessed by a
computer. This will store integers from 0-255, or, if we use one bit to indicate that the
number can be either positive or negative (a sign-bit) we can store values from -128 to +127.
To avoid writing long strings of binary numbers, programmers make use of the fact that
every 4 bits (half a byte, or a nibble) can be directly converted into one hexadecimal digit.
Here's the conversion table from decimal to hexadecimal to binary for the numbers 0 to
15:-
So, instead of writing 101010011000010₂, we can left pad it to 16 bits and convert each
nibble into hexadecimal; 0101₂ = 5₁₆, 0100₂ = 4₁₆, 1100₂ = C₁₆, 0010₂ = 2₁₆, giving 54C2₁₆ =
101010011000010₂ = 21698₁₀.
Here's some typical number formats, and the range of values they can store:-
* The format and range of floating point numbers depends on the manufacturer of the
computer!
Binary arithmetic
Binary arithmetic is simple enough for a computer to understand!
Here's the sort of thing you might once (long ago) have done in school - a decimal addition
table. Next to it is the (somewhat simpler!) binary version:-
Decimal:

 +   0  1  2  3  4  5  6  7  8  9
 0   0  1  2  3  4  5  6  7  8  9
 1   1  2  3  4  5  6  7  8  9 10
 2   2  3  4  5  6  7  8  9 10 11
 3   3  4  5  6  7  8  9 10 11 12
 4   4  5  6  7  8  9 10 11 12 13
 5   5  6  7  8  9 10 11 12 13 14
 6   6  7  8  9 10 11 12 13 14 15
 7   7  8  9 10 11 12 13 14 15 16
 8   8  9 10 11 12 13 14 15 16 17
 9   9 10 11 12 13 14 15 16 17 18

Binary:

 +    0    1
 0   00   01
 1   01   10
The logic in the binary table is almost obvious. If both inputs are the same, output a zero.
If they are different, output a one. If both inputs are "1" then carry one to the next
column.
Here's a simple example of binary arithmetic using 16-bit words to add 12345₁₀ to 14872₁₀:-

   12345    0011 0000 0011 1001
 + 14872    0011 1010 0001 1000
   27217    0110 1010 0101 0001
Subtraction is done by adding negative numbers, where negative numbers are coded in 2's
complement. This technique, known as 10's complement in the decimal world, was known
to abacus operators long ago and is very simple.
Suppose we wish to subtract 12345₁₀ from 14872₁₀ in binary. We do this by adding -12345
to 14872. To convert 12345 to -12345, we take the binary version (0011000000111001),
reverse each bit (1100111111000110) and add "1" (1100111111000111). This gives us the 2's
complement of 12345.

  -12345      1100 1111 1100 0111
 + 14872      0011 1010 0001 1000
    2527    1 0000 1001 1101 1111

This gives the correct answer (2527₁₀), but one bit overflows off the end of the answer. This
bit is ignored during subtractions.
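As a quick sketch (not any particular system's code), here's the same 16-bit arithmetic in Python, where masking with 0xFFFF plays the part of ignoring the overflow bit:

    MASK16 = 0xFFFF                         # keep only 16 bits

    def twos_complement(n):
        """Negate a 16-bit number: invert every bit and add one."""
        return ((~n) + 1) & MASK16

    def add16(a, b):
        """Add two 16-bit words, discarding any overflow bit."""
        return (a + b) & MASK16

    print(format(twos_complement(12345), "016b"))   # 1100111111000111
    print(add16(14872, twos_complement(12345)))     # 2527, i.e. 14872 - 12345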
Multiplication is done by shifting and adding (much like long multiplication by hand, but
only multiplying by "1" or "0"), and division is done by repeated addition of 2's
complements (repeated subtraction) and shifting, using the overflow bit to decide when to shift! All of this can be
accomplished by very simple logic circuits on the computer's CPU chip.
When floating point numbers are being added, the exponents must be normalised before
addition. This is like trying to add 1.0e+3 to 1.0e-3 on your calculator. The calculator goes
through this normalisation to give the correct answer (1.000001e+3).
Here's a version of that previous addition in IBM floating point (one of the common
floating point formats). This format uses 32 bits for each number:-
1.00E+03 is 1.953125₁₀ x 2^9 and 1.00E-03 is 1.024₁₀ x 2^-10. Each is coded as a 32-bit word
(sign bit, exponent and mantissa); the exponents are then normalised and the mantissas added.
The answer converts to 1000.000976₁₀, which is as close as we can get to 1000.001 in this
format.
Finally, here's a string of binary bits (128 bytes in all) - what do they signify? Without more
information we can't tell: it could be a piece of
seismic trace, or CD music.
We will be examining some formats for seismic tapes once we've looked at digitising
problems.
Digitisation
Almost everything we see and hear these days comes from a digital source. Special effects
in the cinema or on TV depend on digital techniques, CD's are digital music. Even radio
and TV are about to be transmitted as digital signals. This text is just a series of black dots
defined by a series of binary numbers in a disk file.
Digital recording of seismic data has been in use since the 1960's, and, in deciding how to
record our data, we have to make three decisions.
1. What is the largest possible amplitude in our incoming analogue signal, and how do
we scale this for storage?
2. What is the smallest possible number (other than zero) we wish to record (relative to
the largest)?
3. How often should we sample the incoming signal (the sample period)?
The scaling of the data can be arbitrary - there is no guaranteed relationship between the
minute voltages produced at the receivers and the numbers we store on tape. These
numbers could represent, for example, millivolts, but may be scaled in almost any way - we
need to scale them so that they "fit" the numeric range we are using.
The total possible range of numbers (from the smallest to the largest) is often expressed as
the dynamic range of the system. This is usually expressed in decibels (dB), which is a
logarithmic scale where every 6dB represents (roughly) a doubling of amplitude. Each bit
of an integer system will thus give 6dB of dynamic range. 16-bit words used for some
systems (and CD's!) hence have a 6 x 16 or 96 dB dynamic range, just about equivalent to
the total audible volume range. Floating point systems now in use have much larger
ranges.
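As a quick check of the "6dB per bit" rule of thumb, here's a small Python sketch (it uses the amplitude form of the decibel defined later in this chapter):

    import math

    def dynamic_range_db(bits):
        """Approximate dynamic range of an integer system with this word length."""
        levels = 2 ** bits                   # largest-to-smallest amplitude ratio
        return 20.0 * math.log10(levels)

    print(round(dynamic_range_db(16), 1))    # 96.3 dB - the "CD" range
    print(round(dynamic_range_db(24), 1))    # 144.5 dB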
The choice of sample period is slightly more complicated, and we'll leave this until the next
page!
In order to illustrate some of the pitfalls associated with digital recording, the following
examples show a sine wave of 150 Hz sampled every 2 milliseconds, a digitised image, and a
digitised sound. If your system has a sound output, use the control buttons to play the sound.
All of these will be (mis-)treated in the same way to illustrate some of the problems
associated with digital recording.
This is our original data. The sine wave is sampled every 2 milliseconds, and the sound is
sampled at 1/8th of a millisecond. Both the sound and the picture use 256 levels of
digitisation. In the picture this represents a colour, while in the sound it's an amplitude
level.
The data here has been clipped (the blue line shows the original data). A bad choice of
recording parameters means that any very large amplitudes go beyond the recording range
(into the green area), and are recorded as the largest possible number.
Modern recording equipment makes automatic allowances for high amplitudes, but this
phenomenon is possible on some older field systems, and in some processing systems.
Displays are often clipped to prevent high amplitude traces obscuring other data. Clipped
data is usually fairly easy to spot - peaks and troughs will have flat tops rather than being
rounded. We need to make sure, in any digital system, that we have a sufficient numeric
range to record the largest amplitudes.
This set of data has been quantised. Although we have avoided clipping the data by using
large enough numbers, the actual range of numbers is limited. In this example we have used
only the numbers from -3 to +3 (in integers) to cover the full range of the data.
This problem also occurred on some early field systems, and can still be apparent on some
processed data with a very large amplitude range. If we store our data in 16-bit words so
that the largest value is +32767, and the data is incorrectly scaled (or is the raw field data),
we may find at the end of the trace that our amplitudes are in the range 0 to 0.5. All of these
small values will be quantised to the integer value zero!
Random noise, caused by all sorts of things in the field, will radically distort the signal in
our seismic trace. We will see later how much of the processing sequence is designed to
attenuate this random noise.
Coherent noise is another problem. The 150 Hz original trace now has 50 Hz noise added to
it (land data can suffer from 50 or 60 Hz noise from power lines).
The sound sample in this case has 100 Hz noise - 50 Hz is almost inaudible, and may
damage your speakers!
We also need to be sure that we record the correct frequencies in the field. The examples
shown here have all been filtered with a low-pass filter, removing any frequencies above a
fixed value. The 150 Hz sine wave has been completely removed by this filter, and we only
record zeroes!
Finally, the most complex problem, that of aliasing. Once again we only have low
frequencies, but, in this case, simply achieved by taking every other digital sample from our
original waveform (we have resampled the data to 4 ms). You'll note that the picture looks
like something from a TV "true-life" crime programme!
This example shows under-sampling, with the added problem that the original 150 Hz sine
wave (and some of the higher frequencies in the sound sample) now alias to the wrong
frequency. The original 150 Hz signal now appears as 100 Hz! We'll discuss this in detail
on the next page.
Aliasing
Moving to a 4 ms sample period, and things
really start to go very wrong!
Here's a list of upper limits (or Nyquist frequencies) for some typical sample periods:-
Sample Period (ms)    Nyquist Frequency (Hz)
        1                     500
        2                     250
        4                     125
        8                      62.5

Seismic data is normally recorded with one of these sample periods.
Frequencies above the Nyquist frequency, if they are not removed before the data is
sampled, fold-back into the range of frequencies from 0 to Nyquist.
This plot shows input frequency against output frequency for data sampled with a 4 ms
sample period:-
The blue line in the plot above shows the
150 Hz signal from our previous examples
"aliasing" back to 100 Hz on data sampled
at 4 ms.
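A minimal Python sketch of this folding (assuming an ideal sampler with no anti-alias filter applied):

    def nyquist(sample_period_ms):
        """Nyquist frequency in Hz for a sample period in milliseconds."""
        return 1000.0 / (2.0 * sample_period_ms)

    def alias_frequency(f, sample_period_ms):
        """Apparent (recorded) frequency of an f Hz sine wave after sampling,
        folded back into the 0-to-Nyquist range."""
        fs = 1000.0 / sample_period_ms       # sampling frequency in Hz
        f = f % fs                           # aliases repeat every fs
        return f if f <= fs / 2.0 else fs - f

    print(nyquist(4))                  # 125.0
    print(alias_frequency(150, 4))     # 100.0 - the example above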
If we can guarantee that we only have the correct range of frequencies in the original
recording (by applying the appropriate filters to the data before it is digitised), we can fully
reconstruct the total waveform from the sample values.
(The reconstruction is performed by summing interpolating sinc functions centred on every sample.)
Multiplexed data
During recording, the signal from each channel passes through an analogue to digital (A/D)
conversion and is converted to a numeric value.
If this data is recorded on tape as soon as it is available, the data on tape will be in the
wrong order. We will have all channels at one time sample, followed by all data at the next
sample - the data is ordered by time, not as seismic traces.
This type of recording format, known as multiplexed recording, is still found on older data,
or current data shot with old recording instruments! If we place the "clock" in the flow
shown above (just before the A/D conversion), we only need one conversion unit for all
channels.
You should spot one other problem here. If the multiplexing "clock" shown here (for just 6
channels) rotates in 2 milliseconds, then the data from recording group 6 (and all those in
between) are recorded later than channel 1.
This multiplexer skew, dependent on the channel number, is usually corrected in the earliest
stages of the processing - when the data is being de-multiplexed from the field tapes into an
internal tape format in trace order.
When you get tired of clock watching, click on the animation to pause it!
The data is always received at the recording instruments in multiplexed order. If the
instruments have enough local storage (computer memory) available, then the data can be
collected in one order and output in the other - the data is de-multiplexed in the field.
                    Channel
Time       1      2      3     ...    239     240
0.000    A001   B001   C001    ...   YY001   ZZ001
0.002    A002   B002   C002    ...   YY002   ZZ002
0.004    A003   B003   C003    ...   YY003   ZZ003
  .        .      .      .             .       .
1.000    A501   B501   C501    ...   YY501   ZZ501
1.002    A502   B502   C502    ...   YY502   ZZ502
When data is stored in de-multiplexed format (often known as a trace sequential format),
the groups of numbers are (logically) referred to as "traces".
For multiplexed formats, the group of numbers are often referred to as a "scan" - the data
from all channels at one time.
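De-multiplexing is then just a transpose of the table above - a tiny Python sketch, assuming the whole record fits in memory:

    def demultiplex(scans):
        """Convert multiplexed data (one list of samples per scan, i.e. per time
        step) into trace-sequential data (one list of samples per channel)."""
        return [list(trace) for trace in zip(*scans)]

    # Three scans of four channels ...
    scans = [[11, 21, 31, 41],    # all channels at t = 0.000
             [12, 22, 32, 42],    # all channels at t = 0.002
             [13, 23, 33, 43]]    # all channels at t = 0.004
    print(demultiplex(scans))     # [[11, 12, 13], [21, 22, 23], [31, 32, 33], [41, 42, 43]]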
Recording filters
Before we get deep into the business of recording filters, we need to define a couple of terms -
the decibel and the octave.
Decibels
A bel (named after Alexander Graham, not ding-
dong!) is defined as the logarithm (to the base
10) of a power ratio.
The bel was soon found to be a somewhat unwieldy unit. The total range of sound audible to humans,
from the flap of a butterfly's wings to the threshold of pain (or the average rock concert) represents a
power difference of about 1,000,000,000,000 to 1, or about 12 bels.
In order to be able to use a unit with more meaning, the decibel (dB), or one tenth of a bel has come
into general use. Thus the range of human hearing is usually expressed as about 120 dB.
The dB is a power ratio. If we are dealing with amplitudes (of, for example, a seismic trace) then, as
power equates to the amplitude squared, we can define the decibel as:-
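In its standard amplitude form (consistent with the table below) this is:

    \mathrm{dB} = 20 \log_{10}\left(\frac{A_1}{A_2}\right)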
Here's some typical amplitude ratios expressed in the logarithmic scale of dB:-

Amplitude ratio    log10(ratio)      dB
      1:1               0             0 dB
     10:1               1           +20 dB
    100:1               2           +40 dB
Octaves
We're probably familiar with the concept of octaves in a musical sense.
Here's the centre three octaves of a piano keyboard (middle "C" marked in red), with a plot showing
the frequency of each note (including the black ones!).
Use the buttons to switch the frequency axis to a logarithmic scale. It should now be obvious that the
frequency for each note is a constant factor (about 1.0595) times the frequency for the previous note;
a geometric progression.
The ratio is actually equal to the 12th root of 2 - each octave (12 notes) representing a doubling of
frequency. (The "A" below middle "C" is a fixed point - 440 Hz.)
An octave is, once again, a logarithmic scale representing the ratio between two different frequencies
(it comes out as about 3.322 x Log10 of the frequency ratio). The following shows some frequency
ratios expressed in octaves:-

Frequency 1 (Hz)    Frequency 2 (Hz)    Ratio     Octaves
       10                  15            1.5       0.58496
       10                  20            2         1
       10                  40            4         2
       10                  80            8         3
       80                  40            0.5      -1
       80                  10            0.125    -3
dBs/Octave
If we put the two concepts of dBs and Octaves together, we get a "Log/Log" scale.
This plot shows some typical filter "slopes" specified in dBs per octave. I've used an amplitude of "1"
at 50 Hz as my starting point (remember that both dBs and octaves are relative measures).
As 6dB represents (roughly) a halving of amplitude, a slope of 6dB per octave means we halve the amplitude at half the
frequency - amplitude is proportional to frequency.
12dB per octave applies a scalar equal to frequency squared, and so on.
Switch the axes to log scales to see the different representations of the slopes - they only all appear
linear on the log-log scale.
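Here's a quick Python sketch of both definitions (illustrative only):

    import math

    def db(amplitude_ratio):
        """Amplitude ratio expressed in decibels."""
        return 20.0 * math.log10(amplitude_ratio)

    def octaves(f1, f2):
        """Number of octaves between two frequencies."""
        return math.log2(f2 / f1)

    print(round(db(2.0), 1))       # 6.0 - doubling the amplitude is roughly 6 dB
    print(octaves(10.0, 80.0))     # 3.0 - three octaves
    # A 6 dB/octave slope halves the amplitude for every halving of frequency:
    print(round(db(0.5) / octaves(50.0, 25.0), 1))   # 6.0 dB per octave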
Recording filters
Now we've worked our way through that lot, we can talk about recording filters!
Here's the amplitude response of a typical
recording filter.
Although the high-cut filter is very important to prevent aliasing, a low-cut filter is not always applied
(often referred to as "OUT"). This can allow some unwanted low frequency noise onto the recording, which can be removed
in the processing provided that it doesn't completely obliterate the signal!
It's important for the person processing the data to be aware of any changes in recording
filters within one seismic survey. Because both the amplitude and phase characteristics of
the recorded data are irreversibly affected by this filter, data recorded using different
filters will not "tie".
It's usual for the signatures supplied to the processing centre by a marine vessel acquiring
data to be recorded with the same filter as the seismic data - just make sure you use the
right filter for the right data!
The small piece of (fully processed) seismic data shown here has been filtered with a whole range of
high-cut filters.
These were applied in the processing as digital filters, which do not introduce any time shifts in the
data.
Note the changes in fine detail as the high-cut moves lower. (The data are sampled at 4 ms so 125 Hz
= no filter.)
Tape Formats
All tape formats use some kind of error checking. The most simple form of error checking
for the binary numbers on tape is parity checking.
This technique, used throughout the computer industry, is a very simple check for the
individual "bits" in a binary number. The number of "1's" in a byte is counted and, if the
result is even, a "1" is put in a special parity bit appended to each byte. This is known as
odd parity.
When the number is read back from the tape, if the number of "1's" (byte plus parity bit)
is no longer odd, then one or more bits are in error. This is known as a parity error and
doesn't tell you which bits are wrong - just that something is wrong. Of course, if two bits
flip from 1 to 0 (or vice-versa), the parity check won't spot it.
Another technique used is known as a checksum. This takes all of the bytes in one
particular part of the tape (for example, one multiplexed scan of all channels at one time),
and sums them together. Any overflow from the resultant one or two byte value is
ignored.
Once again, on reading the tape, an error in the checksum shows that one of the numbers is
wrong - not which one.
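Both checks are trivial to express in code - here's an illustrative Python sketch of odd parity and a simple one-byte checksum:

    def odd_parity_bit(byte):
        """Return the parity bit that makes the total number of 1's odd."""
        ones = bin(byte).count("1")
        return 0 if ones % 2 == 1 else 1

    def checksum(data, nbytes=1):
        """Sum all the bytes, ignoring any overflow beyond nbytes bytes."""
        return sum(data) % (256 ** nbytes)

    print(odd_parity_bit(0b01101001))      # 1 - four 1's, so the parity bit is set
    print(checksum([0x3A, 0xFF, 0xC2]))    # 251 (0xFB, i.e. 507 modulo 256)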
This technique is similar to that used by accountants (before the advent of calculators) to
check a sum of a column of figures. Each figure was divided by 9 and the remainder noted
(known as Modulo-9). The sum of these results (again, Modulo 9) should equal the original
sum (Modulo-9).
Here's an example:-
Number Mod 9
6307 7
9622 1
316 1
183 3
4099 4
2446 7
Sum 22973 23
Mod 9 5 5
The one major failing of this technique (which you may like to test) is that it doesn't spot a
transposition of digits in the sum. If we add "6442" instead of the last figure "2446", we
get the answer 26969 which is still 5 Modulo 9!
Seismic field tapes (particularly multiplexed tapes) make use of yet another
form of error checking. At the beginning or end of each logical section in the
tape (for example, one scan), a sync code is written.
This sync code (generally a sequence of "all 1's") covers multiple bytes and is
used to check that the correct number of values has been recorded in each
section. A "sync error" generally means that things have got out of step, and
we may have to remove this shot from the processing sequence.
Regardless of the physical format of the actual magnetic media on the tape (which I will not
be addressing), each tape, or file on a multi-file tape consists of some or all of the
following:-
A general header that identifies the tape, or this part of the tape.
A header for each "batch" of data (for example, one field record) on the tape.
This sequence is repeated along the tape, with an "End-of-file" (a particular code) recorded
at the end of each file on the tape. A file may consist of many individual traces or records,
and the tape may consist of many different files.
The end of the tape is marked with a double End-of-file (EOF), and some kind of physical
marker may be used to prevent the device writing the tape from spinning off the end.
When the writing device recognises the end of the tape, it completes the writing of the
current record, and then instructs the operator to mount a new tape (or switch devices).
The data on the tape may be blocked into physical records (either fixed length or variable
length), and there may be "gaps" between these blocks of data. The blocking was
introduced so that simpler tape devices could be used that didn't have to cope with some of
the enormous record lengths possible in seismic recording. In the same way, the gaps are
needed for "stop/start" tape devices to stop at each gap so that one record doesn't run into
the next.
Field Tape Formats
SEG-A
SEG-A was the first attempt to standardise the format for seismic field tapes. Although not in general
use today, you may come across it when reprocessing older data.
Each shot record consists of a header block followed by a data block; the header may be stored as a
separate record, or as part of the same logical record as the data block, on the tape. A single end-of-file
mark follows each shot record on the tape.
The header contains useful information on each shot - the exact format varies with the manufacturer
of the recording equipment. Some things will always be in the header. A Field Record number will
always be stored that identifies this particular shot (this may or may not be the same as the shotpoint
number annotated on the map and final section - the Observer's Logs will show the relationship).
One other useful item from the header is the initial gain. This is the gain that was used by the
recording amplifiers at the start of the record. This used to have to be set by hand so some overflow is
possible at the start of the record. The amplifier has only 15 bits (including the sign bit) with which to
represent the signal. Instead of recording a gain value with each sample, the recorder merely records
changes of gain from that applying at the previous scan, and periodically records the actual gain so far
calculated (the redundant gain) so that this may be checked by the decoding software. Gains are
stored as binary numbers representing multiples of 6dB gain applied to the data (for example a value
of "6" would mean 36dB, or a gain of 2^6 = 64).
The actual seismic data (in multiplexed format) is stored in a series of scans, where each
scan consists of a 3 byte sync code, 1 byte in which the redundant gain is written, two byte
sample values for each data channel, and a number of auxiliary channels. The auxiliary
channels may contain additional information, the first generally containing the time value
for each scan.
The scan code will always consist of 3 bytes, normally FFFFFF; the 4th byte, shown above
as GG, contains the redundant gain. The redundant gain is the total gain applied to the
appropriate channel at that scan, recorded sequentially for each channel. On the first scan
it applies to channel 1, on the 2nd to channel 2, etc. The gain value at the start of the
record is taken from the field record header.
Bit 15 (the most significant bit) of every data sample is the "G" bit, and when set to "1"
indicates that there has been a gain change of 6dB since the previous scan. To determine
whether this gain was an up change or a down change, it is necessary to look at the U bit.
This is bit 7 (the most significant bit) of the 4th byte of the scan (GG). If on, it is an up
change, if off, it is a down change. Since the U bit applies to an entire scan at once, it is
clear that the gain in any scan can only change up or down, if it changes at all; it cannot
change up on one channel in that scan and down on another. This can lead to overflow
(and clipping of the data) but this is unusual.
Thankfully, most of the messy checking required by this format should be handled by the
processing software. It may be useful, however, to be aware of the format in order to be
able to spot any problems!
SEG-B
The SEG-B format uses 16 binary bits to represent the signal, plus another 4 bits for gain ranging the
values so that the 16 bits have the greatest possible significance. When writing to the tape the
channels are blocked into groups of 4, preceded by a gain word containing the gain bits for each of the
4 channels.
The first 4 bytes of each scan contain the sync code, the next 10 bytes are auxiliary channels
which are recorded as 16 bit integers with no gain. Auxiliary channel 1 is normally used as
a sync increment word. Hence each scan starts something like this:-
Scan bytes  0/1, 2/3    Sync code (0101 0103)
            4/5         Auxiliary channel 1
            6/7         Auxiliary channel 2
            8/9         Auxiliary channel 3
            10/11       Auxiliary channel 4
            12/13       Auxiliary channel 5
            14/15       Gain word for channels 1-4
            16/17       Seismic channel 1
            18/19       Seismic channel 2
            20/21       Seismic channel 3
            22/23       Seismic channel 4
            24/25       Gain word for channels 5-8
            26/27       Seismic channel 5
            28/29       ... and so on
Once again, each set of data from one shot is preceded by a header block (or separate record) that
contains the Field Record number and other useful information.
SEG-C
In SEG-C format, the potential in millivolts (the exact value!) generated by the geophones is converted
to IBM floating point format. IBM floating point format uses 32 bits as shown below :-
The sign bit is bit 0, while bits 1 to 7 are the exponent which is biased by Hexadecimal `40'. Bits 8 to
31 contain the mantissa. We saw some examples of this format earlier.
The tape is recorded as before, with a header for each shot followed by the individual
scans. Each scan begins with a 4 byte sync code, normally FFFFFF00, then 2 bytes which
are used as a sync increment, followed by 2 bytes which are not used and normally set to
zero. Immediately following these 2 unused bytes are the IBM floating point
representations of channels 1 to N. Hence each scan starts like this :-
Sync Code | Timing Word | Channel 1 | Channel 2 | ... | Channel N
Although this format had none of the "gain" complications of the other formats, it proved unpopular
due to the amount of tape it used (only low density tapes were then available). As tape densities
improved, the format was quickly replaced by the more versatile SEG-D format which has now become
the general standard.
SEG-D
SEG-D is the most complicated of field formats to date. Firstly, it can contain either
multiplexed or de-multiplexed data and, secondly, it has so many variants and optional
parts that it can vary greatly from one set of instruments to another.
The general format is as follows, with the green sections optional, "IBG" meaning inter-
block gap (a space on the tape), and "EOF" as an end-of-file marker:-
General Header (blocks 1 and 2)
Additional General Header blocks
1st Scan Type Header:  Channel Set 1 Header | Channel Set 2 Header | ... | Channel Set n Header | Sample Skew Header
2nd Scan Type Header:  Channel Set 1-n Headers | Sample Skew Header
  ...
Last Scan Type Header: Channel Set 1-n Headers | Sample Skew Header
Extended Header
External Header
Trailer Blocks
IBG | DATA | IBG | EOF | IBG | Next Record Header
The header record is a single block of information separated from the data by a standard
inter-block gap. The header record is composed of a general header, one or more scan type
headers and optionally, the extended and external headers. The header record is
"officially" between 300 and 10000 bytes in length and a multiple of 32 bytes, in practice
headers in excess of 20000 bytes are already in use for multiple-streamer (3D) marine data.
Within the extended and external headers, it is possible for equipment manufacturers and
seismic contractors to record certain additional information that is not defined in the SEG
standard. This information is defined and documented by the equipment manufacturer or
seismic contractor. Some contractors' formats contain shotpoint event time, source
identifiers, even latitude and longitude for the shotpoint. This is particularly useful for the
QC of 3D processing.
The order of the actual data values (in red) will be described below; the samples themselves can be
stored in a number of possible different formats:-
0015  Multiplexed 20-bit binary data                       8015  Demultiplexed 20-bit binary data
0022  Multiplexed 8-bit quaternary exponent data           8022  Demultiplexed 8-bit quaternary exponent data
0024  Multiplexed 16-bit quaternary exponent data          8024  Demultiplexed 16-bit quaternary exponent data
0036  Multiplexed 24-bit 2's complement integer data       8036  Demultiplexed 24-bit 2's complement integer data
0038  Multiplexed 32-bit 2's complement integer data       8038  Demultiplexed 32-bit 2's complement integer data
0042  Multiplexed 8-bit hexadecimal exponent data          8042  Demultiplexed 8-bit hexadecimal exponent data
0044  Multiplexed 16-bit hexadecimal exponent data         8044  Demultiplexed 16-bit hexadecimal exponent data
0048  Multiplexed 32-bit IBM floating point format data    8048  Demultiplexed 32-bit IBM floating point format data
0058  Multiplexed 32-bit IEEE floating point format data   8058  Demultiplexed 32-bit IEEE floating point format data
The numerical values are the format codes stored in the header for each block of data (I
don't propose to go into these in detail!).
Multiplexed data is stored very simply. Each scan is preceded by a Start-Of-Scan code and
a time value which identifies the following scan - the data samples being stored in one of the
formats given above, and additional information (gains, etc) being available in the general
header:-
Demultiplexed SEG-D format tapes (the most common format) consist of a header record
followed by a number of demultiplexed data records and then a single end of file mark.
This sequence is repeated for each shot on the line:-
HDR | Trace 1 (Trace Header, Trace Header Extension, Data) | IBG | Trace 2 (Header, Data) | IBG |
Trace 3 (Header, Data) | ... | Trace i (Header, Data) | IBG | ... | Trace j (Header, Data) | IBG | Data Blocks | ...

Traces 1 to i make up Channel Set 1 to Channel Set n of Scan Type 1; the following traces (1 to j)
make up Channel Sets 1-n of Scan Type 2, and so on up to the Channel Sets 1-n of the last scan type.
The data records are recorded immediately following the header record. Each data record
is separated from the next by a standard inter-block gap and may be identified using
information stored in the trace header which precedes each data channel. The original
SEG-D format only allowed 20-byte trace headers. This has now been extended to almost
any length, and, like other formats, the data is sometimes blocked into convenient units for
the tape drive to read. The most important things stored in the trace headers are:-
BCD stands for "Binary Coded Decimal" where each decimal digit is separately encoded
into 4 binary bits.
The data traces are recorded into a number of different channel sets; for example, seismic
channels 1 to 240 may be recorded into channel set 1 of scan type 1, while a number of
auxiliary channels may be recorded into channel sets 2, 3, 4, etc. of scan type 1.
If you need to become fully conversant with SEG-D (for whatever reason!) I would suggest
the SEG publications "Digital Field Tape Format Standards - SEG-D" or "Digital Field
Tape Format Standards - SEG-D Revisions 1 & 2", which describe the format in bit-by-bit
detail!
SEG-Y
In order to facilitate the movement of data from one processing contractor to another, the
SEG initially designed the SEG-X format for data eXchange. This was rapidly superseded
by SEG-Y, which is a trace sequential (or de-multiplexed) format designed to store fully or
partially processed seismic data. Its versatility has led to it being used more and more for
raw or partially processed field data in a de-multiplexed format.
The tape is divided into seismic "lines" by End-Of-File marks (EOF's), with a double EOF
at the end of the tape. Each line consists of two line headers, followed by a series of seismic
traces each with their own header. The actual format of the seismic samples (and the
number of bytes in each trace) is defined by the format codes in the binary line header.
The first line header is a
free-format text header of
exactly 3200 bytes. Each
byte represents one
character, which is stored in
the (now) somewhat archaic
EBCDIC (Extended Binary
Coded Decimal Interchange
Code) format.
The standard SEG-Y binary header consists of 400 bytes, with integer binary coding
containing useful information for the whole line. Bytes 25-26 are particularly important, as
they define the format of the seismic samples that follow:-
               1 = floating point (4 bytes)
               2 = fixed point (4 bytes)
               3 = fixed point (2 bytes)
               4 = fixed point with gain code (4 bytes)
               5 = IEEE floating point (4 bytes)   } NON-STANDARD!
               6 = fixed point (1 byte)            } May be redefined in future revisions!

Bytes 27-28    CDP fold expected per CDP ensemble

Bytes 29-30    Trace sorting code:
               1 = as recorded (no sorting)
               2 = CDP ensemble
               3 = single fold continuous profile
               4 = horizontally stacked

Bytes 31-32    Vertical sum code:
               1 = no sum
               2 = two sum
               N = n sum

Bytes 33-34    Vibroseis sweep frequency at start
Bytes 35-36    Vibroseis sweep frequency at end
Bytes 37-38    Vibroseis sweep length (ms)

Bytes 39-40    Vibroseis sweep type code:
               1 = linear
               2 = parabolic
               3 = exponential
               4 = other

Bytes 41-42    Trace number of Vibroseis sweep channel
Bytes 43-44    Vibroseis sweep trace taper length at start
Bytes 45-46    Vibroseis sweep trace taper length at end

Bytes 47-48    Vibroseis sweep trace taper type code:
               1 = linear
               2 = COS-squared
               3 = other

Bytes 49-50    Vibroseis correlated data traces code:
               1 = no
               2 = yes

Bytes 51-52    Binary gain recovered code:
               1 = yes
               2 = no

Bytes 53-54    Amplitude recovery method code:
               1 = none
               2 = spherical divergence
               3 = AGC
               4 = other

Bytes 55-56    Measurement system code:
               1 = metres
               2 = feet

Bytes 57-58    Impulse signal polarity code:
               1 = increase in pressure or upward geophone case movement gives negative number on tape
               2 = increase in pressure or upward geophone case movement gives positive number on tape

Bytes 59-60    Vibrator polarity code:
               1 = seismic signal lags pilot by 337.5 to 22.5 degrees
               2 = seismic signal lags pilot by 22.5 to 67.5 degrees
               3 = seismic signal lags pilot by 67.5 to 112.5 degrees
               4 = seismic signal lags pilot by 112.5 to 157.5 degrees
               5 = seismic signal lags pilot by 157.5 to 202.5 degrees
               6 = seismic signal lags pilot by 202.5 to 247.5 degrees
               7 = seismic signal lags pilot by 247.5 to 292.5 degrees
               8 = seismic signal lags pilot by 292.5 to 337.5 degrees

Bytes 61-400   Spare
The standard SEG-Y trace header consists of 240 bytes. Many of the terms used won't
mean much to you at present, but should become clearer as we discuss the processing
sequence:-
Bytes 1-4      Trace sequence number within line
Bytes 5-8      Trace sequence number within reel
Bytes 9-12     Field record number
Bytes 13-16    Trace number within field record
Bytes 17-20    Energy source point number
Bytes 21-24    CDP ensemble number
Bytes 25-28    Trace number within CDP ensemble

Bytes 29-30    Trace identification code:
               1 = seismic data
               2 = dead
               3 = dummy
               4 = time break
               5 = uphole
               6 = sweep
               7 = timing
               8 = water break
               N = optional use

Bytes 31-32    Number of vertically summed traces
Bytes 33-34    Number of horizontally summed traces

Bytes 35-36    Data use:
               1 = production
               2 = test

Bytes 37-40    Distance from source point to receiver group (negative if opposite to direction in which the line was shot)
Bytes 41-44    Receiver group elevation from sea level (above sea level is positive)
Bytes 45-48    Source elevation from sea level (above sea level is positive)
Bytes 49-52    Source depth (positive)
Bytes 53-56    Datum elevation at receiver group
Bytes 57-60    Datum elevation at source
Bytes 61-64    Water depth at source
Bytes 65-68    Water depth at receiver group
Bytes 69-70    Scale factor for previous 7 entries with value plus or minus 10 to the power 0, 1, 2, 3 or 4 (if positive multiply, if negative divide)
Bytes 71-72    Scale factor for next 4 entries with value plus or minus 10 to the power 0, 1, 2, 3 or 4 (if positive multiply, if negative divide)
Bytes 73-76    X source co-ordinate
Bytes 77-80    Y source co-ordinate
Bytes 81-84    X group co-ordinate
Bytes 85-88    Y group co-ordinate

Bytes 89-90    Co-ordinate units code for previous four entries:
               1 = length (metres or feet)
               2 = seconds of arc (in this case the X values are longitude and the Y values are latitude; a positive value designates the number of seconds east of Greenwich or north of the equator)

Bytes 91-92    Weathering velocity
Bytes 93-94    Sub-weathering velocity
Bytes 95-96    Uphole time at source
Bytes 97-98    Uphole time at receiver group
Bytes 99-100   Source static correction
Bytes 101-102  Group static correction
Bytes 103-104  Total static applied
Bytes 105-106  Lag time A, time in ms between end of 240-byte trace identification header and time break, positive if time break occurs after end of header; time break is defined as the initiation pulse which may be recorded on an auxiliary trace or as otherwise specified
Bytes 107-108  Lag time B, time in ms between the time break and the initiation time of the energy source, may be positive or negative
Bytes 109-110  Delay recording time, time in ms between initiation time of energy source and time when recording of data samples begins (for deep water work if recording does not start at zero time)
Bytes 111-112  Mute time - start
Bytes 113-114  Mute time - end
Bytes 115-116  Number of samples in this trace (unsigned)
Bytes 117-118  Sample interval in micro-seconds (unsigned)

Bytes 119-120  Gain type of field instruments code:
               1 = fixed
               2 = binary
               3 = floating point
               N = optional use

Bytes 121-122  Instrument gain constant
Bytes 123-124  Instrument early or initial gain

Bytes 125-126  Correlated:
               1 = no
               2 = yes

Bytes 127-128  Sweep frequency at start
Bytes 129-130  Sweep frequency at end
Bytes 131-132  Sweep length in ms

Bytes 133-134  Sweep type code:
               1 = linear
               2 = COS-squared
               3 = other

Bytes 135-136  Sweep trace taper length at start in ms
Bytes 137-138  Sweep trace taper length at end in ms

Bytes 139-140  Taper type:
               1 = linear
               2 = COS-squared
               3 = other

Bytes 141-142  Alias filter frequency if used
Bytes 143-144  Alias filter slope
Bytes 145-146  Notch filter frequency if used
Bytes 147-148  Notch filter slope
Bytes 149-150  Low cut frequency if used
Bytes 151-152  High cut frequency if used
Bytes 153-154  Low cut slope
Bytes 155-156  High cut slope
Bytes 157-158  Year data recorded
Bytes 159-160  Day of year
Bytes 161-162  Hour of day (24 hour clock)
Bytes 163-164  Minute of hour
Bytes 165-166  Second of minute

Bytes 167-168  Time basis code:
               1 = local
               2 = GMT
               3 = other

Bytes 169-170  Trace weighting factor, defined as 1/2^N volts for the least significant bit
Bytes 171-172  Geophone group number of roll switch position one
Bytes 173-174  Geophone group number of trace one within original field record
Bytes 175-176  Geophone group number of last trace within original field record
Bytes 177-178  Gap size (total number of groups dropped)

Bytes 179-180  Overtravel taper code:
               1 = down (or behind)
               2 = up (or ahead)

Bytes 181-240  Spare
Each trace header is then followed by the seismic samples, in a fixed length record with the
format specified in the binary header for the line.
Many variations on the standard SEG-Y format exist in practice, and a revision of the
format is due at any moment from the SEG. Regardless of variations, however, the
standard information in the headers should be in the bytes shown in the tables above.
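As an illustration of how these byte positions are used, here's a minimal Python sketch that pulls a few of the values above from a SEG-Y file. It assumes the common big-endian byte order and the standard layout described above - real files may deviate, as just noted:

    import struct

    def read_segy_summary(path):
        """Read the EBCDIC header, a couple of binary-header values and the
        first trace header from a standard (big-endian) SEG-Y file."""
        with open(path, "rb") as f:
            ebcdic = f.read(3200).decode("cp500", errors="replace")  # 40 "cards" of 80 characters
            binary = f.read(400)
            # Offsets below are the 1-indexed byte positions from the tables above.
            fmt,  = struct.unpack(">h", binary[24:26])        # bytes 25-26: sample format code
            fold, = struct.unpack(">h", binary[26:28])        # bytes 27-28: CDP fold
            trace_header = f.read(240)
            nsamp, = struct.unpack(">H", trace_header[114:116])  # bytes 115-116: samples per trace
            dt_us, = struct.unpack(">H", trace_header[116:118])  # bytes 117-118: sample interval (microseconds)
        return ebcdic[:80], fmt, fold, nsamp, dt_us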
Chapter 6 - Frequencies
The manipulation of the individual frequency components within a seismic trace is a key
part of the seismic processing sequence. This chapter looks at frequencies in some detail,
and establishes the basic principles of time & frequency domain conversions.
Another possibly unfamiliar number system - complex numbers. A brief history and even
briefer look at some of the mathematical operations on complex numbers.
... or going the other way (from frequency to time).
Complex Numbers
Although some numbers are difficult to represent in some number bases (for example 7
divided by 3 gives 2 and one-third, which is 2.333333 ... in decimal), they become simple in
other bases (7/3₁₀ = 2.1₃). Irrational numbers (like pi), which cannot be fully
represented in any number of digits in any base, still have a position on the range of real
numbers we normally use.
Real numbers allow us to solve equations such as x² = 49. Remember that there are two
answers to this, +7 or -7.
The problems start when we try to solve equations like x² = -49. Our normal range of real
numbers does not allow us to solve this and early mathematicians believed that this
equation had no solution. By the middle of the 16th century, however, the Italian
mathematician Gerolamo Cardano and his contemporaries were experimenting with
solutions to equations that involved the square roots of negative numbers, and, by 1777, the
Swiss mathematician Leonhard Euler introduced the symbol i to stand for the square-root
of -1.
The imaginary numbers produced by using the symbol i have no physical meaning, but they
do allow for the solution of all polynomial equations (the square root of -49 is either 7i or
-7i). They also lead to some interesting mathematical solutions like e^(i·pi) = -1!
Numbers which consist of part real and part imaginary (for example 22.3 + 1.925i) are
called complex numbers.
Multiplication of complex numbers is based on the premise
that i x i = -1. This gives the rule:-
(a + bi) x (c + di) = (ac - bd) + (ad + bc)i
All of the normal mathematical functions (logs, trigonometric
functions etc.) can be applied to complex numbers (they
usually give complex answers). Plots of some part of some of
the more esoteric functions of complex numbers give rise to
the fractal patterns popular with computer artists!
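As a quick check of the multiplication rule above, here's a minimal Python sketch (the second number is an arbitrary example) comparing the hand calculation with Python's built-in complex type:

a, b = 22.3, 1.925              # the complex number 22.3 + 1.925i from the text
c, d = 4.0, -2.5                # a second, arbitrary complex number c + di
# the rule: (a + bi)(c + di) = (ac - bd) + (ad + bc)i
real_part = a*c - b*d
imag_part = a*d + b*c
print(real_part, imag_part)
print(complex(a, b) * complex(c, d))   # Python's complex arithmetic gives the same answer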
We'll go on now and discuss frequencies, and how complex numbers play their part!
146
Frequencies
Way back in Chapter 3 we showed how a waveform, consisting of a single frequency, would
appear in the time domain (as a function of time).
The actual equation for the type of waveform shown above is:-
147
a*Cos(2*pi*f*t+p)
where a is the amplitude, f the frequency, t the time and p the phase. pi is either 3.14159...
if we're working in radians, or 180 if we're working in degrees, so the example given (with
everything in degrees) is:-
4*Cos(180*25*t+60) = 4*Cos(4500*t+60)
The clue to this is on the previous page. In the frequency domain we need a complex
number to specify the waveform. The combination of phase and frequency values in the
frequency domain transform into waveforms in the time domain, and allow for complex
numbers in both domains, as well as for negative frequencies (a concept that we'll discuss
when we get to spatial frequencies!).
Luckily, we don't record either complex or imaginary numbers in our seismic data (which
is just as well, as they'd be difficult to display!), and we don't record negative frequencies
(if you like, energy from the shot going backwards in time!), so we actually only need about
half as many "samples" in the frequency domain as we have in the time domain - more on
this later.
148
Once again, we'll go back to clock watching! Imagine a clock that rotates through 360
degrees in 40 ms.
If the length of the "hand" is four units, and we plot the height of the end of the hand from
the centre of the clock, we get the cosine wave shown here.
Click on the animation to stop it - if you can stop it after 1/6th of a revolution (at 2 o'clock),
you'll get the same time function as shown above - a 60 degree phase value!
Here's the complete relationship between amplitude (A), phase (p), and the Real (R) and
Imaginary (I) components of a complex number expressed in polar co-ordinates, or our
single frequency component expressed in Amplitude and Phase converted to its complex
equivalent:-
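The relationship itself isn't shown here; the standard conversion the text is describing is (written in LaTeX notation):

R = A\cos(p), \qquad I = A\sin(p), \qquad A = \sqrt{R^2 + I^2}, \qquad p = \tan^{-1}(I/R)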
149
Combining different frequencies
Let's start by looking at what happens when we combine two waves of the same frequency
with different amplitudes and phases.
The plot here shows the time functions (from -0.2 to +0.2 seconds) for two 20 Hz
waveforms, their amplitude and phase values, and an Argand diagram showing the
amplitude and phase components as a vector. Change the amplitude (0 to 100) and phase (-
360 to +360) of each waveform and click on the button to add them together.
Note particularly what happens when the two waves have the same amplitude and exactly
180 degrees difference in phase - adding them together "cancels-out" this frequency.
Check what happens when they are 90 degrees apart, or when both have the same phase.
Now we'll increase the complexity by adding two waveforms of different frequencies. Once
again, try a range of amplitudes and phases with frequencies close together and far apart
(you can only specify frequencies from 0 to 125 Hz - I've assumed a 4 ms sample period):-
The result will always look "cyclic" (the pattern repeats). Try adding 100 Hz to 110 Hz -
you should see an apparent 10 Hz (the difference) in the output. This is due to some of the
relationships you might remember from trigonometry at school (COS (a + b) = COS (a)
COS (b) - SIN (a) SIN (b) and the like!).
150
Here's a few hundred milliseconds of one such waveform constructed by adding together 11
different frequencies with different amplitudes and phases. The sum (in blue) is scaled up
by a factor of 2 relative to the individual components in order to make it more visible!
The amplitude for each frequency varies up to an arbitrary value of 100, the phase is more
or less random for each component.
The addition of all of these frequencies produces, once again, a cyclic waveform (containing
a repetitive sequence). This sequence repeats every 80 ms (every 20 samples at a 4 ms
sample period).
I have deliberately used a set of sampled frequencies going from 0 Hz to the Nyquist
frequency (125 Hz), and, in fact, if we have "N" samples of amplitude and phase over the 0
to Nyquist range, the time function will repeat every (N-1)*2 samples (80 ms).
This ties in with a comment made on the previous page. If we only have positive
frequencies (from 0 to Nyquist), and "Real" numbers in our trace of "M" time samples, we
need (M/2)+1 frequency samples to represent all of the frequencies in that trace (we assume
that the trace repeats after "M" samples).
151
We could represent the little piece of trace shown above by plotting the amplitude and phase
of each of the component frequencies (when we move on to many frequency components this
will really be the only way we can show them).
Each frequency component is shown on both graphs - the top giving the amplitude of the
particular frequency, and the bottom the phase angle.
152
We can synthesise the trace by adding together the individual frequency components.
The Fourier Transform is simply a mathematical process that allows us to take a function
of time (a seismic trace) and express it as a function of frequency (a spectrum). The full-
blown formal statement of the Fourier Transform is:-
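The formal statement isn't shown here; the standard forward and inverse pair (written in LaTeX notation, with f the frequency in Hz) is:

X(f) = \int_{-\infty}^{+\infty} x(t)\, e^{-2\pi i f t}\, dt, \qquad x(t) = \int_{-\infty}^{+\infty} X(f)\, e^{+2\pi i f t}\, df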
Both of the above transforms require an integral over the range -infinity to +infinity, not a
lot of use when we have a finite string of numbers representing a seismic trace. Here's an
alternative formulation of the forward transform for sampled data:-
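Again the formula itself isn't shown here; the usual sampled (discrete) form of the forward transform, consistent with the description that follows, is:

X_k = \sum_{n=0}^{N-1} x_n\, e^{-2\pi i\, k n / N}, \qquad k = 0, 1, \ldots, N-1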
Although we've replaced the integration with a summation over a finite interval ("N"
samples), the "i" in the exponent on the right should give you the clue that we are dealing
here with complex numbers. We'll look at a practical example of this over the next few
pages for just real numbers, but let's first examine the results by using a ready-built
program.
154
The code shown below (From: J. W. Cooley, P. Lewis, and P. D. Welch, "Historical Notes on the Fast
Fourier Transform", IEEE Transactions on Audio and Electroacoustics 1969 and later papers) is a
FORTRAN program that will compute the forward FFT of any number of points that are a power of 2
(2, 4, 8, 16, etc). Both the input and output series of numbers in this program are complex numbers,
and similar algorithms are available in other programs.

      DO 7 I = 1,NM1
      IF (I .GE. J) GO TO 5
      T = A(J)
      A(J) = A(I)
      A(I) = T
    5 K = NV2
    6 IF (K .GE. J) GO TO 7
      J = J - K
      K = K/2
      GO TO 6
    7 J = J + K
      PI = 3.141592653589793
      DO 20 L = 1,M
      LE = 2**L
      LE1 = LE/2
      U = (1.0,0.)
      W = CMPLX(COS(PI/LE1),SIN(PI/LE1))
      DO 20 J = 1,LE1
      DO 10 I = J,N,LE
      IP = I + LE1
      T = A(IP) * U
      A(IP) = A(I) - T
   10 A(I) = A(I) + T
   20 U = U * W
      RETURN
      END

Instead of programming this, we can use existing software. For example, there is an option in the
"Analysis Tool Pack" in Microsoft Excel to perform forward or inverse FFT's on a (once again)
number of points that is a power of 2.
The example output shown below, for an "8-point" FFT, was produced using the Microsoft Excel
spreadsheet on a PC - note that the output consists of complex numbers.

Input   FFT
88      88
77      172.509667991878-49.5807358037436i
-39     124-18i
4       -8.50966799187825-145.580735803743i
6       40
-56     -8.509667991878+145.580735803744i
9       124+18i
-1      172.509667991878+49.5807358037435i

We'll look at the output of this "forward transform" in some detail on the next page.
We'll look at the output of this "forward transform" in some detail on the next page.
155
The Forward Transform
Let's examine the output from the spreadsheet on the previous page in more detail:-
156
Time Input FFT Real Imag Amp Phase Freq
0 88 88 88.00 0.00 88.00 0.00 0
0.004 77 172.509667991878-49.5807358037436i 172.51 -49.58 179.49 -16.04 31.25
0.008 -39 124-18i 124.00 -18.00 125.30 -8.26 62.5
0.012 4 -8.50966799187825-145.580735803743i -8.51 -145.58 145.83 -93.35 93.75
0.016 6 40 40.00 0.00 40.00 0.00 125
-0.012 -56 -8.509667991878+145.580735803744i -8.51 145.58 145.83 93.35 -93.75
-0.008 9 124+18i 124.00 18.00 125.30 8.26 -62.5
-0.004 -1 172.509667991878+49.5807358037435i 172.51 49.58 179.49 16.04 -31.25
I have added both time and frequency scales to this table, note that both of these scales
must start at "0", and that the last three values (N/2-1 where N is 8) represent negative
times and frequencies. Since I have chosen a 4 ms sample period for these values, the
frequency scale must reach a maximum of 125 Hz (Nyquist). For "N" points at a "DT" ms
sample period, the frequency increment will be 1000/(N*DT). In this case 1000/(8*4) =
31.25.
I have split the FFT output into its real and imaginary components, and then used the
relationships given at the bottom of page 3 of this Chapter to convert these to Amplitude
and Phase (in Degrees). In fact, as I was using a Microsoft Excel spreadsheet to do this, I
used the spreadsheet functions IMABS and IMARGUMENT to compute the amplitude and
phase directly from the complex FFT output (I multiplied the phase by 180/PI() to convert
to Degrees).
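If you want to check these numbers without a spreadsheet, here's a minimal Python/NumPy sketch that reproduces the table (note that NumPy labels the Nyquist bin as -125 Hz rather than +125 Hz - it is the same sample):

import numpy as np

x = np.array([88, 77, -39, 4, 6, -56, 9, -1], dtype=float)   # the 8 time samples
X = np.fft.fft(x)                                            # complex spectrum, same ordering as the table
freq = np.fft.fftfreq(len(x), d=0.004)                       # 4 ms sample period -> 31.25 Hz increment
amp = np.abs(X)                                              # amplitude (as IMABS)
phase = np.degrees(np.angle(X))                              # phase in degrees (as IMARGUMENT * 180/PI)
for f, a, p in zip(freq, amp, phase):
    print(f"{f:8.2f} Hz   amp {a:8.2f}   phase {p:8.2f}")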
If we re-arrange the data sets into time order, we can plot the time function, the amplitude
spectrum, and the phase spectrum for this set of numbers:-
If we ignore, for the moment the value at the right of the two spectra (125 Hz), we should
notice that the amplitude spectrum is totally symmetrical, and that the phase spectrum is
asymmetrical. We can see this also in the original complex numbers in the above table:-
157
For a frequency of "-x", the real part of the complex FFT output is the same, the imaginary
part is sign reversed (in mathematical terms the second complex number is the complex
conjugate of the first). This is always true when we only have real numbers in the input
time function - the information contained in the negative frequencies is the same as in the
positive frequencies (with the sign of the phase reversed).
What this actually all means is that we can get away with just one-half of the spectrum
when we have an input that is just real numbers (like, for example, a seismic trace).
We'll look at this (and the "fiddle" that is required to handle this properly) by inverse
transforming the amplitudes and phases from the above table back into a seismic "trace".
Back to our trusty spreadsheet once again - the inverse Fourier Transform simply means
158
that we need to add together all of the appropriate frequency components at each time.
The left-hand columns in this table show the amplitude and phase values for the positive
frequencies in the transform on the previous page. Across the top are the output time
values (in normal order this time), and each white square contains the result of the
equation Amp*Cos(2*pi*frequency*time+phase) for each set of values.
Time
-0.012 -0.008 -0.004 0.000 0.004 0.008 0.012 0.016
Freq Amp Phase Cos Cos Cos Cos Cos Cos Cos Cos
0.00 88.00 0.00 88.00 88.00 88.00 88.00 88.00 88.00 88.00 88.00 x1
31.25 179.49 -16.04 -157.05 -49.59 86.91 172.50 157.05 49.59 -86.91 -172.50 x2
62.50 125.30 -8.26 18.00 -124.00 -18.00 124.00 18.00 -124.00 -18.00 124.00 x2
93.75 145.83 -93.35 -108.97 145.58 -96.92 -8.52 108.97 -145.58 96.92 8.52 x2
125.00 40.00 0.00 -40.00 40.00 -40.00 40.00 -40.00 40.00 -40.00 40.00 x1
We add up the columns to give the totals at the bottom with TWO PROVISOS.
1. Because we have ignored the negative frequency components, we have to double all
of the values for frequencies other than 0 Hz and Nyquist (125). These "end" values
have their negative components "built into them", whereas the other components do
not! The multipliers are shown in red in the last column.
2. We need to divide the final sum by the number of points in the original transform (8
time points) in order to normalise the amplitudes.
If we perform both of these steps correctly, then the final result (using just 5 frequency
points) will be the same as if we used all of the original 8 points. In general we only need
"N/2+1" samples in the frequency domain to represent "N" real numbers in the time
domain. For example, if we have 1024 sample values in time, we actually need just one
value each for 0 Hz (DC) and Nyquist (neither of these has an imaginary component in the
frequency domain), and 511 complex values for all of the other frequencies - in other words
511 x 2 + 2 = 1024 sample values in frequency!
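Here's a minimal Python sketch of the same synthesis, using the rounded amplitudes and phases from the table above and applying both provisos (the small differences from the original samples come only from the rounding):

import numpy as np

amp    = np.array([88.00, 179.49, 125.30, 145.83, 40.00])    # 0 Hz to Nyquist
phase  = np.radians([0.00, -16.04, -8.26, -93.35, 0.00])
freq   = np.array([0.0, 31.25, 62.5, 93.75, 125.0])
weight = np.array([1, 2, 2, 2, 1])       # proviso 1: double everything except 0 Hz and Nyquist

t = np.arange(-3, 5) * 0.004             # -12 ms to +16 ms at a 4 ms sample period
trace = sum(w * a * np.cos(2 * np.pi * f * t + p)
            for w, a, f, p in zip(weight, amp, freq, phase)) / 8.0   # proviso 2: divide by N
print(np.round(trace, 1))                # approximately -56, 9, -1, 88, 77, -39, 4, 6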
We have shown a simple way of doing the inverse transform using a spread sheet. Is the
forward transform possible using a similar technique?
Well, not quite so simply, but a solution is possible. Expanding the original Fourier
Transform equations (no, I'm not going to do that here!) reveals that the real component of
the transform for any particular frequency can be calculated by multiplying the time
"trace" by a cosine wave for the frequency of interest, and summing all of the results. In
159
the same way the imaginary part can be computed by multiplying the trace by a sine wave
and summing the results.
Here's our original example again, this time with the transform calculated from the cosine
and sine cross multiplications:-
          F 0           F 31.25        F 62.5         F 93.75        F 125
          Real   Imag   Real   Imag    Real   Imag    Real   Imag    Real   Imag
Mult/Add  88.00  0.00   172.51 49.58   124.00 18.00   -8.51  145.58  40.00  0.00
The first two columns contain the original time function. The columns labelled "F 0"
contain a cosine and sine wave for 0 Hz, the next two columns for 31.25 Hz and so on.
The Mult/Add row is the result of multiplying the "Amp" column by the appropriate
cosine/sine column and adding all of the results. The values computed are the same real
and imaginary values that we obtained before. Although this technique is not as fast as the
FFT program shown earlier, it is simpler to implement on a spread sheet.
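A sketch of the same cross-multiplication in Python (note the sign convention - the sine sum here gives +49.58 where the FFT earlier carried -49.58i; the imaginary part of the FFT is the negative of this sum):

import numpy as np

x = np.array([-56, 9, -1, 88, 77, -39, 4, 6], dtype=float)   # the trace in time order
t = np.arange(-3, 5) * 0.004                                  # matching times in seconds

for f in [0.0, 31.25, 62.5, 93.75, 125.0]:
    real = np.sum(x * np.cos(2 * np.pi * f * t))   # multiply by a cosine and sum
    imag = np.sum(x * np.sin(2 * np.pi * f * t))   # multiply by a sine and sum
    print(f"{f:7.2f} Hz   real {real:8.2f}   imag {imag:8.2f}")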
Example FFT's
160
It's common, when discussing Fourier Transforms, to show transform "pairs" - that is the
equivalent time and frequency domain representations of some common functions.
Because we are only dealing with real numbers in the time domain, the following plots
show the time domain function (T), the amplitude spectrum (A) and the phase spectrum
(P). You can assign almost any scale to these displays - if the time samples are, for example,
4 ms apart, then the frequency scale will go from 0 to 125 Hz. If you stretch the time scale
(make them 8 ms apart), then the frequency scale compresses (from 0 to 62.5 Hz).
The amplitude plots are on a linear scale (not dB), and the phase plot goes from -180 to
+180 degrees. The time function, in each case, is centred around time zero.
161
A sinc function in the time domain
corresponds to a constant amplitude
"block" in the frequency domain,
showing the similarity between the
forward and inverse Fourier
Transform.
162
We looked at ghosts in Chapter 3; this example shows a single source or receiver ghost, with
its frequency domain "notches".
163
Finally, here's a live FFT running in real time. The "zero" samples in each domain are
shown as open circles (time zero is in the centre). Use the left mouse button to drag any
sample, in any domain, to a new value, or the right mouse button to zero a sample. Use the
buttons to initialise or modify the appropriate spectra. Try single frequencies, linear
phases, or any of the examples shown above!
Digital Filtering
We've seen how to build a "trace" containing specified amplitude and phase spectra, and how to
convert a "trace" from the time domain to and from the frequency domain. How do we use
this in the seismic processing sequence?
164
We will transform this trace from the time domain into the frequency domain, apply a
filter to remove the high frequencies, and transform it back. We apply the filter by
multiplying the amplitude spectrum by our desired filter pass-band, and, to avoid
introducing timing errors, we will leave the phase as it is - we will apply a zero-phase filter.
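Here's a rough sketch of that zero-phase filtering in Python (a practical filter would taper the edge of the pass-band rather than use the hard cut-off shown here):

import numpy as np

def zero_phase_lowpass(trace, dt, f_cut):
    spec = np.fft.rfft(trace)                      # spectrum of the real trace, 0 Hz to Nyquist
    freqs = np.fft.rfftfreq(len(trace), d=dt)
    passband = (freqs <= f_cut).astype(float)      # 1 inside the pass-band, 0 above the cut
    return np.fft.irfft(spec * passband, n=len(trace))   # phase untouched, so no time shift

trace = np.random.randn(256)                       # stand-in for a seismic trace at 4 ms
filtered = zero_phase_lowpass(trace, dt=0.004, f_cut=60.0)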
165
This is another low-pass filter.
166
A 90 degree phase shift once
again, this time coupled with an
amplitude correction
proportional to frequency.
It's fairly obvious from the above that we can perform almost any sort of filtering by this
method, but it does require a double transform to perform the filtering. Can we
accomplish the same thing in the time domain, without going through the transforms?
The answer to this is, of course, YES, and we now move on to Convolution!
167
Convolution
Yet another filter applied to our
poor seismic trace!
Where x is the input time function, h is the time domain representation of the filter, f is the
output, and * is used to indicate convolution - a combination of multiplication and addition.
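The equation itself isn't shown here; in the notation the text uses, the discrete form is (in LaTeX notation):

f_k = (x * h)_k = \sum_{n} x_n\, h_{k-n}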
168
3. Cross-multiply each sample in the filter by each sample in the trace, and add the
results to give one output sample.
We need the time-domain representation of our filter. We can get this by simply inverse
transforming the desired amplitude and phase spectra into the time domain, or, if our filter
is a "black-box" (for example, the electronics in the recording instruments), we can obtain
the filter response by using a single spike as the input to the filtering process.
Let's use this filter, and the equation shown above, to filter our data in the time domain!
Here's the original "trace" in digital samples (at a 4 ms sample interval), the time-reversed
filter (it's symmetrical, so no change), and the result of all the multiplications and
additions. Use the buttons to show each stage of the process:-
The final column above matches the plot shown at the top. We have low-pass filtered our
trace in the time domain by taking a running-average of every three samples (I had to
append zeroes to the trace to handle the "ends").
So, a running average is a convolution - we could accomplish the same thing in the
frequency domain by multiplying the spectrum of the trace by a complex spectrum which is
a sinc function.
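In Python the same three-point running average can be written directly as a convolution (the trace values here are placeholders, not the samples in the display above):

import numpy as np

trace = np.array([0., 3., 7., 4., -4., -9., -5., 1., 2.])   # hypothetical samples at 4 ms
filt = np.array([1/3, 1/3, 1/3])                            # three-point running-average filter
smoothed = np.convolve(trace, filt, mode='same')            # zeroes effectively appended at the ends
print(smoothed)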
169
Here's a final example of a convolution run as an animation.
The same input trace (again!) with a somewhat arbitrary filter of four points: 1, -2, 4, -1.
Click on the animation to pause and re-start it.
170
The frequency content of the seismic trace
We can assume that the seismic trace itself is made up of a series of convolutions of
different waveforms. The following displays show the time domain functions and the
amplitude spectra of just some of the functions that contribute to the recorded trace.
The shot command - the signal from the instruments that a shot must be fired.
Convolved with ... Multiplied by ...
The shot itself, in this case an array of airguns from a marine survey.
Convolved with ... Multiplied by ...
A ghost reflection on marine data caused by the energy reflecting from the sea surface.
Convolved with ... Multiplied by ...
The reflections themselves, assumed to be a random sequence with a fairly white spectrum.
Convolved with ... Multiplied by ...
171
Energy losses due to the spreading of the energy from the shot, and absorption caused by friction within the rocks.
Convolved with ... Multiplied by ...
The response of our recording hydrophones (and amplifiers).
EQUALS EQUALS
The final seismic trace!
We could continue this list ad infinitum! Random noise may be added at any stage,
coherent noise may be added, for example, at the recording instruments, and multiple
reflections will cause other peaks within the final amplitude spectrum.
We'll conclude our look at frequencies with one last topic - phase!
Phase
172
We need two parameters to specify a component of a given frequency - both its amplitude
and its phase. The following display shows six different "wavelets", each with the same
amplitude spectrum, but with differing phase spectra.
The phase spectrum, in each case, has been "unwrapped" to remove apparent jumps of
360 degrees in the phase. The grid lines on this plot show 90 degree steps as a reference.
Remember that a large linear phase component simply represents a "time-shift" in the
wavelet.
Flick through the different wavelets and try to find the one that is as short as possible in the
time domain, and that releases its "energy" as soon as possible.
The first wavelet in the sequence is obviously zero phase - it is symmetrical, so all the
component cosine waves must be symmetrical and peak at time zero. The subsequent
wavelets have various phase spectra, and various lengths but the last (number 6) meets the
criteria mentioned above - it's as short as possible in the time domain, and releases its
energy as soon as possible. Wavelet 5 is of equal length, but all of its energy is at the rear of
the wavelet.
The wavelet which, for a given amplitude spectrum, occurs only in the positive time range
(it starts at "0"), and releases its energy in the shortest possible time is known as a
minimum phase wavelet. It is possible (by a fairly complex process involving logs of
complex FFTs!) to compute the appropriate minimum phase spectrum from a given
amplitude spectrum and there is only one possible result - only one wavelet meets these
criteria for a given amplitude spectrum.
Almost all of the filtering processes that occur to the seismic energy as it passes through the
earth are minimum phase processes. The shot energy itself is fairly close to minimum
phase, as are most of the effects shown on the previous page. Some of the processes
performed on the data (in particular Deconvolution which we will discuss later) are
minimum phase processes. The result of all of these combinations of minimum phase
"processes" is itself minimum phase!
173
We will come back to minimum phase concepts later, but for now, just remember that a
minimum-phase wavelet starts at time zero, is as short as possible, and releases its energy
as soon as possible. If you try to draw a wavelet, the chances are that it will start off with a
big peak, and then die away fairly quickly - it will be (roughly) minimum phase.
The fifth example in the sequence shown above is the time-reversal of the minimum phase
wavelet and is known as the maximum phase wavelet. Any other wavelet (not zero,
minimum, or maximum phase) is referred to as mixed phase.
One final note on phase. If we have just 2 samples (and it only works for 2!), there are
three possible phase spectra. Ignoring any linear phase components (or time shifts), if both
samples have the same value then the wavelet is symmetrical and hence zero phase. If the
second sample has a smaller absolute value than the first it's minimum phase, and
maximum phase if the second sample is larger.
174
It's about time we got onto processing!
We start with a look at some of the very early stages in the processing sequence.
An example of a modern processing flow that could apply to 2D or 3D land or marine data
- we'll look at all stages in detail.
Stage 1 - reading the data from the field tapes (and making sure that it's all there!).
Some special processing sometimes required for Vibroseis data, with an explanation of
correlation.
The summing of data from multiple shots sometimes required for Vibroseis data and other
"low amplitude" sources.
Calculating and applying the initial statics necessary to get our data referenced to the
correct "time zero".
175
... and more ...
The initial amplitude corrections applied to correct for the spreading of the energy (and
other factors).
176
A typical processing flow
As we've finally got around to talking about processing, we'll start with a typical processing
flow, showing all of the processes applied to one "line" of seismic data. This flow is based
on a typical marine 2D line, but could apply almost equally to any type of processing.
The processes necessary for the geometrical correction of the data are shown in red, whilst
the others (in yellow) are really just cosmetic processes designed to improve the final
result. Many of the processes are optional, and other processes may be used within the
sequence. We will discuss each process in detail in the following pages and Chapters.
Field Data
1. Transcription: Conversion of the field data to an appropriate internal format.
2. S.O.D Correction: A static correction so that time zero is the time that the shot was fired.
3. Signature Deconvolution: Replacement of the source signature with a more desirable wavelet.
4. Initial Gain Recovery: An initial correction for the spherical spreading of the signal with time.
5. Resample: Anti-aliased and resampled to the highest sample period commensurate with the desired frequency range.
6. Edit: Removal of bad traces / shots and correction of any polarity reversals.
7. Multichannel Filtering: Attenuation of shot-generated noise, improvement in spatial coherency.
8. CMP Gather: Input of geometry, and calculation of traces from every common mid-point.
9. De-Multiple: Removal of long period multiple reflections.
10. Dip Moveout: Correction of the spread of data within one CMP for dipping events.
11. Deconvolution: Removal of short period multiples and frequency balancing.
12. NMO Correction: Correction of curvature on events due to differing trace offsets.
13. Mute: Removal of first-break noise.
14. Equalisation: Amplitude normalisation.
15. CMP Stack: Summation of all data from one common mid-point.
177
16. Datum Correction: Final static correction to the final datum.
17. Final Gain Recovery: Adjustment of the initial gain recovery to compensate for velocity changes.
18. Multichannel Filtering: Additional coherency enhancement (spatially).
19. Deconvolution: Final frequency balancing.
20. Migration: Re-positioning of dipping events to their correct spatial position.
21. Spectral Shaping: Adjustments to the final amplitude / phase spectrum.
22. Bandpass filter: Limiting of frequencies to the useful signal range.
23. Equalisation: Amplitude normalisation.
Final Display
Although many of these processes are optional the CMP (or incorrectly named CDP) stack
(item 15) is essential when we are processing multi-fold data. The processes prior to this
stage are called (logically) pre-stack processes, and those after post-stack processes. It's
useful to distinguish between these two parts of the processing sequence, as there may be a
60:1 or more reduction of data at the stack stage.
178
Transcription
The first stage of any processing sequence
requires that the data from the field be
loaded and checked.
All processing systems have their own internal format for the storage of data. Even with
the current trend towards cheap storage devices, it may still be impossible to store all of the
data on disk, and tape-to-tape systems are still quite common. Let's repeat an earlier
exercise and calculate the storage requirements for some typical field data.
Again, we'll assume a 240 channel recording system, which gives us 240 x 12,000 bytes, or
about 3 megabytes (with headers) for each field record. We've already got enough to fill
two 3½" floppy disks with data.
Now we'll assume a 25 m shotpoint interval which, for a 30 km seismic line gives us 1200
shots. Multiplying up again, we now have a total of 3,600 megabytes (3.6 Gb) for just the
field data for this line. If we want to keep other processed versions of this line then we'll
need as much again for each stage before stack (unless we're resampling the data).
When one considers that a complete marine survey may consist of 1,000 or more kilometres
of data (120 Gb), and we will normally be processing many lines simultaneously, you begin
to see the storage problem. With multiple streamer 3D surveys the problem gets totally out
of hand!
179
So, our initial processing stage must read all of the field data and convert it into our
internal format.
We need to check this initial stage to ensure that every trace of every shot has been read
correctly. Many of the problems at this stage (hopefully getting fewer) are related to the
physical condition of the recording media. The reprocessing of old data that has not been
stored correctly may require the resources of one of the specialised companies that can,
hopefully, ensure that as much data as possible can be read from the field tapes.
We also need to compare the list of shots read from the tape against those listed in the
various field reports (normally the Observer's Logs). Although each shot is identified by a
shotpoint number, these may not be recorded on the tape. Instead a Field Record Number
may be used which might be designed to match the shotpoint number but which might,
occasionally, get out of step.
At the end of this stage we must have a complete list of the shotpoints and traces now in our
internal format - errors at this stage will affect everything that follows!
What kind of problems can we encounter even if we read the data (apparently) correctly?
Most of these are fairly obvious since they usually involve some kind of mis-interpretation
of the data on the field tapes. If, for example, we are expecting (and the field tape headers
agree) data recorded at 2 ms sample period, and in fact it has been recorded at 4 ms, then
we'll only get half the number of samples we expected.
If the computer gets confused about the actual format of the numbers on the tape, then the
output will normally be very "un-seismic" looking. A scan error (where one multiplex scan
of the channels gets out of "sync"), will generally mean that we'll have to omit that shot
from any processing, but, we should look at the data as soon as possible to confirm this.
The following display shows four examples of loaded data, using SEG-Y as an example.
Note the (sometimes fairly subtle) errors when we read data in the wrong format. The
180
abbreviation "FP" indicates floating-point data.
There are one or two other considerations that may be required by data from Vibroseis or
other (more unusual) sources - we'll look at these briefly before moving on through the
processing sequence.
181
Vibroseis data & correlation
This sweep is effectively superimposed on every reflection in our seismic section. If we could leave the
amplitude spectrum alone (it's pretty good), but change the phase spectra to something more useful
then maybe we would be able to "collapse" all of the sweep into a more concise source.
We could, of course, transform the sweep into the frequency domain, modify its phase, and transform
it back, but this would be quite costly given that a sweep may be (in some cases) more than 20
seconds long. There is, however, a time-domain technique that is just as useful!
Correlation
At this point we will digress slightly to refer back to the section on Convolution given in the previous
Chapter.
182
This is the formula for Convolution given in the last Chapter. You should recall that
Convolution in the time domain (remember the multiplying and adding) is equivalent to
multiplying the amplitude spectra of x and h together, and adding their phase spectra.
Correlation is performed in much the same way as Convolution, except that one of the two inputs is
not time-reversed before the multiplying and adding. What effect does this have on the outcome?
Time-reversing a trace reverses all of its phase components (+X degrees become -X degrees, etc.), and
since Convolution multiplies the amplitude spectra and adds the phase spectra, as we might expect
Correlation still multiplies the amplitude spectra but subtracts the phase spectra.
We will see later on how Correlation can be used to compare two traces, but, for now, let's
concentrate on the Correlation of a trace with itself - an Autocorrelation.
Autocorrelation
This diagram shows the Autocorrelation of a
three-point "trace". The input (in black) is not
time-reversed, and is placed alongside itself,
shifted, cross-multiplied, and added to give the
output trace.
Note two things from the above example. Firstly the centre value of the Autocorrelation, when the
183
trace lines up with itself exactly (the zero-lag value) is always going to be equal to the sum of the
squares of all of the amplitudes in the traces (this is regardless of the number of samples and shifts we
use). Since these squares will always be positive, this will always be the largest number in the
Autocorrelation.
The output Autocorrelation is symmetrical. Once again this is independent of the number of samples,
and the actual values. Sample "x" times sample "y" always equals sample "y" times sample "x". As
the output is symmetrical, it must be zero phase.
If we think about this in terms of the Fourier transform of the trace, we multiply the amplitude
spectrum by itself, and subtract the phase spectrum from itself - giving (ALWAYS) a zero-phase output
with the amplitude spectrum squared!
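Both properties are easy to check with a small Python sketch (the three trace values are arbitrary):

import numpy as np

trace = np.array([2.0, -1.0, 3.0])                  # a three-point "trace"
auto = np.correlate(trace, trace, mode='full')      # all lags, from -2 to +2
print(auto)                                         # [ 6. -5. 14. -5.  6.] - symmetrical
print(np.sum(trace**2))                             # 14.0 - the zero-lag value is the sum of squares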
If we Autocorrelate the sweep with itself, we obtain a nice, short, zero-phase wavelet. If we correlate
the sweep with all of the traces recorded using that sweep, then every sweep present in the trace
(each reflection) will be collapsed down to the zero-phase wavelet shown above. We can then reduce
the data length down to the typical lengths needed for the final seismic data - we no longer need the
184
extra 12-16 seconds for the sweep.
Luckily, nowadays, almost all Vibroseis correlation is done in the field and the processing centre
receives correlated data which looks much like any other seismic data. Note that, unlike some of the
marine sources we have looked at, Vibroseis data (after correlation) is ZERO PHASE not minimum
phase, and we may need to make some additional allowances in the processing to compensate for
this.
Vibroseis correlation is really a form of signature deconvolution, where we change the source shape
(present on every reflector) into something more useful - more on this in a couple of pages time!
185
Shot summing
Another technique common with Vibroseis data, and also with other less "energetic" sources, is the
summation of a number of shots (at the same position) to produce one "field record".
Once again, this is normally done in the field but it's worth considering as a possible processing step
on older data.
It can be shown (by some fairly messy Maths) that the root-mean-square amplitude of the random
noise is reduced by one over the square-root of the number of summations - in this case a factor of 2
(4 summations).
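A minimal Python sketch of the idea (the signal and noise here are synthetic stand-ins, not real shot records):

import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 30 * np.arange(500) * 0.004)           # a 30 Hz "reflection"
shots = [signal + rng.normal(size=signal.size) for _ in range(4)]  # 4 shots, each with random noise

noise_one   = np.sqrt(np.mean((shots[0] - signal) ** 2))                 # noise RMS on a single shot
noise_stack = np.sqrt(np.mean((np.mean(shots, axis=0) - signal) ** 2))   # noise RMS after summing 4
print(noise_one, noise_stack)            # the stacked noise RMS is roughly half the single-shot value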
Before closing this section of processing that is normally done in the field, let's digress again to discuss
the order of processes and so-called linear processes.
186
Linear Processes
A linear process is essentially one that can be reversed. Multiplying the samples in a trace by a
constant (for example -1 to polarity reverse it) is reversible (and hence linear) unless the multiplying
factor is zero (once we've killed a trace, we can't bring it back, or 1/0 = ?).
Correlation is non-linear unless we keep one of the two inputs as well as the output; we can then
reverse it. Convolution is the same, and both of these can suffer from the "1/0" syndrome shown
above - if we completely filter out a certain frequency or frequencies then we cannot bring them back.
We'll look at some more examples of this later, but think about each process in terms of its
linearity.
Processing order
One classic example of process ordering is the Correlation and summation mentioned above for
Vibroseis data. If the sweeps used for a series of shots (in one position) were all identical, then we
could sum the data and then correlate it.
In practice each Vibroseis truck has different physical characteristics and the sweep is recorded for
every shot for every vibrator. As these can be quite different it's usually necessary to apply the
Correlation before the summation.
This is another thing we have to think about in the processing sequence - just what order to
apply things in!
187
Initial Statics
For the initial stages of processing, which may rely on the precise positions of shots and receivers, it is
important to correct the data so that the timing corresponds to the time from the shot to the reflector
and back to the surface again.
For marine data this will normally be relatively simple, we'll simply need to apply a static correction (a
fixed amount for each trace - normally for the whole survey!) to correct for any delays in the recording
system.
Even on marine data it's often possible to check this from the data itself.
188
Here's a straight line fitted to the above first
arrivals. This changes by 233 ms between traces
1 and 15 on the record.
If the above timing did not correspond to the expected offset, and if we were sure that we
were picking the very first arrival on each trace, then we could estimate the recording
delay by this method. Any calculated delay should, of course, agree with the value for the
delay in the Observers Logs or ancillary field information.
As this delay is typically 120 - 140 ms, it should be obvious, and should probably be zero
(no delay) on most modern data.
189
Picking the first breaks is the geophysical equivalent of
banging your head against a wall - it's nice when you stop!
(At times like these it's difficult to know who's in charge - you
or the computer!)
One of the nice things about picking the first breaks is that, provided you're reasonably consistent in
what you pick, you don't have to pick exactly the same part of the "break" on every single trace. It's
the time difference between successive traces that's important and any random errors in absolute
timing should cancel out in the complex calculations done on the results.
Datums
A datum is defined in Sheriff's Encyclopedic Dictionary of Exploration Geophysics as a reference value
to which other measurements are referred, or an arbitrary reference surface, reduction to which
minimises local topographic and near-surface effects.
For the initial stages of the processing we need to be sure that the times on our seismic records are
referenced to the time at which the shot was fired. For marine data this simply means applying the
S.O.D correction mentioned above - our "datum" in this case is now the shot and streamer position, a
190
few metres below sea level (and hopefully pretty constant). We will make the final adjustments to
(normally) sea level at the very end of the processing.
For land data our shots and receivers can be at almost any depth. In this case it is common to use a
floating datum - a smooth line that follows the general elevation trends along our line, but removes
any rapid shot-to-shot and receiver-to-receiver variation along the line. Once again, the correction to
final datum (which can be almost any fixed value for land data) will be done at the end.
The floating datum may be above or below the actual shot or geophone position and the resultant
statics may be positive (adding to the time and moving the data downwards) or negative (subtracting
from the time and moving the data up towards time zero).
In this diagram, where the floating datum is below the shot datum (at either end) we need
to take time off the events so a negative static correction is applied. It's positive when the
floating datum is above the shot datum.
191
Signature Deconvolution is normally applied early on in the processing to change the "source wavelet"
present on every reflection in the data set into something more desirable. The name "deconvolution"
should give a clue to one of the methods used - we need to find the convolution filter that will
accomplish our desired transformation.
If we can change the wavelet from something "long" into something "short" (and preferably minimum
phase) as shown in the top of the above diagram, then the image of that wavelet on every reflection
will be shortened (as show the bottom of the diagram), and individual reflections will be visible.
We've already seen a similar process in the correlation of a Vibroseis sweep with the data - the sweep
is reduced to the autocorrelation of the sweep (unfortunately zero-phase in this case).
This photograph of a familiar landscape has been modified in a similar way to the distortion that
occurs when long source wavelets are used.
Every dot in the picture has been replaced with a spread of values (in a pyramid shape).
Click on the second button to "remove" the effects of the long "signature".
192
Here's the time function, amplitude spectrum and phase spectrum
of a typical marine sleeve-gun signature.
The signature looks to be (and is) fairly minimum phase, but the
phase spectrum gets very inaccurate at the high frequencies
because the amplitudes are so small.
193
... and the spectra of the input
wavelet ...
Although the spectra look a bit weird, this will perform the transform we require. To check the filter,
we'll transform the final spectrum back into the time domain to give:-
This doesn't look too good does it! Why is it dominated by high frequencies?
This is a common problem with frequency division. When the input amplitude spectrum and desired
amplitude spectrum are about the same, then the output filter spectrum will be around "1". When
the input spectrum gets very small (at the high frequencies) we could be dividing 0.001 by 0.000001 -
very small numbers, but the answer is 1000!
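A toy illustration of the problem in Python (the two spectra are made-up shapes, chosen only so that the input dies away faster at high frequencies than the desired output):

import numpy as np

freqs = np.linspace(0.0, 125.0, 126)
input_amp   = np.exp(-freqs / 15.0)          # input signature amplitude - tiny at high frequencies
desired_amp = np.exp(-freqs / 60.0)          # desired output amplitude
filter_amp  = desired_amp / input_amp        # the "inverse" filter from frequency division

print(filter_amp[0], filter_amp[-1])         # about 1 at 0 Hz, but several hundred at 125 Hz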
194
Before leaping to any conclusions, let's check this filter by applying
it to the input signature - this shows the result.
The output looks pretty close to the desired output (except for
some "wobble" at the high frequencies where the division gets
prone to error).
Well ..., it all works O.K. if the input trace is perfect - it has no noise! To see what happens when there
is some noise, let's add a single "spike" of noise to the input trace and run the filter through it again:-
Input with spike "noise", convolved with filter, gives:
YES, THERE IS A PROBLEM! Any noise in the input trace (unrelated to the original source signature)
will be overlaid with an image of the deconvolution filter (or operator). In this case the high frequency
noise generated by this has wiped out much of the information on the preceding reflection!
Although frequency domain division is often used for final shaping of the spectra at the end of the
processing sequence (and, of course, we will come back to this) we must always be aware that any
frequency domain process can introduce unwanted anomalies in the time domain.
To resolve this problem we go for an "approximate" filter of a fixed length - time domain
deconvolution!
195
Signature Deconvolution (2)
Can we design a filter in the time domain, of a fixed length, that will perform the same kind
of inverse filtering as that done in the frequency domain without the unwanted side effects?
Since we use Convolution to filter in the time domain, this problem reduces to:-
We'll take the simplest possible example, a three-point input trace (with values "a", "b"
and "c"), convolved with a three-point filter ("x", "y", "z") to give a three-point desired
output ("p", "q", "r"). Cycle through the buttons in the following display to show the
complete convolution process:-
Each individual multiply & add yields one equation, so we have 5 equations for 3
unknowns (x,y & z). We can (approximately) solve this using the Least-Squares technique
outlined in Chapter 1. If you want to see the full derivation then click here; here are the final
three equations arranged by variable:-
The coefficients are highlighted in yellow, the right hand side in red. Note the symmetry of
the coefficients. In mathematical terms this is a Hermitian Toeplitz matrix, which can be
solved by an iterative process (the Wiener-Levinson algorithm) which is much faster than a
conventional solution.
Are the terms on the left hand side familiar? On page 4 of this Chapter we looked at
correlation; in particular the autocorrelation of the sequence "a, b, c" came out as
"ac, ab+bc, a²+b²+c², ab+bc, ac", which are the coefficients in the above equation.
196
The right hand side is the cross-correlation of the input with the desired output (a,b,c cross p,q,r).
So the above equation (in matrix form) reduces to:-
197
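The matrix form isn't reproduced here, but the system it describes (a symmetric Toeplitz matrix of autocorrelation values on the left, cross-correlation values on the right) can be set up and solved in a few lines of Python; the wavelet and desired output below are arbitrary stand-ins for "a, b, c" and "p, q, r":

import numpy as np

a = np.array([1.0, 0.5, 0.25])       # input wavelet "a, b, c"
d = np.array([1.0, 0.0, 0.0])        # desired output "p, q, r" (a spike here)
nf = 3                               # filter length ("x, y, z")

full_auto = np.correlate(a, a, mode='full')          # autocorrelation at all lags
auto = full_auto[len(a) - 1: len(a) - 1 + nf]        # zero and positive lags
R = np.array([[auto[abs(i - j)] for j in range(nf)] for i in range(nf)])   # Toeplitz matrix

full_cross = np.correlate(d, a, mode='full')         # cross-correlation of desired output with input
g = full_cross[len(a) - 1: len(a) - 1 + nf]          # right hand side

f = np.linalg.solve(R, g)            # the least-squares (Wiener) shaping filter
print(f, np.convolve(a, f))          # filtered input approximates the desired spike
# (scipy.linalg.solve_toeplitz does the same job with a Levinson-style recursion)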
Signature Deconvolution (3)
O.K., here's the same example using a time-limited convolution filter to change our
wavelet. Given, once again, that:-
This still, of necessity, has some high frequencies in it, but is only the length that we specify
- the deconvolution process does "the best it can" to change the input to the desired output
under these constraints.
198
In calculating the operator to apply to our data, it may only be necessary to perform the
actual deconvolution calculation once for a whole survey. If the signature is reasonably
consistent from shot to shot (as in modern marine surveys), we may only have one
"typical" signature recorded for the whole survey. We design an inverse operator (or
deconvolution filter) on this signature to give a filter than can be applied (by convolution)
to every single trace in the survey.
It's obviously important to get this right! The most common errors associated with
signature deconvolution are usually in the timing of the data. If we specify the start time of
our desired output to be very different to the start time of the actual signature, then we will
introduce an unwanted time shift in all of our data. (Remember that a time shift is simply
a linear phase shift which the deconvolution will try to correct.)
We also need to ensure that our filter length and start time are optimal for a given filter
design - a filter with only positive time values, for example, can only shift the energy in each
reflector "down" (to a later time).
We can see from both the autocorrelation (it's nice and "sharp"), and the amplitude
spectrum that, although the phase spectrum must be really awful, the signature contains a
good range of frequencies and will respond well to signature deconvolution.
We need to specify a "window" on the signature (usually called the design gate) over which
the autocorrelation is computed and the deconvolution operates. What happens if we get
this wrong?
Here's the same "signature" again, but this time both the autocorrelation and spectrum are
only computed over the part shaded blue:-
It should be obvious that, in this case, the errors introduced into the autocorrelation and
199
spectrum will result in a "bad" signature deconvolution - the deconvolution operator will
only work on the part of the signature that it can "see".
The same criterion applies to the window used on the desired output, and the positioning of
the input with the desired output (often referred to as the lag of the filter). We'll go on to
look at some "live" examples now to show the problems.
200
Signature Deconvolution (4)
Here's a "live" example of Signature Deconvolution! The top two displays show 300 ms of
input "signature" and minimum-phase "desired output". The bottom displays show the
filter (the centre of this display is time "zero" for the filter), and the filtered output. To
start with, leave the design gates as they are and try changing the filter length to see how a
shorter filter affects the actual output - the blue line on the lower right plot shows the
desired output.
Try changing the input and output "windows" (the grey lines are every 50 ms), and see how
this affects the output. The "windowed" button shows the filter applied to the selected
input window, the "Full" button applies the filter to the whole input signature (including
the bits that the design window doesn't reach!).
Note that, like most processing systems, the design windows are specified in milliseconds,
but are converted to the nearest 4 ms sample by the program. The filter length is assumed
to be inclusive so that 8 ms (the smallest value you can specify) implies a "3-point" filter (0,
4 and 8 ms samples).
In some cases (particularly where the input and desired output window lengths are very
different) the filter cannot do anything useful - it doesn't have enough "length" to perform
an optimum calculation.
There's a lot of computation going on behind the scenes in the above example. Although
we can specify the filter length, we don't specify the time range of the filter relative to the
input and output windows.
This time shift (often known as the lag of the filter - the time at which it starts) is calculated
by the program by trying all possible lags, and finding the "best" one. For example, when
you specify a 51 point filter (200 ms), the program starts with the filter starting 25 samples
before time zero and then computes 50 different filters (up to a start 25 samples after
zero). The results of all of these trials are applied back to the input window, and the
difference between this and the desired output is squared and summed. The position
corresponding to the minimum error is finally selected and applied again to the input. This
is known as an optimum lag filter.
201
Depending on the computer you're running this on, the computations listed above should
happen almost instantly. When digital processing first started (in the 1960's) the cost of
computing just one deconvolution operator was such that contractors could charge
something like £0.50 ($0.80) for the computation of one operator. Allowing for inflation,
one run of the example shown above (more than 50 operators are computed) should cost
you about £250 ($400), and would probably take several hours to run!
We very often use signature deconvolution just to modify the phase of a wavelet (not
changing its amplitude spectrum). Any attempt to change the bandwidth of the signature
(the overall frequency range) will lead us back into problems in trying to enhance
frequencies that simply are not present.
It's very common just to design an operator that will convert the signature to minimum
phase, and to apply this to all the data from that source. This is a common technique where
the source is correlated Vibroseis data which, hopefully you'll remember, is zero phase.
We'll come back to other forms of deconvolution and inverse filtering in due course, but for
now we'll look at one more stage of the very initial processing, gain recovery!
202
Gain Recovery
Imagine a sphere of energy expanding from the seismic shot in all directions (click on the
image to stop it!):-
All of the energy initially contained in the shot is spread out over a larger and larger area
as time passes. This causes one of the possible losses of energy on a field record, and is
generally referred to as spherical divergence.
The surface area of a sphere (the area over which the energy is spread) is 4 pi times the
radius squared (I had to go and look that up, which just shows how long ago I was in
school!).
We can calculate the radius as the distance the wavefront travels in a given time, which is
equal to the time (T) multiplied by the average velocity over that time interval (V). We
would expect, therefore, that the energy loss due to spherical divergence would be
proportional to (VT)².
Other more complex forms of energy loss (some of which are frequency
dependent) include that caused by the friction of particles moving against
each other, and losses at each interface through which the wave travels and
is refracted. (Some of the energy in the original seismic P-wave is converted
into an S-wave at each interface and not recorded - more on this later!)
In all cases this produces an overall signal level that decreases with time,
which approximates to:-
203
Accurate determination of the inverse of this energy loss, the gain function that we need to
apply to the data is difficult. We've already seen that the radius of the expanding sphere
depends on the velocity of the medium through which it has travelled, and, as yet, we have
no idea what that velocity is. We could attempt to use the data itself to establish a function,
but how do we (at this stage) distinguish between primary and multiple reflectors and
random noise (all of which will have different decay functions)?
The solution is (as so often in seismic processing) to make some educated(?) guesses, and
choose the "best" result by looking at the data.
Here's part of a single (marine) shot record with different gain functions applied. Flick
through the various functions and choose the one that gives the best overall balance to the
data at all times (time is shown in seconds):-
The left-hand scale of the upper plot is in dB, and this plot shows the function that was
applied to the data. We tried some straightforward logarithmic scaling of 3, 6 and 9 dB per
second, two functions based on time and velocity, and two exotic functions based on an
approximation of the inelastic losses (the e^(0.002TV) function), and a complex function which
applies a different correction to every trace in the record depending on its offset (distance)
from the shot.
The example with 6dB/second is reasonable, as is the simple T2 curve. The examples using
"V" (particularly the last one) are good, but, as I said above, we don't yet know the
velocities for this shot (I cheated in this case!). The exponential function (e^(0.002TV)) is
obviously much too strong.
Out of all of these, one of the simplest (and reasonable) functions is the T2 curve. This is
simple to apply (multiply each sample by its time-squared) and has the advantage that we
can remove it later in the sequence (when we have velocity information) and apply a better
approximation. Removal of the function simply requires division by time-squared.
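As a small Python sketch of applying (and later removing) the T² function - the trace here is random numbers standing in for real samples, and the t = 0 sample needs special care when dividing back:

import numpy as np

dt = 0.004
trace = np.random.randn(1500)                 # stand-in for a 6 second trace at 4 ms
t = np.arange(trace.size) * dt

gained = trace * t**2                         # apply the T-squared gain function
restored = np.divide(gained, t**2, out=np.zeros_like(gained), where=t > 0)   # remove it again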
Although a "dB/sec" type function causes less distortion within the data, in practice a T2
curve is generally sufficient to balance the data over the time range. If there is excessive
noise at the bottom of the record (later time), we may need to stop applying the function at
some time (keeping it constant from then on). This is best determined, once again, by
testing.
We should expect the gain function to be pretty constant over a given area (provided the
source and recording conditions remain similar).
204
Trace editing
We need to find those traces that contain spikes or noise trains that are unrelated to the
"true" seismic data, and to spot any traces with the incorrect polarity.
In marine data, as the recording instruments move with the shot, we should find that any
"bad" channels are consistent from shot-to-shot. For example, trace 163 may be badly
connected and hence needs to be removed from every shot on a line. If we're lucky the
report from the field may mention this!
For land data the problem is more complex. Firstly a bad recording group will affect
different traces on different shots. By examining the geometry we should be able to spot
the logical progression of the bad trace through a sequence of records. 50 or 60 Hertz noise
is common on land data recorded near power lines. We don't normally kill these traces,
but use a notch filter to remove the offending frequency. This is simply a frequency domain
filter that wipes out the appropriate frequency.
Land data may also contain noise from the shot itself. This noise that travels through the
very near surface is usually referred to as ground roll, and we'll remove this with other
forms of filtering that we will discuss later.
We also need to identify any complete shots that need to be removed. These may be due to
some kind of "mis-fire" in the field (uncommon these days) or a mis-reading of the data
from tape (such as the sync errors we mentioned earlier). Once again a brief look at a
205
selection of data (and careful examination of the computer output "listing" from the
process that read the tape) should spot this.
206
It's obvious from the above that one recurring problem is
that of "dead" traces.
The standard polarity convention clearly states that the onset from the compression wave
produced by an explosive source is represented by a negative
number on tape (and throughout the processing) and
displayed as a negative (white) trough.
Polarity reversal simply requires the computer to multiply every sample in the trace by -1.
Once again, the logical arrangement of reversed traces will follow that discussed above for
land or marine data. Do make sure that you only reverse the trace once in the processing
sequence. Doing it again will put it back to the reversed state!
207
On with the processing sequence - some more of the early stages common to all types of
processing.
We generally record more data than we really need for "conventional" seismic processing.
With the appropriate filtering, we can throw away some of the samples in time ...
In the same way as we can filter our temporal frequencies (in time) we can also filter our
spatial frequencies.
Another type of spatial filtering which transforms our traces from time and space into time
and dip.
Yet another type of spatial filtering, using deconvolution techniques on the complex
numbers which make up the frequency spectra of a group of traces.
208
Some examples of spatial filtering on real seismic data.
Interpolating data in the space domain - adding more traces to our seismic record or
regularising data with inconsistent spatial sampling.
Sorting, gathering and displaying our seismic data in a variety of different directions.
209
Resample
Those of you that can remember back as far as the start of
Chapter 7 will realise that I am describing the processes in a
different order to that shown in the processing sequence at
the start of that Chapter.
Resample, or the reduction of the data to a smaller sample rate (higher sample interval) is a
typical process that is normally applied near the start of the processing but not in any
particular position. If we don't have any frequencies in the original data above the new
Nyquist frequency then the process is reversible, and hence linear.
For conventional (deep) seismic data we rarely see good information at frequencies greater
than 100 Hertz. The bandwidth of the seismic source is usually quite limited, and the
absorption and other effects in the earth rapidly attenuate the higher frequencies.
Most field data is acquired at a higher sample rate than that usually used in the
processing. The cost of acquisition can be enormous and, if we find that we haven't
recorded quite the frequency range we require, we would have to go back and re-shoot the
data (anyone with a background in accountancy should go and lie down at this point).
Having acquired the data at a low sample period (typically 2 ms), we normally find that,
apart from the very shallowest data, we only have frequencies that can be handled by a
higher sample period (typically 4 ms). We therefore resample the data to this new period as
soon as possible in the sequence to avoid processing twice as many numbers.
Field signatures are normally recorded at the same sample rate as the data and, since the
first stage of reading the data has to read all of the samples from the tape, we normally
apply the signature deconvolution at the recorded sample rate. After that, subject to the
removal of frequencies that will alias, we can more or less do it at any stage (the sooner the
better).
Why don't we simply resample all data to (for example) 8 or 10 ms and reduce the volume
even further?
We need to keep the high frequencies to improve the resolution of the data. Here's a single
trace showing two distinct peaks progressively filtered down to lower frequencies (higher
sample periods). You'll notice that the two peaks become one at the lower frequencies (on
the right):-
We can see the same effect when we progressively resample a trace, with the appropriate
anti-alias filter applied (the buttons show the sample period in milliseconds):-
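If you want to see what that looks like in code, here's a minimal sketch (in Python, with a made-up trace, and assuming the scipy library is available):

    import numpy as np
    from scipy.signal import decimate

    dt_in = 0.002                              # 2 ms field sample period
    trace = np.random.randn(2001)              # stand-in for a real seismic trace

    # Resample to a 4 ms period: decimate applies an anti-alias (low-pass)
    # filter before keeping every second sample, so nothing above the new
    # 125 Hz Nyquist frequency can fold back into the data.
    trace_4ms = decimate(trace, 2, ftype="fir", zero_phase=True)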
Similar considerations also apply in the other dimension - space. We'll go on to discuss
these next.
Spatial Resampling
As we are able to reduce the amount of data in the time domain, by resampling, can we do
the same in the other dimension - space?
The answer is yes, provided we take the same care with the frequency content, in this case,
the spatial frequencies.
This display shows a small piece of a seismic record with a single frequency dipping at 10
milliseconds per trace upwards towards the right. By convention, since time decreases as space (trace number) increases, we will refer to this as a negative dip. In other words,
-10ms/trace dip.
The buttons show the frequency (in Hertz) of the dipping signal, divided by 10 ("10" = 100
Hz). Increase the frequency, and look at the apparent dip of the event:-
You should notice the first problem when the signal reaches 50 Hz. The event, in this case,
could be dipping in either direction. Above 50 Hz, the signal appears to dip in the other
(positive) direction, and at 100 Hz the event appears flat.
Let's assume a trace interval of 25 metres in the above example, and compute the
wavenumbers at each frequency using the above equations:-
Frequency (Hz)   Apparent dip (ms/trace)   Horizontal velocity (m/s)   Wavelength (m)   Wavenumber (cycles/km)
     10                 -10                        -2500                    -250                 -4
     20                 -10                        -2500                    -125                 -8
     30                 -10                        -2500                    -83.33               -12
     40                 -10                        -2500                    -62.5                -16
     50                 -10                        -2500                    -50                  -20
     50                  10                         2500                     50                   20
     60                   6.67                      3750                     62.5                 16
     70                   4.29                      5833.33                  83.33                12
     80                   2.5                      10000                    125                    8
     90                   1.11                     22500                    250                    4
    100                   0                         Inf.                    Inf.                   0
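The arithmetic behind the table only takes a few lines; here's a small sketch (the 25 metre trace interval is the one assumed above, and the function name is just for illustration):

    trace_interval = 25.0                       # metres

    def event_kinematics(freq_hz, dip_ms_per_trace):
        """Return (horizontal velocity m/s, wavelength m, wavenumber cycles/km)."""
        velocity = trace_interval / (dip_ms_per_trace / 1000.0)
        wavelength = velocity / freq_hz
        wavenumber = 1000.0 * freq_hz / velocity
        return velocity, wavelength, wavenumber

    print(event_kinematics(10, -10))            # (-2500.0, -250.0, -4.0), the first row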
Spatial Frequencies
We can analyse spatial frequencies in the same way as we analyse temporal frequencies - by
using the Fourier Transform. (I'll try to use the term "frequency" when referring to
frequencies in time (Hz), and "spatial frequencies" or "wavenumbers" when referring to
those in space.)
Here's a typical array of geophones (11 in all) spaced 2 metres apart to cover 80% of the 25
metres between one geophone "station" and the next:-
These geophones would typically be arranged "in-line" with the shot, so that the energy
from the shot (and reflections) would be arriving from either the left or right edge of the
picture.
This energy will arrive at slightly different times at each 'phone, and the results from all of
the receivers will be combined together in the recording truck. This summation introduces
some form of spatial filtering to the data.
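Here's a rough sketch of how that in-line wavenumber response can be computed (equal weighting of the 11 geophones is assumed):

    import numpy as np

    positions = np.arange(11) * 2.0                    # geophone positions, metres
    k = np.linspace(-250, 250, 1001) / 1000.0          # wavenumber in cycles/metre

    # Sum the phase contributions of each geophone and normalise: the result
    # stays near 0 dB at low wavenumbers and falls away above roughly
    # +/- 30 cycles/km, as described below.
    response = np.abs(np.exp(2j * np.pi * np.outer(k, positions)).sum(axis=1)) / 11
    response_db = 20 * np.log10(np.maximum(response, 1e-6))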
The vertical scale shows the amplitude of the recorded signal (as usual in decibels), and the horizontal scale now shows spatial frequency (or wavenumbers) in cycles per kilometre.
The lower spatial frequencies, up to about 30 cycles per kilometre (in both directions) will
be recorded with no attenuation, but frequencies above this will be attenuated by more
than 10 dB. This prevents the recording of very high wavenumbers (or dips) that will
spatially alias across the shot record.
Unfortunately, this array only fully attenuates these higher frequencies for energy arriving
from the direction of the shot (along the line of geophones). Most of the energy will, of
course, come this way, but some (possibly reflected from shallow structures "off" the
seismic line) may arrive from other directions. What's the directional response of this
array?
The following plot shows wavenumbers on a circular axis coming from the centre of the
plot, and moving around the circles to a different "direction", gives the response in that
direction:-
If you look along the "East-West" line across the centre of the plot (through the numbers)
you will see the same wavenumber response as that shown in the plot above - red
represents high values, down through yellow, green and blue for the low values.
Now look along a line from the "South-West" to the "North-East". Any energy arriving
from these directions will not be as greatly attenuated as that along the centre line. Data
from top-to-bottom (at right angles to the array) will arrive at all geophones at the same
time, and not be attenuated at all.
Button "2" shows how we can introduce some directivity into the array by adding another
string of geophones (with the same spacing) just two metres away from (and parallel to) the
original string. The other buttons show the response for spacings of 4 and 8 metres.
It's also possible to design shot arrays to improve directivity; you'll remember from Chapter 2 the discussion on arrays of air guns (of different sizes) used for most modern marine surveys. I was able to compute the frequency / directivity response of one such array by computing the Fourier Transform of the appropriate wavenumbers and then converting these to temporal frequency using the equation "wavenumber = 1000 x frequency / velocity". This was derived from the equations on the previous page and, for marine data, we can use a water velocity of about 1500 m/s.
Once again, given the high cost of acquisition, it may be advisable to record more spatial
data than is strictly necessary for an initial seismic section. We can spatially resample our
seismic data in many ways, the most common being by combining pairs of traces from each
shot and so reducing the number of traces per shot by a factor of two.
This trace mixing can be performed after applying time-shifts to each of the traces. This
slant-stack will bias the mix towards enhancing certain "dips" in the data and is sometimes
known as beam-steering.
There is a slight problem associated with trace mixing, caused by the fact that the two
traces we are summing together are at different offsets from the shot. We will address this
problem during our discussions on seismic velocities.
How do we decide how much data to process? We could do some fancy calculations involving dips and wavenumbers, or we could (more usefully) see what sort of parameters gave good results in this (or a similar) area before. If all else fails we may need to process a small part of a line with different spatial sampling to establish the cost / quality relationship within our data.
An old (low fold) example of marine data processed with different combinations of trace & shot summation and omission.
The arrays of shots and geophones, and any trace mixing we do in the processing, introduce
some form of spatial filtering on our data. We can also apply other forms of spatial
filtering as part of our data enhancement techniques. We'll go on to discuss these next.
Spatial filtering
In order to understand the effects of various forms of spatial filtering, we need to look at a 2-dimensional spectrum of both the temporal and spatial frequencies within a seismic record.
There is a great deal of similarity between the time/frequency domains and the space/wavenumber domains. Consider a simple "triangular" filter, with sample values 1,2,3,2,1. In the time domain it (and its amplitude spectrum) looks like this:-
The time samples (4 ms apart) transform into a "sinc-like" function in the frequency domain.
Apart from the scales, and the fact that in space we consider both negative and positive
wavenumbers, the two plots are identical.
In order to examine both the temporal and spatial frequencies at the same time, a type of
analysis, known as a FK analysis (F = temporal frequency, K =spatial frequency or
wavenumber), is used throughout the processing to design some types of spatial filters
(usually FK filters).
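In sketch form (assuming the record is held as a time-by-trace array), an FK spectrum is just a 2-dimensional Fourier transform:

    import numpy as np

    def fk_spectrum(gather, dt, dx):
        """gather: array of shape (n_samples, n_traces); dt in s, dx in metres."""
        spec = np.fft.fftshift(np.fft.fft2(gather))                         # complex FK spectrum
        f = np.fft.fftshift(np.fft.fftfreq(gather.shape[0], d=dt))          # temporal frequency, Hz
        k = np.fft.fftshift(np.fft.fftfreq(gather.shape[1], d=dx)) * 1000   # wavenumber, cycles/km
        return f, k, spec

    # The amplitude, 20 * log10(abs(spec)), is what is normally contoured;
    # the phase is computed but rarely displayed.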
Since this plot attempts to show a 3D surface in just 2D, you'll have to imagine that the yellow spike comes out of the screen towards you!
If we were to remove the spike from the FK spectrum (just by editing it out), when we
transformed back into time & space, we would have dead traces!
Note that we don't normally look at the phase part of the FK spectrum (although it is, of
course, computed). We generally assume that any FK editing we do will leave the (very
complex) phases unchanged.
FK Filtering
Yet another synthetic record, this time looking a little more complicated.
In fact, this just consists of two different temporal frequencies (two cosine waves) dipping
with two different dips, added together.
As we might hope, the FK spectrum shows both frequencies and dips as separate points in the FK spectrum.
If we "surgically" remove the area inside the red circle in the above transform, and then transform back to the time/space domains, we'll totally remove the cosine wave with positive dip (down to the right).
A slightly more reasonable synthetic! We have just one event, with a limited range of frequencies, and one dip.
Any dips of more than 4 milliseconds (one time sample) per trace will alias in wavenumber.
FK filtering can be used in other ways to enhance the coherent signal within a shot record.
One common way is to compute the transform, raise it to some power (typically 1.1 to 2)
and transform back - this enhances any coherent dipping data. We can use the FK
spectrum to limit both the temporal and spatial frequencies, any editing usually being done
by marking part of the spectrum for removal, and transforming back into our normal
seismic record.
We can use FK filtering at the end of the processing sequence to remove other sorts of noise
(particularly diffracted noise), and we'll come back to it again when we talk about multiple
attenuation.
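A bare-bones sketch of that kind of dip rejection might look like this (a real implementation would taper the edges of the reject zone rather than cutting them off sharply):

    import numpy as np

    def fk_dip_filter(gather, dt, max_dip_ms_per_trace):
        """Zero everything dipping more steeply than max_dip_ms_per_trace."""
        spec = np.fft.fft2(gather)
        f = np.abs(np.fft.fftfreq(gather.shape[0], d=dt))[:, None]   # Hz
        k = np.abs(np.fft.fftfreq(gather.shape[1], d=1.0))[None, :]  # cycles per trace
        # an event of dip p (s/trace) maps onto the line k = f * p in FK space,
        # so keep only the fan of dips inside the chosen limit
        keep = k <= f * (max_dip_ms_per_trace / 1000.0)
        return np.real(np.fft.ifft2(spec * keep))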
Linear Tau-P filtering
When we transform data into the FK domain, dipping events (in time/space) become
dipping events (in frequency/wavenumber). The Tau-P transform is yet another way of
looking at our data that transforms dips into (effectively) single points. We can then edit
the data in this "dip" domain and transform back into the time domain.
We get a similar problem if we sum seismic traces along lines of constant dip.
Here, for example, is a set of three samples from three seismic traces summed for a
particular dip (-2 samples per trace).
The summations of the sample values are shown as equations on the left, and, by clicking
on the buttons, you can see the summation for other dips.
From the above we have fifteen different equations (covering 5 different dips) for 9
unknowns. If we compute (as usual) a least-squares solution to these equations, we end up
with the following 9 equations to solve:-
Solving this (computers have their uses!) gives the answers:- a=6, b=-1, c=1, d=-9, e=-4, f=3,
g=7, h=-8 & i=1 - which are the numbers that I started with!
The results of summing along the original lines of "dip" gives us some estimate of the
coherent data along that dip. The ability to reconstruct the original data from these "dip-
stacks" enables us to transform to and from this "dip" domain.
We define the slope, or dip, of the line along which the summation is performed as "p", and
the intercept time (the time where this dip crosses some reference point) as "Tau". For this
reason, this type of transform (summing along linear "dips") is usually referred to as the
linear Tau-P transform.
Event   Dip (ms/trace)   Horizontal Velocity (Trace Int. = 25 m)   Slowness (s/km)
  A         -10             -25/0.010  =  -2500 m/s                -1000/2500  =  -0.4
  B          -4             -25/0.004  =  -6250 m/s                -1000/6250  =  -0.16
  C           2              25/0.002  =  12500 m/s                 1000/12500 =   0.08
The "horizontal velocity" shown above is simply the velocity at which the "event" moves
across the recording spread, the trace interval divided by the dip. Slowness (a term often
used in Tau-P transforms) is simply the reciprocal of this velocity (usually measured in
seconds per kilometre).
We now have a display that shows time (vertically) against dip (horizontally). Each of our
events has collapsed (more or less) into a small "spot" on the Tau-P transform, and are
easily separable in this domain.
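Here's a very plain sketch of the forward slant-stack (offsets in kilometres, slownesses in s/km; the inverse transform and the least-squares refinement discussed above are left out):

    import numpy as np

    def tau_p_transform(gather, dt, offsets_km, slownesses):
        """Sum gather (n_samples x n_traces) along the lines t = tau + p * x."""
        n_t = gather.shape[0]
        t = np.arange(n_t) * dt
        taup = np.zeros((n_t, len(slownesses)))
        for j, p in enumerate(slownesses):
            for ix, x in enumerate(offsets_km):
                idx = np.round((t + p * x) / dt).astype(int)
                ok = (idx >= 0) & (idx < n_t)
                taup[ok, j] += gather[idx[ok], ix]
        return taup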
Edge effects like these are common in all types of spatial filtering - the filter has no data beyond the end of the record to continue its computations.
In order to minimise some of the errors involved in Tau-P filtering we often transform back
that which we wish to remove (for example, the two negative dipping events shown above)
and then subtract this from the original record. Any "smoothing" inherent in the
transforms is then less obvious.
We will look at some real examples of Tau-P filtering (and some other techniques in the
Tau-P domain) in a couple of pages time. We will also come back to another form of Tau-P
filtering when we look at multiple removal.
For now, however, we'll look at yet another example of spatial filtering - FX Deconvolution!
FX Deconvolution
If we transform seismic data from time (t) and distance (x) to frequency (f) and distance (x), a time slice is converted into a frequency slice, each sample in the transformed data being a complex number with both real and imaginary components.
The computed prediction filter is run in one direction across the traces, and then run again
in the opposite direction. The resulting predictions are averaged, in order to reduce
prediction errors.
The same plot again, this time showing the imaginary component of the Fourier Transform.
In practice, we normally select a series of overlapping windows in both time and space on
our original seismic data, and the process works on each of these windows in turn.
The data is transformed (trace by trace) into the frequency domain. For each frequency in
the transforms, an optimum deconvolution operator is designed that will "predict" the next
trace in the sequence. This operator is designed in the same way as that used in signature
deconvolution:-
In this case, however, every term in the above equation is a complex number, and the
solution is a complex filter!
Each trace is then "predicted" based on the traces before and after it (the filter is designed
and applied in both directions) and the output spectrum is based on the averaging of these
results. The resultant filtered spectrum is transformed back into the time domain to give
our output trace.
Typically a group of 10 or so traces is used at one time, with a filter of about 5 traces in
length. Overlapping time windows (1000 ms gates are typical) are used to keep the dips
within one gate fairly constant. FX deconvolution is a very robust process - even widely
varying parameter choices will produce similar results.
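A much-simplified sketch of that idea for a single window might look like the following (a production implementation would be considerably more careful about edges and scaling):

    import numpy as np

    def fx_decon(window, filt_len=5):
        """window: (n_samples, n_traces) array covering one time/space gate."""
        spec = np.fft.rfft(window, axis=0)             # (n_freqs, n_traces), complex
        out = np.zeros_like(spec)
        n_x = spec.shape[1]
        for ifreq in range(spec.shape[0]):
            d = spec[ifreq]
            # least-squares design of a complex filter predicting each trace
            # from the filt_len traces before it
            rows = [d[i - filt_len:i][::-1] for i in range(filt_len, n_x)]
            rhs = [d[i] for i in range(filt_len, n_x)]
            f = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
            op = np.concatenate(([0.0], f))            # one trace of delay, then the filter
            forward = np.convolve(d, op)[:n_x]
            backward = np.convolve(d[::-1], op)[:n_x][::-1]
            out[ifreq] = 0.5 * (forward + backward)    # average of both predictions
        return np.fft.irfft(out, n=window.shape[0], axis=0)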
We'll look at some of these results (together with the other types of spatial filtering) on the
next page.
Spatial Filtering Examples (1)
In order to demonstrate some of the spatial filtering techniques we've discussed, here's a high resolution land shot record that we saw earlier.
The panel on the right shows the part of the data highlighted in (very light) red.
Here's what the above filter looks like in three dimensions - a wedge shaped filter (these filters are sometimes called "pie-slices") with sharp edges.
The inverse transform of the filtered FK spectrum. We have removed all of the high dips (especially the ground roll), but the data now has a slightly "wormy" appearance (it lacks spatial detail), indicating that we've probably overdone the spatial filtering.
One common way of reducing the effects of a heavy FK filter is to "mix-back" some of the original (unfiltered) data with the output. We are generally allowed to make this mix-back time and space variant, which allows us to modify the filter response for different parts of the record. We determine this, as usual, by testing!
Another copy of our FK analysis, this time we have raised each point in the FK spectrum to its "1.3" power.
A "light" coherency enhancement, just slightly enhancing all the coherent data (including the ground roll). Once again, a fairly robust process that will provide some data enhancement.
Spatial Filtering Examples (2)
For reference, I've reproduced the picture from the top of the previous page, showing the input to all of our filtering examples.
Once again (for reference) the FK
spectrum of the above data.
The transformed back high dip section. In theory we could add this to the low dip section from the previous page and reproduce our original record. We could also subtract this record from our original record to remove the high dips.
The transform back from the (now limited) Tau-P dataset shows just the low dips. Although not as severe as the FK example shown on the previous page, we might still want to mix-back some of the original data into this output to lessen its effects.
Finally, an example using FX Deconvolution. As usual, a "clean" data enhancement that doesn't overcook things!
We can normally be fairly conservative in our choice of filtering on pre-stack seismic data.
The stacking process itself (when we sum maybe 120 individual traces together) is an
excellent spatial filter that will remove much of the noise in the original field data.
We may need to use fairly heavy filtering techniques, however, if we intend to extract some
information from the unstacked data - we'll come back to this later.
Once we've avoided any problems of spatial aliasing, with the application of the
appropriate spatial filter, we may want to reduce the total volume of data by trace
summing (or shot summing) or alternate trace (or shot) processing.
We may also (for numerous reasons) wish to improve the (apparent) spatial sampling by
inserting more data into our records - we'll look at this next.
Spatial interpolation
We can interpolate data spatially in much the same way as we resample data in the time domain:-
Here's the FK transform from the last couple of pages, "padded-out" to twice the frequency in both time and space.
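A sketch of that padding for the spatial axis only (it assumes the input has no spatial aliasing): pad the wavenumber axis with zeros and transform back, which doubles the number of traces without inventing any new wavenumbers.

    import numpy as np

    def interpolate_traces(gather):
        """Double the number of traces in (n_samples x n_traces) by FK padding."""
        n_t, n_x = gather.shape
        spec = np.fft.fftshift(np.fft.fft(gather, axis=1), axes=1)
        padded = np.zeros((n_t, 2 * n_x), dtype=complex)
        padded[:, n_x // 2: n_x // 2 + n_x] = spec           # original wavenumbers, centred
        out = np.fft.ifft(np.fft.ifftshift(padded, axes=1), axis=1)
        return 2.0 * np.real(out)                            # factor 2 restores the amplitudes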
Some processes perform best when the input data is evenly sampled in both space and
time. The temporal sample period is normally fixed, so why isn't the spatial?
One possible problem occurs if the data is shot with irregular geophone spacing:-
This diagram shows a marine acquisition system popular at one time, that is still used on
rare occasions. The "mini-streamer", at the front of the main streamer, has one-half of the
group interval of the main streamer. This is designed to give better spatial sampling of the
shallow seismic data recorded by the groups nearest the shot.
For conventional processing, it's common to simply omit every other channel from the
mini-streamer, giving a constant "conventional" seismic record. In some cases, however,
we may want to retain as much spatial resolution as possible by interpolating the main
streamer data to the finer trace interval, and processing twice as much data as the
"conventional" approach.
Another problem with spatial sampling occurs in 3D surveys and 2D crooked line surveys.
We can (and do) apply spatial filtering and resampling/interpolation using many different
trace orders. Having decided on exactly which traces we are going to process, we now need
to provide the necessary geometry information to the computer to enable us to address our
traces in any number of different directions.
Just as early cave dwellers became gatherers and hunters, it's time for us to
become gatherers and sorters!
Once all of the geometric information has been input and checked, it may be stored in a separate database, or in the headers of each seismic trace (or both).
CMPINT = GINT / 2
NFOLD = ( NCHAN * GINT ) / ( 2 * SPINT )
No. CMPs = ( NCHAN / NFOLD ) * ( No. Sps - 1 ) + NCHAN
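Or, in code form (GINT is the group interval, SPINT the shot-point interval, NCHAN the number of recording channels; the example numbers are invented):

    def cmp_geometry(nchan, gint, spint, n_shots):
        cmp_int = gint / 2.0                               # CMP interval
        nfold = (nchan * gint) / (2.0 * spint)             # CMP fold
        n_cmps = (nchan / nfold) * (n_shots - 1) + nchan   # CMPs on the line
        return cmp_int, nfold, n_cmps

    # e.g. 240 channels at 25 m, shots every 25 m, 100 shots:
    print(cmp_geometry(240, 25.0, 25.0, 100))              # (12.5, 120.0, 438.0)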
We're now looking at all of the ray paths recorded by one
shot.
All of the ray paths recorded by receivers that are at a
constant offset from each shot - a common offset gather.
Another common offset section, this time from channel 200 (40 channels offset from the shot).
Probably the most important gathering order! All of the traces from one Common Mid-Point (CMP).
If our data processing system stores all of our data on disc, then we can select and display
data from any dataset using any of these (or other) criteria. If the data is stored on tape it
may be necessary to physically sort the data into the order required for any subsequent
stages of processing.
It's quite common to sort the data several times during a processing sequence. Data in the
common offset domain (each gather representing one view of the sub-surface) responds
well to some forms of spatial filtering, and it is necessary to have data in this order for some
of the following stages.
Data in CDP order is necessary for our final stack where, after making the necessary corrections for the offset distortion, we will sum all of the traces from one CDP together.
These CDP stacked traces will be displayed side-by-side to provide our final stack section.
Chapter 9 - Processing Flows (3)
Why and how our seismic reflections are curved on our CDP gathers - NMO.
How the NMO is related (albeit only just) to the true velocities of sound within the earth ...
... and what (if anything) these "seismic velocities" might mean.
Some of the methods used to estimate the velocities required to "flatten" our seismic
events.
More velocity picking (something you'll just have to get used to!).
... and how this QC extends to 3D data and other types of velocity analysis.
... yet even more on DMO!
Dynamic corrections
For all sorts of reasons, the ideal seismic section would consist of a series of traces shot with the shot and receiver in the same position. This would produce a true zero-offset or normal-incidence section where, for a horizontal reflector, the incident rays would be at right angles.
The vertical distance from the centre of the shot and receiver to the
horizon.
Which, solving for TX gives us:
An equation that gives us the time at offset "X" (TX) as a function of the zero-offset time
(TV) and the velocity (V).
We often express the difference between this time and the zero offset time:-
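The equations themselves appear as images in the original course; the standard hyperbolic relation is TX^2 = TV^2 + (X / V)^2, and the moveout is the difference TX - TV. In code form (the example numbers are only for illustration):

    import numpy as np

    def nmo_time(t0, offset, velocity):
        """Time at offset X for zero-offset time t0 (s), offset (m), velocity (m/s)."""
        return np.sqrt(t0 ** 2 + (offset / velocity) ** 2)

    def moveout(t0, offset, velocity):
        return nmo_time(t0, offset, velocity) - t0

    # a reflector at 2.0 s zero-offset time, 2500 m/s, 3000 m offset:
    print(nmo_time(2.0, 3000.0, 2500.0))     # about 2.33 s, i.e. about 330 ms of moveout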
The velocities determined from these hyperbolas are some kind of average velocity through
all of the layers down to the horizon we're considering. The actual relationship between
this velocity and the true interval velocities (the velocities in each rock layer) is complex,
and we'll be looking at this over the next few pages. Because the rock velocities generally
increase with depth (more pressure = more compaction = higher velocities), we would
expect our velocity function (a plot of velocity vs. time) to also increase.
Here's a typical velocity function from somewhere in
the middle of the North Sea.
NMO and Velocities
Here's the velocity function from the previous page once again, this time with a synthetic record constructed from the move-out hyperbolas computed from the function every 100 ms. I've put a wideband zero-phase wavelet on each synthetic "event".
This is the same record processed through Normal Moveout Correction (sometimes also known just as NMO).
We can illustrate this "stretching" by considering two events on the far trace (offset = 6000
m).
This table shows two events, their time and "average" velocity, and the computed moveout
and time for a trace with an offset of 6000 metres.
Both of these events arrive at the same time at this offset, even though their true "zero-
offset" times are almost 3 seconds apart!
This means that the sample at time 4.03 seconds on this trace would have to be stretched
over the entire range from .16 to 3.5 seconds!
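The table itself isn't reproduced here, but velocities of roughly 1490 m/s for the shallow event and 3000 m/s for the deep one (my assumptions, chosen to be geologically plausible) do reproduce the arrival times quoted:

    from math import sqrt

    offset = 6000.0
    for t0, v in [(0.16, 1490.0), (3.5, 3000.0)]:   # zero-offset time (s), assumed velocity (m/s)
        tx = sqrt(t0 ** 2 + (offset / v) ** 2)
        print(t0, round(tx, 2))                     # both events arrive at roughly 4.03 s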
The plot on the right shows (for our "typical" function) the amount of stretch necessary for each sample.
The only way that we can really examine the precise rock velocities within the earth is to drill a hole, and lower a tool down the hole that measures the exact velocity of each rock layer at very small intervals.
Seismic Velocities
So what do our seismic velocities measure? Even assuming horizontal reflectors, the
velocity calculated from the moveout hyperbolas doesn't correspond to the true average
velocity. This first model shows just five layers with an increasing interval velocity in each
layer. The velocity computed from the moveout hyperbolas is always faster than the actual
average velocity down to each layer. The plot on the left is scaled in depth (4000 m), whilst the right-hand velocity plot is scaled in time (4 s).
If we add a sharp break in the interval velocity (I've made them much faster in this model),
the difference between the two functions becomes even more extreme.
Even with an inversion in interval velocities (the green layer is slower than that above it),
the velocities that we would pick from the seismic data are still faster than the true
averages.
Can we use the velocities determined from the moveout curves for anything? Well, with
some fairly gross assumptions, we can use the formula derived by the American
Geophysicist C. Hewitt Dix. This formula (known funnily enough as the Dix Formula)
gives us an approximation of the interval velocity between any two points on our
time/velocity graph established from the hyperbolas.
Taking V1 and V2 respectively as the velocities at times T1 and T2, VI is the interval velocity
between these layers. This is an approximation, however, and works best for horizontal
layers and small offsets.
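The formula appears as an image in the original course; its standard form is VI = sqrt((V2^2 x T2 - V1^2 x T1) / (T2 - T1)), or as a small helper (the example numbers are invented):

    from math import sqrt

    def dix_interval_velocity(v1, t1, v2, t2):
        """Interval velocity between times t1 and t2 from the stacking velocities v1, v2."""
        return sqrt((v2 ** 2 * t2 - v1 ** 2 * t1) / (t2 - t1))

    # e.g. 2000 m/s at 1.0 s and 2300 m/s at 1.5 s:
    print(round(dix_interval_velocity(2000.0, 1.0, 2300.0, 1.5)))   # about 2805 m/s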
In general we can only use the interval velocities derived from the Dix formula as a rough
guide. Under conditions of steeply dipping events or complex geology, the velocities we
derive are not much more meaningful than just a set of numbers that "best-correct" the
moveout hyperbola.
Now we've established what these velocities mean, how do we measure them?
Picking Velocities (1)
Another variation of the velocity gather - the velocity fan. Here I've applied a (very bad) "guess" velocity function to the record marked "0%", and modified this function by the percentages shown for the other records.
The first attempts at automatic velocity
picking made use of a "moveout-scan".
With the computational power now available, it's possible to combine the best of all of the
above techniques into one display. These "composite" methods of velocity analysis,
possibly using some forms of automatic picking of the results, are now commonplace.
Picking Velocities (2)
The semblance of a small window of stacked data is effectively a measure of how well the
stack compares to the individual components going into the stack. Here's two examples:-
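In sketch form, for one small NMO-corrected window, semblance is the energy of the stack divided by the total energy of the traces going into it:

    import numpy as np

    def semblance(window):
        """window: (n_samples, n_traces); ~1.0 for coherent data, ~1/n_traces for noise."""
        n_traces = window.shape[1]
        stack_energy = np.sum(window.sum(axis=1) ** 2)
        total_energy = n_traces * np.sum(window ** 2)
        return stack_energy / total_energy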
If we plot the computed semblance against velocity, together with a gather and stack that are continually updated, and a suite of stacked velocity fans, we have a very powerful tool for analysing velocities.
Once again, it's a question of who's in charge - you or the computer!
Now comes your chance to pick some velocities! You had better get used to this - if you
start processing data, you'll probably be doing a lot of this!
The display on the left shows the computed semblance as a function of time and velocity.
The display on the right shows the corresponding part of the CDP gather. Click the left
mouse button inside the semblance display to add a velocity "pick". Use the right button to
delete a pick. Try picking the correct semblance "peaks" (in red), but also try picking
some erroneous points to see the effect on the gather (and the NMO stretch).
Velocity problems
The picking of seismic velocities is one of the most boring, routine and important jobs still left firmly in the hands of the processing geophysicist.
Consider two events appearing on our velocity analysis at the same time, one a primary
reflector, and the other a strong multiple reflector. The multiple reflector must be
generated by some shallower primary reflector (or reflectors) and would have spent more
time travelling through the shallower parts of the section than the primary.
As the rock velocities generally increase with depth, we can make the general statement
that multiple reflections will have slower velocities than primary reflectors at the same
time. For this reason we would normally ensure that we pick the higher range of velocity
peaks on our semblance display.
We also, of course, have to deal with other forms of coherent and random noise within our
data. One way of improving the apparent signal-to-noise ratio of our data (and increasing
the number of traces in our CDP gather) is to examine several CDPs at once in our analysis.
A super gather relies on the fact that if, for example, we are shooting 80 fold data with 240 recording groups, each set of three CDPs contains all possible offsets:-

CDP      Channels
X        1, 4, 7, 10 ... 238
X+1      2, 5, 8, 11 ... 239
X+2      3, 6, 9, 12 ... 240
X+3      1, 4, 7, 10 ... 238
X+4      2, 5, 8, 11 ... 239
X+5      3, 6, 9, 12 ... 240
X+6      1, 4, 7, 10 ... 238
X+7      2, 5, 8, 11 ... 239

In this case a super gather may be formed by taking 3 or 6 CDPs and merging traces together, summing identical channel numbers. This resultant 240 trace "gather" actually covers 3 or 6 adjacent sub-surface points but the advantages we gain in signal improvement outweigh the disadvantages of "smearing" the analysis across more than 1 CDP.
Decisions on the use of super gathers must consider the likelihood of steep dips and/or rapid horizontal velocity changes in the area.
We'll be discussing some of the problems caused by dipping events in a couple of pages' time; other problems, involving static correction errors or shallow velocity anomalies, we'll come back to later.
If we pick the wrong velocity for our primary event, when we come to add all of the traces
together from one CDP (the stack), we will attenuate some of the primary reflection. Here's
an example of just how much attenuation we can expect from (for example) a 60 fold
stack:-
The horizontal scale on this graph is "Frequency times Moveout". Imagine a reflector with a dominant frequency of 30 Hz and a residual moveout of 33 ms (0.033 seconds) on the far trace. Frequency times Moveout equals 30 x 0.033, or about 1. Looking down the scale at this point reveals that the stack will attenuate this event by about 12 dB - a factor of 4:1. Higher frequencies will be attenuated even more.
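You can check figures like these by brute force; this sketch sums 60 copies of a 30 Hz cosine whose residual time shifts grow quadratically with offset (my assumed shape for an NMO error), reaching 33 ms on the far trace. The exact figure depends on how the residual moveout is spread across the offsets, but it comes out in the same region as the value read from the graph.

    import numpy as np

    f = 30.0                                                      # dominant frequency, Hz
    n_fold = 60
    shifts = 0.033 * (np.arange(n_fold) / (n_fold - 1)) ** 2      # residual moveout, seconds
    t = np.arange(0, 0.2, 0.001)
    stack = sum(np.cos(2 * np.pi * f * (t - s)) for s in shifts) / n_fold
    print(round(20 * np.log10(np.abs(stack).max()), 1))           # roughly -10 dB of attenuation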
We'll now move on to some of the ways in which we can "Quality Control" our velocity
picks.
Velocity QC
Once we've picked all of the velocity functions for one line, we
need to find some convenient way to check the consistency of
our velocities - to quality control (QC) our picking.
In some cases we need to be aware of exactly how the computer interpolates the velocities
between our picked functions. Unless we are very careful and track each event from
analysis to analysis using an initial stack or NTG (near trace gather), we are unlikely to
pick exactly the same events on each analysis. The processing software has to make some
decisions as to how to interpolate the values between the CDPs analysed.
This diagram shows one of the problems common in seismic data from the North Sea. The
Base Tertiary reflector is very strong and causes a sudden "break" in our velocity
function. If, as shown here, we picked velocities at CDPs 500 and 700, and the Base
Tertiary dips strongly between these locations, an interpolation error can occur.
The function generated by interpolating our functions over constant time intervals
(sometimes known as IsoTime interpolation) gives the black function at CDP 600. The true
function should follow the blue line.
Depending on the magnitude of the error, the simplest solution is usually to simply add
more functions - to analyse additional CDPs.
A gross error will usually show up fairly
easily on an Iso-Velocity display.
Before finishing our long discussion on velocities, let's look at some more exotic forms of
analysis and QC.
Many other methods have been (and are still being) used to both pick and QC seismic
velocities. With careful picking and analysis of stacking velocities and modelling, it's
possible to establish quite a good interval velocity field from the seismic data itself.
We can even make use of these picked horizons to make detailed analyses of the velocities
along each horizon. Instead of computing velocity semblance plots for selected CDPs over
the whole time range, we produce displays for every CDP over a limited time interval
around each horizon.
Although time-consuming (and costly) in both computer and man time this technique can
provide detailed information concerning the interval between two reflectors.
This shows part of a 3D survey, rotated from its true position, with
the blue dots representing the velocity analysis locations used during
the processing.
One way of QC'ing these velocities is by examining the normal Iso-Velocity contours for a
series of lines to check consistency.
Use the buttons to switch between the velocity contours for different parallel lines in the
survey.
We could, of course, equally well examine the "North-South" lines in our 3D volume.
We may need to pick (and QC) our velocities several times during the processing sequence.
To explain why, and to look at one of the fundamental geometric distortions present in all
seismic data, we'll examine the technique known as Dip-Moveout, or DMO.
DMO (1)
Seismic data suffers from the same problem as a bad
holiday choice. It's always in the wrong place at the
wrong time!
This diagram shows one reflector in a medium with a constant velocity of 2000 m/s. The
black line shows a "normal-incidence" ray at right angles to the reflector and the blue and
red lines show reflections for two different offsets.
Click on the buttons below the image to see the reflections (and apparent velocity) of events
with 10, 20 and 30 degrees of dip (or slope).
Note that not only do the reflection points move "up-dip", away from the nominal "CDP"
in the centre of the diagram, but that the actual points being recorded are different for
different offsets. The apparent velocity (obtained from common mid-point gathers) is also
incorrect. For events with constant dip the velocity will be wrong by a factor equal to 1 /
Cosine(Dip).
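For example, with an assumed true velocity of 2000 m/s:

    from math import cos, radians

    # the stacking velocity picked from CMP gathers is too fast by 1 / cos(dip)
    for dip in (10, 20, 30):
        print(dip, round(2000.0 / cos(radians(dip))))    # 2031, 2128, 2309 m/s apparent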
We obviously need to do something about this problem! Until the early 1980's this effect was largely ignored, but work by a number of geophysicists led to an understanding of the problem and various approaches to solutions. The solutions are generally rather complex, but try to correct everything to its true zero-offset position so that a) the velocities are no longer dependent on dip, and b) the data will stack together correctly.
On the left is a simple syncline at a depth of about 2000 metres. Once again I have assumed a constant velocity of 2000 m/s above this reflector.
The diagram on the right shows some of the normal-incidence (zero-offset) reflections from this model. Note how, at the surface, they cluster together around the centre of the model.
Time sections computed from this
model.
Not only are the red and green reflections in the wrong place, they are also not coincident.
Applying NMO to this data with the correct velocity of 2000 m/s will result in an over-
correction of the gathers in the region of dip.
You'll note from the above that the image we get of the syncline is compressed
(the red line). In general un-corrected (un-migrated) seismic data will
compress all synclines and expand all anticlines.
Close to the centre of the syncline reflections from the right of the structure
may appear on the left of our section (and vice-versa).
This is rather like the reflections from any concave surface (like the inside of a
spoon) - the curvature can reverse the order of reflections. We'll see more of
this when we look at the seismic data from this model on the following page.
DMO (2)
Here's 161 CDPs covering the 2 km of
model shown on the previous page.
This display shows 5 complete CDP
gathers from along the line, NMO
corrected with the (theoretical) model
velocity of 2000 m/s.
Let's apply DMO to this data, and see what effect it has on the velocities and the stack.
DMO (3)
DMO computes the necessary data "movements" in a number of ways. We can use
modified approximations to the horribly complex "wave equation" (a partial differential
equation), or methods that rely on convolutions in odd domains.
None of the "routine" methods of DMO require any knowledge of dip. The various
algorithms move the data for each offset "plane" into all possible zero-offset positions.
Believe it or not, the signal within the data combines correctly whilst the noise (and the
data that is in the wrong place) cancels out.
One of the most robust methods, using a Fourier transform technique, sometimes known
(incorrectly) as FK DMO, goes through a complex transformation. Each offset plane is
first transformed horizontally into "K" (spatial frequencies), and the time scale is stretched
logarithmically. Spatial convolution is then applied to this transform to perform the DMO
and the data is transformed back into normal "T-X" data. I do not propose to explain the
mathematics behind this or other methods in detail as it's probably beyond the scope of this course. If you're desperate to find out then look at any of the many technical papers available on DMO in the geophysical periodicals.
All of the normal DMO methods generally require that we process the data in a particular
order:-
1 Apply some pre-processing to our data (data enhancement)
2 Pick a widely spaced set of velocities along our line
3 Apply NMO to the data to "more-or-less" correct the data
4 Mute the first-break noise in the data
5 Sort the data into common-offset planes (see below)
6 Apply DMO to the data
7 Remove the NMO applied in stage 3 (inverse NMO)
8 Re-pick our velocities at the final interval
All of which generally requires at least three runs on the computer, and (as usual) lots of
velocity picking.
We need to ensure a "good" spatial sampling within each offset plane. For this reason we
may need to combine several offset planes together into one for DMO, adding a pseudo
offset (a constant) to each plane. Even on regular 2D data, we may have the odd numbered
channels in one CDP, and the even numbered in the next. We would normally combine the
two sets together giving each offset plane the correct CDP interval by using a DMO offset
increment of twice our group interval (and using the average offset for the "plane").
For crooked line land processing, or 3D processing, we may have a continually varying set
of offsets in each CDP "bin". We need to group these together into a regular set of offset
planes for the DMO process (we use the true offsets etc. for all other processing). In 3D
data we may also need to apply "3D-DMO" to correctly allow for those traces that are
displaced "sideways" from our desired bin-line. Some of these algorithms are complex and
require a full 3D velocity field before DMO.
DMO is now a standard part of almost every processing sequence. We should expect some
noise attenuation in the DMO itself, but much better stacks because of the stabilisation of
the velocities, and the correction from CMP to CDP. After DMO we really can refer to
"CDPs" with confidence.
Here, finally, are the effects of DMO on some real data. This shows about 700 ms of stacked
data before and after DMO (each section having its own set of velocities). The differences
are not enormous, but you can see the random noise reduction and the enhancement of
some of the fine detail in the section after DMO. The velocities are, of course, much more
"reasonable" (and smooth) after DMO.
We mentioned above some data enhancement techniques that are required before DMO.
In the true tradition of putting the cart before the horse, we'll go on in the next Chapter to
discuss the most important ones connected with (among other things) multiple removal.
There's a few more thoughts on seismic velocities and DMO in a published paper by the
author of this course. Click here to read the article on "Seismic Velocities", otherwise just
click below as usual to move on to the questions for this Chapter.
Chapter 10 - Processing Flows (4)
Finally moving (almost) onto the CDP stack stage, let's talk about multiples (and their
removal) and some more static corrections.
A tuneful reprise of some of the important characteristics of multiples, and how we may
try to remove them.
Statistical deconvolution - designing inverse filters from the seismic trace to try to
normalise the frequency content, and/or remove short period multiples.
A final summary of the important parameters for deconvolution, and some "live"
examples.
... and yet more de-multiple techniques.
Tidying up the statics problem. Using the seismic data to determine the final statics
corrections.
Some of the techniques used to determine the residual statics from the mass of statistics
obtained from the seismic data.
Multiples - Again!
We now move on to one of the areas in which seismic processing software writers
spend a lot of their time - multiple removal!
On marine data the most obvious multiple generator is the water layer itself but,
of course, any pair of strong reflectors can generate inter-bed multiples that will
also cause problems.
We normally distinguish between those multiples with a short period, and those with longer
periods. If your system supports it here, once again, is a "sound" example of short and
long period multiples!
The removal of multiples generally relies on either or both of two recognisable
characteristics.
1. Multiples will generally go on and on (rather like this course), repeating with the
same time interval and gradually decreasing in amplitude (think of repeated
"bounces" of energy in a water layer).
2. Multiples will generally appear on our CDP gathers with a velocity slower than the
primary velocity. They will have larger moveout values (more curvature).
Such repetitive bedding of the type shown above occurs quite regularly within nature, causing an apparent increase in seismic energy below the "cyclic" section.
Deconvolution (1)
Both of these methods used the same principles - finding a limited-length convolution filter
that either
Either technique can be applied directly to a seismic trace, with the appropriate values
derived statistically from the trace itself. For this reason it's (logically) known as statistical
deconvolution, or, since it was the first use of these techniques, just deconvolution (without
the preceding "signature" or "FX").
Here's an example of one of the things that deconvolution is useful for - removing short-
period multiples (or reverberations) from the seismic trace.
Is it likely that we could successfully design a limited-length filter that could collapse an
effectively infinite length reverberating waveform into a spike?
Here's the input reverberating waveform (in blue) with an amplitude ratio of "-R" from
peak to peak.
The filter (red), which you'll remember is time reversed, just consists of the two samples
"1" and "R" with the exact reverberation time period between them.
Flipping through the buttons shows some of the convolution multiply and add products -
everything except the first point comes out as zero! The first 3 convolution results are:-
1 x 1 = 1
1 x R - R x 1 = 0
1 x R² - R x R = 0   etc.
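You can verify the cancellation numerically; here's a quick sketch with an assumed ratio and bounce period:

    import numpy as np

    R = 0.6                                        # assumed peak-to-peak ratio
    period = 20                                    # assumed bounce period, in samples
    reverb = np.zeros(200)
    reverb[::period] = (-R) ** np.arange(10)       # 1, -R, R**2, -R**3, ...
    operator = np.zeros(period + 1)
    operator[0], operator[period] = 1.0, R         # the two-point filter described above
    out = np.convolve(reverb, operator)[:200]
    print(np.abs(out[period::period]).max())       # ~0: every bounce after the first cancels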
This technique was one of the first approaches used for multiple
removal in the days before digital processing.
Unfortunately if the timing was wrong, not only would the multiple
remain, but the subtraction would introduce another multiple-like
anomaly into the data!
Let's move on to see how we can determine the correct timing and scaling from the data
itself!
Deconvolution (2)
For statistical deconvolution we use the trace itself to provide an estimate of the
reverberations and other nastiness associated with every reflection on the trace. Since the
autocorrelation removes all of the phase information from the trace (makes it zero) we can
imagine that the central zero-lag of the autocorrelation represents every primary reflection
on the trace (which we hope are randomly spaced) and any repetitive signal present on the
autocorrelation must be present throughout the trace (or the part of the trace we are
examining - the design window).
In the same way as the previously mentioned types of deconvolution we now have two
options as to how we specify the key part of the right-hand side of the above equation - the
desired output:-
1. for spiking deconvolution we specify our desired output as a spike, and try to convert
the entire trace into a single spike. This cannot be done exactly by a time-limited
filter but has the effect of equalising the frequencies within the trace (trying to get to
a "flat" spectrum).
2. for predictive deconvolution we use a later part of the trace as the desired output and
try to design a filter that will make the trace look like a time-shifted version of
itself. Subtracting this "model" from the actual trace is exactly equivalent to the
analogue process discussed on the previous page, but with the time-shift and scaling
automatically designed.
If we can successfully predict the necessary time-shift and scaling values from the data
itself then subtracting the scaled time-shifted trace from itself will remove all of the
multiple reflections.
2. The gap (again in ms or samples). A gap is inserted into the filter that prevents the
filter from changing the data close to every reflector. A gap of 1 sample or less
implies spiking deconvolution, any higher gap implies predictive deconvolution.
The gaps normally used extend from 2-10 samples of data and cause less spectral
whitening (and associated noise). High gaps of say 90% of the reverberation period
may be used for just removing certain multiples.
3. The design gate (see below). This should include only the regions of signal in our
data (omitting the first breaks) and may have to be specified as a function of trace
offset. For optimum results a design gate should be at least ten times the total filter
length (filter length plus gap). This may limit the effectiveness of long-period
multiple removal by deconvolution.
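As a hedged sketch of how such an operator might be designed and applied (filter length and gap in samples, gap of at least 1; scipy's Toeplitz solver stands in for the usual Levinson recursion):

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def predictive_decon(trace, filt_len, gap, white_noise=0.001):
        ac = np.correlate(trace, trace, mode="full")[len(trace) - 1:]   # autocorrelation
        r = ac[:filt_len].copy()
        r[0] *= 1.0 + white_noise                      # white noise / pre-whitening
        rhs = ac[gap: gap + filt_len]                  # predict the trace 'gap' samples ahead
        f = solve_toeplitz(r, rhs)                     # prediction filter
        predicted = np.convolve(trace, np.concatenate((np.zeros(gap), f)))[:len(trace)]
        return trace - predicted                       # prediction error = deconvolved trace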
Here's a display of some seismic data before and
after stack.
In areas of very complex structure (for example, a line shot off the edge of a continental
shelf into very deep water), we may need to vary our time gates in space along the line.
Logically this is known as time and space variant deconvolution!
Deconvolution (3)
Remembering that the left-hand side of our deconvolution is the autocorrelation of the
trace, what happens if (mathematically) we can't solve the equations?
Here's a simple enough equation - imagine we're trying to design a two point deconvolution filter and that the "3, -3" terms are the first two values in our autocorrelation:

    3a - 3b = 20
    -3a + 3b = -21

This cannot be solved! We must make some adjustment in order to reach an approximate solution. The same equations with the diagonal values on the left increased by 1/10th of 1 percent can now be solved:

    3.003a - 3b = 20
    -3a + 3.003b = -21

giving the solutions a = -163.25 and b = -170.08, which we can plug back into the original equations to see how our "adjustment" affects the results:

    3a - 3b = 20.49
    -3a + 3b = -20.49
The value that we need to add to the diagonal by is usually of the order of 0.1 to 1 percent,
and is usually referred to as the white-noise level. To see why we'll have to delve once more
into the frequency domain.
White noise addition reduces the effectiveness of the deconvolution at other frequencies but
allows us to solve the equations. This pedestal addition is achieved by adding a small
amount to the zero-lag value of the autocorrelation, exactly the same as that shown in the
equations above. For all normal purposes, the white noise level should be kept as small as
possible whilst still giving a solution to the equations.
Having briefly discussed the choice of design gate and white noise
levels, we'll now move on to the two most important parameters, the
filter and gap lengths!
Deconvolution (4)
Let's start this last page on deconvolution by re-iterating the points made before
concerning parameters.
We need to make our total filter length long enough to "see" the multiple period -
preferably at least two bounces.
We should keep the gap fairly small (or even zero - spiking deconvolution) unless we
are specifically aiming the deconvolution at just one multiple. In that case we can
make the gap about 90% of the multiple period.
We should, whenever possible, obey the 10% rule. The total filter length should not
be more than 10% of the design window length.
Only use the white noise level to "help" the deconvolution. Only in very exceptional
circumstances should this parameter be changed from the "default" value suggested
by the program.
Use a design window (or windows) that contain just the data for deconvolution.
Test everything, and look at both filtered and un-filtered results from each test.
So, keeping that lot in mind, here's a single trace "live" deconvolution example. About
1200ms of seismic trace is shown in red, with its autocorrelation in blue, and its amplitude
spectra in magenta (or pink if you prefer!). The multiple period is very obvious on the
autocorrelation and is exactly 117ms. Use the parameters to check their effects, especially
when the filter/gap combination "misses" the multiple. Notice that increasing the gap
reduces the amount of "whitening" in the spectrum, and leaves the autocorrelation
unchanged close to the zero-lag value.
You should see how longer filters and shorter gaps increase the apparent amount of noise in
the trace, and that increasing the white noise level decreases the effectiveness of the filter. A
white noise of "0" uses the smallest value necessary to successfully perform the mathematics.
Deconvolution is one of the processes tested extensively for any new processing. The
following is an example deconvolution trial from somewhere in the North Sea. The
parameters on the right show the filter length / gap combinations (in milliseconds). The
display shows unfiltered and filtered sections (we've already discussed the principles of
filtering - but see the next Chapter), and an autocorrelation OVER THE SAME DESIGN
WINDOW AS THE DECONVOLUTION (this is important). The autocorrelation is
plotted at twice the vertical scale so that you can really see the multiples. Choose your
favourite deconvolution!
A - None
B - 200/0
C - 400/0
D - 200/12
E - 200/24
F - 400/12
G - 400/24
H - 260/140
We'll come to deconvolution yet again when we look at DAS (deconvolution after stack).
I'll also try to mention some of the more esoteric forms of decon (as it's affectionately
known) at that time. In the meantime just one last item - where do we do our
deconvolution in the processing sequence?
Here's a "shot record" (offsets across the top, time down the
side) showing a series of water-bottom reverberations from a
primary at 2 seconds. The moveout differences at the far
offset mean that the time differences (in blue) between the
multiples are not constant. In signal processing terms it is a
non-stationary period. If we apply deconvolution before
NMO, the filter design will not "see" the true period and the
decon will be ineffective. If we apply NMO first (decon after
NMO), the stretch caused by the NMO will distort the
wavelet differently on different "bounces". Once again the
filter design will have problems. This is one of those
situations where you can't win!
In most cases deconvolution before or after NMO won't matter too much but, in some
cases, it can be critical. If in doubt, test it! A general (short) decon before NMO may
"clean-up" shot-to-shot variations in the amplitude spectra, but you may need another
(after NMO) to remove the multiples.
De-Multiple (1)
We're now ready to start attacking the longer
period multiples on our section.
I'm going to use this model to demonstrate some of the more common methods of de-
multiple. On the left is a velocity plot showing the primary velocity function (red) and
multiples generated by the single interface at about 1.2 seconds. The plot on the right
shows a theoretical CDP gather, with the multiples marked in red. Note that these always
have more moveout (curvature) than the primaries.
I've reproduced the corrected gather again here (on the left). The plot on the right shows a CDP stack of a series of CDP gathers along our synthetic "line". We've simply summed all of the traces for each CDP together.
The problem above is obviously related to the energy on the near traces (those traces
closest to the shot). Although the far traces have sufficient residual moveout to mis-stack
(and so attenuate), the multiples are pretty flat on the near traces and so stack fairly well.
Can we reduce the effects of the near traces?
Another form of trace weighting - a near trace mute. The near traces have been removed below a selected time with a mute similar to that used to remove the first break noise (at the front).
These two methods, or variations thereof, provide the first steps in multiple attenuation. If
these techniques are unsuccessful, then we must move on to more sophisticated approaches!
De-Multiple (2)
Here, once again, is our synthetic gather with, on the right, a gather that has had NMO applied using a velocity function that lies between the primary and multiple velocity functions.
The resultant CDP (on the right) has the positive dips removed.
We now remove the NMO applied before (reverse NMO), apply the correct primary velocity function, and stack the data.
The parabolic transform nicely "splits" the primaries and multiples. The primaries "line-
up" close to zero moveout whilst the multiples appear as the "event" at higher moveouts.
To avoid problems with any non-parabolic data (which appears as the "noise" on the left-
hand traces) we remove the primaries from this display, transform back (the multiples),
and subtract them from the original record.
On the left is the gather (and its associated stack) after the sequence shown above:-
Radon (or Tau-P) de-multiple is generally very effective (but quite expensive). It preserves
any AVO effects and works pretty well on real data as well as synthetics!
This display shows a (real) CDP gather corrected (believe it or not!) for primaries. The
centre panel shows the multiples extracted via Tau-P, and the right-hand panel shows the
resultant primaries (after the subtraction). As usual the stack (shown below) is not quite so
dramatic but you can see the dipping primaries coming through on the right-hand panel
(after de-multiple).
There are many other methods used for multiple attenuation, including, for example, using
the wave equation to predict multiples from the primaries and then subtract them from the
input. Other methods can be used where the dip of the multiple is very different to that of
any primary information, but these can be quite dangerous.
So we'll leave the world of de-multiple for now and move on to another important pre-stack
process - residual statics!
Residual Statics (1)
We've already looked in some detail at methods for computing static corrections from field
data. The problem is that these are never perfectly correct - we need some method of
refining these static corrections during the processing of the data. We use the seismic data
itself to determine these residual statics. (IRS = Inspector of Residual Statics, a newly
invented abbreviation!)
This table shows a few CDPs from a simple 12-channel, 6-fold regular data set.
Down the left are CDP numbers, with a trace count across the top. Clicking on the buttons
will show you the trace offsets.
Note that the traces from one particular shot or receiver position follow a regular pattern
in the table (either up or down towards the right). Although the pattern is complicated by
the 12-channel, 6-fold arrangement, any shooting geometry will yield a similar pattern - we
can relate our traces back to the original shot and receiver positions.
In a modern high-fold data set, the number of traces recorded from one shot, or into one
receiver position, is very high. We can make use of this data redundancy by assuming that
all traces recorded from one shot will have a consistent "shot-static", and those recorded
into the same receiver group will have the same "receiver-static".
Here's part of a stacked seismic section with, as before, the part lightly shaded red enlarged
on the right. Although this is a "reasonable" section, showing a nice "bow-tie" in the
deeper section, the frequency content is poor (low), and the events look a little "ragged".
As this is land data, we can expect that there are some residual statics still present in the
data set, causing the inferior stack quality.
In order to establish the values of these residual statics, we need some estimate of what the
final section should look like - a sort of idealised final result. We pick one (or more)
reflections across our section (the red event here follows a strong peak on the data), and use
the data around this horizon (or horizons) to build a "pilot" section - the idealised final
result.
I've taken a window of a few hundred
milliseconds around this one event (now
marked in blue), and we will use the data in
this window to determine our residual static
corrections.
This doesn't look very seismic, but it does contain the major structural elements of the
original data (in a somewhat simplified form). Now we will go back to the data before
stack and compare the individual traces for every CDP against the pilot traces to try to
establish a set of surface-consistent residual statics.
Residual Statics (2)
In statistics, we use the term "correlation" as a measure of the similarity of two data sets.
In the same way we can use the correlation of two waveforms to establish their similarity
and their time difference.
The correlation peaks (picked near the centre of the right-hand display) show the individual time differences between the input traces and the pilot.
These are plots of the picked correlation peak times from the above displays. Although there's quite a lot of scatter, you can see some overall bias to the values for this shot and this receiver.
To resolve these statics in a surface consistent manner, the computer goes through the
following steps:-
1. Average together all of the results for one shot - apply this "shot-static" to the
values.
2. Average together all of the results for one receiver - apply this "receiver-static" to
the values.
3. Iterate through steps 1 and 2 for either a predetermined number of steps or until the
results stop changing.
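As a rough illustration (a minimal Python sketch - the "picks" dictionary, which maps (shot, receiver) pairs to the picked time differences, and the function name are just for illustration), the iteration looks something like this:-

from collections import defaultdict

def surface_consistent_statics(picks, n_iter=10):
    shot_stat = defaultdict(float)
    rcvr_stat = defaultdict(float)
    for _ in range(n_iter):
        # Step 1: average the residual (pick minus current estimates) for each shot.
        sums, counts = defaultdict(float), defaultdict(int)
        for (s, r), t in picks.items():
            sums[s] += t - shot_stat[s] - rcvr_stat[r]
            counts[s] += 1
        for s in sums:
            shot_stat[s] += sums[s] / counts[s]
        # Step 2: the same averaging for each receiver.
        sums, counts = defaultdict(float), defaultdict(int)
        for (s, r), t in picks.items():
            sums[r] += t - shot_stat[s] - rcvr_stat[r]
            counts[r] += 1
        for r in sums:
            rcvr_stat[r] += sums[r] / counts[r]
    return shot_stat, rcvr_stat

In practice a convergence test ("the results stop changing") replaces the fixed iteration count.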
Here's the original stack once again (at the top) together with the improved "residual
statics stack" at the bottom. The frequency content is much better, and the events are more
continuous even outside of the window we used for the pilot. This is a good indication that
our residual statics are correct.
Static corrections are just what they say they are - static! That is, corrections (or shifts) to
the data that are time invariant - they don't vary with time. This will, of course, add
confusion when we are trying to determine our dynamic corrections (to pick our
velocities). In practice this may mean going through a cycle of residual statics and velocity
picking until we're sure we're about right (or until things get really bad and we start
again!). Some residual static programs attempt to pick errors in residual moveout at the
same time as resolving the statics, but this is not always very successful.
While in a static mood, let's examine some other static requirements in our processing
sequence.
Static errors occur anywhere we have a "near-surface" anomaly - they are not just restricted to data
recorded onshore. We'll look at just two examples that can affect marine data.
Replacement Statics
Here's a piece of reasonable quality marine data (with a different colour scheme) after stack. Some of
the events, particularly the shallow ones, appear "broken-up", and there is obviously some sort of
shallow anomaly causing the inverted "V" patterns in the shallow section.
If we examine a near-trace gather (from the data going into the stack), we can see that something is
happening to the very shallowest reflector - the water bottom itself appears "lumpy".
A "stretched" version of the NTG shows the problem. There are reefs on the seabed which distort the
raypaths passing through them (note the wonderful multiple reflectors!). The lines drawn on this
section show the top of the reefs, and (what is assumed to be) the smooth "seabed" going underneath
them. As is common with most systems that are designed for "event-picking", these lines have been
"snapped" to the nearest peak or trough of the seismic data in order to give an accurate
representation of the anomalies caused by the reefs.
The reef is about 40 m thick at the point marked by the blue arrow. We can estimate the interval
velocity of the rocks just below the surface from our conventional velocity analysis. These, together
with published estimates of velocities within reefs, seem to imply an interval velocity of about 2500
m/s.
The two-way time for the seismic energy travelling through the reef is 2 x 40 / 2500, or about 32 milliseconds. If the reef was not there, the wave would be travelling through water with a velocity of about 1500 m/s, giving a two-way time of 2 x 40 / 1500 or 53 ms. The time difference between these
two (53-32) is about 21 ms, in other words the time difference we would get if we could replace the
reef with water. Since any data travelling through the reefs will arrive earlier than expected, we apply
this static as a positive value - shifting the seismic data down.
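As a rough sketch of the arithmetic (in Python, with the values quoted above; the function name is illustrative):-

def replacement_static_ms(thickness_m, v_anomaly=2500.0, v_replace=1500.0):
    t_anomaly = 2.0 * thickness_m / v_anomaly   # two-way time through the reef (s)
    t_replace = 2.0 * thickness_m / v_replace   # two-way time through water (s)
    return (t_replace - t_anomaly) * 1000.0     # positive shift (ms), applied downwards

print(replacement_static_ms(40.0))              # about 21 ms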
We compute these statics for every shot and receiver that lies over one of the reefs. The spread
length shown in the diagram above indicates to us that some of our shots will partially or completely
"straddled" some of the reefs - the raypaths in these regions will be very distorted.
Once we've computed and applied these replacement statics, we re-stack the data to give this
section:-
To save you trying to compare this with the one at the top of the page, the next page shows a detailed
comparison!
(We can quite legitimately run surface-consistent residual statics after the replacement statics since
the reef problem is entirely surface-consistent.)
You'll note that the replacement statics "flatten" some of the deeper events as well as improving
overall continuity. They remove the apparent "pull-ups" on the events due to the high velocities in
the reefs (remember that this is a time section).
The residual static corrections "fine-tune" our original statics and improve the high frequency content
(and continuity) of our final stack section.
Tidal Statics
Anyone who's tried to process seismic data
onboard a seismic vessel in a gale will
appreciate the effects that tides and weather
can have on seismic recording.
For "normal" 2D seismic data the effects of tides can generally be ignored - they, after all, vary very
slowly and will hardly be noticeable along our seismic lines. For high resolution seismic data however,
and, more especially, for 3D data, they can be significant.
This is a fully processed stacked section extracted from a 3D volume of high resolution seismic data.
It's what's known in 3D terms as a "crossline" - a line made up of 3D bins at right angles to the
direction in which the data were shot. Each trace on this section comes from a different original line,
shot at a different time (and possibly in a different direction) from the adjacent traces. The original
seismic lines were shot from your viewing position "into" the screen of your monitor (or vice-versa).
You'll notice the "jitters" in the above section - high (spatial) frequency static errors from trace to trace
caused by tidal differences.
Now we have much better continuity, and the data is correctly positioned for any processing we need
to do in the crossline direction. If we want to run some form of residual statics after this, we need an
approach that uses time-differences between adjacent traces as a basic input - the results are not
surface-consistent.
There is one other type of residual static correction that is sometimes used as a final "clean-up". It can
use the initial approach of correlations against a pilot used in our residual statics discussion, but make
no assumptions about surface-consistency - it just applies individual computed statics to individual
traces.
The technique, known as trim statics or CDP statics, can provide additional data improvements but is
inherently dangerous! Applying arbitrarily large static corrections may change the apparent structure on our final section; such problems are better solved by other surface-consistent approaches (for example re-computing the initial statics by re-picking the first breaks on every shot).
We finally get to stack our data, and move on to the final stages of the "regular" processing
sequence.
Front-end, back-end and surgical muting. Removing all of the nasty bits from our seismic
traces.
Nothing more complex than simply adding all of the traces from one CDP together - with a
few variations.
Page 11.04 - Filtering (1)
The practical aspects of digital filtering. How we specify our filters ...
... and how we choose the filters (as usual, lots of testing).
If we don't need to retain the "true" amplitudes within our data, we can use equalisation
and AGC to tidy-up the data both before and after stack.
Finally moving the data to its correct spatial position. We start by looking at some
theoretical stack responses for simple structures ...
... move on to some of the techniques used to migrate the data ...
A quick discussion of some of the other Post-Stack processes we may use, and a look at the
types of final displays that we might use.
Page 11.13 - Chapter 11 - Questions
Trace muting
On the left is a CDP gather
(with, hopefully, the correct
NMO applied) showing the
stretch introduced by the
NMO and the "first-break"
noise present on all the
longer offsets.
... and apply a ramp to the data starting at our picked time.
We could, equally well, pick
our mute before NMO. We
need to do this anyway prior
to any velocity analysis.
You can clearly see the noise from the first breaks in the later panels. We can pick this
(much like a CVS for velocity analysis) to determine the minimum time for each fold (and
hence the mute). Note also the improvement in the deeper data due to the increase in stack
fold.
We saw in our discussions in the last Chapter on multiple
attenuation, how we can use a near trace (or "bottom")
mute to remove the near traces and so improve the multiple
cancelling properties of the stack.
Before (at long last) we finally get to stack our data, just one small digression. Although we
normally only use linear tapers in the mutes for stack, there are some other shapes of mutes
commonly used for the tapering of windows used for statistical (or spectral) analysis and
the like. These are shown below, as usual use the buttons to switch between them.
1. A Linear taper - normally used for top, bottom and surgical muting of traces.
2. A Hanning function (sometimes called COS-Squared). Named after Julius von
Hann.
The last two functions are suitable for the ramping of the ends of short time windows - they
don't introduce spurious frequency effects.
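For illustration, here are minimal numpy sketches of the two ramp shapes (the function names are mine, not from any particular system), each returning a ramp from 0 to 1 over n samples:-

import numpy as np

def linear_taper(n):
    return np.linspace(0.0, 1.0, n)

def hanning_taper(n):
    # cos-squared ramp: 0 at the start, 1 at the end
    return np.sin(0.5 * np.pi * np.linspace(0.0, 1.0, n)) ** 2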
CDP Stack
We're finally ready to handle one of the simplest, but most
important, stages of the seismic processing sequence - the CDP
stack. (Although it should more correctly be called a CMP stack -
Common Mid-Point, we'll stick to the conventional name!)
Way back at the beginning of the processing sequence (shot summing) we mentioned that
purely random noise is attenuated by a factor equal to the square-root of the fold of stack.
For this reason (it's the random noise we're trying to attenuate), we normally use a scalar
equal to the square-root of the fold rather than the fold itself. This type of scaling (sometimes known as root-N scaling) only affects the relative amplitudes in the mute zone -
once we're into full-fold data it just represents an overall (constant) scale factor.
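A minimal sketch of a root-N stack (assuming, purely for simplicity, that muted samples are stored as zeros - names are illustrative):-

import numpy as np

def root_n_stack(traces):                          # traces: (n_traces, n_samples)
    live = np.count_nonzero(traces, axis=0).astype(float)   # per-sample live fold
    live[live == 0] = 1.0                          # avoid dividing by zero above the mute
    return traces.sum(axis=0) / np.sqrt(live)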
There are many possible variations in the stacking process, most of which are now rarely
used. Here's a list of some of the more esoteric options:-
Median & Min/Max Stacks - Either computes the mathematical median of the data
or eliminates the highest and/or lowest amplitudes from the stack before summing.
I guess this could be useful to eliminate spikes, but is now generally handled by
other techniques.
Diversity stack - The traces are equalised over small windows prior to stack (see equalisation a few pages further on) and the average equalisation is removed after
stack. Basically knocks down high amplitudes and pushes up low amplitudes.
Much conventional processing now uses a full equalisation before stack, so this
process is unnecessary.
Inversity stack - The opposite of diversity! High amplitude traces are boosted, and
lower amplitude traces are further attenuated. Probably better just to (once again)
equalise the data before stack.
Most of these techniques are not now generally in use, but may be necessary to solve some
obscure problems. If in doubt (as usual) test it!
Before finishing with stack, I'll just
mention one occasion where we
may need to mute the data after
stack. This is the first three
seconds of a stacked data set (at a
very high display gain). The first
major event (at about 2.2 seconds)
is the seabed, we've obviously
moved off the continental shelf!
Filtering (1)
Way back in the dim distant past of this course (Chapter 6), we discussed the theory behind
digital filtering. We use frequency filters at numerous points during the processing
sequence - any time that we need to remove some of the background "noise". Filters are
almost always applied, however, at the end of the sequence to tidy-up the final stacked
section and/or migration.
We briefly mentioned DAS (deconvolution after stack) in the last Chapter, and it is this
process that, more than any other, requires some filtering to reduce the unwanted
frequencies enhanced by the deconvolution.
We've seen many examples of filtering before - here's an example of a low-pass filter (only
passing the low frequencies) applied to a photograph.
In this case the filter is applied in both directions (across and down) but, having discussed
spatial filtering in some detail earlier, we are now looking at the "down" direction -
filtering in the time domain.
You'll see in what follows that filtering is one subject that is littered with names. We'll
mention Ormsby, Bartlett, Hanning, Butterworth and others!
Assuming that we have some idea of the frequencies we wish to "pass" through to our final
section, how do we design these filters?
We could simply decide (by testing) on the frequencies we want, and then simply apply a
filter in the frequency domain that just passes those frequencies. For example we might apply a
20-50 Hz filter which passes only the frequencies from 20 to 50 Hz.
The problem with this is the sharp cut-offs at 20 and 50 Hz. The resultant filter (in the time
domain) is roughly equivalent to the difference of two of the sinc functions we mentioned
earlier - a pretty messy wavelet!
To avoid the "edge-effects" associated with
sharp cut-offs, we usually use filters that slope
on and off in the frequency domain.
What type of design do we use, Ormsby or Butterworth? In general, if the seismic data is
of high quality, it doesn't matter very much and Butterworth filters are probably slightly
easier to use. Ormsby filters require that we are careful in our choice of "corner-
frequencies" (F0, F1, F2 & F3) so that we don't introduce "ringy" wavelets into our data,
but have the advantage that they do remove certain frequencies. As usual, if in doubt, test
it!
Before the age of Fast-Fourier-Transforms, it was necessary to design and apply filters in
the time domain (by convolution). The Martin-Graham technique uses an approximation
(assuming that our data is continuous, not sampled), to design Ormsby filters (with linear
tapers) directly in the time domain. The equation is surprisingly simple:-
Butterworth filters, on the other hand, must be designed in the frequency domain. Here's
the amplitude spectrum for a Butterworth filter:-
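In its usual low-pass form (the low-cut side is just the mirror image) the amplitude response, for a cut-off frequency Fc and order n, giving a slope of roughly 6n dB per octave, is:-

|H(f)| = 1 / sqrt( 1 + (f / Fc)^(2n) )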
O.K., putting all that lot into practice, the following program designs zero-phase Ormsby
filters by three methods. The time domain equation derived by Martin & Graham (with
the filter truncated to the 200 ms displayed), and frequency domain designs using linear
and Hanning tapers. In each case the amplitude spectrum is displayed in blue below the
time function.
The "reject" option produces a filter that removes the frequencies you specify. It's simply
produced by subtracting the bandpass filter from a white spectrum (a "1" in the middle).
Remember that F0 <= F1 <= F2 <= F3, and that all frequencies are less than or equal to 125 Hz - the Nyquist frequency for the 4ms data I've assumed.
You should notice that the time domain design is much "noisier" in both time and
amplitude, and that the Hanning tapers give a slightly better result than the linear tapers.
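As a rough illustration of just the frequency-domain design with a linear taper (a minimal numpy sketch, not the program referred to above), we build the trapezoidal spectrum and inverse-FFT it to get the zero-phase time response:-

import numpy as np

def ormsby_zero_phase(f0, f1, f2, f3, n=512, dt=0.004):
    freqs = np.fft.rfftfreq(n, d=dt)
    amp = np.zeros_like(freqs)
    amp[(freqs >= f1) & (freqs <= f2)] = 1.0             # flat passband
    up = (freqs > f0) & (freqs < f1)
    amp[up] = (freqs[up] - f0) / (f1 - f0)               # linear ramp up
    dn = (freqs > f2) & (freqs < f3)
    amp[dn] = (f3 - freqs[dn]) / (f3 - f2)               # linear ramp down
    wavelet = np.fft.irfft(amp, n)
    return np.fft.fftshift(wavelet)                      # centre the zero-phase wavelet

Swapping the linear ramps for cos-squared (Hanning) ramps gives the smoother of the two frequency-domain results.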
Here's a similar display for Butterworth filters. Remember that SA and SB are slopes (in
dB/Octave).
Once again you should see that low slopes give the smoothest time function.
We'll now move on to choosing filters (testing as usual) and see how we can vary them in
both time and space.
Filtering (2)
So here we go with our filter trials! We would expect that the frequency content of our
signal will become lower as we move to deeper times (the higher frequencies are absorbed),
so we are looking for the "optimum" filter at various times down the section - time variant
filters.
We'll start by looking at part of a stacked seismic line with a series of exclusive filters
applied. These were all applied as Ormsby filters with slopes of about half an octave on
each end (for example, 7-10-20-28).
The key on the right gives the passband for each filter (section "A" is unfiltered). Look for
the point at which 1) the low-cut removes signal, or 2) the high-cut no longer enhances the
signal content.
A Unf.
B 0-10
C 10-20
D 20-30
E 30-40
F 40-50
G 50-60
Not easy is it? You can see the reduction in high frequencies at later times (towards the
right), but it's quite difficult to establish exact passbands at different times.
Let's try a different technique. This time we've used a set of filters of equal "length" - one
octave. Each filter overlaps by half an octave with the previous! See if this is any easier to
interpret!
A Unf.
B 0-10
C 7-14
D 10-20
E 14-28
F 20-40
G 28-56
H 40-80
Perhaps a little bit easier to interpret, but still difficult!
Let's try the brute force approach - all the likely filters that we might use!
A Unf.
B 5-30
C 5-40
D 5-50
E 10-30
F 10-40
G 10-50
H 10-60
I 15-30
J 15-40
K 15-50
L 15-60
M 20-40
N 20-50
O 20-60
Although we now have a lot to look at (and we probably need to look at this at a large scale
on paper), this approach is probably the easiest to interpret. With the small running times
on modern machines, such a trial is not a significant cost, and we may save more valuable personnel time by running lots of filters at (maybe) several places in our survey.
Here's the above stacked section again, this time with a selected time-variant filter pattern.
The final selected filters vary from 10-90 at time zero to 7-40 at the final section time.
How are these filters applied?
The solution to this is very simple - just insert more filters into the
pattern to avoid rapid passband changes.
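One common way of doing this (sketched below in Python - implementations differ, and the parameter names are illustrative) is to filter the whole trace with each filter in the pattern and then interpolate, sample by sample, between the filtered versions bracketing each time:-

import numpy as np
from scipy.signal import firwin, filtfilt

# "filters" is a list of (centre_time_s, low_hz, high_hz) tuples, sorted by centre time.
def time_variant_filter(trace, dt, filters, ntaps=101):
    fs = 1.0 / dt
    t = np.arange(len(trace)) * dt
    centres = np.array([c for c, _, _ in filters])
    bands = np.array([filtfilt(firwin(ntaps, [lo, hi], pass_zero=False, fs=fs),
                               [1.0], trace) for _, lo, hi in filters])
    out = np.empty_like(trace)
    for i, ti in enumerate(t):
        j = np.searchsorted(centres, ti)
        if j == 0:
            out[i] = bands[0, i]                  # above the first zone centre
        elif j >= len(centres):
            out[i] = bands[-1, i]                 # below the last zone centre
        else:                                     # linear blend between the bracketing filters
            w = (ti - centres[j - 1]) / (centres[j] - centres[j - 1])
            out[i] = (1.0 - w) * bands[j - 1, i] + w * bands[j, i]
    return out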
Just as there are many examples in our lives where some level of
equalisation would be useful, in many cases of seismic processing some
level of equalisation of the trace amplitudes may be very beneficial.
If, as is often the case, we are not trying to retain the absolute relative
amplitudes within our data, the equalisation of amplitudes throughout
the data may improve the signal to noise ratio.
Equalisation does just what it says it does - it equalises the amplitudes both from trace-to-
trace and down the time scale. AGC or Automatic Gain Control is a variation on this which
we'll reach further down this page.
It's normally applied before stack to improve the stack response and remove high
amplitude noise bursts, and/or after stack to "balance" the final section. We'll use one
example trace to show how it works.
The inverses of these RMS values (shown in
red) are positioned at the centre of each
window and a "gain-curve" is interpolated
between these points.
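As a rough sketch of the whole process (window RMS, inverse gain at the window centres, interpolated gain curve - names are illustrative):-

import numpy as np

def agc(trace, window_samples):
    n = len(trace)
    centres, gains = [], []
    for start in range(0, n, window_samples):
        win = trace[start:start + window_samples]
        rms = np.sqrt(np.mean(win ** 2))
        centres.append(start + len(win) / 2.0)
        gains.append(1.0 / rms if rms > 0.0 else 0.0)   # inverse RMS at the window centre
    gain_curve = np.interp(np.arange(n), centres, gains)
    return trace * gain_curve                            # output windows have an RMS of about 1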
Yet another example. This time we have a
very weak trace with one major event in
the centre.
Shadow zones can be reduced by either careful window design or by more sophisticated
scaling. One option is to use two windows simultaneously (one short, one long) to try to
statistically predict very strong events and reduce the effects of the short-window scaling in
these regions.
AGC or equalisation always destroys any "true-amplitude" information within the data.
In all cases the inverse scaling produces a fixed RMS on output which may be some
arbitrary numerical value (in the examples shown above the direct inverse gives an output
RMS of "1").
Since we, once again, usually establish our equalisation/AGC parameters by testing, here's
some stacked seismic data with varying AGC length (in ms) applied. The first example,
where the AGC window is equal to the trace length is the same as an equalisation of the
same length and is often referred to as a base-level - it base levels each trace without
changing the amplitudes down the time scale.
Migration (1)
Migration is the process that moves the
data on our stacked seismic section into its
correct position in both space and time.
In each of the displays below, press the buttons to switch between 1) the way the structure
should look (after migration) and 2) the way it looks on our stacked section. The two boxes
on the right give a description of the appearance of the structure before migration and, in
the lower box (in red), the errors we can expect if our velocity field is incorrect. Remember,
it's the section you see when the number 2 button is pressed that represents our stacked
section - the migration process tries to correct this back to its true position (button 1).
A local high, or, in geological terms, an anticline appears wider on the stack section than
it should be.
The amount of "stretch" depends on the dips on each flank - the higher the dips the
wider the structure will appear on our stacked section.
If our velocities are incorrect then the final migrated structure may be narrower or wider
than the true structure - this could lead to the wrong estimate of any oil or gas reserves
underneath this "high".
Where the dip changes (at the centre of the structure) there is a danger that a cross-over
of raypaths will occur.
Once again, incorrect velocities will lead to an incorrect shape. If the velocities are too
high then the syncline will be wider than it should be after migration. Velocities that are
too low will "under-migrate" the structure, leaving it too narrow.
Steep dips on the flanks of the syncline, and a rapid change of curvature at the bottom,
leads to the "bow-tie" effect that we've seen before.
(Remember the upside-down reflection in a spoon? The reflections from the left-hand
side of the steep slope appear on the right and vice-versa.)
We can over or under migrate this with the wrong velocities. In the extreme case the
reflection will not be properly "focused" by the process and pieces of the "bow-tie" will
remain.
An event that should appear as a single point reflector on our migrated section appears
as a diffraction curve on the stack. I've had to increase the gain on the stack display
because all of the energy in the original point is now spread out over the diffraction
curve.
The hyperbolic shape of this curve is important - it's very similar to the curves we
compute from the velocities in applying NMO.
A velocity error will lead to the "point" being smeared on our final migrated section.
If our velocities are too high, the point will be over-migrated into a "smile".
If they're too low, the under migration will leave a residual "frown" in the data.
A discrete event, that terminates abruptly, will generate diffraction curves at its ends.
These curves have the same shape as those from a single point but exhibit an interesting
polarity anomaly.
The part of the curve "under" the actual event is polarity reversed relative to the other
part of the curve (with this colour scheme, red and blue change over).
Over and under migration here will mask the abrupt end of the event - it will, once again,
be badly focused.
A gap within an otherwise continuous event also generates diffractions at each break.
Note, once again, the polarity reversal and the interference between the two diffractions
in the centre of the gap.
If our velocities are wrong in this case, the gap will be poorly imaged. In extreme cases
"smiles" from over migrated data, or from random noise bursts in the data may
completely hide the gap.
If we have very small "events" and "gaps" then the interference pattern becomes very
complex.
One way to understand some of the migration methods that follow is to realise that even
a continuous event could be considered to be a series of discrete points with diffractions,
but these curves interfere with each other and "cancel-out" everywhere but along the
event.
This "cancelling-out" relies on the velocity field being correct. If it's wrong, the events
will effectively move to the wrong place (and time).
The reflection from the fault plane (if it's not too steep to be imaged) will be recorded to
the right of the fault and needs to be migrated up-dip into its correct position.
If the velocity field is incorrect, the fault plane will not "move" to the correct place and
the diffraction curves from the "corners" of the structure will not focus correctly.
Finally a very strange structure. If we had a perfectly parabolic event in the ground,
with its centre on our CDP, we would see only one point on our stacked section - the event
would be like a parabolic mirror, concentrating all of the reflected energy into one point.
Our synthetic stack doesn't quite show this - the errors caused by the very steep dips in
the "inverse-migration" program I used are the same as those we will encounter in the
forward migration programs - they are all (to some extent) an approximation!
Add a velocity error to the errors already introduced by the migration process and this
will be very difficult to image. If we have a single "point" of energy in our stacked
section that doesn't correspond to a "real" event (for example a spike), this will migrate
into a parabolic "smile" on our migrated section.
The equations that govern the amount of correction required for migration are relatively
simple. The spatial shift (delta-X) and the new (migrated) time can be calculated by
picking dips on a stacked section and using the stacking velocities. The equations are:-
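For a locally constant velocity they can be written (one common form, consistent with the conventions below) as:-

delta-X = - (Velocity² x Time x Dip) / 4

Migrated Time² = Time² - delta-X² / (Velocity / 2)²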
Where Time is in seconds, Dip in seconds per metre and velocity in metres per second. Our
normal dip convention (increasing time with increasing distance is positive) is used. You
should be able to see from the above that any event will move "up-dip". The time equation
is identical to that used for NMO, except that we use half the velocity (or twice the distance)
in the calculation.
The top diagram shows a 5km seismic "line", with a depth scale extending down to 4km.
There is one single reflector (shown in red) with the "normal incident" raypaths (in blue)
drawn from this to the surface.
The lower diagram shows the "stack" produced from this section (in, for a change, green).
Use the scroll bar to add either a synclinal or anticlinal structure to the centre of the line
and examine the stack response!
Migration (2)
A stacked seismic data set is a little bit like the view of the world seen
by someone wearing thick glasses with cracked lenses! Every velocity
change in the shallow layers causes a bend in the rays entering (and
leaving) the earth, distorting everything below that layer.
The methods used to correct for this, and migrate the data to its
correct position, rely on some of the work done by the early pioneers
in optic research explaining similar effects on light rays.
If we combine the above equations, removing the
dip term, we can plot all possible positions for the
migrated event.
All migration methods (and there are many of them) derive some (sometimes very
approximate) solution to the wave equation:-
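For reference, in its simplest 2D scalar form (pressure P, velocity v) this is:-

d²P/dx² + d²P/dz² = (1/v²) d²P/dt²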
(Shown in Chapter 3 with a promise never to be shown again!)
Other methods involve some sort of differential solution to this equation, and are generally
known as finite difference methods. The term finite difference arises in the way in which
the differential (or slope) of a sampled curve is computed.
With a coarse sampling interval, an estimate of the slope from the sample values (shown in green) is very inaccurate. As the sampling (and computer time!) increases, the finite difference more closely approaches the true slope.
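As a one-line illustration (a numpy sketch), the simplest central-difference estimate of the slope over a sample interval dx is:-

import numpy as np

def central_difference(f, dx):
    # slope estimates at the interior samples; the error shrinks roughly as dx squared
    return (f[2:] - f[:-2]) / (2.0 * dx)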
Other approximations in the solution lead to techniques that only work for limited dip
ranges (say up to 45°), and yet other methods use a transformation into either F-K or F-X
domains to accomplish the migration.
Most of these wave equation methods use a technique known as downward continuation.
The stacked section is divided into thin layers in time (typically 16-40ms, but dependent on
dip and velocity changes), and the velocities within each layer are used to compute the
effects of this layer on everything below it. It is as though we moved the shots and
geophones through this layer so that they are now buried in the earth (and everything
above them is fully migrated).
Here's the same "model" from the last page, but this time an additional bar has been
added to allow you to "move" the shots and geophones down to some time in the section.
Use the lower scroll bar as before to introduce some structure, then move the upper scroll
bar down progressively "migrating" the lower section.
Everything above the grey line is fully migrated (the "lens" effect of the upper layers has
been removed) and the deeper data is partially migrated.
Before moving on to some real examples, and the extensions of migration to 3D and pre-
stack data, here's a brief summary of some of the more common migration methods with
an indication of their effectiveness (and cost) for various conditions:-
Migration Name / Type / Dip / Vel-T / Vel-X / Cost

The criteria in the table are:-
Type - the type of migration, time or depth (see the next page).
Dip - effectiveness for steep dips.
Vel-T - handling of velocity variations in time.
Vel-X - handling of velocity variations in space.
Cost - relative cost (computer time).
The methods compared include Kirchhoff time and depth migrations, modified Kirchhoff, explicit finite-difference and phase-shift migrations.
Migration (3)
Both the Kirchhoff migration and variations on the other wave equation techniques can be modified to
take into account the bending of the raypaths (by refraction) within a complex section. These
modified techniques are usually referred to as depth migrations - as well as imaging the data in the
correct position they also convert the vertical time scale of our section to depth. For the sake of
consistency we often convert the final depth section back into time since that's the usual scale for a
seismic section.
Both time and depth migrations are only approximations when applied to 2D seismic data.
Unless the 2D line runs exactly across the maximum dips in the structure (normally called a
"dip" line), some of the data should be migrated out of the plane of the section in three
dimensions. The only solution to this is, of course, 3D processing (and 3D migration) which
we will discuss below.
Let's start by looking at some real examples of migration, and some of the problems
common to all migration programs.
This section shows the bottom two seconds of a seismic line that was processed to four seconds
through wave equation migration.
The buttons show the time (in seconds) of the "downward continuation" computed by the
approximation to the wave equation above that point. 0 is unmigrated (the stack) and 4
(representing 4 seconds) is fully migrated.
Note how the complex unconformity at the bottom of this section gradually becomes
imaged as the wavefront effectively moves down the section.
If you are running this course on a fairly fast standalone system, or over a fast network,
you may wish to look at an animation of the complete migration of the above line - if you're
running on a slow machine I would not recommend this!
All migration algorithms suffer when the spatial sampling of the data is not enough to properly image
any steep dips in the section.
Here's a small piece of line migrated at its original 12 metre CDP interval, and then (by
simply processing every other trace) at a 25 metre interval. You can clearly see the
migration "noise" introduced by the under-sampling (or spatial aliasing).
Spatial aliasing of this sort can be reduced by some form of interpolation on the original
data. Just as we can reduce the sample period of our data in the time domain by
interpolating samples between those already there*, we can use FFT or Tau-P transforms
on our stacked section to interpolate the traces to a finer spatial interval before migration.
We normally use our stacking velocities as a basis for building a "velocity-field" for
migration. We have already seen how inaccurate these velocities can be (in a geological
sense) and we may need to smooth them and adjust them (typically adding a few per cent
to them) prior to migration.
Small errors in velocity are difficult to spot but, as we mentioned a couple of pages back,
the wrong velocities can produce the "wrong" structure. In cases of extreme structure the
velocity field should itself be migrated prior to migrating the data. This requires an
interpretation of the stacked section and velocities picked on each horizon. We then use
the local dip to migrate each velocity and rebuild our velocity field.
As an extreme example this section was processed with the "correct" velocities (or what we
believed were "about right"!) and then much lower (-25%) and higher (+25%).
Some structures are impossible to stack correctly without migrating the data before stack. Pre-Stack
Time and Pre-Stack Depth migrations are now becoming commonplace and use all of the data (usually
in common-offset planes) as input to the migration process. Even with a relatively simple pre-stack
time migration we have the opportunity to re-pick our velocities after migration (with the events now
in the correct position) and re-iterate through the whole process combining NMO, DMO, Migration
and Depth Conversion into one (expensive!) single process.
A pre-stack depth
migration of a complex
(and very steep)
structure can produce
a very accurate
migration.
3D Migration
After 3D binning and stacking our 3D volume now represents a
complete "volume" of 3D traces - evenly spaced (though not
necessarily the same spacing) in both spatial directions (X and Y) and
sampled in time in the Z direction.
Note that we can almost achieve the same result (the errors are probably less than our velocity errors) by moving the data in two orthogonal directions (along X and Y) and, with judicious use of Pythagorean geometry, accomplishing our 3D migration in two steps.
This shows a 3D surface before and after migration (imagine that this is just one reflection from our
3D dataset).
Once again every reflection point will move "up-dip", only this time the dip is not assumed
to lie on the line - the data is migrated in 3D.
You should be able to see the "tightening" of each peak, and the widening of each trough
within the data set - each peak and trough moving in the direction of its own slope (or dip).
Notice how the coloured grid, drawn on the original (stack) dataset, distorts into a new
position after migration.
Here the data is first migrated along each "inline" (the lines parallel to the direction of shooting), and
the result is then migrated along the "crosslines" to complete the process.
A conventional 2D migration can be used for each pass but the results are not absolutely correct (they
rely on the assumption that (1 + a)^1/2 = 1 + a/2 when "a" is very small).
Full 3D migrations are now commonplace but even now some of these "cheat" by using wave
equation migrations over thin layers in the "X" direction and then in the "Y" direction. One advantage
of the older two-step processes was the opportunity to examine the data after the first pass - it's quite
easy to spot positioning or acquisition errors on data that has only been migrated in one direction - the "crosslines" can show all kinds of problems.
We'll look at some examples of fully migrated 3D data in a couple of pages, after we've examined
some additional post-stack/post-migration processes and more conventional displays!
Other Post-Stack Processes / Display
Final Gain Recovery - Adjustment of the initial gain recovery to compensate for velocity changes.
Final Display
We've discussed most of these processes before, but I'll just go through them briefly to
highlight the post-stack parameters that we might use.
Our final datum correction is simply a static correction to move the data to a fixed datum.
For marine data this is simply a static to finally correct (to Mean Sea Level) for the depth
of the shot and streamer ([shot depth + streamer depth]/1500 m/s - typically about 8 ms).
For land data this step normally requires correcting the floating datum used in the
processing to a fixed datum. This final datum could, of course, be either above or
(unusually) below sea level.
The final gain recovery step usually removes the Time-Squared scaling applied at the
beginning of the sequence, replacing it with a "Velocity-Squared times Time" scalar which
is a better approximation - we can't do this until we have our final velocities.
The Multichannel Filtering step shown above typically includes any of the spatial filtering
techniques mentioned in Chapter 8. This may be used to remove any remaining diffracted
noise, or simply to improve the coherency of the data. Even a simple running mix of traces
is a multichannel filter.
Display
Despite the fact that most processed data is now supplied to the interpreter on tape, we still need to
make paper or film displays for QC purposes and for a final record of our processing. Although
modern display screens can display in excess of 1000 "dots" across the screen, a 400 dpi (dots
per inch) paper plotter output of a 10 km seismic line will use literally billions of dots. We often
cannot "see" enough data on one display screen.
Let's examine some of the types of display used for seismic data ...
The combined "variable area and wiggle" trace is probably the most common type of display, combining the advantages of both those listed above.
We can also plot the peaks in one shade and the troughs in a different shade - this is sometimes called a "dual-polarity" display:-
Finally we can change the appearance of the
whole display by the judicious (or otherwise)
use of timing lines.
Before finally producing our final section, let's look at some 3D displays.
Display (2)
3D Displays
After stack and migration a 3D dataset
represents a continuous volume of data
sampled in 3 dimensions.
Many other types of display are possible. For example we can interpolate a "random line"
running on any path through the volume (sometimes called an Arbitrary 3D line), or we can display "horizon-slices" - timeslices that follow an event picked through the volume.
During data processing it's usually the three orthogonal displays we concentrate on - in
each of the following examples use the buttons to switch between examples of inlines,
crosslines and timeslices:-
INLINES
CROSSLINES
TIMESLICES
Note that the first and last inlines and crosslines are relatively weak. There is no data
beyond these lines to migrate into their position, this reduces their overall amplitude.
Note also the level of detail on the timeslices. At times a timeslice can look like a satellite
photograph, or a microscope slide taken from a rock sample! In general very small details
(smaller than the time sample period) can be seen to "line-up" on timeslices.
There will be a side label containing information on the acquisition and processing, plots
and annotation above and below the section, and, of course, the seismic data itself.
Click on the "Key" button and examine the following explanation on the various parts of
the display.
Key On Label
Red - Basic line details: Company names, the line name and area details.
Yellow - Field Parameters: Hopefully, all of the geometry information from the field. For marine data (this is a "land" line) this normally includes a boat diagram showing the overall layout.
Green - Processing Parameters: Full details of the processing. This may include an enormous amount of detail, or may simply list the processes applied without going into too much detail. Many processing systems now include automatic labelling so that you can be sure that the information on the label is correct.
Light Blue - Display details: The scale used for the display. Normally the horizontal scale is expressed in both "true" scale terms (i.e. 1:25,000) and in terms of the parameters used to plot the traces (10 traces per cm, 10 cms per second).
Dark Blue - Map: Sometimes a location map is included (typically on 2D data) showing the position of the line on the surface.

Key On Section
Red - Basic line details: A repeat of the line name at the other end of the section, possibly also showing the direction of the line (either in degrees or as, for example, "NNE").
Yellow - Statics: For land processing the elevations, field statics and residual statics are usually plotted on top of the section, and the floating datum is often plotted on top of the seismic data. For marine data, the water depths are usually just annotated with the Shotpoint numbers.
Green - Velocities: The final velocities used to process this section are usually annotated along the line. If this is a stacked section then we normally annotate the final stacking velocities, if migrated then, of course, the migration velocities.
Light Blue - Line Intersections: For 2D seismic data we mark the position of every other intersecting (or crossing) 2D line.
Dark Blue - SP/CDP numbers: The Shotpoint, Station and/or CDP numbers are annotated on either the top or bottom of the section.
Pink - CDP fold: In many cases (particularly for land data) the CDP fold is also plotted.
There are no hard and fast rules as to "what goes where" on the section, but most modern
sections are something like the above example. Before closing the "normal" processing
route, we'll examine four major display parameters that still cause problems.
Display Parameters
There are four key areas where problems still occur in plotting seismic data - here's a brief summary!
DISPLAY SCALES
We're generally supplied with information regarding display scales in the form of "1:xxxx"
horizontal scale, and "zzz cms (or inches) per second". The second item usually relates
directly to a plotting parameter - we ask for "zzz cms per second". The first can, however,
cause problems.
Assume that you have a seismic line with a CDP interval of "cdpi" metres. In order to plot
this at a 1:xxxx horizontal scale we need to plot (xxxx)/(100*cdpi) traces per cm. For
example, with a 12.5 CDP spacing, and a 1:25,000 scale we need 25000/1250 or 20 traces
per cm.
So far, so good but, if we are plotting on a relatively low resolution plotter, we may need to
adjust this scale slightly so as to get an exact number of dots per trace to avoid nasty
looking aberrations in the display. For example, if we have a 400 dot per inch plotter, each
trace from the above example will need 7.87 dots! If we change this to "8", we'll get the
wrong scale (1:24,606 instead of 1:25,000) but a better looking display.
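The arithmetic is easy to wrap up in a few lines (a Python sketch with illustrative names, reproducing the numbers above):-

def plot_parameters(scale, cdp_interval_m, plotter_dpi=400):
    traces_per_cm = scale / (100.0 * cdp_interval_m)
    dots_per_trace = plotter_dpi / (traces_per_cm * 2.54)
    # rounding to a whole number of dots changes the true scale slightly
    adjusted_scale = (plotter_dpi / (round(dots_per_trace) * 2.54)) * 100.0 * cdp_interval_m
    return traces_per_cm, dots_per_trace, adjusted_scale

print(plot_parameters(25000, 12.5))   # about (20.0, 7.87, 24606)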
POLARITY
This is one subject that causes a lot of problems! The SEG (Society of Exploration
Geophysicists) definition of standard polarity ("SEG standard") is:-
"The onset of a compressional wave from an explosive source is recorded as a negative number, and
plotted on the final seismic section as a white trough."
This may seem a little odd, but is based on historical data and is the convention adopted for most
seismic data in the USA. Other countries (and companies) adopt the opposite convention so please
ensure that you know what you should be using and PUT IT ON THE SECTION LABEL!
PLOTTING DIRECTION
Again, a binary choice! We can either plot our sections from left-to-right or from right-to-
left. The convention here is that North and East are always on the right of the display -
roughly parallel lines should be plotted in the same direction regardless of the direction in
which they were shot. Traditionally anything shot from 0° (North) to 179° (almost South) is plotted left-to-right, everything else from right-to-left, but the overriding criterion should be
to keep roughly parallel lines plotted in the same (geographical) direction. If in doubt once
again, check with the person you're doing it for!
SHOTPOINT LABELLING
This only really applies to 2D Marine data. Our navigation data is stored using Shotpoint
numbers, and our seismic data is stored by CDP number. How do we relate these two
together?
We need to be sure just how our navigation data was recorded. Does it show the position of
each shot? The position of the recording antenna? The position of the mid-point between
shot and 1st receiver?
All of these are possible and require some adjustment to our labelling. For example, for
240 channel, 120 fold seismic data we will have two CDPs (at say 12.5 m) for every
Shotpoint (at 25m). The mid-point between the first shot and receiver corresponds to the
240th CDP on our section and, if the navigation data gives the position of this mid-point,
then we should mark our first SP on the 240th CDP. If the navigation data shows some
other position then we must offset our labelling by the corresponding amount AND PUT IT
ON THE SECTION LABEL!
Well, I guess that we've finally finished the basic processing sequence!
Once we've finished the processing (and plotted the data) we usually need to archive the
data on tape for the next stage - the interpretation of the seismic data. We normally run
several archives of the data for posterity - almost always in the SEGY format covered fully
in Chapter 5.
Although, traditionally, archives are made of the unfiltered data after both stack and
migration (so that we can change the deconvolution and filtering if necessary) we also need
to archive the final fully processed data set (that which we normally plot) for loading onto
the computer system used by the interpreter.
Until quite recently, most seismic data was interpreted with the aid of a pencil and (hopefully) some geological knowledge of the area being studied.
This not only necessitated the manual checking of every intersection in a 2D survey to
make sure that the interpretations "tied", but also required manual digitisation of the final
results - reading the picked time of events at small intervals along the line and hand-posting
these on a map for hand contouring.
The advent of large volumes of 3D data
pushed the development of CAI - Computer
Aided Interpretation.
Workstations can then be used for the complex tasks
involved with drilling, testing and prospect evaluation.
Chapter 12 - Advanced Processing
To conclude the course, some more advanced topics, and yet more questions!
Using complex numbers to represent the seismic trace, and displaying other "attributes"
extracted from the data.
Handling the data recorded by instruments lowered down a borehole. Which curves are
useful to the processing Geophysicist?
Matching well logs to seismic data, or matching one seismic survey to another.
Once we've matched our seismic to a well log, we can produce a "true" zero-phase section.
With the addition of the processing velocities (and a lot of complications) we can convert
our zero-phased seismic data into Acoustic Impedance.
The variation of amplitude with offset - how we can extract additional information from
our seismic data ...
Page 12.09 - Scanning & Reprocessing
Making use of our old data. Either scanning old paper sections or reprocessing old digital
data.
Some thoughts on parameter trials and the things that can (and do) go wrong with seismic
processing.
A bit of history, and a brief look ahead (including all of the things not mentioned
elsewhere).
Complex Trace Analysis
There is one method of displaying seismic sections that is useful for finding small changes
in the character of the data or for improving the continuity of the display. Although of
particular use to the interpreter, and common on the workstations used for interpretation,
we sometimes use this technique to enhance our data display during the processing
sequence.
Long, long ago, on a page far, far away in this course (actually in Chapter 6) I showed an
animation of a rotating clock, with the height of the hand plotted to give a simple cosine
wave.
If, as shown here, we imagine the same clock but this time with a variable rotation speed,
and a variable hand length, the resultant trace (at the top of this picture) is more
complicated and begins to look a little like a seismic trace.
If we assume that we can always represent our seismic trace by some sort of rotating vector
like this then we can also examine an alternate version of the trace - the quadrature trace
formed by plotting the "width" of the hand at any time.
We can obtain this quadrature trace from the original trace by a 90 degree phase shift or,
as it is sometimes known, a Hilbert transform (name dropping again - David Hilbert was a
German mathematician).
Another way of looking at
this is shown here. Imagine
a thin wire rotating about
an axis and spiralling off
into space.
So what's the point of all this? Well, it gives us four other parameters (and variations
thereof) that we can plot in place of the regular seismic trace.
Firstly we can plot the quadrature trace itself. This is just a 90° phase-shifted version of
the original trace - peaks and troughs now become zero-crossings, and what were zero-
crossings become peaks and troughs.
Secondly, and more importantly, we can plot some of the complex attributes of this pseudo
3D trace. The "length" of the rotating vector (the clock hand) can be plotted, and this is
usually known as the trace envelope. It contains the same amplitude information as the
original trace, but without any phase information.
Here's the envelope, or reflection strength, of a simple wavelet plotted (in red) as both
positive and negative values (the actual computed envelope is always positive).
The wavelet shown here is zero-phase, but, as you flick through the buttons the phase of
the wavelet changes. The envelope, however, always remains the same and always
completely "surrounds" the wavelet - regardless of phase.
We can also plot the "speed" of the rotating vector. This is known as the instantaneous
frequency of the trace, and provides useful information on the actual peak frequencies
within the trace at each time sample. The position of the vector around the "clock" is
known as the instantaneous phase. This removes all amplitude information from the trace,
allowing us to see the weaker events.
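As a rough sketch (using scipy's Hilbert transform to form the analytic trace - the function name is mine), all of these attributes fall out of a few lines:-

import numpy as np
from scipy.signal import hilbert

def complex_attributes(trace, dt):
    analytic = hilbert(trace)                        # trace + i x quadrature
    quadrature = np.imag(analytic)
    envelope = np.abs(analytic)                      # "length" of the rotating vector
    phase = np.unwrap(np.angle(analytic))            # position around the "clock"
    inst_freq = np.diff(phase) / (2.0 * np.pi * dt)  # "speed" of rotation, in Hz
    return quadrature, envelope, phase, inst_freq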
Here's the full set of complex attributes for a small piece of seismic data. These are:-
Note how the different events appear in each display. The envelope shows the overall event
amplitude (combining the peaks and troughs together into distinct "events"). The
instantaneous frequency highlights some bursts of high (and low) frequency in the data,
and the instantaneous phase removes all amplitude information - all events appear in the
same colours.
Although useful for improving the correlation of events across the section, these displays
don't actually add anything to the data - it's just another way of looking at our seismic
section. If we really want to add more information to the seismic then we have to look for
other sources of information. We'll start by looking at how we can integrate information
from holes in the ground!
Well Log Processing
Except when we're dealing with reconnaissance data in a new
area, much of the seismic data that we process runs close to a point
of absolute calibration - a well.
How is useful "geophysical" information obtained from a
well?
I don't propose to go into a detailed discussion of all these curves - for that please consult
your local well-log analyst or petrophysicist! For simple calibration of our final migrated
seismic data, just two curves are necessary - the "DT" curve giving the reciprocal of the
velocity (sometimes called the sonic curve), and the curve labelled "RHOB" - the density.
The first step is to convert the depths to metres (we now have a reading roughly every 15 cms). As the DT curve gives us slowness (in microseconds per foot), we can convert this into a velocity by dividing it into 304800 - this gives us the column labelled "VEL".

If we compute the acoustic impedance for each layer as the product of velocity and density, we can compute the reflection coefficient at the interface between each 6-inch layer using the equation derived way back in Chapter 3. A slight rearrangement of that equation, using AI's, gives us:-

RC = (AI2 - AI1) / (AI2 + AI1)

The reflection coefficient (the end column) is simply the difference of impedances divided by the sum.

Depth (m)  DT       RHOB    VEL      AI       RC
1524.16    132.824  2.1983  2294.76  5044.58  -0.00681
1524.31    133.745  2.1836  2278.96  4976.34  -0.00158
1524.46    133.568  2.1738  2282.00  4960.61   0.00178
1524.61    133.136  2.1745  2289.40  4978.29  -0.00391
1524.77    134.676  2.1825  2263.22  4939.47  -0.00168
1524.92    135.45   2.1877  2250.28  4922.93   0.00314
1525.07    134.326  2.1832  2269.11  4953.92  -0.00119
1525.22    134.282  2.1773  2269.86  4942.16  -0.00197
1525.38    134.645  2.1746  2263.73  4922.72   ...
...        ...      ...     ...      ...       ...
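The whole calculation is only a few lines (a numpy sketch with illustrative names; run on the first two rows above it reproduces the -0.00681 value):-

import numpy as np

def log_to_rc(dt_us_per_ft, rhob):
    vel = 304800.0 / dt_us_per_ft                    # slowness (us/ft) to velocity (m/s)
    ai = vel * rhob                                  # acoustic impedance for each layer
    rc = (ai[1:] - ai[:-1]) / (ai[1:] + ai[:-1])     # reflection coefficient at each interface
    return vel, ai, rc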
Here then is a plot of these computed values for a 700 metre section of the well-logs:-
We now have a set of idealised reflection
coefficients for every depth interval in our
well, and we need to convert these to time.
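A sketch of that conversion (assuming a starting time for the top of the logged interval, for example from a check-shot - names are illustrative):-

import numpy as np

def depth_to_twt(vel, dz=0.1524, t_top=0.0):
    layer_twt = 2.0 * dz / vel                       # two-way time through each thin layer (s)
    # time to the top of each layer, accumulated down the log
    return t_top + np.concatenate(([0.0], np.cumsum(layer_twt)[:-1]))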
We can now plot our reflection coefficients as a function of time (the left
hand scale is now in milliseconds) and produce an ideal synthetic
seismogram showing how our seismic trace would look if we could record
the very high frequencies represented by this "six-inch" depth sample
rate.
In order to more accurately predict how this synthetic trace will look in the "real" world
(sorry, but 3000 Hertz signals won't penetrate to 1740 metres - try 30 Hz!), this display
shows the effects of high-cut filters on the synthetic.
You can see that, below about 40 Hz, even the very strong event disappears!
Other events interfere with it at this low frequency.
We'll see how we can use this synthetic to precisely zero-phase our data, but
first let's look at the more general case of comparing one trace with another -
data matching!
Data Matching
The matching of well synthetics to seismic data is part of the much more general problem
of data matching.
We have a combination of Airguns and Vibrators as sources, and land and marine
geophones for recording. All of these have their own characteristics which lead to
problems in matching the final data set. Before examining some examples from this
transition zone survey, let's look at some of the reasons why the data doesn't tie.
For 3D data the enormous amount of data that overlaps can be a problem -
we need to use some fairly sophisticated techniques to "check" the
navigation data. We'll look at solutions to this on the next page.
PHASE differences can be slightly more difficult to quantify. We can
determine (as we'll see on the next page) a complex phase operator to
convert one dataset to another, but this can introduce dispersion (some
frequencies "shift" relative to others spreading out our events).
If it's possible, a simple phase shift (of "X" degrees) is the nicest correction
to determine - it's simple to apply and reversible! Once again we'll look at
this on the next page.
We can measure the RMS amplitude in one or more time gates at each
intersection and generate either a constant correction or a gain-curve
adjustment to apply to one survey.
Remember that the actual numbers on tape don't necessarily mean very
much - there may be enormous amplitude differences between surveys.
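As an illustration only (the trace names and gate times below are hypothetical), the RMS measurement and the resulting constant correction might look like this:

```python
import numpy as np

def rms_in_gate(trace, dt_ms, gate_start_ms, gate_end_ms):
    """RMS amplitude of one trace within a single time gate."""
    i0, i1 = int(gate_start_ms / dt_ms), int(gate_end_ms / dt_ms)
    samples = np.asarray(trace[i0:i1], dtype=float)
    return float(np.sqrt(np.mean(samples ** 2)))

# A constant scalar to bring survey "B" into line with survey "A" at one intersection:
# scalar = rms_in_gate(trace_a, 4.0, 1000, 2000) / rms_in_gate(trace_b, 4.0, 1000, 2000)
```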
Here's the processed (stacked) data from the transition zone survey shown at the top of the
page.
The three sections here are colour coded and correspond to:-
2. The data recorded by the land geophones - shot using both airguns and Vibroseis.
3. The final merged stacks both before and after phase correction (my apologies for the
quality of the displays - but this is very old data!).
Note particularly the "join" between the airgun and Vibroseis data (Yellow to Green). The
phase differences between the two different data sets are very obvious here.
Here's a much clearer example - the intersection between two different vintages of marine
data before and after correction.
For matching like this to work well, we need to use as much information as possible from
the two datasets. For 2D data we should use all intersections to determine the "best"
match between the surveys. We may need to make some adjustments to the navigation
data to optimise the "fit", but these corrections should be consistent (for example "move"
survey "X" 50 metres to the NW).
If we're matching multiple surveys, we need to use one survey as a "master" and match all
others to it. Once we have established the necessary corrections to match survey "B" to
survey "A", we can use both of these surveys to determine the corrections for survey "C",
and so on.
We'll now look at some of the techniques used for data matching, starting with the simplest
problem - matching the synthetic data from a well to one seismic line.
Zero Phasing
By "zero-phasing" we mean that we are attempting to make every reflection on our section
into a symmetrical zero-phase wavelet.
The ideal way of zero-phasing our data is by comparison with synthetic seismograms
produced from well data (as mentioned a couple of pages back). This type of "data-
matching" is identical to that used to match different surveys and/or source/receiver
combinations together, which we'll come back to at the bottom of this page.
Data matching, whether it be matching a well synthetic to a seismic line, or matching two
seismic lines together, always comes down to just one basic question. Can we find an
optimum filter that will consistently transform one trace into another?
The three black "traces" shown on the right are three CDPs from a 2D
migrated line close to the nominal position of a well (determined from the
navigation data). The red trace is a synthetic produced from the well data,
filtered down to the same bandwidth as the seismic data (with a zero-phase filter).
It's not obvious which of the traces on the left best fits the well data!
We can mathematically determine the position of "best-fit" by the use of a technique
known as cross-coherency. For well vs. 3D this will indicate (in red) the best 3D line and
CDP position for matching.
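The exact cross-coherency measure isn't spelled out here, but a normalised cross-correlation scan over the candidate traces and a range of time shifts gives the flavour of it - a sketch, with all of the names my own:

```python
import numpy as np

def best_match(synthetic, candidate_traces, max_lag=50):
    """Return (trace index, lag in samples, coherence) of the best-fitting trace,
    using normalised cross-correlation as a simple stand-in for cross-coherency."""
    s = np.asarray(synthetic, dtype=float)
    s = (s - s.mean()) / s.std()
    best = (None, 0, -np.inf)
    for i, trace in enumerate(candidate_traces):
        t = np.asarray(trace, dtype=float)
        t = (t - t.mean()) / t.std()
        for lag in range(-max_lag, max_lag + 1):
            coherence = float(np.dot(s, np.roll(t, lag)) / len(s))
            if coherence > best[2]:
                best = (i, lag, coherence)
    return best
```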
This "best" position may not coincide exactly with the nominal well location from the
navigation data (which is, after all, supposed to be "correct").
Once we've determined the optimum position for the match, we can then go on to
determine the actual filters.
The four panels shown here are:-
1. The wideband synthetic trace.
2. A filter designed to match this to the seismic data.
3. The seismic data from our "best" CDP (repeated 5 times).
4. The results of filtering the synthetic with the filter - the well data filtered to match
the seismic.
Why try to match the well to the seismic when we're really trying to match the seismic to
the well?
It's all a question of bandwidth. The well synthetic contains very high frequencies
(equivalent to the original 6-inch sampling of the well log). If we try to design a filter to
convert the seismic data into this synthetic, the filter will "go mad" trying to boost
frequencies that just are not present in the seismic. By designing the filter the other way
round, we reduce the bandwidth and get a stable result. Always try to make your filter
reduce the bandwidth rather than increase it.
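One common way of designing such a matching filter (a sketch only - the variable names are hypothetical and real packages will have their own routines) is a least-squares shaping filter: build a convolution matrix from the wideband synthetic and solve for the coefficients that best reproduce the seismic trace:

```python
import numpy as np

def matching_filter(input_trace, desired_trace, n_coeff=51):
    """Least-squares filter that shapes input_trace towards desired_trace.
    Assumes both traces have the same length and sample rate."""
    x = np.asarray(input_trace, dtype=float)
    n = len(x)
    # Convolution matrix: column j holds the input delayed by j samples.
    conv = np.zeros((n + n_coeff - 1, n_coeff))
    for j in range(n_coeff):
        conv[j:j + n, j] = x
    desired = np.zeros(n + n_coeff - 1)
    desired[:n] = np.asarray(desired_trace, dtype=float)
    coeff, *_ = np.linalg.lstsq(conv, desired, rcond=None)
    return coeff

# filt = matching_filter(wideband_synthetic, seismic_trace)   # design on the synthetic
# matched = np.convolve(wideband_synthetic, filt)             # synthetic shaped to the seismic
```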
The filter from the left-hand panel above was analysed (by Fourier Transform) and
appeared to be pretty much a +147° phase shift and +2.2 ms time shift filter. By applying
the inverse of this to the seismic data (as numerical values) we can achieve a fairly good
zero-phasing of our seismic.
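Reducing a measured filter to a constant phase plus a time shift can be done in several ways; one approach (sketched here, with my own choice of band limits) is to fit a straight line to its unwrapped phase spectrum - the intercept gives the constant phase and the slope gives the time shift:

```python
import numpy as np

def phase_and_time_shift(filt, dt_s=0.002, fmin_hz=8.0, fmax_hz=60.0):
    """Estimate a filter's constant phase (degrees) and time shift (ms).

    Within the signal band, phase(f) ~ phase0 - 2*pi*f*tshift, so a straight-line
    fit to the unwrapped phase gives phase0 (intercept) and tshift (from the slope).
    """
    spectrum = np.fft.rfft(np.asarray(filt, dtype=float))
    freqs = np.fft.rfftfreq(len(filt), dt_s)
    band = (freqs >= fmin_hz) & (freqs <= fmax_hz)
    phase = np.unwrap(np.angle(spectrum[band]))
    slope, intercept = np.polyfit(freqs[band], phase, 1)
    return np.degrees(intercept), -slope / (2.0 * np.pi) * 1000.0
```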
Here's the final phase/static corrected seismic line on the left,
with a "spliced" display on the right showing some
duplicated synthetic traces spliced into the section (I've
dropped the gain on this to show more detail).
You can see that the match is not perfect - it's a sort of "best-
fit" over the whole window. How could we improve this?
Be true amplitude.
Be accurately imaged.
Be free of multiples.
When well data is not available, we can achieve an approximate zero-phasing of the data
from the data itself. We make an estimate of the amplitude spectrum of the signal within
our data, assume that this has been recorded and processed as minimum-phase and then
correct this to zero-phase. Depending on the type of spectral estimation used this can
386
correct the data to within about 30 of "true" zero-phase.
To extend the example shown above to general data matching, replace the well synthetic
with a trace from another survey and repeat the whole process for every intersection
between surveys!
If we can estimate phase, static and amplitude corrections at each intersection (after
correcting for positioning errors) we can average these to provide a consistent set of
parameters to "convert" one survey to another.
Some words of warning on averaging! If you average together a whole bunch of random
filters, the result almost always comes out close to zero-phase - do check that the individual
filters are consistent before averaging them. Even averaging numeric phase corrections
can be difficult - what is the average of 350° and 10°? It could be either 180° or 0°!
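The safe way to average phase corrections is to treat each one as a unit vector and average those (a circular mean) - a small sketch:

```python
import numpy as np

def average_phase_degrees(phases_deg):
    """Circular mean of a set of phase corrections (in degrees)."""
    r = np.radians(np.asarray(phases_deg, dtype=float))
    mean = np.degrees(np.arctan2(np.sin(r).mean(), np.cos(r).mean()))
    return float(mean % 360.0)

# average_phase_degrees([350, 10]) returns 0.0, not the misleading arithmetic mean of 180.
```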
Now to move on beyond zero-phasing, to see how much additional information we can
extract from our seismic data!
Seismic Inversion
Once we've successfully converted our seismic data to zero-phase, by matching it to a well
synthetic, is there any additional information that we can extract from the seismic?
As usual, the answer's yes, we can use the information from the well log to attempt to
convert our whole section back from its seismic "reflection coefficients" into the detailed
acoustic impedances (or even velocities) present in our well data. This process is known as
inversion or, perhaps more correctly, seismic inversion.
The first problem is that the seismic data doesn't contain anything like the same frequency
content as the well logs. We've seen before how the seismic data lacks the very fine detail
present in a well log. As well as this it's also usually very poor at the low frequency end of
the spectrum - we don't usually process data much below 3 or 5 Hz because of the noise this
introduces. Unfortunately these very low frequencies contain some very useful velocity
information in the well logs that is missing from our seismic data. Can we use any source
of low frequency velocity information?
Well, we do have the velocities used to stack and/or migrate the data. Although these are
only approximations to the true RMS velocities in the earth, we can use these to provide the
low frequency velocity "trend" in our data set and, with appropriate scaling, add these to
the results produced by integrating the zero-phase seismic data.
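At its very simplest ("recursive" inversion), this integration is just the reflection-coefficient equation run backwards, with the absolute impedance level and the low-frequency trend supplied from outside the seismic band. A sketch (the starting impedance here is arbitrary):

```python
import numpy as np

def recursive_inversion(rc_trace, ai_at_top=5000.0):
    """Run RC = (AI2 - AI1) / (AI2 + AI1) backwards: AI2 = AI1 * (1 + RC) / (1 - RC).

    The absolute level (ai_at_top) and the missing low frequencies must come from
    elsewhere - the well logs or the stacking/migration velocities."""
    ai = np.empty(len(rc_trace) + 1)
    ai[0] = ai_at_top
    for i, rc in enumerate(rc_trace):
        ai[i + 1] = ai[i] * (1.0 + rc) / (1.0 - rc)
    return ai
```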
The second problem is that the processes used in the production of the synthetic
seismogram (multiplication, differentiation & convolution) are inherently more stable than
their inverses (division, integration & deconvolution). For this reason, considerable care is
needed during the inversion process to prevent errors creeping in, and our seismic data
needs to be as "clean" as possible. Inversion is a classic "GIGO" process - garbage in
garbage out!
To separate density from velocity we can use Gardner's relationship, which approximates
the density as a constant "K" multiplied by the fourth root of the velocity. In other
words:-

density = K · Velocity^0.25
We can finally move on to an example of seismic inversion. The small piece of section
shown here has been through a similar route to that shown above, and the five sections
show:-
3. The seismic converted (by the appropriate scaling) into "seismic-band" acoustic
impedance.
4. The interval velocities derived from the processing, converted to impedance and
low-pass filtered.
Finally, a very detailed inversion from some high resolution data. The well position is
clearly marked and the quality of the match between this and the inverted seismic data is
phenomenal (and quite unusual!).
We can, of course, extract even more information from the seismic data, perhaps even some
physical parameters associated with different rock types. Let's move on to AVO!
AVO (1)
Amplitude versus Offset or Amplitude Variation with Offset are both names for what is
now known as AVO analysis.
What is AVO? Robert Sheriff's formal definition of AVO, as taken from the 1991 edition of
his Encyclopedic Dictionary of Exploration Geophysics is:-
"The variation in the amplitude of a seismic reflection with source-geophone distance.
Depends on the velocity, density and Poisson ratio contrast. Used as a hydrocarbon indicator
for gas because a large change in Poisson's ratio (as may occur when the pore fluid is a gas)
tends to produce an increase in amplitude with offset."
In other words, an analysis of the variation of the reflection amplitudes across the offsets
within one CDP gather can give us some valuable insights into the precise physical
parameters of the rocks at that interface. Let's digress for a moment and try to explain
how AVO effects come about.
If, like most processing geophysicists, you spend a lot of time banging
your head against a brick wall, you may like to consider just what
your violence is actually doing to the wall! If you hit the wall exactly
perpendicular to the wall, then all your energy will be converted into
a P-wave that will pass through the wall. If, as is likely, you hit the
wall some kind of "glancing" blow, then some of the energy will be
converted into S-waves, causing the wall to move up and down or
left to right.
Up until now we've assumed that the reflection coefficient at each interface is simply the
difference of the acoustic impedances divided by their sum. Way back in 1899, Knott
derived formulae for reflection coefficients based on Snell's law and the continuity of
displacement and stress across the interface. These were formalised into amplitude
coefficients by Zoeppritz in 1919 to produce what are known as the Zoeppritz Equations. As
they say on TV sports programs - if you don't want to see the result, look away now!
ARP, ARS, ATP & ATS are the P-wave and S-wave reflection and transmission coefficients; VP,
VS and ρ are the P-wave and S-wave velocities and the density in each layer.
Can we make any useful predictions from the Zoeppritz Equations? We can simplify their
calculation a bit by bringing in another parameter, an elastic constant associated with the
rock - its Poisson's ratio.
In terms of pure physics, Poisson's ratio is simply a measure of how much the cross-section
of a rod changes when it is stretched. In a fluid, doubling the length halves the cross-
section (the volume is retained), which yields a Poisson's ratio of 0.5. A rod which never got any thinner,
regardless of the amount of stretching applied, would have a Poisson's ratio of zero. There
is a simple relationship between the P-wave velocity, the S-wave velocity and Poisson's
ratio...
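In its standard form that relationship is σ = (VP² − 2·VS²) / (2·(VP² − VS²)). Here it is as a small pair of helper functions (the names are mine):

```python
import numpy as np

def poissons_ratio(vp, vs):
    """Poisson's ratio from the P- and S-wave velocities."""
    vp, vs = float(vp), float(vs)
    return (vp ** 2 - 2.0 * vs ** 2) / (2.0 * (vp ** 2 - vs ** 2))

def vs_from_poisson(vp, sigma):
    """The same relationship inverted: estimate VS from VP and Poisson's ratio."""
    return float(vp) * np.sqrt((0.5 - sigma) / (1.0 - sigma))
```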
... and here are some typical P-wave velocities and Poisson's ratios for some specific rocks.
wave velocity and hence the Poisson's ratio.
Approximations of the Zoeppritz Equations (boy, do they need it!) by Shuey and others led
to an elegant method of reducing these curves to just two parameters.
If we plot the AVO response curves as a function of the sine of the incident angle
squared, they (at least out to about 30°) approximate a straight line. In other
words:-

R(θ) = R0 + G · sin²(θ)
Of course, it isn't quite as simple as that - let's now move on to look at the problems we
may encounter, and at some real examples!
AVO (2)
For most AVO analysis projects, the first thing we need to look at are the
well logs. To estimate the theoretical AVO response we need accurate VP,
VS and density curves.
Direct S-wave velocities are not normally obtained from well data
(though becoming more generally available through full waveform
logging and 3-component VSP's etc.), but they can be estimated from a
precise knowledge of the lithology associated with a particular layer.
Once we have all of the curves, we can plug all of the values into the dreaded Zoeppritz
equation, and calculate the theoretical AVO response for a CDP recorded at our well
position. We can even "adjust" the well curves to remove or add different materials to the
curves (for example Gas or Oil) and recompute the synthetic data for these new scenarios.
The seismic data itself needs very careful preparation. It must be very well matched to the
well data (perfectly(!) zero-phased) and must be free of any "non-geological" anomalies.
Here's just some of the problems that may affect our seismic data, and their (possible)
solution:-
(Note - nobody said that this was going to be easy!)
Note here that the trough at about 1.6 seconds gets gradually
stronger across the spread, and the following peak gets
weaker. Remember that we keep just two values for each
sample - the R0 value, or zero-offset intercept, and "G", the gradient of the amplitudes
against sin²(i).
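Per time sample, R0 and G come from nothing more than a straight-line fit of the picked amplitudes against sin²(i) - a minimal sketch, which assumes the incidence angles have already been estimated from the offsets and velocities:

```python
import numpy as np

def intercept_gradient(amplitudes, incidence_angles_deg):
    """Least-squares fit of R(i) = R0 + G * sin^2(i) for one time sample."""
    x = np.sin(np.radians(np.asarray(incidence_angles_deg, dtype=float))) ** 2
    g, r0 = np.polyfit(x, np.asarray(amplitudes, dtype=float), 1)
    return r0, g
```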
This shows a very small piece of data processed through AVO analysis. The displays are:-
2. R0.
3. R0+G.
4. Sign(R0) times G.
5. R0 times G.
Note that the "R0" section represents the amplitudes extrapolated back to zero-offset -
equivalent to having our geophones on top of our shots! The coloured displays show some
typical combinations of R0 and G - in this case the final display really highlights this
shallow gas anomaly. Not the kind of place to drill (unless, of course, it's a productive
field!).
AVO analysis is, at best, difficult, and, at worst, impossible! We need very good quality
seismic and well data for meaningful calibrated results but, when it does work, it can reveal
much more information than that (apparently) present in the migrated section.
To close our AVO discussion, here's your chance to play with the Zoeppritz equations.
Enter P-wave velocities for each layer (you should use metres/second for the density
calculation to work), either S-wave velocities (numbers greater than 0.5) or (in the VS
space) a Poisson's ratio (0 to 0.5). If you submit the density as zero, the program will use
the Gardner relationship to estimate one.
As before, ARP, ARS, ATP & ATS are the P-wave and S-wave reflection and transmission
coefficients - if the curves stop, it's because you've reached a critical angle. The black line
shows the computed R0 and G.
It's a non-trivial task to convert old paper data into digital traces.
Firstly the scanned image must be corrected for any distortion
present in the original and then any superfluous information must
be removed. (Images courtesy of Phoenix Data Solutions Ltd.)
The image is then converted into individual "traces", these being filtered to
remove any anomalies caused by the original display "dots".
Once all of that is complete the seismic traces can be output to tape
in a standard format and reprocessed to yield a "modern" seismic
section. (Images courtesy of Phoenix Data Solutions Ltd.)
Once we have our data in a digital form (usually just post-stack or even post-migration) we
can reprocess it, just as we can be asked to reprocess any previous processed data on tape
(usually the field tapes).
Why reprocess seismic data? Well, there are many reasons for reprocessing older seismic
data. The most common being:-
3. New technology
We should generally always be able to "improve" previously processed data - if nothing else,
we have an original section to work from which we can at least aim to improve.
Aside from the continual improvements in computer hardware, and the introduction of
new processing techniques, a simple reappraisal of the parameters used in the original
processing, together with some new parameter trials (possibly based around the original
parameters) should lead to some improvement. Always remember that following the same
processing route, with different software and/or personnel, may not produce the same
results as the original section!
Before finally winding-up this entire course, let's have a quick look at some thoughts on
parameter trials, and some of the possible problems that we may encounter in processing.
QC - Trials and Tribulations
We've discussed (and demonstrated) "trials" at various points in the processing sequence
described in the previous Chapters. Here's that sequence once again, with the places where
trials are generally required marked in green. I've marked the NMO correction stage in red
because we need to "test" our velocity functions at numerous points along each line (the
same applies to the residual static corrections on land data). How often should we run
parameter trials, and how many trials?

1. Transcription
2. S.O.D Correction
3. Signature Deconvolution
4. Initial Gain Recovery
5. Resample
6. Edit
7. Multichannel Filtering
8. CMP Gather
9. De-Multiple
10. Dip Moveout
11. Deconvolution
12. NMO Correction
13. Mute
14. Equalisation
15. CMP Stack
16. Datum Correction
17. Final Gain Recovery
18. Multichannel Filtering
19. Deconvolution
20. Migration
21. Spectral Shaping
22. Bandpass filter
23. Equalisation

Too many trials certainly leads to confusion! I was once involved in some deconvolution
trials for a large 2D survey where the client wanted 4 deconvolutions before stack tested
with 4 after at 7 different locations. Allowing for the "no-decon" panels this totalled 175
separate displays with very minor differences! The parameters originally selected by the
processing geophysicist (on the basis of a few trials at one location) were eventually used!

Here's a few tips on running parameter trials:-

Use the data itself to determine the initial parameter ranges (for example, on marine data,
you should be able to estimate the length of deconvolution operator necessary to attenuate
the water bottom multiple).

In areas of complex geology or large structures it will be necessary to repeat parameter
trials in several areas - select these areas with the interpreter.

Attempt to choose a consistent set of parameters; if results vary considerably, a
compromise is necessary.
2. Choose less sensitive parameters.
... one 3D "bin".
    Possible problems: Strange patterning in the near-surface. Poor offset distribution.

DMO (Dip Moveout). To make a Common Mid-Point a Common Depth-Point.
    Possible problems: Spatial aliasing. Poor structural resolution.

Deconvolution. To widen the bandwidth of the data. To remove short-period multiples.
    Possible problems: High noise content. Poor continuity. Introduction of pseudo-multiples.

De-Multiple. To remove long-period multiples.
    Possible problems: High noise content. Poor continuity. Introduction of pseudo-multiples.
    Spatial aliasing.

NMO. To dynamically correct for time / space variant velocities.
    Possible problems: Poor continuity. Poor stack response. No data.

Mute. To remove the far-trace noise bands.
    Possible problems: Noise. Removal of signal. Poor stack response. Poor multiple attenuation.

CMP Stack. To sum the data within one Common Mid-Point.
    Possible problems: Poor continuity. No data. Strange patterning in the near-surface.

FK Filtering (& Mixing). To remove (dipping) noise or multiples.
    Possible problems: Dipping noise bands. Spatial smearing.

Migration. To move the stacked data to its correct position.
    Possible problems: Spatial aliasing. Bad structure.

Bandpass Filtering. To improve the signal-to-noise ratio of the data.
    Possible problems: Removal of signal. High noise content.

Equalisation / AGC. To normalise the amplitudes in both space and time.
    Possible problems: Shadow zones of low amplitude. No amplitude discrimination between events.

Seismic Inversion. To convert the seismic data to (e.g.) Acoustic Impedance.
    Possible problems: Meaningless results. Random noise.

AVO Analysis. To establish physical parameters from the seismic data.
    Possible problems: Meaningless results.

The Final Display. To present the data to the "client".
    Possible problems: All previous stages are wasted.
Yesterday, Today and Tomorrow
As we've now reached the last page of this course (aside from the questions), I thought that
it would be useful to examine the past, present and future of seismic processing.
Today it's far more boring!
Tape storage has also changed. The old 21- and 9-track tapes, holding just kilobytes of
data, have given way to modern cartridges holding several gigabytes of data.
The output "plotting" of seismic data hasn't changed in
principle, only in the resolutions now available.
It's interesting to note that some things
have not changed. When I took these
photographs (in Robertson's Swanley
processing centre), almost all of the Land
processing personnel were picking first-
breaks, whilst almost all of those involved
in Marine 2D or 3D processing were either
Where is processing likely to go from here? Well, we've already seen an increase in the use
of more sophisticated techniques on a routine basis - pre-stack migration is now very
commonplace whereas, at one time, the running times would have made it prohibitively
expensive.
There is always hope that improvements in machine "intelligence" will alleviate the need to
pick either first-breaks or velocities, but it will be some time before geophysicists will fully
"trust" their computers to do all of the work.
I'll close this section now with a look at some of the things that I haven't specifically
mentioned elsewhere in this course - just to tidy things up!
and sample-to-sample relative amplitude. Essential for any advanced
processing (AVO & Inversion) that relies on accurate amplitude
information. Sometimes called True Amplitude Processing.
VSP Vertical Seismic Profiling. Geophones are placed down a borehole and
then normal surface sources are fired into these geophones. These provide
valuable additional information from the well, and can also be used to
provide P and S wave sections.
Final Questions
The following three questions are 1) quite hard, and 2) don't include the answers! Have a
quick look at them, and then send your answer to us as shown below. We'll try and get
back to you ASAP with the correct answers!
Question 1:
Question 2:
Here's a very old seismic section (migrated, and partially interpreted). What, if anything,
is wrong with it?
Question 3:
Here's three stacked sections from the same 2D line. Which one ("A", "B" or "C") is
correct, and which single process is missing from the processing sequence for the other two
sections (just one process is missing in each case).
To send us your answers, you might be able to Email them to us by clicking here (this, I'm
afraid, depends on your Browser set-up), or you can Email them manually to us at
"training@geo.robresint.co.uk".
If you need to send them in writing, either fax them to (+44) (0)1322-613650, for the
attention of the Training Department, or write them on a piece of paper and mail them to:-
Thank you for your patience! If you've got this far and read every single word then I can
only congratulate you!
I would like to close by thanking those people without whom this course would not have
been completed, particularly those at Robertson's in Swanley who checked and commented
on its content (you know who you are, so I won't mention you!).
All that remains now is to wish you many years of successful data processing, as this is
really ...