Mohamed Shihataa
1/11/2015
Chapter 1: Introduction
Geophysical principles: Seismic method and response
Introduction
Conventional reflection seismic technology uses acoustic waves (sound) to image the subsurface.
Conceptually, as shown below, we begin by generating a bang. The sound travels down into the earth, some of it is reflected off buried interfaces, and we record the reflected energy (echoes).
The distance from the surface to buried horizons is measured in time (two-way traveltime, TWT). If we know the velocity of sound in the propagating medium, we can derive true depths.
In practice we need to determine the optimal source of acoustic energy for the situation at hand; there is more than one interface in the subsurface, and we need to repeat the exercise many times in order to generate a seismic profile or volume:
Ship-towed airguns are used at sea.
Dynamite or vibroseis is used on land.
Seismic Waves
The principle of sound propagation, while it can be very complex, is familiar. Consider a pebble dropped in still water. When it hits the water's surface, ripples can be seen propagating away from the center in circular patterns that get progressively larger in diameter (Figure 1). A close look shows that the water particles do not physically travel away from where the pebble was dropped. Instead they displace adjacent particles vertically and then return to their original positions. The energy imparted to the water by the pebble's dropping is transmitted along the surface of the water by continuous and progressive displacement of adjacent water particles. A similar process can be visualized in the vertical plane, indicating that wave propagation is a three-dimensional phenomenon (Gadallah and Fisher, 2009).
Figure 5: Steps in generating a seismic wave: a) sources are used to generate pulses; b) the seismic wave propagates through the earth; c) the returned wave is measured by receivers.
The relative sizes of the transmitted and reflected amplitudes depend on the contrast in acoustic
impedances of the rocks on each side of the interface. While it is difficult to precisely relate acoustic
impedance to actual rock properties, usually the harder the rocks the larger the acoustic impedance at
their interface.
The following equation defines the reflection coefficient (RC) in terms of AI for normal incidence of a seismic pulse at an AI boundary:

RC = (Z2 - Z1) / (Z2 + Z1)
The acoustic impedance of a rock is determined by multiplying its density ρ by its P-wave velocity, i.e., ρV. Acoustic impedance is generally designated as Z. Consider a P-wave of amplitude A0 that is normally incident on an interface between two layers having seismic impedances (product of velocity and density) of Z1 and Z2 (see Figure 11). The result is a transmitted ray of amplitude A2 that travels on through the interface in the same direction as the incident ray, and a reflected ray of amplitude A1 that returns to the source along the path of the incident ray.
The reflection coefficient R is the ratio of the amplitude A1 of the reflected ray to the amplitude A0 of the incident ray:

R = A1 / A0 = (Z2 - Z1) / (Z2 + Z1)

The magnitude and polarity of the reflection coefficient depend on the difference between the seismic impedances of layers 1 and 2, Z1 and Z2. Large differences (Z2 - Z1) in seismic impedance result in relatively large reflection coefficients. If the seismic impedance of layer 1 is larger than that of layer 2, the reflection coefficient is negative and the polarity of the reflected wave is reversed. Some typical values of reflection coefficients for near-surface reflectors and for some good subsurface reflectors are shown below:
Figure 11: Normal incidence at a boundary/reflector: incident and reflected energy in layer 1 (velocity V1, density ρ1) and refracted/transmitted energy in layer 2 (V2, ρ2), measured along the normal.
When a P-ray strikes an interface at an angle other than 90°, reflected and transmitted P-rays are generated as in the case of normal incidence. In such cases, however, some of the incident P-wave energy is converted into reflected and transmitted S-waves (see Figure 12). The resulting S-waves, called SV waves, are polarized in the vertical plane. The Zoeppritz equations are a relatively complex set of equations that allow calculation of the amplitudes of the two reflected and the two transmitted waves as functions of the angle of incidence. The equations require the P- and S-wave velocities (VP2, VS2, VP1, and VS1 in Figure 12) plus the densities on both sides of the boundary. The S-waves, called converted rays, contain information that can help identify fractured zones in reservoir rocks, but this book will focus on compressional waves.
Figure 12: Reflection and refraction of an incident P-wave. VP2 > VS2 > VP1 > VS1.
1.3.1 Snell's Law
This relationship was originally developed in the study of optics. It does, however, apply equally well to seismic waves. Its major application is to determine angles of reflection and refraction when seismic waves are incident on layer boundaries at angles other than 90°.
Snell's law of reflection states that the angle at which a ray is reflected is equal to the angle of incidence. Both the angle of incidence and the angle of reflection are measured from the normal to the boundary between two layers having different seismic impedances.
The portion of incident energy that is transmitted through the boundary and into the second layer with a changed direction of propagation is called a refracted ray. The direction of the refracted ray depends upon the ratio of the velocities in the two layers. If the velocity in layer 2 is faster than that in layer 1, the refracted ray is bent toward the horizontal. If the velocity in layer 2 is slower than that in layer 1, the refracted ray is bent toward the vertical (Figure 13).
For the refracted ray, with angle of incidence A and angle of refraction C measured from the normal:

sin A / sin C = V1 / V2

Figure 13: Incident, reflected, and refracted/transmitted rays at a boundary/reflector between layers of velocities V1 and V2, with angles measured from the normal.
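Snell's law can be sketched numerically. In this illustrative example (angle names follow the text; the velocities are invented), the refraction angle and the critical angle are computed for a fast layer 2:

```python
import math

def refraction_angle(a_deg, v1, v2):
    """Snell's law: sin(A)/V1 = sin(C)/V2. Returns the refraction angle C
    in degrees, or None beyond the critical angle (no transmitted ray)."""
    s = math.sin(math.radians(a_deg)) * v2 / v1
    if s > 1.0:
        return None
    return math.degrees(math.asin(s))

def critical_angle(v1, v2):
    """Incidence angle at which the refracted ray grazes the boundary;
    exists only when V2 > V1 (faster lower layer)."""
    if v2 <= v1:
        return None
    return math.degrees(math.asin(v1 / v2))
```

When V2 > V1 the refracted ray bends toward the horizontal (C > A), exactly as described above; past the critical angle there is no transmitted ray at all.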
The majority of seismic sources are designed to provide an energy pulse which propagates as a compressional (P) wave. VSPs, however, usually exhibit other wave types with distinctive event patterns. These need to be recognised, either because they may provide additional useful information concerning the geophysical or geological environment, or because they may degrade the data with which one is trying to work. In the first category are various types of shear (S) or distortional waves; in the second are casing-borne signals and tube waves, which depend on the column of fluid in the borehole. These categories are not absolute: useful information can be gleaned from the observation of tube waves, for example; likewise, much of the shear-wave activity in VSPs is not sufficiently consistent to enable its use in any analytical studies.
P-wave velocity: VP = √((K + 4μ/3) / ρ)
S-wave velocity: VS = √(μ / ρ)
where:
ρ = formation density
μ = rigidity modulus of the medium (shear modulus)
K = bulk modulus of the medium (incompressibility)
For a rock, μ > 0. This implies VP > VS. (For a fluid, μ = 0 and so VS = 0.)
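These velocity formulas can be checked numerically. The moduli and density below are rough, illustrative values, not taken from the text:

```python
import math

def vp(k, mu, rho):
    """P-wave velocity: VP = sqrt((K + 4*mu/3) / rho)."""
    return math.sqrt((k + 4.0 * mu / 3.0) / rho)

def vs(mu, rho):
    """S-wave velocity: VS = sqrt(mu / rho); zero in a fluid (mu = 0)."""
    return math.sqrt(mu / rho)

# Illustrative values: K and mu in Pa, rho in kg/m^3.
k, mu, rho = 37e9, 30e9, 2650.0
```

Because the P-wave expression contains K + 4μ/3 while the S-wave expression contains only μ, VP > VS for any rock, and a fluid (μ = 0) still carries P-waves but no S-waves.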
Generally the S-wave velocity in a formation is 50% to 75% of the P-wave velocity. As a consequence, at the same frequency the wavelength of the S-wave will be shorter than that of the P-wave. This leads to the observation that, for the same recorded bandwidth, a VSP S-wave image is capable of better resolution than the corresponding P-wave image. It is possible, therefore, that a more detailed study of the subsurface could be made using S- rather than P-wave images. As with all such simple statements, the practical aspects are not as simple as one might imagine, and few VSPs contain sufficient mode-converted energy to provide a meaningful image over the majority of the well.
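The resolution argument follows from wavelength = velocity / frequency. A minimal sketch, using an invented VP and an S-wave velocity at 60% of it:

```python
def wavelength(velocity_m_s, frequency_hz):
    """Wavelength = velocity / frequency."""
    return velocity_m_s / frequency_hz

# Illustrative: VS about 60% of VP, same 50 Hz recorded frequency.
vp_, vs_, f = 3000.0, 1800.0, 50.0
lam_p = wavelength(vp_, f)
lam_s = wavelength(vs_, f)
```

At the same frequency the S-wave wavelength is shorter in the same proportion as the velocity, hence the potentially finer vertical resolution noted above.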
Applications of S-waves
The ratio VP/VS is more reliable than the seismic P-wave interval velocities in identification of rock matrix type
(carbonate or sandstone) and fluid saturation in the rock pores. This applies particularly to gas reservoirs.
and rarely persists for more than 100 to 200 ms; the majority of VSPs exhibiting such arrivals are not affected over the area of interest. However, the conditions that permit the transmission of such arrivals may themselves degrade the VSP data. Figure 3.2 illustrates a typical casing arrival; it also indicates that the arrivals are coherent between successive shots at a geophone level and therefore cannot be removed by summation.
Interface waves
Pseudo-Rayleigh waves are reflected conical dispersive waves (Biot, 1952). At low frequencies (< 5 kHz), their phase and group velocities approach the S-wave velocity of the formation, while at high frequencies (> 25 kHz) their propagation velocity becomes asymptotic to the compressional-wave velocity of the fluid. This type of wave is only encountered in fast formations. Stoneley waves are scattered along interfaces; in fast formations, they show group and phase velocities at high frequencies that increase asymptotically towards the propagation velocity in the fluid. In slow formations, these waves are more highly dispersed and are more sensitive to parameters linked to S-wave propagation. At low frequencies, Stoneley waves are analogous to the tube waves observed in VSP surveys.
Fluid waves are guided (or channel) waves, showing very little scattering, which propagate through the fluid located between the tool and the borehole wall (Figure 16).
The comparison between a seismic section (in two-way time) and an acoustic log (interval transit time versus depth) leads to questions about the relations between the two types of data and the possible combination of their corresponding datasets (Figures 17 and 18).
The acoustic log provides an obvious link between geophysics, seismic, and well logging data. Although covering different frequency bands (acoustic logs: on the order of 10 kHz; seismic: ranging from about 10 to 100 Hz), the two techniques are based on the same laws of wave propagation but with different methodologies. Under a certain number of conditions, the measurements collected at these different frequencies can be compared and used to improve knowledge of reservoir characteristics. The acoustic log has a very different vertical and lateral range of investigation compared with seismic surveys (surface or borehole) (Figure 19).
The depth-to-time conversion of well log data is carried out using the acoustic velocities of formations obtained from acoustic logs (sonic logs), but this method alone is insufficient to provide an effective comparison between seismic and logging survey datasets. There are discrepancies between the acoustic velocities derived from logging and from seismic surveys; it is thus necessary to perform a sonic calibration for the depth-time conversion (Figure 20).
The sonic log calibration involves establishing a time-depth relation consistent with the seismic survey while yielding the same vertical resolution as that provided by the sonic log. In other words, the sonic log measurements are recalculated to be compatible with variations in fluid and lithological composition, so that the integrated travel time between two depth readings can be matched with the corresponding data from well velocity surveys.
A well velocity (or check shot) survey is carried out by measuring the travel times of head waves emitted from a surface shot by means of a geophone or a hydrophone placed at various depths in a well. Check shot surveys are the predecessor of vertical seismic profiles. Vertical seismic profiles (VSPs) may use more sophisticated tools to record the entire seismic wave train generated by a surface source and transmitted downward through the earth filter. A VSP survey is usually recorded at a much higher density of depth points but may not cover the entire wellbore.
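A check shot survey reduces to a table of (depth, one-way time) pairs. A minimal sketch of turning that table into interval velocities, assuming a vertical well and near-vertical raypaths (the depths and pick times below are invented for illustration):

```python
def interval_velocities(depths_m, times_s):
    """Interval velocity between successive receiver stations from
    picked first-arrival one-way times."""
    v = []
    for i in range(1, len(depths_m)):
        dz = depths_m[i] - depths_m[i - 1]
        dt = times_s[i] - times_s[i - 1]
        v.append(dz / dt)
    return v

# Hypothetical survey: receiver depths (m) and picked one-way times (s).
depths = [500.0, 1000.0, 1500.0]
times = [0.25, 0.45, 0.60]
v_int = interval_velocities(depths, times)
```

Each velocity applies to the interval between two stations; this is the time-depth relation used later to calibrate the sonic log.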
Once the calibration has been carried out and a corrected time-depth relation established, it
is possible to compare the well (logs) with surface seismic data. One technique employed
for this purpose is the creation of a synthetic seismogram using density and acoustic velocity
logs. Bulk density and acoustic velocity logs are used to create an acoustic impedance log.
After depth-time conversion, the reflection coefficients derived from the acoustic
impedance log are then convolved with an appropriate wavelet to produce the synthetic
seismic section (often referred to as a seismogram).
Seismic data obtained from the vertical seismic profile (VSP) with or without source offset,
are processed to provide seismograms at seismic frequencies that are directly comparable
with synthetic sections and surface seismic sections. Even though these data have a
poorer vertical resolution compared with well logging and a restricted frequency range,
they can be used to adjust profiles obtained from seismic reflection surveys carried out
at the surface. In addition, borehole seismic surveys can be used for defining appropriate operators for stratigraphic deconvolution and for converting seismic sections to acoustic impedance sections or logs.
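The synthetic-seismogram recipe just described (impedance log, reflection coefficients, convolution with a wavelet) can be sketched in plain Python. The impedance values and the choice of a Ricker wavelet here are illustrative assumptions, not from the text:

```python
import math

def reflection_coefficients(impedances):
    """Series of normal-incidence RCs from an acoustic impedance log."""
    return [(impedances[i + 1] - impedances[i]) / (impedances[i + 1] + impedances[i])
            for i in range(len(impedances) - 1)]

def ricker(peak_freq_hz, dt_s, half_len):
    """Zero-phase Ricker wavelet sampled at dt_s, peak amplitude 1 at t = 0."""
    w = []
    for i in range(-half_len, half_len + 1):
        a = (math.pi * peak_freq_hz * i * dt_s) ** 2
        w.append((1.0 - 2.0 * a) * math.exp(-a))
    return w

def convolve(r, w):
    """Plain discrete convolution of the reflectivity with the wavelet."""
    out = [0.0] * (len(r) + len(w) - 1)
    for i, ri in enumerate(r):
        for j, wj in enumerate(w):
            out[i + j] += ri * wj
    return out

rc = reflection_coefficients([6.0e6, 9.0e6])  # single interface, RC = 0.2
syn = convolve(rc, ricker(25.0, 0.002, 50))   # the synthetic trace
```

With a single interface the synthetic trace is just the wavelet scaled by the reflection coefficient, which is exactly what makes the comparison with real traces near the well interpretable.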
Seismic data are recorded and usually worked with a vertical scale of two-way travel time. To relate well data to seismic data, and vice versa, we have to handle this change in vertical scale units (Figure 23). Thus:
o Well-seismic ties allow well data, measured in units of depth, to be compared to seismic data, measured in units of time.
o This allows us to relate horizon tops identified in a well with specific reflections on the seismic section.
o We use sonic and density well logs to generate a synthetic seismic trace.
o The synthetic trace is compared to the real seismic data collected near the well location.
The well-seismic tie is the bridge we need to go from seismic wiggles to the rocks that produced the wiggles and to our interpretation of the subsurface geology (Figure 22).
o The purpose and required accuracy of a well-seismic tie vary with the stage of our studies.
o If we are doing regional mapping, e.g., mapping a significant erosional unconformity or a flooding surface, then our tie does not need to be very precise (within 1 or 2 seismic cycles, peaks or troughs) and the seismic data quality does not have to be very good.
o In the exploration stage, we would like to tie well data, e.g., the top of a stratigraphic horizon/marker, within a cycle.
o In the exploitation stage (development and production), we need to know not only the seismic event within a cycle, but the shape of the real and modeled seismic traces should also be quite similar.
For this, we need very good seismic data quality.
If we obtain a good character (shape) tie between the real and synthetic traces, then:
(8) Tools display different mechanical characteristics: some may be rigid (machined to avoid wave propagation via the tool body) while others are flexible.
A. Operations
For conventional operations and small-diameter boreholes (< 5 m) generally drilled in the oil industry, sonic logs are run with axially symmetric tools that are centred in liquid-filled wells (mud or water). The presence of gas bubbles in the mud usually leads to a mediocre quality of recording. In large-diameter boreholes, the tool is maintained in an off-center position in order to avoid excessive dispersion of waves in the mud. The logging speed is usually 10-15 meters per minute.
B. Calibration
Strictly speaking, the acoustic log does not require any calibration, since the measurement of time is based on a quartz crystal with a precisely defined oscillation frequency, thus leading to almost no error in the calculated velocity. Several sequences of pulses are used in order to provide a measurement at 6-inch intervals. Even though the time measurement is relatively precise, the first-arrival detection technique can lead to significant errors.
C. Types of tool
1. Monopole tools
The conventional sonic logging tool has an axial symmetry and is equipped with multidirectional receivers. A compressional wave is generated in the fluid by the transmitter, thus giving rise to a compressional wave (P-wave) and a shear wave (S-wave) in the surrounding formation at the critical angles of refraction (see Figure 25).
In a vertical well, this type of tool enables the recording of five modes of wave propagation (see following section, Presentation of acoustic log):
(1) Refracted P-waves;
(2) Refracted S-waves, only in fast formations;
(3) Fluid waves;
(4) Two types of dispersive tube waves, corresponding to the pseudo-Rayleigh and Stoneley waves.
Schlumberger proposes a tool of the same type known as the DSIT (Dipole Sonic Imaging Tool), which has the unique feature of being able to operate in both monopole and dipole modes. Figure 26 presents a comparison between logs obtained by the LSAL (Long Spaced Acoustic Log, also a Mobil monopole tool) and SWAL tools in a seismically slow formation.
Figure 26: Comparison between Long Spaced Acoustic Log (LSAL) and Shear Wave Acoustic Log (SWAL) recordings in unconsolidated Miocene formations (Williams et al., 1984).
(1) Monopole sonic tools:
the BHC sonic and LSS (Long Spacing Sonic) tools of Schlumberger, the Acoustilog of Atlas Wireline, and the Full Wave Sonic tool of Halliburton Logging Services;
a sonde developed by the Société d'Études de Mesures et de Maintenance (SEMM), which is a flexible tool equipped for simultaneous data acquisition on either the two nearest or the two farthest receivers;
Mobil's flexible LSAL tool (Long Spaced Acoustic Logging; Williams et al., 1984), with transmitter-receiver and inter-receiver connections made by cable;
the flexible tool, a product of Elf Aquitaine;
the Array Sonic (SDT-A/C), first proposed by Schlumberger in 1984 (Morris et al., 1984).
(2) S-wave dipole emitter tools:
Mobil's SWAL tool.
(3) Mixed-type tools (operating in both monopole and dipole modes):
Schlumberger's DSIT;
the MAC tool of Atlas Wireline Services.
Figure 21 illustrates the main features of some of these tools (see the manufacturers' logging tool specifications for more detailed information).
The mean slowness of propagation of a wave across a given interval corresponds to the time delay acquired by the wave over this interval. The delay can be calculated by measuring the different arrival times at each receiver (or from different transmitter positions) located in the depth interval of interest, for a common depth of transmitter (or receiver). As a consequence, the slowness of a particular formation may be estimated by measuring the delay in wave propagation, making use of sorted acoustic waveform data. This can be achieved by gathering data derived either from a common transmitter point or from a common receiver point. In this way, the average of the two delays provides a slowness value which is compensated for borehole effects.
and dropped-weight impulsive sources. The use of vibrator sources is widespread despite their being unfavourable for the picking of first arrivals. The reference receiver is either a hydrophone for offshore operations, or a geophone or mud-pit hydrophone for onshore operations.
Figure 33: Implementation of seismic sources (Schlumberger).
Check shot
The check shot is the most basic type of borehole seismic survey. In the case of a vertical well, it involves positioning the source at a single fixed zero-offset position, usually relatively close to the well bore. The borehole receiver tool is positioned at various stations throughout the well, e.g. every 500 ft, at formation tops, and at sonic log points. At least 3 shots are fired at each station. We are only interested in the first-arrival time (the time it takes for the signal from the source to arrive at the downhole receiver); this time enables us to obtain time/depth information used for correlation with the surface seismic and for the calibration of acoustic-type logging tools (Figure 34).
into the formation under good downhole conditions, and can be subject to cycle skipping and washed-out zones.
It involves the recording of first arrivals along a well that penetrates fairly deep target layers. The objective is to estimate the velocity and thickness of subsurface layers. It is performed using receivers that are placed in the borehole at known depths and a source that is placed near the well head. It is similar to a downhole survey but uses a deeper well and larger receiver spacing.
Borehole seismic data are the most effective correlation bridge available between the well bore and the
surface seismic data. Borehole seismic data that include the check shot velocity survey and the VSP can
measure large volumes of rock -- and will indicate the presence of velocity anomalies, which may be
totally missed by the sonic log. These velocity anomalies must be measured and dealt with accurately
when mapping the velocity fields that are so critical to an effective surface-seismic time to drill-depth
conversion process (figure.36,37).
The RMS velocity down to the bottom of the Nth layer is calculated from the interval velocities Vi and the one-way interval times ti as:

Vrms = √( Σ (Vi² · ti) / Σ ti ),  with the sums taken over i = 1, ..., N
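The RMS velocity of the layer stack can be sketched directly, using invented interval velocities and one-way interval times:

```python
import math

def rms_velocity(v_int, t_int):
    """Vrms to the base of layer N: sqrt(sum(Vi^2 * ti) / sum(ti))."""
    num = sum(v * v * t for v, t in zip(v_int, t_int))
    return math.sqrt(num / sum(t_int))
```

Because of the squaring, Vrms is always at least the time-weighted mean velocity, so RMS velocities from check shots grow faster with depth than simple average velocities.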
A check shot velocity survey measures a much larger cylindrical volume of rock compared to the relative
soda straw volume measured by the sonic log. The check shot survey and the more precise vertical
seismic profile (VSP) should at least be considered in the logging program of every exploration and key
development well being planned to minimize or eliminate the ever-present and costly danger of surface
seismic time to depth conversion error (figure.38)
Figure 37: Check shot raw data and how to convert it to interval and RMS velocity.
When the sonic log is used to produce a synthetic seismogram for surface seismic correlation purposes, one hopes that a check shot velocity survey is available from the same well to calibrate the sonic log. Calibration and correction of the sonic log are often needed because the production of a synthetic seismogram from a sonic log is a hybridization and transform process that can introduce seismic travel-time error if cycle skipping, tool sticking, and washed-out-zone effects are present in the sonic log. The sonic log is also of very limited use in identifying interval velocity inversions -- or any abrupt rock density and velocity change an appreciable distance from the well (Figure 40).
The check shot velocity survey can be used to produce a corrected sonic log, allowing sonic log pitfalls
to be alleviated by enabling a data processing analyst to correlate effectively and more accurately through
questionable zones that were traversed by the sonic logging tool downhole.
A check-shot-corrected sonic log also makes it easier to determine interval velocities between key
formations, since familiar formation boundaries can be readily recognized from the sonic log. If density
log information is also available, a more accurate synthetic seismogram log integration usually results.
Figure 40: Time-depth chart calculation methods (calibrated with sonic, not calibrated with sonic, sonic used to calculate time-depth).
comparison between the information gained from the drilling results and the surface seismic data. This leads to a much more accurate assessment of the results of the well with regard to the original seismic interpretation upon which the decision to drill was based. In addition, migration algorithms were being developed along with sophisticated ray-trace modelling routines, and these allowed for a more rigorous treatment of VSP data recorded using more "exotic" source and receiver configurations. These techniques were able to exploit the intrinsic properties of the VSP survey (i.e. low noise, and proximity of the receiver to the target horizon) to provide small-scale seismic images in the vicinity of the well. It was recognised fairly early in the history of the VSP that there were two interrelated properties of the VSP that could be of particular interest to the geophysicist.
As the geophone is placed deeper in the borehole its earliest recorded reflection comes from ever deeper
reflecting boundaries. It is not, however, only the shallow primary events that are lost; the whole of the
long reverberant tail which follows each of the shallow reflections is also lost. In other words the deeper
the reflections are recorded within the earth, the less they are obscured by multiple events. Also the
nature of the recorded wavefields in the VSP provides a very effective means of removing
(deconvolving) the multiple tail associated with each reflector. In general this provides a multiple-free
data set which can be used to "calibrate" or aid in re-processing the surface-derived data. It is also possible to use data thus processed to predict ahead of the currently drilled location of the well, a technique that has saved many wells from the disastrous consequences of high-pressure blowouts.
Figure 40: Lateral range of investigation (LI) and lateral resolution (LR) in a vertical seismic profile.
VSP operations
The procedure for a VSP operation is identical to that carried out in well velocity surveying. However, the depth sampling interval is set at closer and more regular intervals. The maximum spacing between successive depths depends on the minimum velocity (Vmin) of the formation and the maximum frequency (Fmax) that must be recorded, in order to respect the Z sampling theorem (Shannon, 1949) needed to avoid aliasing and ensure high-quality data processing.
The maximum sampling ΔZmax (two samples in one wavelength) is given by the relationship:

ΔZmax = Vmin / (2 Fmax)

For example, if Vmin = 1,500 m/s and Fmax = 150 Hz, then ΔZmax = 5 m.
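The depth-sampling rule works out as follows (reproducing the worked example from the text):

```python
def max_depth_sampling_m(v_min_m_s, f_max_hz):
    """Anti-alias depth sampling: delta-Z_max = Vmin / (2 * Fmax),
    i.e. at least two receiver stations per shortest wavelength."""
    return v_min_m_s / (2.0 * f_max_hz)
```

With Vmin = 1,500 m/s and Fmax = 150 Hz this gives 5 m; halving the maximum frequency of interest doubles the allowable station spacing.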
The VSP recording is composed of upgoing and downgoing body waves of the P and/or S type, as well as guided interface modes linked to the well and the well fluid. The guided modes, usually termed tube waves, are dispersive waves of the Stoneley type.
The upgoing body waves are primary or multiple reflected waves. Only the primary reflected waves intersect the first arrivals. The downgoing body waves comprise waves emitted by the source, forming the direct arrivals, and all the multiple events created by seismic markers situated above the well geophone.
Figure 41 shows a compressional-wave VSP with a complex set of tube waves labelled TW1 to TW6. A simple way (although not always practical) of attenuating the tube waves created by surface noise generated by the seismic source consists of lowering the column of mud in the well and/or deviating the source in relation to the wellhead.
Figure 41: Example of VSP recording (TW: tube waves). TW1, TW3 and TW6 are downgoing; TW2, TW4 and TW5 are upgoing. (Courtesy of Gaz de France and IFP)
VSP principles
Identification and origin of primary reflections
Primary events are easily identified in a VSP data set by the simple fact that they intersect the time-depth
curve. If the primary has been generated by a horizontal reflector, then it should appear as a horizontal
event across the VSP display aligned at two-way time. Such an event may lose continuity into the body
of the data due to multiple interference and possible worsening signal to noise ratio. This is caused by
the longer propagation paths associated with shallower geophone plants. Deconvolution will generally
improve the continuity of these events by removing the multiple activity. If the horizon is dipping, then the event will appear with moveout into the body of the data. The identification of the event as a primary and the determination of its lithologic origin are still secure, however, provided it cuts the time-depth curve.
Figure 42 illustrates VSP upgoing events at two-way time generated by horizontal or near-horizontal reflectors at the borehole. It is clear that the upgoing primary events intersect the time-depth curve marked on the display; the depth at which each reflection originates is confirmed by the calibrated velocity log to the left of the figure. As displayed, the data has not yet been deconvolved and therefore contains multiple activity; additionally, the wavelet contains source components. It is therefore unclear what the exact relationship is between the seismic events and the lithological changes. This is alleviated by deconvolution, and figure 5.8 shows the same comparison using the deconvolved upgoing data. The resident wavelet of this data is now zero phase, with the centre of the wavelet for any event occurring at the exact time of that event.
At SEG normal polarity, as displayed, an upgoing compressional arrival appears as a white trough, the centre of which identifies the position of the reflector. Once the phase of the wavelet is known, a precise correlation can be determined, as the VSP and calibrated velocity log are tied to the same time measurements. The lithologic significance of any primary event can, therefore, be assessed.
One may extend the identification of multiple events away from the VSP to encompass the surface
seismic record. The approach is essentially the same, although the choice of comparison trace is slightly
different. If the downwave is extremely stable, it is unlikely that any difference would be noticeable
between the comparisons of any trace with the surface record. If, on the other hand, there is a variation
in the downgoing wavefield with depth, one must ensure that the downwave used is compatible with the
data being analysed. To this end, the downwave trace for the level with the same two-way first arrival
time as the time of the reflector on the seismic record, should be used. Again the polarity of the
downwave first arrival must match that of the reflection event examined; any positive correlation
between the tail of the downwave and events beneath the primary then implies residual multiple activity
in the surface record.
Figure 34 shows a typical display provided by VSP contractors for correlation with surface data. The direct correlations would be achieved using the transposed or corridor displays; the full deconvolved
panel would then be used for any detailed interpretation. If the VSP contains considerably more high
frequency energy than the surface seismic, then a second version of this display would be produced, with
the processing optimised for the data as recorded. This would preserve the information present in the
field data and provide the geophysicist with a high resolution data set upon which to base his
interpretation. The seismic bandwidth data would then simply form a control data set to guide the initial
assessment of the well results. A point to note here, is that the transposed display, although preserving
evidence of dip, cannot be used for dip calculations.
A practical example of VSP interpretation is presented as a work study with these course notes.
be adaptable to arms of different lengths so the tool can lock in well diameters ranging from 10 to 50 cm
(about 4 to 20 inches). Slim-hole tools with diameters of 5 cm (2 inches) or less are required if the
receiver must pass through production tubing. Slim-hole tools are also used to record data in the bottom
portion of ultra-deep wells that have been drilled with small bits.
VSP data should be acquired as quickly as possible, so that we can minimize rig standby cost. The
locking arm must therefore extend and retract quickly. Most tool designs similar to those in the two
previous graphics enable the receiver to lock fully in 30 seconds or less. In order to expedite recording,
some VSP engineers do not retract the arm as the tool is raised from level to level. Instead, they simply
decrease the locking force and maintain a modest arm-to-formation contact as the tool is raised. Then
they quickly relock the tool at the next depth level.
Since VSP tools need regular maintenance, it is essential that both service companies and clients insist
on tool designs that are quick and easy to maintain. Too often, components have to be replaced by tired
field crews, after midnight, in adverse weather and poor light. Neither the paying client nor the service
company engineer wants an unnecessary loss of time due to required tool maintenance.
Because of the hostile nature of the environment encountered by the downhole detector, the receiver
employed for such surveys requires significantly greater design expertise and manufacturing capability
than standard surface geophones (Figure 45). The downhole tool has to be able to survive up to - and in
some cases beyond - 20,000 p.s.i. or 1,380 bar (i.e. equivalent to the weight of about six cars on every square inch) of pressure and
maximum downhole temperatures of more than 200°C (392°F). It is not possible to deploy a standard
geophone spread cable downhole to provide the exact analogy of a surface seismic survey; the geophones
must possess a mechanism for anchoring to the borehole wall and this will usually be powered and
controlled from the surface. The cable used to deploy any downhole tool must be strong enough to
support its own weight plus the weight of the downhole equipment; it must also be able to survive a
considerable amount of over-pull should equipment become stuck in the hole. The downhole
environment also often contains corrosive substances and all tools and cables must be able to resist attack
from these corrosive materials. Standard cables used by the majority of logging contractors consist of an
outer armoured section with seven conductor strands within the body of the cable. High tech cables
with fibre-optic conductors possessing much wider transmission bandwidth are available, but have yet
to establish themselves in the industry. This is mainly due to their significantly higher cost and the
increased difficulty of operation using these cables but also because of the unavailability of the required
high temperature electronics and the difficulties in terminating the cable at the tools.
Figure 45: Remotely deployed lightweight VSP geophone package. The design has a lightweight
geophone module that can be extended from, and retracted into, a mother tool.
Geophones
Velocity-sensitive geophones are the most common transducer elements used to record VSP data.
Transducer Geometry
We usually position these geophones in a VSP tool in one of two geometrical arrangements. The
geophones may be linearly oriented along the axis of the tool, or they may be arranged in an orthogonal,
three-component configuration.
In vertical boreholes, an axial alignment of geophones measures only the vertical component of particle
motion. If this type of geophone is used in a deviated hole, the geophone elements should be gimbal-mounted, which uses gravity to orient them vertically.
Although we can record valuable VSP data with vertically oriented geophones, several important
applications of vertical seismic profiling require the X, Y, and Z components of particle motion to be
measured. For example, we can estimate the reflection and transmission properties of shear waves and the
energy mode conversions that occur at impedance boundaries, and determine fracture orientation, only
by recording subsurface particle motion in three mutually orthogonal directions. To record three-component particle motion, the downhole VSP receiver must contain non-vertical geophone elements.
Figure 46 (Gimbal-mounted geophones) illustrates an XYZ, three-component, gimbal-mounted package
of geophones.
Figure 46: Geophones in three mutually orthogonal directions. This particular design ensures that all
three geophones remain in an orthogonal XYZ configuration, even if the tool is rotated as much as 90
degrees from vertical.
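The software counterpart of gimbal mounting is a simple coordinate rotation of the recorded components. The sketch below is a minimal two-component illustration (not any contractor's actual processing code), assuming the tool's tilt from vertical is known from a deviation survey:

```python
import numpy as np

def rotate_to_vertical(x, z, tilt_deg):
    """Rotate the in-line (x) and axial (z) components of a tilted tool
    back into true horizontal/vertical axes. tilt_deg is the tool's
    deviation from vertical (assumed known from a deviation survey)."""
    t = np.radians(tilt_deg)
    # Standard 2-D rotation: the axial sensor records z*cos(t) of the
    # vertical motion plus x*sin(t) of the horizontal motion, so we
    # invert that mixing.
    vert = z * np.cos(t) + x * np.sin(t)
    horiz = -z * np.sin(t) + x * np.cos(t)
    return horiz, vert

# A purely vertical ground motion recorded on a tool tilted 30 degrees
# appears on both sensors; the rotation recovers it on the vertical axis.
true_vertical = np.array([0.0, 1.0, 0.0, -1.0])
z_rec = true_vertical * np.cos(np.radians(30))   # axial component
x_rec = true_vertical * np.sin(np.radians(30))   # in-line component
h, v = rotate_to_vertical(x_rec, z_rec, 30)
```

A gimbal does this mechanically and continuously; the software rotation only works if the tilt angle is measured independently.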
Accelerometers
Accelerometers measure the acceleration of the mechanical ground motion rather than the velocity.
Accelerometers have a number of advantages. They can be designed to measure signals down to nearly
zero frequency (geophones typically measure down to around 5 Hz, with a natural frequency of 10-13 Hz).
The accelerometers can also be tilted without changing the response. Thus they do not need gimballed
mountings to maintain X, Y and Z components horizontal and vertical.
The disadvantage is that they are usually less sensitive than velocity phones, producing less electrical
energy for the same mechanical energy. However, mass loaded velocity phones, which have the effect
of differentiating the velocity function to measure acceleration, are used in some VSP tools and do not
have to be gimballed.
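The relationship between the two sensor types can be illustrated numerically: differentiating a geophone-style velocity trace reproduces an accelerometer-style response, which is the effect mass loading achieves mechanically. A minimal sketch with an arbitrary 50 Hz test signal:

```python
import numpy as np

# For harmonic ground motion u(t) = A*sin(2*pi*f*t), a geophone outputs
# velocity (du/dt) while an accelerometer outputs acceleration (d2u/dt2).
# Differentiating the velocity trace therefore reproduces the
# accelerometer response, with amplitude scaled up by 2*pi*f.
f = 50.0                  # Hz, within the seismic band (illustrative)
dt = 1.0e-4               # fine sampling so the numerical derivative is accurate
t = np.arange(0.0, 0.1, dt)
velocity = np.cos(2 * np.pi * f * t)          # geophone-style output
accel_numeric = np.gradient(velocity, dt)     # differentiate the velocity
accel_analytic = -2 * np.pi * f * np.sin(2 * np.pi * f * t)
```

The 2*pi*f scaling also explains the sensitivity trade-off noted above: for the same ground motion, the acceleration signal grows with frequency, so accelerometers favour the high end of the band while geophones deliver relatively more output at low frequencies.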
The VSI (Versatile Seismic Imager)
The Versatile Seismic Imager (VSI) represents the latest available technology in the acquisition of
seismic waves generated by a seismic source. The VSI employs three-axis single sensor seismic
hardware and software and advanced telemetry for efficient transmission of the data from the borehole
to the surface. It consists of three parts (a power cartridge, a control cartridge, and the measurement
sonde) and takes its measurements by means of a three-axis gimbaled accelerometer package in the
sonde.
Each sensor package delivers high-fidelity wavefields by using three-axis omnitilt geophone
accelerometers, which are acoustically isolated from the main body of the tool and provide a flat response
from 3 to 200 Hz. The configuration of the tool (number of sensor packages, sensor spacing, and type of
connection, stiff or flexible) varies to provide the maximum versatility of the array. A maximum of 20
shuttles can be used, though only one has been used so far in ODP and IODP. The number of sensors,
intersensor spacing, connection type, and tool diameter are field configurable to ensure maximum
versatility (Figure 34).
Integrated processing for interpretation of borehole and surface seismic data
Images for reservoir definition
Images ahead of the bit
Three-dimensional (3D) VSPs
Pore pressure predictions
Planning for well placement
Simultaneous surface and borehole seismic recording for high-definition images
Shear wave processing and analysis
Figure 49: Comparison between the conventional seismic imager (CSI) and the Versatile Seismic Imager (VSI).
Figure 50: Amplitude spectra of vertical and horizontal geophones from the remotely deployed VSP module.
Figure 52: 3-component VSP data are recorded with a gyro and gamma-ray tool.
An isolation subassembly is used to ensure that these added tools do not introduce undesirable resonances
into the geophone response. These logging tools provide a log curve which we can depth-correlate with
log curves recorded before or after the VSP data acquisition. A gamma-ray tool is usually more desirable
than a resistivity tool since we can record a gamma-ray response even in cased boreholes, whereas
resistivity tools provide no correlation inside casing. By correlating equivalent log curves (e.g., a gamma-ray curve recorded in an open-hole logging run and one recorded during a VSP data-acquisition run), we
can often correlate VSP depths and logging depths to within 1 or 2 feet.
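The depth-correlation step can be sketched as a cross-correlation of the two gamma-ray curves, with the lag of the correlation peak giving the depth shift. The curve names and the 0.5 ft sample interval below are illustrative assumptions, not values from the text:

```python
import numpy as np

def depth_shift(curve_a, curve_b, d_depth):
    """Estimate the depth shift between two log curves sampled at a
    common interval d_depth (e.g. 0.5 ft) from the lag of the peak of
    their cross-correlation. A positive result means curve_b is deeper."""
    a = curve_a - curve_a.mean()
    b = curve_b - curve_b.mean()
    xc = np.correlate(b, a, mode="full")
    lag = np.argmax(xc) - (len(a) - 1)
    return lag * d_depth

# Synthetic gamma-ray curve and a copy shifted by 4 samples (2 ft),
# standing in for the open-hole and VSP-run recordings.
rng = np.random.default_rng(0)
openhole = rng.normal(size=200)
vsp_run = np.roll(openhole, 4)
shift = depth_shift(openhole, vsp_run, d_depth=0.5)
```

On real logs the match is never exact, but the correlation peak is usually sharp enough to resolve the shift to within a sample or two, consistent with the 1-2 ft figure quoted above.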
When running a depth-correlation logging tool in combination with a VSP receiver, we must be careful
to seismically isolate the geophone package from the logging tool so that the added mass and length do
not create geophone resonances within the seismic signal band. Some service companies have developed
isolation subs for this purpose to effectively attenuate frequencies above 1 or 2 Hz. Figure 53 shows a
remotely deployed VSP geophone whose deployment mechanism serves as an isolation
sub for the mother tool of the VSP receiver.
Figure 58: Array tool with twenty three-component receivers, comparing the length of the array to the
height of the Eiffel Tower.
This gives a clear indication not only of the extraordinary length of the array, but also of the length of
borehole that can be interrogated by a single shot. The array shown has a variable receiver spacing
between 2.5 m and 20 m, and three-component accelerometers with 24-bit ADCs downhole. Each receiver
is independently locked using an electro-mechanically driven arm. The detectors are isolated from the
receiver body after locking as described previously in the CSI single receiver tool.
What then are the factors to be considered in the design of an ideal downhole tool?
A. Short, lightweight and rigid
B. Mechanically coupled
C. 3-component geophones
Modular construction
Some of these considerations are, to a certain extent, mutually incompatible. For example, the
requirement for rigidity conflicts with the necessity of providing as small a diameter tool as possible;
a short, fat tool is inherently more rigid than a long, thin tool. Choice of the appropriate geophone sensor
type is also of paramount importance as each of the three sensing axes must possess the same
performance characteristics. There are specific types of geophone for horizontal and vertical sensing,
which differ in the method of suspension of the sensing coil, optimising the suspension for the designated
sensing axis.
5. For an accurate definition of the complete seismic wavefield, it is essential to record data using
some form of three dimensional sensor, usually 3 orthogonally mounted geophones forming a
cartesian co-ordinate system. There are other options available but these either require specific
sensing elements or software manipulation to retrieve the seismic wavefield.
6. The use of gimbal-mounted geophone sensors is preferred for deviated wells; this allows
automatic optimisation of the sensing geometry for the three sensors if the tool is not vertical, i.e.
the vertical sensing element is always vertical and the horizontal elements are always horizontal.
In vertical holes, the gimbal mounting is not necessary and indeed can be detrimental to the
treatment of the data. It is desirable therefore to be able to use fixed sensing elements in these
instances. For maximum flexibility the gimballed sensors could be lockable, either on the surface
or remotely using some downhole mechanism. This is not easy to achieve in practical terms and
most contractors will supply either fixed or gimballed sensor cartridges according to the survey
parameters.
7. In many wells there can be a significant amount of energy transmitted down the mud column in
the form of a tube wave. There are still discussions as to the exact manner in which this energy
propagates down the well, but most authorities agree that the disturbance associated with this
energy travels in the annulus of the interface between the borehole fluids and walls. If one
reduces the cross-section of the tool and applies a taper to its ends, one can minimise the
interaction between this energy and the tool. The problem is that the resultant tool parameters
are then incompatible with the requirements.
8. High locking forces serve three interrelated purposes. Firstly and most basically, if the locking
force is significantly greater than the tool's weight, the tool will tend to stay in position. Secondly,
a high locking force will mean that the device is more securely held to the formation, such that
as far as the seismic pulse is concerned, it is acoustically indistinguishable from the formation.
Thirdly, if securely locked to the borehole walls, the tool will be less susceptible to fluid borne
arrivals.
9. It would be useful to check the effectiveness of the lock to the formation prior to recording data;
this has great potential benefits for data quality. In practical terms, devices to perform such tests
are difficult to calibrate and generally increase the complexity of the tool. Most contractors use
a variation on the theme of the pulse test: a geophone sensor is excited by an electrical pulse,
and the subsequent decay of signal from the elements indicates the quality of lock (this can be
extended to the use of a swept frequency signal applied to a geophone element and some tools
possess additional elements specifically for this purpose). Although an item to be included on a
wish list, in practical terms the use of such a system is generally superfluous and time
consuming. This is particularly true if the tool is fitted with a calibrated locking force sensor, a
consistent locking force providing in most cases a better indication of data quality and
consistency.
10. It is still a fact that digital electronics components have a lower temperature tolerance than
analogue. To be able to run in extremely hot environments, therefore, the receiver tool should
possess some means of overcoming temperature limitations. This can be accomplished by using
69
insulating techniques (such as flasking) and/or Peltier semiconductor heat pumps or more
simply by designing the system to work with either digital or analogue electronics. In analogue
mode there will inevitably be a price to pay in the volume of data recorded per shot due to the
limitation of two three component sensors on seven conductor wireline cable.
11. It is essential for the tools to be easy to maintain in the field in order to maximise the survey
efficiency and minimise any possible downtime. The easiest way of achieving this is to design
the tool to be as modular as possible. Any failures would then be rectified using spare modules
rather than attempting a physical repair of a defective unit under field conditions.
12. The following figures illustrate a couple of the possible solutions that are currently available.
Figure 60 illustrates the concept of the Baker Atlas SST 500 downhole receiver array; this is a fully
digital system capable of deploying up to 12 four-component (three geophones plus hydrophone)
satellite receivers. Figure 61 shows a photograph of some elements of the SST array. Figure 62 is
a photograph of the BSR-2 analogue tool from Baker Atlas. This is capable of deployment in
vertical or deviated holes and can be supplied in variants capable of running continuously at
200°C and 20,000 p.s.i. (1,380 bar).
Winch unit
Recording Equipment
After the downhole receiver the most critical part of the VSP acquisition system is undoubtedly
the recording equipment itself. Unlike the downhole receiver, however, the aspect of performance
that is most important is not the physical capability of the system (current electronics have capabilities
in excess of what is required for high fidelity recording of the seismic data) but more the quality control
facilities built into the associated computer software. Audio CD technology, for example, requires a sample
rate of just over 44 kHz; at a rough approximation this equates to a data transfer rate of
approximately 1.4 Mbit/s (1.4 million bits per second), whereas the maximum data transfer rate available from
current downhole systems is 512 kbit/s. Continuing the comparison, audio technology requires a
bandwidth between 0 and 20 kHz, while seismic signals occupy approximately 0 to 200 Hz. It is clear,
therefore, that the electronics needed for adequate recording of seismic data from downhole surveys are
readily available at the surface!
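The arithmetic behind this comparison is simple. The seismic parameters below (1 ms sampling, 24-bit samples, four channels per receiver level) are illustrative assumptions rather than any tool's specification:

```python
# Rough data-rate comparison between audio CD recording and one
# downhole seismic receiver level. Seismic parameters are illustrative
# assumptions: 1 ms sampling (1 kHz), 24-bit samples, 4 channels.
cd_rate = 44_100 * 16 * 2        # sample rate * bits * stereo channels
seismic_rate = 1_000 * 24 * 4    # sample rate * bits * channels

print(cd_rate)       # 1411200 bit/s -- about 1.4 Mbit/s
print(seismic_rate)  # 96000 bit/s -- well inside a 512 kbit/s telemetry link
```

Even several receiver levels recorded simultaneously stay below the quoted 512 kbit/s downhole telemetry limit, which is the point of the comparison: surface electronics are not the bottleneck.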
The limiting factor for seismic recording thereby reduces to what the contractor can achieve when
assessing the quality of the signals received by the system. Borehole seismic recording systems should
therefore be capable of at least the following functions:
These options will provide information for the acquisition engineer to make a valid assessment of the
data quality for an offshore survey or for onshore surveys using impulsive sources. It only allows,
however, the assessment of the data to be based on the signal quality of the direct propagating
downgoing wavefield. In the majority of instances this may be sufficient although it does not allow
for an easy assessment of the data contained in the upgoing wavefield. An indication of the arrival
quality for this can be obtained if the data are plotted as a VSP sectional plot i.e. the traces displayed
adjacent to each other in a manner similar to surface seismic trace displays. To further enhance the
quality control of the survey, the system software should be capable of processing the data such that at
least a first approximation of the upgoing wavefield can be generated. The use of vibrators as a source
for onshore data acquisition requires the additional ability to assess the signal after correlation with the
pilot sweep. The capabilities of the recording system must therefore be increased over the basic
requirements noted above by at least the following:
Provision of real-time full precision correlation for Vibrator source data
Provision of seismic plotting capabilities
Provision of in-field processed results for quick evaluation of survey data
In many ways the QC processing performed in the field is essential for the true assessment of data
quality - particularly with regard to the upgoing wavefield. It has additional benefits for the oil
company in that fast preliminary results will be available within an extremely short time frame; this
allows decisions to be made at the well site based on the results of the VSP.
Each VSP contractor possesses some variation on this theme of in-field processing. As this document
is prepared by Baker Atlas, examples of that contractor's in-field VSProwess QC capabilities are
shown in figures 63 to 69.
The VSProwess system hardware consists of a mixture of off-the-shelf items coupled to a proprietary
acquisition controller and associated software. The recording system itself - i.e. the computer and
media support - is based around two standard Intel processor based PC computers. These run
standard multi-tasking operating systems and are networked together with all disk and peripheral
devices shared between the two machines. One of the computers is dedicated to the task of data
acquisition with the other used for any data processing although the roles of each are interchangeable.
The computers are interfaced to an acquisition controller unit which either acts as a digital receiving
station for downhole digital array tools, or as surface located analogue to digital converters for analogue
downhole systems. The acquisition controller acts as an interface between the system control software
and the geophone surface control panel; this enables complete control of all aspects of the survey
acquisition and geophone deployment from a single device (the acquisition PC). The system illustrates
what can be achieved with modern equipment and by taking advantage of proven existing technology.
Furthermore with the adoption of standard PC technology, there is an almost guaranteed upgrade path
into the future should the need arise.
Figure 63: Display of a record from a 6-element, four-component downhole receiver
Figure 64: Display of vertical-component geophone traces from a 6-receiver array
Figure 66: Display of raw data for the vertical, horizontal X and horizontal Y channels
Sources
Due to the nature of VSP operations, source requirements are generally different from those
encountered with surface seismic work. Notwithstanding this, any seismic source can be used to record
a VSP survey. Figure 61 lists some of the sources that have been used for VSP surveys. It should be
obvious that some sources are specific to either marine or onshore operations although there are some
that can be used for either. Dynamite, although frowned
upon these days due to its destructive nature, can be used in
both environments, although it will almost always be a last
resort. Recent years have also seen the introduction of
several designs of marine vibrator. In theory such a device
would probably provide the best possible source for marine
VSPs, it being possible to tailor the output to provide any
required energy spectrum in the same manner as for land
vibrators. These sources, however, have problems of
energy output and reliability, are cumbersome and expensive
to deploy, and thus far, except for some experimental well-shoots, have not been generally available.
An alternative shooting method uses a source placed
downhole and records the VSP response either with a surface
spread of detectors (reverse or inverse VSP), detectors in
an adjacent well (cross-hole VSP) or with detectors in the
same hole (co-well surveys). These surveys can have
advantages but suffer from the need to provide a non-destructive downhole source - or alternatively disposable
boreholes!
The requirements of an ideal source can be summarised as in figure 72. Most of the sources in use
in the marine environment can satisfy the majority of the requirements listed in this figure, but
some will require a modification to their mode of deployment to obtain the best results.
A brief discussion of this list is probably in order. The
consequences of the first point are obvious; if the source
lacks energy then seismic impulses will be attenuated to such
an extent that reflections from the subsurface will disappear
into noise. The second point is related to the bandwidth
of the seismic signal. Signal theory indicates that an
impulse of zero time duration contains equal contributions
from all frequencies, i.e. it possesses a white spectrum. It
follows, therefore, that the closer a signal gets to that of an
impulse, the whiter its spectrum and by implication the
narrower the wavelet associated with the energy pulse. The
width of the pulse defines the minimum separation in time
that two reflections can be in order for them to be resolved in
the seismic image. The narrower the signal wavelet the
greater the resolution and it follows, therefore, that the ideal
source should be impulsive.
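This signal-theory argument is easy to demonstrate numerically: a unit impulse has a perfectly flat ("white") amplitude spectrum, while spreading the same area over a longer wavelet concentrates the spectrum at low frequencies. A minimal sketch:

```python
import numpy as np

# A unit impulse versus a 16-sample boxcar of equal area: the impulse
# has a flat amplitude spectrum, the stretched pulse does not.
n = 256
impulse = np.zeros(n)
impulse[0] = 1.0
boxcar = np.zeros(n)
boxcar[:16] = 1.0 / 16.0          # same area, 16 samples long

spec_impulse = np.abs(np.fft.rfft(impulse))   # flat at 1.0 everywhere
spec_boxcar = np.abs(np.fft.rfft(boxcar))     # sinc-shaped, decays with frequency
```

The boxcar's spectrum matches the impulse at zero frequency but falls away toward the high end, which is precisely the loss of resolution described above: a longer pulse means a narrower usable bandwidth and poorer separation of closely spaced reflections.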
The third requirement comes from the fact that single or multi-channel processes can be applied to the
data, i.e. several traces can contribute to the result for the trace being processed. It is obvious that
changes in the data from trace to trace caused by variability in the earth's response will need to be
retained. It is not at all easy, however, to design an algorithm that will preserve these changes and
reject changes brought about by other means. Specifically this means that changes in character caused
by variations in source output will be detrimental to the overall quality of the processed data image and
should be avoided if possible. As was shown in chapter 4, with marine wells (or indeed any wells
where there can be an accurate measure of the source signature) a signature deconvolution procedure
can be applied to data to remove source variations. In many cases however, particularly onshore, it is
not possible to provide a consistent recording of the signature.
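A signature deconvolution of this kind is commonly implemented as a stabilized spectral division. The sketch below is a generic illustration, not the specific procedure from chapter 4; the water-level constant and the synthetic signature are assumed for the example:

```python
import numpy as np

def signature_decon(trace, signature, water_level=0.01):
    """Remove a recorded source signature from a trace by stabilized
    spectral division, replacing it with a band-limited spike."""
    n = len(trace)
    T = np.fft.rfft(trace, n)
    S = np.fft.rfft(signature, n)
    # The water level keeps the division stable where the signature
    # spectrum is weak (an assumed stabilization parameter).
    denom = np.abs(S) ** 2 + water_level * np.max(np.abs(S)) ** 2
    return np.fft.irfft(T * np.conj(S) / denom, n)

# Trace = sparse reflectivity convolved with an oscillatory "signature".
sig = np.array([1.0, -0.6, 0.3, -0.1])
refl = np.zeros(64)
refl[10], refl[30] = 1.0, -0.5
trace = np.convolve(refl, sig)[:64]
out = signature_decon(trace, sig)
```

The deconvolved output recovers the reflectivity spikes at their original positions. The method stands or falls on having a consistent measurement of the signature for every shot, which is exactly why the lack of such a measurement onshore is a problem.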
In recent years there has been a commendable shift in attitude toward safety and environmental aspects
of operations within the oil industry. Safety has always figured highly in the seismic industry, possibly
prompted by the fact that the earliest seismic sources were explosives! It has long been a requirement
of the industry, therefore, that the sources employed must be as safe as possible and those that are
intrinsically dangerous should be used in a safe manner. In some circumstances this can lead to
particular sources being excluded from specific operations if the degree of risk is considered too high
for them to be employed.
Most VSP surveys are small-scale operations when compared to their surface seismic brethren. The
need to be able to easily deploy the source against the background of restricted facilities and space has
led to the design of small, compact airgun arrays. The need to access remote locations, possibly in
jungle or swamp environments, requires that the source dimensions are as small as possible for a given
energy output. This limitation on size severely restricts the maximum power output from VSP sources;
were it not for the fact that the detector is positioned in a quiet environment close to the reflectors, this
would probably result in a restriction in the applicability of VSP operations.
The final two points of fig 61 are linked; a source that cannot be repaired or serviced easily will not be
cost effective. A source that is expensive to operate will not provide sufficient benefits to outweigh its
cost of deployment for VSP surveys, as the volume of data acquired for each operation is generally
small and provides specific information for a given survey configuration.
Offshore sources
As the majority of offshore VSP surveys will be performed using some kind of airgun source, it is worth considering in
some detail the various factors influencing the performance of these devices. There are three main
types of airgun in common use and these can be typified by the conventional system (as manufactured
by Bolt technologies), the Sleeve Airgun and variations on these themes (for example the G and
GI guns manufactured by Sodera use a similar mechanism to the sleeve gun but have additional
deployment options).
The basic method of operation is identical for all these devices, in that a volume of high-pressure
compressed gas is rapidly vented to the water volume surrounding the gun; the method by which this
is accomplished marks the difference between the two main types. Figures 72 and 73 illustrate the
operation of the Bolt (conventional) and Sleeve airguns; the Bolt gun is used here to provide a generic
description of their method of operation.
Referring to figure 72, high-pressure air is fed into the gun via the connection adjacent to the solenoid
valve housing; this builds up the pressure in chamber A, which is connected via the central passage in
the shuttle to the firing chamber B. Once charged, the pressures of the gas in the top and bottom halves
of the gun are equal and the gun is kept sealed by the pressure applied to the top of the piston (the
surface area of the top of the piston in chamber A is greater than that of the bottom of the piston in the
firing chamber B). The gun is fired by venting some of the air in the top chamber to the underside of
the top of the piston via a solenoid valve; this reduces the pressure in the top chamber and increases
the effective surface area of the bottom sides of the piston. The resultant pressure in the lower chamber
now exceeds that in the top, hence the force experienced by the lower surfaces of the piston exceeds
that experienced by the top and the shuttle is pushed upwards. Once the piston is displaced, the
apparent pressure difference between the two chambers increases and the motion of the piston
becomes self-perpetuating - the shuttle moving quickly upwards venting the stored gas in the firing
chamber. The gas vents to the surroundings via four diametrically opposed ports in the body of the
gun forming an expanding bubble of gas in the water in which the gun is placed.
bubble overcomes the inertia of the water, at this point the bubble has reached its minimum volume
(and its second maximum pressure) after which the process repeats.
Operating pressure
The part of the waveform that is of most interest to the seismic data processor is the initial pulse. Ideally
this will be of short duration and contain all of the available energy input to the earth -- the majority of
effort expended on the design of individual airguns is directed toward the realisation of this ideal. How
then is the waveform affected by the operating pressure of the gun?
Figure 8.21 illustrates the effect of increasing the pressure within the gun. Not surprisingly, it is clear
that the peak output of the first impulse seen by the near-field monitor is very nearly directly proportional
to the firing pressure used. Interestingly, increasing the gun pressure correspondingly lengthens the
bubble period.
It is intuitively obvious that this should be the case, as the water displaced by the higher energy stored
in the bubble will be accelerated to greater velocities, therefore taking longer to be overcome by the
ambient hydrostatic pressure. A further point to note is that the ratio of the amplitudes of the initial peak
and the bubble oscillation appears to decrease with increasing pressure. This is again an intuitive effect
in that the bubble will be expanding and contracting for longer periods and may therefore lose a greater
percentage of its energy (due to frictional losses etc.) than the less energetic bubbles from lower
pressures.
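These observations are consistent with the classical Rayleigh-Willis scaling for an oscillating bubble, which is invoked here as an assumed approximation rather than quoted from the text: the bubble period grows with the cube root of the stored energy (firing pressure times chamber volume) and decreases with hydrostatic pressure.

```python
# Assumed approximation (Rayleigh-Willis form, not a tool specification):
#     T  ∝  (P_gun * V)**(1/3) / P_hydrostatic**(5/6)
# At a fixed depth the hydrostatic term cancels when comparing two shots.
def bubble_period_ratio(p1, p2, v1=1.0, v2=1.0):
    """Ratio of bubble periods for two firing pressures/chamber volumes
    at the same depth, from the proportionality above."""
    return ((p2 * v2) / (p1 * v1)) ** (1.0 / 3.0)

# Doubling the firing pressure lengthens the bubble period by the cube
# root of two, about 26% -- the behaviour described in the text.
ratio = bubble_period_ratio(2000.0, 4000.0)
```

The same cube-root dependence applies to chamber volume, which anticipates the discussion below: more stored energy does lengthen the bubble period, but it does not translate into a cube-root increase in peak pressure.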
It is obvious from these figures that the higher the gun pressure the more closely the signature of the
airgun comes to resemble the ideal impulsive input. It is also obvious that there is a considerable way
to go before the output can be considered as satisfying the requirements of figure 73. There are also
some practical considerations to apply. For example the design of airguns is such that they will only
survive a given maximum operating pressure. If one repeatedly exceeds the manufacturer's stated limits
then the device will eventually fail, either through ruptured seals or through a catastrophic failure of
the gun body - something that would certainly undermine the safety of using the source.
In the diagrams in figures 75 and 76, the first pressure maximum is shown as a downward displacement.
Although this first event corresponds to an increase in pressure (and is labelled as such on the axes) it
has been a matter of historical preference that VSP first arrivals be displayed as downbreaks by the
majority of VSP contractors. It is interesting also that in accordance with the SEG polarity convention,
a display of a pressure sensor in this manner (hydrophones are used for these illustrations) corresponds
to SEG Normal polarity (a pressure increase giving rise to a negative voltage output by the sensor).
Figure 75: pressure
Chamber volume
It has often been stated that larger chamber volumes provide greater energy and in the past it has been
quoted that the peak pressure output is proportional to the cube root of the chamber volume. Whereas
the initial statement may indeed be true in that there is a greater amount of energy stored in a larger
volume of gas at the same pressure, it is quite easy to show by experiment that the latter is not. That the
peak pressure increases with chamber volume can be seen in figure 67; if one measures the amplitudes
in this figure, however, it is obvious that the cube root relationship does not hold. The explanation for
this is really quite easy to see; if the energy associated with the gas was released instantaneously, there
would indeed be a strong relation between volume and peak pressure. When the gun is fired, however,
the gas is not released as quickly as one would possibly expect as it is vented via the ports of the gun
which by their nature have a limited area and in turn restrict the maximum rate at which the gas can be
discharged.
The rate of discharge is what determines the size of the maximum pressure peak associated with a
particular gun. What can be seen on the oscillograms in figure 67 is that increasing the chamber volume
increases the bubble period; this means that the extra energy associated with the additional volume of
compressed gas is being used by the system to feed the bubble. That there is more energy is
undisputed, but this energy is distributed over a longer time frame and dissipated more by frictional
effects than by radiation of acoustic energy. This smearing of the energy over a longer time period
means that guns with larger chamber volumes tend to exhibit a greater proportion of their energy in the bubble oscillation.
Port Area
This is possibly the most important aspect of gun operation; the port area defines how much gas can
pass from the interior of the gun to the surrounding volume of water in a given time. If one restricts the
size of the port, it is intuitively obvious that the rate at which the gas can escape will be reduced; with a
large port area, the rate will be increased.
Hence the first statement that can be made is that the larger the port area, the greater the amount of
energy that can be expended in a given time. It follows therefore that if the rate at which the energy is
released increases, then there will be a greater high frequency content to the energy pulse. The
combination of these two effects tends to shape the pressure pulse such that it becomes narrower (HF)
and higher amplitude (energy). As more gas is released in the earlier part of the signature, the bubble
will tend to expand further due to the inertial effects of the higher energy water displacement, frictional
losses will therefore increase within the bubble. As less gas is available to feed the bubble, this will
tend to reduce the relative amplitude of the oscillatory contribution to the signature and increase the
period of oscillation. This effect, however, is not the only one working on the bubble. The lowered
rate of release for smaller ports means that there is more gas available to feed the bubble, this tends to
mean that the vented gas will tend to expand for longer. If the rate of injection of the gas exactly
balances the rate at which the pressure reduces as the inertial effects are overcome, the bubble will
reach equilibrium pressure prior to collapse and the oscillatory tail of the signature will be removed.
What then are the consequences of these effects? Feeding the bubble has more effect on bubble period than the increase in the inertia of the moving water; overall, then, one sees a reduction in bubble activity for small ports, with an associated lowering of peak energy and output bandwidth. Very large port areas provide a high peak energy with moderate contributions from the bubble and a wide bandwidth. For intermediate port sizes, peak energy and bandwidth will lie between these extremes.
A great number of computer models have been constructed to describe the operation of an airgun. In most cases these have been adequately tested against recorded gun outputs and provide an accurate prediction of gun and array performance without the necessity of measuring the proposed configuration under field conditions. It is convenient here to use one of these models to illustrate the effects of port area on gun output; this example (figure 77) is taken from an article published in Geophysics by Dragoset (1984).
It is clear from the foregoing discussion and the data displayed in figure 77 that, in order to provide the maximum peak pressure output from the gun coupled with the widest bandwidth, the port area should be as large as possible. Indeed, the greatest single limiting factor controlling output from the Bolt-type airguns is that the ports are in fact quite small (this can have advantages, however, when operating in marshy environments, as it restricts the ingress of contaminants into the gun mechanism). Such observations led to the development of the Sleeve Airgun. As noted
previously the basic operation of this device is the same as for the Bolt, but the design effectively
turns the gun inside out. Instead of an internally housed piston and shuttle arrangement, the sleeve
gun has an external sleeve (hence the name!) that slides up exposing a large circumferential port. This
is much larger in area than the Bolt ports and as such is associated with a commensurate increase in
performance with regard to bandwidth and peak pressure output.
Gun Depth
The depth of the gun determines the ambient hydrostatic pressure under which the gun is going to
operate. This has a profound effect on the performance of the gun and its output waveform. The
general effects can be seen in figure 78 and summarised as follows:
the initial pulse increases slightly in amplitude with depth
as depth increases, the bubble period shortens
the initial pulse broadens slightly with depth
deeper guns experience a longer delay between the first pressure maximum and the ghost reflection from the surface
Going from water (ρ₁v₁) to air (ρ₂v₂) means that the expression above tends toward -1, hence almost all energy reaching the surface will be reflected back downwards with a 180° phase shift (i.e.
reverse phase). This produces a dipole effect and inevitably leads to interference between the main
source output and the secondary virtual (ghost) source. The consequence of this is that the
interference leads to notches in the source spectrum, these ghost notches can be large and effectively
define the usable bandwidth of the source. A complication of this phenomenon for VSP processing
concerns the adoption of signature deconvolution procedures using near-field measurements of the
source output. It is readily seen that for a near field monitor positioned (say) 1m from the airgun, the
ghost energy will have travelled from gun to surface and back to the source monitor; directly
propagating energy to the monitor will, however, have travelled only the 1m gun to monitor separation.
As the energy is distributed over a spherically expanding surface, the ghost will have experienced a
relatively high reduction in amplitude when compared to the direct energy and the effect of the ghost
reflection will be reduced. In the far field, however, the propagation paths are of similar length, the
amplitudes of the ghost and direct wavefields being therefore comparable. The result is that the spectra
recorded in the near field will not be affected to the same extent as the far field by the ghost arrivals.
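The amplitude argument can be put into numbers with simple 1/r spherical divergence. The geometry below (a gun at 6 m depth, a near-field monitor 1 m from it, a far-field point 1000 m away, and a perfectly reflecting sea surface) is invented for illustration:

```python
gun_depth = 6.0  # m, assumed for the example

def ghost_to_direct(r):
    """Ghost/direct amplitude ratio for a monitor r metres below the gun,
    assuming amplitude falls off as 1/path-length and |R| = 1 at the
    sea surface."""
    direct_path = r
    ghost_path = 2.0 * gun_depth + r   # gun -> surface -> monitor
    # (1/ghost_path) divided by (1/direct_path):
    return direct_path / ghost_path

print(f"near field (1 m):   ghost is {ghost_to_direct(1.0):.0%} of direct")
print(f"far field (1000 m): ghost is {ghost_to_direct(1000.0):.0%} of direct")
```

At 1 m the ghost arrives at under a tenth of the direct amplitude, while at 1000 m the two are essentially equal, which is why near-field spectra show much weaker ghost notches than far-field spectra.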
In addition, if the monitor is positioned above the source, the path-length differences are not the same between near- and far-field recordings, and it is intuitively obvious that the cancellation effects will occur at different frequencies. It is worth noting that although selected frequencies are cancelled by the ghost effect, there is also a degree of enhancement of the energy at frequencies between the notches.
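Both statements, the near-total reflection at the sea surface and the resulting ghost notches, can be sketched with nominal values for water and air; the 6 m gun depth is an assumed figure for illustration:

```python
def reflection_coeff(rho1, v1, rho2, v2):
    """Normal-incidence R = (rho2*v2 - rho1*v1) / (rho2*v2 + rho1*v1)."""
    z1, z2 = rho1 * v1, rho2 * v2
    return (z2 - z1) / (z2 + z1)

# Water -> air: the impedances differ by several orders of magnitude,
# so R tends toward -1 (reverse phase).
r = reflection_coeff(1000.0, 1500.0, 1.2, 340.0)
print(f"sea-surface reflection coefficient: {r:.4f}")

# With R = -1 the ghost cancels the direct wave whenever the two-way
# vertical path 2*d is a whole number of wavelengths, giving notches at
# f_n = n * c / (2 * d).
def ghost_notches(depth_m, c=1500.0, nmax=4):
    return [n * c / (2.0 * depth_m) for n in range(1, nmax + 1)]

print("notch frequencies for a 6 m gun (Hz):", ghost_notches(6.0))
```

Deepening the gun pushes the first notch down in frequency (125 Hz at 6 m, but only 75 Hz at 10 m), which is how gun depth comes to define the usable bandwidth of the source.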
Onshore Sources
Dynamite
Dynamite is what one might term the traditional seismic source. In early seismic exploration it was
the only source that provided sufficient energy for usable data to be recorded on the instrumentation
of the day. Environmental and safety concerns mean that the use of dynamite has steadily declined
and in general it is only used today when other sources are impractical or there are specific problems
that require extremely high energy inputs to the ground.
Surface seismic operations still make use of explosive sources in environmentally less sensitive areas; VSP operations, however, seldom do. Although in some ways the ideal source (high energy and
impulsive), it generally exhibits too variable a signature on a shot to shot basis - due to variations in
source environment - to allow effective use of the multi-channel operators employed in VSP
processing. It is also rare these days to find a rig operator that relishes the thought of explosives being
detonated alongside the rig.
Vibrators
Vibrators on land provide what is perhaps the ideal source for VSP acquisition from the standpoint of
data quality and flexibility. The features which make the vibrator source so suitable for VSP data are
that it is repeatable and that it can be controlled in such a manner as to be able to tailor the seismic
input to the earth for each specific well location and environment. For example if it is known that a
particular well location is associated with poor penetration at high frequencies, the vibrator sweep can
be modified to input more energy at these frequencies. Conversely - depending on the survey
objectives - the sweep can be designed to completely ignore this part of the seismic bandwidth and
concentrate on optimising the energy within the unaffected pass band. The various methods for controlling vibrator output, and the subsequent scope for an almost infinitely variable input to the earth, make the flexibility of the vibrator source one of its most pleasing attributes (Figure 79).
One drawback of the source is that spreading the energy over a range of frequencies distributed over time means moving away from the ideal impulsive signature. Indeed, to recover a dataset that looks anything like a seismic image, one must first cross-correlate the recorded data with a reference signal derived at the vibrator. This reference sweep serves to define the seismic input to the earth, and the correlated output looks and behaves like data recorded using an impulsive source (provided a sufficiently wide frequency band has been included in the sweep definition). A further benefit of correlation is that it rejects random noise and improves the signal-to-noise ratio (Figure 80).
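The correlation step can be demonstrated with a toy example; the sweep parameters and reflector times below are invented for the sketch:

```python
import numpy as np

# Hypothetical linear sweep: 4 s long, 10-80 Hz, sampled at 1 kHz.
fs = 1000.0
T = 4.0
f0, f1 = 10.0, 80.0
t = np.arange(0.0, T, 1.0 / fs)
sweep = np.sin(2.0 * np.pi * (f0 * t + (f1 - f0) * t ** 2 / (2.0 * T)))

# Toy earth response: reflectors at 0.5 s and 1.2 s. The recorded trace is
# their convolution with the long sweep, so arrivals overlap and the raw
# record is unreadable as an image.
reflectivity = np.zeros(int(2.0 * fs))
reflectivity[int(0.5 * fs)] = 1.0
reflectivity[int(1.2 * fs)] = -0.6
recorded = np.convolve(reflectivity, sweep)

# Cross-correlating with the reference sweep compresses each arrival into
# a short zero-phase (Klauder) wavelet at the reflector time.
correlated = np.correlate(recorded, sweep, mode="full")[len(sweep) - 1:]
print(f"strongest event at {np.argmax(np.abs(correlated)) / fs:.3f} s")
```

Before correlation each reflector is smeared over the full 4 s sweep length; afterwards each collapses to a compact wavelet at its true time, which is why correlated vibrator data behave like impulsive-source data.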
A problem associated with vibrator-derived VSP data is that accurate processing requires a reliable transit time from source to receiver. In concept this is not difficult; in practice, however, because the final shape of the wavelet (bandwidth and phase response) is defined both by the correlation process and by earth filtering, there is a degree of uncertainty in the precise values obtainable from the data. That said, it is a relatively simple matter to provide an additional impulsive reference source to control the results from the vibrator survey.
An additional advantage of vibrators is their low impact on the environment and their inherently higher safety of operation when compared to explosive or even airgun sources. The units are usually mounted on off-road all-terrain buggy vehicles, and there are few locations (assuming the ground conditions support the vehicle) that they cannot reach; they are non-destructive in operation and require no special site preparation to perform satisfactorily. Both P- and S-wave vibrators are available, and this source is one of the (very) few that allows controlled generation of shear waves at or near the surface.
Land Airgun
This device is an attempt to make use of the repeatable impulsive nature of the airgun source signature
in an onshore environment. By its nature, the airgun must be operated in a marine environment. It is
possible to fire such a source in air, however there is little coupling of the source to the earth and the
lack of lubrication means that the seals in the gun rapidly deteriorate. The land airgun attempts
(quite successfully) to re-create marine deployment conditions by providing the gun with its own
portable marine environment. The most commonly used version is the LSS-3 from Bolt Technologies. This consists of a cage supporting a water-filled bell housing in which is positioned a small Bolt airgun of typically 60 cu.in capacity, the complete unit being mounted on a truck. When in use, the bell housing is hydraulically lowered onto the ground such that the rear of the truck is jacked up clear of the surface and supported by the bell. The weight of the truck effectively pre-loads the base (pan) of the LSS-3 onto the ground, providing the reaction mass required to input energy to the subsurface (Figure 81).
On firing the airgun (typically at a pressure of 2000 p.s.i.), the release of the compressed gas into the bell expands an elastomeric diaphragm stretched across its base, driving the pan downward against the
ground. The resultant upward reaction of the main assembly within the cage is damped by the catch
cylinders - essentially large shock-absorbers - and the unit is gently lowered back into its rest position
ready for re-firing. The damping of the system is essential to avoid the device generating secondary
seismic impulses. The spent compressed gases pass through a separator and once vented from the
system the unit is ready to fire again, cycle time is typically of the order of 6 seconds.
The land airgun provides a clean and impulsive source signature without any of the bubble effects that
are seen when airguns are deployed in a truly marine environment. Depending on the size of the units
a range of energy outputs are available. The largest units easily provide sufficient energy to penetrate
to depths of 4500m (15000ft) although these units are rarely - if ever - deployed outside the domestic
U.S.A. Throughout the rest of the world the units fielded, particularly in the European theatre, are
capable of providing penetration to around 2400-2700m (8000-9000 ft). In very favourable conditions penetrations of up to 3700m (12000 ft) can be achieved, although the energy at this depth is low and the data recorded with a single shot is of dubious quality for VSP processing. It is interesting to note,
however, that with the repeatable signature and short cycle time, it is possible to record a great many
shots at a depth station for stacking.
Figure 81: Operation of LSS-3 and examples of units deployed on Unimog and MOL chassis
Airgun in Pit / Buried Airgun
The airgun in pit is perhaps the traditional onshore VSP source after dynamite. Although there are
many factors involved in its deployment that are less than ideal for VSP data acquisition, it has the
overriding advantage that it is easy to use in an efficient manner and provides a generally high energy
input to the earth. As with the land airgun, the deployment attempts to re-create the marine
environment, this time in the most direct fashion, by placing the gun in a water filled hole in
the ground. There are several problems involved with this method. The most obvious is that water in the pit will tend to leak away over the course of the survey unless some method of waterproofing is used. The simplest method employed is to line the pit with a tough impermeable polythene membrane. In some critical well-shoots where other sources were, for whatever reason, not available, it has been known for the oil company to construct concrete or steel lined pits.
It is important that the pit should be constructed in a manner that allows for a good definition of the
source environment, particularly in terms of the way in which this is modified during the survey.
The energy output by the gun will interact with the pit walls. In the case where the walls are not
strengthened i.e. are simply formed from the soil of the hole, the energy will tend to produce a collapse
of the pit. In the worst-case scenario (an unlined pit), the material from the collapsed wall will mix
with the fluid in the pit creating a low velocity medium between the airgun and the base of the pit.
A modification to the seismic transit times of up to 7 or 8 ms one-way has been seen in pits where this collapse has been allowed to proceed; it is obvious, therefore, that the pit conditions have a great influence on the data recorded.
A rule of thumb for pit construction is illustrated in figure 82; the dimensions of a 3 m cube are in fact a compromise in favour of ease of construction. Ideally the gun should be considered as an isolated source and be positioned as far as possible from the walls of the pit, to minimise interaction with what is, after all, a strong acoustic impedance contrast. It is theoretically possible for the walls of the pit to act as secondary sources of predominantly shear-wave energy. This effect can be minimised by placing the source as centrally in the pit as possible, such that the contributions from opposite sides cancel.
If shear energy is required, however, it may be possible in some cases to maximise its input by placing the gun adjacent to one of the walls. If this is attempted, one should be aware that although the airgun is not an explosive source, the peak energies associated with the pressure pulse created are more than enough to damage even concrete walls.
Figure 82: Airgun in pit
A variation on the theme of the pit airgun is the buried airgun. This is an attempt to avoid the major
problem associated with the traditional method i.e. pit collapse and at the same time improve the energy
input to the earth by placing the source beneath (or at least close to) the base of the weathered layer. It
is a well known fact of seismic life that due to its un-consolidated nature, the shallowest part of the
subsurface will generally form an extremely lossy region with respect to seismic energy. If the source
can be positioned beneath this layer, the resultant energy input will be greatly improved. The buried
airgun operation therefore places the source at the bottom of a purpose-drilled hole of slightly wider
diameter than the gun at depths of up to 30 or more metres. The depth of deployment is basically
limited to the available length of airlines etc. Except for the region in which the gun is to be positioned,
the hole is cased with retrievable plastic casing (to prevent collapse during the survey) and filled with water.
Field Technique
Figure 83 illustrates what could be considered the standard operational arrangement for the majority of simple rig-source VSP surveys. It can be extended to more complex survey configurations
by the simple expedient of an additional source controller unit at a remote source location. In all cases
the recording equipment configuration differs little from that shown although each VSP contractor
possesses equipment unique to his operation. Generically there are two variants in the surface
equipment chain and the selection of the appropriate system is primarily determined by the type
(analogue or digital) of downhole tool.
A final problem with this method is that the conventional VSP display of depth against time is now visually unappealing and therefore difficult to interpret.
Use of interpolation methods with the recorded data to provide traces at the requisite intervals is also possible and can be an effective tool when time for processing is short. This technique, although quite common in surface data processing (to add in missing traces), can lead to problems in the precise location of events in the VSP.
There is considerable scope for flexibility in the station interval used for a particular VSP survey. Much work has been done on wavefield separation techniques, with the result that virtually any station interval can be processed without undue difficulty. For example, when shooting a vertical incidence survey in a deviated well, the simplest method is to shoot on a constant measured-depth interval. This means that the resultant image will exhibit a variable trace spacing if the data are plotted against offset. The
survey can also be shot using depth stations calculated to give a constant offset spacing. Alternatively
the constant time increment method can be adopted; each method will process to approximately
the same standard, but each will behave in a slightly different fashion. Processing methodology is
sufficiently advanced for the general approach to be to use the shooting method that provides the
most cost-effective way of acquiring the data for the survey. That the data conforms to the requirements
of the survey objectives is still of
paramount importance; the point here is that there may be several methods of providing the same
dataset. With modern techniques, the only real reason (in general - there are exceptions) to choose one
method over another is that of expense.
A VSP survey will generally be shot from the deepest station to the shallowest. This procedure is
adopted to provide the greatest depth control accuracy due to the wireline cable always being under
tension when moving the geophone between stations. This avoids the possibility of recording erroneous
data if the tool becomes temporarily hung up whilst running into the hole.
At each depth the tool is locked to the formation by activating the locking mechanism. Once this has
fully engaged and the tool is supporting its own weight, slack can be applied to the wireline cable - to
avoid wireline conveyed energy. In recent times the need for slack has been reduced by the
development of tools capable of providing a high locking force to weight ratio (some deployments
from certain semi-submersible rigs in poor weather conditions, may still experience heave which the
compensation mechanisms cannot fully accommodate, in these cases applying slack is essential). The
tool is then allowed to settle. This last point is to ensure that any vibrations induced when pulling the
tool up the hole have time to damp out before the record is taken. Once the engineer is satisfied with
his geophone plant, he will then take several shots at the level in question to ensure good signal to
noise performance on stacking; the tool will then be unlocked and pulled to the next station. If more
than one source location is being used, data will in general be acquired from each source position before the tool is moved; this removes the necessity of multiple runs into the well, thereby saving time and money. It also means that there will be no variability in geophone plant between surveys, which is important if one is comparing results from sources placed on opposite sides of the well, for example.
With remote source configurations, it is desirable that the control and monitoring of all sources be
centralised at the recording location, usually the rig. A remote control device, for example the Baker Atlas RSS unit, is therefore required; this equipment should also act as a synchronising device between guns in an array. Such units generally operate in pairs, with a remote unit acting as a slave to one at the rig.
All information concerning the remote source, including a near- field monitor signal, is gathered by
the slave device and transmitted via a dedicated radio telemetry link to the master unit, where it is
passed to the recording equipment for writing to permanent storage media.
The positioning of a source is of prime importance to the processing of the data generated by its
operation. In the onshore case this is almost a trivial exercise in that all the required locations can be
surveyed in prior to the operation. In the offshore case, although the required positions are known, the
instantaneous position of the source at each shot must be monitored and the location of each individual
firing position noted for future quality control and use by the processing personnel. Some sort of
dynamic positioning equipment is therefore required. There are many systems available to the industry;
traditionally such surveys were recorded using the Artemis range-bearing system to record the
navigation fixes, this system possessing an accuracy of 3m in 6km (0.05% error).
In recent years, although it provides the accuracy required for a VSP survey, the use of this equipment has declined in favour of GPS (for example the Tasman system), which uses the network of positioning satellites in earth orbit. The accuracy of GPS is currently similar to that achievable with Artemis. In practical terms, however, Artemis does have disadvantages when compared to GPS solutions; in particular, it is a line-of-sight system. Any obstacles between the stations on the rig and the source boat will degrade the performance of the system, and the need for base stations on both rig and boat can have its drawbacks. GPS, on the other hand, can if necessary be used with a single receiver station at the source, can be extended to larger offsets than line of sight allows, and is less sensitive to physical obstructions obscuring the satellite signals (assuming a minimum number of satellites is in view).
Casing Arrivals
In general terms, when casing is set in a well only the minimum necessary cementing will be performed
to keep down costs and rig time. Although this approach can save money, it can have major
repercussions with regard to the quality of the data from a VSP survey. If casing is not cemented to
the formation, or the rocks have not filled back into the annulus surrounding the tubing, any energy
arriving at the hole can set the casing resonating. Energy then propagates down the metal of the
casing as described in section 3, swamping the true seismic arrivals. The situation can be
considerably worsened if there is more than a single string of casing in the hole. In this case, other
than anchoring the successive strings together, there is usually no need to provide a full cement bond
for the whole of the well. In practice this means that if there is more than one string in the borehole,
there is a strong likelihood of recording casing arrivals at some point in the well. Even if the tubing is
not excited into ringing, if there is no coupling between successive strings, very little of the seismic
energy is likely to be transmitted to the downhole tool in such a section of the well.
Unfortunately there is very little that can be done in the field to alleviate the problems of casing
arrivals. The arrival wavelets of this energy are coherent between successive shots at a level: for each level there will be a unique depth or set of depths at which the casing is excited, and these will be consistent from shot to shot, giving rise to consistent arrivals at the geophone. Although consistent for a particular depth station, the arrivals are not necessarily consistent between stations, hence it is not possible to suppress them using standard wavefield separation techniques (in the way that tube waves can be attenuated). In general, casing arrivals possess a high-frequency characteristic; this can be exploited to a degree by applying a low-pass filter to the data, thereby reducing their overall contribution to the dataset.
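As a sketch of that last point, a simple frequency-domain low-pass can be applied to a synthetic trace. The sample rate, the 30 Hz "seismic" and 300 Hz "ringing" components, and the 90 Hz cut-off are all invented for the illustration; in practice the cut-off would be chosen from spectral analysis of the contaminated records, and casing energy overlapping the seismic band would survive the filter:

```python
import numpy as np

# Synthetic trace: low-frequency "seismic" energy plus high-frequency
# "casing ring". All parameters here are invented for this illustration.
fs = 1000.0                                    # sample rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
seismic = np.sin(2.0 * np.pi * 30.0 * t)       # wanted signal, 30 Hz
ringing = 0.8 * np.sin(2.0 * np.pi * 300.0 * t)
trace = seismic + ringing

# Brick-wall low-pass: zero every frequency component above the 90 Hz
# cut-off, then transform back to the time domain.
spectrum = np.fft.rfft(trace)
freqs = np.fft.rfftfreq(trace.size, d=1.0 / fs)
spectrum[freqs > 90.0] = 0.0
filtered = np.fft.irfft(spectrum, n=trace.size)

residual = np.abs(filtered - seismic).max()
print(f"max deviation from the clean signal: {residual:.2e}")
```

The filter removes the 300 Hz ringing entirely in this idealised case precisely because the two components do not share any bandwidth, which is the condition real casing arrivals only partially satisfy.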
It must be stated that there is no processing method capable of completely or even adequately removing
casing arrivals. In general therefore when casing arrivals are seen in a VSP, the section over which they
occur is usually of little use.
What then of the field engineer? In general if casing arrivals are present this will be seen as a warning
that the limit of usable data has probably been reached. The engineer will therefore advise the client
representative that there is little point in continuing with the survey. He will, however, have satisfied
himself that the arrivals are not an isolated problem caused by a short section of un-cemented tubing.
In many instances, casing arrivals need only be evident over a short section of the well where local
conditions have dictated that the cement bond has been less than 100% effective. Once the geophone
tool has left this region and the bonding of the casing has improved, the metal-borne arrivals will be
damped out by the material surrounding the casing and the VSP will again yield valid results.
Tube Waves
Tube waves consist of energy travelling along the well-bore within the borehole fluids and it is
generally accepted that the energy propagates along the interface between the borehole fluids and the
borehole walls. Unlike a seismic pulse, where the propagating energy is distributed over a spherically
expanding wave-front, the tube wave energy as its name implies is restricted to the confines of the
borehole. In general, therefore, the only attenuation experienced by the tube wave is that associated with frictional losses in the borehole fluids; as a consequence, these arrivals are extremely persistent when compared with true seismic arrivals.
How are tube waves generated? There are a variety of points in the borehole at which a tube wave can be produced; the traditional source of these arrivals is horizontally travelling energy from the source interacting with the well-bore at or near the surface. This excites the borehole fluid column in a similar manner to that seen in the air column of an organ pipe. Additionally, however, any discontinuity in borehole diameter or conditions (for example the end of a casing string or a significant
change in lithology), can give rise to secondary sources of tube wave activity. In these cases the change
in the borehole acts so as to refract energy from the seismic wavefront into the borehole. In open hole
sections through fractured zones, the fluids in the fractures may be in communication with the borehole
fluids. Seismic energy incident at the fractured zones will instantaneously squeeze the fractures,
generating an effective motion of the fracture fluids into the borehole fluids. This means that there will
be an exchange of energy from the seismic pulse to the borehole fluids and the subsequent generation
of tube waves. Such events, however, can be used to assess the presence and strength of the fractures
- not every aspect of tube wave activity is negative (see figure 3.4).
The effects of tube waves can be alleviated to a degree by placing some obstruction to their propagation
in the borehole (there are a number of projects underway investigating the possibilities), or an
improvement may be achieved by changing the survey configuration to modify the energy propagation
paths. In many cases slightly moving the source location can significantly reduce the amount of tube
wave generated. On land it may be possible to further reduce the amount of tube wave by placing a
physical barrier to horizontally travelling energy, for example digging a trench between the source
location and the wellhead. This works because the energy most likely to induce tube waves during
onshore surveys is ground-roll. Another method that, not surprisingly, has had limited applicability involves lowering the mud column so as to remove the coupling between the well fluids and the ground-roll energy. This method has worked well when attempted, but is expensive in time and materials, requiring as it does that the mud weight be increased to maintain well equilibrium. In offshore wells,
the interaction between the seismic energy and some part of the seabed drilling equipment often
generates tube waves. If this is the case it is unlikely that any reduction in the intensity of the arrivals
can be effected.
If the engineer has not been able to reduce the level of the tube wave energy by applying one of the
possible remedies, then his function becomes to provide an assessment as to whether the arrivals will
preclude the production of a usable upwave dataset. In most cases, provided that the arrivals are consistent, the energy will be removable during processing; indeed, the latest generation of field processing software from some contractors (e.g. the Baker Atlas VSProwess system) can easily provide this process at the well site. If the arrivals cannot be removed at source, then the engineer
must ensure that the survey parameters are kept as constant as possible to ensure that the character of
the data remains consistent. In doing so he will greatly enhance the effectiveness of software based
removal strategies.
Random Noise
This phrase covers a whole range of phenomena, all of which exhibit a lack of periodicity or coherence
between successive traces. Because such noise is randomly distributed throughout the data, its general level can be reduced by the simple technique of stacking successive shots at the same
geophone level. If the noise has a predominance of high frequencies, the data may be further
improved by applying spectral filtering. Although random noise can be attenuated, it is usually almost
impossible to eradicate; it is after all, random in nature and provides no clues to enable the engineer
to determine from where it originates.
The only approach available to the field engineer is to systematically shut down all possible sources of
noise at the rig site. This will eventually lead to a reduction in the noise levels when the offending
process has been terminated; it may be the case, however, that the offending arrivals have no
relationship to the rig-site machinery. For example, if the drilled formations are soft it may be possible
that some of the noise is a function of the formation deforming after the tool has locked. This 'give' will reach a limit after a period of time, hence the effects may be minimised by allowing a longer settling time for the tool at each station before taking the first shot. It may also be the case that the cycle time between shots has to be increased for a similar reason.
Some boreholes are affected by micro-seismicity of the rocks, either due to natural seismic activity in
regions of high tectonic activity, or the release of stress from the environs of the borehole walls. This
latter phenomenon can either be induced by the drilling operation itself, or by naturally stored
energy finding a channel for release via the well. Whatever the reason, the effects cannot be predicted
and hence cannot be prevented, indeed there are techniques available that make use of such arrivals to
monitor stress releases in a reservoir. Another possible cause of this noise is the presence of gas. Even
with the most diligent attention to detail in the sealing of reservoir formations, if gas is present it
will seep into the borehole fluids and the bubbles thus formed will behave as small acoustic sources
each radiating stored energy.
In the final analysis, an engineer faced with high levels of random noise must ensure that sufficient
shots are taken at each level to provide a realistic reduction of the noise by stacking. As
mentioned in chapter 4, the statistical improvement in signal-to-noise performance is proportional to
the square root of the number of shots taken. To achieve an improvement of 4:1 therefore requires a
total of 16 shots; to achieve a 5:1 improvement requires 25 shots, an increase of approximately 56%.
It is clear, therefore, that there is a limit as to the practical reduction of noise levels by stacking. Indeed
if to get any reasonable signal requires shooting more than 16 shots, then it will generally not be
possible to shoot the VSP economically and the engineer will recommend terminating the survey.
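The square-root rule can be turned into a quick shot-count calculation. The following is a sketch; `shots_for_improvement` is our own illustrative helper, not a name from the text:

```python
import math

def shots_for_improvement(ratio):
    """Number of shots needed so that stacking improves S/N by `ratio`,
    using the statistical rule: S/N gain = sqrt(number of shots)."""
    return math.ceil(ratio ** 2)

# A 4:1 improvement needs 16 shots; a 5:1 improvement needs 25,
# an increase of roughly 56% in acquisition effort.
print(shots_for_improvement(4))  # 16
print(shots_for_improvement(5))  # 25
```

The quadratic growth in shot count is exactly why stacking alone has a practical economic limit, as noted above.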
The engineer will make use of the ability of modern acquisition equipment to generate and modify stacks
on the fly to determine the optimum data for recording and inclusion in the survey dataset. This,
together with spectral analyses of the recorded signals, allows the effects of random noise to be
examined, quantified and reduced, such that the final image is not degraded.
Systematic Noise
The most common sources of non-random noise are electrical pick-up from cable leakage (usually
monotonic 50 or 60 Hz oscillations, depending on the frequency of the rig power supply), mechanical or
electrical activities from the rig crew and machinery, or local induced seismic activity (e.g. a nearby
surface seismic survey).
The first cause is the only one that can be easily addressed. Although it may involve considerable
troubleshooting to find the source, simple re-siting of equipment or re-routing of cables solves the
majority of cases. The second cause can be attacked in the same manner as random noise, by
shutting down all non-essential activities on the rig until the noise is removed from the data. This
rather hit-and-miss operation usually results in some noise reduction, but can be very frustrating. The
reduction of noise from nearby seismic operations will depend on how successful the company
representative is in negotiating a time-share agreement with the seismic survey. If this is not possible,
however, it is sometimes enough to time the VSP shots to coincide with the dead time between shots
from the seismic vessel, depending on the shot interval. Varying the time delay between shots so that the
offending wavefield appears at different (pseudo-random) times in the data can also be tried, the noise
reduction then being achieved by stacking.
Variable Source Signature
This and subsequent QC sections are concerned with possible problems associated with the VSP
acquisition systems. As these problems are directly concerned with the equipment used to record a
survey, it is logical and quite correct to assume that the recording engineer can exercise more
control over these aspects of the operation. The first consideration under this umbrella is that of
source output. In the days when the borehole seismic survey was used primarily to provide velocity
information, all that was required of the source was that it provide a high amplitude impulsive first
pressure peak to enable accurate determination of travel times. There was no interest in what followed
this peak and in fact the source could possess all manner of secondary pulses without degrading the
primary data. VSP processing, however, requires that the waveform within the data remains constant
- or nearly so - from level to level in order to adequately separate the up and downward travelling
wavefields.
The use of airguns provides the capability of very stable source signatures, and hence stable seismic
wavefields, assuming the operational parameters of the guns are kept constant throughout the survey.
With a single airgun this is in fact generally easy to accomplish in good conditions: all the engineer
has to monitor is that the pressure to which the gun is charged remains constant and that a constant
hydrostatic head is maintained (i.e. the gun remains at a constant depth). As a further
safeguard he can make periodic checks of the signature by viewing the near field monitor traces, and
analysing their spectra.
On fixed installations (e.g. platforms and jack-up rigs) the source can be deployed at a constant depth
relative to the sea bed without too much difficulty; in a heavy sea state, however, the height of water
above the gun can vary considerably. It is possible to alleviate this effect by using a height switch.
This arrangement uses two switches, attached to the airgun support, positioned a fixed distance apart
and activated by the movement of water past them. These can be adjusted so that the
gun fires only when there is a specific height of water over it. This arrangement may not,
however, provide adequate results when a remote source is deployed from a boat (i.e. not a fixed
installation). In this case, using buoys to support the source will preserve the source character;
unfortunately this means that the absolute height of the gun above the sea bed is then variable, leading to
inaccuracies in transit time determination - this is particularly troublesome in poor weather conditions
with appreciable swell.
A further complication to the source output is the use of arrays. Modern airgun control systems can
be constructed to ensure timing accuracies of 0.1 ms. Unfortunately these systems cannot take into
account mechanical variations of the array between shots; they are of necessity after-the-event
synchronising systems. Fortunately the manufacturing tolerances of sources in use today are of a
very high order: once a delay is set into a firing circuit, there will be little drift. Nevertheless,
automatic monitoring and updating of individual firing delays is desirable, and the engineer will
constantly monitor the output of an array and, if there appears to be any change in output, re-synchronise
the individual array elements. Figure 84 illustrates the effect of poorly synchronised array elements.
The use of an array allows a more impulsive signature and a shorter reverberant tail. With either single
guns or arrays, the near field monitor provides a reference for the source output and provided a record
of every shot is made, can be used to design designature operators to remove the associated source
effects. In a sense this allows one to be less rigorous in the treatment of the source in that after
designature, the VSP data will have been normalised, on a trace by trace basis, to the output of each
shot. The far field effects of earth filtering will then have far greater impact on the recorded VSP than
variations in source output. To be rigorous, however, it is preferable even when using designature
processes, to keep the source variations to a minimum.
mechanism is working at the limits of its operating range. It is interesting to note, however, that the test
methods outlined above will not necessarily perform as expected under these conditions.
Poor locking may also be apparent when running in casing. This occurs for the same reason (i.e. poor
cementing) as gives rise to casing arrivals, although the record may not be affected by casing-borne
energy. The effect, in many cases, may be extremely localised, although in some wells the poor coupling
may extend over the whole casing string (Figure 85).
Whatever the reason for the poor coupling, the field engineer has two alternatives available.
When running in open hole, if the station is particularly important (for example at a formation top),
the first course of action would be to unlock, then re-lock the tool. In many cases the slight
relocation of the arm and tool body is sufficient to ensure a good lock. In some cases the lock may
improve to a degree, and an additional unlock/lock cycle may further improve the coupling. If this
operation does not work, or if the time available for the survey is limited, the second course of
action is simply to move the tool a short distance (e.g. 10 ft or 3 m), to a region that is not affected
by whatever is causing the original data degradation.
be recommended. It is not possible to accurately stack such data (due to differences in arrival times and
ray path variations) without considerable manipulation of the data.
The lateral positioning of a source can be monitored quite easily using navigation systems. As far as
field QC is concerned, the locations of every shot are available to the VSP crew and can be compared
with the target positions. The differences in travel path and hence arrival times of events can tolerate
reasonably wide errors in position without adversely affecting the quality of the VSP image (although
see paragraph above). A more serious source of errors arises out of uncertainties as to the depth of the
source. With a constantly changing gun depth, the absolute seismic transit time will be constantly
varying in sympathy. With vertical ray paths (rig source, vertical well or vertical incidence survey,
deviated well) the time difference between the direct arrivals and the reflections remain constant (figure
86). This means that, ignoring the absolute value of the direct arrival time, if the first arrivals for records
with different gun depths are shifted to occur at the same time, events within the data will stack in phase.
The derivation of the exact transit time can then be taken as an average of the values recorded for each
shot at the geophone station. This in fact involves little error, as the height distribution should be random
in nature.
Figure 86: Effect of source depth on VSP events.
After wave separation, the choice of processing differs according to the acquisition geometry, well
profile and geological structure.
If the source and receiver can be considered as being on the same perpendicular as the reflectors (the
simplest case is that of a vertical well in horizontal strata with the source situated close to the wellhead),
the processing steps are as follows:
(1) Deconvolution of the upgoing by the down going waves. Application of a deconvolution operator
at each geophone position allows the removal of both source signal and downgoing multiples.
(2) Flattening of the deconvolved upgoing waves is carried out at each depth point by the application
of a static correction equal to the first arrival time measured at the geophone position under
consideration. This operation renders the VSP recording comparable in time (two-way travel
time) to a recording obtained by surface seismic reflection.
(3) Obtaining a VSP stacked trace. The deconvolved and flattened upgoing waves are stacked
in a corridor which is placed immediately after the first arrival. This restricted vertical
summation, known as a corridor stack, gives a trace in the seismic frequency bandwidth
without any assumption about the source signature.
After deconvolution, the seismic signal is a zero-phase signal.
The VSP stacked trace is comparable to a synthetic seismic record obtained from sonic
and density log data. A stacked trace obtained in this way may contain upgoing multiples.
To remove the effects of multiples, a narrow stacking corridor is chosen in order to accept
only the reflected signal received just after the first arrival.
Thus, the corridor stack is analogous to a synthetic seismic record - without multiples - in
the frequency band of the received signal. In this way, it is comparable to a surface seismic CDP
stacked trace (Figure 87).
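The flatten-and-corridor-stack sequence might be sketched as follows. The function and its parameters are illustrative assumptions, not from the source, and `np.roll` stands in for a proper zero-padded shift:

```python
import numpy as np

def corridor_stack(upgoing, first_arrival, dt, corridor_ms=100.0):
    """Corridor stack of deconvolved upgoing waves (illustrative sketch).

    upgoing       : (n_traces, n_samples) upgoing wavefield, one trace per depth
    first_arrival : first-break time (s) at each geophone position
    dt            : sample interval (s)
    corridor_ms   : corridor length after the flattened first arrival
    """
    n_traces, n_samples = upgoing.shape
    stacked = np.zeros(n_samples)
    fold = np.zeros(n_samples)
    width = int(corridor_ms * 1e-3 / dt)
    for i in range(n_traces):
        # Flatten: delay the trace by its first-arrival time so events
        # appear at two-way time (np.roll wraps; fine for a sketch).
        shift = int(round(first_arrival[i] / dt))
        flat = np.roll(upgoing[i], shift)
        # On the flattened trace the first arrival sits at 2 * shift.
        start = 2 * shift
        stop = min(start + width, n_samples)
        stacked[start:stop] += flat[start:stop]
        fold[start:stop] += 1
    # Normalise by the fold so amplitudes stay comparable along the trace.
    return np.divide(stacked, fold, out=np.zeros_like(stacked), where=fold > 0)
```

The restriction of each trace's contribution to the corridor just after its first break is what excludes upgoing multiples, as described above.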
Figure 87: Stacking corridor applied to processed VSP data, with resulting stacked trace. (Mari and
Coppens, 1987)
If the source and receiver cannot be considered as being on the same perpendicular as the reflectors
(the simplest case would be a vertical well in a horizontally layered medium where the source is offset
from the wellhead), the data processing is as follows:
(1) Deconvolution of the upgoing waves. The deconvolution operator is unique: since it is extracted
from traces recorded at the bottom of the well, a source signature is not required.
(2) Moveout correction of the deconvolved upgoing waves. These corrections are carried out by
introducing a velocity model derived from the first arrival times, or a model based on ray tracing
techniques designed to take account of the acquisition geometry.
(3) Flattening of deconvolved upgoing waves after move-out correction. This is performed by the
application of static corrections at each geophone position. The static correction corresponds to the first
arrival time reduced to the vertical.
(4) Migration. The method most commonly used in VSP is the one proposed by Wyatt and Wyatt (1982).
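The reduction of the first-arrival time to the vertical in step (3) can be approximated for a straight ray in a homogeneous medium; this is our simplifying assumption, whereas a real implementation would use the velocity model or ray tracing of step (2):

```python
import math

def reduce_to_vertical(t_measured, geophone_depth, source_offset):
    """Straight-ray, constant-velocity approximation: scale the slant
    first-arrival time by depth / slant-distance to estimate the
    vertical travel time (illustrative sketch only)."""
    slant = math.hypot(geophone_depth, source_offset)
    return t_measured * geophone_depth / slant

# For a 30 m offset at 1000 m depth the correction is tiny (~0.045 %),
# consistent with the text's tolerance of modest source-position errors.
t_vertical = reduce_to_vertical(0.500, 1000.0, 30.0)
```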
Figure 88 shows an example of the processing of VSP data recorded between 105 and 1045 m, the source
being slightly offset (30 m) from the wellhead. The spacing between successive geophone positions
varied from 3 to 23 m. The figure presents a VSP after editing, showing both downgoing and upgoing
waves. The upgoing waves were deconvolved by the downgoing waves.
After deconvolution of the upgoing waves by the downgoing waves, the VSP trace obtained in the
stacking corridor is utilized to match the seismic surface survey with the downhole data as illustrated
in Fig. 89. The fit is obtained by using a deconvolution technique applied to surface seismic which
takes account of the source signature while attenuating the effect of multiples
Fig. 88: Example of VSP data processing. Time (s) vs. depth (m) sections, with Δz indicating the
distance between two successive geophone positions. (Gaz de France-IFP document)
Figure 89: Matching of surface seismic survey with VSP data. (After Mari et al., 1987)
The section (Fig. 90a) represents the data obtained on the vertical component Z of the well
geophone. Those obtained on the horizontal component X are oriented in the plane passing through
the source and the well. A strong field of downgoing SV waves can be seen on the horizontal
component at 0.6 s and 1 s for geophone depths of 930 m and 1600 m.
In the example shown, the use of polarization filtering has led to the separation of
the P and SV waves (vibration direction included in the source-well plane) and to VSP
sections in P and SV waves (Fig. 90b).
Fig. 90: a) VSP horizontal (X) and vertical (Z) components of the well geophone recording (Mari and
Coppens, 1989); b) VSP separation of P and SV waves (Mari and Coppens, 1989).
The presence of residual compressional waves may be noted on the SV wave section. The separate
processing of the VSP data in terms of P and SV wavefields makes it possible to obtain migrated VSP
seismic sections (Fig. 91).
The SV wave migrated section is shown with a time scale half of that used for the P wave section (200
ms in S = 100 ms in P), corresponding to a Vp/Vs ratio of 2. Correlation of the two sections cannot
be achieved by eye on the basis of seismic features. Instead, the depths and times of primary reflections
have been identified in the upgoing P and SV wave fields, after their separation and before migration
(Fig. 91a), using the time-depth relations Tp = Fp(Z) and Ts = Fs(Z) obtained by picking first arrivals.
minimised), and secondly in the case of a varying source output, the process can be used to stabilise the
source signature from record to record. If the source has been so designed as to provide as impulsive a
signature as possible - without the presence of a bubble-tail - this process may be considered
superfluous.
If the source output is varying between shots, it is considered essential to apply a signature
deconvolution process. The variation in source character will at best add a degree of random
noise to the data, reducing the inherent resolution, and at worst may add a non-random element which
may confuse and mislead the interpretation of the data.
Figure 94 demonstrates the effect of applying a signature deconvolution technique to data acquired
using a single airgun as the source. In general this process can be applied before or after summing of
records at the same level, the trade-off being in computational overhead. It is generally preferable,
however, to apply the process prior to summation.
Automatic routines can be applied using various methods, for example threshold, cross-correlation and
polynomial fit. The threshold technique works well for break picking provided the data possess a
high S/N; cross-correlation techniques work well for trough picking, even in the presence of noise;
and polynomial fitting works well for either break or trough and does not require that the signal
envelope under analysis be consistent from trace to trace. It should be noted, however, that cross-correlation
can be adversely affected if there is any variation in the signature from record to record - see point (ii)
- or if there is not consistent coupling of the geophone to the formation.
In order to achieve the optimum stack of common depth records it is necessary to remove time variations
within a level before summing. This can be achieved by calculating the average first arrival time of the
traces to be summed and then shifting the individual first arrival times to this average time before
summing.
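A threshold pick and the shift-to-average alignment described above might look like this in outline. The helper names are ours and a production picker would be considerably more robust:

```python
import numpy as np

def threshold_pick(trace, dt, level):
    """Time (s) of the first sample whose absolute amplitude exceeds
    `level`. Note: if nothing exceeds the level, argmax returns 0,
    so a real picker would need an explicit no-pick case."""
    idx = np.argmax(np.abs(trace) > level)
    return idx * dt

def align_to_average(traces, picks, dt):
    """Shift each trace so its first break lands on the average pick
    time before summing (np.roll wraps; fine for a sketch)."""
    mean_pick = np.mean(picks)
    out = np.empty_like(traces)
    for i, tr in enumerate(traces):
        shift = int(round((mean_pick - picks[i]) / dt))
        out[i] = np.roll(tr, shift)
    return out
```

Aligning to the mean pick rather than to an arbitrary reference keeps the average transit time of the level unchanged, which matters when the stacked first-break time is later used for velocity work.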
(iv) Summing
This process has been alluded to in preceding paragraphs and is usually the final stage in data
preparation prior to actual signal processing. In this process all available records after editing (as above)
are summed for each common source-receiver configuration. This effectively
reduces any random noise evident in the data. It is important for some applications, however, that
the absolute amplitude of the data be preserved, and amplitude corrections depending on the number
of records summed have to be made. Figure 79 demonstrates the application of this technique to a
particularly noisy set of records. Statistically the improvement in S/N for such data is defined by the
square root of the number of records in the stack, so for 4 records in the stack there will be a 2:1
improvement in S/N. Figure 95 shows the improvement in signal-to-noise ratio by summing.
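An amplitude-preserving stack simply divides the sum by the fold. The demonstration below is a sketch with synthetic data, illustrating both the fold correction and the approximate square-root noise reduction:

```python
import numpy as np

def stack_records(records):
    """Sum records acquired at the same level and divide by the fold,
    so absolute amplitudes are preserved."""
    records = np.asarray(records)
    return records.sum(axis=0) / len(records)

# Demonstration: 4 noisy copies of the same spike; stacking roughly
# doubles the signal-to-noise ratio (sqrt(4) = 2).
rng = np.random.default_rng(0)
signal = np.zeros(200); signal[50] = 1.0
shots = [signal + 0.2 * rng.standard_normal(200) for _ in range(4)]
stacked = stack_records(shots)
```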
(vi) Filtering
The choice of any band-limiting or notch filter is best made by inspection of F-K transformed data, i.e.
data after the application of a Fourier transform operator (transformed from time-depth to
frequency-wavenumber space). Any noise bands are immediately visible on data thus processed and, in
addition, it is usually quite an easy task to identify the usable bandwidth of the propagating up- and
downgoing wavefields. Figure 98 shows the raw VSP data from figure 80 after transformation; one can
clearly estimate the highest usable frequency by observing the limits of the coherent section of the
respective wavefields.
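The time-depth to frequency-wavenumber transform is a 2-D Fourier transform. A sketch of how a VSP panel might be transformed for such an inspection (the axis conventions are our assumption):

```python
import numpy as np

def to_fk(data, dt, dz):
    """2-D Fourier transform of a VSP panel (n_depths, n_samples) into
    frequency-wavenumber space. Returns the amplitude spectrum together
    with the frequency and wavenumber axes, zero-centred via fftshift."""
    fk = np.fft.fftshift(np.fft.fft2(data))
    freqs = np.fft.fftshift(np.fft.fftfreq(data.shape[1], d=dt))
    wavenumbers = np.fft.fftshift(np.fft.fftfreq(data.shape[0], d=dz))
    return np.abs(fk), freqs, wavenumbers
```

On such a display, dipping up- and downgoing events map to radial bands of opposite wavenumber sign, which is what makes the usable bandwidth easy to read off.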
The filtering process should be utilised at various stages of the processing route, and there are three main
stages where such processing is a necessity. The first filter is applied at the initial inspection stage
to set the primary bandwidth of the data. The second is applied after deconvolution (removal
of multiples) to cater for any spectral anomalies and noise introduced by the deconvolution operation.
Finally, the data should be filtered.
(ii) The downgoing wavefield contains information concerning the multiple activity present in
the upgoing wavefield. It follows, therefore, that although the "target" wavefield consists of the
primary upgoing events, a necessary step in the isolation of these data is the identification of the
downgoing wavetrains from which to design the deconvolution operators
There are various methods for wavefield separation but whatever the method adopted it will be
subject to the effects of spatial aliasing. For VSP data there exists a fundamental alias frequency
which is defined by the travel time between adjacent geophone stations. Other bands of aliased
energy occur at the harmonics (multiples) of this frequency and at zero frequency. A common
misconception is that the resolution of VSP data after wavefield separation is limited by this
fundamental alias frequency. This in fact is not so, the band of frequencies affected in each alias
band is relatively small and it can be shown that the small distortions caused by spatial aliasing
are quite acceptable. The use of non-linear operators in processing (such as the median) can,
under certain circumstances, lead to aliasing effects that are almost negligible. This aside, there
are occasions when the aliasing effects can degrade the VSP performance, for example P- and
S-wave partitioning using 3-component data - of which more later.
Synthetic data is used for the purposes of this discussion as it is essential to know the exact form
of the wavefields before the separation process is applied so that the quality of the wavefields
after separation may be assessed.
The use of just two reflected events is quite justifiable when linear operators are employed (i.e.
if the effect of wavefield separation on a number of reflections is exactly the same as the addition
of the responses of the individual reflections after wavefield separation). For non-linear processes
such as the median or coherency type operators, the situation is not quite so straightforward.
Figure 99 illustrates the subsurface model employed throughout this discussion.
The data from a standard rig source VSP survey form a two dimensional wavefield for which
the dimensions are depth and time. Both are discretely sampled but with different consequences.
(a)
For time sampling, anti-alias filters are included in the recording system to attenuate
energy above the Nyquist frequency (i.e. the frequency above which alias effects occur, equal
to half the sampling frequency). If this energy were not suppressed it would "fold into"
the useful seismic frequency band and cause distortion. Correct use of the recording instrumentation will
therefore eliminate temporal aliasing.
(b)
It is not possible, in the VSP sense, to provide a continuous depth record, and additionally it
is not possible to "filter" depth. It is therefore not possible to remove depth contributions that
may lead to aliasing effects, and as a consequence spatial aliasing will be introduced
whenever multichannel processes (again, for example, the median operator) are applied to the
data.
Broadly speaking the downgoing and upgoing data form two distinct wavefields. Under
spatial aliasing, these are indistinguishable at certain narrow frequency bands and these are
referred to as aliased or folded frequencies.
The fundamental alias frequency can be deduced from the relative dips of the wavefields in the
time-depth domain (T-X space). If the relative dip is T seconds per trace, then the fundamental
alias frequency of an event is 1/T Hz; further aliasing occurs at harmonics (multiples) of this
frequency, i.e. 2/T, 3/T Hz etc. Bands of such energy can be identified on figures 48 and 85.
The alias bands (apart from zero frequency) can be increased to beyond the useable bandwidth
of the VSP data by simply reducing T i.e. by decreasing the separation of the geophone
stations. Although close spacing is a sufficient condition for effective wavefield separation,
it is not a necessary one, this should become apparent from the following examples.
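For a rig-source geometry with near-vertical rays, the relative dip between up- and downgoing events is roughly twice the one-way moveout per trace (the downgoing arrival gets later with depth while the upgoing gets earlier). Under that constant-velocity assumption the fundamental alias frequency is a one-line calculation:

```python
def fundamental_alias_frequency(geophone_spacing, velocity):
    """Relative dip between up- and downgoing events is T = 2*dz/v
    seconds per trace (constant-velocity assumption); aliasing first
    folds in at 1/T Hz."""
    T = 2.0 * geophone_spacing / velocity
    return 1.0 / T

# 15 m spacing in a 3000 m/s medium aliases first at 100 Hz;
# halving the spacing pushes the first alias band to 200 Hz.
```

This makes the trade-off in the text concrete: decreasing the geophone separation raises the alias bands beyond the usable bandwidth, but as noted above, close spacing is sufficient rather than necessary.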
intersection occurs at zero frequency, the next at 25 Hz with subsequent intersections at higher
frequencies. This first intersection (above zero Hz) defines the fundamental alias frequency of
the steeply dipping upgoing events seen in panel (a). The shallower dipping event has its own
set of alias frequencies the first of which occurs at about 60 Hz.
This discussion implies that the preferred method of suppressing the downgoing wavefield in
F-K space is a narrow wavenumber reject filter whose band limits just encompass the
downgoing energy. As with the fan filter, wavenumber filtering also rejects segments of the
upgoing energy; although these segments are very much smaller, the upwave spectrum is
still left with small "bites" taken out of it, and these can easily be seen in panel (c) of
this figure.
The transformation of the wavefield back to the T-X domain provides the estimated upgoing
wavefield. A cursory inspection of the data suggests that the resolution has been retained and
that there has been no significant distortion of the VSP wavelet.
smoothing operator across the traces (both of these approaches introduce side lobes to the
wavelet and in the case of the wavenumber filter these can be reduced by applying a cosine
taper to the pass-band).
Median filter
An alternative approach in T-X space would be the use of a running average across the trace,
the results would be similar to the box-car wavenumber filter. A disadvantage of the average
(mean) function is a tendency to "smear" energy between traces; a more effective separation
can be achieved by the use of the median operator.
For the estimation of the downwave the median behaves in a similar manner to the running
average because the downgoing wavefield tends to be spatially invariant (this property
providing near enough identical samples to work on between traces). The strength of the
operator, however, lies in its treatment of the upgoing wavefield. For a randomly distributed
reflectivity sequence the overall effect of the median is similar to the wavenumber filter in that
bands of aliased energy are included in the downwave estimate. For an isolated reflection
event, the median effectively "ignores" the upwave in its estimation of the downwave; see
figure 102 (right) for an explanation of the operation of the median.
Returning to the synthetic model, the median operator is applied horizontally across the data
aligned at zero time to provide the downwave estimate. This estimate is then subtracted from
the original data leaving the upgoing events with the high frequency response preserved. The
action of this filter on the synthetic data is illustrated in figure 102(left).
If there is a genuine change in the character of the wavefield being enhanced, another important
property of the median is that it will preserve the change. Any running average or F-K filtering
will tend to "smooth over" the transition and hence will produce a distorted image over that
section of the data. It should be noted that the so-called upgoing wavefield derived in this way
actually contains all wavefields except the downgoing, and requires enhancement to isolate the
upgoing wavefield (see 4.4).
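In outline, the median separation described above aligns the traces on the first break, takes a median across traces to estimate the downwave, and subtracts. The following is a sketch under the assumption of integer-sample shifts, with `np.roll` standing in for a proper padded shift:

```python
import numpy as np

def median_separate(vsp, first_breaks, dt):
    """Median wavefield separation (illustrative sketch).

    vsp          : (n_traces, n_samples) raw VSP panel
    first_breaks : first-arrival time (s) per trace
    dt           : sample interval (s)
    Returns (downgoing_estimate, residual) where the residual contains
    the upgoing plus all other non-downgoing energy.
    """
    n_traces, n_samples = vsp.shape
    shifts = np.round(np.asarray(first_breaks) / dt).astype(int)
    aligned = np.empty_like(vsp)
    for i in range(n_traces):
        aligned[i] = np.roll(vsp[i], -shifts[i])   # first break -> time zero
    # After alignment the downwave is near spatially invariant, so a
    # median across traces estimates it while ignoring isolated upgoing events.
    down_aligned = np.median(aligned, axis=0)
    down = np.empty_like(vsp)
    for i in range(n_traces):
        down[i] = np.roll(down_aligned, shifts[i])  # shift back per trace
    return down, vsp - down
```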
Figure 102: The operation of the median (right); wavefield separation using median filters (left).
filter has introduced low-amplitude events into the precursor and tail of the wavelet. The median
filtered trace shows no evidence of wavelet distortion of the upgoing energy and, for these
isolated events, has clearly done a better job than the wavenumber filter. As hinted above, this is
not always the case: if there were a significant number of upgoing events, the distortions for both
the wavenumber and median filters would be similar, and the only strong reason for using one
technique over the other would be the saving of computational time for the median operator.
As with all such statements the situation in the real world is not as straightforward as this and
other considerations must be borne in mind. For example, the median operator is not a linear
process and the application of a spectral filter prior to the median operator will not necessarily
produce the same image as applying the spectral filter after the median operator.
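The non-linearity is easy to demonstrate: the median of a sum is not the sum of the medians, which is exactly why the order of median and spectral filtering matters. A minimal illustration:

```python
import numpy as np

a = np.array([0.0, 1.0, 5.0])
b = np.array([4.0, 0.0, 0.0])

# A linear operator L satisfies L(a + b) = L(a) + L(b); the median does not:
lhs = np.median(a + b)             # median([4, 1, 5]) = 4
rhs = np.median(a) + np.median(b)  # 1 + 0 = 1
```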
The proven conventional wavefield separation policy can be summarised as:
(i) Shift the traces to zero time, then either
(b)
Estimate the downgoing energy with a narrow accept wavenumber filter. Estimate the
upgoing energy using the same narrow-band wavenumber filter in its reject mode (to suppress
the downgoing energy).
Figure 103: Comparison between resulting wavelets from F-K and median wavefield separation.
compared to the downgoing wave at the same time. It follows, therefore that the major
contribution to the form of any upgoing primary reflection event will be from the downgoing
wavefield and to a first approximation the upgoing wavefield at any point in the subsurface will
consist of a scaled version of the downgoing wavefield measured at the same point. The
downgoing wavefield, therefore, contains a description of the reverberant systems that are
appearing in the upgoing wavefield and can be used in the design of deconvolution (multiple
suppression) operators.
The downgoing wavefield will consist of two components: the reverberant system, which will be
minimum phase, and a source-generated component, which may not be. Two approaches can be taken
in the design of the deconvolution operators. The first approach involves the design of a gapped
operator which will "collapse" the reverberant tail, followed by the separate application of a
wave-shaping operator to produce the desired final wavelet.
The alternative approach is to design a single operator containing both the multiple and
source-generated components, separately assessing the minimum and maximum phase portions of the
signal. The results are similar to the two-stage operation described above. The deconvolution
operators designed from the downgoing wavefield are applied on a trace-by-trace basis to the
equivalent upgoing wavefield. Figure 111 illustrates the application of this type of
deconvolution.
In general the use of the downwave derived operators as described here, produces a more
effective deconvolution. The use of longer operator lengths gives a more complete removal of
the longer period reverberants and the ability to use long operator lengths of the same order as
the design window comes from the deterministic nature of this type of deconvolution. As the
downwave can be precisely measured, providing the reverberant tail of the upgoing wavefield
mirrors that of the down, the deconvolution is also precise.
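In sketch form, a downwave-derived operator can be implemented as stabilised spectral division of the upgoing trace by the downgoing trace recorded at the same geophone. The water-level stabilisation used here is our assumption, one of several possible operator designs, not necessarily the one the text describes:

```python
import numpy as np

def downwave_deconvolve(up, down, water_level=0.01):
    """Deconvolve an upgoing trace by the downgoing trace recorded at
    the same geophone, via water-level-stabilised spectral division
    (deterministic: the operator comes from the measured downwave)."""
    U, D = np.fft.rfft(up), np.fft.rfft(down)
    power = np.abs(D) ** 2
    stab = water_level * power.max()   # small water level for stability
    return np.fft.irfft(U * np.conj(D) / (power + stab), n=len(up))
```

Because the whole downwave trace serves as the operator design, the effective operator length equals the design window, which is the point made above about the removal of long-period reverberants.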
As an aside: Conventional deconvolution techniques which use statistical methods to derive the
operators cannot make use of long operators. These techniques rely on the assumption that the
reflectivity sequence of the earth is essentially random and the autocorrelation functions of the
upwaves are inspected for any non-random events within a specified design window. These nonrandom effects are then attributed to multiple activity and the operator designed accordingly; the
operator lengths are therefore of necessity shorter than the design windows. The basic
assumption of the random aspect of the reflection sequence is not rigorous and can lead to errors
in operator estimation; the deterministic downwave deconvolution process makes no such
assumptions.
As described above, the downgoing wavefield at any point in the borehole can be used to
deconvolve the upgoing wavefield at the same point. This process will, however, only
deconvolve reverberants that have been generated by at least one primary above the point at
which the downwave is being observed. The application of a single operator will therefore not
deconvolve any interbed multiples that have been generated by at least two primaries below the
point of observation, indeed it will not remove any upward travelling reverberants. If one
remembers that the amplitude of the primary upgoing wavefield is smaller than the downgoing
by one reflection coefficient, it should be obvious that the contributions from upgoing
enhance the upgoing, albeit with important differences. In most VSP data the reverberant events
in the downwave are near-consistent, at least in the early part of the data, from trace to trace.
This is a function of the majority of the downgoing arrivals being generated in the near surface,
where the lithology is often close to horizontal. As one progresses into the earth structural effects
can become more pronounced. In the presence of structure the upgoing events will show moveout
in time from trace to trace due to migration effects and these effects can become significant at
depth. Any processing that attempts to enhance the events in the upgoing wavefield must
therefore be capable of preserving and indeed clarifying any events thus affected. In particular,
the length of any spatial operator should be short enough to preserve any changes in moveout or
terminations of the events within the body of the VSP, but long enough to ensure that local
variations in the events do not prejudice the overall continuity of the data.
Figure 001 shows the results after the application of a median-based point spatial filter designed to recognise only horizontal or near-horizontal events. Much of the lithology at this well is
horizontal and therefore this form of enhancement has been reasonably successful in rejecting
noise and enhancing the desired arrivals. Similar results can be obtained for data exhibiting
dipping arrivals by defining the median operator to accept arrivals that display a particular
moveout within the data. This will be more effective in enhancing dipping events but will suffer
from similar limitations as the horizontal operator, in that noise arrivals may still be enhanced if
they occur at alignments close to that of the filter limits.
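A horizontal median operator of the kind just described can be sketched as a running median across traces at each time sample: flat events survive the median while isolated noise bursts are rejected. The 5-trace operator length and the synthetic panel are assumed values for illustration.

```python
import numpy as np

def horizontal_median_enhance(panel, n_traces=5):
    """Enhance flat (zero-moveout) events in a VSP panel
    (time samples x traces) with a running median across traces.
    A sketch of the horizontal median operator; n_traces is an
    assumed, odd operator length."""
    n_t, n_x = panel.shape
    half = n_traces // 2
    out = np.zeros_like(panel)
    for j in range(n_x):
        lo, hi = max(0, j - half), min(n_x, j + half + 1)
        out[:, j] = np.median(panel[:, lo:hi], axis=1)
    return out

# A flat event survives the median; a single-trace burst is rejected.
panel = np.zeros((100, 11))
panel[50, :] = 1.0      # horizontal event on every trace
panel[20, 3] = 5.0      # noise burst on one trace
clean = horizontal_median_enhance(panel)
```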
Figure 001 shows the same data as figure 001, but in this case an alternative enhancement filter has been
applied. The filter is still based on the median operator, still has the same number of sample
points but is now applied in two passes. In the first pass a range of slope values are examined
and the median value for each slope alignment calculated. The slope with the largest median
value within the scan range is then stored along with the value. The second pass examines the
stored values; if the slope lies within a predetermined range of slopes, then the stored value is
output as the value for that sample. If the stored values lie outside this accept range, then the
sample is zeroed. This filter has the characteristic of enhancing any upgoing energy in the
specified window (as these events will usually possess a reasonable amplitude), but ignoring any
coherent noise arrivals, statistically more of which exist outside of the expected upwave
alignment than along it.
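The two-pass slope-scanning procedure described above can be sketched as follows. Pass one scans a range of slopes at each sample and keeps the slope giving the largest-magnitude median; pass two outputs the stored value only if that slope lies in the accept range. The slope ranges, operator length and synthetic panel are illustrative assumptions.

```python
import numpy as np

def slope_scan_median(panel, scan_slopes, accept_slopes, n_traces=5):
    """Two-pass slope-scanning median filter (sketch).
    Pass 1: for each sample, find the slope (samples/trace) whose
    alignment gives the largest-magnitude median. Pass 2: keep the
    value only if that slope is in the accept set, else zero it."""
    n_t, n_x = panel.shape
    half = n_traces // 2
    out = np.zeros_like(panel)
    for j in range(n_x):
        ks = [k for k in range(j - half, j + half + 1) if 0 <= k < n_x]
        for t in range(n_t):
            best_val, best_slope = 0.0, None
            for s in scan_slopes:
                tt = [t + s * (k - j) for k in ks]
                if min(tt) < 0 or max(tt) >= n_t:
                    continue                     # alignment off the panel
                m = np.median([panel[ti, k] for ti, k in zip(tt, ks)])
                if abs(m) > abs(best_val):
                    best_val, best_slope = m, s
            if best_slope in accept_slopes:
                out[t, j] = best_val
    return out

# Flat event (slope 0) is kept; a dipping event (slope 2) is rejected
# when only slope 0 lies in the accept range.
panel = np.zeros((60, 7))
panel[30, :] = 1.0
for k in range(7):
    panel[10 + 2 * k, k] = 1.0
clean = slope_scan_median(panel, scan_slopes=[-2, -1, 0, 1, 2],
                          accept_slopes={0})
```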
F-K techniques can also be used to enhance the upgoing data. Fan and wavenumber filters are
the most popular types in use by the industry and any enhance/reject window can be applied to
the data, as such these filters can be very flexible in use. It is more difficult, however, to define
exactly the portion of F-K space to which one wants to apply the process and in practice the
specification of slope when using the median operator is far simpler. Other operators such as
mean or semblance can be applied to the data, the appropriate choice depending very much on
the individual data sets although the industry as a whole tends to favour either median or F-K
methods.
Figure 001: Deconvolved upgoing wavefield after application of horizontal median operator
Figure 001: Deconvolved upgoing wavefield after application of noise rejection filter enhancing
dips of up to 1ms/trace
(i) Mechanical/electrical
(ii) Physical (e.g. tube wave and casing arrivals)
In category (i), the main generating mechanisms are functions of the electrical and mechanical
properties of the recording equipment, and category (ii) arrivals are generally due to the
configuration of the survey.
(i) Mechanical/electrical
The electrical noise generated by the recording equipment with present day technology is of a
very low order and this is illustrated by the fact that most of today's seismic recorders have
dynamic ranges in excess of 120 dB. There are forms of electrical interference other than
component noise, the most prevalent of which is mains pick-up. This is where the signal leads
from the downhole tool or monitor transducers are for some reason inadequately shielded from
stray magnetic and electrical fields originating from the rig's power supply system. This noise is
usually present as 50 or 60 Hz sine wave interference depending on supply type. Problems of
this nature can usually be isolated in the field but in some instances no remedy can be found (a
point to note here is that if all the signal path uses digital information transfer, the likelihood of
this type of noise is much reduced). There are two methods of removing this noise. The first is to apply a very narrow band-reject (notch) filter to the data if the noise is monochromatic, although this should be avoided if at all possible as it distorts the VSP wavelet. The second is simply to stack shots at a level where the noise appears in anti-phase, the subsequent cancellation effectively removing the offending "arrivals".
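The notch-filter option can be sketched with standard signal-processing tools. The 50 Hz mains frequency, the sample rate and the Q value below are assumed for illustration and, as noted above, such a filter inevitably distorts the wavelet near the notch frequency.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

# Sketch: remove 50 Hz mains pick-up from a trace with a narrow,
# zero-phase notch filter (assumed parameters throughout).
fs = 1000.0                                   # assumed sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 15 * t)           # stand-in for seismic signal
mains = 0.5 * np.sin(2 * np.pi * 50 * t)      # 50 Hz interference
b, a = iirnotch(50.0, Q=10.0, fs=fs)          # narrow band-reject at 50 Hz
clean = filtfilt(b, a, signal + mains)        # zero-phase application
```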
There are also a number of causes of mechanical noise, the most common of which are tool
resonances induced by poor locking of the geophone to the formation. In this case the degradation
of the signal is easily apparent via the surface QC system and if necessary, the location of the
tool can be altered to provide a good lock. Again in modern tools the majority of mechanical
resonances have generally been reduced to a minimum, either in the design stage or by judicious
application of damping devices in the field.
(ii) Physical
The mechanisms for the generation of tube or casing arrivals have been dealt with in the last chapter, but it was noted there that the possibility of reducing their effects in the field is somewhat limited.
These classes of arrivals must be removed using some of the processing techniques noted
throughout.
Casing arrivals: as noted earlier these exhibit an apparent velocity within the data characteristic
of the steel that is transmitting them. They usually occur in the vicinity of the first arrivals and
tend to mask the real direct arrivals. As they form a coherent set of arrivals, they show the same
character from shot to shot and cannot therefore be reduced by the summation of shots at a level.
These arrivals, however, usually show a spectral content which is of higher frequency than the
seismic arrivals and can therefore be attenuated by the application of a low-pass filter.
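That low-pass attenuation can be sketched as follows; the corner frequency, filter order and the frequencies of the synthetic arrivals are assumed values chosen only to illustrate the separation in spectral content.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Sketch: casing arrivals typically sit above the seismic band, so a
# zero-phase low-pass attenuates them (assumed parameters throughout).
fs = 1000.0                                   # assumed sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
seismic = np.sin(2 * np.pi * 20 * t)          # in-band arrival
casing = 0.5 * np.sin(2 * np.pi * 300 * t)    # high-frequency ringing
b, a = butter(4, 100.0, btype="low", fs=fs)   # assumed 100 Hz corner
filtered = filtfilt(b, a, seismic + casing)
```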
Tube arrivals are more problematic. They do not usually lend themselves to attenuation via
spectral filtering due to the variability of the arrivals for different geophone locations. The quality
of the geophone lock has a direct bearing on the expression of any tube arrival at a particular
geophone station. In many ways a tube wave is analogous to the seismic arrivals that are the aim
of the survey. As such they can be attenuated to a great extent, by treating them as a separate
class of arrivals and applying wavefield separation techniques. Both median and F-K approaches
139
are valid and the mechanism is exactly the same as for downwave subtraction, the initial
alignment however has the tube waves positioned to present zero moveout of events between
traces. Figure 114 is of a VSP which suffers from tube wave interference; figure 115 shows the data after tube wave subtraction using median estimates of the tube wave. The separation techniques were applied as for downwave subtraction. There is a small amount of residual energy present, this being an unavoidable function of the variability of arrivals between geophone stations. In this particular case, however, the effects of the residual tube wave are confined to
high frequencies and can be removed quite effectively by spectral filtering.
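The median-based subtraction described can be sketched as: shift each trace so the tube wave presents zero moveout, estimate it as the median across traces, subtract, and undo the shift — exactly the mechanism used for downwave subtraction. The arrival times and amplitudes below are synthetic assumptions.

```python
import numpy as np

def subtract_tube_wave(panel, arrival):
    """Median-based tube-wave subtraction (sketch). `arrival` gives the
    tube-wave time (in samples) on each trace. The panel is shifted so
    the tube wave is flat, the wave is estimated as the median across
    traces, subtracted, and the shift undone."""
    n_t, n_x = panel.shape
    shifted = np.array([np.roll(panel[:, j], -arrival[j])
                        for j in range(n_x)]).T
    est = np.median(shifted, axis=1)          # flat tube-wave estimate
    cleaned = shifted - est[:, None]
    return np.array([np.roll(cleaned[:, j], arrival[j])
                     for j in range(n_x)]).T

# Synthetic panel: a tube wave arriving at 10 + 2j samples on trace j,
# plus a flat seismic event that should be preserved.
arrival = 10 + 2 * np.arange(8)
panel = np.zeros((100, 8))
for j in range(8):
    panel[arrival[j], j] = 1.0                # tube wave
panel[60, :] = 0.5                            # seismic event
res = subtract_tube_wave(panel, arrival)
```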
Another class of arrivals which can be considered as noise are mode-converted S-waves. These
can be used to gain useful geophysical information but more often than not interfere with the
upgoing P-wave to an unacceptable degree. Again similar separation techniques can be applied although, due to their close association with the P-wave, they are not usually 100% effective. All the
foregoing discussions assume that a single geophone sensing element with its axis placed
vertically has been used for the acquisition. In the majority of cases these days, a 3-component
geophone with sensors placed in a cartesian coordinate system is used to record the VSP and
these allow a more rigorous separation of P- and S-mode energy fields.
Figure 115: VSP after tube wave subtraction using median techniques
arrangement can have cost and logistic advantages on land but presupposes either expendable
boreholes (explosive sources) or non-destructive sources.
In every instance it is advantageous to know approximately what results to expect from a given
survey configuration. Indeed, in many cases whether the proposed survey even has a chance of fulfilling the stated objectives is not always apparent from common-sense considerations alone. Some means of simulating the results of surveys
is therefore required.
Before leaving the topic of survey configurations, a class of surveys that is gaining more and more interest with increasing levels of technology is that of the so-called 3D VSP. These are the direct analogue of the 3D surface seismic dataset, with the proviso that the array used to record the data is limited to a single location in the subsurface and hence the redundancy of data seen in 3D surface seismic is not fully emulated. The surveys do, however, provide very detailed cover in the
vicinity of the well and, in theory at least, provide the opportunity for extremely high resolution
of detailed structure about the well. Figure 118 illustrates the generic form of the survey and they
will be discussed in more detail after the section on walkaway surveys.
3-component processing
"Traditional" VSP recording techniques relied, until the mid 1980s, almost exclusively on the
use of single-axis borehole geophones. The instruments were deployed in a manner that ensured
that the sensing element had a maximum response to seismic energy where the associated particle
motion was in the vertical direction (see figure 119). It was assumed that the major elements of
the seismic wavefield were up and downward propagating P-wave energy fields. For the rig
source VSP in the vertical hole and the vertical incidence VSP in a deviated hole this assumption
usually proves adequate, providing the local structure has low dip and is not complicated by
faulting or intrusions.
If there are to be significant offsets of the source from the geophone, or in regions of high
dip/complex structure, this assumption is no longer valid. In the simplest case of a purely
compressional wavefield, the oblique angle of arrival of the energy at the downhole detector
gives rise to an effective reduction in amplitude of the recorded signal. In general, geophone
sensitivity reduces proportional to the cosine of the angle between the direction of arrival and
the geophone axis. Consequently P-wave arrivals will not be recorded with correct amplitudes
unless incident in a direction precisely along the geophone axis (figure 119).
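The cosine sensitivity relation can be made concrete with a short worked example: a single-axis geophone records the projection of particle motion onto its axis, so the recorded fraction of the true amplitude falls from 1 at axial incidence to 0 at right angles.

```python
import numpy as np

# Fraction of true amplitude recorded by a single-axis geophone as a
# function of the angle between arrival direction and geophone axis.
angles = np.radians([0.0, 30.0, 45.0, 60.0, 90.0])
recorded_fraction = np.cos(angles)
```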
Further to these considerations, when the source is laterally offset from the receiver, the
propagation paths are no longer perpendicular to the bedding planes and there is therefore a
component of particle motion along the acoustic interface. This means that in addition to the
transmitted P-wave energy at an interface, some of the energy can be mode-converted to S-wave
and transmitted as such, with possible re-conversion at subsequent interfaces etc. A moment's thought allows the deduction that, with some configurations and dips, S-wave energy could be
maximally recorded by a vertically polarised geophone with a minimum recording of the P-wave
energy. In the more general case of a more even split in energy between the wave-modes, it is
quite often the case that the spatial proximity of the P- and S-wave arrivals gives rise to
unacceptable aliasing effects, if filters depending solely on apparent differences in velocity are
used to partition the wavefields.
If the VSP wavefield is recorded using a "3-component" downhole geophone many of these
problems can be avoided. In the majority of 3-component tools, three geophones are arranged to
be mutually orthogonal, forming the standard x,y,z axes of a cartesian co-ordinate system. With
such an arrangement the full 3-dimensional particle-motion vector can be recorded. With a
knowledge of the particle motion one can exploit to the full the differences between the P- and
S- propagation modes and accurately partition the two wavefields for separate processing. In
certain circumstances (usually where there is only one significant mode-conversion interface)
the S-wave data can be processed to provide an alternative subsurface image to that provided by
the P-wave.
One problem that must be overcome before any 3-component data can be processed is that of
geophone alignment. As the geophone is suspended down the hole by means of a wireline cable
(figure 120) it is clear that the tool is free to rotate in the borehole between successive stations
depending on the torque experienced by the cable. It is obvious also that unless there is some
means of referencing the orientation of the sensing elements, the horizontal component
information will be of little use.
There are basically two ways to overcome this problem. The most obvious method is to find
some method of directly measuring the tool orientation, and this can be accomplished by
incorporating some form of gyroscopic measuring device in the downhole tool. This method
provides the most accurate solution and allows the simultaneous acquisition of a borehole
directional survey with the VSP. This approach is not always possible and very difficult to
achieve with current technology if strings of downhole geophones are employed.
If this route is not possible then all is not lost as the second method utilises the amplitude
information recorded by the geophones by making a couple of simple assumptions.
The assumptions made are:
(a) The first arrival at the geophone is the directly propagating P-wave, and
(b) the direction of arrival lies in the source-receiver plane.
The P-wave energy usually travels with the highest seismic velocity and will arrive before any
other wavemode, unless one is extremely unlucky with unexpected refraction effects. The
direction of arrival is slightly trickier but again the least-time path between source and receiver
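The amplitude-based orientation estimate that follows from these two assumptions can be sketched very simply: given the first-break amplitudes on the two horizontal sensors, the angle of the horizontal particle-motion vector relative to the x sensor follows from an arctangent, and comparing this with the known source-receiver azimuth gives the tool rotation. The function name and test azimuth below are illustrative assumptions.

```python
import numpy as np

def horizontal_arrival_angle(hx, hy):
    """Angle (degrees) of the horizontal particle-motion vector
    relative to the x sensor, from the first-break amplitudes on the
    two horizontal geophones (sketch). Assumes the first break is the
    direct P-wave arriving in the source-receiver plane."""
    return np.degrees(np.arctan2(hy, hx))

# If the radial direction is 25 degrees from the x axis, the direct
# P arrival projects onto the sensors as cos/sin of that angle.
true_az = 25.0
hx = np.cos(np.radians(true_az))
hy = np.sin(np.radians(true_az))
est = horizontal_arrival_angle(hx, hy)
```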
Figure 122 shows the effect of its application to a series of traces where the reflection image
begins to be obvious.
model and the process run again until there is a convergence of the modelled and synthetic
results.
Examples of the different imaging techniques as applied to a real data set are shown in figure
123. This is derived from a rig source VSP from a deviated well in the North Sea. Four panels
are illustrated, the first is the original surface seismic data, the second is the mapped data using
a reasonably refined but horizontal subsurface model. The third is of the results using a migration
approach but with the same velocity model and the fourth shows the seismic section with the
migrated data spliced in along the well location. The migration approach has clearly better
imaged the dipping beds near TD of the well and the overall noise level is also much lower. It is
also interesting to note that the apparent zone of illumination is much smaller with the migration
approach; this is indeed a more accurate description of what to expect from this survey. The
cover indicated by the mapped data could be constrained by refining the model further to allow
for structural effects, the final result of which would be close in appearance to that from the
migration. It is obvious, however, that the migration has not been as severely constrained by the
velocity model and can produce a very reliable image with less detailed knowledge of the
subsurface.
Figure 123 Migration versus mapping, deviated well rig source VSP
at depth in the well and the source is fired at a series of positions along a line through the wellhead
location. Some of the expected travel paths are shown on this diagram and the shaded area
indicates the expected zone of subsurface illumination. Figure 011 shows the ray paths for one
shot point and one receiver location with both down and upgoing ray paths annotated. Note that
for long source offsets the path lengths for both classes of arrival will be of similar length
especially for a deep geophone location.
between shots (i.e. the distance between the geophone and a reflecting interface remains
constant), both up and downgoing wavefields display the same form of moveout. Both
wavefields produce a hyperbolic arrival within the data with the downgoing arrivals displaying
more curvature than the upgoing. For a deep geophone the downgoing events occur later than for
a shallow geophone, whereas the upgoing events occur at shorter times.
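These moveout relations can be checked with a constant-velocity, horizontal-reflector sketch (all depths, offsets and the velocity below are assumed values): the upgoing reflection behaves as a direct arrival from an image receiver at depth 2d - z, so both wavefields are hyperbolic in offset, with the downgoing branch the more curved.

```python
import numpy as np

# Assumed geometry: constant velocity v, geophone depth z, horizontal
# reflector at depth d, sources at offsets x along a walkaway line.
v, z, d = 2000.0, 1500.0, 2500.0            # m/s, m, m
x = np.linspace(0.0, 3000.0, 7)             # source offsets, m
t_down = np.sqrt(x**2 + z**2) / v           # direct downgoing arrival
t_up = np.sqrt(x**2 + (2 * d - z)**2) / v   # upgoing reflection (image receiver)
```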
With this in mind, the majority of wavefield manipulation (i.e. separation of up and downgoing,
P- and S-wavefields etc.) for walkaway surveys, is accomplished with the data sorted according
to CSP gathers. After all such processing has been completed, the data are re-sorted back to
CGGs, with further processing, including imaging, then performed in the CGG domain. It is worth
noting that although removal of coherent events or wave-mode partition of wavefields is
accomplished in the CSP domain, the removal of random noise can be attempted in either CSP
or CGG domains with equal success. Equally enhancement of the partitioned upgoing wavefields
can be achieved if necessary using conventional velocity filtering techniques within the CSP
domain. Alternatively, however, as there are several independent estimates of the upgoing
wavefield (one available from each CGG), it is possible to image all the available data from each
CGG separately. One can then improve data quality by stacking these images; this method is in
fact the closest one can get in VSP processing to producing multi-fold cover in the true surface
seismic sense.
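The CSP/CGG re-sorting referred to above is pure bookkeeping, as a short sketch shows: the same traces are simply re-grouped on a different header key. The trace records and key names here are illustrative, not any particular format.

```python
from collections import defaultdict

# Each walkaway trace carries both a shot and a geophone index; sorting
# between common-shot (CSP) and common-geophone (CGG) gathers is just a
# re-grouping of the same traces on different keys.
traces = [{"shot": s, "geophone": g, "data": f"trace-{s}-{g}"}
          for s in range(3) for g in range(2)]

def sort_gathers(traces, key):
    gathers = defaultdict(list)
    for tr in traces:
        gathers[tr[key]].append(tr)
    return dict(gathers)

csp = sort_gathers(traces, "shot")        # one gather per source position
cgg = sort_gathers(traces, "geophone")    # one gather per geophone station
```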
The modifications to the standard VSP processing route noted above would appear at first sight to be almost trivial; after all, most are concerned with simple data manipulation rather than
processing. The main limitations with this type of survey occur when one considers the imaging
of the data. A moment's thought indicates that although the mechanics behind the imaging
process are relatively simple their application in practice is far more complex. Either method of
imaging can be performed on walkaway data. In some locations, however, where there is marked
angle dependency of reflection strength, the migration approach can provide unpredictable
results due to the wide variation in reflection angle seen between the near and far offset traces.
An important consideration is the velocity/acoustic impedance field used to image the data. Due
to the variability of the travel paths, anisotropy and lateral velocity variations, no two shot point-geophone pairs experience the same velocity field; this is, of course, true for every VSP survey
although the effect is more pronounced with the walkaway. A static offset VSP can be designed,
for example, such that the difference between propagation histories is far less problematic.
In a sense the walkaway VSP is nothing more than a special case of the offset source VSP survey.
Whilst there are practical considerations to be made, from a processing viewpoint, that are
different to those required by static offset source data, the image presented to the oil company
interpreter has the same properties as one from a static offset source. Where the walkaway really
scores over the standard offset source VSP is in the acquisition stage.
3D-VSP Surveys
In many ways the 3D VSP is simply an extension of the walkaway VSP survey. It is obvious that
with the high cost of rigs and drilling operations, the 3D VSP must be performed in the shortest
possible time in order to maximise any economic advantage that might be hoped to be achieved
from running such a survey.
The question of why one would wish to perform a 3D VSP survey must be asked. It has already
been intimated that the VSP is a tool that can obtain images from the subsurface from regions
not illuminated by surface seismic surveys; in addition there is the benefit, always encountered
with VSP surveys that the seismic energy has suffered less modification by the earth due to its
reduced travel path length. These two factors, when combined, allow the operator to plan surveys
to provide a true 3D solution from the borehole derived dataset whilst being able to avoid some
of the pitfalls (for example by undershooting velocity anomalies etc.) associated with surface
datasets.
The 3D VSP only becomes viable, however, if sufficient data can be acquired to provide adequate
statistics for the various processes to be applied. This obviously would not be a problem if time
and money were unlimited, if this were the case any downhole receiver tool could be used to
(eventually) record all the data required. This is patently not the case in the majority of well
scenarios. The only manner in which this type of survey can be economically recorded, is by
using a downhole receiver array thereby reducing the number of traverses of the source. Ideally
data should be recorded with as few variations in survey parameters as possible. For example, it
is physically impossible in the offshore case to exactly re-occupy a source position; this means
that for different placements of the downhole receiver for successive passes of the source boat,
there may be considerable differences in the source conditions and the associated recorded travel
time. Whereas it is possible to account for much of the variation during processing, it is time
consuming and not always successful. The direct consequence of this is that image quality is
compromised.
The obvious solution is to acquire as much data as physically possible with the same source
conditions i.e. to use a downhole receiver array. As noted above the greater the number of
geophone stations recorded, the better the results should be. It follows, therefore, that the best
results would be obtained by the array that provides the greatest number of array stations, ideally
allowing the survey to be recorded in a single pass of the source. As a rule of thumb it has always
been considered that the minimum number of geophone stations required for adequate processing
is at least seven and preferably nine (odd numbers are required if median filters are to be used in
the processing route). It is clear, therefore, that the ideal 3D VSP geophone array should possess
at least this number of geophone stations if one is contemplating such a survey.
Another important consideration for both 3D VSP and 2D walkaway surveys is that of aliasing.
Temporal aliasing can in most cases be discounted in that all available field systems are capable
of providing a sufficiently time-sampled dataset to make such effects inconsequential. The
effects of spatial aliasing, however, can be considerable and should be avoided if at all possible.
The most deleterious effect is produced by the spatial separation of elements of the receiver array.
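The spatial-aliasing limit set by the receiver spacing can be expressed as a one-line check: an event with apparent velocity v_app across the array is unaliased only up to f_max = v_app / (2 dx), where dx is the element spacing. The numerical values below are assumed for illustration.

```python
def max_unaliased_frequency(v_app, dx):
    """Highest unaliased frequency (Hz) for an event of apparent
    velocity v_app (m/s) across receivers spaced dx (m) apart."""
    return v_app / (2.0 * dx)

# e.g. an assumed 15 m element spacing and 2500 m/s apparent velocity
f_max = max_unaliased_frequency(2500.0, 15.0)
```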
Shooting Patterns
In theory, it makes little difference as to the shooting pattern adopted for recording 3D VSP
surveys. There are very few of the shooting limitations encountered in surface seismic data in
that there is only one receiver location to be considered. In practice, however, there may be
distinct advantages to using a specific shooting programme (figure 127).
4D-VSP Surveys
An extension to the 3D VSP survey, uses the data recorded in such an operation to monitor the
development of the reservoir over time. This so-called 4D survey (time being the 4th dimension)
requires complete repeat shooting of the 3D VSP survey.
As the differences between repeat surveys are likely to be small, the source and receiver positions
should ideally be precisely reoccupied in order to remove as many degrees of freedom as possible
from the operation. From the source viewpoint this is not possible to achieve in practice with current levels of technology; receiver array positioning is a far more precise activity, but even here it is possible to introduce errors. It is important, therefore, to have an independent reference
for the survey (in the form of a seismic marker above the reservoir) that is unlikely to be affected
by changes in reservoir characteristics. If this is possible, the precise re-positioning becomes less
important in that the processing applied to the data can be performed using the reference horizon
as a control. Standard processing of the survey to provide subsurface images can therefore
proceed using data that are recorded with less than perfect repeatability. The knowledge of the
actual positions is still of paramount importance, however, in order to produce the most accurate
image during each survey.
Complete accuracy in the re-occupation of positions becomes essential if there
is a necessity for precise analysis of data using the same travel paths (for example performing
repeated AVO analyses of specific reflection points). Whether this latter type of processing is
even feasible, however, is as yet uncertain.
A 4D VSP may be used on its own as a valid tool for monitoring a reservoir; however, it is more
likely to be used in conjunction with repeat 3D surface seismic operations. The main drawback
to the 4D approach using the VSP alone, is that there is a limited amount of subsurface
illumination available from such a configuration. Surface seismic on the other hand is more
difficult to calibrate than the VSP. The marriage of the two technologies has the potential of
providing the most accurate approach to large scale seismic reservoir monitoring by using the
VSP surveys as the calibration control for the surface seismic data.
A point to note here is that although the degree of dip can be calculated, with the source
positioned at the wellhead, there is no way of determining dip azimuth. This is easily
demonstrated if one considers the case of dip in the opposite direction to that in figure 5.1; the
ray paths associated with this will be a mirror image of those shown in the figure and the time
response will therefore be identical.
The computation of the angle of dip is very straightforward, but involves making two
assumptions:
(1) The dipping reflector is a planar, uniform surface.
(2) The velocity of the material above the reflector can be regarded as constant.
The first assumption is reasonably secure in regions of "well behaved" structure, and does not
significantly affect the accuracy of the calculations. The second assumption may not be as
reliable particularly in regions where there may be significant velocity changes in the near
surface. If the dips are generally below 30 degrees, however, the errors introduced are again
minimal; the basic consideration here is that the reflection points remain reasonably close to the
well location. Even in areas of great structural variation and high dip, accurate calculations can
be performed if geophone stations close to the reflector are chosen for the calculations.
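Under the two assumptions above, a simple relation gives the dip from the depth moveout of the upgoing event. For a source at the wellhead and a geophone near the surface, image-source geometry gives dT/dz = -cos(dip)/v for the upgoing event, so the dip follows from an arccosine; this shallow-geophone form and the velocity used below are illustrative assumptions.

```python
import numpy as np

def dip_from_moveout(dt_dz, v):
    """Estimate reflector dip (degrees) from the moveout dT/dz of an
    upgoing event across geophone depths, source at the wellhead
    (sketch). Assumes a planar reflector, constant overburden velocity
    and a shallow geophone, where dT/dz = -cos(dip)/v."""
    return np.degrees(np.arccos(np.clip(-dt_dz * v, -1.0, 1.0)))

v = 2500.0                                   # assumed velocity, m/s
flat = dip_from_moveout(-1.0 / v, v)         # horizontal reflector: 0 deg
d10 = dip_from_moveout(-np.cos(np.radians(10.0)) / v, v)
```

Note that, as the text points out, this yields the dip magnitude only; with the source at the wellhead the dip azimuth is indeterminate.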
Figure 130 VSP upgoing wavefield, at two-way time alignment, with velocity log
Figure 131 VSP deconvolved upgoing wavefield, at two-way time alignment, with velocity log
appropriate polarity display is used, i.e. white troughs are matched with white troughs. This is simplified if the data used have been signature deconvolved, as the resident wavelet will then have been considerably simplified.
In the horizontally layered earth, multiples will appear parallel to the primaries giving rise to
them. If there is considerable dip present, although the multiples will still exhibit termination
within the data, the events may not appear parallel to the generating primary and their period will
therefore not be consistent from trace to trace. The downgoing multiple tail will also exhibit
moveout between traces and care should then be taken when designing deconvolution operators
from the downwave to be applied to the upwave, as the respective periodicities will not
necessarily be the same.
One may extend the identification of multiple events away from the VSP to encompass the
surface seismic record. The approach is essentially the same, although the choice of comparison
trace is slightly different. If the downwave is extremely stable, it is unlikely that any difference
would be noticeable between the comparisons of any trace with the surface record. If, on the
other hand, there is a variation in the downgoing wavefield with depth, one must ensure that the
downwave used is compatible with the data being analysed. To this end, the downwave trace for
the level with the same two-way first arrival time as the time of the reflector on the seismic
record, should be used. Again the polarity of the downwave first arrival must match that of the
reflection event examined; any positive correlation between the tail of the downwave and events
beneath the primary then implies residual multiple activity in the surface record.
information concerning any thinly bedded strata has been lost. This aside, however, the VSP can
now be compared much more easily with the seismic record; figure 135 illustrates such a
comparison. The same well data is used as before, but now the degree of similarity between the
data sets can be assessed. Moving from left to right on the display the panels are:
Acoustic impedance log derived from wireline logs
Transposed undeconvolved upgoing VSP wavefield
Surface seismic data at the well location
Transposed deconvolved upgoing VSP wavefield
calculations. A practical example of VSP interpretation is presented as a work study with these
course notes.
in the log data and indeed apparently terminates within the panel before reaching the time-depth
curve. Using the conventional wisdom outlined in the preceding sections, this event would appear
to be a multiple reflection, although it is not immediately obvious as to which primary could give
rise to the event.
On closer inspection, the event does not actually terminate but tails off in amplitude toward the
first-arrival curve, just failing to intersect the well location. This, however, still does not answer
the question of where the large anhydrite event on the log has disappeared to in the VSP. Looking
in even more detail at the VSP display reveals a further broken "white-black" event which occurs
over about 14 traces (labelled "B"), 20 ms beneath the event discussed above. Further still, an
extremely weak event at the well location at approximately 2.2 seconds is visible (labelled "C"),
which appears to tie the anhydrite formation. What, then, can be made of these observations?
That event A is dipping is quite obvious from the time moveout it exhibits, which corresponds
to a dip angle of 11 degrees. Event B is also dipping, although in this instance the dip is much
higher and equates to an angle of approximately 30 degrees. It is not possible to ascertain to what
degree event C is dipping, due to its very low amplitude and small lateral extent; from its
manifestation in the data, however, it is almost certain that the event exhibits a degree of dip.
Why, one asks, should this be the case? The easiest and most likely explanation of the shape of
these events is that the bed just cuts the well, terminating up-dip extremely close to the well
location (event C). The termination is effected by a fault which decreases the depth of the bed
(event B); the bed is then faulted again, raising the event higher and possibly positioning it
closer to the well location. In the normal run of events this scenario would be considered
somewhat unlikely, to say the least, given the resultant shape of the subsurface model, a
representation of which is provided at the bottom of figure 5.16. Because the well was drilled
through a salt swell, however, this interpretation is quite possible owing to the mobility of the
salt material.
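The conversion from time moveout to dip angle used above can be sketched with the standard relation for unmigrated time sections, sin(dip) = v·Δt/(2·Δx). This assumes a laterally constant average velocity above the reflector; the numerical values below are purely illustrative and are not taken from the case study.

```python
import math

def dip_angle_deg(dt_s, dx_m, v_mps):
    """Approximate dip angle (degrees) from two-way-time moveout on an
    unmigrated section: sin(theta) = v * dt / (2 * dx).
    dt_s: time moveout (s) observed over lateral distance dx_m (m);
    v_mps: assumed average velocity (m/s) above the reflector."""
    s = v_mps * dt_s / (2.0 * dx_m)
    if abs(s) > 1.0:
        raise ValueError("moveout implies sin(dip) > 1; check inputs")
    return math.degrees(math.asin(s))

# Illustrative only: 20 ms of moveout over 150 m with v = 3000 m/s
# gives a dip of roughly 11.5 degrees.
print(round(dip_angle_deg(0.020, 150.0, 3000.0), 1))
```

Note that steeper events such as event B test the small-dip assumptions behind this relation, which is one reason apparent moveout on a VSP display should be treated as an estimate rather than a measurement.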
A further point to notice is that there may be a slight problem with the deconvolution applied to
this data set. Looking further into the data, it is possible to see a spatially extensive black
event at approximately 2.25 seconds; this exhibits much the same moveout and lateral extent as
the anhydrite event, and indeed appears to terminate within the data in much the same manner as
the event of interest. This event can be interpreted as a multiple from the anhydrite, still
resident in the data probably because of an incompatibility of propagation paths between the
downwave and upwave. Further drilling in a deviated hole near, and to the right of, the location
of the subject well lends a high degree of support to this interpretation: the anhydrite layer
was encountered shallower than in the first well, with approximately 10 degrees of dip.
It is clear from this example that, in areas of complex or unusual structure, one must be careful
not to push the interpretation of a VSP past its useful limits. Inferences from the data may well
be made quite easily on the basis of the common-sense application of the simple rules for
identifying primary and multiple events, but the complexity of the structure may entail a distinct
"bending" of these rules.