
Indian Academy of Sciences, Summer Research Fellowship

Project Report
Somrup Chakraborty, IIT Kharagpur
EPSS570-I

Dr. N.Satyavani
Sr. Scientist, Gas Hydrate Group,
CSIR- National Geophysical Research Institute (NGRI),
Hyderabad - INDIA

Acknowledgement
Every work we do is linked directly or indirectly to many different aspects, circumstances and people. Aspects which we try to
understand, work on and draw conclusions from, circumstances which motivate us, and people who help us and guide us to
achieve what we intend to. Recollecting the near past events of my internship period I am deeply indebted to the people who
were responsible for the successful completion of my work.
I am thankful to Dr. N. Satyavani, Sr. Scientist, Gas Hydrate Group, CSIR-National Geophysical Research Institute (NGRI), who
was my guide and facilitator throughout my internship, for guiding me through the different dimensions of seismic data
processing and the subsequent image processing steps. It was she who created a flawless schedule and also insisted that I work
on real seismic data.
I express my special sense of gratitude to the NGRI staff and my peers for helping me throughout my stay in the campus. The
immense cooperation given by them is unforgettable.

Contents:

Project Details
1. Objective
2. Introduction
3. Theory
4. Method

Seismic Data Processing
1. Seismic Data Processing
2. Processing Flow
3. Near Trace Gather Display
4. Trace Muting and Editing
5. Deconvolution
6. Velocity Analysis
7. Brute Stack

Image Processing
1. Image Processing
2. Noise Removal

Conclusion

Objective:
In this project we propose a novel method to enhance seismic data for manual and automatic interpretation. We use a
genetic algorithm to optimize a kernel that, when convolved with the seismic image, appears to enhance the internal
characteristics of salt bodies and the sub-salt stratigraphy. The performance of the genetic algorithm was validated by
the use of test images prior to its application on the seismic data. We present the evolution of the resulting kernel and its
convolved image. This image was analysed by my mentor, highlighting possible advantages over the original one.

Introduction:
The development of seismic attributes is a major part of the effort to extract more information from seismic reflection data. A
seismic attribute is a quantitative measure of a seismic characteristic of interest and can be used for the extraction of
information from seismic reflection data, either for quantitative or qualitative interpretation. Seismic attributes can enhance
certain characteristics of interest in seismic data. This definition of seismic attributes relates them to the main objective of
digital image processing: improving an image for its
perception by a human being or by a machine. Image improvement is usually done by point processing (modification
of a pixel independently of its neighbourhood), frequency-domain processing or spatial-domain processing. Spatial-domain
processing corresponds to the application of spatial filters (usually through discrete two-dimensional linear convolution) with
different purposes, e.g., smoothing, enhancing, edge detection. The objective of this procedure is to find a new image that is
better-suited for a specific application than the original one.
Spatial filtering consists of obtaining a new image through the convolution of a convolution filter (mask, filter or kernel, terms
we use interchangeably throughout this report) with an original image. The basic concept consists of setting in a certain location of the new
image the result of combining the values of the kernel with the superimposed values of a fixed region in the original image.
Many of these kernels have been obtained from a theoretical process (smoothing, high-frequency enhancement), while others
come from empirical or intuitive considerations (point and line detection).
Genetic algorithms (GAs), originally proposed by Holland (1975), are a biologically inspired computational approach to
optimization through a simultaneous search on different areas of the cost function space and have proven useful as inversion
or optimization techniques in several areas, including geoscience.
In this project, we find 2D kernels that can enhance desirable features in seismic images and test them for a particular case
concerning salt bodies. Due to the obvious difficulties involving the theoretical construction of these kernels and the extent of
the space containing all of them, we propose and develop a GA as an optimization tool to find (design) the kernel that
minimizes the difference between the filtered image and a desired image, which in the first instance is manually constructed
to emphasize a feature of particular interest.

Theory
Image Filtering
One method for filtering an image consists in the application of a discrete 2D convolution of a kernel and the image.
Convolution is performed by multiplying the pixels in a small window of an image by a set of values in a convolution filter and
then replacing the value of one of the image pixels (usually the one in the middle) with the normalized sum of this
multiplication. In other words, perform a weighted average in the neighbourhood of each pixel and place the obtained value
in a new image.
Due to its 2D nature, the kernel is usually represented as a table, matrix or, more commonly, a square of odd dimensions so that
the convolved image does not change in size with respect to the original image. For an image f of size R × C (R rows and C
columns) and a convolution kernel W of size M × N (both M and N odd numbers), the 2D discrete linear convolution g of W and
f is given by

g(x, y) = Σ_s Σ_t W(s, t) f(x − s, y − t),   with s = −a, ..., a and t = −b, ..., b, where a = (M − 1)/2 and b = (N − 1)/2

The objective of the study is to find a kernel W that, through the convolution operator, transforms the image f into an image g
with the desired characteristics. In other words, this inverse problem does not consist of separating two convolved signals after
physical interaction; the objective is to create or find a kernel W that interacts (convolves) with a known image f in such a way
that the result g is more visually appealing to a seismic interpreter (or to a pattern-recognition technique) than the original image.

Fig 1: Process of convolution of an image with the kernel
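As a concrete illustration of the convolution defined above, the following sketch (written here in Python/NumPy purely for illustration, not the code used in the project) computes the 2D discrete linear convolution of an odd-sized kernel with an image while preserving the image size; in practice a library routine such as scipy.signal.convolve2d with mode='same' performs the same operation.

import numpy as np

def convolve2d_same(f, W):
    """Discrete 2D linear convolution of image f (R x C) with an odd-sized
    kernel W (M x N); the output has the same size as f."""
    R, C = f.shape
    M, N = W.shape
    a, b = (M - 1) // 2, (N - 1) // 2
    fp = np.pad(f, ((a, a), (b, b)), mode="edge")   # pad so the size is preserved
    g = np.zeros_like(f, dtype=float)
    for s in range(-a, a + 1):
        for t in range(-b, b + 1):
            # accumulate W(s, t) * f(x - s, y - t) for every output pixel (x, y)
            g += W[s + a, t + b] * fp[a - s:a - s + R, b - t:b - t + C]
    return g

# Example: a 3 x 3 smoothing kernel applied to a random stand-in image
if __name__ == "__main__":
    img = np.random.rand(100, 120)
    kernel = np.ones((3, 3)) / 9.0
    smoothed = convolve2d_same(img, kernel)
    print(smoothed.shape)  # (100, 120)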

Genetic Algorithm
Genetic algorithms emulate the process of natural evolution and are based on two premises: the survival of the fittest and the
improvement of new individuals through sexual reproduction. At first, an initial population that uniformly spans the search
space is randomly generated. Each individual (or chromosome) is represented as a set of elements commonly referred to as
genes or alleles. The initial population is evaluated with the cost function to discard those individuals that are not as well
adapted as the rest of them. Those organisms allowed to survive are paired and reproduced through a cross-over operation,
giving birth to a new generation. A small percentage of this new generation is arbitrarily mutated so different areas of the
search space (not available through sexual reproduction) can be explored. In this sense, mutation is useful in avoiding local
minima in the optimization process. The rate at which mutation is applied is an implementation decision. The new generation
is also evaluated, allowing only the fittest individuals to survive and the process is repeated. The GA stops when a certain
number of generations is reached or when a certain threshold of the cost function has been achieved.
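The loop described above can be sketched as follows (Python/NumPy, illustrative only; the project itself used a MATLAB implementation, and all names and defaults here are assumptions): the population is ranked by the cost function, the fittest fraction survives, survivors are paired for single-point cross-over and a small fraction of the offspring genes are randomly mutated.

import numpy as np

def genetic_algorithm(cost, init_pop, n_generations=100, keep_frac=0.5,
                      mutation_rate=0.05, mutation_scale=0.5, seed=None):
    """Minimal real-valued GA: selection of the fittest, single-point
    cross-over between paired survivors and random mutation of offspring."""
    rng = np.random.default_rng(seed)
    pop = np.asarray(init_pop, dtype=float)          # shape (n_pop, n_genes)
    n_pop, n_genes = pop.shape
    for _ in range(n_generations):
        fitness = np.array([cost(ind) for ind in pop])
        survivors = pop[np.argsort(fitness)][: max(2, int(keep_frac * n_pop))]
        children = []
        while len(survivors) + len(children) < n_pop:
            i, j = rng.choice(len(survivors), size=2, replace=False)
            cut = rng.integers(1, n_genes)           # single cross-over point
            child = np.concatenate([survivors[i][:cut], survivors[j][cut:]])
            mask = rng.random(n_genes) < mutation_rate
            child[mask] += mutation_scale * rng.standard_normal(mask.sum())
            children.append(child)
        pop = np.vstack([survivors] + children) if children else survivors
    fitness = np.array([cost(ind) for ind in pop])
    return pop[np.argmin(fitness)]                   # the fittest individual

For the kernel-design problem each individual would encode a candidate kernel; the matrix form of the chromosomes and of the cross-over actually used in the project is sketched after the Method section.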

Method
From the section of a seismic profile shown, it was proposed to find a 2D kernel that, when convolved with the original image,
results in a new one that enhances the salt bodies present in it. Finding the components of a kernel W, which transforms an
image f into a desired image d, corresponds to solving an inverse problem, in contrast with the associated forward problem
described by the image-filtering equation above. We propose that this inverse problem can be treated as an optimization
problem, trying to minimize the cost function c given by

c(W) = ‖ d − g ‖,  a measure of the difference between the desired image d and the filtered image g,

where g is the 2D discrete convolution of W with f defined in the Theory section.

To solve this optimization problem, a real-valued multifunctional GA, whose structure follows that of a typical GA but works for
more than one optimization function, was developed. One modification of the GA was to manipulate the chromosomes as
matrices instead of strings. This is a more natural approach, since the solution we seek is also a matrix. Therefore, each
chromosome structurally represents a candidate (a potential solution for the searched convolution kernel W) and each gene
one of its elements. The elements of a candidate kernel were written row by row as strings of real values, each string
representing one row of the matrix. According to this scheme for the chromosomes, the cross-over operator was also modified
so it could handle matrices. Instead of the usual single-point cross-over, it was proposed to randomly choose two cross-over
points that describe a sub-region of one of the parent matrices, so that one of the two offspring of a pair of parent matrices
consists of this region from one of the parents, along with the rest of the elements from the other one. The second offspring is
obtained analogously, inverting the values used (inside and outside the sub-region) for each parent.
The genetic algorithm was tested for different population sizes ranging from 50 to 200 and different numbers of generations
ranging from 100 to 200. The number of generations is restricted to low values to prevent complete whitening of the convolved
image. Other parameters include the mutation function (adaptive feasible, subject to the constraint that each element of the
matrix lies in the range -10 to +10), parent selection (weighted random pairing, i.e. roulette-wheel selection) and crossover
(two-point crossover), as sketched below.
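The matrix-specific ingredients described above can be sketched as follows (Python/NumPy-SciPy, illustrative only). The cost function is assumed here to be a sum of squared differences between the desired and filtered images, since the exact functional is not reproduced in this report; the two-point cross-over swaps a random rectangular sub-region between the parent kernels, and mutation replaces a small fraction of elements while respecting the -10 to +10 bound.

import numpy as np
from scipy.signal import convolve2d

def cost(W, f, d):
    """Misfit between the desired image d and the image f filtered with
    kernel W (a sum of squared differences is assumed here)."""
    g = convolve2d(f, W, mode="same", boundary="symm")
    return np.sum((d - g) ** 2)

def matrix_crossover(parent_a, parent_b, rng):
    """Two-point cross-over for matrix chromosomes: a random rectangular
    sub-region is swapped between the two parent kernels."""
    M, N = parent_a.shape
    r1, r2 = sorted(rng.integers(0, M, size=2))
    c1, c2 = sorted(rng.integers(0, N, size=2))
    child1, child2 = parent_a.copy(), parent_b.copy()
    child1[r1:r2 + 1, c1:c2 + 1] = parent_b[r1:r2 + 1, c1:c2 + 1]
    child2[r1:r2 + 1, c1:c2 + 1] = parent_a[r1:r2 + 1, c1:c2 + 1]
    return child1, child2

def mutate(W, rate=0.05, bound=10.0, rng=None):
    """Replace a small fraction of kernel elements with new random values
    kept inside the [-bound, +bound] constraint."""
    rng = np.random.default_rng() if rng is None else rng
    W = W.copy()
    mask = rng.random(W.shape) < rate
    W[mask] = rng.uniform(-bound, bound, size=mask.sum())
    return W

# Example usage with a hypothetical 5 x 5 kernel size
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    parent_a = rng.uniform(-10, 10, (5, 5))
    parent_b = rng.uniform(-10, 10, (5, 5))
    c1, c2 = matrix_crossover(parent_a, parent_b, rng)
    c1 = mutate(c1, rng=rng)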

Seismic Data Processing


The purpose of seismic processing is to manipulate the acquired data into an image that can be used to infer the sub-surface
structure. Only minimal processing would be required if we had a perfect acquisition system. Processing consists of the
application of a series of computer routines to the acquired data guided by the hand of the processing geophysicist. There is
no single "correct" processing sequence for a given volume of data. At several stages judgements or interpretations have to be
made which are often subjective and rely on the processor's experience or bias. The interpreter should be involved at all stages
to check that processing decisions do not radically alter the interpretability of the results in a detrimental manner.
Processing routines generally fall into one of the following categories:
enhancing signal at the expense of noise
providing velocity information
collapsing diffractions and placing dipping events in their true subsurface locations (migration)
increasing resolution (wavelet processing)
In this project we process raw marine seismic data using PROMAX and enhance the resultant seismic image through a genetic
algorithm. Seismic data processing primarily consists of four stages, namely: geometry assignment; parameter testing,
deconvolution and brute stack; stacking (RMS) velocity analysis; and final stack and migration.

Processing Flow
A processing flow is a collection of processing routines applied to a data volume. The processor will typically construct several
jobs which string certain processing routines together in a sequential manner. Most processing routines accept input data,
apply a process to it and produce output data which is saved to disk or tape before passing through to the next processing
stage. Several of the stages will be strongly interdependent and each of the processing routines will require several parameters
some of which may be defaulted. Some of the parameters will be defined, for example by the acquisition geometry and some
must be determined for the particular data being processed by the process of testing. It is reiterated that the parameter
choice is often subjective.

Prestack Processing Flows


The full prestack processing sequence follows:

SHOT DOMAIN
1. DATA INPUT & QC: usually reformatted to an internal format more efficient than that provided by SEG standards. Bad
or noisy traces are edited and geometry is applied according to the observers' logs.
2. DESIGNATURE: conversion of source wavelet to minimum phase equivalent to prepare data for deconvolution.
3. RESAMPLING: data are often re-sampled from 2ms to 4ms following anti-alias filtering. This makes subsequent
processing cheaper and does not appreciably reduce frequency content for typical deep targets.
4. GAIN CORRECTION: to compensate for geometric divergence and other amplitude losses.
5. TRACE REDUCTION: often applied to reduce the group interval from 12.5m to 25m and consequently the CMP interval
to 12.5m from 6.25m. By halving the number of traces the subsequent processing stages will be cheaper at little
reduction of resolution. The reduction may be applied by summing adjacent traces (either with or without NMO), but
should be performed by K-filter and trace drop.
6. MULTIPLE SUPPRESSION: some routines perform better in the shot domain for example tau-p domain deconvolution
and certain wave-equation methods.

CMP (Common Mid Point ) DOMAIN

1. CMP GATHER: Mandatory sorting process from shot gathers to CMP gathers.
2. DECONVOLUTION: To collapse the seismic wavelet and suppress short period multiple reflections.
3. MULTIPLE SUPPRESSION: either using moveout or periodicity filters.
4. DMO OR PRESTACK MIGRATION: to remove the effects of dip and structure from velocity analysis.
5. VELOCITY ANALYSIS: To obtain NMO corrections prior to stacking.
6. NMO CORRECTION: Apply NMO using the velocities determined above.
7. MUTE: removal of unwanted direct arrivals, refractions and NMO stretch.

Post-Stack Processing Flows

1. STACK: increase signal-to-noise, attenuates multiples.
2. NOISE SUPPRESSION: It includes FK, FX deconvolution.
3. DECONVOLUTION: Further multiple suppression.
4. MIGRATION: collapses diffractions and correctly positions dipping events.
5. SPECTRAL BALANCING/WHITENING: resolution improvement.
6. ZERO-PHASE CONVERSION: improved well ties and resolution.
7. FILTER: removal of high and low frequency noise.
8. SCALE: bring key reflectors to a suitable gain level for interpretation.
9. DISPLAY: either paper or to workstation.
10. REPORT: essential to describe the processing tests performed, reasons for the decisions made etc.

Steps followed during processing using PROMAX


Near Trace Gather Display:
In the near trace gather display we select one channel from each shot record and then feed the result to the trace display
function, which gives an approximate idea of the geological structures present in our seismic data. The near trace gather is
useful when it comes to focusing our processing on a particular geological structure, which in our case is mud diapirs in a gas
hydrate environment.

Near Trace Gather Display for a mud diapir

Band Pass Filter:


Next we apply a band-pass filter, usually an Ormsby filter, to get rid of noise. In order to apply the band-pass filter we need an
interactive spectral analysis to get an idea of the frequency bandwidth of our seismic signal. To design the filter we specify the
lower and upper band limits, which in our case are 5-10-75-80 Hz. Applying this filter removes all noise whose frequency does
not lie within this bandwidth.
Ormsby Filter: defined by four corner frequencies, which specify a trapezoidal band-pass amplitude response, where

f1 = low-cut frequency
f2 = low-pass frequency
f3 = high-pass frequency
f4 = high-cut frequency
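A minimal sketch of how such a filter can be applied is given below (Python/NumPy, illustrative only and not the PROMAX implementation): the amplitude spectrum is built as a trapezoid from the four corner frequencies and applied as a zero-phase filter in the frequency domain.

import numpy as np

def ormsby_bandpass(trace, dt, f1, f2, f3, f4):
    """Zero-phase band-pass filter with a trapezoidal amplitude spectrum:
    zero below f1 and above f4, unity between f2 and f3, linear tapers on
    the ramps f1-f2 and f3-f4."""
    trace = np.asarray(trace, dtype=float)
    n = len(trace)
    freqs = np.fft.rfftfreq(n, d=dt)
    amp = np.zeros_like(freqs)
    amp[(freqs >= f2) & (freqs <= f3)] = 1.0
    ramp_up = (freqs >= f1) & (freqs < f2)
    ramp_dn = (freqs > f3) & (freqs <= f4)
    amp[ramp_up] = (freqs[ramp_up] - f1) / (f2 - f1)
    amp[ramp_dn] = (f4 - freqs[ramp_dn]) / (f4 - f3)
    return np.fft.irfft(np.fft.rfft(trace) * amp, n=n)

# Example with the corner frequencies used in this report: 5-10-75-80 Hz
if __name__ == "__main__":
    dt = 0.002                        # 2 ms sampling interval
    trace = np.random.randn(1000)     # stand-in for a noisy seismic trace
    filtered = ormsby_bandpass(trace, dt, 5, 10, 75, 80)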

Muting and Trace Editing


A mute is simply an area of data that is zero'd; it might be based on a line (with data above or below the line being muted), or
even a polygon, with data inside the polygon being muted.
There are a few ways in which mutes are used in a seismic processing sequence.
a) The first is to remove any strong, coherent noise that is generated by the shot , such as the direct and refracted arrivals. The
mute in this case is usually an "outer trace" or "front mute", where data above the mute time is zero'd. These strong arrivals
might also be attenuated by muting them in another domain - for example they may be more isolated from the data if you
transform to the Tau-P and apply a mute there. In the Tau-P domain the mute is usually an "inner trace" or "tail" mute, where
the data is zero'd below the mute line. Mutes in the FK domain can also be effective, especially in the form of polygons.
b) The second thing a mute is used for is to remove data that has been "over stretched" on common midpoint (CMP or CDP)
gathers when you have applied an NMO (Normal Moveout) correction; this "flattens" the hyperbolic shape of a reflection
based on the offset and a determined velocity. Where the correction is very large (in the shallow part of the section and at
longer offsets) the correction applied at the top and bottom of a given signal may be so large as to distort the event. This
distortion is called "NMO stretch", as the event is stretched out - when you stack data with a lot of NMO stretch it creates low
frequency artefacts that obscure the real image. NMO stretch is usually avoided by muting the stretched data, either using a
percentage stretch mute, or manually picking a mute.
An inner trace mute on NMO corrected gathers can also be used to attenuate multiples, as a significant part of the multiple
seen on a stack comes from the inner traces where the Normal Moveout difference between primary and multiple signal is
small, so the multiple "stacks in"; SRME and other model-based demultiples are making the use of inner trace mutes less
common, but you may still see it deployed on some datasets.
Finally, it is usual to mute stacked marine data above the seafloor to remove "water column noise" - this is sometimes called a
"trim mute", and is often stored in a trace header and reapplied after processes like migration and filtering that can add noise
into the water column. It is worth noting that the "water column noise" can be low-amplitude reflections from subtle density
variations in the ocean. Mutes are almost always tapered, to avoid introducing "edge effect" issues when subsequent
processing is applied.
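As an illustration of a tapered front mute (a sketch only; the mute velocity and taper length below are hypothetical parameters, and production mutes are usually hand-picked rather than computed from a single velocity):

import numpy as np

def front_mute(gather, offsets, dt, v_mute=1500.0, taper_ms=40.0):
    """Apply a simple tapered front (outer-trace) mute: samples earlier than
    t = offset / v_mute are zeroed, with a cosine taper below the mute line
    to avoid edge effects. 'gather' has shape (n_traces, n_samples)."""
    n_traces, n_samples = gather.shape
    t = np.arange(n_samples) * dt
    taper_len = taper_ms / 1000.0
    out = gather.astype(float)
    for i, x in enumerate(offsets):
        t_mute = abs(x) / v_mute                      # crude mute time from a single velocity
        w = np.clip((t - t_mute) / taper_len, 0.0, 1.0)
        out[i] *= 0.5 * (1.0 - np.cos(np.pi * w))     # cosine taper from 0 to 1
    return out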


Trace editing involves the removal of anomalous frequency content from the seismic trace. The traces to be killed can easily be
handpicked owing to their difference from the neighbouring wavelets. A trace-killing flow was run in order to remove them,
eliminating the possible effect they might have had on the stack. Given below is an example of trace editing on a seismic
trace.

Effects of muting. Above: raw input; below: after muting and trace editing.

Deconvolution
It is a filtering process which removes a wavelet from the recorded seismic trace by reversing the process of convolution. The
commonest way to perform deconvolution is to design a Wiener filter to transform one wavelet into another wavelet in a
least-squares sense. By far the most important application is predictive deconvolution in which a repeating signal (e.g.
primaries and multiples) is shaped to one which doesn't repeat (primaries only). Predictive deconvolution suppresses multiple
reflections and optionally alters the spectrum of the input data to increase resolution. It is almost always applied at least once
to marine seismic data.
The mathematics of predictive deconvolution require that the autocorrelation of the source wavelet is known. Since this is
rarely true in practice the autocorrelation of the seismic trace is used as an approximation instead. The autocorrelation
function is critical in picking the deconvolution parameters of gap (also called minimum autocorrelation lag) and operator
length (sometimes called maximum autocorrelation lag).

Predictive Deconvolution
Gapped or Predictive Deconvolution is the commonest type of deconvolution. The method tries to estimate and then remove
the predictable parts of a seismic trace (usually multiples). Predictive deconvolution can also be used to increase resolution by
altering wavelet shape and amplitude spectrum. Spiking deconvolution is a special case where the gap is set to one sample and
the resulting phase spectrum is zero.
Spiking Deconvolution:
The process by which the seismic wavelet is compressed to a zero-lag spike is called spiking deconvolution. The spiking
deconvolution operator is strictly the inverse of the wavelet. If the wavelet were minimum phase, then we would get a stable
inverse, which also is minimum phase. The term stable means that the filter coefficients form a convergent series.

Once the amplitude and phase spectra of the seismic wavelet are statistically estimated from the recorded seismogram, its
least-squares inverse, the spiking deconvolution operator, is computed using optimum Wiener filters. When applied to the
wavelet, the filter converts it to a zero-delay spike. When applied to the seismogram, the filter yields the earth's impulse
response.
The process with a type-1 desired output (zero-lag spike) is called spiking deconvolution. Cross-correlation of the desired spike
(1, 0, 0, ..., 0) with the input wavelet yields the series (x0, 0, 0, ..., 0). A flowchart for Wiener filter design and application is shown.

The generalized form of spiking deconvolution is given by the normal equations below, where ri are the autocorrelation lags of
the seismic signal and ai are the Wiener filter coefficients:

Σ_{j=0}^{n−1} aj r|i−j| = x0 for i = 0, and 0 for i = 1, ..., n−1

The Toeplitz matrix of autocorrelation lags on the left-hand side yields a minimum-phase spiking operator, with the maximum
power concentrated at zero lag.
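A minimal sketch of this procedure is given below (Python/SciPy, illustrative only; the filter length and prewhitening values are hypothetical, and the overall scale of the output is arbitrary because the wavelet amplitude x0 is not known in practice).

import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def spiking_decon(trace, n_filter=80, prewhitening=0.001):
    """Spiking (Wiener) deconvolution: solve the Toeplitz normal equations
    built from the trace autocorrelation, with a zero-lag spike as the
    desired output, then apply the resulting filter to the trace."""
    trace = np.asarray(trace, dtype=float)
    # autocorrelation lags r_0 ... r_{n_filter-1}
    full = np.correlate(trace, trace, mode="full")
    r = full[len(trace) - 1:len(trace) - 1 + n_filter].copy()
    r[0] *= 1.0 + prewhitening            # prewhitening stabilises the inversion
    rhs = np.zeros(n_filter)
    rhs[0] = 1.0                           # zero-lag spike (scale is arbitrary)
    a = solve_toeplitz(r, rhs)             # Wiener filter coefficients
    return lfilter(a, [1.0], trace)        # convolve the filter with the trace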

Other types of deconvolution

ADAPTIVE DECONVOLUTION: is a type of deconvolution where the gap and operator are automatically allowed to vary
sample by sample down the trace according to variations in the previous deconvolution performance. This dangerous
process is now rarely applied.
HOMOMORPHIC deconvolution transforms the data to the cepstrum domain where wavelet and earth reflectivity can
be separated.
MAXIMUM ENTROPY or BURG deconvolution uses an entropy criterion to separate the predictable and random
elements of the data and acts as a strong spectral balance.
MINIMUM ENTROPY deconvolution attempts to reduce the disorder of a signal and performs a zero-phase conversion
called Phase Deconvolution in PROMAX.
DIP DEPENDENT: In areas of strong dip and structure the multiple period is not stationary along the trace but may be
stationary in other dip directions. Most usually the data are decomposed into several dip-limited sections by FK dip
filtering, the deconvolution is applied to each dip component and the resulting sections are added together.
TAU-P: deconvolution is an emerging process in which some dip and non-stationary elements are removed from the
data prior to deconvolution by transformation into the tau-p domain.
SURFACE-CONSISTENT: deconvolution is commonly applied to land seismic data and in AVO processing. The technique
ensures that traces from the same surface source and receiver location (or CMP, offset in addition) have the same,
consistent, operator applied.
SPACE-AVERAGE: or ensemble deconvolution in PROMAX is used to apply a single deconvolution operator to a group
of traces such as a shot record. Conventional deconvolution will apply a different operator for each trace.


Velocity Analysis
A sonic log represents direct measurement of the velocity with which seismic waves travel in the earth as a function of depth.
Seismic data, on the other hand, provide an indirect measurement of velocity. Based on these two types of information, the
exploration seismologist derives a large number of different types of velocity: interval, apparent, average, root-mean-square
(rms), instantaneous, phase, group, normal moveout (NMO), stacking, and migration velocities. However, the velocity that can
be derived reliably from seismic data is the velocity that yields the best stack. Assuming a layered medium, stacking velocity is
related to normal-moveout velocity. This, in turn, is related to the root-mean-square (rms) velocity, from which the average
and interval velocities are derived. Interval velocity is the average velocity in an interval between two reflectors. Several factors
influence interval velocity within a rock unit with a certain lithologic composition:
(a) Pore shape,
(b) Pore pressure,
(c) Pore fluid saturation,
(d) Confining pressure, and
(e) Temperature.

For a single flat layer the shape of the moveout curve is defined by the hyperbolic relationship between zero-offset time and
velocity. Velocity analysis methods have been used in the past but today most velocities are picked interactively using
combination displays on processing workstations. Nevertheless, velocity analysis is still one of the most time consuming parts
of seismic processing. It is also probably the most critical stage since the velocity analysis is an initial interpretation of the data
and it is important that the seismic interpreter is involved in the analysis and quality control stages. Velocity analysis is often
carried out several times during processing resulting in an iterative improvement of velocity estimation.
The different types of velocity are:
a) Interval Velocity: It is the constant velocity of a single layer (which can be very thin). Vint can be approximately calculated
from Vrms using the Dix equation (a sketch of this calculation is given after this list).
b) NMO Velocity: It is the velocity required to best NMO correct the data using the hyperbolic NMO assumption. The
difference between Vnmo and Vstack is subtle.
c) RMS Velocity: For multiple flat layers and assuming the offset is small compared with the depth, a hyperbolic moveout
equation can be derived as a truncated power series in which Vrms is used as velocity. The root-mean-square (RMS) velocity is
calculated from interval velocities as shown in the figure. At large offsets more accurate NMO corrections can be performed by
retaining the next term of the equation - this is usually referred to by contractors as fourth order or higher order NMO
correction. For many targets this can become important at offsets greater than around 3km.
d) Stacking Velocity: It is the velocity required to best stack the data using the best-fit hyperbola over the available offset
range. The choice of Vstack can be rather subjective. However, an appropriate choice can cover up for a multitude of
assumptions made in the CMP stacking process. For horizontal layers and small offsets Vstack should equal Vrms. For dipping
layers a higher velocity is required since Vstack = Vrms/cos(dip). Note this assumes no 3D effects. The application of DMO mostly
removes the effects of dip from Vstack such that Vstack approximates Vrms and interval velocities computed from the DIX equation
should be stable.
e) Average Velocity: It is the depth divided by the two-way time to any interface. Vavg is often used for depth conversion but is
only valid where the velocity varies only vertically.
f) Migration Velocity: It is the velocity required to best migrate the seismic data and is related to the true interval velocity, not
the stacking velocity.
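As referenced in item (a), interval velocities can be derived from picked RMS (stacking) velocities with the Dix equation. A minimal sketch is given below (Python/NumPy; the pick values in the example are purely illustrative).

import numpy as np

def dix_interval_velocity(t0, v_rms):
    """Interval velocities from RMS velocities via the Dix equation:
    Vint_n = sqrt((Vrms_n^2 * t_n - Vrms_{n-1}^2 * t_{n-1}) / (t_n - t_{n-1})),
    where t is the zero-offset two-way time of each pick."""
    t0 = np.asarray(t0, dtype=float)
    v_rms = np.asarray(v_rms, dtype=float)
    num = v_rms[1:] ** 2 * t0[1:] - v_rms[:-1] ** 2 * t0[:-1]
    den = t0[1:] - t0[:-1]
    return np.sqrt(num / den)

# Example picks (two-way time in seconds, RMS velocity in m/s)
if __name__ == "__main__":
    t0 = [0.5, 1.0, 1.8, 2.6]
    v_rms = [1500.0, 1700.0, 2000.0, 2300.0]
    print(dix_interval_velocity(t0, v_rms))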

Velocity Analysis Methods:


PRECONDITIONING
Data must be appropriately pre-processed prior to velocity analysis. The processing to be applied will depend on the purpose
of the analysis since velocities will be picked at several stages during a processing sequence. For the first pass of velocity
analysis the data will usually have been deconvolved, sorted to CMP gathers and muted. Subsequent passes may require
application of multiple suppression, DMO or prestack migration prior to the velocity analysis. Depending on the processing
system being used the entire data may be supplied to the velocity analysis routine or more usually the data is edited to a
subset which defines the velocity analysis locations. The data may also be bandpass filtered and scaled to reduce noise prior to
velocity analysis computations. Severe noise contamination may be removed by dip-filtering. In areas where velocity analysis is
particularly difficult harsher procedures may be used to prepare data for velocity analysis than would be tolerated in
production processing.
VELOCITY ANALYSIS INTERVAL
Velocities are usually picked at discrete spatial intervals along a seismic section and the velocity field linearly interpolated
between analysis points. The spatial and temporal sampling interval will depend on the degree of lateral velocity variation and
should be sufficient to define the structures under consideration. In theory extra analysis points can be used at any point to
more accurately define a particularly complex geological structure. In practice this option is rarely pursued, although some
contractors now use automatic picking routines to attempt to in-fill velocities from hand-picked seed points. The reliability of
these methods depends on the constraints employed within the picking algorithm. For the first pass of velocity analysis a
coarse interval of 500 CDPs is taken. In the next iteration the velocity data from the 500 CDP interval are taken as a reference to
compute the 250 CDP interval. The method is iterated to compute the 100 CDP and 50 CDP interval velocity analyses, with the
previous velocity data serving as a reference for computing the current velocity data.
INTERACTIVE VELOCITY ANALYSIS
There are several methods of stacking velocity analysis. The preferred method depends on the data under consideration and
the preferences of the velocity picker. Almost all velocity analysis today is performed interactively on a screen using a
combination display configured according to user preference. Animated displays are common and show the results of applying
the NMO and stacking the data with the velocities chosen. Some systems will calculate the velocity analysis on the fly as
requested by the user but most systems expect the pre-computation of the velocity analysis displays. On some displays the
interpreter can pick several key horizons which the velocity interpreter can use as main velocity boundaries. Depending on the
geological province this method is critical, for example if velocities are to be picked for depth migration purposes. When
picking horizons care should be taken to ensure the velocity interpolation stage can handle pinchouts and other more complex
geological structures.
CONSTANT VELOCITY STACKS (CVS): In this approach around 10 (or more) adjacent CMPs are selected around each location
point. The CMPS are NMO corrected and stacked using a defined range of constant velocities e.g. 1500 to 5000m/s with an
interval of around 200m/s. The mini-stack panels are displayed next to each other and velocities picked where key events show
the highest amplitude or greatest continuity. The method shows what the data will look like if stacked with the chosen velocity
but has a resolution limited to the velocity interval chosen. This method is generally applied for data having poor signal to
noise ratio.
FUNCTION VELOCITY STACKS (FVS): This is a common form of display in which the range of velocities used for the stack panels
is defined by percentage variations from a single (best-choice) function. The individual panels show high resolution but the
quality of the panels depends on the accuracy of the initial function used. A combination display would usually show the
central gather of the panels NMO corrected using the range of function velocities. The PROMAX display shows the NMO
corrected gather in the centre of the display with function velocity stacks to the far right. The gather and stack displays are
interactively updated as picks are made. Stack display (a) is the stack with the currently picked velocity function, stack (b)
provides an animate display of the original FVS stacks as the picks are altered. The colour background display reflects interval
velocity.
VELOCITY SPECTRUM: The velocity spectrum display (shown on the left of the PROMAX screen) is calculated by determining
how well a given hyperbolic event matches real events on the central CMP gather. In this project we have chosen a velocity
range between 1000 and 4000 m/s. These represent far more velocity trials than can be performed using CVS or FVS analysis. The
maximum amplitude of coherence is expected where the hyperbola best fits a given high amplitude seismic event. The
measure of coherence most often used is called semblance which is robust to noise, spatial aliasing and lateral variations in
amplitude. There are various methods of displaying semblance but almost always on modern systems a colour contour display
is used with blue representing low semblance and red representing high semblance areas. The axes of the display are velocity
(horizontal) and zero-offset time (vertical). The semblance would usually be calculated for the central gather of a group of 10
(sometimes called a supergather), but sometimes an average of three adjacent gathers is used in order to reduce noise.
Averaging too many gathers would increase computation time and may start to filter out geological variations. The velocity
interpreter would make picks either on the semblance clouds or on the stack displays. The velocity spectrum is good for
identification of multiple reflections.
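The semblance measure described above can be sketched as follows (Python/NumPy, illustrative only; a velocity spectrum is obtained by evaluating this coherence for a grid of (t0, velocity) trials, and the window length below is a hypothetical choice).

import numpy as np

def semblance(gather, offsets, dt, t0, velocity, window_ms=20.0):
    """Semblance coherence for one (t0, velocity) trial: sample the gather
    along the hyperbola t(x) = sqrt(t0^2 + x^2 / v^2) by nearest-sample
    lookup and measure coherence in a small time window around t0.
    'gather' has shape (n_traces, n_samples)."""
    n_traces, n_samples = gather.shape
    half = int(round(window_ms / 1000.0 / dt / 2))
    num, den = 0.0, 0.0
    for k in range(-half, half + 1):
        t_zero = t0 + k * dt
        stacked, energy, fold = 0.0, 0.0, 0
        for i, x in enumerate(offsets):
            t_nmo = np.sqrt(t_zero ** 2 + (x / velocity) ** 2)
            j = int(round(t_nmo / dt))
            if 0 <= j < n_samples:
                stacked += gather[i, j]
                energy += gather[i, j] ** 2
                fold += 1
        if fold > 0:
            num += stacked ** 2          # energy of the stacked samples
            den += fold * energy         # total energy of the input samples
    return num / den if den > 0 else 0.0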
Quality Control of Velocity Data: For quality control the velocity picks from the previous CMP gather should be displayed in the
next CMP gather. This allows the processor to check that the variation of pick from one location to the next is consistent with
that expected from the geology. Generally lateral velocity variations are considered to be smoothly varying except where salt is
present. The RMS velocity should increase with depth. Sometimes picks are generally too high to avoid chains of multiple
reflections. Out-of-plane reflections may also appear with anomalous velocities and should be avoided. If RMS velocities are
picked too closely together then the interval velocity will fluctuate rapidly and may turn negative. Generally the interval
velocity will increase with depth due to compaction. However the velocity may decrease beneath a fast layer such as chalk or
basalt. This is referred to as a velocity inversion. Erroneous picking of multiples may also cause a velocity inversion.

For a 2D survey velocities are expected to be consistent between profiles. For a 3D survey it is easy to QC in map
form - either along timeslices or extracted horizons. Some processing systems will allow the calculation of contoured
time slices from 2D data which can be a very rapid and highly effective method of quality control. Depending on the
degree of quality control required the process will take several days. Once an initial QC has been completed a rapid
final QC is to display velocity lines (or target portions of selected key lines) stacked with the picked velocity field and
with small percentage variations, e.g. ±3% and ±5%. In this way the accuracy of the picks and the effect on section
appearance can be rapidly checked by the interpreter for further improvements.


Velocity Analysis Window

Brute Stack
The processed seismic record containing the traces after deconvolution is sorted into common midpoint gathers and
normal-moveout corrected using the NMO velocities obtained from velocity analysis. These CMPs are then stacked together,
resulting in a stack section that is representative of the underlying formations. No static corrections are applied to the stack,
as this is a marine data set. The improvement of the seismic trace after the brute stack is evident.
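A minimal sketch of the NMO-correct-and-stack step for one CMP gather is given below (Python/NumPy, illustrative only; the picked velocity function is assumed to have been interpolated onto the output time axis).

import numpy as np

def nmo_correct_and_stack(gather, offsets, dt, v_nmo):
    """NMO-correct a CMP gather with a velocity function v_nmo (one value
    per output time sample) and stack the corrected traces.
    'gather' has shape (n_traces, n_samples)."""
    n_traces, n_samples = gather.shape
    t0_axis = np.arange(n_samples) * dt
    corrected = np.zeros((n_traces, n_samples), dtype=float)
    for i, x in enumerate(offsets):
        # travel time on the NMO hyperbola for every zero-offset output time
        t_nmo = np.sqrt(t0_axis ** 2 + (x / v_nmo) ** 2)
        corrected[i] = np.interp(t_nmo, t0_axis, gather[i], left=0.0, right=0.0)
    return corrected.mean(axis=0)   # the stacked (brute stack) trace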

Stack Image in Variable Density Mode


IMAGE PROCESSING
After seismic processing we applied our genetic algorithm to the stacked image to enhance certain seismic attributes. A
MATLAB code was written to apply the genetic algorithm to the seismic image. The main objective of the image processing is to
enhance the contrast of certain structures in the image. For that, the seismic image was first converted into its grey-scale
equivalent and then higher contrast was manually applied to our areas of interest. This image, shown below, serves as the
objective image.

Stacked Image in grayscale after seismic processing


Objective Image prepared by putting high contrast in region of interest and low contrast elsewhere

With the grey-scale stacked image as f (the image to be processed) and the image above as d (the objective image), the GA was
executed several times with different initial populations, and the best organisms from each run were used as the initial
population for a new execution. Starting with a population of 200, we decreased the population size in each iteration. The
number of generations was also restricted to 200 to avoid whitening out of the image. The parameters of the GA include a 15%
mutation rate, which was later reduced owing to the constraints imposed on the kernel values; weighted random pairing was
chosen as the parent-selection procedure as it yielded better results. The most suitable values for the GA parameters, based on
several iterations, are given in the table below.

Populations: 50
Number of Generations: 30
Mutation Rate: 5%
Constraints: -5 < x < 5
Initial Population: Constraint dependent
Fitness Scaling: Rank
Selection: Roulette Wheel Selection (weighted random pairing)
Crossover: Constraint dependent

Given the nature of the GAs, their results are not unique, i.e., several possible solutions for the same problem can be obtained.
Some of the results are depicted below along with the kernel values and other parameters.

Results with 100 generations and with x=[


Results with 50 generations and with x=[

Results with 50 generations and with x=[

Results with 30 generations and with x=[


Results with 30 generations and with x=[

The above image shows significant improvements over the stacked image. The improved areas are indicated in the figure
below. The improvement of the anticline structures, indicated by a and b, and of the layers indicated by c, shows that the given
kernel is fit for image enhancement. Compared with the original stacked image, the anticlines (a and b) are better defined in
the resultant image. Similarly, the layer continuity indicated by c is depicted better in the image below than in the original one.

Improved Image

Noise Removal
Although the image is enhanced, during the convolution of the image with the kernel certain noise elements are incorporated
into the image, which makes it difficult to interpret the improved results. Therefore several noise filters were designed to
reduce the noise level in the images.
a) Gaussian Blur (also known as Gaussian smoothing): It is the result of blurring an image by a Gaussian function. It is a widely
used effect in graphics software, typically to reduce image noise and reduce detail. The visual effect of this blurring technique
is a smooth blur resembling that of viewing the image through a translucent screen, distinctly different from the bokeh effect
produced by an out-of-focus lens or the shadow of an object under usual illumination. Applying a Gaussian blur to an image is
the same as convolving the image with a Gaussian function.


Result of Gaussian Blur on the improved image

b) Adding speckle noise and then applying a Wiener filter to the image: The speckle-noise variance is chosen so that it
resembles the mean noise in the image; for this image we chose a variance of 0.005. The Wiener filter, which produces an
estimate of a desired or target random process by linear time-invariant (LTI) filtering of an observed noisy process, is applied to
the image with a kernel size of (7, 7). Noise levels reduced significantly after applying this method.
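A minimal sketch of these two noise-reduction steps is given below (Python/SciPy, illustrative only; the report's MATLAB workflow of adding speckle noise before filtering is not reproduced here, and the Gaussian sigma is a hypothetical parameter).

import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import wiener

def denoise(image, gaussian_sigma=1.0, wiener_size=(7, 7)):
    """Apply a Gaussian blur (convolution with a Gaussian kernel) followed
    by an adaptive Wiener filter with a 7 x 7 window."""
    blurred = gaussian_filter(image.astype(float), sigma=gaussian_sigma)
    return wiener(blurred, mysize=wiener_size)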

Significant decrease in noise after Wiener Filter is applied

The layer contrast increased as a result of noise filtering


Conclusion
Results show that convolution kernels that may not have a theoretical background (in contrast with edge detectors, for
example) can still enhance characteristics of interest in seismic images. We found the kernels using genetic algorithms as an
optimization technique. Our GA was mainly developed following a very common structure in evolutionary computing but
involved the processing of matrices instead of the more commonly-used strings. One of the kernels we found proved to be
useful for both manual and automatic interpretation. The filtered image permitted the identification of an anticline that was
originally interpreted as a dipping layer. While manual seismic interpretation is a subjective task, the filtered image allowed for
a more consistent interpretation according to commonly accepted interpretation guidelines.
One of the problems we faced early on was complete whitening of the convolved image during the optimization process. This
problem was avoided by putting constraints on the variables and designing the other parameters based on those constraints.
In order to restrict randomization in the solution space we reduced our mutation rate to 5% from 15%. Designing the
parameters of the genetic algorithm was one of the priorities in this project.
A further objective was to find those values of the convolution kernel that maximize the contrast of the central tendencies
between the sets of values representing regions with and without the presence of mud diapirs. Such a search would minimize
the distance between obtained images corresponding to diapir bodies and their projection onto a mud-diapir space, while
maximizing that distance for those which do not. These possible improvements, along with our promising results, encourage us
to believe this method could be useful for the enhancement of different seismic patterns.

