
WA School of Mines

Mark/Grade:

Unit Code:

GEOP4000

Unit Name:

GEOP4000 Seismic Imaging and Modelling


Lecturer(s):

Sasha Ziramov (Dr), Milovan Urosevic (Prof), Aleksandar Dzunic (Prof)

LAB BOOK FOR GEOP4000 SEISMIC IMAGING AND MODELLING


(Assignment Title)
I declare that this assessment item is my own work, except where acknowledged, and it has not been submitted for
academic credit elsewhere, and acknowledge that the assessor of this item may, for purposes of assessing this item:

Reproduce this assessment item and provide a copy to another member of the University; and/or

Communicate a copy of this assessment item to a plagiarism checking service (which may then retain a copy of
the assessment item on its database for the purpose of future plagiarism checking).

I certify that I have read and understood the University Rules in respect of Student Rights and Responsibilities
(details of which can be found at: http://students.curtin.edu.au/administration/responsibilities.cfm).
Name of Student: PAUL SSALI
Student Number: 17390138___
Signed: Paul Ssali ___

Date: 15/06/2015

____________

Note: unless stated otherwise, assignments must be lodged with the Unit Coordinator or in the relevant WASM assignment box.

Lab 1 and Lab 2 - Building a velocity structure, seismic modelling and migration
Seismic Processing 423
Instructor: Milovan Urosevic
Assistants: Aleksandar Dzunic, Sasha Ziramov
Concepts:

ACTION: ProMax module
1 Create velocity model: Interactive Velocity Editor*
2 Smooth velocity model: Velocity Viewer/Point Editor
3 Create a zero-offset section using the exploding reflector concept: Finite Difference Modeling
4 Display seismic section in time: Trace Display
5 Create average velocity in time from interval velocity in depth: Velocity Manipulation
6 Convert time section to depth: Time/Depth Conversion
7 Display depth section with velocity model: Interactive Velocity Editor*
8 Create RMS velocities from interval velocities: Velocity Manipulation
9 Time migration: Memory Stolt F-K Migration
10 Display migrated section: Trace Display
11 Convert time migrated section to depth: Time/Depth Conversion
12 Display migrated section in depth: Trace Display
13 Display migrated section in depth with velocity model: Interactive Velocity Editor*

Software: ProMax

Goal of the Lab


This lab involves developing velocity models, working through the various flows and routines, converting
interval velocities to average velocities in both time and depth, and eventually migrating the section with different
techniques (migration algorithms); the real end point is evaluating the final migrated output against the
starting velocity model.
Based on that, note and comment on the efficiency, accuracy, consistency and weaknesses of each migration
method in relation to the various velocity models (complexity of the initial velocity field).
An analogy for the various migration processes is fitting different eye lenses to different users: each resolves
the scene with a different degree of accuracy or distortion.

Start the ProMax software through the Teaching Cluster.

Enter the password S<student number> and continue.

User name: S17390138,


Password: geophysics,

In the Linux template, enter the password

geop4000 and press Enter.

The ProMax software opens and assigns an area according to the student name.
Open a line LAB_01, then assign a line LAB_0102 (as shown below).

Create a new flow by clicking Add > LAB_0102 (for the line name).

Step1: Setting up the velocity models


Procedure:

Add the ------Add Flow Comment------- routine to the flow editor.


Add Interactive Velocity Editor* to the flow.
Separate that routine with ------Add Flow Comment-------.

The settings of this routine are as below


Purpose of this Routine

This routine allows setting up of models (creating the velocity model), i.e. interval velocity in depth.
Give the created interval velocity model (in depth), or its database, a name; in this case it is VMDL_01.
Specify units (feet or meters) and minimum and maximum depth.

Next, run the model.

With the menu above, you can create a model by the following procedure:

MB1 (left) click on Add at the top, and make sure it is active (blue shaded).
MB1-click the corners of the polygon, and at the last corner (joining to the first corner, where the polygon closes)
click Close at the top. This closes the polygon.
If corners or sides have to be adjusted, click Move and drag the point into place with MB1.
If drawing a polygon that shares corners or sides with an existing polygon, use MB2 (middle wheel) at common corners and MB1 at new corners, and at the last corner click Close.

This procedure is used to create all the shapes in the model being created

Step1.1: Assigning Velocities to the different shapes or bodies

Step1.1.1: to create other models

--------Add Flow Comment-------- to separate the previous Interactive Velocity Editor module.
Add Interactive Velocity Editor* to the flow.
Separate that routine with ------Add Flow Comment------. You will give this other model another name, VMDL_02.

Middle-click and select the name of the second model you're creating.

Click on the INVALID name; it will display the parameter file for Interval Velocity in Depth.
Click Add and enter the name for the second velocity model, VMDL_02.

Below is the model name; adjust all the other parameters accordingly.

Then execute it (remember to deactivate the other flows each time you're running a certain flow).

Follow the procedure above for shaping and creating the second model and assigning velocities
Doing the velocity models
Procedure
Start by adding Add Flow Comment in the flow editor.


Select Interactive Velocity Editor.

At this interim stage, the model looks like this below

Adjusting the model


I might want to shift the top-right common vertex further to the right, and the bottom-right corner in the pink area further to the right, to make it more like the given model in the notes.
To do this, click Move at the top and MB1 click-hold, then drag and drop as required.

Editing the velocity structure shapes and internal velocities


Following the above editing of the velocity structure shapes, you can select each new structure and re-assign its internal
velocities.
Procedure:

On the top click Move


MB1-click in the space of that shape to select it (make sure it is a single MB1 click, otherwise a second click affects it).
Go to the bottom, click Input / Set velocities, and change appropriately.
If it plays up (is not accurately responsive), switch between attempts by clicking another top button and immediately go
back to the select button.

Functions of different tools

Exiting and saving the velocity model


Click File, hold, and pull the cursor down while holding. A menu with saving options is displayed on the side,
which allows you to either:

Save and exit the velocity model


Exit without saving

To save, write the table into the database for safe keeping.

To do a model like model 5, with a body in the middle:

To assign the egg-shaped section in the middle, it has to be picked all round using MB2 (middle wheel), and at the last pick (joining the first) click Close.

Go ahead and assign it a velocity.

Below is VMDL_03 (in depth)

Save it (write the table and polygon for safe keeping).

To set up the others, you can duplicate the routines in the previous Interactive Velocity Editor (using the copy/paste
technique) and change the name of the output.

Set up (click Add) new velocity model name

Below is VMDL_06 velocity model (in depth)

Step2: Smoothing the Interval velocity (with depth) models using VELOCITY VIEWER/
POINT EDITOR
We're to use the Velocity Viewer / Point Editor* flow, input any of the models set up above, and select the name of the output
velocity database (VMDL_01_SMTH), which in this case is the smoothed model.

Steps:

Click the INVALID entry and select the input velocity database; browse and select VMDL_01.

Select the name of the output (smoothed) velocity file.

The flow parameters should be as below

Run it; below is the output smoothed VMDL_01 velocity model in depth, saved as VMDL_01_SMTH.

All the other velocity models can be smoothed using the same procedure.

Using the above procedure, the other models VMDL_06 and VMDL_05 can be smoothed with the same routine.
As below, activate the copied routine, edit the input VMDL_06 (velocity model created in depth) and the output VMDL_06_SMTH
(smoothed velocity model in depth).

Select input

Name the output (smoothed)

The finalised flow / routine should have parameters set as below

Execute it and it should produce a smoothed output of your velocity model

This model is plotted with CDP number on the X-axis and depth on the vertical axis; clicking in it gives the velocities.

Step3: Creating a zero-offset section using the Exploding reflector concept using FINITE
DIFFERENCE MODELLING
Separate the flow using ------Add Flow Comment------- and add another routine, FINITE DIFFERENCE MODELLING.

Set parameters as below

Add Disk Data Output and select name of the zero-offset section created

Save the name of the zero-offset section (in time).

From the file name, this section is already stacked in time.

Execute this flow; this saves the zero-offset section (in time) calculated by the Finite Difference routine applied.
Make sure it runs successfully. (A sketch of the exploding-reflector idea follows.)
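For intuition, here is a minimal sketch (Python, with purely illustrative layer values, not the lab's model) of why the exploding-reflector trick works: one-way propagation through a half-velocity model reproduces the two-way zero-offset times of the true model.

# Exploding-reflector check: one-way time through a half-velocity model
# equals the two-way zero-offset time through the true-velocity model.
layer_thickness_m = [500.0, 800.0, 1200.0]   # illustrative layer thicknesses
interval_vel_ms = [1500.0, 2500.0, 3500.0]   # illustrative interval velocities (m/s)

two_way_s = sum(2.0 * h / v for h, v in zip(layer_thickness_m, interval_vel_ms))
one_way_half_vel_s = sum(h / (v / 2.0) for h, v in zip(layer_thickness_m, interval_vel_ms))
print(two_way_s, one_way_half_vel_s)         # identical: 2h/v == h/(v/2)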

If we want to display the output of the above process, we have to do the following:


Read in the output file of the previous process using Disk Data Input.
Add Trace Display to enable us to view the output.

Select the input as output of previous process

Set parameters of trace display as below

Below is the output of the execution; the same model is shown in grey scale or VA.

Step4: Applying a BAND-PASS FILTER to preserve a range of frequencies and filter out very
low and very high frequencies outside the specified range.

The flow will involve:


Disk Data Input to read in the data (note that this is the stacked zero-offset velocity section in time)
Band-pass filter to remove frequencies at the extreme ends of the range (very low and very high; a sketch follows this list)
Disk Data Output to save the band-pass-filtered output of the velocity section
Trace Display to display it
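As a rough analogue of what the band-pass routine does (ProMax's own filter has its own parameterisation; this sketch assumes a 1 ms sample interval and illustrative corner frequencies):

import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                       # 1 ms sampling assumed (1000 samples/s)
low_hz, high_hz = 8.0, 80.0       # illustrative pass band, not the lab's values
b, a = butter(4, [low_hz, high_hz], btype="bandpass", fs=fs)

trace = np.random.randn(3000)     # stand-in for one 3 s seismic trace
filtered = filtfilt(b, a, trace)  # filtfilt gives a zero-phase result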

Select the stacked in-time velocity dataset from the previous process.

Set parameters of band-pass filter as below

Select parameters of disk data output, name of output file, record length = 3000

Set name of output file

Note that the name LM1_STK_time_Filt denotes a stacked time section that has been filtered.
The Disk data output parameters should be set as below

Trace display parameters as below

Execute it. Below is our zero-offset, stacked, in-time image after the band-pass filter, which makes it clearer.

Step5: Velocity manipulation


Objective: creating average velocity (in time) from the interval velocity in depth section created in the
previous process.
Note that velocity manipulation helps convert from time to depth and vice versa.

Using ------Add flow comment------- to separate new flow from previous flow
Velocity manipulation flow
So we're to put in our interval velocity model (in depth), initially created, and output an average velocity in
time.

Select interval velocity in depth and select one of the initial velocity models in depth created e.g VMDL_01

SELECT INPUT

Select the input as interval velocity in depth, or you can use the smoothed velocity.

Select the output type and pick average velocity in time.

Select name of output (average velocity in time)

Separate the flow and then run it. (A sketch of the interval-to-average conversion follows.)
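In sketch form, what this conversion computes (illustrative layer values): the average velocity down to a reflector is total depth over one-way time, accumulated layer by layer.

# Interval velocity in depth -> average velocity in (two-way) time:
# V_avg = sum(v_i * dt_i) / sum(dt_i) = 2 * depth / two-way time.
layers = [(500.0, 1500.0), (800.0, 2500.0), (1200.0, 3500.0)]  # (thickness m, v m/s)

t, z = 0.0, 0.0
for h, v in layers:
    t += 2.0 * h / v              # two-way time spent in this layer
    z += h
    print(f"t={t*1000:.0f} ms  V_avg={2.0 * z / t:.0f} m/s")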

Step6: Time-depth Conversion


Objective: converting the average velocity (in time) section generated in the previous section into average velocity
(in depth).
Disk data input L1M1_STK_time_Filt (put in the stacked velocity section in_time and filtered)
Time /Depth conversion VMDL_01_SMTH_AVG
Disk data Output LAB_01_MDL_01_STK_depth
Under Disk data input select

Parameters of Disk data output

Note: in GET VELOCITY from database click YES, and in the next step go and create the table VMDL_01_SMTH_AVG.

Parameters of the Time/Depth Conversion should be as below

The flow should be as below

Below is the output of the Time/Depth conversion

Explanation of the ringing effects


The ringing effects (hyperbola-shaped features) are due to diffractions at edges or discontinuities of reflectors (reflective
surfaces), because each non-flat or jagged feature on the surface of the reflector causes diffraction (ringing).

Step 7: Interactive Vel Editor*


Objective: display the average velocity (in depth) section with the velocity model (to compare the depth
section model with the initial input velocity model).

Using -----Add flow comment------- to separate previous flow


Interactive velocity editor

The flow should be as below

Select new name for the output file of the Velocity_depth section and velocity_time section

Below is the stacked seismic section (converted to depth) overlaid with the initially set interval velocity (in depth)
model.
Note that there is a mismatch between the seismic section and the velocity model because the seismic image is not well
migrated.

Step 8: Velocity Manipulation


Objective: creating a stacking RMS velocity (Vrms) section (in time) from interval velocities
in depth.
Note:
Velocity manipulation is for switching from one type of velocity model to another, e.g. interval-to-average, interval-to-RMS,
average-to-RMS, or interval velocity (in depth) to interval velocity (in time).
Note that we're inputting a smoothed interval velocity in depth model, e.g. VMDL_01_SMTH, and we would like a
stacking Vrms velocity model (in time). (A sketch of the interval-to-RMS relation follows.)
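For reference, a sketch of the interval-to-RMS relation the module applies (the forward form of the Dix relation; values illustrative):

import math

# V_rms(t_n)^2 = sum(v_i^2 * dt_i) / sum(dt_i), with dt_i in two-way time
intervals = [(0.667, 1500.0), (0.640, 2500.0), (0.686, 3500.0)]  # (dt s, v m/s)

t, num = 0.0, 0.0
for dt, v in intervals:
    t += dt
    num += v * v * dt
    print(f"t={t:.3f} s  V_rms={math.sqrt(num / t):.0f} m/s")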

Using -----Add flow comment----- separate new flow from previous flow.
Velocity manipulation*
Using -----Add flow comment----- separate new flow from next flow.

Input a smoothed interval velocity in depth model e.g. VMDL_01_SMTH

To select the output velocity database entry, click Add and type in the name.

The parameters of the velocity manipulations should be set as below

Step 9: Time / Depth Conversion


Objective: Converting from time section to depth section and vice-versa before migration.
Note:
Recall the previous flow created an average velocity (in time) section, so this flow is to convert that to average velocity
in depth.

Step 9: Migration
Objective: to move all events to their points of origin, not where they were
imaged or recorded; e.g. all reflections correctly moved to true reflection points rather than imaged
points.
Note:
The Velocity Manipulation* routine preceding migration is to convert interval velocity (in depth) to interval velocity in
time, and to select an output file, V_interval_time.

The flow / routines should be as below

Set parameters of disk data input as below

Set parameters of F-K migration as below

Band pass filter parameters

Disk data output


Create the name for the migrated output

Set Disk data output parameters as below

In order to display it:
Separate the previous flow with -----Add flow comment----, add Disk Data Input to read in the migrated dataset, and
Trace Display.

Disk data input select migrated dataset

Parameters of Disk data input

Trace display

Below is the migrated seismic section in grey scale

Explanation of the ringing effects


The ringing effects (hyperbola-shaped features) are due to diffractions at edges or discontinuities of reflectors (reflective surfaces),
because each non-flat or jagged feature on the surface of the reflector causes diffraction (ringing).

Techniques in Polishing the image


A number of aesthetic, image-polishing and sharpening options exist under the View and Animation tabs.
Traces can be viewed in any of the modes, gains can be reduced, etc.
Using the various tools the image can be edited.


Conversion from Time to depth


This flow is meant to convert the time-migrated time-CDP image into a depth-migrated depth-CDP image.

For disk data input select time migrated image in the previous step

Set parameters for Disk data input as below

Set parameter of Time/Depth conversion as below

Set parameters of trace length as below

Select name of output file

Set parameters of Disk data output as below

To display the above depth migrated image

Read-in depth migrated image file

Set parameters of trace display as below (very important to primarily sort by CDP)

Below is the depth_migrated image

Below is the depth migrated image in VA / Grey scale settings

Step 10: Migration


Objective: to display the migrated section (in depth) with the initial velocity model set-up (interval
velocity in depth model).
Select parameters as below

Below is the output of the process: the migrated section (in depth) overlaid with the initially set velocity model (in
depth).

Comment on the efficiency of the migration process, i.e. the position of velocity boundaries on the seismic section vs. the position of boundaries on
the velocity model.
Do this for the other models and discuss results.

Lab 3 - Depth Migration


Seismic Processing 423
Instructor: Milovan Urosevic
Assistants: Aleksandar Dzunic, Sasha Ziramov
Objective:
The objective of this lab is to compare results from post-stack time migration and post-stack depth migration against the original velocity
model.
Procedures:
ACTION: ProMax module
1 Display time sections: Disk Data Input, Trace Display
2 Migrate (depth domain) seismic time section: Disk Data Input, Explicit FD Depth Migration, Disk Data Output
3 Display depth migrated sections: Disk Data Input, Trace Display
4 Display depth migrated sections with velocity models: Interactive Velocity Editor*

Software: ProMax
Procedure 1:
Create job flow

Fig. 1 Job flow for post-stack depth migration


Step1: Display stacked time section with the flow below

Set parameters of Disk data input as below

Remember, the dataset you're choosing is the stacked time section.

Set parameters of trace display as below

When you run it

Procedure 2:
Migrate seismic time sections using ProMax module Explicit FD Depth Migration.
Note: see the module help to understand the function of Explicit FD Depth Migration.

DISK DATA INPUT:


Read-in stacked time section data

Set parameters of Explicit FD Depth migration as below

Under disk data output

Add name of output file

Execute the above flow; it completed successfully.

Procedure 3 and 4:
Display depth-migrated sections using Trace Display and Interactive Velocity Editor. Compare depth and time post-stack migrated sections with the velocity model.
TRACE DISPLAY:
Under trace display

set parameters of disk data input as below

Set parameters of trace display as below

Lab 4
Set parameters of interactive vel Editor

Run it and set the velocity model


Set up the velocity model (remember: MB1 for a new point and middle wheel for a common point).

Below is the velocity model LAB04_VDML set up

Step2: Smoothing the Interval velocity (with depth) models using VELOCITY VIEWER/
POINT EDITOR
We're to use the Velocity Viewer / Point Editor* flow, input the model set up above, and select the name of the output
velocity database (LAB04_VDML_SMTH), which in this case is the smoothed model.

Set parameters of smoothing model as below

Below is the smoothed model

Procedure 3:
In the flow 020 PreStack_MDL execute Finite Difference Modeling module with parameters listed below (Fig. 4 and 5) and save
data using Disk Data Output.

Under that flow add the following routines.

Set the parameters of finite difference as below

Select name of output file

Disk data input to display file

Set parameters for trace display as below

Run it; below is the output for the single shot.

We can also set it to generate 10 shots.

To generate the 150 shots:


Run it and monitor it through File > Monitor; this will show the progress of the process.

When the above has run and completed the entire 150 shots, the following message is presented at the end of the window.

Procedure 4: Displaying the 150 shots

Fig. 10. Disk Data Input parameters


QC that the data is ok for further processing
Set parameters for trace display

When you run it, below is the display

Note: you may instead copy in the already-made dataset 150_PSTK_MDL_SHTS from AREA: tutor, LINE: GP423, as well as the
velocity model L03_VMDL (interval velocities in depth).

Fig. 8. Parameters used for 150 shots FD modelling (Do not execute those parameters!)

Fig 9. Velocity model smoothed in Velocity model Viewer/Point Editor


In the flow 020 Prestack_MDL (Fig. 10):
-

In the Disk Data Input module within a Trace Display option, use Sort instead of the Get All parameter. Select primary
key: Live source number.
In the Sort order list for the dataset, select the range from the 10th to the 150th shot record with a step of 10 shots: 10-150(10).
In the Trace Display module, set the Number of ENSEMBLES parameter to 15.

Step5: Assigning Geometry to Dataset


This step assigns geometry to the seismic dataset (i.e. the 150-shot dataset prepared above).
Below is an extract of the entire flow from the lecture notes; the following parts break it down into individual steps.

Fig. 11a. Flow Geometry

Execute flow 030 Geometry (Fig. 11a). Using the DDI module and Extract Data Base Files, the database is initiated and updated.

In the Disk Data Input menu, select the seismic dataset 150_PSTK_MDL_SHTS for the 150 shots prepared in the previous
steps.

Set parameters of disk data input as below

Set parameters of Extract database files as below


Separate the above flow by adding -------Add flow comment----, then add the routine 2D Land Geometry Spreadsheet.
Run it and you will have a blank geometry table.

Steps in properly assigning geometry (extracted from the lecture notes): below is the summarised workflow for assigning geometry. The steps are broken down for clarity in the next pages.

Execute the interactive module 2D Land Geometry Spreadsheet*. Instructions on how to fill in the spreadsheets can be found
on the U-drive, Fig. 11b (ProMax module help): 2D Land Geometry.pdf and 2D_Geometry_how2.pdf. Finally,
execute the last flow segment (DDI, Inline Geom Header Load, DDO) in order to update the trace headers.
Since the header values from the modelled shots have been extracted directly into the ProMAX database, in the
Setup table choose Existing Index number mapping in the TRC and press OK.

Proceed directly to BINNING phase, following offered steps:


1. Assign midpoints
2. Binning
3. Finalize database
Step-1: under Binning, choose Assign midpoints by: Existing index number mappings in the TRC, and OK.

Click proceed on warning

Click OK to confirm successful geometry assignment

Step-2: under the 2D Land Binning menu, the next step is to click Binning, then OK.

The binning algorithm runs; click OK to confirm.

Step-3: under the 2D Land Binning menu select Finalise database and OK

Click OK to confirm geometry finalisation

The above should complete the geometry assignment process; however, in case the Receiver and Source tables are incomplete
and some importing work is needed, you may follow the procedure below.

Step4: QC and Viewing the assigned geometry


Click on Receivers; the Receiver table is completely populated with trace header values.

Below is the last part of the geometry table.


Note that the geometry table can be edited

You can click on sources to visualise / analyse the geometry table of sources

You can export the geometry table thru


File > Export; MB1-click on the file and select the name of the exported file on the path below.

Step5: Set patterns table parameters


Note: setting patterns requires invoking an instruction for patterns; otherwise the patterns option remains inactive, as
below, until that setting is invoked.

To activate patterns settings

Set-up > Match pattern numbers using first line chan and stations > OK
After the above is invoked, the pattern icon is activated in the Land Geometry Assignment window.

Setting pattern settings

MB1 > File> exit

We need to add the following flows

Disk data input parameters

Inline Geometry Header Load

In disk data output, select name of output file with geometry headers

Step6: Set Sources table parameters


Note that the source stations are 210, 250, 290, ..., as the source interval is 40 (see survey settings below).

Below is the table with pattern edited

Below is the channel column edited as above

Below is the receiver station edited

Below is the First live channel column filled

Below is the complete source table populated

QC Comparison of Source Table filling correctness


Below is an overlay of the final source table filled in the lecture notes and my table filled following the above step-by-step procedures.

After all table filling/edits, save the table through MB1: File > Save > Exit.

Binning

Breakdown of binning steps


In the Promax Land Geometry Assignment window, click Bin and select as shown below

Click Proceed

Confirm OK

Proceed to Binning

Confirm

Finalisation of Database

Confirm

exit the process by

Step 5.2 : Loading Trace headers from database and QC


We will use the flow below, composed of:
Disk Data Input to read in the data to which the Inline Geom Header Load is to be applied, and
Disk Data Output for the resultant file with the geometry headers.

Run it and ensure that it runs successfully; the output file shall be populated.

Step 5.2.1: Viewing and QC data


Add the following flows to view data with assigned geometry

Below is the output

Below is the same output in WT. We can check our assigned geometry if we sort by Source / Channel number.

Best way to QC Geometry


The best way to check all aspects of your geometry assignment is the air-wave cone: principally, the apex
of the air-wave should be at the base of the shot flag.
This process can be illustrated as below.
Sort the data in the source domain, i.e. SIN / Channel.
In the last flow that reads in and displays the data, include a Trace header maths routine before the trace display.

Under the trace header maths, make sure you pre-set the various equations you want to investigate.
The typical one for evaluating geometry is the air-wave equation air = aoffset/340*1000, which gives the expected air-wave arrival
time: offset (distance) divided by 340 m/s (the speed of sound in air), multiplied by 1000 to convert to milliseconds. (A sketch of this computation follows.)
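A sketch of the computation that equation performs (the function name is just illustrative):

# air = aoffset / 340 * 1000: predicted air-wave arrival time in milliseconds
def air_wave_ms(aoffset_m, v_air_ms=340.0):
    return aoffset_m / v_air_ms * 1000.0

print(air_wave_ms(680.0))   # a trace 680 m from the shot -> 2000 ms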

When you run the trace display


Picking > Edit header values > Air

Select header entry to edit, and OK

Thus, this geometry has been set correctly and accurately: principally, the apex of the air-wave is at the base of the shot flag.

Other methods of evaluating / QC Geometry include use of the View header plot menu
On the display go VIEW > HEADER PLOT > CONFIGURE > AIR

Select from the available trace headers, in this case absolute offset (aoffset). Note the offset is plotted directly over
the data; you can evaluate the correctness of the geometry by clicking on a trace, going up through a particular channel (e.g. 41) and
reading the value of offset at the bottom in the worded display, to see if it makes geometrical sense on the ground.


This can also be further checked by using the table below

Step 6.0 : True Amplitude Recovery (TAR)


This process is meant to recover the amplitude losses the initial signal suffers as it travels and spreads through the medium, hits the
reflector and travels back (a sketch of the spherical-divergence part follows this list). These losses include:
Spherical divergence or geometrical spreading
Conversion into heat due to inter-particle accelerations / motion
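As a sketch of the spherical-divergence part: one common compensation multiplies each sample by t * v(t)^2 to undo the roughly 1/(t v^2) geometric-spreading decay. ProMax offers several variants; this particular form and its normalisation are illustrative.

import numpy as np

def spherical_divergence_gain(t_s, v_rms, t_ref=1.0, v_ref=1500.0):
    # Gain that undoes amplitude decay proportional to 1 / (t * v^2),
    # normalised to 1 at (t_ref, v_ref). Illustrative form only.
    return (t_s * v_rms ** 2) / (t_ref * v_ref ** 2)

t = np.linspace(0.1, 3.0, 6)      # sample times (s)
v = 1500.0 + 600.0 * t            # toy RMS velocity trend (m/s)
print(spherical_divergence_gain(t, v))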

In the flow 040 True Amplitude Recovery, execute the TAR module to compensate amplitude losses. Read the ProMax
manual (click ?) and try using different approaches to obtain amplitude equalization of the reflections. The input file is
030_150_SHOTS_GM and the stacking (RMS) velocity. Execute the Velocity Manipulation module in order to compute the
stacking velocity from your interval velocity in depth (Fig. 12):

Fig. 12. Velocity Manipulation module


Below are other parts of the flow

Procedure
6.1 Add another flow 040_True_Amplitude_Recovery (TAR)

6.2 Under the flow 040_True_Amplitude_Recovery (TAR), create a separate Velocity Manipulation flow. The purpose of the
velocity manipulation is to compute stacking velocities from interval velocity.
You're inputting the initial interval velocity model (with depth).

Create a new file name for the stacking velocity to be created, in this case LAB04_VMDL_stc.

Below are the parameters of the Velocity manipulation set-up

Below is the flow

Execute it and make sure it runs successfully, meaning the stacking velocity table set-up is populated

Step 6.3: Compensating Amplitude loss (Gain) as a process of True Amplitude Recovery (TAR)

Under the disk data input read-in the dataset with geometry assigned

The Disk Data Input gets the data with geometry, sorting between the 10th and 150th shot, and selects the 10th shot (the sorting
key is Live source number).

Note:
The above is just a testing step where we're applying True Amplitude Recovery to just part of the data and analysing
the output. If we consider it reasonably effective, we must then apply TAR to the entire dataset through a Get All in Disk Data
Input.

Other algorithms for applying True Amplitude Recovery (TAR)


Apply spherical divergence; create the corresponding velocity file (set parameters as below).
Select the velocity file as the stacking velocity made by manipulation in the previous step.
Set parameters as below.

Note: trying to run with the above settings produced this error,

meaning that in True Amplitude Recovery the velocity should be set SPATIALLY VARIANT, as below (it doesn't
make sense for the velocity not to be spatially variant).

In Trace Display select 15 ensembles, and remember to set the display key to Live source number.

Below is the amplitude recovered data

Trying various methods of Amplitude Recovery


Choose one of the given options from the TAR module and apply optimal correction parameter based on the previously
performed tests (Fig. 17). Input all shots and create new output file that will be used for further processing.
Try separated True Amplitude Recovery (TAR) flows in this style

Below are the various TAR algorithms and settings.

Below is the True Amplitude correction using the dB/sec correction parameters in Figure 15.

Below is the output with this True Amplitude correction.

Below is the TAR output with these settings; create a file output to save the data, 040_150_shots_gain, with amplitude
recovered.

Include disk data output to save the dataset to which amplitude has been recovered

Disk data input

Disk Data output to save file of entire dataset with TAR applied

Below is the new output file for the entire TAR-gained-dataset (TAR applied)

Below is the entire dataset with TAR applied

Step 7.0: Supergather Formation


This process is meant to sort data into common-parameter gathers, either:
Common mid-point gathers (CMP gathers)
Common depth-point gathers (CDP gathers)
Common source gathers (FFID-offset or FFID-channel)

Step 7.1: Pre-assessment of Data parameters before forming gathers


2D Supergather Formation: before executing the flow, in the Dataset table (Fig. 18), use MB2 to click on the
040_150_SHOTS_GAIN file to examine the CDP (CMP) range and decide the step for the velocity analysis.
Procedure
To do this, MB1-click on the dataset of 150 shots.

MB2 (middle-wheel) click on the dataset table to view its characteristics

Fig. 18 Dataset Information (MB2 click on the file name in Dataset table)
From the data table above we can deduce the following:

Available CDP range for velocity analysis is 102-898.


Avoid first and last 100 CMPs due to low fold and choose step that will provide optimal spatial sampling of the velocity
model.

Note:
It's important to realise that the above dataset is partial (it's a product of a sorting process, not a Get All process, which
outputs the complete TAR-gained dataset).
So in forming gathers, we have to use the complete dataset.
The complete dataset name is 04_ALL_SHOTS_TAR_gain. Below is an analysis of its details, as in the previous step.

Step-8(part 1) Supergathers Formation


050 Velocity Analysis
Execute the first part of the flow 050 Velocity Analysis, where the module 2D Supergather Formation (Fig. 19) selects CMPs as input
into the interactive module for velocity picking.
Save the selected CMPs (Disk Data Output) as SUP_GDR.
Read Velocity Analysis help file (U-drive) for further details.

Procedure
Create flow 050_ Velocity Analysis

Set parameters of 2D Supergather as below:

Disk data input


Read in (select) the dataset with True Amplitude Recovery applied to the entire dataset.

Note that:
we're reading in data with amplitude recovered
for maximum CDP fold, if we don't know it we put in 999..
because we're avoiding the starting and end portions of CDPs, where fold is not at a maximum, we enter a
minimum CDP number of 200 (instead of 138) and a maximum of 800 (instead of 898)
Key things to know about forming supergathers:
the Supergather routine may not need a Disk Data Input as long as it is directed to the file to read in
for good supergathers, pick regions of maximum fold (eliminate the first and last 100 CDPs on either side)
a CDP increment of 25 in this case means that the first gather will have its middle CDP at 200 and will be combined from a
total of 25 CDPs, of which there will be 12 on each side, i.e.
(200-12 at the left end) .. (200 in the middle) .. (200+12 at the other end)
the next CDP gather will have its middle at 225 and will likewise be combined from a total of 25 CDPs, 12 on each
side, i.e. (225-12 at the left end) .. (225 in the middle) .. (225+12 at the other end)
note that the number of CDPs to combine has to be odd in this case, because the middle CDP (the centre of the gather)
has to be accounted for, plus equal numbers to combine on both sides.

If this is not understood, ProMAX returns an error that the CDP smash must be an odd number.
A poor choice of the range of CDPs to combine (including low-fold CDPs) when forming gathers creates CDP gathers with poor,
weak reflections that cannot be picked accurately.
A poor choice of the number of CDPs to combine also gives poor reflections:
too few are too weak,
an optimal gathering gives the best velocity-analysis picking,
and too many leads to poor lateral resolution, as the CDPs are not from the same positions.
(A sketch of the gather arithmetic follows.)
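The arithmetic above, sketched (the CDP range, increment and smash count are this lab's; the code itself is only illustrative):

cdp_min, cdp_max, increment, n_combine = 200, 800, 25, 25
half = n_combine // 2     # 12 CDPs on each side of the centre (n_combine is odd)

for centre in range(cdp_min, cdp_max + 1, increment):
    members = range(centre - half, centre + half + 1)
    print(f"gather centred on CDP {centre}: CDPs {members.start}-{members.stop - 1}")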

In the disk data output set name of output file of super gathers 050_Supergathers

Run that flow and make sure it completes successfully. You can confirm this by checking the success message at the
bottom.

Step 8.0 (Part-2): Velocity Analysis

This process is meant to use the supergathers formed in previous step and perform velocity picking for each CDP
Below is the flow,

Disk data input


This reads in the supergather data formed and sorts it by Supergather Bin number as the primary sorting key.

Set parameters of velocity Analysis as below

Fig. 21 Velocity Analysis & semblance computation parameters


Please note that setting the red-bracketed parameters requires understanding your geometry and how you would like to gather
CDPs (form supergathers).
The two green-marked parameters ('Interact with other processes using PD': set it to NO, and 'Get guide functions from an existing
parameter table?': set it to NO) have to be adjusted, otherwise ProMax returns an error about being unable to read the PD.
(For reference, a sketch of the semblance measure follows.)
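For reference, the semblance displayed in the analysis window is, in sketch form, the ratio of stacked energy to total energy along each trial moveout (illustrative code):

import numpy as np

def semblance(window):
    # window: (n_samples, n_traces) of amplitudes along a trial hyperbola.
    # S = sum_t (sum_x a)^2 / (N * sum_t sum_x a^2), between 0 and 1.
    num = np.sum(np.sum(window, axis=1) ** 2)
    den = window.shape[1] * np.sum(window ** 2)
    return num / den if den > 0 else 0.0

print(semblance(np.ones((11, 25))))          # coherent flat event -> 1.0
print(semblance(np.random.randn(11, 25)))    # noise -> near 1/25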

Step 8.1: Band-pass filtering and Gain control prior to velocity analysis
This is an optional process that may use a Band-pass Filter and/or AGC in order to improve velocity-analysis quality.
Keep in mind the peak (central) frequency of the wavelet that was used for the finite-difference modelling.

So we may adapt our velocity analysis flow as below

However, at this point I don't have the appropriate information for proper setting of the filter parameters,
so the filtering process may be held off at this point,
and our velocity analysis will be run without the filtering.

Below is the output of the velocity analysis

From the top menu, click on Gather-> Gather Parameters and set/change Number of CDPs to sum = 7 (Fig.24).

Fig. 24 Number of CDPs to sum


From the Semblance menu choose Semblance Parameters and change contrast factor to achieve higher resolution of the semblance plot (Fig. 25). Use Trace scaling option
from Gather menu to adjust trace amplitudes (gather/stack).

Fig. 25 Semblance and trace display parameters (scalar works only with VAWG plot)
When picking velocities for a particular CMP is finished, click on the black arrow (upper left corner) to proceed to the next CMP location (Fig. 26). When picking is
completed, click File -> Save picks, then File -> Exit/Stop flow.

However, there is a challenge here: if this is set to YES and Velocity Analysis is run again,
it always returns an error about being unable to read / connect to the PD after a certain number of iterations.

Useful Tricks in Velocity Analysis and Picking Velocities

Temporary brute-stacking process


Before starting velocity picking, it may be useful to make a brute stack (in the CDP domain) on the side, using
a single velocity applied to the entire velocity field. As much as this is wrong, it guides you: it gives an idea for more
accurate picking and, most importantly, helps differentiate genuine reflection events from multiples at each CDP.
Pin this on the side and use it to guide your picking.

Procedure
Using velocity manipulation, create a Brute stack velocity function by selecting a single brute stack velocity

Set parameters as below, select appropriate velocity field ,Type of velocity to output select Stacking (RMS) Velocity,
Select output velocity database name: brutevel

Select velocity field (realistic for working purposes). The above would be interpreted as below

Go and establish the velocity table Brutevel1

Run the velocity manipulation flow and ensure it runs effectively; this means the velocity-function table is
populated.

The above gives us a velocity-function table, which we're to use for brute stacking.

After True Amplitude Recovery (TAR) is completed in the previous step, read in the dataset
040_ALL_SHOTS_TAR_gain of all shots with true amplitude recovered.

In the Disk Data Input, read-in the dataset and primary sorting key CDP Bin Number

Apply Normal Move-out to it


In applying NMO, specify stretch mute% and specify velocity table (Brutevel)

To check the contents of this velocity table Brutevel, click Edit > and click the file

CDP Ensemble stack

Select name of the Ensemble brute stack. In this case 060_bstack

Run it to populate and save the above file.


It runs normally.

To display the brute stack

Below is the brute stack upon which a particular CDP can be picked.
Note that it's in the same format (CDP vs. time) as the Velocity Analysis window, so the two can be aligned on multiple screens for
better picking.

This brute stack is strictly for aiding picking, so it cannot be used anywhere in the further steps of forming gathers or
velocity analysis.

After picking velocities in velocity analysis, save the picks: File > Save picks.

Step 9.0: Creating interval Velocities (in_depth) from Picked Stacking Velocity
Using the Velocity Manipulation module (Fig. 28), create interval velocities in depth using the picked stacking velocities
(~RMS). This conversion is the Dix relation, sketched below.
Select input velocity table which is VSTK_picked (from your velocity analysis)

Select the output velocity database, which shall be the interval velocity (in depth) derived from the picked stacking velocity.
Note that the velocity manipulation includes an option to smooth the velocity.

Set all the above parameters of the velocity manipulation and run it.

Flow executed successfully.


Display and compare the given and picked velocity models using Velocity Viewer/Point Editor (Fig. 29).
Set parameters of the Velocity viewer

Select input interval_velocity with depth

Select the output interval velocity

Set the parameters of the velocity viewer (as above) and run it
Below is the output

Note: there are details on editing the velocity field above in the Velocity Viewer/Editor help in ProMax.
Way forward: in the next step we're to do the stacking using the velocity picked in Velocity Analysis (Vstack_vel),
then do DMO using the above smoothed velocity function.

Step 9.0: DMO Correction and DMO Velocity Analysis


Create a new flow called 060_DMO_VELOCITY ANALYSIS_ITERATIVE

Under that, create this flow. This routine is dedicated to smoothing the picked velocity.

Create the name of an output database that will contain the smoothed version of the stacking velocity picked in the velocity analysis;
create the smoothed version from the picked one.

Below is the smoothed picked stacking velocity, Vstack_picked_smoothed.

Step 9.2 (Part-2): Stacking


Within the flow 060 Stacking, perform stacking of the data with the velocity picked in Velocity Analysis.
Precisely, we:
read in the data > apply NMO > Disk Data Output (to save the NMO-applied data) >
read in the NMO-applied data > use it to pick a mute function.
At the stacking stage, we read in the dataset > apply NMO > apply the mute function picked > stack (ensemble stack) >
view the stacked dataset.
Data should be in CDP order (primary key: CDP, secondary key: Offset). Display the stacked section and compare it to the starting interval velocity model.
Below is the entire flow separated into parts

Disk data input


Key points:
read in the data of all shots with True Amplitude Recovery applied, i.e. 040_ALL_SHOTS_TAR_gain
sort by primary key CDP, secondary key Offset

Normal Move-out correction


For NMO we use the velocity picked from the velocity analysis, VSTK_Picked (not the smoothed version). The correction applied is sketched below.
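In sketch form, the correction maps each sample at offset x back to its zero-offset time along a hyperbola, and the stretch mute zeros samples stretched beyond the chosen percentage (values illustrative):

import math

def nmo_time(t0_s, offset_m, v_ms):
    # Hyperbolic moveout: t(x) = sqrt(t0^2 + x^2 / v^2)
    return math.sqrt(t0_s ** 2 + (offset_m / v_ms) ** 2)

t0, x, v = 1.0, 1200.0, 2400.0
tx = nmo_time(t0, x, v)
stretch_pct = (tx - t0) / t0 * 100.0   # muted if above the stretch-mute %
print(f"t(x) = {tx:.3f} s, stretch = {stretch_pct:.0f}%")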

Disk data output


To save the output of the NMO-corrected data; this will populate the dataset table 070_NMO_correct_using_Vstk_pic.

In the second flow stage, the Disk Data Input parameters are set as below:
sorting in CDP : absolute value of offset,
focusing on CDPs 400-450.

Trace Display parameters


Trace labelling in Live Source: Live Channel

Run the above active flow to populate the tables above and use them in subsequent flows

Or we may apply the mute based on one CDP

Step 9.2.1 (Part-2): Applying mute


On the NMO_corrected gather above (source gather) > click Picking (on top) > Pick Top Mute

Name the Top mute pick> Apply > OK

Select OFFSET > OK

After clicking OK, MB1-click on the body of the panel (see blue dot).
Then MB1-click on the start point of the mute, move to the end of the mute, and MB1-click on the end of the mute line.

To save pick go File> Save Pick.


Other mute picks can be selected, e.g. see the other pick.

Step 9.2.1 (Part-2): Applying the mute (picked above) and stacking


In the next step, we run the activated section of the flow below.
This reads in the data (all shots with TAR gain) > applies NMO > applies the mute function made above > forms CDP stacks and
saves the stacked dataset.

Run the above flow, confirm successful running , this shall populate the dataset to be used in next step.

After that has run successfully (as above), then proceed to run the last section flow below
This just reads in the stacked data made in the previous step and displays it.

Below is the stacked section

Step 10 : Migration
Note: this interim migration stage is obviously going to yield an inaccurate image, because only NMO has been
applied (and NMO can only correct a horizontal or near-horizontal velocity field, assumes a zero-offset context, and uses
stacking velocity, not Vrms).
Its purpose is to demonstrate that we need something extra: DMO.
DMO corrects dip-dependent move-out as a partial migration, converting pre-stack non-zero-offset data
into a zero-offset context which is later handled as post-stack.
In the flow 070 Migration, perform 2 time and 2 depth migration algorithms (Kirchhoff Time Migration, F-K Migration, Kirchhoff
Depth Migration, and Implicit FD Depth Migration).
Start the new flow 070 Migration.

Output of the Kirchhoff Time migration

F-K migration

Output of F-K migration

Below is a comparison of the Kirchhoff Time Migration (left) and the Memory Stolt F-K migration.

Kirchhoff Depth Migration


Flow

Parameters
Disk data input

Kirchhoff Depth migration


Select the velocity: interval velocity in depth derived from the picked stacking velocity.

Output parameters

Create new dataset table of the pre-stack Kirchhoff Depth Migration

When run, it produces this error. What does this mean?


Does it mean that there is a velocity < 0 in the table?

Key point (for velocity editing)


The above problem is caused by bad zig-zag picking in the velocity analysis; this causes an erroneous approximation of
interval velocities (both interval velocities in time and interval velocities in depth), as interval velocities are calculated from
the picks.
The point is, in velocity analysis picking (the NMO-correctable phase), only concentrate on picking the reflections from
horizontal or near-horizontal events, and don't try to force picking of dipping events (you can identify these as the events that
cannot be flattened by NMO). At this stage only pick the NMO events, and these should pretty much lie on a rather smooth line
or curve rather than a zig-zag. Also avoid picking multiples next to primaries.
Solution to the problem above
The solution is to run the Velocity Viewer/Point Editor routine, edit the picks (velocity points in the velocity field), smooth it, and then get
interval velocities in depth using Velocity Manipulation.
The second approach to correct this would be to use the Velocity Manipulation menu and choose the option Clip output
velocity, to clip out particular velocities that swing to extremes (0 to max). That way it creates a velocity gate on the picks, which
is used to reasonably control the estimation of interval velocities without swinging to extremes; then go back to the velocity
analysis window and pick (not zig-zag) as explained above.
Select Clip output velocity.

Select Yes on clipping output velocity and reasonably define boundaries of your highest and lowest velocity fields.

Notice that the option to smooth the interval velocity model is already selected; normally (or optionally) this would be done in
the subsequent step.

With all other routines deactivated, run the routine and make sure it runs successfully. This re-populates the interval velocity
table with clipping applied.

Now we deactivate the other flows and run the Velocity Viewer / Point Editor flow (this will run on the new interval velocity table
formed by clipping out the extreme velocities).

Below is the output (notice the zero velocity east of CDP 800)

Next thing is we go to editing this velocity field.


Below are some notes to guide editing

To edit, click on the velocity point you would like to edit (it turns pink); then move the cursor over the velocity field and select the one you want to use as a reference (it will be black-dotted). To keep that, move the cursor vertically along the line to the top, get out of the panel, and move to the editor side on the right.
Correct the red line against the blue line by using the guides that show how to use MB1, MB2 and MB3 to move, delete, adjust etc.
The optional and best way of editing is to go to the velocity analysis routine, increase the maximum value (range of semblance), then go back to velocity analysis and re-pick, reasonably avoiding
extreme interval velocities (0 to very high).

Interval Velocity Smoothing


Before using the interval velocity with depth (derived from velocity analysis) for migration we must smooth it.
Procedure:
Modify > Smooth velocity field > adjust the various parameters of smoother

Below is the smoothed velocity field using the above parameters.

Smoothing can be applied using various parameters


Note: every time you smooth it, you may want to save it to the table (if you want to use it);
otherwise, trying to use it in migration without saving it to the table will create an error.

Depth Migration (without DMO)


Depth migration with smoothed interval velocities in depth (interval velocities obtained from stacking velocities)
NB: Smoothed velocities have to be used.

Below is the flow, with the smoothed version of the velocity.

Note that we're using the smoothed version of the velocity table for migration.

Parameters
Disk data input

Kirchhoff depth migration parameters

Disk data output

Run it and make sure it runs effectively

To display the stack

For the display part, Disk data input parameters below

Trace display parameters use CDP: OFFSET

Below is the stacked section in CDP:CHAN sorting (with the smoothed velocity field).
Importance of smoothing

Below is the stacked section with the unsmoothed velocity field.

Implicit FD Depth migration

Run it and make sure it runs successfully

To display the stack, run the two routines below it


Below is the stacked section from the Implicit FD Depth Migration.

Creating average velocities from picked stacking velocities and performing time-depth conversions for both time-migrated and depth-migrated datasets


Instruction
Create average velocity in time from the picked stacking velocities and perform time-to-depth conversion of the two time-migration
sections. Display all migrated sections (2 time, 2 time converted to depth, and 2 depth migration algorithms) and discuss the obtained
results.
Interpretation of instruction above
Procedure:
Use Velocity manipulation to derive average velocities from picked stacked velocities
Input the migrated datasets (time-migrated and depth-migrated) one at a time
Apply Time-depth conversions on migrated sections
Save the output
Complete Flow below

Use a velocity manipulation routine to convert stacking velocity to average velocity

Parameters for Velocity Manipulation

Run it and ensure it works

Disk data input: we read in the Kirchhoff time-migrated stack.

For the TIME/DEPTH CONVERSION


Use the average velocity to convert

Trace display

run the active flow

Below is the Time-to-Depth conversion of the initial Kirchhoff Time migrated section

Below is the flow for the time-to-depth conversion of the Memory Stolt F-K time migration.

Disk data input

Time depth conversion

Create new dataset file

Below is the time-to-depth conversion of the Memory Stolt F-K migration.

To apply the time-to-depth conversion to the Kirchhoff depth migration:

Disk data input

Time depth conversion

Below is the Time-Depth conversion of Kirchhoff depth migration

Applying Time-to-Depth conversions to Implicit FD Depth migration

Time/ Depth conversion parameters

Below is the Time-to-Depth conversion of implicit FD migrated stack

Discussion of the various migrations

In general, depth migration algorithms perform better and are relatively more precise than time migration algorithms; however, the main challenge and
requirement for good performance of the depth algorithms is a good velocity-analysis process. They are highly dependent on velocity.
Case-specific discussion (based on the migrated section diagrams above)
Time migrations
For the top pair of time migrations (Kirchhoff time migration and F-K time migration), the resolution is significantly poor. The image is reasonably well
resolved at the boundaries, but resolution is very poor in the middle of the section where various velocities intersect. It creates severe and numerous micro-diffractions.
Depth migrations
For the bottom pair of depth migrations (Kirchhoff depth migration and Implicit FD depth migration), the resolution is significantly improved. The image
(layers) is reasonably well resolved and distinguished both at the boundaries and in the middle where various velocities intersect. It also creates diffractions, but not as small and
numerous as the time migrations.
Note: smoothing velocities refines the serration (sharp jagged edges) on velocity boundaries.

Step 10 : DMO Correction


This is an interim dip-dependent migration (correction) step, necessary for converting non-zero-offset (pre-stack) data
associated with dipping events into a zero-offset configuration, which can be well handled by post-stack migration
algorithms.
This helps in better velocity estimation, improves lateral resolution, and suppresses coherent noise.
Logic and how to use it.
Ideally, DMO should follow autostatics and NMO corrections in the processing flow.
Furthermore, the velocities used in NMO (before DMO) should be the dip-independent velocities appropriate for
horizontal reflectors. Estimating these velocities without first applying DMO can be difficult.
Therefore, a useful intermediate processing sequence is:
DMO Phase-1
1. Sort to desired input domain (Use Disk Data Input or Inline Sort)
2. NMO (Use the best estimate of velocities)
3. DMO (Use Vstack-picked and smoothed velocity for first DMO iteration)
4. Inverse NMO (Use the same velocities as for the initial NMO)
5. Velocity analysis (perform DMO velocity analysis)
Subsequent DMO Phases
When the difference between initial and final velocities is significant, repeat the above sequence.
Having arrived at your final stacking velocities (in preceding velocity analysis), the processing flow will continue with:
- Sort to desired input domain (Use Disk Data Input or Inline sort)
- NMO (Use the final velocities from preceding DMO application)
- DMO (do DMO again with the velocity (smoothed) from previous NMO step)
- Sort to CDP domain (Use Disk Data Input or Inline sort)
- CDP Stack

Instruction
In the flow 080 DMO, perform the Dip Move Out correction on shot gathers using the Ensemble DMO T-X Domain module (Fig. 4 below).
Display and comment on the results.
Display:
- Raw Shot
- NMO corrected shot
- NMO+DMO
- DMO+NMO-1
FIGURE-4 DMO FLOW and PARAMETERS

Procedure
Below is the complete flow for the first (initial) DMO iteration.

Disk data input parameters

Read in all shots with gain applied, sort as shown

Normal moveout (NMO) correction


Note that we're applying NMO, so we use the FORWARD NMO and the VSTK_Picked velocity (the velocity picked from velocity
analysis).

Ensemble DMO T-X Domain parameters


The key thing to note is the Typical CDP Spacing if the data input is in the common-source domain; getting it wrong may cause aliasing.

Inverse NMO
Select INVERSE, use the same stretch mute % as in the forward NMO, and use the same velocity (Vstack picked) as used in the
FORWARD NMO.

Disk data output


In our case we choose to save the output dataset to which the first DMO ensemble has been applied

To display the DMO ensembled gather,


we use the lower two routines.

Disk data input


In this case we're reading all traces without sorting.

Trace display

Below is the output in Source: Channel domain

Step 10.1 : DMO Velocity Analysis DMO


DMO velocity analysis involves velocity picking, which is meant to correct dipping events and give better
estimates of velocities.
Instructions
- In the new flow 090 DMO Velocity Analysis, perform supergather formation for input into interactive velocity analysis.
- This time, velocity analysis will be done on the DMO-corrected data.
- Compare and comment on the velocities picked in this DMO velocity analysis against the stacking
velocities picked previously (i.e. in the NMO velocity analysis).
- Convert both velocity models into interval velocities in depth and compare them to the starting interval velocity in depth (the initially
self-designed model).
Parameters for flow are given below
Create new flow 090_DMO_VEL_ANLYS_iterations

Supergather formation
Complete supergather formation flow

First, explore the properties of the dataset to be used for supergather formation.


Procedure: see pages 163-165.

From the above, to have maximum fold we will skip 100 CDPs on each side, so we take minimum CDP = 200 and maximum CDP
= 800.
Parameters for Supergather formation
Refer to pages 164-167 on understanding supergather parameters

Disk data output


Click Add and select the name of the new dataset, 090_DMO_SUPERGATHER.

Disk data output parameters

Execute this part of the flow and make sure it runs successfully.

If it does, the supergathers are formed and the supergather dataset file 090_DMO_SUPERGATHER is populated.

Next we deal with second part of the flow

Disk data input


This reads in the supergather data formed and sorts it by Supergather Bin number as the primary sorting key.

Automatic Gain Control (AGC)


AGC is applied before velocity analysis, or before stacking, in order to equalize the noise of all reflections so as to get the best
cancellation of noise.
AGC is a kind of scaler box sliding along the input, which scales the output of both amplitude and noise (a sketch follows).
Set AGC parameters as below.
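A sketch of that sliding "scaler box" (a simple mean-absolute-amplitude AGC; the window length is illustrative):

import numpy as np

def agc(trace, window_samples=500):
    # Scale each sample by the inverse mean absolute amplitude in a
    # sliding window centred on it (one simple AGC variant).
    out = np.empty_like(trace)
    half = window_samples // 2
    for i in range(len(trace)):
        w = trace[max(0, i - half): i + half + 1]
        level = np.mean(np.abs(w))
        out[i] = trace[i] / level if level > 0 else 0.0
    return out

decaying = np.random.randn(2000) * np.linspace(5.0, 0.1, 2000)
print(agc(decaying)[:5])   # amplitudes now roughly equalized along the trace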

Velocity Analysis
Select table to store DMO_velocity picks

Set parameters of velocity Analysis as below


(Note: the table has been cut into parts, as it could not be captured on one screen due to VNC problems.)

Special tips on selecting velocity analysis parameters


- Line 2: set up the table to store the DMO velocity picks.
- Line 3: select No for "Is the incoming data precomputed?".
- Line 8: Number of CDPs to sum into a gather: a smaller value gives a finer grid (higher data density) but more locations to pick; a larger value gives a coarser grid. Select appropriately.
- Line 16: Maximum semblance analysis value: increase it to 10 000 or higher. This brightens the semblance window for easier picking.
- Method of computing velocity functions: Constant Velocity.
- Interact with other processes using PDU?: No.
- Get guiding function from existing table: No (on the first attempt the table is empty; on later velocity analyses, when the table exists, it can be used).
- Guide minimum value: the minimum velocity in the velocity field.
- Guide maximum time value: 4000 ms.
- Copy picks to next location: Yes. This automatically copies picks to the next CDP, which makes it easy to move through the picking process using MB3.

Fig. 21 Velocity Analysis & semblance computation parameters
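For intuition, the semblance value that the analysis window displays at each trial (t0, v) pair can be sketched as below. This is a minimal numpy sketch of the standard semblance measure, not ProMAX's exact computation; the gather, offsets, sample interval and window length are assumed inputs.

import numpy as np

def semblance(gather, offsets, dt, t0, v, win=11):
    # gather: CMP gather as an (nt x nx) array; offsets in metres; dt in s
    nt, nx = gather.shape
    t = np.sqrt(t0**2 + (offsets / v)**2)          # trial NMO hyperbola t(x)
    i0 = np.rint(t / dt).astype(int)               # nearest sample per trace
    rel = np.arange(-(win // 2), win // 2 + 1)[:, None]
    idx = np.clip(i0[None, :] + rel, 0, nt - 1)    # window of samples per trace
    a = gather[idx, np.arange(nx)[None, :]]
    num = np.sum(np.sum(a, axis=1) ** 2)           # energy of the stacked window
    den = nx * np.sum(a ** 2)                      # sum of individual energies
    return num / den if den > 0 else 0.0           # semblance in [0, 1]

A flat, well-corrected event gives semblance near 1, which is why the correct velocity shows up as a bright spot in the semblance panel.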


Run this part of the flow and confirm it completes successfully.

Below is the velocity analysis display, so we start picking.

Improving visibility (of reflections)


If the gather appears pale grey and reflections are not distinct (not separable from multiples), you can improve visibility by applying trace gain: Stacks > Trace Scaling

Use the slider or type in numbers to adjust the gain. Click OK to test the appearance temporarily, or click Apply and then OK to make the change permanent.

Tip for velocity picking


Switch on the interval velocity display (View > ) and, when picking, keep an eye on both the interval velocity and the velocity at the pick point.

Velocity Manipulation


This routine converts the velocities picked in the first DMO iteration into interval velocities in depth, which will be used to create the first-iteration velocity field section.
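Conceptually, this conversion is a Dix-type inversion of the picked stacking (approximately RMS) velocities, followed by integration to depth. A minimal numpy sketch is below; the pick values in the comment are hypothetical, and the real module handles many practical details this sketch ignores.

import numpy as np

def dix_interval_in_depth(t, vrms):
    # t: two-way pick times (s); vrms: RMS/stacking velocities at those times
    t = np.concatenate([[0.0], np.asarray(t, float)])
    v = np.concatenate([[vrms[0]], np.asarray(vrms, float)])  # extend to t = 0
    num = v[1:] ** 2 * t[1:] - v[:-1] ** 2 * t[:-1]
    vint = np.sqrt(num / np.diff(t))               # Dix interval velocities
    dz = vint * np.diff(t) / 2.0                   # thickness from two-way time
    return np.cumsum(dz), vint                     # depth of layer bases, v_int

# e.g. t = [0.4, 0.8, 1.2] s with vrms = [2000, 2200, 2500] m/s (hypothetical picks)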
Flow

Below are the parameters


Note that we're using VEL_DMO_picked (the velocity picked in the first DMO iteration).

Set up the output velocity table

We run the flow to ensure the newly set-up interval velocity table is populated.
It ran normally.

Velocity Viewer / Point editor


Next, we want to view (and possibly edit) the interval velocity field we set up.

Parameters

Below is the output. Notice that we have velocities up to 7000 m/s, which were not initially set in the model.
This is because DMO-analysis velocities picked on dipping events are always higher than the velocities that correct horizontal events.
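As a worked illustration (numbers hypothetical): for a dipping reflector, the velocity picked before DMO behaves like v_stk = v / cos(dip). With a medium velocity of 4500 m/s and a dip of 50 degrees, v_stk = 4500 / cos(50°) ≈ 7000 m/s, so picks of this order can appear even though the model itself contains no such velocity.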

The above velocity field can be smoothed

Velocity field with V_dmo1_picked (i.e. the velocity picked after the first DMO iteration)

We can smooth it

Output of second DMO Iteration (second DMO Velocity analysis)

Smoothed version of second DMO iteration

This is output with V_dmo_2 (dmo velocity from iteration_2)

Smoothed version of V_dmo_2 (dmo velocity from iteration_2)

Comparing Velocities
Our task is to compare the various velocity fields.
Notice that these velocities are generally lower, with a comparatively smaller range, up to about 4500 m/s.
This is because stacking velocities are closely tied to the V_nmo velocities picked during NMO velocity analysis, and those are generally lower because they mainly correct horizontal events.

smoothed

Disk Data Input


Under Disk Data Input, enter the dataset (040_ALL_SHOTS_TAR_gain) that contains all 150 shots with true amplitudes recovered; make sure you read them all.

Normal Move out (Forward)


To the data above, you're applying the normal moveout correction (forward NMO) using the smoothed version of the picked stacking velocity.
Key settings: specify FORWARD NMO, point it to the velocity table, and set an appropriate stretch mute %.

Ensemble DMO in TX-Domain


Set parameters as below

Tips:

A detailed guide to the meaning of the above parameters is attached below the following text.
The typical CDP spacing in ensembles should ideally be the one used when forming supergathers; it may be varied, but it should not be less than the CDP interval, otherwise you cause DMO aliasing.
The typical RMS velocity at early times can be estimated from the velocity field obtained from stacking.
The maximum offset can be obtained from the survey geometry or by examining the complete geometry dataset (below it is expressed in terms of CDP spacing: 1000 CDPs x 10 m = 10 000 m).

Inverse NMO
Make sure you select INVERSE for this step.
Use the same stretch mute % and the same velocity table (the smoothed version of the picked stacking velocity).

Disk Data output


Select name of output file

Running the flow below


Running the flow requires separating the flow steps: each stage must run separately to populate the table that the next stage uses.

Run the first stage (data input and forward NMO); this populates the table. If successful, inactivate it and activate the next stage.
Next, run the Ensemble DMO T-X stage with the others inactive (make sure it runs successfully).

Lab 7: 3D seismic processing and imaging

From ProMAX to SeisSpace


ProMAX was introduced in 1991. The strengths of the software at that time were:

Interactive workstation-based seismic processing


Easy to use, robust functionality
Dominant commercial system with ~50% market share
The design put limits on its parallel computing efficiency

SeisSpace evolved over a number of years.

Added JavaSeis data format


Executive to utilize JavaSeis efficiently
Highly scalable parallel performance
New tools and geophysics
Ultimately intended to absorb ProMAX

With the SeisSpace interface, the user has access to all ProMAX tools and can run all of the traditional job flows, but does so in a modern windowed user interface with capabilities like copy-and-paste and drag-and-drop. There are many new geophysical tools in the SeisSpace tool list, and these can be used in flows alongside traditional ProMAX tools. The JavaSeis format allows parallel reads and writes from disk with no file locking, so it is extremely efficient and scales very well in parallel.
To start working in SeisSpace, type the following command in an X-Win terminal:
vncserver -geometry 1280x1024
This is the message that will appear:

You have created a geometry for the TurboVNC connection, with an ID of 23 in the case above (write down your own ID; you will have to use it every time you run TurboVNC).
Run TurboVNC on 134.7.152.10:ID as displayed below.

Parameterise to connect to the student cluster and then select Connect.


Password is Geophy2015
Run SSclient in terminal.
This is what a SeisSpace session looks like.

The data is organised differently in SeisSpace. Find and investigate the following folders: Project Area, Project, Subproject, Flows, Datasets, Tables.
How can we access the database and help on processes?
How can we create a job (processing flow)?
How can we submit a job on different nodes?

_____________________________________________________________________________________________

Step-1.0: Getting into SeisSpace


In the VNC window on the cluster, right-click and select Open Terminal.

To open SeisSpace, type SSclient and press Enter.

Otway 3D seismic survey


The Otway 3D project is located in the south-western part of Victoria, Australia. The project is carried out by a collaborative research group to prove that carbon dioxide can be stored alongside methane gas in a reservoir. The geological formation of the Otway Basin has a good overlying seal to prevent leakage, and good permeability and porosity to allow the flow of gas. In our seismic volume, the target reservoir is expected at about 1600 ms TWT. This technology will help to reduce the cost of controlling greenhouse gas emissions.
For this lab a new subproject has to be made!

Step-1.1: Reading-in the SEG-Y file (dataset)


Data read.
The first step is to read the data in SEG-Y format from the disk. The dataset is located at:
/export/data/teaching/geop4000/public/
Filename: Unit423_Otway_2009.sgy
Create a flow 001 SEGY IN; the recommended setup is below:

In case of permission issues, type xhost + in the terminal.


Browsing to SEG-Y data file

Below is the dataset read-in

Saving the raw data (SEG-Y) file

Run it to confirm that the SEG-Y data is read-in and saved

Note: as a tip for using SeisSpace 1D, 2D and 3D, you can access routines and flows as follows.
To select routines for a flow, click: Products >> SeisSpace3D >> Show Profile

Invoking the above gives a list of routines that can be selected on the right-hand side.

Step-1.2: Inputting data and displaying it


Flow 1: 001_SEG-Y_IN. This flow reads in the SEG-Y data file.

Parameters for disk data input

Parameters of Trace display

Labelling the trace display

Run it to display raw data

If any permission issue stops it from running, type xhost + in the terminal.

Below is the output display of the raw data

Step 2: Assigning Geometry


Assignment of geometry is done in 3 stages.

Firstly, we extract as much information as we can from the headers.

Then we use this information to finalise the database.
Lastly, we apply the geometry by writing all calculated information, such as offset and CDP information, into the headers.

Recommended processing flow and binning parameters are illustrated below.

2.1 Disk data Input

2.2 Extract Database Files

Parameters for Extract Database Files

We inactivate the other routines and run the first two in the flow, i.e. Disk Data Input and Extract Database Files.

Completed successfully

2.3 3D Land Geometry Spreadsheet

We inactivate the other routines and run Disk Data Input and 3D Land Geometry Spreadsheet.

2.3.1

This will produce a 3D Geometry Assignment table

Click on Bin > Assign mid-points > OK, and in the next dialog click Proceed.

2.3.2

Next we set the Bin mid-points

Set the parameters as below.

Azimuth is 25, meaning the Y-lines of the grid are on bearing N 025 E, i.e. 25 degrees east of north.
Bin size (Grid X bin dimension) is set to 10. The minimum allowable value is half the receiver spacing.
Grid Y bin dimension is also set to 10, with the same minimum. (However, I have doubts here, as the Y-spacing seems to be smaller than that.)
Set all other settings as below.
Click Calculate Dim at the bottom left.

Click OK

Displaying the Layout and Fold map


Tools > Geometry QC

In that view you can display many things using the menu; the numerous options include colouring sources and receivers differently, labelling, etc.

Display layout and fold as shown below.

You can view the elevation contours of the survey.

Elevation contour

Display elevations of receivers. What is the maximum elevation?


What is the distance between receivers?
How can we make sure our geometry is correct and that it matches seismic?
What is the velocity of the refractor?

Grid parameters

Parameters for Inline Geom Header Load

Run it; it should complete correctly.

Disk data output parameters (Raw dataset with Geometry)

Run it and ensure it is successful, meaning the table is set up and populated.

4.0 Pre-processing
Make a processing flow that will:

properly apply elevation statics,

attenuate noise,
and perform deconvolution.

The content of the flow is displayed below. Parameterisation should be tested on a single shot record, then applied to the whole dataset.

Display a shot record before and after pre-processing. Comment on the results.
4.1 Disk data input parameters
(Note that we're testing it on the 50th shot, i.e. live source number 50.)

Choose the dataset with geometry applied.

4.2 Applying Elevation Statics


Before using the Apply Elevation Statics routine, we need to run a few processes to select the velocity of the refractor. This will be the replacement velocity.
4.2.1 Determining Replacement Velocity
Do a trace display of the raw data with geometry and pick the velocity of the first arrivals.

To get the replacement velocity: in the display, activate the dx/dt button on the side; MB1 on the start of the first break, move to the end of the line and MB1 again; then MB3 to label the velocity.

Parameters for Elevation statics


(with the selected replacement velocity and final datum)
To select the final datum elevation: browse the line setup via the toolbar icons and pick the maximum elevation (it is somewhere under Tools).

To apply the above, inactivate all the other routines in the flow, then run this routine.

Below is the output with Elevation statics applied

To view in colour: View > Trace Display>

Below is the output of data with Statics applied

Application of elevation statics corrects for the effects of elevation differences relative to the datum.
In this case, applying elevation statics makes some improvement, though not a huge one, as the terrain is relatively flat with gradual (gentle) elevation differences.
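As a worked illustration (numbers hypothetical): the static shift for a station is t = (z_datum - z_station) / v_replacement. With a final datum of 300 m, a receiver at 280 m elevation and a replacement velocity of 2000 m/s, the shift is (300 - 280) / 2000 = 10 ms applied to that receiver's traces; the source-side shift is computed the same way and the two are summed per trace.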
4.3 Apply Automatic Gain Control

The purpose of applying AGC is to compensate for amplitude decay and equalise amplitudes through a re-scaling process: a sliding window of fixed length is used to compute the average amplitude within the window. This average is compared to a reference level and a gain is computed for a point in the window. The window then slides down one sample and the next gain correction is computed. The process continues until the whole trace has been gained.
Automatic Gain Control (AGC) is the commonest (and often most dangerous) scaling type used. The gain can be based on a mean or a median average; the median-based method better preserves amplitude extremes.
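The mechanism described above can be sketched in a few lines. This is a minimal mean-based sliding-window AGC in numpy, not the ProMAX module; the window length and reference level are assumed parameters.

import numpy as np

def agc(trace, win=125, ref=1.0):
    # win: window length in samples; ref: reference output level
    half = win // 2
    out = np.zeros(len(trace))
    for i in range(len(trace)):
        w = trace[max(0, i - half): i + half + 1]   # sliding window
        avg = np.mean(np.abs(w))                    # mean absolute amplitude
        out[i] = trace[i] * (ref / avg if avg > 0 else 0.0)
    return out

Because the gain is recomputed at every sample, relative amplitude information is destroyed, which is why AGC is described as dangerous despite being so common.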
Parameters for AGC

Keep Disk Data Input, Apply Elevation Statics, AGC and Trace Display active; inactivate the other routines and run the flow. Below is the output.

4.4 Spiking Predictive Deconvolution

Deconvolution is a filtering process which removes a wavelet from the recorded seismic trace by reversing the process of convolution. The commonest way to perform deconvolution is to design a Wiener filter to transform one wavelet into another wavelet in a least-squares sense.
By far the most important application is predictive deconvolution, in which a repeating signal (e.g. primaries and multiples) is shaped to one which doesn't repeat (primaries only). Predictive deconvolution suppresses multiple reflections and optionally alters the spectrum of the input data to increase resolution.
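For intuition, a minimal prediction-error (Wiener) deconvolution can be sketched as below, assuming numpy/scipy. The filter length, gap and pre-whitening values are illustrative rather than the lab's parameters; a gap of one sample approximates spiking deconvolution, while larger gaps give predictive deconvolution.

import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, nlags=80, gap=1, prewhite=0.001):
    ac = np.correlate(trace, trace, mode='full')[len(trace) - 1:]
    r = ac[:nlags].copy()
    r[0] *= 1.0 + prewhite                     # pre-whitening stabilises the solve
    rhs = ac[gap:gap + nlags]                  # desired output: trace advanced by gap
    f = solve_toeplitz(r, rhs)                 # Wiener filter (Levinson recursion)
    pred = np.convolve(trace, f)[:len(trace)]  # predictable part (e.g. multiples)
    out = trace.copy()
    out[gap:] -= pred[:len(trace) - gap]       # keep the unpredictable part
    return out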
Parameters for Spiking Predictive Deconvolution set as below

In order to apply spiking predictive deconvolution, we need to define (pick) a gate. Procedure below:
Picking > Pick Miscellaneous Time Gates

Give the picking a name and select a Secondary Key

Click OK.
On the same display, select the second layer (the bottom of the gate).
Decide on the time width of the gate: MB3, click along the time boundary on the display, and use the drop-down list. Click New Layer (decon-2). This will be the bottom of the time gate.

After setting the parameters of the spiking predictive deconvolution and picking the gate, inactivate the other routines as shown and run the combination Disk Data Input + Spiking Deconvolution + Trace Display.
Selection of gate

Run it; below is the output with spiking deconvolution applied.

4.5 Surface Wave Noise Attenuation


Surface-wave noise generally obscures the seismic signal recorded by single-point sensors in land seismic reflection exploration. The key problem is how to effectively attenuate surface waves and remnant surface waves during noise attenuation.
The strategy for attenuating noise relative to signal depends on how the characteristics of the noise differ from those of the signal in terms of a particular physical quantity in a specific domain.
Surface-wave noise attenuation eliminates low-velocity surface waves while retaining reflections: with a set velocity as the threshold, events slower than this are attenuated.
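One common way to realise such a velocity threshold is an f-k fan filter. The sketch below (numpy, illustrative only, not the ProMAX module) rejects spectral components whose apparent velocity |f/k| falls below a cut-off such as the 800 m/s used below.

import numpy as np

def fk_velocity_filter(gather, dt, dx, v_cut=800.0):
    # gather: (nt x nx) shot record; dt in s, dx = trace spacing in m
    F = np.fft.fft2(gather)
    f = np.fft.fftfreq(gather.shape[0], d=dt)[:, None]   # frequency (Hz)
    k = np.fft.fftfreq(gather.shape[1], d=dx)[None, :]   # wavenumber (1/m)
    with np.errstate(divide='ignore', invalid='ignore'):
        v_app = np.abs(f / k)                            # apparent velocity
    reject = (v_app < v_cut) & (k != 0)                  # slow (surface-wave) fan
    return np.real(np.fft.ifft2(F * ~reject))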

With the other routines inactivated as shown above, run it.


Below is the output with the velocity threshold set to 800 m/s.

This action is meant to attenuate surface waves with respect to the selected gate

Below is the output with the velocity threshold set to 1500 m/s.

4.6 Bandpass Filtering


The commonest form of filtering is to remove unwanted frequency components from the data by bandpass frequency filtering. This may be done to remove frequencies above the Nyquist before re-sampling, or to remove noise types, e.g. low-frequency swell noise, from the data.

Filters are usually zero-phase (Ormsby) or minimum-phase (Butterworth), although either filter type can actually be of either phase, and this should be clearly stated. The passband of a zero-phase Ormsby filter is usually defined by up to four corner frequencies, as shown in Figure 1a. The passband of a Butterworth filter is more complex and involves two cutoff frequencies (Figure 1b) where the filter is at half power (3 dB down on maximum power). Two filter slopes are also required, specified in decibels per octave. An octave is a doubling of frequency, e.g. 120 Hz is an octave above 60 Hz.
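A zero-phase Ormsby filter is simply a trapezoidal weighting of the amplitude spectrum. A minimal numpy sketch is below; the corner frequencies are illustrative, and it assumes the Nyquist frequency lies above f4.

import numpy as np

def ormsby_bandpass(trace, dt, f1=5.0, f2=10.0, f3=60.0, f4=80.0):
    n = len(trace)
    freqs = np.fft.rfftfreq(n, d=dt)
    # trapezoid: 0 below f1, ramp up f1-f2, flat f2-f3, ramp down f3-f4, 0 above
    amp = np.interp(freqs, [0.0, f1, f2, f3, f4, freqs[-1] + 1.0],
                    [0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
    return np.fft.irfft(np.fft.rfft(trace) * amp, n)

Multiplying the spectrum by a purely real, symmetric amplitude response leaves the phase untouched, which is what makes the filter zero-phase.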
Set parameters of Band pass filter as below

Keep the other routines inactive, activate Disk Data Input, Bandpass Filter and Trace Display, and run the flow to see the effect of the filter.

4.7 Disk data output


Saving the pre-processed data.
After setting all routines in the pre-processing flow, we activate all of them and run; the final output is saved as RAWDATA+Geom+Pre-process.

Below is the output with all pre-processing routines applied


RAWDATA+Geom+Pre-process

5.0 Constant Velocity Analysis (CVA)

______________________________________ Instruction Notes________________________________________


Preliminary velocity analysis is done by evaluating constant-velocity stacks. The suggested flow is below.

The input data should be the pre-processed dataset sorted by CDP, with secondary key ILINE_NO. Only one ILINE_NO should be analysed, chosen from the middle of the survey. An example of how CVS can be set up is given below.

The sort of the second DDI module is Panl_vel/xline_no. The trace display should have at least 5 stacks per screen with proper annotations. See below.

Compute a single velocity function from the CVS.


_____________________________________________ Instruction Notes_______________________________________

Procedure

5.1 Create new flow 005 Constant Velocity Analysis

The complete flow should be as follows

5.2 Disk Data Input


The input data should be the pre-processed dataset sorted by CDP, with secondary key ILINE_NO. Only one ILINE_NO should be analysed, chosen from the middle of the survey.

Using geometry QC tools and attributes, we can select an inline that is well in the middle of the survey grid.

5.3 Trace Length

5.4 Constant Velocity Stack


Parameters set as below

5.5 Disk data output

To save the constant-velocity stacked section (stacked with the defined velocity parameters).

5.6 Disk data input


Set the sort of this second DDI module to Panl_vel/xline_no.

5.7 Trace Display


The trace display should have at least 5 stacks per screen with proper annotations. See below.
Parameters for Trace Display

Below is the constant velocity stack obtained by applying velocities between 1800 and 2050 m/s in 50 m/s steps.
CONSTANT VELOCITY STACKS (CVS): in this approach a number of adjacent CMPs are selected around each location point. The CMPs are NMO corrected and stacked using a defined range of constant velocities, in this case 1800 m/s to 2050 m/s with an interval of 50 m/s.
The mini-stack panels are displayed next to each other and velocities are picked where key events show the highest amplitude or greatest continuity. The method shows what the data will look like if stacked with the chosen velocity, but has a resolution limited to the velocity interval chosen. This may be the best method for data with very poor SNR. Some attention should also be paid to the mutes applied for CVS analysis, particularly if multiples are present.
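Conceptually, each CVS panel is just a constant-velocity NMO-and-stack. A minimal numpy sketch is below; the gather, offsets and trial velocities are assumed inputs.

import numpy as np

def nmo_stack(gather, offsets, dt, v):
    # gather: CMP gather (nt x nx); offsets in m; v: constant trial velocity
    nt, nx = gather.shape
    t0 = np.arange(nt) * dt
    stacked = np.zeros(nt)
    for i, x in enumerate(offsets):
        t = np.sqrt(t0 ** 2 + (x / v) ** 2)        # NMO hyperbola t(x)
        stacked += np.interp(t, t0, gather[:, i], right=0.0)
    return stacked / nx

# one panel per trial velocity, e.g. for v in range(1800, 2100, 50)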


Qn. Based on the constant velocity stacks, compute the single velocity function.

Based on visual examination of the mini-stack panels displayed next to each other (above), the best single velocity function is selected depending on:

- highest amplitude or greatest clear continuity;

- correctness of the NMO correction (closest to perfectly horizontal, neither over-corrected nor under-corrected).

The method shows what the data will look like if stacked with the chosen velocity, but has a resolution limited to the velocity interval chosen. In this case the pick is 2000-2050 m/s, because of the highest amplitude and event continuity and the near-perfectly horizontal NMO correction.

6.0 Interactive Velocity Analysis (IVA)

_____________________________________________ Notes ___________________________________________________________
Use the velocities obtained from CVS as a guide function for IVA.
An example of the flow is displayed below.

Instruction

1. In the first step, create super-gathers.

2. In the second step, analyse velocities. You should analyse velocities on a 200 m x 200 m grid.
The suggested parameterisation is below.

Display the IVA1 velocity field using the 3D Volume Viewer module.


_____________________________________________ Notes ___________________________________________________________
Procedure
6.1 Create new flow called 006_Interactive Velocity Analysis

6.1.1 The flow should have the following routines

Instruction

6.1.2 3D Supergather Formation


Parameters for 3D Supergather Formation

From the grid and fold map below, the minimum and maximum inlines are 1 and 87, and the minimum and maximum crosslines are 1 and 94. In order to keep reasonable fold, we skip the low-fold inlines and crosslines on the periphery: in this case inlines 40 to 60 with an increment of 10, and crosslines 40 to 60 with an increment of 10.

Run it and confirm that the 3D supergathers are formed

We make some changes in the velocity analysis: we create a CVS table from the values obtained in CVA.
The parameters of the Velocity Analysis flow are given below.
For the X and Y coordinates, click File > Resolve > Resolve X and Y from CDP > OK (note CDP has been selected as 1 below). With a similar approach for the In-Line and Cross-Line values, choose the resolve option Resolve In-Line and X-Line from X and Y.

6.1.3 Disk data output


To save the supergathers for the interactive velocity analysis process. Dataset name: Supergath.

Run the two active stages to save the supergather dataset.

This ran successfully, meaning the supergather dataset has been created and saved.

6.2 Disk data input for Velocity Analysis
This reads in the supergather dataset (Supergath) formed in the previous step.
Sorting is by primary key: Supergather Bin Number.

6.3 Parameters for Velocity Analysis, set below

With the above parameters set, below is the interactive velocity analysis display (initial display).

Below is a pick on CDP No.

7.0 Residual statics
Residual statics are computed in the following steps:
1. Make a stacked volume using IVA1 velocities.
This is some form of brute stack formed using the IVA_1 velocity function.
Parameters for Disk Data input

Parameters for Normal Move out correction (NMO)

Bandpass filter parameters

AGC Parameters

CDP Ensemble Stack parameters

F-X Decon parameters

Disk Data Output parameters


The stacked dataset shall be called Stack_IVA_2

------------------------------------------------------Add Flow Comment------------------------------------------------------
Disk data input parameters

Bandpass filter parameters

Trace labelling

Trace display parameters

Inactivate the routines in the second part of the flow, activate the routines in the first part (as shown below), and run it.

This means the stacked dataset Stack_IVA_2 is formed and the table populated.
Its characteristics are as below.

Next, we deactivate the routines in the first phase of the flow, activate the routines in the second phase, and run it.

Greyscale (VA) option of CDP Ensemble stack

If we create ensemble stacks using velocity IVA_1

Flow

8. Residual Statics (continuation)

8.2 Picking the Autostatics Gate

Choose parameters of Smash and gate width

2. Display the stack with all ilines/xlines and pick autostatics horizons. Create the name of the gate and pick areas of highest reflectivity. Save the gate before exiting.
3. Prepare the dataset by sorting it in the CDP domain and applying NMO.
4. Compute 3D max. power autostatics using the prepared dataset and the autohorizon gate.
5. Apply the residuals prior to stacking using the Apply Residual Statics module.

The flow for residual statics computation is displayed below.

Choose Smash and gate width

Show one iline and one xline from the centre of the survey with and without residual statics.
Comment on the result.
Update the IVA velocity analysis (IVA2) by applying residual statics prior to velocity analysis. Compare stacks made with the IVA1 and IVA2 velocities.
Procedure
Create new flow 008 Residual Statics

The flow should be as below

Parameters of DDI

Parameters of NMO

Bandpass filter parameters

Disk data output parameters

-------------------------------------------------Add Flow Comment-------------------------------------------------------------
2D/3D Max. Power Autostatics parameters

Deactivate the lower part of the flow and run it.

Parameters for 2D/3D Max. Power Autostatics

Activate it together with all the others and run it.

-------------------------------------------------Add Flow Comment-------------------------------------------------------------

Parameters for apply residual statics

Parameters for NMO

Band pass Filter

AGC Parameters

CDP Ensemble Stack parameters

F-X Decon parameters

Disk data output

-------------------------------------------------Add Flow Comment-------------------------------------------------------------

Disk data input parameters

Sorting by primary key: CDP Bin number

Display labeling

Trace display parameters

Run the entire flow; below is the output.

The residual statics flow ran successfully.

Comparison of the CDP ensemble stack with residual statics applied and one without residual statics applied.

Discussion: applying residual statics improves the quality and resolution of the stacks by correcting the remaining static errors: residual travel-time shifts and near-surface effects.
After normal moveout correction it is easy to see any residual 'jitter' between adjacent traces caused by remaining uncorrected statics, because the NMO correction should make all the reflections horizontal.
The remaining uncorrected statics errors may be due to errors at the shot points and at the geophone points.

After NMO correction, misalignment of the waveform across the CDP gather results in a poor-quality stack. The immediate need is to estimate the time shifts from the ideal alignment and then compensate for them using automatic picking. This requires a model for the moveout-corrected travel time from the source station to a depth point on a reflector and back to the receiver station. The model adopted here assumes that the static shifts depend on the source and receiver locations, not on the ray paths travelled in the subsurface (i.e. they are surface-consistent).
With the application of residual statics, image resolution is significantly improved.
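As a toy illustration of the shift estimation only (the 2D/3D Max. Power Autostatics module solves jointly for surface-consistent source and receiver terms by maximising stack power), the residual shift of one NMO-corrected trace can be estimated by cross-correlation against a pilot (stacked) trace:

import numpy as np

def residual_shift(trace, pilot, dt, max_shift=0.024):
    # returns the shift (s) that best aligns trace with pilot
    nlag = int(max_shift / dt)
    lags = np.arange(-nlag, nlag + 1)
    cc = [np.dot(np.roll(trace, -l), pilot) for l in lags]  # note: roll wraps ends
    return lags[int(np.argmax(cc))] * dt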

_____________________________________________ Instruction Notes______________________________________

9.0 Dip-moveout correction (DMO)

DMO correction is a dip-dependent partial migration, applied so that nonzero-offset seismic data exhibit the same zero-offset
reflection times and reflection points for all offsets. This transformation from nonzero-offset to zero-offset yields improved (less
dip-dependent) velocity estimates and higher lateral resolution, as well as a few other desirable side effects, such as the
attenuation of coherent noise.
The flow uses DMO to Gathers 3D, which applies the DMO correction to prestack NMO-corrected gathers. The output DMO-corrected gathers can be inverse-NMO corrected and used as input to velocity analysis programs.
An example DMO flow is illustrated below.

Use the latest velocity to apply the DMO correction. After we obtain DMO-corrected gathers, we need to stack the data. Note that NMO is already applied to the obtained dataset.
Compare the DMO stack with the residual-statics stack.
Run another pass of velocity analysis on the DMO-corrected gathers.
Run a second pass of DMO with the newly obtained velocities.
Compare the first- and second-pass DMO stacks.
Comment on the results.
_____________________________________________ Instruction Notes_______________________________________
Procedure for flow 9.0
New flow 009_DMO_1 is created

Complete flow

9.1.1 Parameters of Disk data input

9.1.2

Parameters of Apply Residual statics

9.1.3

Parameters of NMO

9.1.4 Parameters of DMO to Gathers 3D

9.1.5

Disk data output parameters

Run the complete flow

Below is confirmation that it ran successfully.

10.0 DMO Stack
We create a new flow 010_DMO_Stack_1

So we run it in stages (activate the routines in the current stage and deactivate those in the other stages of the flow).

The first stage ran successfully.

We deactivate the first, third and fourth stages and run the second stage.

This also completed successfully.

Similarly, we run the third stage.

This also ran and completed successfully.

Next we run the last part

Below is the DMO_Iteration_1 Stack

Below is the version in grayscale

Intermediate flow 10.5: Velocity Manipulation to convert stacking velocities to interval velocities

We run it.
It runs and completes successfully, meaning the interval velocity table has been created and populated.

11.0 Migration
Flow

Disk Data Input parameters

Stolt or Phase Shift 3D Mig parameters

F-X Decon parameters

Band pass Filter parameters

Disk data output

---------------------------------------------Add Flow Comment----------------------------------------------------------------
Disk data input parameters

Band pass Filter parameters

Trace Display label

Trace display parameters

Run it in two stages

Stolt/phase-shift migration is computationally efficient and very accurate for constant velocity, but it has difficulty imaging steep dips in areas with large vertical or lateral velocity variations. Since this is not the case in the Otway survey, it is the most efficient way to image our data.
The module used for poststack migration is Stolt or Phase Shift 3D Mig.
Compare the DMO stack and the migrated stacks by displaying inlines 30, 40, 50 and 60 on one screen, and the same range of xlines.
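For intuition, a two-dimensional constant-velocity Stolt migration can be sketched as below (numpy, zero-offset section under the exploding-reflector convention, with only an approximate amplitude/Jacobian scaling). The production module is a full 3-D implementation; this sketch is for understanding only.

import numpy as np

def stolt_migrate(section, dt, dx, v):
    # section: zero-offset section (nt x nx); v: constant medium velocity
    nt, nx = section.shape
    S = np.fft.fft2(section)                       # to (f, kx)
    f = np.fft.fftfreq(nt, d=dt)                   # temporal frequency (Hz)
    kx = np.fft.fftfreq(nx, d=dx)                  # wavenumber (1/m)
    ve = v / 2.0                                   # exploding-reflector velocity
    fs = np.fft.fftshift(f)                        # increasing axis for interp
    out = np.zeros_like(S)
    for j, k in enumerate(kx):
        # Stolt mapping: the output frequency (standing in for ve*kz) takes the
        # input spectrum at f_in = sign(f) * sqrt(f^2 + (ve*k)^2)
        f_in = np.sign(f) * np.sqrt(f ** 2 + (ve * k) ** 2)
        col = np.fft.fftshift(S[:, j])
        re = np.interp(f_in, fs, col.real, left=0.0, right=0.0)
        im = np.interp(f_in, fs, col.imag, left=0.0, right=0.0)
        scale = np.divide(f, f_in, out=np.zeros_like(f), where=f_in != 0)
        out[:, j] = (re + 1j * im) * scale         # approximate Jacobian factor
    return np.real(np.fft.ifft2(out))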

Lab 8 Seismic Attributes

Objective
The objective of this exercise is to calculate various instantaneous seismic attributes for the Otway seismic volume. Our working area is Otway 3D.
Module to use for this lab: Trace Math Transforms.

The conventional seismic trace can be viewed as the real component of a complex trace, which can be uniquely
calculated under usual conditions. The complex trace permits the unique separation of envelope amplitude and phase
information and the calculation of instantaneous frequency. These and other quantities can be displayed in a color-encoded manner, which helps an interpreter see their interrelationship and spatial change.
(Taner, Koehler and Sheriff, 1979, Complex seismic trace analysis: Geophysics, 44(6), 1041-1063)
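The attributes used in this lab all derive from the analytic (complex) trace, which can be computed with a Hilbert transform. The snippet below (numpy/scipy) illustrates the definitions rather than the Trace Math Transforms implementation; the perigram here subtracts a simple mean where a smoothed envelope background would normally be used.

import numpy as np
from scipy.signal import hilbert

def instantaneous_attributes(trace, dt):
    z = hilbert(trace)                             # analytic trace x + iH[x]
    envelope = np.abs(z)                           # reflection strength
    phase = np.angle(z)                            # instantaneous phase (rad)
    freq = np.gradient(np.unwrap(phase)) / (2 * np.pi * dt)  # inst. frequency (Hz)
    perigram = envelope - envelope.mean()          # envelope minus background
    return envelope, phase, freq, perigram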

Background Literature

In the context of reflection seismology, seismic attributes are quantities extracted or derived from seismic data that can be analysed to enhance information that might be subtle or hidden in a traditional seismic image, thereby unlocking more information for better geological or geophysical interpretation of the data. Examples of seismic attributes include measured time, amplitude, frequency and attenuation, in addition to combinations of these. Most seismic attributes are post-stack, but those that use CMP gathers, such as amplitude versus offset (AVO), must be analysed pre-stack. They can be measured along a single seismic trace or across multiple traces within a defined window.
Calculate and display instantaneous seismic attributes for the migrated Otway 3D volume. For better display, sort the data as shown in the example above.

Create new flow 012_Seismic Attributes

Parameters of Disk data output

Parameters of Trace Math Transforms


Note that the drop-down list offers options

Trace math transform types

Parameters of Trace display


For the trace display, set at least 4 inlines per screen and set the Trace Scaling option to Entire.

Depending on the type of attribute, use an appropriate colour palette. From the View menu, activate Color Bar. Compute and display the instantaneous seismic attributes on the 3D dataset.
Seismic Section rev_sw_bluwhtbn.rgb

-----------------------------------------Recommended Colour setting (from lab notes) ------------------------------------------

We can explore the reflection strength attribute by choosing it from Trace Math Transforms.

Reflection strength in uni_reflstg.rgb

Reflection strength in uni_reflstg.rgb

Perigram and color palette uni_reflstg.rgb

Perigram and color palette uni_reflstg.rgb

Instantaneous frequency and palette uni_segfreq.rgb

Instantaneous phase with color palette uni_segphase.rgb

Derivatives

First derivative

Second derivative

First derivative

Second derivative

Perigram * cosine of phase

Apparent polarity

First derivative, apparent polarity

Second derivative, Apparent polarity

SW_HUB_ATTR_STK

Seismic section in rev_sw_bluwhtbn.rgb

Instantaneous frequency and palette uni_segfreq.rgb

Instantaneous phase with color palette uni_segphase.rgb

Perigram * cosine of phase

Apparent polarity

First derivative, apparent polarity

Second derivative, Apparent polarity
