Mark/Grade:
Unit Code:
GEOP4000
Unit Name:
Lecturer(s):
Dzunic (Prof)
Reproduce this assessment item and provide a copy to another member of the University; and/or
Communicate a copy of this assessment item to a plagiarism checking service (which may then retain a copy of
the assessment item on its database for the purpose of future plagiarism checking).
I certify that I have read and understood the University Rules in respect of Student Rights and Responsibilities
(details of which can be found at: http://students.curtin.edu.au/administration/responsibilities.cfm).
Name of Student:
Student Number:
Signed:
PAUL SSALI
17390138___
Paul Ssali ___
15/06/2015
Date:
____________
Note: unless stated otherwise, assignments must be lodged with the Unit Coordinator or in the relevant WASM assignment box.
Lab 1 and Lab 2 - Building a velocity structure, seismic modelling and migration
Seismic Processing 423
Instructor: Milovan Urosevic
Assistants: Aleksandar Dzunic, Sasha Ziramov
Concepts:
Software: ProMax

ACTION                                                                    ProMax module
1.  Create velocity model                                                 Interactive Velocity Editor*
2.  Smooth velocity model                                                 Velocity Viewer/Point Editor
3.  Create a zero-offset section using the exploding reflector concept    Finite Difference Modeling
4.  Display seismic section in time                                       Trace Display
5.  Create average velocity in time from interval velocity in depth       Velocity Manipulation
6.  Convert time section to depth                                         Time/Depth Conversion
7.  Display depth section with velocity model                             Interactive Velocity Editor*
8.  Create RMS velocities from interval velocities                        Velocity Manipulation
9.  Time migration                                                        Memory Stolt F-K Migration
10. Display migrated section                                              Trace Display
11. Convert time migrated section to depth                                Time/Depth Conversion
12. Display migrated section in depth                                     Trace Display
13. Display migrated section in depth with velocity model                 Interactive Velocity Editor*
The ProMax software opens and assigns a working area according to the student name.
Open a line LAB_01, then assign a line LAB_0102 (as shown below).
Create a new flow by clicking Add > LAB_0102 (for the line name).
This allows setting up of models (creating the velocity model), which is interval velocity in depth.
Give the created interval velocity model (in depth), i.e. its database, a name; in this case it is VMDL_01.
Specify units (feet or metres) and minimum and maximum depth.
With the menu above, you can create a model by the following procedure:
MB1 (left) click on Add at the top, and make sure it is active (shaded blue).
MB1-click the corners of the polygon; at the last corner, where the polygon joins back to the first corner and closes,
click Close at the top. This closes the polygon.
If corners or sides have to be adjusted, click Move and drag the point into place with MB1.
If drawing a polygon that shares corners or sides with an existing polygon, use the middle mouse button (wheel) at the common corners, MB1 at the new corners, and at the last corner click Close.
This procedure is used to create all the shapes in the model being built.
--------Add Flow Comment-------- to separate the previous Interactive Velocity Editor module.
Add Interactive Velocity Editor* to the flow.
Separate that routine with ------Add Flow Comment------. Give this second model another name, VMDL_02.
Click on the INVALID name; it will display the parameter file for Interval Velocity in Depth.
Click Add and enter the name for the second velocity model, VMDL_02.
Below is the model name; adjust all the other parameters accordingly.
Then execute it (remember to inactivate the other flows each time you run a particular flow).
Follow the procedure above for shaping and creating the second model and assigning velocities.
Doing the velocity models
Procedure
To assign the egg-shaped section in the middle, it has to be picked all the way round using the middle click (wheel); at the last pick, back at the first pick, click Close.
To set up the others, you can duplicate the routines in the previous Interactive Velocity Editor (using the copy/paste/delete technique) and change the name of the output.
Step 2: Smoothing the interval velocity (in depth) models using VELOCITY VIEWER/
POINT EDITOR
We are to use the Velocity Viewer/Point Editor* flow: input any of the models set up above and select the name of the output
velocity database (VMDL_01_SMTH), which in this case is the smoothed model.
Steps:
Click the INVALID and select input velocity database, browse and select the VMDL_01
Run it; below is the output smoothed VMDL_01 velocity model in depth, saved as VMDL_01_SMTH.
All the other created velocity models can be smoothed using the same procedure.
Using the above procedure, the other models VMDL_06 and VMDL_05 can be smoothed with the same routine.
As below, activate the copied routine, edit input VMDL_06 (velocity model created in depth) and output VMDL_06_SMTH
(smoothed velocity model created in depth).
Select input
This model is plotted with CDP number on the X-axis and depth on the vertical axis; clicking in it gives the velocities.
Step3: Creating a zero-offset section using the Exploding reflector concept using FINITE
DIFFERENCE MODELLING
Separate the flow using ------Add Flow Comment------- and input another flow FINITE DIFFERENCE MODELLING
Add Disk Data Output and select name of the zero-offset section created
The file name indicates that it is already stacked in time.
Execute this flow and this shall save the Zero-offset section (in time) calculated by the Finite Difference Routine applied
Make sure it runs successfully.
Step4: Applying BAND-PASS FILTER to preserve a range of frequencies and filter-out very
low and very high frequencies outside the specified range.
Select the parameters of Disk Data Output: the name of the output file and record length = 3000.
Note that the name LM1_STK_time_Filt indicates a stacked, filtered time section.
The Disk Data Output parameters should be set as below.
Execute it. Below is our zero-offset stacked in-time image after the bandpass filter has been applied to make it clearer.
Using ------Add flow comment------- to separate new flow from previous flow
Velocity manipulation flow
So we are to put in our interval velocity model (in depth) initially created and output an average velocity in
time.
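For reference, this is the standard relation that the Velocity Manipulation step is effectively applying (the formula is not given in the lab handout; it is just the usual definition): the average velocity down to the base of layer n is the time-weighted mean of the interval velocities,

    V_{avg}(t_n) = \frac{\sum_{i=1}^{n} v_i \,\Delta t_i}{\sum_{i=1}^{n} \Delta t_i}, \qquad \Delta t_i = \frac{2\,\Delta z_i}{v_i}

where v_i and \Delta z_i are the interval velocity and thickness of layer i, and \Delta t_i is the two-way time spent in that layer.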
Select interval velocity in depth and select one of the initial velocity models in depth created e.g VMDL_01
SELECT INPUT
Select the input as interval velocity in depth, or you can use the smoothed velocity.
Note: in GET VELOCITY from database click YES, and in the next step go and create the table VMDL_01_SMTH_AVG.
Select new name for the output file of the Velocity_depth section and velocity_time section
Below is the stacked seismic section (converted to depth) overlaid with the initially set interval velocity (in depth)
model.
Note that there is a mismatch between the seismic section and the velocity model because the seismic image has not been
properly migrated.
Using -----Add flow comment----- separate new flow from previous flow.
Velocity manipulation*
Using -----Add flow comment----- separate new flow from next flow.
Step 9: Migration
Objective: To move all events to their true points of origin, not to where they were
imaged or recorded; e.g. all reflections are moved to their true reflection points rather than the imaged
points.
Note:
The Velocity manipulation* routine preceding migration is to convert interval velocity (in depth) to interval velocity in
time and to select an output file V_interval_time.
In order to display it
Separate the previous flow by -----Add flow comment---- Disk data input to read-in migrated dataset
Trace display
For disk data input select time migrated image in the previous step
Set parameters of trace display as below (very important to primarily sort by CDP)
Below is the output of the process: the migrated section (in depth) overlaid with the initially set velocity model (in
depth).
Comment on efficiency of migration process i.e. position of velocity boundaries on seismic section Vs. position of boundaries on
the velocity model.
Do this for the other models and discuss results.
Software: ProMax
Procedure 1:
Create job flow
Procedure 2:
Migrate seismic time sections using ProMax module Explicit FD Depth Migration.
Note: to help understand the function of Explicit FD Depth migration
Procedure 3 and 4:
Display depth migrated sections using Trace Display and Interactive Velocity Editor. Compare depth and time poststack migrated sections with velocity model.
TRACE DISPLAY:
Under trace display
Lab 4
Set parameters of interactive vel Editor
Step2: Smoothing the Interval velocity (with depth) models using VELOCITY VIEWER/
POINT EDITOR
We are to use the Velocity Viewer/Point Editor* flow: input any of the models set up above and select the name of the output
velocity database (LAB04_VDML_SMTH), which in this case is the smoothed model.
Procedure 3:
In the flow 020 PreStack_MDL execute Finite Difference Modeling module with parameters listed below (Fig. 4 and 5) and save
data using Disk Data Output.
When the above has run and completed the entire 150 shots, the following message is presented at the end of the window.
Note: if needed, you can copy in the ready-made dataset 150_PSTK_MDL_SHTS from AREA: tutor, LINE: GP423, as well as
the velocity model L03_VMDL (interval velocities in depth).
Fig. 8. Parameters used for 150 shots FD modelling (Do not execute those parameters!)
In the Disk Data Input module within a Trace display option use Sort, instead of Get All parameter. Select primary
key: Live source number.
In the Sort order list for dataset select range from 10th to 150th shot record with the step of 10 shots: 10-150(10).
In the Trace Display module, set the Number of ENSEMBLES parameter to 15.
Execute flow 030 Geometry Fig. 11a. Using module DDI and Extract Data Base Files, database is initiated and updated.
In the Disk Data Input menu, select the seismic dataset 150_PSTK_MDL_SHTS for the 150 shots.
Separate the above flow by adding -------Add flow comment----Add routine 2D Land Geometry Spreadsheet
Run it and you will have a blank geometry table
Steps for properly assigning geometry (extracted from the lecture notes): below is the summarised workflow for assigning geometry. The steps are broken down for clarity on the next pages.
Execute the interactive module 2D Land Geometry Spreadsheet*. Instructions on how to fill in the spreadsheets can be found
on the U-drive - Fig. 11b (ProMax modules Help): 2D Land Geometry.pdf and 2D_Geometry_how2.pdf. Finally,
execute the last flow segment: DDI, Inline Geom Header Load, DDO, in order to update the trace headers.
Since header values from the modelled shots have been extracted directly into the ProMAX database (in the
Setup table choose Existing index number mapping in the TRC and press OK).
Step-2: Under the 2D Land binning menu, next step is to click Binning and OK
Step-3: under the 2D Land Binning menu select Finalise database and OK
The above should complete the geometry assignment process; however, if the Receiver and Source tables are incomplete
and some importing work is needed, you may follow the procedure below.
You can click on sources to visualise / analyse the geometry table of sources
Set-up > Match pattern numbers using first line chan and stations > OK
After the above is invoked, the pattern icon will be activated on the Land Geometry assignment window.
In disk data output, select name of output file with geometry headers
After all table filling / edits save the table through MB1 File> Save> Exit
Binning
Click Proceed
Confirm OK
Proceed to Binning
Confirm
Finalisation of Database
Confirm
Run it and ensure that it runs successfully; the output file will then be populated.
Below is the same output in WT. We can check our assigned geometry if we sort in Source / Channel number
Under the trace header maths, make sure you pre-set up the various equations you want to investigate
The typical one for evaluating geometry is the air-wave equation air = aoffset/340*1000, which computes the expected air-wave arrival
time: offset (distance in metres) divided by 340 m/s (the speed of sound in air), multiplied by 1000 to convert to milliseconds.
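As a quick sanity check of the same arithmetic outside ProMAX (a minimal sketch; the offsets below are example values only):

    def airwave_time_ms(aoffset_m, v_air=340.0):
        """Expected air-wave arrival time in milliseconds for a given absolute offset (m)."""
        return aoffset_m / v_air * 1000.0

    for offset in (0.0, 170.0, 340.0, 1000.0):
        print(f"offset {offset:7.1f} m -> air wave at {airwave_time_ms(offset):7.1f} ms")

For example, a trace at 340 m offset should show the air-wave arrival near 1000 ms, and the arrival should be at 0 ms at the shot location itself.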
Thus this geometry has been set correctly and accurately; in principle, the apex of the air-wave should sit at the base of the shot flag.
Other methods of evaluating / QC Geometry include use of the View header plot menu
On the display go VIEW > HEADER PLOT > CONFIGURE > AIR
Select from the available trace headers, in this case absolute offset (aoffset). Note the offset is plotted directly over
the data, and you can evaluate the correctness of the geometry by clicking on a trace, going up through a particular channel (e.g. 41) and
reading the offset value in the text display at the bottom to see if it makes geometrical sense on the ground.
In the flow 040 True Amplitude Recovery execute module TAR to compensate amplitude losses. Read the ProMax
manual (click ?) and try using different approaches to obtain amplitude equalization of the reflections. Input file is:
030_150_SHOTS_GM and Stacking (RMS) velocity. Execute Velocity Manipulation module in order to compute
Stacking velocity from your interval vel. in depth (Fig. 12):
Procedure
6.1 Add another flow 040_True_Amplitude_Recovery (TAR)
6.2 Under the flow 040_True_Amplitude_Recovery (TAR), create a separate flow Velocity Manipulation. The purpose of the
velocity manipulation is to compute stacking velocities from the interval velocities.
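For reference (these relations are not spelled out in the handout; they are the standard definitions the module implements conceptually), the stacking (RMS) velocity follows from the interval velocities, and the Dix formula inverts the relation:

    V_{rms}^{2}(t_n) = \frac{\sum_{i=1}^{n} v_i^{2}\,\Delta t_i}{\sum_{i=1}^{n}\Delta t_i}, \qquad
    v_n^{2} = \frac{V_{rms,n}^{2}\,t_n - V_{rms,n-1}^{2}\,t_{n-1}}{t_n - t_{n-1}} \quad \text{(Dix)}

where \Delta t_i is the two-way interval time and t_n the total two-way time to the base of layer n.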
You are inputting the initial interval velocity model (in depth).
Create new file name for the stack velocity to be created in this case LAB04_VMDL_stc
Execute it and make sure it runs successfully, meaning the stacking velocity table set-up is populated
Step 6.3: Compensating Amplitude loss (Gain) as a process of True Amplitude Recovery (TAR)
Under the disk data input read-in the dataset with geometry assigned
The Disk Data Input gets the data with geometry; it inputs by sorting between the 10th and 150th shots and selects the 10th shot (the sorting
key is Live source number).
Note:
The above is just a testing step in which we apply True Amplitude Recovery to just part of the data and analyse
the output. If we consider it reasonably effective, we must then apply TAR to the entire dataset through a Get All in Disk Data
Input.
Note: trying to run with the above settings produced this error,
meaning that in True Amplitude Recovery the velocity should be set SPATIALLY VARIANT as below (it does not
make sense for the velocity not to be spatially variant).
In Trace Display select 15 ensembles and remember to set the display key to live source number.
Below is the True Amplitude correction using the dB/sec correction parameters in Figure 15.
Below is the TAR output with these settings; create a file output to save the data as 040_150_shots_gain with amplitude
recovered.
Include disk data output to save the dataset to which amplitude has been recovered
Disk Data output to save file of entire dataset with TAR applied
Below is the new output file for the entire TAR-gained-dataset (TAR applied)
Fig. 18 Dataset Information (MB2 click on the file name in Dataset table)
From the data table above we can deduce the following:
Note:
It is important to realise that the above dataset is partial (it is a product of a sorting process, not of a Get All process, which
would output the complete TAR-gained dataset).
So in forming gathers, we have to use the complete dataset
The complete dataset name is 04_ALL_SHOTS_TAR_gain. Below is an analysis of its details as in the previous step.
Procedure
Create flow 050_ Velocity Analysis
Note that:
we are reading in data with amplitude recovered
maximum CDP fold: if we do not know it, we put in 999
because we are avoiding the starting and end portions of CDPs where the fold is not at a maximum, we enter a
minimum CDP number of 200 (instead of 138) and a maximum of 800 (instead of 898)
Key things to know about Forming Supergathers
the Supergather routine may not need a Disk Data Input as long as it is directed to the file to read in
for good supergathers, pick regions of maximum fold (eliminate roughly the first and last 100 CDPs on either side)
a CDP increment of 25 in this case means that the first gather will have its middle CDP at 200 and will be combined from a
total of 25 CDPs, 12 on each side, i.e.
(200 - 12 at the left end) .. (200 in the middle) .. (200 + 12 at the other end)
the next CDP gather will have its middle at 225 and will likewise be combined from a total of 25 CDPs, 12 on each
side, i.e. (225 - 12 at the left end) .. (225 in the middle) .. (225 + 12 at the other end)
note that the number of CDPs to combine has to be odd, because the middle CDP (the centre of the gather) has to be accounted
for along with equal numbers on both sides; a sketch of how the supergather centres are enumerated is given after these notes
if this is not respected, ProMAX returns an error indicating that the number of CDPs to combine (CDP smash) must be odd
A poor choice of the range of CDPs to combine (e.g. including low-fold CDPs) produces CDP gathers with weak
reflections that cannot be picked accurately.
A poor choice of the number of CDPs to combine also degrades the result:
too few gives reflections that are too weak,
an optimum number gives the best velocity-analysis picking,
and too many degrades lateral resolution because the CDPs are no longer from the same positions.
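A minimal sketch of how these parameters enumerate the supergathers (illustrative Python, not ProMAX; the parameter values are the ones quoted above):

    def supergather_centres(cdp_min=200, cdp_max=800, cdp_inc=25, cdps_to_combine=25):
        """Yield (centre, first, last) CDP numbers for each supergather."""
        if cdps_to_combine % 2 == 0:
            raise ValueError("CDPs to combine must be odd: a centre CDP plus equal halves")
        half = cdps_to_combine // 2
        for centre in range(cdp_min, cdp_max + 1, cdp_inc):
            yield centre, centre - half, centre + half

    for centre, lo, hi in supergather_centres():
        print(f"supergather centred at CDP {centre}: combines CDPs {lo}-{hi}")

The first gather printed is centred at CDP 200 and combines CDPs 188-212, the next is centred at 225, and so on up to 800.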
In the disk data output set name of output file of super gathers 050_Supergathers
Run that flow and make sure it completes successfully. You can confirm this by checking successful message on the
bottom
This process is meant to use the supergathers formed in previous step and perform velocity picking for each CDP
Below is the flow,
Step 8.1: Band-pass filtering and Gain control prior to velocity analysis
This is an optional process that may use a band-pass filter and/or AGC in order to improve velocity-analysis quality.
Keep in mind the peak (central) frequency of the wavelet that was used for the finite difference modelling.
However, at this point I do not have the appropriate information for proper setting of the filter parameters,
so the filtering process is held off at this point
and our velocity analysis will be run without filtering.
From the top menu, click on Gather-> Gather Parameters and set/change Number of CDPs to sum = 7 (Fig.24).
Fig. 25 Semblance and trace display parameters (scalar works only with VAWG plot)
When picking velocities of the particular CMP is finished, click on the black arrow (upper left corner) to proceed to the next CMP location (Fig. 26). When picking is
completed, click on File -> Save pick, File-> Exit/Stop flow.
However, there is a challenge: if this is set to YES and Velocity Analysis is run again,
it always returns an error about being unable to read / connect to PD after a certain number of iterations.
Procedure
Using Velocity Manipulation, create a brute-stack velocity function by selecting a single brute-stack velocity.
Set the parameters as below: select the appropriate velocity field, for Type of velocity to output select Stacking (RMS) Velocity,
and select the output velocity database name: brutevel.
Select a velocity field (realistic for working purposes). The above would be interpreted as below.
Run the velocity manipulation flow and ensure it runs successfully; this means the velocity function table is
populated.
The above gives us a velocity function table that we are to use for brute stacking.
After True Amplitude Recovery (TAR) is completed in previous step, in the next step read-in the dataset
040_ALL_SHOTS_TAR_gain of all shots with true-amplitude-recovered.
In the Disk Data Input, read-in the dataset and primary sorting key CDP Bin Number
To check the contents of this velocity table Brutevel, click Edit > and click the file
Below is the brute stack upon which a particular CDP can be picked.
Note that it is in the same format (CDP vs. time) as the Velocity Analysis window, so the two can be aligned on multiple screens for
better picking.
This brute stack is strictly for aiding picking, so it is not used anywhere in the further steps of forming gathers or
velocity analysis.
After picking velocities in velocity analysis save the picks File> Save picks
Step 9.0: Creating interval Velocities (in_depth) from Picked Stacking Velocity
Using module Velocity Manipulation (Fig. 28), create interval velocities in depth using picked stacking velocities
(~RMS).
Select input velocity table which is VSTK_picked (from your velocity analysis)
Select output velocity database, which shall be interval velocity (in depth) derived from stacking velocity picked
Note that the velocity manipulation includes an option for smoothing the velocity.
Set all the above parameters of the velocity manipulation and run it.
Set the parameters of the velocity viewer (as above) and run it
Below is the output
Note: there are details on editing the velocity field above in the Velocity Viewer/Point Editor help in ProMAX.
Way forward: in the next step, we are to do stacking using the velocity Vstack_vel picked in Velocity Analysis,
then do DMO using the above smoothed velocity function.
Under that, create this flow. This routine is dedicated to smoothing the picked velocity.
Create the name of the output database that will contain the smoothed version of the stacking velocity picked in the velocity analysis; create the
smoothed version from the picked one.
In the second flow stage, the Disk data input parameters are set as below
Sorting in CDP: Absolute value of offset
Focusing at CDPs 400-450
Run the above active flow to populate the tables above and use them in subsequent flows
After clicking OK, MB1-click on the body of the panel (see blue dot).
Then MB1>click on start point of mute, move to end of mute MB1>click on end of mute line.
Run the above flow and confirm it runs successfully; this populates the dataset to be used in the next step.
After it has run successfully (as above), proceed to run the last section of the flow below.
This just reads in the prestack data made in the previous step and displays it.
Step 10 : Migration
Note: This interim migration stage will obviously yield an inaccurate image, because only NMO has been
applied (which can only correct a horizontal or near-horizontal velocity field, assumes a zero-offset context, and uses
stacking velocity, not Vrms).
Its purpose is to demonstrate that we need something extra: DMO.
DMO corrects dip-dependent move-out as a partial migration, converting prestack non-zero-offset data
into a zero-offset context which is later handled as post-stack.
In the flow 070 Migration perform 2 time and 2 depth migration algorithms (Kirchhoff Time Migration, F-K Migration, Kirchhoff
Depth Migration and Implicit FD Depth Migration).
Start new flow 070 Migration
F-K migration
Below is a comparison of Kirchhoff Time Migration (left) and Memory Stolt F-K Migration.
Parameters
Disk data input
Select velocity interval velocity with depth derived from stacking velocity picked
Output parameters
Select Yes on clipping output velocity and reasonably define boundaries of your highest and lowest velocity fields.
Notice that the option to smooth the interval velocity model is already selected; normally (or optionally) this would be done in
the subsequent step.
With all other routines inactivated, run the routine and make sure it runs successfully. This re-populates the interval velocity
table with the clipping applied.
Now we inactivate the other flows and run the Velocity Viewer / Point Editor flow (this will run on the new interval velocity table
formed by clipping out extreme velocities).
Below is the output (notice the zero velocity east of CDP 800)
To edit this, click on the velocity point you would like to edit (it turns pink); then move the cursor over the velocity field and select the one you want to use as a reference (it will be black-dotted); to keep that, move the cursor vertically along the line to the top, leave the panel and move to the editor side on the right.
Correct the red line against the blue line using the on-screen guides that show how to use MB1, MB2 and MB3 to move, delete, adjust, etc.
The alternative (and best) way of editing is to go to the velocity analysis routine, increase the maximum value (range of semblance), go back to velocity analysis and re-pick, reasonably avoiding
extreme interval velocities (from zero to very high).
Note that were using the smoothed version of velocity table for migration
Parameters
Disk data input
Below is the stacked section in CDP: CHAN sorting (with smoothed velocity field)
Importance of smoothing
Below is the stacked section with the un-smoothed velocity field.
Trace display
Below is the Time-to-Depth conversion of the initial Kirchhoff Time migrated section
Below is the flow for time-to-depth conversion of the Memory Stolt F-K time migration.
In general, depth migration algorithms perform better and are relatively more precise than time migration algorithms; however, the main challenge and
requirement for good performance of the depth algorithms is a good velocity analysis process, as they are highly dependent on velocity.
Case-specific discussion (based on the migrated section diagrams above)
Time migrations
For the top pair, the time migrations (Kirchhoff time migration and F-K time migration), the resolution is significantly poorer. The image is reasonably well
resolved at the boundaries, but has very poor resolution in the middle of the section where the various velocities intersect, and it creates severe and numerous micro-diffractions.
Depth migrations
For the bottom pair, the depth migrations (Kirchhoff depth migration and implicit FD depth migration), the resolution is significantly improved. The image
(layers) is reasonably well resolved and distinguished both at the boundaries and in the middle where the various velocities intersect. It also creates diffractions, but not as small and
numerous as those of the time migrations.
Note: smoothing the velocities refines the serration (sharp, jagged edges) of the velocity boundaries.
Instruction
In the flow 080 DMO, perform the Dip Move Out correction on shot gathers using the Ensemble DMO T-X Domain module (Fig. 4 below).
Display and comment on the results.
Display:
- Raw Shot
- NMO corrected shot
- NMO+DMO
- DMO+NMO-1
FIGURE-4 DMO FLOW and PARAMETERS
Procedure
Below is the complete flow for the first (initial) DMO iteration.
Inverse NMO
Select INVERSE, use the same stretch mute % as in Forward NMO and use the same velocity Vstack picked as used in
FORWARD NMO.
Trace display
Supergather formation
Complete supergather formation flow
From the above, to have maximum fold we will skip 100 CDPs on each side, so we consider minimum CDP = 200 and maximum CDP
= 800.
Parameters for Supergather formation
Refer to pages 164-167 on understanding supergather parameters
Execute this part of the flow and make sure it runs successfully.
If it does, the supergathers are formed and the supergather dataset file 090_DMO_SUPERGATHER is populated.
Velocity Analysis
Select table to store DMO_velocity picks
Use the slider or type in numbers for gain improvement; click OK for a temporary test of the appearance, or click Apply and then OK to
make the changes permanent.
We run the flow to ensure the newly set-up interval velocity table is populated.
It ran normally.
Parameters
Below is the output; notice that we have velocities up to 7000, which were not initially set in the model.
This is because DMO correction velocities (velocities for correcting dipping events) are always higher than the velocities for correcting horizontal events.
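The standard relation behind this observation (not quoted in the lab text, but the usual result for a dipping reflector in a constant-velocity medium, often attributed to Levin, 1971) is

    V_{nmo}(\theta) = \frac{V}{\cos\theta}

where V is the medium velocity and \theta the reflector dip, so steeply dipping events require noticeably higher moveout velocities than the flat-layer values set in the model.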
Velocity field with V_dmo1_picked (i.e. picked after first iteration DMO_1 Velocity)
We can smooth it
Comparing Velocities
Our task is to compare various velocity fields
Notice that these velocities are generally lower and the range is comparatively smaller, up to 4500.
This is because the stacking velocities are closely related to the V_nmo velocities that were picked during application of NMO (NMO velocity analysis), and they are generally lower
because they are mainly used to correct horizontal events.
smoothed
Tips:
A detailed guide to the meaning of the above parameters is attached below the following text.
The typical CDP spacing in ensembles should ideally be the one you used when forming supergathers; it may be varied, but
should not be less than the CDP interval, otherwise you cause DMO aliasing.
The typical RMS velocity at early times can be estimated from the velocity field obtained by stacking.
The maximum offset can be obtained from the geometry (survey shooting) or by examining the information for the complete geometry
dataset (below it is expressed in terms of CDP spacing: 1000 CDPs x 10 = 10000).
Inverse NMO
Make sure for this you select INVERSE,
with the same stretch % and the same velocity table file (the smoothed version of the picked stacking velocity).
Run the first flow of data input and NMO (forward); this will populate that table. If successful, inactivate it and activate the next.
Next, run the Ensemble DMO in T-X flow with the others inactive (make sure it runs successfully).
With the SeisSpace interface, the user has access to all ProMAX tools and can run all of the traditional job flows, but
does so in a modern windowed user interface with capabilities such as copy-and-paste and drag-and-drop. There are
many new geophysical tools in the SeisSpace tool list, and these can be used in flows alongside traditional ProMAX tools. The
JavaSeis format allows parallel reads and writes from disk with no file locking, so it is extremely efficient and scales very well in
parallel.
To start working in Seisspace, type in X-Win terminal the following command:
vncserver -geometry 1280x1024
This is the message that will appear:
This creates a TurboVNC session with an ID (display number), 23 in the case above (write down your own ID; you will
have to use it every time you run TurboVNC).
Run TurboVNC on 134.7.152.10:ID as displayed below.
The data is organised differently in Seisspace. Find and investigate the following folders: Project Area, Project, Subproject,
Flows, Datasets, Tables.
How can we access database and help on processes?
How can we create the job (processing flow)?
How can we submit a job on different nodes?
_____________________________________________________________________________________________
Note: as a tip for using SeisSpace 1D, 2D and 3D, you can access routines and flows as follows.
To select routines for a flow, click Products >> SeisSpace3D >> Show Profile.
Invoking the above gives a set of routines that can be selected on the right-hand side.
If any permission issue prevents it from running, type xhost + in the terminal.
2.2
We inactivate the other routines and run the first two routines in the flow, i.e. Disk Data Input and Extract Database Files.
Completed successfully.
We inactivate the other flows and run Disk Data Input and 3D Land Geometry Spreadsheet.
2.3.1
Click on Bin > Assign mid-points > OK and in the next click Proceed
2.3.2
Azimuth is 25, meaning the Y-lines of the grid are on a bearing of N 025 E, i.e. 25 degrees east of north.
Bin size (Grid X bin dimension) is set to 10; the minimum allowable value is half of the receiver spacing.
Grid Y bin dimension is set to 10; the minimum allowable value is half of the receiver spacing. (However, I have doubts
about this, as the Y spacing seems to be smaller than that.)
Set all other settings as below.
Click Calculate Dim at the bottom left
Click OK
In that view you can display a whole range of things using the menu; the numerous options include colouring sources and receivers
differently, labelling, etc.
Elevation contour
Grid parameters
Run it and ensure it is successful, meaning the table is set up and populated.
4.0 Pre-processing
Make a processing flow that will:
The content of the flow is displayed below. Parameterisation should be tested on one shot record and after that applied to the
whole dataset.
Display a shot record before and after pre-processing. Comment on the results.
4.1
Disk data input parameters
(note that we are testing it on the 50th shot, i.e. live source number 50)
To get the replacement velocity: in the display, activate the dx/dt button on the side, MB1-click on the start of the first break, move to the end
of the line and MB1-click, then MB3 to label the velocity.
To apply the above, inactivate all the other routines in the flow, then run this routine.
Application of elevation statics is meant to correct for the effects of elevation differences relative to the datum.
In this case, applying elevation statics makes some improvement, though not a huge one, as the terrain is relatively flat with gradual (gentle) elevation differences.
4.3 Apply Automatic Gain Control
The purpose of applying AGC is to compensate for amplitude decay through a re-scaling process: a sliding window of fixed length is
used to compute the average amplitude within the window. This average is compared to a reference level and a gain is computed for a point in the window. The window then slides down
one sample and the next gain correction is computed. The process continues until the whole trace has been gained.
Automatic Gain Control (AGC) is the commonest (and often most dangerous) scaling type used.
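A minimal sketch of the sliding-window mechanism described above (illustrative Python, not the ProMAX implementation; the window length, reference level and test trace are assumed values):

    import numpy as np

    def agc(trace, window_samples=250, reference=1.0, eps=1e-12):
        """Scale each sample by reference / mean absolute amplitude in a window centred on it."""
        trace = np.asarray(trace, dtype=float)
        half = window_samples // 2
        out = np.empty_like(trace)
        for i in range(trace.size):
            lo, hi = max(0, i - half), min(trace.size, i + half + 1)
            out[i] = trace[i] * reference / (np.mean(np.abs(trace[lo:hi])) + eps)
        return out

    t = np.linspace(0.0, 3.0, 1501)                              # 3 s trace at 2 ms sampling
    decaying = np.exp(-1.5 * t) * np.sin(2.0 * np.pi * 30.0 * t)
    balanced = agc(decaying, window_samples=250)                 # roughly a 500 ms window
    print(balanced[:5])

The danger alluded to above is visible in the sketch: every window is pushed towards the same reference level, so genuine relative amplitude information (and noise) is rescaled indiscriminately.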
Keep Disk data input, Apply Elevation Statics, AGC and Trace Display active. Inactivate other flows and run it. Below is the output.
4.4
Deconvolution is a filtering process which removes a wavelet from the recorded seismic trace by reversing the process of
convolution. The commonest way to perform deconvolution is to design a Wiener filter to transform one wavelet into another
wavelet in a least-squares sense.
By far the most important application is predictive deconvolution in which a repeating signal (e.g. primaries and multiples) is
shaped to one which doesn't repeat (primaries only). Predictive deconvolution suppresses multiple reflections and optionally
alters the spectrum of the input data to increase resolution.
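A rough sketch of the Wiener (least-squares) spiking deconvolution idea described above, in plain Python/SciPy rather than ProMAX; the filter length, pre-whitening level and synthetic wavelet are assumed values, not the lab parameters:

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def spiking_decon(trace, filt_len=40, prewhitening=0.01):
        """Least-squares inverse (spiking) filter estimated from the trace autocorrelation."""
        trace = np.asarray(trace, dtype=float)
        full = np.correlate(trace, trace, mode="full")
        acorr = full[trace.size - 1: trace.size - 1 + filt_len].copy()   # lags 0 .. filt_len-1
        acorr[0] *= 1.0 + prewhitening                # add white noise for numerical stability
        desired = np.zeros(filt_len)
        desired[0] = 1.0                              # desired output: a spike at zero lag
        filt = solve_toeplitz(acorr, desired)         # solve the Toeplitz normal equations
        return np.convolve(trace, filt)[: trace.size]

    rng = np.random.default_rng(0)
    reflectivity = rng.normal(size=500) * (rng.random(500) > 0.95)       # sparse reflectivity
    wavelet = np.exp(-0.5 * ((np.arange(30) - 10) / 3.0) ** 2)           # smooth source wavelet
    trace = np.convolve(reflectivity, wavelet)[:500]
    print(spiking_decon(trace)[:5])

Predictive deconvolution differs mainly in the choice of desired output (the trace advanced by a prediction lag rather than a zero-lag spike), which is what suppresses periodic multiples.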
Parameters for Spiking Predictive Deconvolution set as below
In order to apply Spiking Predictive Deconvolution, we need to define (pick) a gate. Procedure below
Picking >Pick Miscellaneous Time Gates
Click OK.
On the same display, select the second layer (bottom of the gate).
Decide on the time width of the gate, MB3-click along the time boundary on the display, and use the drop-down list. Click New
Layer (decon-2). This will be the bottom of the time gate.
After setting the parameters of the Spike predictive deconvolution and picking the gate, inactivate other routines as shown and
run a combination of Disk data input + Spike Deconvolution +Trace Display
Selection of gate
This action is meant to attenuate surface waves with respect to the selected gate
Filters are usually zero-phase (Ormsby) or minimum-phase (Butterworth), although the filter type can actually be of either phase
and this should be clearly stated. The passband of a zero-phase Ormsby filter is usually defined by up to four corner frequencies,
as shown by Figure 1a. The passband of a Butterworth filter is more complex and involves two cutoff frequencies (Figure 1b)
where the filter is at half power (or 3 dB down on maximum power). Two filter slopes are also required and are specified in terms
of decibels per octave. An octave is defined as a doubling of frequency, e.g. 120 Hz is an octave above 60 Hz.
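A small sketch of the Butterworth case in SciPy (illustrative only; the corner frequencies, order and sample interval are assumed values, not the lab parameters). Applying the filter forward and backward with filtfilt makes the overall result zero phase, even though a single Butterworth pass is minimum phase:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def bandpass(trace, low_hz=10.0, high_hz=80.0, dt=0.002, order=4):
        """Zero-phase band-pass via a forward-backward Butterworth filter."""
        nyquist = 0.5 / dt                                        # 250 Hz for 2 ms sampling
        b, a = butter(order, [low_hz / nyquist, high_hz / nyquist], btype="band")
        return filtfilt(b, a, trace)

    t = np.arange(0.0, 2.0, 0.002)
    trace = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 150 * t)   # 30 Hz kept, 150 Hz rejected
    print(bandpass(trace)[:5])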
Set parameters of Band pass filter as below
Keep the other flows inactive, activate the following (Disk Data Input, Bandpass Filter and Trace Display) and run them to see the effect of
the filter.
5.0
Input data should be the pre-processed dataset sorted by CDP, with a secondary key of ILINE_NO. Only one ILINE_NO should be
analysed, chosen from the middle of the survey. An example of how CVS can be set up is given below.
The sort of the second DDI module is Panl_vel/xline_no. Trace display should have at least 5 stacks per screen with proper
annotations. See below.
Procedure
5.1
Using geometry QC tools and attributes, we can select an inline which is well within the middle of the survey grid.
5.2
Trace Length
To save the constant-velocity stacked section (stacked with the defined velocity parameters).
Below is the constant velocity stack obtained by applying velocities between 1800 and 2050 m/s in 50 m/s steps.
CONSTANT VELOCITY STACKS (CVS): In this approach a number of adjacent CMPs are selected around each location
point. The CMPs are NMO corrected and stacked using a defined range of constant velocities, in this case 1800 m/s to 2050 m/s
with an interval of around 50 m/s.
The mini-stack panels are displayed next to each other and velocities picked where key events show the highest amplitude or
greatest continuity. The method shows what the data will look like if stacked with the chosen velocity but has a resolution
limited to the velocity interval chosen. This may be the best method for data with very poor SNR. Some attention should also be
paid to the mutes applied for CVS analysis, particularly if multiples are present.
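For reference, each CVS panel simply applies the standard hyperbolic NMO correction with one constant trial velocity (this is the textbook relation, not a formula given in the handout):

    t(x) = \sqrt{t_0^{2} + \frac{x^{2}}{V_{nmo}^{2}}}, \qquad \Delta t_{NMO} = t(x) - t_0

where t_0 is the zero-offset two-way time, x the offset and V_{nmo} the trial constant velocity; the panel whose velocity best flattens the reflection hyperbolae is the one picked.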
Qn. Based on the constant velocity stacks, compute the single velocity function.
Based on visual examination of the mini-stack panels displayed next to each other (above), the best single velocity function is selected.
The method shows what the data will look like if stacked with the chosen velocity, but has a resolution limited to the velocity interval chosen. In this case it is 2000 m/s to 2050 m/s,
chosen for the highest amplitude and event continuity and an essentially flat (horizontal) NMO correction.
6.0
_____________________________________________
Notes___________________________________________________________
Use velocities obtained from CVS and use it as a guide function for IVA.
An example of the flow is displayed below.
Instruction
Instruction
From the grid and fold map below, the minimum and maximum inlines are 1 and 87, and the minimum and maximum crosslines are 1
and 94. In order to consider reasonable fold, we skip the inlines and crosslines on the periphery (with low fold); in this case inlines
40 to 60 with an increment of 10, and crosslines 40 to 60 with an increment of 10.
We make some changes in the velocity analysis. We create the CVS table from the values of the CVA.
Parameters of Velocity Analysis flow is as given below:
For X and Y coordinates, click File > Resolve > Resolve X and Y from CDP > OK (note that CDP has been selected as 1 below).
With a similar approach for the In-Line and Cross-Line values, choose the resolve option Resolve In-Line and X-Line from X and
Y.
It runs successfully, meaning the supergather dataset has been created and saved.
6.2
Disk data input for Velocity Analysis
This reads in the supergather dataset (Supergath) formed in the previous step.
Sorting is by primary key: Supergather Bin Number.
6.3
With the above parameters set, below is the interactive velocity analysis display (initial display).
Residual statics.
Residual statics are computed in following steps:
1. Make a stacked volume using IVA1 velocities.
This shall be some form of Brute Stack formed using the IVA_1 Velocity function
Parameters for Disk Data input
AGC Parameters
Trace labelling
Inactivate the routines in the second part of the flow, activate the routines in the first part of the flow (as shown below) and run
it.
This means the stacked dataset Stack_IVA_2 is formed and the table populated.
Its characteristics are as below.
Next, we deactivate the routines in the first phase of the flow, activate the routines in the second phase of the flow, and run it.
Flow
8.
2. Display the stack with all ilines/xlines and pick autostatic horizons. Create the name of the gate and pick areas of highest
reflectivity. Save the gate before exiting.
3. Prepare the dataset by sorting it in the CDP domain and apply NMO.
4. Compute 3D max. power autostatics using the prepared dataset and the autohorizon gate.
5. Apply residuals prior to stacking using the Apply Residual Statics module.
Show one iline and one xline from the centre of the survey with and without residual statics.
Comment on the result.
Update the IVA velocity analysis (IVA2) by applying residual statics prior to velocity analysis. Compare stacks using IVA1 and
IVA2 velocities.
Procedure
Create new flow 008 Residual Statics
Parameters of DDI
Parameters of NMO
AGC Parameters
Display labeling
Comparison of CDP Ensemble stack with Residual statics applied and one with no residual statics applied
Discussion: Applying residual statics improves the quality and resolution of the stacks by correcting the remaining static errors, i.e. residual
travel-time variations or shifts due to the near-surface layer.
After normal moveout corrections, it will be easy to see any residual 'jitter' between adjacent traces due to any remaining uncorrected statics errors, because the NMO correction should
make all the reflections horizontal.
The remaining uncorrected statics errors may be due to errors at the shot points and at the geophone points.
After NMO correction is applied, misalignment of the waveform across the CDP gather results in a poor-quality stack. The immediate need is to estimate
the time shifts relative to an ideal alignment and then compensate for them using automatic picking. This requires a model for the moveout-corrected travel
time from the source station to a depth point on a reflector and back to the receiver station. The model adopted here assumes that the static shifts depend
on source and receiver locations, not on the travel ray paths in the subsurface.
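A common way to write that surface-consistent model (the handout does not give the explicit formula; this is the standard decomposition behind the statement above) is

    t_{ijk} = s_i + r_j + G_k + M_k\, x_{ij}^{2}

where t_{ijk} is the residual time shift for the trace from source i and receiver j at CDP k, s_i and r_j are the source and receiver statics, G_k a structural term at the CDP, and M_k x_{ij}^{2} the residual moveout at offset x_{ij}.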
With Application of residual statics, image resolution is significantly improved.
9.0
DMO correction is a dip-dependent partial migration, applied so that nonzero-offset seismic data exhibit the same zero-offset
reflection times and reflection points for all offsets. This transformation from nonzero-offset to zero-offset yields improved (less
dip-dependent) velocity estimates and higher lateral resolution, as well as a few other desirable side effects, such as the
attenuation of coherent noise.
The flow uses DMO to Gathers 3D, which applies the DMO correction to prestack NMO-corrected gathers. The output DMO-corrected
gathers can be inverse NMO corrected and used as input to velocity analysis programs.
An example of the DMO flow is illustrated below.
Use the latest velocity to apply the DMO correction. After we obtain DMO-corrected gathers, we need to stack the data. Note that
NMO is already applied to the obtained dataset.
Compare the DMO stack with the residual-statics stack.
Run another pass of velocity analysis on the DMO-corrected gathers.
Run a second pass of DMO with the newly obtained velocities.
Compare the first- and second-pass DMO stacks.
Comment on the results.
_____________________________________________ Instruction Notes_______________________________________
Procedure for flow 9.0
New flow 009_DMO_1 is created
Complete flow
9.1.2
9.1.3
Parameters of NMO
9.1.5
6.2 DMO_Stack
We create a new flow 010_DMO_Stack_1
So we run it in stages (activate the routines to run and deactivate those in the other stages of the flow).
We deactivate the first, third and fourth stages and run the second stage.
Intermediate flow 10.5: Velocity Manipulation to convert stacking velocity to interval velocity.
We run it.
It runs and completes successfully, meaning the interval velocity table has been created and populated.
Migration.
Flow
Stolt/phase-shift migration is computationally efficient and very accurate for constant velocity, but has difficulty imaging steep
dips in areas where there are large vertical or lateral velocity variations. Since this is not the case in the Otway survey, it is the most
efficient way to image our data.
The module used for poststack migration is Stolt or Phase Shift 3D Mig.
Compare the DMO stack and migrated stacks by displaying ilines 30, 40, 50 and 60 on one screen, and the same range of xlines.
Objective
Objective of this exercise is to calculate various Instantaneous Seismic Attributes for Otway seismic volume. Our
working area is Otway 3D.
Module to use for this lab: Trace Math Transforms.
The conventional seismic trace can be viewed as the real component of a complex trace, which can be uniquely
calculated under usual conditions. The complex trace permits the unique separation of envelope amplitude and phase
information and the calculation of instantaneous frequency. These and other quantities can be displayed in a color-encoded manner, which helps an interpreter see their interrelationship and spatial change.
(Taner, M. T., Koehler, F. and Sheriff, R. E., 1979, Complex seismic trace analysis: Geophysics, 44(6), 1041-1063)
Background Literature
In the context of reflection seismology, seismic attributes are quantities extracted or derived from seismic data that can
be analysed in order to enhance information that might be subtle or hidden in a traditional seismic image, thus
unlocking more information for better geological or geophysical interpretation of the data. Examples of
seismic attributes include measured time, amplitude, frequency and attenuation, in addition to combinations of
these. Most seismic attributes are post-stack, but those that use CMP gathers, such as amplitude versus offset (AVO),
must be analysed pre-stack. They can be measured along a single seismic trace or across multiple traces within a
defined window.
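A minimal illustration of how these instantaneous attributes follow from the analytic (complex) trace, using plain Python/SciPy rather than the Trace Math Transforms module (the sample interval and synthetic trace are assumed values):

    import numpy as np
    from scipy.signal import hilbert

    def instantaneous_attributes(trace, dt=0.002):
        """Envelope, instantaneous phase and instantaneous frequency from the analytic trace."""
        analytic = hilbert(trace)                              # x(t) + i * H[x](t)
        envelope = np.abs(analytic)                            # reflection strength
        phase = np.unwrap(np.angle(analytic))                  # instantaneous phase (radians)
        frequency = np.gradient(phase, dt) / (2.0 * np.pi)     # instantaneous frequency (Hz)
        return envelope, phase, frequency

    t = np.arange(0.0, 1.0, 0.002)
    trace = np.sin(2 * np.pi * 25 * t) * np.exp(-((t - 0.5) ** 2) / 0.02)   # 25 Hz burst
    env, ph, freq = instantaneous_attributes(trace)
    print(env.max(), freq[len(freq) // 2])                     # peak envelope and mid-trace frequency (~25 Hz)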
Calculate and display instantaneous seismic attributes for the migrated Otway 3D volume. For better display, sort the
data as shown in the example above.
Depending on the type of attribute, use an appropriate color palette. From the View menu, activate Color bar. Compute and
display instantaneous seismic attributes on the 3D datasets.
Seismic Section rev_sw_bluwhtbn.rgb
We can explore reflection strength attribute by choosing it from Trace Math Transforms
Derivatives
First derivative
Second derivative
First derivative
Second derivative
Apparent polarity
SW_HUB_ATTR_STK
Apparent polarity