GeoFrame 4.031
November 6, 2002
GF4_CPS3_6Nov02.pdf www.geoquest.com
Contents
Coverage
Definition of Mapping Environment Components . . . . . . . . 9-3
Display Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-3
Modeling Environment . . . . . . . . . . . . . . . . . . . . . . . . . . 9-4
The Relationship between CPS-3 Modeling Environments and
GeoFrame Binsets (Grid Libraries) . . . . . . . . . . . . . . . . 9-4
More Notes on Binsets . . . . . . . . . . . . . . . . . . . . . . . . . . 9-5
Making Use of Environments . . . . . . . . . . . . . . . . . . . . . 9-6
Multiple Environments . . . . . . . . . . . . . . . . . . . . . . . . . . 9-8
Setting Up for Horizontal and Vertical Scaling/Limiting 9-9
Storing and Retrieving Environment Definitions . . . . . . 9-9
Rotated Grids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-11
Association of Environments with Sets . . . . . . . . . . . . 9-11
Specifying Display and Modeling Environments . . . . . 9-12
Exercises for the Display Environment . . . . . . . . . . . . 9-13
Exercise: Setting Standard Parameters . . . . . . . . . . . . . . . 9-14
Exercise: Define a Display Environment for the Basemap 9-15
Exercise: Enlarge the Display Environment Window . . . 9-20
Exercise: Redisplay the Three 3D survey locations . . . . . 9-21
Exercise: Preparing a Location Basemap . . . . . . . . . . . . . 9-22
Border, Labels, Scale Bar, and Title . . . . . . . . . . . . . . . 9-22
Display 3D Survey Locations . . . . . . . . . . . . . . . . . . . . 9-24
Display 2D Seismic Line Posting . . . . . . . . . . . . . . . . . 9-25
Move the Relative Position of a Map Layer . . . . . . . . . 9-26
Post Borehole Locations . . . . . . . . . . . . . . . . . . . . . . . . 9-27
Post Bottom Hole Locations . . . . . . . . . . . . . . . . . . . . . 9-27
Save the Display as a Map Set . . . . . . . . . . . . . . . . . . . 9-28
View the Entire Basemap . . . . . . . . . . . . . . . . . . . . . . . 9-29
Create a Larger Display Environment . . . . . . . . . . . . . 9-29
Delete Old Display Environment . . . . . . . . . . . . . . . . . 9-30
Removing/Replacing Map Layers . . . . . . . . . . . . . . . . 9-30
Overview
This course is a comprehensive introduction to the CPS-3 software system
and is designed for GeoFrame 4. The user learns about project preparation
and internal data storage conventions, as well as the basics of data flow
between CPS-3 and other applications such as IESX, Charisma, WellPix, and ResSum.
This course has a specific workflow which begins with data import and ends
with the computation of oil in place with the CPS-3 Volumetric procedure. In
between, all basic mapping operations are exercised, including basemapping,
gridding, contouring, profiling, surface operations, data operations, utilities,
map editing, model editing, and more.
This course makes use of a fully populated GeoFrame project to provide data
for the workflow. Some ASCII files are used as well.
The emphasis for this course is to provide a solid overview of the operations
involved in computer modeling and basemapping. Although this course is
oriented toward beginning users, it is also recommended for those already
conversant with computer modeling who wish to learn the specifics of CPS-3.
Course Objectives
After completing this course, you will be able to:
Access data which has been created by GeoFrame, Charisma, IESX,
WellPix, ResSum, and others.
Access/display well locations, well paths, and markers.
Access/display 2D and 3D navigation data.
Access/display 2D/3D interpretation and fault boundaries.
Access/display property data.
Create basemaps.
Load and display ASCII extended data.
Understand how to choose gridding parameters for a range of data
distribution types.
Generate grids from data sets.
Create displays of geologically faulted grid models, including
color-shaded contour maps.
Perform surface logic and math operations; perform z-field arithmetic.
Generate structural envelopes.
Fetch property averages from GeoFrame and make property grids.
Apply property grids to gross rock thickness and perform volume
calculations.
Write, execute, and edit macros.
Edit surfaces, faults, and data with the Model Editor.
Create custom palettes with the Color Palette Editor.
Perform simple map editing.
Training Workflow
Common Commands
The following table describes the most common commands you will
encounter throughout the exercise and training guides.
Right click (Right Mouse Button, MB3): Position the mouse cursor over a
button or other area of a window, and click the right mouse button.
Example: Right click the Field icon to open the Insert Structure dialog box.

Double-click (items in a window or dialog box): Position the cursor over
the item and click the left mouse button twice rapidly.
Example: Double-click the location name to open the Location Editor.

Enter (text fields): 1. Position the cursor over the text field. 2. Click
the left mouse button once to activate the field. 3. Type the desired text.
Example: Enter the item number.

Open (windows and dialog boxes): Select the appropriate menu option, or
click the appropriate button or object, to display the designated window.
Example: Open the Print window.

Press (keys on the physical keyboard): Press the designated key.
Example: Press F9.

Select (drop lists): 1. Position the cursor over the arrow to the left of
the drop list field. 2. Click the left mouse button once on the arrow to
display the drop list items. 3. Move the cursor over the item to be
selected, and click the left mouse button once to select it.
Example: Select a location from the drop list.

Type (text fields; an entry to be typed appears in bold courier font):
1. Position the cursor over the text field. 2. Click the left mouse button
once to activate the field. 3. Type the desired text.
Examples: 1. Type the path and file name in the Database field.
2. Enter MARG for Formation top. 3. Type SECTION 1 in the Name field.
Typographic Conventions
The following table lists the special formatting you will encounter throughout
the training and exercise guides.
Typed Entries (bold text, Courier font): Indicates the specific
information you must type into a field or command line.
Example: Type 112298 in the Date field.

System Messages/Screen Text (bold text, Courier font): Highlights system
messages or text that appears on a window.
Example: If you receive the error Database not found, contact the
system administrator.
Procedures
Exercises
Exercise
3. Click OK to confirm the entry and close the XSection Create dialog box.
Standard Buttons
Use command bar buttons to inform the system that an action or command is
complete. All dialog boxes contain one or more of the following buttons:
OK: applies the current settings and closes the dialog box.
Apply: applies the current settings.
Reset: clears any changes or entries in the dialog box without saving
any information.
Default: restores the settings/information in the dialog box to the
system defaults.
Cancel: cancels any unsaved changes and closes the dialog box.
Close: closes the dialog box.
Help: displays help on the currently active dialog box.
Overview
In this chapter, we'll make an inventory of the data in the GeoFrame GullFaks
project which is used by this class.
This course uses only some of the data available in the project.
Velocity functions have been provided for the seismic cube, so that both
seismic and well data are available in time and depth. In this training course,
we will use depth data.
Naming Conventions
Be aware that since projects can be shared by several persons in different
disciplines, different names for different versions of interpretation, names of
GeoFrame containers, marker names, and the like must be coordinated
among those working in the project.
Volumetrics Notes
In this class, we will compute volume between the Tarbert and the Ness
horizons. Because of large erosion zones in the Tarbert caused by the Bunkritt
unconformity, the top of the reservoir must be a merging of the Tarbert and the
unconformity.
Most of the data for this course will come from the GeoFrame data base.
There will be several methods for gaining access to it from CPS-3. In addition,
some of the data will come from outside of the project. Regardless of its
origin, the next section provides an inventory of the data sets which will be
used in CPS-3 for this course.
Location Data
2D Locations
mm_2d_gullfaks_shtpt.dat (ASCII)
3D Locations
mm_3d_85acip_survey
mm_3d_g1_survey
mm_3d_offset_survey
Well Paths
mm_Boreholes_Depth_wpath
Interpretation
BUNKRITT
BUNKRITT_time_intrp
TARBERT
TARBERT_time_intrp
TARBERT_time_intrp_fpolys
NESS
NESS_time_intrp
NESS_time_intrp_fpolys
RANNOCH
RANNOCH_time_intrp
RANNOCH_time_intrp_fpolys
DRAKE
DRAKE_time_intrp
DRAKE_time_intrp_fpolys
BUNKRITT
mm_BUNNKRITT-1_BU-285_Depth_intrp
TARBERT
mm_TARBERT_smooth_Depth_intrp
mm_Tarbert
NESS
mm_NESS_smooth_Depth_intrp
mm_Ness
RANNOCH
mm_RANNOCH_smooth_Depth_intrp
DRAKE
mm_DRAKE_smooth_Depth_intrp
F15
F15a
F16
F17
F18
F19
F20
F21
Interval Definitions
Zone Versions
(none at present)
Zones
(none at present)
Properties
Lease Information
Lease polygons
mm_north_leases.ply (ASCII)
mm_North_Leases
Project Units
Metric
Project Location
North Sea
Above, you can see fault patterns over the area as well as the limits of the 3D
survey in black. Two platforms are the source of many well trajectories
shown in red. The white dotted rectangle shows the approximate interpretation
coverage of the highest of the horizons - the Tarbert. Successively lower
horizons will cover more and more area towards the East. Well coverage for
the higher horizons, such as the Tarbert, is restricted to the lower SE
quadrant of the dotted rectangle.
The interpretation for the upper unconformity (Bunnkritt) covers all of the 3D
rectangular area except for a small portion in the NW corner.
The figure below shows a smaller area which is focused on the extent of the
well paths. The Bunkritt interpretation is in white, and covers just about all the
3D survey. The Tarbert interpretation is shown in grey and covers only the
Western half. The Eastern platform has more and better distributed wells, but
does not overlap with the best seismic.
Reservoir Geometry
The area of the GullFaks field which we will be mapping consists of a series
of tilted fault blocks with several of the upper horizons of the group, for
example, the Tarbert, containing erosion zones where no interpretation exists
above the Bunnkritt unconformity.
Bunkritt
Tarbert
Ness
Overview
In this chapter, we will inspect existing data in the training project, making
sure that it is prepared for the exercises.
Exercise
In this exercise, we will introduce you to the GullFaks project and follow this
basic workflow:
Start GeoFrame from the Geonet menus and enter the name of the
project which the instructor will specify.
Invoke the General Data Manager to review the organization of the
existing data.
Review the coordinate system chosen for the project and set the
GeoFrame display units for Z to milliseconds.
Invoke IESX Basemapping and Seis2D/3D to view the interpreted
horizons and fault segments which will be imported into CPS-3.
Invoke CPS-3. Clear out the internal storage area.
Load external data required for this course.
3. From your instructor, determine the name of the project you require,
and when the GeoFrame Login window appears, highlight the desired
project in the project list, move the cursor to the Password box and
click the middle mouse button (MB2), and then click Connect (with
MB1).
3. When the Data Manager comes up, place the cursor on the project
icon.
5. Place the cursor on one of the horizons, such as DRAKE, and press
MB1 to select it, then with the cursor still on DRAKE, press MB3.
Each of the Horizon containers may have been created by any one of the
numerous GeoFrame applications - IESX or Charisma during interpretation,
Stratlog or WellPix during geological interpretation, or CPS-3 during
mapping.
7. Click File > Close, then back in the Data Management Catalog, click
Cancel.
Exercise
3. In the Set Units dialog box, pick the Unit classification name
Metric/Charisma, or Metric if the first is not available, and click
Inspect. This opens the Unit System dialog box, shown below.
6. Back in the Unit System dialog box, click Cancel if no changes were
made, or click Save As, type in a new unit system name suffix, ms,
and click OK.
2. Highlight any Coordinate System seen in the dialog and click the blue
i icon to see its definition. You want to select any of the coordinate
systems which have the following definition:
Datum: European 1950, Norway & Finland
Ellipsoid: Int24
Projection: UTM
UTM Zone: 31
Hemisphere: Northern
as seen in the Edit Dialog.
Note: Do NOT change anything in this dialog box.
3. Click Cancel back to the Project Management dialog box which you
may then iconize. We will be using it again.
Exercise
The next exercise is for those of you who want to look at the interpreted
horizons, fault boundaries, and fault segments that were interpreted in
IESX or Charisma and that will be mapped in this course. You may skip this
brief exercise if you wish. The intent is only to demonstrate the source of the
data which we will access from CPS-3 in the next few chapters. Those of you
who are IESX or Charisma users can take a moment to verify the presence of
the horizons and other data outlined in the previous chapter describing the
GullFaks project.
Note: In early versions of this training course project, this exercise is not
possible.
Exercise
In this exercise, we will bring up CPS-3 and remove all sets in the CPS-3
storage area in anticipation of loading auxiliary data for the course. This
operation affects nothing in the GeoFrame data base itself.
3. Lower down, on the Display line, verify which workstation you want
the application to run on - 0.1 is right and 0.0 is left. After typing
in the desired location, click OK. This should bring up the Main
Module canvas:
4. Starting at the top menu bar in the CPS-3 Main Module, click
Utilities > Sets > List_Manage_Sets.
5. Sets marked as CPS are stored in the CPS-3 dsl; sets marked as
GeoFrame are stored in the GeoFrame database. We do not want to
delete any sets from the GeoFrame database.
6. Toggle on All in the Set Type, Surface Type, and Property boxes,
changing them to red. Toggle only CPS in the Storage box.
7. Click the Filter button at the bottom and VERIFY THAT ONLY
CPS IS TOGGLED ON WITHIN THE STORAGE BOX.
STOP TO MAKE SURE THAT THE RIGHT PANEL SHOWS ONLY SETS IN CPS.
IF SETS IN GEOFRAME SHOW UP, GO BACK TO STEP 4.
8. Highlight all sets in the right panel of the dialog and click the Delete
icon, the large red X. This will remove all CPS-3 sets in the CPS-3
dsl.
6. Back in the left side of the dialog in the Storage panel, click on All,
then click Filter. We should only see GeoFrame sets in the list, if
anything. These are grids, fault boundaries, and other items created by
other applications.
5. Make sure that the last character in the pathname is a forward slash
/, as seen in the following figure.
Figure 3.10 Path to location of external data destined for CPS-3 DSL
6. Click OK.
7. Click OK or Yes to all messages which appear, but note the files which
are being loaded in the CPS Status Window.
Figure 3.11 Multiple Set Selector displaying files loaded to CPS-3 DSL
Overview
This chapter provides a brief overview of all functionality provided from the
CPS-3 Main Module, and how it is organized. In addition, a navigational
summary of the menu hierarchy is given.
In addition, the following data management tools are available from the Main
Module:
Control Point Editor - interactive spreadsheet editor for Data sets
Subset Reorganizer - a fault management tool
Map Layer Manager - a tool for reorganizing graphic layers of a map
In this chapter we also present a convenient How To... Matrix which
cross-references common mapping operations with the menu navigation
instructions for how to get there. This cross-reference is at the end of
the chapter.
Pull-Down Menus
The x,y tracking also occurs during screen digitizing initiated from the
Digitize pull-down menu.
For convenience, the tracker may also be invoked at any time with the tracker
icon:
Figure 4.3 Display x and y position at cursor icon (aka Tracker icon)
Measuring Tool
The measure distance icon in the following figure provides facilities for
measuring distances and angles on the graphic display.
Swap Screens icon
Currently open Data set icon
Currently open Surfaces icon
UNIX Environment
In the Main Module go to User > Show Environment to bring up the CPS-3
Environment window. This window can remain visible while you perform
other mapping operations. Its purpose is to provide you with information
about where CPS-3 is installed, and where the configurable resource files are
located. It also serves to remind you of the path to your open project, so that
copying of configuration files into your project area from an xterm is
simplified.
After each edit, the model is regridded and saved upon completion of the
editing. The final model (grid) is then brought back into the CPS-3 Main
Module.
The data set shown below is a well marker data set, which was loaded as an
Extended data set with three z-fields, a well name, and symbol code - all of
which are available for editing.
The Set/Subset Reorganizer will perform this reordering for you. As the
different shades of gray in the first diagram above indicate, x,y,z fault traces
are grouped in Fault sets, one per horizon. In the Set/Subset Reorganizer
dialog box, select the Fault sets for all horizons which contain valid x,y,z
points to use in the gridding of the fault surfaces. From each of those selected
fault sets, you can choose any or all of its subsets (individual fault traces for a
single fault) which you want to be included in the output data sets.
In this example, we have asked for the output to be written to a Data set as
shown below. There will be as many output Data sets as there are unique
subset names (fault names) in the selected Fault sets. Each of the output Data
sets can then be gridded to obtain a model of the fault surface.
In this example, the picture below and on your screen was generated with
separate graphic commands; then the Map Layer Manager (Figure 2.15) was
invoked with the Manipulate current map layers icon (Figure 2.17).
To reorder the layers of an existing map, simply clear the screen, display the
map, then invoke the Map Layer Manager.
DISPLAY
GRIDDING/MODELING
MACROS
Edit a macro - Create a macro, then use the text editor of your choice
DIGITIZE
OPERATIONS/ C.P./CONTROL-POINT-MATH
DATA IMPORT/EXPORT
Icon Definitions
Unlock a set
Overview
This chapter reviews the nature and extent of data integration for CPS-3
within the GeoFrame data base.
In GF4.0, CPS-3 still maintains its own local data store, that subdirectory
known as the CPS-3 DSL, where CPS-3 binary sets are stored. However,
much of the data in GeoFrame is visible to CPS-3, either directly, or via the
GeoFrame data link, GFLink. It is up to the user whether individual sets are
stored in GeoFrame, or in the CPS-3 DSL.
GeoFrame data items are synonymous with CPS-3 sets, but data items stored
in GeoFrame must now also be identified with the following attributes:
Container name
Container type
Property code
Unit of measurement
2D survey location data are now accessed via the GFLink and can be posted
with the existing Extended data seismic line posting feature.
3D survey location data is accessed and displayed via the Display menu. A
new 3D seismic line posting feature is available.
Horizon interpretation is now seen from the CPS-3 file selection dialogs as
grids in GeoFrame. Use the Source set attribute to distinguish actual
interpretation grids from derivative grids. Note that in Modeling Office,
Horizon Modeling has been modified to accept GeoFrame grids directly as
input. In the CPS-3 Main Module, however, GeoFrame interpretation grids
must be Copied to Data sets before using them in Single Surface Gridding.
IESX Cartography, which will not reside in GeoFrame, will be brought into
CPS-3 with a new Culture Loader which will be invoked from the CPS-3
menus and from the Visualization catalog, in place of the old IESX Link. This
facility will not be released in 4.0, but in one of the later versions, such as 4.01.
Well location data, such as top location, bottom hole locations, and borehole
trajectories are accessed from the GFLink, just as before.
Well markers for specific Horizons are accessed from the GFLink, just as
before.
Property summations for specific Zones are accessed from the GFLink, just
as before.
Tip Loops created by Framework 3D are stored in GeoFrame, but are visible
from CPS-3 as Polygons.
All other GeoFrame surfaces, scatter sets, and fault boundaries can be seen
directly in the CPS-3 file selection dialogs, and, if necessary, can be COPYed
to the CPS-3 dsl, just as before.
Sets are associated with a geographic coordinate system, or with the default
coordinate system at the time of their naming and creation.
If two sets are associated with different coordinate systems, CPS-3 will
automatically perform the numerical conversions required during operations
which use both sets. Specific rules for these conversions are covered in a later
chapter.
Tip: You may load data having any range of cartesian X,Y values into any
GeoFrame project. As long as you reference the same GeoFrame coordinate
system definition where necessary, the system will not attempt any kind of
conversion. It will simply accept the data as it is.
Best Practice: Unless your project requires something different, the recommendation is to use
one coordinate system for all sets in your project.
Surfaces:
In practice, surfaces cannot be transformed from one coordinate
system to another. However, graphic manifestations of a surface, such
as contours or postings of nodes, can be transformed.
Reading ASCII data in geographic coordinates:
Data with geographic coordinates (degrees, minutes, seconds, or
decimal degrees) will be transformed using the currently active
Display environment.
Reading ASCII data in x, y coordinates:
In this instance, no conversion takes place. The data being loaded takes
on the stamp of the active Display environment.
Surface Arithmetic and Surface Operations:
The environments of all surfaces in the operation must match the
current Modeling environment.
Gridding:
During gridding, control points and fault traces are transformed to
match the coordinate system of the active Modeling environment.
Graphic Display:
Any set displayed will be transformed to match the coordinate system
of the currently active Display environment.
Data Links such as GFLink:
Any data moved into CPS-3 via these data links will be automatically
transformed to match the coordinate system of the currently active
Display environment. Note that the units and domain of the output set are
established by the GeoFrame Display Units, not by the currently active
CPS-3 Display environment.
can now spawn a system task from a macro and wait (GF3.8)
can now spawn a background task (GF3.8)
access to macros from three categories - system, project, user
can add descriptions to macros which are visible during macro selection
define and assign macro categories which are displayed during selection
new Yes/No prompt type
new prompt facility - select from list of options
prompt titles and prompt strings can now be variables
can load extended data with existing format
can define set attributes
enhanced controls for posting fault names and z-values via a selector dialog
The following macro excerpt shows how you can now use a variable string for the
dialog title and the prompting text. It also shows how a simple Unix command can
be spawned.
Spawning detached tasks can also be done with the spawn command if you use the
ampersand at the end.
The next example shows how to establish the set attributes for new sets.
The example menu below shows that you may have three basic sources for
collections of macros: a system-wide collection, a project-related collection, and
any number of user collections which can be identified simply by a path name.
In this menu, the project-based collection of macros has been selected and its
macros listed. One of them has been highlighted which causes its internal
description lines to be displayed.
In this example, the System macro group has been chosen, and the names of all
macros in the Display category have been displayed for selection.
Overview
In this lesson, we will describe the way in which CPS-3 sets are organized.
Historically, all CPS-3 sets were stored in CPS-3's own local data store (the
CPS-3 DSL), maintained by the CPS-3 Project Manager independently of the
GeoFrame data base. Now, however, many of these sets are just as easily
stored in GeoFrame.
/home/disk1/user1/projects/CLOUDSPIN/CPS
In this example, the full path shown above defines the location of the CPS
local data store, or partition, within the GeoFrame project Cloudspin. In this
directory, you will find everything which is managed by the CPS Project
Manager.
Ultimately, the data to which you have access in CPS-3 will be stored in one
or more of the file types outlined below. These files will reside in your CPS
dsl. Note that each is distinguished by its UNIX file extension which is not
seen in the CPS-3 menus or dialogs.
.1cps files - session files; files containing parameter values
.dcps files - data files; x,y,z files
.fcps files - fault trace files; x,y files; optionally, x,y,z files
.pcps files - polygon files; x,y only
.scps files - surface (grid) files; z only; x,y is inferred
.mcps files - map files; picture files
.tcps files - table files such as created by Framework3D
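Since each set type is distinguished only by its UNIX file extension, a quick inventory of a CPS dsl can be made from outside CPS-3. The sketch below is illustrative only; the directory path is the example path used in this chapter and must be replaced with your own project partition.

```python
from collections import Counter
from pathlib import Path

# Example path to a CPS dsl; substitute your own project partition.
dsl = Path("/home/disk1/user1/projects/CLOUDSPIN/CPS")

# Map each CPS-3 file extension to its contents, as listed above.
KINDS = {
    ".1cps": "session (parameter values)",
    ".dcps": "data (x,y,z)",
    ".fcps": "fault traces (x,y or x,y,z)",
    ".pcps": "polygons (x,y)",
    ".scps": "surfaces/grids (z only)",
    ".mcps": "maps (pictures)",
    ".tcps": "tables (e.g. Framework3D)",
}

# Tally how many files of each kind the dsl contains.
counts = Counter(p.suffix for p in dsl.glob("*") if p.suffix in KINDS)
for ext, kind in KINDS.items():
    print(f"{ext:6s} {kind:32s} {counts.get(ext, 0)} file(s)")
```

Running this against your own dsl path gives a per-extension count of the sets the CPS Project Manager is holding locally.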
Data, Fault, and Surface sets may optionally be stored in either the local
project files, as shown above, or in the GeoFrame Oracle data base, or both.
Description
Data Types
CPS-3 uses a data set's Data Type to compute defaults for some of the
modeling and display parameters. Most of the Data Types indicate a type of
spatial distribution pattern which can be exploited by certain algorithms. Data
types are shown in the following lists:
Each data point loaded into CPS-3 must have at least an X and Y coordinate,
but may also have up to 50 z-fields. These z-fields are numeric, and can
represent any variable which is spatially distributed. A vertical well, for
example, might have one z-field value for each horizon it passes through
(markers), or a single well point may have one horizon depth value, and an
associated thickness to the next layer. Z-fields may also contain dip and strike
information for the horizon being mapped. There are no predefined
combinations of z-fields which are required for CPS-3. Each z-field can
represent anything you like. The most common data files loaded by CPS-3 are
data files containing x,y, and a z-value representing time or depth.
As you will see when loading data for the course, there are two attributes
associated with each z-field:
Z Field Name (z1, z2, top, bottom, ...)
Z Field Type (depth, elevation, isochore, ...)
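To make the idea concrete, a control point carrying several named, typed z-fields can be pictured as in the following Python sketch. This is only an illustration of the concept; the field names and values are invented, and this is not CPS-3's internal storage format.

```python
# A minimal sketch of the control-point concept described above:
# each point carries an x,y coordinate plus one or more named,
# typed z-fields. All names and values here are illustrative only.
point = {
    "x": 455120.0,
    "y": 6783415.0,
    "z_fields": [
        {"name": "top",    "type": "depth",    "value": 1850.5},
        {"name": "bottom", "type": "depth",    "value": 1902.0},
        {"name": "thick",  "type": "isochore", "value": 51.5},
    ],
}

# Any spatially distributed variable can be a z-field; here we derive
# the isochore from the two depth fields as a consistency check.
top = next(f["value"] for f in point["z_fields"] if f["name"] == "top")
bottom = next(f["value"] for f in point["z_fields"] if f["name"] == "bottom")
print(bottom - top)  # 51.5
```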
Some data files contain textual information for each point which is to be
posted on the map; for example, well name or operator name. There are two
methods for storing Data sets in CPS-3 - Extended Data, and non-extended
Data.
Non-Extended Data
A data point stored in this form may have up to 50 numeric z-fields, but no
text fields. The only text allowed with this form of storage is a subset name.
For well data, this can represent the well name if you select the x,y,z + Name
Field option for loading. Use this type of Data for simple x,y,z points to be
gridded when no special display capabilities are required.
Extended Data
This type of data allows all of the above, but also allows the storage and
posting of up to 10 text fields, as well as the storage and utilization of graphic
symbology attributes. Symbology attributes, such as symbol type, size, and
color, can be associated with each data point. At present, Extended Data
cannot be exported from CPS-3, nor can Extended Data be loaded by the use
of macros.
Subsets
Data sets can be organized into subsets which makes certain processing easier.
For example, line-oriented data (2D seismic, digitized contours) should be
identified as such during ASCII loading so that each line becomes an
identifiable subset in CPS-3. Several formats suitable for controlling
line-oriented data will be illustrated below in the examples.
Description
Fault sets in CPS-3 store those polylines which are commonly referred to as
fault traces, or in GeoFrame terminology, fault boundaries. To the CPS-3
procedures, it does not matter how these polylines are geometrically defined.
For example, a non-vertical fault pattern associated with a particular horizon
can consist of two separate lines - an upthrown line and a downthrown line, or
it may consist of a single closed polygon.
Types of Faults
Fault Attributes
Each fault trace must have an x,y coordinate, but other information is optional.
For example, if z-values are available on the trace, then these values should be
loaded along with the x,y values to help in the gridding of the associated
horizon. Vertical and horizontal separation, as well as dip values are also
valid numeric fields which can optionally be loaded with fault traces.
Associations
Subsets
In a fault set for one horizon in CPS-3, there is typically one fault trace
pattern for each named fault in the reservoir. The pattern for each named fault
is stored as a separate subset in the fault set. Several loading formats will be
shown in the examples to ensure that individual lines are identifiable in
CPS-3.
Description
There are two main purposes for polygon sets - the first is to define some
cartographic basemapping feature, such as a shoreline which is to be posted
on a basemap. Note that unclosed polylines are also valid to store in a polygon
set. The other use of polygon sets is to define some region in the modeling
area within which operations are to be performed or excluded from being
performed, for example, grid blanking or volumetrics calculation.
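Deciding whether an operation applies at a given location reduces to a point-in-polygon test. The following Python sketch shows the standard even-odd ray-casting approach; it is a generic illustration of the technique, not CPS-3 code.

```python
def point_in_polygon(px, py, polygon):
    """Even-odd ray-casting test: count crossings of a ray from
    (px, py) toward +x against each polygon edge."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Only edges straddling the ray's y can be crossed.
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

# Hypothetical 10x10 blanking polygon:
square = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
print(point_in_polygon(5.0, 5.0, square))   # True
print(point_in_polygon(15.0, 5.0, square))  # False
```

An operation such as grid blanking would run a test of this kind at every grid node and keep or discard the node's value accordingly.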
Types
Subsets
Description
Surface sets hold the grids in CPS-3. Only the z-value at each grid node is
stored. Parameters stored with the surface, such as the lower left corner of the
grid, and the grid spacing in x and y, are used by the system to compute the
x,y coordinates of each node location when needed.
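The node-location inference described above can be sketched in a few lines of Python. This assumes an unrotated grid (rotated grids are covered separately), and the origin and spacing values are invented for illustration.

```python
def node_xy(i, j, x0, y0, dx, dy):
    """Return the x,y location of grid node (i, j), counted from the
    lower left corner (x0, y0) with spacings dx, dy -- the same
    inference a Surface set uses, since only z is stored per node."""
    return x0 + i * dx, y0 + j * dy

# Hypothetical 100 m grid anchored at (450000, 6780000):
x, y = node_xy(3, 2, 450000.0, 6780000.0, 100.0, 100.0)
print(x, y)  # 450300.0 6780200.0
```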
Classification
Associations
As mentioned under Fault Sets, the input fault trace set is associated with the
output surface in a gridding operation. In addition, there are spreadsheet-like
operations in CPS-3, particularly in Framework3D, where groups of surfaces
are defined. These groups of surfaces become associated during the creation of
a Table set (a .tcps file in the project directory). For example, a fault
framework Table set is created as the user loads surfaces into the framework.
If a surface in a table happens to be deleted, the system will recognize its
absence the next time the table set is invoked.
Description
Essentially, a Map set is a picture file. The picture on the screen can be saved
to a Map set at any time in CPS-3. Although it is possible, Map sets are not
usually loaded into CPS-3 from ASCII source files.
Subsets/Map Layers
In the map set shown below, there are seven subsets, each of which is created
by a separate display operation.
Border
Border Labels
Scale Bar
Contours
Wells
Seismic Lines
Fault Traces
Overview
There are many routes by which data can be made available to CPS-3. One of
the traditional methods is using ASCII files, which we will demonstrate in
this chapter. However, continuing integration of CPS into the GeoFrame
environment provides faster, more efficient data access methods. For
example, interpretation which has been performed in IESX or Charisma
should be visible to CPS-3 as grids in GeoFrame. Seismic locations in
GeoFrame will also be visible to CPS-3, as we will demonstrate in this
chapter via the GeoFrame Link (GFLink). Well locations are also accessible
via GFLink.
In the exercises, we will load an ASCII file containing Line name, Shot Point
number, x-coordinate, and y-coordinate as an Extended Data set so that we
can create a proper 2D line posting.
For example, 2D/3D location data are now accessed via the GFLink, as we
will demonstrate in the exercises.
1. First, we'll show how to load 2D seismic location data from the
GeoFrame Link.
2. Then, we'll show how to use the CPS-3 ASCII loader to load a
complex 2D location file.
3. Finally, we'll show how to load top and bottom hole well locations,
then well trajectories from the GF Link.
Exercise
2D seismic locations are accessed from CPS-3 via the GeoFrame Link.
2. Click the Load from GeoFrame tab, then click Data Types and
highlight 2D Navigation Lines in the second panel. Click OK.
Note: Temporarily, there are no 2D surveys. Shortly, this training project will be
updated with a 2D survey. As an alternative, we will load an ASCII file of 2D
locations.
Exercise
2. In the CPS-3 Main Module, click User > Show Environment. (This
opens the CPS-3 Environment window.)
3. Highlight the path to your CPS-3 dsl, which is found next to Project
Location.
4. In the xterm, type cd, skip a space, and click MB2. (The path to your
CPS-3 dsl should appear.)
5. Type ls *.dat. One of the files you see should be named
mm_2d_gullfaks_shtpt.dat, an ASCII file containing 2D locations, part
of which is shown below. Look at this file using any editor you like.
This file is column-organized and contains the Line name in the first column,
the Shotpoint number in the second column, and the x and y coordinates in
columns three and four. This is an example of a very simple location file
which can be easily read by the CPS-3 ASCII reader. This file was copied
into your CPS dsl along with the other information, which we imported By
Directory earlier.
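As an aside, the parsing that such a simple four-column file requires can be sketched in a few lines of Python. The helper name is hypothetical; CPS-3's ASCII reader does this internally:

```python
def read_2d_locations(path):
    """Parse a column-organized 2D location file with Line name,
    Shotpoint number, x, and y on each row. (Illustrative helper,
    not part of CPS-3.)"""
    records = []
    with open(path) as f:
        for line in f:
            cols = line.split()
            if len(cols) < 4:
                continue  # skip blank or malformed lines
            records.append((cols[0], int(cols[1]),
                            float(cols[2]), float(cols[3])))
    return records
```

A more complex file, like the one in the next exercise, would additionally need its header lines skipped before parsing begins.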
We'll use the second file to demonstrate how to use the CPS-3 ASCII loader.
The first section of this file, where the lines all start with H, is a file header.
This header will be skipped during import. This file contains X and Y values
in both Cartesian coordinates and geographic coordinates.
The only columns that we want from this file are the Line name, Shotpoint
number, and the X and Y cartesian coordinates which begin after the letter W
in each line. In order to read this file, we will need to count the number of
header lines to skip, which is 32.
1. In the CPS-3 Main Module menu bar, go to File > Import > ASCII >
Extended Data to open the following dialog box.
3. In the File/Set Selection section, click New next to Output Set. (This
opens an additional dialog similar to the one in the following page.)
Tip: When loading files as Extended data, it is always a good idea to specify the
Subtype before you give the Input File name, since the system will do a better
job of automatic column assignment when it knows the Subtype. It is also
necessary to specify the number of Lines to Skip before you provide the Input
File name.
5. Under the File/Set Selection section, click Pick next to Input File.
(This opens the File Selection dialog box.)
8. Click on Line Name, which now should have only a single S, and
scroll down within the Field Type popup menu, and select Ignore.
9. Above the column containing slb1, click on Ignore, then scroll down
within the Field Type popup menu, and select Line Name.
11. In the next Ignore column, split the text between the 20328 and
857.22N, then change the 20328 column to Shotpoint Number.
12. Note that the Z column is unnecessary, since all the z-values will be
0.0. We can leave it as it is; it will not cause any harm.
When the ASCII Input Data section of your dialog box looks like the
following one, click OK to load the data and close that dialog box.
13. When the data has finished loading, in the CPS-3 Main Module menu
bar, go to Utilities > Sets > View Contents/Statistics to open the
following Main Module dialog box.
14. In the Set Type column, toggle the Data (D) button to red, and select
the data set we just loaded, charis_2d_nav, and then click OK.
Note that every set loaded from ASCII will be stamped with the
coordinate system of the current Display environment. We have not
yet defined a Display environment, but every new project has a
default Display environment, which is based on the GeoFrame
Display coordinate system.
The 2D locations, which we just imported into an Extended Data set, do not
actually match the map area we will define for the GullFaks project; however,
the exercise allowed us to see how the ASCII Loader works.
Now, we'll import the first 2D location file we looked at, which was much
simpler in organization and will be used to define the initial GullFaks map
area.
3. The instructor will show you how to edit the Default display
environment so that it has a Map Ratio scale of 1650, giving a map
size of approximately 450x450cm.
8. Then click OK, then OK again. You should see the display, as shown
below:
Exercise
4. Select all lines in the Inline Range by moving the scroll bars to the min
and max positions. Set Increment = 10, and then click the
Appearance tab.
5. Choose all items for display except Tick Marks, Line Names, and
Location symbols.
8. If it is lit, click the Reveal All Graphics icon to see the entire map.
Exercise
In the interest of time, this exercise may be skipped, and you may use the
following Data sets in the CPS-3 dsl, which have already been loaded for
subsequent exercises:
mm_3d_g1_survey
mm_3d_85acip_survey
mm_3d_offset_survey
If you want more experience with the Extended Data Loader in CPS-3,
please continue. We will use these surveys in a basemap exercise which
follows after loading the well locations.
In the exercise which follows, we will make use of three ASCII files, each
containing a 3D survey exported from Charisma in a different project.
3d_g1_survey.dat (skip first 30 lines)
3d_85acip_survey.dat (skip first 30 lines)
3d_offset_survey.dat (skip first 30 lines)
1. Use the Extended Data Loader, just as we did with the 2D data, to
load these three ASCII files into CPS-3, creating the following data
sets
3d_g1_survey
3d_85acip_survey
3d_offset_survey
These three sets will represent the location data for the three 3D surveys in
this GullFaks project. Review the procedure for ASCII loading, and the
instructor will provide assistance if you need it.
Note: In order to load these three ASCII survey locations properly, you must
remember to specify Lines to Skip (30) first. You must load these data as 3D
lines, and also use the correct format, as shown below:
Exercise
Now, we will demonstrate the loading of top and bottom well locations, and
the wellpaths. Once more we'll use the GeoFrame Link.
2. Click the Load From GeoFrame tab, and then click the Data Types
icon to open the folder. This opens the GeoFrame Data Types dialog
box and again lets us identify the type of data we want to load.
6. Click Data Types again, and this time select Borehole Trajectories,
and click OK.
7. Click GeoFrame Data again, and find Depth in the list on the left.
Highlight it and click OK.
8. Highlight the item on the left and click OK, putting it on the output
list, as seen below.
9. Now highlight all the items on the output list on the right and click
Run. This transfers the items to the CPS-3 dsl.
10. In the CPS-3 GeoFrame Link dialog box, highlight the Boreholes...
item on the right, and then click Run.
Note: No provisions are made for specifying an output set name when loading data
with the GFLink. You simply have to accept the naming conventions, which
are actually quite clear. You may, however, use the CPS-3 Rename function
after the data is loaded.
Exercise
1. In the CPS-3 Main Module menu bar, click on Utilities > Sets >
View Contents/Statistics, so that we can look at the statistics for the
data sets that we just loaded.
Exercise
1. Click on the small [x,y] icon in the left column, the sixth icon from the
top.
5. Toggle Action to Copy Set Limits, toggle Set Type to Data, then
click Pick Set and highlight the data set mm_2D_gullfaks_shtpt, and
click OK.
6. Back in the Create Environment dialog, at the bottom left, set a Map
Ratio scale of 3000, then click OK.
8. Click Pick Sets and pick the set named mm_3d_85acip_survey. For
quick display, show only the lines. Set the symbol size to zero, and set
the No Text toggle to red.
9. Click Apply.
10. Click Data again, then click Pick Sets, and pick mm_3d_g1_survey.
11. Click Set Parameters, changing the color of the lines, then click
Apply.
7. Next click Display > Basemap > Data, and repeat the processes to
display mm_2d_gullfaks_shtpt.
Display well_Locations_wtloc.
Display well_Locations_wbloc as Extended Data, showing the well
name in black, and using about 4.0 or 5.0 as the symbol size.
Display boreholes_Depth_wpath in black.
8. Change colors as needed for visibility. Your map should resemble the
display below:
Here, we see the three 3D survey locations, the 2D locations, the top and
bottom well locations, as well as the borehole trajectories. This is not what we
would call a proper basemap, but it is meant to show quickly that the data has
been properly loaded, and to reveal the extents and relationships among the data.
Next, we will proceed to define a permanent Display environment, better
suited to our basemapping exercises.
Overview
Before proceeding with setting a display environment for inspection of the
data we loaded in the last chapter, we must take the time to become familiar
with the way in which CPS-3 sets and subsets are manipulated. In this
chapter, we will learn about many utility features in CPS-3 that help us keep
track of all of our different sets; for example:
listing existing sets by type
selecting existing sets for input operations from either GeoFrame or
the local CPS-3 data store
changing the attributes of existing sets
creating new sets
renaming sets
deleting sets
copying sets
Exercise
In the following exercises, we will introduce you to the menus and dialogs
which create sets and the procedure for selecting existing sets from either the
CPS local store or the GeoFrame database.
1. In the CPS-3 Main Module menu bar, click Utilities > Sets >
List/Manage Sets to open the following dialog box.
The left panel is a filter which can be used to control which sets you
see on the right panel. The right panel contains the list of sets and their
attributes. In the text entry fields for the filter, you may use wildcards
to control the contents of the list. Note that the Surface Name is the
GeoFrame Container Name.
In the Set Type panel, you can activate All sets, or just one type. The
Surface Type panel lets you filter on the Container name. The
Property panel lets you filter on that attribute. Normally, you toggle
the buttons that you want to red, and then click the Filter button to
update the list.
2. De-select All and activate only the Data button in Set Type. Click the
Filter button to refresh the list.
3. Now click and hold on the small rectangle to the right of the Filter
button, and drag it all the way to the left, effectively hiding the filter
dialog area.
The sets you see may not be the same as those shown here, but the
procedure will be the same.
4. Move the horizontal scroll bar to the right to reveal all the parameters
associated with each set. All the information that you see on a single
line is actually stored in the set.
5. Click on the Storage column heading and you will see all entries
sorted on that column. (The sorting works for all column headings.)
6. You can control which attributes are displayed on the right panel by
clicking on the yellow smiley-face icon.
7. Position the cursor over the ten icons (but do not click) in the right
hand corner of the Multiple Set Selector dialog box to see a quick
description of each at the bottom of the dialog.
Note: The Edit attributes and Delete icons operate on multiple sets.
8. In the Multiple Set Selector dialog box, highlight the Data set
g_wells, and click the Edit attributes icon. The Main Module dialog
box opens.
10. Review the edited attributes in the Multiple Set Selector dialog, and
click Close.
Exercise
Many operations in CPS-3 require the specification of a new set name. For
every new set you create, you will make use of the same dialog. Let's learn to
make use of the new set dialog by copying one set to a new set.
Copying Sets
1. In the CPS-3 Main Module menu bar, click Utilities > Sets > Copy to
open the Main Module Copy Sets dialog box.
2. In the Copy Sets dialog box, toggle the Data button on, click the
Filter button, and select the set g_wells, and click OK.
3. In the Copying Set dialog box, leave Set Type as Data, and click
New.
8. Make sure that the Property Code is Depth, and the Z Unit is ft.
Click OK to close this dialog box.
10. In the CPS-3 Main Module menu bar, click Utilities > Sets >
List/Manage Sets and find my_data_set in the Data set list, verifying
that the attributes were set properly.
Note: When creating Polygon and Map sets in CPS-3, Storage Location, Container
Type, and Container Name are inactive.
Exercise
Renaming
1. In the CPS-3 Main Module menu bar, click Utilities > Sets >
Rename to open the Main Module Rename Sets dialog box.
2. In the Set Type column, toggle the Data (D) button on, and click
Filter.
5. In the Rename dialog, scroll down through the set list to verify that
the name was changed, then click on Cancel.
Viewing Statistics
1. In the CPS-3 Main Module menu bar, click Utilities > Sets > View
Contents/Statistics to open the following dialog box.
5. Click OK.
You will see statistics for the chosen set in the CPS-3 Status Information
window, in a form similar to the report below.
Deleting Sets
Although you can delete a set from the Multiple Set Selector dialog box, you
may prefer using the Utilities menu, which allows you to change set types
more easily during deletion.
1. In the CPS-3 Main Module menu bar, click Utilities > Sets > Delete
to open Main Module Delete Sets dialog box.
3. Select the set new_data_name, click Delete, and answer Yes to the
confirmation message.
Overview
Here, we will discuss the concept of mapping environments and their
attributes. We will show the difference in environment attributes required to
perform display functions, versus the attribute requirements to perform
modeling operations. We will discuss the reason for having multiple
environments, and, in a later chapter, we will have exercises to create
environments, store them, and edit them.
Just to get started, we will make the following simple, but true, statement:
Before you can display anything, like the map below, you must define a
display environment. Before you can create a grid, you must define a
modeling environment.
Display Environment
A Display environment is used only for display purposes. It does not contain
the definition of a grid geometry. Data displayed through the currently active
Display environment will undergo the following procedures:
transformed to the active Geographic Coordinate System
scaled to the active scale
converted to the active horizontal units
clipped in x and y by the active Volume of Interest
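Conceptually, the last three steps of this pipeline look something like the following sketch. All names here are illustrative, not CPS-3 API, and the geographic coordinate-system transformation is omitted:

```python
def to_display(points, scale, unit_factor, voi):
    """Sketch of the Display-environment pipeline: clip each x,y
    point to the Volume of Interest, then scale it and convert it
    to display units. (Illustrative only.)"""
    xmin, ymin, xmax, ymax = voi
    out = []
    for x, y in points:
        if xmin <= x <= xmax and ymin <= y <= ymax:  # clip in x and y
            out.append((x * unit_factor / scale,
                        y * unit_factor / scale))
    return out
```

The point to take away is that the Display environment carries everything needed to put data on the screen, but nothing about grid geometry.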
Modeling Environment
A Modeling environment is used to create a grid or perform other modeling
operations. Think of a modeling environment as simply a display environment
to which a grid geometry has also been added. An environment which does
not contain a binset definition cannot be used for modeling, only display.
Grids created in modeling operations will assume the grid origin and spacing
defined in the currently active Modeling environment.
2. Highlight the desired binset and click the Get information... icon. (You will see
a list of all Surfaces containing grids which were created with the selected
binset.)
In the many exercises which follow, you will become familiar with the use of
these functions.
Notes
If, during the creation or editing of a Modeling environment,
you select an existing binset, then the binset's x,y box and
geographic coordinate system replace the x,y box and coordinate
system of the Modeling environment you are creating or editing.
Environments you define are stored in your current session file,
<login id>.1cps, which is located in the CPS partition of the
project. Refer to a later section regarding the storing and
retrieval of session files.
Because of the closer integration with GeoFrame, and the
institution of environments, CPS-3 procedures have become more
sensitive to units and domains. For example, an attempt to
compute borehole intersections with a surface set stored in
milliseconds, using a borehole set which was loaded from
GeoFrame in seconds, will result in an error message.
Just as procedure sensitivity to environments may cause error
messages in some cases, it will also provide benefits. For example,
during many operations, input data from sets with differing
coordinate systems will be automatically converted to match the
coordinate system of the output set.
Multiple Environments
The most apparent characteristic of a display or modeling environment is its
x,y extent: the area of interest, or, as it used to be called, the engineering
window.
From the sample map at the beginning of this section, it can be seen why more
than one environment is needed in typical projects. The Modeling
environment, as its name suggests, is used for defining that area where grid
nodes are computed. The Display environment, on the other hand, has more
to do with defining the extent of the graphic entities making up a basemap.
Continuing with the example map, a basemap might be required which
included all available seismic surveys including the 2D, but the area required
for gridding and volumetric calculation may cover only the smaller 3D survey.
This is how you set the horizontal scale along the baseline for your
cross sections. This is how you set the Z-limits for Extended Statistics
and 2D cross-sections.
Tip: When an environment with an associated binset is loaded, the system searches
the GeoFrame database for the first binset which matches the attributes of
the one in the session file, and links it to the environment being loaded,
regardless of the name of the original binset. If a matching binset is not found,
the environment is not loaded. For this reason, it is not recommended to make
use of session files from other projects.
Rotated Grids
By virtue of the attributes available in the definition of Modeling
environments, a new facility is now available in CPS-3: the ability to
create rotated grids. The rotation attribute in the coordinate system
specifications allows the definition of grids like the one shown in the map
below, which is based on the survey azimuth.
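The underlying geometry is a standard rotation of the grid-local node offsets about the origin. A minimal sketch, with an illustrative function name:

```python
import math

def rotated_node_xy(i, j, x0, y0, dx, dy, angle_deg):
    """Map coordinates of grid node (i, j) when the lattice is
    rotated by angle_deg about its origin (x0, y0).
    (Illustrative sketch, not CPS-3 internals.)"""
    a = math.radians(angle_deg)
    u, v = i * dx, j * dy  # node offsets along the grid-local axes
    x = x0 + u * math.cos(a) - v * math.sin(a)
    y = y0 + u * math.sin(a) + v * math.cos(a)
    return x, y
```

Aligning the rotation angle with the survey azimuth lets the grid rows run parallel to the survey lines.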
When creating Modeling environments, you have the same flexibility. For
example, you have the ability to create a Modeling environment by simply
pointing to an existing binset in GeoFrame. If you use this procedure, please
make note of the following:
Note: If you happen to choose a method to create your Modeling environment
which involves the selection of a particular binset from the GeoFrame list of
binsets, you should know that binsets contain not only a grid lattice definition,
but specifications for x, y limits and a coordinate system, both of which will
supersede those values you may have already entered for your new
environment in the CPS-3 dialogs.
The instructor should, by this point, have discussed the concept of display and
modeling environments in CPS-3.
Exercise
Before beginning with our graphic operations, we will set the background
color and the screen margins.
1. In the CPS-3 Main Module menu bar, click Display > Display
Functions > Set Background Color.
3. In the CPS-3 Main Module menu bar, click on Display > Display
Functions > Set General Display Parameters. This opens the
General Display Parameters dialog box.
4. In the Set Screen Display Margin section, set the Left, Right, Top,
and Bottom margins to 1, 6, 1, and 1, respectively.
Exercise
We'll now create a display environment based on the extent of the 2D seismic
locations, because it appears to cover more area than the 3D locations. This
will give us a quick look at the extent of the data that we have.
We know that the 3D surveys in this project overlap, and anyone who has seen
the two surveys in IESX knows that the 2D survey covers a much larger area
than the 3D. Therefore, we will define our initial Display environment so that
its x, y extents match those of the 2D survey data set.
3. Click on the Lat/Lon Box tab, and ensure that the Coordinate System
is set to UTM, Zone 31.
9. To see all attributes associated with this new environment, click the
Review Environment icon.
We will display only minimal information at first, just to see the orientation
and coverage of the data.
1. In the CPS-3 Main Module menu bar, click Display > Basemap to
open the following Basemap Display dialog box.
4. Repeat this process for the two other 3D survey location data sets,
"mm_3d_g1_survey" and "mm_3d_offset_survey", but change the line
color for each. Click OK out of the Basemap dialog.
At this point, we see that the first 3D survey locations may extend beyond
the lower border, and so we'll demonstrate how to edit the current Display
environment to make it larger.
Exercise
1. Erase the screen using the Erase current display icon located in
the CPS-3 Main Module.
Exercise
1. Return to the Basemap Display dialog box, and redisplay the same 3D
data sets as before.
Exercise
Now we have an environment where we can see all of the data we've loaded.
Let's create a proper location map of the area. This exercise will introduce you
to some of the basemapping facilities in CPS-3.
2. In the CPS-3 Main Module menu bar, click Display > Basemap or
the Display basemap icon to open the Basemap Display dialog
box.
4. In the Display Window Labels dialog box, type the following values:
X:label size = 0.5
Y:label size = 0.5
X:label increment = 5000.0
Y:label increment = 5000.0
6. In the Basemap Display dialog box, toggle Scale Bar in the Basic
Elements section to red.
7. Click Set Parameters to open the Display Map Scale Bar dialog box.
8. In the Display Map Scale Bar dialog box, make the following
settings:
Toggle Use environment units, located across from Scale bar
units, to red.
Distance per scale bar segment = 2000
Number of segments in scale bar = 5
Positioning index = Center
X_position of scale bar = 8.0
Y_position of scale bar = -3.5
Scale conversion factor = 1
Text size = 0.4
10. In the Basemap Display dialog box, toggle Map Title in the
Basemap Display section to red, and click Set Parameters to open
the Display Map Title dialog box and make these settings:
Bottom margin title = GullFaks - 2D, 3D, and Well
Locations
Bottom margin text size = 0.5
Bottom margin X position of text center = computed
Bottom margin Y position of text center = -2.2
Left margin text size = 0.5
12. Click OK in the Basemap Display dialog box to display the graphics
in the CPS-3 Main Module.
Note: It is a common occurrence that some graphic elements seem to fall off the
edges of the display. For example, the map title and scale bar, placed below
the y-axis are not visible at this time. However, we will continue with the
display of the basemap components, knowing that these elements have not
been lost and will be restored to visibility shortly.
2. Click Display > Basemap > Extended Data and select the data set
mm_3d_g1_survey.
2. Click Pick Sets and select the data set 2d_gullfaks_shtpt, and click
OK.
Angle = Perpendicular
Size = 0.12
Toggle Symbol parameters to red.
Color = black
Increment = 25
Size = 0.12
Click on the Symbol code button and select a hollow circle.
Mode = Incremental
Toggle z1 to gray.
Note that the Shotpoint annotation increment is every 50th, and the
Shotpoint symbol increment is every 25th.
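The increment logic can be sketched as follows (illustrative only; CPS-3 applies these increments internally during display):

```python
def posting_for(shotpoint, symbol_inc=25, label_inc=50):
    """With a symbol increment of 25 and an annotation increment
    of 50, every 25th shotpoint gets a symbol and every 50th also
    gets a text label. (Hypothetical helper for illustration.)"""
    return {"symbol": shotpoint % symbol_inc == 0,
            "label": shotpoint % label_inc == 0}
```

So shotpoint 50 is both symbolized and labeled, while shotpoint 25 gets only a symbol.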
2. Highlight the line Data Set mm_3d_offset_survey, and then click the
down arrow once, to position it in the list as shown above.
3. Click OK. The smaller survey should move in front of the light blue
survey.
We will continue with posting of the boreholes and the bottom hole locations.
3. Click Set Parameters to open the Display a Set dialog box and make
the following settings:
In the Set Data Line Parameters section
Line style = solid line
Line width = thin line
Line color = black
In the Set Data Symbol Parameters section, Symbol size = 0.0.
In the Set Data Text Parameters section, toggle No text to red.
This completes the basemap, but as we noted, there are elements in the map
which we cannot yet see, such as the title at the bottom of the map.
Having spent a considerable time creating the display on the screen, let's save
our work before worrying about the hidden parts.
All graphics on the screen, as well as the portions hidden from view, have now
been saved into a permanent CPS-3 Map set, which can subsequently be
displayed in its entirety with one operation.
Now let's zoom out and look at the hidden annotation around the map.
1. To reveal the graphic in its entirety, click on the Reveal all graphics
icon. All of the displayed graphic is now visible.
Without even using the Environment dialog, we have created a new Display
environment, activated it, and transferred the entire contents of the screen
display to the new environment. Had anything gone wrong, or had we taken
a wrong step along the way, we could still go back to the Map set we saved
earlier and use it as a basis for the new environment.
Now, let us assume that we wish to change the location of the scale bar, but
leave everything else in the map unchanged.
1. Click on Display > Map Layers to reveal the Map Layer Manager
dialog similar to the one below.
2. Toggle the Delete button to red, next to the layer named Map Scale
Bar, then click OK.
Having removed the Map Scale Bar, we will now reposition it in a
different location.
5. In the Display Map Scale Bar dialog box, retain all the previous
parameter settings, but toggle the Use cursor button for both the X and
Y-position of scale bar.
Overview
In GeoFrame 4.0, components of seismic interpretation are stored in the
GeoFrame database and are accessible to CPS-3 by a variety of methods.
Some discussion of these new data relationships is needed, however, before
the exercises become intuitive. Here is a summary:
Seismic Attributes are stored as Grids in GeoFrame and are directly visible
in the CPS-3 Set Selector dialog.
2D locations are not directly visible in CPS-3 and are loaded into CPS-3
with the GeoFrame Link.
3D locations are visible in CPS-3 and can be posted in the Display menu.
Gridding 3D interpretation
Now, when gridding a 3D horizon, CPS-3 must be able to accept grids in
GeoFrame as input to the gridding algorithms. The Gridding menus have been
changed accordingly. Note the Single Surface gridding dialog now allows the
selection of either Data or Grid input:
However, as we examine the file selection dialog in CPS-3, our searching can
be clarified by the following facts:
There can be only one Time interpretation grid per horizon
There can be only one Depth interpretation grid per horizon
There will be only ONE set in the CPS-3 set selection dialog having
Name = Time, Surface = the desired horizon name, and Grid
Library = the correct grid library for the horizon. This set will be the
Time interpretation for the Horizon.
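That uniqueness rule can be expressed as a simple lookup. The following sketch uses an illustrative list-of-dicts representation, not the CPS-3 API:

```python
def find_time_grid(sets, horizon, library):
    """Locate the single Time interpretation grid for a horizon.
    Each entry in 'sets' is a dict with name, surface, and library
    keys (an illustrative representation only)."""
    matches = [s for s in sets
               if s["name"] == "Time"
               and s["surface"] == horizon
               and s["library"] == library]
    # By the rules above, at most one set can match.
    return matches[0] if matches else None
```

This is why the set selector, once filtered on Name, Surface, and Grid Library, leaves no ambiguity about which set is the Time interpretation.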
The fault boundaries we will use here are the polygonal fault boundaries
which represent the intersection of the fault surface with the horizon surface.
In a later chapter, we will use fault segment data in order to create grids for
one or more of the fault surfaces.
After loading or identifying the data for these horizons, we will look at their
basic statistics, and then display each data set to get an idea of the density and
distribution of its points. This will help us establish gridding parameters.
Exercise
2. Select grid as the Set Type, Select only GeoFrame for the Storage,
and then click the Filter icon.
From the Surface Name column we see that the first five grids, all named AA,
represent a depth interpretation for each of the five horizons which we use in
this project. You may not see these grids in your project listing, but these
particular versions of the interpretation were done in Charisma.
The important point is that no loading is necessary. CPS-3 has direct access
to the 3D interpretation for gridding.
For example, these are the seismic interpretation grids we will use for our
models in this course. Note the filter settings.
In the same way, the fault polygons associated with some of these 3D horizon
interpretations can also be seen directly in the CPS-3 Set Selector. Fault
polygons created during seismic interpretation will be seen as below:
4. In the List/Manage Sets dialog, select Fault as the Set Type, select
GeoFrame as Storage, then click Filter.
In this project, we see only these two fault boundary sets, but in practice, there
are usually more: typically one for each interpreted seismic horizon, and even
several versions. It is up to the user(s) to keep track of these boundary sets by
using the assignment facilities in IESX.
If Interpretation Models are used, then it is easy to keep track of these fault
boundary sets at the seismic interpretation level, although only one version of
each fault boundary is allowed for one horizon.
Exercise
1. From the CPS-3 Main Module, click Tools > GeoFrame Link to
open the CPS-3 GeoFrame Link dialog box.
2. In the CPS-3 GeoFrame Link dialog box, click the Load From
GeoFrame tab.
3. In the Load From GeoFrame folder, click Data Types to open the
GeoFrame Surface Types dialog box so that we can focus on the type
of data we want to bring across.
Note: For each pick under the Containers list, a DEPTH entry under the
Representations list should appear.
9. Highlight all five items in the CPS-3 Output Sets list, and then click
Run.
11. Return to the CPS-3 dialogs, and from the List/Manage Sets dialog,
delete these five marker data sets which we just loaded. They will be
found in the CPS-3 storage area, not in GeoFrame. Do not remove
anything from the GeoFrame data area:
BUNNKRITT_Depth
DRAKE_Depth
NESS_Depth
RANNOCH_Depth
TARBERT_Depth
Exercise
In this class, we will be using a different set of markers for each of these
horizons than the ones which are available via the GFLink. The ones we will
use were loaded into GeoFrame from a different source, and because the
source was external to this current project, they are visible to CPS-3 in its file
selector dialog as well-marker scatter sets. See below:
Exercise
Having spent some time discussing the various data available for our horizon
modeling, let's look at a quick summary of what we have for each horizon, in
stratigraphic order: the Bunnkritt, Tarbert, Ness, Rannoch, and Drake.
The Bunnkritt unconformity is unfaulted and the Rannoch and Drake do not
have permanent fault boundary sets at this time.
Exercise
Let us look at the statistics of some of the data we have just loaded.
1. From the CPS-3 Main Module, click Utilities > Sets > View
Contents/Statistics to open the View Set Contents/Statistics dialog
box.
5. Verify that their z-ranges are specified in Depth (m) and are in the
following range:
Tarbert: 1781 - 2389
Ness: 1712 - 2400
Let us review the fault boundaries for the Tarbert and Ness horizons.
6. From the CPS-3 Main Module, click Utilities > Sets > View
Contents/Statistics.
11. Determine from the statistics whether or not these fault sets have any
z-values.
Exercise
In this exercise, we are going to remove the mm_ prefixes from the fault set
names, and strip off their z-values.
1. In the CPS-3 Main Module, click Utilities > Sets > Rename to open
the Rename Sets dialog box below.
3. Highlight the GeoFrame mm_Ness Fault set, and click Rename at the
bottom.
6. Exit from the Rename dialog, and then click Utilities > Sets > Copy.
8. In the dialog which appears, set Output Set Type to Polygon, and
click New.
9. Define a Polygon set with the same name, Ness, define it to be stored
in GeoFrame, and click OK, then OK again.
10. From the Copy dialog, repeat the process for the Tarbert fault
boundary set.
11. Still in the Copy dialog, toggle Set Type to only Polygon and click
Filter.
13. Toggle Output Set Type to Fault and click Pick. Choose the original
Ness fault set and click OK, then OK again, to overwrite the original
version.
15. Now repeat the statistics on both of these fault sets to verify that there
is no z-value field.
16. Make sure that your display environment is still set to Quick_Look.
Erase the screen, and click Display > Basemap
Toggle Border to red.
Toggle Faults to red, and click Pick Sets.
Select the Tarbert fault set, and then click OK.
Exercise
Without worrying about grid spacing or gridding algorithms, let's have a
quick look at these data sets, which are really interpretation grids. We will
display each one using color-shaded symbols. This will not only show us the
data distribution and density, but will also give us a preview of the structure in
some cases.
3. Click Utilities > System > Set_Toggle_Switches, and make sure that
the switch Show Graphic Entities... does not have a red box next to it.
4. Click Display > Basemap, to open the Basemap Display dialog box.
This horizon has large erosion zones next to the fault zones, where it has been
eroded by the previously displayed Bunnkritt unconformity.
Display the fault set Tarbert on top of this display if you like.
This horizon also has similar erosion zones, but they are limited to the Eastern
half of the map.
Display the fault set Ness on top of this display, if you like.
Rannoch
Drake
Overview
The purpose of this chapter is to introduce you to the most important aspects
of gridding. You will learn how to select the most critical gridding parameters
which affect the quality and appearance of your surface. There are many
algorithms from which to choose, and many parameters which can affect your
map. However, for the typical mapping task, very good maps can be generated
using only a few of these parameters. An Advanced Topics class is available
for those who wish to study the inner workings of the gridding algorithms
more deeply, but the schedule for this course does not allow for all algorithms
and their parameters to be covered in depth. In this course, we will try to take
the pragmatic approach and give you the tools to get the best map possible in
the shortest time.
What is Gridding?
Gridding is the process of transforming randomly located or other data into a
regularly spaced lattice of values representing the z-dimension of the x, y, and
z data. After the transformation, the data points are redundant and
unnecessary, since the model of the surface is now embodied totally within the
grid and the associated fault traces, if any. Even though it is defined by a finite
number of points, the grid is meant to be thought of as a continuous surface.
Grid Terminology
Figure 11.1 Map grid and components. (The figure shows a grid with columns
numbered 1 through 9 and rows 1 through 6, a grid node and a grid cell
labeled, the node spacings XINC and YINC, and the extents XMIN = 3400,
XMAX = 7800, YMIN = 1800, YMAX = 4550.)
The above grid covers a range in X of 3400 to 7800, and in Y of 1800 to 4550.
This grid is a 6 by 9 grid, meaning 6 rows by 9 columns. The grid cell size,
the number of rows and columns and the grid range are related as follows:
X-interval = Range in X divided by (Number of Columns -1)
Y-interval = Range in Y divided by (Number of Rows -1)
CPS-3 may increase XMAX or YMAX to ensure that the range in x and y is
evenly divisible by XINC and YINC.
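These relationships can be sketched as follows, using the extents from the figure above (a minimal illustration; the function is ours, not part of CPS-3):

```python
def grid_geometry(xmin, xmax, ymin, ymax, n_cols, n_rows):
    """Node spacings for a regular grid whose edge nodes sit on the extents."""
    xinc = (xmax - xmin) / (n_cols - 1)
    yinc = (ymax - ymin) / (n_rows - 1)
    return xinc, yinc

# The 6-row by 9-column grid of Figure 11.1
xinc, yinc = grid_geometry(3400, 7800, 1800, 4550, n_cols=9, n_rows=6)
print(xinc, yinc)  # 550.0 550.0
```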
Next, examine the contours in the areas of the grid between the data points.
Are the contours reasonable?
Are they what you would expect?
Are they relatively smooth, considering the data?
Do they continue the trends established by the data?
If the answer to these questions is yes, then the second criterion has probably
been fulfilled. Remember that after gridding, your model is represented by the
grid and fault traces - the data has become redundant.
Gridding Algorithms
As you will see, CPS-3 has many gridding algorithms, but each of them is
designed to solve the basic gridding problem below:
Given the data points and fault traces above, create a model of regularly-
spaced grid points which honor the data and exhibit a smooth transition of
the surface between data points and the edge of the map.
Figure 11.1 Common gridding problem showing data and fault traces
Inspecting our data shown in the previous figure, let us assume for a minute
that we want to create a grid from only this well data and the faults shown. We
will assume that we want to include the far western well in the grid and so the
modeling area will cover the area shown. Looking at the fault traces, we note
that the horizontal separation is narrow for some of the faults, but significant
for others. We also note fairly thin fault blocks between several of the faults.
In its most basic form, the method for determining the proper grid spacing can
be stated as follows:
Find the closest two data points whose difference must be distinguishable in
the grid and let the grid spacing be 1/2 the distance between them.
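That rule of thumb can be sketched as follows (a hypothetical helper, not a CPS-3 function; the well coordinates are invented for illustration):

```python
from itertools import combinations
from math import hypot

def suggested_grid_interval(points):
    """Half the distance between the closest pair of data points whose
    difference must remain distinguishable in the grid."""
    d_min = min(hypot(ax - bx, ay - by)
                for (ax, ay), (bx, by) in combinations(points, 2))
    return d_min / 2.0

# Three hypothetical wells; the closest pair is 50 units apart
wells = [(100.0, 100.0), (150.0, 100.0), (900.0, 400.0)]
print(suggested_grid_interval(wells))  # 25.0
```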
This method is almost guaranteed to yield a grid having the desired criteria,
that is, a contour map which shows the difference between the two chosen
points. However, there may be other limiting conditions which you should
consider before settling on this grid spacing. For example, it could be that
such spacing generates a grid whose number of rows and columns is so large
that the gridding takes an inordinate amount of time. It is also possible that
such a spacing is so small that it produces a grid which contains a very large
amount of high-frequency noise. While the stated method of choosing a grid
interval is a very good way to start, there are even more reasons why you
should look at other characteristics of your data as shown below.
Looking at our simple example again in the figure below, we can get a better
feel for the scale of our data and the grid spacing by introducing a graphic
lattice on the display. The lattice shown is 500 meters on a side. Judging from
the spacing of the well points, 500m is not a good grid spacing since there are
several points closer than this whose difference we will want to see.
Also note the size of the grid cell relative to the size of the thin fault blocks.
These blocks will hardly be defined with such large spacing. Sometimes, the
ultimate criterion for defining a grid size is the horizontal separation in the fault
zone. If it is desired to define the fault zones for volumetric purposes, then the
chosen grid size must provide enough nodes in each fault zone so that it can
be modeled along with the fault blocks.
Figure 11.1 Data points and fault traces with grid lattice
If we were to use the distance-between-the-two-closest-points criterion for
selecting the interval in this data set, we might choose the two points, each of
which borders on a north/south-oriented fault trace toward the southeast center
of the map. As we have just noted, however, the width of the fault blocks in
that area is even smaller than the distance between these points. Clearly, the
width of these fault blocks should be the defining feature to be resolved in
the final grid. We would therefore choose a grid interval of approximately
100m in this example, which will allow us to define the fault blocks, but not
all of the fault zones.
Considering only the well data and the fault patterns, we have determined a
reasonable grid interval based on close inspection of the features in our data.
But in your project data, you may also have 2D or 3D seismic interpretation
to go along with the well data. This same sort of analysis should be applied to
the seismic data.
As you can see, there is no magic formula for determining a gridding interval,
other than deciding on the size of the smallest feature you want to resolve in
the resulting surface.
Those gridding tasks mentioned above are special cases and do not represent
the typical task of creating a structural model from well data, fault traces, and
seismic interpretation, which is the most common modeling exercise. The
operations above and the algorithms used for the creation of their surfaces are
important to learn about, but in the exercises for this chapter, we will focus on
the Convergent Gridding algorithm, which is recommended for day-to-day
structural modeling.
Convergent
A very stable, fast, general purpose algorithm for computing a smooth, but
accurate fit to almost all types of data. This algorithm provides trend-like
extrapolation, and being the flagship of the CPS-3 gridding algorithms, it
should be the first one considered for almost any structural data.
Contour to Grid
Least Squares
Moving Average
This is a simple, general purpose algorithm used for computing an average fit
to scattered data. Moving Average is mostly used for a quick-look or for
gridding noisy or statistical data. Use this algorithm when you do not want the
surface to contain values which fall beyond the range of the data.
SNAP
Isopach
Trend
Trend is a general purpose algorithm used for computing data trends. You
specify the order of the trend (1st, 2nd, etc.).
Polynomial
Step
Distance
Density
If, after the first attempt, you are not satisfied with the grid, use the following
guidelines for changing one or several of the three parameters above.
Problem                                            Solution
Grid does not honor control points                 Make Final grid interval smaller
Grid takes forever to compute                      Make Final grid interval larger or
                                                   Initial grid interval smaller
Not enough extrapolation beyond data points        Make Initial grid interval bigger
Grid does not show features inherent in the data   Make Final grid interval smaller
The spacing of the nodes in the figure below represents the 50m x 50m grid
spacing which we decided to use for this training data. This figure shows a
zoomed-in area of our data, and, for purposes of this faulting discussion, we
will assume that the only data we are using for gridding is the well data.
This figure shows how data is selected to be used or not used during the
computation of a grid node value, depending upon its spatial relationship to
the node being computed and the fault patterns.
Any algorithm in CPS-3 which is computing the value of a grid node, such as
the one marked + in the figure above, will treat each fault line segment as a
barrier during the interpolation process. Data points which can potentially
contribute to the value of a node must first pass a visibility test to see if they
will be used or discarded. If any point is in the shadow of any fault line
segment, then it will not be used in the computation of the node. For example,
in the figure above, points A, B, C, and D are in the shadow of at least one of
the faults, and will therefore not be used in the computation of that node's
value. None of the other points is in the shadow of any fault, and so
they will all be used for the computation.
The idea is that only data points which are on the same side of a fault as the
node being computed will be used to compute that node's value. Points on the
other side of a fault or on another fault block are not used.
Note that this same visibility criterion is also used by the CPS-3 grid-based
interpolator during the computation of other manifestations of a surface, such
as contouring, volumetrics, and refinement. That is, all grid-based procedures
in CPS-3 are cognizant of where the faults are located and will modify their
results based on the faults, as long as they are specified on input.
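The visibility test described above can be sketched as a segment-crossing check: a data point is discarded for a given node if the straight line between them crosses any fault segment. This is an illustrative reconstruction, not CPS-3 code, and it ignores degenerate cases such as a sight line touching a fault endpoint:

```python
def _ccw(a, b, c):
    # True if the points a, b, c make a counter-clockwise turn
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """True if segments p1-p2 and q1-q2 properly intersect (general position)."""
    return (_ccw(p1, q1, q2) != _ccw(p2, q1, q2)
            and _ccw(p1, p2, q1) != _ccw(p1, p2, q2))

def visible_points(node, points, fault_segments):
    """Keep only points whose line of sight to the node crosses no fault."""
    return [p for p in points
            if not any(segments_cross(node, p, a, b) for a, b in fault_segments)]

node = (5.0, 5.0)
pts = [(1.0, 5.0), (9.0, 5.0)]
fault = [((3.0, 0.0), (3.0, 10.0))]      # vertical fault between node and pts[0]
print(visible_points(node, pts, fault))  # [(9.0, 5.0)]
```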
2D Seismic
Line-Decimated 3D
Dense 3D
For dense 3D, the first thing to decide is whether you really need all the data
points which may be present. In some cases, 25m or 50m cdp spacing may
lead to grids in CPS-3 which exceed reasonable limits. At present, reasonable
limits can be characterized as follows:
150x150 grid - a modest grid
300x300 grid - an average-to-large grid
500x500 grid - a very large grid which is manageable, but may cause
some delay in processing and require lots of storage if many horizons
are involved
1000x1000 grid - unless you have an extremely fast server and lots of
swap space and lots of storage, grids of this size are not manageable at
this time.
For this type of data, the interpretation is typically at a density which is equal
to or greater than the Final interval you might choose. For this reason, the
proper algorithm to select for gridding is SNAP, not Convergent. Choosing
SNAP and setting the parameters as indicated below will produce either a
literal copy of your interpretation (without any interpolation taking place), or a
sub-sampling of it, depending upon whether the data density is, respectively,
equal to, or greater than the selected Final interval.
Final interval: decide based on how much you want to invest in
gridding time, grid maintenance time, and storage space.
Initial interval: same as Final interval
Number of Nodes to Snap To: 1
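With these settings, the effect is roughly that each interpretation point is assigned to its nearest grid node, with no interpolation. A minimal sketch of that idea (names, null handling, and tie-breaking are our assumptions, not the CPS-3 implementation):

```python
def snap_grid(points, xmin, ymin, xinc, yinc, n_cols, n_rows):
    """Snap each data point to its nearest grid node (last point wins);
    nodes with no nearby data stay undefined (None)."""
    grid = [[None] * n_cols for _ in range(n_rows)]
    for x, y, z in points:
        col = int(round((x - xmin) / xinc))
        row = int(round((y - ymin) / yinc))
        if 0 <= col < n_cols and 0 <= row < n_rows:
            grid[row][col] = z
    return grid

# Hypothetical cdp picks on a 10m lattice
pts = [(0.0, 0.0, 1500.0), (10.4, 0.2, 1510.0)]
g = snap_grid(pts, xmin=0.0, ymin=0.0, xinc=10.0, yinc=10.0, n_cols=3, n_rows=2)
print(g)  # [[1500.0, 1510.0, None], [None, None, None]]
```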
As an integral part of the horizon being mapped, the fault zone should receive
as much attention as the fault blocks.
In this technique, you first grid the horizon's fault blocks as well as possible,
while using the fault boundaries during gridding. This ensures the integrity of
shape for each block. When this initial grid is complete, blank out this grid
inside all fault polygons. This cleans the fault zones of any spurious
definitions caused by data which happened to fall in the fault zones. Next,
copy the blanked grid to a data set, then use that data set to recreate a grid of
the horizon with the convergent algorithm WITHOUT using the faults. The
theory of this technique is that the best possible data to define the fault zones
are the grid nodes right at their edges.
If you have quality z-values on your fault boundaries, then this is excellent
data to use to define the fault zone. Simply make sure that the gridding
algorithm can see and will use these z-values by setting the appropriate
switches when specifying gridding parameters. This technique requires only
one step, but also requires some quality control for the fault boundaries and
their z-values, and assumes there is no conflicting data which might happen to
fall in the fault zones.
This technique is not an easy one if you have more than a few faults. It does,
however, lend itself to being incorporated into a macro. The reason for this is
that it requires several operations per fault. Here is the outline of the steps for
one fault:
blank the existing fault surface outside of its associated fault boundary
polygon
perform the surface operation which replaces nodes in one grid (the
horizon grid) with nodes from another (the blanked fault surface). In
particular, the surface operation required is the 10th Multiple Surface
Logical Operation, A in Union, otherwise defined as
C=A, but if A or B is Null, then C = Valid(A,B) (-255)
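The per-node effect of this operation can be sketched as follows (None stands in for Null, and -255 is presumably the internal null code; here A is the blanked fault surface and B the horizon grid, so defined fault-surface nodes replace horizon nodes):

```python
def a_in_union(a, b):
    """'A in Union': C = A, but if A or B is Null, C = Valid(A, B)."""
    return b if a is None else a

fault_nodes   = [None, 1510.0, 1515.0, None]   # A: blanked fault surface
horizon_nodes = [1500.0, None, 1520.0, None]   # B: horizon grid
merged = [a_in_union(a, b) for a, b in zip(fault_nodes, horizon_nodes)]
print(merged)  # [1500.0, 1510.0, 1515.0, None]
```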
Overview
1. Assuming that we will use both well data and seismic data in gridding,
we'll go through the decision-making process to settle on a Final
gridding increment for the sample data sets.
3. We will also discuss the topic of defining the fault zones in a horizon,
and when it is necessary.
4. Finally, we'll talk about when it is convenient to grid the fault surfaces
themselves, and describe a macro which uses a specific technique for
gridding fault surfaces.
Well data
Let's look only at our well data for a minute. In the map below, several of the
wells are fairly close together. The closest are about 50 feet apart.
In some cases, wells can be so close that choosing a gridding interval, based
on their distance, can lead to a grid which contains an inordinate number of
nodes. If we choose an xinc based on the two closest wells, according to the
rule of thumb, we will have a grid with a spacing of 25 feet, 490 columns, and
600 rows.
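As a sketch of the arithmetic, with hypothetical map extents back-computed so that the 25-foot spacing reproduces the 490-column by 600-row example (the actual extents are not given in the text):

```python
def grid_size(x_range, y_range, closest_pair_distance):
    """Rule of thumb: grid interval = half the closest well separation."""
    inc = closest_pair_distance / 2.0
    n_cols = int(round(x_range / inc)) + 1
    n_rows = int(round(y_range / inc)) + 1
    return inc, n_cols, n_rows

# Hypothetical extents of 12,225 ft by 14,975 ft; closest wells 50 ft apart
inc, n_cols, n_rows = grid_size(12225.0, 14975.0, closest_pair_distance=50.0)
print(inc, n_cols, n_rows)  # 25.0 490 600
```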
If this were all the data we had, and our concern was to reflect all data in the
grid as accurately as possible, then we might use the 25 foot spacing,
especially if the z-values in the two closest wells indicated a substantial
difference in slope from the rest of the map. This would guarantee that each
data point fell in its own grid cell, and the difference in z-values between the
two points would be preserved in the grid and the resulting contours.
If, as is often the case, the z-values of the two wells are not very different,
then, for all practical purposes, we only need one of them in the gridding
operation. This means that we could ignore their separation as a criterion for
the grid spacing and use a larger grid separation.
Here is another way to look at it. There are only two wells which are as close
as 50 feet. If there were many wells in the data set, then one might be willing
to accept a bit of averaging in the grid around these two wells, if it meant the
difference between a modest grid and a very large grid.
Looking at the rest of the wells, it appears that the next smallest separation is
approximately 100 or 200 feet. Let us assume that the difference in z-values of
the two closest wells is not significant. Then, this inspection of only the well
data tells us that we might use a grid interval of about 100 feet. This will
provide a surface which is relatively modest in size, but which will resolve any
important features inherent in this well data.
Note: It is appropriate to look closely at the well data when determining a grid
spacing, since traditionally, well data is considered to be of a higher quality
than the seismic interpretation.
Let us now look at the seismic data to see if it tells us anything different about
the grid spacing.
Seismic Data
1. Seed Interpretation
Cloudspin interpreted horizons having INTRP in their names fall into this
category. This means that the lines are about 1100 feet apart. This particular
geometric distribution of data points is characterized by lines which are
relatively far apart with respect to the density of points along each line. This
type of data distribution sometimes requires extra work in the normally simple
gridding operations, especially when a grid spacing is chosen which is close to
the distance between points on the lines.
The reason for the extra work is to overcome the tendency of any gridding
algorithm to tie the grid closely to the many points along the line, while giving
an average solution at nodes between the lines. The following figure shows
why the extra step is sometimes needed. It shows a grid computed from seed
interpretation data where only a simple convergent step was applied. Note
the bulls-eyes along the lines.
Figure 12.4 Seed interpretation grid displaying noise and bulls-eyes along
data
With this type of data, it is sometimes necessary to increase the final grid size
in order to reduce the noise and bulls-eyes along the data line. If you have
several horizons which must maintain the same x,y extent and grid spacing,
this can be a problem if this type of data happens to be available for one
of the horizons. In this case, the recommendation is to grid the data at the
smallest final grid interval which minimizes the bulls-eyes, but then refine the
grid down to the smaller required size.
2. ASAP Interpretation
Cloudspin horizons having ASAP in their name fall in this category. This
type of geometry implies that every cdp in the survey area is potentially defined,
as is almost the case in the portion of our Jakarta horizon shown in the
following figure.
The IESX automatic picking procedure called ASAP was used in the above
figure to finish the horizon interpretation between the 20x20 lines picked by
the interpreter. Dense data like this is usually very simple to grid. However,
holes in the interpretation can cause gridding problems. For example, in an
ASAP version of the Kobe horizon below, we see patches of missing
interpretation, especially below the Hobart fault in the lower part of the map.
When using the Convergent gridding algorithm, the key is to make the
Starting interval large enough so that holes in the grid are avoided. For the
Least Squares algorithm, make the Search Radius larger when unwanted
holes in the grid are identified.
3. Combinations
The Paris horizon below contains both 20x20 and ASAP interpretation
geometry, but it should not be any more difficult to grid than the others. All
data distribution types shown here can easily be gridded with the Snap and
Convergent Gridding algorithms, as we will demonstrate.
4. 2D Only
This type of data is similar to the seed distribution type, in that data exists as
very dense points along lines which are sparsely distributed. As with the seed
data, the potential exists for bulls-eyes along the lines if too small an xinc is
used initially. Either post-refinement of a somewhat larger final grid size, as
mentioned above, or smoothing of a required final grid size can help here.
There are many options available in seismic interpretation packages, and some
can affect the way in which data is gridded in the mapping stages. For
example, in the Cloudspin project, not all horizons were interpreted with the
automatic smoother activated. We will see one or more of our horizons
exhibit high-frequency noise in the initial contours. A small bit of smoothing
in CPS-3 can remove this noise with little effect on data tying.
We chose a grid spacing of 100 feet based on the well data, but note that for
the densely-interpreted seismic horizons, many of the data points are 55 feet
apart. The choice of a 50 foot grid spacing would give us a grid having 225
columns and 280 rows. This is not an extremely large grid, and 50 feet would
be an appropriate spacing if this reservoir were modeled with the intent of
computing accurate oil in place.
2. If the interpretation contains holes or does not exist in areas where the
grid must be defined, then use the CONVERGENT algorithm with
Number of Nodes = 16, and Starting Grid Interval = half the diameter
of the largest hole in the data.
As you will see in the exercises, defining the fault zones in a horizon does not
require the presence of a surface for the faults, which is discussed in a
different context below.
It is sometimes difficult to tell if grid nodes have been defined in fault zones.
When horizontal separation is small, only a few nodes can fit across the heave.
In this case, contours may not be generated in the fault zone, even though the
nodes are defined. (See the example below.)
Tip: Even though display functions like contouring may not be able to visually
render every portion of a gridded surface, it is still important that the nodes
be defined properly for structural reasons.
The answer to this question depends on how much detail you want in the
structural envelope you create for volumetrics. If your structural envelope
does not involve sealing faults, then you probably don't need to grid your fault
surfaces. If the envelope does involve one or more sealing faults, but having
very modest horizontal and vertical separation, you may still decide not to
grid the fault surfaces, and simply treat the faults as vertical faults. However,
in the case of large throw or large heaves on the sealing faults, you will
probably get more accurate results in your volumetrics if you include the fault
surface in the structural envelope.
Here, we'll discuss a technique for gridding fault data which may not be
necessary for all fault data sets, but which is a very useful alternative to
Convergent gridding. The origin of this gridding technique is the realization
that most fault surfaces have strong linear components in the direction of dip.
With the typically sparse data provided by fault segments, as interpreted in the
seismic, strong linear trends are not always honored by the Convergent
algorithm. The gridding technique consists of the following:
Create an initial 2nd order Trend grid from the fault data points.
At all data point locations, subtract the Trend surface value from the
z-value in the data point, creating a new z-field called Error.
Create a grid of the Error and add it to the initial Trend grid, giving
the final grid which contains a strong fault-like trend downdip, but also
ties to the observed data.
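The steps above can be sketched with NumPy. This is only an illustration of the trend-plus-error idea under invented data; the actual gridding of the Error field (e.g. with Convergent) and the grid addition would be done in CPS-3:

```python
import numpy as np

def fit_trend2(x, y, z):
    """Least-squares 2nd-order polynomial trend z = f(x, y)."""
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def eval_trend2(coeffs, x, y):
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    return A @ coeffs

# Hypothetical fault-cut picks: a strong downdip trend plus small noise
x = np.array([0.0, 1.0, 2.0, 0.5, 1.5, 2.5, 1.0, 2.0])
y = np.array([0.0, 0.2, 0.1, 1.0, 1.1, 0.9, 2.0, 2.1])
z = 1500.0 - 40.0 * x - 5.0 * y + np.array(
    [0.3, -0.2, 0.1, 0.0, 0.2, -0.1, 0.1, -0.3])

coeffs = fit_trend2(x, y, z)
error = z - eval_trend2(coeffs, x, y)   # the "Error" z-field at the data points
# In CPS-3, the Error field would now be gridded and that grid added
# node-by-node to the Trend grid, giving a surface that trends downdip
# yet ties the observed data.
print(bool(np.max(np.abs(error)) < 1.0))  # True: residuals are small vs the trend
```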
Overview
In this chapter, we will focus on the geographic area where the horizon grids
will be computed, and define the Modeling environment for it, keeping in
mind the guidelines that we learned from the lectures on gridding techniques.
Exercise
The Tarbert interpretation, shown below in white, defines the area which we
want to map. This horizon pinches out against an upper unconformity, the
Bunnkritt, toward the East, yet there are still plenty of wells in the area
from the Western platform. At this time, there is no interpretation for the 2D
survey locations. They have been used mainly for basemapping exercises. We
will therefore define an area which surrounds the Tarbert interpretation and
create a modeling environment for it.
1. Click the Select Environment icon in the Main Module and then
click on Create in the Modeling panel.
3. Click the Data Extents tab, then make sure that the coordinate system
is correct.
5. You will see a message that the system has accepted the coordinate
system. Click OK to this message, and continue by clicking the
Min/Max tab.
6. Then, under the X Axis and Y Axis panels, modify the limits of the
data minimums to match the following:
X-Minimum = 453700
Y-Minimum = 6784500
8. So that your maps will match the maps shown in these exercises,
increase or decrease the x and y maximums in your dialog, so that the
following values are reflected:
X-Maximum = 459600
Y-Maximum = 6792100
Column Count = 119
Row Count = 153
10. Click OK, then highlight the associated display environment called
StdView:gullfaks_gridding, and click Edit Current in the Display
panel.
11. Set the Map Scale Mode to Direct and specify a Scale of 250 m/cm.
Note that early versions of GF4 may incorrectly show (m/in) instead
of (m/cm). Verify that your plot is the same size as shown in the figure
above and click OK to return to the Environment dialog box.
Returning to the Select Environment dialog box, note that there are
now two new environments. One is our new Modeling environment,
the other is a companion Display environment called
StdView:gullfaks_gridding, which is useful, but only if the Modeling
environment is rotated.
Remember:
Note: If you happen to choose a method to create your Modeling environment
which involves the selection of a particular binset from the GeoFrame list of
binsets, you should know that binsets contain not only a grid lattice definition,
but specifications for x, y limits and a coordinate system, both of which will
supersede those values you may have already entered for your new
environment in the CPS-3 dialogs.
Exercise
We will choose the Snap algorithm in its most simple mode to obtain a grid
which is almost a literal copy of the interpretation. This is a very fast
procedure, since the algorithm is not required to make more than one iteration
through the data.
The single Snap procedure should be used with very dense data like the
Bunnkritt, where the data points are as close, or closer, than the grid spacing,
and there are no appreciable gaps or holes in the interpretation.
3. From the CPS-3 Main Module, select Modeling > Single Surface to
open the Single Surface Gridding dialog box.
4. Erase the screen, then click Display > Contours to open the Display
Contours and/or a Shaded Surface dialog box.
Select Bunnkritt for the Grid.
Make sure that Display Line Contours is toggled to red, and click
Set Parameters. This opens the Parameters for Contouring a
Surface dialog box.
Start contour level - toggle Computed to red.
Increment between contours - specify 5.
Number of contours - toggle Computed to red
Further down, on the right, click No Bolding, No Blanking,
and No Labeling.
Click OK to close the Parameters for Contouring a Surface
dialog box.
Click OK to start the contouring.
1. Because the data is skewed slightly to the Northeast, a few of the grid nodes
on the East and West edges were not defined by the simple snap gridding to
the two closest nodes. In order to fill out the entire grid, we need to re-grid
with different parameters, to force some extrapolation to occur at the edges.
Both of these minor problems can probably be repaired by simply setting the
number of nodes to a larger value, perhaps 8 or 16 for this particular data set.
This is better, but the border shows that we still have some missing nodes at
the edges.
Re-grid again, using Number of Nodes = 16
You will see that this will also not carry the grid all the way to the edge. With
this data set and map area, we'll have to use the Convergent algorithm with a
Starting Grid Interval of approximately 200m in order to extrapolate all the
way to the edge.
Re-grid again, this time select the Convergent algorithm and set
the Initial Grid Interval to 200m.
This map of the Bunnkritt is complete all the way out to the edges, and the
horizontal and vertical features have also disappeared. We'll use this as the
final map for the Bunnkritt unconformity.
Most of the wells are coincident with the seismic interpretation. We trust that
calibration has been performed. One of the wells provides some additional
data in the erosion zone to the West of the F4 fault, but other than that, the
wells do not cover much geography in this horizon.
We also note that a few of the fault boundaries appear to be spurious, forming
fault blocks (see the arrows) where there is no data to define them.
F14
F7a_D
F6a
F15a
F6
In these cases, we have the option to remove the fault patterns for the F6a,
F7a_D, F15a, and F6 faults and obtain a simplified model. Another option will
also be discussed when we address filling in the fault zones.
If we were only making structure maps, and the long-term goal of our
exercises did not involve the computation of volumes, or the derivation of an
envelope for the gross isochore, we might forget about the fault zones and
simply grid this surface using the fault polygons provided, blanking out the
fault zones in the map. We will do this at first, but since the real purpose of
these grids is to construct an envelope for a specific lithologic zone, we must
address the gridding of the fault zones in the next phase.
As suspected, we have several fault blocks on the East side which are
undefined or poorly defined. At present, we do not have any other data with
which to define these blocks, and so we will remove the fault patterns
mentioned earlier.
We also note that the computed grid does not cover the entire map area due to
lack of data. We have the option of increasing the size of the initial grid
interval which will have the effect of causing more extrapolation into these
blank grid areas.
When we zoom in, closer to the well locations, we notice several large
discrepancies between the well and seismic interpretation. We will continue
without the well data when we re-grid this horizon just to get a smooth map,
but in a real project, of course, the geologist and seismic interpreter should
resolve the differences at the wells.
Another characteristic of this map which requires some remediation is the
contours in the fault zone. In this first part of the exercise for the Tarbert
horizon, we'll simply blank them out after removing the fault patterns and
regridding as mentioned above.
The last problem with this grid is that there is some inherent noise in the data
which is evidenced by the horizontal streaks. Again, this comes from the way
in which the data was prepared and will be remedied by smoothing as we
proceed.
In order to remove the proper fault polygon subsets from the Tarbert fault set,
you'll need to know that the faults we want to remove have the following
subset numbers:
3: F7a_D
9: F15a
10: F6
12: F6a
4. Click Utilities > Sets > Subset, then click Delete, then Fault, and pick
the CPS set named mm_Tarbert.
Select subsets 3, 9, 10, and 12.
Click Apply, then Cancel.
5. At this point, we will decide not to use the Tarbert wells because of
the bull's-eyes we see in the final map. We will also decide to make use
of another secondary data set called
mm_TARBERT_contour_interp. This data set is derived from
previous experience, knowing that in the lower right-hand corner of the
map, we will have crossing grid problems with the Ness horizon unless
we control this part of the Tarbert horizon, which is so lacking in data.
6. Erase the screen and re-grid the Tarbert grid using Single Surface
gridding as before, except change the Initial Grid Interval to
500.0m. Remove the wells from the input data and instead use the
mm_TARBERT_contour_interp data set as the second data set.
There is still an area on the East edge of the map where the grid is not defined.
In this case, the reservoir which we will be modeling does not exist there, and
so we can leave the grid as it is.
We also note many parts of the fault zones are not defined. The reason for this
is that there was no data explicitly provided to model them. To complicate the
situation, there were many horizon interpretation data points which were
incorrectly left inside these zones by the interpreter. During gridding, the
algorithm used them with the result that you see - badly defined fault zones
with holes in them.
Now let us finish the first section for this Tarbert horizon by showing one way
to correctly handle the display and modeling of the fault zones.
A Final Smoothing
Color-shading does not reveal the fact that there are some areas of high-
frequency noise, coming from the data. We also saw this in the Bunnkritt.
9. Click Operations > Grid > Smooth, then select the Tarbert grid and
its fault boundary set as input. Click OK.
10. Choose a convergent threshold of 2% and click OK. This will remove
the kinks in this grid, which can be verified by dense line contours.
11. Since we know that the fault zones in this grid were created by
improper and incomplete data, the first step will be to simply blank
them out in the grid.
Click on Utilities > Sets > Copy and copy the Tarbert fault set to a
polygon set of the same name.
In the Copy Sets dialog box, toggle Fault [F] to red.
Select mm_Tarbert, and click Copy.
In the next dialog box, select Polygon for Output set type and
provide a new Output Set Name of mm_Tarbert, then click
OK.
In the Copy dialog, click Cancel to exit the copy procedure.
12. Click Operations > Grid > Blank to open the Blank a Grid dialog
box.
For Grid (input), select Tarbert.
For Polygon (input), select mm_Tarbert, and click OK. This
opens the Blank a Grid dialog box.
Select Inside only as the Calculation mode, and click OK to
begin the blanking.
13. Erase the screen and color-shade the Tarbert grid as before.
We now have a grid which is representative of the data. It has been somewhat
extrapolated in certain areas, and the fault zones have been blanked. If we
were only creating structure maps using limited extrapolation, we might say
that the map is finished.
In this class, however, our workflow extends all the way to volumetrics, and as
the instructor has explained, this will require us to fill in the fault zones for
each horizon in the structural envelope so that we do not end up with holes in
the isochore grids.
There are actually several ways to fill in fault zones, but the one presented
here is the most direct and requires the least amount of extra data. We will
simply use the grid itself as data to re-grid the entire horizon without the
faults, since the fault blocks have already been very well defined. The grid
nodes along the edges of each blanked fault zone will be extremely good
data with which to define each of the fault surface patches which we need
to fill in the blank areas. When we re-grid in this manner, we'll also choose to
do a little more extrapolation on the fault blocks towards the East.
1. From the Single Surface Gridding dialog box, re-grid the Tarbert
horizon as follows:
Under Data, select the Tarbert grid as the only Data set.
Because we created this version of the grid without using the fault boundaries,
the grid will not be automatically associated with a fault set. Therefore, we will
force the association before we contour.
12. Erase the screen, and click on Display > Contours and contour the
Tarbert surface.
If you like, you can color-shade this new grid to verify that there are few, if
any, areas which are not defined.
When finished, your map should be named Ness, and should resemble the one
shown below:
We now have three of the several components we need to build the structural
envelope for the lithozone between the Tarbert and the Ness horizons. In
particular, we have the unconformity grid, Bunnkritt, and the two horizon
grids below it: the Tarbert and the Ness.
We'll take a break from modeling activities for a moment to learn a little more
about basemapping and ASCII file input in the next exercises.
Overview
In this chapter, we will look at more options available for basemapping. We
will introduce you to color-shading, and demonstrate many of the graphic
parameters which we did not take the time to discuss during the gridding.
During screen display, CPS-3 first creates all graphics in a manner and scale
appropriate to the size of the plot you have specified in your Display
environment. It then renders a smaller version of this on your workstation
screen.
[Diagram: plot size versus on-screen rendering]
Thus, rendering of character sizes, symbol sizes, and even contour quality are
a function of the physical size of the plot you specify when you scale your
Display environment. There are three modes by which you may set the map
scale: Ratio, Direct, or No Scale.
When you select either Ratio, or Direct scaling methods, you have the ability
to define a plot size of your own choosing, for example, 18 inches for X and
22 inches for Y. This means that if you select a text size of, say, .25 inches for
contour labels, then they will be .25 inches high when the map is plotted at the
specified size. On your screen, the text will be smaller since the workstation
screen always presents a miniature version of your scaled map, except in the
rare case when you specify a plot size which is actually smaller than your
workstation screen.
When you select No Scale, CPS-3 simply assumes a default maximum plotter
size of 12 inches by 12 inches, then scales everything and maintains the aspect
ratio accordingly.
Even though your sets in CPS-3 are stored in meters or feet, you may choose
to specify character sizes and other graphic dimensioning in English or
Metric notation. This is determined by the GeoGraphic Distance attribute in
the GeoFrame Display Units, which is managed from the Project Manager
dialog.
Rendering
No matter how you actually specify the size of your paper plot, text sizing
and symbol sizing will be relative to that plot size, as shown in the previous
diagram. Stated another way - if you set a posting symbol size of .25 inches,
the symbol will be .25 inches high on the final map, not .25 inches on the
workstation screen.
When you select contour quality, the system attempts to tie the selected
quality specification to a measurable result on the physical plot. This is
accomplished by refining the contoured grid to levels which will cause smaller
and smaller contour line segments. If time permits, the instructor might
provide a basic description of the contouring algorithm at this point, but the
basic fact is that refining the grid before generating contour loci will cause the
individual contour line segments to become smaller. Below is a table which
maps Contour Quality to Refine Grid Cell Size in inches.
Note that in later versions of the software, this feature may work slightly
differently, but the on-line Help is always available for current details.
Map Size
For a given grid and a given Contour Quality, smaller plot sizes will
contour more quickly than larger ones.
Exercise
In this exercise, we will learn how plot size affects the size of characters on
the screen. The assumption is that the GeoFrame Display Units are still
Metric/Charisma.
[Figure: screen display; text is about 1/4 inch high]
Placing a border on the plot shows the physical size of the Display
environment as rendered on the screen. It should be very close to 9.1 by
11.7cm, or about 3.6 by 4.6 inches, which corresponds to the size specified in
the Display environment. In this case, text size on the screen is the same as
what you have specified in the parameters because the plot size is smaller
than the screen, and the system does not have to shrink it.
4. Now, return to the Select Environment dialog and again edit the
GullFaks_Gridding Display environment.
Change the Horizontal Map Scale Mode to be Direct, and the
Scale to be 120 meters/cm. Striking the <Return> key will cause a
new Resulting Map Size to be displayed. This should be 49.2 by
63.3 cm.
Click OK back to the Main menu, discarding any existing graphics
on the screen.
5. Contour the Ness grid as before, not changing any of the contouring
parameters.
1. The physical size of the plot no longer fits in the workstation viewport
and so the system has shrunk it to fit.
2. The same text size (.5cm) has been rendered to show how it would
look on the plot with the specified size, therefore, it appears smaller on
the workstation screen.
[Figure: shrunken screen display; text is about 1/16 inch high]
Now return again to the Environment dialogs and change the map
scale for the GullFaks_gridding Display environment to 250
meters/cm.
Recontour the Ness grid, changing the following contour
parameters:
Distance to First Label = 1
Exercise
This shows the locus of the 1800m contour lines against the background of the
fault patterns.
Now bring up the contour menu again, and click Help. Point to
several parameter fields and make sure you understand the
meaning of the parameters.
Experiment with the other contouring parameters, such as Bolding
rate, Contour label rate, and Distance to first label.
Toggle on Display Faults and click Set Parameters. This opens the
Parameters for Displaying Faults dialog box.
2. In the Set Screen Display Margin section, set the Left margin to 6
and the Right margin to 2, and then click OK.
3. Save as a new map set called Ness_shade, and erase the screen.
Exercise
More Basemapping
We will finish this exercise by creating a map using some of the basemapping
tools which we have not yet tried.
3. Click the Display basemap icon to open the Basemap Display dialog
box.
Toggle on Map Set and select Ness_shade from Pick Sets.
Click Apply to display the map.
Toggle on Border.
Toggle on Labels and click Set Parameters.
Set X: label size and Y: label size to .2.
Set X: label increment and Y: label increment to 5000.
Click OK.
Toggle on Map Title and click Set Parameters.
Bottom margin title = Ness Structure and
Surrounding Surveys
Bottom margin text size = .5
Bottom margin X position of text center = Computed
Bottom margin Y position of text center = 1.5
Left margin text size = .5
Click OK.
4. Click Apply on the Basemap Display dialog box. Your display should
look similar to the following figure, which shows some of the graphics
off the workstation screen. We will remedy this in a minute.
5. Back in the Basemap dialog, toggle on Extended Data and post the
2D seismic location set, mm_2d_gullfaks_shtpt, just as we did for the
previous basemap. Click OK.
6. Click the Reveal all graphics icon to expand the x,y box.
[Figure: title block showing Top of Ness, Depth/M]
In the next figure, you see the names of the fields where you enter the
information for each of the locations in the title block.
10. Position the projection block similarly to the title block, in a location
of your choice.
12. Click the Save currently displayed map icon, and save the display to
a new map set called Ness_Overview.
Overview
The purpose of this chapter is to demonstrate the use of inverse interpolation
and control point arithmetic. The workflow will involve computing the error,
if any, in the Ness structure grid at certain marker locations which we'll load
from an ASCII file shown below. The file contains Well Name, Symbol Code,
and Symbol Color information as well as X, Y, Z data.
Display Ness_tops
1. Clear the screen and go to Display > Select Environment to verify
that the active Display environment is GullFaks_Gridding.
Note that the symbol you pick in this case represents only the default
symbol because the symbol codes were already loaded in this
particular data set. However, in case a well did not have a symbol code
in the ASCII file, the one selected here will be used.
5. Toggle the second field, z1, to red in the dialog box. This will be the
Z-value in each well. Choose parameter values which seem appropriate.
6. Toggle the Text1 field to red. This will be the well name. Choose
appropriate parameter values.
9. When the wells appear on the screen, use the Rubberband zoom-in
icon to view details of a few of the wells.
10. Click the Un-zoom icon, and return to the full area of interest.
Exercise
2. In the ensuing dialog box, toggle on Results to new Z Field, and type
in grid_value in the Selection box.
What you see is a listing of all stored data for the data set Ness_tops.
Across the page you see sequence number, X, Y, original Z, grid Z
(which we just calculated), symbol code, symbol color, and borehole
name for each well.
Note that some of the grid Z values are null (INDT). These are wells
which fall outside the area of the grid, and a value could not be
computed.
Overview
In this chapter, we will turn our attention to another step in the workflow
whose culmination will be the computation of oil in place in a reservoir.
[Figure: reservoir with O/W contact between Fault A and Fault B]
We have already discussed the importance of defining the fault zones when
volumetrics is the workflow focus, and we should have already accommodated
this requirement when gridding the top and base structures as depicted below.
[Figure: Horizon 1 and Horizon 2 with fault zones]
We are careful when creating grids to be used in volumetrics to define all fault
zones in each horizon. There are many techniques for doing this as discussed
in a previous chapter, and not all of them require an actual fault surface. The
definition, during gridding, of all fault zones should be done for volumetrics
whether there are any sealing faults or not. It simply guarantees that the top
and base envelope will be continuous and that the resulting isochore will not
have any holes in it due to undefined fault zones.
There are many algorithms to choose from when gridding faults, just as there
are when gridding any data. One of the differences between typical horizon
interpretation data and fault interpretation data is that the fault data tends to be
sparser and of a lower quality. For this reason, it is not always possible to
obtain a good model of the fault on the first iteration.
Try the Convergent algorithm first, or any other algorithm you choose. If you
like the resulting grid and it honors the data, then that's probably as far as you
need to go.
In the GeoQuest modeling system called Framework 3D, faults are gridded
many at a time, and a specific algorithm is used there which has a high
probability of creating a good grid on the first try. This algorithm can be
duplicated by several CPS-3 procedures and embodied in a macro. There is
another algorithm, already put into a macro, which uses yet another
technique. Both of these are discussed below.
Both of these techniques work well for sparse or noisy fault data. Be aware
that the output grid may not tie to all of the data points, but the method works
well for fault input data which is either very noisy or very sparse. Below is a
description of it.
This is a technique for gridding fault data which may not be necessary for all
fault data sets, but which is a very useful alternative to Convergent gridding.
The origin of this gridding technique is the realization that most fault surfaces
have strong linear components in the direction of dip. With typically sparse
data, strong linear trends are not always honored by the Convergent
algorithm, nor ingrained in the fault cut data set.
Slope Method
In FW3D, a new fault gridding algorithm for GF4.0 has been installed which,
in most cases, gives better results than the Trend Method. It may not be in the
CPS-3 set of System Macros, depending on the version of your software, but it
works like this:
Use the data you have to create a fault surface using the Convergent
gridding algorithm for a grid which is 4 times as coarse as the desired
final grid size.
Use the Control Point operations to compute slopes at all control
points so that you have an augmented control point set containing
X, Y, Z, dZ/dX, dZ/dY.
Use the augmented control point set to create the fault grid at the final
grid size.
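The slope-augmentation step above can be sketched generically in Python/NumPy. This is an illustration of the idea, not the FW3D or CPS-3 implementation; nearest-node sampling of the slope grids is an assumption.

```python
import numpy as np

def augment_with_slopes(cx, cy, cz, coarse, x0, y0, dx, dy):
    """Append dZ/dX and dZ/dY, sampled from a coarse grid, to X,Y,Z
    control points, giving rows of X, Y, Z, dZ/dX, dZ/dY."""
    # np.gradient along axis 0 (rows = Y) and axis 1 (columns = X)
    dzdy, dzdx = np.gradient(coarse, dy, dx)
    # nearest grid node to each control point (assumed sampling rule)
    i = np.clip(np.rint((np.asarray(cx) - x0) / dx).astype(int),
                0, coarse.shape[1] - 1)
    j = np.clip(np.rint((np.asarray(cy) - y0) / dy).astype(int),
                0, coarse.shape[0] - 1)
    return np.column_stack([cx, cy, cz, dzdx[j, i], dzdy[j, i]])
```

The augmented control point set then drives the final gridding run at the fine grid size, carrying the strong dip-direction trends of the coarse surface into the final fault grid.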
Now, let's review the geometry of the Gullfaks faults and horizons, decide on
the location of the reservoir to model, and grid those fault surfaces we need
for incorporation into the structural envelopes.
[Figure: cross section showing the Bunnkritt unconformity and the Tarbert
and Ness horizons between Fault 2 and Fault 4]
For the volumetric exercises, we will focus on the interval between the
Tarbert and the Ness between Fault 2 and Fault 4, which we will assume are
sealing faults. Notice that the top envelope of the reservoir is defined both by
the Tarbert and, in its erosion zones, by the Bunnkritt unconformity. In a later
chapter, we will also introduce the oil/water contact.
Exercise
3. Click Data Types. In the next dialog, under Containers, select Faults,
and under Representations, select Fault Cut Sets, and click OK.
4. Click GeoFrame Data, and in the next dialog, set the Vertical
Domain to Depth.
5. In the left panel, select the F2 fault. You may see several items appear
under the Representations panel on the right, but highlight only the
row saying Fault Cut Set.
6. Click F4 in the left panel and on the right, highlight the row saying
Fault Cut Set, then click OK.
7. Back in the GeoFrame Link dialog, highlight both of the items in the
right panel, and then click Run.
These sets will be stored in the CPS-3 dsl as data sets named F2_Depth and
F4_Depth.
Exercise
1. If you like, display the fault cuts for the F2 and F4 faults. As seen
below against the background of the three surveys we have used, you
can see that each runs North/South with F2 on the left and F4 on the
right. It is not necessary to replicate this figure, only display the faults
in your current Display environment.
In a later chapter, you will learn how to create macros. Here, we simply want
to introduce you to how they can be used. The macro you will run was created
in CPS-3, then modified by adding Macro Command Language, which
allows prompting, substitution, and other useful features inside of macros.
Exercise
1. Click the Select environment icon, and set both the Modeling and
Display Environments to gullfaks_gridding.
5. Click on the list widget next to it, and select Fault Gridding. This will
cause the macro named GridFault.mac to appear under the Macros
panel. Highlight it, and you will see the description of the macro. This
macro represents one of several methods for gridding fault surfaces
and seems to work well with the data we have.
7. When the Select a Data Set dialog pops up, select the data set
F2_depth, and then click OK.
8. When the Type output surface name dialog pops up, type in the
name: F2_fault.
9. When the Enter an slm... dialog pops up, enter 2500.00, and
the macro will begin execution. When finished, it will display the
contours for the fault grid and the data points as below.
[Figure: contours of the F2_fault grid]
This is a reasonable-looking fault surface, and so we'll continue and repeat the
macro for the second fault, F4_depth. Specify an output name of F4_fault,
and an slm of 2500.
[Figure: contours of the F4_fault grid]
1. Click on Utilities > Sets > List_Manage_Sets, and set the filter so that
only grids are listed.
8. Repeat this process for the F4_fault surface, but choose F4 as the
Surface Name
Overview
In this chapter, we will continue with our modeling work and introduce you to
a simple quality control tool which is useful for understanding the relationship
between two surfaces.
We will discover something about the Ness and Tarbert horizons, and will
have to make some adjustments in the Ness.
We will also discuss the procedure for generating traditional cross sections
using CPS-3, which are useful when trying to understand the spatial
relationships among many surfaces.
Exercise
Geologically, the Tarbert and the Ness are separate events and their grids
should not cross. We worry about this for two reasons:
1. Relative to their grid spacing (50m.), the Ness and Tarbert are not very
far apart (about 100m), and have many faults.
To begin this quality-control procedure, simply subtract the two grids to create
an isochore. Then color-shade only the negative portion of the isochore grid
which, if we subtract the Tarbert from the Ness, will reveal where the Ness
deviated above the Tarbert.
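The node-wise logic of this check is straightforward; below is a minimal NumPy sketch, with the grids as 2-D depth arrays and the names illustrative rather than CPS-3 identifiers.

```python
import numpy as np

def crossing_mask(ness, tarbert):
    """Isochore = Ness - Tarbert (depths, positive down); negative
    nodes mark where the Ness has crossed above the Tarbert."""
    iso = ness - tarbert
    return iso, iso < 0
```

Color-shading only the True part of the mask reproduces the negative-portion display described above, and the magnitude of the negative values tells you how severe each crossing is.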
After displaying a border and labels, your map should resemble the one below.
Note that most crossings occur where the fault zones of the two horizons
overlap. This is an indication that the interpretation along these zones should
probably be improved, since these horizons are already so close together.
When horizons are so close, this is also an indication that a more rigorous
method of filling in the fault zones might be required.
The larger crossings in the corners of the map occurred even though we made
use of some digitized contours to control the Tarbert horizon, which we felt
was diving too steeply to the Southeast. These crossings are all caused by lack
of data in one or the other of the horizons and might be mitigated somewhat
by more careful interpretation, some data editing, and some changes in the
gridding parameters.
This macro was designed to remove discrepancies which were much smaller
than the ones we have, but in this training environment, the data is designed to
demonstrate principles of operation.
One final step remains, and that is to give the output grid a final smoothing.
This is recommended, but it is not part of the macro we ran.
3. Click Operations > Grid > Smooth, and set the following parameters:
Set Input Grid = Ness_conformed
Set Faults = mm_Ness
Set Polygon = null
Set Output Grid = null
Smoothing Operator = Biharmonic
Max # Passes = 1, and then click OK to start the smoothing.
We will leave it to you to verify that the overlap zones have diminished
considerably by inspecting the z-range of the diff grid, which you can
recompute by subtracting the Ness_conformed grid from the Tarbert.
4. Now delete the Ness grid, and rename the Ness_conformed grid to
Ness.
Exercise
3. Clear the screen and display a Border with Labels. Use a label
increment of 2000.
8. Using MB1, click the first point in the polyline on the left border at
approximately Y = 6790150. Click the second point at the same Y-
location on the right border. If you need to delete a point, click MB3.
9. Answer Yes to Create another subset? Give the second polyline the
name Lower_EW, and in a similar fashion as above, digitize a line
from left to right at approximately Y = 6786200.
[Figure: baselines at Y = 6790150 and Y = 6786200]
13. Erase the screen, display the fault set mm_Tarbert, and the polygon
set Baselines.
Note that when these baselines are used in the profiling procedure, the
left side of the profiles corresponds with the first point digitized.
Click on Select All, which selects all baselines in the set, and will
cause a profile to be drawn through each.
Click OK.
3. You can Add each grid separately, or highlight all at once by using the
<Control> key as you pick.
Click on Set General Display Parameters and set the Large Text
Size = 2, Small Text Size = .1, and Symbol Size = .2. Click OK.
Click on Vertical Profile Limits and set the Axis Limitation to
XSections, then OK.
Click Primary Options to return to the top dialog.
7. Click Zoom Profile in the small dialog associated with the profile
display, and zoom into upper right corner, and other parts of the
display.
8. Click Do Next Profile to see the profile for the next baseline, then the
next, and click Close to exit.
Overview
The first step in computing oil in place within an interval is to define the top
and the base of the oil-bearing rock. If the top and base horizons were flat
surfaces, cleanly defined across the entire area, then the gross rock thickness
could be computed by simply subtracting the two surfaces. However, we must
consider the common case where both of these surfaces intersect, or at least
onlap or baselap an unconformity or other bounding strata. In addition, we
will also consider the common condition where a fault surface seals the
envelope along one of its boundaries as well. These surface-to-surface
interactions mean that we must perform other mapping operations to create an
envelope which surrounds only the oil-bearing rock in this interval.
Since we must also account for the presence of water and gas within the
structural envelope, one of the last steps will be the integration of the gas/oil
and oil/water contacts into the final envelope.
Here, we will outline all the steps required to compute a volumetric envelope.
These steps should also work for any reservoir that you encounter outside of
class.
Assumptions:
Z-units are assumed to be in depth
Fault zones associated with the top and base structure are assumed to
be completely gridded and integrated in the structure grid.
The goal of these operations is to end up with a top envelope and a base
envelope which cleanly intersect along their edges, but do not overlap (are
not coincident). This method does not minimize discontinuities in slope.
Create the Top Envelope:
3. If the top intersects a gas/oil contact, merge the two, retaining the
deeper portions of both.
Create the Base Envelope:
6. Merge the result of step 5 with the oil/water contact, if any, retaining
the shallower portions of both.
7. Subtract the top envelope, the result of step 3, from the base envelope,
the result of step 6, which will give a final positive isochore in the
reservoir and a negative isochore everywhere else.
A visual symptom of isochores which have been clipped to zero is ragged
contours along the zero line. See the example below, which exhibits a wobbly
zero contour in the north.
Figure 18.1 Isochore clipped to zero displaying ragged contour along zero
line
In this example, the clipping was not explicitly performed. It was the result of
premature merging or truncation of surfaces, as is explained below.
The profile below shows a top envelope and a base envelope which are the
origin of the isochore contoured above. There is a section along the profile on
the left where both surfaces are coincident. This is what is causing the flat
zero area in the isochore, and the loss of the true location of the intersection.
The coincidence in the envelopes was caused when the two original horizons
were truncated along an unconformity at some step along the way. While
these envelopes are geologically correct, they are not formed to gain the best
results from volumetric calculations.
Figure 18.2 Top envelope and base envelope of isochore in Figure 18.1
As you can see, the thickness grid resulting from the subtraction of two
faulted structure grids, is, itself, faulted, or at least discontinuous in z in the
fault zones. It requires the boundary sets of both of the structure grids to
separate one discontinuous zone from another, and to allow the algorithm to
compute the most accurate results. For this reason, we will prepare fault sets
to use during the volume calculations for both intervals. Each prepared fault
set will be the combination of the fault traces for the top and bottom of the
associated envelope. We will do this below with a simple Copy/Merge
operation for each isochore.
It should be noted that if the fault zones are large, fault surfaces should
normally be used as part of the structural envelope, if available, and if the
faults are sealing.
Starting just to the right of the sealing F-2 fault, at the Tarbert horizon, we will
examine each intersection which occurs, noting which of the two intersecting
grids is the stratigraphically LOWEST to the right of the intersection. The one
which is lower then becomes the upper boundary. For example, the first
intersection with the Tarbert is along the 2100 g/o contact. Moving to the
right, the g/o contact then intersects the Unconformity, which now becomes
the boundary. Continuing on in this manner, we can see that, excluding the
faults, the top of the reservoir is formed by sections of the Tarbert, the 2100
g/o contact, the Unconformity, and the F-4 fault.
Therefore, it is these four grids which we must merge to form the top.
[Figure: profile showing the F-2 and F-4 faults, the unconformity U, the 2100
g/o contact, the Tarbert, the 2200 o/w contact, and the Ness]
Merging takes place two grids at a time, using a CPS-3 logical operation
which will take the stratigraphically lower value of the two grids at each node
location. When one grid does not exist, the output takes the other value.
The operation highlighted in the dialog above is the one we want for merging
the top envelope components, since we are in depth. When units are in depth,
the lower component will have the higher depth values.
Luckily, it does not matter in which order these operations occur; the
result will be the same. Let's merge these components in the following order:
Merge the Tarbert with the Unconformity
Merge the result with the G/O contact
Merge the result with the F_4 fault
See each result below
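In NumPy terms (a conceptual sketch, not CPS-3 internals), these depth-domain merge rules map onto NaN-aware maxima and minima: np.fmax and np.fmin keep the defined value where one grid is undefined, matching the rule that the output takes the other value, and their commutativity is why the merge order does not matter.

```python
import numpy as np

def merge_lower(*grids):
    """Top envelope in depth: node-wise stratigraphically lower
    value = larger depth; NaN (undefined) nodes are ignored."""
    return np.fmax.reduce(grids)

def merge_upper(*grids):
    """Base envelope in depth: node-wise stratigraphically higher
    value = smaller depth; NaN nodes are ignored."""
    return np.fmin.reduce(grids)
```

Subtracting the merged top from the merged base then yields the isochore, positive inside the reservoir (in depth, base - top > 0 wherever the base is deeper than the top).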
[Figures: the three successive merge results, each showing F-2, F-4, U, the
2100 contact, and the Tarbert]
Now for the bottom of the envelope. We can see that its components are the
F_2 fault, the Ness horizon, and the O/W contact.
In the same manner, we can merge these components, using a different logical
operation, one that takes the minimum (the shallower, smaller values in depth)
from the two grids at each step. We'll merge in this order:
Merge F_2 fault with Ness, giving the result below
[Figures: merge results showing F-2, F-4, the 2200 contact, and the Ness]
The next step, of course, is to make sure that the top envelope and the base
envelope cross at the edges, so that their subtraction will result in positive
isochore where needed, and negative isochore everywhere else.
[Figure: top and base envelopes (Tarbert, U, 2100 contact, 2200 contact, Ness)
between F-2 and F-4, with the 0.0 line of the resulting isochore]
We see that top and base envelopes cross well, but we see some characteristics
which may not be what we want. We note the small negative component in the
isochore in the middle of the map, but realize that this will not cause any
problems and can be disregarded. However, the top and the base of the
isochore have three areas where they are coincident. We would like to avoid
this, if possible. Therefore, when the isochore map is created, we must
examine the extent of these resulting flat zero areas. They may be small and
insignificant, but it is also possible that the isochore may need some repair.
Let's see how these coincident areas occurred and what we can do about them.
[Figure: Cases A and B, showing the coincident zone between the Top, the
Base, and the bounding Fault]
In the case of a bounding fault grid which forms one side of the reservoir, a
potential problem exists in the isochore. If the Fault is merged with the Base
to form the bottom envelope, and the Top is left by itself to form the top
envelope as in Case A, then when they are subtracted, there will be a small
zone where the top envelope and bottom envelope overlap, causing a flat zero
area in the isochore grid, where, instead, there should be an abrupt change to
negative values. In many cases, these areas are very small and insignificant,
but, in others, must be addressed. One way to address this problem is to edit
the top envelope so that it does not coincide with the base envelope, but cuts
cleanly across it, as in Case B.
Overview
In this chapter, we will make use of the CPS-3 arithmetic and logical surface
operations to prepare the volumetric envelope for the Tarbert/Ness interval.
The reservoir is limited horizontally by the two sealing faults Fault_2 and
Fault_4. It is limited vertically by the base horizon Ness and top horizon
Tarbert, which is also truncated by the Bunnkritt unconformity. The oil/water
contact is at 2200 m and the gas/oil contact is at 2100 m, as shown below.
[Figure: cross section between Fault_2 and Fault_4, showing the gas/oil contact at 2100 and the oil/water contact at 2200]
We can get some idea of the shape of the bottom of the reservoir by
contouring the base surface, Ness, at the depth of the oil/water contact. This
contour line is shown in the figure below. We also know that the reservoir falls
between the two sealing faults, Fault 2 and Fault 4. Until we put all the pieces
together, it is not clear whether the reservoir will close in the South within the
map boundaries.
[Figure: map view of the oil/water contact contour on the Ness surface, bounded by Fault 2 and Fault 4]
Note: It is very important that the instructor covers the lecture material for these
exercises, so that you will understand why certain operations are performed.
Refer to the lecture materials for this chapter.
Exercise
We will not make direct use of the CPS-3 Logical Surface Operations as in
the lecture notes. Instead, we'll use some easier-to-understand macros which
were designed for this purpose.
1. Click Macros > Execute, then toggle Show System Macros to red.
2. Select the Toplap.mac macro in the Macros panel and read the
description, then click OK to start the macro running.
Select Horizon = Tarbert
Select Unconformity = Bunnkritt
Select Output Grid = Tarbert-Bunnkritt; create a new grid name
and select Unknown as the Surface Type
Select Boundary = Tarbert-Bunnkritt; create a new boundary
name and select Unknown as the Surface Type.
Your display should resemble the map above showing the correlation
of the eroded zones (boundaries created by macro) and the distribution
of the interpretation.
5. As before, bring up the macro Execute dialog and then select the
System macro called ClipTOP_at_GO.mac, which will clip the result
from the first macro to the Gas/Oil contact.
Set Top Envelope = Tarbert_Bunnkritt
Set GOC grid = go_2100
Set Output grid = Tarbert_Bunnkritt_2100 (Surface Type =
unknown)
Set Boundary = Tarbert_Bunnkritt_2100 (Surface Type =
unknown)
The macro will generate a map similar to the one below. The white
line close to the boundaries can be eliminated by increasing the
color-shade resolution in the macro.
6. As before, bring up the macro Execute dialog then select the User
macro called a_SealTop.mac, which will seal the top envelope with a
fault surface.
Set Top Envelope = Tarbert_Bunnkritt_2100
Set Fault grid = F4_fault
The deep portions of the fault surface dominate the colors in this
display. Those of you who wish can verify the accuracy of this grid by
displaying a cross-section using the same baselines we used earlier.
Remove the other grids from the profile except the contact grids.
1. As before, bring up the macro Execute dialog and then select the
System macro called ClipBASE_at_OW.mac, which will clip the
Ness horizon to the Oil/Water contact.
Set Base Envelope = Ness
Set OWC grid = ow_2200
Set Output grid = Ness_2200 (Surface Type = unknown)
Set Boundary = Ness_2200 (Surface Type = unknown)
2. As before, bring up the macro Execute dialog and then select the
Project macro called a_SealBase.mac which will merge the sealing
fault F_2 with the result from the first macro.
Set Fault grid = F2_fault
Set Horizon or Base = Ness_2200
Set Output grid = Ness_2200_F2 (Surface Type = unknown)
Set Boundary = Ness_2200_F2 (Surface Type = unknown)
Now, we can simply subtract the base envelope from the top envelope to see
what the isochore looks like.
3. Click Operations > Grid > Multiple Grids > C=A-B
Set C = new grid named Gross_Isochore
Set A = Ness_2200_F2
Set B = Tarbert_Bunnkritt_2100_F4, then click OK.
4. Color-shade the Gross_Isochore grid, but only values above zero, and
use an increment of 5.0.
6. Click on Utilities > Sets > Statistics and view the Basic statistics for
the Gross_Isochore grid, in particular, the z-range.
Sets are merged with the Copy Sets command on the Utilities menu. Below is
how to merge the fault sets for the Tarbert and Ness into a single set.
1. Click Utilities > Sets > Copy, and toggle on Fault [F] and click the
Filter button.
6. When the dialog appears again, click Pick Set, and pick the
Ness_Tarbert set, then back in the Copying a Set dialog, toggle on
Append To Set and click OK.
The system copies (appends) the set Tarbert to the fault set
Ness_Tarbert.
7. Display the fault set Ness_Tarbert to see what the result looks like.
Overview
In this chapter we will learn how to gain access to the necessary reservoir
property data from GeoFrame so that we can compute oil in place according
to this simple formula:
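The formula referred to here appears as a figure in the original manual and did not survive extraction. The standard oil-in-place relation it presumably denotes, written with conventional volumetric symbols (our notation, not the source's), is:

```latex
\mathrm{OIP} \;=\; \sum_{\text{grid nodes}}
  h_{\mathrm{gross}} \cdot \frac{N}{G} \cdot \phi \cdot (1 - S_w)
  \cdot \Delta x \, \Delta y
```

where $h_{\mathrm{gross}}$ is the gross isochore thickness, $N/G$ the net-to-gross ratio, $\phi$ the porosity, and $S_w$ the water saturation; the per-node product is the Net Pay value, integrated over the grid cells.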
We'll discuss the origins of the Zone properties, as stored in GeoFrame, and
demonstrate the calculation of each of the property grids, as well as the
series of volumetric grids:
Net Isochore
Net Pore Volume
Net Pay
Finally, we'll demonstrate the use of the volumetric procedure and discuss the
report which it produces.
Let's assume that we wish to compute Oil in Place between two horizons
named Jakarta and Kobe. A typical GeoFrame workflow for computing the
required properties in ResSum is:
Load the appropriate well logs
Create the geological markers for the Jakarta and the Kobe, either in
WellPix or by loading them with the General Ascii Loader
In WellPix, define a lithozone between the Jakarta and the Kobe and
establish a Zone Version.
In ResSum, calculate the ratios and property averages for the layer.
When imported into CPS-3 via the GeoFrame link, each property value for a
particular Zone Version will take on characteristics similar to a set of
markers; that is, a set of scatter points based on the well paths. The X and Y
values for these property values do not sit at the top or base of the lithozone,
but, rather, in the middle of it as seen below, depending on the particular
computational geometry chosen (TVT, MD, ...).
[Figure: deviated well path through the lithozone, with the approximate property marker locations at mid-zone]
Typically, the following ResSum properties are the ones used in the
volumetric equation for CPS-3:
Net Thickness, Gross Thickness, Net Porosity, and Net Pay Water Saturation.
Each is retrieved as a scatter set from the GeoFrame database with the
GFLink, and then gridded. These property grids can then be applied to the
gross isochore with CPS-3 grid arithmetic operations.
In the example shown, note that the gross isochore (shaded area) covers only a
portion of our Area of Interest (AOI). When we look at the quality of each
property grid we create, we will not be concerned with areas which are not
within this pay zone. Within the pay zone, however, there are certain criteria
which must be met:
Porosity and Saturation values must remain between 0.0 and 1.0.
This means that we may have to change parameters, or even gridding
algorithms if too much slope is introduced into the grids by the
algorithm.
The grids must be completely defined within the pay zone. No holes
in the grid are allowed, otherwise, no volume can be calculated there.
As with other grids, the grid values must honor the values in the
wells.
The grids should be relatively smooth between the well data points,
and should not contain sharp discontinuities which are not
associated with the existing fault boundaries, if used.
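These criteria can be checked mechanically. Here is a minimal sketch (not part of CPS-3; the grid and pay-zone values are hypothetical) that flags range violations and holes inside the pay zone:

```python
# Grids as plain 2-D lists; None marks an undefined (null) node.
# pay_zone is True where the gross isochore is positive.

def check_property_grid(prop, pay_zone, lo=0.0, hi=1.0):
    """Return (row, col, problem) tuples for nodes inside the pay zone."""
    problems = []
    for i, row in enumerate(prop):
        for j, v in enumerate(row):
            if not pay_zone[i][j]:
                continue                      # blank outside pay zone is fine
            if v is None:
                problems.append((i, j, "hole in grid"))
            elif not (lo <= v <= hi):
                problems.append((i, j, "value out of range"))
    return problems

pay  = [[True, True], [True, False]]
poro = [[0.25, None], [1.30, 0.10]]
print(check_property_grid(poro, pay))
# -> [(0, 1, 'hole in grid'), (1, 0, 'value out of range')]
```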
Good property data for all lithozones is typically hard to come by, and
data from the wells is usually sparse. This type of data can be
challenging to model adequately. We see in the example below that the
data points for which we have property values barely reach into the
reservoir area.
If seismic attributes exist for which a correlation with property data can be
shown, then interval property grids can be improved with various techniques,
including the application Log Property Mapping (LPM), which can extend the
reliable coverage of property grids beyond the lateral limits imposed by the
well data.
Gridding Guidelines
With sparse data, we will probably get the best results using the Convergent
algorithm. When gridding property data, extrapolation will almost
invariably be required.
The only other issue to worry about is the value for the Starting Grid Interval
when using the Convergent algorithm. The rules of thumb we discussed in the
earlier gridding chapter can be summarized here for sparse data:
Remember, this number is not critical to determine the very first time. If it is
too small (pay zone not fully defined, or holes in the grid), then we'll simply
regrid and make it bigger the next time.
Items in boldface are the individual property grids. The Gross_Isochore is the
structural envelope thickness, which is simply the difference of the top and
base structural envelopes. The final Net Pay grid becomes the input to the
Volumetrics operation, along with any associated fault polygons and the lease
polygons.
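The chain of grid arithmetic can be sketched as follows. This is a simplification with made-up node values, not the CPS-3 implementation; None stands for a null node, and any arithmetic involving a null node yields null, mimicking node-by-node grid operations:

```python
def times(a, b):
    """Node-by-node product of two grids, propagating nulls (None)."""
    return [[None if (x is None or y is None) else x * y
             for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

gross_isochore = [[40.0, 16.0], [None, 8.0]]    # top minus base envelope
net_gross      = [[0.5,  0.75], [0.5,  1.0]]
porosity       = [[0.25, 0.25], [0.125, 0.25]]
sw             = [[0.5,  0.25], [0.75, 0.5]]    # water saturation

net_isochore    = times(gross_isochore, net_gross)
net_pore_volume = times(net_isochore, porosity)
one_minus_sw    = [[1.0 - v for v in row] for row in sw]
net_pay         = times(net_pore_volume, one_minus_sw)

print(net_pay)   # [[2.5, 2.25], [None, 1.0]]
```

Note how the null node in the gross isochore stays null all the way down the chain, which is why blanks outside the pay zone are harmless.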
All operations involve only grid arithmetic when applying the properties.
When we created the structural envelope, we used primarily grid logical
operations.
In all the logical operations which we have performed to shape the envelope,
and all the grid arithmetic we have performed to compute the Net_Pay grid,
we have not used the fault boundaries (traces) at all. The reason for this is
that the CPS-3 logical operations and grid arithmetic functions perform their
calculations node-by-node, vertically, on corresponding grid nodes of the
input grids.
If the volumetric input grid were a gross isochore, the results would simply
be volume of rock. However, having applied all of the required rock properties
to the thickness grid, the volumetric input grid is no longer in a simple
thickness domain, and the result on the volumetrics report will be oil in place.
We have an option to report the results in barrels or other units, if desired. A
separate report for each lease polygon is generated.
The output from the Volumetric procedure is a report such as the example
shown below.
Volume is the integrated volume between the grid and the base plane.
Flat Area is the flat area of that portion of the grid ABOVE or BELOW the
base plane. The area of the polygon (lease) is also a flat area.
Surface area is the curvilinear area of that portion of the grid ABOVE or
BELOW the base plane, and should always be equal to or greater than the Flat
Area.
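As a rough illustration of these quantities, the sketch below integrates a grid above a base plane by summing rectangular prisms (values are hypothetical, and CPS-3's Full Refinement algorithm is considerably more sophisticated):

```python
def volume_above(grid, base, dx, dy):
    """Integrated volume and flat area of the grid ABOVE a base plane."""
    vol = flat_area = 0.0
    for row in grid:
        for z in row:
            if z is not None and z > base:
                vol += (z - base) * dx * dy   # one prism per grid cell
                flat_area += dx * dy
    return vol, flat_area

grid = [[12.0, 8.0], [None, 4.0]]             # z-values; None = null node
vol, area = volume_above(grid, base=10.0, dx=50.0, dy=50.0)
print(vol, area)   # 5000.0 2500.0
```

The curvilinear Surface area would additionally account for the slope of each cell, which is why it can never be smaller than the Flat Area.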
Once these data are accessible, we will grid them in CPS-3 to create one grid
per property, then apply each to the Gross Isochore grid using Multiple Grid
Arithmetic according to the formula above.
As mentioned in the lecture for this chapter, the zone property values falling in
this lithozone are computed by ResSum, and are similar to geological
markers. For example, the porosity is a set of X,Y,Z points, where the Z-value
is an average porosity value in each borehole for the interval. Similarly, the
saturation and net/gross are sets of X,Y,Z points originating from the
boreholes, containing average values. X,Y locations of the strat markers will
fall along the borehole, midway between the geological markers.
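To make the midway placement concrete, here is a small sketch (the survey stations and marker depths are invented) that linearly interpolates X,Y along a deviated well path at the measured depth halfway between the two markers:

```python
def position_at_md(path, md):
    """Interpolate (x, y) at a measured depth along (md, x, y) stations."""
    for (md0, x0, y0), (md1, x1, y1) in zip(path, path[1:]):
        if md0 <= md <= md1:
            f = (md - md0) / (md1 - md0)
            return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))
    raise ValueError("md outside surveyed path")

# Hypothetical deviated well path as (md, x, y) survey stations.
path = [(0, 100.0, 200.0), (1000, 150.0, 200.0), (2000, 250.0, 200.0)]
top_md, base_md = 800, 1600          # markers bounding the lithozone
mid = position_at_md(path, (top_md + base_md) / 2)
print(mid)   # (170.0, 200.0)
```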
You will recall that zone property values from ResSum are available in MD,
TVT, TVD, and TST form. For our purposes, we can use either the TVT or
TVD. We'll start by accessing the property data from GeoFrame.
Exercise
In this project, however, the scatter set switch was not set, and the purpose
of this exercise is to show you how to use the GeoFrame Link to obtain
access to these property scatter sets in CPS-3.
2. Click on the Load from GeoFrame tab, then click Zones to open
GeoFrame Zone Interval Properties, then click Zone Version and
select Formation, and then click OK.
Note: In early versions of the training project, the output sets may be empty.
8. If the sets are not empty, go to Utilities > Sets > View
Contents/Statistics to display the statistics for the data created, to see
if the values are reasonable.
Tarbert_Net_Pay_Water_Saturation_TVD_Formation
Tarbert_Net_Reservoir_Porosity_TVD_Formation
Tarbert_Net_Gross_Thickness_Ratio_TVD_Formation
mm_TARBERT_NESS_Porosity
mm_TARBERT_NESS_WSat
mm_TARBERT_NESS_Net-gross
Exercise
3. Go to Utilities > System > Set Toggle Switches and make sure that
the toggle Show graphic entities when z is null is off.
mm_TARBERT_NESS_net-gross
mm_TARBERT_NESS_Porosity
mm_TARBERT_NESS_WSat
Even so, when the Ness_Tarbert fault set is displayed, you can see
that the reservoir extends across three fault blocks, and the data is still sparse
when compared to seismic distributions. Some extrapolation will still be
necessary even with this relatively good data distribution.
Now the question is which gridding algorithm to use. Recall that the
Convergent algorithm provides the best extrapolator available, so we should
use it. The lecture notes give a good general procedure to follow, but we will
depart from it here because we actually have a fair amount of data inside our
fault blocks.
1) Using a very small initial grid interval in the first gridding, we take
advantage of the Convergent's slope projection, but only close around the data
points. We will use the faults in this first gridding to make sure that these
initial values are not tainted by data from different fault blocks.
2) Copy the first grid as data for a second gridding, where we'll produce a
weighted-average solution in extrapolated areas instead of projecting values.
We won't use the faults in this gridding.
Overall, this will give a better-behaved property grid from sparse data for
which the fault blocks have been taken into consideration.
No matter how we grid the property, it must be defined in all areas of the pay
zone. We'll store a map set of the pay zone for comparative purposes:
3) Erase your screen and display only line contours for the Gross_Isochore.
Use light blue contours.
2. Use Fast Color-shading to render the grid, then display the data points
on top. You should see a grid similar to the one below:
The advantage of using the fault boundaries during gridding is that we get a
much more accurate model. Luckily, there is enough data in each fault block
to define almost all of the blocks.
The one exception is the small block in the North which is included in the pay
zone, but not defined in the property grid. This will become defined in the next
property gridding step. Note how the grid is tightly defined only around the
data. We controlled this with a small Initial Grid Interval.
Let's now continue with the second gridding for the final property grid.
5. Color-shade the resulting grid, using the faults for contouring, but not
for display, and display the map set PayZone on top.
This final porosity grid covers the area of the pay zone and looks to be
OK.
6. Now repeat this process for the net/gross and saturation data sets,
mm_TARBERT_NESS_net-gross and mm_TARBERT_NESS_WSat,
creating a similar grid for each: one called NetGross, the other called
Saturation.
The initial and final grids for each are shown below.
Exercise
As outlined in the lecture notes, we'll now use the CPS-3 multiple grid
operations to create the volumetric grids. We could use the multiple-grid
formula processor in CPS-3 to compute the final volumetric grid in one
process, but for sensitivity studies and for investigating results using different
Zone Versions for properties, we'll calculate the traditional intermediate grids
one at a time.
At this point, the most important things to verify about our three property
grids are:
1. Make sure that all three grids are defined within the area of the gross
isochore.
2. Make sure that none of the grids have flat, zero areas, or negative
areas. It's OK to be blank outside of the gross isochore, but no
property grid should be negative anywhere.
If the property grids do not fit these requirements, they should be regridded
with different parameters until they do.
5. Click OK. Use the statistics tool to check the Z-range of the output
grid. The grid should retain negative values from the gross isochore.
3. Click the Scan button (below the formula field) and a semicolon
should appear at the end of the formula. If it does not, you will see
"Syntax error" in the Status text field, which means the formula was not
entered correctly. Try again until the semicolon appears after clicking
Scan.
5. Press Return.
10. After the calculation is finished, click Save and provide the name of a
file in which to save the formula. (Don't forget to specify the file suffix
of .frm.)
12. Erase the screen and display contours for the Net_Pay grid, which
should resemble the map below.
1. Verify that this final grid is positive everywhere within its boundary by
doing the following:
Erase the screen
Contour the Zero Line of the Net Pay grid.
Color shade the Net Pay grid, but color shade values only
beginning at 0.0 using a contour interval of -10.
If any color-shaded contours appear within the reservoir boundary, this means
that one of the previous operations did not work as intended. The instructor
will help you determine which step of the procedure caused the problem.
Exercise
Here are two ASCII files, each containing the same basic information for our
lease polygons. The first file contains subset markers with the lease name to
mark the beginning of each subset. The second file contains the lease name in
every record, and CPS-3 can tell the end of one lease when the name changes.
Use whichever format you like for loading polygons.
k_leases_a.ply k_leases_b.ply
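As an illustration of the second layout, the sketch below splits records of an assumed "x y lease_name" form into polygons by detecting name changes, mirroring the behavior described above; the actual .ply record syntax may differ:

```python
def split_leases(lines):
    """Group (x, y) points into polygons; a new name starts a new lease."""
    polygons, current, current_name = {}, [], None
    for line in lines:
        x, y, name = line.split()
        if name != current_name:        # name change = next lease begins
            current_name, current = name, []
            polygons[name] = current
        current.append((float(x), float(y)))
    return polygons

records = [
    "100 200 LeaseA", "150 200 LeaseA", "125 250 LeaseA",
    "300 400 LeaseB", "350 400 LeaseB", "325 450 LeaseB",
]
print(sorted(split_leases(records)))   # ['LeaseA', 'LeaseB']
```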
The file we will actually use today is named mm_north_leases.ply, and can
be found in your CPS-3 dsl. It resembles
the first file above, in that it contains CPS-3 subset markers to identify each
lease polygon.
3. Click New to create a Polygon set called Leases as the Output Set.
Note that portions of some of the polygons fall outside the border of
the Display environment. This will not affect the volumetric
calculations.
7. Display the polygon set again, but make the following settings:
Line color = red
First polygon of set to use = 2
8. Display the polygon set again, but make the following settings:
Line color = yellow
First polygon of set to use = 3
9. Display the polygon set again, but make the following settings:
Line color = light blue
First polygon of set to use = 4
10. Now display the whole polygon again with a black line, setting First
Polygon = 1, Number of Polygons = All, and Polygon Fill Method =
NoFill.
12. Toggle Map Set to red, and select the Lease_Names map set for
display.
Exercise
Follow these steps to compute the oil in place from the Net_Pay grid.
North_Leases
Polygon Set
Looking at the CPS-3 Status Information window, the report for the last
polygon, End_o_Rainbow, should be visible.
The other sections are useful only if your volumetric grid represents a
structural model (topography) and you want to compute differences between
volumes above a certain elevation and below that elevation which is useful in
earth movement applications.
Note that the Volume reported, 42.1967, is in MMBL, and all Areas reported
are in acres. Your results will probably be slightly different due to the
multitude of steps required to get to this point.
Had we used something other than Full Refinement for the volume algorithm,
the results might have been somewhat less accurate, but would have run a bit
faster. When accuracy is the main consideration, use Full Refinement.
Overview
The purpose of this chapter is to introduce students to the interactive editing
tools available in CPS-3:
Model Editor
Color Palette Editor
Map Editor
The exercises suggested here are relatively unstructured and the recommended
procedure is for the instructor to present the lecture notes for each topic, then
demonstrate the facilities, and then let the students follow suit after each
demonstration. Student experimentation is encouraged with help from the
instructor.
The CPS-3 Model Editor is a powerful tool that lets you edit gridded surface
models, along with data, faults, polylines, and cultural features. Surfaces are
edited though manipulation of contour lines and/or data values and fault
locations. After the edits are complete, the surface is then regridded using the
changed contours and/or data.
You may also change a grid by simply editing the node values directly in a
variety of ways.
The Model Editor gives you greater control during modeling, and allows you
to focus on specific areas for modification. It lets you integrate your knowledge
and interpretation into the modeling process.
Aside from editing the gridded model itself, the Model Editor has many
provisions for making changes to your Data sets, Fault sets, and Polygon
sets.
Independent Mode
The Model Editor is launched in independent mode from the following two
locations:
In the GeoFrame Visualization Catalog, click on Model Editor.
At the bottom, designate on which monitor you want the Model
Editor to be launched.
<machine_name>:0.0 will launch it on the left monitor; :0.1 will
launch it on the right.
Click OK.
In the CPS-3 Main Module, click Tools > Model Editor.
When launched in open set mode, the Model Editor assumes that you want it
to load those sets which are currently open in your CPS-3 session. Look at
your CPS-3 Status Information window and it will show you the currently
open sets at the top of the window. The following dialog box appears in open
set mode, and you can change any of the set names before the Model Editor
launches and loads them.
The Model Editor is launched in open set mode in the following two
ways:
In the CPS-3 Main Module, click Operations > Surface/Model
Editor.
Click on the Model Editor icon.
We recommend that you read this documentation if you intend to use the
Model Editor to any degree.
Exercise
Model Editor
3. Cut, Draw, and Extend contours as required to flatten the area, then
regrid, recalculate contours, and Save the grid.
7. Add new data points, delete points, change z-value, digitize a new
closed polygon, delete points inside polygon.
Tip: The Map Editor performs only graphic editing, and is limited to moving and
creating simple objects and modifying their attributes, such as color, font, size,
line style, width. The main use of this application is to clean up maps which
have already been created. The Map Editor is NOT designed for editing
contours, data sets, fault sets, polygon sets, or grids. These should be edited in
the CPS-3 Model Editor. Changes made in the Map Editor will only be
reflected in the CPS-3 Map set.
Pull-down menus
The pull-down menus (File, Activate, Mode, View, and Options) are located
across the top of the Map Editor window. Most of the functions under the
pull-down menus are also available as icons in the tool bar. Here are some of
the functions:
File
Load: you may load up to 5 map sets for each session;
Save: save the edited map set;
Save as: save the edited map set under a different name;
Unload: unload a map set from the session;
Exit: exit the session.
Activate
Map set: same as the icon (ACTIVATE:) Map. Among the loaded
map sets (up to 5 sets), only one of them is active. If not picked here,
the last one loaded to the session will be the active map set, which is
the one on display.
Editable Layers: same as the icon (ACTIVATE:) E Layers. All
elements/subsets of the active map set are listed here as separate
layers. You may turn ON and OFF any of them to allow selected ones
to be edited. The layers you turn off may NOT be edited.
Viewable Layers: same as the icon (ACTIVATE:) V Layers. All
elements/subsets of the active map set are listed here as separate
layers. You may turn off certain layers to hide them temporarily if you
do not want them to be on display.
Mode
Under Mode, you will see three groups of functions, Select, Add and
Composite. Functions under each group are also iconized in the tool bar.
Select Mode
Browse: This function displays the x,y location of the cursor as you
click the mouse button.
Move: This function allows you to move an object. Click on the Move
button, and click on the object that you want to move. You will see a
white frame enclosing the object you just picked. Leave the cursor
inside the white frame and click the mouse button again to pick the
white frame up and drop to the place you want to move to. You can
only move one object at a time.
Delete: This function allows you to delete an object. Click on the
Delete button, and click on the object that you want to delete. You will
see a white frame enclosing the object you just picked. Leave the
cursor inside the white frame and click the mouse button again to
delete the object. You can only delete one object at a time.
Copy: This function allows you to copy an object. Click on the Copy
button, and click on the object that you want to copy. You will see a
white frame enclosing the object you just picked. Leave the cursor
inside the white frame and click the mouse button again to pick the
white frame up and drop to the place you want to copy to. You can
only copy one object at a time.
Attribute: This function allows you to change the graphic attributes of
an object. To change attributes, click on the Attr button, and click on
the object you want to change. You will see a white frame enclosing
the object that you just picked. Leave the cursor inside the white frame
and click the mouse button again; an attribute dialog box will pop up.
If the object you picked is text, you can change the text content, size,
rotation angle, font, color, quality and justification. If it is a symbol,
you will be able to select a different symbol, along with size, color and
rotation angle. If it is a line, you may change the line style, width,
color and smoothness. After you input the new attributes, click on the
Apply button. The changes will display on screen. You can only
change attributes on one object at a time.
Note: Changing of text font will not show up in the Map Editor display.
Add Mode
This feature allows you to add text and symbols to a map set.
Text: To add text, click on the (ADD:) Text button, which displays a
dialog box that allows you to specify text attributes. After setting the
text attributes, click on the Apply button. Then move the cursor to the
Map Editor window; a white frame will appear. Position the white frame
where you want the text to be, then click the mouse button again to
display it.
Symbol: To add a symbol, click on the (ADD:) Symbol button, which
displays a dialog box that allows you to pick a symbol from the library
and specify its attributes. After setting the symbol attributes, click on
the Apply button. Then move the cursor to the Map Editor window; a
white frame will appear. Position the white frame where you want the
symbol to be, then click the mouse button again to display it.
Composite Mode
View
Refresh - redraws the contents of the screen.
Zoom - provides zoom functions: In, Out, Extents.
Options
Icon Bar - toggles the icon bar on and off.
Info Window - toggles the info window on and off.
Set Background Color - changes the background color of the canvas.
Set Line Edit On - edits a line.
Set Point Edit On - edits a single point.
Set Quick Screen Repair - sets quick refresh.
Exercise
Map Editor
After presentation of the lecture notes (Topic 25, Map Editor), these exercises
are suggested:
1. Import a map
3. Save map.
Exercise
1. Create a simple palette for use in error mapping. Color 1 = red, Color
2 = white, and Color 3 = blue.
Overview
One of the most useful features in a mapping package is the ability to perform
a series of predefined steps at different times or under different conditions.
CPS-3 has had the ability to make macros for some time, but macro
extensibility has not been generally possible. Recently, however, macros were
made much more powerful with the introduction of a flexible macro
command language, which is an addition to the current macro structure and
allows prompting, variable substitution, and other features that make macros
more powerful than before.
Creating Macros
Macros are built, one command at a time, interactively in the CPS-3 Main
Module, as you step through the dialog boxes performing the actions which
you want to be stored in the macro. You can choose to only go through the
motions when creating a macro, or you can literally execute the steps as you
create it. By executing the steps as you go, you can catch errors which you
would not see if you only go through the motions; however, both methods
have their uses.
Once created, the macro is placed in the Project macro directory, which is the
project's CPS-3 dsl.
Macro Grouping
For all your macros, you have the option to organize them into meaningful
groups. Read how to do this in the online CPS-3 User Manual. You can
define group names in a simple file, and assign each macro to a different
group. As you access the Macro functions from the Main Module menu, you
then have the opportunity to select your macros by group.
Running Macros
After you create a macro from CPS-3, you can run the macro interactively, or
in the background while you continue other mapping tasks. Macros enable
you to automate common, repetitive mapping tasks, as well as mapping tasks
that take a long time to complete.
Below, you'll see how to modify a fixed-name macro into one which prompts
for input.
The new language provides you with a variety of abilities within any macro
you create with CPS-3. These abilities are listed below.
Establish named variables in a macro for your own purposes.
Variables can contain characters, numbers, set names, or colors.
Perform arithmetic and logical operations between variables.
Control the flow of the macro with these operators:
While statement
If - Then - Else
Add interactive prompting inside the macro to establish variable
values.
Perform substitution operations on set names and parameter values
before the macro is run.
Spawn detached processes, or spawn a process and wait for it to finish.
Note: For detailed descriptions of each of these language facilities, please refer to
the chapter about Macros in the CPS-3 Users Manual.
This file could be inserted as it is into a macro if you wanted to prompt for a
Data set and an output Surface set name. As you can see, this file could be
used as a simple template: you can change the names, or even add other sets
to be prompted for. Refer to the on-line Users Manual for details.
prompt activates a single prompt line in the dialog for one variable
In general, variables should be declared as set, if they will be used for the
selection of existing input sets. Variables declared as string are set by the user
only by typed response.
The next phase in making a macro universally useful is to find all places lower
in the macro where the fixed-name sets are specified, and change them to the
appropriate variable name. For example, if the original unedited macro
contained the line
M1OPEN DA "mysetname" UNKNOWN
which opens a Data set named mysetname, we should replace the fixed set
name (including double quotes) with the variable name $DATA1 (including
the Dollar sign):
M1OPEN DA $DATA1 UNKNOWN
After all fixed set names have been replaced as shown with the proper variable
name, the macro will have been converted into a universally useful tool which
can be passed around for anyone to use in their own project.
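The substitution idea itself is simple. Below is a rough illustration in Python (the M1OPEN command comes from the text above; the substitution mechanics are our own simplification, not how CPS-3 implements it):

```python
import re

def substitute(macro_line, variables):
    """Replace each $NAME token with the value supplied at the prompt."""
    return re.sub(r"\$(\w+)", lambda m: variables[m.group(1)], macro_line)

line = 'M1OPEN DA $DATA1 UNKNOWN'
print(substitute(line, {"DATA1": '"Tarbert_markers"'}))
# -> M1OPEN DA "Tarbert_markers" UNKNOWN
```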
F1DRAW n
where n is an environment number in the current session file. The only way to
determine n is to view all environments in CPS-3 and identify it in the GUI.
F1MODL n
The macro is read and then converted to a 6.0 macro, having a naming
convention of CPS60_<old name>. During this process, older native
commands such as F1WINE, F1GINC, F1ZTYP, F1UNIT, and F1VSCL,
which were previously used to establish a primitive mapping environment, are
commented out.
After a macro has been converted, it contains a header record at the beginning
of the file which looks like
!CPS_VERSION_6.0 T=98
which means that CPS-3 will read and execute the macro without assuming
that it needs conversion. All macros created in GF3.5 and later will contain the
same header, so that no conversion will be attempted.
Overview
If a permanent copy of the screen contents is desired, use the Save currently
displayed map icon to write screen memory to a permanent CPS-3 map
set.
Saved map sets retain all displayed graphic components and their attributes. In
addition, subset markers are written into the map set so that the graphic output
of one process can be distinguished from another. These layers of a saved map
set can be deleted, or rearranged, if desired. Use the Manipulate current
map layers icon to invoke the Map Layer Manager.
We will perform an exercise later in the course to illustrate how map layers are
rearranged and deleted.
It is the last operation, clipping, which provides the main focus for this
chapter. We will discuss conditions when graphic displays become clipped, or
when they are merely hidden.
Note: The basic 2D clipping rectangle is defined by the X and Y extents of the
Volume of Interest. However, these can be enlarged by the temporary
specification of a Top, Bottom, Right, and Left margin, if desired.
For clipping purposes, objects to be displayed in CPS-3 fall into one of two
classes - Inside objects and Outside objects.
Inside
Inside objects (see the following) are clipped to the Volume of Interest
during display.
Data
Faults
Polygons
Surfaces
Maps (optional)
Any graphic manifestations of the CPS-3 set types shown above, such as point
display, polylines, data value display, or contouring, are also considered to be
Inside objects, and any graphics generated outside the X,Y box will be
discarded.
Upon displaying a Map set, you will be given the choice of treating it as an
Inside or Outside object, as desired.
Outside
Outside objects are NOT clipped during display. The following are
considered Outside objects:
Maps (optional)
Borders,
Labels
Title blocks
North arrows
In general, graphic output from all Basemap dialog boxes, except
Display Set and Lines and Annotation sections, is considered
Outside graphic objects.
When Inside objects are displayed on the screen, they are clipped exactly to
the active Volume of Interest, and then stored in Screen Memory. This
occurs whether the user is zoomed in or not.
When Outside objects are displayed on the screen, they are NOT clipped, and
are stored in their entirety in Screen Memory, even though they may not be
visible. Again, this occurs whether or not the user is zoomed in. Even though
Outside graphics may not fall within the visible area of the display, they can
be made visible by creating and activating a Display environment whose
display volume is large enough to include the data.
Alternatively, clicking the Reveal all graphics icon, will show all
graphic elements currently in screen memory.
The Reveal all graphics icon is activated when the current display does not
show all graphic components stored in Screen Memory. An example might be
a title block which was placed at some x and y offset from the lower left
corner of the map, but which fell outside the current viewing area.
Note: There was no mention of the current zoom window in the previous discussion
concerning clipping. Since there is now only one clipping window, which is
defined by the X, Y limits of the currently active Display environment, and the
margins, display of data while zoomed in will NOT cause the data to be
clipped to the zoom window. This means that when you zoom out again after
having displayed some data, you will see all of it inside the X, Y limits, not just
a piece of it, as in earlier versions.
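The Inside-object rule can be sketched as a simple rectangle test. This is a hedged Python illustration; the function name, margin handling, and point representation are assumptions for clarity, not CPS-3 internals.

```python
# Keep only the points that fall inside the X,Y box of the Volume of
# Interest, optionally enlarged by Top/Bottom/Left/Right margins.

def clip_inside(points, xmin, xmax, ymin, ymax,
                left=0.0, right=0.0, bottom=0.0, top=0.0):
    """Return the points inside the (margin-enlarged) clipping rectangle."""
    x0, x1 = xmin - left, xmax + right
    y0, y1 = ymin - bottom, ymax + top
    return [(x, y) for x, y in points if x0 <= x <= x1 and y0 <= y <= y1]

pts = [(5.0, 5.0), (15.0, 5.0), (11.0, 5.0)]
print(clip_inside(pts, 0, 10, 0, 10, right=2.0))
# [(5.0, 5.0), (11.0, 5.0)]
```

Note that the zoom window plays no part in the test, matching the behavior described above.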
The exercises for this chapter will provide the opportunity to demonstrate the
principles discussed here.
Overview
The CPS-3 ASCII Data Loader is invoked from the CPS-3 Main Module
menu bar and offers a very wide variety of data loading features.
Click on File > Import > ASCII to see that you can load any of the five basic
set types in CPS-3 from ASCII files.
While the CPS-3 ASCII Data Loader is very flexible and can load many
different file formats, there are certain conventions regarding the
organization of these files, which are outlined in this chapter.
General Requirements/Options
In general, each physical record in a data file represents one data point or
vertex in CPS-3.
All information associated with one point must be on the same physical
record.
There is a storage limit of 50 z-attributes per data point. That is, a data point
can have up to 50 z-values, each representing a measurement.
You can skip any number of records at the beginning of a file, but you
CANNOT skip any at the end. Trailer records, which are not data records,
must be physically deleted from the end of the file before loading.
Each x, y, z, text, and symbology field must be justified to the same column in
each record.
Format
Format actually refers to the manner in which you want to define the contents
of the data fields in the ASCII file. These options apply to Data, Fault,
Polygon, Surface, and Map set loading.
2. Ordered Input/Output
This method causes the system to read data in the following order from each
record without the user having to specify a format:
x, y, z1 ....zn
3. Point to Fields
This option provides you with a special menu on which your data file is
displayed so that you can pick the column limits for each field you want to
read into CPS-3.
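The Ordered Input/Output convention above can be sketched as follows. This is a Python illustration, not the loader itself; the function name and the comma handling are assumptions.

```python
# Read one record in Ordered Input/Output fashion: x, y, then one or
# more z values, with no user-supplied format specification.

def parse_ordered(record: str):
    """Split a numeric record into (x, y, [z1..zn])."""
    fields = [float(f) for f in record.replace(",", " ").split()]
    x, y, *zs = fields
    return x, y, zs

print(parse_ordered("6438.0,3593.0,1017.9"))
# (6438.0, 3593.0, [1017.9])
```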
File Types
There are almost no limits to the way in which ASCII data can be organized.
However, these predefined formats are general enough to cover just about
any kind of data which is loaded into CPS-3.
1. X,Y,Z Only
This file format contains only x,y,z values in each data record and is typically
read using non-extended mode.
x1 y1 z1 . . . z1.n
x2 y2 z2 . . . z2.n
x3 y3 z3 . . . z3.n
...
2. X,Y,Z Plus Name Field
This format includes the name (for example, line name or well name) in each
record. This type is typically read using non-extended mode. All points having
the same name field will be grouped into the same subset with that name. If
the name field is a unique well name, then each well becomes its own subset,
and its name can be posted during graphic display.
x1 y1 z1 . . . z1.n JOE
x2 y2 z2 . . . z2.n SALLY
x3 y3 z3 . . . z3.n FRED
...
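The grouping of same-named points into subsets can be sketched as follows. This is a Python illustration; the function name and record layout are assumptions.

```python
# Group data records by their name field: every point sharing a name
# lands in the same subset, as described above.
from collections import defaultdict

def group_by_name(records):
    """records: iterable of (x, y, z, name); returns {name: [(x, y, z), ...]}."""
    subsets = defaultdict(list)
    for x, y, z, name in records:
        subsets[name].append((x, y, z))
    return dict(subsets)

rows = [(6438.0, 3593.0, 1017.9, "R-01"),
        (6176.0, 2636.2, 991.8, "R-02"),
        (6440.0, 3595.0, 1018.2, "R-01")]
print(sorted(group_by_name(rows)))
# ['R-01', 'R-02']
```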
3. CPS-3 Format
Unlike other formats, this one incorporates the setting of parameter values at
the top of the file and uses subset markers to define subsets. When exported in
this format, the parameters associated with the set are automatically recorded
by the system. This can be an advantage when transferring sets from one
network node to another while still retaining certain set characteristics
embodied in the parameters. It is not common practice to load data with these
parameters unless reloading one which was exported from CPS-3. The main
advantage of this format is the use of the subset markers to define individual
lines and polylines. As with the other set formats, multiple z-fields may be
present.
! comments
! comments
...
native parameter command 1
native parameter command 2
native parameter command 3
...
->subset 1 name
x1 y1 z1 ...
x2 y2 z2 ...
x3 y3 z3 ...
->subset 2 name
x4 y4 z4 ...
x5 y5 z5 ...
x6 y6 z6 ...
...
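Parsing this layout can be sketched as follows. This is a Python illustration; the treatment of native parameter commands is simplified away (they are simply ignored here).

```python
# Parse a CPS-3 format file: "!" lines are comments, "->" lines open a
# new named subset, and remaining lines are vertex records. Native
# parameter commands before the first subset marker are skipped.

def parse_cps3(lines):
    subsets, current = {}, None
    for line in lines:
        line = line.strip()
        if not line or line.startswith("!"):
            continue                        # blank line or comment
        if line.startswith("->"):
            current = line[2:].strip()      # open a new subset
            subsets[current] = []
        elif current is not None:
            subsets[current].append(tuple(float(f) for f in line.split()))
    return subsets

text = ["! example", "->Fault_A", "1.0 2.0", "3.0 4.0", "->Fault_B", "5.0 6.0"]
print(parse_cps3(text))
# {'Fault_A': [(1.0, 2.0), (3.0, 4.0)], 'Fault_B': [(5.0, 6.0)]}
```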
4. X,Y,Z with Text Fields and Symbology
In this format, multiple text fields exist in each record, as well as symbol
codes or other graphic attributes which are to be associated with the individual
data point or vertex represented by the record. As with other file formats,
multiple z-fields may be present. Data in this format should be loaded with the
Extended Data loader, so that all fields and symbology can be stored and
used.
x1 y1 z1 . . . Text1.1, 1.2 . . . Attribute1.1, 1.2 . . .
x2 y2 z2 . . . Text2.1, 2.2 . . . Attribute2.1, 2.2 . . .
x3 y3 z3 . . . Text3.1, 3.2 . . . Attribute3.1, 3.2 . . .
Load data into non-extended data sets if it does not contain, or you do not
wish to use, textual or symbology fields. For example, if your ASCII data file
contains a well name and a well symbol code, but you do not plan to make use
of them, load just the x,y,z portion into a non-extended data set, ignoring the
other fields in the record.
Load data into extended data sets if the ASCII file contains text, numerics, or
symbology fields which you wish to display, or make use of, on a base map.
Examples of text fields - well name and operator name
Examples of numeric fields- shot point number and line number
Examples of symbology - well symbol, well symbol size, and well
symbol color
Data Files
Here is an example of well data in X,Y,Z Plus Name Field format with a well
name:

6438.0 3593.0 1017.9 R-01
6176.0 636.2 991.80 R-02
6903.7 5122.1 967.90 R-03
...

Here is an example of the same portion of the data file in which the
fields are separated only by commas. This file can be read by selecting the
Ordered Input/Output format.
6438.0,3593.0,1017.9,R-01
6176.0,636.2,991.80,R-02
6903.7,5122.1,967.90,R-03
...
Here is an example of X,Y,Z with Text Fields and Symbology, with some
comments at the top, three z-fields, a well name, an operator name, a symbol
code, and a symbol color associated with each well location. The Extended
Data loader is best suited to loading these types of data files.
! This file contains example well data,
! for three horizons, Z1, Z2 and Z3,
! including a well name and symbol code
6438.0 3593.0 1017.9 1047.9 1.E+30 R-01 Mobil 2 4
6176.0 2636.2 991.80 1023.8 1054.3 R-02 Mobil 13 5
6903.7 5122.1 967.90 1000.9 1.E+30 R-03 Chevron 18 3
7472.5 3590.5 1021.0 1064.5 1104.0 R-04 Chevron 18 4
5251.0 3663.9 1025.1 1051.5 1.E+30 R-05 Chevron 19 2
6405.2 1892.1 1097.9 1136.9 1187.4 R-06 Chevron 56 3
5596.1 5106.4 993.00 1020.0 1.E+30 R-07 Chevron 42 7
6667.8 4027.9 1039.0 1073.0 1089.5 R-08 Mobil 19 5
5374.2 2944.7 1077.4 1103.4 1111.4 R-09 Chevron 18 5
7054.7 3155.6 1037.7 1070.5 1188.5 R-10 Chevron 18 6
6682.0 6009.0 969.10 1000.1 1.E+30 Z-01 Chevron 12 2
5420.7 4355.6 1009.4 1036.9 1.E+30 R-12 Chevron 17 1
Fault Files
Here is an example of a fault file showing the use of the subset markers to
define the separate fault traces for each named fault. It is valid for fault
vertices to contain z-values to aid in the gridding process. We would choose a
fault attribute of x,y only and a file type of CPS-3 format for loading this
data.
->Northwest_vertical
6101.402 2260.635
6037.690 2552.553
5958.145 2881.542
5867.917 3162.905
5905.114 2886.946
5989.938 2520.760
6101.402 2260.635
->Southern_Green_A
6398.598 3401.793
6387.917 3629.874
6387.917 3667.197
6377.361 3959.115
6377.361 4192.600
6414.432 3969.671
6430.392 3746.742
6425.114 3640.556
Next is an example showing the same fault traces, but with the fault name
encoded on each vertex record, removing the need for the subset markers. We
would choose the x,y only attribute and the X,Y Plus Name Field
file type to load these faults.
6101.402 2260.635 Northwest_vertical
6037.690 2552.553 Northwest_vertical
5958.145 2881.542 Northwest_vertical
5867.917 3162.905 Northwest_vertical
5905.114 2886.946 Northwest_vertical
5989.938 2520.760 Northwest_vertical
6101.402 2260.635 Northwest_vertical
6398.598 3401.793 Southern_Green_A
6387.917 3629.874 Southern_Green_A
6387.917 3667.197 Southern_Green_A
6377.361 3959.115 Southern_Green_A
6377.361 4192.600 Southern_Green_A
6414.432 3969.671 Southern_Green_A
6430.392 3746.742 Southern_Green_A
6425.114 3640.556 Southern_Green_A
Polygon Files
Here is an example of a polygon lease file with lease names used as the subset
names.
->Smith_Hawking Lease
4101.402 2660.635
4037.690 2852.553
3958.145 2281.542
3867.917 3362.905
3905.114 2786.946
3989.938 2120.760
->Wiley_Chevron Lease
6698.598 2401.793
6787.917 2629.874
6787.917 2667.197
6477.361 2959.115
6477.361 3192.600
Data Transformations
As ASCII data is imported into or exported from CPS-3, transforms can be
applied to move, rotate, scale, or perform other arithmetic operations on
the data. The transformation menu is shown below.
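Under the hood, such a transform can be sketched as follows. This is a Python illustration; the order of operations here (rotate about the origin, then scale, then shift) is an assumption, not the documented CPS-3 behavior.

```python
# Apply a rotate/scale/shift transform to an (x, y) point, of the kind
# typically offered during ASCII import/export.
import math

def transform(x, y, dx=0.0, dy=0.0, scale=1.0, angle_deg=0.0):
    """Rotate about the origin, scale, then shift by (dx, dy)."""
    a = math.radians(angle_deg)
    xr = x * math.cos(a) - y * math.sin(a)
    yr = x * math.sin(a) + y * math.cos(a)
    return xr * scale + dx, yr * scale + dy

x, y = transform(100.0, 0.0, dx=10.0, angle_deg=90.0)
print(round(x, 6), round(y, 6))
# 10.0 100.0
```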
Overview
The Convergent gridding algorithm has proven to be a very reliable and
predictable general-purpose gridding algorithm. Its innovative methodology
has clearly set a standard for the industry. In this chapter, we will give you a
brief overview of the mechanics of its operation, so that you can use the
information to your advantage.
Iterative Procedure
Convergent gridding gets its name from a process whereby grid node values
are converged upon by iteratively assigning control point values to nearby grid
nodes. The process starts with a large grid cell size, the Starting Interval, in
the first iteration, and the process ends when the grid cell size reaches its
desired size, the Final Interval. The grid is refined to smaller and smaller cell
sizes between each iteration. At each iteration, each control point value is
assigned to a certain number of the closest grid nodes, specified by the
parameter Number of Nodes to snap to. Thus, the most important parameters
associated with the Convergent gridding algorithm are:
Starting Interval
Final Interval
Starting Number of Nodes to snap to (max = 16)
As you can see, in the early iterations, each control point contributes to a large
geographic area, but in the final iterations, each control point is tied to a very
small area. The result of the Convergent gridding technique is that areas
outside of the data have been modeled with a smooth, trend-like solution, but
yet in the middle of the data, the grid ties the data as closely as the final grid
size will allow.
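The refinement schedule can be sketched as follows. This is a Python illustration; the halving of the interval and of the snap count on each pass are illustrative assumptions, not the documented CPS-3 schedule.

```python
# Sketch of the Convergent iteration schedule: the cell size is reduced
# each pass from the Starting Interval down to the Final Interval, while
# each control point snaps to fewer and fewer nearby nodes.

def iteration_schedule(starting_interval, final_interval, start_nodes=16):
    passes, interval, nodes = [], starting_interval, start_nodes
    while interval > final_interval:
        passes.append((interval, nodes))
        interval /= 2.0
        nodes = max(4, nodes // 2)   # never fewer than the 4 cell corners
    passes.append((final_interval, 4))
    return passes

print(iteration_schedule(800.0, 100.0))
# [(800.0, 16), (400.0, 8), (200.0, 4), (100.0, 4)]
```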
In the case of faulted surfaces, the Convergent algorithm applies the standard
visibility criteria when determining if a particular data point is appropriate to
use in the computation of grid node values.
Blending Algorithm
Multiple weighted Z values assigned by several control points to the same grid
node are mathematically merged using a sophisticated blending function. The
function is based on the Taylor Series, which allows the prediction of a shape
at any point x by knowing the behavior of the shape at several points a.
f(x) = f(a) + f'(a)(x - a) + (f''(a)/2!)(x - a)^2

where x is the grid node location and a is a data point location.
Slope and curvature information is calculated on the first iteration and carried
from one iteration to the next.
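Numerically, the projection of a control point value onto a nearby node looks like this. This is a Python illustration of the second-order Taylor expansion above; the input values are made up.

```python
# Project a value known at data location a to a grid node at x using the
# slope and curvature carried between iterations (second-order Taylor).

def taylor_project(f_a, slope_a, curve_a, x, a):
    """f(x) ~ f(a) + f'(a)(x - a) + f''(a)/2 * (x - a)**2"""
    d = x - a
    return f_a + slope_a * d + 0.5 * curve_a * d * d

print(taylor_project(1000.0, 2.0, 0.1, 105.0, 100.0))
# 1011.25
```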
We will not explicitly discuss these options in class, but you can read about
them in the on-line Users Guide.
To help you see how the Convergent algorithm computes the final grid, the
following schematics of several iterations illustrate the process.
[Schematic: first iteration -- a coarse 3x3 grid of numbered nodes (1-9)
with control points (X) falling between them; YINC marks the grid increment]
At the beginning of the first iteration, the grid is null, and all control points
will contribute, to some degree, to all grid node values. The effect of each data
point depends upon its distance to the grid node. At the end of the first
iteration, the grid is essentially a weighted average. Note that the amount of
overall extrapolation desired around the edge of the data can be controlled by
the Starting Interval.
[Schematic: second iteration -- the refined 5x5 grid of numbered nodes (1-25)
with the same control points (X); YINC is now half as large]
In this iteration, the grid starts with the refined values of the first iteration, and
the area affected by each control point has become smaller. For example, the 8
closest nodes for the upper left control point are 1, 2, 6, 7, 11, 12, 8, and 3.
Because the grid starts with existing values in this iteration, the assignment of
control point values now becomes the projection of control point values using
the slopes and curvature computed from the grid.
[Schematic: third iteration -- the fully refined grid; each control point (X)
now falls within a single cell]
In this iteration, the grid begins with the refined values from the second
iteration, and the area affected by each control point has become even smaller.
Each control point in this iteration will change only the four nodes of the cell
in which it falls, but the overall trend in the grid is retained. The change and
readjustment of the four nodes continues as the projection of control point
values along the slopes and curvature computed from the grid.
Overview
anisotropic: Having properties that differ according to the direction of
measurement. Least-Squares gridding allows for directional bias based on
user-defined trends.
audit trail: When activated, the audit trail records all parameters and
operations used to create or modify data, grids, faults, polygons and maps.
The user can request an audit report for a specific set or all sets in the project.
bathymetric data: x,y,z data sets describing the depth of large bodies of
water.
blanking: A procedure for setting the value of selected grid nodes to a null
(indeterminate) value. Nodes are usually selected by enclosing them in
polygons.
cell: The base unit of the grid; bounded by four grid nodes. The dimensions of
a cell are defined by the x- and y-increment.
color-shade contours: A user-specified color palette is used to color the space
between contour intervals. A single color from the palette paints the
area of the surface falling within a given contour interval; adjacent contour
intervals are painted with adjacent colors from the palette.
column: Within a computed grid, all nodes with equal X locations, but
different Y locations, comprise a single grid column.
contours: There are two types of contours within CPS-3. 1. Normal contours--
lines on a map joining points of equal z-value (elevation). 2. Orthogonal
contours--generally drawn perpendicular to normal contours to
indicate direction of flow.
cultural data: Any information, such as political boundaries, roads, rivers and
lease boundaries used as reference points on a map.
data transformation: Any function that can be performed on the data during
its import or export into CPS-3. System functions include scaling, shifting the
origin, rotation, and converting from latitude/longitude to X,Y.
deterministic data: Control points with very precise Z-values, such as well
data.
digitize: Method of directly entering new data into CPS-3 via a mouse
attached to a digitizing tablet or the screen.
engineering units: The units of measure contained within the user's data,
usually feet, meters, miles, or kilometers.
fault boundaries: These are equivalent to polygonal fault traces (see below).
In IESX, fault boundaries can be digitized directly, or initialized from the fault
contacts (see below).
fault contacts: Those points computed in IESX which are the actual or
projected intersection of an interpreted fault surface and a specific interpreted
horizon along a seismic line. Viewed in plan view, the fault contacts for a
particular horizon and fault approximate the fault boundaries which, in IESX,
can be derived from the contacts. Fault contacts in IESX can be identified by
horizon, by fault, and by upthrown or downthrown side. Typically, the fault
contacts are used as a guide to digitize or initialize the fault boundaries
(polygons).
field: Within an ASCII data file, a field contains one item of information in a
computer record (a record may have one or more fields). In multiple records, a
given field should always contain the same type of data. An example is a well
record in which one field stores the well name, an X-field contains latitude, a
Y field contains longitude and the Z-field contains a formation top.
filter: There are two types of filters within CPS-3. 1. File filter--The
graphical interface allows the user to limit the file list when dealing with
files external to CPS-3. For example, when importing ASCII fault files into
CPS-3, the user by default selects from a list of all existing .flt files. If
the user's file has a different extension (.dat), the filter can be changed to
.dat, and the user can then select the correct file. 2. Smoothing filter--There are two
types of filters which may be used during the smoothing process: a. The
biharmonic filter is a converging filter requiring one or more passes through
the surface to converge on the final smoothed surface model. This filter
minimizes curvature without affecting local slopes. This is usually the
preferred filter and is always used during snap or convergent gridding. b. The
ring convolution filter is a non-converging filter that passes through the grid a
user-specified number of times to create the final smoothed surface model.
This filter reduces both slope and curvature. (see also smoothing).
gridding: The basic computer mapping process for transforming user x,y,z
data into a regularly distributed x,y,z data set referred to as a grid. There are
several algorithms within CPS-3 for creating grids.
gridding (continued)...
step--Grid node values are set to the value of the closest control
point.
isochore: A surface which represents the true vertical thickness between two
other surfaces. Often the result of subtracting two surfaces.
lattice: The wire-mesh figure created by drawing line segments through each
column and row in a grid.
macro: A text file consisting of one or more commands which perform one or
more mapping tasks. Macros can be built interactively using the macro build
command within CPS-3 (recording your commands), or they can be built
using the Macro Builder under the CPS-3 Application Manager.
map: The entire graphic image generated by CPS-3. Maps may be displayed
on the screen or sent to plotters for hardcopy output.
menu: A list of options presented to the user for selection. Menu choices may
invoke mapping functions or lead to other menus.
node: The intersection points of the rows and columns of the grid lattice.
These are the locations of the calculated Z-attributes of the grid model.
plotting units: The units of measure used to define the size of graphical items
posted on the plot or the size of the plot itself. The default units of measure
are inches but can be changed to centimeters.
report file: During each interactive CPS-3 mapping session a file called
project_name.rep is created. This file contains a summary of all the
commands and parameters used during that session.
response file: A text file that stores all the user responses made during
creation of a macro within the Macro Builder Module. The response file is
used to edit a macro previously created with the module. Response files have
an .rsp extension.
row: Within a computed grid, all nodes with equal Y locations, but different
X locations, comprise a single grid row.
search limit (SLM): The length of the radius used to describe a circle which
contains all control point data used to calculate a value for a grid node at the
center of the circle. A search limit radius is used in least squares and moving
average gridding algorithms.
subroutine library: The core of the CPS-3 system. The library is comprised
of a set of highly organized subroutines for performing all of the mapping
functions available with the software.
surface strike: The direction normal to the dip of a surface. Normal contours
follow surface strike.
symbol code: A numeric code used by CPS-3 to post the appropriate symbol
at a particular location. Numbers 1-65 represent stroked symbols. The default
symbols and codes are shown in Appendix C of the CPS-3 User's manual.
Symbol numbers 1001 and greater are machine symbols and are posted much
more quickly.