Geomodelling & Reservoir Management G11MM MSc Reservoir Evaluation and Management
Institute of Petroleum Engineering
This manual and its contents are copyright of Heriot-Watt University © 2016
Any redistribution or reproduction of part or all of the contents in any form is prohibited.
All rights reserved. You may not, except with our express written permission, distribute or
commercially exploit the content. Nor may you reproduce, store in a retrieval system or transmit
in any form or by any means, electronic, mechanical, photocopying, recording or otherwise without
the prior permission of the Copyright owner.
Modelling O N E
C O N T E N T S
3. TOOLBOX
3.1 Pixel Models
3.2 Object (Boolean) Models
3.3 Multiple-point Geostatistics
3.4 Kriging with External Drift, Cokriging and
Collocated Cokriging
3.5 Geomodels Honouring Flow Units and
Geological Trends
4. TYPES OF VARIABLE
5. EQUIPROBABLE REALISATIONS
6. MEASURE OF UNCERTAINTY
7. DETERMINISTIC INFORMATION
8. REPRESENTING UNKNOWNS
9. SUBSURFACE GLUE
9.1 Published Modelling Studies
10. WORKFLOW
11. CONCLUSIONS
APPENDIX
LEARNING OBJECTIVES
Having worked through this chapter the Student will be able to:
1. INTRODUCTION
Background texts in geostatistics (Journel and Huijbregts, 1978; Isaaks and Srivastava,
1989; Dubrule, 1998; Jensen et al., 2000; Deutsch, 2002) and compilations of modelling
applications in petroleum (Yarus and Chambers, 1994) are sources of further reference
material. Useful reviews are also given in Haldorsen and Damsleth (1990) and
Cosentino (2001). The use of geostatistics for petroleum reservoir modelling is a
rapidly evolving subject and the literature is probably (as usual) a few years behind
the industry. Likewise, strictly research papers may be a few years ahead of routine
application.
• Late 80s Norwegian applications (STORM, IRAP) (Haldorsen & MacDonald, 1987)
• Stanford (SCRF, GSLIB) (Deutsch and Journel, 1992)
• IFP (Heresim)
These strands of development led to today’s sophisticated reservoir models.
Dubrule (1998, AAPG Continuing Education Course Notes 38) gave a few pointers
as to what geostatistical modelling is (and is not):
1. Geostatistics provides a practical way (the only way?) of populating very large
geological models – the largest to date that we’ve come across is 23 million cells!
2. Geostatistics is not an algorithm tossing a coin to decide what facies are present
between wells.
3. Geostatistics is not at all useful where there are too few data to constrain the
variograms or object distributions. One might say that the one- or two-well scenario
that we see from time to time is such a case, where overly complex geostatistical
models are inappropriate.
2. STATIONARITY
Stationarity is an implicit assumption of a statistical field. The mean and variance should
be independent of location in order for a field to have ‘second-order stationarity’. This
means that the property should be uniformly variable (homogeneously heterogeneous)
over the area/region/zone of interest. The mean, variance and variogram are constant
and relevant to the entire study area (Deutsch, 2002).
Stationarity is also a function of scale. Local stationarity implies that the mean and
variance should be constant over a reasonable scale for the property investigated. For
reservoir modelling the properties should be stationary over, at least, the inter-well
spacing for the modelling to have any validity.
Trends are often present in geological data. A trend can be removed and the residuals
should have mean of zero and constant variance. These residuals would then be a
stationary field and could be modelled as such and recombined with the trend to
create the property model.
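This detrend–model–recombine idea can be sketched in a few lines. Everything below is synthetic and assumes a simple linear areal trend, purely for illustration:

```python
import numpy as np

# Synthetic porosity data with a linear areal trend (illustrative values only).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1000.0, 200)          # distance along a transect (m)
trend_true = 0.10 + 1.0e-4 * x             # porosity increasing with distance
phi = trend_true + rng.normal(0.0, 0.01, x.size)

# Fit and remove a linear trend; the residuals should then be (locally) stationary.
coeffs = np.polyfit(x, phi, deg=1)
trend_fit = np.polyval(coeffs, x)
residuals = phi - trend_fit

# Residuals have near-zero mean and roughly constant variance, so they can be
# modelled as a stationary field and later recombined with the trend.
print(abs(residuals.mean()))               # close to 0
print(residuals.std())                     # close to the noise level (0.01)
```

The stationary residual field would then be simulated geostatistically and added back to the fitted trend to build the property model.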
3. TOOLBOX
“Geostatistics provides the user with a toolbox of approaches for generating realistic
3D representations of the distribution of heterogeneities”
Figure 1 Variogram model types: normalised semivariance versus distance for spherical, exponential and Gaussian models, showing the sill and range.
Figure 2 Pixel models from various variogram model types (from Dubrule, 1998).
Top: Spherical model, Below: Exponential model. The exponential model rises
more steeply from the origin than the spherical model and therefore has a more
‘pepper pot’ character. The Gaussian model rises more slowly from the origin and
therefore has the most continuity; it is appropriate for continuous surfaces such
as structure maps and isochores (Deutsch, 2002).
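The three classical variogram models discussed here can be written down directly. A minimal sketch, using the common “practical range” convention for the exponential and Gaussian models (unit sill and range assumed by default):

```python
import numpy as np

def spherical(h, sill=1.0, a=1.0):
    """Spherical model: linear near the origin, reaches the sill exactly at range a."""
    h = np.asarray(h, dtype=float)
    g = sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, g, sill)

def exponential(h, sill=1.0, a=1.0):
    """Exponential model: steep near the origin; approaches the sill asymptotically."""
    h = np.asarray(h, dtype=float)
    return sill * (1.0 - np.exp(-3.0 * h / a))   # practical-range convention

def gaussian(h, sill=1.0, a=1.0):
    """Gaussian model: parabolic near the origin, hence the smoothest fields."""
    h = np.asarray(h, dtype=float)
    return sill * (1.0 - np.exp(-3.0 * (h / a) ** 2))
```

Near the origin the exponential model rises fastest and the Gaussian slowest, which is exactly the ‘pepper pot’ versus smooth-surface behaviour described in the figure captions.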
Figure 3 Pixel models with varying vertical correlation lengths (from
Dubrule, 1998). Top: Vertical correlation 1% model, Below: Vertical correlation
10% model.
Figure 4 Pixel models with varying horizontal correlation lengths (from
Dubrule, 1998). Top: Horizontal correlation 2% model, Below: Horizontal
correlation 10% model.
There are two main types of pixel models: Sequential Indicator Simulation (SIS)
and Sequential Gaussian Simulation (SGS). In Sequential Indicator Simulation an
indicator value is chosen from a preconditioned distribution (ensuring 50:50 black:white
proportions in the models shown in Figures 2 to 4). In Sequential Gaussian Simulation
the original (porosity or permeability) data are transformed to a standard Gaussian
distribution (using a normal-score transform). Each cell is then assigned a value drawn at
random from the normalised distribution (using a Monte Carlo procedure). The required
correlation structure is then imposed on the field by adjusting the values according to
the local correlation structure. In this process, the modelling is “sequential”. Note
that the Gaussian models must be back-transformed before use.
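The forward and back transforms can be sketched concisely. The data below are synthetic, and the quantile-matching back transform shown is one common implementation, not necessarily the one used by any particular package:

```python
import numpy as np
from statistics import NormalDist

def normal_score(values):
    """Map data to standard-normal quantiles by rank (the forward transform)."""
    values = np.asarray(values, dtype=float)
    n = values.size
    ranks = np.empty(n)
    ranks[np.argsort(values)] = np.arange(1, n + 1)
    p = (ranks - 0.5) / n                       # plotting positions in (0, 1)
    nd = NormalDist()
    return np.array([nd.inv_cdf(pi) for pi in p])

def back_transform(scores, original):
    """Back-transform simulated normal scores to the original data distribution
    by matching quantiles (the back-transform step before the model is used)."""
    original = np.sort(np.asarray(original, dtype=float))
    nd = NormalDist()
    p = np.array([nd.cdf(s) for s in np.asarray(scores, dtype=float)])
    return np.quantile(original, p)

# Example: skewed (lognormal-like) permeability data.
rng = np.random.default_rng(1)
perm = np.exp(rng.normal(3.0, 1.0, 500))
z = normal_score(perm)                          # approximately N(0, 1)
perm_back = back_transform(z, perm)             # recovers the original values
```

After simulation in normal-score space, every simulated value is pushed back through the data distribution in the same way.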
SIS is typically used for modelling lithofacies where the lithofacies (e.g., sand and shale)
are assigned indicator values (1 and 0 perhaps). Figure 5 shows an SIS model with 3
facies. There is a practical limit to the number of indicators that can be accommodated
whilst preserving the correlation structure of each. A background indicator facies is
applied at the outset of the modelling. Commonly, the most abundant lithofacies is
used as background. SIS can also be used for modelling petrophysical rock types
(Figure 6). Each facies may contain several rock types.
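A toy illustration of indicator coding and a single sequential draw. The well positions are invented, and the exponential distance weighting below is only a stand-in for proper indicator kriging weights:

```python
import numpy as np

# Indicator coding: sand -> 1, shale -> 0 (hypothetical well data along a section).
locs = np.array([0.0, 40.0, 55.0, 120.0, 200.0])   # positions (m)
facies = np.array(["sand", "sand", "shale", "sand", "shale"])
indicator = (facies == "sand").astype(float)

def prob_sand(x0, locs, ind, a=60.0):
    """Estimate P(sand at x0) as a distance-weighted average of indicators.
    (A stand-in for indicator kriging: the weights here are simple exponential
    decays with range a, not kriging weights.)"""
    w = np.exp(-np.abs(locs - x0) / a)
    return float(np.sum(w * ind) / np.sum(w))

# Sequential simulation of one unsampled cell: draw the facies by Monte Carlo.
rng = np.random.default_rng(2)
p = prob_sand(70.0, locs, indicator)
cell_facies = "sand" if rng.random() < p else "shale"
```

In a full SIS run, the simulated cell is added to the conditioning data and the procedure moves to the next cell along a random path.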
SGS is used for modelling porosity data (Figure 7). The porosity data can be drawn
from a single distribution or from several distributions, one for each rock type.
Figure 5 2-D slice from a 3-D Indicator pixel model involving three facies; Dune
(white), Interdune (black) and Fluvial (Grey). This model was built sequentially
with the fluvial added after the dune/interdune modelling.
Figure 6 3-D Indicator pixel model involving four rock types with varying
porosity-permeability relationships
Figure 7 Porosity model of the Main Pay (porosity scale 0–30%; section approximately 635 m × 130 m).
Figure 8 Object placement rules: random (neutral), attraction (clustered) and repulsion (anti-clustered).
A reservoir unit can contain a number of different geometries of lithofacies (Figure 10).
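A minimal object (Boolean) model can be sketched in a few lines: geometric objects are dropped at random onto a background grid until a target facies proportion is honoured. The rectangular ‘channel’ objects and all dimensions below are invented:

```python
import numpy as np

# Minimal object (Boolean) model: drop rectangular 'channel' objects at random
# onto a background shale grid until a target sand proportion is reached.
rng = np.random.default_rng(3)
nz, nx = 50, 200                     # grid cells (vertical, horizontal)
grid = np.zeros((nz, nx), dtype=int) # 0 = background shale, 1 = channel sand
target = 0.30                        # net-to-gross to honour

while grid.mean() < target:
    iz = rng.integers(0, nz)         # top of object
    ix = rng.integers(0, nx)         # left edge
    thickness = rng.integers(2, 5)   # cells
    width = rng.integers(20, 60)     # cells
    grid[iz:iz + thickness, ix:ix + width] = 1
```

Real object models add conditioning to wells and rules for how objects interact (attraction, repulsion, erosion), but the stop-when-proportion-is-met loop is the core idea.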
(Panels (a), (b) and (c) Channels: three 250 m × 250 m facies maps, each shown alongside an identical EW variogram.)
Figure 11 A variety of object and pixel models displaying the same variogram.
Given a variogram, the challenge becomes to choose the correct model as the three
facies fields clearly represent different geology (from Journel, 2003). Geostatistics
alone cannot give the answer.
(Multiple-point statistics illustration: a training image of channel sand and shale is scanned for replicates of the conditioning data event around an unknown node u; the proportion of replicates with sand at the centre gives P(A|B) = 3/4, where A is the event ‘sand at u’ and B is the data event.)
(One realization of the multiple-point model: channel and shale, 250 m × 250 m.)
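The replicate-counting idea behind multiple-point statistics can be sketched on a tiny binary training image. The image and data event below are invented, not those of the figure:

```python
import numpy as np

# Multiple-point statistics in miniature: scan a binary training image for
# replicates of a conditioning data event and count the facies found at the
# unknown centre node.
training = np.array([
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 0, 0, 1],
])  # 1 = channel sand, 0 = shale

# Data event: known values at offsets relative to the unknown node.
event = {(-1, 0): 1, (0, -1): 0, (0, 1): 1}   # up, left, right neighbours

matches, sand_count = 0, 0
for i in range(1, training.shape[0] - 1):
    for j in range(1, training.shape[1] - 1):
        if all(training[i + di, j + dj] == v for (di, dj), v in event.items()):
            matches += 1
            sand_count += training[i, j]

p_sand = sand_count / matches if matches else None
```

The estimated probability is then used to draw the facies at the unknown node, exactly as in the P(A|B) illustration above; real MPS algorithms differ mainly in how efficiently they search for replicates.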
Kriging with external drift allows the seismically mapped surface to be used as an
external drift, with the match at the wells being determined by the variogram range.
The seismic data is being considered as the low frequency representation of the actual
horizon. The local structure around the wells may have been smoothed out by the
seismic data. Kriging with external drift can put high-frequency information back
into the model (Figure 15). This method might be appropriate to use if the residuals
between seismic depth and well depth are shown to be random and appropriately
described by a variogram. Systematic shifts, by contrast, can be due to mis-identification
of the seismic pick (mis-tie) at the wells and/or the presence of subseismic faults (Figure
16) – in which case alternative techniques might be more appropriate. The
advantage of the kriging with external drift technique is that a number of equiprobable
realisations can be generated – so that an uncertainty map for the depth location of
the seismic event away from the wells can be produced. This technique is used most
often in the industry for time to depth conversion (Dubrule, 2003).
(Figure 15 panels: kriging with external drift using a large-range and a small-range variogram.)
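As a concrete sketch of the technique (all coordinates, depths and variogram parameters below are invented), kriging with external drift can be written as an extended kriging system with two unbiasedness constraints, one for the constant term and one for the drift variable:

```python
import numpy as np

def ked(x_wells, z_wells, drift_wells, x0, drift_x0, a=500.0, sill=25.0):
    """1-D kriging with external drift: predict depth at x0 given well depths
    and a drift variable known everywhere (e.g. a seismic surface)."""
    n = len(x_wells)
    cov = lambda h: sill * np.exp(-3.0 * np.abs(h) / a)   # exponential covariance
    # Kriging system: data covariances plus unbiasedness constraints for a
    # constant term and for the drift variable.
    A = np.zeros((n + 2, n + 2))
    A[:n, :n] = cov(x_wells[:, None] - x_wells[None, :])
    A[:n, n] = A[n, :n] = 1.0
    A[:n, n + 1] = A[n + 1, :n] = drift_wells
    b = np.zeros(n + 2)
    b[:n] = cov(x_wells - x0)
    b[n] = 1.0
    b[n + 1] = drift_x0
    lam = np.linalg.solve(A, b)[:n]
    return float(lam @ z_wells)

x_wells = np.array([0.0, 300.0, 700.0, 1000.0])
drift_wells = np.array([2000.0, 2050.0, 2030.0, 1990.0])  # seismic surface at wells
z_wells = drift_wells + np.array([5.0, -3.0, 4.0, -2.0])  # well depths
z0 = ked(x_wells, z_wells, drift_wells, x0=300.0, drift_x0=2050.0)
```

At a well location the estimate honours the well depth exactly, while away from wells the prediction follows the drift (the seismic surface) with a deviation controlled by the variogram range – the behaviour described in the text.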
Cokriging was developed to integrate data from various sources, particularly to integrate
seismic and well data. Doyen (1988) showed an example of bivariate cokriging used
to map porosity in the subsurface. First of all, a kriged porosity (primary variable)
map was produced from the well control (Figure 17). Then a relationship between
porosity and the inverse of acoustic impedance (secondary variable) was developed
at the well locations. The porosity map is then generated from the well data and
the seismic data. The method is more complex than the previous ones, as it requires
the variograms of both primary and secondary variables and their cross-covariance.
Note that where there are well data the resulting map is controlled by those data and
where there are few well data the map is controlled by the seismic data.
Semivariance:  γ(h)Var1 = (1/2N) Σ [Var1(x) − Var1(x+h)]²   (ditto for Var2)

Cross-semivariance:  γ(h)Var1,Var2 = (1/2N) Σ [Var1(x) − Var1(x+h)] [Var2(x) − Var2(x+h)]
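These formulas can be computed directly for regularly spaced data; a minimal sketch with synthetic, unit-spaced series:

```python
import numpy as np

# Experimental (cross-)semivariograms for regularly spaced data:
# gamma(h) = (1/2N) * sum over pairs separated by lag h.
def semivariogram(v, lag):
    d = v[:-lag] - v[lag:]
    return 0.5 * np.mean(d * d)

def cross_semivariogram(v1, v2, lag):
    return 0.5 * np.mean((v1[:-lag] - v1[lag:]) * (v2[:-lag] - v2[lag:]))

rng = np.random.default_rng(4)
a = rng.normal(size=1000)
b = 0.8 * a + 0.6 * rng.normal(size=1000)   # correlated secondary variable

g1 = semivariogram(a, 5)                    # primary-variable semivariance
g12 = cross_semivariogram(a, b, 5)          # cross-semivariance
```

Note that the cross-semivariogram of a variable with itself reduces to the ordinary semivariogram, as the formulas above imply.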
(Figure 17 panels: maps of kriged porosity, 1/(acoustic impedance) and porosity by cokriging; distances in units of 10 m.)
Figure 17 A summary of porosity prediction from well and seismic data using
cokriging (Doyen, 1988, Dubrule, 2003)
The secondary variable can be cross-correlated with the primary variable at the wells.
Collocated cokriging uses:
1. The correlation coefficient between primary and secondary variables at the wells
2. The variance of the secondary variable
Jeffry et al. (1996) describe an example where gravity data are used as a secondary
variable to assist depth conversion (Figure 18 and 19). The average velocities measured
at the wells are the primary variable. In this case the variance of the residual gravity
data was used so that the well data were honoured.
Figure 18 A map produced by using gridded well velocities and a residual gravity
map for an oil field (Jeffry et al, 1996)
(Figure 19 panels: cross-plot of residual gravity against average well velocity (≈3500–3700), and an isotropic variogram, gamma versus lag spacing (km).)
Figure 19 A summary for the field in Figure 18 of depth conversion using well
and gravity data by collocated cokriging (Jeffry et al., 1996; Dubrule, 2003)
Kriging with external drift, cokriging and collocated cokriging are geostatistical
methods for combining data from different sources and these techniques lend themselves
to the integration of seismic and well data.
The Rotliegend Hyde Field reservoir was sub-divided into three layers – alpha,
beta and gamma (Sweet et al., 1996) – and sub-models (Figure 20) were built and
subsequently recombined in order to model the gas production for this field (Figure
21). Earlier attempts at modelling this tight gas field had been unsuccessful. In this
well-described modelling study example from the North Sea, the authors emphasise
several points that were found to be important:
• The recognition of key deterministic (i.e., correlatable) flow units within the
reservoir.
• The importance of the regional context to get the appropriate trends and
orientations of the lithofacies.
Figure 20 Three layers in the geostatistical model of the Hyde Field (from Sweet
et al., 1996)
The basal layer – alpha – is modelled with a background eolian facies with an increasing
amount of fluvial (modelled using SIS) in the upper part of the layer. The middle
layer – beta – comprises deterministic layers of eolian facies and a correlated random
field layer of sandy sabkha/eolian sheet facies (SIS). The upper layer – gamma – in
the model is a mixture of sandy sabkha/eolian sheet facies.
• Three flow units with different models (different realizations) for each unit.
• Sequential Indicator (SIS, for facies) and Sequential Gaussian (SGS, for
property) Simulation techniques used
Proportion curves are used (as a form of external drift) to impose trends of increasing
(or decreasing) facies proportions (Eschard et al., 1998).
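A vertical proportion curve is simple to compute: for each stratigraphic layer, the proportion of each facies is averaged across the wells. A three-well toy example with binary facies logs (all values invented):

```python
import numpy as np

# Vertical proportion curve: the fraction of each facies per stratigraphic
# layer, averaged over all wells (0 = shale, 1 = sand).
wells = np.array([
    [0, 0, 1, 1, 1],   # well 1, base (layer 0) to top (layer 4)
    [0, 1, 1, 1, 1],   # well 2
    [0, 0, 0, 1, 1],   # well 3
])
vpc_sand = wells.mean(axis=0)   # proportion of sand in each layer
```

The upward increase in sand proportion here is exactly the kind of trend a proportion curve imposes on the simulation as a form of external drift.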
4. TYPES OF VARIABLE
“The geostatistical method to use depends on the type of variable that is modelled,
on the depositional environment and on the scale at which the representation must
be used”
We have seen a range of models being used for various geological problems and
challenges:
Poroperm data – pixel models from a framework facies model using Gaussian
transformations
Facies – either Pixel Indicator or Object models
Braided fluvial – shale objects in sand background
Base level rise and fall - modelled with proportion curves (Eschard et al, 1998)
Fluvial – channels models – fettucini (Figure 22), point bar models
Marine – external drift with discontinuous shales
Aeolian – external drift controlling wet-dry, fluvial-aeolian proportions laterally
and vertically (Sweet et al., 1999)
Turbidite – sands in shales – holes in shales
Porescale – vugs in a vuggy carbonate (Dehgani et al, 1999)
Microscale - stochastic poroperm models
Mesoscale - connectivity issues in channel sandstones
Megascale – scenarios – faults vs non-faulted, fluvial vs turbidite models.
5. EQUIPROBABLE REALISATIONS
“Geostatistics allows the generation of equiprobable realisations of the subsurface, all
compatible with the data and the statistical parameters used as input to the models”
Multiple realizations of the same statistical field can be generated (Figure 23).
Figure 22 An object - ‘fettucini’ - model of a fluvial channel system (from Tyler, 1994)
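How equiprobable realisations arise can be sketched with a 1-D Gaussian field simulated by Cholesky factorisation of an exponential covariance matrix (grid size and range invented); only the random seed differs between realisations:

```python
import numpy as np

# Several equiprobable realisations of a 1-D Gaussian field sharing one
# covariance model (exponential, practical range a): only the seed differs.
n, a = 200, 30.0
x = np.arange(n, dtype=float)
C = np.exp(-3.0 * np.abs(x[:, None] - x[None, :]) / a)   # covariance matrix
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))            # jitter for stability

realisations = [L @ np.random.default_rng(seed).normal(size=n)
                for seed in range(5)]
```

Each realisation honours the same covariance model yet differs in detail, which is what makes them equiprobable alternatives rather than copies; conditioning to well data would constrain them further.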
6. MEASURE OF UNCERTAINTY
Questions concerning the role of geology often arise during modelling studies. Does
the geology matter and, if so, by how much? While the interaction of heterogeneity and
flow processes is still being researched, there are increasing indications that geological
information may be quite important for accurate flow behaviour models. The Page
Sandstone is a case of particular interest and it, along with others, is presented below.
A succession of works has detailed the geology (Chandler et al., 1989), the permeability
distribution (Goggin et al., 1988; Goggin et al. 1992) and fluid flow (Kasap and Lake,
1990) through the Page Sandstone, a Jurassic eolian sand that outcrops in extreme
northern Arizona near Glen Canyon Dam.
The geological section selected for the flow simulation was the northeast wall of
the Page Knob (Goggin, 1988). The data were derived from probe permeameter
measurements taken on various grids and transects surveyed on the outcrop surfaces,
together with profiles along a core. The individual stratification types comprising
the dunes, i.e., grainfall and windripple, have low variabilities (CVs of 0.21). The
interdune element has a CV of 0.81. The interdune material is less well-sorted than
the dune elements. Grainfall and windripple elements are normally distributed, the
interdune log-normally. The individual stratigraphic elements in this eolian system
are well-defined by their means, CVs and PDFs.
The vertical outcrop transects are more variable (CV = 0.91) than the horizontal transects
(CV = 0.55), an anisotropy that seems to be typical for most bedded sedimentary rocks.
The global level of heterogeneity for the Page Knob is probably best represented by
the transect along the core, which had a CV = 0.6. Semivariograms were calculated
for the grids and core profiles. The grids allowed spherical semivariogram ranges to
be determined for various orientations. These ranges indicate the dip of the crossbeds;
the ranges were 17 m along the bed and 5 m across the bed (Goggin, 1988). Hole
structures are present in most of the semivariograms, indicating significant permeability
cyclicity that corresponds to dune crossbed set thicknesses.
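The coefficient of variation used throughout this discussion is simply the standard deviation divided by the mean. A sketch with invented lognormal permeability samples (not the Page Sandstone data):

```python
import numpy as np

def cv(values):
    """Coefficient of variation: standard deviation over mean (dimensionless)."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean()

# Illustrative permeability samples (md) -- synthetic, for demonstration only.
rng = np.random.default_rng(5)
windripple = rng.lognormal(mean=3.0, sigma=0.21, size=400)  # well sorted
interdune = rng.lognormal(mean=1.0, sigma=0.72, size=400)   # poorly sorted
```

The well-sorted population yields a low CV (around 0.2) and the poorly sorted one a high CV (around 0.8), mirroring the dune-versus-interdune contrast described above.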
For our purposes, the most important facet of this work is the modelling of a matched-
density, adverse-mobility-ratio miscible displacement through a two-dimensional
cross section of the Page Sandstone. Figure 24 shows a typical fluid distribution
just after breakthrough of the solvent to the producing well on the right. The dark
band is a mixing zone of solvent concentrations between 30% and 70%. The region
to the left of this band contains solvent in concentrations greater than 70%, to the
right in concentrations less than 30%. The impact of the variability (CV = 0.6) and
continuity can be seen in the character of the flood front in the large fluid channel
occurring within the upper portion of the panel. There is a smaller second channel
that forms midway in the section. As we shall see, both features are important to the
efficiency of the displacement.
5. The specific geometry of the features was accounted for through the use of
finite elements; and
6. Each bounding surface feature (low permeability) was accounted for explicitly
with at least one cell in each surface.
In all, over 12,000 finite-element cells were required to account for all of the
geologic detail in this relatively small cross section. Indeed, one of the purposes
of the simulation was to assess the importance of this detail through successively
degraded computations.
We compare both figures to the distribution in Figure 24. Qualitatively, the fractal
representation (Figure 25B) seems to better represent the actual distribution of solvent;
it captures the predominant channel and seems to have the correct degree of mixing.
The spherical representation (Figure 25A) shows far less channeling and too much
mixing (that is, too much of the panel contains solvent of intermediate concentrations).
However, a quantitative comparison of the two cases shows that this impression is in
error (Figure 26). The distribution that gave the best qualitative agreement (Figure
25B) gives the poorest quantitative agreement. Such paradoxes should make us
wary of purely visual comparisons; clearly something about these comparisons is
incorrect. The key to resolving the discrepancy lies in
returning to the geology.
Figure 25A Simulated solvent distribution through cross section using a spherical
semivariogram. From Lake and Malik (1993). The mobility ratio is 10. The
scale refers to the fractional solvent concentration. Flow is from left to right
(Figure 26 panel: vertical sweep efficiency versus cumulative pore volumes injected, for several correlation ranges l.)
Figure 26 Sweep efficiency of the base and CS cases. From Lake and Malik (1993)
Figure 27 shows the actual distribution of stratification types at the Page Sandstone
panel based on the northeast wall of the Page Knob.
The thin bounding surfaces are readily apparent (black lines), as are the high-permeability
grainflow deposits (shaded) and the intermediate-permeability windripples (light).
This is the panel for which the simulation in Figure 24 was performed. Even though
the entire cross section was from the same eolian environment, the cross section
consists of two sands with differing amounts of lateral continuity: a highly continuous
upper sand and a discontinuous lower sand. Both sands require separate statistical
treatment because they are so disparate that it is unlikely that the behaviour of both
could be mimicked with the same population statistics. (Such behaviour might be
possible with many realizations generated, but it is unlikely that the mean of this
ensemble would reproduce the deterministic performance.)
When we divide the sand into a continuous upper portion, in which we use the fractal
semivariogram, and a discontinuous lower portion, in which we use the spherical
semivariogram, the results improve (Figure 28). Now both the predominant and the
secondary flow channels are reproduced.
More importantly, the results agree quantitatively as well as qualitatively (Figure 29).
The existence of these two populations is unlikely to be detected from limited well
data with statistics only; hence, we conclude that the best prediction still requires
a measure of geological information. The ability to include geology was possible
because of the extreme flexibility of CS. The technological success of CS did not
diminish the importance of geology; rather, each showed the need for the other.
Figure 29 Sweep efficiency of the base and dual population CS cases. From Lake
and Malik (1993)
It was after this study, having appreciated the challenge posed by geological
models with complex layering and varying modelling strategies, that Larry
Lake coined the phrase “The engineer’s secret weapon is the geologist”!
(Panels: A – gas/oil ratio (m³/m³) versus time (years) for the average and stochastic cross sections; B – oil production rate (m³/day) versus time; C – histogram of the number of realizations against outcome, 36.9–43.0.)
7. DETERMINISTIC INFORMATION
Data without uncertainty might be considered “hard data” although data (usually
interpretations) without uncertainty are not common in the petroleum industry.
Distribution functions (porosity, permeability) and empirical relationships (poroperm
X-plots) can also be treated as deterministic data. The permeability times height (kh)
determined from the interpretation can also be treated as hard data. Deterministic data
are associated with certainty, p=1. In reality none of the data are truly deterministic.
Top structure maps are derived by depth conversion from a velocity model which
might be non-unique. Well test kh is derived by imposition of a model (infinite-acting
radial flow) which may be inappropriate or non-unique. Core laboratory plug data
are interpreted following cleaning and restoration to overburden conditions, which
may introduce artifacts.
Realisations are equiprobable models which honour the input statistics. The comparison
of modelled (poroperm) data with actual raw data is a good validation of a model
(Figure 32). Plotting the differences (residuals) from model and data is a key statistical
quality check (like examining the residuals following linear regression).
(Figure 32 panels: histograms of porosity by facies, relative frequency (%) versus porosity 0.00–0.28.
Input: 369 observations (0 undef), min = 0, max = 0.29922, mean = 0.15335, st. dev. = 0.057074, skewness = −0.20905.
Output: 3,917,825 observations (0 undef), min = 0, max = 0.29922, mean = 0.15154, st. dev. = 0.054394, skewness = −0.43086.)
Figure 32 Comparison of input (raw) and output (modeled) porosity data for
validation of the model
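The statistics used in such a check (mean, standard deviation, skewness) are straightforward to compute. A sketch with invented porosity-like data standing in for the raw input and the modelled output:

```python
import numpy as np

def summary(values):
    """Mean, standard deviation and skewness, as used to compare the raw
    input data with the modelled output."""
    v = np.asarray(values, dtype=float)
    m, s = v.mean(), v.std()
    skew = np.mean(((v - m) / s) ** 3)
    return m, s, skew

# Synthetic skewed porosity-like populations (illustrative only).
rng = np.random.default_rng(6)
raw = rng.beta(4.0, 18.0, size=369)          # stand-in for the well data
modelled = rng.beta(4.0, 18.0, size=100000)  # stand-in for the model output

m_in, s_in, sk_in = summary(raw)
m_out, s_out, sk_out = summary(modelled)
# A valid model reproduces the input statistics within sampling error.
```

Plotting the two histograms together and examining the residuals, as the text recommends, completes the quality check.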
Other data sources abound in engineering practice and should, as time and expense
permit, be part of the reservoir model also. The most common of these are the following.
1. Pressure-transient data: In these types of tests, one or more wells are allowed
to flow and the pressure at the bottom of the pumped wells is observed as a function
of time. In some cases, the pressure is recorded at nonflowing observation wells.
Pressure-transient data have the decided advantage that they capture exactly how a
region of a reservoir is performing. This is because interpreted well-test properties
represent regions in space rather than points. Resolving the disparity in scales between
the regional and point measurements and choosing the appropriate interpretation
model remain significant challenges in using pressure-transient data. However,
this type of data is quite common and may be relatively inexpensive to acquire (in
an offshore situation) making it an important tool in the tool-box. Well tests can
also be used to image the limits of geological objects and help define their scale.
2. Seismic data: The ultimate reservoir characterization tool would be a device that
could image all the relevant petrophysical properties over all the relevant scales and
over a volume larger than interwell spacing. Such a tool eludes us at the moment,
especially if we consider cost and time. However, seismic data come the closest to
the goal in current technology.
3. Production data: Like pressure-transient data, production data (rates and amounts
of produced fluids versus time) reflect directly on how a reservoir is performing.
Consequently, such data form the most powerful conditioning data available. Like
the seismic integration, incorporating production data is a subject of active research
(see Datta-Gupta et al., 1995, for use of tracer data) because it is currently very
expensive. The expense derives from the need to run a flow simulation for each
perturbation of the stochastic field. Furthermore, production data will have most
impact on a description only when there is a fairly large quantity of it, and, of course,
we become less and less interested in reservoir description as a field ages (until we
start a new process - waterflood, gas flood, etc). Nevertheless, simulated annealing
offers significant promise in bringing this technology to fruition.
8. REPRESENTING UNKNOWNS
It is very unlikely that a geologist or engineer will know exactly what lies between
the wells. Using geostatistical techniques, one can generate many realisations (10,
100, 1000) and it is usually assumed that the truth lies between the extremes of the
realisations. Be very careful not to use too few realisations as a few limited realisations
might be quite different from the average.
A statistical study has been carried out on a carbonate reservoir to model properties
between the wells (Lucia and Fogg, 1990; Fogg et al., 1991). Flow units extend
between wells but no internal structure can be easily recognized or correlated. The
San Andres formation carbonates are very heterogeneous with a CV of 2.0 - 3.5
(Kittridge, et al., 1990; Grant et al., 1994). A vertical permeability profile (Figure
33) from a wireline log-derived estimator shows two scales of bedding.
Figure 33 Vertical permeability profile (depth 3480–3580 ft) from a wireline log-derived estimator, showing two bedding types (Type A and Type B)
(Sample vertical semivariogram (mD²) with a fitted spherical model, lag 0–40 ft.)
(Sample horizontal semivariograms (mD²), distance 0–3000 ft.)
Figure 35 Sample semivariograms in two horizontal directions for the San Andres
carbonate. Semivariograms with different ranges can express the anisotropy
suggested by the geological model. From Fogg et al. (1991)
2. The autocorrelated random field model does not explicitly represent the baffles
caused by the bed bounding surfaces; these may have a different flow response
from that of the models in Figure 36. A subsequent study (Grant et al., 1994;
Kerans et al., 1994, Figure 37) found these baffles to be important. The CRF
model is isotropic with no preferred flood direction. A subsequent model
with baffles showed that there was a preferred flood direction. This latter study
confirmed the importance of deterministic surfaces in carbonates.
(Figure 36 panels: realizations (e.g., No. 90) of the conditioned random field, height (ft) versus distance 0–1400 ft, with permeability displayed in two classes around a 10 md cut-off.)
The panels in Figure 36 also illustrate a generic problem with pixel-based stochastic
modelling: there is no good a priori way to impose reservoir architecture. If the
geometry of the beds were determined to be important, other modelling techniques
(e.g., object-based) might have been useful.
The San Andres study illustrates how semivariograms can be used to generate
autocorrelated random fields that are equiprobable and conditioned on “hard” well
data. These realizations can form the basis of further engineering studies; however, it
is unlikely that all 200 realizations will be subjected to full flow simulation. Usually
only the extreme cases or representative cases, often selected by a geologist’s visual
inspection, will be used. In recent years, streamline simulations have made it easier
to simulate ALL the geomodel realizations and to use quantitative methods to select
the extreme and mean cases.
(Cumulative oil production as percent oil in place versus injected pore volume, for left-to-right and right-to-left injection.)
9. SUBSURFACE GLUE
To build a pixel model using variograms, these functions are required to be defined
from data or estimated from analogues. Geological data (from outcrop), geophysical
properties (attribute maps), core data (poroperm plugs), well test data (boundaries
from pressure build up) can all be used in the model. The model can be conditioned
on data from various sources. A permeability map can be produced from a facies
map conditioned on seismic attributes. Different weighting factors can be put on the
data – strongly favouring the well data when near wells – strongly favouring seismic
data when away from wells. In this way, the models can bring together the different
disciplines and appropriate geomodels (in terms of scenarios considered, model
complexity, number of realizations, etc) are key to a successful geoengineering study.
(Figure 39: histograms of the frequency of aeolian sandbody intersections (ABIs) for well azimuths A–L, with individual ranks (A–L: 8, 10, 12, 9, 4, 5, 2, 1, 6, 7, 3, 11) and combined ranks.)
Many thousands of synthetic wells were drilled in various realisations of the model (Figure
38). Each well was analysed for the frequency and number of aeolian sandbody
(assumed to be the best reservoir units) intersections along each borehole (Figure
39). Azimuth G turned out to be oblique to the geological grain. Drilling parallel
to the geological grain might have encountered the maximum eolian sand interval
– but also a significant chance of missing the eolian sandbodies altogether.
This shows how geostatistical models can be used to assess the uncertainty in well
planning in addition to their use in reservoir performance simulation.
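The synthetic-well idea can be sketched on a toy anisotropic facies model. The geometry below is entirely invented: sandbodies are E-W elongated stripes, and straight wells at different azimuths sample them very differently:

```python
import numpy as np

# Synthetic-well screening on an anisotropic facies model: count how many
# sand cells and how many distinct sandbodies a well at a given azimuth hits.
grid = np.zeros((200, 200), dtype=int)
for top in range(0, 200, 40):                 # E-W sandbodies, 10 cells thick
    grid[top:top + 10, :] = 1

def drill(grid, azimuth_deg, start=(85, 100), length=150):
    """Sample the grid along a straight well; return the number of sand cells
    hit and the number of distinct sandbodies crossed."""
    t = np.arange(length)
    az = np.radians(azimuth_deg)
    rows = (start[0] + t * np.cos(az)).astype(int) % grid.shape[0]
    cols = (start[1] + t * np.sin(az)).astype(int) % grid.shape[1]
    samples = grid[rows, cols]
    bodies = int(np.sum(np.diff(samples) == 1)) + int(samples[0])
    return int(samples.sum()), bodies

sand_ew, bodies_ew = drill(grid, 90.0)   # along the grain: long sand interval
sand_ns, bodies_ns = drill(grid, 0.0)    # across the grain: many short hits
```

A well drilled along the grain from a starting point between the stripes would miss the sand altogether, which is precisely the well-planning risk described above; repeating the experiment over many realisations and azimuths gives the ranking statistics of Figure 39.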
Sweet et al. (1996) built a model of the Rotliegend Permian aeolian sandstone in
the Hyde Field, Southern North Sea. The field is a producing gas field and three
models were built for the three distinct layers and then recombined to account for the
non-stationarity and geological facies changes between layers. Stochastic modelling
of the heterogeneity enabled the authors to produce significantly improved model
predictions.
Eschard et al (1998) used proportion curves to model the Triassic Alluvial Fan System
in the Paris Basin. The Chaunoy Field reservoir facies developed as a response to
rising and falling lacustrine base-level. The authors conclude that the layering is
important and that the generation of proportion curves was useful for validating the
geological model. However, they also noted that the results are not very sensitive to
the variogram range selected, because of the strong interconnectedness between channels. The
combination of geological and geostatistical analysis was thought to result in a
successful characterisation of the Chaunoy Field.
Dehgani et al. (1999) found that the detailed modelling of vuggy carbonates
(dolostones of the Grayburg Formation, Permian) in the McElroy field (Texas)
significantly improved the history-matches as the core data tended to underestimate
the permeability, due to sampling problems. Small scale, high resolution models
were built and upscaled.
10. WORKFLOW
Geology provides several insights that are useful for statistical model-building:
categorization of petrophysical dependencies, identification of large-scale trends,
interpretation of statistical measures, and quality-control on generated models. The
first two serve to bring the statistical analysis closer to satisfying the underlying
assumptions. For example, identification of categories and trends, and their subsequent
removal, will bring data sets closer to being Gaussian and/or to being stationary.
The last two are to detect incorrect inferences arising from limited and/or biased
sampling. Consequently, we shall see aspects of these in the following procedures.
1. Divide the reservoir into flow units. Flow units are packages of reservoir rocks
with similar petrophysical characteristics—not necessarily uniform or of a
single rock type—that appear in all or most of the wells. This classification will
serve to develop a stratigraphic framework (“stratigraphic coordinates”) for the
reservoir model at the interwell scale.
What follows next depends on the properties within the flow units, the process
to be modelled, and the amount of data available. We presume some degree of
heterogeneity within the flow unit.
3. If there are no data present apart from well data and there is no geologic
interpretation, the best procedure is to simply use a conditional simulation
technique applied directly to the well data (Figure 40). The advantage of this
approach is that it requires minimal data: semivariogram models for the three
orthogonal directions. (Remember, a gridblock representing a single value of
a parameter adds autocorrelation to the field to be simulated.)
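As a toy illustration of this simulation step, the sketch below generates an unconditional correlated Gaussian field on a small 1D grid from a single covariance (semivariogram) model. The grid size, practical range and seed are arbitrary assumptions, and conditioning to well data is omitted for brevity; this is one of several possible algorithms, not the specific technique of any package named above.

```python
import numpy as np

# Unconditional Gaussian simulation on a 1D grid via Cholesky factorisation
# of an exponential covariance model. Grid size, practical range and seed
# are arbitrary assumptions; conditioning to the wells is omitted.
def simulate_gaussian_1d(n=50, practical_range=10.0, seed=0):
    x = np.arange(n, dtype=float)
    h = np.abs(x[:, None] - x[None, :])              # lag matrix between cells
    cov = np.exp(-3.0 * h / practical_range)         # exponential covariance, sill 1
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # small jitter for stability
    z = np.random.default_rng(seed).standard_normal(n)
    return L @ z                                     # correlated Gaussian field

field = simulate_gaussian_1d()
print(field.shape)  # (50,)
```

In 3D the same idea applies with three directional ranges, which is why semivariogram models for the three orthogonal directions are the minimal input.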
[Figure: schematic model column with correlation ranges hx, hy, hz; geopseudos scale up cm-to-m core geology into the deterministic model]
5. If the property distribution is multimodal, the population in the flow unit can be
split into components and indicator conditional simulation used to generate the
fields. This is useful in fields where the variation between the elements is not
clearly defined and distinct objects cannot be measured. This approach
(Figure 42) yields fields with jigsaw patterns. This was the approach taken by
Rossini et al. (1994).
6. If the flow unit contains distinctly separate rock types and a PDF can be
determined for the dimensions of the objects, the latter can be distributed in a
matrix until some conditioning parameter is satisfied (e.g., the ratio of sand to
total thickness). This type of object-based modelling lends itself to the
modelling of labyrinth fluvial reservoirs (Figure 43). More sophisticated rules
to follow deterministic stratigraphic trends (e.g., stacking patterns) and interaction
between objects (e.g., erosion or aversion) are available or being developed. A
similar model for stochastic shales would place objects representing the shales
in a sandstone matrix following the same method.
These models are the most realistic of all for handling low net-to-gross fluvial reservoirs;
however, they require a good association between petrophysical properties and the
CDFs for the geometries. Tyler et al. (1994) give a good example of this application.
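A toy version of this object-placement loop in 1D: drop fixed-length "channel" objects into a background grid until a target sand fraction (the conditioning parameter) is reached. The object length, target and seed are illustrative assumptions, and conditioning to wells, erosion/repulsion rules and 3D geometry are omitted.

```python
import numpy as np

# Toy object-based placement: drop fixed-length "channel" objects into a
# background grid until a target net-to-gross is reached. Object length,
# target and seed are illustrative assumptions; well conditioning and
# object-interaction rules are omitted.
def place_objects(n_cells=200, obj_len=10, target_ntg=0.5, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.zeros(n_cells, dtype=bool)        # False = background (shale)
    while grid.mean() < target_ntg:             # conditioning parameter check
        start = int(rng.integers(0, n_cells - obj_len))
        grid[start:start + obj_len] = True      # True = sand object
    return grid

grid = place_objects()
print(round(float(grid.mean()), 2))  # at or just above the 0.5 target
```

In a real tool the object dimensions would be drawn from the PDFs mentioned above rather than fixed.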
If the reservoir contains more than one flow unit, then the procedure must be
repeated for the next zone. In the Page example (Figure 26), the upper layer needed
a different correlation model (generated by a fractal semivariogram) from the lower
layer (spherical semivariogram). In the Hyde Field example (Figure 21), three
sub-models were used.
11. CONCLUSIONS
The choice of a geostatistical model and its parameters is often guided by subjective
considerations (Dubrule, 1998):
• The software and time that are available (which is often limited).
EXERCISE 1
Given the following permeability data (same data as was encountered in Chapter 8 of the
Reservoir Concepts course)
An experimental variogram (Gamma) was calculated as follows. Note that the variogram has
been normalized by the variance.
Lag   Gamma
1     0.75
2     0.87
3     1.45
4     1.37
5     0.78
6     1.03
1. Determine the sill (c), nugget (c0) and range (a) for this variogram.
2. Fit a spherical and an exponential model to these data given. (Note that sill (c) is 1- c0
on normalised variogram)
Spherical model:
    γ(h) = c0 + c · (3h/(2a) − (1/2)·(h/a)³),  if h ≤ a
    γ(h) = c0 + c,                             if h > a
Exponential model:
    γ(h) = 0,                                  h = 0
    γ(h) = c0 + (c − c0) · (1 − exp(−h/a)),    h ≠ 0
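As a quick check on Exercise 1, the sketch below evaluates both model formulas at the experimental lags. The parameter values (nugget c0 ≈ 0.65, normalised total sill 1.0, ranges of about 2.5 and 3) are values read off the experimental variogram for illustration, not the official solution.

```python
import math

# Spherical model: c is the partial sill (c = 1 - c0 on the normalised variogram)
def spherical(h, c0=0.65, c=0.35, a=2.5):
    if h == 0:
        return 0.0
    if h >= a:
        return c0 + c
    return c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3)

# Exponential model, written as in the text with c the total sill
def exponential(h, c0=0.65, c=1.0, a=3.0):
    if h == 0:
        return 0.0
    return c0 + (c - c0) * (1.0 - math.exp(-h / a))

for h in range(1, 7):
    print(h, round(spherical(h), 2), round(exponential(h), 2))
```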
EXERCISE 2
Six sample 2D “reservoir images” are presented: a single body, a single meandering channel,
stacked, multiple channels, parallel channels, shale.
(i) Discuss which stochastic simulation algorithms might be used for modelling these patterns.
(ii) Match six corresponding omni-directional variograms (a-f) with the sample “reservoir
images” (I-VI).
(iii) Which of the patterns (I-VI) would be better modelled with an anisotropic variogram model?
(iv) Can you point out which patterns feature strong (zonal) anisotropy and which feature
weaker (geometric) anisotropy?
1. Determine net:gross for each well (net:gross is a key controlling factor for the model).
4. Consider geometry away from the well location (the aspect ratio).
5. Use analogue geobody geometry data (outcrop, well test) to guide the aspect ratio.
Figure A-1: Two wells showing the distribution of permeability on a linear scale

Note the linear scale is a good scale for considering the distribution of high-permeability zones.
The linear scale (Figure A-1) is more appropriate than the logarithmic scale for this purpose,
as fluid flow in reservoirs is controlled by permeability, not the log of permeability.

Figure A-2: Two wells showing the distribution of permeability on a logarithmic scale
Note the logarithmic scale is a good scale for considering the net:gross. For this exercise,
take 1 mD as the appropriate cut-off for an oil field.
To consider the spatial correlation in the vertical direction permeability semivariograms are
also provided (Figure A-3).
Figure A-3a: Permeability semivariograms for Well A (top: core data from 1610-1640; bottom:
density log data).
In each of the wells the core semivariograms show a different nugget from the log data.
This is often the case with core data, as there is frequently little correlation (i.e., a high
difference) between adjacent core samples, usually the result of small-scale heterogeneity.
In Well A there is a short range (1 m) and a stationary (probably exponential) model. In Well
B there is a longer range (4 m) and a clear hole effect. The hole in Well B is confirmed by the
variogram of the density log data, whereas the apparent hole in Well A is not present in the
log data.
Figure A-3b: Permeability semivariograms for Well B (top: core data from 1620-1650; bottom:
density log data).
Consider the textural variations expected in this environment and how you might interpret
the wells in terms of layers or channels and whether the distribution is systematic or
random. The semivariograms hold vital clues.
We have three averages which can be used for estimating effective properties. Their
application depends on various assumptions about the flow geometry (for example,
single-phase flow in two dimensions).
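The three averages are the arithmetic, geometric and harmonic means (compare the well statistics quoted later in this appendix); a minimal sketch, with illustrative permeabilities in mD:

```python
import math

# The three classical averages, commonly associated with: flow along layers
# (arithmetic), flow across layers (harmonic) and a 2D random system
# (geometric). The permeability values are illustrative only.
def arithmetic(ks):
    return sum(ks) / len(ks)

def geometric(ks):
    return math.exp(sum(math.log(k) for k in ks) / len(ks))

def harmonic(ks):
    return len(ks) / sum(1.0 / k for k in ks)

ks = [1.0, 10.0, 100.0]
print(arithmetic(ks), round(geometric(ks), 1), round(harmonic(ks), 1))
# harmonic <= geometric <= arithmetic always holds
```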
Additional information is available from a fluvial outcrop analogue (Figure A-4) and a
subsurface analogue (Figure A-5). From these you can determine an average aspect ratio
of approximately 1:30.
Outcrop Analogues
Note that the La Serreta outcrop has a lower net:gross than Well A or B. This is because
the gross is defined from the WHOLE cliff-face interval. Selecting the top and base of the
laterally amalgamated sandstone in the middle of the picture would give a higher net:gross.
Subsurface Analogues
[Figure A-5: channel sandbody width versus thickness for braided systems and composite/meandering systems, annotated with well testing results and faulted anomalies]
WELL A (template 10 m × 50 m)
Count 276; Average 456; Geometric 32; Harmonic 0.2; Cv 1.9
Clue: think about net/gross and geobody interpretation

WELL B (template 10 m × 50 m)
Count 271; Average 577; Geometric 20; Harmonic 0.2; Cv 1.7
Clue: think about net/gross and geobody interpretation
Figure A-6: Template for drawing a model for the region around the wells A and B
Sketch the geology away from the wells on the templates provided (Figure A-6), using
the locations of the sands, their aspect ratio and the net:gross. Remember that it is highly
unlikely that Well A or B is drilled through the centre of all the channels intersected. These are
relatively high net:gross sandstones, and one can expect the channels to be well connected.
Consider the most appropriate average for the resulting model.
Solutions to follow refer to Figures A-6 to A-12.
REFERENCES
Toro-Rivera, M., Corbett, P.W.M., and Stewart, G., 1994, Well test interpretation
in a heterogeneous braided fluvial reservoir, SPE 28828, Europec, 25-27 October.
Zheng, S-Y., Well Testing and characterisation of meandering fluvial channel reservoirs,
November 1997, Unpublished PhD Thesis, Heriot-Watt University, 226p.
EXERCISE 1 SOLUTION
1. Determine the sill (c), nugget (c0) and range (a) for this variogram.
Spherical model:
    γ(h) = c0 + c · (3h/(2a) − (1/2)·(h/a)³),  if h ≤ a
    γ(h) = c0 + c,                             if h > a
Lag (h)   Gamma(h)
0         0.65
1         0.85
2         0.98
3         1.00
4         1.00
5         1.00
6         1.00
Exponential model:
    γ(h) = 0,                                  h = 0
    γ(h) = c0 + (c − c0) · (1 − exp(−h/a)),    h ≠ 0
Lag (h)   Gamma(h)
0         0.65
1         0.75
2         0.82
3         0.86
4         0.89
5         0.91
6         0.92
[Variogram plot (Gamma vs Lag): experimental variogram with the fitted spherical and exponential models]
[Variogram plot (Gamma vs Lag): experimental variogram with the fitted spherical, exponential and hole effect models]
EXERCISE 2 SOLUTION
i.) Discuss which stochastic simulation algorithms might be used for modelling
these patterns.
For most of these models, object modelling algorithms would be better than
pixel models. The background facies would always be the black facies, and the
white facies would be the modelled objects. The white facies might represent
sand (channel models) or shales. Placement rules would vary: in (I) the object
appears to be placed in the centre, while in (V) a repulsion model seems to have been used.
ii.) Match six corresponding omni-directional variograms (a-f) with the sample
"reservoir images" (I-VI).
II
III
IV
VI
iii.) Which of the patterns (I-VI) would be better modelled with an anisotropic
variogram model.
Isotropic: IV
Strong anisotropy: III, II and VI
iv.) Can you point out which patterns feature strong (zonal) anisotropy and
which feature weaker (geometric) anisotropy?
[Pattern types I-VI: sample reservoir images]
[Corresponding omni-directional variograms a) to f)]
Exercise 3 Solutions
In these proposed solutions, the model has been simplified and approximated.
WELL A SOLUTION (template 10 m × 50 m)
Count 276; Average 456; Geometric 32; Harmonic 0.2; Cv 1.9
High net:gross; aspect ratio of about 1:30
Random system: keff = geometric average = 32 mD
Figure A-7: Solution to Well A. Small channels only are present in the well,
suggesting that small channels of limited extent will be present in the region
around this well. Because of the random nature of these small channels, the
geometric average is the appropriate average for estimating effective permeability.
WELL B SOLUTION (template 10 m × 50 m)
Count 271; Average 577; Geometric 20; Harmonic 0.2; Cv 1.7
Aspect ratio of about 1:30
Layered system: keff = arithmetic average = 577 mD
Figure A-8: Solution to Well B. A few large channels are present in the well,
suggesting that channels of greater lateral extent will be present in the region around
this well. Because of the layered nature of these channels, the arithmetic
average is the most appropriate average for estimating effective permeability.
In the above examples, the channels are positioned where they are observed in the wells.
Even if every interpreter of the well data identified the same sands in the same locations,
the position of the well within each channel (central, left or right of centre) will vary
between interpretations. Away from the well, further channels are needed to maintain
the same net/gross over the volume of the model. The thickness of these channels is
kept close to the thickness of the channels observed in each well. In this sense the
properties (channel thickness, aspect ratio and net:gross ratio) are considered stationary
for each well model.
The models are also simplified to 2-D sections across the channel direction (transverse to
the flow direction). Despite these simplifications the models are thought to be useful and
illustrative. In a real exercise, a distribution (pdf) of channel thickness and aspect ratio
could give higher variability. If the net:gross were lower, the sands would become more
disconnected and the effective permeability of this model would be drastically reduced.
If these wells were part of the same field (which they are), then whether these two wells
come from a single stationary field with more variability (to accommodate both well data
sets in a single model) would need further consideration.
WELL B ALTERNATIVE SOLUTION (template 10 m × 50 m)

Figure A-9: In the alternative interpretation the channels are effectively disconnected and
flow must pass through the inter-channel material; the effective permeability could then be
very low, possibly as low as the harmonic average.
With core plug data, consideration has to be given to sample sufficiency. To estimate
the arithmetic average to within approximately ±20% of the true arithmetic mean (95% of
the time), (10Cv)² samples are required. For Wells A and B these are 361 and 289 samples
respectively; with the 276 and 271 plugs available, the achieved precisions are approximately
±23% and ±21%. In this example, one could conclude that the core samples do a reasonable
job of capturing the variability (assuming there is no bias and that lower-permeability zones
are not systematically missed).
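The sample-sufficiency numbers can be checked directly. The sketch below assumes the (10·Cv)² rule of thumb with Z ≈ 2 and a ±20% tolerance, so the achieved precision with n samples is 2·Cv/√n:

```python
import math

# Rule-of-thumb sample sufficiency: N0 = (10*Cv)^2 samples estimate the
# arithmetic mean to about +/-20% at 95% confidence (Z ~ 2, tolerance 0.2,
# so sqrt(N0) = Z*Cv/0.2 ~ 10*Cv). These conventions are assumptions here.
def required_samples(cv):
    return (10.0 * cv) ** 2

def achieved_precision_pct(cv, n):
    # +/- precision (in %) actually achieved with n samples, Z ~ 2
    return 100.0 * 2.0 * cv / math.sqrt(n)

for well, cv, n in [("A", 1.9, 276), ("B", 1.7, 271)]:
    print(well, round(required_samples(cv)), round(achieved_precision_pct(cv, n)))
```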
A cross plot of vertical (kv) vs horizontal (kh) core plug data (Figure A-10) shows
that there is a lot of variability at the core plug scale. These plugs are adjacent pairs,
and variability in the ratio indicates a lot of local heterogeneity rather than anisotropy.
Considering the models developed above, the layered system (Well B) would have
the worst-case vertical permeability (as low as the harmonic average of 0.2 mD). In
a random system, the vertical permeability would be closer to the geometric average
and the system effectively more isotropic. This example draws attention to the
difficulty of estimating effective vertical permeability because of the differences
in scale. The layering (or not) of the system has the major effect.
Figure A-10: Core plug kv vs kh data (note: kv on the y axis, kh on the x axis)
WELL A SOLUTION (template 10 m × 50 m): kv/kh = 1

WELL B SOLUTION (template 10 m × 50 m): kv/kh = 0.004
The well test data for these two wells are shown in Figure A-12. These data were
presented in the statistics chapter. Well A shows the effects of small-scale channels
near the well (negative skin) and the low effective permeability of a random system.
Well B shows very high permeability, essentially the permeability of the highly
permeable channels (the arithmetic average when confined to the channel intervals).
[Figure A-12: pressure responses (DP vs time) for Wells A and B, showing the early-, middle- and late-time regions (ETR, MTR, LTR)]
Begg, S.H., Kay, A., Gustason, E.R. and Angert, P.F. (1996) Characterization of a
complex fluvial-deltaic reservoir for simulation. SPE Formation Evaluation Sept
1996, 147-153.
Begg, S.H., et al., (1985), Modelling the effects of shales on reservoir performance:
Calculation of effective vertical permeability, SPE 13529.
Clemetson, R., Hurst, A.R., Knarud, R. and Omre, H. (1990) A computer program
for evaluation of fluvial reservoirs. In: A. Buller et al. (eds) North Sea oil and gas
reservoirs II. London, Graham and Trotman, 373-386.
Cosentino, L., (2001), Integrated Reservoir Studies, Editions Technip, Paris, 310p.
Cox, D.L., Lindquist, S.J., Havholm, K.G. and Srivastava, R.M. (1994) Integrated
modelling for optimum management of a giant gas condensate reservoir, Jurassic
Eolian Nugget Sandstone, Anshutz Ranch East Field, Utah Overthrust (USA). In:
J.M. Yarus and R.L. Chambers (eds) Stochastic Modelling and Geostatistics. AAPG
Computer Applications in Geology No. 3. Chapter 22, 287-320.
Datta-Gupta, A., Lake, L.W., and Pope, G.A., (1995), Characterizing heterogeneous
permeable media with spatial statistics and tracer data using sequential simulated
annealing, Mathematical Geology, 27(6), 763-787.
Dehgani, K., Harris, P.M., Edwards, K.A. and Dees, W.T. (1999) Modelling a vuggy
carbonate reservoir, McElroy Field, West Texas. AAPG Bull, 83(1), 19-42.
Deutsch, C.V., 2002, Geostatistical Reservoir Modelling, Oxford University Press, 376p.
Deutsch, C.V., and Journel, A.G., 1992, GSLIB: Geostatistical Software Library
and User's Guide, New York, Oxford University Press, 340p.
Doyen, P.M., 1988, Porosity from seismic data: a geostatistical approach, Geophysics,
53(10), 1263-1275.
Doyen, P.M., Psaila, D.E. and Strandenes, S. (1994) Bayesian sequential indicator
simulation of channel sands from 3D seismic data in the Oseberg Field, Norwegian
North Sea. SPE 28382.
Doyen, P.M., den Boer, L.D., and Pillet, W.R., Seismic porosity mapping in the Ekofisk
Field using a new form of collocated co-kriging, SPE paper 36498.
Dubrule, O., Basire, C., Bombarde, S., Samson, Ph., Segonds, D. and Wonham, J.
(1997) Reservoir geology using 3D modelling tools. SPE 38659.
Dubrule, O., (2003), Geostatistics for seismic data integration in earth models, 2003
Distinguished Instructor Short Course notes, SEG and EAGE.
Eisenberg, R.A., Harris, P.M., Grant, C.W. et al. (1994) Modelling reservoir
heterogeneity within outer ramp carbonate facies using an outcrop analog, San Andres
Formation of the Permian Basin. AAPG Bull 78(9), 1337-1359.
Fanchi, J.R., Meng, H.A., Stolz, R.P. et al. (1996) Nash reservoir management study
with stochastic images - a case study. SPE Formation Evaluation 11(3), 155-161.
Fogg, G.E., Lucia, F.J., and Senger, R.K., 1991, Stochastic simulation of interwell-scale
heterogeneity for improved prediction of sweep efficiency in a carbonate reservoir, in
Reservoir Characterisation II, Lake, L.W., Carroll, H.B.Jr. and Wesson, T.C. (Eds.),
Academic Press Inc., New York, 355-381.
Eschard, R., Lemouzy, P., Bacchiana, C., Desaubliaux, Parpant, J., and Smart, B.,
(1998), Combining sequence stratigraphy, geostatistical simulations, and production
data for modelling a fluvial reservoir in the Chaunoy Field (Triassic, France), AAPG
Bulletin, 82(4), 545-568.
Geehan, G.W. (1993) The use of outcrop data and heterogeneity modelling in
development planning. In: R.
Goodyear, S.G., and Gregory, A.T., (1994), Risk Assessment and Management in
IOR projects, SPE 28844, presented at Europec, London, 25-27 Oct.
Grant, C.W., Goggin, D.J., and Harris, P.M., (1994) Outcrop analog for cyclic-shelf
reservoirs, San Andres Formation of Permian Basin: Stratigraphic framework,
permeability distribution, geostatistics, and fluid-flow modelling. AAPG Bull 78(1), 23-54.
Haas, A. and Dubrule, O. (1994) Geostatistical inversion of seismic data. First Break
12(11), Nov. 1994.
Hu, L.Y., Joseph, Ph. and Dubrule, O. (1992) Random genetic simulation of the
internal geometry of deltaic sandstone bodies. SPE 24714.
Isaaks, E.H., and Srivastava, R.M., (1989), Applied Geostatistics: New York, Oxford
University Press, 561p.
Jeffry, R.W., Stewart, I.C., and Alexander, D.W., 1996, Geostatistical estimation
of depth conversion velocity using well control and gravity data, First Break, 14(8),
313-320.
Jensen, J.L., Lake, L.W., Corbett, P.W.M., and Goggin, D.J., (2000), Statistics for
Petroleum Engineers and Geoscientists, 2nd Edition, Elsevier, Amsterdam, 338pp.
Journel, A.G., and Huijbregts, C.J., (1978), Mining Geostatistics, Academic Press, 600p.
Kasap, E. and Lake, L.W. (1990) Calculating the effective permeability tensor of a
grid block. SPE Formation Evaluation, 5, 192-200.
Kerr, D.R., Ye, L.M, Bahar, A. et al. (1999) Glenn Pool Field, Oklahoma: A case of
improved production from a mature reservoir. AAPG Bull 83(1), 19-24.
Kerans, C., Lucia, F.J., and Senger, R.K., (1994) Integrated Characterization of
Carbonate Ramp Reservoirs using Permian San Andres Formation Outcrop Analogs,
AAPG Bulletin, 78(2), 181-216.
Kerans, C., and Tinker, S., (1997), Sequence stratigraphy and characterisation of
carbonate reservoirs, SEPM short course No. 40, 130p.
Lake, L.W. and Malik, M.A. (1993) Modelling fluid flow through geologically
realistic media. In: C.P. North and D.J. Prosser (eds) - Characterization of Fluvial
and Aeolian Reservoirs. Geol Soc Spec Publ 73, 367-376.
Lanzarini, W.L., Poletto, C.A., Tavares, G. and Pesco, S. (1997) Stochastic modelling
of geometric objects and reservoir heterogeneities. SPE 38953.
Lia, O., Omre, H., Tjelmeland, H., Holden, L. and Egeland, T. (1997) Uncertainties
in reservoir production forecasts. AAPG Bull 81(5), 775-802.
MacCleod, M., Behrens, R.A., and Tran, T.T., (1996), Incorporating seismic attribute
maps in 3D reservoir models, SPE 36499, presented at 71st SPE Ann. Tech. Conf. and
Exhibit., Denver, Co., Oct. 6-9.
MacDonald, A.C., Hoye, T.H., Lowry, P., Jacobsen, T., Aasen, J.O. and Grindheim,
A.O. (1992) Stochastic flow unit modelling of a North Sea coastal-deltaic reservoir.
First Break 10(4), April 1992, 124-133.
Meehan, D.N. and Verman, S.K. (1995) Improved reservoir characterization in low-
permeability reservoirs with geostatistical models. SPE Reservoir Engineering 10(3), 157-162.
Omre, H., Tjemland, H., Qi, Y. and Hinderaker, L. (1993) Assessment of uncertainty
in the production characteristics of a sandstone reservoir. In: Linville (ed) Reservoir
Characterization III. Penwell, 556-603.
Ovreberg, O., Damsleth, E. and Haldorsen, H.H. (1992) Putting error bars on reservoir
engineering forecasts. Journal of Petroleum Technology, June 1992, 732-738.
Petit, F.M., Biver, P.Y.A., Calatayud, P.M., Lesueur, J.L. and Alabert, F. (1994) Early
quantification of hydrocarbon in place through geostatistical object modelling and
connectivity computations. SPE 28416.
Schildberg, Y., Poncet, J., Bandiziol, D., Deboaisne, R., Laffont, F. and Vittori, J.
(1997) Integration of geostatistics and well tests to validate a priori geological models
for dynamic simulations: a case study. SPE 38752.
Seifert, D. and Jensen, J.L. (1997) Object and pixel-based reservoir modelling of
a braided fluvial reservoir. In: Pawlowsky-Glahn (ed.) Proceedings of IAMG '97,
Barcelona, pp. 719-724.
Seifert, D., Lewis, J.J.M. and Hern, C.Y. (1996) Well placement optimisation and
risking using 3-D stochastic reservoir modelling techniques. SPE 35520.
Strebelle, S., Payrazyan, K., and Caers, J., (2002), Modelling of deepwater turbidite
reservoir conditional to seismic data using multiple-point geostatistics, SPE 77425,
presented at SPE Ann Tech Conf and Exhibit, San Antonio.
Sweet, M.L., Blewden, C.H., Carter, A.M. and Mills, C.A. (1996) Modelling
Heterogeneity in a low-permeability gas reservoir using geostatistical techniques,
Hyde Field, Southern North Sea. AAPG Bull 80(11), 1719-1735.
Tjolsen, C.B., Johnsen, G., G. Halvorsen, A. Ryseth and E. Damsleth, (1996) Seismic
data can improve stochastic facies modelling, SPEFE, 11, 141-146.
Weber, K.J. (1996) Visions in reservoir management - what next? In “TRC Special
Publications of the Japan National Oil Corporation, Technology Research Centre”.
Yarus, J.M., and Chambers, R.L., (1994), Stochastic modelling and geostatistics:
principles, methods, and case studies: AAPG Computer Applications in Geology, 379p.
APPENDIX
Spatial correlation analysis entails, to some extent, an assumption of stationarity.
Several levels of stationarity assumption can be distinguished:
• Strict stationarity: for any set of n samples at locations xi (i = 1, ..., n) and any
vector h, the joint multi-dimensional distribution function of V(x1), V(x2), ..., V(xn)
is identical to that of V(x1+h), V(x2+h), ..., V(xn+h).
• Second-order stationarity:
1. E[V(x+h) − V(x)] = 0, for all x
2. The covariance exists and depends only on the lag h; it can be estimated as

    C(h) = (1/N(h)) Σi Z(xi)·Z(xi+h) − m−h·m+h        (1)

where N(h) is the number of pairs separated by the lag vector h, and

    m+h = (1/N(h)) Σi Z(xi+h),    m−h = (1/N(h)) Σi Z(xi)
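A minimal implementation of the covariance estimator in equation (1), for a regularly spaced 1D series (a hypothetical helper; the lag is in multiples of the sample spacing):

```python
import numpy as np

# Covariance estimator of equation (1) for a regularly spaced 1D series.
def experimental_covariance(z, lag):
    z = np.asarray(z, dtype=float)
    tail, head = z[:len(z) - lag], z[lag:]        # Z(x_i) and Z(x_i + h)
    m_minus, m_plus = tail.mean(), head.mean()    # the lag means m-h and m+h
    return float(np.mean(tail * head) - m_minus * m_plus)

print(experimental_covariance([1.0, 2.0, 3.0, 4.0], 0))  # 1.25, the variance
```

At lag 0 the estimator reduces to the (population) variance, as expected for C(0).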
[Figure: variogram search template on a data map (Northing), showing the bandwidth (bw), lag tolerance (ht) and lag (h)]
In the 3D case the angle tolerance changes from a flat sector to a 3D cone. Thus, for
vertical spatial correlation (the direction angle is always 90°), the pairs for each data
point are collected according to Figure App. 2.
[Figure App. 2: search sector showing the angle tolerance, lag tolerance (h−tol, h+tol) and bandwidth]
So for the full 3D case, the parameters required for the covariance calculation are:
• Azimuth angle
• Azimuth angle tolerance
• Azimuth bandwidth (bw)
• Dip angle
• Dip angle tolerance
• Dip bandwidth
• Lag distance (can be direction-specific)
• Lag tolerance (ht, half the lag distance)
• Number of lags
• Maximum distance = number of lags × lag distance
The semivariogram (variogram) is the basic tool for spatial structural analysis and
variography. Under the intrinsic hypothesis of stationarity, the theoretical formula can
be expressed as half the variance of the increments:
    γ(x, h) = (1/2)·Var{Z(x) − Z(x+h)} = (1/2)·E{(Z(x) − Z(x+h))²} = γ(h)
Hence, the empirical (experimental or raw) semivariogram can be computed as:

    γ(h) = (1/(2N(h))) Σi (Z(xi) − Z(xi+h))²
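The estimator can be sketched for a regularly spaced 1D series (a hypothetical helper; the 2D/3D search parameters described above are omitted):

```python
import numpy as np

# Empirical semivariogram for a regularly spaced 1D series:
# gamma(h) = 1/(2 N(h)) * sum over pairs of (Z(x) - Z(x+h))^2
def experimental_semivariogram(z, lag):
    z = np.asarray(z, dtype=float)
    d = z[lag:] - z[:len(z) - lag]   # the N(h) pair differences
    return float(np.mean(d ** 2) / 2.0)

print(experimental_semivariogram([1.0, 2.0, 3.0, 4.0], 1))  # 0.5
```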
[Figure App. 3: experimental variogram (γ(|h|) vs |h|, with pair counts per lag) and h-scatter plots of Cadmium(x+h) vs Cadmium(x) for lags 1, 2 and 8]
[Figure App. 4: data location map (Easting vs Northing) and arsenic variograms with corresponding h-scatter plots, computed with and without the maximum value]
High-value data (outliers) can greatly influence the variogram. The data distribution
presented in Figure App. 4 has a single maximum value greater than 5. The variogram
computed with this value fluctuates considerably; as the h-scatter plot illustrates, this
is caused by the pairs that include the maximum value. If we ignore the outlier by
removing values greater than 5, the new variogram based on n-1 data will display the
more typical behaviour of increasing and then levelling off.
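The outlier effect is easy to reproduce with the empirical semivariogram formula above, on hypothetical data with one value greater than 5. Note that simply deleting the value (rather than discarding only the pairs that contain it) is a simplification of the procedure described in the text:

```python
import numpy as np

# Semivariogram at a given lag for a regularly spaced 1D series
def semivariogram(z, lag):
    z = np.asarray(z, dtype=float)
    d = z[lag:] - z[:len(z) - lag]
    return float(np.mean(d ** 2) / 2.0)

# Hypothetical data with a single outlier (8.0)
data = np.array([1.0, 1.2, 0.9, 1.1, 8.0, 1.0, 1.3, 0.8])
with_outlier = semivariogram(data, 1)
without = semivariogram(data[data < 5.0], 1)
print(with_outlier > 10 * without)  # True: the outlier inflates gamma(1)
```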
Another useful tool to visualize the pairs contributing to the variogram is the variogram
cloud (Figure App. 5). Non-averaged variances for each pair are plotted against
separation distance. This plot also shows the number of data pairs supporting the
variogram in each lag.
[Figure App. 5: variogram cloud, pairwise semivariances plotted against |h|]
The characteristics of a variogram curve can be described in terms of its key components:
the nugget, sill and range (Figure App. 6).
[Figure App. 6: variogram curve annotated with the nugget, sill, Range 1 and Range 2]
The nugget is the variogram value corresponding to the zero distance (obtained
from extrapolating the variogram curve). It represents the random part in the spatial
continuity.
The sill is the difference between the nugget and the value at which the variogram
flattens to a constant at some lag distance, provided stationarity holds. The sill
represents the correlated part of the spatial continuity. The sum of the nugget and the
sill gives the level of the a priori covariance (the variance of the global distribution) in
the case of stationarity with no trend. In the non-stationary case, the sill does not
exist and the variogram continues to increase, showing the existence of correlation
at all distances within the data range.
The range is the distance where the variogram reaches the sill. In case of anisotropy, the
range can change (Range1, Range 2) for different directions (Figures App. 6, and 7).
[Figure App. 7: directional variograms (−90° to 60°) plotted against average distance between points in lag]
In the isotropic case, the variograms are almost the same in all directions (Figure App.
9). If the range varies with direction while the sill is constant, the anisotropy is called
geometric (Figure App. 10). In geometric anisotropy, the variogram rose contours form
tight ellipsoids. If both the range and the sill vary with direction, the anisotropy is called
zonal (Figure App. 11). In this case the variogram contours are parallel to, and
symmetrical about, the axis of the longer (discontinuous) correlation range.
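Geometric anisotropy is usually handled by rescaling the lag components by their directional ranges, so that a single isotropic model can be applied to the transformed lag; a sketch, with assumed ranges:

```python
import math

# Geometric anisotropy: rescale lag components by directional ranges so a
# single isotropic model applies to the transformed lag. The 100 m and 25 m
# ranges are assumptions for illustration.
def anisotropic_lag(hx, hy, range_x, range_y):
    return math.sqrt((hx / range_x) ** 2 + (hy / range_y) ** 2)

# A 50 m lag along y is "further", in correlation terms, than along x:
print(anisotropic_lag(50.0, 0.0, 100.0, 25.0))  # 0.5
print(anisotropic_lag(0.0, 50.0, 100.0, 25.0))  # 2.0
```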
Variograms work only under conditions of stationarity. Being a squared-difference
function, the variogram is very sensitive to outliers (anomalously high data values),
which may need to be ignored if not representative of the population.
There are several other second-order moments besides the variogram, which can also
describe spatial correlation. These are:
• Semivariogram
• Standardised Variogram
• Covariance
• Correlogram
• Madogram
• Rodogram
• Relative Variograms
• Drift
The Madogram, for example, is based on the absolute value of the differences between
the values in pairs:

    M(h) = (1/(2N(h))) Σi |Z(xi) − Z(xi+h)|
The Rodogram is based on the square root of the absolute differences between the
values in pairs:

    R(h) = (1/(2N(h))) Σi |Z(xi) − Z(xi+h)|^(1/2)
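Both estimators differ from the semivariogram only in the power applied to the absolute pair differences (1 and 1/2 instead of 2); a sketch for a regularly spaced 1D series:

```python
import numpy as np

# Madogram and rodogram estimators for a regularly spaced 1D series.
def madogram(z, lag):
    z = np.asarray(z, dtype=float)
    d = np.abs(z[lag:] - z[:len(z) - lag])    # |differences|^1
    return float(np.mean(d) / 2.0)

def rodogram(z, lag):
    z = np.asarray(z, dtype=float)
    d = np.sqrt(np.abs(z[lag:] - z[:len(z) - lag]))  # |differences|^(1/2)
    return float(np.mean(d) / 2.0)

print(madogram([0.0, 4.0, 0.0, 4.0], 1))  # 2.0
print(rodogram([0.0, 4.0, 0.0, 4.0], 1))  # 1.0
```

The lower powers make these statistics more robust to outliers than the variogram.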
[Figure residue: variogram contour maps and directional normal-score variograms with fitted models]
Figure App. 11: Zonal anisotropy: 2D experimental (raw) variogram and the fitted
theoretical model. Bottom: raw variogram contours.
In the case of more than one correlated, spatially distributed variable, the joint spatial
correlation is measured by the cross-covariance:

    Cij(x, h) = E{(Zi(x) − mi(x))·(Zj(x+h) − mj(x+h))}

and estimated experimentally by the cross-semivariogram:

    γij(h) = (1/(2N(h))) Σk (Zi(xk) − Zi(xk+h))·(Zj(xk) − Zj(xk+h))
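A minimal estimator of the experimental cross-semivariogram for two co-located, regularly spaced 1D series (a hypothetical helper following the formula above):

```python
import numpy as np

# Experimental cross-semivariogram for two co-located 1D series.
def cross_semivariogram(zi, zj, lag):
    zi = np.asarray(zi, dtype=float)
    zj = np.asarray(zj, dtype=float)
    di = zi[lag:] - zi[:len(zi) - lag]
    dj = zj[lag:] - zj[:len(zj) - lag]
    return float(np.mean(di * dj) / 2.0)

# With zi == zj it reduces to the ordinary semivariogram:
print(cross_semivariogram([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0], 1))  # 0.5
```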
[Figure residue: three variograms (γ vs distance, in pixels)]
Figure App. 13 Trend removal: 1D case. Data and the corresponding variogram with trend
Figure App.13(b) Data residuals after linear trend removal and the corresponding
variogram of the residuals
2. Spherical model: features linear behaviour near the origin and reaches its sill with zero derivative. This sill is the statistical (a priori) variance. The random function is continuous but not differentiable.
6. Hole effect model: represents periodic structures (e.g. objects); acts in one direction only.
7. Damped hole effect: a product of the exponential covariance and the hole effect; more common than the pure hole effect.
Nested variogram structures. The variogram can reveal nested structures: hierarchical structures, each characterised by its own range and sometimes its own sill. In this case the variogram can be modelled as a sum of theoretical variograms with positive coefficients; the resulting variogram will be positive definite as long as the individual models are positive definite.
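As an illustration, a nested model built as a sum of spherical structures with positive sills can be sketched as follows (all names and parameter values are assumptions, not from the text):

```python
import numpy as np

def spherical(h, sill, rng):
    """Spherical variogram structure: linear near the origin,
    reaching `sill` with zero derivative at range `rng`."""
    h = np.asarray(h, dtype=float)
    g = sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h < rng, g, sill)

def nested(h, structures):
    """Sum of elementary structures; `structures` is a list of
    (sill, range) pairs. Positive sills keep the summed model
    positive definite, as required above."""
    return sum(spherical(h, c, a) for c, a in structures)
```

For example, a short-range structure (sill 0.4, range 100) plus a long-range structure (sill 0.6, range 1000) reaches a total sill of 1.0 beyond the longest range.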
The variogram is an even function and, under second-order stationarity, is related to the covariance:

\gamma(h) = \gamma(-h)

\gamma(h) = C(0) - C(h)
This procedure of decomposing a joint pdf into a product of conditional pdfs is very general and can be used for spatial random functions as well; remember that a spatial random function is a collection of random variables. It makes possible the construction of both non-conditional (M=0) and conditional (M>0) simulations. The same procedure can be applied to the co-simulation of several non-independent random functions. It produces simulations that match not only the covariance but also the spatial distribution. In general, it is not known how to obtain the conditional distributions; but for a Gaussian random function with known mean, the conditional distribution is Gaussian, with mean and variance obtained from simple kriging.
1. Define a random path through all nodes of the estimation grid, visiting each node just once.
2. Select a node from the random path. Using the original data at the start (and subsequently all previously simulated values as well), make a kriging estimate with its uncertainty at the selected node.
3. Build the local conditional distribution from the kriging estimate and the kriging variance.
4. Draw a random sample from the cumulative local distribution function. This is the simulated value, which becomes a new data point.
5. Having added a new data point, return to step 2 and continue until all nodes have been modelled.
[Figure: the sequential simulation loop: choose a random location; estimate the value and its uncertainty by kriging; use the simulated value in the next predictions]
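The loop described above can be sketched in a few lines. This is a minimal 1D illustration assuming simple kriging with a known zero mean and an exponential covariance model; all names, the covariance choice and its parameters are assumptions for illustration, not prescriptions from the text:

```python
import numpy as np

def cov(h, sill=1.0, rng=10.0):
    """Assumed exponential covariance model C(h) = sill * exp(-3|h|/range)."""
    return sill * np.exp(-3.0 * np.abs(h) / rng)

def sgs_1d(xs, zs, grid, rng_state, mean=0.0):
    """Sequential Gaussian simulation on a 1-D grid (normal-score domain).

    xs, zs : conditioning data locations/values; grid : nodes to simulate.
    Nodes are visited along a random path; simple kriging gives the mean and
    variance of the local Gaussian distribution, a value is drawn from it,
    and that value joins the conditioning set.
    """
    xs, zs = list(xs), list(zs)
    sim = {}
    for x0 in rng_state.permutation(grid):           # 1. random path
        X = np.array(xs)
        C = cov(X[:, None] - X[None, :])             # data-to-data covariances
        c0 = cov(X - x0)                             # data-to-node covariances
        lam = np.linalg.solve(C, c0)                 # simple kriging weights
        mu = mean + lam @ (np.array(zs) - mean)      # 2. SK estimate
        var = max(cov(0.0) - lam @ c0, 0.0)          # SK variance
        z = rng_state.normal(mu, np.sqrt(var))       # 3-4. draw from local distribution
        xs.append(x0); zs.append(z)                  # 5. treat as new data
        sim[float(x0)] = z
    return sim
```

A node very close to a conditioning datum receives a small kriging variance, so its simulated value stays close to that datum, which is how conditioning is honoured.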
Direct kriging of the indicator variable I(x; sk) provides an estimate/model for the
probability that sk prevails at location x.
\operatorname{Prob}^*\{ I(x; s_k) = 1 \mid (n) \} = p_k + \sum_{\alpha=1}^{n} \lambda_\alpha \left[ I(x_\alpha; s_k) - p_k \right]
When the average proportions vary locally, simple indicator kriging can explicitly be supplied with smoothly varying local proportions.
1. Define a random path through all nodes of the estimation grid, visiting each node once.
2. Select a node from the random path, say at location x. Using the original data (together with any previously simulated indicator values for the categories sk), perform indicator kriging followed by order relation correction to obtain K estimated probabilities pk(x|(ci)), k = 1,...,K.
4. Define any ordering of the K categories, say 1,...,K. This ordering defines a cdf-type scaling of the probability interval [0,1] into K intervals.
5. Draw a random number uniformly distributed in [0,1]; the category whose interval contains it is the simulated value at x.
6. The simulated value becomes an additional data point; return to step 2 and continue until all nodes have been modelled.
Thus, the indicator-based simulation algorithm can be viewed as a two-step procedure:
1) Simulate the class value.
2) Draw a simulated value from that class using a class distribution model (e.g., uniform, power, etc.).
Consequently, indicator simulations guarantee approximate reproduction only of the K class proportions and the corresponding indicator semivariograms, not reproduction of the cdf and semivariogram of the original continuous z-values. Therefore, the actual approximation of one-point and two-point z-statistics by a sequential indicator realisation depends on several factors: the number of thresholds, the information accounted for when performing indicator kriging, and the interpolation/extrapolation models used for increasing the resolution of the ccdf (conditional cumulative distribution function).
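The two-step procedure can be sketched as follows, assuming (for illustration only) a uniform within-class distribution model; all names are illustrative:

```python
import numpy as np

def draw_from_classes(probs, class_bounds, rng):
    """Draw a value by first simulating the class, then sampling within it.

    probs        : K class probabilities from indicator kriging
                   (order-relation corrected, summing to 1).
    class_bounds : K (lo, hi) intervals of the original z variable.
    """
    probs = np.asarray(probs, dtype=float)
    k = rng.choice(len(probs), p=probs)           # step 1: simulate the class
    lo, hi = class_bounds[k]
    return k, rng.uniform(lo, hi)                 # step 2: value within the class
```

Replacing the uniform draw with a power-law or tabulated within-class model changes only the second step, which is exactly the interpolation/extrapolation choice referred to above.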
[Figure: indicator coding of facies at a cut-off (I1/I0); simulated indicator values are used in the next predictions]
Figure App. 19 Facies distribution (2D slice) from sequential indicator simulation (Petrel)
1. Determine the univariate cdf FZ(z) representative of the entire study area, not only of the z-sample data available. Declustering may be needed. If the original data are clustered, the declustered sample cdf should be used for both the normal score transform and the back transform (see Annex). As a consequence, the unweighted mean of the normal score data is not zero, nor is the variance one. In this case the normal score covariance model should first be fitted to these data and then renormalised to unit variance.
2. Using the cdf FZ(z), perform the normal score transform of z-data into y-data
with a standard normal cdf as illustrated in Figure App. 20.
[Figure App. 20 Normal score transform: the cumulative frequency F(z) of a z-value is mapped to the standard normal quantile y with the same cumulative frequency G(y)]

y = G^{-1}(F_Z(z))
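The normal score transform of steps 1-2 can be sketched as follows; the midpoint cdf convention and the optional declustering weights are illustrative implementation choices:

```python
import numpy as np
from statistics import NormalDist

def normal_score(z, weights=None):
    """Normal score transform y = G^{-1}(F_Z(z)).

    Declustering weights (if any) define the cdf F_Z; otherwise equal
    weights are used. Returns the transformed values and the (z, y)
    table needed for the back transform.
    """
    z = np.asarray(z, dtype=float)
    w = np.ones_like(z) if weights is None else np.asarray(weights, dtype=float)
    order = np.argsort(z)
    # cdf evaluated at each datum; midpoint convention avoids F = 0 or 1
    cw = np.cumsum(w[order]) / np.sum(w)
    F = cw - 0.5 * w[order] / np.sum(w)
    y = np.array([NormalDist().inv_cdf(p) for p in F])
    out = np.empty_like(y)
    out[order] = y
    return out, (z[order], y)
```

The back transform simply interpolates in the stored (z, y) table in the opposite direction, z = F_Z^{-1}(G(y)).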
3. Check for bivariate normality of the normal score values. There are different ways to check for bivariate normality of a data set whose histogram is already normal. The most relevant check directly verifies that the experimental bivariate cdf of data pairs is indeed standard bivariate normal with the given covariance function CZ(h). There exists an analytical relation linking the covariance to any standard bivariate normal cdf value (see Deutsch and Journel, 1997).
4. Assuming a multivariate Gaussian random function model for the normal score transformed variable, the local conditional distribution is normal, with mean and variance obtained by simple kriging. Stationarity requires that simple kriging (SK) with zero mean be used. If there are enough conditioning data to consider inference of a non-stationary random function model, it is possible to use moving-window estimation with ordinary kriging (OK) and re-estimation of the mean. In any case, the SK variance should be used for the variance of the Gaussian conditional cumulative distribution function; if there are enough conditioning data, it may be possible to keep the trend as it is.
6. Use simple kriging (SK) with the normal score variogram model to determine the
parameters (mean and variance) of the ccdf of the random function Y(x) at location x.
9. Return to Step 7, and compute the next simulated value until all nodes are simulated.
10. Back transform the simulated normal values yl(x) into real simulated values for
the original variable zl(x).
Gaussian models are theoretically consistent models. The Gaussian approach is also
related to maximum entropy and correspondingly to the maximum “disorder” in the
data. Perhaps, it is not the best choice when spatial correlations between extremes
are of special interest. One possibility is to take another nonparametric model like
indicator based simulations.
[Figure: sequential Gaussian simulation: transform the data to a normal distribution N(0,1); compute the kriging estimate and error; draw and back-transform the simulated values]
Differences between the simulated realisations can also be quantified using variograms. A variogram is computed for each simulated petrophysical property field and compared to the one based on the initial data (Figure App. 23). The spatial correlation from the simulation is expected to exhibit the same features as that of the initial data, but with some scatter due to stochastic variation. Note that Gaussian simulation is a maximum entropy algorithm, which leads to maximum differences (disorder) between the realisations.
Figure App. 23 Variograms of the simulated realisations of porosity vs. the “truth
case” variogram
Figure App. 24 Field oil production total (FOPT) resulting from 50 simulated realisations of the petrophysical properties vs. the “truth case” production scenario
When different simulated petrophysical properties are fed into a dynamic simulator, the modelled outputs vary for each realisation, as shown in Figure App. 24, where 50 cumulative oil production curves are plotted. A histogram of the cumulative oil production for the realisations is shown in Figure App. 25, reflecting the range and uncertainty that comes from the modelling, which is valuable in reservoir management.
Figure App. 25 Distribution of the final field oil production total (FOPT) resulting from 50 simulated realisations of the petrophysical properties vs. the “truth case” production solution
Annex. Declustering
Clustering (preferential sampling) of monitoring networks has a significant influence on spatial data analysis and modelling. A simple example of how preferential sampling influences the estimation of the mean of a one-dimensional function is presented in Figure App. 26. The true mean value of the function is 0.5. The function was sampled twice: 1) with preferential clustered sampling in the regions of high values, and 2) with preferential sampling in the regions of low values. In the former case the calculated mean is overestimated (1.45) and in the latter case underestimated (-0.5). The same effect can be observed when estimating the variance and, more generally, histograms/distribution functions. Clustering also influences structural analysis, i.e. spatial correlation analysis (variography), which is a central part of geostatistics.
Figure App. 26
Thus, as a result of clustered monitoring networks, the collected data sets are not “representative”: they do not represent the “true patterns”. The objective of declustering is to recover representative information, taking into account both the clustering effect and preferential sampling. The most straightforward way to do this is to apply a weighting procedure to the raw data: the raw data are multiplied by weights, and the weighted data are used as a “representative” data set. For example, the declustered mean and variance can be estimated as follows:
Z_m = \sum_{i=1}^{N} Z_i \, \omega_i

\operatorname{Var}(Z) = \sum_{i=1}^{N} (Z_i - Z_m)^2 \, \omega_i
Optimal declustering weights are defined as the weights which provide the most representative histogram (Journel and Deutsch, 1998). The main approaches are:
1. Random declustering
2. Cell-declustering
3. Voronoi polygons
4. Kriging weights
Cell-declustering
The cell-declustering method was proposed by Journel (1983). The idea is to cover the spatial domain with a regular grid of a given cell size, apply equal weighting within each grid cell, and then average the cell means (see Figure App. 27). Equal weights are assigned to each datum within a cell, inversely proportional to the number of data in it. The overall mean depends on the cell size: the minimum of the mean corresponds to the optimal cell size when preferential sampling is in high-value regions, and the maximum otherwise. This is a fast and efficient method that uses all the data.
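A minimal sketch of cell-declustering for 2D data follows, for a single cell size; in practice a range of cell sizes would be scanned for the minimum (or maximum) of the declustered mean, as described above. All names are illustrative:

```python
import numpy as np

def cell_decluster(x, y, z, cell):
    """Cell-declustering weights for 2-D data.

    Each datum receives a weight inversely proportional to the number of
    data in its grid cell; weights are normalised to sum to 1, so the
    declustered mean is simply sum(w * z).
    """
    ix = np.floor(x / cell).astype(int)
    iy = np.floor(y / cell).astype(int)
    cells = list(zip(ix, iy))                        # cell index of each datum
    counts = {c: cells.count(c) for c in set(cells)} # data per occupied cell
    n_cells = len(counts)
    w = np.array([1.0 / (counts[c] * n_cells) for c in cells])
    return w / w.sum()
```

For three clustered high-value samples sharing one cell and one isolated low-value sample in another, the two cells contribute equally, pulling the declustered mean back toward the unbiased value.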
APPENDIX REFERENCES
3. Deutsch C.V. and Journel A.G. (1998) GSLIB. Geostatistical Software Library
and User’s Guide. N.Y., Oxford University Press.
7. Pannatier, Y., (1996) VARIOWIN: Software for Spatial Data Analysis in 2D,
Springer-Verlag, New York, NY.
Geomodelling Workflow T W O
C O N T E N T S
LEARNING OBJECTIVES
Having worked through this chapter the students should be able to:
• Understand uncertainty in geomodelling.
1. INTRODUCTION
This chapter concentrates on the workflow of geomodelling. Since the 1990s, commercial computer-based geomodelling packages have become widely available for constructing geomodels, and they contain built-in suites of powerful analytical, geostatistical and visualisation tools to help in the construction. The intention in this chapter is to ensure that students understand the fundamentals of each step of the workflow, rather than to substitute for software user manuals.
Keeping in mind the eight cardinal rules of geomodelling will ensure the success of any geomodelling exercise.
3) Establish the key objectives of the modelling and design the model to meet these (fit for purpose).
4) Grid size must be linked to the required degree of detail one is trying to capture and to the problems that the modelling is addressing.
7) Upscaling always degrades input data; therefore avoid very fine geological grids, which subsequently need to be upscaled to a manageable dynamic grid size.
To ensure that the student grasps the key aspects of geomodelling, the chapter is
complemented by a tutorial run in parallel to this course, where the student will build
a simple geomodel and test it in a flow simulator.
Because geomodels are used as the primary input for flow simulators, it is imperative that all heterogeneities and reservoir characteristics likely to impact flow are accurately represented in the geomodel. Geomodelling and flow simulation are therefore key tools in determining the optimal development of a reservoir and its management, in order to recover the maximum amount of hydrocarbons by the most efficient, safe and economic means. Flow simulation allows production profiles to be modelled and the impact of key uncertainties and reservoir management strategies to be assessed using “what if” scenarios. Geomodelling and flow simulation are used in all stages of field development and management in order to obtain the following information:
\operatorname{div}(\rho V) + q = \frac{\partial (\phi \rho)}{\partial t}

V = -\frac{K}{\mu} \left( \nabla P - \rho g \right)
There are no analytical solutions to these partial differential equations, but they can
be solved by a numerical approach.
For a cell (i), conservation of mass over a time step gives

\sum_{e} Q_{i,e} + q_i = \frac{\Delta m_i}{\Delta t} = V_i \, \frac{\Delta(\phi \rho)_i}{\Delta t}

where the flow between cell (i) and each neighbouring cell (e) is

Q_{i,e} = \left( \frac{K S}{L} \right)_{i,e} \left( \frac{k_r \, \rho}{\mu} \right)_i \left( P_i - P_e + \rho g \, \Delta Z_{i,e} \right)

The discrete equations are then solved sequentially in time steps for each grid cell.
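A minimal sketch of this numerical approach for single-phase, slightly compressible 1D flow, explicit in time with uniform transmissibility (all symbols and values are illustrative assumptions):

```python
import numpy as np

def pressure_step(p, T, c_acc, dt, q=None):
    """One explicit time step of the discrete cell mass balance:
    sum of inter-cell flows + source = accumulation * dP/dt.

    p     : cell pressures
    T     : inter-cell transmissibility (taken constant here)
    c_acc : accumulation coefficient (V * phi * c) per cell
    q     : optional source/sink term per cell
    """
    flow = np.zeros_like(p)
    flow[:-1] += T * (p[1:] - p[:-1])      # inflow from the right neighbour
    flow[1:]  += T * (p[:-1] - p[1:])      # inflow from the left neighbour
    if q is not None:
        flow += q
    return p + dt * flow / c_acc
```

Because the inter-cell flows are antisymmetric, mass is conserved exactly at every step, and repeated stepping diffuses an initial pressure disturbance toward equilibrium.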
In multiphase fluid flow, there are three types of flow mechanism:
• Viscous flow
• Capillary imbibition
• Gravity
All three appear in the Darcy equation and are modelled in a flow simulator.
Q_o = \frac{ Q_t + K A \dfrac{k_{rw}}{\mu_w} \left[ \dfrac{\partial P_c}{\partial x} - \dfrac{\Delta\rho \, g \sin\theta}{1.0133 \times 10^{6}} \right] }{ 1 + \dfrac{k_{rw} \, \mu_o}{\mu_w \, k_{ro}} }

where Q_t represents the viscous forces, ∂P_c/∂x the capillary forces and Δρ g sinθ the gravity forces.
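The oil-rate expression above can be sketched as a function; the 1.0133 × 10^6 factor in the text is a unit conversion and is omitted here by assuming consistent units (all names are illustrative):

```python
def oil_rate(qt, K, A, krw, kro, mu_w, mu_o, dpc_dx=0.0, drho_g_sin=0.0):
    """Oil flow rate from the fractional-flow form above.

    qt          : total rate (viscous term)
    dpc_dx      : capillary pressure gradient term
    drho_g_sin  : gravity term (density difference * g * sin(dip)),
                  in units consistent with dpc_dx
    """
    mobility_w = krw / mu_w
    numerator = qt + K * A * mobility_w * (dpc_dx - drho_g_sin)
    denominator = 1.0 + (krw * mu_o) / (mu_w * kro)
    return numerator / denominator
```

With capillary and gravity terms set to zero and equal mobilities, the oil rate is half the total rate, as the viscous-only form suggests; an up-dip gravity term reduces it.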
Heterogeneities occur over a very wide range of scales, from the micron scale to the km scale, as shown in Figures 2 and 3.
[Figure 2 Types of reservoir heterogeneity: (1) sealing, partially sealing and non-sealing faults; (2) sedimentary units; (3) permeability zonation within a sedimentary unit; (6) micro-heterogeneities (Weber, 1986)]
[Figure 3 Scales of heterogeneity, from 1-10 km down through 1-10 m and 10-100 mm to 10-100 µm (after Pettijohn)]
Heterogeneities at all scales have some impact on flow. The difficulty is estimating
how the heterogeneities at the varying scales sum up to affect flow at the field scale,
and the ideal gridding size of the reservoir needed to capture the heterogeneities
which have the greatest impact on flow.
There are six main types of heterogeneities which affect horizontal and vertical
continuity in a reservoir:
Horizontal Continuity:
• Lateral continuity of reservoir/ flow units (facies changes, faults etc)
• Horizontal anisotropy of matrix properties
• High permeability horizontal drains in the matrix
• Fractures
Vertical Continuity:
• Vertical anisotropy of matrix properties
• Vertical barriers or baffles
• Fractures
To a certain extent, well tests, outcrop analogues and greater numbers of wells (especially if closely spaced) may fill some information gaps. However, the key method for closing the gaps and populating geomodels with representative petrophysical properties remains conceptual ideas based on sound geological models (sedimentary environment, tectonics and diagenesis).
Figure 5 Geomodelling: combining core, facies and well-test information with petrophysical data (NTG, φ, Sw, Kx, Ky, Kz, KR, Pc) at wells A, B and C
5. GEOMODELLING WORKFLOW
5.1 Introduction
The geomodelling workflow is illustrated in Figure 6 and entails the following main stages:
• Structural Modelling (horizons and fault mapping)
• Reservoir correlation and zonation
• Cell gridding (orientation, size)
• Facies modelling
• Petrophysical modelling (NTG, Phi, Kh, Kv, Sw)
• Upscaling and transfer to flow simulator
In the case of static data, one commonly refers to hard and soft data: the former are for the most part direct measurements, or computations from direct measurements, while soft data refers to inferred information which may be more tenuous and/or subjective, such as a geological model or seismic attributes. The various data types, together with their usage and/or necessary QC, are summarised in Table 1 below.
STATIC DATA
HARD DATA USE and/ or QC
Seismic (2D and/or 3D) - Structural maps and faults.
- Quality, processing, static corrections, multiples?
Well seismic (VSP) - Velocity model, seismic calibration
Checkshots - Synthetic seismograms
Velocity calibration Depth conversion
Wells - Coordinates, KB, deviation surveys etc
- Well location in the field, TD etc
Cores & core description/analysis - Facies description, depositional environment,
- Core Analysis (plug poro-perm & SCAL)
- Mineralogy
- Hydrocarbon shows
- Ensure depth shifting onto wireline logs
Mudlogs - Rock description and mineralogy
- Hydrocarbon shows and kicks
- Mudlosses, porepressure, etc
Sequence stratigraphic interpretation - Correlation and Zonation
- Biostratigraphy and/or Chemostrat
Wireline logs - Petrophysical properties, log typing
- Formation evaluation & Sedimentary environments
Dipmeter and Image Logs - Structural interpretation and coherence with
structural maps
- Fault identification
- Fractures
- Sedimentary features and environments
SOFT DATA
Geological Model - Depositional environment, Tectonic history,
diagenetic evolution etc
- Core description and wireline information
Seismic Attributes - Inversion, Attributes, AVO etc
- Proper calibration to geology?
DYNAMIC DATA
Well Tests, - Permeability height, barriers, connectivity, fluid
type, flowrates, reservoir pressure
PLT’s - Producing intervals and contribution
RFT - Reservoir pressure, pressure gradient, OWC and GOC
Production History - Reservoir behaviour (pressure changes), material
balance, production profiles, water-cut, GOR
- Calibration for static and dynamic model.
Fluid Samples - PVT data for hydrocarbons.
- Formation water sample
The data used in the modelling must always be QC'ed and validated prior to building a static or dynamic model, and their internal coherence checked. For example, does the OWC as seen on RFT coincide with the formation evaluation, mapped structural closure and well tests?
Typically, in the case of well data, a composite log montage showing wireline logs, formation evaluation results, core intervals and descriptions, core analysis results (plug poro-perm), perforation intervals, formation test intervals, stratigraphic and/or seismic markers, pressure data, biostratigraphic zonation etc. would be generated for each well, to allow a full QC.
6. STRUCTURAL MODELLING
As a general rule, depth conversion is done using interval velocities between seismic
markers (Figure 7). The wells themselves are the primary source of interval velocities
and act as anchor points in depth conversion. Velocities in rocks vary widely depending
not only on their mineral constituents (Figure 8), but also on their depth of burial,
porosity and the type of fluid in the pore space. This means velocity models can vary
quite significantly from well to well and in the space between them. Velocities used
in depth conversion come from acoustic logs, well check-shot surveys, as well as
stacking and migration velocities computed during seismic processing.
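As a simple illustration of depth conversion with interval velocities, the sketch below stacks layer thicknesses dz = v·Δt/2 down a time column (the layer table, units and function names are illustrative assumptions):

```python
def twt_to_depth(twt_ms, layers):
    """Convert a two-way time (ms) at a well to depth using interval velocities.

    layers : list of (base_twt_ms, v_interval_m_s) from shallow to deep.
    The target time is reached by accumulating layer thicknesses
    dz = v * dt/2 (two-way time halved to one-way, ms converted to s).
    """
    depth, t_prev = 0.0, 0.0
    for base_twt, v in layers:
        t = min(twt_ms, base_twt)
        depth += v * (t - t_prev) / 2.0 / 1000.0   # ms -> s, two-way -> one-way
        t_prev = t
        if twt_ms <= base_twt:
            break
    return depth
```

For example, with a 2000 m/s layer down to 1000 ms TWT and a 3000 m/s layer below it, a marker at 2000 ms converts to 1000 m + 1500 m = 2500 m, which is why velocity errors in the overburden propagate into every deeper horizon.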
[Figure 7 Depth conversion: interval velocity model (overburden and interval velocities V1-V3, upper and lower) between seismic markers, plotted against TWT (ms)]
Figure 8 Typical velocity ranges for common lithologies (m/s):
alluvium                 300 - 700
dry sand                 600 - 1850
wet sand                 1500 - 2000
argillaceous sand        1000 - 2000
clay                     1100 - 2500
marl                     2000 - 2500
sandstone                2000 - 3500
chalk                    2100 - 4200
anhydrite                3500 - 5500
basalt                   5000
dolomite                 3500 - 6900
granite                  4500 - 6000
gneiss                   3500 - 7500
salt                     4300 - 5500
compact clay and shale   3300 - 5500
Figure 9 schematically illustrates the range of structural uncertainty due to the seismic
picking and/or depth conversion error and how this affects the bulk rock volume
(BRV) computed from that structure.
Seismic interpretation and mapping must not only be coherent with the geological
setting (extensional versus compressional regime) but with all available information.
For example, mapped closure or spill point must be consistent with the OWC identified
from formation evaluation or RFT data at the wells. Sequence boundaries being time
lines, these must equate with mappable seismic markers and in some cases are key
inputs for correlations between wells.
[Figure: trap geometries and spill points for structures bounded by normal and reversed faults]
The interpretation of structural style must be coherent with the known regional
tectonic regime of the basin. Extensional basins are characterised by normal faults
and tilted fault blocks, while reverse faults occur in compressional regimes and flower
structures typically in wrench systems.
The sealing capacity of a fault can, however, be determined from well tests (Figure 12). Differing OWCs in adjoining fault compartments would be indicative of sealing faults, and differing pressure changes in adjoining fault compartments would also be indicative of sealing or partially sealing faults.
[Figure 12 Well test identification of fault sealing: a partially communicating fault inhibits lateral flow, while a sealing fault prevents lateral flow; pressure derivative responses shown on type curves (Badleys, 2004)]
Remember that sub-seismic faults (faults below seismic resolution) are also likely to be present in faulted reservoirs. These faults can, however, sometimes be identified directly in wells from cores and/or image and dipmeter logs (Figures 13 and 14).
Faults and compartmentalisation in a field have a major impact on flow and therefore
on its development and management:
• Limits of reserves (fault bounded structures)
• Connectivity and therefore production management
• Number of wells (producers & injectors)
• Sweep efficiency & pressure maintenance
• Secondary gas cap
[Figures 13 and 14 Fault recognition from dipmeter and image logs: fault plane dip with drag on the hanging wall and footwall, and a complex zone of faulting with a 45 m cut-out at the well (Cowan et al., 1993)]
In faulted reservoirs, great care must be taken where sections are repeated (compressional settings, such as thrusts) or lost (extensional settings, such as pull-apart basins) as a result of faulting. If these are not accounted for, reservoir sections may wrongly be inferred to be thickening or thinning between wells.
This is illustrated in Figure 15, where Well B has a loss of section due to a normal fault. Were it not for the fact that a fault was identified both on seismic and from the dipmeter log in the well, the reservoir section would have been modelled as thinning towards Well B, rather than being essentially isopach between Wells A and C.
Figure 15 Faulting and loss of section
Note also that slumping, such as identified from the image and dipmeter logs in the example in Figure 16, may also result in an apparent thinning or thickening of section at a well, which may be a localised feature rather than a general trend.
On a final note regarding faults and tectonic regime interpretation: be careful. Figure 17 shows how a repeat section may be encountered in a deviated well drilled in an extensional setting with normal faults, while Figure 18 illustrates how the same structural map may represent quite different tectonic regimes.
Faults and fractures are closely associated. Fractures are a fundamental consequence
of rock deformation (post-yield) controlled by the tectonic regime. Fractures are key
contributors to flow in many reservoirs, such as many of the supergiant carbonate fields
in the Middle East. Fracture distribution is typically stratabound within mechanical
units of competent rock separated by more ductile incompetent rocks such as claystones
and shales. Fracture density and distribution depends on several factors:
There are several ways of detecting fractures and identifying fractured reservoirs in
the subsurface. These are:
• Pressure build-up from formation test (derivative, negative skin)
• Production logs (PLT)
• Temperature (injection)
• Chemical tracers
• Fractures in Cores (not always good in vertical cores)
• Image logs (FMI, FMS, UBI, etc.)
• Wireline logs (caliper, resistivity (MSFL), microlog, sonic)
• Mudlogging (ROP, mud loss)
• Well Seismic - VSP (Velocity anisotropy)
Wireline logs, complemented with core and cuttings descriptions, are the main data for correlating reservoirs between wells. Log motifs and biostratigraphic data are the principal means of correlating wells. Where the biostratigraphy is barren, chemostratigraphy, which identifies correlatable signatures from heavy mineral concentrations or rare earth element concentrations and assemblages, can sometimes be used to refine reservoir unit correlations.
Reservoir correlation and zonation can be a relatively simple exercise in the case of layer-cake reservoirs, but it can be difficult depending on the complexity of the reservoir architecture, where rapid vertical and horizontal facies changes mean that correlation is difficult even between numerous and closely spaced wells. Scarcity and wide spacing of wells may also increase the uncertainty in correlations. The drilling of highly deviated or horizontal wells adds a further complexity, as the log response may be affected and log correlation may in some cases become impossible. A geological model, or some conceptual model of the facies distribution possibly integrated with seismic, may be required to properly correlate such wells.
There are two main types of correlation: Sequence Stratigraphic and Flow Unit correlations.
Correlations: three reservoir types and the well density required to correlate them deterministically.

Type 1:
a) Distinct layering with marked continuity and gradual thickness variation.
b) Layers represent sands deposited in the same environment of deposition.
c) Excellent log correlation showing gradual lateral changes in thickness and properties.
Approximate average data density required for deterministic correlation of major sand units: rectangular pattern, 1000 m spacing (1 well/km2); triangular, 1200 m (0.8); random (1.3).

Type 2:
a) Different sand bodies fitting together without major gaps; occasional low-permeability zones can occur locally between adjacent or superimposed sand bodies.
b) Reservoir architecture determination requires detailed sedimentological analysis.
c) Although the sand/shale ratio is high, correlation may be difficult without detailed facies interpretation.
Data density: rectangular, 600 m (3 wells/km2); triangular, 800 m (2); random (4).

Type 3:
a) Com… and lenses often appearing discontinuous in sections.
b) In 3D, interconnections exist locally, but in part only via thin low-permeability sheet sands.
c) Difficult log correlation even when well spacing is 400 to 600 m.
Data density: rectangular, 200 m (25 wells/km2); triangular, 300 m (13); random (32).
In this type of correlation, the notion of time-stratigraphic boundaries is not considered; rather, reservoir sections with similar flow properties (or lack thereof, in the case of barriers) are correlated. Flow unit correlations can and do cut across time lines, as illustrated in Figures 22 and 23.
[Figure 22 Correlations - Sequence Stratigraphy versus Flow Units: progradational parasequence set, comparing the chronostratigraphic correlation (datum: parasequence set boundary) with the lithostratigraphic correlation (datum: top of marine sandstone) across wells A-D (Van Wagoner et al., 1990)]
In the Progradational example (Figure 22), it is clear that even though the correlations
using Sequence Boundaries or Flow Units are very different, the computed OOIP from
both models would be much the same. Similarly, the connectivity of the reservoir is
essentially the same between both correlation models, except for some minor isolated
reservoirs, and flow simulation (and therefore reserves) would likewise be similar
when computed from either model.
However, in the case of the retrogradational model (Figure 23), although the OOIP would be very similar for both the Sequence Stratigraphic and Flow Unit models, the reservoir connectivity and flow behaviour would be very different. As such, reserves estimates would vary significantly between the two models.
Where correlations reflect isopach layer-cake successions, the trends on the LOC will be parallel (Wells A and B, Figure 26). In the case of an isopach correlation but with a missing section due to a fault, the trend on the LOC will be parallel but with an offset (Well C). If the bed thickness decreases, the slope of the LOC trend will decrease (Well D), and it will increase if there is a thickening. In the case of gradually increasing thickness with depth (Well E), the trend will gradually deviate from the reference trend to give a curved LOC, as shown in Figure 26.
[Figure 26 Correlation Tools - Line of Correlation (LOC) diagram: reference depth versus well depth (TVDSS) for Wells A-E, showing parallel trends, a fault offset, constant thinning and gradually curved trends]
Varying reservoir pressures and/ or gradients can also be used to correlate reservoir
units. Figure 27 shows such an example where two vertical barriers were correlated
in a field, solely on the basis of vertical pressure changes.
[Figure 27 Zonation and reservoir barriers or baffles: pressure versus depth (TVDSS) for Wells A, B and C, with a common OWC and depleted pressures identifying vertical barriers]
• Oil field with 3 major fault compartments but field hydraulically in equilibrium
• Northern sector put on production
• 5 years later wells B and C are drilled
• Change of pressure shows compartmentalisation
• Vertical barriers / baffles
Once the main reservoir correlations are picked, the reservoir is zoned in order to
capture subtler reservoir characteristics, such as high-permeability or tight, cemented
intervals. This may include zones of enhanced secondary porosity and permeability,
or zonation governed by the degree of dolomitisation, depending on the impact
dolomitisation has on poroperm properties. In other words, the correlation is refined
into zones with different poroperm and flow characteristics compared to the units
above and below.
Figure 27 shows a reservoir section where a modified Lorenz plot highlights high- and
low-permeability intervals and has been used for flow-unit zonation of the reservoir.
Flow unit zonation: reservoir section with GR, density/neutron, electrofacies, core porosity and core permeability tracks, blocked porosity, layering and the resulting flow-unit zonation
Highly deviated and horizontal wells are common these days and their correlation is not
always easy. It is very common for deviated/horizontal wells to be projected back
as TVD vertical wells to facilitate correlation. However, this is not always possible
in horizontal wells, since the horizontal section, sometimes hundreds of metres in
length, will be projected over less than one metre of true vertical section, or the section
intersected may not be representative of the point to which it is being projected back.
Furthermore, in the case of dipping beds, horizontal wells may move up section and
emerge back at the top of the reservoir, for example. Yet horizontal wells are often key
producers and must be properly correlated along their entire horizontal section, or
else the model may be seriously flawed.
8. GRIDDING
Armed with a structural interpretation and reservoir zonation for the field, the next
step is creating a grid, which will be used to construct the geomodel and populate
it with the reservoir properties. Since the key objective of a geomodel is to capture
heterogeneities that significantly impact flow, the grid orientation and size must be
chosen such that these heterogeneities can be effectively modelled. Although
modern geomodelling software allows models with up to 50 million cells, it must
be remembered that dynamic simulation grids are limited to between 200,000 and
500,000 cells depending on the reservoir complexity. Building very detailed fine-grid
geomodels to capture a high level of heterogeneity detail will require upscaling to a
coarser flow simulation grid, with the inevitable and often unquantifiable degradation
of the fine-grid data.
The geomodel and grid must be made fit for purpose and, ideally, geomodels should
be constructed using a grid size that will minimise or avoid upscaling when used as
input to a flow simulation.
At the moment most commercial geomodellers use orthogonal corner-point geometry
to generate grids, but limited PEBI gridding (hexagonal-shaped cell grids) now exists
and will become more widely available in the near future.
Key considerations when designing the grid are:
• Aquifer
• Production mechanism
• Pressure and saturation distribution
• Dynamic simulator (problems of convergence)
• Computation time
• Grid size linked to degree and size of key heterogeneities
• Maximum number of cells allowed (computing limitations)
• Gridding as regular and orthogonal as possible
• Grid size changes between adjacent cells should ideally be less than a factor of 2
(ΔXi+1/ΔXi or ΔYi+1/ΔYi)
• Always try to have more than 2 grid cells between wells
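The factor-of-2 rule of thumb above is easy to automate. The helper below is a minimal sketch (a hypothetical function, not part of any geomodelling package) that flags adjacent cell pairs along one grid axis whose size ratio breaks the rule:

```python
def check_cell_size_ratios(deltas, max_ratio=2.0):
    """Check that adjacent grid cell sizes change by less than max_ratio.

    deltas -- list of cell sizes along one grid axis (dX or dY).
    Returns a list of (index, ratio) pairs that violate the rule.
    """
    violations = []
    for i in range(len(deltas) - 1):
        big = max(deltas[i], deltas[i + 1])
        small = min(deltas[i], deltas[i + 1])
        ratio = big / small
        if ratio >= max_ratio:
            violations.append((i, ratio))
    return violations

# 50 m cells jumping straight to 150 m breaks the rule of thumb
print(check_cell_size_ratios([50.0, 50.0, 150.0, 150.0]))  # [(1, 3.0)]
```

The same check is run along both the ΔX and ΔY vectors of the grid.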
This is schematically illustrated on Figures 29 and 30.
Figure 29 Grid orientation and X versus Y cell lengths
Figure 30 Gridding: identifying the main flow units and fluid flow directions (sediment source, open fractures, active faults, aquifer) for each reservoir unit is important at the onset of the modelling
Besides the reservoir zonation discussed in the previous section, the zones can be
further subdivided vertically in the grid by what is commonly referred to as layering.
Layering can vary between different reservoir zones in a geomodel, depending on
the complexity of the reservoir morphology and properties that we are attempting
to capture for each zone.
Key geological and reservoir considerations when layering the grid are:
• The number of grid cells (layering greatly impacts the number of cells)
• The size of your model (cells or physical dimensions)
• Reservoir correlation and zonation
• Objectives of the model (full field versus sector or phenomenological)
• Complexity of the field
• Fluid being produced (oil/gas)
• Production mechanisms (depletion and sweep)
You may need to adapt your layering to accommodate and capture certain reservoir
anomalies or well behaviours, or for practical reasons linked to your wells or completions.
Typical reasons why you may adapt your layering are:
• Displacement fronts (sweep efficiency)
• Water coning or gas cusping
• High-permeability drains
• Baffles or vertical barriers
• Perforation intervals for production/injection, completions and even DSTs
• Production logging results (PLT)
• Accommodating horizontal wells
As shown in Figure 30, there is great flexibility in the way a reservoir section can be
layered and vertically gridded. It is clear that all the above conditions cannot possibly
be satisfied in the gridding of a reservoir. Compromise will therefore be necessary
to favour the criteria considered most important in the model.
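Because layering drives the total cell count, a quick back-of-the-envelope check against the simulation budget mentioned earlier is worthwhile. All the numbers below are invented for illustration:

```python
def total_cells(nx, ny, layers_per_zone):
    """Total grid cells for an nx-by-ny areal grid with the given
    number of layers in each reservoir zone."""
    return nx * ny * sum(layers_per_zone)

# Hypothetical 100 x 120 areal grid; three zones layered 10 / 25 / 5
n = total_cells(100, 120, [10, 25, 5])
print(n)             # 480000
print(n <= 500_000)  # True: within a typical simulation-grid budget
```

Refining one zone from 25 to 50 layers would double that zone's contribution and push the model past the budget, which is exactly the kind of compromise the text describes.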
1) Oblique
The grid cells terminate in a non-orthogonal fashion along the fault-plane. This
generates complex grid cell shapes along the fault trace, which can adversely
affect flow computation along and across the fault-plane.
2) Along I or J direction
The grid cells are aligned along the fault-planes to keep the cell grids along the
fault orthogonal.
3) Zigzag
The fault-plane is bent along the edges of grid cells in a zigzag pattern in order
to keep the grid cells orthogonal along the fault trace.
Variable-size Cartesian grid; local grid refinement; radial grid (around the well bore)
Figure 33 Petrophysical groups: sedimentological and lithofacies analysis assigns lithofacies F_01 to F_06, characterised by porosity (%) and permeability (md), to petrophysical groups Group_01 to Group_04; relative permeability curves (kro, krw versus water saturation Sw) and capillary pressure (Pc) are then associated with each petrophysical group
Using core data, cuttings and wireline logs, lithofacies can be identified, each lithofacies
typically falling within a Petrophysical Group as illustrated on Figure 33.
Examples of Lithofacies are:
• Coarse pebbly sandstone
• Sorted cross-bedded sandstone
• Massive well sorted sandstone
• Upward fining laminated sandstone
• Fine-grained argillaceous sandstone
The lithofacies identified from core and wireline data now need to be extrapolated to
the uncored sections of the well and to other non-cored wells in the field. This is done
by establishing a lithofacies predictor from wireline logs, a process known as Log
Typing. Lithofacies based on Log Typing are commonly referred to as Electrofacies
in order to differentiate them from those determined from core data. Again, Log
Typing and Electrofacies are not formal terms and usage will vary between workers
and companies.
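As a deliberately simplified sketch of log typing (real workflows use multivariate statistics or machine learning, and every number and facies name below is invented), a nearest-centroid rule assigns each uncored log sample to the lithofacies whose mean log response over the cored intervals it most resembles:

```python
import math

# Mean (density g/cc, neutron porosity frac) per lithofacies, taken
# from cored intervals -- invented values for illustration only
centroids = {
    "F_01_clean_sand": (2.40, 0.18),
    "F_02_shaly_sand": (2.50, 0.24),
    "F_03_shale":      (2.60, 0.30),
}

def classify(density, neutron):
    """Assign the electrofacies with the nearest (density, neutron) centroid."""
    return min(
        centroids,
        key=lambda f: math.dist((density, neutron), centroids[f]),
    )

print(classify(2.42, 0.19))  # F_01_clean_sand
```

In practice the discriminator would be calibrated and cross-validated against the cored wells before being applied field-wide.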
Figure 35 Lithofacies discriminator: cross-plot of density (2.25 to 2.80 g/cc) versus neutron porosity (0.00 to 0.40) used to discriminate lithofacies
Log blocking: GR, electrofacies, Phie, NtG and Kh (md) logs are upscaled into the reservoir layering, with the predominant facies attributed to each cell and average Phie (optionally biased for lithology), NtG and Kh computed per layer; cut-offs are honoured and high-Kh streaks preserved
Note that the values allocated to the cells by the well control become anchor points
and it is important to ensure that these are real and representative. In the case of badhole
sections (Figure 38), the computed petrophysical properties are not representative over
these intervals and must be excluded from the log blocking, as shown on Figure 39.
If the amount of badhole section is excessive, the well may become non-representative
and those intervals should be ignored.
Figures 38 and 39 Badhole sections: measurements over badhole intervals (here between the top reservoir and the OWC) must be excluded from facies and petrophysical computations and from the log blocking
Care must also be taken when using wells with reduced or missing sections due to
faulting or partial penetration. The reduced intersected section may not be representative
of the entire interval and should be ignored. Also remember that wells intersecting a
fault, or in close proximity to a fault, may be affected by diagenesis and may therefore
have petrophysical properties that are not representative of the reservoir away from
the fault (Figure 40).
Modelling of Facies and Petrophysical Properties
Figure 40 Wells A and B across a fault: Well A may not be representative of the facies volume and, if diagenesis is associated with the fault, may not have representative NTG, Phi and Kh away from the fault. The fractional volumes of the various lithofacies in the model need to be estimated:
• Usually per interval
• Beware of wells with sections missing
• Are wells intersecting faults representative - diagenesis?
Figure 41 Well trajectory through the 3D grid cells (cases a, b and c)
Finally, in the case of deviated wells, where a well may only intersect a small fraction
of a cell (Figure 41), cells based on such limited well data may be ignored in order
to avoid allocating non-representative values to them. Geomodelling software
packages have different log-blocking methods to deal with such problems, as shown
on Figure 41.
Facies diagnostics from log motifs: glauconite and shell debris indicate a high-energy marine setting (tidal channel, tidal sand wave, regressive barrier bar); carbonaceous detritus and mica (dumped) indicate a fluvial or deltaic setting (fluvial or deltaic channel, delta distributary channel, prograding delta or crevasse splay)
However, one must be aware that diagnostic log motifs may in some cases be variable
within the same depositional environment. Figure 44 illustrates how gravel lithofacies
and log motifs within a braided stream deposit can vary significantly simply as a result
of gravel composition (variations in lithic fragments: shales, sandstones, igneous
rocks, feldspathic rocks, coal fragments, evaporites etc). Core data may in such cases
be the only means of getting a proper interpretation.
However, log motifs can in most cases be used to interpret depositional environment.
Figures 45 to 48 illustrate diagnostic log motifs and major reservoir characteristics of
several depositional environments ranging from braided and meandering channels to
delta lobes and distributary mouth bars. Note that not only are log motifs important,
but the interpretation must also consider the lateral and vertical relationship between
the different sub-environments. Remember that certain geometries can be recognized
from seismic and if available, seismic must be integrated into the interpretation.
However from Figures 47 and 48, it is evident that without additional information,
it may be impossible to differentiate between the delta lobe and the distributary
mouth-bar. In this case however, the two environments are likely to have very similar
lithofacies distribution and would therefore not adversely affect the geomodel and
reservoir prediction whichever depositional model is adopted.
Figure 45 Diagnostic depositional environment from wireline logs
As mentioned above, log motifs on their own may therefore not be sufficient for
a clear diagnostic, requiring additional information to determine the depositional
environment. For example, the log motifs on Figures 49 and 50 display typical upward
coarsening cycles, one from a wave dominated delta and the second from a shoreface
environment.
Parasequences and Diagnostics from Logs
Parasequence boundary; distributary mouth bars with lenticular geometry
Remember that all interpretations must integrate all available information, and all
interpretations must be internally coherent.
In the example on Figure 53, we have a shoreline depositional setting and, as can
be seen from that example, wells A and B are likely to correlate very well even if
quite distant from each other, while the much closer wells B and C are likely to
have different lithofacies assemblages and be difficult to correlate. Uncertainty in the
interpretation and geological model is often a function of the number of wells, their
spacing and spatial distribution.
Depositional Environment and Paleogeography
Figure 53 Paleogeography of a shoreline depositional setting: lagoonal, fluviatile, upper shoreface and lower shoreface belts, with wells A, B and C (scale bar 1 km)
Dipmeter may be very useful in giving the orientation of channels or beach axes (Cowan et al, 1993).
Schematic cross-sections showing reservoir architecture and depositional elements, including a submarine turbidite channel with sandy levees (Lawrence et al, 2002)
The first estimate will come from the wells themselves, where the weighted average
percentages of electrofacies computed from the wells may be representative of the
field as a whole. Figure 57 shows an example where the weighted average percentages
of shale and sandstone for each layer have been computed from a number of wells. If we
are satisfied that these percentages are representative of the entire field, the geomodel
will be populated with the same percentages, as illustrated on Figure 57. However,
if the wells were drilled in an area of the field where one or more facies are over- or
under-represented, the percentages can be modified accordingly, either over the entire
section or selectively for any given layer(s).
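The thickness-weighted averaging described above can be sketched as follows (hypothetical helper and invented data, not tied to any particular package):

```python
def layer_facies_proportions(wells):
    """Thickness-weighted facies proportions for one layer.

    wells -- list of (interval thickness, {facies: fraction}) per well.
    """
    totals, total_thickness = {}, 0.0
    for thickness, fractions in wells:
        total_thickness += thickness
        for facies, frac in fractions.items():
            totals[facies] = totals.get(facies, 0.0) + thickness * frac
    return {f: v / total_thickness for f, v in totals.items()}

# Two wells penetrating the same layer with different thicknesses
wells = [
    (20.0, {"sand": 0.7, "shale": 0.3}),
    (10.0, {"sand": 0.4, "shale": 0.6}),
]
print(layer_facies_proportions(wells))  # sand and shale weighted by thickness
```

The resulting proportions per layer are then the targets handed to the facies modelling step, and can be overridden zone by zone if the wells are known to be unrepresentative.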
Facies Modelling
In the example of Figure 58:
• Are all the wells on the upthrown side representative of that segment?
• Are only 3 wells on the downthrown side representative of that segment?
• Consider a declustering approach
For example, in a field cut by a major growth fault such as schematically represented in
Figure 58, it is very likely that the geology and lithofacies on the upthrown side of the
fault will be different from that on the downthrown side. As such, only the lithofacies
and percentages from wells on the upthrown side of the fault should be considered in
populating that segment of the field, and only the wells on the downthrown side of
the fault considered as representative of the southern segment of the field. Note that
in cases where there is an insufficient number of wells to give a representative facies
percentage, analogues can be used to get better estimates.
Remember that facies proportions for each zone and layer are computed directly
from averaging the proportions seen in all selected wells. In cases where wells are
closely grouped together, the percentages will be strongly biased towards the well
clusters, which may not be representative of the whole field, as illustrated by the
upthrown segment in Figure 58. Techniques such as the appropriately named
declustering may need to be applied to remove this bias.
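One common variant, cell declustering, can be sketched in a few lines: wells are binned into coarse cells and each well is weighted by one over the number of wells sharing its cell, so clustered wells no longer dominate the average. The coordinates and cell size below are invented:

```python
def decluster_weights(xy, cell_size):
    """Cell-declustering weights: 1 / (wells sharing a coarse cell),
    normalised to sum to 1."""
    cells = [(int(x // cell_size), int(y // cell_size)) for x, y in xy]
    counts = {c: cells.count(c) for c in cells}
    raw = [1.0 / counts[c] for c in cells]
    total = sum(raw)
    return [w / total for w in raw]

# Three clustered wells near the origin and one lone well far away
xy = [(10, 10), (12, 11), (14, 9), (900, 900)]
print(decluster_weights(xy, cell_size=100))  # lone well 0.5, clustered wells 1/6 each
```

In full implementations the cell size itself is varied and chosen to minimise (or maximise) the declustered mean, but the weighting idea is the same.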
Having established the percentages of facies to be modelled, the way these are to be
interpolated between wells and distributed in space in the geomodel requires applying
geostatistical modelling techniques which are discussed next.
The variograms that produce the wanted distribution and clustering in 3D space
are defined by specific parameters: the variogram model (spherical, exponential,
Gaussian), the nugget, the sill and the correlation lengths. (Variograms are covered
in Section 15.) The variogram parameters are determined from several sources:
typically from well data, if sufficient well data exist for the variogram to be statistically
significant (which is often not the case). More commonly, facies models are based
on statistics taken from outcrop analogues (see AAPG Studies in Geology, No 50)
and/or published data that offer many different relationships, such as those shown on
Figures 59 and 60, which can be used for modelling different facies assemblages.
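For reference, the spherical model named above has a standard textbook form: gamma(h) rises from the nugget towards the sill and flattens at the range (correlation length) a. A direct transcription, with illustrative parameter values:

```python
def spherical_variogram(h, nugget, sill, a):
    """Spherical variogram gamma(h): zero at h = 0, rising from the
    nugget towards the sill, which is reached at the range h = a."""
    if h == 0:
        return 0.0
    if h >= a:
        return sill
    c = sill - nugget                      # structured variance
    return nugget + c * (1.5 * (h / a) - 0.5 * (h / a) ** 3)

# gamma climbs from the nugget towards the sill at the correlation length
for h in (0, 250, 500, 1000):
    print(h, spherical_variogram(h, nugget=0.1, sill=1.0, a=500))
```

Exponential and Gaussian models differ only in the shape of the climb towards the sill; the nugget, sill and range parameters play the same roles.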
When modelling complex facies associations in a reservoir with little well control,
and/or where it is difficult to correlate reservoir and sealing units between wells
(rapid spatial variability in the X, Y and Z planes), or where interpolation of reservoir
and sealing facies between wells may be difficult or impossible based on the
existing well data, we can revert to analogues to guide the modelling. Analogues are
based on detailed studies of modern depositional environments (present day fluvial,
deltaic or beach environments) or ancient sedimentary sequences exposed at surface.
These analogues will give key insight on how, for a given depositional setting, various
facies are stacked in 3D space, their continuity (vertical and horizontal) or lack
thereof, their proportions (percentages), their size (length, width and height), their
azimuth, and how they vary from proximal to distal settings. All that information can
be used as input, besides the well control, into the modelling exercise if we believe
that our reservoir is comparable to some present day or ancient outcrop analogue. For
example, if we have a reservoir consisting of a fluvial braided system, with many different
facies ranging from very high permeability streaks to silty overbank deposits and
thin shale intercalations, their vertical and horizontal stacking will greatly affect the
flow behaviour of the reservoir; yet even with a large amount of well control, only
an analogue study could guide the accurate modelling of the facies. Looking at
turbidite and fluvial system analogues will be the core objective of your field trip to
the Southern Pyrenees at the start of term 3.
Figure 59 Percentage of shale intercalations longer than a given length versus length of shale intercalation (ft) for different depositional settings: marine, deltaic barrier, delta fringe and delta plain, distributary channels and point bars (Weber, K.J., 1986)
Figure 60 Thickness versus width for channel and channel-belt sandbodies in the
Escanilla Formation at Olsón (from Dreyer et al., 1993).
When using pixel-based facies models, we require the following parameters: i) facies
proportions (as discussed in the previous section); ii) variograms for the different
facies; and iii) the facies ties to the wells in our model. Incoherence between these
parameters (for example, variograms which create geometries that cannot honour
the wells) means the simulation will fail to “converge” towards the correct solution,
and the geomodelling software will compensate by modifying the facies proportions,
which may end up significantly different from those wanted. Although some variation
is expected from one realisation to the next, the proportions should cluster close to
those we want, else a major error could be introduced into the model.
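That proportion check is easy to automate as a QC step on each realisation. The tolerance and facies proportions below are invented for illustration:

```python
def proportions_ok(realisation_props, target_props, tol=0.05):
    """Flag a realisation whose facies proportions drift more than
    tol (absolute fraction) from the target proportions."""
    return all(
        abs(realisation_props[f] - target_props[f]) <= tol
        for f in target_props
    )

target = {"channel": 0.30, "overbank": 0.55, "shale": 0.15}
print(proportions_ok({"channel": 0.28, "overbank": 0.57, "shale": 0.15}, target))  # True
print(proportions_ok({"channel": 0.18, "overbank": 0.65, "shale": 0.17}, target))  # False
```

Realisations failing the check point to incoherent variogram or conditioning parameters and should be investigated rather than silently accepted.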
Geobody templates (1) to (7) for object modelling, each defined by dimensional parameters such as height (h), width (d), length (I) and azimuth
The key requirements in object modelling are the selection of the geobodies (Figure
67), their orientation (azimuth), and their sizes in the X, Y and Z directions. Note that,
unlike pixel-based modelling, there are no variograms in object modelling. The selection
of geobodies and their parameters is determined from facies interpretations based on
well data (cores, wireline log patterns etc) and possibly high-resolution 3D seismic
if available. The parameters (percentages, size, orientation etc) for the geobodies,
as in pixel-based modelling, are typically based on outcrop analogues. Finally,
placement rules are equally applicable to object-based modelling.
Object-based modelling begins by placing the geobodies in their correct position
and correct intersection at the well bore (Figure 62). Modelling is then extended by
generating geobodies in the interwell area until the required proportions have been
reached. Because of the placement rules, geobodies are modelled in a pre-selected
order in order to allow (if wanted) one type of geobody to cut or erode into another
(Figure 62).
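The fill-the-interwell-volume loop described above can be caricatured in a few lines. Everything here (grid size, body shape, purely random placement, no well conditioning) is an invented toy, not a real geomodeller algorithm:

```python
import random

def fill_to_proportion(nx, ny, body_cells, target, seed=0):
    """Drop rectangular 'geobodies' at random until the sand
    proportion of the grid reaches the target fraction."""
    rng = random.Random(seed)
    grid = [[0] * ny for _ in range(nx)]
    w, h = body_cells

    def sand_fraction():
        return sum(map(sum, grid)) / (nx * ny)

    while sand_fraction() < target:
        i = rng.randrange(nx - w + 1)      # random areal placement
        j = rng.randrange(ny - h + 1)
        for di in range(w):
            for dj in range(h):
                grid[i + di][j + dj] = 1   # later bodies erode earlier ones

    return grid

g = fill_to_proportion(50, 50, body_cells=(10, 3), target=0.30)
print(sum(map(sum, g)) / 2500)  # at least 0.30, with a small overshoot
```

A real implementation would first place bodies honouring every well intersection, then accept or reject interwell bodies against placement rules, which is where the convergence problems discussed below arise.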
Figure 62 Conditioning data (sand-shale) (Srivastava, 1994)
Conditioning of the placement rules is possible with some geomodellers, such that
modelled objects can cluster or alternatively anti-cluster (Figure 63). Note that the
problem of convergence also exists in object based modelling and may be more
problematic than in pixel based modelling, because it may be difficult to get the
selected bodies and their given sizes to honour the wells and the required facies
proportions. This is all the more so as the number of wells increases.
Figure 63 Placement rules: random (neutral), attraction (clustered) and repulsion (anti-clustered) (Clemetson et al, 1990)
With the advent of 3D seismic and improved imaging, most geomodellers now have
the possibility of modelling objects directly from correlated seismic attributes. Figure
64 is an example where a high-amplitude event in a seismic cube (possibly correlated
with some lithofacies) is captured as a 3D object.
Figure 64 Object modelling extracted from seismic (Petrel, 2003)
In its simplest form, net-pay can be defined as those portions of a reservoir that contain
commercially producible hydrocarbons. There is therefore:
• Notion of moveable hydrocarbons
• Notion of commercial rather than technical limitation
The commercial factor means that cut-offs are subjective and may vary from one
company to the next and often from regulatory or governmental bodies. The SPE
Applied Technology Workshop (28-29 September 2000) in Dallas, which looked
specifically at the problem of cut-offs concluded:
• Net-pay (h) is one of the most important parameters used in geological mapping
and reservoir calculations.
• Its application includes: volumetric estimates of in-place hydrocarbons,
reservoir simulation, well test interpretation, fluid injection analysis, flow rate
estimates, geological modelling, well completions, stimulation design, and equity
determination.
• Unfortunately, the petrophysical, geological, and petroleum engineering literature
provide very limited insight and guidance into how net-pay should be defined
and computed and each application may involve different criteria.
• Net-pay means different things to different people.
• Current industry net-pay cut-offs are largely subjective, object oriented, and
based on local experience.
• There is no systematic industry guideline for selecting net-pay cut-off criteria.
• When discussing the concept of net-pay, a clear understanding of net-pay
definitions and the basis upon which it is calculated should be clarified (one
person’s definition of net-pay may not agree with that of another person).
• Net-pay determination is impacted by Darcy’s law and must consider such
items as fluid mobility, reservoir pressure (gradient), reservoir drive mechanism
(primary, secondary, or tertiary), and wellbore skin (stimulation effects).
• The cut-off should give net-pay heights which are coherent with the flow intervals
measured in actual well tests. Here again there is a risk that we may remove
some rock volume which failed to produce at the well-bore but does undergo
some depletion and therefore contributes to production.
• Note that in well tests, especially if these are short, the well may not have fully
cleaned up by the time the test is completed (variable amounts of formation
damage or skin factors) and as such some intervals which failed to flow during
the test could well have done so under normal conditions. The well in Figure
88 is such an example where some zones which on poro-perm properties were
expected to flow, failed to do so during testing as shown by the PLT log.
• Invasion profiles on resistivity logs, RFT’s and mud cake will all indicate
permeability, and as such, the selected cut-offs should be coherent with these
permeability indicators.
• Beware of thin beds within a shale rich interval. These may be masked on the
wireline logs due to limited vertical resolution, but may in fact contain some
very good producing horizons.
• If producing thin beds are proved to exist in such a shale rich setting (Figure
65), the net-pay can be approximated by:
Net Height = Gross Height * (1-VshAverage)
PhiAverage in Net-pay = PhiAverage in Gross Height / (1-VshAverage in Gross Height)
• Avoid cut-offs on water saturation.
• Cut-offs are often a major uncertainty in reservoir modelling and sensitivities
are necessary to determine their impact on hydrocarbons in place and reserves.
• The choice of a cut-off will impact both the NTG of the reservoir and the average
porosity in the net-pay
• Remember that cut-offs are based on moveable hydrocarbons and are therefore
a function of permeability and fluid mobility. Cut-offs will therefore vary for
different fluids.
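The thin-bed approximation quoted in the list above can be checked numerically. The function is a direct transcription of the two formulas; the gross height, Vshale and porosity values are invented:

```python
def thin_bed_net_pay(gross_height, vsh_avg, phi_avg_gross):
    """Approximate net height and net-sand porosity in a thinly
    laminated sand-shale interval, treating Vsh as the shale fraction."""
    net_height = gross_height * (1.0 - vsh_avg)
    phi_net = phi_avg_gross / (1.0 - vsh_avg)
    return net_height, phi_net

# 20 m gross interval with 40% average Vshale and 15% average gross porosity
net_h, phi_net = thin_bed_net_pay(20.0, 0.40, 0.15)
print(net_h, phi_net)  # about 12 m of net, with 25% porosity in the net sand
```

Note how the porosity in the net sand is restored upwards: the low gross average was diluted by the non-porous shale laminae.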
Cut-Offs
• In thinly interbedded/laminated sequences, thin beds with sometimes very good
permeability are not detected by wireline logs (high Vshale) and are excluded from net-pay.
It is relatively easy to compute porosity directly from well logs, and since there is
a correlation between porosity and permeability, the permeability threshold can be
converted to a porosity cut-off using a porosity-permeability cross-plot (Figure 66)
and applied readily to all wells. In the example on Figure 66, a porosity-permeability
function is shown, together with different cut-offs. These may be for fluids of
varying viscosity, or the different permeability thresholds could be part of a sensitivity
analysis on the same fluid (Min, ML, Max). There may, on the other hand, be some
uncertainty on the porosity-permeability function itself, so for a given permeability
cut-off, different porosity-permeability functions will yield different porosity cut-offs
(Figure 67). These functions may represent Min, ML and Max scenarios.
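Given a log-linear porosity-permeability function fitted to core data, the conversion of a permeability cut-off into a porosity cut-off is a simple inversion. The fit coefficients below are invented for illustration:

```python
import math

# Hypothetical fit to core data: log10(k_md) = a * phi_frac + b
a, b = 20.0, -4.0

def k_from_phi(phi):
    """Permeability (md) predicted from porosity (fraction)."""
    return 10.0 ** (a * phi + b)

def phi_cutoff_from_k(k_cut):
    """Invert the log-linear poro-perm function for the porosity cut-off."""
    return (math.log10(k_cut) - b) / a

print(phi_cutoff_from_k(1.0))   # 1 md cut-off  -> porosity cut-off 0.2
print(phi_cutoff_from_k(10.0))  # 10 md cut-off -> porosity cut-off 0.25
```

Repeating the inversion with the Min, ML and Max fit coefficients gives the spread of porosity cut-offs that Figure 67 illustrates.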
Figure 66 Porosity-permeability cut-offs and lithofacies (Hogg and others, 1996: Triassic sandstones, Sherwood Sandstone Group, Wytch Farm Field, Hampshire Basin, southern England; grain-size classes from coarse upper to very fine upper). The permeability cut-off is subject to hydrocarbon mobility (type and viscosity); a rule of thumb is 1.0 md for 30-35 API oil. Permeability cut-offs of 1 md and 4 md convert to helium porosity cut-offs of about 5%, 9% and 11%.
Figure 67 The same Wytch Farm porosity-permeability cross-plot with alternative porosity-permeability functions and permeability cut-offs (0.5 md and 1 md), yielding helium porosity cut-offs ranging from about 3% to 10% depending on the function used.
Porosity is used as a cut-off because of its link to permeability. In other words, the
porosity cut-off is simply a converted permeability cut-off (Figures 66 and 67). To
determine which volume of the reservoir should be considered as shale, we use the Vshale
property computed from logs (and calibrated to core data if possible). Cross-plots of
computed Vshale versus core permeability are sometimes used to determine a Vshale
cut-off. If this is not possible, Vshale is plotted against computed porosity to determine
the maximum acceptable Vshale for reservoir above the porosity cut-off.
On the basis of our established porosity and Vshale cut-offs, we can now compute
a new property called NTG, where NTG equals 1 if above the cut-offs and zero
otherwise.
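The NTG property just described reduces to a simple indicator per cell. The cut-off values below are invented placeholders:

```python
def ntg_flag(phi, vsh, phi_cut=0.09, vsh_cut=0.40):
    """NTG indicator: 1 if the cell passes both the porosity and
    Vshale cut-offs, 0 otherwise."""
    return 1 if (phi >= phi_cut and vsh <= vsh_cut) else 0

# (porosity, Vshale) per cell: good sand, tight sand, shaly interval
cells = [(0.12, 0.20), (0.05, 0.10), (0.15, 0.55)]
print([ntg_flag(p, v) for p, v in cells])  # [1, 0, 0]
```

Summing the flagged cell thicknesses and dividing by the gross interval gives the net-to-gross ratio whose sensitivity to the cut-off choice the text warns about.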
Quality-control checks on the model include:
• Is the core on depth? What shift to apply?
• Top reservoir: does it tie with the structural map?
• OWC: consistent with the closure on the structural map? Consistent with RFT pressures?
• Vshale: is GR a good estimator? If not, why not? What are the alternatives?
Figure 70 Core porosity versus computed porosity cross-plot (0 to 35%), comparing two alternative petrophysical models (Model 1 and Model 2)
The permeability of a rock is a function of its pore geometry, i.e. the pore space or
pore diameter and its interconnectivity via pore-throats, and the diameter of the pore-
throats themselves. This is in turn dependent on the mineralogy and texture (grain size,
sphericity, sorting and packing) of the reservoir rock. Figure 71 shows three cross-
plots of permeability versus porosity, pore diameter and pore-throat diameter for the
same set of core plugs. Not surprisingly, all three plots show a clear correlation
with permeability, with the best correlation and linearity between permeability and
pore-throat diameter.
Figure 71 Permeability (md) versus porosity (%), median pore diameter (µm), and median pore-throat diameter × porosity (µm) for core plugs from the Fontainebleau, Guadalupe and Mirador datasets. There is a trend between permeability and porosity, but a better one with pore diameter; the pore-throat plot gives the best fit (m = 2.0266, R² = 0.9599).
However, pore-throat diameter is difficult to compute from core plugs and impossible
to measure from wireline logs. Porosity, meanwhile, can be readily measured in core
plugs and computed from wireline logs. Modelling the porosity distribution in space
is also far easier than modelling pore-throat diameter, and if a relationship between
porosity and permeability can be established, permeability can be modelled as a
function of porosity.
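A common way to establish that relationship is an ordinary least-squares fit of log10(permeability) against porosity from the core plugs. The helper and the core-plug data below are invented for illustration:

```python
import math

def fit_log_perm(phi, k):
    """Least-squares fit of log10(k) = a*phi + b from core-plug pairs
    (porosity as a fraction, permeability in md)."""
    logs = [math.log10(v) for v in k]
    n = len(phi)
    mx, my = sum(phi) / n, sum(logs) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(phi, logs))
         / sum((x - mx) ** 2 for x in phi))
    b = my - a * mx
    return a, b

# Invented core-plug data lying exactly on a log-linear trend
phi = [0.10, 0.15, 0.20, 0.25]
k = [1.0, 10.0, 100.0, 1000.0]
a, b = fit_log_perm(phi, k)
print(a, b)                  # slope ~20, intercept ~-2 for these data
print(10 ** (a * 0.18 + b))  # predicted permeability at 18% porosity
```

Fitting in log space is the standard choice because permeability spans several orders of magnitude; the resulting function is then applied to the log-derived porosity in uncored intervals.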
A very useful catalogue of porosity and permeability cross-plots from core plugs in
siliciclastic rocks, authored by Philip H. Nelson and Joyce E. Kibler, exists on the
US Geological Survey website at http://pubs.usgs.gov/of/2003/ofr-03-420/. The bulk
of the porosity-permeability plots used in the ensuing slides were extracted from the
above-referenced dataset.
However, as shown on Figures 72 and 73, there are many factors that cause movement
along the porosity-permeability trend or away from the “normal” compaction curve.
These factors are:
• Compaction
• Grainsize
• Sorting
• Composition
• Cementation and types of Cement
• Dissolution
• Diagenesis
• Clay content
• Deformation and Fractures
Figure 72 Factors moving samples along or away from the porosity-permeability trend: compaction, sorting, grain size (gravel fraction and coarse grains versus fine grains), quartz content versus feldspar and lithic content, dissolution, cementation and clay content
Figure 73 Relationship between porosity and permeability for the different types of pore systems (Selley, 1988). Note that fracturing will enhance permeability dramatically for any type of reservoir.
Because we use mercury injection or helium gas to measure porosity and permeability
in core plugs, we assume that it is effective porosity that is measured. Core plug
measurements are our calibration points for establishing a porosity-permeability
function. However, these are normally measured at surface conditions, after drying
etc, which is not the same as at reservoir conditions. Corrections to permeability
can be made (Klinkenberg) or calibrated to reservoir conditions from permeability
measurements made under confining pressure. It is therefore important to consider
whether a compaction correction needs to be applied to the core poroperm data.
Figure 74 Effect of confining pressure on permeability for 17 samples, measured at 250 psi and at 2250 - 9000 psi (Keighin and others, 1989: Almond Formation, Greater Green River Basin; data set 33)
Because the porosity-permeability function established from core plugs will have to be extended to non-cored sections where porosity has been computed from wireline logs, it is important that the computed porosity is well calibrated to the core data (properly corrected for Vshale, for example). Alternatively, the uncertainty or error in the computed porosity must be quantified using core porosity versus computed porosity cross-plots (Figure 70), since over- or under-estimated porosities will impact the permeability accordingly.
We will now review some of the factors that affect porosity and permeability. The
effects from grain size and texture have already been covered in detail elsewhere
in this course (Reservoir Concepts and Reservoir Sedimentology Modules) and the
student should refer to those modules for more information.
[Figure (Nelson, 2004): Porosity (%) versus permeability (md, log scale 10^-4 to 10^4) cross-plot discriminated by sandstone classification (quartzarenites, subarkose, sublitharenite, litharenite, feldspathic litharenite, arkose and poorly consolidated sandstones). High primary quartz content gives good Kh even at relatively low porosity, while mixed clastics give good Kh only at high porosity; i.e. quartz arenites typically have better Kh than subarkoses or sublitharenites at the same porosity.]
Figure 75 shows a porosity-permeability cross-plot for sandstones of differing composition (quartzarenites to arkoses) as well as poorly consolidated sandstones. The clean sandstones (quartzarenites) display the best permeabilities, even at relatively low porosities. As the sandstones become progressively more arkosic or lithic, permeability becomes comparatively poorer, with the better permeabilities restricted to significantly higher porosities than in the quartzarenites. The unconsolidated sediments, meanwhile, show significantly higher porosities than the cemented sandstones, but permeability trends not unlike those of the cemented rocks. This again highlights that pore throat size, rather than porosity, is the key controlling factor in permeability.
Figure 76 shows a porosity-permeability cross-plot with three Petrophysical Groups correlating with different depositional environments. The three depositional environments have overlapping porosity ranges, but significantly different permeability ranges, which will give each Petrophysical Group a different porosity-permeability function.
[Figure (Atkinson and others, 1990): Porosity (%) versus permeability (md, 1-10,000) for the Permian-Triassic Ivishak Formation sandstones, Sadlerochit Group, Prudhoe Bay Field, Alaska, discriminated by fluvial setting: mid-braided stream (conglomerate), mid-distal braided stream, and distal fluvial and floodplain.]
• Depositional settings have differing energy environments
• This impacts the texture of the rock (grain size, sorting etc.)
• It therefore impacts the porosity-log(K) relationship
• Same porosity range but varying permeability range
Figure 77, on the other hand, shows how a single porosity-permeability function, albeit non-linear, may be used for three different Petrophysical Groups correlating with different depositional environments.
[Figure (Data Set 35): Porosity (%) versus permeability (md, 0.1-1000) for the Pennsylvanian 'Bartlesville sandstone', Glenn Pool Field, Northeastern Oklahoma Platform, Oklahoma (Kerr and others, 1989), with a single non-linear function fitted through the three groups. Both sandstones and carbonates show a positive porosity-permeability correlation (Selley, 1998).]
Figure 79 shows a single reservoir lithofacies split into three Petrophysical Groups
on the basis of different types of cement.
[Figure 79: Porosity (%, 0-30) versus permeability (md, 0.001-1) cross-plot with separate trends for the chlorite-cemented, calcite-cemented and quartz overgrowth-cemented facies.]
Because dolomite is denser than limestone, dolomitization effectively reduces the limestone matrix volume and therefore increases porosity. In theory, dolomitization may reduce the limestone matrix volume by as much as 12.5%.
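The quoted figure can be sanity-checked with handbook mineral properties: replacing two moles of calcite with one mole of dolomite shrinks the solid volume by roughly 13%, in line with the ~12.5% quoted. The molar masses and densities below are standard handbook values, not from the text.

```python
# Back-of-envelope check of the dolomitization volume reduction:
# 2 CaCO3 (calcite) -> CaMg(CO3)2 (dolomite) replaces a larger solid
# volume with a smaller one, creating porosity.
M_calcite, rho_calcite = 100.09, 2.71      # g/mol, g/cm3 (handbook values)
M_dolomite, rho_dolomite = 184.40, 2.87    # g/mol, g/cm3 (handbook values)

v_before = 2 * M_calcite / rho_calcite     # cm3 of calcite replaced
v_after = M_dolomite / rho_dolomite        # cm3 of dolomite formed
reduction = (v_before - v_after) / v_before
print(f"matrix volume reduction ~ {reduction:.1%}")
```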
Porosity histograms for input to the interwell modelling can be generated either directly from the porosity curve computed at the well or from Blocked data, for each reservoir lithofacies (Petrophysical Group) and per reservoir zone. In doing this, always check whether porosity distributions vary above and below the OWC. This is quite common, as diagenesis and porosity reduction commonly occur in the water leg, while the presence of hydrocarbons above the OWC retards diagenesis and preserves better poroperm properties. If there is a difference, porosities will have to be modelled separately above and below the OWC.
All cells with non-reservoir lithofacies (shales, for example) should by default be attributed zero porosity, zero NTG and zero permeability. In cells with reservoir facies, the average porosity must be computed using the net reservoir (above the porosity cut-off) of that facies only. The porosity distribution used to populate the reservoirs must be based solely on the intervals above the Vshale and porosity cut-offs. Each lithofacies (Petrophysical Group) must have its own porosity distribution.
Modelling NTG is quite different from modelling porosity. Firstly, the input curve from the log is discrete (1 or 0), unlike porosity, which is a continuous variable. There are two main ways to model NTG. These are:
2. Case where we cannot explicitly model the non-reservoirs. Imagine the reservoir case where we have two lithotypes as above, Shale (non-reservoir) and Sand (reservoir), with a 40% and 60% distribution. However, in this scenario we cannot explicitly model the shales (partially or in their entirety) because these occur predominantly as thin interbeds within the sand, well below any cell resolution. For the fraction of shale we cannot explicitly model, we have to include it within the sand, where we reduce NTG to account for the fraction of shale occurring as interbeds. In other words, the NTG in the Sand cells must account for both the porosity cut-off and the shale volume. As such, the model will display a larger number of cells with Sand, but as a whole the reservoir will still have an overall NTG of no more than 60%.
As for porosity, NTG histograms can be generated from the NTG Blocked data for each reservoir lithofacies (Petrophysical Group) and per reservoir zone. NTG modelled between wells is typically controlled by facies distribution (using the same variograms as for facies modelling), although these can be overprinted by diagenetic constraints independent of facies that may have affected NTG (depth of burial, uplift, tilting etc.). So NTG (and other properties, for that matter) may be affected by geological trends (other than facies), which we must incorporate in our modelling to accurately distribute the properties in the interwell volume. This is commonly referred to as “Property Conditioning”.
[Figure: Cross-plot of Gross E_Facies SST/Gross Thk (0.0-1.0) against Gross Thk (m, 0-80), showing a trend that can be used to condition NTG and porosity between wells.]
Another example of trend conditioning is where porosity and/or NTG are known to be affected by diagenesis. In that case, assuming a trend map has been established (degree of dolomitization, for example), the porosity and NTG should be conditioned to the dolomitization trend map of the reservoir.
[Figure 82: Schematic showing compaction and φ/NTG/Kh trend arrows across the growth-fault blocks.]
However, on the upthrown side of the growth faults, shallower water conditions and higher energy mean better porosity development. In addition, better groundwater circulation, together with the abundant salt in the fault plane, means the groundwater is likely to be contaminated by salt to create brines, resulting in extensive dolomitization and improved porosity. As such, porosity, NTG and permeability are likely to have trends as illustrated on Figure 82.
log K = a·φeff + b
However, the relationship between porosity and permeability can be more complex, and is sometimes better described by a polynomial or an exponential function. Figure 83 shows two such functions for the Fontainebleau Sandstone: one correlation for sandstones below 9% porosity and a second correlation for porosities above 9%.
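A piecewise predictor of this kind can be sketched with a least-squares fit of log10(K) against porosity on either side of the 9% break. The data below are illustrative, not the Fontainebleau measurements.

```python
# Sketch: fit log10(K) = a*phi + b separately below and above a 9%
# porosity break, mirroring the two-segment correlation in the text.
import numpy as np

phi = np.array([4, 6, 8, 10, 15, 20, 25], float)      # porosity, % (assumed)
k = np.array([0.05, 0.4, 3.0, 20, 150, 900, 4000])    # permeability, md (assumed)

def fit(mask):
    """Degree-1 least-squares fit of log10(K) on porosity."""
    a, b = np.polyfit(phi[mask], np.log10(k[mask]), 1)
    return a, b

a_lo, b_lo = fit(phi < 9)    # tight, heavily cemented segment
a_hi, b_hi = fit(phi >= 9)   # open-pore segment
print(f"low-phi:  log10(K) = {a_lo:.2f}*phi + {b_lo:.2f}")
print(f"high-phi: log10(K) = {a_hi:.2f}*phi + {b_hi:.2f}")
```

With data of this shape, the low-porosity segment has the steeper slope, which is why a single straight line fits poorly across the whole porosity range.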
[Figure 83 (Data Set 7): Permeability predictor. Porosity (%) versus permeability (md, 0.1-10,000) for the Oligocene (eolian and marine beach) Fontainebleau Sand, Paris Basin, France (Bourbie and Zinszner, 1985), with separate correlations below and above 9% porosity.]
For a given porosity value there is a distribution of permeabilities, which can vary by one or more orders of magnitude, as illustrated on Figure 84. Co-simulation of porosity and permeability may (in this example) be a better option, in that it would allow the simulation, for a given porosity, to pick a permeability value from a distribution. Co-simulation is discussed in Section 16.4.
[Figure 84: Permeability predictor for the Pennsylvanian-Permian Weber Sandstone, Rangely Field, Colorado (Bowker and Jackson, 1989). The fitted trend has an inverse slope of 3.3% and an intercept porosity at 1 md of 9%; at a single porosity, permeability spreads over roughly an order of magnitude (e.g. 0.25-30 md).]
[Figure (Data Set 53): Porosity (%) versus permeability (md, 0.1-10,000) for Pleistocene and Pliocene sands, Green Canyon 205 Unit, Gulf of Mexico, Offshore Louisiana (Reedy and Pepper, 1996).]
RQI = 0.0314 √(k/φ)

φz = φ / (1 − φ)

FZI = RQI / φz (1)

Where:
RQI = Reservoir Quality Index
φz = Normalized Porosity Index
Samples with the same FZI will lie on a straight line with unit slope on a log-log plot of RQI versus φz and have similar pore throat attributes, which equate to a hydraulic unit. Samples with different FZI will lie on separate parallel lines on the RQI versus φz log-log plot.
By re-arranging the FZI equation (1), lines of constant FZI or HU can be determined and plotted on a porosity-permeability plot (equation 2).
K = φ × [FZI × φ/(1 − φ) / 0.0314]² (2)
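The FZI workflow in equations (1) and (2) can be sketched per plug: compute RQI, the normalized porosity index and FZI, then regenerate K from FZI to confirm the rearrangement. The plug values below are illustrative; units follow the usual convention (K in md, φ as a fraction, RQI in microns).

```python
# Sketch of equations (1) and (2): FZI per plug, then K recovered
# from FZI as a consistency check on the rearranged equation.
import numpy as np

phi = np.array([0.12, 0.18, 0.25])        # porosity, fraction (assumed)
k = np.array([5.0, 80.0, 600.0])          # permeability, md (assumed)

rqi = 0.0314 * np.sqrt(k / phi)           # Reservoir Quality Index
phi_z = phi / (1.0 - phi)                 # Normalized Porosity Index
fzi = rqi / phi_z                         # Flow Zone Indicator

# Rearranged equation (2): K = phi * (FZI * phi_z / 0.0314)^2
k_back = phi * (fzi * phi_z / 0.0314) ** 2
print(np.round(fzi, 2), np.allclose(k, k_back))
```

Plugs can then be binned into hydraulic units by their FZI, and each bin given its own porosity-permeability line.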
[Figure (Hunter Williams & Straub, 2002): Three cross-plots of effective porosity (0-35%) versus Kh (md, 0.001-10,000): the raw core data cross-plot, the same data discriminated by facies, and the same data discriminated by hydraulic unit (HU).]
Having established permeability predictors, these can then be applied to all the wells in the field. Results should always be carefully QC'ed and validated before the computed permeabilities are used in the geomodel. To start, it is important that the computed porosities used as input to the permeability predictor are properly calibrated to the core porosities. Failing that, the permeabilities derived from the permeability predictor will be erroneous. QC can be carried out by simple graphic overlays of the core and computed poroperms to give a quick qualitative comparison. A cross-plot of core versus computed porosities (Section 14.2, Figure 70) will give a quantitative comparison, which can be used as a correction factor to ensure that the computed permeabilities tie with the core data.
[Figure: Side-by-side box plots of K plugs and K modelled (0-1000 md) per litho/electrofacies class (0-12). Plot Core_K and computed Log_K against lithofacies and electrofacies; the range of permeabilities should be the same between core and modelled values for each litho/electrofacies.]
However, the key validity check for the modelled permeabilities comes from well test data. Figure 88 is a well log display showing a single test interval with a number of perforation intervals and PLT results. The PLT is shown as the percentage contribution of each perforated interval to the total flow. Because a well test gives the permeability-height product, or KH, estimating the average permeability correctly from a well test depends on an accurate estimate of the contributing height.
[Figure 88: Well log display (GR, Phi, predominant facies, layering, formation test and perforation intervals) with PLT contributions per perforated interval (0% to 58%). KH DST: 1020 mD.m; KH Estimator: 1280 mD.m; H DST: 37 m; H cut-off: 57 m; K DST: 27.4 mD; K Estimator: 22.4 mD. NOTE: a well test gives you K.H, so an error in H can give a significant difference in K between the test and the estimator; in this example H from cut-offs is estimated at 57 m.]
In the example on Figure 88, although the Net Height seen by the PLT is different
from that estimated by the cut-off, the KH and average permeability based on the
permeability predictor and the well test closely match, validating the permeability
predictor.
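The validation arithmetic behind Figure 88 can be reproduced directly; because the test delivers the K·H product, the inferred average K depends entirely on the assumed net height. The inputs are the numbers quoted on the figure.

```python
# Arithmetic check of the Figure 88 well-test validation: K = KH / H.
kh_dst, h_dst = 1020.0, 37.0         # mD.m and m from the DST
kh_est, h_cutoff = 1280.0, 57.0      # mD.m and m from the predictor/cut-offs

k_dst = kh_dst / h_dst               # close to the 27.4 mD quoted
k_est = kh_est / h_cutoff            # close to the 22.4 mD quoted
print(round(k_dst, 1), round(k_est, 1))
```

Swapping the heights (e.g. dividing the DST KH by the 57 m cut-off height) would change the inferred K by roughly a third, which is the point of the NOTE on the figure.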
Remember too that in a well test, reservoir conditions are seldom, if ever, single phase. Assuming we are dealing with a virgin reservoir, water saturations will typically be at irreducible water saturation (Swi), which means that in an oil reservoir, for example, the effective permeability will be Kabsolute × Krow (the relative permeability to oil in the presence of water). Since Krow is usually less than 1, the permeability measured in a well test will therefore be lower than Kabsolute.
The modelling of permeability in the interwell space first entails Log Blocking of the well data (Section 11.0). In the case of horizontal permeability (Kh), the arithmetic average can be used to compute Kh in the blocked cell (Figure 89). This same example also highlights the importance of the reservoir layering and the major impact it can have on permeability modelling. We see, for example, that in the delta front layer there is a very high permeability streak (drain) at the top of the interval (Figure 89), around four orders of magnitude greater than in the rest of the interval. The average permeability calculated in this layer is representative of neither the drain nor the lower permeability reservoir below it, and will therefore fail to represent the real flow behaviour in that interval of the reservoir. A refined zonation, with the drain interval zoned separately, would solve this problem. Likewise, in the estuarine bar layer of this same example, the very low permeabilities (less than 0.01 md) are likely to be below the cut-off and should not be included in computing the average Kh of the cell. Note, however, that this very low permeability interval will greatly impact the vertical permeability and must be included when computing vertical permeability (Kv).
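The asymmetry described above can be sketched with a handful of illustrative plug values: the arithmetic mean (net reservoir only) gives the blocked Kh, while the harmonic mean over all samples, including the low streak, gives a Kv estimate that the thin tight interval dominates.

```python
# Sketch of blocking a layer's permeability (illustrative values):
# arithmetic mean for horizontal flow, harmonic mean for vertical flow.
# Below-cut-off samples are dropped from Kh but kept in Kv, where the
# single tight streak controls the result.
import numpy as np

k = np.array([500.0, 450.0, 0.005, 300.0, 250.0])   # md, one tight streak
cutoff = 0.01                                       # md, permeability cut-off

kh = k[k >= cutoff].mean()            # net-reservoir arithmetic mean
kv = len(k) / np.sum(1.0 / k)         # harmonic mean over ALL samples
print(f"Kh ~ {kh:.0f} md, Kv ~ {kv:.3f} md")
```

One sample at 0.005 md collapses Kv by four orders of magnitude while barely affecting Kh, which is exactly why the tight interval must stay in the vertical average.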
[Figure 89 (Ringrose, 2003): Digital permeability log (ln(permeability) versus depth) through an upward-coarsening deltaic sequence (distal delta lobe, delta front, estuarine bar), with blocked mean and standard deviation per layer. The delta front contains a very high permeability streak (a typical drain): the computed mean is representative of neither the high nor the intermediate Kh. The low Kh values in the estuarine bar should not have been included in the Kh computation since they are below the cut-off and therefore non-reservoir (compute Kh in net reservoir only); this low Kh interval will, however, have a big impact on the Kv/Kh of the reservoir.]
As for other reservoir properties, permeability histograms for input to the interwell modelling can be generated directly either from the permeability curves computed at the wells by the Permeability Predictor or from the upscaled permeability (Blocked data). These distributions are generated for each reservoir lithofacies (Petrophysical Group) and per reservoir zone. Non-reservoir facies such as shale will be given zero Kh. Since there is a dependence between porosity and permeability, horizontal permeability is typically modelled with the same variograms used for populating porosity and by collocated kriging with porosity (see Section 16.4 on collocated kriging).
[Figure 90: Cross-plot of core Kv against core Kh (0.01-10,000 mD) for wells NKSM1, NKSM2 and NKF101, with Kv/Kh = 0.6 and Kv/Kh = 0.04 trend lines, and a histogram of Kv/Kh frequency per lithofacies (LithoFacies 1-3).]
Method (1): For each Petrophysical Group (lithofacies), establish Kv/Kh from core measurements (in this example Kv/Kh = 0.6) and compute a Kv curve from the computed Kh.
Method (2): Compute the geometric or harmonic mean of Kh in each cell at the wells to establish Kv.
Method (3): Draw histograms of Kv/Kh using Log Blocked values of arithmetic mean Kh and harmonic mean Kh (an estimate of Kv), for each Petrophysical Group (lithofacies).
• Method 1
If you have representative and statistically sufficient vertical permeability measurements from core plugs (quite rare in reality), a Kv/Kh ratio can be established from a cross-plot against the horizontal permeability measurements (Figure 90). If there is sufficient core plug data, this Kv/Kh ratio can be established for different lithofacies or Petrophysical Groups. Multiplying the Kh curve (computed from the permeability predictor) by the derived Kv/Kh ratio will allow a Kv curve to be computed at the wells. Note that in core analysis, sampling is usually biased towards shale-free intervals, avoiding non-reservoir intervals or those with extremely low permeabilities, which typically have the main impact on Kv. As such, measured Kv in cores is often optimistic, and Kv/Kh ratios established from cross-plots between core Kv and Kh must be seen as a best-case scenario.
Using this established Kv/Kh ratio (for each lithofacies), we can derive a Kv curve in the wells and upscale it in Log Blocking. Kv can now be modelled between wells for each lithofacies using the same variograms used for populating Kh, together with collocated kriging to Kh (see Section 16.4). Cells with non-reservoir facies will be attributed a Kv of zero.
• Method 2
Using Log Blocking (Section 11.0), a Kv average is computed from the harmonic average of the Kh curve generated by the Permeability Predictor. The blocked Kv's are then cross-plotted against the blocked Kh to give a “blocked” Kv/Kh ratio for each of the lithofacies in the model. This ratio is then applied to the modelled Kh values over the entire geomodel. Once again, as for Method 1, the Kv/Kh ratio must be considered a best-case scenario.
• Method 3
As for Method 2, using Log Blocking, compute facies-biased Kh and Kv by arithmetic and harmonic averaging respectively. Generate histograms of Kv/Kh for each lithofacies and co-simulate Kv/Kh and Kh. Generate Kv in the geomodel by multiplying the two properties.
• Method 4
We know that for clastics, porosity is a function of grain size, with the coarsest and cleanest sandstones typically displaying the highest porosities, and the low porosity sands having the smallest grain size and greatest shaliness. This explains the decreasing permeabilities in low porosity sands. However, because shales tend to be deposited horizontally, often as thin but extensive sheets, they will act as vertical barriers or baffles, greatly affecting Kv. Modelling Vshale in the reservoirs can be used to refine the Kv modelled in Methods 1 to 3, by accounting for the shale fraction in the sands and its impact on Kv. Accounting for Vshale can be done using a simple formula (or variations thereof) such as:
Kv = (1 − Vshale) × Kv
where the Kv on the right comes from Methods 1 to 3 above. There is no change to Kv when Vshale is 0, but Kv decreases as Vshale increases.
It is also possible to model Kv directly from the modelled Kh and Vshale, using a similar formula:
Kv = (1 − Vshale) × (Kv/Kh) × Khmodelled
where Kv/Kh is the constant ratio established in Methods 1 or 2 above and Khmodelled is the modelled horizontal permeability. Maximum Kv occurs when Vshale is 0, and Kv decreases as Vshale increases.
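The second Method 4 formula can be sketched as a small function; the Kv/Kh ratio and Kh value below are illustrative inputs, not field data.

```python
# Sketch of the Method 4 correction: scale the Kv/Kh * Kh product by
# (1 - Vshale) so thin shale interbeds suppress vertical permeability.
def kv_from_vshale(kh_modelled, kv_kh_ratio, vshale):
    """Kv = (1 - Vshale) * (Kv/Kh) * Kh_modelled, as in the text."""
    return (1.0 - vshale) * kv_kh_ratio * kh_modelled

print(kv_from_vshale(100.0, 0.6, 0.0))   # no shale: the full Kv/Kh product
print(kv_from_vshale(100.0, 0.6, 0.5))   # half shale halves the result
```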
Ultimately, the choice of permeability modelling method (Kh and/or Kv) depends on the data available and the modeller's preferences. However, the modelled permeabilities should be coherent with the core data, the geological model and, especially, the dynamic data when available. For example, the wells in the model should have flow rates matching those measured during well tests.
Impedance
Seismic inversion is also commonly used to interpret the geology and petrophysical properties. The principle is simple. When an energy pulse in the form of an acoustic wave is sent into the subsurface, the reflected energy (and therefore the seismic image) we record at surface is directly dependent on the acoustic impedance of the layers making up the subsurface (Figure 92). Assuming we have a good idea of the wavelet, we can then invert the seismic with this wavelet and obtain an impedance contrast volume. Assuming we know the first order reasons for the impedance contrast (well calibration), we can model the petrophysical properties of the subsurface.
[Figure 92: Schematic of seismic acquisition (energy sent into the earth, seismic recorded at surface) and reservoir characterization (acoustic impedance extracted back into an earth model).]
h = Pc / ((ρw − ρo) · g)
Where h is height above the FWL in metres, Pc is capillary pressure, ρw and ρo are the water and oil densities in kg/m³, and g is the gravitational acceleration of approximately 9.81 m/s². (With densities in kg/m³, Pc must be expressed in Pa for h to come out in metres.)
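The height formula can be sketched with explicit units; the capillary pressure and fluid densities below are illustrative values, with Pc converted from bar to Pa so that h comes out in metres.

```python
# Sketch of h = Pc / ((rho_w - rho_o) * g) with explicit unit handling.
def height_above_fwl(pc_bar, rho_w, rho_o, g=9.81):
    """Height above the FWL in metres; densities in kg/m3, Pc in bar."""
    pc_pa = pc_bar * 1.0e5          # bar -> Pa
    return pc_pa / ((rho_w - rho_o) * g)

h = height_above_fwl(0.5, 1050.0, 800.0)   # assumed brine/oil densities
print(f"h = {h:.1f} m")                    # ~20 m for 0.5 bar of Pc
```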
Swi is reached above the transition zone and remains constant above that level. The saturation height curve can be entered directly into the model as a function to model Sw. Note that each Petrophysical Group will have its own saturation versus height curve, as illustrated on Figure 93. This figure illustrates how the transition zone and irreducible water saturation (Swi) progressively decrease with improving poroperm properties.
[Figure 93: Saturation versus height curves. Sw (%) against height above FWL (m, 0-80) for nine Petrophysical Groups; the transition zone and Swir progressively decrease with increasing porosity and permeability.]
Above the OWC, Sw can either be modelled as a constant Swi depending on its Petrophysical Group, or, if we have reliable computed Sw curves from the well logs and Sw distribution histograms, we can model Sw with collocated kriging to porosity. This ensures higher Sw's are modelled in lower poroperm reservoirs and vice versa in good poroperm reservoirs.
In rocks with very good to excellent poroperm properties (25% porosity and a hundred or more millidarcies of permeability), the transition zone may be small and is sometimes ignored. In such cases, Sw is modelled solely on the basis of the distribution of Swi.
15. GEOSTATISTICS
In broad terms, geostatistics can be defined as “the branch of statistical sciences that studies spatial/temporal phenomena and capitalizes on spatial relationships to model possible values of variable(s) at unobserved, unsampled locations” (Caers, 2005).
A deterministic variable is one whose value is known exactly (the number of days in a month, for example); a stochastic variable can only be described by a distribution (monthly rainfall, for example).
The study of a single variable is known as univariate analysis. This typically entails trying to determine the distribution of the population, which involves taking a sample whose distribution is believed to be representative of the population. Variability is displayed as a histogram or a frequency distribution (Figure 95).
These distributions will have different shapes (Normal or Gaussian, Log Normal and many others) that can be mathematically described with their own specific characteristics, such as mean, standard deviation, variance, coefficient of variation, skewness, median, mode etc. Any distribution can readily be transformed into a probability distribution function (pdf), which is a key tool in determining the expected value of a variable.
[Figure 95: Univariate analysis. Two frequency distributions f(v) (0-1.0) against v, with different shapes.]
It must be noted, however, that variables are not always independent (average rainfall and season, for example). Determining how one variable relates to another is called bivariate analysis.
Porosity and permeability are a typical bivariate pair, as illustrated on Figure 96. Computing statistical parameters such as regression coefficients and covariance allows us to quantify the degree to which two variables are related. The problem with covariance is that it is not dimensionless, making comparison between different pairs of variables difficult.
[Figure 96: Porosity (%) versus permeability (md, 0.01-1000) for the Pennsylvanian-Permian Weber Sandstone, Rangely Field, Colorado (Bowker and Jackson, 1989): searching for a function relating the two variables using regression and covariance. Inverse slope = 3.3%; intercept porosity at 1 md = 9.4%.]
Covariance = (1/N) Σ (vi − mv)(wi − mw), summed over i = 1 to N

Normalising the covariance by the two standard deviations gives the dimensionless correlation coefficient:

ρ = Covariance / (σv · σw), with −1 ≤ ρ ≤ +1
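The normalisation step can be sketched on a small illustrative porosity/log-permeability sample: covariance carries the units of both variables, whereas the correlation coefficient is bounded and comparable across variable pairs.

```python
# Sketch: covariance and its dimensionless normalisation, the
# correlation coefficient. The data pairs are illustrative only.
import numpy as np

phi = np.array([5.0, 10.0, 15.0, 20.0, 25.0])   # porosity, %
logk = np.array([-1.0, 0.2, 1.1, 2.0, 3.1])     # log10 permeability

cov = np.mean((phi - phi.mean()) * (logk - logk.mean()))
rho = cov / (phi.std() * logk.std())            # dimensionless, in [-1, +1]
print(round(rho, 3))                            # close to +1: strong pair
```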
[Figure: Three traces of a property: a very erratic variable, a moderately continuous variable and a continuous variable. Sampling of a variable on its own is not necessarily sufficient to describe its spatial continuity; we need to statistically quantify the spatial continuity.]
Where:
λα = kriging weight
For this computation to have any meaning, the data must, in statistical terms, display stationarity. This means the data points used in computing our estimate must all have similar statistical properties. For instance, in a reservoir it would be nonsense to compute estimates in some interval using sample points that came from an underlying zone with different statistical properties to the interval where we are computing our estimates.
1. The closeness of the sample points relative to the unsampled point. However, because of spatial correlation, this does not necessarily equate to some inverse distance factor. This is illustrated on Figure 99. In that example we note a northeast-southwest continuity. Sample z(u1), although closest to the unsampled point z*(u), should carry far less weight than sample points z(u2), z(u3) and z(u4) in computing z*(u), even though these are much further away from z*(u).
Figure 99 Map of some property with an underlying spatial continuity. We note a distinct northeast to southwest spatial correlation of the property.
• Sample points z(u2), z(u3) and z(u4) are clearly more relevant to estimating z*(u) than sample point z(u1)
• Sample points z(u2) and z(u3), being very close together, have a high redundancy
2. Data redundancy. Taking the example on Figure 99, sample points z(u2) and z(u3), being close to each other, correlate strongly and therefore have a high redundancy when computing z*(u). These two sample points should therefore have a combined weight equivalent to point z(u4).
This is illustrated in Figure 100, where we can compare a weighting based purely on inverse distance with kriging.
Inverse distance: λ1 = 1/3, λ2 = 1/3, λ3 = 1/3. Kriging: λ1 = 1/4, λ2 = 1/4, λ3 = 1/2.
Figure 100 Kriging. Example of data redundancy, where λ2 and λ1 are given a combined weight equivalent to λ3.
The geostatistical method that uses a generalized least squares regression to compute z*(u), together with weighting factors (λα) based on spatial correlation and redundancy, is known as kriging.
Variograms
A variogram is a statistical tool that describes the spatial variation of a variable (a petrophysical property, for example). It assumes that closely spaced samples are likely to be correlated compared with more widely spaced samples. What the variogram establishes is the distance between sample points beyond which there is no correlation between them. This correlation distance can be, and often is, anisotropic in the X, Y and Z directions. A key prerequisite for a variogram is stationarity of the data from which it is determined (see Chapter 1). This means that the local mean must be the same as the global mean.
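An experimental semivariogram can be sketched for a one-dimensional transect: the semivariance γ(h) = (1/2N(h)) Σ (z(x) − z(x+h))² grows with lag distance until it flattens at the sill. The transect values below are illustrative.

```python
# Sketch of an experimental semivariogram on a 1-D transect:
# gamma(h) = (1 / 2N(h)) * sum over pairs of (z(x) - z(x+h))^2.
import numpy as np

z = np.array([10.0, 11.0, 12.5, 12.0, 14.0, 15.5, 15.0, 16.0])  # assumed

def semivariance(z, lag):
    """Average half-squared difference of all pairs separated by 'lag'."""
    d = z[lag:] - z[:-lag]
    return 0.5 * np.mean(d ** 2)

for h in (1, 2, 3):
    print(f"gamma({h}) = {semivariance(z, h):.2f}")
```

For spatially continuous data like this transect, γ(h) increases with lag; the lag at which it levels off is the range, and the plateau is the sill.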
[Figure: Schematic variogram showing increasing variability with distance up to the range, beyond which there is no correlation (the sill), with a nugget at zero separation.]
Three main variogram models are used in stochastic modelling: Gaussian, spherical and exponential (Figure 103).
[Figure 103: Normalised semivariance against distance for the exponential, spherical and Gaussian models, each with the same sill and range. Figure 104: Variograms of mountain-range topography Z(x) at different scales: (a) at a 500 km scale, total randomness (pure nugget); (b) and (c) intermediate cases; (d) at a 50 m scale, completely predictable organisation, giving a parabolic variogram with a slope.]
this trend (slope of the flank) would yield a variogram like Figure 104-b. Finally, at the closest range (Figure 104-d) there is no randomness at all (fully correlated and predictable) and we get a parabolic variogram. For this last case there would be no requirement for stochastic modelling, and a deterministic model could be used instead.
Kriging
In kriging, properties between sample points are modelled with a spatial variability defined by a variogram. This is illustrated on Figures 105 and 106, where the spatial variability between two sample points is modelled. In Figure 105, isotropic variograms with progressively longer ranges are used, while in Figure 106 we have the same two sample points, but anisotropic variograms with progressively longer ranges are used for kriging. From these two examples we can see how kriging with different variograms affects the spatial distribution of the estimates between the sample points.
[Figure 105: Kriging with a Gaussian, isotropic variogram at ranges of 100 m, 250 m, 500 m and 1000 m.]
Figure 107 is another display of kriging results using different variogram models, nuggets and ranges. Note that exactly the same data set is used in all cases.
From the examples on Figure 107, we note how local variation in the spatial distribution increases progressively as the variogram model goes from Gaussian to exponential, as the range decreases, or as the nugget effect increases. This is because in all cases the correlation length is effectively decreased, allowing widely contrasting values to be modelled in close proximity to each other (a checkerboard pattern). This may not be realistic for certain variables, such as porosity or permeability, which are likely to have a smoother distribution from high to low values.
Figure 108 illustrates different types of kriging used to map seven sample points from a twin population of channel and inter-channel lithofacies. The results from normal kriging using isotropic and anisotropic variograms are shown, with the anisotropic kriging probably giving a better match to the original population. Also displayed is kriging with an external drift, and collocated kriging, where the spatial distribution of the channel and inter-channel lithofacies is conditioned to that of a secondary variable.
[Figure 108: Types of kriging. A population of channels and overbanks with seven sample points, mapped by isotropic kriging, anisotropic kriging, kriging with an external drift/trend, and collocated kriging to a secondary variable.]
Kriging is fine for modelling if the sampled properties are representative of the population, such as the continuous variable shown on Figure 98. If, however, properties have a complex spatial distribution and are not fully represented by the samples, stochastic simulation can be used. In stochastic simulation, in addition to kriging using the sample values and their variograms, the population's property distribution is also used as input. Properties modelled this way look more realistic than those from deterministic methods or simple kriging, and will honour the population's distribution better. However, be aware that this is stochastic modelling: each run will generate a new but equiprobable output, which may look quite different in the interwell space from one run to the next. In other words, do not drill a “high” based on the outcome of a single stochastic model.
There are two main types of sequential stochastic simulation: Sequential Indicator Simulation (SIS), which simulates discrete variables such as facies, and Sequential Gaussian Simulation (SGS), which simulates continuous variables such as porosity or permeability.
[Figure 109: SIS schematic. Three sample points with indicators 1(0), 1(0) and 0(1) surround an unsimulated node (?); the simulated value is then used in the next predictions.]
In the example on Figure 109, we are considering only two variables (shale and sand). These are converted into categories 0 and 1 (there could, of course, be more than two categories). Assume a global proportion of 50% for each facies (category), and also assume that we know the variograms describing each variable's spatial distribution. There are three sample points with either category 1 or 0, as shown on Figure 109. A node or cell is selected at random and, by indicator kriging, the local probability of each indicator is estimated at that cell and normalised to give a total local probability of 1. This turns out to be 0.8 for indicator 1 and 0.2 for indicator 0 in the example. A local cdf is generated and sampled, giving an 80% probability of selecting indicator 1 and 20% of selecting indicator 0. Once the indicator is selected, this node becomes a new data point and is included in the computation at the next random cell, where the process is repeated. This cycle continues until the entire estimation grid is simulated.
It is evident from this, that the final outcome from one SIS simulation to the next will
be different but equiprobable, since it will honour the sample points, variograms and
global proportions from one simulation to the next.
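The cycle described above can be sketched in a few lines of Python. This is an illustrative toy only: real SIS computes the local probabilities by indicator kriging with the facies variograms, whereas here a crude inverse-distance weighting of nearby indicators stands in for it, and the grid size, hard data and global proportion are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 1D grid: 0 = shale, 1 = sand; three hard data points (cell index -> facies).
n_cells = 20
hard_data = {2: 1, 9: 0, 16: 1}
global_p_sand = 0.5  # global proportion of facies 1

facies = np.full(n_cells, -1)
for idx, val in hard_data.items():
    facies[idx] = val

# Visit the unsimulated cells in random order (the "random path").
path = [i for i in range(n_cells) if i not in hard_data]
rng.shuffle(path)

def local_p_sand(cell, facies):
    """Crude stand-in for indicator kriging: inverse-distance weighting of the
    known indicators, blended with the global proportion."""
    known = np.flatnonzero(facies >= 0)
    w = 1.0 / (np.abs(known - cell) + 1.0)
    p = np.sum(w * facies[known]) / np.sum(w)
    return 0.5 * p + 0.5 * global_p_sand

for cell in path:
    p = local_p_sand(cell, facies)
    # Sample the local cdf; the outcome becomes hard data for later cells.
    facies[cell] = 1 if rng.random() < p else 0

print(facies)
```

Running the sketch with different seeds gives different but equally probable facies arrays, each honouring the three hard data points, which is exactly the behaviour described above.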
Figure 110 Normal score transform: a value z on the cumulative distribution F(z) is mapped to y = Φ⁻¹(F(z)) on the standard normal cdf G(y)
Figure 111 Sequential Gaussian Simulation: transform the data to a normal distribution N(0,1), compute the kriging estimate and error at a random cell, sample the local pdf, then back-transform the simulated values
The first step in SGS is transforming the global probability distribution function of the variable into a normal distribution with mean = 0 and variance = 1 using a "normal score transform" (Chapter 1 - Appendix) as illustrated on Figure 110. The sample data are also converted to normally distributed data, N(0,1), as illustrated on Figure 111. A node or cell is selected at random and the mean computed by kriging. Using the kriging error as variance, we can construct a local pdf for that node or cell. The local pdf is then sampled (Figure 111) and the selected value becomes a new data point, which is used when the process is repeated in computing a value for the next random cell. This cycle is continued until the entire estimation grid is simulated. Once the simulation is completed, the normalised values are back-transformed to their real values.
As for the SIS simulation, the outcome will be different (but equiprobable) from one
SGS simulation to the next.
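The normal score transform and its back-transform at the heart of SGS can be sketched as follows. The skewed porosity sample is synthetic, and the empirical-cdf table lookup is a simplification of the declustered transforms used in commercial packages.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
porosity = rng.lognormal(mean=-2.0, sigma=0.4, size=500)  # skewed synthetic data

nd = NormalDist()  # standard normal, N(0,1)

# Normal score transform: map each value's empirical cdf rank to N(0,1).
ranks = porosity.argsort().argsort()
p = (ranks + 0.5) / len(porosity)                 # empirical cdf, avoiding 0 and 1
scores = np.array([nd.inv_cdf(pi) for pi in p])   # y = Phi^-1(F(z))

# Back-transform: invert by table lookup on the sorted (score, value) pairs.
order = np.argsort(scores)
back = np.interp(scores, scores[order], porosity[order])

print(scores.mean(), scores.std())  # close to 0 and 1
```

In SGS the kriged local pdf is sampled in this normal-score space, and the back-transform is applied once the whole grid has been simulated.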
Figure 112 Porosity distributions (f vs. Phi) for the channel and overbank (inter-channel) facies populations
This simulation allows a property that has different distributions in different regions of the grid to be modelled separately for each region. In the example, we have channels with relatively high porosities and inter-channel facies with poorer porosities, as shown by their respective distributions on Figure 112. The two distributions can have differing variograms that control how the porosities cluster in the two environments. Taking the facies map generated by collocated kriging in our previous example in Figure 108, the regions interpreted as channels (grey zones) will be populated with the higher porosity distribution and the regions interpreted as inter-channels will be populated with the lower porosity distribution.
Assume that you have two variables with some correlation between the two. If you
know the spatial distribution of both variables and the cross-covariance between
the two, you can model one variable and its spatial distribution by co-kriging and
capture the cross-correlation that relates the two variables. Collocated co-kriging is a reduced form of co-kriging. In practice, co-kriging and collocated co-kriging are difficult to compute, so the process is further reduced to collocated kriging, which is the technique widely used in geomodelling.
To illustrate collocated kriging, let’s use a simple example. Take two correlated
variables such as porosity and seismic acoustic impedance (Figure 91). A correlation
coefficient has been established between the two variables from a cross plot as shown
on Figure 91. If the spatial distribution of both variables is assumed to be the same, it
is possible using the modelled spatial distribution of the acoustic impedance (primary
variable) to model porosity (secondary variable) using the correlation coefficient
linking the two variables and the spatial distribution of the acoustic impedance. Note
that in this example, the spatial distribution of the acoustic impedance is explicitly
determined from attribute analysis of a 3D seismic cube, but could have just as easily
come from a variable modelled from a variogram.
Figure 113 Sampling of the secondary variable for correlation coefficients of 1, 0.8, 0.5 and 0
The normalised value from each cell of the primary variable (acoustic impedance) is used to determine the value of the secondary variable in that same cell. Let's assume that for a given cell, the value of the primary variable is the circled value lying somewhere on its normalised distribution (Figure 113). If the correlation coefficient between the primary and secondary variables is 1, the value of the first variable would map exactly onto an equivalent normalised value of the secondary variable. In other words, all cells with the same normalised acoustic impedance value would yield exactly the same normalised porosity value. If the correlation coefficient is lower than 1, the normalised porosity value will be sampled from a wider region of the normalised pdf curve of the secondary variable, as illustrated in Figure 113. That sampling region progressively widens as the correlation coefficient approaches 0. At a correlation coefficient of zero, since there is no correlation, the porosity can be sampled from anywhere over the entire normalised distribution.
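Under a bivariate normal assumption this sampling rule has a compact form: the normalised secondary value is ρ times the normalised primary value plus √(1 − ρ²) times independent noise. The sketch below, with hypothetical normalised impedance values, reproduces the behaviour described above: an exact mapping at ρ = 1, a progressively wider sampling region as ρ falls, and a draw from the full distribution at ρ = 0.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_secondary(primary_norm, rho, rng):
    """Draw a normalised secondary value given the normalised primary value,
    assuming a bivariate normal relationship with correlation coefficient rho."""
    noise = rng.standard_normal(primary_norm.shape)
    return rho * primary_norm + np.sqrt(1.0 - rho**2) * noise

# Hypothetical normalised primary variable (e.g. acoustic impedance).
impedance = rng.standard_normal(100_000)
for rho in (1.0, 0.8, 0.5, 0.0):
    phi = sample_secondary(impedance, rho, rng)
    print(rho, np.corrcoef(impedance, phi)[0, 1])
```

The printed sample correlations recover the input ρ values, confirming that the simulated secondary variable honours the cross-correlation.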
In summary, this uncertainty relates to reservoir zonation and the 3D geometry of the reservoir: its structural style, faulting etc.
Reservoir zonation can be in terms of sequence stratigraphic and/or flow unit correlations. Wrong correlations will yield erroneous reservoir connectivity models. Correlations should be coherent with the geological model, and sequence stratigraphic correlations should tie with the seismic.
Structural interpretation in the subsurface is for the most part based on the interpretation of seismic data. Imaging quality depends on whether 2D or 3D seismic is available and on its quality, which includes frequency content (which impacts resolution) as well as the acquisition and processing parameters used on the data.
Good ties between seismic markers and the wells, established via synthetic seismograms, are also required; these depend in turn on the quality of the well correlations. Weak or highly variable impedance contrasts, together with other factors such as tuning effects or correlating events across major faults, all contribute to uncertainty in the seismic picking and the resultant TWT mapping.
Depth conversion is also a major uncertainty. Seismic interpretation and mapping must
be coherent with the geological setting (extensional versus compressional regime)
and all available information. For example mapped closures or spill points must be
consistent with the OWC seen in the wells.
4. EQUIPROBABLE REALIZATIONS
This is the uncertainty linked to the fact that geomodelling is a stochastic exercise.
Stochastic modelling will yield a series of different, but equiprobable outcomes, all
of which will for example give a different hydrocarbon in place estimate.
The way to estimate the impact of this uncertainty is to run multiple realizations, in each case compute some key parameter (typically hydrocarbons in place, either STOIIP or GIIP) from each outcome, and plot these as histograms. The range of this histogram quantifies the uncertainty stemming from the randomness of the simulation. Some geomodelling packages in fact allow you to run many simulations and automatically generate histograms of the computed hydrocarbons in place across the outcomes. These histograms can be used to determine P10, P50 and P90 cases for STOIIP and GIIP. Testing each outcome in a dynamic simulator to obtain a range of reserves is also possible, but can be prohibitively time consuming in terms of CPU time. It is easier to select models close to, for example, the P10, P50 and P90 cases and use them as input to a dynamic simulator to assess the reserves uncertainty.
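A minimal sketch of this workflow with a purely hypothetical set of STOIIP outcomes is shown below. Note that percentile conventions vary between organisations; here P90 is taken as the low case (exceeded by 90% of outcomes) and P10 as the high case.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical STOIIP values (MMstb) from, say, 200 equiprobable realisations.
stoiip = rng.normal(loc=350.0, scale=40.0, size=200)

# Petroleum convention: P90 = low case (90% chance of exceeding it),
# P10 = high case. They are the 10th and 90th statistical percentiles.
p90, p50, p10 = np.percentile(stoiip, [10, 50, 90])
print(f"P90={p90:.0f}  P50={p50:.0f}  P10={p10:.0f} MMstb")
```

The three cases pulled from this histogram are then natural candidates for dynamic simulation, as described above.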
1. Top Down Approach, where we begin with a coarse and relatively simple model,
and add details and complexity as required.
2. Bottom-up Approach, where we look in great detail at very small sectors of the
model, hoping to capture and address all geological uncertainties.
Which approach is best is impossible to say at present. Suffice to say that when
constructing a model, all key uncertainties should be identified and ranked. The key
uncertainties should be assessed and if possible their impact quantified (typically by
running sensitivities). Assessing uncertainties in a model and their impact is a key
objective in the construction of any model.
18. UPSCALING
Dynamic simulations are limited in terms of the number of cells that can be modelled. Although this limit has steadily increased in line with increasing computing power, dynamic simulation models are still limited to around 200,000 cells, going up to 500,000 cells depending on the complexity of the model and the dynamic simulation method used. Geomodels, on the other hand, are far less restricted, with models of up to 50 million cells reported. These geomodels cannot be dynamically tested without first reducing the number of cells to a manageable size. Transforming a fine grid to a coarser grid to reduce the number of active cells is known as upscaling. This is illustrated on Figure 114.
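The basic averaging step of upscaling can be sketched as below; the fine-grid permeabilities and block size are illustrative. The three classical averages always satisfy harmonic ≤ geometric ≤ arithmetic, with the arithmetic average appropriate for flow along continuous layers and the harmonic average for flow across them.

```python
import numpy as np

def upscale(perm_fine, block):
    """Collapse a 1-D fine-grid permeability array into coarse blocks using
    the three classical averages. Arithmetic applies to flow along layers,
    harmonic to flow across them; the geometric mean lies between the two."""
    cells = perm_fine.reshape(-1, block)
    arith = cells.mean(axis=1)
    harm = block / np.sum(1.0 / cells, axis=1)
    geom = np.exp(np.log(cells).mean(axis=1))
    return arith, geom, harm

# Hypothetical fine-grid permeabilities (mD), upscaled 4 cells to 1.
perm = np.array([100.0, 200.0, 0.1, 50.0, 150.0, 80.0, 0.5, 120.0])
a, g, h = upscale(perm, 4)
print(a, g, h)
```

Note how the single low-permeability cell in each block drags the harmonic average down dramatically, which is exactly why averaging can destroy flow-controlling detail.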
Figure 114 Upscaling
Upscaling always degrades a model, as averaging may remove key details which greatly affect reservoir flow behaviour. Upscaling may cause loss of coherence in the geological model, and loss of coherence between the static and dynamic models during history matching. If possible, avoid upscaling by designing your model and grid size such that upscaling will not be required.
Upscaling along the main axis of flow minimises the degradation of the input parameters.
The boundary conditions set in FBT upscaling are important in computing the pressure and flow fields. Typically, there are two boundary conditions: constant-pressure (open) and no-flow (closed) boundaries.
Remember that the effective permeability calculated using open boundary conditions is always greater than or equal to the effective permeability calculated with closed boundary conditions.
One way round this problem is Pseudoisation, which is simply the application of the
Kyte and Berry method to compute pseudo Kr and Pc. Pseudoisation applications
are readily available on most commercial dynamic simulators.
112
Geomodelling Workflow T W O
Finally, to verify that upscaling has not adversely affected the flow behaviour of the fine-grid reservoir, it is necessary to test the upscaled model by comparing flow simulation results from representative fine-scale sector models (assumed truth case) and their upscaled equivalents. This is time consuming, but it is the only way to properly validate an upscaled model.
There are two simulation approaches we can use for multiphase flow modelling.
These are:
In other words, for modelling more complex fluid flow, FD (finite difference) modelling should be used.
Figure: schematic of single porosity (flow through the matrix, M), dual porosity (flow through the fractures, F, fed by the matrix) and dual permeability (flow through both matrix and fractures) models
Note that this modelling technique requires twice as many grid cells as for single porosity
modelling, and therefore reduces the number of possible active cells accordingly.
The grid cell restrictions of the dual porosity model also apply here, but with the added
problem that flow is now modelled both in the matrix and fracture grids, meaning
significantly longer CPU times.
Fluid properties
Bo - Oil FVF vs. pressure
Bw - Water FVF vs. pressure
Bg - Gas FVF vs. pressure
ρo - Oil density at standard conditions
ρw - Water density at standard conditions
ρg - Gas density at standard conditions
Rs - Gas in solution vs. pressure
μo - Oil viscosity vs. pressure
μg - Gas viscosity vs. pressure
μw - Water viscosity vs. pressure
Co - Oil compressibility
Cw - Water compressibility
Saturation functions
Pcwo vs. Sw - Water-oil capillary pressure (drainage and imbibition)
Pcgo vs. Sg - Gas-oil capillary pressure (drainage and imbibition)
Kro, Krw vs. Sw - Oil and water relative permeability functions (drainage and imbibition)
Kro, Krg vs. So - Oil and gas relative permeability functions (drainage and imbibition)
Kro, Krg, Krw - Three-phase oil, gas and water relative permeability functions
Table 3 Fluid properties and saturation functions as input parameters (Cosentino, 2001)
Note that there will be as many pairs of Kr and Pc curves as there are Petrophysical Rock Types (Section 9) identified in the reservoir. In all cases, the saturation end points Swir (irreducible water saturation), Sor (residual oil saturation) and Sgr (residual gas saturation) must also be defined for each pair of Kr and Pc curves.
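As an illustration of one such pair of curves, a Corey-type water-oil relative permeability parameterisation is sketched below. The end points, exponents and maximum values are hypothetical placeholders; in practice they come from SCAL measurements for each Petrophysical Rock Type.

```python
import numpy as np

def corey_krw_kro(sw, swir=0.2, sor=0.25, krw_max=0.3, kro_max=0.9, nw=3.0, no=2.0):
    """Corey-type water-oil relative permeability curves between the
    saturation end points Swir and Sor. All parameter values here are
    illustrative, not measured data."""
    # Normalised water saturation between the mobile end points.
    swn = np.clip((sw - swir) / (1.0 - swir - sor), 0.0, 1.0)
    krw = krw_max * swn**nw
    kro = kro_max * (1.0 - swn)**no
    return krw, kro

sw = np.linspace(0.0, 1.0, 11)
krw, kro = corey_krw_kro(sw)
print(krw)
print(kro)
```

Water relative permeability is zero below Swir and oil relative permeability is zero at residual oil, reproducing the end-point behaviour the simulator expects.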
The key field and well constraints are listed in Table 4 below.
3) Initial Conditions
The last reservoir engineering parameters required as input for dynamic simulation are the initial reservoir conditions. These include the initial reservoir pressure and the OWC, GWC and/or GOC.
The incremental time steps of the simulation may also be considered part of the initial
conditions, as is the decision to model the reservoir with an active or inactive aquifer.
The simulator models the same production parameters commonly recorded during production (Figure 118). Typically, output parameters such as cumulative oil, gas and water production, reservoir pressure changes, and oil, gas and water production rates are computed at each time increment of the simulation and can readily be plotted against time for comparison and evaluation. Modern flow simulators also have powerful 3D visualisation graphics that can be used as analytical tools to study the evolution of reservoir conditions (saturation, reservoir pressure etc) in time and space. This permits the visualisation in space and time of displacement fronts, sweep efficiency etc.
Figure 118 Production profiles: oil and water production rates, GOR, BSW and reservoir pressure plotted against time (1969-1975)
Because simulations are run over pre-determined time spans, they are instrumental
for predicting key parameters such as future production (oil, water and gas) for a
field, and are used as primary input for economic analyses.
In cases where the field has been on production for a length of time, the results from
the simulation can be directly compared to the real field production data. This is called
history matching. In case of mismatches, the reservoir parameters can be modified
or adjusted until the predicted and actual production history of the field give a good
match. Any adjustment(s) to the reservoir parameters to get a good history match should be internally coherent, as history matching has non-unique solutions and a match may eventually be reached, but for the wrong reasons.
Dynamic simulations are also useful for testing different development scenarios or
management strategies. The outputs can be compared and ranked on any basis we
may wish, such as:
• Economic criteria (maximum production in shortest time - Scenario 1 Figure
119)
• Maximum reserves recovery (not time critical - Scenario 2 Figure 119)
• Minimum environmental impact (fewer wells, re-injection of production water
and/ or gas)
• Test reservoir management ideas for improved recovery (IOR - Figure 119)
• Compare results from several equiprobable geomodels and assess modelling
uncertainty
• Compare results between Max, ML and Min modelling scenarios (oil production
- Figure 119).
Note that for any given oil field, production will at some stage fall below some economic
threshold at which point the field becomes uneconomic to produce. This threshold
will vary according to both the oil price and financial constraints (Capex, Opex, Tax
regime etc). Figure 119 shows how for Scenario 1, the field becomes uneconomical
from 1997, while this date is shifted to 2003 for scenario 2. The same figure also
illustrates how gas injection from 1989, is predicted to significantly improve recovery
and increase the economic life of the field to beyond 2002.
Figure 119 Dynamic simulator outputs: recoverable reserves for the Max, ML and Min cases, and oil production rates for Scenarios 1 and 2 (with IOR by gas injection), plotted against time (1980-2002) with their economic thresholds
Figure 120 shows a modelled comparison between a single horizontal producer and 5 vertical producers. It is clear that the horizontal well shows a significant improvement in recovery.
Figure 121 shows the results from a simulation of a fractured reservoir. Oil recovery is shown to increase with water injection until injection rates reach around 200 m³/day/well. Beyond that rate, recovery decreases significantly due to early water breakthrough and increased by-passed oil. This would naturally have an adverse impact on the field economics.
Figure 120 Recovery versus time for horizontal and vertical producers (27.6% and 40% RF). Figure 121 Recovery as a function of water injection rate in a fractured reservoir
20. REFERENCES
1. Amaefule, J.O., Altunbay, M., Tiab, D., Kersey, D.G. and Keelan, D.K., 1993. Enhanced reservoir description: using core and log data to identify hydraulic (flow) units and predict permeability in uncored intervals/wells. SPE paper 26436, 205-220.
2. Busch, D.A. et al., 1985. Exploration Methods of Sandstone Reservoirs. OGCI Publications.
3. Caers, J., 2005. Petroleum Geostatistics. SPE.
4. Chidsey, T.C., Adams, R.D. and Morris, T.H., 2004. Regional to wellbore analog for fluvial-deltaic reservoir modelling: The Ferron Sandstone of Utah. AAPG Studies in Geology, No. 50.
5. Cowan, G. et al., 1993. The use of dipmeter logs in the structural interpretation and palaeocurrent analysis of Morecambe Fields, East Irish Sea Basin. In: J.R. Parker, ed., Petroleum Geology of Northwest Europe: Proceedings of the 4th Conference, The Geological Society, London, 867-882.
6. Deutsch, C., 2002. Geostatistical Reservoir Modeling. Oxford University Press.
7. Dubrule, O., 1998. Geostatistics in Petroleum Geology. AAPG Continuing Education Course Note Series No. 38.
8. Hatton, I.R. et al., 1992. Techniques and applications of petrophysical correlation in submarine fan environments, early Tertiary sequence, North Sea. In: Geological Applications of Wireline Logs II (A. Hurst, C.M. Griffiths and P.F. Worthington, eds.), Geological Society, London, Special Publication 65, 21-30.
9. Horne, R.N., 1995. Modern Well Test Analysis. Second Edition (2000). Petroway Inc.
10. Lawrence, D.A., 2002. Net sand analysis in thinly bedded turbidite reservoirs - case study integrating acoustic images, dipmeters and core data. Paper HH, SPWLA 43rd Annual Logging Symposium, June 2-5, 14 pp.
11. Nelson, P.H., 2004. Rock evolution on the permeability-porosity plane: data sets and models. AAPG Hedberg Conference, Austin, 8-11 February 2004.
12. Nelson, P.H., 1994. Permeability-porosity relationships in sedimentary rocks. The Log Analyst, 38-64.
13. Reading, H.G. (editor), 1978. Sedimentary Environments and Facies. Second Edition (1986). Blackwell Scientific Publications.
21. TUTORIAL
Data Integration T H R E E
C O N T E N T S
1. INTRODUCTION
2. FUNDAMENTALS
10. EXERCISES
11. REFERENCES
LEARNING OBJECTIVES
1. INTRODUCTION
Data integration is a key aspect of reservoir management. “The best way to identify
and quantify rock framework and pore space variations is through the deliberate and
integrated use of engineering and earth-science (geoscience) technology” (Harris and
Hewitt, 1977). Understanding and awareness of other technologies will promote the
free exchange of ideas. Integration helps reduce uncertainty.
In the following sections we examine some of the issues of data integration in the areas of petrophysics, geochemistry, seismics and geostatistics. The review is not
by any means exhaustive, but serves to show that cross-disciplinary integration does
not follow a universal recipe. On the contrary, the solution to a reservoir problem
often lies in a unique blend of disciplines. The challenge for the geoengineer is to
make sure the appropriate integration happens - by developing recipes where they
don’t exist. The case studies presented act as model recipes, and in each the value of integration speaks for itself - can industry afford not to integrate data? “Integrated is
always better than disintegrated” even if the word is rather fashionable, writes Luca
Cosentino in the preface to a book devoted to this subject (Cosentino, 2001).
2. FUNDAMENTALS
Data integration is usually through models (e.g., linear regression models, reservoir
simulation models, etc.). There are a number of fundamental issues in modelling
that have bearing on data integration.
Figure 1 Definition sketch of a porous media (after Bear, 1972 and Haldorsen, 1986)
Figure 2 Permeability profiles on the four sides of a small sandstone block measured by
small and large probe permeameter compared with measurements on cubes (Corbett et al,
1999)
Figure 3 Repeat of experiment described in Figure 2 for a carbonate block. In this case the
different scales of measurement give different values of permeability (Corbett et al, 1999)
The experiment described above has been repeated on a very heterogeneous carbonate sample (Figure 3). This time the probe and Hassler cube measurements are not necessarily comparable, with the larger-scale measurements "observing" both lower and higher permeabilities. This is clearly a sample with a sample-support problem. There is little confidence in the data: it is not clear whether a systematic averaging procedure can be used, and there is a question mark over the data values themselves, as the rock is not an effective porous medium. The devices are responding to local disconnected vugs. These data should not be used for modelling without careful further analysis.
A comparison of upscaled (averaged) probe data and the cube shows that the appropriate
upscaling average is between harmonic and geometric.
Figure 4 Ultimate test of upscaling from probe to cubes for the limestone experiment
shown in Figure 3 (Corbett et al, 1999)
Figure 5 (a) Up-scaling of permeability from plug to log volumes and (b) cross-scaling between density/porosity and permeability, as a function of measurement volume (cu. m)
Nelson (1994) reviews porosity-permeability relationships for a range of sands, clays and sandstones. Examples of linear relationships on log(k) vs φ plots are compared with examples where there is little relationship. Examples of clastics and carbonates are given. Clustering the data by facies, grain size or clay content can reduce the scatter to manageable subsets, provided these can be recognised on the logs. Several empirical models and log-based predictors are given. A summary sketch of the impact of grain size, sorting, clay and interstitial cements upon poroperm trends is given in Figures 6 and 7.
Figure 6 A summary sketch for the impact of grain size, sorting, clay and interstitial
cements upon poroperm trends (Nelson, 1994)
Figure 7 Berg’s theoretical model for poroperm relationships incorporating grain size
(lines are lines of increasing grain size). Case shown is for a well sorted sand
1. In the absence of well logs and core, look for a suitable petrophysical analogue
2. When porosity and grain size estimates are available use Berg’s theoretical
model (Figure 7).
3. With porosity and water saturation, permeability can be estimated from Timur's equation (k = aφ^b/Swc², where a and b are constants and Swc is the connate water saturation).
Permeability prediction in the subsurface is often a critical part of any field description, and one that is often glossed over in geostatistical simulations of plug data - the issues to remember are representativeness and upscaling.
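The Timur-style estimator in point 3 can be sketched as follows; the constants a and b below are illustrative placeholders rather than Timur's published values, and would need to be calibrated against local core data.

```python
def timur_perm(phi, swc, a=8581.0, b=4.4):
    """Generic Timur-style estimator k = a * phi**b / swc**2, with k in mD and
    phi, swc as fractions. The constants a and b are hypothetical placeholders
    and must be calibrated to local core data before use."""
    return a * phi**b / swc**2

# Illustrative case: 25% porosity, 30% connate water saturation.
k = timur_perm(phi=0.25, swc=0.30)
print(round(k, 1))
```

As expected of any poroperm predictor, the estimate rises steeply with porosity and falls as connate water saturation (a proxy for fine grain size and poor sorting) increases.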
Figure 8 Poroperm relationships for various grain size classes (Harrison, 1994)
Models for cementation can be used to examine the poroperm relationships in response
to diagenesis. Quartz overgrowths reduce the smaller pore throats more than the
larger pore throats and this is reflected in the resulting poroperm curves (Figure 9).
Combining grain size variations and diagenetic modification can lead to a large scatter
in the poroperm data (e.g. braided fluvial reservoirs).
Figure 9 Poroperm relationship for a simple quartz overgrowth model (Bryant et al.,
1993)
When log-based predictors are being used, one also has to consider the effect of upscaling (plug to log response) and cross-scaling (permeability to density). The permeability prediction in a braided fluvial reservoir was improved when upscaled probe data (6-inch running window) were compared to 6-inch resolution microresistivity data (Ball et al., 1994). In this reservoir, there was a poor relationship between plug permeability and wireline density (Figure 10). The relationship between resistivity and saturation is supported by laboratory measurements (Figure 11).
Figure 10 Permeability prediction from probe data (R² = 0.81) and core plug data (R² = 0.54), with key data points numbered for comparison
Figure 11 Probe permeameter apparatus (general arrangement) and the laboratory relationship between log permeability and log formation resistivity factor (y = 9.0617 - 5.7724x, R² = 0.902)
Figure 12 PLT compared with cumulative probe permeability over the logged interval
A later study (Thomas et al, 1996) showed how probe data could be upscaled to
various scales by using arithmetic average for horizontal and harmonic average for
vertical permeability for comparison with MDT measurements (Figures 13 & 14).
In Figure 13, the MDT determined kv (MDT kv) and MDT determined kh (MDT kh) are
shown for the interval of the MDT measurement (4238 - 4254 ft). In this same interval there
are few kv plug measurements and each is significantly higher than the MDT estimate.
The (horizontal) probe permeability measurements detect a low permeability interval at 4243.5 ft. This interval will control the vertical permeability, as shown by the MDT kv.
The probe detects horizontal permeabilities similar to the MDT kh.
Figure 13 Integration between MDT and Geology, Sherwood Sandstone, Irish Sea Basin
(Thomas et al, 1998)
Figure 14 MDT measurement compared with upscaled probe permeabilities as a function of measurement interval (ft)
The MDT measurement falls within the upscaled envelope of probe measurements
and is close to the effective kv/kh for the interval.
Sedimentologists cluster sediments with textural variations into grain size and sorting
classes. Porosity and permeability are driven by variations in grain size and sorting.
Therefore it should be possible to cluster porosity and permeability into classes in
a similar way.
A Hydraulic (Flow) Unit (HU) was defined as the representative elementary volume (REV) of the total reservoir rock within which the geological and petrophysical properties that affect fluid flow are internally consistent and predictably different from the properties of other rock volumes (Amaefule et al., 1993).
The HUs for a hydrocarbon reservoir can be determined from core analysis data (k and φ). This technique was introduced by Amaefule et al. (1993), and involves calculating the flow zone indicator (FZI) from the pore volume to solid volume ratio (φz = φ/(1 − φ)) and the reservoir quality index (RQI = 0.0314 √(k/φ), with k in mD and φ as a fraction) through Equation 1:

FZI = RQI / φz = 0.0314 √(k/φ) × (1 − φ)/φ        (1)
From FZI values, samples can be classified into different HUs. Samples with similar FZI values belong to the same HU (Mohammed, 2002; Mohammed and Corbett, 2002). In a well from one reservoir, the permeability and porosity data have been classified into seven distinct HUs with different hydraulic properties (Figure 15).
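Equation 1 and the classification step can be sketched as follows. The plug data here are hypothetical, and the equal bins in log(FZI) are a simplification of the histogram and probability-plot analysis used in practice to pick HU boundaries.

```python
import numpy as np

def fzi(k_md, phi):
    """Flow Zone Indicator from Equation 1: RQI = 0.0314*sqrt(k/phi),
    phi_z = phi/(1 - phi), FZI = RQI/phi_z (k in mD, phi as a fraction)."""
    rqi = 0.0314 * np.sqrt(k_md / phi)
    phi_z = phi / (1.0 - phi)
    return rqi / phi_z

# Hypothetical core plug data.
k = np.array([0.5, 5.0, 50.0, 500.0])     # mD
phi = np.array([0.10, 0.15, 0.20, 0.25])  # fraction

f = fzi(k, phi)
# Simple clustering rule: equal bins in log10(FZI). Real studies pick the
# boundaries from histogram/probability-plot analysis of the FZI population.
hu = np.digitize(np.log10(f), bins=[-0.5, 0.0, 0.5, 1.0])
print(f, hu)
```

Plugs falling in the same FZI bin are assigned to the same HU, mirroring the classification shown in Figure 15.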
Figure 15 A k-phi cross plot showing Hydraulic Units for routine plugs (left) compared
with a single empirical relationship (right)
The Hydraulic Unit approach is a ‘rock typing’ approach to clustering core plug data
(Unit here is a unit in porosity-permeability space - not one with physical dimensions).
Other rock typing approaches have also been proposed. Winland (this method is
attributed to Dale Winland of Amoco, but has never been published although it is
discussed by Spearing et al., 2001) established an empirical relationship between
porosity, permeability, and pore throat radius from mercury injection capillary pressure
(MICP) measurements in order to obtain net pay cut-off values in some clastic
reservoirs. Winland rock typing is based on samples with similar R35 belonging to the
same rock type. Essentially, Winland rock typing and HU rock typing give a consistent breakdown of the porosity-permeability data: an R35 value can be determined for the same rock types as determined by an FZI value, and vice versa.
HU classes are strongly texturally controlled (Mohammed, 2002; Corbett et al., 2003). Correlations were observed between grain size, sorting and HU. HUs therefore represent a fundamental link with the depositional texture. Sedimentologists classify sands by grain size classes, which are clusters based on a systematic series of median grain diameters.
In two field studies, different numbers of HUs, with different FZI values, occurred in separate wells. However, the number of HUs seen in these fields did not vary greatly, and the differences between HUs were often small. It was possible in each case to develop a small number of HUs for each field. It then becomes important to consider how separate the HUs should be.
Rearranging Equation 1 for permeability gives:

k = φ [FZI × φ/(1 − φ) / 0.0314]²        (2)
and using this equation, lines of constant FZI can be determined. Selecting a systematic series of FZI values allows HU boundaries to be determined that define 10 porosity-permeability elements (termed Global Hydraulic Elements). These boundaries are chosen arbitrarily in order to split the wide range of possible combinations of porosity and permeability into a manageable number of Hydraulic Elements (Table 1).
Table 1 Hydraulic Unit boundaries (shown as FZI values) for 10 Global Hydraulic
Elements
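A sketch of generating such constant-FZI boundary lines from Equation 2 is given below; the series of FZI boundary values is hypothetical, not the published Table 1 values.

```python
import numpy as np

def k_from_fzi(phi, fzi_val):
    """Equation 2, the rearrangement of Equation 1: the constant-FZI line
    k = phi * (fzi * phi / (1 - phi) / 0.0314)**2, with k in mD and phi a
    fraction."""
    return phi * (fzi_val * phi / (1.0 - phi) / 0.0314) ** 2

# A hypothetical systematic series of FZI boundary values; the published GHE
# boundaries should be taken from Table 1, not from this sketch.
fzi_bounds = [0.1, 0.3, 1.0, 3.0, 10.0]
phi = 0.20
for fb in fzi_bounds:
    print(fb, k_from_fzi(phi, fb))
```

Evaluating each boundary FZI over a range of porosities traces the curved element boundaries seen on a k-phi cross plot such as Figure 16.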
The Global Hydraulic Element approach has been taken in a number of modelling studies, reducing the complexity of the porosity-permeability modelling. Four GHEs were recognised in a Siberian field (Field K, Figure 16). These GHEs occur systematically in the coarsening-up tidal sand body. The lateral variation of HEs between wells was considered before a reduced simulation model (i.e., a deterministic object approach to modelling, rather than a full pixel petrophysical simulation) was built and used to generate synthetic well test responses for well test design.
Figure 16 Field K k-phi cross plot showing Hydraulic Elements for routine plugs
The Global Hydraulic Element approach uses specific HU’s (FZI values) as the
boundaries between classes, in a similar way that certain median grain sizes are used
in sedimentology. The GHE approach provided a useful reduced (in complexity)
simulation model for engineering studies and a link between the geology and the model.
Ellabad (2003) showed the link between the Lorenz plot and Hydraulic Units for a heterogeneous North African fluvial reservoir (Lc = 0.74). The flow into the well shown by the plot is dominated by one of the hydraulic units (HU1 in this case). The modified Lorenz plot and the PLT clearly show the influence of this HU on the inflow performance (Figure 17). When 70% of the flow comes through a single thin zone, there is always scope for confusion with a fractured reservoir, though in this case there are plug measurements supporting matrix flow (and fractures are not usually measured in core plugs).
Figure 17 Hydraulic units in a North African field showing the dominance of one HU on
flow. Top left: poroperm plot showing the various HU's. Bottom left: Lorenz plot showing
that 70% of the flow capacity in this well is from HU1. Right: PLT correlated with the
modified Lorenz plot showing the location of HU1 in a single zone (Ellabad, 2003)
Well testing provides a unique insight into the effective in-situ permeability of
reservoirs. As a cross-check on the veracity of the interpretation the permeabilities
are usually compared with core plug data, when available. However, the plug data
need to be upscaled in order to be comparable with the scale of measurement of the
well test. We have investigated the comparison of well test data and core data in
two fluvial case studies, where the reservoirs are characteristically heterogeneous
because of the poorly sorted nature of the sediments. The pitfalls in applying simple
averaging for upscaling are explored.
Permeability measurements from core plugs and well tests are derived from
interpretations of pressure and rate data under some flow assumptions. Core data are
usually derived from assumed linear flow in small core plug samples. The well test
response is derived from a radial flow assumption over a significantly larger volume.
There are theoretical and/or statistical methods for the integration of core and test
data (Oliver, 1990; Deutsch, 1992; Desbarats, 1994). However, this contribution
illustrates an alternative pragmatic approach. This subject is also addressed in a case
study from a North Sea Jurassic reservoir (Zheng et al., 2000).
In wells where both coring and well testing have been undertaken across the same
intervals, the opportunity arises for comparison of the two types of permeability
measurement at different scales. Comparisons have to be made on the basis of some
assumptions:
1. Representivity and upscaling. The representivity of the core within the volume of
investigation of the well test, and the representivity of the core plug samples
of the well bore; also the appropriate upscaling or averaging of the plug data
over the well test volume. Often, the assumption will be made that the geology is
layered (requiring the arithmetic average of plug data) or random (requiring
the geometric average).
2. Stress effects. Core plugs are usually measured at ambient conditions, whereas
the well test measures permeability at in-situ stress.
3. Relative permeability effects. In well testing, usually only one phase is flowing
for the duration of the test.
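The averaging choices mentioned under the first assumption can be sketched in a few lines. The plug values below are hypothetical and only illustrate that the choice of average matters: for heterogeneous data the arithmetic mean always exceeds the geometric, which exceeds the harmonic.

```python
import math

def arithmetic(ks):
    """Layered geology, flow parallel to layers."""
    return sum(ks) / len(ks)

def geometric(ks):
    """Randomly arranged (log-normal) permeability field."""
    return math.exp(sum(math.log(k) for k in ks) / len(ks))

def harmonic(ks):
    """Flow in series across layers."""
    return len(ks) / sum(1.0 / k for k in ks)

plugs = [10.0, 50.0, 200.0, 1000.0, 5000.0]  # hypothetical plug permeabilities, mD
```

For this hypothetical set the three averages differ by more than an order of magnitude, which is why the assumed architecture must be stated before plug data are compared with a well test.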
It is the first problem of comparison that this contribution addresses. For the purposes
of the two case studies presented here, the stress effects and relative permeability effects
have been assumed to be of less significance (the end point oil relative permeability
in the UK data set is 80% of the absolute perm) than the effects of representivity
and upscaling.
We find that the comparison between plug and well test is meaningless if:
3. The geological architecture over the volume of investigation is not taken into
account
This is fairly obvious, but you would be surprised how few published examples there
are of comparisons between well test and core data. The common factor linking the
two case studies is that both are fluvial reservoirs. Fluvial reservoirs, whether high or low
net:gross, are characteristically heterogeneous because of the poorly sorted nature
of the sediments in a high energy depositional environment (Brayshaw et al., 1995).
The effective permeability (i.e., the permeability that a grid block of a certain volume
will have) at various scales for use in reservoir simulations is a critical issue in such
reservoirs. It is no coincidence that the integration of petrophysical and well test data
provides the greatest challenge - and potentially the greatest reward - in understanding
such reservoirs.
[Figure: plug permeability (mD, log scale) versus depth (m) for the tested interval, with grain-size profile]
The core plugs, taken at a regular 0.25m spacing, show the interval to be heterogeneous,
with permeabilities in the tested interval ranging from 10mD to 10000mD. The
arithmetic average of the core plugs is 963mD, the geometric average 254mD and the
coefficient of variation 1.9. Using the N0 concept of Hurst and Rosvoll (1991) for
the number of samples at this level of variability suggests that the "true" arithmetic
average lies between 318 and 1608mD. Because the number of samples is low for this
variability, there is great uncertainty in the arithmetic average (and other statistics)
derived from the data.
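The N0 rule of thumb referred to above relates the coefficient of variation to the number of samples needed to estimate the mean to within roughly ±20% at 95% confidence; a minimal sketch:

```python
import math

def coefficient_of_variation(ks):
    """Cv = sample standard deviation / mean."""
    m = sum(ks) / len(ks)
    var = sum((k - m) ** 2 for k in ks) / (len(ks) - 1)  # sample variance
    return math.sqrt(var) / m

def n_zero(cv):
    """N0 = (10*Cv)^2: samples needed for the mean within ~+/-20%."""
    return (10.0 * cv) ** 2
```

With the quoted Cv of 1.9, n_zero gives 361 samples, far more than a typical plug suite provides over one tested interval, hence the wide interval quoted on the arithmetic average.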
The derivative of the pressure response from the well test (Figure 19) shows radial
flow in the Middle Time Region (MTR) from which a well test permeability of 345mD
with a skin of -1.67 was derived. The radial flow becomes linear with time and
such a flow regime is to be expected from a channel sandstone where the “parallel”
boundaries for the channel give rise to a 1/2 slope on the derivative of pressure plot.
In the Late Time Region (LTR), a bilinear flow regime is seen (1/4 slope on the
derivative) suggesting a reduction in permeability at some greater distance from the
well (Figure 19).
Figure 19 Log-log plot of pressure and derivative for the Norwegian test
The well test permeability of 345mD is within the error bands of the arithmetic average
permeability and could indicate layer-parallel flow. Alternatively, the well test average
is closer to the geometric sample average, which could indicate a random permeability
distribution. The higher permeabilities towards the base of the channel sandstone
are consistent with simple models for channel fill (i.e., fining upward) and suggest a
geological control. Additional data acquisition reveals further permeability structure
(Figure 20). The cores exhibit very marked cross-bedding with large grain size
variations. These result in marked permeability contrasts at the lamina scale, as
measured by the probe permeameter (Brendsdal and Halvorsen, 1993). The probe
data show dramatic variation (Figure 20) and reveal additional high (>10000mD) and
low (<10mD) permeability zones that were not detected by the plugs. The arithmetic
average of the probe data is 1038mD, the geometric average 306mD and the coefficient
of variation 1.85. The number of measurements exceeds the N0 criterion and the true
mean lies within ±20% of 1038mD (Corbett and Jensen, 1992). Clearly, the well test
permeability is significantly less than the arithmetic mean of the interval.
[Figure 20: plug and probe permeability (mD, log scale) versus depth (m) for the tested interval]
The effects of the lamination were then considered. The presence of several high
permeability laminae intersecting the well bore (solid arrows in Figure 20) could be
responsible for the negative skin observed. Commonly a phenomenon associated with
fractures, in this example the "geoskin" appears to be derived from depositionally-
controlled permeability contrasts. The laminae, related to cross bedding, are unlikely
to be laterally extensive away from the well bore. Flow in the immediate well bore
region is therefore enhanced by the presence of highly permeable laminae.
Within the formation, away from the wellbore, the effective (single phase) flow should
approximate the harmonic average of the lamina as flow will be across the laminae.
This configuration was considered for a four-layer model in which the layer permeabilities
(derived by the harmonic average within each layer) were 651, 168, 297 and 1169mD,
respectively. The arithmetic average for this layered model is 571mD. The well test
permeability is still significantly less than this, suggesting the laminae are not
alone responsible for the reduced effective permeability.
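The arithmetic check on the four-layer model can be reproduced directly, using the layer permeabilities quoted in the text:

```python
# Layer permeabilities (mD), each already the harmonic average of its laminae
layers_md = [651.0, 168.0, 297.0, 1169.0]

# Arithmetic average across the layers (flow parallel to layering)
k_layered = sum(layers_md) / len(layers_md)
# k_layered is ~571 mD, as quoted, still well above the 345 mD test permeability
```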
The low permeability streaks shown by the open arrows in Figure 20 are associated
with the reduced bounding surface or “shale drape” permeabilities between the sets.
Shale drapes commonly occur between the architectural elements within braided
fluvial reservoirs (Høimyr et al., 1993). To simulate the effects of the low permeability
draping network a simple numerical model was constructed based on an orthogonal
network separating blocks of different permeability (Figure 21). Whilst clearly a
simplification, the model showed that the presence of the network could be responsible
for the remaining permeability reduction. For a range of shale permeabilities (1-50mD)
and spacings (40-80m), that are “realistic” given the probe and analogue data, the
model produces permeabilities close to the well test permeability (330-375mD).
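The first-order effect of the drape network can be seen with a one-dimensional series combination of a block and a thin drape along the flow path. This is a deliberately crude sketch of the full orthogonal-network model; the block length, drape thickness and drape permeability below are hypothetical values chosen within the ranges quoted.

```python
def series_k(k_block, l_block, k_shale, t_shale):
    """Effective permeability of a block plus a thin shale drape in series
    (thickness-weighted harmonic average)."""
    total = l_block + t_shale
    return total / (l_block / k_block + t_shale / k_shale)

# Hypothetical: 60m blocks of 571 mD separated by 0.2m drapes of 3 mD
k_eff = series_k(571.0, 60.0, 3.0, 0.2)
# A drape occupying <0.5% of the flow path pulls 571 mD down to the order
# of the well test value
```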
[Figure 21: numerical model of 50 × 1m by 8 × 1m cells between injection and production boundaries; layer permeabilities 651, 168, 297 and 1169mD with no crossflow between layers, separated by a "matrix" (shale drape) network of variable 1-40mD permeability]
The well test permeabilities had, historically, generally matched the arithmetic
average of the plug data and the well tests were initially thought to be showing some
mechanical alteration. The permeability distributions and mean permeabilities in the
two wells are, nevertheless, very similar (Figure 22).
[Figure: permeability histograms (ln k). Well A: arithmetic average 400mD, geometric average 43mD, harmonic average 0.22mD, Cv 1.52. Well B: Cv 1.71.]
Figure 22 Permeability histograms for the two Wytch Farm wells (from Toro-Rivera et
al., 1994)
For Well A, inspection of the core data (Figure 23) shows the presence of several,
relatively thin, permeability intervals. The intervals are considered to be relatively
minor channels of limited extent (based on unpublished interpretations by Mckie
and Little of Badley-Ashton). The well test (Figure 24) build-up data show negative
skin (from the high permeable channels), early linear (channel) flow and radial flow
(with the “effective permeability” of the combined channel/inter-channel reservoir),
all consistent with the minor channel model. The radial flow permeability of 44mD
is close to the geometric average, suggesting that the system behaves as if the
permeabilities were randomly distributed. In this example, the late time increase in
permeability seen by a downturn in the derivative can be explained by the oil-water
contact, which is not far below the tested interval.
[Figures 23 and 24: Well A core permeability profile (depths XX10-XX65) and well test pressure and derivative response]
In Well B, the core data show a few relatively thick, high permeability major channels
(Figure 25). In this well, the flow is likely to be dominated by these channels. The
well test permeability (1024mD) from the radial flow region (Figure 26) is close to
the arithmetic average (911mD) of the channel intervals. Unfortunately, there is no
production log available to confirm that the channels alone were flowing, however,
this would explain a test permeability higher than the core plug arithmetic average.
The late time decrease in permeability (upturn in the derivative) is thought to be due
to a fault mapped from seismic data at the appropriate distance.
[Figures 25 and 26: Well B core permeability profile (35m interval, depths XX20-XX55) and well test pressure and derivative response]
This study shows that the effective permeability of a reservoir in the well test volume
of investigation is dependent on the medium-scale (bed) architecture, even when the
total permeability field appears relatively stationary (i.e., the mean and variance for
the reservoir are constant).
• The flow regimes identified (including skin) should be rationalised with respect
to the interpreted geology at the well and in the radius of investigation to confirm
or cast doubt on the pressure interpretations
• With downhole shut-in, the Early Time Region can be interpreted providing
additional geological information on a scale more readily comparable with the
core data (Toro-Rivera et al., 1996)
Figure 27 Lorenz Plot and modified Lorenz Plot showing high permeability zone in centre
of a channel sandstone, Ness Formation (Zheng et al, 2000)
Figure 28 Comparison between test and core plug permeabilities for 3 wells in a North
Sea fluvial reservoir. Error bars in core estimates derived from small sample size and in
the well test from uncertainty in flowing interval (Zheng et al, 2000)
2. Numerical models can be very useful in well test interpretation. The approach
taken here, further developed with additional focussed analogue and petrophysical
data integrated by geostatistical models, will enable greater understanding of well
test data (see Bourgeois et al., 1993 for a turbidite example and De Rooij et al.,
2002 for a meander loop example).
3. A coherent interpretation of geology, petrophysics and well test data can add
confidence to the description of complex reservoirs, compared with interpretations
of the same data in isolation.
Refer to Smalley and England (1994) and Larter et al., (1994) for a review of the
uses of geochemistry in reservoir engineering. Geochemists are specialists in rock-
fluid interaction and their skills should be used more in data integration and reservoir
management. This area of reservoir description is very much in its infancy but holds
out promise for the future in a number of areas:
[Figures: petrophysical and geochemical parameters with depth, and bubble point pressures for Forties and SE Forties oils]
Figure 32 Probe (uncleaned core) and plug (cleaned) permeability differences reveal in-
situ tar mats (Larter et al, 1994)
Time lapse seismic monitoring (i.e., repeating the survey or parts of the survey, 2-D or
3-D, after a period of time, sometimes known as 4-D seismic in the case of repeat 3-D
surveys) can be used as an important control for reservoir management. In the case
of the Heimdal Field (Figure 33, Norwegian North Sea) 11 lines costing $0.6million
were acquired (Figure 34) to monitor suspected water encroachment (Figures 35,
36) in the northern part of the field (Grinde, et al., 1994). Water encroachment into
the Palaeocene gas reservoirs can have a significant impact on the drainage of a gas
field. Gas depletion in the presence of a strong water drive invariably results in lower
recovery, with wells “prematurely” watering out. AVO (amplitude-versus-offset)
seismic processing and inversion was used to determine the gas-liquid contact (GLC).
A difference image was generated from the initial and repeated surveys and used
to determine the rise in the water table. This rise was seen to be occurring in a
northern accumulation without separate withdrawal points. Good drainage was shown
to be occurring across the field using a history matched model (Figures 37, 38),
obviating the need for an additional well (at $16million!).
Whilst the relatively shallow, high porosity Tertiary sands lend themselves to seismic
monitoring, the techniques have been used on other North Sea fields with varying
success. The combined effort of the team of geophysicists, geologists and reservoir
engineers was a key factor in the success of the project (and a conclusion of the
paper!). That people needed to stress the importance of teamwork in 1994 shows that
nothing had really changed since Harris and Hewitt's paper in 1977! This is a practical
example of the integration of geophysics and reservoir engineering.
Figure 33 Location map Reservoir/Aquifer Grid for Heimdal Field (Grinde et al, 1994)
Figure 34 Location of Time Lapse Seismic Grid, Heimdal Field (Grinde et al, 1994)
[Figure 35: map of remaining gas column (m), showing production areas and exploration potential]
Figure 36 Model For Trapped Gas, Heimdal Field (Grinde et al, 1994)
Figure 37 Heimdal Field Gas and Aquifer Pressure History Match, Heimdal Field (Grinde
et al, 1994)
Figure 38 Comparison of Water Rise Match Between Seismic and Reservoir Model,
Heimdal Field. Seismic shows area to north has been produced (Grinde et al, 1994)
Tjolsen et al. (1998) report on a study of the Ness Formation in the Oseberg Field.
In this study (Figure 39) acoustic impedance is used to predict sand proportions
and greatly reduce the spread in simulation models.
Figure 39 Distribution of high and low impedance in the Upper Ness in Oseberg Field
(Tjolsen et al, 1995) Note both production (•) and injection (º) wells are located close to
areas of low impedance (high net: gross)
Tarbert
Upper Ness
Lower Ness
Coal
Sand
Shale
Cross-section
Figure 40 Cross-section through Ness Formation in Osberg Field (Tjolsen et al, 1995)
Figure 41 Channel Deposit Thickness (M) in the Ness Formation (Tjolsen et al, 1995)
Figure 42a Using impedance to discriminate reservoir and non-reservoir (Tjolsen et al,
1995). Figure 42b Using impedance to discriminate reservoir and non-reservoir in the
Ness Formation, Oseberg Field (Tjolsen et al, 1995)
Ten unconditional stochastic realizations were upscaled and simulated. The range
of outcomes is shown in Figure 44. A further 10 runs were conditioned on the
seismic data. The plateau length was significantly longer for the seismic-conditioned
simulations. The spread in total production and plateau height was also reduced.
Figure 44 Reservoir Prediction With (+) and Without (-) Seismic, Ness Formation,
Oseberg Field (Tjolsen et al, 1995)
There are numerous studies of the use of geostatistics in reservoir studies. As an example,
Rossini et al. (1994) describe a case study in which geostatistics has been used to honour
the petrophysical heterogeneity whilst minimising the history matching effort. The
study shows how static and dynamic data are integrated. The reservoir comprises a
transitional sand-dolomite lithology in three distinct flow units (Figure 45). The PEF
(photoelectric effect log) was used to discriminate between dolomite and sand, the poroperm
characteristics of the facies being quite different (Figure 46).
Layer 1
Layer 2
Layer 3
Figure 45 Geological cross section of the field showing three well-defined layercake flow
units with internal heterogeneities (Rossini et al., 1994)
Figure 46 Porosity-permeability for sandy and dolomite facies (Rossini et al., 1994). Note
that the dolomite facies is severely undersampled
A two-stage geostatistical simulation was carried out using the appropriate variograms
(Figure 47):
a) Facies indicators: nugget 0.015; spherical structure, sill 0.15; vertical range 6m, horizontal range 500m
b) Sandy facies porosity: nugget 0.03; exponential structure, sill 0.97; vertical range 56m, horizontal range 850m
c) Dolomite facies porosity: nugget 0.03; gaussian structure, sill 0.96; vertical range 10m, horizontal range 650m
Figure 47 Variograms of (a) facies indicators, (b) sandy facies porosity and (c) dolomite
facies porosity (Rossini et al., 1994)
The three flow units were generated separately, but treated as one for the purposes
of flow simulation. The petrophysical models comprised 1.6 million grid blocks (large
for the mid 90's), and these had to be upscaled for the purposes of flow simulation.
Porosities were upscaled using a weighted arithmetic average (N.B., resulting pore
volumes were checked for fine and coarse grids). Permeabilities were upscaled from
the fine grid block transmissibilities using combined arithmetic and harmonic averages.
The upscaled models were then history matched with RFT pressures, GOR and water
cut. The latter two proved the most discriminating and enabled an appropriate static
model to be selected for further reservoir management studies (Figure 48). In this
way, geostatistical techniques have been used to provide a more reliable fluid-flow
model. Similar techniques were also used by Damsleth et al. (1992).
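The combined arithmetic and harmonic permeability upscaling described above can be sketched, in simplified form, as a harmonic average along the flow direction within each column of fine cells, followed by an arithmetic average across the columns. This is a minimal illustration of the principle, not the Rossini et al. (1994) implementation (which worked from transmissibilities):

```python
def upscale_k(columns):
    """columns[i] is a list of fine-cell permeabilities along the flow
    direction; harmonic within each column, arithmetic across columns."""
    def harmonic(ks):
        return len(ks) / sum(1.0 / k for k in ks)
    col_ks = [harmonic(col) for col in columns]
    return sum(col_ks) / len(col_ks)
```

The harmonic step makes the coarse value sensitive to low permeability cells in series with the flow, while the arithmetic step lets high permeability columns contribute in parallel, which is why the combined average usually lies between the two simple end-members.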
Figure 48 Ranking of realization based on the number of wells in the model that matched
real production characteristics (Realization 9 being the best model)
Geostatistics provides a tool for the integration of geology, petrophysics and reservoir
engineering. There are more examples of geostatistical models used in the integration
of geoscience and engineering in Yarus and Chambers (1996).
The Shared Earth Model concept has been applied to a single vendor's platform (Cosentino,
2001) or an integrated database (Fanchi, 2003). The lack of fully interoperable
software systems and completely integrated platforms will require a continuation
of data transfer from one piece of software to another, requiring the geoengineer to
be computer literate, or part of a team that includes a computer programmer, for the
foreseeable future.
Braided fluvial reservoir deposits are formed as the coarser grained, higher energy
part of fluvial systems. These reservoirs have a characteristically patchy distribution
of varying grain size and sorting in outcrop sections (Figure 50). Recent flume tank
experiments have shown these elements to be deposited in secondary channels within
the fluvial system (Figure 51). The geometry of the secondary channels results in
randomly distributed 3-D patches.
Production log (PLT) data from these braided fluvial systems often show point source
entry of fluid (Figure 53). These high permeability intervals correlate with the
best reservoir property core plugs, classed as hydraulic unit 1 (HU1) in this example.
The Lorenz plot for the interval shows that HU1 contains 70% of the transmissivity in
the well in only 15% of the porosity – this PLT response and degree of heterogeneity
is often observed in braided fluvial reservoirs.
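The Lorenz construction used here (cumulative k*h against cumulative φ*h, with the best intervals first) can be sketched as follows; the Lorenz coefficient Lc is twice the area between the curve and the diagonal. This is a generic sketch, not tied to the field data above.

```python
def lorenz_curve(k, phi, h):
    """Cumulative flow capacity (k*h) vs storage capacity (phi*h),
    intervals sorted by decreasing k/phi (best reservoir first)."""
    ivals = sorted(zip(k, phi, h), key=lambda t: t[0] / t[1], reverse=True)
    tot_kh = sum(ki * hi for ki, _, hi in ivals)
    tot_ph = sum(pi * hi for _, pi, hi in ivals)
    pts, kh, ph = [(0.0, 0.0)], 0.0, 0.0
    for ki, pi, hi in ivals:
        kh += ki * hi / tot_kh
        ph += pi * hi / tot_ph
        pts.append((ph, kh))
    return pts

def lorenz_coefficient(pts):
    """Lc = 2 * (area under the curve - 0.5), by the trapezoidal rule."""
    area = sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
    return 2.0 * area - 1.0
```

A homogeneous interval plots on the diagonal (Lc = 0); the more of the flow capacity that is concentrated in a small fraction of the storage, as with HU1 here, the closer Lc approaches 1.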
In the reservoirs in North Africa that were the basis of this study the porosity is
moderate (10-15%). The background average permeability was around 3mD with the
best permeability zones approaching and sometimes exceeding 100mD. The Lorenz
Plot (Figure 54) shows clear double matrix porosity behavior.
Numerical well test model: The geostatistical models (an example shown in Figure 52)
were incorporated into a black oil simulator. A single well was placed in the centre
of the model with local grid refinement. Numerical artefacts can often occur in
simulation models when a radial local grid refinement around the well is nested
within a cartesian grid. For this reason the model described here uses a cartesian
grid throughout. Careful review of the simulation models has eliminated any significant
numerical artefact in the middle time region. However, numerical artefacts may still
be present in the early time period. Because the phenomenon we are describing occurs
in some realizations but not in others (Figure 55), whilst the grid remains constant,
we are confident that the phenomenon we observe is free of numerical artefact.
Geochoke phenomenon: The restriction of flow for a short period of time represents
depletion of the high permeability zones connected to the well, and the delay in
recharge from other patches away from the well. This model requires a fairly high
density of patches, with some intersected by the well. This is why only some numerical
realisations show the hump, and to differing degrees.
Figure 56 shows a derivative from a well test in a braided fluvial environment in a
North African reservoir. The mapped faults in this field were at a distance from the well
that was outside the radius of investigation. Clearly sub-seismic faulting could be
invoked. The geochoke response could be reasonably modelled by an analytical
radial composite model with a lower permeability ring around the well. Simple
faulting would not give this response. In the full field simulation model more
extensive channels had been modelled, and low permeability regions around the well
were needed to improve the match. It was beyond the scope of this work to rebuild
the reservoir model. This well test interpretation suggests that the model should be
updated to include short correlation length, patchy high permeability sands within
the more extensive channels.
FIGURES
[Figure 49: workflow diagram linking data quality, the 3-D geological/stress model (pixel numerical model, visualisation), dynamic upscaling, sensitivity analysis and validation against well test data to build understanding]
Figure 50: A view of a typical braided fluvial system at outcrop (Eocene Escanilla
Formation, N. Spain). The figure on the left identifies the coarser grained elements;
the figure on the right, the finer. It is likely that the better sorted finer elements will
have a significantly different permeability from the poorly sorted coarser elements.
Note that there are no obvious flow barriers between the elements to stop crossflow.
Figure 51: The result of a laboratory analogue study of a braided fluvial system.
The high permeability class units in D relate to the secondary channel fills in C that
have been mapped in cross sections (A and B) through modern sediments deposited
in a flume tank (from Moreton et al., 2002)
Figure 53: Point inflow from thin high permeability (HU1) interval in a braided fluvial
reservoir. The production log (PLT) inflow (solid line) is predicted by the cumulative
core permeability. Note that the inflow corresponds to plugs from hydraulic unit 1
(HU 1), the rock type with the best reservoir properties.
Figure 54: Lorenz Plot for a braided fluvial reservoir. The best permeability units
(HU1) contain 70% of the k*h (transmissivity) but only 15% of the φ*h (storage).
Braided fluvial reservoirs contain double matrix porosity: part of the matrix is
transmissivity-dominated (in this case with only 15% of the pore volume) and part
is storage-dominated (85% of the pore volume).
Figure 55: Numerical simulations from five realizations (MCL1-5) of the model
shown in Figure 52. The hump in the middle time regime is obvious in only two
of the realisations. This result suggests that the geochoke phenomenon will occur
only for particular arrangements of the high permeability patches around the well.
It also shows that the effect can be quite subtle. The upper set of curves shows
pressure buildup, the lower set the derivative.
Figure 56: Field example of a pronounced humped middle time region in a well test
buildup from a braided fluvial reservoir in North Africa. The upper curve is the
pressure buildup and the lower curve the derivative.
Figure 57: The geochoke response occurs in double matrix porosity systems with
short lateral and vertical correlation lengths. The Lorenz plot on the left defines
the transmissive and storage elements. The matrix in the centre shows decreasing
correlations in the vertical and horizontal correlation lengths from top left to bottom
right (based on the Tyler and Finley 1991 architectural matrix). The various derivative
responses relate to well test characteristics for various scenarios.
Figure 58: The geochoke response occurs in single matrix porosity systems with
short lateral and vertical correlation lengths.
EXERCISE 1
Given the following permeability data (same data as was encountered in Chapter 8
of the Reservoir Concepts course)
1. Crossplot the data and fit a regression line that you might want to use for permeability
prediction. (i) Predict the missing value "nmp". (ii) Predict a cut-off porosity for
a) an oil reservoir and b) a gas field.
2. Plot the data on a Global Hydraulic Elements plot. How many GHE's are present and
how does this relate to the heterogeneity of the interval (as calculated in Chapter
8 of Reservoir Concepts)?
K = φ [ FZI × φ / ((1 − φ) × 0.0314) ]²   or   FZI = RQI / φz, where RQI = 0.0314 √(k/φ) and φz = φ / (1 − φ)
3. Fit HU lines to the clusters. Estimate HU values by reference to the GHE plot
above.
EXERCISE 1 - SOLUTION
Given the following permeability data (same data as was encountered in Chapter 8
of the Reservoir Concepts course)
1. Crossplot data and fit a regression line that you might want to use for permeability
prediction. Predict the missing value “nmp”
Linear fit. Note the high coefficient of determination, but the expression cannot be used
to predict the permeability of the 14.6% porosity point, as this would give a negative
permeability! Also note that the residuals would be non-linear.
Semi-log fit: log10(k) = 0.1373 φ − 1.2691 (φ in %), R² = 0.9557.
5.1 mD at 14.6% φ
9.2% φ at 1mD
1.96% φ at 0.1mD
−5.3% φ at 0.01mD
Note that this expression predicts a negative porosity at 0.01mD, which is impossible.
Many relationships are also non-linear in log(k) − φ space. This non-linearity
is also suggested by the HFU relationships.
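The porosity cut-offs quoted above follow directly from inverting the semi-log fit; a quick check, using the fitted coefficients from the plot:

```python
import math

# Inverting log10(k) = 0.1373*phi - 1.2691 (phi in %, k in mD)
def phi_at(k_md):
    """Porosity (%) at which the fitted trend predicts permeability k_md."""
    return (math.log10(k_md) + 1.2691) / 0.1373

cutoffs = {k: phi_at(k) for k in (1.0, 0.1, 0.01)}
# ~9.2% at 1 mD, ~1.96% at 0.1 mD, and a negative (non-physical)
# porosity at 0.01 mD, confirming the fit breaks down at low k
```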
K = φ [ FZI × φ / ((1 − φ) × 0.0314) ]²
The data lie in just two GHE's. The Cv is 0.7. Data lying in a single GHE would be homogeneous.
12. REFERENCES
Amaefule, J.O., Altunbay, M., Tiab, D., and Kersey, D.G., 1993, Enhanced Reservoir
Description: Using core and log data to identify hydraulic (flow) units and predict
permeability in uncored intervals/wells SPE 26436 68th Ann. Conf. And Exhibit.,
Houston, Texas, Oct 3-6
Anguy, Y., R. Ehrlich, C.M. Prince, V.L. Riggert, D. Bernard, 1994, The sample
support problem for permeability assessment in sandstone reservoirs, in J.M. Yarus
and R. Chambers (eds.) Stochastic modelling and geostatistics. AAPG Comp. Appl.
in Geology, 3, 37-54.
Ball, L. D., Corbett, P.W.M., Jensen, J.L., and Lewis, J.J.M.L., 1994, The role of
geology in the behaviour and choice of permeability predictors, SPE 28447, presented
at 69th Annual Technical Conference and Exhibition, 25-28th September.
Bourgeois, M.J., Daviau, F.H., Boutard de la Combe, J-L., 1993. Pressure behaviour
in finite channel levee complexes, SPE paper 26461, presented at the 1993 SPE
Annual Conference, Houston, October 3-6.
Brayshaw, A.C., Davies, G.W., and Corbett, P.W.M., 1995, Depositional controls on
primary permeability and porosity at the bedform scale in fluvial reservoir sandstones,
in M.Dawson (Ed.), Advances in Fluvial Dynamics and Stratigraphy, 373-394.
Bryant, S., Cade, C., and Mellor, D., 1993, Permeability prediction from geologic
models, AAPG Bulletin, 77, 1338-1350.
Corbett, P.W.M., Jensen, J.L., and Sorbie, K.S.S., 1998, A review of Upscaling and
Cross-Scaling issues in Core and Log Data for Interpretation and Prediction, in
Core-Log Integration P.Harvey and M.Lovell, (Eds.), Geol. Soc. Spec. Publ., 136, 9-16.
Corbett, P.W.M., Anggraeni, S., and Bowen, D., 1999, The use of the probe
permeameter in carbonates - Addressing the problems of permeability support and
stationarity, The Log Analyst, 40, 316-326.
Corbett, P.W.M., Ellabad, Y., Egert, J.I.K., and Zheng, S., 2005, The geochoke well
test response in a catalogue of systematic geotype curves, SPE 93992, presented at
SPE EUROPEC, Madrid, Spain, 13-16 June.
Cosentino, L., 2001, Integrated Reservoir Studies, Editions Technip, Paris, 310p.
Damsleth, E., et al., 1992, A two-stage stochastic model applied to a North Sea
reservoir, JPT, April
De Rooij, M., Corbett, P.W.M., Barens, L., 2002, Point Bar geometry, connectivity
and well test signatures, First Break, 20, December.
Desbarats, A.J, 1994, Spatial averaging of hydraulic conductivity under radial flow
conditions, Mathematical Geology, 26, 1-21.
Deutsch, C.V., 1992, Annealing techniques applied to reservoir modeling and the
integration of geological and engineering (well test) data, PhD Thesis, Stanford, Ca.
Grinde, P., Blanche, J.P., and Schnapper, D.B., 1994, Low-cost integrated teamwork
and seismic monitoring improved reservoir management of Norwegian gas reservoir
with active water drive, SPE 28876, presented at Europec, 25-27 October.
Harris, D.G., and Hewitt, C.H., 1977, Synergism in Reservoir Management - the
geologic perspective, JPT, July, 761-770.
Harrison, P.F., 1994, Wytch Farm: Horizontal well application. Paper presented at
3rd Horizontal well Technical Forum, 18-19 August, Heriot-Watt University.
Høimyr, Ø., Kleppe, A., and Nystuen, J.P., 1993, Effects of heterogeneities in a
braided stream channel sandbody on the simulation of oil recovery: a case study
from the Lower Jurassic Statfjord Formation, Snorre Field, North Sea, in M.Ashton
(Ed.), 1993, Advances in Reservoir Geology, Geological Society Special Publication,
69, 105-134.
Larter, S.R., Aplin, A.C., Corbett, P.W.M., Ementon, N., Chen, M., Taylor, P.N., 1994,
Reservoir geochemistry: A link between reservoir geology and engineering, SPE
28849, presented at Europec.
Mohammed, K., Corbett, P.W.M., Bowen, D., Gardiner, A.W., and Buckman, J.,
2002, Solution seams in the Mamuniyat Formation El-Sharara-A Field, SW Libya,
Impact on Reservoir Performance, Journal of Petroleum Geology 25(3), 281-296.
Mohammed, K., and Corbett, P.W.M., How many relative permeability samples do
you need? A case study from a North African Reservoir, SCA2002-03, Monterey,
September
Moreton, D.J., Ashworth, P.J., and Best, J.L., 2002, The physical scale modeling
of braided alluvial architecture and estimation of subsurface permeability, Basin
Research, 14, 265-285.
Oliver, D.S., 1990, The averaging process in permeability estimation from well test
data, SPEFE, September, 319-324.
Rossini, C, Brega, F., Piro, L., Rovellini, M., and Spotti, G., 1994, Combined
geostatistical and dynamic simulations for developing a reservoir management
strategy: A case history, JPT, November, 979-985.
Tjolsen, C.B., Johnsen, G., Halvoren, A., Ryseth, A., and Damsleth, E., 1995, Seismic
data can improve the stochastic facies model significantly, SPE 30567, presented at
SPE Annual Technical Conference and Exhibition, Dallas, 22-25 October.
Toro-Rivera, M.L.E., P.W.M. Corbett and G. Stewart, 1994, Well test interpretation
in a heterogeneous braided fluvial reservoir, SPE 28828, presented at Europec,
25-27th October.
Tyler, N. and Finley, R.J., 1991, Architectural Controls on the Recovery of Hydrocarbons
from Sandstone Reservoirs, SEPM Concepts in Sedimentology and Palaeontology,
V3, p3-7.
Yarus, J.F., and Chambers, R.L., 1996, Stochastic Modelling and Geostatistics,
Principles, Methods and Case Studies, AAPG Computer Applications in Geology,
3, 379pp.
Zheng, S., Corbett, P.W.M., Ryseth, A., and Stewart, G., 2000, Uncertainty in well
test and core permeability analysis: A case study in fluvial channel reservoir, Northern
North Sea, Norway, AAPG Bulletin, 84(12), 1929-1954.
Reservoir Management F O U R
C O N T E N T S
1. INTRODUCTION
2. SYNERGY
4. STRATEGY
7. MANAGEMENT OF WATER
8. CASE STUDIES
8.1. Water Shut-off
8.2. Improved Oil Recovery (IOR) and Enhanced
Oil Recovery (EOR)
8.3. Infill Drilling
8.4. Fraccing
9. FIELD REVITALISATION
10. SUMMARY
11. REFERENCES
LEARNING OBJECTIVES
Having worked through this chapter the students will develop understanding of:
1. INTRODUCTION
Sound reservoir management practice relies on the use of available resources (human,
technological and financial) to maximise profits from a reservoir by optimising recovery
whilst minimising capital investment and operating expenses (Satter et al., 1994;
Satter and Thakur, 1994). Reservoir Management can be reactive or proactive - it
involves making choices - let it happen or make it happen. There are many definitions
of reservoir management but improving recovery, minimising expenditure, prolonging
field life and the management of resources are usually involved.
2. SYNERGY
Figure 1 Definition of synergy: The output of a synergistic team is larger than the sum of
the output of individuals
In the UK, the operator is expected to deliver a Field Management Plan to the DTI
(Owens, 1998) which sets out clearly:
The Reservoir Management Strategy - detailing the principles and objectives that
the operator will hold when making field management decisions and conducting
field operations, and;
The Reservoir Monitoring Plan - describing the data gathering and analysis
proposed to resolve existing uncertainties and understand dynamic performance
during development drilling and subsequent production phases
4. STRATEGY
Reservoir management (just like any other form of management) is simply about
following a systematic strategy (Satter and Thakur, 1994):
Developing,
Implementing,
Monitoring the plan, and,
Evaluating the results.
The integration of data at all stages is key to successful reservoir management and
should be considered a dynamic process.
An enormous amount of data are collected during a field life from discovery to
abandonment. An efficient data storage and retrieval system is a fundamental part
of Reservoir Management (Satter and Thakur, 1994). As the geoengineer will be
saddled (at least in the early part of their career) with whatever system is already
in place, and changing it is a major undertaking, databases are difficult to generalise
about. Up to 70% of the geoengineer's active working day is spent in data retrieval.
A standard computer software platform (through POSC - the Petrotechnical Open
Software Corporation) and modern compatible machines are helping the establishment
of more user-friendly, accessible databases. 3-D modelling packages are increasingly
being used for data storage - the "Shared Earth Model" (Gawith & Gutteridge, 1996).
Outsourcing of databasing is also occurring - however, a good, complete, readily
accessible database is what makes an Oil Company, and many developments in this
area may stay in-house.
Before one can implement a new reservoir management scheme one needs to identify
the remaining oil and gas. The location of remaining oil and gas was well described in
a study on Forties Field by Brand et al. (1996; Figure 2). Determining the remaining oil
requires evaluation of available infill well locations with production logs and RFTs.
Spaak et al. (1999) identify interesting and very subtle barriers in Fulmar Field (Figure
3). These barriers are likely to be thin flooding surface shales in a shallow marine
environment. Remaining oil and gas can be assigned to various definitions.
[Figure: field cross-section showing attic oil trapped above the top reservoir and in channel sands, with seawater below; vertical scale c. 50 m, horizontal c. 500 m]
Figure 2 Forties field - habitat of remaining oil (from Brand et al., 1996)
[Figure: Fulmar Field composite log (chrono- and lithostratigraphy with neutron/density curves, c. 10,000-10,800 ft), from the Kimmeridge Clay down through the Late Jurassic Fulmar Fm (Ribble, Avon, Mersey, Lydell, Usk, Forth units) to the Triassic Smith Bank Fm, showing stratigraphically trapped residual oil at flooding events within the prograding shoreface succession]
Figure 3 Fulmar Field - shoreface reservoir (from Spaak et al., 1999). Pressure
discontinuities and residual oil trapped by subtle shale breaks (by-passed and possibly
attic oil)
Trapped gas can also refer to gas trapped at the front of an advancing oil bank in front
of a waterflood. Repressuring in the latter case will encourage the gas to redissolve
in the oil.
[Figure: producer-injector cross-sections with their Lorenz and modified Lorenz plots (φh vs kh). A fining-up interval gives reduced sweep (candidate for water shut-off / plug back); a coarsening-up interval with long vertical and horizontal correlation gives improved sweep; short vertical and long horizontal correlation lengths also give improved sweep]
Figure 4 Schematic sweep characteristics defined by Lorenz plot, modified Lorenz plot
and correlation lengths
7. MANAGEMENT OF WATER
Water influx into wells requires a range of diagnoses and treatment opportunities.
Arnold et al. (Schlumberger Technical Review, Summer 2004) ranked various water
influx scenarios by complexity of treatment. We repeat those here.
(7) Coning (usually water, upward) or cusping (usually gas, downward) (Figure 5g).
Emplacement of a fluid/gel "pancake" extending some 50 ft might stop this. High-angle
wells are effective in reservoirs with high vertical permeability.
Figure 5a-j
Aadland et al. (1994) review the reservoir management of Statfjord Field (Figure 6a).
The team developed a plan maintaining the well production potential by high well
activity (Figure 6b). The plan is implemented by drilling high-angle and horizontal
wells towards the flanks of the field to drain attic oil. Reservoir simulation and
well studies address the long and short term monitoring and evaluation of the plan.
Other recovery mechanisms (WAG - water-alternating-gas - injection, polymer or
surfactant flooding) have been evaluated, along with other business opportunities
(satellite fields, gas storage) and quantification of remaining oil saturation (cased-hole
logging, sponge coring, resistivity logging); all indicate that the management team on
the Statfjord Field is taking a very proactive role, ensuring that the asset continues
to perform for years to come.
[Figure 6: Statfjord Field cross-sections (Brent and Statfjord reservoirs) showing remaining rim-oil locations and the drainage options: new completions, high-angle wells, infill wells, horizontal wells and extended-reach drilling (ERD)]
Mijnssen and Maskall (1994) also describe a proactive hunt for the remaining gas
in the Leman Field. The objectives of the plan are to locate remaining producible
gas. The plan is achieved by (re-)analysis of cores and logs. The study involved a
reclassification of lithofacies, a petrographic study and petrophysical analysis. The
authors conclude that horizontal wells drilled parallel to the palaeowind direction in
the aeolian sandstones are optimum (Figure 7) in this integrated study. In general there
are a number of opportunities to access remaining gas in the Rotliegend sandstone
(Figure 8).
[Figures 7 and 8: plan and cross-section of dune/interdune architecture in the Rotliegend sandstone beneath the Weissliegend, with permeability anisotropy ratios of kD/kI = 2-12 (dune) and kII/kI = 20-75 (interdune); gas trapped below the original gas-water contact, with horizontal well/multilateral and fraccing opportunities indicated]
An integrated re-interpretation of the dynamic and static data was required to provide
the framework for the management of the Ness reservoir in Brent Field (Bryant and
Livera, 1991). Mapping of individual genetic units was needed to identify accurately
the original oil-in-place and to efficiently manage continued production - monitoring
sweep and identifying by-passed oil (Figure 9).
[Figure 9: (a) cross-section A-B through the Broom, Rannoch and Etive formations and (b) the Ness Formation, showing wells, original and current pay, perforation and cement history (1981-1989), PLT and RFT surveys, and the OWC corrected to datum]
Figure 9 Brent field reservoir monitoring, initial and changing conditions of fluids and
perforations (Bryant and Livera, 1991)
In 1975, initial completions in the lower Brent Group were in the Etive and Rannoch
Formations (Figure 9(a)). By 1990, production was only coming from the lower Rannoch,
the rest having watered out (Figure 9(b)). In the Ness Formation, original production
was from sand 4, with later production from sand 3 only.
• Reservoir pressure is maintained, by voidage-balanced waterflood, at a level at
which 100% watercut wells will still flow to surface.
It was observed that Ninian’s three platforms could have been replaced with two with
the advances in drilling reach.
• Zonal flow tests over dedicated intervals can be used to monitor fluids and pressures.
• Production logs are routinely run to assess zonal contributions and production
splits
The challenge in Ninian is to improve the sweep efficiency (c.f. Brent, Bryant and
Livera, 1991). Ninian is a typical tilted Brent fault block with various heterogeneities
and a relatively low (38-49%) recovery. Vertical sweep efficiency is being addressed by
improved zonal selectivity in wells, remedial well work (perforation of new intervals,
squeeze cementing of intervals, reconfiguration of completions by wireline), varying
the injection allocation where appropriate, perforating thin bypassed intervals (oil
scavenging), intermittent production of high-watercut wells, chemical shut-off of
watered-out zones, and horizontal wells in the Rannoch. Areal sweep efficiency is being
addressed by artificial lift, infill drilling, flood realignment, development of new areas
(East Flank), and chasing Ness channels.
All this reservoir management is supported by various well management practices (use
of chrome completions, scale inhibitors, slimhole completions) in a very proactive
management strategy. Clearly the comprehensive monitoring programme has
provided invaluable data for managing the field, however, with 51-62% of the original
2,900MMSTB in place remaining, Ninian still called for a reservoir management
plan for improved recovery! Note that the Ninian Field was later sold by Chevron
to Oryx - who felt that they could optimise the recovery of remaining oil to their
economic advantage.
Figure 10 Increase of well density in Yibal Field (Mijnssen et al., 2003) from vertical
producers in 1979 to horizontal producers and injectors in 2002
[Figure: field water-oil ratio (fraction) versus recovery factor (fraction)]
Figure 11 Rise in field water - oil ratio (WOR) due to horizontal well activity, Yibal field
2003. Mijnssen et al., 2003
Figure 12 Schematic geology of the Yibal field, showing many thief zones (both
stratigraphic and structural) (Mijnssen et al., 2003)
Figure 14 Varying recovery factors for Heather field reservoir layers on the crest and
flank of the field, showing variations in vertical and areal sweep efficiency
[Figure: Heather field structure map with depth contours (c. 10,460-10,760 ft), Faults X and Y, and a 1 km scale bar]
Figure 15 Infill drilling in Heather field targeting three fault blocks (1-3)
8.4. Fraccing
The pressuring-up of the formation until it fails and the injection of proppant into the
artificial fractures is a method to access by-passed or residual hydrocarbons. Fraccing
is usually used to improve productivity, but can also be used to access additional
reserves.
9. FIELD REVITALISATION
New Technology
The project is expected to take Brent recovery factors to 59% oil recovery on the
west flank (57% total) and 80% of the original gas in place.
Approx
Depth
(ftss)
W E
6000
formity
8000 X-Uncon
GOC 9100'
12000
OIIP 3800mmbbls GIIP 7.5TCF Reserves(1999) 200mmbbls & 2.6TCF (biggest UK field in 1999)
Figure 17 Brent Field (from James et al., 1999). In 1999 this field was the largest in the
North Sea despite being in production for 20 years
10. SUMMARY
Develop plans in teams for the maximum oil production that resources allow. This may
be modified in the light of the company's or country's overall production strategy.
Evaluate using state-of-the-art geoscience and reservoir models. This phase could
also be a technical audit and could actually start the process (EDIM).
[Figure: field production profile versus time, showing the period of detailed seismic and geology studies, the start of production, and the economic threshold]
EXERCISE 1
Given the following permeability data from 3 wells (each data set contains 10 layers)

Well A - Fluvial

Porosity (frac)   Permeability (mD)
0.08              0.05
0.20              30
0.20              1000
0.18              5000
0.10              0.1
0.08              0.05
0.20              100
0.25              1250
0.30              800
0.13              3

Well B - Turbidite

Porosity (frac)   Permeability (mD)
0.20              100
0.25              1000
0.22              500
shale
shale
0.18              10
0.22              500
shale
0.21              80
0.18              20
1. Plot a Lorenz and Modified Lorenz Plot for each of the three reservoirs.
2. Use these two plots to discuss production characteristics and expected sweep
efficiency of the three reservoirs
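The two plots can be constructed from layered (φ, k, h) data as follows (a sketch, with h = 1 for each layer): the Lorenz plot ranks layers by decreasing k/φ before accumulating, while the modified Lorenz plot keeps them in depth order.

```python
import numpy as np

def lorenz(phi, k, h=None, rank=True):
    """Return (storativity, transmissivity) coordinates.
    rank=True  -> Lorenz plot (layers ordered by decreasing k/phi);
    rank=False -> modified Lorenz plot (layers kept in depth order)."""
    phi, k = np.asarray(phi, float), np.asarray(k, float)
    h = np.ones_like(k) if h is None else np.asarray(h, float)
    order = np.argsort(-k / phi) if rank else np.arange(k.size)
    kh, ph = (k * h)[order], (phi * h)[order]
    trans = np.insert(np.cumsum(kh) / kh.sum(), 0, 0.0)  # y-axis
    stor = np.insert(np.cumsum(ph) / ph.sum(), 0, 0.0)   # x-axis
    return stor, trans

# Well A (fluvial) layers from the exercise; shale layers would be dropped
phi_a = [0.08, 0.20, 0.20, 0.18, 0.10, 0.08, 0.20, 0.25, 0.30, 0.13]
k_a = [0.05, 30, 1000, 5000, 0.1, 0.05, 100, 1250, 800, 3]
stor, trans = lorenz(phi_a, k_a)

# Lorenz coefficient = 2 x area between the ranked curve and the 45-degree line
lc = 2 * (np.sum(0.5 * (trans[1:] + trans[:-1]) * np.diff(stor)) - 0.5)
```

The closer the ranked curve hugs the top-left corner (Lorenz coefficient near 1), the more the flow is carried by a few high-k/φ layers.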
EXERCISE 1 SOLUTION
(Using the permeability data from the 3 wells given in Exercise 1 above.)
WELL A: Fluvial
[Figures: Lorenz and modified Lorenz plots (cumulative transmissivity versus storativity) and porosity (decimal) versus permeability (mD) cross-plot for the fluvial well, with the relevant sweep schematic from Figure 4 (short vertical and horizontal correlation lengths; short vertical and long horizontal wells; improved sweep)]
WELL B: Turbidite
[Figures: Lorenz and modified Lorenz plots (cumulative transmissivity versus storativity) and porosity (decimal) versus permeability (mD) cross-plot for the turbidite well]

Shallow Marine

[Figures: Lorenz and modified Lorenz plots and porosity-permeability cross-plot for the shallow marine well, with the relevant sweep schematics from Figure 4 (fining-up: reduced sweep, water shut-off / plug back; coarsening-up: improved sweep with long vertical and horizontal wells)]
11. REFERENCES
Aadland, A., Dyrnes, O., Olsen, S.R., and Dranen, O.M., 1994, Statfjord Field: Field
and reservoir management perspectives, SPEFE, August, 157-161.
Arnold, R., Burnett, D.B., Elphick, J., Feeley, T.J. III, Galbrun, M., Hightower, M.,
Jiang, Z., Khan, M., Lavery, M., Luffey, F., and Verbeek, P., 2004, Managing Water
- from waste to resource, Oilfield Review, Schlumberger, 16(2) p26-41.
Brand, P.J., Clyne, P.A., Kirkwood, F.G., and Williams, P.W., 1996, The Forties Field:
20 years young, Journal of Petroleum Technology, April, 280-291, 1996
Bryant, I.D., and Livera, S.E., 1991, Identification of unswept oil volumes in a
mature field by using integrated data analysis: Ness Formation, Brent Field, UK
North Sea, in Generation, accumulation and production of Europe's hydrocarbons
(ed. A.M.Spencer) EAPG Spec. Publ. 1, 75-88.
Christiansen, S.H., and Wilson, P.M., 1998, Challenges in the Brent Field: Implementing
Depressurisation, synopsis of SPE paper 38469, JPT, Feb, 1998, 75-77.
Heward, A.P., and Gluyas, J.G., 2002, How can we help ensure the success of oil and
gas field rehabilitation projects, Petroleum Geoscience, 8, 299-306.
James, S.J., 1999, Brent Field Reservoir Modelling: the Foundations of a brown field
redevelopment. SPE Reservoir Evaluation and Engineering 2(1):104-111.
Mijnssen, F.C.J., and Maskall, R.C., 1994, The Leman Field: Hunting for the remaining
gas, SPE 28880, presented at Europec, 25-27th October.
Mijnssen, F.C.J., Rayes, D.G., Ferguson, I., Al Abri, S.M., Mueller, G.H., Razali,
P.H.M.A., Nieuwenhuijs, R., and Henderson, G.H., 2003, Maximising Yibal’s
remaining value, SPE Reservoir Evaluation and Engineering, August, 255-263.
Owens, J., 1998, The role of the DTI in the UK Oil and Gas Industry, presentation
at Heriot-Watt, 1998.
Pressney, R., 1993, Reservoir Management in the Ninian Field - a case study, paper
presented at Heriot-Watt SPE Lecture, December.
Satter, A., and Thakur, A., 1994, Integrated Reservoir Management, Pennwell, 335p.
Satter, A., Varnon, J.E., and Hoang, M.T., 1994, Integrated Reservoir Management,
JPT, December, 1057-1064
Spaak, P., Almond, J., Salahudin, S., Mohd Salleh, Z., and Tosun, O., 1999, Fulmar:
a mature field revisited, in Fleet and Boldy (Eds), Petroleum Geology of North West
Europe, The Geological Society, pp. 1089-1100.
Handling Uncertainty F I V E
C O N T E N T S
1. INTRODUCTION
2. RESERVES ESTIMATION
5. BAYES THEOREM
6. REFERENCES
7. APPENDIX
LEARNING OBJECTIVES
Having worked through this chapter the students will develop knowledge of:
1. INTRODUCTION
Uncertainty in prediction of oil recovery from reservoirs arises primarily from our
lack of knowledge of the subsurface. The uncertainty is there whether we choose to
acknowledge it or not, and the primary reason for quantifying uncertainty is to improve
the decisions taken. The important aspect of uncertainty quantification is what we do
with our estimates of uncertainty – acquire more data, create intervention plans, etc.
The ability to estimate uncertainty accurately – and we’ll define what we mean by
accurate estimation of uncertainty later – has a direct impact on a company’s financial
performance. This can be through reserves bookings, which affect a company’s share
price, or through an effective reservoir management plan, which can reduce OPEX
and increase income by increasing oil production.
To carry out this process effectively, we need to understand how the uncertainties in
both elements of this process arise. That is, how do the uncertainties in the measured
data arise, and how do the uncertainties in our beliefs or inferences compare with
the truth.
WHAT IS UNCERTAINTY?
It is important to think about what we mean by the word uncertainty: all too often,
the temptation is to rush in and “calculate the uncertainty” without being clear what
we mean by this statement.
Although these definitions provide partial answers to the first 3 questions, they do not
address the issue of a single correct answer to the question “what is the uncertainty”
nor do they address the question of what being right or wrong means.
More common in the natural sciences is the categorisation into Epistemic and
Aleatory uncertainty. Aleatory uncertainty is categorised by inherent randomness, is
due to the intrinsic variability of nature, and over time, all values will eventually be
sampled. Epistemic uncertainty, on the other hand, is due to our lack of knowledge,
our inadequate understanding, and epistemic uncertainty can be reduced by additional
measurements.
The categorisation into epistemic and aleatory uncertainty ties in with two distinct
statistical approaches – Bayesian and frequentist.
The Frequentist approach believes that probability only exists in reference to well-defined
random experiments. For a frequentist, probability is defined as the relative
frequency of a particular outcome in the limit of infinitely many repeated experiments.
A Bayesian, on the other hand, believes that probability theory can be applied to the
degree of belief in a proposition. For a Bayesian, the probability of an event represents
the degree of your belief in the likelihood of that event.
Both approaches to probability follow the same rules, and will give the same answers
given large amounts of data. The frequentist approach is limited to aleatory uncertainty
by definition, whereas the Bayesian approach can handle both epistemic and aleatory
uncertainty. But, Bayesian methods have not received wholesale acceptance largely
because of the subjective element introduced by the concept of “degree of belief”.
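As a minimal illustration of updating a "degree of belief" with Bayes' theorem (all numbers invented): suppose we believe two fault blocks are in pressure communication with prior probability 0.4, and an interference test gives a positive response.

```python
# Prior degree of belief that the blocks are connected (assumed)
p_conn = 0.4
# Assumed likelihoods of a positive interference-test response
p_pos_given_conn = 0.9       # test responds when blocks really are connected
p_pos_given_isolated = 0.2   # false-positive rate when they are isolated

# Bayes' theorem: P(conn | pos) = P(pos | conn) P(conn) / P(pos)
p_pos = p_pos_given_conn * p_conn + p_pos_given_isolated * (1 - p_conn)
p_conn_given_pos = p_pos_given_conn * p_conn / p_pos
print(p_conn_given_pos)  # 0.75: the positive test raises the belief from 0.4
```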
Data Uncertainty
Where do the uncertainties in the data come from?
First of all, reservoir properties are variable, and so there is an issue of sampling
density. Have enough samples been taken to capture variability and trends? What
has happened to the samples since they were taken? Did the collection method
change their properties?
Second, how does the measurement process work? What is actually measured, and how
is the quantity of interest calculated? How are the uncertainties accounted for?
Not all of these uncertainties will be of equal magnitude, but you will have to consider
the effects of both variance and bias.
Assessment of uncertainties in data can be complex. It is not clear that even apparently
well carried out analyses always capture the full range of uncertainty. As an example,
consider measurements of the speed of light as shown in Figures 1 and 2.
These figures show that estimating accurate uncertainty bounds can be very hard,
even for a quantity as well-defined as the speed of light.
[Figure 1: two panels of measured speed of light (km/s; c. 299,600-300,000 and c. 299,750-299,840) versus year of experiment, with error bars]
Figure 1 Measurements of the velocity of light; 1875-1958. Results are as first reported,
with correction from air to vacuum where needed. The uncertainties are also as originally
reported, where available, or as estimated by the earliest reviewers. Error bars show
standard error (s.e. = 1.48 x probable error)
Figure 2 [Estimated speed of light (km/s) versus year of estimate, 1920-1980, with the accepted 1984 value marked]
2. RESERVES ESTIMATION
The concept of “proven” reserves is hard to square with the concept of uncertainty in
reserves estimates. What exactly do we mean by proven when we can’t be certain?
Reserves definitions have changed over recent years to account for the increasing
importance of uncertainty.
Let’s look at how the reserves definitions have evolved over time:
What does “estimated with reasonable certainty” in this definition mean? There
are two definitions given in the August 1996 article. If using deterministic methods,
it means “with a high degree of confidence”; if using stochastic methods, it means
“at least 80%”.
Notice that this definition still does not remove ambiguity. One person’s “high
degree of confidence” may well be significantly different from someone else’s. If
using stochastic methods, are you going to choose a p80, a p90, p95, or even p99 to
satisfy “at least 80%”?
The current SPE definitions assign the following values to proven, probable, and
possible:
Note that SPE regulations for proved oil require financial and regulatory conditions
to be met as well.
How do we compute OIP and reserves uncertainties? The first thing to do is to look
at the definition of oil in place, which is given by:

OIP = GRV × N/G × φ × So / Bo    (1)
So, if we can assess uncertainties in each of the terms in this equation, we can combine
them to compute the overall uncertainty in oil in place.
How do we decide what distributions we should use for each of the individual terms?
For example, if we have a porosity log showing porosity varies in one well from 3%
to 28%, with a mean value of 22%, should we use those three numbers to define a
triangular distribution for porosity? The answer is no, for reasons explained below.
The correct distributions to use in equation (1) are distributions of the average values.
For example, the porosity in equation (2) is the average porosity in the reservoir, and
it is the uncertainty in that average value that goes into the calculation of OIP.
We can see why this should be the case by looking in detail at how we calculate Oil in
Place. First, let us define a characteristic function, χ(x,y,z) which is 1 where we have
pay and zero otherwise. Then our Oil in Place is given by (in reservoir units):
OIP = ∫ φ So χ dτ    (2)

Basically, this equation sums up the porosity times saturation everywhere we have pay.
The average oil saturation is then

S̄o = ∫ φ So χ dτ / ∫ φ χ dτ = OIP / ∫ φ χ dτ    (3)

Similarly, the average porosity would be given by

φ̄ = ∫ φ χ dτ / ∫ χ dτ    (4)
So OIP is given by

OIP = φ̄ S̄o ∫ χ dτ

The last integral is summing up all the pay throughout the reservoir, and so is the net
rock volume, or the net-to-gross times the gross rock volume. So, the oil in place at
surface conditions is given by

OIP = GRV × N/G × φ̄ × S̄o / Bo
Notice that we have switched back to surface quantities through the introduction of
Bo, and that all quantities are averages given by the above equations.
How would we use these ideas in computing uncertainty in OIP? Suppose you have 4
wells, with saturation and porosity data. For each well, you can compute the average
saturation and average porosity. Look at the uncertainty in those quantities and use
that as a guide to estimate the uncertainties in average porosities and saturations.
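A minimal Monte Carlo sketch of this: sample distributions of the *average* quantities (all ranges invented for illustration) and combine them through the OIP formula.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Distributions of FIELD-AVERAGE quantities (illustrative numbers only).
# Note the narrow porosity range: this is uncertainty in the average,
# not the point-to-point variability seen on a log.
grv = rng.triangular(4.0e8, 5.0e8, 6.5e8, n)  # gross rock volume, m3
ntg = rng.triangular(0.50, 0.60, 0.70, n)     # net-to-gross
phi = rng.triangular(0.19, 0.22, 0.24, n)     # average porosity
so = rng.triangular(0.70, 0.80, 0.85, n)      # average oil saturation
bo = 1.2                                      # formation volume factor, rm3/sm3

stoiip = grv * ntg * phi * so / bo            # stock-tank m3
# Oilfield convention: P90 is the low (90% chance of exceeding) case
p90, p50, p10 = np.percentile(stoiip, [10, 50, 90])
```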
Figure 3 [Schematic P10/P50/P90 reserves estimate range through the exploration, appraisal, development and reservoir management phases, with the development decision marked; the range is expected to narrow with time]
Figure 4 [P10, P50 and P90 STOIIP estimates versus time (years) for a field, with the development decision marked; the uncertainty range does not narrow as expected]
What is happening here? Why are our expectations of decreasing uncertainty not
being met? This pattern is repeated across different companies, so it is unlikely that
there is a failing of one company’s method that is responsible for this underestimation
of uncertainty.
Figure 5 [Percentage change in P50 reserves (from about +60%, underestimate, to −60%, overestimate) versus degree of reservoir complexity, distinguishing submarine fan reservoirs from other reservoirs]
Figure 5 shows a summary from Dromgoole and Speers of the change in p50 reserves
from discovery to 4 years later categorised by reservoir type. The message here is
that reserves tend to be underestimated for simple reservoirs and overestimated for
complex reservoirs.
Psychological Research
There have been many studies reported in the psychology literature looking at how
effective people are at knowing what they do and don’t know. See for example
Kahneman and Tversky’s book.
Figure 6 [Calibration curves from four studies: proportion of answers correct versus subjects' stated confidence, both from 0.5 to 1.0]
Figure 6 shows an example outcome from one of the studies. In this case, a collection
of people were asked questions with a choice of two answers, such as “Which city is
closer to London – (a) New York, or (b) Moscow”. They were asked to select one of
the answers and then assess their confidence in the correctness of their answer from
50% (pure guesswork) to 100% (absolute certainty). Each study asked a number
of people between 50 and 100 questions and then looked at the frequency of correct
answers in each probability decile. If people are well calibrated, we would expect
on average 50% of the guesses to be correct, 70% of the answers assessed as 70%
confidence to be correct and so on.
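The binning procedure described here is straightforward to sketch in code. The following is a minimal illustration (the `calibration_curve` helper and the response data are hypothetical, not taken from the studies cited):

```python
from collections import defaultdict

def calibration_curve(responses):
    """Group (stated_confidence, was_correct) pairs into confidence
    deciles and return the observed proportion correct in each bin."""
    bins = defaultdict(list)
    for confidence, correct in responses:
        # Bin the stated confidence to the nearest decile (0.5, 0.6, ..., 1.0)
        decile = round(confidence * 10) / 10
        bins[decile].append(1 if correct else 0)
    return {d: sum(v) / len(v) for d, v in sorted(bins.items())}

# Hypothetical responses: a well-calibrated subject would produce
# proportions close to the stated confidence in each bin.
responses = [(0.5, True), (0.5, False), (0.7, True), (0.7, True),
             (0.7, False), (0.9, True), (0.9, True), (0.9, False)]
print(calibration_curve(responses))
```

Plotting observed proportion correct against the bin value gives a calibration curve like Figure 6.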
Figure 6 shows results from 4 independent studies. We can see that there are remarkable
similarities between the different studies (which all used questions considered
“difficult”). In all cases, there is no real improvement in accuracy of answers until
confidence rises above 80%, and accuracy at 100% confidence seems to be around
75–85%.
The graph shown in Figure 6 is called a calibration curve, and shows how well
calibrated our estimates of uncertainty are. Clearly, the Dromgoole and Speers study
shows that OIP estimates in the oil industry are not well calibrated. Other industries
also produce calibration curves, and an example from weather forecasting is shown in
Figure 7. Here, data from a set of weather forecasts is plotted showing the frequency
of observed precipitation as a function of forecast probability of precipitation. The
numbers next to each point are the total number of forecasts at that probability.
Clearly, despite what we may think about the quality of weather forecasts, they do a
much better job than the oil industry!
Figure 7: Calibration curve for weather forecasts — observed relative frequency of precipitation (%) against forecast probability (%); the number beside each point is the total number of forecasts at that probability.
If w = x + y, the mean of w is

$$\bar{w} = \frac{1}{n}\sum_i (x_i + y_i) = \frac{1}{n}\sum_i x_i + \frac{1}{n}\sum_i y_i = \bar{x} + \bar{y}$$

so the mean of the sum is the sum of the means. For the variance,

$$\sigma_w^2 = \overline{(x+y)^2} - (\bar{x} + \bar{y})^2 = \overline{(x+y)^2} - \bar{x}^2 - \bar{y}^2 - 2\bar{x}\bar{y}$$

Expanding (x+y)², we get

$$\sigma_w^2 = \overline{x^2} + \overline{y^2} + 2\overline{xy} - \bar{x}^2 - \bar{y}^2 - 2\bar{x}\bar{y}$$

which becomes

$$\sigma_w^2 = \sigma_x^2 + \sigma_y^2 + 2\left(\overline{xy} - \bar{x}\bar{y}\right) = \sigma_x^2 + \sigma_y^2 + 2r\sigma_x\sigma_y$$

For uncorrelated distributions, the value of r is zero, and so the variance of the sum
of distributions is equal to the sum of the individual variances.
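The result is easy to verify numerically. The sketch below (the distributions and sample size are illustrative, not from the text) draws two independent samples with numpy and compares the variance of the sum with the sum of the variances:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two independent (and therefore uncorrelated) distributions
x = rng.normal(loc=10.0, scale=2.0, size=n)
y = rng.triangular(left=5.0, mode=8.0, right=14.0, size=n)
w = x + y

# The mean of the sum equals the sum of the means (exact identity)
assert abs(w.mean() - (x.mean() + y.mean())) < 1e-6

# For uncorrelated x and y the 2*r*sigma_x*sigma_y term drops out,
# so the variance of the sum is close to the sum of the variances
print(w.var(), x.var() + y.var())
```

The two printed variances agree to within Monte Carlo sampling error.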
Figure 8: Crystal Ball forecast chart for the sum (10,000 trials) — frequency histogram of the total distribution.
The mean of the distribution computed by the parametric method is 2867, and by
Crystal Ball is 2865 (shown in Figure 8). The standard deviation (square root of the
variance) is 398 for the parametric method, and 390 for Crystal Ball.
To quantify the uncertainty in Oil in Place, we need to multiply the distributions for
each of the input parameters. Table 2 shows a set of parameters for this example system.
The first step is to identify suitable distributions for each of the input values.
Remember, these are average values, so we are looking for distributions that express
the uncertainty in average porosity, average So, average net-to-gross, as well as the
distribution of uncertainty in gross rock volume. The largest uncertainty here is
almost certainly gross rock volume.
The mean of the distribution is the product of the individual means. We calculate
the mean of each triangular distribution using µ = (a+b+c)/3; this calculation is
shown in the “mean” column in Table 3. Remember, we are multiplying by 1/Bo,
so we need to compute the mean and variance of 1/Bo rather than Bo. The variance
is calculated using σ² = (a² + b² + c² − ab − ac − bc)/18, and is shown in the “variance”
column in Table 4. Once we have the mean and variance we can compute 1 + σ²/µ²
for each distribution. We then multiply the means and the values of 1 + σ²/µ² to
compute the mean and the value of 1 + σ²/µ² for the combined distribution. From the
mean and 1 + σ²/µ² we can compute the variance of the new distribution.
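The parametric method for products can be sketched as follows. The helper names and the triangular (min, mode, max) parameters below are hypothetical stand-ins, not the Table 2 values:

```python
import math

def tri_mean(a, b, c):
    # Mean of a triangular distribution with minimum a, mode b, maximum c
    return (a + b + c) / 3.0

def tri_var(a, b, c):
    # Variance of a triangular distribution
    return (a*a + b*b + c*c - a*b - a*c - b*c) / 18.0

def product_moments(params):
    """Combine independent distributions multiplicatively: the means
    multiply, and so do the (1 + variance/mean^2) factors."""
    mean, factor = 1.0, 1.0
    for a, b, c in params:
        m, v = tri_mean(a, b, c), tri_var(a, b, c)
        mean *= m
        factor *= 1.0 + v / m**2
    variance = (factor - 1.0) * mean**2
    return mean, variance

# Hypothetical inputs: GRV, N/G, porosity, So, 1/Bo as (min, mode, max)
params = [(80e6, 100e6, 130e6), (0.5, 0.7, 0.8),
          (0.15, 0.20, 0.25), (0.6, 0.7, 0.8),
          (1/1.4, 1/1.3, 1/1.2)]
mean, var = product_moments(params)
print(mean, math.sqrt(var))
```

The key identity is that for independent factors E[w²] = ∏E[x²], so the factors 1 + σ²/µ² multiply while the means multiply separately.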
Figure 9: Overlay chart — frequency comparison of the Monte Carlo OIP distribution with a log-normal curve of the same mean and standard deviation.
The mean from the parametric method is 14,224,013, compared with a value calculated
from Crystal Ball of 14,187,863 (Figure 9). The standard deviations were 3,864,486
(parametric) and 3,951,269 (Crystal Ball). Figure 9 also shows a log-normal curve with
that mean and standard deviation, showing how close the distribution is to log-normal.
5. BAYES THEOREM
Bayes Theorem provides a way to update our estimates of probability given new
data. For example, we may have generated estimates of uncertainty in Oil in Place,
and wish to update them given some production data – for example rate or pressure
data. Bayes Theorem relates posterior probabilities to prior probabilities through a
likelihood term. The prior probabilities are our belief before we acquire the data. The
posterior probabilities are the probabilities updated to reflect the new information.
An example of where we might wish to use Bayes Theorem is where we have estimated
uncertainty in OIP and have then acquired some production data. Bayes Theorem
provides a consistent way of updating our beliefs with new information.
$$p(m_i \mid O) = \frac{p(O \mid m_i)\,p(m_i)}{\sum_i p(O \mid m_i)\,p(m_i)}$$

Here m_i is one of a set of possible models, p(m_i) is the prior probability of the model,
p(O|m_i) is the likelihood of the observations given the model, and p(m_i|O) is the
posterior probability of the model given the observations.
What Bayes Theorem says is that to update the probability of a model given some
observation, we assume that the model is true and compute the likelihood of seeing
the observation assuming that the model is true. If we multiply that likelihood by
the original probability, and then normalise, that gives us the updated (or posterior)
probability.
Figure 10: Water rate (stb/d) against time (days) — simulated and observed rates for the history match example.
As an illustration, consider the water rate history match graph shown in Figure 10.
If we assume that the simulated model is correct, how do we calculate the probability
of the observation? Assume that each point has some experimental uncertainty about
it as shown in Figure 10, and that the experimental uncertainty is given in the form
of a Gaussian:
$$p = e^{-\frac{(q - q_{obs})^2}{2\sigma^2}} \qquad (7)$$

The probability of the true measured value being equal to our simulated rate is
given by substituting the simulated rate for q in Equation 7. Thus the likelihood of
observing a single point is

$$p = e^{-\frac{(q_{sim} - q_{obs})^2}{2\sigma^2}} \qquad (8)$$
The probability of all the observations being consistent with the model is the product
of the individual probabilities if all the observations are independent.
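In code, the likelihood of a whole history match is therefore the product of equation (8) over all points, which is usually accumulated as a sum of logs for numerical stability. A minimal sketch, with made-up rates and an assumed measurement error:

```python
import math

def log_likelihood(q_sim, q_obs, sigma):
    """Sum of Gaussian log-likelihoods for independent observation errors:
    log p = -sum((q_sim - q_obs)^2) / (2 sigma^2), up to a constant."""
    return -sum((s - o) ** 2 for s, o in zip(q_sim, q_obs)) / (2 * sigma**2)

# Hypothetical observed and simulated water rates (stb/d)
observed  = [0.0, 50.0, 180.0, 400.0, 620.0, 800.0]
simulated = [0.0, 60.0, 170.0, 410.0, 600.0, 820.0]
sigma = 30.0  # assumed measurement error

# The product of the point likelihoods is exp of the summed logs
print(math.exp(log_likelihood(simulated, observed, sigma)))
```

Working in log space avoids underflow when many points are multiplied together.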
Figure 11: Oil rates from the Wytch Farm reservoir model run with six alternative top surfaces (left: models matching the observed data; right: models failing to match).
As an example of how to use these ideas, Figure 11 shows oil rates from a simulation
of the Wytch Farm Reservoir carried out as part of an uncertainty study at BP. The
reservoir model was constrained to produce at the observed rate, and was run with
6 possible top surfaces.
Initially, all 6 top surfaces were judged as equally likely, so the prior probabilities
were set to 1/6. On running the reservoir model, 3 of the top surfaces showed good
agreement with the observed data (left hand picture), and 3 failed to match (right
hand picture).
A first update with Bayes theorem could then be to set 3 likelihoods equal to 1, and
3 equal to zero. The normalising constant in the denominator of Bayes Theorem is
then 3*1/6 + 3 * 0 = 1/2. For the successful models, the prior probability is 1/6, and
after the Bayes update is (1/6)/(1/2) or 1/3.
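The update just described can be written directly from Bayes Theorem. A minimal sketch reproducing the numbers above:

```python
def bayes_update(priors, likelihoods):
    """Posterior = prior * likelihood, normalised over all models."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Six equally likely top surfaces; three match the data, three do not
priors = [1/6] * 6
likelihoods = [1, 1, 1, 0, 0, 0]
print(bayes_update(priors, likelihoods))  # matching models go from 1/6 to 1/3
```

Replacing the 0/1 likelihoods with the Gaussian values from equation (8) gives the more sophisticated update shown in Figure 12.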
A more sophisticated approach uses the Gaussian errors discussed above, and is shown
in Figure 12. We computed the least squares error for each model and then used the
corresponding Gaussian likelihood in the Bayes update.
Figure 12: Prior, posterior and improved probability estimates for the six top-surface models.
6. REFERENCES
7. APPENDIX
This section reviews statistical terms and concepts we need in uncertainty quantification.
The mean value is the most frequently used and most commonly known average. It
is given by
$$\mu_X = \frac{1}{n}\sum_i X_i$$

It is also often represented as <X>, with the angled brackets implying a sum (or
integral) over all the X values, and dividing by the number of points, or as $\bar{X}$.
The median value of a probability distribution p(x) is the value of x at the midpoint
of the cumulative probability distribution.
$$\int_{-\infty}^{x_{med}} p(x)\,dx = \frac{1}{2} = \int_{x_{med}}^{\infty} p(x)\,dx$$
The mode is the most likely value of x; that is, the value with the highest probability of
occurrence. For a Gaussian distribution, the mean, median, and mode are all identical.
Figure A1: Mean, Median, Mode for a Normal Distribution (from Spencer et al)

Figure A2: Mean, Median, Mode for a Log-Normal Distribution (from Spencer et al)
Variance
The variance is the expected value of $(X - \bar{X})^2$. For a finite number of variables it
is given by

$$\sigma_X^2 = \frac{1}{n-1}\sum_i \left(X_i - \bar{X}\right)^2$$
The standard deviation is the square root of the variance.
Coefficient of Variation
The coefficient of variation is the standard deviation divided by the mean.
$$\kappa_X = \frac{\sigma_X}{\bar{X}}$$
Correlation Coefficient
The correlation coefficient measures the degree of correlation between two distributions.
The coefficient is calculated by:
$$r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2 \sum_i (y_i - \bar{y})^2}}$$
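These summary statistics can all be computed with numpy; the data below are made up for illustration:

```python
import numpy as np

x = np.array([2.1, 3.5, 4.0, 5.2, 6.8])
y = np.array([1.0, 2.2, 2.6, 3.9, 4.5])

mean = x.mean()
median = np.median(x)
variance = x.var(ddof=1)        # n-1 denominator, matching the formula above
std_dev = np.sqrt(variance)
coeff_var = std_dev / mean      # coefficient of variation
r = np.corrcoef(x, y)[0, 1]     # correlation coefficient between x and y

print(mean, median, std_dev, coeff_var, r)
```

Note the `ddof=1` argument: numpy's default variance divides by n rather than n−1.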
Types of Distribution
There are a number of common distributions used in uncertainty quantification in the
oil industry. Here we list the distributions and some of their properties.
The observations are a time series of measurements of cumulative oil produced and
pressure – (P(t_i), Z(t_i)) = (P_i, Z_i). At each time, the errors in each quantity are σ_P
and σ_Z. Material balance is a simplified model of the reservoir which assumes pressure
gradients are negligible and treats the reservoir as a simple tank with only average
quantities of interest. This leads to a material balance relationship, such as
Z = Z(P, Ne, Wi, Cr), expressing the oil produced in terms of the average pressure,
compressibility, initial oil in place, and water encroachment (plus other PVT terms
such as FVF etc).
Because there are errors in each quantity, it is less straightforward to determine the
appropriate likelihood or misfit formula. To determine the correct expression, we
assume Gaussian errors in each quantity; then the pdf for the true location of the
measured value is
$$p(P, Z) = e^{-\left(\frac{(P - P_i)^2}{2\sigma_P^2} + \frac{(Z - Z_i)^2}{2\sigma_Z^2}\right)}$$
The key term is the likelihood p(Pi, Zi |Ne,Wi,Cr) - the probability of the observed
data given the model parameters. The model describes a curve passing close to the
observed data, and so we have to compute the maximum likelihood along the path
described by the material balance equation given the parameters specified.
To maximize the likelihood analytically, we have to linearise the curve Z(P) around
the point P_i,

$$Z(P) = Z(P_i) + \frac{\partial Z}{\partial P}(P - P_i)$$

and then minimize the negative log of the likelihood

$$M = \frac{(P - P_i)^2}{2\sigma_P^2} + \frac{(Z - Z_i)^2}{2\sigma_Z^2} = \frac{(P - P_i)^2}{2\sigma_P^2} + \frac{\left(Z(P_i) + \frac{\partial Z}{\partial P}(P - P_i) - Z_i\right)^2}{2\sigma_Z^2}$$

Differentiating with respect to P,

$$\frac{dM}{dP} = \frac{(P - P_i)}{\sigma_P^2} + \frac{\frac{\partial Z}{\partial P}\left(Z(P_i) + \frac{\partial Z}{\partial P}(P - P_i) - Z_i\right)}{\sigma_Z^2}$$

$$\frac{dM}{dP} = (P - P_i)\left(\frac{1}{\sigma_P^2} + \frac{1}{\sigma_Z^2}\left(\frac{\partial Z}{\partial P}\right)^2\right) + \frac{1}{\sigma_Z^2}\frac{\partial Z}{\partial P}\left(Z(P_i) - Z_i\right)$$

Setting dM/dP = 0 and solving for (P − P_i),

$$(P - P_i) = -\frac{\frac{1}{\sigma_Z^2}\frac{\partial Z}{\partial P}\left(Z(P_i) - Z_i\right)}{\frac{1}{\sigma_P^2} + \frac{1}{\sigma_Z^2}\left(\frac{\partial Z}{\partial P}\right)^2} = -\frac{\frac{\partial Z}{\partial P}\left(Z(P_i) - Z_i\right)\sigma_P^2}{\sigma_Z^2 + \left(\frac{\partial Z}{\partial P}\right)^2\sigma_P^2}$$
Substituting this expression back into the equation for the misfit (and noting that, at
the minimum, $Z - Z_i = Z(P_i) + \frac{\partial Z}{\partial P}(P - P_i) - Z_i = \frac{\left(Z(P_i) - Z_i\right)\sigma_Z^2}{\sigma_Z^2 + \left(\frac{\partial Z}{\partial P}\right)^2\sigma_P^2}$), we obtain

$$M = \frac{(P - P_i)^2}{2\sigma_P^2} + \frac{(Z - Z_i)^2}{2\sigma_Z^2} = \frac{\left(\frac{\partial Z}{\partial P}\right)^2\left(Z(P_i) - Z_i\right)^2\sigma_P^4}{2\sigma_P^2\left(\sigma_Z^2 + \left(\frac{\partial Z}{\partial P}\right)^2\sigma_P^2\right)^2} + \frac{\left(Z(P_i) - Z_i\right)^2\sigma_Z^4}{2\sigma_Z^2\left(\sigma_Z^2 + \left(\frac{\partial Z}{\partial P}\right)^2\sigma_P^2\right)^2}$$

Collecting the two terms over the common denominator,

$$M = \frac{\left(Z(P_i) - Z_i\right)^2\left(\left(\frac{\partial Z}{\partial P}\right)^2\sigma_P^2 + \sigma_Z^2\right)}{2\left(\sigma_Z^2 + \left(\frac{\partial Z}{\partial P}\right)^2\sigma_P^2\right)^2}$$

$$M = \frac{\left(Z(P_i) - Z_i\right)^2}{2\left(\sigma_Z^2 + \left(\frac{\partial Z}{\partial P}\right)^2\sigma_P^2\right)}$$
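The final misfit formula can be evaluated directly in code. In the sketch below, the material balance curve Z(P) and its derivative are replaced by a hypothetical linear stand-in, and the observation values and errors are made up:

```python
def misfit(P_obs, Z_obs, Z_model, dZdP, sigma_P, sigma_Z):
    """Total misfit M summed over observation points, using
    M_i = (Z(P_i) - Z_i)^2 / (2 (sigma_Z^2 + (dZ/dP)^2 sigma_P^2)),
    which folds the pressure error into an effective variance."""
    M = 0.0
    for P_i, Z_i in zip(P_obs, Z_obs):
        residual = Z_model(P_i) - Z_i
        eff_var = sigma_Z**2 + dZdP(P_i) ** 2 * sigma_P**2
        M += residual**2 / (2.0 * eff_var)
    return M

# Hypothetical: oil produced (MMstb) falls linearly with average pressure (psi)
Z_model = lambda P: 0.01 * (5000.0 - P)
dZdP = lambda P: -0.01

P_obs = [4900.0, 4700.0, 4500.0]
Z_obs = [1.1, 2.9, 5.2]
print(misfit(P_obs, Z_obs, Z_model, dZdP, sigma_P=20.0, sigma_Z=0.3))
```

Setting sigma_P to zero recovers the ordinary least squares misfit in Z alone, which shows how the pressure error simply inflates the effective variance of each point.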
Course Code: G127
21/07/16
SECTION A
1a
Describe what you understand by the following terms used in
geostatistical modelling of reservoir sandstones:-
1b
Describe the model of a Carboniferous outcrop on the next page.
Describe the steps taken to identify facies from core and the
modelling techniques used to represent the geometry and
distribution of facies. [5]
Figure for Question 1
2a
Geological modelling is now a standard technique used in the building
of computer simulation grids for reservoir simulation.
What techniques or considerations are available for geoscientists
and engineers to ascertain that a reservoir model is correct? [10]
2b
Describe the various ways 3-D geological and geostatistical
modelling can be used in petroleum engineering. [10]
2c
What developments in the technology do you expect to see in the
next 5 years?
[5]
3a
You are presented with two wells from a braided fluvial environment.
Describe the workflow you would follow to subdivide facies and to
model the space between the wells as part of a reservoir simulation model.
Assume the well spacing in this case is a typical 1000m spacing.
[20]
3b
How would you expect your model to change if the well spacing
was 250m? [5]
SECTION B
4a
In the preface to his book on Shared Earth Modelling, John Fanchi of the
Colorado School of Mines, claims that;
"As reservoir engineers, geophysicists and petroleum geologists create
separate simulations of a reservoir, that vary on the technology each earth
scientist is using, shared earth modeling allows them to consolidate their findings
and create an integrated simulation. This approach will provide specialists with
a more realistic picture of reservoirs, and thus can drastically cut the cost of
drilling and time mapping the reservoir."
4b
Look forward to 2020 and describe what you would imagine to
be the characteristics and usage(s) of the 3-D shared earth
computer model. [5]
5a
Give a definition of mature field reservoir management. [4]
5b
What is meant by the following terms in reservoir management?
i) Synergy
v) Sweep efficiency
vi) IOR/EOR
6a
Explain the concept of a calibration curve in uncertainty quantification.
Comment on the track record of oil in place uncertainty estimation.
[5]
6b
State Bayes' Theorem. Explain one way of calculating likelihood for a
history matching problem. State the assumptions involved and explain
how your uncertain forecasts might be affected if the assumptions were
violated. [10]
6b Continued...
You are given a portfolio of 3 fields with p90, p50, p10 reserve
estimates as shown in the table. State whether the distribution for
each field is likely to be normal or log-normal, and explain your
reasoning. Estimate p90, p50, p10 reserves for the portfolio.
[10]
Normal distribution:
p90 = µ − 1.28σ
p50 = µ
p10 = µ + 1.28σ

Log-normal distribution:
p90 = e^(µ − 1.28σ)
p50 = e^µ
p10 = e^(µ + 1.28σ)
End of Paper
Model Solutions
Geomodelling & Reservoir Management
Note: the exam required answers to two questions from each section. In the
solutions, key points are given – answers should also give examples where appropriate.
Section A
A1.
same location.
stationarity.
etc. for populating models. The data are measured through outcrop
3. Pixel Model
anisotropy.
4. Conditional simulation
between impedance and porosity would be used. For each cell the
are conditioned on well or seismic data are more ‘expensive’ (in terms
model. Assuming the scale is in metres this model is 3km x 1.5km. The
wells in the model are approx 800m apart which is close to offshore
seen this outcrop on their first day of the course. However, by the
time that this question comes up in the exam, they will have forgotten
this fact!).
channels are blocks. It looks as if the channels cut into the incised
the case. One would expect incised channels to cut out coals and
limestones – but this doesn't seem to happen. Bars might have been
A2.
“All models are wrong” in that they can never be an exact replication of reality.
• Performance prediction
• Data storage
• Numerical well testing
• Geosteering operations
Next 5 years:
Increased functionality (fault modelling – complex objects, point-bar modelling)
A3.
channels
• Dynamic calibration
If the wells are closer then you might expect easier correlation and
more flow units. It depends on whether the wells are across the channel flow
Section B
B1.
(i) “Shared” means that the model data can be used by many disciplines.
(iii) As the data are to be used by many disciplines, they need to be
robust. Data should have the same support volume appropriate for the
(v) Cost reductions due to drilling (infill drilling locations, rapid updating
(vi) The future will be increased capability in the software available on the
desk and a seamless workflow up and down through the disciplines.
Need for rapid update of model for new drilling locations. Increased
B2.
of the asset. Mature means after the initial development phase, when
the geology becomes less uncertain and is usually associated with declining
pressure, etc.
(1) Synergy – the sum of the parts in a team effort is more than the sum of the individual contributions.
(2) Attic Oil – oil remaining updip (and out of reach) above the highest
(3) Water shut-off is the shutting off of perforations through which water
is entering the well by various methods – cementing (if the lower perfs
(4) Residual oil – oil trapped in the pore space after a waterflood.
gas displacement.
(6) IOR – any improved oil recovery mechanism (including EOR), infill
(7) Infill drilling – drilling for attic or bypassed oil by sidetracks or high
B3.
Method:
(4) Convert mean, variance to P10, P50, P90 using formulae for log-
σ² 0.165
σ 0.406  σ 602.462
µ 7.179
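The method outlined for B3 can be sketched in code. This is an assumed implementation, not the official marking scheme: each field's log-normal is characterised from its P50 and P10, field means and variances are summed (assuming independence), and the portfolio is approximated as log-normal by moment matching:

```python
import math

def lognormal_from_percentiles(p90, p50, p10):
    """Recover (mu, sigma) of a log-normal from its percentiles,
    using p50 = exp(mu) and p10 = exp(mu + 1.28 sigma)."""
    mu = math.log(p50)
    sigma = (math.log(p10) - mu) / 1.28
    return mu, sigma

def moments(mu, sigma):
    # Arithmetic mean and variance of a log-normal distribution
    mean = math.exp(mu + sigma**2 / 2)
    var = (math.exp(sigma**2) - 1) * mean**2
    return mean, var

# Hypothetical field estimates (p90, p50, p10) in MMstb
fields = [(20, 40, 80), (10, 25, 60), (30, 50, 85)]

total_mean = total_var = 0.0
for p90, p50, p10 in fields:
    m, v = moments(*lognormal_from_percentiles(p90, p50, p10))
    total_mean += m
    total_var += v  # independence assumed, so variances add

# Approximate the portfolio as log-normal with the summed moments
sigma2 = math.log(1 + total_var / total_mean**2)
mu = math.log(total_mean) - sigma2 / 2
s = math.sqrt(sigma2)
print(math.exp(mu - 1.28*s), math.exp(mu), math.exp(mu + 1.28*s))
```

Because the portfolio mean exceeds its P50, the summed P50 is larger than the sum of the individual P50s — adding independent fields narrows the relative uncertainty.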