
Linah Mohamed, SPE, Mike Christie, SPE, and Vasily Demyanov, SPE,

Institute of Petroleum Engineering, Heriot-Watt University

Summary

History matching and uncertainty quantification are currently two important research topics in reservoir simulation. In the Bayesian

approach, we start with prior information about a reservoir (e.g.,

from analog outcrop data) and update our reservoir models with

observations (e.g., from production data or time-lapse seismic).

The goal of this activity is often to generate multiple models that

match the history and use the models to quantify uncertainties in

predictions of reservoir performance. A critical aspect of generating multiple history-matched models is the sampling algorithm

used to generate the models. Algorithms that have been studied

include gradient methods, genetic algorithms, and the ensemble

Kalman filter (EnKF).

This paper investigates the efficiency of three stochastic sampling algorithms: the Hamiltonian Monte Carlo (HMC) algorithm, the Particle Swarm Optimization (PSO) algorithm, and the Neighbourhood

Algorithm (NA). HMC is a Markov chain Monte Carlo (MCMC)

technique that uses Hamiltonian dynamics to achieve larger jumps

than are possible with other MCMC techniques. PSO is a swarm

intelligence algorithm that uses similar dynamics to HMC to guide

the search but incorporates acceleration and damping parameters

to provide rapid convergence to possible multiple minima. NA is a

sampling technique that uses the properties of Voronoi cells in high

dimensions to achieve multiple history-matched models.

The algorithms are compared by generating multiple history-matched reservoir models and comparing the Bayesian credible intervals (P10, P50, and P90) produced by each algorithm. We show

that all the algorithms are able to find equivalent match qualities

for this example but that some algorithms are able to find good

fitting models quickly, whereas others are able to find a more

diverse set of models in parameter space. The effects of the different sampling of model parameter space are compared in terms of the P10, P50, and P90 uncertainty envelopes in forecast oil rate.

These results show that algorithms based on Hamiltonian

dynamics and swarm intelligence concepts have the potential to be

effective tools in uncertainty quantification in the oil industry.

Introduction

History matching and uncertainty quantification are currently two important research topics in reservoir simulation. Automated and

assisted history-matching concepts have been developed over

many years [e.g., Oliver et al. (2008)]. In recent years, research has focused on quantifying uncertainty by generating multiple history-matched reservoir models rather than just seeking the best

history-matched model. A practical reason for using multiple

history-matched models is that a single model, even if it is the

best history-matched model, may not provide a good prediction

(Tavassoli et al. 2004).

Most uncertainty quantification studies use a Bayesian approach,

where we start with prior information about a reservoir expressed as

probabilities of unknown input parameters to our model. These prior

probabilities can come from a number of sources (e.g., from analog outcrop data or from fields in a similar depositional environment). The prior probabilities are then updated using Bayes' rule, which provides a statistically consistent way of combining data from multiple sources. The data that are used to update the prior probabilities are observations about the reservoir (e.g., production data or time-lapse seismic data).

[This paper (SPE 119139) was accepted for presentation at the SPE Reservoir Simulation Symposium, The Woodlands, Texas, USA, 2–4 February 2009, and revised for publication. Original manuscript received for review 3 November 2008. Revised manuscript received for review 13 March 2009. Paper peer approved 19 March 2009.]

By generating multiple models that match history and are consistent with known prior data, we are able to estimate uncertainties

in predictions of reservoir performance. A critical aspect of uncertainty quantification, therefore, is the sampling algorithm used to

generate multiple history-matched reservoir models.

There are a number of algorithms that have been used in the

petroleum literature to generate history-matched models, and these

algorithms fall into two principal types: data assimilation methods

and calibration methods (Christie et al. 2005). Data assimilation

methods calibrate a number of estimates of model parameters

sequentially to points in a time series of observed data. In calibration methods, on the other hand, a complete run of the simulation

is carried out and the match quality to the historical production

data is used to move the estimates of the model parameters toward

a better solution.

The main data-assimilation method used for history matching

in the oil industry is EnKF. Evensen (2007) describes the theory

of EnKF and shows a number of applications of EnKF to history

matching field examples. Liu and Oliver (2005) showed that EnKF

compared favorably with gradient-based methods when applied to

history match a truncated Gaussian geostatistical model of facies

distributions. The number of applications of EnKF is growing

rapidly, with several papers presented at the 2009 SPE Reservoir

Simulation Symposium.

Gradient methods are highly efficient and have been widely

used in history-matching problems. These methods require the

calculation of the derivative of the objective function with respect

to the reservoir model parameters as either gradients or sensitivity

coefficients. Techniques available include steepest descent, Gauss-Newton, and Levenberg-Marquardt, which can be found in some

modern commercial history-matching software, for example SimOpt

(2005). Early work by Bissell et al. (1994) history matched two real

case studies using gradient methods and found that the results were

comparable with hand matches. Lepine et al. (1999) estimated uncertainty of production forecasts using linear perturbation analysis. A

range of possible future production profiles and the confidence intervals for the future production performance were derived to quantify

the uncertainty. The main problem of using gradient-based methods

is that they can easily get trapped in local minima (Erbas 2006).

Recently, stochastic methods have gained some popularity for

their relatively simple implementations. They are equipped with

various heuristics for randomizing the search and, hence, exploring the global space and preventing entrapment in local minima.

Examples of stochastic methods include genetic algorithms (GAs)

and evolutionary strategies. GAs have been used widely in history

matching and are available in a variety of forms, including binary-coded GAs and real-coded GAs. Romero et al. (2000) applied a

modified GA to a realistic synthetic reservoir model and studied

the main issues of the algorithm formulation. Yu et al. (2008) used

genetic programming to construct proxies for reservoir simulators.

Bayes Theorem and Uncertainty Quantification. Bayes' theorem is the formal rule by which probabilities are updated given new data (Jaynes 2003). Bayes' theorem is

p(m | O) = \frac{p(O | m)\, p(m)}{p(O)}, . . . . . . . . . . . . . . . . . . . . . . . . . . . (1)

where p(m|O) is the posterior probability (i.e., the probability of the model given the data). p(O|m) is the likelihood term (i.e., the probability of the data assuming that the model is true). p(m) is the prior

probability and can be given as a sum of independent probabilities

for model parameters or as some more complex combination of

model inputs. The normalizing constant p(O) is sometimes referred

to as the evidence; if this term is small, it suggests that the model

does not fit the data well.

Sampling algorithms often work using the negative log of the likelihood, usually called the misfit M. Assuming that the measurement errors are Gaussian, independent, and identically distributed, the misfit can be computed using the conventional least-squares formula,

M = \sum_{t=1}^{T} \frac{\left( q_t^{obs} - q_t^{sim} \right)^2}{2\sigma^2}, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (2)

where the superscripts obs and sim refer to observed and simulated rates, and \sigma^2 is the variance of the observed data. Other statistical models for observational noise give rise to different expressions for the misfit.
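As a concrete illustration of Eq. 2, the following minimal Python sketch computes a least-squares misfit. The standard deviation of 100 STB/D is the value used later for Teal South; the observed and simulated rates here are invented toy numbers, not data from the paper.

```python
import numpy as np

def least_squares_misfit(q_obs, q_sim, sigma):
    """Negative log-likelihood (up to a constant) for Gaussian,
    independent, identically distributed measurement errors (Eq. 2)."""
    r = (np.asarray(q_obs) - np.asarray(q_sim)) / sigma
    return 0.5 * np.sum(r ** 2)

# Toy data: observed vs. simulated oil rates (STB/D), sigma = 100 STB/D
q_obs = [2100.0, 1900.0, 1600.0]
q_sim = [2000.0, 1950.0, 1700.0]
M = least_squares_misfit(q_obs, q_sim, sigma=100.0)  # M = 1.125
```

Lower misfit means higher likelihood; the sampling algorithms below all consume exactly this kind of scalar score.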

Sampling Algorithms and Uncertainty Quantification

The purpose of this paper is to investigate the efficiency of three

stochastic sampling algorithms for generating history-matched

reservoir models. The algorithms are: HMC, PSO, and NA. HMC

is an MCMC technique that uses Hamiltonian dynamics to achieve

larger jumps than are possible with other MCMC techniques. PSO

is a swarm intelligence algorithm that uses similar dynamics to

HMC to guide the search but incorporates acceleration and damping parameters to provide rapid convergence to possible multiple

minima. NA is a sampling technique that uses the properties of Voronoi cells in high dimensions to achieve multiple history-matched

models. NA has been used in a number of reservoir history-matching studies (Subbey et al. 2003, 2004; Rotondi et al. 2006; Erbas

and Christie 2007). A new, more efficient PSO technique called

flexi-PSO has been developed recently by Kathrada (2009) and

applied to history matching of synthetic reservoir models.

The Neighbourhood Algorithm. NA is a stochastic sampling

algorithm that was developed originally by Sambridge (1999a)

for solving geophysical inverse problems. It is a derivative-free

method that aims at finding an ensemble of acceptable models

rather than seeking a single solution. The key approximation

in NA is that the misfit surface is constant in the Voronoi cell

surrounding a sample point in parameter space. Quantifying the

uncertainty using NA involves two phases: a search phase, in which

we generate an ensemble of acceptable solutions of the inverse

problem; and an appraisal phase, in which NA-Bayes (NAB) (Sambridge 1999b) computes the posterior probability on the basis of

the misfits of the sampled models and the Voronoi approximation

of the misfit surface.

The search phase can be summarized as follows:

Step 1: Initialize NA with a population of an initial set of ninit

models randomly generated in the search space by a quasirandom

number generator; for each model, the forward problem is solved

and the corresponding misfit value is obtained.

Step 2: Determine the nr models having the lowest misfit values among all models generated so far.

Step 3: Generate a total of ns new models using a Gibbs

sampler in the nr Voronoi cells previously selected.

Step 4: NA returns to Step 2, and the process is repeated until

it reaches the user-defined number of iterations.

Thus, a total of N = ninit + ns × (number of iterations) models is generated by the algorithm. The ratio ns/nr controls the behavior of the algorithm: the lowest value, ns/nr = 1, aims to explore the space and find multiple regions of good-fitting models; as the value of ns/nr increases, faster convergence is obtained at the expense of finding multiple clusters of good-fitting models. A general guideline is to start with a value of ns/nr = 2 to obtain a balance between exploration and exploitation.
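The search phase above can be sketched in a few lines of Python. This is a simplified illustration, not the authors' implementation: the toy misfit function stands in for a reservoir simulation run, and new models are kept inside a chosen Voronoi cell by rejection sampling rather than by the Gibbs walk the actual algorithm uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def misfit(x):
    # Hypothetical stand-in for a reservoir simulation: any function
    # mapping a parameter vector to a scalar misfit will do here.
    return np.sum((x - 0.7) ** 2)

def na_search(ndim=2, n_init=30, ns=30, nr=15, n_iter=5):
    """Bare-bones Neighbourhood Algorithm search phase (Steps 1-4)."""
    X = rng.uniform(0.0, 1.0, size=(n_init, ndim))   # Step 1: initial models
    M = np.array([misfit(x) for x in X])
    for _ in range(n_iter):
        best = np.argsort(M)[:nr]                    # Step 2: nr best cells
        new = []
        while len(new) < ns:                         # Step 3: ns new models
            c = rng.choice(best)
            x = np.clip(X[c] + 0.05 * rng.standard_normal(ndim), 0.0, 1.0)
            d = np.sum((X - x) ** 2, axis=1)
            if np.argmin(d) == c:                    # keep only points whose
                new.append(x)                        # nearest sample is cell c
        new = np.array(new)
        X = np.vstack([X, new])                      # Step 4: repeat
        M = np.concatenate([M, [misfit(x) for x in new]])
    return X, M
```

With the defaults this generates N = 30 + 30 × 5 = 180 models, matching the N = ninit + ns × (number of iterations) count above.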

Because the sampling density of the misfits obtained is not

related to the posterior probability density, a separate calculation

has to be carried out to compute probabilities of the models. This

calculation assumes that the misfit is constant in each Voronoi cell

and calculates the probability as the exponential of the negative

misfit times the volume of the Voronoi cell. The resulting ensemble

of models with their posterior probabilities can then be used to

estimate the P10, P50, and P90 uncertainty envelopes.
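The posterior weighting just described (probability proportional to the exponential of the negative misfit times the Voronoi-cell volume) and the resulting uncertainty envelopes can be illustrated with a short sketch. The misfits, cell volumes, and forecast values below are invented for illustration only.

```python
import numpy as np

def weighted_percentiles(values, weights, ps=(10, 50, 90)):
    """Percentiles of a forecast quantity from an ensemble of models
    with unequal posterior probabilities."""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cdf = np.cumsum(w) / np.sum(w)
    return [float(np.interp(p / 100.0, cdf, v)) for p in ps]

# Hypothetical ensemble: misfits and Voronoi-cell volumes of 4 models.
misfits  = np.array([4.2, 5.1, 4.8, 6.0])
volumes  = np.array([0.2, 0.4, 0.1, 0.3])
weights  = np.exp(-misfits) * volumes   # posterior weight: exp(-M) * cell volume
forecast = np.array([950.0, 1100.0, 1020.0, 880.0])  # e.g., oil rate, STB/D
p10, p50, p90 = weighted_percentiles(forecast, weights)
```

The same weighted-quantile step is what turns the NA (or PSO) ensemble into the P10, P50, and P90 envelopes reported later in the paper.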

PSO. PSO is a population-based stochastic optimization algorithm

developed by Kennedy and Eberhart (1995). It is known as a swarm

intelligence algorithm because it was originally inspired by simulations of the social behavior of a flock of birds. Many studies have

shown PSO to be a very effective optimization algorithm. PSO

is easy to implement and computationally efficient and has been

applied successfully to solve a wide variety of optimization problems (Kennedy and Eberhart 1995; Eberhart and Shi 2001).

The main computational flow is described in the following steps:

Step 1: Initialize the PSO algorithm with a population of ninit particles at locations randomly generated in parameter space. Each particle is also assigned a random velocity. For

each model, the forward problem is solved and the relevant misfit

value M is obtained.

Step 2: At each iteration, evaluate the fitness of individual

particles.

Step 3: For each particle, update the position and value of

pbest, the best solution the particle has seen. If current fitness value

of one particle is better than its pbest value, then its pbest value and

the corresponding position are replaced by the current fitness value

and position, respectively.

Step 4: Find the current global best fitness value and the corresponding best position gbest across the whole population's pbest values, and

update if appropriate.

Step 5: Update the velocity for each particle using Eq. 3. The updated velocity is determined by the previous iteration's velocity and by the distances from its current position to both pbest and gbest:

v_i^{k+1} = w v_i^k + c_1 \, rand_1 \left( pbest_i - x_i^k \right) + c_2 \, rand_2 \left( gbest^k - x_i^k \right), . . . . . (3)

where:
v_i^k is the velocity of particle i at iteration k;
x_i^k is the position of particle i at iteration k;
c_1 is a weighting factor, termed the cognition component, which represents the acceleration constant which changes the velocity of the particle towards pbest_i;
c_2 is a weighting factor, termed the social component, which represents the acceleration constant which changes the velocity of the particle towards gbest^k;
rand_1 and rand_2 are two random vectors with each component corresponding to a uniform random number between 0 and 1;
pbest_i is the pbest of particle i;
gbest^k is the global best of the entire swarm at iteration k;
and w is an inertial weight that influences the convergence of the algorithm.

Initially, the values of the velocity vectors are randomly generated with v_i^0 \in [-v_{max}, v_{max}]. If one component of a particle's velocity violates the limit v_{max}, it is set back to the limit.

Step 6: Update the position of each particle by using

x_i^{k+1} = x_i^k + v_i^{k+1}. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (4)

Step 7: Repeat Steps 2 to 6 until the maximum number of

iterations is reached or another stopping criterion is attained.
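Steps 1 through 7 and Eqs. 3 and 4 can be sketched as a minimal PSO loop. This is an illustrative implementation with a toy misfit function standing in for the simulator; the velocity clamp and a linearly decreasing inertia weight (the 0.9-to-0.4 schedule used later for Teal South) are included.

```python
import numpy as np

rng = np.random.default_rng(1)

def misfit(x):
    # Hypothetical stand-in for a simulation-based misfit.
    return np.sum((x - 0.3) ** 2, axis=-1)

def pso(ndim=2, n_part=30, n_iter=45, c1=2.0, c2=2.0,
        w_start=0.9, w_end=0.4, vmax=0.5):
    """Minimal PSO loop following Steps 1-7 and Eqs. 3-4."""
    x = rng.uniform(0.0, 1.0, size=(n_part, ndim))        # Step 1
    v = rng.uniform(-vmax, vmax, size=(n_part, ndim))
    pbest, pbest_f = x.copy(), misfit(x)
    g = np.argmin(pbest_f)
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for k in range(n_iter):
        w = w_start + (w_end - w_start) * k / (n_iter - 1)  # inertia decay
        r1 = rng.uniform(size=(n_part, ndim))               # rand_1, rand_2
        r2 = rng.uniform(size=(n_part, ndim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. 3
        v = np.clip(v, -vmax, vmax)                         # velocity limit
        x = x + v                                           # Eq. 4 (Step 6)
        f = misfit(x)                                       # Step 2
        better = f < pbest_f                                # Step 3
        pbest[better], pbest_f[better] = x[better], f[better]
        g = np.argmin(pbest_f)                              # Step 4
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f
```

On this toy quadratic the swarm collapses rapidly onto the minimum, which is the fast-convergence behavior the paper exploits for history matching.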

Because PSO was developed as an optimization tool, we have to extend the algorithm to quantify uncertainty in reservoir modeling. We choose to extend the algorithm using the same concepts as NA, by running the NAB code, which computes the posterior probability under the assumption that the misfit surface is constant in the Voronoi cell surrounding each particle (Sambridge 1999b).

March 2010 SPE Journal

Hamiltonian Monte Carlo. HMC is an MCMC method that was

introduced by Duane et al. (1987) for sampling from distributions.

HMC aims to fix some of the problems that MCMC algorithms can

suffer from [e.g., the Metropolis algorithm (Metropolis et al. 1953)

can suffer from slow exploration of the probability distribution if

the step size is too small or can suffer from excessive rejection of

proposed locations if the step size is too high].

The idea of HMC is to regard each set of parameter values as a point x in parameter space and introduce a set of auxiliary momentum variables u. We define a potential energy as the negative logarithm of the posterior probability, U(x) = -\log p(m|O), and a kinetic energy as K(u) = u^T u / 2. The Hamiltonian, or total energy, is then H(x, u) = U(x) + K(u). Our extended probability distribution, assuming the momenta are sampled from a normal distribution with mean zero and variance 1, is

p(x, u) \propto e^{-H(x,u)} = e^{-U(x)} e^{-K(u)} \propto p(m | O) \, N(u; 0, 1). . . . . . . (5)

So, if we can sample from the extended distribution, we can

obtain samples from the posterior probability distribution of interest by discarding the information on the momentum variables.

We generate new samples by solving Hamilton's equations of motion:

\dot{u} = -\frac{\partial H(x, u)}{\partial x} . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (6)

and

\dot{x} = \frac{\partial H(x, u)}{\partial u}. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (7)

Hamiltonian dynamics conserves total energy and preserves volume in state space. This means that, if we simulate the dynamics exactly, we will leave the extended density invariant (Bishop 2006). In practice, we simulate the Hamiltonian dynamics using the leapfrog algorithm:

u\left( t + \frac{\Delta t}{2} \right) = u(t) - \frac{\Delta t}{2} \frac{\partial U(x(t))}{\partial x}

x(t + \Delta t) = x(t) + \Delta t \, u\left( t + \frac{\Delta t}{2} \right). . . . . . . . . . . . . . . . . . . . . . . (8)

Although the leapfrog algorithm is time-reversible and volume-preserving, it preserves energy only to second order in the step size; sampling from the correct distribution is therefore ensured by a Metropolis accept/reject step at the end of the trajectory.

The steps for the HMC algorithm are as follows:

Step 1: Initialize the HMC algorithm with a population of an

initial set of ninit models randomly generated in the search space

by a quasirandom number generator; for each model, the forward

problem is solved and the relevant misfit value M is obtained. The

population of models with their misfits will be used to approximate

the surface and obtain the gradients (see the next section).

Step 2: Generate a new momentum vector u from the Gaussian distribution p(u) \propto \exp(-K(u)). This step is the stochastic part of the algorithm, which ensures that the whole phase space is explored.

Step 3: Starting from the current state, perform L leapfrog steps with a step size \Delta t, resulting in the new state (x(tL), u(tL)).

Step 4: Employ the Metropolis rule: accept the new state as sample k+1 with probability \min\left( 1, \exp\left( -\left( H(x(tL), u(tL)) - H(x^k, u^k) \right) \right) \right), where k is the iteration number.
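Steps 2 through 4 can be sketched as a single HMC update. This illustration uses a toy Gaussian target with an analytic gradient in place of the GRNN-estimated gradients described in the next section; it is a sketch of the method, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(2)

def U(x):
    # Negative log-posterior; a standard-normal toy target here,
    # standing in for the misfit plus negative log-prior.
    return 0.5 * np.sum(x ** 2)

def grad_U(x):
    # Analytic gradient of the toy U; the paper instead estimates
    # this from a GRNN proxy of the misfit surface.
    return x

def hmc_step(x, dt=0.2, L=20):
    """One HMC update: momentum draw (Step 2), L leapfrog steps
    (Eq. 8, Step 3), and a Metropolis accept/reject (Step 4)."""
    u = rng.standard_normal(x.shape)                 # Step 2
    H0 = U(x) + 0.5 * np.sum(u ** 2)
    xn = x.copy()
    un = u - 0.5 * dt * grad_U(xn)                   # initial half step
    for _ in range(L):                               # Step 3
        xn = xn + dt * un
        un = un - dt * grad_U(xn)
    un = un + 0.5 * dt * grad_U(xn)                  # correct final half step
    H1 = U(xn) + 0.5 * np.sum(un ** 2)
    if rng.uniform() < min(1.0, np.exp(H0 - H1)):    # Step 4
        return xn
    return x
```

Because the leapfrog trajectory nearly conserves H, most proposals are accepted while the state moves a long way through parameter space, which is exactly the large-jump behavior the text attributes to HMC.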

Computing the Gradients. Implementing HMC requires us to

compute the gradient of the negative logarithm of the posterior

distribution, which is the sum of the misfit and the negative log of

the prior. There are two approaches that can be followed to compute this term: If gradients of the solution with respect to uncertain

variables are available (e.g., from an adjoint code), then this term

can be computed directly; alternatively, it can be computed less

accurately from a proxy.

In the application presented here, we use a general regression

neural network (GRNN) (Specht 1991; Nadaraya 1964; Watson

1964) with Gaussian kernels to approximate the misfit surface

and then compute the gradient from the approximated surface. A

critical step in the regression is that of computing the kernel-width

parameter. In our application, we used a fraction of the average

distance between points in each dimension. More sophisticated

methods are available [e.g., Kanevski and Maignan (2004)]. It is

important to note that there are some restrictions on updating the

surface to ensure consistency with the MCMC assumptions.
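A Nadaraya-Watson (GRNN) estimate of the misfit surface with Gaussian kernels, together with its analytic gradient, can be sketched as follows. The single kernel width h and the synthetic sample data used to check it are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def grnn_misfit_and_grad(x, X, M, h):
    """Nadaraya-Watson (GRNN) estimate of the misfit surface at x,
    from sampled points X with misfits M, plus its analytic gradient.
    Gaussian kernels with one width h (the paper uses a fraction of
    the average inter-point distance in each dimension)."""
    d = x - X                                             # (n, ndim)
    w = np.exp(-0.5 * np.sum(d ** 2, axis=1) / h ** 2)    # kernel weights
    s = np.sum(w)
    m_hat = np.sum(w * M) / s                             # smoothed misfit
    # Gradient by the quotient rule; dw_i/dx = -w_i (x - X_i) / h^2
    dw = -(w[:, None] * d) / h ** 2
    grad = (dw.T @ M) / s - m_hat * np.sum(dw, axis=0) / s
    return m_hat, grad
```

Differentiating the kernel regression analytically like this is what lets HMC run without adjoint gradients from the simulator; the estimate is only as good as the proxy surface, which is the accuracy trade-off noted above.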

Field Application: The Teal South Reservoir

Teal South is a reservoir located in the Gulf of Mexico, approximately 144 km southwest of Morgan City, Louisiana, as shown in

Fig. 1. The 4,500-ft-deep sand is bounded on three sides by faults

and closed by dip to the north, and is shown in Fig. 2. Fluids are

produced from a single horizontal well through solution-gas drive,

aquifer drive, and compaction drive. Production started in late

1996, and data are available in the form of monthly oil, water, and

gas measurements, as well as two pressure measurements of 3,096

psi initially and 2,485 psi after 570 days of production (Christie

et al. 2002).

[Fig. 2: the 11×11×5 simulation grid.]

A least-squares misfit (Eq. 2) was used to measure how well a given set of reservoir model parameters fits the observed data. The standard deviation of the oil-production measurement errors was set to 100 STB/D. Use of a least-squares misfit is based on the assumption that the measurement errors are Gaussian and independent.

TABLE 1: PARAMETERIZATION FOR THE TEAL SOUTH MODEL

Parameter                 Units          Prior range
Horizontal permeability   md             10^{1 ... 3}
kv/kh                     -              10^{-2 ... 1}
Aquifer strength          million STB    10^{7 ... 9}
Rock compressibility      psi^{-1}       10^{-4.096 ... -3.699}

The reservoir was represented by a coarse simulation model with a grid size of 11×11×5. We set up the model with eight uncertain parameters: horizontal permeabilities of all five layers, a single value for kv/kh, rock compressibility, and aquifer strength. We chose uniform priors in the logarithms of the variables, as shown in Table 1.

Fig. 3 shows the observed production rates for oil and water as a

function of time. The oil rate peaked after 80 days of production and

then declined rapidly. Water production started after the oil rate peaked

and stayed steady for the majority of the time. The first 181 days of production data were used in the history match, and the remaining 3 years were used as prediction data to measure the predictive quality of

the history matches. The dashed black line shows the data used for

the history matching (six measurements out of 41 measurements). The

simulator production was controlled to match total liquid rate, and

history matching was carried out by matching the field oil rate.

For all the methods, we started from the same initial population composed

of 30 models generated randomly in parameter space. Fig. 4 shows

the two different 2D projections of the initial models. In Fig. 4a, we

show the models plotted against the scaled values of two layer permeability multipliers (P1 and P2), and, in Fig. 4b, the models are plotted

against the scaled aquifer strength and the scaled rock compressibility

multipliers. The points are color-coded according to the misfit.

We ran 45 iterations for both NA and PSO algorithms. For

NA, the parameters we chose were: ns/nr = 30/15. For PSO, the

parameters we chose were: c1 = c2 = 2 with a linear decrease in

the inertial weight w from 0.9 to 0.4 (Eberhart and Shi 2007). The

total number of reservoir model simulations was 1,380.

We ran a single HMC chain of 1,350 reservoir model simulations starting from a random one of the 30 initial points used for

NA and PSO. We started the HMC sampling using gradients estimated from the GRNN with kernel-width adjustment factor equal

to 0.4. The leapfrog step sizes were chosen to be \Delta t_i = 0.02 \sigma_i, where \sigma_i = \sqrt{C_{ii}} and C is the covariance matrix. At each HMC

iteration, the number of leapfrog steps taken was randomly drawn

from a uniform distribution from 10 to 25.

[Fig. 3: Production history (oil rate and water rate) for the Teal South reservoir.]

[Fig. 4: Two different 2D projections of the initial population of 30 randomly generated models in 8D parameter space.]

[Fig. 5: Comparison of the best history matches (a) and the corresponding permeability estimates (b) obtained from NA and PSO.]

[Fig. 6: Two alternative sets of parameter values providing almost equivalent match qualities to the maximum likelihood model.]

We will first compare the performance of the two sampling algorithms NA and PSO. Both NA and PSO generate multiple history-matched models. Uncertainty quantification is then carried out using a separate code that converts the posterior probability density at each sampled location to a posterior probability (equal to density times the volume of the Voronoi cell surrounding a point).

Fig. 5a shows the best history match obtained by NA and PSO. There is little to choose between the two history-matched models. Fig. 5b shows the optimal values for the five horizontal permeabilities; the best-fitting parameters found by NA and PSO are different (although the differences are not large). Two other sets of parameter values providing almost equally good matches are shown in Fig. 6.

Fig. 7 shows the progress of the mean generational minimum misfit of NA and PSO. To generate this figure, we ran five runs of NA and PSO. NA and PSO used identical sets of points for each run, with a new set of random starting conditions generated for each run. We then plotted the mean generational minimum misfit along with the standard deviation around each point. We can see that NA and PSO reach the same misfit but that, on average, PSO reduces the misfit in each generation more rapidly than NA.

[Fig. 7: Evolution of the mean generational minimum misfit for NA and PSO.]

The sampling history of NA is shown in Fig. 8, and that of PSO is shown in Fig. 9. Each figure consists of eight plots, showing the evolution of the parameters sampled as a function of the number of sampled points. The points are color-coded according to the misfit, showing the concentration of sampling at low misfit values as the sampling evolves. Note that the white (light) points have misfit 10 or above and include many models that do not match at all well. The parameter values for the good history-matched models can be seen by looking at the range of the black (dark) points. Both algorithms appear to concentrate sampling for the permeability parameters; NA has identified two possible minima for log(kh2) (upper-right plot), whereas PSO has homed in on the higher value. The sampling for rock compressibility, aquifer strength, and log(kv/kh) evolves differently for the two algorithms. Nonetheless, the best matches obtained are very comparable in quality, as shown in Fig. 5.

[Fig. 8: Sampling history of NA for each of the eight unknown parameters.]

[Fig. 9: Sampling history of PSO for each of the eight unknown parameters.]

Fig. 10 shows the sampling history for HMC. Note that, in HMC, the algorithm is not continually trying to improve the degree of match; rather, it is constantly sampling models that are acceptable history matches. Most of the models generated have misfits corresponding to matching the observed values to within 1.5 standard deviations.

[Fig. 10: Sampling history of HMC for each of the eight unknown parameters.]

We next compare the uncertainty forecasts. HMC is designed so that the models generated sample correctly from the posterior distribution (within sampling error), which means that P10, P50, and P90 predictions can be generated from an

appropriate sum of the generated models. Neither PSO nor NA has

this property, so a separate assessment has to be made to determine

the probability of each of the sampled models. We used the NAB

code (Sambridge 1999b) to determine these probabilities for PSO

and NA. NAB works by running a Gibbs sampler on the approximate

misfit surface generated by assuming that the misfit is constant over

each of the Voronoi cells surrounding a sampled point.

[Figures: P10, P50, and P90 oil-rate forecasts vs. time for NA, PSO, and HMC, each showing the truth case and the end of history.]

[Fig. 14: Cumulative distributions from NA, PSO, and HMC at (a) and after (b) the end of history matching.]

The Bayesian credible intervals (P10, P50, and P90) for oil rate after history matching to the first 181 days of production are shown for NA and PSO; both have produced similar ranges. The equivalent plot for HMC is shown in Fig. 13.

Fig. 14 shows the cumulative density function of the oil rate

at two times. Fig. 14a is the CDF for the oil rate after 181 days

at the end of the history matching period. The uncertainty plots

for all three algorithms are very similar. Fig. 14b shows the CDF

after 1,187 days. In this case, there is a greater difference between

the algorithms, although they are still similar. HMC provides a slightly wider P10 to P90 range after the end of the history-matching

period than NA and PSO. The solid black vertical lines show the

observed value at the end of history match time (left) and at the

forecasted time (right).

Conclusions

This paper investigated the efficiency of three stochastic sampling

algorithms: HMC, PSO, and NA. We compared the algorithms by

generating multiple history-matched reservoir models for the Teal

South reservoir with eight unknown parameters.

Specific conclusions drawn are as follows:
- NA, PSO, and HMC are able to find equivalent match qualities for this example.
- PSO is able to find good-fitting models more quickly than NA for this example, and this behavior is robust to changing the initial random starting conditions.
- PSO tends to concentrate sampling more in the low-misfit regions than NA; this behavior is likely to depend on the parameter settings of each algorithm, though, so general conclusions are hard to draw.
- NA and PSO need a separate calculation to go from sampled models to forecasts of uncertainty, whereas HMC is able to generate samples from the posterior in one step.
- NA, PSO, and HMC are all able to produce equivalent forecasts of uncertainty.

These results show that algorithms based on Hamiltonian

dynamics and swarm intelligence concepts have the potential to be

effective tools in uncertainty quantification in the oil industry.

Acknowledgments

We would like to thank the sponsors of Phase II of the Uncertainty

Quantification Project at Heriot-Watt for their support: BG, BP,

Chevron, ConocoPhillips, Eni, StatoilHydro, JOGMEC, Shell,

TransformSW, and the UK DTI (BERR). We also thank Schlumberger-GeoQuest for the use of the Eclipse reservoir simulator

software.

37

References

Bishop, C.M. 2006. Pattern Recognition and Machine Learning. New York: Springer.

Bissel, R., Sharma, Y., and Killough, J.E. 1994. History Matching Using the Method of Gradients: Two Case Studies. Paper SPE 28590 presented at the SPE Annual Technical Conference and Exhibition, New Orleans, 25–28 September. doi: 10.2118/28590-MS.

Christie, M.A., Glimm, J., Grove, J.W., Higdon, D.M., Sharp, D.H., and Wood-Schultz, M.M. 2005. Error Analysis and Simulations of Complex Phenomena. Los Alamos Science 29: 6–25.

Christie, M.A., MacBeth, C., and Subbey, S. 2002. Multiple history-matched models for Teal South. The Leading Edge 21 (3): 286–289. doi: 10.1190/1.1463779.

Duane, S., Kennedy, A., Pendleton, B., and Roweth, D. 1987. Hybrid Monte Carlo. Physics Letters B 195 (2): 216–222. doi: 10.1016/0370-2693(87)91197-X.

Eberhart, R. and Shi, Y. 2007. Computational Intelligence: Concepts to Implementations. Burlington, Massachusetts: Morgan Kaufmann Publishers (Elsevier).

Eberhart, R.C. and Shi, Y. 2001. Particle swarm optimization: developments, applications and resources. In Proceedings of IEEE Congress on Evolutionary Computation 2001, Vol. 1, 81–86. Piscataway, New Jersey: IEEE Service Center.

ECLIPSE SimOpt User Guide 2005A. 2005. Houston: Schlumberger/GeoQuest.

Erbas, D. 2006. Sampling strategies for uncertainty quantification in oil recovery prediction. PhD thesis, Heriot-Watt University, Edinburgh, UK.

Erbas, D. and Christie, M.A. 2007. Effect of Sampling Strategies on Prediction Uncertainty Estimation. Paper SPE 106229 presented at the SPE Reservoir Simulation Symposium, Houston, 26–28 February. doi: 10.2118/106229-MS.

Evensen, G. 2007. Data Assimilation: The Ensemble Kalman Filter. Berlin: Springer-Verlag.

Jaynes, E.T. 2003. Probability Theory: The Logic of Science, ed. G.L. Bretthorst. Cambridge, UK: Cambridge University Press.

Kanevski, M. and Maignan, M. 2004. Analysis and Modelling of Spatial Environmental Data. Lausanne, Switzerland: Environmental Sciences, EPFL Press.

Kathrada, M. 2009. Uncertainty evaluation of reservoir simulation models using particle swarms and hierarchical clustering. PhD thesis, Heriot-Watt University, Edinburgh, UK.

Kennedy, J. and Eberhart, R. 1995. Particle swarm optimisation. In Proceedings of the IEEE International Conference on Neural Networks, Vol. 4, 1942–1948. Piscataway, New Jersey: IEEE Service Center.

Lepine, O.J., Bissell, R.C., Aanonsen, S.I., Pallister, I.C., and Barker, J.W. 1999. Uncertainty Analysis in Predictive Reservoir Simulation Using Gradient Information. SPE J. 4 (3): 251–259. SPE-57594-PA. doi: 10.2118/57594-PA.

Liu, N. and Oliver, D.S. 2005. Critical Evaluation of the Ensemble Kalman Filter on History Matching of Geologic Facies. SPE Res Eval & Eng 8 (6): 470–477. SPE-92867-PA. doi: 10.2118/92867-PA.

Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H., and Teller, E. 1953. Equation of State Calculations by Fast Computing Machines. J. Chem. Phys. 21 (6): 1087–1092. doi: 10.1063/1.1699114.

Nadaraya, E.A. 1964. On Estimating Regression. Theory Probab. Appl. 9 (1): 141–142. doi: 10.1137/1109020.

Oliver, D.S., Reynolds, A.C., and Liu, N. 2008. Inverse Theory for Petroleum Reservoir Characterization and History Matching. Cambridge, UK: Cambridge University Press.

Romero, C.E., Carter, J.N., Gringarten, A.C., and Zimmerman, R.W. 2000. A Modified Genetic Algorithm for Reservoir Characterisation. Paper SPE 64765 presented at the International Oil and Gas Conference and Exhibition in China, Beijing, 7–10 November. doi: 10.2118/64765-MS.

Rotondi, M., Nicotra, G., Godi, A., Contento, F.M., Blunt, M.J., and Christie, M.A. 2006. Hydrocarbon Production Forecast and Uncertainty Quantification: A Field Application. Paper SPE 102135 presented at the SPE Annual Technical Conference and Exhibition, San Antonio, Texas, USA, 24–27 September. doi: 10.2118/102135-MS.

Sambridge, M.S. 1999a. Geophysical inversion with a neighbourhood algorithm-I. Searching a parameter space. Geophysical Journal International 138 (2): 479–494. doi: 10.1046/j.1365-246X.1999.00876.x.

Sambridge, M.S. 1999b. Geophysical inversion with a neighbourhood algorithm-II. Appraising the ensemble. Geophysical Journal International 138 (3): 727–746. doi: 10.1046/j.1365-246x.1999.00900.x.

Specht, D. 1991. A general regression neural network. IEEE Transactions on Neural Networks 2 (6): 568–576. doi: 10.1109/72.97934.

Subbey, S., Christie, M.A., and Sambridge, M. 2003. A Strategy for Rapid Quantification of Uncertainty in Reservoir Performance Prediction. Paper SPE 79678 presented at the SPE Reservoir Simulation Symposium, Houston, 3–5 February. doi: 10.2118/79678-MS.

Subbey, S., Christie, M.A., and Sambridge, M. 2004. Prediction under uncertainty in reservoir modeling. J. Pet. Sci. Eng. 44 (1–2): 143–153. doi: 10.1016/j.petrol.2004.02.011.

Tavassoli, Z., Carter, J.N., and King, R.P. 2004. Errors in History Matching. SPE J. 9 (3): 352–361. SPE-86883-PA. doi: 10.2118/86883-PA.

Watson, G.S. 1964. Smooth Regression Analysis. Sankhya: The Indian Journal of Statistics 26 (Series A, Pt. 4): 359–372.

Yu, T., Wilkinson, D., and Castellini, A. 2008. Constructing Reservoir Flow Simulator Proxies Using Genetic Programming for History Matching and Production Forecast Uncertainty Analysis. Journal of Artificial Evolution and Applications 2008: 1–13. Article ID 263108. doi: 10.1155/2008/263108.

Linah Mohamed is a PhD student in petroleum engineering at Heriot-Watt University, UK. She holds an MS degree in computer science from the University of Khartoum, Sudan. Mohamed's principal research interests are history matching and uncertainty quantification using stochastic sampling techniques based on Hamiltonian dynamics, swarm intelligence concepts, and machine learning. Mike Christie is professor of reservoir engineering at Heriot-Watt University in Edinburgh, where he runs a research group on uncertainty quantification in reservoir simulation. Before joining Heriot-Watt, he spent 18 years working for BP, holding positions in research and technology development in the UK and USA. He holds a bachelor's degree in mathematics and a PhD degree in plasma physics, both from the University of London. Christie has been an SPE Distinguished Lecturer and was awarded the SPE Ferguson Medal in 1990. Vasily Demyanov carries out research in spatial statistics and machine learning applications to model uncertainty of subsurface reservoir predictions. He holds a PhD in physics and mathematics from the Russian Academy of Sciences and has worked with a wide range of spatial statistics and artificial neural network applications in environmental and geoscience fields. He has published several papers in leading geoscience, environmental, statistics, and computer science publications, including an award-winning paper in Pedometrics. Demyanov is a coauthor of the chapters on geostatistics and artificial neural networks in Advanced Mapping of Environmental Data: Geostatistics, Machine Learning and Bayesian Maximum Entropy. Advanced kernel learning methods, stochastic optimisation algorithms, and Bayesian statistics are among his current research interests.
