
Ensemble-Based History Matching for Channelized Petroleum Reservoirs

Matei Țene
Delft Institute of Applied Mathematics

Motivation

2 / 47

Petroleum Reservoir

3 / 47

Water flooding

4 / 47

2D Channelized Reservoirs
Rock type          Permeability   Porosity
Channel sand       100 mD         20%
Background shale   0.1 mD         5%

Y-channel reservoir
5 / 47

Panels: saturation after 1 year, saturation after 5 years

Reservoir State

State vector = [ rock properties, flow variables, production data ]

Rock properties: one facies value per grid cell, in [0, 1],
interpreted as the likelihood that the grid cell is in a channel


6 / 47
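To make the composition of this state vector concrete, here is a minimal sketch of how such an augmented state could be laid out in code; the class and field names (ReservoirState, facies, saturation, production) are illustrative choices of mine, not taken from the slides.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ReservoirState:
    facies: np.ndarray       # per-cell channel likelihood, values in [0, 1]
    saturation: np.ndarray   # flow variables (e.g. water saturation per cell)
    production: np.ndarray   # simulated production data at the wells

    def as_vector(self) -> np.ndarray:
        """Concatenate everything into the augmented state vector used by the EnKF."""
        return np.concatenate([self.facies, self.saturation, self.production])
```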

History Matching
a.k.a. Data Assimilation

The rock properties are poorly known a priori
Let an ensemble of realizations represent the prior information
More information becomes available during production

Task: incorporate this information to obtain a posterior estimate such that
the realizations still show channelized structure,
the observations are matched by the estimate,
and the uncertainty is properly represented

7 / 47

Ensemble Kalman Filter


Kalman, 1960; Evensen, 1994; Burgers et al., 1998

Monte Carlo approximation of the state mean and covariance with an ensemble x_1, ..., x_N

Forecast model
x_j(t+1) = f( x_j(t) )

Observations (perturbed)
d_j = d + e_j

Update
x_j^a = x_j^f + K ( d_j - H x_j^f ),   K = C_f H^T ( H C_f H^T + R )^-1
8 / 47
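As a concrete illustration of the update step above, here is a minimal NumPy sketch of the perturbed-observation EnKF analysis; the helper name enkf_update and the generic arguments (ensemble X, observation operator H, error covariance R) are my own placeholders, not the thesis implementation.

```python
import numpy as np

def enkf_update(X, d_obs, H, R, rng=np.random.default_rng(0)):
    """Perturbed-observation EnKF analysis step (Burgers et al., 1998).

    X     : (n_state, n_ens) forecast ensemble, one member per column
    d_obs : (n_obs,) observed production data
    H     : (n_obs, n_state) linear observation operator
    R     : (n_obs, n_obs) observation error covariance
    """
    n_state, n_ens = X.shape
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    C = A @ A.T / (n_ens - 1)                        # sample covariance C_f
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)     # Kalman gain
    D = d_obs[:, None] + rng.multivariate_normal(    # perturbed observations d_j
        np.zeros(len(d_obs)), R, size=n_ens).T
    return X + K @ (D - H @ X)                       # analysis ensemble
```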

Multi-point Geostatistics
Strebelle, 2002

9 / 47

EnKF Workflow

10 / 47

Adapting the Results for Simulation

The updated state is a linear combination x = Σ_j α_j x_j with Σ_j α_j = 1: linear, but not convex
The prior facies values lie in [0, 1], but the update may fall outside of [0, 1]
We use the updated facies to compute permeability and porosity: negative permeabilities? Porosities > 1?

11 / 47

(1) Truncation

12 / 47

(2) Logit transform

13 / 47
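As a small illustration of the two adaptation strategies named above, here is a sketch (the function names and the epsilon guard are my own choices): truncation clips the updated facies back into [0, 1], while the logit route transforms to an unbounded variable before the EnKF update and maps back with the sigmoid afterwards.

```python
import numpy as np

def truncate(m):
    """(1) Truncation: clip updated facies values back into [0, 1]."""
    return np.clip(m, 0.0, 1.0)

def to_unbounded(m, eps=1e-6):
    """(2) Logit transform: map facies in (0, 1) to the real line before the update."""
    m = np.clip(m, eps, 1.0 - eps)        # keep away from the exact bounds
    return np.log(m / (1.0 - m))

def to_bounded(z):
    """Inverse of the logit (sigmoid): map updated values back into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))
```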

Parameterized EnKF

14 / 47

Research topic 1

Propose a parameterization that preserves the structure of channelized reservoirs over sequential assimilation steps.

15 / 47

Polynomial Feature Space (1)

Coordinates are all monomials of the state variables up to degree d
Mapping: x maps to the image vector φ(x)
Example: for N = 2 grid cells and degree d = 2,
x = [ x_1, x_2 ]  maps to  φ(x) = [ x_1, x_2, x_1^2, x_2^2, x_1 x_2, x_2 x_1 ]

16 / 47
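A toy sketch of such a polynomial feature map (poly_feature_map is an illustrative helper of mine, enumerating ordered monomials, which is one possible convention):

```python
import itertools
import numpy as np

def poly_feature_map(x, degree):
    """All ordered monomials of the entries of x, of degree 1 up to `degree`."""
    x = np.asarray(x, dtype=float)
    features = []
    for q in range(1, degree + 1):
        for idx in itertools.product(range(len(x)), repeat=q):
            features.append(np.prod(x[list(idx)]))
    return np.array(features)

# Example from the slide: N = 2 cells, degree d = 2
# -> [x1, x2, x1*x1, x1*x2, x2*x1, x2*x2]
print(poly_feature_map([2.0, 3.0], degree=2))   # [2. 3. 4. 6. 6. 9.]
```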

Polynomial Feature Space (2)

17 / 47

Dealing with Dimensionality

The dimension of the feature space:
Σ_{q=1}^{d} C(N + q - 1, q) ≈ 10^9 for 45 × 45 cells and d = 3

F is a Hilbert space (Mercer's theorem)

Inner products are easy to compute through the kernel:
⟨ φ(x), φ(y) ⟩ = k(x, y), a polynomial in the dot product x^T y = Σ_{i=1}^{N} x_i y_i

Use Principal Component Analysis!

18 / 47
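A quick numerical check of the kernel idea, under the assumption that the feature map enumerates ordered monomials as in the sketch following slide 16 (so the implied kernel is k(x, y) = sum over q = 1..d of (x.y)^q, a choice of mine for illustration): the feature-space inner product is computed from the N-dimensional dot product alone, and the combinatorial count above is easy to verify.

```python
import numpy as np
from math import comb

# Feature-space dimension quoted on the slide: ~1e9 monomials for 45 x 45 cells, d = 3.
N, d = 45 * 45, 3
print(sum(comb(N + q - 1, q) for q in range(1, d + 1)))   # ~1.39e9

def poly_kernel(x, y, degree):
    """k(x, y) = sum_{q=1..degree} (x . y)^q, evaluated without forming phi(x), phi(y)."""
    s = float(np.dot(x, y))
    return sum(s ** q for q in range(1, degree + 1))

x, y = np.array([2.0, 3.0]), np.array([0.5, -1.0])
lhs = np.dot(poly_feature_map(x, 2), poly_feature_map(y, 2))  # explicit map (earlier sketch)
rhs = poly_kernel(x, y, 2)                                    # kernel trick, O(N) work
assert np.isclose(lhs, rhs)
```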

Principal Component Analysis


Hotelling, 1933

19 / 47

Kernel PCA
Schölkopf, 1998

Let x_1, ..., x_M be a set of reservoir realizations
The principal components are the eigenvectors of the covariance matrix of their feature-space images:
PCA on φ(x_1), ..., φ(x_M)
Keep the l most significant of the eigenvectors v_i

Compute the projections on the principal components,
ξ_i = ⟨ v_i, φ(x) ⟩ = Σ_{j=1}^{M} a_{ij} k( x_j, x ),  for any state vector x,

solely through the kernel function!

20 / 47
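For concreteness, a compact kernel PCA sketch in the standard Schölkopf formulation (my own minimal implementation, not the thesis code): eigendecompose the centered Gram matrix of the training realizations, then project any new state using kernel evaluations only.

```python
import numpy as np

def kpca_fit(X_train, kernel, n_components):
    """X_train: (M, N) training realizations. Returns the fitted 'model' tuple."""
    M = X_train.shape[0]
    K = np.array([[kernel(a, b) for b in X_train] for a in X_train])   # Gram matrix
    J = np.eye(M) - np.ones((M, M)) / M
    Kc = J @ K @ J                                     # centering in feature space
    w, V = np.linalg.eigh(Kc)                          # ascending eigenvalues
    w, V = w[::-1][:n_components], V[:, ::-1][:, :n_components]
    A = V / np.sqrt(np.maximum(w, 1e-12))              # scale so feature-space PCs have unit norm
    return X_train, A, K

def kpca_project(x, model, kernel):
    """Projections xi_i = <v_i, phi(x)> = sum_j A[j, i] * k~(x_j, x)."""
    X_train, A, K = model
    k_x = np.array([kernel(xj, x) for xj in X_train])
    k_c = k_x - k_x.mean() - K.mean(axis=1) + K.mean()   # center the new kernel column
    return A.T @ k_c
```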

The Preimage Problem


We need the preimage x̂ = φ^-1( ŷ ): impossible, in general
Sarma et al. (2008) approximate it with fixed-point iterations,
x^(k) = F( x^(k-1) ),  k = 1, 2, ...,
built from kernel evaluations against the training realizations

Local optima!

21 / 47

Analytical Solution

Let f( x̂_i ) = k( e_i, x̂ ), where e_i = [ 0 ... 0 1 0 ... 0 ]
If f is invertible, then each preimage component follows analytically:
x̂_i = f^-1( k( e_i, x̂ ) ) = f^-1( Σ_{j=1}^{M} β_j k( e_i, x_j ) ),
since φ( x̂ ) = Σ_{j=1}^{M} β_j φ( x_j )

We propose the similar, component-wise kernel
k( x, y ) = Σ_{i=1}^{N} [ ( x_i y_i + 1 )^d - 1 ],
invertible for odd d

Less computational expense!


22 / 47
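Below is a sketch of the analytical preimage idea with the component-wise kernel written above; since that kernel expression is my reconstruction of the slide, treat its exact form as an assumption. For odd d each preimage component is recovered by inverting a scalar, monotone function, with no fixed-point iterations.

```python
import numpy as np

def analytical_preimage(betas, X_train, d):
    """Preimage of phi_hat = sum_j betas[j] * phi(x_j) for the assumed kernel
    k(x, y) = sum_i [ (x_i*y_i + 1)^d - 1 ] and odd d.

    Since k(e_i, x) = (x_i + 1)^d - 1, component i solves the scalar equation
        (xhat_i + 1)^d - 1 = sum_j betas[j] * k(e_i, x_j).
    """
    rhs = ((X_train + 1.0) ** d - 1.0).T @ betas      # right-hand side, one value per cell
    t = rhs + 1.0
    return np.sign(t) * np.abs(t) ** (1.0 / d) - 1.0  # real d-th root (d odd)

# Consistency check: the preimage of a single training member is that member itself.
X = np.random.default_rng(1).random((5, 10))          # 5 realizations, 10 grid cells
beta = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
assert np.allclose(analytical_preimage(beta, X, d=3), X[2])
```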

Preimage Experiments (1)


Grid: 45 × 45, Training set: 500 samples
Panels: Analytical (truncation) vs. Iterative

23 / 47

Preimage Experiments (2)


Grid: 45 × 45, Training set: 4000 samples
Panels: Analytical (truncation) vs. Iterative

24 / 47

Preimage Experiments (3)


Grid: 100 × 100, Training set: 500 samples
Panels: Analytical (truncation) vs. Iterative

25 / 47

Preimage Experiments (4)


Grid: 45 × 45, Training set: 500 samples
Panels: Analytical (logit) vs. Iterative

26 / 47

Preimage Experiments (5)


Grid: 200 × 200, Training set: 500 samples
Panels: Analytical (logit) vs. Iterative

27 / 47

KPCA-EnKF

Panels: d = 1, d = 3, d = 5
28 / 47

Ensemble Collapse!

Panels: d = 1, d = 3, d = 5
29 / 47

Subspace EnKF
Sarma and Chen, 2013

Partition the ensemble into groups
Define a different parameterization for each group

Assumption: the EnKF update x^a = x^f + K ( d - H x^f ) is equivalent to a steepest descent step on J,
where J is the mean squared data-mismatch error

Then, by the chain rule, the descent step can be taken in the parameters ξ of each group:
∂J/∂ξ = ( ∂x/∂ξ )^T ∂J/∂x


30 / 47
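A structural sketch of the subspace idea (simplified on purpose: plain PCA per group instead of KPCA, NumPy's array_split for the partitioning, and no update step), just to show how each group of ensemble members gets its own parameterization; all names here are illustrative.

```python
import numpy as np

def build_subspaces(training_set, ensemble, n_groups, n_pc):
    """Partition the ensemble over groups, each with its own (linear) parameterization.

    training_set : (n_samples, N) training realizations
    ensemble     : (N, n_ens) ensemble, one member per column
    """
    member_groups = np.array_split(np.arange(ensemble.shape[1]), n_groups)
    train_groups = np.array_split(training_set, n_groups)
    subspaces = []
    for members, T in zip(member_groups, train_groups):
        mean = T.mean(axis=0)
        _, _, Vt = np.linalg.svd(T - mean, full_matrices=False)
        basis = Vt[:n_pc]                                  # this group's principal directions
        xi = (ensemble[:, members].T - mean) @ basis.T     # members -> group parameters
        subspaces.append({"members": members, "mean": mean, "basis": basis, "xi": xi})
    return subspaces
```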

Subspace EnKF Workflow

31 / 47

Results (1)

Panels: d = 1, d = 3, d = 5
32 / 47

Results (2)

Panels: d = 1, d = 3, d = 5
33 / 47

Comparative History Matching

Ribbon Reservoir
Panels: saturation after 1 year, saturation after 5 years

34 / 47

Ensemble Mean

Panels: EnKF, KPCA-EnS (d = 3), 5-Subspace EnKF
35 / 47

Ensemble Variability

Panels: EnKF, KPCA-EnS (d = 3), 5-Subspace EnKF
36 / 47

Research topic 2

Study the effect of the number of subspaces when using the Subspace EnKF for history matching channelized reservoirs.

37 / 47

Experiment Setup

Sources of information
Ensemble: 100 members
Training set: 5 × 1500 samples

Split the 7500 training samples evenly over {2, 10, 50} subspaces

38 / 47

Ensemble Variability

Panels: 2 subspaces, 10 subspaces, 50 subspaces
39 / 47

Research topic 3

Develop a strategy to form the subspaces which takes into account the prior information about the reservoir.

40 / 47

Training Set Clustering

Generally applicable to any type of reservoir
It can create specialized subspaces
We used a separate set of 1400 samples to train a KPCA parameterization of order 3,
applied it to the training set, ξ_j = Φ(x_j),
and performed K-means clustering on the ξ_j
in order to partition the 7500 training samples over {2, 10, 50} subspaces

41 / 47
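A sketch of this clustering step, assuming scikit-learn's KMeans and reusing the kpca_project helper from the KPCA sketch above (both choices are mine, not the thesis implementation):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_training_set(training_set, kpca_model, kernel, n_subspaces):
    """Group training samples by K-means in KPCA coordinate space, one group per subspace."""
    Xi = np.array([kpca_project(x, kpca_model, kernel) for x in training_set])
    labels = KMeans(n_clusters=n_subspaces, n_init=10, random_state=0).fit_predict(Xi)
    return [training_set[labels == c] for c in range(n_subspaces)]
```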

Ensemble Variability

Panels: 2 subspaces, 10 subspaces, 50 subspaces
42 / 47

Ensemble Means

Panels: 2 subspaces, 10 subspaces, 50 subspaces
43 / 47

Contributions

Proposed a new approach to update binary variables
Studied adaptation methods for the update of bounded variables
Developed an analytical method to compute polynomial KPCA preimages
Paired the KPCA parameterization with the Iterative EnS and the Subspace EnKF to avoid ensemble collapse
Proposed training set clustering to adapt the parameterizations to the prior information
44 / 47

Recommendations
The analytical solution is generally preferable to approximate preimage schemes
Normalization plus the logit transform is generally preferable to truncation when updating bounded variables
When using the Subspace EnKF, the number of subspaces needs to be balanced against the training set size
Training set clustering seems to increase posterior variability, especially when a large number of subspaces is used

No single assimilation method is better than the others in general; the results need to be discussed with an expert

45 / 47

Future research
What is the effect of polynomial KPCA when used to update continuous variables?

Can the facies variables be extended to cases with more than 2 rock types? (see Sebacher et al., 2013)
What is the benefit of using polynomial chaos expansions together with KPCA? (see Ma and Zabaras, 2011)

Is the Kalman update truly equivalent to the steepest descent equation? (see Sarma and Chen, 2013)
Is it possible to adapt higher-degree KPCA to the Subspace EnKF framework? (see Sarma and Chen, 2013)
How do the presented assimilation methods scale to realistic 3D cases?
46 / 47

Keywords
Water flooding, Channelized reservoir, State vector, facies, History matching, Ensemble Kalman Filter, Multi-point geostatistics, Adaptation methods, Parameterization, Feature space, Polynomial KPCA, Preimage problem, Ensemble collapse, Subspace EnKF, Training set clustering
47 / 47

Cheat Slides

48 / 47

Rock Properties

Porosity (%)
Permeability (mD): flow effort, pore connectivity

49 / 47

Computational Expense (1)

50 / 47

Computational Expense (2)

51 / 47

Data Flow

52 / 47

Normalization

rescale the bounded variables into the unit interval before applying the logit transform

53 / 47

Logit transform

σ : ℝ → (0, 1),  σ(x) = 1 / (1 + e^(-x)); its inverse, the logit, maps the bounded values to ℝ before the update

54 / 47

Y-channel Setup

55 / 47

Ribbon Setup

56 / 47

Exp2 Ensemble Means

Panels: 2 subspaces, 10 subspaces, 50 subspaces
57 / 47

Exp3 Production (1)

Panels: Prior, 2 subspaces
58 / 47

Exp3 Production (2)

Panels: 10 subspaces, 50 subspaces
59 / 47

Exp3: 50 subspaces, ensemble members

60 / 47
