
SPE 77688

Analysis of Well Test Data From Permanent Downhole Gauges by Deconvolution


Thomas von Schroeter, SPE, Florian Hollaender, SPE, and Alain C. Gringarten, SPE, Imperial College, London
Copyright 2002, Society of Petroleum Engineers, Inc.

This paper was prepared for presentation at the 2002 SPE Annual Technical Conference and Exhibition held in San Antonio, Texas, 29 September to 2 October 2002.

This paper was selected for presentation by an SPE Program Committee following review of information contained in an abstract submitted by the author(s). Contents of the paper, as presented, have not been reviewed by the Society of Petroleum Engineers and are subject to correction by the author(s). The material, as presented, does not necessarily reflect any position of the Society of Petroleum Engineers, its officers, or members. Papers presented at SPE meetings are subject to publication review by Editorial Committees of the Society of Petroleum Engineers. Electronic reproduction, distribution, or storage of any part of this paper for commercial purposes without the written consent of the Society of Petroleum Engineers is prohibited. Permission to reproduce in print is restricted to an abstract of not more than 300 words; illustrations may not be copied. The abstract should contain conspicuous acknowledgement of where and by whom the paper was presented. Write Librarian, SPE, P.O. Box 833836, Richardson, TX 75083-3836, USA, fax 01-972-952-9435.

Abstract
Current trends towards permanent downhole instrumentation
allow the acquisition of large sets of well test data ranging over
much longer periods of time than previously imaginable. Such
data sets can contain information about the reservoir at a substantially larger radius of investigation than that accessible to
conventional derivative analysis, which is limited to the interpretation of single flow periods at constant rate. By contrast,
deconvolution methods do not suffer from this constraint as they
are designed to perform well test analysis at variable flow rate.
Recently we presented a new method for the deconvolution
of well test data in which the problem is reformulated as a separable nonlinear Total Least Squares problem which accounts
for uncertainties in the measurement of both rate and pressure
data.8 In this paper we report a number of improvements to our
algorithm, and derive error bounds for rate and response estimates in the presence of uncertainties in the data, for which we
assume simple Gaussian models. We illustrate our method by
applying it to a small simulated example and two large sets of
field data with up to 6000 hours of pressure data and up to 450
flow periods.
Introduction
With current trends towards permanent downhole instrumentation, continuous bottomhole well pressure monitoring is becoming the norm in new field developments. The resulting well
test data sets, recorded mainly during production, consist of
hundreds of flow periods and millions of pressure data points stretched over thousands of hours of elapsed time. Such data sets contain information about the reservoir at distances from the well which can be several orders of magnitude larger than the radius of investigation of a single flow period.
Conventional derivative analysis is therefore ill equipped to
access the full potential information content. What is required is
an analysis method which can estimate the response which the
reservoir would exhibit when subjected to a single drawdown at
constant rate over the entire production period. In mathematical
terms, this is a deconvolution problem. Over the last 40 years,
this problem has received sporadic, but recurring attention in
the Petroleum and Ground Water Engineering literature.
At last year's SPE Annual Technical Conference, we presented a new approach which is based on a nonlinear Total Least
Squares formulation and accounts for errors in the measured
rates as well as in the pressure signal8 (henceforth referred to as
paper I). We tested our method with simulated well test data
as well as a small-scale field example with 31 hours of usable
duration. Since then we have successfully applied our method
to a number of substantially larger data sets from production
wells with permanent downhole gauges.
In this follow-up we give a brief summary of the current state
of our approach, including some minor modifications made
since the first paper appeared, and a list of unresolved issues.
We also derive analytic expressions for the expected bias vector
and covariance matrix of the estimated parameter set based on
simple Gaussian models for the measurement errors in pressure
and rate signals. We then illustrate our method with a small
simulated data set, showing the effect of varying levels of regularization on bias and variance.
The second part focuses on practical aspects and includes results from some of the larger field examples mentioned above,
including one which allows a direct comparison of our method
with derivative analysis. Our experience to date suggests that,
within reasonable limits of data quality, and provided a careful choice of process parameters is made, our method produces
reliable estimates.
Deconvolution and well test analysis
The object of study in well test analysis is the wellbore pressure drop Δp(t) over time t, and in particular its derivative dΔp/d ln t with respect to the natural logarithm of time, for production at constant


rate. More specifically, we will consider the rate-normalized wellbore pressure drop Δp_U(t), i.e. the pressure drop caused by constant production of one unit of rate measurement (hence the subscript U). If the flow in the reservoir is governed by a diffusion equation and boundary conditions which are linear in the pressure, then the pressure drop signal Δp(t) observed in a well test with time-varying flow rate q(t) satisfies the integral equation

    Δp(t) = ∫_0^t q(τ) g(t − τ) dτ, . . . . . . . . . . (1)

where g denotes the reservoir response, which is the ordinary time derivative of the wellbore pressure drop for unit rate:

    g(t) = dΔp_U(t)/dt. . . . . . . . . . . (2)

Eqn. (1) is known as Duhamel's principle; it is the foundation of well test analysis. Duhamel's principle is a direct consequence of the linearity of the diffusion equation. In situations where nonlinearities play an important role, as for instance in gas or multiphase flow, it can only be an approximation.
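As an illustration, Duhamel's integral (1) can be evaluated numerically for a piecewise-constant rate history. The response function below is a hypothetical stand-in (not one of the paper's reservoir models), and the midpoint-grid quadrature is only a sketch; all function names are ours.

```python
import numpy as np

def unit_response(t):
    # Hypothetical positive unit-rate response; any integrable g(t) > 0
    # would do for this illustration.
    return 1.0 / (1.0 + t)

def pressure_drop(times, rate_steps, g=unit_response):
    """Evaluate Duhamel's integral Dp(t) = int_0^t q(tau) g(t - tau) dtau
    for a piecewise-constant rate history by simple trapezoidal quadrature.
    rate_steps is a list of (t_start, t_end, rate) tuples."""
    t_grid = np.linspace(0.0, times.max(), 2001)
    q = np.zeros_like(t_grid)
    for (t0, t1, y) in rate_steps:          # rate y on [t0, t1)
        q[(t_grid >= t0) & (t_grid < t1)] = y
    dp = np.empty_like(times)
    for j, t in enumerate(times):
        mask = t_grid <= t
        tau = t_grid[mask]
        dp[j] = np.trapz(q[mask] * g(t - tau), tau)
    return dp
```

Because (1) is linear in q, doubling the rate history doubles the computed pressure drop; this superposition property is exactly what the deconvolution approach exploits.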

any bias due to assumptions about the reservoir model. Thus


whereas conventional derivative analysis seems preoccupied by
the idea to substitute time by a new variable with respect to
which the pressure signal can be differentiated in the multirate
case, the essence of the deconvolution approach could be summarized as estimating the derivative without taking it.
Beyond these common features, there is still a considerable
variety of approaches to deconvolution; see our first paper for a
brief summary of what we believe to be the main contributions
to date. However, so far none of these approaches appears to
have met with universal success; in particular, the ones that have
been tested with simulated signals seem to share an extreme
sensitivity to measurement inaccuracies, with authors reporting
uninterpretable results in the presence of error levels as modest
as 1% in the rates. This probably explains why deconvolution
has not yet become a standard tool in well test analysis.
In our view, the difficulties can be attributed to one or more
of the following reasons:

&

Poor numerics. Especially some of the early contributions underestimated the level of numerical difficulty involved in solving what is usually an ill-conditioned problem. For instance, a typical source of errors is to attempt
to solve near-singular linear systems using Gauss elimination. Since the 1970s, the standard numerical tools for
maintaining stability in this situation have been QR factorization and Singular Value Decomposition (SVD); see, for
instance, Golub and van Loan.6

The crucial first step in well test analysis is to compute an


estimate of the quantity

  #    




. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (3)

from measurements of rate and pressure drop. Up to multipli


cation with time, the task amounts to estimating the function
from the linear system (1) given  and as functions of time.
In mathematical terms, this is a deconvolution problem. Common well testing practice tends to ignore this fact and to use
estimation schemes which produce estimates for each flow period separately, attempt to account for the effect of earlier parts
of the rate history on the basis of rather crude approximations
in the reservoir model, and involve numerical differentiation of
the pressure signal. Such an approach is fraught with numerical and also conceptual difficulties; what makes matters still
worse is that it can only yield estimates of the response up to
the maximal elapsed time during each flow period. In practice,
the radius of investigation is even further reduced in the process of numerical differentiation which amplifies measurement
noise just as it amplifies reservoir features.
By contrast, deconvolution estimates are by definition estimates for varying production rate, and are therefore not subject
to any constraints on the radius of investigation short of the test
duration $ . Besides, they are based entirely on the linear system (1) and therefore avoid numerical differentiation as well as

There are of course situations in which it may be desirable to consider


only a small portion of the pressure sequence, for instance to test for changing
reservoir behaviour (and thus for the extent to which eqn. (1) is satisfied for the
data set in question). However, the important point is that with deconvolution
methods, the size of the pressure data window is freed from the constraint of
having to fit into a single flow period, and becomes entirely a matter of choice.


• Failure to use regularization. While an SVD-based direct solution of the discretized linear system (1) may just be robust enough for noise-free simulated data, it will usually fail in the presence of even benign levels of noise in pressure and rate signals. Some form of regularization is needed in order to impose conditions on the solution which make it a physically meaningful estimate of the function g, like smoothness and positivity. Research in Petroleum Engineering has been slow to recognize this, even though a paper in Water Resources Engineering from the 1960s already contained an element of regularization.3

• Inappropriate error model. The more recent suggestions for deconvolution methods tend to be based on a Least Squares formulation, and are thus at least implicitly based on an error model. However, the error model associated with an ordinary Least Squares approach attributes the entire difference between the two sides of eqn. (1) to errors in the pressure measurement. This is in sharp contrast with current well testing practice, where reported rates are rarely measured at all, and even if they are, measurement errors of up to 10% are not uncommon. In any case, the relative uncertainty in the rate signal is typically much larger than that in the pressure signal. If rate data are to be used at all in the deconvolution process, the error model should certainly reflect the relative size of their contribution to the overall error. This line of reasoning leads directly to a Total Least Squares formulation (see, for instance, Golub and van Loan,6 Sec. 12.3); in Statistics, this type of problem is known as an Errors-In-Variables problem.
• Unfavourable encoding of the solution. It is tempting to discretize eqn. (1) by linear interpolation of the response function g itself, as this leads to a system which is linear in the coefficients of the response. This is the approach that most methods published to date have taken. However, it then becomes necessary to encode positivity of the response as an explicit constraint. In practical terms, this introduces a fair degree of complication in the solution algorithm. In mathematical terms, what is at issue is the topology of the solution space. The only kind of inequality that can be enforced by explicit constraints is the non-strict kind, g(t) ≥ 0 for all t; however, only the strict inequality g(t) > 0 makes physical sense. Indeed, the conventions for the diagnostic plot are such that the logarithms log10 Δp_U(t) and log10 (dΔp_U(t)/d ln t) are plotted over log10 t, which moves the boundary g = 0 to minus infinity!

The remedy is already contained in these observations, namely to encode the solution in such a way that explicit sign constraints are unnecessary. The obvious candidate for such an encoding is the one which is used in the diagnostic plot. To avoid a profusion of numerical factors, we shall use natural instead of decadic logarithms; thus we propose to estimate

    z(σ) = ln ( dΔp_U(t)/d ln t ) = ln ( t g(t) ) . . . . . . . . . . (4)

as a function of σ = ln t. From a practical point of view, encoding the solution in this way has of course the unwelcome effect of rendering the problem nonlinear. But at least some of this complication is offset by the disappearance of sign constraints: optimization under constraints is considerably harder than unconstrained optimization.
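A minimal sketch of the encoding (4) and its inverse, assuming a strictly positive response sampled at given times (the function names are ours, not the paper's):

```python
import numpy as np

def encode_response(t, g):
    """Encode a strictly positive response g(t) as z(sigma) = ln(t * g(t)),
    sigma = ln t, so that no sign constraint on z is needed."""
    t = np.asarray(t, dtype=float)
    g = np.asarray(g, dtype=float)
    return np.log(t), np.log(t * g)

def decode_response(sigma, z):
    """Invert the encoding: recover t and g(t) from (sigma, z)."""
    t = np.exp(sigma)
    return t, np.exp(z) / t
```

Any real-valued z decodes to a positive g, which is exactly why the explicit positivity constraint disappears.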
In paper I we proposed a formulation of the deconvolution
problem which tries to incorporate all these observations and
is still remarkably simple in computational terms. In the following sections we give a brief summary of the algorithm and
indicate some of the improvements made over the past year.
Deconvolution of well test data as a nonlinear Total
Least Squares problem
We will assume that the average initial reservoir pressure p_0 is known,† and denote the wellbore pressure and pressure drop measured at time t_j respectively by p_j and Δp_j = p_0 − p_j, j = 1, …, N. We also assume that an interpolation scheme for the flow rate data has been chosen such that

    q(t) = Σ_{i=1}^{M} y_i χ_i(t), . . . . . . . . . . (5)

where the functions χ_i are the interpolants and the coefficients y_i are known from measurements. Typically, flow rates are reported as step-wise constant over time; if, in this case, y_i denotes the rate in a time interval [t_{i−1}, t_i), then

    χ_i(t) = 1 if t_{i−1} ≤ t < t_i, 0 otherwise. . . . . . . . . . . (6)

† Contrary to what we stated in our first paper,8 we do not recommend trying to estimate the average initial pressure in the deconvolution process if rates and response are both unknown, since this usually leads to an indeterminacy which manifests itself as slow convergence with gradual drift in the response iterates, or even as failure to converge at all. The average initial pressure is often well known from measurements.
Then evaluating eqn. (1) at times t_j yields a set of equations which, in matrix-vector form, can be written as

    Δp = C(z) y, . . . . . . . . . . (7)

where C is a matrix-valued function with components

    C_{ji}(z) = ∫_{−∞}^{ln t_j} χ_i(t_j − e^σ) e^{z(σ)} dσ. . . . . . . . . . . (8)

We can think of z as a vector (z_1, …, z_n) if we choose another fixed set of interpolants φ_k, k = 1, …, n (piecewise linear in our implementation), and write

    z(σ) = Σ_{k=1}^{n} z_k φ_k(σ). . . . . . . . . . . (9)

In the presence of measurement uncertainty, both sides of eqn. (7) are affected by errors. In this case, let us denote the true, but unobserved signals by Δp − ε and y − δ, where ε and δ are signals representing the measurement errors in pressure and rate, respectively. These unobserved signals satisfy Duhamel's principle:

    Δp − ε = C(z)(y − δ). . . . . . . . . . . (10)

A good deconvolution result is one in which the rate match and pressure match vectors, δ and ε, are small and in which the response is sufficiently smooth, in a sense yet to be defined. These requirements motivate the following class of error measures:

    E = ‖ε‖₂² + ν‖δ‖₂² + λ‖Dz − k‖₂². . . . . . . . . . . (11)

Here ν and λ are adjustable weights which allow a tradeoff between rate match, pressure match, and smoothness; ‖v‖₂ denotes the 2-norm of a vector v, and D and k are a constant matrix and vector chosen such that ‖Dz − k‖₂ is a measure of the smoothness of z. In terms of these quantities, we define our estimate of z and y − δ for given weights (ν, λ) as the global minimizer‡ of the error measure (11) subject to the constraint (10).

Alternatively, expressed in terms of z and the true, unobserved rates u = y − δ, our estimate is the unconstrained minimizer of

    E(z, u) = ‖C(z)u − Δp‖₂² + ν‖y − u‖₂² + λ‖Dz − k‖₂². . . . . . . . . . . (12)

A more compact formulation which exhibits the linear dependence of the residue on the rates is

    E(z, u) = ‖F(z)u − f(z)‖₂², . . . . . . . . . . (13)

where the matrix F(z) and the vector f(z) are given by

    F(z) = [ C(z) ; √ν I ; 0 ],   f(z) = [ Δp ; √ν y ; √λ (k − Dz) ].

Here I denotes the identity matrix of size M. Thus, in mathematical terms, minimizing the error measure amounts to a separable nonlinear Least Squares problem.2

The class of error measures (12) still depends on the parameters ν and λ and on the matrix D and the vector k. We shall postpone our discussion of the various possible choices until after the error analysis, which is independent of these choices.

Error analysis
For the purposes of error analysis, we shall assume that perturbations in the observed quantities Δp and y are sufficiently small to allow us to linearize the residue about the true rate and reservoir parameters (u_tr, z_tr). To do this, we shall write

    C(z)u ≈ C̄ u + Ḡ (z − z_tr), . . . . . . . . . . (14)

where C̄ and Ḡ are constant matrices given by

    C̄ = C(z_tr),   Ḡ = ∂(C(z)u)/∂z evaluated at (u_tr, z_tr). . . . . . . . . . . (15)

This kind of linearization is also used in Gauss-Newton type algorithms such as the Variable Projection algorithm,2 which is the standard algorithm for this type of problem, and the one we use. With the substitution (14), we can approximate the error measure (12) in the form

    E(â) ≈ ‖A â − b̂‖₂², . . . . . . . . . . (16)

where â = (u − u_tr, z − z_tr) collects the deviations of rates and response from their true values, and

    A = [ C̄  Ḡ ; √ν I  0 ; 0  √λ D ],   b̂ = [ ε ; √ν δ ; √λ (k − D z_tr) ].

This is a standard Least Squares problem, and thus its error analysis is straightforward. The estimate is the unique minimizer of (16) with minimal vector 2-norm, which is given by

    â = A⁺ b̂, . . . . . . . . . . (17)

where A⁺ denotes the generalized (Moore-Penrose) matrix inverse2 of A. In the generic case in which A has full column rank,

    A⁺ = (AᵀA)⁻¹ Aᵀ, . . . . . . . . . . (18)

which is the form familiar from the normal equations.

‡ It is beyond the scope of this paper to address the question of uniqueness of the solution. At a local level, the use of the generalized matrix inverse in the joint linearized estimate (discussed below) ensures that in the case of rank deficiency of the matrix A, the solution with minimal vector 2-norm is chosen; this solution is unique. Cf. Bjorck.2
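The separability noted above means that, for a fixed response z (and hence a fixed matrix C = C(z)), the rates enter the error measure through an ordinary linear Least Squares problem. A sketch of that inner solve, with hypothetical names and an assumed pre-assembled C:

```python
import numpy as np

def solve_rates_given_response(C, dp, y, nu):
    """For fixed C = C(z), minimize ||C u - dp||^2 + nu ||y - u||^2 over the
    rates u by stacking the two blocks into one Least Squares problem
    (the regularization term does not depend on u)."""
    M = C.shape[1]
    F = np.vstack([C, np.sqrt(nu) * np.eye(M)])
    f = np.concatenate([dp, np.sqrt(nu) * y])
    u, *_ = np.linalg.lstsq(F, f, rcond=None)
    return u
```

In a Variable Projection iteration, this inner solve alternates with a Gauss-Newton update of the nonlinear response parameters z.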


While regularization has the effect of smoothing the estimate
and also of narrowing the confidence intervals (as will be shown
below), these benefits come at a price: Regularization introduces bias, i.e. a difference between the true value and the
expectation value of the estimate (which can be thought of as
an average over a large number of repetitions of the estimate
with different realizations of the perturbation sequences). Thus,
regularization allows a tradeoff between an estimate which is
correct on average over many realizations but has large error
bounds for each individual realization, and an estimate with
narrower error bounds which, on average, has some hopefully
minor errors. In our linear approximation, these two effects are
easy to quantify.


It is shown in Appendix A that, provided that the matrix A has full rank and that the two error sequences ε and δ have zero mean, the bias of the parameter estimate (17) is given by

    bias(â) = E(â) − â_tr = λ (AᵀA)⁻¹ D̃ᵀ (k − D z_tr), . . . . . . . . . . (19)

where E denotes expectation and D̃ = (0  D) is the regularization block of A. This shows that, at least in the generic case (AᵀA invertible, D̃ᵀ(k − D z_tr) ≠ 0), the estimate is biased unless λ = 0.

In order to derive confidence intervals, it is necessary to make our error model more explicit. We shall assume that the perturbations ε and δ are uncorrelated, normally distributed random sequences with zero mean and variances σ_p² and σ_y², respectively. Denoting the expected parameter estimate by â* = E(â), the covariance matrix for the estimate is

    Cov(â) = E[ (â − â*)(â − â*)ᵀ ] = A⁺ Cov(b̂) (A⁺)ᵀ,

where the covariance matrix of the vector b̂ is given in matrix block form by

    Cov(b̂) = [ σ_p² I_N  0  0 ; 0  ν σ_y² I_M  0 ; 0  0  0 ].

Here the entire matrix is square and has size N + M + R, where R is the row dimension of the regularization matrix D. Only the diagonal elements of Cov(â) are needed for the confidence intervals; their square roots are the radii of these intervals. Thus if s_i² is the i-th diagonal element of Cov(â), then the estimate â_i will lie in the interval â*_i ± s_i with a probability of about 68%.
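Eqns. (17)-(19) and the covariance expression translate directly into a few lines of linear algebra. In the sketch below, `Dt_times_r` stands for the vector D̃ᵀ(k − D z_tr) of eqn. (19), and the matrices are assumed to be already assembled; all names are ours.

```python
import numpy as np

def estimate_bias_and_cov(A, Dt_times_r, cov_b, lam):
    """Linearized error analysis for min ||A a - b||^2:
    bias = lam * (A^T A)^{-1} * Dt_times_r,
    Cov(a) = A^+ Cov(b) (A^+)^T; the square roots of its diagonal are the
    68% confidence radii."""
    Ap = np.linalg.pinv(A)
    bias = lam * np.linalg.solve(A.T @ A, Dt_times_r)
    cov = Ap @ cov_b @ Ap.T
    radii = np.sqrt(np.diag(cov))
    return bias, cov, radii
```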

If this type of error analysis is to be applied to field data, estimates of the error variances σ_p² and σ_y² are required. An unbiased estimate of the pressure variance is given by

    σ̂_p² = ‖ε‖₂² / (N − f), . . . . . . . . . . (20)

where the correction f is the effective number of fitted parameters, computed from the singular value decomposition of the matrix F(z) after the last iteration. In our experience this works well with simulated data. As for the rate variance σ_y², probably the best estimator would be the quotient σ̂_p²/ν*, where ν* is the minimizer of the Generalized Cross-Validation (GCV) error;5 however, with our field examples, GCV minimization failed or led to absurdly low values of the error weight, and so we used the RMS rate match instead,

    σ̂_y² = ‖δ‖₂² / M, . . . . . . . . . . (21)

which underestimated the rate error in the simulated examples by about 10-20%.
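These simple data-based estimators can be sketched as follows. The degrees-of-freedom correction for (20) is taken as an input `dof` here rather than computed from an SVD, and the weight formula follows our reading of eqn. (22) below, so treat both normalizations as indicative; all names are ours.

```python
import numpy as np

def rate_variance_estimate(delta):
    """Eqn (21): RMS rate match as an estimator of the rate error variance."""
    delta = np.asarray(delta, dtype=float)
    return delta @ delta / delta.size

def pressure_variance_estimate(eps, dof):
    """Eqn (20)-style estimator: squared pressure match divided by the number
    of pressure samples minus the effective degrees of freedom of the fit."""
    eps = np.asarray(eps, dtype=float)
    return eps @ eps / (len(eps) - dof)

def default_error_weight(dp, y):
    """Our reading of the default weight (22): squared-norm ratio of pressure
    to rate data, with a 1/N factor to balance the differing sample sizes."""
    dp = np.asarray(dp, dtype=float)
    y = np.asarray(y, dtype=float)
    return (dp @ dp) / (len(dp) * (y @ y))
```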

Synopsis of the remaining choices
The following table summarizes the principal remaining choices which must be made before the method is completely specified, and indicates our current preferences. Equation numbers with superscripts refer to specifications unchanged since paper I; we refer the reader to the description given there.

    Parameter                  Choice / criterion       Eqn.
    Rate interpolation χ_i     Unit rectangle           (A-1)⁸
    Response interpol. φ_k     Piecewise linear         (32)⁸
    Nodes σ_k                  Uniform in ln t          (41)⁸
    Error weight ν             Balanced match terms     (22)
    Regul. parameter λ         Smooth response          (23)
    Regul. matrix D            Curvature                (29)
    Regul. vector k            Curvature at 1st node    (28)

Our preferred choice for the error weight is

    ν = ‖Δp‖₂² / (N ‖y‖₂²); . . . . . . . . . . (22)

the only change from paper I is the factor 1/N, which is intended to balance the effect of vastly differing sample sizes for pressure and rate signal. This recommendation is at present not based on any rigorous analysis, but solely on experience. We were hoping to develop a more rigorous selection criterion using Generalized Cross-Validation5 (GCV) in the context of the isolated rate estimate; however, as we already mentioned, this worked only with simulated data and failed with the two field examples discussed below. Attempts to use the cross-validation principle for the selection of both parameters ν and λ in the context of the joint linearized estimate have so far not led to an efficient algorithm; this is a matter of current research.

Thus, with regard to more rigorous selection principles, the situation is very much the same for the second parameter λ; in particular, our experience with the L-curve criterion tested in paper I is that it consistently overestimates the optimal level of regularization by several orders of magnitude, with the effect that important features of the solution are lost. However, at least the effect of this parameter is easier to observe, since it relates to the smoothness of the result. Thus a possible way to choose it is simply by looking at the result and increasing the parameter to a value for which the response is just smooth enough to be interpretable without losing its dominant features. This is admittedly a very subjective criterion but, at the time of writing, it is the only one we can offer. As a starting value which consistently underestimates the optimal level of smoothing, we use

    λ = ‖Δp‖₂² / N, . . . . . . . . . . (23)

and then increase λ by successive powers of 10.

The two remaining choices define the geometric content of the smoothness criterion and require more detailed discussion.

Regularization by curvature
Type curves for known reservoir models do not exhibit sharp features; instead, flow regimes tend to follow one another smoothly. Previous authors, including ourselves, have sought to quantify the geometric content of this statement in terms of an average measure of the derivative of the response. However, as well test analysis draws a large amount of information from the slopes of the graphed solution, penalizing derivatives is likely to lead to bias. Thus, a more accurate description of the kind of smoothness at issue is in terms of the curvature of the graph.

For the simple regularization scheme which we are considering, this suggests that the third term in the error measure (12) should be a measure of the average curvature of the graph of z. For the sake of generality, we shall develop our measure for an arbitrary spacing of nodes.

For a piecewise linear function, the curvature of its graph is localized at its nodes; an obvious measure of the overall curvature is

    κ² = Σ_{k=1}^{n−1} θ_k², . . . . . . . . . . (24)

where θ_k is the angle between the segments joined at node σ_k. This angle is given by

    sin θ_k = ‖(v_k − v_{k−1}) × (v_{k+1} − v_k)‖ / ( ‖v_k − v_{k−1}‖₂ ‖v_{k+1} − v_k‖₂ ), . . . . . . . . . . (25)

where × denotes the vector product (or cross product) in 3 dimensions, and the v_k are the vertices of the graph of z. The situation is depicted schematically in Fig. 1.

Fig. 1: Angle between two successive line segments in a piece-wise linear interpolation of the response function.

As in paper I, we model wellbore storage by assuming unit slope before the first node:

    v_0 = v_1 − c, . . . . . . . . . . (26)

where c = (1, 1) is tangent to the first (infinite) segment. To bring the curvature measure into a form suitable for regularization purposes, we approximate θ_k in (24) by its sine, set the z components of the vectors appearing in the denominators in (25) and (26) to zero, and replace ‖c‖₂ in (26) by one. The combined effect of these approximations is that we can write

    κ² = ‖Dz − k‖₂², . . . . . . . . . . (27)

where

    k = (1, 0, …, 0)ᵀ, . . . . . . . . . . (28)

and D is a matrix of size (n−1) × n, with first row given by

    D_{11} = −1/(σ_2 − σ_1),   D_{12} = 1/(σ_2 − σ_1),   D_{1l} = 0 for l > 2, . . . . . . . . . . (29a)

and rows j = 2, …, n−1 given by

    D_{j,j−1} = 1/(σ_j − σ_{j−1}),
    D_{j,j} = −( 1/(σ_j − σ_{j−1}) + 1/(σ_{j+1} − σ_j) ),
    D_{j,j+1} = 1/(σ_{j+1} − σ_j),
    D_{j,l} = 0 otherwise. . . . . . . . . . . (29b)

These formulae are valid for any spacing of the nodes σ_k. For uniform spacing with step size h, rows 2 to n−1 of D reduce to the well-known discrete approximation of a second derivative operator, with −2/h in the diagonal and 1/h in the sub- and superdiagonal.

Finally, in order to make our curvature measure invariant under subdivision of node intervals, we divide D and k by the Frobenius norm of D, i.e. we use

    D ← D/‖D‖_F   and   k ← k/‖D‖_F,   where ‖D‖_F² = Σ_{j,l} D_{jl}². . . . . . . . . . . (30)

Numerical experiments show that the use of curvature instead of derivatives for regularization is a big improvement. See Fig. 3 (compare with Fig. 4 in paper I).
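The regularization pair (28)-(30) for an arbitrary node spacing can be assembled as follows; this is a sketch with our own naming, in which the first row encodes the unit-slope wellbore storage segment before the first node.

```python
import numpy as np

def curvature_regularizer(sigma):
    """Build D ((n-1) x n) and k (n-1) so that ||D z - k||_2 approximates the
    overall curvature (24)-(29): component 1 compares the slope of the first
    segment with the assumed unit slope; components 2..n-1 are slope
    differences at the interior nodes. D and k are finally divided by the
    Frobenius norm of D, eqn (30)."""
    sigma = np.asarray(sigma, dtype=float)
    n = len(sigma)
    h = np.diff(sigma)                      # node spacings
    D = np.zeros((n - 1, n))
    k = np.zeros(n - 1)
    D[0, 0], D[0, 1] = -1.0 / h[0], 1.0 / h[0]
    k[0] = 1.0                              # unit slope before the first node
    for j in range(1, n - 1):
        D[j, j - 1] = 1.0 / h[j - 1]
        D[j, j] = -(1.0 / h[j - 1] + 1.0 / h[j])
        D[j, j + 1] = 1.0 / h[j]
    fro = np.linalg.norm(D, 'fro')
    return D / fro, k / fro
```

For a response whose graph is a straight line of unit slope, Dz − k vanishes, so the penalty correctly assigns zero curvature to that case.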

Implementation issues
The standard algorithm for the minimization of the separable error measure (13), and the one we use, is the Variable Projection algorithm.2 As the problem is nonlinear in z, the algorithm is iterative; initial values for rates and response are as described in paper I.

The Variable Projection algorithm involves the solution of two ordinary Least Squares problems in each iteration. For these, we implemented two algorithms, one based on the QR decomposition and one on Singular Value Decomposition (SVD). The advantage of the QR algorithm is that the QR decomposition can be computed in situ, which makes it the method of choice for large-scale data sets. Its only drawback is that in case of rank deficiency it is not guaranteed to produce the minimum 2-norm solution. However, rank deficiency is usually avoided due to regularization. As a check we compute the thin SVD of the system matrix after the last iteration. We encountered rank deficiency of the matrix C(z) in a number of cases, indicating that an unregularized attempt to deconvolve the rates from the last response iterate would be underdetermined. However, we never found rank deficiency in the matrix of the joint linearized problem.

We implemented two versions of our algorithm, a prototype in Mathematica (Wolfram Inc.) and a more efficient version in C. The C version makes use of the Meschach library of matrix algebra and factorization routines, which is public domain software.7
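The QR/SVD tradeoff described above can be illustrated in a NumPy sketch (Meschach itself is a C library; these stand-ins use our own function names):

```python
import numpy as np

def ls_solve_qr(F, f):
    """Least Squares via QR factorization: cheap, and computable in situ in a
    C implementation, but not guaranteed to return the minimum 2-norm
    solution if F is rank deficient."""
    Q, R = np.linalg.qr(F)
    return np.linalg.solve(R, Q.T @ f)

def ls_solve_svd(F, f, rcond=1e-12):
    """Least Squares via the (thin) SVD: returns the minimum 2-norm solution
    even under rank deficiency, at higher cost."""
    return np.linalg.pinv(F, rcond=rcond) @ f
```

When F has full column rank, as regularization normally ensures, both routines return the same solution.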
Tests with simulated data
Figures 2-8 show results for essentially the same simulated example that was already used in paper I. The simulated reservoir behaviour is radial flow with wellbore storage, skin and a sealing fault according to the model given by Agarwal, Al-Hussainy and Ramey.1 The dimensionless wellbore storage and skin parameters are as in paper I, and the distance to the fault is 300 wellbore radii. For the sake of generality, all other quantities in the plots are dimensionless too.

Fig. 2 shows a plot of the dimensionless unit pressure drop Δp_U(t) and the derivative dΔp_U/d ln t over log10 t for this model (blue), together with the type curves for infinite radial flow (black) and a linear interpolation of the simulated derivative (purple) at nodes

    σ_k, k = 1, …, n, spaced uniformly in ln t. . . . . . . . . . . (31)



Fig. 2: Pressure and derivative type curves for radial flow (black) and a sealing fault (blue), and interpolated derivative type curve (purple). The two dashed vertical lines mark the end of the longest flow period and the end of the test.


Fig. 3: True type curves and deconvolved responses for the simulated tests shown in Fig. 4. Black dots mark the initial response. The two dashed vertical lines indicate the end of the longest flow period and the end of the test.

Fig. 4: Rates (left) and pressure signals (right) obtained from the model response shown in Fig. 2. Clean data in black; perturbed rate signals in yellow for 1% and in red for 10% error level; perturbed pressure signals in green for 0.5% and in red for 5% error level.
Fig. 5: Relative errors of the rate (left) and response estimates (right) for the simulated example, with default parameters (solid) and error weight adapted by GCV (dashed).

Fig. 6 Predicted relative bias of the rate and response estimates, computed with the true response and the true error levels for three levels of regularization: default (dashed), 1% of the default (dotted), and 100 times the default (solid).
Fig. 7 Predicted standard deviations of the rate and response estimates, computed with the true response and error levels. Colour code as before; regularization levels as in Fig. 6.

Fig. 8 Predicted confidence regions for the response estimate, based on the true response and the predicted bias and standard deviations. Colour code as before; regularization levels as in Fig. 6. The right plot shows the late-time part in greater detail.

Data set   Colour    Pressure error   Rate error
1          Black     –                –
2          Green     0.5%             –
3          Yellow    0.5%             1%
4          Red       0.5%             10%
5          Purple    5%               10%

Table 1. Data sets and colour code for the simulations.

Fig. 4 shows simulated rates and pressure signals as functions of time. The duration of the longest flow period corresponds to the first dashed vertical line in Fig. 2, and the duration of the entire test to the second. Thus by construction the difference between the two reservoir models emerges only after the longest rate period, and is therefore invisible to conventional derivative analysis.
Clean data are shown in black. Four levels of measurement errors are simulated and colour-coded as shown in Table 1. Here the error levels are defined as the relative RMS differences from the true signals. An error level of 0.5% in the pressure signal would correspond to the following parameter combination: permeability 100 mD, reservoir thickness 50 ft, flow rate 1000 barrels per day, viscosity 0.6 cP, pressure uncertainty 2.5 psi. In addition to the perturbations considered in paper I, we also simulated an error level of 5% in the pressure signal.
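The error level used here is the relative RMS difference ||s - s_tr|| / ||s_tr|| between a perturbed signal and the true one. A minimal sketch with a synthetic signal (illustrative only, not the paper's data):

```python
import math
import random

def relative_rms_error(signal, true_signal):
    """Relative RMS difference between a measured and a true signal."""
    num = math.sqrt(sum((s - t) ** 2 for s, t in zip(signal, true_signal)))
    den = math.sqrt(sum(t ** 2 for t in true_signal))
    return num / den

# Perturb a synthetic signal with Gaussian noise scaled to a target level.
random.seed(0)
true_signal = [math.sin(0.01 * i) + 2.0 for i in range(1000)]
target = 0.005  # a 0.5% error level
scale = target * math.sqrt(sum(t ** 2 for t in true_signal) / len(true_signal))
noisy = [t + random.gauss(0.0, scale) for t in true_signal]
level = relative_rms_error(noisy, true_signal)  # approximately 0.005
```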


As before, we used the default choices for the error weight and regularization parameters. Fig. 3 shows the deconvolved responses; the starting values are shown as black dots. Except for the cases with 10% rate errors, the results are almost indistinguishable from the true type curve. The cases with 10% rate error deviate noticeably from the correct behaviour at very early and late times; the result for 5% pressure error and 10% rate error would probably not lead to a correct interpretation.
Fig. 5 shows the relative errors of the deconvolved rate and response with the default parameters and with the error weight adapted by GCV. As was to be expected, the effect of the 5% error in the pressure signal is much more damaging for the response estimate than for the rate estimate. Using the error weight adapted by GCV for the isolated rate estimate instead of the default choice leads to slight improvements, the most noticeable being the reduction in the error of the last node for data set 4.
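Generalized cross-validation in the sense of Golub, Heath and Wahba5 selects a ridge-type parameter by minimizing a rotation-invariant prediction criterion. A generic sketch for a plain ridge problem (illustrative only; the symbols and the synthetic problem below are not the paper's):

```python
import numpy as np

def gcv_score(X, b, lam):
    """GCV score for ridge regression: n * ||(I - A)b||^2 / tr(I - A)^2,
    with influence matrix A = X (X^T X + lam I)^{-1} X^T."""
    n = X.shape[0]
    # The SVD diagonalizes the influence matrix on the singular directions.
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    f = s**2 / (s**2 + lam)          # shrinkage factors
    Ub = U.T @ b
    resid2 = np.sum(((1 - f) * Ub) ** 2) + (b @ b - Ub @ Ub)
    trace = n - np.sum(f)            # tr(I - A)
    return n * resid2 / trace**2

# Pick lambda on a log grid for a small synthetic problem.
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 8))
x_true = rng.standard_normal(8)
b = X @ x_true + 0.1 * rng.standard_normal(60)
grid = 10.0 ** np.linspace(-6, 4, 41)
lam_best = min(grid, key=lambda lam: gcv_score(X, b, lam))
```

The attraction of GCV in this context is that it requires no prior knowledge of the error variance.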
A more thorough analysis is possible on the basis of the statistical information which we are now able to compute. Fig. 6
shows the predicted relative bias of the joint rate and response
estimate for varying levels of regularization, computed with the
true response and error levels, illustrating how regularization
causes bias. Fig. 7 shows the predicted standard deviations, illustrating how regularization reduces variance. The combined
effect is shown in Fig. 8. Evidently, the response curve obtained
with 100 times the default regularization is too stiff to fit the

true reservoir behaviour. Predictions at this level of regularization are likely to lie within the slim, solid tubes which include
the type curve for infinite radial flow, but not the correct one for
the sealing fault. On the other hand, the predicted confidence
regions at lower levels of regularization show a divergence in
the late time region which, at the larger error levels, makes it
equally difficult to interpret the model with any confidence.
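The mechanism behind these two figures can be written down for a generic linear model (illustrative notation, not the paper's symbols): with data b = Ax + delta, zero-mean errors of variance sigma^2, and a ridge-type regularized estimator,

```latex
\hat{x}(\lambda) = \left(A^{\top}A + \lambda D^{\top}D\right)^{-1} A^{\top} b ,
\qquad
\mathbb{E}\!\left[\hat{x}(\lambda)\right] - x
  = -\lambda \left(A^{\top}A + \lambda D^{\top}D\right)^{-1} D^{\top}D\, x ,
\qquad
\operatorname{Cov}\!\left[\hat{x}(\lambda)\right]
  = \sigma^{2} \left(A^{\top}A + \lambda D^{\top}D\right)^{-1} A^{\top}A
    \left(A^{\top}A + \lambda D^{\top}D\right)^{-1} .
```

The bias term grows with the regularization parameter while the covariance shrinks, which is exactly the trade-off visible in Figs. 6 and 7.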
Field data
In this section we shall show applications of our deconvolution algorithm to two large sets of field data which we shall refer
to as Well A and Well B.
Well A is an oil well in the North Sea for which data were
gathered only for the first 50 and the last 8000 hours of its
30,000 hours of production history (Fig. 9, left plot). The
pressure signal is only available again for the last 6000 hours,
clearly indicating some unrecorded production. A build-up test
of about 3000 hours is visible at the end, which makes this data
set ideal for verification purposes.
For deconvolution we used the 35 rates visible in the left plot
and the pressure signal from all build-ups after 25300 hours,
except for the last one. The pressure signal consists of 9221
data points (see Fig. 9, right plot).
Fig. 10 shows a comparison of deconvolution and derivative estimates. The purple and blue curves are the deconvolved results for regularization levels of 1,000 and 10,000 times the default, respectively (cf. Table 2). The red solid curve is the deconvolution result obtained from the last build-up alone (though with the same rates as before). Derivative results from the last build-up are shown as black dots, those from an earlier build-up as blue dots.

Evidently the deconvolved responses are in good agreement at late times; their disagreement at early times is probably due to missing early-time data in the last build-up.
As for the comparison between deconvolution and derivative estimates, the shapes obtained from both types of analysis are remarkably similar. To examine the reason for the vertical shift between them, we computed the build-up type curve which conventional derivative analysis would ideally obtain if the true reservoir behaviour were represented by the solid red curve and the measured signals were free from measurement errors. The result is the dashed red curve, which is in good agreement with the derivative (black dots). Thus the shift is most likely due to a well-known systematic error in the way conventional derivative analysis evaluates build-up data.
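That systematic error can be illustrated with a small sketch (a hypothetical line-source model in the semi-log regime, not the authors' computation): when the build-up derivative is taken with respect to the logarithm of elapsed time, superposition makes it fall away from the drawdown derivative once the elapsed time is no longer small compared with the producing time.

```python
import math

def pd_radial(t):
    """Semi-log approximation to the line-source (drawdown) solution,
    valid at late dimensionless time."""
    return 0.5 * (math.log(t) + 0.80907)

def buildup_derivative(dt, tp):
    """Conventional derivative d(dp_bu)/d(ln dt) of the build-up response
    after a single drawdown of duration tp, by central differences."""
    def dp_bu(x):
        # superposition in time: pressure recovered since shut-in
        return pd_radial(tp) - pd_radial(tp + x) + pd_radial(x)
    h = 1.0e-4
    return (dp_bu(dt * math.exp(h)) - dp_bu(dt * math.exp(-h))) / (2.0 * h)

tp = 1.0e5
early = buildup_derivative(1.0e3, tp)   # dt << tp: close to the drawdown value 0.5
late = buildup_derivative(1.0e6, tp)    # dt >> tp: the derivative collapses
```

Deconvolution, by estimating the underlying constant-rate response directly, avoids this distortion.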

Fig. 12 shows the rate and pressure match achieved with the deconvolution results.
The rate match shows that some of the rates are adapted by up
to 20 %; the average rate match per period is about 17 % (see
Table 2). The average pressure match per point is about 200
psi, or 10 % of the average RMS pressure drop of the build-ups
used for deconvolution (black dots in the right plot). Evidently
the drawdown periods are not well matched. Including them in

the data set for deconvolution altered the response at early and intermediate times, but not at late times.

Colour    Regularization level   Pressure   Rate
Yellow    1                      9.84%      16.8%
Orange    10                     9.92%      17.2%
Red       100                    10.0%      17.5%
Purple    1,000                  10.1%      17.4%
Blue      10,000                 10.5%      16.9%

Table 2. Colour code, regularization levels, and estimates of the relative standard deviations for Well A, given as percentages of the RMS averages of the measured pressure drop and rate sequences, respectively.
Fig. 11 shows estimated confidence intervals for the response, based on the results obtained at the various levels of regularization. Estimates for the two response curves shown in Fig. 10 are in the range of 5–10% except at very late times.
Well B is a lean gas condensate well in the North Sea; a more
detailed analysis of this data set is contained in another paper
presented at this Annual Technical Conference.4 The data consist of about 9,000 hours of production history; the pressure signal is only available for the last 1,400 hours. However, no rates
are missing, and the data are of much better quality than those
of Well A (Fig. 13). The entire data set comprises 448 flow periods and about 55,000 pressure data points; for deconvolution
we used all rates and every 4th pressure sample.

The deconvolved responses are shown in Fig. 14, together with some results from conventional analysis. The deconvolved results use the entire set of rates and varying subsets of the pressure signal, subsampled by a factor of 4 in each case.
The purple and blue solid curves represent the deconvolved results obtained with the first half of the pressure signal (up to 9850 h) and the entire signal, respectively. The main difference between them is the extent of the intermediate stabilization (between 10 and 100 hours), which may indicate a growing zone of liquid condensate drop-out. Results from derivative analysis of the longest flow period in the first half are shown as black dots and seem to agree well with the deconvolved results.
Closer scrutiny of the data set4 reveals that the first 200 hours of the pressure signal (corresponding to part of the pressure signal shown in grey in Fig. 13) are affected by phase redistribution. Omitting this part of the pressure signal in the deconvolution process produces the response shown in green. Surprisingly, the effect on the deconvolved result is negligible at early times, where changing wellbore storage would be visible; at late times, however, the behaviour is very different: the lower stabilization (between 100 and 1000 hours) is no longer visible.
The intermediate red curve is an analytic type curve for a two-zone radially composite model in a closed rectangle, fitted to one of the later build-up periods. The second stabilization and the late-time behaviour are not visible to conventional derivative analysis, which can only analyze the part of the response up to the dashed blue line. The choice of these features in the analytic model reflects additional information from seismic imaging which is not used by deconvolution.
Pressure matches with the two different deconvolved responses are shown in Figs. 15 and 16; they correspond to the blue and green response curves, respectively. Both pressure matches are excellent; the RMS pressure match error is about 34.7 psi per point, which is about 1% of the average RMS pressure drop. Fig. 16 also contains the last 1600 hours of the rate match obtained from the data set excluding the first 200 hours of the pressure signal. The average RMS rate match per period is about 3% of the average RMS rate. The earlier rates (up to 9000 h) are hardly adapted at all, which suggests that they have little or no effect on the pressure signal over the time interval in which it has been measured.
Conclusions
In paper I and in this paper we have presented a new algorithm for the deconvolution of well test data which is characterized by the following novel features:

- a nonlinear encoding of the reservoir response which makes explicit sign constraints unnecessary;
- an error measure that accounts for errors in the flow rate as well as in the pressure signal;
- regularization by curvature, which allows the user to control the degree of smoothness while avoiding the flattening of slopes associated with regularization based on derivatives;
- finally, to our knowledge, it is the first method to give estimates for bias and confidence intervals of the parameters.

We have demonstrated the potential of our method by applications to simulated data and to two large field examples. The simulated examples suggest that our method is much less sensitive to errors in the rate signal than those previously published. We attribute this success to the explicit inclusion of rate uncertainty in our error model.
We believe that the three examples also underline the superiority of deconvolution over the conventional derivative method, which is based on the following facts:

- unlike the multirate extension of derivative analysis, deconvolution does not suffer from any bias due to implicit model assumptions;
- it has no restrictions in terms of the choice of pressure data window, thereby allowing for a much greater radius of investigation; and
- it handles measurement error in a more sensible way, conceptually and numerically.

Fig. 9 Well A: Entire data set (except for an initial 50 h test, left) and close-up of the data used for analysis (right, enlarged). The right plot shows the build-up data used for deconvolution in black and the remaining pressure data in grey.
Fig. 10 Well A: Deconvolved responses obtained from all build-ups before the last (purple, blue) and from the last build-up alone (red, solid); derivative estimates for the last build-up (black dots) and an earlier build-up (blue dots); ideal build-up type curve based on the deconvolved response from the last build-up (red, dashed).
Fig. 11 Well A: Estimated relative standard deviation of the deconvolved response. Colour code as in Table 2.

Fig. 12 Well A: Rate match (left) and pressure match (right). Given rates in black, deconvolved rates in blue; pressure data used for deconvolution in black, remaining pressure data in grey, pressure signal from deconvolved response and fitted rates in red.

Fig. 13 Well B: Entire data set (left) and close-up of the pressure signal (right). The first 200 hours (grey) are affected by phase redistribution.
Fig. 14 Well B: Deconvolved responses for the first half of the pressure signal (solid, purple), for the entire pressure signal (solid, blue), and without the grey part of the pressure signal in Fig. 13 (green); derivative obtained from the longest build-up in the first half (black dots); fitted analytic type curve (red). Vertical lines: maximal elapsed time (dashed), entire duration of the pressure signal (solid, colours matching the responses).
Fig. 15 Well B: Pressure match for the entire data set (pressure subsampled by a factor of 4). Adapted rates in blue; pressure signal in black, pressure signal from deconvolved response and fitted rates in red.

Fig. 16 Well B: Rate and pressure match for the data set excluding the first 200 hours of the pressure signal. Given rates in black, adapted rates in blue; pressure signal in black, pressure signal from deconvolved response and fitted rates in red.


The combination of these advantages makes deconvolution the method of choice, especially for large data sets.
An issue still unresolved is the search for better (i.e. less subjective) criteria for the selection of the error weight and regularization parameter. This is a matter of current research.
Nomenclature
– = number of rows in a matrix
– = regularization matrix
E = error measure
– = matrix in the separable nonlinear LS residue, eqn. (13)
g = reservoir impulse response, eqn. (2) [psi/(h bopd)]
– = components of a matrix-valued function, eqn. (8)
– = uniform step size between nodes
I = identity matrix
– = number of pressure data points
– = number of nodes for the deconvolved response
– = number of flow periods
p = continuous pressure [psi]; (vector of) pressure data [psi]
– = average initial pressure [psi]
q = continuous flow rate; (vector of) discrete flow rates [bopd, MScf/d]
t = time [h]
T = test duration; as superscript: transpose of a matrix
– = vector in the separable nonlinear LS residue, eqn. (13)
– = (vector of) parameters estimated in the joint linearized LS problem, eqn. (16)
y = (vector of) estimated rates [bopd, MScf/d]
y_tr = vector of true rates [bopd, MScf/d]
z = continuous logarithmic response function (used for the diagnostic plot), eqn. (4)
– = vector with the nodal values of the response function
tr (subscript) = true values of a vector
– = (vector of) absolute rate errors [bopd, MScf/d]
Δp = continuous pressure drop signal [psi]; (vector of) pressure drop data [psi]
– = (vector of) absolute pressure data errors [psi]
– = rate interpolants, eqn. (5)
– = response interpolants, eqn. (9)
λ = regularization parameter, eqn. (12)
λ0 = default value for the regularization parameter, eqn. (23)
ν = relative error weight, eqn. (12)
ν0 = natural unit for the relative error weight, eqn. (22)
– = nodes for logarithmic response interpolation
– = integration variable
‖·‖ = 2-norm of a vector
‖·‖F = Frobenius norm of a matrix, eqn. (30)

Acknowledgements
This work was supported by BP Amoco, Schlumberger, Norsk Hydro and Conoco.

References
1. R. G. Agarwal, R. Al-Hussainy, and H. J. Ramey Jr. An Investigation of Wellbore Storage and Skin Effect in Unsteady Liquid Flow. I: Analytical Treatment. SPE Journal, Sept. 1970: 279–290. Paper SPE 2466.
2. Å. Björck. Numerical Methods for Least Squares Problems. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, 1996.
3. K. H. Coats, L. A. Rapoport, J. R. McCord, and W. P. Drews. Determination of Aquifer Influence Functions From Field Data. Trans. SPE (AIME), 231:1417–1424, 1964.
4. S. Daungkaew, F. Ross, and A. C. Gringarten. Well Test Investigation of Condensate Drop-out Behaviour in a North Sea Lean Gas Condensate Reservoir. SPE Annual Technical Conference and Exhibition, San Antonio, Texas, 29 September–2 October 2002. Paper SPE 77548.
5. G. H. Golub, M. Heath, and G. Wahba. Generalized Cross-Validation as a Method for Choosing a Good Ridge Parameter. Technometrics, 21:215–223, 1979.
6. G. H. Golub and C. F. van Loan. Matrix Computations. Third edition, The Johns Hopkins University Press, Baltimore, 1996.
7. D. E. Stewart and Z. Leyk. MESCHACH, version 1.2b, 1994. http://www.netlib.org.
8. T. von Schroeter, F. Hollaender, and A. C. Gringarten. Deconvolution of Well Test Data as a Nonlinear Total Least Squares Problem. SPE Annual Technical Conference and Exhibition, New Orleans, Louisiana, 30 September–3 October 2001. Paper SPE 71574.

Appendix A: Bias of the linearized parameter estimate
To derive eqn. (19), we first remark that the true parameter vector is an exact solution of the deconvolution problem for the true pressure drop sequence, and hence minimizes the corresponding error measure, . . . . . . . . . (A-1)
Thus, both the estimate and the true solution satisfy normal equations, . . . . . . . . . (A-2)
Subtracting the two normal equations from each other and substituting the data error, . . . . . . (A-3)
one obtains the difference between the two solutions, . . . . . . (A-4)
By assumption, the matrix of the normal equations is invertible, which yields . . . . . . (A-5)
Eqn. (19) now follows by taking expectations and using the assumption that the data errors have zero mean.
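In generic regularized least-squares notation (illustrative symbols, not the paper's, since the original equations did not survive reproduction), the skeleton of this argument reads:

```latex
% Normal equations of the regularized estimate and of the true solution:
(A^{\top}A + \lambda D^{\top}D)\,\hat{x} = A^{\top}b ,
\qquad
A^{\top}A\, x_{\mathrm{tr}} = A^{\top} b_{\mathrm{tr}} ,
\qquad b = b_{\mathrm{tr}} + \delta .
% Subtracting the two and using the second equation:
(A^{\top}A + \lambda D^{\top}D)\,(\hat{x} - x_{\mathrm{tr}})
  = A^{\top}\delta - \lambda D^{\top}D\, x_{\mathrm{tr}} .
% Invertibility and \mathbb{E}[\delta] = 0 then give the bias:
\mathbb{E}[\hat{x}] - x_{\mathrm{tr}}
  = -\lambda\,(A^{\top}A + \lambda D^{\top}D)^{-1} D^{\top}D\, x_{\mathrm{tr}} .
```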