
To my Mother and Father
Abstract
A stochastic groundwater pollution model called PAGAP (Probability Analysis of
Groundwater And Pollution) has been developed. PAGAP combines the finite
element method (FEM) for the solution of 2D steady state groundwater flow with
pollution transport, and the first order reliability method (FORM), to make reliability
and probability estimates of hazardous events taking place as a result of pollution
transport in aquatic systems. Element hydraulic conductivities, longitudinal
dispersivities, transverse dispersivities and decay constants as well as constant head
nodes may be treated as random variables in which case second order statistical
information must be assigned to them. PAGAP can assist in formulating the
correlation matrix with an algorithm that builds it from one of five available
correlation functions and the correlation scales in the X and Y directions. Three types
of problems can be analyzed. For a given point
in the aquifer one can estimate the following probabilities: 1. The probability of the
concentration at a specified time exceeding specific safety values. 2. The probability of
the maximum concentration observed in a specified period of time exceeding
predefined safety values. 3. The probability of the residence time for a specified
concentration exceeding specific safety values. The output from PAGAP, besides
reliability and probability estimates, includes diverse sensitivity estimates showing the
influence of the input statistical values and relative importance of each parameter on the
reliability and probability estimates obtained. PAGAP is a tool that should be used
mainly for assessment purposes and risk analysis tasks. It can be used to assess the
effectiveness of a given remediation technique, or study the effects of the parameter
distribution for a given problem (sensitivity study).
Acknowledgments
The financial support of the Norwegian Research Council (NFR) is gratefully
acknowledged. The financial support received from the Gardermoen project and the
University of Oslo for completing the final stages of my studies is also gratefully
acknowledged. I would also like to thank the Norwegian Geotechnical Institute
(NGI) for making all of its resources available to me. Thank you very much.
To simply acknowledge the help I have received from my supervisor, Prof. Kaare
Høeg, for his support, encouragement and motivation, is not enough. Thank you for
everything. I really appreciate everything you have done for me, Kaare.
I would like to thank Prof. Per Aagaard for always being interested, always being
ready to help, and always smiling. Thank you very much, Per.
I would also like to thank Dr. Farrokh Nadim for always having time for me, and
always trying to answer my questions, even the silly ones. Thank you Farrokh.
There are many others who deserve thanks as well.
Rajinder Bhasin, my good friend and fellow student, who managed to put up with me
for the last 4 years. My good personal friend Costantinos Baharias for his motivation
and spirit. And my son Kristoffer for always keeping me on track.
Thank you all.
Table of Contents
ABSTRACT iv
ACKNOWLEDGMENTS vi
TABLE OF CONTENTS vii
LIST OF FIGURES xiii
LIST OF TABLES xxiii
1. INTRODUCTION 1
1.1. GROUNDWATER PROBLEMS 1
1.2. SCOPE OF WORK FOR THIS THESIS 5
2. REVIEW OF THEORETICAL FORMULATIONS 9
2.1. INTRODUCTION 9
2.2. GROUNDWATER FLOW 9
2.2.1. Aquifer Flow Parameters and their Characteristics 15
2.3. POLLUTANT TRANSPORT 24
2.3.1. The Averaging Process 35
2.3.2. The Dispersion Tensor and Dispersivity 37
3. THE FINITE ELEMENT MODEL 45
3.1. INTRODUCTION 45
3.2. GROUNDWATER FLOW 48
3.3. POLLUTANT TRANSPORT 54
3.4. TRIANGULAR ELEMENTS 59
3.5. ISOPARAMETRIC QUADRILATERAL ELEMENTS 64
3.6. THE FINITE ELEMENT METHOD IN CONVECTION DOMINATED PROBLEMS 71
3.7. VERIFICATION 82
3.8.1. Constant Source Verification 83
3.9.2. Injection Source Verification 91
4. REVIEW OF STOCHASTIC METHODS 101
4.1. INTRODUCTION 101
4.2. MONTE CARLO SIMULATION METHOD 103
4.2.1. The Hit and Miss Monte Carlo Method 103
4.2.2. The Sample-Mean Monte Carlo Method 106
4.2.3. Comments on Monte Carlo Methods 107
4.2.4. Monte Carlo Methods in Hydrogeology 108
4.3. PERTURBATION METHODS 110
4.3.1. Perturbation Example 110
4.3.2. Perturbation Methods in Hydrogeology 113
4.4. RELIABILITY METHODS 118
4.4.1. First-Order Second-Moment Reliability Methods 118
4.4.2. The Hasofer and Lind Reliability Index 124
4.4.3. The First Order Reliability Method 129
4.4.4. Second Order Reliability Method 135
4.4.5. Finding the Design Point 137
4.4.6. Sensitivity Information 143
4.4.7. Reliability Methods in Hydrogeology 145
5. THE PERFORMANCE FUNCTION 149
5.1. INTRODUCTION 149
5.2. DIRECT EVALUATION 152
5.2.1. Evaluation of the sensitivity vector with respect to the element dispersivities 154
5.2.2. Evaluation of the sensitivity vector with respect to the element velocities 155
5.2.3. Evaluation of the sensitivity vector with respect to the element hydraulic conductivities 156
5.2.4. Evaluation of the sensitivity vector with respect to constant hydraulic head nodes 161
5.3. THE DIFFERENCE APPROACH 165
5.4. COMPARING THE DIRECT AND DIFFERENCE APPROACHES 166
5.4.1. Problem definition 167
5.4.2. Random longitudinal dispersivities 168
5.4.3. Hydraulic conductivities 174
5.4.4. Conclusions 177
6. INPUT DATA FOR PROBABILISTIC ANALYSIS 179
6.1. INTRODUCTION 179
6.2. SPATIAL CORRELATION 180
6.3. GEOSTATISTICAL APPROACH 185
6.3.1. Kriging with Second-Order Stationarity 186
6.3.2. Kriging in the Intrinsic Case 189
6.3.3. Incorporation of Averaging Effects 192
6.4. THE GOTTSCHALK APPROACH 193
6.5. THE VANMARCKE APPROACH 195
6.5.1. The Variance Reduction Function 195
6.5.2. The Scale of Fluctuation 199
6.5.3. Covariance of Local Averages 200
6.5.4. Two Dimensional Problems 202
7. OTHER PAGAP CHARACTERISTICS 207
7.1. INTRODUCTION 207
7.2. THE PROGRAM CODE 207
7.3. SOLUTION ALGORITHMS 210
8. THEORETICAL CASE STUDIES AND PAGAP VERIFICATION 215
8.1. INTRODUCTION 215
8.2. FIRST CASE 218
8.2.1. Case 1a 223
8.2.2. Case 1b 258
8.3. SECOND CASE 278
8.3.1. Case 2a 280
8.3.2. Case 2b 294
8.4. CONCLUSIONS 303
9. THE GARDERMOEN CASE STUDY 305
9.1. INTRODUCTION 305
9.2. 2D VERTICAL CROSS SECTION MODEL 310
9.2.1. Global random dispersivities with deterministic decay 313
9.2.2. Global random dispersivities with random field decay 318
9.2.3. Random field dispersivities 321
9.3. 2D HORIZONTAL MODEL ANALYSIS 334
9.3.1. 2D Horizontal model with homogenized hydraulic conductivities 337
9.3.2. 2D Horizontal model with specific aquifer thickness 357
9.4. CONCLUSIONS 380
10. SUMMARY, CONCLUSIONS AND RECOMMENDATIONS FOR FUTURE WORK 387
10.1. SUMMARY AND CONCLUSIONS 387
10.2. RECOMMENDATIONS FOR FUTURE WORK 392
REFERENCES 395
APPENDIX A 405
PROBABILITY ESTIMATION 405
APPENDIX B 409
CHOLESKI FACTORISATION 409
APPENDIX C 411
CALIBRATION - INVERSE GROUNDWATER MODELING 411
C.1. Numerical Parameter Estimation 414
C.2. Reliability Approach 418
C.3. Minimization Approach 424
List of Figures
Figure 2.2.1. Control volume of a confined aquifer for the derivation of a horizontal two-dimensional flow equation (based on Kinzelbach, 1986). 12
Figure 2.2.2. The control volume in a phreatic aquifer for the derivation of horizontal two-dimensional flow. 13
Figure 2.2.3. Schematic presentation of a leaky aquifer. 14
Figure 2.3.1. Schematic description of the effects of convection, dispersion, adsorption, and chemical degradation of pollutant transport (based on Kinzelbach, 1986). 27
Figure 2.3.2. Nomenclature for the pollutant mass balance over a control volume extending from the bottom to the top of an aquifer (based on Kinzelbach, 1986). 28
Figure 2.3.3. Macrodispersion process in a layered aquifer (based on Kinzelbach, 1986). 39
Figure 2.3.4. A. Longitudinal dispersivity as dependent on an overall displacement scale for different types of observations and media. B. Field longitudinal dispersivity data classified according to reliability (both figures from Gelhar, 1993). 43
Figure 3.4.1. The unit functions for linear triangular elements. 60
Figure 3.4.2. The basis function N_j = N_j^1 + N_j^2 + N_j^3 + N_j^4 + N_j^5 + N_j^6. 60
Figure 3.4.3. Element illustrating area coordinates. (Based on Pinder and Gray, 1977) 61
Figure 3.5.1. Quadrilateral element in Cartesian global coordinates and transformed local coordinates. 65
Figure 3.6.1. Amplitude ratios and relative celerities for Galerkin's method (from Zienkiewicz and Taylor, 1991). 77
Figure 3.6.2. Amplitude ratios and relative celerities for Petrov-Galerkin's method (from Zienkiewicz and Taylor, 1991). 78
Figure 3.6.3. Velocity versus time step for different dispersivity values. 80
Figure 3.6.4. Velocity versus element size for different dispersivity values. 80
Figure 3.8.1. Analytical solutions obtained with α_L = 10m 84
Figure 3.8.2. Differences between numerical and analytical results after 50 days for α_L = 10m 85
Figure 3.8.3. Differences between numerical and analytical results after 70 days for α_L = 10m 86
Figure 3.8.4. Analytical solutions obtained with α_L = 1m 87
Figure 3.8.5. Differences between numerical and analytical results after 50 days for α_L = 1m 88
Figure 3.8.6. Differences between numerical and analytical results after 70 days for α_L = 1m 88
Figure 3.8.7. Analytical solutions obtained with α_L = 0.1m 90
Figure 3.8.8. Differences between numerical and analytical results after 50 days for α_L = 0.1m 90
Figure 3.8.9. Differences between numerical and analytical results after 70 days for α_L = 0.1m 91
Figure 3.9.10. Analytical results for α_L = 10m 93
Figure 3.9.11. Differences between numerical and analytical results after 20 days for α_L = 10m 93
Figure 3.9.12. Differences between numerical and analytical results after 50 days for α_L = 10m 94
Figure 3.9.13. Differences between numerical and analytical results after 70 days for α_L = 10m 95
Figure 3.9.14. Analytical results for α_L = 1m 95
Figure 3.9.15. Differences between numerical and analytical results after 20 days for α_L = 1m 96
Figure 3.9.16. Differences between numerical and analytical results after 50 days for α_L = 1m 96
Figure 3.9.17. Differences between numerical and analytical results after 70 days for α_L = 1m 97
Figure 3.9.18. Analytical results for α_L = 0.1m 97
Figure 3.9.19. Differences between numerical and analytical results after 20 days for α_L = 0.1m 98
Figure 3.9.20. Differences between numerical and analytical results after 50 days for α_L = 0.1m 99
Figure 3.9.21. Differences between numerical and analytical results after 70 days for α_L = 0.1m 99
Figure 3.9.22. Differences between numerical and analytical results for α_L = 0.1m 100
Figure 4.2.1. Hit and Miss Monte Carlo method. 104
Figure 4.4.1. The failure function and its characteristics. 119
Figure 4.4.2. Illustration of a one dimensional linear performance function. 123
Figure 4.4.3. A one dimensional non-linear failure function. A. Mean value approach. B. Hasofer and Lind approach. 125
Figure 4.4.4. Illustration of the rotational symmetry property of standard normal space (based on Der Kiureghian and Liu, 1986). 130
Figure 4.4.5. The fitting point paraboloid (based on Der Kiureghian et al., 1987). 136
Figure 4.4.6. Theoretical case study showing a situation where the RF algorithm will not converge (based on Der Kiureghian et al., 1987). 139
Figure 4.4.7. Modified RF algorithm 142
Figure 5.4.1. FEM grid and boundary conditions used for the comparison case study. 167
Figure 5.4.2. Time step iterations versus time per FORM iteration for case 3. 170
Figure 5.4.3. Most likely longitudinal dispersivity values at the design point. 170
Figure 5.4.4. Absolute differences between most likely dispersivity values obtained from the direct approach and the difference approach for the same time step. 172
Figure 6.2.1. Illustration of hydraulic conductivity (K), one dimensional random field and spatial averaging effect. 184
Figure 6.3.1. Covariance and semi-variogram (De Marsily, 1986). 190
Figure 6.5.1. Definitions of the distances D used for finding the correlation coefficient between D_a and D_b. (Based on Vanmarcke, 1983) 201
Figure 6.5.2. Definitions of D distances for finding the correlation coefficient between the two shaded areas. 203
Figure 7.3.1. LU-factorization algorithm 211
Figure 7.3.2. Solution algorithm 212
Figure 7.3.3. LU factorization of a (p,q) banded matrix. 213
Figure 7.3.4. Solution algorithm for a banded matrix. 213
Figure 7.3.5. A (2,2) banded matrix. 213
Figure 8.2.1. Finite Element Mesh with Boundary conditions and Target nodes. 219
Figure 8.2.2. Mean Value Steady State Groundwater Flow Solution. 222
Figure 8.2.3. Mean Value Solution Obtained for Pollution Transport after 100 Days. 222
Figure 8.2.4. Design Point Groundwater Solution giving c(30,40,100) = 0.2 226
Figure 8.2.5. Design Point Pollution Solution giving c(30,40,100) = 0.2 226
Figure 8.2.6. Design Point Longitudinal Dispersivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. C. No correlation. 228
Figure 8.2.7. Design Point Transverse Dispersivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. C. No correlation. 232
Figure 8.2.8. Design Point Hydraulic Conductivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. C. No correlation. 234
Figure 8.2.9. Design point Lower Boundary Constant Head values and their Sensitivities. 235
Figure 8.2.10. Design point Upper Boundary Constant Head values and their Sensitivities. 235
Figure 8.2.11. Sensitivities of Longitudinal Dispersivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. C. No correlation. 239
Figure 8.2.12. Sensitivities of Transverse Dispersivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. C. No correlation. 241
Figure 8.2.13. Sensitivities of Hydraulic Conductivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. C. No correlation. 243
Figure 8.2.14. Mean Value Sensitivities of Longitudinal Dispersivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. 247
Figure 8.2.15. Mean Value Sensitivities of Transverse Dispersivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. 248
Figure 8.2.16. Mean Value Sensitivities of Hydraulic Conductivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. 249
Figure 8.2.17. Lower boundary constant head sensitivities. 250
Figure 8.2.18. Upper boundary constant head sensitivities. 250
Figure 8.2.19. Standard Deviation Sensitivities of Longitudinal Dispersivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. 251
Figure 8.2.20. Standard Deviation Sensitivities of Transverse Dispersivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. 252
Figure 8.2.21. Standard Deviation Sensitivities of Hydraulic Conductivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. 253
Figure 8.2.22. Design Point Groundwater Solution giving c(40,40,100) = 0.2 260
Figure 8.2.23. Design Point Pollution Solution giving c(40,40,100) = 0.2 260
Figure 8.2.24. Design Point Longitudinal Dispersivity values for case 1b. A. Correlation scale 20m. B. Correlation scale 10m. 261
Figure 8.2.25. Design Point Transverse Dispersivity values for case 1b. A. Correlation scale 20m. B. Correlation scale 10m. 262
Figure 8.2.26. Design Point Hydraulic Conductivity values for case 1b. A. Correlation scale 20m. B. Correlation scale 10m. 263
Figure 8.2.27. Design point Lower Boundary Constant Head values and their Sensitivities for case 1b. 264
Figure 8.2.28. Design point Upper Boundary Constant Head values and their Sensitivities for case 1b. 264
Figure 8.2.29. Sensitivities of Longitudinal Dispersivities for case 1b. A. Correlation scale 20m. B. Correlation scale 10m. 265
Figure 8.2.30. Sensitivities of Transverse Dispersivities for case 1b. A. Correlation scale 20m. B. Correlation scale 10m. 266
Figure 8.2.31. Sensitivities of Hydraulic Conductivities for case 1b. A. Correlation scale 20m. B. Correlation scale 10m. 267
Figure 8.2.32. Lower boundary constant head sensitivities 269
Figure 8.2.33. Upper boundary constant head sensitivities 269
Figure 8.2.34. Mean Value Sensitivities of Longitudinal Dispersivities for case 1b. A. Correlation scale 20m. B. Correlation scale 10m. 270
Figure 8.2.35. Mean Value Sensitivities of Transverse Dispersivities for case 1b. A. Correlation scale 20m. B. Correlation scale 10m. 271
Figure 8.2.36. Mean Value Sensitivities of Hydraulic Conductivities for case 1b. A. Correlation scale 20m. B. Correlation scale 10m. 272
Figure 8.2.37. Standard Deviation Sensitivities of Longitudinal Dispersivities for case 1b. A. Correlation scale 20m. B. Correlation scale 10m. 273
Figure 8.2.38. Standard Deviation Sensitivities of Transverse Dispersivities for case 1b. A. Correlation scale 20m. B. Correlation scale 10m. 274
Figure 8.2.39. Standard Deviation Sensitivities of Hydraulic Conductivities for case 1b. A. Correlation scale 20m. B. Correlation scale 10m. 275
Figure 8.3.1. Finite Element Mesh and Boundary conditions for case 2. 280
Figure 8.3.2. Groundwater solution at the design point for case 2a. 283
Figure 8.3.3. Mean value concentration distribution. 284
Figure 8.3.4. Concentration distribution at the design point for case 2a. 284
Figure 8.3.5. A. Lumped mean value pollution. B. Consistent mean value pollution. C. Lumped design point pollution with a correlation scale of 10m. D. Consistent design point pollution with a correlation scale of 10m. 285
Figure 8.3.6. Design Point Hydraulic Conductivities for case 2a. A. Correlation scale 20m. B. Correlation scale 10m. 286
Figure 8.3.7. Sensitivities of Hydraulic Conductivities for case 2a. A. Correlation scale 20m. B. Correlation scale 10m. 291
Figure 8.3.8. Sensitivities with respect to Mean Values of Hydraulic Conductivities for case 2a. A. Correlation scale 20m. B. Correlation scale 10m. 292
Figure 8.3.9. Sensitivities with respect to Standard Deviations of Hydraulic Conductivities with correlation scale 20m for case 2a. 293
Figure 8.3.10. Design Point Hydraulic Conductivities for case 2b. A. Correlation scale 20m. B. Correlation scale 10m. 296
Figure 8.3.11. Sensitivities of Hydraulic Conductivities for case 2b. A. Correlation scale 20m. B. Correlation scale 10m. 298
Figure 8.3.12. Mean Value Sensitivities of Hydraulic Conductivities for case 2b. A. Correlation scale 20m. B. Correlation scale 10m. 299
Figure 8.3.13. Standard Deviation Sensitivities of Hydraulic Conductivities for case 2b. A. Correlation scale 20m. B. Correlation scale 10m. 302
Figure 9.1.1. Simplified quaternary map of the Gardermoen area. The rectangle denotes the modeling area. (Modified after Østmo, 1976) 306
Figure 9.1.2. Hydrogeological map of the modeling area. The water divide (green), flowlines (blue), and the 182m contour line define the 2D horizontal modeling area. The vertical profile line (red) shows the 2D vertical profile position. The stratigraphic profile line shows the direction of the profile of figure 9.1.3. (Modified after Østmo, 1976) 309
Figure 9.1.3. Hydrostratigraphy of the Gardermoen aquifer and assumed boundary between layers 2 and 3. (Based on Tuttle et al., 1994) 309
Figure 9.2.1. 2D vertical cross section FEM grid. 310
Figure 9.2.2. Concentration distribution. Deterministic solution without decay. 315
Figure 9.2.3. Concentration distribution. Deterministic solution with decay elements. 315
Figure 9.2.4. Concentration distribution. Design point solution with no decay. 316
Figure 9.2.5. Concentration distribution. Design point solution with decay. 316
Figure 9.2.6. Design point decay constants and their γ-sensitivity values 319
Figure 9.2.7. Normalized mean value and standard deviation sensitivities of the decays 319
Figure 9.2.8. Design point uncorrelated decay constants and their γ-sensitivity values 320
Figure 9.2.9. Normalized mean value and standard deviation sensitivities of the uncorrelated decays 320
Figure 9.2.10. Design point longitudinal dispersivities distribution 323
Figure 9.2.11. Gamma sensitivity distribution 323
Figure 9.2.12. Mean value sensitivity distribution 324
Figure 9.2.13. Standard deviation sensitivity distribution 324
Figure 9.2.14. Concentration distribution. Design point solution 325
Figure 9.2.15. Design point longitudinal dispersivity distribution 329
Figure 9.2.16. Design point transverse dispersivity distribution 329
Figure 9.2.17. Longitudinal dispersivity γ-sensitivity distribution. 330
Figure 9.2.18. Transverse dispersivity γ-sensitivity distribution 330
Figure 9.2.19. Mean value longitudinal dispersivity sensitivity distribution 331
Figure 9.2.20. Mean value transverse dispersivity sensitivity distribution 331
Figure 9.2.21. Standard deviation longitudinal dispersivity sensitivity distribution 332
Figure 9.2.22. Standard deviation transverse dispersivity sensitivity distribution 332
Figure 9.2.23. Concentration distribution. Design point solution 333
Figure 9.3.1. 2D horizontal FEM grid. 335
Figure 9.3.2. Transmissivity distribution as obtained from the parameter estimation model. 336
Figure 9.3.3. Mean value solution after 3000 days. 340
Figure 9.3.4. A. Design point solution for L target node with α_L as a global random variable. B. Design point solution for R target node with α_L as a global random variable. C. Design point solution for L target node with α_L + α_T as global random variables. D. Design point solution for R target node with α_L + α_T as global random variables. 341
Figure 9.3.5. Reliability index and probability as a function of source duration for the L target node. 344
Figure 9.3.6. Design point longitudinal dispersivities and γ-sensitivities versus source duration for the L target node. 345
Figure 9.3.7. Design point transverse dispersivities and γ-sensitivities versus source duration for the L target node. 345
Figure 9.3.8. Reliability index and probability as a function of source duration for the R target node. 348
Figure 9.3.9. Design point longitudinal dispersivities and γ-sensitivities versus source duration for the R target node. 349
Figure 9.3.10. Design point transverse dispersivities and γ-sensitivities versus source duration for the R target node. 349
Figure 9.3.11. Design point longitudinal dispersivity distribution. 353
Figure 9.3.12. Design point transverse dispersivity distribution. 353
Figure 9.3.13. Longitudinal dispersivity γ-sensitivities. 354
Figure 9.3.14. Transverse dispersivity γ-sensitivities. 354
Figure 9.3.15. Longitudinal dispersivity mean value sensitivities. 355
Figure 9.3.16. Transverse dispersivity mean value sensitivities. 355
Figure 9.3.17. Longitudinal dispersivity standard deviation sensitivities. 356
Figure 9.3.18. Transverse dispersivity standard deviation sensitivities. 356
Figure 9.3.19. A. Mean value concentration distribution after 3000 days for aquifer with a specified thickness. B. Mean value concentration distribution after 3000 days for aquifer with homogeneous hydraulic conductivities. 359
Figure 9.3.20. A. Design point solution for L target node with α_L as a global random variable. B. Design point solution for R target node with α_L as a global random variable. C. Design point solution for L target node with α_L + α_T as global random variables. D. Design point solution for R target node with α_L + α_T as global random variables. 360
Figure 9.3.21. Reliability index and probability as a function of source duration for the L target node. 362
Figure 9.3.22. Design point longitudinal dispersivities and γ-sensitivities versus source duration for the L target node. 363
Figure 9.3.23. Design point transverse dispersivities and γ-sensitivities versus source duration for the L target node. 363
Figure 9.3.24. Design point solutions. A. Duration time 10 days. B. Duration time 120 days. C. Duration time 240 days. D. Duration time 360 days. 365
Figure 9.3.25. Design point solutions. A. Duration time 420 days. B. Duration time 440 days. C. Duration time 460 days. D. Duration time 480 days. 366
Figure 9.3.26. Reliability index and probability as a function of source duration for the R target node. 368
Figure 9.3.27. Design point longitudinal dispersivities and γ-sensitivities versus source duration for the R target node. 369
Figure 9.3.28. Design point transverse dispersivities and γ-sensitivities versus source duration for the R target node. 369
Figure 9.3.29. Reliability index and probability as a function of correlation coefficient between α_L and α_T for the R target node with a source duration of 120 days. 372
Figure 9.3.30. Design point longitudinal dispersivities and γ-sensitivities versus correlation coefficient between α_L and α_T for the R target node with a source duration of 120 days. 372
Figure 9.3.31. Design point transverse dispersivities and γ-sensitivities versus correlation coefficient between α_L and α_T for the R target node with a source duration of 120 days. 373
Figure 9.3.32. α_T/α_L ratio versus correlation coefficient between α_L and α_T for the R target node with a source duration of 120 days. 373
Figure 9.3.33. Design point longitudinal dispersivity distribution. 376
Figure 9.3.34. Design point transverse dispersivity distribution. 376
Figure 9.3.35. Longitudinal dispersivity γ-sensitivities. 377
Figure 9.3.36. Transverse dispersivity γ-sensitivities. 377
Figure 9.3.37. Longitudinal dispersivity mean value sensitivities. 378
Figure 9.3.38. Transverse dispersivity mean value sensitivities. 378
Figure 9.3.39. Longitudinal dispersivity standard deviation sensitivities. 379
Figure 9.3.40. Transverse dispersivity standard deviation sensitivities. 379
Figure C.2.1. Simplified FEM grid used for testing purposes. 420
Figure C.2.2. Initial transmissivities 200 m²/d. 421
Figure C.2.3. Initial transmissivities 250 m²/d. 421
Figure C.2.4. Initial transmissivities 300 m²/d. 422
Figure C.2.5. Initial transmissivities 200 m²/d, with mean values and standard deviations equal to 300 m²/d. 423
Figure C.3.1. Minimized simplified Gardermoen case after 304 iterations 424
Figure C.3.2. Minimized simplified Gardermoen case after 975 iterations 425
Figure C.3.3. Minimization after removal of precipitation input at the first FEM grid element row of the simplified Gardermoen case 426
Figure C.3.4. Minimization after removal of head conditions on the water divide. 428
Figure C.3.5. Detailed parameter estimation distribution based on the simple FEM grid 430
Figure C.3.6. Detailed parameter estimation distribution based on the actual FEM grid 430
List of Tables
Table 2.2.1. Aquifer Parameters 16
Table 2.2.2. Log-Hydraulic conductivity statistics from Hoeksema and Kitanidis (1985) 22
Table 2.2.3. Natural logarithm Hydraulic conductivity statistics from Gelhar (1993) 23
Table 2.3.1. Dispersivities from two well tests from Anderson (1979) 41
Table 2.3.2. Regional Dispersivities from Anderson (1979) 42
Table 3.5.1. Truncated values of Gauss points and weight factors. 70
Table 5.4.1. Effect of Time step on deterministic solution obtained at the target node. 168
Table 5.4.2. Time step effect on FORM results with dispersivity variables using the direct approach 169
Table 5.4.3. Time step effect on FORM results with dispersivity variables using the difference approach with a perturbation of 10⁻⁶ 169
Table 5.4.4. Effect of perturbation on FORM results 173
Table 5.4.5. Time step effect on FORM results with hydraulic conductivity variables using the direct approach. 175
Table 5.4.6. Time step effect on FORM results with hydraulic conductivity variables using the difference approach with a perturbation of 10⁻⁶. 175
Table 5.4.7. Time step effect on the most likely hydraulic conductivity values and their γ-sensitivities using the direct approach 176
Table 5.4.8. Maximum differences between values obtained by the direct approach and values obtained by the difference approach with a perturbation of 10⁻⁶. 176
Table 8.2.1. Deterministic results 221
Table 8.2.2. Reliability Results for Case 1a. Random Field Dispersivity 225
Table 8.2.3. Reliability Results for Case 1a. Random Variable Dispersivity 255
Table 8.2.4. Reliability Results for Case 1b. Random Field Dispersivity 259
Table 8.3.1. Results of Reliability Analysis for case 2a. 281
Table 8.3.2. Results of Reliability Analysis for case 2b. 295
Table 9.2.1. Reliability results with global variables - Vertical Section 314
Table 9.2.2. Sensitivity results with global variables - 2D Vertical Section 314
Table 9.2.3. Reliability Results - Vertical Section 325
Table 9.3.4. Global random variable results 338
Table 9.3.5. Global random variable results 358
Table A.1. Gauss points and weight factors for n=6. 406
1. Introduction
1.1. Groundwater Problems
Many European countries depend heavily on their groundwater resources to satisfy
their national water needs, be it for domestic, agricultural or industrial use. The need
for increasingly large quantities of groundwater while at the same time maintaining or
improving its quality is a problem which cannot easily be solved. Groundwater re-
sources must be managed with great care in order to avoid severe environmental dam-
age and must also be protected against polluting agents which can reduce their quality.
The need for careful management and protection of groundwater has resulted in the
need for more accuracy when solving complex hydrogeological problems. Problems
like overpumping of aquifers and polluted groundwater occur with increasing fre-
quency. To satisfy the needs of society, we must be able to predict the reactions of aq-
uifers to management strategies and the effects that pollution agents have on the
groundwater quality. This has led to the extensive use of groundwater and pollution
transport modeling.
The simulation of groundwater flow and pollution transport with modeling software is
not always sufficiently accurate. There are of course errors introduced by the numeri-
cal methods used for the development of such software and also some round-off errors
introduced by the computers, but primarily the inaccuracies are due to problems with
obtaining reliable input for the models.
The input to groundwater flow and pollution transport models, tries to describe as accu-
rately as possible all aspects of the investigated aquifer. Among other things, we need
to describe the geometrical shape of the aquifer, the parameter distribution over the
aquifer, fluxes and other conditions at the aquifer boundaries, external input or output
fluxes, pollution sources, the initial aquifer hydraulic head distribution, and the initial
pollution concentration distribution. Considering the complicated nature of aquifers,
and the fact that we do not have direct access to them, it becomes obvious that this in-
formation is not easily obtained. Actually, no matter how much effort is put into ob-
taining the input information, it will always give us only an approximate description of
the aquifer. In order to obtain a reliable numerical model using this approximate input,
we initiate model verification and calibration tests. We compare the reactions of the
aquifer to those of the numerical model and thereafter adjust the input information in
order to make the numerical model reaction similar to that of the aquifer. Unfortu-
nately, even a verified model will not always give reliable results, because the condi-
tions under which the verification took place are not constant. Besides the parameter
distribution which can be considered to be constant, all other input information can
change with time.
So the question remains, how to obtain reliable results from a numerical model. The
only option is to update the model as the conditions change, through reverification and
extensive calibration procedures. This is also a very costly and very time consuming
approach. It is therefore necessary to evaluate the necessity of an accurate result for a
given problem. In many types of groundwater management problems
and pollution remediation schemes we are basically interested in assessing the effec-
tiveness of the methods used. The assessment is basically done by comparing the nu-
merical results with predefined safety values. If the safety values have not been ex-
ceeded, then the methods used are effective. The larger the difference between the nu-
merical values and the safety values the more effective is the method used. If the nu-
merical results are reliable, such problems are solved easily. However, if we accept the
fact that the results will always be unreliable to some degree or another, then the reli-
ability of the results becomes a factor that must be taken into consideration for the
evaluation of the effectiveness of the management or remediation methods in question.
The means for estimating the reliability of the numerical results are provided by prob-
ability theory. Probability theory requires that the input information be described with
the use of random variables, and also requires knowledge of the joint probability den-
sity function (PDF) which gives the probability density for a given set of values for the
random variables. However, since the joint PDF is unknown, direct evaluation of the
probability and therefore the reliability can not be obtained. It is necessary to employ
alternative methods, which make use of reasonable assumptions, in order to approxi-
mate the probability content.
The Monte Carlo Simulation method (MCS) was the first method to be used for this
purpose. Using the statistical information assigned to the random variables, the MCS
method generates a large number of realizations for the random variables which then
are used as a basis for approximating the joint PDF with the use of the flow and pollu-
tion transport equations. This approach is fairly straightforward and is considered to
give very good results. The disadvantages of this method are related to the large num-
ber of realizations required, which are computationally very demanding. Many im-
provements have been made to the MCS method in order to reduce the number of reali-
zations required and thus make the algorithm more effective. In spite of the method's
dependability, many scientists consider the Monte Carlo Simulation method computa-
tionally very demanding and avoid using it if other methods can provide reasonably
good approximations. However, if everything else fails, MCS can always be relied on
to solve the problem. In hydrogeological problems the MCS method has an additional
disadvantage. Hydrogeological parameters are considered to be random fields and
autocorrelation information is used to describe them. In order to generate a random
field realization consistent with the autocorrelation information, the joint PDF of the
parameter random field must be known. Since the random field PDF is not known, one
must assume a PDF which can produce the required realization. The choice of PDF will
obviously affect the results obtained.
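As a minimal sketch of the simulation loop (in Python; the lognormal marginal for K, the aquifer numbers, and the simple travel-time model standing in for the flow and transport equations are all illustrative assumptions, not PAGAP's formulation):

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed lognormal marginal for hydraulic conductivity K [m/s]; the
    # median of 1e-4 m/s (sand) and sigma(ln K) = 1.0 are illustrative only.
    K = rng.lognormal(mean=np.log(1e-4), sigma=1.0, size=100_000)

    gradient, porosity, distance = 0.01, 0.3, 100.0  # hypothetical aquifer data
    v = K * gradient / porosity                      # seepage velocity [m/s]
    t_arrival = distance / v / 86400.0               # travel time [days]

    # Hazardous event: the pollutant reaches the target within one year.
    p = (t_arrival < 365.0).mean()
    se = np.sqrt(p * (1.0 - p) / K.size)             # sampling standard error
    print(f"P(arrival < 1 year) = {p:.3f} +/- {se:.3f}")

Each realization requires one evaluation of the deterministic model, which is why the cost grows quickly when that model is a full finite element solution.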
Another approach which can be employed for the approximation of probability contents
and reliability estimates is the perturbation approach. In these methods, the statistical
moments for chosen parameters of interest, are obtained by expanding the governing
differential equations. These solutions often give valuable insights into the behavior of
the parameters of interest and do not require large amounts of computer time. The
perturbation methods are developing rapidly and the complexity of problems that can
be solved with their use is considerable. It is likely that these methods will soon be-
come a regular part of university hydrogeological courses.
In contrast to the MCS and perturbation approaches, the first and second order reliabil-
ity methods are designed to evaluate a direct estimate of the probability of exceeding a
certain safety value. These methods can incorporate any amount of probabilistic in-
formation, including the first and second order statistical moments, marginal distribu-
tion and the full joint probability distribution. The reliability methods use analytical or
numerical deterministic equations for problem calculations - as does MCS - and the
complexity of problems that can be analyzed is restricted only by the limits of the de-
terministic equations used. Any problem that can be solved with MCS can be solved
with reliability methods as well. The reliability methods have many of the disadvan-
tages of the MCS methods. They are usually faster than MCS, but not always accu-
rate. In problems with low probabilities the reliability methods are faster and give very
good results. An advantage over the Monte Carlo Simulation methods is that reliability
methods provide sensitivities of the solution to the various input parameters and their
statistical moments, as an integral part of the solution.
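For orientation, the quantity these methods deliver can be stated compactly: once the design point and the corresponding reliability index β have been found (see chapter 4), the first order estimate of the exceedance probability is

$P_f \approx \Phi(-\beta)$

where Φ is the standard normal cumulative distribution function; the sensitivities mentioned above are byproducts of the search for β.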
1.2. Scope of Work for this Thesis
The main purpose of this project was to develop a probabilistic finite element numeri-
cal model for the analysis of groundwater pollution problems. For this purpose two
computer codes were developed. (1) A finite element method (FEM) code was devel-
oped for the approximate solution of the equations of groundwater flow and pollution
transport, and (2) a first order reliability method (FORM) code, which can perform a
probabilistic analysis of any given analytical or numerical equation. The two codes
were connected into one model which has been named PAGAP (Probability Analysis
of Groundwater And Pollution) and is the main product of this project.
PAGAP can be used for the analysis of three types of probability problems: 1. The
probability that the concentration at a specific place and time exceeds a predefined con-
centration value. 2. The probability that the maximum concentration at a specific place
exceeds a predefined concentration value. 3. The probability that the residence time at
a specific place exceeds a predefined period of time.
In the course of PAGAP's development, many numerical methods had to be employed.
The finite element method had to be used for the numerical solution of steady state
groundwater flow and transient pollution transport in two dimensions. For this purpose
many textbooks have been consulted. Zienkiewicz and Morgan (1983), Johnson (1987)
and Zienkiewicz and Taylor (1989 and 1991), present a detailed description of the fi-
nite element method from a mathematical and practical point of view, and show how it
can be applied to all types of equations and problems. These textbooks however, lack
the hydrogeological perspective which is needed to solve practical groundwater and
pollution problems. Textbooks on hydrogeological modeling with the use of numerical
methods which have been consulted, include Pinder and Gray (1977), Kinzelbach
(1986) and Bear and Verruijt (1987). The largest part of the work done in this project
has been based on the descriptions given by Kinzelbach(1986) with some additional
information taken from Pinder and Gray (1977) and Zienkiewicz and Taylor (1989 and
1991).
Besides the above textbooks, many others exist on the subject of numerical methods in
hydrology or hydrogeology. However, very few exist on the subject of probability
analysis with the use of reliability methods. Madsen et al. (1986) presents the reliabil-
ity methods and uses them for the solution of structural safety problems. Detailed de-
scriptions of the first and second order reliability methods can be found in Cawlfield
and Sitar (1987), Jang et al. (1990) and Dimakis (1992). Many papers have been writ-
ten on this subject. Some of the papers which have been used in the development of
PAGAP include Der Kiureghian and Liu (1986), Sitar et al. (1987), Gollwitzer (1988),
Der Kiureghian et al. (1988 and 1989), and Jang et al. (1994). The development of
PAGAP is quite similar to the development of CALREL-TRANS, which is a reliability
method model presented by Jang et al. (1990).
Many other numerical algorithms and computer codes have been used and developed
during this project. An algorithm which produces second order statistical information
from point statistics follows the work of Vanmarcke (1983). Diverse preprocessors
have been developed for PAGAP as well as graphical interfaces for visualization of the
results. A modified Rackwitz and Fiessler algorithm (Rackwitz and Fiessler, 1978) has
been developed and used successfully for the iterative processes required by the reli-
ability method. Diverse parameter estimation algorithms have also been developed in
order to use PAGAP for the analysis of actual case studies.
Although it is desirable to present all aspects of PAGAP's development in this thesis,
practical considerations dictate that this is not possible. In chapter 2 the theoretical as-
pects of groundwater flow and pollution transport are presented. The derivation of the
governing equations and the hydrogeological parameters controlling these processes are
introduced and discussed in brief.
In chapter 3 the finite element method as applied to the groundwater flow and pollution
transport equations is presented in detail. At first the derivation of the numerical
equations and the implementation of linear triangular and quadrilateral isoparametric
elements are presented. A discussion on the numerical stability of the algorithm fol-
lows, and finally verification tests of the FEM code are presented.
In chapter 4 the stochastic methods are presented. The Monte Carlo Simulation and
perturbation methods are presented briefly as well as recent developments in their ap-
plications to hydrogeological problems. The reliability methods are presented in detail
in this chapter. Starting from the first order second moment reliability methods and the
reliability index introduced by Cornell (1969), then to the mean value first order second
moment method, the introduction of the Hasofer and Lind (1974) reliability index and
finally to the first and second order reliability methods. The chapter ends with the pres-
entation of algorithms for obtaining the design point and sensitivity information.
The reliability method requires a mathematical definition of the problem to be solved.
This is done with the use of the performance function. The first order reliability
method linearizes the performance function at the so-called design point while the sec-
ond order reliability method fits a paraboloid to the performance function at the design
point. In chapter 5 the three performance functions defined in PAGAP are presented.
The reliability method algorithm needs to linearize these performance functions at any
given point, and in order to accomplish this it requires that the derivatives of the per-
formance functions with respect to all random variables are estimated. The estimation
of these derivatives is the main subject of chapter 5. Two approaches are presented.
The direct approach, whereby the equations which directly estimate the derivatives are
developed, and the difference approach where the derivatives are estimated through a
forward difference formula. The two approaches are then compared for speed and ac-
curacy.
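As a sketch, the forward difference formula in question has the standard form (δ being the small perturbation, e.g. the 10⁻⁶ used in the comparisons of chapter 5):

$\frac{\partial g}{\partial x_i} \approx \frac{g(x_1, \ldots, x_i + \delta, \ldots, x_n) - g(x_1, \ldots, x_n)}{\delta}$

so each random variable costs one extra evaluation of the performance function g per iteration.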
In chapter 6 methods for obtaining statistical input for PAGAP from field measure-
ments are presented. The geostatistical method, a method proposed by Gottschalk
(1993), and the method of Vanmarcke (1983) which has been implemented in
PAGAP, are presented. These presentations are rather brief, and do not include some
important practical aspects of these methods.
Chapter 7 has been used to present some aspects of PAGAP which did not fit into any
of the previous chapters. Basically two things are discussed: The programming features
of PAGAP and the solution algorithm used for solving the finite element equation sys-
tem.
In chapter 8 theoretical case studies have been analyzed by PAGAP and the results
compared to those published by Jang et al. (1990). These comparisons are used for
verification purposes, but they also show that the first order reliability method does not
always give good probability estimates and the FEM formulation of the problem can
have a large impact on the probability estimates.
In chapter 9, PAGAP has used data from the Gardermoen aquifer to analyze some
practical case studies. Many simplifying assumptions have been made in order to ana-
lyze these field case studies, and the input data had to be manipulated considerably.
Parameter estimation algorithms were employed to obtain reasonable parameter distri-
butions, and some aspects of the aquifer geometry had to be changed through trial and
error procedures to obtain a more realistic flow regime. Since no dispersivity data were
available these had to be assumed. In spite of all these problems, some interesting re-
sults have been obtained.
Finally, in chapter 10 a summary of all results obtained is presented together with the
conclusions and an evaluation of the performance of PAGAP. Recommendations for
changes and further developments in PAGAP are proposed.
2. Review of Theoretical Formulations
2.1. Introduction
In this chapter we shall derive the partial differential equations of groundwater flow and
pollutant transport in porous media and at the same time present the parameters con-
trolling these processes. This presentation will be as brief as possible and is primarily
based on the equivalent presentations by Kinzelbach (1986) and Bear and Verruijt
(1987). It is not the purpose of this presentation to address all the problems involved in
obtaining the flow and pollution equations. For details refer to the above mentioned
authors.
2.2. Groundwater Flow
Two basic principles are involved in the derivation of flow equations for all aquifer
types: the principle of continuity and Darcy's law. Continuity demands the conserva-
tion of water mass, while Darcy's law states that in an isotropic porous medium the
specific flow rate is proportional to the negative head gradient. In horizontal 2D
groundwater flow this is expressed as

$\bar{v} = -K \, \nabla h$          (2.2.1)
where $\bar{v}$ is the specific flow rate (Darcy velocity) and h is the hydraulic head.

The proportionality constant K is called the hydraulic conductivity. It is a function of the
geometry of the porous media and the fluid characteristics. The hydraulic conductivity
has units [L T⁻¹], and according to Todd (1980): "A medium has a unit hydraulic con-
ductivity if it will transmit in a unit time a unit volume of groundwater at the prevailing
kinematic viscosity through a cross section of unit area, measured at right angles to
the direction of flow, under a unit hydraulic gradient". Many authors use the term
permeability instead of hydraulic conductivity. In the hydrogeological and geotechnical
field the two terms are interchangeable. However, in the petroleum engineering field,
the term permeability is commonly used to refer to what is more precisely called in-
trinsic permeability (k). The intrinsic permeability also defines the ability of a rock or
soil to transmit a fluid, but is a function of the geometry of the porous media only. It
has units [L²] and is related to the hydraulic conductivity through the relationship

$K = \frac{k \gamma}{\mu} = \frac{k \rho g}{\mu}$          (2.2.2)

where ρ is the fluid density, μ is the dynamic viscosity and γ the weight density. Equation
2.2.2 can alternatively be written in terms of the kinematic viscosity ν, by considering
the relationship ν = μ/ρ.
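As an illustrative check of equation 2.2.2 (the values are assumed, not taken from the thesis): for water with ρ = 1000 kg/m³ and μ = 1.0×10⁻³ Pa·s, a sand with k = 10⁻¹¹ m² gives

$K = \frac{k \rho g}{\mu} = \frac{10^{-11} \cdot 1000 \cdot 9.81}{10^{-3}} \approx 9.8 \times 10^{-5} \ \mathrm{m/s}$

which falls inside the sand ranges of table 2.2.1 for both k and K.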
In an anisotropic medium the direction of flow may or may not be consistent with the
direction of maximum head decrease (i.e. the head gradient direction), and therefore
Darcy's law as described by equation 2.2.1 cannot be used. To incorporate anisotropic
behavior the generalized form of Darcy's law is used, which has the form

$\bar{v} = -\mathbf{K} \, \nabla h$          (2.2.3)

where $\mathbf{K}$ is the second rank tensor of hydraulic conductivity. The hydraulic conductivity ten-
sor is a symmetric tensor, i.e. K_xy = K_yx, and therefore we need to know three parameters
to define the tensor, instead of the one parameter required for an isotropic material. It is
therefore common to make the assumption of isotropic material in cases where the ma-
terial exhibits only slight anisotropic behavior.
To obtain the differential equation of flow we first define a characteristic control vol-
ume, which extends (for 2D horizontal flow) vertically from the bottom of the aquifer
to the top of the aquifer. Since the top of the aquifer has different properties for con-
fined and phreatic aquifers we must consider them separately. For a confined aquifer
the control volume is shown in figure 2.2.1.
The first step is to use the continuity principle for this control volume to obtain equa-
tions describing the water balance. Over the time interval [t, t+Δt] the net flow entering
the control volume must balance out the water stored in the control volume. Flows en-
tering the control volume are counted positive, while flows leaving are negative. We
consider the horizontal flows and the recharge or abstraction through the top of the
control volume.
$\Delta t \left[ \left( v_x\big|_x - v_x\big|_{x+\Delta x} \right) m \Delta y + \left( v_y\big|_y - v_y\big|_{y+\Delta y} \right) m \Delta x + q \, \Delta x \Delta y \right] = S \, \Delta x \Delta y \left( h(t+\Delta t) - h(t) \right)$          (2.2.4)
where m is the thickness of saturated flow. S is the storage coefficient, expressing the
amount of water (volume) that can be stored additionally by compressibility in a col-
umn of the aquifer with a unit cross-sectional area and height m when the head is in-
creased by one unit. q is the recharge/discharge from the top of the aquifer expressed as
flow rate per unit horizontal area.
Figure 2.2.1. Control volume of a confined aquifer for the derivation of a horizontal
two-dimensional flow equation (based on Kinzelbach, 1986).
The final form of the differential equation describing the continuity principle for this
control volume is obtained by dividing 2.2.4 by the control volume base area and the
time step (i.e. Δt Δx Δy) and then taking the limits Δx→0, Δy→0, Δt→0, resulting in

$-\nabla \cdot (m \bar{v}) + q = S \, \frac{\partial h}{\partial t}$          (2.2.5)
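Written out in components, the division by Δt Δx Δy and the limit process give

$-\left( \frac{\partial (m v_x)}{\partial x} + \frac{\partial (m v_y)}{\partial y} \right) + q = S \, \frac{\partial h}{\partial t}$

which is equation 2.2.5 in divergence form.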
The second step is to insert Darcy's law into the water balance equation 2.2.5, which
yields the flow equation expressed in the variable h:

$\nabla \cdot \left( m \mathbf{K} \, \nabla h \right) + q = S \, \frac{\partial h}{\partial t}$          (2.2.6)

or

$\nabla \cdot \left( \mathbf{T} \, \nabla h \right) + q = S \, \frac{\partial h}{\partial t}$          (2.2.7)

where $\mathbf{T} = m \mathbf{K}$ is the transmissivity tensor of the aquifer.
Figure 2.2.2. The control volume in a phreatic aquifer for the derivation of horizontal
two-dimensional flow.
The corresponding equation for a phreatic aquifer is easily obtained from 2.2.6 if we
make two modifications. For a confined aquifer the transmissivity is a function of the
location only (i.e. aquifer thickness and aquifer hydraulic conductivity at a location).
For a phreatic aquifer the transmissivity is in addition a function of the hydraulic head h
(i.e. the water table level at this location). An easy way to compensate for this is to use
the following relationship
$m = h - b$          (2.2.8)
where b is the elevation of the impervious bottom. The second modification is related to
the storage coefficient S. In a phreatic aquifer the storage due to compressibility can be
neglected in comparison to the storage effected by the movement of the water table.
Figure 2.2.2 shows the control volume for a phreatic aquifer and as can be seen the
amount of water stored in the aquifer due to the movement of the water table is the vol-
ume of water between the water table at time t (i.e. the surface h(t)) and the water table
at time t+Δt (i.e. the surface h(t+Δt)). The volume of water stored per volume of aquifer
is equal to the porosity n_e of the aquifer, and therefore we must replace S with the
porosity. The equation then becomes

$\nabla \cdot \left( (h-b) \mathbf{K} \, \nabla h \right) + q = n_e \, \frac{\partial h}{\partial t}$          (2.2.9)
In the case of a leaky aquifer, the flow equation is obtained from the confined aquifer
equation by adding the exchange with the overlying and underlying aquifers as a
source/sink term. According to Darcy's law, the flow between neighboring aquifers is
proportional to their head difference h_1 − h or h_2 − h, respectively:
$q_l = \frac{K_1'}{m_1'} (h_1 - h) + \frac{K_2'}{m_2'} (h_2 - h)$          (2.2.10)
In the above equation it is assumed that the hydraulic heads h_1 and h_2 are known. If
these are not known then a system of equations is required. Figure 2.2.3 illustrates the
situation described by equation 2.2.10.
Figure 2.2.3. Schematic presentation of a leaky aquifer.
The leakage terms are also used to introduce infiltration from or to surface water bod-
ies.
The flow equations derived above describe the relationship between the aquifer proper-
ties and the hydraulic head distribution as a function of time and space. In order to use
them we need to specify the geometry of the aquifer, boundary conditions and initial
conditions. The geometry of the aquifer specifies the area over which the flow equation
is applied to and is termed the domain. The closed curve defming the domain is termed
the boundary. The boundary conditions specify the fluxes coming into and/or out of
the domain along the boundary. This is commonly done either by directly assigning
fluxes to portions of the boundary or by assigning constant head values. The initial
conditions are the specification of the hydraulic head distribution over the domain at time
t = 0.
If the boundary conditions and external fluxes have constant values, then after some
time the hydraulic head distribution will reach a steady state condition where no more
changes are observed. The steady state hydraulic head distribution can be obtained di-
rectly by simply replacing the time derivative in the flow equations with zero. Steady
state flow equations are commonly used to establish the average or most likely hydrau-
lic head distribution.
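For the confined aquifer of equation 2.2.7, for example, setting the time derivative to zero leaves

$\nabla \cdot \left( \mathbf{T} \, \nabla h \right) + q = 0$

which can be solved directly for the steady state head distribution.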
2.2.1. Aquifer Flow Parameters and their Characteristics
The intrinsic permeability, the hydraulic conductivity, the transmissivity and the storage
coefficient are the parameters which are commonly used to describe the hydraulic prop-
erties of the aquifer. The first three parameters are of course interrelated. The trans-
missivity is used when the groundwater flow can be considered to be 2D horizontal,
meaning that the vertical component of flow can be neglected. Otherwise the hydraulic
conductivity is used. The hydraulic conductivity for different soil types is given in ta-
ble 2.2.1.
The intrinsic permeability is mostly used for the unsaturated zone since it does not de-
pend on the fluid characteristics (water/air interface). Under saturated conditions the
intrinsic permeability can be used to calculate rough estimates of the hydraulic conduc-
tivity as described by equation 2.2.2, if a grain size analysis is available. Many empiri-
cal equations can be found in literature which relate the intrinsic permeability with the
second power of some characteristic grain size. Such a relationship is confirmed by ex-
perimental evidence.
Table 2.2.1 Aquifer Parameters
Soil Type    Hydraulic Conductivity (m/s)    Log-H. Conductivity log(m/s)    Intrinsic Permeability (m²)
Clay         10⁻¹⁰ – 10⁻⁸                    −23.025 – −18.421               10⁻¹⁷ – 10⁻¹⁵
Silt         10⁻⁸ – 10⁻⁶                     −18.421 – −13.816               10⁻¹⁵ – 10⁻¹³
Sand         10⁻⁵ – 10⁻³                     −13.816 – −6.908                10⁻¹² – 10⁻¹⁰
Gravel       10⁻² – 10⁻¹                     −6.908 – −2.303                 10⁻⁹ – 10⁻⁸
An equation of this type is the Kozeny-Carman equation

$k = c \, \frac{n^3}{(1-n)^2} \, d^2$          (2.2.11)

where n is the porosity and d is some mean particle size. The parameter c takes values
which depend on the way the particle size d is chosen. A convenient definition of the
particle size can be made in terms of the specific surface M (the total area of the wetted
surface per unit volume of the solid material), namely

$d = \frac{6}{M}$          (2.2.12)
The factor 6 has been introduced so that for a packing of equal spheres the value of d
corresponds to the diameter of the spheres. For such a defmition of d the value of c
which gives .the best fit with the experimental data is approximately c =11180. Many
other formulas of this type exist, but one should keep in mind that only rough estimates
can be obtained which in some cases may be completely wrong.
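To make the use of equations 2.2.11 and 2.2.12 concrete, the following short Python sketch computes such a rough estimate; the porosity, specific surface and water properties used here are illustrative assumptions, not values taken from this chapter.

def kozeny_carman_permeability(n, d, c=1.0 / 180.0):
    # Intrinsic permeability k [m^2] from porosity n and mean grain size d [m] (eq. 2.2.11).
    return c * n ** 3 / (1.0 - n) ** 2 * d ** 2

def hydraulic_conductivity(k, rho=1000.0, g=9.81, mu=1.3e-3):
    # K [m/s] = k * rho * g / mu, with assumed water properties near 10 C.
    return k * rho * g / mu

M = 30000.0   # specific surface [1/m], an assumed value for a fine sand
d = 6.0 / M   # equation 2.2.12
k = kozeny_carman_permeability(n=0.35, d=d)
K = hydraulic_conductivity(k)
print(f"d = {d:.1e} m, k = {k:.2e} m^2, K = {K:.2e} m/s")

For these assumed inputs the results (k of about 2e-11 m^2 and K of about 2e-4 m/s) fall within the sand rows of table 2.2.1, which is the kind of order-of-magnitude check such estimates allow.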
Although the aquifer parameters are theoretically capable of describing the aquifer properties completely, practical problems exist which prevent us from obtaining complete aquifer property information. What we need to know in order to have complete aquifer information is the distribution of the aquifer parameters over the whole aquifer domain. Mathematically this is expressed as P(x,y,z,t), showing that a property P is a function of space and time. For hydrogeological purposes we make the assumption that the parameters do not change with time and therefore the properties are simply a function of space, i.e. P(x,y,z).
The common methods used for obtaining parameter estimations are specimen dependent, and therefore there are practical limits as to how many specimens, and therefore measurements, can be obtained. This limited information, in conjunction with some assumptions about the distribution characteristics of the aquifer parameters, allows us to construct complete aquifer property sets. The reliability of such a complete aquifer property set is always questionable and therefore uncertain. The uncertainty is directly dependent on the inhomogeneities and anisotropies present in the aquifer.
Obviously, different assumptions and different analysis methods will lead to different aquifer property sets. For notation purposes let us denote the actual complete aquifer property set with $P_o(x,y,z)$ while the estimated aquifer property set can be denoted as $P_e(x,y,z)$. In general $P_o = P_e$ at points at which we have property measurements. At all other points the difference between $P_o$ and $P_e$ will be large or small depending on the assumptions made about the distribution properties of the aquifer parameters. Conventional interpolation methods assume that the distribution of the aquifer properties between known measurement points can be expressed through linear, quadratic or cubic polynomial functions of various types and configurations. Such methods usually do not result in reliable $P_e$ functions. The best results are obtained when a large number of measurement points are available which in addition must be evenly distributed throughout the aquifer domain. Such conditions are seldom met in actual aquifer measurements.
In order to improve the interpolation scheme used for the evaluation of $P_e$, we need to make assumptions which describe the parameter distribution in the aquifer more accurately. This is usually done by taking the geological characteristics of the aquifer into consideration. If the geology of the aquifer suggests homogeneous sedimentation conditions, we expect the hydraulic properties of the aquifer to reflect such conditions. An ideally homogeneous aquifer will thus have $P_o$ = constant. However, most homogeneous aquifers will exhibit some spatial variation. An approximation $P_e$ for a homogeneous aquifer can be obtained by averaging the available property measurements. One can also obtain an estimate of the variation of the parameter by estimating the standard deviation of the measurements. The average of a set of measurements will not necessarily give the best approximation for $P_e$. In order to improve the estimation we consider the parameter $P_e$ to be a random variable with a probability distribution that fits the measurement data set. The parameter $P_e$ is thus estimated to be equal to the mean value of the distribution. Many studies on this matter have revealed that the best mean values and standard deviations are obtained when a lognormal distribution is used for transmissivity and hydraulic conductivity measurements. Because of this, many authors prefer to present their data in the log transmissivity or log hydraulic conductivity form. In this case log refers to the natural logarithm of the transmissivity or hydraulic conductivity. The log hydraulic conductivities have been included in table 2.2.1 for different soil types. It is important to keep in mind that the hydraulic conductivity in table 2.2.1 is expressed in m/s and the log hydraulic conductivities are expressed in log(m/s).
Such estimations of $P_e$ have been widely used in 1D and 2D analytical solutions of the groundwater flow equations. In such cases, the standard deviation is a measure of the accuracy of the estimated $P_e$, in the sense that it shows how accurate the assumption of a homogeneous aquifer is. Further, it can be used to set upper and lower bounds on the solutions obtained.
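As a small illustration of this estimation procedure, the sketch below computes the mean and standard deviation of the natural-log hydraulic conductivity from a handful of measurements; the measurement values are invented for the example.

import numpy as np

K = np.array([3.2e-4, 8.1e-5, 1.5e-4, 6.0e-4, 2.4e-4])  # hypothetical measurements [m/s]

ln_K = np.log(K)                           # work in log space (lognormal assumption)
mu, sigma = ln_K.mean(), ln_K.std(ddof=1)
print(f"mean ln K = {mu:.2f} log(m/s), std = {sigma:.2f}")
print(f"geometric mean K = {np.exp(mu):.2e} m/s")  # one candidate homogeneous estimate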
The assumption of a constant distribution of the aquifer parameters through the aquifer domain, expressed as $P_e$ = constant, is in most cases an oversimplification. It is also a simplification that is not required when numerical solutions of the groundwater flow equations are used.
In order to obtain a $P_e$ function as close as possible to the actual parameter distribution, we need a way to incorporate into the interpolation scheme the distribution characteristics of the aquifer property. Statistical interpolation schemes allow us to do so. The basic idea behind statistical interpolation schemes is that points which are near each other should have property values which are also close to each other. This idea is implemented through the use of covariance or correlation functions which establish the relationship between distances and property values. Chapter 6 discusses such methods and the reader is referred to that chapter for more details. The mean distance over which property measurements are correlated is called the correlation scale and is a statistical characteristic of the aquifer property.
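To make the idea concrete, the sketch below evaluates one common choice, an isotropic exponential correlation function, for a few measurement locations; the coordinates, the choice of model and the 200 m correlation scale are all illustrative assumptions.

import numpy as np

# Exponential correlation model rho(d) = exp(-d / corr_scale): nearby points
# receive correlations close to 1, distant points correlations close to 0.
points = np.array([[0.0, 0.0], [50.0, 0.0], [300.0, 100.0], [600.0, 0.0]])
corr_scale = 200.0  # assumed correlation scale [m]

d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
R = np.exp(-d / corr_scale)
print(np.round(R, 3))  # correlation matrix between the four locations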
Statistical interpolation methods assume $P_e$ to be some kind of random field. Terminology and notation vary from method to method and so do the constraints or assumptions placed on the random field. The most common sets of conditions placed on the random field are referred to as the "second order stationarity hypothesis" and the "intrinsic hypothesis". Second order stationarity assumes that the random field is statistically homogeneous, meaning that the random field has a constant known mean value and a standard deviation which is finite in value. It also assumes that the covariance function will be a function of distance only, independent of direction and position. Transmissivity and hydraulic conductivity usually satisfy second order stationarity (De Marsily, 1986). However, in some cases, field investigations show that the data do not satisfy the second order stationarity hypothesis. Some aquifers show clearly that the parameter distribution increases or decreases in some directions. Such trends in the parameter distribution violate the assumption of a constant mean value for the random field. Another problem is that small scale inhomogeneities such as clay or gravel lenses, fault zones and fractures affect the field measurements to a degree where it becomes difficult to obtain a finite standard deviation for the random field.
The intrinsic hypothesis overcomes these problems by defining a new random field as $P_o - P_e$ and then assuming second order stationarity for this random field. The mean value for the intrinsic hypothesis random field is usually set equal to zero, although it is not necessary to do so. The variance of the difference $P_o - P_e$ is more likely to have a finite value and can be expressed in terms of distance only with the use of a semivariogram function. The intrinsic hypothesis was developed and used for the interpolation of geological parameters and therefore these interpolation methods are called geostatistical.
The fundamental geostatistical interpolation scheme is commonly called "kriging" from
the name of D.G. Krige who developed some of the basic concepts. Statistical and
geostatistical interpolation methods have introduced the concept of a random field for
the description of the aquifer properties which has caused extensive studies of the sta-
tistical behavior of aquifer properties. Tables 2.2.2 and 2.2.3 show hydraulic conduc-
tivity statistics for diverse types of aquifers.
Hoeksema and Kitanidis (1985) evaluated data from 31 regional aquifers with the ob-
jective of estimating the horizontal spatial correlation structure of the transmissivity,
hydraulic conductivity and storage coefficient. They attempted to fit one of three co-
variance models to field data using a maximum likelihood estimation procedure. All
three models were variations of the exponential covariance function. The correlation
scale and two types of variance were estimated: unstructured variance (the so-called
nugget effect in geostatistics), and the structured variance (variance directly attributable
to spatial separation). The unstructured variance measures extremely small scale het-
erogeneity effects that cannot be detected with normal sampling. Hoeksema and Ki-
tanidis reported some difficulties in their estimation procedure, and observed that pre-
cise estimates for the correlation scale were difficult to make. In 27 out of 41 parame-
ter estimation runs, the correlation scale was constrained by design to either a minimum
or maximum allowed value. In Table 2.2.2 some of the results obtained by Hoeksema
and Kitanidis have been compiled. No units have been given for the mean value and
variances.
Table 2.2.2 Log-Hydraulic conductivity statistics from Hoeksema and Kitanidis (1985)

Type of Aquifer          data   U        V_n     V_s     λ (miles)
Unconsolidated sediment  46     -0.102   3.24    0.00    NA
Sand and outwash         28     -0.123   0.00    0.149   3.27
Glacial till             38     0.041    0.107   0.141   3.92
Glacial drift            60     0.218    0.390   0.596   7.65
Glacial drift            23     -0.221   0.456   0.701   6.64
Glacial drift            71     -0.176   0.434   0.940   8.06
Glacial drift            60     0.041    0.579   0.520   3.86
Glacial drift            58     -0.186   0.323   0.467   4.94
Sand and Gravel          19     0.076    0.179   0.000   NA
Alluvial deposits        58     -0.053   0.337   0.456   0.477
Alluvial deposits        64     -0.080   0.068   1.76    0.636
Sandy gravel             30     0.201    2.92    0.89    2.88
Sand                     15     0.308    0.806   3.77    78.9
Sand and gravel          35     -0.001   0.491   2.98    1.21

U    Mean value
V_n  Unstructured (nugget) variance
V_s  Structured (spatial) variance
λ    Correlation scale - 1 mile = 1.609 km
NA   Not available
Table 2.2.3 Natural logarithm hydraulic conductivity statistics from Gelhar (1993)

Type of Aquifer                          Type  σ        Corr. scale (m)     Overall scale (m)
                                                        Horiz.   Vertical   Horiz.   Vertical
Alluvial basin                           T     1.22     4000                30000
Sandstone                                A     1.5-2.2           0.3-1.0    100
Alluvial basin                           T     1.0      800                 20000
Fluvial sand                             A     0.9      >3       0.1        14       5
Limestone                                T     2.3      6300                30000
Sandstone                                T     1.4      17000               50000
Alluvial                                 T     0.6      150                 5000
Alluvial                                 T     0.4      1800                25000
Limestone                                T     2.3      3500                40000
Chalk                                    T     1.7      7500                80000
Alluvial                                 T     0.8      820                 5000
Fluvial soil                             S     1.0      7.6                 760
Eolian sandstone outcrop                 A     0.4      8        3          30       60
Glacial outwash sand                     A     0.5      5        0.26       20       5
Sandstone                                T     0.6      45000               500000
Sand and gravel                          A     1.9      20       0.5        100      20
Prairie soil                             S     0.6      8                   100
Weathered shale subsoil                  S     0.8      <2                  14
Fluvial sand and gravel                  A     2.1      13       1.5        90       7
Homra red Mediterranean soil             S     0.4-1.1  14-39               100
Gravely loamy sand soil                  S     0.7      500                 1600
Alluvial silty-clay loam soil            S     0.6      0.1                 6
Glacial outwash sand and gravel outcrop  A     0.8      5        0.4        30       30
Glacial-lacustrine sand                  A     0.6      3        0.12       20       2
Alluvial sand (Yolo)                     S     0.9      15                  100

T  Transmissivity
S  Soils
A  Three dimensional
Gelhar (1993) presents a summary of variances and correlation scales obtained from saturated log-hydraulic conductivity data from several different sites. These data involve three general types of observations. The largest scale measurements are generally depth averaged (transmissivity) observations based on aquifer pumping tests and/or, in many cases, specific capacity measurements in a single well. The depth averaged data do not contain any information about vertical variations. A second category of information that is often available at an intermediate scale is that based on observations of infiltration rate on the soil surface. These observations again contain only information about horizontal variations in hydraulic conductivity. A third category of information, described in the table as aquifer data (A), involves multidimensional observations (horizontal and/or vertical variations) often obtained from boreholes penetrating the saturated zone, but, in some cases, including multidimensional measurements above the water table. Graphical presentation of the data shows that the standard deviation of log conductivity varies over an order of magnitude, but does not show systematic dependence on the scale, while in contrast the correlation scale systematically increases with increasing overall scale.
Despite statistical interpolation methods giving us a reliable property distribution function, we still have the problem of using this function in numerical solution equations. Numerical methods discretize the aquifer domain into a finite number of elements and require us to define the hydraulic properties of each element. In the case of hydraulic conductivity, if the aquifer is isotropic we need to specify for each element a hydraulic conductivity value; if it is anisotropic we have to define the hydraulic conductivity tensor for each element. The property value assigned to each element has to be the average value for the element as a whole. This means that we cannot directly use statistical interpolation results for obtaining such values since interpolation methods give us point results. In chapter 6 one can find the general equations that describe the averaging problem. The averaging problem is usually referred to as the upscaling problem and new approaches have been suggested for its solution.
2.3. Pollutant Transport
The pollutant transport equation cannot be obtained as easily as the groundwater flow equation. The basic reason for this difficulty is that pollution can be transported by several different mechanisms or processes. Usually advective transport (i.e. due to the velocity of groundwater) and molecular diffusion are the most important processes, but in many cases adsorption and chemical transformations take place which add to the complexity of the problem.
We first consider an ideal tracer which experiences neither adsorption nor chemical transformation; in other words we assume that only advective transport and molecular diffusion take place. Such processes are well known and can easily be described by the equations given below.
The advective pollutant flow is simply obtained by considering that the pollutant with concentration c will move with the same velocity as the water. If we consider an elementary area dA with a velocity $u_n$ normal to this area, then the pollutant flow is given as (Kinzelbach, 1986)

(2.3.1)

where

$\vec{x}_3 = (x, y, z)$ is the location vector,
$\vec{n}_3 = (n_x, n_y, n_z)$ is the unit normal vector of the area element dA,
$J_{conv} = \vec{u}_3 c$ is the advective flux.
The diffusive pollutant flow through the same area element is given by Fick's law. Fick's law is similar to Darcy's law and states that the diffusion is proportional to the (negative) gradient of the concentration normal to the area element. This is written as

(2.3.2)

where

$J_{diff} = -D_m \nabla_3 c$ is the diffusion flux.
In order to use the above equations we must know the concentration c, the diffusion coefficient $D_m$ and the velocity vector $u_n$ (or alternatively $\vec{u}_3 \cdot \vec{n}_3$). Assuming that the concentration is known and that the diffusion coefficient can be obtained experimentally, there still remains the problem of finding the velocity vector. The velocity in equation 2.3.1 is a point velocity, meaning that when considering groundwater flow in a porous medium it is the velocity in the pores of the aquifer. This means that equations 2.3.1 and 2.3.2 can only be used for pollution transport on the microscopic scale (between pores), while we are interested in equations describing the pollution transport on a regional scale.
In order to obtain equations which allow us to consider macroscopic volumes when taking pollutant balances, we must calculate average fluxes. The total flux at a specific point is given as

(2.3.3)

The averaging process is obtained by considering the following relationships

$$c = \bar{c} + \delta c \qquad (2.3.4a)$$
$$\vec{u}_3 = \bar{\vec{u}}_3 + \delta\vec{u}_3 \qquad (2.3.4b)$$

where the overbar symbol is used for the average. Equations 2.3.4 simply state that the concentration (or velocity) at a specific point is equal to the average value plus some deviation $\delta c$ (or $\delta\vec{u}_3$). Inserting equations 2.3.4 into 2.3.3, multiplying by the effective porosity and averaging, we obtain the volumetric average flux

(2.3.5)

which simplifies after averaging to (i.e. the averages of $\delta c$ and $\delta\vec{u}$ are zero)

(2.3.6)
In equation 2.3.6 the first right-hand side term is called the convection (or advection) term, the second term is called the molecular diffusion term, while the third term is called the dispersion term.
The only problem remaining for using equation 2.3.6 to describe the pollutant transport on a regional scale is to estimate the dispersion term. The dispersion term, when averaging over large volumes, accounts for all the inhomogeneities in the aquifer and is assumed to have a diffusion-like effect (see figure 2.3.1). It is therefore estimated in a similar way as the diffusion term, i.e.

(2.3.7)

where $D_3$ is the dispersion tensor. Replacing 2.3.7 into 2.3.6, we obtain a flux equation which describes the pollutant flow at any point in the aquifer.

(2.3.8)
where the average bars have been dropped for simplicity.
With equation 2.3.8 we are now ready to obtain the pollution transport equations. We basically use the same approach as for the derivation of the groundwater flow equations. Equation 2.3.8 replaces the Darcy law equation and once again we demand the pollutant mass balance to be zero.
Taking an arbitrary volume V defined by a surface S, we demand that the total flux over the surface S and the production rate of internal pollution sources and sinks inside V balance the storage of pollution inside V per unit time. This is written as
(2.3.9)
Figure 2.3.1. Schematic description of the effects of convection, dispersion, adsorption, and chemical degradation on pollutant transport (based on Kinzelbach, 1986).
Using the Gauss integral theorem, we can transform the surface integral into a volume integral, thus obtaining

(2.3.10)

Demanding now that 2.3.10 is zero we obtain the equation of 3D pollution transport

(2.3.11)

To obtain a 2D pollution transport equation one could assume that $u_z$ is equal to zero and also set the concentration gradient in the z direction to zero (i.e. $\partial c/\partial z = 0$). In addition one should replace all boundary conditions on the z boundary with equivalent external sources and sinks. These changes would reduce equation 2.3.11 to a 2D form, but by doing so we would indirectly assume complete vertical homogeneity and instantaneous vertical mixing of the pollutant.
Figure 2.3.2. Nomenclature for the pollutant mass balance over a control volume extending from the bottom to the top of an aquifer (based on Kinzelbach, 1986).
The basic problem with obtaining a 2D equation is that the aquifer is always vertically inhomogeneous. Lenses of silt or clay in the aquifer will affect not only the velocity distribution but also the dispersivity values. To avoid some of these problems, we obtain the 2D equation by averaging over the depth. The averaging approach will only to a certain degree compensate for vertical inhomogeneities.
Let us first consider a confined aquifer with bottom and top impervious to the pollutant. In this case the depth average of the vertical pollutant flux must be zero (i.e. $\bar{J}_z = 0$). The averages over the fluxes in the x and y directions yield the 2D flux vector $J = (J_x, J_y)$. The averaging proceeds as before. The local velocity and local concentration in the z direction are decomposed into their respective depth averages and deviations from those. The depth average concentration flux can then be expressed as

(2.3.12)

The bars in 2.3.12 indicate the depth average. The second term on the right hand side of 2.3.12 represents the macrodispersion due to vertical inhomogeneities. It is again modeled by the Fickian approach. Note that the Fickian model is only reasonable if mixing over the depth is fast compared to the time scale of the plume to be modeled. So we write
(2.3.13)

where

$$D' = \begin{pmatrix} D'_{xx} & D'_{xy} \\ D'_{yx} & D'_{yy} \end{pmatrix}, \qquad \nabla = \begin{pmatrix} \partial/\partial x \\ \partial/\partial y \end{pmatrix}$$

We now assume that we can write

(2.3.14)

where

$$D'' = \begin{pmatrix} D''_{xx} & D''_{xy} \\ D''_{yx} & D''_{yy} \end{pmatrix}$$
While D' expresses the dispersion effect due to velocity variations from layer to layer, D'' describes the average dispersion within one layer.
With an effective porosity which is reasonably constant over depth, the depth average dispersive flux is finally obtained as

(2.3.15)

where

$$D = D' + D'' = \begin{pmatrix} D_{xx} & D_{xy} \\ D_{yx} & D_{yy} \end{pmatrix}$$
If the assumption of an impervious top of the aquifer is relaxed, the depth integral of $J_z$ may be non-zero. Omitting in the following the averaging bars, we have that

$$\int_a^b J_z\,dz = m\bar{J}_z = J_{external} = -q\,c_{in} \qquad (2.3.16)$$

where q is the flowrate per unit area in the vertical direction from or to the aquifer. $c_{in}$ is the concentration of the inflowing water in the case of infiltration, or the average concentration in the aquifer in the case of abstraction of water. In other words this implies that the concentration at the top of the aquifer is approximately equal to the depth-averaged concentration, or that water is abstracted by perfect wells.
Finally the transport equation in 2D is obtained by taking a mass balance around a control volume extending from the top to the bottom of the aquifer.

$$-\oint_S \vec{J}\cdot\vec{n}\,dS - \int_A J_{external}\,dA + \int_A m\sigma\,dA = \int_A \frac{\partial(c\,n_e m)}{\partial t}\,dA \qquad (2.3.17)$$

The surface integral is transformed into a volume integral by the Gauss theorem as follows

$$\oint_S \vec{J}\cdot\vec{n}\,dS = \int_V \nabla\cdot\vec{J}\,dV = \int_A \nabla\cdot(m\vec{J})\,dA \qquad (2.3.18)$$
From equations 2.3.17 and 2.3.18 the differential form of the transport equation is obtained by equating the integrands.

$$\frac{\partial(c\,n_e m)}{\partial t} + \nabla\cdot(m\vec{J}) - m\sigma - q\,c_{in} = 0 \qquad (2.3.19)$$
Inserting equation 2.3.15 into 2.3.19, we finally obtain the 2D transport equation which is usually referred to as the Bear equation.

(2.3.20)

Equation 2.3.20 is for a confined aquifer, but by replacing m = h - b, the equation becomes valid for phreatic aquifers too. We have yet to build into equation 2.3.20 the effects of adsorption and chemical reactions.
Equation 2.3.20 is valid for what is termed a conservative pollutant; this means a pollutant which does not decay (e.g. NaCl). We now introduce a first-order reaction into 2.3.20. The effect of a first order reaction, or generally of decay, is a reduction of the concentration. This effect can be simulated by manipulating the sink term to cause the desired reduction. In a first order reaction the decay is proportional to the concentration present.

(2.3.21)

where λ is the decay constant. The same type of law holds for radioactive decay.
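For a volume with no transport terms, equation 2.3.21 reduces to an ordinary differential equation whose solution makes the role of λ explicit; the numerical value below is purely illustrative.

$$\frac{\partial c}{\partial t} = -\lambda c \quad\Rightarrow\quad c(t) = c_0\,e^{-\lambda t}, \qquad t_{1/2} = \frac{\ln 2}{\lambda}$$

For example, a decay constant of λ = 0.01 per day corresponds to a half-life of ln 2 / 0.01 ≈ 69.3 days.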
In the case where a pollutant is adsorbed by the matrix of grains, the mass balance must include not only the dissolved pollutant mass but also the adsorbed pollutant mass. While the dissolved concentrations c are usually measured as mass of pollutant per volume of water, the adsorbed concentrations $c_a$ are measured as mass of pollutant per mass of dry matrix material. To be able to compare these two concentrations a normalization is required. The total pollutant mass in a unit volume of aquifer is given by

(2.3.22)

where ρ is the density of the dry matrix material. Including adsorption and first order reaction in equation 2.3.20 and neglecting molecular diffusion against dispersion, we obtain the transport equation

$$\frac{\partial\big(m n_e c + m(1-n_e)\rho c_a\big)}{\partial t} + \nabla\cdot\big(m n_e \vec{u} c\big) - \nabla\cdot\big(m n_e D\nabla c\big) + \lambda\big(m n_e c + m(1-n_e)\rho c_a\big) - m\sigma - q\,c_{in} = 0 \qquad (2.3.23)$$

where σ now denotes volume sources and sinks other than first order reactions. The adsorbed mass appears only in the storage term and in the decay term, the latter being due to the fact that the adsorbed pollutant mass will also decay.
The final transport equation is obtained by developing the adsorbed concentration term in 2.3.23. If the adsorption rate is fast compared to the time scale of flow, we can assume that the adsorbed concentration $c_a$ is always in equilibrium with the dissolved concentration c; this means that

$$c_a = f(c) \qquad (2.3.24)$$

The function f(c) is called the isotherm function and it describes the equilibrium at constant temperature. For small concentrations we can usually simplify the isotherm to the linear form

$$c_a = K_d\,c \qquad (2.3.25)$$

If dissolved and adsorbed concentrations are not in equilibrium, a separate differential equation for $c_a$ must be given. The simplest model states that the exchange between adsorbed and dissolved phases is proportional to the difference in respective concentrations.

(2.3.26)
Equations 2.3.23 and 2.3.26 form a system of equations that can be solved to obtain the concentrations c and $c_a$.
Up to now we have assumed that the total porosity is identical to the effective porosity, which means that we assumed that no pollutant is stored in the part of the pore space which is inaccessible to convective transport. In reality total and effective porosities may differ considerably. The immobile water can receive pollutant mass by diffusion, and the retention effect due to dead-end pore volume can be taken into consideration in the same way as non-equilibrium adsorption. Writing balance equations for the mobile and the immobile pore water separately, with an exchange term of the form of 2.3.26, we obtain

(2.3.27)

where $c_1$ and $c_2$ are the concentrations in the mobile and immobile pore water respectively, and $n_1$ and $n_2$ are the fractions of pore space occupied by mobile and immobile water.
Considering only equilibrium adsorption, we can insert the linear isotherm into equation 2.3.23 to obtain

(2.3.28)

where

$$R = 1 + \frac{(1-n_e)\,\rho\,K_d}{n_e}$$

If we make the assumption that m, $n_e$ and R show little spatial variation and that the saturated thickness is constant over time, we can divide equation 2.3.28 by the factor $m n_e R$:

$$\frac{\partial c}{\partial t} + \nabla\cdot\Big(\frac{\vec{u}}{R}c\Big) - \nabla\cdot\Big(\frac{1}{R}D\nabla c\Big) + \lambda c - \frac{\sigma}{n_e R} - \frac{q}{m n_e R}c_{in} = 0 \qquad (2.3.29)$$
Equation 2.3.29 shows that the introduction of linear adsorption is essentially equivalent to a retardation of the transport process, as the pore velocity is replaced by a diminished velocity u/R, both in the convective term and the dispersion tensor. The division of all terms by R reflects the fact that part of the pollutant mass injected will be adsorbed onto the matrix and not contribute to the dissolved concentration at the time of injection. The coefficient R is called the retardation factor.
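The following sketch shows how the retardation factor acts on the pore velocity; the matrix density, porosity and isotherm coefficient are illustrative assumptions.

# Retardation of a linearly adsorbing pollutant (cf. equations 2.3.28-2.3.29).
rho_s = 2650.0   # density of the dry matrix material [kg/m^3], assumed
n_e = 0.3        # effective porosity, assumed
K_d = 2.0e-4     # linear isotherm coefficient [m^3/kg], assumed

R = 1.0 + (1.0 - n_e) * rho_s * K_d / n_e   # retardation factor
u = 1.0                                     # pore velocity [m/d], assumed
print(f"R = {R:.2f}, retarded velocity u/R = {u / R:.3f} m/d")

With these assumed values R is about 2.2, i.e. the plume travels at less than half the pore velocity.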
In a phreatic aquifer the saturated thickness may vary with time and therefore the storage term must be written as

$$\frac{\partial(m n_e R c)}{\partial t} = m n_e R\frac{\partial c}{\partial t} + n_e R c\frac{\partial m}{\partial t} \qquad (2.3.30)$$

The volume sources σ are converted into areal sources by the definition

$$\sigma = \frac{S_{in}}{m} \qquad (2.3.31)$$
The convection term can be transformed by means of the continuity equation of water flow into

$$\nabla\cdot(\vec{u}c) = \vec{u}\cdot\nabla c + c\,\nabla\cdot\vec{u} = \vec{u}\cdot\nabla c + \frac{c\,q}{n_e m} - \frac{c\,S}{n_e m}\frac{\partial h}{\partial t} \qquad (2.3.32)$$

So we finally come up with an equation which is valid for both phreatic and confined aquifers

$$\frac{\partial c}{\partial t} + \frac{\vec{u}}{R}\cdot\nabla c - \nabla\cdot\Big(\frac{D}{R}\nabla c\Big) + \lambda c - \frac{q}{m n_e R}\big(c_{in} - c\big) - \frac{S_{in}}{m n_e R} - \frac{c}{m n_e R}\Big(S\frac{\partial h}{\partial t} - n_e R\frac{\partial m}{\partial t}\Big) = 0 \qquad (2.3.33)$$
The transport equation is a second-order partial differential equation which requires initial conditions and boundary conditions. Initial conditions are given by the concentration distribution $c(x,y,t_0)$ at the starting time of the simulation. As in the case of the flow equation, there are three possible types of boundary conditions. First kind (Dirichlet) boundary conditions specify prescribed concentrations on the boundary. Inner boundaries of the first kind can describe pollution sources. Second kind (Neumann) boundary conditions specify the concentration gradient normal to the boundary. This means they prescribe the dispersive flux. The convective flux at boundaries cannot be prescribed by second-kind boundary conditions. It is a result of the interplay of concentrations on the boundary, as given for example by first kind conditions, and velocities. At impervious boundaries the flow model has to yield zero water flow across the boundary, and we only need to prescribe the vanishing dispersive flux.
Prescribing the total flux (i.e. the sum of convective and dispersive flow) corresponds to a third kind (Cauchy) boundary condition. As the total flux is a linear combination of the concentration on the boundary and the normal derivative of the concentration on the boundary, we have

$$f_{ext} = \frac{\vec{u}\cdot\vec{n}}{R}\,c - \frac{D}{R}\,\vec{n}\cdot\nabla c \qquad (2.3.34)$$

If the dispersive flux is small compared to the convective flux, the boundary condition approaches a first kind boundary condition.
2.3.1. The Averaging Process

In order to obtain a 2D equation for pollution transport, Bear (1972) introduced an averaging process which is based on the equations

$$\vec{u}_3 = \bar{\vec{u}}_3 + \delta\vec{u}_3, \qquad c = \bar{c} + \delta c$$

which have already been introduced (2.3.4). These equations are actually used two times: first to obtain the 3D equation from the initial Fickian equation, and then to obtain the 2D equation from the 3D equation (i.e. averaging over the depth). Here we consider the second implementation of the averaging scheme.
Using the above equations we can calculate the average flux according to

$$\overline{\vec{u}_3 c} = \overline{(\bar{\vec{u}}_3 + \delta\vec{u}_3)(\bar{c} + \delta c)} = \overline{\bar{\vec{u}}_3\bar{c}} + \overline{\delta\vec{u}_3\,\bar{c}} + \overline{\bar{\vec{u}}_3\,\delta c} + \overline{\delta\vec{u}_3\,\delta c} = \bar{\vec{u}}_3\bar{c} + \overline{\delta\vec{u}_3\,\delta c}$$
We must keep in mind two things when considering the above result. First, this is the average flux over the third dimension (usually depth), and this average flux is meaningful only in the case where the variation with depth is not large. Second, the second term on the right side is significant only when the variation is large. It is clear from the above that this averaging method is not problem-free.
Figure 2.3.3 can be used to illustrate the problem. Obviously the averaging effect results in a diffusion-like transport of the plume although the basic transport mechanism is convection. Since the transport process is described by the averaging scheme, some additional considerations must be made in order to make the model realistic.
1. Initially the plume must be equally distributed over the depth of the aquifer (as in figure 2.3.3). In order to fulfill this condition, the pollution must enter the aquifer through a perfect well (i.e. penetrating to the bottom of the aquifer). In most cases however the well will be imperfect or the pollution enters through the top section of the aquifer. This means that we cannot model the pollution source realistically, but must move the location of the source in order to simulate the pollution source correctly. How this can be achieved is unclear. Usually field experiments, practical considerations and judicious decisions allow us to make "reasonable" corrections.
2. The aquifer must be relatively homogeneous as far as its hydraulic properties are concerned. Aquifers with large inhomogeneities cannot be modeled by the averaging scheme proposed above. The higher the variability exhibited by the hydraulic properties and the higher the dispersivity coefficient, the less accurate are the model results.
3. The averaging scheme leads to an average flux whose variance depends on the magnitude of the dispersivity coefficient. This means that we can only obtain an approximation to the actual pollution transport, while the uncertainty of the results will depend on the variation of the dispersivity.
The above conditions imposed by the averaging process indicate that the pollution transfer process is not a deterministic process but a stochastic process. The results obtained give us the average transport effect, and since no consideration is given to the variance of the solution, one should be very cautious when interpreting such results. In other words, the transport equations can always be used qualitatively to model pollution transport. However, quantitatively, the equations are not very dependable unless the aquifer exhibits fairly homogeneous and isotropic dispersion characteristics.
2.3.2. The Dispersion Tensor and Dispersivity

The dispersion tensor is in diagonal form if one of the coordinate axes is aligned with the velocity vector. If we assume that it is also aligned with the x-axis, it has the form

(2.3.35)

The dispersion coefficients can be written as the product of the absolute value of velocity and a length scale, called the dispersivity.

Longitudinal dispersion coefficient: $D_L = \alpha_L u$
Transverse dispersion coefficient: $D_T = \alpha_T u$

where $u = |\vec{u}|$. The tensor elements in an arbitrarily oriented coordinate system are obtained by rotation.

(2.3.36)
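For 2D transport the rotation in equation 2.3.36 takes the standard form evaluated in the sketch below; the velocity components and dispersivities used are illustrative values.

import numpy as np

# Dispersion tensor in an arbitrarily oriented coordinate system:
#   Dxx = (a_L*ux^2 + a_T*uy^2)/u,  Dyy = (a_T*ux^2 + a_L*uy^2)/u,
#   Dxy = Dyx = (a_L - a_T)*ux*uy/u,  with u = |u|.
def dispersion_tensor(ux, uy, a_L, a_T):
    u = np.hypot(ux, uy)
    Dxx = (a_L * ux ** 2 + a_T * uy ** 2) / u
    Dyy = (a_T * ux ** 2 + a_L * uy ** 2) / u
    Dxy = (a_L - a_T) * ux * uy / u
    return np.array([[Dxx, Dxy], [Dxy, Dyy]])

print(dispersion_tensor(ux=1.0, uy=0.5, a_L=10.0, a_T=1.0))  # assumed values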
Of course, in actual models we need numerical values for the dispersivities. Usually such values are not known and are difficult to estimate. Laboratory experiments show longitudinal dispersivities in the range of 0.1 cm to 10 cm (Kinzelbach, 1986). They basically reflect the microdispersivity due to the interaction of flow and grains. Their size varies with porosity, grain diameter, shape of grains and grading of grains.
In field tracer experiments, dispersivities attain much higher values. This is caused partly by the higher inhomogeneity of the soil material, but more basically by the influence of small scale inhomogeneities in the permeability of the aquifer. Even larger values are obtained in regional pollution transport. This scale effect on dispersion means that there is no such thing as generally valid values. In the literature, dispersivity values range between 0.1 m and 500 m for porous aquifers.
The observed scale dependence is usually attributed to two reasons. First, with growing size of the tracer cloud, larger and larger inhomogeneities can contribute to its dispersive spreading. As long as their extension is too small to be described in detail by the flow field, their average effect will show up in the apparent dispersivity. An asymptotic state with constant dispersivities is reached when the size of the plume is large compared to the largest known randomly distributed inhomogeneities.
Figure 2.3.3. Macrodispersion process in a layered aquifer (based on Kinzelbach, 1986).
The second reason for the apparent growth of the dispersivity is the inadequate use of the Fickian dispersion model in situations which show too little randomness to resemble a diffusion process. A simple conceptual example is presented in figure 2.3.3 by a layered aquifer. The major macrodispersion effect in horizontally 2-dimensional transport models of a layered aquifer stems from averaging over the depth. If an aquifer were composed of parallel horizontal layers with zero mass transfer and differing horizontal velocities, the differential convection would lead to a spreading which corresponds to a dispersivity growing to infinity with distance.
A Fickian model which uses a constant dispersivity does not describe this situation adequately. If transverse mixing in the vertical direction exists, the observed dispersivity will not grow indefinitely but will approach a finite asymptotic value after some flow distance, when there is an equilibrium between longitudinal spreading due to differential convection and transverse dispersive mixing. The larger the vertical mixing, the faster the asymptotic Fickian behavior of longitudinal macrodispersion will be reached. For very small vertical mixing it may occur that asymptotic behavior is never reached within a whole groundwater basin.
Another possibility for accounting for the scale effect on dispersivity, when considering a momentary point-like discharge, is the introduction of a time or distance dependent longitudinal dispersivity, which grows from a very small value at the origin of the tracer cloud to an asymptotic value. Laboratory experiments show values of 0.01 to 0.3 for $\alpha_T/\alpha_L$. The vertically transverse dispersivity may even be smaller, but it effectively governs the longitudinal macrodispersion process as discussed previously.
Table 2.3.1 Dispersivities from two-well tests from Anderson (1979)

Type of Aquifer             Location                    Well Distance (m)   Porosity   Dispersivity (m)
Fractured dolomite          Carlsbad N.M.               38.1 - 54.9*        0.12       38.1
Fractured schist & gneiss   Savannah River Plant S.C.   538                 0.0008     134.1
Alluvial sediments          Barstow California          6.4                 0.40       15.2
Alluvial sediments          Tucson Arizona              79.2                0.38       15.2
Fractured chalk             Dorset England              8                   0.005      3.1
Chalk                       Dorset England              8                   0.023      1.0

* Inclined hole: 38.1 m at surface
Table 2.3.2 Regional Dispersivities from Anderson (1979)

Type of Aquifer       Location                          α_L (m)       α_T/α_L    Nodal spacing   Type of model
Alluvial sediments    Rocky Mountain Arsenal Colorado   30.5          1.0        305             Areal
                      Rocky Mountain Arsenal Colorado   30.5          0.3        660 x 1320      Areal
                      California                        30.5          0.3        305             Areal
                      Lyons France                      12            0.33       NR              Areal
                      Barstow California                61            0.3        305             Areal
                      Sutter Basin California           80 - 200      0.1        Variable        3D
Glacial deposits      Long Island New York              21.3          0.2        Variable        Areal
Limestone             Brunswick Ga                      61            0.3        Variable        Areal
Fractured basalt      Idaho                             91            1.5        640             Areal
                      Hanford Washington                30.5          0.6        NR              Areal
Alluvial sediments    Barstow California                61            1/330      3 x 152         Profile
                      Alsace France                     15            0.067      NR              Profile
Glacial till/shale    Alberta Canada                    3.0 and 6.1   0.2        Dc=79           Profile
Limestone             Cutler area Florida               6.7           0.1        Variable        Profile
Hypothetical                                            0.003 - 30    0.2        Dc=30.5         Profile
                                                        21.3          0.2        Variable        Profile
                                                        10            -          NR              1D
                                                        0.5 - 100     1 - 0.05   12.5 x 12.5     Profile

α_L  Longitudinal dispersivity
α_T  Transverse dispersivity
NR   Not reported
1D   One dimensional model
3D   Three dimensional model
Areal and Profile refer to two dimensional models
The molecular diffusion coefficient is usually neglected because it has a value of $D_m = 10^{-9}\ \mathrm{m^2/s}$ at a temperature of 10 °C, which is much smaller than the convection-diffusion processes caused by velocities in the range of 0.1 m/d to 10 m/d.
Obtaining the dispersivity distribution over the aquifer is a far more complex problem than obtaining other aquifer property distributions. We have already discussed in section 2.2.1 how one can obtain a "reliable" hydraulic conductivity or transmissivity distribution over the aquifer. Theoretically the same methods can be applied to the dispersivity as well. However, dispersivity is the least known field parameter. Anderson (1979) presents two sets of dispersivity data, seen in tables 2.3.1 and 2.3.2. The first set is determined from two-well tracer tests, while the second set uses environmental tracers, i.e. substances that are present in the aquifer prior to the start of the investigation. The magnitudes of the dispersivity values in the two tables vary considerably depending on the scale of the dispersivity measurements. The dispersivities in table 2.3.2 were obtained through calibration of numerical models to field plume migration. The values are three to four orders of magnitude larger than the laboratory scale dispersivity data ($10^{-1}$ to 1 cm) and also show large differences from site to site.
It is expected that the longitudinal dispersivities will be larger than the transverse dispersivities and this is typically the case. However, for the fractured basalts at Hanford a reverse relationship has been suggested by numerical modeling. The dispersivity is the most uncertain aquifer parameter, since its variability is not readily obtainable in the field and it is difficult to perform the experiments repeatedly.
Gelhar (1993) has summarized some field observations of longitudinal dispersivities which have been determined as a function of the scale of the experiment, that is, the mean displacement distance involved in the experiment. The scale of observation ranges from 1 m to 100 km, and dispersivities from 1 cm up to 10 km are reported. The data were also segregated according to whether the observations were a tracer test with a known controlled input of solute, contamination events where the nature of the source may be ill-defined, or environmental tracers where natural processes control the nature of the input. Most of the tracer tests have been carried out on a relatively small scale, say, less than 1 km, whereas the largest-scale observations are those based on environmental tracers. The various sites were also classified according to whether the aquifer was described as a porous medium or a fractured medium. Figure 2.3.4A does not seem to indicate that there is a difference between the dispersion characteristics of porous and fractured media. A detailed quality assessment of the dispersivity data was then carried out and the results are presented in figure 2.3.4B.
Figure 2.3.4. A. Longitudinal dispersivity as dependent on an overall displacement scale for different types of observations and media. B. Field longitudinal dispersivity data classified according to reliability (both figures from Gelhar, 1993).
The largest symbols in the figure identify those observations of relatively high reliability, whereas the smallest points represent observations in which minimal confidence can be placed. It can be seen that most of the reliable information falls within a scale of less than about 100 m. Most of the data at scales over a kilometer are of low reliability. One should therefore not conclude that the longitudinal dispersivity increases indefinitely with increasing scale, as might be inferred from figure 2.3.4A.
3. The Finite Element Model
3.1. Introduction
The finite element method is widely used by engineers for obtaining numerical approximations to the solutions of ordinary differential and partial differential equations. Compared to the finite difference method the finite element method is more complex, but it allows us to treat complex geometries, boundary conditions and tensors with great ease, properties which make the method very popular.
In this chapter we shall implement the finite element method using the Galerkin ap-
proach on the equations derived in the previous chapter. The groundwater flow equa-
tions are ideally suited for using the finite element method. On the other hand the ad-
vection-diffusion equations are rather badly suited for this method. There are ways to
overcome the problems which appear, but mostly one has to rely on common sense and
visualization of the results.
The finite element method is based upon the weighted residual approximation method,
which in turn has many characteristics of a "trial and error" approach. An approximate
function is built - a trial function - which usually does not fully qualify as a solution of
a given differential equation. The trial function is placed into the differential equation,
and since it does not fulfill all conditions it will produce a residual. If the trial function
fulfilled all conditions then the residual would be zero. The weighted residual method
is then used in order to force the trial function to produce a zero residual. The condition is mathematically formulated as

$$\int_\Omega W\,\Re\,dx\,dy = 0 \qquad (3.1.1)$$

where W is the weight function, $\Re$ is the residual function and Ω is the domain. The basic idea implemented in the weighted residual condition is that if this condition is satisfied independently of which weight function is used, then the residual must be identically zero. In other words, if the weighted residual condition holds "for all possible weight functions" then the residual is equal to zero and the trial function is in fact the solution to the differential equation.
Since integration over domains with complex geometries is not practical, the finite element method introduces the concept of an element, which is used to divide the domain into areas with simple geometry which can easily be integrated.
Triangular and quadrilateral elements are the most common types of elements used, but
more complex forms are also available. The introduction of elements has the drawback
of reducing the freedom with which we can choose the trial and weight functions since
these must be defined not only over the domain but also over each element. For a given
set of weight and trial functions and by splitting up 3.1.1 into a sum of integrals over
each element, a system of equations can be obtained which when solved will produce a
trial function which approximates the actual solution.
The numerical solution is approximate for two reasons. First because the actual domain
geometry is approximated by a finite set of elements and second because only one
weight function has been used rather than "all possible weight functions". In order to
see how good the obtained approximation is, the finite element method provides practi-
cal and theoretical "tools" for making an error analysis. In spite of the available tools
for a finite element error analysis, it is not always possible to get good error estimates.
Usually one has to approximate these as well.
Errors due to the difference between the actual domain and the modeled domain are
important only in the case when we are interested in solutions near the boundary of the
domain. The type of boundary conditions imposed on the boundary and the geometry
of the boundary will greatly influence the numerical solution obtained. Improving the
solution near the boundary is never easy. However, the finite element method produces
solutions which are smooth, and due to this smoothing effect one can be sure that
boundary errors will not influence the solution obtained further in the domain. In other
words, boundary errors are located near the boundary and do not propagate inwards.
No matter which trial function we choose, we do not expect this function to be a solution of the differential or partial differential equation of interest. Therefore we shall always obtain a residual function which will not be equal to zero. Using condition 3.1.1
on a non-zero residual function with an appropriate weight function, produces a solu-
tion where the residual becomes as small as possible. In general, one can state that the
finite element method will give the best possible approximation for a given residual
function (i.e. trial function) and weight function. Although this means that the solution
will be optimal, it does not mean that the approximation will necessarily be good.
Practically, one can think of the trial function as an interpolation function and the finite
element method as a method to fit this interpolation function to the solution of a differ-
ential equation. If the interpolation function used is capable of approximating the solu-
tion well, then good results are obtained. In the opposite case a different interpolation
function must be used. The weight function is also an interpolation function and its
purpose is to weight off as much as possible the errors introduced by the trial function.
Using the same interpolation function as both a trial and weight function has many ad-
vantages and is called the Galerkin approach. This approach makes the numerical im-
plementation of the finite element method easier, and is the most commonly used ap-
proach.
3.2. Groundwater Flow
We shall implement the finite element method using the Galerkin approach on equation 2.2.6, which we shall refer to as the "flow" or groundwater equation. This equation can be rewritten in a less compact form as

$$\frac{\partial}{\partial x}\Big(T\frac{\partial h}{\partial x}\Big) + \frac{\partial}{\partial y}\Big(T\frac{\partial h}{\partial y}\Big) + q = S\frac{\partial h}{\partial t} \qquad (3.2.1)$$

where T is the transmissivity and is assumed to be a scalar in this equation, q is the external flux, S the storage coefficient and h the hydraulic head.
We must first construct an approximate solution to h (a trial function). Suppose $\hat{h}$ is such an approximate solution of the flow equation at every time step and is given by the following general equation

$$\hat{h}(x,y,t) = \sum_{j=1}^{n} h_j(t)\,N_j(x,y) \qquad (3.2.2)$$
where $N_j(x,y)$ is called the basis function for the node with global index j and n is the total number of nodes. The basis function must satisfy two conditions. First, it must have a value of 1 at node j and second, it must be zero at all other nodes. Zienkiewicz and Taylor (1989) call these functions "shape functions". The basis function is defined over the domain and is thus a global function. For practical purposes we have to build up the basis functions through functions defined over each element. Once the element functions are defined the global basis function is obtained from the relationship

$$N_i(x,y) = \sum_{e=1}^{m} N_i^e(x,y) \qquad (3.2.3)$$

where m is the total number of elements. $N_i^e(x,y)$ is called the unit function; it is defined over element e and is a part of the basis function $N_i(x,y)$. The terminology used here is not widely accepted. Zienkiewicz and Taylor (1989) call the unit functions local
shape functions, while Pinder and Gray (1977) call them local base functions. The notation employed by the above authors is also quite different. Zienkiewicz and Taylor (1990) use $N_i$ both for the global shape functions and for the local shape functions; the local functions do not require an index e since they are only used in equations which are defined in local (element) coordinates. Both functions are referred to as shape functions without any distinction between them (besides the notation difference). Pinder and Gray (1977) use different notation for unit and basis functions but refer to all functions as base functions. The notation, however, makes it clear which functions they are referring to. Here we employ the terminology used by Kinzelbach (1986). Since different notation is necessary to distinguish between the two functions, either different terms should be employed, or the use of the epithets "global" and "local" is required. Each element will have a total of unit functions equal to the number of nodes defining the element.
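To illustrate what the unit functions look like in practice, the sketch below constructs the three unit functions of a linear triangular element; the element coordinates are arbitrary, and linear triangles are only one common choice.

import numpy as np

# Linear-triangle unit functions N_i^e = (a_i + b_i*x + c_i*y) / (2A):
# each equals 1 at its own node and 0 at the other two nodes.
def triangle_unit_functions(xy):
    (x1, y1), (x2, y2), (x3, y3) = xy
    A2 = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)  # twice the element area
    a = np.array([x2 * y3 - x3 * y2, x3 * y1 - x1 * y3, x1 * y2 - x2 * y1])
    b = np.array([y2 - y3, y3 - y1, y1 - y2])
    c = np.array([x3 - x2, x1 - x3, x2 - x1])
    return lambda x, y: (a + b * x + c * y) / A2

N = triangle_unit_functions([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
print(N(0.0, 0.0), N(1.0, 0.0), N(0.0, 1.0))  # (1,0,0), (0,1,0), (0,0,1)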
If the approximate solution $\hat{h}$ is inserted into the flow equation it will not completely satisfy it, but leave a residual $\Re$. The residual can be thought of as a function of time and space, $\Re(x,y,t)$. This function would be given, by definition, from the following equation

$$\frac{\partial}{\partial x}\Big(T\frac{\partial\hat{h}}{\partial x}\Big) + \frac{\partial}{\partial y}\Big(T\frac{\partial\hat{h}}{\partial y}\Big) + q - S\frac{\partial\hat{h}}{\partial t} = \Re(x,y,t) \qquad (3.2.4)$$

The residual function at any point (x,y) and time t has a value which depends on the values given to $h_j(t)$ (j = 1,...,n). According to the weighted residual method, in order to make $\Re$ minimal in a global sense, we can demand that

$$\int_\Omega W_i\,\Re\,dx\,dy = 0 \qquad \text{for } i = 1,\dots,n \qquad (3.2.5)$$

where $W_i$ are the weight functions.
Galerkin's approach is to use the basis functions $N_i$ in place of the $W_i$ functions and demand:

$$\int_\Omega N_i\,\Re\,dx\,dy = 0 \qquad \text{for } i = 1,\dots,n \qquad (3.2.6)$$

Inserting equation 3.2.4 into 3.2.6 gives a system of n linear differential equations with respect to time for the n unknown piezometric heads.

$$\int_\Omega N_i\Big[\frac{\partial}{\partial x}\Big(T\frac{\partial\hat{h}}{\partial x}\Big) + \frac{\partial}{\partial y}\Big(T\frac{\partial\hat{h}}{\partial y}\Big) + q - S\frac{\partial\hat{h}}{\partial t}\Big]\,dx\,dy = 0 \qquad \text{for } i = 1,\dots,n \qquad (3.2.7)$$
Using Green's theorem the first two terms can be integrated by parts, yielding

$$\int_\Omega \Big[-T\frac{\partial\hat{h}}{\partial x}\frac{\partial N_i}{\partial x} - T\frac{\partial\hat{h}}{\partial y}\frac{\partial N_i}{\partial y} - S\frac{\partial\hat{h}}{\partial t}N_i + q\,N_i\Big]\,dx\,dy + \oint_\Gamma T\,N_i\,\nabla\hat{h}\cdot\vec{n}_\Gamma\,ds = 0 \qquad \text{for } i = 1,\dots,n \qquad (3.2.8)$$

where $\vec{n}_\Gamma$ is the unit vector normal to the boundary Γ, and ds the differential of path length along the boundary. The integral over the boundary Γ is zero if node i does not lie on the boundary because $N_i$ is equal to zero there. If a node lies on the boundary then the boundary integral can be written as

$$\oint_\Gamma T\,N_i\,\nabla\hat{h}\cdot\vec{n}_\Gamma\,ds = \oint_\Gamma q_n\,N_i\,ds \qquad (3.2.9)$$

where $q_n$ is the flow across the boundary per unit boundary length. This flow must be specified as part of the input to the model. The boundary flux information is part of the boundary conditions of the problem. More specifically this is a second kind or Neumann boundary condition.
At prescribed heads the integrals do not need to be evaluated because the head is already known, and the following equation replaces the equivalent FEM system equation

$$h_i(t) = f(t) \qquad (3.2.10)$$

A prescribed head condition can be applied to any node, but in most problems it is used to define conditions at the boundary and is therefore another type of boundary condition. It is more specifically called a first kind or Dirichlet boundary condition.
There also exists a more general type of boundary condition which is called a third kind
condition or a mixed condition or Robin condition. This boundary condition is the
most general type of boundary condition and requires the definition of a boundary flux
and a boundary head value. It is usually indirectly implemented with the use of a leakage term which is introduced into the groundwater equation 3.2.1 (for details see Kinzelbach, 1986).
If we look closely at the equation system, we observe that T, S, q and $N_i$ are known (the partial derivatives of $N_i$ with respect to x and y are therefore also known). Since upon integration these known quantities are absorbed into constant coefficients, we can write the above system in the following form

$$\sum_{j=1}^{n} P_{ij}\,h_j + \sum_{j=1}^{n} R_{ij}\,\frac{dh_j}{dt} - F_i = 0 \qquad \text{for } i = 1,\dots,n \qquad (3.2.11)$$

where
$$P_{ij} = \int_\Omega \Big(T\frac{\partial N_i}{\partial x}\frac{\partial N_j}{\partial x} + T\frac{\partial N_i}{\partial y}\frac{\partial N_j}{\partial y}\Big)\,dx\,dy$$

$$R_{ij} = \int_\Omega S\,N_i\,N_j\,dx\,dy$$

$$F_i = \int_\Omega q\,N_i\,dx\,dy + \oint_\Gamma q_n\,N_i\,ds \qquad (3.2.12)$$
If we replace in equations 3.2.12 the basis functions with their sums of unit functions, meaning that we use equation 3.2.3, and then perform the integration over each element instead of the domain, we get exactly the same equations with the difference that instead of $N_i$ we have $N_i^e$, and instead of defining P, R and F over the domain, we must define them over an element. This gives us the following local (element) equations:

$$P_{ij}^e = \int_e \Big(T^e\frac{\partial N_i^e}{\partial x}\frac{\partial N_j^e}{\partial x} + T^e\frac{\partial N_i^e}{\partial y}\frac{\partial N_j^e}{\partial y}\Big)\,dx\,dy$$

$$R_{ij}^e = \int_e S^e\,N_i^e\,N_j^e\,dx\,dy$$

$$F_i^e = \int_e q\,N_i^e\,dx\,dy + \int_{\Gamma^e} q_n\,N_i^e\,ds \qquad (3.2.13)$$

where

$$P_{ij} = \sum_{e=1}^{M} P_{ij}^e, \qquad R_{ij} = \sum_{e=1}^{M} R_{ij}^e \qquad (3.2.14)$$
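A minimal sketch of how element contributions of the form 3.2.13 can be computed and summed into a global matrix according to equation 3.2.14, here for the P matrix only, using linear triangles with element-wise constant transmissivity; the mesh and parameter values are illustrative assumptions.

import numpy as np

# P_ij^e for a linear triangle (constant unit-function gradients), scatter-added
# into the global P matrix following equation 3.2.14.
def element_P(xy, T):
    (x1, y1), (x2, y2), (x3, y3) = xy
    A2 = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    b = np.array([y2 - y3, y3 - y1, y1 - y2]) / A2   # dN_i^e/dx
    c = np.array([x3 - x2, x1 - x3, x2 - x1]) / A2   # dN_i^e/dy
    return T * 0.5 * A2 * (np.outer(b, b) + np.outer(c, c))

nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
elements = [(0, 1, 2), (0, 2, 3)]                    # two triangles on a unit square
P = np.zeros((4, 4))
for conn in elements:
    Pe = element_P(nodes[list(conn)], T=1.0e-3)      # assumed transmissivity [m^2/s]
    for local_i, i in enumerate(conn):
        for local_j, j in enumerate(conn):
            P[i, j] += Pe[local_i, local_j]
print(np.round(P, 5))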
In the third equation of 3.2.13, besides the boundary flux $q_n$, we must also specify the external flux q for element e. There are many types of external element fluxes and each
type requires the integral to be evaluated differently. The most common external fluxes
are due to: 1. Recharge/discharge wells which are point external flux sources/sinks and,
2. Precipitation input which is usually considered to be a uniform recharge over the
whole element area.
In the point source/sink case the external flux is replaced with

$$q = Q\,\delta(x-x_r)\,\delta(y-y_r) \qquad (3.2.15)$$

where δ is the Dirac delta function and (x_r, y_r) the position of the point flux, and

$$F_i^e = \iint_e N_i^e\,Q\,\delta(x-x_r)\,\delta(y-y_r)\,dx\,dy = Q\,N_i^e(x_r,y_r) \qquad (3.2.16)$$
To avoid complications one usually positions the nodes in such a way that the position of the point flux corresponds to the position of one of the nodes. In that case the unit function for node i is equal to 1.0 at the flux position, while the unit functions of all other nodes in the same element are zero there. One must keep in mind that equation 3.2.16 is defined for element e, and therefore the point flux, besides being assigned to a specific node, needs to be assigned to an element as well. Any element which contains the flux node can be used for this purpose, but care must be taken to avoid assigning it to more than one element.
Using equations 3.2.13 and 3.2.14 we can build up the equation system 3.2.11. Before proceeding with the solution of the equation system, we need to estimate the partial derivative of h with respect to t. This is done by replacing the derivative with a difference quotient. The final form of the system is thus

$$P\,h(t') + R\,\frac{h(t+\Delta t) - h(t)}{\Delta t} - F = 0 \qquad (3.2.17)$$

where P and R are n × n matrices, F and h(·) are vectors with n entries and Δt is a scalar, and we also have the condition

$$t' \in [t,\, t+\Delta t]$$

The significance of t' is that we know there exists a t' in the range t to t + Δt for which the partial derivative evaluated at t' is equal to the difference quotient. Of course t' is not known, and therefore the difference quotient is only approximately correct.

The system 3.2.17 can be solved by an explicit scheme by setting t' = t (this method can be unstable) or an implicit scheme where t' = t + Δt. We can also use the more generalized scheme where h(t') is replaced by a weighted average,
$$h(t') = (1-\theta)\,h(t) + \theta\,h(t+\Delta t) \qquad 0 \le \theta \le 1 \qquad (3.2.18)$$

When θ = 0.5 we get the Crank-Nicolson method, which is a second order method.
The hydraulic heads h_i(t) at nodes i = 1,...,n at a target time t_o are obtained iteratively. In order to start the iterative process we need to know the hydraulic heads h_i(t_s), i = 1,...,n, at the starting time t_s, which is usually set to 0.0. This information constitutes the initial conditions of the problem. Knowing the nodal hydraulic heads at time t_s and using some small value for Δt, we can solve the equation system 3.2.17 to obtain the nodal hydraulic heads at time t_s + Δt. With the new nodal heads we repeat the process to obtain the solution at time t_s + 2Δt, and we continue in this manner until we obtain the solution at t_o = t_s + kΔt. Obviously Δt cannot be chosen freely if we want to obtain the solution at a specific time t_o.
For the purposes of this thesis a fully transient groundwater flow model is not required. We simply need to obtain a steady state solution, and this simplifies equation system 3.2.17 considerably. The R matrix vanishes from the system, since the derivative of h with respect to time is zero for steady state cases. The steady state system is thus

$$P\,h - F = 0 \qquad (3.2.19)$$

which can be solved once the P matrix and F vector have been calculated.
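The steady state solve then amounts to imposing the prescribed heads of equation 3.2.10 and solving the linear system. A minimal sketch, under the same assumptions as the assembly sketch above:

```python
import numpy as np

def solve_steady(P, F, fixed):
    """Solve P h = F (eq. 3.2.19) with prescribed heads (eq. 3.2.10).

    fixed : dict mapping node index -> prescribed head value
    """
    P = P.copy()
    F = F.copy()
    for i, h0 in fixed.items():
        P[i, :] = 0.0          # replace the nodal equation for node i ...
        P[i, i] = 1.0          # ... by the identity h_i = h0
        F[i] = h0
    return np.linalg.solve(P, F)
```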
3.3. Pollutant Transport
The pollution equation has already been introduced in chapter 2, and has the form

$$\frac{\partial c}{\partial t} + \nabla\cdot(\mathbf u\,c) - \nabla\cdot(\mathbf D\,\nabla c) + \lambda\,c - \frac{q}{n_e\,m}\,(c_{in} - c) = 0 \qquad (3.3.1)$$

where c is the concentration, u the pore velocity vector, D the dispersion tensor, λ the decay constant, and the last term the source term due to a recharge q with concentration c_in.
Equation 3.3.1 is defined over a domain Ω with a boundary Γ. Once again we introduce an approximation function of the form

$$\tilde c = \sum_{i=1}^n c_i(t)\,N_i(x,y) \qquad (3.3.2)$$
where c_i represents the unknown concentration at node i. The basis functions N_i have the same properties outlined previously and can be decomposed into the elementary basis functions according to the same relationship used in the groundwater section, i.e.

$$N_i = \sum_{e=1}^m N_i^e(x,y) \qquad (3.3.3)$$
Inserting the approximation function 3.3.2 into 3.3.1 results in a residual ℜ(x,y,t), which is forced to be as small as possible in a global sense by demanding that the Galerkin integral conditions

$$\iint_\Omega \Re(x,y,t)\,N_i\,dx\,dy = 0 \qquad (3.3.4)$$

be satisfied for every node i. These n conditions form a system of ordinary differential equations for the n unknown concentrations. Inserting equation 3.3.2 into 3.3.1, demanding the fulfillment of 3.3.4, and integrating the dispersive term by parts, we obtain

$$\sum_{j=1}^n \frac{dc_j}{dt}\iint_\Omega N_j\,N_i\,dx\,dy + \sum_{j=1}^n c_j\iint_\Omega\left[(\mathbf D\,\nabla N_j)\cdot\nabla N_i + \nabla\cdot(\mathbf u\,N_j)\,N_i + \left(\lambda + \frac{q}{n_e m}\right)N_j\,N_i\right]dx\,dy - \iint_\Omega \frac{q\,c_{in}}{n_e m}\,N_i\,dx\,dy - \oint_\Gamma(\mathbf D\,\nabla\tilde c)\cdot\vec n_\Gamma\,N_i\,ds = 0 \qquad (3.3.5)$$
with i = 1,...,n. The boundary integral represents the total dispersive pollutant flux across the boundary. It is zero at impervious boundaries or at boundaries which are well removed from the pollutant plume. At prescribed concentration boundaries, knowledge of the boundary integral is not required. The boundary integral is usually written in the more condensed form given below, where vector-matrix notation has been employed,

$$Q_{disp,i} = \oint_\Gamma (\mathbf D\,\nabla c)\cdot\vec n_\Gamma\,N_i\,ds \qquad (3.3.6)$$

Along boundaries with a prescribed total pollutant flux, we calculate the flux by demanding that the total flux outside the boundary is the same as the flux inside the boundary,

$$\left(\mathbf u\,c - \mathbf D\,\nabla c\right)\cdot\vec n_\Gamma = j_n \qquad (3.3.7)$$

where j_n is the prescribed total flux per unit boundary length. Equation 3.3.7 corresponds to a third-kind boundary condition. If the total flux is known, then the dispersive flux can be expressed in terms of the unknown nodal concentrations on the basis of equation 3.3.7. Although the total flux might be known at inflow boundaries, it is usually unknown at outflow boundaries. In the latter case one can use 3.3.7 by making the assumption that both the dispersive and convective fluxes are the same on the inside and the outside of the boundary. This assumption leads to a transmission boundary.
The above problems are usually avoided if we choose the modeled region in such a way that the pollutant distribution either does not reach the boundary, or reaches the boundary only at nodes where the dispersive flux can be neglected. All inflow boundaries are thus treated as prescribed-concentration boundaries, while outflow boundaries are treated as zero-dispersive-flux boundaries.
As we did previously for the groundwater problem, the domain integrals of equation 3.3.5 are split up into sums of integrals over the individual elements,

$$\sum_{j=1}^n P_{ij}\,c_j(t) + \sum_{j=1}^n R_{ij}\,\frac{dc_j(t)}{dt} - F_i = 0 \qquad (3.3.8)$$

where the elements of each of the matrices P, R and vector F are defined as

$$P_{ij} = \sum_{e=1}^m \iint_e\left[(\mathbf D^e\,\nabla N_j^e)\cdot\nabla N_i^e + \nabla\cdot(\mathbf u^e\,N_j^e)\,N_i^e + \left(\lambda^e + \frac{q}{n_e m}\right)N_i^e\,N_j^e\right]dx\,dy \equiv \sum_{e=1}^m P_{ij}^e \qquad (3.3.9a)$$

$$R_{ij} = \sum_{e=1}^m \iint_e N_i^e\,N_j^e\,dx\,dy \equiv \sum_{e=1}^m R_{ij}^e \qquad (3.3.9b)$$

$$F_i = \sum_{e=1}^m\left[\iint_e \frac{q\,c_{in}}{n_e m}\,N_i^e\,dx\,dy + Q_{disp,i}^e\right] \equiv \sum_{e=1}^m F_i^e \qquad (3.3.9c)$$

Q^e_disp,i is the dispersive flux defined over element e and assigned to node i, and is considered only in the case of a prescribed dispersive boundary flux¹. The re(dis)charge q must be considered separately. Basically there are two cases we must consider: first, the case of a re(dis)charge concentrated at a point, and second, a re(dis)charge distributed over an area. In the case of a re(dis)charge at a point with coordinates (x_r, y_r), the contributions to P and F are estimated according to the relationships

$$\iint_e \frac{q}{n_e m}\,N_i^e\,N_j^e\,dx\,dy = \frac{Q}{n_e m}\,N_i^e(x_r,y_r)\,N_j^e(x_r,y_r)$$
$$\iint_e \frac{q}{n_e m}\,c_{in}\,N_i^e\,dx\,dy = \frac{Q}{n_e m}\,c_{in}\,N_i^e(x_r,y_r) \qquad (3.3.10)$$

¹ The prescribed dispersive boundary flux must be defined in the case of an impermeable boundary. If it is set to zero it means that the pollution will not contaminate the impermeable layer.

If the point coincides with one of the nodes then, as we did for the flow equation, we must take it into consideration only for one specific element.
At prescribed concentration boundaries the nodal equation is replaced by

$$c_i(t) = f_i(t) \qquad (3.3.11)$$

where f_i(t) is the prescribed concentration.
The equation system described by 3.3.8 is a system of ordinary differential equations with respect to time, and has exactly the same form as the equation system obtained for the flow equation. The usual way to solve such a system is to replace the time derivatives with a finite difference approximation of the type

$$\frac{dc}{dt} = \frac{c(t+\Delta t) - c(t)}{\Delta t} \qquad (3.3.12)$$

In order to obtain an approximation at time T using 3.3.12, we first divide T into small steps Δt (not necessarily equal), and, starting from the initial time and using the initial conditions defining the nodal concentrations (usually at time t = 0), we find an approximation for t + Δt, then for t + 2Δt, and proceed in this manner until an approximation at time T is obtained. Introducing weighted averages in order to enhance the flexibility of the finite difference approximation, we obtain

$$P\left[(1-\theta)\,c(t) + \theta\,c(t+\Delta t)\right] + R\,\frac{c(t+\Delta t) - c(t)}{\Delta t} - F = 0 \qquad (3.3.13)$$

where θ ∈ [0,1]. For θ = 1 we get the fully implicit scheme, while for θ = 0.5 we get the Crank-Nicolson scheme. The above equation system can be made more compact by introducing the difference term

$$\Delta c_j = c_j(t+\Delta t) - c_j(t) \qquad (3.3.14)$$

into equation 3.3.13. After some rearrangements we obtain the system

$$\left(\frac{1}{\Delta t}\,R + \theta\,P\right)\Delta c = F - P\,c(t) \qquad (3.3.15)$$
The system 3.3.15 can be solved for Δc, and then c(t + Δt) is calculated from 3.3.14. One can easily see that the calculation effort in 3.3.15 is considerably reduced compared to that of system 3.3.13.
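As a minimal illustration of this two-step update (a sketch, assuming P, R and F have already been assembled as dense NumPy arrays):

```python
import numpy as np

def step(P, R, F, c, dt, theta=0.5):
    """Advance the nodal concentrations one time step via eqs. 3.3.15 and 3.3.14."""
    A = R / dt + theta * P          # left-hand matrix of eq. 3.3.15
    rhs = F - P @ c                 # right-hand side, evaluated at time t
    dc = np.linalg.solve(A, rhs)    # solve for the concentration increment
    return c + dc                   # eq. 3.3.14: c(t + dt) = c(t) + dc
```

With theta = 0.5 this is the Crank-Nicolson scheme, and with theta = 1.0 the fully implicit scheme discussed above.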
3.4. Triangular Elements
There are many types of basis functions which can be used for the purposes of the finite
element method. The simplest and most common basis function used in the finite element method is the one defined with the use of linear triangular elements. Linear triangular elements are defined by three non-collinear nodes at the corners of the triangle. Assigning to each node of the triangle the indices i, j and k, and using their coordinates (x, y), we write the unit functions for each node in the following form,
$$N_i^e(x,y) = \begin{cases}\dfrac{(x_jy_k - x_ky_j) + (y_j - y_k)\,x + (x_k - x_j)\,y}{D^e} & \text{for } (x,y) \text{ in element } e\\[2mm] 0 & \text{elsewhere}\end{cases} \qquad (3.4.1)$$

while the other two unit functions N_j^e, N_k^e can be found by cyclic permutation of i, j, k, and

$$D^e = \begin{vmatrix}1 & x_i & y_i\\ 1 & x_j & y_j\\ 1 & x_k & y_k\end{vmatrix} \qquad (3.4.2)$$
Equation 3.4.1 represents a plane in 2D, and that is the reason for calling these triangular elements "linear". The unit and basis functions for linear triangular elements are illustrated in figures 3.4.1 and 3.4.2.
The basis function seen in figure 3.4.2 shows only the part of the basis function N_i which is non-zero, i.e. the part of the function which is of interest. The basis functions must nevertheless always be defined over the whole domain Ω. This is simply done by assigning a value of zero in all elements which are not defined by node i.

Figure 3.4.1. The unit functions for linear triangular elements.

Figure 3.4.2. The basis function N_i, formed as the sum of the unit functions N_i^e of the six elements sharing node i.
In order to simplify notation, the following abbreviations are always used for defining linear triangular elements,

$$a_i = x_jy_k - x_ky_j \qquad b_i = y_j - y_k \qquad c_i = x_k - x_j \qquad (3.4.3)$$

with the other coefficients obtained by cyclic permutation of i, j, k, where x_n, y_n, n = i, j, k are the coordinates of the corners of the triangular element and i, j, k are taken counterclockwise. With this notation the unit functions for element e become

$$N_i^e(x,y) = \frac{a_i + b_i\,x + c_i\,y}{D^e} \qquad (3.4.4)$$
If we are to use linear triangular elements in the finite element method we simply need
to replace the unit functions in the finite element system equations with equations 3.4.4
and then perform the integrations. To perform these integrations it has been found that
transformation to area coordinates simplifies the integrations considerably.
Figure 3.4.3. Element illustrating area coordinates (based on Pinder and Gray, 1977).
Given a point P in a triangle, we can divide the triangle into three areas as illustrated in figure 3.4.3. We can now define three area coordinates L_i, L_j and L_k, where L_i = A_i/A, L_j = A_j/A and L_k = A_k/A, and A is the total triangle area. If we describe the area coordinates in terms of the coordinates of point P and the coordinates of the points i, j and k, we obtain equations 3.4.4. In other words, the area coordinates are equal to the unit functions. This equality allows us to write

$$\iint_e f(N_i^e,N_j^e)\,dx\,dy = \iint_e f(L_i,L_j)\,dA \qquad (3.4.5)$$
and the second integral can be easily performed once dA is expressed in area coordinates. From calculus we know that the transformation is expressed by

$$dL_i\,dL_j = \det J\,dA \qquad (3.4.6)$$

where J is the Jacobian matrix, defined as

$$J = \begin{pmatrix}\partial L_i/\partial x & \partial L_j/\partial x\\ \partial L_i/\partial y & \partial L_j/\partial y\end{pmatrix} = \begin{pmatrix}b_i/D^e & b_j/D^e\\ c_i/D^e & c_j/D^e\end{pmatrix} \qquad (3.4.7)$$

and the determinant of the Jacobian matrix can easily be found to be equal to det J = 1/D^e. So we obtain the transformation

$$dA = D^e\,dL_i\,dL_j \qquad (3.4.8)$$

The integration is now carried out in the following manner,

$$\iint_e f(N_i^e,N_j^e)\,dA = D^e\int_{L_j=0}^{1}\int_{L_i=0}^{1-L_j} f(L_i,L_j)\,dL_i\,dL_j \qquad (3.4.9)$$

Some of the most common integrals give the following results:
$$\int_e L_l\,dA = \frac{D^e}{6} \qquad l = i,j,k \qquad (3.4.10a)$$
$$\int_e L_l^2\,dA = \frac{D^e}{12} \qquad l = i,j,k \qquad (3.4.10b)$$
$$\int_e L_l\,L_m\,dA = \frac{D^e}{24} \qquad l \ne m \qquad (3.4.10c)$$
$$\int_e dA = \frac{D^e}{2} \qquad (3.4.10d)$$
$$\int_e \frac{\partial N_l^e}{\partial x}\,dA = \frac{b_l}{2} \qquad (3.4.10e)$$
$$\int_e \frac{\partial N_i^e}{\partial x}\frac{\partial N_j^e}{\partial x}\,dA = \frac{b_i\,b_j}{2D^e} \qquad (3.4.10f)$$
$$\int_e \frac{\partial N_i^e}{\partial y}\frac{\partial N_j^e}{\partial y}\,dA = \frac{c_i\,c_j}{2D^e} \qquad (3.4.10g)$$
As a further example, let us take equation 3.2.13, obtained during the finite element formulation of the flow equation,

$$P_{ij}^e = \iint_e\left(T^e\,\frac{\partial N_i^e}{\partial x}\frac{\partial N_j^e}{\partial x} + T^e\,\frac{\partial N_i^e}{\partial y}\frac{\partial N_j^e}{\partial y}\right)dx\,dy \qquad (3.4.11)$$

Using triangular linear elements and equations 3.4.10f-g obtained above, we obtain after integration

$$P_{ij}^e = \frac{T^e}{2D^e}\left(b_i\,b_j + c_i\,c_j\right) \qquad (3.4.12)$$
Although it is conventional that the nodes i, j, k are taken in a counter-clockwise order, Kinzelbach (1986) notes that changing from counter-clockwise to clockwise changes only the sign of D^e. So if we take the absolute value of D^e, we can disregard the order of the nodes in an element. Kinzelbach (1986) also introduces a convenient way of combining 3.4.10b and 3.4.10c into one equation, which has the form

$$\int_e L_l\,L_m\,dA = \frac{1}{24}\,\delta_{lm}^e\,D^e \qquad l = i,j,k \quad m = i,j,k \qquad (3.4.13)$$

where

$$\delta_{lm}^e = \begin{cases}0 & \text{for } l \text{ or } m \text{ not contained in element } e\\ 1 & \text{for } l \ne m\\ 2 & \text{for } l = m\end{cases}$$
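Equations 3.4.2, 3.4.3 and 3.4.12 translate into only a few lines of code. A sketch (a hypothetical helper, not the PAGAP routine), taking the absolute value of D^e as discussed above:

```python
import numpy as np

def triangle_stiffness(xy, T):
    """Element matrix P^e of eq. 3.4.12 for a linear triangle.

    xy : (3, 2) array of corner coordinates (nodes i, j, k)
    T  : element transmissivity T^e
    """
    x, y = xy[:, 0], xy[:, 1]
    # b_i = y_j - y_k, c_i = x_k - x_j and cyclic permutations (eq. 3.4.3)
    b = np.array([y[1] - y[2], y[2] - y[0], y[0] - y[1]])
    c = np.array([x[2] - x[1], x[0] - x[2], x[1] - x[0]])
    # D^e of eq. 3.4.2 (twice the triangle area); |D^e| makes node order irrelevant
    D = x[1]*y[2] - x[2]*y[1] - x[0]*y[2] + x[2]*y[0] + x[0]*y[1] - x[1]*y[0]
    return T / (2.0 * abs(D)) * (np.outer(b, b) + np.outer(c, c))
```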
Area coordinates can also be used for evaluating boundary integrals, which are basically straightforward line integrals. For example, a function f can be integrated along the side ij of an element in the following way,

$$\int_{ij} f\,ds = l_{ij}\int_0^1 f\,dL_j$$

where l_{ij} is the length of the element side. If the function f is defined as f = f_iL_i + f_jL_j, where f is specified at the nodes and varies linearly along the element side ij, then the line integral becomes

$$l_{ij}\int_0^1\left(f_iL_i + f_jL_j\right)dL_j = l_{ij}\,f_i\int_0^1(1-L_j)\,dL_j + l_{ij}\,f_j\int_0^1 L_j\,dL_j = l_{ij}\,f_i\left(\tfrac12\right) + l_{ij}\,f_j\left(\tfrac12\right)$$

Of course, line integrals can also be evaluated by conventional approaches, which are more convenient for linear elements.
3.5. Isoparametric Quadrilateral Elements
In general, the finite element method is formulated in the Cartesian coordinate system, using (x, y) coordinates to define the node positions, the element geometry, the interpolation functions, etc. In many cases, however, it is easier to define a local coordinate system (ξ, η) which simplifies the geometry and the numerical equations, which usually involve the evaluation of integrals.

In figure 3.5.1 we see the effect of using local coordinates on a linear quadrilateral element.
Figure 3.5.1. Quadrilateral element in Cartesian global coordinates and transformed local coordinates.
Once in the local coordinate system the quadrilateral becomes a square, and thus numerical integration techniques can be employed with ease to evaluate the integrals. In order to obtain the transformation equations we simply need to consider two things: obviously the nodes in Cartesian coordinates must be mapped to the corners of the square in local coordinates, and secondly the unit functions must be consistent with the transformation. Let us proceed to obtain these equations for the case outlined in figure 3.5.1.
Consider a linear quadrilateral element in global Cartesian coordinates with nodal coordinates given by n₁(x₁,y₁), n₂(x₂,y₂), n₃(x₃,y₃) and n₄(x₄,y₄). In order for this element to comply with the above rules, a polynomial is required of the type

$$x = a + b\,\xi + c\,\eta + d\,\xi\eta \qquad (3.5.1)$$

which is linear in ξ along lines of constant η, and linear in η along lines of constant ξ, and will give the x coordinate for any point p(ξ, η) in the transformed coordinate system. The mapping requirements are met if

$$\begin{aligned} x &= x_1 &&\text{when } \xi = \eta = -1\\ x &= x_2 &&\text{when } \xi = 1,\ \eta = -1\\ x &= x_3 &&\text{when } \xi = \eta = 1\\ x &= x_4 &&\text{when } \xi = -1,\ \eta = 1 \end{aligned} \qquad (3.5.2)$$
Substitution of these constraints into the polynomial equation yields

$$\begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix} = \begin{pmatrix}1 & \xi_1 & \eta_1 & \xi_1\eta_1\\ 1 & \xi_2 & \eta_2 & \xi_2\eta_2\\ 1 & \xi_3 & \eta_3 & \xi_3\eta_3\\ 1 & \xi_4 & \eta_4 & \xi_4\eta_4\end{pmatrix}\begin{pmatrix}a\\b\\c\\d\end{pmatrix} = \begin{pmatrix}1 & -1 & -1 & 1\\ 1 & 1 & -1 & -1\\ 1 & 1 & 1 & 1\\ 1 & -1 & 1 & -1\end{pmatrix}\begin{pmatrix}a\\b\\c\\d\end{pmatrix} = A\,p \qquad (3.5.3)$$

where (ξ₁,η₁), (ξ₂,η₂), (ξ₃,η₃) and (ξ₄,η₄) are the coordinates of the nodes in the transformed system.

The parameters p can be easily obtained from the equation

$$p = A^{-1}\,x \qquad (3.5.4)$$

where

$$A^{-1} = \frac{1}{4}\begin{pmatrix}1 & 1 & 1 & 1\\ -1 & 1 & 1 & -1\\ -1 & -1 & 1 & 1\\ 1 & -1 & 1 & -1\end{pmatrix}$$
We could have used the y coordinates of the nodes instead of the x coordinates without any change to the system of equations. A consequence of the above analysis is that we can also obtain the definition of the shape functions in local coordinates. Since the coordinates x, y can be expressed with the use of the unit functions as

$$x = \sum_{i=1}^n N_i\,x_i \qquad y = \sum_{i=1}^n N_i\,y_i \qquad (3.5.5)$$

where n is the number of nodes defining the element (i.e. one unit function per node), and the "e" superscript on the unit functions is omitted since we are talking about a specific element. Replacing x with its equivalent polynomial form, we obtain the following equation written in matrix form,

$$\{N_1\;N_2\;N_3\;N_4\}\begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix} = \{1\;\xi\;\eta\;\xi\eta\}\begin{pmatrix}a\\b\\c\\d\end{pmatrix} = \{1\;\xi\;\eta\;\xi\eta\}\,A^{-1}\begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix} \;\Rightarrow\; \{N_1\;N_2\;N_3\;N_4\} = \{1\;\xi\;\eta\;\xi\eta\}\,A^{-1} \qquad (3.5.6)$$

This last expression defines the unit functions in local coordinates. Carrying out the matrix multiplication, we obtain the final form of the unit functions in local coordinates,

$$N_1 = \tfrac14(1-\xi)(1-\eta) \qquad N_2 = \tfrac14(1+\xi)(1-\eta) \qquad N_3 = \tfrac14(1+\xi)(1+\eta) \qquad N_4 = \tfrac14(1-\xi)(1+\eta) \qquad (3.5.7)$$
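These four functions and their local derivatives are all that is needed for the numerical evaluation of the element integrals. A small sketch (illustrative names):

```python
import numpy as np

def shape_quad4(xi, eta):
    """Bilinear unit functions of eq. 3.5.7 and their derivatives in (xi, eta)."""
    N = 0.25 * np.array([(1 - xi) * (1 - eta),
                         (1 + xi) * (1 - eta),
                         (1 + xi) * (1 + eta),
                         (1 - xi) * (1 + eta)])
    # columns: dN/dxi, dN/deta (the rows of the matrix in eq. 3.5.12)
    dN = 0.25 * np.array([[-(1 - eta), -(1 - xi)],
                          [ (1 - eta), -(1 + xi)],
                          [ (1 + eta),  (1 + xi)],
                          [-(1 + eta),  (1 - xi)]])
    return N, dN
```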
With these equations we can proceed with the evaluation of the finite element integrals. One can distinguish three types of integrals which need to be evaluated, according to the unit function expressions found in the integrand: products of two unit function derivatives, products of a unit function and a derivative, and products of two unit functions, for example

$$\iint_e \frac{\partial N_i}{\partial x}\frac{\partial N_j}{\partial y}\,dx\,dy \qquad \iint_e N_i\,\frac{\partial N_j}{\partial x}\,dx\,dy \qquad \iint_e N_i\,N_j\,dx\,dy \qquad (3.5.8)$$
These integrals are defined in the Cartesian coordinate system, and therefore variable transformations must be used to obtain the equivalent forms in local coordinates. The first transformation is that of changing the integral in the following fashion,

$$\iint_e \cdots\,dx\,dy \;\to\; \int_{-1}^{1}\!\int_{-1}^{1} \cdots\,d\xi\,d\eta \qquad (3.5.9)$$

To accomplish this we need only consider the transformation

$$dx\,dy = \det(J)\,d\xi\,d\eta \qquad (3.5.10)$$

where det(J) is the determinant of the Jacobian matrix of the transformation, which is defined as

$$J = \begin{pmatrix}\partial x/\partial\xi & \partial y/\partial\xi\\ \partial x/\partial\eta & \partial y/\partial\eta\end{pmatrix} \qquad (3.5.11)$$

and can be estimated from the relationship

$$J = \begin{pmatrix}\dfrac{\partial N_1}{\partial\xi} & \dfrac{\partial N_2}{\partial\xi} & \dfrac{\partial N_3}{\partial\xi} & \dfrac{\partial N_4}{\partial\xi}\\[2mm] \dfrac{\partial N_1}{\partial\eta} & \dfrac{\partial N_2}{\partial\eta} & \dfrac{\partial N_3}{\partial\eta} & \dfrac{\partial N_4}{\partial\eta}\end{pmatrix}\begin{pmatrix}x_1 & y_1\\ x_2 & y_2\\ x_3 & y_3\\ x_4 & y_4\end{pmatrix} \qquad (3.5.12)$$

The above relationship is written specifically for the linear isoparametric elements defined previously, in other words for an element defined by four nodes with four unit functions.
The Jacobian matrix J as defined above can be used directly to transform from local coordinates to Cartesian coordinates. In most cases, however, we need to transform in the opposite direction, i.e. from (x, y) → (ξ, η), and thus we need the inverse of the Jacobian. In this specific case the Jacobian is a 2×2 matrix, and its inverse is known to be given by the general expression

$$A = \begin{pmatrix}a & b\\ c & d\end{pmatrix} \;\to\; A^{-1} = \frac{1}{\det(A)}\begin{pmatrix}d & -b\\ -c & a\end{pmatrix} \qquad (3.5.13)$$

where det(A) = ad − bc, and in the Jacobian matrix case

$$J^{-1} = \frac{1}{\det(J)}\begin{pmatrix}J_{22} & -J_{12}\\ -J_{21} & J_{11}\end{pmatrix} \qquad (3.5.14)$$

Once the determinant and the entries of the Jacobian are estimated, the inverse Jacobian can be built up and all transformations from Cartesian to local coordinates can take place. In general we will have that

$$\begin{pmatrix}\partial N_i/\partial x\\ \partial N_i/\partial y\end{pmatrix} = J^{-1}\begin{pmatrix}\partial N_i/\partial\xi\\ \partial N_i/\partial\eta\end{pmatrix} \qquad (3.5.15)$$
So, after the transformations, we obtain for example the integral

$$\iint_e N_i\,N_j\,dx\,dy = \int_{-1}^{1}\!\int_{-1}^{1} N_i\,N_j\,\det(J)\,d\xi\,d\eta \qquad (3.5.16)$$

and similarly for the integrals involving derivatives. Using the above definitions and transformations we can transform any FEM integral into local coordinates. To evaluate the resulting integrals we employ numerical integration.
The most common numerical integration technique used is Gaussian quadrature. In this method the integral of a function f(x) in the interval [−1, 1] is approximated by a weighted mean of n particular values of the function, evaluated at particular points. To be more specific, Gauss defined n specific points x_i and n weights w_i for which

$$\int_{-1}^{1} f(x)\,dx = \sum_{i=1}^{n} w_i\,f(x_i) + R \qquad (3.5.17)$$

where R is the residual, equal to

$$R = \frac{2^{2n+1}\,(n!)^4}{(2n+1)\left[(2n)!\right]^3}\,f^{(2n)}(\xi) \qquad (3.5.18)$$

Equation 3.5.18 shows that if f(x) is a polynomial of degree 2n−1 or smaller, then the residual is zero (since the 2n-th derivative will be zero) and equation 3.5.17 will be exact. So the number of points used determines the precision of the approximation.
Table 3.5.1. Truncated values of Gauss points and weight factors.

    n    x                      w
    2    ±0.577350269189626     1.000000000000000
    3    ±0.774596669241483     0.555555555555556
          0.000000000000000     0.888888888888889
    4    ±0.861136311594053     0.347854845137454
         ±0.339981043584856     0.652145154862546

Extensive tables giving the weights w_i and the points x_i for different values of n can be found in the literature (for example Abramowitz and Stegun, 1970). Table 3.5.1 shows the weights and point values for n = 2, 3 and 4.
In two dimensions the integral has the form

$$I = \int_{-1}^{1}\!\int_{-1}^{1} f(x,y)\,dx\,dy$$

The integration proceeds stepwise. The inner integral is considered first, while holding y constant,

$$\int_{-1}^{1} f(x,y)\,dx = \sum_{j=1}^{n} w_j\,f(x_j,y) = \psi(y)$$

The outer integral can now be evaluated as

$$\int_{-1}^{1}\psi(y)\,dy = \sum_{i=1}^{n} w_i\,\psi(y_i) = \sum_{i=1}^{n}\sum_{j=1}^{n} w_i\,w_j\,f(x_j,y_i) = I$$

where for n = 3 a polynomial of up to the fifth power in both x and y can be integrated exactly.

For isoparametric linear quadrilateral elements, the resulting functions which must be integrated will be at most of third order, and therefore with n = 2 we obtain exact evaluations of all integrals.
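Putting the pieces together, the 2×2 Gauss evaluation of integral 3.5.16 over one element can be sketched as follows, reusing the shape_quad4 helper sketched earlier (both Gauss weights are 1.0 for n = 2):

```python
import numpy as np

GP = np.array([-1.0, 1.0]) / np.sqrt(3.0)  # 2-point Gauss abscissae, weights = 1

def mass_matrix_quad4(xy):
    """Evaluate eq. 3.5.16 over one bilinear quadrilateral (xy: (4, 2) corners)."""
    M = np.zeros((4, 4))
    for xi in GP:
        for eta in GP:
            N, dN = shape_quad4(xi, eta)
            J = dN.T @ xy                            # 2x2 Jacobian of eq. 3.5.12
            M += np.outer(N, N) * np.linalg.det(J)   # weights are 1.0
    return M
```

For a rectangular element this reproduces the familiar consistent mass matrix, with the largest entries on the diagonal, as discussed in section 3.6 below.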
3.6. The Finite Element Method in Convection Dominated Problems

We have already mentioned in the introduction to this chapter that the finite element method is not well suited for the solution of convection-diffusion equations. Comparisons with analytical solutions often show that the numerical solution exhibits a higher degree of dispersion, while at other times it exhibits oscillations.

We shall follow the presentation of Zienkiewicz and Taylor (1991), who discuss these problems extensively. For simplicity we shall first consider the 1D case of the problem and then show how the principles involved can be extended to 2D problems. Assume a steady state 1D problem with the equation

$$u\,\frac{dc}{dx} - D\,\frac{d^2c}{dx^2} + Q = 0 \qquad (3.6.1)$$

where u is the velocity, D the dispersion and Q the external pollutant flux. The concentration c is of course the unknown. The above is an ordinary second order differential equation. Using linear unit functions and employing the Galerkin approach, we obtain the numerical system equations

$$(Pe - 1)\,\hat c_{i+1} + 2\,\hat c_i - (Pe + 1)\,\hat c_{i-1} + \frac{Q\,h^2}{D} = 0 \qquad (3.6.2)$$

where Pe = uh/2D and ĉ_i is the estimated concentration at node i. Pe is known as the mesh Peclet number. The above equation is also obtained if a central finite difference scheme is used. Unfortunately, the above numerical equation is not very stable and becomes increasingly unstable as Pe becomes larger. A large Peclet number indicates that the process is convection dominated, while low Peclet numbers indicate a dispersion dominated process. The instability of the solution for large Pe usually results in oscillations of the solution which bear no relation to the underlying problem.
In steady state problems the instabilities occur near the boundary regions, where the imposed boundary conditions are felt only in a very small region; this is referred to as the boundary layer problem. Practically, the instabilities occur due to bad estimates of the gradients in these regions. Convection dominated problems cause the concentration to change abruptly, and if the element mesh is not adequately fine to "capture" the sudden change, the solution starts oscillating.

Intuitively, we understand that if we manage to approximate the gradients through a better scheme, then the problems will be reduced. The practitioners of the finite difference method found that if an upwind difference scheme was used for the approximation of the convection gradient, then a realistic solution was obtained (not always accurate) which was very stable.
Petrov-Galerkin type weight functions are capable of introducing the same upwind difference scheme effects into the finite element method. The weighting functions for linear 1D elements are constructed as

$$W_i = N_i + \alpha\,\overline W_i \qquad\text{where}\quad \int_e \overline W_i\,dx = \frac{h}{2} \qquad (3.6.3)$$

There are several choices of weighting functions with the above properties, the simplest being

$$\alpha\,\overline W_i = \operatorname{sign}(u)\,\alpha\,\frac{h}{2}\,\frac{dN_i}{dx} \qquad (3.6.4)$$

The numerical equations obtained with these weight functions depend on the value of the parameter α. For α = 0 the standard Galerkin approximation is obtained, which gives very good results for dispersion dominated problems, while for α = 1 the full upwind approximation scheme is obtained, which gives very good solutions for convection dominated problems. Now, if the value of this parameter is chosen as

$$|\alpha| = \alpha_{opt} = \coth|Pe| - \frac{1}{|Pe|} \qquad (3.6.5)$$

we obtain a good approximation for any value of the Peclet number. For the 1D case it can be shown that if α has values which are larger than a critical value defined by the equation

$$|\alpha| > \alpha_{crit} = 1 - \frac{1}{|Pe|} \qquad (3.6.6)$$

then oscillatory solutions will never arise. For the Galerkin approach α = 0, and thus oscillatory solutions will be obtained for Pe > 1.
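Equations 3.6.5 and 3.6.6 are easily tabulated; a small sketch:

```python
import numpy as np

def alpha_opt(pe):
    """Optimal upwind parameter of eq. 3.6.5 (pe: nonzero mesh Peclet number)."""
    return 1.0 / np.tanh(abs(pe)) - 1.0 / abs(pe)   # coth|Pe| - 1/|Pe|

def alpha_crit(pe):
    """Critical value of eq. 3.6.6; above it, no oscillations arise."""
    return 1.0 - 1.0 / abs(pe)

for pe in (0.5, 1.0, 5.0, 50.0):
    print(pe, alpha_opt(pe), alpha_crit(pe))
```

As Pe grows, α_opt approaches 1 (full upwinding), and α_opt always lies above α_crit, so the optimal choice is oscillation-free.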
Zienkiewicz and Taylor (1991) note that the Petrov-Galerkin approach is equivalent to the use of the standard Galerkin approach with the addition of a dispersion

$$D_a = \tfrac12\,\alpha\,u\,h \qquad (3.6.7)$$

Inserting this dispersion into the equations (provided Q is constant) gives the same results as the Petrov-Galerkin process. It also gives us some insight into the effects of the Petrov-Galerkin approach. Introducing such a dispersion into the equations is much easier than implementing the Petrov-Galerkin approach, especially in 2D or 3D problems, but unfortunately it does not provide any modification to the source terms, and although the results obtained seem reasonable, they are usually erroneous.
The Petrov-Galerkin weighting scheme can be extended to 2D problems. The Peclet number is now a vector quantity, defined as

$$\mathbf{Pe} = \frac{\mathbf u\,h}{2D} \qquad\text{where}\quad \mathbf u = \begin{Bmatrix}u_x\\ u_y\end{Bmatrix} \qquad (3.6.8)$$

Since convection is only active in the direction of the resultant element velocity, the upwinding needs to be directional in 2D, depending on the direction of the velocity u. This is introduced by the following weight functions,

$$W_i = N_i + \frac{\alpha\,h}{2}\,\frac{\mathbf u\cdot\nabla N_i}{|\mathbf u|} \qquad (3.6.9)$$

The parameter α is obtained by the previously defined expression

$$\alpha = \alpha_{opt} = \coth Pe - \frac{1}{Pe} \qquad (3.6.10)$$

where

$$Pe = \frac{|\mathbf u|\,h}{2D} \qquad\text{and}\qquad |\mathbf u| = \sqrt{u_x^2 + u_y^2}$$

The above equations assume that the velocity components are constant, or at least substantially constant, in a particular element, and that the element size h can be reasonably defined.
The Petrov-Galerkin formulation can be directly employed in time dependent problems as well. The equations obtained are quite different from the ones we have considered so far, and other types of problems arise with respect to the stability of the equation system. In general, the final system obtained has the form

$$R\,\frac{c(t+\Delta t) - c(t)}{\Delta t} + P\left[(1-\theta)\,c(t) + \theta\,c(t+\Delta t)\right] - F = 0 \qquad (3.6.11)$$

One would expect that, as usual, θ ≥ 1/2 would give unconditional stability, and that with θ = 0 and substitution of the lumped matrix R_L for R an explicit procedure would be available. However, this is not the case, since stability problems arise. In many numerical problems it is not always easy to obtain error estimates in order to understand the nature of the instabilities that occur. It is then necessary to compare the numerical solutions with exact solutions where these are available. The most convenient way of making such comparisons is to use a uniform spatial mesh for the numerical solutions and obtain the so-called amplitude and relative celerity ratios.
To illustrate the use of these ratios, an example presented by Zienkiewicz and Taylor (1994) is most informative. They consider a 1D time dependent, convection-only transport equation with no sources. The solution of this equation has the form

$$\frac{\partial c}{\partial t} + u\,\frac{\partial c}{\partial x} = 0 \quad\Rightarrow\quad c = c_0\,e^{i\sigma(x-ut)} \qquad (3.6.12)$$

where c₀ and σ have arbitrary values. At a given x coordinate we define the growth ratio as

$$\lambda = \frac{c(t+\Delta t)}{c(t)} = e^{-iu\sigma\Delta t} \qquad (3.6.13)$$
which has a modulus equal to 1.0, and whose argument is written in the form

$$\arg(\lambda) = u\,\sigma\,\Delta t = p\,C \qquad p = \sigma\,\Delta x \qquad C = u\,\frac{\Delta t}{\Delta x} = \text{Courant number}$$

We can define a similar ratio λ* for the numerical scheme, linking the discrete values of c at the two time levels t and t + Δt, assuming that in x the exact solution is satisfied, i.e.

$$\lambda^* = \frac{c_i(t+\Delta t)}{c_i(t)} \qquad\text{with}\quad \frac{c_{i+1}}{c_i} = e^{i\sigma\Delta x} \qquad (3.6.14)$$

The assessment of the numerical solution is done through the definition of the ratio

$$\hat\lambda = \frac{\lambda^*}{\lambda} \qquad (3.6.15)$$

The modulus of this ratio is known as the amplitude ratio, while the ratio of the arguments of λ* and λ is the relative celerity. Both the amplitude ratio and the relative celerity must be close to unity. In the present case the amplitude ratio must never exceed unity if stability is to be achieved.

Using the Galerkin scheme for the above equation, Zienkiewicz and Taylor (1994) show that

$$\lambda^* = \frac{(\cos p + 2) - 3\,C\,i\,(1-\theta)\sin p}{(\cos p + 2) + 3\,C\,i\,\theta\sin p} \qquad (3.6.16)$$
With these equations we can proceed with an analysis of the numerical scheme. The Courant number is very important in such an analysis. If we take Δx = h, the element size, then we see that the Courant number is the ratio of the distance the pollutant moves during a time step, uΔt, to the element size. If the velocity is large, all the pollutant will move out of the element, causing numerical problems. Using different values of the Courant number and θ = 0, 1/2 and 1, the results seen in figures 3.6.1 and 3.6.2 were obtained for the Galerkin and Petrov-Galerkin schemes respectively. The amplitude ratio expresses the stability, while the
relative celerity expresses the accuracy of the results. Since both of these properties are
ratios between numerical and analytical results, the ideal results are obtained when they
give values close to unity. When the amplitude ratio gives values larger than one, oscil-
lations in the numerical solution occur and thus unstable results are obtained.
Figure 3.6.1. Amplitude ratios and relative celerities for the Galerkin method, for θ = 0, θ = 0.5 and θ = 1.0 (from Zienkiewicz and Taylor, 1991).
It becomes evident that the Galerkin process is stable and accurate with θ = 1/2, but is unconditionally unstable for θ = 0. For low Courant numbers the results with θ = 1/2 and θ = 1 seem to be the same, but as the Courant number increases the accuracy is reduced. The amplitude ratio for θ = 1 suggests that the numerical results will be unconditionally stable, but with increasing Courant number they will underestimate the analytical results.

The Petrov-Galerkin approach for θ = 0 is conditionally stable, requiring small Courant numbers, and is unconditionally stable for θ = 1/2 and θ = 1. For large Courant numbers the Petrov-Galerkin approach does not seem to give better results than those obtained from the Galerkin approach. Comparing the two approaches at a value of 10 elements seems to give approximately the same results for both θ = 1/2 and θ = 1. However, for 5 elements the Petrov-Galerkin approach seems to give better results.

Since the Peclet number does not come into the above calculations, these do not show the effects of numerical dispersion. In general we can conclude that the best results in time dependent problems are obtained for θ = 1/2, independently of which approach is used. The number of elements should be as large as possible, especially in cases where the Courant number is larger than 1. Since a reduction in the number of elements will increase the Peclet number, the Petrov-Galerkin approach seems to give a big advantage over the Galerkin approach.
Figure 3.6.2. Amplitude ratios and relative celerities for the Petrov-Galerkin method (from Zienkiewicz and Taylor, 1991).
Another type of problem we are interested in solving is the case where an injection of a pollutant has taken place. If the injection is "instantaneous", analytical equations can also be obtained. These types of problems are sometimes referred to as Gaussian wave propagation problems, because the pollutant exhibits a distribution similar to a Gaussian curve. In general the Galerkin and Petrov-Galerkin schemes will not work well for this type of problem. Good solutions are obtained only in the case of Courant numbers of 0.1 or smaller, and even in such cases the numerical approximation shows damping (i.e. a lower peak value and sometimes considerable spreading out of the pollution).

Zienkiewicz and Taylor (1994) propose a number of other approaches for these types of problems, like the Characteristic (or streamline) Petrov-Galerkin, the Characteristic Galerkin, the Lax-Wendroff procedure (also known as the Taylor-Galerkin approach), etc. The Characteristic methods basically divide the transport process into a convective part and a dispersive part; the convective part can be solved with the method of characteristics, employing a Taylor development of the convective terms, while the dispersive part is solved by typical Galerkin or Petrov-Galerkin procedures. Combining these two into one set of equations, the problem can be solved, but many stability problems arise.

All stability problems are connected to the values of the Peclet number and the Courant number. From the definitions of these numbers we see that they have opposite effects on the stability, in the sense that a reduction of element size reduces Pe but increases C. In general we would like both these numbers to be kept small in order to obtain the best results. In the 1D Galerkin approach we have seen that Pe must be smaller than 1 to have stability, while for Gaussian wave problems C must be kept smaller than 0.1. Choosing the element size such that the Peclet number always remains equal to 1, and using this element size to define the time step which makes the Courant number equal to 0.1, we obtain the charts of figures 3.6.3 and 3.6.4 for different velocities and dispersions.
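The construction behind these charts follows directly from the two definitions. A sketch, using Pe = uh/2D = 1 and C = uΔt/h = 0.1 as stated above:

```python
def mesh_limits(u, D, pe=1.0, courant=0.1):
    """Element size and time step giving the target Peclet and Courant numbers."""
    h = 2.0 * D * pe / u        # from Pe = u*h/(2*D)
    dt = courant * h / u        # from C = u*dt/h
    return h, dt

for D in (0.1, 1.0, 10.0, 100.0):   # the dispersion values of figures 3.6.3/3.6.4
    print(D, mesh_limits(u=1.0, D=D))
```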
Figure 3.6.3. Velocity versus time step for different dispersivity values (D = 0.1, 1, 10 and 100).
Figure 3.6.4. Velocity versus element size for different dispersivity values (D = 0.1, 1, 10 and 100).
Both charts indicate that for small velocities (smaller than 0.1) the time step and element size can be rather large. As the dispersion becomes smaller, both the time step and the element size must also become smaller. Under usual hydrogeologic conditions the velocity will not exceed a value of 2 m/day, while the dispersion will range from 10 to 1000 m²/day. Under these conditions one can see that if the element size is chosen to be 5 m and the time step 0.2 days, then stability will be ensured for most hydrogeological conditions. However, in most cases we cannot use such small elements and time steps. For example, a small aquifer with a length of 1 km and a width of 500 m will require 20000 elements to ensure stable results. So, for practical reasons, we must use larger elements and therefore tolerate instabilities in the results obtained.
Something we have not considered so far is that in all the above equations the dispersion is assumed to be a scalar, while in most hydrogeological problems it is a tensor. We usually work through the longitudinal and transverse dispersivities in order to define the dispersion tensor. In order to be on the safe side, the transverse dispersivity should be considered when estimating the Peclet number.
We have previously mentioned the lumped formulation of the equations with respect to how the transient equation system can be formulated. More specifically, it was mentioned that the R matrix can be approximated by its lumped form R_L. The R matrix in the pollution transport equation is built up based purely on geometrical information, i.e. no transport parameters come into the calculations involved. For practical reasons many numerical analysis experts replace this matrix with its lumped form, which is a diagonal matrix (i.e. only the entries R_ii, i = 1, 2,...,n are non-zero). This allows the matrix to be stored as a vector, giving a substantial reduction in memory requirements and making many calculations faster and easier. The basic idea is that R has properties which allow us to transform it into a diagonal matrix through rotation procedures similar to those used for obtaining the eigenvalues and eigenvectors of matrices. However, rotation algorithms are quite demanding and simpler methods are used. Zienkiewicz and Taylor (1994) discuss lumped matrices in Appendix 8, where 4 different approaches are explained. Using a lumped matrix simplifies the formulation of the equation system but causes some loss of accuracy. The R matrix, usually referred to as the consistent matrix, has a further disadvantage since it is not diagonally dominant.

Actually neither R nor P is diagonally dominant*, and therefore the equation system is not perfectly conditioned. However, the diagonal entries in R and P are always the largest valued entries in each row, so pivoting is not necessary. One can compare the P matrix with the respective matrix from the steady state groundwater flow finite element equation system, which is diagonally dominant (actually the diagonal entries are always equal to the sum of the non-diagonal entries) and is very stable.
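Row-sum lumping, one common recipe (shown here only as an illustrative example; Zienkiewicz and Taylor describe several variants), replaces each row of the consistent matrix by its sum on the diagonal:

```python
import numpy as np

def lump(R):
    """Row-sum lumping: a diagonal matrix holding the row sums of R."""
    return np.diag(R.sum(axis=1))
```

Since the result is diagonal, it can be stored as a vector and inverted entry by entry, which is the source of the computational savings mentioned above.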
The reason for this discussion of lumped matrices is that in the next section comparisons will be made with results obtained by Jang et al. (1990), who employed a lumped formulation, which results in significantly different deterministic results, especially when time is of importance.
3.7. Verification

We have already discussed the problem of formulating the finite element equation system with the use of a lumped matrix in comparison to a consistent matrix. Due to the importance of the differences between these approaches, it was considered necessary to verify the numerical results against analytical solutions, in order to have a better understanding of the differences between the two schemes.

* In numerical analysis, if a matrix is strictly diagonally dominant, then a Gaussian elimination based algorithm (for example an LU factorization) without pivoting will always manage to solve the equation system without any numerical problems. A strictly diagonally dominant matrix has the property

$$|M_{ii}| > \sum_{\substack{j=1\\ j\ne i}}^{n} |M_{ij}|$$

where M_ii are the diagonal entries of the matrix M. If the above holds with "≥", then the matrix is only diagonally dominant (see chapter 7).

3.8.1. Constant Source Verification

The analytical equation used is the one presented by W. Kinzelbach (1986), given as

$$c(x,t) = \frac{C_0}{2}\,e^{\frac{x}{2\alpha_L}}\left[e^{-\frac{x\gamma}{2\alpha_L}}\operatorname{erfc}\!\left(\frac{x-\gamma\,u\,t/R}{2\sqrt{\alpha_L\,u\,t/R}}\right) + e^{\frac{x\gamma}{2\alpha_L}}\operatorname{erfc}\!\left(\frac{x+\gamma\,u\,t/R}{2\sqrt{\alpha_L\,u\,t/R}}\right)\right] \qquad (3.8.1)$$

with

$$\gamma = \sqrt{1 + \frac{4\mu\,\alpha_L\,R}{u}} \qquad C_0 = \frac{M}{n_e\,w\,m\,u}$$

The pollution source is located at the point x = 0, with a constant injection rate given by M starting at time t = 0, and

w    the width of the aquifer,
m    the aquifer depth,
n_e  the effective porosity,
u    the average pore velocity in the x direction,
R    the retardation factor,
μ    the decay constant,
α_L  the longitudinal dispersivity.
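For reference, the formula is straightforward to evaluate numerically. A sketch of the reconstructed equation 3.8.1 (the default arguments match the verification input described below; scipy's erfc is used):

```python
import numpy as np
from scipy.special import erfc

def c_analytical(x, t, C0=1.0, u=0.5, aL=10.0, R=1.0, mu=0.0):
    """Evaluate eq. 3.8.1 for a constant source at x = 0, active since t = 0."""
    g = np.sqrt(1.0 + 4.0 * mu * aL * R / u)   # gamma; g = 1 when mu = 0
    s = 2.0 * np.sqrt(aL * u * t / R)          # denominator of the erfc arguments
    return 0.5 * C0 * np.exp(x / (2 * aL)) * (
        np.exp(-x * g / (2 * aL)) * erfc((x - g * u * t / R) / s)
        + np.exp(x * g / (2 * aL)) * erfc((x + g * u * t / R) / s))
```

With mu = 0 and R = 1 this reduces to the classical constant-source solution of the 1D advection-dispersion equation.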
The numerical grid was set up as a single row of square isoparametric elements with a side of 1 m. The row was made 100 m long, using 100 elements with a total of 202 nodes. For the groundwater flow problem a confined aquifer is assumed. The two left side nodes in the grid were assigned a constant head value of 100 m and the two right side nodes 50 m, giving a gradient equal to 1/2. For each element the effective porosity was set equal to 1, and the hydraulic conductivities were set to be homogeneous and equal to 1 m/d, which essentially results in a pore velocity of 0.5 m/d in the row direction. The pollution transport problem simply assumes that the left side nodes are constant concentration nodes with a value of 1 mg/m³, while the dispersivities (α_L and α_T) take the values 10 m, 1 m and 0.1 m in the respective case studies, and results are obtained at 50 days and 70 days for each dispersivity value.

For the analytical equation the following input was used. The pore velocity was set equal to 0.5 m/d, the width and depth of the aquifer to 1 m, the retardation factor to 1, the decay constant to 0, and the effective porosity was also set equal to 1, while the source term C₀ = 1 mg/m³. Six cases were considered in all, where α_L takes the values 10 m, 1 m and 0.1 m, for the respective times of 50 days and 70 days.
Figure 3.8.1. Analytical solutions obtained with α_L = 10 m (concentration versus distance at 50 and 70 days).
The analytical results with a dispersivity of 10 m can be seen in figure 3.8.1. If the pollutant transport were only subject to advective transport, it should have covered a distance of 25 m and 35 m after 50 days and 70 days of transport respectively. In this case, however, the large value of the dispersivity dominates the pollution transport, and the magnitude of the advective transport is "hidden", in the sense that it is difficult to observe it in the results. We have already mentioned that numerical results for problems with dominating dispersive transport will be very stable, and we therefore expect the numerical results to be very good.

The easiest way to compare the numerical results with the analytical results is to take the differences between the numerical and analytical solutions and plot these. Such a plot can be seen in figure 3.8.2, and the same presentation scheme is used throughout this section. The difference plots also allow us to see the behavior of the numerical results in relation to each other. Positive differences indicate that the numerical results are larger than the analytical results, and negative differences indicate smaller numerical results than the analytical ones. Direct comparisons between analytical and numerical results are not always informative when plotted against each other, since in many cases the differences are too small to be noticeable.

Figure 3.8.2 shows the concentration differences obtained by four different finite element numerical schemes, which resulted from using different matrix formulations (lumped and consistent) with different difference schemes for the time derivative (θ = 0.5, the central difference scheme, and θ = 1.0, the backwards difference scheme). The results seem to indicate that the lumped matrix formulation with θ = 0.5 has given the best results, although all results in this case are very good. The numerical dispersion observed in these results is very small, and so is the boundary effect observed at the 100 m distance.
Figure 3.8.2. Differences between numerical and analytical results after 50 days for α_L = 10 m.
In figure 3.8.3 we see the results obtained after 70 days of transport. The same comments made for the 50 day results can be made for these results as well, but we notice some differences. We first notice the high boundary effect in these results. A larger boundary effect was expected in this case, since more pollution reaches the boundary in 70 days than in 50 days. All four solutions converge to a value of 0.06 at the 100 m distance, which is not shown in the plot in order to maintain the scale of the plot. We also observe that the numerical dispersion has become smaller in the range from 0 to 80 m, or in other words the numerical solutions are more accurate in this range than the 50 day results.

Obviously the boundary effect causes a large numerical dispersion near the boundary area, which does not seem to propagate inwards to the rest of the solution (at least not in the case seen here) and therefore must be related to the different boundary conditions used in setting up the analytical and numerical problems. In order to reduce the boundary effect, two approaches seem appropriate. The first approach is to implement infinite finite elements at the right side of the aquifer, which will simulate the boundary conditions of the analytical equation in a more precise manner. The second approach, the practical one, is of course to avoid the boundary effect by simply making the "numerical" aquifer longer, i.e. 200 m long, so that less pollution reaches the boundary.
Figure 3.8.3. Differences between numerical and analytical results after 70 days for α_L = 10 m.
Using now a dispersivity α_L = 1 m, we obtain the analytical results shown in figure 3.8.4. The advective transport in this case is more obvious, so we expect larger numerical dispersion to be observed. The numerical results are limited now to two types: one using the lumped matrix formulation and the other using the consistent matrix formulation. The central difference scheme for the time derivative is used in both cases, since this seems to give the best results. Figures 3.8.5 and 3.8.6 give the difference plots for transport after 50 days and 70 days respectively. Due to the smaller value of the dispersivity, no pollution reaches the right side boundary, and therefore no boundary effects are observed, or at least none are observed at the scale used in the plots. We first observe that the numerical dispersion of these solutions is approximately 10 times larger than in the α_L = 10 m case, which of course was expected. We also notice that the lumped formulation does not give much better results than the consistent formulation; one observes regions where the consistent formulation gives better approximations.
Figure 3.8.4. Analytical solutions obtained with α_L = 1 m (concentration versus distance at 50 and 70 days).
Figure 3.8.5. Differences between numerical and analytical results after 50 days for α_L = 1 m.
Figure 3.8.6. Differences between numerical and analytical results after 70 days for α_L = 1 m.
As in the previous case, we observe that the numerical dispersion in the 70 day results is less than the numerical dispersion in the 50 day results. In both plots we observe that for the lumped matrix formulation the highest differences occur at the 25 m and 35 m distances, which are the advective transport distances covered in 50 days and 70 days respectively. The same can be said for the consistent matrix formulation results, although the highest differences are observed at slightly larger distances than the advective transport distances. However, both numerical solutions have to be judged as fairly good. The differences observed are so small that it would be hard to distinguish between the numerical and analytical solutions in a usual concentration-distance plot.

In figure 3.8.7 we see the last results presented for a constant pollution source, where the dispersivity is equal to 0.1 m. One can clearly see that this case is dominated by advective transport. It is in this case that we expect to have the largest numerical errors, expecting to see overshoots in addition to numerical dispersion.
Figures 3.8.8 and 3.8.9 show the difference plots for this case. We immediately notice that the numerical dispersion has become approximately 10 times that of the previous case. The differences are so high that one would easily see the difference between these results and the analytical results on a concentration-distance plot. Besides the obvious numerical dispersion of both numerical solutions, we also see considerable overshoots from the lumped matrix formulation solution on the left side of the pollutant front. The consistent formulation does not show any overshoots (at least not at the scale used) and gives approximately half the amount of numerical dispersion of the lumped formulation. The highest differences are once again observed in the area near the advective transport distances. We also see that the differences are slightly reduced in the 70 day solution compared to the 50 day solution. In this case the consistent matrix formulation gives much better results than the lumped matrix formulation.

Looking back at all the difference plots we have seen for the constant pollution source problem, we can conclude the following. In cases with significant dispersion, both approaches will give very good results. The consistent matrix formulation seems to always give positive numerical dispersion, while the lumped formulation seems to give mainly negative numerical dispersion. In cases dominated by advective transport, the consistent matrix formulation seems to be more stable (no overshooting), although the numerical dispersion is considerable. The lumped matrix formulation for advection dominated transport is quite unstable, at least with θ = 0.5, and should be avoided. Generally speaking, however, it is hard to decide which approach, lumped or consistent, is the best to use.
Figure 3.8.7. Analytical solutions obtained with α_L = 0.1 m (concentration versus distance at 50 and 70 days).
Figure 3.8.8. Differences between numerical and analytical results after 50 days for α_L = 0.1 m.
Figure 3.8.9. Differences between numerical and analytical results after 70 days for α_L = 0.1 m.
Although the overall results seem to indicate that the consistent approach is more dependable, the numerical advantages of the lumped formulation cannot be ignored, and in well behaved, dispersion dominated transport problems it performs very well. Therefore the decision of which scheme should be used will depend on the type of problem which is analyzed.

Besides the numerical solutions presented herein, many other numerical solutions have been obtained with diverse θ values and for different times, in order to understand the effects these have. The most noteworthy observations from these results are the following. The lumped matrix formulation with θ = 1.0 does not show any overshoots in the α_L = 0.1 m case, although the numerical dispersion is considerable. The trend of the numerical dispersion being reduced with time is also confirmed by results obtained at 90 days and 110 days. This suggests that if boundary effects are avoided, then numerical solutions for large times (i.e. 360 days, 720 days) will be more accurate.
3.9.2. Injection Source Verification

In this case W. Kinzelbach (1986) gives the following analytical equation,

$$c(x,t) = \frac{\Delta M}{n_e\,R\,w\,m\,\sqrt{4\pi\,\alpha_L\,u\,t/R}}\,\exp\!\left(-\frac{(x - u\,t/R)^2}{4\,\alpha_L\,u\,t/R} - \mu\,t\right) \qquad (3.9.2)$$

where the definitions are the same as for the analytical equation used in the constant pollution source problem. The only new parameter is ΔM, which is the pollution mass injected into the aquifer. The injection takes place at x = 0 at time t = 0. This function produces a Gaussian bell shaped distribution around x = ut/R, with a width equal to √(2α_L u t/R). All parameters are assigned the same values as in the constant source problem.

The numerical problem uses the same grid and parameter values as the previous problem, but of course some changes had to be made concerning the pollution sources. Basically, the injection sources are implemented by setting the initial conditions on the source nodes to 1 mg/m³. In order to avoid diverse boundary effects, the source nodes were placed at a distance of 30 m from the left side of the aquifer, so the effective domain is now 70 m. In this case, if the injection nodes are placed too near to the boundary, a significant dispersive flux takes place through the left boundary, and therefore part of the injected pollution mass disappears. This contradicts the assumptions of the analytical equation, where an infinite aquifer is assumed (in both the negative and positive directions) and no such pollution "removal" takes place. One can actually minimize this dispersive flux through the left side boundary by setting the boundary conditions of the left side nodes to be constant pollution nodes with a value of zero. This improves the results (no flux takes place), but the effects on the concentration distribution are significant.

For this problem, 9 cases are considered, where α_L takes the values 10 m, 1 m and 0.1 m, with results at 20 days, 50 days and 70 days after injection.
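Equation 3.9.2 can be evaluated in the same manner as the constant source formula. A sketch (defaults again matching the verification input):

```python
import numpy as np

def c_pulse(x, t, dM=1.0, w=1.0, m=1.0, ne=1.0, u=0.5, aL=10.0, R=1.0, mu=0.0):
    """Evaluate eq. 3.9.2: Gaussian plume from an instantaneous injection at x = 0."""
    s2 = 2.0 * aL * u * t / R                  # variance of the bell curve
    peak = dM / (ne * R * w * m * np.sqrt(2.0 * np.pi * s2))
    return peak * np.exp(-(x - u * t / R) ** 2 / (2.0 * s2) - mu * t)
```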
The analytical results for α_L = 10 m are shown in figure 3.9.10. The advective transport distances are 10 m, 25 m and 35 m for transport times of 20 days, 50 days and 70 days respectively. We see that the analytical results show a maximum concentration at these distances, which in turn shows that for injection source problems advective transport will always play a significant role. With the exception maybe of the 20 day results, we expect high boundary effects in the numerical results.
Figure 3.9.10. Analytical results for α_L = 10 m (concentration versus distance from source at 20, 50 and 70 days).
Figure 3.9.11. Differences between numerical and analytical results after 20 days for α_L = 10 m (lumped θ = 0.5, lumped θ = 1.0, and consistent θ = 0.5).
Figure 3.9.11 shows the 20 day difference results. The difference values are very small and the overall performance of the numerical results seems to be very good. The consistent matrix formulation gives slightly better results, while for θ = 1 the lumped matrix formulation gives much higher numerical dispersion. It is interesting to notice that the lumped results are slightly skewed towards the left and seem to overestimate the peak concentration, while the consistent results underestimate the peak concentration and seem to give a slightly wider bell-shaped curve. In other words, the consistent matrix formulation introduces a numerical dispersion which is very hard to detect.
Figures 3.9.12 and 3.9.13 show the results for 50 and 70 days respectively. The boundary effects are very clear in these plots, to the point where they seem to dominate the behavior of the numerical results. In the 50 day results the differences are still quite small compared to the concentration values, although the right side boundary effect is large enough to show a visible deviation from the analytical results. All numerical schemes seem to behave similarly, which in this case simply shows the dominance of the boundary effect. The 70 day results show differences which are on the same scale as the concentration values, and therefore the numerical solution deviates considerably from the analytical solution. All three difference curves fall on each other, but that is to be expected at the scale used for the plot.
Figure 3.9.12. Differences between numerical and analytical results after 50 days for α_L = 10 m (lumped θ = 0.5, lumped θ = 1.0, and consistent θ = 0.5).
Figure 3.9.13. Differences between numerical and analytical results after 70 days for α_L = 10 m (lumped θ = 0.5, lumped θ = 1.0, and consistent θ = 0.5).
The results for a dispersivity of 10 m have not been very useful, although they have of course illustrated the importance of avoiding boundary effects. In the next case the dispersivity is reduced to a value of 1 m, and the analytical results in figure 3.9.14 indicate that no boundary effects are present.
Figure 3.9.14. Analytical results for α_L = 1 m: concentration profiles after 20, 50 and 70 days.
The difference plot in figure 3.9.15 for the 20 day solution is very much like the one obtained with a dispersivity of 10 m, although the numerical dispersion observed is much higher and the performance of the two schemes is no longer comparable. The consistent formulation introduces a pure numerical dispersion, while the lumped formulation is once again skewed and seems to make the right side of the curve steeper at the middle section and flatter at the bottom.
Figure 3.9.15. Differences between numerical and analytical results after 20 days for α_L = 1 m (lumped θ = 0.5 and consistent θ = 0.5).
Figure 3.9.16. Differences between numerical and analytical results after 50 days for α_L = 1 m (lumped θ = 0.5 and consistent θ = 0.5).
Figure 3.9.17. Differences between numerical and analytical results after 70 days for α_L = 1 m (lumped θ = 0.5 and consistent θ = 0.5).
The same can be said for the results of figures 3.9.16 and 3.9.17 for the 50 day and 70 day numerical solutions respectively. In these figures we can see the full extent of the symmetry exhibited by the numerical solutions. The scales of these plots indicate that the numerical dispersion is slightly reduced with time. For the 50 day results the analytical results show that the bell-shaped concentration distribution starts approximately at 3 m and ends at 47 m. The consistent results seem to be confined to this range, while the lumped results are not. This last observation is a sign of instability.
Figure 3.9.18. Analytical results for α_L = 0.1 m: concentration profiles after 20, 50 and 70 days.
For a dispersivity of 0.1 m the analytical results are given in figure 3.9.18. We expect to see high numerical dispersion and overshoots, which in this case will take the form of oscillations.
The 20 day numerical results seen in figure 3.9.19 show both of the expected problems. The lumped formulation shows differences on the same scale as the concentration values, so we expect large deviations from the analytical results. Large oscillations can be seen on the left side of the pollution peak, while on the right side they are much smaller and have a smaller lateral extent. The same can be said for the consistent formulation, although to a much lesser degree.
Figure 3.9.19. Differences between numerical and analytical results after 20 days for α_L = 0.1 m (lumped θ = 0.5 and consistent θ = 0.5).
The 50 and 70 day results of figures 3.9.20 and 3.9.21 respectively show clearly the oscillations of the lumped results on the left side of the concentration peak. The differences have the same scale as the concentration values, so the lumped results will deviate considerably from the analytical results. Once again we notice the time effect, whereby numerical dispersion is reduced with time.
Figure 3.9.20. Differences between numerical and analytical results after 50 days for α_L = 0.1 m (lumped θ = 0.5 and consistent θ = 0.5).
Figure 3.9.21. Differences between numerical and analytical results after 70 days for α_L = 0.1 m (lumped θ = 0.5 and consistent θ = 0.5).
The consistent formulation seems to be much more stable than the lumped formulation in this case. The numerical dispersion is much smaller, and the scales of the plots of figures 3.9.19, 3.9.20 and 3.9.21 do not allow us to see any details. In order to observe these differences at a more suitable scale, figure 3.9.22 was prepared, where only consistent matrix formulation results are shown. In this figure we see that the numerical dispersion is indeed quite small, with the exception maybe of the 20 day results. One can clearly see the oscillations for the 20 day solution and, to a much smaller degree, for the 50 day solution.
Figure 3.9.22. Differences between numerical and analytical results for α_L = 0.1 m (consistent formulation only, at 20, 50 and 70 days).
In conclusion, for the injection problem it seems that the consistent formulation performs much better than the lumped formulation. In all cases the consistent results show a tendency to introduce numerical dispersion which reduces the peak concentration and widens the concentration distribution range. We have already mentioned in the previous section that the element size plays a significant role in the behavior of the numerical results.
4. Review of Stochastic Methods
4.1 Introduction
In this chapter the most common stochastic methods are presented: the Monte Carlo Simulation (MCS) method, the perturbation method and the reliability method. Only the reliability method will be presented in detail, since it is the method used in this thesis, while the other two methods will be presented in brief. Let us begin with a simple example illustrating the kind of problems we want to solve with the use of stochastic methods.
A stochastic problem is similar to a mathematical problem. We must first formulate the
equations that describe the problem and then use the available techniques/methods for
obtaining a solution.
Let us take the Darcy law equation*

q = K i    (4.1.1)

where q is the specific discharge, K is the hydraulic conductivity and i is the hydraulic gradient. Let us further assume that equation 4.1.1 is used to determine whether the specific discharge is greater or smaller than a given q_targ value. Applying equation 4.1.1 is elementary as long as K and i are known with sufficient accuracy, and we can easily find out if the q_targ value has been exceeded or not. In the case where the values of K and i are uncertain, a probabilistic formulation of 4.1.1 is required for deciding if the target flux will be exceeded.

* This example is partly based on a similar example from Cawlfield and Sitar (1987).
The probabilistic formulation is obtained by first replacing the uncertain variables with random variables. Equation 4.1.1 becomes q = X₁X₂, where X₁ is a random variable describing the uncertainty associated with the hydraulic conductivity K, while X₂ is a random variable describing the hydraulic gradient i. The problem is defined as finding the probability that the q_targ value will be exceeded. This is written as

P_f = P[q_{targ} - q \le 0]    (4.1.2)

Replacing q = X₁X₂ in the above we get

P_f = P[q_{targ} - X_1 X_2 \le 0] = P[Z \le 0]    (4.1.3)

Since P[Z ≤ 0] = F_Z(0), which is the cumulative distribution function of the random variable Z = q_targ − X₁X₂ evaluated at 0, we have formulated a probability problem.
A general way of solving this problem would be the following:

P_f = \int_{z \le 0} f_X(\mathbf{x}) \, d\mathbf{x}    (4.1.4)

in which f_X(x) is the joint PDF of X and the integral is a double integral over the unsafe region. In the general case X will be an n-vector of n random variables, f_X(x) will be the joint PDF of X and the integral will be an n-fold integral. Obviously, for a large n the multi-fold integral cannot be evaluated, and statistical information is usually limited and does not permit the determination of f_X(x).

Therefore approximate methods for evaluating the integral have been developed, and alternative measures of reliability have been introduced.
4.2 Monte Carlo Simulation Method
We have already mentioned that in order to calculate the probability of failure we must evaluate the following multidimensional integral:

P_f = P[Z \le 0] = F_Z(0) = \int_{g(\mathbf{x}) \le 0} f_X(\mathbf{x}) \, d\mathbf{x}    (4.2.1)

One of the first methods that has been used for the evaluation of the above integral is the Monte Carlo Simulation (MCS) method, where the word "simulation" is optional and a matter of context. The term Monte Carlo was introduced by von Neumann and Ulam during World War II as a code word for the secret work at Los Alamos, and was applied to problems related to the atomic bomb. The work involved direct simulation of behavior concerned with random neutron diffusion in fissionable material. Shortly thereafter, Monte Carlo methods were used to evaluate complex multidimensional integrals and solve certain integral equations, occurring in physics, that were not amenable to analytic solution.
When considering Monte Carlo integration there are two basic methods which can be used: the "hit and miss" method and the "sample-mean" method.
4.2.1. The Hit and Miss Monte Carlo Method
Consider the problem of evaluating the one dimensional integral of a function h(x) over the interval a ≤ x ≤ b,

I = \int_a^b h(x) \, dx    (4.2.2)

We further assume for simplicity that the function h(x) is bounded,

0 \le h(x) \le c, \qquad a \le x \le b

Let D denote the rectangle of figure 4.2.1, defined as

D = \{ (x, y) : a \le x \le b, \; 0 \le y \le c \}
Let (X, Y) be a random vector uniformly distributed over the rectangle D, with probability density function (PDF)

f_{XY}(x, y) = \begin{cases} \dfrac{1}{c(b-a)} & \text{if } (x, y) \in D \\ 0 & \text{otherwise} \end{cases}    (4.2.3)

The probability p that the random vector (X, Y) falls within the area under the curve h(x) is

p = \frac{\text{area } S}{\text{area } D} = \frac{\int_a^b h(x) \, dx}{c(b-a)} = \frac{I}{c(b-a)}    (4.2.4)
Figure 4.2.1. Hit and Miss Monte Carlo method (the rectangle D with hit and miss points relative to the curve h(x)).
Let us assume that N independent random vectors (X, Y) are generated, i.e. (X₁, Y₁), (X₂, Y₂), ..., (X_N, Y_N). The parameter p can be estimated by

\hat{p} = \frac{N_H}{N}    (4.2.5)

where N_H is called the number of hits and gives us the number of occasions on which h(X_i) ≥ Y_i for i = 1, 2, 3, ..., N. The number of misses is given by N − N_H, where a miss is defined as h(X_i) < Y_i for i = 1, 2, 3, ..., N.

Using now equations 4.2.4 and 4.2.5 we finally obtain for the estimation of the integral

I \approx \theta = c(b-a) \frac{N_H}{N}    (4.2.6)

where the parameter θ is an unbiased estimator of I, i.e. E[θ] = I.
The variance of θ can be estimated once the variance of p̂ is estimated. We have

var(\hat{p}) = var\!\left(\frac{N_H}{N}\right) = \frac{1}{N^2} \, var(N_H) = \frac{1}{N} \, p(1-p)    (4.2.7)

var(\hat{p}) = \frac{1}{N} \frac{I}{[c(b-a)]^2} \, [c(b-a) - I]

Thus

var(\theta) = [c(b-a)]^2 \, var(\hat{p}) = [c(b-a)]^2 \frac{1}{N} \, p(1-p) = \frac{I[c(b-a) - I]}{N}    (4.2.8)

\sigma_\theta = [var(\theta)]^{1/2} = N^{-1/2} \{ I[c(b-a) - I] \}^{1/2}

So the precision of θ is proportional to N^{-1/2}.
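As a concrete illustration, the sketch below applies the hit-and-miss estimator of equations 4.2.5 and 4.2.6 to the integral of x² over [0, 1], which equals 1/3; the test integrand and the bound c = 1 are assumptions chosen only for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def hit_and_miss(h, a, b, c, N):
    """Hit-and-miss estimate of I = int_a^b h(x) dx, with 0 <= h(x) <= c."""
    x = rng.uniform(a, b, N)              # X uniform over [a, b]
    y = rng.uniform(0.0, c, N)            # Y uniform over [0, c]
    n_hits = np.count_nonzero(y <= h(x))  # N_H of equation 4.2.5
    return c * (b - a) * n_hits / N       # theta of equation 4.2.6

for N in (100, 10_000, 1_000_000):
    print(N, hit_and_miss(lambda x: x * x, 0.0, 1.0, 1.0, N))
```

The error indeed shrinks roughly as N^{-1/2}: each hundredfold increase in N gains about one significant digit.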
One can also estimate the number of points required to obtain an estimate of the integral with an error smaller than ε with a given probability α. This is formulated as

P[\,|\theta - I| < \varepsilon\,] \ge \alpha    (4.2.9)

Using now Chebyshev's inequality we obtain

P[\,|\theta - I| < \varepsilon\,] \ge 1 - \frac{var(\theta)}{\varepsilon^2}, \qquad 1 - \frac{var(\theta)}{\varepsilon^2} \ge \alpha

Substituting the value of var(θ) and solving for N, we obtain

N \ge \frac{(1-p)\, p \, [c(b-a)]^2}{(1-\alpha)\, \varepsilon^2}    (4.2.10)
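A short numerical reading of equation 4.2.10: the function below returns the (conservative) Chebyshev sample size, with p = 0.5 used as the worst case since p(1 − p) is then maximal. The values are illustrative.

```python
def chebyshev_sample_size(p, a, b, c, eps, alpha):
    """Sample size N from equation 4.2.10, so P[|theta - I| < eps] >= alpha."""
    return p * (1.0 - p) * (c * (b - a)) ** 2 / ((1.0 - alpha) * eps ** 2)

# Worst case for the unit rectangle, eps = 0.01, alpha = 0.95:
print(chebyshev_sample_size(0.5, 0.0, 1.0, 1.0, 0.01, 0.95))  # 50000 points
```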
4.2.2. The Sample-Mean Monte Carlo Method
In this approach we try to represent the integral in a form consistent with the expected value of some random variable. For example, the integral

I = \int_a^b h(x) \, dx

can be rewritten in an expected value form as

I = \int_a^b \frac{h(x)}{f_X(x)} \, f_X(x) \, dx    (4.2.11)

assuming that f_X(x) is any PDF such that f_X(x) > 0 when h(x) ≠ 0. Then

I = E\!\left[ \frac{h(X)}{f_X(X)} \right]    (4.2.12)

If we assume for simplicity that f_X(x) is defined as

f_X(x) = \begin{cases} \dfrac{1}{b-a} & a \le x \le b \\ 0 & \text{otherwise} \end{cases}    (4.2.13)

then the expectation becomes

E[h(X)] = \frac{I}{b-a}, \qquad \text{i.e.} \qquad I = (b-a) \, E[h(X)]    (4.2.14)

So an unbiased estimator of I can be obtained if we estimate the expected value of h(X) in the above equation with the sample mean, i.e.

I \approx \hat{\theta} = (b-a) \frac{1}{N} \sum_{i=1}^{N} h(X_i)    (4.2.15)

The variance of θ̂ in this case is equal to

var(\hat{\theta}) = \frac{1}{N} \left[ (b-a) \int_a^b h^2(x) \, dx - I^2 \right]    (4.2.16)
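The sample-mean estimator of equation 4.2.15, with the uniform density of equation 4.2.13, can be sketched as follows; the empirical standard error reported alongside is the sample counterpart of equation 4.2.16.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mean_integral(h, a, b, N):
    """Sample-mean estimate of I = int_a^b h(x) dx (equation 4.2.15)."""
    hx = h(rng.uniform(a, b, N))                    # h(X_i) with X_i ~ U(a, b)
    theta = (b - a) * hx.mean()                     # (b - a) E[h(X)]
    std_err = (b - a) * hx.std(ddof=1) / np.sqrt(N)
    return theta, std_err

print(sample_mean_integral(lambda x: x * x, 0.0, 1.0, 10_000))
```

For the same N and the same test integral, this estimator typically has a smaller variance than the hit-and-miss estimator, since no auxiliary Y variable needs to be simulated.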
4.2.3. Comments on Monte Carlo Methods
Both of the above methods can easily be used for the solution of multiple integrals. It is noteworthy that neither method requires the function h(x) to be explicitly defined; they only require that the function can be evaluated at a specific point x.
The most common approach for evaluating the integral 4.2.1 is the so-called zero-one indicator MCS. According to this approach the probability of failure is defined as

P_f = \int_{\mathbf{x} \in R^n} I(\mathbf{x}) \, f_X(\mathbf{x}) \, d\mathbf{x}    (4.2.17)

where the function I(x) is called the indicator function and has the form

I(\mathbf{x}) = \begin{cases} 1 & \text{for } g(\mathbf{x}) \le 0 \\ 0 & \text{for } g(\mathbf{x}) > 0 \end{cases}    (4.2.18)

Performing N simulations of the vector x, we can obtain an estimate of the probability of failure based on the equation

\hat{P}_f = \frac{1}{N} \sum_{i=1}^{N} p_i    (4.2.19)

where p_i = I(x_i) and x_i is the i'th simulation point, generated according to f_X(x). Bjerager (1989) observes that 1/P_f simulations are required in order to obtain one outcome in the failure set. He also suggests, as a rule of thumb, a sample size of 100/P_f in order to get a probability estimate with good confidence. In general this approach will give good results for midrange probabilities. One must not forget that the joint PDF f_X(x) is not known, although it is common to make the assumption of a joint multinormal distribution.
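A minimal sketch of the zero-one indicator approach for the Darcy example of section 4.1, assuming (purely for illustration) independent normal marginals with the second-moment values used later in section 4.4:

```python
import numpy as np

rng = np.random.default_rng(0)

q_targ = 0.1
N = 200_000
x1 = rng.normal(0.1, 0.1, N)   # hydraulic conductivity K (assumed normal)
x2 = rng.normal(0.5, 0.5, N)   # hydraulic gradient i (assumed normal)

g = q_targ - x1 * x2                       # failure function
pf = np.count_nonzero(g <= 0.0) / N        # mean of I(x), equation 4.2.19
se = np.sqrt(pf * (1.0 - pf) / N)          # binomial standard error
print(f"P_f = {pf:.4f} +/- {se:.4f}")
print(f"Bjerager rule of thumb: about {100.0 / pf:,.0f} samples needed")
```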
4.2.4. Monte Carlo Methods in Hydrogeology
In contaminant transport analysis, Monte Carlo Simulation has mostly been used to evaluate the effects of heterogeneities on macroscopic dispersion. Warren and Skiba (1964) showed the influence of variations in hydraulic conductivity on a set of tracer particles moving through a hypothetical medium. In their approach hydraulic conductivity was generated from a lognormal distribution and was considered to be uncorrelated. They investigated the effects of the mesh spacing used in the simulations and showed that the apparent dispersion is a function of mesh spacing and increases significantly as the number of mesh points per constant permeability block increases. The dependence of the apparent dispersion on the mesh spacing is caused by the disregard of correlation. Schwartz (1977) explored the effect of subsurface characteristics on macrodispersion using an idealized medium consisting of low permeability inclusions within a higher permeability medium. He found that a unique dispersivity value could not be defined when the inclusions were not arranged homogeneously within the flow region. The magnitude of dispersion was controlled by the contrast in hydraulic conductivity between the inclusions and the remainder of the medium, the number of inclusions, and the structure of the medium.
Smith and Schwartz (1980) investigated mass transport in a heterogeneous medium by using a hybrid deterministic-probabilistic model. A two dimensional hydraulic conductivity field is generated, correlated by a first order nearest neighbor stochastic process, and a line of tracer particles is released on the inflow side of the region. A velocity is calculated for each particle at each time step from four surrounding values, and the particles are moved a distance that is fixed by the magnitude of the time step and the velocity. For each realization of the hydraulic conductivity field the flow is nonuniform; however, the mean gradient field averaged over a set of 300 Monte Carlo realizations described a uniform flow field. They conclude that tracer particles convected through a random statistically homogeneous conductivity field do not exhibit a Fickian distribution and do not yield a constant dispersivity. They suggested that the advection-dispersion model may be inadequate to describe mass transport in spatially variable, heterogeneous geologic units. In a companion paper, Smith and Schwartz (1981) explored the relationships between the number of hydraulic conductivity measurements and the resulting uncertainty in predictions. They found that the spatial variations in hydraulic conductivity are more important in controlling contaminant transport than the uncertainties in estimating the parameters of the probability distribution for the hydraulic conductivity.
Gelhar and Axness (1983) noticed that the correlation scale of 12.5 m used for the hydraulic conductivity by Smith and Schwartz was slightly larger than the size of the elements, which was 10 m. The analysis therefore may be subject to discretization errors similar to those of Warren and Skiba (1964). This argument is consistent with other work showing that when modeling random fields the element size should be at most on the order of one quarter to one half of the correlation scale in order to accurately account for the correlation (e.g. Der Kiureghian and Ke, 1988).
Tang et al. (1982) developed a numerical stochastic transport equation which describes the temporal variation in the ensemble mean concentration. The hybrid probabilistic-deterministic model developed by Smith and Schwartz was used, and the flow domain was generated from a lognormal distribution of hydraulic conductivities using Monte Carlo simulation. The concentration and velocity were represented by the statistical mean and standard deviation. In this approach, the mass transport equation is expanded in terms of the mean and standard deviation. The ensemble mean concentration is obtained by averaging the mass transport equation after transferring all velocity terms to the right hand side (i.e. the terms become the forcing function). The resulting governing equation is

\frac{\partial \theta \langle c \rangle}{\partial t} + \theta \langle u_j \rangle \frac{\partial \langle c \rangle}{\partial x_j} = \left( \Gamma \theta \, \delta_{jk} + \theta \rho_{jk} \right) \frac{\partial^2 \langle c \rangle}{\partial x_j \partial x_k}    (4.2.20)

where u is the advective velocity and is a random variable, x is the Cartesian coordinate vector measured from the contaminant source, Γ is the molecular diffusion coefficient and is constant, θ is the porosity assumed deterministic, ⟨c⟩ is the ensemble mean concentration, δ is the delta function, and ρ the ensemble dispersion coefficient. The ensemble dispersion coefficient contains the covariance term which correlates the neighboring velocities and is a function of travel distance from the source.
They showed that the extent of spreading in the ensemble mean concentration depends upon the mean velocity, the variance of the velocity and the correlation scale length. The mean and variance of the velocity were important in determining the distribution of the ensemble mean concentration, while only minor differences in the ensemble mean concentrations were produced by changes in the correlation scale length and advective velocity.
4.3 Perturbation Methods
The perturbation methods are not directly capable of solving the integral 4.1.4 given in the introduction of this chapter. They are commonly used to develop relationships between the statistical properties of the parameters and the statistical properties of the solution. With reasonable assumptions one could employ these methods to produce rough probability estimates. The perturbation example given below shows clearly that these methods can produce results which can enhance our understanding of the stochastic nature of hydrogeological problems.
4.3.1. Perturbation Example
Let us introduce the perturbation method with the use of a 1D flow example presented by Gelhar (1993). We start off with the one dimensional equation of flow, which has the form

\frac{d}{dx}\left( K \frac{dh}{dx} \right) = 0    (4.3.1)

Let us assume that the specific discharge q is known exactly from independent measurements. Equation 4.3.1 can then be integrated once, and after dividing through by the non-zero hydraulic conductivity we obtain

\frac{dh}{dx} = -q \, \frac{1}{K}    (4.3.2)
Let us now define the following problem: what is the solution h when the hydraulic conductivity varies in an irregular fashion with the distance x? The hydraulic resistivity 1/K will be regarded in this analysis as a spatial stochastic process. This obviously means that equation 4.3.2 becomes a stochastic differential equation, in which the solution h will also be a random process. The random processes h and 1/K are written in terms of their expected value plus a zero mean perturbation, as follows

h = H + \phi, \quad E[h] = H, \quad E[\phi] = 0
1/K = W + w, \quad E[1/K] = W, \quad E[w] = 0    (4.3.3)

Substituting these decompositions into equation 4.3.2 and taking the expected value, we obtain

\frac{dH}{dx} = E\!\left[\frac{dh}{dx}\right] = -q \, E[1/K] = -qW    (4.3.4)
Subtracting 4.3.4 from 4.3.2, we obtain a stochastic differential equation describing the zero-mean perturbation in φ as produced by the perturbation in w:

\frac{d\phi}{dx} = -q w    (4.3.5)
In order to solve this stochastic differential equation, Gelhar employs spectral theory. The basic idea is that, if the resistivity w can be assumed to be statistically homogeneous (stationary), then it is plausible to seek solutions for the head perturbation φ that are also statistically homogeneous. With this assumption the perturbations of head and resistivity have spectral representations of the form

\phi(x) = \int_{-\infty}^{\infty} e^{ikx} \, dZ_\phi(k), \qquad w(x) = \int_{-\infty}^{\infty} e^{ikx} \, dZ_w(k)    (4.3.6)

where k denotes the wave number. Putting these equations into 4.3.5, along with the uniqueness of the spectral representations, leads to the following relationship

ik \, dZ_\phi(k) = -q \, dZ_w(k)    (4.3.7)
The spectral density function of φ is found by multiplying this equation with its complex conjugate and then taking the expected value. We thus obtain

S_{\phi\phi}(k) = \frac{q^2}{k^2} \, S_{ww}(k)    (4.3.8)

The next steps are to introduce a proper spectrum function for the resistivity, obtain the spectrum function for the head, and use inverse processes to obtain the covariance function for the head. Before doing so, Gelhar notes that equation 4.3.7 has a singularity around the zero wave number, which means that the head process will not be a finite variance homogeneous process. In order to avoid this problem Gelhar assumes that the resistivity spectrum is expressed through a "hole function", which has the form

S_{ww}(k) = \frac{2 \sigma_w^2 \, l^3 k^2}{\pi (1 + k^2 l^2)^2}    (4.3.9)

where σ_w is the standard deviation of the resistivity and l is the correlation length. The equivalent covariance function is

C_{ww}(s) = \sigma_w^2 \left( 1 - \frac{|s|}{l} \right) e^{-|s|/l}    (4.3.10)
Replacing the spectrum of the resistivity into equation 4.3.8, k² is removed and the resulting spectrum function for the head corresponds to a homogeneous process, which yields the covariance function

C_{\phi\phi}(s) = \int_{-\infty}^{\infty} e^{iks} S_{\phi\phi}(k) \, dk = q^2 \sigma_w^2 l^2 \left( 1 + \frac{|s|}{l} \right) e^{-|s|/l}    (4.3.11)

The head variance is found at s = 0, where s is the separation, as

\sigma_\phi^2 = q^2 \sigma_w^2 l^2    (4.3.12)

In this fashion we can obtain relationships between the parameter statistics and the solution statistics. Obviously, the choices and assumptions we make for the random processes and their spectral form will influence the equations obtained. The approach described herein is usually termed the spectral method.
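The closed form 4.3.11 can be checked numerically: the sketch below inverse-transforms the head spectrum of equation 4.3.8 on a truncated wave-number grid and compares the result with the analytical covariance. The values of q, σ_w and l are arbitrary illustrative choices.

```python
import numpy as np

q, sigma_w, l = 1.5, 0.4, 2.0
k = np.linspace(1e-6, 400.0, 400_001)      # truncated wave-number grid
dk = k[1] - k[0]

S_ww = 2.0 * sigma_w**2 * l**3 * k**2 / (np.pi * (1.0 + k**2 * l**2) ** 2)
S_pp = q**2 * S_ww / k**2                  # head spectrum, equation 4.3.8

for s in (0.0, 1.0, 5.0):
    numeric = 2.0 * np.sum(np.cos(k * s) * S_pp) * dk   # even integrand
    exact = q**2 * sigma_w**2 * l**2 * (1.0 + s / l) * np.exp(-s / l)
    print(f"s = {s:3.1f}: numeric {numeric:.4f}  vs  closed form {exact:.4f}")
```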
4.3.2. Perturbation Methods in Hydrogeology.
The perturbation method has mainly been used to investigate the variability of the solution obtained from equations whose parameters were uncertain. Tang and Pinder (1977) solved the 2D advection-dispersion equation using the perturbation method. The dispersion coefficient was assumed random and the velocity deterministic. The output was obtained in the form of the mean and variance of the concentration. The results showed that the standard deviation of the concentration at a point increases quite abruptly early in the simulation and decays as a function of time. They concluded that the mean and the variance of the concentration should approach an asymptotic value with time. In a later study, Tang and Pinder (1979) solved a problem of 1D mass transport wherein the dispersivity and velocity were treated as random variables with known means and variances. Parameters with large variances were used in this analysis, at the expense of a considerable amount of computational effort. The spatial dependence of the parameters was not included in this work. They found that the largest uncertainty occurred at the concentration front, and the uncertainty in the computed concentration was much smaller than the uncertainty in the input parameters. Thus, the transport equation apparently tends to dampen the uncertainty associated with the physical parameters, velocity and dispersivity. Also, the choice of the distribution function (normal or lognormal) significantly influenced the form of the concentration profile and the associated standard deviation.

The ensemble mean approach, which simulates the physical behavior of subsurface media, has been pursued by many investigators through the analytical and numerical application of the perturbation method (Dagan, 1982, 1984, 1987; Winter et al., 1986; Gelhar and Axness, 1983). In this approach the analytical expressions of the macrodispersivity as well as the mean concentration are derived based on the ergodicity assumption, i.e. the behavior of a single aquifer, which is the spatially averaged property of the aquifer, is represented by the ensemble average over a large number of realizations of aquifers having the same underlying statistical properties. The scale of heterogeneities must be small compared with the overall scale of observation for the spatial averaging process to be meaningful (Gelhar, 1986).
Gelhar et al. (1979) analyzed the macrodispersive process in a stratified heterogeneous porous medium based on Taylor's (1953) classical analysis of dispersion in a tube. In their analysis the porous medium is assumed perfectly stratified, and hydraulic conductivity is homogeneously distributed throughout the strata. Mass transport of a conservative solute with concentration c is represented by

\frac{\partial c}{\partial t} + u(z) \frac{\partial c}{\partial x} = \frac{\partial}{\partial x}\left( D_L \frac{\partial c}{\partial x} \right) + \frac{\partial}{\partial z}\left( D_T \frac{\partial c}{\partial z} \right)    (4.3.13)

where D_L and D_T are the local longitudinal and transverse dispersion coefficients. The governing equation is formulated by separating the local dispersion coefficients D_L and D_T, the hydraulic conductivity K and the concentration c into mean and fluctuating components and then expanding the equation with the perturbed parameters. The resulting mean-removed equation was solved using the spectral method. The explicit expression for the macroscopic longitudinal dispersivity was derived in terms of the statistical properties of the medium from this analysis. They found that for large time the longitudinal dispersivity approaches a constant value and mass transport becomes Fickian.
Matheron and De Marsily (1980) analyzed the same type of stratified medium with flow parallel to the bedding using a random motion model. In their analysis, the seepage velocity was generated from a weakly stationary hydraulic conductivity random function K(z), in which z denoted the depth of the medium. A tracer particle located at a point (x₀, z₀) at time t = 0 moves to another point (x_t, z_t) at time t. The probabilistic distribution of the particles at a certain time t is equivalent to the concentration distribution obtained by solving the advection-dispersion equation. They concluded that Fickian behavior does not occur when the flow is strictly parallel to the stratification. Fickian behavior is eventually reached if transverse local dispersion takes place or the flow is not strictly parallel to the stratification.
Gelhar and Axness (1983) analyzed dispersive mixing in the 3D domain using a spectral method. In this approach the approximate stochastic differential equation describing the concentration perturbation c' is as follows

(4.3.14)

where the concentration c and the specific discharge q_i are represented as

c = \bar{c} + c', \qquad q_i = \bar{q}_i + q_i', \quad i = 1, 2, 3    (4.3.15)

Here x₁ is the direction of flow, α_L is the longitudinal dispersivity and α_T is the transverse dispersivity. The perturbed specific discharge and equation 4.3.14 are transformed into the spectral domain, and the macrodispersivity tensor A_ij is derived as

A_{ij} = \int \frac{S_{q_i q_j}(\mathbf{k}) \, d\mathbf{k}}{\left[\, i k_1 + \alpha_L k_1^2 + \alpha_T (k_2^2 + k_3^2) \,\right] \bar{q}^{\,2}}    (4.3.16)

where S_{q_i q_j}(k) is the cross spectrum of the i and j components of the specific discharge, and k = (k₁, k₂, k₃) is the wave number vector.
The results of this analysis show that the macrodispersion process is convection controlled, and the theory, which treats the asymptotic condition of displacements, indicates that a classical Fickian transport relationship is valid for large scale displacements. Analytical expressions for the spatially invariant macrodispersion and the effective hydraulic conductivity were found explicitly. Application of these expressions to field problems is quite restricted, because the equations do not apply in the near source regions, they cannot deal with forcing functions, and the analysis relies on small parameter variation. Also, in order to transform the perturbed specific flow quantity q' and the solute transport equation into the frequency domain, an infinite domain is assumed. Conditioning in the spectral domain was suggested by Ababou (1988) to account for the finite domain effects. This approach made it possible to explicitly quantify the relative amounts of uncertainty and spatial variability with respect to the scale of the problem.
Graham and McLaughlin (1989) proposed a numerical approach to obtain the first and second moments of the transient concentration. A set of partial differential equations for unconditional ensemble moments was derived using the first order perturbation technique. This method is computationally intensive for solving the ensemble moments, but it is more general than analytical approaches. The macrodispersivity derived is spatially variant, and the mean concentration distribution is not necessarily Fickian, because the derivation of the macrodispersivity is based on a volume averaging approach rather than an advection-dispersion type of equation. They found that the concentration uncertainty was highest in the areas where the mean concentration was large, and the magnitude of uncertainty decreased with time as the plume dispersed. An increasing correlation scale produced a more dispersed mean plume as well as increasing standard deviations of concentration throughout the domain. The random deviations from the ensemble mean were found to be correlated for longer distances along the streamlines of the mean velocity than across them.
Dagan (1982, 1984, 1987) developed a theoretical model to predict contaminant transport in heterogeneous aquifers. Based on Taylor's theory of diffusion by continuous motions, he related the expected concentration to the probability density function (PDF) f of a particle's trajectory function X. The relationship is represented as

\langle \Delta c(\mathbf{x}, t) \rangle = \frac{dM}{n} \, f(\mathbf{x}, t)    (4.3.17)

where ⟨Δc(x, t)⟩ is the expected value of the concentration distribution of a particle, dM is a solute particle regarded as an individual infinitesimal body of mass, n is the effective porosity, and f is the PDF of the particle's trajectory function X. In this equation the expected value and covariance of X are proportional to the first and second spatial moments of ⟨Δc⟩.
By limiting the scope to statistically stationary properties, he obtains the relationship between the particle-displacement covariance and the Eulerian velocity field as

\frac{1}{2} \frac{d^2 X_{jl}}{dt^2} = \langle u_j(0, 0) \, u_l(\mathbf{X}_t, t) \rangle    (4.3.18)

where X_jl is the covariance of the trajectory function X, ⟨u_j(0, 0) u_l(X_t, t)⟩ is the covariance of the Eulerian velocities u_j and u_l, and X_t is the total displacement of a particle. The velocities u_j and u_l are connected to the heterogeneous structure of the subsurface by using a first order approximation of the flow equation and solving the approximated equation by Green's function. The two and three dimensional analytical equations for the covariance X_jl are derived by integrating equation 4.3.18 twice.
Dagan's conclusions from the analysis of his model are: (1) the solute concentration determined in terms of its expected value and variance is subject to a high degree of uncertainty, and this uncertainty is greatly reduced if the solute input zone is large compared to the transmissivity integral scale, and (2) the diffusion equation is obeyed after the solute has traveled a distance significantly larger than the transmissivity integral scale. Dagan (1982) further discussed the question of the applicability of ergodic arguments to solute transport in heterogeneous formations. In most cases the concentration in the actual realization and its ensemble average are close and the ensemble variance is small, because the solute volume is sufficiently large compared to the correlation scale. For sufficiently large L/I_y (L is the distance of solute travel and I_y is the hydraulic conductivity integral scale) the spread in the flow direction is governed by a Fickian equation with macrodispersivity, which is much larger than the pore scale dispersion.
One significant achievement of this analysis is the elimination of the dispersion term by assuming the spreading of the solute plume to be a function of the heterogeneity of the hydraulic conductivity and of the integral scale. This approach to modeling the transport process is based more on the physical considerations of the subsurface environment than on simply using a dispersion coefficient obtained from the comparison of field data to a numerical transport model. There are several possible criticisms of this model: (1) spreading of the plume is dependent not only on the heterogeneities of hydraulic conductivity but also on other factors; for example, diffusion may become a significant factor if transport occurs over a long period of time. (2) Variability of the subsurface parameters is dependent not only on the inherent heterogeneity of the subsurface media, but also on the lack of measured data. If the variability of the data is dominated by lack of data, then the solution obtained from this model does not have a physical meaning. (3) On a regional scale, where the transmissivity correlation scale is usually a few orders of magnitude larger than the laboratory or local scales, solutions based on the small scale transmissivity may not be warranted. And (4) the first order approximation of the flow equation is accurate only for very small variances. Higher order effects on the solution need to be considered if the variability of the hydraulic conductivity is large (Dagan, 1988), and conditioning of the data using measurements of hydraulic conductivity and hydraulic head at a few points has been suggested to resolve items (2) and (3) (Dagan, 1984, 1987).
4.4 Reliability Methods
4.4.1. First-Order Second-Moment Reliability Methods
Let us go back to the Darcy equation example used in the introduction to this chapter and introduce some terminology. Consider the function

g(\mathbf{x}) = q_{targ} - x_1 x_2    (4.4.1)

where x = {x₁, x₂} = {K, i}. This function is called the failure function or limit state function in reliability method theory. It is essentially based on equation 4.1.1, and its purpose is to divide all x values into three categories (figure 4.4.1):

1. g(x) > 0: the target value is not exceeded.
2. g(x) = 0: the solution is the target value.
3. g(x) < 0: the target value is exceeded.

Figure 4.4.1. The failure function and its characteristics (the safe region g(x) > 0 in the K-i plane).
The failure function is not unique for a given probability problem. For example, the failure function g(x) = q_targ/x₁ − x₂ will also divide all x values into the above three categories and describes the same problem. The curve defined by the equation g(x) = 0 is called the limit state surface, while the region where g(x) < 0 is the "unsafe" region and g(x) > 0 defines the "safe" region*.

Replacing x in the failure function with the vector of random variables X gives us the performance function g(X) = q_targ − X₁X₂. The performance function is also known as the "safety margin"; we denote it by Z, and it reflects the arbitrariness introduced by the choice of failure function.

* The First Order Reliability method was formulated for solving structural safety problems, and the terminology reflects this.
Z = g(\mathbf{X})    (4.4.2)

Using the above theory, any probability problem can be written as

P_f = \int_{g(\mathbf{x}) \le 0} f_X(\mathbf{x}) \, d\mathbf{x}    (4.4.3)

where we have made use of the equality between the failure function and Z.
Evaluation of the probability of failure P_f is not possible when only the first two statistical moments of X are known. The known information is limited in most applications to the mean vector M = {μ_i} and the covariance matrix C = [ρ_ij σ_i σ_j], where μ_i and σ_i are the mean and standard deviation of X_i and ρ_ij is the correlation coefficient between X_i and X_j. Cornell (1969) defined an alternative measure of reliability, known as the reliability index, which requires only the above information to be computed. This index is denoted by β and is defined in terms of the mean and standard deviation of the safety margin as

\beta = \frac{\mu_Z}{\sigma_Z}    (4.4.4)

In the case of a linear performance function, the mean and the standard deviation of the safety margin can be evaluated exactly. Let us take an example: a linear performance function can be expressed as

g(\mathbf{X}) = a_0 + \sum_{i=1}^{n} a_i X_i    (4.4.5)

The mean and the variance are then given by the equations

\mu_Z = a_0 + \sum_{i=1}^{n} a_i \mu_i = g(\mathbf{M})    (4.4.5a)

\sigma_Z^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j \rho_{ij} \sigma_i \sigma_j    (4.4.5b)
If the performance function is nonlinear, then the full joint probability distribution is needed, and even then it might be difficult to compute the two moments. An approximation, though, can be obtained by linearization of the failure function around a point m. This approach gives the following equations:

\mu_Z \approx g(\mathbf{m}) + \sum_{i=1}^{n} \frac{\partial g(\mathbf{m})}{\partial m_i} (\mu_i - m_i)    (4.4.6a)

\sigma_Z^2 \approx \sum_{i=1}^{n} \sum_{j=1}^{n} \frac{\partial g(\mathbf{m})}{\partial m_i} \frac{\partial g(\mathbf{m})}{\partial m_j} \rho_{ij} \sigma_i \sigma_j    (4.4.6b)

Obviously g(x) must be differentiable with respect to each variable x_i. These approximations are linearizations at a point m with coordinates (m₁, m₂, ..., m_n). Using the above equations for the approximation of the mean and variance of the safety margin, we can then proceed with the evaluation of the reliability index. The reliability index obtained is called a first-order second-moment (FOSM) reliability index. The name of this reliability index signifies that a "first order" (linear) approximation of the failure function is used with "second moment" statistical information. If the linearization point is the mean-value point, the reliability index is further called the mean-value first-order second-moment (MVFOSM) reliability index. Replacing m with M in equation 4.4.6a we can see that μ_Z ≈ g(M). An important problem with the FOSM method is that mutually consistent formulations of the performance function lead to different results. This indicates that the performance function and the linearization point must be standardized before this method can be used. Because of this problem, when using a FOSM reliability index we must specify the performance function that has been used and the point of linearization. If the mean-value point has been used for linearization, we must specify that it is a MVFOSM reliability index and again give the performance function.
Let us apply the above to the example presented in the introduction to this chapter. First we assign second order statistical information to the random variables X₁ and X₂:

\mu_{X_1} = 0.1, \quad \sigma_{X_1} = 0.1
\mu_{X_2} = 0.5, \quad \sigma_{X_2} = 0.5
\rho_{X_1 X_2} = 0.0    (4.4.8)

The performance function is g(X) = q_targ − X₁X₂, where we take q_targ = 0.1. The gradients at the mean point are

\frac{\partial g(\mathbf{X})}{\partial X_1} = -\mu_{X_2} = -0.5, \qquad \frac{\partial g(\mathbf{X})}{\partial X_2} = -\mu_{X_1} = -0.1    (4.4.9)

and μ_Z = 0.05, σ_Z = 0.07, resulting in β = 0.71. If the performance function is defined as g(X) = q_targ/X₁ − X₂, which is simply another way to express the same probability, then

\frac{\partial g(\mathbf{X})}{\partial X_1} = -\frac{q_{targ}}{\mu_{X_1}^2} = -10, \qquad \frac{\partial g(\mathbf{X})}{\partial X_2} = -1    (4.4.10)

resulting in μ_Z = 0.5, σ_Z = 1.12 and β = 0.45.
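The arbitrariness just demonstrated is easy to reproduce; the following sketch recomputes both MVFOSM indices from the second-moment data of equation 4.4.8:

```python
import numpy as np

mu = np.array([0.1, 0.5])      # means of X1 and X2 (equation 4.4.8)
sigma = np.array([0.1, 0.5])   # standard deviations; variables uncorrelated
q_targ = 0.1

# Formulation 1: g(X) = q_targ - X1 X2
grad = np.array([-mu[1], -mu[0]])              # gradients at the mean, eq. 4.4.9
mu_z = q_targ - mu[0] * mu[1]
sigma_z = np.sqrt(np.sum((grad * sigma) ** 2))
print("beta =", round(mu_z / sigma_z, 2))      # 0.71

# Formulation 2: g(X) = q_targ / X1 - X2
grad = np.array([-q_targ / mu[0] ** 2, -1.0])  # gradients at the mean, eq. 4.4.10
mu_z = q_targ / mu[0] - mu[1]
sigma_z = np.sqrt(np.sum((grad * sigma) ** 2))
print("beta =", round(mu_z / sigma_z, 2))      # 0.45
```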
Before seeing how we can overcome the above problem, we should examine the reliability index to see what makes it a good measure of reliability.

In figure 4.4.2A we have defined the one dimensional linear performance function g(X) = 10 − 2X. The random variable X has been assigned second-order statistical information μ_X = 4, σ_X = 2, as can be seen in the figure, and the performance function defines the problem P[Z ≤ 0], where Z = g(X), or equivalently P[X ≥ 5].
The reliability index in the first-order second-moment method is defined as β = μ_Z/σ_Z, and in figure 4.4.2A we can see how μ_X, σ_X are related to μ_Z, σ_Z for this one-dimensional problem. The mean and standard deviation can be found mathematically as

\mu_Z = g(\mu_X) = 10 - 2 \times 4 = 2

and

\sigma_Z = |-2| \, \sigma_X = 2 \times 2 = 4

Figure 4.4.2. Illustration of a one dimensional linear performance function (A: the function g(x) = 10 − 2x with μ_X = 4, σ_X = 2; B: the corresponding densities and reliability index).
The reliability index is therefore β = 2/4 = 0.5. As we can see in figure 4.4.2B, the reliability index β is a measure of the distance between the mean μ_Z and the origin (where Z = 0), expressed in units of uncertainty, namely standard deviations. In other words, since the mean value μ_Z is the "most likely" value to occur and the origin represents the target value, the reliability index β tells us how far away the most likely value is from the target value in terms of uncertainty, and thus it is a measure of probability. A small reliability index means that the target value is very near the most likely value, and therefore the probability associated with this event is large, while in the opposite case the probability is small. If the most likely value exceeds the target value, then β is negative and measures the certainty with which the target value is exceeded.

For illustration purposes, an imaginary probability density function f_Z for Z has also been included in figure 4.4.2B. The probability of "failure" P_f is equal to P[Z ≤ 0], and one can see how the reliability index is associated with this probability.
The FOSM method is formulated on Z in order to find the probability P[Z ≤ 0], but since this is a one dimensional problem, the probability P_f can also be formulated directly on X. In this case P_f = P[X ≥ 5] = P[Z ≤ 0], which can easily be confirmed by the reader. This formulation should give the same reliability index, since the probability is the same as before. The reliability index is formulated as

\beta = \frac{\text{distance between most likely and target value}}{\text{unit of uncertainty}}    (4.4.11)

As can be seen in figure 4.4.2B, the distance between the most likely value μ_X and the target value (5) is equal to 1. The unit of uncertainty is the standard deviation of X, σ_X. The reliability index is therefore β = 1/2 = 0.5, which is the same as the one obtained for Z. Again, the probability density function f_X of X is illustrated in figure 4.4.2B.
4.4.2. The Hasofer and Lind Reliability Index
If we are going to use the reliability index as a measure of probability, we have to find a way to evaluate β which will yield the same result for mutually consistent formulations of g(X).

In figure 4.4.3 we have defined the one dimensional non-linear performance function g(X) = 10/X − 2, which is simply another formulation of the linear performance function g(X) = 10 − 2X seen in figure 4.4.2. Using a non-linear performance function produces many problems when trying to find the reliability index. The most important problem is that we cannot find the standard deviation σ_Z.

In the linear case of figure 4.4.2, the standard deviation of Z was defined from the geometrical relationships
\mu_Z + \sigma_Z = g(\mu_X - \sigma_X) \;\Rightarrow\; \sigma_Z = g(\mu_X - \sigma_X) - \mu_Z

or    (4.4.12)

\mu_Z - \sigma_Z = g(\mu_X + \sigma_X) \;\Rightarrow\; \sigma_Z = \mu_Z - g(\mu_X + \sigma_X)

In the non-linear case, however, these relationships are not equivalent and will give different standard deviations, which indicates that higher statistical information is needed to define the uncertainty of this problem.
Figure 4.4.3. A one dimensional non-linear failure function g(x) = 10/x − 2 with μ_X = 4, σ_X = 2. A: mean value approach (tangent g_t(x) = 3 − 0.625x). B: Hasofer and Lind approach.
As mentioned in the previous section, an approximation of the standard deviation can be obtained by replacing g(x) with the tangent at the mean value point. This approach can be seen in figure 4.4.3A. The tangent performance function at the mean value point is g_t(X) = 3 − 0.625X. Using now g_t(x) instead of g(x), we can find

\sigma_Z \approx \left[ (-0.625)^2 \, \sigma_X^2 \right]^{1/2} = 1.25    (4.4.13)

Since the mean value point is common to g(x) and g_t(x), we can find the mean from either one of the failure functions as

\mu_Z \approx g_t(\mu_X) = g(\mu_X) = 0.5    (4.4.14)

The reliability index can then be found to be β ≈ 0.5/1.25 = 0.4, while the correct value is 0.5, as found in the previous section.
If we use the performance function g_t(X) and formulate the problem on X, we find that the probability associated with this formulation is given as P[X ≥ 4.8] and β = 0.4. So by using the approximate failure function g_t(x) we have altered the target value (which should be equal to 5) and therefore obtained a different reliability index.

In figure 4.4.3B we have tried to find a linearization point on g(x) which will result in a g_t(x) that does not alter the target value when formulated on X. Since x_targ must be 5, the only g_t(x) which results in g_t(5) = 0 is the one which is tangent at this point, namely the point (5, 0). As can be seen in figure 4.4.3B, linearization at this point produces the failure function g_t(x) = 2 − 0.4x. By design, formulating the probability on X and using this g_t(x) function will result in β = 0.5. Formulating now the problem on Z with this g_t(x) function, we find

\mu_Z = g_t(\mu_X) = 2 - 0.4 \times 4 = 0.4, \qquad \sigma_Z = 0.4 \, \sigma_X = 0.8

and β = 0.4/0.8 = 0.5, which is the correct result.

From all the above approaches to finding the reliability index for a non-linear performance function, it seems that the best general approach is to formulate on X and use the g_t(x) which is tangent at the point x_t for which g(x_t) = 0, as done in figure 4.4.3B.
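The effect of the linearization point can be sketched in a few lines for g(X) = 10/X − 2: linearizing at the mean reproduces β = 0.4, while linearizing at the root of g (the design point) recovers the correct β = 0.5.

```python
mu_x, sigma_x = 4.0, 2.0
g = lambda x: 10.0 / x - 2.0
dg = lambda x: -10.0 / x ** 2          # analytical derivative of g

def beta_linearized(x0):
    """FOSM beta using the tangent g_t of g at the point x0."""
    a1 = dg(x0)                        # slope of the tangent
    a0 = g(x0) - a1 * x0               # intercept, so g_t(x) = a0 + a1 x
    return (a0 + a1 * mu_x) / (abs(a1) * sigma_x)

print(beta_linearized(mu_x))           # mean-value point: 0.4
print(beta_linearized(5.0))            # g(5) = 0, the design point: 0.5
```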
If the performance function is non-linear and two-dimensional, we would not be able to formulate on X = {X₁, X₂} and use g_t(x), because the target value x_targ, instead of being a point, will be a curve (in multidimensional cases it will be a surface, the limit state surface), and we have to decide at which point of this curve we are going to linearize. Finding the correct linearization point is a problem analyzed by Hasofer and Lind (1974).
Let us assume that we have defined the same non-linear performance function of figure 4.4.3, but this time μ_X = 0 and σ_X = 1. Formulating the probability problem on X will be, as before, P[X ≥ 5], and the reliability index for this case will be

\beta = \frac{5 - 0}{1} = 5

In other words, when μ_X is equal to zero and σ_X equal to one, the reliability index is equal to the distance from the origin to the target point.
Let us now define a non-linear, two-dimensional performance function and then trans-
form X = { Xl, X2 } into Y = { YI, Y2 } in such a way that YI, Y2 have both a mean
value equal to zero, standard deviations equal to one and are uncorrelated. The trans-
formations will also change g(x) to a function G(y). Setting now G(y) = 0, will define a
curve and we have to decide which point on this curve should be used for finding the
reliability index. Using the observation made above, we see that the reliability index
associated with a point on the curve G(y) = 0, is simply equal to the distance from the
origin to this point. So logically,. if we find the point of G(y) = 0 which is nearest to
the origin, this point will have the smallest reliability index and therefore the highest
probability content. The reliability index thus defmed is called the Hasofer Lind reli-
ability index, while the point on G(y) = 0 which is nearest to the origin, is called the
design point.
Hasofer and Lind defined the reliability index for a general performance function as the
distance from the origin to the nearest point on the limit state surface in standard nor-
mal space. The standard normal space consists of uncorrelated variates with zero mean
and unit standard deviations. Furthermore (Der Kiureghian and Liu, 1986): "For all
hyperplanes of equal distance from the origin, the probability content P is constant
within a second moment representation i.e., the Tchebycheff bound, regardless of the
orientation ofthe hyperplane." (See also figure 4.4.4.)
In the one dimensional problem that we saw in figure 4.4.3B, the limit state surface is reduced to a point, and of course this point is also the design point. We saw that by linearizing at this point and formulating the probability problem on the safety margin Z, we found the correct reliability index. In general, if we use the design point as a linearization point, the reliability index will remain unchanged for mutually consistent formulations of the performance function.

The Hasofer and Lind reliability index is actually a FOSM reliability index with a specific linearization point (the design point in the original space). In the case where the performance function is linear, β_MVFOSM = β_HL. The ambiguity in the value of the first-order reliability index is thus resolved. Hereafter, when referring to the calculation of the reliability index, we shall mean the calculation of the Hasofer and Lind reliability index.
The transformation to standard normal space is usually given as (Der Kiureghian and Liu, 1986)

\mathbf{Y} = \mathbf{L}^{-1} \mathbf{D}^{-1} (\mathbf{X} - \mathbf{M})    (4.4.15)

where D = diag[σ_i] is a diagonal matrix of the standard deviations and L is the lower triangular decomposition of the correlation matrix R = [ρ_ij], i.e. R = L Lᵀ.
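A minimal sketch of transformation 4.4.15, using an assumed three-variable correlation matrix; the simulated check at the end confirms that Y comes out with zero mean and unit covariance (up to sampling error).

```python
import numpy as np

M = np.array([1.0, 2.0, 0.5])                  # mean vector (illustrative)
D = np.diag([0.2, 0.5, 0.1])                   # diag of standard deviations
R = np.array([[1.0, 0.3, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.2, 1.0]])                # correlation matrix

L = np.linalg.cholesky(R)                      # lower triangular, R = L L^T

rng = np.random.default_rng(0)
X = rng.multivariate_normal(M, D @ R @ D, size=100_000)   # cov(X) = D R D
Y = np.linalg.solve(L, np.linalg.solve(D, (X - M).T)).T   # eq. 4.4.15

print(Y.mean(axis=0).round(2))     # ~ [0, 0, 0]
print(np.cov(Y.T).round(2))        # ~ identity matrix
```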
The Hasofer and Lind reliability index requires determination of the point on the limit state surface nearest to the origin in standard normal space. In terms of the coordinates of this point, the reliability index is given by the inner product

\beta = \boldsymbol{\alpha}^* \cdot \mathbf{y}^*    (4.4.16)

where y* denotes the coordinates of the design point in standard normal space (x* in the original space) and α* is the negative unit gradient vector at the design point, also in standard normal space.

After evaluating the reliability index, an ad hoc approximation of the probability of failure is given by

P_f \approx \Phi(-\beta) = 1 - \Phi(\beta)    (4.4.17)

where Φ(·) is the standard normal CDF.
This approximation assumes that the probability density function of Z is the normal distribution. The usual justification for this assumption is that, since only second-order statistical information is known, Z can theoretically assume any value between −∞ and +∞. Therefore the normal distribution should give a fairly good estimate of the probability.
4.4.3. The First Order Reliability Method
The first order reliability method (FORM) is an extension of the FOSM-HL method. A
first order (linear) approximation of the failure function again is used, but the reliability
index is not calculated only on the basis of second order statistical information. FORM
can also use marginal distribution information for the random variables or even the
joint probability distribution if known. The transformation of the random variables in
standard normal space must be consistent with all the available statistical information.
If the joint distribution is given, then the transformation can be written as
Y = T(X)    (4.4.18)
In the case where the joint distribution is normal, the transformation used for the FOSM-HL method applies (equation 4.4.15), and for a given hyperplane the probability is given exactly by Φ(-β), which is only a function of the distance between the hyperplane and the origin in standard normal space.
In the general case the limit state surface will not be a hyperplane, but if we replace the limit state surface with a hyperplane tangent to the limit state surface at the design point, then the approximation P_f ≈ Φ(-β) will be good, as long as the limit state surface is not too nonlinear. In the case where the limit state surface is highly nonlinear at the design point, the approximation obtained by the tangent hyperplane will not be sufficient, and a higher order approximation will be required (SORM). Such a case is illustrated in figure 4.4.4, where G(y) = 0 is very curved at the design point. To account for such cases, a generalized reliability index has been defined, according to which the probability of failure must be equal to the area bounded by f_U(u) and G(y) = 0, with reference to figure 4.4.4. The tangent hyperplane at the design point, as can be seen in figure 4.4.4, will slightly overestimate the probability of failure in the illustrated case. We can also see that the first order approximation will be better when β_HL has high values than when it has small values, but basically it is the curvature of G(y) = 0 at the design point which decides the accuracy of the first order approximation.
Figure 4.4.4. Illustration of the rotational symmetrical property of standard normal space; the unsafe region is G(y) < 0 (based on Der Kiureghian and Liu, 1986).
As already mentioned, the joint distribution is usually unknown, and the statistical information is in most cases limited to second moment information for the random variables. With only second order statistical information given, the transformation to standard normal space is not possible.
When only second order information is given, FORM is equivalent to the FOSM-HL method. Let us first consider the case where the random variables are uncorrelated. Each random variable is assumed normally distributed, and the following transformation is performed for each random variable:
N(μ, σ) → N(0, 1)    (4.4.19)
If the random variables are correlated, then the transformation given by equation 4.4.15 is used, which assumes that the joint distribution is normal. Actually, the transformation 4.4.19 results from the transformation 4.4.15 when the random variables are uncorrelated.
From the above we see that when only second order statistical information is given for the random variables, we must also make the assumption of a jointly normal distribution for the random variables in order to perform the transformation to standard normal space.
In many cases the marginal distributions of the random variables are known or can be assumed with reasonable confidence. Let us first assume that the random variables are uncorrelated. If the marginal distribution for each random variable is known, then the transformation is given by

f_X(x) → N(0, 1)    (4.4.20)
where f_X is the marginal distribution of the random variable X. The transformation 4.4.20 must be carried out in such a way that the probability contents of the variables X_i remain unchanged during the transformation, in order for the overall probability content to be well approximated in standard normal space. The transformation 4.4.20 can be performed by considering the following relationship separately for each random variable:

Φ(y_i) = F_{X_i}(x_i)    (4.4.21)

where Φ(y) is the CDF of the standard normal distribution (also, φ(x) is usually used for the standard normal PDF) and F_X is the marginal CDF of the random variable X. The transformation is thus

y_i = Φ^{-1}(F_{X_i}(x_i)),   i = 1, ..., n    (4.4.22)
This transformation finds the standard normal Y_i which best describes the marginal X_i, and allows us to approximate the reliability index within a factor of approximately 2 to 5 (Madsen et al., 1986).
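As an illustrative sketch of equation 4.4.22 (not PAGAP code), the mapping for a single variable with a known marginal, here a lognormal chosen purely as an example, can be written with scipy's distribution objects:

```python
from scipy.stats import norm, lognorm

def to_standard_normal(x, marginal):
    """y = Phi^{-1}(F_X(x)), equation 4.4.22."""
    return norm.ppf(marginal.cdf(x))

def from_standard_normal(y, marginal):
    """Inverse mapping x = F_X^{-1}(Phi(y))."""
    return marginal.ppf(norm.cdf(y))

# Example: a lognormal variable with median 1.0 and log-standard deviation 0.5
X = lognorm(s=0.5, scale=1.0)
y = to_standard_normal(2.0, X)
print(y, from_standard_normal(y, X))  # round trip recovers 2.0
```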
If the random variables are correlated, then the Rosenblatt transformation can be used. The transformation is similar to 4.4.18 and is defined as

y_1 = Φ^{-1}(F_{X_1}(x_1))
y_2 = Φ^{-1}(F_{X_2}(x_2 | x_1))
...
y_n = Φ^{-1}(F_{X_n}(x_n | x_1, x_2, ..., x_{n-1}))    (4.4.23)

where F_{X_i}(x_i | x_1, x_2, ..., x_{i-1}) is the distribution function of X_i conditional upon (X_1 = x_1, X_2 = x_2, ..., X_{i-1} = x_{i-1}):

F_{X_i}(x_i | x_1, x_2, ..., x_{i-1}) = [∫_{-∞}^{x_i} f_{X_1...X_i}(x_1, x_2, ..., x_{i-1}, t) dt] / f_{X_1...X_{i-1}}(x_1, x_2, ..., x_{i-1})    (4.4.24)
The transformation described by equations 4.4.23 is carried out in a stepwise manner. First we transform x_1 into the standard normal variable y_1. Then all conditional variables of X_2 | X_1 = x_1 are transformed into a standard normal variable, and so forth.
The inverse transformation is obtained from the inverse equations

x_1 = F_{X_1}^{-1}(Φ(y_1))
x_2 = F_{X_2}^{-1}(Φ(y_2) | x_1)
...    (4.4.25)
A much simpler approach is known as the principle of normal tail approximation. According to this approximation we try to substitute the distribution of each random variable X_i with a normal distribution N(μ_i, σ_i), which can then be transformed to standard normal space by means of the transformation 4.4.19.
The transformation is based on the following relationships:

Φ((x_i - μ_i)/σ_i) = F_{X_i}(x_i)
(1/σ_i) φ((x_i - μ_i)/σ_i) = f_{X_i}(x_i)    (4.4.26)

The mean and standard deviation are then given from the equations:

σ_i = φ[Φ^{-1}(F_{X_i}(x_i))] / f_{X_i}(x_i)
μ_i = x_i - σ_i Φ^{-1}(F_{X_i}(x_i))    (4.4.27)
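Equations 4.4.27 translate directly into code; the sketch below (illustrative only, with a lognormal marginal as an arbitrary example) returns the equivalent normal parameters at a linearization point x:

```python
from scipy.stats import norm, lognorm

def normal_tail(marginal, x):
    """Equivalent normal (mu, sigma) at point x, equations 4.4.27."""
    z = norm.ppf(marginal.cdf(x))            # Phi^{-1}(F_X(x))
    sigma = norm.pdf(z) / marginal.pdf(x)    # phi(z) / f_X(x)
    mu = x - sigma * z
    return mu, sigma

print(normal_tail(lognorm(s=0.5, scale=1.0), x=2.0))
```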
The above principle can also be applied to cases where the random variables are dependent, but then the joint PDF of X must be known. Der Kiureghian and Liu (1986) developed a joint PDF which is consistent with marginal distributions and correlation coefficients. According to this model the joint PDF is

f_X(x) = φ_n(z, R_0) · [f_{X_1}(x_1) f_{X_2}(x_2) ··· f_{X_n}(x_n)] / [φ(z_1) φ(z_2) ··· φ(z_n)]    (4.4.28)

where f_{X_i}(x_i) = d[F_{X_i}(x_i)]/dx_i is the marginal PDF of X_i, z_i = Φ^{-1}[F_{X_i}(x_i)] in which the superscript -1 denotes the inverse function, φ(z_i) is the standard normal PDF evaluated at z_i, and φ_n(z, R_0) is the n-dimensional normal density of zero means, unit standard deviations and correlation matrix R_0. The elements ρ_{0,ij} of R_0 are obtained from an integral relation as follows:
ρ_ij = ∫∫ ((x_i - μ_i)/σ_i)((x_j - μ_j)/σ_j) φ_2(z_i, z_j, ρ_{0,ij}) dx_i dx_j    (4.4.29)

where ρ_ij is the correlation coefficient between the random variables X_i and X_j with known marginal distributions, and φ_2 is the bivariate normal PDF of zero means, unit standard deviations and correlation coefficient ρ_{0,ij}. Finding ρ_{0,ij} is tedious, but fortunately Der Kiureghian and Liu (1986a) give semi-empirical formulas of the form ρ_{0,ij} = F·ρ_ij for different pairs of marginal distributions. Considering pairs with combinations of normal and lognormal distributions, which are usual in hydrogeology, the formulas given are exact. The transformation into standard normal space is considerably simplified with the use of the joint PDF proposed by Der Kiureghian and Liu, since it uses only marginal distributions instead of the conditional distributions used by the Rosenblatt transformation. The Rosenblatt transformation, however, does not make any assumption as to the joint PDF and is therefore more general. The Der Kiureghian and Liu joint PDF is equivalent to the joint normal PDF used when only second order statistical information is given.
With the use of the normal tail approximation coupled to the Der Kiureghian and Liu joint PDF, we can transform the random variables into standard normal space. We use equation 4.4.15 by simply replacing R with R_0 and using the normal tail mean and standard deviation μ_i, σ_i (given from equations 4.4.27), instead of the mean and variance assigned to each random variable. It is important to notice that R_0 needs to be calculated only once, but μ_i and σ_i are dependent on the linearization point, meaning that if different linearization points are considered, then each point results in a different pair of μ_i and σ_i for each random variable.
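Putting the pieces together, the transformation of a correlated non-normal vector at a given linearization point amounts to the normal tail step followed by equation 4.4.15 with R replaced by R_0. The following sketch assumes R_0 has already been computed and reuses the normal_tail helper sketched above:

```python
import numpy as np

def to_standard_space(x, marginals, R0, normal_tail):
    """y = L^{-1} D^{-1} (x - mu), eq. 4.4.15, with point-dependent mu_i, sigma_i."""
    x = np.asarray(x, dtype=float)
    mu = np.empty_like(x)
    sigma = np.empty_like(x)
    for i, marginal in enumerate(marginals):
        mu[i], sigma[i] = normal_tail(marginal, x[i])
    L = np.linalg.cholesky(R0)               # R0 = L L^T, L lower triangular
    return np.linalg.solve(L, (x - mu) / sigma)
```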
In summary, the first order reliability method is an extension of the FOSM-HL approach. The first order second moment approach can use only second order statistical information and assumes a jointly normal distribution for the random variables. The first order reliability method can also use marginal distribution information for the random variables, or even the full joint PDF if known.
4.4.4. Second Order Reliability Method
We have already mentioned that approximating the probability of failure with the use of the FORM reliability index will not give good results when the performance function is nonlinear at the design point. A better approximation is obtained if a second order approximation is used for the performance function. This is typically done by fitting a paraboloid to the performance function at the design point with the use of the curvatures of the performance function at the design point.

The simplest SORM approach has been proposed by Breitung (1984) and is given by the relationship

P_f ≈ Φ(-β) ∏_{i=1}^{n-1} (1 + βκ_i)^{-1/2}    (4.4.30)

where β is the reliability index defined as in the FORM approach, and κ_i denote the main curvatures of the limit state surface at the design point.
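Equation 4.4.30 is straightforward to evaluate once β and the main curvatures are available; a minimal sketch (illustrative, not PAGAP code; valid only where 1 + βκ_i > 0):

```python
import numpy as np
from scipy.stats import norm

def breitung(beta, curvatures):
    """SORM estimate P_f = Phi(-beta) * prod_i (1 + beta*kappa_i)^(-1/2), eq. 4.4.30."""
    kappa = np.asarray(curvatures, dtype=float)
    return norm.cdf(-beta) / np.sqrt(np.prod(1.0 + beta * kappa))
```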
The point-fitted paraboloid

The approximating paraboloid presented here is based on the work of Der Kiureghian et al. (1987). For convenience we consider the second order approximation in a rotated standard space Y' in which the y_n' axis passes through the design point. This can be achieved by an orthogonal transformation Y' = RY, where the n'th row of the transformation matrix R is selected to be equal to the unit normal vector α.

According to this method a paraboloid is defined in terms of a set of fitting points on the limit state surface in the neighborhood of the design point. These 2(n-1) points are selected along the coordinate axes of the rotated space in the manner described in figure 4.4.5.
Figure 4.4.5. The fitting point paraboloid, showing the tangent plane, the limit state surface, the semiparabolas through the fitting points, the intersecting parabola and the design point y* (based on Der Kiureghian et al., 1987).
The approximating paraboloid is defined through the expression

y_n' = β + (1/2) Σ_{i=1}^{n-1} κ_i (y_i')²    (4.4.31)

where κ_i are the principal curvatures. These curvatures are determined in terms of the fitting points. Considering again figure 4.4.5, we can define two semiparabolas which are tangent to the limit state surface at the design point and pass through the fitting points. The intersecting parabola shown in figure 4.4.5 represents the curve obtained from the previous equation where only the ith term has been included. One way of defining the intersecting parabola is as the weighted average of the two semiparabolas. The curvatures are determined from the equation

(4.4.32)

where κ_i+ and κ_i- are the curvatures of the semiparabolas.
Once the curvatures have been found we can use the algorithm presented by Tvedt (1988), which is based on an exact formula for the probability content of a parabolic set

(4.4.33)

where i = √-1. Using now the saddle point method, Tvedt (1988) gives the following procedure for obtaining the probability estimate. Using the trapezoid rule, the integral in the above equation is evaluated as

(4.4.34)

where u_s is the saddle point found by solving

dψ(u)/du = 0    (4.4.35)

where

ψ(u) = (1/2)(u + β)² - (1/2) Σ_{j=1}^{n-1} ln(1 + 2κ_j u)

and ∏_{j=1}^{n-1} (1 + 2κ_j u)^{-1/2} is the scaling factor used in the quadrature equation.
4.4.5. Finding the Design Point

If the limit state surface is a hyperplane, the design point can be found by means of vector theory. In the general case though, finding the design point will require an iteration process.
The iteration algorithm usually employed to find the design point is the so-called Rackwitz and Fiessler algorithm (RF). This algorithm is very fast; it usually converges in 3 to 20 steps. However, in many cases it will not converge. The mathematics of this algorithm are relatively simple and manipulation of the algorithm is fairly easy. Suppose we have a point p = (p_1, p_2, ..., p_n) as a starting point. The tangent plane on the performance function at this point is given as

g_T(x) = g(p) + ∇g(p)·(x - p)    (4.4.36)

where ∇g(p) is the gradient vector estimated at point p.
The tangent hyperplane defined above is used to approximate the g(x) = 0 function, and therefore we need to define the g_T(x) = 0 function, which is simply obtained as

0 = g(p) + ∇g(p)·(x - p)    (4.4.37)

The distance between this hyperplane and the origin (i.e. the point o = (0, 0, ..., 0)) is given from the relationship

D = [g(p) + ∇g(p)·(-p)] / ‖∇g(p)‖    (4.4.38)

Rewriting the above equation and making some substitutions we obtain

D = g(p)/‖∇g(p)‖ - [∇g(p)/‖∇g(p)‖]·p = g(p)/‖∇g(p)‖ + α·p    (4.4.39)

where

α = -∇g(p)/‖∇g(p)‖

The vector α is the negative normalized gradient vector; it is normal to the plane defined in equation 4.4.37 and is directed towards the "unsafe region" of the performance function. The distance D can be written in terms of the point p_2, which is the closest point to the origin on the tangent hyperplane,

p_2 = D·α    (4.4.40)

Replacing the distance D with its equivalent from equation 4.4.39 we get

p_2 = [g(p)/‖∇g(p)‖ + α·p]·α    (4.4.41)

Equation 4.4.41 is the basis for the RF algorithm. At each step a new point is found, and the series p, p_2, p_3, ... converges to the design point p*. The reliability index is equal to the distance D, and in addition we obtain the normal vector α, which gives us sensitivity information about the design point.
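The iteration rule 4.4.41 is compact enough to sketch directly; g and grad_g stand for the performance function and its gradient in standard normal space (in PAGAP these evaluations come from the finite element model, see chapter 5). Illustrative code, not the PAGAP implementation:

```python
import numpy as np

def rf_design_point(g, grad_g, y0, tol=1e-6, max_iter=100):
    """Rackwitz-Fiessler iteration (equation 4.4.41); returns y* and beta."""
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        grad = grad_g(y)
        norm_grad = np.linalg.norm(grad)
        alpha = -grad / norm_grad                       # negative unit gradient
        y_new = (g(y) / norm_grad + alpha @ y) * alpha  # equation 4.4.41
        if np.linalg.norm(y_new - y) < tol:
            return y_new, np.linalg.norm(y_new)         # beta = distance to origin
        y = y_new
    raise RuntimeError("RF iteration did not converge")
```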
Kiureghian et al. (1989) have shown that in many cases the RF algorithm will not converge, and they attribute this behavior to special geometries occurring in the performance function. In particular, they demonstrate that often the algorithm starts pivoting between two points, as indicated in the following figure.
Figure 4.4.6. Theoretical case study showing a situation where the RF algorithm will not converge (based on Kiureghian et al., 1987).
In order to improve the RF algorithm and avoid the problems depicted in figure 4.4.6, Kiureghian et al. (1987) presented a modified algorithm where a non-negative merit function is introduced

(4.4.42)
in which c is a positive constant and y is the current point. The new iteration point is selected by a line search along the direction vector

d_k = (1/‖∇g(y_k)‖²)[∇g(y_k)·y_k - g(y_k)]·∇g(y_k)^T - y_k    (4.4.43)
until m(y) achieves a sufficient decrease. This algorithm does indeed solve the problem outlined in figure 4.4.6. It is, however, a very demanding algorithm, since during the line search in the d_k direction and the subsequent evaluation of the merit function m(y), the gradient of the performance function at that point must be estimated. In relation to figure 4.4.6, the d_k direction vector is the vector connecting the points (b,a) and (a,b). Since the performance function g(y) obtains its minimum value at the midpoint between the two pivoting points, and at the same point the first term in equation 4.4.42 becomes zero, the midpoint will be the next iteration point.

Experimentation with the RF algorithm for problems of the type depicted in figure 4.4.6 shows that the RF algorithm will only converge when the initial points used have coordinates (t,t), where t can be any number except 0, since this is a saddle point for the equations considered and the gradient is zero there. For all other initial points the RF algorithm will exhibit the behavior outlined in figure 4.4.6. The Kiureghian et al. modification corrects this, since the midpoint is chosen, which results in an iteration point with coordinates (t,t); thereafter their algorithm behaves exactly like the standard RF algorithm.
The Kiureghian et al. modified RF algorithm would be very effective if it did not require the evaluation of the gradient vector during the line search. Let us assume the gradient vector used in the merit function is the gradient estimated at the previous iteration point; in other words, we do not update the gradient vector during the line search in the d_k direction. In such a case the merit function would obtain a minimum somewhere between the midpoint and the point (b,a), assuming (a,b) is the previous iteration point. The first term in the merit function in such a case becomes zero at the point (b,a), while the performance function becomes zero at the midpoint. In other words, the point which is chosen when the gradient vector is not updated will depend on the value of the coefficient c in the merit function. If c has a very high value, then the next iteration point will be near the midpoint, while for a small value of c the next iteration point will be near the (b,a) point. This scheme works quite well. For the problem of figure 4.4.6, convergence is obtained when c has very large values, i.e. 100000.0, since this results in choosing the next iteration point near the midpoint value. However, this algorithm did not always manage to converge. By manipulating the freedom with which the next iteration point is chosen, a constant diversion is introduced which does not allow the performance function to obtain a value smaller than this diversion. In other words, the iterations result in points which move closer and closer to the design point, but at some specific value for the performance function, which depends on the value of c, the next iteration point can no longer move towards the design point, and subsequent iteration points remain at a constant distance from it.
In order to avoid the demands of the Kiureghian et al. RF algorithm, a different approach has been developed and used herein. Consider figure 4.4.7 and assume that our initial iteration point is P_1 with coordinates (a,b). Instead now of choosing the point P_2 = (b,a) as our next iteration point, as the standard RF algorithm does, we can choose a point P which lies on the tangent plane but is positioned somewhere between P_2 and the projection of P_1 on the same tangent plane. The point P_2 is found by using the RF iteration rule. The projection of P_1 on the tangent plane is easily found by considering that the distance between P_1 and the tangent plane is given by

D_1 = g(P_1)/‖∇g(P_1)‖    (4.4.44)

Multiplying this with the unit vector α and adding it to P_1, we obtain the projection point

P_1P = P_1 + [g(P_1)/‖∇g(P_1)‖]·α    (4.4.45)

Figure 4.4.7. Modified RF algorithm.
The coordinates of point P can now be found by first finding the vector from P_1P to P_2 and adding a fraction of it to the P_1P vector. This results in the following iteration rule

P = P_1P + θ(P_2 - P_1P)    (4.4.46)

The new iteration point will lie somewhere between the points P_2 and P_1P. For θ = 0.5 it will be in the middle, as shown in figure 4.4.7; for θ = 1.0 we obtain the standard RF algorithm, while for θ = 0.0 we obtain the projection point to the new tangent plane. Obviously, a condition is further required which will specify the best position for point P, i.e. the best value for θ. Presently, no such conditions have been developed. This modified RF algorithm has been used with θ = 0.5 and converges in many cases where the standard algorithm fails, but many problems will still not converge. Positioning the next iteration point nearer to the projection point P_1P makes the algorithm very slow, although better stability is observed. From equation 4.4.46 we see that if the algorithm converges, then P_n = P_{n-1}, which indicates that g(P_{n-1}) = 0. The algorithm will in general require more iteration steps than the conventional RF algorithm. In addition, different criteria must be employed in order to determine the convergence of the algorithm than those used for the RF algorithm.
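The θ-modified update of equation 4.4.46 replaces only the last line of the standard RF step; a sketch with the same conventions as the RF code above:

```python
import numpy as np

def modified_rf_step(g, grad_g, y, theta=0.5):
    """One theta-relaxed RF step (equation 4.4.46): theta=1 is standard RF,
    theta=0 returns the projection point P1P (equation 4.4.45)."""
    grad = grad_g(y)
    norm_grad = np.linalg.norm(grad)
    alpha = -grad / norm_grad
    p2 = (g(y) / norm_grad + alpha @ y) * alpha  # standard RF point (eq. 4.4.41)
    p1p = y + (g(y) / norm_grad) * alpha         # projection of y (eq. 4.4.45)
    return p1p + theta * (p2 - p1p)
```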
4.4.6. Sensitivity Information

The results obtained by the first-order reliability method depend on the statistical information assigned to each random variable and on the accuracy with which the design point is calculated. It is often of interest to know how these deterministic values affect the reliability index. The first-order reliability method can provide such sensitivity information.

In general, the sensitivity of the reliability index to a given deterministic value is given by the partial derivative of β with respect to this value. We have already calculated a sensitivity measure during the process of finding the design point. The reliability index is given from the relationship

β = y*·α* = Σ_{i=1}^{n} y_i* α_i*    (4.4.47)

where y* = (y_1, y_2, ..., y_n) is the design point in standard space and α* = (α_1, α_2, ..., α_n) is the unit vector. The partial derivatives of β with respect to the coordinates of the design point can easily be seen to be

∂β/∂y_i = α_i    (4.4.48)
The unit vector α* is thus a measure of the sensitivity of the reliability index to inaccuracies in the coordinates of the design point in standard space. The partial derivatives of the reliability index with respect to the coordinates of the design point in the original space can be obtained by using the Jacobian of the transformation from Y (standard normal) → X (original space) at the design point. This can be written in vector form as

∇_x β = α*·J_{y,x}    (4.4.49)

where J_{y,x} is the Jacobian matrix of the transformation from Y to X. Cawlfield and Sitar (1987) and Jang et al. (1990) used a sensitivity measure proposed by Der Kiureghian and Liu (1985), called the γ (gamma) sensitivity. The γ sensitivity is given by the relationship
γ = D∇_x β / ‖D∇_x β‖    (4.4.50)

and, as can be seen, it is obtained by first scaling the ∇_x β vector with the diagonal matrix of standard deviations D, and then normalizing the resulting scaled vector. This vector gives an indication as to the importance of each random variable in the original space, and measures the sensitivity of the reliability index to "equally likely" changes in the basic random variables. If the random variables are independent, then γ = ∇_y β.
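Equation 4.4.50 is a two-line computation; a sketch:

```python
import numpy as np

def gamma_sensitivity(grad_beta_x, sigma):
    """gamma = D * grad_x(beta), normalized (equation 4.4.50)."""
    scaled = np.asarray(sigma) * np.asarray(grad_beta_x)  # D = diag(sigma)
    return scaled / np.linalg.norm(scaled)
```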
The reliability index can be formulated directly in the original space (Madsen et al., 1986) and is given by the relationship

β = [(x - E[X])^T C_X^{-1} (x - E[X])]^{1/2}    (4.4.51)

where C_X is the covariance matrix (C_X = DRD). In this form it is possible to find the derivatives of the reliability index with respect to the second-order statistical information assigned to each random variable and thus obtain sensitivity measures. According to Madsen et al. (1986), in the general case, the sensitivities of the reliability index with respect to a distribution parameter p will be given from the equation

∂β(p)/∂p_i = (1/β) y*^T ∂T(x*, p)/∂p_i    (4.4.52)

where y* and x* denote the design point in standard normal and original space respectively, and T is the transformation* from X → Y, which can be written in the form

x(p) = T^{-1}(y, p)    (4.4.53)

and the transformation is carried out at the design point. p signifies that this is the set of parameters used for calculating the reliability index.

* The superscript T in y*^T denotes the transpose of the y* vector and has nothing to do with the transformation function T(·).

Likewise, if the sensitivities with respect to a performance function parameter need to be found (for example the sensitivity with respect to the target value), then

∂β(p)/∂p_i = [∂G(y*, p)/∂p_i] / ‖∇G(y*, p)‖    (4.4.54)

where G(y, p) is the failure function in the standard normal space, which is a function not only of the variables y_i but also of the parameters p_i.
4.4.7. Reliability Methods in Hydrogeology

The reliability methods in groundwater flow and pollution transport problems quantify the uncertainty of the estimated hydraulic head or concentration which is caused by the uncertainty in the hydrogeological input data.

Sitar et al. (1987) applied the first order reliability method (FORM) to subsurface flow and contaminant transport analysis and illustrated the capability of the method by presenting three examples. The performance functions were defined with the use of analytical equations, which later on were replaced by numerical equations by Cawlfield and Sitar (1988). The FORM algorithm used in these analyses differs from the conventional FORM algorithm which had been used by Dettinger and Wilson (1981), Devary and Doctor (1982) and Townley and Wilson (1985). The latter use the mean value first order reliability method, which linearizes at the mean values of the random variables and leads to inconsistent results.
Cawlfield and Sitar (1988) investigated the effects of spatial variability by performing a reliability analysis of steady state groundwater flow. The finite element method was used in the formulation of the first order reliability algorithm, and the element hydraulic conductivities and constant head nodes were considered to be random. Diverse correlation functions were used to assign correlation information to the element hydraulic conductivities, and the effects these had on the reliability results were presented.
Wagner and Gorelick (1987) have also used the first order reliability method in a groundwater management model in order to find the optimal pumping strategy that provides some assurance of the water quality. The mean value first order reliability method was used to estimate the statistical moments of the random variables. In the management model the finite element flow and transport model SUTRA was coupled with the multi-regression package STARPAC to estimate the mean and covariance parameters from the concentration data measured at different times and at several locations. The unknown parameters estimated were the effective porosity, the hydraulic conductivity, and the longitudinal and transverse dispersivity. The estimated parameters were input into the optimization model, where the constraint used was nonlinear and can be represented as

Prob(C_i ≤ C_i*) ≥ Π

where Π is the reliability level, which can vary from location to location and with time. The reliability model was to provide information on whether the concentration C_i exceeds the threshold concentration C_i*. The uncertainty model in this analysis did not incorporate the spatial variability and the distribution information of the input parameters.
Veneziano et al. (1987) proposed two computationally efficient methods for estimating the P-fractile pollutant concentration. The first method is referred to as the Modified Monte Carlo, which is computationally less intensive than the full Monte Carlo. The second method is based on the Rackwitz and Fiessler algorithm. Both algorithms allow a joint probability distribution to be assigned to the input parameters and provide sensitivity information. The reliability methods are typically used to find the probability of exceeding a target concentration value. The problem of Veneziano et al. is slightly different. They define a P-fractile - typically the 90% fractile - and estimate the concentration y_P. Practically this means that they have defined a probability of 10% - or more generally 1-P - and estimate a concentration y_P for which

Prob[Y > y_P] = 1 - P.
LaVenue et al. (1989) analyzed groundwater travel time uncertainty using the first order second moment method (FOSM) and compared results with Monte Carlo simulations. The FOSM method used in their analysis is the mean value first order second moment method (MVFOSM), which linearizes at the mean value point. The adjoint technique, which is useful for determining sensitivity derivatives of nonlinear parameters, was used to obtain sensitivities with respect to the input parameters. The analysis was performed by releasing a particle at a source at the center of the modeled layer, and the uncertainty of the subsequent particle travel path, particle travel time, and normalized sensitivity coefficients were estimated. The results were compared with Monte Carlo simulation using 1000 realizations. The FOSM results were remarkably similar to the MCS results for small variability in the hydraulic conductivity, but departed significantly for cases with a large variance. The mean and variance of the travel time determined by both approaches were also very similar.
Jang et al. (1990, 1994) developed a stochastic model named CALREL-TRANS. CALREL is a reliability method package which includes algorithms for FORM and SORM analysis, Monte Carlo simulation, directional simulation, first order sensitivity analysis, and first order bounds for series analysis. TRANS is a finite element numerical transport model which was linked to the CALREL package. They also linked CALREL to a 1D analytical transport equation. A series of theoretical cases were analyzed and comparisons were made between FORM, SORM and MCS results. The effects of correlation scale, large variability values for the random variables, and the overall performance of the algorithms were also tested.
5. The Performance Function
5.1. Introduction
To use the first order reliability method (presented in chapter 4) we need three things: (1) we need statistical information for the random variables (at least second order), (2) we must define a performance function g(X), which in essence defines the type of reliability problem that must be solved, and (3) we must find the design point, which will give us the reliability index and the solution to the reliability problem. To find the design point an iterative algorithm is used, which requires the evaluation of the performance function and its first order derivatives with respect to the random variables. In this chapter the methods and equations used for these evaluations are discussed.

The computation of the performance function and its derivatives is accomplished numerically with the use of the finite element method, and thus the equations developed herein can be considered as the equations which couple the first order reliability algorithm to the finite element method. Two approaches are considered: 1. the direct approach and 2. the difference approach.
Based on the system equations obtained from the finite element method, the direct approach tries to develop equations which will allow us to evaluate the derivatives of the performance function directly. This was actually the first method used in this project, but due to various problems associated with this approach - that will be discussed later - it was abandoned in favor of the difference approach, which is much simpler, but less accurate.
Three performance functions are considered. For constant pollution sources we have the performance function

g(X) = c_targ - c(x, y, t)    (5.1.1)

This performance function is used to define probability problems where we are interested in finding the probability that the concentration at a specific place and time, c(x,y,t), exceeds a predefined c_targ value.
For transient pollution sources we have two performance functions

g(X) = c_targ - c(x, y, t_max)    (5.1.2)

g(X) = T_targ - T(x, y, c ≥ c_limit)    (5.1.3)

In the first performance function for the transient pollution sources, t_max represents the time within a specific period of time T_2 which gives the maximum concentration at point (x,y), while in the second performance function c ≥ c_limit is used to specify that the period T represents the period of time for which the concentration exceeds the value c_limit at the point (x,y). In the above performance functions c_targ, T_targ, x, y, t, t_max, T_2 and c_limit are user-defined and therefore have specific values.
The performance function 5.1.2 is used to define the probability that the maximum concentration observed at a specific place (x,y) and in a given period of time T_2 exceeds a predefined c_targ value. The time t_max at which the maximum concentration is observed at point (x,y) is required for the reliability analysis; it must be estimated with the use of the finite element method before the reliability analysis can commence, and must be reevaluated at each design point iteration step.

The performance function 5.1.3 defines the probability that the period of time for which the concentration at a specific point is larger than a predefined c_limit value exceeds a given period of time T_targ. This performance function is used for the probability analysis of the residence time problem, i.e. the length of time for which high concentration values are observed at some point in the aquifer.
Taking the derivatives of the performance functions with respect to the random variables, we obtain for each performance function

∂g(X)/∂X_i = -∂c(x, y, t)/∂X_i

∂g(X)/∂X_i = -∂c(x, y, t_max)/∂X_i

∂g(X)/∂X_i = -∂T(x, y, c ≥ c_limit)/∂X_i    (5.1.4)
From a numerical point of view, the functions c(x, y, t) and T(x, y, c) are evaluated at the nodes of the FEM grid, and therefore complete evaluation of equations 5.1.4 requires their evaluation at all nodes. This can be written in the general form

∂V/∂X_i = [∂V_1/∂X_i, ∂V_2/∂X_i, ..., ∂V_n/∂X_i]^T    (5.1.5)

where V is the vector of either c(x, y, t) or T(x, y, c) evaluations at the n nodes of the FEM grid. Since c(x, y, t) is also a function of time, the vector representation used is c^m = (c_1^m, c_2^m, ..., c_n^m), where m is the time step number used, i.e. t = mΔt. The vectors ∂V/∂X_i are called state sensitivity vectors, or simply the sensitivity vectors, because they describe how the variables of interest, in this case the concentration and residence time values, change in response to changes in the random variable X_i. So the task of calculating the derivatives of the performance functions is equivalent to calculating the state sensitivity vectors.
5.2. Direct Evaluation
The finite element method gives us a system of numerical equations describing the pollution transport problem which can be written, as we have seen in chapter 4, in the following matrix form

(R/Δt + θP)·c^{m+1} = [R/Δt + (θ - 1)P]·c^m + f    (5.2.1)

where capital bold letters (P, R) are n × n matrices, small bold letters (f, c^{m+1}, c^m) are vectors and Δt, θ are scalars. The same notation system will be used throughout this chapter. c^{m+1} and c^m represent the concentration vectors at time steps m+1 and m respectively. Δt is the time step, and θ has values in the range 0 to 1, where 0 represents the explicit formulation and 1 the implicit formulation.
Equation 5.2.1 can be written in a more compact form by making the following substitutions

A = R/Δt + θP
B = R/Δt + (θ - 1)P    (5.2.2)

With these substitutions we obtain the matrix equation

A·c^{m+1} = B·c^m + f    (5.2.3)
We seek now to compute the partial derivative of the concentration c^{m+1} with respect to each random variable X_i. In general, we see that by taking the partial derivatives of equation 5.2.3 we obtain

∂c^{m+1}/∂X_i = A^{-1}·[(∂B/∂X_i)·c^m + B·(∂c^m/∂X_i) + ∂f/∂X_i - (∂A/∂X_i)·c^{m+1}]    (5.2.4)

We see that a direct computation of the partial derivatives requires that we know the concentration vectors c^{m+1} and c^m and the derivatives of the matrices A and B with respect to the random variables. The derivative of f in most cases vanishes because it does not depend on the random variables.
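As a sketch of how equations 5.2.3 and 5.2.4 work together (illustrative dense-matrix code; the matrix derivatives ∂A/∂X_i and ∂B/∂X_i are developed below), the concentration vector and its sensitivity are advanced side by side at each time step:

```python
import numpy as np

def step_with_sensitivity(A, B, dA, dB, f, df, c, dc):
    """Advance c^{m+1} = A^{-1}(B c^m + f) together with dc/dX_i (eq. 5.2.4)."""
    c_new = np.linalg.solve(A, B @ c + f)
    # Differentiating A c^{m+1} = B c^m + f with respect to X_i:
    rhs = dB @ c + B @ dc + df - dA @ c_new
    dc_new = np.linalg.solve(A, rhs)
    return c_new, dc_new
```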
To obtain the partial derivatives of the A and B matrices, we can first substitute their equivalents from equations 5.2.2:

∂A/∂X_i = (1/Δt)·∂R/∂X_i + θ·∂P/∂X_i = θ·∂P/∂X_i
∂B/∂X_i = (1/Δt)·∂R/∂X_i + (θ - 1)·∂P/∂X_i = (θ - 1)·∂P/∂X_i    (5.2.5)

where the final equations are obtained by taking into consideration that the derivative of the R matrix is zero, since it does not depend on the parameters.
From equation 3.1.9a we have that the P matrix is defined as

P_ij = Σ_{e=1}^{M} ∫∫_Ωe [D_xx^e (∂N_i/∂x)(∂N_j/∂x) + D_xy^e (∂N_i/∂x)(∂N_j/∂y) + D_yx^e (∂N_i/∂y)(∂N_j/∂x) + D_yy^e (∂N_i/∂y)(∂N_j/∂y) + u_x^e N_i (∂N_j/∂x) + u_y^e N_i (∂N_j/∂y) + λ^e N_i N_j] dx dy    (5.2.6)

In this equation the dispersion coefficients D_xx, D_yy, D_xy, the velocities u_x, u_y and the decay coefficient λ are potential random variables. The superscript "e" shows that these are the parameters of element e, and M is the total number of elements. Since the dispersion coefficients are functions of the dispersivities, the dispersivities are also potential random variables. If the flux q is considered to be a random variable, then the partial derivative of f must also be included in the above calculations. In this chapter we assume that q is not a random variable. Further on, since the velocities are a function of the hydraulic gradient and the hydraulic conductivities, the hydraulic conductivities are also potential random variables. Constant hydraulic head nodes and constant concentration nodes can also be treated as random variables, although only constant hydraulic heads are discussed.
5.2.1. Evaluation of the sensitivity vector with respect to the element dispersivities

Let us first consider the case where the random variables are the dispersivity coefficients α_L and α_T. Differentiation of equation 5.2.6 with respect to one of the dispersivities will remove all terms which do not contain that dispersion coefficient, so we end up with the expression

∂P_ij/∂α_L^k = Σ_{e=1}^{M} ∫∫_Ωe [(∂D_xx^e/∂α_L^k)(∂N_i/∂x)(∂N_j/∂x) + (∂D_xy^e/∂α_L^k)(∂N_i/∂x)(∂N_j/∂y) + (∂D_yx^e/∂α_L^k)(∂N_i/∂y)(∂N_j/∂x) + (∂D_yy^e/∂α_L^k)(∂N_i/∂y)(∂N_j/∂y)] dx dy    (5.2.7)

where k takes the values from 1 to M. It is important to notice that for k ≠ e the derivatives of the dispersion coefficients with respect to the dispersivity are zero. This means that the only entries in the ∂P/∂α_L^k matrix will be those from a single element. Practically this means that a triangular element will contribute with 9 entries, while a quadrilateral contributes 16 entries; all other entries in the matrix will be zero.
The dispersion coefficients are defined in terms of the dispersivities and the velocities as

D_xx = (α_L u_x² + α_T u_y²)/|u|  ⇒  ∂D_xx/∂α_L = u_x²/|u|

D_yy = (α_L u_y² + α_T u_x²)/|u|  ⇒  ∂D_yy/∂α_L = u_y²/|u|

D_xy = D_yx = (α_L - α_T) u_x u_y/|u|  ⇒  ∂D_xy/∂α_L = ∂D_yx/∂α_L = u_x u_y/|u|    (5.2.8)

where |u| = (u_x² + u_y²)^{1/2}.
In order to obtain these few entries for a specific element, we must evaluate equation 5.2.7 for all cases where i and j represent nodal indices of the element we are interested in. For this purpose we must use the equations given in chapter 3, which define the dispersion coefficients using the dispersivities and the velocities (equations 5.2.8). The superscripts "e" used in chapter 3 have been omitted, since all the parameters involved in the equations belong to the same element. These equations show that, by setting for a specific element the value of α_L equal to 1 and the value of α_T equal to 0, we obtain the derivatives which are needed. The same can be done for the partial derivatives with respect to the transverse dispersivity α_T (i.e. α_T = 1 and α_L = 0). For each element k we can now obtain ∂P/∂α_L^k or ∂P/∂α_T^k, which will have at most 16 non-zero entries.
5.2.2. Evaluation of the sensitivity vector with respect to the element velocities

The sensitivity vector with respect to the velocities can be obtained by following the same procedure used for the dispersivities. From the definition of the P matrix we see that the derivative with respect to the velocities is obtained in the following general way

∂P_ij/∂u_d^k = Σ_{e=1}^{M} ∫∫_Ωe [(∂D_xx^e/∂u_d^k)(∂N_i/∂x)(∂N_j/∂x) + (∂D_xy^e/∂u_d^k)(∂N_i/∂x)(∂N_j/∂y) + (∂D_yx^e/∂u_d^k)(∂N_i/∂y)(∂N_j/∂x) + (∂D_yy^e/∂u_d^k)(∂N_i/∂y)(∂N_j/∂y) + (∂u_x^e/∂u_d^k) N_i (∂N_j/∂x) + (∂u_y^e/∂u_d^k) N_i (∂N_j/∂y)] dx dy    (5.2.9)

where u_d^k is the velocity at element k, and the subscript d could be either x or y, in order to obtain the respective derivatives. We notice once again that only when k = e are the above derivatives non-zero, which means that the ∂P/∂u_d^k matrix for a specific element will have at most 16 entries. We also notice that for the velocity terms we have

∂u_x^e/∂u_d^k = 1  if u_d^k = u_x^e and k = e, and 0 otherwise
For the dispersion terms we obtain

∂D_xx/∂u_x = 2α_L u_x/|u| - D_xx u_x/|u|²
∂D_yy/∂u_x = 2α_T u_x/|u| - D_yy u_x/|u|²
∂D_xy/∂u_x = ∂D_yx/∂u_x = (α_L - α_T) u_y/|u| - D_xy u_x/|u|²    (5.2.10)

where |u| = (u_x² + u_y²)^{1/2}. The derivatives with respect to u_y are obtained in a similar manner. However, for practical purposes the velocities are not useful as random variables. It is preferable to use the hydraulic conductivities, which are elemental properties and can be measured in the field.
5.2.3. Evaluation of the sensitivity vector with respect to the element hydraulic conductivities

Obtaining the sensitivity vector with respect to the element hydraulic conductivities is far more complicated. The first thing we have to do is to introduce the element hydraulic conductivities into the system of finite element equations. Independently of the type of elements that are used, for each element the velocity is obtained from an equation defined over an element - superscripts "e" are omitted - of the general type

(u_x, u_y)^T = -(1/n_e)·[K_xx  K_xy; K_yx  K_yy]·(∂h/∂x, ∂h/∂y)^T    (5.2.11)

where n_e is the effective porosity, K_xx, K_yy, K_xy and K_yx are the hydraulic conductivity tensor entries, and h is the hydraulic head.

In order to keep the number of random variables small we make use of the K_max simplification, by which we describe the element hydraulic conductivity tensor with the use of the equations
K_xx = K_max[cos²(θ) + r·sin²(θ)]
K_yy = K_max[sin²(θ) + r·cos²(θ)]
K_xy = K_yx = K_max(1 - r)·cos(θ)·sin(θ)    (5.2.12)

where r is the ratio K_min/K_max, K_min is the conductivity normal to K_max, and θ is the angle between the direction of K_max and the x-axis used in describing the geometry of the FEM mesh. With these simplifications the velocities can be expressed simply as

u_x = -(K_max/n_e)·[(cos²(θ) + r·sin²(θ))·∂h/∂x + (1 - r)·cos(θ)·sin(θ)·∂h/∂y]    (5.2.13a)
u_y = -(K_max/n_e)·[(1 - r)·cos(θ)·sin(θ)·∂h/∂x + (sin²(θ) + r·cos²(θ))·∂h/∂y]    (5.2.13b)
Finally, we must insert the numerical estimation of the gradients of the hydraulic heads in the above equations. The numerical estimation of the gradients will depend on the type of elements used. For triangular elements we replace the derivatives with their numerical equivalents as follows

∂h/∂x = (1/D)·Σ_{i=1}^{3} B_i h_i
∂h/∂y = (1/D)·Σ_{i=1}^{3} C_i h_i    (5.2.15)

where B_i, C_i and D are defined in chapter 3 (equations 3.4.3), and h_i will be the hydraulic head at the nodes defining the element. For quadrilateral isoparametric elements the gradients are estimated based on the equations

∂h/∂x = Σ_{i=1}^{4} (∂N_i/∂x)·h_i
∂h/∂y = Σ_{i=1}^{4} (∂N_i/∂y)·h_i    (5.2.16)
Equations 5.2.16 can not be used directly. We must first transform the derivatives of the shape functions into derivatives with respect to the local coordinates. Making these changes we obtain the equations

(u_x, u_y)^T = -(K_max/n_e)·K′·J^{-1}·(∂h/∂ξ, ∂h/∂η)^T    (5.2.17)

where

K′ = [cos²(θ) + r·sin²(θ)   (1 - r)·cos(θ)·sin(θ);  (1 - r)·cos(θ)·sin(θ)   sin²(θ) + r·cos²(θ)]
J is the Jacobian transformation matrix, and the gradients of the shape functions are now in local coordinates. However, since the quadrilateral elements are not linear inside the element area, the gradients of the shape functions will have different values at different points. The usual approach is to find the gradient at four points which are symmetrically placed in the element and then take their average. A usual choice is to use the same points as those used for the element numerical integrations.
All the above equations basically show us that the hydraulic conductivities are introduced into the system equations through the velocity parameters. To obtain now the sensitivity vector with respect to the element K_max values, we could directly replace the velocity parameters with their equivalents given from the equations developed above in the definition of the P matrix, and then proceed with the differentiation. Obviously the final equations would be very complex and difficult to work with. The alternative is to use some simple mathematics to simplify the derivations. The second approach is used herein.
We have previously shown that in order to obtain the gradient of the performance function we need to obtain the gradient of concentration, which eventually requires the estimation of the ∂P/∂K_max^k matrix, the entries of which are given from

∂P_ij/∂K_max^k = Σ_{e=1}^{M} ∫∫_Ωe [(∂D_xx^e/∂K_max^k)(∂N_i/∂x)(∂N_j/∂x) + (∂D_xy^e/∂K_max^k)(∂N_i/∂x)(∂N_j/∂y) + (∂D_yx^e/∂K_max^k)(∂N_i/∂y)(∂N_j/∂x) + (∂D_yy^e/∂K_max^k)(∂N_i/∂y)(∂N_j/∂y) + (∂u_x^e/∂K_max^k)·N_i·(∂N_j/∂x) + (∂u_y^e/∂K_max^k)·N_i·(∂N_j/∂y)] dx dy    (5.2.18)

We have two types of partial derivatives to estimate: the derivatives with respect to the local dispersion coefficients, and the derivatives with respect to the element velocities. Actually, the flux term q will also depend on the hydraulic conductivity, but here we assume that no flux terms are used in the formulation of the problem, and hence it will be zero.
Local velocity derivatives

We have already developed the equations which give us the velocity, and these show that the element velocity - either x or y component - is a function of the element hydraulic conductivity K_max and the element nodal hydraulic head values; all other parameters in the equations are either deterministic or geometrical and are of no importance. This means that we can write the equations in the general form u = U(K_max^e, h_1^e, h_2^e, h_3^e, h_4^e), where u can be either the u_x component or the u_y component of the velocity of element e, depending on which derivative we are estimating. The derivative of u with respect to K_max^k can now be obtained through the chain rule as

du^e/dK_max^k = (∂u^e/∂K_max^e)·(∂K_max^e/∂K_max^k) + (∂u^e/∂h_1^e)·(∂h_1^e/∂K_max^k) + (∂u^e/∂h_2^e)·(∂h_2^e/∂K_max^k) + (∂u^e/∂h_3^e)·(∂h_3^e/∂K_max^k) + (∂u^e/∂h_4^e)·(∂h_4^e/∂K_max^k)    (5.2.19)
Using now the fact that u is proportional to K_max and is a linear combination of the element nodal hydraulic heads, we can write

(∂u^e/∂K_max^e)·(∂K_max^e/∂K_max^k) = U^e(1, h_1^e, h_2^e, h_3^e, h_4^e)·(∂K_max^e/∂K_max^k)    (5.2.20)

We note that

∂K_max^e/∂K_max^i = 1 if e = i, and 0 in all other cases.

The derivatives of the nodal hydraulic heads with respect to the hydraulic conductivity K_max are obtained through the steady state groundwater equation system. Based on this equation system we can calculate the derivatives with respect to the element hydraulic conductivity as follows.
E·h = q  ⇒  ∂h/∂K_max^i = -E^{-1}·(∂E/∂K_max^i)·h    (5.2.21)

where E is the flow matrix, h is the unknown nodal head vector, q is the vector of nodal external fluxes, and K_max^i is the hydraulic conductivity of element i. It is important to notice that, although K_max^i is defined over a specific element, the derivative of the hydraulic head is defined over all nodal hydraulic heads. This means that - theoretically at least - all elements will contribute to the ∂P/∂K_max^k matrix. In reality, nodes which are far away from element k will have derivatives either equal to zero or with very small values.

It is easily shown that the derivative of the E matrix with respect to K_max can be obtained by building up the E matrix for a specific element and setting the value of K_max^e = 1. ∂E/∂K_max^e will have 9 entries for triangular elements and 16 entries for quadrilaterals. Finally, one must also take into consideration the fact that the E matrix will be modified to incorporate constant head nodes, and that entries of the sensitivity vector ∂h/∂K_max^e for constant head nodes must be equal to zero.
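A sketch of equation 5.2.21 in code (dense algebra for clarity; dE is the flow matrix assembled for a single element with K_max set to 1 and scattered into the global matrix, and constant head rows are assumed to have been eliminated beforehand):

```python
import numpy as np

def head_sensitivity(E, dE, h):
    """Solve E (dh/dK_max^i) = -(dE/dK_max^i) h (equation 5.2.21)."""
    return np.linalg.solve(E, -(dE @ h))
```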
Local dispersion derivatives

For the dispersion partial derivatives with respect to the element hydraulic conductivities we can write D^e = G(u_x^e, u_y^e). Using again the chain rule we obtain

∂D_g^e/∂K_max^k = (∂G^e/∂u_x^e)·(∂u_x^e/∂K_max^k) + (∂G^e/∂u_y^e)·(∂u_y^e/∂K_max^k)    (5.2.22)

where D_g can be any one of D_xx, D_yy, D_xy. All the derivatives appearing in equation 5.2.22 have been estimated previously. The derivatives with respect to the velocities are given from equations 5.2.10 in the velocity section, while the derivatives with respect to the hydraulic conductivities were developed in the previous section, equations 5.2.20.
5.2.4. Evaluation of the sensitivity vector with respect to constant hydraulic head nodes

The hydraulic head nodes are introduced into the pollution transport equation system through the same equations used for the hydraulic conductivities. In other words, to obtain the sensitivity vector with respect to the constant hydraulic head nodes we have to follow the same procedure as the one used for the hydraulic conductivities. The difference is that we must now take the derivative with respect to the constant head nodes. The velocity derivatives are found from the chain rule

du^e/dh_c^l = (∂u^e/∂K_max^e)·(∂K_max^e/∂h_c^l) + Σ_{i=1}^{4} (∂u^e/∂h_i^e)·(∂h_i^e/∂h_c^l)    (5.2.23)

where h_c^l is the constant head node with index l. We note that the derivative of the hydraulic conductivity with respect to the constant head node will always be zero, so the final form is obtained as

du^e/dh_c^l = Σ_{i=1}^{4} U^e(K_max^e, δ_{1i}, δ_{2i}, δ_{3i}, δ_{4i})·(∂h_i^e/∂h_c^l)

where δ_{ji} equals 1 when j = i and 0 otherwise, i.e. U^e is evaluated with the ith nodal head set to 1 and the others set to 0.
To obtain the final equations we must estimate the ∂h/∂h_c^l values. These hydraulic head sensitivities can be estimated in the following way. Once more starting with the numerical solution for confined flow given by the system equation

h = E_m^{-1}·q_m    (5.2.24)

where the subscript "m" denotes the modified¹ equations, and differentiating with respect to the head at a constant head node h_c^l, results into

∂h/∂h_c^l = (∂E_m^{-1}/∂h_c^l)·q_m + E_m^{-1}·(∂q_m/∂h_c^l)  ⇒  ∂h/∂h_c^l = E_m^{-1}·(∂q_m/∂h_c^l)    (5.2.25)

¹ The modifications required to introduce constant head nodes into the equation system.

where the second equation results from the fact that the matrices E_m and E_m^{-1} do not depend on h_c^l. Assuming² now that the constant head modifications result into

q_m,i = h_c^l if i = constant head node, and q_m,i = q_i in all other cases    (5.2.26)

where q_i is the external flux at node i, the differential will be

∂q_m,i/∂h_c^l = 1 if i = constant head node, and 0 in all other cases    (5.2.27)

² This will depend on how the constant head node modifications are carried out.

The sensitivity vector can now be calculated.
For the dispersion terms we obtain

∂D^e/∂h_c^l = (∂G^e/∂u_x^e)·(∂u_x^e/∂h_c^l) + (∂G^e/∂u_y^e)·(∂u_y^e/∂h_c^l)    (5.2.28)

where all derivatives have been previously calculated.

Unconfined flow versus confined flow
To obtain the derivatives with respect to the element hydraulic conductivities and the constant hydraulic head nodes, we have assumed that the steady state flow is consistent with a confined aquifer environment. If the aquifer is unconfined, then several changes must be made to the way we evaluate the derivatives.

The equations for 2D unconfined flow must be solved iteratively (see Kinzelbach, 1986) because the thickness of the aquifer depends on the hydraulic head values. However, if we assume that the bottom of the aquifer is a horizontal plane and use this plane as the datum for the hydraulic heads, then the governing equation for 2D steady state unconfined flow in a horizontal plane can be written in the following form (Cawlfield et al., 1987)

∂/∂x(K_x·∂h²/∂x) + ∂/∂y(K_y·∂h²/∂y) - 2q = 0    (5.2.29)

The difference between this equation and the confined flow equation is that h² takes the place of h and 2q takes the place of q. This results in the FEM numerical equation

E·h² = 2q    (5.2.30)
where the flow matrix E is the same as the one resulting for a confined aquifer.

The FEM solution will give us the h² vector, and the flux at constant head nodes will be given as 2 times the q vector. So in the unconfined case the FEM algorithm is the same as for the confined case, as long as we remember to find the square root of the solution head vector and divide the fluxes by 2.

The derivatives of the hydraulic head with respect to the element hydraulic conductivities or the constant hydraulic head nodes, i.e. the ∂h/∂X_i vector, where X_i can be either parameter, can easily be found, given the ∂h²/∂X_i vector and the solution vector h (the square root of h²):

∂h_i²/∂X_j = 2h_i·(∂h_i/∂X_j)  ⇒  ∂h_i/∂X_j = (1/2h_i)·(∂h_i²/∂X_j)    (5.2.31)
When computing, for example, the vector ∂h/∂K_max^i for unconfined flow, the process described for confined flow yields the equations

∂h²/∂K_max^i = E_m^{-1}·f_m    (5.2.32)

where

f_m = -(∂E/∂K_max^i)·h²    (5.2.33)

The vector ∂h/∂K_max^i is then simply found by dividing the resulting ∂h²/∂K_max^i vector with 2h_i. The derivative with respect to the constant head will also be changed. We have that

q_m,i = (h_c^l)² if i = constant head node, and q_m,i = q_i in all other cases

and therefore

∂q_m,i/∂h_c^l = 2h_c^l if i = constant head node, and 0 in all other cases

Once the ∂h²/∂h_c^l vector is obtained, we divide it by 2h_i to obtain the required ∂h/∂h_c^l vector.
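The unconfined bookkeeping of equations 5.2.31-5.2.33 reduces to element-wise operations once the h² system has been solved; a sketch:

```python
import numpy as np

def unconfined_head_and_sensitivity(h2, dh2):
    """Recover h and dh/dX from the h^2 solution (equation 5.2.31)."""
    h = np.sqrt(h2)
    return h, dh2 / (2.0 * h)
```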
5.3. The difference approach

The direct approach is rather intimidating due to the very complex equations that must be implemented, especially so when one considers the simplicity of the difference approach. The derivatives need only be approximated through a simple difference scheme in the manner described by the equation

∂g/∂X_i(X_1, ..., X_i, ..., X_n) ≈ [g(X_1, ..., X_i + ΔX_i, ..., X_n) - g(X_1, ..., X_i, ..., X_n)] / ΔX_i    (5.3.1)

where ΔX_i is usually called the perturbation term. From a computer implementation point of view the above scheme is elementary. We need only evaluate the performance function for a set of parameters X_i. To obtain all the necessary derivatives we need to estimate the performance function R+1 times, where R is the total number of random variables. Since each evaluation of the performance function requires the solution of the pollution transport equation system, we need to solve R+1 different equation systems to obtain all derivatives. This is a very time consuming process, as we shall see in the next section.
Equation 5.3.1 is basically a numerical approximation of the derivatives, which means that numerical errors are introduced in all evaluations. Theoretically, as the perturbation becomes smaller in value, the approximation will become better, or in other words the numerical errors will become smaller. However, there is a limit as to how small the perturbation value can become, since considerations must be taken in anticipation of round-off errors produced by the computer. Another way to reduce the numerical errors is to use a mid-value scheme based on the equation

∂g/∂X_i(X_1, ..., X_i, ..., X_n) ≈ [g(X_1, ..., X_i + ΔX_i, ..., X_n) - g(X_1, ..., X_i - ΔX_i, ..., X_n)] / (2ΔX_i)

which is a second order scheme, meaning that the numerical errors are proportional to ΔX², while equation 5.3.1 is a first order scheme. The problem is that the mid-value scheme requires 2R estimations of the performance function, i.e. approximately twice as much CPU time.
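Both schemes of this section are sketched below (illustrative code; every call to g implies a full transport solution, which is where the R+1 and 2R costs arise):

```python
import numpy as np

def forward_difference_grad(g, x, dx=1e-6):
    """First order scheme, equation 5.3.1: R+1 evaluations of g."""
    x = np.asarray(x, dtype=float)
    g0 = g(x)
    grad = np.empty_like(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += dx
        grad[i] = (g(xp) - g0) / dx
    return grad

def central_difference_grad(g, x, dx=1e-6):
    """Second order mid-value scheme: 2R evaluations, errors proportional to dx^2."""
    x = np.asarray(x, dtype=float)
    grad = np.empty_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += dx
        xm[i] -= dx
        grad[i] = (g(xp) - g(xm)) / (2.0 * dx)
    return grad
```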
5.4. Comparing the direct and difference approaches

Due to the simplicity of the difference scheme for estimating the gradient of the performance function, it was easy to develop a version of PAGAP implementing this approach. There were basically two reasons for developing a difference approach version of PAGAP. First, the amount of CPU time required for problems with a large number of random variables, when PAGAP used the direct approach, is substantial, so it was reasonable to implement the difference approach to see if a faster algorithm could be obtained. The second reason was that the complex mathematical equations for the direct approach were difficult to debug; one could not be sure that everything was implemented correctly. The difference approach would allow comparisons to be made and thus establish the correct implementation of the direct approach equations.
5.4.1. Problem definition

The purpose of this case study is to study the differences between the direct approach and the difference approach in estimating the gradient of the performance function. We will be mainly interested in the time required by each approach and the numerical errors introduced by the difference approach.

For this purpose a simple case has been chosen, as shown in figure 5.4.1. The aquifer considered is 20 m wide and 200 m long, and the groundwater flow is from the left boundary towards the right boundary, described by constant heads of 50 m for the left boundary and 40 m for the right boundary. The mean value of the maximum hydraulic element conductivity is equal to 1.0 m/d with a standard deviation of 0.3 m/d, and the porosity is deterministically equal to 0.1. A constant pollution source is placed at the left boundary with a constant value of 100 g/m³. The longitudinal dispersivity has a mean of 10 m with a standard deviation of 5 m, while the transverse dispersivity has a deterministic value of 10 m.
Figure 5.4.1. FEM grid and boundary conditions used for the comparison case study (element size 10×10 m; constant pollution source at the left boundary; no flow boundaries; constant head boundaries; target node marked on the grid).
The problem to be considered is the probability of node 20 - the target node - having a concentration above 15 g/m³. The deterministic value for node 20, using the mean values for all parameters, is approximately equal to 10 g/m³.

The direct and difference approaches, using different time steps, have been used to calculate the reliability index, the most likely parameter values and sensitivity values. First the problem is analyzed assuming that only the longitudinal dispersivities are random variables, for time steps of 5, 1, 0.5, 0.1 and 0.05, with both approaches. Then the same problem is analyzed when the element hydraulic conductivities are assumed to be the only random variables, for time steps of 5, 1, 0.5, and 0.1. Finally, the difference approach is used with different perturbation values to see how well the results obtained compare with the results from the direct approach.
5.4.2. Random longitudinal dispersivities
In tables 5.4.2 and 5.4.3 we see the results obtained from the direct approach and the difference approach with a perturbation of 10⁻⁶, respectively. The reliability index and the probability of failure are given up to the 10th decimal digit, and we see that the respective values are identical with the exception of the reliability index for a time step of 0.5, where the last digit is different.
Table 5.4.1. Effect of time step on the deterministic solution obtained at the target node.

  Time Step Δt (seconds)   Mean Value Solution c(60, 10, 50) g/m³
  5                        10.06180576
  1                        10.11831190
  0.5                      10.12008711
  0.1                      10.12065528
  0.05                     10.12067304
Table 5.4.2. Time step effect on FORM results with dispersivity variables using the direct approach.

  Time Step Δt (seconds)   Reliability Index β   Probability Pf   Time per FORM Iteration (seconds)
  5                        2.1675482930          0.0150965358     3
  1                        2.1308758628          0.0165496852     11
  0.5                      2.1297426533          0.0165964327     21
  0.1                      2.1293801861          0.0166114092     96
  0.05                     2.1293688602          0.0166118774     194
Table 5.4.3. Time step effect on FORM results with dispersivity variables using the difference approach with a perturbation of 10⁻⁶.

  Time Step Δt (seconds)   Reliability Index β   Probability Pf   Time per FORM Iteration (seconds)
  5                        2.1675482930          0.0150965358     37
  1                        2.1308758628          0.0165496852     40
  0.5                      2.1297426532          0.0165964327     44
  0.1                      2.1293801861          0.0166114092     72
  0.05                     2.1293688602          0.0166118774     107
[Figure 5.4.2. Time step iterations versus time per FORM iteration (seconds) for case 3, comparing the direct and difference approaches.]
[Figure 5.4.3. Most likely longitudinal dispersivity values (m) at the design point, plotted against element indices 1-40.]
For both tables we observe that as the time step becomes smaller, the reliability index decreases. This is not a general observation, but holds true for this case. In this specific case the reliability index decreases because the deterministic mean value solution increases as the time step becomes smaller. Table 5.4.1 shows the solutions obtained for the target node using mean values for all parameters for different time steps. Since the reliability index is a measure of the distance between the deterministic mean value solution and the target value, it must become smaller when the distance becomes smaller. In other cases the mean value solution decreases as the time step decreases, and then we expect the reliability index to increase. The time step dependency of the reliability index is a source of errors when comparing different probability problems, especially in cases which have reliability indices of the same magnitude.
The main difference between the results in tables 5.4.2 and 5.4.3 is the time per FORM iteration. In both cases the Rackwitz and Fiessler algorithm has been used, which converged in 9 iterations to a value smaller than 5·10⁻⁷ on the performance function. The direct approach is much faster than the difference approach for large time steps. As the time step becomes smaller the direct approach loses its time advantage and the difference approach becomes increasingly faster.
We expect some kind of linear relationship between the time per FORM iteration and the time steps used in the FEM algorithm. Figure 5.4.2 shows a plot of these values. From this figure we see that as the time step iterations increase (i.e. the time step decreases), the time required to complete a FORM iteration increases much faster for the direct approach than for the difference approach. For a value slightly smaller than 300 FEM iterations, both approaches require the same CPU time.
Since the Finite Element Method gives only approximate solutions to the transport equations, both approaches will incorporate these errors. However, the difference method introduces additional errors due to the fact that it also approximates the performance gradient. It is these last errors we are interested in. We shall first compare the most likely values obtained by the direct approach and the difference approach. Figure 5.4.3 shows the most likely longitudinal element dispersivities obtained with a time step of 0.1 using the direct approach. The differences between the most likely dispersivity values obtained by the direct approach and those obtained by the difference approach are very small and cannot be observed on a plot like that of figure 5.4.3. In order to see the differences, figure 5.4.4 was prepared, which shows the absolute differences between the most likely dispersivity values for different time steps.
[Figure 5.4.4. Absolute differences between most likely dispersivity values obtained from the direct approach and the difference approach for the same time step, plotted against element indices for time steps of 5.0, 1.0, 0.5 and 0.1.]
We observe that the absolute differences are very small indeed. The highest difference is observed for a time step of 0.1, giving a difference of slightly over 2.5·10⁻⁷. One also observes that the highest difference for a time step of 0.5 is approximately 1.75·10⁻⁷, for a time step of 1.0 the highest difference is 1.5·10⁻⁷, while for a time step of 5.0 the highest difference is approximately 9·10⁻⁸. This indicates that the differences decrease with increasing time step. This trend seems to further indicate that the difference approach is less dependent on the time step than the direct approach. The time step has a double impact on the direct approach, since it affects both the FEM solution and the calculation of the gradient, while for the difference approach it mainly affects the FEM solution. The difference approach will depend much more on the value of the perturbation. To see the effects of the perturbation, table 5.4.4 was prepared. The perturbation values vary from 1 to 10⁻¹⁰, while the last row gives the direct approach results. The time step is equal to 1.0. The Rackwitz and Fiessler algorithm has been used to find the design point.
We notice that the perturbation value affects the most likely dispersivity and sensitivity values much more than the reliability index value obtained. Compared to the values obtained with the direct approach, we see that the best approximations for the most likely element dispersivity values and sensitivity values are obtained for a perturbation value of 10⁻⁶. The reliability index is very well approximated for a much wider range of perturbation values. If we round off all values in the table to the third digit, we see that a perturbation in the range 10⁻² to 10⁻⁹ will give the same results.
Table 5.4.4. Effect of perturbation on FORM results.

  Perturbation ΔX   Most Likely aL for element 1   γ-Sensitivity for element 1   Reliability Index β   FORM Iterations
  10⁰               14.0542952012                  0.3805224925                  2.1309096208          12
  10⁻¹              14.0260369555                  0.3778761900                  2.1308762297          10
  10⁻²              14.0232163225                  0.3776115158                  2.1308758630          9
  10⁻³              14.0229253516                  0.3775842059                  2.1308758625          9
  10⁻⁴              14.0228962467                  0.3775814741                  2.1308758628          9
  10⁻⁵              14.0228933343                  0.3775812007                  2.1308758628          9
  10⁻⁶              14.0228929828                  0.3775811677                  2.1308758628          9
  10⁻⁷              14.0228927234                  0.3775811434                  2.1308758628          9
  10⁻⁸              14.0228966904                  0.3775815157                  2.1308758627          9
  10⁻⁹              14.0228972673                  0.3775815701                  2.1308758614          9
  10⁻¹⁰             14.0216546296                  0.3774649488                  2.1308758031          9
  Direct            14.0228930126                  0.3775811705                  2.1308758628          9
The reason that perturbation values smaller than 10⁻⁶ do not give better approximations is probably related to the way the computer stores numbers in its memory. Considering that in double precision the computer uses 16 digits to store a number, division by a number smaller than 10⁻⁶ will give less than 10 digit accuracy. This truncation of numbers introduces errors which, with additional calculations, result in a further loss of accuracy. We conclude that the best approximation can be obtained with a perturbation of 10⁻⁶, and this is the default value used in the difference approach.
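The truncation effect can be illustrated with a small, self-contained C program, unrelated to PAGAP, that differentiates f(x) = x² at x = 10 (true derivative 20) with the forward scheme: the error first shrinks with the perturbation and then grows again once round-off dominates:

```c
#include <stdio.h>

static double f(double x) { return x * x; }

int main(void)
{
    double x = 10.0;
    double dx = 1.0;
    for (int k = 0; k <= 12; k++) {
        double d = (f(x + dx) - f(x)) / dx;   /* forward difference */
        printf("dx = 1e-%02d   derivative = %.10f\n", k, d);
        dx *= 0.1;                            /* next perturbation: 10^-(k+1) */
    }
    return 0;
}
```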
5.4.3. Hydraulic conductivities
Here we consider the same problem, with the difference that the element hydraulic conductivities are considered to be the random variables. The calculation of the performance gradient with the direct approach for hydraulic conductivities is a very time consuming process. The reason is that for each element we must build up the global matrix of the gradient of the P matrix with respect to the element hydraulic conductivity (see 5.2.1). For the dispersivities, the respective gradient matrix is sparse, having only 16 entries for each element. In the case of the hydraulic conductivities the gradient matrix has a global character, having the same number of entries as the P matrix. Considering that each element requires its own gradient matrix and that in this case we have 40 elements, we need to build up 40 gradient matrices. Once these are built up, we do not need to build them up again for each time step. However, storing 40 gradient matrices in computer memory is not feasible on conventional desktop computers. So although it is a waste of CPU time, the gradient matrices are calculated at each time step for each element. In that way we need only store one matrix at a time. This causes the direct approach to be very time consuming.

Two time steps are considered, 5.0 and 1.0. The difference approach requires exactly the same CPU time as for the dispersivity case to approximate the performance function gradient. Since the random hydraulic conductivities are considered to be uncorrelated, all elements at the design point will obtain the same value. This shows the global effect of the hydraulic conductivities on pollution problems, whereas the dispersivities have a local character. Tables 5.4.5 and 5.4.6 show the results obtained with the direct and difference approach respectively. A perturbation value of 10⁻⁶ has been used for the difference approach. The Rackwitz and Fiessler algorithm has been used to obtain the design point. In all cases the algorithm converged in 5 iterations to a value less than 5·10⁻⁷ for the performance function.
Both approaches give the same reliability index up to the 10th decimal digit, and of course the same FORM probability is calculated.
Table 5.4.5. Time step effect on FORM results with hydraulic conductivity variables using the direct approach.

  Time Step Δt (seconds)   Reliability Index β   Probability Pf   Time per FORM Iteration (seconds)
  5.0                      1.9500832977          0.0255830958     113
  1.0                      1.9154356433          0.0277184846     560
  0.5                      1.9143645856          0.0277867937     1118
  0.1                      1.9140219941          0.0278086728     5647
Table 5.4.6. Time step effect on FORM results with hydraulic conductivity variables using the difference approach with a perturbation of 10⁻⁶.

  Time Step Δt (seconds)   Reliability Index β   Probability Pf   Time per FORM Iteration (seconds)
  5.0                      1.9500832977          0.0255830958     37
  1.0                      1.9154356433          0.0277184846     40
  0.5                      1.9143645856          0.0277867937     44
  0.1                      1.9140219941          0.0278086728     72
Table 5.4.7. Time step effect on the most likely hydraulic conductivity values and their γ-sensitivities using the direct approach.

  Time Step Δt (seconds)   Most Likely K for all elements   γ-Sensitivity for all elements
  5.0                      1.1541676212                     0.1581138830
  1.0                      1.1514284836                     0.1581138830
  0.5                      1.1513438091                     0.1581138830
  0.1                      1.1513167248                     0.1581138830
Table 5.4.8. Maximum differences between values obtained by the direct approach and values obtained by the difference approach with a perturbation of 10⁻⁶.

  Time Step Δt (seconds)   Max. Difference Most Likely K   Max. Difference γ-Sensitivity
  5.0                      0.0000003548                    0.0000003639
  1.0                      0.0000003123                    0.0000003261
  0.5                      0.0000005541                    0.0000005789
  0.1                      0.0000004916                    0.0000005136
The time required to complete one FORM iteration with the direct approach is totally impractical. For a time step of 0.1 the direct approach requires over 1.5 hours to complete one Rackwitz and Fiessler iteration.

Both approaches show that the most likely element hydraulic conductivity values should be the same for all elements in this case, as are also the sensitivity values. Table 5.4.7 shows the most likely hydraulic conductivity value and γ-sensitivity obtained for different time steps when the direct approach is used. All elements in the direct approach had exactly the same most likely value up to the 10th decimal digit, which was the output precision. In the difference approach the most likely values and γ-sensitivities showed some variation from element to element. Table 5.4.8 shows the maximum differences between results for the direct and difference approaches. For both values we see that the difference approach gave results accurate up to the 5th decimal digit, which is more than sufficient for all practical purposes.
5.4.4. Conclusions
The above comparisons between the direct and difference approach in estimating the gradient of the performance function make it clear that the difference approach has a considerable advantage over the direct approach in terms of CPU time consumption, especially in cases where hydraulic conductivities are considered to be random variables. In the case study used for the comparison, the perturbation value for the difference approach did not cause significant deviations from the results obtained from the direct approach. The results do indicate, however, that the perturbation value used is very important. A perturbation value of 10⁻⁶ seems to give the best results in the problem analyzed, which has parameter values of 10 m for the longitudinal dispersivities and 1 m/d for the hydraulic conductivity. However, it is very likely that the best perturbation value will depend on the value of the parameter of interest and probably should be defined as ΔX = X·10⁻⁶, where X is the parameter value. In any case care should be taken to use the smallest possible perturbation, but not so small as to be affected by
numerical truncation errors.
The direct approach for cases with dispersivity random variables and large time steps Δt was shown to be faster than the difference approach. If computer memory is not an obstacle, then one can store the gradient hydraulic conductivity matrices for each element, and this will improve the CPU consumption for the direct approach with hydraulic conductivity random variables. Under such conditions the direct approach might be faster than the difference approach for all random variables for some large Δt value. Of course, large time step values affect the Courant number and may cause numerical errors in the FEM algorithm, and therefore this approach will not always give acceptable results. It is however an alternative which should be considered for some problems.

The direct approach as implemented in this chapter has many disadvantages. However, an approach which has not been tried out is to see if it is possible to use different time steps for the direct approach and the FEM solution. There is nothing which compels us to use the same time step in the direct approach as the one used in the FEM equations. One can for example use a time step of 0.1 for the FEM equations and a time step of 1.0 for the direct approach. This will give more accurate FEM concentration values which can then be used in the direct gradient estimation algorithm. Using different time steps is maybe an approach which can make the direct gradient estimation algorithm more efficient than the difference approach. One should also consider one-step FEM formulations. Such FEM formulations are not very accurate, but perhaps they are good enough for reliability problems.
6. Input Data for Probabilistic Analysis
6.1. Introduction
In order to perform a probability analysis with any type of stochastic model, stochastic information for all the random variables defined in the model is required. The PAGAP program requires at least second order statistical information for the random variables as input. Obtaining such data is not a straightforward process, because the model random variables are defined over the element volume and are not point values like those usually obtained in the field. In this chapter we shall present some methods that can be used to obtain, based on field measurements, second order statistical information which can be used in PAGAP.
In general, the problem with using groundwater models of any type arises from the need to assign "appropriate" parameter values to each element. The word "appropriate" simply means that the values that have been used model the aquifer fairly well. These "appropriate" values can be considered as some kind of "average" parameter values over the element volume.

In order to avoid a trial and error approach, a number of methods have been proposed which define an averaging process based on field measurements. Such methods do not necessarily yield appropriate values. If the field measurements do not reflect the aquifer properties correctly, then no matter which method is used, the values obtained will not be appropriate. The only way to avoid this type of problem is to obtain a good understanding of the geological characteristics of the aquifer and rely on experience and common logic to make the right choices and assumptions about where and how the field measurements should take place. The field measurements should also be taken according to the geological characteristics of the aquifer and according to the purpose of the investigation. Unfortunately, we can never be sure that the field measurements have captured all inhomogeneities in the aquifer. Keeping this in mind, we can say that all averaging methods try to give the best possible element coefficients for a given set of field measurements.
Once field measurements are available, one can find appropriate stochastic information for the parameter values and thus describe these parameters with second order statistical information. This information cannot be used directly in PAGAP. We must first account for the element volume effects on these stochastic parameter values. The most important method developed for addressing and solving this type of problem is the geostatistical method. Although not used in the PAGAP program, this method is undoubtedly the most elegant, reliable and thorough of all the approaches to aquifer problems. Geostatistics is widely used in hydrogeology and has proven to be an invaluable tool. The PAGAP program uses a method presented by Vanmarcke (1983). This method is not as accurate as the geostatistical method, but it results in closed formulas which can be inserted into a computer program relatively easily.

As an introduction to these methods we shall first consider spatial variability and spatial correlation.
6.2. Spatial Correlation
Using random variables to describe the aquifer properties can lead to absurd results. Classical statistical concepts can be applied to variables which have two essential properties: 1) the possibility, theoretically at least, of repeating indefinitely the test that assigns a numerical value to the variable, and 2) the independence of each test from the previous and the next ones. One of the popular statistical examples which exhibits the above properties is the coin tossing example.
For example, previous hydrogeological investigations have assumed* that hydraulic conductivity and transmissivity are lognormally distributed. However, this assumption can be misleading. If we randomly sample a given aquifer, the hydraulic conductivities obtained from these samples will be approximately lognormally distributed only if the aquifer is sufficiently large and the samples are not too close to each other. Only under these conditions can the samples be assumed to be independent from each other. If two samples are close to each other, we expect that the samples will be similar and the sample hydraulic conductivities will be more or less alike. If a number of samples which are relatively close to each other are used to obtain a local average, we cannot assume that they are independent and therefore lognormally distributed.
We expect that the dependence between two samples will be a function of the distance between them. This distance is referred to as the "lag" distance between two sample points. We also expect that there must be a lag distance beyond which the samples can be considered to be independent. Commonly the "correlation integral scale" is used as a measure of the mean distance over which a property exhibits positive correlation. On the other hand, when the lag distance approaches zero we expect that the samples will become identical. Of course, only one sample can be taken from a specific location, so we cannot have two samples with a lag distance of zero. There are also practical limits to how small the lag distance can become, since the samples, although considered as point samples, are not without physical dimensions. Even if two samples with a very small lag distance are obtained, they will not be identical due to sampling errors and small scale effects.

* The assumption of a lognormal distribution for hydraulic conductivity is supported by a large number of laboratory and field measurements. Freeze (1975) presents an excellent review of evidence for assuming a lognormal PDF for hydraulic conductivity and transmissivity.
The correlation coefficient expresses the linear dependency between two random variables and is defined as

$$\rho_{ij} = \frac{\mathrm{Cov}(X_i, X_j)}{\sigma_i \sigma_j} \qquad (6.2.1)$$

where σᵢ is the standard deviation of the random variable Xᵢ, and Cov(Xᵢ, Xⱼ) is the covariance between the two random variables, defined as

$$\mathrm{Cov}(X_i, X_j) = E\left[(X_i - \mu_i)(X_j - \mu_j)\right] \qquad (6.2.2)$$

where E[ ] is the expectation and μᵢ the mean value of Xᵢ. Two things that should be mentioned are: 1) when i = j, the covariance equals the variance of Xᵢ, and 2) the correlation coefficients are nothing more than the covariances normalized by the standard deviations. The correlation coefficients have values in the range [−1, 1], where 0 signifies that the variables are uncorrelated, 1 that they are absolutely positively correlated and −1 that they are absolutely negatively correlated.
The concept of "statistical homogeneity" is used to characterize a material with uncer-
tain properties according to if its statistical properties vary in space. If, for example, the
expected value of the hydraulic conductivity for a given aquifer, does not change as a
function of spatial location within the aquifer, then hydraulic conductivity for this aqui-
fer can be characterized as statistically homogeneous.
In the simplest case, covariance and statistical homogeneity are combined to form the
hypothesis of weak stationarity or second-order stationarity. If the mean is constant
throughout the aquifer, and the covariance is assumed to be a function of the lag dis-
tance only, then the material property is said to be second-order stationary. Mathemati-
cally we can express the above hypothesis as
for all i (6.2.3a)
----- - - ~ - ~ - --
Chapter 6. Input Data for Probabilistic Analysis page 183.
where d is the lag distance between points i and j. Other assumptions lead to methods
for handling weaker stationarity, but independently of which approach will be used, all
methods result in giving second order statistical information.
Since the PAGAP program uses a finite element discretization of the aquifer, we have to solve the following problem: given second-order statistical information for sampling points, find appropriate second-order information for the elements used by the numerical model.

The spatial average mean and covariance for hydraulic conductivity $K_{e_i}$ in element $e_i$ are given as follows (Hachich and Vanmarcke, 1983):

$$\mu_{K_{e_i}} = \frac{1}{A_{e_i}} \int_{A_{e_i}} \mu_K(x_{e_i})\, dx_{e_i} \qquad (6.2.4a)$$

$$\mathrm{Cov}\left(K_{e_1}, K_{e_2}\right) = \frac{1}{A_{e_1} A_{e_2}} \int_{A_{e_1}} \int_{A_{e_2}} C_K(x_{e_1}, x_{e_2})\, dx_{e_1}\, dx_{e_2} \qquad (6.2.4b)$$

where $A_{e_i}$ is the area of element $e_i$, $\mu_K$ is the point mean value, $x_{e_i}$ is a point within element $e_i$ and $C_K(x_{e_1}, x_{e_2})$ is the covariance function of the point K values. If we assume statistical homogeneity in the mean, then equation 6.2.4a gives that the spatial average mean and the point mean are identical. Finding the element covariances is more complicated. We must evaluate the integral either analytically, for a specific covariance function and element shape, or numerically. Vanmarcke (1983) has developed a rather simple solution for homogeneous random fields which is easy to implement in a numerical model which uses regular element shapes (square or rectangular elements).
[Figure 6.2.1. Illustration of hydraulic conductivity (K) as a one dimensional random field and of the spatial averaging effect.]
If we compare the distribution of point data values with that of the average values obtained by an averaging process, we see that all irregularities observed in the point data are smoothed out (figure 6.2.1). The averaging process will also cause the spatial average variance to become smaller than the point variance as the averaging area increases. Theoretically the spatial average variance will approach zero as the averaging area becomes infinitely large. The variance reduction effect can be written mathematically in the form (Vanmarcke, 1983)

$$\sigma_{K_e}^2 = \Gamma \cdot \sigma_K^2 \qquad (6.2.5)$$

where Γ is some variance reduction function which depends on the covariance function and the averaging area. The averaging process also results in a stronger correlation between spatial averages than between point values separated by the same lag distance.
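When no closed form is available, the double integral in equation 6.2.4b can be approximated numerically. The following C sketch uses a simple midpoint rule over two rectangular elements and assumes an isotropic exponential point covariance C(d) = σ²e^(−d/b); the data structure, parameter names and subdivision level m are illustrative, not part of PAGAP:

```c
#include <math.h>

typedef struct { double x0, y0, dx, dy; } Rect;  /* lower-left corner and size */

/* Midpoint-rule approximation of equation 6.2.4b for two rectangular
   elements; m x m sample points are used per element. */
double element_cov(Rect e1, Rect e2, int m, double s2, double b)
{
    double sum = 0.0;
    for (int i = 0; i < m; i++)
      for (int j = 0; j < m; j++)
        for (int k = 0; k < m; k++)
          for (int l = 0; l < m; l++) {
              double x1 = e1.x0 + (i + 0.5) * e1.dx / m;
              double y1 = e1.y0 + (j + 0.5) * e1.dy / m;
              double x2 = e2.x0 + (k + 0.5) * e2.dx / m;
              double y2 = e2.y0 + (l + 0.5) * e2.dy / m;
              double d  = hypot(x1 - x2, y1 - y2);
              sum += s2 * exp(-d / b);           /* point covariance C(d) */
          }
    /* Dividing by the number of point pairs carries the 1/(A_e1 * A_e2)
       normalization of equation 6.2.4b. */
    return sum / ((double)m * m * m * m);
}
```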
The PAGAP program can also use marginal PDF information if this is available. If we assume, for example, that the point hydraulic conductivities are lognormally distributed, can we use this information also for averaged hydraulic conductivities? Vanmarcke (1977) noted that the PDF of averaged properties will be narrower than the point PDF, as a direct result of the variance reduction phenomenon. Based on the Central Limit theorem, a normal distribution is often assumed for the spatial average marginal PDF. In the case of hydraulic conductivities the choice of a normal distribution is poor. The normal distribution is unbounded and can give negative values, which are unacceptable. Der Kiureghian (1987)* has shown that, for practical purposes, the spatial average PDF is approximately equal to the point PDF when the spatial average distance is small relative to the correlation integral scale. In other words, we can use the point PDF to represent the spatial average PDF as long as the numerical element discretization results in small size elements compared with the correlation integral scale.

* According to Cawlfield and Sitar (1987).
The above theory seems to be relatively straightforward, but in real world practical problems it is not adequate. One has to decide which is the appropriate correlation function or variance reduction function to obtain the needed second-order results. On the other hand, field information will usually include hydraulic conductivity values obtained from pumping tests, observation wells and laboratory samples. While laboratory samples can be considered to be point values, this is not true for pumping test values, since these characterize a fairly large area. Choosing the right method which will be able to incorporate all field information and give the appropriate variance reduction function is a crucial decision the analyst must make.
6.3. Geostatistical Approach
In geostatistics the variables that exhibit spatial correlation are termed regionalized variables (ReV). Regionalized variables can be divided into two categories: stationary and non-stationary. In the latter the variable has a definite trend in space: for instance, the variable decreases systematically in one direction. On the contrary, there is no systematic trend in space for stationary variables. Only stationary ReVs will be considered in this section.
Geostatistics employs a method known as "kriging" to estimate the value of a regionalized variable at a specific location based on measurements at surrounding locations. In order to use kriging the concept of a random function (RF) is introduced. A random function can be thought of as the geostatistical equivalent to a random variable. It is easier to explain the concept of a RF with an example. Let us assume that for each location in an aquifer we define a random variable which describes the uncertainty of the hydraulic conductivity value at this point; then an infinite set of random variables is produced. In order to keep track of this infinite set of variables, we give a name to the set and the location: for example, Z(x₁) is a hydraulic conductivity random variable at location x₁ and Z(x) is a random function of the hydraulic conductivities for this specific aquifer. Since hydraulic conductivity has a unique value at a given point, kriging uses a RF for which we only have one realization.
6.3.1. Kriging with Second-Order Stationarity
The definition of second-order stationarity has already been given in section 6.2.2. The same equations apply in geostatistics, with the difference that we use the RF notation:

$$E[Z(x)] = \mu \qquad (6.3.1a)$$

$$\mathrm{Cov}(x_1, x_2) = \mathrm{Cov}(d) = E\left[(Z(x_1) - \mu)(Z(x_2) - \mu)\right] = E[Z(x_1) Z(x_2)] - \mu^2 \qquad (6.3.1b)$$

where Z(x) is a RF and d = x₁ − x₂, the distance between points x₁ and x₂. From the covariance equation we notice that when d = 0, then Cov(0) = Var(Z) = σ² is the variance (or dispersion variance) of Z.
Assuming that the mean value μ and the covariance function are known, we proceed by defining a new RF as

$$Y(x) = Z(x) - \mu \qquad (6.3.2)$$

The new RF will have E[Y(x)] = 0. We shall estimate the value of Y at a point x₀. This estimate is given by the following equation

$$Y^*(x_0) = \sum_{i=1}^{n} \lambda_i^0\, Y(x_i) \qquad (6.3.3)$$

where the asterisk shows that this is an estimate of the RF at location x₀. The λᵢ⁰ are the weights of the kriging estimator and are the unknowns of the problem. The index "0" shows that for each point x₀ there will be a different set of weights, while i indicates that the weight is connected to the measurement at point xᵢ.

The kriging weights are obtained by minimizing the theoretical estimation variance, given by

$$E\left[\left(Y^*(x_0) - Y(x_0)\right)^2\right] \qquad (6.3.4)$$
Replacing Y*(x₀) with the expression in 6.3.3 and developing 6.3.4 we obtain the equation

$$E\left[\left(Y^*(x_0) - Y(x_0)\right)^2\right] = \sum_i \sum_j \lambda_i^0 \lambda_j^0\, E[Y(x_i) Y(x_j)] - 2 \sum_i \lambda_i^0\, E[Y(x_i) Y(x_0)] + E\left[\left(Y(x_0)\right)^2\right] \qquad (6.3.5)$$

By definition of the covariance function we can write

$$E[Y(x_i) Y(x_j)] = \mathrm{Cov}(x_i - x_j) + \mu^2 \qquad (6.3.6)$$

but the mean μ of Y is zero, giving

$$E[Y(x_i) Y(x_j)] = \mathrm{Cov}(x_i - x_j) \qquad (6.3.7)$$

We have also noted that

$$E\left[\left(Y(x_0)\right)^2\right] = \mathrm{Cov}(0) \qquad (6.3.8)$$

By replacing the expectations with the covariance function we obtain the equation

$$E\left[\left(Y^*(x_0) - Y(x_0)\right)^2\right] = \sum_i \sum_j \lambda_i^0 \lambda_j^0\, \mathrm{Cov}(x_i - x_j) - 2 \sum_i \lambda_i^0\, \mathrm{Cov}(x_i - x_0) + \mathrm{Cov}(0) \qquad (6.3.9)$$
We finally minimize the above expression by setting the partial derivatives with respect to the kriging weights to zero:

$$\frac{\partial}{\partial \lambda_i^0} E\left[\left(Y^*(x_0) - Y(x_0)\right)^2\right] = 2 \sum_j \lambda_j^0\, \mathrm{Cov}(x_i - x_j) - 2\, \mathrm{Cov}(x_i - x_0) = 0 \qquad (6.3.10)$$

where i = 1, 2, ..., n. This results in a linear system of n equations with n unknowns:

$$\sum_j \lambda_j^0\, \mathrm{Cov}(x_i - x_j) = \mathrm{Cov}(x_i - x_0) \qquad (6.3.11)$$

This system has only one solution if the covariance function is positive definite and the xᵢ are distinct. Assuming that this is the case, we can use an LU decomposition to solve the system of equations and obtain the kriging weights for point x₀. Since the left hand sides of the system equations do not depend on x₀, we need only decompose the system matrix once. For finding the kriging weights for another point, one needs only to calculate Cov(xᵢ − x₀) and use the solution algorithm of the LU decomposition.
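As an illustration of how the system 6.3.11 might be assembled in practice, the following C sketch uses an assumed exponential covariance Cov(d) = σ²e^(−d/b); lu_solve() stands in for any LU factorization routine and is not a real library call:

```c
#include <math.h>

/* Assemble the simple-kriging system A * lambda = rhs of equation 6.3.11 for
   n measurement points (px, py) and one estimation point (x0, y0). */
void build_kriging_system(int n, const double *px, const double *py,
                          double x0, double y0, double s2, double b,
                          double *A /* n*n, row major */, double *rhs /* n */)
{
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            double d = hypot(px[i] - px[j], py[i] - py[j]);
            A[i * n + j] = s2 * exp(-d / b);            /* Cov(x_i - x_j) */
        }
        double d0 = hypot(px[i] - x0, py[i] - y0);
        rhs[i] = s2 * exp(-d0 / b);                     /* Cov(x_i - x_0) */
    }
    /* The weights follow from a call such as lu_solve(A, rhs, n); since A
       does not depend on x0, one factorization serves all estimation points. */
}
```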
We must not forget that the original RF was Z(x) and not Y(x). The final solution is obtained as

$$Z^*(x_0) = \mu + \sum_i \lambda_i^0 \left( Z(x_i) - \mu \right) \qquad (6.3.12)$$

We can also calculate the estimation variance from the equation

$$\mathrm{Var}\left[Z^*(x_0) - Z(x_0)\right] = \mathrm{Var}[Z] - \sum_j \lambda_j^0\, \mathrm{Cov}(x_j - x_0) \qquad (6.3.13)$$

6.3.2. Kriging in the Intrinsic Case
In many cases it has been shown that the hypothesis of second-order stationarity with a finite variance Cov(0) is not satisfied by the data: the variance increases with the size of the area under consideration. To make the estimation possible, the "intrinsic hypothesis" is invoked.

The intrinsic hypothesis assumes that even if the variance of Z is not finite, the variance of the first-order increments of Z is finite and these increments are themselves second-order stationary. In other words the increment Z(x + d) − Z(x) satisfies the conditions

$$E[Z(x+d) - Z(x)] = \mu(d) \qquad (6.3.14a)$$

$$\mathrm{Var}[Z(x+d) - Z(x)] = 2\gamma(d) \qquad (6.3.14b)$$

which are both functions of d (not x), where d is the distance and γ(d) is only a function of the distance d.

We usually assume that the mean value is zero, but if this is not the case we simply define a new RF Y(x) for which Y(x) = Z(x) − μ(x). This satisfies the intrinsic conditions, since μ(x+d) − μ(x) = μ(d).
The variance of the increment defines a new function called the semi-variogram (or simply variogram) γ(d):

$$\gamma(d) = \frac{1}{2} \mathrm{Var}[Z(x+d) - Z(x)] \qquad (6.3.15)$$

Since the expected value of the increment has been assumed to be equal to zero, we can write

$$\gamma(d) = \frac{1}{2} E\left[\left(Z(x+d) - Z(x)\right)^2\right] \qquad (6.3.16)$$

In this form γ(d) expresses the mean quadratic increment of Z between two points separated by the distance d.
[Figure 6.3.1. Covariance Cov(d) and semi-variogram γ(d) as functions of the lag distance d (De Marsily, 1986).]
Under the second-order stationarity hypothesis, both the covariance function and the semi-variogram function exist, and it can be shown that these functions are related as

$$\gamma(d) = \mathrm{Cov}(0) - \mathrm{Cov}(d) \qquad (6.3.17)$$

An illustration of these functions can be seen in figure 6.3.1. When the covariance function is finite (meaning that Cov(0) has a specific value), the semi-variogram tends towards an asymptotic value equal to this variance (Cov(0) = Var(Z)), which is also called the sill of the semi-variogram. The distance at which the semi-variogram reaches its asymptotic value is called the range.
Finding the estimate Z*(x₀) of the unknown quantity Z(x₀) follows, in general, the same process as in the second-order stationarity case. Z*(x₀) is set equal to the weighted sum of all available measurements:

$$Z^*(x_0) = \sum_{i=1}^{n} \lambda_i^0\, Z(x_i) \qquad (6.3.18)$$

But since the mean value is not known, we set the additional condition that the expected value of the estimated RF must equal the expected value of the true RF. Mathematically this is expressed as

$$E[Z^*(x_0)] = E[Z(x_0)] \qquad (6.3.19)$$

This condition leads to the following condition for the weights

$$\sum_{i=1}^{n} \lambda_i^0 = 1 \qquad (6.3.20)$$
As before, we develop the expression

$$E\left[\left(Z^*(x_0) - Z(x_0)\right)^2\right] \qquad (6.3.21)$$

and then incorporate the semi-variogram into the resulting expression. We finally get

$$E\left[\left(Z^*(x_0) - Z(x_0)\right)^2\right] = -\sum_i \sum_j \lambda_i^0 \lambda_j^0\, \gamma(x_i - x_j) + 2 \sum_i \lambda_i^0\, \gamma(x_i - x_0) \qquad (6.3.22)$$

In order to minimize the above expression and at the same time satisfy the condition imposed on the kriging weights (condition 6.3.20), we use a Lagrange multiplier. We minimize the expression

$$E\left[\left(Z^*(x_0) - Z(x_0)\right)^2\right] + 2\nu \left( \sum_i \lambda_i^0 - 1 \right) \qquad (6.3.23)$$

where ν is a new unknown, called a Lagrange multiplier. Minimizing this expression results in the system of equations

$$\sum_j \lambda_j^0\, \gamma(x_i - x_j) + \nu = \gamma(x_i - x_0), \quad i = 1, 2, \ldots, n \qquad (6.3.24a)$$

$$\sum_j \lambda_j^0 = 1 \qquad (6.3.24b)$$
After the kriging weights have been calculated, we simply use the kriging estimate equation to find the solution at point x₀ (equation 6.3.18).
6.3.3. Incorporation of Averaging Effects
The geostatistical method can incorporate spatial averaging effects (known as regularization in geostatistics). If the value is to be estimated over some area S₀, say an element in a numerical model, then the kriging equations become

$$\sum_j \lambda_j^0\, \gamma(x_i - x_j) + \nu = \bar{\gamma}(x_i, S_0), \quad i = 1, 2, \ldots, n \qquad (6.3.25a)$$

$$\sum_j \lambda_j^0 = 1 \qquad (6.3.25b)$$

where $\bar{\gamma}(x_i, S_0)$ is the average semi-variogram between $x_i$ and $S_0$, given by the relationship

$$\bar{\gamma}(x_i, S_0) = \frac{1}{S_0} \int_{S_0} \gamma(x_i - x)\, dx \qquad (6.3.26)$$

The scale (or size) of a sample is known in geostatistics as the support. The support of the measured values can also be incorporated. The kriging equations become

$$\sum_j \lambda_j^0\, \bar{\gamma}(S_i, S_j) + \nu = \bar{\gamma}(S_i, S_0), \quad i = 1, 2, \ldots, n \qquad (6.3.27a)$$

$$\sum_j \lambda_j^0 = 1 \qquad (6.3.27b)$$

where $S_i$ is the support of measurement $i$ and

$$\bar{\gamma}(S_i, S_0) = \frac{1}{S_i S_0} \int_{S_i} \int_{S_0} \gamma(x_i - x)\, dx\, dx_i \qquad (6.3.28)$$

The estimation variance is given as

$$\mathrm{Var}\left(Z^*_{S_0} - Z_{S_0}\right) = \sum_j \lambda_j^0\, \bar{\gamma}(S_j, S_0) + \nu - \bar{\gamma}(S_0, S_0) \qquad (6.3.30)$$
Finally, assuming second-order stationarity, the correlation coefficient between two elements eᵢ, eⱼ can be found from the equation (Cawlfield and Sitar, 1987)

$$\rho_{e_i, e_j} = 1 - \frac{\bar{\gamma}(S_{e_i}, S_{e_j})}{\mathrm{Cov}(0)} \qquad (6.3.31)$$
Although only the basics of the geostatistical method have been presented here, one can see that the method is very flexible and general. The calculations required to obtain second-order information for the numerical elements are rather tedious, and that is why the Vanmarcke approach (section 6.5), which results in a closed form solution for evaluating the spatial average and correlation, has been used in PAGAP.
6.4. The Gottschalk Approach
Usually the stochastic variables of interest must be defined through point field measurements. In general we can write the following relationship

$$X(A) = \int\!\!\int_A W(u)\, X(u)\, du \qquad (6.4.1)$$

where X(u) is the stochastic field with a continuous variation in space, characterized by the vector u, and X(A) represents the integrated value of this field over an area A with a weight function W(u). The process defined with this equation is the so called generalized stochastic field (Gelfand and Vilenkin, 1964), and according to this terminology we are talking about a process with a local support over an area A.

The above equation can be considered to have an averaging effect over the area A. Actually, if we define the weight function to be W = 1/A, the equation takes the form of an average over the area A. Assuming that this is the case, Gottschalk (1993) uses this equation to obtain the following definitions
$$\gamma(A_i, A_j) = \bar{\gamma}(A_i, A_j) - \frac{1}{2}\left[\bar{\gamma}(A_i, A_i) + \bar{\gamma}(A_j, A_j)\right], \quad \text{where} \quad \bar{\gamma}(A_i, A_j) = \frac{1}{A_i A_j} \int_{A_i} \int_{A_j} \gamma(|u - v|)\, du\, dv \qquad (6.4.2)$$

With a similar approach, and assuming we have a homogeneous stochastic field, we can obtain the covariance function

$$\mathrm{Cov}(A_i, A_j) = \frac{1}{A_i A_j} \int_{A_i} \int_{A_j} \mathrm{Cov}(|u - v|)\, du\, dv \qquad (6.4.3)$$

Assuming in addition that the variance is more or less constant, i.e. changes very slowly and is a smooth function, we can obtain the correlation function as

$$\rho(A_i, A_j) = \frac{1}{A_i A_j} \int_{A_i} \int_{A_j} \rho(|u - v|)\, du\, dv \qquad (6.4.4)$$
Based on these equations Gottschalk proposes the following algorithm:

1. Each area Aᵢ is represented by its center of gravity point uᵢ.

2. Empirical correlation coefficients are calculated based on the point measurements available, and are plotted against the distances hᵢⱼ between the points.

3. A theoretical correlation function ρ⁰(h) is fitted to the curve.

4. New correlation coefficients are calculated based on the equation

$$\rho^{m+1}(A_i, A_j) = \frac{\bar{\rho}^m(A_i, A_j)}{\sqrt{\bar{\rho}^m(A_i, A_i)\, \bar{\rho}^m(A_j, A_j)}}$$

where

$$\bar{\rho}^m(A_k, A_k) = \frac{1}{A_k^2} \int_{A_k} \int_{A_k} \rho^m(|u - v|)\, du\, dv \qquad (6.4.5)$$

and analogously for the pair (Aᵢ, Aⱼ).

5. We repeat the above process until two consecutive correlation functions become identical.
The above algorithm is fairly straightforward and will give consistent results. It relies however on visual confirmation and is thus a rather complicated process to implement on a computer. The integrals in step 4 must be evaluated numerically.

In the case of a semivariogram we use the equation

$$\hat{\gamma}(h) = \frac{1}{2 N(h)} \sum_{(i,j) \in R(h)} \left( X(u_i) - X(u_j) \right)^2$$

where

$$R(h) = \{ (i, j) : h - \varepsilon \le |u_i - u_j| \le h + \varepsilon \}$$

and N(h) is the number of element pairs in the set R(h).
6.5. The Vanmarcke Approach
Vanmarcke (1983) describes the spatial variability of subsurface materials with the use of random field theory. The problem definition is the following: assuming that we have a given set of point data with known mean values and standard deviations, how do we estimate the variance reduction effect that will result from averaging this property over a given distance (or area)? The primary objective of Vanmarcke's approach is to estimate the variance reduction effect. In order to simplify things, Vanmarcke introduced the variance reduction function Γ(D) and the scale of fluctuation θ.

Vanmarcke's approach will be presented first for a one dimensional case and then will be extended to two dimensions. In order to avoid the presentation of random field theory, some simplifications will be made.
6.5.1. The Variance Reduction Function
Vanmarcke defined a moving averaging process X_D(x) over a continuous parameter stationary random process X(x) as follows:

$$X_D(x) = \frac{1}{D} \int_{x - D/2}^{x + D/2} X(u)\, du \qquad (6.5.1)$$

where D is the averaging distance (one dimension). X(x) can be thought of as a geostatistical random function under the second-order stationarity hypothesis. X(x) is characterized by the mean μ and the variance σ². The averaging process X_D(x) will not affect the mean value, because of the second-order stationarity hypothesis, but the variance will be reduced (see figure 6.5.1). The variance of X_D(x) can be expressed as follows:

$$\mathrm{Var}[X_D(x)] = \sigma^2\, \Gamma(D) \qquad (6.5.2)$$
where Γ(D) is the variance reduction function of X(x), which measures the reduction of the point variance σ² under local averaging. The variance function possesses the following properties:

$$\Gamma(D) \ge 0 \qquad (6.5.3a)$$

$$\Gamma(0) = 1 \qquad (6.5.3b)$$

$$\Gamma(-D) = \Gamma(|D|) = \Gamma(D) \qquad (6.5.3c)$$

The variance function Γ(D) is related to a correlation function ρ(τ) according to the equation

$$\Gamma(D) = \frac{2}{D} \int_0^D \left( 1 - \frac{\tau}{D} \right) \rho(\tau)\, d\tau \qquad (6.5.4)$$

where τ is the lag distance between points x₁, x₂.
Using equation 6.5.4, Vanmarcke found the equivalent variance reduction functions of commonly used correlation functions.
1. The triangular correlation function, which decreases linearly from 1 to 0 as the lag distance goes from 0 to a:

$$\rho(d) = \begin{cases} 1 - \dfrac{|d|}{a}, & |d| \le a \\ 0, & |d| > a \end{cases} \qquad (6.5.5a)$$

Its variance reduction function for D ≥ 0 is:

$$\Gamma(D) = \begin{cases} 1 - \dfrac{D}{3a}, & D \le a \\ \dfrac{a}{D}\left( 1 - \dfrac{a}{3D} \right), & D \ge a \end{cases} \qquad (6.5.5b)$$

2. The exponential correlation function associated with a first-order autoregressive (or Markov) process:

$$\rho(d) = e^{-|d|/b} \qquad (6.5.6a)$$

$$\Gamma(D) = 2\left( \frac{b}{D} \right)^2 \left( \frac{D}{b} - 1 + e^{-D/b} \right) \qquad (6.5.6b)$$

3. The correlation function associated with a second-order autoregressive process:

$$\rho(d) = \left( 1 + \frac{|d|}{c} \right) e^{-|d|/c} \qquad (6.5.7a)$$

$$\Gamma(D) = 2\left( \frac{c}{D} \right)^2 \left[ \frac{2D}{c} - 3 + e^{-D/c} \left( \frac{D}{c} + 3 \right) \right] \qquad (6.5.7b)$$

4. The Gaussian correlation function:

$$\rho(d) = e^{-(d/b)^2} \qquad (6.5.8a)$$

$$\Gamma(D) = \left( \frac{b}{D} \right)^2 \left[ \sqrt{\pi}\, \frac{D}{b}\, \mathrm{erf}\!\left( \frac{D}{b} \right) + e^{-(D/b)^2} - 1 \right] \qquad (6.5.8b)$$

where erf( ) is the error function, which can be expressed in terms of the standard normal CDF as erf(u) = 2[Φ(u√2) − 0.5].

5. The spherical correlation function:

$$\rho(d) = \begin{cases} 1 - \dfrac{3}{2}\dfrac{|d|}{g} + \dfrac{1}{2}\left( \dfrac{|d|}{g} \right)^3, & |d| \le g \\ 0, & |d| > g \end{cases} \qquad (6.5.9a)$$

$$\Gamma(D) = \begin{cases} 1 - \dfrac{D}{2g} + \dfrac{D^3}{20 g^3}, & D \le g \\ \dfrac{g}{D}\left( \dfrac{3}{4} - \dfrac{g}{5D} \right), & D > g \end{cases} \qquad (6.5.9b)$$
As the averaging distance goes to infinity, the variance reduction function Γ(D) → 0. Vanmarcke also noted that as the averaging distance becomes large, the variance function becomes inversely proportional to the distance D. For the above cases of variance functions he concluded that

$$\text{case 1:} \quad \Gamma(D) \to \frac{a}{D} \qquad (6.5.10a)$$

$$\text{case 2:} \quad \Gamma(D) \to \frac{2b}{D} \qquad (6.5.10b)$$

$$\text{case 3:} \quad \Gamma(D) \to \frac{4c}{D} \qquad (6.5.10c)$$

$$\text{case 4:} \quad \Gamma(D) \to \frac{\sqrt{\pi}\, b}{D} \qquad (6.5.10d)$$

$$\text{case 5:} \quad \Gamma(D) \to \frac{3g}{4D} \qquad (6.5.10e)$$
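For reference, the five variance reduction functions, as reconstructed above, could be coded as follows; this is an illustrative C sketch rather than PAGAP's implementation:

```c
#include <math.h>

/* 1. Triangular correlation model (parameter a). */
double gamma_triangular(double D, double a)
{
    if (D <= 0.0) return 1.0;
    if (D <= a)   return 1.0 - D / (3.0 * a);
    return (a / D) * (1.0 - a / (3.0 * D));
}

/* 2. Exponential (first-order Markov) model (parameter b). */
double gamma_exponential(double D, double b)
{
    if (D <= 0.0) return 1.0;
    double x = D / b;
    return 2.0 / (x * x) * (x - 1.0 + exp(-x));
}

/* 3. Second-order autoregressive model (parameter c). */
double gamma_second_order(double D, double c)
{
    if (D <= 0.0) return 1.0;
    double x = D / c;
    return 2.0 / (x * x) * (2.0 * x - 3.0 + exp(-x) * (x + 3.0));
}

/* 4. Gaussian model (parameter b). */
double gamma_gaussian(double D, double b)
{
    const double SQRT_PI = 1.77245385090551603;
    if (D <= 0.0) return 1.0;
    double x = D / b;
    return (SQRT_PI * x * erf(x) + exp(-x * x) - 1.0) / (x * x);
}

/* 5. Spherical model (parameter g). */
double gamma_spherical(double D, double g)
{
    if (D <= 0.0) return 1.0;
    if (D <= g)   return 1.0 - D / (2.0 * g) + D * D * D / (20.0 * g * g * g);
    return (g / D) * (0.75 - g / (5.0 * D));
}
```

Each function returns 1 at D = 0 and approaches the limiting forms of equations 6.5.10a-e for large D.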
Vanmarcke noticed that there are certain advantages in defining the integral part of equation 6.5.1 as a separate function. This function is termed the local integral of the random process X(x), and is given by the equation

$$I_D(x) = \int_{x - D/2}^{x + D/2} X(u)\, du = D\, X_D(x) \qquad (6.5.11)$$

The random functions X_D(x) and I_D(x) differ only by a factor D. Their respective variances differ by a factor D². With the use of the variance reduction function we can write

$$\mathrm{Var}[I_D(x)] = D^2 \sigma^2\, \Gamma(D) \qquad (6.5.12)$$

To simplify notation, a new variance function Δ(D) is introduced for the local integral function, which is defined as

$$\Delta(D) = D^2\, \Gamma(D) \qquad (6.5.13)$$

6.5.2. The Scale of Fluctuation
The scale of fluctuation θ is a parameter that equals the proportionality constant in the limiting expression of the variance function. It is defined as follows:

$$\theta = \lim_{D \to \infty} D\, \Gamma(D), \quad \text{or} \quad \Gamma(D) = \frac{\theta}{D} \text{ when } D \to \infty \qquad (6.5.14)$$

The scale of fluctuation is illustrated in figure 6.2.1. For the five models presented previously the scale of fluctuation is simply related to the model parameters:

(i) θ = a, (ii) θ = 2b, (iii) θ = 4c, (iv) θ = √π b, (v) θ = 3g/4.

In general the scale of fluctuation will be given by the following relationship (if it exists):

$$\theta = 2 \int_0^\infty \rho(\tau)\, d\tau \qquad (6.5.15)$$
6.5.3. Covariance of Local Averages
To find the covariance between two local averages, Vanmarcke first defined a random function X(x) and then defined two averaging line segments D_a, D_b in X(x) (figure 6.5.1). Using the average process random function X_D(x) we can write

$$X_a = X_{D_a}(x) = \frac{1}{D_a} \int_{x - D_a/2}^{x + D_a/2} X(u)\, du \qquad (6.5.16)$$

and

$$X_b = X_{D_b}(x') = \frac{1}{D_b} \int_{x' - D_b/2}^{x' + D_b/2} X(u)\, du \qquad (6.5.17)$$

and since the covariance of X_a, X_b is proportional to the covariance of the local integrals (Cov[I_a, I_b] = D_a D_b Cov[X_a, X_b]), we can also write Cov[X_a, X_b] = Cov[I_a, I_b]/(D_a D_b). In other words, the covariance between the two local averages can be computed if the covariance between the local integrals can be computed.
Vanmarcke defined the distances D₀, D₁, D₂ and D₃ as can be seen in figure 6.5.1. To avoid misunderstandings involving the definitions of the above distances, he describes them as follows:

I. D₀ = distance from the end of the first interval to the beginning of the next interval,

II. D₁ = distance from the beginning of the first interval to the beginning of the second interval,

III. D₂ = distance from the beginning of the first interval to the end of the second interval,

IV. D₃ = distance from the end of the first interval to the end of the second interval.
[Figure 6.5.1. Definitions of the distances D₀-D₃ used for finding the correlation coefficient between D_a and D_b (based on Vanmarcke, 1983).]
Of course the first interval is D_a while the second is D_b. The above definitions should be followed in all cases to avoid problems which appear when the intervals D_a, D_b overlap or one lies completely in the other.

With the above definitions, Vanmarcke arrived at the following algebraic identity:

$$I_{D_a} \cdot I_{D_b} = \frac{1}{2}\left[ I_{D_0}^2 - I_{D_1}^2 + I_{D_2}^2 - I_{D_3}^2 \right] \qquad (6.5.18)$$

where $I_{D_0}, I_{D_1}, I_{D_2}, I_{D_3}$ are the local integrals of X(x) over the intervals D₀, D₁, D₂, D₃ respectively.

Using the variance function Δ(D), as defined for the local integral function, one can obtain the following expression for the covariance between the two local integrals

$$\mathrm{Cov}\left[I_{D_a}, I_{D_b}\right] = \frac{\sigma^2}{2}\left[ \Delta(D_0) - \Delta(D_1) + \Delta(D_2) - \Delta(D_3) \right] \qquad (6.5.19)$$
Using now the relationships Cov[X_a, X_b] = Cov[I_a, I_b]/(D_a D_b), Var[X_a] = σ²Γ(D_a) and Var[X_b] = σ²Γ(D_b), the correlation coefficient between the two local averages can be estimated from the equation

$$\rho_{a,b} = \frac{D_0^2 \Gamma(D_0) - D_1^2 \Gamma(D_1) + D_2^2 \Gamma(D_2) - D_3^2 \Gamma(D_3)}{2\, D_a D_b \sqrt{\Gamma(D_a)\, \Gamma(D_b)}} \qquad (6.5.20)$$
It is easy to verify that if D_a, D_b coincide, then ρ_{a,b} becomes equal to 1. In the case where D_a, D_b are separated by a large distance, the variance reduction function can be replaced by Γ(D) = θ/D, where θ is the scale of fluctuation. It can then be seen that the numerator becomes equal to 0 in such a case.
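A minimal C sketch of equation 6.5.20 for two non-overlapping intervals might look as follows; the layout (lengths D_a and D_b separated by a gap D₀) and the function names are illustrative, and gamma_fn can be any of the variance functions sketched in section 6.5.1:

```c
#include <math.h>

typedef double (*gamma_fn)(double D, double par);

/* Correlation coefficient of equation 6.5.20 between two local averages over
   intervals of lengths Da and Db whose gap is D0 (so that D1 = D0 + Da,
   D2 = D0 + Da + Db and D3 = D0 + Db, following the definitions above). */
double rho_intervals(double Da, double Db, double D0, gamma_fn G, double par)
{
    double D1 = D0 + Da;
    double D2 = D0 + Da + Db;
    double D3 = D0 + Db;
    double num = D0 * D0 * G(D0, par) - D1 * D1 * G(D1, par)
               + D2 * D2 * G(D2, par) - D3 * D3 * G(D3, par);
    return num / (2.0 * Da * Db * sqrt(G(Da, par) * G(Db, par)));
}
```

For two adjacent intervals in an almost perfectly correlated field this returns a value close to 1, while for a large gap the numerator tends to 0, in agreement with the observations above.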
6.5.4. Two Dimensional Problems
Equations for two-dimensional problems are developed in the same way the one-dimensional equations were developed. A 2D random field is defined as X(x,y) and the average over a rectangular area with dimensions A = D_x D_y is considered*. The average random function X_A(x,y) is defined as the double integral

* D_x is parallel to the x axis and D_y to the y axis.
$$X_A(x, y) = X_{D_x D_y}(x, y) = \frac{1}{D_x D_y} \int_{x - D_x/2}^{x + D_x/2} \int_{y - D_y/2}^{y + D_y/2} X(u, v)\, dv\, du \qquad (6.5.21)$$
The variance reduction function is defined as in the one-dimensional case, but now it is a 2D function Γ(D_x, D_y) = Γ(A). Finally, the scale of fluctuation, instead of being a distance, becomes an area and is defined as

$$\Gamma(A) = \frac{\alpha}{A}, \quad A \to \infty \qquad (6.5.22)$$

where α is the two-dimensional scale of fluctuation. The covariance of the local integrals is given by the equation

$$\mathrm{Cov}[I_a, I_b] = \frac{\sigma^2}{4} \sum_{k=0}^{3} \sum_{l=0}^{3} (-1)^k (-1)^l\, \Delta(D_{1k}, D_{2l}) \qquad (6.5.23)$$

where the distances D_{1k}, k = 0, 1, 2, 3 and D_{2l}, l = 0, 1, 2, 3 are illustrated in figure 6.5.2 and are defined as in the one dimensional case.
[Figure 6.5.2. Definitions of the D distances for finding the correlation coefficient between the two shaded areas.]
Using equation 6.5.23 we can obtain the correlation coefficient between the two averaging areas. If we use the expression

$$\Delta(D_x, D_y) = (D_x D_y)^2\, \Gamma(D_x, D_y) \qquad (6.5.24)$$

we can further find the correlation coefficients in terms of the variance function Γ(x,y). If this function is known, then we can proceed with finding the correlation coefficients. Unfortunately, finding a 2D variance function is not easy.
Vanmarcke introduced a conditional 1D variance function Γ(y|x), which allows us to write

$$\Gamma(x, y) = \Gamma(x)\, \Gamma(y|x) \qquad (6.5.25)$$

The conditional variance reduction Γ(y|x) gives us the variance reduction due to averaging in the y direction, given that we have already averaged in the x direction. Of course, if we had first averaged in the y direction and then in the x direction we would obtain

$$\Gamma(y, x) = \Gamma(y)\, \Gamma(x|y) \qquad (6.5.26)$$

This implies that

1. Γ(x) Γ(y|x) = Γ(y) Γ(x|y), and

2. Γ(x,y) = Γ(y,x), i.e. the variance reduction is commutative,

since the total variance reduction should not depend on the order in which we average.

The problem of finding a 2D variance function is greatly simplified with the introduction of the conditional variance reduction, since now the 2D function is reduced to the product of two 1D functions. Even so, finding a conditional variance reduction function Γ(y|x) which is consistent with the variance reduction function Γ(x) is not so simple.
In order to simplify things even more, we assume that the 2D variance reduction function is separable. This means that we can write

$$\Gamma(x, y) \approx \Gamma(x)\, \Gamma(y) \qquad (6.5.27)$$

and thus we eliminate the need for finding the conditional reduction function. This is of course an approximation, because most 2D variance reduction functions are not mathematically separable. However, Cawlfield and Sitar (1987) have compared results obtained from using the geostatistical approach with results using the 2D Vanmarcke approach with separable variance reduction, and have concluded that if the dimensions of the rectangular areas are smaller than 1/3 of the scale of fluctuation, then the approximation is reasonably good.
The final simplification that can be made is related to the way the correlation coefficient is computed. The straightforward way would be to formulate the equation that gives the correlation coefficient in 2D and replace Γ(D_a, D_b) with the product of the 1D variance reductions Γ(D_a) Γ(D_b). It can be shown, however, that by using separable variance reduction the correlation coefficient also becomes separable, i.e.

$$\rho_{a,b} = \rho_{D_{x_a}, D_{x_b}} \cdot \rho_{D_{y_a}, D_{y_b}} \qquad (6.5.28)$$

meaning that we can use the 1D correlation coefficient formula. With reference to figure 6.5.2, we use the D_{1k} values to compute $\rho_{D_{x_a}, D_{x_b}}$ and the D_{2l} values to compute $\rho_{D_{y_a}, D_{y_b}}$.

Using the Vanmarcke approach with the separable correlation assumption as given in equation 6.5.28, the PAGAP program can compute the element second-order statistical information for the hydraulic conductivities and dispersivities that is needed. To do so, the point statistics for each element hydraulic conductivity and each element dispersivity must be given (mean and standard deviation, obtained for example from kriging), the 1D variance reduction function must be specified, and the 1D scales of fluctuation in the x and y directions must be given.
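Under the separable assumption, the element-to-element correlation is simply the product of two 1D coefficients. A short sketch, reusing the hypothetical rho_intervals() routine above, could be:

```c
/* Separable 2D correlation (equation 6.5.28) between two rectangular
   elements: side lengths and the x/y gaps g0x, g0y are illustrative inputs. */
double rho_elements(double Dxa, double Dxb, double g0x,
                    double Dya, double Dyb, double g0y,
                    gamma_fn G, double par_x, double par_y)
{
    return rho_intervals(Dxa, Dxb, g0x, G, par_x)
         * rho_intervals(Dya, Dyb, g0y, G, par_y);
}
```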
7. Other PAGAP Characteristics
7.1. Introduction
In this chapter we shall briefly discuss some of the characteristics of PAGAP (Probability Analysis of Groundwater and Pollution) which have not been presented in the previous chapters. Mainly two things are presented: first, the software used for the development of PAGAP as well as PAGAP's computer characteristics, and second, the solution algorithm used.
7.2. The Program Code
PAGAP was written in the C programming language. Its initial structure was developed with the use of the THINK C compiler for Macintosh computers, while the later stages of development were made on a DECstation 5000 with the use of the standard UNIX C compiler. Avoiding the use of special features available in THINK C made the transition from Macintosh to DECstation simple. The transition was necessary because of the memory requirements of PAGAP when large problems are analyzed. The memory requirements of PAGAP were anticipated from the very beginning of its development. In order to make PAGAP as independent as possible from memory requirements, dynamic memory allocation procedures, available in C, have been employed, which practically allow PAGAP to access all the computer memory (RAM) it requires for a given problem. If the computer does not have sufficient memory, an error message appears and the program automatically exits.
PAGAP does not use an optimum amount of computer memory for a given problem. The computer memory used by PAGAP could be reduced considerably, but such a reduction would result in repeating a large number of computations at each reliability iteration step. Since PAGAP is computationally very intensive, optimization with respect to memory has been ignored, while optimization with respect to speed has been prioritized. Nevertheless, PAGAP is still very slow. Problems with more than 600 random variables and small time steps can take several days to be analyzed on the DECstation 5000.
In the final stages of development, PAGAP was connected to the UNIRAS graphics library. With the use of these graphical procedures, PAGAP is capable of producing colored 2D and 3D contour plots. Hardcopies of these plots can also be obtained in encapsulated PostScript (EPS) format. Most of the contour plots presented later on in chapters 8 and 9 are produced in this manner. The EPS format has the advantage of being easily transported from UNIX to MS-DOS or Macintosh operating systems, but has the disadvantage of producing very large files. Most of the 3D contour plots produced by PAGAP are approximately 1 MB (megabyte) in size, which makes their use difficult, and this is the reason that they have not been used in this thesis. UNIRAS also has the ability to produce hardcopies in Computer Graphics Metafile (CGM) format, but it is more complicated to obtain files in this format. Since CGM files are smaller in size and portable, it might be necessary to introduce this ability in PAGAP at some later stage. The main problem with UNIRAS is that it does not allow dynamic memory allocation, and therefore the number of points which can be used for contour plots has been set to 1000 and is constant. In other words, if we have a problem with more than 1000 random variables, PAGAP will not be able to produce contour plots. Of course, one can always increase this number and recompile the program.
The Macintosh version of PAGAP does not make use of a graphics library. It is a purely computational program which produces an output file with all the results obtained. To obtain contour plots, a program named hydroplotter has been developed. Hydroplotter has a typical Macintosh user interface and produces 2D and 3D contour plots as well as 3D wire nets. It was developed on an old Macintosh SE/30 and cannot produce colored plots, since the SE/30 has a black and white screen. Instead of colors, hydroplotter uses gray scale patterns to fill the spaces between contour lines. The 3D plots are produced based on the scheme described by Kinzelbach (1986), whereby the X and Z axes are plotted in the horizontal and vertical directions respectively, while the Y axis is introduced with a user-defined angle with respect to the X axis. This scheme is easily implemented and gives a good 3D presentation of the results, but has the disadvantage of allowing only partial rotation of the plot. The program makes use of dynamic memory allocation procedures and produces hardcopies in the Macintosh PICT format. The PICT format is capable of producing high quality plots, and since it is the default format for Macintosh computers, it is easily transported and inserted into other documents.
A large number of finite element grid generating programs have also been developed.
Without such programs, using PAGAP would be extremely difficult. Most of these pro-
grams have a simple user interface requiring the user to define the geometrical charac-
teristics of the aquifer and the number of rows and columns that should be used for the
grid. Some of these programs can produce irregular grids where the number of col-
umns per row can be reduced or increased, while other programs are capable of refining
the grid - increasing the number of elements used - in specified areas. The main disad-
vantage with these programs is that the user can not give parameter and boundary condition
information to produce a complete data file for PAGAP. The parameter and boundary
condition information must be manually introduced into these files, which can be very
time consuming. An exception to this is an X Windows program which uses the vector
widgets. The vector widgets were developed at the University of Oslo by Otto Milvang
in 1990 and allow 2D and 3D graphics to be produced with ease. One of the character-
istics of the vector widgets which is very useful, is that one can control the mouse ac-
tions. The program produces only regular FEM grids, but then with simple mouse
clicks one can move the nodes around or specify boundary conditions and element pa-
rameter values. The main disadvantage of the vector widgets is that quality hardcopies
of the plots can not be produced.
7.3. Solution Algorithms
An important aspect of any finite element model is the solution algorithm used for the
solution of the system equations. Many algorithms have been tested before making the
final choice. Three different LU factorization algorithms and three iterative algo-
rithms, namely a Gradient, a Conjugate Gradient and an Orthomin algorithm, have been
tested, and the codes are included in PAGAP. Of the iterative algorithms, Orthomin
performed best, but when the number of unknowns was large, it required a very large
number of iterations to converge. For problems with more than 400 unknowns, one of
the LU factorization algorithms was actually faster, especially when the number of time
steps required was large.
Solving a system of equations with the LU factorization - or LU decomposition - algo-
rithm requires two steps. Let us assume the equation system is written in the form
\[ AX = B \]
where A is an N×N matrix called the system matrix, X is the vector of the N un-
knowns and B is the right hand side vector. The first step in using the LU algorithm is
the decomposition of the A matrix into a lower triangular matrix L and an upper trian-
gular matrix U such that A = LU. The second step is to solve the two result-
ing equation systems i.e.
\[ AX = B \;\Rightarrow\; (LU)X = B \;\Rightarrow\; \begin{cases} LY = B \\ UX = Y \end{cases} \]
Since L and U are triangular matrices the solution simply requires a forward substitu-
tion for the first system and a backwards substitution for the second system. The sim-
plest LU factorization algorithm can be implemented as shown in figures 7.3.1 and
7.3.2. Figure 7.3.1 shows the algorithm for the decomposition of the A matrix, where
$A_{ij}$ represents the entry in the i'th row and j'th column of A. Since the L and U matrices have the
form
\[
L = \begin{bmatrix}
1 & 0 & 0 & 0 \\
L_{21} & 1 & 0 & 0 \\
L_{31} & L_{32} & 1 & 0 \\
L_{41} & L_{42} & L_{43} & 1
\end{bmatrix},
\qquad
U = \begin{bmatrix}
U_{11} & U_{12} & U_{13} & U_{14} \\
0 & U_{22} & U_{23} & U_{24} \\
0 & 0 & U_{33} & U_{34} \\
0 & 0 & 0 & U_{44}
\end{bmatrix}
\]
the algorithm shown in figure 7.3.1 replaces the original A matrix with
\[
A = \begin{bmatrix}
U_{11} & U_{12} & U_{13} & U_{14} \\
L_{21} & U_{22} & U_{23} & U_{24} \\
L_{31} & L_{32} & U_{33} & U_{34} \\
L_{41} & L_{42} & L_{43} & U_{44}
\end{bmatrix}
\]
where $L_{ij}$ are the entries of the L matrix and $U_{ij}$ are the entries of the U matrix.
for k = 1 with a step of (1) until k = N-1
    for i = k+1 with a step of (1) until i = N
        $A_{ik} = A_{ik} / A_{kk}$
        for j = k+1 with a step of (1) until j = N
            $A_{ij} = A_{ij} - A_{ik} A_{kj}$
Figure 7.3.1. LU factorization algorithm.
for k = 1 with a step of (1) until k = N
    for i = k+1 with a step of (1) until i = N
        $B_i = B_i - A_{ik} B_k$
for i = N with a step of (-1) until i = 1
    $X_i = B_i$
    for j = i+1 with a step of (1) until j = N
        $X_i = X_i - A_{ij} X_j$
    $X_i = X_i / A_{ii}$
Figure 7.3.2. Solution algorithm.
The solution algorithm of figure 7.3.2 uses the decomposed A matrix obtained from the
first algorithm to solve the two equation systems and obtain the X vector, and during
this process it destroys the B vector. So, if the A matrix or the B vector are needed,
backups should be made before using them in the algorithm.
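For concreteness, the two algorithms of figures 7.3.1 and 7.3.2 can be written in C roughly as follows. This is a minimal sketch using 0-based indexing and no pivoting; the function names are illustrative. Unlike figure 7.3.2, this sketch copies B into X first, so the B vector survives the call.

/* Sketch of figure 7.3.1: overwrite the n x n row-major matrix A
   with its LU factors (Doolittle form, no pivoting). */
void lu_factor(double *A, int n)
{
    for (int k = 0; k < n - 1; k++)
        for (int i = k + 1; i < n; i++) {
            A[i*n + k] /= A[k*n + k];                /* entry of L */
            for (int j = k + 1; j < n; j++)
                A[i*n + j] -= A[i*n + k] * A[k*n + j];
        }
}

/* Sketch of figure 7.3.2: solve LY = B by forward substitution and
   UX = Y by backward substitution, using the factored matrix. */
void lu_solve(const double *A, const double *b, double *x, int n)
{
    for (int i = 0; i < n; i++) {                    /* LY = B */
        x[i] = b[i];
        for (int k = 0; k < i; k++)
            x[i] -= A[i*n + k] * x[k];
    }
    for (int i = n - 1; i >= 0; i--) {               /* UX = Y */
        for (int j = i + 1; j < n; j++)
            x[i] -= A[i*n + j] * x[j];
        x[i] /= A[i*n + i];
    }
}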
This simple LU algorithm will not always manage to solve an equation system. Some
of the reasons that can lead to failure can be ascertained by simply having a closer look
at the algorithm. In figure 7.3.1 we see that a division with $A_{kk}$ takes place and there-
fore if one of the diagonal entries of A is zero, the algorithm will fail. In the case that
the diagonal entries are non-zero but nevertheless very small in value, numerical errors
will occur. To avoid such problems pivoting is used. With pivoting we rearrange the
equation system by changing rows or columns or both in order to obtain an A matrix
with diagonal values which are non-zero and as large as possible. Although a pivoting
LU algorithm is available in PAGAP it is not the algorithm used, mainly because the
system matrix for steady state groundwater flow and that for pollution transport are well
structured. The LU algorithm used in PAGAP is shown in figures 7.3.3 and 7.3.4.
This algorithm is the same as the one outlined above, with the difference that it makes
use of the fact that the system matrices we need to solve in groundwater and pollution
transport problems are banded.
for k = 1 with a step of (1) until k = N-1
    for i = k+1 with a step of (1) until i = min(k+p, N)
        $A_{ik} = A_{ik} / A_{kk}$
        for j = k+1 with a step of (1) until j = min(k+q, N)
            $A_{ij} = A_{ij} - A_{ik} A_{kj}$
Figure 7.3.3. LU factorization of a (p,q) banded matrix.
for k = 1 with a step of (1) until k = N
    for i = k+1 with a step of (1) until i = min(k+p, N)
        $B_i = B_i - A_{ik} B_k$
for i = N with a step of (-1) until i = 1
    $X_i = B_i$
    for j = i+1 with a step of (1) until j = min(i+q, N)
        $X_i = X_i - A_{ij} X_j$
    $X_i = X_i / A_{ii}$
Figure 7.3.4. Solution algorithm for a banded matrix.
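The banded variants of figures 7.3.3 and 7.3.4 only change the loop limits. A sketch in the same style (0-based indexing; for clarity the full matrix is kept in memory, whereas a production code would store only the band):

static int imin(int a, int b) { return a < b ? a : b; }

/* Sketch of figure 7.3.3: LU factorization of a (p,q) banded matrix. */
void banded_lu_factor(double *A, int n, int p, int q)
{
    for (int k = 0; k < n - 1; k++)
        for (int i = k + 1; i <= imin(k + p, n - 1); i++) {
            A[i*n + k] /= A[k*n + k];
            for (int j = k + 1; j <= imin(k + q, n - 1); j++)
                A[i*n + j] -= A[i*n + k] * A[k*n + j];
        }
}

/* Sketch of figure 7.3.4: banded solve. As in the original algorithm,
   the right hand side b is destroyed by the forward substitution. */
void banded_lu_solve(const double *A, double *b, double *x,
                     int n, int p, int q)
{
    for (int k = 0; k < n - 1; k++)                  /* LY = B */
        for (int i = k + 1; i <= imin(k + p, n - 1); i++)
            b[i] -= A[i*n + k] * b[k];
    for (int i = n - 1; i >= 0; i--) {               /* UX = Y */
        x[i] = b[i];
        for (int j = i + 1; j <= imin(i + q, n - 1); j++)
            x[i] -= A[i*n + j] * x[j];
        x[i] /= A[i*n + i];
    }
}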
\[
\begin{bmatrix}
11 & 4 & 3 & 0 & 0 & 0 \\
5 & 6 & 4 & 3 & 0 & 0 \\
1 & 3 & 5 & 9 & 2 & 0 \\
0 & 4 & 6 & 6 & 7 & 2 \\
0 & 0 & 9 & 2 & 11 & 3 \\
0 & 0 & 0 & 5 & 4 & 10
\end{bmatrix}
\]
Figure 7.3.5. A (2,2) banded matrix.
A banded matrix is a matrix where, in a diagonal sense, a band of non-zero matrix en-
tries can be defined. An example of a (2,2) banded matrix is shown in figure 7.3.5.
Any banded matrix is characterized by two numbers. The first tells us how many lower
sub-diagonals are non-zero and the second how many upper sub-diagonals are non-zero.
Using the fact that the system matrices are banded can speed up the LU algorithm
considerably. The LU algorithm requires considerable time to solve large equation
systems, in general proportional to $N^3/3$ where N is the number of unknowns. So, as
an example, if we want to solve a system of 10000 equations the time needed will be
given approximately by the equation
\[ T = a\,\frac{N^3}{3} = a\,\frac{(10^4)^3}{3} \approx 3.3\times 10^{11}\,a \]
where a is a proportionality coefficient depending on the speed of the computer used. If
we assume $a = 10^{-6}$, then inserting into the above equation gives a time of $T \approx 4$ days.
If the same system matrix is (100,100) banded, then by using the banded LU algorithm
the time required would be
\[ T_B = a\,100^4 + 2a\,100^3 \approx 102\ \text{sec} \]
since the factorization cost is roughly proportional to $Npq = 10^4 \cdot 10^2 \cdot 10^2 = 100^4$
and the solution cost to $2Np = 2\cdot 100^3$.
When using LU factorization one should keep in mind that the factorization of the A
matrix is much more time consuming than the solution algorithm. Actually, in most
cases the factorization will overshadow the solution algorithm. The advantage of the
LU algorithm is that once A has been decomposed, we can obtain solutions for different
B vectors just by using the solution algorithm which is very fast. This means that when
solving the pollution transport equations we need to decompose the system matrix only
once and then obtain solutions for each time step with the use of the solution algorithm.
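In code, this reuse amounts to factoring once outside the time loop and calling only the fast solve routine inside it. A schematic fragment, assuming the hypothetical routines sketched earlier and a stand-in assemble_rhs() for whatever builds the right hand side at each step:

/* Schematic fragment only: A, b, c, n, p, q and nsteps are assumed to
   be set up elsewhere; assemble_rhs() is a hypothetical stand-in for
   building the right hand side vector of the current time step. */
banded_lu_factor(A, n, p, q);              /* expensive - done once      */
for (int step = 0; step < nsteps; step++) {
    assemble_rhs(b, c, n, step);           /* new B vector for this step */
    banded_lu_solve(A, b, c, n, p, q);     /* fast forward/back solve    */
}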
In PAGAP speed was a major concern, and since the banded LU factorization was the
fastest algorithm, it was chosen for solving the system equations. One
should keep in mind, however, that the iterative solvers did not make use of the
banded structure of the system matrix. It is very likely that if the Orthomin algorithm
were modified to restrict computations to the band of the system matrix it
would become considerably faster.
8. Theoretical Case Studies and
PAGAP Verification
8.1. Introduction
The main purpose of developing a stochastic model like PAGAP is to be able to esti-
mate the probability of a given event taking place. Therefore, in order to verify the
PAGAP model we need to verify that the probability estimates obtained are correct. Un-
fortunately this is not easily done, because the actual probabilities are not available for
comparisons.
In order to verify PAGAP, the best approach would be to compare its estimates with
probabilities obtained from a Monte Carlo Simulation (MCS) stochastic model, since such
probability estimates are very reliable, especially when a large number of simulations is used.
Since an MCS stochastic model was not available, the next best approach to verifying
PAGAP was to compare it with some other reliability method stochastic model.
Such a model is CALREL-TRANS, developed by Jang et al. (1990). Although this model
was not available for direct comparisons, some of the results obtained from it have been
published by Jang et al. (1990 & 1994), and it is these published results that are used for
the verification of PAGAP.
Comparing results obtained from PAGAP and CALREL-TRANS was not problem free.
The two models differ in many respects and these differences are reflected in the results
obtained. The major difference between the models is in the implementation of the
FEM code. PAGAP can use either a consistent or lumped matrix formulation and pro-
vides the freedom to use any time difference scheme through the use of a parameter θ
which can take values in the range [0,1] (see chapter 3). The CALREL-TRANS model
uses an implicit lumped matrix formulation.
Theoretically, if PAGAP uses the lumped matrix formulation and θ is set equal to 1.0,
which gives an implicit scheme, the two models should give the same results. Unfortu-
nately this is not always observed. Therefore, a separate version of PAGAP was made
where the CALREL-TRANS FEM code has been implemented as described by Jang et
al. (1990) in order to identify the differences between them. Comparison between these
two versions shows that the results are very much dependent on the time step Δt used.
Although the differences are small, the impact on the reliability analysis is evident as we
shall observe later on. Another important difference is related to the way pollution
sources have been implemented in the CALREL-TRANS model. Jang et al. (1990)
have implemented a source term $c_s$ in the CALREL-TRANS FEM code, based on the equation
\[ \frac{\partial c}{\partial t} + v\,\nabla c - \nabla\cdot(D\,\nabla c) - c_s = 0 \]
where $c_s$ is defined as the concentration at the source nodes. However, since all the
other terms in the equation represent pollution fluxes, $c_s$ must represent a pollution flux
as well, in this case a source pollution flux. This term should basically be equiva-
lent to the internal flux term introduced by Kinzelbach (1986).
Therefore, if we use the concentration source node term to set the concentration at a
source node equal to 100 mg/l, this means that 100 mg/l per time unit is introduced into
the aquifer, and this is therefore not equivalent to using a constant concentration node with a
value of 100 mg/l. Only if the flow velocity is very small in the neighborhood of the
source node will the two formulations behave similarly. For even moderate velocities
we expect the concentration at the source node to be smaller than the concentration as-
signed to it. In general, the use of the concentration source node term will result in
lower concentration profiles than those observed when using constant concentration
nodes as source nodes.
The PAGAP and CALREL-TRANS models also differ in their reliability
method codes. CALREL-TRANS uses the CALREL general purpose reliability module
developed by Der Kiureghian and Liu in the period from 1987 to 1990, and has a special
design point algorithm devised by Der Kiureghian (1989). PAGAP obtains the design
point either by using the original Rackwitz and Fiessler (RF) algorithm or by using a
modified version of the RF algorithm which was developed in the course of this project.
This difference should not have an impact on the results obtained since all algorithms
should give the same design point, if they manage to converge. However, the tolerance
values used for the algorithm might have a significant effect. Jang et al. (1990) use a
tolerance of 0.001 - in some cases 0.01 - on the reliability index, while PAGAP uses a
tolerance of $|g(X)| \leq 0.000005$ on the performance function. This choice is based on the
observation that sometimes the reliability index does not change significantly from one
iteration step to the next, especially in cases which have convergence problems. In such
a case a tolerance on the reliability index might be satisfied although the value of the
performance function shows that the estimated design point is not near the g(X) = 0 hy-
perplane.
Another important difference is related to the methods used for obtaining the correlation
matrix. PAGAP employs Vanmarcke's approach which in addition to the correlation
matrix also introduces a variance reduction, while the CALREL-TRANS model seems
to use a simple one dimensional correlation function (an exponential correlation func-
tion) in all cases presented. Such a scheme would not cause variance reduction effects
to enter into the calculations, and therefore we expect the reliability indices from
PAGAP to be slightly larger than those from CALREL-TRANS.
Two case studies have been presented by Jang et al. (1990), and the cases presented
herein are reproductions of these. Diverse problems have been analyzed for each case, and
comparisons, discussions and comments are made which do not cover all the aspects of
the analysis results, but at least cover those which seem to be the most important ones.
8.2. First Case
Jang et al. (1990) considered an aquifer with dimensions 60×70 m, which can be seen in
figure 8.2.1. The finite element mesh consists of 168 isoparametric elements with 195
nodes. The same mesh has been used for simulating the steady state groundwater flow
and the pollutant transport. Two reliability problems are basically considered:
a. The probability that the concentration at the symmetric target node will exceed the
value of 0.2 after 100d of pollution transport.
b. The probability that the concentration at the asymmetric target node will exceed the
value of 0.2 after 100d of pollution transport.
The overall formulation of the problem is based on the following input information.
The longitudinal dispersivity $\alpha_L$ for each element has a mean value of 10 m and a stan-
dard deviation of 1 m, while the correlation is considered to be described by an exponen-
tial function with a correlation scale of 20 m. The marginal distribution of each element
$\alpha_L$ is assumed to be normal.
The transverse dispersivity $\alpha_T$ for each element has a mean value of 3 m and a standard
deviation of 0.3 m and is normally distributed. The correlation is described by an expo-
nential function with a correlation scale of 20 m.
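For reference, an exponential correlation function of the kind used here is commonly written in the one-parameter form below; the exact convention is an assumption on our part, as it is not reproduced in this section:
\[ \rho(r) = \exp\!\left(-\frac{r}{\lambda}\right), \qquad \lambda = 20\ \text{m}, \]
where r is the separation distance between two elements and λ is the correlation scale.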
Figure 8.2.1. Finite Element Mesh with Boundary conditions and Target nodes. [Plot legend: 5×5 m elements; symmetric and asymmetric target nodes; no flux, constant head and constant source boundaries.]
The hydraulic conductivity for each element has a mean of 3 m/d, a standard deviation of
0.3 m/d, and the correlation is described by an exponential function with a correlation
scale of 20 m. The marginal distribution is assumed to be lognormal. The hydraulic
conductivity is assumed to be deterministically isotropic.
In addition to the above random variables, all constant head nodes are considered to be
random variables with a standard deviation of 0.14 m. The constant head nodes at the
lower boundary are assumed to have a mean value of 2.0 m, while those of the upper
boundary have a value of 0.6 m. No correlation is assigned to these variables (i.e. cross-
correlations are set to zero). The marginal distribution is assumed to be normal.
The porosity is assigned a deterministic value of 0.3 for all elements, and the decay
constant a deterministic value of 0.0 for all elements. In all we have 168 $\alpha_L$, 168 $\alpha_T$,
168 hydraulic conductivity, and 26 constant head node random variables, in other words
a total of 530 random variables.
Results from Deterministic Analysis
Figures 8.2.2 and 8.2.3 show the steady state groundwater flow and pollution transport
obtained deterministically. All parameters are assumed to have values equal to their
mean values. PAGAP uses a lumped matrix formulation and an implicit difference
scheme for the time derivative.
The steady state groundwater flow solution simply shows that the total head distribution
will linearly decrease from 2.0m at the lower boundary to 0.6m at the upper boundary.
The velocity of groundwater flow can be easily estimated to be 0.2 m/d in the vertical
direction, giving a Peclet number of 5/6, estimated with the use of the transverse dis-
persivity. The Courant number is $C = 0.04\,\Delta t$, so a time step of 0.1 day seems more
than sufficient for stability and is actually the time step used for this case study.
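These numbers can be checked directly from the input data. A sketch of the arithmetic, assuming a grid Peclet number defined as $Pe = v\,\Delta x/(2D_T)$ with $D_T = \alpha_T v$ (this definition is inferred from the quoted value of 5/6):
\[ v = \frac{K\,\Delta h}{n\,L} = \frac{3\,(2.0-0.6)}{0.3 \times 70} = 0.2\ \text{m/d}, \qquad Pe = \frac{\Delta x}{2\,\alpha_T} = \frac{5}{2\times 3} = \frac{5}{6}, \qquad C = \frac{v\,\Delta t}{\Delta x} = 0.04\,\Delta t \]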
The concentration at the target nodes is c(30,40,100) = 0.140 at the symmetric target
and c(40,40,100) = 0.097 at the asymmetric target. The symmetric target concentration
is the same as the one obtained from the CALREL-TRANS model, but for the asym-
metric target node the concentration obtained from CALREL-TRANS is given as
0.067. Comparing figure 8.2.3 with the equivalent pollution contour figure given by
Jang et al. one notices a very good agreement, and therefore the concentration given for
the asymmetric target node could be a "typing" error in this paper. If it is not a typing
error, then this difference must be attributed to the use of source nodes. Jang et al. do
not specify how the pollution source was numerically implemented. When PAGAP
used source nodes instead of constant concentration nodes it did not manage to obtain a
value of 0.067 at the asymmetric node, indicating that this is simply a typing error.
Using PAGAP with a consistent matrix formulation gave higher mean value determi-
nistic concentrations at the target nodes. At the symmetric target node a concentration of
c(30,40,100) = 0.150 was estimated, while at the asymmetric target node c(40,40,100) =
0.101.
The differences in the results between the lumped and consistent matrix formulations,
for practical purposes, might be considered small and not significant. However, these
differences will affect the reliability analysis. Table 8.2.1 presents deterministic results
obtained for the symmetric and asymmetric target node. For each matrix formulation -
lumped or consistent - two time schemes have been used, the implicit scheme (θ = 1),
and the Crank-Nicholson scheme (θ = 0.5). All results are rounded to the 6th decimal.
The Crank-Nicholson scheme gives somewhat better results than the implicit scheme, as
one can see in the table. The implicit scheme results seem to converge towards the
Crank-Nicholson scheme results as the time step becomes smaller.
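For orientation, a θ time difference scheme of the kind referred to here can be written schematically as below; this generic form is an assumption on our part (the precise matrices are those of chapter 3), with M the (lumped or consistent) mass matrix, K the transport matrix and F the load vector, taken constant over the step:
\[ \left(\frac{M}{\Delta t} + \theta K\right) c^{\,t+\Delta t} = \left(\frac{M}{\Delta t} - (1-\theta)\,K\right) c^{\,t} + F \]
Setting θ = 1 gives the implicit scheme and θ = 0.5 the Crank-Nicholson scheme.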
The Crank-Nicholson scheme should be the one employed for analysis purposes. How-
ever, Jang et al. (1990) use the implicit scheme because, when coupled to a lumped
matrix formulation, it can optimize the performance of the finite element code. Unfortu-
nately, there is no mention of the time step employed in their analysis and, as one can
see from table 8.2.1, the deterministic results depend on the time step used. These small
differences will have an effect on the reliability index calculated by PAGAP, and there-
fore one can not expect results from PAGAP and CALREL-TRANS to be exactly
the same.
Table 8.2.1 Deterministic results

Matrix scheme  Node        Time scheme  Δt = 1     Δt = 0.1   Δt = 0.01
Lumped         Sym. Node   θ = 1        0.140088   0.140048   0.140045
Lumped         Sym. Node   θ = 0.5      0.140036   0.140045   0.140045
Lumped         Asym. Node  θ = 1        0.097126   0.096823   0.096793
Lumped         Asym. Node  θ = 0.5      0.096786   0.096790   0.096790
Consistent     Sym. Node   θ = 1        0.150260   0.150429   0.150447
Consistent     Sym. Node   θ = 0.5      0.150444   0.150449   0.150449
Consistent     Asym. Node  θ = 1        0.101433   0.101244   0.101225
Consistent     Asym. Node  θ = 0.5      0.101219   0.101223   0.101223
Figure 8.2.2. Mean Value Steady State Groundwater Flow Solution. [Contour plot; legend omitted.]
Figure 8.2.3. Mean Value Solution Obtained for Pollution Transport after 100 Days. [Contour plot; legend omitted.]
8.2.1. Case 1a
The first problem analyzed was that of the concentration exceeding a value of 0.2 at lo-
cation (30,40) after 100 days. For this purpose the first performance function was used,
defined as
\[ g(X) = 0.2 - c(30,40,100) \]
The Rackwitz and Fiessler algorithm was used for obtaining the design point. The al-
gorithm converged in 8 iterations, giving a first order reliability index of β = 1.135116
and a probability of $P[c(30,40,100) \geq 0.2] = 0.128163$. For the same problem Jang et al.
(1990) obtained, in 10 iterations with a tolerance of 0.001 on β, a first order reliability
index of β = 1.1224, giving a probability of $P_f = 0.1308$.
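In FORM the probability estimate follows directly from the reliability index through the standard normal distribution function, which is consistent with the numbers quoted above:
\[ P_f = \Phi(-\beta): \qquad \Phi(-1.135116) \approx 0.1282, \qquad \Phi(-1.1224) \approx 0.1308 \]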
Jang et al. applied Monte Carlo Simulation to the same problem, obtaining a reliability
index of β = 1.4192, giving a probability of $P_f = 0.0779$. Obviously, the first order reli-
ability approach overestimates the probability in this case, indicating that the limit state
surface must be very curved near the design point. Second order estimates given by
Jang et al. (1990) seem to be very near the Monte Carlo results.
In order to see the effects of using a lumped matrix formulation and the effects of the
correlation scale on the results, additional cases were analyzed. Table 8.2.2 summa-
rizes some of the results. In general, using a consistent matrix formulation will give
much lower reliability indices because the mean value target node concentrations are
higher (i.e. nearer to the 0.2 value). High correlation scales seem to result in lower reli-
ability indices. The highest β is obtained for the uncorrelated case.
Although it was not expected that the PAGAP first order reliability estimate would be
the same as the CALREL-TRANS estimate, one has to consider if the difference be-
tween the two estimates is significant. If we consider the difference between the
PAGAP consistent and lumped matrix formulation reliability indices, we notice that
they are considerably different. The actual difference between them is equal to
Δβ = 0.229184, while the mean value target node concentration difference is equal to
Δc = 0.010401. Although these two differences can not be related directly to each other,
we do expect that increasing Δc will result in an increase in Δβ and vice versa. Let us
assume that these differences are more or less proportional to each other, although it is
very unlikely that such a relationship exists. Under this assumption, the difference be-
tween the PAGAP and CALREL-TRANS reliability index, which is Δβ = 0.0127, can be
explained if the difference in the mean value concentrations is approximately Δc =
0.000576. This shows that even a small difference in the concentration will result in a
noticeable difference in the reliability index. Considering in addition all the other dif-
ferences between the two models outlined in the introduction to this chapter, the reli-
ability index difference observed is considered to be reasonable.
In figure 8.2.4 we see the steady state groundwater solution obtained with design point
parameters. We notice that the hydraulic gradients near the source nodes have increased
noticeably. Assuming that the design point hydraulic conductivities have not changed
very much from their mean values, the hydraulic gradient increase indicates that higher
pore velocities will be observed in this area. Since the pollution sources are modeled
with constant pollution source nodes, an increase in velocity will increase the amount of
pollutant introduced into the aquifer. Increased velocities will also cause increased dis-
persion. Therefore we expect the pollution front to move slightly faster than with mean
value parameters and show a larger dispersion in the vertical direction. A larger disper-
sion in the horizontal direction is also expected since we notice that there are areas
where the gradient has a significant horizontal component. Figure 8.2.5 shows the pol-
lution transport solution with design point parameters. Comparisons with the mean
value solution show that the pollution plume has spread out to a larger area. The design
point pollution profile shows the expected characteristics with respect to vertical and
horizontal dispersion. Increased advection can be noticed by comparing the 0.9 contours
in both plots.
It is worth mentioning that if the pollution source was modeled as a pollution flux
boundary, the results would be very different. The amount of pollution introduced into
the aquifer would be constant, and therefore we would expect the overall horizontal dis-
persion to be reduced in order to maintain high concentrations in the vertical direction.
The design point values of the element longitudinal and transverse dispersivities are
seen in figures 8.2.6 and 8.2.7. The design point values are the parameter values at the
design point, and as such are the parameter values which result in g(X) = 0 and at the
same time give the highest probability content. The design point values, besides being
optimum parameter values for the given problem, are also important since they show the
most likely distribution of the parameters and therefore indicate which areas most affect
the reliability index obtained.
Table 8.2.2 Reliability Results for Case 1a. Random Field Dispersivity

Model          Type            β         Pf        Iter.
CALREL-TRANS   FORM L,CS=20    1.1224    0.1308    10
               SORM            1.4692    0.0716
               MCS             1.4192    0.0779
PAGAP          FORM L,CS=25    1.114558  0.132520  8
               FORM L,CS=20    1.135116  0.128163  8
               FORM L,CS=10    1.197572  0.115542  8
               FORM L,NC       1.291440  0.098276  9
               FORM C,CS=20    0.905932  0.182487  6
               FORM C,CS=10    0.948180  0.171519  6

L = Lumped matrix formulation
C = Consistent matrix formulation
NC = No correlation between random variables
CS = Correlation Scale
Figure 8.2.4. Design Point Groundwater Solution giving c(30,40,100) = 0.2. [Contour plot; legend omitted.]
Figure 8.2.5. Design Point Pollution Solution giving c(30,40,100) = 0.2. [Contour plot; legend omitted.]
Figure 8.2.6. Design Point Longitudinal Dispersivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. C. No correlation. [Contour plots; legends omitted.]
The magnitude of change between these values and the mean parameter values directly
affects the value of the reliability index and therefore the probability. We must
also keep in mind that the design point values are based on the statistical information
defined for the problem and therefore these values incorporate the spatial correlation
information initially used.
Figure 8.2.6 shows the distribution of the longitudinal dispersivities at the design point.
Three cases are presented where different correlation information has been assigned to
the element longitudinal dispersivities. All three plots show an increase between the tar-
get node and the source nodes. The changes are relatively small and therefore the longi-
tudinal dispersivities do not affect the reliability index very much. Something that
should be noted is how smoothly the contours of the longitudinal dispersivities change
from high values to low values. This effect is due to the correlation information given to
the problem. The design point values are forced to reflect the correlation information
assigned to the random variables. Therefore plot 8.2.6A shows a correlation scale of
20m in both horizontal and vertical directions, and plot 8.2.6B shows a correlation scale
of 10m, in both cases using an exponential correlation function. When no correlation is
assigned to the longitudinal dispersivities we obtain the plot seen in 8.2.6C. It is inter-
esting to notice that in the uncorrelated case, three areas are of importance. First, the
area directly below the target node, where the highest longitudinal dispersivity values
are observed. Second, above the target node where the lowest values - lower than the
mean values - are observed, and third, the pollution source area. In the correlated cases
the importance of these areas has been masked under the correlation information. Of
course, one could point out that perhaps the importance of these areas is unrealistic,
since correlation information has been ignored.
The design point transverse dispersivities are seen in figure 8.2.7 for three different
cases of correlation input, and all three show a reduction in values compared to the
mean values. The reduction of the transverse dispersivities was very much expected
since this would result in more solute being available for advective transport and longi-
tudinal dispersion towards the target node. The reduction of the transverse dispersivities
is very small and therefore we do not expect these parameters to influence the reliability
index very much. In figure 8.2.7C we see the uncorrelated transverse dispersivities.
One notices that the design point values are practically the same as the mean values.
However the distribution of the element transverse dispersivities suggests that these
must be reduced near the target node. We also see that a left and right side pollution
"barrier" has been established, which reduces transverse dispersion pollution transport
outside the area between the pollution source and the target node, thus helping to keep
pollution concentrations high in that area. These characteristics can not be seen in the
correlated cases. One has to assume, however, that the objectives are the same as the
ones for the uncorrelated case. This means that correlation information introduces a
complexity which does not allow us to see the underlying objectives which
have been achieved. We do know that the design point values are the most likely values
and are optimum in achieving the required objectives.
Finally, the design point hydraulic conductivity values are given in figure 8.2.8 for the
same correlation cases seen previously. We observe once more the correlation effect on
the distribution of the hydraulic conductivity, which is similar to the effects we saw for
the longitudinal dispersivities. The uncorrelated case shows that the highest values are
observed near the source nodes, which are only slightly above the mean values. At the
target node area the values are more or less equal to the mean values, while in the rest of
the aquifer all values have been reduced. The correlated results are very different in this
respect, showing an increase in the largest part of the aquifer. Correlated results seem to
always give much higher parameter values. This effect is reasonable, since a change in
one element variable must alter neighboring variables as well in order to keep the corre-
lation between them as defined.
Figure 8.2.7. Design Point Transverse Dispersivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. C. No correlation. [Contour plots; legends omitted.]
Figure 8.2.8. Design Point Hydraulic Conductivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. C. No correlation. [Contour plots; legends omitted.]
Figure 8.2.9. Design point Lower Boundary Constant Head values and their Sensitivities. [Line plot of head values and sensitivities versus X coordinate (m).]
Figure 8.2.10. Design point Upper Boundary Constant Head values and their Sensitivities. [Line plot of head values and sensitivities versus X coordinate (m).]
For correlated variables, the maximum change in hydraulic conductivity values appears
in the area between the target node and the source nodes; expressed in terms of stan-
dard deviation this change is equal to $0.15\sigma_{HC}$. For the dispersivities we have
$0.13\sigma_L$ and $0.033\sigma_T$.
In figures 8.2.6, 8.2.7 and 8.2.8, we observe that the design point values are for most
practical purposes the same as the mean values. The differences are so small that we do
not expect them to influence the analysis very much. However, something that can not
be observed in these figures is the effect of the constant head random variables. The de-
sign point constant head values are considerably different from the mean values, sug-
gesting that these are the parameters of major influence in this problem.
In figures 8.2.9 and 8.2.10 we see the distribution of the constant head values at the de-
sign point. The mean values for these parameters are 2.0 m and 0.6 m for the lower and up-
per boundary respectively. The standard deviation for all constant heads is set to 0.14 m.
From these figures we see that the design point values show a maximum increase of
more than $0.5\sigma_{ch}$, which is more than 3 times higher than the changes observed for the
previously discussed parameters.
The importance of the constant head values was expected, since even small changes in
their values influence the flow pattern throughout the aquifer.
Although the importance of the constant head values is reasonable, we can also establish
it numerically by using the sensitivity information provided by the analysis. The γ-
sensitivities of the constant head parameters are also presented in figures 8.2.9 and
8.2.10 and show very high values for the lower boundary constant head nodes, and es-
pecially the nodes which are also the pollution source nodes. Their relative importance
will become clearer once we have seen the sensitivity results for the other parame-
ters.
PAGAP gives in all 5 types of sensitivity information, and here we present 3 of them,
namely γ-sensitivities, mean value sensitivities and standard deviation sensitivities. The
α-sensitivities are not presented because they have correlation distortion. In order to re-
move the correlation distortions in the α-sensitivities, they are transformed from stan-
dard normal space to the original space. The transformed sensitivities are not presented
either, because they have the same form as the γ-sensitivities. Actually, the γ-
sensitivities are the normalized form of the transformed α-sensitivities. In order to pre-
sent this information in the best possible way, it was decided to use 7 equidistant con-
tours spanning the whole range of values for each plot. In this way we can observe the
distribution patterns of the sensitivity information over the aquifer, and compare them to
the patterns observed for other types of sensitivities. However, this makes it difficult to
compare contour plots showing the same type of sensitivities.
In figure 8.2.11 we see the γ-sensitivities of the longitudinal dispersivities for the three
cases of correlation information. Since γ-sensitivities have been normalized with respect
to the standard deviation and correlation information, all three plots look similar and
actually have the same characteristics as the design point longitudinal dispersivities for
the uncorrelated case seen in plot 8.2.6C. In general, we observe that the largest part of
the aquifer shows sensitivity values of zero indicating no effect on the reliability index.
In the area between the source nodes and the target node there are positive values indi-
cating that an increase of the longitudinal dispersivities in this area will result in an in-
crease of the reliability index. In the area above the target node negative values are
observed indicating that an increase of the longitudinal dispersivities will cause a re-
duction in the reliability index.
The sensitivity values also indicate that the reliability index is highly dependent on the
longitudinal dispersivities near the source nodes. The highest γ-sensitivities are ob-
served for the uncorrelated case, although the correlated cases show only slightly
smaller values. In all cases the sensitivities are approximately 20 times smaller than the
highest constant head node sensitivities.
Figure 8.2.11. Sensitivities of Longitudinal Dispersivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. C. No correlation. [Contour plots; legends omitted.]
Figure 8.2.12. Sensitivities of Transverse Dispersivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. C. No correlation. [Contour plots; legends omitted.]
Figure 8.2.13. Sensitivities of Hydraulic Conductivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. C. No correlation. [Contour plots; legends omitted.]
The transverse dispersivities seen in figure 8.2.12 are much more interesting. We once
again observe the similarity between them and the design point transverse disper-
sivities for the uncorrelated case seen in plot 8.2.7C. We observe that most values are
negative, indicating that an increase of any kind will result in a decrease of the reliabil-
ity index. Positive values are only observed near the source nodes. It is interesting to
note the horse-shoe shape of the sensitivities, i.e. the barrier sides mentioned previously.
The γ-sensitivity values are very low, more than 5 times smaller than the longitudinal
dispersivity sensitivities, which shows the little influence these parameters have for this
problem.
The hydraulic conductivity γ-sensitivities are seen in figure 8.2.13 for the three correla-
tion cases. The distribution of the sensitivities is approximately the same for all three
cases and is also the same as the distribution of the design point hydraulic conductivity
values seen in figure 8.2.8C. The sensitivity values are higher than the longitudinal dis-
persivity values, but again much smaller - more than 15 times - than the highest constant
head node sensitivities. It is interesting to notice that near-zero sensitivities are observed
in the area outside the barrier zone seen in the transverse dispersivity results.
The γ-sensitivities measure the influence of the design point parameters on the re-
liability index. In many cases we are also interested in seeing what the influence of
the statistical input parameters on the reliability index is. We then need to estimate the
mean value sensitivities and standard deviation sensitivities.
Mean value sensitivities usually resemble γ-sensitivities as far as the distribution of the
sensitivities is concerned. A significant difference, however, is that the mean values have
an opposite effect on the reliability index. This is better understood if one considers that
the γ-sensitivities are based on changes at the design point, while mean value sensitivi-
ties are based on changes at the point of origin. If a variable has a positive effect on the
reliability index, then increasing its value at the design point will increase the reliability
index, while increasing its mean value will move the point of origin nearer to the limit
state surface and thus reduce the reliability index.
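This sign reversal can be seen directly in the standard first order parameter sensitivity expressions, quoted here for the simplified case of uncorrelated normal variables, with α the unit vector pointing from the origin to the design point (the correlated case handled by PAGAP is more involved):
\[ \frac{\partial \beta}{\partial \mu_i} = -\frac{\alpha_i}{\sigma_i}, \qquad \frac{\partial \beta}{\partial \sigma_i} = -\frac{\beta\,\alpha_i^2}{\sigma_i} \]
So in this simplified case a variable with a positive design point sensitivity has a negative mean value sensitivity, and for β > 0 the standard deviation sensitivity is never positive.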
The standard deviation sensitivities are very interesting because changes in the standard
deviations have a shrinking/expanding effect in standard normal space, and thus change
the shape and position of the limit state surface. The idea that the standard deviations
control in some respect the shape of the limit state surface is very interesting. It suggests
that one could find variables which have a direction approximately normal to the unit
vector α at the design point, assign to these variables near-zero standard deviations, and
thus reduce the curvature of the limit state surface, which in turn will improve first order
reliability estimates. If we take this idea one step further, it seems reasonable to assume
that such variables will show near-zero sensitivities, due to their directions, and proba-
bly should be considered to be deterministic variables and not random variables. How-
ever, if such variables have such a great influence on the shape of the limit state surface,
they should not be removed from the reliability analysis but simply assigned very small
standard deviations.
Figures 8.2.14, 8.2.15 and 8.2.16 show the mean value sensitivities obtained for the
longitudinal dispersivities, transverse dispersivities and hydraulic conductivities respec-
tively. Each figure shows sensitivity values for a correlation scale of 20m (A) and a cor-
relation scale of 10m (B). Comparing the distribution patterns of the mean value
sensitivities with the respective γ-sensitivities reveals that they are practically the same.
This basically means that the mean value sensitivities do not have anything new to add
to the information obtained from the γ-sensitivities. However, it also means that, once
the mean value sensitivities have been normalized by multiplying them with the stan-
dard deviation, we can make direct comparisons with the γ-sensitivities. We can thus
determine if the reliability index is more influenced by the mean values assigned to each
parameter or by the position of the design point, which is target value dependent.
Figure 8.2.14 gives the mean value sensitivities for the longitudinal dispersivities and
since the standard deviation is equal to 1m, no normalization is required. Comparing
the mean value sensitivity values with the respective γ-sensitivities shows that these are
almost the same. The mean value sensitivities are slightly lower than the γ-sensitivities
for the 20m correlation case, while they are slightly higher in the 10m correlation case.
The mean value sensitivities for the transverse dispersivities are given in Figure 8.2.15.
Normalization with the standard deviation, which is 0.3 m, produces very small sensitivity
values; however, the respective γ-sensitivities are very small as well. Comparisons show
that the normalized mean value sensitivities are 2 to 3 times smaller than the
γ-sensitivities.
The mean value hydraulic conductivity sensitivities are given in figure 8.2.16. The stan-
dard deviation is 0.3 m/d, and after normalization we observe that the mean value sensi-
tivities are smaller than the γ-sensitivities for the 20m case, while the opposite is
observed for the 10m case.
In this specific case study, the most important information obtained from the mean value
sensitivities is that concerning the transverse dispersivities. They show that the mean
values used for these parameters are of little importance for the analysis, and therefore it is
not necessary to have good estimates for them. The longitudinal dispersivity and hy-
draulic conductivity mean value sensitivities are slightly higher than the γ-sensitivities
for a correlation scale of 10m, while the opposite is observed for a 20m correlation scale.
This could of course be a correlation scale effect, but with only two correlation scale
results available, it is impossible to be sure. One should keep in mind that the elements
in the FEM grid used are 5×5 m, and empirical rules suggest that the elements should be
smaller than 1/3 of the correlation scale to capture the correlation distribution correctly,
but never larger than 1/2. So the element size in this case is on the limit of being too
large for a 10m correlation scale. In any case, the differences between γ-sensitivities and
mean value sensitivities for these parameters are so small that probably one should not
put much significance on the differences observed.
Figure 8.2.14. Mean Value Sensitivities of Longitudinal Dispersivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. [Contour plots; legends omitted.]
Figure 8.2.15. Mean Value Sensitivities of Transverse Dispersivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. [Contour plots; legends omitted.]
Figure 8.2.16. Mean Value Sensitivities of Hydraulic Conductivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. [Contour plots; legends omitted.]
Figure 8.2.17. Lower boundary constant head sensitivities. [Line plot of mean value and standard deviation sensitivities versus X distance (m).]
Figure 8.2.18. Upper boundary constant head sensitivities. [Line plot of mean value and standard deviation sensitivities versus X distance (m).]
Figure 8.2.19. Standard Deviation Sensitivities of Longitudinal Dispersivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. [Contour plots; legends omitted.]
Figure 8.2.20. Standard Deviation Sensitivities of Transverse Dispersivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. [Contour plots; legends omitted.]
Figure 8.2.21. Standard Deviation Sensitivities of Hydraulic Conductivities for case 1a. A. Correlation scale 20m. B. Correlation scale 10m. [Contour plots; legends omitted.]
In figure 8.2.17 we see the lower boundary constant head mean value and standard de-
viation sensitivities. The values shown in the figure have been normalized with the
standard deviation, which is 0.14 m for the constant head parameters. The mean value
sensitivities are directly comparable with the γ-sensitivity values seen in figure 8.2.9,
with the difference that the mean value sensitivities have inverse values. We notice that
the lowest values are observed at the pollution source nodes, which are the most sensi-
tive. All other constant head nodes have positive values, implying that increments in
their mean values will increase the reliability index.
In the same figure we can see the normalized standard deviation sensitivities for the
lower boundary constant head parameters. All constant head nodes have negative sensi-
tivities, with the most important sensitivities observed at the pollution source nodes.
The standard deviation sensitivities are approximately half the value of the respective
mean value sensitivities, and therefore changes in the standard deviation have a smaller
effect on the reliability index. However, it is much harder to obtain good standard de-
viation estimates than mean value estimates.
Figure 8.2.17 makes it clear which variables control the reliability index.
The upper boundary constant head mean value and standard deviation sensitivities are
seen in figure 8.2.18. Compared to the lower boundary sensitivities, the upper boundary
sensitivities are much smaller. They are however comparable to the sensitivities ob-
tained for the longitudinal dispersivity and hydraulic conductivity.
Finally, figures 8.2.19, 8.2.20 and 8.2.21 show the standard deviation sensitivities for
the longitudinal dispersivities, transverse dispersivities and hydraulic conductivities re-
spectively. For each case two contour plots are presented, one with a correlation scale
of 20m and one with a correlation scale of 10m. These sensitivities have not been nor-
malized. Upon normalization, the transverse dispersivity sensitivities become very small
($\sim 10^{-5}$) and have no influence on the reliability index.
The standard deviation sensitivities for the longitudinal dispersivities and hydraulic
conductivities are approximately 5 times smaller than the mean value sensitivities for
these parameters. The hydraulic conductivity sensitivities are higher than the longitudinal
dispersivity sensitivities. The longitudinal dispersivity shows areas with positive standard
deviation sensitivities - although the sensitivity values are very small - indicating that
increasing the standard deviation in the area above the target node will increase the reli-
ability index.
Random Variable Dispersivity
In case 1a we assumed that the dispersivity of the aquifer is a random field which, with the use of the Vanmarcke variance reduction approach, produced a set of random variables, one for each element. Each of these is assigned second order statistical information, which is then used as input to PAGAP.
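To make the correlation input concrete, the following is a minimal sketch of how an element-to-element correlation matrix can be assembled from centroid distances, assuming a separable exponential correlation function with separate correlation scales in the X and Y directions; the function form, the centroid coordinates and the routine name are illustrative and do not reproduce PAGAP's exact algorithm.

import numpy as np

def correlation_matrix(centroids, scale_x, scale_y):
    # Separable exponential correlation: rho = exp(-dx/sx - dy/sy).
    n = len(centroids)
    rho = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            dx = abs(centroids[i][0] - centroids[j][0])
            dy = abs(centroids[i][1] - centroids[j][1])
            rho[i, j] = np.exp(-dx / scale_x - dy / scale_y)
    return rho

# Example: three 5x5m elements in a row, 20m correlation scale in X and Y.
print(correlation_matrix([(2.5, 2.5), (7.5, 2.5), (12.5, 2.5)], 20.0, 20.0))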
Table 8.2.3. Reliability Results for Case 1a, Random Variable Dispersivity

Model          Type            β         Pf        Iter.
CALREL-TRANS   FORM L,CS=20    1.1038    0.1348    10
               SORM            1.360     0.0870
               MCS             1.364     0.0864
PAGAP          FORM L,CS=20    1.109573  0.133591  8
               FORM L,CS=10    1.150716  0.124925  8
               FORM C,CS=20    0.887928  0.187290  6
               FORM C,CS=10    0.915705  0.179911  6

L = Lumped matrix formulation
C = Consistent matrix formulation
CS = Correlation Scale
Although this is the most detailed approach, since it assigns one random variable to each element property, such detail is not always necessary or required. One can group a number of elements together into one random variable and thus reduce the number of random variables. Of course the variance reduction algorithm for element groups must be able to accommodate complex group geometries. This grouping ability has not been implemented in PAGAP.

What has been implemented is the ability to use an ultimate group, i.e. one random variable can represent the entire random field. Obviously, if one random variable is used to describe - for example - the dispersivity over the whole aquifer, then the dispersivity is no longer a random field but simply a random variable.
For this sub-case, results were obtained by assuming that the longitudinal and transverse dispersivities are random variables defined over the entire aquifer. This assumption reduces the number of random variables from 530 to 196.
The motivation for using random variables for the dispersivities instead of random fields is related to the problems involved in obtaining field estimates for these parameters. Field estimates involve spatial and temporal averaging, and therefore some theoretical and practical difficulties arise when such estimates are used for defining a random field (see Dagan 1984). Comparing random field results with random variable results is interesting because we see the effects of averaging over the domain. Table 8.2.3 summarizes the results obtained for random variable longitudinal and transverse dispersivities. The correlation scales noted in the table are for the hydraulic conductivity random field.
Comparing the results of table 8.2.3 with those of table 8.2.2, one notices a small decrease in the reliability index values for the random variable dispersivity case. It can be argued that the small change is not significant for the reliability results, and therefore the spatial distribution of the dispersivities has only a minor effect on the estimation of the probability.
This is indeed the argument of Jang et al. (1990), who conclude that this is an advection dominated problem, so it is not surprising that the dispersivities do not play a significant role in the analysis. The only problem with this argument is that Jang et al. assume that the influencing parameter is the hydraulic conductivity and its spatial distribution because it shows "higher" sensitivities. However, from the sensitivity and other information presented so far, one can also argue that this is not a good case study for investigating the effects of any type of spatial distribution, be it of dispersivities or hydraulic conductivities, since the constant head random variables have a dominant influence on the analysis and will overshadow all other random variables. In order to study the effects of each type of parameter on the results obtained, additional cases were analyzed. In the first such case, we assume that the dispersivities and hydraulic conductivities are deterministic over the domain with values equal to their mean values, and the only random variables are the constant head nodes. After 8 iterations PAGAP converged to a reliability index of 1.279030, giving a probability of 0.100443, which is only slightly above the probability obtained for the uncorrelated case seen in table 8.2.2.
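As a quick check, the probability follows from the reliability index through the standard normal distribution, Pf = Φ(-β); a one-line sketch:

from scipy.stats import norm

print(norm.cdf(-1.279030))   # approximately 0.100443, as reported above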
Another interesting case study is obtained by assuming that the dispersivities and hydraulic conductivities are global variables, while the constant head nodes are deterministic. In 5 iterations PAGAP gave a reliability index of 1.884849 with a probability of 0.029725. Obviously the importance of the dispersivities and hydraulic conductivities is smaller than that of the constant head parameters. However, this case also gave γ-sensitivities for each global parameter, which indicate that the hydraulic conductivity is indeed the most important of the three element parameters. The γ-sensitivities obtained are 0.357892, -0.137299 and 0.923614 for the longitudinal dispersivity, transverse dispersivity and hydraulic conductivity respectively. In order to verify these sensitivities, 3 additional simple cases were analyzed. Each case had only one global random variable, while all other parameters were considered deterministic. For the global longitudinal dispersivity a reliability index of 5.461178 was obtained, giving a probability of 0.0 (i.e. smaller than 10^-6). The global transverse dispersivity case gave a reliability index of 8.8086615 with a probability of 0.0. The global hydraulic conductivity case gave a reliability index of 2.042096 with a probability of 0.020571. These reliability indices confirm previous results and show that the hydraulic conductivity is indeed the most important element parameter for this case study. (In all of these cases a lumped matrix formulation was used with an implicit scheme for the time derivative.)
8.2.2. Case 1b
The second case analyzed was that of the concentration exceeding a value of 0.2 at location (40,40) after 100 days. For this purpose the first performance function was used, defined as

g(X) = 0.2 - c(40,40,100)
The standard Rackwitz and Fiessler algorithm did not manage to converge in this case. The modified Rackwitz and Fiessler algorithm was then employed, which converged in 9 iterations to a value smaller than 5·10^-7 for the performance function and a value smaller than 5·10^-7 for the inner product α^T·y, where y is the random variable vector in standard normal space and α is the unit normal vector at the design point. The reliability index was found equal to β = 2.103848 and the probability of failure Pf = 0.017696.
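For orientation, the following is a minimal sketch of a Rackwitz and Fiessler type iteration in standard normal space; the performance function g, its gradient grad_g and the tolerances are placeholders, and the modifications PAGAP applies to stabilize convergence are not shown.

import numpy as np
from scipy.stats import norm

def rackwitz_fiessler(g, grad_g, n, tol=5e-7, max_iter=100):
    y = np.zeros(n)                            # start at the mean point
    for it in range(1, max_iter + 1):
        gval = g(y)
        grad = grad_g(y)
        alpha = -grad / np.linalg.norm(grad)   # unit normal at y
        # converged when g ~ 0 and y lies along the normal direction
        if abs(gval) < tol and np.linalg.norm(y - (alpha @ y) * alpha) < tol:
            beta = alpha @ y
            return beta, norm.cdf(-beta), it
        # project onto the linearized limit state surface
        y = (alpha @ y + gval / np.linalg.norm(grad)) * alpha
    raise RuntimeError("no convergence")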
Figure 8.2.22 shows the groundwater steady state solution at the design point. The flow pattern observed is significantly different from the original mean value flow pattern. The flow pattern shows that many flow lines starting at the pollution source area are curving towards the asymmetric target node area. This will obviously cause pollution concentrations to increase in the direction of the asymmetric target node.

Figure 8.2.23 shows the design point pollution transport solution. The solute is not symmetrically distributed - as expected - but not as asymmetric as one might have expected it to be. The design point solution is obviously not dominated by the position of the target node. Comparing figures 8.2.22 and 8.2.23, one notices that the design point flow pattern simply moves the pollution from the right side pollution source node towards the target node. It does the same to the middle pollution source node but to a much lesser degree, while the left side pollution source does not seem to contribute anything to the target node. The overall effect is that the solute has a distribution which is only slightly asymmetric, with an inclination towards the target node.
Figures 8.2.24 and 8.2.25 show the design point element dispersivity distributions for a 20m correlation scale and a 10m correlation scale. The longitudinal dispersivity distribution is as expected. It has the same characteristics as that of case 1a, with the difference that in this case the distribution is curved towards the asymmetric target node. The longitudinal dispersivity values are much higher than those observed for case 1a, which is consistent with a higher reliability index. The correlation effect can be seen clearly, causing the dispersivity values to change over a large area of the aquifer. The transverse dispersivities do not resemble those observed in case 1a, although many of the same characteristics can be observed.
Table 8.2.4. Reliability Results for Case 1b, Random Field Dispersivity

Model          Type            β         Pf        Iter.
CALREL-TRANS   FORM L,CS=20    2.0486    0.02025   NA
PAGAP          FORM L,CS=20    2.103848  0.017696  9
               FORM L,CS=10    2.232286  0.012798  11
               FORM C,CS=20    1.908570  0.028158  9
               FORM C,CS=10    2.018534  0.021768  9

L = Lumped matrix formulation
C = Consistent matrix formulation
CS = Correlation Scale
NA = Not Available
Figure 8.2.22. Design Point Groundwater Solution giving c(40,40,100) = 0.2
Figure 8.2.23. Design Point Pollution Solution giving c(40,40,100) = 0.2
Figure 8.2.24. Design Point Longitudinal Dispersivity values for case 1b. A. Correlation scale 20m. B. Correlation scale 10m.
Figure 8.2.25. Design Point Transverse Dispersivity values for case 1b. A. Correlation scale 20m. B. Correlation scale 10m.
Figure 8.2.26. Design Point Hydraulic Conductivity values for case 1b. A. Correlation scale 20m. B. Correlation scale 10m.
Figure 8.2.27. Design point Lower Boundary Constant Head values and their Sensitivities for
case 1b
Figure 8.2.28. Design point Upper Boundary Constant Head values and their Sensitivities for
case 1b
Figure 8.2.29. Sensitivities of Longitudinal Dispersivities for case 1b. A. Correlation scale 20m. B. Correlation scale 10m.
Figure 8.2.30. Sensitivities of Transverse Dispersivities for case 1b. A. Correlation scale 20m. B. Correlation scale 10m.
Figure 8.2.31. Sensitivities of Hydraulic Conductivities for case 1b. A. Correlation scale 20m. B. Correlation scale 10m.
All values have been reduced, especially in the area below and to the right of the target node, making more solute available for advective transport towards the target node. This characteristic reminds us of the barrier features observed in the case 1a transverse dispersivity distribution. In this case however, only the right side barrier has developed. The pollution barrier features are quite clear in this case, despite correlation effects.

The design point element hydraulic conductivity distribution can be seen in figure 8.2.26 for the two correlation scales. These resemble the distributions of case 1a, with the obvious difference of being curved towards the asymmetric target node. The correlation effects can be clearly seen, causing the design point distribution to affect the largest part of the aquifer.
Figures 8.2.27 and 8.2.28 show the design point constant head values and their sensitivities at the lower and upper boundary. In case 1a we saw that all pollution source nodes were equally influencing the problem. In this case the rightmost source node - the source nearest to the target node - has a dominating influence compared to the two other nodes. We also note that the constant head nodes to the left of the source nodes do not show any changes from their mean values, while in the symmetric case all constant head nodes were influenced. At the upper boundary all constant head nodes are influenced, especially on the rightmost side. The γ-sensitivity values show the same distribution as the design point values. One can actually choose a sensitivity scale in the plots which would make the sensitivity curve fall exactly on top of the design point curve. It should be mentioned that such behavior is not realistic, since the constant heads should exhibit a high correlation between them, in which case the two curves would be very dissimilar. The γ-sensitivity values are very high at the lower boundary, especially for the rightmost pollution source node. All constant head nodes at the left side of the pollution sources show zero sensitivities, suggesting that they have practically no influence on the reliability index.
Figure 8.2.32. Lower boundary constant head sensitivities
Figure 8.2.33. Upper boundary constant head sensitivities
Figure 8.2.34. Mean Value Sensitivities of Longitudinal Dispersivities for case 1b. A. Correlation scale 20m. B. Correlation scale 10m.
Figure 8.2.35. Mean Value Sensitivities of Transverse Dispersivities for case 1b. A. Correlation scale 20m. B. Correlation scale 10m.
Figure 8.2.36. Mean Value Sensitivities of Hydraulic Conductivities for case 1b. A. Correlation scale 20m. B. Correlation scale 10m.
Figure 8.2.37. Standard Deviation Sensitivities of Longitudinal Dispersivities for case 1b. A. Correlation scale 20m. B. Correlation scale 10m.
Figure 8.2.38. Standard Deviation Sensitivities of Transverse Dispersivities for case 1b. A. Correlation scale 20m. B. Correlation scale 10m.
Figure 8.2.39. Standard Deviation Sensitivities of Hydraulic Conductivities for case 1b. A. Correlation scale 20m. B. Correlation scale 10m.
The upper boundary sensitivities show that these constant head nodes play a more important role in this case than in case 1a. The sensitivity values at the 50m and 55m distance nodes are comparable to many of the sensitivities for nodes at the lower boundary.
The γ-sensitivities shown in figures 8.2.29, 8.2.30 and 8.2.31 do not reveal any surprising results. For both dispersivities we see that the sensitivities are highest in the target node area, but also in the source node area. The sensitivities at the source node area show a much higher sensitivity to the rightmost node than to the other two source nodes. Since the sensitivities resemble the distribution of these parameters in the case where the random variables were uncorrelated, one can clearly see the impact of correlation. The longitudinal dispersivity sensitivities have slightly higher values than those obtained for case 1a. It is interesting to notice the negative sensitivities observed at the right side of the pollution source nodes.

The transverse dispersivity γ-sensitivities have much higher values than those for case 1a. We notice the right side barrier zone and the remains of the left side barrier zone. It is however interesting to notice the extent of the positively sensitive area. It is obviously this area which causes the curvature of the contours observed in figure 8.2.25.

The γ-sensitivities for the hydraulic conductivities are given in figure 8.2.31. For both correlation scales the sensitivity values are higher than those for case 1a. The highest sensitivities are observed near the rightmost pollution source node. The overall sensitivity distribution resembles the distribution observed for case 1a, once again with the obvious difference that it is curved towards the asymmetric target node.
As in case 1a, case 1b is dominated by the influence of the constant head nodes. Figures 8.2.32 and 8.2.33 show the normalized mean value and standard deviation sensitivities for the constant head nodes at the lower and upper boundary respectively. The mean value sensitivities are the exact inverse of the γ-sensitivities seen in figures 8.2.27 and 8.2.28, and therefore no additional information is obtained. However, the standard deviation sensitivities are very interesting for the lower boundary. The standard deviation sensitivity for the rightmost pollution source node is higher than the mean value sensitivity. In other words, increasing the constant head standard deviation assigned to the rightmost pollution source node will reduce the reliability index more than a similar increment to the constant head mean value assigned to this node.
In case 1a the standard deviation sensitivities were approximately half of the mean value sensitivities for the constant head pollution source nodes. This is a significant difference between the two cases. In case 1a one needs good estimates for the constant head mean values at the pollution source nodes in order to obtain good reliability results; the constant head standard deviations for these nodes are also important, but to a lesser degree. In case 1b however, both the constant head mean value and standard deviation for the rightmost pollution source node are important. Considering that mean value data are more reliable than standard deviation data, one has to conclude that the reliability index in case 1b would be less reliable than the reliability index obtained in case 1a.
Figures 8.2.34, 8.2.35 and 8.2.36 give the mean value sensitivities for the longitudinal dispersivities, transverse dispersivities and hydraulic conductivities respectively. Each figure shows the contour sensitivity plots for a correlation scale of 20m and of 10m respectively. Comparing these figures with the respective γ-sensitivity contour plots, one notices a remarkable resemblance, although - as noted previously - the mean value sensitivities have the opposite distribution of that of the γ-sensitivities. Upon normalization with the standard deviation for each parameter, we observe that for the 20m correlation scale analysis the γ-sensitivities have slightly higher values than the mean value sensitivities, while for the 10m analysis the mean value sensitivities have slightly higher values than the γ-sensitivities.

The differences between the γ-sensitivities and the mean value sensitivities are very small, so one could make the general statement that the reliability index is approximately equally dependent on the mean values and the problem definition. However, if we put some weight on the differences observed, then the correlation input seems to reduce the influence of the mean values on the reliability index. It is quite reasonable that the correlation input should have such an effect. The results presented for this case study show clearly the effect of the correlation scale and indicate that increasing the correlation scale will reduce the reliability index. Therefore a reduction of mean value sensitivities is expected when increasing the correlation scale.
Figures 8.2.37, 8.2.38 and 8.2.39 are the last contour plots presented for this case study and show the standard deviation sensitivities for the longitudinal dispersivities, transverse dispersivities and hydraulic conductivities respectively. As previously done, two plots are presented for each parameter, one for the 20m correlation scale and one for the 10m correlation scale. In all three figures the normalized sensitivities are 3 to 10 times smaller than the respective mean value sensitivities, and therefore the influence of the standard deviations on the reliability index is very small. The sensitivity distribution observed in these figures is similar to the one observed for the mean value sensitivities and γ-sensitivities.
8.3. Second Case
In the second case study an aquifer with dimensions 60x100m is used to simulate a
transient pollution source problem. The problem geometry and the finite element grid
used can be seen in figure 8.3.1.
The finite element mesh is common for the steady state groundwater flow analysis and
the pollution transport analysis. The number of elements is 240 while the number of
nodes is 273.
The transient pollution sources are active for a period of 20d, the porosity has a deterministic value of 0.3, the decay constant is 0.0, and the constant head nodes are assumed to be deterministic with mean values 4.6m at the lower boundary and 0.6m at the upper boundary.

The element hydraulic conductivities are assumed to have a mean value of 3 m/d and a standard deviation of 0.3 m/d, with a distribution which is assumed to be lognormal. The correlation scale used is 20m with an exponential correlation function.
An aquifer with the same characteristics has also been analyzed by Jang et al. (1990), and comparisons to their results will be made. Jang et al. (1990) assumed in addition that the dispersivity values were global random variables with a normal distribution. They give a mean value for the longitudinal dispersivity of 3m and a standard deviation of 0.3m, while for the transverse dispersivity a value of 1.5m for the mean and 0.15m for the standard deviation. These same conditions have been implemented in PAGAP.
With the above information we are going to analyze the following two problems.
1. Find the probability of the maximum pollution at the target node exceeding a value of
0.35 over a period of 150d.
2. Find the probability of the residence time exceeding 75 days for a concentration
higher than 0.2, over a period of 150d.
We have mentioned in chapter 3 that the finite element method is not well suited to problems where the pollution source is transient. The numerical errors introduced by the finite element method will not only affect the concentration estimates but also the estimation of t_max (the time at which the maximum concentration is observed) and the residence time, which in turn will have a considerable impact on the reliability analysis results.
The Peclet number is estimated with the use of the transverse dispersivity and is equal to Pe = 5/3, which is double that of case 1, and unfortunately is over the stability value of 1.0.
The pore velocity, with mean value hydraulic conductivities, is equal to 0.4 m/d, giving a Courant number equal to C = 0.08·Δt. The Courant number is small enough to ensure that time results will only have moderate errors.
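Both stability measures can be checked with a short sketch; the grid Peclet convention Pe = Δx/(2·αT) is an assumption chosen here because it reproduces the quoted value of 5/3 for the 5m elements.

def grid_peclet(dx, alpha_t):
    # One common convention for the grid Peclet number.
    return dx / (2.0 * alpha_t)

def courant(v, dt, dx):
    # Courant number C = v * dt / dx.
    return v * dt / dx

print(grid_peclet(5.0, 1.5))    # 1.666..., i.e. 5/3, above the limit of 1.0
print(courant(0.4, 1.0, 5.0))   # 0.08 per day of time step, i.e. C = 0.08*dt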
[Figure legend: H = 0.6m (upper boundary); H = 4.6m (lower boundary); elements 5x5m; source nodes; target nodes; constant head boundary; no flux boundary]
Figure 8.3.1. Finite Element Mesh and Boundary Conditions for case 2.
8.3.1. Case 2a
For the first problem we must find the probability of the maximum pollution concentration at the target node exceeding a target value of 0.35 over a period of 150d, and for this purpose we shall use the second performance function defined in chapter 5, given as

g(X) = C_targ - c_max(x,y,T)    (8.3.1)

or

g(X) = C_targ - c(x,y,t_max)

where c_max is the maximum pollution concentration at the point with coordinates (x,y) over a period of time T, and t_max in the alternative equation is the time at which the maximum concentration is reached. The second form is simpler, but t_max is unknown and part of the solution to the problem, and therefore one cannot use this equation. Equation 8.3.1 for this specific problem is thus written as

g(X) = 0.35 - c_max(30,50,150)    (8.3.2)
Table 8.3.1. Results of Reliability Analysis for case 2a.

Model          Analysis Type   tmax-tdesign   cmax     β          Pf
CALREL-TRANS   FORM L,CS=20    84.49-79.40    0.3052   3.1576     0.000795
               SORM            84.49-79.40    0.3052   3.1848     0.000724
               Monte Carlo     84.49-79.40    0.3052   3.2038     0.000678
PAGAP          FORM L,CS=20    84.20-80.70    0.3149   2.564905   0.005160
               FORM L,CS=10    84.20-83.00    0.3149   2.637570   0.004175
               FORM L,GL       84.20-74.50    0.3149   1.993219   0.023119
               FORM C,CS=20    76.70-75.70    0.3369   0.919375   0.178950
               FORM C,CS=10    76.70-76.50    0.3369   0.974496   0.164905
               FORM C,GL       76.70-73.70    0.3369   0.734325   0.231375

L = Lumped Matrix Formulation
C = Consistent Matrix Formulation
CS = Correlation Scale
GL = Global Random Hydraulic Conductivities
Let us first look at the results obtained by Jang et al. (1990). The maximum concentration at the target node was observed after t_max = 84.49 days, with a concentration of 0.3052. The design point concentration of 0.35 was obtained at t_max = 79.40 days. The FORM reliability index was equal to 3.1576, while a Monte Carlo simulation based on the same FEM formulation gave a reliability index equal to 3.2038.
PAGAP with a lumped formulation and a Crank-Nicholson scheme for the time derivative gave a maximum concentration of 0.314863 at the target node at time 84.2 days, with a time step of 0.1. After 5 iterations using the standard Rackwitz and Fiessler algorithm, a reliability index of 2.564905 was obtained, giving a probability of failure equal to 0.005160. The design point maximum value of 0.35 at the target node is obtained at t_max = 80.7d. The reason that an implicit scheme has not been used in PAGAP for this case is that the second performance function has been implemented differently in CALREL-TRANS, which makes comparisons between the results very difficult. The differences between the two implementations are discussed in chapter 9.
PAGAP with a consistent matrix formulation and a Crank-Nicholson scheme for the time derivative gave a maximum concentration of 0.336943 at the target node at time 76.7 days, with a time step of 0.1. After 5 iterations using the standard Rackwitz and Fiessler algorithm, a reliability index of 0.919375 was obtained, giving a probability of failure equal to 0.178950. The design point maximum value of 0.35 at the target node is obtained at t_max = 75.7 days.
Table 8.3.1 summarizes the results obtained from PAGAP and CALREL-TRANS. It should be kept in mind that the SORM and Monte Carlo results simply improve the probability estimate, and thus a better reliability index can be obtained. They do not estimate a new design point or change the sensitivity results. From table 8.3.1 we can clearly see that the differences between the results from CALREL-TRANS and PAGAP are related to the c_max values. Since the two models calculate the maximum concentration differently, direct comparisons between the results cannot be made.
In table 8.3.1 one can also see the t_max times for the mean value solution and the design point solution. For the mean value solutions, the difference between the lumped and consistent formulation is 7.5 days. Considering that the average pore velocity is 0.4 m/d and that the target node is 30m away from the nearest row of source nodes and 35m from the farthest row, then due to advection the maximum concentration of the pollution plume should reach the target node at some time between 75 days and 87.5 days. However, we must also take dispersion processes into consideration. Dispersion will of course reduce the peak concentration of the pollution plume and spread out the plume in both horizontal and vertical directions. Dispersion makes the monitoring of maximum concentrations complicated, because the maximum concentration at a specific point in the aquifer will not coincide with the maximum plume concentration at that specific time.
Figure 8.3.2. Groundwater solution at the design point for case 2a.
Figure 8.3.3. Mean value concentration distribution.
Figure 8.3.4. Concentration distribution at the design point for case 2a
Figure 8.3.5. A. Lumped mean value pollution. B. Consistent mean value pollution. C. Lumped design point pollution with a correlation scale of 10m. D. Consistent design point pollution with a correlation scale of 10m.
Figure 8.3.6. Design Point Hydraulic Conductivities for case 2a. A. Correlation scale 20m. B. Correlation scale 10m.
This is significant because it means that c_max is in a position in front of the maximum plume concentration, and t_max will have a value near the 75 day mark. Therefore, the lumped formulation giving t_max at 84.2 days is an unreasonable result. It is interesting to notice in table 8.3.1 that in the cases where global hydraulic conductivities have been used, the design point t_max has moved near to the 75 day mark.
In figure 8.3.2 we see the groundwater head contours at the design point when using a lumped matrix formulation and the Crank-Nicholson scheme for the time derivative. We certainly do not notice any dramatic changes from the mean value solution, where the contours would be parallel lines from the upper constant head boundary to the lower constant head boundary. Small differences can be noticed, however. We observe a small oval shaped depression with a center approximately at coordinates (30,55), slightly above the target node, indicating a slightly increased hydraulic gradient. The contour plot does not indicate that the hydraulic conductivities have had a major impact on the design point.
Figures 8.3.3 and 8.3.4 show concentration contours using mean parameter values and design point parameter values for the lumped matrix formulation with a correlation scale of 20m. Figure 8.3.4 does not have the shape that would normally be expected. Since the velocity field has a constant value and constant direction over the whole domain, and the dispersivity values also are constant, we expect contours to have elliptic shapes with a common center at which the highest concentration is observed.

One could assume that this deviation from the expected is due to a boundary effect. However, when using a consistent matrix formulation we do indeed get the expected pollution plume shape, as can be seen in figure 8.3.5. This figure shows four pollution distributions. 8.3.5A and 8.3.5B are the mean value distributions when the target node obtains its c_max value. Note that figures 8.3.3 and 8.3.5A are the same. The design point solutions in figure 8.3.5 (i.e. C and D) show the pollution plume when the target node obtains the target concentration for a correlation scale of 10m. Figure 8.3.5 shows that
there is a considerable difference in the shapes of the plumes obtained from a lumped matrix formulation and a consistent matrix formulation. The differences observed should be attributed to numerical errors introduced by the lumped matrix formulation. The reason for discussing this deviation is that the design point results of figure 8.3.4 show the same general characteristics as those of figure 8.3.3, and one should not attach any significance to these characteristics in relation to design point calculations.
Comparing figures 8.3.3 and 8.3.4, we observe that the design point pollution plume shows higher concentrations and less dispersion in the horizontal direction than the mean value pollution plume. It is interesting to notice that in the vertical direction both plumes seem to have approximately the same dispersion. A better understanding of these characteristics is obtained by considering the design point values for the global dispersivities, which are αL = 2.7746m and αT = 1.2124m (mean values are 3.0 and 1.5 respectively).
Obviously, the lower transverse dispersivity value has caused the "narrowing" of the design point plume. The lower longitudinal dispersivity must be balanced out by the higher groundwater velocity. We shall discuss the groundwater velocity once the design point hydraulic conductivities are presented, but an indication of this velocity is already given in the PAGAP results. If we go back to the description of the PAGAP results, it is mentioned that the target concentration at the target node is obtained at t = 80.7d, while the mean value solution is obtained at t = 84.2d, i.e. 3.5 days later. Since both global dispersivity values have been decreased, less spreading of the plume takes place and therefore higher concentrations are observed.
The design point hydraulic conductivity values can be seen in figure 8.3.6 for a correlation scale of 20m (8.3.6A) and a correlation scale of 10m (8.3.6B). Although the plots do not use the same contour levels, one can observe that in the 20m case the hydraulic conductivities have higher values than in the 10m case. The design point hydraulic conductivities are forced to have the user defined correlation scale. The effects of the correlation scale in these contour plots are clear. The 20m case shows that the design point hydraulic conductivities have affected the whole aquifer, with the largest effects observed in the area directly above the pollution sources. The 10m case shows the same characteristics, although the overall effects over the aquifer are more moderate. The design point hydraulic conductivity values affect both advective and dispersive processes and are therefore difficult to interpret. The hydraulic conductivity distributions seen in figure 8.3.6 are obviously increasing the velocity between the pollution source nodes and the target node. Considering now that time is not directly included in the performance function, one can reason that advective processes do not directly influence the analysis either. In other words, advective processes have only a marginal influence on the analysis, the extent of which is confined to the interaction between advection and dispersion due to the groundwater velocity.
Increased velocity causes increased dispersion, but at the same time the plume reaches the target node faster, so there is less time for dispersion processes to take place. In this case the mean value dispersion in the vertical direction seems to be approximately the same as the one observed for the design point vertical dispersion, despite the fact that the global longitudinal dispersivity has been reduced. Therefore we have to assume that the increased velocity has balanced out the longitudinal dispersivity effects, and that the controlling variable is the global transverse dispersivity.
Figure 8.3.7 shows the hydraulic conductivity γ-sensitivities for the 20m correlation scale case (A) and the 10m correlation scale case (B). The two contour plots are very similar. In both plots the influence of the area between the pollution source nodes and the target node is obvious, although the 20m case is slightly more sensitive to changes in this area than the 10m case. The highly sensitive areas are confined on each side by negative sensitivity areas (yellow areas). The negative sensitivity area has a larger extent in 8.3.7A than in 8.3.7B, although a better choice of contour levels would probably reveal a considerable negative sensitivity area for plot B as well. The negative sensitivities are very interesting because indirectly they show us the impact of the correlation scale. Increments in hydraulic conductivity values in the positive sensitivity area will give a larger reliability index; however, the correlation scale will transmit part of these increments into the negative sensitivity area, resulting in a lower reliability index. Of course the positive sensitivity area has a much larger influence on the reliability index, as indicated by the high γ-sensitivity values observed there, but the negative sensitivity area is much larger. Obviously, negative sensitivity areas will have a larger impact when the correlation scale is large. For the 20m case, the γ-sensitivities for the global longitudinal and transverse dispersivity are -0.354175 and -0.890283 respectively. Both sensitivities are negative, showing that reducing the global dispersivity values will increase the reliability index, and also confirm that the global transverse dispersivity is the most important variable in the analysis.
Figure 8.3.8 shows the sensitivities with respect to the mean values used for the element hydraulic conductivities. The contour levels have been chosen to be equidistant and to span the whole range of sensitivity values. The same choice of contour levels was used for the contour plots of the γ-sensitivities seen in figure 8.3.7. The reason for this choice of contour levels is to reveal the similarity between the γ-sensitivities and the mean value sensitivities. There is of course one significant difference between them. The mean value sensitivity distribution is almost exactly the inverse of the γ-sensitivity distribution. Highly negative mean value sensitivities correspond almost exactly with highly positive γ-sensitivities. This behavior was also noticed in the case 1 results. In order to find out if the reliability index is more sensitive to changes in the element hydraulic conductivity values than to changes in their respective mean values, we must multiply the mean value sensitivities by their respective standard deviations, in this case 0.3. For the 20m case we see that the reliability index is slightly more dependent on the design point hydraulic conductivity values than on the hydraulic conductivity mean values. For the 10m case the reliability index is more or less equally dependent on both values.
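The normalization used in this comparison is simply a scaling of the mean value sensitivities dβ/dμ by the standard deviation of each variable, making them dimensionless like the γ-sensitivities; a sketch with made-up numbers:

import numpy as np

dbeta_dmu = np.array([-0.215, -0.104, 0.081])   # hypothetical d(beta)/d(mu) values
sigma = 0.3                                     # hydraulic conductivity st. dev.
print(dbeta_dmu * sigma)                        # normalized, comparable to gamma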
Figure 8.3.7. Sensitivities of Hydraulic Conductivities for case 2a. A. Correlation scale 20m. B. Correlation scale 10m.
Figure 8.3.8. Sensitivities with respect to Mean Values of Hydraulic Conductivities for case 2a. A. Correlation scale 20m. B. Correlation scale 10m.
Figure 8.3.9. Sensitivities with respect to Standard Deviations of Hydraulic Conductivities with correlation scale 20m for case 2a.
The last results presented for case study 2a are those of figure 8.3.9, showing the sensitivities with respect to the standard deviations for the element hydraulic conductivities with a correlation scale of 20m. An almost identical plot was obtained for the 10m case. The plot has the same characteristics as the mean value sensitivity plot. The main differences are that the highly negative area is smaller but with higher sensitivity values. The positive sensitivity area is much smaller than the respective area for the mean value sensitivities.
8.3.2. Case 2b
The second problem belongs to the so-called residence time problem category. The third performance function is used for this problem, defined as

g(X) = R_targ - T_{c >= C_targ}(x,y,T)    (8.3.3)

where R_targ is the target residence time and T_{c >= C_targ} is the period of time for which the concentration exceeds the value C_targ at point (x,y) over a period of time T. The beginning and end of the residence time are part of the solution. In this particular case the performance function is written as

g(X) = 75 - T_{c >= 0.2}(30,50,150)    (8.3.4)
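Numerically, the residence time is obtained from the concentration time history at the target node; a minimal sketch, where the breakthrough curve is again a hypothetical stand-in for the FEM solution:

import numpy as np

def g_residence(t, c_history, c_target, r_target):
    # Total time with c >= c_target; g < 0 means the residence time
    # exceeds the target residence time.
    dt = t[1] - t[0]                       # uniform time step assumed
    return r_target - np.sum(c_history >= c_target) * dt

t = np.arange(0.0, 150.0, 0.1)
c = 0.315 * np.exp(-((t - 84.2) / 30.0) ** 2)   # hypothetical curve
print(g_residence(t, c, 0.2, 75.0))             # g > 0 at the mean point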
For the residence time problem, Jang et al. (1990) calculated a mean value residence time of 65.62 days, which started at 57.03 days. At the design point the residence time is of course 75 days and started at 59.05 days. The FORM reliability index obtained was 2.6355.

The PAGAP lumped formulation gave a mean value solution for the residence time of 65.93 days, which started at 55.78 days. The RF algorithm converged in 7 iterations to a reliability index of 2.661269 with a probability of failure of 0.003892. The residence time lasted 75 days at the design point and started at 57.55 days, i.e. approximately 2 days later than the mean value solution.
The PAGAP consistent matrix formulation gave a mean value residence time of 65.99 days, which started at 50.26 days, i.e. approximately 5.5 days before the lumped matrix solution and over 6.5 days before the Jang et al. solution. The RF algorithm converged in 7 iterations to a reliability index of 2.830823, giving a probability of failure equal to 0.002321. The residence time started at 52.44 days at the design point.
Table 8.3.2. Results of Reliability Analysis for case 2b.

Model          Analysis Type   T_{c >= C_targ}   β          Pf
CALREL-TRANS   FORM L,CS=20    65.62             2.6355     0.004200
               SORM            65.62             2.7051     0.003414
PAGAP          FORM L,CS=20    65.93             2.661269   0.003892
               FORM L,CS=10    65.93             2.839749   0.002257
               FORM L,GL       65.93             2.043385   0.020507
               FORM C,CS=20    65.99             2.830823   0.002321
               FORM C,CS=10    65.99             3.102145   0.000961
               FORM C,GL       65.99             2.008076   0.022318

L = Lumped Matrix Formulation
C = Consistent Matrix Formulation
CS = Correlation Scale
GL = Global Random Hydraulic Conductivities
Figure 8.3.10. Design Point Hydraulic Conductivities for case 2b. A. Correlation scale 20m. B. Correlation scale 10m.
From table 8.3.2 we see that the mean value residence times obtained from the models are very close to each other. However, the times at which these residence times start are quite different for each model. The consistent matrix formulation gives an initial time of 50.26 days, the lumped matrix formulation gives 55.78 days, while Jang et al. gave a starting time of 57.03 days.

The design point starting times are 52.44 days for the consistent formulation, 57.55 days for the lumped case and 59.05 days for the Jang et al. model. We see that the design point initial times start approximately two days after the mean value initial times.
Intuitively, one expects that the pollution plume velocity has to decrease in order for the residence time to increase to 75 days, i.e. the plume has to pass over the target node more slowly. However, such a decrease will also decrease the dispersion processes, causing the plume to have a smaller lateral extension and a higher peak concentration. Therefore, one expects to see a decrease in velocity and at the same time an increase in longitudinal dispersivity, to maintain or even increase slightly the lateral extension of the pollution plume. The transverse dispersivity will most likely decrease in order to maintain higher pollution concentrations in the plume.
Figure 8.3.10 shows the design point hydraulic conductivities for a correlation scale of 20m (A) and 10m (B). In both plots we observe that the hydraulic conductivity values have been reduced from the initial 3.0 m/d mean value. A higher reduction is observed in the 20m case than in the 10m case, which is not consistent with the reliability indices calculated, and must therefore be attributed to the global values of the longitudinal and transverse dispersivities. For the 20m case these values are 3.208872 and 1.165227, respectively (mean values 3.0m and 1.5m), while for the 10m case they are 3.254892 and 1.1247259, respectively. These are consistent with the reliability indices and seem to be the controlling variables.
Figure 8.3.11. Sensitivities of Hydraulic Conductivities for case 2b. A. Correlation scale 20m. B.
Correlation scale 10m.
Figure 8.3.12. Mean Value Sensitivities of Hydraulic Conductivities for case 2b. A. Correlation
scale 20m. B. Correlation scale 10m.
The highest element hydraulic conductivities are observed in both cases at the target node and are centered approximately 5m under it. The influence of the position of the pollution sources is less obvious, although the shape of the contours in the lower half section is a clear indication of their influence.
An important clue as to why the dispersivities play a more significant role than the hydraulic conductivities (i.e. velocities) is obtained by looking back at the residence starting times. Independently of which numerical scheme is used, the design point starting times are approximately 2 days after the mean value starting times. This indicates that the velocity changes are not significant enough to increase the residence time by approximately 9 days as required. In the best case they will increase the residence time by approximately 3 days. The remaining 6 days have to be obtained from changes in the dispersion process.
The γ-sensitivities can be seen in figure 8.3.11 for a correlation scale of 20m (A) and 10m (B). The plots are similar and have interesting characteristics. The sensitivities help us to see the influence of the pollution source nodes, but also an optimum way for pollution preservation. Just above the pollution sources we observe an area of positive sensitivities, suggesting that higher hydraulic conductivities or velocities are required. This will move more pollution towards the target node, since less transverse dispersion will take place. On both sides of this positive sensitivity area, highly negative sensitivity areas exist where low velocities are obtained. These areas work as pollution barriers, reducing the amount of pollution lost to dispersion processes, and therefore help to keep concentration levels high in the pollution plume. Outside the pollution barriers positive sensitivities are observed. Although the amount of pollution reaching these areas will not be large, the higher velocities will transport it faster than the pollution concealed within the barrier zone.
The γ-sensitivities for the longitudinal and transverse dispersivities are 0.293246 and -0.940007 respectively, for the 20m correlation scale case. The importance of these variables is very clear.
Figure 8.3.12 shows the mean value sensitivities for the hydraulic conductivities. The contour levels have been chosen to be equidistant and to cover the whole range of sensitivity values, as did those of figure 8.3.11.
The resemblance between the γ-sensitivities and the mean value sensitivity distribution is striking in this case. As we have noted previously, the relationship between them is inverse, in the sense that the mean value sensitivities are the mirror images of the γ-sensitivities. Upon normalization with the standard deviation we see that the mean value sensitivities are just slightly larger than the γ-sensitivities, and therefore the initial mean values have a slightly larger influence on the reliability index than the design point values. This influence is higher for the 10m case than the 20m case.

The global dispersivities have normalized mean value sensitivities of -0.261620 and 0.838629, respectively, both smaller than the γ-sensitivities.
Finally, figure 8.3.13 shows the hydraulic conductivity standard deviation sensitivities, which have the same characteristics as the mean value sensitivities, although upon normalization we see that they have a smaller influence on the reliability index than the mean values.

The global dispersivities have normalized standard deviation sensitivities of -0.182150 and -1.871653, respectively.
The sensitivities obtained in this case study are very useful. Besides showing us which areas will influence the reliability index, they also provide information about which input is of importance. In this case the sensitivities show that it is important to obtain a good mean value estimate for the global transverse dispersivity and, even more important, to obtain a good standard deviation estimate. For the element hydraulic conductivities good estimates are required not only at the target node area but also at the barrier regions.
Figure 8.3.13. Standard deviation sensitivities of hydraulic conductivities for case 2b. A. Correlation scale 20m. B. Correlation scale 10m.
It is also interesting that high mean values in the area just above the pollution sources will actually reduce the reliability index. The influence of the upper half of the aquifer is very small, and the standard deviation sensitivities suggest that a deterministic approach for the largest part of this area - excluding the area near the target node - should be appropriate.
8.4. Conclusions
In this chapter theoretical case studies have been used to verify the accuracy of the PAGAP model. The results obtained from these case studies are compared to those obtained by Jang et al. (1990, 1994), who used the same case studies for the presentation of their stochastic model CALREL-TRANS.
The two programs have several inherent dissimilarities which make direct comparisons difficult. PAGAP results indicate that the reliability index is very sensitive to the concentration estimates obtained from the finite element part of the model. Therefore direct comparisons could only be made between results obtained from CALREL-TRANS and PAGAP when the latter uses a lumped matrix formulation and the implicit scheme, which is the FEM configuration employed by CALREL-TRANS.
The differences between the reliability indices obtained from PAGAP and those obtained from CALREL-TRANS are in the range 0.5% - 2.6%, and are considered to be in good agreement. The differences observed are attributed mainly to the different correlation schemes used by the two programs. The PAGAP reliability indices are in all cases larger than those from CALREL-TRANS, indicating that the correlation scheme used by PAGAP produces a larger variance reduction. Comparisons between results using the second performance function are not included in the above due to the different algorithm used for estimating the maximum concentration.
When a consistent matrix formulation is used with a Crank-Nicolson scheme, the difference was as large as 20%, and in all cases the reliability index from PAGAP was smaller than the one estimated by CALREL-TRANS.
The sensitivity of the reliability index to the concentration estimates is troubling since
numerical errors can then influence the reliability index considerably. The results will
not only depend on the matrix formulation and time derivative scheme employed in the
FEM code, but also on the time step used and the finite element grid spacing. It would
be interesting to know if the SORM and MCS estimates are equally sensitive to the con-
centration estimates. This sensitivity can cause a serious consistency problem.
The sensitivity values produced by PAGAP have helped considerably in identifying
which parameters are influencing the reliability index. In the first case study, the con-
stant hydraulic head node values had the largest influence on the reliability index, over-
shadowing the influences of the other parameters. In the second case study the
significance of the transverse dispersivity was quite clear.
The sensitivity values cannot help us very much in understanding the influence of the correlation on the reliability results obtained. Increasing the correlation scale decreases the reliability index. The minimum reliability index is obtained when considering the parameters to be global, since this corresponds to a condition where the correlation scale is assumed to be extremely large. However, using global random variables instead of random fields can reduce the computational effort considerably, and in many cases the reliability results will not be altered significantly by this assumption. The highest reliability indices were obtained when no correlation was assigned to the random variables. However, PAGAP failed to converge in several cases where no correlation conditions were introduced.
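To make the influence of the correlation scale more concrete, the following short Python sketch computes the variance factor of a spatial average of a unit-variance random field for increasing correlation scales, using an isotropic exponential correlation function of the type available in PAGAP. The sample coordinates and weights are illustrative assumptions, not data from the case studies. As the scale grows, the factor approaches the global-variable limit of 1, which corresponds to the minimum reliability index noted above.

import numpy as np

def exp_correlation(coords, lam_x, lam_y):
    # Anisotropic exponential correlation (assumed form):
    # rho = exp(-sqrt((dx/lam_x)^2 + (dy/lam_y)^2))
    dx = coords[:, 0][:, None] - coords[:, 0][None, :]
    dy = coords[:, 1][:, None] - coords[:, 1][None, :]
    return np.exp(-np.sqrt((dx / lam_x) ** 2 + (dy / lam_y) ** 2))

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 100.0, size=(50, 2))  # hypothetical element centers
w = np.full(len(pts), 1.0 / len(pts))        # weights of a spatial average
for lam in (5.0, 20.0, 1e6):                 # 1e6 approximates a global variable
    R = exp_correlation(pts, lam, lam)
    print(lam, w @ R @ w)                    # variance factor of the average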
9. The Gardermoen Case Study
9.1. Introduction
We have already demonstrated the types of results obtained from PAGAP with the use
of the theoretical case studies presented in the previous chapter. In this chapter PAGAP
will be used for the analysis of an actual field case study. The purpose of such a study
is to demonstrate some of the problems involved in modeling actual cases and of course
to evaluate the usefulness of PAGAP under more realistic conditions. The case study
will be based on available data from the Gardermoen area (figure 9.1.1).
The Gardermoen area is the subject of an extensive environmental research program in Norway which goes under the name "Miljø i Grunnen" and will be referred to herein as the "Gardermoen" project. It is supervised by the Norwegian Hydrological Committee (NHK) and financed by the Research Council of Norway (NFR) and Luftfartsverket. The purpose of this project is to study and enhance our understanding of the geochemical and geophysical processes in subsurface groundwater environments, and the large Øvre Romerike aquifer in the Gardermoen area was ideally suited for these purposes.
The Øvre Romerike aquifer is the largest self-sustained (precipitation controlled) aquifer in Northern Europe, with a total input area of 55.1 km². The total amount of water in the aquifer has been estimated by Jørgensen and Østmo (1990) to be 1.34×10⁹ m³. Despite these large water resources, the aquifer is not presently utilized as a source of drinking water. The relatively poor quality of the water (high concentrations of iron and calcium) is one reason, but probably the main reason is that the local communities have the easier and cheaper alternative of lake and river water at their disposal.
Figure 9.1.1. Simplified quaternary map of the Gardermoen area. The rectangle denotes the modeling area. (Modified after Østmo 1976) [Map legend: moraine (till), glaciofluvial deposits, glaciolacustrine deposits, marine deposits, eolian deposits, bog, exposed bedrock.]
The flow regime in the Gardermoen area is a typical precipitation controlled regime.
The main input of water is due to rain and snow melting, and it is this characteristic
which makes it susceptible to pollution since all surface pollutants will eventually be
washed into the subsurface environment. The groundwater table is fairly near the
ground surface (approx. 20m), so in the event of a pollution incident at the ground sur-
face one must first consider if the pollution can be detained in the unsaturated zone. If the pollution factor is not detained in the unsaturated zone, then one must consider what will happen when it reaches the groundwater table. Ideally one should prevent the pollution from reaching the groundwater, and because of this, the main research objective for the Gardermoen project has been the study of the unsaturated zone. The event of a pollution factor reaching the groundwater table seems to be rather unlikely for the Gardermoen area, since the available data today indicate that it can be detained in the unsaturated zone. However, in the case studies considered herein, we assume that the pollution reaches the groundwater table and analyze some of the effects and consequences of this event in a stochastic manner.
PAGAP is used to model a 2D vertical cross section and a 2D horizontal area of the
Gardermoen aquifer. Unfortunately, at the start of these case studies, not much informa-
tion was available from the Gardermoen area. It was therefore necessary to use the
available data plus a number of assumptions to build up the models. A description of
the available data follows.
A hydrogeological map prepared by Østmo (1976), compiled from measurements taken in the period 4-7 November 1975, was used as the main groundwater table reference (figure 9.1.2). More recent groundwater table measurements taken during the Gardermoen project have had an indirect impact on some of the assumptions used. Figure 9.1.2 also shows the 2D horizontal modeling region and the location of the 2D vertical cross section. The groundwater table contours give us a very good picture of the flow regime in the area. The small black dots depict well sites, while the small circles with dots in them depict the groundwater table divide. A couple of arrows have been used by Østmo to indicate the direction of groundwater flow, while the blank circles have been used to divide the aquifer into sub-regions and are of no importance for this study.
Figure 9.1.2. Hydrogeological map of the modeling area. The water divide (green), flowlines (blue), and the 182m contour line define the 2D horizontal modeling area. The vertical profile line (red) shows the 2D vertical profile position. The stratigraphic profile line shows the direction of the profile of figure 9.1.3. (Modified after Østmo 1976)
The Gardermoen project has made available geological and hydrogeological information, including a hydrostratigraphic profile - figure 9.1.3 - whose location is shown in figure 9.1.2. The top two layers have a mean value hydraulic conductivity of 10⁻⁴ m/s, the third layer has a conductivity of 10⁻⁵ m/s, while the fourth layer has a conductivity in the range 10⁻⁶-10⁻⁹ m/s. In the modeled area the groundwater table lies below the first layer, and the fourth layer has been considered impervious due to its low conductivity values. We shall concentrate therefore on the second and third layers, which are important for the models.
Figure 9.1.3. Hydrostratigraphy of the Gardermoen aquifer (West-East profile of the delta outwash plain) and assumed boundary between layers 2 and 3. (Based on Tuttle et al. 1994)
The much higher hydraulic conductivity of the second layer compared to the third layer suggests that it is this layer which controls groundwater flow. This is consistent with geological observations made in the Gardermoen area, which suggest that the main transport of water is through the second layer. Many small water springs in the area are directly associated with the boundary between the second and third layer. However, when the groundwater table measurements of Østmo (1976) were used in conjunction with the hydrostratigraphic information to develop the 2D vertical cross section model, inconsistencies appeared. To achieve consistent results, it was necessary to assume that the boundary between layer 2 and 3 was positioned lower, as indicated by the dashed green line in figure 9.1.3. The reasons that led to this assumption are discussed in Appendix C.
9.2. 2D Vertical cross section model
In a 2D vertical section we are modeling groundwater flow in the XZ plane, and in order to do so, we must have no flow taking place in the Y direction in the modeled region. Only in a few exceptional cases would we be able to define an XZ plane in an aquifer where the flow does not have a Y component. In most cases we can only find curved surfaces with this property. Such surfaces are directly related to flowlines since, by definition, the flow normal to the direction of a flowline is zero. A flowline is not easily defined. In the Gardermoen case we have a map of the groundwater table (figure 9.1.2), and assuming that the hydraulic conductivity is isotropic, we can draw a flowline on the XY plane. The vertical profile flowline marked in figure 9.1.2 is the flowline used for the definition of the vertical section. This curved surface is stretched out to become a plane (the XZ vertical model plane) and a FEM grid has been developed to fit the aquifer geometry and hydrostratigraphy. This grid can be seen in figure 9.2.1.
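A flowline of this kind can be traced numerically from the groundwater table map: with isotropic conductivity the flow follows the negative head gradient, so one simply steps downslope from a chosen starting point. The Python fragment below is a minimal sketch of this idea; the gridded head surface, step length and starting point are illustrative assumptions, not the procedure actually used here.

import numpy as np

def trace_flowline(head, x0, y0, step=0.5, n_steps=1000):
    # head[i, j]: groundwater table on a unit-spaced grid (rows = y, cols = x)
    dh_dy, dh_dx = np.gradient(head)
    path, x, y = [(x0, y0)], x0, y0
    for _ in range(n_steps):
        i, j = int(round(y)), int(round(x))
        if not (0 <= i < head.shape[0] and 0 <= j < head.shape[1]):
            break                              # left the mapped area
        gx, gy = dh_dx[i, j], dh_dy[i, j]
        norm = np.hypot(gx, gy)
        if norm < 1e-12:
            break                              # flat spot: direction undefined
        x, y = x - step * gx / norm, y - step * gy / norm
        path.append((x, y))
    return np.array(path)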
Figure 9.2.1. 2D vertical cross section FEM grid. [Zone hydraulic conductivities: 18.000 m/d, 23.250 m/d and 50.000 m/d in the top layer regions; 0.864 m/d in the bottom layer.]
The FEM grid in figure 9.2.1 was generated by a computer program developed for this purpose. The program requires as input the XZ coordinates of the top and bottom of the aquifer, and in this specific case the coordinates of the boundary between the 2 layers. One can use as many coordinate points as one considers necessary for the precise geometrical definition of these boundaries, but obviously not less than 2 points. An interpolation function is then used to define these boundaries and to divide each into proportionally spaced elements. In this case it was decided to use 50 element columns, 6 element rows for the top layer and 3 element rows for the bottom layer (green). This gives a total of 450 elements and 510 nodes. A spline interpolation function was used, but this choice was somewhat unsuccessful. Although this interpolation function gives very smooth boundary lines - which was the reason it was chosen - it also has some other undesirable effects. The first of these is the tendency to give lines with near zero gradients at the end-points. One notices that the top aquifer boundary line - the groundwater table - seems to be normal to the right side boundary. Closer inspection shows that the gradient is not normal, but it is nevertheless 4-5 times smaller than the gradient which would be obtained from linear interpolation. Another effect of the spline interpolation function is the wavy shapes it produces in areas where the number of interpolation points is small. The groundwater line in the area between 1750m and 2250m shows a slight bend which is influencing the hydraulic gradient in this area. This bend has no basis in the groundwater table data and is purely a result of the spline interpolation. Although it is barely noticeable, it does have an impact on the nodal head values and its effect is clearly observed during model calibration.
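The end-gradient artifact is easy to reproduce. The short check below compares the end slope of a cubic spline through a few sparse boundary points with the slope linear interpolation would give over the last interval; the points are illustrative numbers, not the actual Gardermoen water table data.

import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 500.0, 1000.0, 1750.0, 2500.0])
z = np.array([200.0, 196.0, 190.0, 186.0, 185.0])  # hypothetical water-table points

spline = CubicSpline(x, z)                 # smooth, but end slopes can flatten
spline_end = spline.derivative()(x[-1])    # spline gradient at the end point
linear_end = (z[-1] - z[-2]) / (x[-1] - x[-2])
print(spline_end, linear_end)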
The trial and error approach has been used for model calibration purposes, and the final conductivity values used can be seen in figure 9.2.1. The original hydraulic conductivity values were 10⁻⁴ and 10⁻⁵ m/s, as indicated in figure 9.1.3. Upon changing the units to m/d we obtain 8.64 and 0.864, respectively. The bottom (green) layer was assumed to have the mean value hydraulic conductivity given in figure 9.1.3 and was not manipulated. The top layer was assumed to be homogeneous at first, but even the best parameter value gave a maximum head difference of more than 18m between observed and estimated hydraulic heads. It was therefore necessary to assume the top layer to be
inhomogeneous. The 3 vertical divisions of the layer shown in figure 9.2.1 were mainly based on dividing the layer into regions with similar hydraulic gradient characteristics. From left to right we have a region with a relatively small gradient, a high gradient region and finally a small gradient region. The spline interpolation function has had considerable influence in the rightmost region, where the small gradients required high hydraulic conductivities. The assumed boundary between the top and bottom layer has influenced the hydraulic conductivities of the first two regions. The second region has a higher hydraulic gradient but is also less thick than the first region, and this resulted in a higher conductivity value. Compared to the original mean conductivity value for this layer (see figure 9.1.3), we have that the first region is 2.1 times higher, the second region is 2.7 times higher and the third region is 5.8 times higher. The maximum difference between computed and measured hydraulic heads is now 0.48m and is observed in the area with a bend in the top flowline discussed previously. Since the seasonal variations in the groundwater level have a magnitude of 1.5-2.0m, the obtained head difference is considered to be very good. Are the hydraulic conductivities obtained acceptable? We ignore the third region, since its hydraulic conductivity is artificially produced by the spline interpolation function. The two first regions show hydraulic conductivities which are within acceptable limits. Field measurements up to 4 times the mean hydraulic conductivity have been obtained in the area, but also measurements as small as 0.5 times the mean value. These conductivity values can be reduced if we assume a larger thickness for the top layer. For the purposes of the subsequent reliability analysis we assume that the hydraulic conductivities, as presented in figure 9.2.1, will produce an acceptable groundwater flow simulation of the actual flow regime, and they are considered to be deterministic parameters in the reliability model.
Unfortunately, no information was available about the dispersivities which should be used. Tracer tests will be performed in the Gardermoen area, but the results were not available in time for these analyses. Considering the dimensions of the aquifer, we assumed a longitudinal dispersivity of 100m with a transverse dispersivity of 10m. Global random variables are introduced for the dispersivities, with the above values used as both mean and standard deviation. The last parameter required is the porosity, which has a value of 0.3. Finally, the effects of decay have been introduced into these simulations and into the reliability analysis. The decay elements are defined in figure 9.2.1 and have a mean value of 0.005.
In all cases presented below, the problem that is analyzed is the same. We want to find the probability of the target node (node 398, depicted in figure 9.2.1) having a concentration larger than or equal to 0.2c₀ at time 1800 days after a constant pollution source c₀ has been activated. For this purpose the first performance function has been used in PAGAP. In each case different statistical input has been used.
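In code, this first performance function amounts to a thin wrapper around the FEM transport solution. The sketch below illustrates the idea; solve_concentration is a hypothetical stand-in for the FEM part of PAGAP, not its actual interface, and concentrations are expressed in units of c₀.

def g1(params, solve_concentration):
    # Failure (the hazardous event) corresponds to g(X) <= 0, i.e. the
    # concentration at node 398 after 1800 days reaching 0.2 c0.
    c = solve_concentration(params, node=398, time=1800.0)
    return 0.2 - c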
In the first set of cases we assume the dispersivities to be global random variables and analyze two cases, one without decay elements and one with decay elements. The decay elements are treated deterministically in these cases.
In the second set of cases, dispersivities are treated as global random variables and the decay elements as a random field. At first we assume that the decay elements are correlated, and then present results with no correlation assigned to them for comparison.
In the last set of cases the dispersivities are treated as random fields. In the first case only the longitudinal dispersivity is considered to be a random field, while in the second case both dispersivities are random fields.
9.2.1. Global random dispersivities with deterministic decay
Without decay and using a consistent matrix FEM formulation, the deterministic solution at the target node after 1800 days with a time step of 10 days is equal to 0.131c₀. With decay the deterministic solution is equal to 0.063c₀.
Figures 9.2.2 and 9.2.3 show the deterministic concentration distributions. Running the PAGAP model with and without decay, we obtained the results shown in table 9.2.1. Without decay the standard RF algorithm converged in 6 iterations to a performance function value g(X) ≤ 0.0000005, giving a reliability index of 0.704. With a decay constant of 0.005 assigned to the decay elements, the same analysis required 14 iterations to reduce the performance function to a value smaller than 0.0000005 and gave a reliability index of 2.926.
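The RF algorithm referred to above can be sketched in a few lines. The fragment below is a generic Hasofer-Lind/Rackwitz-Fiessler iteration in standard normal space, written from the textbook form rather than from PAGAP's source, with the same tolerance on the performance function as used in these runs.

import numpy as np

def rf_iteration(g, grad_g, n, tol=5e-7, max_iter=100):
    # Iterate y_{k+1} = ((grad.y_k - g(y_k)) / |grad|^2) * grad until g ~ 0;
    # beta is then the distance from the origin to the design point y*.
    y = np.zeros(n)
    for it in range(1, max_iter + 1):
        gv, gr = g(y), grad_g(y)
        y = ((gr @ y - gv) / (gr @ gr)) * gr
        if abs(gv) <= tol:
            break
    return np.linalg.norm(y), y, it

# Toy usage: the linear limit state g(y) = 1.5 - y1 - y2 gives beta = 1.5/sqrt(2).
beta, y_star, iters = rf_iteration(
    lambda y: 1.5 - y.sum(), lambda y: -np.ones(2), 2)
print(beta, iters)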
Table 9.2.1. Reliability results with global variables - Vertical Section

Case         Algorithm   Iterations   β          Probability
No Decay     RF          6            0.704371   0.240601
With Decay   RF          14           2.925553   0.001719
Table 9.2.2. Sensitivity results with global variables - 2D Vertical Section

Parameters             No Decay     With Decay
Design Point αL        167.560584   389.146466
Design Point αT        8.007645     16.089972
γ-Sensitivity αL       0.959162     0.978094
γ-Sensitivity αT       -0.282856    0.208165
MV Sensitivity αL      -0.009591    -0.009781
MV Sensitivity αT      0.028286     -0.020816
SD Sensitivity αL      -0.006480    -0.027988
SD Sensitivity αT      -0.005635    -0.012677
Figure 9.2.2. Concentration distribution. Deterministic solution without decay.
Figure 9.2.3. Concentration distribution. Deterministic solution with decay elements.
Figure 9.2.4. Concentration distribution. Design point solution with no decay.
Figure 9.2.5. Concentration distribution. Design point solution with decay.
Table 9.2.2 summarizes the design point and sensitivity results obtained for the global dispersivities for the two analyzed cases, while figures 9.2.4 and 9.2.5 show the design point concentration distributions for each case.
The design point dispersivity values for the no decay case are as expected. The longitudinal dispersivity has been increased considerably in order to move the 0.2 concentration contour further away from the pollution source and towards the target node. At the same time the transverse dispersivity has been reduced in order to keep the concentration values high in the upper layer. As we see in figure 9.2.4, the design point dispersivities have made the plume unable to fully cover the whole thickness of the aquifer.
The sensitivity values indicate that the longitudinal dispersivity is more important than the transverse dispersivity. Before comparing the mean value (MV) and standard deviation (SD) sensitivities with the γ-sensitivities, one should normalize them by multiplying with the variable standard deviations, which in this case are 100m and 10m, respectively, for the longitudinal and transverse dispersivity variables. After normalization the MV sensitivities become exactly the same as the γ-sensitivities but with opposite signs, showing that the mean values are as important as the target value is. The SD sensitivities after normalization show that the influence of the longitudinal dispersivity standard deviation is less than that of its mean value but nevertheless still significant, while the standard deviation of the transverse dispersivity is of the least significance.
In the case with decay elements the design point longitudinal dispersivity has increased dramatically to a value almost 4 times the mean value. In addition, the transverse dispersivity has also increased considerably. This indicates that high pollution dispersion will reduce the effect of decaying processes. The sensitivity information is quite similar to that of the no decay case. The γ-sensitivities and normalized MV sensitivities are equal with opposite signs, showing the importance of the longitudinal dispersivity in the analysis. The normalized SD sensitivities, however, show that in this case the standard deviation of the longitudinal dispersivity is ca. 3 times higher than the γ-sensitivity or MV sensitivity. In other words, the reliability index is much more dependent on the value of the longitudinal dispersivity standard deviation than on the mean value and the target node value.
9.2.2. Global random dispersivities with random field decay
In this section, in addition to the global variables of longitudinal and transverse disper-
sivity, we consider the decay coefficients to be random variables with point statistics of
a mean value of 0.005 and a standard deviation of 0.005. Each decay element is as-
sumed to have a normal marginal distribution. The correlation scale is set to be 200m in
the X-direction and 20m in the Z-direction with an exponential correlation function. In
all we have 26 random variables.
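One way such a correlated set of normal decay variables can be represented is sketched below, where independent standard normals are mapped through a Cholesky factor of the correlation matrix. The element-center coordinates and the Cholesky-based mapping are assumptions made for illustration; the text does not spell out PAGAP's internal transformation.

import numpy as np

mu, sigma = 0.005, 0.005                       # point statistics from the text
rng = np.random.default_rng(1)
centers = rng.uniform(0.0, 1.0, size=(26, 2)) * [2500.0, 35.0]  # hypothetical (x, z)

dx = centers[:, 0][:, None] - centers[:, 0][None, :]
dz = centers[:, 1][:, None] - centers[:, 1][None, :]
R = np.exp(-np.sqrt((dx / 200.0) ** 2 + (dz / 20.0) ** 2))  # exponential correlation
L = np.linalg.cholesky(R)

u = rng.standard_normal(26)                    # independent standard normals
decay = mu + sigma * (L @ u)                   # one correlated realization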
PAGAP converged in 9 iterations to g(X) ≤ 0.0000005. A reliability index β = 1.819 was obtained, giving a probability of P = 0.034. The longitudinal dispersivity at the design point has a value of 219.559m, while the transverse dispersivity did not change much, with a design point value of 9.351m. We see that by assuming the decay coefficients to be random variables, the design point longitudinal dispersivity has been reduced considerably (from ca. 380m to 220m), indicating that the decay processes play a considerable role in the analysis.
Figures 9.2.6 and 9.2.7 show the design point and sensitivity values obtained for the decay random variables. The design point decays show a smooth profile. All values are smaller than the mean values. The decay values after the target point (element 19) show only a small decrease and obviously have the smallest effect on the concentration at the target node. Decay near the pollution source is important, but not as important as the decay at a distance of 500-600m from the source, where the smallest decay values are observed. All sensitivities show a sudden change after element 18, which is consistent with the position of the target node. The γ-sensitivities and mean value sensitivities show a similar profile but with inverse values. Comparison between these two sensitivities shows that the mean value sensitivities are approximately 0.75 times smaller than
the γ-sensitivities, indicating that the design point, and therefore the target value, is influencing the analysis much more than the mean values. The standard deviation sensitivities show a smaller influence than the mean values, but only slightly smaller. It is interesting to notice that the highest sensitivity values do not coincide with the smallest design point values. This is probably due to the correlation information assigned to the decay variables.
Figure 9.2.6. Design point decay constants and their γ-sensitivity values.
Figure 9.2.7. Normalized mean value and standard deviation sensitivities of the decays.
Figure 9.2.8. Design point uncorrelated decay constants and their γ-sensitivity values.
Figure 9.2.9. Normalized mean value and standard deviation sensitivities of the uncorrelated decays.
In order to confirm this correlation effect, the same case has been analyzed once again, but this time without any correlation assigned to the random variable decays. PAGAP converged to g(X) ≤ 0.0000005 in 10 iterations and gave a reliability index of 2.565 with a probability P = 0.005. The design point global longitudinal dispersivity has a value of 322.950m, while the transverse dispersivity has a value of 13.857m. Compared to the correlated decay case we see that the longitudinal dispersivity has increased considerably. The transverse dispersivity has increased as well, and in this case has a value larger than the mean value. These values indicate that the uncorrelated decay elements have a smaller influence on the analysis than the correlated decay elements.
Figures 9.2.8 and 9.2.9 show the design point uncorrelated decays and their sensitivity values. We observe that the design point decay values have a similar profile to that of the γ-sensitivities. The design point uncorrelated decay values are much nearer to their mean values than those in the correlated case. The γ-sensitivity profile is similar to the γ-sensitivity profile for the correlated decays, with the difference that the values observed for the uncorrelated case are higher. The mean value sensitivities and γ-sensitivities are exactly the same but with inverse values. The standard deviation sensitivities have almost half the values of those for the correlated decays, and at the same time show a similar profile to the design point values and γ-sensitivities.
These results clearly show the effects of correlation on the design point decay values. We notice once again that the reliability index is larger for the uncorrelated case than the correlated case, which is caused by the variance reduction resulting from the correlation assigned to the variables.
9.2.3. Random field dispersivities
In order to obtain more detailed results with respect to the behavior of the dispersivities, two cases have been analyzed where the dispersivities are considered to be random fields. In the first case the longitudinal dispersivity is a random field with point statistics given by a mean value of 100m and a standard deviation of 100m. Each random variable has a normal marginal distribution, while an exponential correlation function is used with a correlation scale of 200m in the horizontal direction and 20m in the vertical direction. The longitudinal dispersivities in the lower layer are considered to be deterministic. This assumption is made simply to reduce the number of variables in the analysis. The transverse dispersivities are assumed to be deterministic in the whole vertical section. In all we have 300 random variables.
In the second case the longitudinal dispersivities are a random field with the same characteristics as before, but the transverse dispersivities are also considered to be a random field. The element transverse dispersivities have a mean value of 10m and a standard deviation also equal to 10m. Each transverse dispersivity random variable has a normal marginal distribution, while an exponential correlation function is used with a correlation scale of 200m in the horizontal direction and 20m in the vertical direction. The transverse dispersivities in the lower layer are considered to be deterministic. In all we have 300 αL plus 300 αT random variables.
In the first case PAGAP converged in 6 iterations to a performance function tolerance of g(X) ≤ 0.0000005, giving a reliability index of 1.059 and a probability of 0.145. If the same case is analyzed assuming a single global longitudinal dispersivity, we obtain a reliability index of 0.722 with a probability of 0.235, where the design point global longitudinal dispersivity is 172.170m. In other words, assuming a spatial variation for the longitudinal dispersivity will reduce the probability of the target value being exceeded.
Figure 9.2.10 shows the design point longitudinal dispersivities, while figure 9.2.11 shows the γ-sensitivity values. Comparing the two figures makes the correlation effects very clear. The γ-sensitivities show that the longitudinal dispersivities at the source area are the most important in the analysis, and to a lesser degree the longitudinal dispersivities near the target node. The same is observed in the figure showing design point values, with the difference that the correlation effects have made the affected area much larger, practically occupying the whole area between the source and the target node, although the biggest differences are again observed near the pollution source area.
Figure 9.2.10. Design point longitudinal dispersivities distribution.
Figure 9.2.11. γ-sensitivity distribution.
Figure 9.2.12. Mean value sensitivity distribution.
Figure 9.2.13. Standard deviation sensitivity distribution.
Figure 9.2.14. Concentration distribution. Design point solution.
Table 9.2.3. Reliability Results - Vertical Section

Var. Type      Variables          β          Probability   Iterations
Global         1 αL               0.721697   0.235240      5
               1 αL + 1 αT        0.704371   0.240601      6
Random Field   300 αL             1.058823   0.144840      6
               300 αL + 300 αT    1.039575   0.149269      6
Figures 9.2.12 and 9.2.13 show the mean value sensitivities and standard deviation sensitivities, respectively. The mean value sensitivities have a distribution similar to the γ-sensitivities but with an opposite sign. Upon normalization with the standard deviation (100m) we see that the mean value sensitivities are much smaller than the γ-sensitivities. The normalized standard deviation sensitivities are smaller than the mean value sensitivities and show that the source area is the most sensitive area. The sensitivity at the target area is very small.
The design point concentration distribution seen in figure 9.2.14 is quite different from those observed in the global variable cases shown previously. The distribution resembles the deterministic solution, with the obvious difference that the higher dispersivity values have caused the pollution to spread out more in order to obtain the target node concentration value of 0.2c₀.
In the second case PAGAP converged after 6 iterations to a reliability index of 1.040 with a probability of 0.149. Table 9.2.3 summarizes all reliability results and includes the cases from section 9.2.1 where global random variables have been used. We notice two things in table 9.2.3. First, the transverse dispersivities have only a small effect on the results, indicating that the longitudinal dispersivity is the dominating variable; and second, the global variable cases show higher probabilities than the random field cases. A similar reduction of the reliability index was also observed in the theoretical case studies of chapter 8.
Clearly, the global random variable cases have larger variances than the random field
cases and therefore higher probabilities are obtained when global random variables are
used. However, such an explanation seems to contradict the variance reduction effect
since one could use the same argument for an uncorrelated random field. In the latter
case the uncorrelated random variables will not experience any variance reduction, and
therefore they will have larger variances than the correlated variables. Obviously the
term "variance reduction" can be misleading and the effects of correlation need to be
clarified.
All the results we have seen so far clearly show that the reduction of the reliability index is connected to the correlation scale. Let us for example consider the case where we have 300 longitudinal dispersivity random variables. If these variables are considered to be uncorrelated, then one can define a space U which includes all the different combinations of values these variables can have which lead to a concentration equal to the target concentration. Let us express this as

U = {X : X1 ... X300 are uncorrelated random variables and g(X) = 0}

If the random variables are correlated, we can similarly define a space Uc which includes all the combinations of correlated variables which make the performance function equal to zero, i.e.

Uc = {X : X1 ... X300 are correlated random variables and g(X) = 0}
Obviously Uc must be a subspace of U, since U is the space of all possible combinations while Uc is restricted to those combinations with a specific correlation. Another way to look upon these spaces is to consider the function g(X) = 0, which defines the limit state surface. In this approach U is the space of all points on the limit state surface, while Uc represents those points on the limit state surface whose coordinates exhibit the specified correlation. In other words, when working in the Uc space the limit state surface is not a surface anymore. However, the reliability method transforms the random variables into standard normal space, and the transformation g(X) → G(Y) leads to a limit state surface in standard normal space which is always a surface. This means that when working in Uc space the transformation will cause all points in this space to move towards each other and form a surface.
Such a movement can only be caused when the transformation contracts the original space. This contraction will not only produce a limit state surface, but also place it nearer to the origin in standard normal space, and it will therefore reduce the reliability index. Based on the above, we expect that the largest reliability index will be obtained when the random variables are uncorrelated, while the smallest reliability index will be obtained when the random variables are absolutely correlated. Absolute correlation means that the correlation scale λ → ∞. Practically, it means that we are using a global variable, and in such a case the limit state surface is a single point.
The contraction of the original space upon transformation into standard normal space, due to the correlation assigned to the random variables, has in addition a smoothing effect on the resulting limit state surface in standard normal space. This smoothing effect can practically be observed during the reliability index iteration process. In many cases the iteration process will not converge when no correlation is assigned to the random variables, indicating a highly irregular limit state surface. Once correlation information is assigned to the random variables, the iteration process converges without any problems.
The design point random variable values and the sensitivities obtained from the second
case are shown in figures 9.2.15 through 9.2.22. The results concerning the longitudinal
dispersivities have the same characteristics as seen in the first case. The only difference
is that they have slightly lower values.
The design point transverse dispersivities and their γ-sensitivities are seen in figures 9.2.16 and 9.2.18, respectively. The transverse dispersivities are sensitive only at the pollution source area, while the target node area is of no importance. The correlation effects are obvious when comparing the two plots. Compared to the mean value of 10m, the design point transverse dispersivities have not changed much. The maximum change is ca. 15%, which is approximately 4 times smaller than the maximum change in the design point longitudinal dispersivities.
Figures 9.2.20 and 9.2.21 show the transverse dispersivity mean value sensitivities and standard deviation sensitivities, respectively. Both plots show the same distribution as the γ-sensitivities. Upon normalization the mean value sensitivities obtain a maximum negative value of 0.033, which is 4 times smaller than the maximum γ-sensitivity, indicating that changes in the mean value are less significant than changes in the design point. The normalized standard deviation sensitivities are very small - a maximum negative value of 0.005 - showing that changes in the standard deviations will not influence the reliability index.
Figure 9.2.15. Design point longitudinal dispersivity distribution.
Figure 9.2.16. Design point transverse dispersivity distribution.
Figure 9.2.17. Longitudinal dispersivity γ-sensitivity distribution.
Figure 9.2.18. Transverse dispersivity γ-sensitivity distribution.
Figure 9.2.19. Mean value longitudinal dispersivity sensitivity distribution.
Figure 9.2.20. Mean value transverse dispersivity sensitivity distribution.
Figure 9.2.21. Standard deviation longitudinal dispersivity sensitivity distribution.
Figure 9.2.22. Standard deviation transverse dispersivity sensitivity distribution.
Figure 9.2.23. Concentration distribution. Design point solution.
9.3. 2D Horizontal model analysis
The Gardermoen aquifer was presented in section 9.1, where figures 9.1.1 and 9.1.2 show the Gardermoen area and the 2D horizontal model domain. Figure 9.3.1 shows the horizontal finite element grid used for the Gardermoen area. This figure also shows the boundary conditions implemented for the 2D horizontal model, as well as the position of the source nodes and target nodes that will be used in the reliability analysis. The target nodes have been named left (L) and right (R) target nodes, as shown in the figure.
Although the finite element grid has irregularly sized elements, it nevertheless belongs to the regular type of grids, since it has a constant number of element rows and element columns. The finite element grid has 32 element rows and 16 element columns, giving a total of 512 elements and 561 nodes. The positioning of the nodes is very irregular, resulting in a grid which is not evenly spaced. The reason for this is that many nodes have been positioned on the known hydraulic head contours. By positioning them so, no interpolation was required during the inverse modeling process.
Strictly speaking, 2D horizontal modeling cannot be used for the Gardermoen area, because - as we have already seen - the Gardermoen aquifer is inhomogeneous in the vertical direction (two layers with different hydraulic conductivities). In order to overcome this problem we have to simplify the actual geological characteristics of the aquifer by replacing the two layers with one layer which is the average of the two. There are many ways one can obtain the average, but none of them can be applied in this case because the thickness of each layer is not known. It was therefore necessary to use inverse procedures to obtain reasonable parameter values for the assumed average layer. The inverse procedures and parameter estimation methods employed for this purpose are presented and discussed in Appendix C. The resulting transmissivity distribution can be seen in figure 9.3.2. By using transmissivity values instead of hydraulic conductivity values we have avoided defining the thickness of the aquifer.
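The homogenization relation itself is simple: for horizontal flow the layers combine into a single transmissivity T = ΣKᵢbᵢ, which is the quantity the inverse model estimates directly. A minimal sketch follows; the layer thicknesses are placeholders, since they are precisely what is unknown here.

K = [8.64, 0.864]   # layer conductivities in m/d (from 10^-4 and 10^-5 m/s)
b = [20.0, 10.0]    # hypothetical layer thicknesses in m

T = sum(k_i * b_i for k_i, b_i in zip(K, b))  # combined transmissivity, m^2/d
print(T)                                      # 181.44 m^2/d for these thicknesses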
Figure 9.3.1. 2D horizontal FEM grid. [Legend: constant head boundary, no flux boundary, target nodes, source nodes.]
However, in order to continue with the solution of the transport problem, we need to know the aquifer thickness distribution. Since this is not known, further assumptions were required.
Two different assumptions about the aquifer thickness have been made, which have led to two groups of cases being analyzed. In the first group we assume that the aquifer hydraulic conductivity is constant throughout the aquifer, and as a result the transmissivity distribution observed in figure 9.3.2 is the result of changes in the aquifer thickness only. In the second group we define the aquifer thickness to change linearly with distance from West to East. With reference to figure 9.3.1, we assume that the thickness at the divide is approximately 25m while at the constant head boundary at the top of the figure the thickness is approximately 21m.
Figure 9.3.2. Transmissivity distribution as obtained from the parameter estimation model.
For each group two types of reliability problems have been defined:
In the first problem the source nodes are considered to be constant pollution sources with a concentration of c₀, and we are interested in finding the probability of the concentration at the target nodes being equal to or larger than 0.2c₀ after 3000 days since the activation of the pollution sources. The first performance function is used for this purpose and is defined as

g(X) = 0.2 - c(x, y, 3000)

where x and y are the coordinates of the target nodes L and R.
In the second problem the pollution sources are active for diverse periods of time, and the probability of the maximum concentration observed over a period of 5500 days at the target nodes being equal to or larger than 0.2c₀ is analyzed. The second performance function is used for this problem and is defined as

g(X) = 0.2 - c(x, y, t_max),    0 < t_max < 5500

where x and y are the target node coordinates.
Each problem requires statistical input for the random variables, but at the time these analyses were carried out, no information was available about the dispersivity parameters. It was necessary to assign reasonable statistical information for these parameters, and different assumptions have led to different reliability analysis results.
Obviously, too many assumptions have had to be made in the process of building up a 2D horizontal model for the Gardermoen aquifer. A more realistic model may be obtained once more information becomes available. It is therefore important to keep in mind that the results presented herein are not necessarily representative for the Gardermoen aquifer.
9.3.1. 2D Horizontal model with homogenized hydraulic conductivities
In this section the element hydraulic conductivities have all been set equal to 10 m/d. Using the element transmissivities from figure 9.3.2, the thickness of each element can be defined from the relationship bₑ = Tₑ/10. From a practical point of view this assumption means that the hydraulic head distribution is dominated by the aquifer thickness geometry.
Three cases have been analyzed and are presented in this section. In the first case we shall analyze the first problem (defined above) with the assumption of global random variables for the longitudinal dispersivity, the transverse dispersivity and the hydraulic conductivity. In the second case we shall analyze the second problem with global random variables for the parameters, while in the third case we analyze the first problem
but this time we assume the longitudinal and transverse dispersivities to be random
fields while the hydraulic conductivity is assumed to be deterministic.
Case 1: First problem with global random variables
The statistical input to PAGAP for this problem is as follows. The longitudinal dispersivity is assumed to have a mean value of 100m, a standard deviation of 100m and a normal distribution. The transverse dispersivity has a normal distribution with a mean value of 10m and a standard deviation of 10m, while the hydraulic conductivity is assumed to have a lognormal distribution with a mean value of 10 m/d and a standard deviation of 10 m/d.
Table 9.3.4. Global random variable results

Target Node   Variables       β          Probability   Iterations
Left          αL              1.243443   0.106852      6
              αL + αT         1.008617   0.156579      7
              αL + αT + K     1.008617   0.156579      7
Right         αL              9.251902   0.000000      6
              αL + αT         6.053244   0.000000      25
              αL + αT + K     6.053244   0.000000      25
Using the first performance function for the analysis of the first problem, with a time step Δt = 10 days and a consistent matrix formulation with θ = 1/2 for the time derivative, we obtained the results summarized in table 9.3.4. The deterministic solutions at the target nodes L (left) and R (right) are 0.121c₀ and 0.086c₀, respectively. The L target node has a global index number of 315 and coordinates (771.23, 2013.68), while the R target node has the global index number of 318 and coordinates (1032.61, 1982.67). The deterministic concentration distribution can be seen in figure 9.3.3.
For the L target node we notice two things. When the longitudinal dispersivity is the only global random variable, the reliability index obtained is larger than in the cases where the transverse dispersivity and the hydraulic conductivity are also global random variables. This seems to indicate that the effect of the transverse dispersivity plays a significant role in the analysis and cannot be ignored. We also notice that the reliability index obtained with a global random hydraulic conductivity is the same as the reliability index where it is deterministic. This indicates that the reliability index does not depend on the hydraulic conductivity. For the R target node the exact same observations can be made. Compared to the L target node reliability indices, the R target node indices are much larger, giving probabilities which are for all practical purposes equal to zero. The large number of iterations required for the R target node cases is due to the modified RF algorithm used for these cases, since the standard RF algorithm did not manage to converge.
Figure 9.3.4 shows the design point solutions obtained for the cases of the first problem.
Figures 9.3.4 A and B show the L and R target node design point solutions respectively
in the case where only the longitudinal dispersivity is a global random variable. In fig-
ure A, a design point value of 224.34m was obtained while the design point longitudinal
dispersivity value for figure B is 1025.19m. In both figures the transverse dispersivity
is equal to 10m. Although the high longitudinal dispersivity value in figure B has
caused the pollution plume to have an extensive spread in the flow direction, no signifi-
cant difference is observed in the transverse flow direction between figures A and B.
Since the flow velocity field is constant and the transport has a specific time duration,
the value of the transverse dispersivity will limit the amount of lateral spreading of the
pollution plume. Consider now a case where the target node is placed near the right or
left aquifer boundary. In such a case, the concentration at the target node will very
much depend on the transverse dispersivity, since it controls pollution transport normal
to the direction of flow. If the transverse dispersivity is treated as a deterministic pa-
rameter and is not assigned a value capable of transferring enough pollution to the target
node, then no matter which value is assigned to the longitudinal dispersivity, no design
point can be obtained. For the R target node case, a transverse dispersivity of 10m
seems to be very near the limit where no design point can be obtained.
Figure 9.3.3. Mean value solution after 3000 days.
Figure 9.3.4. A. Design point solution for L target node with αL as a global random variable. B.
Design point solution for R target node with αL as a global random variable. C. Design point so-
lution for L target node with αL + αT as global random variables. D. Design point solution for R
target node with αL + αT as global random variables.
Figures 9.3.4 C and D show the design point solutions when the transverse dispersivity
is a global random variable in addition to the longitudinal dispersivity. Figure C shows
the L target node case, while figure D the R target node case. The design point disper-
sivity values used in figure C are 181.98m and 4.12m for the longitudinal and transverse
dispersivity respectively. Figure D was obtained with the design point dispersivity val-
ues of a;. = 671.33m and lXr =30,OOm. It is rather easy to see the effects that these values
have on the pollution plume when we compare them with the A and B figures and figure
9.3.3 of the mean value deterministic solution.
The sensitivities obtained for the cases analyzed for the first problem reveal that the re-
liability index is equally sensitive to the design point values as it is to the mean values
used. However the importance of the standard deviation changes from case to case. In
the cases with only a global longitudinal dispersivity the standard deviation sensitivity
was higher than the γ-sensitivity and mean value sensitivity for both target nodes. For
the L target node it is 1.25 times larger, while for the R target node it is 9.14 times
larger.
For the two random variable cases and for the L target node, the standard deviation
sensitivity is smaller than the γ-sensitivity (i.e. 0.66 and 0.81 respectively), but for the R
target node it is 5.4 times higher.
The sensitivities for the 3 variable cases indicate that the hydraulic conductivity has no
impact on the reliability index. In both cases (L and R target nodes) the hydraulic con-
ductivity sensitivities are more than 10000 times smaller than the other sensitivities. This
is not surprising, because the velocity field is precipitation controlled. Independently of
the value of the hydraulic conductivity the same amount of water has to flow through
the aquifer, and therefore the hydraulic head distribution is changed accordingly to keep
the velocity field more or less constant.
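For reference, the three sensitivity measures discussed here are related, in the standard FORM setting with independent normal variables uᵢ = (xᵢ - μᵢ)/σᵢ and design point u*, by the usual first order expressions. This is a sketch of the textbook relations; PAGAP's exact definitions may include additional normalization:

```latex
\gamma_i = \frac{u_i^*}{\beta}, \qquad \beta = \lVert u^* \rVert, \qquad
\frac{\partial \beta}{\partial \mu_i} = -\frac{\gamma_i}{\sigma_i}, \qquad
\frac{\partial \beta}{\partial \sigma_i} = -\frac{\gamma_i^2\,\beta}{\sigma_i}.
```

These relations are consistent with the observations made here and later in this section: the standard deviation sensitivity is never positive, and a negative γ-sensitivity corresponds to a positive mean value sensitivity.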
There is one more thing that should be discussed, and that is the flow pattern observed.
The flow pattern is easily observed in figures 9.3.4 B and D due to the high longitudinal
dispersivity values. The original hydraulic head distribution suggested that the flow
lines passing through the pollution sources would start approximately at the point
(983.7, 215.4) while the observed flow lines seem to start approximately at point
(704.5, 116.3), i.e. ca. 300m to the left of where it should be. This error in the imple-
mented flow pattern is due to the assumptions made in the inverse process. By allowing
the hydraulic gradients in the lower left area of the aquifer to freely change in order to
keep the estimated hydraulic conductivities within reasonable values, we have changed
the flow pattern. Although the flow pattern between the pollution sources and the target
nodes does not seem to be affected very much by this manipulation, it will probably
have an effect on the concentration estimates at the target nodes. We expect that if the
original flow pattern was used, then the concentration distribution at the target node area
would be approximately the same as the one observed in figures 9.3.4. However, since
the transport flowlines used herein are longer than the original flowlines, we have to as-
sume that more water is transported through the source nodes now than would have been if
the original flow pattern was used. In other words, the original flow pattern would have
lower velocities and therefore higher transport times would be required to obtain the
distributions observed herein.
Case 2: Second problem with global random variables
In the second case we have two global random variables namely the longitudinal and
transverse dispersivity. The hydraulic conductivity has no effect on the results and has
therefore been omitted. The second performance function has been used to estimate the
probability of the target nodes obtaining a maximum concentration of 0.2c₀ in a period
of 5500 days when the source nodes have been active for different periods of time.
Specifically, we have analyzed cases where the sources have a duration of 10, 30, 60,
120, 180, 240, 300, 360, 420, and 480 days. The results are presented in the form of
plots where the X axis is the pollution source duration time. Figures 9.3.5 - 9.3.7
show the reliability index, the probability, the design point longitudinal and transverse
dispersivity values and the γ-sensitivity values obtained for the L target node.
Figure 9.3.5 shows - as expected - that the probability* of the L target node obtaining a
maximum of 0.2c₀ increases when the pollution sources are active for longer periods of
time. If the pollution source is active less than 240 days, the probability increases on
average by 0.00028/day, while after 240 days the probability increases at twice this
rate. Choosing the 240 day source duration time for obtaining these
rates is arbitrary. However, it does demonstrate that safety limits and norms need to be
defined. Should one employ an inexpensive method to deactivate the pollution source in
300 days or should one use a much more expensive remediation method which will give
the same results in 200 days? How significant is the 5% gained in probability when an
expensive method is used?
Figures 9.3.6 and 9.3.7 show the design point dispersivity values. Comparing the sensi-
tivities of the two figures we see that for small duration times the reliability index is
most sensitive to the longitudinal dispersivity.
Figure 9.3.5. Reliability index and probability as a function of source duration for the L target
node.

* One should keep in mind that the estimated reliability index is the first order reliability index,
which for small values may give bad probability estimates depending on the curvature of the
limit state surface at the design point.
Figure 9.3.6. Design point longitudinal dispersivities and γ-sensitivities versus source duration
for the L target node.
Figure 9.3.7. Design point transverse dispersivities and γ-sensitivities versus source duration
for the L target node.
As the duration time increases, the sensitivity to the transverse dispersivity increases
until, at a source duration time of approximately 430 days, the sensitivities become equal.
Thereafter the transverse dispersivity becomes more important than the longitudinal
dispersivity. For a source duration smaller than 200 days the longitudinal dispersivity
must have a value smaller than half of its mean value for the L target node to obtain a
maximum value of 0.2c₀ in a period of 5500 days. As the source duration time in-
creases, the design point dispersivity value increases too. The transverse dispersivity,
however, initially decreases until a value of ca. 7.5m is obtained for a source duration of
240-300 days, and then increases very rapidly. Obviously there must exist a source du-
ration time which will give the required maximum concentration at the L target node
without any change of the parameter values (i.e. design point values are equal to the
mean values). Extrapolation from the figures indicates that this source duration time
must be approximately at 560 days.
Although figures 9.3.6 and 9.3.7 show the behavior of the reliability index as a function
of source duration time, one wonders if these figures have any practical significance.
Since the estimated reliability indices have small values, one must be cautious in using
these values for probability estimates. However, the estimated reliability indices can still
be used in a qualitative manner. This is especially true in cases where we want to com-
pare events which are similar. For example, figure 9.3.5 shows that the reliability index
is reduced 4-5 times when the source duration time is allowed to increase from 10 days
to 480 days. It is reasonable to assume that the actual reliability indices would also show
a similar reduction, since the curvature of the limit state surface is expected to be similar
for the same type of problems. Comparisons between different types of problems can
not be made unless the reliability indices have values larger than approximately 2.5, in
which case the first order reliability index will be much more accurate.
The design point parameter values will always have the highest probability content in-
dependently of the uncertainties involved in the estimation of the reliability index. As an
example, let us take the 120 day source duration time case and analyze it with the as-
sumption that only αL is a random variable while αT is deterministic with a value equal
to 10m. First we assume a mean value and standard deviation equal to 100m and then a
mean value and standard deviation value of 50m. From figures 9.3.6 and 9.3.7 we know
that the highest probability content is obtained when the dispersivities have values of ca.
40m and 8m respectively. Since the transverse dispersivity is now deterministic and has
a value other than 8m, our one global variable analysis must result in a larger reliability
index. This is indeed the case since the PAGAP results give a reliability index of 0.662
while from figure 9.3.5 we have 0.635. The design point longitudinal dispersivity value
is 33.84m. In the second analysis the reliability index was much smaller with a value of
0.323, and the design point αL is exactly the same as in the previous case. This is ex-
pected in a one variable problem, since there can be only one parameter value leading to
a maximum of 0.2c₀ at the L target node. In the one variable problem the design point
dispersivity value has a deterministic value and is independent of the statistical input,
which will only affect the reliability index. Once we have more than one variable the
design point parameter values become dependent on the statistical input. For example,
if the 120 day source duration time problem used a mean value and standard deviation
of 50m for αL - half of the original values - and a mean value and standard deviation of
10m for αT - the same as the original values - then the design point parameter values
obtained are approximately 38m and 8.5m respectively. The reliability index for this
case is 0.282. Despite the fact that the design point parameter values have the deterministic
property of satisfying the performance function, they are not deterministic in nature
since they depend on the statistical input.
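The quoted indices can be checked by hand: with a single normal variable, β is simply the standardized distance from the mean to the design point. A minimal sketch, using the values from the text:

```python
# One-variable FORM check: beta = |x* - mu| / sigma for a normal variable.
x_star = 33.84                       # design point alpha_L from the analysis [m]
for mu, sigma in [(100.0, 100.0), (50.0, 50.0)]:
    print(abs(x_star - mu) / sigma)  # 0.662 and 0.323, matching the text
```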
From a practical point of view, the second problem analyzed above corresponds to a
situation where the only available information is the location of the pollution sources,
the initial concentration of the sources c₀, and that a maximum concentration registered
at a target point is 0.2c₀. We do not need to know when the pollution sources became
active, and we do not know for how long they were active. A deterministic approach for
finding the unknown pollution source duration would be to first obtain representative
dispersivity values and then run the deterministic model with diverse pollution source
duration times until a maximum of 0.2c₀ is obtained at the target point.
In other words, the duration time estimates will be as good as the dispersivity values
used. The reliability approach follows basically the same procedure, but the dispersivi-
ties are defined through statistical information and therefore the model is more flexible.
The analysis should not stop at the 480 day source duration time but continue at least
until a reliability index of 0.0 - corresponding to a probability of 50% - is obtained. We
would not only obtain the most probable source duration time, but also the design point
dispersivity values.
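A sketch of the deterministic search just described is given below; `peak_at` is a hypothetical wrapper around a deterministic model run, and the peak concentration is assumed to increase monotonically with source duration (as the figures in this section suggest):

```python
def duration_for_peak(peak_at, target=0.2, lo=0.0, hi=2000.0, tol=1.0):
    """Bisection on source duration d [days] until peak_at(d) hits `target`.

    peak_at(d) runs the deterministic model with source duration d and
    returns the peak target-node concentration as a fraction of c0.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if peak_at(mid) < target:   # longer source -> higher peak (assumed)
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```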
Figure 9.3.8. Reliability index and probability as a function of source duration for the R target
node.
Figure 9.3.9. Design point longitudinal dispersivities and γ-sensitivities versus source duration
for the R target node.
Figure 9.3.10. Design point transverse dispersivities and γ-sensitivities versus source duration
for the R target node.
The advantages of using the reliability model are obvious. However in this specific
case, PAGAP will give results which will be the same as the ones obtained by a deter-
ministic model using dispersivity values equal to the parameter mean values. Since
PAGAP is estimating a first order reliability index, it will not be able to estimate cor-
rectly when the reliability index becomes equal to 0.0. A second order reliability ap-
proach would be better suited for this task. The design point dispersivity values do not
depend on the order of the estimated reliability index and therefore continue to be the
design point parameter values.
Figures 9.3.8 - 9.3.10 show the reliability results obtained for the second problem for
the R target node. The reliability index and probability have the same form observed
for the L target node case, but in this case the scales of the plots show that the reliability
index - and therefore the probability - are less sensitive to the source duration time. If
these curves were plotted with the scales used for the L target node plots, they would be
fairly flat, indicating that the source duration has only a small effect on the results. In
other words, source duration time is not an important factor to consider when choosing a
remediation method.
The design point values and their sensitivities indicate clearly that the longitudinal dis-
persivity is dominating the analysis. The design point αL values are not only much
smaller than the mean value but are also not very sensitive to the pollution duration
time. The sensitivity values can be considered to be independent of the duration time
with a constant value of ca. -1.0. On the other hand the transverse dispersivities have
values close to the mean value and sensitivities which are practically equal to zero. Ac-
tually, for a source duration time of ca. 270 days the design point αT obtains a value
equal to the mean value and a sensitivity equal to zero. At the same point the design
point αL is equal to 18m. The probability of αL being smaller than 18m is equal to
0.206. In this case we expect the same probability to be obtained from the reliability
method and figure 9.3.8 seems to verify this.
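The quoted probability is a direct evaluation of the standard normal distribution function, a small check using the mean and standard deviation of 100m assigned to αL:

```python
from math import erf, sqrt

# P(alpha_L < 18 m) for a normal alpha_L with mean 100 m, std 100 m.
Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
print(Phi((18.0 - 100.0) / 100.0))   # ~0.206, as stated in the text
```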
Case 3: First problem with random field variables
We want to estimate the probability of the target node having a concentration equal or
larger than 0.2c₀ after 3000 days of pollution transport, where the pollution sources have
a constant concentration equal to c₀. The difference from case 1 is that we consider only
the L target node and that the dispersivities are considered to be random fields with
point statistics defined as: a mean value and standard deviation of 100m for αL, and a
mean value and standard deviation of 10m for αT. The correlation scale is set for both
random fields to be 400m in both horizontal and vertical directions and is described by
an exponential correlation function. The correlation information will produce a variance
reduction which will not be the same for each element since the elements do not have
the same size. Obviously this will cause the reliability results to differ from those of
case 1.
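A sketch of how such an inter-element correlation matrix can be assembled is given below. The separable exponential form is an assumption for illustration; PAGAP's exact expression (one of its five available correlation functions) may differ:

```python
import numpy as np

def correlation_matrix(centroids, lx=400.0, ly=400.0):
    """Inter-element correlation matrix from an exponential correlation function.

    centroids : (n, 2) array of element midpoints [m];
    lx, ly    : correlation scales in the x and y directions [m].
    """
    xy = np.asarray(centroids)
    dx = np.abs(xy[:, None, 0] - xy[None, :, 0])   # pairwise |x_i - x_j|
    dy = np.abs(xy[:, None, 1] - xy[None, :, 1])   # pairwise |y_i - y_j|
    return np.exp(-dx / lx - dy / ly)              # separable exponential form
```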
For each element we have two random variables, one for the element longitudinal dis-
persivity and one for the transverse dispersivity. This would give a total of 1024 ran-
dom variables. Unfortunately, the computer available could not handle a model with so
many random variables. In order to reduce this number it was assumed that elements
below the pollution source area and elements above the target node area would have lit-
tle effect on the results and can be treated as deterministic variables. More specifically,
the first 5 element rows from the bottom of the grid and the 10 first element rows from
the top of the grid are assumed to be deterministic with values equal to the mean values.
All other elements have random variable properties giving a total of 544 random vari-
ables.
PAGAP converged in 7 iterations to a reliability index of 1.120 giving a probability of
0.131. The reliability index is only slightly higher than the one obtained from the global
variable analysis for the L target node (i.e. 1.009). Figures 9.3.11 and 9.3.12 show the
design point longitudinal and transverse dispersivity distributions respectively. The cor-
relation effects are obvious in both figures. The highest longitudinal dispersivity area is
located near the pollution source area with values 70% above the mean value. At the
target node area the longitudinal dispersivity is lower than the mean value by 10%. The
transverse dispersivity shows a minimum area which does not correspond to the maxi-
mum longitudinal dispersivity area. The minimum transverse dispersivity area seems to
be located approximately in the mid-distance between the source nodes and the target
node and shows a reduction of more than 40% with respect to the mean value.
Figures 9.3.13 and 9.3.14 show the y-sensitivities for the longitudinal and transverse
dispersivities respectively. The highest longitudinal dispersivity sensitivity values are
observed above the pollution source area, while the lowest - negative - values at the tar-
get node area. The transverse dispersivities show a much more complex distribution.
The highest αT sensitivity values are approximately 4 times smaller than the highest αL
sensitivities and are located at the left side of the pollution sources and the right side of
the target node. The lowest - negative - values are approximately 50% smaller in value
than the respective αL values and form a pollution barrier for the left side of the aquifer.
Some small negative sensitivities are observed on the right side area as well. The distri-
bution of the design point values and the y-sensitivities resemble the ones observed for
the equivalent theoretical case study seen in chapter 8 for the asymmetric target node.
The sensitivity values show that the longitudinal dispersivity has a much higher influ-
ence on the reliability index than the transverse dispersivity. The sensitivities with re-
spect to the mean values used for the longitudinal and transverse dispersivities are given
in figures 9.3.15 and 9.3.16, respectively.
Figure 9.3.11. Design point longitudinal dispersivity distribution.
Figure 9.3.12. Design point transverse dispersivity distribution.
Figure 9.3.13. Longitudinal dispersivity γ-sensitivities.
Figure 9.3.14. Transverse dispersivity γ-sensitivities.
Figure 9.3.15. Longitudinal dispersivity mean value sensitivities.
Figure 9.3.16. Transverse dispersivity mean value sensitivities.
Figure 9.3.17. Longitudinal dispersivity standard deviation sensitivities.
Figure 9.3.18. Transverse dispersivity standard deviation sensitivities.
We once again observe their resemblance to the γ-sensitivities, with the only difference
being that negative γ-sensitivities correspond to positive mean value sensitivities and
vice versa. Upon normalization with the standard deviation, we obtain mean value sen-
sitivity values which are approximately 60% of the corresponding γ-sensitivity values.
The standard deviation sensitivities - seen in figures 9.3.17 and 9.3.18 - upon normali-
zation show different characteristics for the longitudinal dispersivities and the transverse
dispersivities. The longitudinal dispersivity standard deviation sensitivities are slightly
smaller than the mean value sensitivities and are mainly observed at the pollution source
area. All sensitivity values are negative showing that any positive change in the longi-
tudinal dispersivity standard deviation will reduce the reliability index. On the other
hand the transverse dispersivity standard deviation sensitivities are less than half of the
mean value sensitivities.
9.3.2. 2D Horizontal model with specific aquifer thickness
The only difference between this section and the previous section (9.3.1) is the defini-
tion of the aquifer thickness. In the previous section we defined the element thicknesses
in such a way that all element hydraulic conductivities would have a value of 10m/d. In
this section we assume the aquifer thickness is linearly increasing from the top of the
aquifer towards the bottom of the aquifer (Le. from East to West). More specifically, at
y =Om the depth is 25m while at y =3500m it is 20m. The actual calculations for each
element are made by first averaging the y coordinates of the nodes of the element and
then using the relationship
TH =25-
l
e 700
where THe is the thickness of element e, and Y
e
is the average coordinate. Dividing the
element transmissivity with the above thickness we obtain the hydraulic conductivity.
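As a small worked sketch of these two steps (the node coordinates and transmissivity are taken as given):

```python
def element_conductivity(y_nodes, T_e):
    """Element hydraulic conductivity from its transmissivity T_e [m2/d].

    Thickness varies linearly with the averaged y coordinate: 25 m at
    y = 0 and 20 m at y = 3500 m, per the relationship in the text.
    """
    y_e = sum(y_nodes) / len(y_nodes)   # average y coordinate of the element nodes
    th_e = 25.0 - y_e / 700.0           # element thickness TH_e [m]
    return T_e / th_e                   # hydraulic conductivity [m/d]
```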
The thicknesses and hydraulic conductivities obtained in this manner will give a more
realistic flow regime than the one used in the previous section. In this section the flow
regime is more realistic in the sense that the hydraulic conductivity distribution will
have an impact on the flow regime, while in the previous section the hydraulic conduc-
tivity had no effect on the flow regime which was controlled by the thickness and the
hydraulic gradients of the aquifer.
The FEM grid, boundary conditions and positions of the target nodes are the same as in
the previous section and can be seen in figure 9.3.1. The exact same cases analyzed in
the previous section will be analyzed in this section as well.
Case 1: First problem with global random variables
The longitudinal dispersivity is assigned a mean value and standard deviation of 100m,
while the transverse dispersivity has a mean value and standard deviation of 10m. The
first problem is to find the probability of the concentration at the target nodes exceeding
a value of 0.2c₀ in 3000 days after the activation of the pollution sources, where c₀ is the
constant concentration of the pollution sources. The target nodes are defined in figure
9.3.1 and are referred to as left (L) and right (R) target nodes.
Table 9.3.5. Global random variable results

Target Node   Variables   β          Probability   Iterations
Left          αL          0.590106   0.277560       5
              αL + αT     0.492312   0.311249       6
Right         αL          9.242265   0.000000       6
              αL + αT     4.993787   0.000000      17
Table 9.3.5 shows the PAGAP results obtained for this case. The reliability indices for
the L target node are approximately half of those obtained in the previous section, indi-
cating that transport is faster now. For the R target node we once again obtain very
high reliability indices, giving probabilities which are practically equal to zero. When αL
is the only global random variable, the reliability index is approximately the same as in
the previous section. When αL and αT are global random variables, the reliability index
is smaller by one unit compared to the one estimated in the previous section. This
change is not however significant enough to change the probability estimate. The high
number of iterations in this case is due to the modified RF algorithm used, as the stan-
dard RF failed to converge.
Figure 9.3.19A shows the deterministic concentration distribution after 3000 days, when
the mean values are used for the parameters, while 9.3.19B is the same figure as 9.3.3
showing the deterministic mean value solution obtained in the previous section for
comparison. Close inspection of these two figures shows that the pollutant has dispersed
more in figure A than in figure B. This can be observed when comparing the 0.2 contour
in both plots. In figure A the 0.2 contour line is slightly over the y=2000m mark while
in figure B it is under the mark, approximately at 1920m. At the pollution source area
we see that figure B shows a large high concentration area (defined by contour 0.9),
while the same area in figure A is much smaller.
Figure 9.3.19. A. Mean value concentration distribution after 3000 days for aquifer with a speci-
fied thickness. B. Mean value concentration distribution after 3000 days for aquifer with homo-
geneous hydraulic conductivities.
Figure 9.3.20. A. Design point solution for L target node with αL as a global random variable. B.
Design point solution for R target node with αL as a global random variable. C. Design point so-
lution for L target node with αL + αT as global random variables. D. Design point solution for R
target node with αL + αT as global random variables.
Since both plots have the same dispersivity values, the higher dispersion observed in
figure A must be due to a higher velocity at the pollution source area. The practical im-
portance of these differences is rather unclear. These are the solutions after 3000 days of
transport (ca. 8.33 years), and the transport distances differ by less than 100m. A
difference of 80m for the 0.2c₀ contour does not seem very much, especially
when considering how differently the element thicknesses have been defined. However,
the reliability indices for figure A are half of those for figure B, indicating that these dif-
ferences are considerable, or at least can not be ignored.
In figure 9.3.20 we see the design point solution for each case. Figures A and C show
the one and two global variable cases for the L target node, while figures B and D show
the respective design point solutions for the R target node. All four plots have the same
general characteristics observed and discussed for the same problem in the previous
section, seen in figure 9.3.4. There are differences of course. Since advective trans-
port is slightly larger than in the previous section, the design point longitudinal disper-
sivities are smaller than the ones obtained for the homogeneous hydraulic conductivity
case in section 9.3.1.
As a result, pollution dispersion below the pollution source area is reduced compared to
that of figure 9.3.4. It is interesting to notice the similarities between plots 9.3.20B
and 9.3.4B. The reliability indices for these cases are approximately the same, and so
are the design point values for αL, i.e. 1024m and 1025m respectively. The higher ve-
locity field in figure 9.3.20B is clear, but its influence on the concentration at target
node R seems to be insignificant. In both cases the transverse dispersivity has a deter-
ministic value equal to 10m and we therefore expect a slightly higher transverse disper-
sion in 9.3.20B since the velocity is slightly higher. On the other hand, the higher ad-
vective transport will reduce the amount of pollution available for transverse dispersion.
We have to conclude that small changes in the velocity field will have very little influ-
ence in this particular case. Once the transverse dispersivity becomes a random
variable, figures 9.3.20D and 9.3.4D show that this variable is equally important. The
transverse dispersivity obtained a design point value of 29.9m and 30m, respectively,
for figures 9.3.20D and 9.3.4D, showing once again the small influence of the velocity
field. The design point αL values change considerably in this case (557m and 671m re-
spectively), which obviously causes the change in the reliability indices estimated for
each case.
Case 2: Second problem with global random variables
In this case we want to find the probability of the concentration at the target nodes ex-
ceeding a maximum concentration of 0.2c₀ in a period of 5500 days, when the pollution
sources have a time duration of 10, 30, 60, 120, 180, 240, 300, 360, 420 and 480 days.
This is exactly the same problem analyzed in section 9.3.1. The results obtained from
PAGAP for the L target node case can be seen in figures 9.3.21 through 9.3.23.
Figure 9.3.21. Reliability index and probability as a function of source duration for the L target
node.
Figure 9.3.22. Design point longitudinal dispersivities and γ-sensitivities versus source duration
for the L target node.
Figure 9.3.23. Design point transverse dispersivities and γ-sensitivities versus source duration
for the L target node.
Figure 9.3.21 shows the reliability indices and the respective probabilities as a function
of the source duration. Up to a source duration time of 240 days the reliability index
decreases very little. Between 240 and 360 days of source duration time, the reliability
index decreases approximately twice as fast, and between the 360 and 480 day duration
times, the decrease is doubled once again. However the overall decrease is not very
large, causing a probability increase from 17.5% to 23.4%.
The design point dispersivity values and their γ-sensitivities are given in figures 9.3.22
and 9.3.23. These figures in addition to the usual duration times include results for du-
ration times of 440, 450 and 460 days. The reason for this addition was that a dramatic
change was observed after the 420 day duration time in all three plots, and a more de-
tailed plot could assist us in understanding what was causing the change. The plots
clearly show that the characteristics of the analyses change dramatically from the 440
day duration time to the 450 day duration time. The reliability index curve loses its
smoothness in this area and shows a very small change. The changes in the design
point dispersivity values and their sensitivities, however, are much more dramatic.
Up to a duration time of 440 days the longitudinal dispersivities dominate the analysis
with high sensitivity values, while the transverse dispersivity plays a very minor role.
After the 450 day duration time, the opposite is observed. The transverse dispersivity is
dominating the analysis while the importance of the longitudinal dispersivity is reduced
considerably. In order to understand what is causing this change, figures 9.3.24 and
9.3.25 show the design point solutions for different duration times. In the first figure we
see the pollution plume for a duration of 10, 120, 240 and 360 days. We observe that
the transverse dispersion is approximately the same in all contour plots. As the time
duration increases the longitudinal dispersion increases which is clearly seen in the de-
sign point solutions for 240 and 360 days. The 10 and 120 day duration plots look very
much the same, and this might seem unreasonable considering the large difference in
duration times.
Figure 9.3.24. Design point solutions. A. Duration time 10 days. B. Duration time 120 days. C.
Duration time 240 days. D. Duration time 360 days.
Figure 9.3.25. Design point solutions. A. Duration time 420 days. B. Duration time 440 days. C.
Duration time 460 days. D. Duration time 480 days.
However, this can be explained when one remembers that t_max - the time at which the
maximum concentration is obtained - is part of the solution and varies for different du-
ration times. For the 10 day duration time t_max = 4450 days, while for the 120 day
duration time it is 4440 days. The estimated t_max values are not very precise since a
time step of Δt = 10 days has been used. For the 240 and 360 day source duration
times, t_max has respectively values of 4420 and 4410 days.
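Conceptually, t_max is located by scanning the transient solution over the whole period. A sketch, with a hypothetical `step` function advancing the FEM solution one time step:

```python
def find_t_max(step, c0, n_steps, dt, node):
    """Peak concentration at `node` and the time t_max at which it occurs.

    step(c) advances the nodal concentration vector one time step of
    length dt (hypothetical interface standing in for the FEM solver).
    """
    c, c_max, t_max = c0, c0[node], 0.0
    for k in range(1, n_steps + 1):
        c = step(c)
        if c[node] > c_max:            # record a new running maximum
            c_max, t_max = c[node], k * dt
    return c_max, t_max
```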
In figure 9.3.25 we see the design point solutions for duration times of 420, 440, 460 and
480 days. For the 420 and 440 day durations, the solutions have the same characteris-
tics as the ones seen in the previous figure and are clearly a continuation of them. The
460 and 480 day duration results have clearly different characteristics. The transverse
dispersion is considerably smaller while the longitudinal dispersion is much bigger. In
both cases the pollution plume has reached the upper boundary. The t_max times change
considerably as well. The 420 and 440 day duration times both have a maximum time of
4400 days, while the 460 and 480 day duration times have 4350 and 4340 days respectively.
The reason for these changes in characteristics, should most likely be attributed to the
fact that the pollution plume reaches the upper boundary when the pollution duration
time becomes larger than 450 days. The advective pollution flux at the upper boundary
is automatically taken into consideration by the model, but the dispersive flux is not.
Actually, PAGAP considers the dispersive flux to be zero at the upper boundary. Kin-
zelbach (1986) suggests two approaches for correcting the pollution flux at the upper
boundary. The first is to implement a third-kind boundary condition and treat the upper
boundary as a transmission boundary. The second approach is to increase the dimen-
sions of the aquifer domain in order to avoid having the pollution plume reaching the
boundary. Besides the above approaches one could also consider using infinite finite
elements. The implementation of such approaches was prohibited by the limited time
available for this project, and therefore one can not conclusively say that the change of
characteristics is caused entirely by boundary errors. One could also consider as the
cause of this behavior the large time step used in the analysis (i.e. Δt = 10 days). How-
ever, a reanalysis with a pollution duration time of 460 days with a time step of Δt = 2
days shows that the time step has only a small influence on the results obtained. The
reliability index was estimated to be 0.764605 which differs only slightly from the
0.764593 reliability index with the 10 day time step.
Figures 9.3.26 through 9.3.28 show the R target node results. The source duration time
has a very small effect on the reliability results obtained. The reliability index de-
creases from a value of 0.995 for a source duration time of 10 days to a value of 0.957
for a duration time of 480 days.
Figure 9.3.26. Reliability index and probability as a function of source duration for the R target
node.
Figure 9.3.27. Design point longitudinal dispersivities and γ-sensitivities versus source duration
for the R target node.
Figure 9.3.28. Design point transverse dispersivities and γ-sensitivities versus source duration
for the R target node.
In terms of probability this is a change of approximately from 0.16 to 0.17. The longi-
tudinal dispersivities have γ-sensitivities which are practically constant and equal to
-1.0, emphasizing the small effects of the source duration time. The design point αL val-
ues are very small indeed. The transverse dispersivities have γ-sensitivities which are
approximately 100 times smaller than the αL sensitivities. For a 90 day pollution source
duration the transverse dispersivity has a design point value equal to the mean value,
and in general the design point αT values remain near the mean value. The plots show
the same characteristics as the plots of figures 9.3.8 to 9.3.10 for the homogeneous hy-
draulic conductivity case. In this case the longitudinal dispersivity plays an even more
dominant role in the analysis.
It is interesting to notice that in the analysis of the second problem for both L and R tar-
get nodes, we obtained design point αL values which were smaller than the design point
αT values. This is a very unreasonable result. We expect the ratio αT/αL to be equal to
or smaller than 1.0. Field measurements suggest a mean value of 0.1 for the dispersivity
ratio. In other words, although the design point values obtained have the highest prob-
ability content, they can not be encountered in nature. In order to obtain more reason-
able design point values, we must either assign some correlation between the longitudi-
nal and transverse dispersivity, or introduce the dispersivity ratio as a random variable.
Assigning an absolute correlation between the dispersivities, we can ensure that the de-
sign point dispersivity values will have the same ratio as the mean values assigned to
the random variables. One should keep in mind that a design point with a specific dis-
persivity ratio might not exist. The problem of the dispersivity ratio becomes clearer if
we consider replacing αT with αL·αR in PAGAP, where αR = αT/αL, thus obtaining a
model where αL and αR are the random variables. Assigning an appropriate probability
distribution to αR is not easy, because it has to be in the range 0.0 < αR ≤ 1.0. A lognor-
mal probability distribution would work fairly well as long as the standard deviation is
small enough to ensure that a value of 1.0 is practically unlikely to be obtained.
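A sketch of such a choice: matching lognormal parameters to a mean ratio of 0.1 (the field value quoted above) and an illustrative standard deviation of 0.05 (an assumption, not a value from the analysis) shows that the probability of exceeding 1.0 is indeed negligible:

```python
import math

mean, std = 0.1, 0.05                        # illustrative moments for a_R
s2 = math.log(1.0 + (std / mean) ** 2)       # lognormal sigma^2 from the moments
m = math.log(mean) - 0.5 * s2                # lognormal mu from the moments
# P(a_R >= 1) = P(ln a_R >= 0) = 1 - Phi(-m / s)
z = -m / math.sqrt(s2)
print(0.5 * math.erfc(z / math.sqrt(2.0)))   # on the order of 1e-7
```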
Chapter 9. The Gardermoen Case Study
-------- - ~ ~ - - - ._- - ~ ~ - -
page 371
In order to see the effects of taking the dispersivity ratio into consideration, a case has
been analyzed with a source duration of 120 days for the R target node, where the dis-
persivities are considered to be absolutely correlated. The first attempts at obtaining re-
liability results under the above conditions failed. The modified RF algorithm failed to
converge because negative longitudinal dispersivity values were obtained in the sec-
ond iteration step. In order to increase the flexibility of the analysis, the time period
considered was increased from 5500 days to 10000 days. This change does not affect
the results obtained for the uncorrelated case. The uncorrelated case gave a reliability
index of 0.992 with a design point αL = 0.756m and αT = 10.006m, with
t_max = 4480 days. The correlated analysis gave a reliability index equal to 0.986 with de-
sign point αL = 1.440m and αT = 0.144m with t_max = 9690 days. As expected, when the
dispersivities are absolutely correlated the ratio αT/αL becomes equal to 0.1, which is the
ratio of the mean values. Figures 9.3.29 through 9.3.32 give the results obtained for
correlation coefficients between 0.0 and 1.0. Figure 9.3.29 shows that the reliability
index is not sensitive to changes in the correlation coefficient. The reliability index in-
creases for correlation coefficients up to a value of 0.2 and then decreases. This change
is associated with a large change in t_max, which jumps from a value of 4500 days for 0.2
to a value of 9710 days for 0.3. Initially, reliability results were obtained for correlation
coefficients up to 0.4 using a time period of 5500 days, and these showed the reliability
index increasing. With a time period of 5500 days reliability results could not be ob-
tained for correlation coefficients above 0.5, because negative longitudinal values were
obtained during RF iteration steps.
Figures 9.3.30 and 9.3.31 show the design point dispersivity values and their
γ-sensitivities. The design point longitudinal dispersivities decrease up to a value of 0.2
for the correlation coefficient, and then increase to obtain the highest value when the
dispersivities are absolutely correlated. The design point transverse dispersivities show
an almost linear reduction from a value of ca. 10m for uncorrelated dispersivities to a
value of 0.144m for absolute correlation.
Figure 9.3.29. Reliability index and probability as a function of the correlation coefficient between
αL and αT for the R target node with a source duration of 120 days.
Figure 9.3.30. Design point longitudinal dispersivities and γ-sensitivities versus correlation coef-
ficient between αL and αT for the R target node with a source duration of 120 days.
"'
"
~ .\
/ ~
V \ ~
..,-/
\
"'
I
\ ~ /
\
~ /
-
~
V
~
"'
~
10.0
9.0
~
8.0
~
l:'!
7.0
CI>
C.
In
'C
6.0
CI>
l:'!
CI>
5.0 >
In
C
g
4.0
>.
'iii
5 3.0
U;
0
:;;
2.0
1.0
0.0
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9
0.025
0.020
0.015
0.010
0.005 ~
">
0.000 ~
c
CI>
-0.005 en
-0.010
-0.015
-0.020
-0.025
1.0
Correlation between longitudinal and transverse dlspersivity
Figure 9.3.31. Design point transverse dispersivities and y-sensitivities verses correlation coeffi-
cient between (XL and (XT for the Rtarget node with a source duration of 120 days.
Figure 9.3.32. αT/αL ratio versus correlation coefficient between αL and αT for the R target node
with a source duration of 120 days.
The γ-sensitivities have been normalized and do not show correlation effects, and there-
fore one can not directly compare design point values and γ-sensitivities as done in pre-
vious cases. The longitudinal sensitivities can all be considered to be practically equal
to -1.0. The transverse sensitivities are more than 50 times smaller than the longitudinal
sensitivities and obviously have a very small influence on the reliability index. This
also means that the changes in the design point transverse dispersivity values are purely
the product of the correlation coefficient.
Finally, figure 9.3.32 shows the change of the ratio between the transverse and longi-
tudinal dispersivity. We see that ratios below 1.0 are obtained only when the correla-
tion coefficient is larger than 0.87.
Case 3: First problem with random field variables
In the third case analyzed we want to estimate the probability of the L target node hav-
ing a concentration equal or larger than 0.2c₀ after 3000 days of pollution transport,
where the pollution sources have a constant concentration equal to c₀. This is the same
probability estimated in the first problem. The point statistics for the random fields are
probability estimated in the first problem. The point statistics for the random fields are
the same as previously. A mean value and standard deviation of 100m is assigned to the
longitudinal dispersivity with a correlation scale equal to 400m in the horizontal and
vertical direction expressed through an exponential correlation function. The transverse
dispersivity is assigned a mean value and standard deviation of 10m with the same cor-
relation function and correlation scale used for the longitudinal dispersivity. Due to the
large number of random variables we have once again assumed that the 5 first element
rows from the bottom (West side) of the grid and the 10 first element rows from the top
of the grid are deterministic with values equal to the mean value. With this assumption
we have a total of 544 random variables.
PAGAP converged in 5 iterations to a reliability index of 0.566 giving a probability of
0.286. The reliability index is approximately half of that obtained for the same problem
with the homogeneous hydraulic conductivity assumption in section 9.3.1. Figures
9.3.33 through 9.3.40 show the design point dispersivity distributions, γ-sensitivities,
mean value and standard deviation sensitivities obtained. The general characteristics of
these figures are the same as the ones seen in the equivalent homogeneous hydraulic
conductivity case. The main differences are due to the smaller values observed in this
case. Figures 9.3.33 and 9.3.34 show the design point longitudinal and transverse dis-
persivity distributions, respectively. The correlation effects are obvious in both figures
and resemble the distributions seen in figures 9.3.11 and 9.3.12. The highest longitudi-
nal dispersivity area is located once again above the pollution source area with values
approximately 35% above the mean value. The transverse dispersivity shows a mini-
mum area which does not correspond to the maximum longitudinal dispersivity area.
The same was observed in figure 9.3.12. The minimum transverse dispersivity area
seems to be located approximately in the mid-distance between the source nodes and the
target node and shows a reduction of more than 20% with respect to the mean value. In
figures 9.3.11 and 9.3.12 of the previous section the maximum changes were 70% above
mean value and 40% below mean value respectively. In this case the maximum design
point values seem to be approximately half as big but occupy much larger areas.
Figures 9.3.35 and 9.3.36 show the γ-sensitivities for the longitudinal and transverse
dispersivities respectively. Although both sensitivity plots show the same characteris-
tics as observed in figures 9.3.13 and 9.3.14 of the homogeneous hydraulic conductivity
case, there are some differences, especially in the longitudinal dispersivity γ-sensitivities.
The highest longitudinal dispersivity sensitivity values are observed above the pollution
source area (i.e. above 0.273), while the lowest - negative - values are observed at the target node
area (i.e. below -0.185). In figure 9.3.13 these values were 0.218 and -0.190 respec-
tively, suggesting that the pollution source area was only slightly more important than
the target node area. In this case however, the difference between the sensitivities of the
source node area and target node area is much larger, and the importance of the source
node area has increased considerably.
Figure 9.3.33. Design point longitudinal dispersivity distribution.
Figure 9.3.34. Design point transverse dispersivity distribution.
Figure 9.3.35. Longitudinal dispersivity γ-sensitivities.
Figure 9.3.36. Transverse dispersivity γ-sensitivities.
Figure 9.3.37. Longitudinal dispersivity mean value sensitivities.
Figure 9.3.38. Transverse dispersivity mean value sensitivities.
Figure 9.3.39. Longitudinal dispersivity standard deviation sensitivities.
Figure 9.3.40. Transverse dispersivity standard deviation sensitivities.
The transverse dispersivities show the complex distribution seen also in figure 9.3.14.
Actually the two figures are very similar, with the difference that in figure 9.3.36 the γ-
sensitivities seem to be reduced by approximately 15% compared to those in figure
9.3.14. The sensitivity values show that in this case the longitudinal dispersivity has an
even higher influence on the reliability index than the one observed in the previous sec-
tion.
The sensitivities with respect to the mean values used for the longitudinal and transverse
dispersivities are given in figures 9.3.37 and 9.3.38 respectively. The distributions ob-
served are the same as the ones observed for the γ-sensitivities. Upon normalization
with the standard deviation, we obtain mean value sensitivity values which are ap-
proximately 50% of the corresponding γ-sensitivity values. The standard deviation
sensitivities - seen in figures 9.3.39 and 9.3.40 - upon normalization show different
characteristics for the longitudinal dispersivities and the transverse dispersivities. The
longitudinal dispersivity standard deviation sensitivities are ca. 50% smaller than the
mean value sensitivities and are mainly observed at the pollution source area. All sensi-
tivity values are negative showing that any positive change in the longitudinal disper-
sivity standard deviation will reduce the reliability index. On the other hand the trans-
verse dispersivity standard deviation sensitivities are more than 5 times smaller than the
mean value sensitivities. In the previous section the mean value and standard deviation
sensitivities were influencing the reliability index much more than they do in this case.
9.4. Conclusions
Initially, the purpose of incorporating a field case study as a part of this thesis, was to
use actual information to obtain the point statistics of the aquifer properties, which
would then be used in PAGAP for the stochastic analysis of an actual problem. The
Gardermoen data were very promising in this respect. Unfortunately, only limited in-
formation from the Gardermoen area was available at the time the analyses presented in
this chapter took place. The information available at the time consisted of a cross sec-
tion hydrostratigraphic profile showing 4 layers with their approximate thicknesses,
mean value hydraulic conductivities, and a groundwater table map prepared by Østmo
in 1976. With this limited information, even a simple deterministic model could not be
properly implemented.
Therefore, in order to be able to proceed with the case study, it was necessary to intro-
duce a number of assumptions. Since the available data were not sufficient for an actual
case study, semi-actual case studies were carried out as part of this thesis. The most
valuable data available were those of the groundwater table map. It was decided to keep
these data unchanged, which in tum resulted in assuming the hydraulic conductivities to
be deterministic. All other parameters, namely longitudinal dispersivity, transverse dis-
persivity and decay constant have been treated as random variables in a global sense or
a random field sense.
Since no directly applicable results could be obtained from these analyses about the
Gardermoen aquifer characteristics, the case studies have been used mainly for assess-
ing the flexibility and performance of PAGAP under simulated realistic conditions.
In the case studies of the vertical cross section we see the impact of the decay elements
on the reliability indices obtained. Although the decay constants have very small values,
the reliability index becomes 3 to 4 times higher compared to the case with no decay.
The sensitivities show that the longitudinal dispersivity is the controlling parameter.
However, since the hydraulic conductivities are deterministic, we can not evaluate the
significance of such a result. When correlation is assigned to the decay parameters, the
reliability index decreases. This behavior has been observed in all cases and has been
explained as a shrinking of the standard normal space with increasing correlation scale.
The correlation effect is also responsible for the different results obtained when global
random variables and when random field variables are used. A global random variable
can be assumed to be equivalent to a random field with infinite correlation scale. There-
fore the highest possible shrinking of the standard normal space takes place which re-
sults in the smallest possible reliability index.
For the 2D horizontal aquifer model many different cases have been analyzed. Two dif-
ferent assumptions about the aquifer thickness have been made, and comparisons show
that the reliability index is greatly influenced by these assumptions. In the first case the
aquifer thickness was chosen under the assumption that the hydraulic conductivity had a
constant value throughout the aquifer (i.e. homogeneous and isotropic aquifer). When
the hydraulic conductivity was treated as a global random variable, it did not have any
influence at all on the reliability index, because any change in its value caused a propor-
tional change in the hydraulic head distribution, resulting in a constant velocity field. In
the second case the thickness of the aquifer was assumed to decrease linearly from 25 m
at the West side of the aquifer to 20m at the East side. The reliabilities obtained were
less than 50% of those of the previous case.
Another type of problem that was analyzed assumed the pollution source to be transient,
and the probability of a target concentration being exceeded was estimated for different
source duration times. Since the hydraulic conductivity was not included in this analy-
sis (deterministic), the longitudinal dispersivity became the controlling parameter. The
results show that the probability of exceeding the target concentration increases as the
source is active for longer periods of time. Based on the curves obtained, one can make
choices about time dependent remediation strategies. These results also show that the
position of the target nodes in relation to the plume transport path, play a significant role
in the analysis. Of course, this role could be reduced considerably if the hydraulic con-
ductivities were also treated as random variables, giving the ability of altering the flow
regime in the aquifer.
In many cases the design point transverse and longitudinal dispersivities had a ratio
which was much larger than 1.0. This observation led to a case study where correla-
tion was assigned between the two dispersivities to see if this would keep the dispersiv-
ity ratio within reasonable values. The results show that the dispersivities have to be
highly correlated in order to obtain ratios below 1.0. This suggests that, in some cases
at least, a high correlation should be assigned to the two dispersivities. Alternatively, the
transverse dispersivity could be defined through the longitudinal dispersivity and the
dispersivity ratio, in order to have a better control over the allowable dispersivity ratio.
10. Summary, Conclusions and Recommendations for Future Work
10.1. Summary and Conclusions
In order to solve hydrogeological problems, we must combine the principles governing
the processes of groundwater flow and pollutant transport, with the knowledge provided
by understanding the nature of the geological formations in which these processes take
place.
The governing principles of groundwater flow and pollutant transport are well known
and are expressed in the form of mathematical equations, which provide the basis for
solving any type of groundwater pollution problem.
Understanding the nature of the geological formations and expressing this in a mathe-
matical model is far more difficult. Extensive information about aquifers and their
properties has been gathered during the last 2-3 decades and shows that these properties
can not be expressed satisfactorily with deterministic mathematical formulas. Stochastic
fields are more appropriate for describing the spatial variability these properties exhibit.
However, stochastic fields cannot be incorporated directly into the deterministic equa-
tions of groundwater flow and pollutant transport, so new approaches and methods have
been developed. In general, methods which make use of the statistical properties of
hydrogeological parameters and conditions, are referred to as stochastic methods or sto-
chastic models.
A stochastic groundwater pollution model has been developed called PAGAP
(Probability Analysis of Groundwater And Pollution). PAGAP combines the finite
element method (FEM) for the solution of 2D steady state groundwater flow and pollut-
ant transport, and the first order reliability method (FORM), to make reliability and
probability estimates of hazardous events taking place as a result of pollution transport
in aquifers. Element hydraulic conductivities, longitudinal dispersivities, transverse
dispersivities and decay constants, as well as constant head nodes may be treated as
random variables, in which case at least second order statistical information must be
assigned to them. PAGAP can assist in the formulation of the correlation matrix with
the use of an algorithm which calculates the correlation matrix by choosing one of the
five available correlation functions and giving the correlation scale in X and Y direc-
tions. Three types of problems may be analyzed using PAGAP. For problems with a
constant pollution source one can estimate the probability of the concentration exceed-
ing a specific safety value at a given point in the aquifer and at a specific time. For
problems with a transient pollution source and for a given point in the aquifer and for a
specific period of time, one can either estimate the probability of the maximum concen-
tration exceeding a specific safety value, or one can estimate the probability of the
residence time exceeding a predefined safety value. The output from PAGAP, besides
reliability and probability estimates, includes diverse sensitivity estimates showing the
influence of the input statistical values and the relative importance of each parameter on
the reliability and probability estimates obtained. PAGAP is a tool that should be used
mainly for assessment purposes and risk analysis tasks. It may be used to assess the ef-
fectiveness of a given remediation technique or study the effects of the parameter dis-
tribution for a given problem (sensitivity study).
From a numerical point of view, two significant schemes have been introduced in
PAGAP which have not been used before in reliability models. A direct approach was
used in obtaining the gradient of the performance function with respect to the random
variables, and a modified Rackwitz and Fiessler (RF) algorithm was developed and
implemented for obtaining the design point.
The gradient of the performance function obtained with the direct approach was com-
pared with that obtained from the difference approach, which had previously been used
by Jang et al. (1990), with respect to speed and accuracy. In terms of accuracy the dif-
ference approximation seems to work very well as long as the perturbation used for the
estimation is chosen carefully. Large perturbations (10^-1 to 10^-2) lead to bad estimates,
while very small perturbations (10^-8 to 10^-10) are compromised by truncation errors in
the computer. For a variable value of 10.0 a perturbation value of 10^-5 to 10^-6 gives the
best estimates.
better than the direct approach. In problems where the random variables are dispersivi-
ties, the direct approach is faster as long as the time step used is large. For the case
study used in the comparison tests, a time step larger than approximately 0.3 is required
for the direct approach to be faster than the difference approach. For hydraulic con-
ductivity random variables the direct approach was much slower than the difference ap-
proach. This was mainly caused by memory management problems, which made it nec-
essary to recalculate a large amount of information at each time step, because there was
not enough computer memory available for storing it.
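To make the comparison concrete, the following is a minimal sketch of the difference approach, under the assumption of a simple forward difference; the function names are illustrative, and each evaluation of g() stands for a complete FEM transport solution, which is what makes this approach expensive.

#include <stddef.h>

/* Sketch (function names are illustrative assumptions): forward
   difference approximation of the gradient of a performance function g
   with respect to n random variables x. The perturbation is scaled to
   the variable value, in line with the observation above that for a
   variable value of 10.0 an absolute perturbation of 10^-5 to 10^-6
   works best. */
void diff_gradient(double (*g)(const double *x, size_t n),
                   double *x, size_t n, double *grad)
{
    double g0 = g(x, n);
    for (size_t j = 0; j < n; j++) {
        double save = x[j];
        double h = (save != 0.0 ? save : 1.0) * 1.0e-6; /* scaled step */
        x[j] = save + h;
        grad[j] = (g(x, n) - g0) / h; /* one extra g() per variable */
        x[j] = save;                  /* restore the variable        */
    }
}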
The modified RF algorithm introduced in this project performed very well. Without
this algorithm the analysis of the asymmetric target node cases in chapter 8 and many
problems in chapter 9 would not have been feasible. Compared to the Der Kiureghian
RF algorithm, the modified RF algorithm is less flexible. However, it requires less
computations of the gradient of the performance function than the Der Kiureghian al-
gorithm and is only slightly slower than the original RF algorithm.
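For reference, the update step which both the original and the modified algorithm are built around can be sketched as follows. This is the standard Rackwitz and Fiessler (HL-RF) step in standard normal space; the restrictions added by the modified algorithm developed in this project are intentionally not shown.

#include <stddef.h>

/* Sketch: one standard Rackwitz-Fiessler iteration step in standard
   normal space. Given the current point u, the performance function
   value g and its gradient dg at u, the next iteration point is
       u_next = [(dg . u - g) / |dg|^2] dg                          */
void rf_step(const double *u, double g, const double *dg,
             size_t n, double *u_next)
{
    double dot = 0.0, norm2 = 0.0;
    for (size_t i = 0; i < n; i++) {
        dot   += dg[i] * u[i];
        norm2 += dg[i] * dg[i];
    }
    double c = (dot - g) / norm2;
    for (size_t i = 0; i < n; i++)
        u_next[i] = c * dg[i];
}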
In chapter 8 theoretical case studies were used to compare PAGAP with results from
the Jang et al. (1990) reliability model CALREL-TRANS. Considering the differences
between the two models, i.e. different FEM formulations, different RF algorithms and
different correlation input algorithms, the first order reliability estimates were compa-
rable. Monte Carlo simulations from CALREL-TRANS did not compare well with first
order reliability estimates obtained from either model. However, second order reliabil-
ity estimates and Monte Carlo simulation from CALREL-TRANS compared well, sug-
gesting that second order reliability estimates are required for good probability ap-
proximations. The same results show that the reliability index is very much dependent
on the FEM formulation. Using a consistent versus lumped formulation for the FEM
equations has a very large impact on the reliability estimates, although the impact on
the deterministic results is rather small.
PAGAP was also used for some field case studies employing actual data from the
Gardermoen project. A number of assumptions were required to establish a reasonable
groundwater flow regime fitting the limited available data and some hypothetical pollu-
tion problems were defined and analyzed. A 2D vertical cross section problem was
constructed and analyzed first without decay elements and then with decay elements. A
constant pollution source was defined and a target node was placed 1100 m from the
source. The probability of the concentration exceeding a value of 20% of the pollution
source concentration was calculated. The decay coefficient was set to 0.005 and at first
was considered to be a deterministic variable. Field tests actually indicate a value of
0.01 to be more realistic (Breedveld, 1994). The deterministic results indicate that the
introduction of decay reduced the concentration by 50% at the target node, while the
reliability results show a reliability index reduction of more than 75% which resulted in
a probability reduction from 24% to 0.17%. In the next case the element decay con-
stants were assumed to be random variables. A correlation scale of 200m was assigned
to them with an exponential correlation function. This resulted in a reliability index of
1.82 and a probability of 3.4%, while the uncorrelated decay random variables resulted
in a reliability index of 2.57 and a probability 0.5%. The overall results suggest that
decay will reduce significantly the possibility of high concentrations at the target node.
2D horizontal case studies were also analyzed. These case studies were divided into
two groups based on assumptions made about the thickness of the aquifer at Garder-
moen. In the first group it is assumed that the hydraulic conductivities have a constant
value throughout the aquifer and that the transmissivity distribution is a function only of
the aquifer thickness distribution. In the second group the aquifer thickness is 20m at
the East end and linearly increasing to 25m at the West end. Two target nodes were
considered, one lying near the main pollution transport path and the other lying almost
outside the pollution plume reach. The pollution source and target nodes were placed at
positions similar to those used in the vertical case studies. Using constant pollution
sources, the highest probabilities were obtained for cases of the second group giving a
probability of 31% for the concentration exceeding a value of 20% of the initial pollu-
tion concentration. For the first group the probability was only 16% for the same
problem. The laterally placed target node resulted in probabilities which were smaller
than 10^-6 for both groups. Using transient pollution sources with varying active time
durations, we obtained reliability results showing the reliability index and probability as
a function of source duration time. The longer the pollution source is active, the more
probable it becomes for the concentration at the target node to exceed predefined target
values. The results confirmed our expectations but also showed that one can identify
ranges of source duration times where the probability changes very little, while in other
ranges the probability becomes much more sensitive to changes in source duration time.
Finally, a case study was made where the effects of assigning a correlation coefficient
between the longitudinal and transverse dispersivity was examined. By assigning a cor-
relation coefficient of 1.0 to the dispersivities we are forcing the ratio between them to
be constant throughout the analysis and equal to the ratio between their mean values.
The highest probability is obtained when the correlation coefficient is equal to 1.0. For
correlation values from 0.0 to 0.2 the probability decreased and thereafter it began in-
creasing. In the case analyzed the smallest probability was 16.02% and the highest
16.22% indicating that the probability was not significantly changed by the
cross-correlation between longitudinal and transverse dispersivity. This was however a
case study where the dispersivity of the whole aquifer was defined through two global
random variables, one for the longitudinal and one for the transverse dispersivity, and
therefore one can not reach definite conclusions about the results. However, it seems
reasonable that, if the ratio between the dispersivities has a value around 0.1, conditions
must be set in the reliability analysis which do not allow this ratio to change freely but
keep it near the original value. One way to accomplish this is by assigning a correlation
coefficient of 1.0, or slightly smaller, between the two dispersivities, or maybe by in-
troducing a ratio coefficient into the equations and eliminating the transverse dispersiv-
ity as a variable all together.
10.2. Recommendations for Future Work
There are several aspects which need to be modified or developed further in order to
improve the performance of PAGAP. Firstly and most importantly, a new reliability
iteration algorithm is needed. A dependable iteration algorithm is necessary in order to
obtain reliable performance from PAGAP or from any other reliability method model.
Neither the standard Rackwitz and Fiessler, nor the modified Rackwitz and Fiessler al-
gorithm developed during this project, manages to handle "all" types of problems. The
incapability of the iteration algorithm to converge is one of the most frustrating situa-
tions which arises when using PAGAP. In some cases the algorithm might be able to
converge, but if a very large number of iterations is required, this becomes impractical.
Many convergence problems can be solved by placing proper restrictions on the free-
dom with which the next iteration point is chosen. There are cases however where the
algorithms simply can not converge, and therefore alternative algorithms should be pur-
sued.
Completing the testing of the SORM algorithm would also be a good improvement, al-
though the tests which have been made so far are not very encouraging as to the effec-
tiveness of the point fitting algorithm which has been employed. The SORM probabil-
ity estimate requires that the FORM estimate is already known and that a paraboloid is
fitted to the performance function. The amount of "work" required for the fitting is ap-
---------
Chapter 10 SUII1ltlaIY,Conclusions and Recommendations page 391
proximately equal to 4-6 additional RF iterations. For some of the case studies ana-
lyzed in this thesis on the DECstation 5000, this translates to an additional 16-24 hours
of computing time. From a practical point of view it seems unreasonable to use a whole
day just to get a better probability estimate. On the other hand, a good probability
analysis model should give good reliability estimates. One should also consider the fact
that moderate to high probabilities obtained from Monte Carlo Simulation require ap-
proximately the same amount of computation effort as SORM probabilities.
A more general correlation matrix generation algorithm would also be a positive im-
provement. The Vanmarcke solution used for this purpose is not entirely adequate,
since too many assumptions are used in its formulation. A geostatistical approach or
the one suggested by Gottschalk (1993) could be employed for this purpose.
A very useful feature which should be implemented into PAGAP is the ability to use
different element configurations for the finite element part and the reliability part.
Implementation of such a feature could possibly make the first version of PAGAP -
using the direct approach for calculating the gradient of the performance function -
much faster. The simplest way to implement such a feature is by grouping, i.e. to be
able to define a random variable as a group of FEM elements. Grouping in principle is
easy to implement, but obtaining a correlation matrix for groups with irregular shapes is
a serious difficulty.
Finally, it would be interesting to see if a one time step approximation of the gradient of
the performance function can result in equations which give good reliability estimates.
If such an approach is possible, then the reliability method's performance will be im-
mensely improved. The basic idea is to use the finite element method with a proper
time step to avoid instabilities in the numerical solution, but then use the direct equa-
tions for the evaluation of the gradient of the performance function with only one time
step.
Obviously many other improvements can be made to PAGAP. A proper user-interface
for the program should be developed, and the several preprocessors which have been
developed should be integrated into PAGAP. Improvements can also be made to the
overall performance of PAGAP by optimizing the programming code, although there is
not very much room for improvements in this area. The UNIRAS graphics library can
be linked to a proper X Windows interface and allow the presentation of multiple plots
on the screen, instead of one at a time as it does now.
References
Ababou, R., Three dimensional flow in random porous media, Ph.D. thesis, Mass. Inst.
of Technology (MIT), Cambridge, 1988.
Abramowitz, M. and I.A. Stegun, Handbook of Mathematical Functions, Dover Publications, Inc., New York, 1972.
Allen, J.R.L., A review of the origin and characteristics of recent alluvial sediments, Sedimentology, v. 5, pp. 89-191, 1965.
Bear, J. and A. Verruijt, Modeling Groundwater Flow and Pollution, D. Reidel Publishing Company, Dordrecht, Holland, 1987.
Bear, J., Dynamics of fluids in porous media, American Elsevier Pub. Co. Inc., 1972.
Birkeland, B., T. Lyche and R. Winther, Partielle Differensiallikninger; Analytiske og Numeriske Metoder, (Del I og Del II), Matematisk Institutt 1988, Universitetet i Oslo.
Bjerager, P., Probability integration by directional simulation, Journal of Engineering Mechanics, ASCE, 114(8), pp. 1285-1302, Aug. 1988.
Breitung, K., Asymptotic approximation for multinormal integrals, Journal of Engineering Mechanics, ASCE, 110(3), pp. 357-366, 1984.
Cawlfield, J.D., and N. Sitar, Application of first-order reliability to stochastic finite element analysis of groundwater flow, Report No. UCB/GT/87-01, June 1987.
Choot, Gary E.B., Stochastic underseepage analysis in dams, Ph.D. thesis, Mass. Inst. of Technology (MIT), Cambridge, 1980.
Cornell, C.A., A probability-based structural code, J. of American Concrete Institute, 66(12), pp. 974-985, 1969.
Dagan, G., A note on Higher-Order Corrections of the Head Covariances in Steady State Aquifer Flow, Water Resour. Res., 21(4), pp. 573-578, April 1985.
Dagan, G., Solute transport in heterogeneous porous formations, J. Fluid Mech., 145, pp. 151-177, 1984.
Dagan, G., Statistical theory of ground water flow and transport: pore to laboratory, laboratory to formation, and formation to regional scale, Water Resour. Res., 22, pp. 120S-134S, 1986.
Dagan, G., Stochastic modeling of groundwater flow by unconditional and conditional probabilities: 1. Conditional simulation and the direct problem, Water Resour. Res., 18, pp. 813-833, 1982.
Dagan, G., Stochastic modeling of groundwater flow by unconditional and conditional probabilities: 2. The solute transport, Water Resour. Res., 18, pp. 835-848, 1982.
Dagan, G., Theory of solute transport by groundwater, Ann. Rev. Fluid Mech., 19, pp. 183-215, 1987.
Dagan, G., Time-dependent macrodispersion for solute transport in anisotropic heterogeneous aquifers, Water Resour. Res., pp. 1491-1500, 1988.
Delhomme, J.P., Kriging in the hydrosciences, Adv. Water Resour., 1(5), pp. 251-266, 1978.
Delhomme, J.P., Spatial variability and uncertainty in groundwater flow parameters: a geostatistical approach, Water Resour. Res., 15(2), pp. 269-280, 1979.
Der Kiureghian, A., and J-B. Ke, The stochastic finite element method in structural reliability, Probabilistic Engineering Mechanics, 3(2), pp. 83-91, 1988.
Der Kiureghian, A., H-Z. Lin, and S-J. Hwang, Second order reliability approximations, Journal of Engineering Mechanics, ASCE, 113(8), pp. 1208-1225, Aug. 1987.
Der Kiureghian, A., and Pei-Ling Liu, Structural reliability under incomplete probability information, Journal of Engineering Mechanics, ASCE, 112(1), pp. 85-104, 1986.
Freeze, R.A., A stochastic-conceptual analysis of one-dimensional groundwater flow in non-uniform homogeneous media, Water Resour. Res., 11, pp. 724-741, 1975.
Freeze, R.A., and J.A. Cherry, Groundwater, Prentice-Hall, Inc., 1979.
Gelhar, L.W., Stochastic Subsurface Hydrology, Prentice-Hall, Inc., 1993.
Gelhar, L.W., Stochastic subsurface hydrology from theory to applications, Water Resour. Res., 22(9), pp. 135S-145S, 1986.
Gelhar, L.W., and C.L. Axness, Three dimensional stochastic analysis of macrodispersion in a stratified aquifer, Water Resour. Res., 19, pp. 161-180, 1983.
Gollwitzer, S., and R. Rackwitz, An efficient numerical solution to the multinormal integral, Probabilistic Engineering Mechanics, 3(2), pp. 98-101, 1988.
Gollwitzer, S., and R. Rackwitz, First order reliability of structural systems, 4'th International conference on structural safety and reliability, ICOSSAR'85.
Gollwitzer, S., T. Abdo and R. Rackwitz, FORM (First Order Reliability Method) manual, RCP GmbH, München, 1988.
Golub, G.H. and C.F. Van Loan, Matrix computations, second edition, The Johns Hopkins University Press, 1989.
Haan, C.T., Statistical Methods in Hydrology, Iowa State University Press, Ames, Iowa, 351 p., 1977.
Hachich, W., and E.H. Vanmarcke, Probabilistic updating of pore pressure fields, J. of Geotech. Eng. Div., ASCE, 109(3), 1983.
Hansteen, O.E., Reducing uncertainties related to offshore geotechnical engineering - The stochastic finite element method and its application to geotechnical engineering, Norwegian Geotechnical Institute, Report 514160-8, 17 April 1990.
Harr, M.E., Stochastic analysis of contaminant flow in unsaturated porous media, Environmental Geotechnology, Usem & Acar (eds), Balkema, Rotterdam, pp. 43-47, 1992.
Harrop-Williams, K., Random nature of soil porosity and related properties, Journal of Engineering Mechanics, 115(5), May 1989.
Hasofer, A.M., and N. Lind, Exact and Invariant Second-Moment code format, Journal of Engineering Mechanics, ASCE, 100, Feb. 1974.
Hoeksema, R.J., and P.K. Kitanidis, An application of the geostatistical approach to the inverse problem in two-dimensional groundwater modeling, Water Resour. Res., 20(7), pp. 1003-1020, 1984.
Hoeksema, R.J., and P.K. Kitanidis, Analysis of the spatial structure of properties of selected aquifers, Water Resour. Res., 21(4), 1985.
Hohenbichler, M., and R. Rackwitz, Improvement of second order reliability estimates by importance sampling, Journal of Engineering Mechanics, ASCE, 114(12), pp. 2195-2199, 1988.
Hohenbichler, M., and R. Rackwitz, Non-normal dependent vectors in structural safety, Journal of Engineering Mechanics, ASCE, 107(6), pp. 1227-1238, Dec. 1981.
Indelman, P. and G. Dagan, Upscaling of permeability of anisotropic heterogeneous formations, 1. The general framework, Water Resour. Res., 29(4), pp. 917-923, Apr. 1993.
Indelman, P. and G. Dagan, Upscaling of permeability of anisotropic heterogeneous formations, 2. General Structure and small perturbation analysis, Water Resour. Res., 29(4), pp. 925-933, Apr. 1993.
Jang, Y-S., N. Sitar, and A. Der Kiureghian, Reliability analysis of contaminant transport in saturated porous media, Water Resour. Res., 30(8), pp. 2435-2448, Aug. 1994.
Jang, Y-S., N. Sitar, and A. Der Kiureghian, Reliability approach to probabilistic modeling of contaminant transport, University of California, Department of Civil Engineering, Report no. UCB/GT-90/03, July 1990.
Johnson, C., Numerical solutions of partial differential equations by the finite element method, Studentlitteratur, Lund, Sweden, 1987.
Jørgensen, P., and Østmo, S.R., Hydrogeology in the Romerike area, Southern Norway, Norwegian Geological Investigations (NGU) Bulletin 418, pp. 19-26, 1990.
Kernighan, Brian W. and Dennis M. Ritchie, The C Programming Language, Second Edition, Prentice Hall Software Series, 1987.
Kinzelbach, W., Groundwater Modeling - Developments in Water Science series Vol. 25, Elsevier Science Publishers B.V., 1986.
LaVenue, M., R.W. Andrews, and B.S. Ramarao, Groundwater travel time uncertainty analysis using sensitivity derivatives, Water Resour. Res., 25(7), pp. 1551-1566, July 1989.
Lin, H.-Z., and A. Der Kiureghian, "Second order system reliability using directional simulation", Reliability and Risk analysis in civil engineering 2, ICASP5, ed. by N.C. Lind, University of Waterloo, Ontario, Canada, pp. 930-936, 1987.
Liu, W.K., G.H. Besterfield and T. Belytschko, Variational approach to probabilistic finite elements, Journal of Engineering Mechanics, 114(12), Dec. 1988.
Liu, W.K., A. Mani, and Ted Belytschko, Finite Element Methods in Probabilistic Mechanics, Probabilistic Engineering Mechanics, 2(4), pp. 201-213, 1987.
Liu, W.K., Ted Belytschko, and A. Mani, Random Field Finite Elements, International Journal for Numerical Methods in Engineering, 23, pp. 1831-1845, 1986.
Madsen, H.O., S. Krenk and N.C. Lind, Methods of Structural Safety, Prentice-Hall Inc., Englewood Cliffs, NJ, 1986.
Marsily, G., Quantitative Hydrogeology, Groundwater Hydrology for Engineers, Paris School of Mines, Fontainebleau, France, Academic Press Inc., pp. 284-337, 1986.
Matheron, G., The intrinsic random functions and their applications, Adv. Appl. Prob., 5, pp. 438-468, 1973.
Matheron, G., The theory of regionalised variables and its applications, Paris School of Mines, Cah. Cent. Morphologie Math., 5, Fontainebleau, 1971.
Matheron, G., Principles of Geostatistics, Econ. Geol., 58, pp. 1246-1266, 1963.
Nadim, F., Reliability analyses for offshore foundations - Estimation of soil variability by Kriging (Users manual for computer programs "DFIT" and "KRIG"), Norwegian Geotechnical Institute, Report 51411-6, 31 December 1987.
Nadim, F., Reliability analyses of offshore foundations - Probabilistic site description strategy, Norwegian Geotechnical Institute, Research Report 51411-4, 16 December 1986.
Overeem, I., The hydrostratigraphy of the ice-contact Trandum delta in southern Norway, The environment of the subsurface, the Gardermoen project, Report series C No. 4, University of Oslo, 1994.
Pinder, G.F. and W.G. Gray, Finite Element Simulation in Surface and Subsurface Hydrology, Academic Press, Inc., London, 1977.
Press, W.H., B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, Numerical Recipes - The Art of Scientific Computing (FORTRAN Version), Cambridge University Press, 1989.
Press, W.H., B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, Numerical Recipes - The Art of Scientific Computing (C Version), Cambridge University Press, 1993.
Rackwitz, R., and B. Fiessler, Structural reliability under combined load sequences, Computers and Structures, 9, pp. 489-494, 1978.
Rosenblatt, M., Remarks on a multivariate transformation, Annals of Mathematical Statistics, 23, pp. 470-472, 1952.
Sitar, N., J.D. Cawlfield, and A. Der Kiureghian, First order reliability approach to stochastic analysis of subsurface flow and contaminant transport, Water Resour. Res., 23(5), pp. 794-804, 1987.
Smith, L., A stochastic analysis of steady state groundwater flow in a bounded domain, Ph.D. thesis, Univ. of British Columbia, Vancouver, Canada, 1978.
Smith, L., and R.A. Freeze, Stochastic analysis of steady state groundwater flow in a bounded domain, 1, One dimensional simulations, Water Resour. Res., 15(3), pp. 521-528, 1979a.
Sønsterudbraten, S., Et hydrogeologisk studie av de distale deler av Trandum is-kontakt delta, Master thesis in geology, The environment of the subsurface, the Gardermoen project, Report series C No. 3, University of Oslo, 1994.
Sudicky, E.A., J.A. Cherry, and E.O. Frind, Migration of contaminants in ground water at a landfill: A case study, 7, A natural gradient dispersion test, Journal of Hydrology, 63, pp. 81-108, 1983.
Tang, D.H., F.W. Schwartz and L. Smith, Stochastic modeling of mass transport in a random velocity field, Water Resour. Res., 18(2), pp. 231-244, Apr. 1982.
Todd, D.K., Groundwater Hydrology, John Wiley & Sons, Inc., 1980.
Todd, J., Survey of numerical analysis, McGraw-Hill, New York, N.Y., 1962.
Townley, L.R., and J.L. Wilson, Computationally efficient algorithm for parameter estimation and uncertainty propagation in numerical models of groundwater flow, Water Resour. Res., 21(12), pp. 1851-1860, 1985.
Tuttle, K.J., T.S. Pedersen, S. Sønsterudbraten, I. Overeem and P. Aagaard, A 2-D geological and hydrogeological geometric model of the "research traverse" of the Trandum delta. Delta structure, Sediment distribution and hydrostratigraphy, The environment of the subsurface, the Gardermoen project, Report series B No. 2, University of Oslo, 1994.
Tvedt, L., Second order probability by an exact integral, Lecture Notes in Engineering, 48, P. Thoft-Christensen (ed.), Springer-Verlag, Berlin, pp. 377-384, 1988.
Vanmarcke, E., Probabilistic modeling of soil profiles, Journal of the Geotechnical Engineering Division, ASCE, GT11, Nov. 1977.
Vanmarcke, E., Random Fields, Analysis and Synthesis, MIT Press, second printing, 1984.
Vanmarcke, E.H., On the scale of fluctuation of the random functions, Research Report R79-19, MIT Dept. of Civil Eng., Cambridge, Mass., 1979.
Vanmarcke, E.H., Random Fields, MIT Press, Cambridge, Mass., 1983.
Vanmarcke, E., M. Shinozuka, S. Nakagiri, G.I. Schueller and M. Grigoriu, Random Fields and Stochastic Finite Elements, Structural Safety, 3(3+4), Aug. 1986.
Veneziano, D., R. Kulkarni, G. Luster, G. Rao, and A. Salhotra, Improving the efficiency of Monte Carlo simulation for groundwater transport models, Proc. of the confer. on Geostatistics, sensitivity and uncertainty methods for groundwater flow and Radionuclide transport modeling, Battelle Press, San Francisco, pp. 155-172, 1987.
Wagner, B.J., and S.M. Gorelick, Optimal ground water management under parameter uncertainty, Water Resour. Res., 23, pp. 1162-1174, 1987.
Warren, J.E., and F.F. Skiba, Macroscopic Dispersion, Soc. Petrol. Eng. J., pp. 215-230, Sept. 1964.
Zienkiewicz, O.C. and R.L. Taylor, The Finite Element Method, 4th ed., Volume 1: Basic Formulation and Linear Problems, McGraw-Hill International (UK), 1989.
Zienkiewicz, O.C. and R.L. Taylor, The Finite Element Method, 4th ed., Volume 2: Solid and Fluid Mechanics, Dynamics and Non-linearity, McGraw-Hill International (UK), 1991.
Zienkiewicz, O.C. and K. Morgan, Finite Elements and Approximation, John Wiley & Sons, Inc., 1983.
Østmo, S.R., Gardermoen, Quaternary map, Scale 1:20000 CQR 051052-20, Norwegian Geological Investigations (NGU), 1976.
Østmo, S.R., Hydrogeological map of Øvre Romerike, Groundwater in porous material between Jessheim and Hurdalsjøen, Scale 1:20000, Norwegian Geological Investigations (NGU), 1976.
Appendix A
Probability Estimation
When the reliability index β has been calculated, an approximation of the corresponding
probability is given as P_f = Φ(-β), where Φ(·) is the standard normal cumulative distri-
bution function given by the equation

\[
P[X \le x] = \Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\,dt \qquad (A.1)
\]
Since equation A.1 can not be integrated analytically, numerical integration methods
may be used to obtain the needed result.
In the Gaussian quadrature technique, the integral of a function f(x) in the interval
[-1, 1] can be approximated by using a weighted mean of n particular values of the
function evaluated at particular points. To be more specific, Gauss defined n specific
points x_i and n weights w_i for which

\[
\int_{-1}^{1} f(x)\,dx = \sum_{i=1}^{n} w_i f(x_i) + R \qquad (A.2)
\]

where R is the residual and equal to

\[
R = \frac{2^{2n+1}\,(n!)^4}{(2n+1)\left[(2n)!\right]^3}\, f^{(2n)}(\xi) \qquad (A.3)
\]
Equation A.3 shows that if f(x) is a polynomial of degree 2n-1 or smaller, then the re-
sidual is zero (since the 2n'th derivative will be zero), and equation A.2 will be exact.
The number of points used will determine the precision of the approximation.
Extensive tables giving the weights w_i and the points x_i for different values of n can be
found in the literature (for example Abramowitz and Stegun, 1970). In order to use
these values for integrals over arbitrary intervals we must modify equation A.2 into the
form

\[
\int_a^b f(y)\,dy \approx \frac{b-a}{2} \sum_{i=1}^{n} w_i f(y_i) \qquad (A.4)
\]

and

\[
y_i = \left(\frac{b-a}{2}\right) x_i + \left(\frac{b+a}{2}\right) \qquad (A.5)
\]

where x_i and w_i are the same as for equation A.2.
The PAGAP program implements Gaussian quadrature with six points, for which the x_i
and w_i are given in table A.1 (the points occur in symmetric pairs ±x_i).

Table A.1. Gauss points and weight factors for n=6.

    x_i                    w_i
    0.238619186083197      0.467913934572691
    0.661209386466265      0.360761573048139
    0.932469514203152      0.171324492379170
When using Gaussian quadrature for estimating Φ(-β), we are facing two problems.
First, the interval of integration is [-∞, -β], and second, the precision obtained from a six
point Gaussian quadrature for this case will not be sufficient. In order to obtain a well
defined interval of integration we observe that

\[
\Phi(-\beta) = 1 - \Phi(\beta) = 1 - \Phi(0) - \int_0^{\beta} \varphi(x)\,dx = 0.5 - \int_0^{\beta} \varphi(x)\,dx
\]

The interval now has become [0, β] and is well defined. In order to obtain a better
precision, the interval [0, β] is divided into 32 equal intervals [0, a_1], [a_1, a_2], ..., [a_31, β]
and six point Gaussian quadrature is used to evaluate the integral in each of the 32 inter-
vals. The sum of the values obtained for each interval gives the value of the integral
over [0, β].
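A minimal sketch of the procedure in code form is given below. The function names are assumptions; the points, weights, interval transformation (equations A.4 and A.5) and the 32-interval subdivision follow the description above.

#include <math.h>

/* Six-point Gauss points (symmetric about zero) and weights, table A.1. */
static const double xg[3] = { 0.238619186083197,
                              0.661209386466265,
                              0.932469514203152 };
static const double wg[3] = { 0.467913934572691,
                              0.360761573048139,
                              0.171324492379170 };

/* Standard normal density. */
static double norm_pdf(double t)
{
    const double pi = 3.14159265358979323846;
    return exp(-0.5 * t * t) / sqrt(2.0 * pi);
}

/* Phi(-beta) = 0.5 - integral of the density over [0, beta], evaluated
   with six-point Gaussian quadrature on each of 32 equal subintervals. */
double norm_cdf_neg(double beta)
{
    double sum = 0.0;
    const double h = beta / 32.0;
    for (int k = 0; k < 32; k++) {
        double mid  = (k + 0.5) * h;  /* (b+a)/2 of subinterval        */
        double half = 0.5 * h;        /* (b-a)/2 of subinterval        */
        for (int i = 0; i < 3; i++)   /* each Gauss point used as +/-  */
            sum += half * wg[i] * (norm_pdf(mid + half * xg[i]) +
                                   norm_pdf(mid - half * xg[i]));
    }
    return 0.5 - sum;
}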
Appendix B
Choleski Factorisation
A non-singular matrix A can be expressed as the product of a lower triangular matrix L
and a unit upper triangular matrix U so A = LU. This is actually a usual LU factoriza-
tion, but Choleski provides a way to directly evaluate A^{-1} from L^{-1} and U^{-1}. He also
gives a shortened procedure which can be followed if A is a symmetrical matrix. It is
possible in this case to express A in the form

\[
\mathbf{A} = \mathbf{L}\mathbf{L}^T \qquad (B.1)
\]

where L is a lower triangular matrix. The elements of this matrix are given by the
equations

\[
l_{ii} = \left( a_{ii} - \sum_{k=1}^{i-1} l_{ik}^2 \right)^{1/2} \qquad (B.2a)
\]
\[
l_{ij} = \frac{1}{l_{jj}} \left( a_{ij} - \sum_{k=1}^{j-1} l_{ik} l_{jk} \right), \quad i > j \qquad (B.2b)
\]
\[
l_{ij} = 0, \quad i < j \qquad (B.2c)
\]

The inverse of L (L^{-1} = M = {m_ij}) is given by

\[
m_{ii} = \frac{1}{l_{ii}} \qquad (B.3a)
\]
\[
m_{ij} = -\frac{1}{l_{ii}} \sum_{k=j}^{i-1} l_{ik} m_{kj}, \quad i > j \qquad (B.3b)
\]
\[
m_{ij} = 0, \quad i < j \qquad (B.3c)
\]
The main problem with using the Choleski method for decomposing the correlation co-
efficient matrix is that if two variables are linearly dependent - meaning that their corre-
lation coefficient is equal to 1.0 - then the term in the brackets of equation B.2a will
result in either zero or a negative number. If the result is negative, an error will occur
when calculating the root in B.2a. If the result is zero, the problem will occur when
calculating B.2b, since a division by zero takes place. To avoid this problem, instead of
using 1.0 as an absolute correlation value between two variables we can use the value
0.999999... The number of 9's that we can use will depend on the accuracy of the
computer one is using and on the order with which the random variables are defined.
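A compact sketch of equations B.2 and B.3 in code form follows. The function name and the full-matrix storage are choices made for clarity, and the near-singular situation discussed above surfaces as a simple non-positive pivot check.

#include <math.h>

/* Sketch: Choleski factorization A = L L^T of a symmetric positive
   definite n-by-n matrix (equation B.2), followed by inversion of the
   lower triangular factor (equation B.3). Matrices are stored row-major
   in full n*n arrays. Returns -1 if a non-positive pivot is met, i.e.
   the case of a correlation coefficient equal to 1.0 discussed above. */
int choleski(int n, const double *a, double *l, double *m)
{
    for (int i = 0; i < n; i++) {
        for (int j = 0; j <= i; j++) {
            double s = a[i * n + j];
            for (int k = 0; k < j; k++)
                s -= l[i * n + k] * l[j * n + k];
            if (i == j) {
                if (s <= 0.0) return -1;         /* B.2a would fail */
                l[i * n + j] = sqrt(s);          /* B.2a            */
            } else {
                l[i * n + j] = s / l[j * n + j]; /* B.2b            */
            }
        }
        for (int j = i + 1; j < n; j++) l[i * n + j] = 0.0; /* B.2c */
    }
    for (int i = 0; i < n; i++) {                /* M = L^-1        */
        m[i * n + i] = 1.0 / l[i * n + i];       /* B.3a            */
        for (int j = 0; j < i; j++) {
            double s = 0.0;
            for (int k = j; k < i; k++)
                s += l[i * n + k] * m[k * n + j];
            m[i * n + j] = -s / l[i * n + i];    /* B.3b            */
        }
        for (int j = i + 1; j < n; j++) m[i * n + j] = 0.0; /* B.3c */
    }
    return 0;
}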
Appendix C
Calibration - Inverse Groundwater Modeling
The purpose of groundwater models is to imitate as closely as possible the groundwater
flow conditions found in nature. To construct a steady state groundwater model, we
must define the geometry of the aquifer, the boundary conditions, and the hydraulic
characteristics of the aquifer, i.e. the hydraulic conductivity distribution over the aqui-
fer.
Defining the boundary conditions is always difficult. The modeler has some freedom in
choosing the geometry of the aquifer and can use this freedom to minimize the un-
known boundary conditions, but there is a limit. Boundary conditions have a direct in-
fluence on the solution obtained and too much manipulation of the boundary conditions
eventually leads to a direct manipulation of the results obtained.
Finding the hydraulic conductivity distribution is the main problem to be solved. Hy-
draulic conductivity measurements are expensive, and one never has enough measure-
ments to cover the distribution over the whole aquifer in a satisfactory manner. In order
to make the most out of the available measurements one turns to other sources of in-
formation. Geological and geophysical information is commonly used to identify geo-
logical formations which have common features and therefore might have similar hy-
draulic properties. In any case, an uncertain hydraulic conductivity distribution is fi-
nally obtained and can be employed for modeling purposes.
Since the model must imitate nature, one must verify that the model is capable of doing
so before using it for any other purpose, like predicting the outcome of future events.
The verification process is straightforward, at least in principle. One simply needs to
compare the model results with actual hydraulic head measurements. Considering the
uncertainty of the input used for the model, it is unlikely that a good comparison will be
obtained, and therefore the parameter values and perhaps even the boundary condition
values will have to be altered in order to obtain a satisfactory fit between estimated and
measured hydraulic heads. This fitting process is called calibration of the model. The
straightforward way for calibrating a model is the trial and error approach. When
changing parameter values, always keep Kinzelbach's rule in mind: "... Change only one
parameter at a time...". The trial and error approach will work well if the geologi-
cal information suggests that the aquifer can be divided into a few units, each of which can
be described by a single parameter value. If the geology is complex and the hydraulic
conductivities vary considerably from place to place, then the trial and error approach is
difficult and time consuming and other approaches must be used.
In the Gardermoen case study described in chapter 9, the vertical profile has been veri-
fied with the use of the trial and error approach. The geological information available
suggests that the aquifer is composed of two geological units, the lower having a hy-
draulic conductivity of ca. 10^-5 m/s and the upper a conductivity of ca. 10^-4 m/s. How-
ever, the geological information is very imprecise when it comes to defining the actual
geometries of these two units. Direct observation of these units is possible and, based
on such observations, it has been assumed that the upper layer has approximately a
thickness of 10-12 m while the lower layer a thickness of 15-25 m. The hydrogeological
profile suggested can be seen in figure 9.1.3. Geological observations also suggest that
the high conductivity layer is the layer controlling groundwater flow, since in all cases
observed the layer was water saturated and a large number of small water springs are
associated with this layer. Unfortunately, the geological profile did not fully take into
consideration the groundwater table. If the upper layer is controlling groundwater flow,
then the water table should always and everywhere be located in the upper layer. How-
ever, once the water table was superimposed on the geological profile, we noticed that
the water table crosses the boundary between the upper and lower layers at approxi-
mately the 190m contour level. Therefore, the groundwater flow should be controlled
by the low conductivity lower layer.
Using the known conductivity values for each layer and assuming that the geology is
correct, one obtains very unreasonable results for the piezometric heads. Upon calibra-
tion with the use of the trial and error approach, it was found that the hydraulic conduc-
tivity of the lower layer has to be increased approximately 20 times in order to obtain
an acceptable approximation to the measured hydraulic heads. Such an increase can not
be attributed to uncertainties in the estimation of the hydraulic conductivity for the
lower layer. It seems more reasonable to assume that the thickness of the upper layer is
much larger than that depicted in the simplified profile in figure 9.1.3. How much
thicker the high conductivity layer is, is a geological question which can be answered
only after further investigation. In order to proceed with the modeling of the vertical
section in this thesis, it was necessary to make some assumptions about the thicknesses
of these layers. The straightforward way of obtaining these thicknesses is again the trial
and error approach. There must be a thickness which results in a hydraulic head distri-
bution which compares well with the measured head values. This would be a reason-
able approach to find the thicknesses if we knew the hydraulic conductivities of each
layer with a reasonable degree of certainty, which we do not. In other words, we must
use a double trial and error approach. We must first assume a thickness for the layers
and then find the conductivities which calibrate the model. After this has been done for
several thicknesses, we must compare the calibrated conductivities with the initial mean
values assigned to each layer. We then choose the thicknesses which give calibrated
conductivities as near as possible to the initial mean values proposed in figure 9.1.3.
This is indeed a very time consuming approach, and one that could not be employed
due to the limited time available for completing this project. Instead, a thickness was
assumed for each of the two layers and the model was calibrated and accepted as is.
The verification results do however suggest that the upper layer should be made thicker
with a consecutive reduction of the lower layer, compared to that shown in figure 9.1.3.
C.1. Numerical Parameter Estimation
The calibration of the 2D horizontal Gardermoen model was more complex. We know
for a fact that the aquifer is made up of two different layers whose thicknesses are un-
certain. We also know that near the water divide there is a substantial vertical compo-
nent of flow. In other words, our knowledge of the aquifer suggests that a 3D model
should be used, or if we ignore the vertical flow component, at least a coupled system
of two 2D models one for the lower and one for the upper layer. PAGAP has no cou-
pling capabilities, and in order to make a 2D horizontal model we had to make simpli-
fying assumptions. The aquifer was assumed to be homogeneous with a conductivity
equal to some kind of average for the two layers. In fact, since the uncertainties with
regard to the thicknesses and conductivities of the layers are quite substantial, it was
decided to ignore them completely and use conductivity values which simply verify the
model.
The frrst approach used to obtaip such conductivity values is a purely numerical ap-
proach, and we shall refer to it as the direct approach. As discussed in chapter 3, a
steady state groundwater model results in an equation system of the type

\[
\mathbf{A}\mathbf{h} = \mathbf{q} \qquad \text{or} \qquad \sum_{l=1}^{N} a_{kl} h_l = q_k \quad \text{for } k = 1, 2, \ldots, N \qquad (C.1.1)
\]
where A is the flow matrix, h is the hydraulic head vector, q is the flux vector and N is
the total number of nodes. The flow matrix A is assembled based on the equation
\[
a_{kl} = \sum_{e=1}^{M} E^e_{kl} \qquad (C.1.2)
\]
where M is the total number of elements used in the finite element grid and E depends
on the transmissivity, the geometry and shape function used for element e and the nodes
with global indices k and l. If k or l do not belong to element e, then E is equal to zero.
For a linear triangular element the value of E is given as

\[
E^e_{kl} = T^e \, \frac{B^e_k B^e_l + C^e_k C^e_l}{2 D^e} \qquad (C.1.3)
\]

where T is the transmissivity of element e, also assumed here to be isotropic (i.e. T is a
scalar) for simplicity, while B, C and D are defined in chapter 3 and depend on the ge-
ometry of the element and are specific to linear triangular elements. For an isoparamet-
ric quadrilateral element the definition of E is quite similar. The basic difference is that
the geometrical and shape function parameters are replaced by an integral which is
evaluated numerically. Let us rewrite the E function in the following form

\[
E^e_{kl} = T^e G^e_{kl}, \qquad \text{where} \quad G^e_{kl} = \frac{B^e_k B^e_l + C^e_k C^e_l}{2 D^e} \qquad (C.1.4)
\]
If we now use this definition of the E function and replace it in the finite element equa-
tion system we obtain

\[
\sum_{l=1}^{N} \sum_{e=1}^{M} T^e G^e_{kl} h_l = q_k \quad \text{for } k = 1, 2, \ldots, N \qquad (C.1.5)
\]

which with some rearrangement can be written in the form

\[
\sum_{e=1}^{M} T^e \left( \sum_{l=1}^{N} G^e_{kl} h_l \right) = q_k \quad \text{for } k = 1, 2, \ldots, N \qquad (C.1.6)
\]

which is a system of N equations with M unknowns, i.e. the unknown isotropic ele-
ment transmissivities. The system requires that both the h and q vectors are known. If
these are indeed known, we can proceed with the solution of this equation system. Conven-
tional solution algorithms can not be used for this system of N equations with M un-
knowns. It is interesting to notice that for triangular elements we usually have that
M>N and the system is underdetermined, while for a regular quadrilateral grid N>M
and the system is overdetermined. In the Gardermoen case we are using a regular
quadrilateral grid, so we have an overdetermined equation system. The Singular Value
Decomposition (SVD) algorithm was employed as described by Press et al. (1988).
The SVD algorithm is computationally demanding, and for a large number of elements
the amount of CPU time is considerable.
Another way to solve this equation system is to use some rule to remove the "extra"
equations and thus reduce the equation system to an MxM system which can be solved
with common solution algorithms. After some experimentation it was found that a
good rule for this purpose is to add up the equations which have G functions belonging
to the same element. In other words, if the 10th element is defmed by the nodes with
global indices 11,12,20 and 21, then we add up equations 11,12,20 and 21 and the result
is the 10th equation in the new reduced equation system. This produces an MxM sys-
tem which has a diagonal of zero entries. To avoid getting zero diagonal entries, prior
to adding the original equations with common G
e
, a check is made to see if the G
e
fimc-
tion in the original equation is positive or negative. If it is negative, the whole equation
is multiplied with -1 before adding it up to the other a
e
equations.
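The reduction rule can be sketched as follows. The data structures are assumptions made for illustration: W is the N-by-M coefficient matrix of equation C.1.6 with entries W[k][e] = Σ_l G^e_kl h_l, and node(e, i) returns the global index of the i-th node of element e. The sign test is a simplified reading of the rule described above.

/* Sketch (data layout and sign rule are simplifying assumptions):
   reduce the N-by-M system W T = q of equation C.1.6 to an M-by-M
   system A T = b by adding, for each element e, the four equations
   belonging to the nodes of that element. An equation whose entry
   W[k][e] is negative is multiplied by -1 before the addition, so the
   reduced system does not get zero diagonal entries. */
void reduce_system(int M, const double *W, const double *q,
                   int (*node)(int e, int i),
                   double *A, double *b)
{
    for (int e = 0; e < M; e++) {
        b[e] = 0.0;
        for (int c = 0; c < M; c++)
            A[e * M + c] = 0.0;
        for (int i = 0; i < 4; i++) {          /* 4 nodes per element */
            int k = node(e, i);                /* global node index   */
            double s = (W[k * M + e] < 0.0) ? -1.0 : 1.0;
            for (int c = 0; c < M; c++)
                A[e * M + c] += s * W[k * M + c];
            b[e] += s * q[k];
        }
    }
}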
The direct approach has been tested out quite extensively. Diverse finite element grids -
regular and irregular - with a variety of boundary conditions, precipitation input and
pumping wells were implemented. The tests of the direct inverse approach were carried
out in a very simple way. For a given problem the FEM solution produces the hydrau-
lic head vector h and the complete flux vector q. To test the direct inverse approach we
simply need to use these h and q vectors as input in the direct inverse algorithm and see
if the initial transmissivity values are obtained as output. The direct inverse approach in
all 9 test cases reproduced the original transmissivity set. This shows that the basic re-
quirement for the direct inverse process to be successful is to have accurate values for
the h and q vectors. Obviously, this is not always possible. In the Gardermoen 2D
horizontal model we have a good knowledge of the h vector, but the q vector is known
at best with large uncertainties. The yearly precipitation is somewhere between 400-
500 mm and therefore an average value of 450 mm is used. At the upper constant head
boundary we simply know due to mass conservation that the total flux will be equal to
the amount of total precipitation over the aquifer domain. Its specific distribution over
the boundary is not known and has to be assumed. The uncertainties in the q vector for
this particular inverse problem represent a serious obstacle. The obvious approach is to use some kind of iterative scheme. One can assume a transmissivity distribution and, based on that, calculate the q vector. With the calculated q vector and the measured h vector the direct algorithm is used to obtain a new set of transmissivities. The new set of transmissivities is then used to calculate a new q vector, and the direct inverse approach is applied once again to obtain an improved set of transmissivity values.
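A sketch of this iterative scheme follows; fem_flux(T) stands for a forward FEM solve returning the flux vector q for transmissivities T, and direct_inverse(h, q) for the direct inverse step described above. Both are placeholders, as is the convergence tolerance.

```python
import numpy as np

def iterate_inverse(T0, h_measured, fem_flux, direct_inverse,
                    tol=1e-3, max_iter=100):
    """Alternate forward flux computation and direct inversion until T settles."""
    T = np.asarray(T0, dtype=float)
    for _ in range(max_iter):
        q = fem_flux(T)                        # q from the assumed T
        T_new = direct_inverse(h_measured, q)  # improved T from measured h
        if np.max(np.abs(T_new - T)) < tol:    # successive-set difference test
            return T_new
        T = T_new
    return T
```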
This iterative process has not been successful. For some simple cases that were used for testing purposes, the following was observed. The absolute difference between successive transmissivity sets is iteratively reduced towards zero, suggesting that the algorithm converges to a specific set of transmissivities. However, the absolute difference between calculated hydraulic heads and measured heads converges towards some non-zero value which thereafter remains more or less unchanged. The absolute difference between measured and estimated hydraulic heads was not very large; the average absolute difference per node was between 0.01 and 0.1. When the iterative scheme was used for the Gardermoen case, convergence was extremely slow. The iterative scheme also seems to depend on the initial transmissivities assumed. If these are badly suited to the problem, the scheme becomes erratic and very unstable.
It was necessary to find a different inverse approach, not only because of the problems observed with the direct approach but also because of its requirements. To require knowledge of both the h and q vectors is simply too demanding for a practical approach to inverse modeling.
C.2. Reliability Approach
In the reliability approach we first define a performance function which leads to the minimization of the difference between observed and calculated hydraulic heads. Such a function is

$$g(\mathbf{X}) = \sum_{i=1}^{K} (h_{oi} - h_{ei})^2 \qquad (\text{C.2.1})$$
where K is the number of known hydraulic heads, the subscripts o and e denote observed and estimated values respectively for the hydraulic heads h, and X is the element transmissivity or hydraulic conductivity vector. The problem with this performance function is that it does not fulfill the conditions of a normal performance function. First of all it is always positive, and furthermore the concepts of unsafe and safe regions do not apply to it. The derivative of this unconventional performance function with respect to the X vector components is easily obtained as

$$\frac{\partial g}{\partial X_j} = 2 \sum_{i=1}^{K} (h_{ei} - h_{oi}) \frac{\partial h_{ei}}{\partial X_j} \qquad (\text{C.2.2})$$

where the derivative of the hydraulic head with respect to X_j is known and given in Chapter 5. With this performance function and its derivative we can now use the reliability algorithm to find the reliability index and consequently the X vector giving us the most likely parameter values satisfying g(X) = 0. Since this is a reliability approach we can also assign stochastic information to the hydraulic conductivities or transmissivities. The reliability index obtained does not have any meaning, since there are no safe and unsafe regions defined, but the most likely values obtained will give us the desired hydraulic conductivities and also satisfy the stochastic information assigned to them, i.e. the correlation scale.

Although the above is conceptually reasonable, it is not very compatible with the type of problem we want to solve. The inverse problem is essentially a minimization problem where the performance function, as defined above, must be minimized. The reliability method, on the other hand, most closely resembles a root-finding approach. The obvious difference is that the reliability approach assumes that g(X) = 0 exists, while the minimization approach does not require this. Actually, the reliability approach does not only assume that g(X) = 0 exists, but also that it is a function of X, meaning that there are infinitely many sets of X that satisfy it. The minimization approach, on the other hand, assumes that there is a finite number of points which can minimize the function g(X), and usually it is assumed that only one such point exists. There are also practical aspects to these differences. For example, consider the case of defining one global random variable for the hydraulic conductivities. No matter which value this variable assumes, it will never manage to make g(X) = 0, and therefore the reliability iterations will never converge.
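In code terms, equations (C.2.1) and (C.2.2) amount to the following minimal sketch. Here h_estimated(X) stands for a run of the flow model for the parameter vector X, and dh_dX(X) for the K x M sensitivity matrix of estimated heads with respect to X (available from the results of Chapter 5); both are placeholders.

```python
import numpy as np

def g(X, h_obs, h_estimated):
    """Performance function, equation (C.2.1)."""
    r = h_obs - h_estimated(X)
    return float(np.sum(r**2))

def g_gradient(X, h_obs, h_estimated, dh_dX):
    """Gradient of the performance function, equation (C.2.2)."""
    r = h_estimated(X) - h_obs            # (h_e - h_o)
    return 2.0 * dh_dX(X).T @ r
```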
The above differences between minimization methods and the reliability method are important because of the behavior of the reliability method when applied to the 2D horizontal Gardermoen case. The iterations begin with the value of the performance function being reduced at each iteration step, but at some step a jump occurs whereby the performance function obtains a higher value than before. Then the performance function starts becoming smaller once again, until a new jump takes place. No matter how many iteration steps were used, convergence was not achieved.
Taking into consideration the differences between reliability and minimization methods, it seemed reasonable to modify the reliability iteration scheme to converge to any point capable of giving g(X) = 0. The iteration scheme was modified to simply move the next iteration point in the direction of the gradient, and it therefore resembles minimization by the method of steepest descent. This iteration scheme has the same performance function "jump" characteristics as the usual reliability iteration scheme, but each jump takes place at a lower performance function value, and eventually the required convergence is obtained. A trial and error approach was used to find which convergence value was appropriate for the g(X) function. In most cases a value between 0.1 and 1.0 was used. In the Gardermoen case, which has approximately 400 known head values, convergence to 1.0 means that the average squared head difference per node is 0.0025 m², i.e. an RMS head difference of 0.05 m, which is extremely good. Most tests were carried out on a simplified version of the Gardermoen case where 48 elements and 65 nodes have been used. The finite element grid for this simplified case can be seen in figure C.2.1.
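A minimal sketch of this gradient-following scheme is given below, assuming a fixed normalized step length; the thesis does not state the actual step-size rule, so that detail is an assumption.

```python
import numpy as np

def steepest_descent(X0, g, grad_g, target=1.0, step=0.01, max_iter=5000):
    """Move each iterate along the negative gradient of g until g(X) < target."""
    X = np.asarray(X0, dtype=float)
    for _ in range(max_iter):
        if g(X) < target:                       # converge on the g-value itself
            break
        d = grad_g(X)
        X = X - step * d / np.linalg.norm(d)    # step in the downhill direction
    return X
```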
Figure C.2.1. Simplified FEM grid used for testing purposes.
For the simplified Gardermoen case three tests were carried out. The purpose of these tests was to see the effects of the initial transmissivities used. In each case the same transmissivity value was assigned to all elements as both mean value and standard deviation. The transmissivity values used are 200 m²/d, 250 m²/d and 300 m²/d. No correlation between element transmissivities is assumed. In all cases the results were obtained at a convergence of g(X) < 0.1. The number of iterations required varied from 390 to 580, which in terms of CPU time was between 20 and 30 minutes on the DECstation 5000. Figures C.2.2, C.2.3 and C.2.4 show the transmissivity distribution obtained for the three cases. Although all figures show the same distribution characteristics, the 300 m²/d case shows slightly higher transmissivity values than the 250 m²/d case, and the 250 m²/d case shows slightly higher values than the 200 m²/d case.
Figure C.2.2. Initial transmissivities 200 m²/d.
Figure C.2.3. Initial transmissivities 250 m²/d.
Figure C.2.4. Initial transmissivities 300 m²/d.
We must keep in mind that the scale used for these figures causes the differences to look only "slight". In terms of values the differences are between 20 and 30 m²/d, which is not so slight after all.
In all three cases the transmissivity distribution observed in the upper part of the aquifer
seems to be approximately the same, while the lower left side of the aquifer seems to be
the most sensitive area. This is also the area where the flow is most likely to be verti-
cal. Since the horizontal model has to remove the precipitation input by means of hori-
zontal movement in an area where the hydraulic gradient is very small, it makes sense
that the transmissivity values must be increased considerably to accomplish this. The
conclusion is that these transmissivity values are numerical products and have nothing
to do with the actual transmissivities in the lower left aquifer area. However, we have to use these values in the model to keep the water balance.
A question remaining to be answered is why we have obtained similar but nevertheless different transmissivity distributions for each case. Is this simply the result of a convergence problem, or are there other factors influencing the problem? In order to answer this question another case study was used. With this case study one hoped to find out if the different transmissivity distributions are due to differences in the initial iteration point used or because of different stochastic information assigned to the random element transmissivities. For this purpose a case study with 300 m²/d for the mean and standard deviation of element transmissivities was used, but with a starting point of 200 m²/d for each element. If the resulting transmissivity distribution resembles that of the 200 m²/d case, then it is the starting point which affects the distribution.
Figure C.2.5. Initial transmissivities 200 m²/d, with mean values and standard deviations equal to 300 m²/d.
If it resembles the 300 m²/d case, it is the random variable information controlling the distribution obtained. The transmissivity distribution obtained for this case is seen in figure C.2.5. The transmissivity distribution of figure C.2.5 clearly resembles that of the 300 m²/d case, with differences of approximately 1-3 m²/d. Therefore the statistical information describing the parameters is the controlling factor. However, this result does not help us in deciding which transmissivity distribution should be used for calibration purposes. All distributions give an acceptable approximation, and all distributions are similar, but which one is the most reliable? Additional information is required before deciding.
C.3. Minimization Approach
It was never the purpose to develop a program to use a minimization process for calibration purposes. However, once the algorithm calculating equation C.2.1 and its derivative, equation C.2.2, was implemented in the reliability approach, it was a rather short step to find a general-purpose minimization algorithm and connect it to the algorithm of equations C.2.1 and C.2.2 to obtain a fully operational minimization program.
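As an illustration of this "short step", the glue code might look as follows, wrapping the g and g_gradient sketches given earlier in a general-purpose minimizer. scipy's conjugate-gradient routine is used here as a modern stand-in for the Press et al. (1988) algorithms; it is not the implementation used in the thesis.

```python
from scipy.optimize import minimize

def calibrate(T_initial, h_obs, h_estimated, dh_dX, tol=1e-3):
    """Minimize g(X) of (C.2.1) starting from an assumed transmissivity set."""
    fun = lambda X: g(X, h_obs, h_estimated)
    jac = lambda X: g_gradient(X, h_obs, h_estimated, dh_dX)
    result = minimize(fun, T_initial, jac=jac, method="CG",
                      options={"gtol": tol})
    return result.x   # calibrated transmissivity (or conductivity) set
```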
Figure C.3.1. Minimized simplified Gardermoen case after 304 iterations.
The minimization algorithms were obtained from Press et al. (1988). The basic problem with these algorithms is that they require the bracketing of the minimum. In order to accomplish this the user must provide a pair of numbers as starting points, and the choice of numbers affects the convergence rate considerably. A poor choice can in some cases double the number of iterations required, or even cause the algorithm to converge prematurely (i.e. converge to a local minimum).
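For illustration, a simplified downhill-expansion bracketing routine in the style of Press et al. (1988) is sketched below; the growth factor and failure handling are assumptions. It shows why the user-supplied starting pair matters: the expansion begins from that pair and can walk into a local minimum.

```python
def bracket_minimum(f, a, b, grow=1.618, max_steps=50):
    """Expand downhill from (a, b) until f(b) is lower than f at both ends."""
    fa, fb = f(a), f(b)
    if fb > fa:                      # ensure the step from a to b is downhill
        a, b = b, a
        fa, fb = fb, fa
    c = b + grow * (b - a)
    fc = f(c)
    for _ in range(max_steps):
        if fc >= fb:                 # bracket found: f(a) >= f(b) <= f(c)
            return a, b, c
        a, b, c = b, c, c + grow * (c - b)
        fb, fc = fc, f(c)
    raise RuntimeError("no bracket found; try different starting points")
```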
The first tests were made on the simplified Gardermoen case. The algorithm converged to a value of g(X) = 0.02565 after 304 iterations, which gave the transmissivity distribution seen in figure C.3.1. The convergence criterion used is the one proposed by Press et al. (1988) and has the following form

$$2\,|g_i - g_{i-1}| \le Tol\,(|g_i| + |g_{i-1}|) \qquad (\text{C.3.1})$$

where g_i is the value of the minimized function g(X) evaluated during the i-th iteration, and Tol is a user-defined tolerance. In this specific case a tolerance of Tol = 0.001 was used. In general, the minimization algorithm reduces the g-function to a value of approximately 1.0 in the first 10 iteration steps.
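In code terms, the test of equation (C.3.1) is a one-liner; the small tiny safeguard is an assumption in the spirit of the Numerical Recipes routines, guarding the case where both function values are near zero.

```python
def converged(g_prev, g_curr, tol, tiny=1e-12):
    """Fractional tolerance test of Press et al. (1988), equation (C.3.1)."""
    return 2.0 * abs(g_curr - g_prev) <= tol * (abs(g_curr) + abs(g_prev) + tiny)
```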
Figure C.3.2. Minimized simplified Gardermoen case after 975 iterations.
Thereafter, convergence towards smaller values is extremely slow. In order to minimize the g-function as much as possible the tolerance value was reduced to Tol = 0.000001 and the iterative process restarted. This time, convergence was obtained at iteration 975 with g(X) = 0.014613. The transmissivity distribution can be seen in figure C.3.2. Further reduction of the g-function would cost an unreasonable amount of CPU time, and it was not pursued.
Figure C.3.3. Minimization after removal of precipitation input at the first FEM grid element row of the simplified Gardermoen case.
The differences between the distributions of figures C.3.1 and C.3.2 are very significant. The resulting distribution after 975 iterations shows that in the lower left aquifer area the transmissivity has increased by 250-300 m²/d in comparison to the 304-iteration distribution. As mentioned previously, the high transmissivity values observed in the lower left aquifer area are believed to be the result of vertical flow taking place in this area, which the 2D horizontal model cannot simulate. In order for the 2D horizontal model to move the precipitation input by horizontal flow and at the same time justify the small hydraulic gradients observed in that area, it needs high transmissivity values.
Since the high transmissivity values for this area are fictitious, perhaps one should remove the cause of this behavior. We have basically two alternatives. We can either remove the precipitation input in the problematic area, or give the inverse process a larger amount of freedom to choose the hydraulic heads in the area. Removing the precipitation from the lower left aquifer area cannot be done in a straightforward manner, because we are interfering with the aquifer's water balance. Suppose we remove the precipitation input to the first row of elements of the simplified Gardermoen FEM grid. This case has been analyzed and the results can be seen in figure C.3.3.

A tolerance of Tol = 0.001 was used to obtain, in 112 iterations, g(X) = 0.147620. The high transmissivity values in the lower left area have vanished, as expected, but we notice that the transmissivity values have been considerably reduced over the whole aquifer. This is due to the water balance changes we have introduced. We have reduced the water input, and therefore smaller transmissivities are required to move this water towards the upper constant head boundary.

The second alternative is to remove the known hydraulic head values imposed on the water divide. The water divide according to Østmo (1976) shows the highest hydraulic head values approximately at the middle, ca. 201 m, while at the left (north) side it is reduced to ca. 200.5 m, and at the right (south) side to ca. 200.3 m. Considering the distance of the 200 m groundwater table contour line from the water divide, we see that the hydraulic gradient is much smaller in the lower left area than in the lower right area. If we do not impose the water divide hydraulic head values on the inverse process, we are giving it the freedom to choose the head values and the hydraulic gradient in a more convenient way. However, if the resulting hydraulic heads on the water divide are considerably different from the measured ones, we shall have to reduce this freedom by imposing the head values for at least some of the nodes on the water divide. The results can be seen in figure C.3.4. A tolerance of Tol = 0.001 was used, and in 78 iterations g(X) = 0.024710 was achieved. We notice the small number of iterations required for this problem. The head distribution on the water divide is consistent for the right side of the divide. On the problematic
left side, the hydraulic heads have increased to a value of 201.2 m, i.e. 0.7 m above measured values. We consider this an acceptable increase.
Figure C.3.4. Minimization after removal of head conditions on the water divide.
In figure C.3.6 we can see a detailed contour plot of the transmissivities obtained after minimization on the actual FEM grid used for the horizontal case study. This grid has 512 elements and 561 nodes and can be seen in figure 9.3.1. The minimization process was stopped as soon as g(X) became smaller than 0.01, which happened after 42 iterations. In order to compare the minimization results obtained from the actual FEM grid with those obtained from the simple FEM grid used in this section, figure C.3.5 shows the same transmissivity distribution as figure C.3.4 but uses the 25 m²/d interval between contour lines used in figure C.3.6. We see that both plots have the same characteristics, although the actual FEM grid results show considerable spatial variation.

From the transmissivity distribution of figure C.3.6 we need to obtain the hydraulic conductivity distribution. For this purpose we simply need to know the aquifer thickness. Unfortunately, we only know the thicknesses at a cross section, and therefore we
must make some assumptions about the aquifer thickness to proceed. The easiest assumption is to use the cross-section thicknesses for the whole aquifer. Practically, this means that we assume a thickness of 22-25 m for the water divide area and a thickness of 17-20 m for the upper boundary area, while in the rest of the aquifer the thickness varies more or less linearly between the two extreme values. This assumption leads to a mean thickness of 21 m, resulting in a hydraulic conductivity which varies from 7.1 to 17.2 m/d. A more complicated assumption is to interpret the transmissivity variation as a thickness variation. Practically, this means that areas with high transmissivities are assumed to be thicker than areas with low transmissivities. If we assume that the hydraulic conductivity has a mean value of 12.15 m/d over the whole aquifer, then the thickness of the aquifer at the low transmissivity areas is ca. 12.4 m, while at the high transmissivity areas it is ca. 35.4 m. Both assumptions are very speculative. One might argue that the first assumption at least makes use of available data and therefore is more "correct" than the second assumption.
We have made so many assumptions in order to build up this 2D horizontal model that
it is hard to evaluate how well it can simulate the Gardermoen aquifer. A realistic
evaluation of this model would probably be to say that it has some common character-
istics with the Gardermoen aquifer, but in most aspects it is simply another theoretical
model.
Figure C.3.5. Detailed parameter estimation distribution based on the simple FEM grid.
Figure C.3.6. Detailed parameter estimation distribution based on the actual FEM grid.
