
Applications of Uncertainty Theory to Rock Mechanics and Geotechnical Mine Design


by
John Markus Mayer
B.Sc. (Honours), Simon Fraser University, 2011

Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of
Master of Science

in the
Department of Earth Sciences
Faculty of Science

© John Markus Mayer 2015


SIMON FRASER UNIVERSITY
Spring 2015

Partial Copyright Licence


Abstract
Uncertainty analysis remains at the forefront of geotechnical design, due to the predictive nature of the applied discipline. Designs must be analysed within a reliability-based framework, such that inherent risks are demonstrated to decision makers. This research explores this paradigm in three important areas of geotechnical design: continuum, Discrete Fracture Network (DFN), and discontinuum modelling. Continuum modelling examined the negative effects of ignoring spatial heterogeneity on model prediction; this was conducted through the stochastic modelling of spatial heterogeneities found within a large open pit mine slope. DFN analysis introduced a novel approach to fracture generation to solve issues associated with the incorporation of traditional DFNs into geomechanical simulation models. Finally, discontinuum modelling explored the inherent mesh dependencies that exist in UDEC grain boundary models (UDEC-GBM). Conclusions suggest that a transition is required from deterministic to uncertainty-based design practices within the geotechnical discipline.

Keywords: Uncertainty analysis; stochastic modelling; reliability-based design; numerical modelling; discrete fracture networks; brittle fracture


Acknowledgements
This work was made possible by the generous support of my supervisor Dr.
Doug Stead. I would like to acknowledge the flexibility he gave me in pursuing various
avenues of research. I would like to extend my appreciation to my committee members,
Dr. Diana Allen, Dr. Dan Gibson, Dr. Norbert Baczynski, and Jarek Jakubec. I thank my
external examiner Dr. Scott Dunbar from the University of British Columbia.
I would like to thank SRK Consulting and the Natural Sciences and Engineering Research Council of Canada (NSERC) for providing funding for this research through an NSERC-IPS scholarship. Specifically, I would like to thank my colleagues Jarek Jakubec, Michael Royle, Daniel Mackie, Ian de Bruyn, Marek Nowak, Greg Fagerlund, Jordan Severin, Jacek Scribek, Guy Dishaw, Jen Adams, Ryan Campbell, and Ben Green for providing guidance throughout the research.
I would like to acknowledge the generous assistance of Ok Tedi Mining Ltd. for
providing the opportunity and data to conduct this study. In particular I would like to
thank Dr. Norbert Baczynski and Derrick Kelly.
A special thanks to all present and past graduate students within the Engineering
Geology and Resource Geotechnics Research Group at SFU who helped me with this
research. They include Mohsen Havaej, Pooya Hamdi, Fuqiang Gao, Kenneth Lupogo,
Andrea Wolter, Janisse Vivas, Ryan Preston, Anne Clayton, Zack Tuckey, Yabing
Zhang, and Davide Donati. I would like to acknowledge the support of the SFU technical
staff including, Glenda Pauls, Rodney Arnold, Matthew Plotnikoff, Tarja Vaisanen, and
Bonnie Karhukangas.

Dedication

To my fiancée Jen, who has always been there for me with her unconditional love and support


Table of Contents
Approval .......................................................................................................................... ii
Partial Copyright Licence ............................................................................................... iii
Abstract .......................................................................................................................... iv
Acknowledgements ......................................................................................................... v
Dedication ...................................................................................................................... vi
Table of Contents .......................................................................................................... vii
List of Tables ................................................................................................................... x
List of Figures................................................................................................................. xi
List of Acronyms........................................................................................................... xvii
1.
Introduction .......................................................................................................... 1
1.1. Background Motivation ........................................................................................... 1
1.2. Ok Tedi Mine .......................................................................................................... 2
1.3. Research Objectives .............................................................................................. 5
1.4. Thesis Structure ..................................................................................................... 6

2.
Literature Review .................................................................................................. 8
2.1. Types of Uncertainty............................................................................................... 8
2.2. Alternative Theories of Uncertainty ....................................................................... 10
2.2.1. Fuzzy Set Theory ...................................................................................... 11
2.2.2. Possibility Theory ...................................................................................... 13
2.2.3. Evidence Theory ....................................................................................... 15
2.2.4. Imprecise Probabilities .............................................................................. 17
2.3. Probability Theory of Uncertainty .......................................................................... 19
2.4. Probabilistic Models for Dealing with Uncertainty ................................................. 22
2.4.1. First-Order, Second-Moment Methods ...................................................... 22
2.4.2. Point Estimate Methods ............................................................................ 23
2.4.3. Monte Carlo Methods ................................................................................ 26
2.5. Numerical Simulation............................................................................................ 28
2.6. Model Complexity Issue ....................................................................................... 29
2.7. Reliability Based Design ....................................................................................... 30
2.8. Risk analysis ........................................................................................................ 31
3.
Effects of Rock Mass Heterogeneity on Geomechanical Model
Prediction ............................................................................................................ 33
3.1. Abstract ................................................................................................................ 33
3.2. Introduction .......................................................................................................... 34
3.3. Study Site and Data Sources................................................................................ 35
3.3.1. Geology .................................................................................................... 36
3.3.2. Borehole Data ........................................................................................... 39
3.3.3. Groundwater Model................................................................................... 40
3.4. Methodology ......................................................................................................... 43
3.4.1. Hoek-Brown Parameters ........................................................................... 43
3.4.2. 3D Geological Model ................................................................................. 46
3.4.3. Stochastic Simulation ................................................................................ 47

Spatial Declustering .................................................................................. 48
Detrending ................................................................................................ 50
Normal Score (Gaussian) Transformation ................................................. 51
Correlogram Analysis ................................................................................ 52
Sequential Gaussian Simulation ............................................................... 54
3.4.4. Pore Pressure Distribution ........................................................................ 56
3.4.5. Geomechanical Simulation Model ............................................................. 58
3.4.6. Critical Area Estimation ............................................................................. 59
3.4.7. Statistical Up-Scaling ................................................................................ 61
3.5. Simulation Results ................................................................................................ 63
3.5.1. General Observation ................................................................................. 64
3.5.2. Critical Path and Area Estimates ............................................................... 66
3.5.3. Conventional Probabilistic Techniques ...................................................... 69
3.5.4. No Spatial Autocorrelation......................................................................... 72
3.5.5. Effect of Groundwater ............................................................................... 73
3.5.6. Statistical Up-Scaling Results ................................................................... 76
3.6. Discussion ............................................................................................................ 77
3.6.1. The Scale-Dependency Issue ................................................................... 77
3.6.2. Step-Path Estimation Algorithms ............................................................... 79
3.6.3. Continuum Mechanics and Data Aggregation ........................................... 81
3.7. Conclusions .......................................................................................................... 83
4.
A Modified Discrete Fracture Network Approach for Geomechanical
Simulation ........................................................................................................... 85
4.1. Abstract ................................................................................................................ 85
4.2. Introduction .......................................................................................................... 86
4.3. DFN Models ......................................................................................................... 87
4.4. Triangular Mesh Generation ................................................................................. 88
4.5. Integration of DFN Models with Triangular Mesh Generation................................ 90
4.5.1. Overlap/Separation Distance ..................................................................... 90
4.5.2. Intersection Distance................................................................................. 91
4.5.3. Intersection Angle ..................................................................................... 91
4.6. Model Validation ................................................................................................... 93
4.7. Comparison with Traditional Methods ................................................................... 94
4.8. Conclusions and Future Work .............................................................................. 98

5.
Mesh Dependencies in UDEC Grain Boundary Models ................................. 101
5.1. Abstract .............................................................................................................. 101
5.2. Introduction ........................................................................................................ 102
5.3. Darai Limestone ................................................................................................. 104
5.4. Methodology ....................................................................................................... 107
5.4.1. UDEC Block Tessellation ........................................................................ 107
Principal triangles.................................................................................... 111
Discretization of fractures........................................................................ 112
Splitting of large triangles ........................................................................ 113
5.4.2. UDEC-Grain Boundary Model ................................................................. 113
5.4.3. Model Construction ................................................................................. 114
5.5. Calibration .......................................................................................................... 116

5.5.1. Calibration Procedure ............................................................................. 116
5.5.2. Calibrated Micro-Properties..................................................................... 118
5.6. Results ............................................................................................................... 121
5.6.1. Calibration Uncertainty ............................................................................ 121
5.6.2. Synthetic Rock Mass Models .................................................................. 125
5.6.3. Triangular vs. Voronoi Mesh Geometries ................................................ 128
5.7. Discussion .......................................................................................................... 130
5.7.1. Calibration Potential of UDEC-GBMs ...................................................... 130
5.7.2. Contact Failure Mechanisms ................................................................... 132
5.8. Conclusions ........................................................................................................ 137
6.
Conclusions and Recommendations for Future Work ................................... 139
6.1. Conclusions ........................................................................................................ 139
6.1.1. Adverse Effects of Heterogeneity on Model Prediction ............................ 140
6.1.2. Limitations of DFN and Numerical Model Integration............................... 141
6.1.3. Mesh Dependency in UDEC-Grain Boundary Models ............................. 143
6.2. Recommendations for Future Work .................................................................... 144
6.2.1. Spatial Uncertainty .................................................................................. 144
6.2.2. DFN Generation ...................................................................................... 146
6.2.3. UDEC-Grain Boundary Models ............................................................... 147
References ................................................................................................................. 150
Appendices ................................................................................................................ 176
Appendix A.  Hoek-Brown Criterion ...................................................................... 177
Appendix B.  Correlograms .................................................................................. 180
Appendix C.  Sequential Gaussian Simulation Code ............................................ 191
Appendix D.  Verification of Sequential Gaussian Simulation Code ...................... 197
Appendix E.  Critical Failure Path Pseudo-Code .................................................. 218


List of Tables
Table 3.1   GSI estimates for highly fragmented, crushed and/or decomposed zones. Ranges were approximated by SRK Consulting (SRK 2012) using the GSI estimation chart of Hoek et al. (1998). ................................ 44
Table 3.2   Medial Hoek-Brown attributes and statistics for each geotechnical domain at the Ok Tedi mine site. Data was declustered using the methodology described in Section 3.4.3 prior to characterization of the summary statistics. ................................ 45
Table 3.3   Normal score variogram constraints for the Ok Tedi dataset. ................................ 54
Table 4.1   2D DFN fracture morphologies used to demonstrate the issues that arise when incorporating traditional DFNs into geomechanical simulation codes. ................................ 95
Table 5.1   Geomechanical properties for Darai Limestone within the proposed Ok Tedi underground. Attributes are obtained from laboratory testing of drill core data. ................................ 105
Table 5.2   Discontinuity orientation data used for 2D modelling of Darai Limestone. Data were obtained from SRK (2013c). P21 estimates were decreased by a factor of 30 to produce DFNs suitable for geomechanical simulation. ................................ 107

List of Figures
Figure 1.1  Location of the Ok Tedi Mine site in Papua New Guinea. ................................ 3
Figure 1.2  Plan view of the surface geology at the Ok Tedi mine site prior to open pit operation. ................................ 4
Figure 2.1  Categorization of uncertainties present within engineering design (adapted from Nikolaidis 2005). ................................ 9
Figure 2.2  Classical set theory assumes crisp membership boundaries, whereas fuzzy set theory allows for gradational contacts. ................................ 11

Figure 2.3  Probability box concept from probability bounds analysis (PBA). The probability box is bounded by the cumulative distribution functions (CDFs) of the upper and lower bound estimates of the statistical model parameters. The analysis assumes a predefined statistical model to represent the attribute, with the box delimited by the upper and lower bounds of the attribute mean and standard deviation. ................................ 18

Figure 3.1  Plan view of surface geology for the 2011 mining conditions at the Ok Tedi site. The geotechnical borehole collar distribution is found to be skewed towards the center of the pit, specifically targeting the mineralized skarn bodies. ................................ 37
Figure 3.2  Cross-section through the Ok Tedi pit at a northing of 423850. Inset shows the location of the cross-section relative to the pit on a photograph of the pit from Baczynski et al. (2011). ................................ 38
Figure 3.3  Distribution of hydraulic conductivity in the groundwater model of the Ok Tedi mine site, constructed by SRK Consulting (Fagerlund et al. 2013). ................................ 41
Figure 3.4  Two tested depressurization scenarios for the Ok Tedi west wall cutback. (a) Three 300 m horizontal drains at 1325, 1440, and 1525 masl. (b) Drainage tunnel with a single 300 m horizontal drain at 1525 masl. ................................ 42
Figure 3.5  3D geological model of the Ok Tedi mine site. Topography is based on pre-mining conditions. ................................ 47
Figure 3.6  Stochastic simulation processes used to characterize and simulate the spatial heterogeneity in the GSI and UCS at the Ok Tedi mine site. ................................ 48


Figure 3.7  GSI data were converted to normal score space using a cumulative frequency plot. Normal scores were selected by matching cumulative frequencies between the data and a normal distribution. ................................ 52
Figure 3.8  A single realization of the GSI attribute using the SGS method. ................................ 56
Figure 3.9  Pore pressure distribution estimated for the Ok Tedi site based on FEFLOW modelling and conceptual estimation. ................................ 57

Figure 3.10 Critical failure paths were identified using minimum distance
analysis. The methodology utilized Dijkstra's (1959) shortest path
algorithm. .................................................................................................. 61
Figure 3.11 Plots of the running average (a) mean and (b) standard deviation in
SRF results vs. the number of simulation trials are used to estimate
when the Monte Carlo simulation results become stable. The
results suggest that the required number of simulations is inversely
proportional to the degree of spatial autocorrelation. ................................ 64
Figure 3.12 Cumulative density plot comparing the SGS method with a standard
deterministic analysis. The deterministic analysis utilized
homogeneous units, with strength attributes defined using medial
value statistics. SGS modelling suggests a mean SRF of 1.45 with
a standard deviation of 0.08. ..................................................................... 65
Figure 3.13 GSI and UCS attributes are found to be reduced along the critical
failure path compared to west wall averages. A mean reduction of
14% and 32% was found in the GSI and UCS, respectively. ..................... 66
Figure 3.14 (a) Variation in failure area and length statistics provide an estimate
of the overall deep vs. shallow seated nature of the estimated failure
surfaces. The results suggest a positive correlation between the
degree of depressurization and size of potential failures. (b) Trends
in the coefficient of variation within the failure area and length
statistics can be used as a quantitative estimate of the overall
dispersion in failure path results. Results indicate that the degree of
failure path uncertainty is positively correlated with the degree of
spatial autocorrelation imposed on the system.......................................... 67
Figure 3.15 Distribution of critical failure surfaces from the SGS simulations.
Daylighting is concentrated within the Gleeson Fracture Zone. The
failure area is estimated to be 2.29 × 10^5 m^2 with a standard
deviation of 7.82 × 10^4 m^2; while the failure length has a mean of
1,454 m with a standard deviation of 157 m. ............................................. 68
Figure 3.16 Development of shear bands between the active and passive blocks
is observed. This behaviour helps to facilitate movement of material
along the lower critical failure surface. ...................................................... 68


Figure 3.17 Comparison of SRF results for both the SGS and conventional
approaches to geotechnical slope design. The simulation results
suggest that the conventional probabilistic approach over-estimates
both the mean SRF (1.45 vs. 1.58) and standard deviation (0.08 vs.
0.29) compared to the SGS method.......................................................... 70
Figure 3.18 Comparison of critical failure path distributions for the different
modelling approaches. .............................................................................. 71
Figure 3.19 The incorporation of rock mass strength heterogeneities into a
model results in increased dispersion in the SRF results compared
to non-autocorrelated models. The zero autocorrelation method is
found to over-estimate the mean SRF (1.53 vs. 1.45), while at the
same time under-estimate the standard deviation (0.02 vs. 0.08),
when compared to the SGS method. ........................................................ 73
Figure 3.20 The inclusion of groundwater pore pressures resulted in an average
decrease in SRF results of 0.14 compared to the SGS method
(Figure 3.13). The mean SRF values are 1.45 and 1.58 for the wet
and dry models, respectively, with standard deviations of 0.08 and
0.09. ......................................................................................................... 74
Figure 3.21 Active depressurization was found to increase SRF values by an
average of 0.10, compared to the base case of no depressurization.
Results of the depressurization scenarios suggest mean SRF
values of 1.53 and 1.58, with standard deviations of 0.08 and 0.08
for the horizontal drain holes and drainage tunnel scenarios,
respectively. .............................................................................................. 75
Figure 3.22 Comparison of SRF results between the SGS and critical-path up-scaling methods. The results suggest the critical path algorithms
fail to fully capture the effects of spatial heterogeneity on
geomechanical models. Up-scaling results suggest a mean SRF of
1.35, 1.33 and 1.33, with a standard deviation of 0.24, 0.17, and
0.22 for the independent, dependent and roughness methods,
respectively. .............................................................................................. 77
Figure 3.23 Concept demonstrating the deviation in mean step-path angle and
the critical basal sliding surface (Jennings 1970). ..................................... 81
Figure 3.24 The discrete nature of geotechnical domains makes the definition of
a REV within fracture systems difficult, if not impossible. This is due
to the difficulty in stabilizing descriptive attributes at sample volumes
smaller than the domain scale. ................................................................. 82
Figure 4.1  Procedure for the overlap/separation distance check with a buffer zone defined using the specified minimum overlap/separation distance. In the above case, fracture fnew,i would be rejected as it terminates within the buffer zone, whereas fnew,ii would be accepted as both its terminations are outside the zone. ................................ 92

Figure 4.2  Procedure for the intersection distance check, used to ensure that intersection points are spaced greater than the overlap/separation distance apart. This is done to prevent the development of unacceptably small elements. In the above case, the newly generated fracture would be rejected if any of the three intersection distances is less than the overlap/separation distance. ................................ 92

Figure 4.3  Procedure for the intersection angle check, used to ensure that newly generated fractures form at angles greater than the critical minimum angle. In the above case, fnew,i would fail the check due to the acute angle between it and fold; whereas fnew,ii would pass the test due to the high angle between fold and fnew,ii. ................................ 93

Figure 4.4  (a) Model validation of fracture orientation statistics. Good agreement is shown between DFN simulations using the modified algorithm and actual fracture network distributions. (b) Length statistics back-calculated from un-truncated model simulation results also show good agreement with model parameters. ................................ 94

Figure 4.5  DFN models and their corresponding mesh tessellation within the Rockfield (2013) software ELFEN. (Left) Irregular mesh tessellation caused by traditional DFN schemes; closely generated fractures cause the formation of skinny mesh elements during tessellation. (Right) DFN model created by the proposed modified DFN approach; incorporation of the DFN within ELFEN requires no additional clean-up. ................................ 96

Figure 4.6  Reduction in P21 values associated with the incorporation of traditional DFN methods within geomechanical software, compared to the proposed modified DFN approach. ................................ 97

Figure 4.7  Discrete fracture network generated using the modified DFN algorithm and incorporated into the Itasca (2014) software UDEC. The figure shows the distribution of discrete, triangular blocks within UDEC (outlined in grey). The blocks were generated to conform to the fracture network. ................................ 98

Figure 5.1  2D apparent dip estimates from orientation data collected for the Darai Limestone near the proposed Ok Tedi underground. ................................ 106

Figure 5.2  Flow chart for the modified Baecher et al. (1978) DFN generation algorithm. The methodology is used to generate fracture networks which adhere to later geomechanical meshing routines. ................................ 109

Figure 5.3  Generation of poor quality elements during embedment of DFNs into UDEC Voronoi models. ................................ 111


Figure 5.4  Demonstration of the triangulation algorithm used for mesh generation. (a) A new grid point is inserted at the centroid of the designated triangle. (b) All triangles whose circumcircle intersects the new point are flagged. (c) Flagged triangles are removed from the mesh and shared edges flagged. (d) Shared edges are removed from the flagged triangles. (e) New triangles are generated by connecting the new grid point and remaining edges. (f) Newly generated triangles are reinserted into the mesh. The method is a modified version of the procedure described by Priester (2004). ................................ 112

Figure 5.5  UDEC-GBM model configuration for intact rock and synthetic rock mass simulations. ................................ 115
Figure 5.6  UDEC-GBM model configuration for tensile calibration using the Brazilian test methodology. ................................ 120
Figure 5.7  Calibration uncertainty in macro-scale shear strength parameters from the Darai Limestone sample. Results suggest coefficients of variation of 6.0% and 4.7% for the (a) cohesion and (b) friction angle, respectively. ................................ 122

Figure 5.8  (a) Co-dependencies are observed between the macro-scale cohesion and friction angle, explaining the reduction in peak strength vs. macro-scale attribute variation. (b) The coefficient of variation is demonstrated to be relatively insensitive to the confining stress, with an average value of 3.0%. ................................ 123

Figure 5.9  Brittle fracture development within UDEC-GBM UCS simulation with triangular mesh geometry. Fractures are found to concentrate within high angle contacts. ................................ 124

Figure 5.10 Simulations suggest an increased degree of uncertainty in the
UDEC-GBMs once DFNs are incorporated (Figure 5.5). CoV values
vary greatly between the cohesion (12.8%) and friction angle
(4.7%). .................................................................................................... 126
Figure 5.11 Inclusion of discrete fractures into the UDEC-GBMs results in a
reduction in the correlation coefficient between the macro-scale
cohesion and friction angle. .................................................................... 126
Figure 5.12 Brittle fracture development within DFN UDEC-GBM simulations
under UCS conditions. Fracture development is concentrated at
fracture tips within UDEC-GBM SRM simulations as wing cracks. .......... 127
Figure 5.13 UDEC-GBM model configuration for Voronoi mesh simulations. ............. 128
Figure 5.14 Calibration in uncertainty in peak UCS strength for Voronoi mesh
simulations.............................................................................................. 129

xv

Figure 5.15 Brittle fracture development within UDEC-GBM UCS simulation with
Voronoi mesh geometry. An increased degree of dispersed, high
angle fractures is observed compared to triangular mesh models
(Figure 5.9). ............................................................................................ 130
Figure 5.16 Wedging potential in UDEC models with Voronoi vs. triangular mesh
geometries. Triangular mesh was shown to have a predisposition
towards shear failure mechanisms, due to increased kinematic
freedom. This was in contrast to the Voronoi mesh simulations
which displayed a dominance of tensile failure mechanisms. .................. 135

xvi

List of Acronyms

ALARP	as low as reasonably practicable
CDF	cumulative distribution function
CoV	coefficient of variation
DEM	discrete element method
DFN	discrete fracture network
FCM	fuzzy c-means
FDEM	finite discrete element model
FDM	finite difference method
FEM	finite element method
FLAC	fast Lagrangian analysis of continua
FOS	factor of safety
FOSM	first-order, second-moment
GSI	geological strength index
LHM	Latin hypercube method
masl	meters above mean sea level
MRMR	mining rock mass rating
NPV	net present value
OTML	Ok Tedi Mining Ltd.
P20	number of fractures per unit area
P21	length of fractures per unit area
PBA	probability bounds analysis
p-box	probability box
PBDO	possibility-based design optimization
PEM	point estimate method
REV	representative elementary volume
RFCDV-DO	random/fuzzy continuous/discrete variables design optimization
RME	rock mass excavability
RMR89	Bieniawski's (1989) rock mass rating
SGS	sequential Gaussian simulation
SIS	sequential indicator simulation
SRF	strength reduction factor
SSR	shear strength reduction
SRM	synthetic rock mass
UCS	uniaxial compressive strength
UDEC	universal distinct element code
UDEC-GBM	universal distinct element code grain boundary model

1. Introduction

1.1. Background Motivation


Geotechnical design typically occurs in a state of limited information, where
multiple realizations of the subsurface are possible within the framework of the given
state of information (Read and Stacey 2009). Under such conditions, decision makers
routinely have to evaluate geotechnical designs with an incomplete knowledge of the
true state of the system.

This requires geotechnical engineers to have a sound framework with which to quantify and demonstrate the inherent uncertainty in their
designs to decision makers, such that decisions are made with a proper appreciation of
the risks associated with different designs (Mazzoccola et al. 1997; Steffen 1997; Kong
2002; Robertson and Shaw 2003; Steffen and Contreras 2007). Traditionally this has
been accomplished by geotechnical engineers through the use of deterministic methods
such as the factor of safety (Wyllie and Mah 2004; Read and Stacey 2009). Within this
framework, sensitivity analysis is conducted by evaluating multiple subsurface
realizations within a deterministic framework. However, this method is limited, as there
is no explicit way of quantifying the probability that a specific realization reflects reality.
This forces decision makers to evaluate the likelihood of subsurface realizations
qualitatively, and hence places a degree of subjectivity into the decision making process.
The alternative to this methodology is the use of reliability based design where
uncertainty is explicitly quantified and propagated through geotechnical design
calculations, using a theory of uncertainty such as probability theory (Harr 1996; Duncan
2000; Wiles 2006; Nadim 2007). Within this framework, uncertainty is quantified at the
parameter level by modelling statistical distributions to observed data. Uncertainties are
then propagated through geotechnical design calculations using probabilistic methods
such as Monte Carlo simulation or point estimate methods (Hammersley and
Handscomb 1964; Beckman 1971; Rosenblueth 1975).

Using this approach, geotechnical engineers can quantitatively evaluate the risks associated with different designs through stochastic simulation of multiple subsurface realizations (Terbrugge et al. 2006; Lorig 2009). These risks can then be presented to decision makers, whereby
sound decisions can be made within the framework of decision theory (Steffen 1997;
Steffen and Contreras 2007).
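The propagation step described above can be sketched in a few lines of code. The following Python snippet is illustrative only: it uses a generic dry planar-failure factor-of-safety expression, and the parameter distributions, geometry, and block weight are assumed values rather than site data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of Monte Carlo realizations

# Assumed input distributions (illustrative values, not site data)
c = rng.normal(25.0, 5.0, n)                 # cohesion, kPa
phi = np.radians(rng.normal(35.0, 3.0, n))   # friction angle, rad

# Assumed deterministic geometry for a dry planar failure
psi = np.radians(40.0)   # failure plane dip, rad
A = 93.3                 # failure plane area per unit width, m^2
W = 12_000.0             # sliding block weight per unit width, kN

# Limit-equilibrium factor of safety for each realization
fos = (c * A + W * np.cos(psi) * np.tan(phi)) / (W * np.sin(psi))

# Reliability measures that can be reported to decision makers
pof = float(np.mean(fos < 1.0))  # probability of failure
print(f"mean FOS = {fos.mean():.2f}, P(FOS < 1) = {pof:.3f}")
```

In contrast to a single deterministic factor of safety, the output distribution yields a probability of failure that can be carried directly into risk and decision analysis.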
This research aims to extend upon this reasoning, through the application of
uncertainty theory to geotechnical design. Research focuses on the application of novel
approaches within three areas of geotechnical design, namely: continuum, discrete
fracture network (DFN), and discontinuum modelling. Methodologies are tested using
data from the Ok Tedi mine site in Papua New Guinea. The study was conducted in
collaboration with SRK Consulting (Canada) Inc. and Ok Tedi Mining Ltd. (OTML).

1.2. Ok Tedi Mine


Ok Tedi is an active mine located in the Western Province of Papua New Guinea
(Figure 1.1). Mineralisation at the site was first reported in 1963, with production commencing in 1984 (Davies et al. 1978). The mine is situated at the headwaters of the
Ok Tedi River, on top of Mt. Fubilan at an elevation of 1800 masl. Located within the
Star Mountains, the surrounding mountainous topography and tropical latitude result in
difficult mining conditions (Hearn 1995). Rainfall at the site is extreme, with an annual
average of 9 to 11 m, resulting in rapid erosion of pit walls (de Bruyn et al. 2011).
The mine itself is an open pit operation with an approximate areal size of 5 km² and an average pit slope between 38° and 40° (de Bruyn et al. 2013). Current daily production at
the site is approximately 80,000 tons of ore with equal amounts of waste rock (Baczynski
et al. 2011).
The site geology is characterized by a repeating succession of sub-horizontal
siltstone, mudstone and limestone layers, which have experienced regional shortening
through thrust fault activity, and local up-doming from intrusive activity (Figure 1.2;
Baczynski 2011). Sedimentary units are subdivided into three formations based on
stratigraphic characteristics, namely: the Ieru Siltstone, Darai Limestone and Pnyang
Formation (Hearn 1995). The Ieru Siltstone is the oldest formation and is characterized
by Cretaceous, grey, calcareous siltstones and medium graded sandstones. Overlying this is the Darai Limestone, a massive, foraminiferal, carbonate-rich packstone, mudstone and wackestone formation, which varies in thickness from 50 to 800 m.
Finally, the Pnyang Formation comprises the youngest units, characterized by a
calcareous mudstone and siltstone with minor limestone fingers.

Figure 1.1	Location of the Ok Tedi Mine site in Papua New Guinea.

Repeated upward younging sections of the stratigraphic units are interrupted by a series of low angle thrust faults, referred to as the Taranaki, Parrots Beak and Basal
Thrust Zones (de Bruyn et al. 2011). These thrusts result in a stack of nappes with older
rock thrust upon younger rocks. The zones are characterized by highly fractured and
altered fault gouge, pyrite, magnetite skarn lenses, brecciated monzodiorite and
brecciated siltstone hornfels. The zones vary in thickness from 10-80 m. In addition to
thrust zones, two large sub-vertical faults cross cut the west wall, referred to as the

western (upper) and eastern (lower) Gleeson faults. The fault zones are characterized
by highly brecciated, granular and/or highly plastic gouge material. Translation along the
two faults has resulted in the formation of a disturbed zone referred to as the Gleeson
fracture zone. This zone is characterized by a large degree of fracturing and brecciation
of the host rock.

Figure 1.2	Plan view of the surface geology at the Ok Tedi mine site prior to open pit operation.

Sedimentary units are intruded by two localized igneous intrusions; namely, a monzonite porphyry at the north end of the pit and a southern monzodiorite (Baczynski
2011). Igneous emplacement has caused a slight up-doming of sedimentary strata, with
layers dipping 10 to 15 degrees away from pit walls. Proximal skarnification has resulted
in a large degree of fracturing and brecciation of host sedimentary units. Mineralization
within skarnified units presents the principal mineralization targets.

1.3. Research Objectives


Uncertainty quantification remains at the forefront of geotechnical design, as
application of the discipline remains fundamentally a predictive science. This thesis
aims to further our understanding of this critical area of research through the application
of novel approaches to uncertainty characterization. This is conducted within three areas of geotechnical design.


The first area of research explores the effects of spatial heterogeneity on
geomechanical model uncertainty. Traditional geotechnical practices subdivide slopes
into a series of geotechnical units, each with idealized constant properties (Wyllie and
Mah 2004; Lorig 2009). However, this simplification ignores inherent heterogeneity found within individual geotechnical units. This can lead to non-conservative design
practices, as the system is unable to preferentially fail through the weakest areas of the
rock mass (Griffiths and Fenton 2000; Hicks and Samy 2002; Jefferies et al. 2008). This
thesis explores these effects within the context of geostatistical theory. This is done
using a geostatistical method known as sequential Gaussian simulation, which is an original contribution within the field of geotechnical slope design.
In the second section of the thesis, the integration of DFNs with geomechanical
simulation codes is explored. Traditionally this amalgamation has been problematic, due
to the generation of unacceptable mesh elements during geomechanical model
construction (Painter 2011; Painter et al. 2012; Painter et al. 2014). This issue is explored, and the causal effects of modifying DFNs to facilitate geomechanical simulation are scrutinized. A novel approach to DFN generation is then presented to mitigate the integration issue.

The final area of research explores mesh dependency issues within UDEC-grain
boundary models (UDEC-GBM). UDEC-GBMs are a method of simulating rock masses
as a stochastic arrangement of discrete deformable and/or rigid blocks (Lorig and
Cundall 1987; Kazerani and Zhao 2010; Lan et al. 2010; Gao and Stead 2014).
However, to date, few studies have explored possible mesh dependencies within the
technique (Kazerani et al. 2012; Kazerani 2013; Kazerani and Zhao 2014). This thesis
aims to address this short-coming through the quantification of irreducible calibration
uncertainties within UDEC-GBMs. In addition, the effects of mesh shape dependencies
on micro-scale fracture mechanisms are explored.

1.4. Thesis Structure


The thesis is organized into six chapters. Chapters three through five have been
prepared in a manuscript style, and will be/have been submitted for publication. The
remaining chapters are written in traditional thesis format.
Chapter 2 provides a literature review on the use of uncertainty in geotechnical
design. The chapter begins with an overview of the types of uncertainty and the theories used to quantify them. An explanation for the use of probability theory instead of alternative theories of uncertainty is provided, followed by a


review of probabilistic models for propagating uncertainties through engineering design
calculations. An overview of the benefits and short-comings of complex vs. simple numerical models is then presented. Finally, the chapter concludes with an introduction
to reliability based design and risk analysis.
Chapter 3 explores the effects of rock mass heterogeneity on geomechanical
model predictions through the application of sequential Gaussian simulation. This method is used to stochastically simulate the inherent spatial heterogeneity within the
geological strength index (GSI) and uniaxial compressive strength (UCS) at the Ok Tedi
Mine site. This is a new approach within the field of open pit slope design. The chapter
is written in an extended manuscript format, with the intention of submission of an
abbreviated version to the Rock Mechanics and Rock Engineering Journal.

Chapter 4 presents a new approach to discrete fracture network (DFN) simulation, which takes into consideration geomechanical meshing routines during
fracture generation. This approach allows for the seamless integration between DFN
generation and geomechanical simulations, freeing researchers to focus on numerical
analysis, as opposed to cleaning up DFNs. The manuscript was prepared for and presented at the First International Discrete Fracture Network Engineering Conference in Vancouver, Canada, on October 20-22, 2014.
Chapter 5 explores mesh dependency issues within UDEC-GBMs. The chapter
explores irreducible uncertainties associated with meshing issues, as well as element
shape effects on the micro-mechanical failure behaviour. The manuscript has been
prepared for submission to the International Journal of Rock Mechanics and Mining
Sciences & Geomechanics Abstracts.
Chapter 6 concludes the thesis with a summary of the main conclusions from
each chapter, and provides recommendations to guide future researchers working in the
field of geotechnical uncertainty analysis.

2. Literature Review

Applied geotechnical design remains fundamentally a predictive science, whereby practitioners anticipate the behaviour of natural materials through the lens of
failure theory (Wyllie and Mah 2004). Due to its predictive nature, the quantification and
demonstration of uncertainties to decision makers remains one of the most important
issues within the discipline. This chapter provides an overview of the application of
uncertainty theory to geotechnical design. The chapter is divided into three primary
sections. The first deals with the types of uncertainty and associated theories. An
explanation is then given for the use of probability theory in this research as opposed to
alternate methods. The second section provides an overview of numerical simulation
techniques, and uncertainty propagation. Finally, in the last section, a brief overview of
reliability and risk analysis techniques is given.

2.1. Types of Uncertainty


Prior to any uncertainty analysis, a fundamental understanding and evaluation of
the types of uncertainty that may be encountered is paramount. This evaluation is critical
in the overall understanding of the system, if the probability of certain outcomes is to be
predicted. The uncertainties can be broadly sub-divided into two categories; namely,
aleatory and epistemic (Figure 2.1; Parry 1996; Oberkampf et al. 2002; Kiureghian
2007).
Aleatory uncertainty derives from the Latin term aleator, meaning dice thrower or
gambler, and refers to the natural randomness of a variable (Aughenbaugh 2006). It is
an inherent property of a parameter, and is the end result of the underlying processes
that formed it. Although there is a philosophical question as to the existence of aleatory
uncertainty, engineering design typically models uncertainty as an inherent property of a system. Examples of this include the spatial variability in the fracture density across a
study site, or the variation in peak particle acceleration from an earthquake.

Figure 2.1	Categorization of uncertainties present within engineering design (adapted from Nikolaidis 2005).

Epistemic uncertainty is caused by a lack of knowledge about a parameter, as opposed to an underlying inherent randomness (Nadim 2007). This is also referred to as systematic uncertainty, and includes measurement and model uncertainty. The term originates from the Greek 'episteme', which refers to knowledge. Examples include instrument limitations during laboratory testing, second-order uncertainties in statistical models due to sample size limitations, or assumption uncertainties in model formulation. In contrast to aleatory uncertainties, epistemic uncertainties can be reduced through increasing one's knowledge about a parameter, improving measurement method(s), or refining calculation methods.
Both types of uncertainty are often present in geotechnical design problems.
Quantification of the two types of uncertainty requires the employment of varying models
of uncertainty. In the case of aleatory uncertainties, quantification can be conducted
using traditional probability based methods such as the Monte Carlo, Rosenblueth point
estimate, or first-order, second-moment (FOSM) methods (Harr 1987); whereas,
epistemic uncertainties are reduced using methods such as fuzzy logic or evidence
theory (Zadeh 1965; Zadeh 1978; Shafer 1976). The issue with the former approach is
that since aleatory uncertainties are an inherent property of a variable, the incorporation
of new information does not reduce the uncertainty within a system, but simply refines
the estimate of it. In comparison, methods which focus on epistemic uncertainties aim to
reduce the uncertainty within a system by addressing our lack of knowledge pertaining to
a parameter.

2.2. Alternative Theories of Uncertainty


Since its introduction in the 17th century, probability theory has been at the
forefront of uncertainty quantification. However, the method has been criticized as there
is still no consensus on exactly what a probability is.

At least four different

interpretations of probability have been proposed including: theoretical probabilities


(classical

Laplace

interpretation),

relative

frequencies,

probabilities, and logical probabilities (Smets 1998).

subjective

(Bayesian)

Additionally, the theory is

predominantly concerned with aleatory uncertainty, and has difficulty dealing with
epistemic uncertainty.

These short-comings have led to proposed of alternative

interpretations of uncertainty.

These include fuzzy set theory (Zadeh 1965; Zadeh

1968), possibility theory (Zadeh 1978; Giles 1982; Dubois and Prade 1985; Klir 1992),
evidence theory (Dempster 1967; Shafer 1976), Hints model (Kohlas and Monney 1995),
imprecise probability theory (Good 1950; Smith 1961; Walley 1991), probability of
provability model (Ruspini 1986; Pearl 1988; Smets 1991), and the transferable belief
model (Smets 1988, 1990; Smets and Kennes 1994).

These alternative methods attempt to fix the shortcomings of probability theory through an improved characterization of epistemic uncertainties. For example, whereas probability theory is concerned with
the belonging of a poorly defined individual to a well-defined set, fuzzy set theory
concerns itself with the inclusion of a well-defined individual within a poorly defined set
(Smets 1998).
The following section attempts to summarize four of the main alternative methods
of uncertainty. These include: fuzzy set theory (Zadeh 1965), possibility theory (Zadeh
1978), evidence theory (Shafer 1976) and imprecise probabilities (Walley 1991).
Examples of the application of each within the engineering disciplines are given following
a brief introduction, along with criticism of the theories. The section is not meant to be
an exhaustive study of each type, but instead presents only an introduction to the
alternative methods.

2.2.1. Fuzzy Set Theory


Fuzzy set theory is an extension of classical set theory (Zadeh 1965; Zadeh 1968). In the classical approach, an element is a member of a set according to a binary function, whereby the element either belongs (in which case the membership is 1) to or
does not belong to a set (membership is 0; Figure 2.2). In comparison, fuzzy set theory
allows for a gradational membership function, whereby an element may be either a full
or partial member of a set.

This alternative method allows for the incorporation of perception-based observations of uncertainties (Zadeh 2002; Zadeh 2005). It has specific application in linguistics theory, where language is typically vague and/or ambiguous. For example, if a group of people were surveyed as to what temperature
they would consider 'hot', we can expect a range in responses. This range prevents the application of classical set theory and the definition of a crisp boundary. Instead, a fuzzy-based approach is more applicable, whereby the range in responses is used to define a membership function to characterize what 'hot' is.
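This 'hot' example can be made concrete with a simple membership function. The sketch below is a minimal Python illustration; the ramp breakpoints are assumed values standing in for fitted survey responses.

```python
import numpy as np

def hot_membership(t, lo=25.0, hi=35.0):
    """Gradational (fuzzy) membership in the set 'hot'.

    Below `lo` degrees the membership is 0, above `hi` it is 1, with a
    linear ramp between. The breakpoints are illustrative assumptions.
    """
    return float(np.clip((t - lo) / (hi - lo), 0.0, 1.0))

def hot_crisp(t, threshold=30.0):
    """Classical (crisp) set membership: a single binary threshold."""
    return 1.0 if t >= threshold else 0.0

print(hot_membership(20.0))  # 0.0 -- not hot
print(hot_membership(30.0))  # 0.5 -- partially hot
print(hot_membership(40.0))  # 1.0 -- fully hot
```

The gradational function preserves the spread in survey responses, whereas the crisp threshold forces every borderline temperature into one of two bins.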

Figure 2.2	Classical set theory assumes crisp membership boundaries, whereas fuzzy set theory allows for gradational contacts.

Examples of fuzzy set theory within the geotechnical community include the
extension of rock mass classification schemes to include fuzzy logic. These include:

	Bieniawski's (1989) rock mass rating (RMR89) system (Aydin 2004);
	the rock mass excavability (RME; Bieniawski et al. 2006, 2007; Bieniawski and Grandori 2007) system (Hamidi et al. 2010); and
	Hoek et al.'s (2002) geological strength index (GSI; Sonmez et al. 2003; Zhang et al. 2009).

Other applications include:

	Extension of the FOSM method to include fuzzy-based analysis in the determination of a reliability index for a sedimentary rock slope in Aliano, Italy (Giasi et al. 2003).
	Application of fuzzy set theory with alpha cut simulation principles to assess the reliability of rock slopes (Park et al. 2008; 2012). The alpha cut principles are an extension of Monte Carlo methods to fuzzy data.
	Use of fuzzy theory to assess epistemic uncertainties in tunneling design through the estimation of fuzzy safety factors (Harrison and Hudson 201; Huang et al. 2014).
	Extension of limit equilibrium analysis to include fuzzy sets in the estimation of failure probabilities during slope stability analysis (Dodagoudar and Venkatachalam 2000).
	Incorporation of fuzzy c-means (FCM) clustering into the distinct element method (DEM) through the design of a fuzzy partitioning algorithm (Harrison et al. 2001).
	Integration of fuzzy theory with the finite element method (FEM) to produce fuzzy finite element models (Hanss 2005).

Despite the examples of fuzzy set logic in geotechnical design problems, the
theory is limited in its ability to represent uncertainty for two primary reasons. First,
integration of fuzzy set theory in geotechnical design problems is often complicated due


to limited detailed operational definitions¹ (Cooke 2004; Aughenbaugh 2006).


Specifically, there is currently no operational definition to define the membership function when a linguistic population set is absent. For example, what does it mean to say that the membership of an element in a set is 0.2 as opposed to 0.3, without the context of a linguistic interpretation, such as the 'hot' problem previously discussed? Many publications overlook this issue and present fuzzy membership functions without presenting a clear justification for the definitions. The second issue that arises is that the practical application of fuzzy theory to geotechnical design is limited by practitioners' unfamiliarity with the method. Information economics suggests that practicing engineers should employ
methods which provide the greatest net value to a project, given the specific expertise of
the engineering group (Marschak 1974). However, fuzzy theory is rarely taught in depth
at the undergraduate level, resulting in engineers being more familiar and confident
applying classic probability theories of uncertainty (Aughenbaugh and Paredis 2006).

2.2.2. Possibility Theory
Possibility theory was first introduced by Zadeh (1978) as an extension of his theory of fuzzy sets (Zadeh 1965). The primary basis of possibility theory is the mapping of the possibility of an event A occurring, given that A is a subset of the universal set X. The primary axioms are such that:

Π(∅) = 0          Equation 2.1

Π(X) = 1          Equation 2.2

Π(A ∩ B) = min(Π(A), Π(B))          Equation 2.3

where Π(A) is the possibility of the event occurring. Based on these axioms, three views of

possibility theory have been advanced (Aughenbaugh 2006). The first is based on fuzzy
set theory introduced by Zadeh (1965) and assumes a fuzzy set basis for possibility (Zadeh 1978). The second is that possibility is the limit of plausibility for nested bodies of evidence (Klir 1992). Finally, Giles (1982) argues that possibility is the upper limit of probability, similar to the upper bounds of imprecise probability theory formalized by Walley (1991). Du et al. (2006) argue that for geotechnical design problems, possibility theory may be more appropriate than probability theory in greenfield projects where little information is available.

¹ An operational definition is considered to be a set of rules which indicate how a set of mathematical definitions are intended to be interpreted (Nagel 1960; Cooke 2004).
Examples of possibility theory in engineering design include:

	The development of a possibility-based design optimization (PBDO) method to account for epistemic uncertainties in structural design in the face of limited information (Du et al. 2006; Youn et al. 2007).
	The improvement of project decision analysis through the modelling of investment uncertainties using possibility theory (Mohamed and McCowan 2001).
	The comparison of possibility and probability methods in reliability analysis of geotechnical design problems (Peschel and Schweigers 2003), and catastrophic systems (Nikolaidis et al. 2004).
	The use of the random/fuzzy continuous/discrete variables design optimization (RFCDV-DO) method for conducting uncertainty analysis using both probability and possibility analysis (Huang and Zhang 2009).

The primary issue with possibility-based design methods is that they tend to underestimate the risk of catastrophic failures for systems with many failure modes (Nikolaidis et al. 2004). This is due to the inability of possibility theory to take into consideration co-dependencies within a dataset, which has made it difficult to define direct operators between probabilities and possibilities. This can lead to non-conservative design practices, which should be avoided when sufficient information is available to utilize probabilistic models. However, possibility-based design methods may be useful when subjective decisions are required in the face of limited information (Du et al. 2006). It should also be noted that there is a fundamental philosophical issue with possibility theory, as there is no consensus among practitioners on a clear operational definition of possibility (Aughenbaugh 2006).

2.2.3. Evidence Theory

Evidence theory, also referred to as Dempster-Shafer theory, is an alternative to probability theory first proposed by Dempster (1967) and extended by Shafer (1976).
The theory is a generalization of the Bayesian theory of subjective probabilities. It
extends on the Bayesian approach through the introduction of belief functions, which allow for the formulation of one's degrees of belief for a question based on the available evidence of related questions (Shafer 1990). The belief function is mathematically defined for a subset A by:

Bel(A) = Σ(B ⊆ A) m(B)          Equation 2.4

where m(B) can be thought of as all the relevant and available evidence within the set X that supports set A. Evidence can be obtained from many sources including complete
experimental frequency data (such as probabilities), sparse experimental results (such
as possibilities), and/or expert opinions (Aughenbaugh 2006).
The theory is based on two primary ideas. First, one obtains one's degrees of belief for a question based on the available evidence for related questions. Shafer (1992) provides an example of this principle:
To illustrate the idea of obtaining degrees of belief for one question from
subjective probabilities for another, suppose I have subjective probabilities for the
reliability of my friend Betty. My probability that she is reliable is 0.9, and my
probability that she is unreliable is 0.1. Suppose she tells me a limb fell on my
car. This statement, which must be true if she is reliable, is not necessarily false if
she is unreliable. So her testimony alone justifies a 0.9 degree of belief that a
limb fell on my car, but only a zero degree of belief (not a 0.1 degree of belief)
that no limb fell on my car. This zero does not mean that I am sure that no limb
fell on my car, as a zero probability would; it merely means that Betty's testimony
gives me no reason to believe that no limb fell on my car. The 0.9 and the zero
together constitute a belief function.


The second principle is that one uses Dempster's rule for combining independent items of evidence to obtain one's degrees of belief (Dempster 1968). The mass functions m1 and m2 are combined using the equation (Shafer 1986):

m1,2(A) = [ Σ(B ∩ C = A) m1(B)·m2(C) ] / (1 − K),  A ≠ ∅          Equation 2.5

where K is a measure of the conflict between the two mass sets defined by:

K = Σ(B ∩ C = ∅) m1(B)·m2(C)          Equation 2.6

Examples of the use of evidence theory in engineering design include:

	Comparisons between certainty factors and fuzzy evidence theory approaches to evaluate slope reliability uncertainties near Fabriano, Italy (Binaghi et al. 1998).
	The incorporation of professional judgment to quantify rolling element bearing design uncertainties, through the use of evidence theory (Butler et al. 1995).
	The development of an uncertainty approximation approach to assess composite material structures and airframe wing aeroelastic design problems when only limited and/or imprecise data are available (Bae et al. 2004).
	The incorporation of expert opinions in the evaluation of risk priority numbers (RPN) to determine the risk priority order of failure modes for aircraft rotor blades using evidence theory (Yang et al. 2011).

One of the issues with the current implementation of evidence theory is that Dempster's rule of combination can lead to seemingly irrational results (Aughenbaugh 2006). An example of this was presented by Zadeh (1984):


Suppose that a patient is seen by two physicians regarding the patient's neurological symptoms. The first doctor believes that the patient has either meningitis, with a probability of 0.99, or a brain tumor, with a probability of 0.01. The second physician believes the patient actually suffers from a concussion with a probability of 0.99 but admits the possibility of a brain tumor with a probability of 0.01. Using the values to calculate m(brain tumor) with Dempster's rule, we find that m(brain tumor) = Bel(brain tumor) = 1. Clearly, this rule of combination yields a result that implies complete support for a diagnosis that both physicians considered to be very unlikely.
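The arithmetic in Zadeh's example can be reproduced with a short script. The sketch below is a minimal Python implementation of Dempster's rule (Equations 2.5 and 2.6), not production code; the hypothesis names follow the quoted example.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    using Dempster's rule: accumulate products over intersecting focal
    elements and renormalize by the conflict K."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc  # K: mass assigned to the empty set
    return {a: v / (1.0 - conflict) for a, v in combined.items()}, conflict

MEN = frozenset({"meningitis"})
TUM = frozenset({"brain tumor"})
CON = frozenset({"concussion"})

m1 = {MEN: 0.99, TUM: 0.01}  # first physician
m2 = {CON: 0.99, TUM: 0.01}  # second physician

m12, K = dempster_combine(m1, m2)
print(K)          # ~0.9999: almost all of the combined mass is in conflict
print(m12[TUM])   # ~1.0: full belief in the diagnosis both considered unlikely
```

The near-total conflict (K ≈ 0.9999) is renormalized away, which is exactly why the combination assigns complete support to the brain tumor hypothesis.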
In addition, evidence theory is rarely taught in detail at the undergraduate level, leading
to an information economics problem, whereby practicing engineers are unfamiliar with
the theory and unlikely to apply it in practice.

2.2.4.  Imprecise Probabilities
Imprecise probability theory is an extension of traditional probability theory. Its proponents argue that one's degree of belief cannot be precisely known, but instead only bounded by upper and lower limits (Good 1950, 1983; Smith 1961, 1965; Sarin 1978; Kyburg 1987; Walley 1991; Weichselberger 2000) or by sets of probabilities (Tintner 1941; Hart 1942; Levi 1974). As an extension of traditional probability theory, it has an advantage over the aforementioned alternative theories of uncertainty, in that it has clear operational definitions (Aughenbaugh 2006). The general premise of the theory is that the imprecision in one's degree of belief should be inversely proportional to the amount of evidence. Therefore, as more evidence becomes available, a decision maker should narrow his or her probability bounds to improve his or her confidence in the outcome.
The fundamental foundation of imprecise probability theory, as defined by Walley (1991), is the definition of an upper and lower bound on one's confidence in an outcome. In basic terms, the lower bound should reflect the highest price at which a decision maker would place a bet, whereas the upper bound reflects the lowest price at which the decision maker would buy the opposite of the gamble (Aughenbaugh 2006). Any point between the bounds reflects a fair price for the bet, where the decision maker would be willing to take either side of the gamble. This concept can be represented through probability bounds analysis (PBA) by using probability boxes, or p-boxes (Ferson and Donald 1998). p-boxes are defined from a set of cumulative distribution functions (CDFs) that bound one's belief in the distribution of an attribute based on the current state of information. By specifying a statistical model, and upper and lower bounds for the model parameter(s), one can visually define an area of epistemic uncertainty (Figure 2.3).

[Figure: probability box bounded by upper and lower CDFs; axes show attribute value versus cumulative probability.]

Figure 2.3  Probability box concept from probability bounds analysis (PBA). The probability box is bounded by the cumulative distribution functions (CDFs) of the upper and lower bound estimates of the statistical model parameters. The analysis assumes a predefined statistical model to represent the attribute, with the box defined by the upper and lower bounds of the attribute mean and of the standard deviation.
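The p-box concept in Figure 2.3 can be sketched numerically with the standard library alone. The snippet below is a minimal illustration rather than a full PBA implementation; the interval values for the mean and standard deviation are hypothetical.

```python
import math

def norm_cdf(x, mu, sigma):
    """Normal CDF via the error function (no external dependencies)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def p_box(x, mu_bounds, sigma_bounds):
    """Pointwise lower/upper CDF bounds at x for a normal model whose mean
    and standard deviation are known only to intervals; for a fixed x the
    normal CDF is monotone in each parameter, so the extremes occur at the
    corner combinations of the parameter box."""
    corners = [norm_cdf(x, mu, s)
               for mu in mu_bounds for s in sigma_bounds]
    return min(corners), max(corners)

# e.g. an attribute with mean in [25, 35] and standard deviation in [4, 6]:
lo, hi = p_box(30.0, (25.0, 35.0), (4.0, 6.0))  # epistemic gap at x = 30
```

The vertical gap between `lo` and `hi` at a given attribute value is the epistemic uncertainty shown as the shaded area in Figure 2.3; it narrows as the parameter intervals narrow with additional evidence.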

The use of imprecise probability theory is not widespread within the rock
mechanics community; however, a few examples of the method do exist from other
branches of engineering design. These include:

- The demonstration of the general advantages of using probability bounds analysis to make engineering design decisions, through comparisons with traditional probabilistic techniques (Aughenbaugh and Paredis 2005).

- The use of imprecise probability theory to analyze spatial heterogeneity problems in slope stability analysis, through the application of a probability bounds analysis (Schweiger and Peschl 2005).

- The application of non-parametric methods to bound the probabilities in the sensitivity characterization of a cantilever sheet pile wall (Oberguggenberger and Fellin 2008).

- The application of a p-box approach to characterize epistemic uncertainties in the assessment of power supply control system reliability (Karanki et al. 2009).

- The application of an interval Monte Carlo simulation approach to propagate both epistemic and aleatory uncertainties through structural engineering design calculations (Zhang et al. 2010).

The primary criticism of imprecise probability theory is that it inevitably leads to one of the paradoxes of probability theory: that uncertainty is hierarchical. For example, since typically only a subset of a population is ever available to a researcher, there is uncertainty in the first-order probability estimate. This leads to second-order uncertainties and meta-probabilities, which imprecise probability theory attempts to deal with through either bounds analysis (Walley 1991) or sets of probabilities (Levi 1974). The issue, however, is that second-order uncertainty directly leads to the proposal of higher-order uncertainties, which introduces the concept of an infinite regression of meta-probabilities (Wang 2001). Practitioners of traditional probability theory have attempted to avoid this paradox by arguing that the inclusion of any meta-probabilities overcomplicates the theory (Smets 1998).

2.3. Probability Theory of Uncertainty


Since its introduction in the 17th century, probability theory has been at the forefront of uncertainty analysis, due to its detailed handling of aleatory uncertainties and precise, albeit controversial, operational definitions (Hájek 2012). The theory is based on three principal axioms, namely non-negativity, normalization, and additivity (Kolmogorov 1933). The first axiom of non-negativity states that:

P(E) \geq 0

Equation 2.7

where P(E) is a real number and E is a subset of the event space. The second axiom of normalization deals with the sample space (\Omega), and states that:

P(\Omega) = 1

Equation 2.8

The final axiom of additivity states that:

P(E_1 \cup E_2 \cup \dots \cup E_n) = \sum_{i=1}^{n} P(E_i)

Equation 2.9

when the events E_1, E_2, \dots, E_n are mutually exclusive. Using these axioms, the probability of an event can be defined and handled using mathematical constraints.

The theory remains the most widely used method for uncertainty quantification, due to its ability to relatively easily propagate aleatory uncertainties through design calculations. However, despite its widespread use, the theory has been criticized for its difficulty in representing epistemic uncertainties. This has led to a number of alternative interpretations of uncertainty, which were discussed in the previous section. However, the aforementioned methods are difficult to utilize compared to probabilistic methods, for a number of reasons:

- Data are commonly collected and characterized within the context of probability theory. This makes it difficult to utilize alternative methods, such as fuzzy set theory, as fuzzy set classification schemes such as Aydin (2004) or Sonmez et al. (2003) are rarely used. Although this does not preclude future studies from characterizing data using alternative methods, it is easier if data are collected in the context of the analysis methods.

- This first issue directly leads to the second: the familiarity of practicing engineers with the alternative methods. As stated previously, the choice of data analysis techniques is fundamentally an information economics problem (Marschak 1974). Practicing engineers should utilize the method(s) that provide the greatest net benefit to the project, given their specific expertise. As a result, the use of alternative theories is difficult to sell to practitioners, as they are rarely taught such theories in detail at the undergraduate level. Practicing engineers are therefore predisposed towards probability theory, due to familiarity and confidence in applying the method (Aughenbaugh and Paredis 2006).

- The third issue with forgoing probability theory is that not only must one be familiar with the alternative theory, but one must also develop alternative methods for uncertainty propagation through design calculations (Cooke 2004). This can lead to unclear design practices, such as the use of alpha cuts in fuzzy set theory, whereby the theory is reduced to a probabilistic interpretation in order to apply Monte Carlo methods (Park et al. 2008, 2012). This reduction adds additional layers of complexity to already complicated systems, further convoluting analyses.

- Finally, alternative methods of uncertainty often lack detailed operational definitions, or rules with which the mathematical definitions should be interpreted (Cooke 2004). Although this does not preclude the future development of such definitions, the current lack thereof makes implementation of the theories difficult. This is not to say that probability theory doesn't suffer from similar issues, with at least four different interpretations of probability, including the classical (Laplace), frequentist, subjective or Bayesian, and logical interpretations (Smets 1998; Hájek 2012). However, although disagreements are found in the interpretation, frameworks exist under which probability theory can be interpreted.

Given these limitations, probability theory remains the most well-developed theory of uncertainty. The plethora of uncertainty propagation and representation methods, its longevity, and practicing engineers' familiarity with it make it the most widespread method. It is for these reasons that uncertainty analysis within this thesis was conducted in the context of probability theory.

2.4. Probabilistic Models for Dealing with Uncertainty


Uncertainty characterization within the engineering disciplines is typically
concerned with multivariate formulations, wherein the output uncertainty is the result of
multiple underlying random input variables. Reliability analysis is therefore concerned
with the propagation of input variable uncertainty through engineering design
calculations. This has resulted in the formulation of several probabilistic, uncertainty
propagation methods. The methods can broadly be categorized into three subsets, each
of which has its own assumptions and groups of advocates (Harr 1996). These include:
first-order, second-moment methods (FOSM), point estimate methods (PEM), and Monte
Carlo methods. The following provides a brief overview of each of these methods.

2.4.1.  First-Order, Second-Moment Methods

First-order, second-moment (FOSM) methods provide an analytical approximation of the output mean and variance based on the input attribute moments (Ang and Tang 1984). The method approximates the response function based on either partial derivatives or a truncation of the Taylor series expansion of an output function (Y = g(X_1, X_2, \dots, X_n)) about the mean values of the random input variables (\mu_{X_1}, \mu_{X_2}, \dots, \mu_{X_n}; Wong 1985). If it is truncated at its linear terms, the following first-order estimates are obtained, assuming the input variables are independent (Nadim 2007):

\mu_Y \approx g(\mu_{X_1}, \mu_{X_2}, \dots, \mu_{X_n})

Equation 2.10

\sigma_Y^2 \approx \sum_{i=1}^{n} \left( \frac{\partial Y}{\partial X_i} \bigg|_{\mu_{X_i}} \right)^2 \sigma_{X_i}^2

Equation 2.11

Co-dependencies in the dataset may also be taken into consideration during formulation;
however, the analysis becomes more complex and laborious (Harr 1996).
The method is advantageous as it has easier mathematical requirements when
dealing with simple systems (i.e. independent variables), which do not require complex
computation (Harr 1996). It also only requires knowledge of the statistical moments,
rather than complete distributions (Wong 1985). The downside of this simplification is
that the FOSM method only approximates the output moments instead of the entire
distribution. Failure probability estimates must therefore assume a distribution model,
with limited information on the system behaviour (Nadim 2007). In addition, the method
can also become quite complex when dealing with complicated output functions and
correlated variables. In such cases, attainment and evaluation of the derivatives can
become quite complicated, if not impossible when using numerical methods (Hammah et
al. 2009). As such, the method is not appropriate for propagating uncertainties through
numerical analyses.
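Equations 2.10 and 2.11 translate into only a few lines of code when the partial derivatives are approximated by finite differences. The sketch below assumes independent inputs; the factor-of-safety function `fs` and its parameter values are hypothetical illustrations, not properties of any real slope.

```python
import math

def fosm(g, means, sds, h=1e-5):
    """First-order, second-moment estimate of the mean and variance of
    y = g(x1, ..., xn) for independent inputs (Equations 2.10 and 2.11),
    with partial derivatives taken by central differences at the means."""
    mu_y = g(*means)
    var_y = 0.0
    for i, (mu, sd) in enumerate(zip(means, sds)):
        up, dn = list(means), list(means)
        up[i], dn[i] = mu + h, mu - h
        dgdx = (g(*up) - g(*dn)) / (2.0 * h)  # dY/dXi at the mean point
        var_y += (dgdx * sd) ** 2
    return mu_y, var_y

# Hypothetical response of cohesion (kPa) and friction angle (degrees):
fs = lambda c, phi: c / 50.0 + math.tan(math.radians(phi))
mu_fs, var_fs = fosm(fs, [25.0, 35.0], [5.0, 3.0])
```

For a linear response the first-order estimate is exact; for nonlinear responses, such as `fs` above, it is only an approximation about the mean point, which is the limitation the text describes.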

2.4.2.  Point Estimate Methods

The point estimate method (PEM) was first introduced by Rosenblueth (1975) and later expanded further (Rosenblueth 1981). The method allows one to propagate uncertainties through output functions, even if no closed-form analytical solution is available. The principle of the method is to estimate the first and second moments (mean and variance) of the output function (Y = g(X)) using point estimates, with weights (P_+, P_-), from the random input variable(s). Point estimates are made such that the following equations are satisfied (Rosenblueth 1975):

P_+ = \frac{1}{2} \left[ 1 - \frac{\nu_X / 2}{\sqrt{1 + (\nu_X / 2)^2}} \right]

Equation 2.12

P_- = 1 - P_+

Equation 2.13

x_+ = \bar{x} + \sigma_X \sqrt{P_- / P_+}

Equation 2.14

x_- = \bar{x} - \sigma_X \sqrt{P_+ / P_-}

Equation 2.15

where \nu_X is the skewness of the input variable. If it is assumed that the skewness of the system is equal to zero (\nu_X = 0), which is the case for a Gaussian (normal) distribution, then the equations simplify to (Harr 1996):

P_+ = P_- = \frac{1}{2}

Equation 2.16

x_\pm = \bar{x} \pm \sigma_X

Equation 2.17

Variability in the output function (Y = g(X)) can then be estimated from the point estimates, when Y admits a Taylor expansion about \bar{x}, using the equation (Rosenblueth 1975):

E[Y^n] = P_+ y_+^n + P_- y_-^n

Equation 2.18

where E[Y^n] is the expected value of the nth-order moment of the output function, and y_\pm = g(x_\pm). Using this method, one can propagate uncertainties through the output function (Y = g(X)) and obtain an estimate of the output mean and variance. This is achieved by conducting two simulations, at one standard deviation above and below the input variable mean.

The system can be further expanded for multiple, independent variables by extending the weighting function (Equation 2.18) such that:

P_+ = P_- = \frac{1}{2^n}

Equation 2.19

In this case, the required number of simulations becomes equal to 2^n. Additional extensions to the system can be used in the case of co-dependent variables, in which case the weighting system is extended to take into consideration correlation coefficients between the modelled input parameters (Rosenblueth 1981). However, the system becomes considerably more complex compared to the independent case.


Although the method is very simple, it has been shown to be very accurate if certain assumptions about the system can be made (i.e. non-skewed inputs, etc.; Christian and Baecher 1999, 2002). The method is also advantageous over the FOSM method, as it allows for uncertainty propagation even when no closed-form analytical solutions are available. In addition, uncertainty estimates can be made when only partial input information is available, as the method only requires knowledge of the statistical moments. The downside is that, similar to the FOSM method, failure probability estimates require one to assume an output distribution model. The system can also become quite complex when co-dependencies exist between the input variables. Additional uncertainties also exist with the skewness assumptions, as underlying input distributions must display symmetric properties².

Two main advantages exist with the PEM in comparison to Monte Carlo simulation. First, it does not suffer from the same tail distribution uncertainty as the latter method, if an appropriate output distribution model can be assumed. Second, if the number of input parameters is minimal, it requires fewer simulations to estimate the output distribution. This is due to the 2^n simulation requirement of the PEM. However, this can also be a disadvantage, as the number of required simulations increases exponentially with the number of random input variables (Hammah et al. 2009). For example, if we assume a simple perfectly-plastic system, with linear-elastic properties and a Mohr-Coulomb failure criterion, we end up with a minimum of four input

² Although the normal distribution displays symmetric properties, a number of commonly used distributions within the geotechnical engineering discipline are asymmetric (i.e. log-normal, Weibull, exponential, etc.).

parameters for each geotechnical unit (i.e. Young's modulus, Poisson's ratio, friction angle and cohesion; Labuz and Zang 2012). Therefore, if parametric analyses are conducted using the PEM, the minimum number of simulations is equal to 16^m, where m is equal to the number of geotechnical domains. Under this scenario, if more than two geotechnical domains are present, then the number of simulations would exceed 1,000, making the method computationally excessive compared to Monte Carlo techniques. This is exacerbated in heterogeneous systems, where each individual model node can be thought of as a random variable. Researchers have attempted to address this shortcoming through modifications to the PEM (Harr 1989; Hong 1998). However, these alternative methods increase the spread in point estimates, which can lead to unrealistic input values (Christian and Baecher 2002).
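For symmetric inputs, the 2^n-point scheme of Equations 2.16 to 2.19 can be sketched in a few lines. The following is a minimal illustration assuming independent, zero-skew variables; the product function used for the check is purely illustrative.

```python
from itertools import product

def rosenblueth_pem(g, means, sds):
    """Two-point estimate method for n independent, zero-skew inputs:
    evaluate g at the 2**n corner points mu_i +/- sigma_i, each with
    weight 1/2**n, then form the first two moments of the output."""
    n = len(means)
    w = 1.0 / 2 ** n
    e_y = 0.0
    e_y2 = 0.0
    for signs in product((-1.0, 1.0), repeat=n):
        y = g(*[mu + s * sd for mu, s, sd in zip(means, signs, sds)])
        e_y += w * y
        e_y2 += w * y * y
    return e_y, e_y2 - e_y ** 2  # output mean and variance

# Illustrative check with y = x1 * x2: the PEM reproduces the exact
# mean (50.0) and variance (204.0) of this product of independent inputs.
mu_y, var_y = rosenblueth_pem(lambda x1, x2: x1 * x2, [10.0, 5.0], [2.0, 1.0])
```

Note how the loop over `product((-1, 1), repeat=n)` makes the exponential growth in simulations explicit: each added random variable doubles the number of required function evaluations.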

2.4.3.  Monte Carlo Methods

Monte Carlo simulation is a very powerful and flexible, broad class of uncertainty propagation algorithms that can be applied to a wide range of problems (Hammersley and Handscomb 1964; Beckman 1971). The modern implementation of the method was first introduced in the late 1940s by Stanislaw Ulam while working at the Los Alamos National Laboratories (Eckhardt 1987). However, the underlying random sampling procedures had been used before, such as Buffon's needle solution to calculate π in 1777, or Laplace's probabilistic generalization of the method in 1812 (Harr 1996).
The key principle of the method is that random sampling of the input variables is used to estimate uncertainty in the output variables (Hammersley and Handscomb 1964; Beckman 1971). As such, the method requires that the probability distributions of all input variables are known prior to the analysis. These can be defined through set distribution models, such as the normal, log-normal or uniform distribution, or through non-parametric methods. Once these have been obtained, a series of deterministic computations is conducted, with input variables selected randomly for each simulation from their respective probability distributions. Output uncertainty is then estimated by summarizing the resultant response variable statistics.

Since random sampling is the key principle of the method, the generation of random sample sets has remained a key area of research within the discipline (Rubinstein et al. 1981). Present-day computational methods typically rely on pseudorandom number generators, which are based on deterministic procedures. These procedures produce long sequences of apparently random values based on seed values and recurrence relationships (Harr 1981). Although these number generators are not truly random, they are typically sufficiently random for most cases, provided that the number of simulations is less than the recurrence interval.
One of the key advantages of the Monte Carlo method is that the number of required simulations is independent of the number of random input variables, unlike the PEM. However, the method has been criticized because there is no set required number of simulations; increasing the number of simulations only increases the accuracy. In theory, the accuracy of the method is directly proportional to the number of simulations, and decreases with higher-order moments (Ahn and Fessler 2003):

SE_{\mu} = \frac{\sigma}{\sqrt{N}}

Equation 2.20

SE_{\sigma^2} = \sigma^2 \sqrt{\frac{2}{N-1}}

Equation 2.21

where SE_{\mu} and SE_{\sigma^2} are the standard errors in the mean and variance, \sigma is the output standard deviation, and N is the number of simulations. This accuracy issue can lead to a very computationally intensive system when one requires a high degree of accuracy in the output estimates. This is particularly pronounced with tail distribution estimates, which are extremely sensitive to the distribution accuracy, and thus require accurate knowledge of the higher-order moments (Nadim 2007; Hammah et al. 2009).
To overcome the computational inefficiency of the Monte Carlo method, systematic schemes for selecting input variables have been proposed, known as Latin hypercube methods (LHM; Iman and Conover 1982; Tang 1993; Olsson and Sandberg). Instead of purely random sampling, the LHM sub-divides the input variable domains into a series of equally probable intervals, and then obtains random samples from each of these bins. This ensures that the random set more accurately adheres to the input distributions, and reduces the influence of outlier statistics when a limited number of simulations are conducted. This methodology results in a reduction in the number of required simulations and produces a more accurate approximation of the response variable distribution.
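The contrast between plain Monte Carlo sampling and Latin hypercube sampling can be sketched with the Python standard library alone; the distribution parameters and sample size below are arbitrary illustration values, and the standard-error figure follows Equation 2.20.

```python
import math
import random
import statistics

def latin_hypercube_normal(n, mu, sigma):
    """Latin hypercube sample of a normal variable: one uniform draw from
    each of n equally probable strata, mapped through the inverse CDF and
    shuffled so the stratification is not mistaken for ordering."""
    dist = statistics.NormalDist(mu, sigma)
    samples = [dist.inv_cdf((k + random.random()) / n) for k in range(n)]
    random.shuffle(samples)
    return samples

random.seed(1)
n = 1000
plain = [random.gauss(50.0, 10.0) for _ in range(n)]   # purely random sampling
lhs = latin_hypercube_normal(n, 50.0, 10.0)            # stratified sampling
se_mean = 10.0 / math.sqrt(n)  # Equation 2.20: ~0.316 for sigma = 10, N = 1000
```

Because every stratum of the distribution contributes exactly one sample, the LHM mean typically sits much closer to the target than the plain Monte Carlo mean for the same N, which is the efficiency gain described above.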

2.5. Numerical Simulation


Numerical simulation within the field of geomechanics is a difficult process, due to the complex system dynamics within geological media. Substrates often exhibit complicated behavioural responses due to complex interactions between discontinuous and continuous materials (Hoek and Brown 1980a). These complex behavioural characteristics have led to the development of multiple numerical simulation methods for geological materials, including continuum, discontinuum and hybrid methods (Jing 2003; Stead et al. 2006).
Continuum methods are the simplest approach to numerical analysis, and conceptualize the material as a continuous substrate. A constitutive criterion is used to describe the behavioural characteristics of the material, such as the Mohr-Coulomb (Wyllie and Mah 2004) or Hoek-Brown (Hoek et al. 2002) relationships. Examples of numerical approaches employed for continuum analysis include the finite element method (FEM; Rocscience 2013) and the finite difference method (FDM; Itasca 2011). The continuum method is often advantageous in greenfields research, as it has fewer input requirements than other methods. However, the inability to explicitly model large displacements along discrete features limits its use in many geomechanical studies.
Discontinuum modelling was first introduced by Cundall (1971) with the advent of the distinct element method (DEM). The approach simulates the finite displacement and rotation of discrete deformable and/or rigid blocks, based on constitutive criteria assigned to block contacts. Examples of the method include UDEC (Itasca 2014), 3DEC (Itasca 2007) and PFC (Itasca 2008). The method has advantages in the field of rock mechanics, as it can simulate the movement of rock masses composed of discrete, interlocking blocks. However, the simulation of brittle fracture is limited to the edges of block contacts, which can reduce the overall kinematic freedom of discontinuum models, as simulations are unable to model comminution behaviour.
Recent advances in numerical analysis have introduced hybrid modelling codes, which combine both continuum and discontinuum methods. Examples of codes that use this approach include ELFEN (Rockfield 2013) and Y-Geo (Mahabadi et al. 2012). The hybrid method is an intriguing approach to geomechanical simulation, as one can model both the discrete movement of blocks, as well as the comminution and brittle fracture of geological materials. However, the method further complicates numerical simulation, as additional input parameters are required, some of which (i.e. fracture energy and toughness) are often difficult to collect.

2.6. Model Complexity Issue


The drive towards increasingly complex numerical techniques coincides with additional input parameter requirements. This can complicate the calibration process, making it difficult, if not impossible, to calibrate complex models. It also leads to one of the fundamental questions in numerical analysis (Hammah and Curran 2009):

Is it better to be approximately right than precisely wrong?

In modelling terms, this means that a complex model may be very precise in its reproduction of the underlying failure mechanisms, but if the underlying input parameters are wrong it may be very inaccurate. This can be a serious issue, as the cumulative effects of parameter uncertainty often preclude complex modelling practices, due to the considerable epistemic uncertainties found in geotechnical design projects (Wiles 2006). However, the over-simplification of rock mass systems can miss details in the failure mechanisms, and may incorrectly predict the failure behaviour (Stead et al. 2006; Stead and Eberhardt 2013). For example, the application of continuum mechanics to rock mass problems ignores the influence of brittle fracture, which has often been observed to play an important role in back-analysis studies (Havaej et al. 2014). These issues make it difficult to provide a definitive answer to the complexity question; however, modelling should be conducted such that the simplest model which fulfills the project objectives is used, based on the expertise of the practitioner(s) and the encountered failure mechanism(s) (Hammah and Curran 2009).

2.7. Reliability Based Design


Conventional geotechnical practice is based on factor of safety design criteria, whereby uncertainty and/or design risks are quantified as the capacity over the demand (Wyllie and Mah 2004; Read and Stacey 2009). Although designs evaluated in this manner typically produce adequate results, the method is limited, as it does not explicitly express variability in capacity and demand in a well-formulated probabilistic framework (Duncan 2000; Wiles 2006). This inability to incorporate variance explains the inability of researchers to correlate factor of safety and probability of unsatisfactory performance estimates in anything beyond site-specific cases (Tapia et al. 2007; Lorig 2009). If decision makers are to conduct proper risk evaluation and use sound decision theory principles, then alternative methods are required which explicitly express the probability of unsatisfactory performance (Steffen 1997; Robertson and Shaw 2003; Steffen and Contreras 2007).

The most commonly presented alternative to factor of safety based design, which can explicitly characterize the associated risks, are reliability based methods (Harr 1996; Duncan 2000; Wiles 2006; Nadim 2007). Within this framework, the probability of unsatisfactory performance is either directly calculated, typically as the probability of demand exceeding capacity, or a reliability index is used. The reliability index (\beta) is calculated as:

\beta = \frac{\mu_G}{\sigma_G}

Equation 2.22

where \mu_G and \sigma_G are the output mean and standard deviation of the performance function g(X), where the performance function has the property of being greater than or equal to zero when design performance is satisfactory, and less than zero when unsatisfactory (El-Ramly et al. 2002).
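Equation 2.22, together with a normality assumption on the performance function, gives the familiar link between β and the probability of unsatisfactory performance. The sketch below uses illustrative margin statistics; the normality assumption is exactly that, an assumption.

```python
import statistics

def reliability_index(mu_g, sigma_g):
    """Reliability index beta = mu_G / sigma_G (Equation 2.22) and, under
    an assumed normal performance function, the corresponding probability
    of unsatisfactory performance Phi(-beta)."""
    beta = mu_g / sigma_g
    p_f = statistics.NormalDist().cdf(-beta)
    return beta, p_f

# e.g. a capacity-minus-demand margin with mean 1.5 and standard deviation 0.5:
beta, p_f = reliability_index(1.5, 0.5)  # beta = 3.0, p_f ~ 0.00135
```

The same β maps to very different failure probabilities under different distributional assumptions, which is why the output distribution model matters in the FOSM and PEM discussions above.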


The use of reliability based designs allows engineers to explicitly express the
associated risks with different slope designs to decision makers, whereby business
decisions can be made within the framework of decision analysis (Steffen 1997). The
risk based approach also avoids the occurrence of risk abdication, whereby mine
management avoids the responsibility of designating tolerable risks by accepting
geotechnical designs based on specific factors of safety (Steffen and Contreras 2007).

2.8. Risk Analysis

Risk analysis and management are a set of guidelines used to ensure that decisions are made with a sound appreciation for the uncertainties associated with a given project (Yoe 2011). Risk analysis is a natural extension of reliability based design criteria, which extends the traditional criteria to include not only the probability of an event occurring, but also the consequences of such an event (Steffen 1997). Within this framework, the risk associated with an event, R(Event), can be defined as:

R(Event) = P(Event) \times C(Event)

Equation 2.23

where P(Event) is the probability of the event occurring and C(Event) is the potential consequence or loss that would be incurred if the event occurred. In the case where multiple events may impact a project and/or design, the overall risk becomes the summation of the individual risks associated with each event.
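Equation 2.23 and the summation over multiple events amount to a one-line expected-loss calculation. The event probabilities and dollar consequences below are hypothetical values chosen only to illustrate the arithmetic.

```python
def total_risk(events):
    """Overall risk as the sum of P(event) * C(event) over the identified
    events (Equation 2.23); probabilities and consequences must share one
    reference period and one consequence unit (here, dollars per year)."""
    return sum(p * c for p, c in events)

# Hypothetical annual hazards: (probability of occurrence, consequence in $)
risk = total_risk([(0.01, 5_000_000.0),   # large inter-ramp slope failure
                   (0.10, 200_000.0)])    # bench-scale failure
# risk ~ $70,000 per year
```

Keeping the consequence terms in a common unit is what later allows economic risks to be expressed as a fraction of project NPV, as discussed below.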
Procedures for identifying and characterizing risks are diverse, with extensive published literature detailing specific methodologies (Henley and Kumamoto 1981). While differences remain between the specific methodologies, common steps in the process exist (Australian Geomechanics Society 2000):

1. Hazard identification
2. Assessment of likelihood or probability of occurrence
3. Assessment of consequences
4. Estimation of risk through combination of probability and consequences
5. Assessment of risk through comparison with benchmarks
6. Integration of risk into decision making framework
In the context of open pit mine design, multiple risks exist which all need to be evaluated independently. These may include: injury to persons, damage to equipment, impacts on production, force majeure, industrial action, environmental impacts, etc. (Steffen and Contreras 2007). While diverse, these impacts can be subdivided into two categories, personal and economic impacts, based on the treatment of consequences.

Economic risk can be expressed as a percentage of the forecasted net present value (NPV) for the project (Terbrugge et al. 2006). NPV is a measure of cash inflows and outflows used in financial analysis to analyze the profitability of an investment or project. In the context of mining, the value of a project is assessed as the expected cash return from ore recovery minus the expected cost associated with mine operations.

Personal impacts from slope failure can be assessed by comparing predicted injury and/or loss of life probabilities with published benchmarks (Steffen et al. 2008). The idea is that loss of life cannot be eliminated from mining operations, but that mines should implement a zero-tolerance design policy which does not subject employees to risks greater than those experienced in everyday life. This type of analysis is typically conducted in conjunction with F-N plots, which provide benchmark criteria for acceptable risk levels (Center for Chemical Process Safety 2009). Terbrugge et al. (2006) recommend that open pit mines be designed with an acceptable risk between 1:1,000 and 1:10,000, which corresponds with the upper level of the "as low as reasonably practicable" (ALARP) region of published F-N plots. While designing mines to the same standards as civil engineering projects may appear conservative, it should be realized that risks can be more easily mitigated through effective slope monitoring programs than in civil engineering projects. So, although the probability of slope failure may be higher at a mine site, effective slope monitoring programs can reduce the probability of personal exposure to such an event, leading to negligible differences between the probability of an event in mine and civil design.

3. Effects of Rock Mass Heterogeneity on Geomechanical Model Prediction³

3.1. Abstract
With the increased drive towards deeper and more complex mine designs,
geotechnical engineers are forced to reconsider traditional deterministic design
techniques in favour of probabilistic methods. These alternative methods allow for the
direct quantification of uncertainties within a risk and/or decision analysis framework.
However, conventional probabilistic practices typically discretize geological materials
into discrete, homogeneous domains, with attributes defined by spatially constant
random variables. This is done in spite of the fact that geological media typically display
inherent heterogeneous spatial characteristics. This research applies a geostatistical
approach to the stochastic simulation of spatial uncertainty, known as sequential
Gaussian simulation. The method uses variograms which impose a degree of controlled
spatial heterogeneity on the stochastic system. Simulations are constrained using data
from the Ok Tedi mine site in Papua New Guinea and designed to stochastically vary the
geological strength index and uniaxial compressive strength using Monte Carlo
techniques. Results suggest that conventional probabilistic techniques have a fundamental flaw compared to geostatistical approaches, as they fail to account for the spatial dependencies inherent to geotechnical datasets. This flaw can result in erroneous model predictions, which are overly conservative when compared to the geostatistical results.

³ Will be revised for submission to Rock Mechanics and Rock Engineering as J.M. Mayer and D. Stead. Effects of Rock Mass Heterogeneity on Geomechanical Model Prediction.


3.2. Introduction
Geotechnical design projects often suffer from inherent information deficiencies
associated with the difficulties, and often impractical nature, of collecting large datasets
(Read 2009; Read and Stacey 2009). This leads to fundamental design issues, where
geotechnical design must be conducted with incomplete knowledge of the true state of
the system. Under such a paradigm, multiple realizations of the subsurface are often
possible within the framework of the given state of information. To overcome this deficiency, reliability and/or probability based methods can be used, whereby uncertainty
in the capacity and demand is explicitly propagated through design calculations (Harr
1996; Duncan 2000; Wiles 2006; Nadim 2007). Within this framework, conventional
practice dictates that the geological medium should be sub-divided into a series of
geotechnical units, whose properties are defined by spatially constant random variables
(Read and Stacey 2009). However, this introduces an underlying uncertainty into the
design process as the scale of data collection and analysis often differ, resulting in data
aggregation issues (Gehlke and Biehl 1934; Yule and Kendall 1950; Clark and Avery
1976; Haining 2003). These issues are then exacerbated by the application of classical
statistical methods and the false assumption of data independence, despite the inherent
spatial variability within natural geological systems (Journel and Huijbregts 1978; Isaaks
and Srivastava 1989; Deutsch 2002). This oversimplification of the spatial heterogeneity
has been shown to result in conservative design practices, with an over-estimation of the
probability of failure (Griffiths and Fenton 2000; Hicks and Samy 2002). This phenomenon results from the inability to reproduce realistic failure paths, as the lack of
heterogeneity prevents the development of step-path failures through the weakest areas
of the rock mass (Jefferies et al. 2008; Lorig 2009).
A number of modelling techniques have been proposed to overcome this issue.
These include the explicit modelling of spatial heterogeneity within geomechanical
simulation models (Baczynski 1980; Pascoe et al. 1998; Jefferies et al. 2008; Srivastava
2012), and the use of critical path algorithms for statistical up-scaling of attribute
distributions (Glynn et al. 1978; Glynn 1979; O'Reilly 1980; Shair 1981; Einstein et al. 1983; Baczynski 2000; Baczynski 2008). Both methods aim to propagate spatial uncertainties through the geomechanical design calculations using stochastic modelling techniques. However, a fundamental difference exists between these approaches, as the former explicitly models the heterogeneities within the numerical simulation package, whereas the latter adjusts the attribute statistics prior to their incorporation into geomechanical models.
This chapter attempts to illustrate the limitations of conventional probabilistic
design practice and statistical up-scaling techniques, in the simulation of spatial
heterogeneity. The research adopts a novel approach to spatial heterogeneity simulation within the field of open pit slope design. The method is known as sequential Gaussian simulation (SGS), which uses variograms to constrain spatial co-dependencies within the dataset (Journel and Huijbregts 1978; Isaaks and Srivastava
1989; Deutsch 2002; Nowak and Verly 2007). Stochastic models are used to construct
multiple realizations of the subsurface geological strength index (GSI) and uniaxial
compressive strength (UCS) attributes at the Ok Tedi mine site in Papua New Guinea.
Stochastic simulations are conducted directly within the geomechanical simulation code
FLAC, which is used to estimate the pit wall stability (Itasca 2011). Results are then
compared with conventional probabilistic and statistical up-scaling techniques to show
the limitations of traditional methods.

3.3. Study Site and Data Sources


The Ok Tedi mine is a copper porphyry deposit which has been in operation
since the mid-1980s. The site is located in the remote Western Province of Papua New
Guinea, near the border with Indonesia (Bamford 1972; Davies et al. 1978; Figure 1.1).
Situated on top of Mt. Fubilan at an elevation of 1800 m, the site is surrounded by
rugged geomorphic features forming a complex irregular topography (Hearn 1995). The
mountainous topography coupled with tropical latitude results, by world mining
standards, in very adverse climatic conditions, with the mine surrounded by dense
tropical rain forest and an annual rainfall of 9 to 11 m (de Bruyn et al. 2011). Active uplift
associated with the collision of the Australian and Pacific tectonic plates results in the
area experiencing moderate earthquake risks, with events typically ranging between 4
and 6 on the Richter scale (Baczynski et al. 2011).


The current areal extent of the pit is approximately 2000 by 3000 m, with a maximum wall height of 800 m (de Bruyn et al. 2011). A final depth of 900 m is designated for end of life operations; however, a decision is pending to extend this to 1000 m, through a 200 to 300 m pushback of the west wall (de Bruyn et al. 2013). Slope angles average 40° throughout the current pit, with the proposed cut-back designated at 38° to 39°. Conditions of all the pit walls are generally poor due to the high rates of weathering associated with the large amount of rainfall within the area.

3.3.1. Geology
The geology of the site is characterised by a repeating succession of sub-horizontal sedimentary facies, which have been locally intruded by two igneous bodies
(Figure 3.1; Figure 3.2; de Bruyn et al. 2011; Baczynski et al. 2011). Sedimentary facies
have been separated into three distinct units at the site, including: the Ieru Siltstone,
Darai Limestone and Pnyang Formation (Hearn 1995). The Cretaceous Ieru Siltstone
Formation is characterized by grey, calcareous siltstones, interbedded with minor
medium graded sandstones. The unit varies in thickness across the site, with a maximum depth of 1500 m. The unit is overlain disconformably by a late-Miocene to mid-Eocene, massive, foraminiferal, carbonate-rich packstone, mudstone and wackestone unit, referred to as the Darai Limestone. The limestone varies in thickness
from 50 to 800 m across the site, and structurally underlies the mid-Miocene Pnyang
Formation. The Pnyang Formation is the youngest of the main sedimentary units found
at the site, and is composed of calcareous mudstone and siltstone with limestone.
The boundary between the Ieru Siltstone and Darai Limestone is characterized
across the site by a series of low angle thrust faults, referred to as the Taranki, Parrots
Beak and Basal Thrust Zones (Figure 3.2; Baczynski et al. 2011). The faults are the
result of uplift associated with the collision of the Australian and Pacific plates
(Fagerlund et al. 2013). The geology is characterized by 20-80 m thick zones of highly
fractured and altered fault gouge, pyrite, magnetite skarn lenses, brecciated
monzodiorite and brecciated siltstone hornfels (de Bruyn et al. 2011). The sedimentary
units dip gently towards the southwest, with all three thrust zones exposed in the west
wall.


Figure 3.1    Plan view of surface geology for the 2011 mining conditions at the Ok Tedi site, showing the skarn, endoskarn, monzonite porphyry, monzodiorite, Gleeson fault and fracture zone, thrust fault, Pnyang Formation, Darai Limestone and Ieru Siltstone units, along with borehole collars and the cross-section line. The geotechnical borehole collar distribution is found to be skewed towards the center of the pit, specifically targeting the mineralized skarn bodies.

In addition to the three thrust faults, the west wall is cross-cut by two steeply dipping (70° to 80°) sub-vertical faults, referred to as the western (upper) and eastern (lower) Gleeson faults (de Bruyn et al. 2013). The faults strike approximately parallel to the western pit wall. Displacement along the faults has resulted in the formation of a discrete fracture zone, bound on each side by the respective faults. The rock mass within the zone is highly disturbed and characterized by weak, very highly fractured or brecciated rock, with localized stronger material (Baczynski et al. 2011). The two bounding faults are characterized by highly brecciated, granular and/or highly plastic gouge material. The west wall is also crosscut by several additional, orthogonally oriented, high angle faults, which act as possible release structures for potential slope failures.

Figure 3.2    Cross-section through the Ok Tedi pit at a northing of 423850. Inset shows the location of the cross-section relative to the pit on a photograph of the pit from Baczynski et al. (2011).

Sedimentary units have been locally intruded by two igneous bodies, following regional thrust fault activity (de Bruyn et al. 2013). These include the Sydney Monzodiorite at the southern end of the pit and the Fubilan Monzonite Porphyry to the north. The Sydney Monzodiorite is the older of the two intrusions, and dates to the Pliocene (2.6 Ma; Page 1975). The unit is a medium to coarse grained, dioritic intrusive body, which is generally unmineralized (de Bruyn et al. 2011). In comparison, the younger (1.1 to 1.2 Ma) Fubilan Monzonite Porphyry is mineralized and hosts the main economic mineralization, along with proximal skarnified bodies (Page 1975). The unit is a porphyritic, felsic body, which has caused local skarnification of the Darai Limestone and
extensive potassic alteration of the Ieru Siltstone (Baczynski et al. 2011). Skarn units
are sub-divided into four distinct units, namely: endoskarns, calc-silicate skarns, massive
magnetite skarns, and massive sulphide skarns. In addition to local alteration, igneous
emplacement has resulted in a slight up-doming of sedimentary strata. This has led to
the sedimentary layers having a slight dip into the pit walls.

3.3.2. Borehole Data

Borehole data are commonly used at mine sites to provide estimates of the

subsurface geomechanical properties, which can later be used to predict the behavior of
proposed engineering designs. This practice typically employs empirical methods, due
to the difficulty of directly measuring parameters at the rock mass scale (Laubscher
1975). These empirical methods include the geological strength index (GSI) and the
rock mass rating (RMR89) system, which attempt to characterize the average block
shape and size, as well as the fracture surface conditions (Bieniawski 1973, 1976; Hoek
et al. 2002). The end result is an estimation of the rock mass strength characteristics
based on degree and type of fracturing. This is typically conducted on a domain basis,
whereby drill core is subdivided into a series of discrete units with similar attributes.
The Ok Tedi mine borehole database was provided by Ok Tedi Mining Ltd.
through SRK Consulting. The database included 153 boreholes, subdivided into 8,178
discrete geotechnical logging intervals. Borehole logging intervals were found to vary
greatly in size, with a range of 0.01 to 64.40 m. The spatial distribution of the borehole
collars is also greatly skewed towards the center of the Ok Tedi pit, coinciding with the
main mineralization targets (Figure 3.1). Logging intervals were characterized by on-site
geotechnical staff using the Laubscher MRMR rock mass classification system and later
transformed by SRK Consulting to the Bieniawski RMR89 system (Bieniawski 1976;
Bieniawski 1989; Laubscher 1990; Jakubec and Laubscher 2000; Laubscher and
Jakubec 2001).
Intact rock strength databases were provided by SRK Consulting for both
laboratory and point load test data. Both datasets provide an estimate of the uniaxial
compressive strength (UCS) for intact rock. However, the laboratory database is limited for conducting spatial analysis, as only 129 uniaxial and 23 triaxial compressive test results were available. In comparison, the point load test database included 2690 suitable test results.

3.3.3. Groundwater Model
The groundwater conceptual model for the Ok Tedi mine site is dominated by a

gravity driven, high recharge system, which is compartmentalized by the Taranaki, Parrots Beak and Basal thrust faults (Fagerlund et al. 2013). These zones result in
perching and damming of internal aquifers. In total, three aquifers exist and are referred
to throughout this thesis as the Taranaki, Parrots Beak and Basal aquifers, based on the
thrust fault defining their lower surface. These thrust faults dominate the groundwater
flow regime, and their slight upward doming morphology causes the majority of
groundwater to flow away from the pit walls; however, minor seepage is still observed on the pit wall between 40 m and 100 m above the pit floor. This radial flow behaviour away from pit walls is enhanced by gravity driven flow mechanisms associated with the location of the Ok Tedi pit at the top of Mt. Fubilan. Hydraulic testing of sedimentary and igneous units generally indicates higher hydraulic conductivities (10⁻⁷ to 10⁻⁶ m/s) compared to fault zones (10⁻⁹ to 10⁻⁸ m/s), due to a large degree of fracture and karst development. Sub-vertical fault zones in the western pit wall are thought to further compartmentalize flow due to their high gouge content. The high recharge rates are associated with the extremely high annual rainfall (9 to 11 m) found throughout the site
(Hearn 1995).
Groundwater modelling of the Ok Tedi system was conducted by SRK Consulting
using the DHI-WASY software FEFLOW (Fagerlund et al. 2013; DHI-WASY 2013). The
groundwater model was designed to estimate the pore pressure distribution within the
Ok Tedi pit following the west wall cutback. The model extent was limited to an inset
around the west wall, which coincided with on-going 3DEC geomechanical modelling
(Figure 3.3). Simulations were run for saturated flow conditions, with a constant recharge applied over the entire site. Pore pressures were estimated for transient groundwater conditions, over the 5 year anticipated life of the west wall cutback. The general groundwater flow is strongly influenced by the low permeability, compartmentalizing fault zones, which result in perching and damming of internal aquifers (SRK 2013a).
Figure 3.3    Distribution of hydraulic conductivity in the groundwater model of the Ok Tedi mine site, constructed by SRK Consulting (Fagerlund et al. 2013).

Pore pressure distributions in the west wall were estimated for both natural and depressurized conditions by SRK Consulting (SRK 2013b). The effects of depressurization were estimated for two scenarios (Figure 3.4):

Scenario I: Installation of three rows of horizontal drains, installed progressively within the west wall during the cutback to a length of approximately 350 m. Drains are placed at elevations of 1325, 1440, and 1525 masl, with a 30 m spacing.

Scenario II: Active depressurization from the installation of a drainage gallery and a fan of drain holes from the gallery. Installation is planned to be conducted at an elevation of 1360 masl. Drainage fans consist of five to nine drain holes up to 200 m in length, increasing in density towards the south. In addition to the drainage gallery, a set of horizontal drains was also included in the model, coinciding with the uppermost drains from Scenario I.

Figure 3.4    Two tested depressurization scenarios for the Ok Tedi west wall cutback. (a) Scenario I: three 300 m horizontal drains at 1325, 1440, and 1525 masl. (b) Scenario II: drainage tunnel with a single 300 m horizontal drain at 1525 masl.


3.4. Methodology

3.4.1. Hoek-Brown Parameters

Simulation of rock masses remains a challenging procedure due to the difficulty in

estimating their mechanical properties. The most common solution to overcome this
issue within the geotechnical community is the use of the Hoek-Brown criterion (Hoek
and Marinos 2007). The method is an empirically derived relationship between the
strength of a rock mass and the degree of observed fracturing (Hoek et al. 2002). The
system is premised on the hypothesis that rock masses fail through sliding and/or
rotation of intact rock blocks (Hoek 1994). For example, a rock mass composed of
angular blocks, with rough discontinuity surfaces will exhibit a larger degree of interparticle locking, and hence stronger rock mass characteristics, than one composed of
smooth-walled, rounded particles. Although limitations exist within the system (Carter et
al. 2007; Brown 2008), the criterion has been widely utilized within the geotechnical
community owing to its ease of use and a lack of suitable alternatives. A full description
of the Hoek-Brown criterion is provided in Appendix A.
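The strength estimate implied by the criterion can be illustrated with a short calculation. The sketch below assumes the generalized Hoek-Brown equations of Hoek et al. (2002), as detailed in Appendix A; the function name and the unconfined example inputs (loosely based on the Monzonite Porphyry medians of Table 3.2) are illustrative only.

```python
import math

def hoek_brown_strength(sig3, ucs, gsi, mi, D=0.0):
    """Generalized Hoek-Brown criterion (Hoek et al. 2002).

    Returns the major principal stress at failure (MPa) for a given
    confining stress sig3 (MPa), intact strength ucs (MPa), geological
    strength index gsi, material constant mi and disturbance factor D.
    """
    mb = mi * math.exp((gsi - 100.0) / (28.0 - 14.0 * D))
    s = math.exp((gsi - 100.0) / (9.0 - 3.0 * D))
    a = 0.5 + (math.exp(-gsi / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    return sig3 + ucs * (mb * sig3 / ucs + s) ** a

# Example: median Monzonite Porphyry properties (Table 3.2),
# unconfined (sig3 = 0) and undisturbed (D = 0).
print(round(hoek_brown_strength(0.0, 65.0, 51.0, 24.0), 2))
```

Note how strongly the fracturing controls the result: the unconfined rock mass strength is only a few MPa, a small fraction of the 65 MPa intact strength.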
The method requires defining four parameters, namely: the geological strength index (GSI), the intact rock uniaxial compressive strength (UCS), the material constant (mi), and a disturbance factor (D). The GSI was estimated from borehole data through conversion of RMR89 values. Conversion of the majority of RMR89 values utilized the formula (Hoek 1994):

GSI = RMR89 − 5                                        Equation 3.1

However, this approach is inappropriate within highly fractured and/or decomposed intervals, as the RMR89 system has been shown to be unsuitable for characterizing overall rock mass behaviour in such conditions (Hoek et al. 1995; Hoek et al. 2002). To compensate for this deficiency, GSI values were directly assigned to intervals described as highly fragmented, crushed and/or decomposed zones within the geotechnical database. This was conducted according to Table 3.1, constructed by SRK Consulting for the Ok Tedi mine site (SRK 2012). Values assigned to these highly fractured zones were treated as stochastic variables, defined by uniform distributions within the designated GSI range. The resultant medial GSI values are summarized in Table 3.2.
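Equation 3.1 and the uniform sampling of fractured zones can be sketched together. In the hypothetical Python fragment below, the dictionary keys and function name are invented for illustration; only the conversion formula and the GSI ranges of Table 3.1 come from the text.

```python
import random

# GSI ranges for highly fragmented, crushed and/or decomposed
# descriptions (after Table 3.1); keys are illustrative shorthand,
# not actual database codes.
FRACTURED_GSI_RANGES = {
    "gouge": (5, 15),
    "sheared": (10, 20),
    "breccia_lt2cm": (15, 25),
    "heavily_fractured_2_5cm": (20, 35),
}

def assign_gsi(rmr89=None, rock_description=None, rng=random):
    """Assign a GSI value to a logging interval.

    Competent intervals use Equation 3.1 (GSI = RMR89 - 5); highly
    fractured intervals draw from a uniform distribution over the
    range designated for their description.
    """
    if rock_description in FRACTURED_GSI_RANGES:
        lo, hi = FRACTURED_GSI_RANGES[rock_description]
        return rng.uniform(lo, hi)
    return rmr89 - 5.0

print(assign_gsi(rmr89=56))                   # competent interval
print(assign_gsi(rock_description="gouge"))   # stochastic draw in [5, 15]
```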

Table 3.1    GSI estimates for highly fragmented, crushed and/or decomposed zones. Ranges were approximated by SRK Consulting (SRK 2012) using the GSI estimation chart of Hoek et al. (1998).

Rock Description                                                                  Assigned GSI Value
Clay, clayey gravel or fault with gouge (clay and rock fragments)                 5 - 15
Sheared rock, crumbly rock, gravel or non-gouge fault                             10 - 20
Intensely fractured rock or breccia, fragments <2 cm                              15 - 25
Heavily fractured rock, greater than four discontinuity sets, fragments 2-5 cm    20 - 35

Intact rock uniaxial compressive strength (UCS) was characterized directly from the Is(50) tensile point-load test results (Table 3.2). Point-load estimates were chosen for two reasons. First, the dataset was large and broadly distributed throughout the study region, allowing for proper characterization of the spatial structure, unlike the laboratory test results which were spatially limited. Second, point-load data were collected independently from RMR89 estimates, unlike simple hammer tests, which exhibited an underlying bias based on the condition of the rock mass. This underlying bias is observed in the Ok Tedi dataset by an increase in the correlation coefficient between the non-declustered UCS and GSI data from 0.16 with point-load estimates to 0.61 with hammer test results.

The material constant (mi) is a difficult parameter to characterize, as proper estimation requires detailed laboratory test results. As a result, most studies rely on published empirical estimates based on the lithology (Hoek et al. 2002). Due to this difficulty, characterization of the spatial structure for the material constant was impossible based on the current dataset. Therefore, values were kept constant throughout the geotechnical domains and were assigned based on previously published estimates for the site (Table 3.2; Baczynski et al. 2011).

Table 3.2    Medial Hoek-Brown attributes and statistics for each geotechnical domain at the Ok Tedi mine site. Data were declustered using the methodology described in Section 3.4.3 prior to characterization of the summary statistics.

Geotechnical Unit     Density (kg/m³)   mi   Median GSI   Median UCS (MPa)
Monzonite Porphyry    2550              24   51           65
Monzodiorite          2550              24   40           46
Endoskarn             3250              17   46           34
Skarn                 4450              17   53           76
Darai Upper           2750              10   45           69
Darai Lower           2740              10   47           65
Ieru Upper            2620              —    34           64
Ieru Lower            2620              —    53           86
Pnyang                2660              —    44           64
Thrust Fault Rock     2920              —    29           72

Similar to the material constant (mi), characterization of the disturbance factor (D) is challenging. This parameter is intended to describe the degradation of the near surface rock mass due to blasting and unloading (Hoek 2012). However, ambiguity exists within the geotechnical community as to how to apply the disturbance factor (D); no agreed upon, concise rules exist as to what value should be used and how it should be zoned away from the pit wall. As a result, the disturbance factor (D) was ignored throughout this study and a constant value of 0.0 used. Although this is not ideal, the study was concerned with deep-seated failure, which is not greatly affected by near surface degradation.

3.4.2. 3D Geological Model
A three-dimensional geological model of the Ok Tedi site was provided by OTML

through SRK Consulting (Figure 3.5). Geotechnical domain characterization within this
study is based on this geological interpretation.
Three-dimensional geological data provided by OTML were in the DXF file
format. In order to allow for data interpretation using the Maptek software package
Vulcan (Maptek 2013), data were first converted to the Vulcan triangulation file format
(.00t). This involved using DXF files to define a series of geological boundaries which
were then used to split apart a large cube of the model area into the various geological
units. During this process, geological domains were extended approximately 500 m
towards the west, in order to capture the extent of geomechanical modelling conducted
later. This was done by projecting the sedimentary and fault zone units along their dip,
while preserving their stratigraphic thickness.
Geotechnical boreholes were then projected within the Vulcan software package,
and the associated geological units that they intersected were recorded. This allowed
for the construction of downhole geological profiles for all the geotechnical boreholes,
which matched the future FLAC simulation domains. While good agreement was achieved between the 3D geological domain boundaries and geological borehole logs, some slight adjustments (<20 m) were required to ensure that the borehole logs matched the larger scale triangulation files. These slight variations are due to the difficulty in accurately interpolating domain boundaries from the geological logs.



Figure 3.5    3D geological model of the Ok Tedi mine site. Topography is based on pre-mining conditions.

3.4.3. Stochastic Simulation

Stochastic simulation of the geological strength index (GSI) and uniaxial

compressive strength (UCS) was conducted using sequential Gaussian simulation (SGS). This approach is novel within the field of open pit slope design, but has been utilized for a number of years within the geological and reservoir modelling communities (Dimitrakopoulos and Fonseca 2003; Esfahani and Asghari 2013). The algorithm works by sequentially simulating attribute values along pseudo-random paths, while incorporating spatial co-dependencies using simple kriging routines (Journel and Huijbregts 1978; Dowd 1992; Deutsch and Journel 1998). The method used in this study involves a six step process; the following section provides a brief overview of the techniques (Figure 3.6). For a more detailed description of the SGS method see Journel and Huijbregts (1978), Goovaerts (1997), or Nowak and Verly (2007).


Figure 3.6    Stochastic simulation processes used to characterize and simulate the spatial heterogeneity in the GSI and UCS at the Ok Tedi mine site: pre-processing (declustering, detrending), statistical summarization (normal score transformation, normal score variograms), and stochastic simulation (sequential Gaussian simulation, normal score back-transformation).
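As a miniature illustration of the simulation step, the sketch below runs a 1D sequential Gaussian simulation in normal score space. This is not the thesis implementation (which operated on the FLAC grid): for simplicity, each node is kriged from only the single nearest previously simulated node under an assumed exponential correlogram, whereas practical codes search many neighbours. Grid size, range and seed are arbitrary.

```python
import math, random

def sgs_1d(n=50, spacing=1.0, corr_range=10.0, seed=42):
    """Minimal 1D sequential Gaussian simulation sketch.

    Nodes are visited along a pseudo-random path; each node is
    simulated by simple kriging (here reduced to the single nearest
    previously simulated node) under an exponential correlation
    model, then drawn from the resulting conditional Gaussian
    distribution. Values are in normal score space (mean 0, var 1).
    """
    rng = random.Random(seed)
    path = list(range(n))
    rng.shuffle(path)                        # pseudo-random visiting order
    z = [None] * n
    for i in path:
        known = [j for j in range(n) if z[j] is not None]
        if not known:
            z[i] = rng.gauss(0.0, 1.0)       # first node: unconditional draw
            continue
        j = min(known, key=lambda k: abs(k - i))
        h = abs(i - j) * spacing
        rho = math.exp(-3.0 * h / corr_range)  # exponential correlogram
        mean = rho * z[j]                    # simple kriging estimate
        var = 1.0 - rho * rho                # simple kriging variance
        z[i] = rng.gauss(mean, math.sqrt(var))
    return z

realization = sgs_1d()
print(len(realization))
```

Each call with a different seed yields a different, equally probable realization, which is what allows the Monte Carlo treatment of spatial uncertainty described above.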

Spatial Declustering

Prior to characterization of the spatial structure, data must first be filtered to remove spatial dependencies (Pyrcz and Deutsch 2003). These dependencies result from the non-systematic manner of data collection and the underlying geological processes which control the studied attributes. This differs from classical statistical methods, where sample independence is assumed. To remove these dependencies, spatial declustering techniques are utilized, which assign differential weighting to studied attributes based on their proximity to surrounding data (Chilès and Delfiner 1999). This is done by assigning smaller weights to closely spaced data, and larger weights to widely spaced data, ensuring that closely spaced data are not over-represented within the dataset.
Three main spatial declustering algorithms exist within the literature, namely: polygonal, cell, and kriging weight declustering (Isaaks and Srivastava 1989). While all of the aforementioned methods are effective at declustering spatial data, cell declustering was chosen for de-biasing in this study due to its ease of use and ability to control the spatial scale. The method utilizes the following steps (Pyrcz and Deutsch 2003):

1. A grid origin is specified.
   a. Data are then overlain with a square grid based on the specified origin.
   b. The number of data in each cell (n_i) is then tabulated and a weight w_cell calculated for each as follows:

      w_cell = n / (n_i · L)                        Equation 3.2

      where n is the total number of data, and L is the number of cells with data.
2. The grid origin is then shifted and step 1 repeated.
3. Finally, the weights are averaged across all of the origin simulations to give an average weight for each datum.

Multiple offsets are required to remove the sensitivity of cell declustering to the grid origin. The Ok Tedi dataset was declustered using this approach with a 0.01 m offset and 1000 iterations. A 10 m³ cell size was used to mimic the 10 m² cell size arrangement used in the later FLAC geomechanical model. Declustering was conducted using a user-written C++ script.
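The steps above can be sketched in a few lines. The fragment below is an illustrative 2D stand-in for the thesis's C++ routine: the point set, cell size and function name are hypothetical, and only Equation 3.2 and the origin-averaging scheme come from the text.

```python
import math

def cell_decluster_weights(points, cell_size, offsets):
    """Cell declustering weights (Equation 3.2), averaged over a set
    of grid-origin offsets. `points` is a list of (x, y) tuples;
    returns one weight per point, summing to the number of data."""
    n = len(points)
    totals = [0.0] * n
    for ox, oy in offsets:
        cells = {}
        for idx, (x, y) in enumerate(points):
            key = (math.floor((x - ox) / cell_size),
                   math.floor((y - oy) / cell_size))
            cells.setdefault(key, []).append(idx)
        occupied = len(cells)                    # L: number of cells with data
        for members in cells.values():
            w = n / (len(members) * occupied)    # w_cell = n / (n_i * L)
            for idx in members:
                totals[idx] += w
    return [t / len(offsets) for t in totals]

# Example: a tight cluster of three points plus one isolated point.
pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5), (50.0, 50.0)]
w = cell_decluster_weights(pts, cell_size=10.0, offsets=[(0.0, 0.0)])
print(w)   # clustered points receive smaller weights than the isolated one
```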
In addition to the spatial dependencies, sampling issues exist with the borehole data due to the variable nature of the geotechnical domain logging intervals. This can result in an over-representation of smaller compared to larger sampling intervals, if the data is used without any bias correction. In order to overcome this issue, borehole logs were re-sampled at a 0.01 m spacing, to prevent the under-representation of larger intervals.
In addition to cellular declustering, a moving-window averaging technique was employed to obtain average attribute values for the 10 m² cells subsequently used in the
FLAC 2D model. The method works by sub-dividing the study region into a series of
local neighborhoods of equal size and calculating summary statistics for each attribute
(Isaaks and Srivastava 1989). This is similar to the declustering method and utilizes
evenly spaced, square windows generated based on a designated grid origin. The final
result ensures that data are averaged to the same scale as the final geomechanical
simulation model, limiting the influence of small discrete anomalies.

Detrending
Following cell declustering it is important to filter the large-scale spatial trends
due to their poor reproducibility by the SGS process. This is due to the fact that the SGS
technique reproduces random phenomena assuming data conforms to the first-order
stationary assumption (Journel and Huijbregts 1978). This assumption is referred to as
the intrinsic hypothesis and states that both the mean and variance are dependent
strictly on the data separation distance and not the location of the data (Matheron 1963).
If data do not conform to this assumption due to systematic trends, then trends must be
defined and removed/filtered prior to conducting SGS (Deutsch 2002).
Identification of spatial trends is conducted through exploratory spatial data
analysis techniques, including: semivariogram analysis, average grade profiles, and
ordinary kriging with a high nugget effect (Vieira et al. 2010). The use of average grade profiles is the simplest and often first means of trend identification. It involves the examination of averaged data along one, two or three dimensional profiles (Isaaks and Srivastava 1989; Deutsch 2002). Once identified, trends can then be characterized using moving average techniques, kernel estimation and/or ordinary kriging with a high nugget effect (Hallin et al. 2004; Nowak and Verly 2007).
Following identification and characterization, the most common way to deal with
trends is to first remove them, then simulate the residuals, and finally add the trend back
to the simulated results (Vieira et al. 1983; Vieira et al. 2002; Blackmore et al. 2003;

Jenson et al. 2006). This filtering process commonly employs a number of techniques
including: subdividing the data into a series of domains (Deutsch 2002), linear
regression with a correlated variable (Phillips et al. 1992) and polynomial trend analysis
(Vieira et al. 2010).
Analysis of the spatial trends within the Ok Tedi dataset identified the influence of
the Gleeson fracture zone, which affected GSI estimates from all geotechnical units
within the western pit wall. To remove this trend, data were filtered using a constant
ratio of 0.81, which is equal to the average decrease of GSI values within the zone.
Residuals obtained from the filtering process were used for the remainder of the SGS
process and the trend added back following simulation.
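One plausible reading of this ratio-based filter is a multiplicative correction applied inside the Gleeson fracture zone, sketched below; the function names and sample GSI values are invented for illustration, and only the 0.81 ratio and the remove-simulate-restore workflow come from the text.

```python
def detrend_gsi(values, in_zone, ratio=0.81):
    """Remove the Gleeson fracture zone trend: GSI values inside the
    zone are scaled back up by the average reduction ratio, so the
    residuals honour a spatially constant mean as required by SGS."""
    return [v / ratio if z else v for v, z in zip(values, in_zone)]

def retrend_gsi(residuals, in_zone, ratio=0.81):
    """Re-apply the trend to simulated residuals after SGS."""
    return [v * ratio if z else v for v, z in zip(residuals, in_zone)]

# Hypothetical GSI values; the middle interval lies in the fracture zone.
gsi = [45.0, 40.5, 60.0]
zone = [False, True, False]
res = detrend_gsi(gsi, zone)
print(res)
```

Applying `retrend_gsi` to the residuals restores the original values, so the round trip preserves the data while giving SGS a stationary input.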

Normal Score (Gaussian) Transformation


The SGS algorithm is based on an assumed multi-Gaussian system, where the
spatial variance arises from random processes acting on a stationary mean (Goovaerts
1997). In order to satisfy this assumption, one commonly utilized method involves a
Gaussian transformation of the data (Journel and Huijbregts 1978). This is conducted to
ensure data adhere to a normal distribution. Under the assumption of a spatially
constant trend, the process involves assigning a standard normal score to each datum
such that the cumulative frequencies of both the normal score and attribute are identical
(Chilès and Delfiner 1999). This transformation process is conducted either graphically
from the modelled cumulative density function (CDF) or by defining a transformation
function using a polynomial expansion (Castrignano et al. 2009).
The Ok Tedi data were transformed by first assigning distribution models to the
studied attributes prior to the normal score transformation. This was done to smooth the
data and have the transformation better reflect the likely underlying sample distribution.
Bimodal normal and Weibull distributions were used for the GSI and UCS, respectively.
Standard normal score values were then assigned to each datum based on cumulative
frequencies from the modelled CDFs (Figure 3.7). This was done directly in the
Microsoft software package EXCEL. A lookup table was then constructed which
allowed back-transformation of normal scores to GSI and UCS values following SGS
simulation. The look-up table is accurate to +/- 0.01 in normal score space.


Figure 3.7    GSI data were converted to normal score space using a cumulative
frequency plot. Normal scores were selected based on the matching
cumulative frequencies between the data and a normal distribution.
[Figure: cumulative frequency (0-100%) vs. Geological Strength Index
(20-100), comparing the experimental data with the modelled distribution.]

Correlogram Analysis
Accurate characterization of the underlying spatial structures is the foundation of
any geostatistical analysis involving kriging and/or SGS (Clark 1979; Isaaks and
Srivastava 1989). The standard method within geostatistics used to characterize this
structure is semivariogram analysis, which measures the spatial dissimilarity vs.
distance. Since it is assumed that closely spaced data are more closely related than
distant data, semivariograms should display increased dissimilarity with distance, until
the point at which no obvious correlation exists between data values. At this point, the
semivariogram reaches a sill that is comparable to the sample variance. Classic
geostatistical analysis within the mining industry typically utilizes semivariograms;
however, alternative methods of modelling spatial dependency exist (i.e. covariograms
and correlograms). Srivastava and Parker (1989) demonstrated that correlograms may
be more robust than semivariograms in the presence of heteroscedastic or
preferentially sampled data. The use of correlograms/covariograms also allows for
greater continuity between the statistical modelling and stochastic simulation, as the
kriging/SGS process requires the direct input of covariance vs. distance models (Journel
and Huijbregts 1978). For these reasons, spatial analysis at Ok Tedi was conducted
utilizing correlograms.
Correlogram analysis was conducted by first calculating average correlation
coefficients vs. distance. The algorithm incorporated declustered weights using the
following formula:

    ρ(h) = Σ (w0 wh y0 yh) / Σ (w0 wh)        Equation 3.3

where y0 and yh are the normal score values at the tail and head of each data pair
separated by lag h, w0 and wh are the corresponding declustered weights, and ρ(h) is the
correlation coefficient at the specified lag distance. Lags were calculated in
logarithmic space, to give greater refinement of average correlation coefficients at
shorter lag distances.
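A sketch of Equation 3.3 for 1-D coordinates with simple distance bins; the logarithmic lag spacing follows the text, while the function and variable names are illustrative assumptions:

```python
import numpy as np

def weighted_correlogram(x, y, w, lags):
    """Declustering-weighted correlogram, a sketch of Equation 3.3.

    x: 1-D coordinates, y: normal scores, w: declustered weights,
    lags: bin edges for the lag distances.
    """
    i, j = np.triu_indices(len(x), k=1)
    h = np.abs(x[i] - x[j])                 # pair separation distances
    ww = w[i] * w[j]                        # products of declustered weights
    rho = []
    for lo, hi in zip(lags[:-1], lags[1:]):
        m = (h >= lo) & (h < hi)
        if not m.any():
            rho.append(np.nan)              # empty lag bin
            continue
        rho.append(np.sum(ww[m] * y[i][m] * y[j][m]) / np.sum(ww[m]))
    return np.array(rho)

# logarithmic lag bins give more resolution at short distances, as in the text
log_lags = np.logspace(0, 3, 10)
```

Because the normal scores have zero mean and unit variance, the weighted product average is directly a correlation coefficient at each lag.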
Correlogram models were then fit to the experimental data for the GSI and UCS
using least-squares regression techniques within the Microsoft software package
EXCEL. GSI continuity was modelled with two nested structure models with zero nugget
effect, while UCS continuity was modelled using an exponential model and relatively
high nugget effect (Table 3.3). Models were constrained to reproduce a dispersion
variance of 1.0 within the simulation area (Journel and Huijbregts 1978). A complete
summary of the exponential correlograms can be found in Appendix B.


Table 3.3    Normal score variogram constraints for the Ok Tedi dataset.

                        GSI                                          UCS (MPa)
Geotechnical         Exponential Model I   Exponential Model II   Nugget   Exponential Model
Unit                 Sill   Range (m)      Sill   Range (m)                Sill   Range (m)
Monzonite Porphyry   0.61   41             0.44   489              0.30    0.72   128
Monzodiorite         0.49   49             0.57   434              0.38    0.66   214
Endoskarn            0.69   38             0.32   149              0.47    0.55   97
Skarn                0.88   52             0.14   335              0.74    0.26   81
Darai Upper          1.00   24             0.00   381              0.00    1.01   37
Darai Lower          0.81   43             0.25   1000             0.54    0.50   369
Ieru Upper           0.76   43             0.29   630              0.21    0.82   143
Ieru Lower           0.86   88             0.18   614              0.25    0.81   318
Pnyang               1.00   27             0.00   381              0.21    0.82   143
Thrust Faults        0.92   40             0.10   513              0.27    0.75   107

Sequential Gaussian Simulation


Stochastic simulations of the inherent heterogeneity within the Ok Tedi rock
mass system were conducted using the sequential Gaussian simulation (SGS) method
(Dowd 1992; Nowak and Verly 2004; Leuangthong et al. 2011). The method works by
sequentially simulating a series of normal scores at specified grid nodes using a random
walk process coupled with simple kriging routines (Vann et al. 2002). The method was
chosen due to its ability to reproduce continuous random variables, while also taking into
consideration the underlying spatial structure. The basic steps in the algorithm are as
follows (Journel and Huijbregts 1978):
1. Generate a random walk sequence through the simulation grid nodes.


2. Visit the first node in the sequence and simulate a value by a random draw
from a conditional distribution derived from simple kriging.
3. The simulated value becomes part of a conditioning set.
4. Visit the next node in the sequence and simulate the studied attribute using
both original and simulated values for conditioning.
5. Repeat step 4 until all nodes have been visited.
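The steps above can be sketched as follows. This is a minimal illustration only, assuming a 1-D grid, an exponential correlogram with unit sill, and simple kriging that conditions on all previously informed points (a practical implementation, such as the FISH routines described below, would use a limited search neighbourhood):

```python
import numpy as np

def sgs_1d(grid_x, cond_x, cond_y, corr_range, seed=0):
    """Sequential Gaussian simulation of normal scores on a 1-D grid."""
    rng = np.random.default_rng(seed)
    # exponential correlogram with unit sill and practical range corr_range
    cov = lambda h: np.exp(-3.0 * np.abs(h) / corr_range)
    known_x, known_y = list(cond_x), list(cond_y)
    sim = np.full(len(grid_x), np.nan)
    for node in rng.permutation(len(grid_x)):       # step 1: random walk
        kx, ky = np.array(known_x), np.array(known_y)
        C = cov(kx[:, None] - kx[None, :])          # data-to-data covariances
        c = cov(kx - grid_x[node])                  # data-to-node covariances
        w = np.linalg.solve(C + 1e-9 * np.eye(len(kx)), c)
        mean = w @ ky                               # simple kriging mean (zero global mean)
        var = max(1.0 - w @ c, 0.0)                 # simple kriging variance
        sim[node] = rng.normal(mean, np.sqrt(var))  # step 2: draw from conditional
        known_x.append(grid_x[node])                # steps 3-4: add to conditioning set
        known_y.append(sim[node])
    return sim                                      # step 5: all nodes visited
```

Each realization honours the conditioning data and the imposed covariance model; different seeds produce the alternative realizations used in the Monte Carlo analysis.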
While the method preserves the spatial structure defined by the semivariogram, there
are two main possible limitations of the method that need to be taken into consideration
(Vann et al. 2002). First, the simulation area must be greater than the range of the
defined spatial dependency model, otherwise the full spatial structure of the model will
not be preserved by the simulation. Next, an adequate number of neighboring data
points must be used during conditioning, or the simulation will heavily favor the short lag
trend in the spatial model.
SGS was conducted within this study using FISH routines written to conduct the
simulation directly within the software package FLAC (Figure 3.8; Itasca 2011). A
general version of the SGS FISH algorithm is provided in Appendix C. Verification of the
code can be found in Appendix D. Simulations were conducted in normal-score space
and back-transformed to parameter space following stochastic simulation, with the
previously removed Gleeson fracture zone trend added back to the results. GSI and
UCS simulations were conducted independently due to the poor correlation coefficient
between the two parameters (r = 0.19).


Figure 3.8    A single realization of the GSI attribute using the SGS method.
[Figure: east-west cross-section at Northing = 423850, Elevation
(600-2400 m) vs. Easting (313000-316500), showing the simulated GSI
field and geotechnical domain boundaries.]

3.4.4.    Pore Pressure Distribution


FEFLOW pore pressure results were exported for the end of mine life conditions,

to coincide with the mining stage simulated in the later FLAC models. Distributions were
obtained by exporting FEFLOW simulation results along a 2D east-west cross-section at
a northing of 423850. However, due to the limited extent of the groundwater simulation
around the west wall, pore pressure predictions had to be extended to include the entire
extent of the FLAC simulations (Figure 3.9). This was conducted for three zones:
Western Zone: An average water table height above the top of each thrust zone
was estimated for the three main aquifers within the west wall (i.e. Taranaki,
Parrots Beak and Basal aquifers). Pore pressures were then estimated for the
FLAC nodes which were located greater than 200 m west of the FEFLOW model.
This was done using the top elevation of the thrust zones and the average height
of the aquifers.

Sub-Central Zone: Pore pressures were projected downward, assuming
hydrostatic conditions, from the base of the overlying Basal aquifer. A 100 m gap
was left in the predictions on either side of the Gleeson fracture zone. Pore
pressures within this gap were later estimated using a linear interpolation, which
caused predicted pressures to mimic the overlying step induced from fault
compartmentalization.

Eastern Zone: A theoretical groundwater distribution was constructed for FLAC
nodes located greater than 200 m east of the FEFLOW model. These pressures
are considered to be an estimate only, as there was little information provided in
the groundwater modelling reports pertaining to pressures on this side of the pit.

Following pore pressure prediction within the three zones, linear interpolation techniques
were employed to estimate pressures between the zones. Although this final pressure
distribution is a simplification of reality, its overall effect on the FLAC geomechanical
simulations is minimal, due to failure being concentrated near the western pit wall within
the FEFLOW model region.
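The hydrostatic projection used for the Sub-Central Zone can be sketched as below; the constants and function names are illustrative assumptions, not values from the groundwater reports:

```python
# Hedged sketch of the Sub-Central Zone projection: pore pressure is extended
# downward hydrostatically from the base of the overlying Basal aquifer.
RHO_W = 1000.0   # water density (kg/m^3), assumed
G = 9.81         # gravitational acceleration (m/s^2)

def hydrostatic_pressure(z_node, z_aquifer_base, p_base):
    """Project pressure downward assuming hydrostatic conditions.

    z_node: elevation of the FLAC node (m); z_aquifer_base: elevation of
    the aquifer base (m); p_base: pressure at the aquifer base (Pa).
    """
    if z_node > z_aquifer_base:
        raise ValueError("node must lie below the aquifer base")
    return p_base + RHO_W * G * (z_aquifer_base - z_node)
```

A node 100 m below the aquifer base, for example, receives roughly 1 MPa of additional pressure.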

Figure 3.9    Pore pressure distribution estimated for the Ok Tedi site based on
FEFLOW modelling and conceptual estimation. [Figure: cross-section at
Northing = 423850, Elevation vs. Easting (313000-316500), showing the
FEFLOW model region and the Western, Sub-Central and Eastern
estimation zones; pressure scale 0.0 to 1.1 x 10^7 Pa.]


3.4.5.    Geomechanical Simulation Model

Geomechanical simulation was conducted using the Itasca software Fast
Lagrangian Analysis of Continua (FLAC; Itasca 2011). FLAC is a two-dimensional,
finite-difference simulation package, which simulates continuum-type behaviour using
predefined constitutive criterion models (i.e. Mohr-Coulomb, Mohr Ubiquitous-Joint,
Hoek-Brown, etc.). During simulation, the material undergoes linear elastic behaviour
until its yield point is reached, at which point it behaves as a plastic material, whose
properties are defined by the specified constitutive models.
Geomechanical simulations of the Ok Tedi pit involved a 2D east-west cross-section
through the centre of the pit (Figure 3.2; Figure 3.8). Staging was not conducted,
as FLAC modelling suggests that, given perfectly-plastic behaviour, there is very little
difference between staged and non-staged models at the Ok Tedi mine site. Results
from the FLAC modelling indicated a factor of safety of 1.79 and 1.78 for the staged and
non-staged models, given medium-value deterministic simulations. As a result,
increased computational efficiency was achieved by ignoring staging and running
models using the final excavation stage of the proposed west wall cutback.
Failure criterion within the FLAC simulations utilized the integrated, modified
Hoek-Brown criterion (Hoek et al. 2002; Itasca 2011). The criterion is incorporated into
the FLAC simulation code using a linear approximation obtained by fitting a tangential
Mohr-Coulomb envelope to the failure criterion. Prior to failure, materials are assumed
to behave according to linear elasticity theory, with external stress accommodated
through a combination of stress build-up and reversible strain accumulation. However,
the material begins to yield once the stress state exceeds the tangential Mohr-Coulomb
envelope, with the excess stress accommodated through non-reversible deformation.
One limitation of the Hoek-Brown criterion is its inability to characterize low GSI
conditions, where failure ceases to be controlled by translation and rotation of individual
blocks (Hoek et al. 2002; Carter et al. 2007). At these low GSI values (UCSir < 0.5 MPa)
materials typically behave more as a soil-like substance, with behaviour best described
by the Mohr-Coulomb strength criterion (Carvalho et al. 2007). The rock mass only
begins to behave as a Hoek-Brown substance after the UCSir exceeds 10-15 MPa
(Brown 2008). An empirically derived criterion used to describe the transition between
these two extremes was proposed by Carter et al. (2008). This criterion facilitates the
transition from linear soil-like behaviour to non-linear rock mass type behaviour using a
transition function of the intact rock strength. This relationship was incorporated into the
FLAC simulations by extending the modified Hoek-Brown criterion through a user-written
FISH function. A full description of the transition function is provided in Appendix A.

Spatial heterogeneity was incorporated into simulations using the SGS process.
This ensured that unique GSI and UCS values were associated with each individual grid
node. These attributes were used, along with domain-constant mi values, to assign
unique Hoek-Brown mb, s and a parameters to each individual node. Disturbance zone
(D) factors were ignored in the simulations, as the purpose was to explore deep-seated
failure.
Models were assessed by conducting a shear strain reduction (SSR) analysis
once steady-state conditions had been achieved (Matsui & San 1992; Dawson et al.
1999; Hammah et al. 2005; Hammah et al. 2006; Diederichs et al. 2007a). This was
done in order to calculate the critical strength reduction factor (SRF), which is equivalent
to the factor of safety in classical limit equilibrium analysis. Simulations employed Monte
Carlo sampling techniques, with 100 trials conducted within each simulation round. This
allowed for derivation of the SRF distribution and estimation of the probability of failure.
Simulations took approximately 8 days to complete one round of 100 models, for a 3.4
GHz PC with 16 GB of RAM.

3.4.6.    Critical Area Estimation

Recent advances in mine design practices have focused on an increased drive

towards deeper and more complex designs (Read and Stacey 2009). This has forced
geotechnical engineers to consider methods other than traditional deterministic
techniques, which can characterize the inherent uncertainty associated with increased
mine complexity. As a result, a renewed interest exists within the field towards more
probabilistic and/or risks based practices (Steffen 1997; Terbrugge et al. 2006; Steffen
2007; Steffen et al. 2008). This paradigm shift and increased focus on the associated
project risks, requires an appreciation for both the probability of an unacceptable event
occurring, as well as the associated consequences of the event (Yoe 2011). The first
stage in understanding these consequences requires the ability to assess the size of a
potential failure.
This study applied a novel approach to estimate the failure size through the use
of network analysis based techniques. This approach estimates the critical failure area
through minimum distance analysis of shear strain rates obtained from numerical
simulation. This first involved inverting the shear strain rate values to construct an
inverse shear strain rate matrix. Dijkstra's (1959) algorithm was then used to estimate
minimum paths through this matrix, for each of the simulations, between the pit face and
rear slope crest (Figure 3.10). This was conducted for each boundary node along the
toe and slope of the modelled open pit. Minimum paths were then assessed based on
average inverse shear strain rates, with the lowest average rate path determined to be
the critical failure path. Summary statistics were then calculated for the GSI and UCS
along the identified path, which gave an indication of the shear strength along the
surface. Critical paths were also used to estimate the size of potential failures, by
calculating the total area between the critical failure surface and slope face. A detailed
description of the critical path algorithm is provided in Appendix E.
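A sketch of the minimum-path search on a grid of inverse shear strain rate (cost) values, assuming 4-connectivity and a path cost equal to the sum of visited cell costs; the connectivity and cost-accumulation details of the actual algorithm are those of Appendix E, not this illustration:

```python
import heapq

def min_cost_path(cost, start, goals):
    """Dijkstra's (1959) shortest path through a 2-D grid of cell costs.

    cost: list of rows of non-negative values (e.g. inverse SSR);
    start: (row, col) toe/slope boundary cell; goals: set of crest cells.
    Returns (total cost, path as a list of (row, col) cells).
    """
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    goals = set(goals)
    while heap:
        d, node = heapq.heappop(heap)
        if node in goals:                        # reached the rear slope crest
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        if d > dist.get(node, float("inf")):
            continue                             # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf"), []
```

Running this once per boundary cell and keeping the path with the lowest average inverse-SSR reproduces the critical-path selection described above.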
Critical path density plots were constructed from the estimated failure path
results to give an indication of the critical failure surface distribution. This involved
estimating nodal intersection probabilities for each of the FLAC grid cells, measured as
the probability of a critical path intersecting a specified node. For example, if five critical
paths out of the total of 100 Monte Carlo simulations intersected a grid node, the
intersection probability at that node would be 0.05. Intersection results were then
exported to ArcGIS and ordinary kriging techniques utilized to interpolate a failure path
density. The resultant kriged surface gave an indication of the distribution of failure
paths within the FLAC simulations.
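The intersection-probability calculation is a simple count, sketched below with illustrative names:

```python
import numpy as np

def intersection_probability(paths, shape):
    """Nodal intersection probability: the fraction of Monte Carlo critical
    paths crossing each grid cell (e.g. 5 of 100 paths -> 0.05)."""
    counts = np.zeros(shape)
    for path in paths:
        for r, c in set(path):      # count each path at most once per cell
            counts[r, c] += 1
    return counts / len(paths)
```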


Figure 3.10    Critical failure paths were identified using minimum distance
analysis. The methodology utilized Dijkstra's (1959) shortest path
algorithm. [Figure: schematic grid of inverse shear strain rate (iSSR)
cost values, showing model, crest, slope/toe and starting cells and the
resulting critical path; scale 500 m.]

3.4.7.    Statistical Up-Scaling
One of the difficulties in utilizing the stochastic simulation techniques is the data-intensive
analysis that must be conducted to characterize and simulate the spatial
structure. While this can be considered an ideal to strive for, it is not always practical or
possible due to both time and data constraints. Therefore, a number of researchers
have proposed the use of critical path algorithms to up-scale attribute distributions from
the borehole to domain scale (Glynn et al. 1978; Glynn 1979; Shair 1981; Einstein et al.
1983; Baczynski 2008). These algorithms work by identifying critical paths through
synthetic rock material, using either minimum distance (O'Reilly 1980) or stochastic
step-path generation (Baczynski 2000) techniques. Strength attributes are then
summarized for the paths and incorporated into geomechanical software packages. To
test this general methodology, a software package was developed to quickly refine
geotechnical domain statistics based on a preliminary understanding of the local
heterogeneity. The program uses the following steps:
1. A two-dimensional simulation area is defined by an n by n/2 matrix, where n is
equal to the user-specified failure length divided by the simulation cell size.

2. GSI and UCS values are assigned to the simulation area using the sequential
Gaussian simulation algorithm described in Section 3.4.3. This requires a user-specified
variogram model for both geotechnical attributes.

3. Hoek's global rock mass strength values (σ'cm) are then assigned to each node
based on the simulated GSI and UCS values, and a user-specified mi attribute,
using the equation (Hoek and Brown 1997):

    σ'cm = σci · [mb + 4s − a(mb − 8s)] · (mb/4 + s)^(a−1) / [2(1 + a)(2 + a)]        Equation 3.4

where mb, s and a are the Hoek-Brown constants, and σci is the uniaxial
compressive strength of intact rock.
4. Dijkstra's (1959) algorithm is then used to calculate the critical paths through the
simulation area, based on a minimum distance analysis of global rock mass
strength values.
5. GSI and UCS values from nodes along the critical path are then averaged to give
an indication of the overall strength of the weakest path through the simulation.
Up-scaled GSI and UCS values are then incorporated into geomechanical simulations
as single variables assigned uniformly across geotechnical domains.
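Step 3 can be sketched as follows, using the generalized Hoek-Brown relations for mb, s and a (Hoek et al. 2002); the function name is illustrative, and the disturbance factor is set to zero as in the thesis simulations:

```python
import math

def hoek_global_strength(gsi, ucs_i, mi, d=0.0):
    """Hoek's global rock mass strength sigma'_cm (Equation 3.4 sketch).

    gsi: Geological Strength Index; ucs_i: intact UCS, sigma_ci (MPa);
    mi: intact rock constant; d: disturbance factor (zero here).
    """
    mb = mi * math.exp((gsi - 100.0) / (28.0 - 14.0 * d))
    s = math.exp((gsi - 100.0) / (9.0 - 3.0 * d))
    a = 0.5 + (math.exp(-gsi / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    num = (mb + 4.0 * s - a * (mb - 8.0 * s)) * (mb / 4.0 + s) ** (a - 1.0)
    return ucs_i * num / (2.0 * (1.0 + a) * (2.0 + a))  # sigma'_cm (MPa)
```

As expected, the global strength increases with GSI at fixed intact strength.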
The proposed algorithm was used to conduct three separate simulations. This
includes:

• The simulation of each geotechnical unit independently, with the GSI and
UCS statistics summarized accordingly.

• The co-simulation of all geotechnical units into a single large matrix, which
was then used to find an overall weakest path. GSI and UCS values were
then averaged for each of the geotechnical units along the path. This allowed
for co-dependencies between units to be taken into consideration during rock
mass failure.

• Finally, co-simulation was coupled with an estimation of the step-path scale
roughness (θrough) using the formula:

    θrough = atan(Lvertical / Lhorizontal)        Equation 3.5

where Lvertical and Lhorizontal are the total lengths of the step-path in the
vertical and horizontal directions. Angles were then incorporated into
geomechanical simulations using a dilation angle, to simulate the volumetric
change that must occur in response to step-path failure. However, this
methodology is a simplification of reality and assumes that the failure
direction is equal to the average step-path direction (Baczynski 2014).
Unfortunately, no alternative robust methodologies exist within the
geotechnical literature to estimate and simulate this roughness.
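Equation 3.5 reduces to a one-line function; the name is illustrative:

```python
import math

def step_path_roughness(l_vertical, l_horizontal):
    """Equation 3.5 sketch: apparent step-path roughness angle (degrees),
    used as a dilation angle in the geomechanical simulations."""
    return math.degrees(math.atan2(l_vertical, l_horizontal))
```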

3.5. Simulation Results


This section provides an overview of the results obtained from the
geomechanical modelling. Simulations employed Monte Carlo techniques with 100 trials
conducted in each set of simulations. This was done in order to obtain a distribution of
the SRF, with reasonable estimates of the mean and standard deviation. The number of
trials corresponds with a stabilization of the mean and standard deviation, within a
reasonable simulation timeframe (Figure 3.11). On average, it is observed that the
number of trials required for stabilization of the normal distribution statistics is inversely
proportional to the degree of spatial autocorrelation within the models.
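The stabilization check behind Figure 3.11 amounts to tracking running statistics of the SRF results; a sketch with illustrative names:

```python
import numpy as np

def running_stats(srf):
    """Running mean and sample standard deviation of SRF results vs.
    trial number, used to judge Monte Carlo stability (cf. Figure 3.11)."""
    srf = np.asarray(srf, dtype=float)
    n = np.arange(1, len(srf) + 1)
    mean = np.cumsum(srf) / n
    # running sample variance (ddof=1); undefined at the first trial
    var = (np.cumsum(srf**2) - n * mean**2) / np.maximum(n - 1, 1)
    return mean, np.sqrt(np.maximum(var, 0.0))
```

When both curves flatten out, additional trials no longer change the fitted SRF distribution appreciably.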


Figure 3.11    Plots of the running average (a) mean and (b) standard deviation in
SRF results vs. the number of simulation trials are used to estimate
when the Monte Carlo simulation results become stable. The results
suggest that the required number of simulations is inversely
proportional to the degree of spatial autocorrelation. [Figure: (a) SRF
mean (1.40-1.70) and (b) SRF standard deviation (0.00-0.30) vs. trial
number (0-100), for the sequential Gaussian, zero autocorrelation and
conventional methods.]

3.5.1.    General Observation
Incorporation of spatial heterogeneity into continuum simulations resulted in a
fundamental change in the model behaviour. Instead of models being able to
indiscriminately fail anywhere in the rock mass, heterogeneous models restricted failure
to the weakest areas, resulting in step-path geometries. This fundamental behaviour
shift resulted in a reduction of the SRF from 1.63 in the deterministic simulation, to an
average of 1.45 within the SGS simulations (Figure 3.12). The observed reduction is
consistent with previous research (Griffiths and Fenton 2000; Hicks and Samy 2002;
Jefferies et al. 2008). SGS models also suggest a relatively tight constraint on SRF
values, with results conforming to a normal distribution with a standard deviation of 0.08.
Examination of the failure path within the SGS simulations confirms that damage
is preferential in the weaker areas of the rock mass (Figure 3.13). Critical path statistics
indicate an average reduction of 14% and 32% in the GSI and UCS compared to
average values in the western pit wall. These results are consistent with previous
research into the effects of heterogeneity which have observed this preferential failure
behaviour (Lorig 2009; Jefferies et al. 2008; Srivastava 2012).
Previous two-dimensional, geomechanical simulation results from the central pit
area estimated safety factors between 1.25 and 1.40, based on Slide (Rocscience
2014), GALENA (Clover 2010), and UDEC (Itasca 2014) modelling (Baczynski et al.
2011). Comparison of this previous work with FLAC simulations suggests relatively
good agreement between the various analyses, given the varying methods for deriving
rock mass strengths. The slightly higher deterministic SRF estimation from the FLAC
simulations can be attributed to the use of medial-value rock mass strengths compared
to best-engineering judgement used in previous work.

Figure 3.12    Cumulative density plot comparing the SGS method with a standard
deterministic analysis. The deterministic analysis utilized
homogeneous units, with strength attributes defined using medial
value statistics. SGS modelling suggests a mean SRF of 1.45 with a
standard deviation of 0.08. [Figure: cumulative probability (0-100%)
vs. critical shear strength reduction factor (0.80-2.00) for the
sequential Gaussian and deterministic methods.]


Figure 3.13    GSI and UCS attributes are found to be reduced along the critical
failure path compared to west wall averages. A mean reduction of
14% and 32% was found in the GSI and UCS, respectively. [Figure:
histogram of the number of simulations (0-50) vs. average reduction
in GSI/UCS along the critical path compared to the west wall of the
pit (-60% to 0%).]

3.5.2.    Critical Path and Area Estimates


Critical path estimates suggest that failure is generally quasi-circular in nature,
with daylighting typically occurring at the toe of the slope (Figure 3.15), although minor
variations exist, including shallow pit wall failures and deep-seated circular failures. In
addition, failure paths are found to concentrate exclusively within the western pit wall,
due to the increased slope heights and on average lower GSI values (Figure 3.8).
Failure area estimates suggest a mean area of 2.29 x 10^5 m^2, with a standard deviation
of 7.82 x 10^4 m^2 (CoV = 34%; Figure 3.14).
Geological controls on failure path development are rather limited, with the
exception of breakout in the lower toe of the slope (Figure 3.15). This behaviour is
attributed to weaker material associated with the Gleeson fracture zone, concentrating
strain at the base of the model. However, a few exceptions to this failure geometry
exist, where stronger than average properties are simulated within the zone. In these
cases either deep-seated rotational or shallow pit failures are observed. Shear bands are
also found to form between the active and passive blocks, facilitating quasi-rotational
failure (Figure 3.16).
With the exception of the fracture zone, failure does not appear to be
substantially dominated by any other geological units (Figure 3.15). This indiscriminate
nature of failure development can be attributed to the sub-horizontal orientation of
sedimentary layering, and the similar geotechnical characteristics between units. In
addition, thrust zones do not appear to exhibit a major influence on the failure
mechanism, due to their westward dip away from the pit wall.

Figure 3.14    (a) Variation in failure area and length statistics provides an estimate
of the overall deep vs. shallow seated nature of the estimated failure
surfaces. The results suggest a positive correlation between the
degree of depressurization and size of potential failures. (b) Trends
in the coefficient of variation within the failure area and length
statistics can be used as a quantitative estimate of the overall
dispersion in failure path results. Results indicate that the degree of
failure path uncertainty is positively correlated with the degree of
spatial autocorrelation imposed on the system. [Figure: (a) mean
failure length (1350-1550 m) vs. mean failure area (200,000-280,000
m^2); (b) coefficient of variation in failure length (0-30%) vs.
coefficient of variation in failure area (10-50%); series include the
sequential Gaussian (wet and dry), zero autocorrelation, conventional
probabilistic, horizontal drains, drainage tunnel, and up-scaling
(independent, dependent, roughness) models.]


Figure 3.15    Distribution of critical failure surfaces from the SGS simulations.
Daylighting is concentrated within the Gleeson Fracture Zone. The
failure area is estimated to be 2.29 x 10^5 m^2 with a standard deviation
of 7.82 x 10^4 m^2; while the failure length has a mean of 1,454 m with
a standard deviation of 157 m. [Figure: failure path density (up to 14%)
overlain on the geotechnical units (Monzonite Porphyry, Monzodiorite,
Skarn, Darai Limestone (upper/lower), Ieru Siltstone (upper/lower),
Pnyang Siltstone, Thrust Faults, Gleeson Fracture Zone); scale 500 m.]

Figure 3.16    Development of shear bands between the active and passive blocks
is observed. This behaviour helps to facilitate movement of material
along the lower critical failure surface. [Figure: SSR contours (1E-14
to 1E-6) highlighting transition zones between the active/passive
blocks and the lower critical failure path; scale 500 m.]


3.5.3.    Conventional Probabilistic Techniques

Conventional probabilistic techniques assume that geotechnical units are
spatially homogeneous, with attributes defined by single random variables (Read and
Stacey 2009). In order to compare this approach with the proposed SGS method, a
series of conventional geomechanical simulations were conducted, which utilized the
declustered domain statistics. Simulations were conducted by selecting two standard
normal deviates for each of the geotechnical units, representing GSI and UCS values.
Normal score transformation functions were then used to obtain GSI and UCS attributes
from the deviates. Simulated values were then assigned uniformly to all nodes within
the geotechnical domain. All other geotechnical attributes (e.g. mb) were kept constant
during the simulations.
The simulation results suggest that the conventional approach over-predicts both
the SRF mean and variance compared to the SGS method (Figure 3.17). This is
observed by an increase in both the SRF mean (1.58 vs. 1.45) and standard deviation
(0.29 vs. 0.08), and resulted in an over-prediction of the probability of unsatisfactory
performance by nearly seven orders of magnitude. However, the probability of
unsatisfactory performance may not be conservative in all cases. For example, over-estimation
of the mean tends to promote optimistic designs due to the upward translation
of the critical SRF distribution. At the same time, over-estimation of the
variance increases the spread of the distribution, leading to overly conservative designs.
This complex interaction makes it difficult to define comprehensive rules to
describe the negative effects of conventional probabilistic techniques.
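Under the fitted normal SRF distributions, the probability of unsatisfactory performance is simply the probability that the SRF falls below 1.0; a sketch using the means and standard deviations reported above (the function name and threshold argument are illustrative):

```python
from statistics import NormalDist

def prob_unsatisfactory(mean_srf, sd_srf, threshold=1.0):
    """P(SRF < threshold) under a fitted normal SRF distribution."""
    return NormalDist(mean_srf, sd_srf).cdf(threshold)
```

With the conventional parameters (1.58, 0.29) this probability is on the order of 10^-2, against roughly 10^-8 for the SGS parameters (1.45, 0.08), consistent with the near seven-order-of-magnitude difference noted in the text.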
A comparison of the critical area estimations between the conventional and SGS
methods indicates the same means (2.29 x 10^5 m^2) but different coefficients of variation
(41% vs. 34%; Figure 3.14). This variation can be attributed to two factors:

• First, the conventional approach results in a smoother failure surface compared
to the SGS method (Figure 3.14; Figure 3.18). This is due to the predisposition
toward step-path failures within heterogeneous models; the same
phenomenon is not reproduced in homogeneous models, as the nodal similarity
precludes step-path development.

• Second, in conventional probabilistic models, geotechnical domains exhibit a
greater influence on failure, compared to individual cells. This is due to the
homogeneous assignment of random variables to geotechnical domains, and
lack of spatial data aggregation. For example, in the case of heterogeneous
models, the averaging of nodal shear strengths along the critical path reduces
the influence of outlier deviates selected during the Monte Carlo simulation. In
comparison, conventional techniques are susceptible to these outlier statistics as
attributes are applied homogeneously across the entire geotechnical domain, and
it is incorrectly assumed that the borehole-scale and dispersion variance are equal.
These issues are discussed in more detail in Section 3.6.1.

These two effects result in a fundamental difference in the underlying failure mechanics,
resulting in a profound alteration in both the SRF statistics and failure path location.

Figure 3.17    Comparison of SRF results for both the SGS and conventional
approaches to geotechnical slope design. The simulation results
suggest that the conventional probabilistic approach over-estimates
both the mean SRF (1.45 vs. 1.58) and standard deviation (0.08 vs.
0.29) compared to the SGS method. [Figure: cumulative probability
(0-100%) vs. critical shear strength reduction factor (0.80-2.00) for
the sequential Gaussian and conventional methods.]


[Figure: failure path density (%) maps for nine model variants — (a) Sequential Gaussian Method, (b) Conventional Method, (c) Zero Autocorrelation Method, (d) Dry Model, (e) Horizontal Drainholes, (f) Drainage Tunnel, (g) Up-Scaling: Independent, (h) Up-Scaling: Dependent, (i) Up-Scaling: Roughness — plotted over the geological units (Monzonite Porphyry, Ieru Siltstone, Monzodiorite, Pnyang Siltstone, Skarn, Darai Limestone, Thrust Faults, Gleeson Fracture Zone); scale bar 750 m.]

Figure 3.18    Comparison of critical failure path distributions for the different modelling approaches.

3.5.4.    No Spatial Autocorrelation

While conventional probabilistic techniques assume perfectly autocorrelated, or spatially homogeneous, domains, the other extreme is to assume no spatial autocorrelation. Under this paradigm, each node is simulated independently, ignoring the influence of nearby nodes. To compare SGS methods with this approach, a series of simulations was conducted with an independent GSI and UCS deviate selected for each node.
The results of the simulation indicate a mean SRF of 1.53 with a standard deviation of 0.02 (Figure 3.19). In comparison to the SGS approach, the non-autocorrelated method over-predicts the mean, while at the same time under-estimating the variance. This results in an under-estimation of the probability of unsatisfactory performance by several orders of magnitude.
Critical path distribution estimates show a tighter confinement of failure paths within the non-autocorrelated method compared to the SGS method (Figure 3.14; Figure 3.18). The observed variation can be attributed to the increased clustering of rock mass strength attributes that arises when the spatial autocorrelation structure is incorporated. This affects the location of the critical failure path, with increased dispersion observed within the SGS models as the failure path is forced to by-pass the larger clusters of competent rock. In comparison, the non-autocorrelated models suppress cluster development, resulting in a reduction in critical path deviations. The discrepancy between the models illustrates the need to properly define the spatial structure: even though both methods have the same attribute statistics, differences in the spatial structure drastically change the underlying failure path mechanisms.
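The variance suppression caused by treating nodes as independent can be illustrated with a toy Monte Carlo experiment (a hypothetical sketch, not the thesis code; the 100-node path length and N(100, 20) strength distribution are illustrative). Averaging independent deviates along a path collapses the variance of the path mean by roughly 1/N, whereas a single deviate shared across the whole domain (the conventional assumption) preserves it in full:

```python
import random
random.seed(42)

def path_mean_sd(n_nodes, fully_correlated, n_trials=5000):
    """Monte Carlo spread of the path-average strength.

    fully_correlated=False -> one independent deviate per node (zero autocorrelation);
    fully_correlated=True  -> one deviate shared by every node (domain-scale assignment).
    """
    means = []
    for _ in range(n_trials):
        if fully_correlated:
            path = [random.gauss(100.0, 20.0)] * n_nodes
        else:
            path = [random.gauss(100.0, 20.0) for _ in range(n_nodes)]
        means.append(sum(path) / n_nodes)
    mu = sum(means) / n_trials
    var = sum((m - mu) ** 2 for m in means) / n_trials
    return var ** 0.5

sd_i = path_mean_sd(100, fully_correlated=False)
sd_c = path_mean_sd(100, fully_correlated=True)
print(f"independent nodes:     sd of path mean ~ {sd_i:.1f}")  # ~ 20/sqrt(100) = 2
print(f"perfectly correlated:  sd of path mean ~ {sd_c:.1f}")  # ~ 20
```

A realistic autocorrelation structure lies between these two extremes, which is why both the conventional and zero-autocorrelation methods misrepresent the SRF variance in opposite directions.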


[Figure: cumulative probability (%) versus critical SRF for the Sequential Gaussian and Zero Autocorrelation methods.]

Figure 3.19    The incorporation of rock mass strength heterogeneities into a model results in increased dispersion in the SRF results compared to non-autocorrelated models. The zero autocorrelation method is found to over-estimate the mean SRF (1.53 vs. 1.45), while at the same time under-estimating the standard deviation (0.02 vs. 0.08), when compared to the SGS method.

3.5.5.    Effect of Groundwater
The characterization and management of groundwater is a key component of large open pit design, as its effects are often detrimental, leading to increased wall instability and higher operating costs (Beale 2009). This is due to the strength reduction that occurs under elevated fluid pressures, as a result of a reduction in the effective stress (Rutqvist and Stephansson 2003; Wyllie and Mah 2004). However, if the hydrogeological system can be properly characterized and an effective depressurization plan implemented, then pit walls can be steepened, leading to long-term cost savings.
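The effective stress mechanism can be written as the Mohr-Coulomb criterion with Terzaghi's effective stress, τ = c + (σₙ − u) tan φ, so any reduction in pore pressure u recovers shear strength directly. A minimal sketch with illustrative (non-site) values:

```python
import math

def shear_strength(sigma_n, pore_pressure, cohesion, phi_deg):
    """Mohr-Coulomb shear strength with Terzaghi effective stress:
    tau = c + (sigma_n - u) * tan(phi). All stresses in kPa."""
    sigma_eff = sigma_n - pore_pressure
    return cohesion + sigma_eff * math.tan(math.radians(phi_deg))

# Illustrative numbers only: 1000 kPa normal stress, c = 200 kPa, phi = 35 deg.
dry = shear_strength(1000.0, 0.0, 200.0, 35.0)
wet = shear_strength(1000.0, 400.0, 200.0, 35.0)  # 400 kPa pore pressure
print(f"dry: {dry:.0f} kPa, wet: {wet:.0f} kPa")
# Depressurization works by driving u back toward zero on the failure surface.
```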
To study the effects of groundwater at the Ok Tedi mine site, a series of SGS simulations was conducted for both dry and wet conditions. In addition, two depressurization scenarios were studied. The first involved the use of three sets of horizontal drains, while the second utilized a single set of horizontal drains and a depressurization tunnel (Figure 3.4). A full description of the two scenarios can be found in Section 3.3.3.
A comparison of the wet vs. dry conditions indicates that, as expected, wet conditions result in a reduction in the SRF. This reduction was found to be 0.14 on average, with the mean SRF reduced from 1.59 to 1.45 (Figure 3.20). The variability within both simulations was found to be similar, with standard deviations of 0.08 and 0.09, respectively. As a result, both scenarios can be considered stable with a relatively high degree of confidence, as the probability of failure for both is extremely low (Dry = 10⁻⁹ %, Wet = 10⁻⁷ %).
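Probabilities of unsatisfactory performance of this kind follow from the lower tail of the fitted SRF distribution, P(SRF < 1). A minimal sketch assuming a normal fit (the fitted distribution is not stated here, so the exact tail values will differ from those quoted):

```python
import math

def prob_failure_percent(mean_srf, sd_srf):
    """P(SRF < 1) under a normal fit, via the standard normal CDF,
    expressed in percent."""
    z = (1.0 - mean_srf) / sd_srf
    return 100.0 * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

wet = prob_failure_percent(1.45, 0.09)   # wet-model statistics
dry = prob_failure_percent(1.59, 0.08)   # dry-model statistics
print(f"wet: {wet:.2e} %, dry: {dry:.2e} %")
# The dry model, with the higher mean SRF, has the smaller tail probability.
```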

[Figure: cumulative probability (%) versus critical SRF for the wet and dry simulations.]

Figure 3.20    The inclusion of groundwater pore pressures resulted in an average decrease in SRF results of 0.14 compared to the SGS method (Figure 3.13). The mean SRF values are 1.45 and 1.58 for the wet and dry models, respectively, with standard deviations of 0.08 and 0.09.

Critical path analysis suggests that the inclusion of groundwater in the SGS simulations results in a deeper-seated failure path (Figure 3.14). This is observed as an increase in the average failure size from 2.09 × 10⁵ m² in the dry models to 2.29 × 10⁵ m² in the wet models. In addition, the inclusion of groundwater resulted in the critical path being drawn deeper into the slope due to elevated pore pressures at depth. The elevated pressures also result in a slight increase in the critical path dispersion, due to the increased likelihood of deep-seated failures. From a risk analysis perspective, this increased failure depth needs to be taken into consideration: although the probability of the event decreased, the consequences are increased. As a result, the overall risk reduction may not be as drastic as initially suggested by the SRF reduction.

[Figure: cumulative probability (%) versus critical SRF for the no depressurization, horizontal drainhole, and drainage tunnel scenarios.]

Figure 3.21    Active depressurization was found to increase SRF values by an average of 0.10, compared to the base case of no depressurization. Results of the depressurization scenarios suggest mean SRF values of 1.53 and 1.58, with standard deviations of 0.08 and 0.08, for the horizontal drainhole and drainage tunnel scenarios, respectively.

The simulation of active depressurization of the Ok Tedi west wall suggests an increase in the SRF of approximately 0.1, with the depressurization tunnel slightly more effective than the horizontal drains (μ = 1.58 vs. 1.53; Figure 3.21). The relative variability within all three scenarios was found to be the same (σ = 0.08). Similar to the wet vs. dry scenarios, active depressurization leads to the development of deeper-seated failures (Figure 3.18). A key change also occurs in the mode of failure, with an increased likelihood of deep rotational failure for both depressurization scenarios. This represents a fundamental shift in the failure mechanism, with failure transitioning from toe breakout in the Gleeson fracture zone toward a deeper failure with breakout in the Monzodiorite. A further shift also occurs in the depressurization tunnel scenario, with toe break-out in the Gleeson fracture zone occurring from a combination of deeper-seated failure and slip along the Parrots Beak thrust, as opposed to classic rotational toe failure.

3.5.6.    Statistical Up-Scaling Results

As an alternative to stochastic simulation, a number of researchers have proposed step-path algorithms to up-scale geotechnical domain statistics (Glynn et al. 1978; Glynn 1979; O'Reilly 1980; Shair 1981; Einstein et al. 1983; Baczynski 2000; Baczynski 2008). This methodology was tested through the development of a software package that relies on minimum distance analysis to up-scale geotechnical attributes. Using this approach, summary statistics for critical paths through theoretical rock material were estimated. A detailed description of each of the simulations is provided in Section 3.4.7. Up-scaled geotechnical attributes were then incorporated into FLAC, and a series of 100 trials was conducted for each of the simulations, using Monte Carlo techniques.
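The minimum-distance idea behind such up-scaling can be sketched as a shortest-path search over a grid of simulated nodal strengths, where the path of least cumulative strength stands in for the critical failure path (a simplified illustration using Dijkstra's algorithm; the grid size and log-normal strengths are arbitrary, and this is not the thesis software):

```python
import heapq
import random

random.seed(7)
ROWS, COLS = 20, 40
# Simulated nodal shear strengths (arbitrary units) on a 2D cross-section.
strength = [[random.lognormvariate(0.0, 0.5) for _ in range(COLS)]
            for _ in range(ROWS)]

def weakest_path_mean(grid):
    """Mean strength along the minimum-cumulative-strength path crossing the
    grid left to right, moving right, up-right, or down-right (a simple
    step-path analogue). Dijkstra's algorithm over non-negative weights."""
    rows, cols = len(grid), len(grid[0])
    settled = set()
    heap = [(grid[r][0], r, 0) for r in range(rows)]
    heapq.heapify(heap)
    while heap:
        cost, r, c = heapq.heappop(heap)
        if (r, c) in settled:
            continue
        settled.add((r, c))
        if c == cols - 1:
            return cost / cols
        for dr in (-1, 0, 1):
            nr, nc = r + dr, c + 1
            if 0 <= nr < rows and (nr, nc) not in settled:
                heapq.heappush(heap, (cost + grid[nr][nc], nr, nc))
    raise RuntimeError("no path found")

path_mean = weakest_path_mean(strength)
grid_mean = sum(map(sum, strength)) / (ROWS * COLS)
print(f"critical-path mean: {path_mean:.2f}, grid mean: {grid_mean:.2f}")
# The path mean sits below the grid mean: strain localizes on weak nodes,
# which is the downward drift in up-scaled strength discussed in Section 3.6.
```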
Results of the FLAC models suggest that the up-scaling approaches under-estimate the mean SRF, when compared to the SGS method, by approximately 0.11 (Figure 3.22). The up-scaling approach also drastically over-estimates the SRF variance, resulting in an over-estimation of the probability of failure by approximately seven orders of magnitude. These discrepancies can be attributed to differences in the failure mechanics between the two models. For example, failure within the up-scaled models is found to be preferentially controlled by the weakest domains, whereas heterogeneous models are predisposed to failure at the weakest nodes (Figure 3.14; Figure 3.18). This behaviour is shown in the independent up-scale simulations by a concentration of the toe of the failure within the weakest domains (Figure 3.18g). The same toe behaviour is not as pronounced in the co-dependent simulation results, due to a reduction in the strength variation between the geotechnical domains (Figure 3.18h). However, the homogenization of the domain attributes still results in an over-smoothing of the failure surface compared to SGS simulations.

[Figure: cumulative probability (%) versus critical SRF for the Sequential Gaussian method and the three up-scaling methods (independent, dependent, roughness).]

Figure 3.22    Comparison of SRF results between the SGS and critical path up-scaling methods. The results suggest the critical path algorithms fail to fully capture the effects of spatial heterogeneity on geomechanical models. Up-scaling results suggest mean SRFs of 1.35, 1.33 and 1.33, with standard deviations of 0.24, 0.17, and 0.22, for the independent, dependent and roughness methods, respectively.

3.6. Discussion

3.6.1.    The Scale-Dependency Issue

Geomechanical simulations have highlighted the discrepancies between conventional probabilistic and spatially heterogeneous models. These discrepancies imply a fundamental flaw in the conventional geotechnical slope design process, as the method over-estimates both the SRF mean and variance (Figure 3.17). Issues arise due to the spatial nature of geotechnical data and its inherent spatial dependency, or autocorrelation (Haining 2003). This dependency results in two intertwined secondary issues which complicate the use of conventional probabilistic methods and invalidate the independence assumption required to use classical statistical approaches. These issues are the scale-effects associated with spatial data aggregation, and the preferential accumulation of strain within weaker areas of the rock mass (Gehlke and Biehl 1934; Haining 2003; Jefferies et al. 2008; Lorig 2009).
The spatial data aggregation issue results in scale dependencies arising in the sample variance due to spatial averaging effects (Gehlke and Biehl 1934; Isaaks and Srivastava 1989; Deutsch 2002; Haining 2003). Typically, the variance demonstrates an inverse relationship with the scale of study (Journel and Huijbregts 1978). The classic geological example of this phenomenon is the distribution of copper grades at the grain vs. the hand sample scale. At the smaller of the two scales, samples exhibit a larger degree of variance, with copper distributions split into two distinct populations (e.g. copper-abundant and copper-deficient grains). However, as the scale of study increases, so too does the amount of spatial aggregation. The end effect is a reduction in the sample variance, as results reflect an average of copper-abundant and copper-deficient grains. While copper grade distributions provide the classic example of this phenomenon, the behaviour is common to other geological attributes. The key importance for geotechnical slope design studies is that the variance at the geotechnical domain scale likely differs from the dispersion variance observed at the data collection scale (Isaaks and Srivastava 1989; Deutsch 2002). This presents an issue for practicing geotechnical engineers, as classical statistical methods are commonly incorrectly applied to engineering design problems (Harr 1996; Duncan 2000; Wiles 2006; Nadim 2007).
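This volume-variance effect is straightforward to demonstrate numerically (a generic illustration using synthetic, uncorrelated values, not site data): simulate point-support values, aggregate them into progressively larger blocks, and the variance of the block means falls.

```python
import random
random.seed(1)

# Point-support values, e.g. assay-scale measurements with mean 50, sd 15.
points = [random.gauss(50.0, 15.0) for _ in range(100_000)]

def variance(xs):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

for block in (1, 10, 100, 1000):
    # Average adjacent points into blocks of increasing support.
    blocks = [sum(points[i:i + block]) / block
              for i in range(0, len(points), block)]
    print(f"support = {block:5d} points -> variance = {variance(blocks):7.2f}")
# For uncorrelated points the variance drops roughly as 1/support; spatially
# correlated attributes decline more slowly, which is exactly why the
# borehole-scale and domain-scale (dispersion) variances differ.
```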
The second issue that arises from spatial dependencies is the preferential accumulation of strain within weaker areas of the rock mass, which results in a drift in the mean across shifting scales of study (Jefferies et al. 2008; Lorig 2009). This behaviour is demonstrated in classical geotechnical slope modelling by the development of step-path failures, whereby the rock mass fails along the weakest path oriented in the same direction as the driving force (Jennings 1970; Einstein et al. 1983). In such a case, the global rock mass strength is the summation of shear and/or tensile strengths along this critical path, resulting in a mean strength lower than that of the rock mass as a whole (Glynn 1979; O'Reilly 1980; Baczynski 2000).

Similar effects are observed in groundwater systems, where scale-effects arise from preferential flow along high hydraulic conductivity (K) units, resulting in an upward drift in the mean away from theoretical multi-log-normal predictions (Sánchez-Vila et al. 1996). In addition to statistical effects, discrepancies in the failure dynamics can occur when heterogeneity is explicitly excluded, as conventional approaches result in an over-smoothed failure surface compared to SGS simulations (Figure 3.18). This can lead to fundamental errors, as the behaviour of conventional models is disproportionately controlled by the uniformly applied geotechnical domain attributes, as opposed to local weak rock mass sections.
These data aggregation and preferential strain issues result in underlying scale dependencies in the geotechnical attribute statistics. This behaviour is commonly misrepresented in geotechnical design studies, which assume that the statistics of the studied attributes are the same at both the borehole and domain scales. Research has shown that this can cause erroneous SRF/FOS predictions, as underlying spatial dependencies are ignored (Figure 3.17). This presents a fundamental issue for geotechnical slope design, as billions of dollars are spent annually on designs which incorrectly apply classical statistical approaches. In comparison to traditional design, the utilized SGS method curtails the scale dependency issue through the imposition of a degree of controlled spatial heterogeneity on the stochastic system. The spatial structure is imposed through the use of variograms, which allow for preservation of the sample-scale variance, while at the same time more accurately representing the large-scale, system variance (Journel and Huijbregts 1978). The final result is a more realistic distribution in predicted SRF/FOS results.
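The way a variogram imposes controlled heterogeneity can be illustrated with a simplified 1D unconditional simulation: build a covariance matrix from an exponential variogram model and factor it with a Cholesky decomposition to turn independent deviates into a correlated field (full SGS proceeds sequentially and in 2D or 3D; all parameter values here are illustrative):

```python
import math
import random
random.seed(3)

n, spacing, sill, rng = 200, 5.0, 1.0, 50.0   # 5 m nodes, 50 m variogram range

# Exponential covariance model: C(h) = sill * exp(-3h / range)
cov = [[sill * math.exp(-3.0 * abs(i - j) * spacing / rng) for j in range(n)]
       for i in range(n)]

# Cholesky factorization: lower-triangular L with L L^T = cov
L = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1):
        s = sum(L[i][k] * L[j][k] for k in range(j))
        L[i][j] = math.sqrt(cov[i][i] - s) if i == j else (cov[i][j] - s) / L[j][j]

# Correlated field = L applied to independent standard normal deviates
z = [random.gauss(0.0, 1.0) for _ in range(n)]
field = [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]

# The lag-1 covariance should approach C(spacing) = exp(-0.3) ~ 0.74,
# whereas fully independent nodal deviates would give ~0.
lag1 = sum(field[i] * field[i + 1] for i in range(n - 1)) / (n - 1)
print(f"lag-1 covariance of simulated field: {lag1:.2f}")
```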

3.6.2.    Step-Path Estimation Algorithms

In order to by-pass the aforementioned scale dependency issue, a number of studies have proposed the use of critical path algorithms to up-scale attribute distributions from the borehole to the domain scale (Glynn et al. 1978; Glynn 1979; O'Reilly 1980; Shair 1981; Einstein et al. 1983; Baczynski 2000; Baczynski 2008). These algorithms work by summarizing strength attributes along a critical path identified within a two-dimensional rock mass simulation. The rock mass is composed of a combination of discontinuities, rock mass and/or intact rock, with strength attributes assigned according to statistical distributions obtained from either borehole and/or outcrop data. Either minimum distance (O'Reilly 1980) or stochastic step-path generation (Baczynski 2000) techniques are then used to identify a critical failure path through the theoretical rock mass. Simulations are then repeated using stochastic techniques to obtain a distribution in the critical path strength. Summary results can then be incorporated into geomechanical simulation models.
The applicability of these methods was tested within this study through the development of a software package to determine critical path attributes using minimum path analysis (Section 3.4.7). Results of the analysis suggest that the critical path approach fails to fully account for the up-scaling issues, with the approach imparting new uncertainties into the analysis (Figure 3.22). Such discrepancies are observed in the failure mechanics between the up-scaled and heterogeneous models (Figure 3.18). Failure development within the up-scaled models is found to be controlled by the weakest domains, whereas failure within the heterogeneous models occurs through preferential failure along the weakest nodes. The overall effect is an over-smoothing of the failure surface within up-scaled models and a reduction in the large-scale roughness.
Attempts to correct for this discrepancy have been made by some researchers through the calculation of large-scale roughness factors (Little et al. 1998; Baczynski 2000). However, issues arise as the dominant failure direction often deviates from the average step-path angle (Baczynski 2014; Figure 3.23). This results in large-scale roughness estimates often over-estimating the domain-scale roughness, as demonstrated in Section 3.5.6. A more realistic method of estimating failure directions from the orientation of flatter joints has been proposed by Baczynski (2008). However, issues remain with the approach in the case of deep-seated failures, where quasi-circular geometries result in deviating failure directions throughout the sliding mass. As a result, the use of step-path up-scaling algorithms remains problematic until a robust method for estimating step-path roughness coefficients is developed.



[Figure: schematic cross-section contrasting the mean step-path angle with the mean fracture dip.]

Figure 3.23    Concept demonstrating the deviation between the mean step-path angle and the critical basal sliding surface (Jennings 1970).

In addition to the roughness issues, problems arise with the up-scaling approach due to discrepancies in the failure dynamics when heterogeneity is explicitly excluded (Figure 3.18). While this does not preclude the use of step-path methods, it is an underlying assumption of such methods that the failure mechanics remain the same. If this assumption is invalid, then step-path methods may produce erroneous results.

3.6.3.    Continuum Mechanics and Data Aggregation

The geomechanical simulation models used throughout this study relied on the Hoek-Brown criterion (Hoek et al. 2002). However, the method has been criticized due to difficulties in applying it in less than ideal conditions (Brown 2008; Mostyn and Douglas 2000; Douglas and Mostyn 2004; Carter et al. 2007; Carvalho et al. 2007; Carter et al. 2008). One of the main issues with this approach is that it requires the definition of a homogenization scale (Bonnet et al. 2001). However, fracture systems research has suggested that many systems display fractal spatial distributions, which precludes the existence of a homogenization scale or representative elementary volume (REV; Mandelbrot 1982; Davy et al. 1990; Davy et al. 1992; Sornette et al. 1993; Bonnet et al. 2001). Homogenization scales are further complicated by the discrete nature of geotechnical domains, which may preclude the development of appropriate REVs for modelling purposes (Figure 3.24).

[Figure: descriptive property versus volume of sample, showing that the property fails to stabilize at sample volumes V1 < V2 < V3 below the domain scale.]

Figure 3.24    The discrete nature of geotechnical domains makes the definition of a REV within fracture systems difficult, if not impossible. This is due to the difficulty in stabilizing descriptive attributes at sample volumes smaller than the domain scale.

The REV issue poses a problem for the geomechanical modelling within this study, as models were constructed using the Hoek-Brown continuum approach. However, comparisons of failure mechanisms from continuum modelling with previous discontinuum modelling at the site suggest that a similar shear-dominated, rotational failure develops using both approaches (Baczynski et al. 2011). Such behaviour can be attributed to the dense, chaotic fracturing at the Ok Tedi site, which satisfies the primary Hoek-Brown (1983) assumption of the rock mass failing through translation and/or rotation of individual blocks.


Despite the similarity in failure mechanics, problems may still exist with the Hoek-Brown approach as a result of the spatial aggregation utilized during numerical modelling. Specifically, data were averaged over 10 m³ bins, equivalent to the numerical mesh grid size, as described in Section 3.4.3. The problem with this approach is that it assumes that strain is evenly distributed at the sub-nodal scale. However, as was discussed in the preceding sections, this assumption is invalid due to preferential failure of a rock mass within its weakest sections. These preferential strain accumulations result in the scale effects commonly observed in rock mechanics problems, whereby the compressive strength of a sample is found to be inversely correlated with the sample size (Johns 1966; Bieniawski 1967; Pratt et al. 1972; Hoek and Brown 1980a; Bieniawski 1984; de Vallejo and Ferrer 2011). In effect, the SGS models accurately reproduce spatial heterogeneities at the nodal scale, but fail to continue the heterogeneity modelling down to the sub-nodal scale. This imparts an unknown degree of uncertainty into the simulations, and needs to be taken into consideration when extrapolating specific SRF estimates for risk and/or stability analysis purposes. However, despite this limitation, the general conclusions are still considered valid, as the approach was directed at investigating the variation between the methods, as opposed to specific SRF values.

3.7. Conclusions

The field of geotechnical slope design is currently in a state of flux. Open pit mine operations are progressing towards ever deeper targets in response to the depletion of near-surface deposits (Read and Stacey 2009). This increases both the costs and uncertainties, forcing geotechnical engineers to reconsider traditional deterministic design techniques (Harr 1996; Duncan 2000; Wiles 2006; Nadim 2007). In the face of these issues, probabilistic design techniques represent an attractive alternative, as uncertainties can be quantified directly within the framework of risk and/or decision analysis (Steffen 1997; Terbrugge et al. 2006; Steffen and Contreras 2007; Steffen et al. 2008). However, conventional probabilistic design techniques typically utilize a discrete geotechnical domain approach, with attributes defined by spatially constant random variables (Read and Stacey 2009).

This can lead to fundamental underlying problems, as spatial dependencies associated with geological heterogeneities invalidate the independence assumption required to use classical statistical approaches (Journel and Huijbregts 1978; Isaaks and Srivastava 1989; Deutsch 2002). These spatial dependencies lead to scale effects due to spatial data aggregation and preferential strain accumulation issues (Gehlke and Biehl 1934; Haining 2003; Jefferies et al. 2008; Lorig 2009). Research has demonstrated that failure to consider spatial dependencies in a dataset can result in a fundamental difference in the predicted SRF/FOS results (Figure 3.17). These conclusions are of concern for future geotechnical slope designs, as billions of dollars are invested annually based on probabilistic design techniques that incorrectly apply classical statistical approaches.
Results have shown that methods which incorporate the spatial structure, through the proper application of geostatistical theory, produce more realistic distributions in the SRF/FOS results compared to traditional probabilistic design. This is due to the more accurate reproduction of the system variance, as geostatistical methods impose a degree of controlled spatial heterogeneity on the stochastic system through the variogram. These results are consistent with previous research which has suggested that conventional probabilistic design produces overly conservative designs (Griffiths and Fenton 2000; Hicks and Samy 2002). Although alternative methods have been proposed to deal with scale dependency issues, including critical path estimation (Glynn et al. 1978; Glynn 1979; O'Reilly 1980; Shair 1981; Einstein et al. 1983; Baczynski 2000; Baczynski 2008), results from this study suggest that geostatistical methods remain the most promising. While it is fair to say that the proposed geostatistical method is a data-intensive procedure, and difficult to apply in most greenfield settings, the inability to consider spatial heterogeneity has been shown to lead to systematic errors in the modelling process. So, although it may be tempting to ignore the spatial co-dependencies in a dataset due to a lack of information, this is a fundamentally flawed position, which is likely to lead to erroneous results.


4.    A Modified Discrete Fracture Network Approach for Geomechanical Simulation⁴

4.1. Abstract

Rock masses are typically conceptualized as having bimodal strength characteristics, with deformation controlled by complex interactions between intact rock material and discontinuities. This spatial heterogeneity has driven engineers and scientists to develop increasingly complex numerical simulation codes to capture this intricate behaviour. One of the leading approaches in this field has been the discrete fracture network (DFN) method, which explicitly models fractures as discontinuous features using stochastic modelling processes. While the algorithms used for DFN generation have been developed within a sound statistical and theoretical framework, they often do not consider the subsequent mesh generation routines required for geomechanical simulation. This can lead to the development of unacceptable discontinuity configurations which cannot be incorporated into numerical simulation codes using standard meshing algorithms. In order to correct this deficiency, a modified DFN algorithm is proposed which takes mesh generation routines into consideration. This approach allows for seamless integration between DFN generation and geomechanical simulation, freeing researchers from the need to manually manipulate generated fracture networks prior to their incorporation in numerical models. Fracture networks generated using both the proposed method and established software are incorporated into geomechanical simulation models to verify and demonstrate the benefits and limitations of the new method.

⁴ Published in the First International Discrete Fracture Network Engineering Conference, Vancouver, Canada, October 20-22, 2014 as J.M. Mayer, P. Hamdi and D. Stead. 2014. A Modified Discrete Fracture Network Approach for Geomechanical Simulation.


4.2. Introduction

Rock masses typically exhibit a complex, heterogeneous nature, owing to the inter-relationship between intact rock material and discontinuities (e.g. micro-fractures, macro-fractures, faults, etc.). This spatially discontinuous behaviour forces engineers to conceptualize rock masses in one of two modes, namely continuum or discontinuum, or a combination of the two (Hoek and Brown 1980a; Jing 2003; Stead et al. 2006). The underpinning concept of the continuum approach is the representative elementary volume (REV). This concept assumes that a scale exists at which the individual heterogeneous features average out, such that the material can be conceptualized as a homogeneous substance (Bear 1972). However, this approach has been questioned by a number of researchers, as REVs may not exist for a given substrate at scales appropriate for numerical and/or analytical modelling (Dershowitz et al. 2004). The alternative to continuum conceptualization is the discontinuum approach, which typically relies on the use of discrete fracture network (DFN) methodologies. Within this approach, the rock mass is conceptualized by a bimodal distribution in material properties, where the substrate is controlled by both intact rock and secondary discontinuities (Dershowitz and Einstein 1988; Xu and Dowd 2010).
Since its introduction in the mid-1960s, the DFN approach has received attention from multiple scientific disciplines, resulting in a plethora of algorithms designed to generate DFNs (Dershowitz and Einstein 1988; Staub et al. 2002). These include simple early models such as the orthogonal joint set model (Snow 1965), the commonly employed Poisson point process models originally developed in the late 1970s (Baecher et al. 1978; Geier et al. 1988), and the later complex hierarchical systems of Ivanova (1995). While these algorithms have been developed within a sound statistical and theoretical framework, incorporation of the methods into numerical modelling codes is not always straightforward, due to the development of unacceptable fracture configurations (Painter 2011; Painter et al. 2012; Painter et al. 2014). These include: the generation of sub-parallel fractures that intersect at angles less than the minimum interior angle required for mesh generation; the bounding of regions smaller than the desired minimum element size by the intersection of three or more fractures; and the termination of fractures in near proximity to other features (i.e. discontinuities and model edge boundaries).

These issues may lead researchers to manually manipulate generated DFNs prior to their incorporation into numerical models, in order to prevent the development of poor quality elements that may cause numerical instabilities. Although this manipulation can facilitate DFN integration, it can also lead to adverse effects, including the alteration of fracture attribute statistics through the removal and/or manipulation of fractures prior to their incorporation in numerical models. The manipulation process also employs subjective techniques, leading to poor reproducibility between researchers. Both of these limitations can be avoided through the design of DFN algorithms which incorporate not only a sound statistical and theoretical framework but also an appreciation for the subsequent mesh generation algorithms used in numerical simulation.
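A pre-meshing acceptance check of the kind motivating this work can be as simple as testing the intersection angle of two 2D fracture traces against the mesher's minimum interior angle (a hypothetical sketch; the 20° threshold and function name are illustrative, not part of the proposed algorithm):

```python
import math

def segments_intersect_angle(p1, p2, p3, p4):
    """Return the acute intersection angle (degrees) between segments
    p1-p2 and p3-p4 if they cross within their extents, else None."""
    d1 = (p2[0] - p1[0], p2[1] - p1[1])
    d2 = (p4[0] - p3[0], p4[1] - p3[1])
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:            # parallel traces never cross
        return None
    # Parametric intersection: p1 + t*d1 = p3 + u*d2
    t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / denom
    u = ((p3[0] - p1[0]) * d1[1] - (p3[1] - p1[1]) * d1[0]) / denom
    if not (0.0 <= t <= 1.0 and 0.0 <= u <= 1.0):
        return None                   # lines cross outside the segments
    dot = d1[0] * d2[0] + d1[1] * d2[1]
    return math.degrees(math.acos(abs(dot) /
                        (math.hypot(*d1) * math.hypot(*d2))))

MIN_ANGLE = 20.0  # illustrative mesher minimum interior angle
ang = segments_intersect_angle((0, 0), (10, 0), (5, -5), (6, 5))
print(f"intersection angle: {ang:.1f} deg, acceptable: {ang >= MIN_ANGLE}")
```

A generator that applies such checks during placement, rather than after the fact, avoids the manual clean-up step entirely.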
This chapter attempts to add to current DFN research by proposing a new
algorithm for DFN generation. The algorithm is designed to generate 2D DFNs for use
with geomechanical simulation software utilizing triangular network meshing routines.
The purpose is to present researchers with an explicit means of generating DFNs within
a numerical simulation framework, allowing for seamless integration between the
software packages. The method is designed for use as a general DFN generator, to be
used within multiple geomechanical and geological software packages.

4.3. DFN Models


Stochastic simulation of fracture networks began in the mid-1960s (Snow 1965);
however, it was not until the late 1970s to early 1980s that it received widespread
attention within the research community (Geier et al. 1988; Priest and Hudson 1976;
Veneziano 1978; Einstein et al. 1983). The general approach of the method is to treat
fractures as discrete features whose properties, i.e. persistence, orientation, aperture,
etc., are defined by random variables, with centroids distributed according to a defined
random process within the model space (Xu and Dowd 2010). The two most common
algorithms for this generation process are point process modelling coupled with
simplified fracture geometries centered at generated points (Baecher et al. 1978), and
stochastic Poisson plane generation algorithms, employing tessellation routines to subdivide initial planes (Dershowitz and Einstein 1988; Veneziano 1978). Both systems employ Monte Carlo based simulation routines, which generate a unique realization with each iteration. While both systems are efficient in DFN generation, the former was used in this study due to its widespread use within the geotechnical community (Barton 1978).
The Baecher disk model is one of the most commonly employed DFN generation algorithms. It employs point process modelling coupled with a fracture morphology conceptualized as 2D convex disks (Dershowitz and Einstein 1988; Staub et al. 2002). The model was developed independently by Baecher et al. (1978) and Barton (1978) in the late 1970s. In its standard implementation, point process modelling is conducted by assuming spatial independence, with fracture centroids distributed uniformly in space (Xu and Dowd 2010). Fractures are then conceptualized as circular disks centered at generated points, although alternative implementations have been employed using sub-circular polygonal shapes to approximate disks (Geier et al. 1988). The radii of the 2D disks are modeled using a set distribution model, which traditionally has been assumed to be log-normal, although other models have been used (Einstein and Baecher 1983; Segall and Pollard 1983; Bonnet et al. 2001). Fracture attributes are assumed to be independent of orientation and location. The spatial structure is assumed to be random; however, more advanced models (i.e. hierarchical, geostatistical and Markov chain Monte Carlo) attempt to remove this assumption (Long and Billaux 1987; Billaux et al. 1989; Ivanova 1995; Gringarten 1997; Wen and Sinding-Larsen 1997; Mardia et al. 2007).
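The 2D Baecher-style generation process described above can be sketched as follows. This is a minimal illustrative sketch, not the thesis implementation; the function name, parameters, and the stopping rule (accumulating trace length until a target P21 is reached) are assumptions for the example.

```python
# Minimal 2D Baecher-style fracture generator (illustrative sketch only;
# names and the P21 stopping rule are assumptions, not the thesis code).
import math
import random

def generate_dfn(width, height, target_p21, dip_mean_deg, dip_std_deg,
                 len_mu, len_sigma, seed=42):
    """Return fracture traces as ((x1, y1), (x2, y2)) segments.

    Centroids follow a spatially independent (uniform) point process,
    2D dip is normally distributed, and trace length is log-normal.
    Generation stops once P21 (trace length per unit area) is reached.
    """
    rng = random.Random(seed)
    fractures, total_len, area = [], 0.0, width * height
    while total_len / area < target_p21:
        cx, cy = rng.uniform(0, width), rng.uniform(0, height)
        dip = math.radians(rng.gauss(dip_mean_deg, dip_std_deg))
        length = rng.lognormvariate(len_mu, len_sigma)
        dx, dy = 0.5 * length * math.cos(dip), 0.5 * length * math.sin(dip)
        fractures.append(((cx - dx, cy - dy), (cx + dx, cy + dy)))
        total_len += length
    return fractures

dfn = generate_dfn(100.0, 100.0, target_p21=0.1,
                   dip_mean_deg=45.0, dip_std_deg=10.0,
                   len_mu=1.5, len_sigma=0.5)
```

In the modified algorithm of this chapter, each candidate segment would additionally be screened by the constraint checks of Section 4.5 before being appended.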

4.4. Triangular Mesh Generation


One of the central tools in geomechanics is the use of numerical analysis
methods (Jing 2003; Stead et al. 2006). These methods allow geotechnical engineers to
simulate complex 2D and 3D phenomena through the solution of partial differential
equations. However, implementation of the methods requires the construction of an
often complex grid (mesh), which at present is typically conducted using automatic mesh
generation algorithms (Owen 1998). This task can often be difficult, as meshing routines
must satisfy a number of contradictory requirements, including (Shewchuk 2012):

•	the construction of elements must be sufficiently small to prevent numerical inaccuracies, but not so small as to incur extensive computational times,
•	the mesh must be able to grade from large to small elements, often over short distances, and
•	the elements must adhere to strict shape requirements, often with near-equilateral and equiangular geometries.

These varied requirements have led to the development of diverse algorithms for mesh generation, utilizing different criteria to conform grids to often complex geological phenomena. Within the field of geomechanics, the use of triangular element geometries is common within numerical codes (Rocscience 2013; Rockfield 2013). Triangular elements are used throughout this study.


A key component of mesh generation is the production of a mesh with sufficiently
spaced nodes to prevent numerical instabilities during geomechanical simulation. As a
result, algorithms must pass certain mesh quality standards to ensure that poor quality
elements, and hence poorly spaced nodes, are avoided. To meet these standards and
to ensure adequate elements are generated, geomechanical software employs
designated criteria. For example, an element generated by the Rocscience (2013) software Phase2 is considered acceptable if:

•	the ratio of maximum to minimum side length is less than a specified value (default = 10),
•	the maximum interior angle is less than a critical value (default = 120°), and
•	the minimum interior angle is greater than a critical value (default = 20°).

Alternatively, the Rockfield FDEM software, ELFEN, limits the internal angles of 2D triangular meshes using two internal constraints (Rockfield 2013). The first curtails the size of newly created triangle edges by limiting the search window for nearby nodes during mesh generation. The second utilizes a maximum internal angle and side length to control the shape of newly generated triangles.
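The quality criteria above can be expressed as a simple acceptance test. The sketch below uses the Phase2-style default thresholds quoted in the text; the function itself is illustrative, not part of any of the named software packages.

```python
# Illustrative triangle-quality test using the default thresholds quoted
# in the text (side ratio 10, interior angles between 20 and 120 degrees).
import math

def triangle_quality(p1, p2, p3, max_side_ratio=10.0,
                     max_angle_deg=120.0, min_angle_deg=20.0):
    """Return True if the triangle passes all three shape criteria."""
    pts = (p1, p2, p3)
    sides = [math.dist(pts[i], pts[(i + 1) % 3]) for i in range(3)]
    if max(sides) / min(sides) > max_side_ratio:
        return False
    # Interior angles from the law of cosines.
    angles = []
    for i in range(3):
        a, b, c = sides[i], sides[(i + 1) % 3], sides[(i + 2) % 3]
        angles.append(math.degrees(math.acos((b * b + c * c - a * a)
                                             / (2.0 * b * c))))
    return min(angles) >= min_angle_deg and max(angles) <= max_angle_deg
```

An equilateral element passes, while a long sliver element (very small interior angle) is rejected.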

4.5. Integration of DFN Models with Triangular Mesh Generation
Integration of DFN models with unstructured, triangular mesh generation requires
an appreciation for the requirements of meshing routines. This is to ensure that fracture
geometries are not generated which would be impossible to fit given the current meshing
constraints, or those that would generate poor quality elements (Painter 2011; Painter et
al. 2012; Painter et al. 2014). As such, enhancement to the general Baecher disk model
algorithm is required to account for later unstructured mesh generation during
geomechanical simulation modelling (Baecher et al. 1978). This includes the development of constraints for the DFN process to ensure seamless integration. In this chapter, three constraints are proposed for this purpose. First, a minimum overlap/separation distance (δ) is specified to prevent the development of unnecessarily small elements, which would slow down overall numerical simulation times. Second, intersection points between generated fractures are checked to ensure they are spaced at a distance greater than the specified overlap/separation distance (δ). Finally, a minimum intersection angle (θcrit) is used to ensure generated fractures intersect at angles greater than the minimum internal angle used in unstructured mesh generation. These constraints require the user to specify a critical minimum overlap/separation distance (δ) and a critical minimum angle (θcrit).

4.5.1. Overlap/Separation Distance
The first stage in the overlapping distance analysis involves the creation of buffer zones around existing fractures using the specified minimum overlap/separation distance (δ; Figure 4.1). Newly generated fractures are then checked to ensure they do not terminate within one of these zones. This is done to prevent the development of the unsatisfactorily small elements that would be required to fill the small gaps between near-terminating fractures. If a fracture is found to terminate within a buffer zone, it is discarded and the generation process restarted.
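The buffer-zone test reduces to a point-to-segment distance calculation for each endpoint of the new fracture. A sketch is given below; the names (delta, passes_buffer_check) are illustrative assumptions.

```python
# Sketch of the overlap/separation (buffer-zone) check: a new fracture is
# rejected if either endpoint lies within distance delta of an existing
# fracture.  Function names are illustrative.
import math

def point_segment_distance(p, a, b):
    """Shortest distance from point p to segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    abx, aby = bx - ax, by - ay
    denom = abx * abx + aby * aby
    if denom == 0.0:                       # degenerate segment
        return math.dist(p, a)
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / denom))
    return math.dist(p, (ax + t * abx, ay + t * aby))

def passes_buffer_check(new_frac, existing, delta):
    """True if neither endpoint of new_frac terminates within delta of
    any existing fracture trace."""
    return all(point_segment_distance(end, a, b) >= delta
               for end in new_frac for (a, b) in existing)
```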

4.5.2. Intersection Distance
The second check ensures that the intersection of three or more fractures does not produce unacceptably small elements (Figure 4.2). This is done by ensuring that the separation distance between all intersection points (δn) is greater than the overlap/separation distance (δ). If this check is found to be false (δn < δ), then the newly generated fracture is discarded and the process restarted.
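This check can be sketched by intersecting the new trace with every existing trace and testing the pairwise spacing of the resulting points. The segment-intersection helper below is a standard parametric formulation, not the thesis code.

```python
# Sketch of the intersection-distance check: all intersection points the
# new fracture creates must be at least delta apart.
import itertools
import math

def segment_intersection(p1, p2, p3, p4):
    """Crossing point of segments p1p2 and p3p4, or None."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if d == 0.0:                           # parallel segments
        return None
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / d
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

def passes_spacing_check(new_frac, existing, delta):
    points = [q for seg in existing
              if (q := segment_intersection(*new_frac, *seg)) is not None]
    return all(math.dist(a, b) >= delta
               for a, b in itertools.combinations(points, 2))
```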

4.5.3. Intersection Angle
The final stage in quantifying the suitability of a newly generated fracture is to check whether the minimum intersection angle (θ) between it and previously generated intersecting fractures is less than the critical minimum angle (θcrit; Figure 4.3). The procedure works by checking that newly generated fractures which intersect the buffer zones of existing fractures have intersection angles greater than the critical minimum angle (θcrit). If this check fails, the fracture is discarded.
Once all three checks have been conducted, a newly generated fracture is either accepted or discarded. If the fracture is discarded, another seed point is generated and the qualification process restarted until a valid location for the fracture is found. One limitation of this process is that it can lead to an infinite loop if the fracture density exceeds a critical threshold. Simulations indicate that this threshold typically occurs when P20 values exceed 0.75 to 0.85 times the inverse buffer zone area. To prevent this from occurring, a limit is placed on the maximum number of new seed locations that are attempted before the program ceases and returns an invalid result. Provided this limit is not encountered, new fractures are continuously generated for a designated set until the designated P20 or P21 value is achieved. The algorithm then moves on to the next fracture set in the sequence. Fractures are generated within a region equal to four times the desired simulation area and later truncated in order to minimize boundary effects.
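The overall accept/reject loop with an attempt cap can be sketched generically. Here propose_fracture and the check functions are assumed to be supplied by the caller, and max_attempts is an illustrative stand-in for the seed-location limit described above.

```python
# Sketch of the accept/reject generation loop with an attempt cap
# (propose_fracture returns (fracture, trace_length); each check takes
# (fracture, accepted_list) and returns a bool).
def grow_set(propose_fracture, checks, area, target_p21,
             max_attempts=10_000):
    """Add fractures until the set reaches target_p21 (trace length per
    unit area), or fail if too many proposals are rejected in a row."""
    accepted, total_len, attempts = [], 0.0, 0
    while total_len / area < target_p21:
        frac, length = propose_fracture()
        if all(check(frac, accepted) for check in checks):
            accepted.append(frac)
            total_len += length
            attempts = 0
        else:
            attempts += 1
            if attempts >= max_attempts:
                raise RuntimeError("target intensity unreachable "
                                   "under the given constraints")
    return accepted
```

This mirrors the behaviour described in the text: near the critical density, rejections dominate and the attempt cap terminates the run with an invalid result rather than looping forever.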

Figure 4.1	Procedure for the overlap/separation distance check, with a buffer zone defined using the specified minimum overlap/separation distance (δ). In the above case, fnew i would be rejected as it terminates within the buffer zone, whereas fnew ii would be accepted as both its terminations are outside the zone.

Figure 4.2	Procedure for the intersection distance check, used to ensure that intersection points are spaced greater than the overlap/separation distance (δ). This is done to prevent the development of unacceptably small elements. In the above case, the newly generated fracture would be rejected if δ1, δ2 or δ3 are less than the overlap/separation distance (δ).

Figure 4.3	Procedure for the intersection angle check, to ensure that newly generated fractures form at angles greater than the critical minimum angle (θcrit). In the above case, fnew i would fail the check due to the acute angle between it and fold, whereas fnew ii would pass the test due to the high angle between fold and fnew ii.

4.6. Model Validation


A series of simulations was conducted in order to test the ability of the proposed DFN generator to reproduce accurate fracture network statistics from field data. Input parameters for the simulation include fracture orientation, intensity (P21) and length parameters, as well as the modified method constraint variables (δ, θcrit). Characterization of these attributes involved the use of discontinuity orientation data collected from both exposure and borehole mapping at an undisclosed mine site. Stereographic data were converted to 2D apparent dips for an east-west cross section (090°) and summarized using a running average technique utilizing 20° spatial bins (Figure 4.4a). An idealized Gaussian model was then fit to the data using least squares analysis. The Gaussian model assumed three discrete discontinuity sets, each of which can be described by a single normal distribution. Fracture length statistics were characterized using a log-normal distribution model. Statistical models were then imported into the DFN generator and a series of DFNs produced. The results indicate good agreement between the actual data and generated DFNs (Figure 4.4b).
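A simplified version of the orientation-model fit can be sketched as follows. The thesis fit is a nonlinear least-squares Gaussian model; here, as an illustration only, the set mean dips and spreads are held fixed so that the set weights enter linearly and can be recovered by ordinary least squares. All numeric values below are illustrative, not the mine-site data.

```python
# Simplified Gaussian-mixture fit to binned apparent-dip probabilities:
# with means/sigmas fixed, the set weights are a linear least-squares
# problem.  Values are illustrative, not the thesis dataset.
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def fit_set_weights(bin_centres, probs, means, sigmas):
    """Least-squares weights for a fixed-component Gaussian mixture."""
    basis = np.column_stack([gaussian(bin_centres, m, s)
                             for m, s in zip(means, sigmas)])
    w, *_ = np.linalg.lstsq(basis, probs, rcond=None)
    return w

x = np.linspace(-90.0, 90.0, 37)              # 5-degree bin centres
true_w = np.array([0.5, 0.3, 0.2])            # three discontinuity sets
means, sigmas = [-60.0, 0.0, 45.0], [10.0, 15.0, 8.0]
y = sum(w * gaussian(x, m, s) for w, m, s in zip(true_w, means, sigmas))
w_hat = fit_set_weights(x, y, means, sigmas)
```

In practice the means and standard deviations would themselves be fitted (nonlinearly), as described in the text.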

Figure 4.4	(a) Model validation of fracture orientation statistics. Good agreement is shown between DFN simulations using the modified algorithm and actual fracture network distributions. (b) Length statistics back-calculated from un-truncated model simulation results also show good agreement with model parameters.

4.7. Comparison with Traditional Methods


A series of DFNs was generated using the Baecher et al. (1978) method to demonstrate the issues that arise with traditional DFNs during integration with geomechanical simulation codes. Results were then compared with the proposed modified DFN algorithm to demonstrate the benefits and limitations of the new method. Two conceptual DFN morphologies were used, each with two discontinuity sets (Table 4.1). The first employed an orthogonal fracture morphology, with the mean dip of set one oriented perpendicular to set two. The second DFN used acute discontinuity orientations, with the minimum angle between mean dips less than the minimum interior angle used in later mesh generation. Twenty-five DFN simulations were produced for both trials, with the resulting models incorporated into the Rockfield (2013) software ELFEN. Models were then meshed using the integrated tessellation routine within ELFEN. The traditional DFNs were incorporated into the geomechanical software twice, once without any manual manipulation of the fractures and then again following a subjective clean-up process, in which problematic fractures causing unacceptable mesh configurations were removed (Figure 4.5).

Table 4.1	2D DFN fracture morphologies used to demonstrate the issues that arise when incorporating traditional DFNs into geomechanical simulation codes.

Trial        Fracture Set I 2D Dip (°)     Fracture Set II 2D Dip (°)    P21 (m⁻¹)
             Model     From     To         Model     From     To         Model    Mean    Std. Dev.
Orthogonal   Uniform   35.0     55.0       Uniform   -35.0    -55.0      Normal   0.85    0.07
Acute        Uniform   27.5     47.5       Uniform   47.5     72.5       Normal   0.84    0.07

Incorporation of unmodified traditional DFNs into ELFEN resulted in an 8.0% failure rate, with mesh discretization causing an abnormal termination of the software due to the presence of problematic fractures. However, DFN realizations that were successful still resulted in the development of poor quality mesh elements during tessellation, which can lead to numerical instabilities during computation (Figure 4.5). Modification of the DFNs to remove unacceptable fracture configurations resulted in an average reduction in P21 values of 6.3% and 5.6% for the orthogonal and acute sets, respectively (Figure 4.6). Although specific reduction rates are stated, values are likely to vary between researchers due to the subjective nature of the manipulation process.

In comparison to the traditional DFN method, the modified DFN algorithm had no problems with incorporation in ELFEN. As a result, seamless integration was achieved between the two model packages. This allowed for increased model construction efficiency, as manual manipulation of the DFN models was avoided. The approach also allows for greater reproducibility between researchers, as it avoids subjective manipulation techniques. Although the presented research has been limited to DFN incorporation within ELFEN, similar clean-up procedures are required for DFN incorporation within the Rocscience (2013) software Phase2 and the Itasca (2014) code UDEC, due to similar meshing issues. Although beyond the scope of this chapter, preliminary incorporation of the modified DFN algorithm into the aforementioned codes has shown promising results (Figure 4.7).

Figure 4.5	DFN models and their corresponding mesh tessellation within the Rockfield (2013) software ELFEN. (Left) Irregular mesh tessellation caused by traditional DFN schemes (DFN clean-up is required); closely generated fractures cause the formation of skinny mesh elements during tessellation. (Right) DFN model created by the proposed modified DFN approach (DFN clean-up is NOT required); incorporation of the DFN within ELFEN requires no additional clean-up.

The main drawback of the modified method is the increased spatial homogenization of the fracture network (Figure 4.5). This phenomenon may be inconsistent with naturally fractured systems, which often exhibit a hierarchical structure with localized clustering (Pollard and Aydin 1988). It may also lead to artificial increases in the overall rock mass strength due to a reduction in the overall DFN connectivity, and hence an increase in rock bridge percentage (Elmo et al. 2011; Havaej et al. 2012; Tuckey et al. 2012; Tuckey 2012; Fadakar et al. 2014). Although this is a limitation of the proposed DFN algorithm, a similar homogenization occurs during the incorporation of traditional DFNs into geomechanical simulation codes. This is due to the manual manipulation process, which often removes clustered fractures to limit the development of poor quality elements. Spatial homogenization, therefore, is an inherent limitation of both DFN methods and must be taken into consideration during the simulation process.

Figure 4.6	Reduction in P21 values associated with the incorporation of traditional DFN methods within geomechanical software, compared to the proposed modified DFN approach. Histograms of simulated P21 (m⁻¹) are shown for the orthogonal and acute sets.

One method to overcome this homogenization problem is the incorporation of spatial dependencies within the DFN generation algorithm. If statistics characterizing the spatial structure are collected, then the constraints proposed in Section 4.5 could be implemented with DFN methodologies that take this spatial conditioning into consideration, using models such as the war zone (Geier et al. 1988), hierarchical fracture (Ivanova 1995), geostatistical (Long and Billaux 1987; Gringarten 1997; Billaux et al. 1989; Wen and Sinding-Larsen 1997), or Markov chain Monte Carlo approaches (Mardia et al. 2007). Employing such a methodology should minimize the inherent homogenization through the preservation of observed heterogeneities. However, spatial statistics of a given fracture network are rarely collected in conventional geotechnical studies, making incorporation in DFN algorithms problematic. As a result, unless a fundamental shift in data collection practices occurs within the rock mechanics community, the majority of DFNs are likely to be limited in reproducing accurate spatial structures.
Figure 4.7	Discrete fracture network generated using the modified DFN algorithm and incorporated into the Itasca (2014) software UDEC. The figure shows the distribution of discrete, triangular blocks within UDEC (outlined in grey). The blocks were generated to conform to the fracture network.

4.8. Conclusions and Future Work


The DFN approach is an invaluable tool for the study of rock mass behaviour; however, traditional DFN methodologies can lead to the development of unacceptable fracture configurations when coupled with geomechanical simulation software. The generation of fractures intersecting at acute angles, bounding small regions of a model, or terminating in very close proximity to each other can all lead to problems for the meshing routines used in numerical simulation. To eliminate these issues, this chapter proposed an alternative method for DFN construction, which takes meshing requirements into consideration during fracture generation. This is done through three primary DFN constraints, namely:

•	An overlap/separation buffer is used to ensure generated fractures do not terminate within the buffer zones of existing fractures.
•	An intersection distance check is used to prevent the development of bounded regions that are smaller than the desired minimum element size of future meshing processes.
•	Finally, the intersection angle of any two cross-cutting fractures is checked to ensure that it is greater than the minimum internal angle of future mesh elements.

Without these constraints, traditional DFN methodologies were shown to be problematic when utilized with geomechanical software. Manual manipulation of DFNs generated without constraints was required to facilitate integration between the software packages. This resulted in a subjective process, with issues arising as P21 values were artificially reduced, leading to DFNs that were not representative of the original dataset. In comparison, the modified DFN algorithm was shown to offer seamless integration between the software packages, improving model construction efficiency and reproducibility between researchers.
While this chapter presented a formal methodology for a modified discrete fracture network process, the research remains on-going, as limitations still exist with the presented methodology. Future and on-going work includes the:

•	Advancement of the modified DFN algorithm from 2D to 3D. This will provide greater integration between the DFN software and geomechanical simulation codes, which are progressively moving towards more three-dimensional problem sets.
•	Incorporation of meshing algorithms into the DFN software in order to allow for the generation of discrete element networks that take into consideration the locations of pre-existing discontinuities. This will allow for the direct incorporation of triangular mesh geometries into geomechanical simulation software packages such as ELFEN, Phase2, UDEC, etc. (Rocscience 2013; Rockfield 2013; Itasca 2014).
•	Inclusion of additional discrete fracture network methodologies to take into consideration spatial inter-dependencies within datasets. The current generator is based on the use of the Baecher et al. (1978) disk method; however, alternative methods such as the war zone (Geier et al. 1988), hierarchical fracture (Ivanova 1995), geostatistical (Gringarten 1997; Long and Billaux 1987; Billaux et al. 1989; Wen and Sinding-Larsen 1997), or Markov chain Monte Carlo (Mardia et al. 2007) approaches could be coupled with the outlined DFN constraints to better characterize the hierarchical structure found in natural systems.
•	Exploration of the effects of spatial homogenization on overall rock mass strength. Current research suggests that DFN incorporation into geomechanical simulation codes results in a decrease in spatial clustering and an overall reduction in fracture network connectivity. This coincides with an increase in rock bridge percentage, which should, theoretically, lead to an increase in the overall rock mass strength; however, the degree of strengthening is unclear at this time. Future research will try to characterize the relationship between the two parameters.

5. Mesh Dependencies in UDEC Grain Boundary Models

5.1. Abstract

The advancement of numerical modelling codes to include the simulation of brittle fracture mechanics is at the forefront of geomechanical design. One of the leading areas in this field of research is the use of UDEC grain boundary models, where rock masses are simulated as a stochastic arrangement of discrete blocks. This approach has shown promise in back-analysis; however, to date, few studies have characterized possible limitations in using the method for predictive analysis. This study suggests that mesh dependencies can impart irreducible uncertainties into UDEC grain boundary models during forward-analysis. In addition, micro-scale fracture mechanisms are found to be highly dependent on the underlying mesh geometries. Voronoi meshing routines were found to limit kinematic freedom, increasing the degree of localized tensile failure. In comparison, triangular mesh geometries had the opposite effect, increasing kinematic freedom and predisposing models towards shear failure mechanisms. While these results do not preclude application of the method, the irreducible calibration uncertainties and mesh dependency issues must be taken into consideration when conducting UDEC grain boundary model analysis.

Prepared for submission to International Journal of Rock Mechanics and Mining Sciences & Geomechanics Abstracts as J.M. Mayer and D. Stead, Mesh Dependencies in UDEC Grain Boundary Models.


5.2. Introduction
The simulation of a rock mass is both an interesting and complex problem within geotechnical engineering disciplines. Unlike manufactured materials, rock masses pose a difficult problem for engineers due to their heterogeneous nature. This leads to complex deformational responses to induced stresses, as complex interactions exist between both intact rock and discontinuities (e.g. micro-fractures, macro-fractures, faults, etc.). Further complicating the problem, the low confining stresses present in most engineering applications lead to deformation which is inherently brittle in nature. This results in a temporally variable material which is in a constant state of change, as brittle damage accumulates within intact rock material, leading to the development of new macro-scale discontinuities (Cai et al. 2004).
Interest in brittle fracture and the desire to simulate such behaviour has led
researchers to attempt to simulate fracture development using diverse numerical
methods, including: limit equilibrium, continuum, discontinuum and hybrid approaches
(Jing 2003; Stead et al. 2006). While the former methods attempt to represent rock
mass behaviour using an averaging or representative elementary volume (REV)
technique, the latter attempt to explicitly model both intact rock and discontinuity
behaviour. Included in these latter approaches is the distinct element method (DEM),
first proposed by Cundall (1971). The method simulates the finite displacement and
rotation of discrete deformable and/or rigid blocks, with defined block contact properties.
While the explicit breakage of blocks is not possible using the conventional method,
Lorig and Cundall (1987) developed the Voronoi tessellation model to approximate brittle
failure through the progressive breakage of block contacts within the Itasca code UDEC
(Itasca 2014). Using this approach, a UDEC grain boundary model (UDEC-GBM) is constructed, where intact rock is simulated assuming discrete elements represent individual grains, with the macro-scale behaviour controlled by deformation along inter-grain boundaries.
The potential of the UDEC-GBM method has been recognized for a considerable amount of time; however, only recently has the method become extensively utilized within the research literature. Christianson et al. (2006) demonstrated application of the approach in reproducing laboratory test data. Damjanic et al. (2007) examined the mechanical degradation of a rock mass around emplacement drifts. Lorig et al. (2009) employed the method to simulate the effect of brittle fracture in causing catastrophic collapse of a slow moving landslide. Alzoubi (2009, 2012) reproduced typical rock slope failure mechanisms (i.e. toppling and buckling) using the UDEC-GBM method. Kazerani and Zhao (2010) presented a formal methodology for calibration, which was later updated using central composite design methods (Kazerani et al. 2012; Kazerani 2013; Kazerani and Zhao 2014). Shin (2010) simulated the development of fracturing within the disturbance zone around underground openings. Damjanac and Fairhurst (2010) examined the effects of damage accumulation over time within crystalline rocks. Lan et al. (2010) and Nicksiar and Martin (2013) explored the effects of grain-scale heterogeneities during compressional loading. Gao (2013) and Gao and Stead (2014) extended the method to include triangular block shapes through the UDEC trigon model, and applied the approach to coal seam longwall caving applications (Gao et al. 2014a, 2014b).
Although the UDEC-GBM method shows promise under back-analysis, researchers are limited by their inability to directly measure the micro-scale block contact properties. Due to this limitation, calibration must be conducted, whereby researchers match the macro-scale behaviour from laboratory testing by varying the micro-scale properties of UDEC-GBMs (Kazerani and Zhao 2010; Kazerani et al. 2012; Kazerani 2013; Kazerani and Zhao 2014). The calibration process has been demonstrated to realistically reproduce macro-scale behaviour under back-analysis conditions; however, to date, very few studies have attempted to characterize the uncertainty associated with using the calibrated attributes in forward analysis. This presents a potential issue for researchers, as UDEC-GBMs are known to be highly dependent on the shape and arrangement of model blocks (Gao 2013). As a result, variations in the stochastic block arrangements between calibrated back-analysis and forward-analysis models may result in variations in the macro-scale model response, leading to undesirable and/or unpredictable results.
In this chapter, rock mass material from the Ok Tedi mine site in Papua New Guinea is simulated within the Universal Distinct Element Code (UDEC; Itasca 2014). A series of uniaxial, triaxial, and Brazilian tension tests are first simulated to calibrate the micro-scale properties of a UDEC-GBM to fit the macro-scale behaviour observed from laboratory testing of the Darai Limestone at the Ok Tedi mine site. Characterization of the uncertainty associated with forward analysis is then conducted through the simulation of multiple UDEC-GBM realizations utilizing a constant element size but varying stochastic arrangements of the UDEC blocks. Uncertainty analysis focuses on characterizing the macro-scale parameter variance given constant, calibrated micro-scale properties for the contact stiffness, cohesion, friction angle and tensile strength. In addition, mesh dependency issues associated with micro-scale failure mechanics were explored through examining tensile vs. shear damage. Propagation of these uncertainties is then demonstrated when moving from simple intact-rock samples to more complex synthetic rock mass models.

5.3. Darai Limestone


UDEC-GBMs were constrained using macro-scale properties obtained from laboratory testing of the Darai Limestone at the Ok Tedi mine site. The site is a world-class copper porphyry deposit located in the Western Province of Papua New Guinea. The mine has been in operation since the late 1980s and is nearing the end of its current open pit design life. Detailed studies were conducted to assess the feasibility of transitioning operations from open pit to underground. Underground designs along the east of the deposit called for a decline to pass through approximately 650 m of the Darai Limestone formation.

The Darai Limestone is a Late Eocene to Middle Miocene, buff to pale grey, massive, poorly-bedded limestone, composed of lime packstone, mudstone and wackestone units. Minor chert, calcareous siltstone and dolomite lenses can be found interbedded with the general limestone packages. The unit varies markedly in thickness across the site, from 50 to 1,000 m, due to localized nappe-style thrusting of sedimentary units (Baczynski 2011). Bedding is often difficult to identify at the outcrop scale, with pervasive jointing giving the unit a rubble-like appearance. Although local spatial variability exists, average intact rock and discontinuity strength estimates have been obtained from laboratory testing (Table 5.1).

Table 5.1	Geomechanical properties for the Darai Limestone within the proposed Ok Tedi underground. Attributes are obtained from laboratory testing of drill core data.

Unit             Property                       Value
Intact Rock      Peak Friction Angle (°)        44.9
                 Peak Cohesion (MPa)            8.3
                 Residual Friction Angle (°)    31.5
                 Residual Cohesion (MPa)        0.08
                 Tensile Strength (MPa)         5.1
                 Young's Modulus (GPa)          55.0
                 Poisson's Ratio                0.26
                 Density (kN/m³)                28.9
Discontinuities  Friction Angle (°)             31.5
                 Peak Cohesion (MPa)            0.375
                 Residual Cohesion (MPa)        0.08

Joint orientation data obtained from tunnel exposure and borehole mapping suggest a complex joint hierarchy, with eight discrete sets identified across the site; however, no more than four sets have been identified at any one location (de Bruyn et al. 2013). Characterization of the 2D dip orientations was conducted using data from the exposure and borehole mapping for use with later synthetic rock mass modelling. Orientations were converted to 2D apparent dips for an east-west cross section (090°) and summarized using a running average technique utilizing 20° spatial bins (Figure 5.1). An idealised Gaussian model was then fit to the data using least squares analysis. The model assumed three discrete discontinuity sets, each of which can be described by a single normal distribution.


Figure 5.1    2D apparent dip estimates from orientation data collected for the Darai Limestone near the proposed Ok Tedi underground. Running average probabilities (20° spatial averaging used for probability estimates) are plotted against 2D apparent dip (°), with the observation data overlain by the fitted bimodal Gaussian model.

Two-dimensional fracture density (P21) estimates were compiled for each of the fracture sets based on persistence and spacing measurements from SRK/OTML (SRK 2013c; Table 5.2). Estimates assumed that only four discontinuity sets are present at any given time, based on recommendations by de Bruyn et al. (2013). Due to the extremely high fracture density at the Ok Tedi site, it was impossible to include all fractures in the geomechanical simulations (Mayer et al. 2014a). As a result, P21 and persistence attributes were reduced by a factor of 30 in order to produce DFNs suitable for numerical simulation. Due to this reduction, the DFN simulations should not be considered an accurate reproduction of the actual in-situ site conditions; rather, they are designed to capture the general behaviour that can be expected from the inclusion of fractures into UDEC-GBMs, and the uncertainty associated with it.
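As a rough illustration of this scaling step, the reduction can be expressed as a simple division of the fracture intensity and persistence statistics by a common factor; the function name and input values below are illustrative, not the site statistics:

```python
# Minimal sketch of the density reduction described above: P21 and
# persistence statistics are divided by a common factor (30 in this study)
# before DFN generation. Numbers used here are illustrative only.
REDUCTION_FACTOR = 30.0

def reduce_dfn_attributes(p21, persistence_mean, factor=REDUCTION_FACTOR):
    """Return reduced (P21, mean persistence) for tractable simulation."""
    return p21 / factor, persistence_mean / factor
```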


Table 5.2    Discontinuity orientation data used for 2D modelling of Darai Limestone. Data were obtained from SRK (2013c). P21 estimates were decreased by a factor of 30 to produce DFNs suitable for geomechanical simulation.

                                  Discontinuity Sets
                                    1       2       3
Dip (°)            Mean          -74.1     4.3    46.0
                   Std. Dev.      29.4    15.5     9.5
Persistence (m)    Mean            0.41
                   Std. Dev.       0.05
P21 (m⁻²)                          7.98    0.99    0.68

5.4. Methodology

5.4.1. UDEC Block Tessellation

A fundamental issue within rock mechanics is the difficulty in directly measuring the mechanical properties of a rock mass at a scale suitable for engineering design (Wyllie and Mah 2004; Jaeger et al. 2007). This remains a key issue, as mechanical properties obtained from laboratory testing are typically not representative of the design-scale rock mass behaviour due to the presence of discontinuities at larger scales. Recently, a numerical approach has been proposed which attempts to quantitatively estimate the scale effects associated with these discontinuities (Pierce et al. 2007). The approach represents a jointed rock mass numerically through the generation of a synthetic rock mass (SRM). This is accomplished by the superimposition of a discrete fracture network (DFN) onto a geomechanical simulation model. Using this approach, the design-scale rock mass structure can be explicitly represented, and then used to estimate the large-scale failure behaviour and mechanical properties (Pierce et al. 2007; Cundall et al. 2008; Esmaieli et al. 2010; Mas Ivars et al. 2007; Deisman et al. 2010; Mas Ivars et al. 2011; Pettitt et al. 2011; Zhang et al. 2011; Gao 2013; Zhang 2014).
One limitation of the SRM approach is its dependency on the DFN method and the difficulty in integrating the generated features with common numerical meshing routines. This is due to the development of adverse fracture geometries, including: sub-parallel fractures that intersect at acute angles, bounding of adversely small regions by the intersection of three or more fractures, and near-terminating fractures. To solve these issues, researchers typically manually manipulate DFNs prior to incorporation within numerical models; however, this is a subjective process which leads to alteration of the fracture attribute statistics (Mayer et al. 2014a). In order to solve these issues, and enhance the integration of DFNs with numerical meshing codes, an alternative DFN algorithm was proposed by Mayer et al. (2014a). The alternative approach is a modification of the Baecher et al. (1978) DFN algorithm, which takes into consideration numerical meshing routines during the fracture generation process.
The DFN algorithm extends upon Baecher et al.'s (1978) work by incorporating three constraints into the fracture generation process (Figure 5.2). The process is based on the definition of a user-specified critical minimum overlap/separation distance and a critical minimum intersection angle. The methodology relies on the following constraints:
1. First, a buffer zone is constructed around pre-existing fractures using the minimum overlap/separation distance (Figure 5.2). Newly generated fractures are then checked to ensure their tips do not terminate within the buffer zones. This ensures that zones are not created which would require the development of unsatisfactorily small mesh elements.
2. Next, the enclosed area between three or more intersecting fractures is checked to ensure that it does not bound a region smaller than the minimum desired element size. Bound regions are checked by ensuring that the separation distance between all intersection points is greater than the minimum overlap/separation distance (Figure 5.2).


Figure 5.2    Flow chart for the modified Baecher et al. (1978) DFN generation algorithm: a new fracture seed location is found, a new fracture is generated, and the DFN constraints (overlap/separation distance, intersection distance and intersection angle) are checked; fractures failing a check are regenerated, while passing fractures are retained until the target P21 is reached and the simulation ends. The methodology is used to generate fracture networks which adhere to later geomechanical meshing routines.

3. Finally, intersection angles between newly generated and pre-existing fractures are checked to ensure that they are larger than the critical minimum angle (Figure 5.2). This procedure ensures that regions are not created which would require mesh elements with internal angles less than the desired minimum.
This modified approach has been shown to offer seamless integration between DFN generation and numerical meshing, improve model construction efficiency, preserve fracture attribute statistics, and improve reproducibility between researchers. For a more detailed description of the methodology see Mayer et al. (2014a).
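As a hedged sketch (not the thesis implementation), the three acceptance checks might be expressed as follows for 2D fracture segments; all function names and geometric tests are illustrative, with the minimum overlap/separation distance and critical angle passed in as parameters:

```python
import math

# Illustrative sketch of the three DFN acceptance checks described above.
# Fractures are 2D segments ((x1, y1), (x2, y2)); 'delta' is the minimum
# overlap/separation distance and 'theta_crit_deg' the critical angle.

def point_segment_distance(p, a, b):
    """Distance from point p to segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / max(dx * dx + dy * dy, 1e-12)
    t = min(1.0, max(0.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def tip_outside_buffers(new_frac, existing, delta):
    """Check 1: neither tip of the new fracture terminates inside the
    delta-wide buffer zone around any pre-existing fracture."""
    for a, b in existing:
        for tip in new_frac:
            d = point_segment_distance(tip, a, b)
            if 0.0 < d < delta:  # tip hovers inside the buffer zone
                return False
    return True

def intersections_well_spaced(points, delta):
    """Check 2: intersection points along a fracture are at least delta
    apart, so no bounded region is smaller than the mesh element size."""
    pts = sorted(points)
    return all(math.dist(p, q) >= delta for p, q in zip(pts, pts[1:]))

def angle_acceptable(dir1_deg, dir2_deg, theta_crit_deg):
    """Check 3: the acute angle between two fracture orientations
    exceeds the critical minimum intersection angle."""
    d = abs(dir1_deg - dir2_deg) % 180.0
    return min(d, 180.0 - d) >= theta_crit_deg
```

A candidate fracture is retained only if all three checks pass; otherwise it is discarded and regenerated, as in Figure 5.2.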
In order to further facilitate DFN integration, a triangular mesh generation routine was incorporated into the DFN algorithm to generate a mesh that conforms to the fracture arrangement. This is advantageous over the integrated tessellation processes within UDEC, which do not take into consideration the location of discrete features during the meshing process (Figure 5.3). These have a tendency to generate adversely small elements, which must be removed prior to numerical simulation in order to prevent excessively slow computational times. In addition, fractures often terminate within Voronoi blocks and must be artificially truncated, causing alteration of the fracture attribute statistics. In comparison, the proposed tessellation process generates GBMs which conform to the DFN geometries.
The proposed tessellation process was designed to produce a triangular mesh similar to the newly implemented Trigon mesh within UDEC (Gao 2013). The process follows a three-step procedure. First, a set of principal triangles is constructed which fully defines the extent of the model. Next, grid points are inserted along the fractures, and the mesh is progressively updated until the grid point spacing is less than the minimum overlap/separation distance. Finally, the generated triangles are progressively split, producing successively smaller elements, until all triangles have a maximum height less than 1.5 times the minimum overlap/separation distance. Details on these steps are provided in subsequent sections. Adaptive re-meshing, which occurs progressively as new grid points are inserted into the mesh, is conducted according to the algorithm described in Figure 5.4.


Figure 5.3    Generation of poor quality, small elements during embedment of DFNs into UDEC Voronoi models.

Principal triangles
The first stage in the triangulation process is the development of a pair of principal triangles which fully encapsulates the simulation area. This is done by generating two triangles that together cover the simulation area, with coordinates designated as:
Triangle 1: (0, 0), (wmax, 0), (0, hmax)
Triangle 2: (wmax, 0), (wmax, hmax), (0, hmax)
where hmax is the height and wmax is the width of the sample. These triangles are then progressively split in subsequent steps using the adaptive re-meshing routine outlined in Figure 5.4.

Figure 5.4    Demonstration of the triangulation algorithm used for mesh generation. (a) A new grid point is inserted at the centroid of the designated triangle. (b) All triangles whose circumcircle contains the new point are flagged. (c) Flagged triangles are removed from the mesh and shared edges flagged. (d) Shared edges are removed from the flagged triangles. (e) New triangles are generated by connecting the new grid point and remaining edges. (f) Newly generated triangles are reinserted into the mesh. The method is a modified version of the procedure described by Priester (2004).
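The insertion step summarized in Figure 5.4 follows the familiar Bowyer-Watson pattern; a minimal sketch, assuming triangles are stored as 3-tuples of vertex coordinates (an illustrative data layout, not the thesis C++ code), might look like:

```python
# Illustrative sketch of the point-insertion step of Figure 5.4 (a
# Bowyer-Watson style update): triangles whose circumcircle contains the
# new point are removed, and the cavity is re-triangulated by fanning
# its boundary edges to the new point.

def circumcircle_contains(tri, p):
    """True if point p lies inside the circumcircle of triangle tri."""
    (ax, ay), (bx, by), (cx, cy) = tri
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    r2 = (ax - ux)**2 + (ay - uy)**2
    return (p[0] - ux)**2 + (p[1] - uy)**2 < r2

def insert_point(triangles, p):
    """Insert point p into a triangulation (list of 3-vertex tuples)."""
    bad = [t for t in triangles if circumcircle_contains(t, p)]   # step (b)
    kept = [t for t in triangles if t not in bad]                 # step (c)
    # Cavity boundary: edges belonging to exactly one flagged triangle (d)
    edges = {}
    for t in bad:
        for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
            key = tuple(sorted(e))
            edges[key] = edges.get(key, 0) + 1
    boundary = [e for e, n in edges.items() if n == 1]
    # Re-triangulate the cavity by fanning from the new point (e)-(f)
    return kept + [(a, b, p) for a, b in boundary]
```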

Discretization of fractures
Re-meshing of the triangulation to incorporate the DFN involves a three-step procedure. First, grid points are inserted at the end points of each fracture (Figure 5.4). This ensures that fractures are fully inserted into the mesh, allowing for wing crack development at fracture terminations during geomechanical simulation. Next, grid points are inserted at all fracture intersection points, ensuring their preservation within the mesh. Finally, the fractures are progressively split into segments by inserting grid points at the half-width distance between established grid nodes along fracture surfaces. This procedure is continued until all segments are shorter than the minimum overlap/separation distance.
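The progressive splitting step can be sketched as repeated midpoint insertion until the segment spacing falls below the minimum overlap/separation distance (passed here as `delta`); this is an illustrative reconstruction, not the thesis code:

```python
import math

# Sketch of the progressive fracture splitting described above: grid
# points are inserted at segment midpoints until every segment along the
# fracture is shorter than the minimum overlap/separation distance.

def discretize_fracture(p1, p2, delta):
    """Return grid points along fracture p1-p2 with spacing < delta."""
    points = [p1, p2]
    while True:
        # Halving keeps spacing uniform, so the first segment is typical.
        if math.dist(points[0], points[1]) < delta:
            return points
        new_pts = [points[0]]
        for a, b in zip(points, points[1:]):
            new_pts.append(((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0))
            new_pts.append(b)
        points = new_pts
```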

Splitting of large triangles
The final stage of the meshing process involves the progressive splitting of large triangles into smaller elements. This is done by progressively splitting the triangles with the largest internal height dimension, until the height of all triangles is less than 1.5 times the minimum overlap/separation distance. A factor of 1.5 is used to induce a slightly higher mesh density along fracture surfaces, which helps to constrain the location of triangle edges along fracture surfaces (Figure 5.4).

5.4.2. UDEC-Grain Boundary Model

Rock mass failure has been shown to be a progressive process characterized by several distinct deformation stages (Cai et al. 2004). These include the initiation of micro-seismic events as new micro-scale cracks form when the stress level exceeds approximately 0.3-0.5 times the peak uniaxial load (Brace et al. 1966; Bieniawski 1967; Holcomb and Costin 1987). This is followed by the propagation of microcracks mainly parallel to the maximum principal stress orientation, and the eventual onset of microcrack coalescence as stress levels exceed approximately 0.7-0.8 times the peak strength (Lockner et al. 1992; Martin and Chandler 1994). Finally, progressive damage results in the formation of macro-scale cracks and/or shear bands at, or slightly following, the peak strength.
To simulate this failure behaviour using DEM methods, a 2D UDEC-GBM is utilized, in which the rock is represented as an assemblage of discrete blocks (Lorig and Cundall 1987; Kazerani and Zhao 2010). The randomly distributed block contacts are analogous to grain boundaries and/or micro-fracture contacts found within intact rock samples (Alzoubi 2012). Brittle failure is designated to initiate along these contacts when the applied stress exceeds either the tensile or shear strength of the boundary (Gao and Stead 2013). Using this approach, brittle failure begins as small micro-fractures form along block contacts, which gradually coalesce into macro-scale tensile cracks and/or shear bands (Alzoubi 2009). Material properties are designated through the assignment of normal and shear stiffness, cohesion, friction and tensile strengths to block contacts, which represent the inter-granular rock mass strength, or micro-scale, properties (Kazerani et al. 2012). Based on these micro-properties and the


shape, size and arrangement of blocks, the material will exhibit a large-scale behaviour
that can be described by equivalent macro-scale properties. Since differences exist
between the micro- and macro-scale properties, a calibration must be conducted prior to
forward analysis, so that the sample exhibits the correct macro-scale behaviour (Gao
2013).

5.4.3. Model Construction
A 2D triaxial test sample was created within UDEC to test both intact rock and rock mass behaviour (Figure 5.5). The model was 2 m high and 1 m wide, with a 0.1 m platen on either end. Block shapes were constrained using the mesh generation routine described in Section 5.4.1. This produced triangular block geometries similar to those integrated into the UDEC Trigon method proposed by Gao (2013). An average block area of 6.4 × 10⁻³ m² was used throughout the model. Block geometries were generated within an independent C++ software package and imported into UDEC using an integrated FISH function. Each block was overlain with a finite difference grid and allowed to behave as an isotropic, Mohr-Coulomb material, facilitating intra-block ductile failure.
Brittle fracture was allowed to develop along block contacts through either shear and/or tensile failure. Material properties for the contacts were assigned using a Coulomb slip model with residual strength, with properties downgraded from peak to residual values using a post-peak brittle response. Peak strengths were assigned through a calibration process to match the micro-scale properties to the macro-scale behaviour observed from triaxial and Brazilian indirect tensile laboratory testing of the Darai Limestone. Residual values were assigned based on joint shear test results.
In subsequent SRM models, DFNs were generated prior to block tessellation, using the methodology described in Figure 5.2. Triangular tessellation was then conducted with prior knowledge of the DFN arrangement, allowing triangulations to conform to the fracture geometry (Figure 5.5). Fracture surface contact properties were constrained using laboratory joint shear test results on the Darai Limestone. All joints were assumed to have the same properties regardless of orientation.


The stress-strain response curves of the compressive test simulations were obtained by assigning 100 evenly spaced history points along the top of the samples (Figure 5.5). Vertical stress and displacement results were collected at each point based on a nearest grid point analysis every 2,000 steps. Axial stress and displacement measurements were then averaged across all history points to give an indication of the overall model response.

Figure 5.5    UDEC-GBM model configuration for intact rock and synthetic rock mass simulations. The 2.0 × 1.0 m samples show the intact rock, fractures, block contacts, finite-difference grid and history points.

In order to achieve reasonable model results, the loading rate of UDEC-GBMs must be sufficiently slow, and the applied damping high enough, to ensure the simulation remains at quasi-static equilibrium. To satisfy this requirement, a constant loading rate of 1 × 10⁻³ metres per model second was set for all models, which is equivalent to 5 × 10⁻⁹ percent model compression per step. This is 125 times slower than the rate utilized by Kazerani and Zhao (2010), and should prevent the development of unstable material responses during the simulations. Model run times at this loading rate were approximately 14-24 hours, depending on the degree of confinement, for a 3.4 GHz PC with 22 GB of RAM.
Peak strength contact behaviour was monitored using a FISH routine based on a modification of the damage algorithm proposed by Gao et al. (2014a). The routine works by constructing an array of all block contacts present within the model and monitoring the shear and tensile stresses at these contacts during each model step. Contacts were flagged as initially failing under either shear or tension based on the mode of failure at the peak contact strength. Once a contact's failure mode was flagged, it was removed from the fracture array. The failure type was then recorded in a table, along with the average axial stress and displacement measured across all history point locations (Figure 5.5).
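A minimal sketch of this monitoring logic, with contacts represented as plain dictionaries rather than UDEC/FISH objects (an illustrative stand-in, not the actual routine), might look like:

```python
# Hedged sketch of the contact damage-tracking logic described above
# (after Gao et al. 2014a). Contact records and the failure test are
# illustrative stand-ins for the UDEC/FISH implementation.

def update_damage(contacts, damage_log, axial_stress, axial_disp):
    """Flag newly failed contacts, record the mode, and retire them."""
    still_intact = []
    for c in contacts:
        if c["shear_stress"] >= c["shear_strength"]:
            damage_log.append(("shear", axial_stress, axial_disp))
        elif c["tensile_stress"] >= c["tensile_strength"]:
            damage_log.append(("tension", axial_stress, axial_disp))
        else:
            still_intact.append(c)  # keep monitoring unfailed contacts
    return still_intact
```

Calling this once per monitoring interval reproduces the behaviour described in the text: each contact is logged once, at its first (peak strength) failure, together with the averaged axial stress and displacement.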

5.5. Calibration

5.5.1. Calibration Procedure
Calibration of the intact rock UDEC-GBMs required a multi-stage approach to calibrate the micro-scale discontinuity stiffness, cohesion, friction angle and tensile strength, such that the appropriate macro-scale properties were reproduced. This includes calibration to satisfy the macro-scale Young's modulus, Poisson's ratio, tensile strength, internal friction angle and internal cohesion obtained from laboratory testing. The calibration process is required because variability in the arrangement, size and shape of the discrete blocks affects how the micro-scale properties are represented at the macro-scale.


The calibration process used in this study is based on work by Kazerani and Zhao (2010) and involves a five-step procedure:
1. Particle sizes should be generated based on the grain size distribution within intact rock samples. However, in large-scale problems this may be impractical due to computational limitations. Therefore, particle size distributions should be chosen based on a trade-off analysis between the run time and model refinement requirements (Gao 2013). Particle size should be sufficiently small such that macro-scale brittle fracture coalescence is independent of mesh geometries (Gao and Stead 2014).
2. The Poisson's ratio within UDEC-GBMs is based on the contact stiffness ratio (ks/kn) and the elastic properties of the deformable blocks. Kazerani and Zhao (2010) found that in rigid block systems, the contact stiffness ratio is equivalent to the shear modulus to Young's modulus (G/E) ratio. The authors recommended that the G/E ratio should be between 0.35 and 0.50, reflecting a Poisson's ratio between 0.2 and 0.5.
3. Once the contact stiffness ratio has been set, both the normal (kn) and shear (ks) stiffness are calibrated to fit the Young's modulus. Initial normal stiffness estimates were calculated from (Itasca 2014):

kn = n (K + 4G/3) / ΔZmin        Equation 5.1

where K is the bulk modulus, G is the shear modulus, ΔZmin is the minimum element length, and n is a user-defined constant which varies between 1 and 10.
4. Contact strength properties are then initially calibrated, such that the desired macro-scale behaviour is represented in the material stress-strain response. This involves three subsets: first the contact cohesion, then the friction angle, followed by the tensile strength.


5. The final step involves refinement of the contact strength properties. This is required due to the inter-connected nature of the properties, which results in a slight change in one parameter as another is refined (Gao 2013).
During calibration, the strength properties for both the intra-block material and the block contacts are kept constant in order to prevent preferential failure within either medium.
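The initial stiffness estimate in step 3 (Equation 5.1) reduces to a one-line calculation; the input values used below are illustrative, not the Darai Limestone properties:

```python
# Sketch of the initial normal stiffness estimate from Equation 5.1
# (Itasca 2014): kn = n * (K + 4G/3) / dz_min, with n between 1 and 10.

def initial_normal_stiffness(K, G, dz_min, n=10):
    """Return kn (Pa/m) for bulk modulus K (Pa), shear modulus G (Pa),
    and minimum element edge length dz_min (m)."""
    assert 1 <= n <= 10, "n is a user-defined constant between 1 and 10"
    return n * (K + 4.0 * G / 3.0) / dz_min
```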

5.5.2. Calibrated Micro-Properties
Ideally, block size distributions should be chosen such that they reflect the grain size distributions within the actual modelled samples (Gao and Stead 2013). While this is desirable, simulations were conducted on a 2.0 × 1.0 m sample, preventing sufficient refinement of block sizes due to computational limitations. As a result, an average block size of 6 × 10⁻³ m² was chosen, as it represented the best trade-off between computational efficiency and model refinement. This mesh density reflects a distribution of approximately 3,100 discrete blocks within the sample, which is a near four-fold increase over the recommendations of Kazerani and Zhao (2010).
The contact stiffness ratio was estimated directly from the shear to Young's modulus ratio (0.4), based on recommendations by Kazerani and Zhao (2010) and Gao (2013). Following definition of the stiffness ratio, the Young's modulus was calibrated from the elastic responses observed during compressional testing. Results suggest that a logarithmic relationship exists between the macro-scale modulus and the micro-scale input attributes, with calibrated normal and shear stiffness micro-properties found to be 3.5 × 10¹³ and 1.4 × 10¹³ Pa/m, respectively.
The calibration of the shear strength parameters is paramount for compressional testing, as under conventional, homogeneous and isotropic settings, the sample strength is directly proportional to the shear strength of the sample (Wyllie and Mah 2004). To derive the calibrated macro-scale cohesive and frictional properties, a series of compressional tests was carried out at different confinements. Confining pressures of 0.0 MPa and 1.0 MPa were chosen to ensure good agreement between the micro-scale and macro-scale response at low confinement, as later simulations were interested in the micro-scale failure behaviour under uniaxial compressive test conditions.

Strength envelopes were then derived through linear regression using the equations (Kovari et al. 1983):

φ = arcsin[(m − 1)/(m + 1)]        Equation 5.2

c = b (1 − sin φ) / (2 cos φ)        Equation 5.3

where φ is the friction angle, c is the cohesion, and m and b are the slope and intercept obtained from linear regression of a peak strength vs. confining stress plot. The initial calibration was conducted by keeping one parameter constant while the other was varied, until the required macro-scale behaviour was achieved. Results then required refinement to achieve appropriate micro-properties, due to the inter-dependencies between the shear strength parameters. Final calibration results suggest that a micro-scale cohesion of 14.8 MPa and friction angle of 48.2° are required to achieve the desired macro-scale behaviour.
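Equations 5.2 and 5.3 can be applied to a set of confined test results via an ordinary least-squares fit of peak axial strength against confining stress; the sketch below is a generic reconstruction of that procedure, not the calibration code used in the study:

```python
import math

# Sketch of the strength-envelope derivation (Kovari et al. 1983):
# regress peak axial strength against confining stress
# (sigma1 = m*sigma3 + b), then recover friction angle and cohesion
# from Equations 5.2 and 5.3.

def shear_strength_from_tests(sigma3, sigma1_peak):
    """Return (friction angle in degrees, cohesion) from test pairs."""
    n = len(sigma3)
    mean_x = sum(sigma3) / n
    mean_y = sum(sigma1_peak) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(sigma3, sigma1_peak))
         / sum((x - mean_x) ** 2 for x in sigma3))
    b = mean_y - m * mean_x
    phi = math.degrees(math.asin((m - 1.0) / (m + 1.0)))        # Equation 5.2
    phi_rad = math.radians(phi)
    c = b * (1.0 - math.sin(phi_rad)) / (2.0 * math.cos(phi_rad))  # Equation 5.3
    return phi, c
```

Feeding the function synthetic data generated from a known envelope recovers the input friction angle and cohesion exactly, which is a useful self-check before applying it to simulation output.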
Calibration of the tensile properties is required because, although compression tests are predominantly controlled by shear strength criteria, micro-fracturing at the grain scale can be an important contributor to overall rock mass strength, especially at low confinement (Tang and Hudson 2011). This is due to the increase in tensile stresses from local bending moments around sample heterogeneities and anisotropies. Gao (2013) demonstrated this dependency in UDEC-GBMs, as the tensile strength was shown to affect both the peak strength and the post-peak macro-scale behaviour. Tensile strength calibration was conducted using a Brazilian test methodology, which provides an indirect estimate of the macro-scale tensile strength (de Vallejo and Ferrer 2011). Tests were conducted on a 1.0 m wide sample with the same block size and shape as the compressional tests (Figure 5.6). History points were placed along the middle of the upper platen and monitored the total normal force applied (Figure 5.6). The tensile strength was then estimated using the equation:

σt = Fmax / (π r)        Equation 5.4

where Fmax is the maximum force applied to the model at the point of failure, and r is the radius of the sample. The calibration process suggests a micro-scale tensile strength of 11.8 MPa is required to replicate the macro-scale tensile strength of 5.1 MPa. This micro-scale tensile strength exceeds the Mohr-Coulomb tensile cut-off estimated from the micro-scale shear strength parameters. As a result, a limit of 11.3 MPa was selected for the tensile strength, which is representative of a macro-scale strength of 4.9 MPa.

Figure 5.6    UDEC-GBM model configuration for tensile calibration using the Brazilian test methodology. The 1.0 m wide sample shows the block contacts, finite-difference grid and history points.
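Assuming the 2D, unit-thickness form of Equation 5.4 (an interpretation consistent with the 2D UDEC samples used here), the tensile strength estimate reduces to a one-line calculation:

```python
import math

# Sketch of the indirect tensile strength estimate from Equation 5.4,
# assuming a 2D, unit-thickness Brazilian sample of radius r.

def brazilian_tensile_strength(f_max, radius):
    """sigma_t = Fmax / (pi * r), per unit model thickness."""
    return f_max / (math.pi * radius)
```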


5.6. Results

5.6.1. Calibration Uncertainty
A series of UDEC-GBM simulations was conducted in order to verify the performance of the calibrated UDEC-GBM models in forward analysis and to characterize any uncertainty associated with them. This involved the generation of 30 UDEC-GBMs with a constant element size and shape, but with a different stochastic arrangement of triangular blocks. Block arrangements were generated stochastically using the C++ software described in Section 5.4.1. All other properties were kept constant throughout the simulations. Simulations were conducted using the same model geometry as that used in the calibration (2.0 × 1.0 m 2D triaxial test sample, with an average block size of 6 × 10⁻³ m²). To derive the macro-scale cohesive and frictional properties, simulations were carried out at a series of different confinements for each of the 30 UDEC-GBMs. Confining pressures of 0.0, 1.0, 2.0, 3.0 and 4.0 MPa were chosen to ensure good compliance with the model calibration, which was conducted at low confinement values. In total, 150 simulations were conducted. Strength envelopes were then derived through linear regression using Equations 5.2 and 5.3 (Kovari et al. 1983).


Simulation results indicate an overall poor reproducibility of desired macroproperties from the previously calibrated models during forward analysis, as evident by a
coefficient of variation (CoV) of 6.0 and 4.7% for the cohesion and friction angle,
respectively. A large discrepancy was also observed between the calibrated, macroscale shear strength properties ( = 45.0o and c = 8.3 MPa) and mean forward analysis
results ( = 41.3o and c = 9.5 MPa). In comparison to shear strength parameters, peak
load results show a reduced calibration error, with an average CoV of 3.2%.

This

reduction is attributed to the underlying dependency between the macro-scale, cohesive


and frictional properties of a UDEC-GBM.

Examination of correlation coefficients

between the macro-scale, cohesion and friction angle suggest a strong, negative
relationship exists (r = -0.90; Figure 5.8a).
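The CoV and correlation statistics quoted above follow the standard definitions (CoV = standard deviation/mean; Pearson r); a minimal sketch of how they would be computed from the 30 realization results:

```python
import math

# Standard uncertainty statistics used throughout this section:
# coefficient of variation (sample standard deviation / mean) and the
# Pearson correlation between two macro-scale attributes.

def cov(values):
    """Coefficient of variation using the sample standard deviation."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return std / mean

def pearson_r(x, y):
    """Pearson correlation coefficient between paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den
```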


Figure 5.7    Calibration uncertainty in the macro-scale shear strength parameters for the Darai Limestone sample, shown as relative frequency histograms of the simulation results with fitted Gaussian models. Results suggest coefficients of variation of 6.0 and 4.7% for the cohesion (a) and friction angle (b), respectively.

Exploration into possible correlations between the degree of uncertainty and the confining stress conditions indicates very little variation in the peak strength uncertainty within the range of modelled confinements (Figure 5.8b). In addition, comparison between the crack initiation and peak strength thresholds indicates a higher degree of variability in the initiation threshold (CoV = 5.1%). The crack initiation stress was also found to be extremely high in comparison to the peak UCS results, with an average initiation ratio (σci/σUCS) of 0.82.

Discrepancies between the micro- and macro-scale behaviour of the UDEC-GBMs can be attributed to the stochastic nature of triangular block generation. More specifically, the behaviour is controlled by the distribution of, and concentration of failure within, high angle contacts (Figure 5.9). This predisposition towards failure in high angle contacts is the result of the underlying tensile and shear failure mechanics. Tensile cracking is theorized to concentrate sub-parallel to the major principal stress direction (90°). Simulation results suggest an average pre-peak tensile crack orientation of 86.4°, with a CoV of 27.8%, within the UCS simulations. The large CoV is the result of the limited number of tensile failures within the UDEC-GBM simulations (average number of tensile fractures = 1.7). In comparison, shear damage is thought to coincide with the idealized inclination of the shear plane (β), given by (Jaeger et al. 2007):

β = 45° + φ/2        Equation 5.5

where φ is the micro-scale friction angle (48.2°) and β is the angle between the idealized shear plane and the minimum principal stress (σ3). In the case of the Darai Limestone, the idealized inclination of shear (β) is 69.1°, which coincides with the simulation results: UCS simulations indicated an average pre-peak shear failure angle of 72.2°, with a CoV of 2.6%. A comparison of the percentage of tensile vs. shear cracking indicates that the model fails preferentially through shear, with 98.6% of pre-peak damage due to this mechanism. These results are consistent with Gao (2013), who observed that UDEC-GBMs fail predominantly through shear when utilizing triangular elements.
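Equation 5.5 can be checked directly for the calibrated micro-scale friction angle (48.2° gives β = 69.1°):

```python
# Quick check of Equation 5.5: the idealized inclination of the shear
# plane (relative to the minimum principal stress) is beta = 45 + phi/2.

def idealized_shear_inclination(phi_deg):
    """Angle (degrees) between the idealized shear plane and sigma_3."""
    return 45.0 + phi_deg / 2.0
```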

Figure 5.8    (a) Co-dependencies are observed between the macro-scale cohesion and friction angle (correlation coefficient = -0.90), explaining the reduction in peak strength vs. macro-scale attribute variation. (b) The peak strength coefficient of variation is demonstrated to be relatively insensitive to the confining stress, with an average value of 3.0%.

To further demonstrate the preferential shear mechanisms within UDEC-GBMs with triangular mesh elements, a second set of simulations was conducted with the peak tensile cut-off reduced by 50% (σt = 5.65 MPa). This resulted in an increase in the percentage of tensile micro-cracking from 1.4% to 10.5%; however, the failure mechanism was still dominated by shear behaviour. The increased amount of tensile fracturing was also found to reduce the discrepancy between the measured and theoretical pre-peak crack orientations, with the average angle found to be 89.3°.

Figure 5.9    Brittle fracture development within a UDEC-GBM UCS simulation with triangular mesh geometry (vertical stress shown in MPa; new fractures indicated). Fractures are found to concentrate within high angle contacts.

Uncertainty in the macro-scale elastic properties of the UDEC-GBMs was found to be greatly reduced compared to the shear and peak strength attributes (CoV = 0.08%). The reduction is attributed to the increased spatial averaging of the elastic attributes: the macro-scale elastic behaviour is the result of the combined deformable properties of all elements and contacts. In comparison, the macro-scale shear and peak strength attributes display a reduced level of spatial aggregation, as deformation becomes concentrated in a limited number of pre-peak failed contacts⁶ (Figure 5.9).

5.6.2. Synthetic Rock Mass Models
Uncertainty arising from the stochastic nature of DFN generation has been noted by a number of authors as a primary source of error within geomechanical models (Olofsson and Fredriksson 2005; Bagheri 2009; Elmouttie and Poropat 2011). However, no studies have explored how much uncertainty this imparts in comparison to other sources, such as mesh dependency. To characterize this relationship, a series of DFNs was constructed and integrated into the UDEC-GBMs (Figure 5.5). Uncertainties in the peak and shear strength attributes were then compared with those from the triangular, intact rock UDEC-GBM simulations. This process employed the same Monte Carlo generation technique, with 30 UDEC-GBMs constructed using constant DFN attribute statistics but independent stochastic fracture and block realizations. All other properties were kept the same as in the triangular, intact rock UDEC-GBMs described in the previous section. Shear strength attributes were estimated by subjecting the 30 UDEC-GBMs to a series of tests at different confining conditions. This included simulation at 0.0, 2.0 and 4.0 MPa, to ensure good compliance with the low confinements used for model calibration. In total, 90 coupled DFN/UDEC-GBM simulations were conducted.
Inclusion of DFNs within the UDEC-GBMs resulted in an overall increase in the level of uncertainty, with the average CoV of the peak strength increasing to 10.7%. This represents a near three-fold increase in the uncertainty, suggesting that variation between DFN realizations plays an important role in the overall uncertainty within UDEC-GBMs. A similar uncertainty increase was also observed in the macro-scale cohesion, with a CoV of 12.8%; whereas the friction angle did not display a noticeable increase in CoV (4.7%; Figure 5.10). The behaviour also coincides with a reduction in the co-dependency structure between the friction angle and cohesion (r = -0.27; Figure 5.11).
6  The average number of pre-peak contact failures was found to be 110, but this value varied greatly, with a CoV of 58.8%.
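The uncertainty measures quoted above, the coefficient of variation and the cohesion/friction correlation coefficient, are straightforward to compute from a set of Monte Carlo realizations. The sketch below is illustrative only; the array values are hypothetical stand-ins, not thesis results.

```python
import numpy as np

def cov_percent(samples):
    """Coefficient of variation, CoV = sample std / mean, as a percentage."""
    s = np.asarray(samples, dtype=float)
    return 100.0 * s.std(ddof=1) / s.mean()

def pearson_r(x, y):
    """Pearson correlation coefficient between two macro-scale attributes."""
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical peak-strength realizations from 5 Monte Carlo runs (MPa)
peak = [58.1, 61.4, 59.8, 63.0, 60.2]
print(round(cov_percent(peak), 2))  # prints 3.03
```

In practice each realization would come from one stochastic mesh (and, for the SRM case, one DFN realization), with the CoV computed across the full set of runs.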


Figure 5.10  Simulations suggest an increased degree of uncertainty in the UDEC-GBMs once DFNs are incorporated (Figure 5.5). CoV values vary greatly between the cohesion (12.8%) and friction angle (4.7%). [a. Histogram of macro-scale cohesion (MPa); b. histogram of macro-scale friction angle (°); simulation results with fitted Gaussian models.]

Figure 5.11  Inclusion of discrete fractures into the UDEC-GBMs results in a reduction in the correlation coefficient between the macro-scale cohesion and friction angle. [Scatter plot of macro-scale cohesion (MPa) vs. friction angle (°); correlation coefficient = -0.27.]

The inclusion of discrete fractures was found to change the overall failure mechanics of the UDEC-GBMs. An examination of the average crack initiation strength to UCS ratio (σci/σUCS) showed a decrease from 0.82 in the intact rock simulations to 0.48 in the DFN simulations. A similar change was observed in the type of micro-damage, with the percentage of tensile micro-cracking increasing from 1.4% to 13.1%. This suggests that the degree of tensile damage is sensitive to pre-existing fracture heterogeneities, with tensile micro-cracking concentrating near fracture tips (Figure 5.12). The observed micro-mechanical behaviour is consistent with fracture mechanics research, which suggests that fracture propagation is controlled by the fracture toughness and stress intensity factor at pre-existing crack tips (Lajtai 1968; Singh and Sun 1990).
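The crack-tip control described above can be illustrated with the textbook mode-I stress intensity relation for a through crack under remote tension, K_I = σ√(πa). This is a standard fracture mechanics result used here for illustration only, not a calculation performed in the thesis, and the input values are hypothetical.

```python
import math

def mode_i_sif(sigma_mpa, half_length_m):
    """Mode-I stress intensity factor for a through crack of half-length a
    in an infinite plate under remote tension: K_I = sigma * sqrt(pi * a).
    With sigma in MPa and a in m, K_I is in MPa*sqrt(m)."""
    return sigma_mpa * math.sqrt(math.pi * half_length_m)

def propagates(sigma_mpa, half_length_m, k_ic):
    """A crack is taken to extend when K_I reaches the fracture toughness K_Ic."""
    return mode_i_sif(sigma_mpa, half_length_m) >= k_ic

# Hypothetical: 10 mm half-length crack, 20 MPa remote tension, K_Ic = 1.5
print(propagates(20.0, 0.010, 1.5))  # prints True
```

The same criterion explains the near-instantaneous propagation noted by Kemeny and Cook (1986): once K_I exceeds K_Ic, further extension only increases K_I.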

Figure 5.12  Brittle fracture development within DFN UDEC-GBM simulations under UCS conditions. Fracture development is concentrated at fracture tips within UDEC-GBM SRM simulations as wing cracks. [Model image: vertical stress field (-10 to 150 MPa); existing and new fractures marked.]


5.6.3. Triangular vs. Voronoi Mesh Geometries


Recent interest in alternative mesh geometries has led to the introduction of triangular DEM blocks, which have been incorporated into the recently released UDEC 6.0 (Kazerani et al. 2012; Gao 2013; Gao and Stead 2014; Gao et al. 2014a, 2014b; Kazerani 2013; Kazerani and Zhao 2014; Itasca 2014). In order to compare the effects of these triangular mesh geometries with traditional Voronoi blocks, a series of 28 Voronoi UDEC-GBMs was simulated. Models utilized calibrated micro-properties from the triangular mesh calibration to ensure similar micro-scale contact behaviour. Random block arrangements were generated for each of the simulations using the integrated Voronoi mesh generator within UDEC. Simulation geometry was the same as the triangular models, and utilized a uniaxial compressive test methodology with no confinement (Figure 5.5; Figure 5.13).

Figure 5.13  UDEC-GBM model configuration for Voronoi mesh simulations. [Diagram: 2.0 m × 1.0 m sample showing block contacts, finite-difference grid, and history points.]

128

The shift from triangular to Voronoi block geometries resulted in a 45.1% increase in the UCS (41.8 vs. 60.7 MPa). Gao (2013) attributed this change to a larger macro-scale friction, associated with an increase in block interlocking as the Voronoi blocks are forced to rotate past one another. A comparison of the UCS coefficient of variation for the Voronoi and triangular mesh models indicates no statistically significant difference in the peak strength variability (3.2 vs. 3.6%; Figure 5.14). This suggests that underlying calibration uncertainties are relatively independent of mesh shape and are an inherent aspect of the system, associated with the stochastic mesh generation process.
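A difference in variability between two Monte Carlo samples, such as the 3.2 vs. 3.6% CoV figures above, can be screened with a simple permutation test. The sketch below is a generic illustration; the resampling scheme is an assumption for demonstration purposes and is not the statistical procedure used in the thesis.

```python
import numpy as np

def perm_test_cov(a, b, n_perm=2000, seed=0):
    """Approximate two-sided permutation test for a difference in CoV
    between two Monte Carlo samples (e.g. Voronoi vs. triangular UCS
    realizations). Each sample is first rescaled by its own mean, so a
    CoV difference becomes a difference in standard deviation of mean-1
    data; the returned value is the fraction of shufflings producing a
    difference at least as large as the observed one."""
    rng = np.random.default_rng(seed)
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a, b = a / a.mean(), b / b.mean()
    observed = abs(a.std(ddof=1) - b.std(ddof=1))
    pool = np.concatenate([a, b])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pool)
        diff = abs(pool[:a.size].std(ddof=1) - pool[a.size:].std(ddof=1))
        if diff >= observed:
            hits += 1
    return hits / n_perm
```

A large p-value (say, above 0.05) would be consistent with the conclusion above that the two mesh types share the same underlying calibration variability.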

Figure 5.14  Calibration uncertainty in peak UCS strength for Voronoi mesh simulations. [Histogram of uniaxial compressive strength (MPa): simulation results with fitted Gaussian model.]

A comparison of the percentage of tensile micro-cracking indicates an increase from 1.4% to 16.6% in the Voronoi block simulations compared to the triangular mesh models. Such behaviour suggests that the micro-scale failure mechanics are highly dependent on the mesh geometry. A similar change was observed in the initiation strength to UCS ratio (σci/σUCS), which decreased from 0.82 to 0.23. These results are consistent with the work of Nicksiar and Martin (2013), who found that failure within Voronoi UDEC-GBMs is controlled by tensile failure mechanics.


Figure 5.15  Brittle fracture development within UDEC-GBM UCS simulation with Voronoi mesh geometry. An increased degree of dispersed, high angle fractures is observed compared to triangular mesh models (Figure 5.9). [Model image: vertical stress field (-10 to 150 MPa); new fractures marked.]

5.7. Discussion

5.7.1. Calibration Potential of UDEC-GBMs


Discrepancies between the micro- and macro-scale properties of UDEC-GBMs have led researchers to develop calibration procedures to match the micro-scale attributes to macro-scale behaviour during back-analysis (Kazerani and Zhao 2010; Gao 2013; Gao and Stead 2014). However, aleatoric uncertainties exist in calibrated UDEC-GBMs as a result of the inherent randomness of the element generation process. While preliminary estimates of the degree of this uncertainty have been made by previous researchers, the estimates exhibit large standard errors due to limited sample sizes (Kazerani et al. 2012; Kazerani 2013; Kazerani and Zhao 2014). Simulations conducted within this study aimed to refine these estimates, with the CoV of the cohesion and friction angle found to be 6.0 and 4.7%, respectively. This is within the range of previous research, which suggests estimates between 1.5 and 15.0% (Kazerani et al. 2012; Kazerani 2013; Kazerani and Zhao 2014). Comparisons between Voronoi and triangular mesh geometries also suggest that these uncertainties are an inherent property of the system, and originate regardless of the underlying mesh shape.
Previous researchers have noted that these discrepancies are not an inherent disadvantage of the method, and can be equated to the spatial heterogeneity found within intact rock samples (Lan et al. 2010; Kazerani et al. 2012; Kazerani 2013; Kazerani and Zhao 2014; Nicksiar and Martin 2014). To facilitate this connection, studies have attempted to correlate the average grain and element size (Kazerani and Zhao 2010; Alzoubi 2012; Gao 2013; Gao and Stead 2014). However, UDEC-GBM element generation typically does not take into consideration the underlying spatial structure of the grains, nor their shape, despite thin-section analysis suggesting that grain distributions display spatially heterogeneous behaviour (Oren and Bakke 2003; Okabe and Blunt 2005; Okabe and Blunt 2007; Politis et al. 2008; Méndez-Venegas and Díaz-Viera 2014). The incorporation of such heterogeneities has been shown to cause increased asymmetry in the strain distribution, resulting in alteration of the macro-scale output behaviour (Cho et al. 2007; Damjanac et al. 2007; Lorig 2009; Jefferies et al. 2008; Lan et al. 2010; Srivastava 2012; Nicksiar and Martin 2014). This strain accumulation results in a fundamental change in the output uncertainty due to spatial data aggregation issues (Gehlke and Biehl 1934; Isaaks and Srivastava 1989; Deutsch 2002; Haining 2003). Namely, macro-scale behaviour becomes preferentially controlled by the weakest areas of the simulation, as opposed to the model area as a whole (Sánchez-Vila et al. 1996; Mayer et al. 2014b). Such asymmetrical behaviour presents an underlying issue: without accurate simulation of the spatial structure, the macro-scale output uncertainty is simply a reflection of the underlying mesh dependency, and not the grain-scale uncertainty as previous researchers have tried to argue. Future studies should aim to limit this dependency through the accurate simulation of the grain shape and spatial structure, in addition to the grain size.
In addition to mesh dependency errors, uncertainty exists in the tensile strength calibration, as the calibrated micro-scale value exceeded the theoretical Mohr-Coulomb limit imposed by the shear strength attributes. Similar issues are observed in the calibration results of Kazerani et al. (2011). This issue may be the result of the difficulty in reproducing laboratory tensile tests within DEM models. Kemeny and Cook (1986) observed that within tensile tests, sample failure coincides with the crack initiation stress due to the near instantaneous crack propagation associated with stress concentrations at fracture tips. Diederichs et al. (2007a) noted the difficulty in reproducing this behaviour within DEM models, due to the inability to directly simulate crack propagation. In comparison, compressional tests are far easier to simulate, as their failure mechanisms are controlled by the accumulation of micro-scale damage, as opposed to propagation of a single crack (Diederichs et al. 2004). The difficulty in directly simulating the underlying tensile failure mechanism is an inherent limitation of all DEM modelling, and imparts a degree of uncertainty into the micro-scale calibration procedure. Its presence needs to be clearly understood as a potential limitation requiring future study.
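The Mohr-Coulomb limit referred to above can be taken as the apex of the linear envelope, where the maximum theoretical tensile strength is t_max = c / tan(φ). The sketch below computes this limit and the corresponding tensile cut-off percentage, assuming the standard apex formula applies; the input values are hypothetical, not calibrated thesis properties.

```python
import math

def mc_tensile_limit(cohesion_mpa, phi_deg):
    """Maximum theoretical Mohr-Coulomb tensile strength:
    the apex of the linear envelope, t_max = c / tan(phi)."""
    return cohesion_mpa / math.tan(math.radians(phi_deg))

def tensile_cutoff_pct(t_assigned_mpa, cohesion_mpa, phi_deg):
    """Assigned tensile strength expressed as a percentage of the
    theoretical Mohr-Coulomb maximum."""
    return 100.0 * t_assigned_mpa / mc_tensile_limit(cohesion_mpa, phi_deg)

# Hypothetical micro-properties: c = 10 MPa, phi = 45 deg, t = 5 MPa
print(round(tensile_cutoff_pct(5.0, 10.0, 45.0), 1))  # prints 50.0
```

A calibrated micro-scale tensile strength that yields a percentage above 100 would flag exactly the inconsistency with the shear strength attributes noted in the text.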

5.7.2. Contact Failure Mechanisms


Despite the wide-spread use of linear-elastic theory in rock mechanics, intact rock samples typically display a distinct non-linear, pre-peak stress-strain response curve (Jaeger et al. 2007). Acoustic emission monitoring of unconfined compressive tests has shown that this behaviour is the result of two major damage thresholds, which are encountered prior to the peak strength (Eberhardt 1998; Eberhardt et al. 1998; Diederichs et al. 2004). These include the initiation of tensile failure at 30 to 60% of the UCS, and a crack damage threshold, characterized by the onset of micro-damage coalescence (Brace et al. 1966; Fonseka et al. 1985; Pelli et al. 1991; Martin 1994; Castro et al. 1995; Martin et al. 1999; Diederichs 2003). Diederichs and Kaiser (1999) and Diederichs et al. (2007a) were able to demonstrate this behaviour using the DEM code PFC2D (Itasca 2008). Crack initiation was found to coincide with the onset of tensile micro-crack development, and crack damage with an acceleration in micro-crack coalescence.

Gao and Stead (2014) and Nicksiar and Martin (2013) were able to replicate the crack initiation and damage thresholds using UDEC-GBMs; however, the pre-peak, micro-scale failure mechanics were found to differ between the studies. Nicksiar and Martin (2013) utilized a Voronoi mesh geometry and found that results were similar to PFC, with the damage initiation threshold dominated by tensile failure (Diederichs 2000; Diederichs et al. 2007a). Shear induced micro-cracking was found to increase near the damage accumulation threshold, with the final peak failure behaviour controlled by a combination of shear and tensile failure. Back analysis of Nicksiar and Martin's (2013) results suggests a percentage of pre-peak tensile micro-cracking of approximately 48 to 65%, despite a relatively high tensile strength (tensile cut-off percentage = 73.5%7).

Gao and Stead (2014) found that the failure mechanism was dominated by shear induced failure when using a triangular mesh geometry, despite a relatively low tensile strength (tensile cut-off percentage = 28.6%). Estimation of the pre-peak tensile micro-crack percentage suggests a value of only 14%. This behaviour was attributed to two possible mechanisms. First, models were composed of an assortment of bonded blocks with no inherent porosity, which limits the kinematic freedom of elements within UDEC-GBMs. In contrast, PFC models exhibit an inherent porosity due to the packing arrangement of circular elements, which can lead to the development of tensile stresses, causing tensile failure of the rock mass (Diederichs 2000; Diederichs et al. 2004). Gao (2013) demonstrated that a similar behaviour could be achieved within UDEC-GBMs through the inclusion of porosity within the geomechanical models. A second possible mechanism suggested is the potential for reduced kinematic freedom of elements within 2D UDEC-GBM models, due to restrictions in the out-of-plane strain. It was shown using 3DEC that the inclusion of a third dimension, and hence increased kinematic freedom, results in an increase in the pre-peak tensile micro-crack percentage.
Although the two proposed mechanisms by Gao (2013) could contribute to a dominance of shear cracking within UDEC-GBMs, results from the research presented in this thesis suggest that differences between the amount of tensile and shear cracking simulated are predominantly the result of the assumed internal mesh geometry. This is clearly demonstrated by the increase in the percentage of tensile micro-cracking from 1.4% to 16.6% between the triangular and Voronoi block geometries. The importance of mesh dependency is suggested to be the result of variations in the kinematic freedom of triangular vs. Voronoi mesh. In the case of Voronoi block models, the increased sphericity and additional block roughness is likely to lead to increased locking-up of blocks. As a result, an increased degree of internal rotation and displacement is required to move blocks past one another. Such behaviour was observed by Gao (2013) and confirmed in this study by the 45.1% increase in the UCS between the Voronoi and triangular mesh models. An effect of this behaviour is the development of internal mesh wedging, which increases the degree of tensile failure. A conceptual example of this behaviour is shown in Figure 5.16, while model results displaying the increased degree of tensile failure are evident in Figure 5.9 and Figure 5.15. In the conceptual Voronoi example, tensile stresses develop between the central Voronoi blocks as they are displaced outward by the upper and lower blocks moving inward due to the major principal compressional stress.

7  The tensile cut-off percentage is calculated as the assigned tensile strength over the maximum theoretical Mohr-Coulomb value, based on the assigned friction and cohesion attributes.
In comparison, triangular mesh geometries display an increased degree of kinematic freedom, resulting in a reduction in the locking-up of blocks. The overall effect of this behaviour is a reduction in the internal mesh wedging, decreasing the amount of tensile failure. From the conceptual example provided in Figure 5.16, it can be seen that the increased kinematic freedom of the triangular mesh results in less block inter-locking. As a result, blocks can more easily slide past each other without the need for wedging and tensile stress development. The end result is a reduction in the amount of tensile failure, with the majority of damage occurring through shear mechanisms. This predisposition towards shear mechanics was confirmed by Gao (2013) and in this study through a reduction in the micro-crack tensile percentage from 16.6% to 1.4% between the Voronoi and triangular mesh models.


Figure 5.16  Wedging potential in UDEC models with Voronoi vs. triangular mesh geometries. Triangular mesh was shown to have a predisposition towards shear failure mechanisms, due to increased kinematic freedom. This was in contrast to the Voronoi mesh simulations, which displayed a dominance of tensile failure mechanisms. [UDEC simulation results and conceptual behaviour for each mesh type; legend: direction of block movement, shear contact failure, tensile contact failure.]


As discussed in the preceding section, previous research has attempted to equate the mesh dependencies with inherent spatial heterogeneities found within intact rock samples (Kazerani and Zhao 2010; Lan et al. 2010; Alzoubi 2012; Gao 2013; Kazerani et al. 2012; Kazerani 2013; Kazerani and Zhao 2014; Gao and Stead 2014; Nicksiar and Martin 2014). However, conventional research only considers the grain size and not the underlying spatial structure. In addition, the simulation of realistic grain shapes is often lacking, with only four viable options for DEM mesh generation available at present (Lan et al. 2010): spherical grains (Potyondy and Cundall 2004), square-shaped elements (Tang 1997), polygonal or Voronoi grains (Lorig and Cundall 1987), and triangular grains (Gao and Stead 2014). The lack of mesh variability presents a fundamental issue for DEM modelling, as the micro-scale failure behaviour of UDEC-GBMs has been shown to be strongly dependent on the mesh shape. Similar behaviour is observed within PFC models, which display shape dependency issues when unrealistic grain shapes are utilized (Diederichs 2000; Potyondy and Cundall 2004; Yoon 2007; Cho et al. 2007; Herbst et al. 2008; Li et al. 2008; Akram et al. 2011; Sakakibara et al. 2011). Failure to account for this can lead to poor reproducibility of grain-scale deformation mechanisms, as models become dependent on artificial mesh geometries as opposed to realistic grain shape geometry and size distributions. Examples of this behaviour include:

•  The over-homogenization of grain size distributions and failure to include micro-scale matrix grains, which can reduce the overall kinematic freedom of DEMs. The use of such methodologies can result in an inaccurate reproduction of micro-scale failure mechanisms, as strain is unable to realistically accumulate within the smaller scale matrix grains. As a result, models are predisposed towards increased micro-crack deformation to overcome the larger inter-block roughness.

•  The use of triangular mesh geometries, which may not be representative of natural grain shapes and may artificially increase kinematic freedoms within DEMs. This can lead to a predisposition towards grain boundary sliding mechanisms, under-representing block contact tensile failure.

•  The failure to properly model spatially heterogeneous behaviour using geostatistical techniques, such as sequential indicator simulation (SIS; Emery 2004). This is especially pronounced in metamorphic rocks, where samples display a predisposition towards slip along weakened schistose bands, or in sedimentary units, where failure may be preferential to weaker silt or mud layers (Passchier and Trouw 2005).
The inability to simulate these phenomena using conventional mesh generation techniques presents a fundamental area for future research into realistic mesh geometries. Voronoi/Trigon geometry can lead to significantly different failure mechanisms than those observed in actual rocks, as current mesh generation techniques are not representative of the true grain shapes and/or spatial structure.

5.8. Conclusions

The realistic simulation of brittle fracture is currently one of the most important issues in geomechanical simulation. The desire to simulate such behaviour has led to the development of a variety of numerical simulation codes, including the UDEC-GBM method, which simulates the finite displacement and rotation of discrete deformable and/or rigid blocks using block-contact constitutive models (Lorig and Cundall 1987; Jing 2003; Stead et al. 2006). The method has shown promise under back-analysis conditions, where micro-properties can be calibrated for specific mesh realizations (Kazerani and Zhao 2010; Lan et al. 2010; Gao 2013; Kazerani et al. 2012; Kazerani 2013; Kazerani and Zhao 2014; Gao and Stead 2014; Nicksiar and Martin 2014; Gao et al. 2014a, 2014b). However, to date, few studies have characterized the uncertainty associated with using these apparently calibrated models under forward-analysis conditions. This presents a fundamental issue, as results from this study show that important mesh dependency issues exist with the UDEC-GBM method.
Uncertainty analysis in this thesis suggests that an average peak strength uncertainty, measured as a coefficient of variation of between 3 and 4%, exists in calibrated UDEC-GBMs. In addition, uncertainties are further pronounced in the macro-scale cohesion and friction angle, due to internal co-dependencies between the attributes. Uncertainties originate from the underlying stochastic mesh generation processes and can be considered an irreducible aspect of the system. As a result, uncertainties cannot be eliminated provided stochastic mesh generation is used. This is a major impediment for future UDEC-GBM modelling, as it means that it is not possible to fully calibrate the system under forward-analysis conditions.
Important mesh dependencies were also observed between the mesh shape and micro-scale fracture mechanisms. Voronoi block models were found to have an increased predisposition towards tensile failure due to decreased kinematic block freedom. In comparison, the increased kinematic freedom associated with triangular mesh geometries was found to limit tensile fractures and predispose a model to shear failure. As a result, the realistic simulation of intact rock samples requires the accurate reproduction of natural grain shapes and textures, which is not currently well preserved in UDEC-GBMs.

Despite underlying mesh dependencies, the UDEC-GBM method remains an important area for future rigorous research, as irreducible or aleatoric uncertainties will always remain an inherent aspect of all numerical analysis. Such uncertainties can also arise from numerous other sources, including data input uncertainties and numerical instabilities. However, renewed and detailed research into the UDEC-GBM method is a relatively recent phenomenon. Further robust research into the advantages and limitations of the UDEC-GBM method is required, with constraint against laboratory and in-situ observational data.


6. Conclusions and Recommendations for Future Work

6.1. Conclusions
Uncertainty analysis remains at the forefront of geotechnical design, as the applied discipline remains a predominantly predictive science. Uncertainties arise from a number of sources, including inherent attribute variability, instrument and observation errors, algorithmic simplifications, and limited information (Palmström 1995; Read 2009; Read and Stacey 2009). The presence of these uncertainties forces engineers to incorporate uncertainty analysis into the geotechnical design process (Hammersley and Handscomb 1964; Beckman 1971; Rosenblueth 1975; Harr 1996; Duncan 2000; Wiles 2006; Nadim 2007). This allows for both the quantification and demonstration of project uncertainties to decision makers, which can then be incorporated into a proper risk analysis (Mazzoccola et al. 1997; Steffen 1997; Kong 2002; Robertson and Shaw 2003; Steffen and Contreras 2007).

Research has explored three separate applications of uncertainty theory to geotechnical design, in the areas of continuum, DFN, and discontinuum modelling. New approaches were applied to geotechnical design, including: the advent of the modified DFN algorithm, the application of mining geostatistical methods to the simulation of heterogeneity through sequential Gaussian simulation, estimation of failure size within continuum models using minimum distance analysis, and the introduction of a triangular mesh tessellation routine to embed DFNs into UDEC-GBMs. In addition, the adverse effects of ignoring uncertainties in geotechnical design were explored and clearly demonstrated, including the inability of conventional continuum models to capture accurate SRF/FOS distributions, and the quantification of inherent mesh dependencies in UDEC-GBMs. The following sections provide an overview of the three areas of uncertainty analysis explored in the thesis.


6.1.1. Adverse Effects of Heterogeneity on Model Prediction


Spatial heterogeneity is an inherent aspect of natural rock formations due to their complex formation and tectonic histories, resulting in differential failure behaviour across a study site. Despite this phenomenon, conventional geotechnical slope design practice continues to subdivide a study site into a series of discrete geotechnical units, each conceptualized by spatially constant random variables. This simplification ignores the inherent spatial variability of geotechnical systems, and has been shown to lead to overly conservative design practices, due to an over-estimation of the probability of failure (Griffiths and Fenton 2000; Hicks and Samy 2002). In opposition to conventional practices, heterogeneities can be accounted for through the explicit simulation of the spatial variability (Fenton 1997; Jefferies et al. 2008). Research within this study used a geostatistics based method, known as sequential Gaussian simulation (SGS), to stochastically simulate rock mass heterogeneity. The proposed methodology is new to the field of open pit mine design. The algorithm works by sequentially simulating attribute values along pseudo-random paths through the modelled grid nodes (Dowd 1992; Deutsch and Journel 1998). Spatial dependencies are taken into consideration during the simulation process through the use of variograms and simple kriging routines (Goovaerts 1997; Journel and Huijbregts 1978; Nowak and Verly 2007).
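As a minimal illustration of this workflow, the following one-dimensional sketch simulates values along a random path, conditioning each draw on previously simulated nodes through simple kriging with an exponential covariance model. It is a didactic simplification under assumed parameters (single variable, unconditional, zero global mean), not the thesis implementation.

```python
import numpy as np

def exp_cov(h, sill=1.0, vrange=10.0):
    """Exponential covariance model: C(h) = sill * exp(-3|h| / range)."""
    return sill * np.exp(-3.0 * np.abs(h) / vrange)

def sgs_1d(x, seed=0, sill=1.0, vrange=10.0, max_neigh=8):
    """Unconditional sequential Gaussian simulation on 1-D grid nodes x."""
    rng = np.random.default_rng(seed)
    n = len(x)
    path = rng.permutation(n)            # pseudo-random visiting order
    z = np.full(n, np.nan)
    done = []                            # indices already simulated
    for i in path:
        if done:
            # condition on the nearest previously simulated nodes
            d = np.abs(x[done] - x[i])
            near = np.argsort(d)[:max_neigh]
            idx = [done[j] for j in near]
            C = exp_cov(np.abs(x[idx][:, None] - x[idx][None, :]), sill, vrange)
            c0 = exp_cov(np.abs(x[idx] - x[i]), sill, vrange)
            w = np.linalg.solve(C, c0)           # simple kriging weights
            mean = w @ z[idx]                    # SK mean (global mean = 0)
            var = max(sill - w @ c0, 0.0)        # SK variance
        else:
            mean, var = 0.0, sill                # first node: prior distribution
        z[i] = rng.normal(mean, np.sqrt(var))    # draw from conditional dist.
        done.append(i)
    return z
```

Repeating the call with different seeds yields the independent realizations needed for a probabilistic SRF/FOS analysis, with the variogram (here its covariance equivalent) controlling the spatial continuity of each field.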
Simulation results using the SGS approach emphasize the importance of incorporating heterogeneities into geomechanical codes. The results suggest that the inability to consider heterogeneities can result in a fundamental change in the predicted SRF/FOS results. Such behaviour is problematic for geotechnical slope design, as billions of dollars are spent annually on probabilistic designs based on fundamentally flawed assumptions. These issues are due to the invalidation of the independence assumption required to use the classical statistical methods employed in conventional geotechnical slope design (Journel and Huijbregts 1978; Isaaks and Srivastava 1989; Deutsch 2002). This invalidation is due to spatial dependencies, which cause the development of scale effects through data aggregation and preferential strain issues. In the case of data aggregation, it is a commonly held position in the geographical sciences that data are only valid at the collection scale (Gehlke and Biehl 1934; Haining 2003). However, conventional geotechnical slope analysis often disregards this principle and up-scales geomechanical properties without considering the effects of spatial averaging.

The end effect of this fundamentally flawed methodology is an over-estimation of the domain scale attribute variance, leading to erroneous SRF/FOS estimates. In addition to aggregation problems, preferential strain issues also arise from the predisposition of rock mass failure within its weakest sections (Jefferies et al. 2008; Lorig 2009). This predisposition results in a negative correlation between scale and rock mass strength, which is not properly considered in conventional slope design. Its exclusion can lead to fundamental alterations in the behaviour response of geomechanical models.
In comparison to conventional design, methods which incorporate heterogeneities have been shown to produce more realistic SRF/FOS distributions. The SGS method relies on the use of a variogram to control the variance within the simulation. The end result is a degree of controlled spatial heterogeneity in the stochastic system, resulting in the preservation of the sample-scale variance, while at the same time more accurately representing the large-scale system variance. Although spatially stochastic methods may be considered more data intensive, the inability to consider heterogeneities has been shown to lead to systematic errors in the simulation process. So, although it may be tempting to ignore these effects and simplify the modelling environment, this is a fundamentally flawed position, which leads to fallacious results.

6.1.2. Limitations of DFN and Numerical Model Integration


Bimodal strength characteristics are a natural phenomenon within rock systems, due to the complex inter-relationship between intact rock material and discontinuities found in natural rock masses (e.g. micro-fractures, macro-fractures, faults, etc.; Hoek and Brown 1980a). This behaviour has led engineers to treat rock systems either as a continuum, whose attributes are the result of a combination of the intact rock and discontinuity behaviour, or to explicitly model the discrete features and use a discontinuum approach (Jing 2003; Stead et al. 2006). Using the latter approach, discrete fracture network (DFN) methodologies are typically employed, where individual fractures are explicitly modelled (Dershowitz and Einstein 1988; Xu and Dowd 2010). However, incorporation of DFNs within conventional geomechanical modelling is often difficult due to the development of unacceptable fracture configurations (Painter 2011; Painter et al. 2012; Painter et al. 2014). As a result, researchers are commonly forced to manually manipulate fractures prior to incorporation. However, this can result in adverse effects, including the alteration of fracture attribute statistics and poor reproducibility between researchers.
To overcome the limitations of traditional DFN generation, an alternative approach was proposed in this thesis. The alternative method takes into consideration not only the statistical and theoretical basis of DFN generation, but also the subsequent geomechanical meshing algorithms. The methodology was based on an enhancement of the Baecher disk model, coupled with three simulation constraints (Baecher et al. 1978). First, a minimum overlapping/separation distance inhibits the development of adversely small elements, by ensuring a minimum spacing between discrete fractures. Next, fracture intersection points are checked to prevent the bounding of unusually small areas. Finally, a minimum intersection angle ensures fractures intersect at angles greater than the minimum internal angle used in mesh generation. Although this trial-and-error approach may not be the most elegant solution to the problem, it is able to generate DFNs which conform to later mesh generation routines, improving DFN integration. This frees researchers to focus on the actual simulation processes instead of cleaning up DFNs. In addition, the development of an explicit, modified method for DFN generation improves the reproducibility of DFNs between researchers, reducing the degree of subjectivity.
Although promising results were obtained from the modified DFN method, it was found to increase the degree of spatial homogenization within the fracture network, which may be inconsistent with the natural system (Pollard and Aydin 1988). In addition, the increased homogenization may reduce the overall fracture connectivity, and hence increase the rock bridge percentage, resulting in an overall increase in the rock mass strength (Elmo et al. 2011; Havaej et al. 2012; Tuckey et al. 2012; Tuckey 2012; Fadakar et al. 2014). However, similar homogenization also occurs when using traditional DFNs, as the manual manipulation process removes closely spaced fractures. As a result, homogenization can be considered an inherent property of current DFN model incorporation.

6.1.3. Mesh Dependency in UDEC-Grain Boundary Models


Brittle fracture is known to play an important role in rock mass failure (Jaeger et al. 2007). Until recently, the numerical simulation of such behaviour has been difficult. However, recent advances in numerical modelling have allowed for the explicit simulation of this behaviour using a number of different approaches (Jing 2003; Stead et al. 2006). One of the leading approaches is the DEM method, first proposed by Cundall (1971). The method simulates the finite displacement and rotation of discrete deformable and/or rigid blocks, using constitutive models for block contacts. Lorig and Cundall (1987) proposed an extension of this method, using Voronoi tessellation routines, to simulate intact rock as an assortment of discrete grains. The methodology is referred to as the UDEC-grain boundary model (UDEC-GBM). The approach restricts large-scale deformation to inter-grain boundaries, allowing grains to become entirely disconnected during the simulation process (Kazerani and Zhao 2010; Lan et al. 2010; Gao and Stead 2014). However, the inability to directly measure the micro-scale block contact properties means that calibration must be conducted prior to use of the method (Kazerani and Zhao 2010).
The issue with this calibration process is that irreducible uncertainties exist in the system due to the stochastic nature of mesh generation, making it impossible to fully calibrate a UDEC-GBM. This research estimated the degree of this irreducible uncertainty, finding a coefficient of variation of 5 to 7% for the cohesion and friction angle. The degree of uncertainty was found to be reduced in the peak strength, with a value of approximately 3%, due to the co-dependent nature of the cohesion and friction angle. The degree of peak strength uncertainty is consistent with previous work; however, this research extended previous studies by using a larger sample size (Kazerani et al. 2012; Kazerani 2013; Kazerani and Zhao 2014). Mesh uncertainties also appear to be relatively independent of the mesh shape, with a similar peak uncertainty observed in both Voronoi and triangular mesh models. An understanding of these irreducible mesh uncertainties is important for any future UDEC-GBM studies, as researchers must realize that it is impossible to fully calibrate the system.
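As a simple illustration of how this irreducible uncertainty can be quantified, the coefficient of variation can be computed over calibrated parameter values from repeated runs with different stochastic mesh seeds. The values below are synthetic stand-ins, not UDEC-GBM output; the sample size, mean, and scatter are illustrative assumptions only.

```python
import random
import statistics

def coefficient_of_variation(samples):
    """Coefficient of variation: sample standard deviation / mean, as a percentage."""
    return 100.0 * statistics.stdev(samples) / statistics.mean(samples)

# Synthetic stand-in for back-calculated contact cohesion (MPa) from 50 model
# runs, each with a different mesh seed. Values are scattered at roughly the
# 5 to 7% level discussed above (illustrative only).
random.seed(42)
cohesion = [random.gauss(30.0, 1.8) for _ in range(50)]

cov = coefficient_of_variation(cohesion)
```

Repeating this across independent mesh realizations, with all contact properties held fixed, isolates the mesh-induced component of the calibration scatter.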

In addition to mesh uncertainty analysis, the influence of mesh shape on the underlying failure mechanisms was explored. Results suggest that micro-scale failure mechanics are highly dependent on the mesh shape. Decreased kinematic freedom associated with Voronoi blocks was found to result in increased localized tensile stress concentration, and an increased predisposition towards tensile failure mechanisms. In comparison, triangular mesh geometries were found to favor shear mechanisms, due to the increased kinematic freedom of discrete blocks. These results suggest that realistic simulation of grain shapes is paramount for the accurate simulation of intact rock failure mechanisms. An inability to account for this may lead to erroneous simulation results, which do not reproduce realistic underlying mechanisms.
Although underlying mesh dependencies exist, the author still considers the UDEC-GBM method to be a valid approach to brittle fracture simulation. The presence of irreducible uncertainties does not necessarily preclude use of the method, as this is an inherent aspect of all numerical analysis, not just UDEC-GBMs. However, extensive research into the use of the method has only just begun within the scientific literature. As a result, fundamental research is still required to demonstrate potential advantages and limitations before the method receives widespread application in engineering practice.

6.2. Recommendations for Future Work


Uncertainty analysis is paramount in geotechnical design. This research has presented three separate applications of uncertainty theory to geotechnical design, in the areas of continuum, DFN and discontinuum modelling. New approaches were applied in each area of the slope design process. However, there remain limitations in all three applications that should be explored in future research.

6.2.1. Spatial Uncertainty
Exploration of the effects of spatial heterogeneity on continuum modelling demonstrated the importance of explicitly modelling the phenomenon. The research applied new approaches to geotechnical slope design, including: the application of the SGS method to model heterogeneity, and Dijkstra's (1959) algorithm to find critical paths through failed material. While the research presented a case study on the application of the method, similar approaches could be applied to other sites. Possible extensions of the research include:

Application of the methodology to additional case studies with different failure modes. While the results of this study are consistent with previous studies (Pascoe et al. 1998; Griffiths and Fenton 2000; Hicks and Samy 2002; Jefferies et al. 2008; Lorig 2009; Srivastava 2012), additional research is needed to confirm the conclusions in alternative settings. It will remain difficult to convince practitioners of the shortcomings of current methods until an extensive body of research is developed. It is therefore recommended that future studies apply similar approaches to case studies with different failure modes, to confirm that similar effects occur.

Simulation of co-dependencies between geotechnical attributes. Simulation results from the Ok Tedi dataset were based on independent simulations of the GSI and UCS, as the correlation coefficient between the parameters was minimal (0.19). However, the degree of co-dependency is likely to vary between studies. In the case of a strong co-dependency, the simulation process can be extended to incorporate information from secondary auxiliary variables through the use of co-kriging routines (Rivoirard 2005). Co-kriging is an extension of traditional kriging where information from secondary auxiliary variables is included in the kriging process (Leuangthong et al. 2011). The additional information is used to refine parameter distributions at the sample location during the kriging process, allowing for the reproduction of covariance structures between variables (Wackernagel 2003). While this can be a complex process, Xu et al. (1992) proposed a simplified co-kriging methodology when only two parameters are present. This approach may be useful if strong co-dependencies were found to exist between the GSI and UCS.

Development of a step-path roughness coefficient. Current step-path estimation algorithms are hampered by the inability to simulate large-scale step-path roughness in geomechanical models. This can lead to poor reproducibility between step-path estimation and stochastic simulation results. To solve this issue, a step-path roughness coefficient needs to be developed to allow for the simulation of realistic failure behaviour.

Inclusion of failure size calculations within a risk framework. Current estimates of the failure size provide a preliminary indication of the potential consequences of a catastrophic failure; however, the research could be taken a step further by estimating the personal and economic impacts of such an incident. The risk framework could be further extended by incorporating failure size estimates into run-out approximation equations (Rose and Hungr 2006). This empirical approach provides a primary estimate of the run-out extent, if certain assumptions are made about the run-out area (i.e. horizontal, unobstructed, etc.). In addition, run-out risks could be assessed by propagating failure size uncertainty through the equations.
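For illustration, the critical-path idea discussed above can be sketched with Dijkstra's (1959) algorithm on a toy grid, where low-cost cells stand in for failed material and high-cost cells for intact rock bridges. This is a schematic example, not the implementation used in the thesis; the grid, costs, and 4-connectivity are illustrative assumptions.

```python
import heapq

def dijkstra_path_cost(cost, start, goal):
    """Least cumulative-cost path through a 2D grid (4-connected neighbours).

    cost: list of lists of per-cell traversal costs; start/goal: (row, col)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

# Toy grid: 1 = failed material (cheap to propagate through), 9 = intact rock bridge.
grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
```

Here the cheapest route from the top-left to the top-right corner detours around the high-cost column, mimicking a failure path seeking out simulated weak material.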

6.2.2. DFN Generation

A modification of the Baecher et al. (1978) disk model was proposed for DFN generation, to solve the issues with traditional DFN methodologies, where problematic elements are produced during geomechanical meshing. While the research presents an attempt to integrate DFNs and meshing algorithms, limitations exist, as the method over-homogenizes the fracture system. Possible future extensions of the work include:

Expansion of the modified DFN methodology to 3D. The current algorithm was designed for use with the 2D geomechanical software codes ELFEN (Rockfield 2013), UDEC (Itasca 2014), and Phase2 (Rocscience 2013). However, an extension of the methodology to 3D would allow for a broader application of the method, to work with 3DEC (Itasca 2007), Slope Model (Cundall 2011) and 3D implementations of ELFEN (Rockfield 2013).

Incorporation of the fracture autocorrelation structure into the DFN methodology. The current software is limited in its reproduction of spatial phenomena, as it over-homogenizes the system. Therefore, the system should be extended to either simulate the true spatial structure, or at the very least a purely random structure.

Numerical simulation study to explore the effects of spatial clustering on the overall geomechanical response. Homogenization of the fracture autocorrelation structure is known to be positively correlated with the rock bridge percentage, resulting in an overall increase in rock mass strength. However, the degree of strengthening this imposes on the system is unclear. As a result, future studies should explore this effect and help constrain the degree of correlation between the two phenomena.

Extension of the modified DFN algorithm to take into consideration alternative mesh geometries. The current method was written for incorporation into ELFEN (Rockfield 2013), UDEC-trigon (Itasca 2014), and Phase2 (Rocscience 2013), all of which utilize triangular mesh geometries. However, the method could be extended to include other geometries, such as Voronoi blocks, through an extension/alteration of the proposed constraints. The UDEC-GBM research presented in this thesis suggests that this extension may be extremely important, as the grain shape has been shown to influence the overall failure mechanics.
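As a hypothetical starting point for such an extension, the minimum intersection angle constraint could be screened against the edges of a Voronoi-style polygonal cell with a simple segment-angle test. The function names and the 20-degree threshold below are illustrative assumptions, not part of the proposed method.

```python
import math

def acute_angle_deg(p1, p2, q1, q2):
    """Acute angle (degrees) between segments p1-p2 and q1-q2."""
    a = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    b = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    d = abs(a - b) % math.pi
    return math.degrees(min(d, math.pi - d))

def violates_angle_constraint(trace, polygon, min_angle_deg=20.0):
    """Screen a fracture trace against every edge of a (Voronoi-style) cell.

    trace: ((x1, y1), (x2, y2)); polygon: list of vertices in order, with
    edges wrapping around. Returns True if any edge meets the trace at less
    than min_angle_deg. This is a crude screen - a full implementation would
    only test edges the trace actually crosses."""
    n = len(polygon)
    for i in range(n):
        e1, e2 = polygon[i], polygon[(i + 1) % n]
        if acute_angle_deg(trace[0], trace[1], e1, e2) < min_angle_deg:
            return True
    return False
```

A fracture nearly parallel to a cell edge would be flagged for regeneration, preventing the sliver-shaped blocks that low-angle intersections would otherwise produce.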

6.2.3. UDEC-Grain Boundary Models

UDEC-GBM simulations explored the influence of mesh dependencies on the overall system behaviour. The results suggest that mesh dependencies lead to irreducible uncertainties in UDEC-GBMs. In addition, mesh geometries were found to control the micro-scale fracture mechanics, with an increased degree of tensile damage observed in Voronoi compared to triangular mesh geometries. A fundamental understanding of this behaviour is important for any future UDEC-GBM studies, as researchers must be aware of the mesh dependencies that exist in the system. While this study has added to the body of research supporting the UDEC-GBM method, additional research is still needed to fully describe its benefits and pitfalls. Possible areas of further research include:

Continued research into the synthetic rock mass (SRM) approach using the proposed coupled DFN-mesh generation algorithm. Currently, SRM research is relatively limited within UDEC-GBMs due to the difficulty in generating DFNs within the conventional UDEC modelling package. However, with the advent of the new algorithm, mesh geometries can be generated which conform to previously generated DFNs. This work can be extended by investigating SRM models in more detail, or by extending the method to include alternative mesh geometries.

The accurate reproduction of micro-structures within UDEC-GBMs, similar to the work done in PFC (Herbst et al. 2008; Li et al. 2008; Akram et al. 2011; Sakakibara et al. 2011). Current research in this area is lacking within UDEC-GBMs, as studies only match the grain size (Kazerani and Zhao 2010; Alzoubi 2012; Gao 2013; Kazerani et al. 2012; Kazerani 2013; Kazerani and Zhao 2014; Gao and Stead 2014), and/or use geostatistically unconstrained models to simulate heterogeneity (Lan et al. 2010; Nicksiar and Martin 2014). An extensive study characterizing the micro-structure, grain shape and size, and incorporating the information into a UDEC-GBM, or other numerical packages such as ELFEN or PFC, would be useful. This would require an appreciation of geostatistical methods to characterize the micro-scale spatial structure, using a method such as sequential indicator simulation (Emery 2004), as well as an understanding of structural geology techniques to reproduce realistic micro-structures within numerical simulation models. Realistic grain shapes could be incorporated using a method such as the clustered overlapping sphere algorithm of Garcia et al. (2009). A study such as this would be beneficial, as the micro-scale failure mechanisms in UDEC-GBMs have been clearly shown to be sensitive to the assumed element shapes.

Extension of the probabilistic methods to explore the effects of spatial up-scaling on calibration uncertainty. Current research has focused only on the inherent uncertainties associated with forward analysis using the same model geometry. A study could utilize a similar approach to that used in Chapter 5 to explore the effects of model up-scaling on the calibration uncertainty.

In closing, although numerous recommendations are provided in this thesis, the most important takeaway is the need to transition from deterministic to uncertainty-based slope design practices.


References
Ahn, S., and A. Fessler. 2003. Standard errors of mean, variance, and standard
deviation estimators. Technical Report 413, Communications and Signal
Processing Laboratory, Department of Electrical Engineering and Computer
Sciences, University of Michigan, Ann Arbor, USA. 48109-2122.
Akram, M.S., G. Sharrock, and R. Mitra. 2011. The role of interstitial cement in synthetic
conglomeratic rocks. In: Sainsbury, R. Hart, C. Detournay and P. Cundall (eds)
Continuum and Distinct Element Numerical Modeling in Geomechanics, Itasca
Consulting Group, Minneapolis, USA. Paper 08-03. 10 p.
Alzoubi, A.K. 2009. The effect of tensile strength on the stability of rock slopes. Ph.D.
Thesis, University of Alberta, Edmonton, Canada. 205 p.
Alzoubi, A.K. 2012. Modeling of rocks under direct shear loading by using discrete
element method. Journal of Engineering & Applied Sciences. 4:5-20.
Ang, A.H.S., and W. Tang. 1984. Probability concepts in engineering planning and
design: volume I basic principles. John Wiley & Sons, New York, USA. 420 p.
Aughenbaugh, J.M. 2006. Managing uncertainty in engineering design using imprecise
probabilities and principles of information economics. Ph.D. Thesis, Georgia
Institute of Technology, Atlanta, USA. 326 p.
Aughenbaugh, J.M., and C.J. Paredis. 2006. The value of using imprecise probabilities
in engineering design. Journal of Mechanical Design. 128:969-979.
Augustin, T., and R. Hable. 2010. On the impact of robust statistics on imprecise
probability models: a review. Structural Safety. 32:358-365.
Australian Geomechanics Society. 2000. Landslide risk management concepts and
guidelines. AGS Sub-Committee on Landslide Risk Management, Sydney,
Australia. 92 p.
Aydin, A. 2004. Fuzzy set approaches to classification of rock masses. Engineering
Geology. 74: 227-245.
Baczynski, N.R.P. 1980. Rock mass characterization and its application assessment of
unsupported underground openings. Ph.D. Thesis, University of Melbourne,
Australia. 233 p.


Baczynski, N.R.P. 2000. STEPSIM4 step-path method for slope risks. GeoEng2000:
An International Conference on Geotechnical & Geological Engineering,
Melbourne, Australia. 19-24.
Baczynski, N.R.P. 2008. STEPSIM4-REVISED: Network analysis methodology for
critical paths in rock mass slopes. In: Proceedings of the Southern Hemisphere
International Rock Mechanics Symposium (SHIRMS-2008), Perth, Australia. 13
p.
Baczynski, N.R.P., I. de Bruyn, J. Mylvaganam, and D. Walker. 2011. High rock slope
cutback geotechnics: a case study at Ok Tedi mine. In: Slope Stability 2011:
International Symposium on Rock Slope Stability in Open Pit Mining and Civil
Engineering, Vancouver, Canada. 14 p.
Baczynski, N.R.P. 2014. Personal Communications. August 22, 2014.
Bae, H.R., R.V. Grandhi, and R.A. Canfield. 2004. An approximation approach for
uncertainty quantification using evidence theory. Reliability Engineering and
System Safety. 86:215-225.
Baecher, G.B., N.A. Lanney, and H.H. Einstein. 1978. Statistical description of rock
properties and sampling. In: Proceedings of the 18th U.S. Symposium on Rock
Mechanics, American Rock Mechanics Association. 5C1-8.
Bagheri, M. 2009. Model uncertainty of design tools to analyze block stability. Licentiate
Thesis. Royal Institute of Technology, Stockholm, Sweden. 163 p.
Bamford, R.W. 1972. The Mount Fubilan (Ok Tedi) porphyry copper deposit, territory of
Papua New Guinea. Economic Geology. 67:1019-1033.
Barton, C.M. 1978. Analysis of joint traces. In: Proceedings of the 19th U.S. Symposium
on Rock Mechanics, American Rock Mechanics Association. 39-40 p.
Beale, G. 2009. Hydrogeologic model. In: J. Read, and P. Stacey (eds) Guidelines for
open pit design. CSIRO Publishing, Collingwood Australia. 141-201 p.
Bear, J. 1972. Dynamics of fluids in porous media. Courier Dover Publications. 763 p.
Beckmann, P. 1971. A history of π. Golden Press, Boulder, USA. 208 p.
Bieniawski, Z.T. 1967. Mechanism of brittle fracture of rock: part I - theory of the fracture process. International Journal of Rock Mechanics and Mining Sciences & Geomechanics Abstracts. 4:395-430.
Bieniawski, Z.T. 1973. Engineering classification in jointed rock masses. In: Transactions
of the South African Institute of Civil Engineers. 15:335-344.
Bieniawski, Z.T. 1976. Rock mass classification in rock engineering. In: Proceedings of
the Symposium on Exploration for Rock Engineering, Johannesburg, South
Africa. 97-106.

Bieniawski, Z.T. 1984. Rock mechanic design in mining and tunnelling. Balkema,
Rotterdam. 272 p.
Bieniawski, Z.T. 1989. Engineering rock mass classifications. John Wiley & Sons, New
York, USA. 384 p.
Bieniawski, Z.T., B.C. Tamames, J.M.G. Fernadez, and M.A. Hernandez. 2006. Rock
mass excavability (RME) indicator: new way to selecting the optimum tunnel
construction method. In: ITA-AITES World Tunnel Congress and 32nd ITA
General Assembly, Seoul, Korea. 6 p.
Bieniawski, Z.T., B. Celada, and J.M. Galera. 2007. TBM excavability: prediction and
machine-rock interaction. In: Proceedings of the Rapid Excavation and Tunneling
Conference (RETC), Toronto, Canada. 1118-1130.
Bieniawski, Z.T., and R. Grandori. 2007. Predicting TBM excavability - part II. Tunnels &
Tunnelling International. January 2008, 15-18.
Billaux, D., J.P. Chiles, K. Hestir, and J. Long. 1989. Three-dimensional statistical
modelling of a fractured rock mass - an example from the Fanay-Augères mine.
International Journal of Rock Mechanics and Mining Sciences & Geomechanics
Abstracts. 26:281-299.
Binaghi, E., L. Luzi, P. Madella, F. Pergalani, and A. Rampini. 1998. Slope instability
zonation: a comparison between certainty factor and fuzzy Dempster-Shafer
approaches. Natural Hazards. 17:77-97.
Blackmore, S., R. Godwin, and S. Fountas. 2003. The analysis of spatial and temporal
trends in yield map data over six years. Biosystems Engineering. 84:455-466.
Bonnet, E., O. Bour, N.E. Odling, P. Davy, I. Main, P. Cowie, and B. Berkowitz. 2001.
Scaling of fracture systems in geological media. Reviews of Geophysics. 39:347-383.
Brace, W.F., B. Paulding, and C. Scholz. 1966. Dilatancy in the fracture of crystalline
rocks. Journal of Geophysical Research. 71:3939-3953.
Brown, E.T. 1970. Strength of models of rock with intermittent joints. Journal of the Soil
Mechanics and Foundation Division. 96:1935-1949.
Brown, E.T. 2008. Estimating the mechanical properties of rock masses. In: Y. Potvin, J.
Carter. A. Dyskin, and R. Jeffery. (eds) SHIRMS 2008, Australian Centre for
Geomechanics, Perth, Australia. 3-22.
Burns, M. Propagation of imprecise probabilities through black-box models. M.Sc.
Thesis, Georgia Institute of Technology, Atlanta, USA. 99 p.


Butler, A.C., F. Sadeghi, S.S. Rao, and S.R. LeClair. Computer-aided design/engineering of bearing systems using the Dempster-Shafer theory. Artificial Intelligence for Engineering Design, Analysis and Manufacturing. 9:1-11.
Cai, M., P.K. Kaiser, Y. Tasaka, T. Maejima, H. Morioka and M. Minami. 2004.
Generalized crack initiation and crack damage stress thresholds of brittle rock
masses near underground excavations. International Journal of Rock Mechanics
& Mining Sciences. 41:833-847.
Carter, T.G., M.S. Diederichs, and J.L. Carvalho. 2007. A unified procedure for Hoek-Brown prediction of strength and post yield behaviour for rock masses at the extreme ends of the rock competency scale. In: L. Ribeiro e Sousa, C. Olalla and N. Grossmann (eds) Proceedings of the 11th Congress of the International Society for Rock Mechanics. 161-164.
Carter, T.G., M.S. Diederichs, and J.L. Carvalho. 2008. Application of modified Hoek-Brown transition relationships for assessing strength and post yield behaviour at both ends of the rock competence scale. The Journal of the Southern African Institute of Mining and Metallurgy. 108:325-338.
Carvalho, J.L., T.G. Carter, and M.S. Diederichs. 2007. An approach for prediction of
strength and post yield behaviour for rock masses of low intact strength. In: E.
Eberhardt, D. Stead, and T. Morrison (eds) Proceedings of the 1st Canada-US
Rock Mechanics Symposium, Vancouver, Canada. 277-285.
Castrignanò, A., A. Buondonno, P. Odierna, C. Fiorentino, and E. Coppola. 2009.
Uncertainty assessment of soil quality index using geostatistics. Environmetrics.
20:298-311.
Castro, L., D. McCreath, and P.K. Kaiser. 1995. Rock mass strength determination from
beakouts in tunnels and boreholes. In: T. Fujii (eds) Proceedings of the 8th
International Congress on Rock Mechanics, Tokyo, Japan. A.A. Balkema,
Rotterdam, Netherlands. 2:531-536
Center for Chemical Process Safety. 2009. Appendix A: Understanding and Using F-N
Diagrams. In: Guidelines for Developing Quantitative Safety Risk Criteria. John
Wiley & Sons, New York, USA.
Chilès, J.P., and P. Delfiner. 1999. Geostatistics: Modeling Spatial Uncertainty. 2nd ed.
New Jersey: Wiley.
Cho, N., C.D. Martin, and D.S. Sego. 2007. A clumped particle model for rock.
International Journal of Rock Mechanics and Mining Sciences & Geomechanics
Abstracts. 44:997-1010.
Christian, J.T., and G.B. Baecher. 1999. Point estimate method as numerical quadrature.
Journal of Geotechnical and Geoenvironmental Engineering. 125:779-786.


Christian, J.T., and G.B. Baecher. 2002. The point-estimate method with large numbers
of variables. International Journal for Numerical and Analytical Methods in
Geomechancis. 26:1515-1529.
Christianson, M., M. Board, and D. Rigby. 2006. UDEC simulation of triaxial testing of
lithophysal tuff. In: Proceedings of the 41st US Symposium on Rock Mechanics,
Golden, USA. 8 p.
Clark, I. 1979. Practical Geostatistics. 1st ed. Essex: Elsevier Applied Science.
Clark, W.A.V., and K.L. Avery. 1976. The effects of data aggregation in statistical
analysis. Geographical Analysis. 75:428-438.
Clover Associates Pty Ltd. 2010. GALENA. Version 5.0. Software. Robertson, Australia.
Colyvan, M. 2008. Is probability the only coherent approach to uncertainty? Risk
Analysis. 28: 645-652.
Cooke, R. 2004. The anatomy of the squizzel: The role of operational definitions in
representing uncertainty. Reliability Engineering & System Safety. 85:313-319.
Cundall, P.A. 1971. A computer model for simulating progressive large scale movements
in blocky rock systems. In: Proceedings of the Symposium of the International
Society of Rock Mechanics (ISRM). Nancy, France. 129-136 p.
Cundall, P., M. Pierce, and D. Mas Ivars. 2008. Quantifying the size effect of rock mass
strength. In: Proceedings of the 1st South Hemisphere International Rock
Mechanics Symposium, Perth, Australia. 315 p.
Cundall, P.A. 2011. Lattice method for modeling brittle, jointed rock. In: D.P. Sainsbury,
R. Hart, C. Detournay and P. Cundall (eds) Continuum and Distinct Element
Numerical Modeling in Geomechanics, Itasca Consulting Group, Minneapolis,
USA. Paper 01-02. 9 p.
Damjanac, B., M. Board, M. Lin, D. Kicker, and J. Leem. 2007. Mechanical degradation
of emplacement drifts at Yucca Mountain - a modeling case study - part II:
lithophysal rock. International Journal of Rock Mechanics and Mining Sciences &
Geomechanics Abstracts. 44:368-399.
Damjanac, B., and C. Fairhurst. 2010. Evidence for a long-term strength threshold in crystalline rock. Rock Mechanics and Rock Engineering. 43:513-531.
Davies, H.L., W.J.S. Howell, R.S.H. Fardon, R.J. Carter, and E.D. Bumstead. 1978.
History of the Ok Tedi porphyry copper prospect, Papua New Guinea. Economic
Geology. 73:796-809.
Davy, P., A. Sornette, and D. Sornette. 1990. Some consequences of a proposed fractal nature of continental faulting. Nature. 348:56-58.


Davy, P., A. Sornette, and D. Sornette. 1992. Experimental discovery of scaling laws relating fractal dimensions and the length distribution exponent of fault systems. Geophysical Research Letters. 19:361-363.
Dawson, E.M., W.H. Roth, and A. Drescher. 1999. Slope stability analysis by strength reduction. Géotechnique. 49:835-840.
de Bruyn, I., J. Mylvaganam, and N.R.P. Baczynski. 2011. A phased modelling approach
to identify passive drainage requirements for ensuring stability of the proposed
west wall cutback at Ok Tedi mine, Papua New Guinea. In: Slope Stability 2011:
International Symposium on Rock Slope Stability in Open Pit Mining and Civil
Engineering, Vancouver, Canada. 12 p.
de Bruyn, I., M.A. Coulthard, N.R.P. Baczynski, and J. Mylvaganam. 2013. Two-dimensional and three-dimensional distinct element numerical stability analysis
for assessment of the west wall cutback design at Ok Tedi Mine, Papua New
Guinea. In: P.M. Dight (eds) Slope Stability 2013: International Symposium on
Rock Slope Stability in Open Pit Mining and Civil Engineering, Brisbane,
Australia. 653-668.
de Vallejo, L.I., and M. Ferrer. 2011. Geological Engineering. CRC Press, Taylor &
Francis Group, London, UK. 678 p.
Deisman, N., D. Mas Ivars, C. Darcel, and R.J. Chalaturnyk. 2010. Empirical and numerical approaches for geomechanical characterization of coal seam reservoirs. International Journal of Coal Geology. 82:204-212.
Dempster, A.P. 1967. Upper and lower probabilities induced by a multiple valued
mapping. Annals of Mathematical Statistics. 38:325-339.
Dershowitz, W.S., P.R. La Pointe, and T.W. Doe. 2004. Advances in discrete fracture
network modeling. In: Proceedings of the US EPA/NGWA Fractured Rock
Conference. Portland, USA. 882-894.
Dershowitz, W.S., and H.H. Einstein. 1988. Characterizing rock joint geometry with joint
system models. Rock Mechanics and Rock Engineering. 21:21-51.
Deutsch, C.V., and A.G. Journel. 1998. GSLIB: Geostatistical Software Library and User's Guide. New York: Oxford University Press.
Deutsch, C.V. 2002. Geostatistical reservoir modeling. Oxford University Press, Oxford,
UK. 400 p.
DHI-WASY GmbH. 2013. FEFLOW: Finite-Element Simulation Systems for Subsurface
Flow and Transport Processes. Version 6.2. 64-bit. Software. Berlin, Germany.
Diederichs, M.S., and P.K. Kaiser. 1999. Tensile strength and abutment relaxation as
failure control mechanics in underground excavations. International Journal of
Rock Mechanics and Mining Sciences & Geomechanics Abstracts. 36:69-96.


Diederichs, M.S. 2000. Instability of hard rock masses: the role of tensile damage and
relaxation. Ph.D. Thesis, University of Waterloo, Canada. 597 p.
Diederichs, M.S. 2003. Manuel Rocha medal recipient: rock fracture and collapse under
low confinement conditions. Rock Mechanics and Rock Engineering. 36:339-381.
Diederichs, M.S., P.K. Kaiser, and E. Eberhardt. 2004. Damage initiation and
propagation in hard rock tunnelling and the influence of near-face stress rotation.
International Journal of Rock Mechanics and Mining Sciences & Geomechanics
Abstracts. 41:785-812.
Diederichs, M.S., M. Lato, R. Hammah, and P. Quinn. 2007a. Shear strength reduction
approach for slope stability analysis. In: Proceedings of the 1st Canada-US Rock
Mechanics Symposium, Vancouver, Canada. 8 p.
Diederichs, M.S., J.L. Carvalho, and T. Carter. 2007b. A modified approach for prediction
of strength and post yield behaviour for high GSI rock masses in strong, brittle
ground. Proceedings of the 1st Canada-US Rock Mechanics Symposium,
Vancouver, Canada. 8 p.
Dijkstra, E.W. 1959. A note on two problems in connexion with graphs. Numerische Mathematik. 1:269-271.
Dimitrakopoulos, R., and M.B. Fonseca. 2003. Assessing risk in grade-tonnage curves in a complex copper deposit, northern Brazil, based on an efficient joint simulation of multiple correlated variables. In: Proceedings of the Application of Computers and Operations Research in the Minerals Industries, Cape Town, South Africa. 373-382.
Dodagoudar, G.R., and G. Venkatachalam. 2000. Reliability analysis of slopes using
fuzzy set theory. Computers and Geotechnics. 27:101-115.
Douglas, K.J., and G. Mostyn. 2004. The shear strength of rock masses. In: G. Farquar,
Kelsy, Marsh, and Fellows (eds) Proceedings of the 9th Australia New Zealand
Conference on Geomechanics, New Zealand Geotechnical Society, Auckland,
New Zealand. 166-172 p.
Dowd, P.A. 1992. A review of recent developments in geostatistics. Computers &
Geosciences. 17:1481-1500.
Du, L., K.K. Choi, and B.D. Youn. 2006. Inverse possibility analysis method for
possibility-based design optimization. AIAA Journal, 44:2682-2690.
Du, L. B.D. Youn, and D. Gorsich. 2006. Possibility-based design optimization method
for design problems with both statistical and fuzzy input data. Journal of
Mechanical Design. 128:928-935.
Duncan, J.M. 2000. Factors of safety and reliability in geotechnical engineering. Journal
of Geotechnical and Geoenvironmental Engineering, 126:307-316.


Eberhardt, E.D. 1998. Brittle rock fracture and progressive damage in uniaxial
compression. Ph.D. Thesis, University of Saskatchewan, Saskatoon, Canada.
334 p.
Eberhardt, E., D. Stead, B. Stimpson, and R.S. Read. 1998. Identifying crack initiation
and propagation thresholds in brittle rock. Canadian Geotechnical Journal.
35:222-233.
Eckhardt, R. 1987. Stan Ulam, John von Neumann, and the Monte Carlo method, Los
Alamos Science, Special Issue. 15:131-137.
Einstein, H.H., and G.B. Baecher. 1983a. Probabilistic and statistical methods in
engineering geology. Rock Mechanics and Rock Engineering. 16:39-72.
Einstein, H.H., D. Veneziano, G.B. Baecher, and K.J. O'Reilly. 1983b. The effect of
discontinuity persistence on rock slope stability. International Journal of Rock
Mechanics and Mining Sciences & Geomechanics Abstracts. 20:227-236.
Elmo, D., C. Clayton, S. Rogers, R. Beddoes, and S. Greer. 2011. Numerical simulations
of potential rock bridge failure within a naturally fractured rock mass. In:
Proceedings of the International Symposium on Rock Slope Stability in Open Pit
Mining and Civil Engineering. Vancouver, Canada. 13 p.
Elmouttie, M.K., and G.V. Poropat. 2011. Uncertainty propagation in structural modeling.
In: Slope Stability 2011: International Symposium on Rock Slope Stability in
Open Pit Mining and Civil Engineering, Vancouver, Canada. 13 p.
El-Ramly, H., N.R. Morgenstern, and D.M. Cruden, 2002. Probabilistic slope stability
analysis for practice. Canadian Geotechnical Journal. 39:665-683.
El-Ramly, H., N.R. Morgenstern, and D.M. Cruden. 2006. Lodalen slide: A probabilistic
assessment. Canadian Geotechnical Journal. 43:956-968.
Emery, X. 2004. Properties and limitations of sequential indicator simulation. Stochastic
Environmental Research and Risk Assessment. 18:414-424.
Esfahani, N.M., and O. Asghari. 2013. Fault detection in 3D by sequential Gaussian
simulation of Rock Quality Designation (RQD). Arabian Journal of Geosciences.
10:3737-3747.
Esmaieli, K., J. Hadjigeorgiou, and M. Grenon. 2010. Estimating geometrical and
mechanical REV based on synthetic rock mass models at Brunswick Mine.
International Journal of Rock Mechanics and Mining Sciences & Geomechanics
Abstracts. 47:915-926.
Fadakar, A.Y., P.A. Dowd, and C. Xu. 2014. Connectivity field: a measure for
characterising fracture networks. Mathematical Geosciences. DOI
10.1007/s11004-013-9520-7.


Fagerlund, G., M. Royle, and J. Scibek. 2013. Integrating complex hydrogeological and
geotechnical models – a discussion of methods and issues. In: P.M. Dight (eds)
Slope Stability 2013: International Symposium on Rock Slope Stability in Open
Pit Mining and Civil Engineering, Brisbane, Australia. 1091-1102.
Fenton, G.A. 1997. Probabilistic methods in geotechnical engineering. In: Workshop
presented at ASCE GeoLogan97 Conference, Logan, USA.
Ferson, S., and S. Donald. 1998. Probability bounds analysis. In: A. Mosleh, and R.A.
Bari (eds) Probabilistic Safety Assessment and Management. Springer-Verlag,
New York, USA. 1203-1208.
Ferson, S., and W.T. Tucker. 2006. Sensitivity analysis using probability bounding.
Reliability Engineering & System Safety. 91:1435-1442.
Fonseka, G.M., S.A.F. Murrell, and P. Barnes. 1985. Scanning electron microscope and
acoustic emission studies of crack development in rocks. International Journal of
Rock Mechanics and Mining Sciences & Geomechanics Abstracts. 22:273-289.
Garcia, X., J.P. Latham, J. Xiang, and J.P. Harrison. 2009. A clustered overlapping
sphere algorithm to represent real particles in discrete element modelling.
Géotechnique. 59:779-784.
Gao, F. 2013. Simulation of failure mechanics around underground coal mine openings
using discrete element modelling. Ph.D. Thesis, Simon Fraser University,
Vancouver, Canada. 288 p.
Gao, F.Q., and D. Stead. 2014. The application of a modified Voronoi logic to brittle
fracture modelling at the laboratory and field scale. International Journal of Rock
Mechanics and Mining Sciences & Geomechanics Abstracts. 68:1-14.
Gao, F., D. Stead, and J. Coggan. 2014a. Evaluation of coal longwall caving
characteristics using an innovative UDEC Trigon approach. Computers and
Geotechnics. 55:448-460.
Gao, F., D. Stead, and J. Coggan. 2014b. Simulation of roof shear failure in coal mine
roadways using an innovative UDEC trigon approach. Computers and
Geotechnics. 61:33-41.
Gehlke, C.E., and K. Biehl. 1934. Certain effects of grouping upon the size of correlation
coefficient in census tract material. Journal of the American Statistical
Association Supplement. 29:169-170.
Geier, J.E., K. Lee, and W.S. Dershowitz. 1988. Field validation of conceptual models for
fracture geometry. In: Proceedings of the American Geophysical Union, 1988 Fall
Meeting, San Francisco, EOS, Transactions, American Geophysical Union.
69:1177.
Giasi, C.I., P. Masi, and C. Cherubini. 2003. Probabilistic and fuzzy reliability analysis of
a sample slope near Aliano. Engineering Geology. 67:391-402.

Giles, R. 1982. Foundations for a theory of possibility. In: M.M. Gupta, and E. Sanchez
(eds) Fuzzy Information and Decision Processes. North-Holland Publishing
Company, Amsterdam, Holland. 183-195.
Glynn, E.F., D. Veneziano, and H.H. Einstein. 1978. The probabilistic model for shearing
resistance of jointed rock. In: 19th US Symposium on Rock Mechanics (USRMS).
American Rock Mechanics Association. 66-76.
Glynn, E. 1979. A probabilistic approach to the stability of rock slopes. Ph.D.
dissertation, Massachusetts Institute of Technology, Cambridge, USA. 442 p.
Good, I.J. 1950. Probability and the weighting of evidence. Charles Griffin, London, UK.
119 p.
Good, I.J. 1983. Good thinking: the foundations of probability and its applications.
University of Minnesota Press, Minneapolis, USA. 352 p.
Goovaerts, P. 1997. Geostatistics for Natural Resources Evaluation. 1st ed. New York:
Oxford University Press. 496 p.
Griffiths, D.V. and G.A. Fenton. 2000. Influence of soil strength spatial variability on the
stability of an undrained clay slope by finite elements. In: Slope Stability 2000:
Proceedings of GeoDenver 2000, Denver, ASCE Geotechnical Special
Publication No. 101, 184-193. New York. DOI:10.1061/40512(289)14.
Griffiths, D.V., J. Huang, and G.A. Fenton. 2009. Influence of spatial variability on slope
reliability using 2-D random fields. Journal of Geotechnical and
Geoenvironmental Engineering. 135:1367-1378.
Gringarten, E. 1997. 3D geometric description of fractured reservoirs. In: E.Y. Baafi, and
N.A. Schofield (eds) Geostatistics Wollongong '96. Dordrecht: Kluwer Academic.
424-432.
Haining, R.P. 2003. Spatial data analysis: theory and practice. Cambridge University
Press, Cambridge, UK. 454 p.
Hájek, A. 2012. Interpretations of probability. In: E.N. Zalta (eds) The Stanford
Encyclopedia of Philosophy, Winter 2012 Edition,
http://plato.stanford.edu/archives/win2012/entries/probability-interpret/.
Hallin, M., Z. Lu, and L.T. Tran. 2004. Kernel density estimation for spatial processes:
the L1 theory. Journal of Multivariate Analysis. 88:61-75.
Hamidi, J.K., K. Shahriar, B. Rezai, and H. Bejari. 2010. Application of fuzzy set theory
to rock engineering classification systems: an illustration of the rock mass
excavability index. Rock Mechanics and Rock Engineering. 43:335-350.
Hammah, R.E., T.E. Yacoub, B. Corkum, and J. Curran. 2005. The shear strength
reduction method for the generalized Hoek-Brown criterion. In: Proceedings of
the 40th U.S. Symposium on Rock Mechanics, Anchorage, USA. 6 p.

Hammah, R.E., T.E. Yacoub, and J.H. Curran. 2006. Investigating the performance of
the shear strength reduction (SSR) method on the analysis of reinforced slopes.
In: Proceedings of the 59th Canadian Geotechnical Conference, Vancouver,
Canada. 5 p.
Hammah, R.E., and J.H. Curran. 2009. Is it better to be approximately right than
precisely wrong: why simple models work in mining geomechanics. In:
Proceedings of the 43rd US Rock Mechanics Symposium and 4th U.S.-Canada
Rock Mechanics Symposium, Asheville, USA. 8 p.
Hammah, R.E., T.E. Yacoub, and J.H. Curran. 2009. Numerical modelling of slope
uncertainty due to rock mass jointing. In: Proceedings of the International
Conference on Rock Joints and Jointed Rock Masses, Tucson, Arizona, USA. 8
p.
Hammersley, J.M., and D.C. Handscomb. 1964. Monte Carlo methods. John Wiley &
Sons, New York, USA. 178 p.
Hanss, M. 2005. Applied fuzzy arithmetic: an introduction with engineering applications.
Springer. New York, USA. 259 p.
Harr, M.E. 1989. Probabilistic estimates for multivariate analyses. Applied Mathematical
Modelling. 13:281-294.
Harr, M.E. 1996. Reliability based design in civil engineering. Dover, New York, USA.
281 p.
Harrison, J.P., and J.A. Hudson. 2010. Incorporating parameter variability in rock
mechanics analyses: fuzzy mathematics applied to underground rock spalling.
Rock Mechanics and Rock Engineering. 43:219-224.
Harrison, J.P., A.M. Ferrero, and S. Cravero. 2001. Fuzzy partitioning algorithms applied
to the interpretation of distinct element modelling results. Géotechnique. 50:677-686.
Hart, A.G. 1942. Risk, uncertainty and the unprofitability of compounding probabilities.
In: O. Lange, F. McIntyre, and T.O. Yntema (eds) Studies in Mathematical
Economics and Econometrics. University of Chicago Press, Chicago, USA. 110-118.
Havaej, M., D. Stead, L. Lorig, and J. Vivas. 2012. Modelling rock bridge failure and
brittle fracturing in large open pit rock slopes. In: Proceedings of the 46th U.S.
Rock Mechanics/Geomechanics Symposium, American Rock Mechanics
Association, Chicago, USA. 9 p.
Havaej, M., A. Wolter, and D. Stead. 2014. Exploring the potential role of brittle fracture
in the 1963 Vajont Slide, Italy. Submitted to: International Journal of Rock
Mechanics and Mining Sciences & Geomechanics Abstracts.


Hearn, G.J. 1995. Landslide and erosion hazard mapping at Ok Tedi copper mine,
Papua New Guinea. Quarterly Journal of Engineering Geology and
Hydrogeology. 28:47-60.
Henley, E.J., and H. Kumamoto. 1981. Reliability engineering and risk assessment.
Prentice-Hall, New Jersey, USA. 540 p.
Herbst, M., H. Konietzky, and K. Walter. 2008. 3D microstructural modeling. In: R. Hart,
C. Detournay and P. Cundall (eds) Continuum and Distinct Element Numerical
Modeling in Geo-Engineering, Itasca Consulting Group, Minneapolis, USA. Paper
08-05. 7 p.
Hicks, M.A., and R. Boughrarou. 1998. Finite element analysis of the Nerlerk underwater
berm failures. Géotechnique. 48:169-185.
Hicks, M.A., and K. Samy. 2002. Influence of heterogeneity on undrained clay slope
stability. Quarterly Journal of Engineering Geology and Hydrogeology. 35:41-49.
Hoek, E. 1968. Brittle failure of rock. In: K.G. Stagg, and O.C. Zienkiewicz (eds) Rock
Mechanics in Engineering Practice. 99-124.
Hoek, E., and E.T. Brown. 1980a. Underground excavations in rock. 1st edition. London:
Institution of Mining and Metallurgy. 536 p.
Hoek, E., and E.T. Brown. 1980b. Empirical strength criterion for rock masses. Journal of
Geotechnical Engineering. 106:1013-1035.
Hoek, E. 1983. Strength of jointed rock masses. Géotechnique. 33:187-223.
Hoek, E. 1994. Strength of rock and rock masses. ISRM News Journal. 2:4-16.
Hoek, E., P.K. Kaiser, and W.F. Bawden. 1995. Support of Underground Excavations in
Hard Rock. Balkema, Rotterdam, Netherlands. 300 p.
Hoek, E., and E.T. Brown. 1997. Practical estimates of rock mass strength. International
Journal of Rock Mechanics and Mining Sciences & Geomechanics Abstracts.
34:1165-1186.
Hoek, E., P. Marinos, and M. Benissi. 1998. Applicability of the Geological Strength
Index (GSI) classification for very weak and sheared rock masses. The case of
the Athens Schist Formation. Bulletin of Engineering Geology and the
Environment. 57:151-160.
Hoek, E., C. Carranza-Torres, and B. Corkum. 2002. Hoek-Brown failure criterion – 2002
edition. In: Proceedings of the 5th North American Rock Mechanics Symposium,
Toronto, Canada. 7 p.
Hoek, E. and P. Marinos. 2007. A brief history of the development of the Hoek-Brown
failure criterion. Soils and Rocks, No 2, November. 13 p.


Hoek, E. 2012. Blasting damage in rock. www.rocscience.com. Retrieved on December
8, 2012. 11 p.
Holcomb, D.J. and L.S. Costin. 1987. Damage in brittle materials: experimental methods.
In: J.P. Lamb (eds) Proceedings of the 10th U.S. National Congress of Applied
Mechanics. 107-113.
Hong, H.P. 1998. An efficient point estimate method for probabilistic analysis. Reliability
Engineering and System Safety. 59:261-267.
Huang, H.Z., and X. Zhang. 2004. Design optimization with discrete and continuous
variables of aleatory and epistemic uncertainties. Journal of Mechanical Design.
131:031006.
Huang, H., W. Gong, C.H. Juang, and S. Khoshnevisan. 2014. Robust geotechnical
design of shield-driven tunnels using fuzzy sets. Tunneling and Underground
Construction. 184-194.
Iman, R.L., and W.J. Conover. 1982. A distribution-free approach to inducing rank
correlation among input variables. Communications in Statistics. 11:311-334.
Isaaks, E.H., and R.M. Srivastava. 1989. An Introduction to Applied Geostatistics. 1st ed.
New York: Oxford University Press. 592 p.
Itasca Consulting Group Inc. 2007. 3DEC. Version 4.1. Software. Minneapolis, USA.
Itasca Consulting Group Inc. 2008. PFC2D. Version 4.0. Software. Minneapolis, USA.
Itasca Consulting Group Inc. 2011. FLAC (Fast Lagrangian Analysis of Continua).
Version 7.00.411. Software. Minneapolis, USA.
Itasca Consulting Group Inc. 2014. UDEC (Universal Distinct Element Code). Version
6.00.282. Software. Minneapolis, USA.
Ivanova, V.M. 1995. Three-dimensional stochastic modeling of rock fracture systems.
M.Sc. Thesis, Massachusetts Institute of Technology, Cambridge, MA. 200 p.
Jaeger, J.C., N.G.W. Cook, and R.W. Zimmerman. 2007. Fundamentals of rock
mechanics. 4th ed. Blackwell Publishing, Malden, USA. 488 p.
Jakubec, J. and D.H. Laubscher. 2000. The MRMR rock mass rating classification
system in mining practice. In: Proceedings of the 3rd International Conference
and Exhibition on Mass Mining, Brisbane, Australia. 413-421.
Jefferies, M., L. Lorig, and C. Alvarez. 2008. Influence of rock-strength spatial variability
on slope stability. In: R. Hart, C. Detournay and P. Cundall (eds) Continuum and
Distinct Element Numerical Modeling in Geo-Engineering, Itasca Consulting
Group, Minneapolis, USA. Paper 01-05. 9 p.


Jennings, J. 1970. A mathematical theory for the calculation of the stability of slopes in
open cast mines. In: Proceeding of the Symposium on the Theoretical
Background to the Planning of Open Pit Mines, Johannesburg, South Africa. 87-102.
Jensen, O.P., M.C. Christman, and T.J. Miller. 2006. Landscape-based geostatistics: a
case study of the distribution of blue crab in Chesapeake Bay. Environmetrics.
17:605-621.
Jing, L. 2003. A review of techniques, advances and outstanding issues in numerical
modelling for rock mechanics and rock engineering. International Journal of Rock
Mechanics and Mining Sciences & Geomechanics Abstracts. 40:283-353.
Johns, H. 1966. Measuring the strength of rock in situ at an increasing scale. In:
Proceedings of the 1st ISRM Congress, Lisbon, Portugal. 477-482.
Journel, A.G., and C.J. Huijbregts. 1978. Mining Geostatistics. 1st ed. New York:
Academic Press. 600 p.
Karanki, D.R., H.S. Kushwaha, A.K. Verma, and S. Ajit. 2009. Uncertainty analysis
based on probability bounds (P-Box) approach in probabilistic safety
assessment. Risk Analysis. 29:662-675.
Kazerani, T., and J. Zhao. 2010. Micromechanical parameters in bonded particle method
for modelling of brittle material failure. International Journal for Numerical and
Analytical Methods in Geomechanics. 34:1877-1895.
Kazerani, T., Z.Y. Yang, and J. Zhao. 2012. A discrete element model for predicting
shear strength and degradation of rock joint by using compressive and tensile
test data. Rock Mechanics and Rock Engineering. 45:695-709.
Kazerani, T. 2013. A discontinuum-based model to simulate compressive and tensile
failure in sedimentary rock. Journal of Rock Mechanics and Geotechnical
Engineering. 5:378-388.
Kazerani, T., and J. Zhao. 2014. A microstructure-based model to characterize
micromechanical parameters controlling compressive and tensile failure in
crystallized rock. Rock Mechanics and Rock Engineering. 47:435-452.
Kemeny, J.M., and N.G.W. Cook. 1986. Effective moduli, non-linear deformation and
strength of a cracked elastic solid. International Journal of Rock Mechanics and
Mining Sciences & Geomechanics Abstracts. 23:107-118.
Kiureghian, A.D. 2007. Aleatory or epistemic? Does it matter? Special Workshop on Risk
Acceptance and Risk Communication, March 26-27, 2007, Stanford University,
USA. 13 p.


Klir, G.J. 1992. Probabilistic versus possibilistic conceptualization of uncertainty. In: B.M.
Ayyub, M.M. Gupta, and L.N. Kanal (eds) Analysis and Management of
Uncertainty: Theory and Applications. North-Holland Publishing Company, New
York, USA. 38-41.
Kohlas, J., and P.A. Monney. 1995. A mathematical theory of hints: an approach to
Dempster-Shafer theory of evidence. Lecture Notes in Economics and
Mathematical Systems. 422 p.
Kolmogorov, A.N. 1933. Grundbegriffe der Wahrscheinlichkeitsrechnung. Ergebnisse
der Mathematik. Translated in 1950 as Foundations of the Theory of Probability.
Chelsea Publishing Company, New York, USA. 84 p.
Kong, W.K. 2002. Risk assessment of slopes. Quarterly Journal of Engineering Geology
and Hydrogeology. 35:213-222.
Kovari, K., A. Tisa, E. Einstein, and J. Franklin. 1983. Suggested methods for
determining the strength of rock materials in triaxial compression: revised
version. International Journal of Rock Mechanics and Mining Sciences &
Geomechanics Abstracts. 20:285-290.
Labuz, J.F., and A. Zang. 2012. Mohr-Coulomb failure criterion. Rock Mechanics and
Rock Engineering. 45:975-979.
Lajtai, E.Z. 1968. Shear strength of weakness planes in rock. International Journal of
Rock Mechanics and Mining Sciences & Geomechanics Abstracts. 6:499-515.
Lan, H., C.D. Martin, and B. Hu. 2010. Effect of heterogeneity of brittle rock on
micromechanical extensile behavior during compression loading. Journal of
Geophysical Research. 115:B01202. doi:10.1029/2009JB006496.
Laubscher, D.H. 1975. Class distinction in rock masses. Coal, Gold and Base Minerals
of South Africa. 23:37-50.
Laubscher, D.H. 1990. A geomechanics classification system for the rating of rock mass
in mine design. Journal of the South African Institute of Mining and Metallurgy.
90:257-273.
Laubscher, D.H., and J. Jakubec. 2001. The MRMR rock mass classification for jointed
rock masses. In: W.A. Hustrulid, and R.L. Bullock (eds) Underground mining
methods: engineering fundamentals and international case studies, Society of
Mining Metallurgy and Exploration, Littleton, USA. 475-481.
Leuangthong, O., K.D. Khan, and C.V. Deutsch. 2011. Solved problems in geostatistics.
2nd ed. New Jersey: Wiley. 208 p.
Levi, I. 1974. On indeterminate probabilities. Journal of Philosophy. 71:391-418.


Li, L., I. Larsen, and R.M. Holt. 2008. A grain scale PFC3D model. In: R. Hart, C.
Detournay and P. Cundall (eds) Continuum and Distinct Element Numerical
Modeling in Geo-Engineering, Itasca Consulting Group, Minneapolis, USA. Paper
08-02. 7 p.
Little, T.N., J.P. Cortes, and N.R.P. Baczynski. 1998. Risk-based slope design
optimisation study for the Ok Tedi copper-gold mine. Internal Report: Ok Tedi
Mining Ltd., Tabubil, Papua New Guinea. 1657 p.
Lockner, D.A., J.D. Byerlee, V. Kuksenko, A. Ponomarev, and A. Sidorin. 1992.
Observations of quasi-static fault growth from acoustic emissions. In: B. Evans,
and T. Wong (eds) Fault mechanics and transport properties of rocks. Academic
Press, New York, USA. 3-31.
Long, J.C.S., and D.M. Billaux. 1987. From field data to fracture network modelling – an
example incorporating spatial structure. Water Resources Research. 23:1201-1216.
Lorig, L.J., and P.A. Cundall. 1987. Modeling of reinforced concrete using the distinct
element method. In: S.B. Shah, and S.E. Swartz (eds) Fracture of Concrete and
Rock, SEM-RILEM International Conference, Springer, Houston, USA. 276-287.
Lorig, L.J., A. Watson, C.D. Martin, and D. Cruden. 2009. Rockslide run-out prediction
from distinct element analysis. Geomechanics and Geoengineering. 4:17-25.
Lorig, L.J. 2009. Challenges in current slope stability analysis methods. In: Slope
Stability, International Symposium on Rock Slope Stability in Open Pit Mining and
Civil Engineering, 2009, Santiago, Chile. 8 p.
Mahabadi, O.K., A. Lisjak, G. Grasselli, and A. Munjiza. 2012. Y-Geo: a new combined
finite-discrete element numerical code for geomechanical applications.
International Journal of Geomechanics. 12:676-688.
Mandelbrot, B.B. 1982. The fractal geometry of nature. W.H. Freeman, New York, USA.
480 p.
Maptek Pty Ltd. 2013. Vulcan. Version 8.1.4. 64 bit. Software. Adelaide, Australia.
Mardia, K.V., W.B. Nyirongo, A.N. Walder, C. Xu, P.A. Dowd, R.J. Fowell, and J.T. Kent.
2007. Markov chain Monte Carlo implementation of rock fracture modelling.
Mathematical Geosciences. DOI 10.1007/s11004-007-9099-3.
Marschak, J. 1974. Economic Information, Decision, and Prediction: Selected Essays.
Vol. I-III. D. Reidel Publishing Company, Boston, USA. 400 p.
Martin, C.D., and N.A. Chandler. 1994. The progressive fracture of Lac du Bonnet
granite. International Journal of Rock Mechanics and Mining Sciences &
Geomechanics Abstracts. 31:643-659.


Martin, C.D. 1997. The 17th Canadian geotechnical colloquium: the effect of cohesion
loss and stress path on brittle rock strength. Canadian Geotechnical Journal.
34:698-725.
Martin, C.D., P.K. Kaiser, and D.R. McCreath. 1999. Hoek-Brown parameters for
predicting the depth of brittle failure around tunnels. Canadian Geotechnical
Journal. 36:136-151.
Marinos, V., P. Marinos, and E. Hoek. 2005. The geological strength index: applications
and limitations. Bulletin of Engineering Geology and the Environment. 64:55-65.
Mas Ivars, D., M. Pierce, D. DeGagné, and C. Darcel. 2007. Anisotropy and scale
dependency in jointed rock-mass strength – a synthetic rock mass study. In:
Proceedings of the 1st International FLAC/DEM Symposium on Numerical
Modeling. 231-239.
Mas Ivars, D., M.E. Pierce, C. Darcel, J. Reyes-Montes, D.O. Potyondy, R. Paul Young,
and P.A. Cundall. 2011. The synthetic rock mass approach for jointed rock mass
modelling. International Journal of Rock Mechanics and Mining Sciences &
Geomechanics Abstracts. 48:219-244.
Matheron, G. 1963. Principles of geostatistics. Economic Geology. 58:1246-1266.
Matsui, T., and K.C. San. 1992. Finite element slope stability analysis by shear strength
reduction technique. Soils and Foundations. 32:59-70.
Mayer, J.M., P. Hamdi, and D. Stead. 2014a. A modified discrete fracture network
approach for geomechanical simulation. In: Proceedings of the 1st International
Conference on Discrete Fracture Network Engineering. 9 p.
Mayer, J.M., D. Stead, I. de Bruyn, and M. Nowak. 2014b. A sequential Gaussian
simulation approach to modelling rock mass heterogeneity. In: Proceedings of
the 48th US Rock Mechanics/Geomechanics Symposium, Minneapolis, USA. 11
p.
Mazzoccola, D.F., D.L. Millar, and J.A. Hudson. 1997. Information, uncertainty and
decision making in site investigation for rock engineering. Geotechnical and
Geological Engineering. 15:145-180.
Méndez-Venegas, J., and M.A. Díaz-Viera. 2014. Stochastic modeling of spatial grain
distribution in rock samples from terrigenous formations using the plurigaussian
simulation method. In: M. Díaz-Viera, P. Sahay, M. Coronado and A. Ortiz-Tapia
(eds) Mathematical and Numerical Modeling in Porous Media: Applications in
Geosciences, 1st ed. CRC Press, Taylor & Francis Group, Boca Raton, USA. 17 p.
Mohamed, S., and A.K. McCowan. 2001. Modelling project investment decisions under
uncertainty using possibility theory. International Journal of Project Management.
19:231-241.


Mostyn, G., and K. Douglas. 2000. Strength of intact rock and rock masses. In:
Proceedings of GeoEng 2000, Technomic Publishing Company, Lancaster, USA.
1389-1421.
Nadim, F. 2007. Tools and strategies for dealing with uncertainty in geotechnics.
Probabilistic Methods in Geotechnical Engineering. 491:71-95.
Nagel, E. 1960. The structure of science. Hackett, London, UK. 618 p.
Nicksiar, M., and C.D. Martin. 2013. Factors affecting crack initiation in low porosity
crystalline rocks. Rock Mechanics and Rock Engineering. DOI 10.1007/s00603-013-0451.
Nikolaidis, E., S. Chen, H. Cudney, R.T. Haftka, and R. Rosca. 2004. Comparison of
probability and possibility for design against catastrophic failure under
uncertainty. Journal of Mechanical Design. 126:386-394.
Nikolaidis, E. 2005. Types of uncertainty in design decision making. In: E. Nikolaidis,
D.M. Ghiocel and S. Singhal (eds) Engineering Design Reliability Handbook.
CRC Press, New York, USA. 8-1-8-20.
Nowak, M., and G. Verly. 2004. The practice of sequential Gaussian simulation. In: O.
Leuangthong and C.V. Deutsch (eds) Geostatistics Banff, Netherlands, Springer.
387-398.
Nowak, M., and G. Verly. 2007. A practical process for geostatistical simulation with
emphasis on Gaussian methods. In: R. Dimitrakopoulos (eds) Orebody Modelling
and Strategic Mine Planning - Uncertainty and Risk Management Models (2nd
Edition). The Australasian Institute of Mining and Metallurgy (The AusIMM). 10 p.
Oberguggenberger, M., and W. Fellin. 2008. Reliability bounds through random sets:
non-parametric methods and geotechnical applications. Computers and
Structures. 86:1093-1101.
Oberkampf, W.L., S.M. DeLand, B.M. Rutherford, K.V. Diegert, and K.F. Alvin. 2002.
Error and uncertainty in modeling and simulation. Reliability Engineering and
System Safety. 75:333-357.
Olofsson, I., and A. Fredriksson. 2005. Strategy for a numerical rock mechanics site
descriptive model: further development of the theoretical/numerical approach.
SKB Rapport R-05-43, ISSN 1402-3091.
Olsson, A.M.J., and G.E. Sandberg. 2002. Latin hypercube sampling for stochastic finite
element analysis. Journal of Engineering Mechanics. 128:121-125.
O'Reilly, K. 1980. The effect of joint plane persistence on rock slope reliability. M.Sc.
Thesis, Massachusetts Institute of Technology, Cambridge, USA. 553 p.
Øren, P.E., and S. Bakke. 2003. Reconstruction of Berea sandstone and pore-scale
modeling of wettability effects. Journal of Petroleum Science and Engineering. 39:177-199.

Okabe, H., and M.J. Blunt. 2005. Pore space reconstruction using multiple-point
statistics. Journal of Petroleum Science and Engineering. 46:121-137.
Okabe, H., and M.J. Blunt. 2007. Pore space reconstruction of vuggy carbonates using
microtomography and multiple-point statistics. Water Resources Research. 43,
W12S02, doi:10.1029/2006WR005680.
Owen, S.J. 1998. A survey of unstructured mesh generation technology. In: Proceedings
of the 7th International Meshing Roundtable, Sandia National Laboratories, USA.
239267.
Page, R.W. 1975. Geochronology of Late Tertiary and Quaternary mineralised intrusive
porphyries of the Star Mountains of Papua New Guinea and Irian Jaya. Economic
Geology. 70:928-936.
Painter, S.L. 2011. Development of discrete fracture network modeling capability.
Presentation to the Nuclear Waste Technical Review Board, Salt Lake City USA.
Painter, S.L., C.W. Gable, N. Makedonska, J. Hyman, T.L. Hsieh, Q. Bui, and H.H. Liu.
2012. Fluid flow model development for representative geological media. Fuel
Cycle Research & Development Report for the Department of Energy Used Fuel
Disposition Campaign, USA. 48 p.
Painter, S.L., C.W. Gable, N. Makedonska, J. Hyman, S. Karra, S. Chu, H.H. Liu, J.
Birkholzer, Y. Wang, W.P. Gardner, and G.Y. Kim. 2014. Modeling fluid flow in
natural systems: model validation and demonstration. Fuel Cycle Research &
Development Report for the Department of Energy Used Fuel Disposition
Campaign, USA. 85 p.
Palmström, A. 1995. A rock mass characterization system for rock engineering
purposes. Ph.D. Thesis. Oslo University, Oslo, Norway. 400 p.
Park, H.J., J.G. Um, and I. Woo. 2008. The evaluation of failure probability for rock slope
based on fuzzy set theory and Monte Carlo simulation. In: Proceedings of the
Tenth International Symposium on Landslides and Engineered Slopes (Volume
2). 7 p.
Park, H.J., J.G. Um, I. Woo, and J.W. Kim. 2012. Application of fuzzy set theory to
evaluate the probability of failure in rock slopes. Engineering Geology. 125:92-101.
Parry, G.W. 1996. The characterization of uncertainty in probabilistic risk assessment of
complex systems. Reliability Engineering and Systems Safety. 54:119-126.
Pascoe, D.M., R.J. Pine, and J.H. Howe. 2014. An extension of probabilistic slope
stability analysis of china clay deposits using geostatistics. In: J.G. Maund and M.
Eddleston (eds) Geohazards in Engineering Geology. Geological Society,
London, Engineering Geology Special Publications. 15:193-197.


Passchier, C.W., and R.A. Trouw. 2005. Microtectonics. Springer-Verlag, Berlin,
Germany. 366 p.
Pearl, J. 1988. Probabilistic reasoning in intelligent systems: networks of plausible
inference. Morgan Kaufmann Publishing, San Mateo, USA. 552 p.
Pelli, F., P.K. Kaiser, and N.R. Morgenstern. 1991. An interpretation of ground
movements recorded during construction of the Donkin-Morien tunnel. Canadian
Geotechnical Journal. 28:239-254.
Peschl, G.M., and H.F. Schweiger. 2003. Reliability analysis in geotechnics with finite
elements – comparison of probabilistic, stochastic and fuzzy set methods. In:
Proceedings of the 3rd International Symposium on Imprecise Probability:
Theories and Applications. 437-451.
Phillips, D.L., J. Dolph, and D. Marks. 1992. A comparison of geostatistical procedures
for spatial analysis of precipitation in mountainous terrain. Agricultural and Forest
Meteorology. 58:119-141.
Pettitt, W., M. Pierce, D. Damjanac, J. Hazzard, L. Lorig, C. Fairhurst, I. Gil, M. Sanchez,
N. Nagel, J. Reyes-Montes, and R.P. Young. 2011. Fracture network engineering
for hydraulic fracturing. The Leading Edge. 30:844-853.
Pierce, M., D. Mas Ivars, P.A. Cundall, and D.O. Potyondy. 2007. A synthetic rock mass
model for jointed rock. In: E. Eberhardt, D. Stead, and T. Morrison (eds) Rock
Mechanics, Meeting Society's Challenges and Demands, Vancouver, Canada.
1:341-349.
Politis, M., E. Kikkinides, M. Kainourgiakis, and A. Stubos. 2008. Hybrid process-based
and stochastic reconstruction method of porous media. Microporous and
Mesoporous Materials. 110:92-99.
Pollard, D.D., and A. Aydin. 1988. Progress in understanding jointing over the past
century. Geological Society of America Bulletin. 100:1181-1204.
Popescu, R., J. Prevost, and G. Deodatis. 1997. Effects of spatial variability on soil
liquefaction: Some design recommendations. Géotechnique. 47:1019-1036.
Potyondy, D.O., and P.A. Cundall. 2007. A bonded-particle model for rock. International
Journal of Rock Mechanics and Mining Sciences & Geomechanics Abstracts.
41:1329-1364.
Pratt, H.R., A. Black, W. Brown, and W. Brace. 1972. The effect of specimen size on the
mechanical properties of unjointed diorite. International Journal of Rock
Mechanics and Mining Sciences & Geomechanics Abstracts. 9:513-516.
Priest, S.D., and J.A. Hudson. 1976. Discontinuity spacings in rock. International Journal
of Rock Mechanics and Mining Sciences & Geomechanics Abstracts. 13:135-148.


Priester, S. 2004. Delaunay triangles. Website:
www.codeguru.com/cpp/cpp/algorithms/general/article.php/c8901/DelaunayTriangles.htm.
Updated Dec. 28th 2004.
Pyrcz, M.J., and C.V. Deutsch. 2003. Declustering and debiasing. In: S. Searston (eds)
Newsletter 19, Melbourne: Geostatistical Association of Australasia. 25 p.
Read, J.R.L. 2009. Data uncertainty. In: Slope Stability, Santiago, Chile. 6 p.
Read, J.R.L., and P. Stacey. 2009. Guidelines for open pit slope design. 1st ed. CSIRO,
Collingwood, Australia. 496 p.
Rivoirard, J. 2005. On simplifications of cokriging. In: O. Leuangthong and C.V. Deutsch
(eds) Geostatistics Banff, Netherlands, Springer. 195-203.
Robertson, A. MacG., and S. Shaw. 2003. Risk management for major geotechnical
structures on mines. In: Proceedings of Computer Applications in the Mineral
Industries, Calgary, Canada. 18 p.
Rocscience Inc. 2013. Phase2. Version 8.014. Software. Toronto, Canada.
Rocscience Inc. 2014. Slide. Version 6.029. Software. Toronto, Canada.
Rockfield Software Ltd. 2013. ELFEN. Version 4.7.1. Software. Swansea, UK.
Rose, N.D., and O. Hungr. 2006. Forecasting potential slope failure in open pit mines –
contingency planning and remediation. International Journal of Rock Mechanics
and Mining Sciences & Geomechanics Abstracts. 44:308-320.
Rosenblueth, E. 1975. Point estimates for probability moments. Proceedings of the
National Academy of Sciences of the United States of America. 72:3812-3814.
Rosenblueth, E. 1981. Two-point estimates in probabilities. Applied Mathematical
Modelling. 5:329-335.
Rubinstein, R.Y. 1981. Simulation and the Monte Carlo method. John Wiley & Sons,
New York, USA. 372 p.
Ruspini, E.H. 1989. The logical foundation of evidential reasoning. Technical Note 408,
AI Center, SRI International, Menlo Park, USA. 33 p.
Rutqvist, J., and O. Stephansson. 2003. The role of hydromechanical coupling in
fractured rock engineering. Hydrogeology Journal. 11:7-40.
Sakakibara, T., S. Shibuya, and S. Kato. 2011. Effects of grain shape on mechanical
behaviors of granular material under plane strain conditions in 3D DEM analyses.
In: Sainsbury, R. Hart, C. Detournay and P. Cundall (eds) Continuum and Distinct
Element Numerical Modeling in Geomechanics, Itasca Consulting Group,
Minneapolis, USA. Paper 08-02. 6 p.


Sánchez-Vila, X., J. Carrera, and J.P. Girardi. 1996. Scale effects in transmissivity.
Journal of Hydrology. 183:1-22.
Sarin, R.K. 1978. Elicitation of subjective probabilities in the context of decision-making.
Decision Sciences. 9:37-48.
Schweiger, H.F., and G.M. Peschl. 2005. Reliability analysis in geotechnics with the
random set finite element method. Computers and Geotechnics. 32:422-435.
Segall, P., and D.D. Pallard. 1983. Joint formation in granitic rock of the Sierra Nevada.
Geological Society of America Bulletin. 94:563-575.
Shafer, G. 1976. A mathematical theory of evidence. Princeton University Press,
Princeton, USA. 314 p.
Shafer, G. 1986. The combination of evidence. International Journal of Intelligent
Systems. 1:155-179.
Shafer, G. 1990. Perspectives on the theory and practice of belief functions.
International Journal of Approximate Reasoning. 3:1-40.
Shafer, G. 1992. The Dempster-Shafer theory. In: S.C. Shapiro (ed.), Encyclopedia of
Artificial Intelligence, 2nd ed. John Wiley & Sons, New York, USA. 330-331.
Shair, A. 1981. The effect of two sets of joints on rock slope reliability. M.Sc. Thesis,
Massachusetts Institute of Technology, Cambridge, USA. 308 p.
Shewchuk, J.R. 2012. Unstructured Mesh Generation. In: Combinatorial Scientific
Computing, eds. U. Naumann, and O. Schenk. 1st ed. Boca Raton, USA: CRC
Press. 259-299 p.
Shin, W.S. 2010. Excavation disturbed zone in Lac du Bonnet granite. Ph.D. Thesis,
University of Alberta, Edmonton, Canada. 247 p.
Singh, R., and G. Sun. 1990. A fracture mechanics approach to rock slope stability. In:
Proceedings of the 14th World Mining Congress, Peking, China. 543-548.
Smets, P. 1988. Belief functions. In: P. Smets, A. Mamdani, D. Dubois, and H. Prade
(eds) Non Standard Logics for Automated Reasoning, Academic Press, London,
UK. 253-286.
Smets, P. 1990. The combination of evidence in the transferable belief model. IEEE
Transactions on Pattern Analysis and Machine Intelligence. 12:447-458.
Smets, P. 1991. Probability of provability and belief functions. Logique et Analyse.
133-134:174-195.
Smets, P., and R. Kennes. 1994. The transferable belief model. Artificial Intelligence.
66:191-234.


Smets, P. 1998. Theories of uncertainty. In: E.H. Ruspini, P.P. Bonissone and W. Pedrycz
(eds) Handbook of fuzzy computation. Institute of Physics Publications. 14 p.
Smith, C.A.B. 1961. Consistency in statistical inference and decision. Journal of the
Royal Statistical Society. B23:1-37.
Smith, C.A.B. 1967. Personal probability and statistical analysis. Journal of the Royal
Statistical Society. A128:469-499.
Snow, D.T. 1965. A parallel plate model of fractured permeable media. Ph.D.
Dissertation, University of California, Berkeley, USA. 330 p.
Sornette, A., P. Davy, and D. Sornette. 1993. Fault growth in brittle-ductile experiments
and the mechanics of continental collisions. Journal of Geophysical Research.
B7:12111-12139.
Sonmez, H., C. Gokceoglu, and R. Ulusay. 2003. An application of fuzzy sets to the
geological strength index (GSI) system used in rock engineering. Engineering
Applications of Artificial Intelligence. 16:251-269.
Srivastava, R.M., and H.M. Parker. 1989. Robust measures of spatial continuity. In: M.
Armstrong (eds) Geostatistics, Kluwer Academic Publishers, Alphen aan den
Rijn, Netherlands. 1:295-308.
Srivastava, A. 2012. Spatial variability modelling of geotechnical parameters and stability
of highly weathered rock slope. Indian Geotechnical Journal, 42:179-185.
SRK. 2012. Comparison of Laubscher and Bieniawski RMR values. Report Prepared for
OTML. Perth, Australia. 7 p.
SRK. 2013a. West wall depressurisation modeling preliminary results (MLE 2013).
Report Prepared for OTML. Vancouver, Canada. 32 p.
SRK. 2013b. Addendum to the west wall depressurisation modelling report (MLE 2013)
dated May 2013. Report Prepared for OTML. Vancouver, Canada. 19 p.
SRK. 2013c. MLE geotechnical studies for the proposed Gold Coast underground mine
at Ok Tedi. Report Prepared for OTML. Perth, Australia. 155 p.
Staub, I., A. Fredriksson, and N. Outters. 2002. Strategy for a rock mechanics site
descriptive model: development and testing of the theoretical approach.
Stockholm: Svensk Kärnbränslehantering AB. 236 p.
Stead, D., E. Eberhardt, and J.S. Coggan. 2006. Developments in the characterization of
complex rock slope deformation and failure using numerical modelling
techniques. Engineering Geology. 83:217-235.
Stead, D., and E. Eberhardt. 2013. Understanding the mechanics of large landslides.
Italian Journal of Engineering Geology and Environment Book Series. 6:85-112.
DOI: 10.4408/IJEGE.2013-06.B-07.

Steffen, O.K.H. 1997. Planning of open pit mines on a risk basis. Journal of South
African Institute of Mining and Metallurgy. 97:47-56.
Steffen, O.K.H., and L.F. Contreras. 2007. Mine planning-its relationship to risk
management. In: Proceedings of the International Symposium on Stability of
Rock Slopes in Open Pit Mining and Civil Engineering, Perth, Australia. 17 p.
Steffen, O.K.H., L.F. Contreras, P.J. Terbrugge, and J. Venter. 2008. A risk evaluation
approach for pit slope design. In: Proceedings of the 42nd US Rock Mechanics
Symposium and 2nd US-Canada Rock Mechanics Symposium, San Francisco,
USA. 18 p.
Tang, B. 1993. Orthogonal array-based Latin hypercubes. Journal of the American
Statistical Association. 88:1392-1397.
Tang, C., and J.A. Hudson. 2010. Rock Failure Mechanisms: Illustrated and Explained.
CRC Press, Taylor & Francis Group, Boca Raton, USA. 364 p.
Tapia, A., L.F. Contreras, M.G. Jefferies, and O. Steffen. 2007. Risk evaluation of slope
failure at the Chuquicamata mine. In: Y. Potvin (eds) Proceedings of the
International Symposium on Rock Slope Stability in Open Pit Mining and Civil
Engineering. Slope Stability 2007, Perth, Australia. 477-495.
Terbrugge, P.J., J. Wesseloo, J. Venter, and O.K.H. Steffen. 2006. A risk consequence
approach to open pit slope design. The Journal of the South African Institute of
Mining and Metallurgy. 106:503-511.
Tintner, G. 1941. The theory of choice under subjective risk and uncertainty.
Econometrica. 9:298-304.
Tuckey, Z., D. Stead, M. Havaej, F. Gao, and M. Sturzenegger. 2012. Towards an
integrated field mapping-numerical modelling approach for characterising
discontinuity persistence and intact rock bridges in large open pits. In:
Proceedings of the Canadian Geotechnical Society (Geo Manitoba), Winnipeg,
Canada. 15 p.
Tuckey, Z. 2012. An integrated field mapping-numerical modelling approach to
characterising discontinuity persistence and intact rock bridges in large open pit
slopes. M.Sc. Thesis, Simon Fraser University, Burnaby, Canada. 440 p.
Vann, J., O. Bertoli, and S. Jackson. 2002. An overview of geostatistical simulation for
quantifying risk. In: Proceedings of Geostatistical Association of Australasia
Symposium Quantifying Risk and Error, Perth, Australia. 12 p.
Veneziano, D. 1978. Probabilistic model of joints in rock. Unpublished manuscript,
Massachusetts Institute of Technology, Cambridge, USA.
Vieira, S.R., T.L. Hatfield, D.R. Nielsen, and J.W. Biggar. 1983. Geostatistical theory and
application to variability of some agronomical properties. Hilgardia. 51:1-75.


Vieira, S.R., J. Millete, G.C. Topp, and W.D. Reynolds. 2002. Handbook for geostatistical
analysis of variability in soil and climate data. In: V.H. Alvarez, C.R. Schaefer,
N.F. Barros, J.W.V. Mello, and L.M. Costa (eds) Tópicos em Ciência do Solo.
Viçosa: Sociedade Brasileira de Ciência do Solo. 2:1-45.
Vieira, S.R., J.R.P.D. Carvalho, M.B. Ceddia, and A.P. González. 2010. Detrending
non-stationary data for geostatistical applications. Bragantia. 69:01-08.
Wackernagel, H. 2003. Multivariate geostatistics. Springer, Berlin, Germany. 388 p.
Walley, P. 1991. Statistical reasoning with imprecise probabilities. Chapman and Hall,
London, United Kingdom. 720 p.
Wang, P. 2001. Confidence as higher-order uncertainty. In: The 2nd International
Symposium on Imprecise Probabilities and Their Applications, Ithaca, USA. 10 p.
Weichselberger, K. 2000. The theory of interval-probability as a unifying concept for
uncertainty. International Journal of Approximate Reasoning. 24:149-170.
Wen, R., and R. Sinding-Larsen. 1997. Stochastic modelling and simulation of small
faults by marked point processes and kriging. In: E.Y. Baafi, and N.A. Schofield
(eds) Geostatistics Wollongong '96. Dordrecht: Kluwer Academic. 398-414.
Wiles, T.D. 2006. Reliability of numerical modelling predictions. International Journal of
Rock Mechanics and Mining Sciences & Geomechanics Abstracts. 43:454-472.
Wong, F.S. 1985. First-order, second-moment methods. Computers and Structures.
20:779-791.
Wyllie, D.C., and C.W. Mah. 2004. Rock slope engineering: Civil and mining (4th edition).
Taylor & Francis, New York, USA. 431 p.
Xu, C., and P. Dowd. 2010. A new computer code for discrete fracture network
modelling. Comput. Geosci. 36:292-301.
Xu, W., T.T. Tran, R.M. Srivastava, and A.G. Journel. 1992. Integrating seismic data in
reservoir modelling: the collocated cokriging alternative. In: Proceedings of the
67th Annual Technical Conference and Exhibition of the Society of Petroleum
Engineers, Washington, DC, USA. 833-842.
Yang, J., H.Z. Huang, L.P. He, S.P. Zhu, and D. Wen. 2011. Risk evaluation in failure
mode and effects analysis of aircraft turbine rotor blades using Dempster-Shafer
evidence theory under uncertainty. Engineering Failure Analysis. 18:2084-2092.
Yoe, C. 2011. Primer on risk analysis. CRC Press. 237 p.
Yoon, J. 2007. Application of experimental design and optimization to PFC model
calibration in uniaxial compression simulation. International Journal of Rock
Mechanics and Mining Sciences. 44:871-889.


Youn, D.D., K.K. Choi, and L. Du. 2007. Integration of possibility-based optimization and
robust design for epistemic uncertainty. Journal of Mechanical Design. 129:876-882.
Yule, A.U., and M.G. Kendall. 1950. An introduction to the theory of statistics. Hafner
Publishing Company, New York, USA. 701 p.
Zadeh, L.A. 1965. Fuzzy sets. Information and Control. 8:338-353.
Zadeh, L.A. 1968. Probability measures of fuzzy events. Journal of Mathematical
Analysis and Applications. 23:421-427.
Zadeh, L.A. 1978. Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets and
Systems. 1:3-28.
Zadeh, L.A. 1984. Review of books: a mathematical theory of evidence. The AI
Magazine. 5:81-83.
Zadeh, L.A. 2002. Toward a perception-based theory of probabilistic reasoning with
imprecise probabilities. Journal of Statistical Planning and Inference. 105:233-264.
Zadeh, L.A. 2005. Toward a generalized theory of uncertainty (GTU) - an outline.
Information Sciences. 172:1-40.
Zhang, H., R.L. Mullen, and R.L. Muhanna. 2010. Interval Monte Carlo methods for
structural reliability. Structural Safety. 32:183-190.
Zhang, Q., H. Zhu, L. Zhang, and X. Ding. 2011. Study of scale effect on intact rock
strength using particle flow modeling. International Journal of Rock Mechanics
and Mining Sciences & Geomechanics Abstracts. 48:1320-1328.
Zhang, T.J., W.G. Cao, and M.H. Zhao. 2009. Application of fuzzy sets to geological
strength index (GSI) system used in rock slope. Soils and Rock Instrumentation,
Behavior, and Modeling. 30-35 p. DOI 10.1061/41046(353)5.
Zhang, Y. 2014. Modelling hard rock pillars using a Synthetic Rock Mass approach. Ph.D.
Thesis, Simon Fraser University, Vancouver, Canada. 247 p.


Appendices


Appendix A.
Hoek-Brown Criterion
Hoek and Brown (1980a, 1980b) introduced their failure criterion in the early 1980s,
based on empirical results derived from brittle failure tests of intact rock by Hoek (1968) and
jointed rock mass modelling results by Brown (1970). The criterion reduces intact rock strengths
by a factor based on the fracture characteristics of the rock mass. This reduction factor was
originally based on the Rock Mass Rating (RMR) system devised by Bieniawski (1976) and was later
revised to use the Geological Strength Index (GSI) introduced by Hoek (1994) and Hoek et al.
(1995).
The criterion assumes that intact rock particles have sufficient degrees of freedom to allow
sliding and/or rotation, without significant inter-particle locking (Hoek 2012). For example, a
rock mass composed of angular blocks with rough discontinuity surfaces will exhibit a greater
degree of inter-particle locking, and hence stronger rock mass characteristics, than one
composed of smooth-walled, rounded particles. This has led to some criticism, as researchers
have noted an over-reliance on mode II (shear) type failure in the Hoek-Brown system (Wyllie
and Mah 2004). Although these limitations exist, the Hoek-Brown failure criterion has been
widely accepted within the geotechnical community owing to its ease of use and the lack of
suitable alternatives (Hoek et al. 2002).
In comparison to earlier linear failure criteria, such as the Mohr-Coulomb model, the
relationship between the major and minor principal effective stresses within the Hoek-Brown
system is assumed to be stress dependent. This assumption results in the failure criterion
being described by the non-linear function (Hoek et al. 2002):

\sigma_1' = \sigma_3' + \sigma_{ci} \left( m_b \frac{\sigma_3'}{\sigma_{ci}} + s \right)^a    (Equation A.1)

where \sigma_1' and \sigma_3' are the major and minor effective principal stresses at failure,
\sigma_{ci} is the uniaxial compressive strength of intact rock samples, m_b is the modified
material constant, and s and a are rock mass constants. The modified material constant (m_b) is
estimated from the unmodified material constant (m_i) by the equation:

m_b = m_i \exp \left( \frac{GSI - 100}{28 - 14D} \right)    (Equation A.2)

where GSI is the Geological Strength Index, defined by the block size and fracture condition
(Hoek 1994; Hoek et al. 1995; Hoek et al. 1998; Marinos et al. 2005), and D is a disturbance
factor which depends on the degree of rock mass disturbance from blasting and stress relaxation
(Hoek et al. 2002; Hoek 2012). Estimation of the rock mass constants (s, a) is given by the
functions:

s = \exp \left( \frac{GSI - 100}{9 - 3D} \right)    (Equation A.3)

a = \frac{1}{2} + \frac{1}{6} \left( e^{-GSI/15} - e^{-20/3} \right)    (Equation A.4)
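As a quick reference, Equations A.1 to A.4 can be evaluated in a few lines of code. The
following Python sketch is illustrative only (the thesis performed its calculations within
FLAC/FISH and spreadsheet tools); the function and variable names are assumptions:

```python
import math

def hoek_brown_constants(gsi, mi, D=0.0):
    """Rock mass constants mb, s and a (Equations A.2 to A.4)."""
    mb = mi * math.exp((gsi - 100.0) / (28.0 - 14.0 * D))
    s = math.exp((gsi - 100.0) / (9.0 - 3.0 * D))
    a = 0.5 + (math.exp(-gsi / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    return mb, s, a

def hoek_brown_sigma1(sig3, sigci, gsi, mi, D=0.0):
    """Major principal effective stress at failure (Equation A.1)."""
    mb, s, a = hoek_brown_constants(gsi, mi, D)
    return sig3 + sigci * (mb * sig3 / sigci + s) ** a
```

For an undisturbed intact sample (GSI = 100, D = 0) the constants reduce to m_b = m_i, s = 1
and a = 0.5, so at zero confinement the criterion returns the intact strength \sigma_{ci}.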

One of the main problems in using the Hoek-Brown criterion is deciding when it is applicable.
The criterion was originally designed to describe the failure of continuum-type material and
makes a number of assumptions about the rock mass (Hoek 1983):
- rock mass failure is controlled by translation and rotation of individual blocks;
- failure of intact rock does not play a significant role in overall rock mass failure;
- the jointing pattern is sufficiently chaotic to assume isotropic behaviour.
Due to these assumptions the criterion is not applicable when failure occurs along a dominant
discontinuity set(s) (Brown 2008). However, when the size of the discontinuities becomes
sufficiently small compared to the sample size, and dominant discontinuity orientations are
lacking, the criterion can be applied by assuming that the rock mass acts as a continuum (Hoek
et al. 2002). As a general rule of thumb, the Hoek-Brown failure criterion is relatively
accurate when applied to rock masses with GSI values between 30 and 70, which coincides with
the range used in its development (Carter et al. 2007). However, the system breaks down in very
weak and very strong rocks, as rock mass failure ceases to be controlled by translation and
rotation of individual blocks. Under these conditions modifications to the traditional failure
criterion are required, as the failure behaviour transitions from inter-block to intact rock
controlled.
At the low end of the rock strength scale (intact UCS < 0.5 MPa), material typically behaves as
a soil-like substance, whose behaviour can be defined by the Mohr-Coulomb strength criterion
(Carter et al. 2007; Carvalho et al. 2007). It is only after the UCS exceeds 10-15 MPa,
coinciding with the complete transition to inter-block controlled failure, that the system
behaves as a Hoek-Brown material (Hoek et al. 2002). Between these two extremes a transition
zone exists, in which the material transitions from more linear, soil-like behaviour to
non-linear rock mass behaviour (Brown 2008). Carvalho et al. (2008) defined a transition
function f_T(\sigma_{ci}) between these two extremes, which equals 1 for
\sigma_{ci} \le 0.5 MPa and decays toward zero for \sigma_{ci} > 0.5 MPa (Equation A.5). The
function is incorporated into the Hoek-Brown criterion by modifying the a, s and m_b
parameters, such that each parameter is interpolated between its linear, soil-like limit (where
f_T = 1) and its conventional Hoek-Brown value (where f_T = 0) (Equations A.6 to A.8).

At the upper end of the rock mass competency scale, behaviour transitions from inter-block to
intact rock controlled failure (Carter et al. 2008). The behavioural change coincides with the
onset of crack coalescence in low m_i rocks, and with the crack initiation strength in high m_i
rocks (Carvalho et al. 2008). In the latter, the effects of moderate jointing are suppressed,
as failure is dominated by mode I crack propagation when the in-situ stresses are below the
spalling limit. At these low confining conditions a modification to the Hoek-Brown criterion is
required. Diederichs (2007) proposed the following modification to the criterion for
spall-prone rocks:

m_{peak} = s_{peak} \left( \frac{\sigma_{ci}}{|\sigma_t|} \right)    (Equation A.9)

s_{peak} = \left( \frac{CI}{\sigma_{ci}} \right)^{1/a_{peak}}    (Equation A.10)

a_{peak} = 0.25    (Equation A.11)

a_{res} = 0.75    (Equation A.12)

where m_{peak} is the modified Hoek-Brown material constant, s_{peak} and a_{peak} are the
modified Hoek-Brown rock mass constants, CI is the crack initiation stress and \sigma_t is the
tensile strength. The use of modified values for both peak and residual parameters represents
the spalling limit, and is based on a recommended range of \sigma_1/\sigma_3 of 7 to 10 when
using a residual s of 0 and a residual m of 6 to 8. The transition between spalling and shear
behaviour is modelled by interpolating each Hoek-Brown parameter P (the material constant m and
the rock mass constants s and a) between its spalling-based and conventional (GSI-based)
values, using a sigmoidal weighting function of the intact strength (Carter et al. 2007)
(Equations A.13 and A.14).
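The peak spalling parameters can be computed directly from the crack initiation stress and the
tensile strength. The Python sketch below assumes the forms
s_peak = (CI/\sigma_{ci})^{1/a_peak} and m_peak = s_peak \sigma_{ci}/|\sigma_t| with
a_peak = 0.25; these assumed forms should be verified against Diederichs (2007) before use:

```python
def disl_peak_parameters(sigci, ci, sig_t, a_peak=0.25):
    """Peak parameters for spall-prone rock (after Diederichs 2007).

    Assumed forms (verify against the source): s_peak = (CI / sigci) ** (1 / a_peak)
    and m_peak = s_peak * sigci / |sig_t|.
    """
    s_peak = (ci / sigci) ** (1.0 / a_peak)
    m_peak = s_peak * sigci / abs(sig_t)
    return m_peak, s_peak, a_peak
```

For example, with \sigma_{ci} = 200 MPa, CI = 100 MPa and \sigma_t = -10 MPa, the sketch gives
s_peak = 0.5^4 = 0.0625 and m_peak = 0.0625 x 200 / 10 = 1.25.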

Appendix B.
Correlograms
Experimental correlogram structures for both the GSI and UCS were calculated within a
self-written C++ program. The experimental correlograms were then fit using least squares
regression techniques within the Microsoft software package Excel. Models were fit such that
the dispersion variance within the simulation zone was equal to 1.0, using the method proposed
by Journel and Huijbregts (1978). The GSI correlograms were fit with two nested exponential
structures and zero nugget effect, while the UCS correlograms were fit with a single
exponential model and a relatively high nugget effect (Table B.1).
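The fitted model form, 1 - ρ(h) = nugget + Σ_i sill_i (1 - exp(-3h/range_i)), can be evaluated
as follows. This is an illustrative Python sketch (the thesis used a self-written C++ program
and Excel); the function name is an assumption, and the example parameters are the Monzonite
Porphyry GSI values from Table B.1:

```python
import math

def correlogram_model(h, nugget, structures):
    """Nested exponential correlogram model, 1 - rho(h).

    structures: list of (sill, range_m) pairs; each structure uses the
    practical-range convention sill * (1 - exp(-3h / range)).
    """
    if h <= 0.0:
        return 0.0  # no variability at zero separation
    gamma = nugget
    for sill, range_m in structures:
        gamma += sill * (1.0 - math.exp(-3.0 * h / range_m))
    return gamma

# Monzonite Porphyry GSI: two nested structures, zero nugget (Table B.1)
gsi_model = [(0.61, 41.0), (0.44, 489.0)]
gamma_100m = correlogram_model(100.0, 0.0, gsi_model)
```

At large lags the model approaches the total sill (here 0.61 + 0.44 = 1.05), consistent with
fitting the dispersion variance within the simulation zone to approximately 1.0.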

Table B.1    Constraints for the normal score correlograms used in the FLAC simulation.

                      |              GSI                        |        UCS (MPa)
                      | Exponential Model I | Exponential Model II | Exponential Model
Geotechnical Unit     | Sill   Range (m)    | Sill   Range (m)     | Nugget  Sill   Range (m)
Monzonite Porphyry    | 0.61   41           | 0.44   489           | 0.30    0.72   128
Monzodiorite          | 0.49   49           | 0.57   434           | 0.38    0.66   214
Endoskarn             | 0.69   38           | 0.32   149           | 0.47    0.55   97
Skarn                 | 0.88   52           | 0.14   335           | 0.74    0.26   81
Darai Upper           | 1.00   24           | 0.00   381           | 0.00    1.01   37
Darai Lower           | 0.81   43           | 0.25   1000          | 0.54    0.50   369
Ieru Upper            | 0.76   43           | 0.29   630           | 0.21    0.82   143
Ieru Lower            | 0.86   88           | 0.18   614           | 0.25    0.81   318
Pnyang                | 1.00   27           | 0.00   381           | 0.21    0.82   143
Thrust Faults         | 0.92   40           | 0.10   513           | 0.27    0.75   107

Figures B.1 to B.20 plot 1 - ρ(h) against lag distance (m, logarithmic scale), comparing the
experimental correlogram with the fitted correlogram model:

Figure B.1    Monzonite Porphyry GSI normal score correlogram.
Figure B.2    Monzodiorite GSI normal score correlogram.
Figure B.3    Endoskarn GSI normal score correlogram.
Figure B.4    Skarn GSI normal score correlogram.
Figure B.5    Darai Upper GSI normal score correlogram.
Figure B.6    Darai Lower GSI normal score correlogram.
Figure B.7    Ieru Upper GSI normal score correlogram.
Figure B.8    Ieru Lower GSI normal score correlogram.
Figure B.9    Pnyang GSI normal score correlogram.
Figure B.10   Thrust Zones GSI normal score correlogram.
Figure B.11   Monzonite Porphyry UCS normal score correlogram.
Figure B.12   Monzodiorite UCS normal score correlogram.
Figure B.13   Endoskarn UCS normal score correlogram.
Figure B.14   Skarn UCS normal score correlogram.
Figure B.15   Darai Upper UCS normal score correlogram.
Figure B.16   Darai Lower UCS normal score correlogram.
Figure B.17   Ieru Upper UCS normal score correlogram.
Figure B.18   Ieru Lower UCS normal score correlogram.
Figure B.19   Pnyang UCS normal score correlogram. (No data: simulations utilized the Upper
              Ieru variogram.)
Figure B.20   Thrust Zones UCS normal score correlogram.

Appendix C.
Sequential Gaussian Simulation Code
Spatial heterogeneity was simulated in Chapter 3 using the sequential Gaussian
simulation algorithm (Journel and Huijbregts 1978; Goovaerts 1997; Dowd 1992; Nowak and
Verly 2007). The algorithm was directly incorporated into the Itasca code FLAC using the
integrated FISH scripting language. The following provides an overview of the basic code used
to conduct the simulation.

Required input variables:


_cells = total number of cells to be simulated
_nMax = maximum neighbours used in the analysis
_sArray(i, j) = 2D search array, with the i-index characterizing the unique cell identification
number, and the j-index defining the [1] FLAC i-grid index, [2] FLAC j-grid index, [3] attribute
normal score
_Range = variogram range
_Sill = variogram sill
_Nugget = variogram nugget effect

General Routines

def RandVal
; draw a standard normal deviate, truncated at three standard deviations from the mean
_rnd = grand
if _rnd > 3 then
_rnd = 3
endif
if _rnd < -3 then
_rnd = -3
endif
end

def GenVariables
; general array variables required to conduct the SGS analysis
array _Near(3, _nMax)
array _Dist(_nMax, _nMax)
array _Covar(_nMax, _nMax)
array _iCovar(_nMax, _nMax)
array _sCovar(_nMax)
array _kWeight(_nMax)
$var = 1 ; needed to prevent FLAC from crashing when only arrays are assigned
end

Stage 1 - Scramble:
The first stage in the algorithm is to randomly scramble _sArray; this is done using the
Fisher-Yates shuffle.

def Scramble
loop hi (1, _cells)
; select a random position in the unshuffled remainder of the array
$rndLoc = int( urand * (_cells - hi + 1) ) + hi
; guard against an out-of-range index (0 is not a valid array index)
if $rndLoc = 0 then
$rndLoc = _cells
endif
; swap the hi-th and $rndLoc-th entries
$i = _sArray(hi, 1)
$j = _sArray(hi, 2)
_sArray(hi, 1) = _sArray($rndLoc, 1)
_sArray(hi, 2) = _sArray($rndLoc, 2)
_sArray($rndLoc, 1) = $i
_sArray($rndLoc, 2) = $j
endloop
end
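For reference, the same in-place Fisher-Yates shuffle can be written in Python; this is an
illustrative sketch operating on a plain list rather than the FISH _sArray:

```python
import random

def fisher_yates_shuffle(cells, rng=random.random):
    """In-place Fisher-Yates shuffle: every ordering is equally likely."""
    n = len(cells)
    for hi in range(n - 1):
        # pick a random index in [hi, n-1] and swap it into position hi
        rnd_loc = hi + int(rng() * (n - hi))
        cells[hi], cells[rnd_loc] = cells[rnd_loc], cells[hi]
    return cells
```

Because each position is swapped with an index drawn from the not-yet-fixed remainder, the
shuffle is unbiased, which is what the random simulation path in the SGS algorithm requires.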


Stage 2 - Normal Score Simulation:
The following is the main SGS simulation code. It assumes the variograms/correlograms are
characterized within normal score space.

def SGS
; assign initial seed values; this can be removed if at least
; _nMax nodes are known prior to the simulation
loop hi (1, _nMax)
RandVal
_sArray(hi, 3) = _rnd
endloop

; conduct the main SGS algorithm
loop hi (_nMax + 1, _cells)
; determine the i and j grid indices of the hi-th cell
$iPos = _sArray(hi, 1)
$jPos = _sArray(hi, 2)

; reset the nearest neighbour distances to a large value
loop nx (1, _nMax)
_Near(2, nx) = 1e20
endloop

; determine the nearest neighbours from cells which already have a generated value
loop hii (1, hi - 1)
; retrieve the i, j position of the cell being sampled
$iAlt = int(_sArray(hii, 1))
$jAlt = int(_sArray(hii, 2))
; store the search array location and sampled normal score value for later
$loc1 = hii
$var1 = _sArray(hii, 3)
; calculate the euclidean distance between the sampled cell and the cell of interest
$y = y($iPos, $jPos) - y($iAlt, $jAlt)
$x = x($iPos, $jPos) - x($iAlt, $jAlt)
$dist1 = sqrt( ($y)^2 + ($x)^2 )

; loop through the nearest neighbours; if the candidate is closer than a stored
; neighbour, insert it and shift the displaced neighbour down the list
loop nx (1, _nMax)
if $dist1 < _Near(2, nx) then
$loc2 = _Near(1, nx)
$dist2 = _Near(2, nx)
$var2 = _Near(3, nx)
_Near(1, nx) = $loc1
_Near(2, nx) = $dist1
_Near(3, nx) = $var1
$loc1 = $loc2
$dist1 = $dist2
$var1 = $var2
endif
endloop
endloop

; calculate an _nMax by _nMax matrix of the inter-neighbour separation distances
loop nx (1, _nMax)
loop mx (1, _nMax)
; obtain the i, j coordinates of the two neighbours from the search array
$iPos = int(_sArray(_Near(1, nx), 1))
$jPos = int(_sArray(_Near(1, nx), 2))
$iAlt = int(_sArray(_Near(1, mx), 1))
$jAlt = int(_sArray(_Near(1, mx), 2))
; calculate the euclidean distance between the two points
$y = y($iPos, $jPos) - y($iAlt, $jAlt)
$x = x($iPos, $jPos) - x($iAlt, $jAlt)
_Dist(nx, mx) = sqrt( ($y)^2 + ($x)^2 )
endloop
endloop

; convert the distance matrix to a covariance matrix
loop nx (1, _nMax)
loop mx (1, _nMax)
if _Dist(nx, mx) < _Range then
; covariance for an exponential variogram model; the nugget contributes
; only at zero separation; the code needs to be edited if another
; variogram model is present
if _Dist(nx, mx) = 0.0 then
_Covar(nx, mx) = _Sill + _Nugget
else
_Covar(nx, mx) = exp((-3.0 * _Dist(nx, mx)) / _Range) * _Sill
endif
else
; the covariance is zero once the sill has been reached
_Covar(nx, mx) = 0.0
endif
endloop
endloop

; calculate the inverse of the covariance matrix
$var = mat_inverse(_Covar, _iCovar)

; calculate the covariance vector between the cell of interest and its neighbours
loop nx (1, _nMax)
if _Near(2, nx) < _Range then
_sCovar(nx) = exp((-3.0 * _Near(2, nx)) / _Range) * _Sill
else
; the covariance is zero once the sill has been reached
_sCovar(nx) = 0.0
endif
endloop

; reset the kriging weights
loop nx (1, _nMax)
_kWeight(nx) = 0.0
endloop

; calculate the kriging weights (w = C^-1 c)
loop nx (1, _nMax)
loop mx (1, _nMax)
_kWeight(nx) = _kWeight(nx) + _sCovar(mx) * _iCovar(nx, mx)
endloop
endloop

; calculate the simple kriging estimate (zero mean in normal score space)
$kR = 0.0
loop nx (1, _nMax)
$kR = $kR + _Near(3, nx) * _kWeight(nx)
endloop

; calculate the kriging variance, starting from the total sill of 1.0
$kVar = 1.0
loop nx (1, _nMax)
$kVar = $kVar - _sCovar(nx) * _kWeight(nx)
endloop

; draw the simulated normal score from the kriging mean and variance
RandVal
_sArray(hi, 3) = $kR + _rnd * sqrt($kVar)

endloop
end
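The same sequence of steps (random path, nearest-neighbour search, simple kriging, Gaussian
draw) can be compressed into a short sketch for a 1D section. This is an illustrative Python
re-implementation under the same assumptions (normal score space, unit total sill, exponential
covariance), not a translation of the FISH code; all names are hypothetical:

```python
import numpy as np

def sgs_1d(x, n_max=8, sill=1.0, range_m=100.0, nugget=0.0, seed=0):
    """Sequential Gaussian simulation of normal scores at 1D coordinates x."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    path = rng.permutation(n)                 # random visiting order
    z = np.full(n, np.nan)

    def cov(h):
        # exponential covariance; nugget contributes only at zero separation
        c = np.where(h < range_m, sill * np.exp(-3.0 * h / range_m), 0.0)
        return np.where(h == 0.0, sill + nugget, c)

    done = []                                  # indices already simulated
    for idx in path:
        if len(done) == 0:
            z[idx] = rng.standard_normal()     # unconditional seed value
            done.append(idx)
            continue
        # n_max nearest previously simulated neighbours
        d = np.abs(x[done] - x[idx])
        pts = np.array(done)[np.argsort(d)[:n_max]]
        # simple kriging system: C w = c
        C = cov(np.abs(x[pts][:, None] - x[pts][None, :]))
        c = cov(np.abs(x[pts] - x[idx]))
        w = np.linalg.solve(C, c)
        mean = float(w @ z[pts])
        var = max(sill + nugget - float(w @ c), 0.0)
        z[idx] = mean + rng.standard_normal() * np.sqrt(var)
        done.append(idx)
    return z
```

Simulated values at closely spaced nodes are positively correlated, which is the behaviour the
verification correlograms in Appendix D test for.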


Appendix D.
Verification of Sequential Gaussian Simulation Code
The Sequential Gaussian Simulation (SGS) code was programmed within the Itasca (2014)
software FLAC, using the integrated FISH language. A full description of the algorithm is
provided in Section 3.4.3, with the FISH code provided in Appendix C. The following provides a
series of verification plots, confirming the accurate reproduction of the correlogram and
cumulative distribution plots for the Ok Tedi dataset. The results represent a single model
realization generated using Monte Carlo simulation techniques, so some natural drift exists.
This drift averages out across simulations, resulting in good overall reproducibility between
the model simulations and the actual data.
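The verification statistic plotted below, 1 - ρ(h), can be estimated from a simulated field by
correlating all pairs of values that fall within each lag bin. The following Python sketch
illustrates that check for a 1D section (the thesis computed its correlograms within a
self-written C++ program); the function name is hypothetical:

```python
import numpy as np

def experimental_correlogram(x, z, lag_edges):
    """Estimate 1 - rho(h) for 1D data: pair correlation per lag bin."""
    x = np.asarray(x, float)
    z = np.asarray(z, float)
    h = np.abs(x[:, None] - x[None, :])      # pairwise separation distances
    i, j = np.triu_indices(len(x), k=1)      # each pair counted once
    out = []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        mask = (h[i, j] >= lo) & (h[i, j] < hi)
        if mask.sum() < 2:
            out.append(np.nan)               # too few pairs in this bin
            continue
        a, b = z[i[mask]], z[j[mask]]
        rho = np.corrcoef(a, b)[0, 1]        # head-tail correlation at this lag
        out.append(1.0 - rho)
    return np.array(out)
```

A perfectly spatially continuous field gives 1 - ρ(h) near zero at short lags, while an
uncorrelated field gives values near one, which is how the plots below are read.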

Figures D.1 to D.18 plot 1 - ρ(h) against lag distance (m, logarithmic scale), comparing the
experimental correlogram, the fitted correlogram model, and the correlogram reproduced by the
FLAC simulation:

Figure D.1    Verification of Monzonite Porphyry GSI normal score correlogram.
Figure D.2    Verification of Monzodiorite GSI normal score correlogram.
Figure D.3    Verification of Endoskarn GSI normal score correlogram.
Figure D.4    Verification of Skarn GSI normal score correlogram.
Figure D.5    Verification of Darai Upper GSI normal score correlogram.
Figure D.6    Verification of Darai Lower GSI normal score correlogram.
Figure D.7    Verification of Ieru Upper GSI normal score correlogram.
Figure D.8    Verification of Ieru Lower GSI normal score correlogram.
Figure D.9    Verification of Pnyang GSI normal score correlogram.
Figure D.10   Verification of Thrust Zones GSI normal score correlogram.
Figure D.11   Verification of Monzonite Porphyry UCS normal score correlogram.
Figure D.12   Verification of Monzodiorite UCS normal score correlogram.
Figure D.13   Verification of Endoskarn UCS normal score correlogram.
Figure D.14   Verification of Skarn UCS normal score correlogram.
Figure D.15   Verification of Darai Upper UCS normal score correlogram.
Figure D.16   Verification of Darai Lower UCS normal score correlogram.
Figure D.17   Verification of Ieru Upper UCS normal score correlogram.
Figure D.18   Verification of Ieru Lower UCS normal score correlogram.

1.50

No Data: simulations utilized Upper Ieru variogram


1.25

1 - (h)

1.00

0.75

0.50

0.25

0.00
1.0

10.0

100.0

1,000.0

Lag Distance (m)


Experimental Correlogram

Figure D.19

Correlogram Model

FLAC Correlogram

Verification of Pnyang UCS normal score correlogram.

206

1.50

1.25

1 - (h)

1.00

0.75

0.50

0.25

0.00
1.0

10.0

100.0

1,000.0

Lag Distance (m)


Experimental Correlogram

Figure D.20

Correlogram Model

FLAC Correlogram

Verification of Thrust Zones UCS normal score correlogram.

Figures D.21 to D.40: cumulative percentage plotted against Geological Strength Index (Figures D.21 to D.30) or uniaxial compressive strength, MPa (Figures D.31 to D.40), comparing the experimental, modelled, and FLAC-result distributions for each geotechnical unit.

Figure D.21	Verification of Monzonite Porphyry GSI cumulative density plot.
Figure D.22	Verification of Monzodiorite GSI cumulative density plot.
Figure D.23	Verification of Endoskarn GSI cumulative density plot.
Figure D.24	Verification of Skarn GSI cumulative density plot.
Figure D.25	Verification of Darai Upper GSI cumulative density plot.
Figure D.26	Verification of Darai Lower GSI cumulative density plot.
Figure D.27	Verification of Ieru Upper GSI cumulative density plot.
Figure D.28	Verification of Ieru Lower GSI cumulative density plot.
Figure D.29	Verification of Pnyang GSI cumulative density plot.
Figure D.30	Verification of Thrust Zones GSI cumulative density plot.
Figure D.31	Verification of Monzonite Porphyry UCS cumulative density plot.
Figure D.32	Verification of Monzodiorite UCS cumulative density plot.
Figure D.33	Verification of Endoskarn UCS cumulative density plot.
Figure D.34	Verification of Skarn UCS cumulative density plot.
Figure D.35	Verification of Darai Upper UCS cumulative density plot.
Figure D.36	Verification of Darai Lower UCS cumulative density plot.
Figure D.37	Verification of Ieru Upper UCS cumulative density plot.
Figure D.38	Verification of Ieru Lower UCS cumulative density plot.
Figure D.39	Verification of Pnyang UCS cumulative density plot. (No data: simulations utilized Upper Ieru data.)
Figure D.40	Verification of Thrust Zones UCS cumulative density plot.

Appendix E.
Critical Failure Path Pseudo-Code
The following provides an overview of the algorithm used to identify critical failure paths
from the FLAC simulation results. The algorithm involves seven steps:
1. First, SSR values obtained from FLAC modelling are inverted (iSSR) to create a cost
matrix. This ensures that the largest shear strain rates correspond to the lowest costs.
2. Two nodal arrays are then constructed to denote the locations of the potential break-out
surface and tension crack. The break-out array is defined by boundary nodes along the
lower 90% of the slope face, as well as nodes along the toe of the slope, whereas the
tension array is defined by boundary nodes behind the slope face (Figure 3.10).
3. The first node in the break-out array is then designated as the break-out node.
4. Dijkstra's (1959) algorithm is then used to calculate the minimum cost path to get from
the break-out node to the closest tension array node. This is conducted using the
following steps:
a. First, a total cost matrix is constructed, which is the same size as the cost matrix.
Total cost values are initially designated as null.
b. Next, an unvisited array is constructed in which data is sorted by the total cost to
get to the 2D location from the starting point. Each node in the array is then
assigned a tentative total cost equal to infinity.
c. The designated starting node, determined from the break-out array, is set as the
current node. The total cost to reach this node is designated as zero.

d. For the current node, tentative costs are calculated for all unvisited neighbours
using the formula:
"/@ = "q{yy/ + "/ 0

Equation E.1

where "/@ is the total cost to get to the unvisited neighbour via the current node,
"q{yy/ is the total cost to get to the current node from the starting node, and
"/ 0 is the cost to get from current to the neighbour node (obtained from the iSSR
matrix). The calculated total cost at the unvisited neighbour is then compared to
the currently assigned total cost, and the minimum value stored within the
unvisited array.
e. Once all neighbour nodes have been considered, the current node is then
removed from the unvisited array and assigned to the 2D minimum total cost
matrix. Once a node has been visited it will never be checked again.
f. Step d is then repeated by designating the next lowest total cost node from the
unvisited array as the current node. This process continues until a node within
the tension array is defined as the current node. This node is then specified as
the tension crack.

5. Back-analysis of the 2D minimum total cost matrix can then be conducted to find the
minimum cost path from the tension crack to the break-out node. This involves the
following steps:

218

a. First, an empty minimum path array is constructed which will house the nodal
locations of the critical failure path.
b. Next, the tension crack is specified as the current node and its nodal location is
added to the minimum path array.
c. The neighbours of the current node are then examined, and the nodal location of
the neighbour with the minimum total cost (TC_n) is added to the minimum path
array.

d. The minimum neighbour is then specified as the current node and Step c is
repeated, until the break-out node is encountered.
6. Steps 4 and 5 are then repeated, designating each subsequent node in the break-out
array as the break-out node, until all nodes within the break-out array have been visited.
7. The designated minimum cost paths are then compared based on their average costs
(inverted shear strain rates), with the lowest cost path, i.e. the path with the highest
average shear strain rate, determined to be the critical path.
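The seven steps above can be sketched in a short script. The sketch below is illustrative only: the grid shape, four-node connectivity, and all function and variable names are assumptions, not the thesis implementation, and it assumes strictly positive SSR values so the inversion in Step 1 is well defined.

```python
import heapq
import numpy as np

def critical_failure_path(ssr, breakout_nodes, tension_nodes):
    """Sketch of the critical failure path search.

    ssr            -- 2D array of shear strain rates from FLAC (assumed > 0)
    breakout_nodes -- list of (row, col) break-out surface nodes
    tension_nodes  -- set of (row, col) tension crack candidate nodes
    """
    cost = 1.0 / ssr                                  # Step 1: largest SSR -> lowest cost
    rows, cols = ssr.shape
    best_path, best_cost = None, np.inf

    for start in breakout_nodes:                      # Steps 3 and 6
        total = np.full(ssr.shape, np.inf)            # Step 4a: total cost matrix
        total[start] = 0.0                            # Step 4c: starting node costs zero
        heap = [(0.0, start)]                         # Step 4b: unvisited, sorted by cost
        visited, crack = set(), None
        while heap:
            tc, node = heapq.heappop(heap)
            if node in visited:
                continue
            visited.add(node)                         # Step 4e: node is never rechecked
            if node in tension_nodes:                 # Step 4f: tension array reached
                crack = node
                break
            r, c = node
            for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nb[0] < rows and 0 <= nb[1] < cols:
                    tc_nb = tc + cost[nb]             # Step 4d: TC_n = TC_curr + C_n
                    if tc_nb < total[nb]:             # keep the minimum total cost
                        total[nb] = tc_nb
                        heapq.heappush(heap, (tc_nb, nb))
        if crack is None:
            continue

        # Step 5: back-trace from the tension crack to the break-out node by
        # repeatedly stepping to the neighbour with the minimum total cost.
        path, node = [crack], crack
        while node != start:
            r, c = node
            nbs = [nb for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                   if 0 <= nb[0] < rows and 0 <= nb[1] < cols]
            node = min(nbs, key=lambda nb: total[nb])
            path.append(node)

        # Step 7: retain the path with the lowest average cost,
        # i.e. the highest average shear strain rate.
        avg = float(np.mean([cost[p] for p in path]))
        if avg < best_cost:
            best_cost, best_path = avg, path
    return best_path
```

On a uniform-cost grid the sketch simply returns a shortest path from a break-out node to the nearest tension node; spatial variation in SSR is what steers the path toward zones of concentrated shear strain.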
