in the
Department of Earth Sciences
Faculty of Science
Abstract
Uncertainty analysis remains at the forefront of geotechnical design, due to the predictive nature of the applied discipline. Designs must be analysed within a reliability-based framework, such that inherent risks are demonstrated to decision makers. This research explores this paradigm in three important areas of geotechnical design, namely continuum, Discrete Fracture Network (DFN) and discontinuum modelling. Continuum modelling examined the negative effects on model prediction of ignoring the spatial heterogeneities found within a large open pit mine slope. DFN analysis introduced a novel approach to fracture generation to solve issues associated with the incorporation of traditional DFNs into geomechanical simulation models. Finally, discontinuum modelling explored the inherent mesh dependencies that exist in UDEC grain boundary models (UDEC-GBM).
Keywords:
Acknowledgements
This work was made possible by the generous support of my supervisor Dr.
Doug Stead. I would like to acknowledge the flexibility he gave me in pursuing various
avenues of research. I would like to extend my appreciation to my committee members,
Dr. Diana Allen, Dr. Dan Gibson, Dr. Norbert Baczynski, and Jarek Jakubec. I thank my
external examiner Dr. Scott Dunbar from the University of British Columbia.
I would like to thank SRK Consulting and the Natural Sciences and Engineering
Research Council of Canada (NSERC) for providing funding for this research through a
NSERC-IPS scholarship. I would also like to thank the SRK staff, including Jarek Jakubec, Michael Royle, Daniel Mackie, Ian de Bruyn, Marek Nowak, Greg Fagerlund, Jordan Severin, Jacek Scribek, Guy Dishaw, Jen Adams, Ryan Campbell, and Ben Green, for providing guidance throughout the research.
I would like to acknowledge the generous assistance of Ok Tedi Mining Ltd. for
providing the opportunity and data to conduct this study. In particular I would like to
thank Dr. Norbert Baczynski and Derrick Kelly.
A special thanks to all present and past graduate students within the Engineering
Geology and Resource Geotechnics Research Group at SFU who helped me with this
research. They include Mohsen Havaej, Pooya Hamdi, Fuqiang Gao, Kenneth Lupogo,
Andrea Wolter, Janisse Vivas, Ryan Preston, Anne Clayton, Zack Tuckey, Yabing
Zhang, and Davide Donati. I would like to acknowledge the support of the SFU technical
staff, including Glenda Pauls, Rodney Arnold, Matthew Plotnikoff, Tarja Vaisanen, and
Bonnie Karhukangas.
Dedication
Table of Contents
Approval .......................................................................................................................... ii
Partial Copyright Licence ............................................................................................... iii
Abstract .......................................................................................................................... iv
Acknowledgements ......................................................................................................... v
Dedication ...................................................................................................................... vi
Table of Contents .......................................................................................................... vii
List of Tables ................................................................................................................... x
List of Figures................................................................................................................. xi
List of Acronyms........................................................................................................... xvii
1. Introduction .......................................................................................................... 1
1.1. Background Motivation ........................................................................................... 1
1.2. Ok Tedi Mine .......................................................................................................... 2
1.3. Research Objectives .............................................................................................. 5
1.4. Thesis Structure ..................................................................................................... 6
2. Literature Review .................................................................................................. 8
2.1. Types of Uncertainty............................................................................................... 8
2.2. Alternative Theories of Uncertainty ....................................................................... 10
2.2.1. Fuzzy Set Theory ...................................................................................... 11
2.2.2. Possibility Theory ...................................................................................... 13
2.2.3. Evidence Theory ....................................................................................... 15
2.2.4. Imprecise Probabilities .............................................................................. 17
2.3. Probability Theory of Uncertainty .......................................................................... 19
2.4. Probabilistic Models for Dealing with Uncertainty ................................................. 22
2.4.1. First-Order, Second-Moment Methods ...................................................... 22
2.4.2. Point Estimate Methods ............................................................................ 23
2.4.3. Monte Carlo Methods ................................................................................ 26
2.5. Numerical Simulation............................................................................................ 28
2.6. Model Complexity Issue ....................................................................................... 29
2.7. Reliability Based Design ....................................................................................... 30
2.8. Risk analysis ........................................................................................................ 31
List of Tables
Table 3.1
Table 3.2
Table 3.3
Table 4.1
Table 5.1
Table 5.2
List of Figures
Figure 1.1
Figure 1.2
Plan view of the surface geology at the Ok Tedi mine site prior to
open pit operation. ...................................................................................... 4
Figure 2.1
Figure 2.2
Figure 2.3
Figure 3.1
Plan view of surface geology for the 2011 mining conditions at the
Ok Tedi site. The geotechnical borehole collar distribution is found
to be skewed towards the center of the pit, specifically targeting the
mineralized skarn bodies. ......................................................................... 37
Figure 3.2
Figure 3.3
Figure 3.4
Figure 3.5
Figure 3.6
Figure 3.7
Figure 3.8
Figure 3.9
Figure 3.10 Critical failure paths were identified using minimum distance
analysis. The methodology utilized Dijkstra's (1959) shortest path
algorithm. .................................................................................................. 61
Figure 3.11 Plots of the running average (a) mean and (b) standard deviation in
SRF results vs. the number of simulation trials are used to estimate
when the Monte Carlo simulation results become stable. The
results suggest that the required number of simulations is inversely
proportional to the degree of spatial autocorrelation. ................................ 64
Figure 3.12 Cumulative density plot comparing the SGS method with a standard
deterministic analysis. The deterministic analysis utilized
homogeneous units, with strength attributes defined using median
value statistics. SGS modelling suggests a mean SRF of 1.45 with
a standard deviation of 0.08. ..................................................................... 65
Figure 3.13 GSI and UCS attributes are found to be reduced along the critical
failure path compared to west wall averages. A mean reduction of
14% and 32% was found in the GSI and UCS, respectively. ..................... 66
Figure 3.14 (a) Variation in failure area and length statistics provide an estimate
of the overall deep vs. shallow seated nature of the estimated failure
surfaces. The results suggest a positive correlation between the
degree of depressurization and size of potential failures. (b) Trends
in the coefficient of variation within the failure area and length
statistics can be used as a quantitative estimate of the overall
dispersion in failure path results. Results indicate that the degree of
failure path uncertainty is positively correlated with the degree of
spatial autocorrelation imposed on the system.......................................... 67
Figure 3.15 Distribution of critical failure surfaces from the SGS simulations.
Daylighting is concentrated within the Gleeson Fracture Zone. The
failure area is estimated to be 2.29 × 10^5 m^2 with a standard
deviation of 7.82 × 10^4 m^2; while the failure length has a mean of
1,454 m with a standard deviation of 157 m. ............................................. 68
Figure 3.16 Development of shear bands between the active and passive blocks
is observed. This behaviour helps to facilitate movement of material
along the lower critical failure surface. ...................................................... 68
Figure 3.17 Comparison of SRF results for both the SGS and conventional
approaches to geotechnical slope design. The simulation results
suggest that the conventional probabilistic approach over-estimates
both the mean SRF (1.45 vs. 1.58) and standard deviation (0.08 vs.
0.29) compared to the SGS method.......................................................... 70
Figure 3.18 Comparison of critical failure path distributions for the different
modelling approaches. .............................................................................. 71
Figure 3.19 The incorporation of rock mass strength heterogeneities into a
model results in increased dispersion in the SRF results compared
to non-autocorrelated models. The zero autocorrelation method is
found to over-estimate the mean SRF (1.53 vs. 1.45), while at the
same time under-estimate the standard deviation (0.02 vs. 0.08),
when compared to the SGS method. ........................................................ 73
Figure 3.20 The inclusion of groundwater pore pressures resulted in an average
decrease in SRF results of 0.14 compared to the SGS method
(Figure 3.13). The mean SRF values are 1.45 and 1.58 for the wet
and dry models, respectively, with standard deviations of 0.08 and
0.09. ......................................................................................................... 74
Figure 3.21 Active depressurization was found to increase SRF values by an
average of 0.10, compared to the base case of no depressurization.
Results of the depressurization scenarios suggest mean SRF
values of 1.53 and 1.58, with standard deviations of 0.08 and 0.08
for the horizontal drain holes and drainage tunnel scenarios,
respectively. .............................................................................................. 75
Figure 3.22 Comparison of SRF results between the SGS and critical path up-scaling methods. The results suggest the critical path algorithms
fail to fully capture the effects of spatial heterogeneity on
geomechanical models. Up-scaling results suggest a mean SRF of
1.35, 1.33 and 1.33, with a standard deviation of 0.24, 0.17, and
0.22 for the independent, dependent and roughness methods,
respectively. .............................................................................................. 77
Figure 3.23 Concept demonstrating the deviation in mean step-path angle and
the critical basal sliding surface (Jennings 1970). ..................................... 81
Figure 3.24 The discrete nature of geotechnical domains makes the definition of
a REV within fracture systems difficult, if not impossible. This is due
to the difficulty in stabilizing descriptive attributes at sample volumes
smaller than the domain scale. ................................................................. 82
Figure 4.1
Figure 4.2
Figure 4.3
Figure 4.4
Figure 4.5
Figure 4.6
Figure 4.7
Figure 5.1
Figure 5.2
Flow chart for the modified Baecher et al. (1978) DFN generation
algorithm. Methodology is used to generate fracture networks
which adhere to later geomechanical meshing routines. ......................... 109
Figure 5.3
Figure 5.4
Figure 5.5
Figure 5.6
Figure 5.7
Figure 5.8
Figure 5.9
Figure 5.15 Brittle fracture development within UDEC-GBM UCS simulation with
Voronoi mesh geometry. An increased degree of dispersed, high
angle fractures is observed compared to triangular mesh models
(Figure 5.9). ............................................................................................ 130
Figure 5.16 Wedging potential in UDEC models with Voronoi vs. triangular mesh
geometries. Triangular mesh was shown to have a predisposition
towards shear failure mechanisms, due to increased kinematic
freedom. This was in contrast to the Voronoi mesh simulations
which displayed a dominance of tensile failure mechanisms. .................. 135
List of Acronyms
ALARP	as low as reasonably practicable
CDF	cumulative distribution function
CoV	coefficient of variation
DEM	distinct element method
DFN	discrete fracture network
FCM	fuzzy c-means
FDEM	finite-discrete element method
FDM	finite difference method
FEM	finite element method
FLAC	Fast Lagrangian Analysis of Continua
FOS	factor of safety
FOSM	first-order, second-moment
GSI	geological strength index
LHM
masl	metres above sea level
MRMR	mining rock mass rating
NPV	net present value
OTML	Ok Tedi Mining Ltd.
P20	areal fracture density (fractures per unit area)
P21	areal fracture intensity (fracture trace length per unit area)
PBA	probability bounds analysis
p-box	probability box
PDBO
PEM	point estimate method
REV	representative elementary volume
RFCDV-DO
RME	rock mass excavability
RMR89	rock mass rating (Bieniawski 1989)
SGS	sequential Gaussian simulation
SIS	sequential indicator simulation
SRF	strength reduction factor
SSR	shear strength reduction
SRM	synthetic rock mass
UCS	uniaxial compressive strength
UDEC	Universal Distinct Element Code
UDEC-GBM	UDEC grain boundary model
1.
Introduction
Geotechnical engineers require a framework with which to quantify and demonstrate the inherent uncertainty in their
designs to decision makers, such that decisions are made with a proper appreciation of
the risks associated with different designs (Mazzoccola et al. 1997; Steffen 1997; Kong
2002; Robertson and Shaw 2003; Steffen and Contreras 2007). Traditionally this has
been accomplished by geotechnical engineers through the use of deterministic methods
such as the factor of safety (Wyllie and Mah 2004; Read and Stacey 2009). Within this
framework, sensitivity analysis is conducted by evaluating multiple subsurface
realizations within a deterministic framework. However, this method is limited, as there
is no explicit way of quantifying the probability that a specific realization reflects reality.
This forces decision makers to evaluate the likelihood of subsurface realizations
qualitatively, and hence places a degree of subjectivity into the decision making process.
The alternative to this methodology is the use of reliability based design where
uncertainty is explicitly quantified and propagated through geotechnical design
calculations, using a theory of uncertainty such as probability theory (Harr 1996; Duncan
2000; Wiles 2006; Nadim 2007). Within this framework, uncertainty is quantified at the
parameter level by modelling statistical distributions to observed data. Uncertainties are
then propagated through geotechnical design calculations using probabilistic methods
such as Monte Carlo simulation or point estimate methods (Hammersley and
Handscomb 1964; Beckman 1971; Rosenblueth 1975).
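As a minimal illustration of how Monte Carlo simulation propagates parameter uncertainty into a distribution of design outcomes, the sketch below draws cohesion and friction angle from assumed normal distributions and pushes them through a hypothetical planar-slide factor-of-safety function. The geometry, weights, and distribution parameters are illustrative only, not values from this study.

```python
import numpy as np

rng = np.random.default_rng(42)

def factor_of_safety(cohesion, friction_angle_deg, weight=1000.0,
                     slide_angle_deg=35.0, area=10.0):
    """Hypothetical planar-slide limit equilibrium: resisting / driving forces."""
    phi = np.radians(friction_angle_deg)
    psi = np.radians(slide_angle_deg)
    resisting = cohesion * area + weight * np.cos(psi) * np.tan(phi)
    driving = weight * np.sin(psi)
    return resisting / driving

n = 100_000
cohesion = rng.normal(25.0, 5.0, n)   # kPa, assumed distribution
friction = rng.normal(35.0, 3.0, n)   # degrees, assumed distribution

fos = factor_of_safety(cohesion, friction)
prob_failure = np.mean(fos < 1.0)
print(f"mean FOS = {fos.mean():.2f}, P(FOS < 1) = {prob_failure:.3f}")
```

The output is a full distribution of factor-of-safety values rather than a single deterministic number, which is what supports probability-of-failure statements to decision makers.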
Using these techniques, geotechnical engineers can quantitatively evaluate the risks associated with different designs.

1.2. Ok Tedi Mine

Open pit mining at Ok Tedi commenced in 1984 (Davies et al. 1978). The mine is situated at the headwaters of the
Ok Tedi River, on top of Mt. Fubilan at an elevation of 1800 masl. Located within the
Star Mountains, the surrounding mountainous topography and tropical latitude result in
difficult mining conditions (Hearn 1995). Rainfall at the site is extreme, with an annual
average of 9 to 11 m, resulting in rapid erosion of pit walls (de Bruyn et al. 2011).
The mine itself is an open pit operation with an approximate areal size of 5 km2 and an
average pit slope between 38° and 40° (de Bruyn et al. 2013). Current daily production at
the site is approximately 80,000 tons of ore with equal amounts of waste rock (Baczynski
et al. 2011).
The site geology is characterized by a repeating succession of sub-horizontal
siltstone, mudstone and limestone layers, which have experienced regional shortening
through thrust fault activity, and local up-doming from intrusive activity (Figure 1.2;
Baczynski 2011). Sedimentary units are subdivided into three formations based on
stratigraphic characteristics, namely: the Ieru Siltstone, Darai Limestone and Pnyang
Formation (Hearn 1995). The Ieru Siltstone is the oldest formation and is characterized
by Cretaceous, grey, calcareous siltstones and medium-graded sandstones.

Figure 1.1 [Location map showing the Ok Tedi mine within Papua New Guinea, relative to Indonesia and Australia, with Port Moresby and the Fly and Ok Tedi rivers marked; scale bar 0 to 2000 km.]
The site is transected by the western (upper) and eastern (lower) Gleeson faults. The fault zones are characterized
by highly brecciated, granular and/or highly plastic gouge material. Translation along the
two faults has resulted in the formation of a disturbed zone referred to as the Gleeson
fracture zone. This zone is characterized by a large degree of fracturing and brecciation
of the host rock.
Figure 1.2 Plan view of the surface geology at the Ok Tedi mine site prior to open pit operation. [Map units: Skarn, Endoskarn, Monzonite Porphyry, Monzodiorite, Pnyang Formation, Darai Limestone and Ieru Siltstone; structures: Gleeson Faults, Gleeson Fracture Zone, and the Taranaki and Parrots Beak thrust faults.]
Conventional continuum analyses typically ignore the strength heterogeneity found within individual geotechnical units. This can lead to non-conservative design
practices, as the system is unable to preferentially fail through the weakest areas of the
rock mass (Griffiths and Fenton 2000; Hicks and Samy 2002; Jefferies et al. 2008). This
thesis explores these effects within the context of geostatistical theory. This is done
using a geostatistical method known as sequential Gaussian simulation, which is an original contribution within the field of geotechnical slope design.
In the second section of the thesis, the integration of DFNs with geomechanical
simulation codes is explored. Traditionally this amalgamation has been problematic, due
to the generation of unacceptable mesh elements during geomechanical model
construction (Painter 2011; Painter et al. 2012; Painter et al. 2014).
This issue is explored, and the causal effects of modifying DFNs to facilitate geomechanical simulation are scrutinized. A novel approach to fracture generation is then introduced to address the integration issue.
The final area of research explores mesh dependency issues within UDEC-grain
boundary models (UDEC-GBM). UDEC-GBMs are a method of simulating rock masses
as a stochastic arrangement of discrete deformable and/or rigid blocks (Lorig and
Cundall 1987; Kazerani and Zhao 2010; Lan et al. 2010; Gao and Stead 2014).
However, to date, few studies have explored possible mesh dependencies within the
technique (Kazerani et al. 2012; Kazerani 2013; Kazerani and Zhao 2014). This thesis
aims to address this short-coming through the quantification of irreducible calibration
uncertainties within UDEC-GBMs. In addition, the effects of mesh shape dependencies
on micro-scale fracture mechanisms are explored.
Chapter 2 provides a review of theories of uncertainty, and an overview of uncertainty propagation through numerical models is then presented. Finally, the chapter concludes with an introduction
to reliability based design and risk analysis.
Chapter 3 explores the effects of rock mass heterogeneity on geomechanical
model predictions through the application of sequential Gaussian simulation.
This
method is used to stochastically simulate the inherent spatial heterogeneity within the
geological strength index (GSI) and uniaxial compressive strength (UCS) at the Ok Tedi
Mine site. This is a new approach within the field of open pit slope design. The chapter
is written in an extended manuscript format, with the intention of submission of an
abbreviated version to the Rock Mechanics and Rock Engineering Journal.
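The idea behind sequential Gaussian simulation can be sketched in miniature. The unconditional 1-D example below visits grid nodes in random order, kriges each node from previously simulated nodes, and draws a value from the resulting conditional normal distribution. The exponential covariance model and its range are assumptions for illustration only; a production workflow, such as that applied to GSI and UCS here, would additionally condition to borehole data, apply normal-score transforms, and restrict kriging to a search neighbourhood.

```python
import numpy as np

rng = np.random.default_rng(1)

def exp_cov(h, sill=1.0, corr_range=20.0):
    """Exponential covariance model: C(h) = sill * exp(-3|h| / range)."""
    return sill * np.exp(-3.0 * np.abs(h) / corr_range)

def sgs_1d(xs):
    """One unconditional sequential Gaussian realization via simple kriging."""
    n = len(xs)
    vals = np.empty(n)
    visited = []                                   # indices already simulated
    for i in rng.permutation(n):
        if visited:
            xk = xs[visited]
            C = exp_cov(np.subtract.outer(xk, xk))   # data-to-data covariance
            c0 = exp_cov(xk - xs[i])                 # data-to-node covariance
            w = np.linalg.solve(C + 1e-10 * np.eye(len(xk)), c0)
            mean = w @ vals[visited]                 # simple kriging estimate
            var = max(exp_cov(0.0) - w @ c0, 1e-12)  # kriging variance
        else:
            mean, var = 0.0, exp_cov(0.0)            # first node: marginal dist.
        vals[i] = rng.normal(mean, np.sqrt(var))     # draw from conditional
        visited.append(i)
    return vals

xs = np.arange(0.0, 100.0, 2.0)
field = sgs_1d(xs)   # one realization of a spatially autocorrelated field
```

Repeating the call yields equally probable realizations whose spatial autocorrelation honours the covariance model, which is what allows heterogeneity, rather than a single homogeneous strength value, to be carried into the slope models.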
2. Literature Review
Applied geotechnical design remains fundamentally a predictive science,
whereby practitioners anticipate the behaviour of natural materials through the lens of
failure theory (Wyllie and Mah 2004). Due to its predictive nature, the quantification and
demonstration of uncertainties to decision makers remains one of the most important
issues within the discipline. This chapter provides an overview of the application of
uncertainty theory to geotechnical design. The chapter is divided into three primary
sections. The first deals with the types of uncertainty and associated theories. An
explanation is then given for the use of probability theory in this research as opposed to
alternate methods. The second section provides an overview of numerical simulation
techniques, and uncertainty propagation. Finally, in the last section, a brief overview of
reliability and risk analysis techniques is given.
2.1. Types of Uncertainty

Aleatory uncertainty arises from the natural, inherent variability of a system. Examples of this include the spatial variability in the fracture density across a study site, or the variation in peak particle acceleration from an earthquake.
Figure 2.1 [Diagram relating the state of information to uncertainty: from complete ignorance (total, maximum uncertainty) to a state of precise information (certainty), distinguishing epistemic from aleatory uncertainty.]
Approaches to handling uncertainty focus either on aleatory uncertainty, as in classical probability theory, or on epistemic uncertainty, as in fuzzy set and evidence theory (Zadeh 1965; Zadeh 1978; Shafer 1976). The issue with the former approach is
that since aleatory uncertainties are an inherent property of a variable, the incorporation
of new information does not reduce the uncertainty within a system, but simply refines
the estimate of it. In comparison, methods which focus on epistemic uncertainties aim to
reduce the uncertainty within a system by addressing our lack of knowledge pertaining to
a parameter.
Probability theory admits several interpretations of uncertainty, including the classical (Laplace) interpretation, relative frequencies, and subjective (Bayesian) degrees of belief. The theory is predominantly concerned with aleatory uncertainty, and has difficulty dealing with epistemic uncertainty.
Alternative theories of uncertainty include fuzzy set theory (Zadeh 1965, 1968), possibility theory (Zadeh 1978; Giles 1982; Dubois and Prade 1985; Klir 1992), evidence theory (Dempster 1967; Shafer 1976), the Hints model (Kohlas and Monney 1995), imprecise probability theory (Good 1950; Smith 1961; Walley 1991), the probability of provability model (Ruspini 1986; Pearl 1988; Smets 1991), and the transferable belief model (Smets 1988, 1990; Smets and Kennes 1994).
The following sections provide a brief introduction to, and criticism of, these theories. They are not meant to be an exhaustive study of each type, but instead present only an introduction to the alternative methods.
2.2.1. Fuzzy Set Theory

Figure 2.2 [Example fuzzy membership functions, plotting degree of membership (0 to 1) against attribute value.]
Examples of fuzzy set theory within the geotechnical community include the extension of rock mass classification schemes to include fuzzy logic, among other applications. These include:

- The rock mass excavability (RME; Bieniawski et al. 2006, 2007; Bieniawski and Grandori 2007) system (Hamidi et al. 2010);
- Hoek et al.'s (2002) geological strength index (GSI; Sonmez et al. 2003; Zhang et al. 2009);
- Application of fuzzy set theory with alpha cut simulation principles to assess the reliability of rock slopes (Park et al. 2008; 2012), the alpha cut principles being an extension of Monte Carlo methods to fuzzy data;
- Fuzzy reliability analysis of slopes (Dodagoudar and Venkatachalam 2000); and
- Integration of fuzzy theory with the finite element method (FEM) to produce fuzzy finite element models (Hanss 2005).
Despite the examples of fuzzy set logic in geotechnical design problems, the theory is limited in its ability to represent uncertainty for two primary reasons. First, integration of fuzzy set theory in geotechnical design problems is often complicated by the subjective definition of membership functions.
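To make the membership idea concrete, the sketch below implements a triangular fuzzy number and its alpha-cuts; the GSI interval used is purely hypothetical and not a value from this study.

```python
def triangular_membership(x, low, mode, high):
    """Membership of x in a triangular fuzzy number (low, mode, high)."""
    if x <= low or x >= high:
        return 0.0
    if x <= mode:
        return (x - low) / (mode - low)
    return (high - x) / (high - mode)

def alpha_cut(alpha, low, mode, high):
    """Interval of attribute values with membership >= alpha."""
    return (low + alpha * (mode - low), high - alpha * (high - mode))

# Hypothetical fuzzy estimate of GSI: "about 55, somewhere between 45 and 70"
print(triangular_membership(50, 45, 55, 70))  # 0.5
print(alpha_cut(0.5, 45, 55, 70))             # (50.0, 62.5)
```

Alpha-cut simulation schemes such as Park et al.'s operate on intervals like the one returned above, rather than on sampled point values.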
2.2.2. Possibility Theory

Possibility theory was first introduced by Zadeh (1978) as an extension of his theory of fuzzy sets (Zadeh 1965). The primary basis of possibility theory is the mapping of the possibility of an event A, where A is a subset of the event space X. The primary axioms of the theory are:

Π(X) = 1		Equation 2.1

Π(A ∩ B) = min(Π(A), Π(B))		Equation 2.2

Π(A ∪ B) = max(Π(A), Π(B))		Equation 2.3

where Π(A)
is the possibility of the event occurring. Based on these axioms, three views of
possibility theory have been advanced (Aughenbaugh 2006). The first is based on fuzzy
set theory introduced by Zadeh (1965) and assumes a fuzzy set basis for possibility
(Zadeh 1978). The second is that possibility is the limit of plausibility for nested bodies
of evidence (Klir 1992). Finally, Giles (1982) argues that possibility is the upper limit of
probability, similar to the upper bounds of imprecise probability theory formalized by
Walley (1991). Du et al. (2006) argue that for geotechnical design problems, possibility
theory may be more appropriate than probability theory in greenfield projects where little
information is available.
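The max/min calculus of possibility measures can be sketched over a discrete state space; the slope states and possibility values below are invented purely for illustration.

```python
# Hypothetical possibility distribution over discrete slope conditions
pi = {"stable": 1.0, "creeping": 0.6, "failed": 0.2}

def possibility(event):
    """Pi(A): maximum of the possibility distribution over members of A."""
    return max(pi[s] for s in event)

def necessity(event):
    """N(A) = 1 - Pi(complement of A): how much the evidence compels A."""
    complement = set(pi) - set(event)
    return 1.0 - possibility(complement) if complement else 1.0

A = {"creeping", "failed"}
print(possibility(A))   # 0.6  (max rule over the event's members)
print(necessity(A))     # 0.0  ("stable" remains fully possible)
```

The gap between necessity and possibility is what lets the theory express ignorance: an event can be entirely possible yet not at all necessary.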
Examples of possibility theory in engineering design exist in the literature. The primary issue with possibility-based design methods is that they tend to
underestimate the risk of catastrophic failures for systems with many failure modes
(Nikolaidis et al. 2004). This is due to the inability of possibility theory to take into
consideration co-dependencies within a dataset, which has made it difficult to define
direct operators between probabilities and possibilities.
2.2.3. Evidence Theory
Evidence theory, also referred to as Dempster-Shafer theory, is an alternative to
probability theory first proposed by Dempster (1967) and extended by Shafer (1976).
The theory is a generalization of the Bayesian theory of subjective probabilities. It
extends the Bayesian approach through the introduction of belief functions, which allow for the formulation of one's degrees of belief for a question based on the available
evidence of related questions (Shafer 1990).
A belief function is defined for a subset A of the frame of discernment X by:

Bel(A) = Σ_{B ⊆ A} m(B)		Equation 2.4

where m(B) can be thought of as all the relevant and available evidence within the set X that supports set B. Evidence can be obtained from many sources including complete experimental frequency data (such as probabilities), sparse experimental results (such as possibilities), and/or expert opinions (Aughenbaugh 2006).
The theory is based on two primary ideas. First, one obtains one's degrees of belief for a question based on the available evidence for related questions. Shafer
(1992) provides an example of this principle:
To illustrate the idea of obtaining degrees of belief for one question from
subjective probabilities for another, suppose I have subjective probabilities for the
reliability of my friend Betty. My probability that she is reliable is 0.9, and my
probability that she is unreliable is 0.1. Suppose she tells me a limb fell on my
car. This statement, which must be true if she is reliable, is not necessarily false if
she is unreliable. So her testimony alone justifies a 0.9 degree of belief that a
limb fell on my car, but only a zero degree of belief (not a 0.1 degree of belief)
that no limb fell on my car. This zero does not mean that I am sure that no limb
fell on my car, as a zero probability would; it merely means that Betty's testimony
gives me no reason to believe that no limb fell on my car. The 0.9 and the zero
together constitute a belief function.
The second principle is that one uses Dempster's rule for combining independent items of evidence to obtain one's degrees of belief (Dempster 1968). For two mass functions m1 and m2, the combined mass assigned to a non-empty set A is:

m_{1,2}(A) = (1 / (1 − K)) Σ_{B ∩ C = A} m1(B) m2(C)		Equation 2.5

where K is a measure of the conflict between the two mass sets defined by:

K = Σ_{B ∩ C = ∅} m1(B) m2(C)		Equation 2.6
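Dempster's rule (Equations 2.5 and 2.6) can be sketched directly in code. The example reuses Shafer's Betty testimony as a mass function and combines it with a second, hypothetical independent witness invented for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions keyed by frozenset focal elements."""
    combined = {}
    conflict = 0.0                                   # K, Equation 2.6
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc                      # mass lost to conflict
    # Normalize by (1 - K), Equation 2.5
    return {a: v / (1.0 - conflict) for a, v in combined.items()}, conflict

limb = frozenset({"limb"})
theta = frozenset({"limb", "no_limb"})               # frame of discernment
m_betty = {limb: 0.9, theta: 0.1}                    # Shafer's 0.9-reliable Betty
m_second = {limb: 0.8, theta: 0.2}                   # hypothetical 0.8-reliable witness

m12, K = dempster_combine(m_betty, m_second)
print(m12[limb], K)   # belief mass for "limb" rises; K = 0 (no conflict)
```

Because both witnesses support the same proposition, the conflict K is zero and the combined belief in "limb" exceeds either testimony alone, which is exactly the reinforcing behaviour the rule is designed to capture.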
One of the issues with the current implementation of evidence theory is that Dempster's rule of combination can lead to seemingly irrational results (Aughenbaugh 2006); an example of this was presented by Zadeh (1984).
2.2.4. Imprecise Probabilities
Imprecise probability theory is an extension of traditional probability theory. Its
proponents argue that one's degree of belief cannot be precisely known but instead only
bounded by upper and lower limits (Good 1950, 1983; Smith 1961, 1965; Sarin 1978;
Kyburg 1987; Walley 1991; Weichselberger 2000) or by sets of probabilities (Tintner
1941; Hart 1942; Levi 1974). The theory is an extension of traditional probability theory,
and as such, has an advantage over the aforementioned alternative theories of
uncertainty, as it has clear operational definitions (Aughenbaugh 2006). The general
premise of the theory is that the imprecision in one's degree of belief should be directly proportional to the amount of evidence. As more evidence becomes available, a decision maker should narrow his or her probability bounds to improve his or her confidence in the outcome.
The fundamental foundation of imprecise probability theory, as defined by Walley
(1991), is the definition of an upper and lower bound in one's confidence of an outcome.
In basic terms, the lower bound should reflect the highest price at which a decision
maker would place a bet; whereas, the upper bound reflects the lowest price at which
the decision maker would buy the opposite of the gamble (Aughenbaugh 2006). Any
point between the bounds reflects a fair price for the bet where the decision maker would
be willing to take either side of the gamble. This concept can be represented through
probability bounds analysis (PBA) by using probability boxes or p-box (Ferson and
Donald 1998). A p-box is defined by a pair of cumulative distribution functions (CDFs) that bound one's belief in the distribution of an attribute based on the current state of information. By specifying a statistical model, and upper and lower bounds
for the model parameter(s), one can visually define an area of epistemic uncertainty (Figure 2.3).

Figure 2.3 [Example probability box: upper and lower CDF bounds on an attribute value, with the enclosed area representing the attribute uncertainty.]
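A simple parametric p-box can be sketched by bounding a CDF over an interval of possible parameter values; the UCS numbers below are hypothetical, chosen only to illustrate the construction.

```python
import math

def normal_cdf(x, mu, sigma):
    """Standard closed-form normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def pbox_bounds(x, mu_low, mu_high, sigma):
    """Upper/lower CDF bounds when only an interval [mu_low, mu_high]
    is known for the mean of a normal model (a simple parametric p-box)."""
    upper = normal_cdf(x, mu_low, sigma)   # smallest mean -> highest CDF
    lower = normal_cdf(x, mu_high, sigma)  # largest mean -> lowest CDF
    return lower, upper

# Hypothetical UCS model: mean known only to lie in [90, 110] MPa, sigma = 15
lo, hi = pbox_bounds(100.0, 90.0, 110.0, 15.0)
print(f"P(UCS <= 100 MPa) lies in [{lo:.3f}, {hi:.3f}]")
```

The width of the interval [lo, hi] is the epistemic component: collecting more data narrows the parameter interval and hence the p-box, while the sigma of the underlying model represents the aleatory component.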
The use of imprecise probability theory is not widespread within the rock
mechanics community; however, a few examples of the method do exist from other
branches of engineering design. Some authors have attempted to avoid the associated meta-probability paradox by arguing that the inclusion of any meta-probabilities overcomplicates the theory (Smets 1998).
2.3. Probability Theory of Uncertainty

The first axiom of probability theory states that:

P(A) ≥ 0		Equation 2.7

where P(A) is a real number and A is a subset of the event space (Ω). The second axiom of the theory states that:

P(Ω) = 1		Equation 2.8

The third axiom states that:

P(A₁ ∪ A₂ ∪ … ∪ Aₙ) = P(A₁) + P(A₂) + … + P(Aₙ)		Equation 2.9

when events A₁, A₂, …, Aₙ are mutually exclusive. Using these axioms, the probability of an event can be defined and handled using mathematical constraints.
The theory remains the most widely used method for uncertainty quantification,
due to its ability to relatively easily propagate aleatory uncertainties through design
calculations. However, despite its widespread use, the theory has been criticized due to
its difficulty in representing epistemic uncertainties.
Nevertheless, several issues arise when forgoing probability theory in favour of the alternatives. The first is that data are commonly collected and characterized within the context of probability theory. This makes it difficult to utilize alternative methods, such as
fuzzy set theory, as fuzzy set classification schemes such as Aydin (2004) or
Sonmez et al. (2003) are rarely used. Although this does not preclude future
studies from characterizing data using alternative methods, it is easier if data are
collected in the context of the analysis methods.
This first issue leads directly to the second, namely the limited familiarity of practicing engineers with the alternative methods.
The third issue that arises with forgoing probability theory is that not only does
one have to be familiar with the alternative theory, but one must also develop
alternative methods for uncertainty propagation through design calculations
(Cooke 2004). This can lead to unclear design practices, such as the use of
alpha cuts in fuzzy set theory, whereby the theory is reduced to a probabilistic
interpretation in order to apply Monte Carlo methods (Park et al. 2008, 2012).
This reduction to a probabilistic interpretation adds additional layers of complexity
to already complicated systems, further convoluting analyses.
Given these limitations, probability theory remains the most well-developed theory of uncertainty. The plethora of uncertainty propagation and representation methods, its longevity, and its familiarity to practicing engineers make it the most widespread method. It is for these reasons that uncertainty analysis within this thesis was conducted
in the context of probability theory.
2.4.1. First-Order Second-Moment Method

First-order second-moment (FOSM) methods provide an analytical
approximation of the output mean and variance based on the input attribute moments
(Ang and Tang 1984). The method approximates the response function based on either
partial derivatives or a truncation of the Taylor series expansion of an output function
$Y = g(X_1, X_2, \ldots, X_n)$ about the mean values of the random input variables
($\mu_{X_1}, \mu_{X_2}, \ldots, \mu_{X_n}$; Wong 1985):

$E[Y] \approx g(\mu_{X_1}, \mu_{X_2}, \ldots, \mu_{X_n})$

Equation 2.10

$\sigma_Y^2 \approx \sum_{i=1}^{n} \left(\frac{\partial g}{\partial X_i}\right)^2 \sigma_{X_i}^2$

Equation 2.11
Co-dependencies in the dataset may also be taken into consideration during formulation;
however, the analysis becomes more complex and laborious (Harr 1996).
The method is advantageous for simple systems (i.e. independent variables), as its
mathematical requirements are modest and do not demand complex
computation (Harr 1996). It also only requires knowledge of the statistical moments,
rather than complete distributions (Wong 1985). The downside of this simplification is
that the FOSM method only approximates the output moments instead of the entire
distribution. Failure probability estimates must therefore assume a distribution model,
with limited information on the system behaviour (Nadim 2007). In addition, the method
can also become quite complex when dealing with complicated output functions and
correlated variables. In such cases, attainment and evaluation of the derivatives can
become quite complicated, if not impossible when using numerical methods (Hammah et
al. 2009). As such, the method is not appropriate for propagating uncertainties through
numerical analyses.
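As a concrete illustration, the FOSM approximation of Equations 2.10 and 2.11 can be sketched numerically for independent inputs, with the partial derivatives estimated by central finite differences. The performance function `fs` below is a hypothetical example, not a function from this thesis:

```python
import math

def fosm(g, means, sds, h=1e-5):
    """First-order second-moment approximation for independent inputs:
    the output mean is g evaluated at the input means (Equation 2.10);
    the output variance sums (dg/dXi)^2 * sd_i^2 (Equation 2.11), with
    derivatives estimated by central finite differences of step h."""
    mean_y = g(means)
    var_y = 0.0
    for i, sd in enumerate(sds):
        lo, hi = list(means), list(means)
        lo[i] -= h
        hi[i] += h
        dg = (g(hi) - g(lo)) / (2 * h)  # central difference dg/dXi
        var_y += (dg * sd) ** 2
    return mean_y, var_y

# Hypothetical performance function with cohesion c (kPa) and friction
# angle phi (degrees) as the two random inputs (illustrative only).
def fs(x):
    c, phi = x
    return (c + 100.0 * math.tan(math.radians(phi))) / 80.0

mu, var = fosm(fs, means=[25.0, 30.0], sds=[5.0, 3.0])
```

Only the first two output moments are recovered; as noted above, a distribution model must still be assumed to estimate a failure probability.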
2.4.2. Point Estimate Method

The point estimate method (PEM; Rosenblueth 1975) approximates the output
moments by evaluating the response function at a small number of weighted points.
For a single random variable $x$ with skewness coefficient $\nu$, the weights and point
locations are given by:

$P_+ = \frac{1}{2}\left[1 - \frac{\nu/2}{\sqrt{1 + (\nu/2)^2}}\right]$

Equation 2.12

$P_- = 1 - P_+$

Equation 2.13

$x_+ = E[x] + \sigma\sqrt{P_-/P_+}$

Equation 2.14

$x_- = E[x] - \sigma\sqrt{P_+/P_-}$

Equation 2.15

If it is assumed that the skewness of the system is equal to zero ($\nu = 0$), which is the
case for a Gaussian (normal) distribution², then the equations simplify to (Harr 1996):

$P_+ = P_- = \frac{1}{2}$

Equation 2.16

$x_\pm = E[x] \pm \sigma$

Equation 2.17
Variability in the output function ($Y = g(x)$) can then be estimated from the point
estimates, when $Y$ admits a Taylor expansion about $E[x]$, using the equation
(Rosenblueth 1975):

$E[Y^n] = P_+ y_+^n + P_- y_-^n$

Equation 2.18

where $E[Y^n]$ is the expected value of the nth order moment of the output function, and
$y_\pm = g(x_\pm)$. Using this method, one can propagate uncertainties through the output
function ($Y = g(x)$) and obtain an estimate of the output mean and variance. This is
achieved by conducting two simulations at one standard deviation from the input
variable mean.
The system can be further expanded for multiple, independent variables, by
extending the weighting function (Equation 2.18) such that:

$P_\pm = \frac{1}{2^n}$

Equation 2.19

where $n$ is the number of random input variables. Additional
extensions to the system can be used in the case of co-dependent variables. In this
case the weighting system is extended to take into consideration correlation coefficients
between the modelled input parameters (Rosenblueth 1981).
The drawback of the approach is that the number of required simulations increases
exponentially with the number of random input variables (Hammah et al. 2009). For
example, if we assume a simple perfectly-plastic system, with linear-elastic
properties and a Mohr-Coulomb failure criterion, we end up with a minimum of four input
parameters for each geotechnical unit (i.e. Young's modulus, Poisson's ratio, friction
angle and cohesion; Labuz and Zang 2012). If the analysis is
conducted using PEM, the minimum number of simulations is equal to $16^d$, where $d$ is
equal to the number of geotechnical domains. Under this scenario, if more than two
geotechnical domains are present, then the number of simulations would exceed 1,000,
making the method computationally excessive compared to Monte Carlo techniques.
This is exacerbated in heterogeneous systems, where each individual model node can
be thought of as a random variable.

² Although the normal distribution displays symmetric properties, a number of commonly used
distributions within the geotechnical engineering discipline are asymmetric (i.e. log-normal,
Weibull, exponential, etc.).
Several researchers have attempted to address this
shortcoming through modifications to the PEM (Harr 1989; Hong 1998). However, these
alternative methods increase the spread in point estimates, which can lead to unrealistic
input values (Christian and Baecher 2002).
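For independent inputs with zero skewness, Equations 2.16 to 2.19 reduce the PEM to evaluating the output function at every combination of mean plus or minus one standard deviation. A minimal sketch follows; the bilinear example function is illustrative, chosen because its exact moments are known, and is not drawn from this thesis:

```python
from itertools import product

def pem(g, means, sds):
    """Rosenblueth point estimate method for n independent, zero-skew
    variables: evaluate g at all 2^n combinations of (mean +/- one
    standard deviation), each weighted 1/2^n (Equations 2.16-2.19),
    and accumulate the first two moments of the output."""
    n = len(means)
    weight = 1.0 / 2 ** n
    m1 = m2 = 0.0
    for signs in product((-1.0, 1.0), repeat=n):
        x = [m + s * sd for m, s, sd in zip(means, signs, sds)]
        y = g(x)
        m1 += weight * y
        m2 += weight * y * y
    return m1, m2 - m1 * m1  # output mean and variance

# Hypothetical output function Y = X1 * X2, for which the exact moments
# are E[Y] = 40 and Var[Y] = 90 with the inputs below.
mean_y, var_y = pem(lambda x: x[0] * x[1], means=[4.0, 10.0], sds=[0.5, 2.0])
```

With two variables this costs four function evaluations; as discussed above, the cost grows as $2^n$ and becomes prohibitive for heterogeneous systems.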
2.4.3. Monte Carlo Method

Although the Monte Carlo method was only formalized in the mid-twentieth century, similar
procedures had been used before, such as Buffon's needle solution to calculate π in
1777 or Laplace's probabilistic generalization of the method in 1812 (Harr 1996).
The key principle of the method is that random sampling of the input variables is
used to estimate uncertainty in the output variables (Hammersley and Handscomb 1964;
Beckman 1971). As such, the method requires that the probability distributions of all
input variables are known prior to the analysis. This can be through set distribution
models, such as the normal, log-normal or uniform distribution, or non-parametric
methods. Once these have been obtained, a series of deterministic computations is
conducted with input variables selected randomly for each simulation from their
respective probability distributions. Output uncertainty is then estimated by summarizing
the resultant response variable statistics.
Since the random sampling is the key principle of the method, the generation of
random sample sets has remained a key area of research within the discipline
(Rubinstein et al. 1981). Present-day computational methods for the approach typically
rely on the use of pseudorandom number generators, which are based on deterministic
procedures. These procedures produce long sequences of apparently random values
based on seed values and recurrence relationships (Harr 1981). Although these number
generators are not truly random, they are typically sufficiently random for most cases,
provided that the number of simulations is less than the recurrence period of the generator.
One of the key advantages of the Monte Carlo method is that the number of
required simulations is independent of the number of random input variables, unlike
PEM.
In theory, the accuracy of the method depends only on the number of simulations
conducted, with the standard errors of the output mean and variance given by:

$SE_{\mu} = \frac{\sigma}{\sqrt{N}}$

Equation 2.20

$SE_{\sigma^2} = \sigma^2\sqrt{\frac{2}{N-1}}$

Equation 2.21

where $SE_{\mu}$ and $SE_{\sigma^2}$ are the standard errors in the mean and variance, $\sigma$ is the output
standard deviation, and $N$ is the number of simulations. This accuracy issue can lead to
a very computationally intensive analysis when a high degree of accuracy is required in the
output estimates. This is particularly pronounced with tail distribution estimates, which
are extremely sensitive to the distribution accuracy, and thus require accurate
knowledge of the higher order moments (Nadim 2007; Hammah et al. 2009).
To overcome the computational inefficiency of the Monte Carlo method,
systematic schemes for selecting input variables have been proposed, known as the Latin
Hypercube methods (LHM; Iman and Conover 1982; Tang 1993; Olsson and
Sandberg). Instead of purely random sampling, the LHM sub-divides the input variable
domains into a series of equally probable intervals, and then obtains random samples
from each of these bins. This ensures that the random set more accurately
adheres to the input distributions, and reduces the influence of outlier statistics when a
limited number of simulations is conducted. This methodology results in a reduction in
the number of required simulations and produces a more accurate approximation of the
response variable distribution.
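The stratification step of the LHM can be sketched for a single normally distributed input, with each bin's uniform draw mapped back to the attribute scale through the inverse normal CDF. The parameter values are arbitrary examples:

```python
import random
from statistics import NormalDist

def latin_hypercube_normal(n, mu, sigma, seed=42):
    """Latin hypercube sample of size n from Normal(mu, sigma): split
    [0, 1) into n equally probable bins, draw one uniform value per bin,
    shuffle the draws, then map each through the inverse normal CDF."""
    rng = random.Random(seed)
    u = [(i + rng.random()) / n for i in range(n)]  # one draw per bin
    rng.shuffle(u)
    return [NormalDist(mu, sigma).inv_cdf(p) for p in u]

samples = latin_hypercube_normal(1000, mu=50.0, sigma=10.0)
```

Because every equal-probability bin is sampled exactly once, the sample moments adhere to the target distribution far more closely than a purely random set of the same size.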
The complex characteristics of geological materials have led to the development of multiple numerical simulation methods, including:
continuum, discontinuum and hybrid methods (Jing 2003;
Stead et al. 2006).
Continuum methods are the simplest approach to numerical analysis, and
conceptualize the material as a continuous substrate. A constitutive criterion is used to
describe the behavioral characteristics of the material, such as the Mohr-Coulomb
(Wyllie and Mah 2004) or Hoek-Brown (Hoek et al. 2002) relationships. Examples of
numerical approaches employed for continuum analysis include the finite element
method (FEM; Rocscience 2013) and the finite difference method (FDM; Itasca 2011).
The inability of continuum methods to explicitly simulate
displacements along discrete features limits their use in many geomechanical studies.
Discontinuum modelling was first introduced by Cundall (1971) with the advent of
the distinct element method (DEM). The approach simulates the finite displacement and
rotation of discrete deformable and/or rigid blocks, based on constitutive criteria
assigned to block contacts. Examples of the method include UDEC (Itasca 2014), 3DEC
(Itasca 2007) and PFC (Itasca 2008). The method has advantages in the field of rock
mechanics as it can simulate the movement of rock masses composed of discrete,
interlocking blocks. However, the simulation of brittle fracture is limited to the edges of block
contacts, which can reduce the overall kinematic freedom of discontinuum models, as
simulations are unable to model comminution behaviour.
Recent advances in numerical analysis have introduced hybrid modelling codes,
which combine both continuum and discontinuum methods. Examples of codes that use
this approach include ELFEN (Rockfield 2013) and Y-Geo (Mahabadi et al. 2012). The
hybrid method is an intriguing approach to geomechanical simulation as one can model
both the discrete movement of blocks, as well as the comminution and brittle fracture of
geological materials. However, the method further complicates numerical simulation, as
additional input parameters are required; some of which (i.e. fracture energy and
toughness) are often difficult to collect.
objectives, based on the expertise of the practitioner(s) and the encountered failure
mechanism(s) (Hammah and Curran 2009).

The reliability index ($\beta$) is calculated as:

$\beta = \frac{\mu_P}{\sigma_P}$

Equation 2.22

where $\mu_P$ and $\sigma_P$ are the output mean and standard deviation of the performance
function $g(x)$, where the performance function has the property of being greater than or
equal to zero when design performance is satisfactory and less than zero when
unsatisfactory (El-Ramly et al. 2002).
The use of reliability based designs allows engineers to explicitly express the
associated risks with different slope designs to decision makers, whereby business
decisions can be made within the framework of decision analysis (Steffen 1997). The
risk based approach also avoids the occurrence of risk abdication, whereby mine
management avoids the responsibility of designating tolerable risks by accepting
geotechnical designs based on specific factors of safety (Steffen and Contreras 2007).
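Within a Monte Carlo analysis, Equation 2.22 can be applied directly to sampled values of the performance function. In the sketch below, the normally distributed factor of safety is a made-up example, not a result from this thesis:

```python
import random

def reliability_index(perf_samples):
    """Equation 2.22: beta = mean / standard deviation of the sampled
    performance function values; the empirical probability of failure is
    the fraction of samples falling below zero."""
    n = len(perf_samples)
    mean = sum(perf_samples) / n
    var = sum((p - mean) ** 2 for p in perf_samples) / (n - 1)
    beta = mean / var ** 0.5
    pof = sum(1 for p in perf_samples if p < 0) / n
    return beta, pof

# Hypothetical performance function P = FS - 1, with the factor of
# safety drawn from Normal(1.3, 0.15) by Monte Carlo simulation.
rng = random.Random(1)
perf = [rng.gauss(1.3, 0.15) - 1.0 for _ in range(20000)]
beta, pof = reliability_index(perf)
```

For this example beta is close to 2, corresponding to a probability of failure near 2 percent under the normality assumption, which can then feed the risk calculation described below.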
Within this framework, the risk associated with an event is defined as the product of its
probability of occurrence and its consequence:

$R(\mathrm{event}) = P(\mathrm{event}) \times C(\mathrm{event})$

Equation 2.23

Where multiple events may impact a project and/or design, the overall risk becomes the
summation of the individual risks associated with each event.
Procedures for identifying and characterizing risks are diverse, with extensive
published literature detailing specific methodologies (Henley and Kumamoto 1981). While
differences remain between the specific methodologies, common steps in the process
exist (Australian Geomechanics Society 2000):
1. Hazard identification
2. Assessment of likelihood or probability of occurrence
3. Assessment of consequences
3.
3.1. Abstract
With the increased drive towards deeper and more complex mine designs,
geotechnical engineers are forced to reconsider traditional deterministic design
techniques in favour of probabilistic methods. These alternative methods allow for the
direct quantification of uncertainties within a risk and/or decision analysis framework.
However, conventional probabilistic practices typically discretize geological materials
into discrete, homogeneous domains, with attributes defined by spatially constant
random variables. This is done in spite of the fact that geological media typically display
inherent heterogeneous spatial characteristics. This research applies a geostatistical
approach to the stochastic simulation of spatial uncertainty, known as sequential
Gaussian simulation. The method uses variograms which impose a degree of controlled
spatial heterogeneity on the stochastic system. Simulations are constrained using data
from the Ok Tedi mine site in Papua New Guinea and designed to stochastically vary the
geological strength index and uniaxial compressive strength using Monte Carlo
techniques. Results demonstrate that conventional probabilistic approaches possess a
fundamental flaw compared to geostatistical approaches, as they fail to account for the
spatial dependencies inherent to geotechnical datasets. This results in
erroneous model predictions, which are overly conservative when compared to the
geostatistical results.
Will be revised for submission to Rock Mechanics and Rock Engineering as J.M. Mayer and D.
3.2. Introduction
Geotechnical design projects often suffer from inherent information deficiencies
associated with the difficulties, and often impractical nature, of collecting large datasets
(Read 2009; Read and Stacey 2009). This leads to fundamental design issues, where
geotechnical design must be conducted with incomplete knowledge of the true state of
the system. Under such a paradigm, multiple realizations of the subsurface are often
possible within the framework of the given state of information.
To overcome this
deficiency, reliability and/or probability based methods can be used, whereby uncertainty
in the capacity and demand is explicitly propagated through design calculations (Harr
1996; Duncan 2000; Wiles 2006; Nadim 2007). Within this framework, conventional
practice dictates that the geological medium should be sub-divided into a series of
geotechnical units, whose properties are defined by spatially constant random variables
(Read and Stacey 2009). However, this introduces an underlying uncertainty into the
design process as the scale of data collection and analysis often differ, resulting in data
aggregation issues (Gehlke and Biehl 1934; Yule and Kendall 1950; Clark and Avery
1976; Haining 2003). These issues are then exacerbated by the application of classical
statistical methods and the false assumption of data independence, despite the inherent
spatial variability within natural geological systems (Journel and Huijbregts 1978; Isaaks
and Srivastava 1989; Deutsch 2002). This oversimplification of the spatial heterogeneity
has been shown to result in conservative design practices, with an over-estimation of the
probability of failure (Griffiths and Fenton 2000; Hicks and Samy 2002).
This
phenomenon results from the inability to reproduce realistic failure paths, as the lack of
heterogeneity prevents the development of step-path failures through the weakest areas
of the rock mass (Jefferies et al. 2008; Lorig 2009).
A number of modelling techniques have been proposed to overcome this issue.
These include the explicit modelling of spatial heterogeneity within geomechanical
simulation models (Baczynski 1980; Pascoe et al. 1998; Jefferies et al. 2008; Srivastava
2012), and the use of critical path algorithms for statistical up-scaling of attribute
distributions (Glynn et al. 1978; Glynn 1979; O'Reilly 1980; Shair 1981; Einstein et al.
1983; Baczynski 2000; Baczynski 2008).
This study explores the use of stochastic simulation within the field of slope design in open pit mines. The method is known
as sequential Gaussian simulation (SGS), which uses variograms to constrain spatial
co-dependencies within the dataset (Journel and Huijbregts 1978; Isaaks and Srivastava
1989; Deutsch 2002; Nowak and Verly 2007). Stochastic models are used to construct
multiple realizations of the subsurface geological strength index (GSI) and uniaxial
compressive strength (UCS) attributes at the Ok Tedi mine site in Papua New Guinea.
Stochastic simulations are conducted directly within the geomechanical simulation code
FLAC, which is used to estimate the pit wall stability (Itasca 2011). Results are then
compared with conventional probabilistic and statistical up-scaling techniques to show
the limitations of traditional methods.
The current areal extent of the pit is approximately 2000 by 3000 m, with a
maximum wall height of 800 m (de Bruyn et al. 2011). This wall height is
designated for end-of-life operations; however, a decision is pending to extend it to
1000 m, through a 200 to 300 m pushback of the west wall (de Bruyn et al. 2013). Slope
angles average 40° throughout the current pit, with the proposed cut-back designated at
38° to 39°. Conditions of all the pit walls are generally poor due to the high rates of
weathering associated with the large amount of rainfall within the area.
3.3.1.
Geology
The geology of the site is characterised by a repeating succession of sub-
horizontal sedimentary facies, which have been locally intruded by two igneous bodies
(Figure 3.1; Figure 3.2; de Bruyn et al. 2011; Baczynski et al. 2011). Sedimentary facies
have been separated into three distinct units at the site, including: the Ieru Siltstone,
Darai Limestone and Pnyang Formation (Hearn 1995). The Cretaceous Ieru Siltstone
Formation is characterized by grey, calcareous siltstones, interbedded with minor
medium graded sandstones. The siltstone is overlain by a massive, foraminiferal,
carbonate-rich packstone, mudstone and wackestone unit, referred to as the Darai
Limestone. The limestone varies in thickness
from 50 to 800 m across the site, and structurally underlies the mid-Miocene Pnyang
Formation. The Pnyang Formation is the youngest of the main sedimentary units found
at the site, and is composed of calcareous mudstone and siltstone with limestone.
The boundary between the Ieru Siltstone and Darai Limestone is characterized
across the site by a series of low angle thrust faults, referred to as the Taranaki, Parrots
Beak and Basal Thrust Zones (Figure 3.2; Baczynski et al. 2011). The faults are the
result of uplift associated with the collision of the Australian and Pacific plates
(Fagerlund et al. 2013). The geology is characterized by 20-80 m thick zones of highly
fractured and altered fault gouge, pyrite, magnetite skarn lenses, brecciated
monzodiorite and brecciated siltstone hornfels (de Bruyn et al. 2011). The sedimentary
units dip gently towards the southwest, with all three thrust zones exposed in the west
wall.
[Figure 3.1: map residue removed. The plan shows surface geology with legend units Skarn, Endoskarn, Monzonite Porphyry, Monzodiorite, Gleeson Faults, Gleeson Fracture Zone, Thrust Faults, Pnyang Formation, Darai Limestone and Ieru Siltstone; the A-A' cross-section line; borehole collars; and the Taranaki and Parrots Beak thrust traces, on an Easting-Northing grid.]
Figure 3.1
Plan view of surface geology for the 2011 mining conditions at the
Ok Tedi site. The geotechnical borehole collar distribution is found
to be skewed towards the center of the pit, specifically targeting the
mineralized skarn bodies.
In addition to the three thrust faults, the west wall is cross-cut by two steeply
dipping (70° to 80°) sub-vertical faults, referred to as the western (upper) and eastern
(lower) Gleeson faults (de Bruyn et al. 2013). The faults strike approximately parallel
to the western pit wall. Displacement along the faults has resulted in the formation of a
discrete fracture zone, bound on each side by the respective faults. The rock mass
within the zone is highly disturbed and characterized by weak, very highly fractured or
brecciated rock, with localized stronger material (Baczynski et al. 2011).
The two
bounding faults are characterized by highly brecciated, granular and/or highly plastic
gouge material. They form adversely oriented, high angle faults, which act as possible
release structures for potential slope failures.
[Figure 3.2: cross-section residue removed. The section (A-A', Northing 423850) shows the Gleeson West and East Faults, Gleeson Fracture Zone, Pnyang Formation, Monzonite Porphyry, Skarn, Endoskarn, Monzodiorite, Darai Limestone (Upper and Lower), Taranaki and Basal Thrusts, the original topography, current pit extent and proposed cutback, with elevations from 1000 to 1750.]
Figure 3.2
Sedimentary units have been locally intruded by two igneous bodies, following
regional thrust fault activity (de Bruyn et al. 2013).
The intrusions comprise the Sydney Monzodiorite at the southern end of the pit and the Fubilan Monzonite Porphyry to the
north. The Sydney Monzodiorite is the older of the two intrusions, dating to the Pliocene
(2.6 Ma; Page 1975). The unit is a medium to coarse grained, dioritic intrusive body,
which is generally unmineralized (de Bruyn et al. 2011). In comparison, the younger (1.1
to 1.2 Ma) Fubilan Monzonite Porphyry is mineralized and hosts the main economic
mineralization, along with proximal skarnified bodies (Page 1975).
The unit is a
porphyritic, felsic body, which has caused local skarnification of the Darai Limestone and
extensive potassic alteration of the Ieru Siltstone (Baczynski et al. 2011). Skarn units
are sub-divided into four distinct units, namely: endoskarns, calc-silicate skarns, massive
magnetite skarns, and massive sulphide skarns. In addition to local alteration, igneous
emplacement has resulted in a slight up-doming of sedimentary strata. This has led to
the sedimentary layers having a slight dip into the pit walls.
3.3.2.
Borehole Data
Borehole data are commonly used at mine sites to provide estimates of the
subsurface geomechanical properties, which can later be used to predict the behavior of
proposed engineering designs. This practice typically employs empirical methods, due
to the difficulty of directly measuring parameters at the rock mass scale (Laubscher
1975). These empirical methods include the geological strength index (GSI) and the
rock mass rating (RMR89) system, which attempt to characterize the average block
shape and size, as well as the fracture surface conditions (Bieniawski 1973, 1976; Hoek
et al. 2002). The end result is an estimation of the rock mass strength characteristics
based on degree and type of fracturing. This is typically conducted on a domain basis,
whereby drill core is subdivided into a series of discrete units with similar attributes.
The Ok Tedi mine borehole database was provided by Ok Tedi Mining Ltd.
through SRK Consulting. The database included 153 boreholes, subdivided into 8,178
discrete geotechnical logging intervals. Borehole logging intervals were found to vary
greatly in size, with a range of 0.01 to 64.40 m. The spatial distribution of the borehole
collars is also greatly skewed towards the center of the Ok Tedi pit, coinciding with the
main mineralization targets (Figure 3.1). Logging intervals were characterized by on-site
geotechnical staff using the Laubscher MRMR rock mass classification system and later
transformed by SRK Consulting to the Bieniawski RMR89 system (Bieniawski 1976;
Bieniawski 1989; Laubscher 1990; Jakubec and Laubscher 2000; Laubscher and
Jakubec 2001).
Intact rock strength databases were provided by SRK Consulting for both
laboratory and point load test data. Both datasets provide an estimate of the uniaxial
compressive strength (UCS) for intact rock. However, the laboratory database is limited
for conducting spatial analysis, as only 129 uniaxial and 23 triaxial compressive test
results were available.
3.3.3.
Groundwater Model
The groundwater conceptual model for the Ok Tedi mine site was used to predict
groundwater conditions over the 5 year anticipated life of the west wall cutback. The
general groundwater flow is strongly influenced by the low permeability units at the site.
[Figure 3.3: image residue removed. The figure shows a 3D perspective of the modelled hydraulic conductivity field on an Easting-Northing-Elevation grid, with values ranging from 5.0E-06 down to 1.0E-11.]

Figure 3.3
Pore pressure distributions in the west wall were estimated for both natural and
depressurized conditions by SRK Consulting (SRK 2013b).
The effects of depressurization were assessed under two scenarios:
Scenario II: Active depressurization from the installation of a drainage gallery and
a fan of drain holes from the gallery. Installation is planned to be conducted at
an elevation of 1360 masl. Drainage fans consist of five to nine drain holes up to
200 m in length, increasing in density towards the south. In addition to the
drainage gallery, a set of horizontal drains was also included in the model,
coinciding with the uppermost drains from Scenario I.
[Figure 3.4: image residue removed. Panels show (a) Scenario I, with horizontal drains, and (b) Scenario II, both at 500 m scale; legend: Monzonite Porphyry, Monzodiorite, Skarn, Ieru Siltstone, Pnyang Siltstone, Darai Limestone, Thrust Faults, Gleeson Fracture Zone.]

Figure 3.4
3.4. Methodology
3.4.1. Rock Mass Characterization

The fractured nature of rock masses presents a fundamental challenge when
estimating their mechanical properties. The most common solution to overcome this
issue within the geotechnical community is the use of the Hoek-Brown criterion (Hoek
and Marinos 2007). The method is an empirically derived relationship between the
strength of a rock mass and the degree of observed fracturing (Hoek et al. 2002). The
system is premised on the hypothesis that rock masses fail through sliding and/or
rotation of intact rock blocks (Hoek 1994). For example, a rock mass composed of
angular blocks, with rough discontinuity surfaces will exhibit a larger degree of interparticle locking, and hence stronger rock mass characteristics, than one composed of
smooth-walled, rounded particles. Although limitations exist within the system (Carter et
al. 2007; Brown 2008), the criterion has been widely utilized within the geotechnical
community owing to its ease of use and a lack of suitable alternatives. A full description
of the Hoek-Brown criterion is provided in Appendix A.
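The Hoek-Brown relationships summarized above (and detailed in Appendix A) can be sketched as follows. The constants follow the generalized criterion of Hoek et al. (2002); the trial values are taken from Table 3.2 for the Skarn unit, with D = 0 as adopted in this study:

```python
import math

def hoek_brown_params(gsi, mi, d=0.0):
    """Generalized Hoek-Brown constants (Hoek et al. 2002):
    mb = mi * exp((GSI - 100) / (28 - 14 D))
    s  = exp((GSI - 100) / (9 - 3 D))
    a  = 0.5 + (exp(-GSI / 15) - exp(-20 / 3)) / 6"""
    mb = mi * math.exp((gsi - 100.0) / (28.0 - 14.0 * d))
    s = math.exp((gsi - 100.0) / (9.0 - 3.0 * d))
    a = 0.5 + (math.exp(-gsi / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    return mb, s, a

def sigma_1(sigma_3, ucs, gsi, mi, d=0.0):
    """Major principal stress at failure:
    sigma_1 = sigma_3 + UCS * (mb * sigma_3 / UCS + s)^a."""
    mb, s, a = hoek_brown_params(gsi, mi, d)
    return sigma_3 + ucs * (mb * sigma_3 / ucs + s) ** a

# Skarn unit values from Table 3.2: GSI = 53, mi = 17, UCS = 76 MPa,
# evaluated at a confining stress of 1 MPa (illustrative choice).
strength = sigma_1(sigma_3=1.0, ucs=76.0, gsi=53, mi=17)
```

This makes explicit how the rock mass strength degrades as GSI decreases, which is the behaviour the stochastic GSI fields simulated later in this chapter are intended to capture.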
The method requires defining four parameters, namely: geological strength index
(GSI), intact rock uniaxial compressive strength (UCS), material constant ($m_i$), and a
disturbance factor ($D$). The GSI was estimated from borehole data through conversion
of RMR89 values. Conversion of the majority of RMR89 values utilized the formula (Hoek
1994):

$GSI = RMR_{89} - 5$

Equation 3.1
This relationship, however, breaks down in very poor quality rock masses. To
compensate for this deficiency, GSI values were directly assigned to intervals described
as highly fragmented, crushed and/or decomposed zones within the geotechnical
database. This was conducted according to Table 3.1, constructed by SRK Consulting
for the Ok Tedi mine site (SRK 2012). Values assigned to these highly fractured zones
ranged between 5 and 35 (Table 3.1).
Table 3.1  [Table residue: recovered GSI ranges of 5 - 15, 10 - 20, 15 - 25 and 20 - 35; the corresponding zone descriptions were not recoverable.]
Intact rock uniaxial compressive strength (UCS) was characterized directly from
the Is(50) tensile point-load test results (Table 3.2). Point-load estimates were chosen for
two reasons. First, the dataset was large and broadly distributed throughout the study
region allowing for proper characterization of the spatial structure, unlike the laboratory
test results, which were spatially limited. Second, point-load estimates were characterized
independently from RMR89 estimates, unlike simple hammer tests, which exhibited an
underlying bias based on the condition of the rock mass. This bias is
observed in the Ok Tedi dataset as an increase in the correlation coefficient between the
non-declustered UCS and GSI data from 0.16 with point-load estimates to 0.61 with
hammer test results.
The material constant ($m_i$) is difficult to characterize, as its
estimation requires detailed laboratory test results. As a result, most studies rely on
published empirical estimates based on the lithology (Hoek et al. 2002). Due to this
difficulty, characterization of the spatial structure for the material constant was
impossible based on the current dataset. Values were therefore held constant
throughout the geotechnical domains and were assigned based on previously published
estimates for the site (Table 3.2; Baczynski et al. 2011).
Table 3.2  Geotechnical unit properties (values as recovered from the extraction; n/a marks entries that were not recoverable)

Geotechnical Unit     Density (kg/m³)   mi    GSI   UCS (MPa)
Monzonite Porphyry    2550              24    51    65
Monzodiorite          2550              24    40    46
Endoskarn             3250              17    46    34
Skarn                 4450              17    53    76
Darai Upper           2750              10    45    69
Darai Lower           2740              10    47    65
Ieru Upper            2620              n/a   34    64
Ieru Lower            2620              n/a   53    86
Pnyang                2660              n/a   44    64
(unit not recovered)  2920              n/a   29    72
Estimation of the disturbance factor ($D$) also proved
challenging. This parameter is intended to describe the degradation of near surface rock
mass due to blasting and unloading (Hoek 2012). However, ambiguity exists within the
geotechnical community as to how to apply the disturbance factor. No agreed upon,
concise rules exist as to what value should be used and how it should be zoned away
from the pit wall. As a result, the disturbance factor was ignored throughout this
study and a constant value of 0.0 used. This simplification was considered acceptable
as the analysis is primarily concerned with deep-seated failure, which is not greatly affected by the near surface
degradation.
3.4.2.
3D Geological Model
A three-dimensional geological model of the Ok Tedi site was provided by OTML
through SRK Consulting (Figure 3.5). Geotechnical domain characterization within this
study is based on this geological interpretation.
Three-dimensional geological data provided by OTML were in the DXF file
format. In order to allow for data interpretation using the Maptek software package
Vulcan (Maptek 2013), the data were first converted to the Vulcan triangulation file format
(.00t). This involved using DXF files to define a series of geological boundaries which
were then used to split apart a large cube of the model area into the various geological
units. During this process, geological domains were extended approximately 500 m
towards the west, in order to capture the extent of geomechanical modelling conducted
later. This was done by projecting the sedimentary and fault zone units along their dip,
while preserving their stratigraphic thickness.
Geotechnical boreholes were then projected within the Vulcan software package,
and the associated geological units that they intersected were recorded. This allowed
for the construction of downhole geological profiles for all the geotechnical boreholes,
which matched the future FLAC simulation domains. Although a good overall match was
achieved between the 3D geological domain boundaries and geological borehole logs,
some slight adjustments (<20 m) were required to ensure that the borehole logs
matched the larger scale triangulation files.
[Figure 3.5: image residue removed. The figure shows a three-dimensional view of the geological model on an Easting-Northing-Elevation grid; legend: Skarn, Endoskarn, Monzonite Porphyry, Monzodiorite, Gleeson Fracture Zone, Gleeson Faults, Thrust Faults, Pnyang Formation, Darai Limestone, Ieru Siltstone.]

Figure 3.5  Three-dimensional geological model of the Ok Tedi site.

3.4.3. Stochastic Simulation
Stochastic simulation of the geological strength index (GSI) and uniaxial
compressive strength (UCS) was conducted using the sequential Gaussian simulation
(SGS) algorithm. The algorithm workflow is summarized in Figure 3.6.
Figure 3.6  SGS workflow: pre-processing (declustering, detrending, statistical summarization, normal score (Gaussian) transformation, and normal score variograms), followed by stochastic simulation (sequential Gaussian simulation and normal score back-transformation).
Spatial Declustering

Prior to characterization of the spatial structure, data must first be filtered to
remove spatial sampling biases (Pyrcz and Deutsch 2003). These biases result
from the non-systematic manner of data collection and the underlying geological
processes which control the studied attributes.
The declustered mean is computed by averaging the data over a regular grid of cells:

$\bar{z}_{decl} = \frac{1}{L}\sum_{l=1}^{L}\frac{1}{n_l}\sum_{i=1}^{n_l} z_{i,l}$

Equation 3.2

where $L$ is the number of occupied grid cells, $n_l$ is the number of data falling within cell
$l$, and $z_{i,l}$ are the attribute values in that cell.
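The cell declustering step can be sketched as follows; the coordinates, cell size and attribute values below are fabricated for illustration, and the sketch is restricted to 2D for brevity:

```python
from collections import defaultdict

def cell_declustered_mean(points, cell_size):
    """Cell declustering sketch: bin the data into square cells, compute
    each occupied cell's mean, then average the cell means, so densely
    sampled areas do not dominate the summary statistic."""
    cells = defaultdict(list)
    for x, y, value in points:
        cells[(int(x // cell_size), int(y // cell_size))].append(value)
    cell_means = [sum(v) / len(v) for v in cells.values()]
    return sum(cell_means) / len(cell_means)

# Clustered toy data: four low-GSI samples near (0, 0) and one sample
# far away at (100, 100).
data = [(0, 0, 20), (1, 1, 22), (2, 0, 18), (1, 2, 20), (100, 100, 60)]
naive = sum(v for _, _, v in data) / len(data)      # 28.0
decl = cell_declustered_mean(data, cell_size=10.0)  # (20 + 60) / 2 = 40.0
```

The clustered low values pull the naive mean down to 28, while the declustered estimate of 40 weights the two sampled regions equally, which is the intended correction for preferentially drilled zones such as the Ok Tedi pit centre.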
Detrending
Following cell declustering it is important to filter the large-scale spatial trends
due to their poor reproducibility by the SGS process. This is due to the fact that the SGS
technique reproduces random phenomena assuming data conforms to the first-order
stationary assumption (Journel and Huigbregts 1978). This assumption is referred to as
the intrinsic hypothesis and states that both the mean and variance are dependent
strictly on the data separation distance and not the location of the data (Matheron 1963).
If data do not conform to this assumption due to systematic trends, then trends must be
defined and removed/filtered prior to conducting SGS (Deutsch 2002).
Identification of spatial trends is conducted through exploratory spatial data
analysis techniques, including: semivariogram analysis, average grade profiles, and
ordinary kriging with a high nugget effect (Vieira et al. 2010). The use of average grade
profiles is the simplest and often first means of trend identification.
It involves the
examination of averaged data along one, two or three dimensional profiles (Isaaks and
Srivastava 1989; Deutsch 2002). Once identified, trends can then be characterized
using moving average techniques, kernel estimation and/or ordinary kriging with a high
nugget effect (Hallin et al. 2004; Nowak and Verly 2007).
Following identification and characterization, the most common way to deal with
trends is to first remove them, then simulate the residuals, and finally add the trend back
to the simulated results (Vieira et al. 1983; Vieira et al. 2002; Blackmore et al. 2003;
Jenson et al. 2006). This filtering process commonly employs a number of techniques
including: subdividing the data into a series of domains (Deutsch 2002), linear
regression with a correlated variable (Phillips et al. 1992) and polynomial trend analysis
(Vieira et al. 2010).
Analysis of the spatial trends within the Ok Tedi dataset identified the influence of
the Gleeson fracture zone, which affected GSI estimates from all geotechnical units
within the western pit wall. To remove this trend, data were filtered using a constant
ratio of 0.81, which is equal to the average decrease of GSI values within the zone.
Residuals obtained from the filtering process were used for the remainder of the SGS
process and the trend added back following simulation.
Normal Score Transformation

Following detrending, the data must be transformed into standard normal space. Assuming a constant trend, the process involves assigning a standard normal score to each datum such that the cumulative frequencies of both the normal score and attribute are identical (Chilès and Delfiner 1999). This transformation process is conducted either graphically from the modelled cumulative density function (CDF) or by defining a transformation function using a polynomial expansion (Castrignanò et al. 2009).
The Ok Tedi data were transformed by first assigning distribution models to the
studied attributes prior to the normal score transformation. This was done to smooth the
data and have the transformation better reflect the likely underlying sample distribution.
Bimodal normal and Weibull distributions were used for the GSI and UCS, respectively.
Standard normal score values were then assigned to datum based on cumulative
frequencies from the modelled CDFs (Figure 3.7).
The transformation was conducted within the Microsoft software package EXCEL. A look-up table constructed from the modelled CDFs allowed back-transformation of normal scores to GSI and UCS values following SGS simulation. The look-up table is accurate to +/- 0.01 in normal score space.
Figure 3.7	Modelled cumulative distribution functions for the studied attributes (cumulative frequency, 0% to 100%, vs. attribute value).
Correlogram Analysis
Accurate characterization of the underlying spatial structures is the foundation of
any geostatistical analysis involving kriging and/or SGS (Clark 1979; Isaaks and
Srivastava 1989). The standard method within geostatistics used to characterize this
structure is semivariogram analysis, which measures the spatial dissimilarity vs. distance. Since it is assumed that closely spaced data are more closely related than distant data, semivariograms should display increased dissimilarity with distance, until the point at which no obvious correlation exists between data values.
Classic semivariogram analysis was, however, replaced by correlogram analysis in this study, as correlograms provide greater continuity between the statistical modelling and stochastic simulation: the kriging/SGS process requires the direct input of covariance vs. distance models (Journel and Huijbregts 1978). For these reasons, spatial analysis at Ok Tedi was conducted utilizing correlograms.
Correlogram analysis was conducted by first calculating average correlation coefficients vs. distance. The correlation coefficient at each lag was calculated using the following formula:

\rho(h) = \frac{\sum w_0 w_h \, y_0 y_h}{\sum w_0 w_h}    Equation 3.3

where y_0 and y_h are the normal score values separated by the lag distance h, w_0 and w_h are the declustered weights, and \rho(h) is the correlation coefficient at the specified lag distance. Correlogram models were then fitted to the experimental points using least-squares regression techniques within the Microsoft software package EXCEL. GSI continuity was modelled with two nested structure models with zero nugget effect, while UCS continuity was modelled using an exponential model and relatively high nugget effect (Table 3.3). Models were constrained to reproduce a dispersion variance of 1.0 within the simulation area (Journel and Huijbregts 1978). A complete summary of the exponential correlograms can be found in Appendix B.
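A minimal sketch of the declustering-weighted correlogram calculation described above (pairwise form of Equation 3.3); the coordinates, lag tolerance and names are illustrative:

```python
import math

def correlogram(coords, scores, weights, lags, tol):
    """Declustering-weighted correlogram of normal scores: for each lag h,
    rho(h) = sum(w_i * w_j * y_i * y_j) / sum(w_i * w_j) over all data
    pairs whose separation falls within h +/- tol."""
    n = len(coords)
    rho = []
    for h in lags:
        num = den = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                if abs(math.dist(coords[i], coords[j]) - h) <= tol:
                    ww = weights[i] * weights[j]
                    num += ww * scores[i] * scores[j]
                    den += ww
        rho.append(num / den if den else float("nan"))
    return rho

# Three colinear points with alternating scores: anti-correlated at
# lag 1, correlated at lag 2.
r = correlogram([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)],
                scores=[1.0, -1.0, 1.0], weights=[1.0, 1.0, 1.0],
                lags=[1.0, 2.0], tol=0.1)
```

A production implementation would bin pairs by direction as well as distance to detect anisotropy; this sketch is omnidirectional for brevity.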
Table 3.3	Correlogram models for each geotechnical unit. GSI continuity is described by two nested exponential structures with zero nugget; UCS continuity by a single exponential structure with a nugget effect.

                      GSI                                     UCS (MPa)
                      Exp. Model I       Exp. Model II        Exp. Model
Geotechnical Unit     Sill   Range (m)   Sill   Range (m)   Nugget   Sill   Range (m)
Monzonite Porphyry    0.61   41          0.44   489         0.30     0.72   128
Monzodiorite          0.49   49          0.57   434         0.38     0.66   214
Endoskarn             0.69   38          0.32   149         0.47     0.55   97
Skarn                 0.88   52          0.14   335         0.74     0.26   81
Darai Upper           1.00   24          0.00   381         0.00     1.01   37
Darai Lower           0.81   43          0.25   1000        0.54     0.50   369
Ieru Upper            0.76   43          0.29   630         0.21     0.82   143
Ieru Lower            0.86   88          0.18   614         0.25     0.81   318
Pnyang                1.00   27          0.00   381         0.21     0.82   143
Thrust Faults         0.92   40          0.10   513         0.27     0.75   107
1. Define a random sequence (path) visiting each of the grid nodes to be simulated.
2. Visit the first node in the sequence and simulate a value by a random draw from a conditional distribution derived from simple kriging.
3. The simulated value becomes part of a conditioning set.
4. Visit the next node in the sequence and simulate the studied attribute using
both original and simulated values for conditioning.
5. Repeat step 4 until all nodes have been visited.
While the method preserves the spatial structure defined by the semivariogram, there
are two main possible limitations of the method that need to be taken into consideration
(Vann et al. 2002). First, the simulation area must be greater than the range of the
defined spatial dependency model, otherwise the full spatial structure of the model will
not be preserved by the simulation. Next, an adequate number of neighboring data
points must be used during conditioning, or the simulation will heavily favor the short lag
trend in the spatial model.
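The sequential simulation loop can be sketched in simplified form. This illustrative 1D version assumes unit-variance normal scores and an exponential covariance, and conditions each node on only its single nearest previously simulated node, a deliberate simplification of the full kriging neighbourhood discussed above:

```python
import math, random

def sgs_1d(xs, cov_range, seed=0):
    """Minimal 1D sequential Gaussian simulation in normal-score space.
    Assumes unit variance and an exponential covariance
    C(d) = exp(-3 d / cov_range). Each node is conditioned on its single
    nearest previously simulated node, for which simple kriging reduces
    to mean = C(d) * y and variance = 1 - C(d)**2."""
    rng = random.Random(seed)
    path = list(range(len(xs)))
    rng.shuffle(path)                      # random visiting sequence
    sim = {}
    for i in path:
        if not sim:                        # first node: unconditional draw
            sim[i] = rng.gauss(0.0, 1.0)
            continue
        j = min(sim, key=lambda k: abs(xs[k] - xs[i]))  # nearest simulated node
        c = math.exp(-3.0 * abs(xs[j] - xs[i]) / cov_range)
        mean, var = c * sim[j], 1.0 - c * c
        sim[i] = rng.gauss(mean, math.sqrt(var))        # conditional draw
    return [sim[i] for i in range(len(xs))]

# A 50-node realization on a regular grid with a 30 m range.
xs = [float(i) for i in range(50)]
realization = sgs_1d(xs, cov_range=30.0)
```

With a very long covariance range the realization is almost perfectly correlated and therefore nearly constant, which illustrates the first limitation noted above: the simulation area must exceed the range for the full spatial structure to be reproduced.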
SGS was conducted within this study using FISH routines written to conduct the simulation directly within the software package FLAC (Figure 3.8; Itasca 2011). A general version of the SGS FISH algorithm is provided in Appendix C. Verification of the code can be found in Appendix D. Simulations were conducted in normal-score space and back-transformed to parameter space following stochastic simulation, with the previously removed Gleeson fracture zone trend added back to the results. GSI and UCS simulations were conducted independently due to the poor correlation coefficient between the two parameters (r = 0.19).
Figure 3.8	Example GSI realization along the east-west section at Northing 423850 (Easting vs. Elevation), with geotechnical domain boundaries shown.
3.4.4. Pore Pressure Prediction

Pore pressure distributions were defined to coincide with the mining stage simulated in the later FLAC models. Distributions were
obtained by exporting FEFLOW simulation results along a 2D east-west cross-section at
a northing of 423850. However, due to the limited extent of the groundwater simulation
around the west wall, pore pressure predictions had to be extended to include the entire
extent of the FLAC simulations (Figure 3.9). This was conducted for three zones:
Western Zone: An average water table height above the top of each thrust zone
was estimated for the three main aquifers within the west wall (i.e. Taranaki,
Parrots Beak and Basal aquifers). Pore pressures were then estimated for the
FLAC nodes which were located greater than 200 m west of the FEFLOW model.
This was done using the top elevation of the thrust zones and the average height
of the aquifers.
Sub-Central Zone: Pore pressures were estimated assuming hydrostatic conditions, from the base of the overlying Basal aquifer. A 100 m gap
was left in the predictions on either side of the Gleeson fracture zone. Pore
pressures within this gap were later estimated using a linear interpolation, which
caused predicted pressures to mimic the overlying step induced from fault
compartmentalization.
Eastern Zone: A theoretical groundwater distribution was constructed for FLAC
nodes located greater than 200 m east of the FEFLOW model. These pressures
are considered to be an estimate only, as there was little information provided in
the groundwater modelling reports pertaining to pressures on this side of the pit.
Following pore pressure prediction within the three zones, linear interpolation techniques
were employed to estimate pressures between the zones. Although this final pressure
distribution is a simplification of reality, its overall effect on the FLAC geomechanical
simulations is minimal due to failure being concentrated near the western pit wall within
the FEFLOW model region.
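The hydrostatic estimates and the linear interpolation between zones can be sketched as follows; the unit weight of water and the coordinates used are illustrative assumptions:

```python
def hydrostatic_pressure(water_table_elev, node_elev, gamma_w=9810.0):
    """Hydrostatic pore pressure (Pa) at a node below an assumed water
    table, using the unit weight of water (~9810 N/m^3); zero above it."""
    return max(0.0, gamma_w * (water_table_elev - node_elev))

def interpolate_pressure(x, x_west, p_west, x_east, p_east):
    """Linear interpolation of pore pressure between two predicted
    values, as used to bridge the gaps between the Western, Sub-Central
    and Eastern zone predictions."""
    t = (x - x_west) / (x_east - x_west)
    return p_west + t * (p_east - p_west)

# A node 100 m below an assumed water table at 1500 m elevation.
p = hydrostatic_pressure(water_table_elev=1500.0, node_elev=1400.0)
```

The interpolation is the same device used across the 100 m gaps flanking the Gleeson fracture zone, which is why the interpolated pressures mimic the step in the flanking predictions.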
Figure 3.9	Predicted pore pressure distribution (0.0 to 1.1 × 10⁷ Pa) along the section at Northing 423850 (Easting vs. Elevation), showing the FEFLOW model region and the Western, Sub-Central and Eastern zones.
3.4.5. Geomechanical Simulation

FLAC is a two-dimensional, finite difference continuum code (Itasca 2011). A criterion bridging the two extremes of the rock competence scale was proposed by Carter et al. (2008). This criterion facilitates the transition from linear soil-like behaviour to non-linear rock mass type behaviour using a transition function. This relationship was incorporated into the FLAC simulations through a user-defined FISH function. A full description of the transition function is provided in Appendix A.
Spatial heterogeneity was incorporated into simulations using the SGS process. This ensured that unique GSI and UCS values were associated with each individual grid zone, allowing unique Hoek-Brown mb, s and a values to be assigned. Disturbance (D) factors were ignored in the simulations as the purpose was to explore deep-seated failure.
Models were assessed by conducting a shear strength reduction (SSR) analysis once steady-state conditions had been achieved (Matsui & San 1992; Dawson et al. 1999; Hammah et al. 2005; Hammah et al. 2006; Diederichs et al. 2007a). This was done in order to calculate the critical strength reduction factor (SRF), which is equivalent to the factor of safety in classical limit equilibrium analysis. Simulations employed Monte Carlo sampling techniques, with 100 trials conducted within each simulation round. This allowed for derivation of the SRF distribution and estimation of the probability of failure. One round of 100 models took approximately 8 days to complete on a 3.4 GHz PC with 16 GB of RAM.
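The probability-of-failure estimate from a round of SRF trials can be sketched as below, assuming (as observed later in the chapter) that SRF results are approximately normally distributed; the sample values are illustrative:

```python
from statistics import NormalDist, fmean, stdev

def probability_of_failure(srf_trials):
    """Probability of unsatisfactory performance, estimated by fitting a
    normal distribution to the Monte Carlo SRF results and evaluating
    P(SRF < 1.0), since a critical SRF below unity implies failure."""
    dist = NormalDist(mu=fmean(srf_trials), sigma=stdev(srf_trials))
    return dist.cdf(1.0)

# Illustrative round of trials with a mean SRF near 1.45.
pof = probability_of_failure([1.4, 1.5, 1.45, 1.55, 1.35])
```

With a mean well above 1.0 and a small standard deviation, the fitted tail probability is vanishingly small, which mirrors the very low probabilities of failure reported in the results.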
3.4.6. Critical Path Analysis

Open pit mines continue to trend towards deeper and more complex designs (Read and Stacey 2009). This has forced
geotechnical engineers to consider methods other than traditional deterministic
techniques, which can characterize the inherent uncertainty associated with increased
mine complexity. As a result, a renewed interest exists within the field towards more
probabilistic and/or risks based practices (Steffen 1997; Terbrugge et al. 2006; Steffen
2007; Steffen et al. 2008). This paradigm shift and increased focus on the associated
project risks, requires an appreciation for both the probability of an unacceptable event
occurring, as well as the associated consequences of the event (Yoe 2011). The first
stage in understanding these consequences requires the ability to assess the size of a
potential failure.
This study applied a novel approach to estimate the failure size through the use
of network analysis based techniques. This approach estimates the critical failure area
through minimum distance analysis of shear strain rates obtained from numerical
simulation. This first involved inverting the shear strain rate values to construct an inverse shear strain rate matrix. Dijkstra's (1959) algorithm was then used to estimate
minimum paths through this matrix, for each of the simulations, between the pit face and
rear slope crest (Figure 3.10). This was conducted for each boundary node along the
toe and slope of the modelled open pit. Minimum paths were then assessed based on
average inverse shear strain rates, with the lowest average rate path determined to be
the critical failure path. Summary statistics were then calculated for the GSI and UCS
along the identified path, which gave an indication of the shear strength along the
surface.
Critical paths were also used to estimate the size of potential failures, by
calculating the total area between the critical failure surface and slope face. A detailed
description of the critical path algorithm is provided in Appendix E.
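The minimum-path step can be sketched with Dijkstra's algorithm over a small grid of hypothetical iSSR costs (4-connected moves; all names and values are illustrative):

```python
import heapq

def min_cost_path(cost, start, goals):
    """Dijkstra's algorithm over a 2D grid of inverse shear strain rate
    (iSSR) costs. Accumulates the cost of each cell entered and returns
    the cheapest path from `start` to any cell in `goals` as a list of
    (row, col) cells."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    goals = set(goals)
    while heap:
        d, cell = heapq.heappop(heap)
        if d > dist.get(cell, float("inf")):
            continue                       # stale queue entry
        if cell in goals:                  # cheapest goal reached: rebuild path
            path = [cell]
            while cell in prev:
                cell = prev[cell]
                path.append(cell)
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None

# Weak (low-iSSR) cells along the edges attract the path away from the
# strong (high-iSSR) column in the middle.
cost = [[1, 50, 1],
        [1, 50, 1],
        [1,  1, 1]]
path = min_cost_path(cost, (0, 0), [(0, 2)])
```

Running this from every toe and slope cell, and ranking the resulting paths by average iSSR, reproduces the selection of the critical failure path described above.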
Critical path density plots were constructed from the estimated failure path
results to give an indication of the critical failure surface distribution. This involved
estimating nodal intersection probabilities for each of the FLAC grid cells, measured as
the probability of a critical path intersecting a specified node. For example, if five critical
paths out of the total of 100 Monte Carlo simulations intersected a grid node, the
intersection probability at that node would be 0.05. Nodal probabilities were then exported to ArcGIS and ordinary kriging techniques utilized to interpolate a failure path
density. The resultant kriged surface gave an indication of the distribution of failure
paths within the FLAC simulations.
Figure 3.10	Example of the critical path analysis over a grid of model cells, showing crest, slope/toe and starting cells, inverse shear strain rate (iSSR) costs, and the identified critical path.
3.4.7. Statistical Up-Scaling
One of the difficulties in utilizing the stochastic simulation techniques is the data
intensive analysis that must be conducted to characterize and simulate the spatial
structure. While this can be considered an ideal to strive for, it is not always practical or
possible due to both time and data constraints. Therefore, a number of researchers
have proposed the use of critical path algorithms to up-scale attribute distributions from
the borehole to domain scale (Glynn et al. 1978; Glynn 1979; Shair 1981; Einstein et al.
1983; Baczynski 2008). These algorithms identify critical failure paths through a synthetic rock material, using either minimum distance (O'Reilly 1980) or stochastic step-path generation (Baczynski 2000) techniques. Strength attributes are then summarized for the paths and incorporated into geomechanical software packages. To
test this general methodology, a software package was developed to determine critical path attributes using minimum path analysis. The general algorithm involves the following steps:
1. A two-dimensional grid representing the geotechnical domain is defined.
2. GSI and UCS values are assigned to the simulation area using the sequential
Gaussian simulation algorithm described in Section 3.4.3. This requires a user-specified variogram model for both geotechnical attributes.
3. Hoek's global rock mass strength values (\sigma'_{cm}) are then assigned to each node based on the simulated attributes:

\sigma'_{cm} = \sigma_{ci} \cdot \frac{\left( m_b + 4s - a(m_b - 8s) \right) \left( m_b/4 + s \right)^{a-1}}{2(1+a)(2+a)}    Equation 3.4

where m_b, s and a are the Hoek-Brown constants calculated from the simulated GSI attribute, and \sigma_{ci} is the simulated uniaxial compressive strength.
4. Dijkstra's (1959) algorithm is then used to calculate the critical paths through the
simulation area, based on a minimum distance analysis of global rock mass
strength values.
5. GSI and UCS values from nodes along the critical path are then averaged to give
an indication of the overall strength of the weakest path through the simulation.
Up-scaled GSI and UCS values are then incorporated into geomechanical simulations
as single variables assigned uniformly across geotechnical domains.
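Step 3 can be implemented directly from the Hoek et al. (2002) relationships for mb, s, a and the global rock mass strength; the input values in the usage line are illustrative:

```python
import math

def hoek_brown_global_strength(ucs, gsi, mi, D=0.0):
    """Hoek's global rock mass strength (Hoek et al. 2002). mb, s and a
    are derived from GSI, the intact material constant mi and the
    disturbance factor D (D = 0 here, as disturbance was ignored)."""
    mb = mi * math.exp((gsi - 100.0) / (28.0 - 14.0 * D))
    s = math.exp((gsi - 100.0) / (9.0 - 3.0 * D))
    a = 0.5 + (math.exp(-gsi / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    num = (mb + 4.0 * s - a * (mb - 8.0 * s)) * (mb / 4.0 + s) ** (a - 1.0)
    return ucs * num / (2.0 * (1.0 + a) * (2.0 + a))

# Illustrative node: UCS = 100 MPa, GSI = 50, mi = 10.
sigma_cm = hoek_brown_global_strength(ucs=100.0, gsi=50.0, mi=10.0)
```

Because the rock mass strength increases monotonically with GSI, minimum-distance analysis on these node values naturally routes the critical path through the weakest simulated nodes.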
The proposed algorithm was used to conduct three separate simulations. This
includes:
The simulation of each geotechnical unit independently, and the GSI and
UCS statistics summarized accordingly.
The co-simulation of all geotechnical units into a single large matrix, which
was then used to find an overall weakest path. GSI and UCS values were
then averaged for each of the geotechnical units along the path. This allowed
for co-dependencies between units to be taken into consideration during rock
mass failure.
A roughness-corrected simulation, in which an average step-path roughness angle is calculated for the critical path as:

i_{roughness} = \arctan \left( \frac{L_{vert}}{L_{horiz}} \right)    Equation 3.5

where L_{vert} and L_{horiz} are the total lengths of the step-path in the vertical and horizontal directions.
However, this approach was adopted as no alternative robust methodologies exist within the literature.
Figure 3.11	Plots of the running average (a) mean and (b) standard deviation in SRF results vs. the number of simulation trials are used to estimate when the Monte Carlo simulation results become stable. The results suggest that the required number of simulations is inversely proportional to the degree of spatial autocorrelation.

3.5.1. General Observation
Incorporation of spatial heterogeneity into continuum simulations resulted in a marked change in failure development. Whereas homogeneous models allow failure to develop indiscriminately anywhere in the rock mass, heterogeneous models restricted failure to the weakest areas, resulting in step-path geometries. This fundamental behaviour
shift resulted in a reduction of the SRF from 1.63 in the deterministic simulation, to an
average of 1.45 within the SGS simulations (Figure 3.12). The observed reduction is
consistent with previous research (Griffiths and Fenton 2000; Hicks and Samy 2002;
Jefferies et al. 2008). SGS models also suggest a relatively tight constraint on SRF
values, with results conforming to a normal distribution with a standard deviation of 0.08.
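The Monte Carlo convergence check illustrated in Figure 3.11 (running mean and standard deviation vs. trial count) can be sketched as below; the trial values are illustrative:

```python
from statistics import fmean, stdev

def running_stats(srf_trials):
    """Running mean and standard deviation of the SRF results after each
    Monte Carlo trial; simulation is stopped once both curves flatten."""
    means, sds = [], []
    for k in range(2, len(srf_trials) + 1):
        means.append(fmean(srf_trials[:k]))
        sds.append(stdev(srf_trials[:k]))
    return means, sds

means, sds = running_stats([1.4, 1.5, 1.6, 1.5])
```

Plotting both series against the trial count reproduces the stability check of Figure 3.11.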
Examination of the failure path within the SGS simulations confirms that damage
is preferential in the weaker areas of the rock mass (Figure 3.13). Critical path statistics
indicate an average reduction of 14% and 32% in the GSI and UCS compared to
average values in the western pit wall. This is consistent with previous research into the effects of heterogeneity, which has observed this preferential failure behaviour (Lorig 2009; Jefferies et al. 2008; Srivastava 2012).
Previous two-dimensional, geomechanical simulation results from the central pit
area estimated safety factors between 1.25 and 1.40, based on Slide (Rocscience
2014), GALENA (Clover 2010), and UDEC (Itasca 2014) modelling (Baczynski et al.
2011). Comparison of this previous work with FLAC simulations suggests relatively
good agreement between the various analyses, given the varying methods for deriving
rock mass strengths. The slightly higher deterministic SRF estimation from the FLAC
simulations can be attributed to the use of median-value rock mass strengths compared to the best-engineering-judgement values used in previous work.
Figure 3.12	Cumulative distribution of critical SRF results from the SGS simulations.
Figure 3.13	Distribution of GSI and UCS reductions along the critical failure path. GSI and UCS attributes are found to be reduced along the critical failure path compared to west wall averages. A mean reduction of 14% and 32% was found in the GSI and UCS, respectively.

3.5.2. Failure Path Characteristics
Critical failure paths are predominantly deep-seated, with daylighting typically occurring at the toe of the slope (Figure 3.15), although minor variations exist, including shallow pit wall failures and deep-seated circular failures. In
addition, failure paths are found to concentrate exclusively within the western pit wall,
due to the increased slope heights and on average lower GSI values (Figure 3.8).
Failure area estimates suggest a mean area of 2.29 × 10⁵ m², with a standard deviation of 7.82 × 10⁴ m² (CoV = 34%; Figure 3.14).
Geological controls on failure path development are rather limited, with the
exception of breakout in the lower toe of the slope (Figure 3.15). This behaviour is
attributed to weaker material associated with the Gleeson fracture zone, concentrating
strain at the base of the model. Exceptions exist where stronger than average properties are simulated within the zone; in these cases, either deep-seated rotational or shallow pit failures are observed.
Transition zones are also found to form between the active and passive blocks, facilitating quasi-rotational
failure (Figure 3.16).
With the exception of the fracture zone, failure does not appear to be
substantially dominated by any other geological units (Figure 3.15). This indiscriminate
nature of failure development can be attributed to the sub-horizontal orientation of
sedimentary layering, and the similar geotechnical characteristics between units.
In
addition, thrust zones do not appear to exhibit a major influence on the failure
mechanism, due to their westward dip away from the pit wall.
Figure 3.14	(a) Estimated failure areas (200,000 to 280,000 m²) and (b) coefficients of variation in failure length for the different modelling approaches (Zero Autocorrelation, Conventional Probabilistic, Up-scaling: Independent, Up-scaling: Dependent, Up-scaling: Roughness, and Drainage Tunnel).
Figure 3.15	Critical failure path distribution overlain on the geotechnical units (Monzonite Porphyry, Monzodiorite, Skarn, Darai Limestone (upper/lower), Ieru Siltstone (upper/lower), Pnyang Siltstone, Thrust Faults and the Gleeson Fracture Zone).
Figure 3.16	Shear strain rate contours (1E-14 to 1E-6) showing transition zones between the active and passive blocks and the lower critical failure path.
3.5.3. Comparison with Conventional Probabilistic Methods

Conventional probabilistic simulations assume geotechnical domains are spatially homogeneous, with attributes defined by single random variables (Read and Stacey 2009). In order to compare this approach with the proposed SGS method, a
series of conventional geomechanical simulations were conducted, which utilized the
declustered domain statistics. Simulations were conducted by selecting two standard
normal deviates for each of the geotechnical units, representing GSI and UCS values.
Normal score transformation functions were then used to obtain GSI and UCS attributes
from the deviates. Simulated values were then assigned uniformly to all nodes within
the geotechnical domain. All other geotechnical attributes (e.g. mb) were kept constant
during the simulations.
The simulation results suggest that the conventional approach over-predicts both
the SRF mean and variance compared to the SGS method (Figure 3.17).
This is
observed by an increase in both the SRF mean (1.58 vs. 1.45), and standard deviation
(0.29 vs. 0.08), and resulted in an over-prediction of the probability of unsatisfactory
performance by nearly seven orders of magnitude.
unsatisfactory performance may not be conservative in all cases. For example, overestimation of the mean tends to promote optimistic designs due to the upward translation
of the critical SRF distribution.
variance increases the spread of the distribution, leading to overly conservative designs.
This complex interaction process makes it difficult to define comprehensive rules to
describe the negative effects of conventional probabilistic techniques.
A comparison of the critical area estimations between the conventional and SGS methods indicates the same means (2.29 × 10⁵ m²) but different coefficients of variation (41% vs. 34%; Figure 3.14). This variation can be attributed to two factors:
These two effects result in a fundamental difference in the underlying failure mechanics,
resulting in a profound alteration in both the SRF statistics and failure path location.
Figure 3.17	Comparison of cumulative SRF distributions between the conventional probabilistic and SGS methods.
Figure 3.18	Comparison of critical failure path distributions for the different modelling approaches (including the conventional method, dry model, horizontal drainholes, drainage tunnel, and the independent, dependent and roughness up-scaling methods), overlain on the geotechnical units.
3.5.4. No Spatial Autocorrelation
While conventional probabilistic techniques assume perfectly autocorrelated attributes, the opposite end-member assumption is zero spatial autocorrelation. Simulations conducted without spatial autocorrelation indicate that the non-autocorrelated method over-predicts the mean, while at the same time under-estimating the variance. This results in an under-estimation of the probability of unsatisfactory performance by several orders of magnitude.
Critical path distribution estimates show a tighter confinement of failure paths within the non-autocorrelated method compared to the SGS method (Figure 3.14; Figure 3.18). The observed variation can be attributed to the increased clustering in rock mass strength attributes through the incorporation of the spatial autocorrelation structure. This affects the location of the critical failure path, with increased dispersion observed within the SGS models as the failure path is forced to by-pass the larger clusters of competent rock. No such clustering exists within the non-autocorrelated models, resulting in a reduction in critical path deviations. The discrepancy between the models illustrates the need to properly define the spatial structure: even though both methods have the same attribute statistics, differences in the spatial structure drastically change the underlying failure path mechanisms.
Figure 3.19	Comparison of cumulative SRF distributions between the SGS and non-autocorrelated simulations.
3.5.5. Effect of Groundwater
The characterization and management of groundwater is a key component of
large open pit design, as its effects are often detrimental and lead to increased wall
instability and higher operating costs (Beale 2009). This is due to the strength reduction
that occurs from elevated fluid pressures, as a result of a reduction in the effective stress
(Rutqvist and Stephansson 2003; Wyllie and Mah 2004).
Two depressurization scenarios were examined within this study: horizontal drainholes and a depressurization tunnel (Figure 3.4). A full description of the two scenarios can be found in Section 3.3.3.
A comparison of the wet vs. dry conditions indicates that, as expected, wet
conditions result in a reduction in the SRF. This reduction was found to be 0.14 on
average, with the mean SRF reduced from 1.59 to 1.45 (Figure 3.20). The variability
with both simulations was found to be similar, with standard deviations of 0.08 and 0.09,
respectively. As a result, both scenarios can be considered stable with a relatively high degree of confidence, as the probability of failure for both is extremely low (Dry = 10⁻⁹ %, Wet = 10⁻⁷ %).
Figure 3.20	Comparison of cumulative SRF distributions between the wet and dry simulations.
Critical path analysis suggests that the inclusion of groundwater into the SGS
simulations results in a deeper seated failure path (Figure 3.14). This is observed by an
increase in the average failure size from 2.09 × 10⁵ m² in the dry models to 2.29 × 10⁵ m² in the wet models. In addition, the inclusion of groundwater resulted in the critical path
being drawn deeper into the slope due to elevated pore pressures at depth.
The
elevated pressures also result in a slight increase in the critical path dispersion, due to
the increased likelihood of deep seated failures. From a risk analysis perspective, this
increased failure depth needs to be taken into consideration, as although the probability
of the event decreased, the consequences are increased. As a result, the overall risk
reduction may not be as drastic as initially suggested by the SRF reduction.
Figure 3.21	Comparison of cumulative SRF distributions between the no depressurization, horizontal drainhole and drainage tunnel scenarios.
Results indicate that the drainage tunnel is more effective than the horizontal drains (mean SRF = 1.58 vs. 1.53; Figure 3.21). The relative variability within all three scenarios was found to be the same (σ = 0.08). Similar to the
wet vs. dry scenarios, active depressurization leads to the development of deeper
seated failures (Figure 3.18). A key change also occurs in the mode of failure, with an
increased likelihood of deep rotational failure for both depressurization scenarios. This
represents a fundamental shift in the failure mechanism, with failure transitioning from
toe breakout in the Gleeson fracture zone toward a deeper failure with breakout in the
Monzodiorite. A further shift also occurs in the depressurization tunnel scenario, with toe
break-out in the Gleeson fracture zone occurring from a combination of deeper seated
failure combined with slip along the Parrots Beak thrust, as opposed to classic rotational
toe failure.
3.5.6. Statistical Up-Scaling
Figure 3.22
Comparison of SRF results between the SGS and critical path up-scaling methods. The results suggest the critical path algorithms fail to fully capture the effects of spatial heterogeneity on geomechanical models. Up-scaling results suggest a mean SRF of 1.35, 1.33 and 1.33, with a standard deviation of 0.24, 0.17 and 0.22 for the independent, dependent and roughness methods, respectively.
3.6. Discussion
3.6.1. Issues with Conventional Statistical Approaches

Two main issues arise when spatial dependencies are ignored within geomechanical simulations. These issues include scale-effects associated with spatial data aggregation, and the preferential accumulation of strain within weaker areas of the rock mass (Gehlke and Biehl 1934; Haining 2003; Jefferies et al. 2008; Lorig 2009).
The spatial data aggregation issue results in scale dependencies arising in the
sample variance due to spatial averaging effects (Gehlke and Biehl 1934; Isaaks and
Srivastava 1989; Deutsch 2002; Haining 2003). Typically, the variance demonstrates an
inverse relationship with the scale of study (Journel and Huijbregts 1978). The classic
geological example of this phenomenon is the distribution of copper grades at the grain
vs. the hand sample scales. At the smaller of the two scales, samples exhibit a larger
degree of variance, with copper distributions split into two distinct populations (e.g.
copper abundant and deficient grains). However, as the scale of study increases, so too does the amount of spatial aggregation. This aggregation reduces the observed variance, as results reflect an average of copper abundant and deficient grains. While copper grade distributions provide the classic example of this phenomenon, the behaviour is common to other geological attributes. The implication for geotechnical slope design studies is that the variance at the geotechnical domain scale
likely differs from the dispersion variance observed at the data collection scale (Isaaks
and Srivastava 1989; Deutsch 2002). This presents an issue for practicing geotechnical
engineers, as classic statistical methods are commonly incorrectly applied to
engineering design problems (Harr 1996; Duncan 2000; Wiles 2006; Nadim 2007).
The second issue that arises from spatial dependencies is the preferential
accumulation of strain within weaker areas of the rock mass, which results in a drift in
the mean during shifting scales of study (Jefferies et al. 2008; Lorig 2009).
This behaviour is analogous to that observed in groundwater systems, where scale-effects arise from preferential flow along high hydraulic conductivity (K) units, resulting in an upward drift in the mean away from theoretical multi-log-normal predictions (Sánchez-Vila et al. 1996).
In addition to
statistical effects, discrepancies in the failure dynamics can occur when heterogeneity is
explicitly excluded, as conventional approaches result in an over-smoothed failure
surface compared to SGS simulations (Figure 3.18).
These issues have significant implications for geotechnical slope design, as billions of dollars are spent annually on designs which incorrectly apply classic statistical approaches. In comparison to traditional design, the
utilized SGS method curtails the scale dependency issue through the imposition of a
degree of controlled spatial heterogeneity on the stochastic system.
The spatial structure is imposed through the use of variograms, which allow for preservation of the sample-scale variance, while at the same time more accurately representing the large-scale system variance (Journel and Huijbregts 1978). The final result is a more realistic distribution in predicted SRF/FOS results.
3.6.2. Critical Path Up-Scaling

A number of previous studies have proposed the use of critical path algorithms to up-scale attribute distributions from the borehole to domain scale (Glynn et al. 1978; Glynn 1979; O'Reilly 1980; Shair 1981; Einstein et al. 1983; Baczynski 2000; Baczynski 2008).
These
algorithms work by summarizing strength attributes along a critical path identified within
a two-dimensional rock mass simulation. The rock mass is composed of a combination
of discontinuities, rock mass and/or intact rock, with strength attributes assigned
according to statistical distributions obtained from either borehole and/or outcrop data.
Either minimum distance (O'Reilly 1980) or stochastic step-path generation (Baczynski 2000) techniques are then used to identify a critical failure path through the theoretical rock mass. Repeated simulations are used to derive a distribution in the critical path strength. Summary results can then be incorporated into geomechanical simulation models.
The applicability of these methods was tested within this study through the
development of a software package to determine critical path attributes using minimum
path analysis (Section 3.4.7). Results suggest that the approach fails to fully account for the up-scaling issues, with the approach imparting new uncertainties into the analysis (Figure 3.22). Such discrepancies are observed in the
failure mechanics between the up-scaled and heterogeneous models (Figure 3.18).
Failure development within the up-scaled models is found to be controlled by the
weakest domains; whereas, failure within the heterogeneous models occurs through
preferential failure along the weakest nodes. The overall effect is an over-smoothing of
the failure surface within up-scaled models and a reduction in the large-scale roughness.
Attempts to correct for this discrepancy have been made by some researchers
through the calculation of large-scale roughness factors (Little et al. 1998; Baczynski
2000). However, issues arise as the dominant failure direction often deviates from the average step-path angle (Baczynski 2014), with roughness estimates often over-estimating the domain-scale roughness.
Figure 3.23
In addition to roughness issues, problems arise with the up-scaling approach due to discrepancies in the failure dynamics when heterogeneity is explicitly excluded (Figure 3.18). While this does not preclude the use of step-path methods, it is an underlying assumption of such methods that the failure mechanics remain the same. If this assumption does not hold, up-scaled results may not be representative of the actual failure behaviour.
3.6.3.
of the Hoek-Brown criterion (Hoek et al. 2002). However, the method has been criticized
due to difficulties in applying it in less than ideal conditions (Brown 2008; Mostyn and
Douglas 2000; Douglas and Mostyn 2004; Carter et al. 2007; Carvalho et al. 2007;
Carter et al. 2008). One of the main issues with this approach is that it requires the
definition of a homogenization scale (Bonnet et al. 2001). However, fracture systems
research has suggested that many systems display fractal spatial distributions, which
precludes the existence of a homogenization scale or representative elementary volume
(REV; Mandelbrot 1982; Davy et al. 1990; Davy et al. 1992; Sornette et al. 1993; Bonnet
et al. 2001). Homogenization scales are further complicated by the discrete nature of
geotechnical domains, which may preclude the development of appropriate REVs for
modelling purposes (Figure 3.24).
Figure 3.24 (descriptive property vs. volume of sample for volumes V1, V2, V3)
The REV issue poses a problem for the geomechanical modelling within this
study as models were constructed using the Hoek-Brown continuum approach.
However, comparisons of failure mechanisms from continuum modelling with previous
discontinuum modelling at the site suggest a similar shear-dominated, rotational failure
develops using both approaches (Baczynski et al. 2011).
This similarity is attributed to the dense, chaotic fracturing at the Ok Tedi site, which facilitates the
primary Hoek-Brown (1983) assumption of the rock mass failing from translation and/or
rotation of individual blocks.
Despite the similarity in failure mechanics, problems may still exist with the Hoek-Brown approach as a result of the spatial aggregation utilized during numerical
modelling. Specifically, data was averaged over 10 m3 bins, equivalent to the numerical
mesh grid size, as described in Section 3.4.3. The problem with this approach is that it
assumes that strain is evenly distributed at the sub-nodal scale.
However, as was
discussed in the preceding sections, this assumption is invalid due to preferential failure
of a rock mass within its weakest sections. These preferential strain accumulations result in the scale effects commonly observed in rock mechanics problems, whereby the compressive strength of a sample is found to be inversely correlated to the sample size
(Johns 1966; Bieniawski 1967; Pratt et al. 1972; Hoek and Brown 1980a; Bieniawski
1984; de Vallejo and Ferrer 2011). In effect, the SGS models accurately reproduce
spatial heterogeneities at the nodal scale, but fail to continue the heterogeneity
modelling down to the sub-nodal scale. This imparts an unknown degree of uncertainty
into the simulations, and needs to be taken into consideration when extrapolating
specific SRF estimates for risk and/or stability analysis purposes. However, despite this
limitation, the general conclusions are still considered valid, as the approach was
directed at investigating the variation between the methods as opposed to specific SRF
values.
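The size effect referred to above is often summarized by the empirical Hoek and Brown (1980a) relation for unconfined compressive strength versus core diameter; a short worked sketch (illustrative values only):

```python
def scaled_ucs(sigma_c50, d_mm):
    """Hoek and Brown (1980a) empirical size-effect relation:
    unconfined compressive strength of a sample of diameter d (mm)
    relative to a standard 50 mm core."""
    return sigma_c50 * (50.0 / d_mm) ** 0.18

# e.g. a rock with 100 MPa strength at 50 mm core diameter, tested at
# 200 mm diameter, drops to roughly 100 * (50/200)**0.18, about 78 MPa
```

This illustrates why sub-nodal strain localization, which is not captured once data are averaged to the mesh scale, matters for the simulated strengths.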
3.7. Conclusions
The field of geotechnical slope design is currently in a state of flux. Open pit
mine operations are progressing towards ever deeper targets in response to the
depletion of near surface deposits (Read and Stacey 2009). This increases both the
costs and uncertainties, forcing geotechnical engineers to reconsider traditional
deterministic design techniques (Harr 1996; Duncan 2000; Wiles 2006; Nadim 2007).
In the face of these issues, probabilistic design techniques represent an attractive
alternative, as uncertainties can be quantified directly within the framework of risk and/or
decision analysis (Steffen 1997; Terbrugge et al. 2006; Steffen and Contreras 2007;
Steffen et al. 2008). However, conventional probabilistic design techniques typically
utilize a discrete geotechnical domain approach, with attributes defined by spatially
constant random variables (Read and Stacey 2009).
This approach has underlying problems, as spatial dependencies associated with geological heterogeneity are ignored.
4.
4.1. Abstract
Rock masses are typically conceptualized as having bimodal strength
characteristics, with deformation controlled by complex interactions between intact rock
material and discontinuities.
While algorithms for discrete fracture network (DFN) generation have been developed within a sound statistical and theoretical framework,
they often do not consider subsequent mesh generation routines which are required for
geomechanical simulation. This chapter proposes a modified DFN generation algorithm which incorporates meshing constraints directly into the fracture generation process. Fracture
networks generated using both the proposed method and established software are
incorporated into geomechanical simulation models to verify and demonstrate the
benefits and limitations of the new method.
4.2. Introduction
Rock masses typically exhibit a complex heterogeneous nature, owing to the
inter-relationship between intact rock material and discontinuities (e.g. micro-fractures,
macro-fractures, faults, etc.). This spatially discontinuous behaviour forces engineers to
conceptualize rock masses in one of two modes, namely continuum or discontinuum, or
a combination of the two (Hoek and Brown 1980a; Jing 2003; Stead et al. 2006). The
underpinning concept of the continuum approach is the representative elementary
volume (REV).
The REV is the scale at which heterogeneous features average out, such that the material can be conceptualized as a
homogenous substance (Bear 1972). However, this approach has been questioned by a
number of researchers as REVs may not exist for a given substrate at scales
appropriate for numerical and/or analytical modelling (Dershowitz et al. 2004).
While the algorithms have been developed within a sound statistical and theoretical framework, issues arise during subsequent mesh generation due to adverse fracture geometries (e.g. sub-parallel fractures intersecting at acute angles, near-terminating fractures, and small regions bound by intersecting fractures or model edge boundaries). Researchers typically manually manipulate
generated DFNs prior to incorporation into numerical models, in order to prevent the
development of poor quality elements that may cause numerical instabilities. Although
this manipulation can facilitate DFN integration, it can also lead to adverse effects,
including the alteration of fracture attribute statistics through the subjective removal and/or manipulation of fractures.
This manipulation issue highlights the need for new DFN generation algorithms, which incorporate not only a sound statistical and theoretical framework but
also an appreciation for subsequent mesh generation algorithms used in numerical
simulation.
This chapter attempts to add to current DFN research by proposing a new
algorithm for DFN generation. The algorithm is designed to generate 2D DFNs for use
with geomechanical simulation software utilizing triangular network meshing routines.
The purpose is to present researchers with an explicit means of generating DFNs within
a numerical simulation framework, allowing for seamless integration between the
software packages. The method is designed for use as a general DFN generator, to be
used within multiple geomechanical and geological software packages.
employ Monte Carlo based simulation routines which generate a unique realization with
each iteration. While both systems are efficient in DFN generation, the former was used
in this study, due to its widespread use within the geotechnical community (Barton
1978).
The Baecher disk model is one of the most commonly employed DFN generation
algorithms.
In this model, fractures are conceptualized as 2D convex disks (Dershowitz and Einstein 1988; Staub et al. 2002).
The model was developed independently by both Baecher et al. (1978) and Barton
(1978) in the late 1970s.
the mesh must be able to grade from large to small elements, often over
short distances, and
the elements must adhere to strict shape requirements, often with equilateral
and equiangular geometries
These varied requirements have led to the development of diverse algorithms for mesh generation, utilizing different criteria to conform grids to often complex geological phenomena. Within the field of geomechanics, the use of triangular element geometries
is common within numerical codes (Rocscience 2013; Rockfield 2013).
Triangular elements are typically flagged as poor quality when the:
maximum to minimum side length ratio exceeds a specified value (default = 10), or
maximum interior angle is greater than a critical value (default = 120°).
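The two shape criteria above can be sketched as a simple acceptance test. This is an illustrative, hypothetical helper (returning True for an acceptable element), not code from any particular meshing package:

```python
import math

def triangle_quality(a, b, c, max_ratio=10.0, max_angle_deg=120.0):
    """Check a triangle (vertices a, b, c) against two shape criteria:
    side-length ratio and maximum interior angle."""
    sides = [math.dist(a, b), math.dist(b, c), math.dist(c, a)]
    ratio = max(sides) / min(sides)
    # law of cosines gives each interior angle from the three side lengths
    angles = []
    for i in range(3):
        opp, s1, s2 = sides[i], sides[(i + 1) % 3], sides[(i + 2) % 3]
        cos_t = (s1 * s1 + s2 * s2 - opp * opp) / (2 * s1 * s2)
        angles.append(math.degrees(math.acos(max(-1.0, min(1.0, cos_t)))))
    return ratio <= max_ratio and max(angles) <= max_angle_deg
```

A near-equilateral triangle passes both tests, while a sliver element fails on its obtuse interior angle.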
development of constraints for the DFN process to ensure seamless integration. In this chapter, three constraints are proposed for this purpose.
4.5.1. Overlap/Separation Distance
The first stage in the overlapping distance analysis involves the creation of buffer zones around pre-existing fractures using the minimum overlap/separation distance. Newly generated fractures are then checked to ensure that their tips do not terminate within these buffer zones (Figure 4.1).
4.5.2. Intersection Distance
The second check ensures that the intersection of three or more fractures does
not produce unacceptably small elements (Figure 4.2). This is done by ensuring that the
separation distance between all intersection points is greater than the minimum overlap/separation distance. If this check is found to be false, then the newly generated fracture is discarded and the process restarted.
4.5.3. Intersection Angle
The final stage in quantifying the suitability of a newly generated fracture is to
check whether or not the minimum intersection angle between it and previously generated intersecting fractures is less than the critical minimum angle (Figure 4.3). The procedure works by checking that newly generated fractures which intersect the buffer zones of existing fractures have intersection angles greater than the critical minimum angle. If this check fails, then the fracture is discarded.
Once all three checks have been conducted, a newly generated fracture is either
accepted or discarded. In the case that the fracture is discarded, another seed point is
generated and the qualification process restarted until a valid location for the fracture is
found. One of the limitations of this process is that it can lead to an infinite loop if the
fracture density exceeds a critical threshold. Simulations indicate that this threshold
typically occurs when P20 values exceed 0.75 to 0.85 times the inverse buffer zone area.
In order to prevent this from occurring, a limitation is placed on the maximum number of
new seed locations that are attempted before the program ceases and returns an invalid
result.
Fractures are continually generated for a designated set until the designated P20 or P21 value is achieved.
algorithm then moves onto the next fracture set in the sequence.
Fractures are
generated within a region equal to four times the desired simulation area and later
truncated in order to minimize boundary effects.
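The overall accept/reject loop can be sketched as follows. This is an illustrative, simplified implementation (fracture centres uniform, constant trace length, an assumed 5° dip scatter, and only the tip-buffer and intersection-angle checks shown; the intersection-distance check and the enlarged generation region are omitted), not the thesis code:

```python
import math, random

def seg_point_dist(p, a, b):
    """Distance from point p to segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def segments_cross(p1, p2, p3, p4):
    def ccw(a, b, c):
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
    d1, d2 = ccw(p3, p4, p1), ccw(p3, p4, p2)
    d3, d4 = ccw(p1, p2, p3), ccw(p1, p2, p4)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def angle_between(p1, p2, p3, p4):
    a1 = math.atan2(p2[1]-p1[1], p2[0]-p1[0])
    a2 = math.atan2(p4[1]-p3[1], p4[0]-p3[0])
    ang = abs(a1 - a2) % math.pi
    return min(ang, math.pi - ang)

def generate_dfn(region, length, dip_deg, target_p21, delta, theta_crit,
                 max_attempts=10000, rng=random.Random(0)):  # fixed seed for reproducibility
    x0, y0, x1, y1 = region
    area = (x1 - x0) * (y1 - y0)
    fractures, total_len = [], 0.0
    while total_len / area < target_p21:
        for attempt in range(max_attempts):
            cx, cy = rng.uniform(x0, x1), rng.uniform(y0, y1)
            dip = math.radians(rng.gauss(dip_deg, 5.0))
            dx, dy = 0.5*length*math.cos(dip), 0.5*length*math.sin(dip)
            p, q = (cx-dx, cy-dy), (cx+dx, cy+dy)
            ok = True
            for (r, s) in fractures:
                # check 1: a tip inside the buffer zone (without crossing) is rejected
                if (not segments_cross(p, q, r, s) and
                        min(seg_point_dist(p, r, s), seg_point_dist(q, r, s)) < delta):
                    ok = False; break
                # check 3: intersecting fractures need a workable angle
                if segments_cross(p, q, r, s) and angle_between(p, q, r, s) < theta_crit:
                    ok = False; break
            if ok:
                fractures.append((p, q)); total_len += length
                break
        else:
            raise RuntimeError("fracture density limit reached")
    return fractures
```

The `max_attempts` cap mirrors the safeguard described above against infinite loops at high fracture densities.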
Figure 4.1 (overlap/separation distance check: buffer zone around an existing fracture)
Figure 4.2 (intersection distance check)
Figure 4.3 (intersection angle check)
Fracture data were imported into the DFN generator and a series of DFNs produced. The results indicate a
good agreement between actual data and generated DFNs (Figure 4.4b).
Figure 4.4 (cumulative frequency of fracture length (m) for the log-normal model vs. DFN simulation, and running average probability of 2D apparent dip (°) for observation data, Gaussian model, and DFN simulation)
Traditional DFNs were compared against the modified DFN algorithm to demonstrate the benefits and limitations of the new method.
Two conceptual DFN morphologies were used, each with two discontinuity sets (Table
4.1). The first employed orthogonal fracture morphology, with the mean dip of set one
oriented perpendicular to set two.
The second utilized acute set orientations, with the minimum angle between mean dips less than the minimum interior
angle used in later mesh generation. Twenty-five DFN simulations were produced for
both trials, with the resulting models incorporated into the Rockfield (2013) software
ELFEN.
Models were then meshed using the integrated tessellation routine within
ELFEN. The traditional DFNs were incorporated into the geomechanical software twice,
once without any manual manipulation of the fractures and then again following a
subjective clean-up process, where problematic fractures causing unacceptable mesh configurations were removed (Figure 4.5).
Table 4.1
Trial        Set 1 Dip (°)             Set 2 Dip (°)             P21 (m^-1)
             Model    From    To       Model    From    To       Model    Mean    Std. Dev.
Orthogonal   Uniform  35.0    55.0     Uniform  -35.0   -55.0    Normal   0.85    0.07
Acute        Uniform  27.5    47.5     Uniform  47.5    72.5     Normal   0.84    0.07
efficiency, as manual manipulation of the DFN models was avoided. The system also shows promise for broader application: preliminary incorporation of the modified DFN algorithm into the aforementioned codes has shown promising results (Figure 4.7).
has shown promising results (Figure 4.7).
Figure 4.5
The constraint process produces spatially homogenized fracture patterns, inconsistent with naturally fractured systems, which often exhibit a hierarchical structure with localized clustering (Pollard and Aydin 1988). It may also lead to artificial increases
in the overall rock mass strength due to a reduction in the overall DFN connectivity, and
hence an increase in rock bridge percentage (Elmo et al. 2011; Havaej et al. 2012;
Tuckey et al. 2012; Tuckey 2012; Fadakar et al. 2014). Although this is a limitation of
the proposed DFN algorithm, a similar homogenization occurs during the incorporation
of traditional DFNs into geomechanical simulation codes. This is due to the manual
manipulation process which often removes clustered fractures to limit the development
of poor quality elements. Spatial homogenization, therefore, is an inherent limitation of
both DFN methods and must be taken into consideration during the simulation process.
Figure 4.6 (relative frequency of simulated P21 (m^-1) for the orthogonal and acute set trials)
Figure 4.7
This chapter proposed an alternative method for DFN construction, which takes into consideration
meshing routines during fracture generation. This is done through three primary DFN
constraints, namely: a minimum overlap/separation distance, a minimum intersection distance, and a minimum intersection angle. In
comparison, the modified DFN algorithm was shown to offer seamless integration
between the software packages, improving model construction efficiency and
reproducibility between researchers.
While this chapter presented a formal methodology for a modified discrete
fracture network process, the research remains on-going, as limitations still exist with the
presented methodology. Future and on-going work includes the:
Advancement of the modified DFN algorithm from 2D to 3D. This will provide
greater integration between the DFN software and geomechanical simulation
codes, which are progressively moving towards more three-dimensional
problem sets.
The current generator is based on the use of the Baecher et al. (1978) disk method; however, alternative methods such as the war zone (Geier et al. 1988), hierarchical fracture (Ivanova 1995), geostatistical (Gringarten 1997; Long and Billaux 1987; Billaux et al. 1989; Wen and Sinding-Larsen 1997), or Markov chain Monte Carlo (Mardia et al. 2007) approaches could be coupled
with the outlined DFN constraints to better characterize the hierarchical
structure found in natural systems.
5.
5.1. Abstract
The advancement of numerical modelling codes to include the simulation of
brittle fracture mechanics is at the forefront of geomechanical design.
One of the
leading areas in this field of research is the use of UDEC grain boundary models, where
rock masses are simulated as a stochastic arrangement of discrete blocks.
This
approach has shown promise in back-analysis; however, to date, few studies have
characterized possible limitations in using the method for predictive analysis. This study
suggests that mesh dependencies can impart irreducible uncertainties into UDEC grain
boundary models during forward-analysis. In addition, micro-scale fracture mechanisms
are found to be highly dependent on the underlying mesh geometries. Voronoi meshing
routines were found to limit the kinematic freedoms, increasing the degree of localized
tensile failure.
These irreducible calibration uncertainties and mesh dependency issues must be taken into
consideration when conducting UDEC grain boundary model analysis.
Prepared for submission to International Journal of Rock Mechanics and Mining Sciences &
Geomechanics Abstracts as J.M. Mayer and D. Stead. Mesh Dependencies in UDEC Grain
Boundary Models.
5.2. Introduction
The simulation of a rock mass is both an interesting and complex problem within
geotechnical engineering disciplines. Unlike manufactured materials, rock masses pose
a difficult problem for engineers, due to their heterogeneous nature.
This leads to
et al. (2007) examined the mechanical degradation of a rock mass around emplacement
drifts. Lorig et al. (2009) employed the method to simulate the effect of brittle fracture in
causing catastrophic collapse of a slow moving landslide.
reproduced typical rock slope failure mechanisms (i.e. toppling and buckling) using the
UDEC-GBM method. Kazerani and Zhao (2010) presented a formal methodology for
calibration, which was later updated using central composite design methods (Kazerani
et al. 2012; Kazerani 2013; Kazerani and Zhao 2014).
In this approach, researchers match the macro-scale behaviour from laboratory testing by varying the
micro-scale properties of UDEC-GBMs (Kazerani and Zhao 2010; Kazerani et al. 2012;
Kazerani 2013; Kazerani and Zhao 2014).
This study presents the calibration of UDEC-GBMs against laboratory testing of the Darai Limestone at the Ok Tedi mine site. Characterization of
the uncertainty associated with forward analysis, is then conducted through the
simulation of multiple UDEC-GBM realizations utilizing a constant element size but varying stochastic arrangement of the UDEC blocks. Uncertainty analysis focuses on characterizing the macro-scale parameter variance given constant, calibrated micro-scale properties for the contact stiffness, cohesion, friction angle and tensile strength. In
addition, mesh dependency issues associated with micro-scale failure mechanics were
explored through examining tensile vs. shear damage.
The Ok Tedi mine is currently transitioning operations from open pit to underground. Underground designs along the
east of the deposit called for a decline to pass through approximately 650 m of the Darai Limestone formation.
The Darai Limestone is a Late Eocene to Middle Miocene, buff to pale grey,
massive, poorly-bedded limestone, composed of lime packstone, mudstone and
wackestone units. Minor chert, calcareous siltstone and dolomite lenses can be found
interbedded with the general limestone packages. The unit varies markedly in thickness
across the site from 50 to 1,000 m, due to localized nappe-style thrusting of sedimentary
units (Baczynski 2011). Bedding is often difficult to identify at the outcrop scale, with
pervasive jointing giving the unit a rubble-like appearance.
While significant variability exists, average intact rock and discontinuity strength estimates have been
obtained from laboratory testing (Table 5.1).
Table 5.1
Property
Value
44.9
8.3
31.5
0.08
5.1
55.0
Poisson's Ratio
0.26
Density (kN/m3)
28.9
31.5
0.375
0.08
Intact Rock
Discontinuities
Joint orientation data obtained from tunnel exposure and borehole mapping
suggest a complex joint hierarchy, with eight discrete sets identified across the site;
however, no more than four sets have been identified at any one location (de Bruyn et
al. 2013). Characterization of the 2D dip orientations was conducted using data from the
exposure and borehole mapping for use with later synthetic rock mass modelling.
Orientations were converted to 2D apparent dips for an east-west cross section (090°) and summarized using a running average technique utilizing 20° spatial bins (Figure
5.1). An idealised Gaussian model was then fit to the data using least squares analysis.
The model assumed three discrete discontinuity sets, each of which can be described by
a single normal distribution.
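If the set mean dips and standard deviations are held fixed, the least-squares fit reduces to a linear problem in the set amplitudes; a minimal numpy sketch (a hypothetical illustration, not the fitting code used in the study, which also refined means and spreads):

```python
import numpy as np

def fit_gaussian_mixture_amplitudes(x, y, means, stds):
    """Least-squares fit of amplitudes for a sum of fixed Gaussians.

    x, y   : binned apparent-dip centres and observed frequencies
    means  : assumed set mean dips (deg)
    stds   : assumed set standard deviations (deg)
    """
    # design matrix: one unit-amplitude Gaussian basis per fracture set
    G = np.exp(-0.5 * ((x[:, None] - np.asarray(means)[None, :]) /
                       np.asarray(stds)[None, :]) ** 2)
    amps, *_ = np.linalg.lstsq(G, y, rcond=None)
    return amps, G @ amps   # fitted amplitudes and model curve
```

With three well-separated sets, the amplitudes are recovered directly from the binned data.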
Figure 5.1 (running average probability of 2D apparent dip (°): observation data and fitted Gaussian model)
Two-dimensional, fracture density (P21) estimates were compiled for each of the
fracture sets based on persistence and spacing measurements from SRK/OTML (SRK
2013c; Table 5.2). Estimates assumed that only four discontinuity sets are present at
any given time, based on recommendations by de Bruyn et al. (2013). Due to the
extremely high fracture density at the Ok Tedi site, it was impossible to include all
fractures into the geomechanical simulations (Mayer et al. 2014a). As a result, P21 and
persistence attributes were reduced by a factor of 30 in order to produce DFNs suitable
for numerical simulation. Due to this reduction, the DFN simulations should not be considered an accurate reproduction of the actual site conditions. Instead, the DFNs are designed to simulate the general behaviour, and associated uncertainty, that can be expected from the inclusion of fractures into UDEC-GBMs, rather than actual in-situ behaviour.
Table 5.2
                            Set 1    Set 2    Set 3
Dip (°)         Mean        -74.1    4.3      46.0
                Std. Dev.   29.4     15.5     9.5
Persistence (m) Mean        0.41
                Std. Dev.   0.05
P21 (m^-2)                  7.98     0.99     0.68
5.4. Methodology
5.4.1.
A long-standing challenge in rock engineering is the estimation of the mechanical properties of a rock mass at a scale suitable for engineering design (Wyllie
and Mah 2004; Jaeger et al. 2007). This remains a key issue as mechanical properties
obtained from laboratory testing are typically not representative of the design-scale rock
mass behaviour due to the presence of discontinuities at larger scales. Recently, a
numerical approach has been proposed which attempts to quantitatively estimate the
scale effects associated with these discontinuities (Pierce et al. 2007). The approach
represents a jointed rock mass numerically through the generation of a synthetic rock
mass (SRM). This is accomplished by the superimposition of a discrete fracture network
(DFN) onto a geomechanical simulation model. Using this approach, the design-scale
rock mass structure can be explicitly represented, and then used to estimate the large-scale failure behaviour and mechanical properties (Pierce et al., 2007; Cundall et al.
2008; Esmaieli et al. 2010; Mas Ivars et al. 2007; Deisman et al. 2010; Mas Ivars et al.
2011; Pettitt et al., 2011; Zhang et al., 2011; Gao 2013; Zhang 2014).
One limitation of the SRM approach is its dependency on the DFN method and
the difficulty in integrating the generated features with common numerical meshing
routines. This is due to the development of adverse fracture geometries including: sub-parallel fractures that intersect at acute angles, bounding of adversely small regions due
to the intersection of three or more fractures, or near terminating fractures. To solve
these issues, researchers typically manually manipulate DFNs prior to incorporation
within numerical models; however, this is a subjective process which leads to alteration
of the fracture attribute statistics (Mayer et al. 2014a). In order to solve these issues,
and enhance the integration of DFNs with numerical meshing codes, an alternative DFN
algorithm was proposed by Mayer et al. (2014a).
The approach is a modification of the Baecher et al. (1978) DFN algorithm, which takes into consideration
numerical meshing routines during the fracture generation process.
The DFN algorithm extends upon Baecher et al.'s (1978) work by incorporating three constraints into the fracture generation process (Figure 5.2). The process is based on the definition of a user-specified critical minimum overlap/separation distance and a critical minimum angle. The methodology relies on the following constraints:
1. First, a buffer zone is created around pre-existing fractures using the minimum overlap/separation distance (Figure 5.2). Newly generated fractures are then checked to ensure their tips do not terminate within the buffer zones. This ensures that zones are not created which would require the development of unsatisfactorily small mesh elements.
2. Next, the enclosed area between three or more intersecting fractures is checked to ensure that it does not bind a region smaller than the minimum desired element size. Bound regions are checked by ensuring that the separation distance between all intersection points is greater than the overlap/separation distance (Figure 5.2).
3. Finally, the minimum intersection angle between a newly generated fracture and existing intersecting fractures is checked to ensure it is greater than the critical minimum angle.
Figure 5.2
Flow chart for the modified Baecher et al. (1978) DFN generation algorithm. The methodology is used to generate fracture networks which adhere to later geomechanical meshing routines.
109
This contrasts with standard Voronoi tessellation routines within UDEC, which do not take into consideration the location of discrete features during
the meshing process (Figure 5.3). This has a tendency to generate adversely small
elements, which must be removed prior to numerical simulation in order to prevent excessively slow computational times. In addition, fractures often terminate within Voronoi
blocks and must be artificially truncated, causing alteration of the fracture attribute
statistics. In comparison, the proposed tessellation process generates GBMs which
conform to the DFN geometries.
The proposed tessellation process was designed to produce a triangular mesh
similar to the newly implemented Trigon mesh within UDEC (Gao 2013). This process
follows a three step procedure. First, a set of principal triangles is constructed which
fully defines the extent of the model. Next, grid points are inserted along the fracture,
and the mesh is progressively updated until the grid point spacing is less than the
minimum overlap/separation distance. Finally, triangles are progressively split, producing successively smaller elements until all triangles have a maximum height less than 1.5 times the overlap/separation distance. Details on
these steps are provided in subsequent sections. Adaptive re-meshing, which occurs
progressively as new grid points are inserted into the mesh, is conducted according to
the algorithm described in Figure 5.4.
Figure 5.3
Principal triangles
The first stage in the triangulation process is the development of a series of
principal triangles which fully encapsulates the simulation area.
Figure 5.4 (adaptive re-meshing sequence, panels a-f)
Discretization of fractures
Re-meshing of the triangulation to incorporate the DFN involves a three-step
procedure. First, grid points are inserted at the end points of each fracture (Figure 5.4).
This ensures that fractures are fully inserted into the mesh, allowing for wing crack
development at fracture terminations during geomechanical simulation. Next, grid points
are inserted at all fracture intersection points, ensuring preservation within the mesh.
Finally, the fractures are progressively split into segments by inserting grid points at the
half width distance between established grid nodes along fracture surfaces.
This
procedure is continued until all segments are less than the minimum overlap/separation distance.
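The end-point, intersection-point, and midpoint insertion sequence can be sketched as follows. This is a simplified, hypothetical illustration (assuming the fracture's points order along the x-axis), not the thesis meshing code:

```python
import math

def discretize_fracture(p, q, delta, intersections=()):
    """Grid points along a fracture: end points first, then any
    intersection points, then repeated midpoint insertion until
    every segment is shorter than the minimum overlap/separation
    distance (delta)."""
    pts = sorted({p, q, *intersections})        # ordered along the fracture
    out = [pts[0]]
    for a, b in zip(pts, pts[1:]):
        seg = [a, b]
        while True:                             # halve any over-long gaps
            new = [seg[0]]
            for u, v in zip(seg, seg[1:]):
                if math.dist(u, v) > delta:
                    new.append(((u[0] + v[0]) / 2, (u[1] + v[1]) / 2))
                new.append(v)
            if len(new) == len(seg):
                break
            seg = new
        out.extend(seg[1:])
    return out
```

Each returned grid point would trigger an adaptive re-meshing step as it is inserted into the triangulation.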
5.4.2.
Brittle rock failure progresses through several distinct deformation stages (Cai et al. 2004). This includes the initiation of micro-seismic events as new micro-scale cracks are formed when the stress level exceeds
approximately 0.3-0.5 times the peak uniaxial load (Brace et al. 1966; Bieniawski 1967;
Holcomb and Costin 1987). This is followed by the propagation of microcracks mainly
parallel to the maximum principal stress orientation, and eventual onset of microcrack
coalescence, as the stress levels exceed approximately 0.7-0.8 times the peak strength
(Lockner et al. 1992; Martin and Chandler 1994). Finally, progressive damage results in
the formation of macro-scale cracks and/or shear bands at or slightly after the peak strength.
To simulate this failure behaviour using DEM methods, a 2D UDEC-GBM is
utilized where the rock is represented as an assemblage of discrete blocks (Lorig and
Cundall 1987; Kazerani and Zhao 2010). The randomly distributed block contacts are
analogous to grain boundaries and/or micro-fracture contacts found within intact rock
samples (Alzoubi 2012). Brittle failure is designated to initiate along these contacts
when the applied stress exceeds either the tensile or shear strength of the boundary
(Gao and Stead 2013).
As loading progresses, fractures form along block contacts, which gradually coalesce into macro-scale
tensile cracks and/or shear bands (Alzoubi 2009). Material properties are designated
through assignment of normal and shear stiffness, cohesion, friction and tensile
strengths to block contacts, which represent inter-granular rock mass strength or the
micro-scale properties (Kazerani et al. 2012). Based on these micro-properties and the
shape, size and arrangement of blocks, the material will exhibit a large-scale behaviour
that can be described by equivalent macro-scale properties. Since differences exist
between the micro- and macro-scale properties, a calibration must be conducted prior to
forward analysis, so that the sample exhibits the correct macro-scale behaviour (Gao
2013).
5.4.3.
Model Construction
A 2D triaxial test sample was created within UDEC to test both intact rock and
rock mass behaviour (Figure 5.5). The model was 2 m high and 1 m wide, with a 0.1 m
platen on either end. Block shapes were constrained using the mesh generation routine
mentioned in Section 5.4. This produced triangular block geometries similar to those
integrated into the UDEC trigon method proposed by Gao (2013). An average block
area of 6.4 x 10^-3 m^2 was used throughout the model. Block geometries were generated
within an independent C++ software package and imported into UDEC using an integrated FISH function. Block contacts were assigned a Coulomb slip model with residual strength, with properties downgraded following peak strength to residual values using a post-peak brittle response. Peak strengths were
assigned through a calibration process to match the micro-scale properties to the macro-scale behaviour observed from triaxial and Brazilian indirect tensile laboratory testing of
the Darai Limestone. Residual values were assigned based on joint shear test results.
In subsequent SRM models, DFNs were generated prior to block tessellation,
using the methodology described in Figure 5.2.
Axial stress and displacement measurements were then averaged across all history points to give an indication of the
overall model response.
Figure 5.5 (2.0 m x 1.0 m triaxial test geometry showing intact rock, fractures, block contacts, the finite-difference grid, and history points)
Model run times were approximately 14-24 hours, depending on the degree of confinement, for a 3.4 GHz PC
with 22 GB of RAM.
Peak strength contact behaviour was monitored using a FISH routine based on
a modification of the damage algorithm proposed by Gao et al. (2014a). The routine
works by constructing an array of all block contacts present within the model, and
monitoring the shear and tensile stresses at these contacts during each model step.
Contacts were flagged as either initially failing under shear or tension based on the
mode of failure at the peak contact strength. Once a contact's failure mode was flagged,
it was removed from the fracture array. The failure type was then recorded in a table,
along with average axial stress and displacement measured across all history point
locations (Figure 5.5).
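The bookkeeping in the damage routine can be illustrated in Python (a hypothetical sketch of the logic only, not the FISH routine itself; the dictionary keys and strength labels are assumptions):

```python
def monitor_contacts(contacts, shear_stress, tensile_stress):
    """One monitoring step: flag each still-intact contact whose
    stress first exceeds its strength, record the failure mode,
    and drop it from further monitoring.

    contacts       : dict id -> {'coh': shear strength, 'ten': tensile strength}
    shear_stress   : dict id -> current shear stress at the contact
    tensile_stress : dict id -> current tensile (normal) stress at the contact
    Returns a list of (contact id, failure mode) events.
    """
    events = []
    for cid in list(contacts):
        c = contacts[cid]
        if tensile_stress[cid] >= c['ten']:
            events.append((cid, 'tension'))
        elif shear_stress[cid] >= c['coh']:
            events.append((cid, 'shear'))
        else:
            continue
        del contacts[cid]          # failed contacts are no longer tracked
    return events
```

Recorded events, together with the averaged axial stress and displacement, give the tensile vs. shear damage history discussed later.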
5.5. Calibration
5.5.1.
Calibration Procedure
Calibration of the intact rock UDEC-GBMs required a multi-stage approach to
calibrate the micro-scale discontinuity stiffness, cohesion, friction angle and tensile
strength, such that the appropriate macro-scale properties were reproduced.
This
includes calibration to satisfy the macro-scale Young's modulus, Poisson's Ratio, tensile strength, internal friction angle and internal cohesion obtained from laboratory testing.
The process is complicated by the stochastic arrangement, size and shape of discrete blocks affecting how the micro-scale properties are represented at the macro-scale.
The calibration process used in this study is based on work by Kazerani and
Zhao (2010) and involves a five step procedure:
1. Particle sizes should be generated based on the grain size distribution within
intact rock samples.
The authors recommended that the G/E ratio should be between 0.35 and 0.50, reflecting a Poisson's ratio between 0.2 and 0.5.
3. Once the contact stiffness ratio has been set, both the normal (kn) and shear (ks) stiffness are calibrated to fit the Young's modulus. Initial normal stiffness estimates were calculated from (Itasca 2014):
kn = n [(K + 4G/3) / Δzmin],  1 ≤ n ≤ 10    Equation 5.1

where K is the bulk modulus, G is the shear modulus, Δzmin is the minimum element length, and n is a user-defined constant which varies between 1 and 10.
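Equation 5.1 can be evaluated directly; a minimal sketch (the moduli and element length in the example are illustrative only, not values from the calibration):

```python
def initial_normal_stiffness(K, G, dz_min, n=10):
    """Initial contact normal-stiffness estimate (Equation 5.1):
    kn = n * (K + 4G/3) / dz_min, with the constant n between 1 and 10."""
    if not 1 <= n <= 10:
        raise ValueError("n must lie between 1 and 10")
    return n * (K + 4.0 * G / 3.0) / dz_min

# Illustrative values: K = 40 GPa, G = 24 GPa, 0.1 m minimum element length.
kn = initial_normal_stiffness(40e9, 24e9, 0.1, n=1)
```
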
4. Contact strength properties are then initially calibrated, such that the desired
macro-scale behaviour is represented in the material stress-strain response.
This involves three subsets: first the contact cohesion, then the friction angle,
followed by the tensile strength.
5. The final step involves refinement of the contact strength properties. This is
required due to the inter-connected nature of the properties, which results in a slight change in one parameter as another is refined (Gao 2013).
During calibration, the strength properties for both the intra-block and block contacts are
kept constant in order to prevent preferential failure within either medium.
5.5.2. Calibrated Micro-Properties
Ideally, block size distributions should be chosen such that the size distributions
reflect the grain size distributions within the actual modelled samples (Gao and Stead
2013). While this is desired, simulations were conducted on a 2.0 x 1.0 m sample,
preventing sufficient refinement of block sizes due to computational limitations. As a
result, an average block size of 6 × 10⁻³ m² was chosen, as it represented the best trade-off between computational efficiency and model refinement. This mesh density reflects a
distribution of approximately 3,100 discrete blocks within the sample, which is a near
four-fold increase from recommendations by Kazerani and Zhao (2010).
The contact stiffness ratio was estimated directly from the shear to Young's modulus ratio (0.4), based on recommendations by Kazerani and Zhao (2010) and Gao (2013). Following definition of the stiffness ratio, the Young's modulus was calibrated
from elastic responses observed during compressional testing. Results suggest that a
logarithmic relationship exists between the macro-scale modulus and the micro-scale
input attributes, with calibrated normal and shear stiffness micro-properties found to be
3.5 × 10¹³ and 1.4 × 10¹³ Pa/m, respectively.
The calibration of the shear strength parameters is paramount for compressional
testing, as under conventional, homogeneous and isotropic settings, the sample strength
is directly proportional to the shear strength of a sample (Wyllie and Mah 2004). To
derive the calibrated macro-scale cohesive and friction properties a series of
compressional tests was carried out at different confinements. Confining pressures of
0.0 MPa and 1.0 MPa were chosen, to ensure good agreement between the micro-scale
and macro-scale response at low confinement, as later simulations were interested in
the micro-scale failure behaviour under uniaxial compressive test conditions.
Strength envelopes were then derived through linear regression using the
equations (Kovari et al. 1983):
φ = arcsin[(m − 1) / (m + 1)]    Equation 5.2

c = b (1 − sin φ) / (2 cos φ)    Equation 5.3
where φ is the friction angle, c is the cohesion, and m and b are the slope and intercept
obtained from linear regression of a peak strength vs. confining stress plot. The initial
calibration was conducted by keeping one parameter constant, while the other was
varied until the required macro-scale behaviour was achieved. Results then required
refinement to achieve appropriate micro-properties, due to the inter-dependencies
between the shear strength parameters. Final calibration results suggest that a micro-scale cohesion of 14.8 MPa and friction angle of 48.2° is required to achieve the desired
macro-scale behaviour.
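The back-calculation in Equations 5.2 and 5.3 can be sketched as follows. This is a minimal illustration of the Kovari-style conversion from a fitted slope and intercept; the test values are made up, not the thesis data:

```python
import math

def mohr_coulomb_from_fit(m, b):
    """Friction angle (degrees) and cohesion from the slope m and intercept b
    of a linear fit of peak strength vs. confining stress
    (Equations 5.2 and 5.3)."""
    phi = math.asin((m - 1.0) / (m + 1.0))          # Equation 5.2
    c = b * (1.0 - math.sin(phi)) / (2.0 * math.cos(phi))  # Equation 5.3
    return math.degrees(phi), c
```

As a consistency check, a material with φ = 30° and c = 1 produces a Mohr-Coulomb envelope with slope m = 3, and the conversion recovers the original parameters from that line.
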
Calibration of tensile properties is required because, although compression tests are predominantly controlled by shear strength criteria, micro-fracturing at the grain scale can be an important contributor to overall rock mass strength, especially at low confinement (Tang and Hudson 2011). This is due to the increase in tensile stresses from local bending moments around sample heterogeneities and anisotropies.
Gao (2013) describes the tensile calibration procedure, in which the macro-scale tensile strength is estimated as:

σt = Fmax / (π r)    Equation 5.4
where Fmax is the maximum force applied to the model at the point of failure, and r is the
radius of the sample. The calibration process suggests a micro-scale tensile strength of
11.8 MPa is required to replicate the macro-scale tensile strength of 5.1 MPa. This
micro-scale tensile strength exceeds the Mohr-Coulomb tensile cut-off estimated from
the micro-scale shear strength parameters. As a result, a limit was selected for the
tensile strength of 11.3 MPa, which is representative of a macro-scale strength of 4.9 MPa.

[Figure 5.6: Tensile test sample geometry (1.0 × 1.0 m), showing block contacts, the finite-difference grid, and history point locations.]
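Equation 5.4 is straightforward to apply; a minimal sketch (the force and radius values used in the check are illustrative only):

```python
import math

def tensile_strength_2d(f_max, radius):
    """Macro-scale tensile strength from a 2D, unit-thickness indirect
    tensile test (Equation 5.4): sigma_t = F_max / (pi * r)."""
    return f_max / (math.pi * radius)
```
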
5.6. Results
5.6.1. Calibration Uncertainty
A series of 30 UDEC-GBMs was conducted in order to verify the ability of the calibrated micro-properties to reproduce the target macro-scale behaviour. Simulations were conducted using the same model geometry as that used in the calibration (2.0 × 1.0 m 2D triaxial test sample, with an average block size of 6 × 10⁻³ m²).
To derive the macro-scale cohesive and friction properties simulations were carried out
for a series of different confinements for each of the 30 UDEC-GBMs. Confining pressures of 0.0, 1.0, 2.0, 3.0 and 4.0 MPa were chosen to ensure good compliance with the model calibration, which was conducted at low confinement values. In total, 150 simulations were conducted.
Comparisons between the macro-scale cohesion and friction angle suggest that a strong, negative relationship exists (r = −0.90; Figure 5.8a).
[Figure 5.7: Relative frequency histograms of the simulated macro-scale (a) cohesion and (b) friction angle, compared against fitted Gaussian models.]
In addition, comparisons between the crack initiation and peak strength thresholds indicate a higher degree of variability in the initiation threshold (CoV = 5.1%). The crack initiation stress was also found to be extremely high in comparison to peak UCS results, with an average initiation ratio (σci/UCS) of 0.82.
Discrepancies between the micro- and macro-scale behaviour of the UDEC-GBMs can be attributed to the stochastic nature of triangular block generation. More
specifically, the behaviour is controlled by the distribution and failure concentration within
high-angle contacts (Figure 5.9). This concentration of failure within high-angle contacts is the result of the underlying tensile and shear failure mechanics. Tensile cracking is theorized to concentrate sub-parallel to the major principal stress direction (90°). Simulation results suggest an average pre-peak tensile crack orientation of 86.4°,
with a CoV of 27.8%, within UCS simulations. The large CoV is the result of the limited
number of tensile failures within the UDEC-GBM simulations (average number of tensile
fractures = 1.7). In comparison, shear damage is thought to coincide with the idealized
inclination of the shear plane (β), given by (Jaeger et al. 2007):
β = 45 + φ/2    Equation 5.5
where φ is the micro-scale friction angle (48.2°) and β is the angle between the idealized failure plane and the minimum principal stress (σ3). For the calibrated friction angle, the idealized inclination of shear (β) is 69.1°, which coincides with the simulation results. UCS simulations indicated an average pre-peak shear failure angle of 72.2°, with a CoV
of 2.6%. A comparison of the percentage of tensile vs. shear cracking indicates that the
model is preferentially failing through shear, with 98.6% of pre-peak damage due to this
mechanism.
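The check against Equation 5.5 can be reproduced in one line; a minimal sketch:

```python
def idealized_shear_angle(phi_deg):
    """Idealized inclination of the shear plane to the minimum principal
    stress (Equation 5.5): beta = 45 + phi/2, in degrees."""
    return 45.0 + phi_deg / 2.0

# For the calibrated micro-scale friction angle of 48.2 degrees,
# the idealized inclination is 69.1 degrees, as quoted in the text.
beta = idealized_shear_angle(48.2)
```
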
These results are consistent with Gao (2013) who observed that UDEC-
[Figure 5.8: (a) Scatter plot of simulated macro-scale cohesion (MPa) against friction angle (°), with a correlation coefficient of −0.90; (b) coefficient of variation (%) plotted against confining pressure (0.0 to 4.0 MPa).]
with triangular mesh elements, a second set of simulations was conducted with the peak
mechanism was still dominated by shear behaviour. The increased amount of tensile
fracturing was also found to improve the measured and theoretical pre-peak crack
orientation discrepancies, with the average angle found to be 89.3°.
[Figure 5.9: Simulated pre-peak fracturing within a triangular-mesh UDEC-GBM UCS test, with new fractures highlighted.]
properties of all elements and contacts. However, in comparison, the macro-scale shear
and peak strength attributes display a reduced level of spatial aggregation, as
deformation becomes concentrated in a limited number of pre-peak failed contacts6
(Figure 5.9).
5.6.2.
technique, with 30 UDEC-GBMs constructed using constant DFN attribute statistics but
independent stochastic fracture and block realizations. All other properties were kept the
same as the triangular, intact rock UDEC-GBMs described in the previous section.
Shear strength attributes were estimated by subjecting the 30 UDEC-GBMs to a series of simulations at different confining conditions. This included simulation at 0.0, 2.0, and 4.0 MPa, to ensure good compliance with the low confinement used for model calibration. In total, 90 coupled DFN/UDEC-GBM simulations were conducted.
Inclusion of DFNs within UDEC-GBMs resulted in an overall increase in the level
of uncertainty, with the average CoV in the peak strength increasing to 10.7%. This
represents a near three-fold increase in the uncertainty, suggesting that variation in the
DFN realizations plays an important role on the overall uncertainty within UDEC-GBMs.
A similar uncertainty increase was also observed in the macro-scale cohesion with a
CoV of 12.8%; whereas, the friction did not display a noticeable increase in the CoV
(4.7%; Figure 5.10). The behaviour also coincides with a reduction in co-dependency
structure between the friction and cohesion (r = -0.27; Figure 5.11).
6 The average number of pre-peak contact failures was found to be 110, but this value varied
greatly with a CoV of 58.8%.
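The CoV and correlation statistics quoted in this section follow the standard definitions; a minimal sketch, with made-up sample values rather than the simulation outputs:

```python
import math
import statistics

def cov_percent(samples):
    """Coefficient of variation: sample standard deviation over the mean,
    expressed as a percentage."""
    return 100.0 * statistics.stdev(samples) / statistics.mean(samples)

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den
```
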
[Figure 5.10: Relative frequency histograms of the simulated macro-scale (a) cohesion and (b) friction angle for the coupled DFN/UDEC-GBM simulations, compared against fitted Gaussian models.]

[Figure 5.11: Scatter plot of simulated macro-scale cohesion (MPa) against friction angle (°) for the coupled DFN/UDEC-GBM simulations.]
The inclusion of discrete fractures was found to change the overall failure mechanics of the UDEC-GBMs. An examination of the average crack initiation strength to UCS ratio (σci/UCS) showed a decrease from 0.82 in the intact rock simulations to 0.48 in the DFN simulations. A similar change was observed in the type of micro-damage, with the percentage of tensile micro-cracking increasing from 1.4% to 13.1%.
This suggests that the degree of tensile damage is sensitive to pre-existing fracture
[Figure 5.12: Simulated pre-peak fracturing within a coupled DFN/UDEC-GBM UCS test, showing existing DFN fractures and newly generated fractures.]
5.6.3.
triangular DEM blocks, which have been incorporated into the recently released UDEC
6.0 (Kazerani et al. 2012; Gao 2013; Gao and Stead 2014; Gao et al. 2014a, 2014b;
Kazerani 2013; Kazerani and Zhao 2014; Itasca 2014). In order to compare the effects
of these triangular mesh geometries with traditional Voronoi blocks, a series of 28
Voronoi UDEC-GBMs was simulated. Models utilized calibrated micro-properties from
the triangular mesh calibration to ensure similar micro-scale contact behaviour. Random
block arrangements were generated for each of the simulations using the integrated
Voronoi mesh generator within UDEC.
[Figure 5.13: Voronoi UDEC-GBM test sample geometry (2.0 × 1.0 m), showing block contacts, the finite-difference grid, and history point locations.]
[Figure 5.14: Relative frequency histogram of the Voronoi UDEC-GBM simulation results, compared against a fitted Gaussian model.]
consistent with the work of Nicksiar and Martin (2013) who found that failure within
Voronoi UDEC-GBMs is controlled by tensile failure mechanics.
[Figure 5.15: Simulated pre-peak fracturing within a Voronoi UDEC-GBM UCS test, with new fractures highlighted.]
5.7. Discussion
5.7.1.
GBMs as a result of the inherent randomness of the element generation process. While
preliminary estimates of the degree of this uncertainty have been made by previous
researchers, the estimates exhibit large standard errors due to limited sample sizes
(Kazerani et al. 2012; Kazerani 2013; Kazerani and Zhao 2014). Simulations conducted
within this study aimed to refine these estimates, with the CoV of the cohesion and
friction angle found to be 6.0 and 4.7%, respectively. This is within the range of previous
research which suggests estimates between 1.5 to 15.0% (Kazerani et al. 2012;
Kazerani 2013; Kazerani and Zhao 2014). Comparisons between Voronoi and triangular
mesh geometries, also suggest that these uncertainties are an inherent property of the
system, and originate regardless of the underlying mesh shape.
Previous researchers have noted that these discrepancies are not an inherent
disadvantage of the method, and can be equated to the spatial heterogeneity found
within intact rock samples (Lan et al. 2010; Kazerani et al. 2012; Kazerani 2013;
Kazerani and Zhao 2014; Nicksiar and Martin 2014). Accordingly, studies have attempted to correlate the average grain and element size (Kazerani and Zhao
Zhao 2010; Alzoubi 2012; Gao 2013; Gao and Stead 2014). However, UDEC-GBM
element generation typically does not take into consideration the underlying spatial
structure of the grains, nor their shape, despite thin-section analysis suggesting that
grain distributions display spatially heterogeneous behaviour (Øren and Bakke 2003; Okabe and Blunt 2005; Okabe and Blunt 2007; Politis et al. 2008; Méndez-Venegas and Díaz-Viera 2014). The incorporation of such heterogeneities has been shown to cause
increased asymmetry in the strain distribution, resulting in alteration of the macro-scale
output behaviour (Cho et al. 2007; Damjanac et al. 2007; Lorig 2009; Jefferies et al.
2008; Lan et al. 2010; Srivastava 2012; Nicksiar and Martin 2014).
This strain
studies should aim to limit this dependency through the accurate simulation of the grain
shape and spatial structure, in addition to the grain size.
In addition to mesh dependency errors, uncertainty exists in the tensile strength
calibration, as the calibrated micro-scale value exceeded the theoretical Mohr-Coulomb
limit imposed by the shear strength attributes. This finding is consistent with the calibration results of Kazerani et al. (2011). This issue may be the result of the difficulty
in reproducing laboratory tensile tests within DEM models. Kemeny and Cook (1986)
observed that within tensile tests, sample failure coincides with the crack initiation stress
due to the near instantaneous crack propagation associated with stress concentrations
at fracture tips. This near-instantaneous failure is difficult to reproduce within DEM models, due to the inability to directly simulate crack propagation.
In comparison, compressional tests are far easier to simulate as their failure
mechanisms are controlled by the accumulation of micro-scale damage, as opposed to
propagation of a single crack (Diederichs et al. 2004). The difficulty in directly simulating
the underlying tensile failure mechanism is an inherent limitation of all DEM modelling,
and imparts a degree of uncertainty into the micro-scale calibration procedure.
Its
5.7.2.
Gao and Stead (2014) and Nicksiar and Martin (2013) were able to replicate the
crack initiation and damage thresholds using UDEC-GBMs; however, pre-peak, micro-scale failure mechanics were found to differ between the studies. Nicksiar and Martin
(2013) utilized Voronoi mesh geometry and found that results were similar to PFC, with
the damage initiation threshold dominated by tensile failure (Diederichs 2000; Diederichs
et al. 2007a). Shear induced micro-cracking was found to increase near the damage
accumulation threshold, with the final peak failure behaviour controlled by a combination
of shear and tensile failure. This behaviour has been attributed to the arrangement of circular elements, which can lead to the development of tensile stresses,
causing tensile failure of the rock mass (Diederichs 2000; Diederichs et al. 2004). Gao
(2013) demonstrated that a similar behaviour could be achieved within UDEC-GBMs
through the inclusion of porosity within the geomechanical models. A second possible
mechanism suggested is the potential for reduced kinematic freedom of elements within
2D UDEC-GBM models, due to restrictions in the out-of-plane strain. It was shown using
3DEC that the inclusion of a third dimension, and hence increased kinematic freedom,
results in an increase in the pre-peak tensile micro-crack percentage.
Although the two proposed mechanisms by Gao (2013) could contribute to a
dominance of shear cracking within UDEC-GBMs, results from the research presented
in this thesis suggest that differences between the amount of tensile and shear cracking
simulated are predominantly the result of the assumed internal mesh geometry.

7 The tensile cut-off percentage is calculated as the assigned tensile strength over the maximum theoretical Mohr-Coulomb value, based on the assigned friction and cohesion attributes.

This is consistent with the observations of Gao (2013) and confirmed in this study by the 45.1% increase in the UCS between Voronoi
and triangular mesh models. An effect of this behaviour is the development of internal
mesh wedging, which increases the degree of tensile failure. A conceptual example of
this behaviour is shown in Figure 5.16, while model results displaying the increased degree of tensile failure are evident in Figure 5.9 and Figure 5.15. In the conceptual
Voronoi example, tensile stresses develop between the central Voronoi blocks as they
are displaced outward by the upper and lower blocks moving inward due to the major
principal compressional stress.
In comparison, triangular mesh geometries display an increased degree of
kinematic freedom resulting in a reduction in the locking-up of blocks. The overall effect
of this behaviour is a reduction in the internal mesh wedging, decreasing the amount of
tensile failure. From the conceptual example provided in Figure 5.16, it can be seen that
the increased kinematic freedom with triangular mesh results in less block inter-locking.
As a result, blocks can more easily slide past each other without the need for wedging
and tensile stress development. The end result is a reduction in the amount of tensile
failure, with the majority of damage occurring through shear mechanisms.
This
predisposition towards shear mechanics was confirmed by Gao (2013) and in this study
through a reduction in the micro-crack tensile percentage from 16.6% to 1.4% between
the Voronoi and triangular mesh models.
[Figure 5.16: Conceptual behaviour and UDEC simulation results for Voronoi and triangular mesh geometries, showing directions of block movement and shear and tensile contact failures.]
This mesh dependency presents a fundamental issue for DEM modelling, as the micro-scale failure behaviour of UDEC-GBMs has been shown to be strongly dependent on the mesh shape. Similar
behaviour is observed within PFC models, which display shape dependency issues
when unrealistic grain shapes are utilized (Diederichs 2000; Potyondy and Cundall 2004;
Yoon 2007; Cho et al. 2007; Herbst et al. 2008; Li et al. 2008; Akram et al. 2011;
Sakakibara et al. 2011). Failure to account for this can lead to poor reproducibility of
grain-scale deformation mechanisms as models become dependent on artificial mesh
geometries as opposed to realistic grain shape geometry and size distributions.
Examples of this behaviour include:
The over-homogenization of grain size distributions and failure to include micro-scale matrix grains, which can reduce the overall kinematic freedom of DEMs. The use of such methodologies can result in an inaccurate reproduction of micro-scale failure mechanisms, as strain is unable to realistically accumulate within the smaller-scale matrix grains. As a result, models are predisposed towards increased micro-crack deformation to overcome the larger inter-block roughness.
5.8. Conclusions
The realistic simulation of brittle fracture is currently one of the most important
issues in geomechanical simulation. The desire to simulate such behaviour has led to
the development of a variety of numerical simulation codes including the UDEC-GBM
method, which simulates the finite displacement and rotation of discrete deformable
and/or rigid blocks using block-contact constitutive models (Lorig and Cundall 1987; Jing
2003; Stead et al. 2006).
cohesion and friction angle, due to internal co-dependencies between the attributes.
Uncertainties originate from the underlying stochastic mesh generation processes and
can be considered an irreducible aspect of the system. As a result, uncertainties cannot
6.
6.1. Conclusions
Uncertainty analysis remains at the forefront of geotechnical design, as the applied discipline remains predominantly a predictive science.
Uncertainties arise from a number of sources, including: inherent attribute variability,
instrument and observation errors, algorithmic simplifications, limited information, etc.
(Palmström 1995; Read 2009; Read and Stacey 2009).
6.1.1.
complex formation and tectonic histories, resulting in differential failure behaviour across
a study site. Conventional design practice continues to subdivide a study site into a series of discrete geotechnical
units, each conceptualized by spatially constant random variables. This simplification
ignores inherent spatial variability of geotechnical systems. It has been shown to lead to
overly conservative design practices, due to an over-estimation of the probability of
failure (Griffiths and Fenton 2000; Hicks and Samy 2002). In opposition to conventional
practices, heterogeneities can be accounted for through the explicit simulation of the
spatial variability (Fenton 1997; Jefferies et al. 2008). Research within this study used a
geostatistics based method, known as sequential Gaussian simulation (SGS), to
stochastically simulate rock mass heterogeneity. The proposed methodology is new to
the field of open pit mine design. SGS operates by simulating attribute values along pseudo-random paths through the modelled grid nodes (Dowd 1992; Deutsch and Journel 1998). Spatial dependencies are taken into consideration
during the simulation process through the use of variograms and simple kriging routines
(Goovaerts 1997; Journel and Huijbregts 1978; Nowak and Verly 2007).
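The random-path, krige-then-draw structure of SGS can be sketched in miniature. The following toy 1D implementation (with an assumed exponential covariance and simple kriging, written for illustration, not the code used in the thesis) shows the essential loop:

```python
import numpy as np

def sgs_1d(n, vario_range, sill=1.0, seed=0):
    """Toy 1D sequential Gaussian simulation: visit grid nodes along a
    pseudo-random path, simple-krige a conditional mean and variance from
    previously simulated nodes, then draw from the conditional Gaussian."""
    rng = np.random.default_rng(seed)
    cov = lambda h: sill * np.exp(-3.0 * np.abs(h) / vario_range)
    path = rng.permutation(n)
    z = np.zeros(n)
    visited = []
    for node in path:
        if visited:
            pts = np.array(visited, dtype=float)
            # Covariance among known points (small nugget for stability)
            C = cov(pts[:, None] - pts[None, :]) + 1e-9 * np.eye(len(pts))
            c0 = cov(pts - node)
            w = np.linalg.solve(C, c0)           # simple kriging weights
            mean = w @ z[np.array(visited)]
            var = max(sill - w @ c0, 1e-9)       # kriging variance
        else:
            mean, var = 0.0, sill
        z[node] = rng.normal(mean, np.sqrt(var))
        visited.append(int(node))
    return z
```

Each node is conditioned on all previously simulated nodes, which is what allows the realization to honour the imposed variogram structure.
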
Simulations results using the SGS approach emphasize the importance of
incorporating heterogeneities into geomechanical codes. The results suggest that the
inability to consider heterogeneities can result in a fundamental change in the predicted
SRF/FOS results. Differences arise partly through the development of scale effects due to data aggregation and preferential strain issues. In
the case of data aggregation, it is a commonly held position in the geographical sciences
that data are only valid at the collection scale (Gehlke and Biehl 1934; Haining 2003).
However, conventional geotechnical slope analysis often disregards this principle and
up-scales geomechanical properties without considering the effects of spatial averaging.
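The variance reduction caused by spatial averaging can be demonstrated numerically; a minimal sketch using synthetic, uncorrelated point values (purely for illustration):

```python
import numpy as np

def point_vs_block_variance(values, block_size):
    """Compare the variance of point-support values against block-averaged
    values, illustrating the variance reduction caused by up-scaling."""
    n_blocks = len(values) // block_size
    trimmed = values[: n_blocks * block_size]
    block_means = trimmed.reshape(n_blocks, block_size).mean(axis=1)
    return values.var(), block_means.var()

# Synthetic point data: block averages show a markedly reduced variance.
rng = np.random.default_rng(1)
point_var, block_var = point_vs_block_variance(rng.normal(size=10_000), 25)
```
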
The predisposition behaviour results in a negative correlation between scale and rock mass
strength, which is not properly considered in conventional slope design. Its exclusion
can lead to fundamental alterations in the behaviour response of geomechanical models.
In comparison to conventional design, methods which incorporate heterogeneities have been shown to produce more realistic SRF/FOS distributions. The
SGS method relies on the use of a variogram to control the variance within the
simulation.
6.1.2.
due to the complex inter-relationship between intact rock material and discontinuities
found in natural rock masses (e.g. micro-fractures, macro-fractures, faults, etc.; Hoek
and Brown 1980a). This behaviour has led engineers to treat rock systems as either a
continuum whose attributes are the result of a combination of the intact rock and
discontinuity behaviour, or to explicitly model the discrete features and use a
discontinuum approach (Jing 2003; Stead et al. 2006). For discontinuum analysis, discrete fracture network (DFN) methodologies are typically employed, where individual fractures are explicitly modelled (Dershowitz and Einstein 1988; Xu and Dowd 2010).
However, incorporation of DFNs within conventional geomechanical modeling is often
difficult due to the development of unacceptable fracture configurations (Painter 2011;
Painter et al. 2012; Painter et al. 2014). As a result, researchers are commonly forced to
manually manipulate fractures prior to incorporation. However, this can result in adverse
effects, including the alteration of fracture attribute statistics and poor reproducibility between researchers.
To overcome the limitations of traditional DFN generation, an alternative
approach was proposed in this thesis. The alternative method takes into consideration
not only the statistical and theoretical basis of DFN generation, but also subsequent
geomechanical meshing algorithms. The methodology was based on the enhancement
of the Baecher disk model coupled with three simulation constraints (Baecher et al.
1978). First, a minimum overlapping/separation distance inhibits the development of
adversely small elements, by ensuring minimum spacing between discrete fractures.
Next, fracture intersection points are checked to prevent the bounding of unusually small
areas. Finally, a minimum intersection angle ensures fractures intersect at angles greater than the minimum internal angle used in mesh generation. Although this trial-and-error approach may not be the most elegant solution to the problem, it is able to
generate DFNs which conform to later mesh generation routines, improving DFN
integration. This frees researchers to focus on the actual simulation processes instead
of cleaning-up DFNs. In addition, the development of an explicit, modified method for DFN generation improves the reproducibility of DFNs between researchers, reducing
the degree of subjectivity.
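The third constraint reduces to a simple geometric test; a minimal sketch for 2D direction vectors (the function name and threshold are illustrative, not from the thesis code):

```python
import math

def intersection_angle_ok(d1, d2, min_angle_deg):
    """Return True if two fracture direction vectors cross at an angle no
    smaller than the minimum internal angle used in mesh generation."""
    # Absolute dot product gives the acute angle between the two lines.
    dot = abs(d1[0] * d2[0] + d1[1] * d2[1])
    norm = math.hypot(d1[0], d1[1]) * math.hypot(d2[0], d2[1])
    angle = math.degrees(math.acos(min(1.0, dot / norm)))
    return angle >= min_angle_deg
```

In a rejection-sampling generator, a candidate fracture failing this test (or the separation and bounded-area tests) would simply be discarded and redrawn.
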
Although promising results were obtained from the modified DFN method, it was
found to increase the degree of spatial homogenization within the fracture network,
which may be inconsistent with the natural system (Pollard and Aydin 1988). In addition,
the increased homogenization may reduce the overall fracture connectivity, and hence increase the rock bridge percentage, resulting in an overall increase in the rock mass
strength (Elmo et al. 2011; Havaej et al. 2012; Tuckey et al. 2012; Tuckey 2012; Fadakar
et al. 2014). However, similar homogenization also occurs when using traditional DFNs,
as the manual manipulation process removes closely spaced fractures. As a result,
homogenization can be considered an inherent property of current DFN model
incorporation.
6.1.3.
al. 2007). Until recently, the numerical simulation of such behaviour has been difficult.
However, recent advances in numerical modelling have allowed for the explicit
simulation of this behaviour using a number of different approaches (Jing 2003; Stead et
al. 2006). One of the leading approaches is the DEM method, first proposed by Cundall
(1971). The method simulates the finite displacement and rotation of discrete deformable and/or rigid blocks, using constitutive models for block contacts. Lorig and
Cundall (1987) proposed an extension of this method, using Voronoi tessellation
routines, to simulate intact rocks as an assortment of discrete grains. The methodology
is referred to as the UDEC-grain boundary model (UDEC-GBM). The approach restricts
large-scale deformation to inter-grain boundaries, allowing grains to become entirely
disconnected during the simulation process (Kazerani and Zhao 2010; Lan et al. 2010;
Gao and Stead 2014). However, the inability to directly measure the micro-scale, block
contact properties means that calibration must be conducted prior to use of the method
(Kazerani and Zhao 2010).
The issue with this calibration process is that irreducible uncertainties exist in the
system due to the stochastic nature of mesh generation, making it impossible to fully
calibrate a UDEC-GBM. Research estimated the degree of this irreducible uncertainty,
and found that for the cohesion and friction angle an uncertainty of 5 to 7% exists,
measured as a coefficient of variation. The uncertainty is reduced in the peak strength, with a value of approximately 3%, due to the co-dependent
nature of the cohesion and friction angle. The degree of peak strength uncertainty is
consistent with previous work; however, the research has extended on previous studies
by using a larger sample size (Kazerani et al. 2012; Kazerani 2013; Kazerani and Zhao
2014). Mesh uncertainties also appear to be relatively independent of the mesh shape,
with a similar peak uncertainty observed with both Voronoi and triangular mesh models.
An understanding of these irreducible mesh uncertainties is important for any future
UDEC-GBM studies, as researchers must realize that it is impossible to fully calibrate
the system.
applied to each in the slope design process. However, there remain limitations in all
three applications that should be explored in future research.
6.2.1.
Spatial Uncertainty
Exploration of the effects of spatial heterogeneity on continuum modelling
SGS method to model heterogeneity, and Dijkstra's (1959) algorithm to find critical paths
through failed material. While the research presented a case study on the application of
the method, similar approaches could be applied to other sites. Possible extensions of
the research include:
While the results of this study are consistent with previous studies
(Pascoe et al. 1998; Griffiths and Fenton 2000; Hicks and Samy 2002; Jefferies
et al. 2008; Lorig 2009; Srivastava 2012), additional research is needed to
confirm the conclusions in alternative settings. It will remain difficult to convince
practitioners of the short-comings of current methods until such time as an
extensive body of research is developed. It is therefore recommended that future
studies apply similar approaches to case studies with different failure modes, to
prove that similar effects will occur.
Simulation results from the Ok Tedi dataset were based on independent simulations of the GSI and UCS, as the correlation coefficient between the parameters was minimal (r = 0.19). However, the degree of co-dependency is likely to vary between
The additional
6.2.2.
DFN Generation
A modification of the Baecher et al. (1978) disk model was proposed for DFN
generation, to solve the issues with traditional DFN methodologies where problematic
elements are produced during geomechanical meshing. While, the research presents an
attempt to integrate DFNs and meshing algorithms, limitations exist, as the method overhomogenizes the fracture system. Possible future extensions of the work include:
Expansion of the modified DFN methodology to 3D. The current algorithm was
designed for use with the 2D geomechanical software codes ELFEN (Rockfield
2013), UDEC (Itasca 2014), and Phase2 (Rocscience 2013).
However, an
simulate the accurate spatial structure, or at the very least a purely random
structure.
As a result, future
studies should explore this effect, and help constrain the degree of correlation
between the two phenomena.
UDEC-GBMs research
presented in this thesis suggests that this extension may be extremely important,
as the grain shape has been shown to influence the overall failure mechanics.
6.2.3.
Continued research into the synthetic rock mass (SRM) approach using the
proposed DFN mesh generation coupled algorithm. Currently, SRM research
is relatively limited within UDEC-GBMs due to the difficulty in generating DFNs
within the conventional UDEC modelling package. However, with the advent of
the new algorithm, mesh geometries can be generated which conform to
previously generated DFNs. This work can be extended upon by investigating
SRM models in more detail, or by extending the method to include alternative
mesh geometries.
uncertainties associated with forward analysis using the same model geometry.
A study could use a similar approach to that in Chapter 5 to explore the effects of model up-scaling on the calibration uncertainty.
In closing, although numerous recommendations are provided in this thesis, the most important takeaway is the need to transition from deterministic to uncertainty-based slope design practices.
References
Ahn, S., and A. Fessler. 2003. Standard errors of mean, variance, and standard
deviation estimators. Technical Report 413, Communications and Signal
Processing Laboratory, Department of Electrical Engineering and Computer
Sciences, University of Michigan, Ann Arbor, USA.
Akram, M.S., G. Sharrock, and R. Mitra. 2011. The role of interstitial cement in synthetic
conglomeratic rocks. In: Sainsbury, R. Hart, C. Detournay and P. Cundall (eds)
Continuum and Distinct Element Numerical Modeling in Geomechanics, Itasca
Consulting Group, Minneapolis, USA. Paper 08-03. 10 p.
Alzoubi, A.K. 2009. The effect of tensile strength on the stability of rock slopes. Ph.D.
Thesis, University of Alberta, Edmonton, Canada. 205 p.
Alzoubi, A.K. 2012. Modeling of rocks under direct shear loading by using discrete
element method. Journal of Engineering & Applied Sciences. 4:5-20.
Ang, A.H.S., and W. Tang. 1984. Probability concepts in engineering planning and
design: volume I basic principles. John Wiley & Sons, New York, USA. 420 p.
Aughenbaugh, J.M. 2006. Managing uncertainty in engineering design using imprecise
probabilities and principles of information economics. Ph.D. Thesis, Georgia
Institute of Technology, Atlanta, USA. 326 p.
Aughenbaugh, J.M., and C.J. Paredis. 2006. The value of using imprecise probabilities
in engineering design. Journal of Mechanical Design. 128:969-979.
Augustin, T., and R. Hable. 2010. On the impact of robust statistics on imprecise
probability models: a review. Structural Safety. 32:358-365.
Australian Geomechanics Society. 2000. Landslide risk management concepts and
guidelines. AGS Sub-Committee on Landslide Risk Management, Sydney,
Australia. 92 p.
Aydin, A. 2004. Fuzzy set approaches to classification of rock masses. Engineering
Geology. 74: 227-245.
Baczynski, N.R.P. 1980. Rock mass characterization and its application assessment of
unsupported underground openings. Ph.D. Thesis, University of Melbourne,
Australia. 233 p.
Baczynski, N.R.P. 2000. STEPSIM4 step-path method for slope risks. GeoEng2000:
An International Conference on Geotechnical & Geological Engineering,
Melbourne, Australia. 19-24.
Baczynski, N.R.P. 2008. STEPSIM4-REVISED: Network analysis methodology for
critical paths in rock mass slopes. In: Proceedings of the Southern Hemisphere
International Rock Mechanics Symposium (SHIRMS-2008), Perth, Australia. 13
p.
Baczynski, N.R.P., I. de Bruyn, J. Mylvaganam, and D. Walker. 2011. High rock slope
cutback geotechnics: a case study at Ok Tedi mine. In: Slope Stability 2011:
International Symposium on Rock Slope Stability in Open Pit Mining and Civil
Engineering, Vancouver, Canada. 14 p.
Baczynski, N.R.P. 2014. Personal Communications. August 22, 2014.
Bae, H.R., R.V. Grandhi, and R.A. Canfield. 2004. An approximation approach for
uncertainty quantification using evidence theory. Reliability Engineering and
System Safety. 86:215-225.
Baecher, G.B., N.A. Lanney, and H.H. Einstein. 1978. Statistical description of rock
properties and sampling. In: Proceedings of the 18th U.S. Symposium on Rock
Mechanics, American Rock Mechanics Association. 5C1-8.
Bagheri, M. 2009. Model uncertainty of design tools to analyze block stability. Licentiate
Thesis. Royal Institute of Technology, Stockholm, Sweden. 163 p.
Bamford, R.W. 1972. The Mount Fubilan (Ok Tedi) porphyry copper deposit, territory of
Papua New Guinea. Economic Geology. 67:1019-1033.
Barton, C.M. 1978. Analysis of joint traces. In: Proceedings of the 19th U.S. Symposium on Rock Mechanics, American Rock Mechanics Association. 39-40.
Beale, G. 2009. Hydrogeologic model. In: J. Read, and P. Stacey (eds) Guidelines for open pit slope design. CSIRO Publishing, Collingwood, Australia. 141-201.
Bear, J. 1972. Dynamics of fluids in porous media. Courier Dover Publications. 763 p.
Beckmann, P. 1971. A history of π. Golden Press, Boulder, USA. 208 p.
Bieniawski, Z.T. 1967. Mechanism of brittle fracture of rock: part I: theory of the fracture process. International Journal of Rock Mechanics and Mining Sciences & Geomechanics Abstracts. 4:395-430.
Bieniawski, Z.T. 1973. Engineering classification of jointed rock masses. Transactions of the South African Institution of Civil Engineers. 15:335-344.
Bieniawski, Z.T. 1976. Rock mass classification in rock engineering. In: Proceedings of
the Symposium on Exploration for Rock Engineering, Johannesburg, South
Africa. 97-106.
Bieniawski, Z.T. 1984. Rock mechanics design in mining and tunnelling. Balkema,
Rotterdam. 272 p.
Bieniawski, Z.T. 1989. Engineering rock mass classifications. John Wiley & Sons, New
York, USA. 384 p.
Bieniawski, Z.T., B.C. Tamames, J.M.G. Fernadez, and M.A. Hernandez. 2006. Rock
mass excavability (RME) indicator: new way to selecting the optimum tunnel
construction method. In: ITA-AITES World Tunnel Congress and 32nd ITA
General Assembly, Seoul, Korea. 6 p.
Bieniawski, Z.T., B. Celada, and J.M. Galera. 2007. TBM excavability: prediction and
machine-rock interaction. In: Proceedings of the Rapid Excavation and Tunneling
Conference (RETC), Toronto, Canada. 1118-1130.
Bieniawski, Z.T., and R. Grandori. 2007. Predicting TBM excavability part II. Tunnels &
Tunnelling International. January 2008, 15-18.
Billaux, D., J.P. Chiles, K. Hestir, and J. Long. 1989. Three-dimensional statistical modelling of a fractured rock mass: an example from the Fanay-Augères mine. International Journal of Rock Mechanics and Mining Sciences & Geomechanics Abstracts. 26:281-299.
Binaghi, E., L. Luzi, P. Madella, F. Pergalani, and A. Rampini. 1998. Slope instability
zonation: a comparison between certainty factor and fuzzy Dempster-Shafer
approaches. Natural Hazards. 17:77-97.
Blackmore, S., R. Godwin, and S. Fountas. 2003. The analysis of spatial and temporal
trends in yield map data over six years. Biosystems Engineering. 84:455-466.
Bonnet, E., O. Bour, N.E. Odling, P. Davy, I. Main, P. Cowie, and B. Berkowitz. 2001. Scaling of fracture systems in geological media. Reviews of Geophysics. 39:347-383.
Brace, W.F., B. Paulding, and C. Scholz. 1966. Dilatancy in the fracture of crystalline
rocks. Journal of Geophysical Research. 71:3939-3953.
Brown, E.T. 1970. Strength of models of rock with intermittent joints. Journal of the Soil
Mechanics and Foundation Division. 96:1935-1949.
Brown, E.T. 2008. Estimating the mechanical properties of rock masses. In: Y. Potvin, J. Carter, A. Dyskin, and R. Jeffrey (eds) SHIRMS 2008, Australian Centre for Geomechanics, Perth, Australia. 3-22.
Burns, M. Propagation of imprecise probabilities through black-box models. M.Sc.
Thesis, Georgia Institute of Technology, Atlanta, USA. 99 p.
Christian, J.T., and G.B. Baecher. 2002. The point-estimate method with large numbers
of variables. International Journal for Numerical and Analytical Methods in
Geomechanics. 26:1515-1529.
Christianson, M., M. Board, and D. Rigby. 2006. UDEC simulation of triaxial testing of
lithophysal tuff. In: Proceedings of the 41st US Symposium on Rock Mechanics,
Golden, USA. 8 p.
Clark, I. 1979. Practical geostatistics. 1st ed. Elsevier Applied Science, Essex, UK.
Clark, W.A.V., and K.L. Avery. 1976. The effects of data aggregation in statistical
analysis. Geographical Analysis. 75:428-438.
Clover Associates Pty Ltd. 2010. GALENA. Version 5.0. Software. Robertson, Australia.
Colyvan, M. 2008. Is probability the only coherent approach to uncertainty? Risk
Analysis. 28: 645-652.
Cooke, R. 2004. The anatomy of the squizzel: The role of operational definitions in
representing uncertainty. Reliability Engineering & System Safety. 85:313-319.
Cundall, P.A. 1971. A computer model for simulating progressive large scale movements
in blocky rock systems. In: Proceedings of the Symposium of the International
Society of Rock Mechanics (ISRM), Nancy, France. 129-136.
Cundall, P., M. Pierce, and D. Mas Ivars. 2008. Quantifying the size effect of rock mass
strength. In: Proceedings of the 1st South Hemisphere International Rock
Mechanics Symposium, Perth, Australia. 315 p.
Cundall, P.A. 2011. Lattice method for modeling brittle, jointed rock. In: D.P. Sainsbury,
R. Hart, C. Detournay and P. Cundall (eds) Continuum and Distinct Element
Numerical Modeling in Geomechanics, Itasca Consulting Group, Minneapolis,
USA. Paper 01-02. 9 p.
Damjanac, B., M. Board, M. Lin, D. Kicker, and J. Leem. 2007. Mechanical degradation of emplacement drifts at Yucca Mountain, a modeling case study. Part II: lithophysal rock. International Journal of Rock Mechanics and Mining Sciences & Geomechanics Abstracts. 44:368-399.
Damjanac, B., and C. Fairhurst. 2010. Evidence for a long-term strength threshold in
crystalline rock. Rock Mechanics and Rock Engineering. 43:513-531.
Davies, H.L., W.J.S. Howell, R.S.H. Fardon, R.J. Carter, and E.D. Bumstead. 1978.
History of the Ok Tedi porphyry copper prospect, Papua New Guinea. Economic
Geology. 73:796-809.
Davy, P., A. Sornette, and D. Sornette. 1990. Some consequences of a proposed fractal nature of continental faulting. Nature. 348:56-58.
Diederichs, M.S. 2000. Instability of hard rock masses: the role of tensile damage and
relaxation. Ph.D. Thesis, University of Waterloo, Canada. 597 p.
Diederichs, M.S. 2003. Manuel Rocha medal recipient: rock fracture and collapse under
low confinement conditions. Rock Mechanics and Rock Engineering. 36:339-381.
Diederichs, M.S., P.K. Kaiser, and E. Eberhardt. 2004. Damage initiation and
propagation in hard rock tunnelling and the influence of near-face stress rotation.
International Journal of Rock Mechanics and Mining Sciences & Geomechanics
Abstracts. 41:785-812.
Diederichs, M.S., M. Lato, R. Hammah, and P. Quinn. 2007a. Shear strength reduction
approach for slope stability analysis. In: Proceedings of the 1st Canada-US Rock
Mechanics Symposium, Vancouver, Canada. 8 p.
Diederichs, M.S., J.L. Carvalho, and T. Carter. 2007b. A modified approach for prediction
of strength and post yield behaviour for high GSI rock masses in strong, brittle
ground. Proceedings of the 1st Canada-US Rock Mechanics Symposium,
Vancouver, Canada. 8 p.
Dijkstra, E.W. 1959. A note on two problems in connexion with graphs. Numerische Mathematik. 1:269-271.
Dimitrakopoulos, R., and M.B. Fonseca. 2003. Assessing risk in grade-tonnage curves
in a complex copper deposit, northern Brazil, based on an efficient joint
simulation of multiple correlated variables. In: Proceedings of the Application of
Computers and Operations Research in the Minerals Industries, Cape Town,
South Africa. 373-382.
Dodagoudar, G.R., and G. Venkatachalam. 2000. Reliability analysis of slopes using
fuzzy set theory. Computers and Geotechnics. 27:101-115.
Douglas, K.J., and G. Mostyn. 2004. The shear strength of rock masses. In: G. Farquar,
Kelsy, Marsh, and Fellows (eds) Proceedings of the 9th Australia New Zealand
Conference on Geomechanics, New Zealand Geotechnical Society, Auckland,
New Zealand. 166-172 p.
Dowd, P.A. 1992. A review of recent developments in geostatistics. Computers &
Geosciences. 17:1481-1500.
Du, L., K.K. Choi, and B.D. Youn. 2006. Inverse possibility analysis method for
possibility-based design optimization. AIAA Journal. 44:2682-2690.
Du, L., B.D. Youn, and D. Gorsich. 2006. Possibility-based design optimization method
for design problems with both statistical and fuzzy input data. Journal of
Mechanical Design. 128:928-935.
Duncan, J.M. 2000. Factors of safety and reliability in geotechnical engineering. Journal
of Geotechnical and Geoenvironmental Engineering, 126:307-316.
Eberhardt, E.D. 1998. Brittle rock fracture and progressive damage in uniaxial
compression. Ph.D. Thesis, University of Saskatchewan, Saskatoon, Canada.
334 p.
Eberhardt, E., D. Stead, B. Stimpson, and R.S. Read. 1998. Identifying crack initiation
and propagation thresholds in brittle rock. Canadian Geotechnical Journal.
35:222-233.
Eckhardt, R. 1987. Stan Ulam, John von Neumann, and the Monte Carlo method, Los
Alamos Science, Special Issue. 15:131-137.
Einstein, H.H., and G.B. Baecher. 1983a. Probabilistic and statistical methods in
engineering geology. Rock Mechanics and Rock Engineering. 16:39-72.
Einstein, H.H., D. Veneziano, G.B. Baecher, and K.J. O'Reilly. 1983b. The effect of
discontinuity persistence on rock slope stability. International Journal of Rock
Mechanics and Mining Sciences & Geomechanics Abstracts. 20:227-236.
Elmo, D., C. Clayton, S. Rogers, R. Beddoes, and S. Greer. 2011. Numerical simulations
of potential rock bridge failure within a naturally fractured rock mass. In:
Proceedings of the International Symposium on Rock Slope Stability in Open Pit
Mining and Civil Engineering. Vancouver, Canada. 13 p.
Elmouttie, M.K., and G.V. Poropat. 2011. Uncertainty propagation in structural modeling.
In: Slope Stability 2011: International Symposium on Rock Slope Stability in
Open Pit Mining and Civil Engineering, Vancouver, Canada. 13 p.
El-Ramly, H., N.R. Morgenstern, and D.M. Cruden. 2002. Probabilistic slope stability
analysis for practice. Canadian Geotechnical Journal. 39:665-683.
El-Ramly, H., N.R. Morgenstern, and D.M. Cruden. 2006. Lodalen slide: A probabilistic
assessment. Canadian Geotechnical Journal. 43:956-968.
Emery, X. Properties and limitations of sequential indicator simulation. Stochastic Environmental Research and Risk Assessment. 18:414-424.
Esfahani, N.M., and O. Asghari. 2013. Fault detection in 3D by sequential Gaussian
simulation of Rock Quality Designation (RQD). Arabian Journal of Geosciences.
10:3737-3747.
Esmaieli, K., J. Hadjigeorgiou, and M. Grenon. 2010. Estimating geometrical and
mechanical REV based on synthetic rock mass models at Brunswick Mine.
International Journal of Rock Mechanics and Mining Sciences & Geomechanics
Abstracts. 47:915-926.
Fadakar, A.Y., P.A. Dowd, and X. Chaoshui. 2014. Connectivity field: a measure for
characterising fracture networks. Mathematical Geosciences. DOI 10.1007/s11004-013-9520-7.
Fagerlund, G., M. Royle, and J. Scibek. 2013. Integrating complex hydrogeological and geotechnical models: a discussion of methods and issues. In: P.M. Dight (eds)
Slope Stability 2013: International Symposium on Rock Slope Stability in Open
Pit Mining and Civil Engineering, Brisbane, Australia. 1091-1102.
Fenton, G.A. 1997. Probabilistic methods in geotechnical engineering. In: Workshop
presented at ASCE GeoLogan97 Conference, Logan, USA.
Ferson, S., and S. Donald. 1998. Probability bounds analysis. In: A. Mosleh, and R.A.
Bari (eds) Probabilistic Safety Assessment and Management. Springer-Verlag,
New York, USA. 1203-1208.
Ferson, S., and W.T. Tucker. 2006. Sensitivity analysis using probability bounding.
Reliability Engineering & System Safety. 91:1435-1442.
Fonseka, G.M., S.A.F. Murrell, and P. Barnes. 1985. Scanning electron microscope and
acoustic emission studies of crack development in rocks. International Journal of
Rock Mechanics and Mining Sciences & Geomechanics Abstracts. 22:273-289.
Garcia, X., J.P. Latham, J. Xiang, and J.P. Harrison. 2009. A clustered overlapping
sphere algorithm to represent real particles in discrete element modelling.
Géotechnique. 59:779-784.
Gao, F. 2013. Simulation of failure mechanics around underground coal mine openings
using discrete element modelling. Ph.D. Thesis, Simon Fraser University,
Vancouver, Canada. 288 p.
Gao, F.Q., and D. Stead. 2014. The application of a modified Voronoi logic to brittle
fracture modelling at the laboratory and field scale. International Journal of Rock
Mechanics and Mining Sciences & Geomechanics Abstracts. 68:1-14.
Gao, F., D. Stead, and J. Coggan. 2014a. Evaluation of coal longwall caving
characteristics using an innovative UDEC Trigon approach. Computers and
Geotechnics. 55:448-460.
Gao, F., D. Stead, and J. Coggan. 2014b. Simulation of roof shear failure in coal mine
roadways using an innovative UDEC trigon approach. Computers and
Geotechnics. 61:33-41.
Gehlke, C.E., and K. Biehl. 1934. Certain effects of grouping upon the size of correlation
coefficient in census tract material. Journal of the American Statistical
Association Supplement. 29:169-170.
Geier, J.E., K. Lee, and W.S. Dershowitz. 1988. Field validation of conceptual models for
fracture geometry. In: Proceedings of the American Geophysical Union, 1988 Fall
Meeting, San Francisco, EOS, Transactions, American Geophysical Union.
69:1177.
Giasi, C.I., P. Masi, and C. Cherubini. 2003. Probabilistic and fuzzy reliability analysis of
a sample slope near Aliano. Engineering Geology. 67:391-402.
Giles, R. 1982. Foundations for a theory of possibility. In: M.M. Gupta, and E. Sanchez
(eds) Fuzzy Information and Decision Processes. North-Holland Publishing
Company, Amsterdam, Holland. 183-195.
Glynn, E.F., D. Veneziano, and H.H. Einstein. 1978. The probabilistic model for shearing
resistance of jointed rock. In: 19th US Symposium on Rock Mechanics (USRMS).
American Rock Mechanics Association. 66-76.
Glynn, E. 1979. A probabilistic approach to the stability of rock slopes. Ph.D.
dissertation, Massachusetts Institute of Technology, Cambridge, USA. 442 p.
Good, I.J. 1950. Probability and the weighting of evidence. Charles Griffin, London, UK.
119 p.
Good, I.J. 1983. Good thinking: the foundations of probability and its applications.
University of Minnesota Press, Minneapolis, USA. 352 p.
Goovaerts, P. 1997. Geostatistics for natural resources evaluation. 1st ed. Oxford University Press, New York, USA. 496 p.
Griffiths, D.V. and G.A. Fenton. 2000. Influence of soil strength spatial variability on the
stability of an undrained clay slope by finite elements. In: Slope Stability 2000:
Proceedings of GeoDenver 2000, Denver, ASCE Geotechnical Special
Publication No. 101, 184-193. New York. DOI:10.1061/40512(289)14.
Griffiths, D.V., J. Huang, and G.A. Fenton. 2009. Influence of spatial variability on slope
reliability using 2-D random fields. Journal of Geotechnical and
Geoenvironmental Engineering. 135:1367-1378.
Gringarten, E. 1997. 3D geometric description of fractured reservoirs. In: E.Y. Baafi, and N.A. Schofield (eds) Geostatistics Wollongong '96. Kluwer Academic, Dordrecht, Netherlands. 424-432.
Haining, R.P. 2003. Spatial data analysis: theory and practice. Cambridge University
Press, Cambridge, UK. 454 p.
Hájek, A. 2012. Interpretations of probability. In: E.N. Zalta (eds) The Stanford Encyclopedia of Philosophy, Winter 2012 Edition. http://plato.stanford.edu/archives/win2012/entries/probability-interpret.
Hallin, M., Z. Lu, and L.T. Tran. 2004. Kernel density estimation for spatial processes:
the L1 theory. Journal of Multivariate Analysis. 88:61-75.
Hamidi, J.K., K. Shahriar, B. Rezai, and H. Bejari. 2010. Application of fuzzy set theory
to rock engineering classification systems: an illustration of the rock mass
excavability index. Rock Mechanics and Rock Engineering. 43:335-350.
Hammah, R.E., T.E. Yacoub, B. Corkum, and J. Curran. 2005. The shear strength
reduction method for the generalized Hoek-Brown criterion. In: Proceedings of
the 40th U.S. Symposium on Rock Mechanics, Anchorage, USA. 6 p.
Hammah, R.E., T.E. Yacoub, and J.H. Curran. 2006. Investigating the performance of
the shear strength reduction (SSR) method on the analysis of reinforced slopes.
In: Proceedings of the 59th Canadian Geotechnical Conference, Vancouver,
Canada. 5 p.
Hammah, R.E., and J.H. Curran. 2009. Is it better to be approximately right than
precisely wrong: why simple models work in mining geomechanics. In:
Proceedings of the 43rd US Rock Mechanics Symposium and 4th U.S.-Canada
Rock Mechanics Symposium, Asheville, USA. 8 p.
Hammah, R.E., T.E. Yacoub, and J.H. Curran. 2009. Numerical modelling of slope
uncertainty due to rock mass jointing. In: Proceedings of the International
Conference on Rock Joints and Jointed Rock Masses, Tucson, Arizona, USA. 8
p.
Hammersley, J.M., and D.C. Handscomb. 1964. Monte Carlo methods. John Wiley &
Sons, New York, USA. 178 p.
Hanss, M. 2005. Applied fuzzy arithmetic: an introduction with engineering applications. Springer, New York, USA. 259 p.
Harr, M.E. 1989. Probabilistic estimates for multivariate analyses. Applied Mathematical
Modelling. 13:281-294.
Harr, M.E. 1996. Reliability based design in civil engineering. Dover, New York, USA.
281 p.
Harrison, J.P., and J.A. Hudson. 2010. Incorporating parameter variability in rock
mechanics analyses: fuzzy mathematics applied to underground rock spalling.
Rock Mechanics and Rock Engineering. 43:219-224.
Harrison, J.P., A.M. Ferrero, and S. Cravero. 2001. Fuzzy partitioning algorithms applied to the interpretation of distinct element modelling results. Géotechnique. 50:677-686.
Hart, A.G. 1942. Risk, uncertainty and the unprofitability of compounding probabilities. In: O. Lange, F. McIntyre, and T.O. Yntema (eds) Studies in Mathematical Economics and Econometrics. University of Chicago Press, Chicago, USA. 110-118.
Havaej, M., D. Stead, L. Lorig, and J. Vivas. 2012. Modelling rock bridge failure and
brittle fracturing in large open pit rock slopes. In: Proceedings of the 46th U.S.
Rock Mechanics/Geomechanics Symposium, American Rock Mechanics
Association, Chicago, USA. 9 p.
Havaej, M., A. Wolter, and D. Stead. 2014. Exploring the potential role of brittle fracture
in the 1963 Vajont Slide, Italy. Submitted to: International Journal of Rock
Mechanics and Mining Sciences & Geomechanics Abstracts.
Hearn, G.J. 1995. Landslide and erosion hazard mapping at Ok Tedi copper mine,
Papua New Guinea. Quarterly Journal of Engineering Geology and
Hydrogeology. 28:47-60.
Henley, E.J., and H. Kumamoto. 1981. Reliability engineering and risk assessment.
Prentice-Hall, New Jersey, USA. 540 p.
Herbst, M., H. Konietzky, and K. Walter. 2008. 3D microstructural modeling. In: R. Hart,
C. Detournay and P. Cundall (eds) Continuum and Distinct Element Numerical
Modeling in Geo-Engineering, Itasca Consulting Group, Minneapolis, USA. Paper
08-05. 7 p.
Hicks, M.A., and R. Boughrarou. 1998. Finite element analysis of the Nerlerk underwater
berm failures. Géotechnique. 48:169-185.
Hicks, M.A., and K. Samy. 2002. Influence of heterogeneity on undrained clay slope
stability. Quarterly Journal of Engineering Geology and Hydrogeology. 35:41-49.
Hoek, E. 1968. Brittle failure of rock. In: K.G. Stagg, and O.C. Zienkiewicz (eds) Rock
Mechanics in Engineering Practice. 99-124.
Hoek, E., and E.T. Brown. 1980a. Underground excavations in rock. 1st edition. Institution of Mining and Metallurgy, London, UK. 536 p.
Hoek, E., and E.T. Brown. 1980b. Empirical strength criterion for rock masses. Journal of
Geotechnical Engineering. 106:1013-1035.
Hoek, E. 1983. Strength of jointed rock masses. Géotechnique. 33:187-223.
Hoek, E. 1994. Strength of rock and rock masses. ISRM News Journal. 2:4-16.
Hoek, E., P.K. Kaiser, and W.F. Bawden. 1995. Support of Underground Excavations in
Hard Rock. Balkema, Rotterdam, Netherlands. 300 p.
Hoek, E., and E.T. Brown. 1997. Practical estimates of rock mass strength. International
Journal of Rock Mechanics and Mining Sciences & Geomechanics Abstracts.
34:1165-1186.
Hoek, E., P. Marinos, and M. Benissi. 1998. Applicability of the Geological Strength
Index (GSI) classification for very weak and sheared rock masses. The case of
the Athens Schist Formation. Bulletin of Engineering Geology and the
Environment. 57:151-160.
Hoek, E., C. Carranza-Torres, and B. Corkum. 2002. Hoek-Brown failure criterion: 2002
edition. In: Proceedings of the 5th North American Rock Mechanics Symposium,
Toronto, Canada. 7 p.
Hoek, E., and P. Marinos. 2007. A brief history of the development of the Hoek-Brown
failure criterion. Soils and Rocks, No 2, November. 13 p.
Jennings, J. 1970. A mathematical theory for the calculation of the stability of slopes in open cast mines. In: Proceedings of the Symposium on the Theoretical Background to the Planning of Open Pit Mines, Johannesburg, South Africa. 87-102.
Jensen, O.P., M.C. Christman, and T.J. Miller. 2006. Landscape-based geostatistics: a
case study of the distribution of blue crab in Chesapeake Bay. Environmetrics.
17:605-621.
Jing, L. 2003. A review of techniques, advances and outstanding issues in numerical
modelling for rock mechanics and rock engineering. International Journal of Rock
Mechanics and Mining Sciences & Geomechanics Abstracts. 40:283-353.
Journel, A.G., and C.J. Huijbregts. 1978. Mining geostatistics. 1st ed. Academic Press, New York, USA. 600 p.
Jahns, H. 1966. Measuring the strength of rock in situ at an increasing scale. In: Proceedings of the 1st ISRM Congress, Lisbon, Portugal. 477-482.
Karanki, D.R., H.S. Kushwaha, A.K. Verma, and S. Ajit. 2009. Uncertainty analysis
based on probability bounds (P-Box) approach in probabilistic safety
assessment. Risk Analysis. 29:662-675.
Kazerani, T., and J. Zhao. 2010. Micromechanical parameters in bonded particle method
for modelling of brittle material failure. International Journal for Numerical and
Analytical Methods in Geomechanics. 34:1877-1895.
Kazerani, T., Z.Y. Yang, and J. Zhao. 2012. A discrete element model for predicting
shear strength and degradation of rock joint by using compressive and tensile
test data. Rock Mechanics and Rock Engineering. 45:695-709.
Kazerani, T. 2013. A discontinuum-based model to simulate compressive and tensile
failure in sedimentary rock. Journal of Rock Mechanics and Geotechnical
Engineering. 5:378-388.
Kazerani, T., and J. Zhao. 2014. A microstructure-based model to characterize
micromechanical parameters controlling compressive and tensile failure in
crystallized rock. Rock Mechanics and Rock Engineering. 47:435-452.
Kemeny, J.M., and N.G.W. Cook. 1986. Effective moduli, non-linear deformation and
strength of a cracked elastic solid. International Journal of Rock Mechanics and
Mining Sciences & Geomechanics Abstracts. 23:107-118.
Kiureghian, A.D. 2007. Aleatory or epistemic? Does it matter? Special Workshop on Risk
Acceptance and Risk Communication, March 26-27, 2007, Stanford University,
USA. 13 p.
Klir, G.J. 1992. Probabilistic versus possibilistic conceptualization of uncertainty. In: B.M.
Ayyub, M.M. Gupta, and L.N. Kanal (eds) Analysis and Management of
Uncertainty: Theory and Applications. North-Holland Publishing Company, New
York, USA. 38-41.
Kohlas, J., and P.A. Monney. 1995. A mathematical theory of hints: an approach to
Dempster-Shafer theory of evidence. Lecture Notes in Economics and
Mathematical Systems. 422 p.
Kolmogorov, A.N. 1933. Grundbegriffe der Wahrscheinlichkeitsrechnung. Ergebnisse der Mathematik. Translated in 1950 as Foundations of the Theory of Probability. Chelsea Publishing Company, New York, USA. 84 p.
Kong, W.K. 2002. Risk assessment of slopes. Quarterly Journal of Engineering Geology
and Hydrogeology. 35:213-222.
Kovari, K., A. Tisa, H. Einstein, and J. Franklin. 1983. Suggested methods for determining the strength of rock materials in triaxial compression: revised version. International Journal of Rock Mechanics and Mining Sciences & Geomechanics Abstracts. 20:285-290.
Labuz, J.F., and A. Zang. 2012. Mohr-Coulomb failure criterion. Rock Mechanics and
Rock Engineering. 45:975-979.
Lajtai, E.Z. 1968. Shear strength of weakness planes in rock. International Journal of
Rock Mechanics and Mining Sciences & Geomechanics Abstracts. 6:499-515.
Lan, H., C.D. Martin, and B. Hu. 2010. Effect of heterogeneity of brittle rock on
micromechanical extensile behavior during compression loading. Journal of
Geophysical Research. 115:B01202. doi:10.1029/2009JB006496.
Laubscher, D.H. 1975. Class distinction in rock masses. Coal, Gold and Base Minerals
of South Africa. 23:37-50.
Laubscher, D.H. 1990. A geomechanics classification system for the rating of rock mass
in mine design. Journal of the South African Institute of Mining and Metallurgy.
90:257-273.
Laubscher, D.H., and J. Jakubec. 2001. The MRMR rock mass classification for jointed
rock masses. In: W.A. Hustrulid, and R.L. Bullock (eds) Underground mining
methods: engineering fundamentals and international case studies, Society of
Mining Metallurgy and Exploration, Littleton, USA. 475-481.
Leuangthong, O., K.D. Khan, and C.V. Deutsch. 2011. Solved problems in geostatistics.
2nd ed. New Jersey: Wiley. 208 p.
Levi, I. 1974. On indeterminate probabilities. Journal of Philosophy. 71:391-418.
Li, L., I. Larsen, and R.M. Holt. 2008. A grain scale PFC3D model. In: R. Hart, C.
Detournay and P. Cundall (eds) Continuum and Distinct Element Numerical
Modeling in Geo-Engineering, Itasca Consulting Group, Minneapolis, USA. Paper
08-02. 7 p.
Little, T.N., J.P. Cortes, and N.R.P. Baczynski. 1998. Risk-based slope design
optimisation study for the Ok Tedi copper-gold mine. Internal Report: Ok Tedi
Mining Ltd., Tabubil, Papua New Guinea. 1657 p.
Lockner, D.A., J.D. Byerlee, V. Kuksenko, A. Ponomarev, and A. Sidorin. 1992. Observations of quasi-static fault growth from acoustic emissions. In: B. Evans,
and T. Wong (eds) Fault mechanics and transport properties of rocks. Academic
Press, New York, USA. 3-31.
Long, J.C.S., and D.M. Billaux. 1987. From field data to fracture network modelling: an example incorporating spatial structure. Water Resources Research. 23:1201-1216.
Lorig, L.J., and P.A. Cundall. 1987. Modeling of reinforced concrete using the distinct
element method. In: S.B. Shah, and S.E. Swartz (eds) Fracture of Concrete and
Rock, SEM-RILEM International Conference, Springer, Houston, USA. 276-287.
Lorig, L.J., A. Watson, C.D. Martin, and D. Cruden. 2009. Rockslide run-out prediction
from distinct element analysis. Geomechanics and Geoengineering. 4:17-25.
Lorig, L.J. 2009. Challenges in current slope stability analysis methods. In: Slope
Stability, International Symposium on Rock Slope Stability in Open Pit Mining and
Civil Engineering, 2009, Santiago, Chile. 8 p.
Mahabadi, O.K., A. Lisjak, G. Grasselli, and A. Munjiza. 2012. Y-Geo: a new combined
finite-discrete element numerical code for geomechanical applications.
International Journal of Geomechanics. 12:676-688.
Mandelbrot, B.B. 1982. The fractal geometry of nature. W.H. Freeman, New York, USA.
480 p.
Maptek Pty Ltd. 2013. Vulcan. Version 8.1.4. 64 bit. Software. Adelaide, Australia.
Mardia, K.V., W.B. Nyirongo, A.N. Walder, C. Xu, P.A. Dowd, R.J. Fowell, and J.T. Kent.
2007. Markov chain Monte Carlo implementation of rock fracture modelling. Mathematical Geosciences. DOI 10.1007/s11004-007-9099-3.
Marschak, J. 1974. Economic Information, Decision, and Prediction: Selected Essays.
Vol. I-III. D. Reidel Publishing Company, Boston, USA. 400 p.
Martin, C.D., and N.A. Chandler. 1994. The progressive fracture of Lac du Bonnet
granite. International Journal of Rock Mechanics and Mining Sciences &
Geomechanics Abstracts. 31:643-659.
Martin, C.D. 1997. The 17th Canadian geotechnical colloquium: the effect of cohesion
loss and stress path on brittle rock strength. Canadian Geotechnical Journal.
34:698-725.
Martin, C.D., P.K. Kaiser, and D.R. McCreath. 1999. Hoek-Brown parameters for
predicting the depth of brittle failure around tunnels. Canadian Geotechnical
Journal. 36:136-151.
Marinos, V., P. Marinos, and E. Hoek. 2005. The geological strength index: applications
and limitations. Bulletin of Engineering Geology and the Environment. 64:55-65.
Mas Ivars, D., M. Pierce, D. DeGagné, and C. Darcel. 2007. Anisotropy and scale dependency in jointed rock-mass strength: a synthetic rock mass study. In: Proceedings of the 1st International FLAC/DEM Symposium on Numerical Modeling. 231-239.
Mas Ivars, D., M.E. Pierce, C. Darcel, J. Reyes-Montes, D.O. Potyondy, R. Paul Young,
and P.A. Cundall. 2011. The synthetic rock mass approach for jointed rock mass
modelling. International Journal of Rock Mechanics and Mining Sciences &
Geomechanics Abstracts. 48:219-244.
Matheron, G. 1963. Principles of geostatistics. Economic Geology. 58:1246-1266.
Matsui, T., and K.C. San. 1992. Finite element slope stability analysis by shear strength
reduction technique. Soils and Foundations. 32:59-70.
Mayer, J.M., P. Hamdi, and D. Stead. 2014a. A modified discrete fracture network
approach for geomechanical simulation. In: Proceedings of the 1st International
Conference on Discrete Fracture Network Engineering. 9 p.
Mayer, J.M., D. Stead, I. de Bruyn, and M. Nowak. 2014b. A sequential Gaussian
simulation approach to modelling rock mass heterogeneity. In: Proceedings of
the 48th US Rock Mechanics/Geomechanics Symposium, Minneapolis, USA. 11
p.
Mazzoccola, D.F., D.L. Millar, and J.A. Hudson. 1997. Information, uncertainty and
decision making in site investigation for rock engineering. Geotechnical and
Geological Engineering. 15:145-180.
Méndez-Venegas, J., and M.A. Díaz-Viera. 2014. Stochastic modeling of spatial grain
distribution in rock samples from terrigenous formations using the plurigaussian
simulation method. In: M. Díaz-Viera, P. Sahay, M. Coronado and A. Ortiz-Tapia
(eds) Mathematical and Numerical Modeling in Porous Media: Applications in
Geosciences, 1st ed. CRC Press, Taylor & Francis Group, Boca Raton, USA. 17 p.
Mohamed, S., and A.K. McCowan. 2001. Modelling project investment decisions under
uncertainty using possibility theory. International Journal of Project Management.
19:231-241.
Mostyn, G., and K. Douglas. 2000. Strength of intact rock and rock masses. In:
Proceedings of GeoEng 2000, Technomic Publishing Company, Lancaster, USA.
1389-1421.
Nadim, F. 2007. Tools and strategies for dealing with uncertainty in geotechnics.
Probabilistic Methods in Geotechnical Engineering. 491:71-95.
Nagel, E. 1960. The structure of science. Hackett, London, UK. 618 p.
Nicksiar, M., and C.D. Martin. 2013. Factors affecting crack initiation in low porosity
crystalline rocks. Rock Mechanics and Rock Engineering. DOI 10.1007/s00603-013-0451.
Nikolaidis, E., S. Chen, H. Cudney, R.T. Haftka, and R. Rosca. 2004. Comparison of
probability and possibility for design against catastrophic failure under
uncertainty. Journal of Mechanical Design. 126:386-394.
Nikolaidis, E. 2005. Types of uncertainty in design decision making. In: E. Nikolaidis,
D.M. Ghiocel and S. Singhal (eds) Engineering Design Reliability Handbook.
CRC Press, New York, USA. 8-1-8-20.
Nowak, M., and G. Verly. 2004. The practice of sequential Gaussian simulation. In: O.
Leuangthong and C.V. Deutsch (eds) Geostatistics Banff, Netherlands, Springer.
387-398.
Nowak, M., and G. Verly. 2007. A practical process for geostatistical simulation with
emphasis on Gaussian methods. In: R. Dimitrakopoulos (eds) Orebody Modelling
and Strategic Mine Planning - Uncertainty and Risk Management Models (2nd
Edition). The Australasian Institute of Mining and Metallurgy (The AusIMM). 10 p.
Oberguggenberger, M., and W. Fellin. 2008. Reliability bounds through random sets:
non-parametric methods and geotechnical applications. Computers and
Structures. 86:1093-1101.
Oberkampf, W.L., S.M. DeLand, B.M. Rutherford, K.V. Diegert, and K.F. Alvin. 2002.
Error and uncertainty in modeling and simulation. Reliability Engineering and
System Safety. 75:333-357.
Olofsson, I., and A. Fredriksson. 2005. Strategy for a numerical rock mechanics site
descriptive model: further development of the theoretical/numerical approach.
SKB Rapport R-05-43, ISSN 1402-3091.
Olsson, A.M.J., and G.E. Sandberg. 2002. Latin hypercube sampling for stochastic finite
element analysis. Journal of Engineering Mechanics. 128:121-125.
O'Reilly, K. 1980. The effect of joint plane persistence on rock slope reliability. M.Sc.
Thesis, Massachusetts Institute of Technology, Cambridge, USA. 553 p.
Oren, H., and S. Bakke. 2003. Reconstruction of Berea sandstone and pore-scale
modeling of wettability effects. Petroleum Science and Engineering. 39:177-199.
Oren, H., and M. Blunt. 2005. Pore space reconstruction using multiple point statistics.
Petroleum Science and Engineering. 46:121-137.
Oren, H., and M. Blunt. 2007. Pore space reconstruction of vuggy carbonates using
microtomography and multiple-point statistics. Water Resources Research. 43,
W12S02, doi:10.1029/2006WR005680.
Owen, S.J. 1998. A survey of unstructured mesh generation technology. In: Proceedings
of the 7th International Meshing Roundtable, Sandia National Laboratories, USA.
239-267.
Page, R.W. 1975. Geochronology of Late Tertiary and Quaternary mineralised intrusive
porphyries of the Star Mountains of Papua New Guinea and Irian Jaya. Economic
Geology. 70:928-936.
Painter, S.W. 2011. Development of discrete fracture network modeling capability.
Presentation to the Nuclear Waste Technical Review Board, Salt Lake City USA.
Painter, S.L., C.W. Gable, N. Makedonska, J. Hyman, T.L. Hsieh, Q. Bui, and H.H. Liu.
2012. Fluid flow model development for representative geological media. Fuel
Cycle Research & Development Report for the Department of Energy Used Fuel
Disposition Campaign, USA. 48 p.
Painter, S.L., C.W. Gable, N. Makedonska, J. Hyman, S. Karra, S. Chu, H.H. Liu, J.
Birkholzer, Y. Wang, W.P. Gardner, and G.Y. Kim. 2014. Modeling fluid flow in
natural systems: model validation and demonstration. Fuel Cycle Research &
Development Report for the Department of Energy Used Fuel Disposition
Campaign, USA. 85 p.
Palmström, A. 1995. A rock mass characterization system for rock engineering
purposes. Ph.D. Thesis. Oslo University, Oslo, Norway. 400 p.
Park, H.J., J.G. Um, and I. Woo. 2008. The evaluation of failure probability for rock slope
based on fuzzy set theory and Monte Carlo simulation. In: Proceedings of the
Tenth International Symposium on Landslides and Engineered Slopes (Volume
2). 7 p.
Park, H.J., J.G. Um, I. Woo, and J.W. Kim. 2012. Application of fuzzy set theory to
evaluate the probability of failure in rock slopes. Engineering Geology. 125:92-101.
Parry, G.W. 1996. The characterization of uncertainty in probabilistic risk assessment of
complex systems. Reliability Engineering and Systems Safety. 54:119-126.
Pascoe, D.M., R.J. Pine, and J.H. Howe. 2014. An extension of probabilistic slope
stability analysis of china clay deposits using geostatistics. In: J.G. Maund and M.
Eddleston (eds) Geohazards in Engineering Geology. Geological Society,
London, Engineering Geology Special Publications. 15:193-197.
Sánchez-Vila, X., J. Carrera, and J.P. Girardi. 1996. Scale effects in transmissivity.
Journal of Hydrology. 183:1-22.
Sarin, R.K. 1978. Elicitation of subjective probabilities in the context of decision-making.
Decision Sciences. 9:37-48.
Schweiger, H.F., and G.M. Peschl. 2005. Reliability analysis in geotechnics with the
random set finite element method. Computers and Geotechnics. 32:422-435.
Segall, P., and D.D. Pollard. 1983. Joint formation in granitic rock of the Sierra Nevada.
Geological Society of America Bulletin. 94:563-575.
Shafer, G. 1976. A mathematical theory of evidence. Princeton University Press,
Princeton, USA. 314 p.
Shafer, G. 1986. The combination of evidence. International Journal of Intelligent
Systems. 1:155-179.
Shafer, G. 1990. Perspectives on the theory and practice of belief functions.
International Journal of Approximate Reasoning. 3:1-40.
Shafer, G. 1992. The Dempster-Shafer theory. In: S.C. Shapiro (ed.), Encyclopedia of
Artificial Intelligence, 2nd ed. John Wiley & Sons, New York, USA. 330-331.
Shair, A. 1981. The effect of two sets of joints on rock slope reliability. M.Sc. Thesis,
Massachusetts Institute of Technology, Cambridge, USA. 308 p.
Shewchuk, J.R. 2012. Unstructured Mesh Generation. In: Combinatorial Scientific
Computing, eds. U. Naumann, and O. Schenk. 1st ed. Boca Raton, USA: CRC
Press. 259-299 p.
Shin, W.S. 2010. Excavation disturbed zone in Lac du Bonnet granite. Ph.D. Thesis,
University of Alberta, Edmonton, Canada. 247 p.
Singh, R., and G. Sun. 1990. A fracture mechanics approach to rock slope stability. In:
Proceedings of the 14th World Mining Congress, Peking, China. 543-548.
Smets, P. 1988. Belief functions. In: P. Smets, A. Mamdani, D. Dubois, and H. Prade
(eds) Non Standard Logics for Automated Reasoning, Academic Press, London,
UK. 253-286.
Smets, P. 1990. The combination of evidence in the transferable belief model. IEEE
Transactions on Pattern Analysis and Machine Intelligence. 12:447-458.
Smets, P. 1991. Probability of provability and belief functions. Logique et Analyse.
133-134:174-195.
Smets, P., and R. Kennes. 1994. The transferable belief model. Artificial Intelligence.
66:191-234.
Smets, P. 1998. Theories of uncertainty. In: E.H. Ruspini, P.P. Bonissone and W. Pedrycz
(eds) Handbook of fuzzy computation. Institute of Physics Publications. 14 p.
Smith, C.A.B. 1961. Consistency in statistical inference and decision. Journal of the
Royal Statistical Society. B23:1-37.
Smith, C.A.B. 1967. Personal probability and statistical analysis. Journal of the Royal
Statistical Society. A128:469-499.
Snow, D.T. 1965. A parallel plate model of fractured permeable media. Ph.D.
Dissertation, University of California, Berkeley, USA. 330 p.
Sornette, A., P. Davy, and D. Sornette. 1993. Fault growth in brittle-ductile experiments
and the mechanics of continental collisions. Journal of Geophysical Research.
B7:12111-12139.
Sonmez, H., C. Gokceoglu, and R. Ulusay. 2003. An application of fuzzy sets to the
geological strength index (GSI) system used in rock engineering. Engineering
Applications of Artificial Intelligence. 16:251-269.
Srivastava, R.M., and H.M. Parker. 1989. Robust measures of spatial continuity. In: M.
Armstrong (eds) Geostatistics, Kluwer Academic Publishers, Alphen aan den
Rijn, Netherlands. 1:295-308.
Srivastava, A. 2012. Spatial variability modelling of geotechnical parameters and stability
of highly weathered rock slope. Indian Geotechnical Journal, 42:179-185.
SRK. 2012. Comparison of Laubscher and Bieniawski RMR values. Report Prepared for
OTML. Perth, Australia. 7 p.
SRK. 2013a. West wall depressurisation modeling preliminary results (MLE 2013).
Report Prepared for OTML. Vancouver, Canada. 32 p.
SRK. 2013b. Addendum to the west wall depressurisation modelling report (MLE 2013)
dated May 2013. Report Prepared for OTML. Vancouver, Canada. 19 p.
SRK. 2013c. MLE geotechnical studies for the proposed Gold Coast underground mine
at Ok Tedi. Report Prepared for OTML. Perth, Australia. 155 p.
Staub, I., A. Fredriksson, and N. Outters. 2002. Strategy for a rock mechanics site
descriptive model: development and testing of the theoretical approach.
Stockholm: Svensk Kärnbränslehantering AB. 236 p.
Stead, D., E. Eberhardt, and J.S. Coggan. 2006. Developments in the characterization of
complex rock slope deformation and failure using numerical modelling
techniques. Engineering Geology. 83:217-235.
Stead, D., and E. Eberhardt. 2013. Understanding the mechanics of large landslides.
Italian Journal of Engineering Geology and Environment Book Series. 6:85-112.
DOI: 10.4408/IJEGE.2013-06.B-07.
Steffen, O.K.H. 1997. Planning of open pit mines on a risk basis. Journal of South
African Institute of Mining and Metallurgy. 97:47-56.
Steffen, O.K.H., and L.F. Contreras. 2007. Mine planning-its relationship to risk
management. In: Proceedings of the International Symposium on Stability of
Rock Slopes in Open Pit Mining and Civil Engineering, Perth, Australia. 17 p.
Steffen, O.K.H., L.F. Contreras, P.J. Terbrugge, and J. Venter. 2008. A risk evaluation
approach for pit slope design. In: Proceedings of the 42nd US Rock Mechanics
Symposium and 2nd US-Canada Rock Mechanics Symposium, San Francisco,
USA. 18 p.
Tang, B. 1993. Orthogonal array-based Latin hypercubes. Journal of the American
Statistical Association. 88:1392-1397.
Tang, C., and J.A. Hudson. 2010. Rock Failure Mechanisms: Illustrated and Explained.
CRC Press, Taylor & Francis Group, Boca Raton, USA. 364 p.
Tapia, A., L.F. Contreras, M.G. Jefferies, and O. Steffen. 2007. Risk evaluation of slope
failure at the Chuquicamata mine. In: Y. Potvin (eds) Proceedings of the
International Symposium on Rock Slope Stability in Open Pit Mining and Civil
Engineering. Slope Stability 2007, Perth, Australia. 477-495.
Terbrugge, P.J., J. Wesseloo, J. Venter, and O.K.H. Steffen. 2006. A risk consequence
approach to open pit slope design. The Journal of the South African Institute of
Mining and Metallurgy. 106:503-511.
Tintner, G. 1941. The theory of choice under subjective risk and uncertainty.
Econometrica. 9:298-304.
Tuckey, Z., D. Stead, M. Havaej, F. Gao, and M. Sturzenegger. 2012. Towards an
integrated field mapping-numerical modelling approach for characterising
discontinuity persistence and intact rock bridges in large open pits. In:
Proceedings of the Canadian Geotechnical Society (Geo Manitoba), Winnipeg,
Canada. 15 p.
Tuckey, Z. 2012. An integrated field mapping-numerical modelling approach to
characterising discontinuity persistence and intact rock bridges in large open pit
slopes. M.Sc. Thesis, Simon Fraser University, Burnaby, Canada. 440 p.
Vann, J., O. Bertoli, and S. Jackson. 2002. An overview of geostatistical simulation for
quantifying risk. In: Proceedings of Geostatistical Association of Australasia
Symposium Quantifying Risk and Error, Perth, Australia. 12 p.
Veneziano, D. 1978. Probabilistic model of joints in rock. Unpublished manuscript,
Massachusetts Institute of Technology, Cambridge, USA.
Vieira, S.R., T.L. Hatfield, D.R. Nielsen, and J.W. Biggar. 1983. Geostatistical theory and
application to variability of some agronomical properties. Hilgardia. 51:1-75.
Vieira, S.R., J. Millete, G.C. Topp, and W.D. Reynolds. 2002. Handbook for geostatistical
analysis of variability in soil and climate data. In: V.H. Alvarezz, C.R. Schaefer,
N.F. Barros, J.W.V. Mello, and L.M. Costa (eds) Tópicos em Ciência do Solo.
Viçosa: Sociedade Brasileira de Ciência do Solo. 2:1-45.
Vieira, S.R., J.R.P.D. Carvalho, M.B. Ceddia, and A.P. González. 2010. Detrending
non-stationary data for geostatistical applications. Bragantia. 69:01-08.
Wackernagel, H. 2003. Multivariate geostatistics. Springer, Berlin, Germany. 388 p.
Walley, P. 1991. Statistical reasoning with imprecise probabilities. Chapman and Hall,
London, United Kingdom. 720 p.
Wang, P. 2001. Confidence as higher-order uncertainty. In: The 2nd International
Symposium on Imprecise Probabilities and Their Applications, Ithaca, USA. 10 p.
Weichselberger, K. 2000. The theory of interval-probability as a unifying concept for
uncertainty. International Journal of Approximate Reasoning. 24:149-170.
Wen, R., and R. Sinding-Larsen. 1997. Stochastic modelling and simulation of small
faults by marked point processes and kriging. In: E.Y. Baafi, and N.A. Schofield
(eds) Geostatistics Wollongong '96. Dordrecht: Kluwer Academic. 398-414.
Wiles, T.D. 2006. Reliability of numerical modelling predictions. International Journal of
Rock Mechanics and Mining Sciences & Geomechanics Abstracts. 43:454-472.
Wong, F.S. 1985. First-order, second-moment methods. Computers and Structures.
20:779-791.
Wyllie, D.C., and C.W. Mah. 2004. Rock slope engineering: Civil and mining (4th edition).
Taylor & Francis, New York, USA. 431 p.
Xu, C., and P. Dowd. 2010. A new computer code for discrete fracture network
modelling. Computers & Geosciences. 36:292-301.
Xu, W., T.T. Tran, R.M. Srivastava, and A.G. Journel. 1992. Integrating seismic data in
reservoir modelling: the collocated cokriging alternative. In: Proceedings of the
67th Annual Technical Conference and Exhibition of the Society of Petroleum
Engineers, Washington, DC, USA. 833-842.
Yang, J., H.Z. Huang, L.P. He, S.P. Zhu, and D. Wen. 2011. Risk evaluation in failure
mode and effects analysis of aircraft turbine rotor blades using Dempster-Shafer
evidence theory under uncertainty. Engineering Failure Analysis. 18:2084-2092.
Yoe, C. 2011. Primer on risk analysis. CRC Press. 237 p.
Yoon, J. 2007. Application of experimental design and optimization to PFC model
calibration in uniaxial compression simulation. International Journal of Rock
Mechanics and Mining Sciences. 44:871-889.
Youn, D.D., K.K. Choi, and L. Du. 2007. Integration of possibility-based optimization and
robust design for epistemic uncertainty. Journal of Mechanical Design. 129:876-882.
Yule, G.U., and M.G. Kendall. 1950. An introduction to the theory of statistics. Hafner
Publishing Company, New York, USA. 701 p.
Zadeh, L.A. 1965. Fuzzy sets. Information and Control. 8:338-353.
Zadeh, L.A. 1968. Probability measures of fuzzy events. Journal of Mathematical
Analysis and Applications. 23:421-427.
Zadeh, L.A. 1978. Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets and
Systems. 1:3-28.
Zadeh, L.A. 1984. Review of books: a mathematical theory of evidence. The AI
Magazine. 5:81-83.
Zadeh, L.A. 2002. Toward a perception-based theory of probabilistic reasoning with
imprecise probabilities. Journal of Statistical Planning and Inference. 105:233-264.
Zadeh, L.A. 2005. Toward a generalized theory of uncertainty (GTU): an outline.
Information Sciences. 172:1-40.
Zhang, H., R.L. Mullen, and R.L. Muhanna. 2010. Interval Monte Carlo methods for
structural reliability. Structural Safety. 32:183-190.
Zhang, Q., H. Zhu, L. Zhang, and X. Ding. 2011. Study of scale effect on intact rock
strength using particle flow modeling. International Journal of Rock Mechanics
and Mining Sciences & Geomechanics Abstracts. 48:1320-1328.
Zhang, T.J., W.G. Cao, and M.H. Zhao. 2009. Application of fuzzy sets to geological
strength index (GSI) system used in rock slope. Soils and Rock Instrumentation,
Behavior, and Modeling. 30-35 p. DOI 10.1061/41046(353)5.
Zhang, Y. 2014. Modelling hard rock pillars using a Synthetic Rock Mass approach. Ph.D.
Thesis, Simon Fraser University, Vancouver, Canada. 247 p.
Appendices
Appendix A.
Hoek-Brown Criterion
Hoek and Brown (1980a, 1980b) introduced their failure criterion in the early 1980s
based on empirical results derived from brittle failure tests of intact rock by Hoek (1968) and
jointed rock mass modelling results by Brown (1970). The criterion utilized a method of reducing intact
rock strengths by a given factor based on the fracture characteristics of the rock mass. The
properties of this reduction factor were originally based on the Rock Mass Rating (RMR) system
devised by Bieniawski (1976) and later revised to use the Geological Strength Index (GSI)
introduced by Hoek (1994) and Hoek et al. (1995).
The criterion assumes that intact rock particles must have sufficient degrees of freedom
to allow for sliding and/or rotation, without significant amounts of inter-particle locking (Hoek
2012). For example, a rock mass composed of angular blocks, with rough discontinuity surfaces
will exhibit a larger degree of inter-particle locking and hence stronger rock mass characteristics,
than one composed of smooth-walled, rounded particles. This has led to some criticism as
researchers have noted the over-reliance on mode II type shear failures in the Hoek-Brown
system (Wyllie and Mah 2004). Although these limitations exist, the Hoek-Brown failure
criterion has been widely accepted within the geotechnical community owing to its ease of use
and lack of suitable alternatives (Hoek et al. 2002).
In comparison to earlier linear failure criteria, such as the Mohr-Coulomb model, the
relationship between the major and minor principal effective stresses within the Hoek-Brown
system is assumed to be stress dependent. This assumption results in the failure criterion being
described by the non-linear function (Hoek et al. 2002):

\sigma_1' = \sigma_3' + \sigma_{ci} \left( m_b \frac{\sigma_3'}{\sigma_{ci}} + s \right)^a     Equation A.1

where \sigma_1' and \sigma_3' are the major and minor effective principal stresses at failure, \sigma_{ci} is the uniaxial
compressive strength of intact rock samples, m_b is the modified material constant, and s and a
are rock mass constants. The modified material constant (m_b) is estimated from the unmodified
material constant (m_i) by the equation:

m_b = m_i \exp\left( \frac{GSI - 100}{28 - 14D} \right)     Equation A.2

where GSI is the Geological Strength Index, defined by the block size and fracture condition
(Hoek 1994; Hoek et al. 1995; Hoek et al. 1998; Marinos et al. 2005), and D is a disturbance
factor which depends on the degree of rock mass disturbance from blasting and stress relaxation
(Hoek et al. 2002; Hoek 2012). Estimation of the rock mass constants (s, a) is given by the
functions:

s = \exp\left( \frac{GSI - 100}{9 - 3D} \right)     Equation A.3

a = \frac{1}{2} + \frac{1}{6} \left( e^{-GSI/15} - e^{-20/3} \right)     Equation A.4
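For illustration, Equations A.1 to A.4 can be combined into a single strength function. The
sketch below is a minimal Python implementation; the function name and argument order are
ours, not part of the original criterion:

```python
import math

def hoek_brown_sigma1(sig3, sig_ci, gsi, mi, D=0.0):
    """Major principal stress at failure from the Hoek-Brown criterion.

    sig3   : minor effective principal stress (MPa)
    sig_ci : uniaxial compressive strength of intact rock (MPa)
    gsi    : Geological Strength Index
    mi     : intact (unmodified) material constant
    D      : disturbance factor (0 = undisturbed, 1 = highly disturbed)
    """
    mb = mi * math.exp((gsi - 100.0) / (28.0 - 14.0 * D))            # Equation A.2
    s = math.exp((gsi - 100.0) / (9.0 - 3.0 * D))                    # Equation A.3
    a = 0.5 + (math.exp(-gsi / 15.0) - math.exp(-20.0 / 3.0)) / 6.0  # Equation A.4
    return sig3 + sig_ci * (mb * sig3 / sig_ci + s) ** a             # Equation A.1
```

For an undisturbed intact sample (GSI = 100, D = 0) the constants reduce to m_b = m_i, s = 1 and
a = 0.5, so the unconfined strength returns \sigma_{ci} as expected.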
One of the main problems in using the Hoek-Brown criterion is deciding when it is
applicable. The criterion was originally designed to describe the failure of continuum-type
material and makes a number of assumptions about the rock mass (Hoek 1983):

- Rock mass failure is controlled by the translation and rotation of individual blocks
- Failure of intact rock does not play a significant role in the overall rock mass failure
- The jointing pattern is sufficiently chaotic to assume isotropic behaviour

Due to these assumptions, the criterion is not applicable when failure occurs along a dominant
discontinuity set(s) (Brown 2008). However, when the size of the discontinuities becomes
sufficiently small compared to the sample size, and no dominant discontinuity orientations exist,
the criterion can be applied by assuming that the rock mass acts as a continuum (Hoek et al.
2002). As a general rule of thumb, the Hoek-Brown failure criterion is relatively accurate when
applied to rock masses with GSI values between 30 and 70, which coincides with the range used
in its development (Carter et al. 2007). However, the system breaks down in very weak and very
strong rocks, as rock mass failure ceases to be controlled by the translation and rotation of
individual blocks. Under these conditions, modifications to the traditional failure criterion are
required as the failure behaviour transitions from inter-block controlled to intact-rock controlled.
At the low end of the rock strength scale (UCS_{ir} < 0.5 MPa), material typically behaves
as a soil-like substance, whose behaviour can be defined by the Mohr-Coulomb strength criterion
(Carter et al. 2007; Carvalho et al. 2007). It is only after the UCS exceeds 10-15 MPa, coinciding
with a complete transition to inter-block controlled failure, that the system behaves as a
Hoek-Brown material (Hoek et al. 2002). Between these two extremes a transition zone exists,
where the material transitions from a more linear, soil-like behaviour to a non-linear rock mass
behaviour (Brown 2008). Carvalho et al. (2008) defined the transition function f_T(\sigma_{ci})
between these two extremes such that:

f_T(\sigma_{ci}) = 1 \quad \text{for } \sigma_{ci} \le 0.5 \text{ MPa}     Equation A.5

with f_T(\sigma_{ci}) decaying to zero as \sigma_{ci} approaches the 10-15 MPa threshold. This is then
incorporated into the Hoek-Brown criterion by modifying the a, s and m_b parameters by the
transition function relationship, such that:

m_T = m_b + (m_i - m_b) f_T(\sigma_{ci})     Equation A.6

s_T = s + (1 - s) f_T(\sigma_{ci})     Equation A.7

a_T = a + (1 - a) f_T(\sigma_{ci})     Equation A.8
At the upper end of the rock mass competency scale, behaviour transitions from
inter-block to intact-rock controlled failure (Carter et al. 2008). The behavioural change
coincides with the onset of crack coalescence in low m_i rocks, and with the crack initiation
strength in high m_i rocks (Carvalho et al. 2008). In the latter, the effects of moderate jointing
are suppressed, as failure is dominated by mode I crack propagation when the in-situ stresses are
below the spalling limit. At these low confining conditions, a modification to the Hoek-Brown
criterion is required. Diederichs (2007) proposed the following modification to the criterion for
spall-prone rocks:
m_{peak} = s_{peak} \left( \frac{\sigma_{ci}}{|\sigma_t|} \right)     Equation A.9

s_{peak} = \left( \frac{\sigma_{CI}}{\sigma_{ci}} \right)^{1/a_{peak}}     Equation A.10

a_{peak} = 0.25     Equation A.11

a_{resid} = 0.75     Equation A.12

where m_{peak} is the modified Hoek-Brown material constant, s_{peak} and a_{peak} are the
modified Hoek-Brown rock mass constants, \sigma_{CI} is the crack initiation stress and \sigma_t is
the tensile strength. The use of modified values for both the peak and residual parameters
represents the spalling limit, and is based on a recommended range of \sigma_1/\sigma_3 of 7 to 10
when using an s_{resid} of 0 and an m_{resid} of 6 to 8. The transition between spalling and shear
behaviour is then modelled using a transition function, f_s, which weights the spalling and shear
strength envelopes (Equations A.13 and A.14).
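A numerical sketch of the Diederichs (2007) peak parameters (Equations A.9 to A.11); the
function and argument names below are illustrative only:

```python
def disl_peak_parameters(sig_ci, sig_CI, sig_t, a_peak=0.25):
    """Peak spalling parameters after Diederichs (2007).

    sig_ci : uniaxial compressive strength (MPa)
    sig_CI : crack initiation stress (MPa)
    sig_t  : tensile strength (MPa; the magnitude is used)
    """
    s_peak = (sig_CI / sig_ci) ** (1.0 / a_peak)  # Equation A.10
    m_peak = s_peak * (sig_ci / abs(sig_t))       # Equation A.9
    return m_peak, s_peak, a_peak
```

With \sigma_{CI}/\sigma_{ci} = 0.5 and \sigma_{ci}/|\sigma_t| = 20, for example, this yields
s_{peak} = 0.0625 and m_{peak} = 1.25.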
Appendix B.
Correlograms
Experimental correlogram structures for both the GSI and UCS were calculated using a
self-written C++ program. The experimental correlograms were then fit using least squares
regression within Microsoft Excel. Models were fit such that the dispersion variance within the
simulation zone was equal to 1.0, using the method proposed by Journel and Huijbregts (1978).
The GSI correlograms were fit using two nested exponential structures with zero nugget effect,
while the UCS correlograms were fit with a single exponential model and a relatively high
nugget effect (Table B.1).
Table B.1      Constraints for the normal score correlograms used in the FLAC simulation.

                       GSI                                           UCS (MPa)
Geotechnical       Exponential Model I   Exponential Model II   Nugget   Exponential Model
Unit               Sill    Range (m)     Sill    Range (m)               Sill    Range (m)
Monzonite Porphyry 0.61    41            0.44    489            0.30     0.72    128
Monzodiorite       0.49    49            0.57    434            0.38     0.66    214
Endoskarn          0.69    38            0.32    149            0.47     0.55    97
Skarn              0.88    52            0.14    335            0.74     0.26    81
Darai Upper        1.00    24            0.00    381            0.00     1.01    37
Darai Lower        0.81    43            0.25    1000           0.54     0.50    369
Ieru Upper         0.76    43            0.29    630            0.21     0.82    143
Ieru Lower         0.86    88            0.18    614            0.25     0.81    318
Pnyang             1.00    27            0.00    381            0.21     0.82    143
Thrust Faults      0.92    40            0.10    513            0.27     0.75    107
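The fitted models in Table B.1 can be evaluated directly. The sketch below assumes the common
"practical range" convention for the exponential structure, 1 - \rho(h) = nugget + \sum c_i (1 - e^{-3h/a_i});
the helper name is ours:

```python
import math

def one_minus_rho(h, structures, nugget=0.0):
    """Evaluate 1 - rho(h) for a nested exponential correlogram model.

    structures : list of (sill, practical_range_m) tuples
    nugget     : nugget effect (0.0 for the GSI models)
    """
    val = nugget
    for sill, rng in structures:
        val += sill * (1.0 - math.exp(-3.0 * h / rng))
    return val

# Monzonite Porphyry GSI model from Table B.1: two nested structures, zero nugget
gsi_model = [(0.61, 41.0), (0.44, 489.0)]
```

At zero lag the model returns the nugget, and at large lags it approaches the total sill (1.05 for
this unit), consistent with the dispersion-variance constraint described above.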
[Figures B.1 to B.20: fitted correlogram models for each geotechnical unit, plotting 1 - ρ(h)
against lag distance (1 to 1,000 m, logarithmic scale) for the GSI and UCS normal scores.]
Appendix C.
Sequential Gaussian Simulation Code
Spatial heterogeneity was simulated in Chapter 3 using the sequential Gaussian
simulation algorithm (Journel and Huijbregts 1978; Dowd 1992; Goovaerts 1997; Nowak and
Verly 2007). The algorithm was directly incorporated into the Itasca code using the integrated
FISH scripting language. The following provides an overview of the basic code used to conduct
the simulation.
General Routines

def RandVal
    ; draw a random value from a normal distribution
    ; (within three standard deviations of the mean)
    _rnd = grand
end

def GenVariables
    ; general array variables required to conduct the SGS analysis
    array _Near(3, _nMax)
end
Stage 1 - Scramble:

The first stage in the algorithm is to randomly scramble the _sArray; this is done using the
Fisher-Yates (1948) shuffle.

def Scramble
    loop hi (1, _cells)
        ; select a random location in the not-yet-shuffled portion of the array
        $rndLoc = int( urand * (_cells - hi + 1) ) + hi
        ; if the location is zero it becomes the highest value in the base array
        ; (0 is not a valid array index)
        if $rndLoc = 0 then
            $rndLoc = _cells
        endif
        ; swap rows hi and $rndLoc of the search array
        loop cc (1, 3)
            $tmp = _sArray(hi, cc)
            _sArray(hi, cc) = _sArray($rndLoc, cc)
            _sArray($rndLoc, cc) = $tmp
        endloop
    endloop
end
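The shuffle logic can be sketched in a few lines of Python (a minimal illustration of the
Fisher-Yates algorithm, with `random.randrange` standing in for FISH's `urand`):

```python
import random

def fisher_yates(items):
    """In-place Fisher-Yates shuffle: every permutation is equally likely."""
    n = len(items)
    for i in range(n - 1):
        j = random.randrange(i, n)  # pick from the unshuffled tail [i, n)
        items[i], items[j] = items[j], items[i]
    return items
```

The key property is that each position i draws uniformly from the elements not yet fixed, which
is exactly what the `$rndLoc` expression in the FISH routine computes.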
def SGS
    ; determine initial seeds; this can be removed if at least
    ; _nMax nodes are known prior to the simulation
    loop hi (1, _nMax)
        RandVal
        _sArray(hi, 3) = _rnd
    endloop
    ; determine the nearest neighbours from cells which already have a generated value
    loop hii (1, hi - 1)
        ; retrieve the i, j position of the cell being sampled
        $iAlt = int(_sArray(hii, 1))
        $jAlt = int(_sArray(hii, 2))
        ; store the search array location and sampled cell GSI value for later
        $loc1 = hii
        $var1 = _sArray(hii, 3)
        ; calculate the Euclidean distance between the sampled cell and the cell of interest
        $y = y($iPos, $jPos) - y($iAlt, $jAlt)
        $x = x($iPos, $jPos) - x($iAlt, $jAlt)
        $dist1 = sqrt( ($y)^2 + ($x)^2 )
        ; loop through the nearest neighbours and check whether the distance is less
        ; than any current neighbour; if it is, insert the current value and shift the
        ; remaining neighbours down the list
        loop nx (1, _nMax)
            if $dist1 < _Near(2, nx) then
                $loc2 = _Near(1, nx)
                $dist2 = _Near(2, nx)
                $var2 = _Near(3, nx)
                _Near(1, nx) = $loc1
                _Near(2, nx) = $dist1
                _Near(3, nx) = $var1
                $loc1 = $loc2
                $dist1 = $dist2
                $var1 = $var2
            endif
        endloop
    endloop
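The neighbour-insertion loop maintains a fixed-length list of the _nMax closest previously
simulated cells, sorted by increasing distance. A Python equivalent of the same bookkeeping
(names are ours):

```python
def insert_neighbour(near, dist, value, n_max):
    """Insert (dist, value) into a distance-sorted list of length n_max,
    dropping the farthest entry; mirrors the FISH _Near array logic.
    `near` should be pre-filled with (float('inf'), None) placeholders."""
    for i in range(n_max):
        if dist < near[i][0]:
            near.insert(i, (dist, value))  # shift farther entries down
            near.pop()                     # drop the (n_max + 1)-th entry
            break
    return near
```

Pre-filling with infinite distances means the first _nMax candidates are always accepted, just as
the FISH version overwrites its initial _Near entries.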
    ; calculate the kriged estimate as the weighted sum of the nearest-neighbour values
    $kR = 0.0 ; reset the kriging estimate
    loop nx (1, _nMax)
        $kR = $kR + _Near(3, nx) * _kWeight(nx)
    endloop
endloop
end
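The kriged estimate computed above is the weighted sum of the neighbour values; in sequential
Gaussian simulation the simulated value is then drawn by adding a Gaussian residual with the
kriging variance. A compact sketch of that final step (simple kriging with zero mean is assumed;
names are ours):

```python
import math
import random

def sgs_draw(neigh_vals, weights, krig_var):
    """One SGS draw: the kriged mean of the neighbours plus a Gaussian
    residual scaled by the kriging standard deviation."""
    estimate = sum(v * w for v, w in zip(neigh_vals, weights))
    return estimate + random.gauss(0.0, math.sqrt(krig_var))
```

Drawing each node this way, in the scrambled visiting order, is what reproduces both the target
histogram and the correlogram model across the simulated grid.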
Appendix D.
Verification of Sequential Gaussian Simulation Code
The Sequential Gaussian Simulation (SGS) code was programmed within the Itasca (2014)
software FLAC, using the integrated FISH language. A full description of the algorithm is
provided in Section 3.4.3, with the FISH code provided in Appendix C. The following provides a
series of verification plots, confirming the accurate reproduction of the correlogram and
cumulative density plots for the Ok Tedi dataset. The results represent a single model
realization generated using Monte Carlo simulation techniques; as a result, some natural drift
from the target statistics exists. This drift averages out across realizations, resulting in good
overall agreement between the model simulations and the actual data.
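The verification amounts to recomputing the experimental correlogram from the simulated
values and overlaying it on the fitted model. A minimal sketch of the estimator for a regularly
spaced 1-D sequence (the function name is ours):

```python
def experimental_correlogram(values, lag):
    """Estimate rho(h) at an integer lag using head/tail means and
    standard deviations (the standardized non-ergodic estimator)."""
    n = len(values) - lag
    head, tail = values[:n], values[lag:]
    mh = sum(head) / n
    mt = sum(tail) / n
    cov = sum((a - mh) * (b - mt) for a, b in zip(head, tail)) / n
    sh = (sum((a - mh) ** 2 for a in head) / n) ** 0.5
    st = (sum((b - mt) ** 2 for b in tail) / n) ** 0.5
    return cov / (sh * st)
```

Plotting 1 - \rho(h) from this estimator against the fitted model at each lag gives the comparison
shown in the figures below.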
[Figures D.1 to D.20: verification plots comparing each fitted correlogram model with the
FLAC-simulated correlogram, plotting 1 - ρ(h) against lag distance (1 to 1,000 m,
logarithmic scale).]
100%
90%
80%
Cumula!ve Percentage
70%
60%
50%
40%
30%
20%
10%
0%
0
10
20
30
40
50
60
70
80
90
100
Figure D.21
Modelled
FLAC Results
207
100%
90%
80%
Cumula!ve Percentage
70%
60%
50%
40%
30%
20%
10%
0%
0
10
20
30
40
50
60
70
80
90
100
Figure D.22
Modelled
FLAC Results
100%
90%
80%
Cumula!ve Percentage
70%
60%
50%
40%
30%
20%
10%
0%
0
10
20
30
40
50
60
70
80
90
100
Figure D.23
Modelled
FLAC Results
208
100%
90%
80%
Cumula!ve Percentage
70%
60%
50%
40%
30%
20%
10%
0%
0
10
20
30
40
50
60
70
80
90
100
Figure D.24
Modelled
FLAC Results
100%
90%
80%
Cumula!ve Percentage
70%
60%
50%
40%
30%
20%
10%
0%
0
10
20
30
40
50
60
70
80
90
100
Figure D.25
Modelled
FLAC Results
209
100%
90%
80%
Cumula!ve Percentage
70%
60%
50%
40%
30%
20%
10%
0%
0
10
20
30
40
50
60
70
80
90
100
Figure D.26
Modelled
FLAC Results
100%
90%
80%
Cumula!ve Percentage
70%
60%
50%
40%
30%
20%
10%
0%
0
10
20
30
40
50
60
70
80
90
100
Figure D.27
Modelled
FLAC Results
210
100%
90%
80%
Cumula!ve Percentage
70%
60%
50%
40%
30%
20%
10%
0%
0
10
20
30
40
50
60
70
80
90
100
Figure D.28
Modelled
FLAC Results
100%
90%
80%
Cumula!ve Percentage
70%
60%
50%
40%
30%
20%
10%
0%
0
10
20
30
40
50
60
70
80
90
Figure D.29
Modelled
FLAC Results
211
100
100%
90%
80%
Cumula!ve Percentage
70%
60%
50%
40%
30%
20%
10%
0%
0
10
20
30
40
50
60
70
80
90
100
Figure D.30
Modelled
FLAC Results
100%
90%
80%
Cumula!ve Percentage
70%
60%
50%
40%
30%
20%
10%
0%
0
50
100
150
200
250
300
350
Figure D.31
Modelled
FLAC Results
212
100%
90%
80%
Cumula!ve Percentage
70%
60%
50%
40%
30%
20%
10%
0%
0
50
100
150
200
250
300
350
Figure D.32
Modelled
FLAC Results
100%
90%
80%
Cumula!ve Percentage
70%
60%
50%
40%
30%
20%
10%
0%
0
50
100
150
200
250
300
350
Figure D.33
Modelled
FLAC Results
213
100%
90%
80%
Cumula!ve Percentage
70%
60%
50%
40%
30%
20%
10%
0%
0
50
100
150
200
250
300
350
Figure D.34
Modelled
FLAC Results
100%
90%
80%
Cumula!ve Percentage
70%
60%
50%
40%
30%
20%
10%
0%
0
50
100
150
200
250
300
350
Figure D.35
Modelled
FLAC Results
214
100%
90%
80%
Cumula!ve Percentage
70%
60%
50%
40%
30%
20%
10%
0%
0
50
100
150
200
250
300
350
Figure D.36
Modelled
FLAC Results
100%
90%
80%
Cumula!ve Percentage
70%
60%
50%
40%
30%
20%
10%
0%
0
50
100
150
200
250
300
350
Figure D.37
Modelled
FLAC Results
215
100%
90%
80%
Cumula!ve Percentage
70%
60%
50%
40%
30%
20%
10%
0%
0
50
100
150
200
250
300
350
Figure D.38
Modelled
FLAC Results
100%
90%
80%
Cumula!ve Percentage
70%
60%
50%
40%
30%
20%
10%
50
100
150
200
250
300
350
Figure D.39
Modelled
FLAC Results
216
100%
90%
80%
Cumula!ve Percentage
70%
60%
50%
40%
30%
20%
10%
0%
0
50
100
150
200
250
300
350
Figure D.40
Modelled
FLAC Results
217
Appendix E.
Critical Failure Path Pseudo-Code
The following provides an overview of the algorithm used to identify critical failure paths
from the FLAC simulation results. The algorithm involves seven steps:
1. First, SSR values obtained from FLAC modelling are inverted (iSSR) to create a cost
matrix. This ensures that the largest shear strain rates correspond to the lowest costs.
2. Two nodal arrays are then constructed to denote the locations of the potential break-out
surface and tension crack. The break-out array is defined by boundary nodes along the
lower 90% of the slope face, as well as nodes along the toe of the slope, whereas the
tension array is defined by boundary nodes behind the slope face (Figure 3.10).
3. The first node in the break-out array is then designated as the break-out node.
4. Dijkstra's (1959) algorithm is then used to calculate the minimum cost path to get from
the break-out node to the closest tension array node. This is conducted using the
following steps:
a. First, a total cost matrix is constructed, which is the same size as the cost matrix.
Total cost values are initially designated as null.
b. Next, an unvisited array is constructed in which data is sorted by the total cost to
get to the 2D location from the starting point. Each node in the array is then
assigned a tentative total cost equal to infinity.
c. The designated starting node determined from the break-out array is set as the
current node. The total cost to get to the node is designated as zero.
d. For the current node, tentative costs are calculated for all unvisited neighbours
using the formula:
T_n = T_curr + c_n
Equation E.1
where T_n is the total cost to get to the unvisited neighbour via the current node,
T_curr is the total cost to get to the current node from the starting node, and
c_n is the cost to get from the current node to the neighbour node (obtained from the iSSR
matrix). The calculated total cost at the unvisited neighbour is then compared to
the currently assigned total cost, and the minimum value is stored within the
unvisited array.
e. Once all neighbour nodes have been considered, the current node is then
removed from the unvisited array and assigned to the 2D minimum total cost
matrix. Once a node has been visited it will never be checked again.
f. Step d is then repeated by designating the next lowest total cost node from the
unvisited array as the current node. This process is repeated until a node within
the tension array is defined as the current node. This node is then specified as
the tension crack.
5. Back-analysis of the 2D minimum total cost matrix can then be conducted to find the
minimum cost path from the tension crack to the break-out node. This involves the
following steps:
a. First, an empty minimum path array is constructed which will house the nodal
locations of the critical failure path.
b. Next, the tension crack is specified as the current node and its nodal location is
added to the minimum path array.
c. The neighbours of the current node are then examined, and the nodal location of
the neighbour with the minimum total cost (T_n) is added to the minimum path
array.
d. The minimum neighbour is then specified as the current node and Step c is
repeated, until the break-out node is encountered.
6. Steps 4 and 5 are then repeated, designating the next node in the break-out array as
the break-out node, until all nodes within the break-out array have been visited.
7. Designated minimum cost paths are then compared based on average shear strain rates,
with the lowest rate path determined to be the critical path.
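The steps above can be sketched in Python as follows. This is an illustrative re-implementation rather than the FISH code used in the thesis: the reciprocal form of the SSR inversion, the 4-connected neighbourhood, and all names are assumptions.

```python
import heapq
import numpy as np

def invert_ssr(ssr, eps=1e-12):
    """Step 1: invert SSR so the largest strain rates give the lowest cost
    (a reciprocal form is assumed here)."""
    return 1.0 / (np.asarray(ssr, float) + eps)

def min_cost_path(issr, start, targets):
    """Steps 4-5: Dijkstra search from a break-out node to the nearest
    tension-array node over a 2D cost grid, then back-trace the path.

    issr    : 2D array of inverted SSR values (cost to enter a cell)
    start   : (row, col) break-out node
    targets : set of (row, col) tension-array nodes
    """
    nrow, ncol = issr.shape
    total = np.full((nrow, ncol), np.inf)   # tentative total costs (Step 4b)
    prev = {}                               # back-pointers for Step 5
    total[start] = 0.0
    heap = [(0.0, start)]                   # unvisited set, ordered by cost
    visited = set()
    while heap:
        cost, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)                   # a visited node is never re-checked
        if node in targets:                 # tension crack reached (Step 4f)
            path = [node]
            while node != start:            # Step 5: back-trace via prev
                node = prev[node]
                path.append(node)
            return path[::-1]
        r, c = node
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nb = (r + dr, c + dc)
            if 0 <= nb[0] < nrow and 0 <= nb[1] < ncol and nb not in visited:
                t = cost + issr[nb]         # Equation E.1
                if t < total[nb]:
                    total[nb] = t
                    prev[nb] = node
                    heapq.heappush(heap, (t, nb))
    return None                             # no tension node reachable

# usage: toy 4x4 grid in which low costs trace the failure surface
grid = np.array([[9, 9, 9, 9],
                 [1, 1, 9, 9],
                 [9, 1, 1, 9],
                 [9, 9, 1, 1]], float)
path = min_cost_path(grid, start=(1, 0), targets={(3, 3)})
```

Looping `min_cost_path` over every node in the break-out array (Step 6) and ranking the resulting paths (Step 7) then identifies the critical failure path.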