
Reference Manual
The User's Guide and tutorials do not attempt to explain all features. Only those relevant to the particular workflow step are explained.
The Reference Manual is intended to give details of many features and functionalities of Leapfrog not covered in the tutorials.

Navigation: Reference Manual >

Add Interval Table


Once a collar and survey file have been imported, interval measurement tables can be added at any time. To do this, right-click on the
Drillhole-Data object in the project tree and select Add Interval Table from the menu:

This will open the Add Interval Tables dialog:

Note that adding a collar or a survey table is not allowed. Click the Add button to import an interval table.
The Import Table dialog will then appear. Proceed as described in the Importing Drillhole Data tutorial.

Navigation: Reference Manual >

Advanced Interpolation Settings


Leapfrog uses interpolation to determine the value of a continuous variable, such as grade, between the measured data samples. If the data
is both regularly and adequately sampled, you will find that the different interpolants produce similar results. In mining, however, data is rarely so abundant, and input from the geologist is required to ensure the interpolation produces geologically reasonable results. There are six choices that underpin how the interpolation is performed and, consequently, how the quantity of interest is estimated at points away from the data samples:
1. Accuracy
2. Variogram models
3. Modelling the underlying drift
4. Anisotropy
5. Data transformation
6. Nugget
One way Leapfrog differs from many direct methods is that rather than attempting to produce an exact interpolation, it produces an
interpolation that is accurate to a user-specified accuracy. Doing this enables Leapfrog to solve large problems quickly and efficiently.

Setting the Accuracy


Although there is a temptation to set the accuracy as low as possible, there is little point in specifying an accuracy significantly smaller than the errors in the measured data. For example, if grade values are specified to two decimal places, setting the accuracy to 0.001 is more than adequate. Smaller values will cause the interpolation procedure to run more slowly and degrade the interpolation result. For example, when recording to two decimal places the range 0.035 to 0.045 will be recorded as 0.04. There is little point in asking Leapfrog to match a value to plus or minus 0.000001 when intrinsically that value is only accurate to plus or minus 0.005.
Leapfrog estimates the accuracy from the data values by taking a fraction of the smallest difference between measured data values.
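
As an illustration only (the fraction Leapfrog actually uses is not stated here), a default accuracy could be derived from the data as in the following Python sketch:

def default_accuracy(values, fraction=0.01):
    # Illustrative only: take a fraction of the smallest difference between
    # distinct measured data values. The fraction 0.01 is an assumed example,
    # not Leapfrog's actual setting.
    distinct = sorted(set(values))
    smallest_gap = min(b - a for a, b in zip(distinct, distinct[1:]))
    return fraction * smallest_gap

print(default_accuracy([0.04, 0.07, 0.11, 0.11, 0.23]))   # roughly 0.0003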


Variogram Model

In Leapfrog the interpolated value at a point is the weighted sum of the data points added to a smooth estimate of the underlying distribution
of the data. This is equivalent to conventional Kriging. Leapfrog differs from most Kriging implementations in its choice of Variogram models.
One of the fundamental difficulties in interpolating data is the problem of determining a suitable range. A finite range means that any point in
space that is more than the range away from a data sample will have an interpolated value that is either zero or an estimate of the mean
value. Often this is an advantage, as it is intuitively reasonable to expect that an interpolation becomes less reliable further from the data.
However, often the range is not known a priori and the data sampling is highly irregular. In such a case, a basis function with an infinite
range can produce a better result. The linear variogram is an example of just such a model and as a consequence it is the default
interpolation method inside Leapfrog. A data set interpolated with a linear variogram is independent of axis units, and will produce identical
results if the data coordinates are given in meters or millimetres.
It is important to realise that even if a variogram has infinite range, the behaviour near data samples is determined substantially by the
values of that data and can be controlled using the nugget value. Beware that when using a linear variogram, artefacts may occur in parts of
the isosurfaces far away from data values. These can be removed either by choosing another variogram model or by clipping the
isosurfaces to a minimum distance from the data.
Appropriate choice of variogram model and associated parameter settings can be crucial for successful modelling. Therefore, before going into the various options in Leapfrog, here is a little background on variograms. The following variogram represents the variance (gamma, γ) of sample values vs. distance for the popular spherical basis function.

The "sill" defines the upper bound of the variance. At distances less than the "range", the variance shows a quasi-linear behaviour, and it stabilises at the sill beyond the "range". Roughly speaking, having a "sill" limits the influence of a value to within the specified "range".
The "nugget" (effect) is the expected variance when two different samples are very close. This is greater than or equal to zero and less than
the sill. If samples taken at two very close locations are very different, the nugget becomes a large positive value. When the nugget is non-
zero, the variogram is discontinuous at the origin. The nugget effect implies that values have a high fluctuation over very short distances.
Leapfrog provides 4 Variogram Models: Linear, Multi-Quadric, Spheroidal and Generalised Cauchy. One variogram might perform better
than others for a particular data set.


1. Linear variogram (default) A useful general purpose interpolant for sparsely and/or irregularly sampled data. This is not bounded. i.e.
there is no sill.
2. Multi-Quadric [Hardy (1971)] In earlier versions of Leapfrog, this model was referred to as "Generalised Multiquadric". Shows rapid growth away from the origin but a flat slope around the origin. This is a simple way of smoothing the linear model's sharp changes of slope and rounding the corners (i.e. it smooths the derivatives). The "scale" parameter is the radius of curvature at x=0, and controls the smoothness. The alpha (α) parameter determines the growth rate. Users may specify alpha (α) and scale. The function is given as follows:
φ(x) = (x² + c²)^(α/2), where α = 1, 3 or 5 and c = scale

As there is no sill, both linear and multi-quadric models tend to connect across larger intervals, which could have been disconnected if a
different model (e.g. Cauchy, Spheroidal) were used. If you want high connectivity, linear or multi-quadric variograms will be a suitable
choice.
However, both variograms may suffer from blowouts at data extremities. While the Multi-Quadric model produces a smoother interpolation than the linear model, it is more susceptible to blowouts. If you observe this problem, consider providing a small nugget value or switching to one of the following two models.
3. Spheroidal An interpolant that approximates the spherical basis function used in Kriging. Instead of having an exactly finite range the
function dies rapidly to zero outside the specified range. The grade shells produced by this function are in general very similar to those
produced by Kriging (spherical basis function) close to the data values, but the shells are less prone to artefacts when the grade shell is
distant from a measured data point. A high alpha (α) leads to fast growth, approaching the sill quickly. Roughly speaking, the spheroidal model shows the behaviour of the linear model at the origin, while the rest of its shape is reminiscent of the Generalised Cauchy model.

4. Generalised Cauchy Also known as the Inverse Multi-Quadric. Particularly suitable for smooth data such as gravity or magnetic field
data. This model is flat at the origin, and asymptotically approaches the sill. Users may specify "sill", "scale" and "alpha" (α). The function
is given as follows.
φ(x) = sill · (1 − c^α (x² + c²)^(−α/2)), where α = 1, 3, 5, 7 or 9 and c = scale


The variogram approaches the sill at a pace determined by the alpha (α) and c parameters; varying the sill does not make a noticeable difference to the interpolation results.
Higher values for the range allow the surface to expand further from the known points. As a result, there is a higher chance for the surface to connect to neighbouring surfaces. Similarly, a lower alpha (α) means the model is slower to reach the "sill", which also makes it more likely that neighbouring surfaces become connected.
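
To make the roles of alpha, scale and sill concrete, here is a small Python sketch (not Leapfrog code) of the Multi-Quadric and Generalised Cauchy basis functions exactly as written above:

def multi_quadric(x, alpha=3, scale=100.0):
    # phi(x) = (x^2 + c^2)^(alpha/2): unbounded, so there is no sill.
    return (x * x + scale * scale) ** (alpha / 2)

def generalised_cauchy(x, sill=1.0, alpha=3, scale=100.0):
    # phi(x) = sill * (1 - c^alpha * (x^2 + c^2)^(-alpha/2)):
    # zero at the origin, asymptotically approaching the sill.
    return sill * (1.0 - scale ** alpha * (x * x + scale * scale) ** (-alpha / 2))

for x in (0.0, 50.0, 100.0, 500.0, 5000.0):
    print(f"x={x:7.1f}  multi-quadric={multi_quadric(x):16.1f}  "
          f"generalised cauchy={generalised_cauchy(x):6.4f}")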

Modelling the Underlying Drift


The underlying drift is a model of the grade distribution in terms of a simple deterministic model such as a zero, constant, linear or quadratic
variation. Away from data samples, the interpolant will tend towards the value predicted by the underlying drift. This has a direct analogy
with Kriging: Simple and Ordinary Kriging differ in that the latter estimates the mean of the data samples whereas the former assumes a zero mean. Leapfrog also enables the user to use higher-order models, such as a linear or quadratic variation across the data, when this is appropriate.

Anisotropy
In an isotropic world the influence of an isolated data point on the interpolation is symmetric in all directions. Thus the isosurfaces formed
around an isolated data point will appear to be spheres. It is often the case that data is not isotropic, for example in a vein. Here, it is
expected that the influence of a data point in a vein should extend further in the direction parallel to the vein than in the direction
perpendicular to the vein. This behaviour is achieved in Leapfrog using anisotropy. If anisotropy is defined, a data point no longer influences the interpolant uniformly in all directions but does so in the form of an ellipsoid. This is particularly useful in circumstances where the
geologist wants grade shells to link along a direction defined by, for example, a fault.
In order to preserve the volume, the ranges used in the anisotropy are scaled to maintain unit volume. Thus, only the ratio of the lengths is
important. Specifying an ellipsoid ratio of 1:1:10 will produce a result identical to specifying an ellipsoid ratio of 0.1:0.1:1.
The ellipsoid ratios are mapped onto the axes defined by the dip, dip-azimuth and pitch in the following manner. The Max scaling is applied
along the axis defined by the pitch line (pitch-axis). The Min scaling is applied to the axis perpendicular to the plane defined by the dip and
dip-azimuth (pole-axis). The Intermediate scaling is applied to the axis that is perpendicular to the axes defined by the pitch and pole.
In practice, setting the anisotropy is most easily done in Leapfrog using the moving plane.
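
The arithmetic behind the unit-volume scaling can be sketched as follows (an illustration of the idea, not Leapfrog code; the cube-root normalisation is an assumption):

def normalise_to_unit_volume(max_r, inter_r, min_r):
    # Divide by the cube root of the product so the product of the three
    # scalings becomes 1, i.e. a unit-volume ellipsoid up to a constant factor.
    k = (max_r * inter_r * min_r) ** (1.0 / 3.0)
    return tuple(round(v, 6) for v in (max_r / k, inter_r / k, min_r / k))

print(normalise_to_unit_volume(10, 1, 1))      # same result as...
print(normalise_to_unit_volume(1, 0.1, 0.1))   # ...this: only the ratio matters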

Data Transformation
One of the problems with modelling grade values occurs with the existence of samples with extreme values. An interpolant that uses a
weighted sum of the data will place far too much emphasis on what are essentially exceptional values. The solution to this problem is to
apply a nonlinear transformation to the data to reduce the emphasis of exceptional values. Leapfrog provides two grade transformation
methods, namely Logarithmic and Gaussian. Both preserve the ordering of data values so that if the value of a sample is higher or lower
than another before transformation, the same relationship will exist after transformation.
The Gaussian transform modifies the distribution of the data values to conform as closely as possible to a Gaussian bell curve. Because the grade value distribution is often skewed (for example, a large number of low values), this transformation cannot be done exactly.
The logarithmic transform uses the logarithm to compress the data values to a smaller range. In order to avoid issues with taking the logarithm of zero or negative numbers, a constant is added to the data to make the minimum value positive. After the logarithm is taken, a constant is added so the minimum of the data is equal to the specified post-log minimum. Flexibility in choosing the pre-log minimum is provided because increasing this value away from zero can be used to reduce the effect of the logarithmic transformation on the resultant isosurfaces.
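
A Python sketch of the logarithmic transform as described above (parameter names are illustrative, not Leapfrog's internal names):

import math

def log_transform(values, pre_log_minimum=0.001, post_log_minimum=0.0):
    # Shift the data so its minimum equals the (positive) pre-log minimum,
    # take logarithms, then shift again so the minimum of the transformed
    # data equals the post-log minimum. The ordering of values is preserved.
    shift = pre_log_minimum - min(values)
    logged = [math.log(v + shift) for v in values]
    return [v - min(logged) + post_log_minimum for v in logged]

print(log_transform([0.0, 0.2, 1.5, 30.0]))
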
Pressing the "Show Histogram" button will show the histogram of the data with the specified transformation. Show Histogram should also
be pressed to update the histogram after any changes to the transformation parameters.
When isosurfacing transformed data, the threshold value is also transformed. This ensures that an isosurface at a threshold of 0.4 will still


pass through data samples whose value is 0.4. What will change, however, is the behaviour of the isosurface away from the samples.

Nugget
The nugget represents a local anomaly in the grade values, that is, a sample value that is substantially different from the value that would be predicted at that point from the data around it. In Leapfrog, nugget behaviour is most commonly seen in the form of pin-cushion distortions of
the isosurfaces near data points. Block models that are based on smooth interpolants are also affected by this pincushion effect, although it
may not be as visible to the user.
The pincushion effect can be reduced by adding or increasing the nugget value in the variogram. This effectively places more emphasis on
the average values of the surrounding samples, and less on the actual data point. It is important to note that when nugget is non-zero an
isosurface of a given value may no longer touch a sample of that value. How far it deviates from the sample is an indication of how different
that data sample is from what would be predicted from its neighbours.
Note that the pincushion effect can also be caused by incorrect specification of a deposit's anisotropy.

Navigation: Reference Manual >

Batch Export
Exporting multiple items at once may be done by using the Batch Export command from the Project menu as shown below.

Selecting Batch Export from the Project menu

The Select Objects To Export dialog will appear showing the project tree. Select any objects you want to export by ticking the check-boxes
and clicking OK as shown below.


Multiple objects may be selected at once by right-clicking on a row and choosing Select Children. This will select all the children of a given
row but not the row itself. This allows you to select all the grade shells of an interpolant in one go as shown below.

Exporting a points (grey points) object will export the points and all the associated values at once, including any points without associated values. Exporting a values object (coloured points) will export only the selected values and their points.

The Batch Export dialog is then displayed.

The Batch Export dialog lists the objects to export, along with a header row for each object type selected.
To change the file name for an object double-click on the cell in the Save As column and enter a new name.
To change the export file format click on the Format column and select a new format from the combo-box. To change the format for all
objects of a type, set the format in the header row.
To change the export folder click on the Folder column and type in a new directory or click the button to open a file chooser dialog as
shown below.

To change the export folder for all objects of a type, set the folder in the header row.
To change the export folder for all objects, use the text box at the bottom of the dialog or click the Browse button.
Some GMP products do not allow spaces in filenames. To prevent spaces in the exported filenames, un-tick the Allow spaces in filenames checkbox.

Navigation: Reference Manual >

Boolean Mesh
A Boolean operation on two meshes (or isosurfaces) computes the intersection, union or subtraction of one mesh from another. To
demonstrate this operation, we compute the intersection of two meshes, cu 0.61 and m_assays Buffer 47.0.


We have two isosurfaces, cu 0.61 and m_assays Buffer 47.0, as shown in the project tree:

Here we refer to both the isosurface and the mesh objects as 'meshes'. To compute the intersection of two meshes, right-click on one of the
meshes in the project tree and select the New Boolean Mesh option.

A mesh object (listed under the Meshes object in the project tree) can be derived from an isosurface by extracting
mesh parts (see screenshot above). Alternatively, you can export an isosurface to a mesh file (*.msh) and then
import it back into Leapfrog as a mesh object. For details, refer to Extract Mesh Parts in the Reference Manual.

In the Boolean Mesh window, the mesh you right-clicked on is already specified as the first mesh:


The default operation is Intersect. Other available operations include Union, First minus Second and Second minus First. The result of
the Boolean mesh operation will be placed under the mesh you selected to initiate the process, but you can change this using the Place
under list.
To select the second mesh, click on the Second Mesh button. The Select Mesh window that appears lists all the available meshes
(including both isosurfaces and mesh objects):

Select the second mesh, in this case m_assays Buffer 47.0, and click OK. Back in the Boolean Mesh window, both meshes are now
specified:

Notice that the default name has been updated automatically. Click OK to proceed.
The new mesh has been added under the isosurface cu 0.61:


Press Shift+Ctrl+R to run the process.


When the operation is complete, view cu 0.61 Intersect m_assays Buffer 47.0. When other meshes are cleared from the scene, the
intersecting mesh looks like the one below:

Compare with the two original meshes and confirm that the correct intersection is obtained.

Boolean Mesh vs. Domaining


If you are not familiar with the domaining technique covered in Domaining Tutorial, skip the following.
A Boolean mesh not only offers intersection, but also provides A union B, A-B and B-A operations, where A and B refer to the first and the second mesh respectively.
Where the intersection of two meshes is concerned, a boolean mesh operation is similar to domaining. The essence of the domaining
technique, "clipping a mesh by a domain", is to obtain the intersection of the mesh and the domain.
While the following two results are very similar, the Boolean mesh and a domain are computed slightly differently. This results in subtle differences. In short, the Boolean mesh produces sharper boundaries, whereas the boundaries produced by the domain are more jagged (or chamfered).


Intersection by Boolean mesh. Produces sharp boundaries. Clipped by a domain, showing jagged boundaries.

However, it is possible to produce sharper edges with domaining. When you specify the domain for an isosurface, you can select the Exact Intersection option.

There are three options:


Off: Default. It will trim the edge of the mesh if the triangle on the boundary intersects the domain. As a result, the edge may be jagged or
chamfered.
MultiRes: With this option, the entire isosurface is computed using a multi-resolution solution. The edge will be very smooth and fine. However, isosurfacing with this option will be considerably slower.
Standard: With this option, most of the isosurface is computed with the specified resolution, but it will use the boolean mesh to compute
the edges.
If you select Standard for Exact Intersection, the isosurface clipped by the domain will be identical to the intersection computed by the
boolean mesh. (Slight differences may occur depending on the order of operations.)

Boolean Operations and the Direction of a Mesh


In Leapfrog, a mesh has a positive side and a negative side, which affects the results of Boolean operations carried out on meshes.
A Boolean operation on two meshes acts on the positive part of the space divided by each mesh. The following table illustrates the result of
Boolean operations on closed meshes, where red is the positive side and blue is the negative side:
Operation               Both surfaces positive toward the inside    One positive surface toward the outside    Both surfaces positive toward the outside

Union

Intersect

First minus second

Second minus first
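
One way to picture these rules (an illustration only, not Leapfrog code) is to treat each closed mesh as a function that is positive on the mesh's positive side and combine the signs:

# Each mesh is represented by a signed function that is positive on its
# positive side; the Boolean operations act on the positive regions.

def sphere(centre, radius):
    cx, cy, cz = centre
    def f(p):
        x, y, z = p
        return radius - ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) ** 0.5
    return f                                 # positive inside the sphere

A = sphere((0.0, 0.0, 0.0), 1.0)
B = sphere((0.8, 0.0, 0.0), 1.0)

def union(p):
    return max(A(p), B(p)) > 0               # on the positive side of A or of B

def intersect(p):
    return min(A(p), B(p)) > 0               # on the positive side of both

def first_minus_second(p):
    return min(A(p), -B(p)) > 0              # inside A and outside B

def second_minus_first(p):
    return min(B(p), -A(p)) > 0              # inside B and outside A

p = (0.4, 0.0, 0.0)                          # a point inside both spheres
print(union(p), intersect(p), first_minus_second(p), second_minus_first(p))
# True True False False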

Navigation: Reference Manual >

Bounding Boxes
To create a bounding box, right-click on the Bounding Boxes folder in the project tree and select Define Bounding Box from the menu:

This displays the Define Bounding Box window:

When the Define Bounding Box window opens it defaults to a bounding box calculated from the project extents, that is, from all the
locations, polyline and mesh objects in the project. The project extents box can be recalculated at any time by clicking the From Projects
Extents button.
There are two types of bounding boxes:
A Fixed Bounding Box does not depend on any other object and will not change unless the user edits it directly.
An Object Bounding Box surrounds an object, enlarged by the specified margin. The bounding box will change when the locations of the object it surrounds change.
To specify a fixed bounding box, check the Fixed Bounding Box radio button and type the required extents in the Minimum and Maximum
columns.
To copy the extents from a locations object to the fixed bounding box area, click on the Object Bounding Box radio button, then select the required object from the Locations drop-down box. Set the - Margin and + Margin as required, click on the Fixed Bounding Box radio button, then on the Copy Extents button.
To specify an object bounding box, check the Object Bounding Box radio button and type the required extents in the - Margin and +
Margin columns.
To specify the actual extents of an object bounding box, click on the Fixed Bounding Box radio button and set the extents in the Minimum


and Maximum columns. Click on the Object Bounding Box radio button, then on the Copy Extents button. If the specified extents
would result in a negative margin, the margin is set to zero instead.
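
The relationship between an object's extents, the margins and the resulting bounding box can be sketched as follows (names and numbers are hypothetical, not Leapfrog code):

def object_bounding_box(obj_min, obj_max, neg_margin, pos_margin):
    # Enlarge the object's extents by the margins along every axis.
    box_min = [lo - m for lo, m in zip(obj_min, neg_margin)]
    box_max = [hi + m for hi, m in zip(obj_max, pos_margin)]
    return box_min, box_max

def margins_from_extents(obj_min, obj_max, box_min, box_max):
    # The reverse step used by Copy Extents: a margin that would come out
    # negative is set to zero instead.
    neg = [max(0.0, lo - blo) for lo, blo in zip(obj_min, box_min)]
    pos = [max(0.0, bhi - hi) for hi, bhi in zip(obj_max, box_max)]
    return neg, pos

# Hypothetical object extents compared against the fixed extents used in the example below:
print(margins_from_extents([3600, 7100, 150], [4900, 7900, 900],
                           [3500, 7000, 200], [5000, 8000, 800]))
# ([100, 100, 0.0], [100, 100, 0.0])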

Example
We will edit the m_assay points bounding box to have a minimum corner at (3500, 7000, 120) and a maximum corner at (5000, 8000,
1200).
1. Double-click on the m_assay bounding box to open the Edit Bounding Box dialog as shown below:

2. Click on the Fixed Bounding Box radio button:

3. Now type in the desired extents for the bounding box: (3500, 7000, 200) in the Minimum column and (5000, 8000, 800) in the
Maximum column and click on the Object Bounding Box radio button:


4. Now click on the Copy Extents button. The margins will update as shown below. Now click OK to save changes.

5. Now rerun all the grade shells that depended on the bounding box. Here are the new Au grade shells. The bounding box is now large enough to not clip the Au 0.48 grade shell. In the following screenshot, the yellow and green isosurfaces are the ones with the new and the old bounding boxes respectively.


Navigation: Reference Manual > Bounding Boxes >

Set Default Bounding Box


To set a default bounding box, right-click on the Bounding Boxes folder and select Set Default Bounding Box:

The Set Default Bounding Box window will appear:

This window displays all bounding boxes currently defined for the project, together with the option <None>.


If you select <None>, the project as a whole will be used as the default bounding box.
Select the required default bounding box and click OK. The default bounding box is indicated in the project tree by the blue bounding box
icon:

You can also set the default bounding box by right-clicking on the bounding box you wish to use, then ticking the Default box:

Navigation: Reference Manual >

Changing Data Types


Point data is grouped into three folders based on the type of data the points represent: Numeric Data, Boundaries and Topography. If
some data appears in the wrong folder it can be moved to another using the Change Data Type command.
To change the data type of a points object right-click on the points and select Change Data Type from the menu as shown below:


The Select New Type dialog will appear. Select the desired folder, in this case Boundary, from the Geological Type drop down list and
click OK.

The points object and all its children will be moved to the selected folder as shown below:

Changing the data type of a points object will change the data type of any subsets of the points selected in a domain.

Navigation: Reference Manual >

Combined Interpolants
Combined interpolants are weighted linear combinations of other interpolants. Given interpolants f and g with weights w1 and w2
respectively, the value of the combined interpolant is given by:
c(x) = w1f(x) + w2g(x).
Suppose you have imported the Demo drillhole sets in tutorials\Demo and followed the instruction given in Vein Modelling.
The Combine Interpolants command is found by right clicking on an interpolant object in the project tree and selecting Combine
Interpolants from the menu:


The Select Interpolants dialog will then appear.

Select two or more interpolants and click OK to display the Combined Interpolant dialog.

To add more interpolants, click the Add button to redisplay the Select Interpolants dialog.
To remove an interpolant, select the interpolant in the list and click the Remove button.
To change a weight, double-click on the desired number (or select the desired row and hit Space), and type the new value - hit Enter to
finish editing.
The Normalize button will scale all the weights so their sum is one (1) whilst maintaining the ratio between them.
Fill in the Name text box and click OK to create the new combined interpolant, which will run automatically. Combined interpolants are placed in their own folder, which will appear if it does not already exist. They may be used like any other RBF interpolant.
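
The arithmetic behind a combined interpolant and the Normalize button can be sketched in a few lines of Python (stand-in functions only, not Leapfrog code):

def combined(interpolants, weights):
    # c(x) = w1*f(x) + w2*g(x) + ... for any number of interpolants.
    def c(x):
        return sum(w * f(x) for f, w in zip(interpolants, weights))
    return c

def normalise(weights):
    # Scale the weights so they sum to one while keeping their ratios.
    total = sum(weights)
    return [w / total for w in weights]

# Vein thickness example from below: hangingwall offset minus footwall offset.
footwall = lambda x: 12.0        # stand-in interpolants returning a constant
hangingwall = lambda x: 14.5
thickness = combined([footwall, hangingwall], [-1.0, 1.0])
print(thickness(0.0))            # 2.5
print(normalise([2.0, 6.0]))     # [0.25, 0.75]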


Example
We will use combined interpolants to display the thickness of the vein shown below. From the B1_vein footwall interpolant, select
Combine Interpolants.

Select B1_vein footwall offset values and B1_vein hangingwall offset values. To find the thickness of the vein we combine these two
interpolants.

Ensure that the two weights are -1 and 1 and click OK to create the new interpolant.
Right-click on the mesh from which the vein was made - B1_vein footwall Surface in this instance - and select the Evaluate command.
Select the combined interpolant just created - vein thickness in this instance - as shown below and click OK.


Running the evaluation and displaying gives the following result:

Blue regions indicate thin parts and red regions indicate thick parts. More information about the thickness can be obtained from the evaluation's properties or by changing the Colouring.

The vein thickness interpolant must be evaluated on the mesh from which the vein was made - not on the vein
mesh. Evaluating the interpolant on the vein mesh will not give you the thickness at that point.

The thickness evaluation is automatically computed if the vein is created by the New Vein function.

Navigation: Reference Manual >

Composite Assays
The Assay Compositing dialog allows you to perform fixed-length compositing of assay data. This dialog can be accessed by right-clicking
on the assay table of the imported drillhole data.


The dialog is composed of three tabs, Compositing, Volume and Output Columns:

Compositing
Compositing Method
No compositing: Apart from the actions for special assay values, no processing on the input data is done.
Fixed Length: All intervals are processed to the fixed composite length. Note that the interval at the end of a drillhole may be shorter than the composite length. If the last interval is longer than the specified minimum length, it will be kept; otherwise it will be discarded.
Special Assay Values
Under Special Assay Values will be listed any meanings that have been associated with special assay values in the table (non-numeric or
negative values), along with 2 standard values - Blank (empty or NULL value ) and Missing (no row in database).
For each type of interval you can Omit (leave empty), Replace it with a fixed value or set it to a Background value depending on the assay
column. The background values used for each assay column are specified in the Assay Background Values list.


We examine how the compositing option affects the result. The following snapshots show the cu grades of a portion of the m_assays data.
The first snapshot shows the original, non-composited, intervals of cu.

Original intervals

Let us select the No Compositing method. This will only perform processing for the special cases. Select the Replace action for Below Detection and give 1.5 (just for illustrative purposes; in practice, the value for below detection is very low). Notice that the result remains mostly unchanged, apart from the short interval that has a value below detection (inside the red rectangle).


No compositing. Below Detection changed to 1.5

We now select Fixed Length with Composite Length 20.0 and Minimum Length 10.00. Notice that all the intervals are exactly 20.0 long. The exceptions are those at the start and the end. If an interval is shorter than the minimum length, it is discarded and its length is distributed between the intervals at the start and the end. The grade of a composited interval is the average value. The replaced value 1.5 for the below detection case is no longer distinctly shown, but it contributes to yielding a higher average.

Fixed Length Compositing with Length 20.0 and Minimum length 10.0.
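
A rough Python sketch of fixed-length compositing as described above; the length-weighted averaging and the treatment of the final short interval are assumptions for illustration, not a statement of Leapfrog's exact algorithm:

def composite_fixed_length(intervals, length=20.0, min_length=10.0):
    """intervals: list of (from_depth, to_depth, grade) down one drillhole."""
    start, end = intervals[0][0], intervals[-1][1]
    composites = []
    top = start
    while top < end:
        bottom = min(top + length, end)
        if bottom - top < min_length:      # short interval at the end of the hole
            break                          # assumed: discard it
        total = covered = 0.0
        for f, t, grade in intervals:      # length-weighted average of the overlaps
            overlap = max(0.0, min(t, bottom) - max(f, top))
            total += grade * overlap
            covered += overlap
        composites.append((top, bottom, total / covered if covered else None))
        top = bottom
    return composites

raw = [(0.0, 12.0, 0.3), (12.0, 31.0, 1.5), (31.0, 45.0, 0.8)]
for c in composite_fixed_length(raw):
    print(c)   # ≈ (0.0, 20.0, 0.78) and (20.0, 40.0, 1.185); the last 5 m is dropped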

Volume
In the Volume tab, you can specify where to composite. By default, compositing is performed everywhere, but you can choose to composite only inside a region or within the results of a query filter. If you have regions or query filters available, they will show in the dialog. Otherwise, they can be created using the Composite Region and New Query Filter commands respectively. Suppose we have a composited region MX_composite created from m_assays by including the zone MX but excluding the others:


We composite with the same Fixed Length setting as above, but limit compositing to the inside of the region MX_composite only. The red bars below represent portions inside this region. Only the data within this region will be composited.

MX_composite (showing "included" regions only)

The result should be similar to the one below.


Output Columns
While all the columns will be composited by default, you can determine which columns to include or exclude in the Output Columns tab.

Navigation: Reference Manual >

Composite Region

This topic supplements the tutorial on Compositing Regions.

The Composite Region dialog (shown below) is used for modelling spatial regions. These regions could represent a particular lithological
type (or group of types), mineralization, high grade zones or any other region of interest. The result is stored in a region table, which is an
interval table with one measurement column called 'interest'. The interest is 1 for intervals inside the region and 0 otherwise.


The left-hand side of the dialog is used to select which intervals are to be included in the region. The right-hand side allows you to specify
the processing steps to apply to the intervals selected on the left.

If no processing is required, consider using Partitions or Query Filters instead of creating a composite
region.

Navigation: Reference Manual > Composite Region >

Selecting Regions
Intervals to include in the region can be selected using a query filter, by specifying a list of category values to include (e.g. lithology values), or by specifying a set of category values previously grouped together using a partition.

Using a Query Filter


Select Query Filter from the Define region using a combo-box (the Category Column parameters will be removed). Then select the
desired query from the Query filter to use combo-box, as shown below:

If there are no query filters defined this option is not available.

Using a Category Column


Select Category Column from the Define region using a combo-box. Then select a column from the Column to use combo-box as shown
below:

If there are no category columns in the table, this option is not available.
You can work directly with values in the selected column or you can work with previously defined partition groups by selecting a partition
from the Partition combo-box.
Using the left mouse button, drag the intervals you want to model from the Exclude column to the Include column. Use the Ignore column
for dykes or other (younger) intrusions that you wish to ignore.

Exclude vs. Ignore


Let us consider the following diagram showing three lithologies, A, B and C, where we wish to model lithology A.


Clearly A must be included.


If B and C are both excluded, Leapfrog will model A as shown below.

On the other hand, if C is ignored (and B excluded), all occurrences of A-C-A down a drillhole are replaced with A-A-A and all occurrences
of A-C-B down a drillhole are replaced with A-B (the contact point is the midpoint of the removed C interval). Effectively C will be completely
ignored as if it were non-existent and Leapfrog will model A as shown below.

Missing Intervals
Missing intervals (sometimes known as 'implicitly missing intervals') can be treated in the same way as other intervals: included, ignored or
excluded. Ignored is recommended in most situations except when there are large areas of un-sampled drillholes. This can happen, for
example, when the ore is below a lot of ground rock.

Navigation: Reference Manual > Composite Region >

Parameter Settings

Processing Types
Leapfrog provides five ways to process the drillhole data when you composite a region.
1. Window filter
2. Fill short gaps
3. Remove short intervals
4. Extract single vein
5. Longest interval only
We describe the details of each processing type and observe how each of them affects the following scene: the original drillhole data
showing MX zone only.


m_assays (zone showing MX only)

Window Filter
The window filter quickly determines whether an interval should be included in or excluded from the composited region. The decision is based on three parameters: Width, Interest Percentage and Conservatism.

The Width parameter specifies the width of the window. If the proportion of the interest intervals (MX in this example) within the window is
higher than the Interest Percentage, the filter decides these intervals will be included. Otherwise, these intervals will be removed and will
not appear in the resulting composited region.
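
A toy Python sketch of the window test just described, assuming the window is centred on the interval being tested (the exact mechanics are not documented here):

def passes_window_filter(interval, interest_intervals, width=40.0,
                         interest_percentage=50.0):
    # Centre a window of the given width on the interval and measure what
    # proportion of the window is covered by interest intervals.
    mid = 0.5 * (interval[0] + interval[1])
    lo, hi = mid - width / 2.0, mid + width / 2.0
    covered = sum(max(0.0, min(t, hi) - max(f, lo)) for f, t in interest_intervals)
    return 100.0 * covered / width >= interest_percentage

interest = [(100.0, 118.0), (121.0, 130.0), (200.0, 202.0)]   # MX down one hole
print(passes_window_filter((121.0, 130.0), interest))  # True: well supported nearby
print(passes_window_filter((200.0, 202.0), interest))  # False: an isolated sliver
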
The following series of images show the effect of varying the parameters. The translucent white cylinders are the processed region intervals,
and the red cylinders are the original interest intervals.
Width=1, Interest percentage=50%, Conservatism=50%


Width=40, Interest percentage=50%, Conservatism=50%

A high value for Interest percentage would make the filter strict, and may improve the alignment between the output and the input.
Width=40, Interest percentage=90%, Conservatism=50%


Conservatism controls the strictness in determining the boundary of the filtered intervals.
After the filtering, the region intervals' endpoints will not usually match any of the original interval endpoints. This is not particularly desirable, so a filtered interval may need to extend its endpoints to an adjacent interval.
If a filtered interval happens to have an endpoint lying within an interest interval, they will be merged and the endpoint will be extended to
the endpoint of the interest interval.

On the other hand, if a filtered interval endpoint lies within a non-interest interval, then the composite region result will include the original
(non-interest) interval when the overlap between the filtered interval and the original interval is more than Conservatism percent.

A high value for Conservatism will remove poorly-aligned intervals. For example, if Conservatism is 100%, then no non-interest areas touching the filtered boundary will be included, resulting in a smaller volume. If Conservatism is 0.1%, (almost) all non-interest areas touching the filtered boundary will be included, resulting in a larger volume.


Fill Short Gaps


Intervals separated by a gap shorter than the specified distance can be merged.
To demonstrate this, composite a region with Fill gaps shorter than 60.
In the scene below, intervals less than 60m apart are joined.

Remove Intervals
The intervals shorter than a specified length can also be removed.
To demonstrate this, composite a region with Remove intervals shorter than 50. The short red intervals not overlapped by the white
translucent intervals are those excluded by the filter.

Extract Single Vein


With this option, each drillhole will have only one interval, which fills all the gaps between the first interval and the last interval and forms one long continuous interval.


Longest Continuous Interval Only


With this option, each drillhole will have one interval only: the longest continuous interval. This is useful for extracting veins. The red intervals not overlapped by the white translucent intervals are those excluded by the filter.

Composited regions can also be composited. Right-click on a region table in the Project tree, and select
Composite Region. This means that the different types of filters can be sequentially applied. For example, you
may apply the remove-intervals filter to the region you composited with the window filter.

Handling Missing/Ignored Intervals


Missing Intervals
In practice, it is not uncommon for a drillhole to contain some intervals with no values. For correct modelling, you should specify how these
'missing intervals' will be processed. Based on your domain knowledge and analysis, they can be included, excluded or ignored.
Convert Ignored Intervals


If Yes is selected, Leapfrog will convert the ignored intervals to either included or excluded by comparing them with their adjacent intervals.
For example, when an ignored interval lies between two included intervals, it is converted to an included one. When an ignored interval is sandwiched between one interval of each type, the ignored interval will be split into two and each portion will be converted to the type of the neighbouring interval.
Otherwise, i.e. if No is selected, the ignored intervals are left ignored.

Navigation: Reference Manual >

Constant Offset
Meshes may be offset by a constant distance using the Constant Offset command.
The Constant Offset command may be found by right-clicking on any mesh type object in the project tree and selecting Constant Offset:

The Constant Offset Mesh dialog will then be displayed:

Select an Offset Distance. Positive values offset towards higher grade for grade shells and to the red side of boundary meshes. Use
negative values to offset in the other direction.
Select a Quality level. A low quality offset will run faster but will not be completely accurate around detailed areas and is more likely to miss
small parts. A high quality offset will take longer but will be accurate around detailed areas and will offset small parts correctly. Below are
time comparisons for a mesh with 16 500 vertices.
Quality: 0.25 0.50 0.75 1.00
Time taken for Interpolation step: 2.7sec 7.4sec 16.3sec 21.6sec
Setting a value for Ignore parts less than ignores parts smaller than the threshold value in the offsetting process. Small parts are often not
interesting and do not offset well unless a very high quality is used. If you set this to 0 then set the Quality to 1.00.
Click OK to create the offset mesh. Three objects will appear under the Offset Interpolants sub-folder.


The first object, Cu 0.61 offset by 1.0, is the values object defining the offset mesh, which is then interpolated by an RBF to obtain the actual offset mesh Cu 0.61 offset by 1.0 Surface. To edit the offset distance or other parameters, double-click on the offset values object (Cu 0.61 offset by 1.0 in this instance).

Example
This example demonstrates how problems with small parts can manifest. We will offset the same grade shell by 30m with all small parts
included and a quality of 0.15 as shown below.

This results in the following surface, which has missed two of the internal parts (among others), as shown below.


Increasing the quality will catch the missing parts.

giving:

Navigation: Reference Manual >

Creating a 2D Slice

To create a 2D slice, you first need to add the slicer to the scene. To do this, activate the slicer by clicking on the button on the scene
toolbar. Manipulate the slicer as described in the Section View Manipulation tutorial. Position the slicer in the scene to represent the slice
you will create.
Next, right-click on the Images and Slices folder and select Make 2D Slice:


The Name window will appear. Type a name for the new section, then click OK.

A window will be displayed showing all the domains available in the project:

If you wish to include any of the listed domains, tick the required boxes, then click OK. The new section will appear in the project tree in the Images and Slices folder. To view the section, drag it into the scene or right-click on it and select View.

Navigation: Reference Manual >

Date and Time Formats


When loading date and timestamp columns, you can specify the date and time format used.
Format strings are case sensitive. The following directives can be used in a date or timestamp format string:
Directive                      Place-holder for
YY                             Year without century [00-99].
YYYY                           Year with century.
MM                             Month as a number [1-12].
MMM                            Abbreviated month name.
MMMM                           Full month name.
DD                             Day of the month as a decimal number [1-31].
DDD                            Abbreviated weekday name.
DDDD                           Full weekday name.
hh                             Hour as a number [0-23], or 0-12 if the 'pm' directive is specified.
mm                             Minute as a number [00-59].
ss                             Second as a number [00-59].
pm                             AM or PM place-holder.
\Y, \M, \D, \h, \m, \s, \p     A literal Y, M, D, h, m, s or p.
\\                             A literal \.
Examples:


example date                              format string matching date
3 November 2006                           DD MMMM YYYY
3/11/06                                   DD/MM/YY
Nov 3, 2006                               MMM DD, YYYY
on 3-Nov-2006                             on DD-MMM-YYYY
Tuesday, 11 November 2006                 DDDD, DD MMMM YYYY
Date: 3 Nov 06                            \Date: DD MMM YY

example time stamp                        format string matching time stamp
2006-11-03 14:35:00                       YYYY-MM-DD hh:mm:ss
2006-11-03 02:35:00 pm                    YYYY-MM-DD hh:mm:ss pm
Tuesday, 11 November 2006 at 2:35pm       DDDD, DD MMMM YYYY at hh:mmpm
20061103143500                            YYYYMMDDhhmmss
2:35pm on Tue 3 Nov 06                    hh:mmpm on DDD DD MMM YY
Date: 3 Nov 06 Time: 14:35                \Date: DD MMM YY Ti\me: hh:mm
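
Leapfrog parses these format strings itself; purely as an illustration of how the directives map onto a more familiar convention, the following Python sketch translates them into strptime codes (the mapping is an assumption based on the table above, not part of Leapfrog):

from datetime import datetime

# Longer directives must be matched before their prefixes (YYYY before YY, ...).
_MAP = [("YYYY", "%Y"), ("YY", "%y"),
        ("MMMM", "%B"), ("MMM", "%b"), ("MM", "%m"),
        ("DDDD", "%A"), ("DDD", "%a"), ("DD", "%d"),
        ("hh", "%H"), ("mm", "%M"), ("ss", "%S"), ("pm", "%p")]

def to_strptime(fmt):
    out, i = [], 0
    twelve_hour = "pm" in fmt               # switch hh to a 12-hour hour code
    while i < len(fmt):
        if fmt[i] == "\\" and i + 1 < len(fmt):
            out.append(fmt[i + 1])          # \Y, \M, ... stand for literal characters
            i += 2
            continue
        for directive, code in _MAP:
            if fmt.startswith(directive, i):
                out.append("%I" if directive == "hh" and twelve_hour else code)
                i += len(directive)
                break
        else:
            out.append(fmt[i])              # any other character is a literal
            i += 1
    return "".join(out)

print(to_strptime("DDDD, DD MMMM YYYY at hh:mmpm"))           # %A, %d %B %Y at %I:%M%p
print(datetime.strptime("3/11/06", to_strptime("DD/MM/YY")))  # 2006-11-03 00:00:00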

Navigation: Reference Manual >

Define Complement on Domain


To generate the complement of a domain, right-click on the domain and select Define Complement:

The complement will be generated and will appear in the project tree under the Domains folder:

You can then view the complement properties and modify the complement in the same way you would any other domain.
The Define Complement function is useful in dividing a larger volume into smaller ones. For example, say we wish to use a fault surface to
divide a volume into two separate volumes:

The first step is to use the Define Sub Domain function to create a new sub-domain, the first of the smaller volumes:


Next, right-click on the new sub-domain in the project tree and select Define Complement. When the result is added to the scene, together
with the sub-domain, you can see that the original volume has been divided in two:

Navigation: Reference Manual >

Detach Viewer
The scene window can be detached from the main window and promoted to a stand-alone window. This is especially convenient if you have two or more display screens and wish to have the scene window maximised on one screen and the main window as a controller on the other screen.
To detach the viewer, select View > Detach Viewer from the main menu. Go to the scene window and press F11 for full-screen display. To put the scene window back into the main window, simply press the Esc key.


Navigation: Reference Manual >

Domains
Domains are simply regions of space. There are no restrictions on the size or shape of a domain. Domains may be infinite in extent (e.g.
everywhere below the topography) or of finite extent (e.g. high grade region). Domains may contain multiple regions that are disconnected
from each other or they could be a single connected region.
Domain boundaries can be defined using polyline surfaces, grade values, minimum distance values, boundary surfaces, bounding boxes
and other domains. Only one bounding box per domain is permitted.
See the Domaining Tutorial for instructions on how to add boundaries and set thresholds.
Intersection and Union
When the Intersection option at the top of the dialog is selected (the default), all the conditions are required to be true at a point for the point to be inside the domain (logical AND). Consider the dialog encountered in the Domaining Tutorial, reproduced below:

This domain is all the points where "Topo Subset Rbf is less than or equal to zero" AND "Distance to Marvin (Isotropic) is less than or
equal to 150". (Since Topo Subset Rbf is zero at the topography surface, the first phrase means "below the ground").

When the Union option is selected, the domain is defined to be all points that satisfy any one of the conditions. Consider the same example
with union selected:


This domain is all the points where "Topo Subset Rbf is less than or equal to zero" OR "Distance to Marvin (Isotropic) is less than or equal to 150". The boundary of the domain looks like this (beware that triangles on the bounding box are turned off by default; therefore, all points below ground in the bounding box are not shown):

This domain is points above the ground that are within 150m of the Marvin data and also points further than 150m from Marvin that are
underground.
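
A toy Python illustration (not Leapfrog code) of how Intersection and Union combine the two conditions used above; the helper functions are made-up stand-ins:

def topo_subset_rbf(p):
    # Stand-in for the topography interpolant: negative below the ground surface,
    # pretending the ground is a flat plane at z = 250.
    return p[2] - 250.0

def distance_to_marvin(p):
    x, y, z = p
    return ((x - 4200.0) ** 2 + (y - 7500.0) ** 2 + (z - 300.0) ** 2) ** 0.5

conditions = [
    lambda p: topo_subset_rbf(p) <= 0.0,        # below the ground
    lambda p: distance_to_marvin(p) <= 150.0,   # within 150m of the Marvin data
]

def inside_intersection(p):
    return all(c(p) for c in conditions)        # logical AND

def inside_union(p):
    return any(c(p) for c in conditions)        # logical OR

p = (4200.0, 7500.0, 400.0)                     # above ground but close to Marvin
print(inside_intersection(p), inside_union(p))  # False True
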
Parent Domains
As mentioned above, domains can reference other domains. Suppose we have a domain Ground that is defined as "Topo <= 0".
The domain dialog below shows a region to the north side of a fault that is also below ground.


This domain would normally appear in the Domains folder. To make it appear under the Ground domain in the project tree, select Ground
from the Parent domain drop-down list. The parent domain is indicated by a pink background in the list.

This only works when adding the Inside of a domain. This ensures that child domains in the project tree are
subsets of the parent domain.

Setting the parent is not required, but allows flexibility in the layout of the domain objects in the project. A domain's parent can be changed without recalculation, provided it remains in (or was already in) the list of boundaries.

Navigation: Reference Manual >

Drawing Commands

Drawing Commands

These are available in drawing mode, that is, when the drawing toolbar is visible and one of the drawing buttons is selected.

Mouse/Keyboard Action Taken


Left click Draws a point or a node with straight edges
Left drag Draws a point with a normal or a node with a smooth tangent
Right click Terminate current polyline
Shift+Left click Rotate camera
Double left click Terminate current polyline
Left click on contour endpoint Close polyline
Ctrl-Z Undo last drawing or edit command. Note: This may change the mode
from drawing to editing.

Editing Commands

These are available in editing mode, that is, when the drawing toolbar is visible and the edit button is selected.

Mouse/Keyboard Action Taken


Left click Selects segment, node or point under cursor. When on a selected segment it selects
the entire polyline
Delete Delete selected segment, node or point
Ctrl+Left drag (on a node or point) Moves node or point. (on a segment) Adds a node
Alt+Left drag (on a node without tangents) adds a smoothing tangent (on a node with tangents)
moves node
Double left click Select entire polyline
Ctrl-Z Undo last drawing or edit command. Note: This may change the mode from editing to
drawing.


When editing a polyline, the nodes will normally move in the section plane that the polyline was drawn in.
However, if the angle between the current view direction and the section plane is less than 35 degrees, the
nodes will move in the current viewing plane instead.

Navigation: Reference Manual >

Extract Mesh Parts


The Extract Mesh Parts command allows you to create a mesh from selected connected parts of an existing mesh or isosurface.

To begin the process, right-click on any mesh or isosurface in the project tree:

When Extract Mesh Parts is clicked, the following dialog will appear.

The largest part is initially selected. The mesh parts may be sorted either by Volume or by Area by clicking the heading of the respective
column.
To select all the parts click the Select All button.
To de-select all the parts click the Remove All button.
Inside-Out parts have negative volume. These are the blobs you can see inside the large shell in the picture above. To remove them, click
the Remove Inside-Out button.


To remove parts smaller than a given size, first click the Select All button then select the last item you want to keep in the listbox and click
the Remove Below Current button as shown below.

Click OK to create the mesh. It will be placed in the Meshes folder:

Meshes created in this way are not connected to the mesh they were created from. Changes to the original mesh
will not be reflected in the selected parts.

Here is the result of selecting all non-negative volumes.
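
For intuition about why inside-out parts have negative volume, here is a small sketch (not Leapfrog code) that computes the signed volume of a closed triangle mesh using the divergence theorem; a consistently inward-facing part gives a negative result:

import numpy as np

def signed_volume(vertices, triangles):
    # Sum of signed tetrahedron volumes formed by each triangle and the origin.
    # Positive when the triangle normals point outwards, negative when the
    # part is inside-out.
    v = np.asarray(vertices, dtype=float)
    t = np.asarray(triangles)
    a, b, c = v[t[:, 0]], v[t[:, 1]], v[t[:, 2]]
    return np.einsum("ij,ij->i", a, np.cross(b, c)).sum() / 6.0

# A unit cube wound outwards has volume +1; flipping the winding gives -1.
verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
tris = [(0, 1, 3), (0, 3, 2), (4, 6, 7), (4, 7, 5),   # x = 0 and x = 1 faces
        (0, 4, 5), (0, 5, 1), (2, 3, 7), (2, 7, 6),   # y = 0 and y = 1 faces
        (0, 2, 6), (0, 6, 4), (1, 5, 7), (1, 7, 3)]   # z = 0 and z = 1 faces
print(signed_volume(verts, tris))                     # ~  1.0
print(signed_volume(verts, [t[::-1] for t in tris]))  # ~ -1.0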

Navigation: Reference Manual >

Extract Points
From the imported drillhole data set, you can retrieve several types of pointset objects, including assay points, volume points, vein walls and
contact points.

Assay Points


The basic techniques for extracting assay points are covered in the Extract Assay Points tutorial.
Background Regions
When you apply a filter that separates the points of interest from the others, the filtered-out areas are referred to as background regions. The points in the background regions are not particularly of interest, but they cannot simply be discarded in an attempt to reduce the number
of points for more efficient processing. If they are discarded, the background regions will be seen as blanks, and when you interpolate the
remaining points, the result can potentially be inaccurate. Instead, Leapfrog allows you to 'implant' a small number of points with a fixed
grade (preferably low) in the background regions.
The following example illustrates how this works.
When assay points of m_assays are generated without a filter, cu grade is shown as below.

Right-click on m_assays and select Extract Points > Assay Points:

Go to the Background Regions tab, and enable the "Create fewer points when" option:

You can create a value filter inside this dialog, or opt to choose an available filter if you have created one previously. If there is no available filter, the "The following criteria is" option will be greyed out.


Here, we create a value filter inside the dialog: points with cu grades less than 0.100 are background regions.
Leapfrog will remove all the points in the background regions, but will place new points with grade 0.01 (as specified in the Background Value field) every 50.0 m (as per Distance between points).
Note that points of cu grade above 0.1 will remain unaffected. Enter the Name m_assays_cu_above_0.1 and click OK.
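
A rough Python sketch of the thinning just described; the exact placement of the implanted points is an assumption, and Leapfrog's actual point generation is more involved:

def thin_background(points, threshold=0.1, background_value=0.01, spacing=50.0):
    # points: (depth, grade) samples down one drillhole, sorted by depth.
    # Points at or above the threshold are kept; each contiguous low-grade run
    # is replaced by sparse points carrying the background value.
    result, run = [], []
    for depth, grade in points + [(None, threshold)]:   # sentinel flushes the last run
        if grade >= threshold:
            if run:
                d = run[0]
                while d <= run[-1]:
                    result.append((d, background_value))
                    d += spacing
                run = []
            if depth is not None:
                result.append((depth, grade))
        else:
            run.append(depth)
    return result

samples = list(zip(range(0, 200, 10),
                   [0.02, 0.03, 0.02, 0.5, 0.9, 0.61, 0.04, 0.02, 0.03, 0.02,
                    0.01, 0.05, 0.02, 0.7, 0.8, 0.03, 0.02, 0.04, 0.02, 0.03]))
print(len(samples), "->", len(thin_background(samples)))   # 20 -> 9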

Set the selection to display m_assays_cu_above_0.1. As you can see, background values of cu 0.01 are displayed every 50m.

Comparing the new result with the original (no filter), the number of points has been reduced from 9182 to 8352.

The isosurface cu 0.61 obtained from m_assays_cu_above_0.1 almost precisely coincides with the one from the original m_assays. This suggests that a properly set background region will improve processing efficiency without compromising accuracy.


Isosurface of cu 0.61 (m_assays_cu_above_0.1)

Volume Points
Lithology data is typically composed of non-numeric values, and is not suitable for interpolation. Leapfrog applies the following idea to create volume points, a numeric representation of the lithology data. Refer to the Generating Volume Points tutorial.
Suppose the area of interest is composed of three lithological layers, A, B and C, and we wish to build the 3D model of layer B. There are 5 drillholes:

In this drawing, there are four boundary points in each drillhole. Weight 0 is assigned to each point. Intervals denoted by (+) are those to be
included, others with (-) are excluded.
"Exclude" vs. "Ignore"
Let us consider the following diagram showing three layers, A,B and C, where we wish to model layer A.

Obviously A must be included. For B and C you need to decide whether to Exclude or Ignore.
Leapfrog requires that at least one layer be excluded.
If B and C are both excluded, Leapfrog considers A as separated into two blocks.

On the other hand, if B is excluded and C is ignored, the drillhole data containing C will be completely ignored as if it were non-existent.
When Leapfrog performs an interpolation, the space occupied by C will be filled in by the nearest lithology type. In this case, A is likely to be
seen as a single continuous block.


Surface Offset Distance and Internal Fill Spacing


For more realistic 3D models, Leapfrog creates many artificial points and distributes them between two boundary points.
When lithology points are generated, users can specify two parameters, Surface Offset Distance and Background Fill Spacing.

In the following simple drawing of a drillhole, let us suppose we wish to include the blue interval for 3D modelling.

The top and bottom ends of the interval are adjusted by the value specified by Surface Offset Distance (offset for short).
Points a and d are given weight 0. The remaining interval between a and d is divided into segments of size "spacing", which is specified by Background Fill Spacing. This creates new points b and c.
If the remaining interval is not a multiple of "spacing", Leapfrog automatically adjusts "spacing" to an appropriate value.
The weight of these artificial points is determined by the distance from the closest boundary point (possibly a boundary point from another drillhole). The greater the distance, the higher the value assigned to that point.
The default values for the offset and spacing will suffice in most situations. A smaller value for the spacing means higher resolution and,
therefore, slightly smoother surfaces. However, computation will take slightly longer.
A higher offset value may have a subtle effect: it can make the effect of anisotropic interpolation slightly more pronounced.
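
As a rough illustration of the rule described above (not Leapfrog's actual implementation), the one-dimensional Python sketch below trims the ends of an included interval by the offset, fills the remainder at approximately the requested spacing, and assigns each point a weight based on its distance to the nearer end point. The function name and the simple weighting rule are assumptions for illustration; Leapfrog uses the distance to the closest boundary point, which may belong to another drillhole.

def fill_interval(top, bottom, offset=1.0, spacing=10.0):
    """Return (depth, weight) pairs for one included interval.

    The end points a and d are moved inwards by `offset` and given weight 0.
    The remainder is split into roughly `spacing`-sized segments, creating the
    internal points (b, c, ...)."""
    a, d = top + offset, bottom - offset
    if d <= a:
        return []
    n_segments = max(1, round((d - a) / spacing))   # spacing is adjusted to fit exactly
    step = (d - a) / n_segments
    points = []
    for i in range(n_segments + 1):
        depth = a + i * step
        weight = min(depth - a, d - depth)          # further from a boundary point, higher weight
        points.append((depth, weight))
    return points

# e.g. fill_interval(100.0, 160.0, offset=1.0, spacing=10.0)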
Missing Intervals
In practice, it is not uncommon for a drillhole to contain some intervals without values. For correct modelling, you should specify how these
'missing intervals' will be processed. Based on your domain knowledge and analysis, they can be included, excluded or ignored.
Convert Ignored Intervals
If Yes is selected, Leapfrog will convert the ignored intervals to either included or excluded by comparing them with their adjacent intervals.


For example, when an ignored interval lies between two included intervals, it is converted to an included one. When an ignored interval is
'sandwiched' between one interval of each type, the ignored interval will be split into two and each sub-interval will be converted to the type of
its neighbouring interval.
Otherwise, i.e. if No is selected, ignored intervals remain ignored.
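
The conversion rule can be sketched in a few lines of Python. This is only an illustration of the behaviour described above, not Leapfrog's code; the interval representation and function name are assumed for the example.

def convert_ignored(intervals):
    """intervals: list of (top, bottom, status) tuples ordered down the hole,
    with status one of 'included', 'excluded' or 'ignored'."""
    out = []
    for i, (top, bottom, status) in enumerate(intervals):
        if status != 'ignored':
            out.append((top, bottom, status))
            continue
        above = intervals[i - 1][2] if i > 0 else None
        below = intervals[i + 1][2] if i + 1 < len(intervals) else None
        if above == below and above is not None:
            out.append((top, bottom, above))        # same type on both sides
        elif above is not None and below is not None:
            mid = (top + bottom) / 2.0              # sandwiched between different types:
            out.append((top, mid, above))           # split and take each neighbour's type
            out.append((mid, bottom, below))
        else:
            out.append((top, bottom, status))       # no neighbour to compare with; leave as is
    return out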

Vein Walls
Extracting vein walls is a little more advanced and is explained as part of a separate topic, Vein Modelling.

Contact Points
Volume points are numerical representations of non-numeric lithology data, which makes the data suitable for Leapfrog's FastRBF engine to create
a 3D surface. However, volume points are not particularly strong at outlining the boundary between two contacting layers. Therefore,
Leapfrog offers an alternative called contact points. Contact points define the boundary between two lithology layers.
Right-click on the table that contains the lithology data, in this case m_assays, and select Extract Points>Contact Points.

We generate the contact points between layers MX and PM by setting the parameters accordingly and clicking OK:

Now display MX-PM contacts under Boundaries and resize the point radius to get a similar scene to the one below. These points sit
between MX (red) and PM (blue).


You can interpolate MX-PM contacts in the usual way to create a surface:

Navigation: Reference Manual >

FastRBF
FastRBF (developed by Applied Research Associates New Zealand) allows scattered 2D and 3D data sets to be described by a single
mathematical function, a Radial Basis Function (RBF).
The resulting function and its gradient can be evaluated anywhere, for example, on a grid or on a surface. RBFs are a natural way to
interpolate scattered data particularly when the data samples do not lie on a regular grid and when the sampling density varies.
Fitting an RBF to large data sets was previously considered impractical for data sets consisting of more than a few thousand
points.
FastRBF overcomes these computational limitations and allows millions of measurements to be modelled by a single RBF on a desktop
PC.
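
The general idea of RBF interpolation (though not the FastRBF algorithm itself) can be reproduced with standard tools. The Python sketch below fits an ordinary radial basis function to a small set of scattered 3D samples, assuming SciPy 1.7 or later is available; the data are made up for illustration. FastRBF differs in that it scales this fitting step to millions of points.

import numpy as np
from scipy.interpolate import RBFInterpolator

# A few scattered 3D sample locations with a measured value at each.
rng = np.random.default_rng(0)
xyz = rng.uniform(0, 100, size=(200, 3))
values = np.sin(xyz[:, 0] / 20.0) + 0.01 * xyz[:, 2]

# Fit a single smooth function to the scattered data ...
rbf = RBFInterpolator(xyz, values, kernel="linear")

# ... which can then be evaluated anywhere, e.g. on a regular grid slice.
grid = np.stack(np.meshgrid(np.linspace(0, 100, 50),
                            np.linspace(0, 100, 50),
                            [50.0], indexing="ij"), axis=-1).reshape(-1, 3)
estimates = rbf(grid)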

Navigation: Reference Manual >

Filter Values
After numeric data is loaded into Leapfrog (directly imported or generated from drillhole data), users may create a value filter that collects
points with a grade within a specified range. For example, you can select points with Cu grade greater than 0.7:

Filter Creation: Cu >= 0.7


In the Project tree, select the field name (e.g. Cu, Au) for which you wish to create a filter, and right-click. This brings up a context-menu.
Select Filter Values:


The Filter Values dialog pops up. You can specify the lowerbound of the grade. The default setting of the lowerbound filter is greater than
or equal to the minimum grade found. Type in 0.7 to replace the default value 0.01.
Notice that the filter name "cu >= 0.7" is automatically created. You are free to customise the name, but once modified, it is no longer
automatically updated. To finish, click OK.

This creates a filtered points set cu >= 0.7 under m_assays. This filtered point set is regarded as an independent numeric data set. You can
interpolate values, distance etc. just as you can with an ordinary numeric data set Cu.

Enable Upperbound Filter


If you wish to define an upperbound of the grade, tick the "and" check box to enable the upperbound setting. The maximum grade found,
3.22, is the current upperbound. The name "cu in [0.7, 3.22]" is automatically produced. The delimiters "[" and "(" represent ">=" and ">"
respectively.
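
Conceptually a value filter is just a range selection on one column. A hedged pandas sketch of the two filters above (the data values are made up; only the column name cu follows this example):

import pandas as pd

m_assays = pd.DataFrame({"cu": [0.05, 0.8, 1.4, 3.22]})

cu_lower = m_assays[m_assays["cu"] >= 0.7]                                 # "cu >= 0.7"
cu_range = m_assays[(m_assays["cu"] >= 0.7) & (m_assays["cu"] <= 3.22)]    # "cu in [0.7, 3.22]"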

Modifying a Filter
Double-click on Cu >= 0.700 in the project tree or right-click and choose Filter Values.

This will bring up the Filter Values dialog again and lets you modify the filter settings.

Navigation: Reference Manual >

Finding Objects
When a project is very large, finding objects in the project tree can be difficult. In such cases, you can search in the project tree using the
Find box above the project tree:


You can limit the search to a specific folder or choose "All" to search the whole project tree.
Note that the term you're searching for does not need to be complete.
You can also find objects in other parts of the Leapfrog application. In these cases, press Ctrl+F and type the keyword you wish to
search for in the dialog that appears:

Navigation: Reference Manual >

Grade Estimation

Introduction
Two of the strengths of Leapfrog are the fast computation of the boundaries of three-dimensional grade shells and the ability to visualise the
ore distribution described by these grade shells easily. Once a model has been obtained, it can be useful to make an approximate estimate
of the total mineral within a deposit before committing to a rigorous geostatistical analysis. The following describes how to create an
estimate using Leapfrog, but it is an approach that needs to be used with an awareness of its limitations. Provided it is
used carefully, useful estimates can be obtained.
The basic approach is shown in Figure 1 in two dimensions. The contours illustrate the boundaries of the quantity to be estimated at
different thresholds. This may correspond to grade, but the procedure is quite generic. To avoid clouding the basic procedure with scaling
factors that vary depending on the type of geological or chemical units, the following discussion assumes the boundaries represent the
annual rainfall in metres, and the areas of the regions are given in square metres.

Illustrating the estimation procedure in two-dimensions.


A very conservative estimate of the total water falling in regions with a rainfall above 0.5m per year would be to calculate:
Estimate 1: 0.5*(Area of A).
Clearly this is going to be an underestimate because there are regions within A where the rainfall is higher. A better estimate would be to
take:
Estimate 2: 0.5*(Area of A - Area of B) + 0.6*(Area of B - Area of C - Area of D) + 0.7*(Area of C + Area of D)
A further improvement would be to recognize that the average grade in the region of A excluding the subregion B would probably be closer
to (0.5+0.6)/2 = 0.55.
Estimate 3: (0.5+0.6)/2*(Area of A - Area of B) + (0.6+0.7)/2*(Area of B - Area of C - Area of D) + 0.7*(Area of C + Area of D)
Leapfrog uses the calculation in Estimate 3.
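
For concreteness, here is the rainfall example worked through in plain Python. The areas are made-up numbers purely for illustration.

# Hypothetical areas (square metres) for the nested regions of the figure above.
area_A, area_B, area_C, area_D = 1000.0, 400.0, 120.0, 60.0

# Estimate 1: weight everything inside the 0.5 contour by 0.5.
estimate_1 = 0.5 * area_A

# Estimate 2: weight each band by its lower contour value.
estimate_2 = (0.5 * (area_A - area_B)
              + 0.6 * (area_B - area_C - area_D)
              + 0.7 * (area_C + area_D))

# Estimate 3 (the calculation Leapfrog uses): weight each band by the average of
# its bounding contour values; the innermost regions keep the top contour value.
estimate_3 = ((0.5 + 0.6) / 2 * (area_A - area_B)
              + (0.6 + 0.7) / 2 * (area_B - area_C - area_D)
              + 0.7 * (area_C + area_D))

print(estimate_1, estimate_2, estimate_3)   # roughly 500, 558 and 599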
It is worth noting that estimating the ore above the highest contour is an example of extrapolation rather than interpolation and needs special
care, since it is not easy to estimate an average grade to use to weight this volume. This presents special problems when estimating the
metal contained in nuggets discussed below.

Estimation in Leapfrog
Grade shell volumes are listed in the Grade Shells tab of the grade interpolant properties dialog. See the Isosurfacing Tutorial for
more details.
The major factors that need to be considered when estimating grade are:
Do the contours adequately describe the distribution of a mineral?
Are the contours adequately approximated?
If the contours are incorrect, the estimate will simply be wrong. It is critically important that the user is confident that the contours faithfully
represent the data. In Leapfrog, interpolation can be interpreted as a form of Kriging. Like Kriging, it can produce ballooning of the isosurfaces
in regions of sparse data. This will result in a significant over-estimate of the mineral.

An example showing ballooning in Leapfrog. The surface shown is the Au 0.48 isosurface from the m_assays data set.

Fortunately, ballooning is visually obvious, as is apparent above, and Leapfrog provides a number of tools to remove this effect. Two of the
most common approaches are to limit the regions to within a finite distance of the data or to define a domain boundary. It is the user's
responsibility to define interpretations of what is geologically reasonable. The quality of the regions defined by the user within Leapfrog
directly determines the quality of the estimate. Because Leapfrog can calculate rapidly, it is not difficult to try a range of assumptions
and assess their effects.
In the rainfall example, the user needs to determine how many contours are sufficient to represent the rainfall distribution. This can be done
by the practical application of what mathematicians refer to as taking limits. In the limit of very finely spaced contours, the estimate can be
expected to converge to the true value. In practice, what is usually done is that the number of contours would be doubled and the estimate
recomputed. Thus, the region between 0.5 and 0.6 would be divided into two regions of between 0.5 and 0.55 and between 0.55 and 0.6.
The difference between the sum of these two estimates and the original estimate between 0.5 and 0.6 gives an idea of the error in the
original estimate. If the difference is too large, or the user has doubts about its validity, the operation needs to be repeated.
A similar procedure can be used to verify an appropriate resolution for an isosurface in Leapfrog. Halving the resolution should reduce the
error caused by approximating the true surface with triangles to approximately a quarter of its previous value. Again, the user needs to visually check the
isosurfaces as this rule of thumb may not apply at very coarse resolutions.


The estimation procedure can be summarised as a two-stage process: firstly, the generation of a model or models; secondly, the varying of
doubtful parameters in the models either to confirm that they are not changing the results significantly, or to estimate the range of possible
results.

Nugget Distributions
The approach described above does not work well when the distribution is poorly represented by an isosurface.
For example, in real-world mining it often happens that a significant proportion of the mineral is contained in nuggets. In this case, Leapfrog
will generate small isosurfaces around the nuggets encountered in the drillholes, but cannot generate isosurfaces around nuggets lying
between drillholes.
Missing these isosurfaces can cause a significant underestimate of the total mineral deposit. Deposits which are prone to this problem can
be identified by looking at the histogram of the grade, which will decay slowly for high values, as in Figure 3.

The histogram of a deposit.

An isosurface taken at a high grade threshold (Figure 4) is also typical of a grade distribution with high nugget.

A grade shell computed in Leapfrog for a deposit with significant nugget.

There is no simple way of solving this problem, which again essentially reduces to one of defining a volume in which the nuggets occur and
estimating an effective mineral density from the measured probability distribution of the grade within this volume. Leapfrog provides the tools
to help the user define the volume; however, the estimation of the effective nugget density is still a topic of research.

Navigation: Reference Manual >

Histogram
If you view the Properties of a table, the Histogram tab provides the statistical characteristics of the data.
If the table contains several columns, you may select the column for which a histogram will be generated. For example, the histogram for Au
is generated as shown in the following screenshot.


You can adjust the Bin count (the number of intervals in the histogram). The default is 50, as shown above. The following figure shows the
result with a bin count of 25. Type 25 and press Enter to update the histogram.

A semi-log histogram of the data values can be produced by ticking the Semilog X check-box. This is particularly helpful when a high
proportion of the population is concentrated in low-valued bins.
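
The same kind of inspection can be reproduced outside Leapfrog with a few lines of Python, assuming numpy and matplotlib are available; the grade values below are randomly generated stand-ins, not project data.

import numpy as np
import matplotlib.pyplot as plt

au = np.random.default_rng(1).lognormal(mean=-2.0, sigma=1.0, size=5000)  # stand-in Au grades

fig, (left, right) = plt.subplots(1, 2, figsize=(10, 4))
left.hist(au, bins=50)                 # bin count = 50, as in the default histogram
left.set_title("Au, 50 bins")
right.hist(au, bins=25)                # bin count reduced to 25
right.set_xscale("log")                # comparable in spirit to ticking Semilog X
right.set_title("Au, 25 bins, log X")
plt.show()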

Navigation: Reference Manual >

Import Column
Columns of an interval table that were not imported during the drillhole data import can be added at any time.
To demonstrate this, we import the column Cxcu, which was previously omitted from the assay table m_assays. Right-
click on m_assays and select Add Interval Column:


This brings up the Open Interval Measurement File dialog. Choose M-Assays.DAT. The following dialog will appear.

Let us have a close look at this dialog first.

Column Summary
The panel on the right gives a list of column summaries. Note that 4 columns, Hole, from, to and Sample are highlighted and their action is
"Match". This means that all 4 columns will be used as the key to identify the matching row in Leapfrog.
In this case, the Sample column itself provides a unique row key, so importing all 4 columns is not necessary. While this does no harm in
terms of correct operation, the column import will be inefficient.
So we import just the Sample column here and do not import the other 3 columns. Change the Import As field of Hole, from and to to Not
Imported as shown below.
Note that this action would have been unnecessary if the Sample column had been selected as the Unique Row ID during the original
drillhole import. Only the Sample column would have been highlighted in that case.
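
In database terms the "Match" columns are simply a join key. The pandas sketch below is an assumed illustration of the difference (the frame and column names follow this example, but the code is not Leapfrog's importer): matching on the unique Sample column alone lines the rows up; matching on Hole, from and to as well gives the same result with more comparison work per row.

import pandas as pd

existing = pd.DataFrame({"Sample": [101, 102, 103], "cu": [0.2, 0.9, 1.1]})
incoming = pd.DataFrame({"Sample": [101, 102, 103], "Cxcu": [0.1, 0.5, 0.8]})

# Join the new column onto the existing table using Sample as the key.
merged = existing.merge(incoming, on="Sample", how="left")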


Select Additional Columns


Suppose we attempt to import three columns, cu, Cxcu and Au, by selecting "Assay" in their "Import As" fields as shown below.
The Action field of cu and Au would appear as "Match", meaning that they have been previously imported and will be used for matching.
In contrast, the Action field of Cxcu should appear as "Import", indicating that this column will be a new addition to the database.
Revert cu and Au back to "Not Imported". We have the Sample column for matching, so extra matching columns are unnecessary.
In some cases, you may wish to import the same column again. As long as you assign it a different name, Leapfrog allows this.

Click on the Finish button to import the selected columns. The new column, Cxcu, will appear in the Processing Tasks list and will run
automatically.
Exercise: It appears that the new column contains some errors. Fix them following the methods described in the Fixing Errors part of the
Drillhole Data Import tutorial.

Navigation: Reference Manual >

Import Meshes
Meshes in various formats can be imported into Leapfrog. The list of recognised formats is given in the Export Tutorial.
Follow the steps below to import a mesh.

You are expected to have completed the Exporting Meshes tutorial. It will be assumed that you have the cu 1.0
(Linear Isotropic)_tr.asc files.


Mesh Importing Basics


Right-click on the Meshes folder in the Project pane, and select Import. Browse to the mesh file and select it.

While the mesh is being imported, you will see progress similar to that shown below.

When the mesh is successfully imported, you will see the imported mesh located in the Meshes folder, ready to be displayed.

Importing a Mesh in Elevation Format


There is an extra step when importing a mesh in elevation format (*.adf, *.asc).
After selecting the mesh file to import, the Filter Elevation Data dialog will appear so that you can specify a bounding box. This option is
particularly useful when you import a huge topography mesh that contains a large area that is not needed. A properly set
bounding box can clip the unnecessary portion from the mesh during the import.

No Bounding Box

With Bounding Box

When there is no bounding box available, the option will be disabled, as in the first screenshot above. We import the same mesh twice, with and
without a bounding box. The bounding box eastern_half specified above will include the eastern half of the original mesh and filter out the
rest. It is possible to extend the bounding box by setting the Everything within field.
Both meshes and the bounding box eastern_half are displayed below. As expected, elevation_example_with_bbox covers only the
eastern half of the original mesh that lies inside the bounding box.


The clipping only takes place on the East(X)-North(Y) plane. Points with high elevation (Z) that lie above the
bounding box will not be removed.

Navigation: Reference Manual >

Importing Polylines
Leapfrog can import polylines from many formats including:
Datamine (*.asc)
Surpac String (*.str)
Gemcom (*.asc)
Micromine (*.asc, *.str)
MineSight (*.srg)
FracSIS (*.txt)
Gocad (*.pl, *.ts)
AutoDesk DXF (*.dxf)
Leapfrog (*.csv, *.txt)
To import a polyline select Import > Polylines from the Project menu or right-click on the Polylines folder in the project tree and select
Import from the menu.
Navigate to the desired directory, select the polyline file and click the Open button.
If the polyline file is in Gocad or DXF format the importing will start immediately, for all other formats the Polyline Import dialog is displayed
as shown below:


If the polyline file is in one of the standard formats listed above, the default settings can be used and the Import button may be pressed
immediately.

Specifying Polyline Import Parameters


Two pieces of information are required to import a polyline:
1. The columns the polyline vertex coordinates are in
2. How the polyline sections are separated in the file
The vertex coordinate columns are selected by clicking on the heading at the top of a column and selecting one of East (X), North (Y) or
Elev (Z) from the menu that appears.

Polyline sections may be separated in three ways:


1. By rows that do not contain a vertex. These rows either start with a special value or are blank. (Use the option Row: Row starts
with)
2. By numbering each section and specifying the section identifier with each vertex. (Use option Column: Column values are polyline
identifiers)
3. By flagging the first vertex of each section with a special value. (Use option Column: Start new polyline on value)
The Gemcom and Surpac formats use rows that do not contain a vertex. A Gemcom format polyline is shown below:


Gemcom uses empty lines, so the Row starts with text-box is empty. Lines that do not contain a vertex are highlighted in green with a red
line through them.
Here is an example of a Surpac polyline; the separator lines start with 0:

The Datamine format uses polyline section identifiers to separate polyline sections. An example is shown below:


Note that the first column has been assigned to Polyline Separator, to tell Leapfrog which column the section identifiers are in. The first row
of each section is shown in green; rows 17 and 25 in the example above.
The Micromine polyline format includes a vertex index for each section and so new sections are flagged with an index of 1 as shown below:

Note that the fourth column has been assigned to Polyline Separator, to tell Leapfrog which column the vertex indices are in. The Start
new polyline on value text-box has been set to 1 to start sections at vertices with index 1. The first row of each section is shown in green;
rows 1 and 7 in the example above.
Any ASCII polyline format that separates polyline sections in one of these ways can be imported into Leapfrog.
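
To make the three separation schemes concrete, here is a small, hypothetical Python reader that splits a list of parsed rows into polyline sections using any one of the three rules. It is only a sketch of the logic described above, not the importer Leapfrog uses.

def split_sections(rows, mode, column=None, flag_value=None):
    """rows: list of lists of strings, one list per file line.

    mode = 'separator_row': a row separates sections if it is blank or its
                            first field equals flag_value (Gemcom, Surpac).
    mode = 'id_column'    : rows sharing the value in `column` form one section (Datamine).
    mode = 'start_flag'   : a new section starts when `column` equals flag_value (Micromine)."""
    sections, current = [], []
    last_id = object()                      # sentinel that matches no real identifier
    for row in rows:
        if mode == 'separator_row':
            if not row or row[0] == flag_value:
                if current:
                    sections.append(current)
                current = []
                continue
        elif mode == 'id_column':
            if row[column] != last_id:
                if current:
                    sections.append(current)
                current, last_id = [], row[column]
        elif mode == 'start_flag':
            if row[column] == flag_value and current:
                sections.append(current)
                current = []
        current.append(row)
    if current:
        sections.append(current)
    return sections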

Navigation: Reference Manual >

Keyboard Commands
The following keyboard shortcuts apply when the specified part of the application has the focus. To move focus from one area to another
left-click in the area where you want the focus to be.

Application Window


Key Combination: Command

F8: Toggle project tree visibility
F9: Toggle shape list visibility
F10: Display a menu from the menubar. Then use the arrow keys to change menus and navigate menu items, and press Enter to make a selection.
F11: Unsplit scene
Alt-F11: Split scene across top
Ctrl-F11: Split scene at right
Ctrl-S: Save the project
Ctrl-R: Run
Shift+Ctrl-R: Run All
Ctrl-Q: Quit Leapfrog

Project Tree

Key Combination: Command

Arrow keys, Page Up, Page Down: Tree navigation
Ctrl-O or Enter: Open current object (some objects)
F2: Rename current object (some objects)
Alt-Enter: View properties for current object (some objects)
Delete: Delete current object (some objects)
Insert: Copy current object (interpolants only)
Ctrl-F: Search for text in the tree. The tree is expanded as required to display matching rows
+: Expand branch 1 level
Shift-Keypad+: Expand entire branch
-: Collapse branch to current position

Scene

Key Combination: Command

Arrow keys: Rotate the camera. Hold down the Shift key for smaller steps.
Alt and arrow keys: Pan the camera. Hold down the Shift key for smaller steps.
Page-Up, Page-Down: Zoom in and out respectively. Hold down the Shift key for smaller steps.
Home: Reset the camera view
Ctrl-Home: Reset the camera view and the slicing and moving planes
N, S, E, W: Set the view direction to North, South, East or West respectively
U, D: Set the view direction to Up or Down (Plan view) respectively
O, P: Set the view type to Orthographic projection or Perspective respectively
Comma (,), period (.): Move the slicing plane backwards and forwards with the current step distance. Caution: this works even when the slicing plane is turned off; you just won't see the result until the slicing plane is turned on.
L: Set the view to look down on the slicing plane
Shift-L: Look at the slicing plane from the rear
Ctrl-B: Bookmark the current view position
B: Restore the previously bookmarked view

Shape List

Key Combination: Command

Arrow keys: List row navigation
Delete: Remove highlighted objects from the scene

Navigation: Reference Manual >

Merged Intervals Table


Assay and lithology data are often recorded in separate files. In such cases, there will be separate tables for assay and lithology in Leapfrog.
Indeed, even when both assay and lithology data are in the same file, importing the file twice (importing only assay columns the first time
and lithologies second) can be beneficial. However, having separate tables makes it difficult to explore relationships between the
measurements in each table.
To get around this, Leapfrog merges all imported interval tables into a table called merged_intervals. This allows you to create queries that
reference both assay and lithology values.

How tables are merged


The drawing above illustrates assay values for a hole composed of 7 intervals and lithology values for the same hole having 4 intervals
(shown as 4 different colours). The merged_intervals table uses the from and to depths from all tables and, for the example above,
consists of 10 intervals. It has both assay and lithology values associated with each interval.
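
A rough Python sketch of the merge for a single hole (ignoring gaps and assuming both tables cover the same depth range; this is an illustration, not Leapfrog's implementation): collect every from/to depth from both tables, then give each resulting sub-interval the assay and lithology values of whichever source intervals contain it.

def merge_intervals(assay, litho):
    """assay, litho: lists of (from_depth, to_depth, value) for one hole, ordered by depth."""
    breaks = sorted({d for f, t, _ in assay + litho for d in (f, t)})

    def value_at(table, top, bottom):
        for f, t, v in table:
            if f <= top and bottom <= t:        # source interval covers this sub-interval
                return v
        return None

    merged = []
    for top, bottom in zip(breaks[:-1], breaks[1:]):
        merged.append((top, bottom, value_at(assay, top, bottom), value_at(litho, top, bottom)))
    return merged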

Example
The drillhole data given in the directory tutorials\Demo\ has a separate set of assay and lithology files. The lithology table has columns
holeid, from, to and litho. The assay table has columns holeid, sampleid, from, to and Grade.
After import, you should be able to find the automatically generated merged_intervals table as shown below.

Double-click on the merged_intervals and see the table contents.

The holeid, from and to columns are calculated from both the assay and lithology tables. The collar_id column is Leapfrog's internal
identifier for the given holeid. The sampleid and Grade columns are from the assay table and the litho column comes from the lithology
table.


If all the interval tables originated from the same file the merged_intervals table will be identical to the original file,
except for a possible reordering of columns.

Navigation: Reference Manual >

Merging Objects
The Merge Objects command allows you to combine multiple Locations, Polyline or Polyline Values objects into a single Locations object.
This feature may be used to augment measured data with your own interpretation. This is useful for Modelling boundaries of any sort.
The Merge Objects command may be found by right-clicking on a Locations, Values, Polyline or Polyline-Values object in the Project tree
and selecting Merge Objects from the menu as shown below.

The Merge Objects selection dialog is displayed. Select at least two objects from the tree using the check boxes, as shown below, and click
OK.

The Merge Objects dialog is then shown.


The members of the merged points are displayed in the Object list.
To add more objects, click the Add button; this will redisplay the Merge Objects selection dialog above.
To remove an object, click on its name in the Object list and click the Remove button.
In the event of two objects having identical points with differing values, the value from the object appearing last in the list takes precedence.
To move an object up or down in the list, click the arrow buttons. This also changes the order in the default name.
Click OK to create the merged points object. The new merged points object will appear in the same folder as the first Points object or in the
Boundaries folder if all members are polylines. It will run automatically.

Example 1:
Below is the Marvin tutorial data's topography. There is an area in the foreground with no sampled data. Suppose we know there is a dip in
the topography there but don't have survey data available. We can draw the dip with a polyline and then merge it with the existing points.

Here is the polyline representing the dip

Here is the merged points object along with its interpolating surface.


Example 2:
In some infrequent cases, the Interpolate Surface command will return with the error "Could not determine surface from points", or it will
simply produce an incorrect surface. This happens when Leapfrog cannot determine which direction the surface should go through any of the points or when
Leapfrog gets the surface direction at some of the points wrong. Let us suppose this is the case with the Marvin topography.
Start a new polyline and draw some points with lines pointing outward from the surface as shown below. Ensure you are using Draw on
Object mode so that the polyline points lie exactly on the existing data. The lines are drawn in the viewing plane so check also that the view
is perpendicular to the surface you are defining.

Here is another view of the same data.


When you have sparsely covered most of the surface with the polyline, save it. Right-click on the points and choose Merge Objects.
Select the Points Off Surface Values shown under the polyline you have just drawn, as shown below.

Click OK and then click OK again to create the Merged Points.


Note: Normally you would have to check that the polyline occurs before the locations in the list (so that the polyline values have priority) but
in this situation it does not matter.

Notice that this time the Merged Locations has a value associated with each point. These can now be interpolated with the Interpolate -
RBF command using the default settings to generate the topography surface.

Example 3:


During the lifetime of a project, new drillhole data becomes available. Typically, you can use the drillhole data append function to include
newly available drillhole data.
Consider this situation: you don't have the original drillhole data, but have been working with point data (Numeric), and you are given another
set of point data to add to the current project. In previous versions of Leapfrog, such an operation was not possible.
In Leapfrog 2.4, you can simply import the new point set separately and merge it with the old point set. This also allows you to view the different
data sets individually or as a whole.
Right-click on the Numeric Data folder in the Project pane, and import M_Cu_Au.DAT and Mar_Cu_Au.DAT from
tutorials\Marvin\numeric

Au in M_Cu_Au

Au in Mar_Cu_Au

Right-click on one of the Au objects in the Project tree and select Merge Objects from the menu as shown below.


Select the other Au object(s) as shown below and click OK.

Click OK to create the merged object.

You will probably want to rename the newly created merged values as shown below:


Now you can copy any existing interpolants onto the new merged data set.

Navigation: Reference Manual >

Mesh From Moving Plane


A mesh can be created from the moving plane.

Right-click on the Meshes folder in the project and select Mesh From Moving Plane from the menu. This option is only enabled when a
moving plane is visible in the scene.

Alternatively, selecting Processing > Mesh Types > Mesh From Moving Plane in the menu does the same thing.
This will present the Mesh From Plane dialog as shown below.


Select the desired number of vertices in each direction and give the mesh a name. Using a higher number of vertices will result in a higher
resolution when evaluating the mesh.
Click OK to create the plane mesh. The mesh (called plane in this instance) will appear under the Meshes folder. (You can rename it if a
different name is desired)
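
As an aside, the kind of mesh this produces is simply a regular grid of vertices lying on the plane. A hedged numpy sketch (the function and parameter names are assumptions, not Leapfrog's API):

import numpy as np

def plane_mesh(origin, u, v, size, n_vertices):
    """Vertices of an n x n grid on the plane through `origin` spanned by the
    direction vectors u and v. More vertices give a finer grid and therefore a
    higher resolution when the mesh is later evaluated."""
    origin, u, v = (np.asarray(a, dtype=float) for a in (origin, u, v))
    s = np.linspace(-size / 2.0, size / 2.0, n_vertices)
    grid_u, grid_v = np.meshgrid(s, s, indexing="ij")
    return origin + grid_u[..., None] * u + grid_v[..., None] * v   # shape (n, n, 3)

# e.g. a 20 x 20 vertex mesh on a gently dipping plane:
verts = plane_mesh(origin=[0, 0, 100], u=[1, 0, 0], v=[0, 1, -0.1], size=500, n_vertices=20)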

Displaying the mesh confirms that it (red plane) is consistent with the moving plane.

A mesh from a moving plane can be used for clipping data or specifying structural trends.
For instance, if you wish to clip the isosurface cu 0.61 (the green shell in the screenshot below) to remove the portion above the planar mesh, you can
create a domain from this planar mesh and apply the domain to cu 0.61. For details of the techniques involved, refer to the Domaining Tutorial.


Navigation: Reference Manual >

Multiple Views
When using the slicing plane or drawing tool in Leapfrog it is often useful to know the depths of points you are looking at. This can be
achieved by using Multiple Views. This feature provides a view orthogonal to the current view direction above or to the right of the main view
window. The orthogonal views are analogous to the third-angle projections (Main, Plan and Side elevations) used in 2D architectural or
engineering drawings. The biggest difference is that the Main view can be in any direction and the Plan and Side views follow the Main view
in real time.

To split or un-split the scene select one of the menu items shown below:

Selecting Split Right will split the scene vertically as shown below:


You cannot do anything in the secondary view except pan side to side (or up and down if Split Top is selected).
When the slicer is turned on and is roughly parallel with the main view, the orthogonal view will automatically pan to keep the slicer centred
in the view as the slicer is moved, as shown below.


The slicing mode in the orthogonal views is the same as in the main view.

To set the current view parallel to the slicer press L. Press L again to view the slicer from the opposite direction

Using multiple views together with hiding the project tree and/or shape list areas gives you the flexibility to lay out the Leapfrog window in the way
that suits you best. For example, in the image below, the project tree has been hidden (press F8 to turn it on/off) and Split Top has
been chosen for the scene. We can see a north section and a Plan view simultaneously, with the shape list to the right.


Navigation: Reference Manual >

New Interval Selection


New Interval Selection is a handy tool that enables you to manually select subsets of the drillhole data. A tutorial and tips are given in the New Interval Selection section of
the Drillhole Data Import Tutorial.

Navigation: Reference Manual >

Offset To Points
The Interpolate Values and Interpolate Surface commands in Leapfrog use interpolants to generate a mesh which has vertices that do not
coincide with the data points through which the mesh passes. However, in some situations having the data points as mesh vertices is
desirable and can be achieved using the Offset To Points command.
The Offset To Points command may be found by right-clicking on any mesh type object in the project tree and selecting Offset To Points:


Finding the Offset To Points command. The scene window is showing a close-up of a topography surface and the
topography points through which it passes.

This will display the Offset To Points dialog as shown below.

Select the points you wish to offset from the Points combo-box or click the button to select the points from a project tree view.
If the Add mesh vertices at point locations check-box is checked the points are added to the offset mesh as vertices, ensuring that the
mesh honours the points.
If the points you are offsetting contain outliers, or if they extend significantly beyond the mesh extents, tick the Exclude points further than
check-box and specify a distance from the mesh. Points beyond this distance from the mesh will be ignored.
Click OK. The objects generated appear in the project tree:

If the check box Add mesh vertices at point locations is checked, extra triangles are added to the mesh so that vertices appear at each
data point, as shown below.


Navigation: Reference Manual >

Planned Drillholes
To add a planned drillhole to the project, right-click on the Planned Drillholes folder (in the Drillhole Data folder) and select Plan Drillhole.
The Drillhole Planning window will appear, with the planned drillhole placed in the scene:

There are two ways to enter information about a planned drillhole:


By specifying the collar
By specifying the target
To move the collar onto the topography, click on the Move collar onto topography button. A list of topography objects will appear. Select
the one to which you wish to move the collar.
Clicking on the Slice along drillhole button slices the scene along the new drillhole.
When you have entered all the information about the new planned drillhole, click OK. You can also click Next Hole if you wish to add
another drillhole.
Planned drillholes appear in the project tree in the Planned Drillholes folder. You can edit a planned drillhole by right-clicking on it, then
selecting Edit In Scene.

Specifying the Collar


To plan a drillhole by specifying the collar, enter information about the collar location, then enter the Dip at collar, Azimuth, Drift and
Target Depth. You can also click the arrow next to Collar and then click in the scene. You may wish to add topography to the scene to
more accurately locate the collar in the scene.
The Dip at Target and Target values will be calculated automatically.

Specifying the Target


When you plan a drillhole by specifying the target, enter information about the target location in the same manner as when planning a
drillhole by specifying the collar. Information about the collar location will be calculated automatically.
If this option is not available, click the Options button and select Specify Target as the Default Planning Mode.

Navigation: Reference Manual > Planned Drillholes >

Drillhole Planning Options


The settings in the Drillhole Planning Options window determine how you enter information about planned drillholes in the Drillhole
Planning window.

When planning drillholes using the Drillhole Planning window, the Drift value entered applies only to the current drillhole. If you wish to set
a default value for Drift when other planned drillholes are created, enter the required values in the Default Drift fields.
You can also set how far past the target the drillhole ends.
The Default Planning Mode determines whether location information is entered for the collar or for the target.
Click OK. When new planned drillholes are added to the project, the new settings will be used.

Navigation: Reference Manual > Planned Drillholes >

Drilling Prognoses
Planned drillholes can be evaluated against any model in the project. To view drilling prognoses, right-click on a planned drillhole and select
Drilling Prognoses. The Drilling Prognoses window will appear:

You can display prognoses for different models by selecting them from the dropdown list. You can also view a plot of the data by clicking on
the Plot tab:


You can copy the information displayed in the Data and Plot tabs to your computer's clipboard by clicking Copy. The information in the Data
tab will be copied as tab delimited text, which can be copied into a spreadsheet application such as Excel. The plot displayed in the Plot tab
will be copied as a bitmap image.

Navigation: Reference Manual >

Projecting Onto a Surface


In Leapfrog, you can project points data and polylines onto a mesh. For example, you may have a polyline drawn in 2D that you wish to
project onto a topography mesh.
To project an object onto a mesh:
1. Right-click on the object.
2. Select Project Vertically Onto Mesh.
3. In the window that appears, select from the surfaces available in the project:

4. Click OK.
A new object will appear in the same folder of the project tree as the original object.

Navigation: Reference Manual >

Query Filters
Query filters provide a way to select or view a subset of the rows in a table. When used on a collar table, this amounts to selecting collars.
When used on an interval table, measurement intervals are selected.
Query filters can be used for the following tasks:
Selecting rows to display in the table dialog
Creating regions
Restricting which drillholes to display in the scene
Selecting master segments
Selecting areas in which to composite
Defining background areas when extracting assay values

Leapfrog Query Syntax


The Leapfrog query syntax is based on the WHERE clause of the Structured Query Language (SQL), with some restrictions:


Unary operators (as in 'not au > 0.0') are not allowed


SQL functions (as in 'min(au, cu)') cannot be used
The SELECT statement should not be used (as in 'holeid in (SELECT holeid FROM ...)')
The following statements are also prohibited: CASE, WHEN, GLOB, MATCH and CAST
There is also one main SQL extension:
IN and NOT IN will accept a partition group for the value list. E.g. 'zone IN layers.weathered' where "layers" is a partition of the
"zone" column that has a group called 'weathered'.
Examples
Here are some examples of valid Leapfrog query statements:
au > 0.45
cu > 0.35 and rocktype = 'qtz'
holeid in ('m-001', 'm-002')
holeid not in ('m-001', 'm-002')
holeid not like 'MAR%'
holeid in drill_programs.march
(au + cu) > 0.4
(au > 0.5 and pb < 1.3) or (au > 1.0 and pb < 2.1)
au*cu + pb - 3*zn > au*pb/cu -- if you really wanted to!
Non-Numeric Assay Values
In (numeric) assay columns it is common to have non-numeric values such as '<0.01' or 'NS' which mean below detection or not sampled
respectively. Leapfrog permits these non-numeric values in the numeric column but assigns them a numeric value of zero (0) for all
arithmetic and comparisons except equality. This ensures a sensible and consistent result for the <, <=, > and >= comparisons and
arithmetic operations while still allowing the query au = '<0.01' to work as expected.
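
A small Python sketch of this coercion rule, purely as an illustration of the behaviour described (it is not the actual query engine):

def assay_number(value):
    """Numeric value used for arithmetic and ordering comparisons.

    Non-numeric entries such as '<0.01' or 'NS' are treated as zero, while
    equality tests still compare the original text."""
    try:
        return float(value)
    except (TypeError, ValueError):
        return 0.0

# '<0.01' > 0.005  is evaluated as  0.0 > 0.005, which is False,
# but the query  au = '<0.01'  still matches the literal text.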
See the Leapfrog Query Language Syntax topic for a full description of the syntax.

Navigation: Reference Manual > Query Filters >

Query Filter Dialog


To create a query filter, right click on the desired table and select New Query Filter from the menu as shown below.

This displays the Query Filter dialog

Type the desired criteria into the Query text box. Press Ctrl-Enter for a new line in longer queries.
If you are editing an existing query (e.g. double-click on the object under Filters in the tree view) the Apply button will become available.
Clicking Apply or pressing Enter in the Query text-box will immediately send your changes to the server (if there are no syntax errors). If
the query filter is being used in the scene it will update after the query has finished processing.


If you are not an avid typist click on the button to open the Build Query dialog. If a query has already been entered Leapfrog will classify
it as a standard or advanced query and open the appropriate dialog. If the query has an error, it will be classified as an advanced query.

Shortcuts
The Query text-box allows the following shortcut queries:
Typing a single word will match all hole-ids starting with that string. For example, typing "MAR" is a shortcut for "holeid like 'MAR%'"
Typing a comma-separated list of words will match all hole-ids that exactly match the given words. For example typing "M001, M002" is
a shortcut for "holeid in ('M001', 'M002')".

Navigation: Reference Manual > Query Filters >

Build Query Dialog


Many common queries can be built using this dialog. The Build Query dialog aims to be easy to use rather than comprehensive in its
support for the full Leapfrog query syntax, and shields the user from the details of SQL. For more advanced queries, click on the Advanced
button to use the Advanced Query dialog.

To add a criteria, left-click in the Column column in the first empty row to display a list of columns and then select one from the list.

Now select a Test and enter a value. In this example we have entered the criteria Au > 0.5.

What can be entered for the value depends on the types of the Column and Test selected. These are shown in the table below:

Column | Test | Value | Build
Numeric or Assay | any | a number is required | No
Text | any | anything | No
Date | any | a date is required | Yes
Category | is, is not | anything | Yes
Category | in, not in | comma-separated list of values or partition group | Yes
Category | starts with, ends with | anything | Yes

Quotes are not required around text values entered in the value column as they are in SQL.
Suppose we have partitions defined on the zone column, such that a partition 'zone_layers' is composed of two groups 'deep' (MX and PM)
and 'shallow' (OX). Refer to the Manage Partitions tutorial for the related techniques.
Now we wish to build a query that selects the segments whose zone is either MX or PM. The appropriate query is:
m_assays.zone in zone_layers.deep

or
m_assays.zone in ('MX', 'PM')

You can select zone_layers.deep from the drop-down combo box.

When you don't have the partitions defined as above, press the button to select zone values from a list as shown below:

Double-click or click-and-drag values in the Available list to move them to the Selected list or highlight the desired values and use the
arrow buttons. In this example we have selected MX and PM as shown below:

Note that Leapfrog will add quotes and brackets to the value "MX, PM" to make a valid SQL list of strings "('MX', 'PM')" when the query is
saved.
The Apply button will apply the query to the context in which the dialog was opened. If the dialog was opened from a drillhole table dialog
then clicking the Apply button will display the rows matching the current query in the table dialog. If the dialog was opened from the project
tree (via the Query Filter dialog) then clicking the Apply button will save the query to the server. If the query filter is being used in the
scene, the scene will update once the query has been processed.
Use the Delete button to delete the criteria in the selected row. Use the Add button to add a blank row.
If you need more flexibility in your query than this dialog allows, click the Advanced button.

Navigation: Reference Manual > Query Filters >

Advanced Query Dialog


For the brave of heart, Leapfrog provides a powerful and flexible advanced query builder. The Advanced Query dialog allows the user to use
the full power of the Leapfrog query language.

Dialog Overview

The query is entered in the Criteria to Match area on the left. The query is displayed as a tree structure with AND and OR terms as the
branch nodes and conditions as the leaves. This will be illustrated later. Use the Delete button to delete a row from the query and the Add
button to add a blank row. The Check button can be used at any time to check if the current query statement is valid. Below the buttons is a
box showing the raw SQL form of the query.
Except for the arrow buttons, the buttons down the middle of the dialog are used for entering values into the query. The arrow buttons are
used for moving the currently selected criteria to a different position in the query. The Date..., List... and Value... buttons will open a builder
dialog for the column selected in the current row. If there is no column found or the column is of the wrong type an error message is
displayed.
The tree on the right contains all the columns available to the query. Double-click on a column name to insert it into the query. Below the
columns are listed all the partition groups defined on the table (and also on the collar table). Double-clicking on a group name will insert it
into the query. If the current criteria is empty the full criteria text is added. For example, clicking on holeid > drilling_program > M will add
'holeid in drilling_program.M' to the query.
For a description of the Apply button see the end of the Build Query Dialog topic.
Differences Between Advanced and Basic Query Dialogs
The starts with and ends with tests in the Basic Query dialog are not available as they are special cases of a LIKE test. The full LIKE
expression must be used instead.
In the Basic Query dialog dates are shown using the current locale settings. In the Advanced Query dialog dates are written in single quotes
using 'YYYY-MM-DD' format. For example, '2007-02-25'.
In the Basic Query dialog quotes are added around textual values for you; in the Advanced Query dialog all textual values must be enclosed
in single quotes.

Constructing a Query
This is best explained by example. Here we will use the Marvin assay table provided in the sample data directory.
Example 1
We will construct the query Au > 0.5 and Cu > 0.5. Note that a row containing the 'AND' is already present; we will add our criteria
below this in the tree. We'll start by following the steps shown in the picture below:


Finally press Enter or Tab to finish editing.


Note that the query text at the bottom is still just 'Au > 0.5' as there is nothing to AND it with.
Now we can enter the second part of our query 'Cu > 0.5' following the same steps as above:

We are finished!
If the arrow on the first line is toggled the query is collapsed onto one line as shown below

Example 2
If you initially find the tree layout confusing, you can type your query in one line and then expand it.
To do this, first collapse the first line - the 'AND' text will disappear as shown below.

Then type cu > 0.5 and (Au > 0.5 or holeid = 'm001').

To see how this is laid out in the tree, expand the arrows:

We will now enter the same query using the tree interface starting again with an empty query list.
1) Select the first row and click the Delete button. This will clear the query and place the Boolean 'AND' value in the first row.


2) Using the symbol buttons and column list or by typing in the entry field, enter cu > 0.5 for the first condition and press Enter
3) Then enter the second condition Au > 0.5

4) Select the last condition and click the OR button as shown below:

A new OR branch is added to the current tree and the cursor is placed ready to add a new condition

5) enter the final condition: holeid = 'm001'

We are finished. Steps 3 and 4 may be reversed if desired.

Rearranging a Query
Suppose we want to change the previous query to be Cu > 0.5 AND au > 0.5 AND holeid = 'm001'. To do this we need to move the last 2
conditions out from under the OR node. Select the row containing holeid = 'm001' and click the arrow button.


Repeat with the row containing Au > 0.5. You will see that the OR entry is automatically removed from the tree.
To change an AND branch to an OR branch (or vice versa), select the desired row and click the AND or OR button to set the desired value
as shown below.
becomes

Navigation: Reference Manual > Query Filters > Advanced Query Dialog >

Leapfrog Query Language Syntax

Query expressions must obey the following syntax:

expr ::= expr binary-op expr |
         expr [NOT] like-op expr |
         ( expr ) |
         column-name |
         table-name . column-name |
         literal-value |
         expr ISNULL |
         expr NOTNULL |
         expr [NOT] BETWEEN expr AND expr |
         expr [NOT] IN ( value-list )

like-op ::= LIKE | REGEXP

The Leapfrog Query Language understands the following binary operators, in order from highest to lowest precedence:
* / %
+ -
< <= > >=
= != IN
AND OR

The operator % outputs the remainder of its left operand modulo its right operand.

The result of any binary operator is a numeric value.


Note:

A literal value is an integer number or a floating point number. A literal value can also be the token "NULL".

Scientific notation is supported.

The "." (dot) character is always used as the decimal point even if the locale setting specifies "," for this role. The use of "," for the
decimal point would result in syntactic ambiguity.


A string constant is formed by enclosing the string in single quotes (' '). A single quote within the string can be encoded by putting
two single quotes in a row - as in 'Joe says ''Hello'' to Fred'.
The LIKE operator is used for pattern matching comparisons. The operand to the right contains the pattern. The left hand operand
contains the string to match against the pattern. The percent symbol '%' in the pattern is used to match any sequence of zero or
more characters in the string. An underscore '_' in the pattern matches any single character in the string. Any other character
matches itself or its lower/upper case equivalent (i.e. case-insensitive matching). Due to limitations in the underlying database,
Leapfrog only understands upper/lower case for 7-bit Latin characters. Hence the LIKE operator is case sensitive for 8-bit iso8859
characters or UTF-8 characters. For example, the expression 'a' LIKE 'A' is TRUE, but the corresponding comparison between an accented character and its upper-case form is FALSE.
The REGEXP operator is used for regular expression pattern matching comparison. See the next topic for the syntax.
Navigation: Reference Manual > Query Filters > Advanced Query Dialog >

Regular Expression Syntax


Regular expressions (RE's) can contain both special and ordinary characters. Most ordinary characters, like "A", "a", or "0", are the
simplest regular expressions; they simply match themselves. You can concatenate ordinary characters, so last matches the string
'last'. (In the rest of this section, we'll write RE's in this special style, usually without quotes, and strings to be matched 'in single quotes'.)

Some characters, like "|" or "(", are special. Special characters either stand for classes of ordinary characters, or affect how the
regular expressions around them are interpreted.
Special Characters
"."
(Dot.) This matches any character except a new line.
"^"
(Caret.) Matches the start of a string
"$"
Matches the end of the string or just before the new line at the end of the string. Since Leapfrog always matches the entire string,
using "$" is not recommended (and probably won't work); however, it is still a special character and needs to be treated as
such if you wish to use a literal "$".
"*"
Causes the resulting RE to match 0 or more repetitions of the preceding RE, as many repetitions as are possible. ab* will match
'a', 'ab', or 'a' followed by any number of 'b's.
"+"
Causes the resulting RE to match 1 or more repetitions of the preceding RE. ab+ will match 'a' followed by any non-zero number
of 'b's; it will not match just 'a'.
"?"
Causes the resulting RE to match 0 or 1 repetitions of the preceding RE. ab? will match either 'a' or 'ab'.
*?, +?, ??
The "*", "+", and "?" qualifiers are all greedy; they match as much text as possible. Sometimes this behaviour isn't desired; if the
RE <.*> is matched against '<H1>title</H1>', it will match the entire string, and not just '<H1>'. Adding "?" after the qualifier makes it
perform the match in non-greedy or minimal fashion; as few characters as possible will be matched. Using .*? in the previous
expression will match only '<H1>'.
{m}
Specifies that exactly m copies of the previous RE should be matched; fewer matches cause the entire RE not to match. For
example, a{6} will match exactly six "a" characters, but not five.
{m,n}
Causes the resulting RE to match from m to n repetitions of the preceding RE, attempting to match as many repetitions as
possible. For example, a{3,5} will match from 3 to 5 "a" characters. Omitting m specifies a lower bound of zero, and omitting n
specifies an infinite upper bound. As an example, a{4,}b will match aaaab or a thousand "a" characters followed by a b, but not
aaab. The comma may not be omitted or the modifier would be confused with the previously described form.
{m,n}?
Causes the resulting RE to match from m to n repetitions of the preceding RE, attempting to match as few repetitions as
possible. This is the non-greedy version of the previous qualifier. For example, on the 6-character string 'aaaaaa', a{3,5} will match
5 "a" characters, while a{3,5}? will only match 3 characters.
"\"
Either escapes special characters (permitting you to match characters like "*", "?", and so forth), or signals a special sequence;
special sequences are discussed below.
[]
Used to indicate a set of characters. Characters can be listed individually, or a range of characters can be indicated by giving
two characters and separating them by a "-". Special characters are not active inside sets. For example, [akm$] will match any of
the characters "a", "k", "m", or "$"; [a-z] will match any lowercase letter, and [a-zA-Z0-9] matches any letter or digit. Character
classes such as \w or \S (defined below) are also acceptable inside a range. If you want to include a "]" or a "-" inside a set,
precede it with a backslash, or place it as the first character. The pattern []] will match ']', for example.
You can match the characters not within a range by complementing the set. This is indicated by including a "^" as the first
character of the set; "^" elsewhere will simply match the "^" character. For example, [^5] will match any character except "5", and
[^^] will match any character except "^".
"|"
A|B, where A and B can be arbitrary REs, creates a regular expression that will match either A or B. An arbitrary number of REs
can be separated by the "|" in this way. As the target string is scanned, REs separated by "|" are tried from left to right. When
one pattern completely matches, that branch is accepted. This means that once A matches, B will not be tested further, even if it
would produce a longer overall match. In other words, the "|" operator is never greedy. To match a literal "|", use \|, or enclose it
inside a character class, as in [|].
(...)
Matches whatever regular expression is inside the parentheses as a single RE group. To match the literals "(" or ")", use \( or \),
or enclose them inside a character class: [(] [)].
(?...)
This is an extension notation (a "?" following a "(" is not meaningful otherwise). The first character after the "?" determines what
the meaning and further syntax of the construct is. Following are the currently supported extensions.
(?P<name>...)
Similar to regular parentheses, but the substring matched by the group is accessible via the symbolic group name name. Group
names must be valid Python identifiers, and each group name must be defined only once within a regular expression. A
symbolic group is also a numbered group, just as if the group were not named. So the group named 'id' in the example above
can also be referenced as the numbered group 1.
For example, if the pattern is (?P<id>[a-zA-Z_]\w*), the group can be referenced by its name in pattern text (for example, (?P=id)).
(?P=name)
Matches whatever text was matched by the earlier group named name.
(?#...)
A comment; the contents of the parentheses are simply ignored.
(?=...)
Matches if ... matches next, but doesn't consume any of the string. This is called a lookahead assertion. For example,
Isaac (?=Asimov) will match 'Isaac ' only if it's followed by 'Asimov'.
(?!...)
Matches if ... doesn't match next. This is a negative lookahead assertion. For example, Isaac (?!Asimov) will match 'Isaac ' only if it's
not followed by 'Asimov'.
(?<=...)
Matches if the current position in the string is preceded by a match for ... that ends at the current position. This is called a
positive lookbehind assertion. (?<=abc)def will find a match in "abcdef", since the lookbehind will back up 3 characters and check if
the contained pattern matches. The contained pattern must only match strings of some fixed length, meaning that abc or a|b are
allowed, but a* and a{3,4} are not. Note that patterns which start with positive lookbehind assertions will never match at the
beginning of the string being searched.
(?<!...)
Matches if the current position in the string is not preceded by a match for .... This is called a negative lookbehind assertion.
Similar to positive lookbehind assertions, the contained pattern must only match strings of some fixed length. Patterns which
start with negative lookbehind assertions may match at the beginning of the string being searched.
(?(id/name)yes-pattern|no-pattern)
Will try to match with yes-pattern if the group with given id or name exists, and with no-pattern if it doesn't. |no-pattern is optional and
can be omitted. For example, (<)?(\w+@\w+(?:\.\w+)+)(?(1)>) is a poor email matching pattern, which will match with '<user@host.com>'
as well as 'user@host.com', but not with '<user@host.com'.
Special Sequences
Special sequences consist of "\" followed by a character from the list below. If the character is not on the list, the resulting RE
simply matches that character; for example, \$ matches the character "$".
\number
Matches the contents of the group of the same number. Groups are numbered starting from 1. For example, (.+) \1 matches
'the the' or '55 55', but not 'the end' (note the space after the group). This special sequence can only be used to match a maximum of 99
groups (\99). If the first digit of number is 0 (\02), or number is 3 octal digits long (\123), it will not be interpreted as a group match,
but as the character with octal value number ('2' and '123' respectively). Inside the "[" and "]" of a character class, all numeric
escapes are treated as characters.
\A
Matches only at the start of the string.
\b
Matches the empty string, but only at the beginning or end of a word. A word is defined as a sequence of alphanumeric or
underscore characters, so the end of a word is indicated by whitespace or a non-alphanumeric, non-underscore character.
Inside a character range, \b represents the backspace character.
\B
Matches the empty string, but only when it is not at the beginning or end of a word.
\d
Matches any decimal digit; this is equivalent to the set [0-9].
\D
Matches any non-digit character; this is equivalent to the set [^0-9].
\s
Matches any white space character; this is equivalent to the set [ \t\n\r\f\v].
\S
Matches any non-white space character; this is equivalent to the set [^ \t\n\r\f\v].
\w
Matches any alphanumeric character and the underscore; this is equivalent to the set [a-zA-Z0-9_].
\W
Matches any character that is not an alphanumeric character or the underscore; this is equivalent to the set [^a-zA-Z0-9_].
\Z
Matches only at the end of the string.
See the next topic for examples.

Navigation: Reference Manual > Query Filters > Advanced Query Dialog > Regular Expression Syntax >


Regular Expression Examples


Regular Expression Matches
'M0.*' anything starting with 'M0' - same as: LIKE 'M0%'
'[a-e].*' anything starting with 'a', 'b', 'c', 'd' or 'e'
'b\w{5}' 6-letter words starting with 'b' (may contain '_'s)
'\w+' any single word with 1 or more characters
'...\$' 4-letter codes ending with '$' - same as: LIKE '___$'
'pqr\d' 'pqr' followed by a digit
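
The syntax above follows the Python re module, and Leapfrog matches a pattern against the entire string. If you want to experiment with a pattern before using it in a query filter, a standalone Python sketch such as the following can help. It is only an illustration: the sample values are invented, and re.fullmatch is used to mimic whole-string matching.

import re

# Patterns from the table above, tested against some invented sample values.
# re.fullmatch is used because Leapfrog matches the entire string.
samples = ["M001", "beach", "band_x", "ABC$", "pqr7", "M0", "hello world"]

patterns = {
    r"M0.*": "anything starting with 'M0'",
    r"[a-e].*": "anything starting with 'a'-'e'",
    r"b\w{5}": "6-letter words starting with 'b'",
    r"\w+": "any single word",
    r"...\$": "4-letter codes ending with '$'",
    r"pqr\d": "'pqr' followed by a digit",
}

for pattern, description in patterns.items():
    matches = [s for s in samples if re.fullmatch(pattern, s)]
    print(f"{pattern:12} ({description}): {matches}")

# Greedy vs non-greedy quantifiers:
print(re.match(r"<.*>", "<H1>title</H1>").group())   # '<H1>title</H1>'
print(re.match(r"<.*?>", "<H1>title</H1>").group())  # '<H1>'

# Named groups and back-references:
m = re.fullmatch(r"(?P<id>[a-zA-Z_]\w*)=(?P=id)", "alpha=alpha")
print(m.group("id"))                                 # 'alpha'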

Navigation: Reference Manual >

Reload Data
Reloading data is necessary when the imported data has been modified externally. Drillhole data can be reloaded following the same procedure as
the original import. Leapfrog retains the table structure and refreshes the data contained in the tables, so you do not need to
re-assign the data type for each column, select which columns to import, and so on.

Navigation: Reference Manual >

Rendering Image
You can take a screenshot of the scene in Leapfrog and save it as an image file.
Select Scene > Render Image to bring up the Render Image window which will automatically render the scene with the default settings.

You can define the size of the rendered image by changing the Size fields. With a higher Supersampling rate, you can expect better anti-
aliasing when the rendered image is enlarged.
When the parameters are set, click on the Render button to capture the scene.
If you deselect the Keep aspect option and render the image, it will either be cropped or have a blank border added.
If you are satisfied with the captured image, click on the Save button and save it as a PNG or JPG file.

Navigation: Reference Manual >

Surface Values
(You are expected to have mastered the Interpolation with Structural Trend Tutorial and the Topography Tutorial.)
Non-numeric data is not particularly suitable for interpolation. When interpolating non-numeric values, Leapfrog places extra off-surface
points above and below each value point, so that the value points are joined together to form a better-defined surface. For example, let us
consider the topography data set, Topo.

Select to view Topo and show the point data as spheres.


Now, view Topo and adjust the colour appropriately to get something similar to the following view. This is a classified points set,
containing three different types of points. In this example, the orange points correspond to the yellow points shown in the previous screenshot.
The red and blue points are the extra off-surface points.

When slightly rotated as shown below, we have a well-separated view of the three different types of points. The red and blue points are
automatically generated by Leapfrog. The orange points are "sandwiched" between the red and blue points, which ensures that the
interpolated surface of the orange points will lie neatly between them.

The default settings for generating classified points sets work well in most cases. However, if there is a need to tune the settings,
you can open the Surface Values dialog and alter the default parameters.


Surface Normals Tab


In the Surface Normals tab, you can specify a structural trend and "blend" it with the surface to be produced. The list will contain all the
available structural trends. The default is None. When a structural trend is selected, its normal vectors will be taken and paired with the
actual normals of the surface.
If the angle between the two orientations is less than the specified angle (45 degrees in the screenshot below), the two normal vectors will be
"blended" and their average will be used. Otherwise, if the two normals differ by more than the specified angle, the normal of the trend will be ignored and
the normal of the surface will be used as it is.
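
As a rough illustration only (the actual Leapfrog implementation is not exposed; the function name and example vectors below are invented), the blending rule described above can be sketched in Python as follows:

import numpy as np

def blend_normal(surface_normal, trend_normal, max_angle_deg=45.0):
    # Sketch of the blending rule described above; not Leapfrog's actual code.
    # If the angle between the surface normal and the trend normal is below the
    # threshold, return the normalised average of the two; otherwise keep the
    # surface normal unchanged.
    s = np.array(surface_normal, dtype=float)
    t = np.array(trend_normal, dtype=float)
    s /= np.linalg.norm(s)
    t /= np.linalg.norm(t)
    angle = np.degrees(np.arccos(np.clip(np.dot(s, t), -1.0, 1.0)))
    if angle < max_angle_deg:
        blended = s + t
        return blended / np.linalg.norm(blended)
    return s

# A trend normal 30 degrees from the surface normal is blended; one 80 degrees away is not.
print(blend_normal([0, 0, 1], [0, np.sin(np.radians(30)), np.cos(np.radians(30))], 45))
print(blend_normal([0, 0, 1], [0, np.sin(np.radians(80)), np.cos(np.radians(80))], 45))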

We use trend (refer to the Interpolation with Structural Trend Tutorial if it is not available in the project) as the structural trend, and 60 degrees
for blending. We create Topo_blended, a classified points set. Subtle differences from Topo may be observed.

Topo Topo_blended


Points and Values Tab


In the Points and Values tab, you can manipulate the way the off-surface points (red and blue points in the classified points set) are
produced. Initially, Leapfrog uses a default setting. If you would like to use a manual setting, you can change the offset distance and the
ratio of off-surface points and surface points. The offset distance determines how far an off-surface point will be placed away from the
surface.
If a greater offset distance is used, the surface is expected to become bumpier, with more ups and downs, as the "sandwich" created by the
off-surface points above and below will have a weaker repelling effect.
You can also set the ratio of on- and off-surface points. A larger value means fewer off-surface points, so they will be placed more sparsely;
this may also weaken the repelling effect and therefore produce a bumpier surface.
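
The following Python sketch illustrates the idea only; it is not Leapfrog's implementation, and the function, offset and ratio values are invented. Off-surface points are generated above and below a subset of the surface points, at the chosen offset distance along each point's normal.

import numpy as np

def make_off_surface_points(points, normals, offset=10.0, ratio=4):
    # Illustrative sketch only: generate off-surface points above and below
    # every 'ratio'-th surface point, at the given offset distance along its
    # normal. Larger offsets or larger ratios constrain the surface less.
    points = np.asarray(points, dtype=float)
    normals = np.asarray(normals, dtype=float)
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    subset = slice(None, None, ratio)          # one in every 'ratio' points
    above = points[subset] + offset * normals[subset]
    below = points[subset] - offset * normals[subset]
    return above, below

# Example with a small synthetic set of topography-like points:
pts = np.array([[0.0, 0.0, 100.0], [10.0, 0.0, 102.0], [20.0, 0.0, 101.0], [30.0, 0.0, 99.0]])
nrm = np.tile([0.0, 0.0, 1.0], (len(pts), 1))  # roughly vertical normals
above, below = make_off_surface_points(pts, nrm, offset=5.0, ratio=2)
print(above)
print(below)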


Navigation: Reference Manual >

Table of Special Assay Values


Leapfrog uses a data type called Assay to handle non-numeric values in assay data. In previous versions, assay values
were imported as numerical data. While assay values are numeric by nature, it is not guaranteed that the assay data to be imported will be
100% numeric. For example, some people prefer to use a non-numeric value 'na' instead of a numeric value 0.0 to represent "not available".
In the Marvin example demonstrated in the User's Guide, the non-numeric values '<0.02' and '<0.01' were discovered in the assay data, meaning
below the detection limit for Au and Cu respectively.
The new data type, Assay, accepts either non-numeric or numeric values as they are, and interprets the actual meaning of non-numeric
values according to a supplied look-up table for Special Assay Values.
In the demonstration given in the User's Guide, Special Assay Values were not supplied during the drillhole data import, but manually
assigned during the error fixing stage. In this section, we show how a table for Special Assay Values is prepared and supplied at the same
time while the drillhole data is being imported.

Preparation of Special Assay Values


The table for Special Assay Values consists of 3 columns, Column, Code and Description, formatted in CSV style:
Column: name of the mineral
Code: non-numerical value used in the assay data
Description: The meaning of the non-numerical value. You can create your own meanings or use one of the three built-in values:
1. Below Detection: The grade value is smaller than the precision of the detecting device, hence not reliable.
2. Lost Core: The interval was sampled, but the core is not available.
3. Not Sampled: The interval has not been analysed, thus no value is available.

The most convenient way to create a CSV file is through spreadsheet software such as Excel, OpenOffice Calc or Gnumeric. Make sure
you save or export in CSV format.
As an example, we can create a table as shown below for the two errors related to Special Assay Values that occurred in the Marvin
example.
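
As a sketch only, such a file can also be written with a few lines of Python. Here we assume the Marvin assay columns are named Au and Cu; use the column names that actually appear in your own assay file.

import csv

# Assumption: the Marvin assay columns are named 'Au' and 'Cu'. The codes and
# descriptions are the below-detection values mentioned above.
rows = [
    ("Column", "Code", "Description"),
    ("Au", "<0.02", "Below Detection"),
    ("Cu", "<0.01", "Below Detection"),
]

with open("special_assay_values.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)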


Optionally, an auxiliary column named Table can be prepared in addition to the 3 columns described above. This extra column can be used
when there are multiple assay tables and a special assay value is specific to a certain table.

Import the table for Special Assay Values


Let us repeat the drillhole data import with the Marvin example. Specify the file containing the table for Special Assay Values as shown
below. Let us suppose we load special_assay_values.csv.

Follow the usual steps involved in the drillhole data import. At the end of the usual import steps, you will be prompted to confirm the use of
the Special Assay Values.

If special_assay_values2.csv is loaded instead, the above screen should look like this:


Notice that the imported drillhole data contains no errors.

View and edit registered special values


You can right-click on m_assays and select Special Assay Values to view or edit all the registered special values.

This brings up the following dialog, where you can set the meaning of the special value.


Navigation: Reference Manual >

Vein Modelling

Overview
Veins are particularly important to mineral deposits, since the sources of mineralisation are often located in or near the veins.
In this topic, you will learn:
How to extract vein walls from drillhole data and interval selections
How to interpolate and make vein meshes

Navigation: Reference Manual > Vein Modelling >

Extract Vein Walls

Extracting Vein Walls From Drillhole Data


In the following demonstration, we extract vein walls from the B1 lithology layer in the Demo data set. (You will have to start a new project
and import the drillhole data from the tutorials\Demo directory.)

Right-click on lithology under Drillhole Data and select Extract Points>Vein Walls. Change the setting as shown below.


By default, the Join intervals to make consistent with trend option is enabled. You may expect a good result with this enabled in most
cases. See below for details.
The most critical step is the Orient vein segments setting, which determines which side of the vein a point belongs to. If you click
on OK without specifying anything, you will get an error message.

So a better estimate is required. The fairly regular shape of the B1 layer means that we can use the moving plane and a global trend. For
more complicated orientation settings, you may choose to use a Structural Trend. See Interpolation with Structural Trend Tutorial in the
User's Guide and Tutorials for details on the structural trend.

If a structural trend is used, the Strength parameter isn't very critical.

Click on View Plane and show the plane in the scene window.

Move the plane around in the scene so that it lies between the top and bottom ends of most of the B1 segments. This does not
need to be highly accurate.


Back in the dialog, click on the Set From Plane button. This will copy the current setting of the moving plane. Leapfrog will automatically
assign suitable values for Ellipsoid Ratios. Click on OK to accept this setting and dismiss the dialog.

Run the project. You should find a new entry, B1_vein, including B1_vein footwall and B1_vein hangingwall, under the
Boundaries folder. The sub-categories, footwall and hangingwall, represent the group of points on each side of the vein:

Why is "Orient Vein Segment" vital?

The following surface was produced by interpolating the combined point set of ox_vein footwall and ox_vein
hangingwall using the Marvin data and the OX layer.
As there is no distinction between points lying on one side of the vein and points on the other, the
interpolation engine assumes that all the points lie on a single surface. The final product is an open, bumpy surface,
which is obviously not what we want.


Display B1_vein footwall and B1_vein hangingwall and adjust their colour scheme appropriately. The light-green points at the top are
B1_vein footwall, and the orange points below correspond to B1_vein hangingwall.

If you wish to proceed to the interpolation of the vein wall points, jump to the Interpolate Vein Walls subtopic.

Extracting Vein Walls From an Interval Selection


Vein walls can be directly extracted from an interval selection table. Suppose we have an interval selection, a subset of the B1 lithology layer, as
shown below. For details on the interval selection, see New Interval Selection in the Drillhole Data Import Tutorial.


There are two paths to extract the vein walls from an interval selection.
Right-click on the interval selection from which you wish to extract the vein walls, and select Extract Vein Walls.

When the Extract Vein Walls window appears, you specify from which subset of the interval selection you wish to extract the vein. In this
case, Subset 1.

Alternatively, you can select Extract Points>Vein Walls from the parent table, lithology:


When the Extract Vein Walls window appears, choose to Define region using a Selection and select Subset 1 as the Subset to use.

Either way, it will extract footwall and hanging wall points.

What is "Join intervals to make consistent with trend"?

When there are multiple intervals in the same drillhole, this often causes difficulties for the interpolation; for
example, you may be using a global trend and have two intervals with the same orientation in the same
drillhole. With the Join intervals to make consistent with trend option enabled, these two intervals are merged
and regarded as a single interval with no gaps. This helps classify footwall and hangingwall points in a
cluttered set of data, and usually ensures a better result.

The joining operation is only performed when doing so conforms to the given trend. Consider the second
diagram. Here, the two intervals are oriented in opposite directions. Each is consistent with the structural trend
used, but joining them would break that consistency, so they are not joined.


Navigation: Reference Manual > Vein Modelling >

Interpolate Vein Walls


From the two sets of contact points that represent each side of the vein structure, together with one surface defining one side (or the medial surface)
of the point sets, we can make a vein.
First, we need to interpolate a surface from either point set. Here, we interpolate B1_vein footwall to obtain its surface (right-click and select
Surface Interpolant). This creates B1_vein footwall Surface. The red (top)/blue (bottom) surface shown in the scene window corresponds
to the B1_vein footwall Surface.
Now, right-click on the B1_vein footwall Surface and select New Vein.


This will display the Make Vein window:

Select the contact points for each side of the vein from the combo-boxes at the top or, alternatively, click the button to select the points
from the project tree view. The First side points and Other side points can be two separate point sets (as shown) or a single data set.
Normally the sets would refer to a foot wall and a hanging wall. The order of the two does not matter.
Use the Ensure thickness option to force the vein to maintain a minimum thickness. Be careful to choose a value less than the minimum
distance between the footwall and hangingwall contact points. See below for more about this option.
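If you are unsure what a safe value is, a quick check outside Leapfrog is to compute the smallest separation between the two contact-point sets. The following is only an illustrative sketch; it assumes the footwall and hangingwall points have been exported to CSV files with x, y, z columns, and the file names are placeholders.

import numpy as np
from scipy.spatial import cKDTree

# Placeholder file names: exported footwall and hangingwall contact points,
# one point per row with x, y, z in the first three columns.
footwall = np.loadtxt("footwall.csv", delimiter=",", skiprows=1, usecols=(0, 1, 2))
hangingwall = np.loadtxt("hangingwall.csv", delimiter=",", skiprows=1, usecols=(0, 1, 2))

# Smallest footwall-to-hangingwall separation; pick an Ensure thickness value
# comfortably below this to avoid dimples in the vein mesh.
distances, _ = cKDTree(hangingwall).query(footwall)
print("Minimum wall separation:", distances.min())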
Check the Add mesh vertices to point locations checkbox to add mesh vertices at the data points. By default, this option is not selected.
Tick the Exclude points further than check-box and specify a distance (we selected 4 in this example); foot wall and hanging wall points
further than that distance from the mesh will be ignored. Use this option if the footwall or hangingwall points contain outliers, or if
they extend significantly beyond the mesh extents. This option can often affect the quality of the vein model. See below for more about this
option.
When you are happy with the settings click the OK button.
The offset interpolants generated (Footwall and Hangingwall offset values) are used to calculate the thickness of the vein (there will be
one interpolant if a single point set is used). The last object, Vein from B1_vein footwall Surface, is the final product, the vein model.
Display this mesh:


Display the vein mesh with the Thickness evaluation. You can easily distinguish between the thinner area (blue) and the thicker area (red).

Notes on the Ensure Thickness Option


When footwall and hangingwall points are in pairs, this option is usually not required. However, you may still get situations where the vein
intersects itself, in which case it is required. This is easily detected, as shown in the picture below: the regions where the vein has intersected
itself will show up in the back-side colour of the mesh instead of the front colour.

Vein with self-intersecting regions. These are shown in blue.

If you see dimples in the resulting vein mesh this is usually because the specified minimum thickness is larger than the distance between
the contact points and Add mesh vertices to point locations is selected. To remedy this, reduce the minimum thickness.


Vein with dimples caused by a minimum thickness greater than the vein
width.

See also the Combined Interpolants Example.

Notes on the "Exclude points further than..." Option


If the Exclude points further than... option is disabled, as below, what effect does this have?

The vein mesh will include all the foot wall and hanging wall points, regardless of their distance from the mesh that we wish to create. The
screenshot of the vein mesh we created above shows that many points lie far from the vein mesh. When this option is disabled, all
of these distant points are included, and we obtain a vein mesh similar to the one below.

There are a number of inverted blue faces in this vein mesh. This issue is addressed below.

Limitations
In some circumstances, the vein mesh may contain inverted faces. This can occur when the detail in the surface (the size of any lumps or
folds etc) from which the vein is made is of a similar magnitude to the vein width.
Inverted faces will appear as the back colour instead of the expected front colour as illustrated below.


If you encounter this problem you can try the following work-arounds:
Decrease the resolution of the mesh (i.e. increase the resolution value).
Create a medial mesh by interpolating the foot wall and hanging wall and then use the Combine Interpolants command to create an
average of the two. Isosurface the combined interpolant at 0.0 and create the vein from that surface. See the Combined Interpolants
Example for details.
If the surface is predominantly concave on one side, use that side to create the vein.
If the surface has dimples, try adding some Nugget to the interpolant to smooth the surface. Open the properties dialog of the interpolant
and look at the histogram. A good first guess at a nugget value is around 25% of the largest value (see the sketch after this list). Double-click
the interpolant and go to the Variogram tab to set the nugget value.
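
As a rough illustration of that rule of thumb only (values.csv is a placeholder name, assuming the interpolant's data values have been exported to a single-column file):

import numpy as np

# Placeholder: values.csv holds one column of the interpolant's data values.
values = np.loadtxt("values.csv")
print("First-guess nugget:", 0.25 * values.max())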
