
Ore Body Modelling - Concepts and Techniques

DR. SURESH PANDEY
pandeysuresh50@gmail.com
+91-9534062256

1. Introduction
A key point in the design and operation of a mine is the construction of what is called
an ore body model. The proper description of an ore body is the foundation upon which
follow-up mine decisions are taken. An ore body model has three distinct components,
viz.
(i) the physical geometry of the geologic units that formed and host the ore body;
(ii) the attribute characterisation in terms of assays and geo-mechanical properties of
all materials to be mined; and
(iii) the value model in terms of economic mining of the ore body.

The ore body model is constructed by interpolating between sample points and
extrapolating onto the volume beyond sample limits. The modelling depends on
considerations such as sampling methods, reliability of data, specific purpose of
estimation and required accuracy. The basic concept of ore body modelling is to conceive
the entire ore body as an array of blocks arranged in a three dimensional X Y Z grid
system (X representing Easting, Y representing Northing and Z representing Elevation)
by making certain assumptions about the continuity of the ore body parameters. Each
block of uniform size represents a small volume of material to which the value of width,
grade, tonnage and other geological entities are assigned. There are four conditions that
an ore body model must satisfy, viz.
(i) the parameters of a model chosen should allow estimation to be made;
(ii) the model must be able to provide an answer to a relevant question;
(iii) the model must be compatible with data; and
(iv) the predictions of the model should be verified or checked by experience.

As a prerequisite to ore body modelling, it is necessary to identify geological domains of homogeneity within which the modelling should be carried out.

The range of modelling practices available can be grouped under two broad
categories, viz. (i) Conventional; and (ii) Geomathematical. These techniques exist as
computer packages in some form or other to provide a high-speed computation of a large
number of blocks.
2. Conventional methods
The conventional methods of modelling may be described as tools for the quantitative and qualitative estimation of a deposit based on its geometry and sample configuration. Methods under conventional techniques include polygonal, triangular, sectional, random stratified grid and contouring methods (Table 1 and Fig. 1). These methods rely on zonation of a deposit: arithmetic calculations are applied to arrive at estimates of the quality or quantity of the mineral property within the influence zone of each drill hole, as determined by the deposit geometry and drill-hole configuration. Aggregating the individual tonnage estimates for each influence zone provides a global estimate, while tonnage-weighting the individual quality values yields the quality parameters of the deposit. These methods do not take into account the spatial relationships among sample values and are unable to define the precision of the estimates, which leads to subjective mineral appraisal.

The conventional methods are based on the rules given by Popoff (1966), which
govern delineation of block boundaries for the purpose of reserve estimation, viz.
(i) The rule of gradual changes;
(ii) The rule of nearest points or equal spheres of influence; and
(iii) The rule of generalization of influence range.

The rule of gradual changes assumes a linear change in any property (such as quality parameters, width, density etc.) between two points where its value is known. The rule of nearest points states that, in the absence of any other information, the value at a point is taken as that of the nearest known sample. The rule of influence range provides the reasonable distance up to which the value of a sample may be extended along a given direction within its zone of influence.

Table 1. Summary descriptions of conventional methods of reserve estimation

i) Polygonal method: Polygons are constructed by drawing perpendicular bisectors to the lines connecting sample points, so that each polygon encloses one sample point (Fig. 1(1)). The area of each polygon is measured by a planimeter, and the individual sample grades are then extended to the whole area of the corresponding polygons. A global estimate of grade is obtained by summing the individual sample grades weighted by their respective polygonal areas.

ii) Triangulation method: Sample points are connected by straight lines to form a series of triangles (Fig. 1(2)). The grade of each triangle is estimated from the arithmetic mean of the three corner samples, as a thickness-weighted mean, or by weighting the samples as a function of their distance from the centre of the triangle. A global estimate is obtained by summing the individual triangle grades weighted by their respective areas.

iii) Sectional method: Cross sections are constructed from a set of drill holes in lines across a mineral deposit (Fig. 1(3)). The area of mineralisation in each section is outlined by joining the intersected thicknesses and measured by a planimeter. The average grade is estimated by summing the individual drill composite grades weighted by their respective thicknesses.

iv) Random stratified grid method: A regular grid of suitable size and orientation is adjusted on a set of samples distributed in space, by trial and error, until, as far as possible, at least one sample falls in each grid panel (Fig. 1(4)). The estimated global mean value of the deposit is the thickness-weighted average of the individual panel values.

v) Contouring method: Isolines are constructed by interpolation between points of known values, assuming a gradual, uninterrupted change from one sample point to another (Fig. 1(5)). The area lying between each successive pair of contours is measured by a planimeter and multiplied by the average value of its confining contours. A global estimate of grade is obtained by summing the individual contour-confined values weighted by their respective areas.

vi) Computer methods: Most of the above methods have been computerised at some stage or other. Hewlet (1962) developed computer programs for polygons and triangles, while numerous contouring packages are readily available today. Other computer methods include i) estimation by linear or quadratic interpolation between known values, i.e. Inverse Distance; ii) estimation by fitting a surface polynomial to known values, i.e. Trend Surface Analysis; and iii) estimation by smoothing of grade variation, i.e. Moving Average.
A comprehensive view of these techniques has been given by Davis (1986).
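As a minimal illustration of how these conventional estimates are aggregated, the following Python sketch (the polygon areas and grades are hypothetical) computes a global grade as an area-weighted average, as in the polygonal method:

# Hypothetical polygon areas (m^2) and sample grades (%); area weighting as in the polygonal method
polygons = [
    {"area": 1200.0, "grade": 1.8},
    {"area": 950.0,  "grade": 2.4},
    {"area": 1430.0, "grade": 1.1},
]

total_area = sum(p["area"] for p in polygons)
# Global grade = sum of sample grades weighted by their respective polygon areas
global_grade = sum(p["area"] * p["grade"] for p in polygons) / total_area
print(f"Global area-weighted grade: {global_grade:.2f} %")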

Fig. 1. Diagrammatic illustration of conventional methods for estimation

3. Geomathematical methods
Geomathematical methods of exploration modelling involve fitting a mathematical
function, f (x) to define adequately a mineral deposit with respect to the distribution of its
size, shape, grade, density, thickness and other geological attributes of relevance with an
aim to provide three dimensional representations of the deposit parameters with stated
level of confidence. The techniques aim at replicating the reality of a deposit as closely as
possible using available sample information of exploration campaign.

Geomathematical representation of a mineral deposit can be achieved either through estimation or simulation. While estimation enables only one realisation of reality, simulation provides a family of realisations. The various geomathematical methods of modelling a mineral deposit can be grouped under two broad techniques, viz. (i) Deterministic; and (ii) Probabilistic.

Each of these techniques has its merits and limitations. However, recent advances in
geomathematical modelling have established that the probabilistic methods are more
useful and accurate than the deterministic methods.
3.1 Deterministic techniques
The deterministic techniques provide only one outcome for an event or process.
These methods generate a single realisation for qualitative and quantitative estimates of
reserve parameters. The more important techniques under the deterministic group include
(i) distance weighting, (ii) moving average and (iii) trend surface analysis.

Table 2. An overview of deterministic and probabilistic models used in geology

Deterministic models (single outcome): Straight Average; Moving Average; Linear interpolation; Contouring; IDP; Trend Analysis.

Probabilistic models (many possible outcomes):

(a) Independent Random Theory (Classical Statistics on RV):
- Parametric (interval or ratio scale): Normal, Lognormal, Binomial, Negative Binomial and Poisson distributions; χ²-distribution (goodness of fit).
- Non-parametric (nominal or ordinal scale): Kolmogorov-Smirnov (K-S) test (goodness of fit); Rank distribution (correlation test).

(b) Correlated Random Theory (Geostatistics on ReV):
- Parametric: Ordinary Kriging, Simple Kriging, Universal Kriging, Lognormal Kriging, Disjunctive Kriging, Multi-Gaussian Kriging.
- Non-parametric: Indicator Kriging; Probability Kriging.

(c) Stochastic Simulation: Turning Bands Simulation; Sequential Gaussian Simulation; Truncated Gaussian Simulation; Sequential Indicator Simulation; Simulated Annealing Simulation; Probability Field Simulation; Lower-Upper (LU) Simulation.

IDP = Inverse Distance Power, RV = Random Variable, ReV = Regionalized Variable.

Distance weighting
These methods became more popular when computer assistance became available to perform a large number of repetitive calculations. The objective of distance weighting methods is to assign a value for a mineral quality parameter (e.g. grade) or a depositional parameter (e.g. thickness) to a point or block within a given volume, based on an appropriate linear combination of the sample values at the surrounding sample locations (i.e. drill holes). In general, it is assumed that the potential influence of a sample value (say, grade) at a point decreases as one moves away from that point. The grade change, thus, becomes a function of distance, i.e. f(di).
If linear distances are used to calculate the value (grade) of a block or a point, the method is called the Direct Linear Distance Method. Instead of the direct distance, the inverse of the distance can also be used as the weighting function; in such a case, the method is known as the Inverse Distance Method. The weighting function used in the inverse distance method for estimating the value at a point from the neighbouring sample values is sometimes taken as inversely proportional to some power n of the distances between the samples and the point to be estimated (i.e. 1/dⁿ). In such a case, the method is defined as the Inverse Distance Power Weighting Method. If the power n of the inverse distance is 1, the method is known as the Inverse Distance Method; if the power is 2, it is known as the Inverse Distance Squared (IDS) Method, which is the most common weighting method used by computers to calculate the grade at a point or block to be estimated; if the power is 3, the method is called Inverse Distance Cube (IDC), and so on.

At times, a constant k is added to the distance raised to the power n, the value of k being chosen empirically to provide an adequate fit. This gives a weighting function of the form:

f = 1 / (diⁿ + k)

Although the most common distance weight in use is 1/d², various other powers or combinations of powers of d and a constant k are used (David, 1980). The distance weighting methods have the following drawbacks:

(i) an isolated rich or poor sample generates many rich or poor blocks, which may not actually be the case; and
(ii) a weighting such as IDS gives a great deal of importance to the closest sample and little to the others; this effect is further accentuated with IDC weighting.
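A minimal Python sketch of inverse distance power weighting follows; the drill-hole coordinates and grades are hypothetical, and the optional constant k and power n are the parameters discussed above:

import numpy as np

def idw_estimate(sample_xy, sample_grades, point, power=2.0, k=0.0):
    """Inverse distance power estimate at 'point'; 'k' is the optional empirical constant."""
    d = np.linalg.norm(sample_xy - point, axis=1)
    if np.any(d == 0) and k == 0.0:
        return sample_grades[np.argmin(d)]       # point coincides with a sample
    w = 1.0 / (d**power + k)
    return np.sum(w * sample_grades) / np.sum(w)

# Hypothetical drill-hole collars (x, y) and grades
xy = np.array([[0.0, 0.0], [50.0, 10.0], [20.0, 60.0], [80.0, 70.0]])
g = np.array([1.2, 2.1, 0.8, 1.6])
print(idw_estimate(xy, g, np.array([40.0, 40.0]), power=2))   # IDS
print(idw_estimate(xy, g, np.array([40.0, 40.0]), power=3))   # IDC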

Moving average
The moving average technique produces a trend surface and represents a smooth picture of grade variation, but it is not confined to a mathematical function. It was first used by Krige (1964) in South Africa to establish block grades, and it provided the basis for the development of Geostatistics. The method differs from others in that all the data surrounding a block are used to value it, but once all the blocks have been valued, the point values are deleted from any further calculations.

Trend surface analysis


The spatial variability of any geological phenomenon defined by a polynomial is termed a 'trend'. If, in a mineral deposit, there exists a systematic change in the expectation of an attribute, then a trend is said to exist. A trend surface is a mathematical surface, expressible by polynomials, fitted by the method of least squares to spatially distributed exploration data represented by geographic coordinates. The surface may be linear, quadratic, cubic, quartic etc. depending upon the degree and order of the polynomial fit. Thus, a second-order trend surface is a numerical analogue of an anticline or syncline. Krumbein and Graybill (1965) explained trend surface analysis as a quantitative mathematical model and used the term 'concept model' as equivalent to the trend surface of a regional variable in exploration modelling. The analysis calls for a suitable mathematical function of the geographic coordinates of a set of observations that minimises the squared deviations from the trend, and the construction of a global functional relation:

Y = f(X1, X2, X3, ..., Xn; a1, a2, a3, ..., an) + a0

where X1, X2, X3, ..., Xn are the geographic coordinates; a1, a2, a3, ..., an are the unknown coefficients; and a0 is the random component, which follows a frequency distribution such as the normal distribution.

Mathematical trend surface fitting has expanded since its introduction in the early 1960s. The method attempts to fit a mathematical function (a polynomial) to the assay values in a deposit so that the value at any point can be estimated. The confidence of the fitted mathematical function is stated by its 'goodness of fit'. Davis (1986) provides a comprehensive view of trend analysis. On removing the trend component m(x) from the data set, one obtains the residuals R(x), for which experimental semi-variograms can be constructed and fitted to suitable models. After analysis, m(x) is added back to get g(x), i.e. g(x) = m(x) + R(x).
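A minimal sketch of the least-squares fit and the trend/residual decomposition g(x) = m(x) + R(x), assuming a second-order polynomial and hypothetical drill-hole data, might look as follows in Python:

import numpy as np

def fit_quadratic_trend(x, y, z):
    """Fit a second-order trend surface z = f(x, y) by least squares and return residuals."""
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    trend = A @ coeffs          # m(x): the fitted trend component
    residuals = z - trend       # R(x): deviations used for semi-variography
    return coeffs, trend, residuals

# Hypothetical drill-hole coordinates and thickness values
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 30), rng.uniform(0, 100, 30)
z = 5 + 0.02 * x - 0.01 * y + 0.0004 * x * y + rng.normal(0, 0.3, 30)
coeffs, trend, res = fit_quadratic_trend(x, y, z)
print("goodness of fit (R^2):", 1 - res.var() / z.var())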

The strategy in trend surface analysis should begin with (i) evaluating the reliability of the fitted trend surface, (ii) selecting the right geological and statistical models, and (iii) considering the pertinence of trend surface analysis to the geological data (Koch and Link, 1970). Trend surfaces fitted to individual parts of a deposit can be patched together. Trend surface analysis is a mathematical technique and, unless it can be geologically explained, it has no relevance in geological data analysis or in the modelling of a mineral deposit.

3.2 Probabilistic techniques


The probabilistic methods provide more than a single outcome for an event or a geological phenomenon. Since most exploration (geological) data are influenced by a multiplicity of causes (i.e. variation in source, depositional processes, structural features, host rock type etc.) and a random element is always associated with them, probabilistic methods are found to be more useful in exploration modelling than deterministic methods. Developments in geomathematics have led to a number of probabilistic methods. However, the most frequently used are: (i) Classical statistical methods; and (ii) Geostatistical methods.

3.2.1 Classical statistical methods


Classical statistical modelling is based on the Theory of Random Variables. It involves random observations of independent individuals of a given population, regardless of their spatial position. It provides (i) the nature of the frequency distribution; (ii) estimates with associated specified confidence limits; (iii) the average deviation of observations from the mean; and (iv) a check on sampling and analytical biases.

The most commonly used classical statistical models are (i) the Normal Distribution Model and (ii) the Lognormal Distribution Model. Various other distributions are known, but the assumption of either normality or lognormality can be made for most mineral deposits and the use of a more complex distribution is not justified (David, 1977; Rendu, 1981).

Normal distribution model


Theory of normal distribution
The normal or Gaussian distribution is characterised by a symmetrical, bell-shaped continuous curve whose tails approach the x-axis asymptotically (Wellmer, 1998). It is defined by its probability density function (p.d.f.):

f(x) = [1/(s√(2π))] e^(-0.5{(xi - x̄)/s}²)

where x̄ is the sample mean, an estimate of the population mean μ, and s is the sample standard deviation, an estimate of the population standard deviation σ.

The cumulative distribution function (c.d.f.) of the normal distribution has the expression:

F(x) = ∫ from -∞ to xi of [1/(s√(2π))] e^(-0.5{(x - x̄)/s}²) dx

These expressions are simplified by defining the standardised normal random variable z = (xi - x̄)/s, where xi = the upper value of a class, and developing a standard normal distribution with zero mean (z̄ = 0) and standard deviation Sz = 1, i.e. N(0, 1). This standardised normal distribution is expressed as:

f(z) = [1/√(2π)] e^(-0.5 z²)

The fit of a normal distribution to a sample distribution is established (i) numerically, by the values of the skewness, kurtosis and chi-squared goodness-of-fit test statistics, and (ii) graphically, by approximating a straight line on the arithmetic-probability scale. The numerical value of skewness should be zero or close to zero and that of kurtosis should
be 3 or close to 3, while the calculated value of the chi-squared statistic should be less than or equal to the critical (table) value to establish the fit of the normal distribution.

Estimation of parameters
Once the fit of a normal distribution is established to a sample distribution, the
theory of distribution can be applied to estimate mean, variance and confidence limits of
mean. The sample mean and the sample variance for a normal distribution are estimated
as follows:
Sample mean, x̄ = [1/n] Σ(i=1 to n) xi

Sample variance, S² = [1/(n-1)] Σ(i=1 to n) (xi - x̄)²

where S = √(S²) is the standard deviation of the sample population and n is the number of samples drawn from the population. The mean value 'm' of the ore body is estimated by m = x̄, with variance V = S²/n. If mp is the confidence limit of the true mean 'm' such that the probability of 'm' being less than mp is p, and m1-p is the confidence limit such that the probability of 'm' being larger than m1-p is p, then the probability that 'm' falls between mp and m1-p is 1-2p, and mp and m1-p are the 1-2p confidence limits of the mean. The following equations are used to calculate mp and m1-p for the mean value 'm' of the ore body:

Lower limit, mp = m - t1-p · S/√n; and
Upper limit, m1-p = m + t1-p · S/√n

where t1-p is the value of the Student's 't' variate for f = n-1 degrees of freedom, such that the probability that 't' is smaller than t1-p is 1-p.
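A minimal Python sketch of these confidence limits, using hypothetical assay values and the Student's t quantile from SciPy, is given below:

import numpy as np
from scipy import stats

# Hypothetical assay values
x = np.array([1.2, 0.9, 1.5, 2.1, 1.1, 1.8, 1.3, 0.7, 1.6, 1.4])
n = len(x)
m = x.mean()
s = x.std(ddof=1)                 # sample standard deviation (divisor n-1)
p = 0.05                          # gives 1 - 2p = 90% confidence limits
t_val = stats.t.ppf(1 - p, df=n - 1)
lower = m - t_val * s / np.sqrt(n)
upper = m + t_val * s / np.sqrt(n)
print(f"mean = {m:.3f}, 90% confidence limits: [{lower:.3f}, {upper:.3f}]")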

Once an optimum solution for 'm' has been determined, it is desirable to check the goodness of fit of the normal distribution to the sample distribution. The chi-squared (χ²) test provides a robust technique for this. The test statistic is given by:

χ² = Σ [(Oi - Ei)² / Ei]

where Oi = the observed frequency in a group and Ei = the corresponding expected frequency in that group. For a sample distribution approximating a normal distribution, the calculated

value of Chi-squared should be either less than or equal to its corresponding critical value
(Sarkar et al., 1988; Wellmer, 1998).

The degrees of skewness and kurtosis of a sample distribution are given as:

Skewness, Sk = [1/(n-1)] Σ [(xi - x̄)³ / S³]

Kurtosis, Ku = [1/(n-1)] Σ [(xi - x̄)⁴ / S⁴]

The degree of skewness is a parameter that measures the symmetry of the distribution
curve while the degree of kurtosis measures its peakedness. For an ideal normal
distribution curve, the degree of skewness should be zero or close to zero and the kurtosis
should be equal to or close to three (Rendu, 1981).
Fitting a normal distribution
The first step in fitting a normal distribution to sample values involves grouping of
the sample values in different classes and calculation of frequencies corresponding to
each class. Though the class interval may be chosen based on experience, it is best obtained by applying the approximation provided by Sturges' rule (Wellmer, 1998):

Class interval = R / (1 + 3.322 log10(n)) = R / (1 + 1.4427 ln(n))

where R = the range of values, i.e. the difference between the maximum and minimum values, and n = the total number of values (cumulative frequency).

The individual class frequencies, divided by the total number of samples, provide relative frequencies and enable construction of a histogram that reflects whether or not the sample values are symmetrically distributed.
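The grouping and the numerical checks described above can be sketched in Python as follows (the assay values are simulated for illustration only):

import numpy as np

# Hypothetical assay values
rng = np.random.default_rng(1)
x = rng.normal(2.0, 0.5, 200)

n = len(x)
R = x.max() - x.min()
class_interval = R / (1 + 3.322 * np.log10(n))        # Sturges' rule
n_classes = int(np.ceil(R / class_interval))
counts, edges = np.histogram(x, bins=n_classes)       # class frequencies

m, s = x.mean(), x.std(ddof=1)
skew = np.sum((x - m)**3) / ((n - 1) * s**3)           # should be close to 0
kurt = np.sum((x - m)**4) / ((n - 1) * s**4)           # should be close to 3
print(f"classes: {n_classes}, skewness: {skew:.2f}, kurtosis: {kurt:.2f}")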

The normal distribution can also be checked by a graphical method using arithmetic-probability paper, provided the number of samples is large enough. The upper class limits are plotted on the arithmetic scale, whereas the corresponding percentage cumulative frequencies are plotted along the probability scale. The assumption of normal distribution is valid provided a straight line can be fitted to the plotted points.

However, if two or more separate straight lines with varying slopes appear to fit the plotted points, a mixed population may be inferred. If the plotted points cannot be fitted by any sort of straight line, the distribution may be considered 'non-normal'. The value corresponding to the 50% cumulative frequency provides an estimate of the mean, and the difference between the values corresponding to the 50% and 84% cumulative frequencies, or to the 50% and 16% cumulative frequencies, provides an estimate of the population standard deviation. These graphical estimates of the mean and standard deviation are used to calculate the other statistics (skewness, kurtosis, and confidence limits of the mean).
Lognormal distribution model
Theory of lognormal distribution
When the distribution curve is fairly skewed and its kurtosis value is either
significantly greater than or less than three, the distribution may be represented by a 2-
parameter or a 3-parameter lognormal distribution (Krige, 1951 and 1978; Rendu, 1979
and 1981; Sichel, 1952 and 1966). Let xi be a variate with skewed distribution. If ln (xi) is
a variate with normal distribution, then the distribution of xi is said to be a 2-parameter
lognormal distribution (2 PLND). If ln (xi + c) is a normal variate where c is additive
constant, then xi is said to be a 3-parameter lognormal distribution (3 PLND). The value
of additive constant (c) is usually positive for low-grade deposits and negative for high-
grade deposits. The probability density function of a lognormal distribution is given by
the expression:
f(x) = [1/(xβ√(2π))] e^(-0.5{(ln x - α)/β}²)

where α = the logarithmic mean (log mean) and β² = the logarithmic variance (log variance).

The probability distribution of a 3-parameter lognormal variate xi is defined by
(i) the additive constant c;
(ii) the log mean of (xi + c); and
(iii) the log variance of (xi + c).

Estimation of parameters
The logarithmic mean can be estimated as follows. Let yi = ln(xi + c). Then

Log mean, ȳ or α = [1/n] Σ(i=1 to n) yi

Log variance, V(y) or β² = [1/(n-1)] Σ (yi - ȳ)²

The geometric mean of (xi + c) is estimated as exp(ȳ). The average value, m, of a mineral deposit is given by m* - c, where

m* = e^(ȳ + V(y)/2) = e^(α + β²/2) = e^α · e^(β²/2) ≈ e^ȳ · γn(V)

The value of the γn(V) factor is read from Sichel's table (David, 1977), where n = the number of samples and V = the log variance.

Average value, m = m* - c
Variance, S² = m*² [e^V - 1] = m*² [exp(β²) - 1]

The lower and upper limits of a central 90% confidence interval for the mean of a lognormal distribution are obtained by using the factors ψ0.05(V, n) and ψ0.95(V, n) respectively:

Lower limit = [ψ0.05(V, n) · m*] - c
Upper limit = [ψ0.95(V, n) · m*] - c
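A minimal Python sketch of this estimation, using hypothetical grades, an assumed additive constant c, and the large-sample approximation exp(ȳ + V/2) in place of Sichel's tabulated γn(V) factor, is given below:

import numpy as np

# Hypothetical skewed grade values and an assumed additive constant c (3PLND)
x = np.array([0.4, 0.7, 1.1, 0.5, 2.3, 0.9, 4.1, 0.6, 1.8, 0.8])
c = 0.2

y = np.log(x + c)
y_bar = y.mean()
V = y.var(ddof=1)                          # log variance
m_star = np.exp(y_bar + V / 2)             # approximates e^ybar * gamma_n(V) for large n
mean_estimate = m_star - c                 # average value of the deposit
variance_estimate = m_star**2 * (np.exp(V) - 1)   # variance of the lognormal variate (x + c)
print(f"estimated mean grade: {mean_estimate:.3f}")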

Fitting a lognormal distribution


Graphically the cumulative frequency distribution of a 2-parameter lognormal variate
would be plotted as a straight line on a logarithmic-probability paper. In the case of a 3-
parameter lognormal distribution, the plot is a curve either convex up or convex down
and a straight line is fitted to it as described in the following steps (David, 1977; Rendu,
1981; and Sarkar et al., 1995):

(i) The curve is plotted on a logarithmic-probability scale. The upper class limits are plotted on the ordinate (log scale) while the corresponding percent cumulative frequencies are plotted on the abscissa (probability scale).

(ii) The additive constant is estimated by the following approximation (Rendu, 1981):
c = (Me² - F1·F2) / (F1 + F2 - 2Me)
where Me = the sample value corresponding to 50% cumulative frequency i.e., the median
of the observed distribution.
F1 = Sample value corresponding to 'p' percent cumulative frequency.
F2 = Sample value corresponding to '1-p' percent cumulative frequency.
For best results, the value of 'p' is kept between 5% and 20%. However, theoretically
any value of 'p' may be used. The value of 'p' is altered till a best-fit value of the additive
constant is approached.
(iii) The additive constant, thus estimated, is added to the upper value limits of each
of the classes of the distribution and with these new values for the upper class marks, the
graph is plotted on a log-probability scale. If the plot fits or approximates well to a
straight line, it is said to conform to a 3-parameter lognormal distribution.
(iv) The graphical estimates of the log mean and log variance are obtained as follows:
Log mean, α = ln(x50%), i.e. the natural log of the value corresponding to the 50% cumulative frequency on the fitted straight line.
Log standard deviation, β = the difference in the log values corresponding to the 84% and 50% cumulative frequencies, or to the 50% and 16% cumulative frequencies, on the straight-line plot on the log-probability scale, i.e. ln(x84%) - ln(x50%) = β1 and ln(x50%) - ln(x16%) = β2; ideally, β1 = β2.
Alternatively, β = ½ [{ln(x84%) - ln(x50%)} + {ln(x50%) - ln(x16%)}]
Log variance, β² = V(y)
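The percentile-based approximation for the additive constant can be sketched in Python as follows (the grades are simulated so that the recovered constant should be close to the value built into the data):

import numpy as np

def additive_constant(x, p=0.10):
    """Rendu's approximation for the additive constant of a 3-parameter lognormal fit."""
    Me = np.percentile(x, 50)                  # median of the observed distribution
    F1 = np.percentile(x, 100 * p)             # value at p cumulative frequency
    F2 = np.percentile(x, 100 * (1 - p))       # value at 1-p cumulative frequency
    return (Me**2 - F1 * F2) / (F1 + F2 - 2 * Me)

# Hypothetical skewed grades; in practice p is varied between 5% and 20% for the best fit
rng = np.random.default_rng(2)
x = np.exp(rng.normal(0.0, 0.8, 300)) - 0.1    # (x + 0.1) is lognormal, so c should be near 0.1
for p in (0.05, 0.10, 0.20):
    print(p, round(additive_constant(x, p), 3))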

The classical statistical models can define the precision of an estimate, but they have
a few drawbacks. It is assumed that the samples taken from an unknown population are
randomly distributed and are independent of one another. In case of mineral deposits, this
implies that all the samples in the deposit have an equal probability of being selected. The likely presence of trends, zones of enrichment or pay shoots in the mineralisation may all be neglected (Rendu, 1981).
3.2.2 Geostatistical methods
These modelling methods utilise an understanding of the spatial relations of sample
values within a mineral body. The geostatistical modelling techniques are based on a set
of theoretical concepts known as the theory of Regionalised Variables developed by
Matheron (1971) based on empirical work carried out by Krige (1951, 1952 and 1962).
Any variable, which is related to its position (i.e. exhibits spatial correlation) and support
or volume in space, is called a regionalised variable. In fact, almost all variables
encountered in earth sciences can be regarded as regionalised variables (Kim, 1991).

Most regionalised variables, in ore reserve estimation, display two aspects; viz. (i) a
random aspect, consisting of highly irregular and unpredictable variations and (ii) a
structured aspect, reflecting spatial characteristics of the regionalised phenomena. The
two-fold purposes of the theory of regionalised variables are, (i) to express the spatial
properties of regionalised phenomena in adequate form; and (ii) to solve the problems of
estimating regionalised variables from sample data (Kim, 1991).

To achieve these, George Matheron (1963) introduced a probabilistic interpretation of regionalised variables that led to the emergence of Geostatistics as an ore reserve estimation technique in the early 1960s in France, from where it spread worldwide. On a global scale, Geostatistics has been successfully applied to metallic and non-metallic minerals, precious metals and fossil fuels, while in India its application has been made mainly to base metals, BIFs, coal, oil, phosphorite and, to some extent, bauxite.

Geostatistics, if properly understood and appropriately applied, derives from the raw data the best possible estimates of the ore body parameters. The conventional methods of estimating mineral reserves and grades do not, in practice, provide any objective way of measuring the reliability of the estimates (Sarkar, O'Leary and Mill, 1990). The classical statistical techniques provide an error of estimation stated by confidence limits but ignore the spatial relations within a set of sample values (Royle et al., 1980). Trend surface analysis and moving average methods take the spatial relationships into account but ignore the error of estimation (Davis, 1986). Geostatistics overcomes these limitations by providing estimates together with a minimum error variance (Matheron, 1971).

Geostatistical methods utilise an understanding of the inter-relations of sample values
for quantifying the geological concept of: (i) the inherent characteristics of the deposit;
(ii) a change in the continuity of interdependence of sample values according to the trend
of mineralization; and (iii) a range of interdependence of sample values. Based on these
quantifications, geostatistics produces: (i) estimation with a minimum variance; and (ii)
an error of estimation, both on local and global scales. None of these properties is taken into account by the conventional or classical methods.
Geostatistics thus marks a major advance in ore body modelling, resource assessment and appraisal, provided that the deposit exhibits a definite regionalised phenomenon. A comprehensive account of recent methods of geostatistical modelling for mineral inventory estimation has been given by Sinclair and Blackwell (2002).

A brief description of some of the important kriging techniques is given in Table 3. Of these, ordinary kriging is the simplest and most widely used technique.
Table 3. Various techniques of kriging

1. Ordinary kriging (Journel and Huijbregts, 1978; Goovaerts, 1997; Olea, 1999): Linear kriging of a variable with unknown mean is called ordinary kriging (OK). The OK technique accounts for local variation of the mean by limiting the domain of stationarity of the mean to the neighbourhood samples. This technique imposes the constraint that the sum of the kriging weights must be equal to unity.

2. Simple kriging (Goovaerts, 1997; Armstrong, 1998; Olea, 1999): Simple kriging (SK) is the form of linear kriging where the mean of the regionalised variable is known. There is no condition on the sum of the weights.

3. Universal kriging (Journel and Huijbregts, 1978; Deutsch and Journel, 1997): Kriging with a trend is known as universal kriging (UK). When the data values exhibit a trend that can be expressed by a polynomial function, the UK technique is applied.

4. Lognormal kriging (Rendu, 1979 and 1981; Sinclair and Blackwell, 2002): At times, when it is not possible to find an acceptable linear combination of kriging coefficients, a kriged estimate may be obtained based on the logarithmic values of the samples. This technique is known as lognormal kriging (LNK) since estimation is based on logarithmic values.

5. Disjunctive kriging (Matheron, 1976): Disjunctive kriging (DK) is a technique developed by Matheron (1976) which estimates a probability density function of the grade distribution within a block using nearby samples, based on a univariate normal assumption for the sample values (Xi) and a bivariate normal assumption for every pair of sample values (Xi, Xj). Using this probability density function, DK establishes a grade-tonnage curve for the block for estimating recoverable reserves.

6. Multi-Gaussian kriging (Verly, 1983): This technique rests on two apparently strong hypotheses, viz. (i) strict stationarity and (ii) multi-normality. In practice, it is only when both these conditions are met that the conditional expectation becomes identical to the OK estimator (Journel and Huijbregts, 1978).

7. Co-kriging (Journel and Huijbregts, 1978): When two variables in a deposit show a high degree of correlation, a cross semi-variogram may be used to establish the possibilities of a spatial correlation between them, and the kriging system used in such a situation is termed co-kriging.

8. Indicator kriging (Journel, 1983; Sinclair and Blackwell, 2002): Indicator kriging (IK) is one of the non-parametric approaches to estimation. In IK, an optimal solution is provided using the data in their rank order according to an indicator function, I(x).

9. Probability kriging (Sulivan, 1984): Probability kriging is an extension of indicator kriging. In this technique, in addition to the rank data, the experimental cumulative distribution function of the sample values is used.

10. Outlier restricted kriging (Arik, 1992; Sinclair and Blackwell, 2002): Outlier restricted kriging (ORK) is a technique which accounts for outliers present in highly skewed grade distributions, such as in precious metal deposits. In this technique, weights are assigned to the estimated block values accounting for the outliers present in the data.

11. Area influence kriging (Arik, 2002): Area influence kriging (AIK) is a modified OK technique developed for highly skewed grade distributions where the OK results are unreasonably smooth. In this technique, a sample value is considered to be the primary starting point for the grade of the blocks within its area of influence. The weights assigned to the samples for a given block outside the area of influence of the nearest sample then control the resulting grade of the block.

12. Factorial kriging (Bleines et al., 2004): Factorial kriging estimates a given factor for a given scale component. In this method, the kriged estimate can be recovered through a linear decomposition, as the factors are mutually independent.

13. Collocated kriging (Bleines et al., 2004): This technique is used when a variable based on sparse sampling is being estimated on a regular grid and analysis of another correlated variable is available at each node. It is a modification of the co-kriging technique.
Parallel to these advancements, practitioners of geostatistics in the mining industry realised the need for a link between geology and geostatistics (Rendu, 1985), a link that is manifested at each step of a geostatistical study. The mode of incorporating geology into geostatistics has been to perform geostatistical modelling with respect to the geological controls of mineralisation.

Concurrent with these developments, another suite of techniques called geostatistical


simulations (David, 1977; Journel and Huijbregts, 1978; Journel and Alabert, 1989;
Deutsch and Journel, 1992) were developed. A key property of geostatistical simulation
models (as opposed to geostatistical estimation, or kriging models) is that a family or
system of model realizations is generated, i.e. not merely one best estimate. A series of
images or realizations is produced that presents a range of plausible possibilities. The
plausibility of these possible images is dependent on the assumptions and methodology
employed in the simulation process. Chiles and Delfiner (1999) provide a modern
overview of simulation while Lantuejoul (2002) gives an excellent mathematically
rigorous summary of simulation algorithms.

Semi-variography
The basic assumptions made for geostatistical structural modelling are: (i) the values of samples located near or inside a block of ground are most closely related
to the values of the block being considered; and (ii) a relation exists amongst the sample values as a function of their distance and orientation. The function that measures the spatial variability among the sample values is called the 'semi-variance'. Semi-variograms are constructed by comparing a sample value with the remaining ones at constantly increasing distances called lag intervals (Isaaks and Srivastava, 1989; Kim, 1991; Armstrong, 1998). The mathematical formulation of the semi-variance function γ(h) is:

γ(h) = [1/(2N)] Σ(i=1 to N) [Z(xi) - Z(xi + h)]²

where Z(xi) is the value of the regionalised variable (e.g. grade) at a point xi in space; Z(xi + h) is the grade at another point at a distance 'h', known as the lag distance or lag interval; and N is the number of sample pairs being considered.
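A minimal Python sketch of an omnidirectional experimental semi-variogram computed with this formula, using hypothetical sample coordinates and grades, is given below:

import numpy as np

def experimental_semivariogram(coords, values, lag, n_lags, tol=None):
    """Omnidirectional experimental semi-variogram: gamma(h) = (1/2N) * sum (Z(xi) - Z(xi+h))^2."""
    if tol is None:
        tol = lag / 2
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    dz2 = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)            # each sample pair counted once
    h_centres, gammas = [], []
    for k in range(1, n_lags + 1):
        mask = np.abs(d[iu] - k * lag) <= tol
        if mask.sum() > 0:
            h_centres.append(k * lag)
            gammas.append(0.5 * dz2[iu][mask].mean())  # (1/2N) * sum of squared differences
    return np.array(h_centres), np.array(gammas)

# Hypothetical sample coordinates (x, y) and grades
rng = np.random.default_rng(3)
coords = rng.uniform(0, 200, (80, 2))
values = rng.normal(1.5, 0.4, 80)
h, gam = experimental_semivariogram(coords, values, lag=20.0, n_lags=8)
print(np.round(gam, 3))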

In usual practice, an experimental semi-variogram is constructed by plotting the lag distance along the abscissa and the corresponding value of the semi-variance function along the ordinate (Fig. 2). By definition, h starts at zero, since two samples cannot be taken closer than zero distance apart.

A semi-variogram model has an equivalent covariogram model, C(h), given by the relation (Fig. 2):

γ(h) = σ² - C(h)

where σ² is the sill (total) variance.

Fig. 2. The semi-variogram γ(h) and its equivalent covariogram C(h) plotted against lag distance h

An experimental semi-variogram provides the following information regarding the
characteristics of a mineral deposit:
(i) A measure of continuity of mineralisation: The continuity of mineralisation is reflected by the rate of growth of γ(h) for constantly increasing values of h. The growth of the curve demonstrates the regionalised component of the samples, and its smooth, steady growth indicates the degree of continuity of mineralisation. Sedimentary deposits exhibit a high degree of continuity, whereas mineralisation concentrated in veins, veinlets, stringers etc. exhibits a low degree of continuity. At times, mineralisation may display no continuity at all; this is described as a pure nugget effect model, as in the case of some gold mineralisation.

(ii) A measure of the zone or area of influence: The distance at which a semi-variogram levels off to its plateau is called the range (or zone) of influence of the semi-variogram. It is the distance up to which the regionalised component has its effect. In other words, it is the separation at which a sample is far enough from another sample point to have no influence upon it. Beyond this amount of separation, the values of a sample pair do not correlate with one another and become independent. In deposits where there is no continuity, the samples have no range of influence. The range of influence thus provides an improved basis over the conventional notion of half-way influence.

(iii) Sill (C0 + C): The value of the semi-variogram function γ(h) at which the semi-variogram plateaus off is referred to as the sill variance (Matheron, 1971). For all practical purposes, the sill variance is considered equal to the statistical variance of all the sample values used to compute the experimental semi-variogram.

(iv) Nugget to Sill Ratio (C0/(C0 + C)): This ratio provides an indication of the consistency in the regularity of a deposit. An increase in this ratio marks a decrease in the regularity of the deposit, which may result from an erratic grade distribution. A considerably high value of this ratio requires greater attention.

(v) A measure of trend: A visual glance at the semi-variogram may reveal the presence of a trend in the data set. The trend is characterised by a conspicuous hump in
between certain lag distances, followed by a dip in the semi-variogram curve (Clark, 1988). This requires removal of the trend and then performing semi-variography on the residuals, which reflect the deviations from the trend. The presence of a trend causes the experimental semi-variogram to overestimate the underlying (true) semi-variogram (Armstrong, 1998; Clark, 1988; Sahu, 2003).

Based on the semi-variography of the residuals R(x), block values are then estimated for the residuals employing geostatistical estimation (Rossi, 1989; Journel and Rossi, 1989; Isaaks and Srivastava, 1989; Sarma and Selvaraj, 1990; Singh and Singh, 1996; Watson et al., 2001). The trend component m(x) of each block is added back to the estimated residual value R(x) to obtain the estimated block value g(x) for the variable, i.e.

g(x) = R(x) + m(x)

(vi) A measure of anisotropy: When the semi-variograms calculated for all pairs of points in the various principal directions exhibit different types of behaviour, such as differences in ranges, they are said to reflect anisotropy. If this does not occur, the semi-variograms depend only on the magnitude of the distance between the points and are said to be spatially 'isotropic'. Two different types of anisotropy are distinguished: (i) geometric/elliptic anisotropy; and (ii) zonal/stratified anisotropy. In the former case, the ratio of the larger range to the smaller range provides the anisotropy ratio, while in the latter case it is common practice to split the semi-variogram into two components: an isotropic one plus an anisotropic one that reflects variations in the vertical direction.

Semi-variography with due consideration to deposit geology is able to quantify the


characteristics of spatial continuity via nugget effect, range, sill and directional
anisotropy, which in turn, provides an adequate model of geological influences that are
used in reserve estimation. The requirement to estimate deposits in which selective
mining of ore and commingled waste could take place calls for a more careful appraisal
of the geological controls.
Fitting a theoretical model to an experimental Semi-variogram
To an experimental semi-variogram, various mathematical models (Fig. 4) may be fitted, such as the Spherical/Matheron model, Exponential model, De Wijsian/Logarithmic

model, Linear model, Parabolic model, Hole-effect model, Mixed/Nested Spherical model etc. (Table 4).

Table 4. Theoretical models fitted to experimental semi-variograms

(1) Spherical Model: γ(h) = C0 + C[1.5(h/a) - 0.5(h³/a³)] for 0 < h < a; γ(h) = C0 + C for h ≥ a; γ(h) tends to C0 as h tends to 0; γ(h) = 0 for h = 0. This model is encountered most commonly in mineral deposits, where sample values become independent once a given distance of influence (i.e. the range a) is reached but, within it, sample values are highly correlated. Various deposits, including coal, have been found to have their grade distribution adequately represented by this model (David, 1977). It is also known as the Matheron model.

(2) Linear Model: γ(h) = Ah + B, where A (slope) and B (intercept) are constants. It is the simplest model encountered, where no range exists; γ(h) increases continuously as h increases. It shows a moderate continuity, observed sometimes in iron ore deposits, and is described by a linear equation.

(3) de Wijsian Model (after Prof. H. J. de Wijs): γ(h) = A ln(h) + B, where A (slope) and B (intercept) are constants. This is an extension of the linear model. In some hydrothermal deposits, the semi-variogram plots as a straight line when γ(h) is plotted against ln(h).

(4) Power Model: γ(h) = a·h^θ, where θ is a power factor and a is the intercept. In this model, the semi-variogram is made linear by plotting it on a log-log scale. It is frequently encountered in elevation semi-variograms.

(5) Exponential Model: γ(h) = C[1 - e^(-h/a)]. The slope of the tangent at the origin is C/a. For practical purposes, the range can be taken as 3a. The tangent at the origin intersects the sill at a point where h equals a. This model is not encountered too often in mining practice, since its infinite range is associated with a too-continuous process.

(6) Gaussian Model: γ(h) = C[1 - e^(-h²/a²)]. This model is characterised by two parameters, C and a; the practical range is √3·a. The curve is parabolic near the origin and the tangent at the origin is horizontal, which indicates low variability over short distances. Excellent continuity is implied, which is rarely found in geological environments.

(7) Parabolic Model: γ(h) = Ah², where A is the slope. This model is observed when there is a linear drift.

(8) Hole-Effect Model: γ(h) = C[1 - sin(ah)/(ah)]. It can be used to represent a fairly continuous process. The tangent at the origin is horizontal and the model shows a periodic/cyclic behaviour, often encountered when there exists, for instance, a succession of alternating rich and poor zones or alternating layers.

(9) Pure Random Model: γ(h) = S². In this model no continuity exists, indicating the presence of a very high degree of randomness in the variable distribution. γ(h) is then equal to the statistical variance (S²).
The model most commonly used in the field of mineral deposit modelling is the spherical model developed by Matheron (1961), as more than 95% of mineral deposits conform to this model (Rendu, 1981). It can be represented by the following set of equations:

γ(h) = C0 + C[1.5(h/a) - 0.5(h/a)³] for 0 < h ≤ a
γ(h) = C0 + C for h ≥ a
γ(h) → C0 for h → 0
γ(h) = 0 for h = 0

where C0 is the nugget effect, C is the continuity and a is the range of influence.

Fig. 3. A spherical model fitted to an experimental semi-variogram
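The spherical model can be written as a short Python function (the parameter values used below are illustrative only):

import numpy as np

def spherical_model(h, c0, c, a):
    """Spherical (Matheron) semi-variogram model with nugget c0, continuity c and range a."""
    h = np.asarray(h, dtype=float)
    gamma = np.where(h < a, c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3), c0 + c)
    return np.where(h == 0, 0.0, gamma)     # gamma(0) = 0 by definition

print(spherical_model([0, 10, 25, 50, 100], c0=0.1, c=0.6, a=50.0))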


In fitting a mathematical model to an experimental semi-variogram, the behaviour of the semi-variogram at the origin (both the nugget effect and the slope) plays a crucial role. The slope of the semi-variogram can be assessed from the first three to four experimental semi-variogram values, γ(h), by joining them with a straight line. The nugget effect can be

estimated by extrapolating this line back to the γ(h) axis and reading the intercept. The choice of nugget effect is of extreme importance since it has a very marked effect on the kriging weights and on the kriging variance. Three methods of model fitting are known: (i) the hand-fit method, (ii) the non-linear least squares fit method and (iii) the Point Kriging Cross-Validation (PKCV) method (Kim, 1991).

(i) Hand fit method


The sill (C0 + C) is set at the value at which the experimental semi-variogram stabilises. In theory, this should coincide with the statistical variance; however, if a sill clearly exists in the experimental semi-variogram, the sample variance is not taken as the estimate of the variogram sill. The nugget effect is estimated by joining the first three or four semi-variogram values and projecting this line to intersect the γ(h) axis. Projecting the same line until it intercepts the sill provides two-thirds of the range. Using the estimates of the nugget (C0), continuity (C) and range (a), a few points are calculated to examine whether the model curve fits the experimental semi-variogram. Although this method is straightforward and simple, there is an element of subjectivity involved in the estimation of the model parameters.

(ii) Non-linear least squares fit method


This method uses the principle of least squares to fit a model such that the sum of the squared deviations of the estimated values from the real values is a minimum. Unfortunately, polynomials obtained by least squares do not guarantee a positive definite function (otherwise the semi-variance could turn out to be negative).
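A minimal Python sketch of such a least-squares fit, restricted to the spherical model so that a valid (positive definite) function is retained, is given below; the experimental points are hypothetical:

import numpy as np
from scipy.optimize import curve_fit

def spherical(h, c0, c, a):
    h = np.asarray(h, dtype=float)
    return np.where(h < a, c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3), c0 + c)

# Hypothetical experimental semi-variogram points (lag distance, gamma)
h_exp = np.array([10, 20, 30, 40, 50, 60, 70, 80.0])
g_exp = np.array([0.22, 0.35, 0.45, 0.52, 0.57, 0.60, 0.61, 0.60])

# Least-squares fit of C0, C and a; initial guesses come from a crude hand fit
p0 = [0.1, 0.5, 50.0]
(c0, c, a), _ = curve_fit(spherical, h_exp, g_exp, p0=p0, bounds=(1e-6, np.inf))
print(f"nugget C0 = {c0:.3f}, C = {c:.3f}, range a = {a:.1f}")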

(iii) Point Kriging Cross-Validation (PKCV) method


The point kriging cross-validation technique is a robust method for fitting a mathematical model to an experimental semi-variogram. It is referred to by Davis and Borgman (1979) as a procedure for checking whether the fitted semi-variogram model represents the true underlying semi-variogram that controls the kriging estimation (Sarkar et al., 1990; Isaaks et al., 1989).

Based on inspection of the crude semi-variogram model initially fitted by the hand-fit method to the experimental semi-variogram, estimates of the semi-variogram parameters (C0,

C, and a) are made and cross-validated empirically through point kriging. The model parameters, viz. the nugget variance (C0), continuity (C) and range of influence (a), are then varied and adjusted until the following constraints are achieved (Sarkar et al., 1988):

i. The ratio of the estimation variance (Ev) to the kriging variance (Kv) approximates unity, i.e. Ev/Kv = 1 ± 0.05;
ii. The mean difference between the sample values (Z) and the estimated values (Z*) is close to zero, i.e. Mean(Z - Z*) ≈ 0;
iii. The mean of the estimates approximates the mean of the true values;
iv. The other errors are also close to zero or a minimum; and
v. An adequate graphical fit to the experimental semi-variogram is achieved.

A model approximated by this approach largely eliminates the element of subjectivity. The principle underlying the point kriging cross-validation technique is that each sample point on the sample grid, which has a real value, is selected in turn. The real value is temporarily deleted from the data set and the point is kriged using the neighbouring sample values confined within its radius of search. The error between the real and estimated values is calculated, and the point kriging process is repeated for all the data points.
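A minimal Python sketch of the cross-validation loop, using ordinary point kriging with an assumed spherical model and hypothetical data (the search-radius restriction is omitted for brevity), is given below:

import numpy as np

def spherical(h, c0=0.1, c=0.6, a=60.0):
    h = np.asarray(h, dtype=float)
    g = np.where(h < a, c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3), c0 + c)
    return np.where(h == 0, 0.0, g)

def ok_point(est_xy, xy, z):
    """Ordinary point kriging of one location from the neighbouring samples."""
    n = len(z)
    A = np.ones((n + 1, n + 1)); A[n, n] = 0.0
    A[:n, :n] = spherical(np.linalg.norm(xy[:, None] - xy[None, :], axis=-1))
    b = np.ones(n + 1)
    b[:n] = spherical(np.linalg.norm(xy - est_xy, axis=1))
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    return w @ z, w @ b[:n] + mu        # kriged estimate and kriging variance

# Hypothetical samples; each one is deleted in turn and re-estimated from the rest (PKCV)
rng = np.random.default_rng(4)
xy = rng.uniform(0, 200, (40, 2)); z = rng.normal(1.5, 0.4, 40)
errors, kvars = [], []
for i in range(len(z)):
    mask = np.ones(len(z), bool); mask[i] = False
    zstar, kv = ok_point(xy[i], xy[mask], z[mask])
    errors.append(z[i] - zstar); kvars.append(kv)
errors = np.array(errors)
print("mean(Z - Z*):", round(errors.mean(), 4))
print("Ev / Kv     :", round(errors.var() / np.mean(kvars), 3))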

Kriging
Kriging is the geostatistical procedure of estimating the values of a regionalised variable using the information obtained from a semi-variogram. Let G* be the kriged estimate of the average grade G, obtained from samples having values g1, g2, g3, ..., gn. Let a1, a2, a3, ..., an be the weights given to each of these values respectively, such that Σai = 1 and G* = Σ ai·gi. The estimation is then unbiased, the mean error is zero over a large number of estimated values, and the estimation variance, σk² = E[(G - G*)²], is a minimum.

To make the kriging variance a minimum, a Lagrange multiplier (μ) is used in the optimal solution of the kriging system. Kriging carried out to estimate a point is called point kriging, and that carried out to estimate a block of ground is known as block kriging.

Point kriging
Point kriging is a method of estimating or interpolating the value at a point from a set of neighbouring sample points by applying the theory of regionalised variables, such that the weight coefficients sum to unity and produce a minimum variance of error. Expressed mathematically, the kriged estimate is given as:

P* = Σ ai·si

where P* = the estimate of the true value at the point 'p'; ai = the weight coefficients of the individual samples; and si = the individual sample values at the sample points.

The kriging variance is σk² = Σ ai·γ(si, p) + μ, where μ = the Lagrangian multiplier and γ(si, p) = the semi-variance between sample si and the point to be estimated.

An example of point kriging


In the simplest possible situation, one may wish to make a kriged estimate G* of the true value G at a point 'p' from three known observations S1, S2 and S3, to which weights a1, a2 and a3 are assigned respectively, such that G* = a1S1 + a2S2 + a3S3, subject to the constraint a1 + a2 + a3 = 1 and with the variance of estimation a minimum. This is achieved by equating the partial derivatives to zero using a Lagrangian multiplier μ. Solving this finally leads to a matrix equation of the form:

| γ11  γ12  γ13  1 |   | a1 |   | γ1p |
| γ12  γ22  γ23  1 | * | a2 | = | γ2p |
| γ13  γ23  γ33  1 |   | a3 |   | γ3p |
|  1    1    1   0 |   | μ  |   |  1  |

where γij is the semi-variance over the distance corresponding to the separation between control points i and j, and γip is the semi-variance between control point i and the point 'p'. From this matrix equation, the values of a1, a2, a3 and μ are obtained, and the value at the desired point is estimated therefrom. The kriging variance is computed as: σk²

= a1·γ1p + a2·γ2p + a3·γ3p + μ. Kriging thus produces estimates that have minimum error variance, and also provides an explicit statement of the magnitude of the error.
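A minimal Python sketch that solves this 4 x 4 system for three hypothetical samples (taking γii = 0, i.e. attributing the nugget to h = 0) is given below:

import numpy as np

# Hypothetical semi-variances gamma_ij between the three samples, and gamma_ip to the point p
g12, g13, g23 = 0.30, 0.42, 0.35
g1p, g2p, g3p = 0.20, 0.28, 0.33

A = np.array([[0.0, g12, g13, 1.0],
              [g12, 0.0, g23, 1.0],
              [g13, g23, 0.0, 1.0],
              [1.0, 1.0, 1.0, 0.0]])
b = np.array([g1p, g2p, g3p, 1.0])
a1, a2, a3, mu = np.linalg.solve(A, b)    # weights sum to 1 by construction

S = np.array([1.8, 2.4, 1.1])                 # hypothetical sample values S1, S2, S3
G_star = a1 * S[0] + a2 * S[1] + a3 * S[2]    # kriged estimate at point p
k_var = a1 * g1p + a2 * g2p + a3 * g3p + mu   # kriging variance
print(round(G_star, 3), round(k_var, 3))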

Block kriging
Block kriging is a method of estimating a block of ground with the help of the surrounding sample values, using the theory of regionalised variables. The kriged estimate G* of a block is mathematically expressed as G* = Σ ai·gi, where G* = the estimated value of the block obtained from a set of samples Si with values gi; ai = the weight coefficients; and n = the number of samples used for the estimation of the block. Normally, the minimum and maximum numbers of samples used for kriging a block are taken as 4 and 12, or 3 and 15.

The kriging variance, σk², is given as:

σk² = Σ ai·γ(Si, V) - γ(V, V) + μ

where
γ(V, V) = the average semi-variance within the block V;
γ(Si, V) = the average semi-variance between sample Si and the whole of the block V; and
μ = the Lagrange multiplier, a constant introduced in the minimisation process to balance the number of equations with the number of unknown coefficients.

The weight coefficients ai and the Lagrangian multiplier μ are computed from the matrix form of the kriging equations, which is given as:

| (γ(S1,S1)-C0)  (γ(S1,S2)-C0)  ...  (γ(S1,Sn)-C0)  1 |   | a1 |   | γ(S1,V) |
| (γ(S2,S1)-C0)  (γ(S2,S2)-C0)  ...  (γ(S2,Sn)-C0)  1 |   | a2 |   | γ(S2,V) |
|      ...             ...      ...       ...      ... | * | .. | = |   ...   |
| (γ(Sn,S1)-C0)  (γ(Sn,S2)-C0)  ...  (γ(Sn,Sn)-C0)  1 |   | an |   | γ(Sn,V) |
|       1               1       ...        1        0 |   | μ  |   |    1    |

or, [S][L] = [T]
or, [S]⁻¹[S][L] = [S]⁻¹[T]
or, [I][L] = [S]⁻¹[T]
or, [L] = [S]⁻¹[T]
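A minimal Python sketch of ordinary block kriging, in which the block averages γ(Si, V) and γ(V, V) are approximated by discretising the block into a grid of points, is given below; it uses the common convention γ(Si, Si) = 0 rather than the nugget-adjusted matrix terms shown above, and all data are hypothetical:

import numpy as np

def spherical(h, c0=0.1, c=0.6, a=60.0):
    h = np.asarray(h, dtype=float)
    g = np.where(h < a, c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3), c0 + c)
    return np.where(h == 0, 0.0, g)

def block_krige(samples_xy, z, block_centre, block_size, n_disc=4):
    """Ordinary block kriging; the block is discretised into n_disc x n_disc points."""
    # Discretisation points used to approximate gamma(Si, V) and gamma(V, V)
    off = (np.arange(n_disc) + 0.5) / n_disc - 0.5
    gx, gy = np.meshgrid(off * block_size, off * block_size)
    disc = block_centre + np.column_stack([gx.ravel(), gy.ravel()])

    n = len(z)
    A = np.ones((n + 1, n + 1)); A[n, n] = 0.0
    A[:n, :n] = spherical(np.linalg.norm(samples_xy[:, None] - samples_xy[None, :], axis=-1))
    g_sv = spherical(np.linalg.norm(samples_xy[:, None] - disc[None, :], axis=-1)).mean(axis=1)
    g_vv = spherical(np.linalg.norm(disc[:, None] - disc[None, :], axis=-1)).mean()
    b = np.append(g_sv, 1.0)
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    return w @ z, w @ g_sv - g_vv + mu      # block estimate and block kriging variance

# Hypothetical samples around one 25 m x 25 m block
rng = np.random.default_rng(5)
xy = rng.uniform(0, 100, (10, 2)); z = rng.normal(1.5, 0.4, 10)
print(block_krige(xy, z, block_centre=np.array([50.0, 50.0]), block_size=25.0))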

Outline of steps for performing Block Kriging
The entire mineralized body is divided into regularly spaced horizontal sections, by
projecting the sample data from the (vertical) cross sections earlier constructed. The
vertical height or gap between the sections is kept at length equalling the vertical lift or
bench height as per the method of mining. In each of the horizontal sections, the delineated mineralised boundary is divided into smaller grids based on the selective mining unit (SMU). Usually at least one-fourth of the drill spacing (for a square grid) is taken as the side of a grid.

Each slice forms a set of X and Y arrays of blocks with constant Z values (X-Easting,
Y-Northing, Z- Elevation). The arrays of blocks are then kriged slice by slice, producing
kriged estimates and a kriging variance for each of them. As a first step, the following input parameters are required for block kriging:
i. A minimum of 4 and a maximum of 16 samples to krige a block,
ii. The radius of search for sample points around a block centre should be within the
range of influence,
iii. The semi-variogram parameters: nugget variance (C0), transition variance or continuity (C), and range (a),
iv. The ratio of anisotropy in case of anisotropic semi-variogram model, and
v. The dimension of blocks to be kriged and block coordinates.
The next steps that follow include:
i) Computation of average variability of sample values contained within the
dimensions of small blocks;
ii) Selection of nearest samples lying within the radius of search;
iii) Counting the number of samples: if this is found insufficient with reference to the minimum specified to krige a block, the next block is taken up and the procedure is repeated from step ii);
iv) Establishing kriging matrices and computation of weight coefficient;

v) Multiplication of the weight coefficients by their respective sample values to provide the kriged estimate. The kriging variance is calculated from the sum of the products of the

weight coefficients and their respective sample-block average semi-variances. An extra constant, called the Lagrange multiplier, is added to minimise the kriging variance; and
vi) Move to next block and repeat the procedure from step (ii).

The individual slices are then averaged to produce a global estimate of the kriged mean together with its associated variance. The methods of kriging described here, viz. point kriging and block kriging, belong to linear geostatistics.

Non-linear geostatistics deals with Lognormal Kriging (Rendu, 1979 and 1981; Sinclair and Blackwell, 2002), Disjunctive Kriging (Matheron, 1976) and Multi-Gaussian Kriging (Verly, 1983), while non-parametric geostatistics includes Indicator Kriging (Journel, 1983; Sinclair and Blackwell, 2002) and Probability Kriging (Sulivan, 1984). Additionally, there are other models such as Universal Kriging (kriging in the presence of a trend; Journel and Huijbregts, 1978; Deutsch and Journel, 1997), Co-Kriging (kriging of one variable based on its correlation with another variable; Journel and Huijbregts, 1978), Polygonal Kriging and Blast Hole Kriging (David, 1988; Kim, 1993).

