1448 XXXI International Mineral Processing Congress 2024 Proceedings/Washington, DC/Sep 29–Oct 3
In this sense, process design is generally targeting a point
or one-dimensional answer and conventional statistics and
rules of thumb typically guide the number of samples.
Rules of thumb are widely used and many variations
have been proposed. Several of these rules consider the
number of parameters used for regression modelling and/or
are based on statistical concepts of variance in populations.
Examples of published rules of thumb include:
• The 20:1 rule (Burmeister 2012), which states that the ratio of the sample size to the number of variables in a regression model should be at least 20 to 1. If a variable is categorical, each category should be counted as a variable. In other words, if a categorical variable has values of “A” and “B” it is counted as two variables, rather than one.
• An alternative sample number (N) calculation for multiple regression suggested by Green (1991): N ≥ 50 + 8p, where p is the number of variables.
• Ecological studies suggest N = 10 to 20 per predictor variable (Gotelli 2004).
• Jenkins and Quintana-Ascensio (Jenkins 2020) recommend N ≥ 25 per regression but suggest N = 8 is adequate for a dependent variable with low variance.
• Sample numbers based on the variance of the input data set. However, input data with high variance could result in a very good predictive model if the model is controlled by a wide range of input values. Conversely, a lot of data may be required if a robust and accurate model is to be developed using only a narrow range (low variance) of input values.
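These rules can give quite different answers for the same model. A minimal sketch comparing them (the function names and the example of four predictors are illustrative assumptions, not from the source):

```python
# Sample-size rules of thumb for regression, as listed above.
# p = number of predictor variables; under the 20:1 rule each
# category of a categorical variable is counted separately.

def n_20_to_1(p):
    """Burmeister (2012): at least 20 samples per variable."""
    return 20 * p

def n_green(p):
    """Green (1991): N >= 50 + 8p."""
    return 50 + 8 * p

def n_ecological(p, per_predictor=15):
    """Gotelli (2004): 10-20 samples per predictor (midpoint used here)."""
    return per_predictor * p

# Example: 4 predictors, one of them categorical with 2 levels,
# so the 20:1 rule counts 5 "variables".
p_20_to_1 = 3 + 2      # 3 numeric predictors + 2 categorical levels
p_regression = 4       # predictors in the fitted model

print(n_20_to_1(p_20_to_1))        # 100
print(n_green(p_regression))       # 82
print(n_ecological(p_regression))  # 60
```

Note how the recommended minimum spans 60 to 100 samples for the same small model, which is part of the motivation for looking beyond input-side rules.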
A limitation of these rules is that they are largely based on
the statistical properties of the input data and do not con-
sider the properties of the output predictions. High vari-
ances of input variables will not necessarily lead to high
errors in the output predictions. It is important to recognize that this is different from the requirement to adequately cover the ranges of the data used to build the regression model.
For comminution, authors provide guidance based on
study stage and test type (Meadows 2012), number of ore
types (Morrell 2011, Giblett 2016), or per orebody (Doll
2011). Table 1 is reproduced from Meadows (2012) and
shows the recommended number of tests for each method.
Morrell (2011) and Giblett and Morrell (2016) state
that at least 10 samples per ore type are required to per-
form a meaningful statistical analysis at prefeasibility stage.
Morrell (2011) further suggests that, for the development of a geometallurgical model useful for forecasting daily grinding circuit throughput, the number of samples required is “at least an order of magnitude higher.”
For an entire geometallurgical mapping program,
Williams and Richardson (Williams 2004) estimated that
100 to 300 metallurgical samples would be necessary, sup-
ported by at least 1000 mineralogy tests and 10,000 assays.
In 2023 dollars, the difference between 100 and 300 flota-
tion tests is of the order of $0.5 million, so many compa-
nies would be tempted to stop at 100 samples.
The development of a geometallurgical block model has a different objective: that of providing predictions of the variability of the ore that will be supplied to the process
plant. The geometallurgical model represents localised ore
and waste characteristics as a three-dimensional matrix of
blocks and is based on the mineral resource block model.
The modelled characteristics typically include metallurgi-
cal response variables such as mass pull, recovery, concen-
trate quality, work indices, throughput or specific energy.
Machine learning and regression modelling can be used to predict metallurgical response variables from the typically sparse metallurgical test data in combination with a much larger geological sample database. This approach derives maximum leverage from the high-cost metallurgical tests.
models can then be applied to the block model to provide
block by block predictions of ore processing response based
on the local geological data. These granular, quantitative
Table 1. Number of grinding tests recommended for best practices

Test                     | Scoping | PEA | PFS | FS  | EPC | Remarks
Bond (BWi, RWi, CWi, Ai) | cv      | 12  | 40  | 100 | 200 | New drilling required to get samples
JKDWT                    | 1       | 6   | 20  | 50  | 100 | Limited by material available
MacPherson AWI           | 1       | 2   | 6   | 15  | 30  | Large composite samples req.
Protodyakonov            | 1       | 2   | 6   | 15  | 30  | Large composite samples req.
SAGDesign                | 1       | 3   | 10  | 25  | 50  | Composite or point samples
SPI                      | 3       | 12  | 40  | 100 | 200 | Composite or point samples
SMC                      | 3       | 12  | 40  | 100 | 200 | Point hardness samples only
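The regression-to-block-model workflow described above can be sketched in miniature. In this hedged illustration the single geological proxy variable, the test values, and the use of one-predictor ordinary least squares are assumptions for the sketch, not data or methods from the source:

```python
# Calibrate a regression on sparse metallurgical tests, then apply it
# block by block to a (hypothetical) geological proxy in the block model.

def fit_ols(x, y):
    """One-predictor ordinary least squares: y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Sparse metallurgical tests: a logged geological hardness proxy (assumed)
# against a measured response such as Bond work index (illustrative values).
proxy = [1.0, 2.0, 3.0, 4.0]
bwi = [10.5, 12.0, 13.4, 15.1]
a, b = fit_ols(proxy, bwi)

# Block model: predict the response for every block from its local
# geological data, giving block-by-block variability estimates.
block_proxy = [1.5, 2.5, 3.5]
predicted = [a + b * x for x in block_proxy]  # approx. [11.23, 12.75, 14.27]
```

In practice the geological database would carry many predictors and a machine learning model would replace the single-variable fit, but the structure is the same: calibrate on the few expensive tests, then predict for every block.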