Quick Simulation for Grade Control - Validations

Running simulations can be useful, but we need to know whether the results are trustworthy. Validation confirms that our simulated models are consistent with the input data, honour the spatial structure, and provide outcomes we can rely on. Without validation, a simulation is just a collection of random numbers with no guarantee of geological meaning. With proper checks in place, we can demonstrate that the simulations reflect the reality of the data.

First, simulation results should reproduce the global statistics of the data. Across all realisations, the simulated mean should be very close to the sample mean, and the coefficient of variation (CV) should match that of the input data.

Simulations are close (1.8% lower) but not quite capturing the variability of the data here

The CV allows us to check whether the simulations are capturing the spread of values (i.e. its variability). In practice, this means ensuring the simulation outputs return a mean of zero and a variance of one in normal score units. Deviating from this means we are either under- or overestimating the variability compared to the input data, or under- or overestimating the grade. Plotting the CV and mean of all simulations is a useful visual check that we are getting the CV and mean we expect from the input data.
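As a minimal sketch of this check (plain NumPy, not Vulcan's API; the function name and tolerances are illustrative), the mean and CV of each realisation can be compared against the input data:

```python
import numpy as np

def global_stats_check(samples, realisations, mean_tol=0.05, cv_tol=0.05):
    """Compare the mean and CV of each realisation against the input data.

    samples      : 1-D array of (declustered) input grades
    realisations : 2-D array, one row per realisation of simulated values
    *_tol        : relative tolerances -- illustrative defaults, tune per deposit
    """
    target_mean = samples.mean()
    target_cv = samples.std(ddof=1) / target_mean

    results = []
    for i, r in enumerate(realisations):
        m = r.mean()
        cv = r.std(ddof=1) / m
        # Flag the realisation if either statistic drifts outside tolerance
        ok = (abs(m - target_mean) / target_mean <= mean_tol
              and abs(cv - target_cv) / target_cv <= cv_tol)
        results.append((i, m, cv, ok))
    return target_mean, target_cv, results
```

The per-realisation means and CVs returned here are exactly the values one would scatter-plot for the visual check described above.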

Spatial validation is equally important. Variogram reproduction confirms that the simulations are honouring the spatial continuity of the data. I’m plotting my normal score variogram model (red) together with variograms of my simulations before back transformation (grey lines). Similarly, histogram reproduction checks whether the simulations are capturing the distribution of input grades, including skewness and tails. These checks together ensure that the simulations are not only statistically valid but also geologically reasonable.

Comparison of the sample histogram (blue) to the histogram of one of the simulations (red). Each simulation should reproduce the input distribution, producing a closely matching histogram.
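A quick numerical version of the histogram check is to compare a few quantiles of a realisation against the sample data (a sketch only; in practice you would also inspect the full histogram or CDF plot):

```python
import numpy as np

def histogram_check(samples, realisation, quantiles=(0.1, 0.25, 0.5, 0.75, 0.9)):
    """Compare selected quantiles of one realisation against the input data.

    Returns a list of (quantile, sample value, simulated value, relative
    difference). Large relative differences in the upper quantiles point to
    poor reproduction of the distribution's tail.
    """
    rows = []
    for q in quantiles:
        sq = np.quantile(samples, q)
        rq = np.quantile(realisation, q)
        rel = (rq - sq) / sq if sq != 0 else float("nan")
        rows.append((q, sq, rq, rel))
    return rows
```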


Variogram reproduction: the normal score variogram model (red) compared to variograms produced from my simulations (grey), in normal score units. I was hoping it would be closer!
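The grey experimental curves in a plot like this come from computing semivariograms on the simulated nodes. A minimal sketch for a regular 1-D line of nodes (illustrative only; real grids need directional, 3-D lag searches):

```python
import numpy as np

def semivariogram_1d(values, max_lag):
    """Experimental semivariogram along a regular 1-D line of simulated nodes.

    values  : simulated values at equally spaced nodes (normal score units)
    max_lag : maximum lag, in multiples of the node spacing
    Returns (lags, gamma) where gamma at lag h is 0.5 * mean((z(x) - z(x+h))^2).
    """
    lags, gamma = [], []
    for h in range(1, max_lag + 1):
        d = values[h:] - values[:-h]
        lags.append(h)
        gamma.append(0.5 * np.mean(d ** 2))
    return np.array(lags), np.array(gamma)
```

Overlaying these experimental points on the normal score model (before back-transformation) is what the comparison above is doing.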

Kriging provides another important checkpoint. By comparing simulation results against a kriged estimate, we can assess how closely the simulations track the expected values. The correlation coefficient and slope of regression between kriged and simulated values give evidence of whether the simulations are behaving sensibly. It is also worth noting the difference between Simple Kriging (SK) and Ordinary Kriging (OK) – I have seen arguments for using both, and also many who insist only SK should be used for simulation. To be on the safe side I'm running both: in this method, whichever of SK or OK the user chooses, the same kriging method is used for the validation as for the simulation. I'd be interested to hear about people's experience in this area – should SK always be used for simulation and validated against an OK result?

Scatter plot of e-type versus kriged estimate. I'm using OK for both the simulation and the kriging estimate here, although I've seen many insist on SK being used for the simulation. The script runs either SK or OK depending on the user's choice.
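The statistics behind this scatter plot reduce to a few lines of NumPy (a sketch, with illustrative names; the e-type is the block-by-block mean of the realisations):

```python
import numpy as np

def etype_vs_kriging(etype, kriged):
    """Correlation and regression slope between the e-type estimate and the
    kriged block estimates. Both arrays are matched block-by-block.

    A correlation near 1 and a regression slope near 1 suggest the
    simulations are centred on the kriged expectation.
    """
    corr = np.corrcoef(etype, kriged)[0, 1]
    # Slope/intercept of the regression of the e-type on the kriged values
    slope, intercept = np.polyfit(kriged, etype, 1)
    return corr, slope, intercept
```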

Beyond these checks, HERCO analysis is a useful tool. It is a method to predict the grade-tonnage curves for a given SMU size, taking the sample data and a variogram model as inputs. By assessing the recoverable resources at a specified SMU size and cut-off, HERCO gives us predictions of grade and tonnage that can be compared to the simulations. If the simulations reproduce the HERCO-derived grade-tonnage curves, it gives us confidence that they will support realistic mining decisions at the operational scale.

HERCO analysis produced grade-tonnage curves for comparison to my simulated result, in particular at my cut-off of interest, which is 0.66 here (circled).
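Building the simulated side of the comparison is straightforward once the realisations are reblocked to SMU scale. A minimal sketch (illustrative names; not the HERCO calculation itself, which comes from the change-of-support model):

```python
import numpy as np

def grade_tonnage(smu_grades, cutoffs, tonnes_per_block=1.0):
    """Grade-tonnage points from simulated SMU-scale grades, for comparison
    against the HERCO-predicted curves.

    smu_grades       : 1-D array of reblocked (SMU) grades for one realisation
    cutoffs          : iterable of cut-off grades to evaluate
    tonnes_per_block : tonnage of one SMU (placeholder value)
    Returns a list of (cutoff, tonnes above cutoff, mean grade above cutoff).
    """
    rows = []
    for c in cutoffs:
        above = smu_grades[smu_grades >= c]
        tonnes = above.size * tonnes_per_block
        grade = above.mean() if above.size else 0.0
        rows.append((c, tonnes, grade))
    return rows
```

Running this for each realisation at the cut-off of interest gives a spread of grade-tonnage points to overlay on the HERCO curve.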

Of course, validation involves multiple steps and can be repetitive if performed manually for every simulation run. This is where automation becomes essential. Using Vulcan’s Python integration, we can script some validation routines - checking means and CVs, running kriging comparisons, reproducing histograms. Automating these steps ensures consistency, reduces the risk of human error, and saves time, making robust validation achievable even under tight grade control schedules. More importantly, it shifts the effort away from manual box-ticking toward meaningful interpretation, giving geologists and engineers confidence in simulations as reliable tools for decision-making.
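At its simplest, the automated wrapper just collects named pass/fail results from the individual checks so a controlling script can react to failures (a sketch only; the check names are illustrative):

```python
def validation_summary(checks):
    """Collect named pass/fail results and report which checks failed.

    checks : dict mapping a check name (e.g. "mean", "cv", "variogram")
             to a bool (True = passed)
    Returns (all_passed, list of failed check names) so a wrapper script can
    react -- e.g. rerun the simulation or flag the run for manual review.
    """
    failed = [name for name, passed in checks.items() if not passed]
    return len(failed) == 0, failed
```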

There are other validations I'd like to work in when time is available, so any suggestions are welcome. Ultimately having the process react to failure of any of these validations is also a goal. In the next post I'll cover some of the settings of the simulation and kriging that are dynamically changing based on the input data from the user.

