Geospatial modelling of geotechnical parameters… worth the effort? (Part 1)
A couple of weeks ago, Scott Dunham posted an interesting article that grabbed my attention. It was about the geostatistical validity of interpolating certain geotechnical parameters, such as RQD. Scott’s post provoked a good deal of discussion from geologists, geotechnical engineers and mining engineers alike.
I thought it might be useful to run a series of articles about geospatial modelling of geotechnical parameters to highlight some issues we face in the mining industry and to generate a bit of discussion about the usefulness of such models.
Firstly, let’s talk about some of the geotechnical parameters, or “data”, that are typically used for mining projects and their relevance for geospatial modelling. In this post, I will just focus on RQD as a variable, and not the validity of the interpolation process itself. So what’s so wrong about RQD for geostatistical modelling?
OK, before I get into this, I just want everyone to keep in mind the huge disparity between the data sets available to the geological/resource and geotechnical disciplines. Project stage obviously matters, but the number of dedicated drill holes that are fully geotechnically logged (not just RQD!) is typically tens of times smaller than the total number of resource holes. In short, "geotechs" deal with much less information, and RQD is typically the most commonly logged basic parameter. Just saying...
Put simply, RQD is a contrived, or made-up, "index". It's basically a modified version of core recovery, designed to "red flag" zones of "poor" quality drill core (Deere and Deere, 1988). By analogy, it's like an index that flags a "hot" day: it applies an arbitrary threshold to a temperature scale to define "hot", then provides a scale of "hotness" without ever reporting the actual temperature.
The "poor" drill core zones include core loss, fragments and small pieces of rock, and altered rock. Core loss and fragmentation of the core are largely a mechanical response to drilling, core extraction, core handling and transport practices. Let's just look at drilling: core loss and fragmentation are highly influenced by drill operator experience and diligence, triple tube versus single tube, core diameter, drilling fluids, core lifter condition, and so on. In effect, for poorer rock conditions, the RQD index is not an intrinsic property of the rock but mostly a response variable to various mechanical processes. This introduces significant noise and irreproducibility. When a project combines RQD values from different programs and purposes, drillers, machines and core sizes, one may well question its value as precise, worthwhile "data".
The RQD index is officially defined as the percentage of the sum of pieces of "hard and sound" rock core greater than 10 cm, separated by "natural fractures", over the "length logged". I deliberately put "hard and sound", "natural fractures" and "length logged" in quotation marks because these are all subjective assessments that depend on the person logging the core, and I have seen a very wide range of RQDs from different people logging the same core!
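The definition above reduces to a very simple calculation. Here is a minimal sketch (the function name and the stick lengths are mine, purely for illustration):

```python
# Sketch of the RQD calculation: sum the lengths of "hard and sound"
# core sticks >= 10 cm bounded by natural fractures, divide by the
# logged length. The stick lengths below are illustrative only.

def rqd(stick_lengths_m, logged_length_m, threshold_m=0.10):
    """RQD (%) for one logged interval, stick lengths in metres."""
    sound = sum(s for s in stick_lengths_m if s >= threshold_m)
    return 100.0 * sound / logged_length_m

# One metre of core broken into six sticks (metres):
sticks = [0.25, 0.08, 0.15, 0.05, 0.30, 0.17]
print(round(rqd(sticks, 1.0), 1))  # 87.0
```

Note that everything subjective in the definition (what counts as "sound", which breaks are "natural", and the logged length itself) sits in the inputs, not in the arithmetic.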
OK, so why an arbitrary threshold value of 10 cm? Some reading suggests it was defined as two times the diameter of the core (roughly 10 cm for NQ core). But how was this "threshold" chosen from an engineering point of view? Does it fit all engineering objectives? Think of a small tunnel versus a 300 m high open pit slope: how relevant is the same fixed 10 cm threshold to each?
Another important aspect of this "threshold" shows up in the descriptive statistics. In competent rock masses we get strongly left-skewed RQD distributions, with the 90–100 bin of the histogram often holding 80–90 percent of values. Conversely, in a fractured rock mass the 0–10 bin takes up almost all the values. The threshold artificially truncates, or censors, the full data spectrum. In effect, certain RQD values, say 40% through 60%, are encountered less frequently simply because of the way the index is formulated: it was designed to push results towards either 0% or close to 100%. This alone calls the value of geostatistical modelling of RQD into question.
Now to the subjective and variable "length logged". Because of the fixed 10 cm threshold, RQD values are sensitive to the length of core over which RQD is calculated. For example, a 300 mm long highly fractured zone in otherwise massive rock gives RQD values of 90%, 80% and 40% for logged lengths of 3 m, 1.5 m and 0.5 m respectively. The shorter the logged length around a "poor" zone, the lower the RQD value.
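The 90/80/40 numbers are easy to reproduce. A quick sketch, assuming the 300 mm fractured zone contributes nothing to RQD and the rest of the core is intact sticks well over 10 cm:

```python
# Logged-length effect: a single 0.3 m highly fractured zone
# (contributing 0 to RQD) inside otherwise intact core.
# Assumes the intact remainder is all in sticks >= 10 cm.

def rqd_around_fractured_zone(logged_length_m, zone_m=0.3):
    return 100.0 * (logged_length_m - zone_m) / logged_length_m

for length in (3.0, 1.5, 0.5):
    print(length, "m ->", round(rqd_around_fractured_zone(length)), "%")
# 3.0 m -> 90 %, 1.5 m -> 80 %, 0.5 m -> 40 %
```

Same rock, same fractured zone, three very different RQD values, driven entirely by the choice of logging interval.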
In practice, RQD is not collected on standard 1 m lengths but usually on drill run lengths, where the upper limit is usually the core barrel length. A three metre barrel will therefore tend to yield artificially higher RQD values than a 1.5 m barrel for the same rock. In addition, even shorter intervals are often subjectively selected around identified low RQD zones. This creates a measurement support problem for geostatistics: compositing is usually done to the most common length (i.e. the longer, higher-RQD intervals), which smears out the lower RQD values (hey, no more RQD = 0!) and decreases the "sampling" variance.
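The smearing effect is easy to see with standard length-weighted compositing. The run lengths and values below are hypothetical:

```python
# Length-weighted compositing of run-based RQD values (hypothetical
# runs). A short RQD = 0 run sandwiched between two long high-RQD
# runs vanishes into the composite: the zero is smeared out.

runs = [(3.0, 95.0), (0.5, 0.0), (3.0, 90.0)]  # (length m, RQD %)

total_len = sum(length for length, _ in runs)
composite = sum(length * value for length, value in runs) / total_len
print(round(composite, 1))  # 85.4 -- the RQD = 0 fault zone has disappeared
```

An engineer looking only at the composited value would never suspect a half-metre of completely broken ground sits inside that interval.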
OK, let's assume there's no core loss and no "mechanical" breaks, so RQD has been measured solely as "a result of natural fractures". Say you have two metres of core: the first metre 0% RQD, the second 100% RQD. If you calculate the RQD of the 2 m interval it will be 50%, right? Well, not quite.
RQD is derived from an intrinsic variable, natural fracture intensity, which determines the lengths of the individual sticks of core. The relationship between 1-D fracture frequency (λ) and RQD is almost never linear (apart from very special rock masses), as fracture spacings tend to follow a negative exponential distribution, giving RQD = 100 exp(−tλ)(tλ + 1), where t is the threshold (i.e. 10 cm = 0.1 m). Take the example above, where the first metre of core has 1 fracture per metre (RQD ≈ 100%) and the second, say, 50 (RQD ≈ 4%, pretty close to zero). Calculating RQD from the averaged 1-D fracture frequency of 25.5 gives an RQD of 28%, not 50%. Linearly averaging RQD values is simply not representative of their underlying relationship with fracture intensity.
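A quick numerical check of these figures, using the negative exponential relationship above (the function name is mine):

```python
import math

# RQD from 1-D fracture frequency lam (fractures per metre), with
# threshold t = 0.1 m, assuming negative exponentially distributed
# fracture spacings: RQD = 100 * exp(-t*lam) * (t*lam + 1)

def rqd_from_frequency(lam, t=0.1):
    return 100.0 * math.exp(-t * lam) * (t * lam + 1.0)

r_low = rqd_from_frequency(1)        # 1 fracture/m  -> ~100 % RQD
r_high = rqd_from_frequency(50)      # 50 fractures/m -> ~4 % RQD
naive = 0.5 * (r_low + r_high)       # linear average of RQD -> ~52 %
correct = rqd_from_frequency(25.5)   # RQD of the averaged frequency -> ~28 %
print(round(naive), "vs", round(correct))
```

The naive average overstates the combined interval by roughly 24 RQD percentage points, purely because the frequency-to-RQD mapping is strongly non-linear.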
Importantly, and in contrast to grade, RQD is directional, not a scalar property. Its value depends on the direction in which it was measured relative to the orientations of the discontinuities in the rock: stick lengths reflect the 3-D shape of the discontinuity-bounded rock blocks, so simple averaging of values from differently oriented holes doesn't make sense. Consider a simple bedded rock mass dipping at 60 degrees. A borehole drilled at 60 degrees down-dip, essentially parallel to bedding, within a strong sandstone unit might intersect 1 to 6 fractures per metre (RQD = 86–100). A nearby borehole drilled perpendicular to bedding intersects all those bedding contacts and returns an RQD of 20–30. So which hole direction is valid? Which one do you interpolate? You could argue that if all holes are drilled in the same direction, say perpendicular to bedding, then any RQD interpolant is valid for that direction. But that is neither especially useful for engineering in a 3-D world, nor strictly correct.
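The directional bias can be sketched for the simplest possible case: a single fracture set (the bedding) whose apparent frequency in a hole scales with the cosine of the angle between the hole and the pole to bedding. The normal-to-bedding frequency of 27 fractures per metre below is my assumption, chosen to land in the article's RQD ranges:

```python
import math

# Directional bias sketch: one fracture set (bedding) with a "true"
# frequency lambda_n measured normal to bedding. A hole at angle
# theta to the bedding pole sees lambda_n * cos(theta).
# RQD from the negative exponential relationship; lambda_n assumed.

def rqd_from_frequency(lam, t=0.1):
    return 100.0 * math.exp(-t * lam) * (t * lam + 1.0)

lambda_n = 27.0  # fractures/m normal to bedding (hypothetical)

# Hole perpendicular to bedding (theta = 0): sees every bedding break.
perp = rqd_from_frequency(lambda_n * math.cos(math.radians(0)))

# Hole nearly parallel to bedding (theta = 80 deg): sees very few.
subpar = rqd_from_frequency(lambda_n * math.cos(math.radians(80)))

print(round(perp), "vs", round(subpar))  # ~25 vs ~92 for the same rock
```

Same rock mass, two holes, RQD of roughly 25 versus roughly 92. Interpolating those two values together as if they measured the same thing is clearly meaningless.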
To sum up, RQD is a fairly archaic, contrived index whose main aim was to red-flag poor quality zones, full stop. That is still useful: these zones should be explored hole by hole, or better in a 3-D visualiser, to say "There's a fault there, there's crappy ground there, let's find out why and how this feature is going to impact the engineering". I think that, as "data", this is where its best value lies.
Sure, it's regularly collected (yeah, we've got tonnes of geotech data, we've got RQD on all holes!) and its inclusion in various rock mass classification systems has somehow given it legitimacy as high-value "data". Unfortunately, due to the drawbacks described above, RQD is exceptionally noisy, biased and insensitive. I really can't see how it can provide much value as a "data" source for geostatistical modelling…
In my next post I will look at some ways to develop more useful "geospatial" models of geotechnical parameters, models that actually serve the engineering objective at hand.
References:
Deere, DU and Deere, DW. 1988. "The Rock Quality Designation (RQD) Index in Practice". In: Rock Classification Systems for Engineering Purposes, ASTM STP 984, Louis Kirkaldie, Ed., American Society for Testing and Materials, Philadelphia, pp. 91–101.