Intelligent Decision Making
Looking to the future, we appear to be enthralled by the possibilities of artificial intelligence, yet we don't seem to realise that we already possess the intelligence; all we will be doing is automating it.
Machine learning is currently very limited: it rests on massive processing power running a correspondingly large number of binary experiments, rather than on genuine intelligence. We introduce the intelligence when we program the code. In the near term we won't be introducing anything new, only increasing the efficiency with which that intelligence is employed.
Decision Modelling
So, let's use that intelligence now. Even in simple situations we run the risk of confirmation bias and groupthink, so let's see if we can model the decision process, and establish a habit we can then apply to more complex situations.
Can you illustrate the decision process that led to the current scope you are working on? Can you clearly represent the desired outcome?
If so, you ought to be able to identify the variables that influenced that decision, indicate the relative importance of each, state the degree of certainty attached to it, and describe its relationship to the other variables in reaching the desired outcome.
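That structure can be written down explicitly. The following is a minimal sketch of such a register; the variable names, weights, and certainty figures are illustrative assumptions, not taken from any real project:

```python
from dataclasses import dataclass, field

# A minimal sketch of making a decision's structure explicit.
# All names and figures below are illustrative assumptions.

@dataclass
class Variable:
    name: str
    importance: float            # relative weight in the decision, 0..1
    certainty: float             # confidence in our current estimate, 0..1
    influences: list = field(default_factory=list)  # variables this one affects

model = [
    Variable("capital_cost", importance=0.4, certainty=0.7, influences=["schedule"]),
    Variable("schedule", importance=0.3, certainty=0.5, influences=["first_revenue"]),
    Variable("first_revenue", importance=0.3, certainty=0.6),
]

# Rank variables by (importance x uncertainty): the ones that drive the
# outcome but are least certain are prime candidates for further study
# before committing to a course of action.
ranked = sorted(model, key=lambda v: v.importance * (1 - v.certainty), reverse=True)
for v in ranked:
    print(f"{v.name}: importance {v.importance}, certainty {v.certainty}")
```

Even this trivial listing forces the discipline the article asks for: every weight and certainty figure is written down where it can be challenged, rather than carried implicitly in someone's head.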
Now ask yourself: how often have you seen such a disciplined approach applied to decisions?
For small decisions of no real consequence the effort wouldn't be worth it, though it might help establish the habit. For major decisions, however, such as the process to be followed for an engineering project, let alone choosing amongst the portfolio of projects most large companies must handle, the decision should be carefully tested and optimised using every tool at our disposal, ensuring that the opportunity being pursued, and the associated risks being mitigated, are handled in a structured fashion.
The concepts of Net Present Value (NPV) and Internal Rate of Return (IRR) provide mathematical models for comparing the merits of different investment opportunities. How rigorously, and how often, are they applied? And when they are applied, has an Objective Function been used to determine the optimum outcome?
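Both measures are simple enough to sketch in a few lines. NPV discounts each period's cash flow back to today; IRR is the discount rate at which NPV is zero, found here by bisection. The cash flows are illustrative, not from the article:

```python
# A minimal sketch of NPV and IRR for comparing investment options.
# The cash flows below are illustrative assumptions.

def npv(rate, cashflows):
    """Net Present Value: discount each period's cash flow back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0):
    """Internal Rate of Return: the rate at which NPV == 0, found by
    bisection (assumes the cash flows have exactly one sign change)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Project A: invest 1000 today, then receive 400 per year for four years.
flows = [-1000, 400, 400, 400, 400]
print(round(npv(0.10, flows), 2))  # NPV at a 10% discount rate
print(round(irr(flows), 4))        # discount rate where NPV crosses zero
```

The point of the article's question is the step after this arithmetic: running such a calculation consistently across a portfolio, inside an objective function, rather than once per project as an afterthought.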
Mastering the Complex
There are many interdependent parts to an optimal structured decision-making process, but they are hardly ever conceptualised in this way. Individual elements of a project use specific tools (like subroutines in a computer program), yet there is no connection between these routines to test their interdependencies.
Within the resources industries we don't deal in the chaotic, where no structure or relationships exist. At worst we deal in the complex, where there are unknown unknowns; but such unknowns can be explored by experiment, which, after all, is the point of science: to explore and to understand. Most of the time, however, we are dealing with the complicated, known unknowns, which is why we should apply a measure of certainty to the variables in our decision models.
We can use currently available intelligence to make better decisions. We already use models for the financial aspects and for considering engineering alternatives. We can take this further by creating new models that represent our understanding of the alternative options and of our chosen path to a decision. Instead of debating decisions, we are then armed with all the tools we need to create digital models that represent courses of action and to optimise the choice between them.
Decision models are developed, but all too often the variables are simply weighted, assigned a degree of preference, in what is, funnily enough, known as Preference Modelling. This is better than nothing, but it falls short of a genuine understanding of the interaction between variables and outcomes. All too often preference modelling simply builds in confirmation bias and groupthink.
Starting at the very beginning of any situation, let's develop a register of opportunity and risk, consider the options and mitigations, and produce a model of the situation and its interactions (its complexities). As we design a model we are forced to test and measure our understanding of cause and effect and to state our certainty, representing uncertainty with probability distributions and Monte Carlo analysis. We then reduce the chance of deviating from our desired outcome not by legal and commercial means (additional complexity), but through a superior understanding of the situation and its optimum solution.
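A Monte Carlo treatment of such a model can be sketched in a few lines: sample each uncertain variable from its distribution, combine them into an outcome, and read off the probability of missing the target. The variable names, distributions, and the outcome formula below are illustrative assumptions, not taken from the article:

```python
import random

# A minimal Monte Carlo sketch of a decision model with uncertain inputs.
# All names, distributions, and the outcome formula are illustrative.

random.seed(42)  # fixed seed so the run is repeatable

def simulate_project():
    """One trial: sample each uncertain variable, combine into an outcome."""
    capex = random.triangular(90, 140, 110)        # capital cost, $M (low, high, mode)
    production = random.normalvariate(1.0, 0.15)   # throughput multiplier
    price = random.lognormvariate(4.0, 0.2)        # commodity price, $/unit
    revenue = 2.5 * production * price             # 2.5M units sold (assumed)
    return revenue - capex                         # simplistic margin, $M

trials = [simulate_project() for _ in range(100_000)]
p_loss = sum(t < 0 for t in trials) / len(trials)
print(f"Estimated chance of falling below break-even: {p_loss:.1%}")
```

The value of the exercise is less the final percentage than the discipline it imposes: every distribution above is a stated, testable claim about our certainty, exactly the thing preference weightings leave implicit.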
Intelligence versus Blunt Instruments
This is an important takeaway: all the legal and commercial agreements in the world won't change the outcome of a flawed process; they only describe the remediation if things do go wrong. Instead of our current skewed reliance on compliance with HSE, legal and commercial requirements, why don't we also mandate, or at least require evidence of, an intelligent decision-making process?
Decision models not only allow us to optimise, or at least understand, what drives outcomes; perhaps even more importantly, they represent our decision process so that, as variables change, we can understand the impact and change course accordingly. This provides a degree of transparency to all concerned, and the earlier we change course when necessary, the lower the impact on overall performance.
Lord Kelvin was not wrong: we should be advancing our projects to the stage of science, not persisting with a race to the bottom as we try to minimise risk by commoditising projects. Let's use intelligence, not blunt instruments.