Probability Management
Letters after names are really, REALLY common these days. In my parents’ generation you might see M.D., or even J.D., but that was about it. Today, post-nominal initials are hard to keep up with: CSRTE? FAcEM? I have an MBA, and someone recently asked me why those initials don’t appear after my own name.
The question led me to do some research into the topic, and I noticed that just about every discipline has some kind of post-nominal, from project management and engineering all the way to cybersecurity and finance. A Google search revealed that these disciplines do have standards behind the designations.
In our business, decision science, we take a quantitative approach to decision analysis and risk management. Our models include random or stochastic variables, and we don’t use exact point estimates; like any research or study, we use ranges to express our uncertainty. We also use Monte Carlo simulations, which are just a way of doing the math for variables that follow a variety of distributions.
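To make that concrete, here is a minimal Monte Carlo sketch using only Python’s standard library. The scenario and every number in it are illustrative assumptions (a hypothetical fuel-cost estimate), not figures from any real engagement; the point is only the mechanics: express inputs as ranges, sample them many times, and report the resulting range of outcomes rather than a single point estimate.

```python
import random

# Hypothetical example: estimate annual fuel cost when both daily
# consumption and price are uncertain. All numbers are illustrative
# assumptions for the sake of the sketch.
random.seed(42)

TRIALS = 100_000
costs = []
for _ in range(TRIALS):
    # Daily consumption: a 90% confidence interval of 80-120 gallons,
    # modeled as a normal distribution (the 90% CI spans ~3.29 sigma).
    gallons = random.gauss(mu=100, sigma=(120 - 80) / 3.29)
    # Price per gallon: uniformly uncertain between $2.50 and $3.50.
    price = random.uniform(2.50, 3.50)
    costs.append(gallons * price * 365)

costs.sort()
mean = sum(costs) / TRIALS
p5 = costs[int(0.05 * TRIALS)]
p95 = costs[int(0.95 * TRIALS)]
print(f"mean annual cost: ${mean:,.0f}")
print(f"90% range: ${p5:,.0f} - ${p95:,.0f}")
```

Instead of one number, the output is a distribution: a mean plus a 90% range that honestly carries the input uncertainty through to the answer.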
In the end we are able to measure things – some very cool things – like drought in the Horn of Africa, fuel consumption on the battlefield, or even the risk of a mine flooding. More practically, our techniques can rank a portfolio of projects or measure cybersecurity risks and the value of controls. We are able to give a “go” or “no go” decision on a major investment that’s typically full of unknown risks and returns.
Why should you care? Some methods outperform others, and our methods outperform the alternatives – like human intuition and balanced scorecards.
Peer-reviewed scientific research tells us that balanced scorecard techniques and even expert intuition (gut feel) show no measurable improvement over time. In fact, we know that these methods can, at times, have a negative effect on decision outcomes. The bottom line is: if you find yourself multiplying a weight by a score, ranking projects by numbers, or using “red”, “green”, and “yellow” to analyze your risk – STOP. Just stop. You are wasting your time. The research is clear – the evidence tells us these are retrograde methods that either do not improve outcomes or damage the quality of outcomes.
Empirical research also verifies that using predictive models shows a measurable improvement in decision making. In fact, it’s been measured before – people who use Monte Carlo simulations outperform those who don’t. They definitely outperform the alternative, and that’s all we need to show to clients who want to use the method that results in the best decision.
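One reason simulations beat point estimates is the “flaw of averages” that Savage’s book is named for: plugging average inputs into a nonlinear model does not give the average outcome. The toy model below is a hypothetical sketch (capacity-capped sales with normally distributed demand; all numbers are assumptions) showing how a point estimate can systematically overstate expected profit.

```python
import random

random.seed(0)

CAPACITY = 100  # maximum units we can sell (illustrative assumption)

def profit(demand):
    # Sales are capped by capacity, so profit is NONLINEAR in demand:
    # upside demand is truncated, downside demand is not.
    return min(demand, CAPACITY) * 10  # assume $10 margin per unit

# Point-estimate approach: plug in the average demand.
avg_demand = 100
point_estimate = profit(avg_demand)  # 1000

# Monte Carlo approach: sample demand from its full distribution.
trials = [profit(random.gauss(100, 30)) for _ in range(100_000)]
simulated_mean = sum(trials) / len(trials)

print(point_estimate)            # 1000
print(round(simulated_mean))     # noticeably less than 1000
```

Because the cap truncates the upside but not the downside, the average profit over the whole demand distribution is well below the profit at the average demand – exactly the kind of error a point estimate hides and a simulation exposes.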
So what does this have to do with post-nominal initials? It occurred to me that this is hard for people to swallow. I mean, try telling a CEO that “a model of her outperforms her,” or that “his intuition shows no measurable improvement on decisions over time.” Moreover, until very recently there were no standards for predictive models, and folks were intimidated by the math – which, by the way, is trivial with current tools.
Thus the point of this post: there is a unique opportunity coming up in January 2016 for those interested in the topic of risk management, or who just need to understand an approach for making a really difficult measurement or decision in a way that outperforms the alternatives.
A Nobel Laureate in Economics, the author of The Flaw of Averages, and the author of How to Measure Anything chair the not-for-profit organization ProbabilityManagement.org. These individuals started the organization to get the word out that “There is a better way!” It is dedicated to rethinking uncertainty through education and best practices – in short, they are setting the standards for predictive models.
Harry Markowitz (Nobel Laureate and ProbabilityManagement.org Board Member), Sam Savage (Executive Director, ProbabilityManagement.org and author of The Flaw of Averages) and Doug Hubbard (President of Hubbard Decision Research and author of How to Measure Anything) will be speaking at the 2016 Annual Conference: A Common Risk Lexicon for Government and Industry. The conference will be held at Catamaran Resort & Spa, San Diego, CA January 26 - 27, 2016. For more info please visit: www.probabilitymanagement.org
"The bottom line is: if you find yourself multiplying a weight by a score, ranking projects by numbers, or using “red”, “green”, and “yellow” to analyze your risk – STOP. Just stop." - Exactly right! Well said and thanks for posting. More importantly - thanks for highlighting ProbabilityManagement.org and the upcoming conference to help further the adoption of "...a better way!"