When tigers attack: the ludic fallacy
The other week I asked a simple question aimed at highlighting the ludic fallacy:
Assume that a coin is fair, i.e., has an equal probability of coming up heads or tails when flipped. I flip it ninety-nine times and get heads each time. What are the odds of my getting tails on my next throw?
I’d copied this word-for-word from the Wikipedia page about the ludic fallacy.
Most people who responded did so in a very sensible and rational manner, stating that the probability of tails on the next throw was 50%. Nothing wrong with that, and as anyone with a smattering of knowledge about probability will tell you, it’s the correct answer. To those of you who suggested 50%: congratulations, you were quite correct. That I disagree is in no way a put-down or an attempt to show off. I was being unfair: I asked a sensible, rational question and you gave me a sensible, rational answer.
So why did I disagree with the correct answer? Well, if we’d been sitting at a table and I’d thrown 99 heads in a row, I defy anyone with an ounce of common sense to bet on tails (unless the returns were significantly greater than the original stake). In fact, by the tenth toss I’d expect you to be watching my hands very closely and asking some serious questions. If you still believed the odds were even by the twentieth toss, I’d be seriously questioning whether you should be allowed out unaccompanied. But that wouldn’t happen. You’d quickly call me out as a fraud and change your position on the odds of tails.
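Fat Tony’s instinct can be put on a slightly more formal footing as a Bayesian update. A minimal sketch, with the caveat that the 1-in-1000 prior on a rigged coin is my own illustrative assumption (not a figure from Taleb or the original question):

```python
from fractions import Fraction

# Illustrative prior (an assumption): we start out almost certain the coin
# is fair, allowing only a 1-in-1000 chance it's a double-headed trick coin.
p_fair = Fraction(999, 1000)
p_trick = Fraction(1, 1000)

# Likelihood of observing 99 heads in a row under each hypothesis.
like_fair = Fraction(1, 2) ** 99   # a fair coin: (1/2)^99
like_trick = Fraction(1, 1)        # a double-headed coin always shows heads

# Bayes' rule: posterior probability that the coin is fair.
posterior_fair = (p_fair * like_fair) / (p_fair * like_fair + p_trick * like_trick)

# Probability of tails on the next throw: only a fair coin can show tails.
p_tails_next = posterior_fair * Fraction(1, 2)

print(float(posterior_fair))   # vanishingly small
print(float(p_tails_next))     # nowhere near 50%
```

Even with that generous prior, 99 straight heads leaves the “fair coin” hypothesis with essentially no posterior weight, so the rational bet is heads. Dr. John’s 50% answer is only correct inside the game where fairness is guaranteed by assumption.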
Which raises the question: why would one answer differently in two different scenarios? The ludic fallacy seems to be the answer. A definition from Wikipedia:
The ludic fallacy, identified by Nassim Nicholas Taleb […] is “the misuse of games to model real-life situations”. Taleb explains the fallacy as “basing studies of chance on the narrow world of games and dice”.
I think Taleb is marvellous. I’m told by acquaintances that his books are well read and understood in The City. Yet in consulting and research he often remains something of a mystery. Which is a terrible shame given the frequency with which researchers, analysts and consultants (and thus their clients) fall victim to some of the well-known fallacies he introduces. I include my old self here. Perhaps my new (post-Taleb) self falls for them less often? Maybe.
Taleb explains the ludic fallacy through two recurring characters: Dr. John (a man of science and logical thinking) and Fat Tony (a man who lives by his wits). Dr. John is fooled by the coin tossing; Fat Tony certainly isn’t. You can read his books if you want to know more.
What I find interesting is that the ludic fallacy appears to be a special case of a wider problem. That of platonicity:
The focus on those pure, well-defined, and easily discernible objects like triangles, or more social notions like friendship or love, at the cost of ignoring those objects of seemingly messier and less tractable structures.
Data analysts are particularly prone to this, but it’s a near-universal problem. Allow me to illustrate with another of Taleb’s real-life examples:
A team of clever statisticians were tasked with assessing the financial risks to the Las Vegas gambling industry. Being composed of people like Dr. John, they carefully calculated the risks inherent in the different games of chance and concluded that all was well. Yet by far the biggest financial disaster to hit the casino industry was caused by the mauling of an entertainer by a ‘tamed’ tiger in the city’s most popular show (and a drawcard for the casino industry). Of course, none of the statisticians considered this. It was outside their platonic world of probability theory.
As I’ve become older, grumpier and a lot more sceptical, I see this sort of thing everywhere (accompanied by another fallacy: that of trusting men in smart suits in senior roles).