My stats guru colleague Dr Andrew Pratley and I are on the move to tackle Quantifornication, the plucking of numbers out of thin air. Here is the fourth in a series we are co-writing.
We all have a tendency to overestimate our knowledge, and as a result we believe we know things we don't. Our understanding of randomness is even worse – hence Nassim Taleb's book Fooled by Randomness. The problem is overconfidence: it causes us to jump to the wrong answer and believe it is right.
Let's take an example of something we all 'know'. If I rolled a die and a six came up, you'd think that unremarkable. If I rolled another six, you'd still think that reasonable. But if I continued to roll six after six – three, four or five times in a row – you'd be convinced I was cheating, that the die was loaded or something was wrong. We all know the chance of a six is 1/6. What's hard to imagine is that it's entirely possible, even if incredibly unlikely, not just to roll five sixes in a row (with a probability of 1/7776) but to roll a million sixes in a row on a fair die. In fact, it is entirely possible to roll a six an infinite number of times.
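Where does 1/7776 come from? Each roll is independent, so the probabilities multiply: (1/6) × (1/6) × (1/6) × (1/6) × (1/6). A two-line check with exact fractions:

```python
from fractions import Fraction

# Five independent rolls, each with a 1-in-6 chance of a six:
p_five_sixes = Fraction(1, 6) ** 5
print(p_five_sixes)  # 1/7776
```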
How is your head?
The very idea of infinity is not something we can grasp. There’s no mental model that makes sense, no set of rules or structures we can rely on.
Regardless, when it comes to probability we are always looking for rules or guidelines to understand the world, and this focus often doesn't serve us. Knowing the probability of five sixes in a row is 1/7776, we generalise this to mean that if we conduct the experiment 7,776 times, we should see the result exactly once. That's not true. The actual number of occurrences will fall somewhere in a distribution of outcomes ranging from 0 to 7,776. This simple example shows that we tend to talk about probabilities as fixed outcomes, when in fact they sit on a distribution – a distribution that is very difficult to grasp when we are seeking rules and guidelines.
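You can see this distribution by simulation. The sketch below repeats the "7,776 attempts" experiment many times and tallies how often the 1/7776 event occurs in each run; the number of trials and the random seed are arbitrary choices for illustration, not anything from the article. Roughly a third of the runs see the event zero times, a third see it once, and the rest see it two or more times.

```python
import random
from collections import Counter

random.seed(1)

P = 1 / 7776        # probability of five sixes on one attempt
ATTEMPTS = 7776     # attempts per experiment
TRIALS = 1000       # number of repeated experiments (arbitrary illustrative choice)

# For each experiment, count how many of the 7,776 attempts succeed.
counts = Counter(
    sum(random.random() < P for _ in range(ATTEMPTS))
    for _ in range(TRIALS)
)

for occurrences in sorted(counts):
    share = counts[occurrences] / TRIALS
    print(f"{occurrences} occurrence(s): {share:.1%} of experiments")
```

"Exactly once" turns out to be the outcome in only about a third of the runs – far from the guarantee our rule-seeking intuition expects.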
How do we apply these ideas to improving estimates of likelihood? The first and most important idea is not to see probability as a fixed outcome. A once-in-a-hundred-year flood doesn't happen every hundred years; it happens when a specific set of circumstances occurs. Understanding and quantifying those circumstances is more useful than thinking the probability is 1/100, because we gain understanding of the key drivers and can create early warning signals that let us prepare for the highly unlikely event – signals you might refer to as Key Risk Indicators.
The second idea is that most of what we’re interested in doesn’t follow predictable probabilities such as rolling dice and flipping coins. So if we are going to improve decision making we need to find better ways to make more accurate estimates of probability than plucking numbers out of thin air in a workshop. That’s right. Quantifornication.
When we have better estimates of the likelihood of a risk occurring, we need to hold onto them lightly and be responsive to new information, not dogmatic in our beliefs about data. When we're driving on a rough road, we want to use the feedback through the steering wheel to guide us, not spend all our time and effort ignoring the signals and fighting against the conditions.
Stay safe and adapt – with better measurement!