My stats guru colleague Dr Andrew Pratley and I are on the move to tackle Quantifornication, the plucking of numbers out of thin air. Here is the seventh in a series we are co-writing.
Nothing is certain. Not even death and taxes. Because the only certainty is uncertainty, there's no foolproof way to make the right decision. Most organisations deal with this by imposing a time constraint, forcing a decision, however imperfect it may be.
While keeping the organisation moving by forcing decisions is admirable, the organisations we work with that prove most successful also look to improve the information available: some high-level research, some investigation of data, or similar. Few, however, do the hard-smart work that Andrew and I help organisations do: creating hypotheses and then testing them harder and harder over time. Andrew calls this directionality. Directionality can be thought of as a vector: it has both direction and magnitude. Its value is that it leads us to better decisions by assuming we are making the right decision and then testing that assumption over time.
Much of the current approach to quantifying uncertainty in risk management involves developing calibrated ranges, based on estimated values, and then simulating the results. This is a vast improvement over guessing a single value and hoping it's right. What this approach misses is that we're trying to avoid uncertainty by creating artificial distinctions. Using this approach is like drawing state borders within a country. Lines on a map are fine until things go wrong. All of a sudden we find that those lines split up families, cut off access to medical care and stop kids going to school. Consequences we never saw until we started relying on the lines.
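To make the simulation approach concrete, here is a minimal sketch in Python. The two cost inputs, their three-point calibrated ranges and the triangular distribution are all invented for illustration; they are not from our work with any particular organisation.

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

def simulate_total_cost(trials=10_000):
    """Draw each input from its calibrated range and sum the results."""
    totals = []
    for _ in range(trials):
        # triangular(low, high, most_likely) mimics a calibrated 3-point estimate
        labour = random.triangular(80, 140, 100)     # hypothetical $k range
        materials = random.triangular(40, 90, 60)    # hypothetical $k range
        totals.append(labour + materials)
    return totals

totals = sorted(simulate_total_cost())
p5, p50, p95 = (totals[int(len(totals) * q)] for q in (0.05, 0.50, 0.95))
print(f"5th percentile: {p5:.0f}, median: {p50:.0f}, 95th percentile: {p95:.0f}")
```

The output is a distribution of outcomes rather than a single guess, which is exactly the improvement described above; the critique is that the ranges themselves are still lines we have drawn.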
Directionality embraces the underlying uncertainty. What we work towards is seeing how our understanding changes as additional information arrives. The simplest example is how we can infer more about a decision by conducting a survey, where the amount of directionality is related to the sample size we select. Suppose we want to know whether we should return to working from the office next month. We ask one person and they say 'yes'. That's 100% agreement with returning to the office. Would you re-sign the lease based on this? Of course not. If you had the same result from a group of ten, you'd be a bit more inclined. From a group of 100, it would be hard to ignore. If you asked every single person and they all said yes, you'd know what to do. The strength of the directionality is proportional to the sample size. There's no single right sample size, but finding the same result across a larger group gives a greater sense of directionality. Statistics formalises this: as the sample grows, the confidence interval around the result shrinks.
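The shrinking interval can be seen with a few lines of Python. This sketch uses the standard normal-approximation (Wald) interval for a proportion, and an illustrative 70% 'yes' rate rather than the unanimous result above, because a 100% sample gives a degenerate zero-width interval under this approximation.

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """95% normal-approximation confidence interval for a proportion."""
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - half), min(1.0, p_hat + half)

# Illustrative: 70% of respondents say 'yes' at increasing sample sizes.
for n in (10, 100, 1000):
    lo, hi = wald_ci(0.7, n)
    print(f"n={n:4d}: 95% CI ({lo:.2f}, {hi:.2f}), width {hi - lo:.2f}")
```

Because the interval width scales with 1/√n, a sample 100 times larger gives an interval 10 times narrower: the same 'yes' answer carries more directionality.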
The value of sampling is that it increases your confidence in a decision without committing you to a belief you can't revise, and without trying to create a world that isn't inherently uncertain. If we ever needed evidence of that underlying uncertainty, 2020 has provided it in a way few could have imagined.
This concludes the series I have written jointly with Andrew on defeating quantifornication. I hope we have got you thinking and you are planning your next steps to improve decision making in your organisation.
Stay safe and adapt – with better measurement!