Uncertainty Wednesday: Interlude

Before covering p-values and why they are so problematic, I thought it would be a good idea to provide a bit of a recap of Uncertainty Wednesday. I have been writing this series for almost two years now, beginning in the summer of 2016. Uncertainty is everywhere in the world and yet we are generally poorly equipped to reason about it. That happens to be true even for people, like myself, who have studied some probability and statistics.

My approach in Uncertainty Wednesday has been to consider that there is an external reality which is accessible to us only through observations (which I also refer to as measurements and signals). Our task then is to make inferences about the underlying reality based on those observations, meaning we want to learn something about reality from the observations.

The key point I want to get across is that this should be an iterative process where we update explanations with the goal of improving those explanations over time. We should always be asking ourselves the following question: given the explanations we have so far and some new observations, what should we believe now? Or put differently: how do we update our explanations based on observations?

While this sounds easy enough in principle, it turns out to be quite hard to do in practice for two reasons. First, as humans we have all sorts of built-in heuristics that make updating hard, such as confirmation bias. We are much more likely to simply discard observations that do not fit with our explanation than to update our explanation. Why? Because it takes virtually no effort to ignore something and a lot of effort to revise one's explanation.

Second, we are often taught a binary view of the world, even in statistics classes. An explanation is either right or it is wrong. When we get new observations we use them to either confirm or reject the explanation. This amounts to asking the (wrong) question: given an explanation, how likely are these observations? Not likely? Well, then the explanation must be wrong. This binary approach often leads to wrong conclusions though.
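To make the binary approach concrete, here is a minimal sketch, using a hypothetical example the post does not itself use: the explanation says a coin is fair, we observe 9 heads in 10 flips, and we ask how likely observations at least that extreme are under the explanation, rejecting it if that probability falls below a threshold.

```python
from math import comb

def prob_at_least(heads, flips, p=0.5):
    """P(seeing >= heads out of flips) if the bias is p."""
    return sum(comb(flips, k) * p**k * (1 - p)**(flips - k)
               for k in range(heads, flips + 1))

# The explanation: the coin is fair (p = 0.5).
# The observations: 9 heads in 10 flips.
tail_prob = prob_at_least(9, 10)  # 11/1024, about 0.011

# The binary verdict: below the (arbitrary) 0.05 threshold, so reject.
if tail_prob < 0.05:
    verdict = "reject the explanation"
else:
    verdict = "keep the explanation"
```

Note that the verdict throws away everything except which side of the threshold the number landed on, which is exactly the loss of information the next paragraph describes.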

Why is that? An explanation establishes a hypothetical probability distribution, but our observations are a sample drawn based on the underlying reality (which may be quite different from our explanation!). And sample statistics, such as the sample mean and the sample correlation, have a distribution of their own, again based on the underlying reality. So instead of a binary accept / reject, we should use the information from the sample to update the probability distribution of our explanation.
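Here is what the updating alternative can look like, as a minimal sketch under an assumption the post does not prescribe: that our belief about a coin's bias is a Beta distribution, which a sample of flips updates via the conjugate Beta-Binomial rule rather than triggering an accept/reject verdict.

```python
def update_beta(alpha, beta, heads, tails):
    """Conjugate Beta-Binomial update: each head adds 1 to alpha, each tail 1 to beta."""
    return alpha + heads, beta + tails

# Start from a uniform prior Beta(1, 1) over the coin's bias.
# The same sample as before: 9 heads, 1 tail.
alpha, beta = update_beta(1, 1, heads=9, tails=1)  # posterior is Beta(10, 2)

# Instead of a verdict, we get a shifted distribution over the explanation;
# its mean is alpha / (alpha + beta).
posterior_mean = alpha / (alpha + beta)  # 10/12, about 0.83
```

The design point is that the output is a distribution, not a binary answer: further observations simply update alpha and beta again, which is the iterative process described above.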

In the coming Uncertainty Wednesdays I will give both formal and informal examples of this fundamental difference in approach.

Posted: 21st March 2018
Tags: uncertainty wednesday, updating, inference
