We are all made of stats

Children are very good at science. They start with broad priors (anything is possible) and learn, through collecting data (see picture below), what conclusions are best supported by the evidence. They experiment, make mistakes, and test variations on a theme. They learn what is dangerous; they learn what is tasty; they learn how to speak.

Kids doing science

Our responses to experience look a lot like Bayesian reasoning. Take trust as an example. If some dudette off the street – let’s call her Margaret – were to recommend a movie, say Moon, we might not heed her words, since we have no reason to think we share her taste in movies. But if, upon watching Moon, we found that we quite enjoyed it, we’d be more likely to rely on Margaret’s next tip, say Wadjda. And if Wadjda were also to our liking, we’d probably trust Margaret’s advice when she suggests Fast & Furious 6 (oops). That blunder would, in turn, reduce our confidence in her next recommendation, and so on. If we describe our experience of each movie in binary terms such as “liked” and “disliked”, the situation resembles the classic coin-toss experiment, in which one tries to determine whether a coin is biased by flipping it many times.
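The coin-toss analogy can be written down as a tiny Beta-Bernoulli model. A minimal sketch, with Margaret, the verdicts, and the flat prior all illustrative choices on my part:

```python
# Trust in Margaret's recommendations as a Beta-Bernoulli model:
# each movie is a "coin flip" (liked = True, disliked = False).

def update_trust(alpha, beta, liked):
    """Conjugate update of a Beta(alpha, beta) prior after one movie."""
    return (alpha + 1, beta) if liked else (alpha, beta + 1)

# Flat prior: no reason to trust or distrust a stranger.
alpha, beta = 1.0, 1.0

# Moon: liked. Wadjda: liked. Fast & Furious 6: disliked (oops).
for liked in [True, True, False]:
    alpha, beta = update_trust(alpha, beta, liked)

# Posterior mean probability that her next tip is a hit.
print(alpha / (alpha + beta))  # 0.6
```

Each observation nudges the posterior, just as each recommendation nudges our trust: two hits and one miss leave us at 60%, better than the coin-flip 50% we started from.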

In the process of writing this blog post I stumbled across this lovely description of much the same proposition from Tom Campbell-Ricketts in his blog:

“And since science is simply the methodical application of common sense, Bayes’ theorem can be seen to be (together with decision theory) a good model for all rational behaviour. Indeed, it may be more appropriate to invert that, and say that your brain is a superbly adapted mechanism, evolved for the purpose of simulating the results of Bayes’ theorem.”

Indeed. And the idea that intuition should work like the application of Bayes’ theorem is pretty sweet. But sadly, it seems that once we actually bring numbers into it, we tend to get it wrong, sometimes fatally so. The so-called ‘base rate fallacy’ (incorrectly downweighting or ignoring base rate information) sends us into a panic at positive test results from medical screenings for rare diseases. Meanwhile the prosecutor’s fallacy (confusing the likelihood with the posterior) has seen innocent people convicted. Cognitive dissonance and the reinforcement of stereotypes both sound like cases of failing to update priors, selectively ignoring data, or starting out with heavily biased priors. But why does it happen?
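To make the base-rate point concrete, here is a sketch with made-up numbers (not from any real screening programme): a rare disease and a test that sounds impressively accurate.

```python
# A disease with 0.1% prevalence, a test with 99% sensitivity
# and 95% specificity. Bayes' theorem gives the probability of
# actually being ill after a positive result.

prevalence = 0.001   # P(disease)
sensitivity = 0.99   # P(positive | disease)
specificity = 0.95   # P(negative | no disease)

# P(positive), by the law of total probability
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# P(disease | positive), by Bayes' theorem
posterior = sensitivity * prevalence / p_positive
print(f"{posterior:.1%}")  # prints "1.9%" -- panic not warranted
```

Intuition fixates on the 99% sensitivity and ignores the 0.1% base rate; with these numbers, false positives from the healthy majority outnumber true positives about fifty to one.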

Everything I’ve mentioned so far concerns the way our confidence in a particular idea changes over short timescales: from minutes to a human lifespan. But we also see related concepts playing out over generations. So I’ll end on a side-note and some questions. The process of evolution, with its random mutations of the genetic code, looks like an MCMC routine exploring a many-dimensional parameter space, accepting a choice of parameters if its carrier manages to survive and reproduce (assuming offspring have similar parameters). In fact, genetic algorithms (GAs) for optimisation are inspired by this phenomenon. [Edit: GAs are designed this way, while the analogy with MCMC breaks down at some point; GAs are meant to find peaks rather than sample a full distribution – but I’ll expand on that in a future blog post…] But how far can one draw parallels between evolutionary theory and the emergence of various facets of what we now call human nature? Are there similarities even when heredity is not involved? Can it help us understand our failings? Did racial prejudice evolve over years of tribal conflict through some mechanism for group survival?
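The mutation-plus-selection loop can be sketched as a minimal genetic algorithm; the one-dimensional fitness landscape, population size, and mutation scale below are all arbitrary toy choices of mine:

```python
# Random mutation plus selection-by-fitness climbs a fitness landscape.
import random

random.seed(0)

def fitness(x):
    # A one-dimensional landscape with a single peak at x = 2.
    return -(x - 2.0) ** 2

# Start with a random population of parameter values.
population = [random.uniform(-10, 10) for _ in range(20)]

for generation in range(100):
    # Selection: the fitter half survives to reproduce.
    survivors = sorted(population, key=fitness, reverse=True)[:10]
    # Reproduction with mutation: offspring resemble their parents.
    population = [p + random.gauss(0, 0.1) for p in survivors for _ in range(2)]

best = max(population, key=fitness)
print(round(best, 1))  # settles near the peak at 2.0
```

Note that the population ends up clustered at the peak rather than spread over the whole landscape, which is exactly the point of the edit above: a GA finds peaks, while an MCMC sampler would keep wandering in proportion to the distribution.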


6 thoughts on “We are all made of stats”

  1. Matt Francis

    The same evidence can be taken to strengthen completely different beliefs, depending on the prior. For instance, if some new study points to human activity contributing to global warming then this might strengthen someone’s belief that we should rethink how we generate electricity. To someone else this may be further evidence of the extent of groupthink in academia and strengthen their belief that we should stop government funding of research. I’m sure that could be cast in a Bayesian way, though I’m not sure how to mathematically describe the two different priors. There definitely seems to be some kind of mechanism in the formation of human beliefs that leads to these kinds of bifurcations, in which new evidence, paradoxically, drives beliefs further apart rather than closer together.

    1. madhurakilledar Post author

      ooh good point – same data; different outcome. But is it necessarily because of different priors? or does the difference lie in background information, or is it kind of in the data after all because of differing trust in the source of information?

      1. Matt Francis

        Ha! I just had a look back at Jaynes as I vaguely recalled there was something about this in there. If you’ve got it handy check out 5.3 “Converging and Diverging Views”. TL;DR, probability theory can lead to the same data causing beliefs to diverge, and that’s a feature not a bug (since it replicates what we see in many debates). My clumsy summary of how this works is that the data is not “there is evidence for climate change”, but “Professor X claims there is evidence for climate change”. Different individuals have priors on their belief in climate change, but also on their belief about what Professor X would claim given that climate change is or isn’t true. From that realisation the divergence of views given new evidence follows pretty trivially. I think that means we’re doomed.

  2. Pingback: Rational Agents | Truth, Beauty and a Picture of You
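The diverging-views mechanism discussed in the comments above can be put in numbers (entirely made-up ones): two agents share the same prior on H but differ in what they believe Professor X would claim, so the very same announcement pushes their posteriors in opposite directions.

```python
# A toy version of Jaynes' "Converging and Diverging Views" (5.3).
# The datum is "Professor X claims there is evidence for climate change".

def posterior(prior_h, p_claim_given_h, p_claim_given_not_h):
    """P(H | X claims it), by Bayes' theorem."""
    num = p_claim_given_h * prior_h
    return num / (num + p_claim_given_not_h * (1 - prior_h))

# Both agents start with the same prior on H...
prior = 0.4

# ...but differ in what they think Professor X would say.
trusting = posterior(prior, 0.9, 0.1)  # X reports honestly
cynical = posterior(prior, 0.5, 0.9)   # X would claim it regardless

print(round(trusting, 2), round(cynical, 2))  # 0.86 0.27
```

Same datum, same prior on H, opposite updates: the trusting agent moves from 0.4 up to 0.86, the cynical one down to 0.27.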
