Recently I attended the second ever Bayesian Young Statisticians’ Meeting (BAYSM’14) in Vienna, which was a really stimulating experience, and something pretty new for me, being my first non-astronomy conference. I won a prize for my talk too, which was pretty sweet!

During the two-day overview of theory and a variety of applications by the newest people in the field (read about the highlights over at the blogs of Ewan Cameron and Christian Robert), we heard from a few keynote speakers including Chris Holmes. In his talk, he mentioned the world of rational decision makers as envisioned by Leonard J. Savage in his 1954/1972 tome *The Foundations of Statistics* (adding that to my ‘to read’ list), and went on to describe the application of a *loss function* and *minimax* to avoid worst-case scenarios. Minimax isn’t the only approach to decision-making; I think other approaches are more relevant to our behaviour, as I’ll describe later.

“If you lived your life according to minimax, you’d never get out of bed” – C. Holmes

I didn’t confirm if Holmes agrees with the notion that people are rational agents; I do, but my fellow participant Achim Doerre was skeptical, and we spent half the day arguing about what drives our decisions. Seeing as it was coffee break, our choice of pastries provided a convenient and ideal test case. We had walked in to find a delicious tea-break spread, including croissants and pastries with various fillings; etiquette demanded that we pick one (perhaps going back for seconds later).

While the best course of action seemed obvious (to me), Achim made a good point about the non-obvious choice between two equally good options. Indeed, what reason have I to prefer the berry filling over apricot? Both are less messy than the croissant (which I’d made the mistake of having the day before), both are sweet, but neither is my ideal – a little too tangy and strong for my taste. Do I then choose randomly with 50% chance of selecting either? Or shall I misappropriate Andrew Jaffe’s suggestion and eat half of each, and then umm… give some poor sod the rest? Do we, in fact, make a decision by sampling from the decision-space weighted by expected benefit? Achim has since pointed out that explicit treatment of this type of scenario is notably absent from standard decision theory…
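That last question can at least be made concrete. Here is a minimal sketch (with invented benefit numbers; `sample_decision` is a hypothetical helper, not a term from the decision-theory literature) of what “sampling from the decision-space weighted by expected benefit” could look like:

```python
import random

# Hypothetical expected benefits for the pastry dilemma (numbers invented).
expected_benefit = {"berry": 0.53, "apricot": 0.53, "croissant": 0.45}

def sample_decision(benefits):
    """Pick a decision with probability proportional to its expected benefit,
    rather than deterministically taking the argmax."""
    decisions = list(benefits)
    return random.choices(decisions, weights=[benefits[d] for d in decisions])[0]

# Equally good options (berry vs apricot) are then chosen equally often,
# while the messier croissant is merely less likely, not impossible.
```

Unlike the argmax rule described below, this scheme has no trouble with exact ties – two equally good pastries simply get equal probability.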

I’ve written earlier about how human intuition and belief respond to experience via Bayesian updating (though we suck pretty badly at doing this consciously). Correspondingly, human choices reflect the use of a *utility function*, u(), that defines desired outcomes. More specifically, humans are rational agents who aim to maximise the expected utility when they make a decision (not always consciously). To determine the expected utility, *Û*, one marginalises over all possible *events*, *E*, that affect the outcome, weighting each by the probability, *Pr(E*|*D)*, of it occurring given a particular decision, *D*.

*Û*(*D*) = ∫ u(*E*)*Pr(E*|*D*) d*E*
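As a toy illustration of this integral in the discrete case (every utility and probability below is invented for the pastry scenario), Û(D) becomes a weighted sum:

```python
# u(E): how much we enjoy each possible eating experience (invented values).
utilities = {"delicious": 1.0, "ok": 0.5, "too_tangy": 0.1, "messy": 0.0}

# Pr(E|D): probability of each experience given each pastry choice (invented).
prob_given_decision = {
    "berry":     {"delicious": 0.3, "ok": 0.4, "too_tangy": 0.3, "messy": 0.0},
    "apricot":   {"delicious": 0.3, "ok": 0.4, "too_tangy": 0.3, "messy": 0.0},
    "croissant": {"delicious": 0.4, "ok": 0.1, "too_tangy": 0.0, "messy": 0.5},
}

def expected_utility(decision):
    """U-hat(D) = sum over events E of u(E) * Pr(E|D): the discrete analogue
    of the integral above."""
    return sum(utilities[e] * p for e, p in prob_given_decision[decision].items())

best = max(prob_given_decision, key=expected_utility)
```

With these made-up numbers, berry and apricot tie exactly, reproducing Achim’s dilemma: maximising expected utility alone gives no way to break the tie.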

The best decision is the one with the largest expected utility; while it doesn’t necessarily lead to the best outcome, it is the best choice given all knowledge available to oneself at the time of the decision. But even with this definition of “best”, we don’t always get it right, because we’re often unaware of what we desire or how to define it. Over lunch, Achim and I recounted times in our lives when we’d applied decision theory to weightier issues like choosing a house or a career. We’d identify the important characteristics of the options; assume a linear model combining these characteristics; score each option on each characteristic, weighted by its importance; plug in the numbers and hey presto: major life decision made! But inevitably, even though the computer said *“do option A”*, we found ourselves thinking *“No! I wanted to do B!”*. Clearly then, what we want is B, but our conscious attempt to describe our own utility function was flawed.
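The lunch-table procedure can be sketched like this (the criteria, weights, and ratings are all invented for illustration; none of them come from the actual conversation):

```python
# A naive linear utility model for a life decision (all numbers hypothetical).
# weights: how much we consciously *think* each criterion matters.
weights = {"salary": 0.5, "location": 0.3, "interest": 0.2}

# How each option rates on each criterion, on a 0-10 scale (hypothetical).
options = {
    "A": {"salary": 9, "location": 6, "interest": 4},
    "B": {"salary": 5, "location": 7, "interest": 9},
}

def score(option):
    """Linear model: weighted sum of the option's ratings."""
    return sum(weights[c] * options[option][c] for c in weights)

recommendation = max(options, key=score)  # the computer says "do option A"
```

If the recommendation still feels wrong, the flaw most likely sits in the weights – our stated utility function – rather than in the arithmetic.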

An important consequence of this paradigm is the interpretation that when people appear to act irrationally (under presumably the same knowledge), it is simply because their utility function does not match ours. The most terrifying truth about humanity is that all people act “rationally” in the manner I’ve described. Even the most cruel and horrible of acts have some reasoning behind them. Notwithstanding rare pathological cases of clinical insanity, those we would call “evil” are not aliens and monsters. They are human. And worse still, their utility functions contain the same motivations that exist within all of us, but weighted differently. We all are driven by love, fear, empathy, and the hunger for power.

Brendon J. Brewer: Hi Mud. Do you know of a good article or book giving the argument for why it’s expected utility, and not some other quantity, that should be maximised? It seems very plausible to me, and I can come up with some thought experiments that seem to suggest other possible decision theories are ruled out. However I don’t think I’ve ever read a proof of a theorem along these lines.

That said, I’m not the kind of person who finds it easy to understand proofs of theorems.

madhurakilledar (Post author): hehe, you know I’m pretty sure my only reference here is B. Brewer circa 2009. Anyway, are you asking if utility *should* be maximised, or if utility *is* maximised?

Madhura Killedar (Post author): You know, the more I think about it, the more I worry that any “proof” would be circular: utility defines “should”

Brendon J. Brewer: There are arguments in favour of probability theory for uncertainty, along the lines of ‘any theory of uncertainty that isn’t equivalent to probability theory is illogical/self-contradictory’: the Cox derivation and the Dutch book argument are examples, and the take-home message is that Bayesianism is normative, not necessarily descriptive. I’m led to believe there are similar arguments for basing decision theory on expected utility maximisation, but I just don’t know the classic references.

Madhura Killedar (Post author): …aaand one year later, I find myself reading homework along the same lines:

http://plato.stanford.edu/archives/win2015/entries/rationality-normative-utility/#ArgForExpUtiThe

Still not obvious what the classic texts should be; the ones mentioned are: Savage 1972 (as above), Jeffrey 1983, and Zynda 2000 (bit new though). I’m mostly seeing references to alternative approaches to decision theory and so-called paradoxes.
