To summarise: The expected value of an action is the sum, over its possible outcomes, of each outcome's utility (value) multiplied by the probability of that outcome actually occurring. In general, it helps us to reason under uncertainty, and in situations where our intuitions give incorrect answers.

For example: *Imagine you want an ice-cream. There are two shops selling ice-cream nearby. One has an ice-cream you really like (value: 10), which it only has in stock 50% of the time. The other has one which you quite like (value: 7), which it has in stock 90% of the time.*

The first shop has an expected value of 10 × 0.5 = 5. The second has an expected value of 7 × 0.9 = 6.3. So you should go to the second shop.
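The calculation above can be sketched in a few lines of Python (the shop names are made up for illustration):

```python
def expected_value(value, probability):
    """Expected value of a single uncertain outcome: value times probability."""
    return value * probability

# The two shops from the example above.
shop_a = expected_value(10, 0.5)  # favourite ice-cream, in stock half the time
shop_b = expected_value(7, 0.9)   # second choice, almost always in stock

# Pick the shop with the higher expected value.
best = max([("shop A", shop_a), ("shop B", shop_b)], key=lambda pair: pair[1])
```

Despite the first shop stocking the better ice-cream, the comparison favours the second shop, matching the arithmetic above.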

## Problems

One noticeable problem is that you have to assign a number to utility, which can be a little fuzzy in practice. One way to get around this is to assign values that are consistent relative to each other: e.g. one outcome is assigned a value of 10, another is half as good and so gets a value of 5, and a third is three times as good and so gets a value of 30.

Another problem is that it can give seemingly silly answers when applied to particular situations, such as those combining a very high value with a very low probability: a probability of 10^-67 of a very, very, very good outcome (3^^^^^3 utilons), or of an infinitely good outcome, will still yield a very high expected value.
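The low-probability problem is easy to demonstrate numerically (the utility figure below is a stand-in, since 3^^^^^3 is far too large to represent as a float):

```python
# A minuscule probability times an astronomically large utility still
# produces an enormous expected value -- the "silly answer" noted above.
p = 1e-67          # probability from the example
utility = 1e100    # stand-in for an unimaginably good outcome
ev = p * utility   # 1e33: still enormous
```

Decision rules based purely on maximising this product will therefore chase the huge payoff no matter how implausible it is, which is the behaviour many people find counter-intuitive.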

## Further Reading

- Expected Utility: Less Wrong Wiki entry.
- Why Maximise Expected Value: Alan Dawrst's article on why you would want to maximise expected value anyway.