# Prediction Markets and the Kelly Criterion, Part 4

I changed my mind; I want to stick with my toy example just a little bit longer.

Let’s change the game slightly. Instead of bringing your own bankroll, Casino Nemo gives you $1 with which to play. You can play as many rounds as you like, compounding your gains from round to round, for as many rounds as you can win in a row. And the first time you lose, you do not lose your wager; you get to keep it! But then the game is over and you do not get to play any more. So I guess the first time you lose is also the last time. There is just one catch: you have to pay a one-time fee to play.

Question: How much should you be willing to pay to play this version of the game?

I will not bore you with the details, but the expectation value of this game is actually $$\infty$$, assuming you go all-in on every bet (as you should). So you pay $1 million to play, lose on the fourth round, and take home $8. Nice work.

This little thought experiment is called the St. Petersburg paradox. Every article about the Kelly Criterion seems to mention it, although the two really have very little to do with each other, in my opinion. But who am I to argue with tradition?

The first satisfactory solution was provided by Daniel Bernoulli in 1738, who made the fascinating observation that $100 to a broke man is worth more than $100 to a millionaire. In economist-speak, the utility of money is not linear. Using simple expectation value as your goal assumes that utility is linear, which gives rise to the paradox. For expectation value to make sense as a goal, it has to be computed over a measurement of value that actually is linear to you; such a measurement is called a utility function. Bernoulli decided that a logarithmic utility function was logical; i.e., that the value to you of any sum of money is determined by its percentage of your net worth. So $100 to someone with net worth $1000 has exactly the same utility as $100,000 to someone with net worth $1 million. Equivalently, each digit you can tack on to your net worth has the same utility.
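If you want to see the divergence for yourself: losing for the first time on round $$k$$ pays out the $$2^{k-1}$$ you had just wagered, and happens with probability $$(1/2)^k$$, so every round contributes fifty cents to the expectation. Here is a minimal sketch, assuming the classical fair coin (the helper name is mine; a coin biased in your favor only makes the sum diverge faster):

```python
# Truncated expected payout of the Casino Nemo variant of the
# St. Petersburg game, assuming a fair coin (p = 1/2).
# First loss on round k pays 2**(k-1), with probability (1/2)**k,
# so each round contributes exactly 0.5 to the expectation.

def truncated_expectation(max_rounds):
    """Expected payout, counting only games that end by `max_rounds`."""
    return sum((0.5 ** k) * 2 ** (k - 1) for k in range(1, max_rounds + 1))

for n in (10, 100, 1000):
    print(n, truncated_expectation(n))  # n/2: grows without bound
```

The truncated sum is exactly $$n/2$$, so as the number of rounds grows, so does the fair price of the game, without limit.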

Note that defining “utility” like this is an assertion without proof. And not even really, you know, true. We will revisit this when we talk about Kelly skeptics.

Using such a logarithmic utility function, the St. Petersburg paradox vanishes because the expectation value is no longer infinite. Read the Wikipedia article for the gory details.

Returning now to a world where you place bets you might actually lose, what is the connection between all this and the Kelly Criterion?

In Kelly’s original paper, the goal he chose was to maximize the expected rate of return. That is, given some betting strategy that you apply for $$n$$ rounds, what is your average percentage return per round? The strategy that maximizes the expected value of that per-round compound return, as $$n$$ becomes large, is the Kelly strategy. Note that this is not merely a property of the Kelly strategy; it is the original definition.

It turns out — since percentage return is basically a logarithm and compounding (multiplying) results is just adding logarithms — that this is equivalent to maximizing your expected utility on each round using a logarithmic utility function. In fact, the Wikipedia page for the Kelly Criterion “derives” the Kelly formula from this fact, without really explaining where it comes from or why.
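This equivalence is easy to poke at numerically. For an even-money bet won with probability $$p$$, the expected log growth per round from wagering a fraction $$f$$ of your bankroll is $$p \log(1+f) + (1-p)\log(1-f)$$, and even a crude grid search (a sketch, with names of my own invention) recovers the familiar Kelly fraction $$f^* = 2p - 1$$:

```python
import math

# Expected log growth per round of an even-money bet:
# win probability p, fraction f of bankroll wagered.
def log_growth(f, p):
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

# Crude grid search over f in [0, 1) for the growth-maximizing fraction.
def best_fraction(p, steps=100_000):
    return max((i / steps for i in range(steps)),
               key=lambda f: log_growth(f, p))

p = 0.6  # example: a 60/40 coin at even odds
print(best_fraction(p))  # 0.2, i.e. the Kelly fraction 2*p - 1
```

Maximizing expected log wealth per round and maximizing long-run compound growth pick out the same $$f$$; that is the whole content of the equivalence.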

Kelly pointed out in his paper that maximizing the expected logarithm of your bankroll per bet is a consequence of his goal to maximize the compound rate of return, and it has nothing to do with any particular concept of “utility”.1 But that has not stopped lots of people from confusing the two.

Given this defining property of the Kelly Criterion, it is perhaps not so surprising that several people who are famous for their ability to generate large annualized returns are also notable proponents of the Kelly Criterion.

We will meet one of them… next time.

1. Logarithmic utility has various implications in this context; for example, $$\log 0 = -\infty$$. Losing one dollar is OK; losing your last dollar is very, very, very bad. Consequently, the Kelly formula will not permit any nonzero chance of losing all of your money. The formula tells you to go all-in only when $$p = 1$$; i.e., when it is a sure bet. If you are in the habit of making such bets, you do not need Kelly or anyone else to tell you how to size them.