Prediction markets and the Kelly Criterion, part 2

Welcome to Casino Nemo! You will like it here.

We have this game where you place a bet and then we roll a fair six-sided die. If it lands 1 or 2, we keep your bet; if it lands 3 through 6, you get back twice your bet (a 1:1 payoff that you win with probability \(\frac{2}{3}\)).

As I said, you will like it here.

Pretend that you have not read Part 1 and consider: How much should you bet?

The answer is… “It depends”.

Suppose you are visiting Nemoville (Nemoton? Nemostan?) for a ten-day vacation, and we only let you play this game once per day. Suppose further that your spouse gives you a strict allowance of $100 each day for gambling. It is fairly clear, I think, that you should bet the entire $100 every day. You will win around 2/3 of the time, so you expect to finish the trip with roughly \(\$200 \times 10 \times \frac{2}{3} \approx \$1333\), and no other strategy has a higher expectation value. In fact, the more days you play, the better off you become relative to other strategies (both in expected total wealth and in the probability of ending up ahead of them) by betting your entire $100 allowance every day.
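If you want to sanity-check that number, here is a quick Monte Carlo sketch (the function names are mine, purely for illustration):

```python
import random

def play_round(bet, p_win=2/3):
    """One round at Casino Nemo: 2/3 chance of a 1:1 payoff, else the bet is lost."""
    return 2 * bet if random.random() < p_win else 0

def vacation_winnings(days=10, allowance=100):
    """Bet the whole $100 allowance each day and pocket any payout."""
    return sum(play_round(allowance) for _ in range(days))

# Average over many simulated vacations; this should land near $1333.
trials = 100_000
print(sum(vacation_winnings() for _ in range(trials)) / trials)
```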

Call this strategy of always betting everything “Strategy A”.

Now, suppose that when you return the following year, your spouse changes the rules and gives you a single $1000 allowance for the entire 10 days. And you are allowed to compound, i.e., to roll your winnings/losses forward from one day to the next.

If you follow Strategy A and bet your entire bankroll every day for 10 days, there is a \(1-\left(\frac{2}{3}\right)^{10} \approx 98.3\%\) chance you will lose one of the 10 bets and thus all of your money. You do have a chance of winning all 10 bets and $1.024 million, but that is only 1.7%. If we extend this game to 20 or 30 days, your chances of winning every bet become vanishingly small very quickly.
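A quick check of those numbers:

```python
p_survive = (2 / 3) ** 10    # probability of winning all 10 bets
print(1 - p_survive)         # ~0.9827: the chance you go bust at some point
print(1000 * 2 ** 10)        # 1024000: your bankroll if you never lose
```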

Note that the payoff for Strategy A, if you do manage to win and compound over many days, becomes ludicrously huge; so huge that this strategy still has a higher expectation value than any other. Yet if you play it long enough (and probably not even very long), you will almost surely lose everything.
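To put numbers on “ludicrously huge”: after \(n\) compounded all-in rounds, starting from $1000, each round multiplies your wealth by 2 with probability \(\frac{2}{3}\) and by 0 otherwise, so

\[
E[W_n] = \$1000 \cdot \left(\frac{2}{3} \cdot 2\right)^n = \$1000 \cdot \left(\frac{4}{3}\right)^n \to \infty,
\qquad
P(W_n > 0) = \left(\frac{2}{3}\right)^n \to 0.
\]

The expectation explodes even as the probability of ever seeing any of it vanishes.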

So… Perhaps maximizing expected payoff is not the best goal. But then what is?

Maybe we can simplify the problem. Let’s reduce your vacation to just two days. You have your $1000 allowance, and you get to roll your win/loss from Day 1 into Day 2.

Four things can happen:

  1. You win both days (4 chances in 9)
  2. You win on the first day but lose on the second (2 chances in 9)
  3. You lose on the first day but win on the second (also 2 chances in 9)
  4. You lose both days (1 chance in 9)
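(Where do those chances come from? The two days are independent, and you win each with probability \(\frac{2}{3}\); so, for example, case (1) has probability \(\frac{2}{3} \times \frac{2}{3} = \frac{4}{9}\) and case (4) has probability \(\frac{1}{3} \times \frac{1}{3} = \frac{1}{9}\).)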

Now, Strategy A (bet it all both days) will leave you with $4000 in case (1) and $0 in the other cases, for an expected value of \(\$4000 \times \frac{4}{9} \approx \$1778\). And this is the highest expectation of any strategy.

On the other hand, Strategy A leaves you with nothing 5 times out of 9, which is more than half the time. So maybe you should try something else?

Define “Strategy Z” as: Bet zero, always.

We could say one strategy is “better” than another if it is more likely to win head-to-head. Like say you are on vacation with the neighbors, and your spouse does not care how much money you win or lose, as long as you wind up with more than the neighbors.

By this definition, how does Strategy Z compare to Strategy A? Well, A beats Z 4 times out of 9 via case (1), but loses 5 times out of 9. So, by this definition, Z is better. (Sometimes the only way to win is not to play.)

We can toy with other ideas. Consider “Strategy ZA”: Bet zero on the first day and everything on the second.

Let’s compare this to Strategy Z. In case (1), ZA wins by leaving you with $2000 versus Z’s $1000. Similarly for case (3). ZA does lose to Z in cases (2) and (4), but those only combine to 3 chances out of 9. So Strategy ZA beats strategy Z 6 times out of 9 and is therefore “better”.

To recap: By this definition of “better”, Z is better than A, and ZA is better than Z.

So it must follow that ZA is better than A, right? Let’s check.

Case (1) – A wins. Case (2) – tie. Case (3) – ZA wins. Case (4) – tie. (Verify these for yourself.) But Case (1) has 4 chances in 9, while Case (3) only has 2 in 9. Therefore, A is actually better than ZA.
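If you would rather let a computer grind through these case analyses, here is a minimal sketch (the strategy encoding and helper names are mine, not anything official):

```python
from fractions import Fraction
from itertools import product

P_WIN = Fraction(2, 3)  # chance of winning each day's 1:1 bet

def final_wealth(strategy, outcome, bankroll=Fraction(1000)):
    """Wealth after two days. strategy = fraction of bankroll bet each day,
    e.g. (1, 1) is Strategy A; outcome = (won_day1, won_day2)."""
    for frac, won in zip(strategy, outcome):
        bet = frac * bankroll
        bankroll += bet if won else -bet
    return bankroll

def p_beats(s1, s2):
    """Probability that s1 ends with strictly more money than s2."""
    total = Fraction(0)
    for outcome in product([True, False], repeat=2):
        p = Fraction(1)
        for won in outcome:
            p *= P_WIN if won else 1 - P_WIN
        if final_wealth(s1, outcome) > final_wealth(s2, outcome):
            total += p
    return total

A, Z, ZA = (1, 1), (0, 0), (0, 1)
print(p_beats(Z, A))    # 5/9: Z is "better" than A
print(p_beats(ZA, Z))   # 2/3: ZA is "better" than Z
print(p_beats(A, ZA))   # 4/9 (vs. 2/9 the other way): A is "better" than ZA
```

Using exact `Fraction` arithmetic keeps the 4/9-versus-2/9 bookkeeping honest instead of trusting floating point.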

All of which is a long way of saying that this notion of “better” is not transitive, and therefore not an ordering, which means “better” is a pretty bad name for it (see related video). We just got ourselves into a rock/paper/scissors game with the neighbors. I hate it when that happens.

I stumbled across this example while trying to reason my own way toward the Kelly Formula. It turns out this does not work, and Kelly-type arguments have little or nothing to say about examples like this. To arrive at Kelly, we have to simplify our example not by reducing the rounds to two, but by increasing them to infinity. Once we do that, an analogous definition of “better” actually does produce an ordering on betting strategies; and under that ordering, Kelly is “better” than anything else in the long run.
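(For reference, the standard Kelly fraction for a bet paying \(b:1\) with win probability \(p\) is

\[
f^* = p - \frac{1-p}{b},
\]

which for our game (\(p = \frac{2}{3}\), \(b = 1\)) says to bet \(f^* = \frac{2}{3} - \frac{1}{3} = \frac{1}{3}\) of your bankroll each day.)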

But the whole framework kind of breaks down for finite cases, which is one reason those Nobel laureates were non-fans of the Kelly Criterion. Another is whether beating the neighbor is actually the right goal.

More next time.
