Prediction Markets and the Kelly Criterion, Part 5

Perhaps the most famous proponent of the Kelly Criterion is Edward Thorp. He invented card counting while teaching mathematics at M.I.T., published various papers on gambling and investing, and became both a professor of mathematics and a billionaire investor. The Kelly Criterion played a key role in most of these; he dubbed it “Fortune’s Formula”.

Thorp has authored various articles about the Kelly Criterion over the years, e.g. The Kelly Criterion and the Stock Market. These typically list six properties related to the Kelly formula which I will now attempt to paraphrase:

  1. If your expected compound rate of growth is positive, your wealth will approach \(\infty\) over time
  2. If your expected compound rate of growth is negative, your wealth will approach zero over time
  3. If your expected compound rate of growth is zero, your wealth will approach both \(\infty\) and zero (i.e. make arbitrarily wide swings) over time
  4. The ratio between the performance of the Kelly strategy and that of any other strategy will approach \(\infty\) over time
  5. The expected time to reach any fixed target wealth is shorter for Kelly than for any other strategy
  6. To maximize your expected rate of growth over many rounds, you can simply maximize the expected logarithm of your wealth each round, even if the exact probabilities and payoffs change from round to round

At least a couple of these results were first established by Thorp himself in the 60s.

To reiterate the context: We assume you have some “edge” in gambling or investing, and you are going to make a large sequence of bets/investments using that edge, compounding your results over time. These properties — and the Kelly formula itself — are about your strategy for sizing each bet. (If you have no edge, you should not be making bets in the first place.)

Properties (1), (2), and (3) say you do not have to use the Kelly formula to do (very) well; smaller or even somewhat larger bets will work fine. But be careful not to make your bets too large or you are very surely going to do (very) poorly.
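
To make that concrete, here is a minimal simulation sketch (mine, not Thorp’s) using the dice game from Parts 2 through 4: win probability 2/3, 1:1 payoff, and a fixed fraction of your bankroll bet every round. The function name and parameters are my own inventions.

```python
import math
import random

def avg_log_growth(fraction, rounds=100_000, seed=1):
    """Average log growth per round when betting a fixed fraction of wealth
    on a game you win with probability 2/3 at 1:1 odds."""
    rng = random.Random(seed)
    log_wealth = 0.0
    for _ in range(rounds):
        won = rng.random() < 2 / 3
        log_wealth += math.log((1 + fraction) if won else (1 - fraction))
    return log_wealth / rounds

for f in (0.1, 1 / 3, 0.5, 0.9):  # under-bet, Kelly, over-bet, way over-bet
    print(f"f = {f:.2f}: average log growth per round ~ {avg_log_growth(f):+.4f}")
```

The Kelly fraction 1/3 should show the largest growth rate, the smaller and moderately larger fractions positive but smaller ones, and the 0.9 bettor a negative one; that is properties (1) through (3) in miniature.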

Property (4) is essentially the one I have mentioned already: As the number of bets goes up, Kelly is increasingly likely to outperform any other strategy, and that outperformance is likely to grow toward \(\infty\) over time.

Property (5) says Kelly bets are the fastest expected way to reach a betting/investment target.

Property (6) says it is valid to apply the Kelly Criterion to situations more complex than (e.g.) my little toy example with the dice.

Other billionaire investors known or strongly suspected of using Kelly methods include Warren Buffett, George Soros, and James Simons. That is some impressive company.

On the flip side, the most prominent critic of the Kelly Criterion was probably Paul Samuelson. A Nobel prizewinner in Economics, he wrote about the Kelly formula several times, the most amusing surely being his NSF-funded academic paper consisting entirely of one-syllable words. He was presumably trying to make it accessible to the less gifted; Prof. Samuelson was apparently a bit of an a**hole. My kind of guy.

Now, nobody likes to laugh at economists more than I do. And stodgy academics telling colorful billionaires how to invest certainly seems a ripe opportunity.

But this is really not fair. Those same academics would also tell a lottery winner he should never have bought a ticket, and they would be right. The details, and not the outcomes or personalities, are what matter.

We will ponder some of those details in the next installment.

Prediction Markets and the Kelly Criterion, Part 4

I changed my mind; I want to stick with my toy example just a little bit longer.

Let’s change the game slightly. Instead of bringing your own bankroll, Casino Nemo gives you $1 with which to play. You can play as many rounds as you like, compounding your gains from round to round… For as many rounds as you can win in a row. And the first time you lose, you do not lose your wager; you get to keep it! But then the game is over and you do not get to play any more. So I guess the first time you lose is also the last time.

There is just one catch. You have to pay a one-time fee to play.

Question: How much should you be willing to pay to play this version of the game?

I will not bore you with the details, but the expectation value of this game is actually \(\infty\), assuming you go all-in on every bet (as you should). So you pay $1 million to play, lose on the fourth round, and take home $8. Nice work.
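
For the morbidly curious, the one-line version of those details: winning \(k\) rounds in a row and then losing happens with probability \((\frac{2}{3})^k \cdot \frac{1}{3}\), and it leaves you holding \(2^k\) dollars, so

\[
E = \sum_{k=0}^{\infty} \left(\frac{2}{3}\right)^k \cdot \frac{1}{3} \cdot 2^k = \frac{1}{3}\sum_{k=0}^{\infty} \left(\frac{4}{3}\right)^k = \infty.
\]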

This little thought experiment is called the St. Petersburg paradox. Every article about the Kelly Criterion seems to mention it, although they really have very little to do with each other, in my opinion. But who am I to argue with tradition?

The first satisfactory solution was provided by Daniel Bernoulli in 1738, who made the fascinating observation that $100 to a broke man is worth more than $100 to a millionaire. In economist-speak, the utility of money is not linear. Using simple expectation value as your goal assumes that utility is linear, which gives rise to the paradox.

For expectation value to make sense as a goal, it has to be computed over a measure of value in which your preferences really are linear. Such a measure is called a utility function. Bernoulli decided that a logarithmic utility function was logical; i.e. that the value of a gain of money depends on its size relative to your net worth. So $100 to someone with net worth $1000 has exactly the same utility as $100,000 to someone with net worth $1 million. Equivalently, each digit you can tack on to your net worth has the same utility.
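
In symbols, with the numbers from that example:

\[
\log(1{,}100) - \log(1{,}000) = \log(1.1) = \log(1{,}100{,}000) - \log(1{,}000{,}000).
\]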

Note that defining “utility” like this is an assertion without proof. And not even really, you know, true. We will revisit this when we talk about Kelly skeptics.

Using such a logarithmic utility function, the St. Petersburg paradox vanishes because the expectation value is no longer infinite. Read the Wikipedia article for the gory details.

Returning now to a world where you place bets you might actually lose, what is the connection between all this and the Kelly Criterion?

In Kelly’s original paper, the goal he chose was to maximize the expected compound rate of growth. That is, given some betting strategy that you apply for \(n\) rounds, what is your average compound return per round? The strategy that maximizes the expected value of that per-round compound return, as \(n\) becomes large, is the Kelly strategy. Note that this is not merely one property of the Kelly strategy; it is the original definition.

It turns out — since a compound growth rate is essentially a logarithm, and compounding (multiplying) results is just adding logarithms — that this is equivalent to maximizing your expected utility on each round using a logarithmic utility function. In fact, the Wikipedia page for the Kelly Criterion “derives” the Kelly formula from this fact, without really explaining where it comes from or why.
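
For anyone who wants to see that equivalence in action, here is a quick numerical check (a sketch of mine, not Wikipedia’s derivation) that maximizes the expected log of a single round of the dice game from Parts 2 and 3:

```python
import math

p, b = 2 / 3, 1.0  # win probability and net 1:1 odds for the dice game

def expected_log(f):
    """Expected log of wealth after one round, betting a fraction f of it."""
    return p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)

# A crude grid search is plenty for an illustration.
best = max((i / 10_000 for i in range(10_000)), key=expected_log)
print(f"fraction maximizing E[log wealth] ~ {best:.4f}")  # ~0.3333, i.e. p - q/b
```

The maximizer lands at 1/3, which is exactly the \(p-q\) Kelly bet from Part 3.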

Kelly pointed out in his paper that maximizing the expected logarithm of your bankroll per bet is a consequence of his goal to maximize the compound rate of return, and it has nothing to do with any particular concept of “utility”.1 But that has not stopped lots of people from confusing the two.

Given this defining property of the Kelly Criterion, it is perhaps not so surprising that several people who are famous for their ability to generate large annualized returns are also notable proponents of the Kelly Criterion.

We will meet one of them… next time.

1. Logarithmic utility has various implications in this context; for example, \(\log 0 = -\infty\). Losing one dollar is OK; losing your last dollar is very, very, very bad. Consequently, the Kelly formula will not permit any nonzero chance of losing all of your money. The formula only tells you to go all-in when \(p = 1\); i.e., it’s a sure bet. If you are in the habit of making such bets, you do not need Kelly or anyone else to tell you how to size them.

Prediction Markets and the Kelly Criterion, Part 3

Let me continue with my example from Part 2. Yes, this example is a toy. But I believe that studying simple cases can help to understand complex ones.

To recap, we have a game where you place a bet that you will win with probability \(p = \frac{2}{3}\) and that pays off 1:1. You have a $1000 bankroll to play this game once per day for two days. You may compound (roll) any win/loss from the first day into the second.

We compared three betting strategies:

  • Strategy A (“Rock”): Go all in, always
  • Strategy Z (“Paper”): Bet nothing, always
  • Strategy ZA (“Scissors”): Bet nothing on the first day and go all in on the second

Changing terminology slightly, let’s say that one strategy “beats” another if it is more likely to leave you with more money in a head-to-head comparison.

We saw last time that — for this two-day game — Paper beats Rock, and Scissors beats Paper, and Rock beats Scissors.

Consider one more strategy:

  • Strategy K: Bet \(\frac{1}{3}\) of your current bankroll, always

This is the Kelly bet for this game. The math is simple. When the payoff is 1:1, the Kelly formula reduces to \(p-q\). For this game, \(p = \frac{2}{3}\) and thus \(q = 1-p = \frac{1}{3}\), so Kelly says to bet \(\frac{2}{3}-\frac{1}{3} = \frac{1}{3}\) of your bankroll.

(Note: This hypothetical game has positive expectation; that is, the payoff is more than sufficient to compensate for your chance of losing. If you study any actual casino game and plug its numbers into the Kelly formula, you will get a negative answer, which is Kelly’s way of telling you to take the other side of the bet.)

You can check for yourself that strategy K beats A, but over a mere two days it actually loses to Z and to ZA. The ZA case is easy to see, since ZA leaves you with $2000 six times out of nine, while the best Kelly can do is win twice, leaving you with \($1000 * \frac{16}{9} \approx $1778\); Z wins whenever Kelly drops either of the two rounds, which happens five times out of nine. So much for calling it “Dynamite”; over two days it only blows up Rock. And we will pretend I designed the example this way on purpose.
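
If you want the computer to do the checking for you, here is a small enumeration sketch (the layout and names are mine) over the nine equally likely two-day outcomes:

```python
from fractions import Fraction
from itertools import product

DAY = (True, True, False)  # each day: win, win, lose (equally likely thirds)

STRATEGIES = {                       # fraction of bankroll bet on day 1, day 2
    "A":  (Fraction(1), Fraction(1)),
    "Z":  (Fraction(0), Fraction(0)),
    "ZA": (Fraction(0), Fraction(1)),
    "K":  (Fraction(1, 3), Fraction(1, 3)),
}

def final_wealth(fractions, outcomes, bankroll=Fraction(1000)):
    for f, won in zip(fractions, outcomes):
        bet = f * bankroll
        bankroll += bet if won else -bet
    return bankroll

def wins(x, y):
    """In how many of the 9 outcomes does strategy x end richer than y?"""
    return sum(final_wealth(STRATEGIES[x], o) > final_wealth(STRATEGIES[y], o)
               for o in product(DAY, repeat=2))

for other in ("A", "Z", "ZA"):
    print(f"K vs {other:>2}: K wins {wins('K', other)}/9, {other} wins {wins(other, 'K')}/9")
```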

Now wait a minute… Did we just beat the Kelly Criterion?

Yes. Yes, we did. For the two-day version of this game.

But look at Strategy ZA and tell me how to extend it to three days. Or 10 days, 1000 days, 1 million days… You will find it becomes harder and harder to develop any strategy to beat Kelly’s simple “always bet \(\frac{1}{3}\)”. This includes adaptive approaches that change strategy based on your win/loss record.

I want to mention again that, in all cases, Strategy A (good old Rock) still has the highest expectation value. For example, if you come back every year for 100 years and play the 10-day game with Strategy A, you will probably win the $1 million once or twice, which is enough to outrun Kelly’s expected ~$2900 per year. You will still go bust the other 98 years, of course.

And if we extend the game to 100 days, and you stick with Strategy A, you have to come back for something like \(10^{17}\) years for a decent shot at seeing your astronomical payoff and pulling ahead.

I believe I have now beaten this example into the ground, and I am debating what direction to head. Tune in next time to find out.

Prediction markets and the Kelly Criterion, part 2

Welcome to Casino Nemo! You will like it here.

We have this game where you place a bet and then we roll a fair six-sided die. If it lands 1 or 2, we keep your bet; if it lands 3 through 6, you get back your bet times two (1:1 payoff).

As I said, you will like it here.

Pretend that you have not read Part 1 and consider: How much should you bet?

The answer is… “It depends”.

Suppose you are visiting Nemoville (Nemoton? Nemostan?) for a ten-day vacation, and we only let you play this game once per day. Suppose further that your spouse gives you a strict allowance of $100 each day for gambling. It is fairly clear, I think, that you should bet the entire $100 every day. You will probably win around 2/3 of the time, so you expect to finish the vacation with roughly \($200 * 10 * \frac{2}{3} = $1333\), and no other strategy has a higher expectation value. In fact, the more days you play, the better off you become relative to other strategies (both in expected wealth and in your likelihood of coming out ahead) by betting your entire $100 allowance every day.

Call this strategy of always betting everything “Strategy A”.

Now, suppose when you return the following year, your spouse changes the rules and gives you a single $1000 allowance for the entire 10 days. And you are allowed to compound; i.e. roll your winnings/losses forward from one day to the next.

If you follow Strategy A and bet your entire bankroll every day for 10 days, there is a \(1-(\frac{2}{3})^{10}\) = 98.3% chance you will lose at least one of the 10 bets and thus all of your money. You do have a chance of winning all 10 bets and turning your $1000 into $1.024 million, but that chance is only 1.7%. If we extend this game to 20 or 30 days, your chances of winning every bet become vanishingly small very quickly.

Note that the payoff for Strategy A, if you do manage to win and compound over many days, becomes ludicrously huge; so huge that this strategy still has a higher expectation value than any other. Yet if you play it long enough — and probably not even very long — you will definitely lose everything.

So… Perhaps maximizing expected payoff is not the best goal. But then what is?

Maybe we can simplify the problem. Let’s reduce your vacation to just two days. You have your $1000 allowance, and you get to roll your win/loss from Day 1 into Day 2.

Four things can happen:

  1. You win both days (4 chances in 9)
  2. You win on the first day but lose on the second (2 chances in 9)
  3. You lose on the first day but win on the second (also 2 chances in 9)
  4. You lose both days (1 chance in 9)

Now, Strategy A (bet it all both days) will leave you with $4000 in case (1) and $0 in the other cases, for an expected value of \($4000 * \frac{4}{9} = $1778\). And this is the highest expectation of any strategy.

On the other hand, Strategy A leaves you with nothing more than half the time. So maybe you should try something else?

Define “Strategy Z” as: Bet zero, always.

We could say one strategy is “better” than another if it is more likely to win head-to-head. Like say you are on vacation with the neighbors, and your spouse does not care how much money you win or lose, as long as you wind up with more than the neighbors.

By this definition, how does Strategy Z compare to Strategy A? Well, A beats Z 4 times out of 9 via case (1), but loses 5 times out of 9. So, by this definition, Z is better. (Sometimes the only way to win is not to play.)

We can toy with other ideas. Consider “Strategy ZA”: Bet zero on the first day and everything on the second.

Let’s compare this to Strategy Z. In case (1), ZA wins by leaving you with $2000 versus Z’s $1000. Similarly for case (3). ZA does lose to Z in cases (2) and (4), but those only combine to 3 chances out of 9. So Strategy ZA beats strategy Z 6 times out of 9 and is therefore “better”.

To recap: By this definition of “better”, Z is better than A, and ZA is better than Z.

So it must follow that ZA is better than A, right? Let’s check.

Case (1) – A wins. Case (2) – tie. Case (3) – ZA wins. Case (4) – tie. (Verify these for yourself). But Case (1) has 4 chances in 9, while Case (3) only has 2 in 9. Therefore, A is actually better than ZA.
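
If you would rather let the computer do the verifying, here is a tiny enumeration of the four cases and their probabilities (a sketch of mine; the names are made up):

```python
from fractions import Fraction

# The four cases and their probabilities: (win day 1?, win day 2?)
CASES = {(True, True): Fraction(4, 9), (True, False): Fraction(2, 9),
         (False, True): Fraction(2, 9), (False, False): Fraction(1, 9)}

STRATS = {"A": (1, 1), "Z": (0, 0), "ZA": (0, 1)}  # bet fractions per day

def wealth(day1_fraction, day2_fraction, outcome, bankroll=Fraction(1000)):
    for f, won in zip((day1_fraction, day2_fraction), outcome):
        bankroll += (f * bankroll) if won else (-f * bankroll)
    return bankroll

def p_richer(x, y):
    """Probability that strategy x finishes with strictly more money than y."""
    return sum(p for o, p in CASES.items()
               if wealth(*STRATS[x], o) > wealth(*STRATS[y], o))

for x, y in (("Z", "A"), ("ZA", "Z"), ("A", "ZA")):
    print(f"P({x} beats {y}) = {p_richer(x, y)}   P({y} beats {x}) = {p_richer(y, x)}")
```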

All of which is a long way of saying that this notion of “better” is not an ordering, which means “better” is a pretty bad name for it (see related video). We just got ourselves into a rock/paper/scissors game with the neighbor. I hate it when that happens.

I stumbled across this example while trying to reason my own way toward the Kelly Formula. It turns out this does not work, and Kelly-type arguments have little or nothing to say about examples like this. To arrive at Kelly, we have to simplify our example not by reducing the rounds to two, but by increasing them to infinity. Once we do that, an analogous definition of “better” actually does produce an ordering on betting strategies; and under that ordering, Kelly is “better” than anything else in the long run.

But the whole framework kind of breaks down for finite cases, which is one reason those Nobel laureates were non-fans of the Kelly Criterion. Another is whether beating the neighbor is actually the right goal.

More next time.

Prediction markets and the Kelly Criterion, part 1

Last week, on PredictIt, the “Yes” contract for Amy Barrett becoming Trump’s Supreme Court nominee was trading at an implied probability of 40%. Based on my own reasoning, I estimated her chances at closer to 20%. Put another way, the “No” contract was offered for $0.60, while I thought it was worth $0.80. So I decided to place a bet.

Question: How much should I bet?

I have learned that this is a surprisingly interesting question, one that once inspired Nobel laureates and billionaire investors to publish multiple academic papers calling each other morons.

Let me start with the answer. Well, the answer according to some. I found most expressions of this formula hard to remember, so I will (a) put it here up front where I can find it and (b) cast it in a simple form.

Define:

\[
\begin{align*}
p &= \textrm{your (estimated) probability of winning} \\
q &= \textrm{the opposite} = 1 - p \\
p' &= \textrm{the market price (imputed probability)} \\
q' &= \textrm{the opposite} = 1 - p'
\end{align*}
\]

Write down \(p-q\) and \(\frac{p'}{q'}\) next to each other without any parentheses:

\[p-q\frac{p'}{q'}\]

This is the fraction of your bankroll you should bet. Note that \(\frac{p'}{q'}\) is just the payoff on a winning bet, as in 1:1, 2:1, 10:1, or whatever. (Well, the reciprocal of the payoff: a contract bought for \(p'\) returns a profit of \(q'\), so the payoff odds are \(\frac{q'}{p'}\).) This version of the formula directly applies to markets where winning contracts pay $1, like PredictIt.

So, for my example, \(p = 0.8\), \(q = 0.2\), \(p' = 0.6\), \(q' = 0.4\), and I should have bet \(0.8 - 0.2(\frac{0.6}{0.4}) = 0.8 - 0.3 = 0.5\), or half my bankroll.
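
If you would rather have the computer do this arithmetic, here is a throwaway helper (mine, not part of anything PredictIt provides):

```python
def kelly_fraction(p, market_price):
    """Fraction of bankroll to bet on a $1-payout contract you believe wins
    with probability p, when the market sells it for market_price dollars."""
    q, q_market = 1 - p, 1 - market_price
    return p - q * (market_price / q_market)

print(f"{kelly_fraction(p=0.8, market_price=0.6):.2f}")  # 0.50 -> bet half the bankroll
```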

This formula is called the Kelly Formula or Kelly Criterion. Describing where it comes from, some of its properties, and maybe a bit of its amusing history is the subject of this series. Which I might actually finish for a change.

Air Quality Part 2: The Dylos DC1100 Pro

The first gadget I purchased was this bad boy, the Dylos DC1100 Pro:

Dylos DC1100 Pro

If I could have just one device for measuring air quality, this… Well, this would not be it. It does not measure CO2. It does not measure humidity. It does not measure temperature. It does not have a Web server or a wireless card or indeed any connectivity whatsoever. It does not have a battery. It does not even measure the same thing government agencies use for their air quality indices.

What it does do is provide a high-quality laser particle count. And hey, do you really need a Web server to count particles?

Two things sold me.

First, this 2007 discussion on hvac-talk.com. (Did you know there is an “hvac-talk.com”? Because of course there is.)

The discussion goes like this:

Person A: “Anyone know anything about this new inexpensive particle counter?”

Person B: (standing up to hide butt crack) “Ya get what ya pay for is all I’m sayin'”

Person C: “Hi, I am the engineer who designed the DC1100…” (proceeds to tear Person B a second butt crack)

That was when I placed my order. OK, so technically just one thing sold me.

Some brief history. Prior to 2007, a decent particle counter cost thousands of dollars, and a cleanroom-quality counter ran $10K or more. Then this little company came out with this device, and various labs started comparing it to their research-grade equipment, and found… Hey, it works pretty well for a sub-$300 gadget. Many amateurs use the Dylos as their “trusty” golden reference.

I became further sold while browsing the Dylos site. For example:

I feel like I know these guys… They are old-school ninja EEs. If you ever met a real monster Electrical Engineer, you know what I am talking about. Give one a soldering iron1 and some coffee, come back later, and you are guaranteed to see something amazing. Just don’t touch it.

I wanted to make mine portable, so I bought an XTPower MP-10000 external battery pack. Works great.

If you want to pull samples from the device, for a modest extra charge Dylos will provide an RS-232 serial output. If you do not know what that is, or even if you do, I do not recommend it, because there are other devices you should buy in addition. All right, all right, “instead”. A topic for a later installment.
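
If you do spring for the serial option anyway, reading it is about as simple as serial gets. The sketch below is mine, not Dylos documentation; the baud rate, port name, and line format are assumptions based on what owners commonly report, so check the manual before trusting any of it.

```python
import serial  # pyserial: pip install pyserial

# Assumptions, not Dylos documentation: 9600 baud, 8N1, one sample per minute,
# and a port name that will certainly differ on your machine.
with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=120) as port:
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue  # read timed out before a sample arrived
        # Assumed format: "small,large" particle counts per 0.01 cubic foot.
        small, large = (int(field) for field in line.split(","))
        print(f"small particles: {small}   large particles: {large}")
```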

The sole difference between the Pro and non-Pro versions is that the former is calibrated to see particles down to 0.5 microns, while the latter “only” sees down to 1 micron.

I will close by mentioning this device’s relevant limitations.

  1. It sees water vapor as particles, so the measurements vary based on humidity.
  2. It only provides a count of particles, while all of the “standard” air quality metrics are based on particle mass, not particle count. This is not as bad as it sounds for two reasons. First, the use of particle mass was an arbitrary choice based on research in the 1950s; more recent research suggests some negative effects are better correlated with particle count anyway. Second, if your ensemble of pollution sources is fairly stable, particle masses and counts are well-correlated to each other.
  3. It does not “see” ultrafine particles. But neither does anything else at any sane price point, for now.

Bottom line: While this is not the only device I would want to own, I am glad to have it.

Next time: PM2.5 etc.

1. but not, heaven forbid, a keyboard

Air Quality Part 1: Introduction

(related video)

When Northern California was burning, and I saw articles about the Bay Area’s air quality being “worse than Beijing”, I started to wonder… What does that mean? How is it measured? Can I measure it myself? Can I mitigate it?

This will be a short series of posts to do a brain dump on what I found. This introduction will provide a few definitions and some plausibility arguments for why you might care.

“Air quality” can mean a lot of things, but when talking about fires the main concern is particulates (unless you are standing so close that you are inhaling the burning plastics, in which case you have bigger problems).

The unit of choice in this context is the micron, a.k.a. “micrometer”, which is defined as \(\frac{1}{25400}\) of an inch. (The autistic voice in my head — you have one too, right? — is screaming at me that ACTUALLY, it would be more accurate to define an “inch” as 25400 microns. But here in Trump’s America, we use the Queen’s units, dammit.)

Right, where was I?

The US EPA provides this handy graphic:

micron illustration

Larger particles — 10 microns and up — are not a huge threat for two reasons:

  1. Your body reacts to them by making you cough, sneeze, etc. This is your body’s way of trapping and expelling these particles before they reach your lungs.
  2. Your body reacts to them by making you cough, sneeze, etc. This makes you aware that something is wrong and encourages you to be somewhere else.

In short, we evolved to handle particles on this scale; our ancestors had hay fever, too.

We did not evolve to handle the output of an internal combustion engine.

Between 2.5 and 10 microns, particles are called “coarse”, and they make their way into your lungs. Below 2.5 microns they are called “fine”, and they can pass through your lungs and into your bloodstream. Below roughly 0.1 microns are the “ultrafine” particles that get everywhere, including your brain.

None of these are perceptible. When you vacuum your carpet, for instance, you capture most of the larger make-you-sneeze particles. But the smaller ones pass through the vacuum’s filter and bag like they are not even there. All your vacuum does to fine and ultrafine particles is stir them up… And you cannot even tell. Those little paper masks do nothing. Same for the filter in your HVAC system, probably.

And yes, tiny particles are bad for you, especially in the long run. Long-term effects are always tricky to prove, but the evidence of nasty connections (e.g. between ultrafine particles and dementia) seems to grow every year.

OK, that will do for an introduction. The remaining parts will be about how particulates are measured, how you can measure them yourself, and what you can do about it.

Yes, Virginia, the Bitcoin contango is a mystery

Lots of people are saying that massive contango in the Bitcoin futures is no surprise, because the contracts are cash settled.

For example:

“If you’re doing a cash-settled future, it’s just a bet,” said Aaron Brown, a former managing director at quant hedge fund AQR Capital Management who invests in the cryptocurrency and writes for Bloomberg Prophets. “If that’s not related to any underlying physical transaction, the only people who want to do it are gamblers.”

Or N. N. Taleb:

I think they are wrong.

Let me start with some background since my readers (if I have any) are not all finance types. A futures contract is conceptually very simple. Example: You and I agree right now that at noon four weeks from today you will give me $6000, and I will give you 100 barrels of oil. That is the contract. Our trade four weeks from now is the settlement of the contract, and my delivery of the barrels is called physical settlement.

Physically settled contracts are useful for hedging; e.g. if you are a transportation firm that actually needs the oil in four weeks but wants to price some tickets for sale today. Physical settlement is also open to arbitrage: If oil today is much cheaper than $60/barrel, I can sell you those futures, purchase 100 barrels of oil, then simply hold them for four weeks until I deliver them to you. Of course, during those four weeks I have to store the barrels and forego any interest on the cash I paid for them; these represent the “cost of carry” and give rise to the contango.
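
In textbook form (my addition, not something from the original discussion, with \(S\), \(r\), and \(T\) as I define them here), the no-arbitrage price of a physically settled future is roughly

\[
F \approx S\,(1 + rT) + \text{storage costs},
\]

where \(S\) is today’s spot price, \(r\) the interest rate, and \(T\) the time to delivery. If the quoted \(F\) drifts much above that, the arbitrage just described drags it back down.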

Now, it is very possible that you do not really want the oil, but merely want to speculate that oil will cost more than $60/barrel four weeks from today. In that case, you might agree that instead of giving you barrels, I can give you the cash equivalent based on the price of oil at that time. Or for simplicity, one of us will give the other the difference between that amount and $6000. This is called cash settlement, and most (all?) futures either allow or require it.

An important detail is defining “the price of oil at that time”. I am sure you can imagine ways based on some recent bid, offer, or transaction in some market. And I am sure you can also imagine someone with a large futures position badly wanting to manipulate that bid, offer, or transaction.

Even absent such manipulation, cash settled futures are less useful for hedging and less open to arbitrage, because they depend on that future market price.

When I wrote my prior post, I wrongly assumed the CBOE Bitcoin futures were physically settled; i.e. that they required delivery of actual Bitcoins (which are not exactly “physical” but never mind).

In fact, those futures are cash settled. But to understand the implications, you have to look at how the settlement price is determined:

The Final Settlement Value of an expiring XBT futures contract shall be the official auction price for bitcoin in U.S. dollars determined at 4:00 p.m. Eastern Time on the Final Settlement Date by the Gemini Exchange (the “Gemini Exchange Auction”)

The Gemini Exchange is a Bitcoin trading platform owned by the Twinkletoes Twins.

Gemini describes its auctions like so:

The final auction price for every auction is established as the price that executes the greatest aggregate quantity across both the auction and continuous order book. A desirable property of this auction design is that it approximates the results of the Walrasian tâtonnement process used in standard economics textbooks to describe how the forces of supply and demand determine prices and market clearing quantities. Within this auction design, the market is open to accepting bids and offers until the time the auction algorithm runs. This auction mechanism is similar to the auction mechanisms used by NYSE Arca, Nasdaq, Bats, and other large stock exchanges throughout Europe and Asia.

Lots of fancy terms, but the bottom line is that this is a double auction where anyone can participate and everyone pays/gets the same price. So the arbitrage goes like this:

  1. I sell you the overpriced futures
  2. I buy Bitcoins at spot
  3. I participate later in the actual auction determining the settlement price, entering a limit sell order for all of those Bitcoins at $0
  4. I use the proceeds to settle the contract in cash

Note that in step (3), the amount I receive for my Bitcoins is exactly what I need to settle with you in cash, by the definition of the settlement price. It does not matter what anybody’s “sentiment” is. It does not matter if you also participate to “push the price higher”. Since I can participate myself and sell all of my Bitcoins at the settlement price, this is a perfect arbitrage (less the Twinkletoes’ cut).
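
To spell out the bookkeeping (the symbols are mine): let \(S\) be the spot price I pay in step (2), \(F\) the futures price at which I sold in step (1), and \(P\) whatever the Gemini auction settles at. Per Bitcoin, the cash settlement hands me \(F - P\) and the auction sale hands me \(P - S\), so my total profit is

\[
(F - P) + (P - S) = F - S,
\]

which does not depend on \(P\) at all.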

Perhaps I am not Gapishing something, but my guess is that the futures premium will collapse as bigger players get this arbitrage machinery spun up. Also January 17 might be an interesting day for Bitcoin.


Update: Ta-da!

Bitcoin futures

I hope I am not too immodest if I feel like my Bitcoin series has aged pretty well.

I do have a long-ish follow-up post percolating in my head, but I need some time to pull it together and research a few questions first (e.g. did anyone at the FT ever manage to make a true statement about this topic?)

But I had to get this one off my mind. Tonight, ZeroHedge (I know, I know, bear with me) is tracking the CBOE’s Bitcoin futures contract that just launched today. And they present this rather interesting chart:

bitcoin chart

This shows the Bitcoin futures rocketing higher than the spot price. I find this interesting for two reasons.

First, I am not sure exactly what “spot” means in this context. Anybody can exchange Bitcoins with anybody else for anything at any time. Unless there are deep, well-arbitraged markets out there, it is not clear that “spot” is even well-defined.

Second, and more interestingly, there is simply No Way for the futures price to get much higher than spot for something like Bitcoin. Because if it did, anyone could perform the arbitrage: Sell the futures; buy actual Bitcoins; deliver them at expiration. Bitcoins have no cost of carry, so whence the contango? (always wanted to say that)

I do not claim to be an expert, but I am pretty sure there are only two possibilities. Either (a) the “spot” price is a fiction, and our hypothetical arbitrageur(s) cannot actually acquire Bitcoins so cheaply; or (b) the poor retail punters who decided to call a top on BTC tonight and go short are getting a call from Mr. Margin.

Well, now we know why Trump fired Comey

First Interaction

“Am I under investigation?”

“No, sir, you are not”

“Could you maybe let everyone know that?”

“I’ll see what I can do”

Second Interaction

“Look, if anybody on my team did something wrong, they need to be held to account. But I know I did nothing wrong, and you told me I am not even under investigation. Is that still true?”

“Correct, sir, you are not under investigation”

“Could you please find some way to say that in public?”

“I’ll see what I can do”

Third Interaction

“One more time: Am I under investigation?”

“No, sir, you are not”

“Then, as we discussed, could you please let the people know that?”

“Maybe you should talk to the Deputy AG”

Fourth Interaction

Letter to the Director

Source: Comey’s written testimony