Never buy anything you don’t understand

Once upon a time, I almost went to grad school to study Theoretical Computer Science. So when Alea and Freedom to Tinker mentioned this paper, I had to read it for myself. The paper is titled “Computational Complexity and Information Asymmetry in Financial Products”.

Traditional economics argues that financial derivatives, like CDOs and CDSs, ameliorate the negative costs imposed by asymmetric information. This is because securitization via derivatives allows the informed party to find buyers for the less information-sensitive part of the cash flow stream of an asset (e.g., a mortgage) and retain the remainder. In this paper we show that this viewpoint may need to be revised once computational complexity is brought into the picture. Using methods from theoretical computer science this paper shows that derivatives can actually amplify the costs of asymmetric information instead of reducing them.

I recommend reading the actual paper, since the authors spend the first four pages providing background clearly intended for the layperson. Here I will attempt a brief summary…

A classic problem for free markets is the Lemon Problem, also known as the “asymmetric information” problem. The canonical example is used cars, where some cars (the “lemons”) are of much lower quality than most. Suppose you are in the market for a used car. In general, each seller knows better than you whether her car is a lemon. You, on the other hand, must assume some probability that any particular car might be a lemon. Thus you will not be willing to pay the full value for the non-lemon cars. This discourages sellers of non-lemon cars, who do not want to sell their cars for less than they are worth. The result is that only lemons will be available for sale. (In practice, of course, this is an exaggeration, but only because the market has found ways to address the Lemon Problem; e.g. warranties from the seller.)

In theory, securitization helps to address the Lemon Problem for financial products. If a seller sells you a pool of used cars, but agrees to reimburse you for the purchase price of the first N cars that turn out to be lemons, you will be willing to pay closer to full price for the pool. In securitization, this corresponds to the seller retaining the “equity tranche” while selling you the “senior tranche”.
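To make this concrete, here is a toy sketch (the numbers and the uniform lemon probability are my own, not from the paper): without protection, the buyer's expected payoff sits well below face value, but once the seller agrees to eat the first 30 lemons, the expected payoff is essentially the full 100, so the buyer can pay close to full price without knowing which cars are lemons.

```python
import random

def buyer_payoff(n_cars=100, lemon_prob=0.2, good_value=1.0,
                 first_lemons_covered=0):
    """Buyer's payoff from a pool of used cars.

    Each car is worth good_value unless it turns out to be a lemon (worth 0).
    The seller reimburses the buyer for the first `first_lemons_covered`
    lemons -- the "equity tranche" the seller retains.
    """
    lemons = sum(random.random() < lemon_prob for _ in range(n_cars))
    reimbursed = min(lemons, first_lemons_covered)
    return (n_cars - lemons + reimbursed) * good_value

random.seed(0)
trials = 10_000
plain  = sum(buyer_payoff() for _ in range(trials)) / trials
senior = sum(buyer_payoff(first_lemons_covered=30) for _ in range(trials)) / trials

print(f"expected payoff, no protection:         {plain:.1f} / 100")
print(f"expected payoff, seller eats 30 lemons: {senior:.1f} / 100")
```

The same logic is why the senior tranche is much less sensitive to the seller's private information: as long as the realized lemon count stays below the retained cushion, the senior buyer is paid in full.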

And thus do sub-prime no-doc interest-only mortgages become AAA-rated securities. Mirabile dictu!

Yeah, yeah, I know. But never mind that. This paper throws cold water on the idea from a completely different angle: The sheer complexity of modern financial instruments makes it possible for the seller to construct derivatives with embedded “lemons” in a way that is computationally infeasible for you to detect. In particular, the authors construct a model for derivatives and assets, propose a way within that model for the seller to off-load more lemons than random chance would permit, and then show a reduction from a “known hard” problem to the problem of detecting the tampering.

That is, if you could detect the tampering, you could also solve a problem (“planted densest subgraph”) believed to be computationally hard. So a seller could place his deceptive construction right in front of your face, and the deception would be obvious if you simply knew where to focus your attention, yet you would be unable to find it using all of the computers in the world. Put another way, this result quantifies “bounded rationality” in the form of bounded computing power, since even a rational buyer has limited computing resources.
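To give a feel for the flavor of the construction (a toy sketch with made-up parameters, not the paper's actual model or proof): the seller assigns assets to derivatives, and a dishonest seller quietly concentrates the lemons in a small hidden set of derivatives. To a buyer who does not know which assets are lemons, every derivative looks like a random pool; spotting the booby-trapped ones amounts to finding a planted dense cluster in the asset-to-derivative graph, which is where the hardness assumption comes in.

```python
import random

random.seed(1)

n_assets, n_derivs, assets_per_deriv = 1000, 100, 30
lemons = set(random.sample(range(n_assets), 150))   # the seller's private information
planted = set(random.sample(range(n_derivs), 5))    # the booby-trapped derivatives
good_assets = [a for a in range(n_assets) if a not in lemons]

def build_derivative(d):
    """Choose the assets backing derivative d."""
    if d in planted:
        # Dishonest assignment: half the slots go to lemons.
        bad = random.sample(sorted(lemons), assets_per_deriv // 2)
        return bad + random.sample(good_assets, assets_per_deriv - len(bad))
    return random.sample(range(n_assets), assets_per_deriv)   # honest: uniform at random

derivs = {d: build_derivative(d) for d in range(n_derivs)}

# Only someone who knows `lemons` can compute these counts; the buyer just sees
# lists of 30 assets per derivative, with the dense lemon cluster hidden inside.
overall = sum(len(set(a) & lemons) for a in derivs.values()) / n_derivs
in_planted = sum(len(set(derivs[d]) & lemons) for d in planted) / len(planted)
print(f"lemons per derivative, overall average: {overall:.1f}")
print(f"lemons per derivative, planted ones:    {in_planted:.1f}")
```

Printing those counts requires knowing which assets are lemons, which the buyer does not; the paper's claim is, roughly, that telling a tampered batch of derivatives from an honest one is as hard as detecting a planted dense subgraph.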

There are a couple of caveats to the result. 1) Their model of financial derivatives is somewhat contrived, and when they try to make it less contrived, the result gets a lot weaker. 2) They assume the best known algorithm for “planted densest subgraph” is the best possible, but they have not shown that “planted densest subgraph” is necessarily as hard as other well-studied problems (e.g., factoring). That said, I believe this is the first result of its kind, so more are probably coming.

A final comment, since I am in a pedantic mood. The paper is from Princeton. The Freedom to Tinker blog is also at Princeton, and their post says that detecting the tampering is NP-complete. The paper does not show that, and I doubt any similar result will involve a reduction from an NP-complete problem. The construction has the “feel” of a cryptographic algorithm — where the “key” is which assets are lemons — and NP-complete problems almost never appear in cryptographic algorithms.

Update

…although modern cryptographic researchers are working on it. The paper cites another paper (sharing an author) that proposes a couple of cryptosystems using constructions awfully close to known NP-hard problems.

What appears to have happened is that some cryptographers at Princeton decided to learn something about financial derivatives, probably in reaction to the current crisis. Then they told their colleagues at the economics department, “I think we may have something to say about this.” The resulting paper includes authors from both departments.

2 comments to Never buy anything you don’t understand

  • richfam

    It’s worse than this. Here’s the real piece of asymmetric information: You buy a bond with a five-year “expected maturity” and a 5% floating coupon, rated AAA, with plenty of equity/subordination like your example. Things go bad for the collateral and the equity/subordination is getting hurt. So the AAA rating is a bit more tenuous, but you still believe you’re okay. But your bond is now starting to fall in price a bit. Then the asymmetrical information becomes… symmetrical. It turns out that the 5yr “maturity” depends on the quality of the collateral, and that coupon depends on short rates, and the Federal Reserve is lowering them aggressively. So now you have a 20-year maturity and a 0.5% coupon with eroding equity. So you’ll still get your money back, but it will be a long time coming, and your carry return is poor at the moment. Bonds you bought at 100 are now at 50, and the yield to MATURITY is still the original 5%.

  • This seems silly. I’m sure catastrophe theory, systems theory, and quantum mechanics have much to tell us as well about why some CDOs borked. Not.

    As they admit on page 1, computation is rarely the limiting reagent in the alchemy of a decision to buy or sell. In fact, if we knew what Monte Carlos to write, we could do jiggaflops of computations to simulate the outcomes, and over a week with 50 blades you could do a mind-boggling amount of computation.

    But what Monte Carlo would you write? What computation would you do? What numbers would you put into the computation? As an outsider it seems inconceivable that computational power is a binding constraint. EC2 is cheaper than an intern.

    Maybe the message is supposed to be that the final computations are done on limited wetware rather than a super-large utility computer. OK. Not that they make this argument themselves; it’s just “precision,” “quantification,” and “computation.” We have these tools (semidefinite programming, XOR), so yeah, let’s use them. Insert remark about academic vacuum or truffle hounds.

    I found the “dense subgraphs” part (pp 8-9) interesting, and I guess the key result of the paper is that since their model derivative’s value depends on stdev, it’s enough to twist just √N of the N knobs to get a full-sized effect on the derivative’s value (a rough numeric check of this appears at the end of this comment). But as far as the invocation of computability goes: what does this paper add above the basic Lemons insight? They agree that the seller owning some junior tranche ameliorates the lemon problem–can’t get around that logic–but add that it’s possible to cheat a “polynomial computing” customer (whatever that means) under certain dependency conditions. Not so great when it’s unclear what a polynomial computing customer is.

    Section 5 is also interesting–an XOR derivative. (I don’t get how this would be susceptible to timing attacks, or why lack of monotonicity kills a derivative. Ag is not monotonic in sunshine either: too much or too little is bad.)

    The conclusion delimits the real-world purposes of the paper, but not enough. It’s suggested that, because of computational limits, better models might not improve ratings guesses. They’ve leapt from their models to the real world without providing a clear bridge.
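    A rough numeric illustration of that √N point (all numbers below are made up for illustration, not taken from the paper): the pool’s natural fluctuation, and hence any sensible tranche threshold, scales like √N, so tampering with only ~√N of the N assets is enough to push expected losses right up to the threshold and swing the value of the whole tranche.

    ```python
    import math

    N = 1000                                     # assets backing the derivative (illustrative)
    p = 0.05                                     # honest per-asset default probability (illustrative)
    mean_defaults = N * p                        # 50 expected defaults
    std_defaults = math.sqrt(N * p * (1 - p))    # ~6.9, grows like sqrt(N)
    threshold = mean_defaults + 3 * std_defaults # senior tranche impaired past ~71 defaults

    planted = math.ceil(3 * std_defaults)        # ~21 planted lemons, an O(sqrt(N)) tweak
    tampered_mean = mean_defaults + planted      # expected defaults after tampering

    print(f"honest expected defaults:    {mean_defaults:.0f} +/- {std_defaults:.1f}")
    print(f"senior-impairment threshold: {threshold:.0f}")
    print(f"tampered expected defaults:  {tampered_mean:.0f} (only {planted} of {N} assets touched)")
    # Touching ~2% of the assets moves the expectation onto the threshold,
    # so the senior tranche's payoff, and hence the derivative's value, swings wildly.
    ```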
