Do beliefs contain useful information?
The "Do bets reveal beliefs?" discussion continues. This is very interesting to me, since in finance experiments, you often depend on bets accurately revealing beliefs.
But lost in the mayhem of the debate is a second question: Do beliefs contain useful information in the first place?
In an econ experiment, of course they do, because researchers want to understand how humans process information. But how about in the real world? Take my bet with Brad DeLong. Suppose that bet did reveal my true belief, i.e. that inflation is going to spike. Suppose I really really really believed that, very strongly. So what? The belief of Noah Smith, no matter how strong, tells you incredibly little about the future path of inflation that the market for TIPS didn't already tell you.
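(For readers unfamiliar with how the TIPS market "tells you" about inflation: the market's implied forecast is the gap between nominal Treasury yields and TIPS real yields, the so-called breakeven rate. Here's a minimal sketch with made-up illustrative yields, not real data:)

```python
# Hypothetical sketch: the "breakeven" inflation rate implied by bond prices.
# The yields below are invented for illustration, not actual market quotes.
nominal_yield = 0.0265   # assumed 10-year nominal Treasury yield
tips_yield = 0.0045      # assumed 10-year TIPS (real) yield

# To a first approximation, the market's expected average inflation over the
# horizon is the spread between the nominal and the real (TIPS) yield.
breakeven_inflation = nominal_yield - tips_yield
print(f"Market-implied inflation: {breakeven_inflation:.2%}")  # 2.20%
```

So any one person's belief about inflation has to be weighed against a forecast the entire bond market has already priced in.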
Now, maybe there are exceptions to this. Suppose you have a well-respected, widely trusted expert in monetary economics, such as Steve Williamson. Last March, Steve Williamson confidently predicted a near-term spike in inflation, despite low TIPS breakevens, claiming that his understanding of monetary economics gave him private information that the market did not possess. If you believed that expert prediction, and increased your inflation hedging accordingly, then you lost money. Perhaps you are mad at Steve for losing you money, and you sulkily suspect that maybe Steve didn't really believe his own prediction. You wish that Steve had been forced to somehow reveal that he really, truly believed that inflation would spike, and was not pulling your leg.
But even in the case of experts, I think you need to be very, very confident in the expert's record before you give special weight to that expert's opinion. For example, Michael Boskin is legendary for getting every major macroeconomic prediction wrong since the beginning of time (update: Scott Sumner dutifully informs us that some of Boskin's so-called "predictions" were actually just implausible and unverifiable explanations for things, not true predictions). Paul Krugman is somewhat ahead of the average of pundits, though it's a small sample. Robert Shiller has an impeccable record of bubble prediction, but that sample is even smaller.
So the only time beliefs reveal useful information about financial outcomes, such as inflation, is if you trust a very special expert. If that expert is pulling your leg, then you have a problem. But I contend that this situation is very very very rare, because reliably market-beating experts are very very very rare.
So we see that the situations in which (economics) bets can most easily be made - concrete, financially important outcomes - are not only the situations in which bets are least likely to reveal beliefs (because either hedging or making the same bet with better odds is always possible using public markets), but also the situations in which individual beliefs are the least likely to contain useful new information.
But I suspect that the bet advocates (Alex Tabarrok and Bryan Caplan) want to extract a different kind of information from bets. I suspect - and if I'm wrong, please correct me - that they are concerned with the ulterior motives of people who advance economic theories. For example, perhaps they suspect that Keynesians don't really believe in Keynesian business cycle models, but want increased government spending for the sake of redistribution. Or perhaps they believe that inflation hawks don't really believe their dire warnings of inflation, but want higher real interest rates for the sake of redistribution.
In this case, forcing an economist to reveal his or her true beliefs might be very useful. If it was revealed that an economist didn't really believe in the quantitative predictions of a theory that (s)he had spent a great deal of time and effort promoting, then that might be a signal that the economist had promoted the theory because of some ulterior motive.
And knowing the ulterior motives of would-be experts can be very useful information in situations in which public financial markets can't give you an answer. For example, suppose Economist A says "The Fed should lower interest rates to boost the economy! I know this because of my New Keynesian model." And Economist B says "Lower interest rates will just lead to inflation without boosting the economy! I know this because of my RBC model." And suppose that the New Keynesian model also just happens to predict lower inflation over the next 6 months, while the RBC model also just happens to predict higher inflation. In that case, forcing Economists A and B to make a bet on upcoming inflation might - or so Tabarrok and Caplan seem to hope - be able to reveal whether one or both of these economists actually has little confidence in his own theory. That in turn would reveal that that economist very possibly had some ulterior motive in advocating for that theory in the first place, which in turn would tell you not to trust that economist on policy matters in which the ulterior motive might apply.
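(The mechanics of why a bet can smoke out low confidence are simple expected-value arithmetic. A rough sketch, with invented stakes rather than any actual bet's terms:)

```python
# Hypothetical sketch: the odds of a bet imply a floor on the belief of a
# rational bettor who accepts it. Numbers are illustrative, not a real bet.
stake = 100.0    # what the economist loses if inflation does NOT spike
payout = 50.0    # what the economist wins if inflation DOES spike

# Accepting is rational only if expected value is positive:
#   p * payout - (1 - p) * stake > 0  =>  p > stake / (stake + payout)
implied_min_belief = stake / (stake + payout)
print(f"Accepting implies P(inflation spike) > {implied_min_belief:.0%}")
```

At these (unfavorable) odds, accepting reveals a subjective probability above two-thirds; refusing at generous odds reveals the opposite. That asymmetry is what the bet advocates hope to exploit.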
This is what I think Alex Tabarrok is hoping when he says that a bet is a "tax on bullshit". He wants not to extract useful information about financial markets, but to reveal the ulterior motives of disingenuous public intellectuals, and thus get a better idea of whom to trust in the Marketplace of Ideas.
This, I think, is an excellent goal. Some kinds of bullshit are useful and should not be taxed, but disingenuous theory-promotion by public intellectuals seems to produce a negative externality that should be taxed away. The question of whether public bets are the tool to accomplish this goal, however, seems to hinge on a lot of the questions Tyler Cowen has raised, such as what a public intellectual really risks when making a bet.