Macro always fights the last war



Matthew Klein of The Economist has a great post up about the history of modern macro, drawing on a presentation by the incomparable Markus Brunnermeier. If you are at all interested in macroeconomics, you should check it out (though of course econ profs and many grad students will know the bulk of it already).

Here is Klein's summary of pre-2008 macro:
As the slideshow makes clear, macro has evolved in fits and starts. Existing models seem to work until something comes along that forces a rethink. Then academics tinker and fiddle until the next watershed.  
In response to the Great Depression, John Maynard Keynes developed the revolutionary idea that individually beneficial actions could produce undesirable outcomes if everyone tried to do them at the same time. Irving Fisher explained that high levels of debt make economies vulnerable to downward spirals of deflation and default... 
Problems developed in the 1970s. “Stagflation,” the ugly portmanteau that describes an economy beset with rapid price increases and high levels of unemployment, was not supposed to be possible—yet it was afflicting all of the world’s rich countries...A new generation of macroeconomists, including Ed Phelps, Robert Lucas, Thomas Sargent, Christopher Sims, and Robert Barro, responded to the challenge in the late 1970s and early 1980s...[their] new “dynamic stochastic general equilibrium” (DSGE) models were based on individual households and businesses that tried to do the best they could in a challenging world...Despite...many drawbacks, DSGE models got one big thing right: they could explain “stagflation” by pointing to people’s changing expectations.
Klein and Brunnermeier both say that macro is changing again, this time in response to the Great Recession and the financial crisis that preceded it. The big change now, they say, is adding finance into macro models.

Reading this, one could be forgiven for thinking that macro lurches from crisis to crisis, always trying to "explain" the last crisis, but always missing the next one.

How true is that? Well, on one hand, science should progress by learning from its mistakes. You have a model that you think explains the world...then something new comes along, and you need to change your model. Great. That's how it's supposed to work.

Doesn't that describe exactly what macro has been doing? Well, maybe, but maybe not. First of all, what you shouldn't do is develop models that only explain the most recent set of observations. In the 70s and 80s, the DSGE models that were developed to explain stagflation had a very hard time explaining the Great Depression. Robert Lucas joked about this, saying: "If the Depression continues, in some respects, to defy explanation by existing economic analysis (as I believe it does), perhaps it is gradually succumbing under the Law of Large Numbers."

But the fact that DSGE models couldn't explain the Depression was not seen as a pressing problem. There was no big push to modify or expand the models in order to explain the biggest economic crisis of the 20th century (though there were scattered attempts).

So macro seems to suffer from some "recency bias".

And here's another issue. When we say macro models "explain" a phenomenon, that generally means something very different, and less impressive, than it means in the hard sciences (or even in microeconomics). When we say that 80s-vintage DSGE models "explain" stagflation, what we mean is "there is the possibility of stagflation in these models". We mean that these models are consistent with observed stagflation.

But for any phenomenon, there are many possible models that are consistent with that phenomenon. How do you know you've got the right story? Well, there are several ways you can sort of tell. One is the generality of the model: how well does it explain not just this one thing, but a bunch of other things at the same time? (This is closely related to the idea of "unification" in physics.) If your model can explain a bunch of different stuff, then it's much more likely to have captured something real, instead of being a "just-so story".
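To see what I mean by "many models consistent with the same phenomenon," here's a toy sketch in Python (purely illustrative, and nothing to do with any actual DSGE model): two different-sounding stories about persistence that generate statistically identical data, so the data alone can't pick between them.

```python
# Toy illustration of observational equivalence (not a macro model).
# Story A: an AR(1), where persistence is "structural" (today depends
#   directly on yesterday).
# Story B: a long moving average of past shocks, where persistence
#   comes from slowly fading shocks.
# These are the same stochastic process, so data on y alone cannot
# distinguish the two stories.
import numpy as np

rng = np.random.default_rng(42)
T, rho = 100_000, 0.9
shocks = rng.standard_normal(T)

# Story A: y[t] = rho * y[t-1] + shock[t]
y_a = np.zeros(T)
for t in range(1, T):
    y_a[t] = rho * y_a[t - 1] + shocks[t]

# Story B: y[t] = sum_k rho**k * shock[t-k], truncated at 200 lags
weights = rho ** np.arange(200)
y_b = np.convolve(shocks, weights)[:T]

def autocorr(x, k):
    """Sample autocorrelation of x at lag k."""
    x = x - x.mean()
    return (x[:-k] @ x[k:]) / (x @ x)

for k in (1, 2, 5):
    print(f"lag {k}: story A = {autocorr(y_a, k):.3f}, "
          f"story B = {autocorr(y_b, k):.3f}")
# Both columns print ~0.90, ~0.81, ~0.59: the two stories are
# observationally equivalent in the data they generate.
```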

But modern macro models don't do a lot of that. Each DSGE model matches a few things, and not other things (this is why they are all rejected by formal statistical testing). Ask the author about the things his model doesn't match, and he'll shrug and say "I'm not trying to model the whole economy, just a couple of things." So there's a huge proliferation of models - not even one model to "explain" each phenomenon, but many models per phenomenon, and very little guidance for choosing which model is appropriate to use, and when.
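Here's a stylized illustration of that "matches a few things, rejected overall" pattern (a toy sketch with made-up data, not a claim about any particular paper): a misspecified model that reproduces the lag-1 autocorrelation of the data, yet gets decisively rejected by a standard residual test.

```python
# Toy sketch: a misspecified model matches some moments but is
# rejected by a formal test on its residuals. The data come from an
# AR(2); the "model" is an AR(1) fitted by least squares.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
T = 5000
y = np.zeros(T)
for t in range(2, T):                       # true process: AR(2)
    y[t] = 1.1 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()

# Fit the misspecified AR(1) by OLS
rho_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
resid = y[1:] - rho_hat * y[:-1]
print(f"model matches the lag-1 autocorrelation: rho_hat = {rho_hat:.3f}")

# Ljung-Box test: are the residuals white noise, as the model claims?
n, L = len(resid), 10
r = np.array([np.corrcoef(resid[:-k], resid[k:])[0, 1]
              for k in range(1, L + 1)])
Q = n * (n + 2) * np.sum(r**2 / (n - np.arange(1, L + 1)))
p = stats.chi2.sf(Q, df=L - 1)              # 1 AR parameter estimated
print(f"Ljung-Box Q = {Q:.1f}, p-value = {p:.2g}")  # tiny p: rejected
```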

Another clue that you've got the right story is if your model has predictive power. But modern macro models display very poor forecasting ability (as do non-modern models, of course).
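What "poor forecasting ability" means in practice is usually that a model fails to beat even naive benchmarks out of sample. Here's a toy sketch of that kind of evaluation (illustrative only, with made-up data, not any actual forecasting exercise):

```python
# Toy sketch of out-of-sample forecast evaluation: fit a model on the
# first part of a series, then compare its one-step-ahead forecasts
# against a naive "no-change" benchmark on the held-out part.
import numpy as np

rng = np.random.default_rng(1)
T = 2000
y = np.zeros(T)
for t in range(1, T):                     # data: a persistent AR(1)
    y[t] = 0.95 * y[t - 1] + rng.standard_normal()

train, test = y[:1500], y[1500:]
rho_hat = (train[:-1] @ train[1:]) / (train[:-1] @ train[:-1])

model_fc = rho_hat * test[:-1]            # model: y_hat(t+1) = rho * y(t)
naive_fc = test[:-1]                      # benchmark: y_hat(t+1) = y(t)
actual = test[1:]

def rmse(forecast):
    return np.sqrt(np.mean((actual - forecast) ** 2))

print(f"model RMSE: {rmse(model_fc):.3f}, naive RMSE: {rmse(naive_fc):.3f}")
# With persistence this high, the naive benchmark is nearly as good:
# beating simple benchmarks out of sample is the hard part.
```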

Before the 2008 crisis, there doesn't seem to have been very much dissatisfaction with the state of macro. Models were rejected by statistical tests...fine, "All models are wrong," right? There were 50 models per phenomenon...fine, "We have models for anything!" Models can't forecast the future...fine, "We're not interested in forecasting, we're interested in giving policy advice!" I wasn't alive, but I imagine there existed a similar complacency before the 1970s.

Then 2008 came, and suddenly everyone was scrambling to update and modify the models. No doubt the new crop of finance-including models will be able to tell a coherent, plausible-sounding story of why the 2008 Financial Crisis led to the Great Recession. (In fact, I suspect quite a number of mutually conflicting models will be able to tell different plausible-sounding stories.) And then we'll sit back and smile and say "Hey, look, we explained it!"

But maybe we didn't.

Of course, this doesn't necessarily mean macroeconomists could do a lot better. Maybe this is the best we can do, or close to it. Maybe time-series data is so inherently limited, data collection so poor, and macroeconomies so hideously complex, non-ergodic, and chaotic that we're never going to be able to have predictive, general models of the macroeconomy, no matter how many crises we observe. In fact, I wouldn't be terribly surprised if this turned out to be the case. But I think at least we could try, a little more pre-emptively than in the past. And I think that if we didn't tend to oversell the power of the models we have, we wouldn't be so embarrassed when the next crisis comes along and smashes them to bits.
