DSGE + financial frictions = macro that works?

[Image: Mitrailleuse front.jpg]

In my last post, I wrote:
So far, we don't seem to have gotten a heck of a lot of a return from the massive amount of intellectual capital that we have invested in making, exploring, and applying [DSGE] models. In principle, though, there's no reason why they can't be useful.
One of the areas I cited was forecasting. In addition to the studies I cited by Refet Gurkaynak, many people have criticized macro models for missing the big recession of 2008Q4-2009. For example, in this blog post, Volker Wieland and Maik Wolters demonstrate how DSGE models failed to forecast the big recession, even after the financial crisis itself had happened:


This would seem to be a problem. 

But it's worth noting that, since the 2008 crisis, the macro profession does not seem to have dropped DSGE like a dirty dishrag. Instead, what most business cycle theorists seem to have done is simply to add financial frictions to the models. Which, after all, kind of makes sense; a financial crisis seems to have caused the big recession, and financial crises were the big obvious thing missing from the most popular New Keynesian DSGE models.

So, there are a lot of smart macroeconomists out there. Why are they not abandoning DSGE? Many "sociological" explanations are possible, of course - herd behavior, sunk cost fallacy, hysteresis and heterogeneous human capital (i.e. DSGE may be all they know how to do), and so on. But there's also another possibility, which is that maybe DSGE models, augmented by financial frictions, really do have promise as a technology.

This is the position taken by Marco Del Negro, Marc P. Giannoni, and Frank Schorfheide of the New York Fed. In a 2013 working paper, they demonstrate that a certain DSGE model was able to forecast the big post-crisis recession.

The model they use is a combination of two existing models: 1) the famous and popular Smets-Wouters (2007) New Keynesian model that I discussed in my last post, and 2) the "financial accelerator" model of Bernanke, Gertler, and Gilchrist (1999). They find that this hybrid financial New Keynesian model is able to predict the recession pretty well as of 2008Q3! Check out these graphs (red lines are 2008Q3 forecasts, dotted black lines are real events):



I don't know about you, but to me that looks pretty darn good!

I don't want to downplay or pooh-pooh this result. I want to see this checked carefully, of course, with some tables that quantify the model's forecasting performance, including its long-term forecasting performance. I will need more convincing, as will the macroeconomics profession and the world at large. And forecasting is, of course, not the only purpose of macro models. But this does look really good, and I think it supports my statement that "in principle, there is no reason why [DSGEs] can't be useful."

Remember, sometimes technologies take a long time to mature. People thought machine guns were a joke after they failed to help the French in the Franco-Prussian War of 1870. But after World War 1, nobody was laughing anymore.

However, I do have an observation to make. The Bernanke et al. (1999) financial-accelerator model has been around for quite a while. It was certainly around well before the 2008 crisis. And we had certainly had financial crises before, as had many other countries. Why was the Bernanke model not widely used to warn of the economic dangers of a financial crisis? Why was it not universally used for forecasting? Why are we only looking carefully at financial frictions after they blew a giant gaping hole in the world economy?

It seems to me that it must have to do with the scientific culture of macroeconomics. If macro as a whole had demanded good quantitative results from its models, then people would not have been satisfied with the pre-crisis finance-less New Keynesian models, or with the RBC models before them. They would have said "This approach might work, but it's not working yet, let's keep changing things to see what does work." Of course, some people said this, but apparently not enough. 

Instead, my guess is that many people in the macro field were probably content to use DSGE models for storytelling purposes, and had little hope that the models could ever really forecast the actual economy. With low expectations, people didn't push to improve the existing models as hard as they might have. But that is just my guess; I wasn't really around.

So to people who want to throw DSGE in the dustbin of history, I say: You might want to rethink that. But to people who view the Del Negro paper as a vindication of modern macro theory, I say: Why didn't we do this back in 2007? And are we condemned to "always fight the last war"?


Update: Mark Thoma has some very good thoughts on why we didn't use this sort of model pre-2008, even though we had the chance.

Update 2: Some commenters and Twitter people have been suggesting that the authors tweaked ("calibrated") the parameters of the model in order to produce the impressive results seen above. The authors say in the paper (p. 13, section 3.1) that they did not do this; rather, they estimated the model using only data before 2008Q3. 

Which is good, because calibrating parameters to produce better forecasts is definitely something you are not supposed to do!! There is a difference between "fitting" and "pseudo-out-of-sample forecasting". The red lines seen in the picture above are labeled "forecasts". To do a "pseudo-out-of-sample forecast", you train (fit) the model using only data before 2008Q3, and then you produce a forecast and compare it with the post-2008Q3 data to see how good your forecast was. You should never fiddle with the model parameters to make the "forecast" come out better! 
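To make the distinction concrete, here is a toy sketch of a pseudo-out-of-sample forecasting exercise. This is not the paper's DSGE model; it uses a simple AR(1) as a hypothetical stand-in. The key discipline is the same, though: the model's parameters are estimated only on data before the cutoff, and the held-out data is used solely to score the forecast, never to re-tune the parameters.

```python
import numpy as np

def pseudo_out_of_sample_forecast(series, cutoff, horizon):
    """Fit a simple AR(1) on data up to `cutoff`, then iterate it
    forward `horizon` steps. The held-out data after `cutoff` is
    never used in estimation -- only for scoring afterward."""
    train = series[:cutoff]
    # OLS fit of y_t = c + phi * y_{t-1}, using the training sample only
    y, y_lag = train[1:], train[:-1]
    X = np.column_stack([np.ones_like(y_lag), y_lag])
    c, phi = np.linalg.lstsq(X, y, rcond=None)[0]
    # Iterate the fitted law of motion forward from the last training point
    forecasts, last = [], train[-1]
    for _ in range(horizon):
        last = c + phi * last
        forecasts.append(last)
    return np.array(forecasts)

# Synthetic "quarterly growth" series (purely illustrative data)
rng = np.random.default_rng(0)
y = np.zeros(100)
for t in range(1, 100):
    y[t] = 0.5 * y[t - 1] + rng.normal(scale=0.5)

# Train on the first 80 observations, forecast the last 20
forecast = pseudo_out_of_sample_forecast(y, cutoff=80, horizon=20)
actual = y[80:]
# Score against held-out data; do NOT go back and tweak parameters
rmse = np.sqrt(np.mean((forecast - actual) ** 2))
print(f"RMSE over the held-out window: {rmse:.3f}")
```

If, after seeing the RMSE, you went back and fiddled with the estimation to make the red line hug the black one, the result would be a fit dressed up as a forecast, which is exactly the mistake the authors say they avoided.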

From Section 3.1 of the paper it seems fairly clear that Del Negro et al. did not make this mistake. But I think the authors should explain the forecasting procedure itself in greater detail in the next iteration of the working paper...just in case readers worry about this.
