PSST: The end of economics as we know it

(Note: I wrote this post quite a while ago, and never published it. Since I'll be mostly too busy to blog for a few weeks, I've decided to publish some of these old "backup posts"...)

Since the recession began, Arnold Kling has been trumpeting a very non-traditional way of thinking about the economy. At first this went by the name of "recalculation," but Kling has now settled upon "PSST," which stands for "Patterns of Sustainable Specialization and Trade." This idea has generated a lot of interest in the blogosphere, and I personally find it extremely appealing, if daunting. Before I discuss it, I'll let Kling sketch the idea in his own words:
Regular readers know that I am trying to nudge them toward a different paradigm in macroeconomics. I want to get away from thinking of economic activity as spending, and instead move toward thinking of it as patterns of sustainable specialization and trade...

I believe that trying to describe economic activity using an aggregate production function is a mistake...The advantage of the aggregate production function is that...it yields an aggregate supply curve. This allows macro to be presented using the familiar tools of supply and demand...


Instead, I think that the right tools to use for macro are the two-country, two-good models of international trade. The "two countries" could be two sectors within an economy...

At full employment, both countries are taking advantage of specialization and trading with one another. When something happens to adversely affect the pattern of trade, some workers shift from market activities to non-market activities, mostly in the form of involuntary unemployment. Gradually, new patterns of specialization and trade emerge, and full employment returns. That is what I have been calling the Recalculation Story.
Every first-time econ student who has ever been presented with a macroeconomic aggregate probably has some variant of this reaction. Economies are just too complex to be modeled with this handful of variables! Then someone raises his hand and asks the Teaching Assistant if that isn't the case, and the Teaching Assistant shakes his head and says "Yeah, well, just try to model a complex system like that and see how far you get!"

Because it is hard. The two-country, two-good model of international trade is not going to do the trick. In that model, adjustment is instantaneous if you make all the standard assumptions that you make in the one-sector model (flexible prices, etc.); there is no need for down time as agents recalculate and patterns re-adjust. And in fact, New Keynesian macro people have been using multi-sector models for a long time (as an example, see this paper), and those models never have any "recalculation."

To get recalculation, you are going to have to make your model much more complex - so complex, in fact, that the interactions between sectors or industries become more important to the movement of macroeconomic aggregates than the movement of aggregate variables themselves, and things like recessions and booms become emergent phenomena. This will require you to delve into the world of complex systems. It will be a breathtaking break with basically all of establishment economics.

And that break might well be necessary. As traditional macro models have failed to yield much in the way of predictive usefulness, some economists have been sprinkling flavors of complex systems into their models. Prime examples (that I know of) would be Krugman, Fujita, & Venables' "New Economic Geography" and Charles Jones' "linkages and complementarities" theory of development. Other economists have been exploring agent-based modeling, which (as Kling says) is probably a good way to identify the existence (though not necessarily the exact nature) of complex PSST-type phenomena.
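
To make "emergent" concrete, here is a deliberately crude toy sketch (my own construction, not any published model, with made-up numbers): workers are employed only while they fit into a pattern of trade, a shock destroys a chunk of those patterns, and new ones form only slowly. Unemployment spikes and then decays gradually, with no aggregate demand variable anywhere in sight:

```python
import random

# Toy "recalculation" dynamics (illustrative only): workers are employed
# when matched into a trade pattern. A shock destroys some matches; new
# sustainable matches form slowly, so unemployment spikes and decays.
random.seed(0)

N = 10_000                 # workers
match_prob = 0.10          # chance an unemployed worker finds a new niche each period
employed = [True] * N

def unemployment_rate():
    return 1 - sum(employed) / N

# Period 0: a disruption wipes out 20% of the existing patterns of trade.
for i in random.sample(range(N), k=N // 5):
    employed[i] = False

for t in range(24):        # two "years" of monthly periods
    for i in range(N):
        if not employed[i] and random.random() < match_prob:
            employed[i] = True   # a new pattern of specialization absorbs the worker
    if t % 4 == 0:
        print(f"period {t:2d}: unemployment = {unemployment_rate():.1%}")
```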

But for a moment, I want to step back and think about the policy implications of a PSST theory of the economy. As Kling points out, PSST weakens the typical rationale for countercyclical fiscal and monetary policy (though it does not necessarily mean that those policies are ineffective). But it greatly strengthens the rationale for a very different kind of government policy - one which is commonly derided by most modern macroeconomists. I am talking about industrial policy.

In a typical microeconomic model, the market clears, because price adjusts to balance supply and demand. In a PSST world, this does not happen. The pattern of specialization and trade will not always be disturbed by small changes in prices, because the global pattern itself represents a stable equilibrium (i.e., is "sustainable"). How many computers I buy and sell will depend not only on the price of computers, my desire for computers, and my cost of producing computers; it will depend on the prices, desirabilities, and costs of a bunch of other goods throughout the whole economy. The economy will be riddled with network externalities, and the resultant weakening of the price mechanism means that any market may or may not tend toward efficiency on any given time scale. In other words, in a PSST world, there is no invisible hand.
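
To see how network externalities and multiple equilibria can kill the invisible hand, here is a bare-bones toy (again my own construction, with invented numbers): everyone adopts a technology if enough others do, so the same prices can lead to totally different stable patterns depending on where the economy starts:

```python
# Toy network-externality dynamics: adoption feeds on itself above a
# threshold and withers below it, so there are two stable outcomes and
# no single "market-clearing" equilibrium for prices to find.
def step(share, threshold=0.4):
    # each period, adoption takes off iff current adoption exceeds the threshold
    return 1.0 if share > threshold else share * 0.5

for start in (0.3, 0.5):
    share = start
    for _ in range(20):
        share = step(share)
    print(f"start {start}: long-run adoption = {share:.2f}")
# start 0.3 collapses to ~0.00; start 0.5 locks in at 1.00
```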

This opens the door for a hugely expanded role for government (or other large, centralized actors) in the macroeconomy. If global patterns matter as much as local prices, then an actor large enough to perceive and affect the overall pattern might be capable of nudging the economy out of a bad equilibrium and into a better one. Dani Rodrik has been saying this for a long time in connection with newly developing economies, but the same may be true in rich countries when faced with disruptive technological change or globalization.

Is it possible that fiscal policy is really just industrial policy? Could it be that World War 2 ended the Depression not because it represented a sufficiently large Keynesian stimulus, but because it deliberately created new industries and new technologies that formed the basis of a new sustainable pattern of specialization and trade? After all, the modern U.S., German, and Japanese automobile and aircraft industries look suspiciously like the same firms that supplied their countries' war efforts seventy years ago. And Annalee Saxenian will tell you that Silicon Valley got its start from World War 2 weapons research and shipbuilding.

Anyway, this is something to think about. If you are brave enough to venture out of the comfortable, familiar world of aggregate production functions into the vast and unexplored wilderness of PSST, more power to you. Just don't be surprised if what you find is weirder than anything Adam Smith ever dreamed.


Update: Arnold Kling and Tyler Cowen are generally on board with my characterization, but are skeptical of government's ability to conduct effective industrial policy. Note that I am not claiming that government is good at industrial policy, only that PSST implies that industrial policy would be the most effective type of government intervention. Meanwhile, Brad DeLong thinks that this PSST idea is basically bunk, and that the macroeconomy is not an irreducibly complex system. And Karl Smith speculates that excess financial-industry profits might be due to finance's ability to perceive and affect the big patterns.

A perfect storm for XMP?



Like many others in this biz, I've been following the development of Adobe's Extensible Metadata Platform (XMP) for quite some time, and for at least three years I've been saying that it would be in Adobe's best interest to hand oversight of this ostensibly open standard to a bona fide Standards Body (rather than let adoption languish as people continue to associate XMP with "Adobe-proprietary"). Happily, Adobe is in fact now doing the right thing: XMP is in the process of becoming ISO-16684-1, via an effort led by my colleague Frank Biederich.

This effort couldn't have come at a better time. The content world is in desperate need of an industry-standard way to represent rich-content metadata, and I strongly believe XMP is the right technology at the right time.

One can quibble over whether embedding XMP in a host file is the correct thing to do (as opposed to placing it in the file's resource fork, or simply creating XMP as a separate sidecar file and managing it separately). There are good arguments pro and con. But packaging issues aside, there's not much question, in my mind, that nearly every form of content benefits from having easily-parsed metadata associated with it. This is particularly true of enterprise content (content that's managed in some kind of access-controlled repository). The availability of metadata makes a given content bit easier to repurpose, easier to track, easier to search -- easier to work with all the way around.

At Day Software (now a part of Adobe), we've long had a saying that "everything is content." I'm fond of saying that once metadata is attached, "everything is an asset."

I think XMP is poised to become a huge success, comparable to, say, Atom or RSS. First of all, the specification itself is short and easily understood (thus easily implemented) -- always a Good Thing where XML standards are concerned. It's also semantically flexible and highly extensible -- again two very good things. The fact that it leverages RDF also bodes well for XMP as we trundle ever-closer to the Ontological Web. The social dimensions of an asset, for example, could easily be accommodated by XMP via RDF triples. Let your imagination dwell on that for a minute.
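
For the curious, here is roughly what an XMP packet looks like on the wire. This is a hand-written, minimal sketch (the creator name and values are invented, and real packets are tool-generated and carry much more), parsed with Python's standard library just to show that the RDF inside is ordinary, easily-parsed XML:

```python
import xml.etree.ElementTree as ET

# A minimal, hand-written XMP packet (illustrative only). The metadata is
# just RDF, so a "social" property would simply be one more triple.
xmp = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
          xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="" dc:format="image/jpeg">
   <dc:creator><rdf:Seq><rdf:li>Jane Doe</rdf:li></rdf:Seq></dc:creator>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>"""

ns = {"rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
      "dc": "http://purl.org/dc/elements/1.1/"}
root = ET.fromstring(xmp)
creator = root.find(".//dc:creator/rdf:Seq/rdf:li", ns)
print("creator:", creator.text)   # -> creator: Jane Doe
```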

But I also think the timing for XMP is quite propitious just in terms of where it is in its lifecycle. I was giving a talk last week at Adobe Research in Basel (on the subject of XMP) in which I mentioned a certain lifecycle theory (whose theory it is, I can't remember) that sees all technologies as basically going through three phases, each one lasting about six years. Phase One is Acceptance: It takes around six years for anything truly new to change the way people think about it. (During this period, only alpha geeks will actually adopt the new technology.) Phase Two is Adoption: It takes six years for the (at last understood) new technology to enter the mainstream in earnest. Phase Three is Ubiquity: This is when adoption becomes universal (or as close to that as it's going to get) and the market is saturated.

Not everything goes through an 18-year cycle, obviously. This is just a rough conceptual model, but I find that it applies in a surprising number of cases. If we look at XMP (which dates to 2001) through this model, we see that it is a little more than halfway through the Adoption phase. I think that the ratification of ISO-16684-1 will kick off a Ubiquity phase in which we see XMP used in more ways and in more places than anyone would ever have thought possible.

My talk last week in Basel was in front of a roomful of developers. Everyone there was familiar with aspect-oriented programming, so I made the (admittedly imperfect) analogy with XMP, saying "Imagine if resources could have aspects. What would that look like? It would look a lot like XMP." The info that gets packaged up into XMP often has to do with crosscutting concerns, like access control, DRM, version history, and what might be called "serving suggestions" (mimetype, compatibility hints). It's not that far different from an advice stack in JBossAOP. Even the packaging concerns are familiar from AOP. A classic problem in AOP, after all, is where to put aspects: in the source code itself (as annotations), or in separate descriptors (as in JBossAOP)? The same concerns (and tradeoffs) arise with XMP.

In any case, I think the pressing need for more and better metadata (as it pertains to enterprise content in particular), plus the built-in support for XMP in many cell-phone cameras, plus the need for ontology-friendly web formats going forward, plus many other factors (including the opening up of XMP under ISO-16684), spells a perfect storm for XMP as we hurtle toward Web.Next. All I can say is: It's about time.

Past performance is no guarantee of future results

"Past performance is no guarantee of future results." This is the most common caveat in finance. It means that, despite the fact that past and future are often correlated, that correlation is no guarantee; something may happen in the future that never happened in the past. In technical terms, economic and financial processes might not be ergodic.

This is why, unlike Mark Thoma, I am not reassured by a long-term plot of United States gross domestic product. Dr. Thoma writes:
As you can see from this picture, historically we've always recovered from recessions. Eventually. ... I am confident that we'll return to trend this time as well, the question is how long it will take us to get there.
He illustrates this with the following famous graph:

[Figure: long-term plot of U.S. GDP]

The idea is that because this graph sort of looks like a straight line (although if you look closely, you'll see that it's not!), it will continue to look sort of like a straight line into the future.

But off the top of my head, I can think of no good reason to think that this is true. The kinda-sorta stability of the long-term U.S. GDP growth rate is not a law of the Universe, like conservation of momentum, which is (we hope) fixed and immutable. It is a past statistical regularity whose underlying processes we don't fully understand. There may be solid, long-term factors that will keep our growth at this "trend," or there may not.

Here, check out a long-term graph of Japan's GDP, in levels:

[Figure: Japan GDP, in levels]

Looks a bit different, eh? If you looked at this graph in 1996, you would expect Japan to return fairly rapidly to the exponential growth "trend" it had enjoyed up until 1995. Instead, Japan's GDP has remained flat since then (this is also true in real terms). Here's a picture of Japan's real GDP growth rate over time:

[Figure: Japan real GDP growth rate over time]

Japan's growth history looks very different from ours. It seems to have suffered some "trend breaks" in growth. And my question is: Why should we believe that this will not happen to us?

One common answer is that long-term growth for a mature economy will continue at roughly the rate of technological progress. But this is a tautology, since economists measure "technological progress" simply as the long-term rate of GDP growth. This leads some economists to look at slowing growth and conclude that technological progress is slowing. And maybe they're right! The point is that whether long-term growth represents "technology" or some combination of underlying processes, there is no law of the universe that says that these processes grow at a constant exponential rate.

And in addition to "trend breaks," there is no guarantee that U.S. GDP does not also contain unit roots. In other words, the assumption that the dip in U.S. output that we call the "Great Recession" will be made up for by fast growth in the future is unfounded. Even if the U.S. returns to its "trend" growth rate of 2 or 3 percent, there seems to me to be no good reason to believe that it will return to its trend level.
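
The distinction is easy to see in simulation. Here is a minimal sketch (illustrative parameters, not estimates of anything) contrasting a trend-stationary process, which makes up for a one-time shock, with a unit-root process, which never does:

```python
import random

# Toy contrast: log GDP under a trend-stationary process snaps back after
# a one-time shock; under a unit root ("random walk with drift") the same
# shock stays embedded in the level forever.
random.seed(0)
T, drift, rho = 40, 0.02, 0.8
trend_stat, unit_root = [0.0], [0.0]

for t in range(1, T):
    shock = -0.05 if t == 10 else random.gauss(0, 0.005)  # one big recession at t=10
    # Trend-stationary: deviations from the 2% trend line decay at rate rho.
    dev = rho * (trend_stat[-1] - drift * (t - 1)) + shock
    trend_stat.append(drift * t + dev)
    # Unit root: shocks accumulate permanently.
    unit_root.append(unit_root[-1] + drift + shock)

print("gap from trend at t=39:")
print(f"  trend-stationary: {trend_stat[-1] - drift * 39:+.3f}")
print(f"  unit root:        {unit_root[-1] - drift * 39:+.3f}")
```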

So no, this graph does not ease my worries. Past performance is no guarantee of future results. It may well be that a return to our "trend" growth rate, and/or a return to our "trend" level of output, is contingent on our policy choices. At least, I am not willing to assume that that is not the case...


Update: I don't want to mislead with the Japan graphs. In per capita terms, the growth slowdown since the mid-90s is much less pronounced. And before the 70s, Japan was well below the richest industrialized nations in per capita GDP, so its mid-70s slowdown is to be expected. But my point about Japan was that the U.S. long-term GDP plot, which is so often used to predict a return to 2-3% growth, is not particularly universal.

Update 2: What "policy choices," you may ask? Well, the answer is that I don't know. My instinct says that clinging to an increasingly broken health care system can't be good for long-term growth. The first-ever U.S. debt default, which Republicans look ever more eager to flirt with, might end our special place in the global economy. Accepting a permanently lower level of taxation and spending might starve the nation of infrastructure and R&D, thus reducing our trend growth. Just some thoughts.

Update 3: I see that G.I. over at The Economist has said much the same thing. And wow...the drops in output in Sweden and South Korea following those countries' financial crises sure look like unit-root drops to the naked eye! 

Update 4: Mark Thoma writes a very long and good post about the "return to trend" controversy, which also cites other long and good posts by Brad DeLong, Greg Mankiw, and Paul Krugman. Definitely read it! And, of course, it almost goes without saying that I think we should try our best to boost output back to the "trend," whether or not that would happen on its own.

Update 5: Brad DeLong has a graph of UK GDP that also shows that "returns to trend" are not universal.

Conley and Dupor, revised

A couple of posts back, on May 14, I criticized Timothy Conley and Bill Dupor for overstating their results in the abstract of a recent paper on the effect of the ARRA stimulus. The old abstract read:
Our benchmark results suggest that the ARRA created/saved approximately 450 thousand state and local government jobs and destroyed/forestalled roughly one million private sector jobs. State and local government jobs were saved because ARRA funds were largely used to offset state revenue shortfalls and Medicaid increases rather than boost private sector employment. The majority of destroyed/forestalled jobs were in growth industries including health, education, professional and business services.

On May 17, the authors posted a revision. The new abstract reads:
Our benchmark point estimates suggest the Act created/saved 450 thousand government-sector jobs and destroyed/forestalled one million private sector jobs. The large majority of destroyed/forestalled jobs are in a subset of the private service sector comprised of health, (private) education, professional and business services, which we term HELP services. There is appreciable estimation uncertainty associated with these point estimates. Specifically, a 90% confidence interval for government jobs gained is between approximately zero and 900 thousand and the counterpart for private HELP services jobs lost is 160 to 1378 thousand. In the goods-producing sector and the services not in our HELP subset, our point estimate jobs effects are, respectively, negligible and negative, and not statistically different from zero. However, our estimates are precise enough to state that we found no evidence of large positive private-sector job effects. Searching across alternative model specifications, the best-case scenario for an effectual ARRA has the Act creating/saving a (point estimate) net 659 thousand jobs, mainly in government. It appears that state and local government jobs were saved because ARRA funds were largely used to offset state revenue shortfalls and Medicaid increases (Fig. A) rather than directly boost private sector employment (e.g. Fig. B).
The new abstract is completely accurate, and does not overstate the paper's findings. I commend the authors for making the revision.

I would like to think that this represents a case of the blogosphere acting as a useful adjunct to the academic literature (and not just a case of me being a Statistics Nazi). The authors hopefully would have made a similar revision without any blog attention, but you never know.

What would it take for an American manufacturing revival?

Note: Posting is sporadic, and will continue to be so, due to the freak occurrence of a natural disaster known as "dissertation"...

Paul Krugman on the maybe-kinda-sorta revival of American manufacturing:
Manufacturing is one of the bright spots of a generally disappointing recovery, and ... a sustained comeback may be under way...[W]hat’s driving the turnaround in our manufacturing trade? The main answer is that the U.S. dollar has fallen against other currencies, helping give U.S.-based manufacturing a cost advantage. A weaker dollar, it turns out, was just what U.S. industry needed.
Why might manufacturing be important? There are many theories, but I tend to focus on the fact that manufactured goods are easily exportable. When goods are easily exportable, relatively small changes in exchange rates can allow large changes in the trade balance, which can be an effective way of fighting recessions. Sure, we export a lot of services, but a change in net services exports big enough to fight a recession probably requires a much bigger movement in exchange rates. And exchange rates are "sticky"; they don't like to move a lot. So what we could really use right now is a manufacturing export-driven recovery, enabled by a weaker dollar. Brad DeLong and Christina Romer agree.

As Krugman points out in his column, this will be politically difficult. Why? Krugman blames right-wing ideologues, but I somehow doubt that "strong dollar" money-illusion is high on the list of conservative priorities (maybe I'm wrong). My guess is that the financial lobby opposes a weaker dollar. Right now, the dollar is being propped up by the government intervention of our biggest trade partner, China, which buys U.S. bonds in order to finance its currency peg. Those bond purchases keep U.S. interest rates low, which means that our still-fragile financial system, with its still-huge accumulations of possibly-toxic mortgage-backed debt, gets to pretend for another day that it is still solvent. Low interest rates also allow many finance companies to make large profits through the use of leverage. For the dollar to weaken would require China to stop giving our finance companies free money, and so they naturally oppose a weaker dollar. That's just my guess, anyway.

But there are other, deeper reasons for America to be concerned about its manufacturing sector. These reasons are related to the very first econ theory I ever studied (and still one of my favorites) - the New Economic Geography theory, developed by Paul Krugman himself, for which he won the Nobel back in '08.

This theory is based on a fairly simple concept. Since it costs money to transport goods from one place to another, it makes sense for companies to put their factories near to their consumers. Since workers are also consumers, it therefore makes sense for companies and industries to cluster together. This, basically, is why we have cities. Krugman's model divides the world into an urban industrial "core" and a poorer, sparsely populated "periphery" that produces stuff from the land (e.g. food and oil). The core is much richer than the periphery, so if you are a country, you want to be the core.
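
A back-of-the-envelope sketch of the clustering logic (all numbers invented, and obviously a huge simplification of the actual New Economic Geography models):

```python
# Toy location choice: with per-unit shipping costs, the profit-maximizing
# factory site is the big market, so activity clusters in the "core".
demand = {"core": 90, "periphery": 10}   # units sold in each region
ship_cost = 2.0                          # freight cost per unit shipped between regions

for site in demand:
    other = "periphery" if site == "core" else "core"
    cost = ship_cost * demand[other]     # only out-of-region sales pay freight
    print(f"locate in {site:9s}: shipping bill = {cost:5.1f}")
# Locating in the core (bill 20.0) beats the periphery (bill 180.0).
```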

In this framework, shifting geographic patterns of wealth can cause booms and busts in far-away places. For example, as the center of global economic activity shifts from Europe to Asia, it may make less sense to locate a bunch of heavy industry in, say, Michigan (which is ideally positioned to supply things to America's Europe-facing East Coast).

The thing is, on a global level the U.S. itself doesn't make a natural "core." We have a lot of land, and not a lot of people on that land; geographically, we look a lot more like your average resource-producing country (say, Argentina) than your average manufacturing powerhouse (say, Brazil). Yes, the U.S. has some pockets of high density, and yes, the existence of tradable services complicates the picture. But if you read the news, you hear all the time about companies relocating production to Asia to (hopefully) take advantage of the huge new Asian consumer markets. That is economic geography in action.

If we want to bolster our manufacturing sector over the long run - whether to give us a better way to fight recessions, or to allow us to continue to benefit from industrial clustering - we may need to increase our population density over time. This will require that we do two things. First, we have to continue to allow large-scale immigration. Second, we have to make big policy changes to encourage urban density - public transit, high density housing, etc. Basically, the kind of stuff that Ed Glaeser is always recommending that we do. The alternative (and here I exaggerate) may be a steady process of deindustrialization as we revert to being an exporter of corn and coal.

Who is blocking high-skilled immigration?

[Photo: One native-born American, one (very) high-skilled immigrant.]

There is a puzzle in the American political economy that has me utterly baffled. Readers and fellow bloggers, I need your help to solve this puzzle. The question is: Who is the constituency holding back high-skilled immigration to the United States?

Few economists would dispute that high-skilled immigration is an undiluted positive for the American economy. In fact, it is one of the only sources of "low-hanging fruit" (as Tyler Cowen would put it) that we have left. Here's Annie Lowrey:
But maybe there remains one last shiny, fat apple hanging right in front of our faces, one last endeavor that would bring us fast, costless, and easy growth. It is immigration reform. The United States can grow faster by stealing the rest of the world's smart people.

[F]oreign-born entrepreneurs were at the helm of a full quarter of Silicon Valley start-ups founded between 1980 and 1998—start-ups like, say, Google. In 1998 alone, those companies created $17 billion in sales and accounted for 58,000 jobs.

Since then, the contributions of highly skilled immigrants—let's call them super-immigrants—have only grown...25.3 percent of engineering and technology start-ups opened between 1995 and 2005 had a foreign-born founder...Immigrant-founded companies across the country produced $52 billion in sales and employed 450,000 workers...[I]mmigrants are 30 percent more likely to start a business than U.S.-born citizens. Immigrants with college degrees are three times as likely to file patents as the domestically born...Economist Jennifer Hunt of McGill estimates that the contributions of immigrants with college degrees increased the U.S.'s GDP per capita by between 1.4 and 2.4 percent in the 1990s. 
For some reason, though, there exists a vast thicket of U.S. policies and practices that keep high-skilled immigrants out. H-1B visas are temporary, severely limited in number, and annoyingly hard to get. Our student visa program kicks foreign students out right after they finish. The number of "employment-based" green cards is capped at 140,000 a year, which heavily tilts our immigration policy toward family reunification and away from high-skilled immigration.

It goes without saying that this is nuts. By keeping these people out, our nation is shooting itself in the foot with a sawed-off shotgun.

But this raises a huge, looming question: Who or what is behind this insanity? Usually, when economists see such a gross and persistent miscarriage of policy, we look for a vested constituency that has successfully used the political process to block national efficiency in order to favor its own narrow interest. But, for the life of me, I can't figure out who is against high-skilled immigration.

It doesn't appear to be the political far right. Tea Partiers and the like are up in arms about immigration, but all of their animus is directed at low-skilled immigrants, particularly Mexicans. They are not marching in the streets or joining Minuteman squads because of Indian computer programmers.

Nor is it business conservatives. Check out this Wall Street Journal article by Jonah Lehrer, which basically echoes Lowrey (yes, I view the WSJ as a barometer of business-conservative opinion). Or read AEI calling for reform of our high-skilled immigration policy. After all, high-skilled immigrants are a huge boon to American business, which is why tech companies are always lobbying Congress (unsuccessfully) to increase the number of H-1B visas.

It isn't libertarians. Libertarians favor (relatively) open borders.

It doesn't seem to be the political left. Observe this article by David Altman in the Huffington Post, which used the term "super-immigrants" months before the Lowrey article, and basically says the same things (yes, I view the Huffington Post as a barometer of elite-liberal opinion). Liberals, after all, tend to favor a multicultural society. They also favor income equality, which high-skilled immigration tends to promote. Maybe Democrats are afraid that the children of entrepreneurial immigrants will vote Republican, but in recent years Asian-Americans have trended Democratic.

Is it the security state? Yes, the influx of foreign students and workers dropped off after 9/11, but has since recovered. It seems conceivable that the DHS and other arms of the security apparatus are paranoid about smart terrorists or Chinese spies. But I have not heard of the DHS lobbying to keep out high-skilled immigrants. Is this happening?

What about high-skilled native-born Americans? Are American-born computer programmers, engineers, and entrepreneurs afraid that high-skilled immigrants will take their jobs? I guess this is conceivable. I've heard some low-level grumbling from American-born engineers about the low wages and long hours that immigrant engineers are willing to accept, but I know of nothing even slightly resembling an organized movement or lobbying effort. And my guess is that smart Americans are smart enough to know that it's a positive-sum game - that the positive impact of the businesses started by smart immigrants vastly outweighs the effects of wage competition.

So who is it? Is there someone I'm forgetting here? Have I made a mistake in my analysis? Or is my instinct wrong - is it simple blind dumb institutional momentum, and not the diabolical actions of any special interest group, that is keeping the world's geniuses shivering outside our tall iron gates? Is it simply that no one is paying attention? Help me out here, people. Help me understand why we have not yet picked this lowest of low-hanging fruit.


Update: E.G. at The Economist has an excellent post comparing Canada's attitude toward high-skilled immigrants to America's. All of the commenters who wrote that "we have enough smart people already" should read it.

Did the stimulus really destroy a million private-sector jobs?



Hey non-economists, want to see what's inside the guts of one of those econ papers you keep hearing about? Well, that's why you have grad student bloggers like me. We read the papers so you don't have to.

This week, Greg Mankiw links us to a paper by Timothy Conley of Western Ontario and Bill Dupor of Ohio State University. The paper's eye-popping finding is that the American Recovery and Reinvestment Act (ARRA), also known as the Stimulus, was responsible for a net loss in jobs. No, really! From the paper's abstract:
Our benchmark results suggest that the ARRA created/saved approximately 450 thousand state and local government jobs and destroyed/forestalled roughly one million private sector jobs...The majority of destroyed/forestalled jobs were in growth industries including health, education, professional and business services.
Wow! Seriously? The stimulus directly resulted in a net loss of five hundred and fifty thousand jobs? That's it, I'm voting Republican from now on...

But wait. A still small voice is nagging me from the back of my mind, urging me to read beyond the abstract. And so my dissertation will have to wait 40 minutes while I wade through three dozen pages of PDF in search of an answer to my nagging doubts.

Because I do have some doubts about this result. Stimulus spending destroys jobs? How the heck is that supposed to work? I mean, maybe you believe in full Ricardian Equivalence, but that would just predict that stimulus is a wash. Perhaps people are cutting back spending in anticipation of the deadweight losses caused by the future taxes needed to pay back the stimulus-related borrowing*? Hmm, maybe, but that sounds so preposterous that I kind of expected something to be fishy about this paper from the get-go.

What Conley and Dupor do is to run a state-by-state regression. Different states received different amounts of ARRA spending, so looking at the differences in employment growth rate between those states after the passage of ARRA should tell us how many jobs ARRA created or destroyed. This should lead to a regression of the type:
Employment growth = A + B*Stimulus + C*Other Stuff + e
Now, you may say: "Wait, but states where employment goes down should be expected to get more stimulus money, since those are just the states that were hardest-hit by the recession!" And you'd be right: there is a big endogeneity problem here. After all, the fact that there's a bunch of sick people in the doctor's office doesn't mean that doctors make you sick. Messrs. Conley & Dupor deal with this problem by finding some "instruments" - natural sources of variation in the amount of stimulus money that a state gets, that have nothing to do with how bad the state's economy was doing. Usually, critics of an empirical paper like this will try to say that the instruments used are bad ones - that they actually can be affected by the business cycle, or that they don't give rise to enough variation in stimulus funding. 
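
For readers who have never seen an instrument in action, here is a minimal synthetic two-stage least squares sketch. The data, the instrument, and all coefficients are made up for illustration; this shows the technique's logic, not the paper's actual implementation:

```python
import numpy as np

# Synthetic IV demo: an instrument z shifts stimulus receipts but is
# unrelated to the unobserved shock u that also drags down employment,
# so 2SLS recovers the true effect where plain OLS cannot.
rng = np.random.default_rng(0)
n, true_b = 500, 0.5
z = rng.normal(size=n)                        # instrument (e.g. formula-driven funding)
u = rng.normal(size=n)                        # unobserved state-level shock
stimulus = z + 0.8 * u + rng.normal(size=n)   # endogenous: harder-hit states get more
growth = true_b * stimulus - 1.5 * u + rng.normal(size=n)

X = np.column_stack([np.ones(n), stimulus])
Z = np.column_stack([np.ones(n), z])

b_ols = np.linalg.lstsq(X, growth, rcond=None)[0]
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]      # first stage: project X on Z
b_iv = np.linalg.lstsq(X_hat, growth, rcond=None)[0]  # second stage

print(f"true effect {true_b}, OLS {b_ols[1]:.2f} (biased), 2SLS {b_iv[1]:.2f}")
```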

I am not going to do that. I am going to give Conley & Dupor a free pass on their instrumental variables, because I already see one, and possibly two, gaping holes in their analysis that make the instrument problem somewhat of a sideshow.

(Update: I had initially written about a second possible problem with this paper, but commenter Ivan found evidence that (thankfully) that problem didn't exist. So, in the interests of not making people read several pointless paragraphs, I've deleted the section that was previously here. Thanks, Ivan!!)

On page 20 of their paper (Table 4), Conley and Dupor have a table that shows their main result: the number of jobs that they estimate to have been created or destroyed by the stimulus. In all private sectors, the estimates are negative. BUT, check out the confidence intervals in Table 4. With one exception, the upper limits of all the confidence intervals are highly positive. This despite the fact that they use a less-rigorous 90% confidence interval (instead of the standard 95%).

This means that Conley and Dupor's results are statistically insignificant. Bluntly, what they have found is nothing. Formally, if we use their model to test the hypothesis that the stimulus caused a net increase in private-sector jobs, we will not be able to reject the hypothesis.
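
The arithmetic is simple enough to check on the back of an envelope, using the rough government-jobs numbers from the revised abstract (point estimate around 450 thousand, 90% interval roughly zero to 900 thousand):

```python
# Back-of-the-envelope significance check using the paper's rough numbers.
# A 90% confidence interval is the point estimate +/- 1.645 standard errors.
point, lo, hi = 450, 0, 900          # thousands of jobs
se = (hi - lo) / (2 * 1.645)         # implied standard error, ~274 thousand
t_stat = point / se                  # ~1.64: right at the edge of 10% significance
print(f"implied SE = {se:.0f}k, t-statistic = {t_stat:.2f}")
# A t-stat this small means the estimate is statistically indistinguishable
# from zero at the conventional 5% level.
```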

Conley and Dupor tweak their model with some alternative specifications. No change. As you can see in Table 7 and Table 9 (pp. 23-24), the upper 90% confidence limits continue to be strongly positive. Whatever the source - the instruments, the specification, or just the data itself - the estimates remain insignificant.

But, given the lack of any statistically significant findings, this paper does not deliver the results that it advertised. Conley and Dupor's abstract should read "We find no evidence for a significant effect of the ARRA on job creation." That would be scientifically honest, but would not turn a lot of heads. Instead, the abstract makes the more politically incendiary claim that the ARRA destroyed jobs, which the authors actually did not find. They do leave themselves an escape route by using the word "suggest," but I am not satisfied. In my opinion this is a paper that overstates its findings. (Note: Conley and Dupor have since revised their abstract significantly to more accurately reflect their results, for which I commend them!)

My guess is that papers like this get attention because of politics, not because of science. Dr. Mankiw linked to this paper without comment, evaluation, or qualification. But he could have just as easily linked to this paper by Daniel J. Wilson, which uses a methodology similar to that of Conley and Dupor, but finds strongly positive (and often strongly significant) effects of the stimulus.


Update: Arnold Kling is also not a fan of Conley-Dupor.


* Actually, it's worse than that. You have to also assume that future deadweight losses from stimulus-payback taxation will be highly concentrated in the states that received the most stimulus funding; i.e., that taxes will be specifically targeted at those states! 

Speculators and oil prices: what experiments tell us

Another gas price spike, another wave of articles blaming "speculators." Here's an editorial in USA Today by Representative Ed Markey, D-Mass:
[W]e must crack down on speculators in the oil market. Speculative money is seeking volatile investments. Since 2003, the size of the oil futures market has increased by a factor of 17...

When this massive speculative market meets manipulators such as Saudi Arabia and OPEC, consumers get gouged. Goldman Sachs has indicated as much as $20 per barrel is due to speculation, not supply and demand. Anyone doubting the volatility and added momentum speculation brings to the market need only look at Thursday's one-day drop of 9% in the price of oil. The oil market should be governed by the principles of supply and demand, not flittering on the whims of speculators. 
Most Americans seem to agree with this idea. But is it true? Do speculators cause oil and/or gas prices to rise above their "natural" or fundamental level?

First, an important distinction. When we talk about "speculation," we're typically talking about futures contracts. If a speculator buys an oil futures contract, (s)he is not buying a barrel of oil; (s)he is buying the right to buy a barrel of oil in the future, for a price that is determined (locked in) today. This is a very different thing than hoarding, which is purchasing the actual physical commodity and storing it, with the intent to sell it in the future at a profit when the price goes up. Everyone agrees that hoarding can cause today's prices to rise; the question of whether futures contract purchases can have the same effect is far trickier. In fact, most economists will tell you that futures speculation can only raise spot-market prices if it causes physical hoarding to increase.
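
A toy payoff comparison (all prices invented) makes the distinction concrete: both bets pay off if the price rises, but only the hoarder takes physical oil off today's market:

```python
# Toy payoff comparison for a speculator who expects oil to rise.
spot_today = 100.0       # $/barrel now
futures_price = 102.0    # price locked in today for delivery in a year
storage_cost = 3.0       # $/barrel/year to store physical oil
spot_future = 120.0      # realized spot price a year from now

profit_futures = spot_future - futures_price               # settle the contract
profit_hoarding = spot_future - spot_today - storage_cost  # buy, store, sell

print(f"futures speculator: ${profit_futures:.0f}/bbl")   # $18
print(f"physical hoarder:   ${profit_hoarding:.0f}/bbl")  # $17
```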

The key question is: If we curbed activity on futures markets, would prices stabilize?

Theoretically, it's hard to see how that would work. If I'm a speculator who believes that oil prices will rise, I have two options to make a profit: 1) I can buy an oil futures contract, or 2) I can buy an actual barrel of oil and store it. But what if there is no futures market? In that case, I only have one way to speculate: hoard physical oil. Since it is obvious that hoarding raises prices, but not obvious that futures contracting raises prices, it seems that curbing futures speculation - as Ed Markey would have us do - would push prices up rather than down.

But enough theory; what does the data say? Fortunately, this is one area of economics where good controlled experimental evidence exists. In 1995, Vernon Smith and David Porter conducted an experiment to examine the effect of futures markets on the formation of asset bubbles. They found that when people can buy and sell futures contracts, asset bubbles tend to be much smaller and rarer than when futures trading is forbidden. In 2006, Charles Noussair and Steven Tucker did a more in-depth version of the experiment, and got exactly the same result. When futures markets aren't available, spot prices bubble and crash; when futures trading is allowed, futures prices oscillate wildly, but spot prices barely budge from the correct fundamental value.

This experimental evidence is important, because it is controlled. Looking at real data usually doesn't allow you to determine cause and effect; you can observe that futures prices and spot prices tend to move together, but (unless you find a good instrument) you can't pick apart which is causing which. Even if you find a historical case of futures markets being curbed, you don't know whether what happened after that was a result of the policy change, or any one of a bazillion other unrelated events. But in the laboratory, we know that only one thing has changed. So we know that it was the introduction of the futures market that killed the bubble in the lab.

Now, you can argue that lab experiments don't have external validity; that real-world markets are so different that exactly the opposite thing happens when you allow futures markets in the real world. And maybe you'd be right. But as things stand, the weight of evidence is firmly against the idea that futures speculators raise oil prices.

Bob Shiller and Greg Mankiw stick up for Science

"Measure what is measurable, and make measurable what is not so." - Galileo Galilei

"I appear to be wiser than he, because I do not fancy I know what I do not know." - Socrates

Writing some of my recent posts has gotten me thinking a lot about economics as a science. It seems to me that all too few economists view their field the way natural scientists do their own - as a potential tool for understanding and mastering the Universe. Plenty of economists value models that are "interesting" or "thought-provoking," that tell "good stories," or that have a priori plausible assumptions. That is how journalism or philosophy works, but it is not how Science works. 

So it is extremely gratifying and refreshing to hear leading economists stick up for two of the key elements of science: observation and doubt. First, here's Bob Shiller:

[T]he theory of outlier events doesn’t actually say that they cannot eventually be predicted. Many of them can be, if the right questions are asked and we use new and better data. Hurricanes, for example, were once black-swan events. Now we can forecast their likely formation and path pretty well, enough to significantly reduce the loss of life. Such predictions are a crucial challenge in economics, too, and they are why data collection need not be a dull or a routine field...

Armchair scientists will never get far; observation makes all the difference. Think of the advances that came with the microscope and telescope. So it is with measurements in economics, too.

I produced a century-long series of home prices, which revealed how unusual the housing-price boom was [in the mid-00s]. General talk about the nature of bubbles didn’t convince many people that a bubble was forming, but the data I collected did convince at least some that we were in a very risky and historically unparalleled situation...

We need another measurement revolution like that of G.D.P. or flow-of-funds accounting. For example, Markus Brunnermeier of Princeton, Gary Gorton of Yale and Arvind Krishnamurthy of Northwestern are developing what they call “risk topography.”...We should respond just as we did to the Depression, by starting the long process of redefining our measurements so we can better understand the risk of another financial shock. (emphasis mine)
This is absolutely right. To figure out how the world works, you have to actually look out the window. The revolution in astronomy in the 1600s - which led to and motivated the invention of physics itself - depended crucially on improvement in telescopes, like the ones invented by Galileo and Newton. Similarly, we didn't correct classical physics (with relativity and quantum mechanics) until we mastered electricity and observed electric phenomena that didn't square with existing theories.

As Shiller notes, the big data revolution in econ came after the Depression, when we invented things like the National Income and Product Accounts. All the macro we have today, from RBC to New Keynesian models to more outlandish stuff, is an attempt to explain what we see in the NIPA. Those theories are extremely limited; if we're going to improve upon them, we need better data, not just to pick from the cornucopia of models we have now, but to develop new and more useful ones. Shiller talks about better financial data (also see Hernando de Soto on that subject), but another source of good data is coming from experimental economics, which is rapidly becoming more central to the field.

But to find theories that work, we also need another pillar of the scientific approach: doubt. That is why I was pretty happy to see Greg Mankiw write this in the Times:
After more than a quarter-century as a professional economist, I have a confession to make: There is a lot I don’t know about the economy. Indeed, the area of economics where I have devoted most of my energy and attention — the ups and downs of the business cycle — is where I find myself most often confronting important questions without obvious answers...
The inflation rate that the economy gets is, in large measure, based on the inflation rate that people expect...Even if expectations are as important as the conventional canon presumes, it isn’t obvious what determines those expectations. Are people merely backward-looking, extrapolating recent experience into the future? Or are the expectations based on the credibility of policy makers? And if credibility matters, how is it established? Are people making rational judgments, or are they easily overcome by fear and influenced by extraneous events?...
I just cannot express how refreshing it is to see this kind of scientific humility being expressed by one of macroeconomics' most respected practitioners. Yes, Mankiw is using doubt to score political points over his opponents; the ideas about which he waxes skeptical are things like "We should worry about unemployment more than inflation" and "The U.S. government can safely borrow more money." But that's absolutely fine! There will be plenty of people on the other side of the political spectrum to cast doubt on the idea that we should worry about inflation and deficits. Don't worry.

Because something bigger is at stake here. By invoking doubt, and by admitting his ignorance and the limitations of his models, Greg Mankiw is doing the economics field a great service. Mankiw is probably the ultimate virtuoso practitioner of macro's dominant DSGE paradigm. By admitting that that paradigm has failed to answer some of its own central questions, he is reminding us that - in a field filled with chest-thumping and argument-from-authority - the crucial idea of scientific doubt is not quite dead.

Observation and doubt. Human thinkers have not always valued these things. Economics is far behind the natural sciences - and marginally behind the field of psychology - in recognizing their importance. Kudos to Shiller and Mankiw for nagging us to abandon "armchair science" and do some real science.

What I learned in econ grad school, Part 2

[Photo: Greg Mankiw]

When I wrote my earlier post recounting what I learned in econ grad school, I realized shortly after I finished it that I might have sounded like I was being a little too harsh on my own econ department, which is really quite a good one. That's why I added the following:
In my second year I took a macro field sequence, which taught me all about demand-based models, frictions, heterogeneity, and other interesting stuff. I don't want to make it sound like graduate school taught me nothing about how to understand the recession...it taught me plenty. It just all came in the field course...
I realize now that this update deserved its own post. After all, the course I described in my last post was one single semester, out of four that I've spent learning macro. If we're trying to assess how well grad school trains macroeconomists, we should talk about the field classes that they're required to take.

My second-year field class was divided into four half-semester portions. Each had its own theme. Broadly, these were: 1) Heterogeneous-agent models, 2) Sticky-price models, 3) Neo-monetarism, and 4) Labor search. Some highlights:

* We spent quite a lot of time on heterogeneous-agent models, e.g. Krusell-Smith model. These models turn out to be very tricky to solve numerically. So far, they have also been mostly wrong in their predictions. But they are very interesting nonetheless.

* We learned about sticky-price models and their cousins, Greg Mankiw's sticky-information models (Mankiw is pictured above). I really liked Mankiw's model; although it (like most macro models) is a "storytelling" model with some implausible assumptions and no real predictive power, the story it tells points in some very interesting research directions, since it involves much more interesting microfoundations than the standard "tastes and technology."

* We briefly covered structural vector autoregressions, or SVARs (I also learned these in a stats class). I liked these because the focus was on making forecasts...finally, someone calculating something! Also, they were honest about their limitations; their standard error bars were so big that they had very little predictive power more than one quarter into the future, but they admitted and prominently displayed this fact, instead of using something like "moment matching" to try to exaggerate their empirical success. (A bare-bones sketch of a reduced-form VAR forecast, with its fanning-out error bands, appears after this list.)

* We studied this very interesting paper by Basu, Fernald and Kimball. Basically, the paper constructs a very general form of the RBC model, and finds that it can't explain economic fluctuations. The reason is that improvements in technology, which are what cause booms in Prescott's original RBC setup, actually cause recessions once you allow for things like imperfect competition. This reinforces similar results by Jordi Gali, who used SVARs but arrived at the exact same conclusion.

* We learned some neo-monetarist models (by the way, what I learned was called "neo-monetarism" seems very different from what Stephen Williamson thinks it is). The neo-monetarist policy response to recessions, I learned, is quantitative easing. Or, as my advisor Miles Kimball put it: "Print money and buy stuff!" (He actually repeated this line four times in a row. When I asked him later what he thought of Bernanke's response to the recession, he grinned hugely and said "He printed money and bought stuff!") I also learned that some neo-monetarist models have a role for fiscal policy, but only for a short time after a particularly severe drop in investment.

* We studied labor search models, e.g. the Mortensen-Pissarides model (which recently won its creators the pseudo-Nobel). Although these models, like the heterogeneity models, make some incorrect predictions, they are commendable for admitting this fact. I liked these models because they relied on interesting and observable microfoundations (e.g. the job matching function).
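
As promised above, here is a bare-bones reduced-form VAR sketch on synthetic data - this is not an identified SVAR, and every number is invented - just to show how quickly honest forecast error bands fan out:

```python
import numpy as np

# Hand-rolled reduced-form VAR(1) on simulated data: fit by least squares,
# then compute how fast the forecast-error bands widen with horizon.
rng = np.random.default_rng(0)
A = np.array([[0.5, 0.1], [0.0, 0.3]])       # true dynamics (made up)
y = np.zeros((200, 2))
for t in range(1, 200):
    y[t] = A @ y[t - 1] + rng.normal(0, 1, 2)

X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T   # OLS estimate of A
resid = Y - X @ A_hat.T
sigma = np.cov(resid.T)

# h-step forecast-error variance: sum over j < h of A^j Sigma (A^j)'
fc_var = np.zeros((2, 2))
Aj = np.eye(2)
for h in range(1, 5):
    fc_var += Aj @ sigma @ Aj.T
    Aj = A_hat @ Aj
    print(f"h={h}: 90% band half-width for series 1 = "
          f"{1.645 * np.sqrt(fc_var[0, 0]):.2f}")
```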

The field course addressed some, but not all, of the complaints I had had about my first-year course. There was more focus on calculating observable quantities, and on making predictions about phenomena other than the ones that inspired a model's creation. That was very good.

But it was telling that even when the models made wrong predictions, this was not presented as a reason to reject the models (as it would be in, say, biology). This was how I realized that macroeconomics is a science in its extreme infancy. Basically, we don't have any macro models that really work, in the sense that models "work" in biology or meteorology. Often, therefore, the measure of a good theory is whether it seems to point us in the direction of models that might work someday.

Anyway, Brad DeLong would still probably have some issues with my field course. We did learn a lot of demand-side models, and a bit of history as well (I learned about Wicksell, and about the Great Depression, both for the first time). But never once was finance mentioned. I learned about the existence of financial accelerator models in an email from a friend at Berkeley...

There were two other big conclusions I drew from that course.

The first was that the DSGE framework is a straitjacket that is strangling the field. It's very costly in terms of time and computing resources to solve a model with more than one or two "frictions" (i.e. realistic elements), with more than a few structural parameters, with hysteresis, or with heterogeneity, etc. This means that what ends up getting published are the very simplest models - the basic RBC model, for example. (Incidentally, that also biases the field toward models in which markets are close to efficient, and in which government policy thus plays only a small role.) 

Worse, all of the mathematical formalism and kludgy numerical solutions of DSGE give you basically zero forecasting ability (and, in almost all cases, no better than an SVAR). All you get from using DSGE, it seems, is the opportunity to puff up your chest and say "Well, MY model is fully microfounded, and contains only 'deep structural' parameters like tastes and technology!"...Well, that, and a shot at publication in a top journal.

Finally, my field course taught me what a bad deal the whole neoclassical paradigm was. When people like Jordi Gali found that RBC models didn't square with the evidence, it did not give any discernible pause to the multitudes of researchers who assume that technology shocks cause recessions. The aforementioned paper by Basu, Fernald and Kimball uses RBC's own framework to show its internal contradictions - it jumps through all the hoops set up by Lucas and Prescott - but I don't exactly expect it to derail the neoclassical program any more than did Gali.

It was only after taking the macro field course that I began to suspect that there might be a political motive behind the neoclassical research program (I catch on quick, eh?). "Why does anyone still use RBC?" I asked one of the profs (not an RBC supporter himself). "Well," he said, stroking his chin, "it's very politically appealing to a lot of people. There's no role for government." 

That made me mad! "Politically appealing"?! What about Science? What about the creation of technologies that give humankind mastery over our universe? Maybe macro models aren't very useful right now, but might they not be in the future? The fact is, there are plenty of smart, serious macroeconomists out there trying to find something that works. But they are swimming against not one, but three onrushing tides - the limited nature of the data, the difficulty of replicating a macroeconomy, and the political pressure for economists to come up with models that tell the government to sit on its hands.

Macro is a noble undertaking, but it's 0.01 steps forward, N(0,1) steps back...

A halfhearted semi-defense of Casey Mulligan

Wow, I almost can't believe I just wrote those words. Anyway...

Paul Krugman and Brad DeLong are quick to jump all over Casey Mulligan for his recent blog post attacking New Keynesian macro theories. Mulligan writes:
Our labor market has long-term problems that are not addressed by Keynesian economic theory. New Keynesian economics is built on the assumption that employers charge too much for the products that their employees make and are too slow to cut their prices when demand falls. With prices too high, customers are discouraged from buying, especially during recessions, and there is not enough demand to maintain employment.

When the financial crisis hit in 2008...New Keynesian fears seem to have been realized: consumer prices had to fall to maintain employment, but too few employers were willing or able to make the price cuts quickly enough. The result was going to be a severe recession that could be partly cured, in the short term, by fiscal stimulus or, in the longer term, as more companies had the time needed to cut their prices...[but] the low employment rates we have today are too persistent to be blamed on price adjustment lags[.]
Krugman does not hold back:
It looks as if he’s assuming that nominal demand is constant, so that a fall in prices would lead one for one to a rise in real output. But where’s that coming from?

If he had read anything — anything at all — that Keynesians have written about policy at the zero lower bound, he would have learned that there is no reason to expect falling wages and prices to raise employment — in fact, quite the contrary in the face of a debt overhang.
Nor does DeLong:
For firms and workers to cut their prices in a downturn has, New Keynesians (and Old Keynesians, and monetarists, and Fisherians, and Wicksellians, and a host of others) think, two effects:
  1. With lower prices the same flow of nominal demand purchases more commodities and employs more people. 
  2. With lower prices the collateral and cash flow cover of nominal debt erodes, and so nominal debt becomes even riskier. If the problem is indeed an excess demand at full employment for safe assets, allowing deflation to reduce the supply of safe assets really does not help.
Back in 1933, I think, Irving Fisher argued most strenuously that deflation was destabilizing: that downward moves in nominal wages and prices were not the cure but the cause of Great Depressions.

It is much better to use other policies--open-market operations, quantitative easing, commitments to further expansion in the future, loan guarantees, government spending, tax cuts--to boost nominal demand in both the short and long run than to sit back and wait for deflation to someday, somehow restore the proper functioning of the market system and return the economy to full employment.
Both DeLong and Krugman are right, in the sense that the existence of debt deflation and liquidity traps mean that you can't just sit there and wait for falling prices to cure a depressed economy.

But they are defending New Keynesians, or the New Keynesian movement. Many economists who count themselves as New Keynesians (or Old Keynesians, or monetarists, or Fisherians, or Wicksellians) understand and believe in the existence of debt deflation and liquidity traps. But that is a very different thing from defending New Keynesian models. Mulligan is actually right about the particular New Keynesian model that he is criticizing.

The classic New Keynesian model is a sticky-price model. In that model, recessions happen when firms are unable to lower their prices in response to falling demand. The solution is for the Fed to cut interest rates, thus raising demand. In these models, if the Fed does not cut interest rates, deflation eventually brings the economy back to full employment at a lower price level, just as Mulligan says.

That's it. No debt deflation, no liquidity trap. The economy can be perfectly stabilized by the implementation of a Taylor-type rule governing nominal interest rates. This is what you will find if you read Michael Woodford's book.
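
For concreteness, here is the classic Taylor (1993) rule that this class of models leans on, with its standard coefficients (the inputs below are invented). Note what happens in the second example, where the rule prescribes a negative nominal rate - exactly the zero-lower-bound territory Krugman is talking about:

```python
# The classic Taylor (1993) rule. In the baseline New Keynesian model, a
# rule like this (with a coefficient on inflation above one) is enough to
# pin down a stable equilibrium.
def taylor_rate(inflation, output_gap,
                neutral_real_rate=2.0, inflation_target=2.0):
    """Nominal interest rate prescribed by the rule, in percent."""
    return (neutral_real_rate + inflation
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

print(taylor_rate(inflation=3.0, output_gap=-1.0))   # 5.0
print(taylor_rate(inflation=0.0, output_gap=-6.0))   # -2.0: below zero -> trouble
```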

Now, it's true that New Keynesian models have developed far beyond this baseline. And, of course, people who call themselves "Keynesian" (New or otherwise) in no way believe that this model fully describes the economy. But that doesn't erase the grain of truth at the core of Mulligan's crude caricature. The basic "New Keynesian" sticky-price model is not very useful in describing the situation in which we now find ourselves.

Nor was it intended to be. The original sticky-price models were not intended to be a "theory of everything," they were intended to tell the simplest possible story of why demand might matter for the macroeconomy. At the time that Mankiw and Calvo and others were laying the foundations of New Keynesian theory, the "neoclassical" paradigm and the RBC model were in the ascendant; many macroeconomists had been convinced that all recessions were caused by supply shocks, and that demand basically didn't matter at all. Sticky-price models were a way of saying "No, wait, demand shocks could matter as well," in a way that fit into the DSGE framework that neoclassicals insisted everyone use.

The sad truth of the matter is that when macro models are created to tell stories instead of make predictions, it becomes pretty easy for anyone to poke holes in their political opponents' baseline models. And it's also true that stories have power; many smart New Keynesian economists were convinced, before the 2008 crisis shattered their faith, that the Fed really could manage the economy with things like interest-rate targeting.

That turned out not to be true. And to their credit, New Keynesian (and Old Keynesian, and monetarist) economists rapidly realized that their framework had been too narrow, and turned to an older and more diverse set of models to help them understand what they were seeing...while neoclassical economists like Casey Mulligan mainly buried their heads in the sand and blamed the recession on Obama or other chimeras. So it is a little rich for Mulligan to be taking potshots at twenty-year-old New Keynesian formalism, at a time when the people who made or endorsed that formalism have basically moved on.

Final note: I should point out that after he points out the weakness of the classic sticky-price model, Mulligan goes on to say a whole bunch of nonsense things about labor costs, minimum wages, etc. I do not want this blog post to be read as an endorsement of any of that silly stuff.


Update: Brad DeLong emails to point out that the financial accelerator models of Bernanke and Gertler are considered "New Keynesian" models. To be honest, I didn't know that. Actually, sad to say, I never even learned those models from any of the macro classes I took (time for "What I learned in econ grad school, Part 2"?), and only found out about their existence on a tip from a friend at Berkeley! So my defense of Mulligan's terminology can be chalked up to my own Dark Age ignorance. Doh.

But, I reiterate the points I was trying to make: Mulligan discusses one single "New Keynesian" model, the sticky-price model of Calvo, Woodford, etc. He is right insofar as he is saying that that model is sorely inadequate. He would be right if he had pointed out that its inadequacy is mainly a result of the use of macro models to "tell stories" (but he did not point this out).  He is wrong insofar as he is claiming that the "New Keynesian" paradigm or movement is thus discredited. And he is kind of bizarre in his claim that we should be focusing on long-run supply-side policies.

Update 2:  My halfhearted semi-defense is smacked down by Paul Krugman.