
The Omnipotent Fed idea



Since the Fed started its new policy of "QE infinity" (which it stepped up on Wednesday), acclaim has been heaped upon the economists who have promoted a policy of NGDP targeting (or "NGDP level path targeting"), which bears some resemblance to "QE infinity". Chief among these economists is Scott Sumner, who promotes his ideas mainly through his blog; Sumner was recently named one of Foreign Policy Magazine's top 100 global thinkers, and economics pundits from Tyler Cowen to Matt Yglesias have credited Sumner as being the intellectual force behind the Fed's new policy. However, Scott is far from a solitary crusader; he has been assisted by David Beckworth, Ryan Avent, Andy Harless, Steve Randy Waldman, Joe Weisenthal, Evan Soltas, and a number of other bloggers and pundits. Additionally, my own graduate advisor, Miles Kimball, has promoted similar ideas on his blog and in his academic work.

I generally support the idea of an activist Fed, unconventional monetary policy, etc. However, I do have a misgiving about a key element of the case made by the aforementioned crop of monetarists. This is the notion of an "Omnipotent Fed"...by which I mean not that the Fed can create stars and galaxies, but that the Fed can set NGDP to be whatever it wants. If this assumption is wrong, NGDP targeting (or similar policies) may simply not work.

To some, the proposition that the Fed can hit any NGDP target seems self-evident. NGDP is just real GDP multiplied by the price level; if the Fed perfectly controls the price level, and either A) knows the relationship between the price level and output, or B) can change the price level faster than real output changes, then it immediately follows that the Fed sets NGDP. You often hear this stated as the idea that "the Fed can always choose to inflate".
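To make the algebra explicit (this is just the accounting identity, not anything specific to Sumner's proposal):

```latex
% NGDP is the price level times real output:
\[
  N_t = P_t \, Y_t
\]
% So if the Fed can set the price level, hitting an NGDP target N^*
% just means setting
\[
  P_t = \frac{N^{*}}{Y_t},
\]
% which is feasible under either condition (A) -- the Fed knows how
% Y_t responds to P_t -- or condition (B) -- the Fed can move P_t
% faster than Y_t changes.
```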

But what if the Fed can't set the price level? There are several ways this could be the case. For example, the price level might be discontinuous in certain regions. Suppose the Fed attempts to set inflation at exactly 176.73%, but any monetary policy that pushes inflation above 175% automatically causes it to jump to 190%, while any weaker policy leaves it at 175% or below. In other words, suppose that in some region, NGDP is a step function with respect to monetary policy. That's just one example, though; in general, any time NGDP is an unstable, stochastic, or indeterminate function of monetary policy, the "omnipotent Fed" proposition fails. One special case of this is Milton Friedman's idea that monetary policy acts with "long and variable lags", a notion that has been pooh-poohed by the new monetarists.
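Here's a toy sketch of that step-function case, with purely hypothetical numbers (this is an illustration of the logical possibility, not a claim about the actual policy response):

```python
def inflation_response(policy):
    """Hypothetical step-function response of inflation (in percent) to a
    one-dimensional policy instrument, e.g. the size of asset purchases.
    Any setting strong enough to push inflation past 175% makes it jump
    straight to 190%; no setting delivers anything in between."""
    return 190.0 if policy > 0.5 else 175.0

# Sweep a fine grid of policy settings and see which inflation rates
# are actually attainable.
attainable = {inflation_response(p / 1000) for p in range(1001)}
print(sorted(attainable))        # [175.0, 190.0] -- nothing in between
print(176.73 in attainable)      # False: the Fed cannot hit this target
```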

Anyway, OK, so is the Fed "omnipotent" or not? Well, how on Earth could we know? My big problem with the "Omnipotent Fed" idea is that it seems non-falsifiable. By which I mean, it doesn't seem like the evidence will ever be able to tell us whether the Fed is omnipotent or not.

Why not? Two reasons: A) Because the Fed's thought process is unobservable, and B) Because the Fed's policy toolkit is unobservable. To know what the Fed can do, we have to know what the Fed tries to do. For example, suppose we see the Fed do $1 trillion of quantitative easing, and NGDP doesn't seem to budge much.

Interpretation 1: The Fed knew that its actions would lead to a non-budging NGDP level, which is why it did what it did. In other words, the Fed chose to keep NGDP where it was, and if it had wanted to, it could have raised NGDP instead of just keeping it static.

Interpretation 2: The Fed tried to raise NGDP and failed. It failed because the people at the Fed made a mistake. They did the wrong kind of easing, or didn't manage expectations correctly, or in some other way used the wrong tools. If the Fed had used the right tools, it could have raised NGDP.

Interpretation 3: The Fed tried to raise NGDP and failed. It failed because the only way to raise NGDP would have been to cause a hyperinflation, which would raise NGDP by much, much more than it wanted.

There are more interpretations. I just highlighted these three to demonstrate a point.

How can you know what the Fed wants? You can make some guesses, but not scientific ones. The Fed keeps its decision-making process secret. And suppose you somehow could figure out what the Fed wants (say, by applying a mind-reading device to the Fed chairman during a policy announcement). That would tell you precious little about what the Fed is actually capable of. For example, suppose that expectations are very important in the determination of NGDP. Do we know what determines expectations? Not really, no. Or suppose money demand is unstable, or contains hysteresis, in certain regions. How would we know that?

Some people have claimed that an "NGDP futures market" would allow us to test the proposition of Fed omnipotence. If NGDP futures were stable, they say, that would show that the Fed can hit any NGDP target it likes. But this is just flat-out false. Low NGDP futures volatility could mean that the Fed is utterly powerless, and that investors simply expect few shocks to NGDP.
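A minimal simulation of that identification problem (both regimes are made up): an "omnipotent Fed" that offsets every shock and a powerless Fed in a world that happens to have tiny shocks generate the same low NGDP volatility, so low futures volatility can't distinguish them.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1_000

# Regime A: an omnipotent Fed. Demand shocks are large, but the Fed
# observes and fully offsets every one of them.
shocks_a = rng.normal(0.0, 2.0, T)
fed_response_a = -shocks_a
ngdp_growth_a = 4.5 + shocks_a + fed_response_a

# Regime B: a powerless Fed. The Fed offsets nothing, but the shocks
# hitting NGDP happen to be tiny.
shocks_b = rng.normal(0.0, 0.01, T)
ngdp_growth_b = 4.5 + shocks_b

# An NGDP futures market tracking either regime would show low volatility.
print(np.std(ngdp_growth_a), np.std(ngdp_growth_b))   # both approximately 0
```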

So it seems to me that the proposition of Fed omnipotence is something that we can only believe by making a leap of faith. It is functionally equivalent to the notion that an invisible God controls everything we see in the world. Thus, believers in the Omnipotent Fed will always be able to claim, without scientific or logical refutation, that every jump and juggle of NGDP was the deliberate choice of the Fed.

Does this mean that every question about monetary policy is fundamentally unanswerable? No, it doesn't. We can't observe the Fed's desires, and we definitely can't observe the Fed's total choice set. But we can observe the Fed's actual choices. If you tell me "Inflation always rises 1 for 1 with the monetary base," well, that is an easy proposition to falsify.

So what is the implication for monetary policy? I am not claiming that NGDP targeting is a bad idea, or that it definitely will not work, or even that it is unlikely to work. What I'm claiming is that, in the presence of true Knightian uncertainty about the power of the Fed, if NGDP targeting doesn't seem to work, we will at some point abandon the policy. And the point where we decide it has failed will depend not on scientific fact, but on intuition and heuristics. In other words, if the Fed keeps printing money and NGDP doesn't return to its pre-crisis path, at some point we will simply start to entertain the notion that the Fed is incapable of doing what we want it to do. And then we will try to think of something else. And of course the true believers will say: "No, the Fed could have done it, they just didn't really want to." And we won't be able to prove them wrong.


Update: On Twitter I asked Miles Kimball what he thought of this post, to which he responded:
It is certainly a logical possibility that the Fed can't get inflation up without overshooting...The difference is that I don't think the US is actually in that situation of having to overshoot. Japan may be.

When Is a Program Too Feature-Rich?

In yesterday's post, I posed a bunch of really hard human-factors questions. What got me thinking about all that was the simple question: When (if ever) is a program too feature-rich?

Maybe it's not possible for a software system to be too feature-rich. Perhaps it's all a question of how features are organized and exposed. After all, a laptop computer (in its entirety: operating system, drivers, software, everything) can be considered a single "software system"—a single meta-app, with various child components having names like Chrome, Microsoft Word, Photoshop, etc. Imagine how many "features" are buried in a laptop, if you include the operating system plus everything on down (all software applications of all kinds). We're talking hundreds of thousands of features, total. And yet, users manage, somehow, to cope with all that complexity. Or maybe I should say, users try like hell to cope with it. Often, it's a struggle.

Given the fact that people still do buy enormously complex "software systems" (and manage to cope with them, to the point of making them worthwhile to own), maybe something like total feature count doesn't matter at all, in and of itself, where usability is concerned.

Or does it? There are still people in this world who are afraid to use a computer (and/or terrified to use a smart phone or an iPad), either because it's "too hard," too apt to make the user feel stupid, or whatnot. Those of us who do use such devices daily tend to chuckle at the fears of the computer-illiterate. We tell them "There's nothing to be afraid of" and then expect them to get over their fears instantly. When they don't, we scoff.

But really, should we be judging the user-friendliness of a software system by how easy it is for the majority of users to adapt to it (often with a certain amount of pain and difficulty)? Or should we (instead) be judging a system's usability by the number of people who are afraid of it?

Why shouldn't we be designing systems for the computer-fearful rather than for the computer-literate?

It's easy to say that something like total feature count doesn't matter as long as the software's (or device's) interface is good. The problem is, it's never really very good.

I consider myself a fairly computer-literate person at this point. I've written programs in assembly language for Intel- and Motorola-powered machines. I can read and write C++, JavaScript, Java, and (under duress) a few other programming languages. I've written plug-ins or extensions for a dozen well-known desktop programs, and I have seven software patents to my name. But there are still software systems in this world (mostly enterprise) that make me feel stupid.

If someone like myself feels stupid when confronted by a certain device or software system, isn't that an indictment of the software (or device)? Or do I deserve to feel stupid, since thousands of other people are able to get work done using the same software?

If there are people in this world who don't know how to set the time and date on a coffee maker, isn't that an indictment of the coffee maker?

If someone can't figure out how to operate a cable-TV channel changer, isn't that an indictment of the device itself?

I don't have hard and fast answers to these questions. But I think it's fair to raise the questions.

I'll go further and say: Every user (or potential buyer) of software, or software-powered devices, should definitely raise these questions.

Also: Every company that designs and sells software, or software-powered devices, needs to raise these questions.

So raise them, I say. If you're a software (or device) maker, have the necessary dialog, in your company, to get a strategy together for dealing with these sorts of issues. Get users involved in the discussion. Come up with a trackable plan for implementing possible solutions. Then follow up with customers to see if the solutions are actually accomplishing the desired purpose.

And if you're a customer? Ask yourself how smart or how stupid you feel when using a given product. And then, if you have a choice, vote with your wallet.

Hard Human Factors Questions

A cascading menu in Firefox. (An example of GUI 1.0 design.)


I'm not a human factors expert (therefore I could easily be wrong on this), but it seems to me that where GUI-driven applications are concerned, certain fundamental human factors questions have either been overlooked or not investigated fully. For example:
  • How many features can you pack into a program before you reach some kind of usability limit? Are there any fundamental usability limits relating to feature count, or can feature count go on forever? 
  • What does it mean to have a product with ten thousand features? What about a hundred thousand features? Can such a product be considered "usable" except on a superficial level?
  • For a program with thousands of features, what's the best strategy for exposing those features in a GUI? Need features be hidden in some hierarchical manner, where the most-used features are easiest to get to, second-tier features are next-easiest to get to, and so on, until you reach the really obscure features, which are presumably hardest to drill down into? Or is that kind of model wrong? Should all features be treated equally? Should the user be in charge of exposing the features he or she wants to expose (and be able to choose how they're exposed)?
  • How does feature richness relate to user satisfaction and/or "perceived usability"? Is it all just a matter of good GUI design? What metrics can one use to measure usability? 
  • In analyzing a program's GUI, has anyone ever created a complete command-tree for all UI elements (down to individual dialog-control level), in some kind of graphical format, and overlaid a heat map on the tree to see where users spend the most time? (See the sketch after this list for what the underlying data might look like.)
  • Are current GUI motifs (menus, submenus, menu commands, dialogs and sub-dialogs, standard dialog controls, wizards, palettes, toolbars with icon-based commands) adequate to meet the needs of today's users? How adequate? Can we even measure "how adequate" with meaningful metrics?
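On the command-tree question above: here is a minimal sketch of what such instrumented data might look like. The menu paths and click counts are entirely hypothetical, not data from any real program.

```python
from collections import Counter

# Hypothetical click log: each entry is the full menu path to a UI element.
click_log = [
    "File/Save", "File/Save", "File/Save", "File/Save",
    "Edit/Copy", "Edit/Copy",
    "File/Export/PDF",
    "Format/Paragraph/Line Spacing",
]

leaf_heat = Counter(click_log)       # raw usage counts per command

# Roll counts up the tree so parent menus accumulate heat too.
tree_heat = Counter()
for path, n in leaf_heat.items():
    parts = path.split("/")
    for depth in range(1, len(parts) + 1):
        tree_heat["/".join(parts[:depth])] += n

# A crude textual "heat map" of the command tree, hottest nodes first.
for node, n in sorted(tree_heat.items(), key=lambda kv: (-kv[1], kv[0])):
    print(f"{n:3d}  {node}")
```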
It seems to me that most of the original thinking on these sorts of matters was done thirty years ago or so, with the advent of Apple's Lisa and Macintosh computers (plus work done before that at Xerox PARC); and we've been stuck in the world of GUI 1.0 ever since.

So that brings up yet another question: Is anyone working on GUI 2.0? (If so, who?) I would put touchscreen gestures in the GUI 2.0 category. (Is there anything else that belongs in that category?)

It seems to me software companies (including companies that develop web apps) should be concerned with all these sorts of questions.

I get the impression (based on the amount and quality of GUI design work that went into things like the iPhone, iPad, and iPod Touch) that Apple does, in fact, take these sorts of questions seriously. But does anyone else?

I don't see much evidence of other software companies taking these questions seriously. Then again, maybe I'm just not paying attention. Or maybe I shouldn't be asking these questions in the first place. As I said at the outset, I'm not a human factors expert. I'm merely an end user.

Did risky mortgage lending cause the financial crisis?


No.

Or, at least, not by itself, it did not.

The financial crisis consisted of two things:

1. A liquidity crunch or bank run, in which financial institutions all wanted to sell their long-term assets in order to pay off short-term liabilities at the same time, but couldn't.

2. A solvency crisis, in which so many systemically important financial institutions had made bad bets that their simultaneous failure threatened the health of the financial sector itself.

Both of these things involved risky mortgage lending. But risky mortgage lending, by itself, was not sufficient to cause either one of these. That's why I say that risky mortgage lending didn't cause the crisis.

Why not? Because in an efficient financial market, risk is fine. Risk is OK. In an efficient financial market, risk is priced. In an efficient financial market, if I buy a risky asset from you, I pay you less money because of the fact that I agree to take on more risk. (Paying less money up front means a higher expected return. So higher returns are my compensation for taking on more risk.)
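Here's a toy pricing example of that compensation mechanism, with invented numbers:

```python
# A bond pays 100 if it performs and 60 if it defaults.
# All numbers are invented for illustration.
p_default = 0.10                   # believed default probability
required_return = 0.03             # return demanded for this much risk

expected_payoff = (1 - p_default) * 100 + p_default * 60   # = 96.0
price = expected_payoff / (1 + required_return)            # ~93.20

# Paying less up front is what generates the higher expected return:
expected_return = expected_payoff / price - 1              # = 0.03
print(round(price, 2), round(expected_return, 4))

# Mispricing: buyers who wrongly believe default is impossible would
# pay 100 / 1.03 = ~97.09 for an asset worth ~93.20 -- a latent
# solvency problem, realized the moment defaults start.
```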

What happened in the financial crisis was that risk was mispriced. People underestimated the risk of mortgages, mortgage-backed bonds, derivatives of mortgage-backed bonds (CDOs), insurance on mortgage-backed bonds (CDS), commercial paper of banks that owned a bunch of mortgage-backed bonds, repurchase agreements with banks that owned a bunch of mortgage-backed bonds, etc. Because they underestimated the risk of these things, they paid too much for them. This caused crisis (2), the solvency crisis, which hobbled our banks for years. And the realization of crisis (2) caused crisis (1).

If there had been less risky lending, would people have underestimated these risks as much as they did? Maybe, maybe not. But whether or not they would have, we know that if the financial system had worked like it should, then there would not have been such a systematic underpricing of risk.

So risky mortgage lending didn't cause the crisis. What (partially) caused the crisis was risky mortgage lending being mistaken for non-risky mortgage lending, by people who ought to have known better.

Which brings me to the Community Reinvestment Act. Via Tyler Cowen, here is an NBER working paper that shows that the CRA made banks take on riskier portfolios of mortgage loans than they otherwise would have.

Does this mean that the CRA contributed to the financial crisis? No. Because the CRA has existed since 1977, and the U.S. housing bubble only began in the 2000s. If three decades was not enough for the financial market to figure out that CRA mortgages are riskier than other mortgages, then the market is grossly inefficient, and crises will tend to form even with no CRA. And if in those three decades the market did figure out that CRA mortgages were riskier, then the CRA risk described in the paper was properly priced into all the MBS, CDOs, CDS, asset-backed commercial paper, etc.

In other words, the CRA may have caused more risky mortgages to be born, but the banks should have known that and hedged their bets accordingly. Either they did, and the root cause of the financial crisis was the mispricing of non-CRA risks (I suspect this is the case), or else they didn't, and thus the financial system was broken for a long, long time.

Either way, we shouldn't blame the CRA.


Update: Yes, I know there are reasons to doubt this study, and to think that the CRA didn't even cause much of the increase in risky lending. But the point of this post is: Even if it did, that doesn't mean it contributed to the financial crisis!

Is a bailout a tax?



I'm sure that by now, a bunch of people have piled on to this instantly notorious Casey Mulligan blog post. But let me add my voice to the chorus.

Mulligan's thesis is that because poverty rates didn't rise in the Great Recession (once you factor in government transfers), poor people now face an effective 100% marginal tax rate on their income; make one dollar more, if you're a poor person, and your government benefits go down $1. Here's Mulligan:

When measured to include taxes and government benefits, poverty did not rise between 2007 and 2011, and that shows why government policy is seriously off track... 
[W]hen someone loses $10,000 by not working, he should get some help from the government or from others in the forms of reduced taxes and enhanced benefits but still should bear a portion of that loss himself... 
If people with declining incomes found them entirely replaced by government help, that amounts to 100 percent taxation (providing more benefits as income falls is sometimes called “implicit taxation”)... 
Erasing incentives is not the way to a civilized society but rather to an impoverished one.
Casey Mulligan's general point - that the phase-out of government benefits as income rises is a form of implicit taxation - is a good one. But I don't agree with his conclusion about the Great Recession. Just because poverty rates didn't rise doesn't mean that the government imposed a 100% implicit tax rate.

Why not? Because individual incentives don't (necessarily) depend on aggregate outcomes.

Suppose that the government gave out cash to poor people in order to keep the poverty rate at or below 15%.  Would that make it impossible to become poor? No. Because you'd still have a chance of becoming one of the 15%. If you work less than the poor guy next door, it's possible that you'll fall into the 15% and he'll rise out of it. The aggregate poverty rate will stay the same, but now you'll be poor. In other words, there is still an individual incentive for people to work, even if the aggregate poverty level is held fixed.

Or take another, even simpler example. Suppose the poverty level is $10,000 per year. Suppose the government decided to hand every citizen exactly $10,000 per year (raised with an income tax on people making above the median income). The poverty rate would then be permanently fixed (at 0%), and yet for everyone in the lower part of the income distribution, the implicit marginal income tax rate would be unchanged from whatever it was before the policy.
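Here's a minimal sketch of the distinction, using stylized benefit schedules rather than any actual program rules: a dollar-for-dollar phase-out produces a 100 percent implicit marginal rate, while a universal lump-sum grant leaves the marginal rate untouched.

```python
def phase_out(earned, floor=10_000):
    """Stylized Mulligan-style schedule: benefits top income up to the
    floor, so each extra dollar earned below the floor costs a dollar
    of benefits."""
    return max(earned, floor)

def lump_sum(earned, grant=10_000):
    """Stylized universal grant: every citizen gets it regardless of
    earnings (financed by taxes on above-median earners)."""
    return earned + grant

def implicit_marginal_rate(schedule, earned, delta=1.0):
    """Share of an extra dollar of earnings that does NOT show up in
    disposable income."""
    return 1.0 - (schedule(earned + delta) - schedule(earned)) / delta

print(implicit_marginal_rate(phase_out, 5_000))   # 1.0 -> a 100% implicit tax
print(implicit_marginal_rate(lump_sum, 5_000))    # 0.0 -> incentives unchanged
```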

(Now, if poor people could somehow coordinate - if they could get together and say "Hey guys, let's all not work, and then the government will give us all bigger checks!" - then the government policy would indeed produce a 100% tax rate. But poor people can't coordinate like that in real life. And if somehow they did, the government could probably see them doing it, and change the policy to avoid getting ripped off.)

Note that in my example, the antipoverty programs are permanent. They are not temporary recession-fighting measures. My argument does not depend on the temporary nature of the incentive structure.

But in real life, the programs that Mulligan is talking about are temporary, and that actually makes my argument even stronger. Programs to keep the poverty rate constant during a recession are like bank bailouts - they are only likely to be used in a time of systemic crisis. If the overall economy is doing well, the government will be much more likely to allow the poverty rate to grow (this is basically what happened in the Bush years). And poor people know this. Since incentives depend on the future as well as the present, this means that we can't just look at what happened during the Great Recession in order to make conclusions about incentives.

In other words, I think Casey Mulligan makes two mistakes here: 1) He confuses individual incentives with aggregate outcomes, and 2) He assumes that poor people are not forward-looking.

Now, remember, Mulligan's more general point is still correct: The phase-out of antipoverty programs as income rises acts as an implicit marginal tax. But there is little we can deduce about the strength of this incentive just by looking at aggregate incomes during the Great Recession.

Confessions of a Twitter follow-slut

People are always asking me why I'm such a follow-slut on Twitter. They want to know what the heck good it is to be following 108,000 people. Am I insane? Is there any conceivable reason for following so many people?

My complete philosophy on this (and all the techniques I used to get to 100K followers) is laid out in detail in a 99-cent e-book, which I sincerely hope you'll enjoy. If you know what's good for you, you'll spring for it. But let me cut to the chase, in case you don't want to buy the book.

My basic M.O. is to follow as many interesting people as I can. Quite a few follow me back. Eventually I unfollow the heartless, unthinking losers who don't follow me back.

This results in me following an awful lot of interesting people, obviously. (Duh.) Can I interact with that many people? No. Only randomly. Can I really follow what people are posting? No, not one by one by one.

What you have to do is be smart enough to use Twitter's excellent List feature. Put various categories of Very Interesting People into various Lists, then check those streams regularly. I have lots of lists. Many of them private, some public. All excellent.

But I also check the main firehose. I do it all the time, actually, because with that many tweeps, I get to see tons and tons of curated links in my stream (and yes, the occasional bit of nonsensical stream-of-consciousness dreck), but the main thing is, I catch huge numbers of fascinating news stories and blog posts by sipping from the fire hydrant. If there's any kind of fast-breaking news story going on, I see it right away. A drone strike kills a child in Pakistan, I know about it. A Supreme Court justice happens to say something rational, I hear about it. A hair falls out of Donald Trump's whatever-it-is-that's-on-his-head, I know about it.

The life of a follow-slut is pretty good, actually.

So please. Don't judge me. Sluts are people too.


David Beckworth might be very wrong about the multiplier


Via Tyler Cowen, I see that David Beckworth is claiming that recent events prove that the "fiscal multiplier" is very low. In a post entitled "Paul Krugman will not like these figures", Beckworth writes:
See if you can figure out why [Krugman will not like these figures]:

This first figure shows that aggregate demand growth has not been affected by a tightening of fiscal policy since 2010.  Specifically, it shows that nominal GDP (NGDP) growth has been remarkably stable since about mid-2010 despite a contraction in federal government expenditures. The same story emerges if we look at the budget deficit relative to NGDP growth: 


 
Both figures seriously undermine the argument for countercyclical fiscal policy and suggest a very low fiscal multiplier.  They also indicate that the Fed has been doing a remarkable job keeping NGDP growth stable around 4.5%. Monetary policy, in other words, appears to be dominating fiscal policy in terms of stabilizing aggregate demand growth.
Beckworth's conclusion is not necessarily valid, and illustrates the danger in drawing conclusions about structural variables from looking at correlations between macroeconomic aggregates. Here's why the conclusion might not be valid:

Suppose that Keynesian demand management policy works perfectly: in other words, fiscal stimulus perfectly smooths fluctuations in aggregate demand. In that case, you will observe substantial swings in fiscal policy, but no swings whatsoever in aggregate demand. When external shocks push AD up, fiscal tightening will push it back down; when external shocks push AD down, fiscal policy will push it back up.

Beckworth's graphs give no measure of external shocks; hence, they are perfectly consistent with the idea that fiscal stimulus was allowed to wind down as the economy naturally recovered. (Tyler points this out.)

Check out Nick Rowe for an exploration of this idea in much greater depth. Using these graphs to conclude that fiscal policy is ineffective is like saying "Hey, no matter how much power my neighbor's heating system puts out from day to day, his room stays the same temperature; his heater must be useless!" It might be true. Or it might be the exact opposite of true.
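Here is a minimal simulation of the thermostat point, under made-up parameters: set the multiplier to exactly one, let the stabilizer offset every observed shock, and the resulting data reproduce Beckworth's pattern of volatile deficits alongside smooth NGDP growth.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 200
multiplier = 1.0                     # assume fiscal policy is fully effective

shocks = rng.normal(0.0, 2.0, T)     # external AD shocks (pct points)
deficit = -shocks / multiplier       # the stabilizer offsets each shock exactly
noise = rng.normal(0.0, 0.1, T)      # small residual shocks policy can't catch
ngdp_growth = 4.5 + shocks + multiplier * deficit + noise

print(np.std(deficit))                            # deficits swing a lot...
print(np.std(ngdp_growth))                        # ...while NGDP growth is smooth
print(np.corrcoef(deficit, ngdp_growth)[0, 1])    # ~0: multiplier invisible
```

A regression of NGDP growth on the deficit in this world would estimate a multiplier near zero, even though the true multiplier is one, because the deficit is endogenous to the shocks.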

Also, note that if fiscal policy is effective (i.e. if the multiplier is high), then aggregate demand will depend not just on current deficits but on expectations of the response of deficits to future external AD shocks. This is a central tenet of the "market monetarism" that Beckworth espouses, but there's no reason that forward-looking expectations can't be applied to fiscal policy as well as monetary policy.

To conclude: The graphs Beckworth shows are perfectly consistent with a large fiscal multiplier. In fact, they are perfectly consistent with the hypothesis that monetary policy is essentially ineffective, that the Fed is basically powerless, and that fiscal policy is capable of doing a perfect job of smoothing NGDP growth all on its own.

Now, I'm not saying Beckworth is wrong, and that the multiplier is big. I'm saying that the graphs he shows do not tell us much about the size of the multiplier. We should always beware of drawing conclusions about structural variables from looking at correlations between macroeconomic aggregates.


Update: David Beckworth responds (in the comments, and in an update to his post):
Noah, the thermostat assumes policy can respond in real time to the shocks. Do you really think fiscal policy is nimble enough to do this? It is reasonable to claim monetary policy can given its flexibility, but it's a stretch with fiscal policy (unless there are large automatic stabilizers built into place). 
Well, it's possible that there just weren't any big shocks in recent years...stimulus might have just calmly wound down as planned, while the economy slowly recovered as expected. Also, remember expectations. Expectations are certainly nimble enough to respond quickly to any shock.
A bigger problem with your alternative story is that it doesn't fit the facts. Fiscal policy has been tightening despite the spate of negative economic shocks over the past few years: the Eurozone crisis, the debt ceiling talks in 2011, the China slowdown, which have kept the U.S. economy from having a robust recovery.
Well, these are certainly putative negative shocks. They seem like things that might have been shocks. But do we know that they really were shocks? The Eurozone crisis was resolved, the debt ceiling talks resulted in compromise, and China's slowdown was not really that big of a deal. Maybe in reality this was small potatoes compared to the basic restorative forces pushing us toward a slow but steady recovery after the Great Recession.
Paul Krugman has made this abundantly clear in many pieces where he repeatedly laments the lack of adequate fiscal policy. Surely he wouldn't be saying this if he thought along the lines of your thermostat example?
Ah, but what does Krugman consider a satisfactory outcome? The "recovery" from the Great Recession has seen pre-crisis RGDP (and NGDP) growth rates restored, but at a lower level than the pre-crisis trend; Krugman might just be dissatisfied with this outcome, even if it was the outcome produced by fiscal policy. Remember, Krugman doesn't set fiscal policy, he just talks about it a lot.

I continue to conclude that the notion that fiscal policy is ineffective is not supported by these graphs.


Update 2: Scott Sumner weighs in:
I was asked to comment on Noah Smith’s recent critique of David Beckworth’s post on the fiscal multiplier.  I basically agree with Noah, and would simply add that his arguments also suggest that the standard arguments in favor of the effectiveness of fiscal stimulus are also mostly flawed, in basically the same way that he claims Beckworth’s arguments are flawed (ignoring expectations channels, etc.)  When it comes to fiscal stimulus it’s all about faith—the data tell us almost nothing.
The whole world seems to be converging on the idea that macro data doesn't really reveal the true workings of the macroeconomy...I personally think we can do much better, data-wise, than some simple graphs of macroeconomic aggregates, but it is true that any empirical study of the fiscal multiplier (or the money multiplier, or any such policy effect) is going to have to rely on a theoretical model which itself can't easily be verified with data...


Update 3: This blogger put up some similar graphs comparing monetary aggregates and NGDP. Guess what? Monetary aggregates jump all over the place, NGDP just sails along. Exactly like deficits and NGDP in Beckworth's graphs. There's a lesson here...


Update 4: Paul Krugman weighs in, agreeing about the graphs in question, but saying that I'm too "nihilistic" about how much we can learn from macro data. But this, dear readers, is a topic for another post...