Going on a Software-Design Feature Fast

I advocate that software makers take a hard look at why and how so many features have made their way into their products. The process by which non-core functionality enters a product is more important (obviously) than the sheer number of features.

Software makers should also reevaluate the process by which a feature becomes "required" and what it means for a feature to be "required."

I've been in tech for decades, and I've never yet encountered a software product that didn't contain at least one totally useless feature, a feature no one ever uses: the equivalent of the Scroll Lock key on a modern keyboard. The important point to note is that all software features, even the most obscure and/or useless ones, got into the product as a result of somebody's "requirement."

I propose that software makers go on a "feature fast" until the feature-addition process is not only well understood but re-imagined. (Let Marketing be a stakeholder in this process, but let it be only one of many stakeholders. Not the majority stakeholder.)

Until then, I offer the following exercises for purveyors of commercial software:

1. Implement in situ analytics (inside-the-app analytics) so that you can understand how users are spending their time when they work with the product. (A minimal sketch of what such instrumentation might look like appears after this list.)

2. Find out (via built-in analytics) what the least-used feature of your product is. Get rid of it.

3. Repeat No. 2 for another 100 features. Replace them with API methods and helpful tooling (an SDK). Charge no money for the SDK.

4. Have you ever added an obscure feature because an important customer asked for it? If so, consider the following: Did you make the sale? Did the sale of the product actually hinge on that one feature? (Hopefully not. Hopefully the product's core functionality and reputation for excellence made the sale.) Five years later, is that customer still with you? Are they still using the feature? If not, why are you continuing to code-maintain, regression-test, document, and tech-support a one-off feature that's no longer needed?

5. Of all the UI elements that are in the user's face by default, find which ones are least-used. Of all the UI elements that are not readily visible, find those that are most-used. Consider ways to swap the two.

6. Try to determine how many features are in your product (develop your own methodology for this), then determine how many features are used by what percentage of customers. (When you have that data, visualize it in more than one way, graphically.) When you're done, ask yourself if you wouldn't be better off, from a resource allocation standpoint, if you stopped working on at-the-margin features and reinvested those dollars in making core features even more outstanding.

7. Obtain (via real-time analytics) a profile of a given user's favorite (or most-used) features and preemptively load those into memory, for that particular user, at startup time. Lazily load everything else, and in any case, don't single-task the entire loading process (and make the user stare at a splash screen). The preferential loading of modules according to a user-specific profile is essentially the equivalent of doing a custom build of the product on a per-customer basis, based on demonstrated customer needs. Isn't this what you should be aiming for? (A sketch of this lazy, profile-driven loading appears after this list.)

8. Find out the extent to which customers are using your product under duress, and why. In other words, if your product is Microsoft Word, and you have customers who are still doing a certain amount of text editing in a lesser product (such as Wordpad), find out how many customers are doing that and why. Address the problem.
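
For item No. 1 above (and the analytics-driven items that follow it), here is a minimal sketch of what in-app feature instrumentation might look like in Python. Everything in it is an assumption made for illustration: the decorator-based counting, the feature name, and the idea of flushing counts to a local log file. A real product would also need user consent, anonymization, and a way to ship the data home.

    import json
    from collections import Counter
    from functools import wraps

    feature_counts = Counter()

    def tracked(feature_name):
        """Count each invocation of a feature's entry point."""
        def decorate(func):
            @wraps(func)
            def wrapper(*args, **kwargs):
                feature_counts[feature_name] += 1
                return func(*args, **kwargs)
            return wrapper
        return decorate

    @tracked("word_count")            # hypothetical feature name
    def show_word_count(document_text):
        return len(document_text.split())

    def flush_usage_log(path="usage_log.json"):
        """Persist the counts so the least-used features can be found later."""
        with open(path, "w") as f:
            json.dump(dict(feature_counts), f, indent=2)

With counts like these aggregated across users, finding the least-used feature in item No. 2 is just a sort.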
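
And for item No. 7, a minimal sketch of profile-driven, lazy module loading, again with everything (the profile format, the module names, the hypothetical "features" package) assumed purely for illustration. The point is only the shape of the idea: modules a given user demonstrably relies on are loaded eagerly in the background at startup, and everything else is deferred until first use.

    import importlib
    import threading

    # Hypothetical per-user profile, e.g. derived from usage counts like those above.
    user_profile = {"favorites": ["spellcheck", "styles"], "rarely_used": ["mail_merge"]}

    _loaded = {}
    _lock = threading.Lock()

    def load_module(name):
        """Import a feature module once and cache it ("features" is a hypothetical package)."""
        with _lock:
            if name not in _loaded:
                _loaded[name] = importlib.import_module(f"features.{name}")
        return _loaded[name]

    def preload_favorites(profile):
        """Eagerly load the user's most-used modules in the background at startup."""
        for name in profile["favorites"]:
            threading.Thread(target=load_module, args=(name,), daemon=True).start()

    def get_feature(name):
        """Everything else is loaded lazily, on first use."""
        return load_module(name)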

In tomorrow's post, I'm going to list some favorite software-design mantras that all people involved in building, testing, documenting, supporting, or marketing software products can (I hope) learn something from. Don't miss it.




GUI Surface Area and Its Implications

I've been talking a lot about feature richness as if it's a measure of product complexity, which it might not be. What I care about, in any case, is not feature count per se, nor complexity per se, but a product's perceived utility and ease of use.

For matters involving "feature count," it may actually be more useful to talk about total GUI surface area. After all, features often equate (in at least a rough sense) to clicks on controls of various sorts: push buttons, radio buttons, checkboxes, menu selections, color pickers, calendar controls, etc. In some sense, feature count and GUI surface area go hand in hand.

How to calculate GUI surface area? Dialogs (and other UI elements) tend to grow in proportion to an app's functionality, so why not just add up the actual screen real estate consumed by all the dialogs, toolbars, palettes, tabs, and menus in the product (in square pixels), and call that the UI's surface area?
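
As a rough illustration of what such a measurement could look like, here is a minimal sketch for a Tkinter application. It is a starting point, not a definition: it counts only leaf widgets (to avoid counting a container and its children twice), it measures only what is instantiated and laid out at the moment it runs, and it says nothing about dialogs that haven't been opened yet.

    import tkinter as tk

    def gui_surface_area(widget):
        """Sum width * height (in square pixels) over the leaf widgets of a subtree."""
        children = widget.winfo_children()
        if not children:
            return widget.winfo_width() * widget.winfo_height()
        return sum(gui_surface_area(child) for child in children)

    # Usage, after the main window has been laid out:
    #   root.update_idletasks()
    #   print(gui_surface_area(root), "square pixels of visible controls")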


I offer, without further proof, the conjecture that a program's perceived complexity is related in some suitably subtle way to its total GUI surface area.

I also contend that the bigger a product's total GUI surface area, the smaller the user is made to feel.

Moreover, if a product's total functional surface area far exceeds a customer's actual use-case requirements, an unavoidable impression of waste is conveyed. More than that, the customer might very well infer that the product came from a culture of waste, an engineering culture that doesn't value efficiency. That's a devastating assumption to let take root in a customer's mind.

Do you really want a customer to feel he has paid good money for unnecessary functionality? Ever?

If you know that eighty percent of customers will only ever use twenty percent of the software's features, do you really want to brag, in your marketing, about the extravagant excess of functionality in your product?

Isn't it more important to be able to emphasize the inarguably superior nature of the product's core functionality?

Shouldn't non-core functionality be non-core to marketing dialogs until the customer demands otherwise?

In tomorrow's post, I'll offer constructive suggestions for software makers: ideas that can actually be implemented and tested. Why argue about ideas when you can test them?

Macro, what have you done for me lately?


Paul Krugman says the state of macroeconomics is rotten. Steve Williamson disagrees. With apologies, I'll cut-and-paste most of Steve's post:
This is actually a relatively tranquil time in the field of macroeconomics. Most of us now speak the same language, and communication is good. I don't see the kind of animosity in the profession that existed, for example, between James Tobin and Milton Friedman in the 1960s, or between the Minnesota school and everyone else in the 1970s and early 1980s. People disagree about issues and science, of course, and they spend their time in seminar rooms and at conferences getting pretty heated about economics. But I think the level of mutual respect is actually relatively high... 
Since the 1970s, it is hard to identify a field called macroeconomics. People who call themselves macroeconomists have adopted ideas from game theory, mechanism design, general equilibrium theory, finance, information economics, etc. to study problems of interest to policymakers and the public at large. Sometimes it's hard to tell a macroeconomist from a labor economist, from someone working on industrial organization problems. What are "freshwater" and "saltwater" macro? No idea...
The truth is that we have all moved on from the macro world of the 1970s. Methods that seemed revolutionary in 1972 are the methods everyone in the profession uses now. The nerds who had trouble getting their papers published in 1972 went on to run journals and professional organizations, and to win Nobel prizes. This isn't some "cult that has taken over half the field," it's the whole ball of wax... (emphasis mine)

Economic science does an excellent job of displacing bad ideas with good ones. It's happening every day. For every person who places obstacles in the way of good science to protect his or her turf, there are five more who are willing to publish innovative papers in good journals, and to promote revolutionary ideas that might be destructive for the powers-that-be. The state of macro is sound - not that we have solved all the problems in the world, or don't need a good revolution.
On the question of whether macro is divided into warring "schools", I'd say Williamson is 85% right, and Krugman only 15%. Yes, there are some systematic disagreements, but they're not very bitter or rancorous, and everyone uses mostly the same "language". The old "freshwater"/"saltwater" distinction is still there to a limited degree among older faculty, but younger faculty don't seem to see much of a dividing line. (Krugman even hints at this when he mentions the fact that "saltwater" and "freshwater" models now look very much alike.) In the blog world you see lots of name-calling and rhetorical fireworks and serious consideration of "fringe" ideas, but in the academic world of conferences and seminars and journals, there seems to be almost none of that.

So if collegiality and similarity of technique are measures of a field's health, then macro is doing quite well. But I feel like there's a larger question: What has macro done for the human race in the last 40 years? How are we better off as a result of all this macro research effort?

(This is the more general version of the question asked by the Queen of England, when she asked why (macro)economists didn't see the financial crisis coming. Sure, some people argue that "financial crises" are inherently unpredictable because of the EMH. But there's no obvious reason why recessions shouldn't be predictable in advance.)

Today, in 2012, do we know much more about the "shocks" that cause recessions than we knew in 1972? I'm not sure we do. The question of whether these shocks are mainly "real" or mainly "monetary" is not settled within the field (as Bob Lucas mentioned in a recent interview). Nor do we seem to know much about how the shocks actually work - usually, macroeconomists just assume the shocks follow a simple random process like an AR(1).
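
For readers who haven't seen one, the AR(1) assumption mentioned above is about as simple as a stochastic process gets. A minimal sketch, with purely illustrative parameter values rather than estimates:

    import numpy as np

    def simulate_ar1(T=200, rho=0.9, sigma=0.01, seed=0):
        """Simulate z_t = rho * z_{t-1} + eps_t, with eps_t ~ N(0, sigma^2)."""
        rng = np.random.default_rng(seed)
        z = np.zeros(T)
        for t in range(1, T):
            z[t] = rho * z[t - 1] + rng.normal(0.0, sigma)
        return z

    # The implied response to a one-time shock is just rho**h at horizon h,
    # which is part of why the assumption is popular: it is easy, not realistic.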

What this means is that the actual cause of recessions is basically still one huge mystery.

What about the question of how the economy responds to the shocks? Even if we don't know much about the cause(s) of recessions, do we understand how recessions play out? I'm not sure we do. We have some empirical observations, of course - we know how much investment tends to vary with swings in GDP, etc. But in terms of impulse responses - i.e. the way the economy would move if all the noise were cleared out of the data - we have as many different guesses as we have macro theory papers. And macro theory papers are as numberless as the stars in the night sky. 

What about the question of policy? Do we know how governments can damp out the swings in the business cycle? Here there seems to be very little agreement. Even if macro isn't divided into warring camps shouting at each other, there is nevertheless a huge diversity of opinion on both the efficacy and the proper conduct of monetary policy, fiscal policy, and other recession-fighting measures. That there is no consensus means that the question is still unanswered. Useful technology has not been delivered. It doesn't take an expert to realize that fact. (Update: Well, actually not quite. Constantine Alexandrakis comes up with a couple of important counterexamples; see below.)

So macro has not yet discovered what causes recessions, nor come anywhere close to reaching a consensus on how (or even if) we should fight them.

(As an aside, modern macro models - at least, the DSGE variety - are basically not regarded as useful by private industry, although time-series methods developed for macro, such as vector autoregressions, have seen wide application.)
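
As an aside to the aside: part of what makes VARs so widely used is that they are trivial to estimate with standard tools. A minimal sketch using statsmodels, where the data file and its contents are placeholders for whatever stationary macro series you actually load:

    import pandas as pd
    from statsmodels.tsa.api import VAR

    # Assumed to hold stationary series, e.g. GDP growth and inflation.
    df = pd.read_csv("macro_data.csv", index_col=0, parse_dates=True)  # hypothetical file

    model = VAR(df)
    results = model.fit(maxlags=4, ic="aic")   # choose lag length by AIC
    irf = results.irf(10)                      # impulse responses out to 10 periods
    irf.plot(orth=True)                        # orthogonalized (Cholesky) responses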

Given this state of affairs, can we conclude that the state of macro is good? Is a field successful as long as its members aren't divided into warring camps? Or should we require a science to give us actual answers? And if we conclude that a science isn't giving us actual answers, what do we, the people outside the field, do? Do we demand that the people currently working in the field start producing results pronto, threatening to replace them with people who are currently relegated to the fringe? Do we keep supporting the field with money and acclaim, in the hope that we're currently only in an interim stage, and that real answers will emerge soon enough? Do we simply conclude that the field isn't as fruitful an area of inquiry as we thought, and quietly defund it?

And who do we call in to evaluate the health of a science? If we only rely on insiders to evaluate their own field, we are certain to run afoul of vested interest (If someone asked you "How valuable is the work you do", what would you say?). But outsiders often lack the relevant expertise to judge someone else's field.

It's a thorny problem. And (warning: ominous, vague brooding ahead!) it seems to be cropping up in a number of fields these days, from string theory in physics to "critical theory" in literature departments. Are we hitting the limits of Big Science? Dum dum DUMMMM...But no, this question leads us too far afield...


Update: Mark Thoma and Simon Wren-Lewis weigh in on the original Krugman-Williamson debate.

Update 2: Simon Wren-Lewis drops by in the comments to say that yes, macro did produce a policy consensus (basically interest rate targeting by the Fed, with a Taylor Rule type objective function balancing growth stability and price stability), and yes, that policy consensus did help the world, by giving us the Great Moderation, which wasn't perfect but was better than what came before. I guess that's a fair point. But I'd point out that:

A) the New Keynesian models to which Simon is referring were made after the 70s inflation and early-80s Volcker disinflations, and the Great Moderation policies might have been mostly inspired by the Volcker episode rather than by the models that came after it;

B) the New Keynesian "consensus" was always rather fragile, accepted by central banks but pooh-poohed by the academics that Krugman calls "freshwater" macroeconomists; and

C) Taylor-type rules were originally estimated as Fed reaction functions, describing Fed behavior rather than prescribing it (later they became prescriptive when added to Woodford's New Keynesian models), so it seems that the Fed may have been behaving in the way it did in the Great Moderation well before New Keynesian models emerged to endorse the policy.

So I am still very skeptical of the notion that modern macro models motivated the Great Moderation policies rather than just describing them after the fact...But I guess I could be convinced.
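
For concreteness, the "Taylor-type rule" in question is just a formula mapping inflation and the output gap into a policy rate. Here is a minimal sketch using Taylor's original 1993 coefficients (the 0.5 weights and the 2% values are his illustrative numbers, not anything the Fed has committed to):

    def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0,
                    a_pi=0.5, a_y=0.5):
        """Taylor (1993): i = r* + pi + a_pi*(pi - pi*) + a_y*(output gap), in percent."""
        return r_star + inflation + a_pi * (inflation - pi_star) + a_y * output_gap

    # Descriptive use: estimate a_pi and a_y by regressing the actual federal funds
    # rate on inflation and the output gap, treating the rule as a "reaction function."
    # Prescriptive use (as in Woodford-style New Keynesian models): impose the rule
    # inside the model as what the central bank should do.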

Update 3: In the comments, Costas Alexandrakis (CA) points out that the development of the ideas of adaptive expectations (Friedman) and rational expectations (Lucas) helped put an end to the mistaken idea that the Phillips Curve represented a stable, usable policy tradeoff between inflation and unemployment. This is a good point. Exposing the flaws in bad economic policy is just as important as suggesting good economic policy. So this is definitely an exception to my sweeping statement that macro hasn't given us much in the way of policy advice. Also, Costas points out that Lucas' idea of rational expectations helped motivate the Fed's practice of "forward guidance". This I'm not so sure has been effective, but it is definitely another interesting counterexample.

Update 4: Brad DeLong thinks I am using the phrase "techniques of modern macroeconomics" too loosely, and lists a number of questionable Steve Williamson quotes that he thinks should be inconsistent with the true "techniques" of macro. But I think that that doesn't negate the ubiquity of DSGE modeling in modern macro journals and conferences. Steve Williamson certainly does use DSGE in his theory papers, and he certainly does get published...

Update 5: Paul Krugman also says that just because everyone in (mainstream) academia uses DSGE doesn't mean that they're really using the same "techniques"; the content of the models makes all the difference.

Update 6: Scott Sumner comments. He disagrees with my observation that the freshwater/saltwater conflict has waned. His answer to the question of "what causes recessions" is "aggregate demand shocks", but this doesn't really answer the question of what is causing those shocks (of course, Scott Sumner thinks NGDP targeting can perfectly cancel out any aggregate demand shocks, so maybe he doesn't even care where the shocks come from). He agrees that modern macro methods (DSGE models) have given us no useful technology of any kind. And he endorses replacing the current mainstream macro profession with "Market Monetarists" like himself and David Beckworth.

Update 7: Brad DeLong weighs in. He agrees that academic macro hasn't produced useful models (actually I myself wouldn't go quite that far; I think academic macro simply hasn't produced models that we can know are useful in time to use them). He says that economists in the public sphere have been less than helpful since 2008 primarily because of political pressure from the conservative movement.

New article in the Huffington Post: Why conservatives shouldn't fear the fiscal cliff



I have a new article up at the Huffington Post. Basic idea: Conservatives tend to believe in the power of forward-looking expectations. That will tend to reduce the impact of the fiscal cliff - yes, even the distortionary part - because if people are forward-looking, they will have been expecting taxes to go up ever since Reagan raised deficits in the 80s, and this will dramatically reduce the impact of the cliff. Excerpt:

[A]ccording to a central tenet of conservative economics, the fiscal cliff is not going to be a big deal. I'm talking about the principle known as "Ricardian equivalence."... 
Ricardian equivalence has become a pillar of conservative thinking about economic policy. When President Obama was preparing the 2009 "stimulus" bill, a number of economists -- including Robert Barro himself -- took to the editorial pages to vigorously protest. Deficit spending, they argued, couldn't boost the economy, because people would expect future taxes to pay for today's spending, and would cut back accordingly, exactly canceling out the "stimulus." 
By the same logic, conservatives shouldn't be worrying about the fiscal cliff. Yes, taxes will go up if we go over the cliff. But according to Ricardian equivalence, people have known all along that this would have to happen at some point, and they have been planning accordingly... 
In other words, the fight over the fiscal cliff might just be an elaborate form of political kabuki theater. Conservatives, if they believe their own economic doctrine, are probably not actually losing any sleep.
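
To make the Ricardian arithmetic concrete, here is a toy two-period sketch with made-up numbers: a consumption-smoothing household that expects today's deficit to be repaid with interest tomorrow does not change its spending when taxes are cut today.

    def consumption_today(income, taxes, r=0.05):
        """Two-period consumer who equalizes consumption across periods,
        subject to a present-value budget constraint."""
        pv_resources = (income[0] - taxes[0]) + (income[1] - taxes[1]) / (1 + r)
        return pv_resources / (1 + 1 / (1 + r))   # consumption in period 0 (= period 1)

    income = [100.0, 100.0]
    baseline = consumption_today(income, taxes=[20.0, 20.0])
    # A 10-unit tax cut today, financed by debt repaid with interest tomorrow:
    deficit = consumption_today(income, taxes=[10.0, 20.0 + 10.0 * 1.05])
    print(baseline, deficit)   # identical: the household saves the entire tax cut
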
Read the whole thing here!

Modal GUI Elements Are Creativity Sappers

Why does OpenOffice Writer force me to visit a modal dialog to adjust header properties? Why can't I right-click inside the actual header, on-the-page, to make these adjustments? A modal UI takes me away from my work.

Lately I've been trying to rethink the assumptions behind user interfaces, particularly the UIs of "creativity-oriented" applications.

One exercise I've found useful is to take notice, as you work with your favorite application(s), of how much time you spend working with dialogs, menus, palettes, etc. versus how much time you spend working on the document itself at a low level.

Any non-trivial GUI-driven application has at least two different levels of GUI. There's a high-level interface and a low-level interface. In a word processor, low-level operations (and corresponding interfaces) are ones that have you operating directly on text with the keyboard and/or mouse, without the aid of dialogs. So for example: entering new text, selecting portions of text, copying and pasting text, applying fonts, applying styles to fonts (italic, bold, etc.), scrolling, deleting text, and using Undo: all of these are core low-level operations. The UI for these operations doesn't take you away from your work.

In an image editor, low-level operations are generally ones that involve dragging the mouse on the canvas. When you are doing things like selecting a portion of an image, transforming an active selection (via shear, rotation, scaling), or drawing shapes by hand, you're operating directly on the canvas with mouse drags.

An app's high-level GUI consists of anything that has to be done in a modal dialog, a menu system, a wizard, or anything else that doesn't directly involve a low-level operation.

Here's the important point. Anything that takes you away from the low-level interface (for example, any operation that takes you immediately to a modal dialog) is taking you away from your work. It's an interruption to the workflow and an impediment to getting work done, not because such diversions steal precious time, but because they steal from you something far more precious than time: namely, creativity.

Modal GUI elements interrupt a user's concentration and interfere with inspiration. This is a serious issue if your customers are creative individuals working in a creativity-oriented application.

If you look at how Adobe Photoshop has evolved from Version 1.0 to the present day, one of the most noticeable changes is in how many non-modal GUI elements have appeared in the workspace (and how easy it is for the user to choose which elements appear, via the Window menu). That's because non-modal elements like tool palettes and layer pickers are nowhere near as disruptive as modal elements. They let you stay "close to the work."

An application like Adobe After Effects makes the point even clearer. Here, you have a program in which an immense number of features have been realized in non-modal GUI elements. It's an important issue, because when you're doing something as complex (and creative) as offline video editing, you can't afford to have your creativity interrupted by frequent detours into modal dialogs.

Some "creativity" programs go the wrong way and implement the majority of GUI elements in modal (rather than non-modal) fashion by default. An example is OpenOffice. To do something as trivial as view a document's word count in OpenOffice Writer means making a detour to a modal dialog.

What's the main takeaway? Modal UI elements (dialogs, menus and sub-menus, wizards) take the user further from the work document. And that's always a bad thing. It's time-wasteful and saps creativity. Non-modal interfaces keep the user close to the content, at the risk of UI clutter. (The answer to the clutter problem is to put the user in charge of how much real estate to devote to non-modal UI elements at any given time.)
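
To make the word-count example concrete, here is a minimal Tkinter sketch of the two approaches: a non-modal status label that updates as you type, versus a modal dialog that interrupts you until it is dismissed. It is an illustration of the design distinction only, not a claim about how any particular word processor is built.

    import tkinter as tk
    from tkinter import messagebox

    root = tk.Tk()
    text = tk.Text(root)
    text.pack(fill="both", expand=True)

    # Non-modal: a status label that updates as you type, never leaving the document.
    status = tk.Label(root, anchor="w")
    status.pack(fill="x")

    def update_count(event=None):
        words = len(text.get("1.0", "end-1c").split())
        status.config(text=f"Words: {words}")

    text.bind("<KeyRelease>", update_count)

    # Modal: a dialog that interrupts the work until it is dismissed.
    def show_count_dialog():
        words = len(text.get("1.0", "end-1c").split())
        messagebox.showinfo("Word Count", f"Words: {words}")

    tk.Button(root, text="Word Count...", command=show_count_dialog).pack()
    root.mainloop()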

In tomorrow's post, I'll talk about GUI surface area and what its implications are for usability.

More Thoughts on Feature Richness

A classic example of rampant feature excess and poor UI design: Eclipse.

As you know if you've been following my previous posts, I've been thinking a lot, lately, about feature richness in software. What does it mean, to the user, to have a feature-rich product? When are additional features really needed? Is it possible to have too many features? Is there a "sweet spot" for feature richness (or a point of diminishing return)? Is it possible to build too many features into a product, or is that question best recast in terms of how features are exposed via the UI (or perhaps the API)?

Fair warning: I offer many questions, but few answers.

My gut tells me that more often than not, feature richness is a curse, not a blessing; an embarrassment, not something for Marketing to be proud of.

When a customer sits down to use a product and he or she notices an excess of functionality, it conveys an impression of waste. It suggests that the maker of the product willingly tolerates excess and doesn't understand the less-is-more aesthetic.

From a purely economic point of view, a customer who sees an excess of functionality wonders why he is being forced to spend money on things he might never need.

The customer might also get the impression (without even trying the product!) that the product is hard to learn.

For these and other reasons, people involved in software design should be asking themselves not how feature-rich a product should be, but how feature-spare.

Does anybody, at this point, seriously question that it is more important for an app to do a small number of mission-critical things in a superior fashion than to do a larger number of non-mission-critical things in acceptable fashion?

I have two word processing applications that I use a lot. One is OpenOffice Writer; the other is Wordpad. The former is Battlestar Galactica, the latter Sputnik. Ironically, I often find myself using Wordpad even though I have a much more capable word processor at my disposal. I use Wordpad to capture side-thoughts and sudden inspirations that I know I'll come back to later, when I'm further along in the main document. These side-thoughts and incidental epiphanies are sometimes the most creative parts of whatever it is I'm writing.

It's ironic that I will use the most primitive tool available (preferentially, over a much more powerful tool) when capturing my most creative output.

I don't think I'm alone in this. I'm sure a lot of programmers, for example, have had the experience of writing a Java class or JavaScript routine in Notepad (or the equivalent) first, only to Copy/Paste the code into Eclipse or some other heavyweight IDE later.

Why is this? Why turn to a primitive application first, when capturing fresh ideas and "inspired" content?

Speaking for myself, part of it is that when I'm having a peak creative moment, I don't have time to sit through the ten to thirty seconds it might take to load OpenOffice, Eclipse, or Photoshop. Creative moments have a very short shelf life. Any speed bumps on the way to implementing a new idea are creativity-killers. An app that loads in one second or less (as Wordpad does) is priceless.

But that's not the whole explanation, because quite often I'll turn to Wordpad even when OpenOffice Writer is already running!

I think that's because when I'm having a peak-creative moment, I don't want any distractions. I want to work close to the document, with no extraneous features distracting me or slowing me down in any way whatsoever. I don't want to be tempted to choose a different font, reset margins and tabs, redo paragraph formatting, worry about spellcheck, etc., while I'm capturing my most important thoughts. Just knowing that extraneous features exist slows me down.

Also, I find I often need more than one clipboard instance. I'll often open multiple Wordpad windows just to cache certain text passages until I can figure out whether or not I want to use them (and in what order) in my main document.

I'm sure there are other, deeper reasons why I turn to lightweight programs before using supposedly "superior" heavyweight alternatives. The fact that I can't articulate all the reasons tells me the reasons probably run quite deep. Software makers, take note.

I'll say it again: an application that has a large excess of features is a liability, both to the customer and the company that makes the software.

The larger the number of things an app is capable of doing, the more likely it is the user will:
  • be frustrated with program load time
  • feel intimidated by the product
  • need to consult documentation
  • call the help desk
  • spend money on third-party books and training
  • forget how certain features work (and spend time re-learning how to use those features)
  • feel pain at upgrade time, when menus and palettes and dialogs and prefs and workflows are "improved" over the last version of the software, requiring yet more (re)learning

Bottom line, when it comes to feature richness, more is not better. More is less. Sometimes a lot less.



The Omnipotent Fed idea



Since the Fed started its new policy of "QE infinity" (which it stepped up on Wednesday), acclaim has been heaped upon the economists who have promoted a policy of NGDP targeting (or "NGDP level path targeting"), which bears some resemblance to "QE infinity". Chief among these economists is Scott Sumner, who promotes his ideas mainly through his blog; Sumner was recently named one of Foreign Policy Magazine's top 100 global thinkers, and economics pundits from Tyler Cowen to Matt Yglesias have credited Sumner as being the intellectual force behind the Fed's new policy. However, Scott is far from a solitary crusader; he has been assisted by David Beckworth, Ryan Avent, Andy Harless, Steve Randy Waldman, Joe Weisenthal, Evan Soltas, and a number of other bloggers and pundits. Additionally, my own graduate advisor, Miles Kimball, has promoted similar ideas on his blog and in his academic work.

I generally support the idea of an activist Fed, unconventional monetary policy, etc. However, I do have a misgiving about a key element of the case made by the aforementioned crop of monetarists. This is the notion of an "Omnipotent Fed"...by which I mean not that the Fed can create stars and galaxies, but that the Fed can set NGDP to be whatever it wants. If this assumption is wrong, NGDP targeting (or similar policies) may simply not work.

To some, the proposition that the Fed can hit any NGDP target seems self-evident. NGDP is just real GDP multiplied by the price level; if the Fed perfectly controls the price level, and either A) knows the relationship between the price level and output, or B) can change the price level faster than real output changes, then it immediately follows that the Fed sets NGDP. You often hear this stated as the idea that "the Fed can always choose to inflate".

But what if the Fed can't set the price level? There are several ways this could be the case. For example, the price level might be discontinuous in certain regions: suppose the Fed attempts to set inflation at exactly 176.73%, but any monetary policy that pushes inflation above 175% automatically causes inflation to jump to 190% (and vice versa). In other words, suppose that in some region, NGDP is a step function with respect to monetary policy. That's just one example, though; in general, any time NGDP is an unstable, stochastic, or undetermined function of monetary policy, the "omnipotent Fed" proposition fails. One special case of this is Milton Friedman's idea that monetary policy acts with "long and variable lags", a notion that has been pooh-poohed by the new monetarists.
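
A toy illustration of that step-function possibility, with the functional form and all of the numbers invented purely for the sake of the argument:

    def inflation_response(policy_stance):
        """Hypothetical discontinuous mapping from a one-dimensional policy stance to inflation (%)."""
        return 175.0 if policy_stance < 1.0 else 190.0

    # Sweep the instrument finely; 176.73% never appears among attainable outcomes.
    attainable = {inflation_response(x / 1000.0) for x in range(0, 2000)}
    assert 176.73 not in attainable   # the target in the "gap" is unreachable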

Anyway, OK, so is the Fed "omnipotent" or not? Well, how on Earth could we know? My big problem with the "Omnipotent Fed" idea is that it seems non-falsifiable. By which I mean, it doesn't seem like the evidence will ever be able to tell us whether the Fed is omnipotent or not.

Why not? Two reasons: A) Because the Fed's thought process is unobservable, and B) Because the Fed's policy toolkit is unobservable. To know what the Fed can do, we have to know what the Fed tries to do. For example, suppose we see the Fed do $1 trillion of quantitative easing, and NGDP doesn't seem to budge much.

Interpretation 1: The Fed knew that its actions would lead to a non-budging NGDP level, which is why it did what it did. In other words, the Fed chose to keep NGDP where it was, and if it had wanted to, it could have raised NGDP instead of just keeping it static.

Interpretation 2: The Fed tried to raise NGDP and failed. It failed because the people at the Fed made a mistake. They did the wrong kind of easing, or didn't manage expectations correctly, or in some other way used the wrong tools. If the Fed had used the right tools, it could have raised NGDP.

Interpretation 3: The Fed tried to raise NGDP and failed. It failed because the only way to raise NGDP would have been to cause a hyperinflation, which would raise NGDP by much, much more than it wanted.

There are more interpretations. I just highlighted these three to demonstrate a point.

How can you know what the Fed wants? You can make some guesses, but not scientific ones. The Fed keeps its decision-making process secret. And suppose you somehow could figure out what the Fed wants (say, by applying a mind-reading device to the Fed chairman during a policy announcement). That would tell you precious little about what the Fed is actually capable of. For example, suppose that expectations are very important in the determination of NGDP. Do we know what determines expectations? Not really, no. Or suppose money demand is unstable, or contains hysteresis, in certain regions. How would we know that?

Some people have claimed that an "NGDP futures market" would allow us to test the proposition of Fed omnipotence. If NGDP futures were stable, they say, that would show that the Fed can hit any NGDP target it likes. But this is just flat-out false. Low NGDP futures volatility could mean that the Fed is utterly powerless, and that investors simply expect few shocks to NGDP.

So it seems to me that the proposition of Fed omnipotence is something that we can only believe by making a leap of faith. It is functionally equivalent to the notion that an invisible God controls everything we see in the world. Thus, believers in the Omnipotent Fed will always be able to claim, without scientific or logical refutation, that every jump and juggle of NGDP was the deliberate choice of the Fed.

Does this mean that every question about monetary policy is fundamentally unanswerable? No, it doesn't. We can't observe the Fed's desires, and we definitely can't observe the Fed's total choice set. But we can observe the Fed's actual choices. If you tell me "Inflation always rises 1 for 1 with the monetary base," well, that is an easy proposition to falsify.
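
That kind of proposition is testable with very ordinary tools. A minimal sketch, where the data file and column names are placeholders: regress inflation on growth of the monetary base and see whether the slope is anywhere near one.

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical dataset with annual inflation and monetary-base growth, in percent.
    df = pd.read_csv("base_and_inflation.csv")

    X = sm.add_constant(df["base_growth"])
    results = sm.OLS(df["inflation"], X).fit()
    print(results.params)        # is the slope close to 1?
    print(results.conf_int())    # ...and is 1 inside the confidence interval?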

So what is the implication for monetary policy? I am not claiming that NGDP targeting is a bad idea, or that it definitely will not work, or even that it is unlikely to work. What I'm claiming is that, in the presence of true Knightian uncertainty about the power of the Fed, it is certain that at some point, if NGDP targeting doesn't seem to work, we will inevitably abandon the policy. And the point where we decide it has failed will depend not on scientific fact, but on intuition and heuristics. In other words, if the Fed keeps printing money and NGDP doesn't return to its pre-crisis path, at some point we will simply start to entertain the notion that the Fed is incapable of doing what we want it to do. And then we will try to think of something else. And of course the true believers will say: "No, the Fed could have done it, they just didn't really want to." And we won't be able to prove them wrong.


Update: On Twitter I asked Miles Kimball what he thought of this post, to which he responded:
It is certainly a logical possibility that the Fed can't get inflation up without overshooting...The difference is that I don't think the US is actually in that situation of having to overshoot. Japan may be.