
Does the "Entrepreneurship Subculture" prevent big ideas?


Try this: Think of an animal that isn't an elephant.

What was the first animal you thought of? For a hefty fraction of you, it was probably an elephant. But if I had just said "Think of an animal", only a few of you would have thought of an elephant. The moral of this story is that when you try to "think different," often you just fixate on the same-old, same-old.

This is a colloquial description of a line of research by Steven M. Smith, a cognitive psychologist at Texas A&M University (disclosure: Steven M. Smith is my father). In this book chapter, Smith describes how "initial ideas" can constrain creativity. The results are best summed up by this abstract (omitted from the final draft):
The first ideas to be considered during creative idea generation can have profoundly constraining effects on the scope of the ideas that are subsequently generated. Even if initial ideas are intended to serve as helpful examples, or they are given simply to get the creative process going, the constraints of initial ideas may be inescapable. Such constraints can impede successful problem solving and inhibit creative invention. Overcoming these constraints can be enhanced by reconsidering initially failed problems in new contexts.
Here is another paper by Smith, along with Nick Kohn, detailing how group brainstorming can lead to "collaborative fixation", in which everyone in the group starts fixating on whatever ideas get suggested first.

Why am I bringing this up? Well, in the past few years, I've been reading - and hearing - a lot about the Entrepreneurship Subculture. You all know what this is. It's mostly young people, mostly in urban areas (especially SF and NYC). It's mostly (but not exclusively) made up of entrepreneurs in the fields of technology and media. It includes media outlets like TechCrunch, books like The Lean Startup, "incubators" like Y Combinator, forums like Quora, and other outlets like the TED and TEDx talks. I myself have come into contact with this subculture just by dint of being friends with a lot of engineers, and with Peter Chang, whose own venture-funded media startup covers a lot of entrepreneurship-related events out in the Bay Area.

At this point, given the often combative nature of this blog, as well as the title, you might expect me to reveal that I am a detractor of the Entrepreneurship Subculture - a hater, so to speak. But that is not the case. I love the Entrepreneurship Subculture. The sheer intellectual energy of the movement is intoxicating. The people are, by and large, wonderful human beings. And the work being done by those involved in the Subculture is some of the most valuable stuff being done anywhere. In an age when much of highly-educated white-collar America spends its time performing unnecessary medical tests, trying to trick suckers into buying overvalued financial assets, or lobbying government for pork, the crowd at your local TechCrunch Disrupt conference are the real heroes of the economy.

But - as you may expect - I wonder if the Entrepreneurship Subculture isn't having some unintended adverse effects. By putting entrepreneurs in such close and constant contact with each other, does the Subculture foster creativity and cross-collaboration? Probably. But it also may inadvertently stifle creativity, by exactly the process that Steven Smith's research describes. Working in incubators, attending entrepreneur conferences, reading entrepreneurship publications, and talking constantly to other entrepreneurs may cause "collaborative fixation". Everyone will end up thinking of the same stuff, even if they try to think of something new and differentiated. Especially if they try.

If that happens, what we'll see is a lot of "me too" products. A social network for miniature terriers. Yet another mobile social local photo sharing app (that line is plagiarized from somewhere, but I can't remember where). Not the kind of big conceptual breakthroughs that really disrupt the industrial structure. Not the kind of big ideas that build really huge and successful companies.

This is important because America needs good entrepreneurship - and especially good tech entrepreneurship - more than ever. Rates of new business formation are falling. The venture capital sector is not making good returns (though much of their profit may have simply been taken by angel investors). And then there is that Great Stagnation.

So what is the solution? How can we make sure that a Subculture designed to foment entrepreneurship doesn't end up accidentally encouraging groupthink? Smith's research suggests a way out: a change of context. Give entrepreneurs a break. Send them out into the wilderness - to other countries, or to small towns away from the beating heart of innovation. Or just encourage them to spend periods of time away from the Subculture, avoiding conferences, not talking to other entrepreneurs, not reading TechCrunch. Have them live life, read some science fiction, visit factories and farms and retail outlets and non-tech office parks. Have them talk to friends (or just make friends) who work in other areas of the economy. Get them out of the bubble. Have them write down ideas, keep diaries, etc. I strongly suspect that when they come back, many of them will have new, weird, different ideas that they would not otherwise have had.

A famous Japanese artist once told me "It's impossible to think of anything new in a city." I countered "Yes, but it's hard to collaborate out in the country." We agreed that one needs to alternate. In the same way, I think that the Entrepreneurship Subculture should emphasize the importance of changes in context. Disrupt your own ideas.

Apple and Sony: an eerie parallel?


(Note: This post is a break from "serious" econ blogging...)

OK, don't go and make any financial trades based on this (or anything you read in any one blog post), but check this out:


Charts courtesy of Yahoo Finance, idea courtesy of my friend Dayv Wachell.

It's well known that Steve Jobs idolized Sony, especially its founder Akio Morita. Morita was a Jobs-like figure, maniacally focused on design and on pleasing the consumer (while his partner, Masaru Ibuka, the Wozniak of Sony, handled the initial technical wizardry), and loved by the public. Sony even inspired Jobs' wardrobe.

There are other parallels. Sony's big break was a portable music device; so was Apple's. Like Apple today, Sony in the 90s was known for having cult-like fans in its domestic market; these "Sony-heads" would essentially buy anything that Sony put out, even as the company's image slipped internationally. I've even heard people allege that those fans provided the company with a profit cushion that allowed it to ignore problems on the horizon.

In any case, when Sony's charismatic founder died, the firm was at the top of its game, and on top of the world. After Morita's death in October of 1999, Sony's stock price rose dramatically, only to crater a few months later.

Since Steve Jobs' death a year ago, Apple's stock has soared. But is there a parallel here? Is Apple's recent downtick the start of the kind of epic fall suffered by Sony in 2000? If so, that'll be kind of neat.

As I wrote at the top of this post, the point here is not to say "Apple = Sony! Sell!" Don't do that. (If you're interested in trading Apple stock, go be an Apple analyst; otherwise, keep your money in a nice diversified portfolio and trade only once per year.) The point is to wonder about the effects of iconic founders. Do celebrity founders who reach the iconic status of a Jobs or a Morita endow their companies with fat profit margins, by creating a group of super-fans who would pay $600 for a brick as long as it sported the company logo? Does a diehard core of super-fans lead a company to become complacent after the death of the iconic founder?

And does the death of an iconic founder lead to a predictable rise in a firm's stock price? Do investors implicitly believe - for a little while, anyway - that the "spirit" of an iconic founder lives on in the company he founded, causing them to overestimate the company's prospects, leading to a predictable crash in price? And can the timing of that price crash be predicted?

It's an interesting question, and one that's hard to evaluate econometrically, since iconic founders seem like they should be pretty rare. But maybe they're not as rare as I think, and someone out there will find a way to investigate this empirically and write a paper on it. In which case, any market-beating investment opportunity based on iconic founder deaths will promptly disappear...

Update: Apple has fallen to $550 since the writing of this post...

Strategies for Querying Literary Agents

A friend of mine recently asked for advice on different ways to approach the task of querying a large number of agents. She asked things like: Should I query them all at once? Or should I query them in groups? Or should I query them serially, one at a time, and wait for responses? If I query them serially or in groups, should I go with my favorite agents first (and then second-tier agents next, then third-tier and so forth)? Or should I query third-tier first, second-tier next, and first-tier last?

Let me kill the suspense by skipping to the bottom line. I told her to query in batches, backwards (third-tier first, then second-tier, and finally first-tier).

Now let's look at the reasons why.

Querying 100 agents all-at-once is the worst strategy ever, IMHO. First, I believe that a pitch should always be considered a work-in-progress. You should always be open to the idea that your query can be improved. Say you write a query this month (without sending it to anyone), then go on vacation, then come back to the query. There's a substantial chance that when you look at the query through new eyes, you'll see ways it can be improved. Or you might see mistakes in the original query, as written. Either way, think how disastrous it would be to copy-paste the original query and send it to everyone under the sun. Any imperfections in it will be propagated to all agents, and then you've blown it. Arguably, at least.

Also consider the possibility that your original pitch is simply taking the wrong approach. That's something you can discover by sending it to 20 or 30 or 40 agents. If you really believe the query is the absolute best it can be and you've selected agents carefully (to match what they're looking for), you should get at least one positive response out of 40 agents queried. If you don't, chances are good your query is fundamentally flawed in some way. You should consider whether a total rewrite is called for.

I've queried magazine editors, book publishers, and others in the past, and I've found from experience that a pitch can nearly always be improved. A direct-marketing pitch (which is exactly what you're writing) is something you hone and sharpen incrementally and continuously, preferably on the basis of testing. Sometimes you decide that an entirely different approach would be better. Don't foreclose that possibility by spamming out your first-generation query to everybody at once. That would be unwise.

Sending out queries serially and waiting for a response from each agent before moving on to the next one is simply impractical. Let's say agents take two weeks to respond, on average. (Which is optimistic, because the true answer is closer to four weeks.) If you're planning on writing to 40 agents, it'll take you 80 weeks to get to all of them. I don't know about you, but for me, that would be impractical.

The reason I used 40 agents in the above example is that in the real world, agents respond positively to only two or three percent of cold queries. If your query is in that range, you need to reach out to 40 agents, because a one-in-forty success rate is a two-and-a-half percent success rate.
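If you want to sanity-check that arithmetic, here's a toy calculation. It assumes every query is an independent draw with a flat 2.5% success rate, which is of course a simplification:

```python
# Toy calculation: chance of at least one positive response from a batch
# of cold queries, assuming each query independently succeeds 2.5% of the
# time (the 2-3% rate mentioned above). These numbers are illustrative.

p_success = 0.025
n_queries = 40

p_at_least_one = 1 - (1 - p_success) ** n_queries
print(f"P(at least one positive response from 40 queries) = {p_at_least_one:.1%}")
# Prints roughly 63.7% -- so zero-for-40 is a meaningful warning sign,
# though not absolute proof that the query itself is flawed.
```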

Sending out queries in groups is the way to go, IMHO. But even if you adopt that strategy, you should still not blindly use copy-paste, because (again) if there are imperfections in the pitch, you need to find them early on, not when you've already spammed everybody. That means you should read each query before sending it out. Believe me, after 20 or 30 or 40 re-readings of something, you'll find flaws. Unless of course you're undeniably the all-time best writer in the universe and can reliably turn out perfection on the first go.

Here's why you should send batches to third-tier agents first, then second-tier, then first-tier last. (Unless of course you have a recommendation from someone significant, like a bestselling author who already works with the agency in question. If you have that, contact that agency first.) Usually your first tier will contain a lot of top-flight agencies (in addition to containing the occasional boutique agency that just happens to be a special fit for your particular project). Top-flight agencies get phenomenal quantities of queries. They have more good material to choose from than bottom-tier agencies. Thus, your level of competition is very great when you go to a top agency.

The way to beat the competition (if you don't come with a recommendation that really counts) is to come at the first-tier agency with an offer already in hand. This usually gets the agency's attention.

So the strategy I would use is this: Send your first batch of queries to bottom-tier agencies. If you get an offer of representation from one of them, tell the first-tier agencies that you already have an offer in hand, but that you strongly prefer them and don't want to go with the other agency unless you really have to. But don't reveal the name of the agency that made the offer, because the top-flight agency will likely assume that (since you're writing to top-flight agencies) the offer came from another top-flight agency. And you do want them to assume this. You certainly don't want them to know that your offer came from some little-known one-person agency.

It's totally kosher to pit one agency against another like this. I can tell you for a fact that this sort of thing is done all the time when agencies pitch books to publishers. They love to get an auction going. I did this myself once, many years ago. I had a firm offer (contract in hand) from Doubleday. Instead of signing the contract immediately (as most people would have done), I wrote to four other top-flight publishers, and in my pitch I told them I already had an offer from Doubleday. All four publishers sent me contracts immediately and begged me to sign. I had an auction going. (I finally went with McGraw-Hill.)

I hope this discussion has been useful for you. It was for my friend.


Money is just little green pieces of paper!


Have you ever heard people say that "money is just little green pieces of paper"? Well, that is exactly what Steve Williamson claims in this post. Most of the post is an anti-Krugman volley, but buried in one of Steve's points is the following extremely interesting claim:
What is a bubble? You certainly can't know it's a bubble by just looking at it. You need a model. (i) Write down a model that determines asset prices. (ii) Determine what the actual underlying payoffs are on each asset. (iii) Calculate each asset's "fundamental," which is the expected present value of these underlying payoffs, using the appropriate discount factors. (iv) The difference between the asset's actual price and the fundamental is the bubble. Money, for example, is a pure bubble, as its fundamental is zero. (emphasis mine)
Can this be true? Is money fundamentally worth nothing more than the paper it's printed on (or the bytes that keep track of it in a hard drive)? It's an interesting and deep question. But my answer is: No.

First, consider the following: If money is a pure bubble, then nearly every financial asset is a pure bubble. Why? Simple: because most financial assets entitle you only to a stream of money. A bond entitles you to coupons and/or a redemption value, both of which are paid in money. Equity entitles you to dividends (money), and a share of the (money) proceeds from a sale of the company's assets. If money has a fundamental value of zero, and a bond or a share of stock does nothing but spit out money, the fundamental value of every bond or stock in existence is precisely zero.

That's a weird way of thinking about the world. It would mean that the size of a stock bubble, measured in percentage terms, is always and everywhere infinite. It would mean that the size of a stock bubble, measured in absolute terms, is just the price of the stock - that Google's stock now has a bigger "bubble component" than Pets.com's ever did, simply because Google's stock price is higher than Pets.com's ever was. If money is a pure bubble, this must be the case.

So it's a weird way of thinking about the world...but is it correct?

It seems to hinge on the definition of "fundamental value". Usually we define "fundamental value" as the (discounted) amount of money you'll have if you hold on to an asset. But if money has no fundamental value, then this is zero.

So what is "fundamental value"? Is it consumption value? If that's the case, then a toaster has zero fundamental value, since you can't eat a toaster (OK, you can fling it at the heads of your enemies, but let's ignore that possibility for now). A toaster's value is simply that it has the capability to make toast, which is what you actually want to consume. So does a toaster have zero fundamental value, or is its fundamental value equal to the discounted expected consumption value of the toast that you will use it to produce?

If it's the latter, then why doesn't money have fundamental value for the exact same reason? After all, I can use money to buy a toaster, then use a toaster to make toast, then eat the toast. If the toaster has fundamental value, the money should too.

So does saying "money is a pure bubble" mean that toasters have no fundamental value, and that therefore, the price of toasters - or, indeed, of any non-consumable good - is a pure bubble? If "fundamental value" = "consumption value", it seems that it must mean exactly that. Now we are into a very weird way of thinking about the world.

Or is there another way to define "fundamental value", besides "expected discounted stream of money payments" or "expected discounted consumption value"? I can't think of one...any takers?


Update: Brad DeLong and Paul Krugman weigh in. Paul suggests a more expansive definition of "bubble", while Brad conjectures about what Steve Williamson might mean. And yes, it feels weird calling Paul Krugman by his first name when we've never actually met...

Update 2: Steve Williamson weighs in:
The payoffs on my stocks and bonds, and the sale of my house, may be denominated in dollars, but that does not mean that the value of those assets is somehow derived from the value of money.
Not in general, no. But if the fundamental value of money is precisely, exactly zero then it does mean that. Any finite number multiplied by zero is still zero, so using Steve's definition of "fundamental value" - whatever the heck that is - the expected discounted present "fundamental" value of the stream of (money) payments from any stock or bond is precisely, exactly zero. As for the definition of "bubble", Steve claims that I disagree with his definition ("price > fundamental value"), but actually I do not disagree; that is one of the two main definitions out there (the other being "a rapid rise and crash of prices"), and I think it's perfectly fine.

Update 3: Nick Rowe has some thoughts.

Update 4: David Glasner thinks I've made a mistake. But I haven't made a mistake. If there exists a machine whose only possible function or use is to spit out assets that have zero fundamental value, then that machine has zero fundamental value. There exist many financial assets whose only possible function or use is to spit out fiat money. If the fundamental value of fiat money is always identically zero (as Williamson claims), then the fundamental value of those financial assets is always identically zero.

Update 5: David Andolfatto attempts to rebut my claim that if FV(money)=0, then the FV of most financial assets is also identically zero. Here is his attempted rebuttal:
What of Noah's claim that if money is a bubble, then nearly every financial asset is a bubble? This just seems plain wrong to me. Financial assets are typically backed by physical assets. For example, the banknotes issued by private banks in the U.S. free-banking era (1836-63) were not only redeemable in specie, but they constituted senior claims against the bank's physical assets in the event of bankruptcy. Mortgages are backed by real estate, etc.
I don't think this is a very good rebuttal. Sure, there are examples of financial assets that can be exchanged directly for real assets (without being first exchanged for money). But these are few and far between. Most financial assets only pay you in money, no matter what happens. So I don't think David's rebuttal really works.

Note: None of these critics has yet offered a concrete definition of "fundamental value". The whole point of my post is to ask for a concrete definition of fundamental value...so far I haven't got one.

Update 6: Steve Williamson finally does provide a definition of fundamental value, cribbing from Allen, Morris, & Postlewaite (1993) (which, by the way, is an excellent paper which you should read if you have time):
To summarize, we are arguing that the fundamental value of an asset is the present value of the stream of the market value of dividends or services generated by that asset.
According to this definition, money is priced above its fundamental value, because money pays no dividends and thus has a fundamental value of zero. Also note that according to this definition, T-bills have a fundamental value of zero, since they pay no coupons. In other words, by this definition, the market value of the redemption payment of a bond does not count toward its fundamental value.
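Just to make the accounting concrete, here is a minimal sketch of that definition in code. The cash flows and the discount factor are hypothetical, and redemption payments are deliberately excluded, per the definition quoted above:

```python
# Fundamental value as the present value of dividends/services only
# (redemption payments excluded, per the Allen-Morris-Postlewaite
# definition as Williamson applies it). All numbers are hypothetical.

def fundamental_value(dividends, discount_factor=0.95):
    """Present value of a stream of per-period dividend payments."""
    return sum(d * discount_factor ** t
               for t, d in enumerate(dividends, start=1))

stock  = [5.0] * 30   # a stock paying a $5 dividend each year for 30 years
t_bill = []           # a T-bill: no coupons, only a redemption payment
money  = []           # fiat money: no dividends at all

print(fundamental_value(stock))    # ~74.6: positive fundamental
print(fundamental_value(t_bill))   # 0.0: zero fundamental, by this definition
print(fundamental_value(money))    # 0.0: hence "money is a pure bubble"
```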

It's not the definition I'd choose; I'd include the redemption in the fundamental (in which case money would have a positive fundamental, since you can "redeem" it for itself). But "fundamental value = dividends" is a perfectly consistent definition. Great! Steve also writes:
I'll leave you to judge whether Allen, Morris, and Postlewaite are better or worse economic theorists than Paul Krugman or Noah Smith.
I can resolve half of this question for you right now: Allen and Morris (and almost certainly Postlewaite, though this is the only paper of his I've read) are better theorists than I am. All you aspiring theorists out there, take a lesson from those guys, and remember to define your terms explicitly and precisely!

Reinhart-Rogoff vs. Bordo-Haubrich (with grandstanding by John Taylor)


If you follow econ blogs at all, you'll have been reading lots about the dustup between Carmen Reinhart & Kenneth Rogoff, whose research argues that financial crises cause slow economic recoveries, and Michael Bordo & Joseph Haubrich, whose research argues that recoveries after financial crises are usually very rapid. Here is a Bloomberg op-ed by R&R defending their work.

The argument is politically important, because it tells us how good a job the Obama administration has been doing. If R&R are right, then Obama has been a good steward of the economy, since America's recovery has slightly outperformed the average of their sample of historical post-crisis recoveries. But if B&H are right, then Obama has done a historically bad job. Thus it is no surprise to find Mitt Romney's economic advisors, in particular John Taylor, hawking the Bordo-Haubrich research and disparaging that of Reinhart and Rogoff.

First of all, do not listen to John Taylor. He is not being a scientist right now, he is being a politician. Paul Krugman is right; this is an example of how politics hurts the academic discipline of economics. But unlike Krugman I think it's inevitable; you can hardly expect John Taylor not to do his job and support his boss. People know to take that into account when reading what he writes, and Taylor knows they take it into account. Are we ever going to get economists to stop advising political candidates? Are we ever going to get political candidates to stop insisting that their advisors support their campaign narrative? To each of these questions I answer: Maybe, but I am not optimistic.

But do pay attention to the academic dispute between R&R and B&H. It's very interesting. How do the two research teams arrive at such different conclusions? Essentially, there are three big differences in the methodologies used by the two teams. 

Difference 1: R&R compare recoveries across different countries. B&H only look at the U.S.

Difference 2: R&R define the "strength of a recovery" as the time required to reach the pre-crisis level of GDP per capita; B&H define the "strength of a recovery" as the rate of total GDP growth at a certain time following the trough of the recession.

Difference 3: R&R define a "financial crisis" much more narrowly than B&H.

Let's talk about Difference #1. Because B&H include only the U.S., they ignore episodes like Japan's crisis-and-recovery in the early 1990s. This means that, for one thing, B&H have a much smaller sample than R&R. If you believe that every nation is fundamentally different, this is unavoidable; but if you believe that "financial crises" are a universal phenomenon, then B&H are making a big mistake. 

It also means that B&H are comparing across different periods of history. This doesn't seem appropriate to me. For one thing, in its earlier history, the United States was experiencing "catch-up growth", which means that the trend rate of growth was much higher than it is now. For another thing, past eras had considerably higher productivity growth than the current era, which also raised the trend rate of U.S. growth. Finally, as R&R point out in their op-ed, U.S. population growth was higher in the past. B&H, by failing to detrend their GDP series, leave out all of these important facts.
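For readers who want to see what "detrending" means in practice, here is a minimal sketch using made-up data. The idea is to fit a long-run trend to log GDP and judge recoveries by deviations from that trend, so that eras with different trend growth rates can be compared fairly:

```python
# A minimal sketch of detrending a GDP series (made-up data, for
# illustration only): fit a log-linear trend, then measure deviations.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2011)
t = years - years[0]

# Hypothetical log GDP: 3% trend growth plus noise.
log_gdp = 7.0 + 0.03 * t + rng.normal(0.0, 0.02, t.size)

slope, intercept = np.polyfit(t, log_gdp, 1)   # log-linear trend
deviation = log_gdp - (intercept + slope * t)  # gap from trend

# A "strong recovery" is then a rapid closing of this gap, which is a
# different question from raw GDP growth at a fixed date after the trough.
print(f"estimated trend growth: {slope:.1%} per year")
```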

Basically, I think R&R's methodology is much better here. B&H, by refusing to even look at other countries, are potentially throwing away a huge amount of information. Sure, combining samples across countries introduces a lot of omitted variables, but you can always just compare within-country analyses to cross-country analyses and note whether and how the two are different. And you can always just make a list of potential cross-country structural differences. Then you let the reader decide for herself whether cross-country or single-country makes more sense. I think this is much better than simply choosing one specification and sticking with it.

OK, let's talk about Difference #2. This is partly a case of an apples-to-oranges comparison; the two research teams are measuring different things, and their stories are not necessarily incompatible. B&H tell a story of a "string-plucking" effect, where financial crises are followed by very deep recessions, and deeper recessions mean faster, but longer, recoveries. R&R's observation that recoveries from financial crises take longer than others could be consistent with that string-plucking story. 

(The point of contention appears to be over the "shape" of recoveries - R&R contend that financial crises produce L-shaped recoveries, while B&H say there is no conclusive evidence of that. The difference is caused by the difference in the definition of "financial crises", which we'll discuss in a moment.)

Note, by the way, that this second point shows that John Taylor is being a bit disingenuous when he uses B&H's results as a stick with which to beat the Obama Administration. Here, and again here, Taylor agrees with B&H and R&R that "there is no disagreement that recessions associated with financial crises have tended to be deeper than those without financial crises." In the "string-plucking" model proposed in the appendix of B&H's paper, they claim that deeper recessions will be followed by faster recoveries; in this model, one reason for a slower recovery under Obama is that the recession of 2009 was not as deep as recessions during the 1800s. So John Taylor is overlooking the obvious implication of B&H's model - that Obama slowed the recovery by reducing the severity of the recession.

OK, on to Difference #3 - the definition of a "financial crisis". My instincts tell me that B&H's more expansive definition of financial crisis is wrongheaded - after all, they include 1981 as a "financial crisis", even though basically everyone believes that that was a "Fed recession" caused by the Volcker disinflation. Intuition strongly suggests that R&R's restrictive definition of a "financial crisis" is much more credible.

BUT, I don't think we should always trust our intuition. It is certainly possible that R&R constructed their definition of "financial crises" by looking at the data, picking out L-shaped recoveries, noticing that what happened to the financial systems of countries right before those L-shaped recoveries looked different in some respects from what happened prior to V-shaped recoveries, and then defined those observed differences as "financial crises". 

Is this a bad or wrong approach? Heck no! It's exactly what I would have done. It's a naturalistic approach. You observe patterns in nature and you write them down. That's how science gets all of its insights.

But it's an incomplete approach. If you observe a pattern and then conclude that the pattern is structural, you are data-mining. Before we believe a theory, we need to use it to make out-of-sample predictions. In this case, what that means is that before we accept R&R's definition of "financial crisis", we really need to wait and watch history unfold, and see if subsequent L-shaped recoveries still correlate with the things R&R define as the essential characteristics of a "financial crisis". That will take a long time.

Alternatively, we could use microfoundations. If we successfully identified the processes by which R&R-defined financial crises affect recoveries (and B&H-defined crises don't), we could conclude in favor of R&R's definition without having to wait for out-of-sample crises to unfold.

But until we do at least one of those things, I am not willing to say with certainty that R&R's definition of crises, intuitive though it may be, is better than B&H's.

So, in conclusion: I like R&R's approach better than B&H's, because it comes at the problem from more different angles. This is how I think the best empirical research is done; you ask a question, and then you attack that question with multiple data sources, multiple alternative assumptions, and multiple models. This is how Justin Wolfers, for example, attacked the question of whether prediction markets or opinion polls do a better job of forecasting election results. B&H don't do this; they throw away the information contained in other countries, and they don't try alternative definitions of "financial crisis". In addition, I think they make a mistake by not adjusting their GDP growth data for long-term trends.

And I think no one should take John Taylor's promotion of B&H's results seriously, since he is part of Team Romney.

However, this does not mean I totally believe the results of Reinhart & Rogoff. The fact that their results ring true to me might just be a function of how long those results have been publicized in the media. The fact is, the data sample they have to work with is small and riddled with all kinds of potential confounding effects and omitted variables. That is what macro has to deal with, folks. It ain't pretty.

What is math, and why should we use it in economics?


In my last post, I pointed out that the Nobel Prize-winning work of Lloyd Shapley and Al Roth makes heavy use of mathematics, and indeed would be completely impossible without math. This, I said, is evidence against the idea that economics doesn't need (or shouldn't use) math.

But then some commenters asked me: What do you mean by "math"? And I thought that was an interesting question.

There is no "correct" definition of the word "math", any more than there is a correct definition of the word "art", or the word "love". There are many different definitions, all of which are drawn from similar connotations; in other words, people look at a bunch of things, say "This is math, and that is math", and then try to distill and formalize the similarities between the things that seem like math. For example, the definition I tended to like in college was called the "formalist" definition: 
"Mathematics is the manipulation of the symbols of a language according to explicit, syntactical rules."
Basically, this just means "math" = "logic". Philosophically, I'm fine with that. It's an expansive definition. But it's not very helpful when talking about economic methods, since it includes lots of stuff that people wouldn't normally call "math".

So what do I think is a useful definition? When it comes to scientific methodology, I think of "math" as basically being the same thing as "precision of meaning." This working definition is not a yes-or-no sort of thing; it's a sliding scale. Methods can be more math-y or less. 

So what do I mean by "precision of meaning"? Basically, something with a precise meaning has fewer alternative things that it could mean. For example, compare the two scientific propositions:

1. If you push something, it will push you back.

2. Momentum is conserved.

The second statement has a more precise meaning than the first. For example, the first statement could mean "If I push on something with a force of 5 Newtons, it will push on me with a force of 5 Newtons in the exact opposite direction that I pushed." Or, it could just as easily mean "If I push on something with a force of 5 Newtons, it will push on me with a force of anywhere between 1 to 1,000,000 Newtons, in a direction 15 degrees east of the direction I pushed." But the second statement can only mean the first of those two things, not the second. 

Therefore, I would say that the second statement is more mathematical than the first. Note that both of these statements are logical statements; for example, you can apply the rules of first-order logic to either statement to rule out the situation where I push something and it doesn't push me back at all. By the formalist definition, we can do "math" with either statement. But my "precision" definition makes a distinction between the two.
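For the record, here is one standard way of writing statement 2 with full precision, for an isolated two-body system (where F_ij denotes the force exerted on body i by body j):

```latex
% Conservation of momentum for an isolated two-body system, and the
% "equal and exactly opposite" reaction it forces:
\frac{d}{dt}\left( m_1 \vec{v}_1 + m_2 \vec{v}_2 \right) = \vec{0}
\quad \Longrightarrow \quad
\vec{F}_{12} = -\vec{F}_{21}
```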

So by this definition, are probabilistic statements less mathy than deterministic ones? No, as long as they are explicit about the fact that they are probabilistic statements.

Are qualitative statements less mathy than quantitative statements? Not necessarily ("The sign of the first derivative is positive" is qualitative but is precise in its meaning), but in practice, this often tends to be the case. Quantitative statements must be precise, while qualitative statements may or may not be. This is just due to differences in the languages we use for expressing qualitative and quantitative statements. And this tendency is why people usually think math is about numbers and/or symbols that stand for numbers.

As it happens, I'm not the first to land on this definition. One earlier writer put it this way:
What, then, to raise the old question once more, is mathematics? The answer, it appears, is that any argument which is carried out with sufficient precision is mathematical, and the reason that your friends and ours cannot understand mathematics is not because they have no head for figures, but because they are unable [or unwilling, DRH] to achieve the degree of concentration required to follow a moderately involved sequence of inferences. This observation will hardly be news to those engaged in the teaching of mathematics, but it may not be so readily accepted by people outside of the profession. For them the foregoing may serve as a useful illustration.
So there you go. Great minds think alike...and mine occasionally happens to stumble to the same conclusions.

So why should we use math in economics? Well, I can think of a number of reasons:

1. We may want to make precise predictions about what will happen in a market.

2. We may want to make precise predictions about the conditions under which things will happen in a market.

3. Precise statements often help resolve debates, avoiding the phenomenon of "talking past each other".

4. Precise statements often lead to unintuitive but logically inescapable results.

5. It is usually easier to check sets of precise statements for logical inconsistencies.

I think all of these reasons are good reasons sometimes and bad reasons sometimes (note how imprecise of a statement that is!). I have no hard-and-fast rule about how much precision to use, and when. But I do know that if you tried to implement a Shapley-Roth matching algorithm without mathematically precise statements about what happens when, it would be hopeless. 

And I also know that in the blogosphere, many debates go on and on without being resolved, when both sides are really just talking past each other. Egos get bruised, grudges develop, and understanding is not advanced, even when the different sides' positions are not mutually incompatible or even that far off. That's why, when debates get really long and confusing, I think it's time to whip out the math, define terms, and get really precise. (By the way: In my experience, defining terms is really the critical piece of this. It's very very hard to make imprecise statements when all your words are precisely defined!)

So are there times when we should use less math in economics? Sure. Sometimes we understand a phenomenon so little that imprecise statements are more valuable than precise ones; precise formulations, if we believe them, give us the illusion of understanding, while imprecise statements, by pointing us in many directions at once, give us a menu of options for seeking the truth. And I also suspect (without proof) that some authors use excessive precision as a form of obscurantism, cloaking simple ideas in daunting reams of equations, or performing byzantine manipulations of simplistic assumptions, in order to deter outsiders from entering their hyper-specialized sub-field and criticizing their work. 

But these are cases in which the purpose of imprecision is to lead us to greater future truth. And that truth, if it is found, will certainly be expressed with great precision - i.e., if there is an economic theory that really works, it's going to use some math. The only time not to use math in econ is when we haven't found the right math yet.

And in practice, I find that a few of the people calling for less math in economics (You know who you are!) don't seem to have any such goal in mind. There are a few people out there who would rather econ stay imprecise forever - so that nobody will ever be proved wrong or right, and we can let a million flowers bloom, and everyone's scholarly opinion about the economy will be equally valid. Paul Krugman discusses these folks when he says:
[Some people] claim to reject neoclassical economics, but their alternative is not an alternative model but a lot of verbiage; they talk at the economy, and imagine that by so doing they achieve a higher level of sophistication and realism than economists who try to express their ideas in terms of little models. 
And they’re kidding themselves; all they’ve done is hide their implicit models and prejudices behind a dust cloud.
Agreed. Math is not always the most appropriate tool in economics. But the more real successes economics achieves, the more good math it will use.

Update: And here is a useful reminder that the things people call "math" don't always meet my definition...computer-generated gibberish was accepted for publication in a math journal. Gibberish, of course, has no precision of meaning at all.

Update 2: Alex Marsh has a good post that discusses the pitfalls of using math in economics. The main pitfall he identifies is that people start to believe in their own math because it's simple. Marsh is absolutely right. Making simplifications is a necessary evil, and when people do it, sometimes they forget - or decide not to believe - that the things they left out of the model still exist. Believing that your own oversimplifications are the Laws of the Universe is easy, seductive, and deadly. Only empiricism - the relentless insistence on matching models to real-world data - can provide an effective check on this tendency.

A Nobel for economics that really works


Out here in the blogosphere, it is common to hear things like the following:

1. "Economics doesn't work; it has no practical applications."

2. "Economic will never discover any stable scientific laws, because human behavior changes."

3. "Economics shouldn't use math, because math can't describe human behavior."

4. "Economics is not a science."

I have some sympathy for these viewpoints. But economics is a very broad discipline, and I think there are many cases in which these criticisms couldn't be more wrong. There are cases in which economics works, in which it does discover "laws", and in which difficult math is absolutely essential. For example, consider the theories that won the Economics Nobel Prize this week.

The prize, given to Lloyd Shapley (who, by the way, spends his summers at Stony Brook) and to Alvin Roth (who was recently hired by Stanford despite being commonly cited as working for Harvard), was awarded for the invention of Matching Theory. Matching Theory is basically an algorithm - a mathematical technology - for finding optimal matches between pairs or groups of people. It incorporates human preferences, optimization, and strategic behavior, so it is economics. Alex Tabarrok gives a great introduction to the theory in this blog post.
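To give a flavor of the math involved, here is a bare-bones sketch of the simplest version of the theory's central algorithm, Gale-Shapley "deferred acceptance." The names and preferences below are made up; real applications (residency matching, kidney exchange) use far more elaborate variants:

```python
# A bare-bones version of Gale-Shapley deferred acceptance.

def deferred_acceptance(proposer_prefs, reviewer_prefs):
    """Return a stable matching {proposer: reviewer}. Each prefs dict
    maps a name to a list of names on the other side, best first."""
    # rank[r][p]: how reviewer r ranks proposer p (lower = preferred)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)                    # unmatched proposers
    next_idx = {p: 0 for p in proposer_prefs}      # next reviewer to try
    engaged = {}                                   # reviewer -> proposer

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_idx[p]]         # best not-yet-tried
        next_idx[p] += 1
        if r not in engaged:
            engaged[r] = p                         # tentative acceptance
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])                # r trades up
            engaged[r] = p
        else:
            free.append(p)                         # r rejects p

    return {p: r for r, p in engaged.items()}

men   = {"al": ["ann", "bea"], "bob": ["ann", "bea"]}
women = {"ann": ["bob", "al"], "bea": ["bob", "al"]}
print(deferred_acceptance(men, women))  # {'bob': 'ann', 'al': 'bea'}
```

The resulting match is stable in the theory's sense: no man and woman would both rather be with each other than with their assigned partners.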

I would like to point out some things about Matching Theory:

1. It is testable, tested, and correct for a very broad class of situations. There are many situations in which the theory makes predictions about when matching will be stable. As Tabarrok points out, these predictions have been confirmed in a number of real-world situations (and, I am sure, in controlled experiments as well).

2. It is practically applicable. Implementing the algorithms designed by Al Roth has resulted in much improved availability of kidney transplants.

3. The theory uses a lot of math. It does not rely on verbal characterizations of human behavior, but on hard quantitative predictions derived from non-trivial mathematics. Without that math, the theory would be useless.

In other words, Matching Theory is what most scientists would call science. Nor, I believe, is it the only such example. Critics of the economics profession should realize this. It's an important fact that not all fields of economics - and not all techniques and theories and schools of thought within each field - are created equal, in terms of their testability, real-world applicability, and appropriate use of mathematics.

As for the Nobel, I see this decision as increasing the credibility of the prize itself. The econ Nobel traditionally lies somewhere in between the Peace Prize - which everyone knows is a big joke - and the Physics, Chemistry, and Medicine Prizes, which have managed to maintain very high levels of credibility. But the Economics Prize has always seemed to alternate between testable, applicable, "science-y" sorts of economics (think James Heckman's selection models, Daniel Kahneman's experiments, William Vickrey's auction theory, or the 2007 prize for mechanism design) and less testable, more "storytelling" kinds of econ (I am sure you know which ones I'm talking about). It is no coincidence that the former tend to be prizes for microeconomics and the latter tend to go to macroeconomists.

My guess - and this is just a wild guess - is that in the years since the Crisis of 2008, the enormous wave of criticism directed at the econ profession has not been lost on the Nobel Prize committee. The only macroeconomists selected for the prize in the past few years have been Chris Sims and Thomas Sargent - two hardcore empiricists whose work serves to illuminate the data limitations and huge error margins faced by macroeconomics. It is my sincere hope that the prize will continue to move in the direction of the science prizes, toward testable, applicable theories and credible empirical results.

And in the meantime, my heartiest congratulations to Lloyd Shapley and Al Roth, who richly deserved the prize.

Update: Mark Thoma is thinking along similar lines.

October - National Depression Awareness Month

This year I reduced activity on this blog because of many changes in my life. The main change, of course, is dragging myself out of what I call 'full-grade' depression. It is like with many things: the greatest urge to speak up, to write, and to talk about something comes during the actual experience. When the experience fades away, so does the will to be heard. However, no major experience comes and goes without changing something in you.
Sometimes on public transport I see a person with scars on their forearms, and I can't help feeling recognition. I know exactly what this person has gone through, or is going through. I know what they might tell their friends, relatives, or co-workers, and I know what it really is. When somebody complains about feeling tired, down, or low all the time, something rings in my head. But I know that for those lucky people who have never experienced anything similar, nothing rings.
So October is National Depression Awareness Month, and I'd like to emphasize some facts that are obvious to people who have been there and not so obvious to those who haven't.

1. Depression is an illness. Think about it before calling a person who has no will to do anything "lazy," "weak," or "whiny." Everyone needs a break from time to time to have fun and do nothing. But if somebody feels bad most of the time and can't find the strength to do anything, that is far from fun. In fact, it's painful. It hurts even more to be criticized or laughed at.
2. Those who tell you they are depressed really are trying to get your attention. I've heard this many times: if somebody tells you they are depressed or suicidal, they probably aren't serious, but are just seeking attention. But why did they choose such a way to get attention? Are they being manipulative? Maybe. Some people even self-harm to get attention, because they need it so badly. A person may not even intellectualize it, but act out of impulse. The fact that they have such impulses at all says a lot.
3. Depressed people might be annoying. I know how that sounds, but I have a right to say so. It is tiring to live with a person who rarely shares the fun, who never smiles, who's always sad or angry. It wears you down, especially if you don't understand what your relative or friend or partner is feeling. Know this: they don't do it to make you mad. They aren't naturally nasty or hateful. And it's not your fault that a person you love or live with is hard to be around. Once again, depression is an illness, and its manifestations are as annoying as someone's constant loud cough. The cough is understandable, because everybody knows a person who is coughing is sick. It might be loud and annoying, but it is understandable. Depressive talk, the lack of smiles, whining, sarcasm, and social isolation are understandable too, once you know what they mean.
4. People don't choose to be depressed. A person doesn't choose to feel bad; they don't choose low self-esteem; they don't choose to feel pain. So they can't un-choose bad feelings and simply cheer up. It's not that simple. On the contrary, one may wake up every day wishing for relief, wanting to feel good, or rather, to stop feeling so awful.
5. One does not need an excuse for depression. One may seem to have no reason to be depressed, and yet be depressed. They don't have to compare their lives to those of hungry African children to earn an excuse for being depressed, because there are no legitimate or illegitimate reasons for depression. Like I've said many times, depression is caused by a chemical imbalance in the brain or by traumatic experience. A person doesn't have to be in some socially sanctioned "bad" situation in order to be depressed.
6. If you want to help, be there. If you suspect that a loved one is suffering from clinical depression and want to help, talk to them, look up information on depression, try to understand, and be there with them and for them. One of the things a depressed person needs is to know that they aren't alone, that they aren't on their own, that they aren't abandoned and forsaken. Let them know you love and accept them no matter what. It means a lot, trust me.

Debt and the burden on future generations, Part MMMVIII


I don't want to bore people, but once again this question has come up (see here, here, here, here, here, here, and here for the whole battle royale), and I thought I'd blog about it, because hey, every econ blog should occasionally do some little "thought experiment" type stuff, even if it doesn't drive quite as much traffic as making fun of commenters does.

The question, once again, is: "Does government debt impose a burden on future generations?" I took a crack at this question back in January, and my answer is still the same, but I'd like to phrase it more concretely.

Here's how I like to think about this question. In my mind, to "impose a burden on future generations" means  "to decrease the consumption possibilities of future generations". So the question is really whether or not the size of today's stock of government debt reduces the total consumption possibilities of people not currently born. In other words, if government debt is $1,000,000,000 today, does that mean that the consumption of future people must be lower than if government debt were $1 today?

Let's assume a closed economy. In that case, the economy's maximum potential consumption at any point in time is determined by the productive capacity of the economy at that time. Productive capacity is determined by the size of the capital stock, the labor force, the availability of natural resources, and the level of production technology. (For convenience, I'm defining the "capital stock" as including all consumer durables, and defining "consumption" as including the flow of services from those durables.) Now let's assume that the technology level, the labor force, and the amount of natural resources are all completely exogenous, so that the government cannot affect these things (this may not be realistic but we could always drop that assumption later). So the productive capacity of the economy at any point in time is just a monotonic function of the economy's capital stock - more capital at time T means more potential consumption at time T.

Now let's define "burden on future generations". That means that at some time T > 0 (t=0 being today), the potential consumption of the economy will be lower. Since the potential consumption of the economy at any time t is determined entirely by the size of the capital stock at time t, what we are really asking is whether or not the following proposition is true:

∀{D_t},{C_t} ∃T>0 s.t. K_T = f(D_0), where f'(D_0) < 0 

Here K is the capital stock, D is government debt, f is some function, t=0 is today, D_0 is today's stock of government debt, {D_t} is the path of government debt between t=0 and t=T, and {C_t} is the path of consumption between t=0 and t=T. If this proposition is true, then no matter what anybody does in the future, higher debt today necessarily means a smaller capital stock at some point in the future. 

Note that this proposition is not stated as formally as it could be or really should be, for which I apologize.

So now, let's think about what determines the capital stock at a future time T. This is determined by the sequences of consumption and investment from t=0 to t=T-1. In order for K_T to be constrained to be lower than it would otherwise be, it must be the case that K_{T-1} is lower than it would otherwise be (this follows easily from the assumption that the production function is monotonic in the level of the capital stock). By backwards induction, the above proposition can only hold if the following proposition holds:

K_1 = f(D_0), where f'(D_0) < 0 

Remember, t=1 means tomorrow. In other words, only if tomorrow's capital stock depends in a negative way on today's stock of government debt can it be true that a higher D_0 forces K_T to be lower at some point in time.

Tomorrow's capital stock depends entirely on today's level of investment (today's level of production is fixed, because today's capital stock is fixed). So our question now reduces to:

Question: If I_0 = g(D_0), where I_0 is today's investment and g is some function, what is the sign of g'(D_0)? 

If g'(D_0) is positive, then a higher government debt stock today means that the economy will invest more today; this means that government debt will impose no burden on future generations.

So is it possible that g'(D_0) > 0? In other words, given two societies that are identical in all respects except that Society 1 has a higher stock of government debt than Society 2, is it possible that Society 1 will invest more today (and consume less today) than Society 2?

Of course it's possible. The investment/consumption choice is entirely behavioral. And when I say "behavioral" I am including the behavior of the government. If Society 1's government chooses to cut welfare and use the money to build a bunch of roads, for example, it could easily invest more and consume less today than Society 2; the high level of D_0 in Society 1 would not prevent it from being able to do this.
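Here's a toy numerical illustration of that point (all numbers hypothetical). Tomorrow's capital stock depends only on today's capital and today's investment choice; today's debt number doesn't enter the capital accumulation equation at all:

```python
# Toy illustration: K_1 depends on today's investment I_0, not on D_0.
# All numbers are hypothetical.

def capital_tomorrow(k_today, investment, depreciation=0.05):
    """K_1 = (1 - depreciation) * K_0 + I_0"""
    return (1 - depreciation) * k_today + investment

k0 = 100.0   # identical capital stocks today in both societies

# Society 1: huge debt, but its government builds roads instead of
# funding current consumption, so it invests more out of today's output.
d0_society1, i0_society1 = 1_000_000_000, 12.0

# Society 2: trivial debt, but it consumes more and invests less today.
d0_society2, i0_society2 = 1, 8.0

print(capital_tomorrow(k0, i0_society1))  # 107.0 (higher K_1, higher D_0)
print(capital_tomorrow(k0, i0_society2))  # 103.0
# Nothing about d0_society1 prevented Society 1 from ending up with more
# capital tomorrow: g'(D_0) > 0 is perfectly possible.
```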

So government debt need not be a burden on future generations. It all depends on how economy-wide consumption/savings decisions react to the size of the stock of government debt. And that is heavily dependent on the behavioral model one chooses. Might a higher stock of government debt outstanding induce a society to invest less and consume more (which would constrain future consumption to be lower under certain additional assumptions)? Sure.

So the answer to the question is: It depends. What does it depend on? It depends on how consumption/savings decisions react to the size of the stock of government debt, which depends on the behavior of the government, firms, and households. Modeling that behavior is a major challenge.

Also, note that this does not answer the question of "Does government borrowing impose a burden on future generations?" This is because the economy's consumption-savings choices may respond differently to changes in debt than to levels of debt. But in general, the answer will have the same form.

So to sum up:
  • Must higher government debt today lead to lower potential consumption sometime in the future? No.
  • Does higher government debt today lead to lower potential consumption sometime in the future? Maybe; I don't know.
  • Does higher government debt today lead to lower actual consumption sometime in the future? Maybe; I don't know.
  • Must higher government borrowing today lead to lower potential consumption sometime in the future? No.
  • Does higher government borrowing today lead to lower potential consumption sometime in the future? Maybe; I don't know.
  • Does higher government borrowing today lead to lower actual consumption sometime in the future? Maybe; I don't know.

(Just in case you were wondering: The example Nick Rowe creates here is a case of higher government borrowing today leading to lower actual consumption in the future. He uses a "fruit-tree economy" with no capital (or if you prefer, with K fixed), so potential consumption in each period is fixed. In that sort of economy, it is impossible for anything to "impose a burden" on any cohort, using my definition of "imposing a burden".) 

Update: More interesting conversation between me and Nick over at his blog, as well as in the comment section of this post. We look deeper into the issue and get some more interesting results.

Update 2: Nick and I have been discussing the issue. I think we agree on everything now, and a number of interesting conclusions have emerged. Let me see if I can translate them into plain English...

The "Burden" Result: It is possible that the existence of past government transfers can ensure that either currently living people or as-yet-unborn (or both) must get screwed, relative to the baseline in which no transfers occurred. These past government transfers can be accomplished by government borrowing and spending; in that case, the past government transfers will affect the value of today's government debt. This is the upshot of Nick's model.

The "No Future Burden" Result: However, no matter what transfers happened in the past or how much government debt we have today, then given some simple assumptions, it is always possible to get away with only screwing people who are currently alive (and yes, you can quote me on that!). This is the upshot of my proof.

Note that these two results are not incompatible at all. So Nick and I don't disagree.

The "Dues Paid" Result: Given some more simple assumptions, it is always possible to limit the total amount of screwage (in consumption terms, not utility terms) to the amount of consumption that was, in the past, transferred away from people who are currently alive. In other words, the total amount of screwage never has to be bigger than the total "dues" already paid by currently living people. This is something I realized while talking to Nick over at his blog. I think it's kind of interesting.

The "Debt Does Not Equal Burden" Result: This means that the govt. debt number may not equal the burden number (and in general does not). The size of the current stock of government debt may be much larger than the total amount of the aforementioned screwage. In other words, govt. debt may be $10,000,000,000 today, but the total amount of necessary screwage might be much smaller, or might even be zero. This can happen, for example, if the government spends money on the same people it taxes, or if people leave government bonds to their children in a certain way. So debt is not a book-keeping device that faithfully records the amount of necessary future screwage.

(Note that this means that government debt's effect on society is very different from the effect of one household's debt on that household. If I borrow $10,000 and spend it today, I'm going to need to take a $10,000 hit in the future in order to pay it back. But if the government borrows $10,000 today, it's quite possible that nobody ever has to take a hit at all. I am not sure, but I think that this might be Paul Krugman's main point.)

(Update: Antonio Fatas thinks that this last result should be the main takeaway from the debate.)

In conclusion: When you ask "Does debt impose a burden on future generations?", you have to be very careful about exactly what you mean when you ask that question. But if you are careful - if you use math in your explanation, state all definitions and assumptions clearly, and above all think clearly and don't get mad - then the truth will out.

Will econ blogging hurt your career?


Many people ask me this. Short answer: It's impossible to know.

Reason 1: The number of bloggers is small, and blogging is new. In terms of econ grad student bloggers, there have been me, Steve Randy Waldman, Adam Ozimek, Daniel Kuehn, Kevin Bryan, JW Mason, and maybe one or two others. That's not a statistically large sample, and there are lots of confounding factors (research quality, blog subject matter, etc.). So it's basically impossible to do any kind of quantitative analysis to answer the question.

Reason 2: Very few of the people you annoy will actually inform you of the fact. For example, I've criticized modern macro a lot. Maybe that has annoyed tons of macroeconomists, to the point where they wouldn't consider working with me, hiring me, or allowing my papers into a journal that they refereed. But if so, they're not going to write me emails and say "Hey, I think you're a jerk." They're just going to quietly decide that I'm a jerk, and I'll never know why my paper really got rejected.

So basically, nobody will know for a long time how blogging impacts people's careers. Those of us who have tried it are basically just very tolerant of Knightian uncertainty. In fact, I love uncertainty. In many situations I'd rather try something just to see what happens. I'm the character that gets killed first in every horror movie, but that's fine with me, since life is not generally like a horror movie.

But here are a few reasons to think that blogging won't be as bad for your career as many people fear:

Reason 1: Blogging is great for meeting people. Through blogging, I've met awesome people like Richard Thaler, Erik Brynjolfsson, George Akerlof, James Heckman, Betsey Stevenson, and Roger Farmer, not to mention fellow blogger/economists like Mark Thoma, Tyler Cowen, Alex Tabarrok, Justin Wolfers, Brad DeLong, John Cochrane, Greg Mankiw, Robert Waldmann, Scott Sumner, Steve Williamson, David Andolfatto, and others (I still haven't met Paul Krugman, in case you were wondering). That doesn't mean those people think I'm an elite researcher just because I blog, or will do me any personal career-related favors. Blogs are not a good-ol'-boy network. But it's very helpful to meet people like this, to get ideas and perspective, learn how to think about things, and see what's going on in the world of economics. Not to mention networking; senior people advise younger people, and younger people are potential co-authors.

Reason 2: Blogging really doesn't take up much time. It's like any other hobby; it may put a crimp in your social life, but work will still come first. Heavy blogging will require 2 hours a day, but I would say I spend an average of only 20-30 minutes a day on it. And I never feel pressured to post more. Blogging is not a job, so it's not an obligation.

Reason 3: Blogging helps you think. It helps to get things down on paper. Sometimes you have an idea, and then when you start to write it down you realize how vague and/or implausible and/or illogical it is. Writing an idea clarifies the idea, and it helps you practice communicating your idea to others. This will help with writing papers. Even the "blog-fights" help with logical thinking and being able to dissect arguments.

Reason 4: Name recognition is somewhat important in economics, and there is a bit of evidence that blogging helps to build name recognition. This evidence should be taken with several grains of salt, of course, for reasons discussed above.

So there are reasons that blogging might be good for one's career. But these should be viewed simply as mitigating the (unknowable) risks of blogging, not as reasons to start blogging in the first place. The real reasons to blog are to affect the national conversation, to get involved with policy and national affairs in some small way, and to experience the sheer joy of thinking about stuff. In other words, the same reasons that people should go into academia in the first place. If your main goal is to make money and wear a suit and have a swank office - and there's absolutely nothing wrong with that - go find a nice safe job in a bank!

Update: In the comments, Steve Williamson adds:
Here's an old blogger perspective. There is risk associated with getting into anything you have not tried before. When you're young you have to take risks, otherwise you never get anywhere. There is risk in blogging just as there is risk in anything else we do. You can say foolish things in a blog post. You can say foolish things when you present a paper at a conference. In the first case you reveal your foolishness to more people, but they are more forgetful. Tomorrow they will move on to another idiot. Old economists who have had some success in the profession and have tenure can coast - they don't have to take risks. But coasting is no fun, and rust never sleeps. If you have tenure, you can use it to your advantage. Offend a few people. Speak your mind. Maybe it matters.
Sounds like great advice to me...

Update 2: In the comments, Frances Woolley adds:

Academic publishing is becoming increasingly dysfunctional. People have to publish to get tenure/promotions/government or other funding, so everyone wants to get stuff out, but no one wants to referee for journals or read their contents (except for a dozen or so top journals).  
As academic journals lose relevance, conferences, high profile working paper series like the NBER, and blogs are gaining. I get way more eyeballs - and way more ideas out there into the public domain - by blogging than I would by publishing stuff in mid-ranked journals.
Interesting...

Acemoglu and Robinson versus the blogs


Daron Acemoglu and James Robinson have a paper out (with Thierry Verdier) about different "flavors" of capitalism and how these flavors could affect innovation. Specifically, they compare the "cuddly" capitalism of Europe to the "cutthroat" capitalism of America. First they make a simple mathematical model to show how more socialist "cuddly" countries like Sweden can act as parasites, leeching off of the innovations produced by the freewheeling "cutthroat" nations. Then they "test" their model by showing that the U.S. patents more stuff than Scandinavian countries.

This paper drew criticism from a number of bloggers. For example, here Lane Kenworthy asked: 1. What about alternate measures of innovation in which Scandinavian countries score close to the U.S.?, and 2. Wasn't the U.S. more "cuddly" back in the 70s, and weren't we just as innovative back then? And here, Matt Yglesias alleged that patents are a crummy measure of innovation.

Acemoglu and Robinson then defended themselves on their blog. After suggesting that criticism of their paper was motivated by politics (buncha commie bloggers!), Acemoglu and Robinson discuss what they believe to be the differing roles of blogs and academic research:
[There is a] divide between what the academic research in economics does — or is supposed to do — and the general commentary on economics in newspapers or in the blogosphere. When one writes a blog, a newspaper column or a general commentary on economic and policy matters, this often distills well-understood and broadly-accepted notions in economics and draws its implications for a particular topic. In original academic research (especially theoretical research), the point is not so much to apply already accepted notions in a slightly different context or draw their implications for recent policy debates, but to draw new parallels between apparently disparate topics or propositions, and potentially ask new questions in a way that changes some part of an academic debate. For this reason, simplified models that lead to “counterintuitive” (read unexpected) conclusions are particularly valuable; they sometimes make both the writer and the reader think about the problem in a totally different manner (of course the qualifier “sometimes” is important here; sometimes they just fall flat on their face).
Well, first of all, I disagree with the idea that counterintuitiveness is inherently good when evaluating academic research; growing up, I argued this point at length with my dad, who is a cognitive psychologist. But this is neither the time nor the place for that argument.

Instead, I want to make two points.

First, I want to point out that Acemoglu and Robinson's theoretical result is not very counterintuitive. The notion that there is a tradeoff between innovation and redistribution is quite a commonly-held belief. In Acemoglu and Robinson's theoretical model, this idea is a built-in assumption; it is not the result of the model, it is the model's starting point. To see this, just check out Section 2.2 on "Reward Structures". The authors assume that the reward to entrepreneurship (entrepreneurs are the same as innovators in their model) depends on the degree to which a country lets winners win and lets losers lose. The entrepreneurs decide how much effort to put out - if they try harder, they have a bigger chance of succeeding.

To my knowledge - and if I am wrong, please correct me! - this reward structure and this "return to effort" are not taken from any microeconomic study of entrepreneurial behavior; they are just something the authors wrote down.

Why did they write them down? A cynic would say: "Because these assumptions made the model work out the way the authors wanted it to", but I am not such a cynic. Instead, it seems to me that they wrote down these assumptions because they were intuitively plausible. It makes intuitive sense that the bigger the risk from losing, the more people will try hard not to end up being losers. And it makes intuitive sense that the harder someone tries, the better they do.

Given these assumptions, the result of the model is not hard to predict - when you let losers lose and winners win, innovators try harder. Not exactly a shocker, given the assumptions.
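To make that concrete, here is a toy version of the incentive mechanism in Python. The functional forms and numbers are my own illustrative assumptions, not the ones in the actual paper; this is a sketch of the logic, nothing more.

    # Toy version of the reward-structure logic described above.
    # Functional forms are illustrative assumptions, not the paper's.

    def optimal_effort(spread, cost=2.0):
        """Entrepreneur picks effort e in [0, 1]; success probability is e,
        so expected payoff is, up to a constant,
        wage + e * spread - (cost / 2) * e**2,
        where spread = (winner's payoff) - (loser's payoff).
        The first-order condition gives e* = spread / cost;
        note that the guaranteed wage drops out entirely."""
        return min(max(spread / cost, 0.0), 1.0)

    # "Cutthroat" capitalism keeps the payoff gap wide;
    # "cuddly" capitalism compresses it through redistribution.
    print(optimal_effort(spread=1.5))  # cutthroat: effort = 0.75
    print(optimal_effort(spread=0.5))  # cuddly:    effort = 0.25

Compress the spread with redistribution and effort falls mechanically; the "tradeoff" is the assumption wearing a different hat.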

So is this model counterintuitive? I argue: No. Instead, it is intuitive. It seems to have been built using intuition, and its results confirm commonly-held beliefs about the difference between "cutthroat" and "cuddly" capitalism. So I don't think it makes much sense for Acemoglu and Robinson to defend their research from the bloggers by saying that the purpose of academic research is to be counterintuitive.

OK, time for my second point. Mark Thoma wondered why Acemoglu, Robinson, and Verdier get the result they get. Isn't it true that entrepreneurs have to take a lot of risk? And doesn't that mean that social insurance, which reduces risk, should encourage entrepreneurs to take more risk, not less? How is it that Acemoglu et al.'s model avoids this effect?

Here is the answer: it's built into the math. The authors assume that the only cost of entrepreneurship is effort. From the paper:
We assume that workers can simultaneously work as entrepreneurs (so that there is no occupational choice). This implies that each individual receives wage income in addition to income from entrepreneurship[.]
In other words, the authors have assumed away much of the risk of entrepreneurship! A failed entrepreneur gets paid exactly the same wage income as a worker who doesn't try to be an entrepreneur at all! This automatic wage income reduces the risk of entrepreneurship substantially, and makes social insurance much less necessary for reducing risk. 
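Here is an equally toy numerical illustration of how much work that assumption does. Log utility and all the numbers are my own assumptions, chosen only to make the point:

    import math

    def expected_utility(p_success, prize, fallback):
        """Expected log utility of attempting entrepreneurship:
        with probability p_success you earn fallback + prize;
        otherwise you live on the fallback income alone."""
        return (p_success * math.log(fallback + prize)
                + (1 - p_success) * math.log(fallback))

    # The paper's world: a failed entrepreneur still earns the full wage (1.0).
    print(expected_utility(0.3, prize=2.0, fallback=1.0))  # about +0.33
    # A riskier world: failure leaves only a sliver of income (0.1).
    print(expected_utility(0.3, prize=2.0, fallback=0.1))  # about -1.39

With the guaranteed wage, failure barely hurts and social insurance has little left to do; strip the wage away and failure is catastrophic, which is exactly where Thoma's intuition about social insurance encouraging risk-taking would start to bite.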

How realistic is that assumption? Well, in the real world, entrepreneurs in rich countries have limited liability, and can pay themselves wages out of their start-up capital. This means that many entrepreneurs can earn a wage even as they work to start businesses. But this wage is often much less than they could have earned otherwise, and if their business fails (a statistically likely event), they will be unemployed. So the "no occupational choice" assumption probably understates the risk of entrepreneurship, relative to the real world.

Also, the authors assume that entrepreneurs do not put up any of their own wealth as startup capital for their ventures, and they assume no heterogeneity between worker/entrepreneurs. This means that it is just as easy - and no more risky - for a poor person to start a successful company as for a rich person to do so.

So to sum up my second point, Acemoglu, Robinson, and Verdier have assumed a model in which:
  • Entrepreneurship is low-risk,
  • Rich people have no advantage over poor people when it comes to starting companies, and
  • Your probability of success depends entirely on how hard you work.
(No wonder liberals were not happy about this model, eh?)

So to combine my two points: When it comes to this kind of modeling, what you get out is pretty much what you put in. If you start off with the intuition that success is a function of how hard you work, and how hard you work is a function of how much the government will let you keep your hard-earned gains - in other words, if you start off with the intuition of pretty much every middle-aged conservative guy in America - then your model will probably spit out the result that countries face a tradeoff between redistribution and innovation...again, fitting perfectly with the intuition of pretty much every middle-aged conservative guy in America.

So the model is not counterintuitive. But is it a good model? Does it help us understand the world? Here we have to turn to the data. The data tell us that America issues more patents than Scandinavian countries. Is that good enough? Even if patents are a good measure of innovation (i.e., if Matt Yglesias is wrong), and even if cross-country comparisons are valid, and even if such a small sample were enough to make a statistical inference, I'd still say we have a problem here. Why? Because Acemoglu, Robinson, and Verdier were almost certainly aware of the patents data before they wrote their paper. It is quite probable that the patenting disparity between the U.S. and Scandinavia is what inspired their paper. And one cardinal rule of scientific theorizing is that your model should be tested on data other than the data used to construct the model.

In other words, Acemoglu et al. have not yet succeeded in explaining anything about the world. They have looked at the world, and then used plausible sounding assumptions to create a model whose results fit what they observed. But they have not yet tested whether that model can be used to predict things other than the original observation. Until they do that (or someone else does it), their theory should not be believed.

Anyway, I think I'm done talking about this paper. I am NOT trying to say that Acemoglu and Robinson are wrong, or that they have made any mistakes in their research. What I am trying to do is to illustrate the usefulness of blogs. Even if I've made some mistakes about the particulars (and I may have!), I hope I've shown that blogs, while not a substitute for academic research, do have something to contribute to the academic discussion - by pointing out assumptions, identifying relationships between assumptions and conclusions, discussing alternative assumptions, and evaluating the current status of the research. This is much more than just "distilling well-understood and broadly-accepted notions", which Acemoglu and Robinson claim to be the purpose of blogging.

Update: Some people apparently have been thinking that I'm accusing Acemoglu et al. of political bias. I am doing no such thing. Acemoglu et al. almost certainly just want to demonstrate a neat idea they had (the "asymmetric equilibrium" between "cutthroat" and "cuddly" countries). Demonstrating that, though, requires a model whose assumptions are bound not to be very pleasing to liberals...although again, that's not necessarily the only reason that liberal bloggers criticized the paper. In econ, a lot of accusations and counter-accusations of political bias are always flying around, but I like to keep those to a minimum.