
A Strong First Paragraph

I recently picked up a copy of the revised edition of Jack's Book: An Oral Biography of Jack Kerouac by Barry Gifford and Lawrence Lee, a superb collection of reminiscences by Jack Kerouac's contemporaries, given crystalline coherence by Gifford's and Lee's skillful narration.

The first paragraph of the Prologue (written by Barry Gifford) is beautifully worded. But before you look at it, imagine for a moment that you've been tasked with writing the first sentence of this book yourself. What would your strategy be? Would you try to say something about Jack Kerouac? Would you launch right into an explanation of how the book came to be? Or would you back up fifty paces and try to take in the larger picture?

Gifford backs up a hundred paces.

America makes odd demands of its fiction writers. Their art alone won't do. We expect them to provide us with social stencils, an expectation so firm that we often judge their lives instead of their works. If they declare themselves a formal movement or stand up together as a generation, we are pleased, because this simplifies the use we plan to make of them. If they oblige us with a manifesto, it is enforced with the weight of contract.


The opening sentence is an observation; a hypothesis.

It's a masterfully crafted paragraph, isn't it? Notice the way the sentences vary in length. The first sentence is appropriately short: eight words. The second, even shorter. Then we encounter a 23-word sentence. Then a 28-word sentence. And finally, 15 words. Can you hear the cadence this progression produces? Isn't it fabulous?

Notice how few words are reused. Can you spot the words that are used twice? They're short words: a, of, us, the, with, they, their. Is any word used more than twice?

Miraculously, we don't encounter the first occurrence of "the" until we're 56 words into the paragraph!

The coinage "social stencils" is unique and brilliant. A lesser writer would have written "social blueprints." But Gifford knows that "blueprints" is unimaginative, overused.

Think what a travesty it would be to defile a biography of Jack Kerouac with stale, overused metaphors.

Far be it from me to try to improve upon such a magnificent paragraph. If I were asked to do so, I could suggest perhaps two minor changes. First, change "America" to "History." Why? Because there's no reason at all to limit this discussion to America. The first sentence is an observation of universal importance. After you've changed "America" to "History" you can then get rid of "its," producing an opening sentence of: "History makes odd demands of fiction writers."


Quite possibly, I'd change the final sentence ("If they oblige us with a manifesto, it is enforced with the weight of contract") to "If they oblige us with a manifesto, so much the better." To me, "enforced with the weight of contract" sounds a bit too much like legalese. "So much the better" is borderline-trite, but everyone understands it.


Then again, I might not change anything. Gifford's paragraph is pretty much golden as is. If an English student ever showed me a paragraph like that, I'd gasp so hard all the air would be sucked out of the room.

As an exercise, I suggest you take a moment right now to go find whatever it was you last wrote that was more than a paragraph long. Re-read your piece. Is the opening paragraph golden as is? What would you change to make it better?

What can we do to put a stop to global warming?

First, the good news. Here is an infographic about the U.S. contribution to global warming:



U.S. total energy-related carbon emissions are down 13% since 2007. That's huge. Although the U.S. refused to sign the Kyoto Protocol, we managed about 70% of the emissions reductions mandated by that treaty (which is much better than most of the actual signatories!).

Renewable energy now provides 12.1% of U.S. energy. That is big.

Energy demand has fallen 6.4% since 2007, even though GDP is slightly higher. Hence, energy efficiency is responsible for the reduction in demand. That is good.

Gas is replacing coal. That is good, provided that wellhead methane emissions are not making up the difference.

Bottom line: If the U.S. were the world, the fight against global warming would be going well.

OK, now for the bad news: The U.S. is not the world. Global warming is global. The only thing that matters for the world is global emissions. And global emissions are still going up, thanks to strong increases in emissions in the developing world, notably China.

Figures released this week show skyrocketing Chinese coal use. China now burns almost as much coal as the rest of the world combined:

[Chart: China consumes nearly as much coal as the rest of the world combined]

Meanwhile, Indian coal use is also increasing strongly.

If China and the other developing nations cook the world, the world is cooked, no matter what America or any other country does. China et al. can probably cook the world without our help, because global warming has "threshold effects" (tipping points), and because carbon stays in the air for thousands of years.

Bottom line: We will only save the planet if China (and other developing countries) stop burning so much coal. Any policy action we take to avert global warming will be ineffective unless it accomplishes this task.

What will accomplish this task? What can we do to influence the behavior of China? One thing that might help, on the margin, is to tax the carbon content of imports into the U.S. A second thing would be to tax U.S. exports of coal and other fuels.

But these measures - or any carbon-taxing measures taken only by rich countries - will have limited effects, due to the large size of the developing-world economy, which is set to pass the developed world in size very soon. What else can we do to slow developing-world emissions?

As I see it, there is only one thing we can do: develop renewable technologies that are substantially cheaper than coal, and give these technologies to the developing countries. China in particular is not a very globally responsible country; it will continue to pursue growth, economic size, and geopolitical power at any cost, and that means using the cheapest energy source available. The only way China will stop using coal is if it becomes un-economical to continue using coal.

Thus, the rich world should focus its efforts and money on developing renewable energy cheaper than coal. This mainly means solar; it also means better energy storage and transmission technologies. We should give these technologies away to China and other countries for free; the economic hit we take from doing so will help ease developing-country resentment over the fact that the U.S., Europe, Japan and others got rich by burning fossil fuels in the past.

Developing cheap renewable energy technologies requires research funding from the government. A carbon tax would also help, since it gives private firms an incentive to develop their own in-house technologies. However, it will not be possible to give privately owned technologies to China; for these to be rapidly adopted in China in time to save the world, we must rely on natural technology diffusion, or on Chinese espionage.

So, government research is the most important component. We need to increase government funding for solar, for energy storage, and for electricity transmission tech. And then we need to give the fruits of our research for free to the entire world, before it's too late.

Are jobless recoveries the Fed's fault?



Matt O'Brien (who, in full disclosure, is the guy who recruited me to write for the Atlantic) hypothesizes that the "jobless recoveries" of recent decades have been caused by the Fed. Specifically, he thinks that the Fed has been practicing "opportunistic disinflation", allowing recessions to lower inflation, and then "stabilizing" inflation at a new, lower level after each recession by raising interest rates too soon. Here is the case:

Through the 1980s, postwar recessions happened when the Fed decided to raise rates to head off inflation, and recoveries happened when the Fed decided things had tamed down enough to lower rates. But now recessions happen when bubbles burst...and the Fed hasn't been able to cut interest rates enough to generate strong post-crash recoveries. Or maybe it hasn't wanted to... 
Why have interest rates and inflation mostly been falling for the past 30 years? In other words if the Fed has been de facto, and later de jure, targeting inflation for most of this period (and it has), why has inflation been on a down trend (and it has)?...  
[Chart: Core PCE inflation] 
Say hello to "opportunistic disinflation...The Volcker Fed had come in for quite a bit of abuse when it whipped inflation at the expense of the severe 1981-82 downturn, and the Fed seems to have learned it was better not to leave its fingerprints on the business cycle.  
In other words, Let recessions do their dirty work for them. 
It's not hard for central bankers to get what they want without doing anything, as long as what they want is less inflation (and that's almost always what central bankers want). They just have to wait for a recession to come along ... and then keep waiting until inflation falls to where they want it. Then, once prices have declined enough for their taste, they cut rates (or buy bonds) to stabilize inflation at this new, lower level. But it's one thing to stabilize inflation at a lower level; it's another to keep it there. The Fed has to raise rates faster than it otherwise would during the subsequent recovery to keep inflation from going back to where it was before the recession. It's what the Fed calls "opportunistic disinflation," and it's hard to believe this wasn't their strategy, looking at falling inflation the previous few decades. Not that we have to guess. Fed president Edward Boehne actually laid out this approach in 1989, and Fed governor Laurence Meyer endorsed the idea of "reducing inflation cycle-to-cycle" in a 1996 speech -- the same year the Wall Street Journal leaked an internal Fed memo outlining the policy.  
In short: Recoveries have been jobless, because that's how the Fed likes them.
I guess this is a pretty solid case. Fed memos and speeches, combined with the low path of observed inflation. However, I don't believe it.

Why not? Well, there have been three "jobless recoveries" in recent decades: the early-90s recovery, the early-2000s recovery, and the current recovery. Looking at Matt's inflation history graph, we see that in the 2000s, inflation didn't shift to a lower level - so, no "opportunistic disinflation" there (unless the Fed tried and failed!). In the current recovery, the Fed hit the Zero Lower Bound, and inflation actually fell below the Fed's desired rate. So let's look at the one remaining candidate for "opportunistic disinflation" - the early 90s. Here is a graph of the Federal Funds rate over time:



We see that the Fed cut rates during the early-90s recession, and kept cutting them for several years after that. Remember what Matt said: "The Fed has to raise rates faster than it otherwise would during the subsequent recovery to keep inflation from going back to where it was before the recession." According to this principle, the rate rise in 1994 must have come earlier than it would have, had the Greenspan Fed not been practicing "opportunistic disinflation."

But compared to other recessions, the 1994 rate rise came with a very long lag. In fact, the 1990s recession is nearly unique in that the Fed Funds rate kept falling for quite some time after GDP stopped contracting. Indeed, by the time the Fed started raising rates in 1994, the unemployment rate had already fallen to around its pre-recession level:



In other words, it sure looks like the Fed didn't even start raising rates until after the unusually long "jobless recovery" of the early 1990s was already finished.

Now, of course, it is fashionable these days to say that looking at the Fed's policy rate actually tells us nothing whatsoever about monetary policy. If you believe that the Fed chooses the level of NGDP at all points in time, then by assumption, all "jobless recoveries" were chosen by the Fed, and the policy rate simply did what it had to do in order to produce the observed time path of NGDP.

But I am highly skeptical of this idea.

Actually, in the case of post-Volcker monetary policy, I find the typical story to be the most convincing one. Estimates of the Fed's "reaction function", such as this one by Clarida, Gali, and Gertler in 2000 and others since then, find that the Fed has always seemed to use a "Taylor-type" rule to set policy rates, but that the Fed's rule started putting more of an emphasis on inflation-fighting, and less on unemployment-fighting, since Volcker took over. According to this common received wisdom, the Volcker Recessions convinced America that the Fed wouldn't tolerate inflation, and then higher productivity growth in the 90s enabled Greenspan to keep rates low without causing inflation expectations to come un-anchored. That story would explain the lower inflation observed post-1980 in O'Brien's graph. But it's not the same thing as "opportunistic disinflation", and it would not lead to jobless recoveries, because the Fed would still set its policy rate to respond to the best current estimates of inflation and the output gap.
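
(For reference, a simple "Taylor-type" rule sets the policy rate as i = r* + π + a(π − π*) + b(y − y*), where π − π* is the gap between inflation and the Fed's target, y − y* is the output gap, and a and b are positive policy weights. The Clarida-Gali-Gertler finding amounts to saying that a, the weight on the inflation gap, rose substantially once Volcker took over.)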

As for the Fed's memos and speeches suggesting opportunistic disinflation? Well, Fed memos and speeches suggest a lot of things. That doesn't mean that the Fed actually does them...

So while I also think the Fed has probably been a little too focused on inflation since the experience of the 70s, my gut tells me we need to look elsewhere for the source of the jobless recovery phenomenon. My own guess is that financial-based explanations, in particular the idea of "balance sheet recessions", are more compelling. As Matt noted, recessions with "jobless recoveries" (really, just recoveries with sluggish growth) have tended to follow very different kinds of financial events than pre-1990 recessions. Some international comparisons find that recoveries tend to be anemic after a certain kind of financial crisis. If I were a betting man, I'd put my money on that explanation.

More Easy Ways to Improve Your Writing

Punctuation matters.
A while ago, I wrote about some easy ways to pump up your writing.  It turned out to be a fairly popular post (though nowhere near as popular as my January 15 post on "How to Write an Opening Sentence," which got a bewildering 37,000 page-views). I thought I'd share a couple dozen more of my favorite tricks for forcing oneself to do a better job of writing. So here goes.
DISCLAIMER: All rules can be broken. Try sticking to them first.
  1. On a separate piece of paper or in a separate doc file, write down (as simply as you can) your main message: what you want to say. Keep that piece of paper (or doc file) visible, off to the side, while you work.

  2. Avoid long sentences.

  3. Try varying your sentence lengths more. Paragraph lengths too.

  4. When in doubt, leave it out. Fewer words equals less revision.

  5. Don't hoard a good phrase until the ideal situation comes. Write full-out, all the time. Hold nothing back.

  6. "Don't tell me the moon is shining. Show me the glint of light on broken glass." (Chekhov)

  7. An ultra-short sentence at the beginning or end of a paragraph adds impact. Try it.

  8. Go back to the last thing you wrote and strip all the adjectives and adverbs out. How does it read now?

  9. Stop using "seamlessly." (Unless of course you're a seamstress.)

  10. Stop using "effectively." It adds nothing.

  11. Stop using "burgeoning." Trite. Lazy.

  12. Never use "whilst," "thusly," "ergo," or any other arch words that make you sound like an insufferable pedant.

  13. "Substitute 'damn' every time you write 'very'. Your editor will delete it and the writing will be just as it should be." (Mark Twain)

  14. Stop giving a shit what your English teacher thinks.

  15. Get on with it.

  16. If any sentence has you working on it longer than 60 seconds, rewrite it immediately as two or more short sentences. Recombine.

  17. Ask a friend to change three words in something you just wrote.

  18. Go back and edit something you wrote a year ago. Notice how much of it stinks.

  19. In thirty seconds or less, take three words out of whatever you just wrote. If you can't do it, the penalty is to take out six words.

  20. Learn to recognize, and stop using, overused expressions. A good rule is: If you've heard it before, don't use it. Things like "hell bent," "all hell broke loose," "[adjective] as the dickens," "so quiet you could hear a pin drop," etc. will creep into your writing while you're not looking. Go back and find such atrocities. Rip them out. Set ablaze. Bury.

  21. Specificity counts. Your friend doesn't drive a car; she drives a tired-looking red Camry. It's not a "sweltering hot day." It's the kind of summer day that makes even pigeons sweat. The gunman didn't have a gun; he had a .45-caliber semi-automatic Glock. See the difference?

  22. Don't use the same adjective, adverb, or pronoun more than once in the same paragraph (unless of course somebody is holding a .45-caliber Glock to your face). See how long you can hold out before using any word a second time. Think of synonyms, alternative phrasings, pseudonyms, creative euphemisms, indirect references, colloquialisms, never-before-heard coinages -- anything except the same old word, repeated.

  23. Elmore Leonard once said: "If it sounds like writing, I rewrite it."

  24. Leonard also said to "leave out the parts people skip."

  25. Read good writing.

Are Placebos Really Sugar Pills?

Is this really what a placebo amounts to?
Over the weekend I was reading some medical studies involving placebos. The experimental protocols were of the standard double-blind type in which a control group gets a placebo without either the group or their doctors knowing it.

One of the studies involved a medical condition for which sugar (supposedly the main ingredient of placebos) might be anything but biologically inert, and I thought to myself "Okay, certainly the doctors would know that and would choose a sugar-free placebo for the study." But when I read the study I couldn't find an ingredients list for the placebo. Maybe it was a sugar pill. Maybe not. We'll never know.

Then I started to wonder: Who makes placebos? Where do they come from? Is there a widely used "standard placebo" that scientists typically use in studies? What does it contain, exactly? And so on.

Let me skip right to the punch line. It turns out the drug companies (the very people who perform and/or fund the efficacy studies FDA relies on when granting new drug approvals) manufacture their own placebos -- and aren't required to list the ingredients.

One reason this is so disturbing is that drug companies are allowed to use (and do increasingly use) active placebos in their studies. An active placebo is one that is biologically active, rather than inert.

"But wait," you're probably saying. "Isn't the whole point of a placebo that it's biologically inert, by definition?"

You'd think so. But you'd be wrong. Active placebos are designed to mimic the side-effects of drugs under study. So for example, if a new drug is known (or thought by the drug company) to produce dry mouth, the drug company might use a placebo containing ingredients that produce dry-mouth. That way, of course, they can say things in their ads like "[drug name] has a low occurrence of side effects, such as dry mouth, which occurred about as often as they did with placebo."

In a 2010 study by Beatrice A. Golomb, M.D., Ph.D. (and colleagues), published in the Annals of Internal Medicine (19 October 2010;153(8):532-535), some 150 recent placebo-controlled trials were examined to see how many of them listed placebo ingredients. Only eight percent of trials using placebos in pill form (the majority of trials) disclosed ingredients. Overall, three quarters of studies failed to report placebo ingredients.

One of the trials in the Golomb study involved a heart drug. Over 700 patients participated, so it was a good-sized study by any definition. In a subgroup of patients that had recently experienced a heart attack, the drug in question (clofibrate) was no better than placebo in extending patients' lives. But the placebo was actually quite effective, reducing the group's mortality rate by more than half. However: the placebo was olive oil. And olive oil is known to fight heart disease.

Carelessly chosen placebos can also have a harmful effect. Dr. Golomb tells of receiving a call from HIV researchers whose drug study had to be aborted because the placebo group was "dropping like flies." The placebo contained lactose. It's well known that lactose intolerance is higher for HIV patients than for the general population.

It's inconceivable (to me, at least) that there are no laws requiring drug companies to list placebo ingredients. The fact that drug companies can formulate their own placebos (some of which are biologically active) and not list the ingredients, in research aimed at getting approvals from FDA, is shocking and outrageous.

It's quite obvious that researchers (whether associated with drug companies or not) need to agree on a standard placebo of some kind (or at least standards for placebos).

FDA needs to review its policies on placebos and either outlaw "active placebos" or rigorously define acceptable conditions for their use.

When I say FDA needs to review its policies on placebos, I'm referring to such (ongoing) practices as letting drug companies de-enroll study subjects from studies based on individuals' sensitivity to placebos. (Drug makers usually begin a study with a two-week "washout period" during which time potential subjects take either a placebo, or nothing. Subjects who respond to the placebo can be summarily taken out of the study before it begins in earnest.)

The current anarchy that prevails with regard to placebos calls into question the reliability not just of drug-company research but of virtually every placebo-controlled study ever done. Which is a hell of a thing to have to say, or even think about. In fact it's nauseating.

Someone, please: Pass me the Tic-Tacs.

The power and the terror of Irrational Expectations



In September 2011, in an interview with the Wall Street Journal, Robert Lucas gave the following justification for the use of Rational Expectations:
If you're going to write down a mathematical model, you have to address that issue. Where are you supposed to get these expectations? If you just make them up, then you can get any result you want.
So, are Rational Expectations not "just made up"? Does the evidence tell us that this is how people form expectations? I don't think so. It seems to me that Lucas is saying that we should pick Rational Expectations because they are appealing in some a priori way. I'm not sure what that is, though.

Be that as it may, figuring out how people actually form expectations, in the real world, is devilishly hard. Thomas Sargent and many others have experimented with models of Bayesian learning. Roger Farmer has advanced the idea that agents use a "belief function" in cases where rational expectations of the Lucas variety can't be formed. Greg Mankiw and Ricardo Reis have experimented with macro models in which people don't always update their beliefs on time, and Christopher Sims and many others have tried to microfound this idea with various models of rational inattention (in fact, rational inattention is now a hot topic in behavioral finance).

Of course, there are older, simpler ideas of expectation formation that were pushed out by the Lucas revolution, but which may have received a bad rap. One of these is Milton Friedman's theory of "adaptive expectations", which in its simplest form doesn't seem to explain the data, but may actually be going on in some more complicated form.
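
(In its simplest form, adaptive expectations is just a constant-gain error-correction rule: expected inflation gets revised each period by a fixed fraction of the latest forecast miss, so that E[π(t+1)] = E[π(t)] + g·(π(t) − E[π(t)]) for some gain 0 < g ≤ 1. Keep this form in mind; the "constant-gain learning algorithms" mentioned in the abstract below are direct descendants of it.)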

That is the conclusion of a recent paper by Ulrike Malmendier, a star of the behavioral finance field. Malmendier shows that people's inflation expectations are strongly affected by recency bias; in other words, people who have experienced higher inflation during a large percentage of their lifetime tend to expect higher inflation, even if their lifetimes haven't been that long yet. From the abstract:
How do individuals form expectations about future inflation? We propose that past inflation experiences are an important determinant absent from existing models. Individuals overweigh inflation rates experienced during their life-times so far, relative to other historical data on inflation. Differently from adaptive-learning models, experience-based learning implies that young individuals place more weight on recently experienced inflation than older individuals since recent experiences make up a larger part of their life-times so far. Averaged across cohorts, expectations resemble those obtained from constant-gain learning algorithms common in macroeconomics, but the speed of learning differs between cohorts.
This comes via Carola Binder, a grad student blogger at Berkeley (whom you should follow, by the way).

Binder discusses the implications of this finding for Japanese exchange rates, but I wish she had touched more on the basic idea that this sort of expectation formation might be responsible for the persistence of Japan's deflationary trap itself. Japan has been in deflation, or near it, for two decades now. That's a large fraction of the working lifetimes of many of Japan's current adults. If inflation expectations are set in the backward-looking way that Malmendier suggests, then it might take far more dramatic and sustained central bank action than anyone realizes in order to produce a return to an inflationary environment.

But to me, that's not even the most disturbing implication of Malmendier's finding, and of this type of expectations model in general. In most theories of non-rational expectations, like Bayesian learning or rational inattention, expectations evolve in a smooth, stable way. And so these models, as Chris Sims writes, look reassuringly like rational-expectations models. But there is no guarantee that real-world expectations must behave according to a stable, tractable model. I see no a priori reason to reject the possibility that expectations react in highly unstable, nonlinear ways. Like tectonic plates that build up pressure and then slip suddenly and unpredictably, expectations may be subject to some kind of "cascade". This happens in some simple settings, like the theory of "information cascades" (in that theory, people are actually rational, but incomplete markets prevent their information from reaching the market, and beliefs can shift abruptly as a result). In the real world, with its tangle of incomplete markets, bounded rationality, and structural change, expectations may be subject to all kinds of instabilities.

In other words, to use Lucas' turn of phrase, expectations might just make themselves up...and we might get any result that we don't want.

What if inflation expectations change suddenly and catastrophically? That would probably sound the death knell for macro theories in which the central bank can smoothly steer the path of things like inflation, NGDP, etc. It would raise the specter of an "inflation snap-up" (or "overshoot", or "excluded middle") - the central bank might be unsuccessful in beating deflation, right up until the moment when hyperinflation runs wild.

And what would be the implications for financial markets and financial theories of the macroeconomy? Belief cascades could obviously cause asset market crashes. It seems like sudden changes in expectations of asset price appreciation might also cause abrupt and long-lasting changes in saving and investment behavior. Which in turn could cause...well, long economic stagnations.

A very disturbing thought.

Funny Metaphors

Humor is a good thing in metaphors. But not unintentional humor.

Here are some stupendously warped metaphors and similes from student essays. Read 'em and weep.

1. Her vocabulary was as bad as, like, whatever.

2. The ballerina rose gracefully en pointe and extended one slender leg behind her, like a dog at a fire hydrant.

3. Hailstones leaped from the pavement, like maggots when you fry them in hot grease.

4. The revelation that his marriage of 30 years had disintegrated because of his wife's infidelity came as a rude shock, like a surcharge at a formerly surcharge-free ATM.

5. He spoke with the wisdom that can only come from experience, like a guy who went blind because he looked at a solar eclipse without one of those boxes with a pinhole in it and now goes around the country speaking at high schools about the dangers of looking at a solar eclipse without one of those boxes with a pinhole in it.

6. The little boat gently drifted across the pond exactly the way a bowling ball wouldn't.

7. She grew on him like she was a colony of E. coli and he was room-temperature Canadian beef.

8. She had a deep, throaty, genuine laugh, like that sound a dog makes just before it throws up.

9. It hurt, the way your tongue hurts after you accidentally staple it to the wall.

10. From the attic came an unearthly howl. The whole scene had an eerie, surreal quality, like when you're on vacation in another city and Jeopardy comes on at 7:00 p.m. instead of 7:30.

11. John and Mary had never met. They were like two hummingbirds who also had never met.

12. Her hair glistened in the rain like a nose-hair after a sneeze.

13. The plan was simple, like my brother-in-law Phil. But unlike Phil, this plan just might work.

14. The young fighter had a hungry look, the kind you get from not eating for a while.

15. McBride fell 12 stories, hitting the pavement like a Hefty bag filled with vegetable soup.

16. He was as lame as a duck. Not the metaphorical lame duck, either, but a real duck that was actually lame. Maybe from stepping on a land mine or something.

17. She walked into my office like a centipede with 98 missing legs.

18. He was deeply in love. When she spoke, he thought he heard bells, as if she were a garbage truck backing up.

19. Even in his last years, Grandpappy had a mind like a steel trap, only one that had been left out so long, it had rusted shut.

20. He fell for her like his heart was a mob informant and she was the East River.

How to be a Master of Metaphor

Nothing makes a piece of writing sparkle like a good metaphor. Well-crafted metaphors and similes are the RPGs of diction: destroyers of boredom, exploding munitions of meaning.

What is "metahpor"? Term.ly defines it as "a figure of speech in which an expression is used to refer to something that it does not literally denote in order to suggest a similarity." I like to think of it in simpler terms: enlisting a vivid image in service of description. Diction's lubricant. The rib-spreader that exposes a writer's true heart.

What is "simile"? A metaphor in drag; a metaphor with the word "like" in it. Nothing more.

A simile is like a white lie; you're telling the reader that Thing A is like Thing B, even though in a literal sense, the two are not the same. A metaphor, on the other hand, is a pretend-lie. You're calling one thing something else entirely. Stephen Colbert explains it this way: "What's the difference between a metaphor and a lie? Okay, I am the sun, you are the moon. That's a lie. You're not the moon."

What makes a good metaphor (or simile) good?

  • Simple and clear: A good metaphor is vivid, useful, concise, and (when successful) memorable. An elaborate, baroque, overworked, or otherwise wordy metaphor topples under its own weight.
  • Highly visual, if possible: Concrete language that evokes a clear mental image is always a good idea, for any kind of writing.
  • Original: Not lame, not something anybody has used before.
  • Unexpected, perhaps even shocking: A good metaphor doesn't leave the reader dumbstruck; it leaves her Tasered in the nipples. It's a subversion of expectation.
  • Not mixed: An inconsistent image destroys, not augments, meaning.
  • Parallel in tone with whatever you're describing: If you're describing weird, produce a metaphor that's weird. If you're describing upbeat, be sure the metaphor is upbeat. You're not just denoting imagery; you're conveying tone. Or should be.
  • Entertaining: The reader should smile, maybe even laugh.

Sometimes it doesn't hurt to inject a bit of absurdity. One time, I overheard somebody talking about the dangerously worn-out tires on his car. He spoke of tires that "were so bald you could drive over a dime and tell if it was heads or tails." I'd never heard that expression before. It stayed with me.


Examples of Hackneyed Metaphors and Similes

  • "[to] rise head and shoulders [above something]": Thoroughly overused.
  • "Music to my ears": Horrible.
  • "Two peas in a pod": Offal.
  • "Heart of stone": Nauseating.
  • "The light of my life": Cloying.
  • "Raining cats and dogs": How about raining llamas and dromedaries? Anything but housepets.
  • "[our culture is a] melting pot." How about "a sumptuous ethnic ragout"?
  • "Sank like a stone": The essence of trite.
  • "[He or she turned] white as a sheet." OMG please no.
  • "He was awkward; all knees and elbows": No longer original. Try something like: "He was awkward, all knees and elbows, like a newborn giraffe."


Metaphor: Good Examples

  • "Advertising is the rattling of a stick inside a swill bucket." George Orwell
  • "Art washes away from the soul the dust of everyday life." Pablo Picasso
  • "Fill your paper with the breathings of your heart." William Wordsworth
  • "Courage is grace under pressure." Ernest Hemingway
  • "The night wind was a torrent of purple darkness." Unknown
  • "I tom-peeped across the hedges of years, into wan little windows." Vladimir Nabokov
  • "A bland agenda. Political meatloaf." (Yours truly)
  • "A wicker basket weighed down with half-rotted ideas." (Yours truly)


Simile: Good Examples

  • "The air smelled sharp as new-cut wood, slicing low and sly around the angles of buildings." Joanne Harris
  • "The dust lifted up out of the fields and drove gray plumes into the air like sluggish smoke." John Steinbeck
  • "Elderly American ladies leaning on their canes listed toward me like towers of Pisa." Vladimir Nabokov
  • "There was a quivering in the grass which seemed like the departure of souls." Victor Hugo
  • "His face was deathly pale, and the lines of it were hard like drawn wires." Bram Stoker
  • "To live anywhere in the world today and be against equality because of race or colour is like living in Alaska and being against snow." Unknown

It's surprising how many people quote Margaret Mitchell's little suck-ass line about Scarlett meeting Rhett as an example of a beautiful simile: "The very mystery of him excited her curiosity like a door that had neither lock nor key." First of all, "the very [sight, mystery, image, etc.] of [something]" is a repugnantly arch construction. But more to the point: A door that has neither lock nor key is just your average door, isn't it? Most doors have neither lock nor key. It seems Scarlett got easily excited by a cheap, lockless door. (My kind of woman.)

Tomorrow, I'm going to continue on this subject with some examples of truly humorous metaphors and similes drawn from that inexhaustible well of preposterous nonsense, student essays. Don't miss tomorrow's post. You'll be sorry as a whore in church if you do.

Bayesian vs. Frequentist: Is there any "there" there?


The Bayesian/Frequentist thing has been in the news/blogs recently. Nate Silver's book (which I have not yet read btw) comes out strongly in favor of the Bayesian approach, which has seen some pushback from skeptics at the New Yorker. Meanwhile, Larry Wasserman says Nate Silver is really a frequentist (though Andrew Gelman disagrees), XKCD makes fun of Frequentists quite unfairly, and Brad DeLong suggests a third way that I kind of like. Also, Larry Wasserman gripes about people confusing the two techniques, and Andrew Gelman cautions that Bayesian inference is more a matter of taste than a true revolution. If you're a stats or probability nerd, dive in and have fun.

I'm by no means an expert in this field, so my take is going to be less than professional. But my impression is that although the Bayesian/Frequentist debate is interesting and intellectually fun, there's really not much "there" there...a sea change in statistical methods is not going to produce big leaps in the performance of statistical models or the reliability of statisticians' conclusions about the world.

Why do I think this? Basically, because Bayesian inference has been around for a while - several decades, in fact - and people still do Frequentist inference. If Bayesian inference was clearly and obviously better, Frequentist inference would be a thing of the past. The fact that both still coexist strongly hints that either the difference is a matter of taste, or else the two methods are of different utility in different situations.

So, my prior is that despite being so-hip-right-now, Bayesian is not the Statistical Jesus.

I actually have some other reasons for thinking this. It seems to me that the big difference between Bayesian and Frequentist generally comes when the data is kind of crappy. When you have tons and tons of (very informative) data, your Bayesian priors are going to get swamped by the evidence, and your Frequentist hypothesis tests are going to find everything worth finding (Note: this is actually not always true; see Cosma Shalizi for an extreme example where Bayesian methods fail to draw a simple conclusion from infinite data). The big difference, it seems to me, comes in when you have a bit of data, but not much.
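
(A toy example of the swamping logic: put a Beta(a, b) prior on a coin's probability of heads and observe k heads in n flips. The posterior mean is (a + k)/(a + b + n), which rearranges to w·[a/(a + b)] + (1 − w)·(k/n), with w = (a + b)/(a + b + n). When n is small, w is near 1 and the prior dominates; as n grows, w shrinks toward 0 and the estimate converges to the frequentist answer, k/n.)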

When you have a bit of data, but not much, Frequentist - at least, the classical type of hypothesis testing - basically just throws up its hands and says "We don't know." It provides no guidance one way or another as to how to proceed. Bayesian, on the other hand, says "Go with your priors." That gives Bayesian an opportunity to be better than Frequentist - it's often better to temper your judgment with a little bit of data than to throw away the little bit of data. Advantage: Bayesian.

BUT, this is dangerous. Sometimes your priors are totally nuts (again, see Shalizi's example for an extreme case of this). In this case, you're in trouble. And here's where I feel like Frequentist might sometimes have an advantage. In Bayesian, you (formally) condition your priors only on the data. In Frequentist, in practice, it seems to me that when the data is not very informative, people also condition their priors on the fact that the data isn't very informative. In other words, if I have a strong prior, and crappy data, in Bayesian I know exactly what to do; I stick with my priors. In Frequentist, nobody tells me what to do, but what I'll probably do is weaken my prior based on the fact that I couldn't find strong support for it. In other words, Bayesians seem in danger of choosing too narrow a definition of what constitutes "data".

(I'm sure I've said this clumsily, and a statistician listening to me say this in person would probably smack me in the head. Sorry.)

But anyway, it seems to me that the interesting differences between Bayesian and Frequentist depend mainly on the behavior of the scientist in situations where the data is not so awesome. For Bayesian, it's all about what priors you choose. Choose bad priors, and you get bad results...GIGO, basically. For Frequentist, it's about what hypotheses you choose to test, how heavily you penalize Type 1 errors relative to Type 2 errors, and, most crucially, what you do when you don't get clear results. There can be "good Bayesians" and "bad Bayesians", "good Frequentists" and "bad Frequentists". And what's good and bad for each technique can be highly situational.

So I'm guessing that the Bayesian/Frequentist thing is mainly a philosophy-of-science question instead of a practical question with a clear answer.

But again, I'm not a statistician, and this is just a guess. I'll try to get a real statistician to write a guest post that explores these issues in a more rigorous, well-informed way.

Update: Every actual statistician or econometrician I've talked to about this has said essentially "This debate is old and boring, both approaches have their uses, we've moved on." So this kind of reinforces my prior that there's no "there" there...

Update 2: Andrew Gelman comments. This part especially caught my eye:

One thing I’d like economists to get out of this discussion is: statistical ideas matter. To use Smith’s terminology, there is a there there. P-values are not the foundation of all statistics (indeed analysis of p-values can lead people seriously astray). A statistically significant pattern doesn’t always map to the real world in the way that people claim. 
Indeed, I’m down on the model of social science in which you try to “prove something” via statistical significance. I prefer the paradigm of exploration and understanding. (See here for an elaboration of this point in the context of a recent controversial example published in an econ journal.)

Update 3: Interestingly, an anonymous commenter writes:
Whenever I've done Bayesian estimation of macro models (using Dynare/IRIS or whatever), the estimates hug the priors pretty tight and so it's really not that different from calibration.

Update 4: A commenter points me to this interesting paper by Robert Kass. Abstract:
Statistics has moved beyond the frequentist-Bayesian controversies of the past. Where does this leave our ability to interpret results? I suggest that a philosophy compatible with statistical practice, labeled here statistical pragmatism, serves as a foundation for inference. Statistical pragmatism is inclusive and emphasizes the assumptions that connect statistical models with observed data. I argue that introductory courses often mischaracterize the process of statistical inference and I propose an alternative "big picture" depiction.

How to Use Webfonts in Blogger

Lately I've been experimenting with Google Webfonts, which is a terrific way to get started with webfont technology. The fonts are free and using them is a snap. Scroll down for a few sample fonts.

Once you pick out the fonts you want to use, just insert a line like the following in the <head> section of your template. Note that this is all one line (ignore the wrapping):

<link href='http://fonts.googleapis.com/css?family=Arbutus+Slab|Belgrano|Tinos:400,400italic|Ovo|Arapey:400italic,400|Alegreya:400italic,400,700|Ledger|Adamina|Andada' rel='stylesheet' type='text/css'>

Right after that line, insert some style classes as follows:

<style>
/* Note: in CSS, multi-word family names like 'Arbutus Slab' must be quoted;
   the '+' form (Arbutus+Slab) belongs only in the fonts.googleapis.com URL. */
.ovo  { font-family: Ovo, Arial, serif; font-weight: 400; font-size: 12pt; color: black; }
.arbutus  { font-family: 'Arbutus Slab', Arial, serif; font-weight: 400; font-size: 12pt; color: black; }
.tinos  { font-family: Tinos, Arial, serif; font-weight: 400; font-size: 12pt; color: black; }
.arapey  { font-family: Arapey, Arial, serif; font-weight: 400; font-size: 12pt; color: black; }
.alegreya  { font-family: Alegreya, Arial, serif; font-weight: 400; font-size: 12pt; color: black; }
.ledger  { font-family: Ledger, Arial, serif; font-weight: 400; font-size: 12pt; color: black; }
.adamina  { font-family: Adamina, Arial, serif; font-weight: 400; font-size: 12pt; color: black; }
.andada  { font-family: Andada, Arial, serif; font-weight: 400; font-size: 12pt; color: black; }
</style>


Note that you can put any name you want after the dot. For example, instead of ".ovo" you could name the class ".fancy" or ".whatever" or ".ovo12pt," but for maximum browser compatibility, don't start the class name with a number. For instance, don't use ".12ptOvo."

Save your template, and you're ready to use the fonts. How? One way is to enclose a section of text in a <span> that invokes the class you want, like this:

<span class="ovo">
Text goes here. Blah blah blah.
</span>
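
The class isn't limited to spans, of course. You can hang it on any block element, so that a whole paragraph (or div) picks up the font, for example:

<p class="adamina">
Text goes here. Blah blah blah.
</p>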


Google provides hundreds of free fonts (again, see Google Webfonts for details), and many of them are outstanding. The serif fonts are less numerous and less varied than the sans-serif fonts Google provides, and there are no convincing "typewriter fonts" (which is a serious omission, IMHO), but you'll find no shortage of headline fonts. Check the character sets carefully, in any case, because many of the fonts provide only a basic Latin alphanumeric character set.

For an even greater variety of fonts, be sure to check out Adobe's Typekit site.

Here are some of my personal favorites from the Google collection:


Ovo
Nicole Fally's Ovo was inspired by a set of hand-lettered caps seen in a 1930s lettering guide. A medium-contrast serif font, Ovo has a noticeable yet agreeable linearity, with crisp features that provide good (though not excellent) legibility at a variety of sizes. This sample is 12pt and shows that the font itself is natively smaller than most fonts. Ovo's serifs and crossbars are slanted and adhere to a single common angle. This makes for a distinctive font but can become intrusive in long passages of text. Ovo is thus (arguably) better used for short and medium-length spans of text.


Belgrano
Belgrano is a slab serif type, initially designed for printed newspapers but now adapted for use on the web. It features coarse terminals and larger counterforms that allow it to work well in smaller sizes. (This sample is 10pt.) Letters of the alphabet that are closed but rounded ('o', 'b', 'p', etc.) tend to circumscribe slightly more white space in Belgrano than in fonts like Alegreya, giving a more open feel to long runs of text.


Tinos
Tinos was designed by Steve Matteson as a refreshingly stylish alternative to Times New Roman. It is metrically compatible with Times New Roman (giving about the same number of words per page, for example), even though it looks more condensed. Tinos offers good onscreen readability characteristics and comes with a superbly crafted italic version. In larger sizes, it quickly loses its "condensed" feel.


Arapey
Eduardo Tunni's first sketches of this typeface were made during a vacation in Arapey, a small town in the north of Uruguay, hence its name. While the font is reminiscent of Bodoni, the soft lines and finishes give the text a smooth, distinguished feeling. The font tends to look best at 12pt or larger sizes. This sample is 13pt.


Alegreya
Alegreya was chosen as one of 53 "Fonts of the Decade" at the ATypI Letter2 competition in September 2011. It was also selected in the 2nd Bienal Iberoamericana de Diseño competition held in Madrid in 2010. Originally intended for literature, Alegreya is more angular than Arapey and conveys a subtle variegation that facilitates the reading of long texts. The italic version shows just as much care and attention to detail as the roman version. There is also a Small Caps sister family. The font is natively somewhat small (this is a 12pt sample).


Adamina
An excellent general-purpose serif font for long-form text projects, Adamina was specifically designed for readability at small sizes. As a result, the x-height is increased and complex features (of the kind that contribute to contrast) are kept more controlled. One-sided flaring and asymmetrical serifs provide a pleasant reading experience; the font never feels intrusive. This is an 11pt sample with letter spacing increased by 0.01em and word-spacing set to 0.1em (because otherwise it can look a bit tight, especially at small point sizes).


Ledger
Much of Ledger's charm, as with Garamond, comes from its relatively heavy downstroke thickness compared to the almost frail stroke thickness at the tops of curved letters like 'o' and 'p'. That and the font's slightly more open character make Ledger a good alternative to Garamond-family fonts in larger sizes (though not smaller sizes). The letter forms feature a large x-height, good stroke contrast, and elegant wedge-like serifs and terminals, yielding a "distinguished-looking" font, again in the spirit of Garamond except with somewhat better screen readability.


Andada
Designed by Carolina Giovagnoli for Huerta Tipográfica, Andada shares many of Adamina's most agreeable features but, by virtue of being a slab-serif design, lacks the more refined flourishes (in ascenders and descenders, for example) of Adamina. Perhaps precisely because of the less-adorned design, many readers will prefer Andada over Adamina (or "Garamond-like" fonts) for long passages of text. 

Note: If you found this post useful, please tweet it and/or share the link with a friend. Thanks!

Solar: It's about to be a whole new world.


Many conservatives appear to have an unshakable, bedrock belief that solar power will never be cost-effective. Talk about solar, and conservatives often won't even look at the numbers - they'll just laugh at you. Mention that solar power recently provided almost half of Germany's electricity at peak hours, and they'll say things like "Oh, Germany's economy must be tanking, then." It seems like almost a fundamental axiom of their worldview that solar will always be too expensive to exist without government subsidies, and that research into solar is therefore money flushed down the toilet.

I suspect that many of these conservatives came of age in the 1970s, when solar was first being mooted as the "green" alternative to fossil fuels. They probably saw solar as a crypto-socialist plot; by scaring everyone about global warming and forcing businesses to convert to expensive solar power, "greens" would impose a huge implicit tax on business, causing the capitalist system to grind to a halt.

Maybe some people did support solar for just such a (silly) reason. But far-sighted people knew that technologies often require lots of government support to develop (basic research being, after all, a public good), and they saw that fossil fuels would have to start getting more expensive someday.

And now, after decades of research and subsidies, we may be on the verge of waking up into a whole new world. The cost of solar power has been falling exponentially for the past 35 years. What's more, there is no sign at all that this cost drop is slowing. New technologies are in the pipeline right now that have the potential to make solar competitive with coal and natural gas, even with zero government subsidy. Here are a few examples:
1. Nano-templated molecules that store energy
MIT associate professor Jeffrey Grossman and others successfully created a new molecule [to] "lock in" stored solar thermal energy indefinitely. These molecules have the remarkable ability to convert solar energy and store it at an energy density comparable to lithium ion batteries...

2. Print solar cells on anything
An MIT team led by professor Karen Gleason has discovered a way to print a solar cell on just about anything...The resulting printed paper cell is also extremely durable and can be folded and unfolded more than 1,000 times with no loss in performance.

3. Solar thermal power in a flat panel
Professor Gang Chen has been working on a revolutionary new way to make solar power — micro solar thermal — which could theoretically produce electricity at 8 times the efficiency of the world's best solar panel...Because it is a thermal process, the panels can heat up from ambient light even on an overcast day, and these panels can be made from very inexpensive materials.

4. A virus to improve nano-solar cell efficiency
MIT graduate students recently engineered a virus called M13 (which normally attacks bacteria) that works to precisely space apart carbon nanotubes so they can be used to effectively convert solar energy...

5. Transparent solar cell could turn windows into power plants
...Electrical engineering professor Vladimir Bulovic has made a breakthrough that could eliminate two-thirds of the costs of installing thin-film technology [on windows] by incorporating a layer of new transparent organic PV cells into the window glazing. The MIT team believes it can reach a whopping 12 percent efficiency at hugely reduced costs[.]
And then there are the technologies that are out of the laboratory and being sold to customers. For example, here's this article from the website Grist:
The company is called V3Solar (formerly Solarphasec) and its product, the Spin Cell, ingeniously solves two big problems facing solar PV. 
First, most solar panels are flat, which means they miss most of the sunlight most of the time...The Spin Cell is a cone...The conical shape catches the sun over the course of its entire arc through the sky, along every axis. It’s built-in tracking. 
The second problem: Solar panels produce much more energy if sunlight is concentrated by a lens before it hits the solar cell; however, concentrating the light also creates immense amounts of heat, which means that concentrating solar panels (CPV) require expensive, specialized, heat-resistant solar cell materials. 
The Spin Cell concentrates sunlight on plain old (cheap) silicon PV, but keeps it cool by spinning it... 
[T]he company tells CleanTechnica that it already has over 4 GW of requests for orders. There is 7 GW of installed solar in the U.S., total... 
Maybe this tech or this company will peter out before reaching mass-market scale. But advances in solar technology are coming faster and faster. (Small, distributed energy technologies are inherently more prone to innovation than large, capital-intensive energy technologies.)
As the article says, this could easily be just an illusion. Don't believe the hype. But the point is that there are now lots of companies and academic labs making claims like this, and the rate appears only to be increasing. Sooner or later - and recent trends suggest "sooner" rather than "later" - one of these claims is going to be right.

And on that day, we will wake up into a whole new world.

Cheap solar energy will change pretty much everything. First of all, it will cause a huge boom among essentially all industries in every country (except for competing energy technologies, of course). Energy powers everything. So far, with nuclear technology stalled, we don't have anything cheaper than coal and gas for producing electricity. Our only hope for cheaper energy has been to find better ways to mine coal and gas. With cheap solar, that is no longer true. The Great Stagnation - which many suspect is really just an energy technology stagnation - would suddenly be a lot less scary. 

Mention this possibility to conservatives, and they will of course be skeptical. These days, you are less likely to hear outright denials of solar's cheapness; instead, the knee-jerk conservative response is "Well what about the intermittency? Solar power only works during the day!"

Two things to note about this. First, it's very telling that solar detractors didn't talk much about intermittency a decade ago. They didn't have to; solar was too expensive even at high noon. The fact that detractors are falling back on the intermittency argument shows how much the game has changed.

Second, the problem of intermittency isn't really a big one. Most electricity is used during "peak" hours, which incidentally is when the sun is shining. It's easy to imagine a future in which solar electricity powers the world during the day, and then gas takes over at night. But that will mean solar is the main source, and gas only a sideshow. (And that's even without any breakthroughs in energy storage technology.)

Anyway, it's looking more and more likely that conservatives are going to wake up one day soon, and look around and blink and find that one of their bedrock beliefs has suddenly been invalidated on a grand scale. If they're smart, conservatives will take this opportunity to discard the old belief that solar is the thin wedge of crypto-socialism, and recognize it for what it truly is - a breakthrough technology, being developed by entrepreneurs for profit on the free market.

In other words, exactly the kind of thing they should applaud.


Update: A commenter writes:
Great discussion. I am a conservative and I own a solar energy company. I do not understand your premise on conservatives aversion to solar power. By nature most conservatives I know desire to break free from the control of the energy,environmental, foreign wars and government lobby, and solar allows us to get there.
Good point. And it suggests some other good reasons why conservatives should be more pro-solar.

Update 2: I guess I should give a concrete prediction about when solar will actually start being cost-competitive with fossil fuels, without subsidies, in some locations for some customers. My prediction is: around 2020, or 7 years from now. 95% credible interval would be...um, let's see...2014 to 2040. So that's a fairly wide interval.

Update 3: Commenter Kevin Dick provides some numbers regarding current costs:

[T]he US DOE actually tries to calculate the cost of various energy sources using a complicated levelized cost model. See http://www.eia.gov/forecasts/aeo/electricity_generation.cfm. For power plants coming on line in 2017, their nationwide average estimates in $/MWH are: 
Conventional Coal: 98
Conventional CC Gas: 66
Solar PV: 153  
On average, PV has a ways to go. However, the lowest regional cost of PV is 119, while the highest regional cost of coal is 115 and advanced nuclear is 119.  
So there are probably places today where PV is cost competitive. But the market can surely figure this out at least as well as the government.
If these numbers are right, it means that we are just now hitting the point where solar power makes economic sense in a few places without any government subsidies. That's pretty amazing, if you ask me. I wonder how many of those places there will be in 7 years...
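
Just for fun, here's a rough way to connect those DOE numbers to the prediction in Update 2. A minimal sketch, assuming PV's levelized cost falls smoothly at 10% per year (my own made-up rate, not a DOE figure) while gas stays flat:

    import math

    # DOE 2017 levelized-cost estimates quoted above ($/MWh).
    pv_now, gas_now = 153.0, 66.0

    # Assumed annual decline in PV cost -- an illustrative guess, not data.
    decline = 0.10

    # Solve pv_now * (1 - decline)**t = gas_now for t.
    years = math.log(gas_now / pv_now) / math.log(1.0 - decline)
    print(f"PV undercuts gas in about {years:.0f} years")  # ~8 years

Under those assumptions the crossover lands about eight years out, i.e., right around 2020. Change the assumed decline rate and the answer moves a lot, which is exactly why the credible interval above is so wide.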

Can you name these famous authors?

If you consider yourself a true bibliophile, here's a quick test for you. At right are photos of 18 famous authors (of fiction, although many also wrote nonfiction), from the 19th and 20th centuries. Eleven wrote solely or primarily in English; seven wrote in a language other than English. As far as I know, only one of these people is still alive.

For this particular test, I'm including only male authors. In a future post, I'll do female authors only. That'll be much more challenging.

Scoring works like this. There are 18 authors. Give yourself five points for every correct answer (a possible total of 90 points), plus a free 10-point bonus if you don't use any of the hints shown below. Use even one hint and you forfeit the bonus.

Check the bottom of the page to see how you did. Good luck!

Hints

1. Tried to study engineering; became a wordsmith instead. Dead at 44 after writing a dozen novels plus scores of stories, poems, essays.

2. Left school to work in a factory after his father was thrown into debtors' prison.

3. The godfather of futurist steampunk.

4. Workaholic pioneer of literary realism.

5. Less than three years after winning the 1957 Nobel Prize for Literature, he died in a car wreck at age 46.

6. Novelist, short story writer, social critic, philanthropist, essayist, and 1929 Nobel Prize laureate. Known for epic irony and ironic epics.

7. He won the 1962 Nobel Prize for literature for his "realistic and imaginative writing, combining as it does sympathetic humor and keen social perception."

8. In 1851, after running up heavy gambling debts, he went with his older brother to the Caucasus and joined the army. Then he began writing.

9. In addition to his famous dystopian novel, he wrote literary criticism, poetry, and polemical journalism. Heavy smoking did not help his tuberculosis.

10. Known for his prescience.

11. This Prague-born author's social satire was as grotesque as it was moving.

12. He went from unknown to famous to unknown in the space of his 72-year-long life.

13. Wait. You don't recognize Достое́вский? He and other members of his literary group were arrested, sentenced to death, subjected to a mock execution, then given four years of hard labor in Siberia.

14. Poet, painter, and master of the Bildungsroman. He received the Nobel Prize in 1946.

15. Winner of the 1954 Nobel Prize for Literature. Dead in 1961 at age 61.

16. Better known in Kashmiri as अहमद सलमान रुशदी. He started out as an ad copywriter with Ogilvy & Mather.

17. His major opus was reportedly typed as a single paragraph on a 120-foot-long scroll of paper.

18. While genuinely a gifted writer, he became famous mainly for being famous. Many think of him as having pioneered the "nonfiction novel."


Answers

1. Robert Louis Stevenson. 2. Charles Dickens. 3. H.G. Wells. 4. Honoré de Balzac. 5. Albert Camus. 6. Thomas Mann. 7. John Steinbeck. 8. Leo Tolstoy. 9. George Orwell. 10. Arthur C. Clarke. 11. Franz Kafka. 12. Herman Melville. 13. Fyodor Dostoyevsky. 14. Hermann Hesse. 15. Ernest Hemingway. 16. Salman Rushdie. 17. Jack Kerouac. 18. Truman Capote.


Scoring
(5 points per correct answer plus 10 points if you didn't use Hints)

90 to 100: Master bibliophile. Congratulations.
80 to 89: Excellent. You've been paying attention.
70 to 79: Solid. You're no literary dummy.
60 to 69: Acceptable. It's possible you actually earned your degree.
50 to 59: Poor. You've been reading the wrong stuff.
40 to 49: Were you not paying any attention in school?
below 40: Give your degree back. You were wasting everyone's time.

When is Surface-Deep Knowledge Good Enough?

[Figure: Cross-sections of (hyper)spheres of dimensionality N. As the dimensionality increases, more and more of the volume is near the surface; the pink (inner) and dark red (outer) portions each contain 50% of the volume.]
The common supposition is that when your knowledge of something is "surface deep," it's tantamount to knowing nothing. But is that always true? What if you understand many facets of a complex topic, some perhaps at a deep level, but you lack formal training in those facets? Does that mean your understanding rounds off to zero? Hardly.

Here's one way to look at it. Suppose the subject domain (whatever it happens to be) can be represented, conceptually, as a sphere. Everything there is to know about the subject maps to some region inside the sphere. "Total knowledge" represents the total contents (the total volume) of the sphere.

If the sphere is three-dimensional, half the volume is contained in an inner sphere whose radius is 79.37% of the overall radius. (Stay with me on this for a moment, even if you're not a math person.) Consider a sphere of radius 1.0 (a so-called "unit sphere"). Volume grows as the cube of the radius, and the cube root of 0.5 is 0.7937, so a concentric sphere of radius 0.7937 contains exactly half the unit sphere's volume. In other words, the innermost 79.37% of any 3-dimensional sphere's radius bounds 50% of its volume, and the outermost 20.63% of the radius bounds the other 50%.

This is summarized in the topmost portion of the accompanying graphic, where we see the cross-section of a sphere with the innermost half of the volume shaded in pink and the outermost half shaded in dark red. The boundary between the two half-volumes starts at a point on the radius that's 79.37% of the way from the center to the surface.

Now suppose we consider a hypersphere of dimensionality 10. That's the middle sphere of the graphic (the one that has "N = 10" next to it). The volume of such a sphere grows as the tenth power of the radius. Therefore the inner and outer half-volumes are delimited at a point on the radius that is 93.3% of the way from the sphere's center (the tenth root of 0.5 is 0.93303). Again, the graphic depicts the outer half-volume in dark red. Notice how much thinner it is than in the top drawing.

If we step up the dimensionality to N = 30, the half-volumes are delimited at the 97.72%-radius point (the 30th root of 0.5 is 0.97716). Half the volume of the hypersphere is contained in just the outer 2.28% of the radius.

You can see where I'm going with this. As the dimensionality N grows toward infinity, essentially all of a hypersphere's volume ends up in an arbitrarily thin shell at the surface.
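
If you want to check these numbers yourself, the whole computation is one line: the inner radius enclosing half the volume of a unit N-dimensional sphere is 0.5^(1/N). Here's a quick Python sketch:

    # Inner radius of the half-volume core of a unit N-sphere, and the
    # thickness of the outer shell holding the other half of the volume.
    for n in (3, 10, 30, 100, 1000):
        r_half = 0.5 ** (1.0 / n)  # volume scales as r**n, so solve r**n = 0.5
        print(f"N = {n:4d}: inner radius = {r_half:.4f}, "
              f"shell thickness = {1 - r_half:.4f}")

By N = 100 the half-volume shell is already less than 0.7% of the radius thick.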

So if the dimensionality of a problem is large enough (and you're willing to buy into the simple "volume is knowledge" model set forth earlier), surface-deep knowledge can be quite valuable indeed.

The next time someone tells you your knowledge of something is only "surface-deep," consider the number of dimensions to the problem, then tell the person: "Dude. This is an N-dimensional problem, and since N is high in this case, surface-deep knowledge happens to be plenty. Let me explain why . . ."


Creating Your Own Memorization Tricks

Better memory is something almost everybody wants. How much time have you spent recovering passwords to websites? Trying to remember what you were supposed to buy at the supermarket? Trying to remember phone numbers? Trying to remember where you put the damn keys? Trying to remember where you stashed the access code to your wireless connection?

If you do a survey of memorization tricks, you quickly find that they all rely on the same few sorts of strategies. One common strategy is to connect a picture with whatever you're trying to remember. Another is to connect an emotion. A third, for when you're trying to memorize more than one thing at the same time (such as a name plus a face, or several numbers in a sequence), is to combine multiple "lookup techniques" into a story.

All of these strategies involve making connections between disparate object-types (e.g., associating an image with a number), in hopes of enlisting more than one part of your brain in memorizing whatever it is you're trying to memorize. Once you know that, it's fairly easy to make up your own memorization tricks.

The key is to take advantage of the fact that your brain stores information in different ways. One part of your brain is devoted to face recognition (and facial memory). Another part is devoted to emotional memory. We also have distinct ways of remembering shapes and imagery; sounds; vocabulary and language-based meanings; letters, numbers, or glyphs; mathematical relationships; kinesthetic experiences ("muscle memory"); and a bunch of other stuff I can't remember right now. (Bwa-ha-ha.)

The key is to tie two or more of these memory modalities together.

How many times have you been in a phone conversation where someone suddenly gives you a phone number when you're not ready to copy it down? I made up my own memory technique for that. I take the first portion of the phone number and memorize the visual image of it (the picture of it, as if it's a photo of the number projected on a wall). Then I recite the last portion of the phone number (either silently or out loud) repeatedly, like a mantra, until it's part of my mouth's muscle memory. I don't just "recite" the number in a monotone; I make it a sing-songy, semi-musical ditty, the way you often hear phone numbers sung in radio commercials.

I find that it's easy to hold a "photographic image" of a number in one part of my brain and a sing-songy spoken (or sung) number in another part of my brain, at the same time. Many math savants (people who can tell if a large number is prime, or who can multiply any two numbers in their head, etc.) report that they rely on techniques involving seeing the shapes of numbers. This is often useful when trying to memorize the "photo image" of a number. E.g., 413 is sharp and pointy on the left (it has the shape of the prow of a ship) but round like two buttocks on the right.

Sometimes I use a different technique for phone numbers. (This is going to sound ridiculous.) Suppose the number I want to memorize is 326-5918. This is a fairly difficult number to remember because no two digits are the same. First, I quickly memorize the 326 part by rote. (If I suspect I'll forget the '326' part, I'll go a step further and try to find a mathematical crutch that will help me. In this case: 3 times 2 is 6.) For the 5918 part, I tell myself "I feel like I'm 59 years old, but I want to feel like I'm 18." Or I make up a fantastical little story: "When I'm 59 years old I'll meet someone who's 18." (Yeah, right.) If I'm on the phone with a customer service representative: "Holy crap, she must think I'm 59 years old, but she sounds like she's 18!"
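
If it helps to see the procedure spelled out, here's a toy Python version of that exact trick. The function name and the splitting rule are my own inventions, purely to make the two-modality recipe explicit:

    def plan_phone_mnemonic(number: str) -> dict:
        # Split a 7-digit number into a "photograph" chunk and an "ages" chant.
        digits = "".join(ch for ch in number if ch.isdigit())
        assert len(digits) == 7, "this sketch handles 7-digit numbers only"
        prefix, suffix = digits[:3], digits[3:]
        age_now, age_wish = suffix[:2], suffix[2:]
        return {
            "visualize": prefix,  # hold this part as a mental photograph
            "chant": f"I feel like I'm {age_now}, but I want to feel like I'm {age_wish}.",
        }

    print(plan_phone_mnemonic("326-5918"))
    # {'visualize': '326', 'chant': "I feel like I'm 59, but I want to feel like I'm 18."}

Silly, yes. But that's the point: the sillier the bridge between modalities, the better it sticks.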

I'm still bad with faces and names. The experts say to transform a facial feature into an object, then create a bizarre story about the object that's easy to remember. So for example, suppose you meet someone named Cory Zimmerman. You might make the (absurd) realization that the person's neck reminds you of an apple core (it looks Core-y). Then you might imagine that his zipper is down (Zipperman). A person with an apple core for a neck, with his zipper down, is laughable enough to remember. Arguably.

At any rate, now you know how to invent your own memory techniques. Take any two modalities of learning (muscle memory, image memory, math-relationships memory, etc.) and connect them together, then overlay with a story. The more absurd the story, the better. Remember that.






Macro always fights the last war



Matthew Klein of The Economist has a great post up about the history of modern macro, drawing on a presentation by the incomparable Markus Brunnermeier. If you are at all interested in macroeconomics, you should check it out (though of course econ profs and many grad students will know the bulk of it already).

Here is Klein's summary of pre-2008 macro:
As the slideshow makes clear, macro has evolved in fits and starts. Existing models seem to work until something comes along that forces a rethink. Then academics tinker and fiddle until the next watershed.  
In response to the Great Depression, John Maynard Keynes developed the revolutionary idea that individually beneficial actions could produce undesirable outcomes if everyone tried to do them at the same time. Irving Fisher explained that high levels of debt make economies vulnerable to downward spirals of deflation and default... 
Problems developed in the 1970s. “Stagflation,” the ugly portmanteau that describes an economy beset with rapid price increases and high levels of unemployment was not supposed to be possible—yet it was afflicting all of the world’s rich countries...A new generation of macroeconomists, including Ed Phelps, Robert Lucas, Thomas Sargent, Christopher Sims, and Robert Barro, responded to the challenge in the late 1970s and early 1980s...[their] new “dynamic stochastic general equilibrium” (DSGE) models were based on individual households and businesses that tried to do the best they could in a challenging world...Despite...many drawbacks, DSGE models got one big thing right: they could explain “stagflation” by pointing to people’s changing expectations.
Klein and Brunnermeier both say that macro is changing again, this time in response to the Great Recession and the financial crisis that preceded it. The big change now, they say, is adding finance into macro models.

Reading this, one could be forgiven for thinking that macro lurches from crisis to crisis, always trying to "explain" the last crisis, but always missing the next one.

How true is that? Well, on one hand, science should progress by learning from its mistakes. You have a model that you think explains the world...then something new comes along, and you need to change your model. Great. That's how it's supposed to work.

Doesn't that describe exactly what macro has been doing? Well, maybe, but maybe not. First of all, what you shouldn't do is develop models that only explain the most recent set of observations. In the 70s and 80s, the DSGE models that were developed to explain stagflation had a very hard time explaining the Great Depression. Robert Lucas joked about this, saying: "If the Depression continues, in some respects, to defy explanation by existing economic analysis (as I believe it does), perhaps it is gradually succumbing under the Law of Large Numbers."

But the fact that DSGE models couldn't explain the Depression was not seen as a pressing problem. There was no big push to modify or expand the models in order to explain the biggest economic crisis of the 20th century (though there were scattered attempts).

So macro seems to suffer from some "recency bias".

And here's another issue. When we say macro models "explain" a phenomenon, that generally means something very different, and less impressive, than it means in the hard sciences (or even in microeconomics). When we say that 80s-vintage DSGE models "explain" stagflation, what we mean is "there is the possibility of stagflation in these models". We mean that these models are consistent with observed stagflation.

But for any phenomenon, there are many possible models that are consistent with that phenomenon. How do you know you've got the right story? Well, there are several ways you can sort of tell. One is generality of a model: how well does the model explain not just this one thing, but a bunch of other things at the same time? (This is closely related to the idea of "unification" in physics.) If your model can explain a bunch of different stuff, then it's probably more likely to have captured something real, instead of being a "just-so story".

But modern macro models don't do a lot of that. Each DSGE model matches a few things, and not other things (this is why they are all rejected by formal statistical testing). Ask the author about the things his model doesn't match, and he'll shrug and say "I'm not trying to model the whole economy, just a couple of things." So there's a huge proliferation of models - not even one model to "explain" each phenomenon, but many models per phenomenon, and very little guidance for choosing which model is appropriate to use, and when.

Another clue that you've got the right story is if your model has predictive power. But modern macro models display very poor forecasting ability (as do non-modern models, of course).

Before the 2008 crisis, there doesn't seem to have been very much dissatisfaction with the state of macro. Models were rejected by statistical tests...fine, "All models are wrong," right? There were 50 models per phenomenon...fine, "We have models for anything!" Models can't forecast the future...fine, "We're not interested in forecasting, we're interested in giving policy advice!" I wasn't alive, but I imagine there existed a similar complacency before the 1970s.

Then 2008 came, and suddenly everyone was scrambling to update and modify the models. No doubt the new crop of finance-including models will be able to tell a coherent, plausible-sounding story of why the 2008 Financial Crisis led to the Great Recession. (In fact, I suspect quite a number of mutually conflicting models will be able to tell different plausible-sounding stories.) And then we'll sit back and smile and say "Hey, look, we explained it!"

But maybe we didn't.

Of course, this doesn't necessarily mean macroeconomists could do a lot better. Maybe this is the best we can do, or close to it. Maybe time-series data is so inherently limited, data collection so poor, and macroeconomies so hideously complex, non-ergodic, and chaotic that we're never going to be able to have predictive, general models of the macroeconomy, no matter how many crises we observe. In fact, I wouldn't be terribly surprised if this turned out to be the case. But I think at least we could try, a little more pre-emptively than in the past. And I think that if we didn't tend to oversell the power of the models we have, we wouldn't be so embarrassed when the next crisis comes along and smashes them to bits.

This is How Wrong Kurzweil Is

Yesterday I criticized Ray Kurzweil's prediction (made in a Discover article) of the arrival of sentient, fully conscious machine intelligences by 2029. I'd like to put more flesh on some of the ideas I talked about earlier.

Because of some of the criteria Kurzweil has set for sentient machines (e.g. that they have emotional systems indistinguishable from those of humans), I like to go ahead and assume that the kind of machine Kurzweil is talking about would have fears, inhibitions, hopes, dreams, beliefs, a sense of aesthetics, understanding (and opinions about) spiritual concepts, a subconscious "mind," and so on. Not just the ability to win at chess.

[Figure: Microtubules appear to play a key role in long-term memory.]
I call such a machine Homo-complete, meaning that the machine has not only computational capabilities but all the things that make the human mind human. I argued yesterday that this requires a developmental growth process starting in "infancy." A Homo-complete machine would not be recognizably Homo sapiens-like if it lacked a childhood, in other words. It would also need to have an understanding of concepts like gender identity and social responsibility that are, at root, socially constructed and depend on a complex history of interactions with friends, parents, relatives, teachers, role models (from real life, from TV, from the movies), etc.

A successful Homo-complete machine would have the same cognitive characteristics and unrealized potentials that humans have. It would have to have the ability not just to ideate, calculate, and create, but to worry, feel anxiety, have self-esteem issues, "forget things," be moody, misinterpret things in a characteristically human way, feel guilt, understand what jealousy and hatred are, and so on.

On top of all that, a Homo-complete machine would need to have a subconscious mind and the ability to develop mental illnesses and acquire sociopathic thought processes. Even if the machine is deliberately created as a preeminently "normal," fully self-actualized intelligence (in the Maslow-complete sense), it would still have to have the potential of becoming depressed, having intrusive thoughts, developing compulsivities, experiencing panic attacks, acquiring addictions (to electronic poker, perhaps!), and so on. Most of the afflictions described in the Diagnostic and Statistical Manual of Mental Disorders are emergent in nature. In other words, you're not born with them. Neither would a Kurzweil machine be born with them; yet it could acquire them.

We're a long way from realizing any of this in silicon.

Kurzweil conveniently makes no mention of how the human brain would be modeled in a Homo-complete machine. One presumes that he views neurons as mini-electronic devices (like elements of an electrical circuit) with firing characteristics that, once adequately modeled mathematically, would account for all of the activities of a human brain under some kind of computer-science neural-network scheme. That's a peculiarly quaint outlook. Such a scheme would model the brain about as well as a blow-up doll models the human body.
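
For concreteness, here's roughly what that circuit-element view amounts to: the classic leaky integrate-and-fire neuron, a standard textbook toy. The parameter values below are generic illustrations, not measurements of any real cell, and notice everything the model leaves out: no neurotransmitters, no dendrites, no biochemistry at all.

    # Leaky integrate-and-fire neuron: the "mini-electronic device" view.
    # All parameter values are generic textbook-style placeholders.
    tau_m = 20.0      # membrane time constant (ms)
    v_rest = -70.0    # resting potential (mV)
    v_thresh = -54.0  # spike threshold (mV)
    v_reset = -80.0   # post-spike reset (mV)
    r_m = 10.0        # membrane resistance (megaohms)
    i_ext = 1.8       # constant input current (nA)

    dt = 0.1          # integration step (ms)
    v = v_rest
    spikes = []

    for step in range(int(200 / dt)):        # simulate 200 ms
        dv = (-(v - v_rest) + r_m * i_ext) / tau_m
        v += dv * dt
        if v >= v_thresh:                    # "fire," then reset
            spikes.append(step * dt)
            v = v_reset

    print(f"{len(spikes)} spikes in 200 ms")

A few dozen lines like these, wired into a network, is the kind of abstraction a computer-science neural-network scheme builds on. Everything in the list below is invisible to it.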

Current mathematical models are impressive (see [3] below, for example), but they don't tell the whole story. It's also necessary to consider the following:

  • Neurotransmitter vesicle release is probabilistic and possibly non-computable.

  • Beck and Eccles [2] have suggested that quantum indeterminacy may be involved in consciousness.

  • It's likely that consciousness occurs primarily in dendritic-dendritic processing (about which little is known, except that it's vastly more complex than synapse-synapse processing) and that classical axonal neuron firing primarily supports more-or-less automatic, non-conscious activities [1][7].

  • Substantial recent work has shown the involvement of protein kinases in mediating memory. (See, for example, [9] below.) To model this realistically, it would be necessary to have an in-depth understanding of the underlying enzyme kinetics.

  • To model the brain accurately would require modeling the production, uptake, reuptake, and metabolic breakdown of serotonin, dopamine, norepinephrine, glutamate, and other synaptic substances in a fully dynamic way, accounting for all possible interactions of these substances, in all relevant biochemical contexts. It would also require modeling sodium, potassium, and calcium ion channel dynamics to a high degree of accuracy. Add to that the effect of hormones on various parts of the brain. Also add intracellular phosphate metabolism. (Phosphates are key to the action of protein kinases, which, as mentioned before, are involved in memory.)

  • Recent work has established that microtubules are responsible not only for maintaining and regulating neuronal conformation, but in addition, they service ion channels and synaptic receptors, provide for neurotransmitter vesicle transport and release, and are involved in "second messenger" post-synaptic signaling. Moreover, they're believed to affect post-synaptic receptor activation. According to Hameroff and Penrose [5], it's possible (even likely) that microtubules directly facilitate computation, both classically and by quantum coherent superposition. See this remarkable blog post for details.

Kurzweil is undoubtedly correct to imply that we'll know a great deal more about brain function in 2029 than we do now, and in all likelihood we will indeed begin to see, by then, machines that convincingly replicate certain individual aspects or modalities of human brain activity. But to say that we will see, by 2029, the development of computers with true consciousness, plus emotions and all the other things that make the human brain human, is nonsense. We'll be lucky to see such a thing in less than several hundred years—if ever.


References

1. Alkon, D.L. 1989. Memory storage and neural systems. Scientific American 261(1):42-50.

2. Beck, F. and Eccles, J.C. 1992. Quantum aspects of brain activity and the role of consciousness. Proc. Natl. Acad. Sci. USA 89(23):11357-11361.

3. Buchholtz, F., et al. 1992. Mathematical model of an identified stomatogastric ganglion neuron. J. Neurophysiology 67(2).

4. Hameroff, S. 1996. Cytoplasmic gel states and ordered water: possible roles in biological quantum coherence. Proceedings of the Second Advanced Water Symposium, Dallas, Texas, October 4-6, 1996. http://www.u.arizona.edu/~hameroff/water2.html

5. Hameroff, S.R. and Penrose, R. 1996. Orchestrated reduction of quantum coherence in brain microtubules: a model for consciousness. In: Toward a Science of Consciousness: The First Tucson Discussions and Debates, S.R. Hameroff, A. Kaszniak, and A.C. Scott (eds.), MIT Press, Cambridge, MA. Also published in Mathematics and Computers in Simulation 40:453-480.

6. Hameroff, S., Kaszniak, A., and Scott, A. (eds.) 1998. Toward a Science of Consciousness II: The 1996 Tucson Discussions and Debates. MIT Press, Cambridge, MA.

7. Pribram, K.H. 1991. Brain and Perception. Lawrence Erlbaum, New Jersey.

8. Rovelli, C. and Smolin, L. 1995. Discreteness of area and volume in quantum gravity. Nuclear Physics B 442:593-619.

9. Shema, R., et al. 2007. Rapid erasure of long-term memory associations in the cortex by an inhibitor of PKMζ. Science 317(5840):951-953.