How I Fell in Love with a Schizophrenic

Note: See update at the end of the post.

I had a much different post planned for today, a great post (if I do say so myself) about how three groups of scientists, 60 years ago this year, wrote three papers on the same subject in the same issue of Nature. One of the three papers was only 843 words long, half the length of the others, and much better written. It resulted in a Nobel Prize.

I am going to have to push that one off to tomorrow.

Yesterday, my true love, Sally, had a psychotic break and went into the (mental) hospital, where she'll probably be for the next two weeks. Today, I'm writing as a means of therapy. Therapy for me.

I knew going into this relationship that it would entail ups and downs, and hard work.

I've never been as "all-in" as in this relationship. I'll stick by Sally for as long as she'll have me. I'll stick by her no matter what.

Our meeting was the fluke of the century. I happened to be scrounging around on Craigslist one day looking for a furniture item. Sally happened to be on Craigslist looking for pet supplies. On a lark, I posted a personals ad, something I'd never done (on Craigslist). It was a short ad, maybe three or four sentences total. The heading was something goofy like "Intelligent guy looking for sharp gal."

Sally happened to see the ad, and responded to it. She was the only legit response I got. Altogether, I got 10 replies, 9 of them from come-on artists trying to get me to visit this or that website and enter a credit card number "for verification purposes."

[Photo caption: This is as much of Sally as I can show without triggering her paranoia.]
I was skeptical of Sally's response, as honest and heartfelt as it sounded. For those of you reading this who live outside the U.S. and may not know what Craigslist.org is, it's a free-for-all of want ads ("classified ads" online) for anything and everything. But it's also a dangerous cesspool of scammers and criminal activity (everything from prostitution to sale of stolen goods) with a reputation to match. Want to buy a new refrigerator cheap? Craigslist. Expect to pay cash, ask no questions, and not know where the fridge came from. Want to buy a dog? Plenty for sale. Many kidnapped. Want to buy a boa constrictor or other illegal pet? Craigslist. Want to buy a white mouse to feed your boa? Craigslist. Always Craigslist.

The best way to sum up Craigslist is this: Think of it as an Internet version of Mos Eisley Spaceport. Remember Obi-Wan's famous line from Star Wars? "You will never find a more wretched hive of scum and villainy. We must be cautious."

Sally and I met via Craigslist, the first time either of us had met anyone that way (or wanted to). We'd each gone the OKCupid route, the Plenty of Fish route, and other routes. To no meaningful avail.

We laugh now when we tell friends how we met. It's with great pride that I tell people, in complete seriousness, that I met, and fell in love with, a schizophrenic on Craigslist.

As I was saying a second ago, I was skeptical of Sally's response to my ad (because one is justifiably skeptical of anything that comes into one's life via Craigslist). We corresponded briefly. Very briefly. I asked her in the first e-mail to send a phone number so I could talk to her immediately (my way of verifying that I wasn't dealing with a convincing scammer).

We spoke on July 1, 2011 for about ten minutes. The call ended with a promise of trying a longer phone call the next day.

The July 2 phone call turned out to be a revelation.

Sally told me straight out that she was on disability. I expected her to say she was in a wheelchair. Instead she said she had been deemed incurably schizophrenic by the Social Security Administration fifteen years earlier.

I listened as Sally explained that her particular variety of schizophrenia is actually called schizo-affective disorder, which is schizophrenia with an added twist. People with schizo-affective disorder show all the classical signs of schizophrenia (delusions, hallucinations, confused or irrational thoughts, a greater or lesser degree of paranoia, lack of interest in the world and other people, inability to act spontaneously, occasionally varying degrees of catatonia, and, quite often, indifference to grooming or bathing), but instead of exhibiting the emotional "flatness" (lack of affect) that most schizophrenics evince, people with schizo-affective disorder have an accompanying mood disorder, most commonly bipolar disorder or, as in Sally's case, major depression.

On top of that, Sally told me about having PTSD from a traumatic experience earlier in her life (details of which are not important here). She was still having intrusive thoughts as a result of the PTSD.

We had no trouble conversing on the phone. It seemed I was talking to a cogent enough individual. I asked what her medicines were and how she'd been doing of late. She listed the meds and told me that her major symptoms (delusions, hallucinations, confused thoughts) were under control but that she still had residual paranoia, intrusive thoughts, and depression. SSRIs like Zoloft and Prozac (which help only a minority of depression sufferers) had done little or no good for her.

I was exceedingly impressed with Sally's candor. After an hour on the phone I suggested we meet for coffee at 10:00 the next morning (July 3). She agreed.

We met face-to-face at Starbucks and had a delightful 45-minute talk. She looked just like her pictures. She sounded just like she did on the phone. She was charming and forthright; cheerful; well-dressed; a delight to behold, in every way. By the end of the conversation, we were asking each other about 4th-of-July plans. Neither of us had any. "Why don't we go out to dinner?" I suggested. And with that, we agreed on our first "real" date.

Our first date was lengthy and delightful. After Mexican food and an hour-long walk on the beach, I offered to take Sally home, if she wanted. She shocked the wits out of me by saying: "Why don't we go shoot a few rounds of pool?"

We went to Pete's (a ratty but convivial beach bar in Jacksonville) and played billiards while everyone else went outside to view the fireworks at the pier. Sally preferred to avoid the flashing pyrotechnics and loud noises, as I did; we stayed inside and took turns beating each other at 8-ball, in an otherwise empty bar. We smoked Salems. We drank Blue Moons. We talked of alien visitations and suicide attempts.

I think most guys, on a first date, upon hearing a young lady talk seriously about being visited by aliens, would probably find some reason to cut the evening short. I merely listened. Sally "knew" the alien visitation wasn't real. But it felt real enough to her when it happened. So I asked her to tell me about it in detail. And I listened, without passing judgment. She ultimately laughed the whole thing off, but I knew it was an important part of her reality. It had stayed with her. It was still real to her. Who was I to question her reality?

Sally told me what it had been like to discover that she was schizophrenic at age 22. It was a gradual process. She didn't know that the radios she was hearing weren't real, or that the songs she was listening to on the (real) radio weren't actually written specifically for her, with encoded messages in them. She didn't know that when both her parents happened to wear orange windbreakers on the same day, it didn't mean she would soon be going to jail. She didn't know that when a yellow car pulled in front of her on the road, it didn't mean there was danger ahead. She didn't know that numbers don't have "assigned colors" to them. She didn't know that when she was eating in a crowded restaurant, people weren't talking about her, or that passing helicopters weren't really spying on her.

One day, Sally's mother found her sitting on the edge of the bed, catatonic, unresponsive. Her mom wisely rushed her to the doctor. Sally was referred to a mental-rehab center (the closest thing we have these days to a "mental hospital"), where she was diagnosed as schizophrenic and put on strong medications, medications that (after several days) began to dissolve away Sally's most florid symptoms, which included the constant sound of doors slamming and the whop-whop-whop of helicopters overhead with Homeland Security agents spying on her.

She tells of waking up one morning in her hospital bed, astonished to find the room pin-drop quiet. No loud radios, no slamming doors, no helicopters, no voices of people "talking about her." Just silence.

For the first time in however many months (she doesn't recall how many), she actually heard the world as it is.

Her doctor told her the silence was the meds finally working. She asked what condition she was being medicated for (and asked for verification that she was, indeed, in a hospital of some kind). For days, she had been wondering where she was, and who all the strange people around her were. The doctor explained to her that she had schizophrenia.

A few weeks later, out of the hospital, her major symptoms under control, Sally (under her mother's guidance) applied to the U.S. government for disability. The Social Security Administration put her through a psychological exam. There was no doubt at all that she was schizo-affective, with residual symptoms of paranoia, intrusive thoughts, and major depression. The Social Security Administration put her on a disability pension.

As a disabled person (who can't work a normal job, because of the severity of her residual symptoms), Sally gets a monthly check for $661. That's it. That's all. No more. Here you go: $661 a month, now go take care of yourself.

Need I say, no one can live on $661 a month in the United States. That's the average monthly per-capita income in Gabon, and people have trouble living on that kind of money in Gabon. 

It's an outrage that a disabled person is expected to live independently on $661 a month.

It's the kind of thing that makes me ashamed of my own country.

As it happens, Sally got married in her early twenties. Her husband, an alcoholic musician, proved unable to take care of himself, let alone a wife. In an attempt to straighten himself out, her husband enlisted in the U.S. Army. After a year, he washed out. A couple years after that, he and Sally divorced.

Two abortions later, Sally had herself sterilized.

Then came the boyfriend who would take care of her (more successfully than the ex-husband) for six years. The boyfriend was a career Navy guy who was often at sea for months at a time. When he came home, he spent his time playing World of Warcraft. All of his time went to WoW. All of it. Even when Sally attempted suicide by taking a full bottle of Seroquel, he played WoW. In fact, he stepped over her dying body in the hallway in order to get back to WoW. Sally grabbed a cell phone just before passing out and dialed 911. The boyfriend spent the rest of the night explaining his negligence to the police.

When she recovered from the suicide attempt, Sally left Mr. World of Warcraft.

Three more suicide attempts would follow, in a space of four years.

Sally finally moved in with her aging (but healthy) father.

Nine months later, she and I met.

It's been over a year and a half now of seeing each other. On December 1, 2012, we rented a house in Jacksonville and moved in together. But after three weeks, Sally fell into a deep depression. Antidepressants (a wide variety of them) simply have not worked for her. Her depression isn't just biochemical. It's situational. She wants to be able to hold a job, but can't. She wants to be able to be independent, live on her own if she wants to, but can't. As much as she adores me, she's depressed to know she can never move out and live on her own, not even in theory. She can only move to her father's house again. But she doesn't want that.

I stay with her not only because I understand her problems and want to be there for her, but because I'm totally taken by her (a polite way of saying I'm madly in love with her) and have been since the day we met. She's truly a beautiful person inside and out. Guileless, straightforward, self-aware, good-hearted, open-minded, always truthful, always kind; the type of woman I've always wanted to meet and fall in love with. I could never say anything bad about her. (How could I? There's nothing bad to say.) I could never do anything but love her, and want to take care of her. And I want what we have to last forever.

I've told Sally many times, I never want to go on a first date ever again. I'll never be interested in another woman. I'll throw myself in front of a bus for her if she wants it. I'll run naked through the streets if she says to. (I pray she never becomes that crazy, of course.) There isn't anything I wouldn't do for Sally.

When I look at Sally, sometimes I feel sad. Sad for the thousands of schizophrenics alive in the U.S. today who have not gotten treatment; who sit homeless and disheveled on street corners in every major city in America, talking to themselves or raving at no one in particular, waiting for a caring passer-by to offer a few dollars to buy them the lunch the government won't buy them. Sad for the countless thousands of schizophrenics who were subjected to cold-water baths, spinning chairs, and other quack therapies in lunatic asylums a century ago. Sad for the thousands burned as witches in the Middle Ages.

I look at Sally with happiness, too. Happiness that she has gotten treatment. Happiness that, thanks to powerful new medicines, she has gotten a good part of her life back. Happiness that she has chosen to be with me.

I hope, when Sally comes out of the hospital in a week or two, she'll be a lot like the old Sally I met that fateful July day in 2011. And I hope we can laugh about aliens, and go back to fixing up the house, and maybe even shoot a round of pool or two now and then (minus the Blue Moons; we stopped drinking as of last September 19). We'll keep trying to find an antidepressant that works, and a work-at-home job that Sally can do to make extra money, so she can feel more independent. Fate willing, we'll build a life together.

No doubt there's a lot of hard work ahead, for both of us. But you know what? I never saw anything in this life that was worth a damn that didn't involve hard work. The idea is not to shun the hard work but to embrace it. Embrace it with both arms, squeeze it hard, and accept it, not with fatalistic resignation but with the sure knowledge that if you do embrace it, good things will come, eventually. The alternative, giving up, is unthinkable.

I'll never give up on Sally, as long as I breathe.

*

UPDATE: As of January 14, over 60,000 people from 144 countries have viewed this post. Sally is now out of the hospital and back to normal (or what passes for it). We were both amazed and humbled by the response to the blog. Partly as a therapy measure, I urged Sally to write a memoir, a book that gives the complete backstory to this post, which she is currently doing. If you would like to follow her progress on the memoir (working title: ALMOST NORMAL), please enter your e-mail address in the short form at the bottom of this page. You'll get one to two updates a month from us. No spam. Easy unsubscribe. No nonsense. We look forward to hearing from you!

NOTE: A Russian version of this post is available at http://homoveresapiens.ru/kak-ya-vlyubilsya-v-shizofrenika/ (thank you Natalia Stotskaya!).

Easy Rules for Improving Your Writing

Never be afraid to stomp on something to death and start over. In fact, get in the habit of doing it.

Don't keep a long, awkward sentence in hopes of reworking it. There's no sense polishing a turd.

Write multiple variations of the same sentence. Choose the best one.

Sentences that begin with a gerund (an -ing verb used as a noun): Start over. Let verbs be verbs, not nouns.

Sentences that start with "There is" or use a "There is . . . that" construction: Cremate at once.

Sentences that begin with a subordinate clause (one that doesn't contain the subject of the sentence) followed by a comma: Toxic. Keep in a sealed lead box.

Sentences in which the subject is far removed from the predicate, or located near the end of the sentence: Junk. No cash value.

Sentences that begin with an adverb or adverbial clause, followed by a comma: Seldom the best thing to do.

Sentences with more than one subordinate clause: Puts a huge workload on the reader. Chunk it up. Write it as more than one sentence.

The use of possessive pronouns (such as whose) with inanimate objects: Amateurish. Don't say "The brands whose prices are going up will be announced each week." Say: "Price increases for brands will be announced weekly."

"Effective" or "effectively": Whenever you say something like "effective marketing" (or "writing effectively") ask yourself as opposed to what? Isn't the effectiveness of what you're proposing already implied? (It sure as heck better be.)

Eliminate phrases like "the fact that" or "based on the fact that" or "due to the fact that." They're never needed. Show causality with "because" or "due to." Better yet, make causation implicit in what you're saying.

Very is overused and thus weak, not strong. Rather than strengthening something with very, you're often weakening it. Try exceedingly, extraordinarily, astonishingly, etc.

Avoid weak modifiers like "somewhat," "rather," and "fairly." Make definitive statements.

Semicolons; avoid.

Ellipses . . . ditto.

Watch out for exclamation points!

And above all, don't not avoid double negatives.

Tomorrow's post is special: In it, you'll see how 850 words of clear, direct prose can result in a Nobel Prize. (Yes, a Nobel Prize.) Come back tomorrow to read the whole incredible story.

Should Japan "reflate"?



In Japan, the term "rifure" stands for "reflation" or "reinflation", meaning a (hypothetical) Big Push by monetary policymakers to end the decades of deflation in that country. The question of whether Japan should "reflate" is the biggest question in Japanese macroeconomic policy circles.

Now, I've gone on the record as a skeptic regarding the power of central banks to fine-tune the macroeconomy. In that post, I mentioned the idea of an inflation "snap-up", where expansionary monetary policy suddenly and unpredictably pushes inflation from very low to problematically high. Also, I've cast doubt on the idea that Shinzo Abe, the current hero of the Japanese "reflationist" camp, is really committed to following through on the radical changes he's proposed.

Still, I think that if Japanese politicians and policymakers were willing to try a big push for reflation, it would be a good idea. I explain why in an article (in Japanese) published on the Japanese econ blog site Agora. Here is an English translation of my main argument:
[T]he gains [of an attempt at reflation] seem disproportionate to the risks. Reflation has the potential to help Japan solve three of its biggest problems at once: 1) the slow economy, 2) deflation, and 3) the huge national debt. Monetary easing will probably lower Japan’s unemployment a bit, and will also cause the yen to weaken, helping exporters. It will also erode the real value of the national debt, which at over 140% of GDP is the highest in the developed world.
The only risk, on the other hand, is hyperinflation. How much should we fear hyperinflation? In terms of its effect on the economy, it is very similar to a sovereign default, which Japan is headed for anyway if it does not get its deficit spending under control. Hyperinflation destroys savings and causes economic activity to grind to a temporary halt; it usually lasts for about a year, before the government is forced to implement harsh austerity. After the end of hyperinflation, economies often recover strongly as economic activity restarts. 
In other words, hyperinflation is bad, but it is not the end of the world. Furthermore, it seems like an unlikely event. Hyperinflations are rare in history, and usually seem to coincide with severe disruptions to the real economy, such as wars.  
So when contemplating reflation, we must balance the likely possibility of three very important gains against the unlikely possibility of one bad but not world-ending loss. To me, the risk seems to be one worth taking.
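
(A quick back-of-the-envelope aside on the debt point: with a fixed nominal debt D and a steady inflation rate π, the real value of that debt after t years is

\[
D_{\text{real}} = \frac{D}{(1+\pi)^{t}}
\]

so, purely for illustration, five years of 10% inflation would shrink the real burden to 1/(1.1)^5, about 62% of its starting value, before accounting for any new borrowing or for bondholders demanding higher nominal rates.)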
Nobuo Ikeda, the prominent Japanese econ blogger who runs Agora and graciously published my piece, offers a rebuttal (also in Japanese). Here is a (rough) translation of his main counterargument:
[Noah is espousing] the "burn it down and start over" theory I often hear these days; but would the damage [from hyperinflation] really be finished in just one year? The bad debt problem in the 90s lasted ten years! And five years on, recovery from the American financial crisis has not yet been achieved; in fact, the effects have spread to Europe. A Japanese financial crisis would be even bigger, and I fear the Japanese economy would never be able to regain its footing.
As I showed in the hypothetical scenarios outlined in my book, the real danger of hyperinflation is not an economic collapse, but a financial one. If [nominal] interest rates soar [as they would in a hyperinflation], the national debt bubble will burst and most local banks will collapse...the crisis would be even longer-lasting than the one now facing Europe... 
As Niall Ferguson (whom Noah dislikes) has pointed out, many civilizational collapses begin with a financial collapse.
Basically, Ikeda argues that even if hyperinflation is unlikely, it is so catastrophic that we should not countenance even the smallest possibility of it happening. 

Well, I have three thoughts about this:

1. Niall Ferguson shows that financial collapses precede civilizational declines. Precedence does not equal causation. Remember, financial markets are forward-looking, so if people see a civilizational collapse coming, of course they're going to sell their assets; that doesn't mean a hyperinflation or default would have caused civilizational collapses where none occurred. If civilizations collapsed every time there was a financial crisis, then America would have collapsed after 1929, Germany after 1920, and Sweden after 1992. In other words, financial crises have predicted 10 of the last 3 civilizational declines. 

2. Looking at the list of past hyperinflations, I don't see any instance of monetary policy experimentation causing a civilizational collapse. In many of the cases, hyperinflation was followed by a return to robust health (the Weimar hyperinflation, the end of Polish communism). In others, political upheaval followed hyperinflation, but these upheavals seem to have been related to wars or civil unrest (which in turn probably caused the hyperinflations). So I'm not denying the possibility that Japanese "reflation" could cause Japanese civilization to collapse; I'm merely saying it would be something new and unprecedented if it did.

3. Ikeda mentions the long stagnations that followed the Japanese "bubble burst" of 1990 and the American financial crisis of 2008. But it is important to remember that these involved deflation, not hyperinflation. The two are not the same; in fact, in one important way they are opposites. Deflation exacerbates debt; inflation destroys debt. A Japanese hyperinflation (or sovereign default) would eliminate Japan's government and corporate debt. That would act as a tax on the old people who own bonds. But it would remove the overhang of debt from Japan's economy, increasing the expected future income of workers and young people. That might be good for Japan's economy, not bad.

So it seems to me that the main risk of a Japanese hyperinflation is political. If hyperinflation caused a revolution and a collapse of the current regime, a much less effective Japanese regime (yes, you read that right!) might replace it; perhaps an autocracy. That, I agree, is a big risk.

So I'm left ambivalent about "reflation". I still think it's worth a try. I think hyperinflation seems unlikely (though I don't really know this for sure). Based on the historical evidence, I don't think the economic risks of hyperinflation are particularly huge. But the political risks might be big. And that might be reason enough to avoid "reflation". I can't say for certain, though I still definitely lean toward reflation.

In any case, as I said, Shinzo Abe is unlikely to actually carry through any sort of serious push for reflation. So the question is, in all likelihood, a moot one...

On the Need for Shorter Sentences

Want to make your writing easier to read? Stay away from long sentences, and vary your sentence lengths.

The old rule of thumb about sentence length used to be: If you can't read a sentence aloud in one breath, it's too long. In my opinion (which is all that counts here; it's my blog), that rule is off by a factor of two. You should be able to read aloud any two consecutive sentences without incurring hypoxia.

You'll find it much easier to obey the out-loud rule if you simply vary your sentence lengths. After a long-ish sentence or two, give the reader a break by throwing in a shorter sentence. The shorter the better.

Also vary paragraph lengths.

Combine the two techniques (varied sentence lengths; varied paragraph lengths). Try this easy experiment: Write or rewrite a paragraph to have a super-short opening sentence (say, six words or less). Write or rewrite a paragraph (not necessarily the same one) to end on a super-short sentence. In either case, take note of the short sentence's impact relative to all the other sentences.

A good strategy for simplifying long-ish sentences is to start by eliminating "nice but not strictly necessary" words, then chunk the sentence up into single thoughts. Consider this example:

Because of the fact that a widespread practice of discrimination continues in the field of medicine, women have not at the present time achieved equality with men.

This sentence is grammatically correct. But let's face it; it sucks ass. It lacks impact and sounds like "student writing." Strip out unnecessary words ("at the present time," "because of the fact") and get right to the core meaning. Discrimination continues in medicine; that's one thought. Women have not yet achieved equality with men; that's another thought. Which is more important? To me, the key takeaway is that women have not achieved equality with men. Once somebody tells me that, I want to know why. So don't tell me the why first; tell me the what, followed by the why. "In medicine, women have yet to achieve equality with men due to widespread discrimination."

Have mercy on the reader's brain. Do some pre-parsing for your already overworked reader. For example: When you have a sentence made up of two clauses separated by a comma and a "but," consider splitting the sentence into two sentences. Let the second one begin with "But." Consider:

Statistics show that most people believe aliens have visited earth, but there is no convincing physical evidence for such a belief.

That's grammatically correct. It's also a mouthful. Try something like: "Statistics show that most people believe aliens have visited earth. But there is no convincing physical evidence for such a belief." You've saved the reader an important bit of parsing. Plus, the average sentence length is now 10.5 instead of 21.
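
If you want to check your own numbers, the arithmetic is easy to automate. Here's a minimal Python sketch (a naive splitter: it treats every period, question mark, or exclamation point as a sentence boundary, so abbreviations will fool it, but it's fine for a rough check):

    import re

    def average_sentence_length(text):
        # Naive split on sentence-ending punctuation.
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        return sum(len(s.split()) for s in sentences) / len(sentences)

    before = ("Statistics show that most people believe aliens have visited "
              "earth, but there is no convincing physical evidence for such a belief.")
    after = ("Statistics show that most people believe aliens have visited earth. "
             "But there is no convincing physical evidence for such a belief.")

    print(average_sentence_length(before))  # 21.0
    print(average_sentence_length(after))   # 10.5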

I'm pretty sure a math geek could prove quite easily that the amount of effort required to understand a sentence grows exponentially (not linearly) with the number of words or phrases in the sentence. That's because the first thing a reader tries to do, if the sentence is non-trivial, is parse the sentence into least-ambiguous form. As sentence length grows, the number of possible parsings grows out of control because of all the possible permutations of meaning. Eventually, if the sentence gets to be long enough, the reader's head explodes. We don't want that.
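
For the math geeks: one crude way to see the explosion is to count the possible binary groupings of a sentence's words, a rough stand-in for parse trees. That count is the Catalan number sequence, and it grows exponentially. A quick Python sketch (illustrative only; real parsers prune most of these possibilities instantly):

    from math import comb  # Python 3.8+

    def catalan(n):
        # The nth Catalan number: distinct binary groupings of n+1 items.
        return comb(2 * n, n) // (n + 1)

    for words in (5, 10, 20, 40):
        print(f"{words} words: {catalan(words - 1):,} possible groupings")

Five words allow 14 groupings; forty allow about 7 × 10^20. Exponential is putting it politely.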

January is Prevent Head Explosion Month (tell your friends), so please, do your best to simplify your prose. It's really not that hard. The alternative is a big fat bloody mess, no matter how you look at it.

Brain Anatomy and the Difficulty of Writing

I had an Aha moment the other day while thinking about why so many people consider writing difficult (often frighteningly so).

[Image caption: Want to see how hard it is to overcome left-brain language dominance? Name the colors the words are printed in, out loud, without reading the words. This is the so-called Stroop Effect.]
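
(No printed color chart handy? A few lines of Python will generate a Stroop stimulus in most ANSI-capable terminals. This is a minimal sketch; the escape codes are standard, but some Windows consoles will need a helper like colorama.)

    import random

    # Standard ANSI foreground color codes (most Unix terminals).
    COLORS = {"red": "\033[31m", "green": "\033[32m",
              "yellow": "\033[33m", "blue": "\033[34m"}
    RESET = "\033[0m"

    names = list(COLORS)
    for word in random.sample(names, len(names)):
        # Print each color name in a mismatched ink color.
        ink = random.choice([c for c in names if c != word])
        print(COLORS[ink] + word.upper() + RESET)

Name the ink colors aloud, top to bottom, without reading the words; the interference you feel is the left hemisphere's reading reflex refusing to yield.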
The key insight: Brain anatomy is not well optimized to make good writing easy. 

We know that most of the language centers are on the left side of the brain. We also know that the left brain is where linear, logical, rule-based thinking occurs. The right brain understands less-linear things like metaphors, idioms, graphical relationships in space, and improvisation. In at least a colloquial sense, we think of the right brain as "creative."

Therefore, the difficulty of writing is partly anatomical. The brain's language centers are proximal to the rigidly logical "linear thinking" parts of the brain. If you're a computer programmer, that's a good thing. If you're trying to write poetry, it's not.

It's not impossible to trick the right brain into becoming more involved in left-brain tasks. My favorite tricks are:

Buy an Etch A Sketch.

Build a Lego puzzle.

Develop an intimate relationship with a yo-yo.

If what you're writing needs accompanying illustrations, work on the illustrations first.

Read some poetry before trying to write prose. Try some e.e. cummings.

Create a pro forma "outline" for your piece in the form of a quick doodle with lots of circles and boxes (a Venn diagram on crystal meth). I like a white-board for this, but the back of an envelope will do just as well.

Find an image that elicits a strong non-verbal reaction in you. Study it. Modify it in Photoshop.

Watch your favorite TED Talk video. Or watch a new one.

Listen to music that relies on improvisation, or at least lack of repetition. (Read up on the Mozart Effect.) My favorite musical works in this regard are the live solo performances of pianist Keith Jarrett. Most of his concert performances are pure improvisation. Some are quite abstract (think Jackson Pollock on piano). If you're not familiar with Jarrett, buy his signature Köln Concert album, and go from there.

Maybe you know of other tricks. Leave them in a Comment below. Thanks!


Trust not in Shinzo Abe, ye monetarists!



Monetarists are an innocent lot. American bloggers, op-ed writers, and economists seem quite taken in by Japanese Prime Minister Shinzo Abe's promise of a grand monetary experiment. Abe is threatening to revoke the Bank of Japan's independence, forcing those recalcitrant hard-money-loving inflation hawks to set a hard target of 2% inflation or higher. To an American monetarist, this is really Christmas. Finally, we get to actually test the hypothesis that a central bank can hit an inflation target if it really puts its mind to it! Finally, we get to see the ultimate two-men-enter, one-man-leaves doomsday showdown between the immovable object of Japan's implacable deflation and the irresistible force of Print Money And Buy Stuff!

But it is not to be. Shinzo Abe is not the Jesus of monetary policy. American monetarists, I feel for you - I would love to see the idea of monetary policy dominance put to a stark test - but I just don't think it's going to happen.

You see, unlike most Americans who weren't watching back in 2006-2007, I remember Shinzo Abe's first term as PM. So I know what a walking facepalm this man represents. A brief refresher course: Abe's agricultural minister killed himself after a corruption scandal, and another of his cabinet ministers resigned after another such scandal. His health minister, Hakuo Yanagisawa, managed to hang on despite a wave of negative publicity after he called women "baby-making machines".

Abe is mainly interested in social and cultural issues. He is the Japanese style of socio-cultural conservative, sort of a Newt Gingrich type. As prime minister in 2006-7, he enacted a law requiring public schools to teach "patriotism", mounted a vigorous denial of Japan's WW2 "comfort women" sex-slavery, gave gifts to the nationalist Yasukuni Shrine (angering China), and pushed to de-emphasize Japan's WW2 war guilt in school textbooks. His lifelong quest has been the revision of Japan's "pacifist" constitution to allow Japan to have a normal military.

I of course don't mean to imply that Abe's cultural conservatism makes him unlikely to experiment with monetary policy (unlike in America, in Japan "hard money" is less of a conservative sacred cow). Instead, what I mean is that Abe really just does not care very much at all about the economy. I mean, of course he wants Japan to be strong, and of course he doesn't want his party kicked out of power. But his overwhelming priority is erasing the legacy of World War 2, with the economy a distant, distant second.

This is why Abe allows himself to be surrounded by corrupt and incompetent people. He is entirely focused on his cultural conservative quest. The other day Abe called Obama "Bush". He just deeply, truly, does not care about stuff that does not involve boosting Japanese nationalism.

So why is Abe making all this noise about revoking central bank independence, setting hard inflation targets, etc.? I have a hypothesis: He is talking down the yen.

Since Abe was elected in a landslide a couple weeks ago, the yen, which had been at historically high levels, has plunged. (Update: from Matthew Boesler at Business Insider, here is a chart of the yen-dollar exchange rate since the LDP returned to power:
[Chart: USD/JPY exchange rate, H2 2012]

Wow!)

This is bound to give a (possibly temporary) boost to Japan's beleaguered exporters, who have been suffering quite dramatically under the strong yen. Remember, Abe's LDP, which ruled Japan for 55 years, has always been closely connected with export manufacturers in the so-called "iron triangle". The LDP, which thrived in the 60s, 70s, and 80s, has always been a mercantilist outfit, weakening the currency to pump up exports, using the surplus from exports to support Japan's corporatist social model in the so-called "two-tiered economy". In Japan's days as a high-quality low-cost export powerhouse, this worked marvelously and kept everyone happy, allowing the LDP to keep power for generations. The recent strength of the yen, however, has been looking like the final nail in that system's coffin.

By making lots of noise about revoking the BOJ's independence, Abe is trying to convince foreigners that inflation is on the way, thus sending the yen south. Basically, he is taking a page out of the LDP's old playbook - weaken the currency, pump up exports. Sure, it's not a sustainable strategy, but Abe doesn't need it to be sustainable; he just needs it to give the economy a fillip for long enough to let him complete his precious revision of the Japanese constitution. After that, he couldn't care less about what happens to the economy. It's a cursory, stopgap measure. To Abe, Japan's pride as a nation is infinitely more important than the fatness of its people's wallets.

Abe is not exactly trying to keep this strategy a secret, saying that counteracting the strong yen is a top priority.

So what does this mean for monetary policy? It means that Abe is targeting exchange rates, not inflation (or NGDP). He'll do what he has to do to tweak foreign expectations enough to keep the yen weak, but he won't actually follow through and revoke BOJ independence. And even if by some miracle he does revoke BOJ independence, he won't insist on a hard inflation target. A non-independent BOJ wouldn't be controlled by Shinzo Abe; it would be controlled by the Ministry of Finance, and those people are just as likely to fear the peril of hyperinflation.

Expect Abe to continue making noise at the BOJ, and expect to see some token BOJ response, i.e. a bit more quantitative easing. If the yen starts rising again, expect Abe to switch gears and start talking about (or actually carrying out) currency market intervention of the type carried out in 2004. Essentially, he will continue the current talk of radical monetary policy experimentation precisely as long as he thinks it's holding down the yen, and then abandon it for a different mercantilist stopgap. Do not expect any real action against the BOJ.

OK, maybe I'm wrong. I'm no expert in Japanese politics, just a guy who has been reading about the LDP for a long time. If Abe follows through on his radical monetary proposals, I'll gladly eat crow. But think of it this way. If a British guy had come up to you six months ago, brimming with optimism that Vice President Paul Ryan would enact his famous Ryan Plan and save the U.S. from ballooning budget deficits, what would your response have been? That's how I feel when I see people put their trust in Shinzo Abe.

I hope I'm wrong. I'd love to see a bold monetary experiment. But I'm pretty sure I know these LDP jokers, and I'm pretty sure they're not going to deliver in the crunch.


Update: And now today, via the WSJ, I see that I'm completely right about the LDP's economic philosophy. Observe:
Japan's new finance minister upped the ante in the country's war of words against the strong yen, lashing out at the U.S. and Europe for letting their currencies weaken dramatically and calling on the U.S. to strengthen the dollar. 
The tirade from Taro Aso, Prime Minister Shinzo Abe's point person on currency strategy, underscores the increasingly pugnacious stance of the fledgling Abe government... 
"The U.S. ought to do its job and make the dollar strong. And what about the euro?" Mr. Aso said Friday... 
A procession of Japanese executives and politicians have bemoaned the yen's strength, blaming it for a loss of competitiveness, dwindling earnings, bankruptcies and the relocation of operations abroad... 
The dollar has recently staged a sharp recovery, as Mr. Abe's pledge to strong-arm the Bank of Japan into easing monetary policy to weaken the yen has driven investors to sell off the yen. (emphasis mine)
Oh yeah, they're really thinking a lot about inflation rates and money demand and the Zero Lower Bound. Sure.

Update 2: I think Reddit puts it very succinctly by saying: "Shinzo Abe isn't reading Scott Sumner, he just wants a return to Japanese mercantilism." That's exactly it. A mercantilist in monetarist's clothing.

"Problem Words" in English and How to Use Them

I want to talk about some specific words and usage issues that cause trouble for a great many native speakers of English who should know better. Call these pet peeves, if you must. I don't like to think of them as pets, though. Savage beasts all.

Ironic means that the outcome of something had a distinct quality of unexpectedness to it. But I like to think it means something more. To me it implies that there are (or were) two possible outcomes or interpretations of something, one that's expected but turns out to be wrong, and one that's not expected but actually true. Contrast this with the word paradoxical, which (to me) implies two outcomes that seem to be at odds with one another yet are both demonstrably true. Use paradoxical when there are two true yet seemingly incompatible outcomes. Ironic is less concrete a word and not as widely understood as paradoxical.

Poignant means keenly distressing to the senses and/or arousing deep and often somber emotions. It doesn't mean bittersweet. It can be an outcome of a bittersweet situation, but by itself it does not mean bittersweet.

Decimate means to reduce by one tenth. Never use it to mean "destroy completely." Decimation was (in Roman times) the practice of killing one out of every ten mutineers (or sometimes one in ten prisoners of war), as a means of demoralizing the nine out of ten survivors. The preferred meaning of decimate remains reduction by ten percent. You can use it to mean "reduce significantly," but never use it to mean total eradication.

Irregardless is always incorrect. Use "regardless" unless you want to appear careless or stupid.

Don't say "which" when you mean "that." Example: "The subject which interests me most is philosophy." Use that, not which, in such a sentence. There's a difference between "The crane that was the cause of the accident was demolished" and "The crane, which was the cause of the accident, was demolished." Which should be reserved for clauses set off by commas.

For God's sake learn the difference between it's (a contraction) and its (possessive). The reason people get this mixed up is that the rule for making something possessive, in English, is to add apostrophe-s to the end of whatever it is. So it's natural to think that if you add apostrophe-s to "it," you get a possessive form. Not true, though. The possessive form of it is its.

Learn to use "nor" as the negative form of "or."
In particular, don't use "or" in connection with "neither." Don't do: "Using ain't is neither correct or necessary." The word "neither" here demands that you use "nor."

Try not to use "almost always" or "almost never." It's semantically akin to saying "almost infinite." The words always, never, and infinite are absolute and binary. Something is infinite, or it's not. Something either occurs always, or it doesn't. "Almost" and "always" are two different concepts.

Don't say infer when you mean imply.

Bemused has nothing whatsoever to do with amusement. (Read that again.) It has everything to do with bewilderment or befuddlement.

Peruse means to read carefully, not to skim lightly or read haphazardly.

Who versus whom: My advice? Don't worry about "whom" versus "who" unless you're writing for an audience that cares about such things. It's not always better to use "whom" properly. Using it properly can mark you as a self-righteous pedant! It all depends on the audience. My rule is to always use "who" unless you're convinced the reader will object to its improper use. (And you might have noticed, I don't much care about splitting infinitives.) Most readers won't care. You're writing for most readers, by the way (and not your high school English teacher), aren't you?

And finally:

Literally refers to something that actually happened (or is happening) in reality. It represents the concrete reality of something, not anything metaphoric. There's nothing speculative (nor merely descriptive) about a thing that's literal. "He literally went insane" means the person actually became clinically schizophrenic per DSM-IV-TR #295.1–295.3, 295.90. "He literally went ballistic" means the person had enough momentum to follow a ballistic trajectory through space. "He literally melted down" means the person became hot enough to exceed the melting point of his constituent materials. Don't say literally unless you really mean it.

Are voluntary contracts always mutually beneficial?



In a Crooked Timber post about cyborgs, Chris Bertram writes:
I expect someone will be along to explain how...contracts [requiring employees to get cyborg modifications] would be win-win.
Matt Yglesias drops by in the comments to write:
It seems pretty obvious how they would be win-win: They’d be agreed to voluntarily by two mentally competent adults.
Actually, this is a common misconception, so I thought I'd write a quick post to correct it. Basic Econ 101 does not imply that voluntary contracts are mutually beneficial to the people who enter into them.

The misconception springs from some solid intuition. In general, people who are free to do what they want, do do what they want. Maybe sometimes they don't realize what they want, or are subject to compulsions like addiction, but in general, free people only make deals that they want to make.

BUT, it doesn't follow that contracts are mutually beneficial. The reason is that there is uncertainty in the world.

Suppose that there's a deal that has a 60% chance of being to my benefit, and a 40% chance of being to my loss (assume equal benefit and loss here, just for simplicity). If I'm a rational person, and not too risk-averse, I would do that deal. But that still leaves a 40% chance that I'll lose out on the deal.

This is what's known as the difference between ex ante and ex post. Econ 101 says that people only make deals that are to their benefit ex ante. But that still leaves a lot of room for people to lose out ex post. And ex post is more important, since it's the real thing that actually happens to people, whereas ex ante is just what we guess will happen. (As a commenter points out, insurance contracts are a really good illustration of this principle. Would you buy health insurance if you knew you weren't going to have any health problems? Would your insurer sell you insurance if they knew you were going to get sick?)
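
To make the distinction concrete, here's a minimal Python simulation of the 60/40 deal described above (my toy numbers, nothing more):

    import random

    random.seed(42)  # reproducible toy example

    TRIALS = 100_000
    P_WIN = 0.6
    WIN, LOSS = 1.0, -1.0  # equal benefit and loss, for simplicity

    outcomes = [WIN if random.random() < P_WIN else LOSS for _ in range(TRIALS)]

    ex_ante = P_WIN * WIN + (1 - P_WIN) * LOSS  # the rational calculation: +0.2
    share_losing = sum(1 for x in outcomes if x < 0) / TRIALS

    print(f"Ex ante expected value: {ex_ante:+.2f}")
    print(f"Share who lose ex post: {share_losing:.1%}")  # about 40%

Every simulated person takes the deal for a perfectly sound ex ante reason; roughly two in five are still worse off ex post.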

Of course, all this doesn't mean the government needs to step in and stop people from taking risks. It just means that you can't infer outcomes from people's decisions.

Now just for fun, and because I don't like writing short blog posts, let's move out of the Econ 101 world, and introduce two advanced concepts: 1) asymmetric information, and 2) Knightian Uncertainty.

In a world of asymmetric information, one party to a deal may know something that the other party doesn't. For example, suppose you and I are considering making the deal in the above example. You think that the deal gives you a 60% chance of benefiting and a 40% chance of losing out. So, by your best guess, this deal is worth it ex ante. And so you're willing to do the deal.

But suppose I have information you don't (of which you are entirely unaware). Suppose I know that in reality, you have only a 30% chance of benefiting from the deal and a 70% chance of losing out. If you knew what I knew, you'd never agree to the deal.
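
In expected-value terms (with B standing for the equal benefit or loss at stake), the same deal flips sign depending on whose probabilities are right:

\[
\text{Your estimate: } 0.6B - 0.4B = +0.2B
\qquad
\text{The truth: } 0.3B - 0.7B = -0.4B
\]

You compute +0.2B and sign eagerly; the true figure for you, which only I can see, is -0.4B.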

Now, if we could do 100 such deals, you'd eventually realize that I systematically had better information than you, and you'd become wary and stop making deals with me (as in George Akerlof's "lemons" model of asymmetric information). But in the real world, conditions are changing all the time - today I might have information you don't, and you might have information I don't. Thus, not only is there asymmetric information, but there's uncertainty (called "Knightian Uncertainty" after Frank Knight) about how likely it is that there's asymmetric information.

This allows people to be swindled again and again, as new kinds of asymmetric info keep popping up and falling into different hands. The swindlers may change, but the swindling will never stop, no matter how rational people are or how much experience they get. This is the basis of what George Akerlof calls "Phishing for Phools".

This is why we might want the government to step in and force people to divulge their private information. Econ 101 does say that better information all around can't possibly worsen the outcome of deals. I know of no "economic efficiency" argument for allowing people to try to swindle other people.

Anyway, bottom line: Even in a perfectly rational, perfectly free world, voluntary contracts are not always mutually beneficial to the people who enter the contracts. And in a realistic, ever-changing, uncertain world, some kinds of contracts might be mutually beneficial less than 50% of the time. (Of course, if you allow for people to be irrational and unfree, things get even worse. And this post doesn't even mention things like externalities, which throw a further wrench into the system.)

Common Writing Mistakes and How to Avoid Them

I want to talk about some tips for streamlining your writing and giving it more impact (as well as making it more "correct," grammatically). These tips address problems that even competent writers have. I catch myself making some of these mistakes. But afterward, I always discipline myself appropriately, for example by withholding extra servings of gruel.

Subject-pronoun agreement is a problem for many native speakers of English. I'm not sure why. It's easy enough to avoid. An example of what not to do: "When you ask a person to help you, they will often refuse." Why is this wrong? The pronoun "they" is a plural form, yet here it refers to a singular "person." It's correct to say: "When you ask a person to help you, he or she will often refuse." Or, if you prefer the plural: "When you ask people to help you, they will often refuse." Do one or the other. Don't mix "they" or "them" with a singular subject.

Avoid the word "very." Instead, use a higher-impact (and/or more descriptive) word like "extremely," "exceedingly," "hugely," "remarkably," "astoundingly," "astonishingly," "massively," etc. The word "very" is overused and thus has little impact. It's supposed to magnify the impact of whatever it's modifying, but you can't increase the impact of something by modifying it with a low-impact word. So just avoid "very" altogether. Think up something more imaginative. Imaginative words improve almost anybody's writing.

Eliminate unnecessary uses of "that." "He knew that it was wrong" can be improved by saying "He knew it was wrong." This may not sound like such a big deal, but if you use "that" needlessly throughout a lengthy piece of writing, you'll find it tends to bog things down. If you're looking for a super-easy way to streamline your writing, start by finding and removing unnecessary thats.
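
If you'd like a mechanical first pass at the that-hunt, a few lines of code can flag every candidate for you to judge by eye. A sketch (no regex can tell a needless "that" from a necessary one, so a human makes the final call):

    import re

    def flag_thats(text, context=30):
        # Print each "that" with surrounding context for manual review.
        for match in re.finditer(r"\bthat\b", text, flags=re.IGNORECASE):
            start, end = match.span()
            print("..." + text[max(0, start - context):end + context] + "...")

    flag_thats("He knew that it was wrong. That rule saves you work.")

The first hit is a needless conjunction; the second is a demonstrative doing real work.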

Stay away from "There is...that" constructions. Don't do this: "There are many cars that aren't reliable." Instead say: "Many cars aren't reliable." Why would you want to use seven words to say something that can be said in four words?

Don't put an "-ing" word at the start of a sentence, unless you really know what you're doing. In English, an "-ing" word is (grammatically speaking) either a present participle or a gerund. The difference between a participle and a gerund is that a participle is a verb form used as an adjective, whereas a gerund is an "-ing" verb that serves as a noun. Either way, the brain rebels. Your brain doesn't want to see verbs used as adjectives (nor as nouns). So avoid "-ing" verbs wherever you can. Sometimes you can't avoid them, of course. "Revolving door" uses the participle "revolving" to modify "door," which is fine; the meaning is clear. "Interrupting is rude" uses the gerund "interrupting" as the subject of the sentence. Not bad; it's short, and the brain can parse it okay. But consider: "Paying attention to grammar eliminates mistakes." That's a poor sentence (what's the subject?), as is "Being thin avoids heart disease later in life." Stay with nouns as the subjects of sentences and you'll find that sentences are easier to write, as well as easier for the reader to understand.

Be careful about decoupling the object of a sentence from the predicate. Example of what not to do: "Throw the horse over the fence some hay." The subject of this sentence is an implied "you," the object is "hay," and the predicate is "throw." But that's not how the sentence reads. It reads as if "horse" is the object, which is wrong. Presumably, you want to throw hay, not a horse, over the fence. If you were to say "Throw hay to the horse over the fence," that's still not good, because you're implying that the horse is over the fence rather than that you need to throw hay over the fence. If you actually want somebody to throw hay over the fence, say so: "Throw hay over the fence, to [or for] the horse."

Don't let ambiguity creep into your writing. "The ability to read quickly made him smarter." Does "quickly" modify "made" (quickly made)? Or does it modify "read" (read quickly)? It's ambiguous. Completely reword the sentence if necessary. Try something like "He became smarter because of his ability to read quickly," or (if "quickly" applies to "made") "He quickly became smarter because of his ability to read." Say things in the most unambiguous way possible, even if it means making sentences longer.

And by the way: Much of the time, you can ignore the old rule about not allowing a sentence to end with a preposition. Examples: "That's a subject I know nothing about." "It's nothing to cry over." "That's what the dog sat on." "Do it that way, if you have to." No one but the most pedantic schoolmarm would consider such sentences wrong.

Tomorrow, I want to talk about certain words and usages that cause trouble (yet are easily made right). The words in question are like land-mines waiting to blow big craters in your writing. Ignore them at your own peril.

Rise of the cyborgs



People have been requesting that I do more futurist posts, so here's a bit of holiday optimism.

A lot of recent futurist discussion in the media and blogosphere has revolved around either the technological stagnation thesis, or the question of whether robots will replace humans. Elsewhere, people are toying with the implications of more far-out technologies like brain emulation and desire modification, or following the ongoing mini-booms in natural gas fracking and 3-D printing.

But it occurs to me that people may be overlooking something big: another technological revolution that is right under our noses, about to change our world in a big way. I'm talking about the rise of biomechanical engineering...or, to use a more catchy term, cyborg technology. The cyborg revolution is not a far-future sci-fi conjecture; it is upon us even as I write these words.

For a taste of how cyborg technology may soon change our world, check out this BBC article. The key technology is the integration of human brains with computers. Here are some extrapolations of technologies that currently exist:

1. Direct mental control of machines (also called Mind-Machine Interface, or MMI). Non-invasive ways of controlling machines with one's mind have already been developed and will soon be commercialized. The biggest benefit of this, of course, will be for physically impaired people, but it will also probably allow us to write a lot faster; just think words, and they appear on the page. Writing speed is probably a significant constraint on productivity, so MMI may have the potential to raise service-industry productivity, which has been lagging in recent decades. Of course, MMI may also be a much easier and more fun way to play video games, control your mobile devices, etc., than punching buttons.

Another aspect of this is mind-internet interface. Obviously, this is scary, since you don't want your brain getting hacked by jerky teenagers halfway around the globe. So I'm not sure if this will ever be done, especially because sight is already a very fast way to assimilate information from the net.

2. Augmented intelligence. Artificial intelligence is one of the most-talked-about technologies, but if you think about it, it's probably easier and more natural to begin with the intelligence we already have, and simply augment it with computers. The BBC article I cited discusses experiments in which artificial devices have already served as functioning brain structures in rats, in particular as artificial memory centers; expect this technology to improve rapidly.

If we can store human memories in artificial brain structures, the implications are enormous. First of all, it would vastly expand the knowledge base and expertise of a human knowledge worker; if we could store vastly expanded amounts of knowledge, we would no longer be constrained to specialize in one incredibly narrow field. This might unlock huge innovative potential, as individual humans could do the kind of creative work that now requires teams of humans.

If these artificial brain structures can be exchanged between people (a nontrivial task, obviously!), then we get human memory transfer, and the possibilities are even more enormous. Instant education, as expertise is copied and transferred from human to human. Functional immortality, as full sets of memories are transferred to cloned brains (Note: This is an idea I got from Miles Kimball; he explains it in this post). Etc.

Artificial brain structures might also allow boosted cognitive ability. Imagine humans with the processing power of computers at their beck and call. This, of course, is a more speculative technology...but maybe not so speculative, to wit:

3. Augmented learning. This sounds very pie-in-the-sky, until you read the BBC article and find that it is already real and may even be available over the counter:

Transcranial direct-current stimulation (tDCS) is a way of running electrical current through the brain with electrodes attached to the outside of the skull. The US Defense Advanced Research Projects Agency (Darpa) currently uses tDCS to improve the learning speed of snipers, claiming it cuts the learning curve by a factor of 2.5. There are issues, though. "They learn more quickly but they don't have a good intuitive or introspective sense about why," says Vincent Clark, neuroscientist at the University of New Mexico.
Such devices were initially expensive, but now GoFlow sells a DIY kit for $99, which consists of two electrodes, cables and a 9-volt battery. So, in theory, everybody can try and tune his or her own brain at home. But if it is not applied correctly, anything could happen – from enhancing intelligence (intended), rewiring our brains (who knows?) through to electrocuting ourselves (not intended). Neuroscientist Roi Cohen Kadosh from Oxford University says he wouldn't buy the DIY kit, because he thinks it is premature to distribute it to non-experts. "People might feel like they should stimulate their brain as much as they want, but just as with buying a medicine over the counter, you need to know when to use it, how often, in what conditions and in what cases you should not take it."
There is no word to describe this except for "amazing". I fully expect other bloggers to buy the kit, try it out, and tell me how it goes...

Of course, another possible application of this exact same type of technology is:

4. Mood modification. We already know how to stimulate certain emotions with direct brain stimulation; non-invasive methods, of the type currently being developed for artificial learning, would revolutionize the applications of this technology. For example, cognitive behavioral therapy currently relies on human attention and vigilance to replace negative thoughts with positive ones (thus alleviating depression and anxiety); if this process could be automated, it could help cure some of the great psychological scourges of modern society.

But why stop there? People with phobias could get rid of the phobias by counteracting fear responses at high speed; as soon as you see the thing you fear (a dog, or an enclosed space), a computer will see your fear response spiking and stimulate feelings of safety and security instead. Poof, phobia gone! Not to mention social anxiety; imagine how easy it would be to talk to cute girls at parties if your mobile device could zap you with artificial self-confidence every time you started to get scared.

Of course, at this point, mood modification becomes a rudimentary form of my "holy grail" technology of Desire Modification. The thing to understand is that non-invasive external stimulation of emotional responses is not very far away; we're talking a few years, not a few decades.

5. Artificial sensory input. This already exists and is on the market, in the form of cochlear implants (artificial ears) and visual prostheses (artificial eyes). The technology is improving very rapidly. At the point where artificial senses become as good as (or better than!) natural ones, whole new worlds of possibility open up.

For example: artificial eyes and ears would replace all input devices. You would never need a television screen, a phone, Google Goggles, or a speaker of any kind. All you would need would be your own artificial eyes and ears. You could play video games in perfect, pure augmented reality. Imagine the possibilities for video-conferencing, or hanging out with friends half a world away!

And why stop there? If you wanted, you could perceive the buildings around you as castles, or the inside of a spaceship. The whole world could look and sound however you wanted.

(Of course, brain chips that could feed artificial input to the sensory perception centers of the brain - the technology of The Matrix - could accomplish this task even better. But this might be farther away.)


OK, time to stop. Of course, I haven't come close to encompassing the full set of possibilities available from brain-computer interfaces, but I think I've shown that many cyborg technologies that currently exist have the potential to quickly and dramatically reshape human life. I leave it to you to fill out the list.

What will this mean for the economy? Well, unlike media and information technologies (which can usually be copied without cost), biomechanical technologies are good old manufactured goods; their inclusion into the economy will show up in the GDP statistics, unlike Facebook or Craigslist. And because these technologies have the potential to vastly improve the human experience, we can expect them to become near-universal consumer goods, provided our legal institutions allow it.

Also, cyborg technologies have the potential to improve human productivity quite a bit, as my examples above have hopefully shown. Humans who can store vast amounts of knowledge and expertise, who can directly interface with machines, and who can make themselves better-adjusted and more motivated at the touch of a (mental) button will be valuable employees indeed, and will prove useful complements to the much-discussed army of robots.

All this means that cyborg technologies, if they become widespread, will do much to quiet the fears of the stagnationists. But this, of course, requires institutions that allow these technologies to become universal. Currently, I feel that institutions like the FDA and the health care system are biased toward treating the "sick", and place way too little value on technologies to improve the average human experience above its baseline (witness how we push antidepressants on everyone, but ban even weak recreational pleasure drugs like marijuana). I fear that our society will collectively decide that anything that improves on "natural" humanity is unsafe for public consumption. This would sacrifice huge amounts of growth potential on the altar of what is essentially a pointless semantic distinction (Isn't it "natural" for old men to become impotent? But we still allow Viagra...).

In any case, the cyborg revolution is upon us. Pay attention, futurists. This could be very, very big.


Update: Just to clarify, I think that: A) cyborg technologies that affect the mind are going to be far, far more important than ones that affect mainly the body, and B) non-invasive methods of brain-computer interface definitely count as "cyborg" technology; you don't have to have robot parts in your head to be a cyborg.

Update 2: In this must-read piece, io9's George Dvorsky lists 16 science fiction predictions that actually came true just in 2012. Cyborg technology dominates the list; see items 1, 3, 10, 13, and 15. The cyborg revolution is upon us!

Update 3: A TED talk on cyborgs just came out. Like I said, this is bigger than anyone realizes, and is right now in the process of exploding into the public consciousness.

Update 4: Futurist Ramez Naam has an article in Forbes summarizing a lot of the new cyborg tech and speculating about where it might take us.

Writer's Block: Getting Past the First Sentence

Suppose you have a writing assignment due tomorrow and you're completely blocked. You don't even know where to begin.

Here's how to get started.

First, accept the general strategy that you're going to produce crap first, then make something out of it later. Because that's how writing works, frankly. Everything you've ever read in print started out as something way crappier than what finally got published. Most of what passes for "writing skill" is actually revision skill.

Second, forget about rules. Drop all your inhibitions over grammar, syntax, spelling, vocabulary, use of pronouns (first person, second person, third person), etc., because all that stuff can be fixed later. If you have a brief (80,000-foot-level) outline, fine, but for now take it off the table and hide it somewhere.

Start by writing the following sentence: "The most important thing I'd like to say about [subject] is XYZ." (Fill in the subject and XYZ yourself.)

Again, don't fuss over the fact that you're using first person voice ("I'd like to say"), because that's easily fixed later. There are hundreds of ways to fix it. Here's one: "There are lots of ways to look at [subject]. But probably the single most important thing to note about it is XYZ." Here's another: "Most people think ABC about [subject]. But in fact there are many reasons to take the XYZ point of view. A quick review of the evidence will show why DEF might well be a more worthwhile way to understand [subject]." You can always take yourself out of the discussion. Do it later.

Okay, you've written something, so congratulate yourself. The thing to notice is that once you've captured your main idea in a few words, you can now move in one of two directions. It may well be that your most important point is something you can only get to after first addressing a bunch of other things. In that case, move up to the top of the page (above the sentence you just wrote) and plan on writing downward, until you get to your most important point.

The other way it could go is that once you've stated your most important point, you need to back it up with examples and/or discuss important sub-points. In that case, start writing a new paragraph below the sentence you just wrote and plan on continuing downward toward the bottom of the page.

In one case you're moving toward the main point from above; in the other case you're moving from the main point downward. It may well be that you end up having to do both. But the point is, you've driven a stake into the ground. You have a reference point to work away from, or work toward.

Don't be afraid to state your conclusion first (at the very top of your piece), then, in the next paragraph, back up and explain how you got there. When I feel it's going to take a lot of difficult setup to get to my main point (and then I get all constipated-feeling, because I know the backstory is going to require a ton of well-thought-out explanation), I shortcut the whole process by stating my "punchline" early on, usually in the second paragraph. The first paragraph will state what it is I want to talk about and perhaps give some bullshit justification for why it needs to be talked about. Then, right away, in the second paragraph, I'll say something like "Rather than draw this out, let's cut to the chase. The right way to approach a problem like XYZ is to think about it in terms of ABC." Then I spend the rest of the piece supporting my already-delivered "punchline."

So give yourself permission to introduce topics in any order, including conclusion-first.

It may sound simplistic, but the most important thing you can do when you're blocked is just write something. Laugh, resign yourself to the idea that you're going to produce utter crap, then do just that: Quickly write down a big long list of absurdly simplistic statements about your topic. Or just write a laughably bad first paragraph and pretend you just discovered it under a stack of papers in a mental hospital. Laugh at it. Then move on.



Making the Writing Process Easier

Most people (including many professional writers) consider writing difficult. In fact, it's probably one of the most frighteningly difficult things most people do in their professional lives, second only to public speaking.

Part of the reason for this is that people acknowledge, I think, on a gut level, that writing is a form of artistic expression, and yet most of us (for whatever reason) are convinced we're not capable of "art." No one is asking you to create art, though, so why impose that expectation on yourself, unless you're writing a sonnet?

Much of the fear of writing comes down to the fear of producing embarrassing crap. But here's something you should always bear in mind. Everything you've ever read, in print, started out as something crappier than what you ended up reading. You've only ever known the glossy polish of the final product. You didn't see the crappy rust-covered underbelly of what came before.

So don't hold yourself to a "final output" standard when you sit down to write. Your first effort may well be crap. But then, so was everyone else's first effort. You just didn't see it.

Always give yourself permission to produce crap. If you don't, you might never get past the first sentence.

Rumination is important preparation for writing, so always give yourself as much time as you can to think about your subject before sitting down to write. Break your topic down mentally into the simplest possible bits. Think each bit through on its own. What would you say in 25 words or less about each bit? Take notes if that helps.

I rarely sit down to write on a topic that's less than 70% to 80% thought-out in advance. (I count something as "thought out" if I am 90% confident that I understand my own unique take on the subject and can write about it in a way that will fool 90% of readers into thinking I know what I'm talking about.) I know I can count on the writing process itself (which is iterative, reentrant, non-linear, and thus organic) to help fill in the missing 20% to 30% of thought-outness, but I never count on (say) 90% of what I'm going to write just "coming to me" as I write. Only 20% or so will "come to me," and most of that in the editing pass, after the crappy first draft is laid down.

I like to think of putting fingertips to keyboard as the last step in a long chain of preparatory actions (research plus rumination). Captured keystrokes are just a static artifact of the dynamic thinking process that went before.

One of the best insights I can give you is that simple thoughts are easier to write down than complex thoughts, so anything you can do to de-complexify your thinking will have a huge payoff when it comes time to write. Stuff that's arduous to write is usually arduous to read. How do you make the arduous easy? Simplify.

If you find yourself completely choked up when you sit down to write, it's either because you're afraid to produce crap, or because your thinking on the subject matter is still muddled. If you're not under a deadline, take more time to think the subject through, but in simpler terms. (You can always make simple stuff complicated later, although I don't recommend it.) If you're under a deadline, sit down and quickly write a bunch of absurdly short sentences on the topic in question, phrasing everything in unacceptably oversimplified terms. I guarantee that if you quickly fill a page with laughably simplistic one-liner statements about a subject, the Fussmaster General inside you will be eager to jump at the chance to cross all that crap out and do a more meaningful job of expressing the same thoughts. In other words, you'll be ready to write "in anger."

One more point and then I'll shut up.

Techniques exist for simplifying the expression of ideas. Most of them revolve around semantic clarity and micro-syntax. Just by avoiding certain types of words (for example, gerunds, which are verbs pretending to be nouns) you can force yourself to write in a simpler, clearer manner. The neat thing is, simple writing always comes out faster (and reads better) than turgid-but-logically-complete writing. The simpler you write, the easier the process, for you and the reader. I'll be talking about some of my favorite techniques in this regard over the next few days.
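(In the meantime, here's a taste of how mechanical this kind of check can be. Below is a toy Python sketch, my own illustration rather than any tool mentioned in this post, that flags "-ing" words in a draft so you can reconsider them. It's a deliberately naive heuristic: it can't tell a gerund from a participle, or from a noun like "thing"; a real tool would need part-of-speech tagging.)

    import re

    # Naive heuristic (hypothetical illustration): flag every standalone
    # word ending in "-ing" as a candidate for rewriting. It will catch
    # participles and words like "thing" too -- that's the price of crude.
    ING_WORD = re.compile(r"\b[A-Za-z]+ing\b")

    def flag_ing_words(draft: str) -> list[str]:
        """Return the -ing words in a draft that deserve a second look."""
        return ING_WORD.findall(draft)

    print(flag_ing_words("The processing of the filing caused much suffering."))
    # ['processing', 'filing', 'suffering']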

Contralateral Brain Exercises for Programmers

Lateralization of brain function has been well studied, and if there's one thing nobody disagrees on at this point, it's that the brain does the bulk of its speech and language processing on the left side (in or near Broca's area). The right brain does have some language comprehension ability of its own (particularly when it comes to understanding metaphors, idioms, and prosody), but in general the right hemisphere lacks any ability to comprehend syntax or grammar. People who suffer trauma to the left brain's speech and language centers develop profound verbal disabilities, whereas damage to the right brain seldom produces profound language deficits.

It's also fairly well accepted that "the left hemisphere has been shown to be better prepared to process information in a more analytical, logical, or sequential fashion," whereas "the right hemisphere more efficiently serves tasks that require the holistic, or simultaneous, processing of nonverbal gestalts and the complex transformations of complex visual patterns" (The Neuropsychology Handbook, Chapter 7).

Writers and programmers deal (intensively and regularly) with large amounts of text; text that deals with sequential logic and conforms to especially rigid rules. This unavoidably brings a huge amount of left-brain usage. Is that a problem? No. But it might mean writers and programmers (and other "left-brain-intensive" folks) could benefit from greater engagement of the right hemisphere, because creative problem-solving requires active participation by both halves of the brain.

If one accepts the notion that programmers are (in work mode, at least) heavily lateralized to favor the left hemisphere, it stands to reason that those of us who deal in code for a living could benefit from contralateral brain exercises (exercises designed to stimulate the less-used side of the brain; the right side, in this case).

What does this mean in practice? If you already have hobbies or activities that engage your right brain, you might want to consider doing those things more intensively and more regularly. For example, if you occasionally play a musical instrument (an activity that requires an exceptional amount of cross-hemisphere coordination), start playing the instrument every day rather than occasionally.

If you like to paint, set aside time each day to paint. Do it intensively and regularly.

If you like photography, take photographs every day, without fail.  

Build up stamina for whatever it is you do that brings your less-dominant hemisphere into play.

If classical music gets your juices going, take a several-minute-long break once every hour or so (or during any natural break point in your work) to listen to classical music. Note that music is processed bilaterally, in a non-straightforward way (Andrade and Bhattacharya, Journal of the Royal Society of Medicine, June 2003, vol. 96, no. 6, pp. 284-287). But the point is, any kind of music processing requires the active participation of the right brain. So if your right brain has been "going to sleep" (not literally, of course) while you've been writing code, you can wake it up again simply by listening to music.
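(And if you need help remembering to take that hourly break, a trivial script will nag you. Here's a minimal Python sketch, my own toy and nothing from the research above:)

    import time

    # Hypothetical hourly break reminder: run it in a terminal, and go
    # listen to a few minutes of music whenever it fires.
    BREAK_INTERVAL_SECONDS = 60 * 60  # once every hour or so

    while True:
        time.sleep(BREAK_INTERVAL_SECONDS)
        print("\aBreak time: step away and listen to some music.")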

Interestingly, a certain amount of evidence exists that comprehension of poetry depends largely on right-brain engagement. Therefore, if you like poetry, try reading some before you sit down to work every morning.

My favorite exercise is this: Find an image of something that elicits a strongly positive non-verbal reaction in you (something that, just by its appearance, inspires you). It might be a picture of your favorite athlete in a moment of triumph. It might be a photo of some natural wonder (the Grand Canyon, a mountain, a glacier, a forest). It might be a picture of an iPhone. Whatever.

Tape the picture to the corner (or edge) of your monitor in the morning before you begin work. Leave it there, in your peripheral vision, for a while. Maybe all week.

You don't actually have to look at the photo to be affected by it. Its mere presence in the visual field will have an effect on your brain. I'm convinced this is why so many people put pictures of their children (or spouse, etc.) on their desk. You don't put a picture of your child on the desk to remind yourself that you have a child, nor to remind yourself what your child looks like. You already know what your child looks like. You've seen the picture a million times already, in any case. You put the picture there because its mere presence inspires you to perform better.

Try the inspiring-photo-in-your-peripheral-vision technique for a week, and try changing the photo to a different one every now and then. See if it doesn't spur your creativity. It works for me. Let me know if it works for you.






Interface Design Lessons from the Aerospace Industry


F-111A cockpit.
Expand your mental image of what a "device" is, for a moment.

I'm sure you'd be willing to agree that a fighter jet is an extremely complex, high-functionality device. Yet it has to have a usable human interface (or else it's not worth much). How does one provide a highly usable interface for such a complex "device"?

In the 1960s, the way to do it was as shown in the accompanying photo of the cockpit of a General Dynamics F-111A. You don't have to be a pilot to appreciate the fact that the "user interface" to the F-111A was (let us say) intimidating in its complexity. Is such an interface usable? Apparently it was. Over 500 of these aircraft flew, with the cockpit design shown in the photo.

F-22 Raptor cockpit.
Fast-forward to 2005, which is when the Lockheed Martin/Boeing F-22 Raptor went into service. The F-22A has a useful load (max weight minus empty weight) of about 40,000 pounds, essentially the same as for the F-111A. In almost all other respects, the planes are miles apart. The F-22A has vastly greater capabilities than the F-111A; so much so that the two airplanes shouldn't really be compared. But the point is, the F-22, despite being a much more sophisticated and capable aircraft than the F-111A, has a much simpler human interface (see photo).

What happened between 1964 and 2005? Human factors research happened.

First, human factors experts realized that anything that could be considered a visual distraction is a potential safety hazard. Therefore, consolidate (and hide, if you can) as many doodads as possible. Naturally, the advent of processor-driven electronics made it possible to integrate and automate many of the functions that were previously exposed to crew, thus reducing the overall interface footprint. A good example is Full Authority Digital Engine Control (FADEC) technology. In the F-111A, pilots had to monitor half a dozen engine gauges showing turbine inlet temperature and other indications of engine status. In a modern jet, a computer monitors and regulates such things. The pilot doesn't need to get involved.
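(To make the automation principle concrete, here's a toy Python sketch of the idea. It's my own illustration with invented numbers, not any real FADEC logic; the point is simply that a control loop, not the pilot, watches the gauge.)

    # Hypothetical illustration: the computer watches turbine inlet
    # temperature and trims fuel flow; the pilot never sees a gauge.
    TIT_REDLINE_C = 900.0  # invented redline, in degrees Celsius

    def fadec_step(turbine_inlet_temp_c: float, fuel_flow: float) -> float:
        """Return an adjusted fuel flow that keeps the engine under redline."""
        if turbine_inlet_temp_c > TIT_REDLINE_C:
            return fuel_flow * 0.95  # back off fuel to cool the turbine
        return fuel_flow             # within limits; no attention required

    print(fadec_step(950.0, 100.0))  # -> 95.0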

An important feature of modern cockpits (which has little or nothing to do with technology per se) is that important items in the interface (e.g. display screens) are made larger than less-important elements. Human factors experts quickly realized that the worst possible thing to do is simply make all gauges (or groups of gauges) a "standard size" regardless of importance.

Advances in digital display technology made it possible to consolidate data from multiple gauges onto one display surface (which might have several display modes). This also reduces footprint, even though the biggest "gauges" (which are now screens) have actually gotten bigger.

Yet another outcome of human factors research was (is) that color displays are easier for the brain to parse than a wall of black-and-white data. Likewise, graphical visualizations of data (if done correctly) are easier to comprehend than numeric representations.

The overall principle at work is, of course, that of simplification combined with functional organization. To fly an aircraft, the pilot needs to have flight information (airspeed, altitude, rate of climb or descent, angle of bank and/or rate of turn, heading), navigational information (terrain information, aircraft position, some indication of desired course and deviation from desired course, distance to waypoint), and the ability to operate avionics. (Avionics include radios used for communication; transponders to make the aircraft show up on ground radar; navigational receivers, such as GPS, and aids to navigating the glidepath to a landing; weather-avoidance gear; and autopilots.) In a military aircraft, the only major additional functional group is the fire-control (weapons management) system. So in other words, the major functional groups are few in number: They include flight information; navigational information; avionics; and fire control. Within each of these groups, you apply the standard principles of consolidation, appropriate visualization, and size differentiation (important things bigger, less important things smaller).

All of these principles can be adapted to GUI-intensive software. It's up to you (and your human factors experts) to decide how.
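(For instance, here's a minimal sketch of my own, using Tkinter with invented panel names and weights, rather than a prescription from any human factors study. It shows how functional grouping plus size differentiation might look in code:)

    import tkinter as tk

    # Functional groups, each with an importance weight: the bigger the
    # weight, the bigger the panel's share of the screen. All invented.
    PANELS = [
        ("Flight information", 3),  # most important: largest
        ("Navigation",         2),
        ("Avionics",           1),
        ("Fire control",       1),
    ]

    root = tk.Tk()
    root.title("Consolidation + size differentiation")
    root.rowconfigure(0, weight=1)
    for col, (name, weight) in enumerate(PANELS):
        root.columnconfigure(col, weight=weight)  # proportional width
        panel = tk.LabelFrame(root, text=name)
        panel.grid(row=0, column=col, sticky="nsew", padx=4, pady=4)
    root.mainloop()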




More Mantras for Software Professionals

See also yesterday's post.


First impressions count.

Being a category leader has no meaning if existing products are crap.

Never aspire to be best-in-category. Define a new category.

Never compare yourself to the competition. Your only competition is yourself.

Never borrow someone else's bar and set it higher. Design your own bar.


Refactor your thinking so you don't have to refactor code.


Good ideas are overrated. Good implementations are not.

Complexity cannot be made pretty.

A pig with lipstick is still a pig.


Excellence cannot be retrofitted.

If something's not right with your engineering culture, customers will notice.

There are no hard problems, only problems that aren't well defined.

Learning is the inevitable outcome of making mistakes, fixing them, and not repeating them.


If you aren't making mistakes, you're not doing it right. 

You can always do better.


If you found this post worthwhile, please tweet it, Digg it, Reddit it, or share the link with a friend. Thanks!

New Atlantic column: Gun control is good, but ending the War on Drugs is even better

I have a new column up at the Atlantic, in which I try to give some perspective to the gun control debate that has exploded since the Newtown massacre. Some excerpts:

This is not a column against gun control. Gun control is a good idea. The assault-weapons ban is a good idea. So are background checks, stricter licensing agreements, and greater efforts to keep guns out of the hands of minors. A prohibitive tax on ammunition? There's another good idea finally getting attention it deserves... 
Stringent gun-control measures are unlikely to turn the United States into a peaceful gun-free society like Japan...[T]o become like Japan, banning gun sales wouldn't be enough...if the U.S. banned gun ownership, and confiscated all the guns that people currently own, it would probably be very effective. But this is almost certainly politically infeasible... 
[I]f we really care about those 9,000 souls who are shot to death each year, there is an extremely effective policy that we could enact right now that would probably save many of them.  
I'm talking about ending the drug war... 
[F]ew would dispute that the illegal drug trade is a significant cause of murders. This is a straightforward result of America's three-decade-long "drug war." Legal bans on drug sales lead to a vacuum in legal regulation; instead of going to court, drug suppliers settle their disputes by shooting each other. Meanwhile, interdiction efforts raise the price of drugs by curbing supply, making local drug supply monopolies (i.e., gang turf) a rich prize to be fought over. And stuffing our overcrowded prisons full of harmless, hapless drug addicts forces us to give accelerated parole to hardened killers...
[D]on't expect [gun control] to be a panacea...[W]e need to end the self-destructive, failed drug policies that have turned us into a prison state and turned many of our cities into war zones.
Read the whole thing here!

Mantras to Live By in the Software Biz

Breakthrough ideas have no superclass.

Excellence is not an add-on.

Mediocrity is built in increments.
 
Even the stupidest do-nothing feature was somebody's "requirement."

Requirements often aren't.


Your goal isn't to meet a set of requirements but to change someone's world for the better.

Excellence isn't the same as sucking less. 

The rearview mirror is not a navigational device.

True progress occurs in quantum leaps, not by interpolation.

Creativity has an aspect of unexpectedness, not just originality.

Incremental build-out is not innovation. 


A product can meet all of a customer's needs and still be a terrible product.

Don't even entertain the idea that you've done something insanely great until large numbers of users have told you so to your face.

Even if you did something insanely great last year, last month, or last week, don't assume you're doing something insanely great right now.

Don't build what customers tell you they want. Build what they don't yet know they need.

Don't let customers design your product. They're not design experts.


Tell Marketing to shut up already. 


If you enjoyed this post, please tell a friend. And come back tomorrow for more mantras. Thanks!