Common User-Agent Strings

Today's post is more of a note-to-self than anything else. I'm always trying to remember how the various browsers identify themselves to servers. The attendant user-agent strings are impossible to carry around in one's head, so I'm setting them down here for future reference.

To get the user-agent strings for five popular browsers (plus Acrobat), I created a script (an ECMAScript server page) in my Sling repository that contains the line:

<%= sling.getRequest().getHeader("User-agent") %>

This line simply spits back the user-agent string for the requesting browser (obviously). The results for the browsers I happen to have on my local machine are as follows:

Firefox 3.6:
Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6

Chrome 5.0.375:
Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.38 Safari/533.4

IE7.0.6:
Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.0.04506)

Safari 5.0.1:
Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/533.17.8 (KHTML, like Gecko) Version/5.0.1 Safari/533.17.8

Opera 10.61:
Opera/9.80 (Windows NT 6.0; U; en) Presto/2.6.30 Version/10.61

Acrobat 9.0.0 Pro Extended:
Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/523.15 (KHTML, like Gecko) Version/3.0 Safari/523.15

Interestingly, Acrobat seems to spoof a Safari signature.

If you want to perform this test yourself right now, using your present browser, simply aim your browser at http://whatsmyuseragent.com/, and you'll get a complete report on the full header sent by your browser to the server.
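And if you'd rather serve the string up yourself in plain Java rather than ESP, the one-liner above translates directly into a servlet. Here's a minimal sketch, assuming a standard servlet container (the class name is mine):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class UserAgentServlet extends HttpServlet {

    // Echo back the requesting browser's user-agent string.
    // Header-name lookup is case-insensitive per the servlet spec.
    protected void doGet( HttpServletRequest req, HttpServletResponse resp )
            throws IOException {
        resp.setContentType( "text/plain" );
        resp.getWriter().println( req.getHeader( "User-Agent" ) );
    }
}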

When will China pass America?

Much is being made over the fact that China just passed Japan to claim the title of "world's second-biggest economy." Although this has provoked a spate of hand-wringing over Japan's decline (see here and here and here, for example), more level-headed commentators point out that China has eleven times as many people as Japan; for 1.34 billion people to have remained permanently poorer than 127 million people would have been (and for a century, was) a human disaster of epic proportions. Sure, Japan has made a lot of mistakes, but those are most certainly not the reason for China's ascendancy.

The next question on everyone's lips, naturally, is: When will China pass the U.S. to become #1? The consensus in the media seems to be that China had better not get too confident; like Japan in the 80s, they argue, China's rapid growth has come at the expense of inefficient overinvestment, misallocation of capital, suppressed consumption, and the entrenchment of special interests that will not be easily dislodged when the boom finally runs its course.

And I agree: China does share many of Japan's weaknesses. Some of these, in fact, seem far more severe in China than they did in Japan - the corruption, the pollution, the suppression of consumption, the state control of banks. These are the kinds of things that raise a country's growth during the "extensive" phase, when all you have to do to grow is save, save, save, and invest, invest, and invest. But because the same people who received the cheap capital during the boom will continue to hold the strings of the state after the economy gets saturated with buildings and roads and machines, this growth model bakes inefficiencies into the economy. Growth goes faster, but crashes more abruptly and at a lower level.

That said, the sheer numbers involved make it HIGHLY unlikely that China will NOT overtake America in the near future. If China were twice as big as the U.S., I'd say there was some chance they'd never be richer than us. But they are 4.3 times as big.

To see how certain a thing this is, let's do some rough calculations in our heads. This is pretty easy with the Rule of 70: to find out how fast something doubles, divide 70 by its percentage growth rate. Thus, if China is growing at 10% and America at 2%, China's economy is growing 8% per year relative to America's, and will double in relative size in 70/8 = 8.75 years.

Now, let's do some math. Japan's growth slowed when it hit about half of U.S. per capita GDP (in the late 70s). Same for South Korea in the late 90s. Assume for the moment that China's slows at around the same level. China's GDP per capita is now about 16% or 17% of America's. At a relative growth rate of 8% (the average for recent years), China will hit half of America's per capita GDP (and more than twice America's total GDP) in a little more than 15 years.
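(To make that arithmetic explicit - a back-of-the-envelope check, taking 16.5% as the starting ratio and 8% as the relative growth rate:

t = ln(0.50 / 0.165) / ln(1.08) ≈ 1.11 / 0.077 ≈ 14.5 years

which is in the same ballpark as the fifteen-year figure.)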

But perhaps China has more limitations than Japan and South Korea did; suppose that corruption, resource constraints, or China's size relative to its trading partners acts as a check on its growth. Suppose that, for this reason, China's growth slows when it hits 35% of America's per capita GDP (and 1.5 times America's total GDP). That's still 10 years of hyper-charged growth.

Now suppose that China's growth slows a little bit, so that the growth rate relative to the U.S. is only 6% (i.e. about how fast India is growing now). And assume China reaches only 35% of America's per-capita GDP before its growth slows even more. That's 14 more years of pretty fast growth!

Put this another way: for China to fail to overtake U.S. total GDP, they will have to stall out at 23% of U.S. per capita GDP - the average Chinese worker will have to be less productive than an entry-level employee at Wal-Mart. This would mean that their growth would have to stall within 5 years (if we measure GDP in PPP terms, or if they allow their currency to appreciate) or 7 years (if we measure GDP in nominal terms, and they keep their currency undervalued forever and ever).

So, basically, unless China's growth crashes spectacularly and semi-permanently by the midpoint of this decade, their economy will be larger than that of the U.S. by the mid-2020s. That would be an unprecedented slowdown, MUCH more severe than what Japan experienced in the 90s, and at a far lower level of development. I doubt any reasonable China critic would predict such a Biblical disaster.

Thus, the inescapable answer is: China will soon be the world's largest economy, no matter how you measure things. This will happen in less than two decades. And GDP is a direct measure of national power. We must therefore prepare for a world in which the leading geopolitical hegemon will be a non-democratic country, a country with little stake in the existing system of international norms and institutions, a country with hundreds of millions of citizens still living in poverty.

How much of the world we knew in the 20th Century was dependent on the hegemony of free-trading democratic countries? How much of the seemingly unstoppable technological progress, the respect for international boundaries, the slow advance of human rights, etc. were dependent on the lucky fact that the U.S.-Britain-France alliance (and later, the U.S.-Britain-France-Japan alliance) had the most guns?

We're about to find out.

Looking back on the Aldus-Adobe deal

There's a terrific interview with Paul Brainerd about the history of Aldus Corporation over at computerhistory.org. In it, Brainerd comments on the why and how of Aldus's eventual acquisition by Adobe (a subject of considerable interest to me, since the company I work for -- Day Software -- has just been acquired by Adobe). Of the acquisition, Brainerd says:
At a 30,000 foot level, we had similar approaches to running a company. But at a working level, there were some very definite philosophical differences.

There was a definite difference in the customer orientation. We spent a lot more time talking to customers. Adobe's philosophy was more of an engineering-based one: if we make a great product, like PostScript, sooner or later people will want it.

But the reason I even considered Adobe was their underlying ethical standard of running a high-quality company that was fair to their customers and their employees. Unfortunately, that couldn't be said of all the companies in the industry.

A lot of thought went into the merger, and I think it was one of the best.
Hopefully, we'll all be saying much the same thing about the Day-Adobe deal years from now.

Day Software Developer Training: Days Two and Three

I made it through Days Two and Three of developer training here at Day's Boston office. Under the expert tutelage of Kathy Nelson, the eight of us in the class got a solid grounding in:
  • Apache Sling and how it carries out script resolution. (For this, we used my August 16 blog post as a handout.)
  • Modularizing components and allowing for their reuse.
  • Enabling various WCM (web content management) tools, such as CQ's Sidekick, that help web authors create and edit web pages.
  • Creating a Designer to provide a consistent look and feel to a website, and using a common CSS file.
  • Creating a navigation component to provide dynamic navigation to all pages as they are added or removed by authors.
  • Adding log messages to .jsp scripts, and using the CRXDE debugger.
That was Day Two. On Day Three we focused on:
  • Creating components to display a customizable page title, logo, breadcrumbs, and configurable paragraph.
  • Creating and adding a complex component (containing text and images) to implement bespoke functionality.
  • Adding a Search component. (We saw 3 different ways to do this.)
  • Internationalization, so that the dialogs presented to web authors can appear in any of the 7 languages supported out-of-the-box by Day Communiqué.
By the end of the third day, we had written hundreds of lines of JSP and manually created scores upon scores of custom nodes and properties in the repository.

Still to come: Creating and consuming custom OSGi bundles; workflow; and performance optimization tools.

I can't wait!

Day Software Developer Training: Day One

Yesterday, I made it through Day One of developer training at Day Software's Boston office. It was an interesting experience.

There are eight of us in the class. Interestingly, two of the eight enrollees have little or no Java experience (one is not a developer); most of the rest have varied J2EE backgrounds. All are (as you'd expect) relatively new Day customers. One is from an organization that is trying to migrate away from Serena Collage. The organization in question chose Day over Ektron partly on the basis of the flexibility afforded by Day's Java Content Repository architecture, which is relatively forgiving when it comes to making ad hoc changes to the content model over time. (We spent a fair amount of time discussing David Nüscheler's Seven Rules for Content Modeling.)

We spent much of the morning talking about architecture, standards, and the Day technology stack, which is built on OSGi, JCR (JSR-283), Apache Jackrabbit, and Apache Sling. Surprisingly (to me), OSGi was an unfamiliar topic to a number of people. The fact that bundles could be started and stopped without taking the server down was, for example, a new concept for some.

All of us were given USB memory sticks containing the Day Communiqué distribution (and a training license), and we were asked to install the product locally from the flash drive. A couple of people had trouble getting the product to launch (they received the dreaded "Server not ready, browser not launched" message). In one case, it was a firewall issue that was easily resolved. In another case, someone was using Java 1.3 (the product requires 1.5, minimum). A third person had trouble getting WebDAV to work on Windows 7. I noticed, in general, that the people with the fewest problems (all the way through the class) were using Macs.

We were shown how to access the CQ Servlet Engine administration console, the CRX launchpad UI and Content Explorer, and the Apache Felix (OSGi) console, as well as the CRXDE Lite integrated development environment -- a very nice browser-based IDE for doing repository administration and JSP development, among other tasks.

We were also shown how to (and in fact we did) set up author and publish instances of CQ on our local drives, and replicate content back and forth between them.

In the afternoon, we did a variety of hands-on exercises designed to show how to create and manipulate nodes and properties in the repository; how to create folder structures; how to create templates; and finally, how to create components and Pages. (At last, we got our hands dirty with JSPs.)

Some students had trouble getting used to the fact that in JCR, everything is either a node or a property. "Folders" in the repository, for example, are actually nodes of type nt:folder. If you use WebDAV to drag and drop a file into a folder, the file becomes a node of type nt:file and the content of the file is now under a jcr:content node with a jcr:data property holding the actual content. It requires a new way of thinking. But once you get the hang of it, it's not hard at all.
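Here's what that looks like in code - a minimal sketch against the JCR 2.0 API, with a hypothetical path, assuming an already-logged-in Session:

import javax.jcr.Node;
import javax.jcr.Property;
import javax.jcr.Session;

public class JcrFileReader {

    // Read back a file that was dropped into the repository via WebDAV.
    public static String readTextFile( Session session ) throws Exception {
        Node file = session.getNode( "/content/docs/readme.txt" ); // an nt:file node
        Node content = file.getNode( "jcr:content" );              // its jcr:content child
        Property data = content.getProperty( "jcr:data" );         // holds the actual bytes
        return data.getString();  // or getBinary() for non-text content
    }
}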

Day Two promises to be interesting as we take a closer look at Sling, URL decomposition and script resolution, and component hierarchies. Hopefully, we'll get even more JSP under our fingernails!

Understanding how script URLs are resolved in Sling

One of the things that gives Apache Sling a great deal of power and flexibility is the way it resolves script URLs. Consider a request for the URL

/content/corporate/jobs/developer.html

First, Sling will look in the repository for a file at exactly this location. If such a file is found, it will be streamed out as is. But if there is no file to be found, Sling will look for a repository node located at:

/content/corporate/jobs/developer

(and will return 404 if no such node exists). If the node is found, Sling then looks for a special property on that node named "sling:resourceType," which (if present) determines the resource type for that node. Sling will look under /apps (then /lib) to find a script that applies to the resource type. Let's consider a very simple example. Suppose that the resource type for the above node is "hr/job." In that case, Sling will look for a script called /apps/hr/job/job.jsp or /apps/hr/job/job.esp. (The .esp extension is for ECMAScript server pages.) However, if such a script doesn't exist, Sling will then look for /apps/hr/job/GET.jsp (or .esp) to service the GET request. Sling will also count /apps/hr/job/html.jsp (or .esp) as a match, if it finds it.
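To make this concrete: the resource type is nothing more than a string property on the content node. A minimal sketch of setting it through the JCR API (paths hypothetical) might look like this:

import javax.jcr.Node;
import javax.jcr.Session;

public class ResourceTypeExample {

    // Tag a content node with the hr/job resource type, so that Sling
    // will look for its rendering scripts under /apps/hr/job.
    public static void tagNode( Session session ) throws Exception {
        Node job = session.getNode( "/content/corporate/jobs/developer" );
        job.setProperty( "sling:resourceType", "hr/job" );
        session.save();
    }
}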

Where things get interesting is when selectors are used in the target path. In content-centric applications, the same content (the same JCR nodes, in Sling) must often be displayed in different variants (e.g., as a teaser view versus a detail view). This can be accomplished through extra name steps called "selectors." For example:

/content/corporate/jobs/developer.detail.html

In this case, .detail is a selector. Sling will look for a script at /apps/hr/job/job.detail.esp. But /apps/hr/job/job.detail.html.esp will also work.

It's possible to use multiple selectors in a resource URL. For example, consider:

/content/corporate/jobs/developer.print.a4.html

In this case, there are two selectors (.print and .a4) as well as a file extension (html). How does Sling know where to start looking for a matching script? Well, it turns out that if a file called a4.html.jsp exists under a path of /apps/hr/jobs/print/, it will be chosen before any other scripts that might match. If such a file doesn't exist but there happens to be a file, html.jsp, under /apps/hr/jobs/print/a4/, that file would be chosen next.

Assuming all of the following scripts exist in the proper locations, they would be accessed in the order of preference shown:

/apps/hr/jobs/print/a4.html.jsp
/apps/hr/jobs/print/a4/html.jsp
/apps/hr/jobs/print/a4.jsp
/apps/hr/jobs/print.html.jsp
/apps/hr/jobs/print.jsp
/apps/hr/jobs/html.jsp
/apps/hr/jobs/jobs.jsp
/apps/hr/jobs/GET.jsp
This precedence order is somewhat at odds with the example given in SLING-387. In particular, a script named print.a4.GET.html.jsp never gets chosen (nor does print.a4.html.jsp). Whether this is by design or constitutes a bug has yet to be determined. But in any case, the above precedence behavior has been verified.

For more information on Sling script resolution, be sure to consult the (excellent) Sling Cheat Sheet as well as Michael Marth's previous post on this topic. (Many thanks to Robin Bussell at Day Software for pointing out the correct script precedence order.)


JSOP: An idea whose time has come

The w3c-dist-auth@w3.org list today received an interesting proposal for a new protocol, tentatively dubbed JSOP by its authors (David Nüscheler and Julian Reschke of Day Software). As the name hints, JSOP would be based on JSON and would be a RESTful protocol designed to facilitate the exchange of fine-grained information between browsers and (repository-based) server apps. As such, it's one of the first proposals (maybe the first?) to make extensive use of HTTP's new PATCH verb.

Why does the world need JSOP? "For the past number of years I always found myself in the situations where I wanted to exchange fine-grained information between a typical current browser and a server that persists the information," explains David Nüscheler. "In most cases for me the server obviously was a Content Repository, but I think the problem set is more general and applies to any web application that manages and displays data or information. It seemed that every developer would come up with an ad-hoc solution to that very same problem of reading or writing fine-grained data at a more granular level than a resource."

For example, what if you want to modify not just a resource but certain properties of the resource? WebDAV is often an answer in such situations (or you might be thinking AtomPub in the case of CMIS), but the fact is, it can take a lot of effort -- too much effort, some would say -- to achieve your goals using WebDAV, and in the end, HTML forms have no native understanding of property-based operations. As Nüscheler puts it, WebDAV and AtomPub "are not very browser-friendly, meaning that it takes a modern browser and a lot of patience with JavaScript to get to a point where one can interact with a server using either of the two."

So in other words, something as simple as setting or getting attributes on a folder shouldn't take a lot of hoop-jumping. You should be able to do things like:

Request:
GET /myfolder.json HTTP/1.1

Response:
{
  "createdBy" : "uncled",
  "name" : "myfolder",
  "id" : "50d9317a-3a95-401a-9638-333a0dbf04bb",
  "type" : "folder"
}

or:

Request:
GET /myfolder.4.json HTTP/1.1

Response:
{
  "createdBy" : "uncled",
  "name" : "myfolder",
  "id" : "50d9317a-3a95-401a-9638-333a0dbf04bb",
  "type" : "folder",
  "child1" :
  {
    "grandchild11" :
    {
      "depth3" :
      {
        "depth4" : { ... }
      }
    }
  }
}
In the above example (with nested folders), notice that the GET is on a URL of /myfolder.4.json. The '.4' selector tells the server to return the folder hierarchy 4 levels deep.

Suppose you want to create a new document under /myfolder, delete an old document, move a doc, and update an attribute on the folder -- all in one operation. With JSOP, you could do something like:

PATCH /myfolder HTTP/1.1

+newdoc : { "type" : "document", "createdBy" : "me" }
-olddoc
>movingdoc : /otherfolder/mydocument
^lastModifiedBy : "me"

where + means to create a node/property/resource, - means delete, > means move, and ^ means update.
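As a concrete illustration of the diff syntax, here is how a client might send that PATCH using the java.net.http client that ships with modern Java (which, admittedly, postdates this proposal); the endpoint is hypothetical, and JSOP itself was only ever a proposal:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class JsopPatchExample {

    public static void main( String [] args ) throws Exception {
        // The JSOP diff from the example above: create, delete, move, update.
        String diff =
            "+newdoc : { \"type\" : \"document\", \"createdBy\" : \"me\" }\n" +
            "-olddoc\n" +
            ">movingdoc : /otherfolder/mydocument\n" +
            "^lastModifiedBy : \"me\"\n";

        HttpRequest request = HttpRequest.newBuilder()
            .uri( URI.create( "http://localhost:8080/myfolder" ) ) // hypothetical server
            .header( "Content-Type", "text/plain" )
            .method( "PATCH", HttpRequest.BodyPublishers.ofString( diff ) )
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send( request, HttpResponse.BodyHandlers.ofString() );

        System.out.println( response.statusCode() );
    }
}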

JSOP proposes not only to be JavaScript-friendly but forms-friendly. So for example, imagine that you want to upload a .gif image and update its metadata at the same time, using an HTML form. Under the Reschke/Nüscheler proposal, you could accomplish this with a form POST:

POST /myfolder/my.gif HTTP/1.1
Content-Type: multipart/form-data;
  boundary=---------21447684891610979728262467120
Content-Length: 123

---------21447684891610979728262467120
Content-Disposition: form-data; name="data"
Content-Type: image/gif

GIF89a...................!.......,............s...f.;
---------21447684891610979728262467120
Content-Disposition: form-data; name="jsop:diff"
Content-Type: text/plain

^lastModifiedBy : "me"
+exif { cameraMake : "Apple", cameraModel : "Apple" }
---------21447684891610979728262467120--

Bottom line, JSOP promises to provide an easy, RESTful, forms-friendly, JavaScript-friendly way to do things that are possible (but not necessarily easy) right now with WebDAV or AtomPub. It should make working with repositories a snap for mere mortals who don't have time to master the vagaries of things like CMIS or WebDAV. In my opinion, it's a much-needed proposal. Here's hoping it becomes a full-fledged IETF RFC soon.

Why dictatorships fail

Dani Rodrik points out that, on average, democracies outperform dictatorships in terms of long-term economic growth and economic stability. Personally, I wouldn't make too much of this observation, since it's a small sample size (what if only America and its friends grew in the postwar period?), there are a million kinds of selection bias at work (e.g. the "resource curse"), and there's a huge endogeneity problem (what if growth causes democracy instead of vice-versa?).

But assume it's true. Why? Matt Yglesias thinks that it's because dictators in the past were just stupid:
[H]istorically [Rodrik's point has been right]. [But] if you’re a modern-day dictator, the lesson of history is clear that the less-corrupt, less-exploitive Singapore model was not only better for the Singaporeans it was better for the dictator. For all the same reasons that over the long term the revenue-maximizing tax rate equals the growth-maximizing tax rate, over the long-term dictatorship is more incentive-compatible than Mobutu seems to have realized.
However, Yglesias thinks, dictatorships have become much smarter, and will now do significantly better:
Historically, few authoritarian regimes have seen that their own self-interest is best maximized via enlightened policies. But at least one interpretation of what’s happening in China is that the most important authoritarians around have figured this out (Abu Dhabi also seems to have) and this is driving major improvements in human well-being.
I heavily doubt this. Hasn't the human race had thousands of years to experiment with dictatorships? You'd think that if maximizing a country's long-term growth performance were always in a dictator's interest, dictators would have figured this out long before now. Why should now suddenly be so different?

I think the explanation for the success of democracy lies elsewhere. In the modern era, we've seen many instances of dictatorships that grew rapidly - Imperial and then Nazi Germany, fascist Italy, Taiwan and South Korea and Singapore in the 70s and 80s, Chile under Pinochet, etc. The problem is, they all seem to stall out at a moderate level of income; it's easy for dictatorships to get middle-class quick, but it's very hard for them to get rich.

My guess is that it has to do with what political scientist Bruce Bueno de Mesquita calls "the logic of political survival." The theory in a nutshell: no matter whether you're the leader of a democracy or a dictatorship, you have to pay people off to stay in power. In a democracy, you have to pay off voters, which requires public goods (because there are so many voters). In a dictatorship, you have to pay off your oligarch buddies, or the army, or a vast ruling party. Each one of those supporters asks for a lot more than a voter asks for in a democracy. Hence, dictatorships are forced to squander more state resources in direct payouts to regime supporters.

I'd add something to this theory. In a democracy, if you lose power you simply become part of the minority party in Congress or retire and go on posh speaking tours. In a dictatorship, if you lose power there's a fair chance you'll be riddled with bullets and hung on meathooks in the public square. Yglesias seems to think that this is a strength of dictatorships, because it will cause them to be very careful about keeping the country on a firm growth path:
But when growing dictatorships hit economic downturns, what tends to happen is you throw the dictators out of office. I’m not sure whether China’s leaders can keep delivering growth, but if they can’t it’ll be hard for them to stay in charge.
I see it very differently. When losing power means losing everything, dictators have an incentive to increase payouts to regime supporters by huge amounts, as a form of insurance. If a country is ever faced with an opportunity for long-term gain at the expense of short-term pain (e.g. deficit reduction, or curbing wasteful public works projects), a dictator is unlikely to take that opportunity. Prosperity in the long term is not worth the meathook treatment in the short term.

I think we can see this dynamic at work in China. Who are the regime's key supporters? Answer: local government officials. China has five layers of government (compared to our three or Japan's two). All those government officials have their hands in the pie; as Minxin Pei will tell you, much of China's economy consists of "private" factories owned by friends of local government officials. The government officials get massive kickbacks; the factory owners get cheap second-class-citizen migrant labor, license to destroy the local environment, and free land obtained by evicting local peasants.

It's easy to see that China will not be able to afford this kind of economic system forever. Right now, the system works because China is so poor that kickbacks are relatively cheap, and because nearly every investment in a poor country tends to pay off. But as the economy grows richer, the kickbacks to the local government officials will become a greater and greater drain on the private sector (this is called Baumol's Cost Disease), even as the need for kickbacks forces China to maintain high levels of investment in projects that will never pay off. Michael Pettis and others report that this inefficiency is already starting to kick in.

Therefore, I do not share Yglesias' faith in the intelligence of dictators. No matter how intelligent they are, the logic of their political survival will force them to choose the short-term over the long-term. This time will not be different.

Skype heads for IPO of the century

Skype has made its filing with the SEC, ahead of what will no doubt be the biggest IPO of the century. Interesting tidbits from the filing:
  • Skype's (top-line) run rate is $812 million per year
  • 28 percent of total Internet users have signed up with Skype (506 million people)
  • 40 percent of calls are video-chat
  • 6 percent of users pay
  • Adjusted EBITDA for the first half of 2010 was $115.7 million, up 54 percent from a year ago
  • The company has $85 million in cash
Add it all up and what do you get? Nothing less than the dial tone of the 21st century, I'd say.

Why our country is going down the tubes, and what you can do about it

America is caught in a spiral of decline and stagnation.

Why? The most immediate cause is that we refuse to spend money on public goods:
The lights are going out all over America — literally. Colorado Springs has made headlines with its desperate attempt to save money by turning off a third of its streetlights, but similar things are either happening or being contemplated across the nation...

Meanwhile, a country that once amazed the world with its visionary investments in transportation, from the Erie Canal to the Interstate Highway System, is now in the process of unpaving itself: in a number of states, local governments are breaking up roads they can no longer afford to maintain, and returning them to gravel.

And a nation that once prized education — that was among the first to provide basic schooling to all its children — is now cutting back. Teachers are being laid off; programs are being canceled; in Hawaii, the school year itself is being drastically shortened. And all signs point to even more cuts ahead...

In effect, a large part of our political class is showing its priorities: given the choice between asking the richest 2 percent or so of Americans to go back to paying the tax rates they paid during the Clinton-era boom, or allowing the nation’s foundations to crumble — literally in the case of roads, figuratively in the case of education — they’re choosing the latter.

It’s a disastrous choice in both the short run and the long run...

[E]verything we know about economic growth says that a well-educated population and high-quality infrastructure are crucial. Emerging nations are making huge efforts to upgrade their roads, their ports and their schools. Yet in America we’re going backward.

How did we get to this point? It’s the logical consequence of three decades of antigovernment rhetoric, rhetoric that has convinced many voters that a dollar collected in taxes is always a dollar wasted, that the public sector can’t do anything right...

Krugman knows, of course - and has said in other columns - that antigovernment rhetoric never really convinced many Americans to give up public goods and public services. What really happened was that the conservative movement told white people that all the cost of public goods would be borne by them, while all the benefits would go to blacks and Hispanics. This is the argument that was successful. This is the argument that destroyed our government's ability to provide the economic foundations of a successful nation-state.

Surely, now that our economy is going down the tubes, white conservative Americans are going to wake up and realize that they need public goods too...right?

Except that people's minds don't quite work that way. Instead of convincing people of the need for public goods, economic downturns often lead people to switch to an "every tribe for itself" crisis mode. This is what Matt Yglesias is talking about when he says that economic insecurity breeds mass scapegoating, prejudice, racial tribalism, and paranoia:
Last year we had town halls gone wild, fueled by the threat of death panels pulling the plug on Grandma. This year, us-vs.-them controversies are proliferating, linked by a surge in xenophobia. This is our summer of fear.

So far, the summer of fear has featured a charge, led by Newt Gingrich, Sarah Palin and former New York congressman Rick Lazio, to block the construction of the Cordoba House Islamic cultural center (which is to include a mosque) a few blocks from the site of the World Trade Center. Meanwhile, with frightening speed, we've gone from discussing the prospects for comprehensive immigration reform to watching congressional Republicans call for hearings to reconsider the 14th Amendment's guarantee of citizenship to anyone born in the United States...

Fear, in essence, begets fear. The loss of a job, or the worry that one might be lost, raises anxiety. This often plays out as increased suspicion of people who look different or come from different places. While times of robust growth and shared prosperity inspire feelings of interconnectedness and mutual gain, in times of worry, the picture quickly reverses. Views of the world turn zero-sum: If he wins, what do I lose? Any kind of change looks like decline -- the end of a "way of life."...

Benjamin Friedman, an economist at Harvard whose 2005 book "The Moral Consequences of Economic Growth" argued that growth tends to foster liberal sentiments and open societies, whereas slowdowns undermine them, says this summer's events "are predictable consequences of this kind of sustained economic downturn."

"Manifestations like these have appeared in the U.S. at such times before," he told me, "most obviously in the 1880s and early 1890s," when a sustained period of economic stagnation coincided with the abandonment of the Reconstruction-era commitment to civil rights, the widespread adoption of anti-Chinese legislation and a nationwide wave of lynchings directed not only at blacks, but also Catholics and immigrants...

The lesson is simple: The current controversies are ultimately byproducts of our economic morass. To really dispel the atmosphere of suspicion, what's needed are ideas about how to boost the economy to bring unemployment down and earnings up. Finding policies that do all this will not be easy, but it is the only way to turn the national mood around.
This is a very common idea, and it is supported by a number of lab experiments.

So, America is trapped in a vicious circle: Underinvestment in public goods causes economic decline, which causes prejudice and tribalism, which causes underinvestment in public goods.

How can this cycle be broken? I believe that the only people who can break it are conservatives. If Republican voters realize that government is not the enemy, and that investment in public goods is crucial to their own children's futures, we can arrest the cycle of decline, and set ourselves back on the upward path of economic growth and greater social integration. If you vote Republican, the power to save the country is in your hands.

But for us liberals, there is just not much that we can do, other than to try to persuade our conservative friends that roads and bridges and public education are not just a scheme to steal their money and give it to the brown people. If we fail to make that case, America has a hard, dark road ahead.

Anti-manufacturing bias in the economics blogosphere

The economics blog Vox claims that the U.S. will not be able to increase employment in the manufacturing sector through promotion of goods exports. The reason? Because manufacturing employment has been trending downward.

By this logic, policy can do nothing to reverse deflation, because inflation is trending downward. And policy can do nothing to reverse job losses, because unemployment is trending upward. In other words, policy is assumed to be ineffectual - no trend can be reversed! - and the conclusion is that policy is ineffectual. Ta-da!

Now, the Vox authors do give one piece of data to back up their assertion that the decline in manufacturing output is irreversible: increasing productivity in the manufacturing sector, which "translate[s] into a greater substitution of capital for labour, causing downward pressure on manufacturing employment." In other words, our machines are getting so good that we need fewer and fewer people in manufacturing.

I have several counterarguments.

1. Productivity in U.S. manufacturing also increased strongly from 1945 to 1970. During this time, U.S. manufacturing employment soared.

2. Productivity has also increased in the service sector over the past two decades, even as service-sector employment has soared.

3. Theoretically, there is no reason why increased multifactor productivity should reduce employment. If the manufacturing sector has constant returns to scale, employment should be independent of productivity.

4. An increase in productivity in manufacturing could be due not to technological improvements (as the Vox authors assert), but rather to some exogenous sectoral shift that forces less productive factories out of business (e.g. an undervalued Chinese exchange rate or a U.S. industrial policy that favors finance and other services).

Given these counterarguments, Vox's declaration that the downward trend in manufacturing is inevitable seems to be on very shaky ground.

Of course, they then go on to argue that promoting goods exports will create employment in the service sector, leading them to quietly declare that export promotion is a good idea anyway. This agrees with my idea that "industrial policy" is generally less effective than "sectoral policy" aimed at boosting exports.

But my basic point here is that the econ blogosphere seems to be biased against the idea of manufacturing. They seem to have decided that the shift from a manufacturing-based economy to a service-based economy is as desirable as the shift out of agriculture into manufacturing. But there is just not very much data to back up that idea; at least, not that I've seen so far. I'm skeptical that the trend is entirely good, and I'm skeptical that the trend is entirely irreversible.

A "Smart Sobel" image filter


The original image ("Lena"), left, and the same image transformed via Smart Sobel (right).

Last time, I talked about how to implement Smart Blur. The latter gets its "smartness" from the fact that the blur effect is applied preferentially to less-noisy parts of the image. The same tactic can be used with other filter effects as well. Take the Sobel kernel, for example:

float [] kernel = {
     2,  1,  0,
     1,  0, -1,
     0, -1, -2
};
Convolving an image with this kernel tends to produce an image in which edges (only) have been preserved, in rather harsh fashion, as seen here:


Ordinary Sobel transformation produces a rather harsh result.

This is an effect whose harshness begs to be tamed by the "smart" approach. With a "smart Sobel" filter, we would apply maximum Sobel effect to the least-noisy parts of the image and no Sobel filtering to the "busiest" parts of the image, and interpolate between the two extremes for other parts of the image.

That's easy to do with just some trivial modifications to the Smart Blur code I gave last time. Without further ado, here is the code for the Smart Sobel filter:

import java.awt.image.Kernel;
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.Graphics;

public class SmartSobelFilter {

    double SENSITIVITY = 21;
    int REGION_SIZE = 5;

    float [] kernelArray = {
        2,  1,  0,
        1,  0, -1,
        0, -1, -2
    };

    // Note: the Sobel kernel sums to zero, so (unlike the blur kernel in
    // the Smart Blur filter) it must not be normalized. normalizeKernel()
    // is retained from the blur filter but goes unused here.
    Kernel kernel = new Kernel( 3, 3, kernelArray );

    float [] normalizeKernel( float [] ar ) {
        int n = 0;
        for ( int i = 0; i < ar.length; i++ )
            n += ar[ i ];
        for ( int i = 0; i < ar.length; i++ )
            ar[ i ] /= n;

        return ar;
    }

    public double lerp( double a, double b, double amt ) {
        return a + amt * ( b - a );
    }

    public double getLerpAmount( double a, double cutoff ) {

        if ( a > cutoff )
            return 1.0;

        return a / cutoff;
    }

    // RMS error of the green channel (a quick stand-in for luminance).
    public double rmsError( int [] pixels ) {

        double ave = 0;

        for ( int i = 0; i < pixels.length; i++ )
            ave += ( pixels[ i ] >> 8 ) & 255;

        ave /= pixels.length;

        double diff = 0;
        double accumulator = 0;

        for ( int i = 0; i < pixels.length; i++ ) {
            diff = ( ( pixels[ i ] >> 8 ) & 255 ) - ave;
            diff *= diff;
            accumulator += diff;
        }

        double rms = accumulator / pixels.length;

        rms = Math.sqrt( rms );

        return rms;
    }

    int [] getSample( BufferedImage image, int x, int y, int size ) {

        int [] pixels = {};

        try {
            BufferedImage subimage = image.getSubimage( x, y, size, size );
            pixels = subimage.getRGB( 0, 0, size, size, null, 0, size );
        }
        catch ( Exception e ) {
            // will arrive here if we requested
            // pixels outside the image bounds
        }
        return pixels;
    }

    int lerpPixel( int oldpixel, int newpixel, double amt ) {

        int oldRed = ( oldpixel >> 16 ) & 255;
        int newRed = ( newpixel >> 16 ) & 255;
        int red = (int) lerp( (double) oldRed, (double) newRed, amt ) & 255;

        int oldGreen = ( oldpixel >> 8 ) & 255;
        int newGreen = ( newpixel >> 8 ) & 255;
        int green = (int) lerp( (double) oldGreen, (double) newGreen, amt ) & 255;

        int oldBlue = oldpixel & 255;
        int newBlue = newpixel & 255;
        int blue = (int) lerp( (double) oldBlue, (double) newBlue, amt ) & 255;

        return ( red << 16 ) | ( green << 8 ) | blue;
    }

    int [] blurImage( BufferedImage image,
            int [] orig, int [] blur, double sensitivity ) {

        int newPixel = 0;
        double amt = 0;
        int size = REGION_SIZE;
        int w = image.getWidth();

        for ( int i = 0; i < orig.length; i++ ) {
            int [] pix = getSample( image, i % w, i / w, size );
            if ( pix.length == 0 )
                continue;

            amt = getLerpAmount( rmsError( pix ), sensitivity );
            newPixel = lerpPixel( blur[ i ], orig[ i ], amt );
            orig[ i ] = newPixel;
        }

        return orig;
    }

    public void invert( int [] pixels ) {
        for ( int i = 0; i < pixels.length; i++ )
            pixels[ i ] = ~pixels[ i ];
    }

    public BufferedImage filter( BufferedImage image ) {

        ConvolveOp convolver = new ConvolveOp( kernel, ConvolveOp.EDGE_NO_OP,
                null );

        // clone image into target
        BufferedImage target = new BufferedImage( image.getWidth(),
                image.getHeight(), image.getType() );
        Graphics g = target.createGraphics();
        g.drawImage( image, 0, 0, null );
        g.dispose();

        int w = target.getWidth();
        int h = target.getHeight();

        // get source pixels
        int [] pixels = image.getRGB( 0, 0, w, h, null, 0, w );

        // convolve the cloned image (null destination, so the original
        // image stays intact for the noise analysis in blurImage)
        target = convolver.filter( target, null );

        // get the convolved pixels
        int [] blurryPixels = target.getRGB( 0, 0, w, h, null, 0, w );
        invert( blurryPixels );

        // go thru the image and interpolate values
        pixels = blurImage( image, pixels, blurryPixels, SENSITIVITY );

        // replace original pixels with new ones
        image.setRGB( 0, 0, w, h, pixels, 0, w );
        return image;
    }
}
To use the filter, instantiate it and then call the filter() method, passing a java.awt.image.BufferedImage. The method returns the transformed image (note that the input image is modified in place).

There are two knobs to tweak: SENSITIVITY and REGION_SIZE. The former affects how much interpolation happens between native pixels and transformed pixels; a larger value means a more extreme Sobel effect. The latter is the size of the "neighboring region" that will be analyzed for noisiness as we step through the image pixel by pixel. This parameter affects how "blocky" the final image looks.
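Here's a minimal driver, for reference (file names are placeholders; since the two knobs are package-private fields, this assumes the driver lives in the same package as the filter):

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class SmartSobelDemo {

    public static void main( String [] args ) throws Exception {
        BufferedImage image = ImageIO.read( new File( "lena.png" ) );

        SmartSobelFilter filter = new SmartSobelFilter();
        filter.SENSITIVITY = 30;  // more extreme Sobel effect
        filter.REGION_SIZE = 7;   // larger (blockier) analysis window

        ImageIO.write( filter.filter( image ), "png", new File( "lena-sobel.png" ) );
    }
}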

Ideas for further development:
  • Develop a "Smart Sharpen" filter
  • Combine with a displacement filter for paintbrush effects
  • Overlay (combine) the same image with copies of itself, transformed with various values for SENSITIVITY and REGION_SIZE, to reduce "blockiness"

The return of industrial policy

The stars have aligned, and industrial policy is slowly starting to rise from its centuries-long slumber beneath the ocean floor...
President Obama and congressional Democrats -- out of options for another quick shot of stimulus spending to revive the sluggish economy -- are shifting toward a longer-term strategy that promises to tackle persistently high unemployment by engineering a renaissance in American manufacturing.

That approach, heralded by Obama last week in Detroit and sketched out in a memo to House Democrats as they headed home for the August break, is still evolving and so far focuses primarily on raising taxes on multinational corporations that Democrats accuse of shipping jobs overseas.

The strategy also repackages policies long pursued by the White House -- such as investing in clean energy, roads, bridges and broadband service -- with more than two dozen legislative proposals aimed at developing a plan for promoting domestic manufacturing.

Mark Thoma is highly skeptical of this, but his reasons are telling:
On the "Make It in America" initiative, I have a hard time getting excited about it, and it may leave the administration open to charges of protectionism (though I'm not sure how that charge would play with the Democratic base). I am not a big fan of industrial policy generally, it goes against the instincts that are beaten into economists during their training, but I don't have a better answer to the question of where will the good jobs come from in the future.
Economists are skeptical of industrial policy because they have been taught to be. This is not to say that industrial policy (in any of its many and various forms and incarnations) is a good thing. Rather, economists have the ability to write down models that support any conclusion they like, and generally have neither the inclination nor the ability to use data to destroy the models that don't work. The reason they have been taught to dislike industrial policy is because the people who taught them decided they disliked industrial policy, and hence wrote down models too simple to have a role for industrial policy. In Japan, by contrast, economists decided they liked industrial policy, and so wrote down models to support that conclusion (a fact I discovered while editing economics papers written by Japanese authors).

But what ends up happening is that policymakers end up making policy based not on any economic model, but on the ideological and intellectual assumptions that drove the creation of the prevailing models. When "neoliberalism" was politically popular, we got neoliberal policies and neoliberal models. Now that neoliberalism seems to the casual observer to have crashed and burned, we'll get something else. Maybe industrial policy, maybe something new. But it won't be backed up by data either. Policymakers will continue to "cross the river by feeling for the stones."

Economic security

In explaining the divisions between Democrats/liberals and Republicans/conservatives, I've often focused on cultural, regional, and racial divisions. This is because I fundamentally don't buy the story that the liberal-conservative divide is all about "rent-seeking" (i.e. wealth redistribution) - poor Democrats trying to use the government to steal money from rich Republicans, or vice versa. There's too much evidence against that view, in particular the fact that most poor people don't vote. So I've tended to buy into the idea that political ideologies and parties are held together by tribalism.

But every once in a while, I am led to question this view. This blog post by Matt Yglesias contains a very interesting graphic that makes me wonder if deep economic incentives really might be responsible for a lot of the liberal/conservative divide:
[graphic not reproduced: party identification trends by occupational group]
Yglesias chooses to interpret this graphic in terms of the traditional "rent-seeking" explanation: business owners want favors and handouts from the government, so they vote Republican. But I think he's ignoring the other six panels. Why have "professionals" and "routine white-collar" workers tended Democratic over the last three decades, while both "skilled workers" and "non-skilled workers" trended Republican?

The answer that pops into my head is: economic security. Since 1980, our economy has become much less regulated, forcing firms to compete viciously to stay alive. Since around the same time, globalization has gone into high gear, forcing massive continuing industrial restructuring as newly developed nations force their way into niches that U.S. firms used to dominate. Add to this the IT revolution, which is changing the very nature of work and eliminating the need for whole classes of workers on practically a yearly basis, and you have a recipe for massive economic uncertainty and insecurity.

I believe that it is this insecurity, more than rising inequality or decreasing mobility, which is driving Americans' discontent. Inequality as such doesn't bother us very much, and decreasing mobility has yet to really enter the public consciousness. But uncertainty - the daily fear of being downsized at any time, and behind that the vast looming terror of being made irrelevant by changing technology and trade patterns - is something that dogs each of us every day of our lives.

So how do we respond to this insecurity? I see two basic possibilities. The first is for us to respond as individuals - we "rise to the challenge," work hard, adapt our skills, and learn how to get by in ever-changing circumstances. The second option is for us to respond as a collective, seeking protection from the government in the form of industrial policy, trade protection, and government jobs.

I am guessing that people who choose the first option tend to vote Republican, and people who choose the second tend to vote Democratic.

Why does this make sense? After all, as Yglesias points out, Republicans often use government to benefit specific friendly firms (think: Halliburton); he might have also mentioned that much Republican-backed military spending is just corporate pork for defense contractors. So it is not immediately apparent that Republicans are the party of individual adaptability and Democrats are the party of collective economic security.

Still, looking at the breakdown in the graphic, as well as other things I know about voting patterns, it seems clear to me that the groups tending Democratic are either those most under threat from economic insecurity (traditional "routine white-collar" workers) or those most dependent on the government for their security (govt. workers, and also professionals, who are protected by govt. licensing). The groups who have chosen to go with the economic flow - entrepreneurs and "skilled workers" - are those that have shown the greatest trend towards the GOP. It is not clear why, but people seem to think that the Democrats will use the government to protect them against economic uncertainty, while the Republicans will give them free rein to adapt on their own.

Why this is, I'm not sure. The Democrats have not been more vocal in supporting entry restrictions for the professions, nor have they been particularly protectionist on trade. They have shown little appetite for re-regulating the industries deregulated under Reagan. Meanwhile, the Republicans offer hundreds of billions in defense contracts and corporate pork. Perhaps the divide comes down to pure rhetoric - Republican paeans to "small business" and economic individualism vs. Democratic reminders of the importance of government. Or maybe people believe that if and when popular pressure for trade protectionism and industrial policy becomes strong enough, it will be the Democrats who will be the first to cave in and take action.

Whatever is going on here, though, I think there is a lesson for both parties: economic security is a much bigger issue than either Democrats or Republicans seem to realize. There are policies out there that would mitigate the insecurity without damaging our economy - portable pensions, tax breaks for worker retraining, vocational education, tax-exemption for unemployment insurance, and government-sponsored job-matching programs, to name a few. Encouraging geographic mobility by ending the tax break for home mortgage interest is a step that deserves attention. And, in my opinion, the government needs to take a good hard second look at industrial policy.

My guess is that these policies, and the accompanying rhetoric of economic security, are a gold mine of electoral support that is just sitting there waiting to boost the fortunes of the first party that seizes it. Of course, this blog post is pretty much 100% supposition and me going out on an intellectual limb, so then again, maybe not.

Implementing Smart Blur in Java


Original image. Click to enlarge.


Image with Smart Blur applied. Notice that outlines are
preserved, even where the oranges overlap.


One of my favorite Photoshop effects is Smart Blur, which provides a seemingly effortless way to smooth out JPEG artifacts, remove blemishes from skin in photographs of people, etc. Its utility lies in the fact that despite the considerable blurriness it imparts to many regions of an image, it preserves outlines and fine details (the more important parts of an image, usually). Thus it gives the effect of magically blurring only those parts of the image that you want to be blurred.

The key to how Smart Blur works is that it preferentially blurs parts of an image that are sparse in detail (rich in low-frequency information) while leaving untouched the parts of the image that are comparatively rich in detail (rich in high-frequency information). Abrupt transitions in tone are ignored; areas of subtle change are smoothed (and thus made even more subtle).

The algorithm is quite straightforward:

1. March through the image pixel by pixel.
2. For each pixel, analyze an adjacent region (say, the adjoining 5 pixel by 5 pixel square).
3. Calculate some metric of pixel variance for that region.
4. Compare the variance to some predetermined threshold value.
5. If the variance exceeds the threshold, do nothing.
6. If the variance is less than the threshold, apply blurring to the source pixel. But vary the amount of blurring according to the variance: low variance, more blurring (high variance, less blurring).

In the implementation presented below, I start by cloning the current image and massively blurring the entire (cloned) image. Then I march through the pixels of the original image and begin doing the region-by-region analysis. When I need to apply blurring, I derive the new pixel by linear interpolation between original and cloned-image pixels.

So the first thing we need is a routine for linear interpolation between two values; and a corresponding routine for linear interpolation between two pixel values.

Linear interpolation is easy:

public double lerp( double a, double b, double amt ) {
    return a + amt * ( b - a );
}

Linear interpolation between pixels is tedious-looking but straightforward:

int lerpPixel( int oldpixel, int newpixel, double amt ) {

    int oldRed = ( oldpixel >> 16 ) & 255;
    int newRed = ( newpixel >> 16 ) & 255;
    int red = (int) lerp( (double) oldRed, (double) newRed, amt ) & 255;

    int oldGreen = ( oldpixel >> 8 ) & 255;
    int newGreen = ( newpixel >> 8 ) & 255;
    int green = (int) lerp( (double) oldGreen, (double) newGreen, amt ) & 255;

    int oldBlue = oldpixel & 255;
    int newBlue = newpixel & 255;
    int blue = (int) lerp( (double) oldBlue, (double) newBlue, amt ) & 255;

    return ( red << 16 ) | ( green << 8 ) | blue;
}
Another essential ingredient is a routine for analyzing the pixel variance in a region. For this, I use a root-mean-square error:

public double rmsError( int [] pixels ) {

    // Average the green channel (a quick stand-in for luminance).
    double ave = 0;

    for ( int i = 0; i < pixels.length; i++ )
        ave += ( pixels[ i ] >> 8 ) & 255;

    ave /= pixels.length;

    double diff = 0;
    double accumulator = 0;

    for ( int i = 0; i < pixels.length; i++ ) {
        diff = ( ( pixels[ i ] >> 8 ) & 255 ) - ave;
        diff *= diff;
        accumulator += diff;
    }

    double rms = accumulator / pixels.length;

    rms = Math.sqrt( rms );

    return rms;
}
Before we transform the image, we should have code that opens an image and displays it in a JFrame. The following code does that. It takes the image whose path is supplied in a command-line argument, opens it, and displays it in a JComponent inside a JFrame:

import java.awt.Graphics;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import javax.swing.JComponent;
import javax.swing.JFrame;

public class ImageWindow {

    // This inner class is our canvas.
    // We draw the image on it.
    class ImagePanel extends JComponent {

        BufferedImage theImage = null;

        ImagePanel( BufferedImage image ) {
            super();
            theImage = image;
        }

        public BufferedImage getImage( ) {
            return theImage;
        }

        public void setImage( BufferedImage image ) {
            theImage = image;
            this.updatePanel();
        }

        public void updatePanel() {

            invalidate();
            getParent().doLayout();
            repaint();
        }

        public void paintComponent( Graphics g ) {

            int w = theImage.getWidth( );
            int h = theImage.getHeight( );

            g.drawImage( theImage, 0, 0, w, h, this );
        }
    } // end ImagePanel inner class

    // Constructor
    public ImageWindow( String [] args ) {

        // open image
        BufferedImage image = openImageFile( args[ 0 ] );

        // create a panel for it
        ImagePanel theImagePanel = new ImagePanel( image );

        // display the panel in a JFrame
        createWindowForPanel( theImagePanel, args[ 0 ] );

        // filter the image
        filterImage( theImagePanel );
    }

    public void filterImage( ImagePanel panel ) {

        SmartBlurFilter filter = new SmartBlurFilter( );

        BufferedImage newImage = filter.filter( panel.getImage( ) );

        panel.setImage( newImage );
    }

    public void createWindowForPanel( ImagePanel theImagePanel, String name ) {

        BufferedImage image = theImagePanel.getImage();
        JFrame mainFrame = new JFrame();
        mainFrame.setTitle( name );
        mainFrame.setBounds( 50, 80, image.getWidth( ) + 10,
                image.getHeight( ) + 10 );
        mainFrame.setDefaultCloseOperation( JFrame.EXIT_ON_CLOSE );
        mainFrame.getContentPane().add( theImagePanel );
        mainFrame.setVisible( true );
    }

    BufferedImage openImageFile( String fname ) {

        BufferedImage img = null;

        try {
            File f = new File( fname );
            if ( f.exists( ) )
                img = ImageIO.read( f );
        }
        catch ( Exception e ) {
            e.printStackTrace();
        }

        return img;
    }

    public static void main( String[] args ) {

        new ImageWindow( args );
    }
}


Note the method filterImage(), where we instantiate a SmartBlurFilter. Without further ado, here's the full code for SmartBlurFilter:
import java.awt.image.Kernel;
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.Graphics;

public class SmartBlurFilter {

double SENSITIVITY = 10;
int REGION_SIZE = 5;

float [] kernelArray = {
1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1
};

Kernel kernel = new Kernel( 9,9, normalizeKernel( kernelArray ) );

float [] normalizeKernel( float [] ar ) {
// sum the weights, then divide each entry by the
// total, so that the kernel's entries sum to 1.0
float n = 0;
for (int i = 0; i < ar.length; i++)
n += ar[i];
for (int i = 0; i < ar.length; i++)
ar[i] /= n;

return ar;
}

public double lerp( double a,double b, double amt) {
return a + amt * ( b - a );
}

public double getLerpAmount( double a, double cutoff ) {

if ( a > cutoff )
return 1.0;

return a / cutoff;
}

public double rmsError( int [] pixels ) {

double ave = 0;

for ( int i = 0; i < pixels.length; i++ )
ave += ( pixels[ i ] >> 8 ) & 255;

ave /= pixels.length;

double diff = 0;
double accumulator = 0;

for ( int i = 0; i < pixels.length; i++ ) {
diff = ( ( pixels[ i ] >> 8 ) & 255 ) - ave;
diff *= diff;
accumulator += diff;
}

double rms = accumulator / pixels.length;

rms = Math.sqrt( rms );

return rms;
}

int [] getSample( BufferedImage image, int x, int y, int size ) {

int [] pixels = {};

try {
BufferedImage subimage = image.getSubimage( x,y, size, size );
pixels = subimage.getRGB( 0,0,size,size,null,0,size );
}
catch( Exception e ) {
// will arrive here if we requested
// pixels outside the image bounds
}
return pixels;
}

int lerpPixel( int oldpixel, int newpixel, double amt ) {

int oldRed = ( oldpixel >> 16 ) & 255;
int newRed = ( newpixel >> 16 ) & 255;
int red = (int) lerp( (double)oldRed, (double)newRed, amt ) & 255;

int oldGreen = ( oldpixel >> 8 ) & 255;
int newGreen = ( newpixel >> 8 ) & 255;
int green = (int) lerp( (double)oldGreen, (double)newGreen, amt ) & 255;

int oldBlue = oldpixel & 255;
int newBlue = newpixel & 255;
int blue = (int) lerp( (double)oldBlue, (double)newBlue, amt ) & 255;

return ( red << 16 ) | ( green << 8 ) | blue;
}

int [] blurImage( BufferedImage image,
int [] orig, int [] blur, double sensitivity ) {

int newPixel = 0;
double amt = 0;
int size = REGION_SIZE;

int w = image.getWidth();

for ( int i = 0; i < orig.length; i++ ) {
int [] pix = getSample( image, i % w, i / w, size );
if ( pix.length == 0 )
continue;

amt = getLerpAmount ( rmsError( pix ), sensitivity );
newPixel = lerpPixel( blur[ i ], orig[ i ], amt );
orig[ i ] = newPixel;
}

return orig;
}


public BufferedImage filter( BufferedImage image ) {

ConvolveOp convolver = new ConvolveOp( kernel, ConvolveOp.EDGE_NO_OP, null );

// clone image into target
BufferedImage target = new BufferedImage( image.getWidth(),
image.getHeight(), image.getType() );
Graphics g = target.createGraphics();
g.drawImage( image, 0, 0, null );
g.dispose();

int w = target.getWidth();
int h = target.getHeight();

// save the original (pre-blur) pixels
int [] pixels = image.getRGB( 0, 0, w, h, null, 0, w );

// blur the clone, writing the result into image;
// ConvolveOp.filter( src, dst ) returns dst, so target
// and image now both refer to the blurred version
target = convolver.filter( target, image );

// get the blurred pixels
int [] blurryPixels = target.getRGB( 0, 0, w, h, null, 0, w );

// go through the image and interpolate values
pixels = blurImage( image, pixels, blurryPixels, SENSITIVITY );

// replace the blurred pixels with the interpolated ones
image.setRGB( 0, 0, w, h, pixels, 0, w );
return image;
}
}
Despite all the intensive image analysis, the routine is fairly fast: on my machine, it takes about one second to process a 640x480 image. That's slower than Photoshop by a factor of five or more, but still not bad (given that it's "only Java").
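If you want to time the filter on your own images, a minimal sketch (mine, not from the original post; it assumes the ImageIO, File, and BufferedImage imports shown earlier, and "myphoto.jpg" is a placeholder path) would be:

BufferedImage image = ImageIO.read( new File( "myphoto.jpg" ) );
long start = System.nanoTime();
new SmartBlurFilter().filter( image );
long ms = ( System.nanoTime() - start ) / 1000000L;
System.out.println( "Smart blur took " + ms + " ms" );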

Ideas for further development:
  • Substitute a directional blur for the non-directional blur.
  • Substitute a Sobel kernel for the blur kernel (see the sketch after this list).
  • Try other sorts of kernels as well.
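For the Sobel idea, a horizontal-edge kernel would look something like the sketch below (mine, not from the original post). Note that a Sobel kernel's entries sum to zero, so the normalizeKernel() step would have to be skipped; otherwise it divides by zero:

float [] sobelX = {
-1, 0, 1,
-2, 0, 2,
-1, 0, 1
};
Kernel sobelKernel = new Kernel( 3, 3, sobelX );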

Two Americas

Back at mid-century, we had an America with room for both conservatives and liberals. Liberals got activist government; conservatives got a racist, sexist, mostly religious society. Then the bargain broke down: liberals decided that the racism, sexism, and religiosity had to go, and conservatives retaliated by disallowing activist government. The result was a society that was friendlier, more inclusive, and more economically dynamic, but less effective at spreading economic gains throughout society. This was the new bargain of the 80s and 90s.

The problem is, there were two types of conservatives - businesspeople and small-town traditionalist whites ("STTWs"). The businesspeople got what they wanted from the 80s and 90s (money), while the STTWs got screwed out of their jobs AND had to put up with brown people, gay people, etc. So now the social compact is breaking down again, as the STTWs demand that the brown people shove off. This is the root of the Tea Party.

But America is constitutionally biased toward accepting the brown people, because we're a nation based on immigration. Back when Germans and Swedes were considered brown people, our leaders griped, but let them in anyway. And then when Italians and Greeks and Jews (and maybe Poles) were considered brown people, we had to let them in as well. Letting the "brown" people in is at the core of our national character.

Which is why, to stop the brown people from coming in, conservatives would have to change our national character. Indeed, they would have to repeal the 14th Amendment, which grants birthright citizenship. But the small-town traditionalist whites are so upset at the death of their closed little world that they are willing to scrap the United States of America in retaliation:
On Sunday, Sen. Jon Kyl (R-Ariz.) became the highest-ranking Republican to call for the repeal of the 14th Amendment to the U.S. Constitution. Appearing on CBS' Face the Nation, Kyl said that he opposes allowing children of undocumented immigrants to be granted U.S. citizenship and wants Congress to hold hearings on the matter...

There are already a number of Republican officials who have preceded Kyl in calling for a reworking of the country's citizenship laws. Sen. Lindsey Graham (R-S.C.) has proposed legislation that would repeal the 14th Amendment; he is joined on the House side by Rep. Jack Kimble (R-Calif.)...

Senate candidate Rand Paul (R-Ky.) caused a stir shortly after winning his primary by saying he supported stripping citizenship from children of the undocumented. Former congressman and potential Colorado gubernatorial candidate Tom Tancredo -- one of the staunchest anti-illegal immigration voices in national politics -- has made repeal of the 14th Amendment a major cause.

There are many obscure Republican candidates who have made the same proposal, including Kevin Craig in Missouri and Gary McLeod (an obscure Christian conservative who is challenging -- without much hope -- Majority Whip James Clyburn).

When moderates and liberals talk about "America," they are generally referring to the national institutions that formally define our country - the Constitution, the government, etc. When conservatives refer to "America," however, they generally mean something much different, something much closer to what Sarah Palin tellingly called "the real America." They refer to an American subculture defined by race and religion - a suburban (or small-town) "white" Christian nation.

If our suburbs become populated with Mexicans and Indians and Koreans, the moderate/liberal America will still be perfectly intact as long as the Constitution still reads the same. But the conservative America will either die or have to go through wrenching change (as it did in the 1800s when the Germans came, and in the 1900s when the East Europeans came). This is why conservatives are suggesting changing some of the core provisions of the Constitution - they are holding a gun to the head of our America, threatening to kill it unless we let their own America live.

But their threat is hollow. The Mexicans are here to stay, and the Indians and Asians will not stop coming as long as our labor market demands them. Conservatives do not have the votes to repeal the 14th Amendment. They must either change their America to accommodate the latest wave of "brown" people, or watch it die.