Procedural Textures in HTML5 Canvas

This is a screenshot of the user interface for the procedural texture app.
I've been playing around with the canvas API again, and this time I decided to create a simple HTML page that exposes an interface for creating procedural textures. Behind the scenes, I've included Perlin's famous noise function (see yesterday's post for details). The result is a tool that's as powerful (and fast) as it is fun to play with. (And the best part is, you don't need to host any files on a server: You can run the app straight from disk, with no security restrictions, in Chrome, Firefox, or any HTML5/canvas-capable browser.)

The interface is simple. There's a text box where you can type some code (see illustration at right). Whatever you type there will be executed against every pixel of the 2D canvas. Exposed globals include:

x -- the x coordinate of the current pixel
y -- the y coordinate of the current pixel
w -- the width of the canvas, in pixels
h -- the height of the canvas, in pixels
r -- the red value of the current pixel
g -- the green value of the current pixel
b -- the blue value of the current pixel
PerlinNoise.noise( u,v,w ) -- Perlin's 3D noise function
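
For example, typing something as simple as the following into the text box (this isn't one of the built-in presets, just an illustration) paints a two-axis color gradient:

r = Math.round(255 * x / w);   // red increases from left to right
g = Math.round(255 * y / h);   // green increases from top to bottom
b = 128;                       // constant blue component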

Offhand, you wouldn't think a loop that calls a callback for every pixel of a canvas image would be fast, but in reality the procedural shader can "call out" at a rate of over a million pixels per second. If you make calls to the Perlin noise() function in your loop, that'll slow you down to ~120K pixels per second. But that's still pretty good.

The versatility of the noise() function is truly amazing. The key to using it effectively is to understand how to scale it. By appropriately scaling the x and y parameters, you can stretch the noise space to any degree you want. You can achieve very colorful results, of course, by applying the result in creative ways to the r, g, and b channels. For example:


This texture was achieved with the following shader code:

n = PerlinNoise.noise(x/45,y/120, .89);
n = Math.cos( n * 85);
r = Math.round(n * 255);
b = 255 - r;
g = r - 255 ;

In this instance, the noise is scaled differently in x and y and then "reflected back on itself" (so to speak) using the cosine function, then the color channels are fiddled in such a way that whatever isn't red is blue.

By normalizing the texture space in various ways, you can end up with surprising effects. For example, consider:


centerx = w/2; centery = h/2;
dx = x - centerx; dy = y - centery;
dist = (dx*dx + dy*dy)/6000;
n = PerlinNoise.noise(x/5,y/5,.18);
r = 255 - dist*Math.round(255*n);
g = r - 255; b = 0;

In this case, we calculate the pseudo-distance from the center of the image as dx*dx + dy*dy (scaled by 6000) and fiddle with the colors to make the result red on a black background. The parameters to noise() have been scaled to give a relatively fine-grain noise.

If you download the code for the procedural-shader page (given further below), you can play with this "texture" yourself. Try substituting larger or smaller values for the scaling numbers to see what happens.

A dramatically different effect can be obtained by normalizing x and y and applying trig functions creatively:


x /= w; y /= h; sizex = 1.5; sizey=10;
n=PerlinNoise.noise(sizex*x,sizey*y,.4);
x = (1+Math.cos(n+2*Math.PI*x-.5));
x = Math.sqrt(x); y *= y;
r= 255-x*255; g=255-n*x*255; b=y*255;

Again, if you decide to download the code yourself, try playing with the various sizing parameters to see what the effect on the image is. That's the best way to get a feel for what's going on.

As you know if you've played with procedural textures before, you get a lot of mileage by normalizing x and y first (to keep them in the range of 0..1) and then using functions on them that are also normalized to produce output in the range 0..1. (Sine and cosine can, of course, easily be normalized to stay in the range 0..1.) It goes without saying that once a number is in the range 0..1 it can be squared (or squared-rooted) and still fall in the range 0..1. When you're ready to apply the number to a color channel, then of course you should multiply by 255 so that the result is in the range 0..255.
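
In shader-box terms, the pattern looks something like this (a minimal sketch, not one of the built-in presets):

x /= w; y /= h;                           // normalize coordinates to 0..1
v = (1 + Math.sin(2 * Math.PI * x)) / 2;  // sine output remapped from -1..1 to 0..1
v = v * v;                                // squaring keeps it in 0..1
r = g = b = Math.round(255 * v);          // scale up to 0..255 only at the end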

I've included a number of "presets" in the procedural-texture page (including code for the foregoing images). Here's another one that I like:


x/=w;y/=h; 
size = 20;
n = PerlinNoise.noise(size*x,size*y,.9);
b = 255 - 255*(1+Math.sin(n+6.3*x))/2;
g = 255 - 255*(1+Math.cos(n+6.3*x))/2;
r = 255 - 255*(1-Math.sin(n+6.3*x))/2;

I call this the "noisy rainbow." Without the noise term, it simply paints a rainbow across the image space, but a little added noise gives the effect shown here.

The code includes a few more examples (that aren't shown here). I encourage you to download it and play with it. Simply copy and paste all of the code below into a text file and give it a name that ends in ".html". Then open it in Chrome, Firefox, or any canvas-capable browser.
<html>
<head>
<script>

// A canvas demo by Kas Thomas.
// http://asserttrue.blogspot.com
// Use as you will, at your own risk.

context = null;
canvas = null;

window.onload = function(){

canvas = document.getElementById("myCanvas");
canvas.addEventListener('mousemove', handleMousemove, false);
context = canvas.getContext("2d");
loadHiddenText();
}

function loadHiddenText( ) {

var options = document.getElementsByTagName( "option" );
var spans = document.getElementsByTagName( "span" );

for (var i = 0; i < options.length; i++)
options[i].value = spans[i].innerHTML;
}

// should probably be called resetCanvas()
function clearImage( ) {

canvas.width = canvas.width;
}

function drawViaCallback( ) {

var w = canvas.width;
var h = canvas.height;

var canvasData = context.getImageData(0,0,w,h);

for (var idx, x = 0; x < w; x++) {
for (var y = 0; y < h; y++) {
// Index of the pixel in the array
idx = (x + y * w) * 4;


// The RGB values
var r = canvasData.data[idx + 0];
var g = canvasData.data[idx + 1];
var b = canvasData.data[idx + 2];

var pixel = callback( [r,g,b], x,y,w,h);

canvasData.data[idx + 0] = pixel[0];
canvasData.data[idx + 1] = pixel[1];
canvasData.data[idx + 2] = pixel[2];
}
}
context.putImageData( canvasData, 0,0 );
}

function fillCanvas( color ) {

context.fillStyle = color;
context.fillRect(0,0,canvas.width,canvas.height);
}

function doPixelLoop() {

var code = document.getElementById("code").value;
var f = "callback = function( pixel,x,y,w,h )" +
" { var r=pixel[0];var g=pixel[1]; var b=pixel[2];" +
code + " return [r,g,b]; }";

try {
eval(f);
fillCanvas( "#FFFFFF" );
drawViaCallback( );
}
catch(e) { alert("Error: " + e.toString()); }
}



function handleMousemove (ev) {

var x, y;

// Get the mouse position relative to the canvas element.
if (ev.layerX || ev.layerX == 0) { // Firefox
x = ev.layerX;
y = ev.layerY;
} else if (ev.offsetX || ev.offsetX == 0) { // Opera
x = ev.offsetX;
y = ev.offsetY;
}

document.getElementById("myCanvas").title = x + ", " + y;
}

// This is a port of Ken Perlin's Java code.
PerlinNoise = new function() {

this.noise = function(x, y, z) {

var p = new Array(512)
var permutation = [ 151,160,137,91,90,15,
131,13,201,95,96,53,194,233,7,225,140,36,103,30,69,142,8,99,37,240,21,10,23,
190, 6,148,247,120,234,75,0,26,197,62,94,252,219,203,117,35,11,32,57,177,33,
88,237,149,56,87,174,20,125,136,171,168, 68,175,74,165,71,134,139,48,27,166,
77,146,158,231,83,111,229,122,60,211,133,230,220,105,92,41,55,46,245,40,244,
102,143,54, 65,25,63,161, 1,216,80,73,209,76,132,187,208, 89,18,169,200,196,
135,130,116,188,159,86,164,100,109,198,173,186, 3,64,52,217,226,250,124,123,
5,202,38,147,118,126,255,82,85,212,207,206,59,227,47,16,58,17,182,189,28,42,
223,183,170,213,119,248,152, 2,44,154,163, 70,221,153,101,155,167, 43,172,9,
129,22,39,253, 19,98,108,110,79,113,224,232,178,185, 112,104,218,246,97,228,
251,34,242,193,238,210,144,12,191,179,162,241, 81,51,145,235,249,14,239,107,
49,192,214, 31,181,199,106,157,184, 84,204,176,115,121,50,45,127, 4,150,254,
138,236,205,93,222,114,67,29,24,72,243,141,128,195,78,66,215,61,156,180
];
for (var i=0; i < 256 ; i++)
p[256+i] = p[i] = permutation[i];

var X = Math.floor(x) & 255, // FIND UNIT CUBE THAT
Y = Math.floor(y) & 255, // CONTAINS POINT.
Z = Math.floor(z) & 255;
x -= Math.floor(x); // FIND RELATIVE X,Y,Z
y -= Math.floor(y); // OF POINT IN CUBE.
z -= Math.floor(z);
var u = fade(x), // COMPUTE FADE CURVES
v = fade(y), // FOR EACH OF X,Y,Z.
w = fade(z);
var A = p[X ]+Y, AA = p[A]+Z, AB = p[A+1]+Z, // HASH COORDINATES OF
B = p[X+1]+Y, BA = p[B]+Z, BB = p[B+1]+Z; // THE 8 CUBE CORNERS,

return scale(lerp(w, lerp(v, lerp(u, grad(p[AA ], x , y , z ), // AND ADD
grad(p[BA ], x-1, y , z )), // BLENDED
lerp(u, grad(p[AB ], x , y-1, z ), // RESULTS
grad(p[BB ], x-1, y-1, z ))),// FROM 8
lerp(v, lerp(u, grad(p[AA+1], x , y , z-1 ), // CORNERS
grad(p[BA+1], x-1, y , z-1 )), // OF CUBE
lerp(u, grad(p[AB+1], x , y-1, z-1 ),
grad(p[BB+1], x-1, y-1, z-1 )))));
}
function fade(t) { return t * t * t * (t * (t * 6 - 15) + 10); }
function lerp( t, a, b) { return a + t * (b - a); }
function grad(hash, x, y, z) {
var h = hash & 15; // CONVERT LO 4 BITS OF HASH CODE
var u = h<8 ? x : y, // INTO 12 GRADIENT DIRECTIONS.
v = h<4 ? y : h==12||h==14 ? x : z;
return ((h&1) == 0 ? u : -u) + ((h&2) == 0 ? v : -v);
}
function scale(n) { return (1 + n)/2; }
}

</script>
</head>

<body>
<canvas id="myCanvas" width="300" height="300">
</canvas><br/>

<input type="button" value=" Erase "
onclick="clearImage(); "/>

<select onchange=
"document.getElementById('code').innerHTML = this.value;">
<option>Choose something, then click Execute</option>
<option>Basic Perlin Noise</option>
<option>Waterfall</option>
<option>Spherical Nebula</option>
<option>Green Fibre Bundle</option>
<option>Orange-Blue Marble</option>
<option>Blood Maze</option>
<option>Yellow Lightning</option>
<option>Downward Rainbow Wipe</option>
<option>Noisy Rainbow</option>
<option>Burning Cross</option>
</select>

<br/>
<textarea id="code" cols="40" rows="7">/* Enter code here. */</textarea>
<br/>

<input type="button" value=" Execute "
onclick="doPixelLoop();" />
<input type="button" value="Open as PNG"
onclick="window.open(canvas.toDataURL('image/png'))"/>


<!-- BEGIN HIDDEN TEXT -->
<div hidden="true">
<span>
// you can enter your own code here!
</span>

<span>
x /= w; y /= h;
size = 10;
n = PerlinNoise.noise(size*x,size*y,.8);
r = g = b = 255 * n;
</span>

<span>
x/= 30; y/=3 * (y+x)/w;
n = PerlinNoise.noise(x,y,.18);
b = Math.round(255*n);
g = b - 255; r = 0;
</span>

<span>
centerx = w/2; centery = h/2;
dx = x - centerx; dy = y - centery;
dist = (dx*dx + dy*dy)/6000;
n = PerlinNoise.noise(x/5,y/5,.18);
r = 255 - dist*Math.round(255*n);
g = r - 255; b = 0;
</span>

<span>
x/=w;y/=h;sizex=3;sizey=66;
n=PerlinNoise.noise(sizex*x,sizey*y,.1);
x=(1+Math.sin(3.14*x))/2;
y=(1+Math.sin(n*8*y))/2;
b=n*y*x*255; r = y*b;
g=y*255;
</span>

<span>
centerx = w/2; centery = h/2;
dx = x - centerx; dy = y - centery;
dist = 1.2*Math.sqrt(dx*dx + dy*dy);
n = PerlinNoise.noise(x/30,y/110,.28);
dterm = (dist/88)*Math.round(255*n);
r = dist < 150 ? dterm : 255;
b = dist < 150 ? 255-r : 255;
g = dist < 151 ? dterm/1.5 : 255;
</span>

<span>
n = PerlinNoise.noise(x/45,y/120, .74);
n = Math.cos( n * 85);
r = Math.round(n * 255);
b = 255 - r;
g = r - 255 ;
</span>

<span>
x /= w; y /= h; sizex = 1.5; sizey=10;
n=PerlinNoise.noise(sizex*x,sizey*y,.4);
x = (1+Math.cos(n+2*Math.PI*x-.5));
x = Math.sqrt(x); y *= y;
r= 255-x*255; g=255-n*x*255; b=y*255;
</span>

<span>
// This uses no Perlin noise.
x/=w; y/=h;
b = 255 - y*255*(1 + Math.sin(6.3*x))/2;
g = 255 - y*255*(1 + Math.cos(6.3*x))/2;
r = 255 - y*255*(1 - Math.sin(6.3*x))/2;
</span>

<span>
x/=w;y/=h;
size = 20;
n = PerlinNoise.noise(size*x,size*y,.9);
b = 255 - 255*(1+Math.sin(n+6.3*x))/2;
g = 255 - 255*(1+Math.cos(n+6.3*x))/2;
r = 255 - 255*(1-Math.sin(n+6.3*x))/2;
</span>

<span>
x /= w; y /= h; size = 19;
n = PerlinNoise.noise(size*x,size*y,.9);
x = (1+Math.cos(n+2*Math.PI*x-.5));
y = (1+Math.cos(2*Math.PI*y));
//x = Math.sqrt(x); y = Math.sqrt(y);
r= 255-y*x*n*255; g = r;b=255-r;
</span>
</div>
<!-- END HIDDEN TEXT -->

</body>

</html>

The texture presets have been placed in a hidden div containing a bunch of span elements, and then at runtime the HTML dropdown menu is populated by loadHiddenText().

The Perlin noise() function may look intimidating, but it's not, really. It's a port of Ken Perlin's Java-based reference implementation of noise(). See yesterday's post for more information.

In the meantime, I encourage you to use this demo to explore the possibilities of procedural texture creation in HTML5 canvas. I hope you agree with me that it's a lot of fun, and educational as well.


Perlin Noise in JavaScript

Perlin noise in two dimensions, generated using the code below.
I've been working on an HTML5 canvas-based procedural texture demo (which I'll blog about tomorrow), for which I did a JavaScript port of Ken Perlin's noise() routine (which is in Java). Ahead of tomorrow's blog, I thought I'd briefly discuss Perlin Noise.

Perlin Noise
If you've worked with 3D graphics programs, you're already well familiar with Ken Perlin's famous noise function (which gives rise to so-called Perlin noise). The code for it looks a little scary, but intuitively it's an easy function to understand. Let's take the 2D case (although you can generate Perlin noise for any number of dimensions). Imagine that you have a 256-pixel-square image (blank, all white). Now, imagine that I come along and tell you to mark the canvas off into 32 rows and 32 columns of 8x8-pixel squares. Further imagine that I ask you to assign a random grey value to each square. You've now got a kind of checkerboard pattern of random greys.

What differentiates Perlin noise from random checkerboard noise is that in Perlin's case, the color values are interpolated smoothly from the center of each tile outward, in such a way that you don't see an obvious gridlike pattern. In other words, when you cross a tile boundary, you want the slope of the pixel intensity to be continuous (no discontinuities). You can visualize the end result by imagining that you took the 32x32 random checkerboard pattern and passed it through a Gaussian blur a few times. Pretty soon, you wouldn't even be able to tell that gridlines ever existed in the first place. That's the idea with Perlin noise. You want to interpolate colors from one block to the next in such a way that there are no discontinuities at the cell boundaries. It turns out this requirement can be met in quite a variety of ways (by using cubic splines, quartics, or even sine- or cosine-based interpolation between squares, for example; or by using Perlin's gain() function). There's no one "correct" way to do it.
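
As an illustration of the sort of interpolation involved, here is a simple cosine-based blend between two neighboring values a and b. (This is a sketch only; Perlin's reference implementation uses the quintic fade() curve that appears in the code below.)

// Blend between a and b as t goes from 0 to 1. The cosine term
// flattens the curve at both ends, so adjacent cells meet with
// matching slopes and no visible seam.
function cosineInterpolate(a, b, t) {
    var f = (1 - Math.cos(Math.PI * t)) / 2;
    return a + f * (b - a);
}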

I'd love to be able to link to a good Perlin noise tutorial on the Web, but so far I haven't found one that doesn't try to conflate fractal noise, turbulence, and other topics with Perlin noise. The best treatment I've come across, frankly, is (not surprisingly) in Perlin's own Texturing and Modeling book (which is truly a first-rate book, "must reading" for graphics programmers).

Fortunately, Ken Perlin has done all the hard work for us in writing the necessary interpolation (and other) code for noise(), and he has kindly provided a 3D reference implementation of the noise() function in highly optimized Java. I ported his code to JavaScript (see below) and I'm happy to say it works very well in a canvas environment (as we'll see in tomorrow's post, right here). It's reasonably fast, too. In fact, it's so fast that there's no need to fall back to a 2D version for better speed. This is good, because the 3D version gives you added versatility in case you decide you want to animate your noise in the time domain.

Usage of Perlin's function is very straightforward. It takes 3 arguments (in Java, these are double-precision floating point numbers -- which is fine, because in JavaScript all numbers are IEEE-754 double-precision floating point numbers, under the covers). The way the function is usually used, the first two arguments correspond to the x and y coordinate values of a pixel in 2-space. If you're working in 3-space, the third argument is the z-value. In 2-space, you can call the noise() function with the third argument set to whatever you like. If you're doing a 2D animation and want the texture to animate in real time, you can link the third argument (that z-value) to a time-based index, and the texture will animate smoothly, because you are, in effect, sampling closely spaced slices of a 3D noise space.

The return value from noise() is a double-precision floating point number in the range 0..1. Actually, in Perlin's original code, the return value can range from -1.0 to 1.0, but in my JavaScript port (below), I rescale the return value to 0..1 (see the scale() function at the bottom of the port). Here's the code:

// This is a port of Ken Perlin's Java code. The
// original Java code is at http://cs.nyu.edu/%7Eperlin/noise/.
// Note that in this version, a number from 0 to 1 is returned.
PerlinNoise = new function() {

this.noise = function(x, y, z) {

var p = new Array(512)
var permutation = [ 151,160,137,91,90,15,
131,13,201,95,96,53,194,233,7,225,140,36,103,30,69,142,8,99,37,240,21,10,23,
190, 6,148,247,120,234,75,0,26,197,62,94,252,219,203,117,35,11,32,57,177,33,
88,237,149,56,87,174,20,125,136,171,168, 68,175,74,165,71,134,139,48,27,166,
77,146,158,231,83,111,229,122,60,211,133,230,220,105,92,41,55,46,245,40,244,
102,143,54, 65,25,63,161, 1,216,80,73,209,76,132,187,208, 89,18,169,200,196,
135,130,116,188,159,86,164,100,109,198,173,186, 3,64,52,217,226,250,124,123,
5,202,38,147,118,126,255,82,85,212,207,206,59,227,47,16,58,17,182,189,28,42,
223,183,170,213,119,248,152, 2,44,154,163, 70,221,153,101,155,167, 43,172,9,
129,22,39,253, 19,98,108,110,79,113,224,232,178,185, 112,104,218,246,97,228,
251,34,242,193,238,210,144,12,191,179,162,241, 81,51,145,235,249,14,239,107,
49,192,214, 31,181,199,106,157,184, 84,204,176,115,121,50,45,127, 4,150,254,
138,236,205,93,222,114,67,29,24,72,243,141,128,195,78,66,215,61,156,180
];
for (var i=0; i < 256 ; i++)
p[256+i] = p[i] = permutation[i];

var X = Math.floor(x) & 255, // FIND UNIT CUBE THAT
Y = Math.floor(y) & 255, // CONTAINS POINT.
Z = Math.floor(z) & 255;
x -= Math.floor(x); // FIND RELATIVE X,Y,Z
y -= Math.floor(y); // OF POINT IN CUBE.
z -= Math.floor(z);
var u = fade(x), // COMPUTE FADE CURVES
v = fade(y), // FOR EACH OF X,Y,Z.
w = fade(z);
var A = p[X ]+Y, AA = p[A]+Z, AB = p[A+1]+Z, // HASH COORDINATES OF
B = p[X+1]+Y, BA = p[B]+Z, BB = p[B+1]+Z; // THE 8 CUBE CORNERS,

return scale(lerp(w, lerp(v, lerp(u, grad(p[AA ], x , y , z ), // AND ADD
grad(p[BA ], x-1, y , z )), // BLENDED
lerp(u, grad(p[AB ], x , y-1, z ), // RESULTS
grad(p[BB ], x-1, y-1, z ))),// FROM 8
lerp(v, lerp(u, grad(p[AA+1], x , y , z-1 ), // CORNERS
grad(p[BA+1], x-1, y , z-1 )), // OF CUBE
lerp(u, grad(p[AB+1], x , y-1, z-1 ),
grad(p[BB+1], x-1, y-1, z-1 )))));
}
function fade(t) { return t * t * t * (t * (t * 6 - 15) + 10); }
function lerp( t, a, b) { return a + t * (b - a); }
function grad(hash, x, y, z) {
var h = hash & 15; // CONVERT LO 4 BITS OF HASH CODE
var u = h<8 ? x : y, // INTO 12 GRADIENT DIRECTIONS.
v = h<4 ? y : h==12||h==14 ? x : z;
return ((h&1) == 0 ? u : -u) + ((h&2) == 0 ? v : -v);
}
function scale(n) { return (1 + n)/2; }
}

So let's say you have a function that marches through all the pixel values in an image, and you want to use this code. You need the x and y coordinates of the pixel, the width of the image (as w), and the height (as h). Then you could do something like:

x /= w; y /= h; // normalize
size = 10; // pick a scaling value
n = PerlinNoise.noise( size*x, size*y, .8 );
r = g = b = Math.round( 255 * n );

Here, the z-argument is arbitrarily set to .8, but it could just as well be set to zero or whatever you like. You can fiddle with size to get a result that's visually pleasing (it will vary considerably, depending on the effect that you're trying to achieve). If you're animating the texture, the next time-step might set the z-arg to 0.9, say, instead of 0.8.

In the example given above, we're setting r = g = b, which of course gives a grey pixel. The overall result looks like the picture at the top of this post. In fact, that image was generated using the code shown above.
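
If you do want to animate the texture, here's a rough sketch of the idea. (The per-pixel repaint() helper is hypothetical -- think of a loop like the one in tomorrow's post -- and the scaling factor of 10 is arbitrary.)

var z = 0;

function animateNoise() {
    z += 0.05;                                   // advance through the noise space in z
    repaint(function (x, y, w, h) {              // hypothetical per-pixel callback
        var n = PerlinNoise.noise(10 * x / w, 10 * y / h, z);
        var v = Math.round(255 * n);
        return [v, v, v];                        // grey value for this pixel
    });
    window.requestAnimationFrame(animateNoise);  // schedule the next frame
}

window.requestAnimationFrame(animateNoise);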

Perlin's justly famous noise function is enormously versatile (and a ton of fun to play with). As I say, the most authoritative, in-depth discussion of it occurs in Perlin's Texturing and Modeling book. We'll see more colorful uses of the noise() function in tomorrow's blog. Don't miss it!

Convolution Kernels in HTML5 Canvas

Convolution is a straightforward mathematical process that is fundamental to many image processing effects. If you've played around with the Filter > Other > Custom dialog in Photoshop, you're already familiar with what convolutions can do.
A sharpening convolution applied to Lena.

A convolution applies a matrix (often called a kernel) against each pixel in an image. For any given pixel in the image, a new pixel value is calculated by multiplying the various values in the kernel by corresponding (underlying) pixel values, then summing the result (and rescaling to the applicable pixel bandwidth, usually 0..255). If you imagine a 3x3 kernel in which all values are equal to one, applying this as a convolution is the same as multiplying the center pixel and its eight nearest neighbors by one, then adding them all up (and dividing by 9 to rescale the pixel). In other words, it's tantamount to averaging 9 pixel values, which essentially blurs the image slightly if you do this to every pixel, in turn.
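
In code, the per-pixel work amounts to a weighted sum. Here is a minimal sketch (assuming the kernel and the 3x3 neighborhood are both supplied as flat arrays of nine numbers):

// Weighted sum of a 3x3 neighborhood for one color channel.
function convolvePixel(kernel, neighborhood) {
    var sum = 0;
    for (var i = 0; i < 9; i++)
        sum += kernel[i] * neighborhood[i];
    // Clamp the result back into the usual 0..255 range.
    return Math.max(0, Math.min(255, Math.round(sum)));
}

(The extension code further below handles negative sums slightly differently: it takes their absolute value rather than clamping them to zero.)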

The application of convolutions to an HTML5 canvas image is straightforward. I've created an example Chrome extension that is active whenever you visit a URL ending in ".jpg" or ".png" (from any website). The extension provides a 3x3 convolution kernel (as text fields). You can enter any values you want (positive or negative) in the kernel columns and rows. Behind the scenes, the kernel will be normalized for you automatically. (That simply means each value is divided by the sum of all the values, except in the case where the values sum to zero, in which instance the normalization step is skipped.)

Some convolutions, such as the Sobel kernel, have kernel values that add up to zero. In this case, you end up with a mostly dark image that you'll probably want to invert. My Chrome extension provides an Invert Image button, for just that occasion.
A modified Sobel kernel, plus image inversion.
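
For reference, the classic horizontal Sobel kernel (entered row by row) is:

-1  0  1
-2  0  2
-1  0  1

Its values sum to zero, so the normalization step is skipped and flat regions of the image come out near black -- hence the Invert Image button.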

The UI also includes a Reset button (which reloads the original image and sets the kernel to an identity kernel) and a button that opens the newly modified image in a new window as a PNG that can be saved to disk.

The code for the Chrome extension is shown below. To use it, do this:

1. Copy and paste all of the code into a new file. Call it Kernel.user.js (or whatever you want, but be sure the name ends with .user.js).
2. Save the file (text-only) to any convenient folder.
3. Launch Chrome. Use Control-O to bring up the file-open dialog. Navigate to the file you just saved. Open it.
4. Notice at the very bottom of the Chrome window, there'll be a status warning (saying that extensions can harm your health, etc.) with two buttons, Continue and Discard. Click Continue.
5. In the Confirm Installation dialog that pops up, click the Install button. After you do this, the extension is installed and running. Test it by navigating to any convenient URL that ends in ".jpg" or ".png" (but do note, the extension may fail due to security restrictions if you are loading images from disk, via a "file:" scheme). For best results, navigate to an image on the web using http.


// ==UserScript==
// @name           KernelTool
// @namespace ktKernelTool
// @description Canvas Image Kernel Tool
// @include *
// ==/UserScript==



// A demo script by Kas Thomas.
// Use as you will, at your own risk.


// The stuff under loadCode() will be injected
// into a <script> element in the page.

function loadCode() {


window.KERNEL_SIZE = 3; // 3 x 3 square kernel

window.transformImage = function( x1,y1,w,h ) {

var canvasData = context.getImageData(x1,y1,w,h);

var kernel = getKernelValues( );
normalizeKernel( kernel );

for (var x = 1; x < w-1; x++) {
for (var y = 1; y < h-1; y++) {

// get the real estate around this pixel
// (read from the live canvas; the result isn't written back until the loop ends)
var area =
context.getImageData(x-1,y-1,
KERNEL_SIZE,KERNEL_SIZE);

// Index of the current pixel in the array
var idx = (x + y * w) * 4;

// apply kernel to current index
var rgb = applyKernel( kernel, area, canvasData, idx );

canvasData.data[ idx ] = rgb[0];
canvasData.data[idx+1] = rgb[1];
canvasData.data[idx+2] = rgb[2];
}
}

// inner function that applies the kernel
function applyKernel( k, localData, imageData, pixelIndex ) {

var sumR = 0; var sumG = 0; var sumB = 0;
var n = 0;

for ( var i = 0; i < k.length; i++,n+=4 ) {
sumR += localData.data[n] * k[i];
sumG += localData.data[n+1] * k[i];
sumB += localData.data[n+2] * k[i];
}

if (sumR < 0) sumR *= -1;
if (sumG < 0) sumG *= -1;
if (sumB < 0) sumB *= -1;

return [Math.round( sumR ),Math.round( sumG ),Math.round( sumB )];
}

context.putImageData( canvasData,x1,y1 );
};

window.invertImage = function( ) {

var w = canvas.width;
var h = canvas.height;
var canvasData =
context.getImageData(0,0,w,h);
for (var i = 0; i < w*h*4; i+=4) {
canvasData.data[i] = 255 - canvasData.data[i];
canvasData.data[i+1] = 255 - canvasData.data[i+1];
canvasData.data[i+2] = 255 - canvasData.data[i+2];
}
context.putImageData( canvasData,0,0 );
}

// get an offscreen drawing context for the image
window.getOffscreenContext = function( w,h ) {

var offscreenCanvas = document.createElement("canvas");
offscreenCanvas.width = w;
offscreenCanvas.height = h;
return offscreenCanvas.getContext("2d");
};

window.getKernelValues = function( ) {

var kernel = document.getElementsByClassName("kernel");
var kernelValues = new Array(9);
for (var i = 0; i < kernelValues.length; i++)
kernelValues[i] = 1. * kernel[i].value;
return kernelValues;
}

window.setKernelValues = function( values ) {

var kernel = document.getElementsByClassName("kernel");
for (var i = 0; i < kernel.length; i++)
kernel[i].value = values[i];
}

window.normalizeKernel = function( k ) {

var sum = 0;

for (var i = 0; i < k.length; i++)
sum += k[i];

if (sum > 0)
for (var i = 0; i < k.length; i++)
k[i] /= sum;
}

window.setupGlobals = function() {

window.canvas = document.getElementById("myCanvas");
window.context = canvas.getContext("2d");
var imageData = context.getImageData(0,0,canvas.width,canvas.height);
window.offscreenContext = getOffscreenContext( canvas.width,canvas.height );
window.offscreenContext.putImageData( imageData,0,0 );
};

setupGlobals(); // actually call it

// enable the buttons now that code is loaded
document.getElementById("reset").disabled = false;
document.getElementById("invert").disabled = false;
document.getElementById("PNG").disabled = false;

} // end loadCode()



/* * * * * * * * * main() * * * * * * * * */

(function main( ) {

// are we really on an image URL?
var ext = location.href.split(".").pop();
if (ext.match(/jpg|jpeg|png/) == null )
return;

// ditch the original image
img = document.getElementsByTagName("img")[0];
img.parentNode.removeChild(img);

// put scripts into the page scope in
// a <script> elem with id = "myCode"
// (we will eval() it in an event later...)
var code = document.createElement("script");
code.setAttribute("id","myCode");
document.body.appendChild(code);
code.innerHTML += loadCode.toString() + "\n";

// set up canvas
canvas = document.createElement("canvas");
canvas.setAttribute("id","myCanvas");
document.body.appendChild( canvas );

context = canvas.getContext("2d");

image = new Image();

image.onload = function() {

canvas.width = image.width;
canvas.height = image.height;
context.drawImage(image,0, 0,canvas.width,canvas.height );
};

// This line must come after, not before, onload!
image.src = location.href;

createKernelUI( );
createApplyButton( );
createResetButton( );
createInvertImageButton( );
createPNGButton( ); // create UI for Save As PNG

function createPNGButton( ) {

var button = document.createElement("input");
button.setAttribute("type","button");
button.setAttribute("value","Open as PNG...");
button.setAttribute("id","PNG");
button.setAttribute("disabled","true");
button.setAttribute("onclick",
"window.open(canvas.toDataURL('image/png'))" );
document.body.appendChild( button );
}

function createInvertImageButton( ) {

var button = document.createElement("input");
button.setAttribute("type","button");
button.setAttribute("value","Invert Image");
button.setAttribute("id","invert");
button.setAttribute("disabled","true");
button.setAttribute("onclick",
"invertImage()" );
document.body.appendChild( button );
}

function createResetButton( ) {

var button = document.createElement("input");
button.setAttribute("type","button");
button.setAttribute("value","Reset");
button.setAttribute("id","reset");
button.setAttribute("disabled","true");
button.setAttribute("onclick",
"var data = offscreenContext.getImageData(0,0,canvas.width,canvas.height);" +
"context.putImageData(data,0, 0 );" +
"setKernelValues([0,0,0,0,1,0,0,0,0]);" );
document.body.appendChild( button );
}

// This will load code if it hasn't been loaded yet.
function createApplyButton( ) {

var button = document.createElement("input");
button.setAttribute("type","button");
button.setAttribute("value","Apply");
button.setAttribute("onclick","if (typeof codeLoaded == 'undefined')" +
"{ codeLoaded=1; " +
"code=document.getElementById(\"myCode\").innerHTML;" +
"eval(code); loadCode(); }" +
"transformImage(0,0,canvas.width,canvas.height);" );
document.body.appendChild( button );
}

function createKernelUI( ) {

var kdiv = document.createElement("div");
var elem = new Array(9);

for ( var i = 0; i < 9; i++ ) {
elem[i] = document.createElement("input");
elem[i].setAttribute("type","text");
elem[i].setAttribute("value","1");
elem[i].setAttribute("class","kernel");
elem[i].setAttribute("style","width:24px");
elem[i].setAttribute("id","k" + i);
}
for ( var i = 0; i < 9; i++ ) {
kdiv.appendChild( elem[i] );
if (i == 2 || i == 5 || i == 8)
kdiv.innerHTML += "<br/>";
}

document.body.appendChild( kdiv );
}
})();

It can be fun and educational to experiment with new kernel values (and to apply more than one convolution sequentially to achieve new effects). With the right choice of values, you can easily achieve blurring, sharpening, embossing, and edge detection/enhancement, among other effects. 
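
A few well-known 3x3 kernels to try (these are standard textbook values, not presets built into the extension), entered row by row:

Box blur (the tool normalizes it to ninths for you):
 1  1  1
 1  1  1
 1  1  1

Sharpen:
 0 -1  0
-1  5 -1
 0 -1  0

Emboss:
-2 -1  0
-1  1  1
 0  1  2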

Incidentally, for more information about the Lena test image (in case you're not familiar with the interesting backstory), check out http://en.wikipedia.org/wiki/Lenna.

The Great Ricardian Equivalence Throwdown!


Y'all know I cannot resist wading into a good macro throwdown.

First, a summary of the action!!

This week's econ-blogosphere mayhem started when Paul Krugman wrote a post about the idea of Ricardian Equivalence (the idea that the timing of taxes doesn't matter), and why it doesn't imply that fiscal stimulus can't work. As an example of someone who does think that Ricardian Equivalence makes stimulus a non-starter, Krugman cited some remarks by uber-macroeconomist Robert Lucas:
But, if we do build the bridge by taking tax money away from somebody else, and using that to pay the bridge builder — the guys who work on the bridge — then it’s just a wash. It has no first-starter effect. You apply a multiplier to the bridge builders, then you’ve got to apply the same multiplier with a minus sign to the people you taxed to build the bridge. And then taxing them later isn’t going to help, we know that.
Krugman's argument: Ricardian Equivalence says that the timing of taxes can't matter for the economy, not that the level of government spending can't matter.

Mark Thoma concurred: Ricardian Equivalence does not say that stimulus can't work, and Ricardian Equivalence is wrong anyway. But if it were right, it would only be an argument against tax-rebate stimulus, not against government-expenditure stimulus.

Then Krugman came under fire from David Andolfatto, who says that Lucas's statement was obviously not talking about Ricardian equivalence, and, hence, Krugman must not understand what Ricardian Equivalence is. Steve Williamson takes a somewhat less harsh line, saying that Krugman must not understand what Lucas was trying to say.

Krugman fired back, as did Andolfatto and Williamson. Much fun was had by all. I think I'm going to dub Andolfatto and Williamson the "Krugman-Teasing Brigade of St. Louis."

Update: Andolfatto has a new post on toy models that would get you dY/dG=0 (i.e., govt. spending doesn't affect output, i.e. Lucas' claim). It's a good post, but the toy models don't include prices, which are essential to many of the arguments for fiscal stimulus. Krugman challenges Andolfatto to explain the crux of his arguments without math.

Update: John Cochrane responds to Krugman, criticizing an example that Krugman used in his initial post. Along the way, Cochrane states that Ricardian Equivalence, by itself, implies that stimulus is ineffective. Mark Thoma convincingly refutes that latter statement, citing Robert Barro, the actual inventor of Ricardian Equivalence. DeLong argues against Cochrane's criticism of Krugman's example (and again here). Krugman also fires back at Cochrane. Karl Smith chimes in on the side of stimulus.

* * *

Now on to my (partially mistaken) contribution to the debate!

So allow me to wade in here. First of all, though it might not be clear from the heated exchange, Krugman, Andolfatto, Thoma, and Williamson all actually agree on the most important point! Ricardian Equivalence is about the timing of taxes, not about the effect of government spending. Hence, Ricardian Equivalence doesn't say whether or not government spending helps or hurts the economy. Everyone agrees about that!

Actually, this argument is about the second-order issue of what Bob Lucas was trying to say. So let me talk about that.

Lucas is restating Say's Law (Update: Actually, no! I made a mistake here; see below.). Say's Law says, basically, exactly what Lucas says: If you take money from Person A and give it to Person B, then total output (GDP) will be unchanged. The idea that the effect of government spending is exactly canceled out by the effect of taxes is a very common argument for why stimulus can't work. Lucas is saying that A) government spending cannot change output, and that B) you can't get around that principle by taxing people in the future instead of today, because people are forward-looking and have rational expectations, so that the expectation of future taxation has the same effect as taxation in the present. That last part - the idea that expected future taxes have the same effect as present taxes - is Ricardian Equivalence.

So, Lucas is saying:
(dY/dG = 0 in the static case) + (Ricardian Equivalence) = (dY/dG = 0 in the dynamic case)

Now, it seems to me that IF you believe that dY/dG = 0 holds in the static case (i.e., for tax-financed stimulus), and IF you believe in Ricardian Equivalence, it's reasonable to conclude that dY/dG = 0 holds for deficit-financed stimulus as well. The simplest interpretation of Lucas' statements is that he believes both.

Krugman thinks that Lucas thinks that Ricardian Equivalence implies that dY/dG = 0 holds. Does Lucas think that? I don't think we can know, just from those short remarks.

But actually, I know of a pretty simple way to modify the Ricardian Equivalence Theorem so that it does imply dY/dG = 0. All you have to do is assume that government spending, G, is handed out to people as lump-sum transfers (either today or in the future), instead of used to make purchases. With this modification, G just becomes negative taxation. And since taxes in the Ricardian Equivalence model are non-distortionary, government spending would be non-distortionary too. The level of G would not affect output.

It may be that Lucas had such a model in mind. I often encounter the assumption, among economists and non-economists alike, that government spending consists purely of transfers. It is an explicit assumption in many models. It is not an assumption that, in my opinion, makes a lot of sense. But if Lucas was working with that assumption, then he could in fact start with Ricardian Equivalence and end up with dY/dG = 0:

(Ricardian Equivalence) + (All govt. spending is nondistortionary transfers) = (dY/dG = 0 in the dynamic case)

And Krugman would thus have read Lucas 100% correctly.

It's possible, though, that Krugman did read Lucas wrong, and that Lucas believes in the ineffectiveness of government spending for other reasons entirely. In that case, Krugman should simply do a follow-up post called "Lucas is wrong even if Ricardian Equivalence is right". Even if Krugman overestimated the degree to which Lucas was mentally extending the Ricardian Equivalence model, it's still true that Lucas' belief in the ineffectiveness of government spending isn't something most macroeconomists would agree with.

I also think that Krugman's initial post was meant to say "Anyone who reads Lucas' remarks and comes away thinking that Ricardian Equivalence implies the ineffectiveness of government spending is wrong." Which would also be a good point.

So basically, I score this throwdown: Krugman 2, Krugman-Teasing Brigade of St. Louis 1. On one hand, Ricardian Equivalence definitely does not imply Lucas' claim, except in a special and unrealistic case (e.g. where all spending is just transfers). So nobody should try to overuse Ricardian Equivalence in this way! On the other hand, Krugman may or may not have overinterpreted the degree to which Lucas' belief in the ineffectiveness of government spending actually springs from his belief in Ricardian Equivalence. Who can know. But it's not a big deal. Because the big, important point is that, Ricardian Equivalence or no, Say's Law is just not right (Update: and from Say's Law not being right, it's only a short hop to what Lucas said being not right either), and Lucas was therefore making a very unorthodox and controversial claim.


Update: Smacked down by Brad DeLong! DeLong takes issue with two aspects of my post. The first is that I have mis-stated Say's Law:
I think Noah Smith is wrong here. Say's Law does not say that fiscal policy cannot affect spending but monetary policy can. Say's Law says that neither monetary nor fiscal policy can affect the level of spending because supply creates demand...
I agree that Lucas is wrong. But to say "Lucas believes in Say's Law" is, I think, not quite the right way to put it, for Lucas's statements are not consistent with Say's Law. 
Brad is correct. I stupidly and lazily copied the Lucas quote to my post and then considered it in isolation, forgetting that Lucas had said elsewhere in his remarks that monetary policy could be effective (a statement that is not consistent with Say's Law). So when I said "Say's Law" in the post above, I was completely wrong. Commenter TGGP actually pointed this out as well. Anyway, doh.


Brad also doesn't like that I am making claims about what Lucas "believes":
Noah Smith uses the phrase "X believes" as shorthand for "X's statements are consistent with a model in which". I think that is a misleading way to think about it... 
Now there is a sense in which this is a totally fruitless exercise: there is no point in trying to set out what the coherent model underlying somebody's thinking is when in fact there is no coherent model underlying their thinking.
I agree that we can't really know what model of the economy Lucas actually believes, or what probability weights he puts on various models. Or if he even had any formal model in mind at all when he made his remarks. I might be being too generous to Lucas - he might have just been tossing off incoherent statements without thinking of their implications, as DeLong says. Or I might be unfairly putting words in Lucas' mouth - he might actually have an underlying model in mind that is much more complex and has much more believable assumptions than the toy models Brad and I postulate (if so, Lucas should publish it).

It would have been more accurate for me to have said: "Lucas states that the level and type of government spending does not affect output, all other variables being equal. It is possible that Lucas arrived at this conclusion by using a slight modification of the assumptions that lead to the Ricardian Equivalence result, i.e. that Lucas was thinking about Ricardian Equivalence and simply assumed that government spending = transfers, and concluded that spending doesn't affect output. It is also possible that Lucas had some other model in mind, and if that is the case, then we just don't know what it is. Or it's possible that Lucas had no model in mind at all. But the statement that government spending can't affect output is not true in most models,  so whether Lucas' statement was motivated by a modified Barro-Ricardo type model is a bit of a moot point."


Update 2: Krugman also catches my mistake. He also describes Lucas' argument as "Ricardianoid", which I think is a good term for a model that starts with the Barro-Ricardo model and adds the assumption that government spending is pure transfer.

The liberty of local bullies


I have not been surprised by any of the quotes that have recently come to light from Ron Paul's racist newsletters. I grew up in Texas, remember, and I know from experience that if you talk to a hardcore Paul supporter for a reasonable length of time, these sorts of ideas are more likely than not to come up.

So does this mean that Ron Paul's libertarianism is merely a thin veneer covering a bedrock of tribalist white-supremacist paleoconservatism? Well, no, I don't think so. Sure, the tribalist white-supremacist paleoconservatism is there. I just don't think it's incompatible with libertarianism.

I have often remarked in the past how libertarianism - at least, its modern American manifestation - is not really about increasing liberty or freedom as an average person would define those terms. An ideal libertarian society would leave the vast majority of people feeling profoundly constrained in many ways. This is because the freedom of the individual can be curtailed not only by the government, but by a large variety of intermediate powers like work bosses, neighborhood associations, self-organized ethnic movements, organized religions, tough violent men, or social conventions. In a society such as ours, where the government maintains a nominal monopoly on the use of physical violence, there is plenty of room for people to be oppressed by such intermediate powers, whom I call "local bullies."

The modern American libertarian ideology does not deal with the issue of local bullies. In the world envisioned by Nozick, Hayek, Rand, and other foundational thinkers of the movement, there are only two levels to society - the government (the "big bully") and the individual. If your freedom is not being taken away by the biggest bully that exists, your freedom is not being taken away at all.

In a perfect libertarian world, it is therefore possible for rich people to buy all the beaches and charge admission fees to whomever they want (or simply ban anyone they choose). In a libertarian world, a self-organized cartel of white people can, under certain conditions, get together and effectively prohibit black people from being able to go out to dinner in their own city. In a libertarian world, a corporate boss can use the threat of unemployment to force you into accepting unsafe working conditions. In other words, the local bullies are free to revoke the freedoms of individuals, using methods more subtle than overt violent coercion.

Such a world wouldn't feel incredibly free to the people in it. Sure, you could get together with friends and pool your money to buy a little patch of beach. Sure, you could move to a less racist city. Sure, you could quit and find another job. But doing any of these things requires paying large transaction costs. As a result you would feel much less free.

Now, the founders of libertarianism - Nozick et al. - obviously understood the principle that freedoms are often mutually exclusive - that my freedom to punch you in the face curtails quite a number of your freedoms. For this reason, they endorsed "minarchy," or a government whose only role is to protect people from violence and protect property rights. But they didn't extend the principle to covertly violent, semi-violent, or nonviolent forms of coercion.

Not surprisingly, this gigantic loophole has made modern American libertarianism the favorite philosophy of a vast array of local bullies, who want to keep the big bully (government) off their backs so they can bully to their hearts' content. The curtailment of government legitimacy, in the name of "liberty," allows abusive bosses to abuse workers, racists to curtail opportunities for minorities, polluters to pollute without cost, religious groups to make religious minorities feel excluded, etc. In theory, libertarianism is about the freedom of the individual, but in practice it is often about the freedom of local bullies to bully. It's a "don't tattle to the teacher" ideology.

Therefore I see no real conflict between Ron Paul's libertarianism and his support for the agenda of racists. It's just part and parcel of the whole movement. Not necessarily the movement as it was conceived, but the movement as it in fact exists.

Wages and the Great Vacation: Casey Mulligan responds


Two posts back, I explained why the "Great Vacation" idea doesn't pass the smell test. If U.S. unemployment had been caused by a negative shock to labor supply, we should have expected to see an increase in real wages.

Casey Mulligan, one of the leading proponents of the Great Vacation story, responded on his blog:
A number of bloggers have recently discovered real wages as a labor market indicator. They are at least 3 years late to the party. 
Three years ago I blogged about the effect of labor supply on real wages. 
I noted how real wages had risen since 2007, and predicted that they would begin to decline in 2010. 
I have continued to update this work, eg here, and here. 
The fact is that the real wage time series fits my recession narrative very well.
Well, in response to that, let's look at the numbers. Here, courtesy of FRED, is a graph of real compensation per hour in the nonfarm business sector:


A negative shock to labor supply should be associated with a spike in real compensation per hour. Looking at this graph, do you see such a spike? I do not. In fact, if I were to tell you that there had been a Great Vacation, and asked you to point out its beginning on that graph (without showing you the gray bars), you would probably say that it began in 2003, or maybe 2006 or 2009. You would not predict that a supply-driven recession began in 2008, when our real recession actually began.

Yes, it is true that real wages rebounded fairly rapidly from the trough to which they fell at the beginning of the Great Recession. And it is true that they climbed slightly higher after that, in 2009, reaching a peak about 2% higher than their 2006 peak. So Mulligan's statement that real wages rose during the Great Recession is correct.

However, note the size of the rise. There is no discernible increase in the rate of growth of real wages during the Great Recession. The wage growth to which Mulligan refers was slower by far, for example, than the growth that occurred between 2000 and 2004. If the Great Recession were caused by a massive negative labor supply shock, we would expect to see wages accelerate as employment fell. They did not. And the sharp downward spike in real wages in 2008 is especially hard to reconcile with a Great Vacation story.

I maintain my original case that the wage data shows no sign of a Great Vacation. If a Great Vacation in fact occurred, it had to have been a much more complicated sort of thing than the kind of negative labor supply shock that is taught in Econ 101.

How to Save a Canvas as PNG

In yesterday's post, I showed how to render an image into an HTML5 canvas element (and then operate on it with canvas API calls). When you've made changes to a canvas image, the time may come when you want to save the canvas as a regular PNG image. As it turns out, doing that isn't hard at all.

The key is to use canvas.toDataURL('image/png') to serialize the image as a data URI, which you can (of course) open in a new window with window.open( uri ). Once the image is open in a new window (note: you may have to instruct your browser to allow popups), you can right-click on the image to get the browser's Save Image As... command in a context menu. From there, you just save the image as you normally would.

The following code can be added to yesterday's example in order to create a button on the page called Open as PNG...

function createPNGButton( ) {

var button = document.createElement("input");
button.setAttribute("type","button");
button.setAttribute("value","Open as PNG...");
button.setAttribute("onclick",
"window.open(canvas.toDataURL('image/png'))" );
document.body.appendChild( button );
}
As you can see, there's no rocket science involved. Just a little HTML5 magic.

Gamma Adjustment in an HTML5 Canvas

I came up with kind of a neat trick I'd like to share. If you're an HTML canvas programmer, listen up. You just might get a kick out of this.

You know how, when you're loading images in canvas (and then fiddling with the pixels using canvas API code), you have to load your scripts and images from the same server? (For security reasons.) That's no problem for the hard-core geeks among us, of course. Many web developers keep a local instance of Apache or other web server running in the background for just such occasions. (As an Adobe employee, I'm fortunate to be able to run Adobe WEM, aka Day CQ, on my machine.) But overall, it sucks. What I'd like to be able to do is fiddle with any image, taken from any website I choose, any time I want, without having to run a web server on my local machine.

So, what I've done is create a Chrome extension that comes into action whenever my browser is pointed at any URL that ends in ".png" or ".jpg" or ".jpeg". The instant the image in question loads, my extension puts it into a canvas element, re-renders it, and exposes its 2D context for scripts to work against.

For demo purposes, I've included some code for making gamma adjustments to the image via canvas-API calls (which I'll talk more about later).

The code for the Chrome extension is shown below. To use it, do this:

1. Copy and paste all of the code into a new file. Call it BrightnessTest.user.js. Actually, call it whatever you want, but be sure the name ends with .user.js.

2. Save the file (text-only) to any convenient folder.

3. Launch Chrome. (I did all my testing in Chrome. The extension should be Greasemonkey-compatible, but I have not tested it in Firefox.) Use Control-O to bring up the file-open dialog. Navigate to the file you just saved. Open it.

4. Notice at the very bottom of the Chrome window, there'll be a status warning (saying that extensions can harm your loved ones, etc.) with two buttons, Continue and Discard. Click Continue.

5. In the Confirm Installation dialog that pops up, click the Install button. After you do this, the extension is installed and running.

Test the extension by navigating to http://goo.gl/UQpRA (the penguin image shown in the above screenshots). Please try a small image (like the penguin) first, for performance reasons. Note: Due to security restrictions, you can't load images from disk (no file: scheme in the URL; only http: and/or https: are allowed). Any PNG or JPG on the web should work.

When the image loads, you should see a small slider underneath it. This is an HTML5 input element. If you are using an obsolete version of Chrome, you might see a text box instead.

If you move the slider to the right, you'll (in effect) do a gamma adjustment on the image, tending to make the image lighter. Move the slider to the left, and you'll darken the image. No actual image processing takes place until you lift your finger off the mouse (the slider just slides around until a mouseup occurs, then the program logic kicks in). After the image repaints, you should see a gamma curve appear under the slider, as in the examples above.

Here's the code for the Chrome extension:

// ==UserScript==
// @name ImageBrightnessTool
// @namespace ktBrightnessTool
// @description Canvas Image Brightness Tool
// @include *
// ==/UserScript==



// A demo script by Kas Thomas.
// Use as you will, at your own risk.


// The stuff under loadCode() will be injected
// into a <script> element in the page.

function loadCode() {

window.LUT = null;

// Ken Perlin's bias function
window.bias = function( a, b) {
return Math.pow(a, Math.log(b) / Math.log(0.5));
};

window.createLUT = function( biasValue ) {
// create global lookup table for colors
LUT = createBiasColorTable( biasValue );
};

window.createBiasColorTable = function( b ) {

var table = new Array(256);
for (var i = 0; i < 256; i++)
table[i] = applyBias( i, b );
return table;
};

window.applyBias = function( colorValue, b ) {

var normalizedColorValue = colorValue/255;
var biasedValue = bias( normalizedColorValue, b );
return Math.round( biasedValue * 255 );
};

window.transformImage = function( x,y,w,h ) {

var canvasData = offscreenContext.getImageData(x,y,w,h);
var limit = w*h*4;

for (var i = 0; i < limit; i++)
canvasData.data[i] = LUT[ canvasData.data[i] ];

context.putImageData( canvasData,x,y );
};


// get an offscreen drawing context for the image
window.getOffscreenContext = function( w,h ) {

var offscreenCanvas = document.createElement("canvas");
offscreenCanvas.width = w;
offscreenCanvas.height = h;
return offscreenCanvas.getContext("2d");
};

window.getChartURL = function() {

var url = "http://chart.apis.google.com/chart?";
url += "chf=bg,lg,0,EFEFEF,0,BBBBBB,1&chs=100x100&";
url += "cht=lc&chco=FF0000&&chds=0,255&chd=t:"
url += LUT.join(",");
url += "&chls=1&chm=B,EFEFEF,0,0,0";
return url;
}

setupGlobals = function() {

window.canvas = document.getElementById("myCanvas");
window.context = canvas.getContext("2d");
var imageData = context.getImageData(0,0,canvas.width,canvas.height);
window.offscreenContext = getOffscreenContext( canvas.width,canvas.height );
window.offscreenContext.putImageData( imageData,0,0 );
};

setupGlobals(); // actually call it

} // end loadCode()



/* * * * * * * * * main() * * * * * * * * */

(function main( ) {

// are we really on an image URL?
var ext = location.href.split(".").pop();
if (ext.match(/jpg|jpeg|png/) == null )
return;

// ditch the original image
img = document.getElementsByTagName("img")[0];
img.parentNode.removeChild(img);

// put scripts into the page scope in
// a <script> elem with id = "myCode"
// (we will eval() it in an event later...)
var code = document.createElement("script");
code.setAttribute("id","myCode");
document.body.appendChild(code);
code.innerHTML += loadCode.toString() + "\n";

// set up canvas
canvas = document.createElement("canvas");
canvas.setAttribute("id","myCanvas");
document.body.appendChild( canvas );

context = canvas.getContext("2d");

image = new Image();

image.onload = function() {

canvas.width = image.width;
canvas.height = image.height;
context.drawImage(image,0, 0,canvas.width,canvas.height );
};

// This line must come after, not before, onload!
image.src = location.href;

createSliderUI( ); // create the slider UI
createGoogleChartUI( ); // create chart UI


function createGoogleChartUI( ) {
// set up iframe for Google Chart
var container = document.createElement("div");
var iframe = document.createElement("iframe");
iframe.setAttribute("id","iframe");
iframe.setAttribute("style","padding-left:14px");
iframe.setAttribute("frameborder","0");
iframe.setAttribute("border","0");
iframe.setAttribute("width","101");
iframe.setAttribute("height","101");
container.appendChild(iframe);
document.body.appendChild(container);
}


// Create the HTML5 slider UI
function createSliderUI( ) {

var div = document.body.appendChild( document.createElement("div") );
var slider = document.createElement("input");
slider.setAttribute("type","range");
slider.setAttribute("min","0");
slider.setAttribute("max","100");
slider.setAttribute("value","50");
slider.setAttribute("step","1");

// if code hasn't been loaded already, then load it now
// (one time only!); update the slider range indicator;
// create a color lookup table
var actionCode = "if (typeof codeLoaded == 'undefined')" +
"{ codeLoaded=1; " +
"code=document.getElementById(\"myCode\").innerHTML;" +
"eval(code); loadCode(); }" +
"document.getElementById(\"range\").innerHTML=" +
"String(this.value*.01).substring(0,4);" +
"createLUT( Number(document.getElementById('range').innerHTML) );"


slider.setAttribute("onchange",actionCode);


// The following operation is too timeconsuming to attach to
// the onchange event. We attach it to onmouseup instead.
slider.setAttribute("onmouseup",
"document.getElementById('iframe').src=getChartURL();"+
"transformImage(0,0,canvas.width,canvas.height);");

div.appendChild( slider );
div.innerHTML += '<span id="range">0.5</span>';

}


})();

This code is a little less elegant in Chrome than it would have been in Firefox (which, unlike Chrome, supports E4X and exposes a usable unsafeWindow object). The code does, however, illustrate a number of useful techniques. To wit:

1. How to swap out an <img> for a canvas image.
2. How to draw to an offscreen context.
3. How to inject script code into page scope from extension (content-script) scope.
4. How to use the HTML5 slider input element.
5. How to change the gamma (or, colloquially and somewhat incorrectly, "brightness") of an image's pixels via a color lookup table.
6. How to use Ken Perlin's bias( ) function to remap pixel values in the range 0..255.
7. How to display the resulting gamma curve (actually, bias curve) in a Google Chart in real time.

That's a fair amount of stuff, actually. Discussing it could take a long time. The code's not long, though, so you should be able to grok most of it from a quick read-through.

The most important concept here, from an image-processing standpoint, is remapping pixel values through a pre-calculated lookup table. The naive (and very slow) approach would be to call bias() separately on every red, green, and blue value in the image -- hundreds of thousands of calls, maybe millions in a sizable image. Instead, we call bias() just 256 times to build a table, then substitute each color value with a simple array lookup.
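
Here's the difference in miniature. This is only a sketch -- it assumes the bias() and createBiasColorTable() functions from the listing above, a bias value b, and an ImageData array called data:

// Naive: one bias() call per color value (slow)
var i;
for (i = 0; i < data.length; i++)
data[i] = Math.round( 255 * bias( data[i]/255, b ) );

// With a lookup table: 256 bias() calls up front, then pure array lookups (fast)
var LUT = createBiasColorTable( b );
for (i = 0; i < data.length; i++)
data[i] = LUT[ data[i] ];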

If this is the first time you've encountered Ken Perlin's bias() function, it belongs to a class of remapping functions that's well worth understanding. Fundamentally, it remaps the unit interval (that is, real numbers in the range 0..1) to itself. With a bias value of 0.5, every number maps to its original value. With a bias value less than 0.5, the remapping is swayed in the manner shown in the screenshot above, on the right; a bias value greater than 0.5 bends the curve in exactly the opposite direction. In every case, though, 0 maps to 0 and 1 maps to 1, no matter where the bias knob is set. The function is, in that sense, nicely normalized.
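
A few spot checks make the behavior concrete (these use the same bias() function as in the listing above):

bias( 0.5, 0.5 )   // 0.5  -- a bias of 0.5 is the identity mapping
bias( 0.25, 0.5 )  // 0.25
bias( 0.5, 0.75 )  // 0.75 -- the bias value is exactly where 0.5 lands
bias( 0.5, 0.25 )  // 0.25
bias( 0.0, 0.9 )   // 0    -- the endpoints never move
bias( 1.0, 0.1 )   // 1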

Strictly speaking, bias() isn't a brand-new kind of curve: since bias(a, b) is just pow(a, log(b)/log(0.5)), it belongs to the same family of power curves that a "gamma"-style adjustment (out = pow(in, g)) draws from. What bias adds is a friendlier knob: the bias value b is exactly where the 0.5 midpoint ends up (bias(0.5, b) == b), which is much more intuitive than dialing in a raw exponent. Nevertheless, because "gamma" is the term graphic artists know, I've (ab)used that word throughout this post, and even in the headline. (Shame on me.)
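
If you ever want to translate between the two parameterizations, the conversion is a one-liner. (This helper is hypothetical -- it isn't part of the extension above -- but it follows directly from the definition of bias():)

// Hypothetical helper: exponent of the power curve equivalent to a given bias value
function biasToGamma( b ) {
return Math.log(b) / Math.log(0.5);
}

biasToGamma( 0.5 )   // 1      -- the identity curve
biasToGamma( 0.75 )  // ~0.415 -- out = pow(in, 0.415), a lightening curve
biasToGamma( 0.25 )  // 2      -- out = pow(in, 2), a darkening curve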

The performance of the bias code is surprisingly poor in this particular usage (as a Chrome extension). On my Dell laptop, I see processing at a rate of just under 50,000 pixels per second. The same bias-lookup code running in a normal web page (not a Chrome extension that injects it into page scope) goes about ten times faster. Yes, an order of magnitude faster. In a native web page, I can link the image transformation call to an onchange handler (so that the image -- even a large one -- updates continuously, in real time, as you drag the slider) -- that's how fast the code is in some of my other projects. But in this particular context (as a Chrome extension) it seems to be dreadfully slow, so I've hooked the main processing routine to an onmouseup handler on the slider. Otherwise the slider sticks.
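
In an ordinary page, the continuous-update version is just a matter of doing the transform in the slider's event handler. A minimal sketch, assuming the canvas, createLUT(), and transformImage() from the listing above are already in scope:

slider.addEventListener("change", function() {
// the slider runs 0..100; the bias value wants 0..1
createLUT( Number(this.value) * 0.01 );
transformImage( 0, 0, canvas.width, canvas.height );
});
// (In current browsers the "input" event fires continuously while the
// slider is dragged, which is what you want for live updating.)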

Anyway, I hope the techniques in this post have whetted your appetite for more HTML5 canvas explorations. There are some great canvas demos out there, and I'll be delving into some more canvas scripting techniques in the not-so-distant future.

Happy pixel-poking!





A satisfactory philosophy of ignorance (John Cochrane edition)


"I feel a responsibility as a scientist who knows the great value of a satisfactory philosophy of ignorance...I feel a responsibility...to teach that doubt is not to be feared."
- Richard Feynman


A few posts back, I blogged about a Hoover Institution panel organized by John Taylor, in which eminent macroeconomists were invited to give their thoughts on how to restore America to robust growth. Now, via David Glasner, I have found a transcript of John Cochrane's remarks at the panel. Given Cochrane's polemical tone in past writings, my hopes were not exactly high. But I went ahead and read the whole thing, and what I found left me (mostly) pleasantly surprised. Cochrane spends much of his time talking about how macroeconomists really don't understand that much about the economy:

Why are we stagnating? I don’t know. I don’t think anyone knows, really... Nothing on the conventional macro policy agenda reflects a clue why we’re stagnating...
This conference, and our fellow economists, are chock full of brilliant new ideas both macro and micro. But how do we apply new ideas? Here I think we economists are often a bit arrogant. The step from “wow my last paper is cool” to “the government should spend a trillion dollars on my idea” seems to take about 15 minutes...
Compare the scientific evidence on fiscal stimulus to that on global warming. Even if you’re a skeptic, compared to global warming, our evidence for stimulus -- including coherent theory and decisive empirical work -- is on the level of “hey, it’s pretty hot outside.”...
There are new ideas and great new ideas. But there are also bad new ideas, lots of warmed over bad old ideas, and good ideas that happen to be wrong. We don’t know which is which. If we apply anything like the standards we would demand of anyone else’s trillion-dollar government policy to our new ideas, the result for policy, now, must again be, stick with what works and the stuff we know is broken and get out of the way.
But keep working on those new ideas!
These quotes, in my opinion, are spot on. If there's one point I've consistently tried to push since I started writing about macro on this blog, it's that we don't really know that much about how business cycles work. Granted, we're reasonably sure of a few things -- for instance, that most recessions, and the biggest recessions, are driven by demand shocks (as is high unemployment), and it seems that having the government spend money boosts GDP growth during recessions. But in general, we are just very ignorant. We don't have a really good (i.e., quantitatively predictive) model of how aggregate demand works, or why stimulus has an effect.

Given this ignorance, the appearance of precision and "sciency-ness" offered by modern business-cycle models seems pernicious to me. It biases the field toward making minor modifications of the existing paradigm (Olivier Blanchard's "haikus") rather than exploring blue-sky ideas that might lead to real leaps in our understanding. I can't offer a ready alternative to the DSGE paradigm (maybe someday I will), but I think that in the absence of something that works, the best alternative is to adopt a "satisfactory philosophy of ignorance."

So I like what Cochrane is saying about our ignorance. And I think there is a powerful case to be made for policy inactivity - the idea of "first do no harm." If you are a doctor and your patient is in critical condition, but you don't know how to save him, you don't just pump him full of every drug you have. If I were an opponent of fiscal stimulus, that is exactly the argument I would make - that the burden of proof is on the proponents of stimulus, and that the evidence is too muddled to risk making the situation worse. 

Unfortunately, stimulus opponents typically make far bolder claims, like "stimulus can't possibly work." And in doing so they throw away their natural advantage, because instead of a "satisfactory philosophy of ignorance," they reach for a false certainty and end up overstating their claims.

This is also where John Cochrane, in my opinion, stumbles a bit. Even as he points out how ignorant macroeconomists are, he goes ahead and offers his own positive theory of the recession:
So what if this really is not a “macro” problem? What if this is Lee Ohanian’s 1937 – not about money, short term interest rates, taxes, inadequately stimulating (!) deficits, but a disease of tax rates, social programs that pay people not to work, and a “war on business.” Perhaps this is the beginning of eurosclerosis. (See Bob Lucas’s brilliant Millman lecture for a chilling exposition of this view).
Yes, Cochrane says "perhaps" and "what if." But it certainly seems as if he leans toward the Ohanian view. The problem is, this view is pretty easily debunked by a casual reading of history. Tax rates have not gone up since 2007, and social programs are not currently more generous than in the past. There is much we don't understand about the true causes of recessions, but at least we understand that much! 

But then Cochrane comes back and says this:
Our (microeconomic) garden is full of (policy) weeds. Yes, it was full of weeds before, but at least we know that pulling the weeds helps. Or maybe not. (emphasis mine)
This is great! Not only does Cochrane move away from Ohanian-land and back toward the "first do no harm" critique of stabilization policy, but he admits that even that might not be right!

So basically, even though it includes a number of substantive points I'd take issue with, I really, really like this Cochrane talk. Now there's a sentence I didn't expect to see myself writing when I first followed the link!

It has always been my opinion that the neoclassical revolution hit its high point with the Lucas Critique. It was a great thing to expose the inadequacy of the macro models then in use. But when the neoclassicals went ahead and replaced those models with RBC and Rational Expectations, I feel like the revolution really overreached. The inability of RBC (or DSGE in general) to explain our current economic woes has led some neoclassical-minded folks to reach for Ohanian-style explanations (it's the socialists' fault!). Instead, I think that they should go back to where Lucas started, and embrace a "satisfactory philosophy of ignorance." Even as someone who is very dubious of the neoclassical worldview, that is a perspective with which I would heartily agree.