Three of the five most prevalent Web exploits of 2009 were PDFs


Vendors with the most security vulnerabilities, according to IBM.

A few days ago, IBM came out with its IBM Security Solutions X-Force® 2009 Trend and Risk Report (available here with registration; choose the link called Get the IBM X-Force 2009 Trend and Risk Report), which provides an interesting assessment of the latest trends in online security vulnerabilities and attack modalities.

Some interesting highlights:
  • The number of high and critical multimedia vulnerabilities continues to increase.
  • Three of the five most prevalent malicious Web site exploits of 2009 were PDFs, one was a Flash exploit, and the other was an ActiveX control that allows a user to view an Office document through Microsoft Internet Explorer.
  • 7.5 percent of the Internet is considered “socially” unacceptable, unwanted, or flat out malicious.
  • Spam and phishing came back with a vengeance in the second half of 2009. At the end of the year, the volume of spam had more than doubled in comparison to the volume seen before the McColo shutdown in late 2008.
  • The majority of spam continues to be URL-based spam. Although most of those URLs are hosted in China, the senders of most spam are usually located in other countries, such as Brazil (the top sender in 2009), the US, India, and, new to the top sender’s list, Vietnam (whose spam volume has tripled over the past year).
  • Tuesday continues to be the biggest day of the week for appearance of new vulnerabilities.


PDFs present a special problem. According to IBM: "The use of malicious PDFs for exploitation has seen a dramatic increase this year and it is quite common for multiple exploits to be present in a single PDF delivered by a malicious site. In fact, the three PDF vulnerabilities on our list are the most commonly observed combination. We will surely see this trend continue into the future; at least as long as new PDF vulnerabilities trickle out into the wild while patch speed and adoption could be better. In 2010, Adobe products are likely to continue to have a presence on our future most popular exploits list, although it is difficult to predict if it will be the “year of PDF” or the “year of Flash.” Adobe Acrobat/PDF has the lead for now."

In addition: "Interestingly, some new additions to the PDF format include the ability to embed entire PDF documents and multimedia such as Flash movies. So now a malicious PDF might actually be a malicious Flash movie. It is quite critical that organizations and individuals update their Adobe products whenever a newer version is offered and if possible use the auto-update facility. In addition, unless you want or need the ability to run script or watch movies inside a PDF document, you should disable these features in the program options."

You are not a gadget, progress is not a widget



Lately I've been reading Jaron Lanier's brave new manifesto, You Are Not a Gadget. I admire it greatly. It takes courage, after all, to stand up in public and say Web 2.0 is dehumanizing. It's a book that goes against the populist "information wants to be free" grain of the supposedly open world of the Web and asks difficult questions, like where all the great new online music has gone (will there ever be another Beatles?) and what we're all supposed to do for a living after information is free and Google is the only commercially viable aggregator left standing.

You know it's going to be an interesting book when you encounter, on the first page of Chapter One:
Something started to go wrong with the digital revolution around the turn of the twenty-first century. The World Wide Web was flooded by a torrent of petty designs sometimes called Web 2.0. This ideology promotes radical freedom on the surface of the web, but that freedom, ironically, is more for machines than for people. Nevertheless, it is sometimes referred to as "open culture."

Anonymous blog comments, vapid video pranks, and lightweight mashups may seem trivial and harmless, but as a whole, this widespread practice of fragmentary, impersonal communication has demeaned interpersonal interaction.

Lanier's various laments extend to -- among other targets -- popular music culture (a retro wasteland of recycled motifs of the 1980s and 1990s), online advertising (which he says "is elevated by open culture from its previous role as an accelerant and placed at the center of the human universe"), and the lack of originality of the open-source movement. On the latter point, Lanier notes sardonically that the crown jewels of the open-source world, Linux and Wikipedia, are little more than finely honed, handcrafted digital tributes to the utterly creaky museum-pieces known as UNIX and Encyclopedia Britannica.

"Let's suppose that back in the 1980s," Lanier remarks, "I had said, 'In a quarter century, when the digital revolution has made great progress and computer chips are millions of times faster than they are now, humanity will finally win the prize of being able to write a new encyclopedia and a new version of UNIX!' It would have sounded utterly pathetic."

I'm not finished reading You Are Not a Gadget (I still have 50 pages to go), and I'm still not sure what I think of some of the ideas, but on the whole I'm glad I'm reading it. It's like a blast of fresh air.

Part plane, part trike -- a new way to get to work


Samson Motorworks is taking deposits on a new type of roadable aircraft that's part plane, part 3-wheeler motorcycle. Flight tests of the $60K (less engine or avionics) build-it-yourself kit vehicle will supposedly commence later this year near the company's Auburn, California headquarters. Questions? See FAQ here.

Possible bugs in Mozilla Jetpack?

I've noticed a couple of things that don't work in Mozilla Jetpack. One is:

var serializer = new XMLSerializer( );

This line works fine in the console -- it works in Firebug. But for some reason, in Jetpack, I get "XMLSerializer is undefined."

Fortunately, I have a workaround. The workaround is:

var serializerClass = "@mozilla.org/xmlextras/xmlserializer;1";
var serializer = Components.classes[serializerClass];
var serializerInstance =
    serializer.createInstance(Components.interfaces.nsIDOMSerializer);
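
For what it's worth, the instance can then be used just like the ordinary serializer. A quick usage sketch (my addition; serializeToString() is the standard nsIDOMSerializer method):

var doc = jetpack.tabs.focused.contentDocument;
var xml = serializerInstance.serializeToString( doc.documentElement );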

The second thing that doesn't work for me in Jetpack is writing to a document object using document.write():

jetpack.tabs.focused.contentWindow.open(); // works
var doc = jetpack.tabs.focused.contentDocument;
// This part doesn't work:
doc.open( );
doc.write( formattedContent );
doc.close( );

It also doesn't work if I try to do:

win = jetpack.tabs.focused.contentWindow.open();
doc = win.document;
doc.open( );
// etc.

Jetpack will open() a new window in a fresh tab but won't give me a reference to the new window's document object. The window stays blank -- I can't write to it.

If anyone has a workaround to this, please let me know. It seems odd that I can't create a new page from Jetpack.
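
One possibility I haven't tried (offered strictly as a sketch, not a verified fix): skip document.write( ) altogether and hand the markup to jetpack.tabs.open( ) as a data: URL, letting Firefox render it in the new tab. Here formattedContent stands for whatever HTML the new page should contain:

jetpack.tabs.open( "data:text/html;charset=utf-8," +
    encodeURIComponent( formattedContent ) );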

Is Apple's New Maiden, NC Data Center Really a Data Center?



There's been a lot of speculation online about what Apple might be up to in Maiden, NC. The above video, apparently shot by a local realtor, purports to show the 500,000-square-foot facility being built in North Carolina, 40 miles northwest of Charlotte. This is reportedly a $1 billion roll of the dice for Apple, so whatever it is will obviously be of strategic importance.

Some have suggested that Apple is preparing for some kind of major reset of its iTunes business, in the wake of its purchase of music service Lala in December. Lala allows members to (legally) create online shareable "playlists" (aka "radio stations") of their own uploaded music that other registered Lala members can subscribe to.

Given the large number of loading docks visible along the south side of the main building, it's tempting to speculate that this may be a fulfillment center as well as a cloud center, but then again, loading docks don't necessarily have to mean outbound shipments. These docks could also be read-only -- as in, swallowing large volumes of newly arrived books or videos.

What do you think?

Poor man's CMS: CK Editor + Apache Sling integration in 64 lines of code

I admit to a certain laziness when it comes to rich-text editing: I like the CK Editor (formerly known as FCK), and in fact I'll often just go to the CK Editor demo page to do impromptu rich-text editing online, then (eventually) I'll Cut-and-Paste the source from the demo editor into whatever final target (blog, wiki page, etc.) I'm writing for -- oftentimes without Saving the text anywhere else along the way. It's a bit of a dangerous practice (not doing regular Saves) and I've been known to close the CK Editor window prematurely, before saving my work, resulting in an unrecoverable FootInMouthError.

The problem is, the CK Editor demo page doesn't give you a way to Save your work (it is after all just a demo page). I decided the smart thing to do would be to put a Save button on the page and have my work get sent off to my local Sling repository at the click of a mouse. Yes yes, I could use something like Zoho Writer and be done with it, but I really do prefer CK Editor, and I like the idea of persisting my rich text locally, on my local instance of Sling. So I went ahead and implemented Sling persistence for the CK Editor demo page.

I could have done the requisite code trickery with Greasemonkey, but Mozilla Jetpack allows me to easily put a "Save to repository..." menu command on the browser-window context menu in Firefox and have that menu command show up only on the CK Editor demo page (and nowhere else). Like this:



Note the menu command at the bottom.

The "repository," in this case, is Apache Sling. I'm actually using Day CRX (Content Repository Extreme), which is a highly spiffed commercial version of Apache Sling for which there is a free developer's edition. (Download the free version here.) I use the Day implementation for a couple of reasons, the most compelling of which (aside from its freeness) is that CRX comes with excellent administration tools, including a visual repository-browser that Sling sorely lacks.

Powering the "Save to repository..." menu command is the following Mozilla Jetpack script (scroll sideways to see lines that don't wrap):

/* Sling.jetpack.js

   Copyright/left 2010 Kas Thomas.
   Code may be freely reused with attribution.
*/

jetpack.future.import("menu");

jetpack.menu.context.page.beforeShow = function( menu, context ) {

    var menuCommand = "Save to repository...";
    var frontWindow = jetpack.tabs.focused.contentWindow;

    var FRED = "http://ckeditor.com/demo";

    // don't slurp the content into memory if we don't have to
    if ( jetpack.tabs.focused.contentWindow.location.href.indexOf(FRED) == -1 )
        return;

    function saveToRepository() {

        // Repository storage URL
        var base_url = "http://localhost:7402/content/";

        // get the content we want to post
        // (URL-encode it so characters like '&' and '+' survive the POST)
        var params = "content=" + encodeURIComponent( getContent() );

        // prompt the user to give it a name
        var name = frontWindow.prompt( "Name for this entry:" );
        if (!name || name.length == 0)
            throw "No name provided.";

        // get a reference to the front window
        var theWindow = jetpack.tabs.focused.contentWindow;

        // appending "/*" to the full URL
        // tells Sling to create a new node:
        var url = base_url + name + "/*";

        // prepare for AJAX POST
        var http = new XMLHttpRequest();
        http.open("POST", url, true);

        // Send the proper header information along with the request
        http.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
        http.setRequestHeader("Content-length", params.length);
        http.setRequestHeader("Connection", "close");

        // Show whether we succeeded...
        http.onreadystatechange = function() {
            if (http.readyState == 4)
                theWindow.alert("http.status = " + http.status);
        };

        // do the AJAX POST
        http.send(params);
    }

    function getContent() {
        var doc = jetpack.tabs.focused.contentDocument;
        var iframeDoc = doc.getElementsByTagName("iframe")[0].contentDocument;
        return iframeDoc.body.innerHTML;
    }

    // manage menu
    menu.remove( menuCommand );
    menu.add( {
        label: menuCommand,
        command: saveToRepository
    } );
}

A couple of quick comments. I use the jetpack.menu.context.page.beforeShow() method in order to test if the frontmost (current, focused) browser tab is in fact the CK Editor demo page, because there is no need to show the menu command if we're not on that page. If we're not on that page, the script bails. Otherwise, at the bottom, we call menu.add(). Note that menu.add() is preceded by a call to menu.remove() -- which fails harmlessly (silently) if there's nothing to remove. The call to remove() is needed because otherwise the script will add() a new copy of the menu command every time the mouse is right-clicked, and pretty soon there will be multiple copies of it appended to the bottom of the context menu. We don't want that.

Slurping content from the CK Editor demo page is pretty easy. The editor window is in an <iframe>, and it's the only iframe on the page, so all we have to do is get the innerHTML of the body of that iframe, and that's what the getContent() method accomplishes:
function getContent() {
    var doc = jetpack.tabs.focused.contentDocument;
    var iframeDoc = doc.getElementsByTagName("iframe")[0].contentDocument;
    return iframeDoc.body.innerHTML;
}

The rest is pretty much straight AJAX. We do a POST to the repository on the base URL plus the (user supplied) name of the post, appended with "/*" to tell the Sling servlet to create a new node in the tree at that spot. So for example, if the repository is at http://localhost:7402 and you want a new node named "myNode" under "parent", you simply do a POST to
http://localhost:7402/parent/myNode/*
and Sling dutifully creates the new node thusly named.
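
A quick way to verify the save (my addition; it assumes Sling's default JSON rendering is enabled, which it is in stock Sling and CRX): GET the new node back with a .json extension, reusing the base_url, name, and theWindow variables from saveToRepository() above, and the content property you just posted comes back in the response:

var check = new XMLHttpRequest();
check.open( "GET", base_url + name + ".json", true );
check.onreadystatechange = function() {
    if (check.readyState == 4)
        theWindow.alert( check.responseText );
};
check.send( null );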

And that's basically it: a CK Editor + Sling integration in 64 lines of code, thanks to Mozilla Jetpack.

Quantizing the colors in an image, using (server side) JavaScript



Top left: The original image. Top right: The image quantized to 4 bits of color information per channel. Lower left: 3 bits of color per channel. Lower right: 2 bits per channel.

It turns out to be surprisingly quick and easy to quantize the colors in an image to a smaller number of bits per channel than the standard 8 bits for red, 8 bits for green, and 8 bits for blue. All you have to do is loop over the pixels and AND them against the appropriate mask value. A mask value of 0xFFF0F0F0 discards the lower 4 bits' worth of color information from each channel, essentially leaving 4 bits, each, for red, green, and blue. A mask value of 0xFFE0E0E0 keeps just the top 3 bits in each channel, while a mask of 0xFFC0C0C0 retains just 2 bits of color per channel.
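
A worked example with a made-up pixel value may make the masking clearer:

// Hypothetical ARGB pixel 0xFFAB3C7F, masked with 0xFFF0F0F0:
//   red:   0xAB & 0xF0  ->  0xA0   (top 4 bits survive)
//   green: 0x3C & 0xF0  ->  0x30
//   blue:  0x7F & 0xF0  ->  0x70
var quantized = 0xFFAB3C7F & 0xFFF0F0F0;   // bit pattern 0xFFA03070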

To obtain the images shown above, I ran the following script against them (using these various mask values) with the aid of the ImageMunger Java app that I gave code for earlier. The ImageMunger class simply opens an image of your choice (you supply the filepath as a command line argument) and runs the JavaScript file of your choice (a second command line argument), putting variables Image and Panel in scope at runtime. The Image variable is just a reference to the BufferedImage object, representing your image. The Panel variable is a reference to the JComponent in which ImageMunger draws your image.

MASK = 0xffc0c0c0; // 2 bits per channel
// 0xffe0e0e0 3 bits per channel
// 0xfff0f0f0 4 bits per channel

var w = Image.getWidth();
var h = Image.getHeight();
var pixels = Image.getRGB( 0,0,w,h,null,0,w );

for (i = 0, len = pixels.length; i < len; i++)
pixels[ i ] &= MASK;

Image.setRGB( 0,0,w,h,pixels,0,w );
Panel.updatePanel( );

The getRGB() method of BufferedImage fetches the pixels from your image as a giant one-dimensional array. The corresponding setRGB() method replaces the pixels. The updatePanel() method of Panel (defined in ImageMunger.java) causes the JComponent to refresh.

Given that this is JavaScript and not Java, we shouldn't be surprised to find that performance isn't exactly breakneck. Still, at 110 pixels per millisecond, throughput isn't terrible, either.

As you might expect, quantizing the color info makes the image easier to compress. The original image, in PNG form, occupies 185 Kbytes on disk. The 4-bit-per-channel version occupies just 61K; the 3-bit version, 38K; and the 2-bit version, a little over 23K.

What's wrong with Mozilla Jetpack

There's been some interesting discussion recently of "what's wrong with Jetpack" by Laurent Jouanneau, Daniel Glazman, and others (see the long comment thread at the end of Daniel's recent post). The criticisms tend to fall along two major axes:

1. Mozilla Jetpack claims to be a kinder, gentler, easier to learn replacement technology for making Firefox extensions (replacing the existing quirky hodgepodge of XUL+XBL+XHTML technologies), but it abandons XUL totally, which means that extension programmers can't transfer their current XUL skills to the Jetpack dev world, and (more important) Jetpack loses the sophisticated layout model of XUL. In its place we have plain old HTML and CSS.

2. The Jetpack API is bound too closely to the jQuery API with its closure-intensive syntax, its peculiar self-obfuscating '$' notation, and overreliance on method overloading.

One of Jetpack's goals is to democratize Firefox extension programming, liberating it from the hands of the XUL programming elite and bringing FF extension programming into the purview of mere mortals who speak HTML and JavaScript. But it stops short of that goal. In point of fact, Jetpack encourages "magic code" -- closure-ridden one-liners and such -- and expects a fair amount of clairvoyance from programmers when it comes to required imports and other notions. For example, the winning code entry in a recent Jetpack coding competition is all but unreadable (i.e., it's self-obfuscating), with lines like:

let window =
Cc["@mozilla.org/appshell/window-mediator;1"].getService(Ci.nsIWindowMediator).getMostRecentWindow("navigator:browser");


Any API that encourages this kind of code gets a thumbs-down from me, and frankly, at this point, I would probably have to agree with Daniel Glazman when he says that Jetpack "totally misses its main goal [of] making extension authoring dead simple instead of recreating another programming elite." Wedding itself to jQuery was one of the worst design choices Jetpack's API experts could have made, IMHO. "Clever" syntax doesn't advance an API's cause, any more than secret handshakes advance diplomacy's cause.

There are other criticisms, having to do with things like overuse of imports, wrappedJSObject, lack of localization support, lack of ability to use offline resources, and some odd constructs like jetpack.tabs.focused.raw. But except perhaps for localization, those are not showstoppers. A syntax that encourages brevity over clarity and coolness over maintainability, on the other hand, is definitely a problem. It seems to me we're either going to democratize Firefox extension creation or not. If we are, let's get rid of the secret handshakes and go back to KISS as a design principle.


"Edit this page" menu command in Jetpack

Every once in a while, I persist a web page to disk (maybe it's an airline itinerary, or whatever), and I like to make notes to myself in the page before saving it. I have a Mozilla Jetpack script that lets me edit the web page directly (in Firefox) before saving it. It works by setting the document property designMode to "on." If you're not familiar with this technique, I blogged about it previously here.
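
Stripped of the UI niceties, the core of the technique is a single line (the full script appears below):

jetpack.tabs.focused.contentDocument.designMode = "on";   // page becomes editable
// ...and setting it back to "off" restores normal, read-only behavior.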

The Jetpack script puts a menu command in the right-mouse (context) menu for the page, called "Edit this page." (See screenshot below. The menu command is at the bottom.)



It would have been simple to just have the script set designMode to "on" and then have another script, with a menu command of "Disable editing," that sets it to "off," and perhaps have the menu-command label toggle back and forth depending on the mode the page is in. But I decided that would be a poor UI decision. When a page is in "edit" mode, there should be some sort of always-visible indication of that fact; otherwise you could forget why the page's links aren't working, for example. Also, there needs to be a quicker, easier way to turn off page editing than to go back to a menu command. Hence, I decided not to do a "Disable editing" menu command. Instead, I put a bright red "DESIGN MODE" flag at the top of the page and make it non-scrollable so it's always in view. To exit design mode, you just have to click anywhere in the red DESIGN MODE label. The label immediately goes away and you're back to a normal non-editable page.




The red DESIGN MODE indicator is a little obnoxious, but it's that way by design. ;)

In any event, the code for doing all this is fairly short and self-explanatory. The only non-obvious part, I think, is obtaining a reference to the current page's (or tab's) document, which in Jetpack you have to access via
jetpack.tabs.focused.contentDocument
Aside from that, the code is pretty straightforward:

jetpack.future.import("menu");

jetpack.menu.context.page.add({
    label: "Edit this page",
    command: function enableDesignMode( ) {

        // Get a reference to the current page's DOM:
        Document = jetpack.tabs.focused.contentDocument;

        var INDICATOR_TEXT = "DESIGN MODE";
        var INDICATOR_STYLE = 'position: fixed; ' +
            'top:10px; left:400px; z-index:100; ' +
            'color:black; ' +
            'background-color:red; padding:10px;';

        var modeIndicator =
            createIndicator( INDICATOR_TEXT, INDICATOR_STYLE );

        Document.body.insertBefore( modeIndicator,
            Document.body.firstChild );

        function stopDesignMode( ) {
            Document.body.removeChild( modeIndicator );
            Document.designMode = "off";
        }

        // Exit Design Mode when the indicator takes a click
        modeIndicator.addEventListener( "click",
            stopDesignMode, false );

        // This line makes the page editable:
        Document.designMode = "on";

        function createIndicator( text, style ) {

            var span =
                Document.createElement( "span" );
            span.setAttribute( "style", style );
            span.innerHTML = text.bold( );
            return span;
        }

    } // end enableDesignMode( )
});

The code is public domain. Do with it as you will. No warranties of any kind are made, blah-cubed. :)

To hell with browser security, let me cram the Mentos in the bottle

The web browser is much more standards-based than any desktop application any of us normally uses, which makes it a compelling platform for developing personal web apps -- certainly much more compelling than something like Eclipse, say (which is theoretically an "anything platform"). But there are still quite a few things browsers don't do well -- and/or don't do in standardized fashion, or do in a just plain irritating fashion. One is data persistence. Another is file I/O. Another is cross-domain AJAX. If you try to do certain types of supposedly "insecure" things in a browser app, you're pretty much hosed at the outset.

I'd like to be able to open an XML file on disk, read Twitter user IDs from it, and then make AJAX calls to Twitter to either follow or unfollow those user IDs. I actually do this now using Greasemonkey scripts -- but the scripts complain about the "file://" URL scheme of the XML, unless you set a particular config value (greasemonkey.fileIsGreaseable) to true in the about:config screen of Firefox, as I wrote previously here.

What I'd really like, though, is to run the same Greasemonkey script in Chrome instead of Firefox. But Chrome doesn't have a greasemonkey.fileIsGreaseable security setting that I can override. Basically I can't trigger a script to fire off of opening a file. I have to serve myself the file over HTTP. Which means I have to install and run an instance of Apache (or another web server) just to serve myself these XML files so they'll trigger the script properly. Which is a lot of nonsense.

Sometimes I wish Chrome and Firefox and all the rest had a master security setting -- call it userAgreesToHoldTheEntireUniverseHarmlessWhileHeKillsHimself -- that would, with the flip of a bit, let me disable all the ridiculous child-proof bottle caps of the browser world. I want to pull the mattress tags off, ignore the Surgeon General warnings, and run wild-eyed down the hallway with scissors in both hands. Let me test the "no user-serviceable parts" hypothesis. Let me decide if my browser should do "file://" I/O in an AJAX call, let me decide if a script will fire when I manually Open a file, yes let me decide if one of my own scripts should be able to slurp the cache using about:cache or persist a bit of user data in an insecure way. Folks, I want to drive over the speed limit. I want to have unsafe-file-I/O sex. (Cover your ears. I am going to shout now.) Hear me O Browser Thought Police, whoever you are, wherever you are, and let me knowingly flip the sanity bit. I'm tired of being treated like a retarded child. Stand the fuck back and let me cram the Mentos in the goddam bottle already.

Saving tab sets in Firefox

I blogged last summer about Using Mozilla Jetpack to save tab ensembles, giving a bit of POC-quality code, but now Davide Ficano has done a proper job of things and written a Jetpack script that lets you name and persist open-tab states, using a very natural set of UI gestures.

TabGroup Organizer allows you to save all open tabs as a group and then reopen all with a click. After installation, a new multifolder icon (that gives rise to a context menu when right-clicked) is present on the statusbar in the lower-right corner of the browser window:



Using this Jetpack script, I can finally save tab ensembles and come back to them later -- a real productivity win, for me. Thank you, Davide!

And since the code is open-source, I'm going to reproduce it here (scroll sideways to see the parts that don't wrap):

jetpack.future.import("menu");
jetpack.future.import("storage.simple");

var tabGroupStorage = jetpack.storage.simple;

jetpack.statusBar.append({
html: "<img src='data:binary;base64,iVBORw0KGgoAAAANSUhEUgAAABYAAAAWCAYAAADEtGw7AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAADdgAAA3YBfdWCzAAAABl0RVh0U29mdHdhcmUAd3d3Lmlua3NjYXBlLm9yZ5vuPBoAAAGiSURBVDiNrZUxayJBGIafb9zWU6ukEQ7ObuEIpEqTMiDslf6L%2Bxd3%2F%2BT%2BQlKkszqEg8AWiiI2SghiI6dyiDP7pQi72birNwm%2BMDDMzPt%2BL%2FO9Oyuqig9E5AwIASnZVmCkqo%2BvK6r%2FHcB5r9eb7na7xFqbWGt1byRxHD8Bn1OO%2BDgWkRtr7f1sNjt4ptlsEgRBpKp3AIHXPUBgjAGgXq8XNpfLJSICYDKCiNRbrVYEaPICBXDOqXMuUVWt1WpfPA28OgEuR6PRr9TRRyAiqeM3wgBMp9ODpBSVSoXFYsFwOCTfm81mw3g8BvgqIg%2Bq%2BhjkSdVq9aiz%2BXzOarWi3W4XHAJYa3%2F2%2B%2F3vInKVCRtjSg%2FnMRgMiKKIY%2BkIw%2FAcCN8lbK3FNx2Z8Hq9ptvtvmnE%2FnwymRwtnEcmHMcxnU6HU6UjyC%2BCfzrKri1vKtiv5JOOQygIA5mYTxN9hf8YY74BF865HycTVtW%2FwK2I%2FDuF40LzUpR9977I80qFfcn7KG1eSjLGsN1u%2FW0eKFpw3Gg0PiRaKJI%2BfyLyCbim%2FGf5Xvx%2BBkJNxrS0dvEEAAAAAElFTkSuQmCC' width='20px'>",
    onReady: function(doc) {
        jetpack.menu.context.set([
            {label: "Save Tabs...",
             command: function() {
                 var win = jetpack.tabs.focused.contentWindow;
                 var name = win.prompt("Enter a name for this tab group");
                 if (name) {
                     var arr = [];
                     jetpack.tabs.forEach(function(tab) {
                         arr.push(tab.url);
                     });
                     tabGroupStorage[name] = arr;
                     jetpack.storage.simple.sync();
                 }
             }},
            {label : "Restore tabs",
             menu: new jetpack.Menu({
                 beforeShow : function(menuitem, context) {
                     menuitem.set([]);
                     for (var i in tabGroupStorage) {
                         menuitem.add({label : i + " (" + tabGroupStorage[i].length + ")", data : i});
                     }
                 }}),
             command : function(menuitem) {
                 tabGroupStorage[menuitem.data].forEach(function(url) {
                     jetpack.tabs.open(url);
                 });
                 jetpack.storage.live.lastUsedGroup = menuitem.data;
             }},
            {label: "Delete Group...",
             command: function() {
                 var win = jetpack.tabs.focused.contentWindow;
                 if (typeof (jetpack.storage.live.lastUsedGroup) != "undefined") {
                     if (win.confirm("Delete the current group '" + jetpack.storage.live.lastUsedGroup + "'?")) {
                         delete tabGroupStorage[jetpack.storage.live.lastUsedGroup];
                         delete jetpack.storage.live.lastUsedGroup;
                     }
                 } else {
                     win.alert("You must select a group from the menu before deleting it");
                 }
             }}
        ]);
    }
});

The data URL -- the big long line near the top that contains
data:binary;base64,iVBORw0K ...
-- is of course the raw bytestream for the multifolder icon. The rest of the code is more or less self-explanatory. It uses the jetpack.storage.simple mechanism for persistence.
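
If you haven't played with it, jetpack.storage.simple boils down to this (a minimal sketch of my own; the key name is made up):

jetpack.future.import("storage.simple");
jetpack.storage.simple.favoriteTabs = [ "http://example.com/" ];  // any JSON-able value
jetpack.storage.simple.sync();   // flush to disk so the data survives restarts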

Nice going, Davide Ficano. Ten thumbs up!

Jonathan Schwartz's Farewell Memo


Believe it or not, it’s been more than nine months since Oracle first announced their intent to acquire Sun in April, 2009. And the ‘interim’ period has been tough on everyone–on our employees, and our partners and customers. Thankfully, that interim period is coming to an end, with regulatory approval from the European Union issued today, and only a few hurdles remaining–before Oracle formally expands beyond software to become the world’s most important systems company.

Even though we’re not quite across the finish line, I wanted to leave you with a few final thoughts.

All in all, it’s been an honor and privilege to work together. In my more than twenty years in the industry, the last thirteen at Sun, I’ve had a chance to work with and around an enormous diversity of companies, from every sector you can imagine. I can say with conviction that Sun’s people have always stood apart as the brightest, most passionate, and most inspiring. I’ve never had a bad day in my thirteen years for one very basic reason–I’ve always been surrounded by the best and brightest individuals I’ve ever come across. That’s been an honor and privilege, for which I’m enormously thankful.

Technology from Sun, alongside our employees and partners, have changed the world. We’ve opened markets, elections and economies. We’ve helped build the world’s most important and valuable businesses. We’ve played a key role in discovering new drugs, in bringing education and healthcare to those in need, and supplying the world with an incredible spectrum of entertainment, from smartphones to social networking. I doubt any company has had such a significant influence over the way we see or experience the world. I once told Scott McNealy he was the Henry Ford of the technology industry, making remarkable innovations accessible to anyone, and creating an immense number of jobs around the globe for those that made use of them. I can’t begin to tell you how proud I am of my association with that cause and the people behind it, and the value we created for ourselves and those that exploited our innovations.

I also know we’ve had more than our share of very tough challenges. Amidst the toughest market and customer situations imaginable, I’m proud we’ve always acted with integrity, with a sense for what’s right, and not simply what’s expedient. Over the years, I’ve heard time and again, from those inside and outside the company, “I like and I trust Sun.”

Building that good will is something to which you’ve all contributed. And you have every right to be very proud of it.

Make no mistake, it’s been an enormous asset.

So, to the sales and SE teams across the world who continually give their all to bring the numbers home–thank you for the trust you’ve built with customers, and the results you’ve delivered. I hope you’re prepared to have the wind at your back, you deserve it.

To the service professionals who every day build, maintain and run the world’s most important data centers–thank you for your excellence and discipline, 7×24.

To the professionals who run the functions and processes that are the company’s spinal column–thank you, we’d be paralyzed without you.

And lastly – to the engineers and marketers who’ve fostered a perpetual belief that innovation creates its own opportunity – thank you. You’re right. Innovation does create its own opportunity. Like Oracle, we’re an engineering company in our heart and soul, our potential together is limitless.

Now many of you know that I came to Sun when a company I helped to found was acquired in 1996. I’ve also led, and been a part of many, many acquisitions at Sun, both large and small. From those experiences, I’ve learned one very clear lesson–the single most important driver of a successful acquisition are the people involved–and how committed they are to the new owner’s mission.

And the most effective mechanism I’ve seen for driving that commitment begins with a simple, but emotionally difficult step.

Upon change in control, every employee needs to emotionally resign from Sun. Go home, light a candle, and let go of the expectations and assumptions that defined Sun as a workplace. Honor and remember them, but let them go.

For those that ultimately won’t become a part of Oracle, this will be the first step in a new adventure. Sun has a tremendous reputation across the planet, well beyond Silicon Valley. It’s a great brand to have on your resume. We’re known as self-starters, capable of ethically managing through complexity and change, for delivering when called upon, and for inventing and building the future. With the world economy stabilizing, I’m very confident you’ll land on your feet. You’re a talented, tenacious group, and there’s always opportunity for great people.

For those that have roles at Oracle, may you start with a clean slate, ready to take on the myriad opportunities ahead. With the same passion and tenacity for Oracle’s success that you’ve had for Sun’s, and a renewed sense of energy around executing on a far broader mission. There is no doubt in my mind you, and Oracle, will be remarkably successful, beyond the market’s wildest expectations. But it’s important you come to work thinking, “Sun is a brand, Oracle’s my company.” Don’t look for ways to preserve or dwell in “how we used to do things.” Look for ways to help customers, grow the market, and improve Oracle’s performance.

Sun is a brand, Oracle is your company.

And to that end, with nine months of getting to know them, I’ve found Oracle to be truly remarkable, led by remarkable people. From Larry on down, they understand the enormity of the opportunity before them, and they’re more than prepared to execute on it – across the board. I’ve seen their commitment and focus, now they need yours. I’m confident you’ll give it the 10,000% effort it deserves–and we’ll all see the end result.

So thank you, again, for the privilege and honor of working together. The internet’s made the world a far smaller place–so I’m sure we’ll be bumping into one another.

Go Oracle!

Jonathan

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

NOTE: This was the memo Jonathan Schwartz sent to Sun employees before tendering his resignation. Note carefully the first letter of the first 7 paragraphs. Schwartz also blogged about Where Life Takes Me Next. According to Sun’s definitive proxy statement, Schwartz could end up with about $12 million in his pockets from the severance package he negotiated, plus another $5 million or so for the shares of stock he still holds.


Modal dialogs are evil

I find it endlessly fascinating (and perpetually frustrating) that 26 years after the introduction of the Mac, all of us -- on Windows, Mac, Gnome desktop, pretty much you-name-it -- are still suffering with the same tired UI metaphors in our desktop apps, some of which continue to serve us well, but others of which continue to serve us shoddily, day after frustrating day. The UI metaphor that serves us most shoddily of all, arguably, is that of the modal dialog.

I'm starting to agree with Aza Raskin and others who have pointed out that modal dialogs (dialogs that won't go away until you deal with them) are basically evil. They're not dialogs at all. They're more in the nature of monologs. A programmer has decided that you need to stop what you're doing and focus on (and make a decision regarding) whatever it is the programmer has decided you need to focus on, before you can move on to something else. This is done for your own good, of course. God forbid you should defer a decision, or decide to go on working while making a decision.

Some modal dialogs are necessary, of course. After all, if it is a requirement that you enter a license string before using a product, then you damn well better enter the license string. But most modal dialogs don't have to be modal -- and shouldn't be, IMHO. Most modal dialogs are modal because it's easier for the programmer if you work that way; maintaining a consistent program state becomes messy and difficult if you have a bunch of dialog boxes open at once. It's a matter of convenience. Not your convenience; the convenience of the people who designed the program.

"Modal" is not how people like to work, though. People tend to be extremely ad-hoc in their working styles (to match their thinking styles), tackling little bits of a job in random order, working a little on this, a little on that, until the job is done. Few people tackle a job by working it linearly, in rigid stepwise fashion, step by step until it's done. That's why wizards are (as UI devices go) generally odious. They don't match the way people work.

In my day job, I have the (dis)pleasure of using Adobe products intensively. The three I use daily are Acrobat Professional, Photoshop, and FrameMaker. Of these, the one I use the most -- and that causes the most heartburn -- is FrameMaker. Ironically, Adobe has learned a great deal about good UI design over the years, but they've applied the knowledge haphazardly. Photoshop, in particular, has become much less modal (as has FrameMaker); you can work ad-hoc now through a combination of always-open dialogs (palette panels), always-visible contextual toolbar buttons, and hotkey combos. However, image filters (plug-ins) are still modal: You work with one effect at a time and can't leave them open while jumping back and forth between them, much less chain them. Ironically, Adobe After Effects does let you work with filters that way (pipelining them; playing with multiple filter settings simultaneously, in non-modal fashion). You'd think Adobe would apply what it has learned from After Effects to Photoshop, for the benefit of the much larger Photoshop audience. But no.

With FrameMaker, palette-style operations are (thankfully) much more the norm now, but there are still far too many modal dialogs, and the ones that are most intrusive (for me) happen at the worst time: when I am opening a file. It so happens that I work with a lot of files that have missing graphics (graphics that are on someone else's machine) and/or unresolved cross-references. It's in the nature of what I do that I'm always encountering such files, which means that when I open them, I always have to dismiss 3 dialogs. The first dialog asks me to locate missing graphics. After I dismiss that dialog, I'm confronted with the following dialog (monolog):



Once I dismiss this monolog, I am confronted with yet another warning:



My question to Adobe is, why do I have to dismiss 3 dialogs in order to open a file? (And go through the same process every day, every time I open the same file?) Why can't you just put this information in a status-bar message at the bottom of the window, or flash it in a tooltip at the appropriate time (when I hover over a missing graphic), or at least put a checkbox on these dialogs that says "Don't show me this again"?

Better yet, give me a global config setting somewhere that turns off all "informational" alerts (see the little 'i' icon in the box?) and converts whatever those alerts (monologs) were going to tell me into log messages that I can look at whenever I want. Why put a modal dialog in my face and make me dismiss it 20 times a day?

But then, maybe I ask for too much. After all, it's only been 26 years now. These things take time to change.

To keep Flash relevant, Adobe must resort to the nuclear option

I keep asking myself over and over again whether Flash has a reason to live, aside from sheer legacy momentum (which is analogous to the "muscle memory" that keeps a dinosaur's tail wagging for a week after it is officially dead). The longer we go in the direction of HTML 5 and AJAX, the less reason I see for software companies (and individual developers) to dump time and resources into things like Flex and Flash. The technology is too nonstandard, too proprietary. The mere fact that you need a browser plug-in to run Flash is a huge liability for all concerned. It creates deployment and provisioning issues for the IT crowd, backwards compatibility issues for users and developers, messy browser-testing matrices for QA, etc. The upside to Flash (the benefits of Flash) just don't seem to be that compelling compared to the costs. To me, anyway.

Flash finds itself at a crossroads now: It has two huge hurdles to overcome if it is to survive as a mainstream platform. One is Apple: Steve Jobs has made it quite apparent that he doesn't want Flash on the iPlatform. The other challenge is HTML itself (specifically HTML 5).

The lack of a common approach among browser makers on what format to use for the HTML video object has provided a stay of execution for Flash by ensuring a period of ongoing technological diversity as the format wars settle out. Apple has decided to put its weight behind MPEG-4/H.264, which it uses across its device platforms. Microsoft has stayed with VC-1, its own de facto standard video codec. With around a 25% share of the browser market, Mozilla Firefox proposes to standardize on the open-source Ogg Theora codec. This is a bit of an anomaly, for what people tend not to realize is that while H.264 seems to be an open and free standard, in reality it is a technology provided by the MPEG-LA patent-pooling cartel, and as a result it is governed by commercial and IP restrictions. (In fact, in 2014 it will impose royalty requirements on all users of the technology.)

The elephant in the room, of course, is Google. Some think Google will attempt an end-run around the others by launching an open video format with a well-defined open source license for the technology. According to industry experts, Google's new format, which is based on On2 VP8, delivers almost all of the same technical benefits as H.264.

From a practical point of view, no one can really be declared the "winner" of this kind of battle until the technology in question reaches an adoption rate of at least 90 percent. That's obviously a ways off.

Which means Adobe still has time to ward off Google's end run. But to do so effectively means adopting a brave -- in fact, radical (for Adobe) -- strategy. Adobe must make every aspect of the Flash platform open source, with the most liberal possible licensing terms -- and put the technology under community governance. In other words, Flash needs to be under the stewardship of something like the Apache Foundation. (And please, keep the licensing clean. We don't need a replay of the Sun/Java 7 fiasco.)

I personally don't see Adobe having the kind of foresight and community-mindedness needed to make this kind of dramatic preemptive move. But I'm convinced that if they don't, Flash will peak in popularity (which I believe it already has) and begin to recede into history -- like other perfectly good (and at one time pervasive) Macromedia technologies that have gone before.

Information Technology: Land of the Project-Challenged

A 1995 survey of 365 IT managers found that only 16% of IT projects were successful (on time and on budget). Some 31% were impaired or canceled -- total failures. Another 53% were project-challenged, a diplomatic way of saying that they were over budget, late, and/or failed to deliver all that was promised.

Ten years later, the percentage for success has reportedly climbed to 29% from 16%. Failures have decreased to 18% from 31%. But "challenged" is holding steady at 53%.

That's not great, but maybe it's not so bad for an industry in which products are never finished or perfect, just less broken.

Nine Questions to Ask during a Job Interview

It's important, when submitting to a job interview, to realize that the interview process goes both ways: You're interviewing your future employer. It's not just him or her interviewing you.

I've been a hiring manager (in R&D) as well as a hiree, and I can say that from the standpoint of the hiring manager it is always refreshing to encounter a candidate who has interesting questions to ask. In fact, the quality of questions an interviewee asks is something I always paid close attention to in interviews. A good candidate invariably asks good questions. You can tell a lot about a person's preparedness for the job (and overall enthusiasm level, not to mention the degree to which the person has done some homework on the company and the position) by the types of questions the candidate asks during an interview.

Most candidates, of course, are passive, expecting only to answer (not ask) questions. Which is bad.

So: what kinds of questions should you ask? Here are a few possibilities. You can probably think of others.

1. Who would I be working with on my first assignment? (Try to find out who your peers are and what their backgrounds are.) And: Who will I report to? (Hopefully, you'll report to the hiring manager. But it's possible you'll initially report to a team leader -- or to no one. Best to find out now.) Who will mentor me? (Hopefully, someone will.)

2. What is the single most important quality someone in this job should possess? This is an open-ended question that could tell you a lot about both the job itself and the person who is hiring you. The answer to this question could help you frame better answers to subsequent questions during the interview, so listen up.

3. How is success in this job measured? How will my performance be measured? This is crucial to future job satisfaction. A fuzzy answer here is bad news.

4. Are there opportunities for training (and/or career enrichment) in this job? What are they?

5. How often will I have an opportunity to meet with my manager? Are regularly scheduled performance reviews part of the process? Try to get a sense of what kind of "management culture" you're going to find yourself in. Is this a company that values management skills, or is it a free-for-all in which it's every manager, and every employee, for himself/herself?

6. What is the career path in this position? In other words, what are the opportunities for advancement? (In plain English: Is this a dead-end job? Will I be doing the same thing in 5 years?) If it's a dead-end job, best to find out now.

7. What tools will I use the most in my day-to-day job? This is a very practical question. You want a concrete answer, like: "You'll be using Eclipse and Maven on Linux quite heavily, and you'll be expected to track bugs in Bugzilla. For word processing, you'll use OpenOffice, and for e-mail you can use whatever you want." (Or whatever.)

8. If you're filling a vacancy (rather than a newly created position), ask what happened to your predecessor. Did the person get promoted? Did he or she leave on his or her own? Did he or she die of exhaustion, or stab wounds to the back? Try to get a sense of what happens to people who take this particular job.

9. Ask the hiring manager how he or she got hired at the company. Also ask: What do you most like (and/or dislike) about working here?


Questions Not to Ask

As a hiring manager, I've always been unimpressed when candidates asked certain questions. So avoid the following unless you know what you're doing:
  • Questions that show an undue interest in time off or avoidance of overtime. It may be that the job involves no overtime per se, but I still never liked getting the impression, early in a job interview, that the person was already looking for opportunities to take time off. (The first question out of your mouth should not be: "When do I get to take vacation time?") It speaks to a certain work ethic.
  • Questions about working from home when the job description clearly states that it is an on-site, 40-hour-a-week office job requiring close interaction with coworkers who are also working on-site.
  • Basic questions about what the company does. This is something the job applicant should already know a thing or two about (from having visited the company website ahead of time). Thoughtful, in-depth questions about specific aspects of what the company does are fine, of course. But don't ask questions that indicate you didn't visit -- and study, in some detail -- the company website.
  • Questions that indicate an undue fascination with pay raises, bonuses, or benefits. Again, these are actually fair-game topics, but you have to be careful how you ask about them. You don't want to convey an attitude of entitlement.
In general, you should save any questions that can be answered by the HR manager for the HR manager. Don't ask the hiring manager detailed questions about the company 401K plan. That's what the HR manager does.

Do ask questions that make your hiring manager think. Trust me when I say, that's more than most hiring managers are expecting.

Voronoi tessellation in linear time



Top Left: The source image (600 x 446 JPEG). Top Right: The same image as a collage of 2407 Voronoi cells. Lower Left: 5715 cells. Lower Right: 9435 cells, embossed. Click any image to see a larger version.

A Voronoi tessellation is a factoring of 2-space into polygonal regions that enclose points (one point per region) in such a way that the boundary between two adjoining regions runs at a perpendicular to the (imaginary) line connecting the nearest two points, while also being midway between the two points. In the simplest case, a set of points S ("Voronoi sites") defines a corresponding number of cells V(s), with any given cell consisting of all points closer to s than to any other site. The segments of the Voronoi diagram are all the points in the plane that are equidistant to the two nearest sites.

If you look at the points in the diagram below, you can see that an imaginary line connecting any two neighboring points will be bisected at a right angle by a cell boundary; and the cell boundary will be exactly midway between the points. That's what makes a Voronoi cell a Voronoi cell.



Voronoi diagrams are named after Russian mathematician Georgy Fedoseevich Voronoi, but their use dates back hundreds of years. Descartes was already familiar with them in 1644. British physician John Snow supposedly used a Voronoi diagram in 1854 to illustrate how the majority of people who died in the Soho cholera epidemic lived closer to the infected Broad Street pump than to any other water pump.

The dual graph for a Voronoi diagram corresponds to the Delaunay triangulation for the same set of points. Delaunay is an interesting construction in its own right, but we'll save it for another day. For now suffice it to say that Delaunay offers a way of taking a field of (coplanar) points and making them into a field of triangles composed in such a way that the circumcircle inscribed by any given triangle encloses no other points.

Voronoi-tessellated forms tend to be aesthetically pleasing -- if the tessellation is done so as to produce more cells in areas high in detail, and fewer cells in low-detail areas -- though producing them is not always fast. Tessellation of a point-field into Voronoi cells generally takes (depending on the algorithm) either N-squared or N-log-N time (meaning, it can be quite slow if the number of points is large).

Fortunately, we can take advantage of a space-filling trick to make the whole process occur in linear time (i.e., time-order ~20N to 30N, in practice).

To see how the algorithm works, imagine, if you will, a field of points. Let each point magically become a soap bubble. Now grow each bubble slowly. When two bubbles meet, their walls fuse together into one flat section that joins the two, with a boundary that's perpendicular to the (imaginary) line connecting the centers of the bubbles. (If you've seen two bubbles stuck together, you know what I mean. There's a "flat" side to each bubble where they join together.) Continue to grow all bubbles until there are no more curved edges; only flat walls. This is the approach we use. We take a field of points and dilate them (grow them in all directions at once) until they become regions that adjoin. If all regions grow at the same speed, natural boundaries will form, and those boundaries will define Voronoi cells.

But how to redefine an image as a series of points? Easy: Just take random samples of the image. Actually, for the most visually pleasing result, we don't want random samples: We want to take more samples in areas of high detail and fewer samples in areas of gradual color change. This is easy enough to do with an algorithm that walks through the image, looking at how much each pixel differs from the pixels around it. We accumulate the variance into a "running average," and when that number exceeds a certain arbitrary threshold, we take a sample; otherwise, we set the visited pixel to white.

The JavaScript below shows how it's done. The loadSamples() method walks through the image, taking samples of pixel values -- more frequent samples in rapidly-fluctuating areas, less frequent samples in areas of little variation. Once a field of samples has been captured, we call the spaceFill() method, which dilates the points by growing them in north, south, east, and west directions until the image space is filled. I do frequent checks to see if we're done filling (in which case we break out of the loop). Generally, if the average cell size is small enough to give a pleasing visual appearance, the whole image can be filled in 30 iterations or so. Smaller (more numerous) cells can be filled quickly, hence fewer iterations with more cells. (Sounds counterintuitive at first.)

Note that to run this script, you may want to use the little ImageMunger app I gave code for in a previous post. (ImageMunger will open an image and run a script against it. Along the way, it puts Image and Panel globals in scope at runtime. See previous post for details.)

Unaccountably, I found that this code runs much faster using the separate Mozilla Rhino js.jar than using JDK6's onboard script engine. (When I say "much faster," I'm talking the difference between six seconds and two minutes.) I didn't try to troubleshoot it.


/*
voronoi.js
Kas Thomas
03 February 2010
Public domain.

*/

// Loop over all the pixels in the image and "sample" them, taking
// more samples in areas of detail, fewer samples in areas of little
// variation.
function loadSamples ( pixels, rasterWidth, threshold ) {
    length = pixels.length;
    accumulatedError = 0;
    thisPixel = 0;
    north = 0; south = 0;
    east = 0; west = 0;
    ave = 0;
    samples = new Array( pixels.length );
    for (var i = 0; i < samples.length; i++) samples[i] = 0;

    for (var i = 0; i < length; i++) {
        thisPixel = getPixelStrength( pixels[i] );
        // bounds checks: treat off-raster neighbors as full strength
        north = i > rasterWidth ? getPixelStrength( pixels[i - rasterWidth] ) : 1;
        south = i < (length - rasterWidth) - 1 ? getPixelStrength( pixels[i + rasterWidth] ) : 1;
        east = i + 1 < length ? getPixelStrength( pixels[i + 1] ) : 1;
        west = i - 1 >= 0 ? getPixelStrength( pixels[i - 1] ) : 1;

        ave = (north + south + east + west + Math.random() )/5.;

        accumulatedError += ave - thisPixel;

        if (accumulatedError > threshold) {
            samples[i] = pixels[i];    // keep this pixel as a Voronoi site
            accumulatedError = 0;
        }
        else
            samples[i] = 0x00ffffff;   // mark as unclaimed (white)
    }

    return samples;
}

// get green value, scale it to 0..1
function getPixelStrength( p ) {
    value = ( (p >> 8) & 255 )/255.;
    return value;
}

var w = Image.getWidth();
var h = Image.getHeight();
var pixels = Image.getRGB( 0,0,w,h,null,0,w );
SENSITIVITY = 4;
var newPixels = loadSamples( pixels, w, SENSITIVITY );





// Starting with a field of points, grow the points evenly
// until their regions touch.
function spaceFill( pixels, limit, width ) {

    var i;

    // iterate over all sample points and dilate them
    for ( i = 0; i < limit; i++) {

        var fillCount = 0;

        for (var k = 1; k < pixels.length; k++)
            fillCount += fillLeft( k, pixels );
        if ( 0 == fillCount ) // done filling? bail
            break;

        for (var k = width; k < pixels.length; k++)
            fillCount += fillUp( k, width, pixels );
        if ( 0 == fillCount )
            break;

        for (var k = pixels.length - 2; k >= 0; k--)
            fillCount += fillRight( k, pixels );
        if ( 0 == fillCount )
            break;

        for (var k = pixels.length - width - 1; k >= 0; k--)
            fillCount += fillDown( k, width, pixels );
        if ( 0 == fillCount )
            break;
    }
    return i;
}

// dilation functions: grow a pixel's value into a neighboring white
// (unclaimed) pixel. Note the parentheses around the bitwise AND:
// == binds more tightly than & in JavaScript.
function fillRight( i, pixels ) {
    if ((pixels[i + 1] & 0x00ffffff) == 0x00ffffff) {
        pixels[i + 1] = pixels[i];
        return 1;
    }
    return 0;
}

function fillLeft( i, pixels ) {
    if ((pixels[i - 1] & 0x00ffffff) == 0x00ffffff) {
        pixels[i - 1] = pixels[i];
        return 1;
    }
    return 0;
}

function fillUp( i, width, pixels ) {
    if ((pixels[i - width] & 0x00ffffff) == 0x00ffffff) {
        pixels[i - width] = pixels[i];
        return 1;
    }
    return 0;
}

function fillDown( i, width, pixels ) {
    if ((pixels[i + width] & 0x00ffffff) == 0x00ffffff) {
        pixels[i + width] = pixels[i];
        return 1;
    }
    return 0;
}

// This optional function is for reporting
// purposes only...
function howManySamples( pixels ) {
    for ( var i = 0, n = 0; i < pixels.length; i++)
        if (pixels[i] != 0x00ffffff)
            ++n;
    java.lang.System.out.println( n + " samples" );
    return n;
}

sampleCount = howManySamples( newPixels );
var iterations = spaceFill( newPixels, 50, w );
java.lang.System.out.println("Image filled in " + iterations + " iterations");
Image.setRGB( 0,0,w,h, newPixels, 0, w );
Panel.updatePanel(); // draw it


To get more Voronoi cells (finer granularity of resolution), decrease the value of the SENSITIVITY constant. A value around 4 will yield a point field with a density of around 3 percent -- in other words, 3 point samples per 100 pixels. To get half as many samples, double the SENSITIVITY value.

Generating a color-picker rainbow in 30 lines of JavaScript

Most color pickers, I find, aren't terribly helpful. Fortunately, though, it's relatively easy to create your own. All you have to do is generate a rainbow swatch and capture mousedowns or mousemoved events as the user hovers over the swatch, and sample the color under the mouse pointer. The trick is creating a good color swatch. The answer is a few lines of server-side JavaScript.

The swatch shown here (which has colors ranging from pure white at the top of the swatch, to 100 percent saturation at the bottom) was created by varying the red, green, and blue color channels sinusoidally, with each channel phase-shifted slightly. Code for this is shown below. To run the script, you can use the little ImageMunger app I gave code for in a previous post. (The app puts globals Image and Panel in scope. See previous post for details.) Just point the app at an image file of (say) dimensions 200 x 200 (or whatever), and let the script fill the image with colors. Be sure to use JDK6.

/* colorpicker.js
* Kas Thomas
* 02 February 2010
* Public domain.
*
* Run this file using ImageMunger:
* http://asserttrue.blogspot.com/2010/01/simple-java-class-for-running-scripts.html
*/


( function main() {

    w = Image.getWidth();
    h = Image.getHeight();
    pixels = Image.getRGB(0, 0, w,h, null, 0,w);

    var x,y,spanx,spany;
    for (var i = 0; i < pixels.length; i++) {
        x = i % w;
        y = i / w;
        spanx = x/w;
        spany = y/h;
        pixels[ i ] = rainbowPixel( spanx,spany );
    }
    Image.setRGB(0, 0, w,h, pixels, 0,w);
    Panel.updatePanel();

    function rainbowPixel( xspan, yspan ) {

        blue  = 255 - yspan*255 * ( 1.0 + Math.sin( 6.3*xspan ) )/2;
        green = 255 - yspan*255 * ( 1.0 + Math.cos( 6.3*xspan ) )/2;
        red   = 255 - yspan*255 * ( 1.0 - Math.sin( 6.3*xspan ) )/2;

        return (red << 16) + (green << 8) + blue;
    }

})();

Note that this technique can be adapted to PDF image-maps quite easily (as shown here). It is also the basis of a (pure Java) plug-in for the National Institutes of Health's freeware ImageJ program.

Future projects:
  • Instead of sampling the color under the mouse pointer, retrieve the target color procedurally by back-calculating the color based on the x-y coordinates of the mouse. (A sketch of this appears after the list.)
  • Rewrite the rainbowPixel() method to space the color channels out by 120 degrees (2-pi-over-3 radians) instead of 90 or 180 degrees. (In the code shown above, blue and green channels are phased 90 degrees apart; blue and red are 180 degrees apart.)
  • Make it so that colors range from pure white at the top of the swatch to black at the bottom, with full saturation in the middle of the swatch.
  • Write a version in which slider controls can be used to control the phase angles of the 3 color channels.
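
Here is a rough sketch of the first idea (untested, and the names are mine): instead of sampling pixels, recompute the color from the normalized mouse coordinates with the same rainbowPixel( ) formula that painted the swatch. swatch stands for whatever element displays the swatch image; offsetX/offsetY would need a fallback in browsers that lack them:

swatch.onmousedown = function( e ) {
    var xspan = e.offsetX / swatch.clientWidth;
    var yspan = e.offsetY / swatch.clientHeight;
    var rgb = rainbowPixel( xspan, yspan );   // same math used to draw the swatch
    // rgb is now the 24-bit color under the pointer -- no pixel sampling needed
};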