Modal dialogs are evil

I find it endlessly fascinating (and perpetually frustrating) that 26 years after the introduction of the Mac, all of us -- on Windows, Mac, Gnome desktop, pretty much you-name-it -- are still suffering with the same tired UI metaphors in our desktop apps, some of which continue to serve us well, but others of which continue to serve us shoddily, day after frustrating day. The UI metaphor that serves us most shoddily of all, arguably, is that of the modal dialog.

I'm starting to agree with Aza Raskin and others who have pointed out that modal dialogs (dialogs that won't go away until you deal with them) are basically evil. They're not dialogs at all. They're more in the nature of monologs. A programmer has decided that you need to stop what you're doing and focus on (and make a decision regarding) whatever it is the programmer has decided you need to focus on, before you can move on to something else. This is done for your own good, of course. God forbid you should defer a decision, or decide to go on working while making a decision.

Some modal dialogs are necessary, of course. After all, if it is a requirement that you enter a license string before using a product, then you damn well better enter the license string. But most modal dialogs don't have to be modal -- and shouldn't be, IMHO. Most modal dialogs are modal because it's easier for the programmer if you work that way; maintaining a consistent program state becomes messy and difficult if you have a bunch of dialog boxes open at once. It's a matter of convenience. Not your convenience; the convenience of the people who designed the program.

"Modal" is not how people like to work, though. People tend to be extremely ad-hoc in their working styles (to match their thinking styles), tackling little bits of a job in random order, working a little on this, a little on that, until the job is done. Few people tackle a job by working it linearly, in rigid stepwise fashion, step by step until it's done. That's why wizards are (as UI devices go) generally odious. They don't match the way people work.

In my day job, I have the (dis)pleasure of using Adobe products intensively. The three I use daily are Acrobat Professional, Photoshop, and FrameMaker. Of these, the one I use the most -- and that causes the most heartburn -- is FrameMaker. Ironically, Adobe has learned a great deal about good UI design over the years, but they've applied the knowledge haphazardly. Photoshop, in particular, has become much less modal (as has FrameMaker); you can work ad-hoc now through a combination of always-open dialogs (palette panels), always-visible contextual toolbar buttons, and hotkey combos. However, image filters (plug-ins) are still modal: You work with one effect at a time and can't leave them open while jumping back and forth between them, much less chain them. Ironically, Adobe After Effects does let you work with filters that way (pipelining them; playing with multiple filter settings simultaneously, in non-modal fashion). You'd think Adobe would apply what it has learned from After Effects to Photoshop, for the benefit of the much larger Photoshop audience. But no.

With FrameMaker, palette-style operations are (thankfully) much more the norm now, but there are still far too many modal dialogs, and the ones that are most intrusive (for me) happen at the worst time: when I am opening a file. It so happens that I work with a lot of files that have missing graphics (graphics that are on someone else's machine) and/or unresolved cross-references. It's in the nature of what I do that I'm always encountering such files, which means that when I open them, I always have to dismiss 3 dialogs. The first dialog asks me to locate missing graphics. After I dismiss that dialog, I'm confronted with the following dialog (monolog):

[Screenshot of FrameMaker's informational alert]
Once I dismiss this monolog, I am confronted with yet another warning:

[Screenshot of the follow-up warning dialog]
My question to Adobe is, why do I have to dismiss 3 dialogs in order to open a file? (And go through the same process every day, every time I open the same file?) Why can't you just put this information in a status-bar message at the bottom of the window, or flash it in a tooltip at the appropriate time (when I hover over a missing graphic), or at least put a checkbox on these dialogs that says "Don't show me this again"?

Better yet, give me a global config setting somewhere that turns off all "informational" alerts (see the little 'i' icon in the box?) and converts whatever those alerts (monologs) were going to tell me into log messages that I can look at whenever I want. Why put a modal dialog in my face and make me dismiss it 20 times a day?

But then, maybe I ask for too much. After all, it's only been 26 years now. These things take time to change.

To keep Flash relevant, Adobe must resort to the nuclear option

I keep asking myself over and over again whether Flash has a reason to live, aside from sheer legacy momentum (which is analogous to the "muscle memory" that keeps a dinosaur's tail wagging for a week after it is officially dead). The longer we go in the direction of HTML 5 and AJAX, the less reason I see for software companies (and individual developers) to dump time and resources into things like Flex and Flash. The technology is too nonstandard, too proprietary. The mere fact that you need a browser plug-in to run Flash is a huge liability for all concerned. It creates deployment and provisioning issues for the IT crowd, backwards compatibility issues for users and developers, messy browser-testing matrices for QA, etc. The upside to Flash just doesn't seem compelling enough compared to the costs. To me, anyway.

Flash finds itself at a crossroads now: It has two huge hurdles to overcome if it is to survive as a mainstream platform. One is Apple: Steve Jobs has made it quite apparent that he doesn't want Flash on the iPlatform. The other challenge is HTML itself (specifically HTML 5).

The lack of a common approach among browser makers on what format to use for the HTML video object has provided a stay of execution for Flash by ensuring a period of ongoing technological diversity as the format wars settle out. Apple has decided to put its weight behind MPEG-4/H.264, which it uses across its device platforms. Microsoft has stayed with VC-1, its own de facto standard video codec. With around a 25% share of the browser market, Mozilla Firefox proposes to standardize on the open-source Ogg Theora codec. This is a bit of an anomaly, because what people tend not to realize is that while H.264 seems to be an open and free standard, in reality it is a technology provided by the MPEG-LA patent-pooling cartel, and as a result it is governed by commercial and IP restrictions. (In fact, in 2014 it will impose royalty requirements on all users of the technology.)

The elephant in the room, of course, is Google. Some think Google will attempt an end-run around the others by launching an open video format with a well-defined open source license for the technology. According to industry experts, Google's new format, which is based on On2 VP8, delivers almost all of the same technical benefits as H.264.

From a practical point of view, no one can really be declared the "winner" of this kind of battle until the technology in question reaches an adoption rate of at least 90 percent. That's obviously a ways off.

Which means Adobe still has time to ward off Google's end run. But to do so effectively means adopting a brave -- in fact, radical (for Adobe) -- strategy. Adobe must make every aspect of the Flash platform open source, with the most liberal possible licensing terms -- and put the technology under community governance. In other words, Flash needs to be under the stewardship of something like the Apache Foundation. (And please, keep the licensing clean. We don't need a replay of the Sun/Java 7 fiasco.)

I personally don't see Adobe having the kind of foresight and community-mindedness needed to make this kind of dramatic preemptive move. But I'm convinced that if they don't, Flash will peak in popularity (which I believe it already has) and begin to recede into history -- like other perfectly good (and at one time pervasive) Macromedia technologies that have gone before.

Information Technology: Land of the Project-Challenged

A 1995 survey of 365 IT managers found that only 16% of IT projects were successful (on time and on budget). Some 31% were impaired or canceled -- total failures. Another 53% were project-challenged, a diplomatic way of saying that they were over budget, late, and/or failed to deliver all that was promised.

Ten years later, the percentage for success has reportedly climbed to 29% from 16%. Failures have decreased to 18% from 31%. But "challenged" is holding steady at 53%.

That's not great, but maybe it's not so bad for an industry in which products are never finished or perfect, just less broken.

Nine Questions to Ask during a Job Interview

It's important, when submitting to a job interview, to realize that the interview process goes both ways: You're interviewing your future employer. It's not just him or her interviewing you.

I've been a hiring manager (in R&D) as well as a hiree, and I can say that from the standpoint of the hiring manager it is always refreshing to encounter a candidate who has interesting questions to ask. In fact, the quality of questions an interviewee asks is something I always paid close attention to in interviews. A good candidate invariably asks good questions. You can tell a lot about a person's preparedness for the job (and overall enthusiasm level, not to mention the degree to which the person has done some homework on the company and the position) by the types of questions the candidate asks during an interview.

Most candidates, of course, are passive, expecting only to answer (not ask) questions. Which is bad.

So: What kinds of questions should you ask? Here are a few possibilities. You can probably think of others.

1. Who would I be working with on my first assignment? (Try to find out who your peers are and what their backgrounds are.) And: Who will I report to? (Hopefully, you'll report to the hiring manager. But it's possible you'll initially report to a team leader -- or to no one. Best to find out now.) Who will mentor me? (Hopefully, someone will.)

2. What is the single most important quality someone in this job should possess? This is an open-ended question that could tell you a lot about both the job itself and the person who is hiring you. The answer to this question could help you frame better answers to subsequent questions during the interview, so listen up.

3. How is success in this job measured? How will my performance be measured? This is crucial to future job satisfaction. A fuzzy answer here is bad news.

4. Are there opportunities for training (and/or career enrichment) in this job? What are they?

5. How often will I have an opportunity to meet with my manager? Are regularly scheduled performance reviews part of the process? Try to get a sense of what kind of "management culture" you're going to find yourself in. Is this a company that values management skills, or is it a free-for-all in which it's every manager, and every employee, for himself/herself?

6. What is the career path in this position? In other words, what are the opportunities for advancement? (In plain English: Is this a dead-end job? Will I be doing the same thing in 5 years?) If it's a dead-end job, best to find out now.

7. What tools will I use the most in my day-to-day job? This is a very practical question. You want a concrete answer, like: "You'll be using Eclipse and Maven on Linux quite heavily, and you'll be expected to track bugs in Bugzilla. For word processing, you'll use OpenOffice, and for e-mail you can use whatever you want." (Or whatever.)

8. If you're filling a vacancy (rather than a newly created position), ask what happened to your predecessor. Did the person get promoted? Leave voluntarily? Die of exhaustion, or stab wounds to the back? Try to get a sense of what happens to people who take this particular job.

9. Ask the hiring manager how he or she got hired at the company. Also ask: What do you most like (and/or dislike) about working here?


Questions Not to Ask

As a hiring manager, I've always been unimpressed when candidates asked certain questions. So avoid the following unless you know what you're doing:
  • Questions that show an undue interest in time off or avoidance of overtime. It may be that the job involves no overtime per se, but I still never liked getting the impression, early in a job interview, that the person was already looking for opportunities to take time off. (The first question out of your mouth should not be: "When do I get to take vacation time?") It speaks to a certain work ethic.
  • Questions about working from home when the job description clearly states that it is an on-site, 40-hour-a-week office job requiring close interaction with coworkers who are also working on-site.
  • Basic questions about what the company does. This is something the job applicant should already know a thing or two about (from having visited the company website ahead of time). Thoughtful, in-depth questions about specific aspects of what the company does are fine, of course. But don't ask questions that indicate you didn't visit -- and study, in some detail -- the company website.
  • Questions that indicate an undue fascination with pay raises, bonuses, or benefits. Again, these are actually fair-game topics, but you have to be careful how you ask about them. You don't want to convey an attitude of entitlement.
In general, you should save any questions that can be answered by the HR manager for the HR manager. Don't ask the hiring manager detailed questions about the company 401K plan. That's what the HR manager does.

Do ask questions that make your hiring manager think. Trust me when I say, that's more than most hiring managers are expecting.

Voronoi tessellation in linear time



Top Left: The source image (600 x 446 JPEG). Top Right: The same image as a collage of 2407 Voronoi cells. Lower Left: 5715 cells. Lower Right: 9435 cells, embossed. Click any image to see a larger version.

A Voronoi tessellation is a partitioning of the plane into polygonal regions that enclose points (one point per region) in such a way that the boundary between two adjoining regions runs perpendicular to the (imaginary) line connecting the two nearest points, while also lying midway between them. In the simplest case, a set of points S ("Voronoi sites") defines a corresponding number of cells V(s), with any given cell consisting of all points closer to s than to any other site. The segments of the Voronoi diagram are all the points in the plane that are equidistant from the two nearest sites.

If you look at the points in the diagram below, you can see that an imaginary line connecting any two neighboring points will be bisected at a right angle by a cell boundary; and the cell boundary will be exactly midway between the points. That's what makes a Voronoi cell a Voronoi cell.
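The definition is easy to demonstrate in code. Here's a brute-force sketch (my own illustrative helper, not the fast algorithm used later in this post) that assigns a point to its nearest site -- which is exactly what membership in a Voronoi cell means. It's O(pixels × sites), so it's fine for small inputs only:

```javascript
// Which Voronoi cell does the point (x, y) fall in? Answer: the
// cell of whichever site is closest. Sites are {x, y} objects.
function nearestSite(x, y, sites) {
    var best = 0, bestD = Infinity;
    for (var i = 0; i < sites.length; i++) {
        var dx = x - sites[i].x, dy = y - sites[i].y;
        var d = dx * dx + dy * dy;   // squared Euclidean distance
        if (d < bestD) { bestD = d; best = i; }
    }
    return best;   // index of the site whose cell contains (x, y)
}
```

Running this for every pixel paints the full diagram, but at a cost proportional to the number of sites per pixel -- which is what the space-filling trick below avoids.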

[Voronoi diagram: a field of points with their cell boundaries]
Voronoi diagrams are named after Russian mathematician Georgy Fedoseevich Voronoi, but their use dates back hundreds of years. Descartes was already familiar with them in 1644. British physician John Snow supposedly used a Voronoi diagram in 1854 to illustrate how the majority of people who died in the Soho cholera epidemic lived closer to the infected Broad Street pump than to any other water pump.

The dual graph for a Voronoi diagram corresponds to the Delaunay triangulation for the same set of points. Delaunay is an interesting construction in its own right, but we'll save it for another day. For now suffice it to say that Delaunay offers a way of taking a field of (coplanar) points and making them into a field of triangles composed in such a way that the circumcircle of any given triangle encloses no other points.

Voronoi-tessellated forms tend to be aesthetically pleasing -- if the tessellation is done so as to produce more cells in areas high in detail, and fewer cells in low-detail areas -- although producing them is not always fast. Tessellation of a point-field into Voronoi cells generally takes (depending on the algorithm) either N-squared or N-log-N time, which means it can be quite slow if the number of points is large.

Fortunately, we can take advantage of a space-filling trick to make the whole process occur in linear time (i.e., time-order ~20N to 30N, in practice).

To see how the algorithm works, imagine, if you will, a field of points. Let each point magically become a soap bubble. Now grow each bubble slowly. When two bubbles meet, their walls fuse together into one flat section that joins the two, with a boundary that's perpendicular to the (imaginary) line connecting the centers of the bubbles. (If you've seen two bubbles stuck together, you know what I mean. There's a "flat" side to each bubble where they join together.) Continue to grow all bubbles until there are no more curved edges; only flat walls. This is the approach we use. We take a field of points and dilate them (grow them in all directions at once) until they become regions that adjoin. If all regions grow at the same speed, natural boundaries will form, and those boundaries will define Voronoi cells.

But how to redefine an image as a series of points? Easy: Just take random samples of the image. Actually, for the most visually pleasing result, we don't want random samples: We want to take more samples in areas of high detail and fewer samples in areas of gradual color change. This is easy enough to do with an algorithm that walks through the image, looking at how much each pixel differs from the pixels around it. We accumulate the variance into a "running average," and when that number exceeds a certain arbitrary threshold, we take a sample; otherwise, we set the visited pixel to white.

The JavaScript below shows how it's done. The loadSamples() method walks through the image, taking samples of pixel values -- more frequent samples in rapidly-fluctuating areas, less frequent samples in areas of little variation. Once a field of samples has been captured, we call the spaceFill() method, which dilates the points by growing them in north, south, east, and west directions until the image space is filled. I do frequent checks to see if we're done filling (in which case we break out of the loop). Generally, if the average cell size is small enough to give a pleasing visual appearance, the whole image can be filled in 30 iterations or so. Smaller (more numerous) cells can be filled quickly, hence fewer iterations with more cells. (Sounds counterintuitive at first.)

Note that to run this script, you may want to use the little ImageMunger app I gave code for in a previous post. (ImageMunger will open an image and run a script against it. Along the way, it puts Image and Panel globals in scope at runtime. See previous post for details.)

Unaccountably, I found that this code runs much faster using the separate Mozilla Rhino js.jar than using JDK6's onboard script engine. (When I say "much faster," I'm talking the difference between six seconds and two minutes.) I didn't try to troubleshoot it.


/*
voronoi.js
Kas Thomas
03 February 2010
Public domain.

*/

// Loop over all the pixels in the image and "sample" them, taking
// more samples in areas of detail, fewer samples in areas of little
// variation.
function loadSamples( pixels, rasterWidth, threshold ) {

    var length = pixels.length;
    var accumulatedError = 0;
    var thisPixel = 0;
    var north = 0, south = 0, east = 0, west = 0;
    var ave = 0;
    var samples = new Array( pixels.length );

    for (var i = 0; i < samples.length; i++)
        samples[i] = 0;

    for (var i = 0; i < length; i++) {
        thisPixel = getPixelStrength( pixels[i] );

        // neighbors (treat off-raster neighbors as full strength)
        north = i >= rasterWidth ? getPixelStrength( pixels[i - rasterWidth] ) : 1;
        south = i + rasterWidth < length ? getPixelStrength( pixels[i + rasterWidth] ) : 1;
        east = i + 1 < length ? getPixelStrength( pixels[i + 1] ) : 1;
        west = i - 1 >= 0 ? getPixelStrength( pixels[i - 1] ) : 1;

        // average the neighbors (plus a little jitter) ...
        ave = (north + south + east + west + Math.random()) / 5.0;

        // ... and accumulate the difference from the current pixel
        accumulatedError += ave - thisPixel;

        if (accumulatedError > threshold) {
            samples[i] = pixels[i];    // keep this pixel as a sample
            accumulatedError = 0;
        }
        else
            samples[i] = 0x00ffffff;   // otherwise set it to white
    }

    return samples;
}

// get the green value, scaled to 0..1
function getPixelStrength( p ) {
    return ( (p >> 8) & 255 ) / 255.0;
}

var w = Image.getWidth();
var h = Image.getHeight();
var pixels = Image.getRGB( 0,0,w,h,null,0,w );
SENSITIVITY = 4;
var newPixels = loadSamples( pixels, w, SENSITIVITY );





// Starting with a field of points, grow the points evenly
// until their regions touch.
function spaceFill( pixels, limit, width ) {

    var i;

    // iterate over all sample points and dilate them
    for ( i = 0; i < limit; i++ ) {

        var fillCount = 0;

        for (var k = 1; k < pixels.length; k++)
            fillCount += fillLeft( k, pixels );
        if ( 0 == fillCount ) // done filling? bail
            break;

        for (var k = width; k < pixels.length; k++)
            fillCount += fillUp( k, width, pixels );
        if ( 0 == fillCount )
            break;

        for (var k = pixels.length - 2; k >= 0; k--)
            fillCount += fillRight( k, pixels );
        if ( 0 == fillCount )
            break;

        for (var k = pixels.length - width - 1; k >= 0; k--)
            fillCount += fillDown( k, width, pixels );
        if ( 0 == fillCount )
            break;
    }
    return i;
}

// dilation functions: each copies the current pixel into a white
// (unclaimed) neighbor. Note the parentheses around the mask --
// in JavaScript, == binds more tightly than &.
function fillRight( i, pixels ) {
    if ((pixels[i + 1] & 0x00ffffff) == 0x00ffffff) {
        pixels[i + 1] = pixels[i];
        return 1;
    }
    return 0;
}

function fillLeft( i, pixels ) {
    if ((pixels[i - 1] & 0x00ffffff) == 0x00ffffff) {
        pixels[i - 1] = pixels[i];
        return 1;
    }
    return 0;
}

function fillUp( i, width, pixels ) {
    if ((pixels[i - width] & 0x00ffffff) == 0x00ffffff) {
        pixels[i - width] = pixels[i];
        return 1;
    }
    return 0;
}

function fillDown( i, width, pixels ) {
    if ((pixels[i + width] & 0x00ffffff) == 0x00ffffff) {
        pixels[i + width] = pixels[i];
        return 1;
    }
    return 0;
}

// This optional function is for reporting
// purposes only...
function howManySamples( pixels ) {
    var n = 0;
    for ( var i = 0; i < pixels.length; i++ )
        if (pixels[i] != 0x00ffffff)
            ++n;
    java.lang.System.out.println( n + " samples" );
    return n;
}
sampleCount = howManySamples( newPixels );
var iterations = spaceFill( newPixels,50, w );
java.lang.System.out.println("Image filled in " + iterations + " iterations");
Image.setRGB( 0,0,w,h, newPixels, 0, w );
Panel.updatePanel(); // draw it


To get more Voronoi cells (finer granularity of resolution), decrease the value of the SENSITIVITY constant. A value around 4 will yield a point field with a density of around 3 percent -- in other words, 3 point samples per 100 pixels. To get half as many samples, double the SENSITIVITY value.

Generating a color-picker rainbow in 30 lines of JavaScript

Most color pickers, I find, aren't terribly helpful. Fortunately, it's relatively easy to create your own. All you have to do is generate a rainbow swatch, capture mousedown or mousemove events as the user hovers over the swatch, and sample the color under the mouse pointer. The trick is creating a good color swatch. The answer is a few lines of server-side JavaScript.
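The sampling step is the easy part. Assuming you have the swatch's pixels as a packed-RGB array (the layout returned by getRGB() in the script below) plus the mouse's x-y position, the lookup is just arithmetic. The function name here is mine, for illustration; the event wiring is left out:

```javascript
// Given a packed-RGB pixel array, the swatch width, and the mouse
// position, return the color components under the pointer.
function colorUnderMouse(pixels, width, x, y) {
    var p = pixels[y * width + x];   // row-major pixel lookup
    return {
        red:   (p >> 16) & 255,
        green: (p >> 8)  & 255,
        blue:  p & 255
    };
}
```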

The swatch shown here (which has colors ranging from pure white at the top of the swatch, to 100 percent saturation at the bottom) was created by varying the red, green, and blue color channels sinusoidally, with each channel phase-shifted slightly. Code for this is shown below. To run the script, you can use the little ImageMunger app I gave code for in a previous post. (The app puts globals Image and Panel in scope. See previous post for details.) Just point the app at an image file of (say) dimensions 200 x 200 (or whatever), and let the script fill the image with colors. Be sure to use JDK6.

/* colorpicker.js
* Kas Thomas
* 02 February 2010
* Public domain.
*
* Run this file using ImageMunger:
* http://asserttrue.blogspot.com/2010/01/simple-java-class-for-running-scripts.html
*/


( function main() {

    var w = Image.getWidth();
    var h = Image.getHeight();
    var pixels = Image.getRGB( 0, 0, w, h, null, 0, w );

    var x, y, spanx, spany;
    for (var i = 0; i < pixels.length; i++) {
        x = i % w;
        y = Math.floor( i / w );   // row number of this pixel
        spanx = x / w;
        spany = y / h;
        pixels[ i ] = rainbowPixel( spanx, spany );
    }
    Image.setRGB( 0, 0, w, h, pixels, 0, w );
    Panel.updatePanel();

    function rainbowPixel( xspan, yspan ) {

        var blue  = 255 - yspan * 255 * ( 1.0 + Math.sin( 6.3 * xspan ) )/2;
        var green = 255 - yspan * 255 * ( 1.0 + Math.cos( 6.3 * xspan ) )/2;
        var red   = 255 - yspan * 255 * ( 1.0 - Math.sin( 6.3 * xspan ) )/2;

        // truncate each channel to an integer before packing
        return (red << 16) + (green << 8) + (blue | 0);
    }

})();
Note that this technique can be adapted to PDF image-maps quite easily (as shown here). It is also the basis of a (pure Java) plug-in for the National Institutes of Health's freeware ImageJ program.

Future projects:
  • Instead of sampling the color under the mouse pointer, retrieve the target color procedurally by back-calculating the color based on the x-y coordinates of the mouse.
  • Rewrite the rainbowPixel() method to space the color channels out by 120 degrees (2-pi-over-3 radians) instead of 90 or 180 degrees. (In the code shown above, blue and green channels are phased 90 degrees apart; blue and red are 180 degrees apart.)
  • Make it so that colors range from pure white at the top of the swatch to black at the bottom, with full saturation in the middle of the swatch.
  • Write a version in which slider controls can be used to control the phase angles of the 3 color channels.
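A sketch of the first idea: since the swatch is generated procedurally, the color under the mouse can be recomputed from coordinates alone, using the same formulas as rainbowPixel() above -- no pixel sampling needed. (The function name is mine, for illustration.)

```javascript
// Back-calculate the swatch color at a given fractional position
// (xspan, yspan in [0..1]), mirroring rainbowPixel() exactly.
function colorAt(xspan, yspan) {
    var blue  = 255 - yspan * 255 * (1.0 + Math.sin(6.3 * xspan)) / 2;
    var green = 255 - yspan * 255 * (1.0 + Math.cos(6.3 * xspan)) / 2;
    var red   = 255 - yspan * 255 * (1.0 - Math.sin(6.3 * xspan)) / 2;
    return {
        red:   Math.round(red),
        green: Math.round(green),
        blue:  Math.round(blue)
    };
}
// e.g. colorAt(mouseX / swatchWidth, mouseY / swatchHeight)
```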

Procedural Paint in Java: Perlin noise


A few days ago, I showed how to implement java.awt.Paint in a way that lets you vary the paint appearance according to the x-y position of a point onscreen -- in other words, treating Paint as a procedural texture. It turns out to be pretty straightforward. Implementing the Paint interface means providing an implementation for Paint's one required method, createContext():
   public PaintContext createContext(ColorModel cm,
                                     Rectangle deviceBounds,
                                     Rectangle2D userBounds,
                                     AffineTransform xform,
                                     RenderingHints hints)
Most of the formal parameters are hints and can be ignored. Note that the createContext() method returns a java.awt.PaintContext object. PaintContext is an interface, so you have to implement it as well, and this (it turns out) is where the real action occurs. The methods of the PaintContext interface include:

public void dispose();
public ColorModel getColorModel();
public Raster getRaster(int x,
                        int y,
                        int w,
                        int h);

The dispose() method releases any resources that were allocated by the class. In many cases, you'll allocate nothing and thus your dispose method can be empty. The getColorModel() method can, in most cases, be a one-liner that simply returns ColorModel.getRGBdefault(). Where things get interesting is in getRaster(). That's where you have the opportunity to set pixel values for all the pixels in the raster based on their x-y values.

One of the most widely used procedural textures is Ken Perlin's famous noise algorithm. It might be an exaggeration (but not by much) to say that the majority of the CGI world's most interesting textures start from, or at least in some way use, Perlin noise. One could say it's the texture that launched a thousand Oscars. (In 1997, Perlin won an Academy Award for Technical Achievement from the Academy of Motion Picture Arts and Sciences for his noise algorithm; that's how foundationally important it is in cinematic CGI.)
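For intuition, here is a minimal one-dimensional sketch of the gradient-noise idea: pseudo-random gradients at integer lattice points, smoothly blended in between with Perlin's fade curve. (This toy version is mine, for illustration; the real ImprovedNoise works in 3D with a permutation table.)

```javascript
// 1D gradient noise: a random slope at each integer lattice point,
// interpolated smoothly between neighbors.
var grad = [];
for (var i = 0; i < 256; i++) grad[i] = Math.random() * 2 - 1;

function fade(t)       { return t * t * t * (t * (t * 6 - 15) + 10); }
function lerp(t, a, b) { return a + t * (b - a); }

function noise1d(x) {
    var i = Math.floor(x) & 255;            // which lattice cell
    var f = x - Math.floor(x);              // position within the cell
    var a = grad[i] * f;                    // dot product with left gradient
    var b = grad[(i + 1) & 255] * (f - 1);  // dot product with right gradient
    return lerp(fade(f), a, b);             // smooth value, roughly [-1..1]
}
```

Note that the noise is exactly zero at every lattice point; all the visual interest comes from the gradients in between, which is what gives Perlin noise its band-limited, "natural" character.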

It turns out to be pretty easy to implement Perlin noise in custom Paint; see the 100 lines of code shown below. Note that in order to use this code, you need the class ImprovedNoise.java, which is a nifty reference implementation of Perlin noise provided by Ken Perlin here.

/* PerlinPaint
* Kas Thomas
* 1 February 2010
* Public domain.
* http://asserttrue.blogspot.com/
*
* Demonstration of a custom java.awt.Paint implementation.
* This Paint uses a two-dimensional Perlin noise texture,
 * based on Perlin's improved reference implementation
* (see ImprovedNoise.java, http://mrl.nyu.edu/~perlin/noise/).
* Thanks to David Jones (Code Monk) for the idea.
*/


import java.awt.Color;
import java.awt.Paint;
import java.awt.PaintContext;
import java.awt.Rectangle;
import java.awt.RenderingHints;
import java.awt.geom.AffineTransform;
import java.awt.geom.Rectangle2D;
import java.awt.image.ColorModel;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

class PerlinPaint implements Paint {

    static final AffineTransform defaultXForm =
        AffineTransform.getScaleInstance(0.15, 0.15);

    // Colors a and b stored in component form.
    private float[] colorA;
    private float[] colorB;
    private AffineTransform transform;

    public PerlinPaint(Color a, Color b) {
        colorA = a.getComponents(null);
        colorB = b.getComponents(null);
        transform = defaultXForm;
    }

    public PerlinPaint(Color a, Color b, AffineTransform transformArg) {
        colorA = a.getComponents(null);
        colorB = b.getComponents(null);
        transform = transformArg;
    }

    public PaintContext createContext(ColorModel cm,
                                      Rectangle deviceBounds,
                                      Rectangle2D userBounds,
                                      AffineTransform xform,
                                      RenderingHints hints) {
        return new Context(cm, xform);
    }

    public int getTransparency() {
        return java.awt.Transparency.OPAQUE;
    }

    class Context implements PaintContext {

        public Context(ColorModel cm_, AffineTransform transform_) { }

        public void dispose() {}

        public ColorModel getColorModel() {
            return ColorModel.getRGBdefault();
        }

        // getRaster makes heavy use of the enclosing PerlinPaint instance
        public Raster getRaster(int xOffset, int yOffset, int w, int h) {

            WritableRaster raster =
                getColorModel().createCompatibleWritableRaster( w, h );

            float [] color = new float[4];

            for ( int y = 0; y < h; y++ ) {
                for ( int x = 0; x < w; x++ ) {

                    // treat each x-y as a point in Perlin space
                    float [] p = { x + xOffset, y + yOffset };

                    transform.transform(p, 0, p, 0, 1);

                    float t = (float) ImprovedNoise.noise( p[0], p[1], 2.718 );

                    // ImprovedNoise.noise returns a value in the range [-1..1],
                    // whereas we want a value in the range [0..1], so:
                    t = (1 + t)/2;

                    for ( int c = 0; c < 4; c++ ) {
                        color[ c ] = lerp( t, colorA[ c ], colorB[ c ] );
                        // We assume the default RGB model, 8 bits per band.
                        color[ c ] *= 0xff;
                    }
                    raster.setPixel( x, y, color );
                }
            }
            return raster;
        }

        float lerp( float t, float a, float b ) {
            return a + t * ( b - a );
        }
    }
}


The code should be self-explanatory. There are two constructors; both allow you to pick the primary and secondary colors for the texture, but one includes an AffineTransform, whereas the other doesn't. If you use the constructor with the transform, you can scale (or rotate, etc.) the Perlin noise to suit your needs. To achieve the "cloudy" look, the text at the top of this post uses a scaling factor of .06 in x and .05 in y, per the script below.

Note that to run the following script, it helps if you have a copy of ImageMunger, the tiny Java app I wrote about a couple weeks ago. ImageMunger is a very simple command-line application: You pass it two command-line arguments, namely a file path pointing at a JPEG or other image file, and a file path pointing at a JavaScript file. ImageMunger opens the image in a JFrame and executes the script. Meanwhile, it also puts two global variables in scope for your script to use: Image (a reference to the BufferedImage object) and Panel (a reference to the JComponent that paints the image). Be sure you have JDK6.

/* perlinText.js
* Kas Thomas
* 1 February 2010
* Public domain.
*
* Run this file using ImageMunger:
* http://asserttrue.blogspot.com/2010/01/simple-java-class-for-running-scripts.html
*/


g2d = Image.createGraphics();

rh = java.awt.RenderingHints;
hint = new rh( rh.KEY_TEXT_ANTIALIASING,rh.VALUE_TEXT_ANTIALIAS_ON );
g2d.setRenderingHints( hint );
transform = java.awt.geom.AffineTransform.getScaleInstance( .06, .05 );
perlinPaint = new Packages.PerlinPaint( java.awt.Color.BLUE,java.awt.Color.WHITE,transform);

g2d.setPaint( perlinPaint );
g2d.setFont( new java.awt.Font("Times New Roman",java.awt.Font.BOLD,130) );
g2d.drawString( "Perlin",50,100);
g2d.drawString( "Noise",50,200);

Panel.updatePanel();
Future projects:
  • Implement Perlin's turbulence and Brownian noise as custom Paints.
  • Implement a bump-map (faux 3D-shaded) version of PerlinPaint.