
Seven Surefire Ways to Botch a Job Interview

In prior lives, as a hiring manager, I've interviewed scores of job applicants. And in the course of interviews, I've seen (how shall I say?) certain recurrent antipatterns of behavior that are usually a pretty good tipoff that the person in question "isn't right for the job."

Here are seven things I don't want to see during an interview. Committing only one or two of these transgressions might not cost you the job, but if a pattern starts to emerge, believe me, I'll notice it; and you won't be asked back.

1. Be late.
This indicates lack of commitment to deadlines. Arriving 10 to 15 minutes ahead of time is at least a small clue that you know how to underpromise and overdeliver. If you got stuck in traffic, that's fine; it won't count against you (especially if you call ahead to let someone know you'll be late). Otherwise? Don't waste your hiring manager's time. Don't be late.

2. Be unprepared.
Did you leave samples of prior work at home? (Yes, I can look them up online, but it's a nice courtesy to be offered hard copies of previous work, whether printed or on CD, DVD, flash drive, etc.) Did you forget to bring an extra copy of your resume? Again, these sorts of small details aren't going to be showstoppers in and of themselves, but taken together with other items in this list, they can reveal a pattern of inattention to "little things." Sustained inattention to "little things" kills a business. Don't act like you don't know that.

3. Avoid direct eye contact.
If you can't look me in the eye when answering questions, I'm going to get the impression, subconsciously at least, that you're hiding something, or that you're ashamed of something, anxious to leave, easily distracted by your surroundings, etc. (or that you just don't like me). Stay focused. I'm your center of attention. Look me in the eye.

4. Say bad things about a previous employer, or be unable to explain why you left a previous job.
If I'm interviewing you, rest assured, I am going to ask you about your previous work experience. That means I'll definitely ask why you left your previous jobs (yes, all of them). Be careful how you answer. If you left a job because a previous employer treated people poorly, provide concrete details; explain the exact circumstances. But be careful. If you say something negative about a previous job, a previous manager, or a company, I'll assume that you may someday say something negative about me or my company. I'll question your loyalty before you even begin working. That's not good.

5. Fail to ask good questions about the job.
If you're seriously interested in the job, you'll have questions. By all means, ask! I want to know what's important to you (work conditions? people? hours? pay? the quality and nature of the assignments?), and I'll get some indication of that in the kind of questions you choose to ask. Plus, asking questions shows that you're inquisitive, thoughtful, and not merely interested in superficial matters -- or just being employed again.

6. Ask a lot of questions about flextime, days off, bonus plan, stock options, and job perks (and show concern about how much overtime you might have to work).
I need somebody who's a hard worker and committed to helping a team meet difficult deadlines. Don't make me think you're focused on not working hard. It's okay to ask questions about perks and benefits (it's expected, actually), but save them until the end and for gosh sakes, don't make it look like perks, benefits, and compensation are near the top of your list of priorities. I'll wonder about your work ethic.

7. Come to the interview not having gone to the company's web site and not knowing a thing about the company.
Before coming to an interview, do a little homework. Visit the company web site (be prepared to critique it later, if asked), learn the company's history, and try to understand the company's positioning in the market and current strategies. I want to know that you're self-motivated, able to do a little research on your own, and keenly interested in this particular job, at this particular company. If you come to the interview not knowing what the company does, it shows me you don't care about the big picture. Maybe you don't care about anything. Maybe you're just plain lazy. Next.

There are plenty more ways to show an interviewer that you aren't the right person for the job, but these are a few of my favorites. And yes, I've interviewed candidates who flunked on all counts. It's amazing how many job candidates come to an interview well dressed but unprepared, unaware of what the company does, unable to ask questions that aren't related to perks and benefits, and unable to say good things about prior employers.

I want to know that you're a hard worker and a highly focused, self-motivated individual who is detail-oriented, yet also tries to understand the big picture. Is that so much to ask?

How the RIA wars will affect the future of civilization

There's a war in progress, and the outcome of it will affect the future of computing. It's important to see it for what it is, so you can prepare for the consequences. The consequences (unless there's a cease-fire in the meantime) will be enormous.

To get a handle on it requires a certain appreciation for the importance of operating systems. Let's back up the truck for a minute and talk about operating systems, and Windows in particular (two different things, really).

Non-technical computer users can be forgiven, I think, for misusing the term "Operating System" in the context of Windows. Underneath Windows is an operating system, to be sure, but the collection of applications that, in the aggregate, gives Windows its Windowsness has little to do with operating systems. An operating system is really the core, essential software that discovers and registers "devices," controls the bootstrapping of a machine's services at startup, and provides various hosting services to applications.

From a human user's point of view, that last bit (providing hosting services to apps) is the most important aspect of an operating system. It's what makes it possible for us to run programs and get work done.

But consider what has happened over the past decade or so. The Web has become a central metaphor in computing, not just at the level of desktop PCs but also on a variety of handheld (and other) devices. Initially, the Web was a world of static content: You visited a URL with a browser, the browser rendered the page, and you hopped from URL to URL via hypertext links. But now the Web is full of highly interactive "web apps," and the browser is merely an interactive hosting environment for client-server apps in which part of the logic executes locally and part executes on a server somewhere. The browser is now the logical analog of a desktop OS, in many ways -- mooting the importance of something like, say, Windows.

This has worked to Microsoft's disadvantage, obviously. When people rely more and more heavily on a browser to get work done, it tends to marginalize the importance of desktop software; and since the Web is, at its very core, standards-driven (TCP/IP, HTTP, HTML, URLs, etc.; all universally understood standards), the concept of a non-standardized, proprietary OS that doesn't understand how to interoperate with non-native software (or with other OSes) runs counter to a user's needs and actually becomes an anti-feature. When the most important thing a computer OS can do is provide connectivity to an outside world that's based on standards, the proprietary OS is a liability. In fact, the Web makes all OSes equally irrelevant, in some sense. That's one reason Apple is doing well: the age-old Cupertino stigma of having a non-Windows-interoperable OS is no longer, in fact, a stigma. The field is (almost) level now.

To the degree that things like Windows have become sidelined in importance compared to the virtual OS of the Web+browser, rich Internet-aware (but also desktop-aware) runtime frameworks like Adobe AIR become hugely important. They represent the "next platform." And that necessarily also means the next potentially proprietary platform, because Adobe (to stay with the AIR example) is a closed-source company that still makes most of its money from proprietary, non-open software. Even if you want to make the (hollow) argument that the Flash standard is "open" and uses ActionScript ("open") and XML ("open"), etc., you still have to concede that Adobe has an iron grip over what Flash and Flex consist of, and where they're headed. The only question (let's be frank about this) is whether Adobe is a benevolent dictator or a venal, conniving one.

The battle for the Rich Internet Application platform high ground is really a proprietary platform play: the next big attempt to lock computer users into privately controlled technologies -- technologies like Flash and Flex that could be foundational to future computing. The winner (be it Adobe, Microsoft, or Oracle) will find itself with a great concentration of power in its hands. Which, if we've learned anything at all from history, is a very bad thing indeed.

So before you get too worked up about AIR, Silverlight, or JavaFX, before you drink anybody's Kool-aid and start passing the cup around, remember what you're dealing with. These technologies aren't about making the world more standards-driven or putting more control in the hands of the user. They're about putting control of the Web experience in the hands of a multibillion-dollar closed-source software giant. Choose your poison carefully.

Of course, if one of the big RIA contenders decides to go 100% open-source, and put the future of the platform (whichever one it turns out to be) completely under community governance, then we have nothing to fear; we have a democracy. But don't hold your breath.

Two techniques for faster JavaScript

I like things that go fast, and that includes code that runs fast. With JavaScript (and Java, too), that can be a challenge. So much the better, though. I like challenges too.

When someone asks me what's the single best way to speed up "a slow script," naturally I want to know what the script is spending most of its time doing. In browser scripting, it's typical that a "slow" script operation either involves tedious string parsing of some kind, or DOM operations. That's if you don't count programmer-insanity sorts of things, like creating a regular expression object over and over again in a loop.
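That last one, for the record, looks something like this (a contrived illustration; the array and the counter are made up):

    var lines = ["// TODO: fix this", "var x = 1;", "// TODO: remove hack"];
    var count = 0;

    // Anti-pattern: a new RegExp object gets compiled on every iteration.
    for (var i = 0; i < lines.length; i++) {
        if (new RegExp("\\bTODO\\b").test(lines[i])) { count++; }
    }

    // Saner: build the pattern once, outside the loop (or use a literal).
    var todoPattern = /\bTODO\b/;
    count = 0;
    for (var j = 0; j < lines.length; j++) {
        if (todoPattern.test(lines[j])) { count++; }
    }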

The two most important pieces of advice I can give on speeding up browser scripts, then, are:

1. Never hand-parse a string.
2. Don't do DOM operations in loops (and in general, don't do DOM operations!).

No. 1 means don't do things like crawl a big long string using indexOf( ) to tokenize-as-you-go. Instead, use replace( ) or a split( )/join( ) technique, or some other technique that will basically have the effect of moving the loop into a C++ native routine inside the interpreter. (The general approach is discussed in a previous post.) An example would be hit-highlighting in a long run of text. Don't step through the text looking for the term(s) in question; use replace( ).
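Here's a rough sketch of hit-highlighting done that way (the span markup and the escaping of the search term are details you'd want to harden in real code):

    // Highlight every occurrence of a search term in a long run of text by
    // letting replace() do the looping natively, instead of walking the
    // string with indexOf() in JavaScript.
    function highlight(text, term) {
        // Escape any regex metacharacters in the user-supplied term.
        var escaped = term.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
        var pattern = new RegExp("(" + escaped + ")", "gi");
        return text.replace(pattern, "<span class='hit'>$1</span>");
    }

    // highlight("The rain in Spain", "ain")
    //   -> "The r<span class='hit'>ain</span> in Sp<span class='hit'>ain</span>"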

No. 2 means to avoid looping over the return values from getElementsByTagName( ) -- in fact, don't call it unless you have to -- and get away from doing a lot of createElement( ), appendChild( ) types of things, especially in loops, and especially in functions that get called a lot (such as event handlers for mouse movements). How? Use innerHTML wherever possible. In other words, create your "nodes" as Strings (markup), then slam the final string into the DOM at the last minute by setting the parent node's innerHTML to that value. This moves all the DOM reconfiguring into the browser's native DOM routines, where it happens at the speed of compiled C++. Don't sit there and rebuild the DOM yourself, brick by brick, in JavaScript, unless you have to, which you seldom do.
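A sketch of the difference (the list items are made up, and assumed to be safe markup; the point is where the looping happens):

    // Slow: hundreds of node creations and insertions, one at a time,
    // each one an opportunity for the browser to do layout work.
    function buildListSlowly(items, parent) {
        for (var i = 0; i < items.length; i++) {
            var li = document.createElement("li");
            li.appendChild(document.createTextNode(items[i]));
            parent.appendChild(li);
        }
    }

    // Faster: build the markup as one big string, then hand it to the
    // browser's native parser in a single innerHTML assignment.
    function buildListQuickly(items, parent) {
        var html = [];
        for (var i = 0; i < items.length; i++) {
            html.push("<li>", items[i], "</li>");
        }
        parent.innerHTML = html.join("");
    }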

There are other techniques for avoiding big slowdowns, but they're more situational. And I'm still learning, of course. I'm still trying to find out what all the lazily-invoked "big speed hit" operations are in Gecko that can suddenly be triggered by scripts. The situational speed hits can sometimes be addressed through caching of expensive objects, or reuse of expensive results (a technique known as memoization; good article here). The Mozilla folks have put a lot of work into speeding up the JavaScript runtimes, but remember, the fastest runtime environment in the world can be brought to its knees by poor choice of algorithms.
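Memoization in miniature, by the way, looks like this (a generic helper for one-argument functions; in real code you'd want a smarter cache key than String(arg)):

    // Wrap an expensive one-argument function so that repeated calls
    // with the same argument come back from a cache.
    function memoize(fn) {
        var cache = {};
        return function (arg) {
            var key = String(arg);
            if (!cache.hasOwnProperty(key)) {
                cache[key] = fn(arg);
            }
            return cache[key];
        };
    }

    // Usage: the expensive work gets done once per distinct input.
    var slowSquare = function (n) {
        for (var i = 0; i < 1000000; i++) {}  // simulate expensive work
        return n * n;
    };
    var fastSquare = memoize(slowSquare);
    fastSquare(12);  // computed
    fastSquare(12);  // served from the cache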

Obviously it's not always possible to employ the two techniques mentioned above, and in certain cases the performance gain is not impressive. But in general, these remain underutilized techniques (from what I can tell), which is why I bring them up here.

If you have additional techniques for speeding up JavaScript, by all means, leave a comment. I'm interested in hearing your experiences.

Can you pass this JavaScript test?

Think you know JavaScript? Try the following quick quiz. Guess what each expression evaluates to. (Answers given at the end.)

1. ++Math.PI
2. (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)
3. typeof NaN
4. typeof typeof undefined
5. a = {null:null}; typeof a.null;
6. a = "5"; b = "2"; c = a * b;
7. a = "5"; b = 2; c = a+++b;
8. isNaN(1/null)
9. (16).toString(16)
10. 016 * 2
11. ~null
12. "ab c".match(/\b\w\b/)

This isn't a tutorial, so I'm not going to explain each answer individually. If you missed any, I suggest while (!enlightenment()) meditate();

The answers:

1. 4.141592653589793
2. false
3. "number"
4. "string"
5. "object"
6. 10
7. 7
8. false
9. 10
10. 28
11. -1
12. [ "c" ]

For people who work with JavaScript more than occasionally, I would score as follows:

(correct answers: score)
5 - 7: KNOWLEDGEABLE
8 - 10: EXPERT
11: SAVANT
12: MASTER OF THE UNIVERSE

A few quick comments.

The answer to No. 2 is the same for JavaScript as for Java (or any other language that uses IEEE 754 floating point numbers), and it's one reason why you shouldn't use floating point arithmetic in any serious application involving monetary values. Floating-point addition is not associative. Neither is float multiplication. There's an interesting overview here.
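You can see both the problem and the usual workaround right in the console:

    (0.1 + 0.2) + 0.3;   // 0.6000000000000001
    0.1 + (0.2 + 0.3);   // 0.6

    // Which is why money is better handled as integer cents:
    (10 + 20) + 30;      // 60, exactly, no matter how you group it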

No. 6: In an arithmetic expression involving multiplication, division, or subtraction, if the expression contains one or more strings, the interpreter will try to cast the strings to numbers first. With the + operator, however, it's the other way around: if either operand is a string, the other operand is cast to a string and the result is concatenation.
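In other words (plain console evaluations):

    "5" * "2";   // 10   (both strings coerced to numbers)
    "5" - 2;     // 3    (string coerced to a number)
    "5" + 2;     // "52" (+ sees a string operand, so it concatenates)
    5 + 2;       // 7    (no strings involved: ordinary addition)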

No. 7: The tokenizer in JavaScript (as in Java and C) is greedy; it grabs the longest token it can (so it reads ++ before it considers +), which means a+++b parses as (a++) + b. What you've got here is "a, post-incremented, plus b," not "a plus pre-incremented b."

No. 9: toString( ) takes a numeric argument (optionally, of course). An argument of "16" means base-16, hence the returned string is a hex representation of 16, which is "10." If you write .toString(2), you get a binary representation of the number, etc.

No. 10: 016 is octal notation for 14 decimal. Interestingly, though, the interpreter will treat "016" (in string form) as base-ten if you multiply it by one.
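A few more console evaluations along the same lines (note that legacy octal literals like 016 are only legal in non-strict code):

    (16).toString(16);    // "10"     (hex)
    (16).toString(2);     // "10000"  (binary)
    (255).toString(16);   // "ff"

    016 * 2;              // 28, because 016 is octal for 14 decimal
    "016" * 1;            // 16, because the string is parsed as base ten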

Don't feel bad if you didn't do well on this quiz, because almost every question was a trick question (obviously), and let's face it, trick questions suck. By the same token, if you did well on a test that sucks, don't pat yourself on the back too hard. It just means you're a little bit geekier than any human being probably should be.

Automatic Update Hell Must End

I recently stopped using anti-virus software. People think I'm crazy. But I'm not. It's about getting out of Automatic Update Hell.

And BTW, it's been a year now and my machines (Win XP and Vista) haven't been overtaken by the Bogeyman, because I don't practice the PC equivalent of unsafe sex. I'm not in the habit of opening e-mail attachments from people I don't know, clicking links in e-mails that have "Viagra" in the subject line, etc. I don't download games, wallpapers, screensavers, utilities I haven't heard of, crackz, hackz, or any of the other stupid-idiotware that can get you in trouble. I sure as hell don't run Internet Exploder, and guess what? I have a firewall, and a brain, and I know how to use them. (So Symantec, read my finger.)

Uninstalling Norton anti-virus software is extremely difficult, it turns out -- more difficult than uninstalling the malware it supposedly protects you against. But once it's gone from your machine, the hard-disk thrashing stops, the sudden CPU-spiking disappears, and the telltale sluggishness that accompanies a background download of the latest patch(es) vanishes.

Also, without a virus-scan of every document you open, the whole machine feels faster. Things like EditLive! and other applets load twice as fast. Zip archives open faster, etc. Sure, you can achieve this by turning off Norton's file-scan feature. But that's my point: Why are you buying software that you turn off?

So merely by getting rid of pointless anti-virus lockin-ware, I've scored a useful speedup and probably doubled the life of my hard drive. But I'm not totally out of Hell yet. There's still Microsoft to deal with.

Turning off Automatic Updates is one of the best things I've ever done to achieve better machine performance. Installing updates from Microsoft has always brought some kind of speed hit, somewhere, and sometimes brings new annoyances (new security dialogs that have to be turned off).

I'm very glad to be rid of Automatic Updates.

Sun's automatic Java updates are another painful annoyance. Again, though, you can turn this off fairly easily. But every time you manually upgrade your JDK, it seems Sun re-enables automatic Java updates. So you end up turning them off again.

But even after you get rid of Norton lockin-ware, disable Windows updates, and shut Sun the hell up, you're still not out of Hell yet, because there's yet another offender on your machine, a stealth daemon from Hades that sucks bandwidth needlessly while putting your hard drive through a rigorous TTD (test-to-destruction) regimen. I am talking, of course, about Adobe and its pernicious suite of updaters.



There's a famous line in Ace Ventura that makes me smile every time I hear it: "Dan Marino should die of gonorrhea and rot in hell." I would like to repurpose this statement somehow, except that corporations can't die of gonorrhea (any more than anyone else can), so Adobe, all I can say is: enough with the updates.

I can't think of a worse impediment to the widespread adoption of Adobe AIR than this:



I've seen this dialog far too many times this year already. It makes me want to empty a full clip of copper-jacketed hollowpoints into my machine. What is so defective about AIR that I have to update it every other time I fire up Yammer? (For that matter, what's so hopelessly broken about Yammer that I have to update it five times a week?)

Enough ranting. All rants should end at some point, and be followed by a constructive proposal aimed at solving the problem(s) in question.

So let us ask: What, if anything, should software vendors do about all this?

I can suggest a few things.

First, software updates should be opt-in, never opt-out.

Second: A vendor should never silently turn automatic updates back on after the user has turned them off.

Third: Give me some granularity as to what type of updates I want to receive. There are three basic types of updates: Security patches, bug fixes, and enhancements. I rarely want all three. Within those three, there are (or should be) several levels of criticality to choose from. I may want security fixes that are critical, but not those that are merely nice for grandma to have. Let me choose.

Fourth: Don't ever, ever make a user reboot the machine.

Fifth: Let me have the option, stupid as it sounds, of checking for updates at an interval of my choosing. Not just "daily, weekly, or monthly." Let me actually specify a date (e.g., December 25) on which to check for updates and receive them all in a huge, bandwidth-choking download that utterly shuts me out of the machine for 24 hours instead of torturing me daily, throughout the year, with paper cuts.

Sixth: Write better software. Don't let so many security vulnerabilities go into distribution in the first place. Open-source as many pieces of your code as possible so the community can find security flaws before ordinary users do. Don't make the user do your security-QA.

Microsoft, Sun, (Oracle), Adobe, are you listening?

Appliance-Oriented Architecture

I hate industry-jargon buzzwords, but I think it's not too early to promote a new one for 2010. I'm suggesting Appliance-Oriented Architecture (AOA). And yes, I think it just may be the Next Big Thing in IT (assuming IT isn't dead).

The big "Aha!" moment for me on this came when I was thinking about the Oracle Sun deal and realized that the true consequence of it was (is) that Oracle now enters the hardware biz, after being a pure software company since the beginning.

What does being a hardware company do for Oracle? It allows the company to create special-purpose hardware-software rollups known, colloquially, as appliances.

The marketing implications are far-reaching, of course, but consider the technical implications: Oracle gets to control the tuning and optimization of its software straight down to the bare metal. (And we know Oracle likes control.) Performance takes a huge jump when you can optimize for the hardware -- and for the OS. Let us not forget, Sun is an operating system company as well.

The possible synergies for Oracle of having direct control over hardware, OS, and software as a unified package are enormous.

What would Oracle put inside an appliance? How about a database-warehouse stack that "just works," for starters. But let's not limit our thinking to databases. Remember, Oracle is also in the search business (with Oracle Secure Enterprise Search). Oracle gains the potential to introduce a search appliance to go head-to-head with Google. Oracle is also an ECM player. Let your imagination run wild.

In this context, the Sun deal is understandable as an Oracle response to the soon-to-be-previewed HP-Microsoft "Midas" appliance. Which, again, I see jumpstarting a move to Appliance-Oriented Architecture.

Like all buzzwords, AOA encapsulates concepts and methodologies that are already in wide practice today (but haven't been rolled up, semantically, under one catchphrase). So let's not get carried away over-analyzing the term itself. The IT fantasy of plug-and-play black boxes that can be gridded together into an instant solution to hard problems is going to remain just that: a fantasy. AOA doesn't change it.

I do think, though, that the success of the Google appliance(s) has proven the existence of an untapped market for enterprise blackboxware, a market whose potential will be exploited in new and exciting ways by Oracle, Microsoft, HP, and others, going forward. We'll see BI-in-a-box, search-in-a-box, and just-about-everything-else-in-a-box, possibly including boxes in a box (think search-on-a-blade, BI-on-a-blade, and so on).

Put it on your calendar: Q1, 2010. AOA becomes real.

Where are the RIA "killer apps"?

I've found, over the years, that in almost every successful field of technology there's a "killer app," a category-leader so strong as to be universally understood as the archetype of success in a given domain. Conversely, when a technology lacks a killer app, it tends to be very telling. It says something about the future of that technology.

Take Java, for example. When Java first arrived, there were high hopes for its success based on the "write once, run anywhere" mantra. Applets started showing up all over the Web. But on the desktop, no killer apps. And even in the applet world, no killer apps, just a bunch of little games and academic demos. (Java's "killer app," the thing that would ensure its place in history, didn't really arrive until 1999: something called J2EE.)

So when a new technology-space like RIA comes along, with contenders having fancy names like AIR, Silverlight, or JavaFX, I sit back and wait for a "killer app" to emerge, signalling the appearance of a likely winner (or at least a contender with a future ahead of it) in the multi-way battle.

JavaFX was late to the party, so I continue to give it the benefit of the doubt, but it looks stillborn to me at this point (and I think the Oracle acquisition of Sun may delay progress with JavaFX until far past the point where it can regain ground against Adobe Flex/AIR). One thing we can all agree on is that there is no killer JavaFX app. In fact I can't even name a JavaFX app. Not a single one. "But it's too early," someone will say. To the contrary, my friend: It may be too late.

Silverlight has the full mass and motive power of the Microsoft juggernaut behind it, and for that reason we can't dismiss it (yet). But again, where are the killer apps? Shouldn't we have seen one by now? Shouldn't it be possible to walk up behind someone at any gathering of programmers, tap a total stranger on the shoulder, and get an immediate answer to the question: "Can you name a really cool Silverlight app?"

Yes, it's early.

And then there's Adobe with its shiny new AIR technology, built atop half-open, half-closed Flash and Flex infrastructure, an alluring platform with the not inconsiderable advantage of being built, largely, on ActionScript (hormone-enriched JavaScript). It's fun, it's pretty, it's new. But where are the killer apps?

Actually, there's a class of killer apps built around AIR now. (Maybe you've noticed?) It's called the Twitter Client. TweetDeck, Twhirl, AlertThingy, Toro, the list goes on and on. (Many of these are not just Twitter clients, of course. Some are perhaps better called social clients, since they interact with other services besides Twitter.)

Does this mean Adobe has won the RIA wars? No, of course not. But it sure has a nice head start.

What we need to see now is whether additional killer-app categories start to emerge around AIR. If AIR progresses beyond the point of supporting fun little SoCo apps, things could get very interesting (for users of cell phones, palm devices, PCs, netbooks, laptops, readers, and who-knows-what-else) in a hurry.

If not -- if AIR remains the province of waist-slimming Twitter clients and zero-calorie RSS feed readers -- then we may have yet another evolutionary dead end along the lines of (dare I say it?) Java Man.

Time will tell.

If Oracle-Sun is a Cloud Play, what was Ellison ranting about?


It seems a lot of pundits out there think the Oracle Sun acquisition is (to a large degree) about Oracle wanting to establish more of a foothold in the cloud-computing biz. I won't disagree with that.

What's bizarre, though, is that it wasn't that long ago (in fact, September 2008) that Larry Ellison drew some flak for his public rant on cloud computing, in which he called cloud computing "complete gibberish." The YouTube audio track of it is here. (I wrote a blog for CMS Watch about some of this back in November.)

Here's some of what Ellison had to say last September:
The interesting thing about cloud computing is that we’ve redefined cloud computing to include everything that we already do. I can’t think of anything that isn’t cloud computing with all of these announcements. The computer industry is the only industry that is more fashion-driven than women’s fashion. Maybe I’m an idiot, but I have no idea what anyone is talking about. What is it? It’s complete gibberish. It’s insane. When is this idiocy going to stop?
You tell us, Larry. You tell us.

Rating WCM and ECM vendor web sites for page loadability using YSlow

It occurred to me the other day that the people who sell Web Content Management System software are (supposedly) experts in Web technology; and presumably they use their own software to build their own corporate Web sites (following the well-known Dogfood Pattern); and therefore their home pages ought to be pretty good examples of what it means to build a highly functional, performant Web page that downloads quickly and displays nicely.

To get a handle on this, I decided to use YSlow to evaluate the "loadability" of various vendors' home pages. If you haven't heard about it before, YSlow is a Firefox plug-in (or "add-on," I guess) that analyzes web pages and tells you why they're slow based on Yahoo's rules for high performance web sites. (Note that to use YSlow, you first need to install Firebug, a highly useful add-on in its own right. Every Firefox user should have this add-on. It's a terrific tool.)

It's important to understand what YSlow is not. It is not primarily a profiling tool (in my opinion, at least). The point of YSlow isn't to measure page load-times. It's to score pages based on a static analysis of their design-for-loadability. There are certain well-known best practices for making pages load faster. YSlow can look at a page and tell if those best-practices are being followed, and to what degree.

YSlow assigns letter grades (A thru F) for a page in each of 13 categories of best-practice. I decided to run YSlow against the home pages of 35 well-known WCM and/or ECM vendors, then calculate a Grade Point Average. The scores are posted below.

Please note that the full results, with a detailed breakout of exactly how each vendor did in each of the 13 YSlow categories, are available in a (free) 121-page report that I put together over the weekend. The 1-megabyte PDF can be downloaded here. It contains some important caveats about interpreting the results, and also talks about methodology.
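For what it's worth, the arithmetic behind the GPA column is nothing exotic. Here's a sketch of the general idea, assuming a conventional 4-point scale (A=4 through F=0) averaged over the graded categories; the PDF spells out the exact method used:

    // Turn a set of YSlow letter grades into a grade point average,
    // assuming a conventional 4-point scale (an assumption; see the
    // PDF for the exact methodology behind the numbers below).
    function gradePointAverage(grades) {
        var points = { A: 4, B: 3, C: 2, D: 1, F: 0 };
        var total = 0;
        for (var i = 0; i < grades.length; i++) {
            total += points[grades[i]];
        }
        return total / grades.length;
    }

    // e.g., a hypothetical set of 13 category grades:
    gradePointAverage(["A","A","B","C","F","A","B","C","A","D","B","A","C"]);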

VENDOR                    GPA
Alfresco                  2.27
Alterian                  2.18
Clickability              2.72
CoreMedia                 3.09
CrownPeak                 2.90
Day                       3.09
Drupal                    3.18
Ektron                    2.63
EMC                       1.81
Enonic                    3.36
EPiServer                 2.18
Escenic                   2.72
eZ                        2.63
FatWire                   2.18
FirstSpirit (e-Spirit)    3.27
Hannon Hill               3.18
Hot Banana (Lyris)        2.18
Ingeniux                  1.90
Interwoven.com            1.81
Joomla!                   2.81
Magnolia                  3.27
Nstein                    2.27
Nuxeo                     2.09
OpenCMS                   2.18
Oracle                    3.18
Open Text                 2.27
PaperThin                 2.72
Percussion                1.36
Plone                     3.09
Refresh Software          2.54
Sitecore                  3.00
TerminalFour              2.27
Tridion                   2.00
TYPO3                     2.90
Vignette                  1.81

Once again, I urge you not to draw any conclusions before reading the PDF (which contains detailed information about how these numbers were obtained). The 121-page document can be downloaded here. (Note: The PDF does contain bookmarks for easy navigation. They may not be showing when you first open the file. Use Control-B to toggle bookmark-navtree visibility.)

Maybe others can undertake similar sorts of testing (I'd particularly like to see some actual timing results, comparing page load times for the various vendor pages, although this can be notoriously tricky to set up). If so, let me know.

Does it mean a whole lot? Not really. I think it just means some vendors have more of an opportunity than others, perhaps, to improve the performance of their home pages. But a lot of factors are at play any time you talk about Web site performance, obviously, and therefore it's not really fair to form any kind of final judgment based on the scores shown here. Use it as a starting point for further discussion, perhaps.

Making Infectious Memes Fashionable



A parcel came today bearing a T-shirt. I nearly fell over laughing! I'm still laughing.

Thank you, Adriaan Bloem, for this wonderful gift.

And for those who want to know exactly what the inside joke is: please proceed directly to Jon's blog post here. (And see my comment immediately following it.)

What the heck is a meme anyway?

In recent weeks, I've been accused of something no one has ever accused me of before: creating a meme. The charge seems weak to me, though, based on my understanding of "meme." But let's review.

On 26 February 2009, I wrote a blog for CMS Watch called "A Reality Checklist for Vendors" in which I enumerated 15 things that a CMS software vendor (but really, any software vendor) needs to do these days in order to stay relevant. Things like posting a free downloadable eval version of your software on your company web site; eating your own dogfood (the vendor should use its own software to create its website); and having one pricesheet for all customers (we don't quote ten different prices to ten different customers). Simple things, basic sanity-check items. For the full list, go here.

Not long thereafter, on 17 March, Michael Marth wrote a blog at dev.day.com ("Introducing the CMS Vendor Meme") giving Day Software's answers to all 15 checklist items in my Reality Checklist. Not only that, Michael created a scoring system, assigned scores to Day's answers, and challenged ("tagged") several other vendors to respond in like manner.

This set off a flood of responses from vendors (including many vendors who weren't tagged by anyone), and the results are still coming in. The situation is well-captured by Jon Marks in his excellent series of blog posts here, where (incidentally) he calls it a Celebrity CMS Deathmatch.

As a result of all this, I've been accused of starting a meme, which makes me want to understand "meme" better. So I've done a little digging and found the internet definitions of meme rather unsatisfying. They seem sloppy, semantically speaking. Maybe that's just in the nature of memes.

Some definitions equate meme with slang. (But in that case, why not just stick with slang?) Other definitions point in the direction of slang with a pop-culture theme. Or anything on the internet that has a catchy phrase associated with it. It gets even sloppier: If you go to http://knowyourmeme.com, you find things like Yo Dawg, Advice Dog, and I Like Turtles. (But oddly, not WTF?)

Some people feel that meme gives memetics a bad name.

What I've decided is that it's easier for me to understand meme in terms of its characteristics rather than a declarative definition. From what I can tell, a meme has characteristics of:

1. Theme: A meme captures a theme
2. Originality: in a new way, with new nuancing
3. Compositionality: New nuance is achieved by combining other terms and themes.
4. Emergent lexical cohesion (tm): Through suitable juxtaposition of imagery, slang, conceptual archetypes, etc., it becomes apparent to the first-time listener that a familiar notion is encapsulated in the meme. That is, a person hearing it for the first time can synthesize the intended meaning, even if the meaning is unexpected.
5. Transmissibility: The meme is easily communicated from one person to another.
6. Contagion: A meme usually spreads. If it didn't, it wouldn't enter the common lexicon.

That's still not a satisfying definition of "meme," to me, but it captures a lot more of it than the definitions I've seen floating around on the Web.

So I guess maybe I am guilty of creating a meme, if "We Get It" combined with "checklist" combined with "CMS Vendors" produces a meme. But it seems weakly reachable somehow.

"10 Things About {X}" seems to qualify as a meme, though.

Tagging someone to get them to participate in a meme-off seems not a meme but a pattern. But then again, maybe patterns are memes.

And so, to finish off this post, I invite commenters to answer the following question: How many memes can you find in this blog post? I see quite a few. But I am interested in knowing what others see in terms of memes.

Also, a challenge (extra points, and attribution, to anyone who answers this correctly). Explain the following meme:

厚黑學 厚黑学

(It's one of my favorites.)

Personality Disorder Information

An exceptionally good test. The results of any such test are never precise, of course, because people tend to exaggerate or downplay certain traits of theirs. Anyway, what really drew my attention is the following brief personality disorder information:

Paranoid
Paranoid personality disorder is characterized by a distrust of others and a constant suspicion that people around you have sinister motives. People with this disorder tend to have excessive trust in their own knowledge and abilities and usually avoid close relationships with others. They search for hidden meanings in everything and read hostile intentions into the actions of others. They are quick to challenge the loyalties of friends and loved ones and often appear cold and distant to others. They usually shift blame to others and tend to carry long grudges.

Schizoid
People with schizoid personality disorder avoid relationships and do not show much emotion. They genuinely prefer to be alone and do not secretly wish for popularity. They tend to seek jobs that require little social contact. Their social skills are often weak and they do not show a need for attention or acceptance. They are perceived as humorless and distant and often are termed "loners."

Schizotypal
Many believe that schizotypal personality disorder represents mild schizophrenia. The disorder is characterized by odd forms of thinking and perceiving, and individuals with this disorder often seek isolation from others. They sometimes believe they have extrasensory abilities or that unrelated events relate to them in some important way. They generally engage in eccentric behavior and have difficulty concentrating for long periods of time. Their speech is often overly elaborate and difficult to follow.

Antisocial
A common misconception is that antisocial personality disorder refers to people who have poor social skills. The opposite is often the case. Instead, antisocial personality disorder is characterized by a lack of conscience. People with this disorder are prone to criminal behavior, believing that their victims are weak and deserving of being taken advantage of. They tend to lie and steal. Often, they are careless with money and take action without thinking about consequences. They are often aggressive and are much more concerned with their own needs than the needs of others.

Borderline
Borderline personality disorder is characterized by mood instability and poor self-image. People with this disorder are prone to constant mood swings and bouts of anger. Often, they will take their anger out on themselves, causing themselves injury. Suicidal threats and actions are not uncommon. They think in very black and white terms and often form intense, conflict-ridden relationships. They are quick to anger when their expectations are not met.

Histrionic
People with histrionic personality disorder are constant attention seekers. They need to be the center of attention all the time, often interrupting others in order to dominate the conversation. They use grandiose language to describe everyday events and seek constant praise. They may dress provocatively or exaggerate illnesses in order to gain attention. They also tend to exaggerate friendships and relationships, believing that everyone loves them. They are often manipulative.

Narcissistic
Narcissistic personality disorder is characterized by self-centeredness. Like histrionic disorder, people with this disorder seek attention and praise. They exaggerate their achievements, expecting others to recognize them as being superior. They tend to be choosy about picking friends, since they believe that not just anyone is worthy of being their friend. They tend to make good first impressions, yet have difficulty maintaining long-lasting relationships. They are generally uninterested in the feelings of others and may take advantage of them.

Avoidant
Avoidant personality disorder is characterized by extreme social anxiety. People with this disorder often feel inadequate, avoid social situations, and seek out jobs with little contact with others. They are fearful of being rejected and worry about embarrassing themselves in front of others. They exaggerate the potential difficulties of new situations to rationalize avoiding them. Often, they will create fantasy worlds to substitute for the real one. Unlike schizoid personality disorder, avoidant people yearn for social relations yet feel they are unable to obtain them. They are frequently depressed and have low self-confidence.

Dependent
Dependent personality disorder is characterized by a need to be taken care of. People with this disorder tend to cling to people and fear losing them. They may become suicidal when a break-up is imminent. They tend to let others make important decisions for them and often jump from relationship to relationship. They often remain in abusive relationships. They are overly sensitive to disapproval. They often feel helpless and depressed.

Obsessive-Compulsive
Obsessive-Compulsive personality disorder is similar to obsessive-compulsive anxiety disorder. People with this disorder are overly focused on orderliness and perfection. Their need to do everything "right" often interferes with their productivity. They tend to get caught up in the details and miss the bigger picture. They set unreasonably high standards for themselves and others, and tend to be very critical of others when they do not live up to these high standards. They avoid working in teams, believing others to be too careless or incompetent. They avoid making decisions because they fear making mistakes and are rarely generous with their time or money. They often have difficulty expressing emotion.

Here's the source of this info and I hope you've found it interesting.

CMIS: a standard in search of scenarios?

I've been looking all over the place for use-cases and user stories that illustrate the key requirements for CMIS (Content Management Interoperability Services, soon to be an OASIS-blessed standard API for content management system interoperability). As far as I can tell, CMIS is being developed without a proper set of real-world use-cases. I prefer "user narratives" over "use cases" because the latter often is nothing more than a phrase or two, whereas a narrative is just what it sounds like: A sentence-by-sentence explanation of a chain of events. A user narrative captures intent, actors, actions, results, consequences.

I'm finding none of that in CMIS, except for four rather trivial use-case descriptions in http://xml.coverpages.org/CMIS-v05-Appendices.pdf.

I gather from reading some of the Technical Committee's minutes that people have taken "develop use cases" as action items. That's good.

Going ahead with a spec without first understanding the use-cases leads to things like the CMIS "policy object," which I mentioned once before as something that should be (and I think will be) dropped from CMIS.

"Policy" should be dropped for two reasons. One is that it slows things down. If you want to get a standard out fast, don't make it bigger than it needs to be. Second, it's not at all clear what "policy" means. Various people have said it is basically "access control," whereas at least one CMIS expert has said that the policy object can support retention policies. Those are two quite different things.

In any case, CMIS-Policy belongs in its own separate standards effort (if indeed it has any need to exist; and for that, we need user narratives). It's out-of-band here, IMHO. It's not core.

I'm sure people involved with CMIS are very busy drawing up scenarios and user stories, and we'll hear more about it very shortly. Personally, I'd like to see some detailed scenarios around the manipulation of compound documents. I have some concerns there, but that discussion will have to wait for another time.

It's exciting to watch CMIS come together, in any case. A year from now, we may be seeing some very interesting content-management (and search) mashups. I wish I knew what they're going to look like. It does set the imagination spinning, though. No question about that.

Does workflow always have to suck?

I evaluate CMS and DAM systems for a living, and one thing I keep coming back to is the fact that so very few of these expensive systems "do workflow" well. I think part of this may be because there are no industry-accepted standards around the kind of workflow I'm talking about (thus, every vendor reinvents the wheel). The closest thing to a standard is BPEL4People, which is an extension of BPEL (and thus too heavy, IMHO). There needs to be a minimalist standard around this domain space, something dedicated to human-facing interactions, supporting process-facing tasks optionally, not the other way around.

I think the other reason so many Web CMS and DAM vendors fail to do a nice job with workflow is that it's just plain hard. Light-duty taskflow or workflow (or "pageflow," as we called it where I once worked) is deceptively difficult to implement, especially if there's a requirement for good UIs around administration, design, and (re)configuration of workflows. And especially if there's a requirement for hot failover (being able to deal with STONITH and other messinesses). And especially if you need to support cyclic (reentrant) flows. And especially if you want to offer good extensibility APIs. And, and, and.

Most systems that support approval workflows (of the type seen in web publishing scenarios) get the basics right, but after that, not. Typically, though, the customer hasn't really thought out his or her use-cases very well before buying a system. And so begins a long process of design, test, rollout, fail, back to the drawing board.

Setting up workflows typically means developers need to touch XML, code, properties files, templates, and/or miscellaneous artifacts, often editing them by hand (since it's unusual to get good tools for this). You may be able to draw a basic flow on a canvas (although even that isn't done well, by many vendors), but applying timeout and retry policies, and handlers for exceptional conditions, may involve a good bit of "dirty fingernails" work. When you're all done, the customer thinks "Okay, done. This should last us for all eternity. Glad we never have to do that again!" But very soon, it becomes clear that a number of corner-cases that were not anticipated at the design stage need to be handled better. So it's surgery time again. Back to messing with a bunch of artifacts and their cobweb of dependencies, then finding a way to test it all, etc.

Administration is often not well supported. How do you run a report for all workflows of Type X in the past month that either finished abnormally, didn't finish at all, or took too long? (Don't tell me "look in the logs.") What UI tools do you have for simply finding an orphaned workflow, or killing an in-progress workflow instance? How do you know if bouncing one machine (in a cluster) left one or more workflows (of potentially hundreds in progress) in an inconsistent state?

And then there's rights administration. When Sally goes on vacation, how does she give her subordinate, Bob, her "rights" in the system for a workflow of Type A but not for workflows of Type B?

The issues get sticky in a hurry. But I do occasionally see workflow systems that combine decent functionality with a usable graphic designer, or with good administrative tools. But it's hard to get a robust engine, a good feature set, good visual design tools, and good administrative tools all in one package. There are always warts and holes.

So I think the right way to look at all this, if you're a vendor, is that this presents a ripe opportunity. If you're a CMS or DAM vendor looking to differentiate, provide a superior workflow solution.

But there's also an opportunity in the market, right now (IMHO), for someone to come up with a fully productized lightweight workflow product with decent design, development, and admin tools, and easy extensibility (of course), that can be bolted onto a Web CMS or DAM system with little effort, so that customers who are using bespoke WF systems (of the kind that are so common in this industry) can move over to "real" workflow. And get real work done.

I wonder why products like this aren't more common? Again: Probably because it's hard. But again: This is an opportunity . . . for someone.

Never, Ever Stop Taking Medication Suddenly

Bipolar disorder is no joke. It can make you feel really low one minute and really up and happy the next. There are medications your doctor will prescribe if you are diagnosed with bipolar disorder that will help elevate your moods. With bipolar disorder your moods will fluctuate. One minute you can feel like you are Queen of the freaking world and all of a sudden you come crashing down to feel like you are worthless. This was how I felt before meds. I struggled with different medications until I found the right one. The only problem is that my body seems to get used to the meds after a while and they quit working. Then I have to try something else and go through it all over again. It is pure hell!

I was on Seroquel, Zoloft, and Klonopin for two years until I got tired of being tired all the time from the Seroquel. I quit taking it all of a sudden and here is what happened:

Why You Should Never Stop Taking Medication

If you read that you will know I became suicidal when I quit taking the Seroquel. The Zoloft wasn't working for me on its own. I finally had a talk with my doctor and she put me on Abilify with the Seroquel and the Klonopin and they are working fine together.

Please, if you are on meds for depression, bipolar disorder, or any other mental illness, never, ever stop taking your medication without first speaking to your doctor. If you read the blog post from the link above, you will understand why.

Should standards be copyrighted?

In the last few days I've begun to sink my teeth into the CMIS (Content Management Interoperability Services) standards documents a little bit. Digesting it all is going to take a while. The docs are not too big (yet), but I'm a slow reader.

One thing that's a little weird to me is that the drafts of the standard (available at the above link) carry a Copyright notice on behalf of EMC, IBM, and Microsoft.

I find this peculiar for a standards document that is supposed to be the collaborative work of numerous industry players (including Alfresco, Oracle, Open Text, and others). I'm sure it just means that the particular instance-documents comprising the draft of the standard were written by people from EMC, IBM, and Microsoft, and the companies in question decided (based on some sort of policy emanating from Legal) to assert ownership over the instance-docs.

Why have a copyright at all, though? This is going to be an industry standard, not an EMC standard, or an IBM or Microsoft standard. Copyright means you and I and others can't reproduce the document without permission. (It does say "All rights reserved.")

Someone will say "Well, this is the way IETF does it," or "This is the way [XYZ] does it," which of course is silly. That's not a defense. IETF shouldn't copyright anything either.

What does copyrighting a standards document achieve? Is it supposed to prevent bastardization of the standard by someone else who tries to publish a different version of it? That's not what copyright does. Copyright does not establish the "sole authoritative source-ness" of a document. It does not say "This is the Truth, this is the one true document defining the Standard." That's the job of the standards body. OASIS decides what the true CMIS standard consists of. And that "truth" can reside in an uncopyrighted work, just as easily as in a copyrighted work.

Putting copyrights on standards just does not make sense to me. It doesn't achieve anything except to inhibit reproduction and dissemination of the primary docs. Which is usually not a goal of the standards process (or shouldn't be). Standards should be widely disseminated. Copyright is designed to defeat that.

A nit, perhaps. But for me, not.

Coming to grips with CMIS

I'm slowly but surely coming to grips with CMIS (Content Management Interoperability Services), which will soon be the lingua franca of CRUD in the content management world, and maybe some other worlds as well.

After reading some of the CMIS draft docs and watching a couple of EMC's CMIS videos at YouTube, I'm starting to grok the basic abstractions. Here are a few first impressions. I offer these impressions as constructive criticism, BTW, not pot-shots. I want to see CMIS succeed. Which also means I want to see it done right.

The v0.5 draft doc for the Domain Model says there are four top-level ("first class", root) object types: Document, Folder, Relationship, and Policy. (Support for the Policy type is optional. So there are basically three root types.)

Already I question whether there shouldn't perhaps be a top-level object type ("CMISObject") that everything inherits from, rather than four root objects, since presumably all four basic object types will share at least a few characteristics in common. But maybe not.

Page 16 of the Part I doc says that Administration is out of scope for CMIS. But later on, we learn that "A policy object represents an administrative policy that can be enforced by a repository." We also find applyPolicy and removePolicy operations, which are clearly administrative in intent.

Remarkably, Policy objects can be manipulated through standard CMIS CRUD operations but do not have a content stream and are not versionable. However, they "may be" fileable, queryable, or controllable. Why are we treating this object as a file ("fileable") but not allowing it to be versionable? And why are we pretending it doesn't have a content stream? And why are we saying "may be"? This is too much fuzziness, it seems to me.

Right now, the way CMIS Part I is worded, a "policy" can be anything. One might as well call it Rules. Or Aspects. Or OtherStuff. The word Policy has a specific connotation, though. Where I come from, it implies things like compliance and governance, things that MAY intersect role constraints, separation of duties, RBAC, and possibly a lot more; and yes, these concepts do come up in content management, in the context of workflow. But it seems to me that policy, by any conventional definition, is rather far afield from where CMIS should be concentrating right now. If "policy" means something else here, let's have a good definition of it and let's hear the argument for why it should be exposed to client apps.

I say drop the Policy object type entirely. It's baggage. Keep the spec light.

I like the idea of having Relationships as a top-level object type. The notion here is that you can designate a source object and a target object that are related in some way the two objects themselves don't need to know about. I like it; it feels suitably abstract. And it models a construct that's used in all sorts of ways in content management systems today.

The Folder object type, OTOH, is too concrete for my tastes. We need to stop thinking in terms of "folder" (which is a playful non-geek term for "directory", designed to make file systems understandable by people who know about manila folders), and think more abstractly. What notion(s) are we really trying to encapsulate with the object type currently dubbed "Folder"? At first blush, it would seem as though navigability (navigational axes) constitute(s) the core notion, but the possible graphs allowed by Folder do not match popular navigational notions inherent in file-system folders (at least on Windows). In other words, the many-to-many parent-child mappings allowed by CMIS's Folders destroy the conventional "folder" metaphor, unless you're a computer science geek, in which case you don't think in terms of folders anyway.

I think what "Folder" should try to encapsulate is a Collection of Relationships. A navigation hierarchy (whether treelike or not) is just one possible subclass of such a collection. We cheat ourselves by trying to emulate, at the outset, some parochial notion of "folders" based on a particular type of graph. We need Folder to be more general. It is a Collection of Relationships. We already have Relationships, so why not take the opportunity to reuse them here?
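To make that concrete, here's a back-of-the-napkin sketch (plain JavaScript, my own made-up names, emphatically not CMIS syntax) of the kind of generalization I have in mind:

    // A Relationship just ties a source object to a target object,
    // with a type saying what the tie means.
    function Relationship(type, sourceId, targetId) {
        this.type = type;          // e.g. "child", "rendition", "citation"
        this.sourceId = sourceId;
        this.targetId = targetId;
    }

    // A "Folder" is then nothing more than a named Collection of
    // Relationships; a strict tree hierarchy is just one constraint
    // you could choose to layer on top.
    function Folder(name) {
        this.name = name;
        this.relationships = [];
    }
    Folder.prototype.add = function (rel) {
        this.relationships.push(rel);
    };
    Folder.prototype.children = function () {
        var ids = [];
        for (var i = 0; i < this.relationships.length; i++) {
            if (this.relationships[i].type === "child") {
                ids.push(this.relationships[i].targetId);
            }
        }
        return ids;
    };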

I'd like to see more discussion about Folders, but I fear that the rush to get CMIS blessed by OASIS may have already precluded further discussion of this important issue. I hope I'm wrong.

Interesting stuff, though, this CMIS. And wow, do I still have a lot of grokking to do . . .

10 things about me

Since today is Easter Sunday and I can be pretty sure no one in the western hemisphere will be reading this blog today, I thought maybe it's as good a day as any to write a near-content-free "off topic" blog. So, 10 things about me. Here goes:

1. I grew up in Los Angeles.
Things I remember from childhood: The ground shudders very slightly whenever a nuclear bomb goes off at the nearby Nevada test site (250 miles away). I also remember sonic booms happening practically daily. (Edwards AFB was 112 miles distant. The X-planes flew almost every day.) I once saw telephone lines whirl like jump-ropes during an earthquake. This was a long time ago. Think Jailhouse Rock.

2. Aviation has been a big part of my life.
I've been a pilot (ASMEL/Instrument) for a long time and made a living writing about it for many years. I've lost around 30% of the hearing in one ear due to so much time spent in noisy cockpits.

3. I have degrees in biology and microbiology that I've never used.
University of California, Irvine (B.S.), U.C. Davis (M.A.)

4. Money means little to me.
Which is why I have none.

5. I started a monthly publication in 1979 that is still in publication today.
Through that desktop publishing business and one other, I learned a lot about direct marketing. (I've designed, written, produced, and tracked direct-mail campaigns encompassing millions of pieces of "junk mail." That all stopped when the Internet came along, of course.) Thanks to DTP, I have been able to spend most of my career self-employed.

6. The most fun thing I've ever done as a big-company employee was serve on Novell's Inventions Committee.
I got to examine, and vote on, patent proposals submitted from Novell engineers (and some non-engineers) all over the world. The other committee members, mostly Distinguished Engineers, were a joy to work with. I learned a lot about software patents and how they figure into corporate strategy. I also picked up a lot of technology knowledge.

7. I'm a slow reader.
In short bursts, I can go as slow as 30 words per minute.

8. I'm a coffee slut.
I'll drink any coffee of any kind, anywhere, any time; the blacker the better.

9. I'm mildly agoraphobic and like to lock myself in hotel rooms.
It takes incredible energy for me to feel like coming out of a hotel room once I'm in it. Unless, of course, coffee is involved.

10. My secret passion is pastel portraiture.

I don't want anyone to feel obligated to do the "10 Things" thing if they don't want to, but if I could nominate (tag) people whom I'd like to see do this, it would be the wonderful people on my Blogroll. Especially Irina, Jon, Julian, Lee, and Pie. Anyone care to step forward? You're it.

The principle of Last Responsible Moment

A military officer who was about to retire once reportedly said: "The most important thing I did in my career was to teach young leaders that whenever they saw a threat, their first job was to determine the timebox for their response. Their second job was to hold off making a decision until the end of the timebox, so that they could make it based on the best possible data."

This is an illustration of a principle that I think is (sadly) underutilized not only in R&D circles but in project planning generally, namely the principle of delaying decisions until the "last responsible moment" (championed by the Poppendiecks and others). The key intuition is that crucial decisions are best made when as much information as possible has been taken into account.

This is a good tactic when the following criteria are met:

1. Not all requirements for success are known in advance
2. The decision has huge downstream consequences
3. The decision is essentially irreversible

If one or more of the conditions is not met, the tactic of deferring commitment might not gain you anything (and could actually be costly, if it holds up development).

Conventional project planning, as practiced in the enterprise today, tends to overemphasize the need for completeness in requirements-gathering. The completeness fetish leads to the Big Huge Requirements Document (or Big Huge RFP) Syndrome and can introduce unnecessary dependencies and brittleness into implementations.

There's a certain hubris associated with the notion that you can have a complete specification for something. You almost certainly can't. You almost certainly don't know your true needs ahead of rollout. True, some decisions have to be made in the absence of complete data (you don't always have the luxury of waiting for all the information to arrive), and there's the fact that you need to start somewhere even if you know that you "don't know what you're doing" yet. But that's not my real point. My real point is that too often we make decisions ahead of time (that we didn't really have to make, and later realize we shouldn't have made) based on the usually-false assumption that it's possible to know all requirements in advance.

What I'm suggesting, then, is that you reconsider whether it's always a good idea to strive for a complete specification before starting work on something. Accept the fact that you can't know everything in advance. Allow for "emergentcy." (Good decisions are often "emergent" in nature.) Reject the arrogant notion that with proper advance planning, you'll have a project that goes smoothly and results in a usable solution. Most of the time, from what I've seen, it doesn't work out that way. Not at all.

Most time spent in development is wasted

Yesterday, I was thinking about complexity in software systems and I had a kind of "Aha!" moment. It occurred to me that most of the programmer-hours spent in product development are wasted.

We know that something like 30% to 40% (some experts say 45%) of the features in a software system are typically never used, while another 20% are rarely used. Assuming code volume roughly tracks feature count, that means over half the code written for a product (35% plus 20%, taking the middle of those estimates) seldom, if ever, actually executes.

The irony here, if you think about it, is mindblowing. Software companies that are asking employees to turn their PCs off at night to save a few dollars on electricity are wasting huge dumpster-loads of cash every day to create code that'll never execute.

Is it worth creating the excess code? One could argue that it is, because there's always the chance someone will need to execute the unused bits, at some point in time. In fact, if you think about it, there are many things in this life that follow the pattern of "you seldom, if ever, need it, but when you need it, you really need it." Insurance, for example. Should we go through life uninsured just because we think we'll never experience disaster?

Unused software features are not like health insurance, though. They're more like teacup and soda straw insurance. Insurance at the granularity level of teacups is ridiculous (and in the aggregate could get quite expensive). But that's kind of the situation we're in with a lot of large, expensive software systems -- and a fair number of popular desktop programs, too (Photoshop, Acrobat Professional, OpenOffice being just three). You pay for a huge excess of features you'll never use.

There's no magic answer to the problem of "How do you know which features not to write?", obviously. It's a judgment call. But I think it's critical (for vendors, who need to cut costs, and customers, who are looking for less-expensive solutions to problems) to try to address the problem in a meaningful fashion.

What can be done? At least two things.

We know that formal requirements tend (pretty much universally) to err on the side of feature-richness, rather than leanness. It's important to address the problem early in the chain. Don't overspecify requirements. In software companies, product managers and others who drive requirements need to learn to think in terms of core use cases, and stop catering to every customer request for a specialized feature. There's a cost associated with implementing even the smallest new feature. Strive to hit the 80% use-case. Those are the scenarios (and customer needs) you can't afford to ignore.

If you're a software buyer, stop writing gargantuan RFPs. Again, figure out what your core use-cases are. You won't know what your real-world needs are until you've been in production a year. Don't try to anticipate every possible need in advance or insist on full generality. Stick with hard-core business requirements, because your odds of getting that right are small enough as it is.

Another approach to take is to insist on modular design. Factor out minority functionalities in such a way that they can easily be "added back in" later through snap-ins or extra modules. Create a framework. Create services. Then compose an application.
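In Java terms, one way to read "snap-ins" (a sketch of mine, not a prescription; the ExportPlugin name is invented for illustration) is a small interface plus the standard java.util.ServiceLoader lookup that has been in the JDK since Java 6:

import java.util.ServiceLoader;

// Hypothetical extension point for a rarely-needed feature (say, an exotic export format).
interface ExportPlugin {
    String formatName();
    void export(String documentId);
}

class ExportMenu {
    public static void main(String[] args) {
        // The core product discovers whatever plugins happen to be on the classpath
        // (registered via META-INF/services). If none are installed, the feature
        // simply isn't there -- and costs nothing to build, test, or maintain.
        for (ExportPlugin plugin : ServiceLoader.load(ExportPlugin.class)) {
            System.out.println("Export as " + plugin.formatName() + " is available");
        }
    }
}

The core product ships without the edge-case feature; the one customer who genuinely needs it drops a jar on the classpath.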

Product managers: Quit listening to every ridiculous feature request from the field. Don't drive needless features into a product because one customer asked for this or that edge-case to be supported. Make it easy for customers and SEs to build (and share) their own add-ons instead.

Infrequently executed "baggage code" is costly -- for everyone. Let's stop demanding it.

Why is everything being declared Dead?

Why is everything in technology being declared dead these days?

The Burton Group got huge PR mileage last January when one of its 12 vice presidents smugly declared "SOA Is Dead." Bell-clangers throughout the blogosphere latched onto it immediately as if John Lennon had come back to life as an IT savant.

The only problem with the Burton VP's oh-so-keenly-insightful declaration is that it's not original. David Chappell made the same declaration in August 2008 at TechReady7, Microsoft's semi-annual internal technical conference in Seattle.

But it turns out Hurwitz & Associates made the claim in October 2007.

And Jeff Nolan of Venture Chronicles declared "SOA Is Dead" in a blog back in April 2006.

All of which led Robin Bloor to declare recently: "The People Who Think SOA is Dead, Are Dead."

Of course, SOA isn't the only thing that's dead. Other recent death sentences include:

Web Services are dead
SOAP is dead
Web Content Management is dead
Cloud computing is dead
JSR process is dead
Java itself is dead
IT is dead

It seems to me that declarations of this sort are the kind of thing a publicity-grabbing publicity grabber does to grab publicity.

I think the only thing that's dead is imagination and originality on the part of certain analysts, journalists, and industry figures who, unable to think of something more meaningful to talk about in speeches and blogs, take cheap shots at technologies and processes that are still useful, still used every day, and (ultimately) still quite able to fog a mirror.

What do you think?

Swing versus death by paper cut

The other night, I was looking at the JSR-296 Swing Application Framework prototype implementation, which is (according to the landing page) "a small set of Java classes that simplify building desktop applications." What made me smile was the statement (on that same landing page): "The intended audience for this snapshot is experienced Swing developers with a moderately high tolerance for pain."

When I tweeted this, Gil Hova tweeted back: "Wait. There are Swing developers with low tolerances for pain?"

I laughed so hard I almost blew coffee out my nose. (Now that's taking Java seriously.)

Before going any further, I should tell you that the Swing Application Framework appears to be dead (the JSR is marked Inactive), with the most recent build carrying a date of 19 October 2007. It was supposed to go into Java SE 7. But it now seems to be in a kind of limbo.

But in case you were wondering what, exactly, the Swing App Framework is designed to let you do, here's the Hello World example cited by the creators (reformatted here, with the obvious Swing/AWT imports added; the Application base class comes from the SAF prototype library itself):

import java.awt.BorderLayout;
import java.awt.Font;
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;
import javax.swing.JFrame;
import javax.swing.JLabel;

// Application (with launch() and exit()) comes from the JSR-296 prototype jar.
public class ApplicationExample1 extends Application {
    JFrame mainFrame = null;

    @Override protected void startup(String[] ignoreArgs) {
        JLabel label = new JLabel("Hello World", JLabel.CENTER);
        label.setFont(new Font("LucidaSans", Font.PLAIN, 32));
        mainFrame = new JFrame(" Hello World ");
        mainFrame.add(label, BorderLayout.CENTER);
        mainFrame.addWindowListener(new MainFrameListener());
        mainFrame.setDefaultCloseOperation(JFrame.DO_NOTHING_ON_CLOSE);
        mainFrame.pack();
        mainFrame.setLocationRelativeTo(null); // center the window
        mainFrame.setVisible(true);
    }

    private class MainFrameListener extends WindowAdapter {
        public void windowClosing(WindowEvent e) {
            exit(e);
        }
    }

    public static void main(String[] args) {
        launch(ApplicationExample1.class, args);
    }
}
I'm sure there's a lot of goodness packed() away somewhere in the bowels of the SAF API, but it sure isn't showing up in this Hello World code.

If you run the foregoing code, you get a bare window containing nothing but the words "Hello World" in 32-point type. Yes, it's an ugly large-type edition of browser-JavaScript's window.alert( ). Except it takes 20 lines of code instead of one.

This snippet illustrates a scant handful of the many annoyances that make Swing programming feel so much like death by a thousand paper cuts. For example, it shows the repetitive boilerplate code Swing programmers are forced to write every time something as common as a JFrame is needed. The setLocationRelativeTo(null), setVisible(true), the ever-ridiculous pack(), all are needless mumbo jumbo. Get rid of them! Roll them up out of view. Make them default behaviors. If I want to override these things, let me. But nine times out of ten, when I create a JFrame, I do, in fact, want it to be centered onscreen; I want it to be visible; I want it to go away when dismissed (and be garbage collected); and I don't want to have to recite pack() ever again in my lifetime.
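To show how little it would take, here's a purely hypothetical helper of mine (not part of Swing or SAF) that rolls the boilerplate up into defaults you can still override via the returned frame:

import java.awt.Component;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

class Frames {
    // Hypothetical helper: sensible JFrame defaults, recited once, for everybody.
    static JFrame show(String title, Component content) {
        JFrame frame = new JFrame(title);
        frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE); // go away when dismissed
        frame.add(content);
        frame.pack();
        frame.setLocationRelativeTo(null);  // centered onscreen
        frame.setVisible(true);
        return frame;
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                Frames.show("Hello World", new JLabel("Hello World", JLabel.CENTER));
            }
        });
    }
}

Nine times out of ten, that one call is all I'd ever need.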

A library that makes programmers write boilerplate is lame. It violates a basic principle of good API design, which is that any code that can be hidden from the programmer should be hidden. (See slide 28 of Joshua Bloch's excellent slideshow.) Not giving things reasonable default values is, likewise, a sin.

There's something else here that rubs me the wrong way, which is that if you're creating a new API (or framework, in this case) to supplement an existing API, it seems to me you shouldn't use that as an opportunity to introduce additional language syntax. In other words, don't introduce annotations if the underlying API doesn't use them. Keep it simple. Streamline. Simplify.

But enough ranting. On balance, I think the Swing App Framework is a good idea and adds value, and I think something like it should go into Java SE 7, because although it doesn't make writing JFrame code any less annoying, it does provide a host of application services that would otherwise require Swing programmers to write tons and tons of really tedious code. Anything that reduces that tonnage is good, I say.

Turn off your step-thru debugger

Years ago, when I was first learning to program, I ran into a problem with some code I was writing, and I asked my mentor (an extraordinarily gifted coder) for some help. He listened as I described the problem. I told him all the things I had tried so far. At that time, I was quite enamored of the Think C development environment for the Mac. It had a fine step-thru debugger, which I was quite reliant on.

My mentor suggested a couple more approaches to try (and when I tried them, they worked, of course). Then he made a remark that has stayed with me ever since.

"I try to stay away from debuggers," he said. "A debugger is a crutch. You're better off without it."

I was speechless with astonishment. Here was someone who wrote massive quantities of Pascal and assembly for a wide variety of platforms -- and he never used a debugger! I couldn't have been more shocked if he told me he had perfected cold fusion.

"If you get in the habit of using a debugger," my mentor pointed out, "you'll get lazy. A certain part of your brain shuts off, because you expect the debugger to help you find the bug. But in reality, you wrote the bug, and you should be able to find it."

Still stunned, I asked: "What do you do when you have a really nasty bug?"

He said something I'll never forget. "I make the machine tell me where it is."

Make the machine tell you where the bug is. What a wonderful piece of advice. It's the essence of troubleshooting, whether you're trying to fix a car that won't start, trace an electrical fault, or debug a piece of software.
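In code, "making the machine tell you" mostly means instrumentation rather than stepping: assertions and targeted trace output that report where an assumption broke. A contrived little Java example (my gloss, not my mentor's):

// Contrived example: make the program report where its own assumptions break.
class Checkpoint {
    static double averageOfPositives(int[] values) {
        assert values != null && values.length > 0 : "caller passed an empty batch";
        long sum = 0;
        for (int v : values) {
            if (v <= 0) {
                // the machine tells us where, and with what data, things went wrong
                System.err.println("unexpected non-positive value: " + v);
            }
            sum += v;
        }
        return (double) sum / values.length;
    }

    public static void main(String[] args) {
        System.out.println(averageOfPositives(new int[] {3, -1, 4}));
    }
}

Run it with java -ea so the assertion is armed; either way, it's the program itself, not a debugger session, that points at the bad input.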

My friend (who did a lot of "realtime" programming in assembly, among other things) pointed out to me that there are many programming scenarios in which it's impossible to run a debugger anyway.

I took my mentor's advice and stopped using a step-through debugger. The only real debugger I continued to use (at that time) was Macsbug, which I occasionally invoked in order to inspect the heap or see what was going on in a stack frame.

Sure enough, I found that once I stopped using a step-thru debugger, my coding (and troubleshooting) skills improved rapidly. I spent less time in "endless loops" (fruitless troubleshooting sessions) and got to the source of problems quicker. I learned a lot about my own bad coding habits and developed a renewed appreciation for the importance of understanding a language at a level deeper than surface-syntax.

To this day, I avoid step-thru debugging, and find myself better off for it.

If you do a lot of step-thru debugging, try this as an exercise. For the next month, don't use a debugger. See if you can walk without crutches for a change. I'm betting you'll lose the limp in no time.

Should you cater to younger workers?

At the recent AIIM show in Philadelphia, there was a session called "Stump the Consultant" in which audience members got to put their toughest questions to a panel of three experts (Jesse Wilkins of Access Sciences, Lisa Welchman of WelchmanPierpoint, and my esteemed colleague Alan Pelz-Sharpe of CMS Watch). There were approximately 30 questions from 80 audience members (a very high rate of participation).

One of the questions was quite interesting, and it drew an interesting response.

The question came from someone working for an organization with two sizable constituencies of highly educated domain experts. (I'm being a bit vague, deliberately.) The organization's content-management infrastructure, the questioner said, was practically nonexistent, with many users still accessing content via very old-fashioned tools. There's an urgent need to overhaul the system and put some semblance of a "real" ECM solution in place. But there are two groups of users to satisfy: Senior domain specialists (older workers) who are comfortable with the old-fashioned tools and don't want to change; and younger workers with a strong preference for modern, browser-based apps. The question is, which group do you try to please? Which group can you least afford to alienate?

If you cater to the younger group, you risk alienating your most senior people (talented, expensive, hard-to-replace experts; people you don't want to lose to the competition; people with great political capital in the organization, who can perhaps defeat an IT initiative by pushing back hard). On the other hand, if you cater to the older group, you risk alienating the younger workers; and you risk keeping obsolete systems in place far longer than you should, making future replacement that much more difficult while also impeding business objectives, etc.

Lisa Welchman gave what I thought was a poignant and insightful answer. I'll try to paraphrase: She said, in essence, that if you're wise, you'll put a new system in place that serves the needs of all, but serves the wants of the younger generation of workers. And yes, you do this even though you know it will bring pushback from the more senior workers.

Lisa explained (in a much more articulate way than I can manage here) that older workers are less likely to quit their jobs than younger workers. They may grouse and grumble over a new system, but most will stay in their jobs rather than leave.

Younger workers, on the other hand, are more mobile and more inclined to go off on their own and find another job (or start a company) when conditions become frustrating. The older workers will retire; you'll eventually lose them anyway, no matter what system you put in place (or don't put in place). But if you fail to attract and nurture a talented, motivated corps of younger workers, the future of the company is put at risk.

So you do the right thing for the business. You put in a new system. One that will (hopefully) meet your current and future business needs while also satisfying as many users as possible. And if you have to choose between satisfying senior personnel versus generation-next, again you do the right thing for the business: You go with generation-next.

Lisa's answer resonated with me. It seemed to resonate, also, with the audience of 80 or so people. From my seat near the front of the room, I turned around and surveyed the tableau of faces. The majority of people looked to be over the age of 40. Everyone seemed to get it. Everybody seemed to understand that a company's best investment is not in its IT, but in its people; and not just in its older, more experienced workers, but in its older-workers-to-be. One thing you can't do is cater to workers who want to cling to the ways of the past, no matter how senior or how influential they may be.

As it turns out, I was only able to attend one session at this year's AIIM Expo (because I was working the CMS Watch booth the rest of the time). I'm glad it was this one.