Should you cater to younger workers?

At the recent AIIM show in Philadelphia, there was a session called "Stump the Consultant" in which audience members got to put their toughest questions to a panel of three experts (Jesse Wilkins of Access Sciences, Lisa Welchman of WelchmanPierpoint, and my esteemed colleague Alan Pelz-Sharpe of CMS Watch). There were approximately 30 questions from 80 audience members (a very high rate of participation).

One of the questions was quite interesting, and it drew an interesting response.

The question came from someone working for an organization with two sizable constituencies of highly educated domain experts. (I'm being a bit vague, deliberately.) The organization's content-management infrastructure, the questioner said, was practically nonexistent, with many users still accessing content via very old-fashioned tools. There's an urgent need to overhaul the system and put some semblance of a "real" ECM solution in place. But there are two groups of users to satisfy: Senior domain specialists (older workers) who are comfortable with the old-fashioned tools and don't want to change; and younger workers with a strong preference for modern, browser-based apps. The question is, which group do you try to please? Which group can you least afford to alienate?

If you cater to the younger group, you risk alienating your most senior people (talented, expensive, hard-to-replace experts; people you don't want to lose to the competition; people with great political capital in the organization, who can perhaps defeat an IT initiative by pushing back hard). On the other hand, if you cater to the older group, you risk alienating the younger workers; and you risk keeping obsolete systems in place far longer than you should, making future replacement that much more difficult while also impeding business objectives, etc.

Lisa Welchman gave what I thought was a poignant and insightful answer. I'll try to paraphrase: She said, in essence, that if you're wise, you'll put a new system in place that serves the needs of all, but serves the wants of the younger generation of workers. And yes, you do this even though you know it will bring pushback from the more senior workers.

Lisa explained (in a much more articulate way than I can manage here) that older workers are less likely to quit their jobs than younger workers. They may grouse and grumble over a new system, but most will stay in their jobs rather than leave.

Younger workers, on the other hand, are more mobile and more inclined to go off on their own and find another job (or start a company) when conditions become frustrating. The older workers will retire; you'll eventually lose them anyway, no matter what system you put in place (or don't put in place). But if you fail to attract and nurture a talented, motivated corps of younger workers, the future of the company is put at risk.

So you do the right thing for the business. You put in a new system. One that will (hopefully) meet your current and future business needs while also satisfying as many users as possible. And if you have to choose between satisfying senior personnel versus generation-next, again you do the right thing for the business: You go with generation-next.

Lisa's answer resonated with me. It seemed to resonate, also, with the audience of 80 or so people. From my seat near the front of the room, I turned around and surveyed the tableau of faces. The majority of people looked to be over the age of 40. Everyone seemed to get it. Everybody seemed to understand that a company's best investment is not in its IT, but in its people; and not just in its older, more experienced workers, but in its older-workers-to-be. One thing you can't do is cater to workers who want to cling to the ways of the past, no matter how senior or how influential they may be.

As it turns out, I was only able to attend one session at this year's AIIM Expo (because I was working the CMS Watch booth the rest of the time). I'm glad it was this one.

What Sun means to IBM


Sun Microsystems profit centers (from SEC filings)

Like a lot of my friends, I've been trying to figure out why the heck IBM would want to buy a burnt-out fail-whale like Sun Microsystems. Yes yes, Sun has some remarkably good technology, and I'm not putting it down. Sun's problem has never really been a lack of good technology. The company's problem has been a failure to monetize the technology. Big difference.

Sun's biggest problem at the moment (arguably) is brand deterioration. There's an odor of failure about the company, and it's a difficult odor to get rid of. It eventually taints the brand itself. I fear that's happened already with Sun.

I spent a lunch hour on the phone the other day with a friend of mine who works for a very large company that competes with Sun in a number of important markets. We tried to think of reasons for IBM to buy Sun, and couldn't come up with many.
  • Storage + cloud-computing story: IBM doesn't need one.
  • Servers and chipsets ("Computer Systems Products"): IBM doesn't need more of these.
  • Operating system (Solaris): IBM has shown that it doesn't want to be in the OS business.
  • Java: The platform itself doesn't make huge money for Sun (if it did, Sun wouldn't be for sale), and IBM would probably throw it over the wall to the community (for real, and in toto) rather than try to maintain and advance it internally. If IBM didn't give Java to the community, there could be antitrust implications (since so many of IBM's competitors rely so heavily on Java).
  • Software: Sun middleware is so profitable it's not even a line item in the Annual Report. (Okay, that was unnecessarily sarcastic.) Sun middleware is not category-leading in any category I'm aware of. MySQL is interesting, but does IBM need a database? More to the point, is the income MySQL produces important to IBM? Is it important to the overall Sun deal?
So then. What does IBM stand to gain from a Sun acquisition?

Three things, I think. First, a customer list to sell into (for servers, storage, cloud services). That's the obvious one.

The second thing IBM gets by buying Sun (something I don't see many people talking about) is that nobody else gets to buy Sun. Certain IBM competitors who really do stand to benefit from a Sun purchase (e.g., Cisco) are denied easy entry into some of IBM's markets, if Big Blue takes Sun out.

A third thing IBM gets is 7000 patents. Not all of those patents are still active, and around 1600 were donated to open source a few years ago. But it's still a sizable portfolio. And we do know that IBM likes patents an awful lot.

Sadly, one thing IBM does not need, that Sun has way too many of, is employees. I see lots of unemployment coming out of this acquisition (if indeed it comes to pass).

A prediction: I think IBM will buy Sun, but people may be surprised at the low valuation of Sun. I also think Google will buy Twitter, and people will be surprised at the high valuation of Twitter. Sun, I fear, may turn out to be worth only a few Twitters.

And wouldn't that be something to tweet.

UPDATE: Late Sunday, the New York Times reported that talks between IBM and Sun had broken off. The deal is officially dead (for now). Neither party has indicated a willingness to continue negotiations. Where Sun goes from here is anyone's guess.

Hell freezes over as big ECM vendors suddenly embrace interoperability

Jeff Potts at ecmarchitect.com has written an interesting post on the flurry of interest around the Content Management Interoperability Services (CMIS) standard, which was very much in evidence at the recent AIIM show. I was at the show, and I too detected a huge amount of interest around the new standard.

But it's not a standard yet (and won't be, until the end of this calendar year at the very earliest), which makes the sudden interest in it rather unusual, to say the least. I have seen a lot of industry standards come and go over the past 20 years. But I have seldom seen as much interest in a not-yet-released standard as is happening now with CMIS.

What strikes me as particularly odd is the huge interest in CMIS on the part of big ECM vendors like Open Text, EMC (Documentum), Microsoft, IBM, and Oracle, to name a few. Actually, IBM and Oracle don't surprise me very much, since they're pro-standards in general. But some of the other big players built their businesses on proprietary, standards-averse lock-in-ware. To go from a lock-in model to a posture of "let's stand up in public and salute the interoperability flag" seems downright weird to me.

I have it on good authority that Microsoft is a particularly enthusiastic proponent of CMIS, which is stranger still, to me. This is a company that has done more (over the years) to oppose interoperability than any software company in existence. For Microsoft to be the out-front cheerleader for CMIS blows my mind (or what's left of it at this point).

What's super-weird, also, is the fact that almost all of the big companies pushing CMIS are involved in the JSR-283 (JCR 2) effort, which produced a final draft spec the other day. If you look at the Expert Committee members on the project page for JSR-283 (scroll down to see the names), you'll see EMC, IBM, and most of the CMIS cheerleaders listed (except Microsoft).

The big CMIS supporters have "supported" JSR-170 and JSR-283 all along, but never once showed the kind of enthusiasm for those JSRs that they are now showing for CMIS. Those companies could have issued press releases, given seminars at AIIM, etc., in support of JCR, but never did. Somehow, interoperability (which is what JCR was and is about) wasn't important to these big ECM companies when JSR-170 was ratified. But now it is. And CMIS is a long way from ratified.

Does anyone else see anything strange in this picture, or is it just me? Mind you, I'm all for interoperability and I'm all for CMIS. I'm just struggling to understand why the sudden interest in interoperability on the part of companies who didn't give a damn 5 years ago.

Dot-NET to benefit from Sun sale?

At the AIIM show this week, I talked to a number of consultants and others who told tales of an uptick, recently, in .NET-based CMS business. One potential buyer wanted to know who the top .NET players in ECM are. There seemed to be a lot of interest, generally, in .NET-based content management. I can confirm that one software vendor whose .NET CMS has been around for years has been experiencing strong business in the recession.

It occurred to me that the much-talked-about (but slow to happen) acquisition of Sun by IBM, combined with the increasing entropy level around Java 7, may be giving IT decisionmakers a bit of stomach acid right now. Smaller shops with a significant existing investment in Microsoft infrastructure seem to see this as a good time to stop "thinking in Java." One consultant told me that a recent customer went with a .NET system based on the ability to get a usable deployment up and running quickly. The unspoken sentiment seemed to be "Who has time for Java EE?"

Bottom line: Acquisitions are disruptive. They put fence-sitters back on the fence, and they make others jump in unexpected directions. With Java, you also have uncertainty around the next edition. (Will there be a Java 7 any time soon? Doubtful.) These are not good things if you're Sun. But it's not a bad environment for Microsoft. Not bad at all.

Twitter, Google, and Semantic RSS

TechCrunch is reporting that the rumors about Google being in talks to acquire Twitter are true. Naturally, everybody wants to analyze this along "search" dimensions. TechCrunch says Twitter is about "people searching for news" and "brands searching for feedback." But that hardly conveys the importance of what's going on.

Talking about Twitter using the language of "search" feels vaguely wrong, like force-fitting a 20th-century semantic facade over a 21st-century idea. Imagine that the iPod were to have appeared in 1970, when handheld transistor radios were "high tech." People would have talked about the iPod in transistor-radio terms, instead of understanding it in terms of personal empowerment and cultural shift. The same is true of Twitter. It's more than a new data stream.

I spent most of this week at the AIIM show in Philadelphia, and while I was there, I was talking to an acquaintance about Twitter, trying to explain what makes Twitter so revolutionary for me. Bear in mind, until two months ago, I was not a Twitter user and never imagined I would be one.

I talked about bookmarks and how, originally, bookmarks were a hugely empowering idea: You could choose to keep references, in your browser, to parts of the Web that have special meaning to you. But bookmarks became useless to me when I accumulated so many of them that I couldn't find the ones I wanted or (in many cases) even remember why I'd made them.

A few minutes ago, I counted how many bookmarks I have in my browser. You can do this, too. If you have Firefox, use the Import and Backup toolbar command in the Organize Bookmarks dialog to export your bookmarks as HTML.

Open the resulting HTML document in your browser, then run the following one-line script in the console:

alert(document.body.getElementsByTagName("a").length);

When I ran this script a few minutes ago, I learned that I have 598 bookmarks. That's beyond my ability to manage. It means bookmarks are useless to me now.

All is not lost, though, because I've found (in the meantime) that Google and RSS feeds are more useful. Google acts as a kind of just-in-time bookmark, or at least an indirection to bookmarks. RSS feeds are dynamically updating bookmarks (in a sense), which is also a good thing.

But now I have too many RSS feeds! I can't easily scan them all. Much of what I want to "keep track of" on the Web (in terms of news, blog entries of interest to me, and industry news) is no longer easily trackable. What I really need is some kind of intelligent filtering process that runs in the background and keeps a list of semantically relevant items, so I can just find the things of interest to me. New things. Up-to-date things.

Why not use Google news alerts? you may ask. Short answer: Poor signal-to-noise ratio. I get too many "false positives." Google news alerts are not "smart." The software that produces them is quite naive and easily fooled by stories that match the vocabulary of my search query. That's not what I want.

What does any of this have to do with Twitter?

Twitter is where I find out about industry news and new blog posts of potential interest to me. When an important news story breaks, I find out about it on Twitter first, and I usually know about it before any of my non-Twitter friends.

As for keeping up with relevant blog posts: The people I follow on Twitter share my interests. They tweet links to blog posts. The links tend to be extraordinarily useful and (again) extremely up-to-date. Much better than a "dumb list" of RSS feeds.

Twitter thus represents, for me, a semantically filtered super-RSS feed. It's very far from perfect, of course, but when I balance the time I spend sifting through 140-char tweets with the time I would otherwise spend walking RSS-feed links, doing searches in Google, waiting for full pages to load at various web sites, etc., I find that I actually save a lot of time by following Twitter streams.

Tweets represent semantically filtered information. A person with interests similar to mine has already determined that something is useful enough to share. That's the most valuable kind of semantic filtering there is.

Even without a "semantic web" of RDF triples and linked data and all the rest, I have an ad-hoc semantic web to work from, in Twitter. And it's up-to-the-minute. Twitterstreams move at the speed of the Web.

If I follow a Twitter Search as RSS, I'm even further ahead of the game.

So if Google acquires Twitter (which they will, if they're smart), Google doesn't just acquire a new data stream to build crummy little beta-apps on top of. It acquires a new paradigm. That's the true significance of Twitter. That, in a nutshell, is why they will acquire Twitter.

An API readiness checklist

Someone asked me recently if I could name some enterprise software products that have good APIs. I think it would be much more useful to list some of the characteristics of good APIs so that you can sniff out the various odors for yourself as you examine various products.

The rules for creating REST APIs are pretty simple (although often violated) and I won't address them here. Right now, I'll just speak to the topic of programmatic APIs, which is where most of the customer and consultant pain lives.

Realizing that the following list is by no means exhaustive and reflects a number of my own personal biases, I hereby offer a perfunctory API Readiness Checklist. Vendors can use this as a kind of scorecard to determine whether APIs are ready to show customers or not.

☒ Common operations don't require the user to write lots of repetitive boilerplate code

☒ Methods aren't complex, heavy (they don't try to "do too much")

☒ Methods have fully-spelled-out names; no abbreviations

☒ Method, variable, class, and other names are self-descriptive

☒ Methods have few formal parameters (seldom more than 3)

☒ Concrete methods are final

☒ Parameters are positionally consistent across different methods

☒ No ambiguous overloadings (in a Java API: You should be able to call every flavor of a method from JavaScript, without generating disambiguation errors)

☒ Code often reads like normal prose
  if ( user.debt( ) > Credit.LIMIT )
      reject( user );
☒ Standard best practices apply with respect to internationalization

☒ Few custom exception types

☒ API follows patterns that developers are familiar with (don't make up new ones)

☒ Consistency of approach: The API follows the same patterns when doing similar sorts of things

☒ Consistency with related APIs: The API does things the way other company or product APIs do

☒ API favors interfaces and composition, not inheritance

☒ Documentation actually explains usage patterns and gives examples

☒ User doesn't have to know how everything works, just how to use it

☒ Sample code is intelligently commented

☒ Usability testing was conducted

☒ Your in-house developers actually like the API
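
To make a few of these items concrete, here's a hypothetical sketch in Java. The `User` and `CreditPolicy` classes below are invented for illustration (they don't come from any real product); they show fully spelled-out, self-descriptive names, methods with few formal parameters, and call sites that read like prose:

```java
// Hypothetical sketch (invented names, not from any real product) showing a
// few checklist items: descriptive names, few parameters, prose-like reading.
class User {
    private final double debt;

    User(double debt) {
        this.debt = debt;
    }

    // Self-descriptive accessor; the call site reads naturally: user.debt()
    double debt() {
        return debt;
    }
}

class CreditPolicy {
    // A named constant rather than a magic number buried in logic.
    static final double CREDIT_LIMIT = 5000.0;

    // One formal parameter, boolean result. At the call site this reads
    // like prose: if (CreditPolicy.isOverLimit(user)) reject(user);
    static boolean isOverLimit(User user) {
        return user.debt() > CREDIT_LIMIT;
    }
}
```

Notice that the calling code needs no boilerplate setup: construct a `User`, ask one question, get one answer.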

There's a lot more to read on this subject. Joshua Bloch's book is a good starting point, as is this slideshow. A good resource on API design (for Java) can be found in this sample chapter from the book Practical API Design.

If you have a favorite reference or more ideas to add to this list, by all means leave a comment here.


API-First Design

I don't pretend to be an architect, but sometimes I wonder if APIs shouldn't be the starting point of product development, rather than an afterthought.

Yesterday, I suggested that if there were such a thing as a middleware API maturity model, "intentional API design" would have to be one of the middle levels. But I think (in fact I know) it's possible to do better than that. It's possible to make API design the sharp edge of the wedge, in software development.

This is not a new concept. Joshua Bloch and others have advocated early design of APIs, rather than letting APIs trail product design (as currently practiced). By early design, I mean that the APIs literally are written first, before the software itself.

A lot of software development happens this way, of course: You draw UML diagrams, identify key interfaces, write mock methods, etc., before writing "real" code. That's fine, but it's not quite the same as defining an API.

API-first design means identifying and/or defining key actors and personas, determining what those actors and personas expect to be able to do with APIs (i.e., what are the possible use-cases, user narratives, or stories that encapsulate the business problems these people need to solve), and -- very important -- trying to understand the mental model each actor brings to the problem space. The mental model will drive architectural and design decisions at various levels (for example, it will suggest what kinds of business objects each user thinks in terms of) and will keep the overall process on a track toward good usability.

It's not necessary to overspecify the API at the outset, the way most products are overspecified at the requirements stage. You can begin writing mock code immediately. The nice thing about writing to an API is that you find out very quickly whether it is serving your needs; you learn where the holes are. That means an API-first design process is, to a degree, self-guiding.
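As a sketch of what that early mock code might look like, assuming a hypothetical content-repository API (the `DocumentRepository` interface and its names are invented here for illustration), you write the interface first and back it with a trivial in-memory mock, so client code can exercise the API immediately and expose the holes:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical API-first sketch: the interface is written before any real
// implementation exists. Callers code against it from day one.
interface DocumentRepository {
    void save(String id, String content);
    String load(String id); // returns null if the id is unknown
}

// Throwaway in-memory mock: just enough behavior to let client code run,
// and to serve as the basis of early unit tests.
class InMemoryDocumentRepository implements DocumentRepository {
    private final Map<String, String> store = new HashMap<>();

    public void save(String id, String content) {
        store.put(id, content);
    }

    public String load(String id) {
        return store.get(id);
    }
}
```

The first time calling code needs an operation the interface doesn't offer, you've found a hole in the design, long before any "real" implementation has been built.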

An overspecified API is your enemy. In the end, it will constrain the user in funny ways, and if you use the API as the core of your development, it will constrain your developers, too.

And that brings up the most important reason to consider API-first design, which is that you get a tool for creating the product itself. Your developers end up using the API to write the product. Early API mocks are the basis of unit tests. There are no downstream costs to "adding on" an API later, because it's already core. When the API is finally delivered to customers, its capabilities have been thoroughly tested (by your own R&D team, who used it to build the product).

So at the top level of the (fictional) Middleware API Maturity Model (if such a thing existed), there'd no doubt have to be a nirvana level called "API-driven design," characterized by the well-known Dogfood Pattern (tm), where you consume your own API as part of everything you build. Most vendors practice this to a greater or lesser extent already, but if you start from a strategy of API-first design, the degree of API reuse can be very high indeed.

I know of only a couple of R&D organizations that are aggressively and systematically applying an API-first design methodology right now. I have a feeling the number will grow. It's too good an idea not to.