A Semantic Web Crash Course

I finally found a wonderfully clear, easy-to-follow "executive summary" (not exactly short, though) explaining How to publish Linked Data on the Web: in other words, how to make your site Semantic-Web-ready.

If you've struggled with trying to visualize how the various pieces of the Semantic Web fit together (all the RDF-based standards, for example), and you still feel as though you aren't quite grasping the big picture, go read How to publish Linked Data on the Web. It'll bring you up to speed fast. The authors deserve special mention (join me in a polite round of applause, if you will):
Chris Bizer (Web-based Systems Group, Freie Universität Berlin, Germany)
Richard Cyganiak (Web-based Systems Group, Freie Universität Berlin, Germany)
Tom Heath (Knowledge Media Institute, The Open University, Milton Keynes, UK)
I hope vendors in the content-management space will get to work producing tools aimed at helping people implement "highly semantic sites" (tm). Search 2.0-and-up will rely heavily on linked data, and the advantages of a linked-data-driven Web in terms of enabling thousands of Web APIs to be conflated down to scores or hundreds will become apparent quickly once the ball starts rolling.

Why super( ) sucks

One complaint I heard someone make recently, in the context of JavaScript not having a true inheritance model, is that there is no super() in JavaScript. Somebody, in a forum somewhere, actually whined and moaned about not being able to call super(). I believe the whiner was a Java programmer.

There shouldn't be a super() in Java, either, though. That's the real issue.

I'm flabbergasted that anyone thinks super() is a meaningful thing to have to write, in any language. What could be more obscure and arcane than super()? It's totally cryptic. It's shorthand for "go invoke a method of my parent that I happen to have intimate knowledge of. Never mind the side effects, I'm clairvoyant enough to understand all that, even if my parent's concrete implementation changed without my knowing it."

I thought secret knowledge and hidden dependencies were supposed to be evil.
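To make the hidden dependency concrete, here's a minimal sketch (using modern JavaScript class syntax purely for illustration; the Account classes are hypothetical, not from any real codebase):

```javascript
// A parent whose concrete implementation a child comes to depend on.
class Account {
  constructor() { this.entries = []; }
  deposit(amount) { this.entries.push(amount); }
}

// The child calls super and assumes it knows what the parent does.
class AuditedAccount extends Account {
  deposit(amount) {
    super.deposit(amount); // hidden dependency on the parent's internals
    this.lastDeposit = amount;
  }
}

const acct = new AuditedAccount();
acct.deposit(100);
// acct.entries is [100], exactly as the child's author expected.

// Now the parent's implementation changes (simulated here by patching
// the prototype): deposit() quietly starts deducting a fee.
Account.prototype.deposit = function (amount) {
  this.entries.push(amount - 1);
};

const acct2 = new AuditedAccount();
acct2.deposit(100);
// acct2.entries is now [99]: the child's super call silently picked up
// behavior its author never saw.
```

The child didn't change a line, yet its behavior changed out from under it. That's the "intimate knowledge of my parent" problem in one screen of code.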

Twitter traffic still soaring

(Quantcast traffic graph for twitter.com; see quantcast.com for a larger version and more data.)

Thank goodness there's something in this economy that isn't slowing down.

Inheritance as Antipattern

Allen Holub tells of once attending a Java user group meeting where James Gosling was the featured speaker. According to Holub, during the Q&A session, someone asked Gosling: "If you could do Java over again, what would you change?" Gosling replied: "I'd leave out classes."

Holub recalls: "After the laughter died down, he explained that the real problem wasn't classes per se, but rather implementation inheritance: the extends relationship."

I bring this story up because it seems a lot of people still think inheritance (supposedly the cornerstone of OOP) is good. Those same people want to impose the inheritance model on JavaScript. Which to me would be a terrible thing to do. I wouldn't go so far as to say inheritance is evil, even though many experts have indeed said exactly that. But it is certainly the most misused feature of Java. It ruins most otherwise-good APIs, I've found. (Google's Joshua Bloch has observed the same thing.) In the real world, inheritance tends to be an antipattern.

Inheritance violates encapsulation, undercutting the most basic of OOP principles.

Quite simply: Inheritance requires children to understand their parents (which I can tell you from personal experience is a dangerous assumption).

Subclassing leads to bloat (something Java needs more of...), because children inherit the methods of their entire ancestry chain. Which leads to things like JMenu having 433 methods.

It also locks new classes into preexisting concrete implementations, which introduces brittleness: a change in an ancestral method can break children unexpectedly. This is the well-known fragile-base-class problem.

Here is a verbatim quote from the Java API documentation for the Properties class:

Because Properties inherits from Hashtable, the put and putAll methods can be applied to a Properties object. Their use is strongly discouraged as they allow the caller to insert entries whose keys or values are not Strings. The setProperty method should be used instead. If the store or save method is called on a “compromised” Properties object that contains a non-String key or value, the call will fail.

This sort of thing has an odor about it. It reeks of poor design.

There's plenty more to be said on this subject, but it's been said elsewhere and I won't regurgitate needlessly. And again, I have to stress, I don't consider inheritance evil so much as misused. More on that some other time.

The thing that bothers me is that so many Java programmers who haven't taken the time to grok Brendan Eich's motivations for making JavaScript the way it is (drill into some of the links at this page to get a tiny taste of what I'm talking about) think JavaScript's compositionality-based prototype model is a flaw, or at the very least an egregious oversight. Hardly. The language was designed that way for a reason.
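A minimal sketch of what that compositional style looks like in practice (the behavior objects and names here are hypothetical, chosen for illustration, and the example uses modern Object.assign for brevity):

```javascript
// Behaviors are just plain objects...
const canEat  = { eat(food) { return this.name + " eats " + food; } };
const canBark = { bark()    { return this.name + " says woof"; } };

// ...and an "object type" is whatever mix of behaviors you assemble.
// No ancestry chain, no extends, no super.
function makeDog(name) {
  return Object.assign({ name: name }, canEat, canBark);
}

const rex = makeDog("Rex");
rex.bark();        // "Rex says woof"
rex.eat("kibble"); // "Rex eats kibble"
```

Nothing here depends on a parent's concrete implementation; if a behavior object changes, you can see exactly which objects mixed it in.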

Gosling, Eich, Bloch, and Holub all know what they're talking about. Inheritance is overrated.

Script for bypassing Google's "site may harm your computer" page

There was an outbreak of the bogus "visiting this web site may harm your computer" warning-page redirection on Google this morning. Apparently there have been occurrences of this phenomenon before (judging from blogs going back to 2007). You run a search on Google, and all of a sudden every hit has a warning link under it that says "visiting this web site may harm your computer", and if you try to go to the page in question, you get directed to a Google warning page that urges you not to go to the actual page you want.

On Twitter, people began labelling the problem #GOOGLEMAYHARM, which of course is phonetically similar to GOOGLE MAYHEM.

Naturally, I went to work on a Greasemonkey script to fix the situation. And naturally, in the time it took me to write the script, Google fixed the silly redirection thing.

In any event, if you are seeing the "harmful site" warning, here's a Greasemonkey script that should allow you to bypass the Google redirection page:

// ==UserScript==
// @name        GoogleHitFixer
// @namespace   fixer
// @include     http://www.google.com/*
// ==/UserScript==

// Routes around the bogus warning page that says
// "visiting this web site may harm your computer"

// Public domain. Author: Kas Thomas

(function main() {

    // The warning page's URL contains "interstitial?url=<real URL>"
    var signature = "interstitial?url";
    var address = location.toString();

    // Not on the warning page? Then there's nothing to do.
    if (address.indexOf(signature) === -1)
        return;

    // Grab everything after "?url=" and go straight there.
    var newUrl = address.split("?url=")[1];
    location.href = newUrl;

})();

Google Measurement Labs?

Google has introduced yet another service, called Google Measurement Labs, designed to test your connection speed and provide various types of information about your last-mile chokepoints.

I have read Google's own announcement about this as well as several blogs that try to explain it, and honestly, I still can't fathom the true motivation(s) behind it or why the heck anyone outside of academia (or perhaps the NSA) would even care. Obviously, Google has an interest in last-mile problems (the Internet is its lifeblood), but offering this set of diagnostics to the general public gives the impression that Google is very proudly answering a question nobody asked.

I don't get it.

"Crux" app wins JCR Cup

Day Software announced the winner of the JCR Cup 08 competition today. College sophomore Russell Toris won top prize (taking home a MacBook Pro) with a little web app called "Crux" (a shameless play on CRX, which is Day's commercial Java Content Repository).

I managed to learn a tiny bit more about Crux. And from what I've seen, it is indeed a clever use of JSR-170 technology.

What it lets you do is copy and paste arbitrary selections from any web page that's open in your browser, and save them straight to a JSR-170 repository (in this case, Day CRX, which is built atop Apache Jackrabbit). When you want to retrieve the selection(s) again, you can browse the repository and open them again in your browser.

Why is this useful? Here's the use case. Suppose you've got a dozen tabs open in Firefox (because you're researching a term paper) and you want to save references to the various content items you've been looking at. The conventional thing to do is bookmark all the open pages. But the problem with bookmarks is that they don't actually encapsulate any content from the pages you were on: They just encapsulate URLs and page titles (which are often meaningless).

With Crux, you highlight and Copy content selections from pages, then push those items into the repository with the click of a button. (Of course, you have to have a repository server running somewhere, reachable via HTTP.) When you want the clipped items again, you visit one URL (the node in the repository where the items are stored), and there are all your snippets, viewable in a single summary page. And they render nicely since Crux saves actual selection-source markup, not just raw text. Any embedded links, images, etc., in the clipped content are still there. Also, each entry in Crux contains a trackback link to the original source page, in case you really do need to go back to the page in question.
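The mechanics are easy to sketch. The core move is to serialize the selection as markup (not raw text) and push it, with a trackback link, to a repository node over HTTP. Everything below is hypothetical illustration, not Crux's actual code; the node-type string is just a typical Jackrabbit default:

```javascript
// Hypothetical sketch of the Crux idea: wrap a clipped selection
// together with a trackback link, ready to POST to a JCR node.
function buildClip(selectionMarkup, sourceUrl, sourceTitle) {
  return {
    "jcr:primaryType": "nt:unstructured", // common Jackrabbit node type
    markup: selectionMarkup,              // keeps links, images, etc.
    source: sourceUrl,                    // trackback to the original page
    title: sourceTitle,
    clippedAt: new Date().toISOString()
  };
}

// In the browser, the markup would come from the live selection, e.g.:
//   var range = window.getSelection().getRangeAt(0);
//   var div = document.createElement("div");
//   div.appendChild(range.cloneContents());
//   var clip = buildClip(div.innerHTML, location.href, document.title);
// ...and then be pushed to the repository with an HTTP POST.

const clip = buildClip('<p>See <a href="http://example.com">this</a>.</p>',
                       "http://example.com/article", "An Article");
// clip.markup preserves the embedded link; clip.source is the trackback.
```

Because the markup is stored rather than a bare URL, the snippet renders later exactly as it looked when clipped.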

If you think about it, saving content clippings is actually a very compelling alternative to bookmarking. A bookmark is just an address. What you care about is the content, not the address. I have hundreds of bookmarks already. I can't keep them straight. They just keep piling up, and I can't remember what most of them are for. (Even the ones I use a lot, I sometimes have trouble finding again.) Crux provides a useful alternative.

How do you find something in the repository after you've pushed hundreds of content items into it with Crux? You use whatever repository search tools you'd normally use. Only this time, you can actually run full-text searches on the content items you stored, rather than searching page names in your Bookmarks collection.

Functionality similar to Crux is available via Clipmarks. Also, Microsoft tries to do some of this with its Onfolio and OneNote products (which are, IMHO, painfully klutzy). Crux looks and feels very light and simple. It definitely hits a sweet spot.

Whether Crux's source code will ever see the light of day, I don't know. (Entrants in the JCR Cup competition were not required to make source code public.) Reportedly, the code is all JavaScript and requires Greasemonkey.

In any event, congratulations, Russell Toris! And kudos to Day for sponsoring the competition. It's nice to see JCR being used for something practical, lightweight, and simple. Well done.