by Jon Udell

Information trailblazing

Analysis | Feb 28, 2003 | 5 mins

The game is over for proprietary data pumps


Last week Matt McAlister, InfoWorld’s director of online product development, forwarded me a list of the week’s 10 most-read stories. I was tickled to see a couple of my stories among them. “You can log in to the reporting system for more detail,” Matt said, so I did. Twice. First I logged in to the Web interface that selects report intervals. (No luck doing that on Mac OS X, by the way, where Mozilla, Safari, and MSIE all failed the browser check.) Then I logged in to the Java applet that delivers the reports. Once you burrow into the inner sanctum, you can see the data sliced and diced in every way that the system’s designers thought you might need. But there are two huge problems: You can’t link to those views, and you can’t link to the data that supports them.

I won’t take potshots by naming the vendor because, in truth, this system is state-of-the-art. Web analytics has been one of my passions for almost a decade, so I know firsthand the challenge of reducing vast quantities of log data into views that make sense to the business sponsors of a Web site. You’ve got to boil the stuff down in ways that are instantly accessible to those folks, and this system meets that expectation. But we’re at an inflection point, I believe, in terms of what regular folks will expect these systems to do.

Consider librarians. As I mentioned on my Weblog, these are non-technical users who have nonetheless begun to describe their OPACs (online public access catalogs) as being “the wrong kind of software” when they can’t adapt to the LibraryLookup style of hyperlink-driven integration. It’s becoming apparent to everybody that deep linking isn’t some obscure geekism, but rather a vital property of information systems. When an OPAC supports deep linking, integration with other systems is trivial. When an OPAC doesn’t support deep linking — for example, because it delivers only a Java interface, or because it encodes session IDs in URLs — such integration is much, much harder. Users are starting to notice the difference.
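To make the LibraryLookup idea concrete, here is a minimal sketch of hyperlink-driven integration: given a book’s ISBN, fill in a URL template that jumps straight to the catalog record. The template below is invented for illustration — every real OPAC has its own query syntax, and the whole trick only works when the catalog exposes one.

```python
# Sketch of LibraryLookup-style deep linking. The OPAC_TEMPLATE URL is
# hypothetical; real catalogs each define their own query syntax.
OPAC_TEMPLATE = "http://opac.example.org/search?index=isbn&term={isbn}"

def opac_deep_link(isbn: str) -> str:
    """Return a shareable, bookmarkable link to a catalog record."""
    return OPAC_TEMPLATE.format(isbn=isbn)

print(opac_deep_link("0596002025"))
```

When the catalog encodes session IDs in its URLs, or lives behind a Java applet, there is no stable template to fill in — and this one-liner integration becomes impossible.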

I’m not discounting the value that client-side Java can bring to the table. Rich clients are an increasingly important part of the emerging picture. But please, pretty please, don’t force me to use the rich client to get to the data. Use it, instead, to enhance the presentation of XML data that is also highly accessible by way of hyperlinks and (where appropriate) Web services. The heavy lifting done by a Web analytics engine aggregates the raw log data along many dimensions: page views, referrers, paths (sequences of page views), you know the drill. Once that hard work is done, make sure it can be leveraged. If you want to use Java or Flash or another rich-client technology to visualize the data, then great, but make sure that users can share those views by passing around easily discovered links.
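What “shareable views” means in practice is that every slice of the report data gets a canonical, guessable URL, so a colleague can reconstruct my exact view from the link alone. A rough sketch, with an invented host and parameter names:

```python
# Sketch of linkable report views: the view's parameters round-trip
# through an ordinary query string. Host, path, and parameter names
# are hypothetical.
from urllib.parse import urlencode, urlparse, parse_qs

def view_url(report: str, start: str, end: str) -> str:
    """Build a canonical link for one slice of the report data."""
    qs = urlencode({"report": report, "start": start, "end": end})
    return f"http://analytics.example.com/view?{qs}"

def view_params(url: str) -> dict:
    """A rich client can reconstruct the view from the link alone."""
    query = parse_qs(urlparse(url).query)
    return {key: values[0] for key, values in query.items()}

link = view_url("referrers", "2003-02-01", "2003-02-28")
print(view_params(link))
```

A session-ID-laden URL, or a view that exists only inside an applet, fails this round-trip test — which is exactly the problem with the reporting system I logged in to.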

For extra credit, cache the views as XML files. It costs little to do this, and you open up worlds of possibility. It’s great if you can pair those files with XSLT transformations that render views of them, but just caching the work of analysis in URL-accessible XML files creates a terrific resource. That’s how AllConsuming.net, a different kind of analysis engine, enables reuse of the book discussion data it harvests from Weblogs. You can get fancy and make SOAP calls to retrieve this cached data; but hey, it’s just data, and you can navigate to a directory and scoop it up directly if you like. Here’s what DJ Adams, author of Programming Jabber, did with the data:
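Caching a view really does cost little. Here is a minimal sketch, with invented file and element names, of serializing one aggregated view to a plain XML file that anyone could then fetch by URL:

```python
# Minimal sketch of "cache the views as XML files". The view name,
# element names, and page data are invented for illustration.
import xml.etree.ElementTree as ET

# Output of the analytics engine's hard work: (url, view count) pairs.
top_pages = [("/article/trailblazing.html", 4812), ("/index.html", 3977)]

view = ET.Element("view", name="top-pages", interval="2003-02")
for url, count in top_pages:
    page = ET.SubElement(view, "page", views=str(count))
    page.text = url

# One URL-accessible file per view; an XSLT stylesheet could render it,
# but the raw data is reusable either way.
xml_text = ET.tostring(view, encoding="unicode")
with open("top-pages-2003-02.xml", "w") as f:
    f.write(xml_text)
```

Drop a directory of such files behind a Web server and the “inner sanctum” problem disappears: the data is just there, addressable, whether or not anyone ever writes a SOAP client.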

“While AllConsuming.net can send you book reading recommendations (by email) based on what your friends are reading and commenting about, I thought it might be useful to be able to read any comments that were made on books that you had in your collection. ‘I’ve got book X. Let me know when someone says something about book X.’

So I whipped up a little script … to grab a user’s currently reading and favorite books lists, and then look at the hourly list of latest books mentioned. Any intersections are pushed onto the top of a list of items in an RSS file, which represents a sort of ‘commentary alert’ feed for that user and his books,” he says in his Weblog.
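DJ’s actual script is his own; but the idea it implements — intersect my book lists with the hourly mentions feed, and push any hits into an RSS file — can be sketched in a few lines. The ISBNs and comments below are made up:

```python
# Rough reconstruction of the idea behind DJ Adams's script (not his
# code): intersect a user's book list with the latest mentions and emit
# RSS items. ISBNs and comments are invented for illustration.
import xml.etree.ElementTree as ET

my_books = {"0596002025", "0596000871"}   # books I own or am reading
latest_mentions = [                        # hypothetical hourly data
    ("0596002025", "Great chapter on presence handling"),
    ("0131103628", "Still the classic"),
]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Commentary alerts"

for isbn, comment in latest_mentions:
    if isbn in my_books:                   # the intersection step
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = f"New comment on {isbn}"
        ET.SubElement(item, "description").text = comment

feed = ET.tostring(rss, encoding="unicode")
print(feed)
```

The point isn’t the twenty lines of code; it’s that AllConsuming.net’s URL-accessible data made those twenty lines sufficient.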

That’s all well and good for scripting wonks like DJ (and me, and maybe you too), you’re probably thinking, but what about civilians who use off-the-shelf software like Microsoft Office? Funny you should ask. The log analyzer I mentioned does, in fact, have back-door access to the report data. You can download a special client that will suck the data out of the server and feed it into Word or Excel files for display and analysis. But I won’t. And soon nobody else will either. Now that Office 2003 can directly consume XML, it’s game over for proprietary data pumps. It’s a whole new game for systems that blaze information trails for others to follow.