The promise of Web services is there, but providers and customers need to know the downside

TWO WEEKS AGO, I had the privilege of attending and moderating two panel discussions at InfoWorld's Next Generation Web Services conference in San Francisco. The attendees and panelists seemed clearly divided into two camps: skeptical optimists and optimistic skeptics. As someone who deals with technology realities on a daily basis, I remain an optimistic skeptic, and much of what I heard at the conference reinforced my stance. Here's why:

The how is always easy; the what is always hard. Anyone who has done an end-to-end technology project of any scope knows that the trouble lies not in the details of implementation or the tools available, but in defining the project's goals at the outset. Sure, Web services promise to make it easier to implement solutions between heterogeneous systems, but where is the help for developers perpetually in the dark about what needs to be done, not just how to do it? Requirements definition will continue to be an issue, and the promise of Web services could actually be counterproductive: businesspeople might be led to believe that Web services' simplicity and flexibility mean requirements can be even less defined. "Hey, if we skip the requirements phase, the ease of implementation with Web services will make up for it, right?" I go back to Frederick Brooks' 1986 essay "No Silver Bullet": "As we look to the horizon of a decade hence, we see no silver bullet." Software development will always be difficult.

The network is not infallible.
In the conference keynote address, James Gosling, the father of Java, cited Peter Deutsch's "Fallacies of Distributed Computing" and added one of his own at the end: 1) the network is reliable; 2) latency is zero; 3) bandwidth is infinite; 4) the network is secure; 5) topology doesn't change; 6) there is one administrator; 7) transport cost is zero; and 8) the network is homogeneous.

Let's look at reliability. One idea floating around the conference was that Web services might level the playing field for small to midsize businesses, allowing them to partake in and provide the same Web services as the big guys. But will these companies be able to afford the network infrastructure (outsourced or otherwise) to deliver the reliability they need? Even at a simplistic level, bringing two redundant data lines from separate providers into your network costs more than twice as much as one line, once you add extra network hardware, build fail-over mechanisms, and pay for high-end network engineering. The stability of your network, on which your ability to deliver and consume Web services depends, is largely determined by how much money you are willing to throw at the problem, so established players with deep pockets will keep their advantage.

Then there is security. Recent incidents at Microsoft in particular don't paint a pretty picture of secure Web services. Last summer, Code Red and Nimda attacked IIS Web servers all over the Internet, underscoring that a ubiquitous, trustworthy HTTP transport layer is not a given. Earlier this month, Microsoft left users of its .Net My Services service unable to retrieve updates that were intended, ironically enough, to fix security holes in Windows XP. In November, a key vulnerability was found in Passport's Wallet service, potentially exposing sensitive financial data. Microsoft is not alone: vulnerabilities continue to plague operating system, database, and other application vendors.
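To make the first fallacy concrete: code that consumes a Web service must assume every remote call can fail. The sketch below (my own illustration in Python, not anything presented at the conference; the function and service names are hypothetical) wraps a remote operation in retries with exponential backoff, which is one common defensive pattern.

```python
import time

def call_with_retries(operation, attempts=3, base_delay=0.1):
    """Invoke a remote operation, retrying with exponential backoff.

    Treats fallacy No. 1 ("the network is reliable") as false: any
    call across the network may fail and must be wrapped defensively.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except OSError:  # connection resets, timeouts, DNS failures
            if attempt == attempts - 1:
                raise  # out of retries; surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))  # back off, then retry

# Simulated flaky Web service: fails twice, then succeeds.
failures = iter([True, True, False])
def flaky_service():
    if next(failures):
        raise ConnectionError("network glitch")
    return "<quote>42.17</quote>"

print(call_with_retries(flaky_service))  # succeeds on the third attempt
```

The point is not the dozen lines of code but the mindset: latency, retry budgets, and failure handling become part of every integration, which is effort the "easy plumbing" sales pitch tends to omit.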
The lesson: Stay inside the relative safety of your corporate firewall for now. Despite the challenges, Web services are clearly here, but be prepared for evolution, not revolution.

Software Development