The government can't muzzle speech, but vendors act as though benchmark testing is unconstitutional

OK, class, time for a pop quiz. Who suffers the most from the censorship clauses many software companies include in their shrink-wrap agreements? Is the answer A) the computer press, B) independent labs, C) the software companies themselves, D) software customers, or E) all of the above?

In case you’re having trouble with your answer, allow me to provide a crib sheet with this recent example of how censorship clauses are inhibiting open public discourse. Chris Merrill, an engineer and partner at Raleigh, N.C.-based Web Performance, was trying last fall to put together a benchmark comparison of Java servers using his company’s Web Performance Trainer Web-load testing tool. But he encountered a roadblock: several of the commercial products he wished to include had a provision in their license agreements prohibiting publication of benchmark results without the company’s permission. In particular, he tried to obtain permission from BEA Systems to include results for the single-server version of WebLogic, only to be told that BEA has a blanket policy against granting permission for publication of third-party benchmarks.

“One question we get asked a lot is how the performance of commercial and open-source J2EE servlet applications compare; so we were just trying to provide some objective information in our research report,” Merrill says. “None of the commercial vendors that had a benchmark restriction in their license agreement would grant us permission, but we were able to determine in several cases that the restriction didn’t apply to what we were doing.” In an e-mail exchange with Merrill, however, BEA officials said they would strictly enforce the company’s benchmark policy.
“At first they seemed to be saying they could not allow anyone who is not an expert with their product to publish performance measurements, but they never asked for my credentials or for my testing methodology,” Merrill says. “Later they suggested they might grant me permission if they could preview the results — and if the results favored BEA — but I wasn’t willing to work that way.”

So when Web Performance posted its benchmark comparison in November, WebLogic wasn’t included. Merrill was aware that the censorship clauses are of very dubious legal standing, but sensibly enough, he wasn’t about to get into a legal fight with a company as big as BEA. “For our company, which is very small, we would be out of business long before the case made it to a judge. Legal enforceability does not matter — we would never make it that far.”

For their part, BEA officials are refreshingly up front about the reasons for their policy on publishing benchmark results. “I wouldn’t say our policy is no benchmarks. But the bar is pretty high for what we want to see for the person executing the benchmark, the other vendors participating, etc., if we’re going to invest all the time to participate,” says Eric Stahl, director of WebLogic Server marketing at San Jose, Calif.-based BEA. “Performance-tuning of an application server is a very important thing, and it has to be done correctly if we are going to help people see the differences between products. And we just don’t have the resources to send out our engineers to participate in every opportunity that comes our way. So unless it’s a really high-visibility scenario, I’m sorry, but we cannot give our permission to test,” Stahl says.

Stahl was one of the BEA officials Merrill corresponded with, and he deserves credit for not stonewalling these requests the way we’ve seen certain other companies do in this situation. He didn’t recall Merrill’s specific case, however, because he gets many similar requests.
“Probably not a week goes by that we don’t have to deal with a request like this,” Stahl says. “I can understand the frustration that those making these requests feel, but we invest many millions of dollars into the product. So we don’t want anyone publishing our benchmarks unless we agree on how it’s being tested, because if it’s done wrong, it could have a devastating impact on our business.”

Stahl thinks the only answer is for companies to get together and settle on industry-standard benchmarks similar to the Transaction Processing Performance Council tests that database vendors use. “That approach lets each vendor put forth their best representation and alleviate all this problem of letting people publish anything they want,” he says. But Stahl admits that such tests don’t always reflect real-world results and are also subject to abuse by competitors who find ways to sidestep the censorship clauses. Specifically, he points to Microsoft-sponsored J2EE vs. .Net benchmarks published in November (about the same time Merrill’s BEA-less results appeared) as a “sham of the highest order” and a “fraudulent misrepresentation of our product” that can still be found on Microsoft’s site despite having been debunked.

If this sounds vaguely familiar, it’s because Microsoft officials have had similar things to say about how J2EE competitors have misrepresented .Net performance. Despite all these vendors having censorship clauses that are supposed to protect them from publication of biased or badly conducted performance tests, they all claim to be victimized anyway. So what good are these benchmark restrictions, other than keeping the small-fry like Merrill from collectively providing a more balanced and objective view? And, by the way, there’s also the little detail that “people publishing what they want” isn’t supposed to be a problem in this country but a right.

So the answer to our quiz is … sorry, it was a trick question.
Not even E) adequately covers who suffers from the fact that open discussion about a crucial technology is being thwarted by a handful of companies on an almost daily basis. Our whole society, and our future, are suffering incalculable damage. It’s time we wake up to that fact.

Software Development