by Mario Apicella

A storage benchmark in the making

analysis
Jan 4, 2007

Five years after its inception, the Storage Performance Council sets out to reach a wider audience

No other storage topic is more sensitive for vendors and more important for potential customers than performance measurement. I’ve had vendors refuse to send me a product for review because of disagreements on which speed benchmarks to use during the evaluation.


Even when there is agreement on the tools used to measure performance, vendors may disagree on how to read the results and will dispute their relevance. For example, check out how muddy the waters get in this blog exchange about competing arrays from EMC and NetApp.

I won’t take sides in that debate. It’s rather obvious that those quarrels don’t help buyers who are trying to figure out if either of the two solutions fits their requirements.

In December 2001, the Storage Performance Council announced SPC-1, the first benchmark designed specifically to measure storage performance.

SPC-1 and its logical complement, SPC-2, are designed to measure the performance of storage subsystems regardless of their connection to application servers, and to simulate typical business workloads. It’s a gross simplification, but to put them in context, think of SPC-1 as a “random” benchmark and of SPC-2 as a “sequential” benchmark. You can learn more about the two benchmarks and read their specs here.
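To make that random-versus-sequential distinction concrete, here is a minimal Python sketch (my own illustration, not part of the SPC toolkits) that times the same set of block reads in sequential order and again in shuffled order. On a small cached file the gap may be negligible; on a large file on spinning disk, the random pass is typically much slower because of seek overhead.

```python
import os
import random
import tempfile
import time

BLOCK = 4096      # 4 KiB blocks, a common I/O unit
NBLOCKS = 2048    # 8 MiB scratch file

# Create a temporary file filled with zero bytes to read back.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"\0" * BLOCK * NBLOCKS)

def read_blocks(offsets):
    """Read one block at each offset; return elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

sequential = [i * BLOCK for i in range(NBLOCKS)]
shuffled = sequential[:]
random.shuffle(shuffled)  # same blocks, visited in random order

t_seq = read_blocks(sequential)
t_rand = read_blocks(shuffled)
print(f"sequential: {t_seq:.4f}s  random: {t_rand:.4f}s")

os.remove(path)
```

The real SPC workloads are far more elaborate (mixed read/write ratios, hot spots, multiple concurrent streams), but the same principle applies: the access pattern, not just the raw throughput of the hardware, determines the number a benchmark reports.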

Why didn’t the SPC initiative gather more followers? I’m not sure if things have changed since, but according to 2001 reports, disagreements over the randomness of SPC-1 drove at least one vendor, EMC, to leave the council.

Five years later, EMC is still keeping its distance. The council now has about 30 members, but that number sounds rather unimpressive considering the hundreds of vendors crowding today’s storage market.

However, SPC has steadily (if not quickly) increased the number of published benchmarks and recently added an SPC-2 Toolkit, initially available for AIX, Solaris and Windows Server 2003, that you should be able to purchase online shortly. (Jan. 8, according to SPC.) The council is also working to produce additional benchmarks to measure the performance of basic components of a storage system including HBAs, disk drives, and applications such as logical volume managers.

Will these efforts attract more attention and followers to SPC? Perhaps, but as the number of published benchmarks grows, SPC should make those results more easily accessible. Right now, to assess how a storage system such as the Fujitsu Eternus fared in the SPC tests, you have to dig the numbers out of a PDF file. Neither the friendliest nor the quickest way to find what you need.

Worse yet, the SPC Web site doesn’t allow you to search those benchmarks, and doesn’t have an option to download them to a spreadsheet — an option that other benchmarks, including SPEC, have offered for quite some time.

“That implementation was a unanimous decision of the council to prevent inappropriate comparisons,” says Walter E. Baker, SPC administrator, in an e-mail exchange. Well, they were unanimously wrong, in my opinion, because that decision also makes it extremely difficult for customers to find solutions that meet their needs, which defeats the purpose of publishing performance numbers.

Baker, however, concedes that “the increasing number of SPC Results and increased use of those results require a more sophisticated, user-friendly method of selecting and accessing SPC Result information of interest.” So by the end of the quarter, Baker says, we should be able to search the results.

I will revisit the SPC site at the end of the first quarter to see if the council follows through, but if you have suggestions or strong opinions about the SPC benchmarks please e-mail me or post on my Storage Network blog. I’ll make sure to forward your suggestions to Mr. Baker.