Startup Azul steps into the data center

news
Feb 24, 2005

Next month the company will launch emerging technology designed to speed up processing of middleware apps

If Stephen DeWitt gets his way, the data center will become a much cooler place. Literally.

On a recent tour of his company’s new customer briefing center in Mountain View, California, the chief executive officer (CEO) of Azul Systems buzzed around rack after rack of his company’s hardware, barely able to contain his excitement. “Put your hand there,” DeWitt said, pointing to the back of one of the as-yet-unnamed Azul systems. “No heat.”

DeWitt was excited because the system housed nearly 400 microprocessor cores. In the server world, this would normally generate enough heat to cook a meal.

The next year promises to be a busy one for DeWitt and his 130-employee company. By the end of next month, Azul plans to launch its first product, which is designed to speed up and simplify the processing of middleware applications.

Next week, the company is expected to announce details of a partnership with IBM, and it is also in discussions with Microsoft to bring the Azul technology to the .Net platform. “We are going to partner with Microsoft and we’re going to bring the same sort of segmented virtual machine capabilities to the CLR (the .Net common language runtime) that we bring to the world of Java,” said DeWitt.

The idea behind the Azul appliance is simple. Users install Azul’s proxy software on servers running middleware products such as BEA Systems’ WebLogic or IBM’s WebSphere. The proxy software then transfers Java processing jobs away from the server that is running WebLogic or WebSphere and over to the Azul appliance.
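To picture the kind of work being handed off: a middleware tier typically fans each burst of requests out across many Java threads, and it is this thread-heavy bytecode execution that the proxy redirects to the appliance. The sketch below is purely illustrative of such a workload, not Azul’s actual software, and every class and variable name in it is invented.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical stand-in for a middleware request burst: many small Java
// tasks running concurrently on a thread pool. Under Azul's model this
// code would run unchanged; the proxy decides where the threads execute.
public class RequestBurst {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(32);
        List<Future<Integer>> results = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            final int requestId = i;
            // Each task stands in for the work of one request (invented logic).
            results.add(pool.submit(() -> requestId * 2));
        }
        int checksum = 0;
        for (Future<Integer> f : results) {
            checksum += f.get();
        }
        pool.shutdown();
        System.out.println("handled 100 requests, checksum=" + checksum);
    }
}
```

The point of the sketch is that nothing in it names the hardware it runs on, which is what makes a transparent offload of the Java tier plausible.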

One appliance can work with a number of different applications at once. Azul envisions that the systems will be used to consolidate an entire data center’s worth of application processing in one place, much in the same way that NAS (network attached storage) devices have consolidated file serving into one device. In addition, Azul’s processor is designed to consume less power than conventional chips. “We run at a very modest megahertz range, not in the gigahertz,” said DeWitt.

Azul has already made a heavy investment to make this simple idea work. It has custom-designed its own microprocessor — a chip with 24 processor cores that is built to run many Java operations, called threads, at once. And the company has integrated a number of new technologies into its systems designed to speed up thread processing and reduce the performance bottlenecks that keep Java 2 Enterprise Edition (J2EE) developers up at night.

Azul’s first appliances, also designed in-house, will hold between 4 and 16 of these chips, meaning that they will be able to run as many as 384 threads at once — more than the large SMP (symmetric multiprocessing) boxes on the market — and at a cost that will be lower than a cluster of commodity servers designed to run the same number of threads, DeWitt said.

Though pricing for the Azul systems has not been announced, DeWitt said it will cost less than $1,000 per processor core, putting the 4-way appliance somewhere under $100,000.
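The thread and price figures quoted above are easy to cross-check against each other. This is just a quick sanity calculation based on the numbers in the article (24 cores per chip, 4 to 16 chips per appliance, under $1,000 per core), not math published by Azul:

```java
// Cross-checking the article's figures for the Azul appliance.
public class AzulMath {
    public static void main(String[] args) {
        int coresPerChip = 24;
        int minChips = 4;
        int maxChips = 16;

        // A fully loaded 16-chip box: 24 * 16 = 384 hardware threads.
        int maxThreads = coresPerChip * maxChips;

        // A 4-way box has 24 * 4 = 96 cores; at under $1,000 per core,
        // that caps the price at $96,000 -- "somewhere under $100,000".
        int fourWayCores = coresPerChip * minChips;
        int fourWayPriceCeiling = fourWayCores * 1000;

        System.out.println("max threads: " + maxThreads);
        System.out.println("4-way price ceiling: $" + fourWayPriceCeiling);
    }
}
```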

If Azul can gain a foothold in the application tier, it could be tapping into an extremely large market, DeWitt said. “This is where about 40 percent of Sun’s revenue is. It’s about the same for HP and everybody else,” he said. “This app tier is going to basically go away, or it’s going to slim down to a very, very small services level.”

But not all applications will “slim down” to run on Azul’s appliance. The hardware is not designed to work with more traditional applications, written in languages such as C or Visual Basic, and Azul does not yet support Microsoft’s .Net platform.

A greater problem for Azul is the fact that it is a small company trying an approach to computing that is largely unproven.

“It’s interesting,” said RedMonk LLC analyst Stephen O’Grady. “But frankly, I need more to convince me that it’s true. I think there’s an opportunity there, but is it an opportunity that is going to eviscerate a large established market? I need more data to convince me of that.”

Despite Azul’s claims that its appliance does not require any application-level changes, customers may be hesitant to adopt a whole new server architecture in the already extremely complex world of J2EE development, O’Grady said. “To what extent does this complicate development? To what extent does it complicate architecture and design?” he asked. “There’s a long way to go to see how this is going to shake out.”

Electronic Data Systems (EDS) has been shaking out some of these questions since November of last year. The Plano, Texas, IT outsourcing company has been testing a 16-processor Azul system in its labs, considering it a possible component of its Virtual Services Suite provisioning offering, said Darrel Thomas, the company’s chief technology officer of hosting.

The idea that Azul could be used to consolidate Java processing for the many different applications that EDS hosts makes it very appealing, Thomas said. “We really wanted to get toward the virtualization and consolidation aspects, and standardization functions that are key in our strategy,” he said. “Azul has made the commitment to that vision.”

Though EDS has not done enough testing to be able to say what, if any, performance benefits it may achieve with the appliance, installing the Azul proxy engine software has been relatively painless, Thomas said. “It’s actually unobtrusive to your application, as long as you do your configuration properly and configure your instrumentation properly,” he said. “It’s not a full-blown change to the application itself.”

One of the questions that EDS is trying to answer is how best to take advantage of optimizations that may be specific to a particular middleware vendor. A second concern is figuring out how to bill for the machine’s use, when different departments or even different companies may be tapping into its processing power.

Ultimately, the Azul appliance is one of a number of promising technologies that are reshaping the way that data centers are used, he said. “It’s probably going to make the data center look like one big computer rather than a collection of various computers.”