No disk, no video, no extras -- just processors, RAM, and 10G
C'mon, hardware vendors, we need servers like this

A while back, I ranted about wanting servers designed from the ground up to function without a disk subsystem, which drew comments from quite a few folks who didn't seem to completely understand the main point. My bad -- I'll try to make it clearer today.

When I think of diskless servers, I'm not thinking about small businesses. I'm thinking of medium-size to large businesses that are virtualizing their infrastructures like mad right now. I'm thinking of data centers bumping up against power and cooling constraints. I'm thinking of a soon-to-be reality where local disk is useless. But it needs to go further.

HP and IBM both now offer what they term a "virtualization blade." Neither blade has a traditional disk subsystem, opting instead for flash or SSD-based boot devices and local storage. These may seem to be examples of what I'm talking about, but they aren't quite there yet: the required blade enclosure was designed for the maximum load of fully disk-equipped blades, not for these lower-power, diskless blades.

So imagine a blade chassis designed from the ground up to house completely diskless blades. Not only that, but these blades would be designed without a framebuffer: no disk, no video, just 10G Ethernet and Fibre Channel ports, two low-voltage multicore CPUs, and as many DIMM slots as possible. The power and cooling requirements for such a chassis would differ from those of a traditional chassis, as would the possible blade densities. You'd get an awful lot of bang for your power and cooling buck.

Blades aside, I bet you could shoehorn at least three independent systems of this type into a 1U rack space.
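To see why that density matters, here's a rough back-of-envelope sketch. Every wattage and budget figure below is an illustrative assumption of mine, not a vendor spec or a number from this article; the point is only that stripping disks and video shifts the rack-capacity bottleneck.

```python
# Back-of-envelope rack-density estimate for diskless tri-node 1U servers.
# All figures are illustrative assumptions, not measured vendor numbers.

RACK_POWER_BUDGET_W = 8000   # assumed usable power budget per rack
RACK_UNITS = 42              # standard full-height rack

# Traditional 1U server: local disks, framebuffer, full-size PSUs.
TRADITIONAL_1U_W = 450       # assumed draw per traditional 1U host

# Diskless node: two low-voltage CPUs, RAM, 10G/FC ports, shared PSUs.
DISKLESS_NODE_W = 150        # assumed draw per diskless node
NODES_PER_1U = 3             # three independent systems per 1U chassis

def hosts_in_rack(per_host_w: float, hosts_per_u: int) -> int:
    """Hosts that fit in one rack, limited by both space and power."""
    by_space = RACK_UNITS * hosts_per_u
    by_power = int(RACK_POWER_BUDGET_W // per_host_w)
    return min(by_space, by_power)

traditional = hosts_in_rack(TRADITIONAL_1U_W, 1)
diskless = hosts_in_rack(DISKLESS_NODE_W, NODES_PER_1U)

print(f"Traditional 1U hosts per rack: {traditional}")   # power-limited
print(f"Diskless tri-node hosts per rack: {diskless}")   # still power-limited
```

Under these assumed numbers, the traditional hosts hit the power ceiling at 17 per rack while the diskless nodes fit 53, roughly tripling host count in the same power envelope. Swap in your own measured wattages to get a real answer.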
Lacking the disk subsystem, framebuffer, and associated parts and ports, you'd have only three or five ports on the back of each server (two 10G and one serial, or two 10G, two FC, and one serial), and plenty of room for RAM. Each server would have optional 1.8-inch SSD slots, or just SD card slots, nothing more. A common set of redundant power supplies could power all three (or potentially even four) servers. A system such as this might present some hotspot issues, but only if deployed in quantity -- I think of these units as remote-site servers, or servers for medium-size datacenters where there might be only a few in a rack.

With one or two of these triple servers in 1U, copies of VMware ESX or ESXi, and an iSCSI storage array, you could have the entirety of a small-to-medium business's server infrastructure in four or five rack units, no KVM required. How compelling is that?

The time has come to start taking the legacy out of our servers. Today, the vast majority of virtualized infrastructures run on 1U, 2U, or blade-based systems. They have local disks, framebuffers, and all the trimmings normally associated with single-instance servers, but they simply don't need them. Very rarely does anyone use a console on a VMware ESX or ESXi server, and if they do, it's all text-based anyway, so there's no point in having a framebuffer -- a serial console is perfectly functional for these requirements, and it's simpler to access.

When we talk about substantially virtualized infrastructures, we're trying to drag race with Humvees, and it's costing us money left and right. The next step is to get a real dragster and start wringing as much out of virtualization as we can.

This story, "Why small, diskless servers make sense," was originally published at InfoWorld.com. Follow the latest developments in servers, and read more of Paul Venezia's The Deep End blog at InfoWorld.com.