As server virtualization has matured, it has fundamentally changed the way IT works. What's next?

In the past five years, server virtualization has grown from a niche concept adopted by only the largest and bravest enterprises into an indispensable part of everyday datacenter operations. Yet virtualization has done little to influence the software and operating systems we virtualize. I think that needs to change.

This may seem like an odd thing to focus on. But remember: server infrastructures exist solely to deliver applications to end users. The process of delivering an application usually involves configuring storage and network resources, deploying a general-purpose operating system, reconfiguring that operating system to accept the application, installing the application, and then configuring the application. Virtualization has made the first steps of that process all but effortless, but the effort required to configure the operating system and application layers remains almost entirely unchanged. If you're looking for an area ripe for improvement, there it is.

It's becoming easier to imagine a world where the general-purpose operating system is replaced by a much thinner, purpose-specific framework that exists solely to run a single application. I think that's where the future lies, but we shouldn't expect that kind of industry shift to take place overnight. Instead, the first steps in that direction are being made through the use of virtual appliances: consolidated, downloadable images of a preconfigured operating system and application.
Virtual appliances are not new. To date, however, they have been adopted primarily by the open source community as an easy way to get software into the hands of users who might not be willing to spend time configuring an unfamiliar operating system or application.

Whether due to licensing restrictions or a lack of interest on the part of the larger commercial OS vendors, virtual appliances have not yet penetrated the enterprise application sector. Given the predominance of virtualization in the datacenter today, I don't see how this trend can continue.

There is really no reason a server administrator should spend the better part of a day installing an application like Microsoft Exchange 2007 when it takes only minutes to provision the infrastructure necessary to support it. Why aren't this application and many more enterprise applications like it distributed as appliances? Then the only "installation" you'd need to do would be to integrate the appliance into your Active Directory domain, adjust its storage and resource allocation for the user load you expect, and customize the application configuration for your organization. This is a logical outgrowth of the same time- and money-saving advances already made in the virtualized infrastructure below the application.

In fact, with VMware's concept of a vApp, a collection of multiple VMs treated as a single logical application environment, it would even be possible to deploy an entire multitier application direct from the software manufacturer with very little installation footwork. Your IT department's efforts could then be directed where they belong: application customization and actually working with your data, not watching interminable progress bars for hours.
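Under the hood, a vApp of this kind is typically packaged using the DMTF's Open Virtualization Format (OVF), where a multi-VM application maps onto a VirtualSystemCollection element. The sketch below shows the rough shape of such a descriptor; the element names follow the OVF schema, but the IDs, file names, and two-tier layout are purely illustrative assumptions, not taken from any real product:

```xml
<!-- Illustrative OVF descriptor for a hypothetical two-tier vApp.
     Element names follow the DMTF OVF envelope schema; all IDs,
     hrefs, and tier names here are invented for the example. -->
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <References>
    <File ovf:id="app-disk" ovf:href="app-tier.vmdk"/>
    <File ovf:id="db-disk"  ovf:href="db-tier.vmdk"/>
  </References>
  <NetworkSection>
    <Info>Logical networks used by the vApp</Info>
    <Network ovf:name="AppNet"/>
  </NetworkSection>
  <!-- The collection is what lets the whole multitier application
       be deployed, started, and stopped as one logical unit. -->
  <VirtualSystemCollection ovf:id="example-vapp">
    <Info>A multitier application delivered as a single package</Info>
    <VirtualSystem ovf:id="app-tier">
      <Info>Front-end application server VM</Info>
    </VirtualSystem>
    <VirtualSystem ovf:id="db-tier">
      <Info>Back-end database VM</Info>
    </VirtualSystem>
  </VirtualSystemCollection>
</Envelope>
```

Because the descriptor travels with the disk images, a vendor can ship the entire stack preconfigured and let the hypervisor handle placement, networking, and start order on import.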
I shudder to think how many expensive IT man-hours are spent watching those same installation progress bars each year.

You'd think large software vendors would be jumping all over this, as it offers them the chance to deliver a completely optimized installation of their application. They have free rein to make the application look and perform as well as it possibly can, and to cut their support overhead by performing deep quality assurance that covers both the operating system and the application. It seems like a win-win to me.

Looking forward, I think OS and software vendors will have to make significant changes in the way they license and distribute their software to take full advantage of running in a virtualized world. Many of the tools required to accomplish this are already available through the APIs and open-standards support of the major virtualization hypervisors, so there's no excuse not to get started. These changes, together with more advanced security features such as common IPS and antivirus implemented within the hypervisor stack, will allow virtualization to live up to its full potential.

This story, "Virtualization: Present and future," was originally published at InfoWorld.com.