Last week, a vendor-sponsored survey from Xsigo was published claiming that an I/O bottleneck exists within server virtualization deployments. Vendor survey skepticism aside, many people in the virtualization community will agree that this problem does indeed exist, although it varies from one deployment to the next. Can you spot a bottleneck problem with every virtualization host server? No. In fact, many servers may never be afflicted with this pain for a number of reasons: the virtual machines may not be running intensive applications that exacerbate the problem, the problem may be extremely isolated and only happen during spotty bursts, the server may have been configured with enough hardware to keep the problem at bay, or perhaps the environment was specifically designed to minimize the problem by not fully utilizing the server and keeping the density scaled down. Again, there are a number of ways to minimize the problem or keep it from surfacing.

But for those of us already in the know because we've witnessed this problem firsthand, the results from the survey certainly aren't anything new. Xsigo Systems received more than 100 responses from IT staff members surveyed at Fortune 5000 companies using server virtualization. The survey revealed that IT managers encounter significant cost and cabling issues when configuring connectivity on servers running virtualization software.
Compared with traditional servers, virtualized servers are being configured with more connections, and those configurations are being changed more frequently – two factors that significantly drive up costs. Other key findings from Xsigo's survey: 35% of virtualization users had to reconfigure I/O connections six or more times in the past year, typically because they moved a virtual machine from one physical host server to another. 58% of virtualization users had to add connectivity to a server specifically for virtualization requirements, forcing them into larger servers such as a 3U or 4U chassis rather than a 1U pizza-box server. Perhaps the most significant finding is that server virtualization can substantially increase connectivity requirements: 75% of virtualization users configure seven or more I/O connections per server, compared to two to four connections for a server running without virtualization software.

One way around the I/O bottleneck is certainly the addition of hardware and, consequently, a spider web of cabling. Not a great solution, mind you, but certainly a solution. Because of that, 65% of virtualization users considered cable reduction in their environment a high priority.

Current I/O infrastructure in the data center was designed for traditional server usage, not the virtualized server implementations that are currently on the rise. According to IDC, server shipments in support of virtualization are expected to reach 1.7 million units annually by the year 2010, growing at nearly 41% per year. Because users often prefer dedicated connectivity for individual virtual machines, servers frequently require additional I/O. A simple problem, like having a server with six I/O ports when seven are needed to accommodate virtualization, can add significant capital and labor expenses to a data center.
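To put the survey's connection counts in perspective, here is a quick back-of-the-envelope sketch. The per-server figures (two to four connections for a traditional server, seven or more for a virtualized host) come from the survey above; the rack size of 20 servers is purely an illustrative assumption, not survey data.

```python
# Back-of-the-envelope cabling comparison using the survey's figures.
# RACK_SERVERS is a hypothetical assumption; the per-server connection
# counts are taken from the survey findings quoted in the article.

RACK_SERVERS = 20              # hypothetical rack of servers (assumption)
TRADITIONAL_CONNECTIONS = 4    # high end of the 2-4 range for traditional servers
VIRTUALIZED_CONNECTIONS = 7    # low end of the "seven or more" finding

traditional_cables = RACK_SERVERS * TRADITIONAL_CONNECTIONS
virtualized_cables = RACK_SERVERS * VIRTUALIZED_CONNECTIONS

print(f"Traditional rack: {traditional_cables} cables")    # 80
print(f"Virtualized rack: {virtualized_cables} cables")    # 140
print(f"Extra cables to manage: {virtualized_cables - traditional_cables}")  # 60
```

Even at the low end of the survey's range, virtualizing a single rack nearly doubles the cabling to run, label, and reconfigure, which is why cable reduction ranked as a high priority for 65% of respondents.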