Hasty development and implementation schedules have sidelined maintenance tasks such as backup testing. How long can they stay there?

If I picked up the phone, called any IT administrator I know, and asked which technical part of their job they like least, the answer would almost invariably be backups. Nobody likes backups. In the complex world of mixed physical and virtual environments, it sometimes seems nearly impossible to build a backup regimen that just works every time. Instead, it's much more common to see a few resources fail to back up or to get some kind of ambiguous error that takes hours to properly diagnose and resolve.

Too often, the effort required to yield even the appearance of a consistently successful backup rotation consumes so many hours that it crowds out other crucial tasks, such as testing.

Recently, I got my hands on a survey commissioned by Veeam, a developer of backup tools for virtualized VMware environments, that polled 500 CIOs of companies with more than 1,000 employees. Like many commissioned surveys, this one tilts toward being a sales tool for the sponsor, but it also reveals shocking findings that underline the trouble many organizations have with backups. Several results jumped out at me:

- 43 percent of the businesses surveyed experienced some form of data loss over the past two years.
- 60 percent of the businesses surveyed plan to keep their investments in backup technology flat or to reduce them over the next two years.
- Failed recoveries cost 60 percent of respondents between $100,000 and $250,000 a year, with around 15 percent reporting costs of more than $1 million per year.
- On average, respondents estimated that they test only about 2 percent of the backups they make.
- More than 60 percent of respondents said that testing a single backup took between 6 and 12 hours of IT effort, and a similar number cited that lack of staff time for manual testing as the primary reason they don't test backups more frequently.

To paraphrase: Backups are generally considered unreliable, cost a ton of money when they fail to work properly, and are rarely tested thoroughly enough to ensure that they will work. If that sounds like a recipe for disaster, it is.

Of all the survey findings, the one that is most striking (and unsurprising) to me is the incredibly poor backup testing track record. The oft-cited mantra that IT is being asked to do more with less isn't just workplace whining; it's a real and very serious problem that can result in failure to perform IT's most basic functions.

While it's certainly possible that replacing ill-performing backup tools with better ones may decrease the amount of time you spend managing backups and reduce uncertainty about their effectiveness, no new tool will fix the accelerating cultural shift away from basic system maintenance. I don't know how else to say this: It simply has to stop. The cold, hard truth is that, one way or another, it will stop. Eventually, the cost of failed backup recoveries will escalate until organizations take notice and dedicate more resources.
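Part of what makes that 6-to-12-hour testing figure so painful is that the bulk of it is manual restore-and-verify work, which is exactly the kind of task that can be scripted. As a minimal sketch of the idea, and only a sketch, the following Python script restores a tar archive into a scratch directory and verifies file checksums against a manifest. The archive path and the manifest format are assumptions made for illustration; they are not features of Veeam's product or any other particular backup tool.

```python
import hashlib
import json
import subprocess
import sys
import tempfile
from pathlib import Path

# Hypothetical locations; substitute your own backup archive and manifest.
BACKUP_ARCHIVE = Path("/backups/latest/files.tar.gz")
MANIFEST = Path("/backups/latest/manifest.json")  # {"relative/path": "sha256hex", ...}

def sha256(path: Path) -> str:
    """Stream a file through SHA-256 so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def main() -> int:
    expected = json.loads(MANIFEST.read_text())
    with tempfile.TemporaryDirectory() as scratch:
        # Restore into a throwaway directory rather than production storage.
        subprocess.run(["tar", "-xzf", str(BACKUP_ARCHIVE), "-C", scratch], check=True)
        failures = [
            rel for rel, checksum in expected.items()
            if not (Path(scratch) / rel).exists()
            or sha256(Path(scratch) / rel) != checksum
        ]
    if failures:
        print(f"RESTORE TEST FAILED: {len(failures)} file(s) missing or corrupt")
        for rel in failures[:20]:
            print(f"  {rel}")
        return 1
    print(f"Restore test passed: {len(expected)} files verified")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

A job like this, run nightly from cron against the most recent backup, turns "we never test" into at least an automated smoke test, though it's no substitute for periodic full recovery drills.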
For those of us asked to do more with less, it's our job to make sure that stakeholders understand the potential cost of breakneck development and implementation schedules that preclude basics like backup testing. Lobby for more resources before it's too late.

This article, "How 'more with less' puts your data at risk," originally appeared at InfoWorld.com. Read more of Matt Prigge's Information Overload blog and follow the latest developments in storage at InfoWorld.com.