Disasters should, by definition, never be scheduled -- unless, of course, you're maintaining a data center. Unprecedented Arctic blasts cause all manner of mayhem, especially in areas where cold and snow are generally mythical. And mayhem means hard-learned lessons. Last week offered its fair share — like what happens to rooftop cooling units that have the right glycol concentration for normal weather but lose their minds when the wind chill drops to -15 Fahrenheit. (Answer: the brutally ironic situation of a chiller freezing due to cold, and a data center roasting when it's -15 outside.)

But that's a problem caused by Mother Nature. You can't prevent it, nor can you realistically forecast it. The fix (usually) is to raise the glycol concentration, but the extra glycol needs to be removed in the spring to prevent problems when the weather warms up. In an installation that is eight years old and has never seen a problem like this, well, you have to take your lumps and deal with it as best you can. Mother Nature is not one to be trifled with.

Man-made problems, however, can be upsetting on another level altogether. Take the enforced, four-hour power shutdown notice you get for the whole building starting at 6 a.m. on a Sunday. Sure, there could be generator backup, but as luck would have it, there was no generator capacity available during the build and no way to add new generator capacity to the facility. Instead, you have a monster UPS that can carry the room for nearly an hour but definitely not four.

Of course, the one benefit this type of man-made disaster has over Mother Nature is that it's scheduled.
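The arithmetic behind that hour-versus-four gap is simple and worth making concrete. Here's a minimal sketch in Python — the capacity and load figures are invented for illustration, and real battery runtime is nonlinear (it falls off faster than this at heavy discharge), so treat it as a rough estimate only:

```python
def estimated_runtime_minutes(usable_kwh, load_kw, inverter_efficiency=0.9):
    """Naive UPS runtime estimate: usable stored energy over effective load.
    Real batteries derate at high discharge rates (Peukert effect),
    so this is optimistic under heavy load."""
    if load_kw <= 0:
        raise ValueError("load must be positive")
    return usable_kwh / (load_kw / inverter_efficiency) * 60

# With a hypothetical 100 kWh UPS: a 110 kW room gets under an hour,
# while shedding load down to 25 kW stretches runtime past three hours.
print(estimated_runtime_minutes(100, 110))
print(estimated_runtime_minutes(100, 25))
```

The point of the exercise: runtime scales roughly inversely with load, so every kilowatt you shed buys disproportionately more minutes as the room gets leaner.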
Backhoe operators and high winds rarely forecast the exact time and date they're going to ruin your day, but electricians usually do.

Still, what to do in this instance? You don't want to take down the whole operation if you can help it. Having the entire room go dark will set a feast for gremlins, as storage arrays spin down for the first time ever and cooling systems that have been running nonstop for years go silent. Objects in motion tend to stay in motion, indeed. When those systems fire back up, it's a virtual guarantee that something will fail, and you're suddenly fighting multiple fires on a Sunday morning.

The best way to deal with this situation is to identify everything that can reasonably be powered down and pare back the data center to the leanest it can be without going completely offline. Leave storage arrays running, but shut down as many physical servers as possible. I've written scripts that take in a list of VMs that can be stopped, implement the shutdowns gracefully, consolidate all the remaining VMs on as few physical hosts as possible, and close down the rest. Every watt you can remove from the UPS load will give you more time on the clock, and that's the goal.

But you need an emergency plan in the event that things go truly south and the UPS can't be stretched further. This usually means a rapid, orderly power-down of the data center. This dance should be scripted as much as possible so that you can act extremely quickly (and accurately) if the situation arises. Those last running servers and VMs need to be shut down ASAP, and in the right order. The storage needs to be next in line, but every check and guarantee must be made that nothing is attached to it when it bails. Then, you'd better be sure that all of your switch configurations have been saved and backed up; although they're going down last, they're still going down.

Of course, you need to make sure there's proper ventilation.
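The consolidation step of such a script can be sketched in a few lines. This is a hypothetical Python version — the VM names, memory sizes, and host capacity are made up, real hosts constrain CPU as well as memory, and the actual shutdowns and migrations would go through your hypervisor's tooling (libvirt, vCenter, and so on). It splits VMs into a shutdown list and packs the survivors onto as few hosts as possible with a first-fit-decreasing heuristic:

```python
def plan_load_shed(vms, host_capacity_gb):
    """Given VMs as (name, mem_gb, essential) tuples, return
    (to_shutdown, placements): the names to shut down, and the
    essential VMs packed onto as few hosts as possible using
    first-fit decreasing -- a heuristic, not a guaranteed optimum."""
    to_shutdown = [name for name, _, essential in vms if not essential]
    # Pack largest VMs first; each host starts with full capacity.
    essential = sorted(((n, m) for n, m, e in vms if e), key=lambda x: -x[1])
    hosts = []  # each entry: [free_gb, [vm names]]
    for name, mem in essential:
        for host in hosts:
            if host[0] >= mem:      # fits on an already-open host
                host[0] -= mem
                host[1].append(name)
                break
        else:                       # open a new host
            hosts.append([host_capacity_gb - mem, [name]])
    return to_shutdown, [names for _, names in hosts]

# Hypothetical inventory: 32 GB hosts, one expendable build VM.
vms = [("db", 16, True), ("web1", 8, True),
       ("web2", 8, True), ("build", 16, False)]
print(plan_load_shed(vms, 32))
```

With these made-up numbers, everything essential fits on a single host, and every emptied host is one more load you can pull off the UPS.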
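The ordered power-down itself can be encoded the same way, with the storage check made an explicit gate rather than a mental note. A minimal sketch — the action strings and arguments are illustrative, not any vendor's API; `storage_clients` here stands for whatever is still attached to the arrays after the VM layer is down:

```python
def emergency_powerdown(running_vms, storage_clients, switch_configs_saved):
    """Produce the ordered action list for a full power-down:
    VMs first, then storage, then switches. Refuses to proceed if
    anything is still attached to storage, or if switch configs
    haven't been saved and backed up."""
    actions = [f"graceful shutdown: {vm}" for vm in running_vms]
    if storage_clients:  # nothing may be attached when storage bails
        raise RuntimeError(f"storage still in use by: {sorted(storage_clients)}")
    actions.append("power down storage arrays")
    if not switch_configs_saved:
        raise RuntimeError("save and back up switch configs before cutting power")
    actions.append("power down switches")
    return actions

print(emergency_powerdown(["vm1", "vm2"], set(), True))
```

Making the checks raise instead of warn is deliberate: at 20 minutes of UPS runtime remaining, you want the script to stop you cold rather than quietly yank storage out from under a live client.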
When the air units shut off, you can expect hot gear that'll bake for a while. When the power returns, usually without warning, all you can do is wait to see what havoc may ensue, then begin triage. Reverse the power-down order, and hope all the storage lights come back up. Then bring everything else back online, encountering more chicken-and-egg situations than you thought possible. If you're lucky, that four-hour power outage will only take 12 hours to repair, and the damage will not be permanent.

Most people don't really think about why data centers need to run 24/7/365. Even when they're largely dormant, powering down these otherwise critical systems only leads to mayhem. This is one of the major benefits virtualization has brought us — the fact that we can power down servers in times of low utilization, but we never, ever go all the way down if we can avoid it. When we can't, we take a deep breath and head into the breach, hoping for the best yet planning for the worst.

This story, "Think of scheduled downtime as disaster recovery on your terms," was originally published at InfoWorld.com. Read more of Paul Venezia's The Deep End blog at InfoWorld.com.