A task, a block of memory, or a network socket that knows why it exists can move anywhere.

The pain point for any IT endeavor is marrying requirements to solutions. Much of the learned art of IT management is invested in planning, sizing, staging, and administering purpose-targeted resources. Organizations that wisely leverage virtualization as a means to provision and reallocate pooled resources still face the challenge of gaining an intimate knowledge of application behavior and runtime requirements.

Applications and operating systems still want to see a predictable, nailed-down universe in which such runtime essentials as memory, disk space, and file handles are fixed resources that can be used up; when they're gone, it's considered acceptable behavior to die or shift into neutral, pending human intervention. The need to avoid this circumstance is one driver behind meticulous planning, and having such failures strike in production is cause for a full-on IT staff scramble. IT also needs to plan for continuity.

It's a pity, isn't it, that operating systems and virtualized infrastructure can't simply know what your applications need and allocate the enterprise's resources to fit, instead of requiring so much observation and scripting? Virtual infrastructure will undoubtedly take on ever-smarter heuristics for automating the distribution of computing, storage, and communications resources.
But setting this up as the only alternative to scripted agility presumes that software within a virtual container will always play a passive role in the structuring and optimization of its operating environment. I submit that to extract ever more value from virtualization, software must take an active role. However, I still hold to my original belief that software, including operating systems, should never be aware that it is running in a virtual setting.

I want to see software, even system software (an OS is now an application), get out of the business of querying its environment to set a startup state and, worse, a continuous operating state. Doing so severely limits the ability of tasks to leap around the network at will, because an OS freaks out if it finds that its universe has changed in one clock tick. Even in the least disruptive case, in which its ceilings were raised, the OS instance (and therefore the mix of applications running under it) would take no advantage of the greater headroom afforded by, say, a hop from a machine with 2GB of RAM to one with 32GB.

So how can software be a partner in the shaping of its virtual environment without trying to wire in awareness of it? Clearly, software must be able to query subordinate software to ascertain its needs. The technology exists now to do this at startup. When commercial software, or software written to commercial standards, is compiled, optimization now includes steps that give the compiler a wealth of information about the application's runtime behavior. One is auto-parallelization. This stage of optimization identifies linear execution paths that can be safely split apart and run as parallel threads. That's some serious science, but the larger the application, the more opportunities there are for auto-parallelization, and on multicore systems the win can be enormous.
The analysis that a compiler must perform to identify latent independent tasks could go a long way toward helping a VM manager decide how an application can be scattered across a pool of computing resources. If the ideal virtual infrastructure is a grid, then the ideal unit of mobile workload is the thread. If the compiler finds that an application is monolithic, that information, too, could be valuable, signaling that the process can be moved only as a whole.

I'm more excited about technology that applies runtime analysis to the goal of optimization. A two-step optimization technique involves compiling the application with instrumentation for detailed runtime profiling, which produces a detailed log of the application's behavior. This log, plus the source and object code, is pushed through the compiler a second time, and the resulting analysis creates potential for optimization bounded only by the intelligence in the compiler.

The pity of these techniques is that the knowledge gained about the software's characteristics and behavior is discarded after the application is compiled for release. It's considered debugging information and is trimmed away for the sake of slimming the executable file and protecting intellectual property (the structure of an application is exposed in detail by debugging information). I imagine system software using this information the way the compiler does: to get inside the mind of a demanding application. If this intelligence were available at runtime, if software carried a detailed autobiography, then a virtualization engine wouldn't need to wonder so much about whether a process, thread, block of memory, open file handle, or network socket could be safely relocated. The kind of surprises that complicate planning and automated reallocation of resources would be significantly reduced.