CIO of vehicle management company PHH Arval outlines its testing and performance management strategy for launching PHH Interactive

You don’t have to look hard to find evidence of IT implementations gone wrong. The Standish Group reports that U.S. companies waste as much as $145 billion a year on failed IT projects. With only nine percent of technology investments completed on time and within budget, the importance of taking control of your application deployment or upgrade from conception through the entire life cycle becomes clear.

As a global vehicle management company handling more than one million vehicles worldwide each day, we at PHH were challenged to develop a mission-critical Web application that links clients, PHH representatives, suppliers, and fleet drivers to a comprehensive data warehouse covering everything from new vehicle orders to car maintenance histories, accident records, fuel purchases, billing, and more. Because we were also experiencing tremendous growth, we wanted our clients, suppliers, and internal users to have immediate, around-the-clock, easy-to-use access to the information they needed.

While planning the development and rollout of PHH Interactive (our secure Web site for clients that tapped into our comprehensive data warehouse), we wanted to ensure our deployment did not become another “failed IT implementation” statistic. So we took a business-centric approach to optimizing and measuring application quality, and we outlined a comprehensive business technology strategy to ensure the new system would perform according to design expectations at launch and over the long term. The strategy called for proactively measuring the readiness and quality of PHH Interactive prior to deployment, and for managing the application after deployment to meet service levels and mitigate risk during application/system software updates and hardware changes.
PHH Interactive was originally designed to include a Sybase data warehouse, a Sybase data mart, an IBM S/390 mainframe, Web application servers, firewalls, and thousands of remote users on PCs running different Web browsers. Implementing a high-performance, customer-interactive Web application that links thousands of users with multiple services is a risky undertaking. Because some companies skimp on pre-deployment testing and focus most of their time and effort on the deployment itself, we knew that conducting a few cursory manual tests would not be sufficient. That’s why we made pre-deployment testing and “previewing” real-life application operations, from the end user’s point of view, a major priority.

Part of the preview process involved test scripts: real-life business transactions (e.g., a fleet manager accessing a vehicle report). We pre-tested how quickly end users could access information as well as how many thousands of “hits” the application could handle. Managing 30 tests, each involving thousands of virtual users, during a one-month period would have been impossible without load testing software that anticipated problems. We used Mercury Interactive’s LoadRunner and were able to run a high volume of virtual users from a few centralized machines. LoadRunner also helped us validate that our infrastructure would support the anticipated load and map out the scalability needed to support future growth.

Once PHH Interactive went live, our objective shifted from testing to performance management. We established an “early warning system,” reusing LoadRunner test scripts to continually monitor how the system performed after deployment. Mercury Interactive’s Topaz platform served as a central repository for application performance management.
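LoadRunner’s own scripting is proprietary, but the basic pattern of the pre-deployment tests described above (many concurrent virtual users replaying a scripted business transaction while response times are collected) can be sketched in Python. The function names and the simulated transaction below are illustrative assumptions, not PHH’s actual test code:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def vehicle_report_transaction(user_id):
    """Hypothetical stand-in for one scripted business transaction,
    e.g. a fleet manager requesting a vehicle report. A real test
    would issue an HTTP request against the staging environment."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated server work
    return time.perf_counter() - start

def run_load_test(virtual_users=50, iterations=4):
    """Drive many concurrent 'virtual users' from one machine and
    collect per-transaction latencies, as a load tool would."""
    latencies = []
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        for _ in range(iterations):
            # Each pass fires all virtual users concurrently.
            latencies.extend(pool.map(vehicle_report_transaction,
                                      range(virtual_users)))
    return {
        "transactions": len(latencies),
        "avg_s": statistics.mean(latencies),
        "p95_s": statistics.quantiles(latencies, n=20)[-1],
    }
```

Scaling `virtual_users` and `iterations` upward is how such a harness maps out the load the infrastructure can sustain and the headroom available for growth.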
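The post-deployment “early warning system” amounts to continually replaying the same scripted transactions and alerting IT when timings drift past a service-level threshold. A minimal, hypothetical sketch of that threshold logic follows; the class name, window size, and thresholds are assumptions for illustration, not details of how Topaz works:

```python
from collections import deque

class EarlyWarningMonitor:
    """Illustrative threshold-based early-warning check: alert when
    too many recent synthetic transactions exceed the SLO."""

    def __init__(self, slo_seconds=2.0, window=10, breach_ratio=0.3):
        self.slo_seconds = slo_seconds          # response-time target
        self.breach_ratio = breach_ratio        # fraction that triggers an alert
        self.samples = deque(maxlen=window)     # sliding window of timings

    def record(self, response_time):
        """Record one synthetic-transaction timing; return True when
        IT should be alerted (window full and breach ratio exceeded)."""
        self.samples.append(response_time)
        breaches = sum(1 for t in self.samples if t > self.slo_seconds)
        return (len(self.samples) == self.samples.maxlen
                and breaches / len(self.samples) >= self.breach_ratio)
```

Checking a ratio over a sliding window rather than a single slow response keeps one outlier from paging anyone, while a sustained slowdown surfaces before customers notice.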
We now have a single point of control, and when issues arise, IT is alerted immediately, before customers are affected. Our testing and post-deployment monitoring strategies do more than keep us “live” 24/7: they keep our clients moving (literally) and help us map performance metrics to specific business goals.

Software Development