Creativity and execution are hallmarks of great leadership, as these eight technologists proved in pushing their organizations in bold new directions.

Leading technology — as a creator, manager, implementer, and business catalyst — is no small feat even in the course of running IT or a business. Technology changes rapidly, and it often becomes increasingly complex. The problems and opportunities to which it is applied are equally variable, messy, and involved; the easy “just add automation” problems have already been addressed.

Technology leadership in its four key forms is at the heart of InfoWorld.com’s mission, and the InfoWorld Technology Leadership Awards honor those who have been exceptional technology leaders over the past two years. No “we did it in six weeks” projects here — true technology leadership spans constituencies and technologies, and it’s often exemplified by projects months in the making.

The TLAs have a broad mission to recognize two key shifts in IT.

First, deployment is no longer the main game for IT, even if it remains the bulk of the effort spent. Instead, creating value through technology — within IT, of course, but also by helping the business grow — is where leadership matters. As technology increasingly permeates the business, IT is providing more businesswide inspiration. And it’s not just the CIO or CTO; IT project managers, admins, architects, and the like are equally capable of contributing, so the TLAs now honor leaders regardless of title.

Second, technology is no longer the sole province of IT. Nearly every businessperson today has been using technology at work and at home for two decades, and most are more than passably familiar with a variety of computer technologies. Thus, limiting technology to the high priests of IT is untenable. But so is the notion that the business is simply a customer of IT; that too suggests a “father knows best” mentality. It’s no accident that over the past two decades the main technology drivers of business change — the PC, the Internet, cloud computing, mobile computing, and increasingly social technology — were pushed not by IT but by businesspeople. Thus, the TLAs look for technology leadership anywhere in the business, not just within IT.

The 2013 TLAs showcase such leadership across the business and IT, as well as across roles. IT professionals remain the heart of technology leadership — no surprise to us, given the passion and creativity many technologists bring to the table. Our winners, selected by a panel of InfoWorld editors from nearly 100 nominees, fall into four categories of leadership:

IT management, which honors technologists who assert leadership in the realm of IT, typically around management and enablement of IT as a whole.

Technology creation/enhancement, which honors the creative side of technologists. Here, leadership is about vision and execution, setting a new course for technology, and coming up with novel approaches to make it happen. We don’t honor vendors’ creation of innovative products here (that’s what our Technology of the Year Awards are for), though we honor internal products created as a by-product of IT innovation, as well as broad technology innovation at vendors.
Technology deployment, which honors the most exceptional leadership in the types of challenges IT faces day in and day out (it’s no surprise this category had the greatest number of nominations): designing, deploying, and maintaining the technology systems that the business depends on to succeed.

The TLAs have no set number of winners, nor need there be honorees in each category. We’re looking for the best, period. (For details on the criteria and how to enter for 2014, go to the InfoWorld Technology Leadership Awards page.)

We’ve found it, as the 2013 Technology Leadership Awards winners show. We present them in alphabetical order within each category:

Business management: Anthony Ricco, Citrix Systems
IT management: Tim Bell, CERN; John Martin, Edmunds.com; Chris Whyde, Capital One
Technology creation/enhancement: John Basso, Amadeus Consulting; Brian O’Neill, Health Market Science
Technology deployment: Craig Brown, STEM Resource Partners; Babette Davis, California EDD

TLA 2013: Business Management
Anthony Ricco, vice president of demand marketing, Citrix Systems

The sales process is all about identifying the likeliest prospects, using a method called lead scoring to determine where to focus sales efforts. Traditionally, such scoring is based on human assumptions and gut checks by the sales and marketing staff — an inherently biased method and a self-fulfilling prophecy: What sales says is a hot lead becomes a hot lead, based on past experience. But because people have different perspectives and emotions, there is no science to this method.

At virtualization vendor Citrix, Ricco wanted to apply science to the process of lead scoring, to determine objectively where the best prospects were for sales to focus its efforts. He sponsored and funded a systematic approach to demand generation and predictive analytics. The effort built a finely tuned scoring model based on the marketing industry’s knowledge of implicit and explicit variables and cumulative scoring. As a result of this data-driven analysis, Ricco’s marketing group reduced the number of leads sent to sales by 12 percent, increased the number of marketing contacts by 23 percent (meaning more engagement per contact), increased the sales conversion rate by 31 percent, and decreased go-nowhere interactions (deactivations) by 21 percent.

To optimize the scoring model, Ricco’s team applied predictive analytics to the last three years of its marketing database and identified 327 potential factors of lead conversion. It then ran logistic regression on a random sample of more than 1 million unique contacts to build a highly predictive model. The final model includes about 60 predictors, each weighted by its relative strength.
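The article doesn’t disclose Citrix’s actual model, but the technique Ricco describes is standard: fit a logistic regression to historical conversions, keep the strongest predictors, and use the predicted probability as the lead score. Here is a minimal sketch using scikit-learn; the data file and feature names are hypothetical stand-ins, not Citrix’s real predictors.

```python
# Minimal sketch of logistic-regression lead scoring. The CSV and
# feature names are hypothetical; Citrix's real model narrowed 327
# candidate factors down to roughly 60 weighted predictors.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# One row per marketing contact; 'converted' is 1 if the lead
# ultimately turned into a sale.
contacts = pd.read_csv("contacts.csv")
features = ["web_visits", "emails_opened", "events_attended",
            "title_score", "company_size"]  # hypothetical predictors
X, y = contacts[features], contacts["converted"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# The fitted coefficients play the role of each predictor's
# "relative strength" in the scoring model.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")

# The predicted probability of conversion becomes the lead score.
contacts["lead_score"] = model.predict_proba(X)[:, 1]
```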
“We can optimize for the volume of leads that marketing should send to sales. We know the capacity of our sales reps, and beyond a certain number of leads it doesn’t make sense to burden sales with additional leads. Knowing the number of qualified leads we’re generating based on our lead flows enables us to see our staffing requirements and informs us when we need to grow the team,” Ricco says.

TLA 2013: IT Management
Tim Bell, infrastructure manager, Conseil Européen pour la Recherche Nucléaire (CERN)

CERN, the European organization for nuclear research, operates the world’s largest particle physics laboratory, hosting 10,000 visiting scientists and engineers representing 608 universities and research facilities. You’ve likely read about its latest discovery, a particle believed to be the Higgs boson — a discovery the science community considers equivalent to landing a man on the moon. The Higgs boson search was conducted through CERN’s Large Hadron Collider, the world’s most powerful particle-smasher. This collider produces 1 petabyte of data per second and demands an astronomical amount of computing power. And it’s not the only collider at CERN producing data.

The growth in CERN’s computing needs forced the team to create a new data center similar in size to its existing facility, which held more than 10,300 servers, 83,000 disks, and 90,000 cores. However, with a limited budget, the IT staff at CERN was not allotted new staff members to help manage this new data center, which by 2015 will amount to more than 15,000 hypervisors running 150,000 virtual machines.

Bell and his team needed to come up with a cost-effective way to manage both of these data centers with innovative technologies and methodologies. For years, the team had been building its own software because it was on the bleeding edge of technology (for example, the World Wide Web was invented at CERN in 1989 to address the challenges of sharing information). But continuing to create hardware and software from scratch with the staff available was not feasible.

The big breakthrough came when the team noticed that Web and cloud companies were facing problems similar to CERN’s. Although CERN’s data center was specialized, it could still use a similar blueprint. The team adopted OpenStack, the open source cloud platform, and an automation tool called Puppet. Through these technologies, CERN was able to scale its infrastructure in a matter of months — not years — without having to customize every piece of hardware and software. As a result, Bell’s team was able to stay on budget, scale CERN’s infrastructure more quickly, and create a system to address CERN’s future infrastructure needs.
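CERN’s actual tooling is OpenStack plus Puppet; as a flavor of what programmatic provisioning against an OpenStack cloud looks like, here is an illustrative sketch using the openstacksdk Python library. The cloud profile, image, flavor, and network names are hypothetical.

```python
# Illustrative sketch of provisioning VMs against an OpenStack
# cloud with the openstacksdk library. Names are hypothetical;
# in CERN's setup, Puppet would then configure the booted nodes.
import openstack

conn = openstack.connect(cloud="my-cloud")  # reads clouds.yaml

image = conn.compute.find_image("cc7-base")      # hypothetical image
flavor = conn.compute.find_flavor("m1.large")    # hypothetical flavor
network = conn.network.find_network("internal")  # hypothetical network

# Boot a small batch of worker nodes; at CERN's scale this loop
# would be driven by capacity planning, not a literal range().
for i in range(10):
    server = conn.compute.create_server(
        name=f"worker-{i:03d}",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    conn.compute.wait_for_server(server)
    print(f"{server.name} is up")
```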
TLA 2013: IT Management
John Martin, senior director of production engineering, Edmunds.com

“In October 2010, I walked out of what had turned into a particularly gruesome war room. We had just launched a project that we called Edmunds 2.0, which basically involved re-architecting our application around SOA principles, and we had found some problems in the production runway. We had been in the war room for days, and it was starting to smell. As I left the room that day I thought to myself, ‘There has to be a better way,’” recalls Martin.

That better way is devops, which Martin spent the next three years building with the business leadership, development teams, and IT operations teams at the automobile information site Edmunds.com.

Before the devops approach was implemented, ops wasn’t involved in a project until just before it was due to go live. “This meant that we often ended up with software that dev had spent months working on that wasn’t going to fly in production. Ops would double the infrastructure and it would work OK, but that wasn’t really good enough,” Martin recalls. “Worse, new features and apps from development often don’t work as advertised, and our solution was typically to throw more infrastructure at the problem. This is not only expensive, but time-consuming,” he says.

There was no automation in place to help with the necessary testing and deployment work. As a result, dev and ops spent a lot of time fighting in war rooms rather than delivering working systems. “What we needed was better alignment of dev and ops — in short, devops. It wasn’t difficult to justify this project to our leadership, but it was difficult to effect change in an organization where dev and ops are basically pitted against each other: Dev wants to release lots of features very quickly, and ops wants to minimize change to keep the app stable,” Martin says.

Martin addressed that challenge by creating an intermediary team, which he led, to unite the tooling and processes that dev and ops use, such as deploying Chef for configuration management and application deployment, and AppDynamics and Splunk for application monitoring and log analysis, as well as adopting common QA tools across dev and ops. Although the use of an intermediary wasn’t an ideal approach, Martin didn’t believe his organization could yet handle more drastic changes. “As usual, the technology was the simple part. Changing the culture turned out to be far more difficult. The only way to improve the performance of our site while meeting business requirements for new features was to start entering each other’s worlds. Ops needs to be involved in new initiatives to provide guidance from a performance perspective, and development needs to be responsible for deploying and maintaining their code in production. Getting to this point would require everyone in the organization to start thinking differently about their roles, and to start taking on tasks that weren’t in their job description,” he says.

Martin decided he had to show it was a joint effort by making a developer his partner in the devops experiment, even providing him production-level access. That made Martin nervous, but he knew he had to walk the talk if he expected others to. “The best way to effect cultural change is to find your champion on the other side and to bring them into your world, and this is what we’ve been doing — with great results.”

From a productivity perspective, Edmunds.com spends significantly less time finding and troubleshooting performance issues in production than it did before. It spends about two fewer hours per week fixing problems, and it estimates it has saved about $1.2 million through improved uptime and increased productivity by aligning dev and ops and unifying its tool set.

TLA 2013: IT Management
Chris Whyde, quality assurance manager, Capital One

In 2012, the group at credit card issuer Capital One focused on card partnerships identified IT practices supporting quality assurance that lacked sufficient discipline and maturity: test automation, regression testing, and functional alignment within QA. As QA manager, it fell to Whyde to address the issues, so he mobilized a team to tackle the concerns around cost control, quality, and maturity of the testing team. He formulated a new automation framework whose hybrid nature could support automation of all 40-plus applications tied to the QA function, with a goal of 70 percent automated regression testing.
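Capital One hasn’t described the framework’s internals, but the core idea of data-driven regression automation can be sketched generically. Below is a hypothetical pytest example in which one parameterized test fans out across many applications; all application names and URLs are made up.

```python
# Generic, hypothetical sketch of data-driven regression automation
# in pytest: one parameterized test covers many applications, the
# spirit of a "hybrid" framework spanning 40-plus apps.
import pytest
import requests

# In a real framework this table would come from a config file or
# database, one row per application under test. Names are made up.
REGRESSION_TARGETS = [
    ("card-portal", "https://qa.example.com/card-portal/health"),
    ("rewards-api", "https://qa.example.com/rewards/health"),
    ("statements", "https://qa.example.com/statements/health"),
]

@pytest.mark.parametrize("app,url", REGRESSION_TARGETS)
def test_app_healthy(app, url):
    """Smoke-level regression check: the app responds and reports OK."""
    resp = requests.get(url, timeout=10)
    assert resp.status_code == 200, f"{app} returned {resp.status_code}"
```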
Whyde also used industry standards in a risk-based analysis of the existing regression suites and fine-tuned them to cover 30 percent more functions with fewer resources required to test. He also delivered the automation several months before the deadline.

So far, the new automation framework saves labor equivalent to four full-time staff, yet allows for more testing. Whyde expects the labor savings to grow further. “Because of the scale of our organization and because our processes have been in place for so long, converting our organization to be more efficient and to use automation to support our QA processes was a significant task,” Whyde notes. “In addition, to obtain ROI in Year 1 of an automation framework is rare in the industry.”

TLA 2013: Technology Creation/Enhancement
John Basso, CIO, Amadeus Consulting

Most businesses gather large amounts of data — time cards, product orders, sales, contracts, basic costs, and the normal accounting that takes place in every industry. With a little extra effort, this data can be used to provide much more detailed and insightful information that can inform company executives on strategic decisions and help them foresee potential challenges.

At .Net development shop Amadeus, Basso led the charge to build an internal business intelligence and project management tool called Nexus Cube that pulls from multiple data sources such as Salesforce.com, FogBugz, QuickBooks, and Chronos. The data is indexed into an easy-to-read format. Real-time updates are produced using SQL Server Analysis Services to run the cube and SQL Server Integration Services to pull the data from the source databases.

Instead of having one system that tracks employees’ project time and another that tracks project progress, Nexus Cube brings these data sources together. It offers a multidimensional view of company data, supporting everything from macro-level decisions such as increasing the workforce to micro-level input on items like employee time spent to date on a specific project, to track the accuracy of project estimates versus actuals.

Nexus Cube also provides real-time data visualization that allows Amadeus Consulting’s executives and development team to make immediate course corrections. For example, the company can monitor its budget and human resources to ensure it completes a project on time and on budget. Plus, Nexus Cube serves as a historical analysis tool to help the company bid appropriately and manage resources more effectively on future projects. Finally, its metrics are used in performance reviews that measure both the quantity and quality of individuals’ work.

Nexus Cube is unlike other internal data management and analysis tools because it slices data into multidimensional views, so users can isolate specific pieces of information related to project operation and development. For example, if a cube is composed of sales figures, time, product categories, and region dimensions, the user can choose to view a slice of the cube that shows sales figures filtered by product category, time, and/or region.
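Nexus Cube itself runs on SQL Server Analysis Services, but the slicing idea can be made concrete with a small pandas sketch built on the article’s own sales example. The data here is invented purely for illustration.

```python
# Illustration of multidimensional "slicing" using the article's
# sales example (sales by time, product category, and region).
# The data is invented; Nexus Cube uses SQL Server Analysis
# Services rather than pandas.
import pandas as pd

sales = pd.DataFrame({
    "quarter":  ["Q1", "Q1", "Q2", "Q2", "Q1", "Q2"],
    "category": ["web", "web", "web", "mobile", "mobile", "mobile"],
    "region":   ["west", "east", "west", "east", "west", "east"],
    "amount":   [120, 80, 150, 90, 60, 110],
})

# The full cube: sales totals for every (quarter, category, region).
cube = sales.pivot_table(index="quarter",
                         columns=["category", "region"],
                         values="amount", aggfunc="sum")
print(cube)

# A slice: fix one dimension (region == "west") and view the rest.
west = sales[sales["region"] == "west"].pivot_table(
    index="quarter", columns="category", values="amount", aggfunc="sum")
print(west)
```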
TLA 2013: Technology Creation/Enhancement
Brian O’Neill, lead architect for software development, Health Market Science

Master data management vendor Health Market Science (HMS) serves the health care industry, where records exist in multiple systems and often vary in nomenclature, format, and structure. HMS’s job is to provide the master file for these common records, used to discover fraud, waste, and abuse in the health care system, both for forensic analysis and for real-time authorization of, say, controlled-substance prescriptions.

The typical approach is to create a relational data warehouse to hold the normalized and validated data, but that approach significantly limits the scalability of such a platform, in terms of both the volume and the variety of data, and thus the speed of the analytics possible on the master file. Additionally, to support both current and historical analysis, the platform must maintain point-in-time and revision information for every entity in the system.

To address the issues of data variety, volume, and velocity, O’Neill championed a novel solution that combined cutting-edge big data techniques. One challenge was to integrate the legacy relational database management system with the big data platform. This required creative thinking and nontraditional integration techniques, so HMS could continue to deliver from the legacy platform while developing new products, capabilities, and services on the new one.

The effort required abandoning traditional mechanisms that assure data consistency and integrity (such as locking) and instead embracing techniques that allow for eventual consistency in the system, while shielding users and services from inconsistent states and integrity issues as data changes. That led to the adoption of NoSQL approaches and technologies such as the Cassandra nonrelational data store and the Storm distributed processing framework. O’Neill ended up becoming a contributor to the open source Cassandra project and championed the establishment of numerous open source projects that extended Cassandra’s capabilities and allowed it to integrate more easily into a loosely coupled, services-based infrastructure.
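HMS’s services aren’t shown in the article, but the eventual-consistency trade-off O’Neill describes is visible in how Cassandra clients tune consistency per query. Here is a minimal sketch using the DataStax Python driver; the keyspace, table, and host addresses are hypothetical.

```python
# Minimal sketch of tunable consistency with the DataStax Python
# driver for Cassandra. Keyspace, table, and hosts are hypothetical.
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["10.0.0.1", "10.0.0.2"])  # hypothetical nodes
session = cluster.connect("master_file")     # hypothetical keyspace

# Fast, eventually consistent write: a single replica must ack.
write = SimpleStatement(
    "INSERT INTO providers (npi, name, revision) VALUES (%s, %s, %s)",
    consistency_level=ConsistencyLevel.ONE)
session.execute(write, ("1234567890", "Dr. Example", 7))

# Stronger read, for services that must shield users from stale
# state: a quorum of replicas must agree before returning.
read = SimpleStatement(
    "SELECT name, revision FROM providers WHERE npi = %s",
    consistency_level=ConsistencyLevel.QUORUM)
row = session.execute(read, ("1234567890",)).one()
print(row.name, row.revision)
```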
Because its architecture is not based on batch-processing frameworks such as Hadoop, the platform better supports real-time integration via Web services. This enables self-service models and rapid, lightweight integration between systems — in other words, mashups.

HMS’s big data platform can process orders of magnitude more data than the legacy technology. Also, because it is built entirely on open source technologies, there are no mandatory licensing costs as HMS expands the system’s cluster capacity and, thus, no financial worries over growing and shrinking the cluster to accommodate demand. And because the platform is based on open source software, HMS has been able to extend the base capabilities of the infrastructure to meet market needs much faster than is possible with commercial software.

TLA 2013: Technology Deployment
Craig Brown, CEO, STEM Resource Partners

Two big issues for IT today are the BYOD and big data phenomena — especially in organizations that resist change. The challenge is getting the organization to recognize that change is needed and to invest in technology that will bring about more efficient ways to collect data, manage customers (both internal and external), market to new customers, and solve communication issues.

Brown faced this challenge in trying to manage the expectations of the senior executive team around adoption of these emerging trends while maintaining business-as-usual reliability, security, and costs. He looked for low-cost technology that would let him deliver a small innovation within his budget but provide benefits in at least one area that would be recognized as an accomplishment, justifying a serious enterprisewide investment.

Brown also had to engage in politics around misguided perceptions and ideology. “I had to first correct the perception that the business was doing fine and that change was not needed,” he recalls.

He created that proof-of-concept project under the radar. It allowed the sales team to create a new customer profile via a new mobile application — an approach that worked well in point-of-sale contexts. He was then able to enhance the mobile application so that the customer service team could use it to improve internal communication — a cost savings to the business. The mobile applications were welcomed by the administrative, customer service, and sales teams.

These small successes got senior management to buy in to a new approach to improving a world it had seen as already perfect. Now, teams are no longer tied to email at their workstations. Instead, they can go out and work with both their customers and external support staff. Some competitors were already doing this, so the effort at least got STEM into the new game. “My plan only worked because I had the support of the team leads and the project managers,” Brown notes.

Both the mobile applications and the devices paid for themselves with just part of the new revenue they helped generate. This revenue exceeded the mobile application development costs, so the effort fit well with the company’s culture of not increasing short-term or long-term costs.

TLA 2013: Technology Deployment
Babette Davis, executive liaison for Disability Insurance Branch, California EDD

For more than 25 years, the Disability Insurance unit of the California Employment Development Department (EDD) had been using a mainframe system to process disability claims. That archaic system was inflexible and required every claim to be manually entered and processed. The EDD was able to pay claims on time, but it couldn’t provide the faster, more convenient, and more accessible services that citizens increasingly expect in the Internet age.

The EDD thus launched an effort six years ago to create a new system, called SDI Online, envisioned as a state-of-the-art, Web-based solution to automate the nation’s largest state disability insurance (SDI) program and improve the processing of SDI and paid-family-leave claims. It was also tasked with delivering high-quality self-service for claimants, medical providers, and employers. The effort was one of the most complex business and technology transformations ever undertaken by the EDD.

The $158 million system went live in September 2012 with just a few hiccups, as EDD staff used both it and the old system during a transition period; the old system was retired in April 2013. Already, more than a half-million claims have been filed through the system, which combines a panoply of technologies and tools, such as Microsoft .Net, IBM iLog, Oracle Identity Management, and SOA governance.

From a technology deployment perspective, efforts the size of SDI Online don’t come quickly or easily in a public-sector environment. For example, project and procurement approval efforts normally start five years before the system is actually developed, which makes it hard to determine and specify appropriate technologies and keep the business and technical knowledge in place.
The fixed-price nature of government contracts also puts vendors in a tough spot if initial expectations or specs don’t pan out, risking cut corners as a result.

In September 2010, Davis took on the job of managing the project through its development, providing project management governance, high-level technology assessment, vendor management, and coordination with the EDD’s director and CIO and with the prime contractors, Deloitte and Unisys. When she entered the project in its fifth month of execution, there was no common goal around the schedule or scope, and the design effort was falling into the standard dysfunction of ping-ponging documents back and forth without real value added or completion of deliverables. The go-live date could not be moved, so Davis had to restructure the project organization while keeping the project moving on schedule.

First, she imposed major milestone dates on all parties, with one schedule maintained by an integrated team of both state and vendor staff. Second, she established a team of state and vendor functional leads for the key project areas of organizational change, training, testing, development, and implementation; these leads mutually owned the progress and issues in their functional areas. Third, she established consistent meeting and reporting schedules so that status was shared across the broad project team. Within a few months, all parties were working toward common goals and expectations, as a true team.

This story, “The InfoWorld 2013 Technology Leadership Awards,” was originally published at InfoWorld.com.