Bob Lewis
Columnist

The four fallacies of IT metrics

analysis
Dec 14, 2011 | 6 min read

Getting metrics right is essential to effective IT management, but getting them wrong is worse than not having them at all

It was an excellent help desk. Then, my correspondent explained, his CIO, wanting measurable results, established incidents resolved per analyst per week as an appropriate metric for assessing performance.

The company in question had three help desks: one for each major location. As my correspondent explained the situation, the one he managed performed far more poorly than the other two, and he was chastised for his organization’s subpar showing.


What was he doing wrong? He’d established a user self-sufficiency program, that’s what. His analysts spent quite a lot of their time educating employees to be more independent and sophisticated in their use of technology. The result was fewer incidents for analysts to resolve, coupled with higher levels of employee effectiveness.

It was a superior outcome that resulted in poor performance metrics.

“If you can’t measure, you can’t manage,” legendary management guru Peter Drucker once asserted. He was right — just not right enough. The fact of the matter is it’s a lot easier to get metrics wrong than right, and the damage done from getting them wrong usually exceeds the potential benefit from getting them right.

Lewis’s Corollary to the First Law of Metrics: If you mismeasure, you mismanage

Last week’s missive on stupid consultant tricks introduced Lewis’s First Law of Metrics: You get what you measure — that’s the risk you take. Our help desk tale of woe leads us to Lewis’s Corollary to the First Law of Metrics: If you mismeasure, you mismanage.

Imagine that instead of working in IT, you ran the highway patrol. You have a decision to make: Do you rely on unmarked cars and speed traps, or do you instruct everyone on the force to cruise the highways in their regular vehicles?

The right answer depends on clearly understanding what you want to accomplish, then turning that goal into a metric.

If your goal is to catch speeders, your metric will be the number of tickets issued per officer per hour, and you’ll go with the unmarked cars and speed traps. If, on the other hand, your goal is to minimize the amount of speeding on the highways, you’ll make sure every police car is highly visible, cruising exactly at the speed limit. After all, if drivers don’t see a police car, they might count on luck (and their overestimated ability to spot unmarked cars) and continue to speed unless caught. Only the most egregious nitwits will pass a cruising police car.

Sadly, you’ll have a harder time establishing a useful metric if you prefer preventing speeding to catching speeders. Try it yourself: figure out what you’d measure and what kind of data you’d need to track it.

One reason SMART isn’t always smart

SMART is a popular goal-setting technique. It stands for (with some variations): specific, measurable, actionable, relevant, and time-bound.

Who could argue with a formulation like that? The answer: Anyone who, like the highway patrol that decided to cruise rather than catch, prefers prevention to troubleshooting. That’s because, with few exceptions, prevention ranges from being harder to measure to being indistinguishable from “What problem? I don’t see a problem.”

Successful prevention is indistinguishable from absence of risk, as anyone knows who worked on Y2K projects, only to be accused of wasting corporate funds on a phony problem when nothing blew up on Jan. 1, 2000.

This isn’t an isolated case. SMART isn’t always very smart because as a general rule, the more important the goal, the harder it is to establish metrics for which data collection is affordable and the data itself is objective.

Anyone who has worked in IT knows this, because almost no matter what you’re responsible for implementing, the cost is easy to measure. However, minor matters such as sound engineering, maintainable interfaces, and extensibility for future needs are nearly impossible to objectively gauge.

The four fallacies of metrics

There are, it turns out, four different ways to do metrics wrong. You can:

  1. Measure the right things badly.
  2. Measure the wrong things, either well or badly.
  3. Neglect to measure something important.
  4. Extend measures to individual employees.

The first problem is the easiest to avoid. Once you know what you need to measure — what your goals are — the most common glitches are easy to spot and remedy. A common example is failing to weight different kinds of cases differently. Our help desk example would have failed this test, even if resolution rate was the right thing to measure: All calls to the help desk were counted equally, even though different kinds of calls would be expected to take dramatically different amounts of time to fix.
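The weighting fix described above can be sketched in a few lines. This is a hypothetical illustration, not anything from the column: the ticket categories and effort weights are invented, and a real help desk would calibrate them from historical resolution times.

```python
# Hypothetical illustration: weight resolved tickets by expected effort,
# so a password reset doesn't earn the same credit as fixing an outage.
# Category names and weight values are invented for this example.
WEIGHTS = {
    "password_reset": 0.25,   # quick, scripted fix
    "software_install": 1.0,  # baseline unit of effort
    "outage": 4.0,            # lengthy diagnosis and repair
}

def weighted_resolution_score(resolved_categories):
    """Total effort-weighted credit for a list of resolved ticket categories.

    Unknown categories fall back to the baseline weight of 1.0.
    """
    return sum(WEIGHTS.get(category, 1.0) for category in resolved_categories)

# An analyst who closes ten easy resets scores less than one who
# resolves three outages, even though the raw count says otherwise.
print(weighted_resolution_score(["password_reset"] * 10))  # 2.5
print(weighted_resolution_score(["outage"] * 3))           # 12.0
```

Under a raw resolved-per-week count, the first analyst looks more than three times as productive; the weighted score reverses that, which is the point of not counting all calls equally.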

The second problem is harder to spot, and it was the second metrics sin committed by the CIO in question. Resolution rate wasn’t the right thing to measure. What matters is how much of employees’ working time is free of technical difficulties, whether that’s the result of prevention, rapid resolution, or even dumb luck.

The CIO continued his unbroken streak with metrics fallacy No. 3. Companies should want their employees to take maximum advantage of the tools available to them. It’s another very important and hard-to-measure goal. The help desk manager recognized its importance and instituted programs to move in that direction. The CIO, by failing to measure it, ensured the death of those programs. Anything you don’t measure you don’t get.

That leaves the fourth and most controversial metrics fallacy — extending metrics to individual employees. Tempting as it is, it’s almost always a losing proposition, because smart (as opposed to SMART) employees will almost always figure out ways to game whatever metrics you apply to them.

If you managed the help desk in question or worked on it as an analyst, would you resist the temptation to ask every friend you had in the business to call in on a regular basis with easy-to-fix problems? Maybe you would. Then again, what if your raise or bonus depended on the metric and you’d already lost respect for the powers that be because they were working so hard to keep you from doing what mattered most? I’m guessing that if you resisted the temptation, not only would you be the exception, but you’d be the exception most likely to be included in the next round of layoffs.

It’s the right course of action, isn’t it — laying off the company’s poorest performers?

Bottom line

Metrics do matter. Without some way of knowing whether or not the organization is achieving its most important goals, its managers are flying without instruments. The challenge is getting them right, because there are worse things than flying without instruments. Flying with instruments that provide false readings is, for example, much, much worse.

This story, “The four fallacies of IT metrics,” was originally published at InfoWorld.com. Read more of Bob Lewis’s Advice Line blog on InfoWorld.com.