Bob Lewis
Columnist

The six C’s of effective metrics

analysis
Dec 28, 2011 | 7 min read

Useful business metrics follow the six-C formula, beginning with connection and ending with communication

“It’s the Christmas season!” an exasperated correspondent complained following my two recent pieces on the subject of metrics. “Get in a good mood! Enough about how to do metrics wrong! Tell us how to do them right!”

So here’s a gift from me to you: An excerpt on metrics from “Keep the Joint Running: A Manifesto for 21st Century Information Technology,” edited for length. Happy Whateveryoucelebrate! — Bob

Good metrics are connected, consistent, calibrated, complete, communicated, and current — six C’s. In order:

  • Connected: Good metrics are connected to important goals. In fact, they begin as important goals, stated in English.
  • Consistent: Consistent metrics always go in one direction when the situation improves and the other direction when it deteriorates. If good doesn’t always point in one direction and bad in the other, your metric will drive organizational dysfunction. This, by the way, is the biggest problem with the TCO (total cost of ownership) measures so popular in the IT industry. TCO isn’t consistent. Some of the actions you might take to reduce TCO do improve your situation (standardized configurations, for example), but others, such as eliminating training, make it worse.
  • Calibrated: Calibration means you get the same value in the same situation no matter who records it. It also means the data are free from sample bias and other quality problems. It means, that is, you can count on the numbers themselves (NPI — honest!).
  • Complete: Anything you don’t measure you don’t get, so any useful system of measures must include all factors that are important to achieving the goal.
  • Communicated: The purpose of metrics is to drive behavior. If you don’t communicate their purpose, they won’t drive behavior.
  • Current: Goals change. Keep the old measures and you’ll achieve your old goals, not your new ones.

The six C’s are easily stated. Satisfying them is hard. If you’re sure you want to go through with it, here’s how.

Step No. 1: Define success

Be clear on how success looks and feels. If you’re a trendy sort of leader, you might even capture your thinking in the form of a vision statement, mission statement, or stated strategic goal.

Just don’t confuse the statements with what matters. Vision and mission statements are nothing. Understanding the vision and mission is everything.

Step No. 2: Set important goals

Forget SMART. Establish the goals that matter. Goal-setting isn’t the time to think about measurability.

It is, however, a good time to think about making them operational. For example:

  • Goal: Satisfy our customers. Neither objective nor operational. Often, not even your customers will know whether you’ve achieved it.
  • Goal: Make sure customers are happy they chose us. Better, but still not great. Happiness is elusive and sometimes not achievable. If, for example, you’re a dental surgeon, your customers might be happier than if they still had a toothache. But happy? Don’t get your hopes up.
  • Goal: Customers come back and bring their friends. This is an operational goal. It’s about how customers behave, not about how they feel. And you can tell if they have.

Second example:

  • Goal: Employee loyalty. Not operational.
  • Goal: Happy (or proud or whatever) employees. Still not operational, but at least you can ask and hope to get a meaningful answer.
  • Goal: Good, very good, and excellent employees stay, and recommend us to colleagues as a great place to work. Operational; also, highly desirable.

Step 3: Review for completeness

While you’ll never reach the level of geometric proof, think through your list of goals, preferably with the team that reports to you — more eyeballs help — until you’re all confident that if you achieve the goals on your list and do nothing else, you’ll be successful.

Step 4: Translate goals to math

Goals are stated in plain language. Metrics are their equivalents, expressed in the language of mathematics. They are easier to develop when goals are operational than when goals are about attitudes and feelings. That doesn’t mean you can’t develop metrics for goals about attitudes and feelings; it’s just harder.

For customer satisfaction, you’d have to find a way to survey customers and ask them to rate their level of satisfaction on a numerical scale; 1 to 5 is traditional.

For the operational version, you would ask customers as they buy merchandise why they chose to do business with you. If you’re succeeding at your goal, the percentage of customers who are either repeaters or referred will be high.
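As a sketch of what that calculation might look like, here’s a minimal example. The field names and reason categories are illustrative assumptions, not anything from the column:

```python
# Hypothetical sketch: turning "customers come back and bring their
# friends" into a number. Field names and categories are assumptions.
def repeat_or_referred_rate(transactions):
    """Fraction of transactions from repeat or referred customers."""
    if not transactions:
        return 0.0
    qualifying = sum(
        1 for t in transactions
        if t["reason"] in ("repeat", "referral")
    )
    return qualifying / len(transactions)

sales = [
    {"customer": "A", "reason": "repeat"},
    {"customer": "B", "reason": "referral"},
    {"customer": "C", "reason": "advertising"},
    {"customer": "D", "reason": "repeat"},
]
print(repeat_or_referred_rate(sales))  # 0.75
```

The point isn’t the arithmetic; it’s that an operational goal reduces to a count of observable behavior, with no survey required.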

Targets — specific numbers you want to hit or exceed — are optional because they’re a mixed blessing. They do give everyone something to shoot for. But they also define “good enough,” which is generally the enemy of “better.”

Step 5: Determine how to collect the data

Then think through how your data collection methods might lead to data quality problems.

If, for example, you plan on using surveys, think about why someone would take the time and effort to fill one out. The most likely reasons translate directly to possible sources of sample bias:

  • Some companies offer customers a financial incentive to take Web surveys. The inevitable result: Survey-takers don’t care whether they fill out the questionnaire accurately.
  • Other companies don’t. Their data quality challenge: Ticked-off customers are more likely to fill out the survey than happy ones.
  • Also recall that if you plan on using the data to assess employee performance, you’ll almost certainly end up with unusable data. Resist the temptation.
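One cheap sanity check on sample bias: compare survey respondents to the full customer base on a variable you already track. If they differ sharply, treat the survey scores with suspicion. The threshold and the use of spend as the comparison variable are my assumptions, purely for illustration:

```python
# Hypothetical sketch: flag possible sample bias by comparing survey
# respondents to the whole customer base on a known variable (spend).
def mean(xs):
    return sum(xs) / len(xs)

def bias_warning(all_spend, respondent_spend, tolerance=0.2):
    """True if respondents' average spend deviates from the base
    average by more than `tolerance` (as a fraction)."""
    base = mean(all_spend)
    return abs(mean(respondent_spend) - base) / base > tolerance

all_customers = [20, 25, 30, 35, 40, 45, 50]
respondents = [20, 22, 25]  # mostly low spenders answered
print(bias_warning(all_customers, respondents))  # True
```

Passing the check doesn’t prove the sample is unbiased, of course; it only catches the obvious cases.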

Step 6: Fine-tune each metric

Go back to the help desk described a couple of weeks ago. While the incident close rate by itself isn’t complete, you might decide it’s useful when combined with other metrics. If so, look at how it could encourage undesirable decisions, and make adjustments to correct the problem. Examples:

  • Treating all incidents as being equal when some are much more difficult to resolve than others. Solution: Weight each incident according to level of difficulty.
  • Dealing with contacts rather than cases. Don’t mistake completing a call with fixing a problem. And don’t give credit for fixing lots of problems because lots of people called about a single problem. Solution: Organize incidents into cases, and only close cases, not calls.
  • Encouraging undesirable escalation, which will happen if you factor in time-to-resolution without great care. That’s because every time a help desk analyst escalates a problem, it pulls someone away from whatever other high-value activity they were engaged in. Solution: Left as an exercise for the reader.
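The first two adjustments can be sketched in a few lines: group contacts into cases, weight each case by difficulty, and only credit closed cases. The weight values and field names are illustrative assumptions, not a standard:

```python
# Hypothetical sketch of the adjustments above: cases, not calls,
# each weighted by difficulty. Weights are illustrative assumptions.
def weighted_close_rate(cases, weights):
    """Weighted fraction of cases closed; one case may cover many calls."""
    total = sum(weights[c["difficulty"]] for c in cases)
    closed = sum(weights[c["difficulty"]] for c in cases if c["closed"])
    return closed / total if total else 0.0

DIFFICULTY_WEIGHTS = {"easy": 1, "medium": 3, "hard": 5}
cases = [
    {"id": 1, "difficulty": "easy", "closed": True},   # 40 calls, 1 case
    {"id": 2, "difficulty": "hard", "closed": False},
    {"id": 3, "difficulty": "medium", "closed": True},
]
print(weighted_close_rate(cases, DIFFICULTY_WEIGHTS))  # 4/9, about 0.444
```

Note that case 1 counts once no matter how many people called about it, and the unresolved hard case drags the rate down more than an easy one would.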

If you want to do the best job of fine-tuning your metrics, review the results with the teams responsible for them often, and ask how accurately they think the results describe reality as they experience it. When a measure distorts the real situation as team members understand it, listen to them and fix the problem.

Step 7: Communicate the results

That is, after all, the whole point: to let everyone know how you’re doing.

How? If you’ve ever worked in a company that’s run a United Way campaign you already know a big part of the answer: graphically and prominently. Remember the United Way thermometer? It shows the target and how close the company is to reaching it. Everyone knows. If the campaign leaders have done their job, employees want the company to reach the target. It drives behavior.

Remind employees, over and over:

  • What the goals are.
  • How achieving them will make the organization more successful.
  • How your measures connect.
  • The current results and whether they’re good.
  • The plan for further improvement.

Bottom line

Designing an effective system of organizational metrics is far, far more difficult than most who recommend the practice acknowledge. It’s akin to designing a cockpit that allows a pilot to fly on instruments. It can be, and has been, done, and the bigger and more complex the aircraft, the more important it is to be able to fly this way.

But compare it to a typical car driver, who relies mostly on the view through the windshield, supplemented by three mirrors and four instruments.

The parallels are, I trust, clear.

This story, “The six C’s of effective metrics,” was originally published at InfoWorld.com. Read more of Bob Lewis’s Advice Line blog on InfoWorld.com. For the latest business technology news, follow InfoWorld.com on Twitter.