January 25, 2012

Childproof your Metrics Program...

As a new parent years ago, I remember childproofing household dangers like electrical outlets and moving adult objects out of my children’s reach.  Over the years, new hazards appeared, and it took planning to stay a step ahead and prevent injury or damage to innocent children.

I remembered this when I thought about what could be done to avoid software metrics failures – perhaps a form of “childproofing” could avoid a few of the dangers involved.

Software measurement is NOT rocket science (despite the claims of a few eager consultants), but neither is it child’s play.  Measurement must be properly planned or it can actually cause more damage than help!

You likely recall Tom DeMarco’s famous “you can’t control what you can’t measure” statement, but you may not be aware of his later observation that poorly planned metrics can damage an organization.

How to “Childproof” your Metrics Program:

1. Plan a clean metrics program by following the Goal-Question-Metric (GQM) approach. That way, metrics are collected only when they serve a specific purpose.
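The GQM idea can be made concrete as a simple traceability structure: every metric must trace back to a question, and every question to a goal. A minimal sketch in Python follows; the goal, questions, and metric names are invented for illustration, not prescribed by GQM itself.

```python
# Sketch of a Goal-Question-Metric (GQM) plan as a data structure.
# Any metric that cannot be traced to a question (and thus a goal)
# has no purpose and should not be collected.
from dataclasses import dataclass, field


@dataclass
class Question:
    text: str
    metrics: list  # names of metrics that answer this question


@dataclass
class Goal:
    purpose: str
    questions: list = field(default_factory=list)

    def planned_metrics(self):
        """Return only the metrics traceable to a question under this goal."""
        return {m for q in self.questions for m in q.metrics}


# Hypothetical example goal with two questions:
goal = Goal(
    purpose="Improve delivery predictability of Project X",
    questions=[
        Question("How accurate are our effort estimates?",
                 ["estimated_hours", "actual_hours"]),
        Question("How stable is the scope?",
                 ["requirements_changed", "requirements_total"]),
    ],
)

print(sorted(goal.planned_metrics()))
# → ['actual_hours', 'estimated_hours', 'requirements_changed', 'requirements_total']
```

A metric proposed for collection that does not appear in `planned_metrics()` is a signal to either add the missing question or drop the metric.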

2. Make sure that the measurement team understands the plan.  This ensures that specific metrics are collected appropriately and that extraneous data are not left lying around to be misinterpreted.

3. Pilot the measurement and metrics in a controlled environment before rolling them out. Train the right people to collect and analyze the metrics to make sure the intended results can be achieved.  It is far easier to spot dysfunctional behavior (often an unintended consequence of measurement) in a controlled environment, where potential damage can be minimized.

4. Communicate, communicate, communicate. Be honest and specific about the project plan: resources, schedule, and intended results.  Prepare management not to “shoot the messenger” when early results do not match their expectations.

5. Limit data access to those who are skilled in data analysis (do not allow management access to raw data).  Proper data analysis and correlation are critical success factors for any metrics program.

6. Be realistic with management about expectations.  A program designed to meet champagne tastes (for measurement results) on a beer budget seldom succeeds.  Moreover, historical data can sometimes be collected if it is available; other times, data are impossible to collect after the fact.

7. Recognize that wishful thinking about metrics will not disappear overnight.  Management and staff may not understand that measurement should be implemented as a project: there will be a need for planning, design (of measurement processes), implementation (initial data collection), training (in metrics concepts and in the program itself), communication, etc. As a result, people may pay lip service to the overall initiative without understanding that project management is necessary.  Communication and level setting will be an ongoing process.

8. Do not allow data to get into the hands of untrained users.  Often sponsors want to “play with the data” once some measurement data is collected. Avoid the temptation to release the data to anyone who simply asks for access.

9. Do a dry run with any metrics reports before distribution.  After data analysis is done, gather a small focus group together to pilot the results.  It is far easier to address misinterpretations of a data chart with a small group than after a large group sees it. For example, if a productivity bar graph of projects leads viewers to the wrong conclusion, it is easy to correct the charts before damage is done.  It only takes one wrong action based on data misinterpretation (e.g., firing a team for an unproductive project when the real problem was the tools) to derail a metrics program.

To be effective and successful, software measurement requires planning, including consideration of the consequences (such as the dysfunctional behaviors that measurement may cause). “Childproofing” with the steps above will help ensure your measurement program succeeds.

Have a great week!