Measuring Software Development Productivity

You can’t manage what you can’t measure.

Management guru Peter Drucker is often quoted as saying that “you can’t manage what you can’t measure.” While it’s possible and quite useful to measure productivity in a manufacturing or services world, is it feasible to do so in the world of software development? Obviously one can count the number of widgets cranked out in an hour, or cups of coffee served or patients attended to in a day – but software? Isn’t that like magic?

Over the years, several attempts have been made to measure software developer productivity and efficiency using lines of code produced, function points developed, user stories delivered, features shipped and more. Do they work? Sadly, no.

Relying on a single measure to assess individual or team productivity is a recipe for disaster. Despite the best of intentions, it will only encourage and propagate bad behavior.

For example, counting lines of code will reward verbose, inefficient code that needs more testing. Counting the number of features developed might delay necessary or must-have features from being prioritized or shipped. Focusing on the number of incidents handled puts the emphasis on “patching” code rather than on innovation or architectural improvements.

Should we then accept that software development is an art, and that it cannot and should not be measured? Not so: we can and should measure software development in ways that encourage the right organizational behaviors and outcomes.

Rather than measuring output, one should measure outcomes: Progress, Quality, Efficiency and Business Value. Here are some sample measures (by no means exhaustive):

Progress

  • Cycle times (from planning to testing)
  • Accumulation of tech debt (How much is the progress costing you over time?)
  • Frequency of builds (do you do daily, hourly, weekly?)
  • Burndown
  • Velocity, and how it trends across sprints
  • Sustainability of that velocity (is the team burning out?)

Quality

  • Failure rates
  • Number of production defects
  • Defect density
  • Test coverage
  • Automated test failure rates
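Several of the quality measures above are simple ratios. Defect density, for instance, is commonly expressed as defects per thousand lines of code (KLOC). A minimal sketch, with hypothetical figures:

```python
def defect_density(defects: int, lines_of_code: int) -> float:
    """Defects found per thousand lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

# Hypothetical example: 18 production defects across a 45,000-line codebase.
print(defect_density(18, 45_000))  # 0.4 defects per KLOC
```

Tracked over successive releases, the trend in this number matters far more than any single value.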

Efficiency

  • Cumulative flow diagrams
  • Mean time to resolution
  • Cycle times (from planning to production)
  • Frequency of production builds

Business Value

  • Cycle time (from concept to go live)
  • Customer usage
  • Business value points delivered (akin to story points delivered)
  • Sales / cost reductions / error reduction – or whatever else the program goals are

By tailoring and weighting these measures to your organization’s vision, culture and desired outcomes and behaviors, you can arrive at a scoring mechanism that is valuable.
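One way such a weighted scoring mechanism could look is sketched below. The metric names, the normalization to a 0.0–1.0 scale (higher is better), and the weights are all illustrative assumptions, not a standard; each organization would pick its own.

```python
def score(metrics: dict, weights: dict) -> float:
    """Combine normalized metric values (0.0-1.0, higher is better)
    into a single weighted score between 0 and 100."""
    total_weight = sum(weights.values())
    weighted = sum(metrics[name] * w for name, w in weights.items())
    return round(100 * weighted / total_weight, 1)

# Hypothetical team: ships often, carries some tech debt.
metrics = {
    "cycle_time": 0.8,      # shorter planning-to-production cycles -> closer to 1.0
    "defect_density": 0.6,  # fewer defects per KLOC -> closer to 1.0
    "test_coverage": 0.7,
    "customer_usage": 0.9,
}
weights = {                 # business value weighted heaviest, as an example
    "cycle_time": 2,
    "defect_density": 3,
    "test_coverage": 1,
    "customer_usage": 4,
}
print(score(metrics, weights))  # 77.0
```

The weights are where the tailoring happens: a maintenance-mode product might weight defect density and mean time to resolution heavily, while a greenfield project might weight cycle time and customer usage instead.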

And remember that no single scoring mechanism fits all scenarios or types of IT projects. A development project may need different measures vis-à-vis a product in maintenance mode. A simple web application with clear requirements will need a different weighting scheme than the rollout of a point-of-sale or ATM system.

Also, bear in mind that as with Agile processes, the scoring logic and measures need to evolve with the level of maturity within the teams and the organization at large.

Good luck selling this idea to your team at the next stand-up!