Thursday, March 27, 2008

Metrics that are useful



As an experiment, I ran one of the BoF sessions at the SPA 2008 Conference on the topic of 'Metrics which are useful'. An interesting discussion ensued among a group of academics, software practitioners and quality specialists.

The following is a summary of my notes:

Why capture metrics, and what are you trying to achieve?
  1. Purpose of metric capture depends on customer and business

  2. Provide a 'bird's-eye view' of projects

  3. Used to improve quality

  4. Used to provide evidence to support quality models, e.g. CMMI

Some thoughts on metrics (good and bad)
  1. SLOC (source lines of code). Easy to calculate once agreement has been reached on what counts as 'a line of code', but not considered a good measure. Not particularly appropriate when the system includes COTS components as part of the solution. SLOC changes depending on language choice, gives no incentive to encourage reuse (see later) or abstraction, and can lead to excessive cut-and-paste. (A sketch of how the counting rules can differ follows this list.)

  2. Coupling/cohesion of interfaces as a mechanism for demonstrating well-designed modules that can be reused

  3. Use metrics to monitor 'right first time' during integration. The key issue is how and what to measure. Can also be used as a measure of achievement by the Project Manager.
    Monitoring reuse is worthwhile but difficult to measure or demonstrate. A potential measure could be the number of hours saved. Reuse depends on expertise, the team, business processes and functionality.

  4. Measuring quality of code by examining the use (or not!) of framework primitives and higher levels of abstraction.

  5. Number of dependencies (Java) – the more a class is used, the greater the chance that its bugs will already have been found. Also consider the number of interfaces used by a component.

  6. Number of tests passed. Not a good measure, as it says nothing about requirements achievement. A better measure would be the number of requirements passed; each test would have to reference the requirement(s) it is (at least partly) testing. (A sketch of this follows the list.)

  7. Measure source code changes between different phases, e.g. unit test and functional test.
    Code coverage and the number of tests (and the number completed) are not particularly useful; code coverage for TDD is always 100%.

  8. Measuring capabilities delivered can be a useful metric, particularly when adapted to meet business needs.

  9. Key Performance Indicators (KPIs) are often used as measures of performance across a diverse set of projects (i.e. not just software).
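
As a minimal illustration of why 'a line of code' needs agreeing up front (item 1 above), here is a small Java sketch that counts the same file three different ways. The counting rules it encodes (ignore blank lines, ignore whole-line // comments) are assumptions for illustration, not a standard.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Hypothetical sketch: three different answers to "how many lines of code?".
// The counting rules here are assumptions, not an agreed definition.
public class SlocCounter {
    public static void main(String[] args) throws IOException {
        int physical = 0;    // every line in the file
        int nonBlank = 0;    // lines containing any non-whitespace
        int nonComment = 0;  // non-blank lines that are not pure '//' comments

        BufferedReader reader = new BufferedReader(new FileReader(args[0]));
        String line;
        while ((line = reader.readLine()) != null) {
            physical++;
            String trimmed = line.trim();
            if (trimmed.isEmpty()) {
                continue;
            }
            nonBlank++;
            if (!trimmed.startsWith("//")) {
                nonComment++;
            }
        }
        reader.close();

        // The three figures can differ markedly, which is why the team must
        // agree on a definition before SLOC is reported at all.
        System.out.println("Physical lines:    " + physical);
        System.out.println("Non-blank lines:   " + nonBlank);
        System.out.println("Non-comment lines: " + nonComment);
    }
}
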
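
One way to make 'number of requirements passed' (item 6 above) measurable is to tag each test with the requirement(s) it exercises and roll results up by requirement. A minimal sketch, assuming JUnit 4; the @Requirement annotation, the requirement ID and the Invoice class are invented for illustration.

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical annotation recording which requirement(s) a test exercises.
// RUNTIME retention so a reporting tool could read it via reflection.
@Retention(RetentionPolicy.RUNTIME)
@interface Requirement {
    String[] value();
}

public class InvoiceTotalTest {

    @Test
    @Requirement("REQ-042")  // invented requirement ID
    public void totalIncludesTax() {
        assertEquals(120, Invoice.totalWithTax(100, 20));
    }
}

// Trivial class under test, included only so the sketch compiles.
class Invoice {
    static int totalWithTax(int net, int tax) {
        return net + tax;
    }
}

A small reflection-based reporter could then group test results by requirement ID and report 'requirements passed' rather than 'tests passed'.
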
Using Metrics
  1. What to do with the data once calculated/presented? Metrics must be presented in an easily understood form; examples include traffic-light reports with appropriate thresholds for each colour, and graphs (are these always clear?). (A small traffic-light sketch follows this list.)

  2. How frequently should the data be reviewed and corrective action instigated? The interval should probably be a function of the size of the project/development and the anticipated development time. Small projects may be best measured daily (probably as part of an overnight build); for other projects it may be more appropriate to report weekly or monthly, depending on the likely changes between each report.

  3. The cost of measuring/calculating the metrics should be negligible.

  4. Metrics are always a snapshot. Examine the trends/dynamics rather than the absolute values, and take appropriate action if a trend is heading in the wrong direction. It doesn't matter if you have 1000 bugs this week; what matters is that next week you have fewer than 1000 bugs! Any thresholds need to be appropriate to the project and should be reviewed and revised continuously. A new development may be able to start with a threshold of no compilation warnings when all the code is new; that threshold might not be appropriate for a legacy system.
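
To make the traffic-light and trend points above concrete, here is a minimal Java sketch. The metric (open bugs), the thresholds and the weekly figures are all invented for illustration and would need to be agreed per project and reviewed as it matures.

// Hypothetical traffic-light report that also shows the trend between two
// snapshots. Thresholds and figures are assumptions, not recommendations.
public class MetricReport {

    enum Status { GREEN, AMBER, RED }

    // Assumed thresholds: up to 50 open bugs is green, up to 200 is amber.
    static Status ragStatus(int openBugs) {
        if (openBugs <= 50) return Status.GREEN;
        if (openBugs <= 200) return Status.AMBER;
        return Status.RED;
    }

    // The trend often matters more than the absolute value: 1000 bugs this
    // week is tolerable if it is fewer than last week.
    static String trend(int lastWeek, int thisWeek) {
        if (thisWeek < lastWeek) return "improving";
        if (thisWeek > lastWeek) return "worsening";
        return "flat";
    }

    public static void main(String[] args) {
        int lastWeek = 1000;  // invented figures
        int thisWeek = 950;
        System.out.println("Open bugs: " + thisWeek
                + " (" + ragStatus(thisWeek) + ", " + trend(lastWeek, thisWeek) + ")");
    }
}

In practice a report like this would be generated as part of the regular build, with the thresholds revisited as the project matures.
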
Some Recommendations
  1. Measure the business value and not the code. Measuring something that is significant to the business is more important; an example would be how many times the software has been used.

  2. A measurement must be understood by EVERYONE (and described in appropriate language), be easy to calculate (i.e. not subjective) and explain what it means to the business. The types of metrics selected depend on the type of organisation (and the business structure) and frequently change!

  3. Metrics should be used to encourage good practice (e.g. reuse, abstractions, frameworks) and not to punish offenders!

  4. Starting a project from scratch is the ideal time to establish good practices. However, regardless of when metrics are introduced, the key practice is to monitor the dynamic nature of the project.
