Software testing metrics - defect removal efficiency (DRE)


In my last post, Essential testing metrics, “Defect Removal Efficiency (DRE)” was identified as the most important measure of testing quality. Defect Removal Efficiency relates to the ability to remove defects introduced into a system by a project during the project life cycle.

At its simplest, DRE can be expressed as a percentage, where DRE = (total defects found during the project / total defects introduced by the project) x 100.
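
As a quick illustration, here is a minimal sketch of that calculation in Python (the function name and example numbers are my own, not taken from any particular tool):

```python
def dre_percentage(defects_found_by_project: int,
                   defects_introduced_by_project: int) -> float:
    """Defect Removal Efficiency expressed as a percentage.

    defects_found_by_project: defects found during the project (before go-live).
    defects_introduced_by_project: total defects the project introduced,
        however that total is determined (see the approaches below).
    """
    if defects_introduced_by_project == 0:
        return 100.0  # nothing was introduced, so nothing was missed
    return defects_found_by_project / defects_introduced_by_project * 100


# e.g. 170 defects found by the project out of 200 introduced in total
print(dre_percentage(170, 200))  # 85.0
```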

How do we determine the total number of defects within a solution?

One of the most significant challenges in calculating a DRE percentage is determining the total number of defects introduced to the system by the project. There are a number of ways this can be determined:

  1. Defect Seeding – This is where defects are deliberately placed into code in order to determine the effectiveness of a testing programme. Defect Seeding has been used in a number of academic studies but is rarely used in real-world testing, where timeframes and budgets are often already stretched; creating representative defects is a difficult and time-consuming activity.

    The total number of defects in an application can then be extrapolated as: total defects found during testing x (total defects seeded / total seeded defects detected). A worked example follows this list.
  2. Defect Estimation – This involves estimating the number of defects within a system based on previous deliverables and industry experience. This technique is unlikely to give a truly accurate defect count, and is of more value as an input into the initial Test Planning estimates.
  3. Defect Counting – This involves combining the “number of defects found during testing” with the “number of defects identified in Production traced back to the project”. This method will always give a lower value than the true number of defects introduced by the project because:
    1. Not all defects manifest themselves as failures in Production
    2. Cosmetic and Usability related defects are often less likely to be raised in Production.
    3. In order for the DRE metric to be useful it should be derived as close as possible to when the system exited test; however, many functions may not be executed in Production until the system has been live for a number of years.
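
To make the first and third approaches concrete, below is a small Python sketch of the seeding extrapolation and the simple defect count; the functions and the example numbers are purely illustrative.

```python
def estimate_total_defects_by_seeding(real_defects_found: int,
                                      defects_seeded: int,
                                      seeded_defects_detected: int) -> float:
    """Defect Seeding extrapolation:
    total defects ~= real defects found x (seeded / seeded detected)."""
    return real_defects_found * (defects_seeded / seeded_defects_detected)


def total_defects_by_counting(found_in_testing: int,
                              found_in_production: int) -> int:
    """Defect Counting: defects found during testing plus Production defects
    traced back to the project (a lower bound on the true total)."""
    return found_in_testing + found_in_production


# Seeding: 150 real defects found, 20 defects seeded, 15 of the seeded ones detected
print(estimate_total_defects_by_seeding(150, 20, 15))  # 200.0

# Counting: 170 defects found in testing, 30 raised in the first three months live
print(total_defects_by_counting(170, 30))              # 200
```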

What I would recommend in most instances is to use “Defect Counting” and cut off the Production defect count after the system has been live for three months. This should be sufficient for the majority of significant issues to be identified, while still providing the information within a relevant timeframe.

Normalising the defect data

In order to be effective it is important that defects are consistently raised and classified across the testing life cycle and Production. This means:

  • Ensuring that defect severities across test are applied according to the definitions specified in the test strategy
  • If necessary, reclassifying Production defect priorities to ensure that they are consistent with defects raised in test (a small illustration follows this list)
  • Analysing any Production Change Requests to see if they are actually addressing defects
  • Often there can be a delay in raising Production defects, so there is value in talking to some of the key System Users to identify any issues that they have observed but not yet formally logged.
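
One simple way to keep Production and test classifications consistent is to remap Production priorities onto the severity scale defined in the test strategy. The mapping below is a hypothetical sketch; your own test strategy will define the real equivalences.

```python
# Hypothetical mapping from Production priority to the severity scale defined
# in the test strategy - adjust to the definitions in your own test strategy.
PRODUCTION_TO_TEST_SEVERITY = {
    "P1": "Critical",
    "P2": "High",
    "P3": "Medium",
    "P4": "Low",
}


def normalise_production_defect(defect: dict) -> dict:
    """Return a copy of a Production defect record reclassified so that its
    severity is comparable with defects raised during test."""
    normalised = dict(defect)
    normalised["severity"] = PRODUCTION_TO_TEST_SEVERITY.get(
        defect["priority"], "Unclassified")
    return normalised


print(normalise_production_defect({"id": "PROD-42", "priority": "P2"}))
# {'id': 'PROD-42', 'priority': 'P2', 'severity': 'High'}
```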

Where were the defects introduced?

In order to measure the effectiveness of specific test teams or test phases it is necessary to determine where within the project life cycle defects were introduced. This requires a level of root cause analysis into the likely cause of each defect. Defects are usually classified as being introduced in the following areas:

  • Requirements
  • Design
  • Build
  • Deployment to Production

For an iterative project it is also good practice to record the iteration in which the defects were introduced.
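
A simple way to capture this root cause information is to record, against each defect, the area (and for iterative work the iteration) in which it was introduced alongside where it was found. A minimal record might look like the following sketch; the field names are illustrative, not from any specific defect tracking tool.

```python
from dataclasses import dataclass
from typing import Optional

# Areas in which a defect can be introduced, as listed above.
INTRODUCED_AREAS = ("Requirements", "Design", "Build", "Deployment to Production")


@dataclass
class DefectRecord:
    defect_id: str
    severity: str
    area_introduced: str          # one of INTRODUCED_AREAS, from root cause analysis
    phase_detected: str           # test phase or "Production"
    iteration_introduced: Optional[int] = None  # for iterative projects


defect = DefectRecord("DEF-101", "High", "Design", "System Test", iteration_introduced=3)
print(defect.area_introduced)  # Design
```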
 

Where should defects be identified?

The V-model provides a good guide as to where within the project life cycle different classes of defects should be identified. I would normally apply the following criteria:

| Defect introduced | Defect characteristics | Phase where defect should be identified |
| --- | --- | --- |
| Requirements Phase | Requirements related | Requirements inspection |
| Design Phase | Design related | Design Inspection |
| Build Phase | Functional defect - within a code component or between related code components | Unit Test |
| Build Phase | Integration between components within an application | Integration Test |
| Build Phase | Functional defects / standards / usability | System Test |
| Build Phase | Non-functional defects | Non-Functional Test Phases |
| Requirements and Design Phase | Business process defects | Acceptance Testing |
| Deployment Defects | n/a | Post-Deployment Testing |

For an iterative project defects should be identified during the phase in which they were introduced.
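
The mapping in the table above can also be expressed as a simple lookup that a reporting script could use to flag defects found outside the phase that should have caught them. This is only a sketch; the class labels are paraphrased from the table.

```python
# Where each class of defect should first be identified, per the table above.
EXPECTED_DETECTION_PHASE = {
    "Requirements related": "Requirements inspection",
    "Design related": "Design Inspection",
    "Functional - within or between code components": "Unit Test",
    "Integration between components": "Integration Test",
    "Functional / standards / usability": "System Test",
    "Non-functional": "Non-Functional Test Phases",
    "Business process": "Acceptance Testing",
    "Deployment": "Post-Deployment Testing",
}


def found_outside_expected_phase(defect_class: str, phase_detected: str) -> bool:
    """True if a defect of this class was found in a phase other than the one
    that should have caught it."""
    return phase_detected != EXPECTED_DETECTION_PHASE.get(defect_class, phase_detected)


print(found_outside_expected_phase("Design related", "System Test"))  # True
```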
 

Calculating DRE for specific Testing and Inspection Phases

A “phase specific calculation of DRE” can be documented as “total number of defects found during a particular phase / total number of defects in the application at the start of the phase”.

Some basic rules that should be applied –

  1. Most projects are delivered in an iterative fashion
    1. A test phase can only find defects that are actually in the solution when executed
    2. A test phase for a particular iteration, however, should still consider defects introduced by previous phases (Unit Test is an exception to this rule, as it is usually phase specific)
  2. The non-functional test phases should only be expected to find non-functional defects within their area of focus (obviously knowledgeable non-functional testers may find some functional defects; however, this is not the prime purpose of their testing and should not count towards their DRE calculation)
  3. Functional test phases should not be expected to find non-functional defects.
  4. Functional test phases follow a solution maturity level as implied by the V-model; less mature test phases should not be expected to find defects belonging to higher phases (e.g. unit test would not be expected to find business process defects)

Example Formula

Phase Specific DRE

This measures how effective a test phase is at identifying defects that it is designed to capture

  1. DRE Requirements Inspection = (number of requirements-related defects identified during requirements inspection) / (total number of requirements defects identified within the solution)
  2. DRE Design Inspection = (number of design and requirements-related defects identified during design inspection) / (total number of design and requirements defects identified within the solution)
  3. DRE Unit Test = (number of unit test defects identified during unit test) / (total number of unit test defects identified within the solution)*
  4. DRE Integration Test = (number of integration defects identified during integration test) / (total number of integration test defects identified within the solution post-unit test)
  5. DRE System Test = (number of system test defects identified during system test) / (total number of system test defects identified within the solution post-integration test)
  6. DRE Acceptance Test = (number of acceptance test defects identified during acceptance test) / (total number of acceptance test defects identified within the solution post-system test)

*As unit test is often an informally recorded testing activity, this metric may not be able to be derived, in which case other development quality metrics such as “defects per line of code” could be applied.
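
As a rough sketch of how the phase-specific formulas could be computed from a list of defect records (the field names and classifications are my own, and the records are invented for illustration):

```python
def phase_specific_dre(defects: list, target_classes: set, phase: str,
                       earlier_phases: set = frozenset()) -> float:
    """Phase-specific DRE: defects of the target classes found during `phase`,
    divided by all defects of those classes still in the solution at the start
    of the phase (i.e. excluding those already removed by earlier phases)."""
    in_scope = [d for d in defects
                if d["defect_class"] in target_classes
                and d["phase_detected"] not in earlier_phases]
    if not in_scope:
        return 0.0
    found_in_phase = sum(1 for d in in_scope if d["phase_detected"] == phase)
    return found_in_phase / len(in_scope) * 100


defects = [
    {"defect_class": "system", "phase_detected": "System Test"},
    {"defect_class": "system", "phase_detected": "Acceptance Test"},
    {"defect_class": "system", "phase_detected": "Production"},
    {"defect_class": "integration", "phase_detected": "Integration Test"},
]

# DRE System Test: system-test-class defects found in System Test, out of those
# still present after Unit and Integration Test (1 of 3 here, i.e. ~33%).
print(phase_specific_dre(defects, {"system"}, "System Test",
                         earlier_phases={"Unit Test", "Integration Test"}))
```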

Overall DRE

This measures how effective a test phase is in capturing any residual defects within the application, irrespective of the phase that should have caught them. (As an example, Acceptance Testing is not specifically trying to find Unit Test defects; however, a thorough testing programme will cover many paths through the functionality and should identify defects missed in other phases.)

  1. Overall DRE System Test = (number of defects identified during system test)/(total number of functional defects identified within the solution post-Integration test)
  2. Overall DRE Acceptance Test = (number of defects identified during acceptance test)/(total number of functional defects identified within the solution post-system test)
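
A similar sketch applies for the overall variant, where any functional defect still present at the start of the phase counts towards the denominator, regardless of which phase should have caught it:

```python
def overall_dre(defects: list, phase: str, earlier_phases: set) -> float:
    """Overall DRE for a phase: all functional defects found during `phase`,
    divided by all functional defects still in the solution after the earlier
    phases, regardless of which phase should have caught them."""
    remaining = [d for d in defects
                 if d.get("functional", True)
                 and d["phase_detected"] not in earlier_phases]
    if not remaining:
        return 0.0
    found = sum(1 for d in remaining if d["phase_detected"] == phase)
    return found / len(remaining) * 100
```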

What is a good DRE Score?

An average DRE score is usually around 85% across a full testing programme; however, with a thorough and comprehensive requirements and design inspection process this can be expected to lift to around 95%.
