Concept: Failure Analysis and Report Creation
This concept addresses how to conduct failure analysis based on the execution of tests. The result of this analysis can take the form of a failure analysis report.

Introduction

During testing, you will encounter failures related to the execution of your tests in different forms, such as code defects, user errors, program malfunctions, and test scripting issues. This concept discusses some ways to conduct failure analysis and then to report your findings.

Failure Analysis

After you run the tests, it's good practice to identify the inputs you will use to review the results of the testing effort. Some likely sources are defects that occurred during the execution of test scripts, change request metrics, and Artifact: Test Log details.

Running test scripts surfaces problems of different kinds, such as uncovered defects, unexpected behavior, or a general failure of the test script to run properly. When you run test scripts, one of the most important things to do is to distinguish between the causes and effects of a failure: differentiate failures in the system under test from failures related to the tests themselves.
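
The sketch below is illustrative rather than prescriptive: it shows one way an automated harness might make that distinction by classifying the exception a test raises. The names run_test_case and TestScriptError are assumptions invented for this example, not part of any particular tool.

  class TestScriptError(Exception):
      """Raised when the test script or its fixtures malfunction."""

  def run_test_case(test_func):
      """Run a single test callable and classify the outcome."""
      try:
          test_func()
      except AssertionError as exc:
          # An assertion failed: the system under test did not behave as expected.
          return ("SUT_FAILURE", str(exc))
      except TestScriptError as exc:
          # The script itself (setup, data, environment) could not run correctly.
          return ("TEST_SCRIPT_FAILURE", str(exc))
      except Exception as exc:
          # Not yet classified: investigate before attributing it to the product.
          return ("UNCLASSIFIED", repr(exc))
      return ("PASS", "")

  def sample_test():
      assert 2 + 2 == 5, "wrong total"   # deliberately failing check

  print(run_test_case(sample_test))      # -> ('SUT_FAILURE', 'wrong total')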

Change request metrics are useful for analyzing and correcting failures found during testing. Select metrics that will facilitate the creation of incident reports from a collection of change requests.

Change request metrics that you may find useful in your failure analysis include (a brief sketch of deriving such metrics follows the list):

  • test coverage
  • priority
  • impact
  • defect trends
  • density
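
As a hedged illustration only, the following sketch derives a few of these metrics from a small, assumed set of change request records; the record fields, component sizes, and values are invented for the example, not a prescribed schema.

  from collections import Counter
  from datetime import date

  defects = [
      {"id": "CR-101", "priority": "high", "component": "billing", "found": date(2024, 5, 2)},
      {"id": "CR-102", "priority": "low",  "component": "billing", "found": date(2024, 5, 9)},
      {"id": "CR-103", "priority": "high", "component": "auth",    "found": date(2024, 5, 9)},
  ]

  # Priority distribution: where should rework effort go first?
  by_priority = Counter(d["priority"] for d in defects)

  # Defect density: defects per thousand lines of code in each component.
  component_kloc = {"billing": 12.0, "auth": 4.5}  # assumed sizes
  density = {
      comp: sum(1 for d in defects if d["component"] == comp) / kloc
      for comp, kloc in component_kloc.items()
  }

  # Defect trend: defects discovered per week, to see whether testing is converging.
  trend = Counter(d["found"].isocalendar()[1] for d in defects)

  print(by_priority, density, trend)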

Finally, one of the most critical sources for your failure analysis is the Artifact: Test Log. Relevant logs might come from many sources: they might be captured by the tools you use (both test execution and diagnostic tools), generated by automated tests or metrics tools, output from the target test items themselves, or recorded manually by the tester. Gather all of the available test log sources and examine their content. Check that all of the scheduled testing ran to completion and that all of the needed tests were scheduled.
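
One simple cross-check is to compare the set of scheduled tests against the tests that the gathered logs show as executed. The sketch below assumes a plain-text log format with one executed test identifier per line; the file names and format are illustrative assumptions, not requirements of any tool.

  scheduled_tests = {"TC-001", "TC-002", "TC-003", "TC-004"}

  def executed_from_logs(log_paths):
      """Collect the set of test identifiers that appear in the gathered log files."""
      executed = set()
      for path in log_paths:
          with open(path, encoding="utf-8") as log:
              executed.update(line.strip() for line in log if line.strip())
      return executed

  executed = executed_from_logs(["harness.log", "diagnostics.log"])

  not_run = scheduled_tests - executed      # scheduled but never completed
  unplanned = executed - scheduled_tests    # ran but was never scheduled

  print("Scheduled but not executed:", sorted(not_run))
  print("Executed but not scheduled:", sorted(unplanned))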

Self-Documenting Tests

For automated tests, it's important for the test itself to examine the results and clearly report whether it passed or failed. This is the most efficient way to run a suite of tests without the need for human intervention. When authoring self-documenting tests, ensure that the reporting considers both passing and failing results.
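
As a minimal sketch, assuming a Python unittest-style harness: each test checks its own results with assertions, so the run reports passes and failures without anyone inspecting raw output. The function under test and its behavior are invented for illustration, and the second test is deliberately written to fail so the failure path is visible in the report.

  import unittest

  def apply_discount(price, percent):
      """Example function under test (assumed for illustration)."""
      return round(price * (1 - percent / 100.0), 2)

  class DiscountTests(unittest.TestCase):
      def test_discount_applied(self):
          # Passing case: the test itself checks the result and reports "ok".
          self.assertEqual(apply_discount(100.0, 15), 85.0)

      def test_no_negative_prices(self):
          # Failing case: the assertion failure is recorded against this test,
          # with no manual log inspection needed.
          self.assertGreaterEqual(apply_discount(10.0, 150), 0.0)

  if __name__ == "__main__":
      unittest.main()  # prints a summary: tests run, failures, errors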

Recording Your Findings

Once you have conducted your failure analysis, you might decide to formalize the results by recording your findings in a report. Several factors go into that decision, including the level of testing formality, the complexity of the testing effort, and the need to communicate the testing results to the entire development team. In less formal environments, it may be sufficient to record your failure analysis in summary fashion.
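
Purely as an illustration of what a summary-style record might contain, the sketch below captures findings in a small data structure; the field names and values are assumptions, not a prescribed report format.

  from dataclasses import dataclass, field

  @dataclass
  class FailureAnalysisSummary:
      test_cycle: str
      tests_run: int
      tests_failed: int
      suspected_product_defects: list = field(default_factory=list)
      suspected_test_script_issues: list = field(default_factory=list)
      notes: str = ""

  summary = FailureAnalysisSummary(
      test_cycle="Iteration 4 regression",
      tests_run=212,
      tests_failed=9,
      suspected_product_defects=["CR-101", "CR-103"],
      suspected_test_script_issues=["stale fixture data in TC-044"],
      notes="Defect trend flat for two iterations; coverage gap remains in billing.",
  )
  print(summary)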