How to identify, track, report and validate metrics in software testing

Tram Ho

Metrics measure the current state of an activity or process. They help us set benchmarks and goals, and measurement tells you how far you still have to go to reach those goals. The test manager must be able to identify, monitor and report test progress metrics. "What gets measured gets done" is a common saying; conversely, if something is not measured, it will not be acted upon. It is therefore necessary to establish quantitative metrics for testing.

1. Four types of software testing metrics

  • Metrics for software testing activities can be grouped into the following categories:
  1. Project metrics – They measure the progress of a project toward its exit criteria. Examples include the percentage of test cases that have passed, failed, or been executed.
  2. Product metrics – They measure characteristics of the product, such as defect density or the extent to which it has been tested.
  3. Process metrics – They measure the capability of the development or testing process. An example is the number of defects that testing was able to detect.
  4. People metrics – They measure the skills and abilities of individual team members or of the team as a whole. An example is adherence to the schedule for executing test cases.
  • If no defects are reported for seven days, the project may be deemed to have met an exit criterion and be safe to move on.
  • The number of defects found in the product is a measure of product quality, while a large number of defects detected during the early stages of testing is a measure of the effectiveness of the testing process.
  • It is very important to handle people metrics carefully, because they can easily be confused with process metrics. If that happens, the entire testing process can fail and people may lose trust in their managers as well as in the organization's capabilities.
  • These metrics help testers report test results and monitor the testing process consistently. Test managers often present them at stakeholder meetings at different levels.
  • Because these metrics can be used to assess the overall progress of the project, care must be taken when deciding which metrics to monitor, the techniques for preparing them, the reporting frequency, and how the reports are presented.
  • Here are some points that the test manager / QA lead should keep in mind:
  1. Metric definition – Only well-defined metrics are useful. Unimportant metrics should be discarded, keeping to the four categories discussed above. All stakeholders must agree on the definition of each metric to avoid confusion when discussing measurements. Because a single metric can give a misleading picture, metrics should be defined so that they balance one another.
  2. Metric tracking – The collection, consolidation and reporting of metrics should be automated as far as feasible to minimize the effort devoted to these activities. The test manager must watch for any deviations from the expected values and incorporate them into the report; where possible, the cause of a deviation should also be stated.
  3. Metric reporting – Metrics are reported to senior management for project-management purposes. The report should therefore be well presented and highlight the important metric values as well as how the metrics have evolved over time.
  4. Metric validation – The test manager (QA lead) is responsible for verifying the metrics and values presented in the report, and should also analyze them for accuracy as well as for the trends they show.
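The metric definition and tracking points above can be sketched in a few lines of code. This is a minimal illustration, assuming a simple in-memory list of test case outcomes; the function name and status labels are illustrative, not from any specific tool.

```python
# Minimal sketch of automated metric tracking: summarize test case
# outcomes as percentages so they can be reported consistently.
from collections import Counter

def summarize_results(results):
    """Return the percentage of passed/failed/blocked test cases."""
    counts = Counter(results)
    total = len(results)
    return {status: round(100 * counts[status] / total, 1)
            for status in ("passed", "failed", "blocked")}

results = ["passed", "passed", "failed", "blocked", "passed"]
summary = summarize_results(results)
```

Automating even a small summary like this reduces manual effort and keeps the numbers consistent between reports, which is the point of the "metric tracking" guideline above.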

2. Test progress metrics

  • Test progress is monitored based on the following five factors:
  1. Product quality risks
  2. Product defects
  3. Tests executed
  4. Test coverage
  5. Confidence in the test activities
  • Product defects, risks, tests and coverage are often reported in a predefined format. When these metrics correlate with predefined exit criteria, a benchmark for assessing the test effort can be established.
  • Confidence can be measured subjectively (for example, through surveys) or objectively (for example, through coverage metrics).

3. Metrics that can be defined for product quality risks

  1. Percentage of risks fully tested
  2. Percentage of risks for which all, or at least some, tests fail
  3. Percentage of risks that could not be fully tested
  4. Risks tested, sorted by risk category
  5. Percentage of risks discovered after the initial analysis of product quality risks
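The risk-based metrics above can be computed from a simple risk register. The sketch below assumes a hypothetical schema in which each risk item records how many of its tests were planned, run and failed; all field names are illustrative.

```python
# Illustrative risk register: each entry tracks test progress for one
# product quality risk (schema is hypothetical).
risks = [
    {"id": "R1", "tests_planned": 4, "tests_run": 4, "tests_failed": 0},
    {"id": "R2", "tests_planned": 5, "tests_run": 3, "tests_failed": 1},
    {"id": "R3", "tests_planned": 2, "tests_run": 0, "tests_failed": 0},
]

# Metrics 1-3 from the list above, as counts over the register.
fully_tested = sum(r["tests_run"] == r["tests_planned"] for r in risks)
with_failures = sum(r["tests_failed"] > 0 for r in risks)
untested = sum(r["tests_run"] == 0 for r in risks)

def pct(n):
    """Express a count as a percentage of all risks."""
    return round(100 * n / len(risks), 1)
```

With three risks, each metric here covers one of them, so each percentage comes out to 33.3%.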

4. Metrics that can be defined for defects

  • Ratio of the total number of defects detected to the number of defects resolved
  • Mean time between failures, or the reported failure rate
  • Defects classified by the following factors:
  1. Product component under test
  2. Root cause of the defect
  3. Source of the defect, such as a new feature, a regression, or the specification
  4. Test release
  5. Test level at which the defect was detected
  6. Priority or severity of the defect
  7. Duplicate or rejected reports
  8. Time lag between defect detection and resolution
  • Child defects, i.e. defects introduced by the fix for another defect
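A few of the defect metrics above (resolution ratio, classification by severity, and detection-to-resolution lag) can be sketched as follows. The defect records and their fields are hypothetical examples, not output from any real defect tracker.

```python
# Hypothetical defect records: severity, resolution status, and the
# detection/closure dates needed for the lag metric.
from collections import Counter
from datetime import datetime

defects = [
    {"id": 1, "severity": "high", "resolved": True,
     "detected": "2023-01-02", "closed": "2023-01-05"},
    {"id": 2, "severity": "low",  "resolved": True,
     "detected": "2023-01-03", "closed": "2023-01-04"},
    {"id": 3, "severity": "high", "resolved": False,
     "detected": "2023-01-06", "closed": None},
]

# Ratio of resolved defects to all detected defects.
resolved_ratio = sum(d["resolved"] for d in defects) / len(defects)

# Classification by severity.
by_severity = Counter(d["severity"] for d in defects)

def lag_days(defect):
    """Days between detection and resolution of a closed defect."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(defect["closed"], fmt)
            - datetime.strptime(defect["detected"], fmt)).days

# Average detection-to-resolution lag over resolved defects only.
resolved = [d for d in defects if d["resolved"]]
avg_lag = sum(lag_days(d) for d in resolved) / len(resolved)
```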

5. Metrics that can be defined for tests

  1. Number of tests planned
  2. Number of tests implemented and executed
  3. Number of test cases blocked, skipped, failed or passed
  4. Status, trends and values for regression testing and confirmation testing
  5. Ratio of planned test hours to actual test hours per day
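The test metrics above reduce to a few simple ratios. The sketch below uses made-up planned and actual figures to show the arithmetic; the dictionary keys are illustrative.

```python
# Hypothetical planned vs. actual figures for one reporting period.
planned = {"test_cases": 120, "hours_per_day": 8}
actual = {"executed": 90, "passed": 70, "failed": 12,
          "blocked": 5, "skipped": 3, "hours_per_day": 6.5}

# Metric 2: share of planned test cases actually executed.
execution_progress = actual["executed"] / planned["test_cases"]

# Metric 3 (one slice of it): pass rate among executed cases.
pass_rate = actual["passed"] / actual["executed"]

# Metric 5: planned vs. actual test hours per day.
hours_ratio = actual["hours_per_day"] / planned["hours_per_day"]
```

A hours ratio well below 1.0, as in this example, flags that the team is getting fewer testing hours per day than planned, which feeds directly into the test control actions discussed later.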

6. Metrics that can be defined for test coverage

  1. Coverage of requirements and design documents
  2. Risk coverage
  3. Coverage of test environments or configurations
  4. Product code coverage
  • The test manager must be proficient in interpreting and using coverage metrics to report test status. Coverage of requirements and design documents is needed for the higher test levels, such as integration, system and acceptance testing.
  • Code coverage is needed for the lower test levels, such as unit and component testing. Results from higher-level tests should not be reported in terms of code coverage.
  • Note that even when 100% coverage is targeted at the lower levels, defects will still be found at the higher levels and must be remedied accordingly.
  • Test metrics can be tied to the main testing activities, which helps monitor testing progress against the stated project objectives.
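Requirements coverage, the first metric above, is essentially a set operation: which requirements are traced to at least one executed test. A minimal sketch, with invented requirement IDs:

```python
# Hypothetical requirement IDs and the subset traced to executed tests.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
covered = {"REQ-1", "REQ-3"}

# Coverage ratio and the gap list a test manager would report.
coverage = len(requirements & covered) / len(requirements)
uncovered = requirements - covered
```

Reporting the uncovered set alongside the ratio is usually more actionable than the percentage alone, since it tells stakeholders exactly which requirements still lack tests.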

7. Metrics that can be defined to monitor and control test planning

  • Coverage of test basis elements such as risks, product requirements, etc.
  • Defect discovery
  • Estimated hours required for development and testing as a percentage of the total hours required

8. Metrics that can be defined for test analysis

  • How many test conditions have been identified?
  • How many defects were detected during test analysis?

9. Metrics that can be defined for test design

  • Percentage of test conditions covered by test cases
  • How many defects were detected during test design?

10. Metrics that can be defined for test implementation

  • Percentage of test environments set up
  • Percentage of test data records loaded
  • Percentage of test cases automated

11. Metrics that can be defined for test execution

  • Percentage of test cases executed, passed or failed
  • Percentage of test conditions covered by test cases that have passed
  • Ratio of expected defects to actual defects reported or resolved
  • Estimated test coverage versus the actual coverage achieved
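Two of the execution metrics above compare expectations with reality. A minimal sketch with invented numbers:

```python
# Hypothetical figures: expected defects came from an estimate made
# during planning; actual defects are what testing has reported so far.
expected_defects, actual_defects = 40, 31
defect_ratio = actual_defects / expected_defects

# Test conditions mapped to whether a passing test covers them
# (condition names are illustrative).
conditions = {"C1": True, "C2": True, "C3": False}
condition_coverage = sum(conditions.values()) / len(conditions)
```

A defect ratio well below 1.0 late in execution can mean either better-than-expected quality or weaker-than-expected testing; the coverage metric helps distinguish the two.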

12. Metrics defined to monitor test progress

The metrics defined to monitor the progress of test activities must be linked to project milestones, test entry criteria and test exit criteria. Some of these metrics may be:

  1. Number of predefined test cases, conditions or specifications executed, together with their results (pass or fail)
  2. Defects detected, classified by severity, priority, affected product component, etc.
  3. Details of the required changes and their status (implemented and/or tested)
  4. Estimated versus actual cost
  5. Estimated versus actual testing time
  6. Expected versus actual testing dates
  7. Estimated timelines of test activities versus their actual dates
  8. Details of product quality risks, classified as mitigated and unmitigated
  9. Major risk components
  10. Risks discovered after test analysis
  11. Testing effort and time lost to unplanned or unexpected events
  12. Status of regression testing and confirmation testing
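Items 4 to 7 above are all variance calculations against the plan. The sketch below computes schedule slip and cost variance from invented milestone and budget figures; the milestone names and amounts are illustrative.

```python
# Hypothetical planned vs. actual milestone dates and budget.
from datetime import date

milestones = [
    {"name": "system test start",
     "planned": date(2023, 3, 1), "actual": date(2023, 3, 6)},
    {"name": "system test end",
     "planned": date(2023, 4, 1), "actual": date(2023, 4, 12)},
]
for m in milestones:
    # Positive slip means the milestone was reached late.
    m["slip_days"] = (m["actual"] - m["planned"]).days

# Cost variance as a percentage of the planned budget.
budget_planned, budget_actual = 500, 560
cost_variance_pct = round(
    100 * (budget_actual - budget_planned) / budget_planned, 1)
```

Growing slip between successive milestones (5 days, then 11 here) is the kind of trend the test manager is expected to highlight in progress reports, not just the point-in-time numbers.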

13. Metrics to measure test closure tasks

The following metrics can be designed to measure the closure of test tasks:

  • Number of test cases:
  1. By status – executed, passed, failed, skipped and blocked
  2. That have become part of the reusable test case repository
  3. That were planned for automation versus the number actually automated
  • Number of defects resolved or unresolved
  • Number of test work products archived
  • The metrics collected throughout the testing process must help the test manager monitor the testing effort and bring it to a successful conclusion.
  • Therefore, the metrics, the amount and frequency of data collection, and the complexity and risks associated with them must be established during the planning phase.
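The closure metrics above can be tallied from per-test-case records. The sketch below assumes a hypothetical record format with status, reusability and automation flags; all names are illustrative.

```python
# Hypothetical per-test-case records gathered at test closure.
from collections import Counter

test_cases = [
    {"status": "passed",  "reusable": True,  "automated": True},
    {"status": "failed",  "reusable": True,  "automated": False},
    {"status": "skipped", "reusable": False, "automated": False},
]

# Counts by status, reusable cases, and automation percentage.
by_status = Counter(t["status"] for t in test_cases)
reusable_count = sum(t["reusable"] for t in test_cases)
automation_pct = 100 * sum(t["automated"] for t in test_cases) / len(test_cases)
```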

14. Controlling the test activities

  • Test control must be able to adapt testing to changes in the project environment and to the data the testing itself provides.
  • For example, consider a scenario in which dynamic testing of a product reveals a cluster of defects in areas believed to be defect-free, while testing time is reduced due to a development delay. The risk analysis and the plan must then be revised: test cases will have to be reprioritized and the allocation of testing effort reviewed.
  • Keep track of new information, test plans that need to be revised, newly created test cases and reallocated effort.
  • If the test progress reports show deviations from the test plan, test control activities must be performed to bring testing back on plan.
  • Some actions that may be considered for this include:
  1. Revising the test priorities or the test plan
  2. Reviewing the quality risk analysis
  3. Strengthening the testing effort
  4. Changing the product release date
  5. Changing the test exit criteria
  6. Modifying the project scope as required
  • All stakeholders and project managers must agree before any of these steps are taken.
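The reprioritization scenario above can be sketched as a simple greedy selection: when testing time shrinks, run the highest-risk test cases that still fit in the remaining budget. The risk scores, hours and IDs below are invented for illustration; a real project would derive them from its risk analysis.

```python
# Sketch: pick remaining test cases by risk score within a reduced
# time budget (greedy, highest risk first; all data is hypothetical).
def select_tests(tests, hours_available):
    """Return IDs of the highest-risk tests that fit the time budget."""
    chosen, used = [], 0.0
    for t in sorted(tests, key=lambda t: t["risk"], reverse=True):
        if used + t["hours"] <= hours_available:
            chosen.append(t["id"])
            used += t["hours"]
    return chosen

tests = [
    {"id": "TC-1", "risk": 9, "hours": 2},
    {"id": "TC-2", "risk": 4, "hours": 1},
    {"id": "TC-3", "risk": 7, "hours": 3},
]
selected = select_tests(tests, 5)
```

Greedy selection is a deliberate simplification here; it keeps the sketch short, whereas an exact fit to the budget would be a knapsack problem.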


This article has covered how to identify, track, report and validate metrics in software testing. I hope it helps!



Source: Viblo