QA metrics for software testing
This is a list of metrics presented in the talks by Marine Yegoryan and Irina Ushakova at QA Z-days 2020. Links to the talk videos can be found at the end of the article.
Part 1 (Marine Yegoryan)
Defect Containment
Defect Containment = Pre-release bugs * 100 / (Pre-release Bugs + Post-release bugs)
Possible problems:
- Missing test strategy and approaches
- Low test coverage
- Too many bugs in the backlog
- Bad requirements and ineffective review sessions
- Miscommunication between the development and business teams
- Low productivity of a QA team member
- Missing unit tests
- No sanity check before deployment
- Badly designed automated test scripts
- Bad implementation
- Architectural issues
- Design issues
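The formula above can be sketched as a small helper. This is an illustrative implementation, not code from the talk; the function name and the zero-bug convention are assumptions:

```python
def defect_containment(pre_release_bugs: int, post_release_bugs: int) -> float:
    """Percentage of all detected bugs that were caught before release."""
    total = pre_release_bugs + post_release_bugs
    if total == 0:
        return 100.0  # assumption: no bugs at all counts as perfect containment
    return pre_release_bugs * 100 / total

# Example: 90 bugs found in testing, 10 escaped to production.
print(defect_containment(90, 10))  # 90.0
```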
Test Case Coverage
Test Case Coverage = Total number of requirements mapped to test cases * 100 / Total number of requirements
Possible problems:
- Not all related paths are covered
- Test cases are not mapped to requirements
- Too many trash and out-of-scope items among the test cases
Test Case Preparation Productivity
Test Case Preparation Productivity = Number of test cases / Effort spent on test case preparation (in hours)
Possible problems:
- Test case design techniques are not used
- No designed test strategy
- Bad requirements and misunderstandings with the BA team
- Low productivity of a QA team member
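As a sketch (the function name and the guard against zero hours are assumptions), the metric is test cases produced per hour of effort:

```python
def preparation_productivity(num_test_cases: int, hours_spent: float) -> float:
    """Test cases produced per hour of preparation effort."""
    if hours_spent <= 0:
        raise ValueError("hours_spent must be positive")
    return num_test_cases / hours_spent

# Example: 40 test cases written in 16 hours.
print(preparation_productivity(40, 16))  # 2.5 test cases per hour
```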
Passed Test Cases
Passed Test Cases = Number of test cases passed * 100 / Total number of test cases executed
Regression Analysis
Regression Analysis = New defects detected during the regression * 100 / Total number of detected defects
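The two ratios above translate directly into code. A minimal sketch with assumed function names:

```python
def pass_rate(passed: int, executed: int) -> float:
    """Percentage of executed test cases that passed."""
    return passed * 100 / executed if executed else 0.0

def regression_rate(regression_defects: int, total_defects: int) -> float:
    """Percentage of all detected defects that surfaced during regression."""
    return regression_defects * 100 / total_defects if total_defects else 0.0

print(pass_rate(180, 200))     # 90.0
print(regression_rate(6, 30))  # 20.0
```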
Defect Severity Index
Defect Severity Index = ((Num of Sev 1s * 8) + (Number of Sev 2s * 6) + (Number of Sev 3s *3) + (Number of Sev 4s * 1)) / Total issue count
Possible problems:
- Unstable application
- Too many major bugs in the backlog
- Badly designed automated test scripts
- Bad implementation
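The weighted-average formula above, with the stated weights of 8, 6, 3 and 1 per severity level, can be sketched as follows (the dictionary-based interface is an assumption):

```python
# Weights per severity level, as given in the formula above.
SEVERITY_WEIGHTS = {1: 8, 2: 6, 3: 3, 4: 1}

def defect_severity_index(counts: dict[int, int]) -> float:
    """Weighted average severity; `counts` maps severity level -> defect count."""
    total = sum(counts.values())
    if total == 0:
        return 0.0
    weighted = sum(SEVERITY_WEIGHTS[sev] * n for sev, n in counts.items())
    return weighted / total

# 2 critical, 5 major, 10 minor and 3 trivial defects:
print(defect_severity_index({1: 2, 2: 5, 3: 10, 4: 3}))  # (16+30+30+3)/20 = 3.95
```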
Decline Rate
Decline Rate = Number of invalid bugs * 100 / Total number of closed bugs
Possible problems:
- Bad requirements, misunderstanding between QA and the team
- Poor defect tracking
- Defects are not mapped to test cases
- Too many old and outdated bugs in the backlog
Visualise it!
- Automation vs total test cases
- Failed tests
- Execution summary (total, passed, failed, skipped, product bug, automation bug, system issue, not a defect, to investigate)
- Execution Time
Part 2 (Irina Ushakova)
Software-related
Degree to which requirements are interconnected = AVERAGE of (Number of requirements related to requirement №1) / (Total requirements − 1), …, (Number of requirements related to requirement №n) / (Total requirements − 1)
Coefficient of requirements stability = Number of changes to existing requirements / Total number of requirements implemented per iteration (including new ones)
Defect density = Number of defects in a separate module / Total number of defects in software
Regression coefficient = Number of defects in the old functionality / Total number of defects (including defects in new functionality)
Coefficient of reopened defects = Number of reopened defects / Total number of defects (reopened+new)
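Two of the software-related metrics above can be illustrated with short helpers. This is a sketch, not code from the talk; the function names and list-based input for the interconnection metric are assumptions:

```python
from statistics import mean

def requirements_interconnection(related_counts: list[int]) -> float:
    """Average share of the other requirements each requirement relates to.

    `related_counts[i]` is the number of requirements related to requirement i.
    """
    n = len(related_counts)
    if n < 2:
        return 0.0
    return mean(c / (n - 1) for c in related_counts)

def defect_density(module_defects: int, total_defects: int) -> float:
    """Share of all defects concentrated in one module."""
    return module_defects / total_defects if total_defects else 0.0

# 4 requirements, relating to 1, 2, 3 and 0 of the other three respectively:
print(requirements_interconnection([1, 2, 3, 0]))  # 0.5
print(defect_density(12, 48))                      # 0.25
```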
Team-related
Share of defects attributable to a specific developer = Number of defects in the code of a specific developer / Total number of defects
Velocity of QA team = Number of story points per N iterations / N
The effectiveness of tests = Number of defects detected / Total number of test cases
Defect containment = Number of defects detected after release / Total number of defects detected before and after release
Accuracy of time estimation by area / type / type of work = Estimated time / Actual work time
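The velocity and estimation-accuracy metrics above can be sketched as follows (function names are assumptions; a ratio of 1.0 means the estimate matched the actual time exactly):

```python
def qa_velocity(story_points_per_iteration: list[int]) -> float:
    """Average story points delivered per iteration."""
    if not story_points_per_iteration:
        return 0.0
    return sum(story_points_per_iteration) / len(story_points_per_iteration)

def estimation_accuracy(estimated_hours: float, actual_hours: float) -> float:
    """Ratio of estimated to actual time; below 1.0 means underestimation."""
    if actual_hours <= 0:
        raise ValueError("actual_hours must be positive")
    return estimated_hours / actual_hours

print(qa_velocity([21, 18, 24]))    # 21.0
print(estimation_accuracy(16, 20))  # 0.8
```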
Thanks. Very useful and hands-on.
Hi Natalia, this set of ‘indicators’ might perhaps serve as a source of inspiration for you: https://www.tmap.net/page/indicators-voice-model
Dear Natalia Munina, thank you very much for your feedback. I am glad you found the topic valuable!