Three levels of test coverage for the customer-side system testers
Systems testers are under constant pressure to complete their work faster. If they work for the system supplier, they prevent the product from being shipped and the project manager from reporting a milestone. If they are on the customer side, they prevent the system from being used and the acquisition manager from reporting a milestone.
Signing off a milestone is highly visible, with tangible benefits for the managers, so the testers are under constant pressure to test faster. Testing quality, on the other hand, is intangible and very hard to explain, especially when the testers themselves are not sure what test quality even means.
In the software business, test coverage metrics provide a partial cure. "Don't ship - we've only tested 70% of the code" may convince the project manager to add a few more testing days. "We've only tested 80% of the customer's requirements" is even better. But what about customer-side testers? How can they justify their testing time when the supplier reports completed testing with all contractual requirements covered?
Customer-side testers need to ensure that the system will perform adequately in all possible proper usage scenarios, and that it will not punish improper use too severely - no death sentence is justified even for using an explosive device as a hammer. But how does one account for all possible use scenarios?
For a start, some of the work is already done by systems engineers and systems designers. The system performance in proper usage scenarios should be captured in the system requirement documentation. When the systems engineering work is of low quality, the requirements don't cover all possible uses, leading to inadequate design or engineering coverage. Given the final system design, experienced testers can improve the engineering coverage by thinking about unanticipated use scenarios and augmenting the requirement specification - reverse engineering the system requirements.
The second type of coverage is the conventional one - statistical coverage of parametrized requirements. The Design of Experiments may help, but the requirements have to be formally defined for its mathematics to be effective - "in a certain environment parameter range and in a certain system state parameter range, a certain event will lead to a certain (adequate) outcome range".
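As an illustrative sketch (the parameter names and level values are hypothetical, not from any real requirement specification), once a requirement is parametrized this way, a full-factorial design simply enumerates every combination of parameter levels as a test case:

```python
from itertools import product

# Hypothetical formalized requirement: for each combination of
# environment parameter, system state parameter, and input event,
# the outcome must fall within an adequate range.
env_temperature_c = [-20, 25, 60]               # environment parameter levels
system_state = ["idle", "active", "degraded"]   # system state parameter levels
input_event = ["nominal", "boundary"]           # event levels

# Full-factorial design: every combination of levels becomes one test case.
test_cases = list(product(env_temperature_c, system_state, input_event))

print(len(test_cases))  # 3 * 3 * 2 = 18 test cases
for temp, state, event in test_cases[:3]:
    print(f"temp={temp}C state={state} event={event}")
```

In practice a full factorial grows quickly, which is exactly where Design of Experiments methods (fractional factorials, pairwise designs) earn their keep by selecting a statistically informative subset.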
The third type of coverage is risk or project coverage. It is perfectly valid for managers to take risks to meet time or financial constraints, so they are allowed to authorize an insufficiently tested system. In such a case, the system tester's job is to quantify the residual risks to facilitate rational decisions, lest the decisions be made on the basis of the manager's experience, heuristics or simple bravery. One possible method of working with risk coverage is presented in Avner Engel's book "Verification, Validation, and Testing of Engineered Systems" and the accompanying software tool.
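A minimal sketch of what "quantifying residual risk" could mean in its simplest form - an expected-loss roll-up over the untested scenarios. All scenario names, probabilities and cost figures below are invented for illustration; Engel's book develops far richer models:

```python
# Hypothetical residual-risk roll-up for scenarios left untested.
# Each entry: (scenario, probability of occurring in service,
#              estimated cost impact if it fails). Numbers are invented.
untested_scenarios = [
    ("cold start below -20C",   0.05, 200_000.0),
    ("operator double-press",   0.20,  15_000.0),
    ("degraded-sensor mode",    0.10,  80_000.0),
]

# Expected residual risk = sum of (probability * impact) over untested scenarios.
residual_risk = sum(p * impact for _, p, impact in untested_scenarios)

print(f"Expected residual risk: {residual_risk:,.0f}")  # 21,000
```

Even this crude figure turns "we skipped some tests" into a number a manager can weigh against the cost of the extra testing days.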
To summarize, three distinct types of test coverage should be considered:
- the design or engineering coverage provided by up-front systems engineering augmented by reverse engineering by system testers,
- the conventional statistical coverage of formalized requirements using Design of Experiments mathematics, and
- the risk or project coverage facilitating rational decisions in the case of incomplete testing.
All that remains is to design methods to measure and employ these types of coverage.