Test Coverage vs. Validation Precision

I have found there is a trade-off between how much an automated test tries to cover and how much precision it applies when validating results. First, some definitions of terms:

Validation Precision: The degree to which a test tries to validate correctness within the test execution. A highly precise test validates very specific things (e.g. "Did the application calculate this math formula correctly?", "Are all the edits put in the document in their expected state?"). A low-precision test validates broader things (e.g. "Were we able to sustain application/service load during the entire run without taking the application/service down?"), if it validates at all.

Test Coverage: The number of conditions the test is trying to hit during execution. For this discussion, coverage is loosely defined: it could be code coverage, usage scenarios, or permutations. A low-coverage test will cover exactly one thing (e.g. "Execute FileOpen", or "Call UpdateCell with new data"). A high-coverage test will test many things (e.g. "Execute a multi-user collaboration model for one hour with multiple machines in semi-random execution order").
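The coverage axis can be sketched the same way. The operation names below are invented stand-ins for real application calls; the point is the difference between exercising exactly one condition and generating a long semi-random sequence whose expected state would be expensive to track step by step.

```python
import random

# Hypothetical sketch of the test-coverage axis. file_open stands in
# for a real application call like FileOpen.

def file_open(path):
    return {"path": path, "open": True}

def low_coverage_test():
    # Exactly one condition: a single FileOpen call.
    doc = file_open("report.docx")
    assert doc["open"]

def high_coverage_test(steps=100, seed=42):
    # Many conditions: a semi-random sequence of operations. Tracking
    # the exact expected state after each step would be costly, so the
    # test records a log for post-test analysis instead.
    rng = random.Random(seed)
    ops = ["open", "edit", "save", "close"]
    return [rng.choice(ops) for _ in range(steps)]

low_coverage_test()
log = high_coverage_test()
print(len(log))  # number of semi-random steps covered
```

Seeding the random generator keeps the high-coverage run reproducible, which matters once a failure needs to be investigated after the fact.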

I have found that as one goes higher, maximum ROI is achieved if the other goes lower. Pushing both very high or very low results in either over-engineered, expensive tests or cheap, useless tests, respectively. The more the test does, the more difficult it is to do precision validation. I have a simplified visual of this relationship below:

The above visual is only illustrative. There is no math behind it; the shape of the "Maximum ROI" zone is not meant to be accurate, only to communicate that there is a zone somewhere that returns the maximum balance, and that it is a function of the two variables.

I believe it works out this way because each factor increases the cost of the other; the two compete. The more a test does, the more difficult it is for the test code to keep track of the expected state. The more the test validates, the more it has to interrupt or interfere with the execution steps, diminishing the amount the test can cover.

I find this important when planning a test approach. If a given problem demands very precise validation, with correctness confirmed at every step, then the test ought to do less. This maximizes the value of the precision validation and reduces the cost of suppressing noise from false failures. If a given problem demands achieving as much in the test as possible, such as a server stress or sustained load test, then the test ought to validate less during execution - often little more than "is the application under test still responding to requests?" - and move validation to post-test analysis, accepting that the pass/fail status of the results will be less certain than the cut-and-dry status of a high-precision test.
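The stress-test pattern described above can be sketched as follows. All names here are hypothetical; a real test would issue requests against a live service. During the run, the only in-line check is a liveness probe; everything else is logged for post-test analysis.

```python
# Minimal sketch (invented names) of low-precision, high-coverage testing:
# during the run, validate only "is the app still responding?", and record
# everything for detailed correctness checks after the run.

def ping(server_state):
    # Stand-in for a liveness probe against the application under test.
    return server_state.get("alive", False)

def run_stress(server_state, requests=1000):
    log = []
    for i in range(requests):
        # Simulated workload step; a real test would call the service here.
        server_state["handled"] = server_state.get("handled", 0) + 1
        # Low-precision in-run validation: only check the app is still up.
        assert ping(server_state), f"server stopped responding at step {i}"
        log.append(("request", i, "ok"))
    return log  # detailed correctness analysis happens after the run

results = run_stress({"alive": True})
print(len(results))
```

Keeping the in-run assertion this cheap is the design choice: the loop never pauses to reconcile detailed expected state, so the run can cover far more, at the cost of a fuzzier pass/fail verdict.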
