Are we testing too much?

I have always worked by the simple rule that if you don't independently test something, you cannot be sure it works. When I hear a fellow tester say something like 'it should work', it really grates on me, because that statement is meaningless. Everything 'should' work. When a developer writes a piece of code to do a task defined in a requirement, then it 'should' work.

Until you test the functionality the developer created, however, you cannot say it 'does' work. The role of testing is to reduce the risk of issues, and therefore costs, in production. The risks will never be fully removed, but by testing you can reduce them, gain visibility of where they lie, and so mitigate them. However, this can only be achieved by actually testing.

As an example: you may test that a user can authenticate using a username and password. So you can now say authentication 'does' work. But this is wrong - all you have proved is that the particular username/password you entered worked. Another login using a different number of characters may fail! If the username can be between 4 and 50 characters in length and use any visible character in the ASCII character set, you cannot say authentication 'does' work until you have verified all lengths and character combinations of username/password, as well as all the different possible states of the user being authenticated (i.e. just registered; registered, deregistered and re-registered; suspended then reinstated; and so on). To say authentication 'does' work we would need a great many tests, most of which would clearly be pointless - if we can authenticate with a username containing 34 characters, then why also test 33 characters? The probability of every username length working except 33 is so low it would probably be worth buying a lottery ticket if it ever happened.
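To put a rough number on 'very many tests', here is a back-of-the-envelope calculation of the username space alone, assuming the 94 visible ASCII characters ('!' through '~') and the 4-to-50 length rule from the example above. The figures are mine, for illustration only; passwords and user states would multiply the total further.

```python
# Size of the username input space in the example above:
# lengths 4..50, each position drawn from the 94 visible ASCII
# characters ('!' 0x21 through '~' 0x7E).
VISIBLE_ASCII_COUNT = 0x7E - 0x21 + 1  # 94

total_usernames = sum(VISIBLE_ASCII_COUNT ** length for length in range(4, 51))

# A 99-digit number, on the order of 1e98 - exhaustive testing is
# plainly impossible, which is why we sample the space instead.
print(f"{total_usernames:.3e}")
```

Even at a billion automated logins per second, checking every username alone would take unimaginably longer than the age of the universe, which is the whole point: exhaustive verification is never on the table.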

So testing is about compromise: cost versus risk reduction. In the scenario above a good tester would probably test the boundaries - that username lengths of 4 and 50 work, that 3 and 51 don't - and use non-alphanumeric as well as alphanumeric characters.
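A minimal sketch of those boundary tests, using a hypothetical `username_is_valid` check standing in for the real authentication back end. The function name and the stub rules are my assumptions, not from any real system; only the 4-50 length and visible-ASCII rules come from the example above.

```python
# Hypothetical validator standing in for the real authentication back
# end. The 4..50 length rule and visible-ASCII rule are from the
# article's example; everything else here is an illustrative stub.
VISIBLE_ASCII = {chr(c) for c in range(0x21, 0x7F)}  # '!' .. '~'

def username_is_valid(username: str) -> bool:
    """Accept 4-50 characters, all drawn from the visible ASCII set."""
    return 4 <= len(username) <= 50 and all(ch in VISIBLE_ASCII for ch in username)

# Boundary-value tests: just inside and just outside each edge.
assert not username_is_valid("a" * 3)   # below lower bound
assert username_is_valid("a" * 4)       # lower bound
assert username_is_valid("a" * 50)      # upper bound
assert not username_is_valid("a" * 51)  # above upper bound
assert username_is_valid("A1!~")        # non-alphanumeric visible chars
assert not username_is_valid("ab cd")   # space is not a visible character
```

Six checks instead of an astronomical number: the bet is that off-by-one mistakes cluster at the boundaries, so values just inside and just outside each edge catch the likely defects.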


Most testing roles I have been involved with have been in critical QA areas demanding a very high degree of quality: Aerospace/Defence (where the cost of a defect in production can be very high, and not only monetarily), Government (where the cost of a defect can be measured on a national scale) and medical systems (where defects could result in poor medical outcomes). These projects have always worked towards a zero-defect goal (there is no such thing as zero defects, but the result was a very, very high quality bar).

However, to enable that, the necessary resources were made available. Often, the overall cost of testing was well in excess of the development cost. Highly skilled testers would be brought in at project conception and be involved at every stage of the design and development process (systems, hardware and software). The result was very reliable products - but the cost of those products was also very high, primarily because of that reliability.

In a recent role I asked, at the interview, what level of testing/coverage was required for the project I was to be involved with. The reply was - not surprisingly - 'high'. However, once in the role I found that the resources I'd need to reach a 'high' level of test coverage were simply not provided: the tools I requested were all turned down, and the human resources needed to reach 'high' were simply not there. This was very frustrating and, while I did my best to deliver what I could with the very limited resources, my concerns about test coverage generally fell on deaf ears. It did, however, cause me to re-evaluate the need for 'high' test coverage in this 'non-critical' sector. If, in production, someone trying to authenticate with 50 characters hit an error that prevented them authenticating, what would the real cost be? They would probably phone up and complain (and even do what they wanted to do over the phone instead of online), the bug would be hot-fixed, and all would be good. As the application was not a critical part of the business, the issue was nowhere near as big as it would be if - for instance - a doctor could not log on to see a critically ill patient's records.

So, the generally held wisdoms about testing (i.e. that test costs should be a third of the overall budget, 1.5 testers per developer, etc.) that I have always held close as fundamentals may not be quite as clear-cut as I've always thought. It is probably more about the area and type of risk you want testing to cover than a blanket 'high' test coverage approach. Maybe in some circumstances we don't need full traceability of tests back to requirements, or archived and reproducible results, or integration of the test automation suite into the test, defect and results management systems. Maybe we don't always need to ensure that all automation and test-harness development is performed independently of developers. Maybe we don't need to set the QA bar for our customer portal at the same level as we would for an air traffic control front-end. We certainly won't have the same proportional QA budget for the customer portal, so trying to reach the same level would perhaps not be realistic.

What do others think? Are we trying to test to too high a standard for the project we are on? Is the result higher test costs and lower QA than if we set realistic targets in the first place?

"Just enough" of all the things - testing, development, analysis, marketing - any more and we are creating waste. #lean

A great article Mat. My main take-away is that we need to be much more open about discussing how much testing "is enough." It's not helpful for a PM or Scrum Master to state publicly that we aim for "zero defects" or "100% test coverage" and then resource a project such that there is no hope of ever getting close to those lofty goals. In the same way that PMs manage scope, time and budget, quality is a factor that is equally important to control and to be open about with all project members and stakeholders.

If the number one project priority is scope, closely followed by time, there's no point telling everyone that the team will deliver that exact scope, according to the published schedule, within the same budget, and at a level of quality that ensures "100% test coverage." That story is simply fiction - or, to use a favourite euphemism, a matter of "alternative facts."

Let's have the robust discussion with the entire project team and the sponsor(s) up front. Clarify the expectations around quality, time-frame, scope and budget. What is most important? What is least important? Prioritise accordingly and then stick to those priorities. The Agile Warrior Inception Deck "Trade-off Sliders" are an excellent tool that I like to use before starting any project to help visually prioritise time, budget, scope and quality. https://agilewarrior.wordpress.com/2010/11/06/the-agile-inception-deck/


More articles by Mathieu Walker
