The Lost Art of Software Testing

My observation over the past decade or two is that software testing has become an ancillary afterthought rather than an integral part of the project plan. In some cases software testing is seen as a necessary evil, with the test results a begrudged deliverable of no apparent purpose. In the Agile space it sometimes appears as if software testing has been discarded along with the less valued documentation, perhaps replaced by a brief demonstration to a product owner. At other times, automated tools are cited as negating the need for experienced software testers.

While I am a strong proponent of both automated tools and the Agile process, I believe that if you need skilled software developers, you need skilled software testers. It may well be a skilled software tester setting up and configuring the automated tool. Regardless, the value added is a skilled, methodical, independent set of eyes confirming product quality. I do not believe you can process or instrument your way out of the need for skilled software testers. Naturally, the certainty of my assertion increases with product complexity. And I do not believe participation in a scrum team is by itself an adequate qualification.

If the ultimate goal is product quality, then software testing, and the subsequent resolution and dispositioning of its findings, is a must. The alternative is deferring testing to your customers and users, and accepting the risk that they will change their purchasing or usage habits. In the case of mechanical devices, there is the added peril of injury or death. Safety aside, the familiar “Cost of a Defect” curve comes to mind.

[Image: the “Cost of a Defect” curve]

I believe the most effective method to produce a quality product is to have resources with three independent skills working together as a team. These resources perform checks which reinforce each other to achieve the goal of quality. It’s entirely appropriate for a scrum team to contain members of all three skills, provided they are trained and qualified.

[Image: three independent skills reinforcing each other to achieve quality]

I find today that many in the software development world lack even a fundamental knowledge of software testing methodologies or their terminology. This article attempts to provide very basic training in both.

Please note that I am not a skilled software tester. I have developed software for many years, so I have written countless unit tests. I have also had the luxury of working with many excellent software testers. But the recommendations in this article are my own and based on my experiences.

The first term to define is “production”. This conjures up visions of a physical production line in a factory. In software development production could refer to software in a physical device being produced on a factory production line. But more typically it simply means made available for use by end users or customers.

Software Testing Rule #1: Never Trust a Software Developer

I say this somewhat in jest. But having been a software developer for many years, I feel I have some license. The most common mistake is taking requirements and requirement interpretations from a software developer. Software developers are rarely the official source of requirements. More typically it is a product owner, business analyst or architect.

Some software developers consider their code “their baby.” And like any parent, they are not receptive to criticism of their baby. They can become defensive, evasive and in some cases unprofessional. There is no uniform method for handling these cases, but it is something software testers need to be aware of.

If a software developer assures you he or she has tested their code, thank them, but insist on testing it yourself. Or as I have said in the past, only hire software testers from Missouri, the “Show Me” state.

Software Testing Rule #2: Test what you deploy, deploy what you test

This rule could apply to software development, software architecture or software testing. But in essence, test exactly the code version you plan to deploy. 

A debug version may have different timing, memory footprint, resource needs or other dependencies. These differences may cause problems in a test environment which would not exist when deployed, or mask others that will. If you change the software, always regression test.

Make the test environment as close as possible, if not identical, to the production environment. Understand what is being deployed is not just the code, but also the environment in which it will operate. This includes servers, configurations, schemas, data, permissions, cryptographic certificates and keys, etc. Any of these can negate usability following a deployment, therefore all should be tested together.

Software Testing Rule #3: The only bad bug is one that isn’t logged

Our job is to reduce defects in the product, not to reduce defects in the tracking tool. However, a project team may want to quarantine some defects from the end customer, such as those of a very technical nature. Regardless, all defects should be dispositioned prior to production deployment. A “dispositioned” defect is one that has been resolved and tested, or one for which the risk has been discussed and accepted.

When logging a defect, ensure the title/summary describes a problem. This means it should have a subject and verb. If you are reviewing 100 defects, you do not have time to open each one to understand the issue. Also make sure the defect contains the following:

  • Detailed steps to reproduce the issue
  • Software release(s) which exhibit the issue
  • Environment used, e.g. login, browser, version, etc.

The best way to make a customer angry is to ignore them. Therefore track all issues they report. Make sure the source of the issue is included in the defect. This would include the person’s name. Attach their email to the defect or if it was reported in a meeting, include the time/date of the meeting. I’ve seen cases where a customer has blocked a deployment at the last minute due to a defect which they reported, but which was not logged and dispositioned.

And finally, avoid tracking multiple issues in the same defect. They may be prioritized, resolved, tested and/or deployed independently.
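The fields above can be sketched as a minimal defect record. This is an illustrative data structure, not the schema of any particular tracking tool; the field and method names are my own.

```python
from dataclasses import dataclass


# Minimal defect-record sketch capturing the fields discussed above.
# Field names are illustrative, not tied to any particular tracker.
@dataclass
class Defect:
    title: str                       # a problem statement: subject + verb
    steps_to_reproduce: list         # detailed steps to reproduce the issue
    release: str                     # software release(s) exhibiting the issue
    environment: str = ""            # e.g. login, browser, version
    reported_by: str = "internal"    # customer name, email or meeting date

    def is_actionable(self) -> bool:
        # A defect is only useful if a reader can act on it
        # without opening a conversation with the reporter.
        return bool(self.title and self.steps_to_reproduce and self.release)
```

A triage pass can then reject records that lack a reproducible description before they ever reach a developer.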

Types of Testing

Hopefully everyone reading this article knows the difference between unit, integration, system and user acceptance testing, as well as the value of each. I use “regression testing” to mean a limited reprise of any or all of these categories in response to a change. Others use the term “regression” to describe a defect introduced as the result of a change. I tend not to use the latter in order to avoid confusion.

I am always entertained by use of the term “smoke test” in software development, more so when it’s used incorrectly.  A true smoke test is when a hardware designer first applies power to a physical device. If it doesn’t smoke, it passes.  In software it is used to describe a brief check of basic functionality, generally to avoid wasting the time of a wider audience.

I use the term “scripted testing” to refer to tests characterized by documented steps in a predefined, explicitly stated order, typically with well-defined inputs and well-defined expected behaviors. A script may be a written document, like a movie script, for a tester to execute manually. Or it may be an automated testing tool’s configuration, tailored to the item under test.
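A scripted test can also live directly in code. The sketch below is a hypothetical example of my own, assuming a made-up `cart_total` function as the unit under test; the point is the fixed order of steps, each with a stated input and a stated expected behavior.

```python
# Hypothetical unit under test: a shopping-cart total with an optional discount.
def cart_total(prices, discount=0.0):
    return round(sum(prices) * (1 - discount), 2)


# A scripted test: documented steps in a predefined order, each pairing
# a well-defined input with a well-defined expected result.
def run_cart_script():
    script = [
        ("step 1: empty cart totals zero",     [],           0.0,  0.0),
        ("step 2: single item passes through", [19.99],      0.0,  19.99),
        ("step 3: 25% discount applied",       [10.0, 10.0], 0.25, 15.0),
    ]
    for name, prices, discount, expected in script:
        actual = cart_total(prices, discount)
        assert actual == expected, f"{name}: got {actual}"
    return len(script)   # number of steps executed
```

The same table of steps could just as easily be printed as a manual script for a tester to follow, which is what makes the scripted/ad-hoc distinction about the steps, not the tooling.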

Finally, I call testing outside a prepared script “ad hoc” or “improvised” testing. I find it a nice supplement to scripted testing, but a poor replacement. It is difficult to ensure proper test coverage, or sometimes even to reproduce a defect found during ad hoc testing. It is a nice supplement because it is difficult to ensure scripted testing has 100% coverage or to fully anticipate what an actual user may do.

There often seems to be confusion around the terms “Beta Test” and “Pilot.” I have attempted to contrast the two in the table below. Again, these are my definitions, but they are based on many past experiences.

[Image: table contrasting Beta Test and Pilot]

*For a Beta Test you typically want “friendly” users who won’t run to Yelp the minute they uncover a defect. And you want users experienced with similar products to avoid wasting your team’s time with false defects and learning curves.

Verification versus Validation

There are many websites where the terms “Verification” and “Validation” are contrasted. I like the adage that Verification asks “Am I building the product right?” while Validation asks “Am I building the right product?” By these definitions, verification includes any testing, audits or reviews performed on the incomplete product, or on the process by which it is designed, built and tested. This would include unit testing and peer reviews. Conversely, validation includes the same activities on the completed product, such as user acceptance testing. But some subjectivity exists.

White Box Testing versus Black Box Testing

I define “white box testing” as testing with knowledge of the internal architecture and data flow of the application, device or unit under test. The “box” in this case is the application, device or unit under test. White indicates the lights are turned on; you can see inside. Unit testing is the most common example. Perhaps Clear Box Testing would be a more appropriate term.

I define “black box testing” as testing with no knowledge of the internal architecture and data flow of the application, device or unit under test. Black indicates the lights are turned off; you cannot see inside. User Acceptance Testing is a common example. Perhaps Opaque Box Testing would be more appropriate.

Quality versus Robustness

I define “Quality” as how well a product or deliverable meets its requirements. I define “Robustness” as how resilient a product or deliverable is to adverse or unexpected conditions. A robust design facilitates operational monitoring with log files, dashboards, SMS messages, etc. It is designed to prevent, detect and self-diagnose errors, as well as to recover automatically to an operational state where possible. A robust design avoids requiring user or administrator intervention where possible and reasonable.

Performance, stress and load testing are forms of quality and robustness testing. Unfortunately, they are often overlooked until a post-deployment issue arises.

Test Strategies

I am a big fan of automated testing, as many are. If you are unsure whether the time to stand up automated testing is worth the benefit, err on the side of implementing it. It makes regression testing substantially easier and more productive.

Test boundary conditions. If the software expects a number between 1 and 100, test -1, 0, 100 and 101. Testing 50 generally provides minimal value.
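A minimal sketch of that 1-to-100 example, assuming a hypothetical `validate_percentage` function as the unit under test:

```python
# Hypothetical validator under test: accepts integers 1..100 inclusive.
def validate_percentage(value: int) -> bool:
    return 1 <= value <= 100


# Probe just outside and exactly at each boundary; an interior value
# like 50 adds little beyond what 1 and 100 already prove.
def check_boundaries():
    cases = [(-1, False), (0, False), (1, True), (100, True), (101, False)]
    for value, expected in cases:
        assert validate_percentage(value) == expected, f"failed at {value}"
    return True
```

Off-by-one errors cluster at exactly these edges, which is why the two values just outside the range matter as much as the two just inside it.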

Test race conditions where possible. A “race condition” is a case where the timing of events can deviate outside typical norms to expose an issue. If a server’s response is slightly delayed, but still within design tolerances, does a failure occur?  Determining possible race conditions tends to be easier during white box testing.
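The delayed-server example can be sketched as a timing probe. Everything here is hypothetical: a made-up client with a 1.0-second design tolerance, and a simulated response delayed to inside that tolerance.

```python
import time


# Hypothetical client under test: waits up to `timeout` seconds
# for a server response before declaring failure.
def call_with_timeout(respond, timeout=1.0):
    start = time.monotonic()
    result = respond()                 # simulated server round-trip
    if time.monotonic() - start > timeout:
        raise TimeoutError("response exceeded design tolerance")
    return result


# Race probe: a response delayed, but still inside the tolerance.
# The question under test: does a delay within spec cause a failure?
def slow_but_in_spec():
    time.sleep(0.05)                   # well within the 1.0 s budget
    return "ok"
```

In a real test harness the delay would be injected at the network or mock layer and swept across a range of values, but the shape of the check is the same.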

Negative testing is essential. Try characters instead of numbers, strings that are too long and foreign characters. Take down a webservice, allow a failure to occur, bring the webservice up again and confirm the software recovers automatically.
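A negative-testing sketch for the bad-input cases above, assuming a hypothetical `parse_age` function as the unit under test:

```python
# Hypothetical parser under test: expects a numeric age field.
def parse_age(raw: str) -> int:
    value = int(raw)                   # raises ValueError on non-numeric input
    if not 0 <= value <= 150:
        raise ValueError(f"age out of range: {value}")
    return value


# Negative check: confirm bad input is rejected, not silently accepted.
def is_rejected(raw: str) -> bool:
    try:
        parse_age(raw)
        return False
    except ValueError:
        return True
```

Characters instead of numbers, absurdly long strings and foreign characters should all land in the rejected branch, while a valid value should not.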

I once had a software developer tasked with fixing a defect look at me, puzzled, when I asked if they had reproduced the defect yet. The first step in resolving a defect, or confirming it is resolved, is to reproduce it in the test environment. You can never be certain a defect is fixed unless you have confirmed you can reproduce it; otherwise something in your test environment or test methods could mask the defect, with or without the claimed fix.

When a defect is found, reexamine your tests to see if adjustments are needed. Why did your tests not detect the defect at an earlier stage where cost is generally lower? Automated tests and test scripts must evolve to maintain quality.

Finally, map every requirement, user story or acceptance criterion to a test case. This will help ensure there are no test coverage gaps. This will also help with regression testing decisions. But note there may be exceptions such as architecture requirements or spike user stories.
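The mapping can be as simple as a traceability matrix with a gap check. The requirement and test-case IDs below are made up for illustration:

```python
# Illustrative traceability matrix: each requirement or user story ID
# maps to the test cases that cover it (all IDs are invented).
coverage = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],                     # a coverage gap
}


def coverage_gaps(matrix):
    """Return requirement IDs with no mapped test case."""
    return sorted(req for req, tests in matrix.items() if not tests)
```

Running the gap check as part of the build keeps the matrix honest: a new requirement with no test case fails fast instead of surfacing as a production defect.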

Conclusion

Today we have become desensitized to the inconvenience of restarting applications, refreshing browsers and rebooting computers. By the time we are finally comfortable dodging the defects and quirks of a release, we are pressed or even forced to upgrade, which introduces a host of new issues and oddities. The new release or model is rarely the purported panacea. I believe the departure from structured software testing is primarily to blame. Time to Market is king and the tsunami of new products, applications, features and releases affords little time to focus on quality. But I am confident a resurgence in methodical software testing would keep our time focused on using products, rather than researching workarounds and recovering them to an operational state. I for one would be willing to wait a little longer, pay a little more or live without a feature I will likely never use in return for higher quality.

Finally, I would recommend anyone interested in software testing to research the many training courses and certifications which exist. I’m sure there are many which would bolster any résumé.
