Testing: The Poor Cousin of IT Projects
Anyone who has ever worked on an IT project knows the story. The Requirements took longer to document than we thought. The Design went back and forth a number of times. There were issues getting the pieces to fit together. Then the client added some new requirements and changed some existing ones, and all that rework takes time...
The project budget has been depleted by the extra work and slow pace of the effort so far. Everything still has to be completed, though. No more money, no more time, no more people. Your timeline’s been devoured by other project phases? Too bad. Get it done; we have a “deadline”. There are defects? You guys don’t test very well, or very thoroughly, and you’re way too slow. We’ll have to find someone who can do it cheaper, faster and better...
Ring any bells? It certainly does for me. This has happened in virtually every project I’ve been involved in over my 30+ years in IT. The most important thing I’ve learned over this period is that testing is the least understood, most maligned and generally least appreciated of all the disciplines which make up a project.
Traditional (waterfall) methodology places testing at the end of the development process. By then, the knowledgeable client resources are no longer on the project, or are busy getting ready for implementation (developing training material and the like). The developers have blown their budget to smithereens and are anxious to move on to the next project or contract, and the Project Manager is screaming about holding to an arbitrary schedule constructed (and endlessly revised) with next to no input from the test team.
The pressure builds. Defects are discovered, and must be investigated, fixed and taken through the entire testing process again. Scope creep becomes scope gallop and an initial review of the changes immediately leads to a need to evaluate and change the existing test package, and determine how much extra work is required.
Invariably, this leads a frustrated PM to demand that Test Team leaders (and their management) “do something about getting us back on track”, and copious e-mails cascade from senior management with little knowledge of the situation, exhorting the troops to work even harder to get the work done “on time”.
Sometimes it happens. More often than not, it doesn’t. A large part of the “contingency” time in the schedule has already been chewed up, and the client executive / Sponsor is loath to extend a schedule already viewed as running much longer than it should. Should a client be prepared to add extra time, myriad forces are hard at work to chew up the additional days. Extra days are frequently swallowed up before the test team ever has a chance to use them, once again proving Parkinson’s Corollary (“Any task automatically expands to consume all the resources allocated to complete it”).
It’s similar for Agile. While Agile breaks down activities and deliverables to a series of “sprints”, the output from each sprint must be tested to ensure that it: (a) works as it is supposed to, and (b) hasn’t created problems in something created in a previous sprint. One of the major differences with Agile, however, is a need for an ever-increasing volume of test material that must be created or reused as the sprints move forward and more functions become available.
In either case, the “deadline” is carved in stone, and the test team is mandated to perform heroics above and beyond the call of sanity to save the project from itself.
I’ve been involved in too many projects where this has happened. (BTW: There are very few “deadlines”, but that’s a rant for another time.)
Case in point: I was surprised to discover that the testing effort had been drastically under-estimated for a project delivering a new application from a 3rd-party vendor. The project was well underway, and I was the 2nd Test Manager brought into an effort that was still in the Development phase. The “test team” consisted of three Business Analysts (two offsite, one onsite) who had never done testing before. One of these analysts was also the primary author of the Requirements. They were expected to learn how to test on the fly, as well as to plan, document and execute all testing activities and gather or create test data.
They were assigned to the project on a full-time basis, but expecting them to create and execute a comprehensive test package in the limited time available was completely unrealistic. The test effort encountered major problems, primarily related to unclear / imprecise requirements and poor-quality application code from the vendor. However, “testing” was blamed. In reality, the minimal testing that was performed identified a number of major problems with the vendor’s code.
Management dealt with the issue by adding (from offshore) even more inexperienced people who were completely unfamiliar with the deliverables, with the expected (less than desirable) results.
Quelle surprise!
Why does this happen so much?
The most common reasons relate to poor planning and a lack of understanding of (and respect for) the testing process.
Reason #1 - Planning is Incomplete, Inadequate or Unreasonable. Well-run project teams include representatives from the testing function throughout the Planning, Requirements and Design phases. This is especially true in the Requirements gathering process. While the entire test team has probably not been defined at this point, a senior person (Test Manager or application testing SME) should be included in any discussions, design sessions or similar exercises. Inclusion of these resources allows them to gather valuable information related to the test design and execution phases to follow. In some cases, a perceived requirement may lead to significant problems for the test team, especially if it is unclear, ambiguous and / or incomplete. It may also cause parts of the design to be re-thought, modified significantly or abandoned altogether.
Issues raised in design meetings may force a rethink of business processes, based on detailed knowledge of how the business functions from a technical perspective, as well as the types of data or level of complexity which the requirements will force the team to acquire or develop. Other internal groups or outside service providers may also be needed. More accurate estimates of time, people, infrastructure, and projected project costs also result from this early involvement. A hidden benefit of inclusion is the personalization of the test effort. With a human face connected to the effort, the test team is now seen as a valued member of the project, and not just some anonymous resource that needs cutting to hold schedule or budget.
Reason #2 - A Lack of Understanding of (and Respect for) the Testing Process. This is the most prevalent and most dangerous risk to a successful testing effort. The business side of projects sees a very simple process (we tell you what we want, you build and install it) and takes a fairly simplistic view of the project team. Infrastructure? You already have it. Budget? We took your estimate and reduced it, because it was too high and your schedule is too long. Scope Creep? It happens. We’re the client: we can change the scope anytime we want. It’s the project team’s responsibility to deal with it.
Many clients believe that testing consists of “Happy Path” testing and nothing else. “Happy Path” testing executes only positive test cases and verifies that they complete successfully. It’s a stripped-down version of what testing should be. Happy Path testing has its uses but cannot serve as a full test of anything. Murphy’s Law (“If something can go wrong, it will”) overrides all, but Parkinson’s Corollary also applies (“Any task automatically expands to consume all the resources allocated to complete it”).
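The gap between Happy Path testing and fuller coverage is easy to illustrate. The function and tests below are a minimal, hypothetical sketch (the `withdraw` function stands in for real application code); the point is that the single positive case passes while the defect-revealing cases only show up once negative tests are written.

```python
# Hypothetical function under test: a simple withdrawal rule.
def withdraw(balance, amount):
    """Return the new balance after withdrawing amount."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Happy Path test: one positive case, and everything "works".
assert withdraw(100, 30) == 70

# Negative tests -- the part Happy Path testing skips.
# These are the cases that surface defects before production does.
for bad_amount in (0, -5, 101):
    try:
        withdraw(100, bad_amount)
        raise AssertionError("expected a ValueError for %r" % bad_amount)
    except ValueError:
        pass  # the invalid input was rejected, as it should be
```

If the validation in `withdraw` were missing, the Happy Path test would still pass cleanly; only the negative cases would expose the hole.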
Many clients look at the testing effort as an unnecessary step which just takes time and costs money, without any real benefits being generated. We were done ages ago; it’s just that testing is taking too long...
Testers understand the business far better than many clients think. They understand how all the pieces fit together, what goes where at what time, and most importantly, how an error in one place can cause what appears to be a totally unrelated problem somewhere else. What many clients fail to understand is the Risk Management value inherent in the testing process. When a defect is detected, a process of investigation, repair and re-test takes place. If the re-test confirms that the problem no longer exists, the risk of that error re-occurring (either in testing or live operations) has been mitigated.
Reason #3 - Testing is seen by many organizations as an expense, and expenses are evil. It doesn’t produce revenue, so we can just farm it out somewhere cheap and write off the costs. The Risk Management value of testing is not well-recognized, primarily because many testing organizations don’t identify it as a major corporate benefit. It’s not easy to quantify the savings gained by a high-quality testing effort. After all, how do you put a price on something which won’t happen? Business understanding of CoQ (Cost of Quality) can help, by identifying the impacts of fixing problems when they do occur and by recognizing Stephens’ Law of Failure, which states “Every failure is the direct result of not executing at least one specific test at least one time.”
How to Improve the Situation?
Let’s look at the main issues:
1. Not enough time, not enough money, not enough people. The rest of the project took all of them.
It needs to be clearly understood by all concerned that the test effort (as originally defined) is a medium-to-high level guesstimate at best. Every change in Requirements or Scope forces an unplanned and non-resourced review and evaluation effort. This takes time: it’s unavoidable. The Eternal Triangle of Project Management (Money, Time and People) also applies to testing activities. Take one of money, time or people away from the full testing effort and it will suffer unless you extend at least one other side of the triangle.
Projects which operate using a Project Charter approach can be chartered on a phase-by-phase basis. While this may not provide the sponsor with a completely defined and costed effort from the outset (an unrealistic idea, when you stop to think about it), it does prevent cost overruns in earlier phases from impacting testing, and by extension, implementation. The Testing Charter should be completed only when all Development items are locked down, to provide the best chance at a realistic set of numbers. The inevitable changes (after the scope is “locked down”) will force amendments to the Testing Charter, which provide a more realistic estimate of effort required, as well as costs still to be incurred.
2. Testers get little respect and less recognition.
It comes as little surprise that testers generally feel disrespected and unappreciated. They’re expected to absorb all project impacts with little or no ability to push back. They’re a group of technical and application specialists whose knowledge base is misunderstood at best and denigrated at worst. They’re seen as the impediment to timely implementation, and as nitpickers who can’t see “the big picture”.
The test team is often poorly represented in project meetings and discussions. Test teams often find themselves lumped in with developers. Issues related to testing aren’t well understood by those who attend status meetings or calls. All the team hears is Problem, Delay and Higher Cost.
I can’t remember the last time I worked on a project where the Sponsor (or members of a Steering Committee) came to visit the test team, or joined a test team call. Messages of “encouragement” were always relayed by the Project Manager, who was under the gun to “get it done and get it in”. It would have meant a lot to the team had someone at the senior client management level taken a few minutes to say hello, ask how things were going without leaping straight to the delivery date, and say that everyone on the client side appreciates the time, effort and difficulties the test team is facing. Asking how they could help would mean a lot to a test team forced to work weekends while the rest of the project is out playing in the sunshine. Don’t get me wrong...it’s not that we don’t like pizza now and then, but a pat on the back can do wonders for morale in a way that pepperoni and extra cheese just can’t duplicate.
Some organizations like to celebrate their successes with a social get-together. Invite the test team: all of them. If they’re offshore, pick up the cost of a similar event at their site, and have the PM, or better yet, the Sponsor, attend in person if possible, but by teleconference at the very least.
3. Automation Isn't Everything. You still need people with the right skills to get a good result.
First things first: You Can’t Automate Everything. There are lots of places where it’s perfectly valid, even beneficial. But automation can’t do everything, and more importantly, the automated testing tool needs to be configured and operated, and test scenarios must be designed and written for the package to use. It also requires constant review and update as changes are made.
And that means people. Perhaps with a different skill set than for manual testing, but test professionals nonetheless. Granted, people cost money, but this is no time to be cheap. You get what you pay for, and a top-quality automated test package takes time and money to build.
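The human part of an automation package can be sketched in a few lines. In the hypothetical example below (the `apply_discount` function and the scenario table are both invented for illustration), the tool’s job is only the loop at the bottom; the scenario table is the part that people must design, review and keep current as requirements change.

```python
# Hypothetical function under test (stand-in for real application code).
def apply_discount(price, customer_type):
    """Return the price after applying the customer's discount rate."""
    rates = {"regular": 0.0, "member": 0.10, "vip": 0.20}
    if customer_type not in rates:
        raise ValueError("unknown customer type: %s" % customer_type)
    return round(price * (1 - rates[customer_type]), 2)

# The scenario table is the human-authored part of the automation
# package. Every requirements change forces a review of this table.
SCENARIOS = [
    # (price, customer_type, expected_result)
    (100.00, "regular", 100.00),
    (100.00, "member", 90.00),
    (100.00, "vip", 80.00),
]

def run_scenarios():
    """Execute every scenario; return a list of failures (empty = pass)."""
    failures = []
    for price, ctype, expected in SCENARIOS:
        actual = apply_discount(price, ctype)
        if actual != expected:
            failures.append((price, ctype, expected, actual))
    return failures

assert run_scenarios() == []
```

The execution loop is cheap and generic; the scenarios are not. That asymmetry is why cutting the people out of an automation effort guts its value.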
Building a good automation package relies on all 3 sides of the Eternal Triangle (Money, Time and People), so make sure you find good people, but also that you recognize and appreciate their particular skill sets. Help them to help your project, and in that regard, morale is critical.
Testing will always remain at the mercy of earlier project phases and activities to a large extent, and to some extent that may not be changeable. However, the folks who clean up the project’s schedule mess and design/construction flaws deserve a little respect, too. It’s not an easy job, and not everyone can do it.
Bring your cousin in from the cold. Everyone benefits.