Software Testing for the Uninitiated
(first published on stoutsystems.com)
A highly skilled tester is a blessing for any project. But not every project has the depth, complexity or—let’s face it—budget to accommodate a dedicated tester. That’s why Business Analysts, Technical Writers, and Software Development Managers also find themselves saddled with the additional task of validating and testing software.
There is a difference in mindset between validating and testing.
When you validate, you go down the list of requirements to make sure that the software meets each point. Add a new customer? Check! Edit a customer? Check! Don’t permit deleting customers who have transactions associated with them? Uh… In principle, the validator is merely ensuring that software does what it is supposed to do—or doesn’t do what it shouldn’t do.
Testing, on the other hand, shouldn’t be so structured. And there are some good tricks that the untrained tester can rely on to find bugs almost—but not quite—like a pro.
Trick #1: Do stupid stuff.
When writing code, developers know what kind of data is supposed to go into a field. Good developers will error check to ensure that stupid data isn’t being entered. But that doesn’t always happen.
So when testing, try to enter numbers where letters go or letters where numbers go. Try to enter symbols—especially symbols that are used in programming like the various kinds of brackets <> or {} or () and the various punctuation marks like ! or @ or # or &.
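Under the hood, the “stupid data” problem usually comes down to a missing error check around a conversion. Here is a hypothetical sketch (the field name and error messages are invented, not from any real application) contrasting a naive handler with one that checks its input:

```python
# Hypothetical "quantity" field handler. The naive version blows up with
# an unintelligible error on letters or symbols; the checked version
# produces a message a user can actually read.

def parse_quantity_naive(text):
    # Crashes with a raw "invalid literal for int()" on input like "{}@#".
    return int(text)

def parse_quantity_checked(text):
    """Return the quantity as an int, or raise a clear, user-facing error."""
    text = text.strip()
    if not text:
        raise ValueError("Quantity is required.")
    if not text.isdigit():
        raise ValueError(f"Quantity must be a whole number, got {text!r}.")
    return int(text)
```

Feeding symbols like `{}@#` into the naive version is exactly the kind of “stupid stuff” that exposes missing checks; the checked version turns the same input into a readable validation error instead of a crash.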
Another stupid thing to do is to click on one control and then another control before the application has a chance to execute the first action. Impatient users do this all the time, as do users who have mistakenly clicked the wrong place.
Get two copies of the same web page open, delete something in one copy, and then try to edit it in the other. Most of the time you’ll blow the application up and get an unintelligible error message.
Open a page, fill in nothing, and click the SUBMIT button. You should get validation errors telling you that you didn’t fill in the required fields. But an amazing number of times the application will just blow up on you with another unintelligible error message.
This kind of testing will make you the most hated tester on the team, but it will also help make the code much less fragile or brittle. That saves a lot of heartache in the long run.
Trick #2: Regression test.
One of the most common problems in software development is that adding a new feature causes something else to break.
Regression testing is testing what was there before to make sure that it still works—just as it did before. If you were previously able to add, edit, and delete customers—but only if no transactions were associated with them—then by George you should start out your testing by making sure that you can still do this.
This should include using test records that you created before, too. So you add a new customer, edit the newly added customer… good, still works. Now open up a test customer you created previously. Can you still edit it? Maybe not! The new feature may have added fields to the database. The old records don’t have any information in those fields—but the information is marked as required—and the application cannot handle it.
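The customer rules described above can be captured as a small regression suite that gets rerun after every change. This is a hypothetical sketch: the store, the field names, and the rules are inferred from the article’s customer example, not from any real application:

```python
# Minimal in-memory customer store implementing the article's rules:
# add, edit, and delete customers -- but never delete a customer who
# has transactions associated with them.

class CustomerStore:
    def __init__(self):
        self._customers = {}
        self._transactions = {}
        self._next_id = 1

    def add(self, name):
        cid = self._next_id
        self._next_id += 1
        self._customers[cid] = {"name": name}
        return cid

    def edit(self, cid, name):
        if cid not in self._customers:
            raise KeyError(f"No customer {cid}")
        self._customers[cid]["name"] = name

    def delete(self, cid):
        if self._transactions.get(cid):
            raise ValueError("Cannot delete a customer with transactions.")
        self._customers.pop(cid)

    def add_transaction(self, cid, amount):
        self._transactions.setdefault(cid, []).append(amount)

def run_regression_suite():
    """The pre-existing behavior, rerun in full after every new feature."""
    store = CustomerStore()
    cid = store.add("Acme")            # Add a new customer? Check!
    store.edit(cid, "Acme Corp")       # Edit a customer? Check!
    store.add_transaction(cid, 100)
    try:
        store.delete(cid)              # Must refuse: has transactions.
        return False
    except ValueError:
        pass
    other = store.add("Beta")
    store.delete(other)                # Deleting without transactions is fine.
    return True
```

The point is that `run_regression_suite` never changes when a feature ships; it encodes what “still works just as it did before” means, so any new failure in it is a regression by definition.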
One of the biggest shortcomings I have witnessed in developer testing is that developers test the new feature, sometimes with many use cases, but they don’t go back to validate that everything from before still works.
It is lazy, lazy, lazy not to have someone on the team regression test the full application unless you are certain that the new features couldn’t have touched the old ones. But…hmmm…are you really certain?
Trick #3: Look at things like a user will.
Developers are notoriously bad at certain niceties.
Deliberately produce error cases and then read the error message that you receive. Is the message in techno speak or is it easy to understand? Does it have typos? Grammatical errors?
Are the labels for the controls properly spelled? Is the capitalization consistent? One thing I see often is a mixture of title case (where nearly every word is capitalized) and sentence case (where only the first word is capitalized). Some developers capitalize every word they think is important.
Let’s face it: they didn’t major in English, so they shouldn’t be expected to get these things right. But there is no reason why the application should be released widely with such simple-to-fix mistakes.
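The capitalization check described above can even be roughed out in code. This is a naive, hypothetical heuristic (real title case has exceptions for small words like “to” and “of” that it ignores): classify each label as title case or sentence case, then flag any mixture:

```python
# Naive sketch: detect a mixture of title case and sentence case among
# UI labels. Ignores small-word exceptions that real title case allows.

def case_style(label):
    """Classify a label as 'title', 'sentence', 'mixed', or 'either'."""
    words = label.split()
    if len(words) < 2:
        return "either"  # One word can't distinguish the two styles.
    tail = [w for w in words[1:] if w.isalpha()]
    if all(w[0].isupper() for w in tail):
        return "title"
    if all(w[0].islower() for w in tail):
        return "sentence"
    return "mixed"

def find_inconsistent_labels(labels):
    """Return the labels worth reviewing if the styles don't agree."""
    styles = {label: case_style(label) for label in labels}
    seen = {s for s in styles.values() if s in ("title", "sentence")}
    if len(seen) > 1 or "mixed" in styles.values():
        return [l for l, s in styles.items() if s != "either"]
    return []
```

Run against a screen’s labels, a mix like “Add New Customer” next to “Edit customer” gets flagged for a human to review; a consistent screen comes back empty.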
Look at the user interface itself. Are columns aligned in the layout? Are the margins uniform? If there is a style guide, is it being followed (the right colors, the right fonts, the right point sizes, the right button types, etc.)?
Can you understand what you’re supposed to be doing? Sometimes a simple thing like changing a control’s label text makes a big difference. And sometimes adding a tooltip to hold an explanation that’s too long for a label makes a big difference.
Then check out the workflow. Can you actually get around in the application? Or do you need additional navigational controls to take you where you want to go? If it’s frustrating to you, then count on it being frustrating to an end user.
Trick #4: For web applications, test in Internet Explorer.
Most developers prefer Google Chrome, so that’s the browser where all of their testing is done. I make it a habit to do all of my testing in Internet Explorer. Each browser has peculiarities, so this exposes a number of bugs that developers never encounter.
None of the things described above are rocket science. A good tester will go substantially further than a Business Analyst or Technical Writer ever will: test automation, database comparisons, validating data outputs and calculations, and so on. But all of the areas above, when tested and fixed, go a long way toward improving the end user’s experience and acceptance of the released product.
Happy testing!
AUGUST 19, 2015 BY PEG BOGEMA