Creating functional test cases that work

If I asked you to walk around a shopping center using a map, would you be able to navigate and spot the various stores with a fair amount of certainty? I would assume the answer is "yes". Now imagine I ask you to replicate that walk without a map. Would you be able to get around as easily as when you had one? I guess most of you would struggle here! It's very simple: since you were walking around without a plan, in an aimless manner, your brain had to figure things out. You might reach your eventual destination, but it would require more effort than normal. You may improve with time if you visit the same shopping center over and over without a map, but if you change the shopping center you will struggle again!

Most software testing happens like the example above. Teams usually have incorrect or outdated plans and test cases (or no test cases at all), which are obsolete and bulky. Testing your application based on improper plans and test cases can inject a lot of issues that could otherwise have been avoided. We call it a total "Party Pooper"!

Test Cases: A Tester's Holy Grail

This is where all the goof-ups happen. People do create test cases (trust me, all of 'em have test cases), but what's usually missing is effective test case planning and up-to-date test cases. Teams often test the application from memory rather than following a defined test case plan. This leads to a lot of what I call "Productivity Pilferage".

Here are a few challenges that test teams face with the way test cases are managed today.

  • Too many test cases: A disproportionate ratio of test cases to requirements (remember, size is relative). Too many test cases lead to ineffective testing, and teams end up carrying out the wrong tests. Large test sets also make test case updates much harder to carry out.
  • Improper prioritization: We may have a lot of test cases, but if they aren't categorized properly they will never yield proper results.
  • A single test case repository: When teams have only one set of test cases, and that's all.
  • Test data management: When test cases are not created with proper test data in mind.
  • Too much clutter: When your tests contain a lot of unnecessary information.
  • Lack of product owner review: When test cases are reviewed only by the testing team.

I am sure you can relate to the problems above if you have worked in manual test teams before, as a lead or a tester. Now the question is: can we solve these problems? The answer is both yes and no. Let's look at the solutions to the problems listed above (in the same order).

The 80/20 Test Case rule

While there is no set principle defined around this for testing, after spending years in software testing and practicing this technique, I can confidently say it works great for software and can be considered the Bhagavata Gita of authoring test cases. Essentially, what I theorize is that 80% of your application functionality can be covered through 20% of your tests, your Core Tests (Pareto analysis). These are your most pristine test cases, covering all the application features. You need these to ensure that your application works as expected and that all the major flows are fine. This dramatically improves regression testing time and ensures showstoppers and critical issues are caught early. These are the test cases you should update regularly with the latest product-level changes and keep in ship shape.

80% of your application functionality can be covered through 20% of your Core Tests

Prioritize Your Tests

Most of you reading this will say, "our test cases are already prioritized". Well, you are not wrong, but what I see across the board is that people just prioritize test cases as "High, Medium and Low". This is great, but sadly not enough. You need to sub-categorize your priorities along two major dimensions:

  1. Execution Complexity: You derive this by counting test steps; for instance, a test case with more than 30 steps = High, 10 to 30 steps = Medium, and fewer than 10 steps = Low. This strictly measures time to test.
  2. Business Weightage: You chalk this out in consultation with your product teams to understand how important the feature (and sub-feature) that the test case targets is. It has nothing to do with "testing time".

Both Execution Complexity and Business Weightage need to be interlaced for an effective outcome. For example, Login as a feature under test would get an Execution Complexity of Low but a Business Weightage of High or Very High.
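To make the interlacing concrete, here is a minimal sketch in Python. The step-count thresholds come from the article; the function names, the scoring scheme and the example data are my own illustrative assumptions, not a prescribed implementation.

```python
# Hypothetical sketch: combining Execution Complexity (derived from the
# step count) with Business Weightage (agreed with the product team).
# Scoring scheme and names are illustrative assumptions.

def execution_complexity(step_count: int) -> str:
    """Derive complexity purely from the number of test steps."""
    if step_count > 30:
        return "High"
    if step_count >= 10:
        return "Medium"
    return "Low"

def combined_priority(step_count: int, business_weightage: str) -> str:
    """Interlace both dimensions into a single priority label."""
    ranks = {"Low": 1, "Medium": 2, "High": 3, "Very High": 4}
    complexity = execution_complexity(step_count)
    # Business value dominates; low effort only breaks ties on time cost.
    score = ranks[business_weightage] * 10 + (4 - ranks[complexity])
    return f"{business_weightage} value / {complexity} effort (score {score})"

# The Login example from the text: few steps, but business-critical.
print(combined_priority(5, "Very High"))
```

Sorting your repository by a score like this surfaces the high-value, low-effort cases first, which is exactly what you want under a tight deadline.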


Effective prioritization of test cases helps testers conduct a more focused test rather than just blanket testing. It also helps when dealing with tight deadlines.


Split up your test cases

Your project should have multiple sets of test cases, not just one (which is what most testing teams have). You need to break down your test cases predominantly by test environment and test type. Here are the different types of test cases that every project needs.

  1. Functional Sanity/Smoke Tests: Cases that help you quickly check your major flows and certify the app to proceed to detailed testing.
  2. Functional New Feature Tests: Tests authored for a new feature or module. It's ideal to keep them separate because they will undergo a lot of changes post-release to improve their accuracy.
  3. Functional Regression Tests: Your master set of test cases, which assists in testing the entire application. This can be called your master test suite, and you want to prepare it carefully by applying the 80/20 rule discussed above. It should be a combination of negative and positive scenarios.
  4. UI Checklists (Optional): You can cover these in your functional tests without creating a new suite; however, a couple of projects I managed were very UI-heavy and needed UI compatibility and style guide checks. These tests come in handy, as even a new team member can run them easily (a flat training curve).
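One lightweight way to keep these suites in a single repository while still running them separately is to tag each case with the suites it belongs to. A sketch, with hypothetical case data and tag names of my own choosing:

```python
# Hypothetical sketch: one test-case repository, tagged so each suite
# (smoke, new_feature, regression, ui) can be filtered on demand.
# Case data and tag names are illustrative assumptions.

test_cases = [
    {"id": "TC-001", "summary": "Login with valid credentials",
     "suites": {"smoke", "regression"}},
    {"id": "TC-002", "summary": "Export statement to PDF",
     "suites": {"new_feature"}},
    {"id": "TC-003", "summary": "Password reset via email",
     "suites": {"regression"}},
    {"id": "TC-004", "summary": "Style-guide check on dashboard",
     "suites": {"ui"}},
]

def suite(name: str) -> list[str]:
    """Return the IDs of all cases that belong to the named suite."""
    return [tc["id"] for tc in test_cases if name in tc["suites"]]

print(suite("smoke"))        # quick sanity run
print(suite("regression"))   # the master 80/20 set
```

Note that a core case like TC-001 can live in both the smoke and the regression suites: you split the runs, not the repository.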

Embed your test data in your tests

When designing test cases, the author's objective should be to ensure that the tester can execute the tests without any additional help and use the test cases like a user manual. Test data plays a very important role in seamless test execution. The problem, however, is that most test cases do not manage test data optimally. Here are a few pointers to keep your test data healthy and in check.

  • Factor test data into test cases: Teams should either create a column for test data or simply embed the test data within the tests, so that a tester is never struggling to find test data in some third-party location while testing. Hunting for data is a great hindrance and impacts the speed of testing.
  • Create rules for test data: Test data creation strategies should also be considered, and certain rules can be put in place across the team, for example, every QA credential should begin with "QA" and staging credentials with "STG". Passwords can be common across the team, so you don't keep asking for them while the actual owner of the test data is enjoying a vacation.
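A rule like this is easy to enforce with a few lines of code. The "QA"/"STG" prefixes are the ones suggested above; the function and sample usernames are illustrative assumptions.

```python
# Hypothetical sketch of the naming rule above: QA credentials start
# with "QA", staging credentials with "STG". The function name and
# sample usernames are illustrative assumptions.

RULES = {"qa": "QA", "staging": "STG"}

def valid_credential(username: str, environment: str) -> bool:
    """Check a username against the team-wide prefix rule for its environment."""
    prefix = RULES.get(environment)
    return prefix is not None and username.startswith(prefix)

print(valid_credential("QA_niranjan_01", "qa"))        # follows the rule
print(valid_credential("mypersonalaccount", "qa"))     # breaks the rule
```

Running a check like this over your test-data sheet keeps the whole team honest about the convention.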

De-clutter your cases

Keep your test cases minimal and less jazzy for better understanding and test execution. All you need are a few simple columns/fields in your test case (you can customize your test management tool for optimum usage). Always remember that complicated test cases can seem very exciting when authoring, but actual usage may suffer (simple = better).

The key aspects of a test case (not restricted to this list; it can and surely should be tweaked for your project):

  • Test Case ID: Your unique test case ID, so it's easy to identify a test case.
  • Test Summary: A clear one-line summary of your test.
  • Test Description: Detailed test steps.
  • Expected Outcome: What the tester should expect at each step/end of the test, which determines whether the test passed or failed.
  • Execution Status (Platform and Build): Captures the status, such as Pass, Fail, Pending or Blocked, on different platforms and builds.
  • Priority: A must-have column to define the execution complexity and business weightage.
  • Feature: Helps the tester filter out only certain features when performing focused testing.
  • Associated Defects: Related bugs should be associated with a test case for a better reference point on the history of an issue (taken care of automatically if you are using a test management tool).

You can add or remove columns as needed for better execution filters. If your test cases are simple, you will feel like updating them more frequently, which will keep them in shape.
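The columns above map naturally onto a plain data structure, which is also a useful stepping stone if you later automate. A minimal sketch; the field names mirror the columns in this article, while the class name and sample values are illustrative assumptions:

```python
# Hypothetical sketch of the minimal test-case shape described above.
# Field names mirror the article's columns; values are illustrative.

from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str
    summary: str
    steps: list                      # detailed test steps
    expected_outcome: str
    priority: str                    # interlaced complexity + weightage
    feature: str
    execution_status: dict = field(default_factory=dict)   # platform -> status
    associated_defects: list = field(default_factory=list)

tc = TestCase(
    case_id="TC-101",
    summary="Login with valid credentials",
    steps=["Open the app", "Enter QA credentials", "Tap Sign In"],
    expected_outcome="User lands on the dashboard",
    priority="Very High value / Low effort",
    feature="Login",
)
tc.execution_status["Android"] = "Pass"
tc.associated_defects.append("BUG-42")
print(tc.case_id, tc.execution_status)
```

Keeping the shape this small is the point: every extra field is one more thing that goes stale between releases.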

Product Owner's Test case review

Ensure you get your test cases grilled by the product owner. You might interject here: "The test lead and test engineer have already reviewed the test cases, so why do we need the product owner to review them?" That's because the product owner brings a different perspective, that of the end user, and the product owner's review also acts as an approval of your test coverage. The product owner may also update you on any recent changes, helping the test team amend the cases accordingly.

Conclusion

Not only do good test cases make testing a cakewalk; good test sets also help automation testing tremendously. Clear, concise and modular test cases are a great way to assess automation feasibility, as test cases serve as the building blocks of automation projects. You can well imagine what happens to structures when the foundation is weak. Test cases are also a great way of training newbies, as they detail the steps for each and every feature within the application along with expected outcomes (they work as a user manual). To summarize, I must admit this is one area that is overlooked a lot; test cases deserve better attention. So start working on your cases right away!

Cheers!

About the author

Niranjan is a Retail Banking and Digital Banking expert with around 9.5 years' experience in the testing domain. He has been instrumental in managing delivery functions for various client engagements focused on Digital Banking Transformations. He has experience working with cross-cultural teams across varied geo-locations on the latest cutting-edge technologies. Currently working with Cigniti Tech.
