Testing in the DevOps era - Episode 2
Test with a purpose - A.K.A. Effectiveness
This is the second article in a series in which I share my experience and opinions on testing in DevOps.
In the first article, I wrote about "Testing your tests - A.K.A. Boundaries". 2+2 equals 4; there is not much value in testing that. Are your tests providing enough coverage with "boundary tests" and "equivalence partitions"? Are you testing only happy paths, or balancing them with enough not-so-happy paths?
In this article, I want to continue with questions like: Is my te$ting paying off? What is my ROI?
The total number of your tests will keep growing throughout your DevOps lifecycle. Every time you have a release, you will likely add new test cases to your "Test Case Bank". Your test bank account will keep growing, but unlike your money bank account, this is not going to make you smile more. It means every new release cycle will need more time and more re$ource$ for testing.
This is when you (or your boss) start pondering "Hey Siri", "OK Google" and "Alexa"... Why can't we have a test bot? Why aren't you automating? ML this... AI that... You start looking at Pythons from a different angle (if you are in Florida, that is not a rare sight).
And then comes test automation! Coded UI is on its deathbed, but there is definitely no shortage of tools and frameworks for test automation. In fact, they can not only make your head spin but also leave you --headless(!). (Hint: this series wouldn't be complete without an episode about automation.)
Even though we are surrounded by AI and ML in our daily lives, and our whereabouts are known to others even when we don't know them ourselves, at the time of writing your automated tests are most likely still written by a "developer", which technically makes them another "app". Well, in that case, wouldn't they need to be tested as well?
- #1: Test the app
- #2: Automate the test - which is technically another app
- #3: Test the "new" app
It sounds like a vicious cycle, doesn't it? #1, then #2, and #3 sends you straight back to #1.
How do you break this cycle?
I say "test with a purpose".
Do not test just for the sake of testing. Write down your purpose (your very own purpose) for testing, print it in a 144pt font, laminate it and put it up on a visible wall in the QA office. Or save it in ROM and plug it into your memory expansion. Whichever works better for you. Just do not lose sight of it. You cannot measure effectiveness without an articulate purpose!
If you are having a hard time articulating that purpose, or your purpose is not objective, observable and measurable (something like "make sure -blah-" surely is not), it is time to stop and think. https://www.google.com/search?q=test+effectiveness would be a great start.
The purpose should be your own. "Find bugs" may sound like a good one, and it is... It is objective, observable and measurable, e.g. "We ran 400(0) tests in 12(0) hours and found 3 bugs."
"Don't let bugs escape!" is another good one. e.g. "We found 5 bugs after the release to production despite running 400(0) tests!". The more specific you can get, the better it is.
"This test takes 5 minutes to run, we run it for 500 times in the last 6 months and never resulted in a failure..." - 42 paid hours of smooth sailing. Shall we keep running this test? Controversial isn't it? But worth to ask in the economies of scale.
Other than asking questions and provoking thoughts, I could list several "recommendations" here and make claims about "best practices"... But my 24 years of IT experience stop me from doing that - against my ego. What I can surely recommend: learn, review and understand the "Best Practices", but do not worship them. Find the "better" practice for your own case. If YOU are going to practice something, it had better be practical for YOU.
There are times and places where a bus is better than a bike, and vice versa, even for kids who go to the same school. Remember the story of the blind men and the elephant? Every culture has a version of that story for a reason.
I can still suggest something useful and practical though: run your new tests on the previous release and see if they "fail". If you fixed a bug, enhanced something or added new functionality in this release, and added a new test to validate it, that test should fail on the previous release, right? Automated or not, this is quite a fruitful validation - and relatively easy to do. The earlier you do it in your process, the more re$ource$ you will save... Think of all the re$ource$ you would have wasted automating this test.
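Here is a minimal sketch of what that can look like with an API-level pytest suite that takes its target from an environment variable. APP_URL, the /orders endpoint and the discount rule are all hypothetical stand-ins, not a real API:

```python
# A minimal sketch, assuming the suite targets whatever deployment APP_URL
# points at. Point it at the PREVIOUS release first: this new test should
# FAIL there. If it passes, it is not really validating the change.
import os

import requests  # assumes the requests library is available

BASE_URL = os.environ.get("APP_URL", "http://localhost:8080")

def test_bulk_discount_applied():
    """New in this release (hypothetically): orders of 10+ items get 5% off."""
    response = requests.post(f"{BASE_URL}/orders",
                             json={"item": "widget", "qty": 10})
    assert response.status_code == 200
    assert response.json()["discount"] == 0.05
```

Run it once with APP_URL pointing at the previous release and expect red, then against the new release and expect green. A new test that is green in both places deserves a hard look before you spend re$ource$ automating it.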
Did you find your purpose in testing? I hope you did.
So, until the next episode, happy testing.