What To Test Where?
To me this simple sentence sums up the state of the software automation industry. There has been an explosion recently in jobs for software quality automation engineers in the US. Many firms that had not previously moved to Agile development, especially in industries such as banking and finance, are now plunging head-first into Agile practices. This has led to a sudden flurry of companies that have recently answered the question "do we need more and better automation?" with a resounding YES.
This has now led to a new question facing those companies: how do we automate, and what do we test where? With a slew of practices and terms (TDD, BDD, unit testing, integration testing, UAT, etc.), how do companies ensure that the automation and tests they are adding actually add value? Companies that have already walked this path have found that it is very easy to add the wrong kinds of tests, to test the wrong thing, or to write unreliable tests that fail intermittently. In some cases these problems lead to test infrastructure that does not add the intended value or, in others, to a lot of 'feel good' testing that is of limited value.
One step that everyone involved must take is to study the Agile Testing Quadrants and gain a solid understanding of what the different quadrants represent and what sort of tests they will mean for *your* company.
The Agile testing pyramid is another great way to understand the type and quantity of testing that should exist.
There needs to be considerable communication between the development and quality assurance engineers, both of whom write tests, so that tests and artifacts are shared and testing in different areas is complementary rather than duplicative. To be clear, this implies activities such as pairing for several hours, not just single 'presentation meetings'. Some duplication between testing levels is acceptable: input field validation, for example, might have Jasmine unit tests, server-side procedure unit tests, client-side JavaScript unit tests, and some end-to-end UI happy/sad path tests. The question to consider, at the end of the day, is what value highly reliable and generally bug-free software has to the company. The decisions about a bug at a large bank vs. those at a rapidly growing social media startup will usually be quite different.
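As a sketch of what unit-level coverage of input field validation can look like, here is a hypothetical client-side helper (the function name and the ZIP-code rule are illustrative assumptions, not from any particular codebase). The point is that the edge cases are cheap to enumerate at this level, so the end-to-end UI tests only need one valid and one invalid value each:

```javascript
// Hypothetical client-side validation helper (illustrative only):
// a US ZIP code is five digits, optionally followed by '-' and four digits.
function isValidZipCode(value) {
  return /^\d{5}(-\d{4})?$/.test(String(value).trim());
}

// Unit-level coverage: fast and exhaustive over the edge cases.
const cases = [
  ["12345", true],
  ["12345-6789", true],
  [" 12345 ", true],    // surrounding whitespace is trimmed
  ["1234", false],      // too short
  ["123456", false],    // too long
  ["12345-678", false], // bad +4 extension
  ["abcde", false],     // non-numeric
];

for (const [input, expected] of cases) {
  if (isValidZipCode(input) !== expected) {
    throw new Error(`isValidZipCode(${JSON.stringify(input)}) should be ${expected}`);
  }
}
```

The happy/sad path UI tests can then trust the unit layer for the rest, rather than re-driving every edge case through the browser.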
Some of the challenges facing companies looking to automate in an Agile fashion are not about writing tests, but about their overall approach and their ability and desire to 'fail fast and often'. Companies that have not yet adopted such modern practices can easily fall into the trap of churning out thousands of tests that eventually act as more of a constraint on rapid development than an aid to it. It can be hard to get companies to start writing valuable tests, but it is even harder to know when not to write them, or when to remove tests that don't actually add value and whose cost, both in the time spent waiting for them to run and in ongoing maintenance, outweighs their benefit.
Another example where simply adding more tests is not the solution is the simple fear of breaking the front-end production system, which is often where the majority of a company's revenue comes from. It is easy for a one-day outage to cost millions of dollars, not to mention bad publicity, poor customer and employee morale, and so on. This can be avoided with modern practices that closely monitor software deployments, so that any significant issue is spotted almost immediately and the change rolled back or fixed. Frequently this is done with simple charts that compare current activity to the last hour/day/week/month to observe changes. Canary releases ('canary in a coal mine') can try out changes with a tiny fraction of production traffic, for example 1%. In some cases, getting this feedback and responding to it quickly with further changes will be a much faster and more desirable process than spending days and weeks trying to assure that 'nothing will break in staging', only to find out time and again that something unanticipated happened in production and the predicted knowledge of the future was, as always, imperfect.
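A minimal sketch of the kind of comparison those monitoring charts automate, with a made-up metric and a hypothetical drop threshold (both are assumptions for illustration, not a prescription), might look like:

```javascript
// Hypothetical deployment health check: compare a current metric
// (e.g. checkouts per minute) against the same window last week, and
// flag the canary for rollback if it has dropped more than the tolerance.
function deployLooksHealthy(currentRate, baselineRate, maxDropFraction = 0.2) {
  if (baselineRate <= 0) return true; // no baseline to compare against
  const drop = (baselineRate - currentRate) / baselineRate;
  return drop <= maxDropFraction;
}

// 950 checkouts/min now vs 1000 last week: a 5% drop, within tolerance.
if (!deployLooksHealthy(950, 1000)) throw new Error("unexpected alert");
// 600 vs 1000: a 40% drop, roll the canary back.
if (deployLooksHealthy(600, 1000)) throw new Error("missed a real drop");
```

Real systems add smoothing, seasonality, and statistical tests on top, but the core idea is this simple: a baseline, a current value, and a tolerance.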
To help address the above issues and to translate the Agile testing pyramid into concrete actions, I recommend that companies take a typical initiative, for example "Add American Express credit card processing", and then come up with, say, 30 examples of unit tests, 8 examples of integration tests and 3 examples of UI tests. They should present this approach and practical example to everyone involved so that there is a shared understanding of the approach and of how to tackle the real-world examples they work with every day. For many non-technical folks this will be the most practical way to understand how to correctly divide up testing in an Agile environment, and it will help counter the desire to 'test everything in the UI'.
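To make the unit-test bucket of that exercise concrete, here is a hedged sketch for the American Express initiative, using a hypothetical `isAmexCardNumber` helper (the name and placement are assumptions, not from any specific codebase). American Express numbers start with 34 or 37 and are 15 digits long, which alone suggests several of the 30 unit-test examples:

```javascript
// Hypothetical helper for the "Add American Express credit card processing"
// initiative: AmEx numbers start with 34 or 37 and are exactly 15 digits.
function isAmexCardNumber(number) {
  const digits = String(number).replace(/[\s-]/g, ""); // tolerate spaces/dashes
  return /^3[47]\d{13}$/.test(digits);
}

// A handful of the ~30 unit-level examples:
if (!isAmexCardNumber("371449635398431")) throw new Error("valid AmEx rejected");
if (!isAmexCardNumber("3714 4963 5398 431")) throw new Error("spaced AmEx rejected");
if (isAmexCardNumber("4111111111111111")) throw new Error("Visa accepted as AmEx");
if (isAmexCardNumber("37144963539843")) throw new Error("14-digit number accepted");
```

The 8 integration examples would then exercise this logic against the payment gateway, and the 3 UI examples would cover only the happy and sad paths through the checkout form.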