Continuous Delivery - Don't be Compelled to Over Speed!
I’m sure many of us in the software industry hear about Continuous Delivery and the DevOps movement more often than not these days. The ideas of a deployment pipeline, automated build, deploy, test and release processes, improved collaboration and quick feedback are all very compelling to adopt in order to achieve a repeatable, reliable and predictable release process. Sounds quite good, doesn't it? But I wanted to understand the whole concept more deeply, so I decided to scratch below the surface. There are a few principles and fundamentals promoted by the CD founders which I try to reason through below. I thought I'd share this as it may be useful for anyone who is looking at, or tasked with, CD adoption in their organization and wants to build a more practical understanding of it.
1) Continuous Delivery is built on the premise that “Quality does not equal perfection” and that “the perfect is the enemy of the good” as long as the delivered software is of sufficient quality.
If you are in a business such as eCommerce, dealing with cut-throat competition where you must release new features to customers on a daily basis or the competition will disrupt your bottom line, then you have no choice but to buy into this argument. You can always switch back quickly to the last release if the new deployment causes an issue in production, fix the issue and release again within the day or the next day. Quite powerful, isn't it? But what if you are in a mission-critical sector, say healthcare, policing, banking, electronic trading, public services such as tax collection or benefits payouts, or insurance? The consequences of releasing imperfect software even for an hour, despite the ability to roll back, can severely damage your firm's bottom line and reputation, or the value you deliver to your customers. So don't be compelled to compromise quality for speedy delivery of new features if that is not right for your business.
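The "switch back quickly" idea is often implemented as a blue-green deployment: two environments stay live, and a rollback is just a routing flip rather than a redeploy. Here is a minimal sketch of the mechanism; the `Router` class and version names are hypothetical, purely for illustration.

```python
# A minimal sketch of the quick-rollback idea behind blue-green deployment.
# Traffic is routed to one of two environments, so reverting a bad release
# is a near-instant pointer flip. All names here are illustrative.

class Router:
    """Routes traffic to whichever environment is currently live."""

    def __init__(self, blue_version: str, green_version: str):
        self.environments = {"blue": blue_version, "green": green_version}
        self.live = "blue"  # blue serves traffic initially

    def deploy(self, version: str) -> None:
        # Install the new version on the idle environment, then switch to it.
        idle = "green" if self.live == "blue" else "blue"
        self.environments[idle] = version
        self.live = idle

    def rollback(self) -> None:
        # The previous release is still running on the other environment,
        # so rolling back is just flipping the pointer.
        self.live = "green" if self.live == "blue" else "blue"

    def serving(self) -> str:
        return self.environments[self.live]

router = Router(blue_version="v1.0", green_version="v0.9")
router.deploy("v1.1")   # new release goes live on the idle environment
assert router.serving() == "v1.1"
router.rollback()       # issue found in production: flip back to v1.0
assert router.serving() == "v1.0"
```

Note that even with this mechanism, the bad release was live until someone noticed, which is exactly the window the mission-critical argument above worries about.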
2) Continuous Delivery requires that you release frequently, so that the delta between releases is very small.
By doing so, you significantly reduce the risk to your business, and if you have to roll back to the previous release there isn't a substantial loss of functionality to your customers. In recent times I've heard so many firms talking about their frequency or speed of continuous delivery. If you are not doing at least one or a few releases a day into production, you're probably made to feel like the bad guy. It doesn't require much analysis to realize that between speed and quality, speed is the winner here, and that you can only achieve this high speed if you are making very small incremental changes to your software. This may work well where you have well-established software in the market supporting your business, you are in a profitable phase, and you're not looking to make drastic changes: you only want to continue in a low-risk, maintain-the-status-quo, incremental-change mode. But what if you are in a situation that requires drastic changes to your software for survival, or your business is facing a new technological disruption, or the feedback you're getting is quick but mostly negative and there are significant quality concerns that need addressing? You probably won't want to over speed through your continuous delivery pipeline then, so don't be compelled to do so at such times. And this may be the case more often than you realize.
3) In Continuous Delivery feedback must be delivered quickly, and the key to fast feedback is automation.
As you start making small incremental changes per deployment, the amount of manual exploratory testing required for a release becomes minimal, bringing the deployment pipeline very close to 100% automated testing. Automated testing helps reduce regression risk, increases the reliability of the deployment pipeline and offers quick feedback. If you are going through a major transformation project, remediating older systems or building a new system from scratch, it may not be feasible to achieve that level of automation while trying to speed through the deployment pipeline, so don't be compelled to be 100% automated from the word go. You may be able to achieve that after a few release iterations beyond the initial delivery.
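The fast-feedback idea boils down to running stages in order of cost and stopping at the first failure, so the team hears about a problem as early as possible. A minimal sketch, with stage names chosen purely for illustration:

```python
# A minimal sketch of a fail-fast deployment pipeline: each stage is a
# callable returning True/False, run in order from cheapest to most
# expensive, and the pipeline halts at the first failure so feedback
# arrives as early as possible. Stage names are illustrative.

def run_pipeline(stages):
    """Run stages in order; return (passed, feedback), stopping on first failure."""
    for name, check in stages:
        if not check():
            return False, f"FAILED at stage: {name}"
    return True, "all stages passed"

stages = [
    ("commit/unit tests",  lambda: True),
    ("acceptance tests",   lambda: False),  # simulate a failing stage
    ("manual exploratory", lambda: True),   # never reached
]

ok, feedback = run_pipeline(stages)
assert not ok
assert feedback == "FAILED at stage: acceptance tests"
```

The later stages never run once an earlier one fails, which is why the more of the pipeline you can automate, the quicker that failure signal comes back.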
4) In Continuous Delivery unit tests are the most critical tests, and a failure at this level should stop the deployment pipeline. Some failure rate at the functional acceptance test level may still be acceptable.
The driver for favoring unit tests over acceptance tests is, again, speed: unit tests run fastest on the cheapest hardware. To achieve faster execution cycles, unit tests may use a lot of fakes to isolate dependencies, and the environment in which they run is usually not that close to production either. Nonetheless, unit tests are important, typically covering more than 75% of the code base and focused on critical functionality of the software. Acceptance tests, although run in an environment as close to production as possible with actual components rather than fakes, take much longer to run. If the mantra is speed, then the more acceptance tests you have, the slower the deployment pipeline flows and the slower the feedback. You may mitigate this with parallel execution, but the preference is still for more unit tests. Do you see the tie-up with the first point above about sufficient quality vs. perfect quality? So don't be compelled to trade acceptance tests for unit tests if quality matters more to you than over speeding.
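The "fakes to isolate dependencies" point is worth seeing concretely: the slow dependency (a database, a remote service) is swapped for an in-memory stand-in, which is exactly why the test runs in microseconds but exercises an environment nothing like production. A small sketch; `PriceService` and `FakePriceStore` are hypothetical names invented for this example.

```python
# A sketch of why unit tests are fast: the slow dependency (a database,
# say) is replaced by an in-memory fake, so the test needs no environment,
# no network and no real data store. Names are illustrative only.

class FakePriceStore:
    """In-memory stand-in for a real data store."""
    def __init__(self, prices):
        self._prices = prices

    def get(self, sku):
        return self._prices[sku]

class PriceService:
    def __init__(self, store):
        self.store = store  # real store in production, a fake in unit tests

    def total(self, skus):
        return sum(self.store.get(s) for s in skus)

# Unit test: runs instantly, but tells you nothing about how the real
# store behaves under production conditions -- that is the acceptance
# tests' job.
service = PriceService(FakePriceStore({"A": 10, "B": 5}))
assert service.total(["A", "B", "A"]) == 25
```

An acceptance test for the same behaviour would wire `PriceService` to the real store in a production-like environment, which is precisely what makes it slower and more valuable as a quality check.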
5) Continuous Delivery is inspired by the philosophy and ideas of the lean manufacturing movement
Lean manufacturing is all about eliminating waste and improving flow in assembly-line production environments. The concept stems from industrial engineering, where the requirement is to mass-produce the same product with no variation in quality. Reducing non-value-added work, overburden and unevenness in the assembly line results in reduced cost and increased throughput, and quality is maintained by stopping and fixing problems as they occur in the line. There is a fundamental difference here between Lean and CD: software is continuously changing with each iteration through the deployment pipeline, which is simply not the case in lean manufacturing. Comparing throughput in a manufacturing unit to software deployment frequency is a dangerous analogy, so don't be compelled to over speed by accepting it.
To sum up: if you already have proven software that needs only incremental changes for some time to come, or your business requirements are such that you can live with software of sufficient quality, but releasing new features quickly is essential for you to compete in the market, then it's a good idea to go ahead full throttle when implementing continuous delivery and break a few speed records. If any of these factors doesn't apply, if perfect quality of software is paramount for your business, if you are undertaking a major technology transformation of your existing software, or if you are looking to build a radically new software solution, then you probably want to evaluate how hard to push on your continuous delivery speed and not be compelled to over speed just because someone else is doing so. Unless, that is, you are adventurous enough to risk a speed crash; think about how that would impact your customers. Perhaps a better strategy is to have a fully automated software deployment pipeline in place where you are in control of the speed and able to adapt it to suit your business needs.
I would add another warning--don't get pulled away from the architecture you started with through implicit restructuring of what you're building. It's rather easy to undermine the integrity of an application through death by a thousand cuts. You plow along responding to each and every 'critical' request from the stakeholders or product owner until you realize that you have bastardized your architecture, compromising your ability to respond to future change requests or changes in the business context. This is the biggest danger in Agile and it's entirely preventable with discipline and patience. Unfortunately, these seem to be in short supply these days.
That's correct, but it only reflects the engineering perspective of continuous delivery; what about the user perspective? In other words, what happens between release and delivery? https://caminao.wordpress.com/2015/09/07/agile-dynamic-programming/