DevOps : A Simple Workflow!!!
Venkata Nagarjuna Dondapati, Raghul Mukundan, Leonard Lehew, Viveka Gorla.


Software development has come a long way, with many innovative changes in the last few years. In particular, the industry's shift towards Agile methodology is worth mentioning. This shift has made it easy for organizations to build software quickly, and because Agile is iterative, it is easy to bring new additions, improvements, and fixes to existing software.

Coping with release activities after every iteration has become a huge challenge for teams and organizations. Developers focus on having production-ready software by the end of each cycle, while the operations team resists releasing it. How operations works depends on the organization, but most organizations have a single operations team rather than a dedicated operations resource per development team, so development teams have to schedule their releases with Operations. Developers wanting to release at the end of every cycle puts a heavy burden on that one team, and it seriously affects the stability and quality of the entire application. Every time a development team needs to release a bit of software, the entire application must be brought down, since it is not ideal to let customers access the software while it is being patched. Once the release activities are done, the teams perform a sanity check on the application to verify that the new features and fixes work as expected. If they find issues, the application is rolled back to its older state and the release is pushed to a future date.

Due to competition, businesses want to deliver new features and fixes to their customers in the shortest time possible. The key thing to understand here is that customers drive businesses. A customer who is unhappy with a service does not hesitate to leave a bad review and switch to a competitor with better service. This forces businesses to be agile to keep their customers happy and stay ahead of the competition. How do they do it? Do businesses have to compromise on quality to deliver new things quickly? There is a logjam here: the operations team is not ready to push things out quickly because it is worried about stability, while the business wants things out fast. Is it possible to have both?
The simple answer is, “yes.” How can this be achieved? DevOps is the answer.

The basic building blocks of any software development methodology are requirements gathering, development, testing, building, and deploying. We used to have a different team serving each specific activity: Business Analysts for requirements gathering, Developers for development, a QA team for testing, and a Release team for build and deployment. As the software moves from one phase to another, there is a high chance of bugs being introduced, and if the handoffs are manual, they cost a lot of time and money. So why can't a single team handle all these activities? Why can't we have developers sit in the requirements gathering sessions, develop the code, write the unit and integration tests, and automate the build and deploy process? This would tremendously reduce the costs for any organization and make sure the software is delivered on time. Sounds good, right? Is making developers part of every phase the solution to this problem? The real question is, can the developers do everything? Hold on to that question; we will delve further into this topic and find the answer.

Coming to DevOps, what exactly is it? DevOps refers to Development and Operations going hand in hand instead of separate teams handling each specific process. It is better described as a culture than a process: it involves expanded collaboration among stakeholders, developers, testers, operations, and customers. Our experience with "agile" practices has shown us the advantages of "cross-functional" teams combining end users, developers, testers, and other specialists working together to build software. DevOps extends this organizational approach to also include "operations" professionals, to make sure the software is delivered on time and as expected. The major advantages of DevOps are shorter development cycles, automation of manual and error-prone work, improved bug detection, fewer deployment failures, and faster recovery from failures.

There are various DevOps flows and tools that can be used depending on the organization's requirements and the team's maturity in following agile. Today, in this cloud-driven environment, the various cloud service providers offer their own tools, making it easy for organizations to adopt DevOps into their culture. We will present a simple DevOps workflow, the underlying process behind any kind of deployment pipeline.

Requirements Gathering: Requirements gathering is the first and most important phase of any software project. It involves business analysts/product owners, scrum masters, developers, testers, operations, and stakeholders taking part in the discussions to understand the purpose of the software being built. Requirements, especially in an agile environment, are always changing, and it is the agile team that must adapt to these ever-changing requirements. Due to this dynamic nature, requirements are generally gathered and planned only for a short span, such as two weeks. This span of time in agile terms is called a Sprint. So, only a sprint's worth of requirements is generally planned, and as the end of the sprint approaches, requirements for the next sprint are planned. As we discussed earlier, businesses want to push things quickly to their customers, and planning only a sprint's worth of tasks gives the option to roll the features developed each sprint into production. It is important that every feature planned in a sprint is developed and thoroughly tested. This constant push towards quicker releases at the end of every sprint leads to Continuous Integration.

Continuous Integration: A project can have multiple developers working on the source code at the same time, which means the source code changes very often. New files are added to or removed from the code repository, and the same file may be changed by multiple developers. What could this lead to? Can you confidently put such a dynamic code base into production whenever you want to? Believe me, that would be chaos and could cost you a fortune. What is the solution, then? You could ask the developers to go slow and hold off on code changes until the testing team has tested everything. Think about this for a minute: for every code change a developer makes, the testing team would have to test the entire application, and the other developers would have to wait until that testing is done. Following Agile methodology means the requirements are constantly changing, and as we said before, businesses want to get fixes and new features to their customers quickly. That means development has to be fast-paced, so the approach of asking developers to go slow falls flat and will not work. Let's list the problems we are trying to solve:

  1. #NoFearReleases: confidently moving a dynamic code base to production whenever you need to.
  2. #ThoroughTesting: thoroughly testing each and every change made to the code base.
  3. #FasterDevelopment: adopting a process that does not slow down development.

Great! Now that the problems are listed, let's get straight into the solutions.

Dynamic codebases require a development platform that can handle frequent code updates and maintain a version history. Back when I started coding, people used repository management tools like SVN, CVS, and ClearCase. These were great tools, but they do not fit our Agile use case well and provide no out-of-the-box solutions to the problems listed above. Then arrived Git, a source control system that changed the landscape and paved the way for Continuous Integration and a whole lot of new possibilities (I am not going in depth on Git, as there are great tutorials online). Git uses the concept of branches to let multiple developers work on the code base at the same time. Teams keep production-ready code in one branch, while developers create independent branches to work on their features. This gives the team confidence to release code to production at any time, and developers can keep working on their features without stepping on each other's code. Once developers are done with their changes, the code in their branch is merged into the production-ready branch. Would you trust this code to be defect-free? Merging the code without testing creates more problems: it pollutes the production-ready branch. More developers and more features mean more chances of defects, so it is ideal to test the code before merging. How do we achieve this? A dedicated testing team could test each developer's changes before merging the code, but what effort would it take when multiple developers try to merge at the same time? Wouldn't that be a bottleneck? So a dedicated testing team isn't going to solve the problem. What will, then? The answer is Automated Testing: using software to control the execution of tests and then compare the actual outcomes with predicted outcomes.
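As a minimal illustration of that idea, here is a self-contained Python sketch: a piece of application code plus an automated test that compares actual outcomes with predicted ones. The apply_discount function is purely hypothetical, invented for this example; in a real project such tests would live in the repository and run on every merge.

```python
def apply_discount(price, percent):
    """Application code under test: apply a percentage discount (hypothetical)."""
    if not (0 <= percent <= 100):
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount():
    # Compare actual outcomes against predicted outcomes for known inputs.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(59.99, 0) == 59.99
    # Invalid input should be rejected, not silently accepted.
    try:
        apply_discount(100.0, 150)
        assert False, "expected ValueError for an invalid discount"
    except ValueError:
        pass


if __name__ == "__main__":
    test_apply_discount()
    print("all tests passed")
```

A test runner (pytest, JUnit, and so on) automates exactly this: it discovers such tests, executes them, and reports any mismatch between actual and predicted outcomes.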
Automated testing is much faster; since a machine performs the tests, it catches more issues sooner than manual testing would. Tools like Selenium and TestingWhiz can be used to write the test scripts, and an automation server is required to run them; Jenkins and TeamCity are examples of automation servers. The beauty of the Git ecosystem is that it is more than a simple repository manager: the hosting platforms built around Git (GitHub, GitLab, Bitbucket, etc.) integrate with systems like Jenkins, TeamCity, and other build automation tools. Whenever a developer merges code from his branch into the production branch, a webhook can trigger a job in Jenkins that runs the automated tests and pushes the artifacts to a binary repository (Nexus, Artifactory, etc.). You can also trigger jobs that run regression tests to make sure the new changes don't break existing behavior, alongside unit tests for the new code. This way, every change tests both the new code and the stability of the existing code. That is continuous integration. Continuous integration, with the help of a good version control system like Git, solves the problems listed above that an agile team faces. Once we figure out Continuous Integration, the next issue arises. Guess what that is? Continuous Deployment. Let's get to the bottom of it in the next section.
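The trigger-build-test-publish flow just described can be sketched conceptually. The stage functions below are stand-ins, not real invocations of Git, a test runner, or Nexus; the point is the shape of a CI job: stages run in order, and a failure at any stage stops the build before anything reaches the binary repository.

```python
def checkout(ctx):
    ctx["source"] = "workspace/"      # stand-in for `git clone` of the merged branch
    return True


def run_tests(ctx):
    ctx["tests_passed"] = True        # stand-in for running the automated test suite
    return ctx["tests_passed"]


def publish_artifact(ctx):
    ctx["artifact"] = "app-1.0.jar"   # stand-in for a Nexus/Artifactory-style upload
    return True


def ci_pipeline():
    """Run each stage in order; stop at the first failure."""
    ctx = {}
    for stage in (checkout, run_tests, publish_artifact):
        if not stage(ctx):
            return f"FAILED at {stage.__name__}", ctx
    return "SUCCESS", ctx


status, ctx = ci_pipeline()
print(status)            # SUCCESS
print(ctx["artifact"])   # app-1.0.jar
```

If run_tests returned False, the loop would exit before publish_artifact ever ran, which is exactly why a broken merge never pollutes the binary repository.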

Continuous Deployment: Continuous Deployment is an extension of Continuous Integration. While Continuous Integration focuses on constantly updating the code base and keeping it free of errors, Continuous Deployment aims at moving this code to the live environment, readily available to users, without any manual intervention. This phase involves running all the integration tests, smoke tests, and automated tests to make sure everything runs fine and stability is not affected. Once all the tests pass, the code is pushed to the dev/integration environment automatically, where it is ready for other kinds of testing (which could be automated too). Developers should make sure that every piece of new code pushed has automated tests written and run; this verifies that the existing software still works and that the new changes behave as expected. Once all the required tests pass in the dev/integration environment, the code is pushed to UAT and then to the live environment. The process can be configured so that every small change goes live, with no need for a separate release carrying a huge chunk of changes. Note that these changes can reach the production environment every day, without customers even noticing, and with a very low error rate. The operations team (or even the developers) help here by writing automated deployment scripts and automating the other manual processes involved. It is a one-time setup that takes care of all the edge cases in the deployment process. This way, the life of the operations team becomes simple, and they can handle hundreds of releases daily. For example, Amazon famously reported (back in 2011) deploying code to production every 11.6 seconds on average, with only about 0.001% of those deployments causing an outage.
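The promotion logic can be sketched as follows. The environment names and the single tests_green gate are illustrative assumptions, not any specific tool's API; in practice each environment runs its own suite of smoke and integration tests before the change is promoted further.

```python
# Each change is promoted through the environments in order; a failed
# test gate in any environment stops the promotion and rolls back,
# so a bad change never reaches users.
ENVIRONMENTS = ["dev", "uat", "production"]


def run_gate(env, change):
    # Stand-in for the smoke/integration/automated tests in each environment.
    return change.get("tests_green", False)


def deploy(change):
    reached = []
    for env in ENVIRONMENTS:
        if not run_gate(env, change):
            return {"status": "rolled back", "reached": reached}
        reached.append(env)
    return {"status": "live", "reached": reached}


print(deploy({"id": "fix-123", "tests_green": True}))
# {'status': 'live', 'reached': ['dev', 'uat', 'production']}
print(deploy({"id": "fix-124", "tests_green": False}))
# {'status': 'rolled back', 'reached': []}
```

Because the gates are automated, this promotion can run on every merge, which is what lets small changes flow to production daily without a scheduled release window.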

This process, if handled in an organized way, is less error-prone and more efficient. Building an effective CI/CD pipeline is therefore the first and most important step in DevOps. Here is a list of the simple steps that make up a developer's routine under DevOps:

  1. The developer understands the requirements of the software being built by attending the sessions, and all the tasks are documented in issue-tracking software.
  2. The developer then works on these tasks one by one after defining the sprint goals and estimating the work. The estimate should include the time required to write unit tests, integration tests, and other kinds of tests.
  3. The developer works on a task, pushes the code to the corresponding branch, and raises a pull request. The build can be configured to run all the integration tests (including the tests newly written for this code) and verify that they all pass, ensuring the new changes do not break the code.
  4. If any of the tests fail, the developer fixes them and makes sure they pass. It's as simple as this: FIX IT IF YOU BREAK IT.
  5. Once all the tests pass, the code is merged into the develop branch, which in turn triggers a complete build (the automated CI process), pushes the binaries to the binary repository, and performs an automated deployment to the integration server.
  6. Now the new code is ready in the integration environment, where the QA team can test all the cases.
  7. Once we get a QA sign-off, the build can be promoted automatically to the UAT environment. All of this configuration is a one-time setup and involves no manual process.
  8. Once all the UAT sign-offs are in, we can deploy to production following the release process. Note that the release process is also automated: it pushes every small change to production without manual intervention and without any downtime (this has become much easier with containerization and microservices architectures). This way, we can push changes to the live environment every day.

This was a simple introduction to the DevOps flow from the developer's perspective. We will deep-dive into the specific tools in the coming articles. We appreciate you taking the time to read this article; please leave feedback and questions in the comments section, and we will try to address them.


#DevOps #ContinuousIntegration #ContinuousDeployment

