It’s no surprise that Amazon Web Services is far ahead of most of the industry in continuous integration and continuous deployment of software, especially since it advertises itself as a go-to place for organizations seeking to put CI/CD into full practice. The online services giant has taken its own internal CI/CD practices to the next level, however, making them an essentially “hands-off” operation.
At AWS, changes in microservices are automatically deployed to production “multiple times a day by continuous deployment pipelines,” according to Clare Liguori, a principal software engineer at AWS. This pipeline-centered strategy is key to its ability to keep pumping out code. In a recent post, she explains how Amazon moves software through its phases rapidly and automatically. Remarkably, managers and developers spend little to no time shepherding deployments and watching logs and metrics for any impact. “Automated deployments in the pipeline typically don’t have a developer who actively watches each deployment to prod, checks the metrics, and manually rolls back if they see issues. These deployments are completely hands-off. The deployment system actively monitors an alarm to determine if it needs to automatically roll back a deployment.”
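The “hands-off” rollback loop Liguori describes can be sketched in a few lines: instead of a developer watching dashboards, the deployment system polls a high-severity alarm and rolls back on its own if the alarm fires. The sketch below is an illustration of that idea, not AWS’s actual implementation; all names (`Alarm`, `Deployment`, `monitor_deployment`) are hypothetical, and the scripted alarm states stand in for a real metrics service.

```python
# Minimal sketch of alarm-driven automatic rollback (hypothetical names,
# not AWS code). A real system would query a metrics/alarm service.
import time
from dataclasses import dataclass


@dataclass
class Alarm:
    """Stand-in for an aggregate high-severity alarm.

    `states` scripts the alarm readings for this sketch; a real
    implementation would query live metrics instead.
    """
    states: list
    _i: int = 0

    def in_alarm(self) -> bool:
        state = self.states[min(self._i, len(self.states) - 1)]
        self._i += 1
        return state


@dataclass
class Deployment:
    version: str
    rolled_back: bool = False

    def roll_back(self):
        # In practice this would redeploy the previous known-good version.
        self.rolled_back = True


def monitor_deployment(deployment, alarm, checks=5, interval=0.0):
    """Poll the alarm a fixed number of times; roll back automatically on alarm."""
    for _ in range(checks):
        if alarm.in_alarm():
            deployment.roll_back()
            return "rolled_back"
        time.sleep(interval)
    return "healthy"
```

For example, a deployment monitored against an alarm that fires on the second check is rolled back with no human in the loop, while one whose alarm stays quiet for the whole monitoring window is promoted as healthy.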
Software code at AWS moves through four major stages, with automated mechanisms and processes that check and double-check results every step of the way:
- Source changes, validated: Amazon’s pipelines “automatically validate and safely deploy any type of source change to production, not only changes to application code,” says Liguori. “They can validate and deploy changes to sources such as website static assets, tools, tests, infrastructure, configuration, and the application’s underlying operating system. All of these changes are version controlled in individual source code repositories. The source code dependencies, such as libraries, programming languages, and parameters like AMI IDs, are automatically upgraded to the latest version at least weekly.”
- Build, a teamwork process: The code is compiled and unit tested, Liguori continues. “Teams can choose the unit test frameworks, linters, and static analysis tools that work best for them. In addition, teams can choose the configuration of those tools, such as the minimum acceptable code coverage in their unit test framework.”
- Testing and more testing: The pipeline runs the latest changes through a set of tests and deployment safety checks. “These automated steps prevent customer-impacting defects from reaching production and limit the impact of defects on customers if they do reach production.”
- Production, incorporating “bake time”: Code is released into production staged across AWS regions, to mitigate any issues that arise. “We have found that grouping deployments into ‘waves’ of increasing size helps us achieve a good balance between deployment risk and speed,” Liguori says. “Each wave’s stage in the pipeline orchestrates deployments to a group of regions, with changes being promoted from wave to wave. New changes can enter the production phase of the pipeline at any time.” She adds that every production rollout has a “bake time” to assess its impact, as “sometimes a negative impact caused by a deployment is not readily apparent. It’s slow burning; it doesn’t show up immediately during the deployment, especially if the service is under low load at the time. Each prod stage in the pipeline has bake time, which is when the pipeline continues to monitor the team’s high-severity aggregate alarm for any slow-burning impact after a deployment is completed and before moving on to the next stage.”
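The wave-and-bake pattern from the production stage above can be sketched as follows. This is a simplified illustration under stated assumptions, not AWS code: the wave groupings, region names, and function names (`rollout`, `bake`) are all hypothetical, and the alarm is modeled as a simple callable. The key property it demonstrates is that an alarm during any wave’s bake time halts promotion, so later waves never receive the change.

```python
# Hypothetical sketch of wave-based production rollout with bake time.
# Wave sizes increase so early waves limit the blast radius of a bad change.
import time

WAVES = [  # invented wave groupings for illustration only
    ["us-west-2"],
    ["us-east-1", "eu-west-1"],
    ["ap-southeast-2", "sa-east-1", "eu-central-1"],
]


def deploy_to_regions(regions, version, deployed):
    """Record the version as deployed in each region of the wave."""
    for region in regions:
        deployed[region] = version


def bake(alarm_in_alarm, bake_seconds=0.0, checks=3):
    """Monitor the high-severity alarm during bake time.

    Returns False (abort the rollout) if the alarm fires on any check.
    """
    for _ in range(checks):
        if alarm_in_alarm():
            return False
        time.sleep(bake_seconds / checks)
    return True


def rollout(version, alarm_in_alarm, waves=WAVES):
    """Promote a change wave by wave; halt if any wave's bake time alarms."""
    deployed = {}
    for wave in waves:
        deploy_to_regions(wave, version, deployed)
        if not bake(alarm_in_alarm):
            return deployed, "halted"  # remaining waves never get the change
    return deployed, "complete"
```

A quiet alarm lets the change promote through all three waves; an alarm that fires while the second wave is baking stops the rollout with only the first two waves’ regions affected.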
AWS doesn’t just slap automation onto its processes and hope for the best — its automated deployment practices are carefully built, tuned and tested, Liguori observes, “based on what helps us balance deployment safety against deployment speed. At the same time, we want to minimize the amount of time developers need to spend worrying about deployments.”