Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation

Software Deployment at the Push of a Button

Fernando Berzal

Even though software development can't be fully automated, some of its processes can and should be. In Continuous Delivery, authors Jez Humble and David Farley propose an automated process for taking every change from check-in to release.

Continuous delivery ties together the build and deployment processes. It can be viewed as the logical consequence of continuous integration (see M. Fowler, "Continuous Integration," May 2006; http://martinfowler.com/articles/continuousIntegration.html), a practice whereby each member of a team integrates his or her work frequently and verifies each integration as soon as possible through an automated build and test. Like continuous integration, continuous delivery relies on a high degree of automation, and Humble and Farley provide helpful references to myriad software tools, ranging from well-known version-control, build, unit-testing, and functional-testing tools to continuous-integration software, artifact repositories, forensic tools, and environment-management systems. Moreover, they review the evolution of these tools and highlight both their key features and main drawbacks.

A fully automated, repeatable, reliable deployment process is, in some sense, the Holy Grail for operations teams. Continuous delivery pursues it by sharing the same deployment scripts across development, testing, staging, and production environments. Sharing these scripts ensures that they are thoroughly tested, which improves confidence in frequent deployments and reduces the associated operational risks. Like any other practice, however, continuous delivery depends on discipline to be effective: Humble and Farley caution against manual adjustments, which create subtle differences between environments and introduce hard-to-debug problems. After all, continuous delivery is supposed to make software delivery frequent and less stressful.
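
To make the shared-script idea concrete, here is a minimal sketch of what such a script might look like. The book prescribes no particular tool; the environment names, the config/<environment>.json layout, and the install_dir key below are illustrative assumptions, not examples from the text.

    #!/usr/bin/env python3
    """A minimal sketch of one deployment script shared by every environment.
    The environment names, config file layout, and install_dir key are
    hypothetical, not taken from the book."""
    import json
    import pathlib
    import shutil
    import sys

    ENVIRONMENTS = {"development", "testing", "staging", "production"}

    def deploy(environment: str, artifact: pathlib.Path) -> None:
        if environment not in ENVIRONMENTS:
            raise SystemExit(f"unknown environment: {environment}")
        # Only data varies between environments, never logic: the script
        # itself is identical everywhere, so every development and test
        # deployment exercises the exact code path production will use.
        config = json.loads(
            pathlib.Path("config", f"{environment}.json").read_text()
        )
        target = pathlib.Path(config["install_dir"])
        target.mkdir(parents=True, exist_ok=True)
        shutil.copy2(artifact, target / artifact.name)
        print(f"deployed {artifact.name} to {environment} ({target})")

    if __name__ == "__main__":
        deploy(sys.argv[1], pathlib.Path(sys.argv[2]))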

The authors have tried to make each chapter readable more or less in isolation, which causes a certain amount of repetition if you read the book from cover to cover. However, the text is also sprinkled with many cross-references, so chapters aren't really self-contained for readers who are new to the continuous delivery approach. In addition, the book includes some forward references that make it less readable than it should be.

The book is organized into three main parts:

  • Foundations — introduces the principles of regular, low-risk software releases and the practices that support them. Automation, version control, and continuous improvement are at the heart of continuous delivery; manual deployment is one of its most egregious antipatterns. The authors stress putting everything under version control: not just source code and build scripts but also deployment configurations and whole environments (since standard version-control tools might not suit whole environments, developers can version them in artifact repositories instead). They also advocate building binaries just once and separating them completely from configuration information. This avoids the subtle problems that varying compiler options can cause and makes the same binaries deployable in any environment, even though common development tools often work against it (the first sketch after this list illustrates the principle).
  • The Deployment Pipeline — describes the architecture that supports the continuous delivery practice. The term draws an analogy to microprocessor pipelines: every check-in launches a commit stage, which builds binaries, runs automated tests, and generates metrics. This stage completes in just a few minutes and is designed to catch the most common problems quickly, much like a conventional smoke test (the second sketch after this list outlines such a stage). The staged process lets developers get back to developing new features while the more expensive test stages run in parallel. This part of the book also covers build and deployment scripting in detail, emphasizing version-controlled scripts as the only mechanism for deploying your software. The prototypical deployment pipeline further includes automated acceptance testing (that is, using executable specifications as acceptance criteria) and automated testing of nonfunctional requirements, such as capacity. If you deem test-driven development useful, you will want automated acceptance tests in your essential toolbox as well.
  • The Delivery Ecosystem — covers more advanced topics. For example, the authors look at virtualization as a means to fast, reliable deployment. They address key issues in preserving data, both when deploying new versions of your software and when rolling such deployments back after errors are found. They also complete their analysis of what they consider your three degrees of freedom when working with large software systems: the deployment pipeline, branches (advanced version control), and dependency management (componentization).
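
The first sketch illustrates the build-once principle from the Foundations part: the artifact contains no environment-specific values, so the same binary can be promoted unchanged from testing to staging to production. The APP_* variable names are hypothetical, not taken from the book.

    """A sketch of configuration kept entirely outside the binary; settings
    are injected at deployment time, here through environment variables."""
    import os

    def load_settings() -> dict:
        # Rebuilding per environment is exactly what this separation
        # avoids: the binary stays identical, only these values change.
        return {
            "database_url": os.environ["APP_DATABASE_URL"],
            "log_level": os.environ.get("APP_LOG_LEVEL", "INFO"),
        }

    if __name__ == "__main__":
        print(load_settings())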
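
The second sketch shows the rough shape of a commit stage, independent of any particular CI server. The build command, test layout, and artifact paths are assumptions made for illustration; they are not the book's own examples.

    #!/usr/bin/env python3
    """A minimal sketch of a commit stage: build the binaries once, run the
    fast unit tests, and publish the artifact for the later, slower stages.
    scripts/package.sh and the artifact paths are hypothetical."""
    import shutil
    import subprocess
    import sys
    import time

    def step(name: str, command: list[str]) -> None:
        """Run one step, failing the whole stage on the first error."""
        print(f"--- {name} ---")
        started = time.monotonic()
        returncode = subprocess.run(command).returncode
        print(f"{name}: {time.monotonic() - started:.1f}s")
        if returncode != 0:
            # A red commit stage stops the check-in here, before the
            # slower acceptance and capacity stages ever see it.
            sys.exit(returncode)

    if __name__ == "__main__":
        step("build binaries (once)", ["scripts/package.sh"])
        step("unit tests", ["python", "-m", "pytest", "-q"])
        # Hand the binaries to the artifact repository so later stages
        # deploy exactly what was built and tested here, never a rebuild.
        shutil.copy("dist/app.tar.gz", "/var/artifacts/app.tar.gz")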

This book provides the big picture of continuous delivery, with enough detail for software project managers who are considering adopting it. For example, the final chapter describes a management approach that deals with "both conformance and performance." More technically oriented readers might find some examples simplistic, but they will also enjoy the authors' war stories and will probably recognize themselves in similar situations.

Despite its minor shortcomings (a repetitive style, one too many forward references, and some scattered typos), this book is a valuable guide to an even more valuable practice: bringing automation to an often neglected part of the software development process.

Fernando Berzal is a researcher at the Information and Communications Technologies Research Center (CITIC) and an associate professor at the Department of Computer Science and Artificial Intelligence, both at the University of Granada (Spain). He's a member of the IEEE Computer Society and a senior member of the ACM. Contact him at berzal@acm.org.
