
Advanced technology is not always immediately effective or well received. There’s a risk that users will not adapt to, or ultimately accept, a new software release because of confusing functionality or bugs in the system software. To increase the likelihood that a product or program succeeds immediately upon release, billion-dollar companies use canary deployments, which allow new versions to be tested on a limited market segment first.
Canary deployments offer a way to release new features in stages, mitigating risks such as service breakages, outages, and non-compliance. The approach also allows a swift and safe rollback to the previous working version while potential issues are investigated and solved. Canaries are typically shared with the most tech-savvy users initially, while everyone else stays on the older, stable product version. They are ideal for trialling new versions of an entire application that include specific features, upgrades, or configuration changes. Multiple canaries can be used to release different aspects of a program to millions of users, eventually exposing all or most of the user base in a cost-effective way. Unfortunately, canary deployments are not applicable in all circumstances, so it is important for industry professionals to understand the challenges and considerations involved in developing one.
The rationale behind the name “canary deployment” is quite literal. During the 19th century, coal mining became more common, and more dangerous, as motorized tools let miners travel deeper underground. Canary birds were brought into the mines so that their singing and chirping could be monitored: if the birds fell quiet, it indicated that carbon monoxide or other poisonous gases were present, because canaries are far more sensitive to these gases than humans.
Today, a canary release can “warn” developers and designers of potential flaws and inefficiencies in a new application version that consists of specific new features, upgrades, or configuration changes. A canary includes all the necessary application code and dependencies and is released to a target environment.
In a canary deployment, two application versions run simultaneously: the current “stable” version and the new canary version. Canary deployments generally follow a standard set of steps: deploy the canary alongside the stable version, route a small share of users or traffic to it, monitor its behaviour against the stable baseline, and then either widen the rollout gradually or roll back.
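The mechanics of that split can be illustrated with a short, framework-agnostic Python sketch. Everything in it is assumed for illustration: the backend names, the 5% canary weight, and the 2% error-rate threshold that would trigger a rollback.

```python
import random

# Hypothetical backends: the current stable version and the new canary.
STABLE = "checkout-v1"
CANARY = "checkout-v2"

CANARY_WEIGHT = 0.05     # start by routing ~5% of requests to the canary
ERROR_THRESHOLD = 0.02   # roll back if the canary's error rate exceeds 2%

requests = {STABLE: 0, CANARY: 0}
errors = {STABLE: 0, CANARY: 0}

def choose_backend() -> str:
    """Weighted random routing between the stable and canary versions."""
    return CANARY if random.random() < CANARY_WEIGHT else STABLE

def record(backend: str, failed: bool) -> None:
    """Track per-version request counts and failures for monitoring."""
    requests[backend] += 1
    if failed:
        errors[backend] += 1

def canary_healthy() -> bool:
    """Decide whether to keep ramping up or to roll back."""
    if requests[CANARY] == 0:
        return True
    return errors[CANARY] / requests[CANARY] <= ERROR_THRESHOLD

# If canary_healthy() keeps returning True, CANARY_WEIGHT is raised in steps
# (e.g. 5% -> 25% -> 50% -> 100%); otherwise traffic shifts back to STABLE.
```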
There are key benefits to implementing a canary deployment model configured for a specific infrastructure:
Canary deployments also have certain limitations:
Orchestration engines are frequently used to deploy, scale, and manage containerized applications that contain the dependencies required to run the software, such as configuration files, binaries, libraries, and frameworks. Kubernetes is a particularly popular orchestration engine used in tandem with Docker to deliver application software in container packages. Although Kubernetes does not provide canary deployment functionality out of the box, there are several ways to achieve this.
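One common approach, sketched below with the official Kubernetes Python client, runs the stable and canary versions as two Deployments behind the same Service and approximates the traffic split through their replica counts. The Deployment names, namespace, and 9:1 ratio are assumptions made for this example; it requires `pip install kubernetes` and access to a cluster.

```python
from kubernetes import client, config

def set_canary_ratio(stable_replicas: int, canary_replicas: int,
                     namespace: str = "production") -> None:
    """Approximate a traffic split by scaling two Deployments that share one
    Service selector, e.g. 9 stable + 1 canary gives roughly 10% canary traffic."""
    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()

    # Hypothetical Deployment names for the stable and canary versions.
    targets = {"myapp-stable": stable_replicas, "myapp-canary": canary_replicas}
    for name, replicas in targets.items():
        apps.patch_namespaced_deployment_scale(
            name=name,
            namespace=namespace,
            body={"spec": {"replicas": replicas}},
        )

# Start with roughly 10% of traffic on the canary; later calls such as
# set_canary_ratio(5, 5) or set_canary_ratio(0, 10) widen the rollout.
set_canary_ratio(9, 1)
```

Replica ratios only approximate traffic shares; a more precise split typically comes from a service mesh such as Istio or an ingress controller that supports weighted routing.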
A common way to enable or disable a new feature in software development is to use feature flags, which change the runtime behaviour of an application without restarting it. This strategy gives DevOps teams more granular control over a deployment and helps avoid hasty, costly rollbacks. In this approach, the canary release is deployed to all production nodes, but the new features are hidden behind feature flags. Each flag can be turned on or off to control the rollout of a feature to a subset of users, so the feature flag itself handles the canary deployment: if the canary test fails, the flag simply stays off; if all is well with the canary tests, the code can be deployed to all nodes and the feature-flag rollout can begin.
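As a minimal, library-free sketch of that idea, the Python example below buckets users deterministically by hashing the flag name and user ID, so a fixed percentage of users sees the new code path. The flag name, rollout percentage, and checkout functions are hypothetical; production systems typically use a dedicated flag service.

```python
import hashlib

def is_enabled(flag: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user into [0, 100) from a hash of the flag
    name and user ID, so the same user always gets the same decision."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < rollout_percent

# Stubs standing in for the stable and canary code paths.
def legacy_checkout(user_id: str) -> str:
    return f"stable checkout for {user_id}"

def new_checkout(user_id: str) -> str:
    return f"canary checkout for {user_id}"

def checkout(user_id: str) -> str:
    # The new code ships to every node but stays dark until the flag opens.
    if is_enabled("new-checkout-flow", user_id, rollout_percent=10):
        return new_checkout(user_id)
    return legacy_checkout(user_id)

print(checkout("user-42"))
```

Because the bucketing is deterministic, raising the rollout percentage only adds users to the canary group; nobody who already has the feature loses it mid-rollout.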
Consider these factors when planning a canary deployment:
Software delivery is always challenging, and the real test comes when users begin using a new release in production. Canary deployments allow DevOps teams to conduct controlled trials with real users while keeping their experience consistent. They are especially suitable for projects that leverage the flexibility of Kubernetes and closely monitor application performance. With careful monitoring in place, organizations can implement sophisticated deployment strategies that enhance an application’s reliability and user experience. The future of canary deployment looks promising as a fit for modern challenges, where new technologies emerge at an unprecedented rate. It has a clear advantage over other deployment models wherever automation is a priority: cloud-based and distributed applications, increasingly frequent deployments, microservice-based architectures, and environments with multiple deployment teams. More recently, in the field of AI and machine learning, a canary deployment strategy can be used to validate new models and model updates online under real-world conditions. Teams with an agile development practice will experience the benefits most of all.
Dinesh Chacko is an enthusiastic IT evangelist with knowledge of, and a passion for, AI, cloud computing, information technology, and online security. He has seen the IT landscape change, been part of organizational and cultural change during large company mergers, and has been involved in and affected by many first- and second-generation IT outsourcing deals. Dinesh understands the personal and professional challenges and opportunities created during organizational transitions. He has held multiple roles in small and large organizations across the public sector, oil and gas, banking, telecoms, financial services, and EU institutions, as well as directly for IT solution integrators. For more information, contact dinesh.chacko@ieee.org.
Disclaimer: The author is completely responsible for the content of this article. The opinions expressed are their own and do not represent IEEE's position nor that of the Computer Society nor its Leadership.