Can fog computing be the next solution to cloud resource allocation and management for mobility-aware applications?


With the advancement of IoT, the number of smart, connected devices is increasing. These geographically distributed devices produce and consume a huge amount of heterogeneous, dynamic data, known as 'Big Data', at the network edge close to end users. A new requirement for data management and computing capacity at the network edge has therefore evolved, driven by user mobility and the diverse requirements of applications. Since traditional cloud data centers cannot handle such extensive data or user mobility, it has become indispensable to rethink resource allocation and management in the cloud infrastructure. This is where distributed computing models such as fog computing, mobile clouds and vehicular networks come into play.

 

The article 'Mobility-aware application scheduling in fog computing' by Luiz F. Bittencourt et al. discusses the advantages of fog computing for faster data processing and computing at the edges of the network for applications that depend on users' geographical location. It gives an overview of the hierarchical fog computing infrastructure and illustrates how user access points can be developed into 'cloudlets' by equipping them with computation and storage capacity. Applications can be classified into different categories based on user mobility and the applications' Quality of Service (QoS) requirements, and these classes can inform the design of scheduling strategies for the fog computing infrastructure.

 

The article shows that combining application classes with fog computing scheduling policies, while taking user mobility into account, can reduce network delay and thereby improve application performance. For more detail, please follow the link:

https://www.computer.org/cms/Computer.org/magazines/whats-new/2017/07/mcd2017020026.pdf.


Architecting Application Deployment with Containers
SEP 02, 2016 04:51 AM

Introduction

Enterprise applications are moving to cloud-based deployments at a faster pace, giving cloud providers an opportunity to offer newer features and competitive pricing to attract wider adoption of their platforms. Although the Platform as a Service (PaaS) concept is gaining wide attention, its adoption lags behind IaaS approaches, since enterprise applications must be re-architected around a service-based core, which is costly to implement.

The traditional migration path has been to lift and shift the application into an IaaS framework, ironing out connectivity-related issues in processes and applications without major changes to the application or its deployment. It is becoming imperative for cloud architects to propose solutions built from cloud-agnostic components that adapt to the changing cost/value offerings of cloud providers, delivering a steady and optimal long-term ROI to their clients. Various open source and commercial cloud-agnostic tools are available to handle scaling, clustering and monitoring, and can be proposed with minimal differences in cost and performance.

This blog describes one exercise in shifting an application from an on-premise implementation to a container-based solution, along with monitoring and management tools that work on both the AWS and Azure frameworks without major changes.

Container technologies such as Docker offer tools that help bundle the application and shift it conveniently to a public cloud platform. The entire lifecycle of application deployment, monitoring and scaling had to be addressed in the proposed solution, which is discussed in the subsequent sections.

 

Brief History of Containers

Hardware hypervisors grew in popularity in the early 2000s, providing virtualization that let different users securely share the same hardware and thereby reduce cost. The LXC project adopted a similar concept in 2008, attempting to create isolated, self-contained instances of the Linux operating system running on top of the host operating system to use the underlying hardware more efficiently. This helped cloud hosting providers offer lower-cost packages without adversely affecting the performance of applications running in these independent operating systems.

The name 'container' originated in a 2006 Google project, later renamed Control Groups (cgroups), which formed the basis for LXC and, in 2011, for Cloud Foundry's Warden project, which attempted to run guest operating systems on different host operating system types, making containers more portable across platforms.

In 2013, Google offered tooling that bundled the components needed to build (lmctfy) and monitor (cAdvisor) containers across different platforms. Google's contribution of these efforts to the libcontainer project, alongside Docker, helped accelerate the adoption of container technologies. Google continues its development and support of the Kubernetes initiative, while Docker continues to build the ecosystem required for a full-fledged, highly available and scalable container clustering solution with independent monitoring.


Figure 1: Docker Container and underlying Linux components

Application considerations – Three tier scaling approach

The application to be migrated from on-premise to the public cloud was a microservices-based application developed in NodeJS. It is a headless (GUI-less) application exposing only secure REST-based API methods to interact with external systems. Applications interacting with this platform had to use a token mechanism (with an embedded timestamp) to validate their access, helping prevent replay attacks in addition to SSL-based authentication. The application can spawn multiple threads to handle load, but was capped at a certain number of threads to avoid internal resource overruns while processing requests.
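The post does not include the token-validation code; the following is a minimal NodeJS sketch of what a timestamp-embedded token check could look like, assuming an HMAC-signed token and a pre-shared secret (all names and the 30-second freshness window are illustrative):

```javascript
// Hypothetical sketch only, not the platform's actual code.
// Assumes the client sends token = base64(timestampMs) + "." + hexHMAC(timestampMs).
const crypto = require('crypto');

const SHARED_SECRET = process.env.API_SECRET || 'dev-secret'; // assumed pre-shared key
const MAX_AGE_MS = 30 * 1000; // assumed freshness window for replay protection

function verifyToken(token) {
  const [tsPart, sigPart] = String(token).split('.');
  if (!tsPart || !sigPart) return false;

  const ts = Buffer.from(tsPart, 'base64').toString('utf8');
  const expected = crypto.createHmac('sha256', SHARED_SECRET).update(ts).digest('hex');

  // Constant-time signature comparison (lengths must match first).
  if (sigPart.length !== expected.length) return false;
  const sigOk = crypto.timingSafeEqual(Buffer.from(sigPart), Buffer.from(expected));

  // Reject stale timestamps to block replayed requests.
  const fresh = Math.abs(Date.now() - Number(ts)) < MAX_AGE_MS;
  return sigOk && fresh;
}
```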

Because the application is stateless, it was decided to scale the platform with additional application instances and use elastic load balancing to route requests across the virtual machines hosting them. However, adding virtual machines proved less economical, since the average and peak load was not high enough to justify a dedicated virtual machine instance for each application installation.

Figure 2: Application Deployment Architecture

Based on utilization efficiency, it was decided to run multiple instances of the application in a single virtual machine using container-based technologies, as shown above. This led to the adoption of a "three tier scaling" approach: increase application threads first, then containers, and finally virtual machines to handle the load increase at peak hours, and scale down in the reverse order.
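As an illustration, a minimal sketch of this decision order (the caps and names are assumptions, not from the original exercise):

```javascript
// Hypothetical sketch of the three tier scaling order:
// threads first, then containers, then VMs; scale down in reverse.
const LIMITS = { threadsPerApp: 8, containersPerVm: 4, maxVms: 10 }; // assumed caps

function scaleUpAction(state) {
  if (state.threads < LIMITS.threadsPerApp) return 'ADD_THREAD';
  if (state.containers < LIMITS.containersPerVm) return 'ADD_CONTAINER';
  if (state.vms < LIMITS.maxVms) return 'ADD_VM';
  return 'AT_CAPACITY';
}

function scaleDownAction(state) {
  // Reverse order: release the most expensive resource (VMs) first.
  if (state.vms > 1) return 'REMOVE_VM';
  if (state.containers > 1) return 'REMOVE_CONTAINER';
  if (state.threads > 1) return 'REMOVE_THREAD';
  return 'AT_MINIMUM';
}
```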

Application Shift with Docker ecosystem

Docker is one of the leading container management tools and ecosystems under active consideration by many organizations for their cloud-agnostic application deployment approach. The process of building a Docker container with the application is shown below.

Figure 3: Docker Architecture (Courtesy: www.docker.com)

A Dockerfile (a plain-text build script) is used to build the container, which includes an operating system suitable for the application that will run inside it. A typical build file for creating a MongoDB container is described below.
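The original build file appears as an image; the following is a representative sketch consistent with the keyword walkthrough below (the base image, package name and port are assumptions, not the original file):

```dockerfile
# Sketch of a MongoDB build file; base image, packages and paths are
# assumptions, not the original file from the exercise.
FROM ubuntu:14.04
MAINTAINER Example DevOps Team <devops@example.com>

# Apply all operating system updates first.
RUN apt-get update && apt-get -y upgrade

# Install the database and any other required packages.
RUN apt-get install -y mongodb-server

# Declare the database port so it can be published on the host VM.
EXPOSE 27017

# Start the database when the container launches.
ENTRYPOINT ["/usr/bin/mongod", "--bind_ip", "0.0.0.0"]
```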


The keyword "FROM" indicates the base operating system image on which the container is built. "MAINTAINER" identifies the individual or organization building the image, which is important when the image is shared with others. "RUN" executes instructions inside the container before the image is finalized: first all operating system updates are applied, then the database and other packages are installed. The database port is declared with the "EXPOSE" keyword so that it can be published on the host virtual machine. "ENTRYPOINT" starts the application; additional commands can also be run before the application starts.

The Docker image created is uploaded to an S3 URL (used as a private registry integrated with the DevOps process) for access from the multiple VMs where the application has to be started.

The Docker build file for the application is created in a similar way to the one described above. The application container is linked to the database containers and started using a Docker Compose definition file (in YAML format) on the host VM. A sample Compose file is given below.
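The sample file in the original is an image; a representative docker-compose.yml matching the description (service names, image locations, ports and variables are assumptions):

```yaml
# Sketch of a Compose definition linking the application to its database;
# image names, ports and environment variables are assumptions.
version: '2'
services:
  db:
    image: registry.example.com/mongodb:latest
    environment:
      - MONGO_DATA_DIR=/data/db
  app:
    image: registry.example.com/node-app:latest
    links:
      - db
    ports:
      - "443:8443"
    environment:
      - DB_HOST=db
      - DB_PORT=27017
      - NODE_ENV=production
```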


Environment variables required for the application and database containers are set in the Compose definition file, along with any other instructions required at Docker image startup. The Docker Compose command internally uses the 'pull' and 'run' commands to start the containers on the Docker machine.

There are alternative ways to start Docker images, using Docker native commands, Docker Swarm and so on. The appropriate startup choice depends on the needs of the application and its ecosystem.

Clustering and Monitoring

Clustering containers helps increase availability and achieve higher scale for applications running in the cloud. Docker's Swarm tools provide the framework required to cluster containers of the same type and enable other containers to look up and use their services efficiently.

Swarm relies on a service discovery mechanism that registers these services and provides a lookup facility. On start-up, containers register themselves with this registry by name and broadcast their services to other containers. Registry mechanisms that Docker can work with include HashiCorp Consul, etcd and ZooKeeper.
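As an illustration of this wiring, here is a sketch of how a standalone (pre-swarm-mode, 2016-era) Swarm manager and worker might be pointed at a Consul registry; all addresses, ports and the Consul image are placeholders:

```sh
# Sketch only: standalone Docker Swarm with Consul discovery.
# All IP addresses and ports below are placeholders.

# Run a single-node Consul registry.
docker run -d -p 8500:8500 --name consul progrium/consul -server -bootstrap

# Start a Swarm manager that registers against the Consul registry.
docker run -d -p 4000:4000 swarm manage -H :4000 \
    --advertise <manager_ip>:4000 consul://<consul_ip>:8500

# Join each Docker host to the cluster through the same registry.
docker run -d swarm join --advertise=<node_ip>:2375 consul://<consul_ip>:8500
```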

Swarm itself is highly available, replicating its information between a primary and multiple secondaries that can run across different zones. A matching high-availability mechanism has to be chosen for the selected service discovery tool.

High Availability and Scaling

One of the key aspects of high availability is monitoring the various components involved in the service offering, so that component failures are detected and corrective actions performed in near real time. This can be achieved with the infrastructure monitoring tools provided by the cloud providers, in conjunction with application monitoring systems, to deliver a highly available architecture for the applications. Both commercial and open source application monitoring tools are available; Prometheus was the open source tool used in this exercise.

For successful monitoring of component availability, appropriate instrumentation has to be built into the applications, containers and clustering mechanisms to report their status to the monitoring tools. Instrumentation choices range from a minimal on/off status to detailed resource utilization and custom information about the internals of the applications and containers being monitored. One of the most popular tools, Google's cAdvisor, was used to provide details of the components used in the containers and the application stack.

Another reason for choosing Prometheus was its ability to integrate with NodeJS application clusters. This helps monitor the number of threads the application has made available and scale them according to the three tier scaling approach mentioned above.
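The post does not show this integration; a minimal sketch of a NodeJS cluster master exposing its worker count in the Prometheus text format (the port numbers and worker cap are assumptions):

```javascript
// Sketch: expose the worker count from a NodeJS cluster master in the
// Prometheus exposition format, so the scaler can act on app-level capacity.
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  const MAX_WORKERS = Math.min(4, os.cpus().length); // assumed thread cap
  for (let i = 0; i < MAX_WORKERS; i++) cluster.fork();

  // Minimal /metrics endpoint for Prometheus to scrape.
  http.createServer((req, res) => {
    const count = Object.keys(cluster.workers).length;
    res.setHeader('Content-Type', 'text/plain');
    res.end(`# TYPE app_worker_count gauge\napp_worker_count ${count}\n`);
  }).listen(9100);
} else {
  // Worker: the actual REST application would be started here.
  http.createServer((req, res) => res.end('ok')).listen(8443);
}
```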

The Prometheus framework offers the capability to set and monitor thresholds for various parameters of the applications, containers and VMs using the data collected by cAdvisor. Alerts are generated when thresholds are crossed and can be sent to various listeners; two listeners running in master-slave mode handled the alerts, so that the loss of one did not compromise the availability of the application clusters. Based on the resource utilization metrics, additional instances of VMs, containers and/or applications were brought up following the three tier scaling approach, and when application usage dropped, the same approach scaled resources back down to keep utilization optimal.
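The monitoring configuration is not shown in the post; a sketch of what a 2016-era Prometheus (1.x) setup might look like, with a scrape job for cAdvisor (job name and targets are assumptions):

```yaml
# prometheus.yml sketch: scrape cAdvisor on each node (targets assumed).
scrape_configs:
  - job_name: 'cadvisor'
    scrape_interval: 15s
    static_configs:
      - targets: ['node1:8080', 'node2:8080']
```

A matching threshold rule in the Prometheus 1.x rule language could look like this (metric labels and the 80% threshold are assumptions):

```
# Alert rule sketch: fire when a container's CPU stays above 80% of one
# core for five minutes.
ALERT ContainerHighCpu
  IF rate(container_cpu_usage_seconds_total{name=~"app.*"}[5m]) > 0.8
  FOR 5m
  LABELS { severity = "page" }
  ANNOTATIONS { summary = "Container CPU above threshold; scale up per the three tier approach" }
```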

The deployment architecture with all the components mentioned above is shown below:


Swarm agents and cAdvisor run on each node where the application and database containers are running. All the management, monitoring and scaling applications run in highly available mode alongside the application clusters. Shipyard is additionally used as a supplementary monitoring tool to visualize the Docker containers.

Conclusion

There are many application-level challenges in shifting applications from on-premise deployments to public cloud infrastructure. Added to these are the challenges introduced by the extra components required to ensure high availability, using a different set of tools from those in the enterprise environment. Appropriate tool choices, and personnel experienced with these tools and with public cloud migration challenges, are essential for an optimal lift and shift strategy.

References

http://www.docker.com/

https://docs.docker.com/engine/article-img/architecture.svg

Author Information

Maniappan Rajagopalan is a Senior Enterprise Architect in the Technology Office of the Engineering Services team at HCL Technologies. He can be reached at maniappan.r@hcl.com.
