Can fog computing be the next solution to cloud resource allocation and management for mobility-aware applications?


With the advancement of IoT, the number of smart, connected devices is increasing. These geographically distributed devices produce and consume a huge amount of heterogeneous, dynamic data, known as ‘Big Data’, at the network edge, close to the end users. A new requirement for data management and computing capacity at the network edge has therefore evolved, driven by user mobility and the diverse requirements of applications. Since traditional cloud data centers are not capable of handling such extensive data or user mobility, it has become indispensable to rethink resource allocation and management in the cloud infrastructure. This is where distributed computing models such as fog computing, mobile clouds, and vehicular networks come into play.

 

The article ‘Mobility-aware application scheduling in fog computing’ by Luiz F. Bittencourt et al. discusses the advantages of fog computing for faster data processing and computation at the edges of the network for applications that depend on users’ geographical location. It gives an overview of the hierarchical fog computing infrastructure and illustrates how user access points can be developed into ‘cloudlets’ by equipping them with computation and storage capacity. Applications can be classified into different categories based on user mobility and their Quality of Service (QoS) requirements, and these classes can influence the design of scheduling strategies for the fog computing infrastructure.

 

The article shows that combining application classes with fog computing scheduling policies, while taking user mobility into account, can reduce network delay and thus make applications perform better. For more detailed information, please follow the link:

https://www.computer.org/cms/Computer.org/magazines/whats-new/2017/07/mcd2017020026.pdf.


Cloud Native Orchestration with Docker v1.12 Swarm Mode
JUL 28, 2016


Dealing with container orchestration, clustering, high availability, and management has been a complex activity for most. We saw in my earlier post how Rancher can be used as one option to achieve this in a seamless manner. The issue, however, is that many sub-systems need to be taken care of to make everything work. One of the main reasons offerings like Rancher, Kubernetes, and Mesosphere have evolved has been to fill in the gaps where the native Docker ecosystem has been lacking. Docker's native offering of Swarm clusters has also been an option, but it required multiple other pieces to be set up to achieve a production-grade clustered container environment.

Docker recently came out with release candidate version 1.12, which has got most of us excited about the ease with which a cluster can now be set up, apart from some of its other features. To understand this better, let’s look at what a deployment architecture looks like using the current (pre-1.12) Swarm cluster setup.

Fig: Sample Swarm cluster architecture

Swarm was a standalone offering that required setting up multiple components, namely:

  • The Swarm Manager – for health checking, talking to the swarm nodes, scheduling containers, etc. For high availability, this needed to be combined with a service discovery framework via Consul, etcd, ZooKeeper, etc.
  • Docker nodes running Swarm Agents
  • Service discovery via ZooKeeper or Consul
  • If load balancing was required for the container instances, then a load balancer needed to be configured via the service discovery node, so that any containers being added or removed were dynamically updated in the load balancer template.
  • Setup of keys and encryption across the entire ecosystem.

With the new Docker Swarm Mode in v1.12 RC, Swarm is now natively part of the Docker Engine, and you can optionally run the Docker Engine in “Swarm Mode”, which enables you to set up a cluster of Docker Engines. The key factor here is the simplicity of the overall process, which requires just a couple of commands to achieve native clustering. No external framework is required for service discovery or load balancing any more.

Fig: Reference (https://blog.docker.com/2016/06/docker-1-12-built-in-orchestration/)

 

Some of the key highlighted features include:

  • Built-in orchestration (Swarm mode)

  • No need to set up any external component to enable clustering

  • Built-in service discovery: Docker now provides an internal, strongly consistent, distributed state store

  • Consistency and resiliency for apps and services in case of a node failure

  • Maintenance and diagnostics support using features like the “Drain” availability mode for nodes in a swarm cluster

  • A new service deployment API designed to let you define a complete application stack with a desired state, so that the swarm manager can make sure the actual state is monitored and maintained as per the desired state

  • Container- and port-aware load balancing using a feature called the “Routing Mesh”: if a node receives a request for a service/container it is not running, it dynamically routes the request to a node that is running that container (see the sketch after this list)

  • An overlay network can be specified for your services, which helps in better isolation of the containers and is used by the swarm manager for application initialization and updates

  • End-to-end encryption support across the swarm via TLS encryption, TLS mutual authentication, and certificate rotation
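As a minimal sketch of the routing mesh (hypothetical service name “web”; <any-node-ip> is a placeholder; “--publish” syntax as in the released 1.12 CLI):

docker service create --name web --replicas 2 --publish 8080:80 nginx

# a request to port 8080 on ANY swarm node is routed to a node that is
# actually running an nginx task
curl http://<any-node-ip>:8080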

Using the tutorial provided by Docker (https://docs.docker.com/engine/swarm/swarm-tutorial/), these new features were given a spin; the findings are covered below.

1) To start off, download the Docker Toolbox for Windows pre-release version v1.12.0-rc4 (or whichever is the most recent version) from https://github.com/docker/toolbox/releases .

2) Create 3 Docker hosts: 1 to act as the swarm manager and 2 as worker nodes, as per the tutorial's requirement.

  • docker-machine create -d virtualbox --virtualbox-memory "2000" manager

  • docker-machine create -d virtualbox --virtualbox-memory "2000" node1

  • docker-machine create -d virtualbox --virtualbox-memory "2000" node2

The created nodes/Docker hosts can be listed as shown below.
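The standard “docker-machine ls” command lists the machines and shows which one is active:

docker-machine ls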

3) Switch to the “manager” machine to make it the active Docker host.
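On a bash shell this is typically done by loading the machine's environment (a standard docker-machine idiom):

# point the local Docker client at the "manager" machine
eval $(docker-machine env manager)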

4) Use “swarm init” to create the swarm manager on the “manager” node.


Note the secret, as it is required for the other nodes to join the swarm.
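A minimal sketch of the init step (<MANAGER-IP> is a placeholder; the exact flags changed across the 1.12 RCs, and the released 1.12 CLI replaces the secret with join tokens, retrieved as shown):

docker swarm init --advertise-addr <MANAGER-IP>

# released 1.12 CLI: print the full worker join command, token included
docker swarm join-token worker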

5) Running the “docker info” command now shows the swarm as active, confirming that we have created the swarm manager.


6) Connect to node1 and node2 and simply run “swarm join” with the secret, the manager IP, and the port to join the swarm.
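A sketch of the join step on each node (released-1.12 token syntax; <TOKEN> and <MANAGER-IP> are placeholders, and 2377 is the swarm management port):

eval $(docker-machine env node1)
docker swarm join --token <TOKEN> <MANAGER-IP>:2377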


7)  Now we have the swarm manager and the two nodes added to the swarm.
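From the manager, membership can be verified with:

# lists each node's hostname, status, availability and manager role
docker node ls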


8) The next step is to use the declarative service model feature, where you define the desired state of the services that comprise your application. In other words, the service definition specifies the container image and the number of instances you need running (the scale), via the “--replicas” option of the “service create” command.

Swarm mode also provides production-grade update features for always-on/available services: the “--update-parallelism” and “--update-delay” options ensure that not all containers are brought down during an update and that a “rolling” update is applied instead. The update proceeds in batches whose size is set by “--update-parallelism”, with the delay between successive batches set by “--update-delay”.

Here we have used an update delay of 10 seconds, with containers brought down one at a time, while creating a redis service that uses the redis Docker image v3.0.6:

docker service create --replicas 3 --name redis --update-delay 10s --update-parallelism 1 redis:3.0.6

The “service inspect” command provides a view into the service definition.
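Adding the “--pretty” flag prints a human-readable version of the same definition:

docker service inspect --pretty redis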


9) The other key feature for production deployment is the ability to update existing services; as shown in the Docker tutorial, we were able to update the redis service to an upgraded version of redis, v3.0.7.
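The update itself is a single command, as in the Docker tutorial:

docker service update --image redis:3.0.7 redis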


This update is applied as per the update policy we defined for the service as part of the desired state, and we can see that the redis version update is gradually applied to the various nodes running this service.


If the “LAST STATE” column is still showing “Preparing”, the Docker image is still being pulled onto that node before the task is updated with the new image.

One feature that is probably missing here is the ability to see how much of the image download has completed. For now, you need to keep checking the “docker ps” command on the specific node, or re-run the same “service tasks” command, to know whether that particular service has started executing.
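A sketch of that monitoring loop (“docker service tasks” is the RC syntax; it was renamed “docker service ps” in the released 1.12):

docker service tasks redis    # 1.12 RC syntax
docker service ps redis       # released 1.12 syntax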

10) If you want to prevent one of the nodes from receiving any updates or from serving any tasks (say, for maintenance), there is a feature to set that node's availability to “DRAIN”.

In the sample below, node1's availability has been set to “drain”, and we can see that the swarm manager automatically reschedules the tasks that were running on node1 onto other nodes, maintaining the replica count of 3 that we defined as part of the service definition.
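The drain itself is a single node update (standard 1.12 syntax):

docker node update --availability drain node1

# node1 now reports AVAILABILITY "Drain"
docker node ls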



 

11) Once done with the maintenance work, we can set the availability back to “active”.
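Again, a single command restores the node:

docker node update --availability active node1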


Now it is again available as part of the cluster and the swarm manager starts scheduling new tasks on that node.

To try that out, the redis service was scaled to 5 replicas, and we can see that node1 is one of the nodes on which new tasks have been scheduled.
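The scale-out is one command, as in the tutorial:

docker service scale redis=5

# confirm where the 5 replicas were scheduled ("service tasks" on the RC)
docker service ps redis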


Conclusion

The Docker v1.12 release has some very exciting features and has made container orchestration extremely simple to implement. The Docker Core Engineering team has been using the slogan “With Docker 1.12, the best way to orchestrate Docker is Docker!”

It would be interesting to see how the current best practices and clustering architecture blueprints evolve and adopt this native orchestration solution. 

Author


Twitter: @TarunKumarSukhu

Tarun is a Senior Technical Architect at TFG, part of the Technology Office in the Engineering and R&D Services group of HCL Technologies. He has extensive experience in Product Engineering and Consultancy Services, dealing with data management platforms, cloud, platform migration, and digital e-commerce. He is also a Microsoft Certified Professional and a Microsoft Specialist in Architecting Microsoft Azure Solutions.
