
Container Orchestration - Selecting the Right Weapons

Any cloud computing discussion today is incomplete without a mention of
containers or containerization. Containerization is also changing DevOps.
Containers make environments consistent, thereby making it easy to re-create
similar environments for all four stages of software development, viz.
development, testing, staging and production. They make tooling consistent,
allowing consistent re-use of critical functions such as code quality analysis,
builds and tests within developer workspaces. They help in the creation of
similar, at times dependent, image life cycles, and are hence revolutionizing
the DevOps pipeline.
So what are containers? Applications often start misbehaving once they are
moved from one environment to another, be it from a programmer's machine to a
test environment, from a staging server to a production environment, or from a
physical computer in a data center to a cloud-based virtual machine.
This happens because the environment that supports the application in one
instance gets altered in another. For example, an application tested on
version 3 of a language may be required to run on version 3.4 of the same
language in production, an SSL library supporting the application in one
environment may change in another, or the network topology may change.
Containers are runtime environments that solve this problem. They simplify
application management by making it host agnostic: the same set of management
commands can be used against any host. They also create a major simplification
opportunity in DevOps. Through Docker it is possible to create an image that
can be deployed identically, within seconds, across any environment, whether a
developer's desktop, a testing machine or a production machine.
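As a minimal sketch of this idea (assuming Docker is installed; the image tag, base image and file names are illustrative, not from the article), an image is built once and then run unchanged on any host:

```shell
# Dockerfile pinning the exact runtime version the application was tested on,
# so development, testing, staging and production all see the same environment
cat > Dockerfile <<'EOF'
FROM python:3.4-slim
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
EOF

# Build the image once...
docker build -t myapp:1.0 .

# ...then run the identical image on a developer desktop, a test
# machine or a production host with the same command:
docker run --rm myapp:1.0
```

Because the image carries its own libraries and runtime, the version-drift problems described above cannot occur between environments.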
Containers offer several key advantages:

- Issues such as software conflicts, driver incompatibilities and library
conflicts are eliminated altogether.
- They support a microservices architecture by decoupling monolithic
applications.
- They simplify DevOps by allowing operations engineers and developers to work
simultaneously, the first inside and the second outside the container.
- They allow speedy deployment, reduced overheads, easy migration and quicker
restarts.
- They reduce the effort required from system administrators, who are relieved
of the duty of maintaining a hypervisor, as is needed with virtual machines.

Containers and microservices have an inherent synergy. Containers are a very
good way to develop and deploy microservices, and the infrastructure (tools
and platforms) used for operating containers is a good way to manage
microservice-based applications; hence it is imperative that containers be
considered when following a microservices-based model.
Several tools in the container ecosystem manage the container life cycle.
These tools handle key activities such as pulling images from registries,
running the container optimally, attaching the container to a terminal,
committing the container to a new image and stopping a running container.
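With Docker's own CLI (assuming Docker is installed; the image, container and snapshot names below are illustrative), each of those life-cycle activities maps onto a single command:

```shell
docker pull nginx:latest            # pull an image from a registry
docker run -d --name web nginx      # run a container in the background
docker attach web                   # attach the container to a terminal
docker commit web web-snapshot:1.0  # commit the container to a new image
docker stop web                     # bring the running container to a halt
```

Orchestration tools automate exactly these per-container steps across many containers and hosts.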
When it comes to managing several containers deployed across multiple hosts,
we start talking about multi-container workloads, and here the concept of
container orchestration comes into play. Container Orchestration is the
process of handling all the key activities of a large number of containers in
a multi-container workload. The classes of tools that carry out this critical
function are called Container Orchestration tools.
Data centers hosting the cloud usually use high-availability clusters for load
balancing, backup and failover purposes. Most of the latest cloud environments
run on servers deploying container-based applications, hence their scalability
and availability are governed by Container Orchestration tools.
These tools also govern the interaction between containers. It is therefore
important to select the right toolset when running Docker containers in
high-availability clustering. Let us look at three such tools first, and then
compare them on some key parameters: application type, the number of nodes to
be clustered, batch processing workloads, long-running workloads, and stateful
versus stateless applications.
Docker Swarm - This is a container orchestration tool from Docker that
provides the ability to cluster, schedule and integrate. It allows the
building of multi-container, multi-host, heavily distributed applications and
provides ways to manage scale. It makes the most sense in environments where
Docker containers are already in use, as it employs the standard Docker API
and networking. It is modular and pretty straightforward to use. Docker Swarm
extends the single-host Docker model: it starts at the container and builds
out. The learning curve and set-up take less effort with Docker Swarm. As of
now, Swarm is perceived to be more suitable for experiments and smaller-scale
deployments.
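As a brief sketch of how Swarm builds out from the familiar Docker commands (assuming a Docker installation with Swarm mode; the service name and replica counts are illustrative), standing up a cluster and a replicated service takes only a few steps:

```shell
docker swarm init                    # make this host a Swarm manager node
                                     # (workers join with the printed token)

# Create a service of 3 replicas, scheduled across the cluster:
docker service create --name web --replicas 3 -p 80:80 nginx

docker service ls                    # list services and their replica state
docker service scale web=5           # scale the service out to 5 replicas
```

The commands mirror the single-host `docker run` workflow, which is why Swarm's learning curve is considered gentle.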
Google Kubernetes - This is another cluster management tool, with the ability
to schedule a mammoth number of containers. It is a result of the learnings
coming out of Google's internal container orchestration system, Borg. It is
among the tools of choice for handling the scaling up of large container-based
applications. It is preferred if the situation demands a departure from the
Docker way of cluster management. Kubernetes starts from the cluster and
treats containers as an implementation detail. Kubernetes allows different
networking options and even the use of Mesos as the underlying scheduler. It
also permits other container formats, such as rkt. If the need of the hour is
to run multiple sets of stateless microservices, Kubernetes provides a
framework for their interaction. It is considered suitable for large-scale
deployment, in spite of the larger effort needed for set-up and the steeper
learning curve.
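A minimal sketch of the cluster-first Kubernetes approach (assuming access to a cluster and the `kubectl` CLI; the deployment name, image and replica counts are illustrative): you declare the desired state, and the cluster converges to it, placing containers as an implementation detail.

```shell
# Declare a deployment of 3 replicas; Kubernetes decides on which
# nodes the containers actually run.
kubectl create deployment web --image=nginx --replicas=3

kubectl expose deployment web --port=80    # one stable address for all replicas
kubectl scale deployment web --replicas=5  # change desired state; cluster converges
kubectl get pods -o wide                   # see where the containers landed
```

Note the contrast with Swarm: the unit of thought is the cluster-wide deployment, not the individual container.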
Apache Mesos - This is a distributed systems kernel that is older than either
Swarm or Kubernetes. It is essentially a cluster manager that provides
computing resources to the frameworks that deal with whatever is running on
the Mesos cluster. It allows non-containerized workloads to run alongside
containers. The set-up for Mesos is relatively complex compared with Swarm.
Mesos is a good choice for those looking to run non-containerized workloads
alongside containers, and for those looking for a tool that has been proven at
tens of thousands of nodes in long-running, real-world situations.
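To illustrate the framework model (Marathon, a long-running-service framework that runs on Mesos, is not named in the article and is used here only as an example; all field values are hypothetical), a framework typically accepts a declarative application definition and asks Mesos for the resources to run it:

```json
{
  "id": "/web",
  "cpus": 0.25,
  "mem": 128,
  "instances": 3,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "nginx" }
  }
}
```

Mesos itself only brokers CPU and memory offers; it is the framework that decides what to launch, which is how containerized and non-containerized workloads coexist on one cluster.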

Comparison of the three on application type, the number of nodes to be
clustered, batch processing workload, long running, stateful and stateless:

Parameter           Swarm             Kubernetes        Mesos
Application Type    Only Containers   Only Containers   Containers & Non-Containers
Nodes: 1 - 10       Excellent         Excellent         Excellent
Nodes: 10 - 100     Good              Excellent         Excellent
Nodes: 100 - 1000   -                 Good              Excellent
Nodes: 1000+        -                 -                 Excellent
Batch               Good              Good              Excellent
Long Running        Excellent         Excellent         Excellent
Stateful            Good              Good              Good
There is no single rule of thumb for zeroing in on the best tool, because
different levels of scale are needed and different people are comfortable with
different ecosystems. It is, however, wise to settle on the most suitable one
only after doing a proof of concept with each of these, using a workload that
replicates the one most likely to be encountered in real-life conditions.
Containers and container orchestration tools are fast becoming the backbone of
cloud-based applications, and it appears that they will continue to
revolutionize cloud computing in the near future.
