Docker Containers and Kubernetes, Smart ecosystem solution to virtualize in the Cloud

Hardware virtualization allows installing more than one operating system on the same server, which improves hardware utilization and isolates the resources of different applications.

In many cases, operating systems consume more resources than the applications themselves, yet before containers were introduced there was no way to isolate applications running on the same hardware other than placing them on separate operating systems.

Containers provide virtualization at the kernel level, separating compute resources between applications within the same operating system. This article explains how Docker containers achieve this and describes the Docker/Kubernetes ecosystem solution.

History of Containers:

Containers were introduced years ago under different implementations and names, such as Solaris Zones, FreeBSD Jails, and Linux Containers. Linux Containers (LXC) provides operating-system-level virtualization, allowing multiple isolated Linux systems (containers) to run on a single Linux control host.

The company dotCloud, a Platform-as-a-Service (PaaS) provider, started a project to enhance Linux Containers to better fit its PaaS delivery. The project drew the attention of several giant vendors even before it reached production; for example, Yandex (the Russian equivalent of Google) used it in its production environment months before Docker's first release. That encouraged the dotCloud team to form a separate company, Docker, Inc., which provides Docker containers as open-source software.

What are Docker Containers:

From the Docker homepage, “Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications. Consisting of Docker Engine, a portable, lightweight run-time and packaging tool, and Docker Hub, a cloud service for sharing applications and automating work-flows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments. As a result, IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud.”

With Docker you can put any Linux application into a so-called container and run it on any Linux operating system, after defining what resources it may use. The application is isolated inside its container, and the container is somewhat host-independent in terms of operating system, since it can ship the userland of a different Linux distribution than the one the host runs.

Docker relies on namespaces (pid, mnt, network, uts, ipc, user) and control groups (memory, CPU, cpuset, blkio, devices). Control groups allow you to allocate resources such as CPU time, system memory, network bandwidth, or combinations of these resources among user-defined groups of tasks (processes) running on a system. Docker allows developers to build, ship, and run any application anywhere:

Build: package your application in a container
Ship: move that container from a machine to another
Run: execute that container (i.e. your application)
Any application: anything that runs on Linux
Anywhere: local VM, cloud instance, bare metal … etc.
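The control-group limits mentioned above can be set directly on the docker command line. A minimal sketch, assuming a recent Docker CLI and the public nginx image (the container name and limit values are arbitrary examples):

```shell
# Run a container with explicit cgroup limits:
# cap memory at 256 MB, allow half a CPU, and pin it to core 0.
docker run -d \
  --name limited-app \
  --memory 256m \
  --cpus 0.5 \
  --cpuset-cpus 0 \
  nginx

# Inspect the memory limit Docker applied (in bytes).
docker inspect --format '{{.HostConfig.Memory}}' limited-app
```

Note that `--cpus` is a newer convenience flag; older Docker releases expressed the same thing with `--cpu-quota` and `--cpu-period`.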

How Docker works:

Build: Docker can take any application that runs on Linux and put it in a container. This is done using a Dockerfile: you start from a base image, selecting the operating system you need, then run your commands or build your application on top of it. It is close to what you do with virtual machines (VMs) when you download a VM with a pre-installed operating system and then install your application on it; here you do the same with containers, with the restriction that it works only with Linux operating systems.
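As a minimal sketch of such a build (the base image, package, and file names are arbitrary examples), a Dockerfile starts from a base image and layers your own steps on top:

```shell
# Write a minimal Dockerfile: base image first, then your build steps.
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
# install the application's dependencies on top of the base image
RUN apt-get update && apt-get install -y python
# add your application code
COPY app.py /opt/app.py
# what runs when the container starts
CMD ["python", "/opt/app.py"]
EOF

# Build the image and give it a name and version tag.
docker build -t myapp:1.0 .
```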

In the build phase, Docker takes a snapshot at each step, so when you make a change it does not rebuild everything from the beginning: steps that did not change are marked as "cached" and are not re-saved during the rebuild, which keeps builds agile.

Another advantage of the Docker build concept is that the old versions are still there; you can still use them if you need to, because the new container image is created beside the old one, and you can choose where and when to run the new one. If you tell your load-balancing utility to use another port for the new version of the container, you can have both versions running in parallel.

Ship: you can move a container to any host using Docker Hub, which is a huge library of Docker images containing all the required libraries. You simply push the image to it, then pull it anywhere you want.
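A sketch of that push/pull round trip (the user and image names are placeholders for your own Docker Hub account):

```shell
# Tag the local image with your Docker Hub namespace, then push it.
docker tag myapp:1.0 myuser/myapp:1.0
docker login
docker push myuser/myapp:1.0

# On any other Docker host, pull the same image and run it.
docker pull myuser/myapp:1.0
docker run -d myuser/myapp:1.0
```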

Run: containers can host any application that runs on Linux, and can run on a cloud or PC platform. You can build an application in an Ubuntu container image and then host it on Fedora or Debian.

In this way many DevOps conflicts are eliminated: developers can do whatever they want and pass the result to the operations team to simply run it, without any environment-related surprises, since it is containerized, i.e. isolated. You can run Firefox in Docker, or Docker inside another Docker; you can also run a KVM hypervisor, a VPN, or a firewall in a Docker container.

Docker containers need only milliseconds or seconds to start. They are lightweight, and definitely lighter than virtual machines, which means you can easily run containers on top of virtual machines; if you have ever tried running a virtual machine inside another virtual machine, you already know how painful that is by comparison.

Docker can manage databases in a smart way: it separates data out into volumes and operates on those volumes, which brings agility and reliability when making changes or migrating from one machine to another, and makes it suitable for big-data workloads. Docker uses copy-on-write storage, which makes it reliable and comparably fast.
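A sketch of the volume idea, assuming the official postgres image from Docker Hub: the database files live in a named volume, so the container itself can be replaced or upgraded without touching the data.

```shell
# Create a named volume; the data lives here, not inside the container.
docker volume create pgdata

# Run the database with the volume mounted at its data directory.
docker run -d --name db -v pgdata:/var/lib/postgresql/data postgres

# Replace the container; the volume (and therefore the data) survives.
docker rm -f db
docker run -d --name db -v pgdata:/var/lib/postgresql/data postgres
```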

Requirements to run Docker:

From the Docker's website: “Docker requires a 64-bit installation regardless of your Ubuntu version. Additionally, your kernel must be 3.10 at minimum. The latest 3.10 minor version or a newer maintained version are also acceptable. Kernels older than 3.10 lack some of the features required to run Docker containers. These older versions are known to have bugs which cause data loss and frequently panic under certain conditions.”

You can install the Boot2Docker VM image, which contains the Docker Engine, and start exploring the Docker solution on your Windows machine; Docker can also be installed on any Linux operating system.

Microsoft recently announced beta releases of Docker Machine and Docker Swarm on Azure (its cloud platform), plus Docker Machine support on Hyper-V. This doesn't mean you can run Windows applications on Linux or vice versa; it means you can use the same build mechanism, APIs, orchestration tools, and frameworks to operate both.

Process distribution management:

When containers run in the cloud in a vast fleet, they need a clustering application that allows distributed process management. Here I will shed light on Kubernetes, the cluster-management solution that Google developed and is using, which is part of the Docker ecosystem solution; it is also open source.

Kubernetes:

Kubernetes is a cluster manager for Docker: it schedules and deploys any number of container replicas onto a cluster of nodes, and takes care of decisions for you, like which containers go on which servers. Kubernetes is a solution for overseeing and managing multiple containers at scale, rather than just working with Docker on a manually configured host.

Kubernetes key concepts:

Pods: a pod is one container, or multiple containers so tightly coupled that they live and die together. It is the unit of scheduling, and it is template-based: you specify which Docker image to run and which resources it uses.
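A minimal pod template might look like the following sketch (the pod name, image, and resource limits are illustrative, in the Kubernetes v1 manifest format):

```shell
# A pod is declared as a template: which image, and which resources.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    role: frontend
spec:
  containers:
  - name: web
    image: nginx
    resources:
      limits:
        memory: "128Mi"
        cpu: "250m"
EOF
```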

Labels: these are key/value pairs. When you create a pod you give it labels such as name, value, and role, which helps you gather statistics on what is performing which function and where, and build other creative reports. You can also talk to the API in groups: if you don't want to think on a per-machine or per-container basis, you can simply use labels to say "take everything with these labels performing this function, and do that with them."

A small example of labels is detecting the running environment: one key can be environment, and its value can be production, staging, or test. You can add other keys to set resource usage and other related functions.
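With such labels in place, you can address pods as a group instead of one by one. A sketch using kubectl label selectors (the label keys and values match the example above):

```shell
# List only the pods labelled as production frontends ...
kubectl get pods -l environment=production,role=frontend

# ... or act on a whole group at once, e.g. remove every test pod.
kubectl delete pods -l environment=test
```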

Replication Controllers: these schedule replication, for example when you say "run this pod 5 times" or "always keep 5 copies of this pod running."
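"Always keep 5 copies running" translates into a replication controller manifest like the following sketch (the controller name, label, and image are illustrative):

```shell
# The controller keeps exactly 5 replicas of the pod template alive,
# restarting or rescheduling pods as needed.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 5
  selector:
    role: frontend
  template:
    metadata:
      labels:
        role: frontend
    spec:
      containers:
      - name: web
        image: nginx
EOF
```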

The Docker/Kubernetes solution allows Google to containerize its data centers: Docker containers managed by Kubernetes are sent to a bulk of data-center infrastructure without worrying about where in the data center they will run, what resources they will consume (because that is predefined in the container), or what their priority is (because that is predefined in the cluster management).

Will containers dominate over virtual machines?

Containers have attracted Linux users, mainly developers. Once there is a solution that can run on Windows and be configured via a GUI with less command-line work, the virtualization competition will look different.

Using containers will help customers who run their IT environment in the cloud to pay less, since they will consume less storage, compute, and networking when they reduce the number of operating systems they use. Until the same container technology becomes compatible with other types of operating systems, this may also open another line of business: migrating workloads to Linux environments.

Cloud computing technology is developing very fast, and many booming companies have started quickly. As we see, some of their ideas did not start from scratch but are built on simple logic, like Docker: many of us have asked ourselves why we have to install so many operating systems just to run a web server, or an OSS solution that merely needs its database resources isolated.

Stay tuned for other articles in the same domain, and enjoy the journey to the cloud.

Written by: Yasser Emam
Solution Architect (OSS/BSS and Cloud Management)
