Grasping Kubernetes
Collaborizm Kubernetes Initiative
At Collaborizm, we help small teams and large enterprises realize their vision. We see Kubernetes as an important part of the tech landscape that's only getting started. Over the next few months we will be focusing on fostering a vibrant pool of Kubernetes experts and discussions in our community.
Here is the first in a series of posts to help get the community started!
What is Kubernetes?
Kubernetes is a container orchestration system. You take your code and pack it into containers, and Kubernetes handles everything around running those containers in production. If you aren’t sure what a container is, keep reading. Kubernetes is unconstrained by application type: anything you can run on traditional servers, you can run on Kubernetes.
Why Kubernetes?
Kubernetes’ killer advantage is that it standardizes the cloud. If you learn how to deploy serverless functions to various cloud providers, you quickly learn that each vendor has its own nuances. I generally find that serverless functions are easy to code in a portable manner, but when it comes to deploying, testing, and connecting to DNS, things can become very complex. With Kubernetes there are certainly differences between the various clouds, but you’ll be able to pick up those differences in minutes rather than having to master a whole new set of paradigms and tooling.
Kubernetes vs Serverless
Serverless’ killer feature is pay-per-use: you can deploy as many services as you’d like without paying a penny until you reach a certain threshold of usage. This is great for side projects where you want to throw some code online and see what happens.
Serverless, from a technical point of view, is designed to be stateless and short-lived. This makes serverless a bad fit for WebSockets, HTTP/2 Server Push, or long-running cron jobs. In addition, you cannot run applications like database servers inside of a serverless system. Generally this isn’t an issue because of the wide range of managed applications available on today’s clouds.
What is a container?
Containers have been around in Linux and Unix operating systems for years. They are processes which run container images isolated from the rest of the system. A container image is a bundle of everything needed to execute your application: your source code and all of its dependencies, including the operating system your application runs on. Once an image is created, it should run consistently across any platform. Since containers run as processes on the host OS, they are more efficient than virtualization, which requires a whole operating system to run inside of another operating system.
With the advent of Docker, containers became much easier to create and run. Docker democratized containers and brought about an explosion in tooling. Docker isn’t a requirement of Kubernetes but it’s the most popular choice for building and running containers. It’s used in this guide so make sure to install it.
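As a quick sketch, a container image is described by a Dockerfile. The one below assumes a hypothetical Node.js app with an entry point named server.js; the file names are illustrative:

```
# Start from an official Node.js base image (the OS layer your app runs on)
FROM node:18-alpine

WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm install

# Copy the rest of your source code into the image
COPY . .

# The command the container runs when it starts
CMD ["node", "server.js"]
```

Running `docker build -t hello .` produces an image named hello, which `docker run hello` can then execute anywhere Docker is installed.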
Kubernetes components from low level to high level
Container
A Container is a running instance of a container image.
Pod
A Pod is made up of one or more containers that run together on the same Node.
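A minimal Pod manifest looks like the following; the names, image, and port here are illustrative:

```
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
    # A single container in this Pod, built from a hypothetical image
    - name: hello
      image: hello:latest
      ports:
        - containerPort: 8080
```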
Node
A Node is a server; it can be physical or virtual.
Node Pool
All Nodes run in Node Pools. They are groups of Nodes with the same configuration. If you don’t create a Node Pool, then your Nodes are in the Default Node Pool. If your application has a machine learning component and a web server, you can have two Node Pools. One with GPUs and the other with just CPUs. Each part of your application can then be scheduled to run in the appropriate pool.
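As a sketch of that scheduling, a Pod can ask to run in a particular pool with a nodeSelector. The label key and value below are illustrative; each cloud labels its Node Pools differently:

```
apiVersion: v1
kind: Pod
metadata:
  name: training-job
spec:
  # Only schedule this Pod onto Nodes carrying the gpu pool label
  nodeSelector:
    pool: gpu
  containers:
    - name: trainer
      image: trainer:latest   # hypothetical image
```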
Master
The Master is another server which is responsible for scheduling Pods to run on Nodes. The Master exposes an API which is used to configure the state of your cluster.
Cluster
An instantiation of Kubernetes which includes all of the above components.
Service
A Service exposes a set of Pods as a single, stable network endpoint. Service types such as LoadBalancer hook into resources provided by your cloud, like an external load balancer, to route traffic into your cluster.
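A minimal sketch of a Service that asks the cloud for a load balancer, assuming your Pods carry a hypothetical app: hello label and listen on port 8080:

```
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  # Ask the cloud provider for an external load balancer
  type: LoadBalancer
  # Route traffic to any Pod carrying this label
  selector:
    app: hello
  ports:
    - port: 80          # port exposed by the load balancer
      targetPort: 8080  # port your container listens on
```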
Kubernetes and desired state
When you deploy your application to Kubernetes, you’re not imperatively telling Kubernetes what to do step by step. You are instead declaring the desired state you would like your cluster to be in.
For example, say you deploy your Hello World app to run across two Pods and have them load balanced. If a Pod goes down, or even a whole Node goes down, it’s the Master’s job to restore the state to what you defined.
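That Hello World example can be declared as a Deployment. You state how many replicas you want, and the Master continuously works to keep that many Pods running. The names and image here are illustrative:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  # Desired state: two identical Pods at all times
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: hello:latest   # hypothetical image
```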
This is similar to other declarative vs imperative paradigms like React vs jQuery. The declarative model frees you up from dealing with low level operations and instead offloads it to an operator.
Next Steps
This was just a brief overview of Kubernetes. I’m currently working on a tutorial which will show how to deploy your side project to Kubernetes. Enjoy!