Evolution towards Cloud Native Architecture & Development

In past articles, we saw how architecture has evolved over the past few years, driven by the demand to serve multiple endpoints, integrate services, and fundamentally change how software is developed and deployed. Taken together, these shifts help us understand the paradigm shift from monolithic to cloud-native architecture.

The Cloud Native Computing Foundation (CNCF), an organization that aims to create and drive adoption of the cloud-native programming paradigm, defines cloud native as:

·        Each application or module is packaged as a microservice and deployed in its own container

·        Containers are dynamically orchestrated: actively scheduled and managed

·        Systems are loosely coupled, resilient, manageable, and observable

Source: https://www.cncf.io/about/faq/

We will take a look at Kubernetes to understand container management and deployments.

The name Kubernetes originates from Greek, meaning helmsman or pilot. K8s as an abbreviation results from counting the eight letters between the "K" and the "s". Google open-sourced the Kubernetes project in 2014. (Source: https://kubernetes.io/)

A Kubernetes cluster is made up of multiple virtual or physical machines, each acting as either a master or a node. Each node hosts one or more containers (which run the applications), and the master communicates with the nodes to manage the creation and removal of containers. At the same time, it tells the nodes how to re-route traffic based on new container alignments. A typical Kubernetes cluster is shown in Figure 1.


Figure 1: A typical Kubernetes cluster

The Kubernetes master

The Kubernetes master is the access point or the control plane from which administrators and other users interact with the cluster to manage the scheduling and deployment of containers. A cluster will always have at least one master but may have more depending on the cluster’s replication pattern.

The master stores the state and configuration data for the entire cluster in etcd, a persistent and distributed key-value data store. Each node has access to etcd, and through it, nodes learn how to maintain the configurations of the containers they’re running. You can run etcd on the Kubernetes master, or in standalone configurations.

Masters communicate with the rest of the cluster through the kube-apiserver, the main access point to the control plane. For example, the kube-apiserver makes sure that configurations in etcd match with configurations of containers deployed in the cluster.

The kube-controller-manager runs control loops that manage the state of the cluster through the Kubernetes API server. The controllers for deployments, replicas, and nodes are handled by this service. For example, the node controller is responsible for registering a node and monitoring its health throughout its lifecycle.

Node workloads in the cluster are tracked and managed by the kube-scheduler. This service keeps track of the capacity and resources of nodes and assigns work to nodes based on their availability.
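As a sketch of what the scheduler looks at (the pod name and image below are illustrative, not from the article), resource requests declared in a container spec tell kube-scheduler how much capacity a node must have free before the pod can be placed there:

```yaml
# Illustrative Pod fragment: the scheduler compares the "requests"
# against each node's remaining capacity when choosing a placement.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app           # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.21      # example image
    resources:
      requests:            # minimum the scheduler must find on a node
        cpu: "250m"
        memory: "128Mi"
      limits:              # hard caps enforced at runtime
        cpu: "500m"
        memory: "256Mi"
```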

The cloud-controller-manager is a service running in Kubernetes that helps keep it “cloud-agnostic.” The cloud-controller-manager serves as an abstraction layer between the APIs and tools of a cloud provider (for example, storage volumes or load balancers) and their representational counterparts in Kubernetes.

Nodes

All nodes in a Kubernetes cluster must be configured with a container runtime, which is typically Docker. The container runtime starts and manages the containers as they’re deployed to nodes in the cluster by Kubernetes. Your applications (web servers, databases, API servers, etc.) run inside the containers.

Each Kubernetes node runs an agent process called a kubelet that is responsible for managing the state of the node: starting, stopping, and maintaining application containers based on instructions from the control plane. The kubelet collects performance and health information from the node, pods, and containers it runs, and shares that information with the control plane to help it make scheduling decisions.

The kube-proxy is a network proxy that runs on nodes in the cluster. It also works as a load balancer for services running on a node.

The basic scheduling unit is a pod, which consists of one or more containers guaranteed to be co-located on the host machine and can share resources. Each pod is assigned a unique IP address within the cluster, allowing the application to use ports without conflict.

The desired state of the containers in a pod is defined through a YAML or JSON object called a Pod Spec. These objects are passed to the kubelet through the API server.
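For illustration (the names here are hypothetical), a minimal Pod Spec in YAML looks like this; submitting it with `kubectl apply -f pod.yaml` sends it through the API server to the kubelet on whichever node the pod is scheduled to:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello           # labels let a service select this pod later
spec:
  containers:
  - name: hello
    image: nginx:1.21    # example image; any container image works
    ports:
    - containerPort: 80  # port the container listens on inside the pod
```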

A pod can define one or more volumes, such as a local disk or network disk, and expose them to the containers in the pod, which allows different containers to share storage space. For example, volumes can be used when one container downloads content and another container uploads that content somewhere else.
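The download/upload pattern above can be sketched with a shared `emptyDir` volume; container names, images, and commands are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: share-pod
spec:
  volumes:
  - name: shared-data
    emptyDir: {}              # scratch volume that lives as long as the pod
  containers:
  - name: downloader
    image: busybox:1.36
    command: ["sh", "-c", "wget -O /data/page.html http://example.com && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data        # the downloader writes here...
  - name: uploader
    image: busybox:1.36
    command: ["sh", "-c", "sleep 5 && cat /data/page.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data        # ...and the uploader reads the same files
```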

Since containers inside pods are often ephemeral, Kubernetes offers a type of load balancer, called a service, to simplify sending requests to a group of pods. A service targets a logical set of pods selected based on labels (explained above). By default, services can be accessed only from within the cluster, but public access can be enabled for outside requests.
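A sketch of such a service, assuming pods labeled `app: hello` (a hypothetical label): it load-balances traffic on port 80 across every pod matching the selector, and could be exposed outside the cluster by changing `type` to `NodePort` or `LoadBalancer`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: ClusterIP        # default: reachable only from within the cluster
  selector:
    app: hello           # targets all pods carrying this label
  ports:
  - port: 80             # port the service exposes
    targetPort: 80       # port on the pods to forward traffic to
```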

Conclusion

To sum up, we have seen how microservices can be packaged in containers like Docker and managed through Kubernetes to provide scale. At its core, cloud native is a way to increase the flexibility and agility of a business: a method of delivering applications as microservices in a containerized environment, taking advantage of the automation and scalability that cloud-native technologies like Kubernetes offer.

If you want to take a deeper dive into cloud-native applications, Online Boutique is a good starting point. It is a cloud-native microservices demo application consisting of ten microservices: a web-based e-commerce app where users can browse items, add them to the cart, and purchase them.

Check it out at https://github.com/GoogleCloudPlatform/microservices-demo

Google uses this application to demonstrate the use of technologies like Kubernetes/GKE, Istio, Stackdriver, gRPC, and OpenCensus. It works on any Kubernetes cluster, as well as Google Kubernetes Engine, and is easy to deploy with little to no configuration.

I hope this series gives some insight into cloud-native architecture.
