Cracking the Kubernetes Code | Decoding Kubernetes Architecture | AWS EKS Deployment

Kubernetes is a powerful open-source platform designed to automate deploying, scaling, and operating application containers. Its architecture is composed of several key components that work together to maintain the desired state of applications. Here’s a detailed look at the architecture:

1. Control Plane

The control plane is responsible for managing the Kubernetes cluster. It consists of several components:

  • API Server (kube-apiserver): The API server is the central management entity that exposes the Kubernetes API. It processes REST operations, validates them, and updates the corresponding objects in the etcd store. It also serves as the communication hub for all other components.
  • etcd: A consistent and highly available key-value store used as Kubernetes’ backing store for all cluster data. It stores the configuration data, state, and metadata of the cluster.
  • Controller Manager (kube-controller-manager): This component runs controller processes that regulate the state of the cluster. Controllers include the Node Controller, Replication Controller, Endpoints Controller, and others. Each controller watches the state of the cluster through the API server and makes changes to move the current state towards the desired state.
  • Scheduler (kube-scheduler): The scheduler assigns newly created pods to nodes in the cluster based on resource requirements, constraints, and policies. It ensures that workloads are balanced and resources are efficiently utilized.
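The scheduler's inputs come from the pod spec itself. As an illustration (the pod name, label, and image are only examples), the following manifest declares resource requests and a nodeSelector constraint, which kube-scheduler evaluates when choosing a node:

```yaml
# Illustrative pod spec showing two common scheduler inputs:
# resource requests and a nodeSelector constraint.
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo
spec:
  nodeSelector:
    disktype: ssd          # pod may only be placed on nodes carrying this label
  containers:
  - name: app
    image: public.ecr.aws/nginx/nginx:1.23
    resources:
      requests:
        cpu: "250m"        # scheduler only considers nodes with this much allocatable CPU
        memory: "128Mi"
```

If no node satisfies both the label and the requested resources, the pod stays Pending until one does.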

2. Node Components

Nodes are the worker machines in Kubernetes. Each node runs several components necessary for managing the pods:

  • Kubelet: An agent that runs on each node in the cluster. It ensures that containers are running in a Pod. The kubelet takes a set of PodSpecs provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
  • Container Runtime: The software responsible for running containers. Kubernetes supports several container runtimes that implement the Container Runtime Interface (CRI), such as containerd, CRI-O, and Docker (via cri-dockerd).
  • Kube-Proxy: A network proxy that runs on each node, maintaining network rules on nodes. These rules allow network communication to your Pods from network sessions inside or outside of your cluster. Kube-proxy uses the operating system packet filtering layer if there is one and it’s available. Otherwise, kube-proxy forwards the traffic itself.
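The kubelet's "running and healthy" check can be made concrete with probes. A minimal sketch (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: web
    image: public.ecr.aws/nginx/nginx:1.23
    livenessProbe:            # kubelet restarts the container if this check fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:           # kubelet stops routing Service traffic to the pod if this fails
      httpGet:
        path: /
        port: 80
      periodSeconds: 5
```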

3. Networking

Kubernetes networking is designed to provide a flat network structure where each pod can communicate with any other pod without NAT. Key components include:

  • Pod Network: Each pod gets its own IP address, and all containers within a pod share the network namespace, including the IP address and network ports.
  • Service: A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them. Services enable loose coupling between dependent Pods.
  • Ingress: Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
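An Ingress resource ties these pieces together; the sketch below (hostname and Service name are hypothetical) routes external HTTP traffic for one host to a backing Service. Note that an Ingress only takes effect if an ingress controller (e.g. AWS Load Balancer Controller or ingress-nginx) is installed in the cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com            # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service    # hypothetical backend Service
            port:
              number: 80
```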

4. Security

Security in Kubernetes is multi-faceted and includes:

  • Authentication and Authorization: Kubernetes supports multiple authentication strategies (client certificates, bearer tokens, etc.) and authorization modes (RBAC, ABAC).
  • Network Policies: These are used to control the traffic flow between pods.
  • Secrets and ConfigMaps: These are used to manage sensitive information and configuration data separately from the application code.
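As a sketch of a NetworkPolicy (labels and port are illustrative), the manifest below allows only pods labeled app: frontend to reach pods labeled app: backend on TCP 8080, denying other ingress. NetworkPolicies are enforced only if the cluster's network plugin supports them:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend        # the policy applies to these pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend   # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 8080
```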

5. Monitoring and Logging

Monitoring and logging are crucial for maintaining the health and performance of a Kubernetes cluster:

  • Prometheus: A popular monitoring tool that collects and stores metrics as time series data.
  • Grafana: Used for visualizing the metrics collected by Prometheus.
  • Fluentd: A log collector that can be used to aggregate logs from various sources and forward them to a centralized logging backend.
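One common (though not universal) convention for wiring pods into Prometheus is scrape annotations; these are honored only if the Prometheus scrape configuration in your cluster is set up to look for them, so treat this as an assumption about your monitoring stack:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: metrics-demo
  annotations:
    prometheus.io/scrape: "true"    # convention used by many Prometheus scrape configs
    prometheus.io/port: "9090"      # port serving metrics (illustrative)
    prometheus.io/path: "/metrics"
spec:
  containers:
  - name: app
    image: public.ecr.aws/nginx/nginx:1.23
```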


Amazon Elastic Kubernetes Service (EKS)

Amazon Elastic Kubernetes Service (EKS) is a managed service that simplifies running Kubernetes on AWS without needing to install and operate your own Kubernetes control plane or nodes. Here’s a closer look at what makes AWS EKS a powerful tool for developers and businesses:


[Figure: Amazon Elastic Kubernetes Service (Amazon EKS) architecture diagram]

Key Features of AWS EKS:

  1. Managed Kubernetes Control Plane: AWS EKS automatically manages the availability and scalability of the Kubernetes control plane nodes responsible for scheduling containers, managing the application’s desired state, and monitoring the cluster.
  2. Integration with AWS Services: EKS integrates seamlessly with various AWS services such as IAM for authentication, VPC for networking, and CloudWatch for logging and monitoring, providing a robust and secure environment for your applications.
  3. High Availability and Scalability: EKS runs the Kubernetes management infrastructure across multiple AWS Availability Zones, ensuring high availability. It also scales the infrastructure based on the needs of your applications.
  4. Security: AWS EKS provides a secure and compliant Kubernetes environment, with features like IAM roles for service accounts, encryption at rest and in transit, and integration with AWS security services.
  5. Flexibility and Customization: With EKS, you can run Kubernetes applications on both Amazon EC2 and AWS Fargate, giving you the flexibility to choose the best compute option for your workload.
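The EC2-versus-Fargate choice in point 5 can be tried with eksctl. As a sketch, the command below (using the cluster and namespace names from the deployment steps later in this article) creates a Fargate profile so pods in that namespace run on Fargate instead of EC2 worker nodes:

```shell
# Create a Fargate profile: pods scheduled into the eks-sample-app
# namespace will run on AWS Fargate rather than EC2 nodes.
eksctl create fargateprofile \
  --cluster my-cluster \
  --region us-west-2 \
  --name fargate-sample \
  --namespace eks-sample-app
```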

Benefits of Using AWS EKS:

  • Reduced Operational Overhead: By offloading the management of the Kubernetes control plane to AWS, you can focus more on building and deploying applications rather than managing infrastructure.
  • Cost Efficiency: You only pay for the AWS resources (like EC2 instances or Fargate compute) you use, making it a cost-effective solution for running Kubernetes workloads.
  • Enhanced Security: Leveraging AWS’s security features and best practices ensures that your Kubernetes clusters are secure and compliant with industry standards.
  • Scalability: EKS allows you to scale your applications seamlessly, handling increased traffic and workloads without manual intervention.

Deployment Scripts and YAML Files

To deploy a sample application on AWS EKS, follow these steps:

Step 1: Create an EKS Cluster

You can create an EKS cluster using the AWS Management Console, AWS CLI, or eksctl. Here’s an example using eksctl:

eksctl create cluster \
  --name my-cluster \
  --region us-west-2 \
  --nodegroup-name linux-nodes \
  --node-type t3.medium \
  --nodes 3 \
  --nodes-min 1 \
  --nodes-max 4 \
  --managed

Step 2: Create a Namespace

Namespaces allow you to group resources in Kubernetes. Create a namespace for your application:

kubectl create namespace eks-sample-app

Step 3: Create a Deployment

Save the following contents to a file named eks-sample-deployment.yaml. This deployment pulls a container image from a public repository and deploys three replicas to your cluster.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: eks-sample-deployment
  namespace: eks-sample-app
  labels:
    app: eks-sample-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: eks-sample-app
  template:
    metadata:
      labels:
        app: eks-sample-app
    spec:
      containers:
      - name: nginx
        image: public.ecr.aws/nginx/nginx:1.23
        ports:
        - containerPort: 80

Apply the deployment manifest to your cluster:

kubectl apply -f eks-sample-deployment.yaml

Step 4: Create a Service

Save the following contents to a file named eks-sample-service.yaml. This service allows you to access all replicas through a single IP address or name.

apiVersion: v1
kind: Service
metadata:
  name: eks-sample-service
  namespace: eks-sample-app
  labels:
    app: eks-sample-app
spec:
  selector:
    app: eks-sample-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer

Apply the service manifest to your cluster:

kubectl apply -f eks-sample-service.yaml

Step 5: Verify the Deployment

Check the status of your deployment and service:

kubectl get all -n eks-sample-app

This command will list all resources in the eks-sample-app namespace, including pods, services, and deployments.
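Once AWS has provisioned the load balancer for the Service (this can take a minute or two), you can fetch its hostname and make a test request; a sketch:

```shell
# Read the ELB hostname assigned to the Service, then request the nginx welcome page.
EXTERNAL=$(kubectl get service eks-sample-service -n eks-sample-app \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -s "http://${EXTERNAL}" | head -n 5
```

If the hostname is empty, the load balancer is still being provisioned; re-run the first command after a short wait.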

Conclusion

Kubernetes architecture is designed to provide a robust, scalable, and flexible platform for running containerized applications. By understanding the roles of each component and how they interact, you can effectively manage and optimize your Kubernetes clusters. Whether you are using AWS EKS or another Kubernetes service, the principles remain the same, allowing you to focus on deploying and scaling your applications efficiently.


-- Alok Saraswat

-- References: kubernetes.io & Amazon Web Services documentation
