Kubernetes Ingress Controller with Advanced Application Services

Modern application architectures based on microservices have made appliance-based load balancing solutions obsolete. Containerized applications deployed in Kubernetes clusters need scalable, enterprise-class Kubernetes ingress services for load balancing, global and local traffic management, service discovery, monitoring/analytics, and security. Avi Networks offers an advanced Kubernetes ingress controller with multi-cloud application services that delivers the enterprise-grade features, machine-learning-driven automation, and observability needed to bring container-based applications into enterprise production environments.

Applications built on a microservices architecture require a modern, distributed application services platform to deliver an ingress gateway. Traditional appliance-based ADC solutions are no longer an option for web-scale, cloud-native applications deployed as containerized microservices. Kubernetes clusters can run hundreds of pods and thousands of containers, mandating full automation, policy-driven deployments, and elastic container services for Kubernetes.

Kubernetes ingress is an object that defines rules for routing and controlling how external users access services running in a Kubernetes cluster. You can expose applications in Kubernetes to external users using one of three basic approaches:

  • A NodePort-type Kubernetes service exposes the application on a static port on each node
  • A LoadBalancer-type Kubernetes service routes external users to services in the cluster through a load balancer
  • A Kubernetes Ingress resource and an Ingress controller together expose the application
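As an illustration of the third approach, a minimal Ingress resource might look like the following; the hostname, service name, and port are hypothetical placeholders, not values from this article:

```yaml
# Minimal Ingress resource routing external HTTP traffic to a backend
# Service. Hostname, service name, and port are illustrative placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
```

An Ingress resource on its own does nothing; an Ingress controller running in the cluster watches these objects and configures the actual load balancing.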

VMware Avi Vantage is based on a software-defined, scale-out architecture that provides container services for Kubernetes beyond typical Kubernetes controllers, such as traffic management, security, observability and a rich set of tools to simplify application maintenance and rollouts. You can deploy and automate in five steps:

  • Deploy a lightweight, distributed fabric of proxy services alongside nodes in the container cluster
  • Automate service discovery and dynamically map between a service name and its IP address for ephemeral containers
  • Observe and collect analytics through Avi Service Engines and provide Kubernetes load balancing with autoscaling based on real-time traffic
  • Integrate with container orchestration platforms like Kubernetes to automate the deployment and management of containers
  • Extend application services with an ingress gateway for secure service-to-service communication in multi-cluster, multi-region and multi-cloud environments

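Integration of this kind is typically transparent to application teams: a standard Kubernetes Service of type LoadBalancer (or an Ingress with the controller's ingress class) is enough for the platform to program the data plane. A minimal sketch, assuming a controller such as Avi's Kubernetes operator is watching the cluster; the names, labels, and ports are hypothetical:

```yaml
# Standard LoadBalancer Service; a controller watching the cluster can
# pick up objects like this and allocate a virtual IP automatically.
# Name, labels, and ports are illustrative placeholders.
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  type: LoadBalancer
  selector:
    app: web-frontend
  ports:
  - port: 80
    targetPort: 8080
```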
VMware Tanzu Kubernetes Grid Service supports two Container Network Interface (CNI) options: Antrea (the default) and Calico. How do they differ? Antrea uses Open vSwitch as its dataplane, so it has no dependency on Linux networking primitives, whereas Calico builds directly on Linux networking primitives such as routing and iptables. This dataplane choice also gives Antrea an advantage over Calico: support for Windows Kubernetes worker nodes.
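Whichever CNI is chosen, both Antrea and Calico enforce the standard Kubernetes NetworkPolicy API, so workload-level policies are portable between them. A minimal sketch, with hypothetical labels:

```yaml
# Standard NetworkPolicy enforced by either Antrea or Calico:
# only pods labeled app=frontend may reach pods labeled app=backend.
# All names and labels are illustrative placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
```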
