Exposing multiple ports/services on the same Load Balancer in Kubernetes
Scenario: I have to expose Kibana (5601), Apache Storm (8080), and nginx (80), all on the same Load Balancer (public IP) in Kubernetes.
Well, while doing a POC on this, I found there is NO/ZERO documentation (or at least I didn't find any) on how it can be done. Even after checking with experts and raising questions on forums, the answers I received were:
- Sorry, it's not possible. Have a public IP per service. (Yes, we can, but if we have 6-7 services that all need to be exposed to clients, that is not easy to manage.)
- Your design is wrong.
- Use HAProxy, Nginx, etc. to route requests.
- And more…
FYI: I was trying this on Azure Kubernetes Service (AKS).
For the last 4-5 years I have been using Docker Swarm, where this feature is available and works very smoothly. As Kubernetes is more or less replacing Swarm, how could such a big and necessary feature be missing??
After some R&D and countless hours, I finally got it working on Kubernetes (Eureka!!! I found it :)). When I went back over the solution, I laughed at myself and others: why is such a nice feature undocumented and unexplored, and why complicate things with other routing technologies?
The answer is very simple – labels.
Yes, labels (Kubernetes' term for tags) will get the job done. Not one label, but multiple labels.
You can have multiple labels on the same workload (Deployment/Service/Pod, etc.), and a Service will then be linked to every Pod whose labels match its selector.
Let's take an example to demonstrate this.
Note: For demo purposes, I am ignoring nodeSelector, disks, etc.
Step 1: Create an external facing Load Balancer
```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-lb
spec:
  type: LoadBalancer
  ports:
  - name: elk-kibana
    port: 5601
    targetPort: "5601-port"
  - name: storm-nimbus
    port: 8080
    targetPort: "8080-port"
  - name: nginx
    port: 80
    targetPort: "80-port"
  selector:
    lbtype: external
```
Nothing fancy in the YAML file. Just one important point – the selector (lbtype: external).
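Also worth noting: each targetPort above refers to a named container port rather than a number, and Kubernetes resolves that name against the ports declared in each backing Pod. A rough Python sketch of that lookup (hypothetical data structures, not real kube-proxy code):

```python
# Sketch of how a Service resolves a named targetPort against the
# container ports declared in a Pod. Hypothetical data; this is an
# illustration of the idea, not Kubernetes source code.

service_ports = [
    {"name": "elk-kibana",   "port": 5601, "targetPort": "5601-port"},
    {"name": "storm-nimbus", "port": 8080, "targetPort": "8080-port"},
    {"name": "nginx",        "port": 80,   "targetPort": "80-port"},
]

# Named ports declared by the containers in the backing pods.
pod_container_ports = {
    "5601-port": 5601,
    "8080-port": 8080,
    "80-port": 80,
}

def resolve(target_port):
    """A named targetPort is looked up in the pod spec; an int is used as-is."""
    if isinstance(target_port, int):
        return target_port
    return pod_container_ports[target_port]

for sp in service_ports:
    print(f"frontend {sp['port']} -> container {resolve(sp['targetPort'])}")
```

Using names instead of numbers means a workload can change its container port without the Service manifest having to change.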
Once you apply it to Kubernetes, you can view it on the Kubernetes dashboard. (You can also use the describe command, but since a picture is worth a thousand words, I am using an image.)
Step 2: Create deployment/services to setup Kibana, Apache Storm & Nginx
Kibana : Kind - Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elk-kibana
  labels:
    app: elk-kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      # select only this workload's pods; sharing "lbtype" alone across
      # workloads would give the Deployments overlapping selectors
      app: elk-kibana
  template:
    metadata:
      labels:
        lbtype: external
        app: elk-kibana
    spec:
      containers:
      - name: elk-kibana
        image: elk-kibana:7.4.0-2
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 1500m
            memory: "2Gi"
        ports:
        - containerPort: 5601
          name: "5601-port"
```
Nimbus: Kind - Service with StatefulSet
```yaml
apiVersion: v1
kind: Service
metadata:
  name: storm-nimbus
  labels:
    app: storm-nimbus
spec:
  ports:
  - port: 8080
    targetPort: 8080
    name: 8080-port
  selector:
    app: storm-nimbus
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: storm-nimbus
spec:
  selector:
    matchLabels:
      # select only this workload's pods; sharing "lbtype" alone across
      # workloads would give the controllers overlapping selectors
      app: storm-nimbus
  serviceName: storm-nimbus
  replicas: 2
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: storm-nimbus
        lbtype: external
    spec:
      containers:
      - name: storm-nimbus
        imagePullPolicy: Always
        image: storm-nimbus-clustered:1.2.1
        resources:
          requests:
            memory: "2G"
        ports:
        - containerPort: 8080
          name: 8080-port
        volumeMounts:
        - name: storm-nimbus-data
          mountPath: /var/lib/storm
        - name: storm-logs
          mountPath: /usr/share/storm/logs
        env:
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      volumes:
      - name: storm-logs
        hostPath:
          path: /var/log/containers/storm-nimbus/default
  volumeClaimTemplates:
  - metadata:
      name: storm-nimbus-data
      labels:
        app: storm-nimbus
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: managed-premium-retain
      resources:
        requests:
          storage: 10Gi
```
Nginx: Kind - Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      # select only this workload's pods; sharing "lbtype" alone across
      # workloads would give the Deployments overlapping selectors
      app: nginx
  template:
    metadata:
      labels:
        lbtype: external
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        tty: true
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 500m
            memory: "1Gi"
        ports:
        - containerPort: 80
          name: "80-port"
```
In all of the above deployment files, if you look carefully, each Pod template carries multiple labels: "lbtype: external" and an "app" label. The "lbtype" label is what links the pods to external-lb, whereas the "app" label lets us track and target each deployment/service individually.
"lbtype: external" is set as the selector of external-lb, so every Pod carrying that label gets linked to external-lb.
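The matching rule itself is simple set containment: a Service selects every Pod whose labels are a superset of the Service's selector. A small Python sketch of that idea, with hypothetical pod names:

```python
# Sketch of Kubernetes label selection: a selector matches a Pod when
# every key/value pair in the selector appears in the Pod's labels.
# Pod names here are hypothetical; the labels mirror the manifests above.

pods = [
    {"name": "elk-kibana-1",   "labels": {"lbtype": "external", "app": "elk-kibana"}},
    {"name": "storm-nimbus-0", "labels": {"lbtype": "external", "app": "storm-nimbus"}},
    {"name": "storm-nimbus-1", "labels": {"lbtype": "external", "app": "storm-nimbus"}},
    {"name": "nginx-1",        "labels": {"lbtype": "external", "app": "nginx"}},
    {"name": "unrelated-1",    "labels": {"app": "batch-job"}},
]

def select(pods, selector):
    """Return the names of pods whose labels contain every pair in selector."""
    return [p["name"] for p in pods if selector.items() <= p["labels"].items()]

# The external-lb selector picks up all four workload pods at once...
print(select(pods, {"lbtype": "external"}))
# ...while the per-workload "app" label still addresses each workload alone.
print(select(pods, {"app": "storm-nimbus"}))
```

This is why one extra shared label is enough: the pods stay individually addressable through "app" while all of them sit behind the one LoadBalancer Service.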
You can navigate to the Kubernetes dashboard and check the services & deployments (do check the labels).
Internal view of external LB
As you can see, all the pods are listed under the external-lb service.
Now it's time for the moment of truth. Let's hit the URL on the different ports.
Kibana: accessible on the same IP on port 5601
Apache Storm: accessible on the same IP on port 8080
Nginx: accessible on the same IP on port 80
So, as you can see, we have achieved hosting multiple services on a single load balancer.
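Conceptually, the single public IP just demultiplexes on the destination port. A toy Python sketch of that routing table (service names reused from above, everything else hypothetical):

```python
# Toy model of the load balancer's frontend: one public IP, three
# frontend ports, each forwarding to a different backend Service.
# This is an illustration, not how a cloud LB is implemented.

routes = {5601: "elk-kibana", 8080: "storm-nimbus", 80: "nginx"}

def backend_for(port):
    """Pick the backend for an incoming connection by destination port."""
    return routes.get(port, "connection refused")

for port in (5601, 8080, 80, 9999):
    print(f"{port} -> {backend_for(port)}")
```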
Do try it out and message me if you have any doubts about it.
Happy Coding!!
Can we perform this with type ClusterIP?
This can be helpful: https://github.com/abdheshnayak/port-bridge. I went through the same problem. But the above solution also has some constraints: the pod template can't be updated, and it will not work across multiple namespaces. For that I wrote a Kubernetes operator which automatically accumulates all your NodePort-type services across different namespaces and creates a single LoadBalancer service, tunnelling requests to the specific services via iptables rules.
Hi Sunil Agarwa, is it possible to use only one HAProxy deployment (instead of your examples nginx, Nimbus, Kibana), connect all the ports to the related HAProxy ports, and let HAProxy handle all the different backends depending on the port?
This works on Linode, you are a life saver 🤝
Hey, this works great, thank you! But to make this work, the Service and the Deployment/StatefulSet need to be deployed in the same namespace, if I am correct. Is there a way to deploy the Service in the default namespace and the Deployment/StatefulSet in a different namespace and still make it work?