Kubernetes Cluster Misconfigurations Found in 60 Minutes

I was handed read access to a production Kubernetes cluster. 60 minutes. No insider knowledge. Just open source tools. Here's every misconfiguration I found 👇

🔴 7 containers running as root (securityContext fix sketched below)
🔴 2 privileged containers in production (one was a backend API — no reason for it)
🔴 A CI/CD service account with cluster-admin, created "temporarily" 8 months ago — still active
🔴 3 hardcoded secrets in plain env vars, likely sitting in a Git repo
🔴 Zero NetworkPolicies — every pod could talk to every other pod freely (default-deny sketch below)
🔴 No resource limits on 60% of pods (LimitRange sketch below)
🔴 6 images running :latest — one had a known critical CVE
🔴 etcd backups configured but never tested
🔴 No admission controller — meaning every fix could be silently undone on the next deployment (policy sketch below)

Kubescape final score: 47% compliance with the NSA/CISA hardening guidelines.

This wasn't a terrible cluster. It had TLS on ingress, secrets in Kubernetes Secrets, and separate namespaces for staging and prod. But 9 findings in under an hour — with just kubectl, Trivy, kube-bench, Polaris, and Kubescape. All free. All open source.

If you haven't audited your cluster recently, you might not like what you find. That's exactly why you should.

📖 Full write-up with every command, fix, and explanation: https://lnkd.in/dqSVQJ-3

#Kubernetes #DevSecOps #CloudSecurity #K8s #DevOps #CloudNative #OpenSource #Security
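For the root and privileged containers, the fix is an explicit securityContext on each container. A minimal sketch, assuming a Deployment; the name, image, and UID are placeholders, not the actual workloads from the audit:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-api                # placeholder, not the audited workload
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend-api
  template:
    metadata:
      labels:
        app: backend-api
    spec:
      containers:
        - name: api
          image: registry.example.com/backend-api:1.4.2  # pinned tag instead of :latest
          securityContext:
            runAsNonRoot: true               # kubelet refuses to start the container as UID 0
            runAsUser: 10001                 # any non-root UID the image supports
            privileged: false                # the backend API had no reason to be privileged
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
            readOnlyRootFilesystem: true
```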
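Zero NetworkPolicies means the pod network is flat. The usual starting point is a default-deny policy per namespace, then explicit allows on top. A sketch below, assuming your CNI actually enforces NetworkPolicy (Calico, Cilium, etc.) and using "prod" as a placeholder namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: prod          # apply one per namespace; "prod" is a placeholder
spec:
  podSelector: {}          # empty selector = every pod in the namespace
  policyTypes:
    - Ingress
    - Egress               # all traffic denied until explicitly allowed
```

One caveat: denying egress also blocks DNS, so add an allow rule for kube-dns before rolling this out.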
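For the missing resource limits, per-container requests/limits are the real fix, but a LimitRange gives a namespace a safety net for any container that omits them. A sketch with illustrative numbers, not tuned values:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: prod          # placeholder namespace
spec:
  limits:
    - type: Container
      defaultRequest:      # injected when a container sets no requests
        cpu: 100m
        memory: 128Mi
      default:             # injected when a container sets no limits
        cpu: 500m          # throttled above this
        memory: 256Mi      # OOM-killed above this
```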
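And the finding that makes the rest fragile: with no admission controller, the next deploy can silently reintroduce any of the above. A minimal sketch using the built-in ValidatingAdmissionPolicy (GA since Kubernetes 1.30; Kyverno or OPA Gatekeeper are the usual alternatives on older clusters), rejecting privileged pods:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: deny-privileged
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
  validations:
    # Sketch only: checks regular containers, not initContainers/ephemeralContainers
    - expression: "object.spec.containers.all(c, !(has(c.securityContext) && has(c.securityContext.privileged) && c.securityContext.privileged))"
      message: "Privileged containers are not allowed."
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: deny-privileged-binding
spec:
  policyName: deny-privileged
  validationActions: ["Deny"]
```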


