Your CI pipeline is not less risky than production. It runs with secrets, has internet access, and most engineers treat it as config, not code.

Wiz published a full GitHub Actions threat model this week. A few things that stood out:

→ Untrusted inputs in `run:` steps can trigger script injection without a single PR approval
→ `GITHUB_TOKEN` is routinely over-permissioned and scoped to the whole repo
→ Third-party actions are supply chain risk by default. Pinning to a full commit SHA is not optional.
→ Secrets in env vars leak into logs more often than most teams realize

Full breakdown: https://www.wiz.io/blog/github-actions-security-threat-model-and-defenses

#devops #security #githubactions
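To make the first three bullets concrete, here is a minimal sketch of a hardened workflow (the trigger and the action version tags are illustrative; in practice pin each action to the full 40-character commit SHA you audited):

    on:
      issues:
        types: [opened]   # example trigger that carries untrusted input

    # least-privilege token: grant only what this workflow needs
    permissions:
      contents: read

    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          # pin third-party actions to a full commit SHA, not a mutable tag
          - uses: actions/checkout@v4   # placeholder; use the audited SHA
          # vulnerable pattern: run: echo "${{ github.event.issue.title }}"
          # (the expression is expanded into the script before the shell
          # runs it, so a crafted title becomes executable code)
          - name: Use untrusted input safely
            env:
              TITLE: ${{ github.event.issue.title }}   # shell sees only data
            run: echo "$TITLE"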
More Relevant Posts
Watching the Anthropic GitHub situation unfold recently was a sobering moment for anyone running an engineering team.

A minor misconfiguration leaked some internal code. To contain it, an automated DMCA script was deployed. But the script couldn't distinguish the leaked repository from legitimate developer forks. Thousands of innocent projects got caught in the crossfire before the manual "undo" button was hit.

It highlights a tension we are all dealing with: the speed of automation versus the nuance of human judgment. We are building incredibly fast automated defenses to protect our perimeters. But when those scripts are given the authority to execute, like issuing a takedown, without a human circuit breaker, the blast radius is entirely unpredictable.

If a critical alert goes off in your infrastructure today, how much autonomy does your containment script have?

#CTO #Security #DevOps
GitHub Launches Fork Commit Detector to Flag Malicious Code in Supply Chains

📌 GitHub's new Fork Commit Detector scans code supply chains to spot sneaky "imposter commits": commits that exist only in a fork but masquerade as trusted upstream code. Built for DevOps teams, it flags risky Git SHA references before they trigger CI/CD pipelines or break critical tools. A vital step in securing automated workflows against hidden supply chain threats.

🔗 Read more: https://lnkd.in/d8PKUEsy

#Github #Forkcommit #Supplychain #Git #Imposter
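The underlying check is easy to reproduce by hand: a fresh clone of the true upstream repository contains only commits reachable from its real branches and tags, so a SHA that lives only in a fork will simply be absent. A rough shell sketch (repo and SHA are passed in as placeholders):

    #!/bin/sh
    # usage: ./pin-check.sh OWNER/REPO SHA
    REPO="$1"; SHA="$2"

    # bare clone of the true upstream is enough; no working tree needed
    git clone --bare "https://github.com/$REPO" pin-check
    cd pin-check || exit 1

    # a clone only contains objects reachable from real upstream refs,
    # so a fork-only "imposter" SHA will be missing here
    if git cat-file -e "${SHA}^{commit}" 2>/dev/null; then
      echo "commit exists in upstream history"
    else
      echo "possible imposter commit: not reachable from any upstream ref"
    fi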
Hackers Hijacked a GitHub Actions Workflow to Push Malicious Code to PyPI

Elementary Data's open source CLI was the victim, and v0.23.3 is not a version you want installed.

Read more: https://lnkd.in/gfgGEaGN

🎪 Step right up to the DevOps community! Join us for an amazing journey of learning and growth.
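On the publishing side, one standard hardening is to drop long-lived PyPI tokens from CI entirely and gate the publish job behind a protected environment. A hedged sketch, assuming PyPI trusted publishing is enabled for the project (the environment name and artifact name are placeholders):

    jobs:
      publish:
        runs-on: ubuntu-latest
        environment: pypi      # require human review via environment protection rules
        permissions:
          id-token: write      # short-lived OIDC token for trusted publishing
          contents: read
        steps:
          - uses: actions/download-artifact@v4   # dist/ built in an earlier, unprivileged job
            with:
              name: dist
              path: dist/
          - uses: pypa/gh-action-pypi-publish@release/v1   # better: pin to a full commit SHA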
I was handed read access to a production Kubernetes cluster. 60 minutes. No insider knowledge. Just open source tools.

Here's every misconfiguration I found 👇

🔴 7 containers running as root
🔴 2 privileged containers in production (one was a backend API — no reason for it)
🔴 A CI/CD service account with cluster-admin, created "temporarily" 8 months ago — still active
🔴 3 hardcoded secrets in plain env vars, likely sitting in a Git repo
🔴 Zero NetworkPolicies — every pod could talk to every other pod freely
🔴 No resource limits on 60% of pods
🔴 6 images running :latest — one had a known critical CVE
🔴 etcd backups configured but never tested
🔴 No admission controller — meaning every fix could be silently undone on the next deployment

Kubescape final score: 47% compliance with NSA/CISA hardening guidelines.

This wasn't a terrible cluster. It had TLS on ingress, secrets in Kubernetes Secrets, and separate namespaces for staging and prod. But 9 findings in under an hour — with just kubectl, Trivy, Kube-bench, Polaris, and Kubescape. All free. All open source.

If you haven't audited your cluster recently, you might not like what you find. That's exactly why you should.

📖 Full write-up with every command, fix, and explanation: https://lnkd.in/dqSVQJ-3

#Kubernetes #DevSecOps #CloudSecurity #K8s #DevOps #CloudNative #OpenSource #Security
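Two of the cheapest fixes on that list, sketched in minimal form (the namespace and resource values are placeholders): a container-spec fragment that addresses the root, privilege-escalation, and resource-limit findings, and a default-deny NetworkPolicy to layer narrower allow rules on top of:

    # container spec fragment: drop root, privilege escalation, and caps
    securityContext:
      runAsNonRoot: true
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
    resources:
      requests: { cpu: 100m, memory: 128Mi }
      limits:   { cpu: 500m, memory: 256Mi }

    ---
    # selects every pod in the namespace and allows no traffic by default
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-all
      namespace: prod
    spec:
      podSelector: {}
      policyTypes: ["Ingress", "Egress"]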
Security maturity in CI/CD is not just about blocking abuse. It's also about detecting abuse when controls fail, without becoming a release bottleneck.

Layered controls (each one reduces risk):

- Install scripts → try to limit/disable arbitrary code execution during build time
- Pinned dependencies → visible control, but only at the top layer → what about invisible transitive dependencies or composite actions you don't see, pin, or audit?
- Cooldown strategies → helpful, but only to an extent → the delay often just lets the wider community act as a canary
- Threat intelligence on packages → reactive and often lagging
- GitHub Actions hardening (pwn_request, injection controls) → reduces known attack paths
- Unprivileged sandboxing in CI → limits blast radius, not initial compromise
- Private registries / proxy controls → strong guardrails on what enters your pipeline → but still focused on prevention and policy, not runtime detection of misuse

All of these reduce risk. What actually detects when something goes wrong?

- When a compromised transitive dependency executes…
- When a nested action pulls malicious code…
- When secrets are silently exfiltrated…

How do you detect it without heavy instrumentation or impacting developer velocity? 👇

#GitHub #CircleCI #CI #CD #SupplyChainSecurity #PipelineSecurity #AppSec #DevSecOps #Detection #Canaries #Secrets #Dependencies
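One lightweight answer to that detection question is egress auditing on the runner itself. A sketch using the open source step-security/harden-runner action (a swapped-in example of the pattern, not the only option; version tags are illustrative and should be pinned to full commit SHAs):

    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          # must be the first step: it records every outbound connection
          # the job makes, so a transitive dependency phoning home shows
          # up even when every preventive layer above has failed
          - uses: step-security/harden-runner@v2
            with:
              egress-policy: audit   # switch to "block" with allowed-endpoints once baselined
          - uses: actions/checkout@v4
          - run: npm ci --ignore-scripts && npm test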
Bitwarden CLI got hit in a supply chain attack this week. Version 2026.4.0 was live on npm for 90 minutes before it was pulled. That window is all it takes.

The malware used a preinstall hook to collect SSH keys, .env files, AWS credentials, GitHub tokens, shell history, and even AI tooling configs like Claude, Cursor, and Codex CLI. Everything was encrypted and exfiltrated, with a public GitHub repo as a fallback if the primary C2 was blocked.

The interesting part is how trusted publishing failed here. npm trusted publishing replaced long-lived tokens with an OIDC policy tied to a specific repo and branch. Solid in theory. But it checks the workflow, not what the workflow's actions are actually running. Bitwarden's CI used a Checkmarx GitHub Action that was already compromised in an earlier attack. The build looked clean. The action it called was not.

If you installed @bitwarden/cli in the last 48 hours without a pinned version, rotate every secret that machine had access to. Pin your deps. No caret. No tilde. Commit your lockfiles. And audit the third-party GitHub Actions in your pipeline, not just your own code.

The weakest link is not your code. It is the thing that builds your code.

#SupplyChainSecurity #DevSecOps #NPM #GitHubActions #DeveloperSecurity #CloudSecurity
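In practice that advice reduces to a couple of switches. A minimal sketch of the CI install step; running `npm config set save-exact=true` locally also keeps carets and tildes out of new dependency entries:

    steps:
      # npm ci installs exactly what the committed lockfile says and fails
      # on any drift; --ignore-scripts disables preinstall/postinstall
      # hooks like the one this malware used (re-enable selectively for
      # packages that genuinely need native build scripts)
      - run: npm ci --ignore-scripts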
I built and open-sourced a pipeline that proves whether an artifact was actually built by CI (Sigstore + GitHub Actions + ArgoCD). Because last month proved something uncomfortable: if it's in your registry, most pipelines just trust it.

The March 2026 axios attack wasn't a CI failure. A threat actor stole an npm maintainer token and pushed malicious versions directly to the registry. No CI compromise. No pipeline breach. Just a valid push.

👉 Millions of downloads
👉 Thousands of pipelines affected
👉 Everything looked legitimate

Here's the real problem: most pipelines cannot tell whether an artifact was built by their CI or pushed by someone else. Both look identical. Both get deployed the same way.

That's the gap Sigstore solves. Not just signing, but proving artifact origin. With Cosign:

- CI generates an ephemeral key (never stored)
- Identity is verified via OIDC
- Fulcio issues a short-lived cert
- The artifact is signed
- Rekor logs it publicly

Now you can verify: "This image was built by this exact CI pipeline."

Verification (this is where most teams fail):

    cosign verify \
      --certificate-identity "https://lnkd.in/gQsRgUBk" \
      --certificate-oidc-issuer "https://lnkd.in/g2c-BFSv" \
      image:tag

If this fails → your cluster should reject it.

The pipeline I built:

- feature → lint/test only
- PR → full test (no push)
- main → build + sign + push
- tag → verify + promote (NO rebuild)
- prod → human approval + admission policy

What this fixes:
- Stolen registry token ≠ trusted artifact
- The registry is no longer your root of trust
- CI identity becomes the source of truth

What it doesn't fix:
- Compromised CI
- Malicious commits
- Vulnerable dependencies

The axios attack worked because a push was enough. This model makes a push not enough.

Full breakdown and GitHub repo are in the first comment 👇

#DevSecOps #Kubernetes #GitOps #Sigstore #SupplyChainSecurity #CloudNative #PlatformEngineering
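For the admission-policy step at the end, here is a minimal sketch of the cluster-side enforcement using the Sigstore policy-controller (the registry glob, org, repo, and workflow path are placeholders for your own values):

    apiVersion: policy.sigstore.dev/v1beta1
    kind: ClusterImagePolicy
    metadata:
      name: require-ci-built-images
    spec:
      images:
        - glob: "registry.example.com/**"
      authorities:
        - keyless:
            identities:
              # admit only images whose Fulcio certificate was issued via
              # OIDC to this exact GitHub Actions workflow
              - issuer: https://token.actions.githubusercontent.com
                subject: https://github.com/example-org/example-app/.github/workflows/release.yml@refs/heads/main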
I think one of the biggest issues I've had with Kubernetes in general is having to do compliance evidence gathering and remediation against it. The functions-as-a-service era actually marked a nice spot for that: Lambda was simply a code upload (containers came later), which offloaded a huge part of the shared responsibility model from dev teams.

Trying to handle several CVE remediations across a diverse set of container images gets tedious pretty fast, not to mention potentially having to do it against node groups too. I miss the "upload this code and we'll do the rest" era. Sometimes it feels like doing evidence gathering for two different clouds.