Evening reflection: the Docker Engine v29.3.0 preview strengthens security and resource handling across container workflows. Early notes: https://lnkd.in/gq4jTYFT In containerized environments, improvements like these keep deployments robust. Any early preview features on your radar? Reply! #Docker #DevOps #Containerization #Security #CloudNative
Docker Engine v29.3.0 Strengthens Security and Resource Handling
More Relevant Posts
-
Stop leaking your source code. 🛑

I recently analyzed a Claude Code issue where production .map files were left publicly accessible. It's a common but critical blunder that allows anyone to reverse-engineer minified bundles back into your original TypeScript source code.

How to stay secure (my approach):
* Debug Locally: Generate source maps locally to map cryptic production errors (e.g., Line 1, Col 5000) back to the exact TS line without ever uploading the map file.
* Server-Side Blocking: If maps must be on the server, use Nginx rules to explicitly deny all access to any file ending in .map (see the sketch below).
* CI/CD Discipline: Ensure build artifacts are stripped of maps during the production pipeline and verify they are strictly listed in your .gitignore.

Security isn't just about the code you write; it's about how you protect the build.

#SoftwareEngineering #WebSecurity #TypeScript #DevOps #SeniorDeveloper #CodingTips #claudecode
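The Nginx rule from the second bullet, as a minimal sketch; the location block is illustrative, not the author's actual config:

```nginx
# Inside your server block: answer 404 for any request ending in .map,
# so production source maps are never served even if they were deployed.
location ~* \.map$ {
    return 404;
}
```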
-
Anthropic just shipped 512,000 lines of Claude Code source to npm — because of ONE missing line in .npmignore. No CVE. No breach. Just a 59.8 MB source map sitting in a public package.

Here's what actually happened:
→ Bun generates source maps by default
→ The npm package was missing `.map` in .npmignore
→ Researcher Chaofan Shou found it within hours
→ The map pointed to a zip on Anthropic's own cloud storage
→ 1,900 files of unobfuscated TypeScript — the full repo

Then the internet did what the internet does. A clean-room rewrite hit 50,000 GitHub stars in 2 hours. Reportedly the fastest-growing repo in GitHub history.

Anthropic's statement: "release packaging issue caused by human error, not a security breach." Technically correct. Strategically catastrophic. The architecture of the fastest-growing dev tool of 2026 is now public domain.

Why this matters for every developer shipping npm packages:
- Source maps are ON BY DEFAULT in Bun, webpack, Vite, esbuild, Rollup
- One missing .npmignore entry bundles your entire codebase
- "Not a breach" and "total IP exposure" can be the same event
- Your build pipeline is now your biggest trade-secret risk

Three things to add to your CI today (the first is sketched below):
1. `npm pack --dry-run` — list every file before publish
2. Add `.map`, `.map.js`, `.ts`, `src/` to .npmignore
3. Set `sourcemap: false` for production builds, or use `hidden-source-map`

If Anthropic's engineers missed it, your team will too. The scariest part? This wasn't sophisticated. It was one line of config.

Check your next npm publish. Run the dry-run. Read every file in the tarball. Because the difference between shipping a library and shipping your company is sometimes just one glob pattern.

If you found this useful, repost — someone in your network is one `npm publish` away from the same mistake.

#npm #DevOps #SoftwareEngineering #ClaudeCode #SupplyChainSecurity
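A hedged sketch of check #1 as a CI step; the grep pattern and failure handling are my assumptions, not from the post:

```sh
# List every file that would land in the published tarball (nothing is uploaded)
npm pack --dry-run

# Fail the build if a source map would ship; npm prints the tarball
# contents as "npm notice <size> <path>" lines.
if npm pack --dry-run 2>&1 | grep -E '\.map$'; then
  echo "ERROR: source maps found in package tarball" >&2
  exit 1
fi
```

An allowlist via the `files` field in package.json is often safer than .npmignore, since new paths are excluded by default instead of included.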
-
Stop Shipping "Heavy" Docker Images: The Power of Multi-Stage Builds & Distroless

Are your Docker images bloated with unnecessary tools? If you are still shipping compilers, build logs, and shell utilities to production, you're leaving performance and security on the table. In modern DevOps, smaller is better.

The Problem: Traditional Builds
A standard Dockerfile often includes everything: the OS, build tools (Go, Maven, GCC), source code, and the final app.
Result: a 1GB image for a 5MB application.
Risk: a large attack surface (shells like bash and package managers like apt can be exploited by attackers).

The Solution: Multi-Stage Builds + Distroless
By splitting your Dockerfile into two stages, you can drastically optimize your workflow (see the sketch below):
Stage 1 (Builder): use a heavy image to compile your code.
Stage 2 (Runtime): use a Distroless image to run it.

What is Distroless? It's a minimal image that contains only your application and its runtime dependencies. No shell, no package manager, no extra bloat.

Why this matters:
- Massive size reduction: go from 800MB+ images down to <20MB.
- Hardened security: by removing the shell (/bin/sh), you eliminate the most common way attackers execute malicious commands in a container.
- Faster scaling: smaller images pull from registries faster, making your CI/CD pipelines and Kubernetes deployments lightning quick.

Pro tip: if you are building statically linked binaries (as in Go or Rust), try FROM scratch. It's a completely empty base image: nothing in it but what you copy in.

Stop shipping your "builder" tools to production. Your infrastructure—and your security team—will thank you.

#Docker #DevOps #ContainerSecurity #CloudNative #Kubernetes #SoftwareEngineering #Distroless #Microservices
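A minimal sketch of the two-stage pattern, assuming a statically compiled Go service (image tags and paths are illustrative):

```dockerfile
# Stage 1 (builder): full toolchain; nothing here ships to production
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /server ./cmd/server

# Stage 2 (runtime): distroless base with no shell and no package manager
FROM gcr.io/distroless/static-debian12
COPY --from=builder /server /server
USER nonroot:nonroot
ENTRYPOINT ["/server"]
```

Swapping the second FROM for `scratch` works here too, since the binary is statically linked.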
-
🛡️ Just shipped Dock Shield — an end-to-end DevSecOps pipeline that scans container images for vulnerabilities before they ever reach production. Here's what I built and why it matters:

𝗧𝗵𝗲 𝗣𝗿𝗼𝗯𝗹𝗲𝗺: Most teams deploy containers without knowing what's inside them. Vulnerabilities, leaked secrets, and misconfigurations silently ship to production every day.

𝗧𝗵𝗲 𝗦𝗼𝗹𝘂𝘁𝗶𝗼𝗻: Dock Shield is a container security scanning dashboard that catches these issues at every stage — from local development to CI/CD to Kubernetes.

𝗪𝗵𝗮𝘁 𝗜 𝗕𝘂𝗶𝗹𝘁:
→ A Node.js backend that runs real Trivy scans (vulnerabilities + secrets + misconfigurations)
→ A clean web UI served through Nginx to visualize scan results
→ GitHub Actions pipeline with Gitleaks + Semgrep + Trivy integrated into every push
→ Infrastructure as Code with Terraform provisioning a DigitalOcean Kubernetes cluster
→ Kubernetes manifests with health checks, readiness probes, and proper service networking
→ Support for both public and private container registry scanning

𝗧𝗵𝗲 𝗖𝗜/𝗖𝗗 𝗳𝗹𝗼𝘄: Push to main → Gitleaks scans for secrets → Semgrep runs SAST → Docker build → Trivy scans the image → Push to GHCR → Auto-provision DOKS cluster if needed → Deploy to Kubernetes (a sketch of the Trivy gate follows below)

𝗧𝗲𝗰𝗵 𝘀𝘁𝗮𝗰𝗸: Docker · Kubernetes · Terraform · Trivy · GitHub Actions · Gitleaks · Semgrep · Node.js · Nginx · DigitalOcean

𝗞𝗲𝘆 𝗹𝗲𝗮𝗿𝗻𝗶𝗻𝗴𝘀:
1. Shifting security left isn't just a buzzword — embedding Trivy into the pipeline caught CVEs that would've gone live otherwise.
2. Terraform + Kubernetes + GitHub Actions is a powerful combination when you need reproducible, automated cloud infrastructure.
3. Building the scanning tool yourself (not just using a managed service) teaches you how vulnerability databases, CVE scoring, and fix tracking actually work under the hood.
4. The entire project is open source 👇
🔗 https://lnkd.in/dafFjyr4

If you're working on DevSecOps, container security, or cloud-native tooling — I'd love to connect and exchange ideas.

#DevSecOps #ContainerSecurity #Kubernetes #Docker #Terraform #CloudNative #DevOps #Trivy #GitHubActions #InfrastructureAsCode #OpenSource #CyberSecurity
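A sketch of what the Trivy gate in that flow might look like as a GitHub Actions step; the image name, flags, and severity thresholds are my assumptions, not Dock Shield's actual workflow:

```yaml
# Fail the job when the freshly built image has HIGH/CRITICAL findings
- name: Trivy scan
  run: |
    trivy image \
      --scanners vuln,secret,misconfig \
      --severity HIGH,CRITICAL \
      --exit-code 1 \
      ghcr.io/example/dock-shield:${{ github.sha }}
```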
-
Spent the last 2.5 days debugging something that looked simple — but turned into a full-stack networking puzzle.

I had an application working perfectly on IP:port 😊 But the moment I mapped it to a domain (DNS + reverse proxy), everything started breaking. 😶

Here's what I was dealing with:

🏗️ Hybrid setup
- Docker services (frontend, backend workers)
- Kubernetes cluster (Hatchet, Postgres, RabbitMQ)
- Reverse proxy using Caddy
- Keycloak for authentication

💥 Problems I faced
1. Frontend showing OAuth callback errors
2. Keycloak login not redirecting properly
3. Hatchet stuck in a "Verify Email" loop
4. API working via curl but not via browser
5. gRPC workers failing with connection refused
6. DNS working, but routing broken
7. Same server, but Docker ↔ Kubernetes networking failing

What made it tricky: everything worked individually.
✔ Services running
✔ Pods healthy
✔ APIs responding
But together: routing + cookies + auth + networking = chaos 🪢🫣

💥 Key lessons learned 🤗
✔ /api routing must explicitly go to the backend (not the frontend)
✔ Reverse proxy misrouting can look like auth failures
✔ Kubernetes NodePort vs port-forward → completely different behaviors
✔ A Docker container's "localhost" ≠ the host machine
✔ host.docker.internal needs explicit mapping on Linux
✔ Browser cache can break Next.js Server Actions
✔ Resetting the DB = losing tokens (hidden dependency!)

⚡ Final architecture that worked (Caddyfile sketch below)
Domain → Caddy
├── / → Frontend (Docker)
├── /api → Backend (Docker)
├── /auth → Keycloak
└── /hatchet → Kubernetes services
Workers → Docker → host.docker.internal → port-forward → Hatchet engine

🎯 Biggest takeaway: most "auth issues" are actually routing or networking issues in disguise.

If you're working with:
- Docker + Kubernetes hybrid setups
- Reverse proxies (Caddy / Nginx)
- OAuth (Keycloak / NextAuth)
- gRPC services
🧐 double-check networking before debugging application logic. 😎

This was frustrating — but honestly one of the best learning experiences I've had in DevOps.

#DevOps #Kubernetes #Docker #Caddy #Keycloak #Debugging #SRE #AWS #Cloud #Jenkins
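A Caddyfile sketch of that final layout; the domain, upstream names, and ports are hypothetical, not the author's real config:

```
example.com {
    handle /api/* {
        reverse_proxy backend:8000      # Docker backend, never the frontend
    }
    handle /auth/* {
        reverse_proxy keycloak:8080     # Keycloak on the same domain
    }
    handle /hatchet/* {
        reverse_proxy localhost:8888    # port-forwarded Kubernetes service
    }
    handle {
        reverse_proxy frontend:3000     # catch-all: the frontend
    }
}
```

Caddy's handle blocks are mutually exclusive, so the /api, /auth, and /hatchet routes are matched before the catch-all.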
-
Running a custom static page on an Nginx web server sounds straightforward, but as anyone in DevOps knows, the simplest tasks often hide the best lessons.

I didn't want to bake my index.html directly into a new Docker image every time I made a change. Instead, I decided to use a Kubernetes ConfigMap to inject the file dynamically into the container.

I realized that ConfigMaps are a powerful way to keep your configuration separate from your application. By mounting the ConfigMap as a volume, I could update my website's content without ever touching my deployment manifest or rebuilding my image (a sketch of the manifests follows below).

The Technical Insight
- Indentation is everything: I spent 15 minutes debugging a "Failed Mount" error only to find a single missing space in my YAML indentation. In K8s, a tiny spacing error is the difference between a successful deployment and a silent failure.
- Volume mounts: mounting a ConfigMap as a file in /usr/share/nginx/html is a great way to handle static content, but you have to be careful not to overwrite the entire directory if other files are already there!

One thing I'm still exploring is how to trigger an automatic Nginx reload when I update the ConfigMap. Kubernetes updates the file on disk, but Nginx doesn't always "see" the change immediately.

It's these small, practical hurdles that are making me a more patient and precise engineer. Every failed apply is just a step toward a more stable infrastructure.

To my fellow DevOps learners: what's the silliest "small error" that took you way too long to find? Let's normalize the YAML struggle in the comments! 👇

#DevOps #Kubernetes #K8s #ConfigMaps #LearningInPublic #Nginx #CloudEngineering #TechJourney #BuildingInPublic
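A sketch of the ConfigMap mount described above, using subPath so a single file is injected without shadowing the rest of /usr/share/nginx/html (all names are illustrative). One caveat worth knowing: Kubernetes does not propagate ConfigMap updates to subPath mounts, which is one reason Nginx may never "see" a change:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: static-site
data:
  index.html: |
    <h1>Hello from a ConfigMap</h1>
---
apiVersion: v1
kind: Pod
metadata:
  name: static-nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.27
    volumeMounts:
    - name: site
      mountPath: /usr/share/nginx/html/index.html
      subPath: index.html   # mount one file; leave the rest of the dir intact
  volumes:
  - name: site
    configMap:
      name: static-site
```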
-
1 is the loneliest number, but 2 is the scariest.

Claude Code is incredible. It's also training a muscle most of us shouldn't be building.

You sit in the terminal, the agent proposes an action, and you tap 1. Then 1. Then 1 again. After about twenty of those, your thumb starts looking for a shortcut. That's when you discover 2 — approve this kind of action for the rest of the session. Suddenly the friction is gone. So is the gate.

Here's the part I keep coming back to: 1 is review fatigue. 2 is a standing authorization handed to a non-deterministic system that can decide, mid-session, that the cleanest path forward involves your filesystem, your git history, or your cloud account.

Five risks I'd take seriously:
1. Blast radius creep. "Allow all bash for this session" covers npm install and rm -rf ~/projects. The permission doesn't know the difference.
2. Destructive git ops. git reset --hard, git push --force, branch deletes. One approval, hours of work gone, and the reflog only saves you if you remember it exists.
3. Secret exfiltration paths. Reading .env, ~/.aws/credentials, ~/.ssh/, then piping to a curl or a new file. Each step looks reasonable in isolation.
4. Package and dependency drift. Auto-approved installs pull transitive deps you never reviewed. Supply chain risk that bypasses the review step you'd normally do in a PR.
5. Cloud and infra calls. gcloud, aws, kubectl, terraform apply. Same keystroke, very different bill on Monday.

Five things I'm doing instead:
1. Stay on 1 for anything touching the filesystem, network, or git. The friction is the feature.
2. Run risky sessions in a sandboxed working tree — separate clone, separate branch, no prod creds in the shell env.
3. Allowlist by exact command shape in .claude/settings.json rather than blanket-approving a tool. Bash(npm test) is fine. Bash(*) is not. (A sketch follows below.)
4. Keep a pre-session checklist: which creds are loaded, which directories are writable, what the last good commit is. Thirty seconds, saves hours.
5. Review the diff before letting it run a build or push. If the agent wrote it, you still own it.

None of this is anti-agent. I use Claude Code daily and it has genuinely changed how fast I can move. But speed without a gate isn't velocity, it's just momentum — and momentum is what carries you off the cliff.

If you've found a workflow that keeps the speed without the standing authorization, I want to hear it. And if you've ever pressed 2 and immediately regretted it — same. I'm asking for a friend. The friend is me.
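A minimal sketch of the allowlist from point 3, assuming Claude Code's permissions settings format (commands and paths are illustrative):

```json
{
  "permissions": {
    "allow": [
      "Bash(npm test)",
      "Bash(npm run lint)"
    ],
    "deny": [
      "Read(./.env)",
      "Read(~/.aws/**)",
      "Read(~/.ssh/**)"
    ]
  }
}
```

Exact command shapes like Bash(npm test) match only that command; a trailing :* turns a rule into a prefix match, which is where the blast radius starts creeping back in.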
-
I was handed read access to a production Kubernetes cluster. 60 minutes. No insider knowledge. Just open source tools. Here's every misconfiguration I found 👇

🔴 7 containers running as root
🔴 2 privileged containers in production (one was a backend API — no reason for it)
🔴 A CI/CD service account with cluster-admin, created "temporarily" 8 months ago — still active
🔴 3 hardcoded secrets in plain env vars, likely sitting in a Git repo
🔴 Zero NetworkPolicies — every pod could talk to every other pod freely
🔴 No resource limits on 60% of pods
🔴 6 images running :latest — one had a known critical CVE
🔴 etcd backups configured but never tested
🔴 No admission controller — meaning every fix could be silently undone on the next deployment

Kubescape final score: 47% compliance with NSA/CISA hardening guidelines.

This wasn't a terrible cluster. It had TLS on ingress, secrets in Kubernetes Secrets, and separate namespaces for staging and prod. But 9 findings in under an hour — with just kubectl, Trivy, Kube-bench, Polaris, and Kubescape. All free. All open source. (Two of the kubectl checks are sketched below.)

If you haven't audited your cluster recently, you might not like what you find. That's exactly why you should.

📖 Full write-up with every command, fix, and explanation: https://lnkd.in/dBAW7yYG

#Kubernetes #DevSecOps #CloudSecurity #K8s #DevOps #CloudNative #OpenSource #Security
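Two of those findings are quick read-only checks; a sketch assuming kubectl and jq are available:

```sh
# Containers explicitly marked privileged, across all namespaces
kubectl get pods -A -o json | jq -r '
  .items[] | . as $p
  | $p.spec.containers[]
  | select(.securityContext.privileged == true)
  | "\($p.metadata.namespace)/\($p.metadata.name): \(.name)"'

# Containers running with no resource limits at all
kubectl get pods -A -o json | jq -r '
  .items[] | . as $p
  | $p.spec.containers[]
  | select(.resources.limits == null)
  | "\($p.metadata.namespace)/\($p.metadata.name): \(.name)"'
```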