A single git push can execute arbitrary commands on GitHub's backend servers... CVE-2026-3854 is a command injection in GitHub's push processing pipeline. User-supplied push option values were not sanitized before being injected into internal service headers. Standard git client... any authenticated user... full RCE.

Here is the high-signal breakdown of the chain:

The RCE Chain
Three injections chained together. A non-production rails_env bypasses the sandbox... custom_hooks_dir redirects the hook directory... and a crafted hook entry executes arbitrary commands as the git user. Result: full filesystem read/write and visibility into internal service configurations.

The Cross-Tenant Blast Radius
This is the nightmare scenario... GitHub's shared storage architecture meant code execution on one node gave access across tenants. Millions of public and private repositories—including those of other organizations—were accessible on the affected nodes.

The AI Angle: IDA MCP
This is one of the first critical vulnerabilities discovered in closed-source binaries using autonomous AI. Wiz used IDA MCP for automated reverse engineering across compiled binaries. AI is now finding bugs faster than humans can patch them.

The Exposure Now
GitHub.com was patched within two hours of disclosure on March 4. Public disclosure was held until yesterday to give Enterprise Server operators time to patch. Current state: 88% of GHES instances remain unpatched.

The takeaway: the responsible disclosure window is officially closed. If you run GitHub Enterprise Server... you are likely still exposed. Upgrade to GHES 3.19.3 now. Not this week... now.

#AppSec #DevSecOps #GitHub #AIInfrastructure #SoftwareEngineering
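For context on the delivery mechanism only: push options are a standard git feature (git 2.10+), forwarded verbatim to the server's receive hooks. The sketch below shows that transport and nothing more; the option names come from the write-up above, and the exact payload format GitHub accepted has not been published.

```python
# Transport illustration only: git forwards --push-option values verbatim to
# server-side receive hooks, where they surface as GIT_PUSH_OPTION_0..n.
# Option names are from the write-up; the actual exploit payload is not public.
import subprocess

def push_with_options(remote: str, branch: str, options: list[str]) -> None:
    """Push a branch with push options attached (requires git >= 2.10)."""
    cmd = ["git", "push", remote, branch]
    for opt in options:
        cmd += ["--push-option", opt]
    subprocess.run(cmd, check=True)

# A patched server must reject or sanitize values like these before they
# can reach internal service headers:
push_with_options("origin", "main",
                  ["rails_env=development", "custom_hooks_dir=/tmp/hooks"])
```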
GitHub CVE-2026-3854: Critical RCE Vulnerability Exploited via Git Push
More Relevant Posts
🚀 💻 𝐁𝐫𝐢𝐝𝐠𝐢𝐧𝐠 𝐑𝐞𝐝 𝐇𝐚𝐭 𝐃𝐨𝐜𝐮𝐦𝐞𝐧𝐭𝐚𝐭𝐢𝐨𝐧 𝐰𝐢𝐭𝐡 𝐀𝐈 𝐮𝐬𝐢𝐧𝐠 ⚡𝐌𝐂𝐏⚡: 𝐒𝐨𝐥𝐯𝐢𝐧𝐠 𝐭𝐡𝐞 𝐋𝐋𝐌 𝐓𝐫𝐮𝐬𝐭 𝐆𝐚𝐩 ➤➤➤

I’ve spent most of my time as an Azure Architect and Microsoft-focused professional. That ecosystem is where I’m most comfortable. Recently, I got involved in a Red Hat OpenShift Virtualization project—and that meant quickly ramping up on a completely different stack. Like many of us do today, I leaned heavily on AI tools for queries, answers, and guidance. But there’s always that underlying challenge:
👉 the well-known hallucination problem

🚀 I built a small tool to solve a very specific (and very real) problem
While working on this project, I found myself increasingly relying on LLMs to speed things up. They were great at generating answers. But then came the part that actually matters in real delivery work:
👉 Can I trust this?
👉 Where is this coming from?
That’s where things started to break down.

⚠️ »»ᅳ𝐏𝐫𝐨𝐛𝐥𝐞𝐦 𝐒𝐭𝐚𝐭𝐞𝐦𝐞𝐧𝐭ᅳ►
There was no simple way to validate LLM-generated responses against official Red Hat documentation. Unlike Microsoft Docs (which has an MCP server), Red Hat documentation wasn’t easily accessible in a structured, LLM-friendly way. This led to:
• Time spent manually searching and cross-referencing docs
• Reduced confidence in AI-generated outputs
• Friction in day-to-day engineering workflows

🛠️ 𝐇𝐨𝐰 𝐈 𝐬𝐨𝐥𝐯𝐞𝐝 𝐢𝐭
I built a lightweight Red Hat Docs MCP server that runs locally using Python (over stdio). The idea was simple: bring reliable documentation closer to the LLM workflow. It focuses on:
✅ Discovering relevant Red Hat documentation
✅ Fetching and structuring content to work with Red Hat’s rendering patterns
✅ Making it easier to ground LLM responses in trusted sources

⚙️ 𝐇𝐨𝐰 𝐢𝐭 𝐫𝐮𝐧𝐬 (𝐞𝐱𝐚𝐦𝐩𝐥𝐞 𝐌𝐂𝐏 𝐜𝐨𝐧𝐟𝐢𝐠)
Here’s a simple example of how I wired it up locally using MCP (a sketch of the matching server side follows after this post):

```json
{
  "servers": {
    "redhat-docs": {
      "type": "stdio",
      "command": "${workspaceFolder}/env/bin/python",
      "args": ["${workspaceFolder}/redhat_docs.py"]
    }
  }
}
```

This keeps everything local, simple, and easy to plug into your existing MCP-compatible setup.

💡 »»ᅳ𝐖𝐡𝐚𝐭 𝐬𝐭𝐨𝐨𝐝 𝐨𝐮𝐭ᅳ►
This wasn’t a complex system—but it solved a real gap. When working with AI:
• Speed is easy to achieve
• Trust and validation are the hard parts
And sometimes, solving that doesn’t require a big platform—just the right small tool.

If you’re working with OpenShift, Red Hat docs, or LLM-based workflows, this might help. Link to the GitHub repository 👇
📌 https://lnkd.in/gfzHuG8x

Built with curiosity, community input, and a bit of vibe coding.

#MCP #Python #RedHat #OpenShift #OpenShiftVirtualization #AIEngineering #LLM #DeveloperTools #VibeCoding
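For the curious, the server side of that config can be tiny. Below is a minimal sketch of a stdio MCP server using the official Python SDK's FastMCP class; the tool name and naive fetch logic are my illustrative assumptions, not the actual code from the repo.

```python
# Minimal stdio MCP server sketch using the official Python SDK (pip install mcp).
# Tool name and fetch logic are illustrative assumptions, not the repo's code.
import urllib.request

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("redhat-docs")

@mcp.tool()
def fetch_doc(url: str) -> str:
    """Fetch a documentation page so the client LLM can ground its answer."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

if __name__ == "__main__":
    # Speaks MCP over stdin/stdout, matching the "type": "stdio" config entry.
    mcp.run(transport="stdio")
```

Point the config's command and args at a file like this and any MCP-compatible client can call the tool.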
Found a GitHub repo today that fixes the most expensive mistake in Claude Code setups.

It's called Claude Context, built by Zilliz (the team behind Milvus).

Here's the problem it solves: when you run Claude Code on a large repo, the typical response is to dump entire directories into context. The costs spiral. A single refactoring session can burn $30-50 in tokens because Claude keeps reading files that aren't relevant to the task.

Claude Context fixes the root cause, not the symptom. It indexes your entire codebase into a vector database. When Claude needs context, it runs a semantic search and retrieves only the relevant code chunks. No directory dumps. No manual file curation. Session costs stay flat as your repo grows.

Here's how it works technically (a code sketch follows after this post):
1. Index once: run the setup command and Claude Context builds a vector index of your entire codebase
2. Plug into Claude Code via MCP: one config line and it's live
3. Every Claude Code session now retrieves semantically relevant files instead of reading everything

What I like most: it works with any MCP-compatible agent. Cursor, Windsurf, Claude Code. Not locked to one tool. Also ships as a VS Code extension and npm packages if you prefer that integration path.

7,500+ stars on GitHub. Trending #1 today. MIT license.

The detail that makes this practical: you self-host the vector DB (Milvus) or use Zilliz Cloud. Your code never leaves your environment.

For anyone building on large codebases, this is worth 20 minutes to set up. (Link to repo in the comments)

---
🎁 Bonus
Get the best AI news, tips, tutorials and resources in my newsletter. Join today and receive:
• AI Playbook
• 60+ free AI courses
• 3000+ helpful prompts
100% FREE 👉 https://lnkd.in/gmbECShG
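The retrieval flow described above is easy to picture in code. A rough sketch, assuming pymilvus with Milvus Lite and a toy embedding function standing in for a real model; this illustrates the pattern, not Claude Context's actual implementation.

```python
# Sketch of index-once / search-per-task retrieval (not Claude Context's code).
# Requires: pip install "pymilvus[milvus_lite]"
import hashlib

from pymilvus import MilvusClient

def embed(text: str) -> list[float]:
    """Toy deterministic 768-dim embedding; swap in a real model in practice."""
    digest = hashlib.sha256(text.encode()).digest()  # 32 bytes
    return [b / 255 for b in digest * 24]            # repeated to 768 floats

client = MilvusClient("codebase.db")  # Milvus Lite: a single local file
client.create_collection(collection_name="code_chunks", dimension=768)

# 1. Index once: one vector per code chunk.
chunks = ["def retry_payment(order): ...", "class SessionStore: ..."]
client.insert(
    collection_name="code_chunks",
    data=[{"id": i, "vector": embed(c), "text": c} for i, c in enumerate(chunks)],
)

# 2. Per task: retrieve only the top-k semantically relevant chunks.
hits = client.search(
    collection_name="code_chunks",
    data=[embed("where do we retry failed payments?")],
    limit=5,
    output_fields=["text"],
)
print([h["entity"]["text"] for h in hits[0]])
```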
A few days ago, GitHub went down. It was a reminder of how much the world runs on a few critical systems.

A big reason behind the strain? The explosion of AI-driven workflows: more code, more automation, more load than expected. Even platforms at GitHub’s scale can get caught off guard.

The takeaway for me: we often think scaling problems are solved. But scale is a moving target, especially in the AI era.

And maybe the bigger lesson: the best engineering teams aren’t the ones that never fail. They’re the ones that:
• Own the failure
• Explain it clearly
• Fix it fast
• And learn from it

Because at scale, outages are inevitable. Trust isn’t.

#GitHub #Engineering #Reliability #AI #SystemDesign
https://lnkd.in/ghBu2NbP
GitHub has been down for over a day… because it’s reindexing.

This is a failure mode we’ve normalized. We built systems where:
- data is written once
- indexed somewhere else
- and the product depends on that index being perfectly up to date

When it falls behind, you don’t degrade. You break. In 2026, this should not be acceptable.

Indexing should be:
- offline
- versioned
- swappable
- completely decoupled from serving

At KalDB, we designed the system so indexing and serving are separate concerns. You can rebuild indexes offline without impacting production. Worst case, data is slightly stale. But the system stays available. That tradeoff is intentional. Availability should never depend on whether your index finished rebuilding.

We don’t need faster reindexing. We need architectures where reindexing doesn’t matter.
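The swap itself can be as boring as a symlink rename. A generic sketch of the pattern (not KalDB internals): build index version N+1 offline, verify it, then atomically repoint serving, which only ever reads through the current pointer.

```python
# Generic offline-build / atomic-swap sketch (not KalDB internals).
import os

INDEX_ROOT = "/var/indexes"
CURRENT = os.path.join(INDEX_ROOT, "current")  # symlink serving reads through

def publish_index(new_version_dir: str) -> None:
    """Atomically repoint 'current' at a fully built, verified index version."""
    tmp = CURRENT + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(new_version_dir, tmp)
    os.replace(tmp, CURRENT)  # rename(2) is atomic on POSIX: readers see old or new

# Offline builder: write /var/indexes/v42 completely, run validation, then:
# publish_index("/var/indexes/v42")
```

Serving never blocks on a rebuild; if v42 fails validation, 'current' keeps pointing at v41, and the worst case is stale data rather than downtime.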
Excited to share something I've been building — a GitHub Repository Analyzer MCP Server!

It connects Claude AI directly to any GitHub repository through 13 powerful tools, letting you analyze codebases, browse files, search code, inspect commits, and more — all through natural language.

𝗪𝗵𝗮𝘁 𝗶𝘁 𝗰𝗮𝗻 𝗱𝗼:
→ Deep structural analysis of any repo (language breakdown, entry points, tooling detection)
→ Read files, search code patterns, browse full file trees
→ Fetch commits, branches, contributors, issues & PRs
→ Compare branches and get file URLs instantly
→ Clone repos locally (local mode)

𝗛𝗼𝘄 𝗶𝘁 𝘄𝗼𝗿𝗸𝘀:
The server is built with FastMCP + Python, deployed on Vercel as an async HTTP server. Anyone can connect it to Claude.ai in seconds by adding the MCP URL to the Connectors section.

Each tool call accepts your own GitHub Personal Access Token via a github_token parameter — your token is never stored, never logged, and masked from all error output. It lives only for the duration of that single request. You stay in full control of your credentials at all times (see the sketch after this post).

No setup. No installation. Just paste the link, pass your token, and start analyzing repositories. Want to self-host? Clone the repo, run vercel --prod, and you have your own instance live in minutes.

𝗧𝗲𝗰𝗵 𝘀𝘁𝗮𝗰𝗸: Python · FastMCP · anyio · PyGithub · Vercel Serverless

The project is fully open source under MIT license — contributions, PRs, and feedback are very welcome! 🙌

🔗 Try it now — add this to Claude.ai Connectors: https://lnkd.in/gbpfr59Y
🔑 You'll need a GitHub PAT — generate one at github.com/settings/tokens (public_repo scope is enough for public repos)
📂 GitHub repo: https://lnkd.in/gWgRQAFa

#MCP #Claude #AI #Python #OpenSource #GitHub #DeveloperTools #BuildInPublic
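For illustration, the per-request credential pattern could look roughly like this (an assumed shape using FastMCP and PyGithub, not the repo's exact code):

```python
# Sketch of a per-call token pattern (assumed shape, not the repo's exact code).
from github import Auth, Github
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("github-analyzer")

@mcp.tool()
def repo_summary(full_name: str, github_token: str) -> dict:
    """Basic repo facts; the token exists only for the duration of this call."""
    try:
        gh = Github(auth=Auth.Token(github_token))
        repo = gh.get_repo(full_name)
        return {"stars": repo.stargazers_count, "language": repo.language}
    except Exception as exc:
        # Mask the credential instead of letting it leak through error text.
        raise RuntimeError(str(exc).replace(github_token, "***")) from exc

if __name__ == "__main__":
    mcp.run()
```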
As of Friday, May 1, 2026, GitHub is currently operational, though it is emerging from a "reliability crisis" that defined much of April.

Timeline of Events (April – May 2026):
- March 31: The "Real" Outage Begins: A six-hour data loss event on internal metadata systems served as the first major signal of structural instability.
- April 17: Caching Failure: A capacity saturation of a caching component caused errors for about 1.5% of web requests, leading to slow page loads and failed requests.
- April 23: The Silent Merge Regression: A major bug in GitHub Merge Queues silently reverted or corrupted commits across over 2,000 Pull Requests. Many developers were unaware their work had been reverted until manual checks were performed.
- April 27: The Elasticsearch "Blackout": The GitHub Search backend (Elasticsearch) was hit by a botnet-driven traffic spike. While core Git operations (push/pull) worked, the UI became a "liar"—PR lists appeared empty and search was completely disabled.
- April 28: Critical RCE Disclosure (CVE-2026-3854): Wiz Research disclosed a critical Remote Code Execution (RCE) vulnerability. This flaw allowed anyone with push access to execute commands on GitHub's backend using a single git push.
- April 29: The "Mitchell Hashimoto" Exit: Mitchell Hashimoto (founder of HashiCorp) famously pulled his high-profile project, Ghostty, off GitHub, citing the platform's recent instability as a "personal" crisis for his workflow.
- May 1 (Current): Recovery and "30x" Plan: GitHub CTO Vlad Fedorov officially admitted that "agentic AI workflows" have pushed demand to 30 times the expected load. The platform is now undergoing a massive architectural shift (moving Ruby monolith paths to Go) to handle this permanent shift in usage. [1, 2, 3, 4, 5, 6, 7, 8, 9]

#ai #cloud #go #ruby #git #github #aws #google #gcp #amazon #trending #status
DevOps Project: Multi-Container Architecture with Docker, Redis & PostgreSQL

Today, I built a hands-on project focused on microservices architecture using Docker and Docker Compose. This project gave me practical exposure to how modern applications are designed for scalability, flexibility, and independent deployments.

🔍 What I explored:
✅ Microservices Architecture → Breaking a monolithic application into smaller, independent services
✅ Containerization with Docker → Packaging applications with all dependencies for consistent environments
✅ Project Structure Design → Organizing multiple services in a clean and maintainable way
✅ Building & Running Containers → Creating Docker images and running services independently
✅ Best Practices → Writing optimized Dockerfiles and preparing services for scaling

🔗 I’ve pushed these real-time scenarios to my GitHub repository — would love for you to check it out and share your thoughts!
GitHub Link - https://lnkd.in/gmD77n_w

💡 Key Learning: Containerization is not just about running applications — it's about building systems that are portable, scalable, and production-ready.

Using Docker Compose, I orchestrated multiple containers simultaneously (see the sketch after this post), including:
🗄️ PostgreSQL for database management
⚡ Redis for caching
🔧 Backend services (User & Data services)

For API testing, I used Postman since the project currently does not include a frontend.

⚙️ What’s next? I’ll enhance this project by:
- Improving service communication
- Adding CI/CD pipelines
- Deploying on cloud infrastructure (AWS) 🚀

#Docker #Microservices #DevOps #LearningInPublic #100DaysOfCode #CloudComputing #SoftwareEngineering #AWS #Containers #Automation
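For readers who want the shape of such a stack, here is a minimal compose sketch; the service names, images, and ports are my assumptions for illustration, not the repo's actual file.

```yaml
# Minimal multi-container sketch (assumed service names/ports, not the repo's file).
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7
  user-service:
    build: ./user-service      # each service ships its own Dockerfile
    depends_on: [postgres, redis]
    ports:
      - "8001:8000"
  data-service:
    build: ./data-service
    depends_on: [postgres, redis]
    ports:
      - "8002:8000"
volumes:
  pgdata:
```

docker compose up --build starts the whole stack, while each service can still be rebuilt and redeployed independently.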
Today I created MindTheGap, a lightweight proxy that lets you use the DeepSeek API directly inside the GitHub Copilot CLI. The best part? It fully preserves the <thinking> tags that are mandatory on certain providers like DeepSeek.

Without this proxy the GH Copilot CLI returns this (in)famous error:

✗ 400 The `reasoning_content` in the thinking mode must be passed back to the API.

Code and docs here: https://lnkd.in/d_TmaZKW

#GitHubCopilot
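The fix the error message demands looks roughly like this (an assumed shape for illustration; see the repo for the real proxy): when replaying conversation history to the provider, re-attach each assistant turn's reasoning_content instead of dropping it.

```python
# Sketch of the pass-back idea (illustrative; see the repo for the real proxy).
def restore_reasoning(history: list[dict], reasoning: dict[int, str]) -> list[dict]:
    """Re-inject previously returned reasoning_content into assistant messages
    before forwarding the request upstream."""
    out = []
    for i, msg in enumerate(history):
        msg = dict(msg)  # don't mutate the caller's messages
        if msg.get("role") == "assistant" and i in reasoning:
            msg["reasoning_content"] = reasoning[i]
        out.append(msg)
    return out
```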
9 Kubernetes objects. One cheat sheet 🚀

Whether you're prepping for the CKA, debugging a StatefulSet at 2 AM, or finally figuring out why your Ingress isn't routing, these are the core building blocks every K8s engineer must know cold.

Here's what this sheet actually covers (and why it matters):
• Workloads - Pods are ephemeral, Deployments handle stateless apps, StatefulSets give you stable identity for databases like MySQL, MongoDB, Kafka.
• Networking - Service (ClusterIP/NodePort/LoadBalancer), Ingress (note: Ingress-NGINX maintenance ends March 2026, plan your migration), and the Gateway API (v1.5 GA) which is the modern successor with role-oriented design and native traffic splitting.
• Configuration & Organization - ConfigMaps for non-sensitive config, Secrets (remember: base64 ≠ encryption, enable encryption at rest! See the example after this post), and Namespaces for RBAC boundaries and multi-team isolation.

Save it. Bookmark it. Share it.

But here's the truth: you won't learn Kubernetes by reading cheat sheets. You learn it by SSH'ing into a broken cluster, watching a pod CrashLoopBackOff, and fixing it yourself.

That's exactly what hands-on labs in our Kubernetes for the Absolute Beginners course give you:
✔️ Real browser-based terminals - no setup, no local install
✔️ Spin up Pods, break Deployments, debug Services
✔️ Real clusters you can mess up and reset instantly
✔️ CKA-aligned content for certification prep

Stop watching. Start kubectl'ing.
• Step 1: Sign up for free at kodekloud.com
• Step 2: Enroll in the Kubernetes course to start your first lab today 👉 https://kode.wiki/4tJO3jT

#Kubernetes #DevOps #CKA #HandsOnLearning #KodeKloud
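One concrete illustration of the Secrets point (an assumed manifest, not from the course): base64 is an encoding, not encryption, so anyone who can read the object can recover the value.

```yaml
# base64 != encryption: this value is trivially recoverable (illustrative manifest).
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  password: cGFzc3dvcmQxMjM=   # base64 of "password123"; decodes in one command
```

Hence the advice above: enable encryption at rest and lock Secrets down with RBAC.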
AI is killing GitHub 😔 😔

GitHub had 2 incidents recently due to too much code pushed to the platform because of AI:

• April 23 – Merge Queue Bug
A squash-merge bug reversed earlier commits
→ 658 repos, 2,092 PRs affected
→ No data lost, but repos broke

• April 27 – Search Outage
Elasticsearch cluster overloaded (likely bot traffic)
→ PRs & issues UI broke
→ Core APIs still worked

While the company initially planned for a 10x capacity increase starting in late 2025, the massive surge in "agentic development workflows" (AI-driven coding) requires them to scale by 30x. This exponential growth in PRs, API usage, and massive monorepos is compounding stress on GitHub's distributed systems.

GitHub has shifted its immediate priorities to Availability first, then Capacity, then New Features.

What they’re fixing:
• Moving webhooks out of MySQL
• Reducing DB load with better caching
• Scaling infra via Microsoft Azure
• Isolating critical systems (Git, Actions)
• Breaking Ruby monolith → Go services
• Moving toward multi-cloud

Tech blog in comments

#github #ai
Fireship just dropped “GitHub is having some major issues right now…”

The CVE was one piece. Merge queue silently reverted 2,092 PRs the same week. Three failures in five days… on a platform with no CEO.