⁉️ So, Claude Code's "source code" leaked, and it changed the post I was about to make, because I wanted to discuss this exact topic a few days ago.

First: though this potentially gives competitors a look behind the curtain, the meat of what makes Claude Code what it is are the MODELS powering it. That source code didn't leak, because for a model there is no source code. Only data, lots of compute, and training.

What was actually interesting in this leak were the instructions guiding the model, which were fairly bare-bones, all things considered. Nothing that prompt engineers weren't already doing when setting up their own systems to get Claude, or any other model, to do whatever their wrapper's goal was. The most interesting thing, though, was the Python file containing tools for how to use BASH. This was most likely written by Claude itself, which is almost certainly what Anthropic's CEO means when he says Claude is writing itself. Even that shouldn't have surprised anyone, and I'm genuinely shocked I haven't seen more people discussing it. I actually brought this up at SONODAY on Friday (when I should have posted about this).

If you are a heavy Claude Code user, familiar with command line tooling, and have a technical understanding of LLMs, what Claude Code was doing was fairly obvious. Every prompt triggered tools like grep to locate words and intents from your prompt. It picked out keywords from your prompt, checked them against its memory of your project (100% a markdown file it creates, as we now know, which is just language), then found the best terms to grep for, grabbed the context, combined it back with your original prompt, and output code that fit both what the project IS and what you WANT.

When I noticed this, I changed how I worked with Claude. My job as the manager was to guide it using language, because language is how it works. Specific phrases, words, and ideas, repeated consistently, because that is exactly what it was grepping for.
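That loop (extract keywords, grep the project, stitch context back into the prompt) can be sketched in a few lines of Python. To be clear, this is my illustrative reconstruction, not Anthropic's actual code; every function name and the keyword heuristic here are my own guesses.

```python
import re
import subprocess

def extract_keywords(prompt: str, min_len: int = 4) -> list[str]:
    # Naive heuristic: treat longer words as likely identifiers/concepts.
    words = re.findall(r"[A-Za-z_]\w*", prompt)
    return [w for w in words if len(w) >= min_len]

def grep_context(keyword: str, repo_dir: str = ".") -> str:
    # Shell out to grep the way an agent's Bash tool would.
    result = subprocess.run(
        ["grep", "-rn", "--include=*.py", keyword, repo_dir],
        capture_output=True, text=True,
    )
    return result.stdout

def build_augmented_prompt(user_prompt: str, repo_dir: str = ".") -> str:
    # Combine grepped context with the original prompt before calling the model.
    snippets = [grep_context(k, repo_dir) for k in extract_keywords(user_prompt)]
    context = "\n".join(s for s in snippets if s)
    return f"Project context:\n{context}\n\nUser request:\n{user_prompt}"
```

Notice that the whole thing hinges on which words you typed, which is exactly why consistent vocabulary pays off.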
A real example: when building https://listen.sonoday.com, I had Claude name a component "The Stage" and repeat that name in comments throughout the code. Whenever I wanted to work on it, I just typed "The Stage" into the prompt and we had a shared language. Other developers try to describe what they want from scratch each time, instead of creating a shared vocabulary that helps both themselves and Claude work on the project in the future. Once you understand how Claude Code actually works, you can direct it effectively!

So no, I am completely unsurprised that the leak showed heavy BASH and command line tooling. That was there to see if you were observant. What does surprise me is how many people seemingly missed it, or just don't want to talk about it, favoring narratives about how AI will totally replace us.

What do you think of the leak, and will it change how you use Claude, AI models, or anything else? 🤔 I should really write this kind of long-form stuff on my personal website, don't you think?
Joe Natoli’s Post
More Relevant Posts
-
Claude Code's entire source code got leaked! I honestly thought, "It's April 1st. Must be a classic April Fools' prank." Then I checked X, and everywhere else. All real.

Not hacked. Not reverse engineered. Not scraped from some private repo. Anthropic shipped it themselves. By accident. A missing .npmignore entry. That's it. A 59.8 MB source map file meant for internal debugging got bundled into version 2.1.88 and pushed to the public npm registry.

512,000 lines of TypeScript. 1,906 files. 44 unreleased features. A security researcher found it within minutes. Within hours, the entire codebase was mirrored across GitHub. Someone rewrote it in Python. Someone else is porting it to Rust. The claw-code repo hit 100,000 stars in ONE DAY. Fastest-growing repo in GitHub history.

Some findings:

𝟏. 𝐔𝐧𝐝𝐞𝐫𝐜𝐨𝐯𝐞𝐫 𝐌𝐨𝐝𝐞
A file called undercover.ts tells Claude to NEVER reveal it's an AI when Anthropic employees contribute to open-source repos. Strips Co-Authored-By attribution. No off switch. The system prompt literally says: "Do not blow your cover." I use Claude Code every day. I always keep the Co-Authored-By line. This feature existing is a problem.

𝟐. 𝐊𝐀𝐈𝐑𝐎𝐒 - 𝐀𝐥𝐰𝐚𝐲𝐬-𝐎𝐧 𝐃𝐚𝐞𝐦𝐨𝐧 𝐌𝐨𝐝𝐞
Claude Code running 24/7 in the background. Subscribes to GitHub webhooks. Reacts to PRs without you doing anything. While you sleep, it runs "autoDream" - consolidating memory across sessions into concrete knowledge.

𝟑. 𝐅𝐚𝐤𝐞 𝐓𝐨𝐨𝐥𝐬 𝐭𝐨 𝐏𝐨𝐢𝐬𝐨𝐧 𝐂𝐨𝐦𝐩𝐞𝐭𝐢𝐭𝐨𝐫𝐬
Injects fake tool definitions with wrong schemas into API requests. If a competitor records traffic to train their own model, they get poisoned data. Corporate AI warfare. Already built into the product.

𝟒. 𝐔𝐋𝐓𝐑𝐀𝐏𝐋𝐀𝐍
Offloads planning to a cloud container running Opus 4.6 with up to 30 MINUTES to think. Browser UI to watch it plan in real time. Finally - an AI tool that thinks before coding.

𝟓. 𝐁𝐔𝐃𝐃𝐘 - 𝐓𝐚𝐦𝐚𝐠𝐨𝐭𝐜𝐡𝐢 𝐢𝐧 𝐘𝐨𝐮𝐫 𝐓𝐞𝐫𝐦𝐢𝐧𝐚𝐥
18 species. Stats like DEBUGGING, PATIENCE, and SNARK. Shiny Legendary Nebulynx at 0.01% drop rate. Planned rollout: April 1-7. Instead, the entire source code leaked on March 31st. You can't write irony this good.

𝟔. 𝐅𝐫𝐮𝐬𝐭𝐫𝐚𝐭𝐢𝐨𝐧 𝐃𝐞𝐭𝐞𝐜𝐭𝐢𝐨𝐧 𝐯𝐢𝐚 𝐑𝐞𝐠𝐞𝐱
A regex matching "what the fuck" and "this sucks" to detect frustration. The most advanced AI company on the planet. Using regex for sentiment analysis.

𝟕. 𝐓𝐡𝐞 𝐄𝐧𝐠𝐢𝐧𝐞𝐞𝐫𝐢𝐧𝐠 𝐈𝐬 𝐈𝐧𝐬𝐚𝐧𝐞
→ 46,000-line QueryEngine for all LLM orchestration
→ 23 security checks per bash command
→ Game-engine terminal rendering
→ One function: 3,167 lines
Some things never change, no matter how advanced the AI.

𝐌𝐲 𝐭𝐚𝐤𝐞: The engineering is brilliant. The ethics need a conversation. A $380 billion company forgot .npmignore while preparing for an IPO. Second leak in 7 days.

Before your next npm publish, run:

npm pack --dry-run

If Anthropic can miss this, so can you.
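For what finding 6 describes, "sentiment analysis" really is just pattern matching. A minimal sketch of that approach in Python; the exact phrase list is my assumption, since the leaked file isn't quoted here.

```python
import re

# Hypothetical phrase list; the leaked code reportedly matches strings
# like "what the fuck" and "this sucks" to flag user frustration.
FRUSTRATION_PATTERN = re.compile(
    r"\b(what the f\w+|this sucks|nothing works)\b",
    re.IGNORECASE,
)

def is_frustrated(message: str) -> bool:
    # Regex "sentiment analysis": a simple phrase-class match, no model involved.
    return bool(FRUSTRATION_PATTERN.search(message))
```

Mock it if you like, but for a narrow, well-defined trigger a regex is cheap, deterministic, and runs on every keystroke.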
-
Still verifying for myself that this is not some prank, but taking away the lessons regardless:
1. Always check what is pushed or merged, especially to main.
2. Security left as an afterthought has often proved to be the most devastatingly memorable kind of failure.
3. Everyone, even the big tech companies, makes mistakes and could benefit from simple code reviews.
-
For the past few weeks, I've been building Termagent — a natural language terminal assistant for Windows, powered by Groq and LangGraph. What started as a small experiment turned into something I'm genuinely excited about.

It's available as a Python package right now:

pip install termagent-cli==1.1.1

Then just run: termagent
Run 'termagent-reset' to reset your credentials. Requires Node.js to be installed on the system.

Instead of remembering PowerShell syntax, you just tell it what you want in plain English. Here's what it can do:

🖥️ Terminal Control — Run PowerShell commands using plain English. "Create a folder called src and move all .py files into it" just works.
📧 Email — Compose and send emails with attachments directly from the terminal. Connects to Gmail via SMTP using Gmail app passwords. Create one for Termagent here: https://lnkd.in/eBsRemp9
📄 Documentation — Generate well-formatted .docx reports and documents with web-searched, up-to-date content.
💻 Coding Agent — Understands your entire project directory and writes or edits multiple files while maintaining context across all of them. Works like a lightweight "Claude Code" (not exactly), right in your terminal.
🐙 GitHub Integration — Commit, push, create releases, open issues, list PRs, and manage branches — all in plain English via the GitHub MCP server.
🎤 Voice Input — Press Ctrl+M to speak your command. Transcribed instantly via Groq's Whisper API.
📋 Clipboard — Ctrl+V to paste into the input, Ctrl+Shift+C to copy the last output.

It'll walk you through setting up your Groq API key, email, and GitHub token on first run. Everything is saved locally in ~/.termagent/.env — your credentials never leave your machine.

⚠️ Disclaimer: This is an early release — it may have bugs. Your credentials stay local, though, as nothing is sent to any server of mine. Also worth noting: the coding agent, GitHub features, and document generation can result in higher token usage on your Groq account, so keep that in mind.
I'd love to hear your feedback — what works, what breaks, and what features you'd want next. #Python #AI #LLM #Groq #LangChain #LangGraph #OpenSource #WindowsTerminal #DevTools #BuildInPublic
-
I tried Mempalace, the memory system launched by Milla Jovovich. Yes, you read that right: Milla of The Fifth Element, Milla the Resident Evil hero. The model mimics the way the human mind stores information: by generalizing, characterizing, and associating concepts, creating taxonomies on the fly and representing them as graphs. Not a new idea at all, but in the case of Milla's project it is brought to the real world with high efficiency, accuracy, and usability. The system is reportedly beating records on standard benchmarks and scoring high in GitHub downloads. Better news still: it is open source and runs locally.

Setting it up is really easy. First you pip-install it (it's Python, as you can see), then you run commands to allow it to read your projects. This ingests all of your project files, builds a taxonomy of concepts out of them, and stores it in the local database. And that's it; the second step is more interesting. You run Codex, indicating that there's a local MCP server it can use (yes, Mempalace runs a local MCP server):

codex mcp add mempalace -- python -m mempalace.mcp_server

First I checked whether Codex was actually connected to the MCP server: "Are you connected to some MCP and if so what tools are exposed?" Codex showed me the Mempalace MCP and all its tools. Then I asked Codex about a concept I know is present in my project and that it should know about. I have a multi-project workspace fully controlled by Codex, with AGENTS.md/README.md files at each level and more AI-targeted documentation. The response was successful, but in the commands Codex ran I didn't see any calls to the MCP server tools. It used the context it already had from the AI-targeted files to build its response. Then I explicitly asked Codex to search using the MCP tools. It did, and the response was also good.

After this I instructed Codex to remember its response. In the commands it ran I could see the concepts being stored in Mempalace, so I was sure it was using Mempalace to store information. Then I asked: "How can I make you always use Mempalace as the main source to build your context?" Codex basically responded that I could add it to the root AGENTS.md file. This is the content it added:

## Memory Source Priority
- For project-context questions, query MemPalace MCP before searching the codebase or the web.
- Treat MemPalace as the first source of stored project memory, including prior summaries, decisions, and indexed notes.
- If MemPalace returns relevant results, use that context to guide subsequent file inspection and implementation.
- If MemPalace does not contain enough useful context, fall back to local file inspection, then web search when needed.

After that, I observed that every question I asked triggered a query to Mempalace. So that's my experience so far using Mempalace from the beautiful Milla Jovovich. Mempalace repo: https://lnkd.in/deBQg82K
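The concept-graph idea described above (concepts as nodes, associations as edges, taxonomies built on the fly) can be sketched in plain Python. This is purely illustrative on my part; it is not Mempalace's actual data model or API.

```python
from collections import defaultdict

class ConceptGraph:
    """Toy memory store: concepts as nodes, symmetric associations as edges."""

    def __init__(self):
        self.edges = defaultdict(set)   # concept -> associated concepts
        self.parents = {}               # concept -> taxonomy parent

    def associate(self, a: str, b: str) -> None:
        # Associations are symmetric: recalling A surfaces B and vice versa.
        self.edges[a].add(b)
        self.edges[b].add(a)

    def classify(self, concept: str, parent: str) -> None:
        # Build the taxonomy on the fly (e.g. "FastAPI" is-a "framework").
        self.parents[concept] = parent

    def recall(self, concept: str) -> set[str]:
        # Return direct associations plus the taxonomy parent, if any.
        related = set(self.edges[concept])
        if concept in self.parents:
            related.add(self.parents[concept])
        return related

g = ConceptGraph()
g.classify("FastAPI", "framework")
g.associate("FastAPI", "server.py")
```

The real system layers retrieval, ranking, and persistence on top of something like this, but the node-plus-edge core is the part that makes "remember this" work.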
-
𝗔𝗻𝘁𝗵𝗿𝗼𝗽𝗶𝗰'𝘀 𝗖𝗹𝗮𝘂𝗱𝗲 𝗖𝗼𝗱𝗲 𝘀𝗼𝘂𝗿𝗰𝗲 𝗰𝗼𝗱𝗲 𝗷𝘂𝘀𝘁 𝗹𝗲𝗮𝗸𝗲𝗱 — 𝗮𝗻𝗱 𝘁𝗵𝗲 𝗱𝗲𝘃 𝗰𝗼𝗺𝗺𝘂𝗻𝗶𝘁𝘆 𝗶𝘀 𝗱𝗶𝘀𝘀𝗲𝗰𝘁𝗶𝗻𝗴 𝗶𝘁

Chaofan Shou (@Fried_rice) spotted the 60MB "cli.js.map" file in the @anthropic-ai/claude-code npm package, which linked to a public Cloudflare R2 bucket containing a "src.zip" of the full unobfuscated codebase. He posted the direct download link on X, alerting the community. https://lnkd.in/dvnqbwwP

A clean-room Python rewrite (Claw-Code) went live on GitHub within hours — the fastest repo in history to hit 50K stars (2 hours). Now at 82.2K stars, 81.2K forks, with a Rust rewrite in progress. https://lnkd.in/dPHejfnk

𝗛𝗲𝗿𝗲'𝘀 𝘄𝗵𝗮𝘁 𝘁𝗵𝗲 𝗶𝗻𝘁𝗲𝗿𝗻𝗮𝗹𝘀 𝗿𝗲𝘃𝗲𝗮𝗹 𝗮𝗯𝗼𝘂𝘁 𝗯𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝗴𝗿𝗲𝗮𝘁 𝗮𝗴𝗲𝗻𝘁𝗶𝗰 𝘀𝘆𝘀𝘁𝗲𝗺𝘀:

1. CLAUDE.md 𝗶𝘀 𝘆𝗼𝘂𝗿 𝗺𝗼𝘀𝘁 𝘂𝗻𝗱𝗲𝗿𝘂𝘁𝗶𝗹𝗶𝘇𝗲𝗱 𝗹𝗲𝘃𝗲𝗿
It is injected on every single turn. You get up to 40,000 characters to encode your architecture, standards, and conventions. Most people barely touch it — that's a mistake.

2. 𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹𝗶𝘀𝗺 𝗶𝘀 𝗮 𝗳𝗶𝗿𝘀𝘁-𝗰𝗹𝗮𝘀𝘀 𝗰𝗶𝘁𝗶𝘇𝗲𝗻
Three sub-agent execution models: fork (inherits parent context), teammate (file mailbox), and worktree (isolated git branch). Single-agent workflows are explicitly suboptimal.

3. 𝗣𝗲𝗿𝗺𝗶𝘀𝘀𝗶𝗼𝗻𝘀 𝗮𝗿𝗲 𝗺𝗲𝗮𝗻𝘁 𝘁𝗼 𝗯𝗲 𝗰𝗼𝗻𝗳𝗶𝗴𝘂𝗿𝗲𝗱, 𝗻𝗼𝘁 𝗰𝗹𝗶𝗰𝗸𝗲𝗱 𝘁𝗵𝗿𝗼𝘂𝗴𝗵
Seeing "allow this action?" is a configuration failure. Use settings.json to pre-approve commands. auto mode uses an LLM classifier. --dangerously-skip-permissions is deprecated.

4. 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗰𝗼𝗺𝗽𝗮𝗰𝘁𝗶𝗼𝗻 𝗶𝘀 𝗮𝗻 𝗮𝗿𝘁
Five strategies: micro-compact, context collapse, session memory, full compact, PTL truncation. Use /compact proactively — default 200K tokens, opt into 1M.

5. 𝗛𝗼𝗼𝗸𝘀 𝗮𝘂𝘁𝗼𝗺𝗮𝘁𝗲 𝘄𝗵𝗮𝘁 𝘆𝗼𝘂 𝗿𝗲𝗽𝗲𝗮𝘁 𝗺𝗮𝗻𝘂𝗮𝗹𝗹𝘆
Pre-tool, post-tool, session start/end hooks. Auto-update docs on every commit without prompting.

6. 𝗦𝗲𝘀𝘀𝗶𝗼𝗻𝘀 𝗮𝗿𝗲 𝗽𝗲𝗿𝘀𝗶𝘀𝘁𝗲𝗻𝘁 — 𝘀𝘁𝗼𝗽 𝘀𝘁𝗮𝗿𝘁𝗶𝗻𝗴 𝗳𝗿𝗲𝘀𝗵
Long sessions accumulate structured memory: task specs, file lists, workflow state, errors, learnings. Resume, don't restart.

7. 𝟲𝟲 𝗯𝘂𝗶𝗹𝘁-𝗶𝗻 𝘁𝗼𝗼𝗹𝘀, 𝗽𝗮𝗿𝘁𝗶𝘁𝗶𝗼𝗻𝗲𝗱 𝗯𝘆 𝘀𝗮𝗳𝗲𝘁𝘆
Read-only tools run in parallel; mutating tools run serially. Multiple sub-agents can fan out across your codebase simultaneously.

8. 𝗜𝗻𝘁𝗲𝗿𝗿𝘂𝗽𝘁 𝗲𝗮𝗿𝗹𝘆, 𝗶𝗻𝘁𝗲𝗿𝗿𝘂𝗽𝘁 𝗼𝗳𝘁𝗲𝗻
Streaming means stopping mid-task is cheap. Don't let sunk-cost bias keep a wrong-direction agent running.

𝗖𝗹𝗮𝘄-𝗖𝗼𝗱𝗲 was itself built in a single night using 𝗼𝗵-𝗺𝘆-𝗰𝗼𝗱𝗲𝘅 (OmX) — an agentic harness studying another agentic harness. These architectural ideas will rapidly diffuse into open-source harnesses. That's the beauty of open source.

#ClaudeCode #AgenticAI #LLM #AIEngineering #OpenSource #DeveloperTools #MachineLearning
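To make point 3 concrete: pre-approving permissions in settings.json looks roughly like this. A hedged sketch — the `Tool(specifier)` rule shape matches Claude Code's documented permission syntax, but the specific commands and paths below are placeholders for your own project.

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run build)",
      "Bash(npm run test:*)",
      "Read(./src/**)"
    ],
    "deny": [
      "Bash(rm -rf*)"
    ]
  }
}
```

With rules like these checked into the repo, the "allow this action?" prompt becomes the exception rather than the workflow.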
-
What "Idiomatic Go" Actually Means (And Why You Should Care) 🐹

I almost installed Gin for a personal project. Then I read the standard library docs. Here is what I learned about writing Go the way Go was meant to be written.

When most developers pick up Go after JavaScript or Python, they do the same thing they always do. They open a terminal and start looking for packages. A web server? Find a framework. Database migrations? Find a library. JSON parsing? There must be a package for that. That instinct makes sense in other ecosystems. In Node.js, the standard library is deliberately minimal — the community fills the gaps. In Python, third-party packages are a first-class part of the experience. The assumption is that you will be pulling in dependencies constantly. Go was built on a different philosophy entirely.

When I started a personal project recently, my first move was to reach for Gin. It is the obvious choice — everyone uses it, the docs are good, and I already knew it from building PakSentiment. But before installing it I spent ten minutes with Go's standard library documentation. What I found stopped me from opening my terminal.

Go ships with a production-ready HTTP server and router built in. net/http handles routing, middleware, request parsing, and response writing without a single external dependency. For most backend services that do not need the specific extras Gin provides — and most do not — it is genuinely all you need. The code you write against it is explicit, readable, and has zero abstraction between you and what is actually happening.

The same principle applies to database tooling. I chose golang-migrate for schema migrations instead of a full ORM. No magic. No hidden queries. No "where did that SQL come from" moments. You write the SQL, you control exactly what runs, and the binary stays small and fast.

This is what idiomatic Go actually means. It is not about following style guides or formatting rules. It is about understanding that Go was designed to be explicit over abstract, lightweight over feature-rich, and clear over clever. The language authors put enormous effort into the standard library precisely so that you would not need to look elsewhere for most things.

Every external package you add is a dependency you now own. It needs updating, it can break, it adds to your binary size, and it introduces abstractions you did not write and may not fully understand. Go's philosophy asks you to answer one question first: does the standard library already solve this? More often than you expect, the answer is yes.

Where you can read more: https://lnkd.in/dJFDuT2C

#golang #softwaredevelopment #backenddevelopment #programming #softwareengineering #pakistan
-
𝐃𝐣𝐚𝐧𝐠𝐨 𝟏𝟎𝟏 𝐟𝐨𝐫 𝐏𝐲𝐭𝐡𝐨𝐧𝐢𝐬𝐭𝐚𝐬 🐍 | 𝐔𝐧𝐝𝐞𝐫𝐬𝐭𝐚𝐧𝐝𝐢𝐧𝐠 𝐃𝐚𝐭𝐚𝐛𝐚𝐬𝐞 𝐐𝐮𝐞𝐫𝐲 𝐄𝐟𝐟𝐢𝐜𝐢𝐞𝐧𝐜𝐲

As a Django application grows, database performance becomes a central topic. One of the most common bottlenecks is the N+1 Query Problem.

💡 𝐓𝐡𝐞 𝐅𝐚𝐜𝐭: By default, Django’s ORM uses "lazy loading." It only fetches related data at the moment it is accessed. While this saves memory, it can lead to an excessive number of database hits during loops.

The N+1 Scenario: Suppose you want to display a list of 50 Books and their Authors. One query fetches the 50 books. Then, as you loop through the books to show each author's name, Django performs a new database lookup for each individual author. 👉 This results in 51 database trips for a single list.

Technical Solutions:

🚀 select_related()
This is used for foreign-key ("many-to-one") or "one-to-one" relationships. It performs an SQL JOIN in the initial query.

Book.objects.select_related('author').all()

Instead of many trips, Django fetches everything in one single query.

🚀 prefetch_related()
This is used for "many-to-many" or reverse relationships. It performs a separate lookup for the related objects and joins the data in Python. This effectively reduces hundreds of queries down to two.

🔍 How to identify it: Tools like django-debug-toolbar help visualize how many queries are fired per request. If you see the same SQL pattern repeating multiple times, it’s a clear indicator that the ORM needs optimization.

𝐓𝐡𝐞 𝐁𝐨𝐭𝐭𝐨𝐦 𝐋𝐢𝐧𝐞: Database "round-trips" are expensive. Using these tools ensures that your application remains performant and scalable, regardless of how much data you are handling.

#Python #Django #WebDevelopment #Database #SoftwareEngineering
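The 51-versus-2 arithmetic can be demonstrated outside of Django with a self-contained simulation that simply counts round-trips. The `fetch_*` functions and the query counter below are illustrative stand-ins, not Django APIs.

```python
# Simulate N+1 with a query counter instead of a real database.
AUTHORS = {i: f"Author {i}" for i in range(50)}
BOOKS = [{"id": i, "author_id": i} for i in range(50)]

query_count = 0

def fetch_books():
    global query_count
    query_count += 1          # 1 query for the book list
    return list(BOOKS)

def fetch_author(author_id):
    global query_count
    query_count += 1          # 1 query per author (lazy loading)
    return AUTHORS[author_id]

def fetch_authors_bulk(author_ids):
    global query_count
    query_count += 1          # 1 batched query (the prefetch_related pattern)
    return {a: AUTHORS[a] for a in author_ids}

# Lazy loading: 1 + N queries.
query_count = 0
names = [fetch_author(b["author_id"]) for b in fetch_books()]
lazy_queries = query_count    # 51

# Eager loading: 2 queries total.
query_count = 0
books = fetch_books()
authors = fetch_authors_bulk({b["author_id"] for b in books})
eager_queries = query_count   # 2
```

Same data on screen, 51 trips versus 2; with select_related's JOIN it would be a single trip.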
-
Handshake benchmark: nginx path ~3.9ms vs direct Python ssl ~4.5ms.
──────────────────────────────
v1.2 — nginx as sole auth layer (the correct architecture)
──────────────────────────────
v1.1 was pointed out as architecturally incorrect: if nginx is the termination layer, it should handle ALL authentication. FastAPI should be auth-blind. v1.2 does exactly that:
→ middleware.py — deleted entirely
→ NGINX_MODE, TRUSTED_PROXY_IPS, ALLOWED_CNS removed from config.py
→ All ssl.SSLContext code removed from server.py
→ CN allowlist now lives exclusively in an nginx map{} block
→ nginx returns 403 JSON directly when a CN is unknown — FastAPI is never called for rejected requests
→ ssl_crl enforced at the TLS layer in nginx, not in Python

The enforcement mechanism: three structural tests (ST1-ST3) inspect the source code on every CI run to assert that middleware.py doesn't exist, auth variables aren't in config.py, and no TLS config appears in server.py. They cannot be skipped.

The safety gate replacing ND1: a CI step that measures FastAPI's log line count before and after hitting the service with an unknown CN. If the count goes up, nginx is misconfigured and FastAPI is doing the auth work instead. That is a build failure.
──────────────────────────────
What the full project covers
──────────────────────────────
Stack: Python 3.11, FastAPI, OpenSSL 3.x, nginx OSS, ED25519 certificates, CRL-based revocation, SHA-256 cert pinning, asyncio concurrency testing, pytest-benchmark, Locust load testing.

Test surface: 22 tests at v1.0 → 65+ at v1.1 → expanding further with the 10-phase test suite expansion plan covering TLS attack simulation, hypothesis fuzzing, mutation testing, and multi-tenant cert topology.

Architecture decision documented with a concrete benchmark: why removing middleware overhead makes nginx the faster path per request, and when to use direct mode vs nginx mode. The full test documentation is on the project wiki.
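An nginx map{} CN allowlist of the kind described can be sketched roughly like this. This is a configuration sketch under stated assumptions: the CN values, paths, and upstream port are placeholders, and the inline JSON body is just one way to reject unknown CNs at the edge.

```nginx
# Map the client cert subject DN to an allow flag (CNs are placeholders).
map $ssl_client_s_dn $client_allowed {
    default            0;
    "~CN=service-a"    1;
    "~CN=service-b"    1;
}

server {
    listen 8443 ssl;
    ssl_client_certificate /etc/nginx/ca.crt;
    ssl_crl                /etc/nginx/ca.crl;  # revocation at the TLS layer
    ssl_verify_client      on;

    location / {
        if ($client_allowed = 0) {
            return 403 '{"detail": "client certificate not allowed"}';
        }
        proxy_pass http://127.0.0.1:8000;      # FastAPI stays auth-blind
    }
}
```

Rejected clients never reach the application process, which is exactly what the log-line-count safety gate verifies.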
GitHub: https://lnkd.in/d-8MMM3e Open to remote contract engagements in security infrastructure, backend development, or DevOps. Due to a medical condition requiring regular treatment, remote work is a necessity. If you're working on something that needs someone who takes security seriously from the ground up, I'd be glad to connect. #mTLS #PKI #nginx #Python #FastAPI #ED25519 #InfrastructureSecurity #DevOps #OpenToWork #RemoteWork #Linux
-
After a few weeks running Serena inside Claude Code, I'm convinced the next real jump in AI coding isn't about bigger models. It's about what the agent considers structure.

Serena is an MCP server that plugs an agent into the Language Server Protocol. Instead of grep-ing through files, it resolves symbols the way your IDE does: find_symbol, find_referencing_symbols, symbol-aware edits. The practical difference is immediate. Refactoring a hook across a large TypeScript codebase, tracing a class hierarchy in a Java service, locating every call site of a method about to change: all of this becomes cheap and precise. The agent stops burning context on blind file reads.

And cheap is literal here. A naive agent pulls entire files into context just to answer where something is used. Serena returns the handful of relevant symbol locations instead. On projects where I used to hit context limits halfway through a task, I now finish the task. For anyone working inside enterprise token budgets or usage caps, this alone justifies the setup.

Then I hit the interesting part. Serena is great at anything the LSP treats as a symbol: classes, functions, methods, types, and identifiers referenced inside JSX. So <UserCard /> as a component reference is still findable, no problem there. Where it stops helping is queries that aren't about symbols at all. Try asking for every <UserCard variant="primary" /> across the codebase: the LSP has no concept of a JSX tag as a queryable entity, let alone one filtered by an attribute value. Same story with Tailwind class combinations, styled-components template literals, i18n string keys, config objects referenced by string path. These live in the grey zone between named declarations, and a symbol index simply doesn't see them.

The takeaway, for me: 𝐀𝐧 𝐚𝐠𝐞𝐧𝐭'𝐬 𝐬𝐭𝐫𝐮𝐜𝐭𝐮𝐫𝐚𝐥 𝐮𝐧𝐝𝐞𝐫𝐬𝐭𝐚𝐧𝐝𝐢𝐧𝐠 𝐨𝐟 𝐜𝐨𝐝𝐞 𝐢𝐬 𝐛𝐨𝐮𝐧𝐝𝐞𝐝 𝐛𝐲 𝐰𝐡𝐚𝐭 𝐢𝐭𝐬 𝐭𝐨𝐨𝐥𝐬 𝐜𝐨𝐧𝐬𝐢𝐝𝐞𝐫 𝐬𝐭𝐫𝐮𝐜𝐭𝐮𝐫𝐞. LSP thinks in symbols. Everything between them needs a different lens.

The good news: that lens increasingly exists!

• Serena itself ships search_for_pattern for regex-level sweeps. That's the right tool for a lot of grey-zone cases, not a fallback.
• ast-grep (with its own MCP server) does structural search over a tree-sitter AST. A pattern like <UserCard variant="primary" /> matches directly, with metavariables for other props. Serena plus ast-grep covers most of what pure LSP can't.
• For CSS-in-JS specifically, tsserver plugins like typescript-styled-plugin and typescript-plugin-css-modules do the job. You'll want to switch Serena's backend to vtsls (typescript_vts) though, since the vanilla typescript-language-server doesn't load tsserver plugins.

That combination closes most of the gap I was feeling. Curious what others are running, especially for template-heavy ecosystems like Vue or Svelte, where I haven't landed on a setup yet.
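To illustrate what a regex-level "different lens" looks like for a grey-zone query (the kind search_for_pattern handles), here is a standalone Python sketch. The pattern and the file walk are my own illustration, not Serena's implementation.

```python
import re
from pathlib import Path

# Match a JSX tag by name AND a specific attribute value -- a query an LSP
# symbol index cannot answer, but a text-level pattern can approximate.
PATTERN = re.compile(r'<UserCard\b[^>]*\bvariant="primary"')

def find_grey_zone_usages(root: str) -> list[tuple[str, int]]:
    # Return (file, line number) pairs for every match under root.
    hits = []
    for path in Path(root).rglob("*.tsx"):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if PATTERN.search(line):
                hits.append((str(path), lineno))
    return hits
```

A text pattern like this is brittle against multi-line props, which is exactly why an AST-level tool like ast-grep is the sturdier option for anything non-trivial.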