Why your "perfect" code is returning a 404 (and it's not a typo). 🛑

I've been heads-down building SprintSync AI, an automated engine that translates raw Git diffs into high-level sprint updates for teams. Yesterday I hit a wall every dev knows: the code is correct, the logic is sound, but the API says "I don't exist."

I was fetching code comparisons from private repos via the GitHub API. First I hit ENOSPC because Next.js was generating more cache than my disk could hold. Then I hit the 404/403 loop.

The lesson: with GitHub's new fine-grained tokens, a 404 doesn't always mean "Not Found." Often it's a security 404: GitHub hides the resource because it doesn't think you have the right to know it exists.

How I solved it:
1. Cleaned the pipes: flushed the .next cache and pruned my Docker images to give the compiler room to breathe.
2. Permission pivot: traded the finicky fine-grained token for a classic PAT with scoped repo access.
3. The "Bearer" fix: made sure my headers used the exact authorization syntax GitHub expects.

The result: SprintSync AI now pulls real-time, authenticated code changes into a clean AI-summarized dashboard.

If you're building with the GitHub API, don't let a 404 gaslight you. Check your token scopes first!

#MicroSaaS #NextJS #GitHubAPI
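For reference, a minimal sketch of the "Bearer" fix described above, using only the Python standard library. The function names are illustrative, not SprintSync's actual code; the header values follow GitHub's documented REST conventions.

```python
import json
import urllib.error
import urllib.request

def github_headers(token: str) -> dict:
    # "Bearer <token>" is the syntax GitHub's REST docs recommend;
    # "token <PAT>" also works for classic personal access tokens.
    return {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
        "X-GitHub-Api-Version": "2022-11-28",
    }

def compare_commits(owner: str, repo: str, base: str, head: str, token: str) -> dict:
    # Compare two refs on a (possibly private) repo. A 404 here can be a
    # "security 404": the repo exists, but the token lacks access to it.
    url = f"https://api.github.com/repos/{owner}/{repo}/compare/{base}...{head}"
    req = urllib.request.Request(url, headers=github_headers(token))
    try:
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())
    except urllib.error.HTTPError as err:
        if err.code == 404:
            raise PermissionError(
                "404 from GitHub: check the token's scopes (classic 'repo' "
                "scope, or Contents read access for fine-grained tokens) "
                "before assuming the repo is missing."
            ) from err
        raise
```

Translating the 404 into a PermissionError is the point: it stops the "Not Found" wording from gaslighting whoever reads the traceback.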
GitHub API 404 Fix: Security Tokens and Permissions
More Relevant Posts
-
The Claude Code Leak: 512,000 Lines of TypeScript

On March 31, 2026, a routine npm update accidentally exposed nearly 1,900 TypeScript files (over 500,000 lines) from Anthropic's Claude CLI, version 2.1.88. A large source map file was published, linking the compressed code back to the full source. Although Anthropic quickly fixed the issue, the code had already been mirrored on public sites.

No model weights, training data, or user info were leaked, just the client-side orchestration layer. The root cause was a default source map from Bun and a missing .npmignore rule.

The leaked code reveals features like an "undercover mode" for protecting internal info, advanced agent coordination for complex tasks, and other features for background operations and proactive monitoring.

GitHub link for the leaked code: https://lnkd.in/gZYDDseT
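For anyone publishing npm packages, the simplest guard against this class of leak is an explicit allowlist rather than an ignore list. A sketch (the package name and paths below are placeholders):

```json
{
  "name": "example-cli",
  "files": [
    "dist/**/*.js"
  ]
}
```

With a "files" field in package.json, generated .map files never ship even if the bundler emits them next to your bundles; an `*.map` line in .npmignore achieves the same effect, but an allowlist fails safe when a new build artifact appears.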
-
Sharing a side project I've been building: Tokenomy, an open-source toolkit that reduces your token usage in Claude Code and Codex CLI over time. It's published as an npm package, so getting started is a single command: no cloning, no build step.

The problem it solves: my last long coding session burned through a massive chunk of my context window before the agent did any real work. Bloated Jira JSON. The same file read multiple times. Stack traces that were mostly noise.

Tokenomy plugs into the hook contracts Anthropic and OpenAI already expose and does surgical work:
→ Trims bloated MCP responses (Atlassian, Linear, Slack, Gmail, GitHub)
→ Dedupes repeat tool calls in the same session
→ Redacts secrets before they hit the model
→ Clamps unbounded Read calls on huge files
→ Ships a code-graph MCP server so the agent stops brute-force reading your repo

Every trim is logged with measured bytes-in / bytes-out. The savings are provable, and they compound across sessions.

TypeScript, Node 20+, 161 tests green, zero runtime deps in the hot path. Looking for contributors; good first issues are genuinely one-file drop-ins.

npm: https://lnkd.in/gQhCqTNt
GitHub: https://lnkd.in/g2HJuuBp

#ClaudeCode #MCP #OpenSource #AI #Token #Claude #Codex #CLI #Tokenomy #Agents #Typescript #Node #npm
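Tokenomy itself is TypeScript, but the redaction step it describes is easy to illustrate. A minimal Python sketch, with example patterns that are my own placeholders, not Tokenomy's actual rules:

```python
import re

# Example patterns only; a real redactor would cover many more formats.
SECRET_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                  # GitHub classic PAT
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                  # OpenAI-style API key
    re.compile(r"(?i)aws_secret_access_key\s*=\s*\S+"),  # AWS credential line
]

def redact(text: str, mask: str = "[REDACTED]") -> str:
    """Replace anything matching a secret pattern before it reaches the model."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub(mask, text)
    return text
```

Running this inside a pre-request hook means a pasted config file or log never ships a live credential to the model, which matters doubly when traffic may be logged.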
-
I just deleted 179 GitHub repos in 15 minutes with Copilot CLI (yes, on purpose).

Like most developers, I had years of accumulated repos. Forks from 2015. Test projects named catpawtest4. Angular 2 demos that haven't seen a commit in nearly a decade. 300 repos total. It was time.

I used GitHub Copilot in the CLI to:
- Inventory all 300 repos with metadata (language, stars, last push, fork status)
- Categorize them into Keep, Update, and Delete
- 👀 Walk me through the delete candidates 10 at a time for review
- Parallel-delete 179 confirmed repos in ~2 minutes
- 🔄 Sync every remaining fork to its upstream

The part that stood out: I stayed in control the entire time. Copilot didn't just suggest; it built a workflow. Paginated review, tracked my decisions in a database, asked for auth scope when needed, and executed deletions 10 at a time.

This is what AI-assisted developer tooling looks like in practice. Not replacing judgment, amplifying it. I made every keep/delete decision. Copilot handled the tedious parts at scale.

RIP to a decade of "I'll clean this up later." 🧹

#GitHubCopilot #DeveloperProductivity #AI #GitHub #DevEx
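The categorization step above is really just a rule over repo metadata. A hypothetical sketch (the field names and three-year threshold are mine, not Copilot's or any API's schema):

```python
from datetime import datetime, timedelta, timezone

# Illustrative staleness threshold; tune to taste.
STALE_AFTER = timedelta(days=3 * 365)

def categorize(repo: dict, now: datetime) -> str:
    """Sort a repo into Keep / Update / Delete from basic metadata.

    `repo` carries 'is_fork', 'stars', and 'last_push' (a datetime);
    these keys are placeholders for whatever your inventory step returns.
    """
    stale = now - repo["last_push"] > STALE_AFTER
    if repo["is_fork"] and stale:
        return "Delete"   # abandoned fork: upstream has moved on without you
    if stale and repo["stars"] == 0:
        return "Delete"   # nobody watching, nobody pushing
    if repo["is_fork"]:
        return "Update"   # live fork: sync it with upstream
    return "Keep"
```

The human-in-the-loop part, reviewing delete candidates 10 at a time, is exactly what a deterministic rule like this cannot replace; it only produces the candidate list.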
-
If you are still using standard Nix flakes or tools like niv and npins for input pinning, you are seriously missing out! 🚨

While Nix flakes brought us a long way toward reproducibility, they have hidden limitations that slow down development and limit your freedom. Do you enjoy redownloading massive tarballs (like a 45GB AI model) just to check if they've updated? What happens when GitHub goes down and your inputs have no mirror fallback? What if a project uses Fossil, Pijul, or Darcs instead of Git?

The latest blog post by toastal on Nixcademy introduces a game-changer: Nixtamal. Nixtamal is the missing piece of your Nix setup, providing features that standard flakes and existing tools simply don't have:

🚀 Custom freshness commands: define how to check whether an input is stale using simple shell commands (e.g., checking an API). No more wasteful multi-gigabyte redownloads!
🪞 Mirror support: keep builds from failing by falling back to mirrors when your primary forge is down or rate-limited.
🛠️ Pre-import patching: awaiting a Nixpkgs PR? Nixtamal lets you apply patches to the input itself before importing!
⚡ Faster hashing: override the hash algorithm per input and leverage the blazing-fast BLAKE3 algorithm.
🌍 VCS agnostic: go beyond Git and Mercurial. Use Subversion, Fossil, Darcs, or Pijul without waiting for C++ Nix binary updates.

Stop letting traditional flakes limit you. Unlock better inputs, save bandwidth, and regain complete control over your dependency management.

Read the full deep dive here: https://lnkd.in/eEGaibBq

#Nix #NixOS #DevOps #SoftwareEngineering #Automation #Nixtamal #Reproducibility
-
I just open-sourced a small but surprisingly powerful tool that's changed how I review code every single day. It's called Gitty: a git-first review skill for Claude Code and Cursor (Codex).

Here's what it actually does: I drop a GitHub PR or GitLab Merge Request link into the chat, and Gitty clones it locally into a temporary directory. Then it uses real git commands (diff, log, blame, merge-base) to build proper context and gives me a clear, focused review. No more scrolling through messy web diffs or losing track of what changed where.

When I want to push feedback back, it can post summary comments, inline review comments, or GitLab discussions directly from the AI, but only if I explicitly ask and the right token is available. Everything stays token-gated and fails safely if auth is missing.

The part I like most: it works beautifully on GitLab Free. No paid tiers or extra MCP servers needed. It uses the standard Notes and Discussions APIs that are already available, so I get browser-visible MR comments without any extra infrastructure.

If you spend a lot of time reviewing PRs/MRs and want a cleaner, more local-git-native workflow, especially on GitLab Free, you might enjoy this one.

Repo is here: https://lnkd.in/dF4ZHVBy

#AI #Git #CodeReview #DeveloperTools #OpenSource
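The git plumbing behind a review like this is worth spelling out. A sketch of the command sequence (the function name is illustrative, not Gitty's code); the key detail is merge-base semantics:

```python
def review_commands(base_branch: str, head_ref: str) -> list[list[str]]:
    """The git invocations a local-first review runs, in order.

    merge-base finds the true fork point, so the review covers only the
    MR's own commits, not unrelated changes that landed on the target
    branch in the meantime.
    """
    return [
        ["git", "merge-base", base_branch, head_ref],
        # Two-dot range: commits on head that base doesn't have.
        ["git", "log", "--oneline", f"{base_branch}..{head_ref}"],
        # Three-dot diff: changes since the merge-base, i.e. the MR itself.
        ["git", "diff", f"{base_branch}...{head_ref}"],
    ]
```

Feeding the model the three-dot diff instead of a plain `git diff base head` is what keeps the review focused when the target branch is busy.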
-
EP04: I broke my own API on purpose. Here's what I learned.

Most beginners write code that works when everything goes right. Professionals write code that handles everything going wrong.

Today I stress-tested the ExcuseEngine API with bad input:
→ Unknown category: relationship
→ Wrong type: urgency=abc
→ Empty request body: {}

Here's what I discovered: not everything is an error.
→ urgency=abc: FastAPI catches it automatically. 422. I wrote zero code for this.
→ urgency=99: Pydantic rejects it. 422. I just defined the rule; the framework enforced it.
→ category=relationship: silently returned a work excuse. No error. No warning. Wrong. I had to fix this manually.
→ empty body {}: returned defaults. 200. Not an error; graceful behaviour.

The fix for the silent failure:

if category not in EXCUSES:
    raise HTTPException(
        status_code=404,
        detail=f"Category '{category}' not found. "
               f"Available: {list(EXCUSES.keys())}"
    )

Now unknown categories return:

{"detail": "Category 'relationship' not found. Available: ['work', 'gym', 'code', 'family']"}

Loud. Clear. Helpful.

The lesson that actually stuck: silent failures are more dangerous than loud ones. A good API never lies to the client. It tells them exactly what went wrong and why. That's not just backend knowledge; that's engineering discipline.

Live → https://lnkd.in/gKh78ePe
GitHub → https://lnkd.in/gqYNXPts

Next up: adding a real database. No more hardcoded excuses. Episode 5 dropping tomorrow. Build the man. One project at a time.

#Python #FastAPI #BackendDevelopment #WeThinkCode #BuildInPublic
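The guard pattern above works outside FastAPI too. A framework-free sketch of the same idea (the excuse data and exception name are placeholders, not the project's actual code):

```python
# Placeholder data; the real project loads its categories elsewhere.
EXCUSES = {
    "work":   ["My build is still compiling."],
    "gym":    ["Leg day was yesterday. And the day before."],
    "code":   ["It works on my machine."],
    "family": ["We have a thing."],
}

class CategoryNotFound(Exception):
    """Loud failure: name the bad input AND list the valid options."""

def get_excuse(category: str) -> str:
    if category not in EXCUSES:
        # A silent default here would be the dangerous kind of failure:
        # the caller gets a plausible answer to a question they never asked.
        raise CategoryNotFound(
            f"Category '{category}' not found. "
            f"Available: {list(EXCUSES.keys())}"
        )
    return EXCUSES[category][0]
```

The error message carries the fix with it: the caller sees both what was wrong and what would have been right, so the next request succeeds without reading docs.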
-
https://lnkd.in/gtYEt-X9

Cloudflare launches Git for AI agents. More code will be written over the next 5 years than in all of programming history. Enter Cloudflare, which introduced Artifacts: a versioned filesystem built for agents that speaks Git natively. Traditional source control was designed for humans; this handles the coming explosion of agent-generated code.

You can spin up repositories programmatically for every agent session, sandbox, or Worker. Fork tens of millions from a base, hand off Git URLs, or use REST/Workers APIs from serverless environments. It scales where classic Git platforms struggle with volume and always-on automation.

Currently in private beta; public beta targeted for early May. The times, they are a-changin'...

#AIAgents #GitForAgents
-
Claude Code's source code leaked last week. 512,000 lines of TypeScript, accidentally exposed through an npm source map Anthropic left in a published package. I've been following the coverage since I use Claude Code every day. A few things that stood out:

1) Claude Code can send fake tool definitions in API requests. If someone is recording the traffic to train a competing model, the fake tools contaminate their dataset. Researchers have been calling it "anti-distillation." A little unsettling to think about, but also kind of clever.

2) 44 feature flags for things that are built but not shipped: autonomous modes, planning and review flows, memory that persists across sessions. I've been building some of these patterns myself with config files and custom workflows. Turns out the tool already has its own versions internally.

3) Anthropic tried to DMCA the code off GitHub, but their automated takedown accidentally flagged thousands of unrelated repos. This happened March 31, so everyone assumed it was an April Fools' joke. It wasn't.

4) Someone found a security vulnerability in the leaked code within days. I won't get into specifics, but it does make me think about how much access these tools have on my machine.

I just posted about the configuration layers I've built up over the past couple of months. Some of those same patterns already exist inside the tool's own codebase. I'm curious how many will ship as real features and make my manual setup unnecessary.

Anyone else been following the leak coverage? What jumped out at you? 🤙
-
Helix Community is now open source. 🚀 We're releasing Helix — a self-healing production system that automates the entire journey from a crash to a pull request. Your team stays in control. Helix does the grunt work. 🔄 How it works: 1️⃣ Sentinel — receives your Sentry/Rollbar webhook, classifies the crash, detects language 2️⃣ Regen — deduplicates against open issues, generates a failing test 3️⃣ Forge — reads source files, generates a minimal fix, posts to your GitHub Issue 4️⃣ Pulse — notifies your team on Slack with the full context 5️⃣ You — review and approve. Nothing merges without a human sign-off. Built for teams who want autonomous incident response without sacrificing oversight. 📦 Community Edition includes: • Apache 2.0 open source license • Self-hosted on Docker Compose, Railway, Fly.io, or ECS • 7 languages: Python, JS, TypeScript, Ruby, Java, Kotlin, Go • Sentry + Rollbar support out of the box • Local LLM support via Ollama — zero API cost • HMAC-verified webhooks — secure by default Try it today ↔️ ⭐ GitHub → https://lnkd.in/dGFN_-yx 🎥 Demo → https://lnkd.in/dnXWHmWp 📚 Docs & community → https://lnkd.in/d-3w_gW9 💡 Helix Cloud (managed, with dashboard) → https://helix.88hours.io/ We're listening. Open an issue, star the repo, or reach out at hello@88hours.io #opensource #devtools #incidentresponse #AI #engineering #AIAgent
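The "HMAC-verified webhooks" line deserves unpacking, since it's the part that keeps Sentinel from acting on forged crash reports. A generic sketch of how such verification typically works (not Helix's actual code; the signature header name and encoding vary by provider):

```python
import hashlib
import hmac

def verify_signature(secret: bytes, body: bytes, signature_hex: str) -> bool:
    """Check a webhook body against its hex-encoded HMAC-SHA256 signature.

    compare_digest runs in constant time, so an attacker can't recover
    the signature byte by byte from response-timing differences.
    """
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```

The important detail is signing the raw request bytes before any JSON parsing: re-serializing parsed JSON can reorder keys and silently break verification.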
-
I built a tool that gives git a memory.

Git tells you what changed. It has no idea why. After 6 months on a codebase, you're staring at files like middleware.py and old_auth.py with no idea what feature they belong to, whether anyone still uses them, or if they're safe to delete.

I built gitmind to fix that. Every time you commit, a post-commit hook runs a local LLM (Ollama: no API costs, and your code never leaves your machine) that analyzes the diff and writes structured metadata directly into your repo:
→ What changed
→ Why it likely changed
→ Which feature it belongs to
→ Which files are part of that feature

Six months later you can ask: what's stale? What's safe to remove? When was auth last touched? And get a real answer.

The part I'm most proud of: the tool documented its own development. Every commit I made while building it was analyzed by the LLM and stored in metadata.json. The build log on the docs site was written entirely by the AI; no human wrote a single summary. https://lnkd.in/eusR5TqA

It also ships with a local web dashboard (python3 cli/dashboard.py): feature health cards, a commit frequency chart, and a staleness report with an interactive threshold slider. No npm, no Node.js, just Python.

Stack: Python · Ollama · qwen2.5-coder:7b · vanilla JS · GitHub Actions for CI + docs

Fully open source. Would love feedback from anyone who's felt the pain of undocumented codebases.

🔗 https://lnkd.in/e92UVU5h

#buildinpublic #opensource #devtools #python #llm #git
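A hook like this has one deterministic first step before anything touches an LLM: pulling the changed file list out of the raw diff. A sketch of that step (the function name is illustrative, not gitmind's actual code):

```python
import re

# One header line per changed file in `git diff` output:
#   diff --git a/<old path> b/<new path>
DIFF_HEADER = re.compile(r"^diff --git a/(.+?) b/(.+)$", re.MULTILINE)

def changed_files(diff_text: str) -> list[str]:
    """Extract the post-change paths from raw `git diff` output."""
    return [new_path for _old, new_path in DIFF_HEADER.findall(diff_text)]
```

Keeping this parse outside the LLM matters: the file list becomes trustworthy structured metadata even if the model's "why it changed" summary is off, and it lets the hook skip the model entirely for trivial commits.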