AI Native Dev

Technology, Information and Internet

Exploring the future of AI in software development.

About us

AI is changing the way we write code, automate workflows, and build software. If you’re experimenting with AI-assisted coding, building AI-driven tools, or just trying to keep up with the latest trends, you’re in the right place.

💡 What we offer:
💬 Discord – A space to chat with devs working on AI-native software
🎙️ Podcast – Conversations with developers building real AI-powered tools
🎤 AI Native DevCon – A no-fluff conference on AI in software development
📬 Newsletter – Practical insights on AI coding, automation, and engineering

📅 Next event: AI Native DevCon | November 18–19, 2025. A conference to help you build faster, smarter, and more efficiently with AI.

Website
https://ainativedev.io/
Industry
Technology, Information and Internet
Company size
11-50 employees

Updates

  • OpenAI has open-sourced “Symphony,” a spec for orchestrating coding agents around tasks. It connects issue trackers like Linear to Codex-powered agents: tasks are queued and picked up automatically, with each one handled in its own isolated workspace and turned into a pull request. The benefit is coordination. Multiple tasks can run in parallel without engineers prompting or managing each step, and because the spec connects directly to the issue tracker, tickets become executable work that agents can complete on their own. 🔗 Read the full story here: https://lnkd.in/exWw9vxN
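The pattern described above (tracker tasks queued, each run in an isolated workspace, each ending in a pull request) can be sketched in a few lines. Every name here is invented for illustration; this is not the Symphony spec or its API, just the general shape of the loop.

```python
# Hypothetical sketch of the orchestration pattern: tasks flow from an
# issue tracker into isolated workspaces, each producing a pull request.
# All names are invented for illustration -- not the Symphony API.
from dataclasses import dataclass

@dataclass
class Task:
    id: str
    title: str

def fetch_ready_tasks(tracker):
    """Pull tasks the tracker has marked ready for an agent."""
    return [t for t in tracker if t.title]  # stand-in for a real query

def run_in_isolated_workspace(task):
    """Each task gets its own workspace, so runs cannot interfere."""
    workspace = f"workspace-{task.id}"  # e.g. a fresh container or branch
    # ... an agent would edit code here ...
    return {"task": task.id, "workspace": workspace, "pr": f"PR for {task.title}"}

def orchestrate(tracker):
    # Tasks are independent, so they could run in parallel;
    # a plain loop keeps the sketch readable.
    return [run_in_isolated_workspace(t) for t in fetch_ready_tasks(tracker)]

tracker = [Task("42", "Fix login redirect"), Task("43", "Add rate limiting")]
for result in orchestrate(tracker):
    print(result["pr"])
```

The point of the isolation step is that no engineer coordinates the runs: each task carries everything it needs from the ticket.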

  • What if the interface for working with AI isn’t a chat… but a canvas? Steve Ruiz is joining AI Native DevCon London 2026 to explore a very different way of interacting with agents. As Founder and CEO of tldraw, Steve has been pushing the boundaries of how humans and machines collaborate visually. From viral experiments like Make Real to real-time agent workflows, his work sits at the intersection of creativity, tooling, and AI. His session, Agents on the canvas with tldraw, looks at what happens when you move AI out of text boxes and into a shared visual space where multiple agents and humans can think, build, and iterate together.

What this session gets into:
• How the infinite canvas changes the way we interact with AI systems
• What works (and what breaks) when agents operate in real-time visual environments
• Lessons from building and shipping canvas-based AI tools
• How spatial interfaces can improve collaboration between users and agents
• Whether the future of AI workflows is visual, not conversational

This is a glimpse into a different interaction model, one that feels less like prompting and more like working side by side. If you’re curious about where interfaces for AI are heading, this session will expand how you think about it.

Join us in London or online: https://tessl.io/devcon/ (use AIND-LI-BB-20 for 20% discount)

  • Anthropic published a post-mortem explaining the reasons behind the recent drop in Claude Code’s performance — and the model wasn’t the problem. A series of small changes around it — prompts, caching, reasoning settings — compounded into a noticeable decline in output quality. And therein lies the lesson for companies building AI systems. As these tools move into real workflows, the model is only one piece. The harness — how context is managed, how prompts are structured, how reasoning is configured — is what shapes the outcome. Nothing broke in isolation, but together the system became unreliable. 🔗 Read the full story here: https://lnkd.in/e2rWTW7d

  • Skills promise a Matrix moment: upload knowledge, instant expertise. Reality is messier. We looked at thousands of real-world skills across GitHub, and the pattern is clear. Adoption is exploding, but most skills are written once and never touched again. Meanwhile, the systems they’re supposed to guide keep evolving. That gap is where things start to break.

The most common failure is simple: vague descriptions. If your skill says “helps with code quality,” the agent has no idea when to use it. Activation drops, or worse, it triggers in the wrong place. The fix is not more detail, it’s specificity. Clear task, clear boundary.

Then come the “God skills”: huge bundles trying to do everything. They look impressive, but they confuse activation and dilute impact. One skill, one workflow tends to outperform anything bloated.

Context bloat is another silent killer. Adding more instructions feels helpful, but often just burns tokens and introduces noise. The best-performing skills focus on what the model doesn’t already know, not what it does.

And even if you get all of that right, there’s a deeper issue: skills are not reliably used. Activation can drop below 50% depending on setup. Install ten skills, and now they compete. The agent hesitates, or ignores them entirely.

What actually works is treating skills like real software. Not static files, but living systems. Generate them with intent. Evaluate them against real tasks. Version and distribute them properly. Observe how they behave in production. This is what Tessl calls the Context Development Lifecycle, and it’s the difference between something that looks useful and something that consistently improves outcomes.

Because when done right, skills are a real multiplier: around 20% accuracy gains, lower costs with smaller models, faster execution. But poorly designed skills can just as easily make things worse. The teams that get this right won’t just build better agents. They’ll build better context.

Read the full blog here: https://lnkd.in/g5i6PqUr
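The “clear task, clear boundary” point is easiest to see side by side. Below is a hypothetical skill definition contrasting a vague description with a specific one. The frontmatter shape is an assumption for illustration only, not any particular tool’s schema.

```yaml
# Hypothetical skill frontmatter -- illustrative shape, not a real schema.

# Too vague: the agent cannot tell when this should activate.
name: code-quality-helper
description: Helps with code quality.

---
# Specific: one task, one workflow, an explicit boundary.
name: sql-migration-review
description: >
  Review PostgreSQL migration files under db/migrations/ for
  destructive operations (DROP, data-loss ALTERs) before merge.
  Use only when a diff touches db/migrations/.
```

The second version names the task, the file scope, and the trigger condition, which is exactly what activation depends on.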

  • Picking an agent used to feel like a decision. Now it feels like a commitment. Adrian Cole is joining AI Native DevCon London 2026 to challenge that idea. As a Principal Engineer at Tetrate and long-time open source contributor, Adrian has been working on agent interoperability, tooling, and the messy reality of integrating multiple systems that weren’t designed to work together. His session, How to break up with your Agent, looks at a problem many developers are starting to run into: what happens when the agent you chose no longer fits, but everything around it depends on it?

What this session explores:
• Why agent choice is becoming more about ecosystems than capabilities
• The limitations of tightly coupled agent + interface setups
• How projects like Goose are enabling interchangeable agent architectures
• What the Agent Client Protocol (ACP) unlocks in terms of flexibility
• Where interoperability works today and where it still falls short

This is a deep dive into decoupling your workflow from any single agent, so you can swap tools without rewriting everything around them. If you’re experimenting with multiple agents or worried about getting locked into one, this session offers a practical path forward.

Join us in London or online: https://tessl.io/devcon/ (use AIND-LI-BB-20 for 20% discount)

  • Simon Maple took the AI Native Dev mic to AI Engineer London, one of the most concentrated rooms of bleeding-edge builders you’ll find anywhere in Europe. Some highlights from the expo floor:
• Stripe is drowning in 1,300 AI PRs a week, and CI/CD wasn’t built for it.
• Google DeepMind hit 10M downloads in a week, making the case for on-device models.
• The UK government is digitizing decades of planning records with agents.
• Jellyfish is watching companies hit token consumption targets and call it AI adoption.

Full episode out now. Want in the room? AI Native DevCon London, June 1-2.

  • Cursor has partnered with Chainguard to secure open-source dependencies in AI-built code, as it pushes deeper into enterprise use. The integration lets developers pull dependencies from Chainguard’s curated, verified repositories instead of public registries — reducing the risk of compromised packages entering AI-generated applications. 🔗 Read the full story here: https://lnkd.in/eABQ67sR

  • CI/CD changed how we ship code. What happens when AI becomes part of that loop? Don Syme is joining AI Native DevCon London 2026 to explore exactly that shift. A Principal Researcher at GitHub Next, the mind behind F#, and a co-originator of async/await, Don has spent decades shaping how developers build software. Now his focus is on what changes when agents become an active part of the development lifecycle. In his session, The Agentic Repository Automation Revolution, he introduces a new layer: Continuous AI. Not a replacement for CI/CD, but something that sits alongside it, constantly improving and validating code as it evolves.

What this session dives into:
• How agentic workflows extend traditional CI/CD pipelines
• What continuous AI looks like in real repositories
• Automating code quality, performance, and maintenance tasks
• The role of agents in validation and formal verification
• How teams can stay in control while increasing automation

This is a glimpse into a development model where agents don’t just assist, they actively maintain and improve systems over time. If you’re curious about where software delivery is heading next, this session connects the dots between today’s tooling and what’s coming.

Join us in London or online: https://tessl.io/devcon/ (use AIND-LI-BB-20 for 20% discount)
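One way to picture “Continuous AI sitting alongside CI/CD” is an extra job in an otherwise ordinary pipeline. The sketch below is hypothetical: the `example/agent-maintain` action and its inputs are invented for illustration, and only `actions/checkout` is a real action.

```yaml
# Hypothetical agentic workflow: runs beside normal CI, not instead of it.
# The example/agent-maintain action and its inputs are invented.
name: continuous-ai
on:
  schedule:
    - cron: "0 6 * * *"   # nightly pass over the repo
  pull_request:            # plus a check on every PR

jobs:
  test:                    # classic CI stays exactly as it is
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test

  agent-maintenance:       # the additional "Continuous AI" layer
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: example/agent-maintain@v1   # invented action name
        with:
          tasks: "code-quality, docs-drift, dependency-health"
          open-pr: true                   # agent proposes, humans review
```

The design point is the last line: the agent opens pull requests rather than pushing changes, so the team keeps control while automation increases.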

  • Similar to OpenAI and Anthropic, Replit has launched a “Security Agent” to scan and fix vulnerabilities in AI-built apps directly inside its coding environment. Instead of treating security as a separate step, the agent reviews code as it’s built — identifying issues, assessing risk, and suggesting fixes before deployment. As agents take on more of the development process, catching mistakes at the point they’re introduced is crucial. 🔗 Read the full story here: https://lnkd.in/eBzRvnqc
