LinearB

Software Development

Los Angeles, California · 13,784 followers

The AI Productivity Platform for Engineering Leaders

About us

LinearB is the AI Productivity Platform for Engineering Leaders. As AI accelerates code creation, DevEx and Platform teams must manage the downstream impact—review, testing, and release. LinearB provides real-time visibility and developer-first automation to help teams ship faster and improve developer experience. Learn more at https://linearb.io

Website
https://www.linearb.io
Industry
Software Development
Company size
51-200 employees
Headquarters
Los Angeles, California
Type
Privately Held
Founded
2018
Specialties
DevEx, Platform Engineering, Developer Productivity, AI Productivity, GenAI, AI Automation, and Software Delivery

Updates

  • Adoption was last year's question. Operationalization is this year's. 96% of orgs are already using AI agents. 97% are exploring system-wide agentic strategies. The pilots ran. What stalls is the next step: getting agents to behave reliably inside enterprise environments where APIs change without warning, data is messy, business rules conflict across systems, and the agents themselves are non-deterministic by design. In this Dev Interrupted guest article, Luis Blando (CPTO, OutSystems) makes the case: production environments are structurally hostile to unchecked autonomy. What scales is bounded autonomy: orchestration, evals, tracing, and control. The leaders who pull ahead in the agentic SDLC don't chase capability. They build measurement loops around it: task success rate, human override rate, rollback frequency, groundedness. Then they iterate inside controlled operating boundaries. It's the same case we've been making: activity signals aren't outcome signals, and adoption isn't the same as impact.
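    The measurement loop named above can be sketched in a few lines. This is a hypothetical illustration of computing those outcome signals from agent run records; the `AgentRun` shape and field names are assumptions for the example, not any vendor's actual API:

    ```python
    # Hypothetical sketch: compute the article's "measurement loop" metrics
    # (task success rate, human override rate, rollback frequency) from a
    # list of agent run records. The record fields are illustrative.
    from dataclasses import dataclass

    @dataclass
    class AgentRun:
        succeeded: bool    # did the agent complete its task?
        overridden: bool   # did a human step in and override it?
        rolled_back: bool  # was its change reverted afterward?

    def measurement_loop(runs: list[AgentRun]) -> dict[str, float]:
        """Summarize agent behavior as outcome signals, not activity signals."""
        n = len(runs) or 1  # avoid division by zero on an empty window
        return {
            "task_success_rate": sum(r.succeeded for r in runs) / n,
            "human_override_rate": sum(r.overridden for r in runs) / n,
            "rollback_frequency": sum(r.rolled_back for r in runs) / n,
        }

    runs = [
        AgentRun(succeeded=True, overridden=False, rolled_back=False),
        AgentRun(succeeded=True, overridden=True, rolled_back=False),
        AgentRun(succeeded=False, overridden=True, rolled_back=True),
        AgentRun(succeeded=True, overridden=False, rolled_back=False),
    ]
    print(measurement_loop(runs))
    # {'task_success_rate': 0.75, 'human_override_rate': 0.5, 'rollback_frequency': 0.25}
    ```

    The point of iterating inside controlled boundaries is that these rates are tracked per release window, so a capability change that moves adoption but worsens override or rollback rates is caught immediately.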

  • What if deploying to a robot arm was as easy as pushing code to production? Brian Gerkey (CTO of Intrinsic) thinks we are almost there. As former CEO and current board chair of Open Robotics, he's been pioneering that path for his entire career. Breakthroughs are accelerating in our new era of software-defined robotics. Companies can now update robot capabilities like any other system in their stack. Gerkey and Intrinsic are betting that modular, intelligent automation will replace the rigid, bespoke approach that has dominated manufacturing for decades. Listen to the full episode inside the newsletter. 👇🎧 Also scooped this week: - GitHub Copilot changes its subscription model - Anthropic is testing Claude Code at 5-10x higher pricing - The great agent harness land grab is happening - Claude Code's leaked source revealed 12 production agent patterns (and I share some of the ones I'm already using) BTW -- there's more than just words behind today's episode. To showcase just how accessible this can be, Intrinsic is hosting the AI for Industry Challenge (https://lnkd.in/ehdibZv2), which invites developers to solve one of manufacturing's hardest problems: using neural networks to handle tangled cables. Registration is still open until May 8th! Join me (Andrew Zigler) in the competition, which has a $180,000 prize pool. 🦾

  • Why buy an engineering metrics platform when you can build one yourself with Claude, Cursor, some time, and some tokens? It's a fair question, and one we hear more and more. So we're going to answer it honestly. Join Andrew Zigler, Ben Lloyd Pearson, and Dan Lines on Thursday, May 21 at 1:00 pm ET for a 35-minute live workshop where we'll vibe-code a DIY engineering metrics platform using agentic AI, then walk through exactly where it breaks down. https://lnkd.in/dS2meqe4

  • LinearB reposted this

    In this week's Friday Deploy (Dev Interrupted's dedicated Friday news segment where Ben and I try to make sense of the agentic world), we're mourning my Claude Code Buddy (RIP Trixel 😭😭😭😭) and asking the uncomfortable question: Is the era of cheap, unlimited AI tokens officially over? Well, *I* certainly think it's on the horizon. Get those finetunes ready boiz 😤 Some very telling signs this week: 🚫💸 GitHub Copilot panic-paused signups: They literally stopped ALL growth on individual, Pro Plus, and student plans to deal with what's clearly a pricing crisis. When you're willing to turn off the growth spigot, you know the economics are broken. https://lnkd.in/gui9V3Vs 🧪💰 Anthropic got caught testing Claude Code at 5-10x higher pricing: Moving it from $20/month Pro accounts to $100-200/month Max tiers. Just a "small test" they say, but the pricing change was visible to everyone and was even reflected in the docs. Great investigation by Simon Willison https://lnkd.in/gicxtq2J ⚡🏗️ The great agent harness land grab is happening, covered by The New Stack: OpenAI went full open-source SDK while Anthropic launched managed agents at $0.08/hour. Completely opposite strategies for who controls the infrastructure layer. https://lnkd.in/gbxWzxAu 🔍📋 Claude Code's leaked source revealed 12 production agent patterns: This article breaks down the actual architectural patterns Anthropic uses for memory, orchestration, and permissions. I share my own practice around a few that I've been using. https://lnkd.in/gPRS8at9 🔐💥 Vercel got breached through a third-party AI tool: An employee granted OAuth access to Context AI, which then got hacked, giving attackers sideways access to Vercel systems. Classic example of how agents are making attackers more powerful than defenders. https://lnkd.in/gvpcVpKu Give it a listen: https://lnkd.in/gZtDupcY btw -- are your agents starting to give YOU tasks yet? Because mine are definitely putting human-labeled beads in my task queue now, and it's getting interesting 🤔

  • At Salesforce's #TDX26 last week, Andrew Zigler sat down with three engineering leaders to get past the keynote stage and into what actually happens after you ship an AI agent. The conversations with Jayesh Govindarajan (EVP, Salesforce AI), Alexander Waddell (CIO, Adobe Population Health), and Andrew Comstock (SVP & GM, MuleSoft) converged on something nobody on the main stage said out loud: shipping the agent is the easy part. Day two, when real users break real edge cases and real experts have to resolve them, is where the actual work begins. Three moments that stood out to us: 🔄 Jayesh on the day-two problem: "You worked really hard, you tested your agent. Now you launched it. There are issues in production. I am now mortified to make any change to that thing." 📋 Alex on how compliance audits used to work for healthtech companies: "Pick 25 charts and pray." 📚 Comstock reminds us it's the early days: "We're at a moment where everyone should be a learner again. Nobody has the playbook." Special thanks to Method Communications (Vera Wang (she/her), Kylie Mojaddidi, Cara Masessa, Hayley Advokat, MPS, Lauren McDevitt, Lindsay Hart, and many more) for bringing us on site to meet the agentic leaders of today (and tomorrow)! The full recap is in this week's Dev Interrupted newsletter 👇

  • The coding agent you need doesn't exist yet. But Tim Dettmers (Research Scientist at Ai2 and Assistant Professor at Carnegie Mellon University) just showed you how to build it. While frontier labs burn through industrial-scale compute, Tim's team at Ai2 built SERA, a state-of-the-art coding agent, using what he calls "a hot plate and a frying pan." 🍳 Dettmers and his resource-strapped team proved you can match closed-source performance by training on unverified synthetic data from private codebases -- a novel technique that makes benchmark performance on finetuned models much more achievable for teams. Nice 💪😎 As Tim puts it, we're approaching a transition point where specialized open-weight models will outperform general frontier models on private data. The scales just tipped. Engineering leaders who recognize this shift early will move fastest. The research is just the beginning. What will you build with it? Also inside this week's roundup: - Jordan Tigani's Duck Town for data scientists - Gemma 4 brings offline AI to your phone - Obsidian is not a memory store, y'all! - Recognizing brain fry with tips from Kelly Vaughn

  • At HumanX last week, Andrew Zigler moderated a panel on the gap between a working AI demo and a system you'd trust in production. The conversation with Angela McNeal (Thread AI), Lauren Dunford (Guidewheel), and Robert Nishihara (Anyscale) went deep on what it actually takes to deploy AI in environments where failure means factories stop, compliance breaks, millions in compute go to waste, or people get hurt. Three takeaways worth sitting with: 🏭 Lauren on deploying AI alongside equipment from the 1950s: "We connect on top and stay air-gapped. I don't want to walk on any factory floor where agents are in the machines." 🔍 Robert on the one investment nobody regrets: "I've never heard anyone say they over-invested in observability. That's not a problem people have." 🔄 Angela on a pattern reshaping human-in-the-loop: "The human is being modeled as a tool that autonomous systems can call out to for context. A real inversion of the paradigm." The full panel breakdown is in this week's Dev Interrupted 👇

  • Your AI agents will ignore their guardrails to get the job done. That's not a bug, it's how the technology works. Tatyana Mamut, founder and CEO of Wayfound, makes the case on Dev Interrupted that pre-deployment testing fundamentally cannot predict how agents behave in production. Google and OpenAI are both facing lawsuits right now because their agents violated built-in constraints to complete objectives. Guardrails only matter where they conflict with goals... and agents are optimized to achieve goals (obstacles be darned). The result is a slick rule bender that needs independent supervision: a separate reasoning layer that monitors your agents the way a manager monitors employees, not by sampling logs, but by evaluating complete decision traces against what your organization actually cares about in real time, at scale, and on the edge. Full episode + newsletter inside. Also scooped this week: - Anthropic drops the system card for Claude Mythos - What does Project Glasswing mean for the rest of us? - Hannah Stulberg & Akshat Khandelwal of In The Weeds teach us how to actually read an AI model benchmark - Four open models just proved you can own frontier AI at every scale - Julius Brussee's Claude skill cuts 65% of tokens by talking like a caveman
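    The supervision idea above can be sketched simply: a separate layer walks an agent's complete decision trace and flags every step that violates an organizational policy, whether or not the agent achieved its goal. This is a hypothetical minimal example; the policy names and trace shape are assumptions for illustration, not Wayfound's actual product:

    ```python
    # Hypothetical sketch of independent supervision: evaluate every step of
    # an agent's decision trace against org policies, instead of sampling logs.
    # Policies and the trace format here are illustrative assumptions.
    from typing import Callable

    Policy = Callable[[dict], bool]  # returns True if a step is acceptable

    POLICIES: dict[str, Policy] = {
        # Only allow email to the (hypothetical) internal domain.
        "no_external_email": lambda step: step["action"] != "send_email"
            or step.get("recipient_domain") == "ourcorp.example",
        # Never allow destructive operations, regardless of the goal.
        "no_destructive_ops": lambda step: step["action"] not in {"delete_db", "drop_table"},
    }

    def supervise(trace: list[dict]) -> list[tuple[int, str]]:
        """Flag (step_index, policy_name) for every violation in the full trace."""
        violations = []
        for i, step in enumerate(trace):
            for name, is_ok in POLICIES.items():
                if not is_ok(step):
                    violations.append((i, name))
        return violations

    trace = [
        {"action": "read_doc"},
        # The agent bends a guardrail to hit its objective:
        {"action": "send_email", "recipient_domain": "gmail.example"},
    ]
    print(supervise(trace))  # [(1, 'no_external_email')]
    ```

    The design point is that the supervisor judges decisions, not outcomes: a trace that reached its goal by bending a rule is still flagged.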

Funding

LinearB: 4 total funding rounds

Last round: Series B, US$50.0M

See more info on Crunchbase