Edward Boyce’s Post

GitHub just changed how Copilot is priced for individuals. New signups paused, tighter limits, Opus models removed from the base Pro tier. The stated reason: agentic workflows now regularly generate compute costs that exceed the plan price. A handful of requests could consume more than a month's subscription. This is a direct consequence of the agent/subagent era.

A lot of people read this as "AI is getting more expensive." I don't totally agree. What actually happened is that the unit of AI work changed, and the pricing model hadn't caught up. Copilot was priced for chat: you send a message, you get a reply, that's a request. Agents broke that model. A single well-specified session can now do what used to take 40 back-and-forth exchanges. The compute is real, and the request count is no longer the right proxy for it.

The shift is straightforward: write the spec first. Give the agent the full picture upfront: what you're building, the constraints, the acceptance criteria, what done looks like. One well-constructed session replaces 40 back-and-forth exchanges. That's one request, not forty. This is exactly how Claude Opus 4.7 is designed to be used, and why the 7.5x premium request weight (introductory price) is justified.

"More expensive" is the wrong lens. The real question is whether you're interacting with the model correctly for the agent era: long-horizon, well-specified, context-rich sessions, not rapid-fire back-and-forth that burns requests. The price of getting AI wrong is going up. Expect more restructuring, usage-based pricing, and tighter tiers across the industry.

#GitHubCopilot #AIProductivity #DeveloperProductivity #SoftwareEngineering #AgenticAI #SpecDrivenDevelopment
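The arithmetic behind the claim can be sketched in a few lines. This is a back-of-the-envelope comparison, not official pricing: the 1x base weight and 7.5x Opus-class weight are taken from the post's own figures, and the 40-exchange chat workflow is the post's illustrative number.

```python
# Illustrative premium-request math, assuming the weights cited in the post:
# base-model requests count as 1x, an Opus-class request counts as 7.5x.
BASE_WEIGHT = 1.0
OPUS_WEIGHT = 7.5

# Chat-style workflow: 40 rapid-fire exchanges on a base model.
chat_cost = 40 * BASE_WEIGHT

# Spec-first workflow: one well-specified Opus-class session.
spec_first_cost = 1 * OPUS_WEIGHT

print(f"chat-style:  {chat_cost:.1f} weighted requests")
print(f"spec-first:  {spec_first_cost:.1f} weighted requests")
print(f"spec-first is {chat_cost / spec_first_cost:.1f}x cheaper in request terms")
```

Under these assumptions, even at a 7.5x multiplier the single well-specified session consumes far fewer weighted requests than the iterative chat workflow it replaces.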


This is such a clear reframe. The "more expensive" narrative misses the point entirely. We're not paying more for the same thing, we're paying for a fundamentally different kind of output. The shift from chat to agentic workflows is like comparing a quick phone call to hiring a specialist for a multi-day project. The value isn't in the number of messages exchanged, it's in the depth and completeness of what gets delivered. If one well-scoped session can replace hours of iterative prompting, that's not inflation, that's efficiency. Your point about spec-first thinking resonates. It forces clarity upfront (which most of us skip), and that clarity is exactly what makes

I can relate to this Edward Boyce. I use Sonnet 4.5 (~90%), Haiku (~6–7%), and occasionally Opus 4.5, and the difference is very clear. It's not just response quality, it's how deeply the model understands the problem. Opus stands out: it doesn't just answer, it challenges your approach and suggests better strategies. That's where the real value comes in. I haven't used 4.7 yet, but based on this trajectory, I'd expect even stronger behavior in specific domains.

This is the real shift. We went from “prompt better” to “scope better.” Agents don’t make bad specs cheaper, they just execute them faster.
