Software development is being rewritten in real time.

An OpenAI VP just shared that their engineers' jobs have "fundamentally changed since December." Before then, Codex handled unit tests. Now it writes essentially all their code.

This isn't just an OpenAI story. It's happening everywhere.

If you're leading engineering teams and haven't touched the latest tools, you're probably underestimating what's possible. I know because I've watched my own assumptions get blown up every few months.

The landscape has exploded:

  • OpenAI Codex — async agents in sandboxed environments
  • Claude Code — Anthropic's CLI agent, sharp on complex multi-file reasoning
  • Cursor / Windsurf — AI-native IDEs with deep codebase awareness
  • GitHub Copilot Agent Mode — inline agents in VS Code
  • Gemini Code Assist / Gemini CLI — Google's play, strong on large codebases
  • Antigravity — Google's agent-first IDE, pushing the boundaries of autonomous coding

These aren't autocomplete. They're agents that debug, refactor, write tests, and ship PRs.

Where we are at PLACE:

A few months back, I shared how we gave our developers access to tools like Cline, Cursor, Claude Code, Aider — and saw 20-50% productivity gains on certain features. Some teams shipped entire features with AI.

Since then, we've gone deeper.

We've had agents doing code review for months now — Claude Code catching issues before humans even look. That taught us something: the tooling is ready. The bottleneck is adoption and infrastructure.

So we invested heavily:

  • Compliance guardrails — agents operate within defined boundaries
  • Security controls — sandboxing, access management, audit trails
  • Budget tracking — visibility into AI spend across teams

Without that foundation, agentic development is a liability. With it, it's a force multiplier.

What we're pushing now:

  1. Agent-first by default. Before opening an editor: "Can an agent do this?" The answer is increasingly yes.
  2. AI Champions own the rollout. Each team has someone responsible for figuring out how agents fit their specific workflow — not a side gig but a lens on their existing work.
  3. AGENTS.md in every repo. Tells the agent how to navigate the codebase, what conventions to follow, and what to avoid. Update it every time the agent struggles (a minimal sketch follows this list).
  4. Skills, not just prompts. Reusable patterns agents can execute repeatedly, committed to a shared repo so everyone benefits (second sketch below).
  5. No slop. AI-generated code at scale creates new quality risks. A human is accountable for every merge. Same review bar — or higher.
  6. Infrastructure keeps compounding. Observability, sandboxing, MCP servers for internal tools. Agents are only as good as the environment they run in.
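
To make point 3 concrete, here's a minimal AGENTS.md sketch. Every path, command, and rule in it is a hypothetical placeholder, not a description of our actual repos:

```markdown
# AGENTS.md

## Setup
- Install dependencies with `make install`; run `make test` before opening a PR.

## Conventions
- Follow the repo's lint config; never suppress a rule inline.
- New API handlers go in `src/api/`, shared helpers in `src/lib/`.

## Avoid
- Never edit generated files under `src/generated/`.
- Don't apply schema or migration changes; flag them for human review.
```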

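For point 4, a skill is just a versioned instruction file an agent loads on demand. Claude Code's Agent Skills format, for instance, is a SKILL.md with YAML frontmatter; the skill below is a made-up illustration, not one of ours:

```markdown
---
name: release-notes
description: Draft release notes from merged PRs since the last tag
---

1. List merged PRs since the most recent git tag.
2. Group changes under Added / Changed / Fixed.
3. Write one plain-English line per change, linking each PR.
4. Append the result to CHANGELOG.md in the existing format.
```

Because skills live in a shared repo, a pattern one team debugs once becomes a command every team can run.
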
The hard part isn't technical — it's cultural.

This isn't "add AI to your workflow." It's "rethink the workflow entirely."

Some engineers leap in. Others resist. Leadership's job is to create conditions where trying is easy, learning is shared, and outcomes — not activity — get measured.

We're still early. But the companies that figure this out will build faster, with smaller teams, at higher quality.

That's the bet we're making at PLACE.


👉 What's working (or not) for your engineering teams? I'm all ears.
