Welcome to the agentic enterprise

Everyone is talking about AI agents.

My local one is named Claw (so original!), lives in a Mac Mini (also so original!) and is about to start a long and hopefully productive career in fetching news articles and building ecosystem maps for me.

Most of the agentic AI conversation is vague, though: buzzwords and bold promises without a ton of substance. Or it’s super tactical: how to X or Y or Z. So when I got UiPath CEO Daniel Dines and CMO Michael Atalla on my TechFirst podcast, I wanted to learn something more real.

And maybe, something more important.

What does an agentic enterprise actually look like? What breaks? What gets better? What sucks? And what happens to the humans in the middle?

And, importantly, does the agentic enterprise mean everyone has to pull a Block and lay off half their employees? I sure hope not.

Check out our convo here (and yeah, subscribe to the TechFirst YouTube channel to never miss an episode):


This episode of TechFirst is sponsored by KindBody Fitness: AI-powered fitness for all the health and none of the gym bro nonsense. Check out KindBody Fitness today.

Here’s a bit of what I learned.

Enterprise agents are not all the same, and that actually matters

Daniel pushed back on how the industry loosely uses “agentic AI.” He broke agents into three distinct tiers:

  1. Task agents handle a single, discrete action — like taking a patient’s temperature.
  2. Stage agents manage a dynamic cluster of related tasks with shared context.
  3. Case agents (what UiPath calls process-level agents) understand the entire workflow — all transitions, all exceptions, the full picture.

This taxonomy matters because most of the AI hype assumes we’re already at the case-agent level: autonomous swarms being handed a goal and left to figure it out.

Daniel’s somewhat contrarian view: we’re not there yet, the technology isn’t ready, and more importantly, enterprises aren’t ready to trust it. Before we hand over the keys to an entire business process, humans need to be able to read, review, and reason about what AI is doing … in the same way we want AI to generate readable code rather than raw binary that nobody can audit.

That’s a grounding point worth sitting with.

The agentic enterprise isn’t a flip of a switch. It’s a gradient, and right now most companies are closer to the beginning of it than the end.

Also important: human taste is a competitive moat

The worst insult my fashion-obsessed mom could give was “all her taste is in her mouth.”

Here’s the idea Michael brought up that stuck with me most: as AI gets better, human taste and judgment don’t become less valuable — they become more so.

  • AI cannot navigate organizational politics.
  • It doesn’t understand a company’s risk profile the way someone who’s been inside it for years does.
  • It can’t evaluate whether an AI system should be trusted — because it will always tell you it can be trusted. (Sure … install that OpenClaw skill … it’s safe!)

Only a human has the judgment to actually assess that.

Daniel went further. He’s been trying to write a book with LLM assistance and keeps hitting the same wall: not shockingly, the output is technically competent but flavorless. It’s bland.

LLMs are, in a sense, the average of everything they’ve been trained on. They don’t have a body. They haven’t lived. They don’t have a point of view that’s been earned through experience, and that’s exactly what style and taste require.

This has a practical implication for how companies should think about talent: the people who will matter most in an agentic enterprise are the ones with strong judgment, strong taste, and the ability to QA AI outputs at scale.

That’s not a smaller role. It’s a harder one.

Software is becoming disposable, and that changes everything

Daniel made a bold claim about what coding agents are doing to the build-vs-buy equation: if implementation cost is trending toward zero, the entire software industry has to reckon with that.

For UiPath, the practical implication is that the time-to-value for automating an enterprise process — something that today takes months — is about to shrink to days. An agent interviews subject-matter experts, documents the existing process, proposes improvements, writes the code, tests it, deploys it, monitors it, and handles exceptions when things break.

All of it, automated.

A year ago that would have sounded like science fiction. Now it sounds like a Q2 roadmap item.

This is sort of “vibe coding meets the enterprise.”

The freewheeling, build-fast energy of individual developers using AI to ship apps in hours is colliding with the rigorous, regulated, deeply complex world of enterprise software. That sounds like it’ll be chaotic, and it probably will.

But, it will be fast. And companies (vendors?) that put the right guardrails around it will do amazing things in insanely short periods of time … without it blowing up in their faces.

(Hopefully.)

Jobs will be redefined, not just deleted

I brought up the Jack Dorsey question.

He famously cut half the headcount at Block and said AI was the reason. (My take: could have had something to do with the $60 party with Jay-Z in San Francisco, dude, and the fact that you built two separate orgs for a single company.)

Michael’s framing was useful: every role will be transformed by AI, but it doesn’t mean half the company disappears overnight.

What changes is the shape of the role. Daniel sees three critical roles emerging in the agentic enterprise:

  1. Sellers
  2. Builders — not just engineers, but a more fluid, cross-disciplinary creator role
  3. Critics — people with the taste and judgment to evaluate what AI produces and push back when it’s wrong

There’s something crazy awesome about that.

For the first time, a product manager can actually execute their vision. A marketer can run 10 campaign variants instead of two. A designer can check in code.

The constraint shifts from “what can I build?” to “what should I build?” And yeah, you still need very smart people to ensure it all actually works, scales, and is secure.

The huge new gating factor: trust

If there’s one thread running through all of this, it’s trust.

Not just technical trust — whether the AI gets the right answer — but institutional trust. Companies will need to decide which AI vendors they’re willing to hand serious process authority to, and that decision will be shaped by track record, transparency, and accountability.

The companies that will lead the agentic transformation aren’t necessarily the ones with the flashiest models. They’re the ones that enterprises already trust with their most complex, regulated, high-stakes processes and who are building the governance infrastructure to match.

Daniel ended with a line that I keep coming back to: “Uncertainty is the new normal.”

The companies that thrive in what’s coming won’t be the ones who found a stable footing. They’ll be the ones who learned to move well in the chaos: shifting strategy fast, trusting their judgment, and staying on the train.

We’re all on the change train. And it’s not stopping.


Listen to the full episode of TechFirst with Daniel Dines and Michael Atalla wherever you get your podcasts, like on Spotify or Apple Podcasts or watch on YouTube.
