The AI agent lifecycle: what changes when your software stops following rules

The AI agent lifecycle: what changes when your software stops following rules

For decades, the Software Development Life Cycle gave engineering teams a reliable framework. Define requirements, design the system, write the code, test it, deploy it, maintain it. The model worked because once shipped, software behaves the way you built it to behave.

AI agents change that entirely, and the teams discovering this gap in production are the ones who didn't see the shift coming early enough.

Why the SDLC starts to crack

Traditional software follows rules written by humans. When something goes wrong, you trace it back to logic or configuration. The cause is findable. The fix is deployable.

AI agents operate on different terms:

  • Output is probabilistic, not fixed
  • Behaviour depends on data, context, and memory state
  • Responses can shift over time without any code changes
  • Part of the logic lives inside models your team doesn't fully inspect or control

According to Salesforce's research on the agentic SDLC, this non-deterministic nature is precisely what makes traditional development practices struggle at enterprise scale. Teams applying SDLC thinking directly to agent development tend to find the gaps at the worst time: in production.

Three things that change fundamentally

Article content
SDLC mindset vs agent development mindset

As Bain's Technology Report 2025 highlights, developer roles are already shifting from implementation to orchestration, with ongoing system quality becoming the primary focus rather than delivery milestones.

How each phase shifts

1. Problem definition becomes capability framing 

You're defining what decisions the agent is authorised to make, and where it must stop and escalate to a human.

2. Design expands beyond UI and backend 

System architecture now includes planners, memory strategies, tool integrations, and escalation paths, all decided before a line of code is written.

3. Development includes orchestration 

Prompts, retrieval logic, and operational policies become first-class development assets, versioned and reviewed with the same rigour as code.

4. Testing becomes scenario-driven 

Unit tests aren't sufficient. Reliable validation means testing behaviour across:

  • Full conversation flows, not just individual responses
  • Real-world edge cases
  • Adversarial inputs and failure modes

5. Deployment requires built-in control 

Launching an agent means having monitoring pipelines, rate limits, human oversight triggers, and rollback strategies in place from day one.

6. Operations becomes where the real work lives 

Drift, latency, cost, and unexpected behaviour require continuous observation and active adjustment, not periodic maintenance.

What this means in practice

Gartner projects that by 2028, 33% of enterprise software applications will use agentic AI, up from less than 1% in 2024. The organisations preparing well now share one thing in common: they've built governance into the lifecycle from the start rather than retrofitting it after something goes wrong.

The lifecycle tightens around control rather than completion. When software starts making decisions, the work never truly finishes, and that requires a fundamentally different way of thinking about every phase of the build.

Which phase is your team most underprepared for?

To view or add a comment, sign in

Others also viewed

Explore content categories