The Analytics Development Lifecycle Needs a Rethink - And Agentic AI Is the Reason Why
Most organisations approach analytics the way they approach construction: sequentially, in silos, and with a lot of handoffs that bleed time and context. A business question arrives, a data team disappears for weeks, and a dashboard or model eventually surfaces — often partially answering the original question, and rarely connected to the next one.
This approach made sense when analytics was a specialist function sitting at the edge of the business. It no longer does. As AI moves from tool to co-worker, and as the pace of decision-making accelerates, the organisations that treat analytics as a monolithic process will fall behind those that treat it as a system of composable, continuously improving building blocks.
The shift starts with how we design the analytics development lifecycle itself. And it starts with being clear about something: agentic AI is not a replacement for analytical talent. It is the infrastructure that allows that talent to operate at its highest level.
Breaking It Down: The Building Blocks That Actually Matter
An analytics development lifecycle — whether for a report, a predictive model, a data product, or an AI system — is not one thing. It is a sequence of distinct activities, each with its own inputs, outputs, quality standards, and failure modes. When we treat it as one thing, we create processes that are hard to debug, hard to improve, and impossible to accelerate systematically.
The building blocks, at their core, are these:
1. Problem Definition & Scoping
The most underinvested step. What decision does this analysis need to enable? What does good enough look like? What data exists, and what is the cost of being wrong? Most analytics failures originate here — not in the modelling. This is irreducibly human work: no agent can substitute for the judgement required to define the right question.
2. Data Discovery & Profiling
Understanding what data actually exists, what shape it is in, and what it can and cannot support. This is where assumptions go to die — and where projects get scoped accurately or not at all.
3. Data Engineering & Preparation
Transforming raw, messy, inconsistent data into something analytically usable. This step consumes a disproportionate share of analytics time — often 60-70% — and remains the single largest drag on cycle speed.
4. Analytical Development
The step most people mean when they say analytics — model development, statistical analysis, segmentation, forecasting. High-skill, high-judgement work. Also the step most dependent on the quality of everything that precedes it.
5. Validation & Quality Assurance
Does the output actually answer the question? Is the model performing as expected? Are the results stable, interpretable, and trustworthy enough to act on? This step is frequently compressed under time pressure — and that is where bad decisions get made.
6. Communication & Activation
Translating analytical outputs into decisions. A finding that cannot be communicated clearly is a finding that will not be used. The last mile of analytics is not technical — it is human.
7. Monitoring & Iteration
Once an insight is deployed or a model is in production, what happens next? Most organisations lack systematic feedback loops. The result is drift — models that degrade silently, decisions built on stale assumptions.
Where Agentic AI Changes the Equation — And Where It Cannot
Agentic AI — systems that can reason, plan, and act across multi-step workflows — does not replace this lifecycle. It transforms what is possible at each stage, and it creates the conditions for the lifecycle to run as a connected system rather than a sequence of disconnected handoffs. But at every stage, the value of agentic AI is realised through human judgement, not in its absence.
At problem definition, AI agents can interrogate existing data assets, surface analogous past analyses, and stress-test the framing of a question before a single analyst commits time to it.
Human role: A senior analyst or domain expert must still validate the framing — because agents optimise for what is answerable, not necessarily what is worth answering.
At data discovery, agents can profile datasets autonomously, flag quality issues, identify joins and relationships, and generate a data availability assessment in hours rather than weeks.
Human role: The engineer's role shifts from exploration to interrogation — applying expertise to evaluate what the agent found, not finding it themselves.
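To make this concrete, here is a minimal sketch of the kind of profiling pass an agent might automate, using pandas. The function, the summarised fields, and the file path in the usage comment are illustrative assumptions, not a prescribed tool.

```python
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Summarise each column: type, null rate, cardinality, example values."""
    rows = []
    for col in df.columns:
        s = df[col]
        rows.append({
            "column": col,
            "dtype": str(s.dtype),
            "null_rate": round(float(s.isna().mean()), 3),
            "n_unique": int(s.nunique(dropna=True)),
            "examples": s.dropna().unique()[:3].tolist(),
        })
    return pd.DataFrame(rows)

# An agent would run this across every candidate table, then flag columns
# with high null rates or surprising cardinality for human interrogation.
# report = profile(pd.read_csv("customers.csv"))  # path is illustrative
```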
At data engineering, agentic systems can write, test, and iterate on transformation pipelines — handling the mechanical burden that consumes most analytics time. Crucially, agents can also auto-generate pipeline documentation: logging data sources, transformation logic, lineage, and version history in structured, audit-ready formats. In regulated environments, this documentation is not a by-product — it is a core deliverable.
Human role: Human oversight remains essential to validate that the logic is sound and that the documentation accurately reflects what was built.
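To illustrate what documentation as a core deliverable can look like in code, here is a sketch of a transformation step that writes its own lineage record as it runs. The record schema, the two toy transformations, and the JSONL log file are assumptions for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

import pandas as pd

def transform_with_lineage(df: pd.DataFrame, source: str) -> pd.DataFrame:
    """Deduplicate and normalise, emitting an audit-ready lineage record."""
    rows_in = len(df)
    out = df.drop_duplicates().rename(columns=str.lower)

    record = {
        "source": source,
        "ran_at": datetime.now(timezone.utc).isoformat(),
        "rows_in": rows_in,
        "rows_out": len(out),
        "steps": ["drop_duplicates", "lowercase_column_names"],
        # A content hash lets an auditor confirm exactly which data
        # version a downstream model was built on.
        "output_sha256": hashlib.sha256(
            pd.util.hash_pandas_object(out).values.tobytes()
        ).hexdigest(),
    }
    with open("lineage_log.jsonl", "a") as f:  # append-only audit trail
        f.write(json.dumps(record) + "\n")
    return out
```

The point is not the toy transformations; it is that the lineage record is produced by the pipeline itself rather than reconstructed afterwards.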
At analytical development, AI co-pilots can generate candidate models, run feature selection, and benchmark approaches — compressing the exploratory phase significantly. Agentic AI raises the ceiling on what a team can explore; it does not lower the bar for what that team needs to understand.
Human role: Interpretation, contextualisation, and the final judgement on which model is appropriate for the decision at hand remain the preserve of experienced analysts.
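As a sketch of that compressed exploratory phase, here is a minimal candidate-model benchmark with scikit-learn. The synthetic dataset and the two candidates are placeholders, not recommendations.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in data; in practice this is the prepared analytical dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "gbm": GradientBoostingClassifier(random_state=0),
}

# An agent can sweep far more candidates than this; the analyst decides
# which accuracy/interpretability trade-off fits the decision at hand.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC {scores.mean():.3f} (+/- {scores.std():.3f})")
```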
At validation, agents can run systematic quality checks, probe for edge cases, test model stability under distributional shift, and generate structured validation reports — including model cards, assumption logs, and performance benchmarks that satisfy both internal governance and external regulatory requirements.
Human role: Human reviewers assess these outputs with the domain expertise and accountability that no agent can replicate.
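One widely used probe for distributional shift is the population stability index (PSI), which compares the score distribution a model was validated on with what it sees in production. The implementation below follows a common convention; the ten-bin setup and the 0.25 threshold are industry rules of thumb, not a standard.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two score samples.

    Values above roughly 0.25 are commonly read as material shift.
    """
    # Bin edges come from the reference (training-time) distribution.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))[1:-1]
    e_pct = np.bincount(np.searchsorted(edges, expected), minlength=bins) / len(expected)
    a_pct = np.bincount(np.searchsorted(edges, actual), minlength=bins) / len(actual)
    # Avoid log(0) on empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5000)  # validation-time scores
live = rng.normal(0.3, 1.2, 5000)       # simulated drifted population
print(f"PSI: {psi(reference, live):.3f}")
```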
At communication, large language models can translate analytical outputs into executive narratives and draft the story behind the data.
Human role: The analytical professional owns the narrative — verifying accuracy, ensuring the communication serves the decision being made, and standing behind the conclusions.
At monitoring, agentic systems can watch deployed models and data pipelines continuously, alert on degradation, auto-generate performance reports, and maintain living documentation of model behaviour over time.
Human role: Humans decide how to respond: when to retrain, recalibrate, or retire a model. In regulated industries, the resulting continuous audit trail — generated systematically rather than retrospectively — is increasingly the difference between a compliant AI system and a liability.
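A minimal sketch of what one such scheduled check might look like, producing the living audit trail described above. It reuses the psi() probe from the validation sketch; the log path, threshold, and alert hook are all assumptions.

```python
import json
from datetime import datetime, timezone

PSI_ALERT_THRESHOLD = 0.25  # illustrative rule of thumb, not a standard

def run_drift_check(reference_scores, live_scores, alert=print):
    """One scheduled check: compute drift, log it, escalate if breached."""
    value = psi(reference_scores, live_scores)  # probe from the sketch above
    entry = {
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "metric": "psi",
        "value": round(value, 4),
        "breached": value > PSI_ALERT_THRESHOLD,
    }
    # Append-only log: the audit trail accumulates as a side effect of
    # normal operation, not as a retrospective exercise.
    with open("monitoring_log.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")
    if entry["breached"]:
        alert(f"Model drift detected: PSI={value:.3f}")
    return entry
```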
The Regulatory Dimension Nobody Is Talking About Enough
There is a dimension of the building-block approach that goes beyond operational efficiency: regulatory readiness.
Regulators across financial services, healthcare, and other data-intensive industries are moving rapidly toward requiring explainability, audit trails, and documented governance for AI and analytics systems. The question is no longer whether your models perform — it is whether you can demonstrate how they were built, validated, monitored, and governed, at any point in their lifecycle.
The building-block structure, combined with agentic AI's capacity to generate documentation systematically at each stage, creates something that manual analytics processes rarely produce: a complete, structured, continuously maintained record of the entire analytical lifecycle. Data lineage, transformation logic, model assumptions, validation results, monitoring history — all generated as a natural output of the process, not assembled retrospectively under audit pressure.
This is a strategic asset, not a compliance cost. Organisations that instrument their analytics lifecycle correctly are not just faster and more rigorous — they are audit-ready by design, and able to demonstrate responsible AI use in a way that builds trust with regulators, partners, and boards alike.
The Executive Implication
The organisations that will lead in analytics over the next five years will not be those with the most data scientists. They will be those that have built the most intelligent, well-governed, continuously improving analytics systems — where human expertise is amplified, not replaced, and where compliance is built in, not bolted on.
Three questions worth asking:
Have we designed our analytics function as a set of composable, improvable building blocks — or as a black box that produces outputs on request?
Are our people being freed to work at the level that matters — problem framing, critical interpretation, stakeholder judgement — or are they still buried in the mechanics that agentic AI could handle?
And are we capturing the documentation, lineage, and audit trail that regulators will increasingly require — not as a retrospective exercise, but as a natural output of how we build?
The building blocks are the architecture. Agentic AI is the operating system. Human expertise is the governance layer that makes the whole thing trustworthy.
That combination is what responsible, scalable analytics looks like.