Appendix: The Evolution of AI Analytics - A Practical Guide
This guide walks through a natural evolution teams can experience when adopting AI for analytics, from initial euphoria to mature frameworks.
Note: this document represents my own views based on my own personal experience. I am not representing Wells Fargo.
Stage 1: Direct AI Execution
"Just do it for me"
What It Looks Like
You ask AI to create a report, and it delivers a complete document:
You say: "Create a quarterly sales report with regional breakdown and year-over-year trends"
AI delivers: A polished 5-page PDF with:
The Initial Experience
Why Teams Move On
It becomes clear that this approach has limitations:
Example: A finance analyst loved AI-generated weekly reports until realizing the revenue calculations were inconsistent week-to-week. The "time saver" required additional verification work.
🔄 The Transition
"I need to see what it's doing"
At this point, some analysts want to abandon AI entirely and return to manual analysis. But many analysts were already using code such as Python and SQL for their work, which suggests a better path: having AI generate reviewable code. AI code generation is now good enough that even teams with limited coding experience can shift from "do the analysis for me" to "show me the code", provided they are willing to dig into the details. The need for transparency and reliability drives this change.
Stage 2: Transparent Code Generation
"Show me how"
What It Looks Like
AI translates requests into reviewable, runnable code:
You say: "Create Python code to analyze customer churn by cohort"
AI generates:
import pandas as pd
import matplotlib.pyplot as plt
# Load customer data
customers = pd.read_csv('customers.csv')
# Calculate cohort metrics
customers['cohort'] = pd.to_datetime(customers['signup_date']).dt.to_period('M')
churn_by_cohort = customers.groupby(['cohort', 'churned']).size().unstack()
# ... 50 more lines of analysis code
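To make the pattern concrete, here is a hedged sketch of what one elided step, a churn rate per cohort, might look like. The sample data here is fabricated to stand in for customers.csv; column names follow the snippet above:

```python
import pandas as pd

# Fabricated sample data standing in for customers.csv
customers = pd.DataFrame({
    'signup_date': ['2023-01-15', '2023-01-20', '2023-02-03',
                    '2023-02-10', '2023-02-28'],
    'churned': [True, False, False, True, True],
})

# Same cohort assignment as above: bucket customers by signup month
customers['cohort'] = pd.to_datetime(customers['signup_date']).dt.to_period('M')

# Churn rate per cohort: share of churned customers in each signup month
churn_rate = customers.groupby('cohort')['churned'].mean()
print(churn_rate)
```

The point is not this particular metric but that every step is visible and checkable, which is exactly what Stage 1 lacked.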
The New Capabilities
There's a trade-off here: these code-generated reports are inherently generic. They produce data, charts, and basic statements, but they lack commentary and insight. Many analysts find value in feeding the results back into AI to draft commentary. Either way, they review the results themselves to add meaningful interpretation.
The Emerging Challenges
As teams use code generation extensively, patterns start to emerge:
Teams notice they're asking AI for the same types of analyses repeatedly, just with slight variations.
🔄 The Natural Evolution
"We need to capture these patterns"
The pattern that emerges looks like:
Stage 3: Framework Building
"Let's build our analytical language"
The Realization
As teams work to build repeatable scripts and standardize customizations, they naturally end up creating frameworks. Here are two different approaches they might take:
Framework Approach A: Stay-in-the-System Frameworks
Build a framework where most work happens inside the system, with escape hatches for customization. This works particularly well in Jupyter notebook environments where you can mix different types of cells using custom Jupyter notebook magics.
Example Configuration:
STEP 1: Load Data
source: data_warehouse.customers
filters:
- status: active
- signup_date > '2023-01-01'
STEP 2: Standard Calculations
calculate: [ltv, churn_risk, engagement_score]
STEP 3: Custom Analysis
# This is where you add your specific logic
enterprise_mask = (data['tier'] == 'enterprise')
data.loc[enterprise_mask, 'health_score'] *= 1.2
# Maybe some exploratory plots
plt.scatter(data['usage'], data['churn_risk'])
STEP 4: Output Generation
format: dashboard
destination: tableau_server
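One way a framework could interpret filter lines like those in the configuration above is to apply each one to a DataFrame. This is an illustrative sketch, not any real framework's code; the function name `apply_filters` and the sample data are invented:

```python
import pandas as pd

def apply_filters(df, filters):
    """Apply a list of config-style filters to a DataFrame.

    Supports the two forms seen in the config above:
      {'status': 'active'}         -> equality filter
      "signup_date > '2023-01-01'" -> pandas query expression
    """
    for f in filters:
        if isinstance(f, dict):
            for col, value in f.items():
                df = df[df[col] == value]
        else:
            df = df.query(f)
    return df

# Fabricated sample data for the sketch
customers = pd.DataFrame({
    'status': ['active', 'churned', 'active'],
    'signup_date': ['2023-06-01', '2023-07-01', '2022-12-01'],
})
result = apply_filters(customers, [{'status': 'active'},
                                   "signup_date > '2023-01-01'"])
```

The escape hatch in STEP 3 then operates on the filtered frame directly, outside the framework's control.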
Approach B: Generate-and-Release Frameworks
Build a framework that generates complete, standalone code from high-level descriptions. Think of it as an intelligent template system where the generated code becomes your working document.
Example Request:
generate: QuarterlyBoardReport
include:
- revenue_trends
- customer_segmentation
- market_share_analysis
tone: executive_summary
comparisons: [last_quarter, last_year]
Framework Generates → 200+ lines of documented Python code that you then own. For any customization you need, you modify the code directly. The framework doesn't maintain ongoing control.
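A minimal sketch of the generate-and-release idea, using Python's built-in string templates. The section names mirror the request above, but the template text and function names are invented for illustration:

```python
from string import Template

# Hypothetical per-section code templates the framework might own
SECTION_TEMPLATES = {
    'revenue_trends': "# --- Revenue trends ---\n"
                      "revenue = df.groupby('quarter')['revenue'].sum()\n",
    'customer_segmentation': "# --- Customer segmentation ---\n"
                             "segments = df.groupby('tier').size()\n",
}

SCRIPT_TEMPLATE = Template(
    "# Generated report: $name\n"
    "import pandas as pd\n\n"
    "df = pd.read_csv('data.csv')\n\n"
    "$body"
)

def generate_report_script(name, sections):
    # Stitch the requested sections into one standalone script;
    # after generation, the analyst owns and edits this code directly.
    body = "".join(SECTION_TEMPLATES[s] for s in sections)
    return SCRIPT_TEMPLATE.substitute(name=name, body=body)

script = generate_report_script('QuarterlyBoardReport',
                                ['revenue_trends', 'customer_segmentation'])
```

A real generator would be far richer (and likely LLM-assisted), but the release contract is the same: the output is ordinary code with no runtime dependency on the framework.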
The Coverage Reality
The framework shouldn't try to handle everything—otherwise it becomes too complex and slows you down.
What effective coverage might look like:
Fully Framework-Handled (30-50% of work):
Framework-Assisted (30-50% of work):
Custom Development (10-20% of work):
This mixed approach gives you the benefits of standardization where it matters while preserving flexibility everywhere else.
🔄 The Democratization Opportunity
"What if non-technical people could use our framework?"
This question arises naturally when the team finds it often answers questions by simply writing the right configuration and running a standard part of the framework. In that case the only difficult part is knowing the exact configuration syntax, and that is something LLMs can already follow reliably.
Stage 4: AI-Powered Access
"Natural language to framework"
The Vision
Your framework becomes accessible to anyone through natural language:
Business User Says: "Show me customer health scores for our West Coast enterprise accounts"
AI Translates To:
analysis: CustomerHealth
filters:
- region: west_coast
- tier: enterprise
output: executive_dashboard
Framework Executes: Standardized, reliable analysis using team conventions
Why This Matters
🔄 The Next Frontier
"What if AI could work like a skilled analyst?"
Once teams have frameworks and AI-powered access working well, a natural question emerges: instead of just translating requests into configurations, what if AI could work with the framework more intelligently—validating results, iterating when something looks wrong, and handling complex multi-step analyses autonomously?
Stage 5: Intelligent Orchestration
"AI as analytical partner"
Agentic coding tools like Anthropic's Claude Code and OpenAI's Codex already do this well for code. The same abilities can be applied to analysis: in this case we want the AI system to work within the framework like a skilled analyst.
How It Works
AI agents that can generate configurations, validate results, and iterate autonomously:
You say: "Analyze customer churn by segment and identify key drivers"
AI Agent Process:
The key point here is to get the AI to use the same tools we have built for analysts already and allow it to iterate through multiple cycles autonomously.
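The agent process described above can be sketched as a generate-validate-iterate loop. Every function name here is a stand-in; the stubs at the bottom exist only to show the control flow:

```python
def run_agent(question, run_analysis, validate, refine, max_iterations=3):
    """Generate a config, run it through the framework, check the result,
    and retry with a refined config if validation fails."""
    config = {'analysis': 'churn_by_segment', 'question': question}
    for _ in range(max_iterations):
        result = run_analysis(config)    # same tool analysts already use
        ok, feedback = validate(result)  # e.g. sanity-check totals
        if ok:
            return result
        config = refine(config, feedback)  # adjust and try again
    raise RuntimeError('No valid result within iteration budget')

# Tiny stub demonstration: the first run "fails", the second succeeds
attempts = []
def fake_run(config):
    attempts.append(config)
    return {'rows': 0 if len(attempts) == 1 else 42}

result = run_agent(
    'Analyze customer churn by segment',
    run_analysis=fake_run,
    validate=lambda r: (r['rows'] > 0, 'empty result'),
    refine=lambda c, fb: {**c, 'retry_note': fb},
)
```

In practice the `run_analysis`, `validate`, and `refine` steps would be handled by the agent calling your framework's tools, but the loop structure is the essence of "iterate through multiple cycles autonomously".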
The End State
In a mature state, your analytics ecosystem might look like:
The dream becomes reality: less time on routine work, more time on what matters. But it happens mainly through building better tools, not through AI doing everything.
But, How Realistic is This?
Nothing above depends on AI getting better than it is now, though it surely still has improvements ahead of it. Here's how to build a proof of concept for this entire system today using existing tools. Below, I present it as a single-user proof of concept.
The Tech Stack
Note: I am skipping stage 1 since that is just using LLMs out of the box.
Stage 2: Building with AI Code Generation
You start here: Use Claude Code to generate Python scripts for your analyses. You give Claude Code access to your data, and it will generate the code, run the scripts, and test them directly. You review changes in VS Code and use Git to track incremental improvements. Make sure you understand what the code is doing (ask it if you aren't sure), and roll back to the last good version when it goes in the wrong direction.
You: "Create a Python script to analyze customer retention by cohort"
Claude Code generates: 150 lines of pandas/matplotlib code
You: Review, modify, save as customer_retention.py
After building several scripts, you notice patterns and start extracting common functions into shared Python modules with Claude Code's help.
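Such a shared module might start as small as this. The file name, function names, and column names are invented for illustration:

```python
# analytics_common.py - shared helpers extracted from several scripts
import pandas as pd

def load_customers(path='customers.csv', active_only=True):
    """Common loading step that several scripts had duplicated."""
    df = pd.read_csv(path, parse_dates=['signup_date'])
    if active_only:
        df = df[df['status'] == 'active']
    return df

def add_cohort(df, date_col='signup_date'):
    """Bucket rows into monthly signup cohorts."""
    df = df.copy()
    df['cohort'] = df[date_col].dt.to_period('M')
    return df
```

Extracting even two or three such helpers is the first step toward the frameworks of Stage 3.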
Stage 3: Framework Development
For this example, we'll use the stay-in-the-system approach (Option A from the main guide) since that is my preference.
You build a clean Python API:
framework = AnalyticsFramework()
framework.load_data('customers', filters={'status': 'active'})
framework.calculate_standard_metrics(['retention', 'ltv'])
In a separate Jupyter cell, you can add custom SQL to do something outside the framework:
%%sql
SELECT tier, AVG(retention_score)
FROM processed_customers
WHERE signup_date > '2023-01-01'
GROUP BY tier
Finally, you generate the output:
framework.generate_report('dashboard')
This approach lets you embed SQL queries or direct Python between framework steps for complex cases.
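A toy version of such an API might look like the following. The class and method names mirror the calls above, but the implementation, columns, and metric definitions are purely illustrative:

```python
import pandas as pd

class AnalyticsFramework:
    """Illustrative skeleton of the clean Python API shown above."""

    def __init__(self):
        self.data = None

    def load_data(self, source, filters=None):
        # A real framework would query the warehouse named by `source`;
        # this sketch fabricates a tiny frame instead.
        self.data = pd.DataFrame({
            'customer_id': [1, 2, 3],
            'status': ['active', 'active', 'churned'],
            'months_active': [12, 3, 6],
        })
        for col, value in (filters or {}).items():
            self.data = self.data[self.data[col] == value]

    def calculate_standard_metrics(self, metrics):
        if 'retention' in metrics:
            # Toy metric: share of a full year the customer stayed active
            self.data['retention'] = (
                self.data['months_active'].clip(upper=12) / 12
            )

    def generate_report(self, fmt):
        return {'format': fmt, 'rows': len(self.data)}

framework = AnalyticsFramework()
framework.load_data('customers', filters={'status': 'active'})
framework.calculate_standard_metrics(['retention'])
report = framework.generate_report('dashboard')
```

The value of the clean API is that every later layer, the YAML magics, the MCP server, the agent, can call these same three methods.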
Optional Enhancement - YAML Configuration: For better readability, you can create a YAML interpretation layer with custom Jupyter magics. Either way a key advantage in Jupyter is the step-by-step approach where you mix configuration with custom cells:
%%analytics_framework
analysis: customer_retention
data_source: customer_database
filters:
- signup_date > '2023-01-01'
- status: active
Then you have separate cells for custom logic:
# Custom analysis in its own cell
enterprise_adjustment = data['tier'] == 'enterprise'
data.loc[enterprise_adjustment, 'retention_score'] *= 1.2
%%analytics_framework
output: dashboard
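A sketch of what the `%%analytics_framework` magic could look like. To keep the example dependency-free it hand-rolls a tiny key/value parser (a real implementation would likely use a YAML library), and the IPython registration only activates inside a live notebook; `framework.run` is a hypothetical entry point:

```python
def parse_config(cell):
    """Parse simple 'key: value' config lines plus '- item' filter lines."""
    config, filters = {}, []
    for line in cell.strip().splitlines():
        line = line.strip()
        if line.startswith('- '):
            filters.append(line[2:])
        elif ':' in line:
            key, _, value = line.partition(':')
            config[key.strip()] = value.strip()
    if filters:
        config['filters'] = filters
    return config

try:
    from IPython import get_ipython
    from IPython.core.magic import register_cell_magic

    if get_ipython() is not None:  # only register inside a live notebook
        @register_cell_magic
        def analytics_framework(line, cell):
            # Hand the parsed configuration to the framework
            return framework.run(parse_config(cell))
except ImportError:
    pass  # outside a notebook, only the parser is available
```

With this in place, the config cells shown above become executable, while custom-logic cells remain plain Python.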
Stage 4: Natural Language Access
You create an MCP server that sits on top of your framework. This server exposes your analytical capabilities to Claude Desktop:
# Your MCP server (sketch; assumes the MCP Python SDK's FastMCP helper)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("analytics")

@mcp.tool()
def run_customer_analysis(region: str, tier: str):
    # Delegate to the framework with standardized filters
    return framework.run_analysis('customers', filters={
        'region': region, 'tier': tier
    })
Now in Claude Desktop:
You: "Show me retention rates for West Coast enterprise customers"
Claude: [Uses your MCP server] → Runs framework → Returns results
Stage 5: Intelligent Orchestration
You expand your MCP server to include documentation reading and dynamic code generation:
@mcp.tool()
def analyze_with_custom_logic(question: str, data_context: str):
    # The MCP server can read your documentation,
    # generate custom SQL/Python for novel analyses,
    # and use your framework for standard components
    return framework.run_custom_analysis(question, data_context)
Claude Desktop can now handle complex, multi-step analyses autonomously using your framework as the foundation.
Documentation and Business Context
Throughout this process, you use Claude Code to help write markdown documentation explaining how to use your framework within your business context. These docs become part of what your MCP server can reference when handling requests.
The Result
You end up with a personal analytics assistant that:
This entire system can be built incrementally using tools available today.