Building a Proactive Learning Agent: A Technical Deep Dive

In my years teaching at the university level and managing AI transitions in global organizations, I’ve observed that one of the significant bottlenecks in digital transformation is the cognitive load on senior staff. This tutorial outlines how to build a Proactive Learning Agent capable of analyzing an engineer’s work (e.g., code commits, implementation logs) to identify and prioritize learning opportunities. Instead of just adding another item to a performance review checklist, this agent observes poor patterns, identifies opportunities for improvement, and initiates a pedagogical conversation through “nano-learning moments,” transforming a necessary correction into a valuable, timely skill development opportunity for the technical team.

Measuring an engineer’s cognitive load isn’t as simple as counting lines of code, because it is a psychological phenomenon: the mental effort used in working memory. However, we can use technical proxies to identify when a system is so complex that it is exhausting the tech team’s mental capacity (more about cognitive load in [1, 2]).

Metrics to Measure Cognitive Load

This section outlines potential metrics that can be used to measure cognitive load.

[Image: table of metrics to measure cognitive load]

To be proactive, we can look for patterns in commits, logs, and other related artifacts that indicate a lack of understanding. We use these patterns not to evaluate performance, but to trigger learning interventions. Before continuing, we will detail the formula that helps measure total cognitive load.
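As a minimal sketch of what “looking for patterns in commits” could mean in code, the snippet below scans commit messages for a few poor-pattern signatures and maps each hit to a learning topic. The patterns and topics here are my own illustrative choices, not the article’s actual implementation:

```python
import re

# Hypothetical mapping of poor-pattern signatures to learning topics.
POOR_PATTERNS = {
    r"without rate limit": "Rate Limiting in APIs",
    r"n\+1 quer": "ORM Query Optimization",
    r"hardcoded (secret|password|key)": "Secrets Management",
}

def find_learning_opportunities(commit_messages):
    """Return (message, topic) pairs for commits matching a poor pattern."""
    hits = []
    for msg in commit_messages:
        for pattern, topic in POOR_PATTERNS.items():
            if re.search(pattern, msg, flags=re.IGNORECASE):
                hits.append((msg, topic))
    return hits

commits = [
    "Added 3 routes without rate limiting",
    "Refactored billing module",
    "Fix: N+1 query issue found in ORM usage",
]
for msg, topic in find_learning_opportunities(commits):
    print(f"{topic} <- {msg}")
```

A real agent would feed richer signals (diff contents, CI logs) into the same kind of trigger, but the principle is identical: detection fires a learning intervention, not a performance flag.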

Analysis of Total Cognitive Load

To understand the full load, we should consider Sweller’s formula adapted for software:

Lc = Li + Le + Lg

Where:

  • Li (Intrinsic Load): The inherent difficulty of the problem (e.g., complex algorithms). It cannot be eliminated, only managed through training.
  • Le (Extraneous Load): Unnecessary noise (bad architecture, slow tools, bureaucratic processes). This is the component we attack when we target “Poor Patterns.”
  • Lg (Germane Load): The effort dedicated to processing and building mental schemas. This is the “good” load that generates learning.

The goal is to reduce Le to maximize Lg.
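A toy calculation makes this goal concrete. Under the simplifying assumption (mine, not the article’s) that working memory is a fixed budget, every unit of Le we remove becomes headroom for Lg:

```python
# Illustrative numbers on an arbitrary 0-10 effort scale (assumed, not measured).
CAPACITY = 10                 # fixed working-memory budget
Li = 4                        # intrinsic load: inherent problem difficulty
Le_before, Le_after = 5, 2    # extraneous load before/after removing noise

def germane_headroom(capacity, intrinsic, extraneous):
    """Effort left over for schema-building (Lg) once Li and Le are paid."""
    return max(0, capacity - intrinsic - extraneous)

print("Lg before:", germane_headroom(CAPACITY, Li, Le_before))  # Lg before: 1
print("Lg after: ", germane_headroom(CAPACITY, Li, Le_after))   # Lg after:  4
```

Cutting Le from 5 to 2 quadruples the budget available for actual learning, which is exactly the trade-off the agent tries to exploit.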

Cognitive Load in the Vibe Coding Era

Perhaps some of you are now asking the same question, or a similar one: is vibe coding changing these cognitive metrics?

While vibe coding makes development feel faster, research shows it introduces a Velocity Paradox: developers can produce more code, but the mental effort shifts from writing to verifying.

The following table adds some thoughts on how these metrics are changing under the effects of vibe coding.

[Image: table of cognitive-load metrics under vibe coding]

The Le vs Lg Balance

In the formula Lc = Li + Le + Lg, vibe coding drastically reduces Extraneous Load (Le) by removing syntax and boilerplate hurdles.

However, if that “saved” energy is not reinvested in Germane Load (Lg), that is, in deeply understanding the system, the team ends up with a fragile codebase that nobody understands.

In the vibe coding era, the primary risk isn’t that we can’t build things; it’s that we build things we don’t understand. Proactive learning interventions (short, context-aware learning bursts) act as the bridge between “vibing” with an AI and actually growing as an engineer.

Next, we detail an example of the agent:

The use case involves an agent that proactively identifies potential nano-learning experiences by exploring developers' commit history.

Some excerpts from the proposed example:

"I've reviewed your recent commits, and it's clear you're building out important functionalities! Focusing on commit c101, where you added new API routes, there's a wonderful opportunity to enhance the resilience and security of your services."

"I'd love to share a quick nano-learning moment on Rate Limiting in APIs. This skill is incredibly valuable for managing traffic and protecting your systems as they scale."

"Rate limiting is a technique to control the number of requests a user or client can make to an API within a specific timeframe..."
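To ground the lesson the agent is teaching, here is a self-contained sliding-window rate limiter in plain Python. A production FastAPI service would more likely use a library such as slowapi (mentioned later in the article); this toy version only illustrates the concept:

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per client within `window` seconds."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.hits = {}  # client_id -> deque of request timestamps

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits.setdefault(client_id, deque())
        # Evict timestamps that have fallen out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) < self.limit:
            q.append(now)
            return True
        return False

limiter = SlidingWindowLimiter(limit=3, window=60)
print([limiter.allow("client-a", now=t) for t in (0, 1, 2, 3)])
# [True, True, True, False]  -- the 4th request inside the window is rejected
```

Keeping per-client state in memory works for a single process; a distributed deployment would move the timestamp store to something shared like Redis.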

[Image: proactive identification of a learner and a deep dive over their commits]

The Architectural Blueprint

The architectural blueprint for the Proactive Learning Agent is divided into three main components: the Agent, the Backend (Endpoints), and the Frontend. If you want more details on how to expose agents through a custom UI, you can review Beyond the Sandbox: Architecting Custom UIs for ADK-Powered AI Agents.

The Agent is the core intelligence component responsible for analysis and pedagogical response generation.

Function: Defined as root_agent, a Technical Learning Coach powered by a large language model (gemini-2.5-flash). Its goal is to analyze commits, identify learning opportunities, and proactively deliver nano-learning moments.

Tools (Capabilities):

  • get_recent_commits(engineer_id): Returns a list of recent commit data, including summaries of poor patterns like “Added 3 routes without rate limiting” or “N+1 query issue found in ORM usage”.
  • analyze_commit(commit_id): Analyzes a specific commit summary to determine a learning_opportunity (e.g., “Rate Limiting in APIs”) and relevant details.
  • provide_nano_learning(topic): Generates a small, targeted learning module (content) based on the identified topic, such as suggestions on using slowapi for rate limiting or joinedload for database optimization.
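The article describes these tools by behavior rather than by code, so the mock below is my own plausible reconstruction: the function names match the bullets above, while the commit data and topic mapping are invented for illustration.

```python
# Mock data standing in for a real VCS integration (invented for this sketch).
MOCK_COMMITS = {
    "test_engineer_id": [
        {"id": "c101", "summary": "Added 3 routes without rate limiting"},
        {"id": "c102", "summary": "N+1 query issue found in ORM usage"},
    ],
}

def get_recent_commits(engineer_id: str) -> list:
    """Return recent commit data for an engineer (mocked)."""
    return MOCK_COMMITS.get(engineer_id, [])

def analyze_commit(commit_id: str) -> dict:
    """Map a commit to a learning opportunity (mocked heuristics)."""
    topics = {
        "c101": "Rate Limiting in APIs",
        "c102": "ORM Query Optimization",
    }
    return {
        "commit_id": commit_id,
        "learning_opportunity": topics.get(commit_id, "General Code Review"),
    }

def provide_nano_learning(topic: str) -> dict:
    """Return a small, targeted learning module for the topic (mocked)."""
    modules = {
        "Rate Limiting in APIs":
            "Rate limiting controls how many requests a client may make in a "
            "timeframe; in FastAPI, slowapi is a common choice.",
        "ORM Query Optimization":
            "N+1 queries can often be removed with eager loading, e.g. "
            "SQLAlchemy's joinedload.",
    }
    return {"topic": topic, "content": modules.get(topic, "Topic overview unavailable.")}
```

Plain Python functions like these can be passed directly in the agent’s tools list; the model decides when to call each one based on the instruction prompt.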

Backend (FastAPI + ADK)

The Backend acts as the service layer, exposing the agent’s behavior via a REST API.

  • Technology: Uses FastAPI to handle requests asynchronously and avoid blocking the main thread.
  • Endpoint: Exposes a POST endpoint at /learning_coach/chat.

Functionality:

  • Session Management: Ensures a session exists for the user (session_service.get_session/create_session).
  • Agent Execution: Takes the incoming user message, packages it as a Content message, and executes the agent logic asynchronously using learning_coach_runner.run_async.
  • Response Handling: Streams the output from the agent execution events and returns the final text response.

Agent:

from google.adk.agents.llm_agent import Agent

# get_recent_commits, analyze_commit, and provide_nano_learning are the
# tool functions described above, defined elsewhere in the same module.
root_agent = Agent(
    model='gemini-2.5-flash',
    name='learning_coach_agent',
    description="Analyzes engineer commits and provides proactive nano-learning moments based on identified learning opportunities.",
    instruction=(
        "You are a Technical Learning Coach. Your goal is to analyze an engineer's commits using 'get_recent_commits', "
        "evaluate them for improvements using 'analyze_commit', and then proactively deliver nano-learning moments using 'provide_nano_learning'. "
        "Always be encouraging and focus on continuous improvement."
    ),
    tools=[get_recent_commits, analyze_commit, provide_nano_learning],
)        

Endpoint:

# Assumes app (FastAPI), session_service, learning_coach_runner, and
# ChatRequest are defined elsewhere in the backend module.
from fastapi import HTTPException
from google.genai.types import Content, Part

@app.post("/learning_coach/chat")
async def learning_coach_chat(request: ChatRequest):
    try:
        session = await session_service.get_session(
            app_name="learning_coach_app",
            user_id=request.user_id,
            session_id=request.session_id
        )
        if session is None:
            await session_service.create_session(
                app_name="learning_coach_app",
                user_id=request.user_id,
                session_id=request.session_id
            )

        content_msg = Content(
            role="user",
            parts=[Part(text=request.message)]
        )
        
        events = learning_coach_runner.run_async(
            new_message=content_msg,
            user_id=request.user_id,
            session_id=request.session_id
        )
        
        output_text = ""
        async for event in events:
            if getattr(event, "content", None) and event.content.parts:
                for part in event.content.parts:
                    if getattr(part, "text", None):
                        output_text += part.text
        
        return {"response": output_text or "No response could be generated."}
    except Exception as e:
        print(f"Error in learning_coach_chat endpoint: {e}")
        raise HTTPException(status_code=500, detail=str(e))        

Backend code  

Frontend (Streamlit)

The Frontend provides the user interface and initiates the proactive interaction.

Technology: Uses Streamlit for implementation.

Proactive Flow: On initialization (when the user logs in), the application proactively sends a POST request to the backend’s /learning_coach/chat endpoint.

The message explicitly instructs the agent to “analyze my recent commits and proactively suggest a nano-learning opportunity” in a conversational and encouraging tone.

This fulfills the goal of the UI to interrupt the user “meaningfully” with a dynamic notification bridge.

User Interface: Displays a loading spinner (“Identifying learning opportunities…”) while waiting for the response and appends the agent’s proactive or conversational response to the session’s message state.

Note: mock data is used to cover several variations of the use case. You will see three learners (test_engineer_id, test_engineer_cloud, test_engineer_search).

[Image: variations of users and agent responses]

import httpx
import streamlit as st

if "messages" not in st.session_state:
    st.session_state.messages = []
    
    with st.spinner("Identifying learning opportunities..."):
        try:
            # Trigger initial proactive message
            response = httpx.post(
                "https://url-to-replace/learning_coach/chat",
                json={
                    "user_id": "test_engineer_id",
                    "session_id": "learning_coach_session",
                    "message": "Hi, I just logged in. My engineer ID is 'test_engineer_id'. As my Technical Learning Coach, please analyze my recent commits and proactively suggest a nano-learning opportunity based on what you find. Share exactly one clear suggestion in a conversational and encouraging tone, starting with predicting what I've been doing."
                },
                timeout=60.0
            )
            if response.status_code == 200:
                answer = response.json().get("response", "No response was found.")
                st.session_state.messages.append({"role": "assistant", "content": answer})
            else:
                st.session_state.messages.append({"role": "assistant", "content": "Welcome! I couldn't connect to my analysis engine."})
        except httpx.ConnectError:
            st.session_state.messages.append({"role": "assistant", "content": "Welcome! Please ensure the backend is running to analyze your commits."})
        except Exception as e:
            st.session_state.messages.append({"role": "assistant", "content": f"An error occurred while initializing: {e}"})
        

Frontend code

Reflection

The proactive agent behavior developed here addresses a critical bottleneck in digital transformation: it mitigates the cognitive load on senior staff and transforms necessary corrections into timely skill-development opportunities. Instead of relying on traditional performance reviews, this system observes poor code patterns and proactively initiates a pedagogical conversation through “nano-learning moments”.

This cohesive framework ensures that opportunities for improvement are continually identified and implemented, seamlessly integrating skill development into the engineer’s workflow and fostering continuous improvement for the technical team.
