The Architect's Guide to UiPath Agent Types: Low-Code, Conversational, Coded, and External Agent Integration

A deep-dive comparison for technical decision-makers evaluating agentic automation strategies in the enterprise

Introduction

You've been asked to design an agentic automation strategy for your organization. You open the UiPath documentation and immediately encounter a taxonomy you need to make sense of: four distinct agent paradigms, each suited to a different set of requirements, each with its own development model, governance profile, and cost structure.

This article exists to help you make that architecture decision well.

UiPath supports four agent paradigms:

  1. Low-code agents — built visually in Studio Web, no coding required
  2. Conversational agents — multi-turn dialogue agents deployed to Slack, Teams, or web
  3. Coded agents — full Python development via LangChain, LlamaIndex, or raw SDK
  4. External agent integration — connecting agents from Vertex AI, Snowflake, Azure, Crew AI, Salesforce, Databricks, and others

The right choice depends on who's building it, how complex the logic needs to be, which LLM providers you need, and how quickly you need to ship. The strategic answer for most enterprises is a combination — but you need to know each type's trade-offs before you can make that call.


Understanding the Agent Taxonomy

UiPath classifies agents along two axes: who builds it and where the code lives.

UiPath System Agents (Autopilot, Healing Agent) are built and maintained by UiPath. You configure them but don't develop them.

UiPath-Built Agents — low-code and conversational — are built by your teams inside Studio Web using a visual canvas. UiPath manages the runtime; you own the agent design.

Coded and External Agents are built outside UiPath — in your IDE or on a third-party platform — then deployed to Orchestrator for execution, monitoring, and governance.

Every agent, regardless of type, is composed of four core components: a Prompt (instructions), Context (knowledge sources and memory), Tools (actions the agent can invoke), and Escalation paths (human-in-the-loop via Action Center).


1. Low-Code Agents

Overview

Low-code agents are the native UiPath agent experience — built on the Agent Designer canvas in Studio Web, requiring no Python or JavaScript. They're the right starting point for most organizations.

What They Can Do

Low-code agents support six tool categories: built-in Activities, other Agents (as sub-agents), API Workflows, RPA Automations, IXP integrations, and MCP Servers. Context Grounding connects them to enterprise knowledge bases. Three-level guardrails (Agent, LLM, Tool) enforce safety and compliance.

Key behavioral capabilities:

  • Planning — Break down goals into executable steps
  • Deciding — Real-time decisions based on state and context
  • Healing — Identify and recover from broken workflows
  • Learning — Retain memory across sessions (Agent Memory, preview)
  • Coordinating — Work alongside other agents, robots, and humans

Publishing Gates

UiPath recommends these validation gates before production:

  • Prompts finalized: role, constraints, 3–5 examples
  • Tools described: name, description, I/O schema for all tools
  • Context connected: at least one grounded knowledge source
  • Interactive tests: ≥30 tests across typical, edge, and malformed inputs
  • Evaluation sets: ≥30 curated test cases
  • Evaluation score: ≥70% with no regressions
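
A deployment pipeline can enforce the quantitative gates mechanically. Here is a minimal sketch (a helper of my own, not a UiPath API), assuming evaluation results arrive as a case count plus current and previously published scores:

```python
def passes_publishing_gates(num_eval_cases: int, score: float,
                            baseline_score: float) -> bool:
    """Apply the quantitative gates: >=30 curated cases, >=70% score,
    and no regression against the previously published score."""
    return num_eval_cases >= 30 and score >= 0.70 and score >= baseline_score

# 35 cases, 82% score, previous best 80%: ship it
print(passes_publishing_gates(35, 0.82, 0.80))   # True
# 40 cases, 75% score, previous best 78%: regression, hold
print(passes_publishing_gates(40, 0.75, 0.78))   # False
```

The qualitative gates (finalized prompts, described tools, connected context) still need a human checklist; only the numeric ones automate cleanly.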

When to Choose Low-Code

  • Process automation that can be expressed visually without custom code
  • Teams without Python development capacity
  • Fast iteration with the built-in evaluation framework
  • Use cases that need Orchestrator governance from day one

Ideal use cases: Invoice processing with exception handling, document routing, employee onboarding workflows, approval automation.

Trade-offs

Strong on speed, governance, and accessibility. Hits a ceiling on complex custom logic — agents that need custom ML models, arbitrary Python libraries, or very complex state machines will outgrow the visual canvas.


2. Conversational Agents

Overview

Conversational agents are a specialized class designed for dynamic, multi-turn, real-time dialogue with users. They differ from autonomous agents primarily in their interaction model — continuous chat rather than single-prompt task execution.

Conversational vs. Autonomous

Feature by feature:

  • Interaction model: multi-turn, back-and-forth (conversational) vs. single-turn task execution from a prompt (autonomous)
  • Primary use: real-time user support vs. executing a defined plan
  • Core strength: context persistence and ambiguity handling vs. tool orchestration at scale

Deployment Channels

  • Slack (Preview): new chat, history, citations
  • Microsoft Teams (Preview): new chat, history, citations, delete session
  • Web Widget (Available): embeddable in any web application

Licensing

Conversational agents use a hybrid user- and consumption-based model. Under Flex pricing: Cloud Basic users get 50 free messages/month, then 1 Agent Unit per message. Automation Developer and Attended users get unlimited messages. Power users aren't penalized; light users get a meaningful free tier.
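
The Flex math for a Cloud Basic user is easy to model. A quick sketch, assuming the 50-message free tier applies per user per month as described above:

```python
def agent_units_for_messages(messages: int, free_tier: int = 50) -> int:
    """Agent Units consumed by a Cloud Basic user under Flex pricing:
    the first 50 messages/month are free, then 1 Agent Unit per message.
    (Per-user-per-month scope of the free tier is my assumption.)"""
    return max(0, messages - free_tier)

print(agent_units_for_messages(40))    # 0 (inside the free tier)
print(agent_units_for_messages(200))   # 150
```

Multiplying that across your expected user population gives a first-order budget before any pilot data exists.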

When to Choose Conversational

  • User-facing self-service experiences (helpdesk, onboarding, benefits)
  • Scenarios requiring ongoing clarification or back-and-forth exchange
  • Channel-native deployment to Slack or Teams
  • Cases where a chat interface reduces adoption friction

Trade-offs

Not designed for batch processing or high-volume background automation. Slack and Teams channels are still in Preview. The real-time nature requires careful attention to latency — slow LLM responses noticeably degrade the user experience.


3. Coded Agents

Overview

Coded agents are built in your preferred IDE — VS Code, PyCharm, or any other — using Python. UiPath provides three SDKs:

  • UiPath Python SDK (uipath) — core SDK for custom agents and automations
  • UiPath LangChain SDK (uipath-langchain) — LangGraph-based graph orchestration
  • UiPath LlamaIndex SDK (uipath-llamaindex) — ReAct loop, data-first, RAG-native

All three deploy identically to Orchestrator via uipath pack → uipath publish.

LangChain SDK Example

from langchain.agents import create_agent  # LangChain v1 agent factory
from langchain_core.tools.retriever import create_retriever_tool
from uipath_langchain.chat.models import UiPathAzureChatOpenAI
from uipath_langchain.retrievers import ContextGroundingRetriever

# No API key needed: requests route through the UiPath LLM Gateway
llm = UiPathAzureChatOpenAI(
    model="gpt-4.1-mini-2025-04-14",
    temperature=0,
    max_tokens=4000,
    max_retries=2,
)

# Expose a Context Grounding index to the agent as a retriever tool
retriever = ContextGroundingRetriever(index_name="Company Policy Context")
retriever_tool = create_retriever_tool(
    retriever,
    "PolicySearch",
    "Search company internal documents for policy information.",
)

agent = create_agent(
    llm,
    [retriever_tool],
    system_prompt="Answer questions using the policy search tool.",
)

LlamaIndex SDK — Google Vertex AI (Gemini)

from llama_index.core.agent.workflow import ReActAgent
from uipath_llamaindex.llms import GeminiModel
from uipath_llamaindex.llms.vertex import UiPathVertex

llm = UiPathVertex(model=GeminiModel.gemini_2_5_pro)

# Stub tools for illustration; replace with real business logic
def analyze_invoice(invoice_id: str) -> str:
    """Analyze an invoice and return key details."""
    return f"Invoice {invoice_id}: Amount $5,000, Status: Pending"

def approve_payment(invoice_id: str, amount: float) -> str:
    """Approve payment for a given invoice."""
    return f"Payment of ${amount} approved for invoice {invoice_id}"

agent = ReActAgent(tools=[analyze_invoice, approve_payment], llm=llm)

Human-in-the-Loop (LlamaIndex)

from uipath_llamaindex.models import CreateTaskEvent, HumanResponseEvent

# ctx is the workflow Context available inside a workflow step
ctx.write_event_to_stream(
    CreateTaskEvent(
        app_name="InvoiceApproval",
        app_folder_path="Finance/Approvals",
        title="High-Value Invoice Review Required",
        data={"invoice_id": "INV-2026-001", "amount": 50000},
        assignee="finance-manager@company.com",
    )
)
# Execution suspends here until a human responds in Action Center
task_data = await ctx.wait_for_event(HumanResponseEvent)

Licensing

  • Community: 500 LLM calls/org/day; 300 Robot units/org/month for executions
  • Flex: 1 Agent Unit per LLM call; 1 Agent Unit per 5-minute execution
  • Unified: 0.2 Platform Units per LLM call; 0.2 Platform Units per execution
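
One back-of-envelope reading of the Flex line, for sizing a single run. The rounding behavior is my assumption (execution time billed in started 5-minute blocks), not documented pricing:

```python
import math

def flex_agent_units(llm_calls: int, runtime_seconds: float) -> int:
    """Estimate Flex Agent Units for one coded-agent run:
    1 unit per LLM call plus 1 unit per started 5-minute execution block.
    The round-up-per-block assumption is mine; verify against your contract."""
    return llm_calls + math.ceil(runtime_seconds / 300)

# 8 LLM calls over a 12-minute run: 8 + ceil(720/300) = 8 + 3 = 11 units
print(flex_agent_units(8, 720))   # 11
```

Whatever the exact billing granularity turns out to be, the structural point holds: chatty agents (many LLM calls) and long-running agents consume units on two independent axes.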

When to Choose Coded

  • Complex orchestration requiring LangGraph's explicit graph control
  • Data-intensive agents with strong RAG requirements (LlamaIndex)
  • Need for AWS Bedrock (Claude) or Google Vertex AI (Gemini) models
  • Custom ML models or arbitrary Python libraries
  • CI/CD and Git-native development workflows

Trade-offs

Full power, full responsibility. No visual guardrail builder — safety checks are your code. No built-in evaluation framework — you build your own. Longer dev cycles, dependency management, and more to maintain. But for the complex 20% of use cases that low-code can't handle, this is the right tool.


4. External Agent Integration

Overview

UiPath's platform is designed to orchestrate agents regardless of their origin. External agents — built on Vertex AI, Snowflake Cortex, Azure AI Foundry, Crew AI, Salesforce AgentForce, Databricks, or any other platform — can be integrated through:

  • Remote MCP Servers — connect to external HTTPS endpoints (simplest path)
  • Coded agent wrappers — wrap external SDKs in a UiPath coded agent (most flexible)
  • API Workflows — for simple REST API integrations
  • Command MCP Servers — NPM/PyPI packages for quick integration

Integration Patterns

Crew AI wrapped as a UiPath coded agent:

from crewai import Agent, Task, Crew
from uipath_langchain.chat.models import UiPathAzureChatOpenAI

# Use UiPath's LLM Gateway for Crew AI agents
llm = UiPathAzureChatOpenAI(model="gpt-4.1-2025-04-14")

researcher = Agent(
    role="Senior Research Analyst",
    goal="Research and analyze market trends",
    backstory="Expert market analyst with 20 years experience",
    llm=llm,
)

research_task = Task(
    description="Analyze Q1 2026 market trends in AI automation",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[research_task], verbose=True)
# result = crew.kickoff()  # runs the task pipeline
# Deploy via: uipath pack && uipath publish

Snowflake Cortex as a tool:

import os

import snowflake.connector

def query_cortex_analyst(question: str) -> str:
    """Query Snowflake Cortex for data insights."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
    )
    try:
        cursor = conn.cursor()
        # Bind the question as a parameter; interpolating it into
        # the SQL string would invite injection
        result = cursor.execute(
            "SELECT SNOWFLAKE.CORTEX.COMPLETE('llama3.1-70b', %s)",
            (question,),
        ).fetchone()
        return result[0]
    finally:
        conn.close()
Note: Both examples above are integration patterns, not drop-in production code. Connection handling, error management, and security should be implemented per your organization's standards.

Security Responsibility

⚠️ Important: UiPath manages security within its platform boundary — encryption in transit and at rest, RBAC, audit logging, guardrails. For external integrations, data privacy, endpoint security, and regulatory compliance are the customer's responsibility. This includes managing secrets, validating trusted endpoints, and auditing external code execution.

When to Use External Integration

  • You have existing investments in Vertex AI, Cortex, Azure AI Foundry, or Salesforce AgentForce that you want to leverage
  • You need domain-specific AI capabilities (Cortex for data analytics, AgentForce for CRM)
  • You're pursuing a gradual migration strategy — integrating external agents while building native UiPath capabilities
  • You need models not available through UiPath's LLM Gateway

Trade-offs

The integration story is genuine — but so is the operational cost. You're paying for both UiPath Agent Units and the external platform. Monitoring splits across dashboards. Guardrails don't extend to external agent internals. Debugging distributed traces across platforms is significantly harder than debugging within a single observability context. Reserve external integration for where the specialized capability genuinely justifies the added complexity.


Comparison at a Glance


The headline numbers: low-code and conversational agents reach production in days to weeks. Coded agents in weeks to months. External integrations in months. Governance is strongest for low-code/conversational (built-in), solid for coded (Orchestrator-native), and split for external (you own the external side). Customization inverts that pattern entirely.


Decision Framework


Start with the primary requirement, not the technology:

  • Interactive dialogue with users → Conversational agent, deployed to Slack/Teams/Web
  • Process automation expressible visually → Low-code agent
  • Complex custom logic or ML → Coded agent (LangChain for orchestration, LlamaIndex for RAG)
  • Leverage existing external platform → External integration via Remote MCP or coded wrapper

Scenario-Based Recommendations

  • HR chatbot for employee self-service → Conversational Agent (Teams/Slack)
  • Invoice processing with exceptions → Low-Code Agent
  • Multi-agent research pipeline → Coded Agent (LangChain/LangGraph)
  • CRM-integrated customer support → Conversational + Salesforce AgentForce
  • Data analytics with natural language → Coded Agent + Snowflake Cortex
  • Document processing with Gemini → Coded Agent (LlamaIndex + Vertex AI)
  • Enterprise-wide agent platform → Low-Code (80%) + Coded (complex 20%)


Enterprise Implementation Guidance

Start low-code, graduate to coded. Build the first agents with Studio Web. This validates the agent pattern, builds organizational familiarity, and leverages the built-in evaluation framework before investing in custom development. Most use cases never need to graduate.

Apply guardrails from day one. Use Block actions for sensitive data in production; Log actions during evaluation. Review guardrail logs regularly — false positives teach you where thresholds need tuning.

Design Context Grounding indexes carefully. Use versioned, descriptive index names (e.g., HR-Policies-2025-Q3). Set relevance score thresholds to filter noise. Update indexes as source documents change — stale context is a silent failure mode.
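
Threshold filtering is worth making explicit. A minimal sketch of the idea (my own helper; Context Grounding's API may expose thresholds differently), assuming scores normalized to [0, 1]:

```python
def filter_by_relevance(chunks: list[tuple[str, float]],
                        threshold: float = 0.75) -> list[str]:
    """Keep retrieved chunks whose relevance score clears the threshold,
    preserving retrieval order. The 0.75 default is an assumption to tune."""
    return [text for text, score in chunks if score >= threshold]

retrieved = [
    ("PTO accrues monthly at 1.5 days...", 0.91),
    ("Cafeteria menu, week 12", 0.31),       # noise hit
    ("Carry-over is capped at 5 days...", 0.78),
]
print(filter_by_relevance(retrieved))   # drops the 0.31 noise hit
```

Review what the threshold discards during evaluation: a cutoff that silently drops relevant chunks is the same silent failure mode as a stale index.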

Scope agents narrowly. An agent that does one thing well is easier to test, govern, and improve than one with sprawling responsibilities. Multi-agent orchestration — where specialized agents call each other as tools — scales better than single agents with broad scope.

Plan multi-agent coordination early. Low-code agents can invoke other agents as tools natively. Coded agents use InvokeProcess. External agents connect via MCP. Define responsibility boundaries per agent before the architecture gets complex.


Conclusion

The UiPath agent ecosystem gives enterprises genuine architectural flexibility — from no-code visual design to full programmatic control, with a coherent deployment and governance layer underneath all of it.

The most important insight for architects: these approaches are complements, not alternatives. Low-code agents handle the structured, well-defined majority of use cases with speed and governance. Coded agents handle the complex, custom minority where visual design runs out of expressiveness. Conversational agents surface automation capability where users already are — in chat. External integrations bring specialized AI capabilities where they're genuinely superior, at a cost worth paying only when the value is clear.

The organizations that will get the most from agentic automation aren't the ones that pick the "best" paradigm — they're the ones that match the right paradigm to each use case, and build a governance foundation solid enough to scale across all of them.

