The way we think about agents today is overly naive. We treat them like they're one thing—"agents"—when they're actually going to be as varied as software itself. A customer support agent needs to be careful, double-check everything, build trust. A commercial agent? Maybe you want it to be a bit pushy. Decision support agents can never be wrong about a number, never leak information, and must explain their reasoning clearly. Each type requires completely different design choices. Your customer support agent needs to understand your specific return policies, your brand voice. Your decision support agent needs to know your risk tolerance, your strategic priorities, how your board thinks. These aren't generic capabilities—they're deeply specific to how your organization operates. The future isn't one super-intelligent agent or one type of agent for all tasks. It's dozens of specialized agents, each designed for its specific role in your specific organization. Those who grasp this will deploy the right agent for each job. Those who don't will wonder why their one-size-fits-all approach keeps falling short. #AI #AIDilemma #AIAgents #EnterpriseAI
Automated Customer Support
-
At Rackspace, we reduced IT ticket volume by 70% without adding headcount. After we integrated an AI coworker directly into Microsoft Teams, it now automates 500+ tickets end-to-end each month. AI works best when employees don’t have to change how they work. So our team built an AI coworker for IT (RITA) that doesn’t need a new portal or separate interface. By running inside Microsoft Teams, an app Rackers use every day, RITA fits naturally into existing workflows. Employees don’t need to switch tools or change how they work, which drives widespread adoption. Beyond answering questions, RITA executes workflows in real time and handles device provisioning, account lockouts, and everyday software issues. It completes the work, not just the request, which lets IT teams spend less time on triage and more time on higher-value work. As a result, we see a widening gap in the market. Teams that treat AI as a tool stay stuck in pilots, while teams that design AI as a participant in operations scale faster. After running RITA inside Rackspace and refining it in production, we deploy it for other IT teams that want to scale without adding headcount. Happy to start a conversation via LinkedIn DMs if this is something you’re actively working on. And if helpful, we’ve written up how this approach played out alongside three other agentic AI solutions we deployed at Rackspace Technology. The link is here: https://bit.ly/4q177Ii.
-
I've tested over 20 AI agent frameworks in the past 2 years. Building with them, breaking them, trying to make them work in real scenarios. Here's the brutal truth: 99% of them fail when real customers show up. Most are impressive in demos but struggle with actual conversations. Then I came across Parlant in the conversational AI space. And it's genuinely different. Here's what caught my attention:
1. The engineering behind it: 40,000 lines of optimized code backed by 30,000 lines of tests. That tells you how much real-world complexity they've actually solved.
2. It works out of the box: You get a managed conversational agent in about 3 minutes that handles conversations better than most frameworks I've tried.
3. Conversation Modeling approach: Instead of rigid flowcharts or unreliable system prompts, they use something called "Conversation Modeling."
Here's how it actually works:
1. Contextual Guidelines:
↳ Every behavior is defined as a specific guideline.
↳ Condition: "Customer wants to return an item"
↳ Action: "Get order number and item name, then help them return it"
2. Controlled Tool Usage:
↳ Tools are tied to specific guidelines.
↳ No random LLM decisions about when to call APIs.
↳ Your tools only run when the guideline conditions are met.
3. Utterances Feature:
↳ Checks for pre-approved response templates first.
↳ Uses those templates when available.
↳ Automatically fills in dynamic data (like flight info or account numbers).
↳ Only falls back to generation when no template exists.
What I really like: It scales with your needs. You can add more behavioral nuance as you grow without breaking existing functionality. What's even better? It works with ALL major LLM providers - OpenAI, Gemini, Llama 3, Anthropic, and more. For anyone building conversational AI, especially in regulated industries, this approach makes sense. Your agents can now be both conversational AND compliant. An AI agent that actually does what you tell it to do.
If you’re serious about building customer support agents and tired of flaky behavior, try Parlant.
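The post doesn't include code, but the condition/action guideline idea can be sketched in a few lines of plain Python. This is a minimal illustration of the pattern, not Parlant's actual API; `Guideline`, `select_guidelines`, and the tool names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical sketch of guideline-based conversation modeling.
# These names are invented for illustration -- NOT Parlant's real API.
@dataclass
class Guideline:
    condition: Callable[[str], bool]            # when does this guideline apply?
    action: str                                 # what the agent should do
    tools: List[str] = field(default_factory=list)  # tools usable ONLY under this guideline

def select_guidelines(message: str, guidelines: List[Guideline]) -> List[Guideline]:
    """Return only the guidelines whose conditions match the incoming message,
    so tools never fire outside their guideline's context."""
    return [g for g in guidelines if g.condition(message)]

guidelines = [
    Guideline(
        condition=lambda m: "return" in m.lower(),
        action="Get order number and item name, then help them return it",
        tools=["lookup_order"],
    ),
    Guideline(
        condition=lambda m: "refund" in m.lower(),
        action="Check refund eligibility before promising anything",
        tools=["check_refund_policy"],
    ),
]

active = select_guidelines("I want to return my headphones", guidelines)
print([g.action for g in active])
```

The point of the pattern: tool access is scoped to matched guidelines, so the model cannot decide on its own to call an API outside the conditions you wrote.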
-
Power Automate Work Queues are not built for scale! That's a fact. When you think about scalability in Power Automate, one thing that will definitely come to mind at some point is queues and workload management. While you might be able to survive without them in some event-based transactional flows that only process a single item at a time, whenever you process tasks in batches, or when RPA gets involved, you'll need queues. Power Automate comes with Work Queues out of the box, and you would think that's your go-to queueing mechanism for scaling. After all, it's at scale that you really need those queues - to de-couple your flows and make them easier to maintain, support, and debug, as well as more robust and efficient. Queues are a must even at medium scale. Heck, we use them even in small-scale implementations. But the surprising thing about Power Automate Work Queues is that they are not fit for high-scale implementations. And that is by design! The docs themselves (link in the comments) explicitly state that if you have high volumes, or if you dequeue (pick up work items from the queue for processing) concurrently, you should either keep it within moderate levels or use something else. If you try to use Power Automate Work Queues for high-scale implementations (more than 5 concurrent dequeue operations, or hundreds/thousands of operations of any type involving the queues), you'll get in trouble. All sorts of issues can happen - your data may get duplicated, you may accidentally dequeue the same work item in multiple concurrent instances, or your flows might simply get throttled or even crash. This is because of the way they're built and the way they utilize Dataverse tables for storing work items and work queue metadata. So, if you do want to scale, it's best to use an alternative. And, obviously, Microsoft wouldn't be Microsoft if they didn't have an alternative tool to do that.
The docs themselves recommend Azure Service Bus Queues as the high-throughput queueing mechanism. Another alternative could be Azure Storage Queues, but that only makes sense if the individual work items in your queue can get large (lots of data or even documents) or when you expect your queue to grow beyond 80GB (which is possible in very large-scale implementations). Otherwise, Azure Service Bus Queues are absolutely perfect for very large volumes of small transactions. On top of that, they have some very advanced features for managing, tracking, auditing and otherwise handling your work items. And, of course, there's an existing connector in Power Automate to use it. So, while I do love Power Automate Work Queues, I'll only use them in relatively small-scale implementations. And for everything else - my queues will go to Azure. And so should yours.
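A minimal in-process sketch of the one property the post keeps coming back to: each work item must be dequeued by exactly one of several concurrent consumers, with no duplicates. Here Python's stdlib `queue.Queue` provides that atomic-dequeue guarantee; at cloud scale, Azure Service Bus plays the equivalent role.

```python
import queue
import threading

# Sketch: five concurrent consumers draining one queue with no double pickups.
# queue.Queue hands each item to exactly one consumer -- the guarantee the
# post says Work Queues cannot reliably make above ~5 concurrent dequeues.

work_queue: "queue.Queue[int]" = queue.Queue()
for item_id in range(100):
    work_queue.put(item_id)

processed = []
lock = threading.Lock()

def consumer() -> None:
    while True:
        try:
            item = work_queue.get_nowait()  # atomic dequeue: no duplication
        except queue.Empty:
            return
        with lock:
            processed.append(item)          # stand-in for real item processing
        work_queue.task_done()

threads = [threading.Thread(target=consumer) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every item processed exactly once, despite 5 concurrent consumers.
assert len(processed) == 100 and len(set(processed)) == 100
```

The same exactly-once-per-receiver behavior is what you buy from Service Bus at scale, instead of hand-rolling locking over Dataverse rows.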
-
Cursor’s AI support bot has influencers freaking out. Users couldn’t log into their accounts on multiple devices. Cursor’s LLM-powered support said it was company policy to allow only one device per license, but that policy doesn’t exist. Hallucinations are common with LLMs, and there’s a simple solution. LLM answers must be grounded in source documentation, knowledge graphs, or tabular data. A fundamental guardrail design pattern for agents fixes this, so there’s no reason to freak out. Once the LLM provides an answer, a round of checks must run to verify it. In this case, a similarity score would have revealed that the support bot’s answer wasn’t a close match to any passage in a company policy document. Salesforce and many other companies use similarity scoring to prevent hallucinations from seeing the light of day. Deterministic guardrails are critical design elements for all agents and agentic platforms. Another best practice is using small language models (SLMs) that are post-trained on domain or workflow-specific data (customer support questions and answers in this case). LLMs are more prone to hallucinations than SLMs. AI product managers and system architects work together during the agent design phase to scenario plan failure cases and specify the guardrails that will mitigate the most significant risks. It’s agentic design 101 and has been part of my instructor-led AI product management course for almost a year. Cursor’s AI customer support agent is poorly designed, but the influencer freak-out and media attention it attracted are just more proof that most of these people aren’t actively working in the field. #AI #ProductManagement
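The similarity-score guardrail described above can be sketched in a few lines. This uses stdlib `difflib` lexical similarity as a stand-in for embedding-based cosine scoring, and the policy passages and 0.6 threshold are invented for illustration; it is a sketch of the pattern, not any vendor's implementation.

```python
from difflib import SequenceMatcher

# Invented policy passages for illustration.
POLICY_PASSAGES = [
    "Customers may use their license on any number of devices they own.",
    "Refunds are available within 14 days of purchase.",
]

def max_similarity(answer: str, passages: list) -> float:
    """Highest similarity between the bot's answer and any grounding passage.
    A production guardrail would use embedding cosine similarity instead."""
    return max(
        SequenceMatcher(None, answer.lower(), p.lower()).ratio()
        for p in passages
    )

def guarded_reply(answer: str, threshold: float = 0.6) -> str:
    """Deterministic post-check: block answers not grounded in any passage."""
    if max_similarity(answer, POLICY_PASSAGES) < threshold:
        return "ESCALATE: answer is not grounded in any policy passage"
    return answer

# A hallucinated "one device per license" policy scores low against every
# real passage, so it is blocked before reaching the customer.
print(guarded_reply("It is company policy to allow only one device per license."))
```

The check runs after generation and before delivery, which is exactly where a deterministic guardrail belongs: the LLM proposes, the verifier disposes.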
-
Metacognition is central to our ability to use AI well. The paper "Exploring the Potential of Metacognitive Support Agents for Human-AI Co-Creation" demonstrates how "metacognitive agents" can help human mechanical designers, while also surfacing valuable lessons on effective agent design. The Carnegie Mellon University researchers created three agents: SocratAIs, HephAIstos, and Expert FreeForm. Some of the key findings:
🧠 Metacognitive agents boost design feasibility. Designers supported by metacognitive agents produced significantly more feasible mechanical parts than those without support. The average design quality score was 3.5 out of 5 for supported users, compared to just 1.0 for unsupported users.
🗣️ Voice-based agents effectively prompt reflection. Using a voice interface, agents like SocratAIs and HephAIstos prompted designers to reflect on their design decisions and simulate real-world conditions. For instance, SocratAIs’ questions led users to reconsider incorrect force directions, improving load case setup and part feasibility.
🛠️ Sketching + planning enhances design reasoning. HephAIstos prompted users to sketch free-body diagrams and fill out planning sheets, leading to deeper engagement and improved problem setup. All users followed through with these activities, and in several cases, these tools anchored productive discussions that corrected prior design flaws.
📉 Over-questioning can backfire. While SocratAIs helped many, repeated questioning sometimes increased doubt and led users to override correct assumptions. In one session, this caused a participant to regress from a correct load setup to an incorrect one, illustrating how reflective support needs careful timing and calibration.
👥 Experts adaptively modulate support. Expert designers acting as support agents intuitively timed their interventions, sometimes delaying advice until users showed readiness. They blended reflective questioning with direct support, effectively guiding users without overstepping or causing dependency.
🧭 Metacognitive agents enhance self-regulation. Participants reported that agents helped them plan better and reflect more thoroughly. Some described feeling more organized and aware of their design logic, aligning with principles of self-regulated learning. One user noted the agent “walked me through my own thought process.”
There is a lot more work to do in this vein, but this offers an important framing and valuable insights.
-
Conversational AI is transforming customer support, but making it reliable and scalable is a complex challenge. In a recent tech blog, Airbnb’s engineering team shares how they upgraded their Automation Platform to enhance the effectiveness of virtual agents while ensuring easier maintenance. The new Automation Platform V2 leverages the power of large language models (LLMs). However, recognizing the unpredictability of LLM outputs, the team designed the platform to harness LLMs in a more controlled manner. They focused on three key areas to achieve this: LLM workflows, context management, and guardrails. The first area, LLM workflows, ensures that AI-powered agents follow structured reasoning processes. Airbnb incorporates Chain of Thought, a prompting technique that enables LLMs to reason through problems step by step. By embedding this structured approach into workflows, the system determines which tools to use and in what order, allowing the LLM to function as a reasoning engine within a managed execution environment. The second area, context management, ensures that the LLM has access to all relevant information needed to make informed decisions. To generate accurate and helpful responses, the system supplies the LLM with critical contextual details—such as past interactions, the customer’s inquiry intent, current trip information, and more. Finally, the guardrails framework acts as a safeguard, monitoring LLM interactions to ensure responses are helpful, relevant, and ethical. This framework is designed to prevent hallucinations, mitigate security risks like jailbreaks, and maintain response quality—ultimately improving trust and reliability in AI-driven support. By rethinking how automation is built and managed, Airbnb has created a more scalable and predictable Conversational AI system.
Their approach highlights an important takeaway for companies integrating AI into customer support: AI performs best in a hybrid model—where structured frameworks guide and complement its capabilities. #MachineLearning #DataScience #LLM #Chatbots #AI #Automation #SnacksWeeklyonDataScience – – – Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts: -- Spotify: https://lnkd.in/gKgaMvbh -- Apple Podcast: https://lnkd.in/gj6aPBBY -- Youtube: https://lnkd.in/gcwPeBmR https://lnkd.in/gFjXBrPe
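The blog post describes the pattern rather than publishing code, so here is a hedged sketch of the "LLM as reasoning engine inside a managed execution loop" idea, with context accumulation and a step-count guardrail. `fake_llm` and the tool names are invented stand-ins, not Airbnb's implementation.

```python
# Sketch of the workflow pattern described above: the model proposes one step
# at a time, and the managed loop -- not the model -- executes tools, records
# context, and enforces limits. All names here are illustrative.

TOOLS = {
    "lookup_reservation": lambda arg: {"trip": "SFO, May 3-7"},
    "answer": lambda arg: arg,
}

def fake_llm(context: list) -> dict:
    """Stub reasoning engine: decide the next step from accumulated context.
    A real system would call an LLM with the context serialized into a prompt."""
    if not any(step["tool"] == "lookup_reservation" for step in context):
        return {"tool": "lookup_reservation", "input": "current_user"}
    return {"tool": "answer", "input": "Your trip to SFO is May 3-7."}

def run_workflow(max_steps: int = 5) -> str:
    context = []                      # context management: every step is recorded
    for _ in range(max_steps):        # guardrail: hard cap on reasoning steps
        step = fake_llm(context)
        result = TOOLS[step["tool"]](step["input"])  # loop, not model, runs tools
        context.append({"tool": step["tool"], "result": result})
        if step["tool"] == "answer":
            return result
    return "ESCALATE: step limit reached"

print(run_workflow())
```

Keeping tool execution and termination in the loop is what makes the LLM's unpredictability survivable: a bad model decision costs one wasted step, not an unbounded runaway.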
-
This is how Adyen built an LLM-based ticket routing + support agent copilot to increase the speed of their support team. - Adyen used LangChain as the primary framework. The entire setup runs on Kubernetes for flexibility and scalability. - First, the ticket routing system uses an LLM to automatically direct support tickets to the right agents based on content analysis. This improved the accuracy in ticket allocation compared to their human operators. - For the support agent copilot, Adyen built a document management and retrieval system. It uses vector search to retrieve relevant docs from their internal support documentation and suggests answers to support agents, which cuts down the response time significantly. - The architecture is modular, so their existing microservices integrate easily too. Link to article: https://lnkd.in/gqUZZ6nd #AI #RAG #LLMs
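A toy sketch of the copilot's retrieval step: rank internal docs by similarity to the ticket text and suggest the best match. Word-count vectors with cosine similarity stand in for real learned embeddings so the example stays self-contained; the doc texts are invented, and this is not Adyen's code.

```python
import math
from collections import Counter

# Invented internal-support docs for illustration.
DOCS = [
    "How to issue a refund for a failed payment",
    "Configuring webhook notifications for payment status",
    "Routing rules for chargeback disputes",
]

def vectorize(text: str) -> Counter:
    """Bag-of-words vector; a real system would use an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Return the k docs most similar to the query -- the 'vector search' step."""
    q = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

print(retrieve("customer asks about a refund", DOCS))
```

The copilot then shows the retrieved doc (or an answer drafted from it) to the human agent, which is where the response-time saving comes from.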
-
LangChain recently published a helpful step-by-step guide on building AI agents. 🔗 How to Build an Agent – https://lnkd.in/dKKjw6Ju It covers key phases:
1. Defining realistic tasks
2. Documenting a standard operating procedure
3. Building an MVP with prompt engineering
4. Connecting and orchestrating
5. Testing and iterating
6. Deploying, scaling, and refining
While the structure is solid, one important dimension that’s often overlooked in agent design is efficiency at scale. This is where Lean Agentic AI becomes critical—focusing on managing cost, carbon, and complexity from the very beginning. Let’s take a few examples from the blog and view them through a lean lens:
🔍 Task Definition ➡️ If the goal is to extract structured data from invoices, a lightweight OCR + regex or deterministic parser may outperform a full LLM agent in both speed and emissions. Lean principle: Use agents only when dynamic reasoning is truly required—avoid using LLMs for tasks better handled by existing rule-based or heuristic methods.
📋 Operating Procedures ➡️ For a customer support agent, identify which inquiries require LLM reasoning (e.g., nuanced refund requests) and which can be resolved using static knowledge bases or templates. Lean principle: Separate deterministic steps from open-ended reasoning early to reduce unnecessary model calls.
🤖 Prompt MVP ➡️ For a lead qualification agent, use a smaller model to classify lead intent before escalating to a larger model for personalized messaging. Lean principle: Choose the best-fit model for each subtask. Optimize prompt structure and token length to reduce waste.
🔗 Tool & Data Integration ➡️ If your agent fetches the same documentation repeatedly, cache results or embed references instead of hitting APIs each time. Lean principle: Reduce external tool calls through caching, and design retry logic with strict limits and fallbacks to avoid silent loops.
🧪 Testing & Iteration ➡️ A multi-step agent performing web search, summarization, and response generation can silently grow in cost. Lean principle: Measure more than output accuracy—track retry count, token usage, latency, and API calls to uncover hidden inefficiencies.
🚀 Deployment ➡️ In a production agent, passing the entire conversation history or full documents into the model for every turn increases token usage and latency—often with diminishing returns. Lean principle: Use summarization, context distillation, or selective memory to trim inputs. Only pass what’s essential for the model to reason, respond, or act.
Lean Agentic AI is a design philosophy that brings sustainability, efficiency, and control to agent development—by treating cost, carbon, and complexity as first-class concerns. For more details, visit 👉 https://leanagenticai.com/ #AgenticAI #LeanAI #LangChain #SustainableAI #LLMOps #FinOpsAI #AIEngineering #ModelEfficiency #ToolCaching #CarbonAwareAI LangChain
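Two of the lean principles above (caching repeated doc fetches, and routing easy cases past the expensive model) can be sketched in a few lines. `fetch_doc`, `cheap_classifier`, and `expensive_model` are hypothetical stand-ins, with call counters to make the saved work visible.

```python
from functools import lru_cache

# Call counters to show which expensive operations actually ran.
CALLS = {"fetch": 0, "expensive": 0}

@lru_cache(maxsize=128)               # repeated fetches hit the cache, not the API
def fetch_doc(doc_id: str) -> str:
    CALLS["fetch"] += 1
    return f"contents of {doc_id}"    # stand-in for a real documentation fetch

def cheap_classifier(message: str) -> str:
    """Stand-in for a small model or heuristic that triages the inquiry."""
    return "faq" if "password" in message.lower() else "complex"

def expensive_model(message: str) -> str:
    """Stand-in for a large-model call; only reached when triage escalates."""
    CALLS["expensive"] += 1
    return f"personalized reply to: {message}"

def handle(message: str) -> str:
    if cheap_classifier(message) == "faq":
        return "See: " + fetch_doc("password-reset-guide")
    return expensive_model(message)

handle("How do I reset my password?")
handle("How do I reset my password?")   # second call: cached doc, still no LLM call
assert CALLS == {"fetch": 1, "expensive": 0}
handle("My refund was denied and I want to know why")
assert CALLS["expensive"] == 1
```

The counters are the lean-metrics idea in miniature: track calls, not just answers, and the hidden cost of an agent design becomes measurable.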