User Experience Considerations for Chatbot Updates


Summary

User experience considerations for chatbot updates involve designing chatbot interactions that are easy to use, trustworthy, and accessible, ensuring the technology genuinely meets user needs and works for everyone. This includes making chatbots clear, transparent, and adaptive, while also maintaining proper memory and accessibility for all users.

  • Prioritize transparency: Always let users know they are interacting with a chatbot, and clearly communicate system status or potential limitations throughout the conversation.
  • Design for accessibility: Make sure chatbots are easy to navigate for all users, including those using screen readers or keyboard-only controls, and announce dynamic content changes reliably.
  • Maintain context: Implement proper state management so chatbots remember user preferences and past interactions, creating smoother and more personalized conversations.
Summarized by AI based on LinkedIn member posts
  • Vitaly Friedman

    Practical insights for better UX • Running ā€œMeasure UXā€ and ā€œDesign Patterns For AIā€ • Founder of SmashingMag • Speaker • Loves writing, checklists and running workshops on UX. šŸ£

    226,009 followers

    šŸ¤– How To Design Better AI Experiences. With practical guidelines on how to add AI when it can help users, and avoid it when it doesn't ↓

    Many articles discuss AI capabilities, yet most of the time the issue is that these capabilities either feel like a patch for a broken experience, or they don't meet user needs at all. Good AI experiences start, like every good digital product, by understanding user needs first.

    🚫 AI isn't helpful if it doesn't match existing user needs.
    šŸ¤” AI chatbots are slow and often expose underlying UX debt.
    āœ… First, we revisit key user journeys for key user segments.
    āœ… We examine slowdowns, pain points, repetition, errors.
    āœ… We track accuracy, failure rates, frustrations, drop-offs.
    āœ… We also study critical success moments that users rely on.
    āœ… Next, we ideate how AI features can support these needs.
    ↳ e.g. Estimate, Compare, Discover, Identify, Generate, Act.
    āœ… Bring data scientists, engineers, PMs to review/prioritize.
    šŸ¤” High accuracy > 90% is hard to achieve and rarely viable.
    āœ… Design input UX, output UX, refinement UX, failure UX.
    āœ… Add prompt presets/templates to speed up interaction.
    āœ… Embed new AI features into existing workflows/journeys.
    āœ… Pre-test if customers understand and use new features.
    āœ… Test accuracy + success rates for users (before/after).

    As designers, we often set unrealistic expectations of what AI can deliver. AI can't magically resolve accumulated UX debt or fix broken information architecture. If anything, it visibly amplifies existing inconsistencies, fragile user flows and poor metadata. Many AI features that we envision simply can't be built, as they require near-perfect AI performance to be useful in real-world scenarios. AI can't be as reliable as software usually should be, so most AI products don't make it to market: they solve the wrong problem, and do so unreliably. As a result, AI features often feel like a crutch for an utterly broken product.

    AI chatbots impose the burden of properly articulating intent and refining queries on end customers. And we often focus so much on AI that we leave much-needed human review out of the loop. Good AI products start by understanding user needs, and sprinkling a bit of AI where it helps people — recover from errors, reduce repetition, avoid mistakes, auto-correct imported files, auto-fill data, find insights. AI features shouldn't feel disconnected from the actual user flow.

    Perhaps the best AI in 2025 is ā€œquietā€ — without any sparkles or chatbots. It just sits behind a humble button or runs in the background, doing the tedious job that users had to slowly do in the past. It shines when it fixes actual problems users have, not when it screams for attention it doesn't deserve.

    Useful resources: AI Design Patterns, by Emily Campbell https://www.shapeof.ai AI Product-Market-Fit Gap, by Arvind Narayanan, Sayash Kapoor https://lnkd.in/duEja695 [continues in comments ↓]

  • Patricia Reiners✨

    AI x UX Specialist | Podcast FUTURE OF UX | W&V 100 2023 | Creating great user experiences and exploring AI, Spatial Design & Innovation

    27,393 followers

    How proactive AI will change UX - šŸ“† schedule ChatGPT requests!

    OpenAI has introduced a new task scheduling feature for ChatGPT. This means you can now ask ChatGPT to handle tasks at a future time — like sending you a weekly global news update, recommending a daily personalized workout, or setting reminders for important events.

    šŸ’” Why is this interesting from a UX perspective? This shift is a step toward proactive AI — moving from reactive systems (waiting for user input) to anticipatory, context-aware experiences that help users save mental energy and stay on top of their routines.

    Let's break it down with a real-life use case - creating daily recipes: I currently eat sugar-free, gluten-free (because I am celiac), and generally low-carb, and like to let ChatGPT create recipes for me. I don't want a fixed meal plan, but I do need flexible, personalized recipe suggestions that fit my nutrition goals. Ideally, I'd want ChatGPT to
    → automatically suggest 3-4 recipes daily around 3 PM
    → send them to me
    → and, based on my choices, adjust future suggestions to what I've already eaten that week (for balanced nutrients).
    With the new task feature, this kind of personalized experience could become much more seamless. I wouldn't need to ask repeatedly — the assistant would learn my preferences over time and adapt its suggestions accordingly.

    šŸŽÆ What can we learn from this for AI-UX design?
    1ļøāƒ£ From static interactions to dynamic experiences: We often design AI tools that rely on users asking for something. But this update shows the value of continuous, evolving interactions. Users shouldn't need to start from scratch every time — systems can proactively adjust to their needs and context.
    2ļøāƒ£ Mental models of AI assistants: For users to trust AI routines, they need to understand what the assistant will do and when. It's about designing predictability and transparency in a way that still allows for flexibility and spontaneity.
    3ļøāƒ£ Proactive ≠ intrusive: There's a fine balance between helpful and annoying. The best AI interactions feel like a supportive partner — offering assistance at the right time, based on context and past behavior, without overwhelming users with irrelevant notifications.

    In AI-UX, we're increasingly designing for systems that adapt and evolve with the user. This new feature is a great example of how AI might shift from a passive tool to an active assistant — can't wait to try it. How do you see proactive AI changing the way we design user experiences? Would love to hear your thoughts! šŸ‘€

  • Nick Babich

    Product Design | User Experience Design

    85,918 followers

    šŸ’” Design Principles for AI Chatbots

    Recently, I had an interesting (but somewhat frustrating) experience with an AI chatbot designed by one of the world's largest eCommerce platforms. The issue was that the AI assistant wasn't very helpful in resolving a simple problem I encountered with my order, and it kept giving me generic suggestions that didn't work in my case. What made things worse was that the assistant pretended to be a real human, and it took me three attempts (three separate dialogues) to realize it was actually AI. In the end, I solved my problem by reaching out to a real human agent — although the AI was reluctant to connect me to one.

    I've decided to write this post not to mock a poorly designed AI product, but rather to share 3 foundational rules that will help product designers create more human-friendly AI chatbots:

    1ļøāƒ£ Be transparent and communicate system status
    āœ… Be transparent about who users speak to. If a user is speaking with an AI chatbot, they should know it upfront, at the beginning of the conversation. Never make AI pretend to be a real human.
    āœ… Display disclaimers where AI might generate uncertain or probabilistic answers. This is especially important in areas that can cause risks for users (such as financial operations).
    āœ… Add system messages (e.g., "AI is typing…") to clearly communicate the waiting time to the user.
    āœ… Allow users to get a transcript of a conversation with a chatbot in one click.

    2ļøāƒ£ Clarity is a top priority
    āœ… Keep responses concise, structured, and scannable; avoid overwhelming users with long text blocks.
    āœ… Maintain context within the session and remind users when needed ("Earlier you mentioned…").
    āœ… Use onboarding hints or contextual examples to set expectations about what AI can or cannot do.
    āœ… Use different chat window sizes for different tasks. For long, complex tasks, it's better to use a full-page screen. For short, momentary tasks, a contextual chat widget works great.

    3ļøāƒ£ Offer freedom of choice to users
    āœ… Users should have the freedom to interact with AI in the way they like — using the full spectrum of natural language or quick replies (i.e., contextual shortcuts). Offer quick replies or buttons to reduce typing effort and guide interactions.
    āœ… Provide undo, edit, or re-ask options. For example, if the user decides to go back to the previous step, AI should not restrict this.
    āœ… Always keep the exit routes clear (return to home, escalate to human).
    āœ… Allow users to "speak to a human" if AI is not helpful (usually, people write "operator" to speak to a human).

    šŸ“– Chat UI Patterns by Visa Design System: https://lnkd.in/de8-WG_S šŸ–¼ļø Chat interface by Dennis Snellenberg #UX #uxdesign #productdesign #design #chatbot

  • Diana Khalipina

    WCAG & RGAA web accessibility expert | Frontend developer | MSc Bioengineering

    15,270 followers

    We all use chatbots, and most of them are still inaccessible.

    After a quick look at the Digital Trust Index 2025 (which shows that 13 out of 15 popular Belgian chatbots fail basic accessibility checks; the link: https://lnkd.in/gnYPHE67), I started paying attention to chatbots I personally use, and I found similar patterns. A recent academic study analysing 106 real web-based chatbots across healthcare, education and customer service found that over 80% had critical accessibility issues, and ~45% lacked proper semantic structure or misused ARIA roles, making them partially or fully unusable with assistive technologies. The link to read the study: https://lnkd.in/eJkd2hfT

    Here are real-life examples that many of us use:

    1ļøāƒ£ ChatGPT
    What works:
    • Keyboard navigation generally works - you can tab through messages.
    Challenges in accessibility:
    • Screen readers often announce long blocks of text without meaningful structure (headings, roles).
    • Dynamic updates may not always be announced via ARIA live regions, leading to missed or delayed feedback.
    • Focus sometimes jumps unexpectedly after submitting a prompt.
    These are not uncommon issues: many rich JS UIs generate content without robust announcement patterns.

    2ļøāƒ£ Banking / Telecom Support Chatbots
    Examples I've personally tested:
    Orange France support bot
    • Keyboard: can't open the bot using Tab alone — must click.
    • Screen reader: bot status changes are not announced reliably.
    • Focus may become trapped inside controls.
    BNP Paribas support chatbot
    • Chat window opens but doesn't maintain a consistent focus order.
    • Screen reader labels on dynamic replies are inconsistent.
    These are problems that violate basic WCAG patterns for dialogs and focus management.

    3ļøāƒ£ Transportation / Travel Chatbots
    SNCF / Oui.sncf support bot
    • Text input fields with low contrast against the background.
    • Tabbing traps in sections - can't reach the ā€œcloseā€ button.
    • Screen readers announce ā€œbuttonā€ repeatedly without context.
    These issues make a mission-critical service harder for low-vision, keyboard-only, or screen reader users.

    4ļøāƒ£ Retail Chatbots
    Fnac online support
    • Chat interface often overlays content without clear landmarks.
    • Screen reader may read page content and hidden elements at the same time.
    • Visible focus indicators are minimal or absent.
    This is a disorienting experience for assistive-technology users trying to understand where they are.

    Why these problems matter, even with AI:
    • Dynamic content must be announced reliably - chatbots change the DOM constantly.
    • Focus management is essential - users must know where they are and where to go next.
    • ARIA roles & live regions are not optional - they're required for assistive tech to interpret changes.
    When these fundamentals are missing, no amount of ā€œAI smartsā€ can rescue it.

    #WebAccessibility #Chatbots #AIAccessibility #InclusiveDesign #UX #DigitalTrust #WCAG #AssistiveTech
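The recurring failures above (unannounced updates, buttons announced without context, unreachable close controls) map to a small set of ARIA attributes. A hedged TypeScript sketch, building markup as plain strings purely for illustration; the attribute choices follow standard WAI-ARIA patterns, but the widget structure itself is an assumption, not any vendor's actual implementation.

```typescript
// Illustrative markup for an accessible chat widget. Strings stand in
// for real DOM construction so the essential attributes are visible.

function chatContainer(messagesHtml: string): string {
  // role="log" + aria-live="polite": newly appended messages are announced
  // by screen readers without stealing keyboard focus from the input.
  return `<section aria-label="Chat with support">
  <div role="log" aria-live="polite" aria-atomic="false">${messagesHtml}</div>
  <label for="chat-input">Your message</label>
  <input id="chat-input" type="text" />
  <button type="button" aria-label="Close chat">Ɨ</button>
</section>`;
}

function botMessage(text: string): string {
  // Name the sender so assistive tech never reads a bare reply (or a bare
  // "button") without context.
  return `<p><span class="visually-hidden">Assistant said:</span> ${text}</p>`;
}

const html = chatContainer(botMessage("How can I help?"));
console.log(html.includes('aria-live="polite"')); // true
```

A real widget would also need focus management in script (move focus into the dialog on open, return it to the trigger on close), which is exactly the part the audited bots above get wrong.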

  • Aditya Santhanam

    Founder | Building Thunai.ai

    10,127 followers

    Conversational AI forgets. Users expect it to remember. That gap costs you trust. Here's how state management fixes it:

    → Session State Architecture
    Every conversation needs a container. Store user inputs, bot responses, and context. Without structure, the AI starts fresh every time.

    → Memory Types Matter
    Short-term: holds the current conversation flow. Long-term: remembers user preferences across sessions. Blend both to create continuity.

    → Database vs In-Memory Storage
    In-memory = fast but volatile. Database = persistent but slower. Use in-memory for active chats. Move to database when sessions pause.

    → Multi-Turn Handling
    Track conversation threads, not single messages. Link questions to previous answers. Context dies without proper turn tracking.

    → Scalability Considerations
    Thousands of users = thousands of states. Compress old data. Set timeout policies. Clean up abandoned sessions automatically.

    State management isn't optional. It's the difference between a chatbot and a conversation partner.

    šŸ”„ Repost this if you've seen AI lose context mid-conversation. āž”ļø Follow Aditya for insights that turn AI theory into working systems
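The checklist above can be sketched as a small state container. This is an illustrative TypeScript sketch under stated assumptions, not Thunai's implementation: in-memory Maps stand in for the "fast but volatile" tier, sessions expire on a timeout, and long-term preferences survive the cleanup sweep.

```typescript
// Illustrative only: class and method names are assumptions for the sketch.

interface Turn { role: "user" | "bot"; text: string }

interface Session {
  turns: Turn[];      // short-term memory: the current conversation thread
  lastActive: number; // timestamp used by the timeout policy
}

class ChatState {
  private sessions = new Map<string, Session>();                 // volatile tier
  private prefs = new Map<string, Record<string, string>>();     // long-term tier

  constructor(private timeoutMs = 30 * 60 * 1000) {}

  addTurn(userId: string, turn: Turn, now = Date.now()): void {
    const s = this.sessions.get(userId) ?? { turns: [], lastActive: now };
    s.turns.push(turn);
    s.lastActive = now;
    this.sessions.set(userId, s);
  }

  setPreference(userId: string, key: string, value: string): void {
    const p = this.prefs.get(userId) ?? {};
    p[key] = value;
    this.prefs.set(userId, p);
  }

  // Multi-turn handling: hand the model the whole thread plus preferences,
  // not just the latest message.
  context(userId: string): { turns: Turn[]; prefs: Record<string, string> } {
    return {
      turns: this.sessions.get(userId)?.turns ?? [],
      prefs: this.prefs.get(userId) ?? {},
    };
  }

  // Cleanup: drop abandoned sessions; long-term preferences are kept.
  sweep(now = Date.now()): void {
    for (const [id, s] of this.sessions) {
      if (now - s.lastActive > this.timeoutMs) this.sessions.delete(id);
    }
  }
}

const state = new ChatState(1000); // 1 s timeout to demonstrate the sweep
state.addTurn("u1", { role: "user", text: "gluten-free recipes" }, 0);
state.setPreference("u1", "diet", "gluten-free");
state.sweep(2000);                               // session timed out...
console.log(state.context("u1").turns.length);   // 0
console.log(state.context("u1").prefs.diet);     // "gluten-free" survives
```

In a production system the `sweep` step would flush paused sessions to a database instead of discarding them, which is the in-memory-to-persistent handoff the post describes.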

  • Sandra Roosna

    AI chatbots that actually work | Perfect human + AI blend | Askly is built by one of your customers | For your customer service & sales team šŸ’›

    10,628 followers

    After testing over 3,000 live chat and chatbot solutions, one crucial insight emerged that I must share!

    🟔 Chat History Vanishes: When customers change browsers, leave the site and return, or even navigate around the site, conversations often disappear, forcing them to start over again.

    šŸ” Why It Matters:
    • Such CX frustrates your customers.
    • We expect conversations with businesses to be reliable, or we might prefer competitors' support.
    • It impacts sales and, after all, customer loyalty.

    šŸ’” Pro Tip: Ensure your #chatbot retains history across sessions, or consider this when choosing an #AI assistant or chatbot solution. Smart, seamless interactions build trust and satisfaction. Let's improve CX together!

    #Chatbots #CustomerExperience #UserEngagement #professional #customerexperience #LiveChat #CustomerSupport
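One way to implement the pro tip above is to key chat history to a stable customer identifier on the server rather than to browser storage, so the thread survives a browser change or a return visit. A minimal sketch, with a Map standing in for the database and all names purely illustrative:

```typescript
// Illustrative persistence sketch: historyDb stands in for a real database.

interface StoredMessage { at: number; from: "user" | "bot"; text: string }

const historyDb = new Map<string, StoredMessage[]>(); // userId → thread

function appendMessage(userId: string, msg: StoredMessage): void {
  const thread = historyDb.get(userId) ?? [];
  thread.push(msg);
  historyDb.set(userId, thread);
}

// Called when the widget opens - from any browser, device, or page.
function restoreThread(userId: string): StoredMessage[] {
  return historyDb.get(userId) ?? [];
}

appendMessage("customer-42", { at: 1, from: "user", text: "Where is my order?" });
appendMessage("customer-42", { at: 2, from: "bot", text: "Checking order status…" });

// Simulate a fresh browser session: no client-side state, only the user id.
const restored = restoreThread("customer-42");
console.log(restored.length); // 2
```

The essential design choice is that the client holds only an identifier; the conversation itself lives server-side, so navigation and browser changes cannot erase it.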

  • Mohsen Ghasempour, Ph.D.

    Chief AI Officer @ Kingfisher | Non-Executive Director @ Hays Travel | Keynote Speaker | Advisor

    3,798 followers

    Is Your AI Agent Inclusive?

    Chatbots are probably one of the most common ways people interact with an AI agent today. They are the way most people experience AI directly, so making them accessible is not optional. It is essential.

    At Kingfisher plc we are committed to building AI experiences that everyone can use across our brands: B&Q, Screwfix, Castorama and BRICO DƉPƔT. Our new article by Sheldon Marsh explores what accessibility really means for chatbots and why it matters so much. We talk about simple but powerful choices like using semantic HTML, designing for screen readers, and thinking beyond visuals so the experience works for every user.

    We also share some of the real challenges we hit along the way. For example, how a typing animation made messages impossible for screen readers to understand, and how we engineered a solution that kept the conversational feel while staying fully accessible.

    The takeaway is simple: accessibility is not just good practice. It makes your product better for everyone. If you are curious about how we are building inclusive AI across our business, or you are passionate about accessibility, give it a read.

    #AIInWork #Accessibility #InclusiveDesign #Chatbots #AIAgents #UserExperience #WebAccessibility #DigitalInclusion #TechForGood #KingfisherAI

  • Karin Pespisa, MBA

    AI Behavior Architect | Prompt Engineering | Conversation Design | Model UX

    4,502 followers

    šŸ“¢ Chatbot Europe 2025: Conversational AI is evolving, and businesses need to stay ahead. Here are key takeaways with high potential ROI:

    āœ… Proactive Chatbots are the Future: Companies like Scotiabank see value in chatbots that initiate conversations to address user pain points. The practice of ā€œshifting AI left to identify where users are having pain points with products or services,ā€ and deploying chatbots to initiate proactive conversations, can reduce support costs, says Cassie MacKenzie, Lead Content Designer at Scotiabank.

    āœ… Content Personalization: Jesse L., Digital Product Manager at Lebara, envisions proactive chatbots serving personalized content to users in situ: ā€œWhy can't a website adapt to the conversation a chatbot is having?ā€ This is a mindblowing idea.

    āœ… Faster Development, Smarter Testing: LLMs are accelerating chatbot development, but the focus is shifting to rigorous testing and ensuring reliability. LLM testing and evaluation is paramount for building long-term trust in your brand. Rushing to GTM? Reliable AI business assistants mitigate the costs of AI hallucination errors and brand-damaging chatbot fails. Still, many companies have yet to define effective LLM testing and evaluation processes. šŸ‘€

    āœ… UX Still Reigns Supreme: As Eunji Jeong, Chief Design Officer at Design Connected, pointed out, understanding why users are interacting and designing clear, empathetic flows is crucial for success. ā€œTo get a chatbot right, consider what problem the person is trying to solve,ā€ said Eunji. ā€œConsider three categories of user interaction. They are to get info, take action or get results.ā€

    āœ… Meet User Emotion: AI assistants that read and respond to user emotion build stronger connections and elevate customer experience. This goes beyond understanding words through NLP.
    ✨ VĆ­ctor Leon, Customer Success Director at Mplus stated, "A chatbot should be able to read emotion to see if a call is going in the right direction. And read emotion again as the conversation progresses."
    ✨ Giulia van den Winkel, Conversational Designer at GetYourGuide, advises matching chatbot response length to the length of each user's input prompt. If longer responses are needed, ā€œbreak them into two responses and stick to no more than three points in each.ā€ This makes the conversation sound more human.
    ✨ Postdoctoral researcher Oksana šŸ’› šŸ’™ Hagen shared, ā€œOne of the things that helped is instructing the bot to match the tone of the speaker. Now, it can come up with short sentences in the same style as the user.ā€

    Which of these trends are you seeing? Let's discuss šŸ‘‡
    ✨ Day 1 Highlights: https://lnkd.in/e9ys3gKM
    ✨ When RAG stops with G: https://lnkd.in/euRvAamw
    #ai #chatbots #aiagents #ux #cx #conversationalai #ChatbotEurope

  • Niels Van Quaquebeke

    Human | Professor of Leadership | Author, Speaker, Educator | Psychologist, on a mission to improve leadership at work.

    14,259 followers

    As AI chatbots—especially those with expressive voice capabilities—become more human-like, more users are turning to them not just for information, but for emotional support and companionship. But what are the psychological consequences of these interactions?

    A recent four-week randomized controlled study (n = 981, >300,000 messages) explored how different chatbot features—such as voice style (text, neutral voice, engaging voice) and conversation type (personal, non-personal, open-ended)—influence users' experiences of loneliness, social connection, and emotional dependence on AI.

    šŸ” Key insights from the study:
    ā˜ Voice-based chatbots initially reduced loneliness and emotional dependence more effectively than text-based ones—but these effects disappeared with heavier use, especially when the voice was neutral.
    ā˜ Personal conversations slightly increased loneliness but also reduced dependence; non-personal topics led to greater emotional attachment, particularly among heavy users.
    ā˜ High daily usage—across all chatbot types—was linked to increased loneliness, higher emotional dependence, and less social interaction with real people.
    ā˜ Users with stronger emotional attachment tendencies or higher trust in the chatbot were especially vulnerable to these effects.

    This research highlights the delicate balance between the design of emotionally expressive AI and user behavior. While chatbots have the potential to support emotional well-being, the study raises important questions about how to prevent overreliance and protect real-world social relationships. https://lnkd.in/dwQah9AS

  • Cesc Vilanova

    Founder at Agent Studio | We help teams adopt generative AI to transform how they work 🧭

    4,859 followers

    Why do chatbots still frustrate users, even as technology keeps improving? Here's what recent research tells us šŸ‘‡

    Studies by Ipsos and The Wharton School (links in the first comment) show that even though chatbots are everywhere, 77% of people still find them frustrating. Even more striking: 88% would still rather talk to a real person, no matter how accurate the chatbot is. The problem isn't just technical, it's psychological. If we want to create better conversational experiences, we need to pay closer attention to how users feel.

    Wharton's Blueprint found that simply telling users a bot is "always available," "instantly responsive," or "learning after each interaction" can boost satisfaction by up to 37%. But when people are stressed or upset (like after a cancellation), they prefer bots that get to the point quickly, not ones that try to sound human. On the other hand, when a chatbot delivers good news (like a refund or an upgrade), a bit of warmth can make a real difference. Small changes in language or tone can shift satisfaction much more than any technical upgrade.

    As product builders, we need to rethink our approach. Instead of focusing only on perfectly accurate responses, we should care more about how the experience feels. Maybe we should stop focusing so much on LLM benchmarks and start paying closer attention to the nuances of agent-human conversations. Not an easy problem to solve. Let me know if you're trying to tackle this too. We might be able to help šŸ™‚

    What have you learned building conversational products? Have you used a chatbot that actually felt human in a good way? Share your story below šŸ’­

    #conversationaldesign #nlx #chatbots #productdesign
