User Experience Design for Chatbots

Explore top LinkedIn content from expert professionals.

  • View profile for Vitaly Friedman
    Vitaly Friedman is an Influencer

    Practical insights for better UX • Running “Measure UX” and “Design Patterns For AI” • Founder of SmashingMag • Speaker • Loves writing, checklists and running workshops on UX. 🍣

    225,959 followers

    🔮 Design Patterns For AI Interfaces (https://lnkd.in/dyyMKuU9), a practical overview of emerging AI UI patterns, layout considerations and real-life examples — along with interaction patterns and limitations. Neatly put together by Sharang Sharma.

    One of the major shifts is the move away from traditional chat-like AI interfaces. As Luke Wroblewski wrote, when agents can use multiple tools, call other agents and run in the background, users orchestrate AI work — there’s a lot less chatting back and forth.

    In fact, chatbot widgets are rarely an experience paradigm that people truly enjoy and fall in love with, mostly because the burden of articulating intent efficiently lies on the user. It can be done (and we’ve learned to do it), but it takes an incredible amount of time and articulation to give AI enough meaningful context for it to produce meaningful insights.

    As it turns out, AI is much better at generating a prompt from the user’s context and then feeding it back into itself. So we see more task-oriented UIs, semantic spreadsheets and infinite canvases — with AI proactively asking questions with predefined options, suggesting presets and templates to get started, or collecting context autonomously and emphasizing the work, the plan, the tasks — the outcome, instead of the chat input.

    All of these are examples of great User-First, AI-Second experiences: not experiences circling around AI features, but experiences that truly amplify value for users by sprinkling a bit of AI in places where it delivers real value to real users. And that’s what makes truly great products — with AI or without.
    ✤ Useful Design Patterns Catalogs:
    • Shape of AI: Design Patterns, by Emily Campbell 👍 https://shapeof.ai/
    • AI UX Patterns, by Luke Bennis 👍 https://lnkd.in/dF9AZeKZ
    • Design Patterns For Trust With AI, via Sarah Gold 👍 https://lnkd.in/etZ7mm2Y
    • AI Guidebook Design Patterns, by Google https://lnkd.in/dTAHuZxh

    ✤ Useful resources:
    • Usable Chat Interfaces to AI Models, by Luke Wroblewski https://lnkd.in/d-Ssb5G7
    • The Receding Role of AI Chat, by Luke Wroblewski https://lnkd.in/d8xcujMC
    • Agent Management Interface Patterns, by Luke Wroblewski https://lnkd.in/dp2H9-HQ
    • Designing for AI Engineers, by Eve Weinberg https://lnkd.in/dWHstucP

    #ux #ai #design
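    The "AI generates its own prompt from the user's context" pattern described above can be sketched in a few lines. This is a hypothetical illustration, not any product's actual implementation: `call_model` is a stand-in for whatever chat-completion API an app uses, and the context fields are invented for the example.

```python
def call_model(prompt: str) -> str:
    # Placeholder: a real app would call an LLM API here.
    return f"[model response to: {prompt[:40]}...]"

def build_prompt_from_context(context: dict) -> str:
    """Turn context the app already has into a task-oriented prompt,
    so the user starts from selectable suggestions instead of a blank chat box."""
    return (
        f"The user is working on '{context['task']}' in {context['app_area']}. "
        f"Recent actions: {', '.join(context['recent_actions'])}. "
        "Propose three concrete next steps as short, selectable options."
    )

# Hypothetical context collected by the app, not typed by the user.
context = {
    "task": "quarterly report",
    "app_area": "spreadsheet view",
    "recent_actions": ["imported CSV", "created pivot table"],
}
prompt = build_prompt_from_context(context)
options = call_model(prompt)  # the user picks an option instead of articulating intent
```

    The point is the inversion of effort: the burden of articulating intent moves from the user's keyboard into context the system assembles itself.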

  • View profile for Gianluca Brugnoli

    VP Design at TomTom - PhD, UX-CX Strategy and leadership, Automotive UX

    10,893 followers

    AI is not killing UX. It’s proving its importance. Even the most advanced AI needs good UX to unlock user value and drive adoption at scale.

    AI companies are now racing to embed their models into everyday products, like smartphones, web browsers, mobile apps, and wearables. The key goal is to boost consumer adoption to make sense of their huge investments. Making AI more accessible to a broader audience is crucial for accelerating growth and driving business impact. AI companies’ current growth issues are also critical UX challenges.

    Chatbot interfaces are useful for simple tasks but fall short in complex use cases and journeys. Feature discoverability is very limited; most of the system’s capabilities are hidden, often leaving users unaware of what’s possible and what they can do with the technology. Without visual cues, guidance and clear options, the learning curve is steep, and users struggle to understand, engage, explore advanced features, and achieve meaningful outcomes.

    Chatbots are also too limited for expert users with specific needs and complex, advanced use cases. We’ve moved past the simple “ask a question to a chatbot” experience. One interface does not fit all scenarios. To unlock AI’s full potential, these users need tools embedded within their workflows, with rich interfaces designed for the specific task and use case: something that supports and guides them toward valuable outcomes.

    AI companies now realize that success hinges on deeper integration into people's digital lives, with a better understanding of user intent and the context of each interaction. Technology can evolve very rapidly, but the human brain does not. People interact with digital products following familiar patterns: they skim, scroll, choose and expect immediate, clear feedback. Ignoring these behaviors in favor of technology-first solutions leads to disengagement that hinders growth.

    Technology alone doesn’t make a product. It’s UX design that transforms innovation into something people want to use. Despite multiple predictions that AI will make UX design (and UI) obsolete, the reality today tells a different story. Even the most advanced AI technology cannot address critical challenges like user value, adoption, conversion, and retention on its own.

    The success of AI rests on a timeless design principle: understanding and designing for users’ needs and behaviors, instead of expecting users to adapt to technology. UX isn’t optional. It’s the key to unlocking AI’s full potential for millions of users and driving long-term business impact. To be successful, AI needs good UX.

  • View profile for Niels Van Quaquebeke

    Human | Professor of Leadership | Author, Speaker, Educator | Psychologist, on a mission to improve leadership at work.

    14,258 followers

    As AI chatbots—especially those with expressive voice capabilities—become more human-like, more users are turning to them not just for information, but for emotional support and companionship. But what are the psychological consequences of these interactions?

    A recent four-week randomized controlled study (n = 981, >300,000 messages) explored how different chatbot features—such as voice style (text, neutral voice, engaging voice) and conversation type (personal, non-personal, open-ended)—influence users’ experiences of loneliness, social connection, and emotional dependence on AI.

    🔍 Key insights from the study:
    ☝ Voice-based chatbots initially reduced loneliness and emotional dependence more effectively than text-based ones—but these effects disappeared with heavier use, especially when the voice was neutral.
    ☝ Personal conversations slightly increased loneliness but also reduced dependence; non-personal topics led to greater emotional attachment, particularly among heavy users.
    ☝ High daily usage—across all chatbot types—was linked to increased loneliness, higher emotional dependence, and less social interaction with real people.
    ☝ Users with stronger emotional attachment tendencies or higher trust in the chatbot were especially vulnerable to these effects.

    This research highlights the delicate balance between the design of emotionally expressive AI and user behavior. While chatbots have the potential to support emotional well-being, the study raises important questions about how to prevent overreliance and protect real-world social relationships. https://lnkd.in/dwQah9AS

  • View profile for Aaron "Ronnie" Chatterji
    Aaron "Ronnie" Chatterji is an Influencer

    Chief Economist of OpenAI and Distinguished Professor at Duke University

    30,106 followers

    Another ChatGPT consumer usage paper insight. Asking, Doing, Expressing: A New Lens on How We Use AI.

    Over the past year, we've seen a shift in how people use tools like ChatGPT.
    - 49% of all messages are “Asking” prompts: people seeking advice, context, or judgment to inform decisions.
    - 40% are “Doing” prompts: completing tasks like writing or coding.
    - 11% are “Expressing” prompts: creative self-expression or play.

    "Asking" is the fastest-growing and most highly rated category. By June 2025, it represented 52% of messages, outpacing “Doing” (35%). At work, "Doing" still dominates, but highly educated and professional users are leaning heavily into “Asking.”

    Why does this matter? It suggests AI is both a tool helping people complete tasks and a teammate performing cognitive collaboration.

    A wonky side note: these figures are based on cumulative message data over time, not just a snapshot. The trend line matters.

    Curious to hear from others: how are you using AI? More for asking, doing, or expressing?

  • View profile for Gayatri Agrawal

    Building AI transformation company @ ALTRD

    35,886 followers

    Everyone’s excited to launch AI agents. Almost no one knows how to measure if they’re actually working.

    Over the last year, we’ve seen brands launch everything from GenAI assistants to support bots to creative copilots, but the post-launch metrics often look like this:
    • Number of chats
    • Average latency
    • Session duration
    • Daily active users

    Useful? Yes. But sufficient? Not even close. At ALTRD, we’ve worked on AI agents for enterprises, and if there’s one lesson, it’s this: speed and usage mean nothing if the agent isn’t solving the actual problem. The real performance indicators are far more nuanced. Here’s what we’ve learned to track instead:

    🔹 Task Completion Rate — Can the AI go beyond answering a question and actually complete a workflow?
    🔹 User Trust — Do people come back? Do they feel confident relying on the agent again?
    🔹 Conversation Depth — Is the agent handling complex, multi-turn exchanges with consistency?
    🔹 Context Retention — Can it remember prior interactions and respond accordingly?
    🔹 Cost per Successful Interaction — Not just cost per query, but cost per outcome. Massive difference.

    One of our clients initially celebrated their bot’s 1 million+ sessions - until we uncovered that less than 8% of users actually got what they came for. That 8% wasn’t a usage issue. It was a design and evaluation issue. They had optimized for traffic. Not trust. Not success. Not satisfaction.

    So we rebuilt the evaluation framework, adding feedback loops, success markers, and goal-completion metrics. The results? CSAT up by 34%. Drop-off down by 40%. Same infra cost, 3x more value delivered.

    The takeaway: don’t just measure what’s easy. Measure what matters. AI agents aren’t just tools - they’re touchpoints. They represent your brand, shape user experience, and influence business outcomes.

    P.S. What’s one underrated metric you’ve used to evaluate AI performance? Curious to learn what others are tracking.
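    The outcome-oriented metrics above are easy to compute once session logs record outcomes rather than only traffic. A minimal sketch, assuming invented log fields (`goal_completed`, `cost_usd`, `returned_within_7d`) that your analytics pipeline would need to capture:

```python
# Hypothetical session logs; field names are assumptions for illustration.
sessions = [
    {"turns": 6, "goal_completed": True,  "cost_usd": 0.04, "returned_within_7d": True},
    {"turns": 2, "goal_completed": False, "cost_usd": 0.01, "returned_within_7d": False},
    {"turns": 9, "goal_completed": True,  "cost_usd": 0.07, "returned_within_7d": True},
    {"turns": 3, "goal_completed": False, "cost_usd": 0.02, "returned_within_7d": False},
]

completed = [s for s in sessions if s["goal_completed"]]

# Task completion rate: share of sessions that finished the whole workflow.
task_completion_rate = len(completed) / len(sessions)

# Cost per successful interaction: total spend divided by outcomes, not by queries.
cost_per_success = sum(s["cost_usd"] for s in sessions) / len(completed)

# A simple trust proxy: return rate among users who actually got what they came for.
return_rate = sum(s["returned_within_7d"] for s in completed) / len(completed)
```

    The key design choice is the denominator: dividing cost by successful outcomes instead of raw queries is what separates "cost per query" vanity numbers from the cost-per-outcome figure the post argues for.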

  • View profile for Muazma Zahid

    Data and AI Leader | Advisor | Speaker

    18,900 followers

    Happy Friday! This week in #learnwithmz, we’re looking at what OpenAI ChatGPT usage tells us about how people actually use conversational AI. AI is not the same as ChatGPT, but ChatGPT gives us one of the clearest behavioral lenses into how millions of people interact with conversational AI. A new paper, 'How People Use ChatGPT', analyzed over 2.5 billion daily messages from 700 million users, uncovering how generative AI fits into daily life, learning, and work.

    Key Insights
    - Non-Work Dominates: By July 2025, more than 70% of messages were for non-work activities. From tutoring and how-to guidance to creative writing, people are using AI for personal curiosity and growth, not just productivity.
    - The Big 3 Use Cases: Nearly 80% of conversations focus on: 1) Practical Guidance (how-to and tutoring), 2) Writing (editing, summarizing), 3) Seeking Information (as a search alternative).
    - Work Use = Writing + Decision Support: For professional users, ChatGPT is used most for editing, refining, or summarizing text (42%), and for decision support when evaluating choices or making recommendations.
    - Closing Demographic Gaps: The gender gap in usage has nearly closed, with women slightly outpacing men in active use by mid-2025. Growth has also been fastest in low- and middle-income countries, highlighting broader accessibility.

    What this means for PMs, Designers, and Researchers
    - Chat-based interfaces are becoming the new default UX for reasoning and exploration. (It would be great to see the next version of the paper break down Voice AI vs. chat interface usage.)
    - AI adoption reflects human curiosity as much as utility, blending search, learning, and self-expression.

    Read the full paper, How People Use ChatGPT: https://lnkd.in/dUc8Qp2e

    Which area do you think will define the next wave of Conversational AI? #AI #ChatGPT #AIUX #ProductManagement #LLM #AIResearch #learnwithmz

  • View profile for Oren Greenberg
    Oren Greenberg is an Influencer

    Designing AI-Native GTM Systems for B2B Tech Revenue Leaders

    39,199 followers

    Optimising the user experience: humans chatting with bots and lovable AI

    In the world of mobile apps, AI chatbots have become game-changers, transforming customer service and user engagement with real-time interactions and intelligent support. While in 2020 only 3% of mobile apps featured AI technology, this figure rose to 34% in the first quarter of 2023 (BusinessofApps). Chatbots are also predicted to become the primary customer service channel for roughly a quarter of organisations by 2027 (Gartner), and efforts to make their interactions with humans as seamless as possible have taken centre stage.

    But what do customers really care about when they communicate with bots? According to a survey by Amazon Web Services (AWS), these are the three most important criteria for customer satisfaction:
    ➡️ Accuracy of response (89% of respondents)
    ➡️ Ability to understand questions and issues (88%)
    ➡️ Speed of resolution (86%)

    In this context, the introduction of AI-powered chatbots with different forms and functions keeps picking up pace among big tech companies:
    👻 TikTok is testing Tako, a new AI chatbot in the early stages of its development
    🎭 Meta is also leveraging AI to create a series of personas, chatbots with human-like characteristics
    💌 And as the humanisation of AI becomes more popular every day, Inflection AI has recently launched Pi, a personal intelligence bot focused on emotions and empathy

    But before delving into chatbots’ perceptions of us, we want to know what you think about them. Share your insights below 👇

  • View profile for Eugene L.

    GTM @ ElevenLabs

    20,661 followers

    🔊 Have you ever stayed on a customer‑service call simply because the person on the other end sounded trustworthy?

    🎧 Researchers from Beijing University of Technology, the University of Texas at Austin and the University of Memphis recently tested how different AI voices affect persuasion. Their findings:
    • 𝗙𝗹𝗶𝗿𝘁𝘆 𝗱𝗼𝗲𝘀𝗻’𝘁 𝘄𝗼𝗿𝗸. A playful “coquetry” voice actually decreased persuasion, especially for male chatbots.
    • 𝗦𝘁𝗲𝗿𝗻 𝗶𝗻𝘃𝗶𝘁𝗲𝘀 𝗾𝘂𝗲𝘀𝘁𝗶𝗼𝗻𝘀. Stern voices were just as effective as gentle ones and, in male voices, even increased customer questions.
    • 𝗔𝗴𝗲 𝗶𝘀𝗻’𝘁 𝘁𝗵𝗲 𝗶𝘀𝘀𝘂𝗲, 𝗲𝗻𝗴𝗮𝗴𝗲𝗺𝗲𝗻𝘁 𝗶𝘀. There was no significant difference between “young” and “old” voices. What mattered was that older‑sounding voices kept people talking longer.
    • 𝗪𝗼𝗿𝗱𝘀 𝗺𝗮𝘁𝘁𝗲𝗿. Using affirmative sentences - particularly in female voices - prompted more customer inquiries, whereas rhetorical questions were less effective.

    For leaders in banking and finance, this isn’t just academic. Voice is the new front door of your brand. A gentle but confident tone can build trust with high‑net‑worth clients. An affirmative female voice can reassure anxious SME owners. Conversely, a playful chatbot might unintentionally undermine credibility.

    𝗦𝗼𝗺𝗲 𝗾𝘂𝗶𝗰𝗸 𝗮𝗰𝘁𝗶𝗼𝗻𝘀 𝘁𝗼 𝗰𝗼𝗻𝘀𝗶𝗱𝗲𝗿:
    1. Audit your AI voice scripts. Are you using affirmative statements that invite dialogue?
    2. Experiment with different voice personas. Avoid flirty tones and observe how clients react.
    3. Treat voice as part of your CX strategy. Integrate data from calls, chatbots and apps so you can personalize the experience for each customer, because customer empathy is your competitive moat.

    We’ve moved from building “voices” metaphorically to designing them intentionally. The tone of your AI isn’t just a detail, it’s part of the customer experience. Link to research in the comments below. #AI #Voice

  • View profile for Michael Andrews

    Content Architect | Strategist | Evangelist

    3,214 followers

    Chatbots are shifting the emphasis from reading to questioning. This may have profound implications for how users interact with content.

    Web content interaction has largely mirrored print behaviors until now. Users were passive readers, absorbing (or not) the messages crafted by the author. Online content testing centered on issues such as whether users noticed information and messages (scroll and click behavior) and how they understood them (preference and comprehension evaluation). Ebooks might tally which passages were most frequently highlighted.

    Chatbots change what users expect from content. Many users are not interested in reading or even scanning source content. They want to extract things that interest them from the content without looking at the full text prepared by the author. Already, a range of apps are available that allow users to load URLs or PDFs of content and ask questions of the material. This adoption is happening rapidly among academic and tech researchers and spreading to the broader online population. These tools support a more active relationship with the topics than the passive reading offered by static pages. They allow users to look at content more broadly and selectively.

    With chatbots, the reader's agenda becomes more important than the author's. We can gather insights into what readers are curious about and how similar those interests are to what authors have written. It is not just whether the content "covers" an issue, but how the content addresses it. Users may ask questions about hypotheticals, introduce another context, and ask for a comparison. They may also ask why something is as it is and be disappointed if the content doesn't explain that. They may cross-interrogate multiple related articles from different sources to develop an opinion.

    The value of content will increasingly be defined by how effectively it shows its relevance to users' chatbot conversations.
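    The "load content, then question it" apps mentioned above generally follow a retrieve-then-answer shape. A toy sketch under stated assumptions: `ask_model` is a stand-in for any LLM call, and real tools rank passages with embeddings rather than the crude word overlap used here.

```python
def relevant_passages(document: str, question: str, top_k: int = 2) -> list[str]:
    """Pick the passages most relevant to the question, so the model
    sees only what the reader cares about, not the author's full text."""
    passages = [p.strip() for p in document.split("\n\n") if p.strip()]
    q_words = set(question.lower().split())
    # Rank passages by crude word overlap with the question (embeddings in practice).
    scored = sorted(passages, key=lambda p: len(q_words & set(p.lower().split())), reverse=True)
    return scored[:top_k]

def ask_model(question: str, context: list[str]) -> str:
    # Placeholder for a chat-completion call with the retrieved context.
    return f"Answer to '{question}' grounded in {len(context)} passage(s)."

doc = (
    "Chatbots shift reading to questioning.\n\n"
    "Print behavior was passive.\n\n"
    "Users now extract what interests them."
)
question = "How do chatbots change reading?"
answer = ask_model(question, relevant_passages(doc, question))
```

    The reader's question, not the author's layout, decides which passages the model sees; this is the shift from the author's agenda to the reader's that the post describes.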

  • View profile for Rasel Ahmed

    3× Co-Founder | CEO @ Musemind GmbH | UX Design Awards Jury | Top #2 Design Leadership Voice 🇩🇪 | Driving innovative, sustainable, empathetic AI × UX that delivers real impact

    51,696 followers

    AI has a UX problem that most designers ignore!

    Every week, I see product advice like:
    “Just slap a chatbot on it.”
    “Make it conversational.”
    “Ship fast, fix later.”

    But here’s the truth: the AI tools people actually use didn’t win because they were smart. They won because they were usable. Let me show you 5 examples:

    Example 1: OpenAI ↳ Didn’t take off until it became a chat interface.
    Example 2: Cursor ↳ Went viral by merging with dev workflows.
    Example 3: Granola ↳ Focused on human agency, not just answers.
    Example 4: Descript ↳ Succeeded by mimicking a word processor.
    Example 5: Notion AI ↳ Feels like writing, not prompting a robot.

    The secret? There is no secret. Just real UX work applied to AI. Here’s the new AI UX checklist I want every team to follow:

    Step 1: Know your user. Map the journey. Find the friction. ↳ Don’t just “add AI”; solve real problems.
    Step 2: Ideate for AI. Start with the user, not the model. ↳ Ask: “What can generative AI unlock here?”
    Step 3: Accept limitations. AI hallucinates. It’s not magic. ↳ Design for the edge cases. Build trust.
    Step 4: Design input + output. The feeling of control matters. ↳ Output is important. Input is everything.
    Step 5: Test for understanding. Do users get what the AI is doing? ↳ Don’t guess. Test comprehension, not just clicks.
    Step 6: Build Quiet AI. In 2025, the best AI will feel invisible. ↳ It’ll solve boring tasks without stealing the spotlight.

    Because in the end: AI won’t win on intelligence. It’ll win on experience.

    Which of these 6 steps is your team focusing on right now? Drop it below 👇 (And I’ll share a UX tip or example that’s worked for us at Musemind.)
