🔮 Design Patterns For AI Interfaces (https://lnkd.in/dyyMKuU9), a practical overview of emerging AI UI patterns, layout considerations and real-life examples, along with interaction patterns and limitations. Neatly put together by Sharang Sharma.

One of the major shifts is the move away from traditional "chat-like" AI interfaces. As Luke Wroblewski wrote, when agents can use multiple tools, call other agents and run in the background, users orchestrate AI work, and there is a lot less chatting back and forth.

In fact, chatbot widgets are rarely an experience paradigm that people truly enjoy and fall in love with, mostly because the burden of articulating intent efficiently lies on the user. It can be done (and we've learned to do it), but it takes an incredible amount of time and articulation to give AI enough meaningful context for it to produce meaningful insights. As it turns out, AI is much better at generating a prompt from the user's context and then feeding that prompt back into itself.

So we see more task-oriented UIs, semantic spreadsheets and infinite canvases: AI proactively asking questions with predefined options, AI suggesting presets and templates to get started, or AI agents collecting context autonomously and emphasizing the work, the plan, the tasks, the outcome, instead of the chat input.

All of these are examples of great User-First, AI-Second experiences. Not experiences circling around AI features, but experiences that truly amplify value for users by sprinkling in a bit of AI in places where it delivers real value to real users. And that's what makes truly great products, with AI or without.

✤ Useful Design Patterns Catalogs:
Shape of AI: Design Patterns, by Emily Campbell 👍 https://shapeof.ai/
AI UX Patterns, by Luke Bennis 👍 https://lnkd.in/dF9AZeKZ
Design Patterns For Trust With AI, via Sarah Gold 👍 https://lnkd.in/etZ7mm2Y
AI Guidebook Design Patterns, by Google https://lnkd.in/dTAHuZxh

✤ Useful resources:
Usable Chat Interfaces to AI Models, by Luke Wroblewski https://lnkd.in/d-Ssb5G7
The Receding Role of AI Chat, by Luke Wroblewski https://lnkd.in/d8xcujMC
Agent Management Interface Patterns, by Luke Wroblewski https://lnkd.in/dp2H9-HQ
Designing for AI Engineers, by Eve Weinberg https://lnkd.in/dWHstucP

#ux #ai #design
UX Design And Artificial Intelligence
Explore top LinkedIn content from expert professionals.
-
One of the areas that excites me the most about AI is prototyping. I'm constantly trying out new tools so that I can share my experience. And I think what Figma has achieved with Figma Make is very impressive. But to achieve great results, you need to know when and how to use it.

Figma Make excels at the following:
- Prototyping complex interactions.
- High accuracy when translating a design to code.
- Coming up with ideas based on an existing design.

I've used other vibe coding tools to go from idea to product as quickly as possible, without a starting design. But when it comes to high accuracy in design and prototyping complex interactions that would have taken ages with traditional prototyping, Figma Make can be incredible.

Here are a few examples of where I use Figma Make instead of traditional prototyping:
- Creating interactive components.
- Complex interactions for web apps.
- Advanced logic or data-heavy products.
- Trying out different responsive approaches.
- Anything that requires external libraries, such as data visualization.

Nowadays, when I want to communicate an interaction idea to an engineer, I first try to do it in Figma Make. After testing it a few times, it becomes second nature.

1. Think of an interaction you want to prototype.
2. Send your design to Figma Make.
3. Describe and build.
4. Duplicate and try alternatives.

In this carousel, I'll be taking you through my workflow and examples in detail. (Swipe to get started 👉)

If you found this useful, consider reposting ♻️ Are you using AI prototyping in your workflow? And when? Let me know in the comments 👇

#productdesign #uxdesign #ai #figmapartner
-
LLMs are optimized for the next-turn response. This results in poor human-AI collaboration, as it doesn't help users achieve their goals or clarify intent. A new model, CollabLLM, is optimized for long-term collaboration instead. The paper "CollabLLM: From Passive Responders to Active Collaborators" by Stanford University and Microsoft researchers tests this approach to improving outcomes from LLM interaction. (link in comments)

💡 CollabLLM transforms AI from passive responder to active collaborator. Traditional LLMs focus on single-turn responses, often missing user intent and leading to inefficient conversations. CollabLLM introduces a "multiturn-aware reward" and applies reinforcement fine-tuning on these rewards. This enables the AI to engage in deeper, more interactive exchanges by actively uncovering user intent and guiding users toward their goals.

🔄 Multiturn-aware rewards optimize long-term collaboration. Unlike standard reinforcement learning that prioritizes immediate responses, CollabLLM uses forward sampling, simulating potential conversation continuations, to estimate the long-term value of interactions. This approach improves interactivity by 46.3% and enhances task performance by 18.5%, making conversations more productive and user-centered.

📊 CollabLLM outperforms traditional models in complex tasks. In document editing, coding assistance, and math problem-solving, CollabLLM increases user satisfaction by 17.6% and reduces time spent by 10.4%. It ensures that AI-generated content aligns with user expectations through dynamic feedback loops.

🤝 Proactive intent discovery leads to better responses. Unlike standard LLMs that assume user needs, CollabLLM asks clarifying questions before responding, leading to more accurate and relevant answers. This results in higher-quality output and a smoother user experience.

🚀 CollabLLM generalizes well across different domains. Tested on the Abg-CoQA conversational QA benchmark, CollabLLM proactively asked clarifying questions 52.8% of the time, compared to just 15.4% for GPT-4o. This demonstrates its ability to handle ambiguous queries effectively, making it more adaptable to real-world scenarios.

🔬 Real-world studies confirm efficiency and engagement gains. A 201-person user study showed that CollabLLM-generated documents received higher quality ratings (8.50/10) and sustained higher engagement over multiple turns, unlike baseline models, which saw declining satisfaction in longer conversations.

It is time to move beyond the single-step LLM responses we have been used to, toward interactions that lead where we want to go. This is a useful advance for better human-AI collaboration. It's a critical topic, and I'll be sharing a lot more on how we can get there.
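To make the forward-sampling idea concrete, here is a minimal sketch of how a multiturn-aware reward could be estimated: score a candidate response by simulating a few ways the conversation might continue, then average a task-level reward over those sampled futures. This is only an illustration of the concept described above, not the paper's implementation; `policy_model`, `user_simulator`, and `task_reward` are hypothetical stand-ins for whatever models and metrics a given setup uses.

```python
# Sketch: estimate the long-term value of a candidate response by
# forward sampling, i.e. rolling out simulated conversation futures
# and averaging a task reward over them. Illustrative only.
from statistics import mean

def multiturn_aware_reward(history, candidate, policy_model,
                           user_simulator, task_reward,
                           num_rollouts=4, horizon=3):
    """Average task reward of `candidate` over sampled conversation futures."""
    rollout_scores = []
    for _ in range(num_rollouts):
        convo = history + [("assistant", candidate)]
        for _ in range(horizon):
            user_turn = user_simulator(convo)      # simulated user reply
            convo.append(("user", user_turn))
            reply = policy_model(convo)            # model's next response
            convo.append(("assistant", reply))
        rollout_scores.append(task_reward(convo))  # e.g. document quality, task success
    return mean(rollout_scores)
```

A scalar like this can then serve as the reward signal for reinforcement fine-tuning, which is how the post describes CollabLLM being trained to favor clarifying questions over premature answers.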
-
Talk less. Prototype faster. The best teams don't discuss ideas endlessly; they just build them. But how do you get the right prototype fast enough?

Most new product initiatives are not about creating a new product. They're about improving existing ones. In other words, there is already a product, customers, and a design language. The machine is slow, perhaps rusty, but it has worked for ages. Attempts to improve the process usually fail or give barely any noticeable improvement.

This is where AI comes in, and why I'm genuinely impressed with Reforge Build, which has now launched in beta! It's an AI prototyping tool made for product teams, not solo builders. It starts where your product already is and accelerates what comes next.

Don't take my word for it, try it yourself: check out Reforge Build and explore what's possible with AI that actually understands your product: https://lnkd.in/duh4YC_H

But why did it impress me?

1) It looks like your product. Upload a screenshot or connect to Figma. Reforge Build instantly matches your real design system: colors, fonts, spacing, everything. No endless cleanup, and no imagination needed when painting a vision of the future product for stakeholders.

2) It understands the context. Add your product data, strategy docs, and customer insights, and build prototypes using your actual tiers, features, and messaging. Not just a rough draft, but something your design team could have presented after weeks of work.

3) It plans before it generates. Instead of vague prompts, you define user needs, metrics, and layout priorities. The AI creates a plan before generating, so the first version is already close to your vision. After all, you need a workable prototype, not an AI slop wannabe!

4) It explores options, not just outputs. This one really left my jaw on the floor: Reforge Build generates multiple design directions, compares them side by side, and mixes the best ideas. I can only imagine this is what it feels like to be a Product Manager with multiple design teams ready to work on a single project...

5) It works like a team tool, not a solo hack. Comment, remix, and reuse templates, so your second iteration takes minutes, not hours. Nobody's perfect, not even your AI teammate, but every teammate gets better with proper feedback!

Impressive, isn't it? Would such an AI prototyping tool speed up your new feature's go-to-market time? Let me know in the comments!

#productmanagement #ai #ux
-
🌟 Transforming emotion detection with Multi-Modal AI systems! 🌟

In an ever-evolving world where the complexity of human emotions often surpasses our understanding, East China Normal University is pioneering a revolution in emotion recognition technology. Their newly published research, supported by the Beijing Key Laboratory of Behavior and Mental Health, is pushing the boundaries of AI-driven therapy and mental health support.

🔍 Why Multi-Modal AI Matters: Human emotions aren't one-dimensional. They manifest through facial expressions, vocal nuances, body language, and physiological responses. Traditional emotion detection techniques, relying on single-modal data, fall short of capturing these nuances. Enter multi-modal AI systems, which integrate data from text, audio, video, and even physiological signals to decode emotions with far greater accuracy.

🎯 Introducing the MESC Dataset: The researchers have constructed the Multimodal Emotional Support Conversation (MESC) dataset, a resource with detailed annotations across text, audio, and video. This dataset sets a new benchmark for AI emotional support systems by capturing the richness of human emotional interactions.

💡 The SMES Framework: Grounded in Therapeutic Skills Theory, the Sequential Multimodal Emotional Support (SMES) framework leverages LLM-based reasoning to sequentially handle:
➡ User Emotion Recognition: understanding the client's emotional state.
➡ System Strategy Prediction: selecting the best therapeutic strategy.
➡ System Emotion Prediction: generating an empathetic tone for the response.
➡ Response Generation: crafting replies that are contextually and emotionally apt.

🌐 Real-World Applications: Imagine AI systems that can genuinely empathize, provide tailored mental health support, and bring therapeutic interactions to those who need them most, all while respecting privacy and cultural nuances. From healthcare to customer service, the implications are vast.

📈 Impressive Results: Validation of the SMES framework has revealed striking improvements in the AI's empathy and strategic responsiveness, heralding a future where AI can bridge the gap between emotion recognition and emotional support.

#AI #MachineLearning #Technology #Innovation #EmotionDetection #TherapeuticAI #HealthcareRevolution #MentalHealth
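As a rough illustration of the sequential structure listed above, here is a minimal sketch of an SMES-style pipeline, where each stage's output conditions the next. The prompts and the `call_llm` helper are hypothetical stand-ins; the paper defines its own models, inputs, and training setup.

```python
# Sketch of a sequential SMES-style pipeline: four chained LLM calls,
# mirroring the steps in the post. `call_llm` is a placeholder for any
# chat-completion client; plug in a real one to run this.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def smes_respond(dialogue_history: str, multimodal_cues: str) -> str:
    # 1. User emotion recognition from text plus audio/video cue descriptions.
    emotion = call_llm(
        f"Dialogue:\n{dialogue_history}\nCues: {multimodal_cues}\n"
        "What is the user's current emotional state?")

    # 2. System strategy prediction, conditioned on the recognized emotion.
    strategy = call_llm(
        f"User emotion: {emotion}\nDialogue:\n{dialogue_history}\n"
        "Which therapeutic support strategy fits best?")

    # 3. System emotion prediction: the tone the reply should carry.
    tone = call_llm(
        f"Strategy: {strategy}\nUser emotion: {emotion}\n"
        "What emotional tone should the response take?")

    # 4. Response generation, conditioned on all previous stages.
    return call_llm(
        f"Dialogue:\n{dialogue_history}\nEmotion: {emotion}\n"
        f"Strategy: {strategy}\nTone: {tone}\n"
        "Write an empathetic, contextually apt reply.")
```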
-
Will Apps Need to Redesign Their Interfaces to Accommodate AI Agents?

AI agents from OpenAI, Perplexity, and others can comfortably navigate textual and structured digital spaces but quickly hit barriers when faced with visually oriented tools like Gamma, Canva, or WordPress. These popular applications were designed specifically for human cognitive styles, relying heavily on visual intuition, recognition of subtle cues, and interactions guided by visual metaphors.

As we can see from early tests, an AI agent accessing these tools via a browser faces hurdles. The reason: interfaces designed around human perception and intuition become ambiguous or even indecipherable to a purely logic-driven entity.

This poses a nuanced design question: to effectively support AI agents, will software companies need to consider creating specialised, agent-oriented interfaces separate from the human-focused UX? The idea isn't simply about creating more structured web pages. Rather, it suggests building parallel experiences explicitly designed around AI cognition, incorporating clear functional signposting, predictable interactions, and logical progressions that agents can reliably parse.

The implications are notable:
➡️ Strategic Differentiation: Platforms offering agent-friendly interfaces might attract companies prioritising automation and seamless AI integration, creating new competitive landscapes.
➡️ UX Complexity: App developers will need to strike a balance. How much complexity can they add before negatively impacting the human experience? Can dual interfaces coexist without excessive overhead?
➡️ Productivity and Innovation: With optimised interfaces, agents could more effectively handle complex workflows, opening up new productivity gains beyond basic task automation.

Reflections:
🤔 Will AI-friendly UX design become a new competitive advantage?
🤔 How feasible is it for companies to maintain dual-interface platforms for humans and AI agents?
🤔 Will the cognitive divide between human intuition and AI logic become a central consideration in the next era of software design?

I'd be very interested in your thoughts.

#AI #UX #ProductDesign #FrictionAdvantage
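One way to picture "clear functional signposting" is a machine-readable action manifest that an app could publish alongside its visual UI. The sketch below is purely hypothetical, not an existing standard and not how any of the products named above work; it only illustrates the kind of structure an agent could parse reliably instead of interpreting pixels.

```python
# Hypothetical agent-oriented interface: a structured manifest describing
# what the app can do, with parameters and preconditions an agent can
# reason about. Illustrative only; no such standard is implied.
import json

agent_manifest = {
    "app": "example-presentation-tool",
    "actions": [
        {
            "name": "create_slide",
            "description": "Append a new slide to the current deck",
            "parameters": {"title": "string", "layout": ["title", "two-column", "image"]},
            "preconditions": ["deck_open"],
        },
        {
            "name": "set_theme",
            "description": "Apply a named visual theme to the whole deck",
            "parameters": {"theme": "string"},
            "preconditions": ["deck_open"],
        },
    ],
}

# An agent could fetch this manifest, plan a sequence of actions, and call
# structured endpoints instead of navigating the visual interface.
print(json.dumps(agent_manifest, indent=2))
```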
-
We're manufacturing regret and calling it personalization.

Customers who experienced personalized digital interactions are 3.2 times more likely to regret their purchase. They're 4 times more likely to say they should have chosen something different. They're twice as likely to feel overwhelmed by the volume of information they receive.

This isn't a minor side effect. This is a crisis.

For ages, we've been told personalization is the answer. Know your customer. Tailor the experience. Recommend the next best action. Remove friction. Make it effortless. And it worked. Sort of. Personalized experiences drive 1.8x premium pricing and 3.7x more purchases than intended. The conversion metrics look great. Except we're also creating radical regret.

Here's what's actually happening: We optimized for speed when customers needed confidence. We removed friction when they needed reflection. We personalized the path forward when they needed help figuring out if forward was even the right direction. We made it easier to buy. We made it harder to buy well.

Think about what personalization actually does in most customer journeys. It reminds you of your abandoned cart. It suggests the next piece of content. It recommends products that "customers like you" purchased. All of this is designed to keep you moving. To reduce hesitation. To get you from consideration to conversion as efficiently as possible. But efficiency isn't the same as confidence.

The irony is that because the experience was personalized "for them," the regret cuts deeper. The algorithm knew them. The recommendations were tailored. So when the purchase doesn't feel right, it's not just "I made a bad choice." It's "The system that knew me helped me make a bad choice." We've turned purchase anxiety into algorithmic self-doubt.

The data shows 30% of people who experienced personalization delayed or put off important decisions. That's not conversion optimization. That's decision paralysis. And we created it by focusing on what we wanted—the sale—rather than on what customers needed—the confidence to make the right choice.

So what do we do? If your personalization drives purchases but also drives regret, you're burning through customers, not building a brand. Some friction is good. Not every hesitation is a problem to solve. Sometimes customers need space to think, permission to slow down.

Personalize for confidence, not just conversion.
-
Your AI-generated code is probably excluding many people.

"a11y" is shorthand for accessibility — building digital products that anyone can use, including people with visual, motor, cognitive, or hearing disabilities. Over 1 billion people worldwide. But lots of existing websites don't take them into consideration: in 2025, WebAIM found that 94.8% of the top one million home pages have detectable accessibility failures.

Sadly, AI does not fix this. AI coding tools learn from existing code on the web, and 95% of that code is already inaccessible. The models are reproducing a broken baseline.

A 2025 study from Carnegie Mellon found three problems when developers use AI coding assistants:
→ AI doesn't give you accessible code by default (if you don't ask, AI won't prioritize it)
→ AI omits many important a11y attributes
→ AI doesn't verify compliance; many a11y flows have to be verified at runtime

The result is missing keyboard navigation, broken focus management, and ARIA attributes sprinkled in for show but wired up wrong — which is actually worse than no ARIA at all.

This isn't about AI being bad. It's about a knowledge gap that AI inherits rather than solves. As AI generates more of our frontend code, inaccessible patterns are scaling faster than ever. Every vibe-coded app shipped without accessibility review is another site that excludes people.

If you're building for the web, start with these basics:
→ Use semantic HTML. A button should be a <button>, not a styled div.
→ Test with your keyboard. Tab through your page. Can you reach everything?
→ Use headless UI components like Radix, Ariakit, Base UI, etc.; they have a11y features built in.
→ Run a11y checkers like axe DevTools or WAVE. They catch the low-hanging fruit in seconds (a tiny example of this kind of check is sketched after this post).
→ Don't trust AI output blindly. Review it specifically for accessibility.

Accessibility isn't charity, it's quality engineering. It should not be an afterthought.
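To show how shallow some of these failures are to detect, here is a minimal, standard-library-only sketch of a static check in the spirit of the tools mentioned above. It is illustrative only; real checkers like axe DevTools and WAVE cover far more rules, including ones that need runtime verification.

```python
# Tiny static a11y linter sketch: flags images without alt text and
# click handlers on non-semantic divs. Heuristic and incomplete by
# design; real tools cover hundreds of rules.
from html.parser import HTMLParser

class A11yLinter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.issues.append("img missing alt attribute")
        if tag == "div" and "onclick" in attrs:
            self.issues.append("clickable div; use <button> for keyboard access")

linter = A11yLinter()
linter.feed('<div onclick="save()">Save</div><img src="chart.png">')
print(linter.issues)
# ['clickable div; use <button> for keyboard access', 'img missing alt attribute']
```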
-
Happy Friday! This week in #learnwithmz, let's explore how AI is transforming UX and product design: from prototyping to research to testing.

Top AI Design Tools
- Uizard: turn sketches or text into wireframes. https://uizard.io
- Stitch (previously Galileo AI, acquired by Google): generate polished UI layouts, export to Figma. https://lnkd.in/giqThC-Y https://www.usegalileo.ai
- Figma AI: First Draft, Rename Layers, Smart Layout. https://lnkd.in/gJVVHwc4 Magician for Figma: https://lnkd.in/gNFVaV82
- Penpot (recent favorite): AI suggestions for layouts/components. https://penpot.app

Visual Design & Assets
- Midjourney: generate moodboards and visuals. https://www.midjourney.com
- Adobe Firefly: generative fill and variations inside Creative Cloud. https://lnkd.in/gtcpuCCR
- Canva AI: visualize ideas, generate compelling copy, and turn your thoughts into stunning, fully editable designs. https://lnkd.in/gsd5wi7n

Takeaways for PMs & Designers
- AI really shines at the grunt work. In my own workflows, the biggest win has been cutting out repetitive tasks: renaming layers, generating placeholder copy, auto-cleaning designs. It's not glamorous, but it buys you back hours every week.
- Speed is a gift, but refinement is non-negotiable. AI can get you a "good enough" starting point in seconds. But the best outcomes still come from layering in human judgment: making sure the design aligns with your system, brand, and real user needs.
- We're seeing deeper integration into core tools. Figma, Adobe, Notion… AI isn't an add-on anymore, it's inside the workflows we already use. That makes adoption easier, but it also means teams should stay intentional about how and when to lean on it.
- The open-source and no-code ecosystem is catching up fast. I've been impressed by tools like Penpot, with options like self-hosting. They make AI design assistance accessible beyond big enterprises, so smaller teams and startups can experiment without huge budgets.

How are you using AI in your product design workflow? Which tool has surprised you the most?

#AI #ProductDesign #UX #Prototyping #AIDesign #learnwithmz

PS. The video was created with Google Veo from a one-line prompt in 2 minutes (next week's post is on video generation).