User Testing Methods for Designers

Explore top LinkedIn content from expert professionals.

  • Aurimas Griciūnas (Influencer)

    Founder @ SwirlAI • Ex-CPO @ neptune.ai (Acquired by OpenAI) • UpSkilling the Next Generation of AI Talent • Author of SwirlAI Newsletter • Public Speaker

    183,367 followers

    I have been developing Agentic Systems for more than two years now and the same patterns keep emerging. 👇 Evaluation-Driven Development is the only way to succeed in building your Agentic Systems - here is my template. Let's zoom in:

    1. Define a problem you want to solve: is GenAI even needed?
    2. Build a prototype: figure out if the solution is feasible.
    3. Define performance metrics: you must have output metrics defined for how you will measure the success of your application.
    4. Define Evals: split the above into smaller input metrics that can move the key metrics forward. Decompose them into tasks that could be automated and move the given input metrics. Define Evals for each. Store the Evals in your Observability Platform.

    ℹ️ Steps 1-4 are where AI Product Managers can help, but they can also be handled by AI Engineers.

    5. Build a PoC: it can be simple (an Excel sheet) or more complex (a user-facing UI). Regardless of what it is, expose it to users for feedback as soon as possible.
    6. Instrument your application: gather traces and human feedback and store them in an Observability Platform next to the previously stored Evals.
    7. Run Evals on traced data: traces contain the inputs and outputs of your application; run Evals on top of them.
    8. Analyse failing Evals and negative user feedback: this data is gold, as it pinpoints exactly where the Agentic System needs improvement.
    9. Use the data from the previous step to improve your application - prompt engineering, improving the AI system topology, finetuning models, etc. Make sure the changes move the Evals in the right direction.
    10. Build and expose the improved application to users.
    11. Monitor the application in production: this comes out of the box - you have implemented evaluations and traces for development purposes, and they can be reused for monitoring. Configure specific alerting thresholds and enjoy the peace of mind.
    Learn all of this hands-on in my End-to-End AI Engineering Bootcamp starting in 2 weeks (10% off this week): https://lnkd.in/djvtszk5

    ✅ Continuous development of your application:
    ➡️ Run steps 6-10 to continuously improve and evolve your application.
    ➡️ As you build up in complexity, new requirements can be added to the same application; this means running steps 1-5 and attaching the new logic as routes to your Agentic System.
    ➡️ For example, you start off with a simple chatbot, then add a route that can classify user intent to take action (e.g. add items to a shopping cart).

    What is your experience in evolving Agentic Systems? Let me know in the comments 👇
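The eval loop in steps 6-8 can be sketched in a few lines. A minimal sketch, assuming traces are plain dicts and Evals are simple pass/fail checks - all names and checks here are illustrative, not a specific observability platform's API:

```python
# Illustrative sketch of steps 6-8: run evals over traced inputs/outputs,
# then surface the failing cases for analysis (step 8's "gold").

def eval_answer_not_empty(trace):
    # Example input-level eval: the agent must produce some output.
    return bool(trace["output"].strip())

def eval_no_refusal(trace):
    # Example eval: flag outright refusals as failures.
    return "i cannot help" not in trace["output"].lower()

EVALS = {
    "answer_not_empty": eval_answer_not_empty,
    "no_refusal": eval_no_refusal,
}

def run_evals(traces):
    """Run every eval on every trace; return the failures for analysis."""
    failures = []
    for trace in traces:
        for name, check in EVALS.items():
            if not check(trace):
                failures.append({"trace_id": trace["id"], "eval": name})
    return failures

traces = [
    {"id": "t1", "input": "Add shoes to cart", "output": "Added to cart."},
    {"id": "t2", "input": "Cancel my order", "output": "   "},
]
print(run_evals(traces))  # → [{'trace_id': 't2', 'eval': 'answer_not_empty'}]
```

The failing list is exactly what step 8 analyses, and what step 9 should drive back down toward zero.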

  • Vitaly Friedman (Influencer)

    Practical insights for better UX • Running “Measure UX” and “Design Patterns For AI” • Founder of SmashingMag • Speaker • Loves writing, checklists and running workshops on UX. 🍣

    225,947 followers

    🔬 UX Concept Testing. How to test your UX design without spending too much time and effort polishing mock-ups and prototypes ↓

    ✅ Concept testing is an early real-world check of design ideas.
    ✅ It happens before a new product/feature is designed and built.
    ✅ It helps you find an idea that will meet user and business needs.
    ✅ Always low-fidelity, always pre-launch, always involves real users.
    🚫 Testing, not validation: ideas are not confirmed, but evaluated.
    ✅ What people think, do, say and feel are often very different things.
    ✅ You'll need 5 users per feature or group of features.
    ✅ You will discover 85% of usability problems with 5 users.
    ✅ You will discover 100% of UX problems with 20–40 users.
    🚫 Poor surveys are a dangerous, unreliable tool for assessing design.
    🚫 Never ask users if they prefer one design over the other.
    ✅ Ask what adjectives or qualities they connect with a design.
    ✅ Tree testing: ask users to find content in your navigation tree.
    ✅ Kano model survey: get users' sentiment about new features.
    ✅ First impression test: ask users to rate a concept against your keywords.
    ✅ Preference test: ask users to pick the concept that better conveys your keywords.
    ✅ Competitive testing: like a preference test, but with a competitor's design.
    ✅ 5-sec test: show a design for 5 seconds, then ask questions to be answered from memory.
    ✅ Monadic testing: segment users, test concepts in depth per segment.
    ✅ Concept testing isn't a one-off, but a continuous part of the UX process.

    In the design process, we often speak about "validation" of a new design. Yet as Kara Pernice rightfully noted, the word is confusing and introduces bias: it suggests that we know it works and are looking for data to prove that. Instead, test, study, watch how people use it, and see where the design succeeds and fails. We don't need polished mock-ups or advanced prototypes to test UX concepts.
    The earlier you bring your work to actual users, the less time you'll spend designing and building a solution that doesn't meet user needs and has no market fit. And that's where concept testing can be extremely valuable.

    Useful resources:
    - Concept Testing 101, by Jenny L. https://lnkd.in/egAiKreK
    - A Guide To Concept Testing in UX, by Maze https://lnkd.in/eawUR-AM
    - Concept Testing In Product Design, by Victor Yocco, PhD https://lnkd.in/egs-cyap
    - How To Test A Design Concept For Effectiveness, by Paul Boag https://lnkd.in/e7wre6E4
    - The Perfect UX Research Midway Method, by Gabriella Campagna Lanning https://lnkd.in/e-iA3Wkn
    - Don't "Validate" Designs; Test Them, by Kara Pernice https://lnkd.in/eeHhG77j
    - UX Research Methods Cheat Sheet, by Allison Grayce Marshall https://lnkd.in/eyKW8nSu

    #ux #testing
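The "85% with 5 users" figure traces back to the Nielsen/Landauer model, where the share of problems found by n test users is 1 − (1 − L)^n, with L ≈ 31% of problems found per user on average. A quick check of the post's numbers, assuming that model:

```python
# Nielsen/Landauer model: share of usability problems found by n test
# users, assuming each user uncovers L = 31% of problems on average.

def problems_found(n, L=0.31):
    return 1 - (1 - L) ** n

print(round(problems_found(5), 2))   # → 0.84, i.e. roughly the 85% claim
print(round(problems_found(20), 2))  # → 1.0, diminishing returns long before 20
```

Note the model describes *usability problem discovery*, which is why 5 users per feature (or small group of features) is usually enough for this kind of test.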

  • Aakash Gupta (Influencer)

    Helping you succeed in your career + land your next job

    311,020 followers

    Two types of PMs are emerging from the AI prototyping wave.

    The first group learned to build. They can spin up a working prototype in 45 minutes. They demo it the next day. Stakeholders approve it because working software is more convincing than a PowerPoint. Then metrics don't move. Nobody tested "red shoes size 10 wide" and watched the AI parse "wide" as a style descriptor. Nobody counted the clicks and realized AI search adds 2 steps over the existing filter sidebar. Nobody asked engineering about API costs at production traffic: $40K/month, unbudgeted. They went from writing bad specs to building bad prototypes. Same failure mode, just faster.

    The second group learned to evaluate. Boris Cherny's Claude Code team prototyped the terminal spinner 50-100 times. 80% didn't ship. Agent teams went through hundreds of versions. The condensed file view took 30 prototypes, then a month of internal dogfooding. Boris ships 20-30 PRs a day. But the 80% he kills are more important than the 20% he ships. "Half my ideas are bad. I don't know which half until I try."

    The skill that separates these two groups is what I'm calling taste at speed: the ability to evaluate working software fast, kill most of it, and ship the survivors. A PM who reviews one spec per month builds judgment from 12 data points per year. A PM evaluating 15 prototypes per week builds judgment from 780. Same role. Same year. 65x more pattern-matching reps. That gap compounds every single week.

    I wrote the complete guide:
    1. Why taste at speed is the defining PM skill (with the printing press analogy that changed how I think about this)
    2. How Boris's team actually works (5 parallel terminals, plan mode, phone-first agents)
    3. The 5 Lenses evaluation framework (problem-solution fit, interaction cost, edge cases, technical debt, business model)
    4. How to build this skill at any level (never prototyped, can prototype, ready to change your team)
    5. Where the PRD fits now (it moved from step 2 to step 6)
    6. A full real-world teardown showing the same feature evaluated by two PMs with wildly different outcomes

    Plus 4 downloadable templates: a Prototype Evaluation Scorecard, a Skill-Building Roadmap, a Prototype-First PRD Template, and a Divergent Prototyping Prompt Template. Full guide for subscribers: https://lnkd.in/g-HmamRS

    Not everyone can be Boris. Most PMs have meetings from 9 to 5 and a company that still requires PRDs. But a director who prototypes one feature per month makes dramatically better decisions because of it. A PM doing one prototype per sprint is already ahead of 90%. The reps compound regardless of volume.
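The rep arithmetic in the post, spelled out (assuming roughly 52 working weeks per year):

```python
# Judgment reps per year: one spec review per month vs. 15 prototype
# evaluations per week. Assumes 52 weeks, as the post's 780 implies.
spec_reps = 1 * 12        # one spec per month
prototype_reps = 15 * 52  # fifteen prototypes per week

print(prototype_reps)               # → 780
print(prototype_reps // spec_reps)  # → 65, the "65x" in the post
```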

  • Kritika Oberoi (Influencer)

    Founder at Looppanel | User research at the speed of business | Eliminate guesswork from product decisions

    29,095 followers

    Your research findings are useless if they don't drive decisions. After watching countless brilliant insights disappear into the void, I developed 5 practical templates I use to transform research into action:

    1. Decision-Driven Journey Map
    Standard journey maps look nice but often collect dust. My Decision-Driven Journey Map directly connects user pain points to specific product decisions with clear ownership. Key components:
    - User journey stages with actions
    - Pain points with severity ratings (1-5)
    - Required product decisions for each pain
    - Decision owner assignment
    - Implementation timeline
    This structure creates immediate accountability and turns abstract user problems into concrete action items.

    2. Stakeholder Belief Audit Workshop
    Many product decisions happen based on untested assumptions. This workshop template helps you document and systematically test stakeholder beliefs about users. The four-step process:
    - Document stakeholder beliefs + confidence level
    - Prioritize which beliefs to test (impact vs. confidence)
    - Select appropriate testing methods
    - Create an action plan with owners and timelines
    When stakeholders participate in this process, they're far more likely to act on the results.

    3. Insight-Action Workshop Guide
    Research without decisions is just expensive trivia. This workshop template provides a structured 90-minute framework to turn insights into product decisions. Workshop flow:
    - Research recap (15 min)
    - Insight mapping (15 min)
    - Decision matrix (15 min)
    - Action planning (30 min)
    - Wrap-up and commitments (15 min)
    The decision matrix helps prioritize actions based on user value and implementation effort, ensuring resources are allocated effectively.

    4. Five-Minute Video Insights
    Stakeholders rarely read full research reports. These bite-sized video templates drive decisions better than documents by making insights impossible to ignore. Video structure:
    - 30 sec: Key finding
    - 3 min: Supporting user clips
    - 1 min: Implications
    - 30 sec: Recommended next steps
    Pro tip: Create a library of these videos organized by product area for easy reference during planning sessions.

    5. Progressive Disclosure Testing Protocol
    Standard usability testing tries to cover too much. This protocol focuses on how users process information over time to reveal deeper UX issues. Testing phases:
    - First 5-second impression
    - Initial scanning behavior
    - First meaningful action
    - Information discovery pattern
    - Task completion approach
    This approach reveals how users actually build mental models of your product, leading to more impactful interface decisions.

    Stop letting your hard-earned research insights collect dust. I'm dropping the first 3 templates below, and I'd love to hear which decision-making hurdle is currently blocking your research from making an impact! (The data in the templates is just an example; let me know in the comments or message me if you'd like the blank versions.)
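The key components of template 1 map naturally onto a simple record, which is handy if you track the journey map in a tool rather than a slide. A sketch with hypothetical example values - the field names mirror the components listed above:

```python
# One row of a Decision-Driven Journey Map. All values are hypothetical
# examples; the fields mirror the template's key components.
journey_map_row = {
    "stage": "Checkout",
    "user_action": "Enters shipping address",
    "pain_point": "Address form rejects valid postcodes",
    "severity": 5,  # 1-5 rating
    "required_decision": "Replace the postcode validator",
    "decision_owner": "Payments PM",
    "timeline": "Next sprint",
}

def most_severe(rows):
    """Sort pains by severity so owners tackle the worst ones first."""
    return sorted(rows, key=lambda r: r["severity"], reverse=True)
```

Sorting by the severity rating is what turns the map from a nice picture into a prioritized action list with named owners.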

  • Sachin Rekhi

    Helping product managers master their craft in the age of AI | sachinrekhi.com

    56,827 followers

    Customer discovery via functional prototypes + PostHog is night-and-day better than the old-school way of asking for feedback on Figma mockups. Here's why: I get to observe actual user behavior instead of asking the user to guess how they might use my product.

    My favorite example of why this matters comes from a Sony Walkman user study. Researchers asked a group of people what they thought about a yellow Walkman, and they said "so sporty! not boring like the black one!" And yet, when they were given the opportunity to take a Walkman home after the study, everyone picked the black one. We learned a lot more from user behavior than we did from expressed preferences.

    Here's my setup for observing user behavior from prototypes:
    1. Create a functional prototype in your favorite prototyping tool (Bolt, Lovable, Reforge Build, Magic Patterns, Claude Code)
    2. Ask the prototyping tool to integrate PostHog analytics
    3. Ask the prototyping tool to instrument key user actions in PostHog

    Then you get all of these ways of observing actual behavior:
    - DAUs / WAUs / retention curves - I can actually see if people come back and use my prototype instead of taking their word for it
    - Action metrics dashboards - I can see what actions people are taking vs. not taking
    - Post-usage surveys - I can add a built-in pop-up survey to ask the user a question about the experience after they have engaged with the prototype
    - Session replays - I can see exactly where people are clicking and how they are using the product, to identify usability issues
    - Heatmaps - I can see what parts of my design are working across all sessions

    I'd never go back to testing with just a mockup after this.
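Step 3 - instrumenting key user actions - can be sketched as follows. This sketch uses an in-memory list as a stand-in for the analytics client; in a real prototype the `capture` helper would forward to the PostHog SDK instead, and the event names here are hypothetical:

```python
# Sketch of step 3: name the key actions up front, then funnel every one
# through a single capture() helper. The in-memory `captured` list stands
# in for the analytics backend; a real prototype would call PostHog here.

KEY_EVENTS = {"prototype_opened", "search_submitted", "filter_applied",
              "item_saved", "survey_answered"}

captured = []  # stand-in for the analytics backend

def capture(event, properties=None):
    if event not in KEY_EVENTS:
        raise ValueError(f"Unknown event {event!r} - add it to KEY_EVENTS")
    captured.append({"event": event, "properties": properties or {}})

# Instrumenting two user actions:
capture("search_submitted", {"query": "red shoes size 10 wide"})
capture("filter_applied", {"filter": "width", "value": "wide"})
print(len(captured))  # → 2
```

Keeping an explicit allow-list of key events is the point of step 3: it forces you to decide up front which behaviors you actually want to observe, rather than drowning in noise later.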

  • Vikas Singhvi

    Building Velora AI - Automated Construction Reporting From WhatsApp | AI Product Builder | ex- Microsoft

    10,528 followers

    "My product is internal-only. My user base is captive - leaders will force them to use it."

    Ring a bell? Many PMs building internal products think this way. "I build products based on requirements given by one business stakeholder - s/he will ensure adoption. I build; adoption is not my headache."

    If you are in this boat, it's time to wake up. Think like a real product manager, not a project manager. Here's how you can showcase your true PM skills by caring about meaningful product adoption:

    🔍 Understand your internal users: Treat your colleagues as customers. Conduct user interviews, surveys, and usability tests to understand their pain points, needs, and workflows. Just like external customers, internal users have unique requirements and expectations.

    🛠 Iterate based on feedback: Gather continuous feedback from users. Use this data to iterate on and improve your internal product, ensuring it truly meets the needs of your users.

    📈 Drive adoption: Adoption is the internal product's equivalent of growth. High adoption rates indicate that your product is valuable, user-friendly, and effectively solving problems. Monitor usage metrics, engagement levels, and satisfaction scores to gauge success.

    🚀 Champion internal advocacy: Encourage your teams to pitch your product on any stage available. Create compelling training materials, host workshops, and provide excellent support to make it easy for users to adopt and champion your product.

    🔄 Align with business goals: Ensure your internal product aligns with broader business goals. Demonstrate how your product contributes to overall efficiency, cost savings, or any other objective and key result committed to by your team.

    If you really think about it, you can erase the boundary between an internal and an external product. A product is a product, period. And your role as a product manager for an internal product is as critical as that of a PM for an external, profit-making product. If you are not continuously obsessing about product adoption, you are not really doing your core work - you end up being a project manager, or an engineer at best.

    #ProductManagement #InternalProducts #UserAdoption #ProductDiscovery #GrowthMindset #OrganizationalSuccess

  • Anupam Mishra

    Product Design & Storytelling

    8,948 followers

    What is the single best way to validate your SaaS product or feature as early as possible? 🚀

    I have observed a large number of SaaS founders and product owners doing what they think is product validation. This typically involves the founder excitedly describing exactly what his magical product is going to do. He often goes on and on until the potential customer's eyes have developed that sort of hunted look you see when you corner an animal. At the end of the sales pitch, the entrepreneur "validates" the idea by asking, "So, would that solve your problem?" Most of these potential customers would agree to practically anything just to get the entrepreneur to shut the hell up.

    So what's the alternative? Instead of describing what you are going to build, why not show them what you are going to build? Simply observing people interacting with a prototype, even a very rough one, can give you a tremendous amount of insight into whether they understand your potential product and feel it might solve a problem. Prototype tests are the single best way to validate your product as early as possible, even before you put any resources or dollars into developing it.

    At xMoonshot, we insist on observing actual users use a clickable design prototype without a single line of explanation about what it does. I have personally been to Starbucks with my laptop and asked 10-12 people to play around with the clickable prototype of a direct-marketing SaaS product aimed at the upper middle class, in exchange for a free coffee. 6 people agreed. With just 4 hours and a modest budget, we debunked assumptions and gained priceless insights about user preferences for under INR 2000, that is, about $24.

    Don't tell, show! Prototype tests are like a crystal ball for product validation. Get your insights without emptying your pockets before development even begins. 🧙♂️🛠

    #saas #uxdesign #productdesign #ProductValidation #PrototypingMagic #InnovateSmart

  • Abhishek Sharma

    Landing Page Redesign Specialist | I Fix Pages That Look Good But Don’t Convert | CRO + UX Research + Strategy

    1,495 followers

    Designers' view vs. users' view!

    You put a baby on a bed. Above the baby, you hang some toys. From your side (the designer's view), it looks beautiful: all the toys are visible, the colors are bright, the arrangement is perfect. But from the baby's side (the user's view), the scene is totally different. The baby only sees the underside of the toys. Maybe it looks confusing, boring, or even a little scary. The baby is the real user. And the real user experience is very different from what the designer imagined.

    The lesson: just because we (designers) find something attractive does not mean users will also like it. Users see things from their own perspective, environment, and needs. If we ignore the user's view, our design may look perfect to us but fail in real life.

    Why is understanding the user's view important?
    1. Design is not for us, it's for users. What looks nice to us might be confusing to them.
    2. The user's perspective is always different. They focus on completing their task, not on admiring visuals.
    3. Testing reveals reality. Only when we test our product with real users do we realize:
    ⤷ Which parts are helpful?
    ⤷ Which parts are confusing?
    ⤷ What should be improved?
    4. Better experience = better product. When we design for users' comfort, the product becomes easy, useful, and successful.

    Final thought: as designers, we must step down from our own "beautiful view" and look from the user's side. Because in the end, the product is not for us - it's for the user.

    #UXDesign #UserExperience #UIDesign #DesignThinking #UserTesting

  • Dave Westgarth

    Delivery | Cloud | AI | Vibe Coding | Agility

    16,213 followers

    One of the best ways to align teams, stakeholders, and strategy is to make the invisible visible. That's why I'm such a fan of mapping techniques. They help you zoom out, focus in, and uncover the things that are often hiding in plain sight, whether that's unclear goals, conflicting priorities, or pain points users are quietly putting up with. Here are 7 mapping techniques I keep coming back to, and how I use them in delivery:

    🗺️ User Story Mapping
    Helps me turn flat backlogs into something visually dynamic, tangible, and user-focused. I use this to map out a user's journey step by step, then slice features based on what really matters to them. It's a brilliant way to align teams around MVPs and delivery releases.

    🗺️ Impact Mapping
    Like Simon Sinek, this one starts with why. It links business goals to user behaviors and potential features, helping teams focus on outcomes over outputs. I've used it to reframe entire product roadmaps around expected impact instead of a list of things to build.

    🗺️ Wardley Mapping
    This one is more strategic: it's great for mapping the components of a system by how visible they are to users and how mature they are. It's helped me spot where we should innovate, where we can standardise, and where buying makes more sense than building.

    🗺️ Dysfunction Mapping
    I use this when things feel off, but the problem or solution isn't immediately obvious. It's a structured way to identify the root causes of delivery friction, whether that's misaligned priorities, unclear ownership, or recurring blockers. Great for retros and recovery plans.

    🗺️ Stakeholder Mapping
    Simple but powerful. I use this to understand who's influencing the project, who needs to be kept in the loop, and who we might be unintentionally leaving out. It's especially useful when stepping into a new team or navigating complex stakeholder landscapes.

    🗺️ Experience Mapping
    This is about stepping into the user's shoes and walking through their journey - not just where the product touches them, but where the experience begins and ends. I've used this to uncover gaps, friction points, and opportunities we hadn't considered.

    🗺️ Empathy Mapping
    When we're trying to build something truly user-centric, empathy mapping helps us understand what users think, feel, say, do, and hear. It goes deeper than roles or personas and helps teams emotionally connect with the people we're building for.

    If you're in delivery, product, UX, or transformation work, there's probably a mapping method in here that can help you in your day-to-day role. Let me know if I've missed any effective mapping techniques, and whether a deep dive into any of these would be useful!

  • Nick Babich

    Product Design | User Experience Design

    85,898 followers

    💡 Mapping user research techniques to levels of knowledge about users

    When doing user research, it's important to choose the right methods and tools to uncover valuable insights about user behavior. It's possible to identify 3 layers of user behavior, feelings, and thoughts:

    1️⃣ Surface level - Say & Think
    This level captures what users say in conversations, interviews, or surveys, and what they think about a product, feature, or experience. It reflects their stated opinions, thoughts, and intentions. Example: "I prefer simple products" or "I think this app is easy to use." Methods: interviews, questionnaires. These methods capture stated thoughts and opinions; however, the insights may be influenced by social norms or biases.

    2️⃣ Mid-level - Do & Use
    This level reflects what users actually do when interacting with a product or service. It emphasizes actions, usage patterns, and observed behaviors, revealing insights that may differ from what users say. Example: users may claim they enjoy customizing app settings, but the data shows they rarely change default options. Methods: usability testing, observation. Observation helps reveal gaps between what people say and what they actually do.

    3️⃣ Deep level - Know, Feel & Dream
    This level uncovers deep motivations, emotions, desires, and aspirations that users may not be consciously aware of or may struggle to articulate. It also includes tacit knowledge - things people know intuitively but find hard to express. Example: a user might not realize that their preference for a minimalist design comes from the information overload of the current design. Methods: probes (e.g., participatory design, diary studies). Insights collected with these methods uncover the implicit and emotional drivers influencing behavior.

    📕 Practical recommendations for mapping

    ✅ Triangulate insights by using multiple methods. What people say (interviews/surveys) may differ from what they do (observations) and feel. That's why it's essential to interpret these results in context. For example, start with interviews to learn what users say, follow up with usability testing to observe real behavior, and use probes for long-term or emotional insights.

    ✅ Align research with business goals. For product improvements, focus on usability testing to catch interaction issues. For innovation, use probes to generate new ideas from user insights.

    ✅ Practice iterative learning. Apply surface techniques (like surveys) early to refine assumptions and guide more in-depth research later. Use deep techniques (like probes) for strategic decisions and to foster innovation in long-term projects.

    🖼️ UX Research methods by Maze

    #ux #uxresearch #design #productdesign #uxdesign #ui #uidesign
