As you start working with more and more stakeholders, there is a natural tendency to try to accommodate every bit of feedback received. This is something we refer to as "design by committee". It's also a surefire way to build subpar experiences by combining multiple irrelevant ideas into a single solution, rather than thinking deeply about the problem being solved and what the right solution is.

Here is what the situation usually looks like:
- Stakeholder A: "This competitor app is doing it that way."
- Stakeholder B: "I showed this to my partner, and they didn't like it."
- Stakeholder C: "Let's rethink this as it won't be clear to users."

Some of the feedback above is valid, whereas other pieces are purely opinion-based, with no particular evidence or logical argument. It's your role as a designer to cut through the noise, eliminate pure opinion, debate where needed, and ultimately arrive at a solution that addresses the original problem, both for the business and the user.

I have a simple decision tree I've used throughout my career as a thought process when dealing with feedback from multiple stakeholders. It boils down to four questions:

🟢 Is it clear and specific? ↳ If not, clarify it.
🟢 Is it supported by evidence or logic? ↳ If not, debate it.
🟢 Will it help us meet the objective? ↳ If not, kindly disregard it.
🟢 Is it feasible? ↳ If not, save it as a fast-follow or future idea.

If all the checks above are met, it's worth actioning the feedback. That still doesn't mean you have to act on every single suggestion, but it does mean you can quickly narrow down to a much smaller pool of items to consider.

--

If you found this useful, consider reposting ♻️

What else have you found helpful in dealing with feedback from multiple stakeholders? Let me know in the comments 👇

PS: I'm working on a larger content piece on managing and working with stakeholders, dropping in the next few weeks. Find the link to the newsletter in the first comment.
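Because the four questions are asked in a fixed order, the decision tree above translates almost directly into a tiny triage function. This is a minimal sketch in Python; all names (`Feedback`, `triage`, the boolean fields) are hypothetical and just illustrate the flow from the post:

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    """One piece of stakeholder feedback (all field names are hypothetical)."""
    text: str
    is_specific: bool       # is it clear and specific?
    has_evidence: bool      # is it supported by evidence or logic?
    meets_objective: bool   # will it help us meet the objective?
    is_feasible: bool       # is it feasible right now?

def triage(item: Feedback) -> str:
    """Walk the four questions in order and return the recommended action."""
    if not item.is_specific:
        return "clarify"
    if not item.has_evidence:
        return "debate"
    if not item.meets_objective:
        return "disregard"
    if not item.is_feasible:
        return "backlog"  # save as a fast-follow or future idea
    return "action"       # all checks passed: worth considering
```

The order of the checks matters: there is no point debating feedback that hasn't been clarified yet, which is why the tree short-circuits at the first failing question.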
Designing with User Feedback
-
🎡 How To Run UX Workshops With Users (Scripts + Templates) (https://lnkd.in/evqDZSFe), a helpful overview of practical techniques to turn a verbal-only interview into a collaborative UX workshop — with sticky note mapping, solution drag’n’drop and voting. Put together by Laura Eiche-Laane. 👏🏽

🤔 Users and designers often speak different languages.
✅ Insights are clearer when you see users performing tasks.
✅ Switch question-answer sections with small visual tasks.
✅ Sticky note mapping: for user flows, journeys, org maps.
✅ Card sorting: organize data, filters, menu items into groups.
✅ Feature location: ask users where they’d expect a new feature.
✅ Drag’n’drop: ask users to design their own UI or page layout.
✅ Solution voting: get feedback on many design directions.
✅ When explaining a task, show what you’d like them to do.
✅ Track where users are undecided, and follow up in a debrief.

When I jump into a new project, I like to run walkthroughs with actual users as a way to understand the domain and the product. I simply ask them what the product does and how it helps them in their daily work. And then I invite them to show and explain it to me. I ask them to show how it works, the features they use, the quirks they’ve discovered, and the shortcuts and loopholes they rely on daily. Perhaps there is something where the product fails them, or something they wish was better, or something that is too fragile, confusing, complex or irrelevant.

That’s when insights emerge, and that’s when you might notice that the things said and the things done are not necessarily the same thing. Of course users sometimes exaggerate their struggles, but they rarely complain loudly about something that isn’t really an issue for them.
🗃️ Useful resources:
How And Why To Include Users In UX Workshops, by Maddie Brown https://lnkd.in/eKdd5GXp
UX Workshop Activities With Users, by Jonathon Juvenal https://lnkd.in/eJjpcibR
Remote UX Workshop Activities, by Jordan Bowman https://lnkd.in/e8wSMVwC
Usability Testing Templates (Scripts), by Slava Shestopalov https://lnkd.in/gZyBtK6u
UX Workshop Scripts + Templates https://theuxcookbook.com
UX Research Templates, by Odette Jansen https://lnkd.in/eqpXyGHH

---

🧲 Miro and Notion templates:
UX Research Templates (Miro), by ServiceNow https://lnkd.in/e48nKzKA
Miro Templates For Designers https://lnkd.in/e8Hkp-ws
Notion Templates For Designers https://lnkd.in/en_VBc6r

#ux #design
-
Design reviews aren’t about proving your design is “right.” They’re about sparking the right conversations, surfacing blind spots, and aligning your work with both the business and the user.

But here’s the thing: the quality of the questions you ask directly shapes the quality of the feedback you’ll receive.

When you ask questions that seek approval, you invite surface-level reactions:
“I don’t like that color.”
“Can you move this button?”
“It doesn’t feel right.”

When you ask questions that seek perspective, you unlock insights that go much deeper:
“Does this flow align with the goals we set?”
“Which part of this journey feels riskiest for launch?”
“What business constraints should we keep in mind?”

That’s the shift:
❌ Approval → opinions
✅ Perspective → alignment, priorities, and actionable feedback

Strong designers don’t just show screens. They guide the conversation by asking thoughtful, open questions that:
- Clarify the “why” behind feedback
- Dig into what truly matters for success
- Encourage stakeholders to connect feedback back to goals

That’s how design reviews stop feeling like a defensive battle and start becoming a collaboration that moves everyone forward. Because when you stop asking “Do you like it?” and start asking “How does this support our goals?”, you elevate both the conversation and the design.
-
I used to think user research was easy. But then I switched to B2B. And oh boy... reality hit hard.

Back when I was working on a B2C product, I could run 10 user interviews in a day. Users would happily spend 45 minutes answering questions and testing new designs. I thought this was just regular product design. Turns out, I was riding a perfect wave of continuous discovery without even realizing it.

Then I switched to B2B. And I admit it really felt scary at first. Users were just too busy to pick up my phone calls. It took 3 weeks to schedule 5 calls. Some users left a bad CSAT score with barely any comment. Damn.

How can we build anything serious without ever talking to users? At the time, it really felt like an impossible task. And any way I tried to put it, there was just no efficient process to get those users on the phone.

But then it hit me. What if the best discovery touchpoints weren’t designers or PMs at all? What if they were already happening… in sales calls, support chats, internal Slack threads? We had this feedback scattered across tools, threads, and people. But no one was making sense of it.

So we built a Feedback Management System. We piped every piece of feedback into a single source of truth directly in Notion:
- Intercom conversations and Modjo calls with customers
- Internal tickets from sales and support about user pain points or feature requests
- User feedback forms submitted on the platform

All filtered and organized per team through Notion automations. Each designer spends 2 hours per week turning raw feedback into structured insights. Then each team reviews it together weekly, and it feeds product decisions and the roadmap.

It’s simple. It’s scalable. And it changed everything. Product designers no longer design based on shaky assumptions or partial data. They’re now the source of customer truth and alignment.

In B2B, discovery doesn’t happen in a lab. It happens in the wild. You just need to know where to listen.
#productdesign #uxdesign #userresearch
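The core of a setup like the one described above is normalizing feedback from several channels into one record shape and routing it per team. Here is a minimal, tool-agnostic sketch in Python; the record fields and function names are my own illustration, not a real Notion or Intercom schema:

```python
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class FeedbackItem:
    """One normalized piece of feedback (hypothetical fields, not a real schema)."""
    source: str   # e.g. "intercom", "modjo", "internal_ticket", "form"
    team: str     # team the feedback should be routed to
    text: str
    tags: list = field(default_factory=list)

def route_by_team(items):
    """Group normalized feedback per team, mimicking the per-team automations."""
    buckets = defaultdict(list)
    for item in items:
        buckets[item.team].append(item)
    return dict(buckets)
```

The value is less in the grouping itself than in forcing every channel (support chat, sales call, form submission) through the same normalization step, so the weekly review looks at one consistent source of truth.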
-
Building a product isn’t just about solving a problem - it’s about ensuring you solve the right problem, in a way that resonates with your users. Yet so many products miss the mark, often because the feedback from the people who matter most - users - isn’t prioritized.

The key to a great product lies in its alignment with real user needs. Ignoring feedback can lead to building features that no one uses or overlooking pain points that drive users away. In fact, 42% of startups fail because their products don’t address a genuine market need (source: CB Insights).

Starting with a Minimal Desirable Product (MDP) can help. This isn’t about launching the simplest version of your idea, but about delivering something functional that still brings delight - encouraging users to engage and share their insights.

How to Integrate Feedback Effectively
- Observe User Behavior: Watch how users interact with your product. Are there steps where they hesitate or struggle? Their actions often tell you more than their words.
- Ask the Right Questions: Use surveys and interviews to go beyond surface-level feedback. Open-ended questions can reveal frustrations or desires you hadn’t anticipated.
- Iterate, Don’t Hesitate: Apply feedback to refine your product. Prioritize changes that align with user needs and eliminate features that don’t serve a purpose.
- Keep Listening: The market evolves, and so do user preferences. Regularly revisiting feedback ensures your product stays relevant.

The Hidden Cost of Ignoring Feedback
A study from Harvard Business Review shows that 35% of product features are never used, and 19% are rarely used. That’s not just a waste of resources - it’s a missed opportunity to deliver real value.

Let’s be honest: integrating feedback is hard work. It’s not a one-time task but an ongoing commitment. Negative feedback can be tough to hear, but it’s often where the biggest opportunities for improvement lie.

Great products are never built in isolation.
How do you incorporate user feedback into your product journey? #innovation #technology #future #management #startups
-
Stop asking clients "what's your feedback?"

Well, I don't mean don't ask for feedback. Obviously you should. But "what do you think?" is an open invitation to chaos. I made a small cheat sheet in Framer that you can bookmark for your next design review.

Every designer has lived this meeting: you present a refined brand concept and someone reopens the logo discussion. Someone else mentions a competitor. The color debate starts again. Suddenly the entire project is back at square one and you're playing design ping-pong with six people who all have different opinions about blue.

The problem is that nobody defined WHAT kind of feedback the work actually needs right now. One trick I learned at IDEO is naming the feedback mode at the beginning of every session. Not "any thoughts?" but what kind of thinking we're doing today.

Here's the framework I use:

[Inspire mode] When we're exploring what the brand could become, ask questions like:
→ Which references feel closest to your ambition?
→ Which ones feel completely wrong?
→ Where should this brand sit culturally — more institutional or more experimental?

[Challenge mode] When we need to stress-test the concept, ask:
→ Does this feel too safe or too bold for where the company is today?
→ What objections would users or investors raise?
→ Would this still feel right if the company scaled 10×?

[Decide mode] When it's time to commit, ask:
→ Which direction best reflects the company's future, not just today?
→ What trade-offs come with this choice?
→ If we shipped this tomorrow, would you defend it publicly?

[Refine mode] When the direction is right but the details need tuning, ask:
→ What parts feel strongest?
→ Where does something feel slightly off — even if you can't articulate why?
→ Where do you want more clarity or emphasis?

[Polish mode] When the work is almost ready to ship, ask:
→ Anything unclear before launch?
→ Are there key use cases we haven't stress-tested?
→ Anything that makes you nervous about rollout?
Once I started doing this, feedback sessions stopped being fight-or-flight situations. And the framing can be very simple in practice! For example:

“For this review I’d love to stay in inspiration mode. I’m not looking for approval yet — I’m trying to understand what territory feels right for the brand. Which of these directions feels closest to your ambition, and which ones feel completely wrong?”

Or later in the project:

“Today we’re in refine mode. The concept is already chosen, so I’m mostly looking for signals on details — what parts feel strongest, and where something feels slightly off.”

A tiny shift in framing, but it changes the entire conversation. I hope it might save you from at least one unnecessary “I don’t like this shade of blue” debate!
-
User experience surveys are often underestimated. Too many teams reduce them to a checkbox exercise - a few questions thrown in post-launch, a quick look at average scores, and then back to development. But that approach leaves immense value on the table. A UX survey is not just a feedback form; it’s a structured method for learning what users think, feel, and need at scale - a design artifact in its own right.

Designing an effective UX survey starts with a deeper commitment to methodology. Every question must serve a specific purpose aligned with research and product objectives. This means writing questions with cognitive clarity and neutrality, minimizing effort while maximizing insight. Whether you’re measuring satisfaction, engagement, feature prioritization, or behavioral intent, the wording, order, and format of your questions matter. Even small design choices, like using semantic differential scales instead of Likert items, can significantly reduce bias and enhance the authenticity of user responses.

When we ask users, "How satisfied are you with this feature?" we might assume we're getting a clear answer. But subtle framing, mode of delivery, and even time of day can skew responses. Research shows that midweek deployment, especially on Wednesdays and Thursdays, significantly boosts both response rate and data quality. In-app micro-surveys work best for contextual feedback after specific actions, while email campaigns are better for longer, reflective questions - if properly timed and personalized.

Sampling and segmentation are not just statistical details - they’re strategy. Voluntary surveys often over-represent highly engaged users, so proactively reaching less vocal segments is crucial. Carefully designed incentive structures (that don't distort motivation) and multi-modal distribution (like combining in-product, email, and social channels) offer more balanced and complete data.

Survey analysis should also go beyond averages. Tracking distributions over time, comparing segments, and integrating open-ended insights lets you uncover both patterns and outliers that drive deeper understanding. One-off surveys are helpful, but longitudinal tracking and transactional pulse surveys provide trend data that allows teams to act on real user sentiment changes over time.

The richest insights emerge when we synthesize qualitative and quantitative data. An open comment field that surfaces friction points, layered with behavioral analytics and sentiment analysis, can highlight not just what users feel, but why.

Done well, UX surveys are not a support function - they are core to user-centered design. They can help prioritize features, flag usability breakdowns, and measure engagement in a way that's scalable and repeatable. But this only works when we elevate surveys from a technical task to a strategic discipline.
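The "go beyond averages" point is easy to make concrete: two respondent groups can share the same mean rating while having opposite response shapes (one polarized, one uniform). A minimal Python sketch, with function names of my own choosing, that compares full 1–5 distributions per segment instead of a single average:

```python
from collections import Counter

def response_distribution(responses):
    """Proportion of each 1-5 rating, rather than a single mean."""
    counts = Counter(responses)
    total = len(responses)
    return {score: counts.get(score, 0) / total for score in range(1, 6)}

def compare_segments(by_segment):
    """Per-segment distributions; identical means can hide very different shapes."""
    return {segment: response_distribution(r) for segment, r in by_segment.items()}
```

For example, `[5, 5, 1, 1, 3]` and `[3, 3, 3, 3, 3]` both average 3.0, but the first is a polarized segment worth investigating and the second is genuinely neutral; only the distribution view tells them apart.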
-
User research is great, but what if you do not have the time or budget for it?

In an ideal world, you would test and validate every design decision. But that is not always the reality. Sometimes you do not have the time, access, or budget to run full research studies. So how do you bridge the gap between guessing and making informed decisions?

These are some of my favorites:

1️⃣ Analyze drop-off points: Where users abandon a flow tells you a lot. Are they getting stuck on an input field? Hesitating at the payment step? Running into bugs? These patterns reveal key problem areas.

2️⃣ Identify high-friction areas: Where users spend the most time can be good or bad. If a simple action is taking too long, that might signal confusion or inefficiency in the flow.

3️⃣ Watch real user behavior: Tools like Hotjar (by Contentsquare) or PostHog let you record user sessions and see how people actually interact with your product. This exposes where users struggle in real time.

4️⃣ Talk to customer support: They hear customer frustrations daily. What are the most common complaints? What issues keep coming up? This feedback is gold for improving UX.

5️⃣ Leverage account managers: They are constantly talking to customers and solving their pain points, often without looping in the product team. Ask them what they are hearing. They will gladly share everything.

6️⃣ Use survey data: A simple Google Forms, Typeform, or Tally survey can collect direct feedback on user experience and pain points.

7️⃣ Reference industry leaders: Look at existing apps or products with similar features to what you are designing. Use them as inspiration to simplify your design decisions. Many foundational patterns have already been solved, so there is no need to reinvent the wheel.

I have used all of these methods throughout my career, but the trick is knowing when to use each one and when to push for proper user research. That comes with time. That said, not every feature or flow needs research. Some areas of a product are so well understood that testing does not add much value.

What unconventional methods have you used to gather user feedback outside of traditional testing?

_______

👋🏻 I’m Wyatt — designer turned founder, building in public & sharing what I learn. Follow for more content like this!
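The first method in the list above, analyzing drop-off points, is essentially a funnel calculation over step-level counts that most analytics tools already export. A minimal Python sketch (the function name and step names are illustrative, not tied to any specific tool):

```python
def funnel_dropoff(step_counts):
    """Given ordered (step_name, users_reached) pairs, report the users lost
    and the drop-off rate at each transition in the flow."""
    report = []
    for (_, prev_n), (name, n) in zip(step_counts, step_counts[1:]):
        lost = prev_n - n
        rate = lost / prev_n if prev_n else 0.0
        report.append((name, lost, round(rate, 3)))
    return report

# Hypothetical checkout flow: half of the users who reach the form
# abandon at payment, which flags that step for closer inspection.
steps = [("landing", 1000), ("form", 600), ("payment", 300), ("done", 270)]
```

Drop-off is computed relative to the previous step rather than the first one, because a 50% loss at payment is a stronger signal than the same absolute loss spread over the whole funnel.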
-
I posted this image last month and a lot of people asked for a breakdown — not the theory, but how each stage actually works in a real project.

Here’s the reminder this visual was meant to give: Understand → Ideate → Test → Implement is not a straight line. It’s a loop. You return to previous stages every time new data proves you wrong.

Example from my own work: I was designing a dashboard for a SaaS product. The UI looked polished and was already “ready for handoff,” until usability testing showed that 4 out of 6 users couldn’t correctly interpret the main metric. So we had to loop back:

→ Understand: clarify the user mental model
→ Ideate: restructure hierarchy + labels
→ Test: validate again with a quick prototype
→ Implement: only then ship the updated version

The design didn’t change visually — the clarity did. Task success rate went from 42% to 91%. That’s real UX. Not a clean slide with arrows — but constant, informed rewinding.

A few things people underestimate in real projects:
• “Understand” is not only interviews — it’s business goals, constraints, and success criteria
• “Ideate” is not Dribbble-style wireframes — it’s structured problem solving
• “Test” is not just moderated sessions — analytics, heatmaps, and field feedback count too
• “Implement” doesn’t end at handoff — onboarding, content, states, and accessibility are still design

The process doesn’t fail. What fails is expecting it to work in one direction.

What is your take on this?

#uxdesign #productdesign #designprocess #userexperience #uxresearch #uidesign #uxworkflow #designthinking #uxstrategy #usabilitytesting #saasdesign #uxcasestudy