How do you figure out what truly matters to users when you’ve got a long list of features, benefits, or design options - but only a limited sample size and even less time? A lot of UX researchers use Best-Worst Scaling (or MaxDiff) to tackle this. It’s a great method: simple for participants, easy to analyze, and far better than traditional rating scales. But when the research question goes beyond basic prioritization - like understanding user segments, handling optional features, factoring in pricing, or capturing uncertainty - MaxDiff starts to show its limits. That’s when more advanced methods come in, and they’re often more accessible than people think.
- Anchored MaxDiff adds a must-have vs. nice-to-have dimension that turns relative rankings into more actionable insights.
- Adaptive Choice-Based Conjoint goes further by learning what matters most to each respondent and adapting the questions accordingly - ideal when you're juggling 10+ attributes.
- Menu-Based Conjoint works especially well for products with flexible options or bundles, like SaaS platforms or modular hardware, helping you see what users are likely to select together.
- Latent Class Models can uncover hidden segments by clustering users based on their underlying choice patterns - useful if you suspect different mental models among your users.
- TURF analysis is a lifesaver when you need to pick a few features that will have the widest reach across your audience, often used in roadmap planning.
- Bayesian Truth Serum adds a layer of statistical correction that can help de-bias sensitive data when you need to account for how confident or honest people are in their responses.
- Gabor-Granger techniques and price-anchored conjoint models give you insight into willingness-to-pay without running a full pricing study, if you want to tie preferences to price.
These methods all work well with small-to-medium sample sizes, especially when paired with Hierarchical Bayes or latent class estimation, making them a perfect fit for fast-paced UX environments where stakes are high and clarity matters.
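The TURF idea above is concrete enough to sketch: given which respondents find each feature appealing, greedily pick the feature that adds the most previously unreached respondents, repeat until you have your shortlist. A minimal sketch, with invented toy data (real TURF tools also report exhaustive combinations, not just the greedy pick):

```python
# Greedy TURF sketch: choose k features maximizing the share of
# respondents "reached" by at least one chosen feature.

def turf_greedy(appeal, k):
    """appeal: list of sets; appeal[i] = ids of respondents who find feature i appealing."""
    chosen, reached = [], set()
    for _ in range(k):
        # Pick the unchosen feature adding the most new respondents.
        best = max(
            (i for i in range(len(appeal)) if i not in chosen),
            key=lambda i: len(appeal[i] - reached),
        )
        chosen.append(best)
        reached |= appeal[best]
    return chosen, reached

# Toy data: 6 respondents (0-5), 4 candidate features.
appeal = [
    {0, 1, 2},   # feature 0
    {2, 3},      # feature 1
    {3, 4, 5},   # feature 2
    {0, 5},      # feature 3
]
chosen, reached = turf_greedy(appeal, 2)
print(chosen, len(reached) / 6)  # → [0, 2] 1.0  (features 0 and 2 together reach everyone)
```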
Enterprise User Research Techniques
Explore top LinkedIn content from expert professionals.
Summary
Enterprise user research techniques are methods used by companies to understand how people interact with complex systems, products, or services in business environments. These approaches help organizations uncover user needs, make informed decisions, and navigate restrictions like limited access to users or strict compliance rules.
- Map workflows: Examine existing processes, job roles, or service blueprints to reveal real-world challenges and opportunities for improvement.
- Probe deeply: Use semi-structured interviews and scenario questions to uncover underlying needs and motivations beyond surface-level feedback.
- Adapt research tools: Choose specialized techniques, such as menu-based conjoint or latent class analysis, to learn what matters most in environments with limited user access or sample sizes.
What strategies can we use to do #UserResearch about complex systems, particularly ones we're unfamiliar with? I like to use the following strategies: ✅ Pick pilot participants strategically ✅ Take a tour of the system through different POVs ✅ Ask for comparisons and metaphors ✅ Reflect on counterfactuals and rare scenarios ✅ Co-construct research documentation 1️⃣ Pick pilot participants strategically: We are more efficient as researchers when we have a tentative "outline" of what a system COULD look like before we dive into interview sessions. I like to use pilot participants to help brainstorm that outline, so I always try to recruit the following types of folks because they tend to have a better grasp of how a system works (or doesn't): 💡 Work in operations 💡 Have long tenures in that role or organization 💡 Do "glue work" (to quote Yvonne Lam) 2️⃣ Ask for a "tour" of the system through different perspectives and through progressively more nuanced explanations: For example, "How would you describe X to a new hire who is unfamiliar with the system but has a deep expertise in the work?" versus "someone who is more senior and removed from the everyday work?" You can also ask participants to “correct” your “misunderstanding” of the system by presenting them a lexicon, process map, diagram, etc. with a purposeful mistake. Observe what they correct (first) and what elicits an emotional response. I also appreciate Melanie Kahl's approach of asking about: 💡How things "really" happen 💡What common misunderstandings they have to constantly correct 💡What "informal roles" or invisible work enable things to happen 3️⃣ Ask for comparisons and metaphors: Comparisons--whether scenario-based or metaphorical--are a useful way to ground any abstract or complex system description participants offer. But it's important to remember that when asking participants to generate metaphors, you should also ask them to explain HOW and WHY each one fits.
The explanation is often more important than the metaphor itself. 4️⃣ Reflect on counterfactuals and rare scenarios: Particularly when interviewing "expert users", asking "what if" questions can clarify tacit knowledge, rules and requirements, red tape, and more. I also like this list of discussion points by Arvind Venkataramani: 💡Where is change easy and where is it difficult 💡What part/person if removed would cause breakage 💡What happens when this system is shocked/stressed 💡What is mysterious to them 5️⃣ Co-construct research documentation: Hand over the pen and paper or digital whiteboard and ask them to map out the system themselves. Observe: What do they start with? What do they designate as foundational elements? What do they center versus put on the periphery? #PracticalEthics #UXResearch #QualitativeResearch #UX #systemsthinking
-
🔬 How To Run UX Research In B2B and Enterprise. Practical techniques of what you can do in strict environments, often without access to users. 🚫 Things you typically can’t do 1. Stakeholder interviews ← unavailable 2. Competitor analysis ← not public 3. Data analysis ← no data collected yet 4. Usability sessions ← no users yet 5. Recruit users for testing ← expensive 6. Interview potential users ← IP concerns 7. Concept testing, prototypes ← NDA 8. Usability testing ← IP concerns 9. Sentiment analysis ← no media presence 10. Surveys ← no users to send to 11. Get support logs ← no security clearance 12. Study help desk tickets ← no clearance 13. Use research tools ← no procurement yet ✅ Things you typically can do 1. Focus on requirements + task analysis 2. Study existing workflows, processes 3. Study job postings to map roles/tasks 4. Scrape frequent pain points, challenges 5. Use Google Trends for related search queries 6. Scrape insights to build a service blueprint 7. Find and study people with similar tasks 8. Shadow people performing similar tasks 9. Interview colleagues closest to business 10. Test with customer success, domain experts 11. Build an internal UX testing lab 12. Build trust and confidence first In B2B, people buying a product are not always the same people who will use it. As B2B designers, we have to design at least 2 different types of experiences: the customer’s UX (of the supplier) and the employee’s UX (of end users of the product). In the customer’s UX, we typically work within a highly specialized domain, along with legacy-ridden systems and strict compliance and security regulations. You might not speak with the stakeholder, but rather company representatives — who regulate the flow of data they share to manage confidentiality, IP and risk. In the employee’s UX, it doesn’t look much brighter. We can rarely speak with users, and if we do, often there is only a handful of them.
Due to security clearance limitations, we don’t get access to help desk tickets or support logs — and there are rarely any similar public products we could study. As H Locke rightfully noted, if we shine the light strongly enough from many sources, we might end up getting a glimpse of the truth. Scout everything to see what you can find. Find people who are the closest to your customers and to your users. Map the domain and workflows in service blueprints. Most importantly: start small and build a strong relationship first. In B2B and Enterprise, most actors are incredibly protective and cautious, often carefully manoeuvring compliance regulations and layers of internal politics. No stone will be moved unless there is strong mutual trust on both sides. It can be frustrating, but also remarkably impactful. B2B relationships are often long-term relationships for years to come, allowing you to make a huge impact for people who can’t choose what they use and desperately need your help to do their work better. [continues in comments ↓] #ux #b2b
-
99% of UX leaders think the next generation of high-performing researchers are methodological purists who live and die by textbook approaches. They're wrong. The next generation of high-performing UX researchers are strategic rule-breakers. Here are 7 counterintuitive practices that separate elite researchers from the rest in 2025: 1. They strategically ask leading questions Most researchers are taught to avoid leading questions at all costs. Elite researchers? They use them deliberately. Top researchers know that after establishing rapport and collecting unbiased feedback, a strategic leading question can validate hypotheses and push participants beyond rehearsed answers. The difference is knowing exactly when and why you're using them. 2. They work with small sample sizes (and know when not to) Elite researchers calibrate sample size to the specific research question, not some arbitrary standard. For early-stage discovery? They might talk to just 3 users. For validating a major pivot? They'll insist on proper representation. 3. They embrace their biases instead of pretending to be neutral The best don't pretend to eliminate bias—they document their assumptions before starting, openly discuss how their perspectives might influence analysis, and invite diverse viewpoints during synthesis to counterbalance their inevitable blind spots. 4. They sometimes skip talking to users entirely When existing data can answer the question faster and more accurately, elite researchers leverage analytics, support tickets, previous research, and behavioral data. They save primary research for questions that truly require it, maximizing their impact per hour invested. 5. They operate as strategic partners, not order-takers Elite researchers proactively identify business questions that research can solve. 
The top performers consistently align their work to critical business decisions, revenue opportunities, and strategic priorities to make their role indispensable rather than optional. 6. They design solutions, not just find problems Traditional researchers stop at identifying problems. Elite researchers actively participate in solution development. The best researchers facilitate ideation, prototype early concepts, and use their deep user understanding to shape solutions without overstepping their role or boxing out designers and product managers. 7. They craft research deliverables like marketers Elite researchers create tailored experiences for each stakeholder. Top performers know an executive needs a different format than a designer. They deliver 5-minute video highlight reels for busy VPs, interactive workshops for product teams, and annotated journey maps for designers, all from the same research. What unexpected trait has made the biggest difference in your research practice? And which of these seven traits challenges your current approach the most?
-
Last week, I coached a product team through a user interview debrief. They were excited! Users had shown enthusiasm for a new feature! 🎉 But when I asked, “What problem does this solve for them?” the room went quiet. 🫣 This happens more often than we’d like to admit. 🧠 The Trap: Mistaking Enthusiasm for Validation When users say, “That sounds great!” we often interpret it as validation. But here's the catch: - Users want to be polite. - They might not fully understand their own needs. - As product teams, we may hear what we want. This is why relying solely on user enthusiasm can lead us astray. 🔍 The Solution: Semi-Structured Interviews We need to dig deeper to truly understand our users. Semi-structured interviews strike the right balance between guidance and flexibility. Key practices include: - Start with hypotheses: Identify what you believe to be true. - Ask open-ended questions: Encourage users to share experiences, not just opinions. - Listen actively: Pay attention to what’s said—and what’s not. - Probe for underlying needs: Seek to understand the 'why' behind their behaviours. This approach helps uncover genuine insights, leading to solutions that truly resonate. 🌟 Imagine the Impact By adopting this method: - Teams build products that solve real problems. - User satisfaction increases. - Resources are invested wisely, reducing wasted effort. It's not just about building features—it's about delivering value. 🦾 Take Action Next time you're planning user interviews: - Prepare a set of hypotheses. - Design questions that explore user experiences. - Remain open to unexpected insights. Remember, the goal is to deeply understand your users, not just confirm your assumptions.
-
While it’s easy to believe that customers are the ultimate experts on their own needs, there are ways to surface insights and knowledge that customers may not be aware of or able to articulate directly. Customers remain the ultimate source of truth about their needs, but product managers can complement this knowledge by employing a combination of research, data analysis, and empathetic understanding to build a more comprehensive picture of customer needs and expectations. The goal is not to know more than customers but to use various tools and methods to gain insights that can lead to building better products and delivering exceptional user experiences. ➡️ User Research: Conducting thorough user research, such as interviews, surveys, and observational studies, can reveal underlying needs and pain points that customers may not have fully recognized or articulated. By learning from many users, we gain holistic, deeper insights into their motivations and behaviors. ➡️ Data Analysis: Analyzing user data, including behavioral data and usage patterns, can provide valuable insights into customer preferences and pain points. By identifying trends and patterns in the data, product managers can make informed decisions about what features or improvements are most likely to address customer needs effectively. ➡️ Contextual Inquiry: Observing customers in their real-life environment while using the product can uncover valuable insights into their needs and challenges. Contextual inquiry helps product managers understand the context in which customers use the product and how it fits into their daily lives. ➡️ Competitor Analysis: By studying competitors and their products, product managers can identify gaps in the market and potential unmet needs that customers may not even be aware of. Understanding what competitors offer can inspire product improvements and innovation.
➡️ Surfacing Implicit Needs: Sometimes, customers may not be able to express their needs explicitly, but through careful analysis and empathetic understanding, product managers can infer these implicit needs. This requires the ability to interpret feedback, observe behaviors, and understand the context in which customers use the product. ➡️ Iterative Prototyping and Testing: Continuously iterating and testing product prototypes with users allows product managers to gather feedback and refine the product based on real-world usage. Through this iterative process, product managers can uncover deeper customer needs and iteratively improve the product to meet those needs effectively. ➡️ Expertise in the Domain: Product managers, industry thought leaders, academic researchers, and others with deep domain knowledge and expertise can anticipate customer needs based on industry trends, best practices, and a comprehensive understanding of the market. #productinnovation #discovery #productmanagement #productleadership
-
Your UX research is lying to you. And no, I'm not talking about small data inconsistencies. I've seen founders blow $100K+ on product features their users "desperately wanted" only to face 0% adoption. Most research methods are fundamentally flawed because humans are terrible at predicting their own behavior. Here's the TRUTH framework I've used to get accurate user insights: T - Test with money, not words • Never ask "would you use this?" • Instead: "Here's a pre-order link for $50" • Watch what they do, not what they say R - Real environment observations • Stop doing sterile lab tests • Start shadowing users in their natural habitat • Record their frustrations, not their feedback U - Unscripted conversations • Ditch your rigid question list • Let users go off on tangents • Their random rants reveal gold T - Track behavior logs • Implement analytics BEFORE research • Compare what users say vs. what they do • Look for patterns, not preferences H - Hidden pain mining • Users can't tell you their problems • But they'll show you through workarounds • Document their "hacks" - that's where innovation lives STOP: • Running bias-filled focus groups • Asking leading questions • Taking feedback at face value • Rushing to build based on opinions START: • Following the TRUTH framework • Measuring actions over words • Building only what users prove they need PS: Remember, Henry Ford reportedly said that if he had asked people what they wanted, they would have said "faster horses." Don't ask what they want. Watch what they do. Follow me, John Balboa. I swear I'm friendly and I won't detach your components.
-
Founders often say they don't do research. "I don't have time for that." "I know my users so I don't need it." But if you do any of these things, you're already participating in research: 1/ You read customer support tickets You're identifying patterns in pain points and understanding where your product is falling short 2/ You listen to Gong calls You're gathering insights on what resonates with your ICP and what objections keep coming up 3/ You check your product analytics You're tracking user behavior to understand what features drive engagement and where people drop off 4/ You A/B test your pricing page You're testing hypotheses about what messaging converts better with your audience 5/ You send out NPS surveys You're measuring customer satisfaction and identifying promoters versus detractors 6/ You run beta programs You're validating product concepts before you scale them to your full user base 7/ You ask "why did you choose us?" You're uncovering the jobs-to-be-done that your product fulfills for customers 8/ You track churn and interview people who leave You're understanding what drives customers away so you can fix it 9/ You read competitor reviews You're mapping where the market has gaps and where you can differentiate 10/ You prototype before you build You're validating demand and testing assumptions about what users actually need 11/ You check which email subject lines get opened You're experimenting with messaging that resonates with your audience 12/ You monitor what features get requested most You're prioritizing your roadmap based on patterns in customer demand 13/ You ask customers for testimonials You're discovering what value they're actually getting from your product in their own words 14/ You test landing page variations You're optimizing how you communicate value to drive conversions 15/ You have regular check-ins with your ICP You're building continuous feedback loops to stay connected to how their needs evolve Even if you don't think of it as 
conducting research, you can consider it an exercise in building empathy.
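Several items in the list above hide simple, checkable arithmetic. NPS (item 5), for instance, is just the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6). A minimal sketch, with made-up sample scores:

```python
def nps(scores):
    """Net Promoter Score from 0-10 survey ratings:
    % promoters (9-10) minus % detractors (0-6), rounded."""
    n = len(scores)
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / n)

# 3 promoters (10, 9, 9), 2 detractors (6, 3) out of 7 responses.
print(nps([10, 9, 9, 8, 7, 6, 3]))  # → 14
```

Passives (7-8) count toward the denominator but neither bucket, which is why two surveys with the same average score can produce very different NPS values.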
-
💡Atomic UX Research Atomic UX research is a framework created by Daniel Pidcock and Marielle de Geest that provides a structured approach for organizing and utilizing UX research insights effectively. Atomic UX Research arose from the challenges of managing research insights in large organizations. Traditional approaches to research made it difficult for teams to leverage past research findings to provide valuable recommendations. Atomic UX breaks the research process down into four main components: Experiments, Facts, Insights, and Recommendations. 1️⃣ Experiments (Where we learned it) Describes methods or activities that a team uses to collect data, such as user testing, interviews, and data analytics systems like Google Analytics. Methods and activities form the basis for gathering evidence that leads to deeper insights. The goal is to collect data from various methods to gain insights into user behavior and preferences. 2️⃣ Fact (What we learned) This component summarizes what was observed or discovered during the research as plain facts. It can include quotes from users, observations, or statistics. Facts are objective observations or results obtained from the experiments. Facts should be: ✔ Unbiased and free from assumptions. ✔ Concise and easy to read. ✔ Contextualized, showing where and how they were discovered. 3️⃣ Insight (Why we think we found this) Insights interpret the facts to infer underlying reasons for user behavior. They provide a more holistic understanding by connecting multiple facts. It's all about analyzing the facts to understand the underlying reasons or patterns. You need to consider context, cause, and effect to explain user behavior. Insights should be: ✔ Easy to understand and self-explanatory. ✔ Applicable to relevant situations (e.g., different features or user groups).
Insights are classified into: ✔ Principle-based (generally applicable) ✔ Strategic (helpful for broader plans) ✔ Tactical (specific to a feature) 4️⃣ Recommendation (How we'll proceed) Recommendations are action items based on insights. They propose how to address identified problems or improve the user experience. The strength of a recommendation depends on the number and quality of insights that support it. Recommendations should be: ✔ Easy to understand. ✔ Evidence-based, taking into account both supporting and opposing evidence. ✔ Measurable and testable. ✔ Prioritized with clear status and steps. #UX #uxresearch #uxdesign #design #productdesign #research
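The four components form a simple chain: recommendations cite insights, insights cite facts, and facts cite the experiment they came from. One way to see that structure is as a toy data model; the class and field names below are my own illustration, not part of the published framework, and the sample data is invented:

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:          # Where we learned it
    method: str            # e.g. "usability test", "interview"
    date: str

@dataclass
class Fact:                # What we learned
    statement: str
    source: Experiment     # every fact traces back to an experiment

@dataclass
class Insight:             # Why we think we found this
    interpretation: str
    kind: str              # "principle", "strategic", or "tactical"
    facts: list = field(default_factory=list)

@dataclass
class Recommendation:      # How we'll proceed
    action: str
    insights: list = field(default_factory=list)

    def strength(self):
        # Crude proxy for the text's point: recommendations backed
        # by more insights are stronger.
        return len(self.insights)

# Invented example chain.
e = Experiment(method="usability test", date="2024-05-01")
f = Fact(statement="4 of 6 participants skipped onboarding", source=e)
i = Insight(interpretation="Onboarding feels optional", kind="tactical", facts=[f])
r = Recommendation(action="Make onboarding resumable", insights=[i])
print(r.strength())  # → 1
```

Keeping facts linked to their experiment is what makes past research reusable: a new recommendation can cite old facts without re-running the study.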