One of the biggest challenges in UX research is understanding what users truly value. People often say one thing but behave differently when faced with actual choices. Conjoint analysis helps bridge this gap by analyzing how users make trade-offs between different features, enabling UX teams to prioritize effectively. Unlike direct surveys, conjoint analysis presents users with realistic product combinations, capturing their genuine decision-making patterns. When paired with advanced statistical and machine learning methods, this approach becomes even more powerful and predictive.

Choice-based models like Hierarchical Bayes estimation reveal individual-level preferences, allowing tailored UX improvements for diverse user groups. Latent Class Analysis further segments users into distinct preference categories, helping design experiences that resonate with each segment. Advanced regression methods enhance accuracy in predicting user behavior. Mixed Logit Models recognize that different users value features uniquely, while Nested Logit Models address hierarchical decision-making, such as choosing a subscription tier before specific features.

Machine learning techniques offer additional insights. Random Forests uncover hidden relationships between features - like those that matter only in combination - while Support Vector Machines classify users precisely, enabling targeted UX personalization. Bayesian approaches manage the inherent uncertainty in user choices. Bayesian Networks visually represent interconnected preferences, and Markov Chain Monte Carlo methods handle complexity, delivering more reliable forecasts.

Finally, simulation techniques like Monte Carlo analysis allow UX teams to anticipate user responses to product changes or pricing strategies, reducing risk. Bootstrapping further strengthens findings by testing the stability of insights across multiple simulations.
By leveraging these advanced conjoint analysis techniques, UX researchers can deeply understand user preferences and create experiences that align precisely with how users think and behave.
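The bootstrapping idea mentioned above can be sketched in a few lines of Python. Everything here is hypothetical for illustration: the "choices" list stands in for 100 conjoint responses where 1 means the respondent picked the concept containing a feature of interest, and we resample to see how stable the estimated preference share is.

```python
import random
import statistics

random.seed(42)
# Hypothetical choice data: 1 = picked the concept with the feature,
# 0 = picked an alternative without it (62% observed preference share).
choices = [1] * 62 + [0] * 38

def bootstrap_share(data, n_resamples=2000):
    """Resample choices with replacement and re-estimate the share each time."""
    shares = []
    for _ in range(n_resamples):
        sample = [random.choice(data) for _ in range(len(data))]
        shares.append(sum(sample) / len(sample))
    shares.sort()
    # 95% percentile interval from the bootstrap distribution
    lo = shares[int(0.025 * n_resamples)]
    hi = shares[int(0.975 * n_resamples)]
    return statistics.mean(shares), (lo, hi)

mean_share, (lo, hi) = bootstrap_share(choices)
print(f"preference share ~ {mean_share:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

If the interval is wide, the "insight" that users prefer the feature may not survive resampling, which is exactly the stability check the post describes.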
How to Analyze User Behavior With Data
Explore top LinkedIn content from expert professionals.
Summary
Analyzing user behavior with data means studying how people interact with products, services, or platforms by examining the digital traces they leave—like clicks, purchases, and navigation patterns—to better understand what drives their choices. This process combines both what users say and what they do, uncovering hidden trends that help businesses improve experiences and make smarter decisions.
- Combine data sources: Integrate behavioral data, such as transaction logs, session recordings, and profile information, to reveal deeper patterns in user decision-making.
- Frame clear hypotheses: Approach user data with curiosity, posing specific questions about behavior and testing those against multiple metrics and user segments.
- Investigate beyond surface metrics: Dive below basic statistics like conversion rates or abandoned carts to find root causes and actionable insights using techniques like segment analysis, heat maps, and session replays.
🚨 The greatest drop-off is from the Product Display Page to the Cart Page, so we must improve our Product Display Page! Not so fast ✋

In today's age of data obsession, almost every company has an analytics infrastructure that pumps out a tonne of numbers. But rarely do teams invest the time, discipline & curiosity to interpret those numbers meaningfully. I will illustrate with an example. Let's take a simple e-commerce funnel:

Home Page ~ 100 users
List Page ~ 90 users
Product Display Page ~ 70 users
Cart Page ~ 20 users
Address Page ~ 15 users
Payments Page ~ 12 users
Order Confirmation Page ~ 9 users

A team that just "looks" at data will immediately conclude that the drop-off is steepest between the Product Display Page and the Cart Page. As a consequence, they will pour a lot of firepower into solving user problems on the Product Display Page. But a team that is data "curious" would frame hypotheses such as "do certain types of users reach the cart page more effectively than others?" and go on to look at users by purchase bucket, geography, category etc., examining the entire funnel end to end to observe patterns. In the above scenario, it's likely that the 20 cart users were power users, whilst new & early purchasers don't make it to this stage. The reason could be poor recommendations on the list page, or customers may only be visiting the product display page to see a larger close-up of the product.

So how should one go about looking at data?

Do
✅ Start with an open & curious mind
✅ Start with hypotheses
✅ Identify metrics & counter-metrics that will help prove/disprove each hypothesis
✅ Identify the various dimensions that could influence behaviours - user type, geography, category, device type, gender, price point, day, time etc. The dimensions will be specific to your line of business.
✅ Check for data quality and consistency
✅ Look at upstream and downstream behaviour to see how the behaviour is influenced upstream and what happens to it downstream.
✅ Check for historical evidence of causality

Don't
❌ Look at data to satisfy your bias
❌ Rush to conclude your interpretation
❌ Look at data in isolation

TLDR - Be curious. Not confirmed. #metrics #analytics #productmanagement #productmanager #productcraft #deepdiveswithdsk
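A data-curious pass over a funnel like the one above can be automated. The per-segment counts below are hypothetical (invented so that the segment totals match the post's aggregate funnel, with the address/payments steps collapsed for brevity); the point is that the aggregate drop-off can hide very different per-segment stories:

```python
# Hypothetical per-segment funnel counts; totals match the aggregate
# funnel in the post: 100 -> 90 -> 70 -> 20 -> 9.
funnel = {
    "power_users": {"home": 25, "list": 24, "pdp": 22, "cart": 16, "order": 8},
    "new_users":   {"home": 75, "list": 66, "pdp": 48, "cart": 4,  "order": 1},
}
steps = ["home", "list", "pdp", "cart", "order"]

def step_conversion(counts):
    """Conversion rate between each pair of consecutive funnel steps."""
    return {f"{a}->{b}": counts[b] / counts[a] for a, b in zip(steps, steps[1:])}

worst_step = {}
for segment, counts in funnel.items():
    rates = step_conversion(counts)
    worst_step[segment] = min(rates, key=rates.get)
    print(segment, {k: round(v, 2) for k, v in rates.items()})

print(worst_step)  # the steepest drop-off differs by segment
```

In this fabricated split, new users bleed out before the cart (poor list-page recommendations?) while power users stall between cart and order - two different problems that the aggregate funnel blends into one number.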
-
Crowning a New Term: “Iceberg Metrics” 🧊 ✨

I’m calling it: Iceberg Metrics represent KPIs that only reveal the tip of what’s really happening below the surface. Metrics like abandoned carts seem simple but often mask much more—checkout friction, hidden costs, trust issues, and more. To truly understand and optimize, we need to dig deeper. Here’s how to dive into the “iceberg” of abandoned cart rates:

1. Establish Baseline Metrics: Start by gathering data on current abandoned cart rates, session times, and bounce rates, using heat maps and session recordings to see where users drop off.
2. Segment the Audience: Analyze users by behavior (first-time vs. repeat visitors, mobile vs. desktop) and traffic source (organic, paid, email).
3. Form Experiment Hypotheses: Develop hypotheses for abandonment reasons—shipping costs, checkout friction, distractions, or lack of trust signals—and test them.
4. Run A/B Tests: Test variations like simplifying the checkout process, showing shipping costs earlier, adding trust badges, or retargeting abandoned cart emails.
5. Use Heat Maps & Session Recordings: Examine user behavior in real time. Look for confusion or hesitation, where users hover, and whether they engage with key information.
6. Contextualize Results: Analyze how changes impact overall user flow. Did simplifying checkout help, or did other metrics like bounce rate increase?
7. Take an Ecosystem Approach: Examine how tweaks affect the full journey—from product discovery to checkout—balancing short-term improvements with long-term goals like lifetime value.
8. Iterate: Refine solutions based on experiment findings and continuously optimize the customer journey.

This one’s mine, folks! #IcebergMetrics #OwnIt #DataDriven #EcommerceOptimization #NewMetricAlert

Cheers, Your cross-legged CAC and CLV buddy 🤗
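The A/B tests in the workflow above eventually need a significance check before you trust a lift. A minimal two-proportion z-test sketch in stdlib Python, using made-up counts for a control vs. a "shipping costs shown earlier" variant:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the normal approximation: p = erfc(|z| / sqrt(2))
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical experiment: 30.0% vs. 36.0% checkout completion.
z, p = two_proportion_z(conv_a=300, n_a=1000, conv_b=360, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these fabricated numbers the lift is significant at the usual 5% level; with smaller samples the same 6-point gap often is not, which is why step 6 (contextualizing results) matters.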
-
Most teams are just wasting their time watching session replays. Why? Because not all session replays are equally valuable, and many don’t uncover the real insights you need. After 15 years of experience, here’s how to find insights that can transform your product:

𝗛𝗼𝘄 𝘁𝗼 𝗘𝘅𝘁𝗿𝗮𝗰𝘁 𝗥𝗲𝗮𝗹 𝗜𝗻𝘀𝗶𝗴𝗵𝘁𝘀 𝗳𝗿𝗼𝗺 𝗦𝗲𝘀𝘀𝗶𝗼𝗻 𝗥𝗲𝗽𝗹𝗮𝘆𝘀

𝗧𝗵𝗲 𝗗𝗶𝗹𝗲𝗺𝗺𝗮: Too many teams pick random sessions, watch them from start to finish, and hope for meaningful insights. It’s like searching for a needle in a haystack. The fix? Start with trigger moments — specific user behaviors that reveal critical insights.
➔ The last session before a user churns.
➔ The journey that ended in a support ticket.
➔ The user who refreshed the page multiple times in frustration.

Select five sessions with these triggers using powerful tools like @LogRocket. Focusing on a few key sessions will reveal patterns without overwhelming you with data.

𝗧𝗵𝗲 𝗧𝗵𝗿𝗲𝗲-𝗣𝗮𝘀𝘀 𝗧𝗲𝗰𝗵𝗻𝗶𝗾𝘂𝗲
Think of it like peeling back layers: each pass reveals more details.

𝗣𝗮𝘀𝘀 𝟭: Watch at double speed to capture the overall flow of the session.
➔ Identify key moments based on time spent and notable actions.
➔ Bookmark moments to explore in the next passes.

𝗣𝗮𝘀𝘀 𝟮: Slow down to normal speed, focusing on cursor movement and pauses.
➔ Observe cursor behavior for signs of hesitation or confusion.
➔ Watch for pauses or retracing steps as indicators of friction.

𝗣𝗮𝘀𝘀 𝟯: Zoom in on the bookmarked moments at half speed.
➔ Catch subtle signals of frustration, like extended hovering or near-miss clicks.
➔ These small moments often hold the key to understanding user pain points.

𝗧𝗵𝗲 𝗤𝘂𝗮𝗻𝘁𝗶𝘁𝗮𝘁𝗶𝘃𝗲 + 𝗤𝘂𝗮𝗹𝗶𝘁𝗮𝘁𝗶𝘃𝗲 𝗙𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸
Metrics show the “what,” session replays help explain the “why.”

𝗦𝘁𝗲𝗽 𝟭: 𝗦𝘁𝗮𝗿𝘁 𝘄𝗶𝘁𝗵 𝗗𝗮𝘁𝗮
Gather essential metrics before diving into sessions.
➔ Focus on conversion rates, time on page, bounce rates, and support ticket volume.
➔ Look for spikes, unusual trends, or issues tied to specific devices.

𝗦𝘁𝗲𝗽 𝟮: 𝗖𝗿𝗲𝗮𝘁𝗲 𝗪𝗮𝘁𝗰𝗵 𝗟𝗶𝘀𝘁𝘀 𝗳𝗿𝗼𝗺 𝗗𝗮𝘁𝗮
Organize sessions based on success and failure metrics:
➔ 𝗦𝘂𝗰𝗰𝗲𝘀𝘀 𝗖𝗮𝘀𝗲𝘀: Top 10% of conversions, fastest completions, smoothest navigation.
➔ 𝗙𝗮𝗶𝗹𝘂𝗿𝗲 𝗖𝗮𝘀𝗲𝘀: Bottom 10% of conversions, abandonment points, error encounters.

𝗕𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝗮 𝗖𝗼𝗻𝘀𝗶𝘀𝘁𝗲𝗻𝘁 𝗦𝗲𝘀𝘀𝗶𝗼𝗻 𝗥𝗲𝗽𝗹𝗮𝘆 𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗲
Make session replays a regular part of your team’s workflow and follow these principles:
➔ Focus on one critical flow at first, then expand.
➔ Keep it routine. Fifteen minutes of focused sessions beats hours of unfocused watching.
➔ Keep rotating the responsibility and document everything.

Want to go deeper and get more out of your session replays without wasting time? Check the link in the comments!
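Watch lists like the Step 2 split can be generated mechanically from whatever your replay tool exports. A sketch with entirely synthetic session records (ids, converted flags, and times-to-complete are all fabricated):

```python
# Synthetic session records: (session_id, converted?, seconds_to_complete).
# Real data would come from your replay tool's export, not a formula.
sessions = [(i, i % 3 != 0, 30 + (i * 37) % 400) for i in range(100)]

converted = sorted((s for s in sessions if s[1]), key=lambda s: s[2])
failed = [s for s in sessions if not s[1]]

decile = max(1, len(converted) // 10)
success_watchlist = converted[:decile]    # fastest, smoothest completions
struggle_watchlist = converted[-decile:]  # slowest completions despite converting
failure_watchlist = failed[:decile]       # sessions that never converted

print(len(success_watchlist), len(struggle_watchlist), len(failure_watchlist))
```

The payoff is that each replay session you watch was selected by a metric, not at random, which is the whole quantitative + qualitative framework in one line of sorting.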
-
Your users leave a trail of behavioral breadcrumbs with every transaction, and your recommendation engine might be stepping right over them. A new study by Upwork analyzed 9M marketplace users across 62M interactions and found that combining text-based profile analysis with behavioral data improved matching accuracy by 8-12% compared to text-only approaches. The system learns simultaneously from what users write about themselves and how they actually behave on the platform. Who they hire, what they buy, which connections succeed. This architecture works anywhere you're connecting two sides of a market. - Airbnb matching guests to hosts. - Amazon connecting buyers to sellers. - Uber pairing riders with drivers. - Dating apps. - B2B sales platforms. The pattern is the same. You have profiles (text people write about themselves), and you have behavior (the trail of interactions in your database). Most recommendation systems use one or the other. Combining both produces substantially better matches. If you run a two-sided marketplace, your transaction and interaction logs are an underutilized asset. The patterns of who your users connect with contain a real signal about who you should connect them with next.
-
Analytics aren’t just numbers; they’re your roadmap to publishing growth. Data isn’t power, it’s potential. For publishers, the real value lies in transforming raw metrics into repeatable growth strategies that drive audience retention, revenue, and #SEO performance. Too often, publishers collect vast amounts of data but fail to extract meaningful takeaways. The key is understanding what content resonates, how audiences engage, and where opportunities for growth exist. Collecting data is easy; extracting insights is not. Without clarity, metrics like pageviews and bounce rates become distractions. For example, a 40% drop in returning visitors isn’t just a traffic issue—it’s a retention red flag. By using the right tools and refining strategies based on real data, you can turn numbers into growth.

Here are actionable strategies to turn data into action:

1. Know Your Audience Beyond Pageviews
Pageviews alone don’t tell the full story. Instead, track return visitors, time on page, and scroll depth to measure true engagement. Tools like Google Analytics 4 (GA4) and Parse.ly provide deeper insights. Cohort analysis can reveal trends: millennials may prefer video, while Gen X engages more with newsletters. For example, if mobile traffic spikes by 20% after 8 PM, push breaking news via mobile notifications to capture that audience in real time.

2. Optimise Content Performance with Behavioural Data
Understanding why some content performs well helps you replicate success. Use Google Search Console and Semrush to analyse search visibility, and Hotjar to track user interactions. For example, if "AI in media" gets 3x more shares than "content trends," double down on AI-related content. Additionally, A/B test headlines (e.g., “5 Growth Hacks” vs. “Proven Tactics”) to see what improves click-through rates.

3. Track Conversions, Not Just Traffic
Traffic alone doesn’t guarantee success—conversions do. Set up goals in GA4 to measure newsletter sign-ups, paid subscriptions, or product purchases. Identify which referral sources drive the highest conversion rates, and adjust your strategy accordingly. For example, premium subscribers from "how-to guides" tend to have a 15% higher lifetime value than general news readers, meaning content type matters when driving long-term revenue. To scale what works, automate reporting with Power BI or Looker Studio to save 10+ hours per month.

Analytics only matter when they drive action. The biggest mistake any publisher can make is to treat data as a report card instead of a playbook. Start by auditing one content category this week, setting up a conversion goal in GA4, and A/B testing a headline. Data doesn’t lie, but it won’t work unless you do something. What analytics tools are you using to grow your publishing efforts? Share your go-to platforms in the comments below. #DigitalPublishing #SEO #ContentStrategy #AudienceGrowth #DataAnalytics
-
When I was interviewing users during a study on a new product design focused on comfort, I started to notice some variation in the feedback. Some users seemed quite satisfied, describing it as comfortable and easy to use. Others were more reserved, mentioning small discomforts or saying it didn’t quite feel right. Nothing extreme, but clearly not a uniform experience either. Curious to see how this played out in the larger dataset, I checked the comfort ratings. At first, the average looked perfectly middle-of-the-road. If I had stopped there, I might have just concluded the product was fine for most people. But when I plotted the distribution, the pattern became clearer. Instead of a single, neat peak around the average, the scores were split. There were clusters at both the high and low ends. A good number of people liked it, and another group didn’t, but the average made it all look neutral. That distribution plot gave me a much clearer picture of what was happening. It wasn’t that people felt lukewarm about the design. It was that we had two sets of reactions balancing each other out statistically. And that distinction mattered a lot when it came to next steps. We realized we needed to understand who those two groups were, what expectations or preferences might be influencing their experience, and how we could make the product more inclusive of both. To dig deeper, I ended up using a mixture model to formally identify the subgroups in the data. It confirmed what we were seeing visually, that the responses were likely coming from two different user populations. This kind of modeling is incredibly useful in UX, especially when your data suggests multiple experiences hidden within a single metric. It also matters because the statistical tests you choose depend heavily on your assumptions about the data. If you assume one unified population when there are actually two, your test results can be misleading, and you might miss important differences altogether. 
This is why checking the distribution is one of the most practical things you can do in UX research. Averages are helpful, but they can also hide important variability. When you visualize the data using a histogram or density plot, you start to see whether people are generally aligned in their experience or whether different patterns are emerging. You might find a long tail, a skew, or multiple peaks, all of which tell you something about how users are interacting with what you’ve designed. Most software can give you a basic histogram. If you’re using R or Python, you can generate one with just a line or two of code. The point is, before you report the average or jump into comparisons, take a moment to see the shape of your data. It helps you tell a more honest, more detailed story about what users are experiencing and why. And if the shape points to something more complex, like distinct user subgroups, methods like mixture modeling can give you a much more accurate and actionable analysis.
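A histogram really can be a line or two of code. Below is a dependency-free Python sketch with fabricated comfort ratings on a 1-7 scale, built so that the mean looks neutral while the distribution is clearly split into two clusters:

```python
from collections import Counter

# Hypothetical comfort ratings: two user groups whose reactions
# cancel each other out in the mean.
ratings = [2] * 14 + [3] * 9 + [4] * 4 + [5] * 8 + [6] * 15

mean = sum(ratings) / len(ratings)
hist = Counter(ratings)

print(f"mean = {mean:.2f}")  # looks middle-of-the-road
for score in range(1, 8):
    print(f"{score}: {'#' * hist[score]}")
```

The mean comes out near 4, yet the bars peak at 2 and 6 with a trough in the middle. If the shape suggests subgroups like this, a two-component mixture model (for example scikit-learn's GaussianMixture, not shown here) can formalize the split the way the post describes.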
-
Strong signals bring user needs into focus. Over the years, I’ve worked with many teams that create user personas, giving them names like “Cindy” and saying things like “She needs to find this feature” to guide their design decisions. That’s a good start. But user needs are more complex than a few traits or surface-level goals. They include emotions, behaviors, and deeper motivations that aren’t always visible. That’s why we’re building Glare, our open framework for data-informed design. We've learned a lot using Helio. It helps teams create clear, measurable signals around user needs.

UX metrics help turn user needs into real data:
→ What users think
→ What users do
→ What users feel
→ What users say

When you define the right audience traits and pick the right research methods, you can turn vague assumptions into specific, actionable signals. Let’s take a common persona example: Your team says, “Cindy can’t find the new dashboard feature.” Instead of stopping there, create signals using UX metrics to better define usefulness:

→ Attitudinal Metrics (how Cindy feels)
Usefulness ↳ 42% of users say the dashboard doesn’t help them complete their tasks
Sentiment ↳ Users overwhelmingly selected: Confused, Frustrated, Overwhelmed. Only 12% chose Clear or Confident
Post-Task Satisfaction ↳ 52% of people are satisfied after completing key actions

→ Behavioral Metrics (what Cindy does)
Frequency ↳ Only 18% of users revisit the dashboard weekly, down from 35% last quarter

→ Performance Metrics (how the product supports Cindy)
Helpfulness ↳ 60% of users say they needed help materials to complete a task, suggesting the experience is unclear

With UX data like this, your team can stop guessing and start aligning around the real needs of users. UX metrics turn assumptions into signals… leading to better product decisions. Reach out to me if you want to learn how to incorporate UX metrics into your team workflows. #productdesign #productdiscovery #userresearch #uxresearch
-
Ever wondered how Netflix knows you binge-watched an entire season in one sitting, or how Google Analytics separates your morning browsing from your evening research? That's sessionization at work. Sessionization is one of those fundamental data engineering problems that sounds simple but reveals layers of complexity when you dig in.

𝙒𝙝𝙖𝙩 𝙞𝙨 𝙎𝙚𝙨𝙨𝙞𝙤𝙣𝙞𝙯𝙖𝙩𝙞𝙤𝙣?
At its core, sessionization is about grouping sequential user events into meaningful "sessions" based on activity patterns. The standard rule: if a user goes idle for more than 30 minutes, we consider that session ended. Simple concept, but a tricky implementation.

𝙒𝙝𝙮 𝙄𝙩 𝙈𝙖𝙩𝙩𝙚𝙧𝙨
Product teams need sessions to understand:
→ How long users actually engage with your platform
→ What features are used together in a single visit
→ Where users drop off in their journey
→ Peak usage patterns throughout the day

Without proper sessionization, you're just looking at disconnected events. With it, you see user behavior as a story.

𝙏𝙝𝙚 𝙎𝙌𝙇 𝘾𝙝𝙖𝙡𝙡𝙚𝙣𝙜𝙚
One of the problems in today's 50-Day Data Challenge asked: given a table of user events with timestamps, assign session numbers where any gap > 30 minutes starts a new session. Let's solve this using SQL (this is pretty straightforward in Flink using the Session Window class: https://lnkd.in/eZYACdnG):

1. Look Backward: Use LAG() to grab each event's previous timestamp for the same user. You can't identify gaps without knowing what came before.
2. Identify Boundaries: Flag rows where either there's no previous event (first-time user) OR the time gap exceeds 30 minutes. These flags mark session boundaries.
3. Propagate State Forward: Use a running SUM() of those boundary flags. Each time you hit a boundary (flag = 1), the cumulative sum increments, creating a new session ID that carries forward to all subsequent events. This transforms boundary flags [1, 1, 0, 0, 1, 0] into session IDs [1, 2, 2, 2, 3, 3].
4. Aggregate: Now that every event knows its session, GROUP BY to calculate session start, end, duration, and event count.

𝙏𝙝𝙚 𝙍𝙚𝙖𝙡 𝙇𝙚𝙖𝙧𝙣𝙞𝙣𝙜
This pattern is a blueprint for solving "groups with gaps" problems:
→ Detecting streaks (consecutive days of activity)
→ Identifying downtime periods in system logs
→ Finding engagement patterns in subscription products
→ Measuring time-to-resolution in support tickets

The technique remains the same: compare to previous → flag boundaries → propagate state → aggregate groups.

Week 2 of Zach's 50-Day Data Challenge: SQL 🔥 Moved from data modeling last week to SQL this week. Today was all about sessionization - one of those problems that looks straightforward until you actually build it. Day 8/50 ✅ #DataEngineering #50DayDataChallenge #SQL #Analytics #Sessionization
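The LAG-flag-SUM steps translate directly into a window-function query. Here is a sketch that runs the SQL against an in-memory SQLite database (assuming SQLite 3.25+ for window function support, which ships with any recent Python); the events table is invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (user_id TEXT, ts INTEGER);  -- ts = unix seconds
INSERT INTO events VALUES
  ('u1', 0), ('u1', 600), ('u1', 3000), ('u1', 3300),
  ('u2', 100), ('u2', 5000);
""")

query = """
WITH gaps AS (
  -- Step 1: look backward with LAG() per user
  SELECT user_id, ts,
         ts - LAG(ts) OVER (PARTITION BY user_id ORDER BY ts) AS gap
  FROM events
),
flagged AS (
  -- Step 2: flag boundaries (first event, or gap > 30 min = 1800 s)
  SELECT user_id, ts,
         CASE WHEN gap IS NULL OR gap > 1800 THEN 1 ELSE 0 END AS new_session
  FROM gaps
)
-- Step 3: a running SUM() of flags propagates the session id forward
SELECT user_id, ts,
       SUM(new_session) OVER (PARTITION BY user_id ORDER BY ts) AS session_id
FROM flagged
ORDER BY user_id, ts;
"""
rows = list(conn.execute(query))
for row in rows:
    print(row)  # u1's third event starts session 2 (gap of 2400 s)
```

Step 4 would wrap this result in a GROUP BY user_id, session_id to get each session's start, end, duration, and event count.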
-
The Hospitality Data That Nobody Uses

Most hospitality brands think they understand their guests because they track the usual performance numbers. That's surface level. The real insight sits in the behavioral data that almost every property collects without even realizing it. I'm talking about the tiny signals that reveal what guests actually want, what slows them down, what annoys them, and what inspires them to spend more. This data is everywhere, and almost nobody uses it in a meaningful way.

Here’s where the opportunity lives. Your guests tell you how to increase revenue through their patterns. How long they linger in the lobby. When they return to their room. Where they avoid walking. When they browse your app and immediately exit. How often they pass a restaurant without looking inside. These behaviors are not random. They're emotional decisions, and they're loaded with financial implications. When you understand the emotion behind the behavior, your ROI becomes predictable instead of reactive.

If you want to turn this into real growth, start analyzing the data that shows friction. Look at the moments when guests cluster in certain areas and ask why. Look at the times when guests repeatedly ask your team the same questions. That means your communication failed somewhere. Look at what time people naturally crave food or drinks and match your promotions to their real patterns instead of pushing offers on your preferred schedule. This is behavioral revenue management, and it works every single time because it is built on truth, not on assumptions.

Here’s a tactic almost no one uses. Review app engagement curves daily. If guests only stay on your app for a few seconds, that tells you your design is not helping them complete the actions that matter. Fix those pathways and you will see more bookings for experiences, more upgrades, more outlet spend, and more repeat visits.

Another tactic is to study foot traffic through your public spaces. If a hundred people walk past the bar and only three sit down, the issue isn't demand. It's energy, layout, lighting, or service. Fixing that can double revenue in a week without touching your marketing budget.

A weekly behavioral insights meeting should be mandatory. Bring one insight from tech, one from operations, one from F&B, and one from housekeeping. Compare patterns, not opinions. You'll start seeing emotional blind spots that cost you money. The fixes are simple, but the impact is immediate. This is how you create ROI from intelligence instead of luck.

The brands that win over the next decade will be the ones that understand behavioral data better than their competitors. Not the ones with the loudest campaigns. Not the ones with the prettiest videos. The winners will be the ones who know what their guests feel, when they feel it, and why they act the way they act. That's where real revenue comes from.

---

If you like the way I look at the world of hospitality, let’s chat: scott@mrscotteddy.com