There's something almost nobody is talking about in AI, yet it affects everything from asking ChatGPT for advice to companies deploying AI globally.

A fascinating study tested major AI models - the foundations powering tools millions use daily - against cultural values from 107 countries worldwide. The result? Each one reflected the same assumptions: those of English-speaking, Western European societies. None aligned with how people in Africa, Latin America, or the Middle East actually build trust, show respect, or resolve conflicts.

Why does this matter? Imagine you're a global company rolling out AI customer service. Your system learns the "best practice": when customers complain about late orders, apologise briefly, offer a discount, and focus on quick resolution.

In Germany, the direct, efficient approach works perfectly. Customer satisfied. But in Japan, that brief apology violates meiwaku - the cultural expectation to deeply acknowledge the inconvenience you have caused. Your "efficient" response feels dismissive and damages the customer relationship. And in the UAE, the discount offer backfires completely: it feels like charity rather than respect. One AI system, similar contexts, completely different cultural outcomes.

This isn't intentional - it's inevitable. LLMs absorb embedded patterns about communication from their training data, and most of that data comes from billions of English-language web pages. The result? AI systems that, unless thoughtfully shaped, are blind to the diversity of human interaction.

Klarna, the global payments company, made headlines in 2024 when it introduced an AI system that "did the work of 700 customer service reps", handled 2.5 million conversations in 35 languages, and cut response time by 82%. A technical triumph. Fourteen months later: "Klarna reverses AI strategy and is hiring humans again". Their CEO admitted it had led to "lower quality", and some reports cited a 20%+ drop in customer satisfaction.
What I think really happened: Klarna optimised for 35 languages while completely missing 35 different ways humans expect to be treated.

The challenge? Most companies focus on technical integration and completely miss cultural intelligence. We measure response time and cost savings, but never ask, "which human complexities are we overlooking?"

The goal isn't neutrality - that's impossible, and undesirable. It's conscious awareness: understanding that the output from AI models is filtered through a specific cultural lens.

For companies building AI strategies, key questions worth asking:
* Which cultural assumptions are embedded in our AI systems?
* How do we test cultural intelligence alongside technical performance?
* Who provides this expertise in our AI teams?

The individuals and organisations that develop this conscious awareness will make better decisions, while others unknowingly apply one-size-fits-all approaches to beautifully diverse human contexts.
Ethnographic Research In UX
-
Why do some qualitative studies generate groundbreaking insights while others barely scratch the surface? The secret is not in the data collected, but in matching your methodology to your research goals.

The 5 qualitative research methods nobody talks about:

1. Phenomenology
• Perfect for understanding perceptions
• Uses deep interview analysis
• Captures lived experiences

2. Ethnography
• Based on extended fieldwork
• Documents cultural patterns
• Gives insider perspective

3. Narrative Inquiry
• Uses conversations & artifacts
• Finds patterns in experiences
• Tells people's stories

4. Case Study
• Answers specific questions
• Uses multiple data sources
• Creates rich context

5. Grounded Theory
• Perfect for unexplored topics
• Analyzes data continuously
• Builds new theories

Pick your method based on your goal:
→ Want experiences? Use phenomenology
→ Need cultural insights? Try ethnography
→ Looking for stories? Go narrative
→ Seeking answers? Case study works
→ Building theory? Grounded theory fits

Most researchers fail because they pick the wrong method for their research question. The right method = better research.

🗞️ Join 7,278+ researchers on my weekly newsletter: https://lnkd.in/e4HfhmrH

P.S. Do you check method-research-question fit?
-
🦚 How To Capture Users' Emotions in UX. With practical guidelines, frameworks and toolkits to better understand people's emotions and act on them.

✅ What people think, do, say and feel are often very different things.
✅ We aren't good at explaining where our emotions come from.
✅ Sympathy is the acknowledgement of the suffering of others.
✅ Empathy is the ability to fully understand and share a person's needs.
✅ Compassion is empathy in action, with effort to bring about change.
✅ Empathy relies on open-ended questions in user research.
✅ There is nothing more powerful than silence in a conversation.
✅ Silence often opens room for much-needed clarifications.
🚫 Don't mistake smiling and nodding for support or agreement.
✅ Users often hide criticism and exaggerate positive feedback.

Emotions are always difficult to capture, but they are easier to spot once you observe people doing what they need to do without external influence or interruptions. In the past, I used the "speak-aloud" protocol and asked users to walk me through their thought process as they completed tasks. But it turned out to be quite disruptive: because people are focused on speaking while solving a task, many emotions remain hidden or obscured by their language.

So, when conducting usability testing, I don't ask users to speak through their experience. Instead, I observe where they tap or hover with the mouse, where their mouse circles without an action, where they scroll, and for how long. Eventually, when a user confirms that they are done or that they are stuck, I ask questions.

One helpful trick is mirroring - repeating what a user has said, or asking the same question twice, just paraphrased. Another is navigating the emotion wheel (attached) to better capture and understand the emotion. These strategies help uncover issues that perhaps didn't come up in the first answer.
That's also when a user tends to provide more context and details as they explain their confusion.

Useful resources:
The Spectrum of Empathy in UX, by Sarah Gibbons, NN/g (attached image): https://lnkd.in/d-kj3hmr
Emotion Wheel Toolkit (PNG), by Geoffrey Roberts: https://imgur.com/q6hcgsH
Scale of Negative UX Impact, by Indi Young: https://lnkd.in/eg2FiRSE
Human Connection Toolkit (Framework + Method Cards), via Rosie Sherry: https://www.deepr.cc/tools
Belonging Design Principles, by Othering & Belonging Institute at UC Berkeley: https://lnkd.in/eudUfAd2
Designing For Belonging (Toolkit), by Susie Wise, via Anamaria Dorgo: https://lnkd.in/enJTh2mw

#ux #design
-
Your tech solutions might be universal, but business cultures rarely are. For founders expanding globally, understanding cultural nuances can make a world of difference. I've seen so many brilliant construction tech solutions face unexpected challenges internationally - not because of product issues, but because of cultural cues that were hiding in plain sight. What works smoothly in your home market frequently encounters unexpected barriers abroad.

In our latest Practical Nerds episode, Shubhankar and I explored three cultural patterns we've observed that often create unexpected challenges for founders expanding internationally:

1/ A trust deficit can kill deals in Asia before you realize what happened. Asian markets require relationships BEFORE transactions. That mid-deal silence? It's not disinterest—it's a fundamental lack of trust. When things stall, don't send another "just checking in" email. Request a direct call: "Hey, can we get on a call? I'd just like to hear from you."

2/ Europeans want facts, not hype. Your high-energy American pitch style? It can read as "overcompensating" to Europeans. They're engineering-minded—lead with observations, not judgments. And remember: Europeans minimize downside before maximizing upside. Frame your solution as risk mitigation first, opportunity second.

3/ The Middle East surprisingly loves American tech but demands in-person presence. Virtual meetings barely register as "meetings" at all. And forget the org chart—decisions flow through specific gatekeepers who might not even appear in formal hierarchies.

What seems to work well for many companies in global expansion? Maintaining consistent products and channels while building localized teams who can navigate the nuances of each market's business culture.

👇 Dive deeper into our full analysis of global construction tech expansion below.

#ConstructionTech #GlobalExpansion #BusinessCulture
-
The real threat isn't malware. It's silence.

"We had the tools. We had the budget. But we still got breached." That line came from a client years ago in Southeast Asia, and it stuck with me.

After 15 years in cybersecurity, one lesson stands out clearly across the Asia Pacific: technology rarely fails first. People do.

When you operate across countries like Singapore, Thailand, Malaysia, Indonesia, and the Philippines, you start to see deeper patterns. Not technical ones. Behavioural ones. In some markets, staff members hesitate to report incidents for fear of being blamed. In others, a strong culture of hierarchy makes it hard for junior employees to challenge suspicious activity. Sometimes, the biggest obstacle is not the attacker outside. It is the silence inside.

We often treat cybersecurity as a technical challenge. But adoption is driven by trust, context, and cultural relevance. That is why the same selling approach that works in the US has a low chance of working in Southeast Asia. It is not because people do not care. It is because the framing does not fit the environment.

Over time, I have learnt to approach cyber selling like market development:
→ Start with empathy
→ Speak the local language, not just linguistically but emotionally and socially
→ Focus on behaviours, not just knowledge

Cybersecurity in Asia is not just about rolling out tools. It's about building a culture where people feel safe speaking up, slowing down, and making informed decisions under pressure. If we want real resilience, we need to stop selling fear and start shaping habits.

What cultural or behavioural barriers have you seen when it comes to cyber awareness in your region? I would love to hear your stories.

#alvinsratwork ✦ #BusinessTechnologist ✦ #ExecutiveDirector
-
If you're a UX researcher working with open-ended surveys, interviews, or usability session notes, you probably know the challenge: qualitative data is rich - but messy. Traditional coding is time-consuming, sentiment tools feel shallow, and it's easy to miss the deeper patterns hiding in user feedback. These days, we're seeing new ways to scale thematic analysis without losing nuance. These aren't just tweaks to old methods - they offer genuinely better ways to understand what users are saying and feeling.

Emotion-based sentiment analysis moves past generic "positive" or "negative" tags. It surfaces real emotional signals (like frustration, confusion, delight, or relief) that help explain user behaviors such as feature abandonment or repeated errors.

Theme co-occurrence heatmaps go beyond listing top issues and show how problems cluster together, helping you trace root causes and map out entire UX pain chains.

Topic modeling, especially using LDA (latent Dirichlet allocation), automatically identifies recurring themes without needing predefined categories - perfect for processing hundreds of open-ended survey responses fast.

And MDS (multidimensional scaling) lets you visualize how similar or different users are in how they think or speak, making it easy to spot shared mindsets, outliers, or cohort patterns.

These methods are a game-changer. They don't replace deep research - they make it faster, clearer, and more actionable. I've been building these into my own workflow using R, and they've made a big difference in how I approach qualitative data. If you're working in UX research or service design and want to level up your analysis, these are worth trying.
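To make the co-occurrence idea concrete, here is a minimal sketch of the counting step that feeds such a heatmap. The author works in R; this sketch uses Python's standard library instead, and the theme tags and feedback items are invented for illustration:

```python
from collections import Counter
from itertools import combinations

# Hypothetical coded data: the set of themes tagged on each feedback item.
coded_feedback = [
    {"checkout", "error_message"},
    {"checkout", "navigation"},
    {"search", "performance"},
    {"checkout", "error_message", "performance"},
    {"search", "performance"},
]

# Count how often each pair of themes is tagged on the same item.
# Sorting each item's themes keeps pair keys canonical, e.g. always
# ("checkout", "error_message") rather than the reversed order.
co = Counter()
for themes in coded_feedback:
    for pair in combinations(sorted(themes), 2):
        co[pair] += 1

# The resulting counts are the cells of a co-occurrence heatmap;
# frequent pairs hint at problems that cluster together.
for (a, b), n in co.most_common():
    print(f"{a} + {b}: {n}")
```

In a real workflow the counts would be pivoted into a symmetric matrix and plotted (e.g. with ggplot2 in R), but the clustering signal is already visible in the pair counts themselves.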
-
💡 Mapping user research techniques to levels of knowledge about users

When doing user research, it's important to choose the right methods and tools to uncover valuable insights about user behavior. It's possible to identify 3 layers of user behavior, feelings, and thoughts:

1️⃣ Surface level - Say & Think
This level captures what users say in conversations, interviews, or surveys and what they think about a product, feature, or experience. It reflects their stated opinions, thoughts, and intentions.
Example: "I prefer simple products" or "I think this app is easy to use."
Methods: Interviews, Questionnaires. These methods capture stated thoughts and opinions. However, insights may be influenced by social norms or biases.

2️⃣ Mid-level - Do & Use
This level reflects what users actually do when interacting with a product or service. It emphasizes actions, usage patterns, and observed behaviors, revealing insights that may differ from what users say.
Example: Users may claim they enjoy customizing app settings, but data shows they rarely change default options.
Methods: Usability Testing, Observation. Observation helps reveal gaps between what people say and what they actually do.

3️⃣ Deep level - Know, Feel & Dream
This level uncovers deep motivations, emotions, desires, and aspirations that users may not be consciously aware of or may struggle to articulate. It also includes tacit knowledge - things people know intuitively but find hard to express.
Example: A user might not realize that their preference for a minimalist design comes from the information overload of the current design.
Methods: Probes (e.g., participatory design, diary studies). Insights collected using these methods uncover implicit and emotional drivers influencing behavior.

📕 Practical recommendations for mapping

✅ Triangulate insights by using multiple methods. What people say (interviews/surveys) may differ from what they do (observations) and feel.
That's why it's essential to interpret these results in context. For example, start with interviews to learn what users say. Follow up with usability testing to observe real behavior. Use probes for long-term or emotional insights.

✅ Align research with business goals. For product improvements, focus on usability testing to catch interaction issues. For innovation, use probes to generate new ideas from user insights.

✅ Practice iterative learning. Apply surface techniques (like surveys) early to refine assumptions and guide more in-depth research later. Use deep techniques (like probes) for strategic decisions and to foster innovation in long-term projects.

🖼️ UX Research methods by Maze

#ux #uxresearch #design #productdesign #uxdesign #ui #uidesign
-
People often have little or no introspective access to the processes that generate their judgments, choices, or behaviors (Nisbett & Wilson, 1977).

One of my hobbies is reading classic studies and thinking about their applications in today's world. This paper is one of those studies that I genuinely believe every UX researcher should read and deeply understand. It delivers an uncomfortable but essential lesson: what users say about their decisions is often not a direct window into how those decisions were actually made.

One of the most natural instincts in UX research is to ask users exactly what they think and why they made a particular choice. We ask what they liked, what confused them, what mattered most, and what influenced their decision. This feels intuitive and respectful. After all, who knows the user better than the user themselves?

Yet decades of cognitive and behavioral research show that much of perception, evaluation, and decision making happens outside conscious awareness. When users explain their choices, they are often constructing a story rather than reporting the mechanisms that produced the behavior. This does not mean users are dishonest. It means the human mind is not designed for transparent self-inspection.

Classic work by Nisbett and Wilson demonstrated that people can be highly confident in their explanations while being wrong about the true drivers of their behavior. When pressed for reasons, the mind does not "look inside" and retrieve a causal record. Instead, it relies on plausible, culturally grounded explanations that make sense after the fact. These explanations feel true to the person giving them, but from a scientific perspective they are often post hoc rationalizations.

This distinction matters deeply for UX research. Self-reports are excellent for understanding narratives, beliefs, expectations, and meaning. They are far less reliable for uncovering how decisions actually unfold.
Pushing users harder for "why" can make things worse by encouraging polished, socially acceptable stories that drift further away from the real drivers of behavior. Clear, confident quotes can be seductive, but behavior, timing, errors, and trade-offs often tell a more accurate story.

The takeaway is not to ignore users, but to listen differently. What users say helps us understand how they make sense of their experience. What they do helps us understand how they actually interact, decide, and struggle.

This is why strong UX research cannot rely on a single method. We need qualitative approaches to capture meaning and lived experience, and quantitative and behavioral methods to reveal patterns and constraints users cannot articulate. Only by combining both can we get as close as possible to the user's mind, even if it never becomes fully transparent.

To learn more: https://lnkd.in/gKfybGiz
-
Most UX failures start the same way: teams skip mapping, rely on assumptions, and build without clear user insight. Effective products, in contrast, are based on structured understanding. Here are the key methods used:

1. 𝗘𝗺𝗽𝗮𝘁𝗵𝘆 𝗠𝗮𝗽
• Captures what users say, think, feel, and do
• Builds a shared understanding of user behavior

2. 𝗖𝘂𝘀𝘁𝗼𝗺𝗲𝗿 𝗝𝗼𝘂𝗿𝗻𝗲𝘆 𝗠𝗮𝗽
• Maps how users interact across stages
• Highlights friction points and drop-offs

3. 𝗘𝘅𝗽𝗲𝗿𝗶𝗲𝗻𝗰𝗲 𝗠𝗮𝗽
• Provides a broad view of the full experience
• Identifies key moments and interactions

4. 𝗦𝗲𝗿𝘃𝗶𝗰𝗲 𝗕𝗹𝘂𝗲𝗽𝗿𝗶𝗻𝘁
• Connects user experience with internal processes
• Aligns front-stage and back-stage activities

5. 𝗔𝗳𝗳𝗶𝗻𝗶𝘁𝘆 𝗠𝗮𝗽
• Organizes research into patterns
• Reveals core insights and issues

6. 𝗔𝘀𝘀𝘂𝗺𝗽𝘁𝗶𝗼𝗻 𝗠𝗮𝗽
• Separates assumptions from validated facts
• Helps reduce risk early

7. 𝗘𝗰𝗼𝘀𝘆𝘀𝘁𝗲𝗺 𝗠𝗮𝗽
• Shows the wider network of people, systems, and services
• Adds context to user interactions

8. 𝗦𝗰𝗲𝗻𝗮𝗿𝗶𝗼 𝗠𝗮𝗽
• Illustrates how users achieve specific goals
• Frames real-life usage situations

9. 𝗖𝗼𝗴𝗻𝗶𝘁𝗶𝘃𝗲 𝗠𝗮𝗽
• Reflects how users mentally structure information
• Supports better information design

10. 𝗦𝗶𝘁𝗲 𝗠𝗮𝗽
• Defines content structure and hierarchy
• Improves navigation clarity

11. 𝗙𝗹𝗼𝘄 𝗠𝗮𝗽
• Outlines user paths through a product
• Clarifies steps and decision points

12. 𝗥𝗼𝗮𝗱𝗺𝗮𝗽
• Aligns goals, priorities, and timelines
• Guides product development

UX mapping reduces guesswork and improves decisions. Which method do you use most?

#UXDesign #UserExperience #UXResearch #ProductDesign #UXStrategy #DesignThinking #CustomerExperience #imenmlika
-
The Forgotten Layer in Understanding Indian Consumers

Every marketer today talks about algorithms. Few talk about anthropology. Understanding modern consumers requires decoding the Behavioural Stack - 4 layers that shape every decision:

1️⃣ Individual Cognition – how people think, perceive, and decide.
2️⃣ Social Dynamics – how peers, family, and community influence choices.
3️⃣ Technological Mediation – how digital platforms, algorithms, and devices guide attention.
4️⃣ Cultural Context – the deep-rooted beliefs, traditions, and value systems that define meaning.

Most brands focus on the first three; very few get the fourth right. In India, culture isn't a layer; it's the operating system. Yet most brand playbooks still treat culture as a variable rather than a vantage point.

The country doesn't have "a consumer." It has multiple Indias, each driven by its own moral economy of choice. A luxury purchase in Delhi might signal power; in Coimbatore, it might signal prudence. A wedding in Kerala is a ritual; in Rajasthan, it is also a performance. The same consumer psychology behaves differently depending on the cultural code it's wrapped in.

Consider this: 90% of India's internet users engage in regional languages. Brands that localized campaigns in native languages saw 2-3x higher engagement. Even caste, cuisine, and climate quietly shape consumption, from gold ownership (highest in South India) to vegetarian FMCG patterns in the North.

In the West, psychology drives purchase. In India, psychology is filtered through culture. For Indian brands, the next competitive advantage may not just come from better data science, but also from cultural fluency, i.e., knowing not just what India buys, but why one India buys differently from another.

Because technology can mediate behaviour. But only culture can make sense of it.