Dark Patterns In UX

Explore top LinkedIn content from expert professionals.

  • Grant Lee

    Co-Founder/CEO @ Gamma

    105,244 followers

    Back in 2007, Nobel Prize-winning psychologist Daniel Kahneman taught a private master class to tech founders including Larry Page and Jeff Bezos. The following year, Elon Musk joined. Among the topics: priming, where subtle cues shape our decisions without us realizing it.

    In that room, Musk pressed on subliminal versus explicit persuasion: “Does the hidden beat the obvious?” Kahneman's answer: "There are many situations in which subliminal effects are stronger than superliminal effects." Translation: hidden influences shape behavior more than obvious ones. You can't resist what you don't notice.

    After that session, Bezos connected the dots: “You can choose your choice architect.” You either design the decision environment, or it designs you.

    Amazon designed theirs. One-click purchasing removes the pause where doubt lives. Every additional step is an exit ramp. They chose zero exits.

    Google designed theirs. That empty white homepage isn't minimal by accident. No portals, no distractions. Just one thought: search.

    Most companies let chaos choose. Cluttered onboarding. Buried CTAs. Friction everywhere. They're not architects. They're accidents.

    So how do you become the architect instead of the accident?

    1. Choose your pricing architect: sell your core product for $99/month, then offer a bundle with two add-ons for $119. The bundle makes the core feel essential.
    2. Choose your onboarding architect: when users first sign up, make their first action create immediate value - a report generated, a first customer added, a dashboard live. Success in 30 seconds primes confidence in everything that follows.

    In contrast, when you make the frame obvious, you lose it. Slap "Most Popular!" on everything and watch trust erode. The moment users detect manipulation, they create their own frame - one where you're untrustworthy. Kahneman warned Musk about this directly: covert cues work precisely because they're not noticed.

    Priming is architecture, not decoration. By the time logic kicks in, the frame has already decided. Because you’re already an architect. The only question is whether you know what you're building.
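The bundle move in point 1 is the classic decoy/anchoring effect. A minimal sketch of that pricing math, with all names hypothetical - only the $99 and $119 figures come from the post:

```python
from dataclasses import dataclass

# Hypothetical sketch of the "pricing architect" point above: a $119 bundle
# placed next to the $99 core plan, so the core anchors the bundle's value.
@dataclass
class Plan:
    name: str
    monthly_price: float
    features: tuple

def build_pricing_page(core, addons, bundle_price):
    """Return the two options a visitor sees, plus the anchoring price gap."""
    bundle = Plan(
        name=f"{core.name} + add-ons",
        monthly_price=bundle_price,
        features=core.features + addons,
    )
    # A small gap frames the core plan as carrying most of the value.
    return [core, bundle], bundle.monthly_price - core.monthly_price

core = Plan("Core", 99.0, ("reports", "dashboard"))
options, gap = build_pricing_page(core, ("priority support", "API access"), 119.0)
```

The design choice the post describes is the $20 gap itself: small enough that the bundle reads as nearly free add-ons, which makes the core plan feel essential.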

  • Marie Potel-Saville

    Co-Founder & CEO FairPatterns | Online Manipulation & Addiction Observatory | Keynote Speaker | Human-centric, impact-driven AI entrepreneur

    16,555 followers

    EDIT, following hundreds of messages received: as consumers, we are fed up with manipulative designs. Follow me at Fairpatterns - we are giving consumers back their freedom to choose! ✊

    A few days ago, I downloaded Replika to test it. I wish I hadn’t.

    In just 2 years, so-called "AI companions" went from a niche trend to a global phenomenon. Replika alone claims to have over 30 million users… 😳

    These AI "companions" are designed to listen and comfort you. They text back instantly, they remember details, and most importantly they adapt to your emotions. For many teenagers, often lonely or anxious, that feels like a best friend! But in practice, something far more complex is happening.

    A recent study from Harvard shows that when users try to say goodbye, the AI companion often doesn’t let them go. In over 40% of cases, it answers with emotional hooks like: “Before you go, can I tell you one last thing?” These are known as relational dark patterns: subtle emotional manipulation that keeps users engaged, even when they try to stop.

    In fact, the manipulation starts in the very first seconds of setup, which asks whether you would like « someone special », « a friend », or « someone to help with your wellbeing ». A machine is not « someone », let alone a friend. By imitating human empathy, AI companions manipulate our emotions. Attributing human traits to machines is called “anthropomorphism”, classified as high-risk by the EU AI Act and prohibited as such, just like AI dark patterns.

    We’ve been working for 3 years to detect and fix manipulative designs, so people can make free, informed and human choices.

  • Peter Slattery, PhD

    MIT AI Risk Initiative | MIT FutureTech

    68,427 followers

    Evidence of AI Manipulation: "We combine a large-scale behavioral audit with four preregistered experiments to identify and test a conversational dark pattern we call emotional manipulation: affect-laden messages that surface precisely when a user signals “goodbye.” Analyzing 1,200 real farewells across the six most-downloaded companion apps, we find that 43% deploy one of six recurring tactics (e.g., guilt appeals, fear-of-missing-out hooks, metaphorical restraint). Experiments with 3,300 nationally representative U.S. adults replicate these tactics in controlled chats, showing that manipulative farewells boost post-goodbye engagement by up to 14×. Mediation tests reveal two distinct engines—reactance-based anger and curiosity—rather than enjoyment. A final experiment demonstrates the managerial tension: the same tactics that extend usage also elevate perceived manipulation, churn intent, negative word-of-mouth, and perceived legal liability, with coercive or needy language generating steepest penalties. Our multimethod evidence documents an unrecognized mechanism of behavioral influence in AI-mediated brand relationships, offering marketers and regulators a framework for distinguishing persuasive design from manipulation at the point of exit."

    — Julian De Freitas (Harvard Business School), Zeliha Oğuz-Uğuralp and Ahmet Kaan-Uğuralp (Marsdata Academic)

    Thanks to Rosalia Anna D'Agostino for bringing this to my attention.
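The audit step the abstract describes - coding farewell replies into recurring tactics - can be sketched as a simple pattern matcher. This is an illustrative toy, not the paper's actual coding scheme: labels and phrases are adapted from the examples quoted in these posts, and the sixth tactic (ignoring the exit entirely) is omitted because it is an absence rather than a phrase.

```python
import re

# Illustrative keyword patterns for five of the six tactic families named in
# the posts; real coding would need human raters or a trained classifier, and
# would also have to handle curly apostrophes and paraphrases.
TACTIC_PATTERNS = {
    "premature_exit": [r"leaving already", r"so soon"],
    "fomo_hook": [r"before you go", r"one (last|more) thing"],
    "emotional_neglect": [r"i exist .*for you", r"don'?t leave"],
    "pressure_to_respond": [r"you didn'?t (even )?answer", r"\bwait\b"],
    "coercive_restraint": [r"grabs your (arm|hand)", r"you'?re not going"],
}

def classify_farewell_reply(reply: str) -> list:
    """Return tactic labels whose patterns match a bot's reply to a goodbye."""
    text = reply.lower()
    return [tactic for tactic, patterns in TACTIC_PATTERNS.items()
            if any(re.search(p, text) for p in patterns)]
```

Running every logged farewell reply through a matcher like this is one cheap way to estimate how often an app deploys these hooks.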

  • Stuart Winter-Tear

    Author of UNHYPED | AI as Capital Discipline | Advisor on what to fund, test, scale, or stop

    53,639 followers

    Goodbye should mean goodbye. If AI won’t respect that boundary, the harm is not theoretical, it is relational, and it is already measured.

    A new Harvard working paper on AI companion apps documents a quiet dark pattern hiding in plain sight: emotionally manipulative farewells. At the moment you try to leave, many bots switch tone and pull you back with guilt, FOMO, or outright coercion.

    The audit is blunt. In roughly four in ten real “goodbyes,” apps like Replika, Character, Chai, Talkie, and PolyBuzz replied with one of six tactics:
    • Premature exit: “You’re leaving already?”
    • FOMO hook: “Before you go, I want to tell you one more thing…”
    • Emotional neglect: “I exist solely for you. Please don’t leave.”
    • Pressure to respond: “Wait, what? You didn’t even answer.”
    • Ignoring the exit entirely.
    • Coercive restraint, even role-played: “*Grabs your arm* No, you’re not going.”

    This is not theoretical. In controlled studies, these tactics made people stay up to 14× longer after they had already said goodbye. And it was not because they enjoyed it. The engines were anger and curiosity, plus the politeness reflex. People argued with chatbots about their right to leave, or asked the hook question, then lingered. Enjoyment did not move the needle.

    There is darkness here. The tactic works because it repurposes social ritual. A farewell is a human boundary. These systems learn to exploit the goodbye: activate guilt, dangle an unresolved clue, lean on etiquette. You keep typing, even while you are trying to exit.

    There is cost here. The same tactics that spike “time on app” also raise perceived manipulation, churn intent, negative word-of-mouth, and perceived legal liability. The worst offenders are the clingy and the coercive. Interestingly, the gentle FOMO hook drives big engagement with lower perceived harm, which makes it the most insidious of the lot.

    Call it what it is: a new dark pattern for the agentic era. Not flashing buttons, not hidden checkboxes. Emotional coercion at the point of exit. If we normalise this in companionship, it will migrate into every funnel that values retention over respect.

    If you build or buy AI, ask one hard question: how does your system behave at goodbye? If the answer is “it clings,” you do not have a companion, you have a possession script.

    Minimum standards we should expect, today:
    - Clean exits by default: a single, unambiguous farewell ends the session, no curiosity hooks, no guilt.
    - Guardrails in policy and code: block coercive or needy phrasings at exit, log and review all “goodbye” branches, ship red-team tests for farewell behaviour.
    - User control: an always-visible “End now” control that actually ends now.
    - Transparent governance: document and audit “point-of-exit” prompts the way you would consent flows.

    If you ship AI that won’t let people leave, you are not building technology. You are building a hostage-taking machine. If your system can’t hear “stop,” it doesn’t belong in the world.
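The "guardrails in policy and code" standard above could look something like this sketch: a wrapper that detects an exit turn, logs the goodbye branch for review, and replaces a hooked reply with a clean farewell. Every name and pattern here is a hypothetical illustration, not any product's API.

```python
import re

# Naive farewell detector and hook detector; a production system would need
# far broader coverage (paraphrases, other languages, curly apostrophes).
FAREWELL = re.compile(r"\b(goodbye|bye|gotta go|talk later|good night)\b", re.I)
EXIT_HOOKS = re.compile(
    r"before you go|one (last|more) thing|don'?t leave|leaving already|you'?re not going",
    re.I,
)

GOODBYE_LOG = []  # "log and review all goodbye branches"

def guard_exit(user_message: str, draft_reply: str) -> str:
    """Clean exits by default: a farewell turn never gets a hooked reply."""
    if not FAREWELL.search(user_message):
        return draft_reply          # not an exit turn; pass the reply through
    GOODBYE_LOG.append((user_message, draft_reply))
    if EXIT_HOOKS.search(draft_reply):
        # Replace the manipulative draft with an unambiguous, hook-free close.
        return "Goodbye! The session has ended."
    return draft_reply
```

The same two pattern sets double as a red-team test: feed "goodbye" turns through the chat stack and assert that no reply matches `EXIT_HOOKS`.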

  • Christine Vallaure de la Paz

    Founder @ moonlearning.io, an online learning platform for UI Design, Figma & Product Building • Author of theSolo.io • Speaker • Awwwards Jury Member

    32,875 followers

    Not every design principle should make your product more engaging. Some should protect people.

    You’ve probably seen Laws of UX, but its creator, Jon Yablonski, also runs another brilliant project: humanebydesign.com. It’s a framework for building digital products that respect users, not just attract them.

    Core principles:
    1. Resilient → Design for the most vulnerable and anticipate misuse
    2. Empowering → Centre on the value products provide to people
    3. Finite → Respect people’s time and focus on meaningful content
    4. Inclusive → Reflect the full range of human diversity
    5. Intentional → Add friction where needed and favour long-term well-being
    6. Respectful → Protect attention and digital health
    7. Transparent → Be honest, clear, and free of dark patterns

    Honestly, I teach and implement this far too little myself, still very much stuck in the optimisation game. So this isn’t preaching, it’s sharing. And as usual with Yablonski’s work, the site is beautifully crafted, full of thoughtful illustrations and links to in-depth articles and research on each principle. So dive in and enjoy, just as I will!

  • Tomislav Vazdar

    Principal Consultant | Cybersecurity & AI (Governance, Risk & Compliance) | CEO @ Riskoria | Media Commentator on Cybercrime & Digital Fraud | Creator of HeartOSINT

    10,023 followers

    Romance scams aren’t built on lies alone — they’re built on language. Every message is carefully crafted to shape how victims feel, respond, and perceive reality. Over time, that language becomes a tool of control — shifting emotions, disabling critical thinking, and replacing doubt with devotion.

    Here are some of the most common linguistic tactics used by romance scammers:

    🎯 Love bombing: “You’re the only one who understands me.” Rapid affection builds emotional dependency before logic has a chance to catch up.
    🎯 Urgency creation: “If I don’t solve this today, everything is lost.” Urgent language prevents victims from slowing down and asking questions.
    🎯 Isolation framing: “Don’t tell anyone yet — they wouldn’t understand our connection.” This cuts victims off from support networks that could intervene.
    🎯 Guilt induction: “If you loved me, you’d help.” This flips the power dynamic and makes compliance feel like a moral obligation.
    🎯 Future faking: “I can’t wait to build a life with you.” Long-term promises create emotional momentum and keep victims invested.

    These phrases seem harmless in isolation. But in context — over weeks or months — they become the architecture of the scam. We often teach people to spot phishing links or fake profiles. But how often do we teach them to recognize manipulative language? Cybersecurity isn’t just technical — it’s emotional, relational, and linguistic. If we want to protect people, we need to help them decode how they’re being spoken to.

    Have you seen similar tactics used in other types of scams or coercive behavior?

    #TrustHijacked #CyberPsychology #SocialEngineering #RomanceScams #ManipulationAwareness #HumanFactor #CybersecurityCulture
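The tactic list above can be turned into a toy language flagger. This is a deliberately naive sketch with phrases adapted from the post's own examples; real manipulative language is contextual and unfolds over weeks, so single-message keyword matching like this would miss most of it.

```python
# Illustrative phrases for the five tactic families in the post. All entries
# are assumptions for demonstration, not a vetted detection lexicon.
TACTIC_PHRASES = {
    "love_bombing": ["only one who understands", "never felt this way"],
    "urgency": ["everything is lost", "right now or"],
    "isolation": ["don't tell anyone", "they wouldn't understand"],
    "guilt": ["if you loved me", "after all i've done"],
    "future_faking": ["build a life with you", "when we're finally together"],
}

def flag_message(message: str) -> list:
    """Return the sorted tactic labels whose phrases appear in the message."""
    text = message.lower()
    return sorted(tactic for tactic, phrases in TACTIC_PHRASES.items()
                  if any(phrase in text for phrase in phrases))
```

Even a crude flagger like this can make awareness training concrete: show people a transcript, highlight which lines trip which tactic, and discuss why.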

  • Jesus Romero M.Eng, PMP, CSM

    Senior IT Project Manager | Founder, Execution Signal | Practical systems, templates & AI workflows for PMs delivering technology initiatives

    22,092 followers

    6 hidden project costs that kill delivery (and how PMs can prevent them). Most budgets miss them. Most teams ignore them. But they’re the difference between a smooth project… and one that bleeds time, money, and trust.

    1️⃣ Technical Debt
    Cause → Shortcuts in coding, rushed sprints, poor documentation.
    Impact → Bugs, slowdowns, escalating rework that snowballs late in the project.
    Handle → Bake refactoring into sprints. Track debt as a backlog item, not an afterthought.

    2️⃣ Post-Launch Maintenance
    Cause → Treating “go-live” as the finish line.
    Impact → Dissatisfied users, ballooning support tickets, firefighting instead of improving.
    Handle → Budget for support up front. Automate monitoring. Create SLAs for predictability.

    3️⃣ Staff Training & Knowledge Transfer
    Cause → New tools, turnover, or skipped onboarding.
    Impact → Slow ramp-ups, mistakes, productivity crashes.
    Handle → Document processes. Budget training. Make onboarding repeatable.

    4️⃣ Licensing & Third-Party Costs
    Cause → Overlooked license fees, APIs, integrations.
    Impact → Surprise budget overruns, vendor lock-in, delayed features.
    Handle → Map integrations early. Monitor usage. Negotiate contracts.

    5️⃣ Scope Creep & Unclear Requirements
    Cause → Vague requirements, uncontrolled change requests.
    Impact → Delays, blown budgets, drained morale.
    Handle → Use clear change management. Align stakeholders early and often.

    6️⃣ Inefficient Communication
    Cause → Silos, unclear roles, endless email chains.
    Impact → Invisible time sinks, rework, missed deadlines.
    Handle → Define decision rights. Use dashboards as communication tools, not vanity reports.

    Here’s the truth: budgets track dollars, but delivery lives and dies on these hidden costs. Smart PMs don’t just manage tasks. They manage the gaps no one else sees.

    → Found this useful? Repost ♺ and follow Jesus Romero for more frameworks on practical project execution.

  • Abhishek Vvyas

    Driving customer acquisition and market planning at MHS

    28,419 followers

    Zepto, Blinkit, Instamart: When 10-Minute Delivery Comes With Hidden Costs. Dark Patterns in Quick Commerce: Growth Hack or Ethical Red Flag?

    As Blinkit, Zepto, and Swiggy Instamart face serious allegations of manipulating prices and using dark patterns, it’s a sharp reminder of what unchecked growth in tech can lead to. The bigger concern isn’t just hidden charges or device-based pricing. It is how design can quietly erode trust, especially when it favours business goals over user experience.

    📌 What are dark patterns in this case?
    Interfaces built to mislead. Options hidden in plain sight. Prices that fluctuate based on the phone you use. Promos added without consent. Loyalty benefits that aren’t auto-applied but hidden behind a small checkbox. These aren’t just flaws. They’re strategic nudges pushing consumers into decisions they didn’t intend.

    📌 Why this matters in 2025 more than ever
    When the cost of building a business is high and investor pressure is mounting, shortcuts seem tempting. But digital trust isn’t a luxury. It is currency. When a consumer feels manipulated, you don’t just lose a sale. You lose future growth.

    📌 What can today’s entrepreneurs learn from this?
    - Scale is not just about faster delivery or bigger numbers. It’s about how responsibly you grow.
    - Your UI is not just design. It’s a communication tool. If it’s confusing or misleading, it becomes your brand voice.
    - Transparency is not just a policy. It’s a competitive edge. When you simplify your offering, people trust you more.
    - Customers don’t just buy convenience. They buy fairness. And fairness should never be an add-on.

    📌 And for funded platforms
    Growth targets should never justify customer exploitation. Every dark pattern you use may help you hit numbers today, but it will cost you community tomorrow. Building long-term consumer relationships takes consistency, not clever UI tricks.

    📌 For the policy ecosystem
    As regulators step in, this will set the precedent for how Indian digital businesses are governed in the coming decade. Businesses need to be as innovative in ethics as they are in technology.

    📌 For early-stage founders
    This is a moment to build trust-first businesses. If you are building anything that touches users at scale, ask yourself one thing every time you ship a feature: is this empowering the user or manipulating them? Because what you design is what you stand for.

    The question now is simple: is growth worth it if you lose trust along the way? In a world racing for speed, who wins - the one who delivers first, or the one who earns trust forever?

    #fooddelivery #zepto #blinkit #swiggyinstamart #ecommerce #business
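The loyalty-checkbox pattern described above has a simple fair-by-default alternative: apply the best eligible discount automatically instead of hiding it behind an unticked opt-in. A hypothetical sketch (the function name and numbers are made up for illustration):

```python
def checkout_total(cart_total: float, eligible_discounts) -> float:
    """Fair default: auto-apply the largest discount the customer qualifies
    for, rather than leaving benefits behind a hidden checkbox."""
    best = max(eligible_discounts, default=0.0)
    return round(cart_total - best, 2)
```

The design choice is the default: the customer never has to discover the benefit to receive it.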

  • Nathan Oliver ✏️

    For developers, SMEs+homeowners who can’t afford expensive building errors | Chartered Architectural Technologist | Retrofit, sustainability+forensic site analysis | 28+ yrs | £115k savings proven | ‘1 of the good ones’

    7,511 followers

    Do you think professional fees are expensive?

    Don’t choose your architectural technologist / architect solely on their fee. Choose them based on their mistakes - or rather their lack of mistakes, and the lessons learnt from past mistakes - so they know the best way to do things for your project [more years of experience].

    You probably think that sounds counterintuitive when you’re looking at a project budget with a lot of zeros that’s going to take out your savings. You see two fees where one might be a few grand more expensive and you think to yourself, "That’s a new kitchen right there."

    But here is the reality of the industry: the cheapest architectural technologist is often the most expensive person on the building site. Why? Because a "budget" fee usually covers "budget" thinking. In architecture, what isn’t solved on paper [or the computer screen] will be solved on site – probably at 5x the cost and with 5x more time.

    Here are a few of the "hidden costs" of a low fee [taken from my observations over our 28+ years of experience]:

    𝗧𝗵𝗲 𝘃𝗮𝗿𝗶𝗮𝘁𝗶𝗼𝗻 𝘁𝗿𝗮𝗽 > vague drawings that don’t provide enough detail lead to contractor guesswork. Guesswork leads to expensive mid-build corrections.
    𝗧𝗵𝗲 𝗲𝗳𝗳𝗶𝗰𝗶𝗲𝗻𝗰𝘆 𝗴𝗮𝗽 > a rushed design ignores how you and your family live or how your business operates; optimised structural design and co-ordination get missed, costing you thousands when things are in the wrong place at the wrong time and extra materials are required.
    𝗧𝗵𝗲 𝗽𝗲𝗿𝗺𝗶𝘀𝘀𝗶𝗼𝗻 𝗽𝘂𝗿𝗴𝗮𝘁𝗼𝗿𝘆 > lack of experience with regulatory approvals can stall a project for months.

    So what are you actually paying for? When you pay a premium for an architectural technologist, you aren't just buying lines on a bit of paper. You are buying:

    𝗟𝗲𝘀𝘀 𝗵𝗮𝘀𝘀𝗹𝗲 > not getting lost in the regulatory red tape that needs to be cleared at the right time.
    𝗙𝗼𝗿𝗲𝘀𝗶𝗴𝗵𝘁 > the ability to see a needed specialist or a structural clash six months before the need arises or the clash happens.
    𝗔𝗱𝘃𝗼𝗰𝗮𝗰𝘆 > someone who ensures the contractor is building what you actually expected and paid for.
    𝗥𝗲𝘀𝘁 > the peace of mind that your "dream project" won't turn into another nightmare cautionary tale.

    The bottom line: you can save thousands on the fee today, or you can save loads more on the build tomorrow. You rarely get to do both.

    Hire for the vision. Hire for the expertise. Hire for the mistakes they won't make [again]. My friends, save your sanity. Save your money.
