Reddit, Inc. fined £14,500,000 by the Information Commissioner's Office.

No breach. No cyberattack. The issue was simpler. And more important.

They relied on self-declared age. That meant children under 13 could access the platform. And their personal data was being processed without a valid lawful basis.

Most organisations still don't understand: regulators don't assess risk based on who you intend your users to be. They assess risk based on who can realistically access your service.

That's the shift. It's not enough to say: "Children aren't our target audience." The real question is: could they get in anyway? If the answer is yes, then you must design for that risk.

This is where leadership matters. Strong privacy teams don't wait for evidence. We assess foreseeable risk. We don't design for intention. We design for reality. Because enforcement rarely comes from what you planned. It comes from what actually happens.

The best privacy leaders understand:
1. Timing is accountability
2. Risk visibility is governance
3. Assumptions are expensive and embarrassing

The organisations that get this right don't just avoid fines. They build trust at scale. When regulators come looking, they don't ask what you intended. They ask what you knew, and what you did about it.
How Regulators Address Privacy Manipulation
Explore top LinkedIn content from expert professionals.
Summary
Regulators address privacy manipulation by enforcing laws and standards that protect personal data and ensure transparent, accessible privacy practices—especially when companies use misleading tactics or make it difficult for users to control their information. Privacy manipulation means designing systems or processes that intentionally or unintentionally restrict users’ ability to exercise their privacy rights.
- Design for reality: Make sure your website or platform accounts for everyone who could access it, not just your intended users, to avoid regulatory penalties.
- Prioritize transparency: Clearly explain how personal information is handled and give users easy-to-understand privacy policies instead of vague or overly technical documents.
- Streamline user control: Allow consumers to manage their data and privacy choices with simple, straightforward steps, avoiding unnecessary hurdles or confusing processes.
We spend a lot of time talking about Data Privacy in AI. But a new draft regulation from China (the CAC), released Dec 27th, just shifted the conversation to something much harder to measure: Psychological Safety.

For HR and Recruitment leaders, the "Interim Measures for the Administration of Humanized Interactive Services" is a wake-up call. It specifically targets AI that mimics human personality and emotion. If you are using "empathetic" chatbots for candidates or "wellness coaches" for employees, the rules of engagement are about to change.

Here are the 4 takeaways for HR Governance:

🛑 1. The "Turing Test" Compliance Check
If your candidate engagement bot is designed to feel "human," you are in the danger zone. The new rules demand explicit transparency. If a candidate starts "bonding" with the bot or over-sharing, the system must break character and remind them: "I am an AI." The lesson: Transparency > Immersion.

🆘 2. Wellness Bots Need a "Human Loop"
Using AI for employee mental health? Under these rules, an AI cannot handle a crisis alone. If an employee expresses distress or "extreme emotion," the bot is legally required to trigger a human intervention. The lesson: You cannot automate duty of care.

🤥 3. No More "False Promises"
We've all seen eager AI recruiters say, "You sound perfect for this role!" The draft explicitly bans AI from making "false promises that affect user behavior." The lesson: Guardrails on your LLMs need to be tighter than ever.

🔒 4. The "Right to be Forgotten" for Chat Logs
Vendor contracts often hide clauses about using chat data for "model training." This regulation flips that: you need separate, explicit consent to train on user interactions, and employees must have the right to delete their chat history.

The Bottom Line: Whether or not you operate in China, this is the future of AI Ethics. Regulators are moving beyond "Is the data safe?" to "Is the interaction safe?"

My advice: Audit your HR Tech stack today. Ask your vendors: "Does this AI pretend to be a person?" If the answer is yes, ask to see their safety brakes.
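The first two takeaways describe a concrete middleware pattern: intercept each exchange, restate that the bot is an AI when the interaction gets personal, and escalate to a human on signs of distress. Here is a minimal sketch of that pattern; the names (`DISTRESS_PATTERNS`, `DISCLOSURE_TURN_THRESHOLD`, `escalateToHuman`) are illustrative assumptions, since the draft regulation prescribes the behaviour, not an implementation:

```typescript
// Hypothetical sketch of the two guardrails above: AI self-disclosure and
// human escalation on distress. Not taken from the CAC draft text.

type BotReply = { text: string; escalated: boolean };

// Illustrative distress triggers; a production system would use a trained
// classifier rather than a keyword list.
const DISTRESS_PATTERNS = [/hopeless/i, /can't go on/i, /hurt myself/i];

// After this many consecutive personal-disclosure turns, the bot "breaks
// character" and restates that it is an AI.
const DISCLOSURE_TURN_THRESHOLD = 5;

function guardReply(
  userMessage: string,
  draftReply: string,
  personalTurns: number,
  escalateToHuman: (msg: string) => void, // e.g. pages an on-call human
): BotReply {
  // Takeaway 2: an AI cannot handle a crisis alone; trigger human intervention.
  if (DISTRESS_PATTERNS.some((p) => p.test(userMessage))) {
    escalateToHuman(userMessage);
    return {
      text: "I am an AI assistant, and I'm connecting you with a person who can help.",
      escalated: true,
    };
  }
  // Takeaway 1: transparency over immersion; periodically remind the user.
  if (personalTurns >= DISCLOSURE_TURN_THRESHOLD) {
    return { text: `Just a reminder that I am an AI. ${draftReply}`, escalated: false };
  }
  return { text: draftReply, escalated: false };
}
```

The design point is that both rules sit outside the model: the guard wraps whatever the LLM drafts, so tightening it does not require retraining anything.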
-
🎆 The OAIC has a New Year's Resolution… and it's your privacy policy. 🎆

Most of us are resolving to exercise more or stop midnight doom scrolling, but the OAIC is starting 2026 with its first-ever proactive privacy compliance sweep. No breach. No complaint. Just a regulator looking at compliance, and that's a meaningful shift in posture.

1. Proactive privacy enforcement is here
For years, resource constraints meant investigations were mostly reactive. This sweep shows the OAIC is now willing to monitor organisations proactively. If your strategy relies on "the OAIC won't notice us," that thinking belongs in the 2025 archive.

2. The regulator has sharper tools
With the 2024 reforms, deficient privacy policies can attract infringement notices of:
🔴 $66,000 for listed corporations
🔴 $19,800 for non-listed entities
These are quick, decisive penalties. And once the OAIC is in the door, it can escalate to broader regulatory action if deeper issues appear.

3. This is just the beginning
Real estate agents, chemists, car rentals, car dealerships, pawnbrokers… all high-risk for in-person data collection. But this is a test case for a model the OAIC can scale. Once refined, the natural next step is broader, lower-cost monitoring, especially if the regulator leverages AI tools to scan websites.

4. Privacy policy quality is a real market problem
I routinely see:
⭐ generic templates with minimal tailoring
⭐ outdated policies that no longer reflect operational reality
⭐ vague statements that hide more than they reveal
⭐ dense legalese no person could meaningfully understand
⭐ missing content
⭐ no distinction between different groups of individuals, even though handling differs significantly

A good privacy policy should:
🟢 clearly distinguish between different classes of individuals
🟢 describe the unique ways information is collected and handled for each
🟢 avoid blanket statements that apply to "everyone" when, in reality, practices vary significantly
🟢 reflect the actual, operational handling of personal information across the organisation

This type of structuring requires deep thinking, careful distillation, and genuine understanding of the data lifecycle. It's not easy, but it's the foundation of meaningful compliance.

5. Privacy policies often reflect governance maturity
If your privacy policy is weak, generic or inaccurate, regulators (and customers) assume your underlying privacy governance is similar. This sweep is designed to lift the market's minimum standard and to reward organisations that take transparency seriously.

6. The upside? There's a fix
Refreshing a privacy policy is far cheaper and far simpler than dealing with an OAIC investigation or enforcement outcome.

Your New Year's Resolution should be this: review your privacy policy properly, with accuracy, transparency and clear explanations for the different ways you handle information.

#privacy #dataprotection #privacylaw #compliance #cybersecurity
-
Dear Reader,

It is the season of digital transformation, and across Africa's bustling digital ballrooms, data has become the most coveted currency of all. Mobile money moves faster than whispers at a soirée, biometric systems promise certainty with a glance, and artificial intelligence courts both efficiency and excess. Yet, as with all great transformations, not everyone has been minding their manners.

As we reflect on the past year this Privacy Day, Africa's regulators are no longer mere observers of the spectacle. They have stepped onto the floor: firm, deliberate, and increasingly assertive, reminding governments and global technology giants alike that privacy is not a polite suggestion, but a legal right. One thing is clear: data protection in Africa has entered its enforcement era.

Across the continent, Africa's digital transformation is accelerating, from mobile money to digital IDs, AI systems, health platforms, and cross-border digital trade. With this transformation comes an unavoidable truth: data protection is no longer an aspirational policy, it is a legal and regulatory imperative. We are also witnessing a shift in regulatory confidence and maturity, with recent enforcement actions telling a powerful story. Here are some highlights:

📌 Kenya's High Court decision in Republic v Tools for Humanity (2025) reaffirmed that biometric and AI-driven systems require valid consent, DPIAs, and accountability.
📌 Nigeria's NDPC enforcement against Meta (2025) demonstrated that African regulators will assert jurisdiction over global platforms and impose significant penalties.
📌 Uganda's PDPO determination against Google (2025) confirmed that foreign tech companies processing African data must comply with local laws.
📌 South Africa's Information Regulator action against the Department of Justice (2025) sent a strong message: public institutions are not exempt from privacy obligations.

Equally important is the emergence of cross-border regulatory cooperation. Collaboration between DPAs such as Kenya's ODPC and Uganda's PDPO in handling cross-border complaints signals the future of enforcement in a continent defined by regional integration and digital trade.

As we go into 2026, here are my reflections:

📌 Africa is no longer a passive recipient of global privacy norms; it is shaping its own enforcement narrative.
📌 Big Tech and public institutions alike must be accountable.
📌 AI, biometrics, and large-scale data systems are now central regulatory priorities.
📌 Collaboration among African DPAs will define the next phase of effective enforcement.

As we look ahead, Africa's data protection story is one of agency, constitutional grounding, and growing regulatory power. The challenge, and the opportunity, is ensuring that innovation continues with trust, dignity, and rights at its core.

Privacy is not a barrier to Africa's digital future. It is its foundation.

Happy International Privacy Day.

#dataprotection #dataprivacy #compliance
-
$632,500 for making consumer privacy rights too difficult to exercise. That's the fine Honda received from the California Privacy Protection Agency (CPPA).

It's a wake-up call for companies still treating privacy rights as a checkbox exercise. It's also something I've seen repeatedly in privacy assessments: companies making it unreasonably difficult for consumers to exercise their privacy rights.

Here are some areas regulators flagged:
❗ Requiring up to 8 fields of information just to opt out (excessive!)
❗ Creating a convoluted submission process for privacy rights requests
❗ Requiring consumers to directly confirm they authorized an agent to submit a request to opt out of sale/sharing or a request to limit (illegal under CCPA)
❗ Failing to train employees handling privacy requests
❗ Ignoring Global Privacy Control (GPC) signals
❗ Creating multiple steps to opt out while enabling one-click opt-ins
❗ Sharing data with vendors without proper documentation

The lesson? Privacy rights must be PRACTICALLY accessible, not just technically available.

Is your company vulnerable to similar issues? Ask:
✅ Can consumers opt out in 2 steps or fewer?
✅ Does your site recognize GPC signals?
✅ Do you have contracts with all vendors covering CCPA obligations?
✅ Is your team trained to process all types of privacy requests?
✅ Is opting out just as simple as opting in?

I'm seeing regulators across states increasingly focus on the how, not just the what, of privacy compliance. The days of hiding opt-out buttons or creating friction-filled privacy request processes are over. Make it easier for people to exercise their privacy rights.

What's been your experience with consumer privacy rights implementations? Have you seen examples of companies doing this particularly well (or poorly)?

Read more about the critical compliance areas companies should review in my latest article for the IAPP: https://lnkd.in/e4aH7Qna
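Of the flagged items, honoring GPC is the most directly checkable in code. The Global Privacy Control spec exposes the signal in two places: a `Sec-GPC: 1` request header on the wire and a `navigator.globalPrivacyControl` property in the browser. A minimal sketch of reading both follows; `applyOptOut` is a hypothetical placeholder for whatever your own consent logic does:

```typescript
// Minimal sketch of detecting a Global Privacy Control signal.

// Server side: check the header on incoming requests. Header names are
// assumed lowercased, as Node.js normalizes them.
function gpcFromHeaders(headers: Record<string, string | undefined>): boolean {
  return headers["sec-gpc"] === "1";
}

// Browser side: check the navigator property. GPC is still a draft spec,
// so the property is typed here as an optional extension, not assumed.
function gpcFromBrowser(): boolean {
  const nav = navigator as Navigator & { globalPrivacyControl?: boolean };
  return nav.globalPrivacyControl === true;
}

// Placeholder for your consent logic: a GPC signal should have the same
// effect as a manually submitted opt-out of sale/sharing.
function applyOptOut(userId: string): void {
  console.log(`Opt-out of sale/sharing recorded for ${userId}`);
}
```

The key design choice, which is exactly what regulators flagged here, is that the GPC branch must land in the same downstream opt-out path as the webform, not a weaker parallel one.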
-
🎉 Excited to share my article on #darkpatterns co-authored with Cristiana Santos, "Dark Patterns, Enforcement, and the Emerging Digital Design Acquis: Manipulation beneath the Interface," published in the European Journal of Law and Technology.

This piece explores the intricate world of dark patterns, deceptive design strategies in digital interfaces that manipulate user decisions:

We analyse the legal and policy frameworks addressing these designs, emphasising the importance of comprehensive regulations like the EU's #DigitalMarketsAct and Digital Services Act to protect consumers.

We examine how these designs manipulate user decisions and the legislative responses to such practices, particularly focusing on the EU's framework, which includes the #DigitalServicesAct and the #AIAct.

We explore the evolution of enforcement against deceptive designs and introduce a novel visibility spectrum to classify dark patterns based on their detectability and manipulativeness.

We argue that, while robust, the current legal frameworks still overlook the subtler, more insidious forms embedded deep within system architectures. This calls for a nuanced understanding and regulatory approach to ensure user autonomy is not compromised.

For a deeper understanding, we propose a three-tier visibility threshold model for dark patterns and scrutinize the adequacy of existing and proposed regulations to address these manipulative practices. Our analysis stresses the importance of extending regulatory oversight beyond the user interface to include system architecture to effectively safeguard against the darkest forms of digital manipulation.

Read the full article here: https://lnkd.in/eR_jky9q

#DigitalDesign #TechLaw #Enforcement #ConsumerProtection #DataProtection #CompetitionLaw #ArtificialIntelligence #OpenAccess #UserInterface #UI #UserExperience #UX

CC Harry Brignull Deceptive Patterns
Thanks to Abhilash Nair Silvia De Conca Zachary Cooper Edoardo Celeste
-
Disney's CCPA settlement gets framed as an opt-out issue. It is, but not in the way most teams think.

California's case basically says the consumer made the choice, but the choice did not travel. A user could opt out in one place and still have to do it again on another service or another device across Disney+, Hulu, and ESPN+ tied to the same account. In one example from the complaint, a bundled subscriber using a laptop, tablet, and connected TV could have had to opt out up to ten times.

That is not a notice problem. That is not a wording problem. That is a systems problem.

On paper, the rule is simple: a consumer opts out, and the company stops selling or sharing their data. Inside the stack, it is rarely that simple. Different apps. Different identifiers. Different ad tech. Different enforcement paths. California also alleged that Disney's webform stopped sharing through Disney's own ad platform, while some embedded third-party ad tech could still keep receiving data.

That is the part privacy teams should sit with. Most privacy programs are built around collecting the signal. Much fewer are built around proving the signal actually propagated. And regulators are starting to care about that difference.

The real question is no longer whether an opt-out exists. It is whether the opt-out survives contact with the system. How is your team checking that a privacy choice actually carries through the plumbing, not just the interface?

#Privacy #Engineering #DataPrivacy #CCPA #Compliance
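One way to make "the signal propagated" a testable claim rather than an assumption is to fan every opt-out to each downstream recipient of personal data and keep a record of which ones confirmed. A minimal sketch under that assumption follows; the `ConsentSink` interface and the system names in the comments are illustrative, not Disney's actual architecture:

```typescript
// Hypothetical sketch: propagate an opt-out to every downstream system and
// retain an auditable record of which ones succeeded.

interface ConsentSink {
  name: string; // e.g. "internal-ad-platform", "embedded-third-party-adtech"
  applyOptOut(userId: string): Promise<void>;
}

interface PropagationRecord {
  sink: string;
  ok: boolean;
  error?: string;
  at: string; // ISO timestamp
}

async function propagateOptOut(
  userId: string,
  sinks: ConsentSink[],
): Promise<PropagationRecord[]> {
  const records: PropagationRecord[] = [];
  for (const sink of sinks) {
    try {
      await sink.applyOptOut(userId);
      records.push({ sink: sink.name, ok: true, at: new Date().toISOString() });
    } catch (e) {
      // A silently failing sink is exactly the alleged failure mode: the
      // webform "worked" while a third party kept receiving data.
      records.push({
        sink: sink.name,
        ok: false,
        error: String(e),
        at: new Date().toISOString(),
      });
    }
  }
  return records; // persist as evidence that the choice actually traveled
}
```

The audit question then becomes mechanical: does every opt-out event have a matching success record for every sink in the registry, across every app and identifier tied to the account?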
-
The U.S. Senate just sent a very loud message to Big Tech: "Your AI guardrails are failing and we want proof you're fixing it."

Case overview
Eight U.S. senators formally questioned X, Meta, Alphabet, Snap, Reddit, and TikTok. The focus? Non-consensual, sexualized deepfakes, across content and AI products.

What regulators are asking for
Not statements. Not policies. Not blog posts. They want evidence.
➤ Internal documentation
➤ How images are created
➤ How they're detected
➤ How moderation works
➤ How monetization is blocked
This goes far beyond PR.

The trigger
Grok came under scrutiny after reports it generated sexualized images, including of minors. That triggered attention from lawmakers. And an investigation by California's AG.

Why this matters
This is no longer a content moderation issue. It's now:
✓ Product design
✓ AI governance
✓ Compliance at the model layer
If your system can generate harm, you own the risk.

The regulatory shift
The Take It Down Act criminalizes non-consensual sexualized imagery. States like New York are pushing:
➤ Mandatory AI labels
➤ Election-period deepfake bans
The direction is clear.

The real takeaway
This feels like the early days of data privacy. Except now, it's about people's bodies, identities, and democracy. The bar has moved from "We have rules" to "We can prove abuse doesn't scale."

Cyber Risk perspective: Deepfakes are now a systemic threat, not an edge case.

What mandatory security and governance controls should every AI platform deploy to detect and contain deepfake abuse at scale? Curious where you'd draw the hard line 👇

--------------

Hi, I'm Harris D. Schwartz, Fractional CISO and Cybersecurity Leader. I help CEOs and executive teams strengthen their security posture and build resilient, compliant organizations. With 30+ years across NIST, ISO, PCI, and GDPR, I know how the right security decisions reduce risk and protect growth. If you are planning how your security program needs to evolve in 2026, this is the right time to have that conversation.

#AI #AIGovernance #Deepfakes #ProductLeadership #TechPolicy #cybersecurity #security #aifrauds #riskmanagement
-
Future of Privacy Forum "Privacy Papers for Policymakers" winner, titled "Can Consumers Protect Themselves Against Privacy Dark Patterns" by Matthew Kugler, Lior Strahilevitz, Marshini Chetty, Chirag Mahapatra, and Yaretzi Ulloa.

Abstract: "Dark patterns have emerged in the last few years as a major target of legislators and regulators. Dark patterns are online interfaces that manipulate, confuse, or trick consumers into purchasing goods or services that they do not want, or into surrendering personal information that they would prefer to keep private. As new laws and regulations to restrict dark patterns have emerged, skeptics have countered that motivated consumers can and will protect themselves against these manipulative interfaces, making government intervention unnecessary. This debate occurs alongside active legislative and regulatory discussion about whether to prohibit dark patterns in newly enacted comprehensive consumer privacy laws. Our interdisciplinary paper provides experimental evidence showing that consumer self-help is unlikely to fix the dark patterns problem. Several common dark patterns (obstruction, interface interference, preselection, and confusion), which we integrated into the privacy settings for a video-streaming website, remain strikingly effective at manipulating consumers into surrendering private information even when consumers were charged with maximizing their privacy protections and understood that objective. We also provide the first published evidence of the independent potency of "nagging" dark patterns, which pester consumers into agreeing to an undesirable term. These findings strengthen the case for legislation and regulation to address dark patterns. Our paper also highlights the broad popularity of a feature of the recent California Consumer Privacy Act (CCPA), which gives consumers the ability to opt-out of the sale or sharing of their personal information with third parties. As long as consumers see the Do Not Sell option, a super-majority of them will exercise their rights, and a substantial minority will even overcome dark patterns in order to do so."

Download: https://lnkd.in/eFaTiKQt
-
Stop marketing trust. Start showing evidence.

This is the fourth post in my series on privacy myths and what actually works inside organizations.

Myth: Selling trust as the cornerstone of your privacy program is the best way to win customers, partners, or executives.

Reality: Regulators do not police "trust." They police claims. If you say you "honor consent" or you "delete on request," those promises must be substantiated, or they become evidence in enforcement or litigation.

Here is what I have seen:
→ Companies with the loudest trust slogans often have the weakest evidence to back them up.
→ Claims on websites and in sales decks are treated as commitments by regulators and plaintiffs.
→ Overpromising creates more exposure than underpromising.

I believe that a stronger approach is to lead with proof:
→ Retention logs that demonstrate deletion in practice
→ Ticketing evidence showing DSAR fulfillment times
→ Vendor contract matrices that reflect real oversight

These are not marketing slides. This is clear evidence of a well-maintained privacy compliance infrastructure. They are what executives listen to when they want to know, "Can we prove it if asked?"

Privacy earns credibility when it produces evidence that stands up in discovery, in an audit, or across the table from a regulator. That is much stronger than any slogan.

This has been my experience across many organizations. If you have seen "trust marketing" truly work as a long-term strategy for privacy, please contribute your experience to the conversation.

And I will keep exploring these myths in my upcoming book: So You Got the Privacy Officer Title—Now What? Link to waitlist in comment.
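The first two proof items, retention logs and DSAR fulfillment times, reduce to a simple data structure: an append-only record written every time a deletion request is fulfilled. A minimal sketch follows; the field names and the 45-day window check (the CCPA response deadline) are illustrative choices, not a compliance standard:

```typescript
// Hypothetical sketch of deletion evidence that could substantiate a
// "we delete on request" claim in an audit or in discovery.

interface DeletionEvidence {
  requestId: string;       // ties back to the DSAR ticket
  dataSubjectId: string;   // pseudonymous reference, not raw identity
  systemsPurged: string[]; // every store the record was removed from
  requestedAt: string;     // ISO timestamp
  completedAt: string;     // lets auditors compute fulfillment time
}

const evidenceLog: DeletionEvidence[] = [];

function recordDeletion(entry: DeletionEvidence): void {
  evidenceLog.push(Object.freeze(entry)); // append-only by convention
}

// The question a regulator actually asks: were requests met within the
// statutory window (e.g. 45 days under CCPA)?
function fulfilledWithinDays(entry: DeletionEvidence, days: number): boolean {
  const elapsedMs = Date.parse(entry.completedAt) - Date.parse(entry.requestedAt);
  return elapsedMs <= days * 24 * 60 * 60 * 1000;
}
```

The point is not the code, it is the habit: each claim in the privacy policy should map to a record like this that someone can pull on request.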