How Age Verification Affects Privacy


Summary

Age verification laws are designed to protect minors online, but they often require individuals to share sensitive personal information, raising serious privacy concerns for both children and adults. This process can compromise anonymity, expose personal data to potential breaches, and reshape how people access digital spaces.

  • Audit data practices: Regularly review how much personal information your organization collects and store only what is truly necessary to reduce privacy risks.
  • Explore privacy-friendly solutions: Seek out verification methods that confirm age without storing full identity details, such as document-free checks or age attestation tokens.
  • Advocate for balanced rules: Stay informed about emerging laws and participate in public discussions to encourage policies that protect both children and digital privacy.
Summarized by AI based on LinkedIn member posts
  • Omer Tene

    Partner, Goodwin

    ✅ New EDPB opinion on age assurance. Age verification requirements are proliferating, raising thorny privacy challenges. On the one hand, who would oppose protecting kids online? On the other hand, such protection could levy a steep price on the individual rights (privacy, free speech, and more) of not just kids but *all* users online.

    ✅ Laws requiring age verification are multiplying at a furious pace, including laws in more than a dozen states on access to porn sites; laws on kids' and teens' access to social media; laws on addictive online services; and privacy laws and regulatory mandates applying to sites and services likely to be accessed by kids.

    ✅ Alas, in an online context, verifying age requires processing some of the most sensitive personal information out there: biometrics, government IDs, credit card numbers, and more. And it risks crushing online anonymity not just for kids but also for adults who need to prove their age bona fides. Now that's a real pickle.

    ✅ The EDPB does a good job unpacking the competing interests and offering businesses a path forward. In short: implement a risk-based approach (don't use a sledgehammer to crack a nut); complete a DPIA; minimize data (no need to verify identity or location just to determine someone is over 13); employ privacy-enhancing technologies (architectures where third-party vendors verify credentials and convey just a binary response); ensure security; and provide human oversight of ADMT.
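The privacy-enhancing architecture described above (a third-party vendor verifies a credential and conveys only a binary response to the platform) can be sketched roughly as follows. This is a minimal illustration under invented names, not a real protocol: a production system would use public-key signatures and a standardized token format rather than a shared HMAC key.

```python
import hashlib
import hmac
import json
import secrets
import time
from datetime import date

# Hypothetical shared key between the age-verification vendor and the
# platform; a real deployment would use asymmetric signatures instead.
VENDOR_KEY = secrets.token_bytes(32)


def issue_age_token(date_of_birth: str, threshold: int = 13) -> dict:
    """Vendor side: check the credential, then discard everything except
    a signed, short-lived boolean claim (no name, no ID, no birth date)."""
    born = date.fromisoformat(date_of_birth)
    today = date.today()
    age = today.year - born.year - ((today.month, today.day) < (born.month, born.day))
    claim = {
        "over_threshold": age >= threshold,
        "threshold": threshold,
        "exp": int(time.time()) + 300,  # token expires in 5 minutes
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(VENDOR_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}


def platform_accepts(token: dict) -> bool:
    """Platform side: verify signature and expiry. The platform learns
    one bit about the user and nothing else."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(VENDOR_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False
    if token["claim"]["exp"] <= time.time():
        return False
    return bool(token["claim"]["over_threshold"])
```

Only the vendor ever sees the underlying document; the platform stores at most a transient boolean, which is the data-minimisation point the EDPB makes.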

  • Michael J. Silva

    Founder - Periscope Dossier & Ultra Secure Emely.AI | Cybersecurity Expert

    🛂 How comfortable are you handing over a scan of your driver’s license – or your face – every time you want to read an article, watch a video, or join a support group? Governments on several continents are fast-tracking laws that force websites and apps to prove a visitor’s age before displaying “adult” or even mildly sensitive content. Most companies meet the mandate by collecting highly personal data - photo IDs, biometric scans, credit-card details - creating fresh targets for hackers and eroding the presumption of online anonymity. Early rollouts show the irony: determined teens sail past checks with VPNs, while adults who value privacy or lack acceptable ID get shut out.

    😑 My Take: For decades the internet’s strength was its ability to let people explore ideas without revealing their real-world identity. Age-verification plans flip that model on its head. Once platforms capture your face template or government ID, they hold a tempting trove for marketers, law enforcement, and threat actors alike. Recent breaches at third-party “verify-tech” vendors prove that even well-intentioned rules can spill lifetime identifiers across the dark web.

    🔮 The Future: Within the next five years, showing digital ID will likely become as routine as cookie pop-ups are today. Two scenarios could unfold:
    1. A privacy-conscious path where zero-knowledge proofs confirm age without exposing extra details. That demands heavy investment and political will.
    2. A surveillance-by-default path where full identity is stored in vast databases, ripe for misuse.
    The second is cheaper and therefore more probable unless citizens and businesses push back now.

    ✅ What You Should Think About:
    - Audit your organization’s data-collection practices. Storing less personal information is the simplest way to avoid future liability.
    - If your product targets a global audience, map emerging laws country by country – compliance may require different verification methods or a redesign of user flows.
    - Explore privacy-preserving solutions (age attestation tokens, document-free checks) and join standards groups shaping them. Your early input can steer the market toward safer norms.
    - As an individual, consider diversifying your online accounts: keep sensitive communities on platforms that commit to minimal data retention, and enable end-to-end encryption where possible. 🔐
    - Finally, speak up. Legislators often adopt blunt technical mandates because they don’t hear practical alternatives. Share your expertise, comment on draft bills, and encourage peers to do the same.

    The move to an ID-checked internet isn’t just a child-safety issue; it’s a pivotal moment for digital civil liberties. Let’s make sure the cure doesn’t compromise the very privacy it aims to protect. Source: nymag

  • Damir Ćuća

    Founder • CEO | 2x Successful Exits • Father of 6 • Retired

    Debunking the Myths of Age Verification on Social Networks 🚨

    The government has passed a bill mandating age restrictions on social networks for children under 16. While the intent is to protect kids, the execution raises serious privacy concerns. Here’s what’s really at stake:

    1️⃣ Everyone will need age verification. The rule applies to under-16s, but because social networks can’t know your age without verifying it, every Australian will need to prove their age to access platforms like Facebook, Instagram, or TikTok. It’s not just kids - it’s everyone.

    2️⃣ Sensitive documents are mandatory. Age checks can only be performed using official documents like passports, driver’s licences, identity cards, or birth certificates. This means sharing sensitive personal information even if you’re well over 16.

    3️⃣ Your data is at risk. Social networks will either build their own age verification systems or rely on third-party services. Both options significantly increase the number of entities handling your personal data, multiplying the risks of data breaches. Sensitive details could leak onto the dark web, exposing you to identity theft or fraud.

    4️⃣ No more anonymity. Every social media account and post will be traceable to an individual. This kills anonymity. People may censor themselves out of fear that their opinions could lead to social backlash, legal trouble, or government scrutiny. This poses a serious threat to free speech.

    5️⃣ Non-Australians are monitored too. The government wants to ensure people can’t bypass the rules using a VPN. Social networks would need to monitor posts, photos, and videos to detect patterns that suggest a user is Australian. This requires algorithms to continuously track and estimate your physical location, creating an invasive surveillance system.

    6️⃣ “Binary checks” are misleading. The government claims age verification is a simple yes-or-no (binary) process. In reality, verifying your age still requires you to hand over sensitive documents. Whoever provides the age-check service will see and store those documents. A binary result is only possible after verifying your identity in full.

    7️⃣ More than just your age is shared. When an age verification happens, the identity service logs not just your age but also the requesting social network and your profile. This means the government could potentially track which platforms you use, your account handles, and your social activity, creating a detailed map of your online presence.

    8️⃣ Full traceability becomes the norm. Both the social network and the identity provider will maintain records linking your social media accounts to your verified identity. This ensures full traceability of everything you post, comment, or like, eroding any remaining privacy.

  • TL;DR: Governments worldwide are moving to restrict access to online services based on age. More than 370 scientists have signed an open letter calling for a moratorium on age-assessment technologies until there is solid evidence of their feasibility and societal impact. Protecting minors is essential, but blanket identity control across the internet is unlikely to be the right solution. https://lnkd.in/eXrx77Hm

    Countries including Australia and several EU Member States are advancing age-verification laws. What once seemed unthinkable - systematic access control to the internet - is becoming mainstream policy. Protecting children from harmful content is a legitimate goal. But mandatory age verification is a far-reaching intervention introduced without sufficient evidence that it works and without a full assessment of its broader consequences.

    Offline age checks are limited and situational. Online, however, verification risks becoming a permanent identification requirement for nearly all digital interactions, affecting adults as well as minors. This shifts the default from trust to systematic control and could undermine the internet’s role as a space for information, community, and democratic participation.

    Implementing age verification would require large-scale new infrastructure, raise privacy risks (especially with biometrics), and potentially lead to blocking non-compliant services. The danger of “function creep” is real. Such systems are costly yet easy to bypass via VPNs or shared credentials, while increasing data collection and excluding vulnerable groups.

    Before redesigning the digital public sphere, policymakers should ensure proportionality, scientific evidence, and democratic debate, and focus on targeted accountability for tech companies rather than universal identification.

  • Jamal Ahmed

    Privacy & AI Governance Expert | Privacy Leader of the Year | Global Keynote Speaker | Bestselling Author, The Easy Peasy Guides: GDPR & EU AI Act (2026) | 73,786+ Careers Elevated 🔥

    Reddit, Inc. fined £14,500,000 by the Information Commissioner's Office.

    No breach. No cyberattack. The issue was simpler. And more important. They relied on self-declared age. That meant children under 13 could access the platform. And their personal data was being processed without a valid lawful basis.

    Most organisations still don’t understand: regulators don’t assess risk based on who you intend your users to be. They assess risk based on who can realistically access your service. That’s the shift. It’s not enough to say: “Children aren’t our target audience.” The real question is: could they get in anyway? If the answer is yes, then you must design for that risk.

    This is where leadership matters. Strong privacy teams don’t wait for evidence. We assess foreseeable risk. We don’t design for intention. We design for reality. Because enforcement rarely comes from what you planned. It comes from what actually happens.

    The best privacy leaders understand:
    1. Timing is accountability
    2. Risk visibility is governance
    3. Assumptions are expensive and embarrassing

    The organisations that get this right don’t just avoid fines. They build trust at scale. When regulators come looking, they don’t ask what you intended. They ask what you knew, and what you did about it!

  • Ann-Mary Rajanayagam

    Helping Leaders Make Responsible AI & Tech Decisions | Chief Technology Officer | Founder - Alderon & Female Founders Club | NED | Speaker | Creator of the Human-First, AI-Native Framework

    ⚠️ We didn’t just ban kids from social media. We normalised digital ID for everyone.

    To enforce Australia’s under-16 social media ban, platforms now have to verify that users are not children. That sounds simple... until you follow the logic. Age verification means more identity checks, more inference, more data collection. This time it's government-mandated.

    This ban may still be the right intervention... but it isn’t a free one. In this edition of the Human In The Loop Pod Newsletter, I’ve unpacked what platforms are actually doing, why this aligns uncomfortably well with their data incentives, and the regulatory lessons we keep relearning the hard way.

  • Maxwell Labi

    AI-Enabled Transformation Leader | Enterprise IT Integration | Deliveroo, DoorDash & Wolt

    Last night, I appeared on the BBC One Show to share something that started as a parent protecting his child but has also become a case study in AI deployment failures. (Video below)

    My 12-year-old daughter did Roblox's facial age verification check so she could keep chatting with her friends. Their AI classified her as 18-20. All the parental controls I'd set up were immediately removed without notification or consent. When I tried to fix it, I hit walls: broken chatbots, circular emails, no human support.

    Here's what concerns me as someone who works in technology: we're rolling out AI systems at scale without adequate safeguards. Facial recognition AI has documented accuracy problems with darker skin tones. How many other children are being misclassified? How many parents don't know?

    This isn't just about Roblox. Under the UK's Online Safety Act, this same age verification technology is being mandated across platforms. If the "gold standard" system can't identify a 12-year-old correctly, we have a problem.

    Three lessons for anyone deploying AI systems:
    1. Test for edge cases and bias - especially when children are involved
    2. Build human oversight into critical failures - AI should augment, not replace, human judgment in high-stakes scenarios
    3. Don't strip away existing safeguards - if your AI overrides manual protections, you're creating risk

    For parents with children on Roblox: check your child's account settings. Look for the age badge on their profile. If it's wrong, you should now be able to correct it through Roblox's help article: "Updating Your Child's Age (For Parents)"

    As a parent, I'm disappointed. As someone who believes in technology's potential, I'm frustrated. We can do better than this. Full segment also available on BBC iPlayer: https://lnkd.in/e5Mvw3Ps #ChildSafety #AI #TechAccountability #OnlineSafety
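Lessons 2 and 3 above amount to a gating policy: an uncertain model output should route to human review rather than silently override existing safeguards. A rough sketch of such a gate, with thresholds and field names invented for illustration (nothing here reflects Roblox's actual system):

```python
from dataclasses import dataclass


@dataclass
class AgeEstimate:
    age: float         # the model's point estimate in years
    confidence: float  # model confidence in [0.0, 1.0]


def apply_age_estimate(estimate: AgeEstimate, controls_on: bool,
                       adult_cutoff: int = 18,
                       min_confidence: float = 0.95) -> dict:
    """Decide what to do with a facial age estimate.

    Safeguards are only relaxed on a high-confidence adult estimate,
    and never automatically while parental controls are active.
    """
    if estimate.confidence < min_confidence:
        # Lesson 2: low confidence goes to a human, state unchanged.
        return {"action": "human_review", "controls_on": controls_on}
    if estimate.age >= adult_cutoff and controls_on:
        # Lesson 3: never strip existing protections without review.
        return {"action": "notify_parent_and_review", "controls_on": True}
    if estimate.age >= adult_cutoff:
        return {"action": "grant_adult_access", "controls_on": False}
    # Default branch is the safe one: keep (or restore) minor settings.
    return {"action": "keep_minor_settings", "controls_on": True}
```

The design choice is that the failure mode of the gate is conservative: every ambiguous or controls-affecting path ends in review, so a misclassification cannot silently remove protections the way it did in the incident above.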

  • Mateusz Kupiec, FIP, CIPP/E, CIPM

    Institute of Law Studies, Polish Academy of Sciences || Privacy Lawyer at Traple Konarski Podrecki & Partners || DPO || I know GDPR. And what is your superpower?🤖

    ‼️ The European Data Protection Board has just published its draft Guidelines 3/2025 (version 1.0) on the interplay between the #DSA and the #GDPR.

    📍 The guidelines stress that the DSA often refers to GDPR concepts such as profiling, special categories of data, or transparency obligations. The EDPB outlines several areas of interplay. Content moderation under the DSA inevitably involves processing personal data, which must be based on lawful grounds under the GDPR. Notice-and-action mechanisms, complaint handling, and account suspensions also require strict adherence to data minimisation and transparency principles. On advertising, the prohibition in Article 26 DSA on using special categories of data for targeting complements GDPR restrictions, reinforcing a layered protection regime. Recommender systems, meanwhile, raise risks of automated decision-making that could trigger Article 22 GDPR.

    📍 For me, the most striking part of the guidelines concerns minors. Article 28 DSA obliges providers of online platforms accessible to minors to ensure a high level of privacy, safety, and security. The EDPB clarifies that these duties can justify certain data processing under Article 6(1)(c) GDPR, but only if strictly necessary and proportionate. Crucially, Article 28(3) DSA specifies that platforms are not required to process additional personal data simply to establish whether a user is a minor.

    📍 The guidelines strongly discourage intrusive age assurance methods such as scanning government IDs or permanently storing age data. Instead, platforms should apply privacy-preserving approaches, for example by confirming only that a user meets a threshold age without revealing their exact identity or date of birth. The EDPB emphasises that age assurance must be risk-based: stricter methods may be justified if the platform exposes children to high risks (e.g. harmful or manipulative content), while lighter-touch measures may suffice where risks are low.

    📍 Another important clarification is that providers must not nudge minors into choosing recommender systems based on profiling. Non-profiling options should be presented neutrally, and once selected, the platform should not continue processing data for profiling in the background. Similarly, advertisements cannot be targeted at minors on the basis of profiling, even if other GDPR grounds might otherwise permit such processing.

    📍 The guidelines also recognise that protecting children online must go beyond technical measures. Providers should adapt their services to address risks to minors’ wellbeing, including exposure to harmful content, pressure from personalised recommendations, and misuse of sensitive data. At the same time, measures must be designed with the GDPR principles of minimisation, proportionality, and privacy by design and by default firmly in mind. #privacy #rodo #ecommerce #platforms

  • Jamie Lord

    Solution Architect at CDS UK

    The UK House of Lords just voted to require age verification for VPN services - tools explicitly designed to protect user privacy. If you work in security, infrastructure, or privacy-focused product development, this fundamentally changes how you'll need to think about serving UK users.

    Two amendments passed last week as part of the Children's Wellbeing and Schools Bill. Amendment 92 (207-159 votes) mandates that VPNs "offered or marketed to persons in the United Kingdom" implement age assurance. Amendment 94a (261-150) extends similar requirements to virtually all platforms where users can post or share content with others.

    VPNs exist to conceal browsing data and prevent profiling. Requiring identity verification to use them inverts their entire purpose. It's rather like mandating that anonymous tip lines record caller IDs. How would a zero-logs provider like Mullvad comply? Their entire architecture avoids storing user identity. Many privacy-focused services accept cryptocurrency payments specifically to avoid collecting identifiable information. These providers will likely simply refuse to serve UK users, pushing people toward less reputable alternatives - or toward rolling their own solutions on cheap VPS instances, which require no such verification.

    The Lords also rejected more intrusive proposals, including mandatory on-device content scanning. That these were seriously considered indicates the direction of travel.

    Worth noting the broader scope here. Despite political messaging framing this as a "social media ban for under-16s," the definition of "user-to-user services" captures forums, gaming platforms, messaging apps, and essentially any interactive service. The identity verification layer would extend far beyond what most people imagine.

    This still requires House of Commons approval, and amendments can be rejected. For those building privacy-preserving infrastructure or serving UK markets, the question is whether to architect around potential compliance requirements now or wait to see what actually becomes law. #Cybersecurity #Privacy #VPN #UKTech #DigitalPolicy #InfoSec #DataProtection #OnlineSafety #TechPolicy

  • Daisy Soderberg-Rivkin

    Global Policy Manager at Rover

    This week in Demystifying Trust & Safety: The Age Problem No One Can Solve

    OpenAI's teen safety features expose an impossible choice: you cannot build age-appropriate AI without building surveillance infrastructure. To detect if someone is under 18, you need behavioral profiling. That same system can detect anything: depression, political views, sexual orientation. The infrastructure doesn't care what you're looking for. We're demanding perfect safety with perfect privacy, then acting shocked when companies can't deliver both.

    This week: why all three options (do nothing, hard verification, behavioral detection) fail, what "age prediction" actually requires, and why we're forcing companies to make life-or-death policy decisions in a vacuum. #TrustAndSafety #AIEthics #TechPolicy #Tech
