HUGE AI LEGAL NEWS! The European Data Protection Board (EDPB) has published its much-anticipated Opinion on AI and data protection. The opinion looks at 1) when and how AI models can be considered anonymous, 2) whether and how legitimate interest can be used as a legal basis for developing or deploying AI models, and 3) what happens if an AI model is developed using personal data that was processed unlawfully. It also considers the use of first-party and third-party data. On the consequences of developing AI models with unlawfully processed personal data, an area of particular concern for both developers and deployers, the EDPB clarifies that supervisory authorities are empowered to impose corrective measures, including deletion of the unlawfully processed data, retraining of the model, or, in severe cases, requiring its destruction. On anonymity, the opinion grapples with whether AI models trained on personal data can ever fully transcend their origins and be considered anonymous. The EDPB highlights that merely asserting that an AI model does not process personal data is insufficient: supervisory authorities (SAs) must rigorously assess claims of anonymity, considering whether personal data has been effectively anonymised in the model and whether risks such as re-identification or membership inference attacks have been mitigated. For AI developers, this means that claims of anonymity should be substantiated with evidence, including the implementation of technical and organisational measures to prevent re-identification. On legitimate interest as a legal basis for AI, the opinion offers detailed guidance for both the development and deployment phases.
Legitimate interest under Article 6(1)(f) GDPR requires meeting three cumulative conditions: pursuing a legitimate interest, demonstrating that the processing is necessary to achieve that interest, and ensuring that the interests or fundamental rights and freedoms of data subjects do not override it (the balancing test). For third-party data, the opinion emphasises that the absence of a direct relationship with the data subjects necessitates stronger safeguards, including enhanced transparency, opt-out mechanisms, and robust risk assessments. The opinion’s findings stress that the balancing test under legitimate interest must consider the unique risks posed by AI, including discriminatory outcomes, regurgitation of personal data by generative AI models, and broader societal risks of misuse, such as deepfakes or misinformation campaigns. The opinion also provides examples of mitigating measures that could tip the balance in favour of controllers, such as pseudonymisation, output filters, and voluntary transparency initiatives like model cards and annual reports. The implications for developers are significant: compliance failures in the development phase can render an entire AI system non-compliant, leading to legal and operational challenges.
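As a concrete illustration of the pseudonymisation measure mentioned above, the sketch below replaces direct identifiers with keyed hashes before records enter a training corpus. This is a minimal, hypothetical example (the function name, fields, and key handling are my own, not from the opinion); note that under the GDPR pseudonymised data remains personal data, because whoever holds the key can re-identify it.

```python
import hmac
import hashlib

def pseudonymise(record: dict, secret_key: bytes,
                 id_fields=("name", "email")) -> dict:
    """Replace direct identifiers with keyed hashes before the record
    is used for model training. The key is held separately (e.g. by the
    DPO), so re-identification requires access to it: this is
    pseudonymisation, not anonymisation, in GDPR terms."""
    out = dict(record)
    for field in id_fields:
        if field in out:
            digest = hmac.new(secret_key, str(out[field]).encode(),
                              hashlib.sha256)
            out[field] = digest.hexdigest()[:16]  # stable, non-reversible token
    return out

record = {"name": "Jane Doe", "email": "jane@example.com", "age": 34}
pseudo = pseudonymise(record, secret_key=b"held-by-dpo-separately")
```

Because the hash is keyed and deterministic, the same person maps to the same token across datasets (preserving linkage for training), while reversal requires the separately held key.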
Balancing Data Privacy and Transparency in the EU
Explore top LinkedIn content from expert professionals.
Summary
Balancing data privacy and transparency in the EU means protecting people's personal information while making sure data practices and AI systems are clear and understandable. This ongoing challenge requires strict rules, especially with new digital regulations and AI technologies, to keep both privacy and openness at the forefront of European digital policy.
- Clarify data practices: Make privacy policies and disclosures easy to find and understand for every way people might interact with your business, including offline or customer support channels.
- Use privacy by design: Build systems and processes that reduce unnecessary data collection, limit profiling—especially for minors—and favor privacy-preserving approaches from the start.
- Promote responsible transparency: Offer accessible explanations about how AI models and decision-making processes use data, so users can trust and understand what's happening with their information.
This new white paper by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), titled "Rethinking Privacy in the AI Era", addresses the intersection of data privacy and AI development, highlighting the challenges and proposing solutions for mitigating privacy risks. It outlines the current data protection landscape, including the Fair Information Practice Principles (FIPs), GDPR, and U.S. state privacy laws, and discusses the distinction between predictive and generative AI and its regulatory implications. The paper argues that AI's reliance on extensive data collection presents unique privacy risks at both the individual and societal level, and that existing laws are inadequate for the emerging challenges posed by AI systems: they neither tackle the shortcomings of the FIPs framework nor concentrate adequately on the comprehensive data governance measures necessary for regulating data used in AI development. According to the paper, FIPs are outdated and ill-suited to modern data and AI complexities because they: - do not address the power imbalance between data collectors and individuals; - fail to enforce data minimization and purpose limitation effectively; - place too much responsibility on individuals for privacy management; - allow data collection by default, putting the onus on individuals to opt out; - focus on procedural rather than substantive protections; - struggle with the concepts of consent and legitimate interest, complicating privacy management. It emphasizes the need for new regulatory approaches that go beyond current privacy legislation to effectively manage the risks associated with AI-driven data acquisition and processing. The paper suggests three key strategies to mitigate the privacy harms of AI: 1) Denormalize data collection by default: shift from opt-out to opt-in data collection models to facilitate true data minimization.
This approach emphasizes "privacy by default" and the need for technical standards and infrastructure that enable meaningful consent mechanisms. 2) Focus on the AI data supply chain: enhance privacy and data protection by ensuring dataset transparency and accountability throughout the entire lifecycle of data, with regulatory frameworks that address data privacy comprehensively across the data supply chain. 3) Flip the script on personal data management: encourage the development of new governance mechanisms and technical infrastructures, such as data intermediaries and data permissioning systems, to automate and support the exercise of individual data rights and preferences, making it easier for individuals to manage and control their personal data in the context of AI. By Dr. Jennifer King and Caroline Meinhardt. Link: https://lnkd.in/dniktn3V
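The paper's first strategy, opt-in collection by default, can be sketched as a simple consent gate: no processing purpose is permitted unless the user has affirmatively granted it, so silence means "no". This is an illustrative sketch of the idea (the class and method names are hypothetical, not from the paper):

```python
from dataclasses import dataclass, field

@dataclass
class ConsentLedger:
    """Opt-in by default: a purpose is permitted only if the user has
    explicitly granted it, mirroring the 'denormalize data collection
    by default' strategy."""
    granted: set = field(default_factory=set)

    def grant(self, purpose: str) -> None:
        self.granted.add(purpose)

    def revoke(self, purpose: str) -> None:
        self.granted.discard(purpose)

    def may_collect(self, purpose: str) -> bool:
        # Absence of a recorded choice means collection is NOT allowed.
        return purpose in self.granted

ledger = ConsentLedger()  # a fresh ledger permits nothing
```

The design choice is that the default branch denies: an application asking `ledger.may_collect("model_training")` before any grant gets `False`, inverting the opt-out model the paper criticises.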
-
🗞️ A must-read for anyone interested in European AI governance right now: this study, drafted for the Committee on Industry, Research and Energy (ITRE) of the European Parliament by the Policy Department for Transformation, Innovation & Health 👉🏼 Analyses how the AI Act adopted mid-2024 is articulated with other key EU digital regulations 🔎 Examines interactions with: • GDPR • Data Act (DA) • Data Governance Act (DGA) • Digital Services Act (DSA) • Digital Markets Act (DMA) • Cyber Resilience Act (CRA) • NIS2 Directive, the New Legislative Framework (NLF) and product-safety / digital-elements rules 📖 A timely document as the #EU faces the demanding task of building digital rules that the world still lacks, balancing innovation, transparency and fundamental rights. ➡️ creating a broad legal ecosystem connecting data, algorithms and human values. 🎯 3 goals • Ensure trustworthy #AI in Europe — safe, transparent, respectful of rights and EU values. • Foster innovation and competitiveness • Provide legal certainty through a proportionate, risk-based approach. 🗺️ The study maps the interplay among current acts: 🔹with GDPR – Encourage joint guidance between data-protection and AI authorities to simplify impact assessments and ensure consistent supervision across Member States. 🔹with Data Act – Streamline obligations on data quality and access so that compliance supports, rather than slows, AI innovation. – Coordinate governance to prevent duplication and promote data flows for trustworthy AI. 🔹with Data Governance Act – Build bridges between data-sharing frameworks & AI requirements through interoperable standards and clear responsibilities for data use.
🔹with DSA / DMA – Use platform transparency & risk-assessment mechanisms to reinforce, not duplicate, AI Act duties – Promote a coherent, innovation-friendly environment for general-purpose models 🔹with CRA / NIS2 / NLF – Align product-safety, cybersecurity & AI conformity processes to create one coherent certification pathway for digital products. 👉🏼 An #AI Act as an integrated regulatory ecosystem covering data, algorithms, products, platforms and rights = smart coordination turning compliance into trust and competitiveness. Future model proposed: • Principle-based horizontal rules with sectoral modules • Clear layering — data → algorithms → systems → services • Aligned definitions & conformity regimes • Simplified compliance for SMEs, rigorous oversight for high-risk systems 🧭 Practical steps forward ▶️ Short term: joint guidelines (AI Act / GDPR), shared sandboxes, harmonised templates. ⏩️ Medium term: clarify mandates, connect conformity procedures. ⏭️ Long term: build a unified digital framework linking data, AI and platform rules, strengthen international standardisation & partnerships. ➡️ AI for good, trustworthy by design, aligned with rights and values. 🙏🏻 Authors: Hans Graux, Krzysztof G., Nayana Murali, Jonathan Cave, Maarten Botterman
-
‼️The European Data Protection Board has just published its draft Guidelines 3/2025 (version 1.0) on the interplay between the #DSA and the #GDPR. 📍The guidelines stress that the DSA often refers to GDPR concepts such as profiling, special categories of data, or transparency obligations. The EDPB outlines several areas of interplay. Content moderation under the DSA inevitably involves processing personal data, which must be based on lawful grounds under the GDPR. Notice-and-action mechanisms, complaint handling, and account suspensions also require strict adherence to data minimisation and transparency principles. On advertising, the prohibition in Article 26 DSA on using special categories of data for targeting complements GDPR restrictions, reinforcing a layered protection regime. Recommender systems, meanwhile, raise risks of automated decision-making that could trigger Article 22 GDPR. 📍For me, the most striking part of the guidelines concerns minors. Article 28 DSA obliges providers of online platforms accessible to minors to ensure a high level of privacy, safety, and security. The EDPB clarifies that these duties can justify certain data processing under Article 6(1)(c) GDPR, but only if strictly necessary and proportionate. Crucially, Article 28(3) DSA specifies that platforms are not required to process additional personal data simply to establish whether a user is a minor. 📍The guidelines strongly discourage intrusive age assurance methods such as scanning government IDs or permanently storing age data. Instead, platforms should apply privacy-preserving approaches, for example by confirming only that a user meets a threshold age without revealing their exact identity or date of birth. The EDPB emphasises that age assurance must be risk-based: stricter methods may be justified if the platform exposes children to high risks (e.g. harmful or manipulative content), while lighter-touch measures may suffice where risks are low. 
📍Another important clarification is that providers must not nudge minors into choosing recommender systems based on profiling. Non-profiling options should be presented neutrally, and once selected, the platform should not continue processing data for profiling in the background. Similarly, advertisements cannot be targeted at minors on the basis of profiling, even if other GDPR grounds might otherwise permit such processing. 📍The guidelines also recognise that protecting children online must go beyond technical measures. Providers should adapt their services to address risks to minors’ wellbeing, including exposure to harmful content, pressure from personalised recommendations, and misuse of sensitive data. At the same time, measures must be designed with the GDPR principles of minimisation, proportionality, and privacy by design and by default firmly in mind. #privacy #rodo #ecommerce #platforms
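The privacy-preserving age assurance approach the EDPB describes, confirming only that a user meets a threshold age without revealing identity or date of birth, can be illustrated with a minimal sketch (function and variable names are hypothetical): the check yields a single boolean, and the date of birth is never persisted by the platform.

```python
from datetime import date

def meets_age_threshold(date_of_birth: date, threshold_years: int,
                        today: date) -> bool:
    """Return only whether the user has reached the threshold age.
    The caller keeps this boolean; the date of birth itself is not
    stored, in line with data minimisation."""
    # Count completed years, checking whether this year's birthday
    # has already occurred.
    had_birthday = (today.month, today.day) >= (date_of_birth.month,
                                                date_of_birth.day)
    age = today.year - date_of_birth.year - (0 if had_birthday else 1)
    return age >= threshold_years

# Only the boolean leaves this check; no DOB is recorded.
is_adult = meets_age_threshold(date(2010, 6, 1), 18, today=date(2025, 1, 1))
```

A real deployment would delegate the check to a trusted verifier (e.g. an age-assurance intermediary) so the platform never even sees the date of birth; the point of the sketch is the interface, a bare over/under signal rather than identity data.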
-
A reminder for privacy professionals, especially as many complete their annual privacy policy updates: ensure all interactions are covered, or create interaction-specific policies. Too often, privacy policies are narrowly scoped to websites or mobile apps while missing other channels, such as customer support interactions. These can include call recordings, identity or account verification details, agent notes, and quality assurance monitoring. When these collections are not clearly disclosed, organizations create a transparency gap. Privacy laws consistently require that individuals are informed before or at the point of collection, and that notices accurately describe the processing activity. A general privacy policy is not a safe harbor: pointing individuals to a policy that does not cover offline or human-mediated interactions can raise risk under frameworks such as FTC Act Section 5, CCPA and CPRA notice-at-collection requirements, GDPR Articles 13 and 14, and state call recording and consent laws. The fix is not complicated, but it does require intentional governance: • Ensure privacy policies clearly apply to all interaction channels, not just web or app use • Add supplemental or contextual notices for customer support interactions • Align agent scripts and disclosures with actual data practices • Revisit notices as support tools, AI, or monitoring practices evolve Transparency is about ensuring the right notice reaches the individual at the right moment, for the right type of data collection.
-
The European Union is shaping one of the most ambitious digital regulatory frameworks in the world. The AI Act, Data Act, Data Governance Act and the GDPR together aim to balance innovation, transparency and fundamental rights. The recent study “Interplay between the AI Act and the EU Digital Legislative Framework”, written for the European Parliament’s ITRE Committee by Hans Graux, Krzysztof G., Nayana Murali, Jonathan Cave and Maarten Botterman, provides one of the clearest analyses of how these frameworks overlap, complement and sometimes contradict each other. The central insight is simple yet powerful: Europe does not lack regulation. It lacks coherence. 🔍 The key overlaps AI Act and GDPR ✔️Both frameworks are risk-based, yet they approach risk differently. ✔️The AI Act encourages the use of sensitive data to detect or mitigate bias, which may conflict with Article 9 of the GDPR restricting such processing. ✔️Data subject rights like access, rectification or erasure become technically complex when applied to machine learning models. AI Act and Data Act ✔️The Data Act focuses on data access and sharing, while the AI Act prioritises data quality, representativeness and traceability. ✔️What is legally shareable under the Data Act might not always meet the technical and ethical requirements of the AI Act. ✔️Government access mechanisms under both Acts can overlap without clear coordination. ✔️Obligations around cloud switching in the Data Act could interfere with the audit trails required for AI compliance. AI Act and Data Governance Act (DGA) ✔️The DGA establishes trusted frameworks for data intermediaries and data altruism. ✔️These mechanisms can build a culture of trustworthy and transparent data sharing across Europe. ✔️When properly aligned with the AI Act, they can strengthen access to reliable and ethically sourced data for AI development.
✔️Governance structures such as the European Data Innovation Board could play a vital role in supporting the AI Office and ensuring consistent oversight. 💭 My Take The AI Act should not be seen as an isolated piece of regulation but as part of a broader legal ecosystem connecting data, algorithms, and human values. Understanding this interplay is essential for transforming compliance into trust, innovation, and competitive advantage. A must-read for anyone shaping or implementing European AI governance.