Data Privacy Regulations for Businesses


  • Dr. Barry Scannell

    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of the Board of Irish Museum of Modern Art | PhD in AI & Copyright

    59,869 followers

    HUGE AI LEGAL NEWS! The European Data Protection Board (EDPB) has published its much-anticipated Opinion on AI and data protection. The opinion looks at:
    1) when and how AI models can be considered anonymous,
    2) whether and how legitimate interest can be used as a legal basis for developing or using AI models, and
    3) what happens if an AI model is developed using personal data that was processed unlawfully.
    It also considers the use of first-party and third-party data.

    On the consequences of developing AI models with unlawfully processed personal data, an area of particular concern for both developers and users, the EDPB clarifies that supervisory authorities are empowered to impose corrective measures, including deletion of the unlawfully processed data, retraining of the model, or, in severe cases, requiring its destruction.

    On anonymity, the opinion grapples with whether AI models trained on personal data can ever fully transcend their origins and be considered anonymous. The EDPB highlights that merely asserting that an AI model does not process personal data is insufficient. Supervisory authorities (SAs) must assess claims of anonymity rigorously, considering whether personal data has been effectively anonymised in the model and whether risks such as re-identification or membership inference attacks have been mitigated. For AI developers, this means that claims of anonymity must be substantiated with evidence, including technical and organisational measures that prevent re-identification.

    On legitimate interest as a legal basis for AI, the opinion offers detailed guidance for both the development and deployment phases. Legitimate interest under Article 6(1)(f) GDPR requires meeting three cumulative conditions: pursuing a legitimate interest, demonstrating that the processing is necessary to achieve that interest, and ensuring the processing does not override the fundamental rights and freedoms of data subjects. For third-party data, the opinion emphasises that the absence of a direct relationship with the data subjects necessitates stronger safeguards, including enhanced transparency, opt-out mechanisms, and robust risk assessments.

    The balancing test under legitimate interest must consider the unique risks posed by AI: discriminatory outcomes, regurgitation of personal data by generative AI models, and broader societal risks of misuse, such as deepfakes or misinformation campaigns. The opinion also provides examples of mitigating measures that could tip the balance in favour of controllers, such as pseudonymisation, output filters, and voluntary transparency initiatives like model cards and annual reports. The implications for developers are significant: compliance failures in the development phase can render an entire AI system non-compliant, leading to legal and operational challenges.
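The opinion names pseudonymisation among the mitigating measures that can weigh in a controller's favour in the balancing test. As a rough illustration, not drawn from the opinion itself, a keyed-hash step can replace direct identifiers in training records before they enter an AI pipeline; the field names and key handling below are assumptions:

```python
import hashlib
import hmac

def pseudonymise(record: dict, secret_key: bytes, id_fields=("email", "user_id")) -> dict:
    """Replace direct identifiers with keyed hashes (HMAC-SHA256).

    Unlike plain hashing, the secret key prevents simple dictionary
    attacks on common values; rotating or destroying the key controls
    re-identification risk.
    """
    out = dict(record)
    for f in id_fields:
        if f in out:
            digest = hmac.new(secret_key, str(out[f]).encode(), hashlib.sha256)
            out[f] = digest.hexdigest()[:16]  # truncated but stable token
    return out

print(pseudonymise({"email": "jane@example.com", "age": 34}, secret_key=b"rotate-me"))
```

Note that pseudonymised data is still personal data under the GDPR, because the key holder can re-link records; the point of the measure is risk reduction, not anonymisation.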

  • Marcel Warchaftig

    Mastering digital sovereignty: Your data, your rules! | Sales Lead New Business Western Europe at Nextcloud | 🤝

    4,498 followers

    What a surprise for the EU 😱 😉 A recently published expert opinion commissioned by the German Federal Ministry of the Interior has sparked a pivotal discussion on data governance and sovereignty.

    According to the report, US authorities can exert far-reaching access rights over cloud data managed by US-based companies, even when that data is stored in European data centers and administered through local subsidiaries. This is because legal instruments such as the Stored Communications Act, as extended by the CLOUD Act, and Section 702 of FISA focus on the provider's control, not the physical location of the servers.

    This finding is a firm reminder that simply hosting data on European soil does not guarantee protection from extraterritorial legal claims. It reveals structural risks in relying on dominant foreign cloud providers for sensitive data and critical digital infrastructure.

    For Europe to truly uphold its data protection principles and strategic autonomy, the conversation must go beyond compliance checklists and contractual assurances. We need stronger investment in #opensource digital infrastructure and indigenous technologies that reduce dependency on non-European platforms. Open source fosters transparency and auditability while enabling communities and businesses to build on systems that are not bound by foreign legal regimes.

    If #digitalsovereignty is to mean more than a buzzword, we must accelerate our efforts towards resilient, interoperable, and locally governed alternatives. Only then can Europe ensure that its data is governed by the laws and values that its citizens and organisations expect.

    Source: https://lnkd.in/dtpXiwYN

  • Luiza Jarovsky, PhD

    Co-founder of the AI, Tech & Privacy Academy (1,400+ participants), Author of Luiza’s Newsletter (94,000+ subscribers), Mother of 3

    131,252 followers

    🚨 AI Privacy Risks & Mitigations – Large Language Models (LLMs), by Isabel Barberá, is the 107-page report about AI & Privacy you were waiting for! [Bookmark & share below.] Topics covered:

    - Background: "This section introduces Large Language Models, how they work, and their common applications. It also discusses performance evaluation measures, helping readers understand the foundational aspects of LLM systems."
    - Data Flow and Associated Privacy Risks in LLM Systems: "Here, we explore how privacy risks emerge across different LLM service models, emphasizing the importance of understanding data flows throughout the AI lifecycle. This section also identifies risks and mitigations and examines roles and responsibilities under the AI Act and the GDPR."
    - Data Protection and Privacy Risk Assessment: Risk Identification: "This section outlines criteria for identifying risks and provides examples of privacy risks specific to LLM systems. Developers and users can use this section as a starting point for identifying risks in their own systems."
    - Data Protection and Privacy Risk Assessment: Risk Estimation & Evaluation: "Guidance on how to analyse, classify and assess privacy risks is provided here, with criteria for evaluating both the probability and severity of risks. This section explains how to derive a final risk evaluation to prioritize mitigation efforts effectively."
    - Data Protection and Privacy Risk Control: "This section details risk treatment strategies, offering practical mitigation measures for common privacy risks in LLM systems. It also discusses residual risk acceptance and the iterative nature of risk management in AI systems."
    - Residual Risk Evaluation: "Evaluating residual risks after mitigation is essential to ensure risks fall within acceptable thresholds and do not require further action. This section outlines how residual risks are evaluated to determine whether additional mitigation is needed or if the model or LLM system is ready for deployment."
    - Review & Monitor: "This section covers the importance of reviewing risk management activities and maintaining a risk register. It also highlights the importance of continuous monitoring to detect emerging risks, assess real-world impact, and refine mitigation strategies."
    - Examples of LLM Systems' Risk Assessments: "Three detailed use cases are provided to demonstrate the application of the risk management framework in real-world scenarios. These examples illustrate how risks can be identified, assessed, and mitigated across various contexts."
    - Reference to Tools, Methodologies, Benchmarks, and Guidance: "The final section compiles tools, evaluation metrics, benchmarks, methodologies, and standards to support developers and users in managing risks and evaluating the performance of LLM systems."

    👉 Download it below.
    👉 NEVER MISS my AI governance updates: join my newsletter's 58,500+ subscribers (below).

    #AI #AIGovernance #Privacy #DataProtection #AIRegulation #EDPB
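The risk estimation and evaluation step described above combines probability and severity into a final rating. As a generic sketch of that common pattern, where the scales and thresholds below are illustrative assumptions rather than the criteria defined in the report:

```python
# Generic probability x severity matrix, a common pattern in privacy risk
# estimation. Scale labels and thresholds are illustrative assumptions.

PROBABILITY = {"rare": 1, "possible": 2, "likely": 3, "almost_certain": 4}
SEVERITY = {"negligible": 1, "limited": 2, "significant": 3, "maximum": 4}

def evaluate_risk(probability: str, severity: str) -> tuple[int, str]:
    """Return (score, level) to prioritize mitigation efforts."""
    score = PROBABILITY[probability] * SEVERITY[severity]
    if score <= 2:
        level = "low"        # acceptable, keep monitoring
    elif score <= 6:
        level = "medium"     # mitigate where feasible
    elif score <= 9:
        level = "high"       # mitigation required before deployment
    else:
        level = "critical"   # redesign or do not deploy
    return score, level

print(evaluate_risk("likely", "significant"))  # (9, 'high')
```

After mitigation, re-running the same evaluation on the residual risk shows whether it now falls within the acceptance threshold, which mirrors the report's residual-risk step.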

  • Sam Gabriel - CIPP/E, CIPP/US

    Privacy Consultant | CIPP/E, CIPP/US | IEEE AI Healthcare Privacy Standards Contributor | EU, U.S., Gulf, APAC Compliance

    3,322 followers

    📌 When Privacy Gets Personal: How GDPR and CCPA View Sensitive Data

    You've mapped out your privacy obligations. But do you know what kind of personal data you're dealing with? Some data is more… sensitive. Let's break it down 👇

    🇪🇺 GDPR
    🔬 Special Categories of Personal Data, clearly defined under Article 9, including:
    • Health data
    • Ethnic origin
    • Biometric and genetic data
    • Political opinions, trade union membership
    • Sexual orientation, religious beliefs, etc.
    🔐 Requires stronger safeguards: typically explicit consent, or a narrow legal exception (e.g., public health, employment, legal claims).
    ⚖️ Risk-based & contextual: processing can trigger DPIAs, stricter contracts, and regulatory scrutiny.
    🧪 Example: An employer installs facial recognition to control building access. Since this involves biometric data, it's classed as special category data under GDPR, requiring explicit consent or another valid legal justification.
    💡 Bottom line: clear definitions, high thresholds, built-in guardrails.

    🇺🇸 CCPA/CPRA
    🧩 "Sensitive Personal Information" (SPI), introduced by the CPRA amendments to the CCPA, and more broadly framed:
    • Social Security Number (SSN)
    • Precise geolocation
    • Financial account + login info
    • Ethnic origin, religion, union membership
    • Contents of messages, biometric info, etc.
    🚪 Consumers can limit its use/disclosure: businesses must offer a "Limit the Use of My Sensitive Personal Information" link in applicable cases.
    ⚠️ No explicit consent required: unlike GDPR, CCPA doesn't require a separate legal basis; it's about giving consumers control.
    🧪 Example: A company uses facial recognition for identity verification in its services. If that involves a California resident, the business must provide an option to limit the use of this biometric data, or risk non-compliance.
    💡 Bottom line: CCPA treats SPI like a "do-not-track" toggle, not a hard stop.

    🎯 The Core Difference
    GDPR → "Some data is off-limits unless you can strongly justify it."
    CCPA → "Use it if you must, but give consumers a way to opt out."

    🌍 What This Says About Privacy Culture
    🇪🇺 GDPR: protection through restriction
    🇺🇸 CCPA: protection through empowerment
    Same data, different sensitivities.

    #DataPrivacy #PrivacyLaw #GDPR #CCPA #BiometricData #SensitiveData #DataProtection #Compliance #CIPPE #CIPPUS #LegalTech #InfoSec #LinkedinLearning

  • Marie-Doha Besancenot

    Senior advisor for Strategic Communications, Cabinet of 🇫🇷 Foreign Minister; #IHEDN, 78e PolDef

    40,982 followers

    🗞️ A must-read for anyone interested in European AI governance right now: this study, drafted for the Committee on Industry, Research and Energy (ITRE) of the European Parliament by the Policy Department for Transformation, Innovation & Health.

    👉🏼 It analyses how the AI Act, adopted in mid-2024, is articulated with other key EU digital regulations.

    🔎 Examines interactions with:
    • GDPR
    • Data Act (DA)
    • Data Governance Act (DGA)
    • Digital Services Act (DSA)
    • Digital Markets Act (DMA)
    • Cyber Resilience Act (CRA)
    • NIS2 Directive, the New Legislative Framework (NLF) and product-safety / digital-elements rules

    📖 A timely document as the #EU faces the demanding task of building digital rules that the world still lacks, balancing innovation, transparency and fundamental rights ➡️ creating a broad legal ecosystem connecting data, algorithms and human values.

    🎯 3 goals:
    • Ensure trustworthy #AI in Europe: safe, transparent, respectful of rights and EU values.
    • Foster innovation and competitiveness.
    • Provide legal certainty through a proportionate, risk-based approach.

    🗺️ The study maps the interplay among current acts:
    🔹 With GDPR: encourage joint guidance between data-protection and AI authorities to simplify impact assessments and ensure consistent supervision across Member States.
    🔹 With the Data Act: streamline obligations on data quality and access so that compliance supports, rather than slows, AI innovation; coordinate governance to prevent duplication and promote data flows for trustworthy AI.
    🔹 With the Data Governance Act: build bridges between data-sharing frameworks and AI requirements through interoperable standards and clear responsibilities for data use.
    🔹 With the DSA / DMA: use platform transparency and risk-assessment mechanisms to reinforce, not duplicate, AI Act duties, and promote a coherent, innovation-friendly environment for general-purpose models.
    🔹 With the CRA / NIS2 / NLF: align product-safety, cybersecurity and AI conformity processes to create one coherent certification pathway for digital products.

    👉🏼 The result is an #AI Act embedded in an integrated regulatory ecosystem covering data, algorithms, products, platforms and rights: smart coordination turning compliance into trust and competitiveness.

    Future model proposed:
    • Principle-based horizontal rules with sectoral modules
    • Clear layering: data → algorithms → systems → services
    • Aligned definitions & conformity regimes
    • Simplified compliance for SMEs, rigorous oversight for high-risk systems

    🧭 Practical steps forward:
    ▶️ Short term: joint guidelines (AI Act / GDPR), shared sandboxes, harmonised templates.
    ⏩️ Medium term: clarify mandates, connect conformity procedures.
    ⏭️ Long term: build a unified digital framework linking data, AI and platform rules; strengthen international standardisation and partnerships.

    ➡️ AI for good, trustworthy by design, aligned with rights and values.

    🙏🏻 Authors: Hans Graux, Krzysztof G., Nayana Murali, Jonathan Cave, Maarten Botterman

  • Adriano D'Ottavio

    Lawyer - Privacy, Data Protection and Cyber Security @Bird & Bird

    8,825 followers

    🚀 New EDPB Guidance for Cross-Border Data Transfers: EU-U.S. Data Privacy Framework FAQ v2.0 🇪🇺➡️🇺🇸

    The European Data Protection Board has just published the updated FAQ for European Businesses on the EU-U.S. Data Privacy Framework (DPF) – Version 2.0, adopted on 15 January 2026. This is essential reading for any organisation in the EEA transferring personal data to the United States under the DPF.

    📌 Why this matters: the DPF continues to be a key mechanism enabling data transfers from the EEA to U.S. entities self-certified under the Framework. The new FAQ clarifies operational questions and compliance expectations, helping companies navigate the requirements with more certainty.

    🔍 Key highlights from the updated FAQ:
    ✔️ A clear explanation of what the DPF is and how it works in practice for European data exporters.
    ✔️ Guidance on which U.S. companies are eligible to participate in the Framework (e.g., those subject to FTC or DoT enforcement).
    ✔️ Practical steps European businesses should take before transferring personal data to a DPF-certified entity.
    ✔️ Specific considerations for transfers to U.S. subsidiaries, controllers and processors.
    ✔️ References to further resources and verification tools to check active DPF certifications.
    ✔️ Importantly, while the DPF facilitates lawful transfers, all remaining GDPR obligations still apply, from legal basis to transparency and accountability.

    📘 For the full text, take a look below 👇

    #DataPrivacy #EDPB #DPF #GDPR #DataProtection #CrossBorderDataTransfers #Compliance #Privacy

  • Colin S. Levy

    General Counsel at Malbek | Author of The Legal Tech Ecosystem | I Help Legal Teams and Tech Companies Navigate AI, Legal Tech, and Digital Enablement | Fastcase 50

    51,846 followers

    As a veteran SaaS lawyer, I've watched Data Processing Agreements (DPAs) evolve from afterthoughts to deal-breakers. Let's dive into why they're now non-negotiable and what you need to know.

    A) DPA essentials often overlooked:
    - Subprocessor management: DPAs should detail how and when clients are notified of new subprocessors. This isn't just courteous, it's often legally required.
    - Cross-border transfers: post-Schrems II, mechanisms for lawful data transfers are crucial. Standard Contractual Clauses aren't a silver bullet anymore.
    - Data minimization: concrete steps to ensure only necessary data is processed. Vague promises don't cut it.
    - Audit rights: specific procedures for controller-initiated audits. Without these, you're flying blind on compliance.
    - Breach notification: clear timelines and processes for reporting data breaches. Every minute counts in a crisis.

    B) Why cookie-cutter DPAs fall short:
    - Industry-specific risks: healthcare DPAs need HIPAA provisions; fintech needs PCI-DSS compliance clauses. One size does not fit all.
    - AI/ML considerations: special clauses for automated decision-making and profiling are essential as AI becomes ubiquitous.
    - IoT challenges: addressing data collection from connected devices. The 'Internet of Things' is a privacy minefield.
    - Data portability: clear processes for returning data in usable formats post-termination. Don't let your data become a hostage.
    - Privacy by design: embedding privacy considerations into every aspect of data processing. It's not just good practice, it's the law.

    In 2024, with GDPR fines hitting €1.4 billion, generic DPAs are a liability, not a safeguard. As AI and IoT reshape data landscapes, DPAs must evolve beyond checkbox exercises to become strategic tools. Remember, in the fast-paced tech industry, knowledge of these agreements isn't just useful – it's essential. They're not just legal documents – they're the foundation for innovation and collaboration in our digital age.

    Pro tip: review your DPAs quarterly. The data world moves fast, so your agreements should keep pace. Pay special attention to changes in data protection laws, new technologies you're adopting, and shifts in your data processing activities. Clear, well-structured DPAs prevent disputes and protect all parties' interests.

    What's the trickiest DPA clause you've negotiated? Share your war stories below.

    #legaltech #innovation #law #business #learning

  • Katharina Koerner

    AI Governance, Privacy & Security I Trace3 : Innovating with risk-managed AI/IT - Passionate about Strategies to Advance Business Goals through AI Governance, Privacy & Security

    44,702 followers

    This new white paper by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), titled "Rethinking Privacy in the AI Era", addresses the intersection of data privacy and AI development, highlighting the challenges and proposing solutions for mitigating privacy risks. It outlines the current data protection landscape, including the Fair Information Practice Principles (FIPs), GDPR, and U.S. state privacy laws, and discusses the distinction and regulatory implications between predictive and generative AI.

    The paper argues that AI's reliance on extensive data collection presents unique privacy risks at both individual and societal levels. Existing laws are inadequate for the emerging challenges posed by AI systems, because they don't fully tackle the shortcomings of the FIPs framework or concentrate adequately on the comprehensive data governance measures necessary for regulating data used in AI development.

    According to the paper, FIPs are outdated and not well suited for modern data and AI complexities, because:
    - They do not address the power imbalance between data collectors and individuals.
    - They fail to enforce data minimization and purpose limitation effectively.
    - They place too much responsibility on individuals for privacy management.
    - They allow data collection by default, putting the onus on individuals to opt out.
    - They focus on procedural rather than substantive protections.
    - They struggle with the concepts of consent and legitimate interest, complicating privacy management.

    The paper emphasizes the need for new regulatory approaches that go beyond current privacy legislation to effectively manage the risks associated with AI-driven data acquisition and processing. It suggests three key strategies to mitigate the privacy harms of AI:

    1) Denormalize data collection by default: shift from opt-out to opt-in data collection models to facilitate true data minimization. This approach emphasizes "privacy by default" and the need for technical standards and infrastructure that enable meaningful consent mechanisms.
    2) Focus on the AI data supply chain: enhance privacy and data protection by ensuring dataset transparency and accountability throughout the entire lifecycle of data. This includes a call for regulatory frameworks that address data privacy comprehensively across the data supply chain.
    3) Flip the script on personal data management: encourage the development of new governance mechanisms and technical infrastructures, such as data intermediaries and data permissioning systems, to automate and support the exercise of individual data rights and preferences. This strategy aims to empower individuals by facilitating easier management and control of their personal data in the context of AI.

    By Dr. Jennifer King and Caroline Meinhardt.
    Link: https://lnkd.in/dniktn3V
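The first strategy, opt-in by default, has a direct engineering counterpart: every collection purpose starts disabled, so a user's silence means no processing. This is a minimal sketch under assumed purpose names, not an implementation described in the paper:

```python
from dataclasses import dataclass

@dataclass
class ConsentPreferences:
    """Opt-in by default: every collection purpose starts disabled,
    so absence of a choice means no processing ('privacy by default').
    The purpose names are illustrative assumptions."""
    analytics: bool = False
    personalisation: bool = False
    model_training: bool = False

def may_collect(prefs: ConsentPreferences, purpose: str) -> bool:
    # Unknown purposes are denied rather than silently allowed.
    return getattr(prefs, purpose, False)

prefs = ConsentPreferences()           # user has made no choices yet
print(may_collect(prefs, "model_training"))  # False
prefs.model_training = True            # explicit opt-in recorded
print(may_collect(prefs, "model_training"))  # True
```

The design choice worth noting is the fail-closed default in `may_collect`: a purpose that was never declared, or never enabled, is treated as refused, which is the inverse of the opt-out model the paper criticises.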

  • Paakhhi G.

    Data Privacy Consultant & Trainer | GDPR |DPDPA| DPO Track | Compliance & Risk Management

    12,627 followers

    Most companies get this wrong: NDA ≠ DPA.

    I still see organisations trying to "solve privacy" by inserting one confidentiality clause into a vendor NDA and assuming they are compliant. But they aren't.

    ✔️ An NDA protects business secrecy.
    ✔️ A DPA governs lawful processing of personal data.

    The distinction is not academic. It determines:
    👉 Whether your processing is lawful at all
    👉 Whether your vendor relationship is compliant under DPDP / GDPR
    👉 Whether you are exposed to regulatory penalties even without a breach

    I've uploaded a short comparison note that breaks down:
    → When an NDA is enough
    → When a DPA is legally mandatory
    → Why one cannot substitute for the other
    → What legal, operational, and regulatory risks each one addresses

    If you are:
    • An in-house counsel reviewing vendor contracts
    • A DPO or privacy consultant designing compliance frameworks
    • A founder outsourcing data processing
    • Or a lawyer advising on tech/data matters
    this distinction will materially change how you draft, review, and negotiate contracts.

    📄 See the document for the complete comparison. If you've ever seen NDAs used as a "privacy workaround", I'd be interested to hear how you've handled that in practice.

  • Mateusz Kupiec, FIP, CIPP/E, CIPM

    Institute of Law Studies, Polish Academy of Sciences || Privacy Lawyer at Traple Konarski Podrecki & Partners || DPO || I know GDPR. And what is your superpower?🤖

    26,595 followers

    ⚖️ The Italian Data Protection Authority imposed a €10,000 fine on the nursery "La Combricola Dei Birichini di Betty" (Rho, Milan) for multiple infringements of the #GDPR relating to the processing of children's personal data. The case concerned the large-scale publication of identifiable images of infants - some depicting particularly intimate moments such as rest, mealtime, or hygiene - on the nursery's website and business profile, as well as the deployment of a video surveillance system within classrooms and bathrooms.

    💡 The Garante examined consent not only through the formal lens of Article 6(1)(a) GDPR but also within the broader framework of children's fundamental rights. It reaffirmed that the validity of parental consent is substantively limited by the principle of the child's best interests, a constitutional and international human rights standard. Where consent is incompatible with that principle - e.g. where it results in public exposure of minors in ways detrimental to their dignity and safety - it cannot serve as a lawful basis under the GDPR, regardless of formal compliance.

    🔹 The Garante also identified several classic (based on my experience with educational institutions) deficiencies undermining the validity of consent. First, the consent form contained misleading information, incorrectly stating that refusal to authorise the publication of the child's images would preclude enrolment. This misrepresentation, irrespective of the nursery's actual practice, was sufficient to vitiate the "freely given" nature of consent under the GDPR.

    🔹 Second, the form failed to provide sufficiently specific and intelligible information about the scope of processing - particularly the types of images to be published and the online channels used - thus preventing parents from understanding the real extent of disclosure and rendering consent uninformed.

    🔹 Third, consent was collected through a single, undifferentiated declaration covering several distinct processing purposes (educational documentation, promotional use, and internal sharing), in breach of the GDPR's requirement of granularity. Finally, the Garante observed that the nursery obtained consent from only one parent, whereas any lawful publication of children's images - where admissible at all - requires the consent of both holders of parental responsibility.

    🔹 The Authority further held that recording children, staff, and third parties within educational areas via video surveillance lacked a lawful basis under both Article 6 GDPR and Article 4 of Law No. 300/1970. The installation of cameras in areas used by children and educators was deemed a disproportionate and intrusive form of control. The nursery also failed to perform a DPIA, provide adequate layered information, and maintain an independent and properly designated DPO, in breach of Articles 37(7) and 38(6) GDPR.

    #privacy #rodo
