Privacy Guidelines from Data Protection Authorities


Summary

Privacy guidelines from data protection authorities are official rules and recommendations that help organizations protect individuals’ personal information and comply with privacy laws like the GDPR. These guidelines clarify how data must be handled, what rights individuals have, and what safeguards companies should implement, especially in today's tech-driven environments.

  • Clarify data use: Clearly explain what personal data you collect and why, using straightforward language that anyone can understand.
  • Respect user rights: Make it easy for people to access, correct, or delete their data and provide clear instructions for withdrawing consent or raising complaints.
  • Adapt to evolving tech: Stay up-to-date with privacy authorities’ guidance on emerging topics like AI and cross-border data sharing, and update your privacy practices accordingly.
Summarized by AI based on LinkedIn member posts
  • Vipender Mann

    Lawyer | DPDP Act & Data Protection Law | AI Governance (AIGP) & Privacy Engineering (CMU) | Making Regulatory Decisions Defensible

    A DPDP privacy notice is not a UX nice-to-have. It is a statutory object. Compliance hinges on the consent screen, not on how long your privacy policy is. If mandatory elements are missing, you are non-compliant. Full stop.

    What DPDP actually requires at the notice stage, in real product language:

    1. What data + why (specified purpose). Your notice must clearly itemise which personal data you collect and the specific purpose(s) for processing. Vague lines like "to improve our services" fail Section 5. Example: "We collect your name, mobile number, PAN and bank account details to open and operate your trading account, verify your identity, and comply with KYC/AML." Avoid: "to improve our services."

    2. Rights & withdrawal: the "how", not just the "what". You must explain how users can withdraw consent and raise grievances, via specific links and usable channels. If users have to hunt, the notice fails. Example: "Withdraw consent or raise a grievance via Settings → Privacy, or email privacy@company.in."

    3. Escalation route to the Data Protection Board. The notice must state how a Data Principal can approach the Data Protection Board of India if dissatisfied with grievance handling. No future URLs required, just the statutory route. Example: "If your grievance is not resolved to your satisfaction, you may complain to the Data Protection Board of India in the manner notified by the Government."

    4. Language and clarity are compliance requirements. The notice must offer an option to access it in English or any Eighth Schedule language, and be standalone, clear, and plain. Example: a short screen at collection plus a language toggle: English | हिंदी | বাংলা | தமிழ்

    Practical takeaway: build a reusable DPDP notice component that always carries data + purpose, a rights/withdrawal route, the Board complaint route, and a language option. If you mapped every sentence of your notice to a DPDP provision, would it survive scrutiny?
Relevant provisions: DPDP Act, 2023, Sections 5(1)–(3), 6(4), 11–13; DPDP Rules, 2025, Rule 3.

My view: most DPDP non-compliance will not come from missing policies. It will come from weak notices that look fine to designers but fail legally. #DPDP #DataProtection #ProductCompliance
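The reusable notice component suggested in the post can be sketched as a data structure with a completeness check. This is a minimal illustration, not a statutory template: the class and field names (`DPDPNotice`, `missing_elements`, `VAGUE_PURPOSES`) are assumptions, not terms from the Act.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: field names are assumptions, not statutory terms.
@dataclass
class DPDPNotice:
    data_items: list            # e.g. ["name", "mobile number", "PAN"]
    purposes: list              # specific purposes, not "to improve our services"
    withdrawal_route: str       # e.g. "Settings → Privacy" or a privacy email
    grievance_route: str        # usable channel for complaints
    board_route: str            # statutory escalation text for the Board
    languages: list = field(default_factory=lambda: ["English"])

# Hypothetical blocklist of wording that would fail the specificity test.
VAGUE_PURPOSES = {"to improve our services", "for business purposes"}

def missing_elements(notice: DPDPNotice) -> list:
    """Return the mandatory notice elements that are absent or too vague."""
    gaps = []
    if not notice.data_items or not notice.purposes:
        gaps.append("data + purpose")
    if any(p.lower() in VAGUE_PURPOSES for p in notice.purposes):
        gaps.append("specific purpose (vague wording)")
    if not notice.withdrawal_route or not notice.grievance_route:
        gaps.append("rights/withdrawal route")
    if not notice.board_route:
        gaps.append("Board complaint route")
    if not notice.languages:
        gaps.append("language option")
    return gaps
```

A check like this could gate the consent screen in CI, so a notice missing a mandatory element never ships.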

  • Mateusz Kupiec, FIP, CIPP/E, CIPM

    Institute of Law Studies, Polish Academy of Sciences || Privacy Lawyer at Traple Konarski Podrecki & Partners || DPO || I know GDPR. And what is your superpower?🤖

    ‼️ The European Data Protection Board has just published its draft Guidelines 3/2025 (version 1.0) on the interplay between the #DSA and the #GDPR.

    📍 The guidelines stress that the DSA often refers to GDPR concepts such as profiling, special categories of data, or transparency obligations. The EDPB outlines several areas of interplay. Content moderation under the DSA inevitably involves processing personal data, which must be based on lawful grounds under the GDPR. Notice-and-action mechanisms, complaint handling, and account suspensions also require strict adherence to data minimisation and transparency principles. On advertising, the prohibition in Article 26 DSA on using special categories of data for targeting complements GDPR restrictions, reinforcing a layered protection regime. Recommender systems, meanwhile, raise risks of automated decision-making that could trigger Article 22 GDPR.

    📍 For me, the most striking part of the guidelines concerns minors. Article 28 DSA obliges providers of online platforms accessible to minors to ensure a high level of privacy, safety, and security. The EDPB clarifies that these duties can justify certain data processing under Article 6(1)(c) GDPR, but only if strictly necessary and proportionate. Crucially, Article 28(3) DSA specifies that platforms are not required to process additional personal data simply to establish whether a user is a minor.

    📍 The guidelines strongly discourage intrusive age assurance methods such as scanning government IDs or permanently storing age data. Instead, platforms should apply privacy-preserving approaches, for example by confirming only that a user meets a threshold age without revealing their exact identity or date of birth. The EDPB emphasises that age assurance must be risk-based: stricter methods may be justified if the platform exposes children to high risks (e.g. harmful or manipulative content), while lighter-touch measures may suffice where risks are low.

    📍 Another important clarification is that providers must not nudge minors into choosing recommender systems based on profiling. Non-profiling options should be presented neutrally, and once selected, the platform should not continue processing data for profiling in the background. Similarly, advertisements cannot be targeted at minors on the basis of profiling, even if other GDPR grounds might otherwise permit such processing.

    📍 The guidelines also recognise that protecting children online must go beyond technical measures. Providers should adapt their services to address risks to minors' wellbeing, including exposure to harmful content, pressure from personalised recommendations, and misuse of sensitive data. At the same time, measures must be designed with the GDPR principles of minimisation, proportionality, and privacy by design and by default firmly in mind. #privacy #rodo #ecommerce #platforms
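The threshold-age approach the EDPB describes can be illustrated in a few lines: the check yields only a yes/no signal and never stores or reveals the exact date of birth. This is a sketch of the idea, not a recommended implementation; the function name and calling convention are assumptions.

```python
from datetime import date

# Privacy-preserving age assurance sketch: return only a boolean,
# never retain or expose the user's date of birth.
def meets_age_threshold(birth_date: date, threshold_years: int, today: date) -> bool:
    """True iff the user is at least `threshold_years` old on `today`."""
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return age >= threshold_years
```

In a real deployment the birth date would stay on the user's side (or with an attestation provider), with only the boolean crossing to the platform.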

  • Andreea Lisievici Nevin

    🇪🇺 Privacy & Tech Lawyer⚡ Mentoring and training privacy professionals ⚡ Lecturer @ Maastricht Uni⚡ Certified DPO (ECPC-B), CIPP/E, CIPM, FIP

    🔐 𝐅𝐢𝐧𝐚𝐥 𝐄𝐃𝐏𝐁 𝐆𝐮𝐢𝐝𝐞𝐥𝐢𝐧𝐞𝐬: 𝐇𝐨𝐰 𝐭𝐨 𝐇𝐚𝐧𝐝𝐥𝐞 𝐃𝐚𝐭𝐚 𝐑𝐞𝐪𝐮𝐞𝐬𝐭𝐬 𝐟𝐫𝐨𝐦 𝐍𝐨𝐧-𝐄𝐔 𝐀𝐮𝐭𝐡𝐨𝐫𝐢𝐭𝐢𝐞𝐬

    The EDPB has recently published its final guidelines on Article 48 GDPR – a provision that's often overlooked but absolutely critical for companies receiving law enforcement or government data access requests from outside the EU. Here's what you need to know.

    Article 48 GDPR limits the ability of non-EU authorities (e.g. U.S., Chinese, or Indian regulators or courts, but also international arbitration courts established outside the EEA!) to directly compel EU-based organisations to hand over personal data.

    📌 Key principle: foreign decisions aren't enforceable in the EU. Requests from third country authorities do not constitute a valid legal basis for transfer unless:
    ✔️ there's an international agreement in place (e.g. an MLA treaty); or
    ✔️ a legal basis under Chapter V applies and proper safeguards are met.

    📌 No agreement? Not an excuse to transfer. Only in exceptional, case-by-case circumstances can organisations consider alternatives (like Art. 49 derogations). And even then, the threshold is very high.

    🧩 The finalised guidelines clarify some grey areas that often trip up global companies:

    ➤ Processor requests (new). If you're a processor (i.e., handling data on behalf of a client/controller) and a third country authority comes knocking, you can't decide to disclose the data. You must inform the controller and follow their instructions. The controller is responsible for determining whether any legal basis for transfer exists. If you transfer without instruction, it's a GDPR breach.

    ➤ Parent company scenarios (new). If a parent company in a third country receives a legal request and then asks its EU-based subsidiary to hand over data, this is still considered a transfer under GDPR. That means all the Chapter V rules apply: just because the request comes from your HQ doesn't mean you're exempt.

    ➤ Legally binding ≠ enforceable in the EU (new). Even if the third country decision is "binding" under its own law (e.g., a U.S. subpoena), that doesn't mean it has legal effect in the EU. Transfers must comply with EU standards of enforceability, including judicial review, fundamental rights protection, and the ability to challenge the order.

    ➤ Derogations only in rare cases. Art. 49 derogations (e.g., legal claims) remain narrow: occasional, strictly necessary, minimal data. Don't assume a foreign order equals necessity.

    Version 2.0 also introduces an Annex with "Practical steps" to guide controllers and processors through the thinking process – you can use this to refine internal procedures for handling third-country requests.

    👉 𝐓𝐚𝐤𝐞𝐚𝐰𝐚𝐲: Foreign orders don't trump GDPR, and the derogation for defending legal claims does not mean you can automatically comply with foreign court orders for broad discovery. #GDPR #DataTransfers #Article48 #EDPB #PrivacyCompliance #SchremsII #ThirdCountryRequests
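The decision sequence for a third-country request might be sketched roughly as follows. The labels, ordering, and function name are illustrative assumptions for an internal procedure, not the EDPB Annex's wording.

```python
# Hypothetical sketch of the Article 48 decision logic described above.
# Real assessments require legal review; this only shows the ordering.
def assess_third_country_request(
    has_international_agreement: bool,      # e.g. an MLA treaty applies
    chapter_v_basis_with_safeguards: bool,  # SCCs, BCRs, adequacy, etc.
    art_49_derogation_applies: bool,        # exceptional, case-by-case only
) -> str:
    """Return a rough disposition for a non-EU authority's data request."""
    if has_international_agreement:
        return "transfer may proceed under the international agreement"
    if chapter_v_basis_with_safeguards:
        return "transfer may proceed under Chapter V with safeguards"
    if art_49_derogation_applies:
        return "exceptional transfer: document necessity, minimise data"
    return "refuse: foreign decision alone is not a valid legal basis"
```

The order matters: derogations are checked last precisely because the guidelines treat them as a narrow fallback, never the default route.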

  • Priyanka Sinha

    Contract Risk & Governance | Data Privacy | AI Governance | 10+ years building governance systems that scale without slowing growth | IAPP Chapter Chair, Singapore | Speaker IAPP/ISACA 2026

    Last month at an IAPP privacy webinar, the discussion centered on how data privacy and AI truly align. As the panel unpacked real-world audits and case studies, I discovered a set of hidden GDPR articles that quietly sync with the way modern AI actually works. That's when it hit me → the toughest GDPR tests for AI often come from five quieter articles that regulators rely on to measure real compliance.

    Here are the five that every AI user should have on their risk radar:

    💡 GDPR guards the data. The EU AI Act governs the AI system itself. Most teams forget you need to pass both tests.

    Rule 1 → Article 22: Automated Decision-Making & Profiling
    Yes, this is the human-in-the-loop safeguard. If your model makes a decision solely by algorithm with legal or similarly significant impact (credit, hiring, healthcare, insurance), users have the right to:
    ↳ Opt out of the automated decision
    ↳ Demand a human review before the outcome stands
    ➡️ Designing that review pathway isn't optional; it's architecture.

    Rule 2 → Articles 13 & 14: Radical Transparency
    These require clear, intelligible notices describing:
    ↳ What data you collect
    ↳ Why you process it
    ↳ Your lawful basis
    This applies even if data is obtained indirectly (e.g., scraped training sets).
    ➡️ Notices must be written in plain language, not legalese, and shown at the point of collection.

    Rule 3 → Article 30: Records of Processing (RoPA)
    Your single source of truth:
    ↳ Every dataset
    ↳ Purpose of processing
    ↳ Categories of subjects
    ↳ Retention periods
    ↳ Transfers
    ➡️ Supervisory authorities usually ask for this first. Keep it audit-ready.

    Rule 4 → Articles 44–49: Cross-Border Data Transfers
    Using global cloud platforms or U.S.-based APIs? These provisions dictate when you need:
    ↳ Standard Contractual Clauses (SCCs)
    ↳ Binding Corporate Rules (BCRs)
    ↳ Adequacy decisions
    ➡️ Essential for lawful data flows post-Schrems II.

    Rule 5 → Articles 37–39: Data Protection Officer (DPO)
    Triggered by:
    ↳ Large-scale monitoring
    ↳ Special-category data processing
    This isn't ceremonial. A DPO is:
    ↳ The operational bridge between engineering, governance, and regulators
    ↳ A trust signal for investors and enterprise clients

    💡 Takeaway
    GDPR isn't just Europe's privacy law; it's the architectural blueprint for AI governance worldwide. Before you deploy another model or ship the next feature, stress-test your design against these five "quiet" articles. #GDPR #ResponsibleAI #HumanInTheLoop #DataPrivacy #AICompliance #RiskManagement #IAPP
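The record-of-processing described under Rule 3 can be pictured as a small structure. The field names and the `audit_row` helper are illustrative assumptions mapped to the bullets in the post, not Article 30's exact wording.

```python
from dataclasses import dataclass

# Hypothetical RoPA entry; Article 30 GDPR lists the required contents,
# but these field names and the flattened row are illustrative.
@dataclass
class RopaEntry:
    dataset: str
    purpose: str
    data_subject_categories: list
    retention_period: str
    transfers: list  # third-country recipients; empty if none

    def audit_row(self) -> dict:
        """Flatten the entry into an audit-ready row for a regulator export."""
        return {
            "dataset": self.dataset,
            "purpose": self.purpose,
            "subjects": "; ".join(self.data_subject_categories),
            "retention": self.retention_period,
            "transfers": "; ".join(self.transfers) or "none",
        }
```

Keeping entries in a structured form like this makes the "authorities ask for this first" moment a simple export rather than a scramble.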

  • Vadym Honcharenko

    Privacy Engineer @ Google | AIGP, CIPP/E/US/C, CIPM/T, CDPSE, CDPO | LLB | MSc Cybersecurity | EDPB Pool of Experts | ex-Grammarly

    📕 The Swedish Data Protection Authority published a report describing how the GDPR can be applied to generative AI. 🤔 Anything new? Yes ❗

    1️⃣ When describing how controllers can address data subjects' rights when using gen-AI, the regulator divided its guidance into "personal data in input and output" and "personal data in AI models".

    👉 Personal data in input and output:
    • Right to information: The regulator goes deep here and describes how to address the right to information if you, for example, use a retrieval-augmented generation (RAG) solution (see more about it below). For RAG to work, you need to share the data with a third-party service provider, so you should describe the data recipients.
    • Right to access: In addition to the confidentiality or professional secrecy exceptions to the right of access, the regulator points to personal data in "continuous text that has not yet reached its final form at the time of the request" and "personal data contained in memos or similar documents" if they have not been shared with third parties. Prompts and generated output containing personal data can therefore be covered by an exception to the right of access, as they can be considered running text that has not yet been finalised, or a memo that was not shared with a third party. Still, if the output data is recorded or forwarded, it is in scope unless subject to confidentiality or professional secrecy.

    👉 Personal data in AI models:
    • Right to information: The regulator says that businesses do not have to voluntarily disclose information about using a generative AI system to all individuals whose personal data may have been used as training data to develop the system's base model, as it would entail a disproportionate effort. What they should do is provide information to the public about which generative AI systems or base models are used, for example by including such information in a publicly available privacy policy or AI policy.
    • Right to access, delete, and rectify: The DPA agrees that addressing data subjects' rights in AI models is complicated. Still, you should let an individual know why their request cannot be addressed, or, if you use a third-party AI system, forward the request to the service provider.

    2️⃣ Technical security measures
    • The regulator describes solutions like probability thresholds and temperature settings, PETs, etc.
    • The DPA also refers to retrieval-augmented generation (RAG), which is basically a combination of the base model's generative capabilities with external or business-specific information from selected sources. This allows the system to access more up-to-date information than is available in its original training data, generate higher-quality answers, and answer questions based on the business's data. Still, such a solution and its data processing must have a legal basis and proper data accessibility rules. #AI #privacy #GDPR
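The RAG pattern the Swedish DPA discusses can be sketched in a few lines. Everything here is illustrative (the retriever is naive keyword overlap and the model call is a stub); the point is that both the user's prompt and the retrieved business records leave the organisation for the third-party provider, which is why the provider should be named as a data recipient.

```python
# Minimal RAG sketch; all names and data are illustrative.
DOCUMENTS = [
    "Refund policy: customers may return goods within 30 days.",
    "Support hours: weekdays 09:00-17:00 CET.",
]

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Rank documents by naive keyword overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def call_model_stub(prompt: str) -> str:
    """Stand-in for a third-party generative model API."""
    return f"[generated answer based on {len(prompt)} chars of context]"

def answer(query: str) -> str:
    context = "\n".join(retrieve(query, DOCUMENTS))
    # Everything below this line crosses to the model provider:
    # the prompt carries both the question and the retrieved records.
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return call_model_stub(prompt)
```

A production retriever would use embeddings rather than keyword overlap, but the data-flow point (and hence the recipient disclosure) is the same.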

  • Sam Castic

    Privacy Leader and Lawyer; Partner @ Hintze Law

    The Oregon Department of Justice released new guidance on legal requirements when using AI. Here are the key privacy considerations, and four steps for companies to stay in line with Oregon privacy law. ⤵️

    The guidance details the AG's views on how uses of personal data in connection with AI, or to train AI models, trigger obligations under the Oregon Consumer Privacy Act, including:

    🔸 Privacy Notices. Companies must disclose in their privacy notices when personal data is used to train AI systems.
    🔸 Consent. Updated privacy policies disclosing uses of personal data for AI training cannot justify the use of previously collected personal data for AI training; affirmative consent must be obtained.
    🔸 Revoking Consent. Where consent is provided to use personal data for AI training, there must be a way to withdraw consent, and processing of that personal data must end within 15 days.
    🔸 Sensitive Data. Explicit consent must be obtained before sensitive personal data is used to develop or train AI systems.
    🔸 Training Datasets. Developers purchasing or using third-party personal data sets for model training may be personal data controllers, with all the obligations that data controllers have under the law.
    🔸 Opt-Out Rights. Consumers have the right to opt out of AI uses for certain decisions like housing, education, or lending.
    🔸 Deletion. Consumer #PersonalData deletion rights need to be respected when using AI models.
    🔸 Assessments. Using personal data in connection with AI models, or processing it in connection with AI models that involve profiling or other activities with a heightened risk of harm, triggers data protection assessment requirements.

    The guidance also highlights a number of scenarios where sales practices using AI, or misrepresentations due to AI use, can violate the Unlawful Trade Practices Act.

    Here are a few steps to help stay on top of #privacy requirements under Oregon law and this guidance:
    1️⃣ Confirm whether your organization or its vendors train #ArtificialIntelligence solutions on personal data.
    2️⃣ Validate that your organization's privacy notice discloses AI training practices.
    3️⃣ Make sure organizational individual rights processes are scoped for personal data used in AI training.
    4️⃣ Set assessment protocols where required to conduct and document data protection assessments that address the requirements under Oregon and other states' laws, and that are maintained in a format that can be provided to regulators.
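The 15-day window for ending processing after consent withdrawal reduces to simple date arithmetic, which is worth encoding in the rights-handling pipeline rather than tracking by hand. The function names below are illustrative assumptions.

```python
from datetime import date, timedelta

# Sketch of the Oregon guidance point that processing of the data must
# end within 15 days of consent withdrawal; names are illustrative.
PROCESSING_STOP_WINDOW_DAYS = 15

def processing_deadline(withdrawal_date: date) -> date:
    """Latest date by which AI-training processing of the data must end."""
    return withdrawal_date + timedelta(days=PROCESSING_STOP_WINDOW_DAYS)

def is_overdue(withdrawal_date: date, today: date) -> bool:
    """True if the stop-processing deadline has already passed."""
    return today > processing_deadline(withdrawal_date)
```

A nightly job flagging overdue withdrawals gives step 3️⃣ above a concrete enforcement hook.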

  • Martin Ebers

    Robotics & AI Law Society (RAILS)

    European Data Protection Board – European Commission: Joint Guidelines on the Interplay between the DMA and the GDPR

    The Digital Markets Act (DMA) and the General Data Protection Regulation (GDPR) pursue different purposes and objectives and have different scopes. The GDPR aims to protect natural persons with regard to the processing of personal data and to ensure the free flow of personal data in the Union, covering all data controllers and processors. The DMA, by contrast, aims to tackle unfair practices and their potential harmful effects for business users by laying down harmonised rules applicable to gatekeepers, ensuring contestable and fair markets in the digital sector across the Union, to the benefit of both business users and end users.

    The DMA and GDPR are complementary in terms of goals and in terms of the protections provided to individuals. Compliance with obligations under the GDPR goes together with the objective of addressing gatekeepers' data-driven advantages that the DMA, among other objectives, aims to tackle. These Guidelines on the interplay between the DMA and the GDPR aim to ensure that the two regulations are interpreted and applied in a compatible manner, enabling a coherent application that achieves their respective objectives, in line with relevant CJEU case law.

    Article 5(2) DMA prohibits gatekeepers from carrying out certain processing operations without end users' valid consent, within the meaning of the GDPR. The Guidelines explain the elements that gatekeepers should consider in order to comply with the requirements of specific choice and valid consent under Article 5(2) DMA and the GDPR. The Guidelines also describe circumstances where consent may not be required by either the GDPR or the DMA, and the conditions under which other legal bases can be relied upon (e.g. the possibility of relying on Article 6(1), point (c) GDPR when processing personal data for security purposes, provided certain conditions are met).

    Article 6(4) DMA requires gatekeepers providing an operating system CPS to (inter alia) allow and technically enable the installation and effective use of third-party software applications or software application stores on their operating system. The Guidelines recall that gatekeepers should ensure that the measures they implement in compliance with Article 6(4) DMA also comply with applicable laws, including the GDPR and the ePrivacy Directive. However, when selecting appropriate measures to comply with obligations stemming from the GDPR, gatekeepers should select the measures that least adversely affect the pursuit of the objectives of Article 6(4) DMA, provided that they remain effective in ensuring compliance with the GDPR, also taking into account that operating system providers are generally considered separate and independent controllers from the providers of apps or app stores.

  • The EDPS - European Data Protection Supervisor has issued a new "Guidance for Risk Management of Artificial Intelligence Systems." The document provides a framework for EU institutions acting as data controllers to identify and mitigate data protection risks arising from the development, procurement, and deployment of AI systems that process personal data, focusing on fairness, accuracy, data minimization, security, and data subjects' rights. Based on ISO 31000:2018, the guidance structures the process into risk identification, analysis, evaluation, and treatment, emphasizing tailored assessments for each AI use case.

    Some highlights and recommendations include:
    - Accountability: AI systems must be designed with clear documentation of risk decisions, technical justifications, and evidence of compliance across all lifecycle phases. Controllers are responsible for demonstrating that AI risks are identified, monitored, and mitigated.
    - Explainability: Models must be interpretable by design, with outputs traceable to underlying logic and datasets. Explainability is essential for individuals to understand AI-assisted decisions and for authorities to assess compliance.
    - Fairness and bias control: Organizations should identify and address risks of discrimination or unfair treatment in model training, testing, and deployment. This includes curating balanced datasets, defining fairness metrics, and auditing results regularly.
    - Accuracy and data quality: AI must rely on trustworthy, updated, and relevant data.
    - Data minimization: The use of personal data in AI should be limited to what is strictly necessary. Synthetic, anonymized, or aggregated data should be preferred wherever feasible.
    - Security and resilience: AI systems should be secured against data leakage, model inversion, prompt injection, and other attacks that could compromise personal data. Regular testing and red teaming are recommended.
    - Human oversight: Meaningful human involvement must be ensured in decision-making processes, especially where AI systems may significantly affect individuals' rights. Oversight mechanisms should be explicit, documented, and operational.
    - Continuous monitoring: Risk management is a recurring obligation; institutions must review, test, and update controls to address changes in system performance, data quality, or threat exposure.
    - Procurement and third-party management: Contracts involving AI tools or services should include explicit privacy and security obligations, audit rights, and evidence of upstream data protection compliance.

    The guidance establishes a practical benchmark for embedding data protection into AI governance, emphasizing transparency, proportionality, and accountability as the foundation of lawful and trustworthy AI systems.
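The ISO 31000 loop the guidance adopts (identify, analyse, evaluate, treat) can be sketched as a minimal risk register. The scoring scheme, threshold, and example risks are assumptions for illustration, not part of the EDPS guidance.

```python
# Illustrative ISO 31000-style register; scoring and threshold are assumed.
def analyse(risk: dict) -> int:
    """Analysis step: score a risk as likelihood x impact (each 1-5)."""
    return risk["likelihood"] * risk["impact"]

def evaluate(risk: dict, treat_threshold: int = 8) -> str:
    """Evaluation step: decide whether the scored risk needs treatment."""
    return "treat" if analyse(risk) >= treat_threshold else "accept"

def treatment_plan(risks: list) -> list:
    """Treatment step: return risks requiring mitigation, worst first."""
    needing = [r for r in risks if evaluate(r) == "treat"]
    return sorted(needing, key=analyse, reverse=True)

# Identification step: each AI use case gets its own tailored register.
REGISTER = [
    {"name": "model inversion leaks training data", "likelihood": 2, "impact": 5},
    {"name": "biased outputs in hiring decisions", "likelihood": 4, "impact": 4},
    {"name": "stale data degrades accuracy", "likelihood": 3, "impact": 2},
]
```

Re-running the plan on a schedule mirrors the guidance's continuous-monitoring point: the register is reviewed, not filed away.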

  • Ken Priore

    Deputy General Counsel - Product, Engineering, IP & Partner | Driving Ethical Innovation at Scale

    🌟 The European Data Protection Board (EDPB) has released its guidance on AI models and personal data processing! If you're navigating the complexities of AI development, this is a must-read. 🚀

    Key highlights:
    🔒 Anonymity Redefined: AI models aren't automatically anonymous. The bar is high: data must be irreversibly anonymized, ensuring extraction is highly unlikely.
    ⚖️ Legitimate Interest: A viable legal basis, but it requires rigorous necessity tests and rights balancing. No shortcuts here!
    🌐 Web Scraping under the Spotlight: Safeguards and opt-out mechanisms are essential when scraping public data for training AI models.
    🚫 Unlawful Data Use: Mishandling personal data during training can undermine the lawfulness of the model's deployment unless anonymized appropriately.
    🪞 Transparency Matters: Clear, accessible communication with data subjects is non-negotiable.

    💡 Why this matters: The EDPB underscores the need for privacy by design and accountability. AI innovation must align with GDPR principles to ensure trust and compliance. Whether you're an in-house counsel, compliance professional, or innovator in AI, these guidelines are a wake-up call for responsible AI development. Stay ahead by embedding these principles into your practices. 💼✨ #AI #GDPR #Privacy #DataProtection #ArtificialIntelligence #EDPB #LegalTech #Compliance #Innovation

  • Priyadarshi Prasad

    VP/GM - AI Data Infrastructure

    On October 11, 2023, the French Data Protection Authority (the "CNIL") published a new set of guidelines addressing the research and development of AI systems from a data protection perspective (the "Guidelines"). In the Guidelines, the CNIL confirms the compatibility of the EU General Data Protection Regulation ("GDPR") with AI research and development.

    The Guidelines are divided into seven "AI how-to sheets" covering:
    (1) determining the applicable legal regime (e.g., the GDPR or the Law Enforcement Directive);
    (2) adequately defining the purpose of processing;
    (3) defining the role (e.g., controller, processor, or joint controller) of AI system providers;
    (4) defining the legal basis and implementing necessary safeguards to ensure the lawfulness of the data processing;
    (5) drafting a data protection impact assessment ("DPIA") where necessary;
    (6) adequately considering data protection in the AI system design choices; and
    (7) implementing the principle of data protection by design in the collection of data and adequately managing data after collection.

    Noteworthy takeaways from the Guidelines include:
    - In line with the GDPR, the purpose of the development of an AI system must be specific, explicit, and legitimate. The CNIL clarifies that where the operational use of AI systems in the deployment phase is unique and precisely identified from the development stage, the processing operations carried out in both phases pursue, in principle, a single overall purpose.
    - Consent, legitimate interests, contract performance, and public interest may all theoretically serve as legal bases for the development of AI systems. Controllers must carefully assess the most adequate legal basis for their specific case.
    - DPIAs carried out to address the processing of data for the development of AI systems must address specific AI risks, such as the risk of producing false content about a real person or the risks associated with known attacks specific to AI systems (such as data poisoning, backdoor insertion, or model inversion).
    - Data minimization and data protection measures implemented during data collection may become obsolete over time and must be continuously monitored and updated when required.
    - Re-using datasets, particularly those publicly available on the Internet, is possible to train AI systems, provided that the data was lawfully collected and the purpose of re-use is compatible with the original collection purpose.

    The CNIL considers AI to be a priority topic. It has set up a dedicated AI department, launched an action plan to clarify the rules and support innovation in this field, and introduced two support programs for French AI players.

    What do you think about the CNIL's Guidelines on AI development and data protection? #France #DPA #dataprotection #ai
