This new white paper by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), titled "Rethinking Privacy in the AI Era," addresses the intersection of data privacy and AI development, highlighting the challenges and proposing solutions for mitigating privacy risks. It outlines the current data protection landscape, including the Fair Information Practice Principles (FIPs), the GDPR, and U.S. state privacy laws, and discusses the distinction between predictive and generative AI and its regulatory implications. The paper argues that AI's reliance on extensive data collection presents unique privacy risks at both individual and societal levels, and that existing laws are inadequate for the emerging challenges posed by AI systems: they neither address the shortcomings of the FIPs framework nor concentrate adequately on the comprehensive data governance measures needed to regulate the data used in AI development.

According to the paper, FIPs are outdated and ill-suited to modern data and AI complexities because:
- They do not address the power imbalance between data collectors and individuals.
- They fail to enforce data minimization and purpose limitation effectively.
- They place too much responsibility on individuals for privacy management.
- They allow data collection by default, putting the onus on individuals to opt out.
- They focus on procedural rather than substantive protections.
- They struggle with the concepts of consent and legitimate interest, complicating privacy management.

The paper emphasizes the need for new regulatory approaches that go beyond current privacy legislation to effectively manage the risks associated with AI-driven data acquisition and processing, and suggests three key strategies to mitigate the privacy harms of AI:
1. Denormalize data collection by default: Shift from opt-out to opt-in data collection models to enable true data minimization. This approach emphasizes "privacy by default" and the need for technical standards and infrastructure that make meaningful consent mechanisms possible.
2. Focus on the AI data supply chain: Enhance privacy and data protection by ensuring dataset transparency and accountability throughout the entire lifecycle of data, including regulatory frameworks that address data privacy comprehensively across the data supply chain.
3. Flip the script on personal data management: Encourage the development of new governance mechanisms and technical infrastructure, such as data intermediaries and data permissioning systems, to automate and support the exercise of individual data rights and preferences. This strategy aims to empower individuals by making it easier to manage and control their personal data in the context of AI.

By Dr. Jennifer King and Caroline Meinhardt. Link: https://lnkd.in/dniktn3V
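The "data permissioning systems" in the third strategy can be sketched in miniature. Purely as an illustration (the registry design, user IDs, and purpose names are hypothetical, not from the paper), an opt-in-by-default permission check might look like:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    """Opt-in by default: if no grant is recorded, processing is denied."""
    grants: dict = field(default_factory=dict)  # user_id -> set of permitted purposes

    def grant(self, user_id: str, purpose: str) -> None:
        self.grants.setdefault(user_id, set()).add(purpose)

    def revoke(self, user_id: str, purpose: str) -> None:
        self.grants.get(user_id, set()).discard(purpose)

    def is_permitted(self, user_id: str, purpose: str) -> bool:
        # Default is denial -- the inverse of opt-out collection.
        return purpose in self.grants.get(user_id, set())

registry = ConsentRegistry()
registry.grant("u1", "analytics")
assert registry.is_permitted("u1", "analytics")
assert not registry.is_permitted("u1", "ai_training")  # never granted -> denied
```

The design choice doing the work here is the default: absence of a record means "no", which is what "privacy by default" asks for, whereas opt-out systems treat absence as "yes".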
Privacy Law Considerations for Cognitive Data
Summary
Privacy law considerations for cognitive data focus on the rules and responsibilities that apply when collecting, processing, or inferring sensitive information about people's thoughts, emotions, and behaviors using AI and neurotechnologies. As these systems advance, laws must protect mental privacy and personal identity and ensure transparent use of data, since traditional privacy frameworks struggle to address these emerging risks.
- Prioritize mental privacy: Safeguard individuals' thoughts and feelings by limiting unauthorized access and manipulation of cognitive or neurodata, and ensure data collection happens with meaningful consent.
- Clarify data purposes: Clearly communicate why cognitive and personal information is being collected, how it will be used, and offer straightforward ways for people to control their data or opt out.
- Strengthen data governance: Build transparent data supply chains and implement technical safeguards so organizations can track, manage, and protect cognitive data throughout its lifecycle.
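The lifecycle-tracking idea in the last bullet can be sketched as an append-only lineage log: every collection, transformation, and disclosure of a dataset is recorded so it can later be audited. A minimal illustration (dataset names, actors, and event actions are hypothetical):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageEvent:
    dataset: str
    action: str   # e.g. "collected", "transformed", "shared", "deleted"
    actor: str
    at: datetime

@dataclass
class DataLineage:
    """Append-only record of what happened to a dataset across its lifecycle."""
    events: list = field(default_factory=list)

    def record(self, dataset: str, action: str, actor: str) -> None:
        self.events.append(
            LineageEvent(dataset, action, actor, datetime.now(timezone.utc))
        )

    def history(self, dataset: str) -> list:
        # Full audit trail for one dataset, in the order events occurred.
        return [e for e in self.events if e.dataset == dataset]

log = DataLineage()
log.record("eeg_session_042", "collected", "headset_app")
log.record("eeg_session_042", "transformed", "feature_extractor")
log.record("eeg_session_042", "shared", "research_partner")
print([e.action for e in log.history("eeg_session_042")])
# ['collected', 'transformed', 'shared']
```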
Chile is a country I’ve always wanted to visit, but haven’t yet. Between the fantastic food/wine, the amazing stargazing in the Atacama Desert, and the wonderful Chileans, what’s not to love? Then I discovered Chile was the first country to legally protect “neuro-rights.” Specifically, it passed a bill aimed at safeguarding mental privacy, personal identity, and free will while providing equitable access to neurotechnologies.

👉 Mental privacy ensures that thoughts and feelings are protected from unauthorized access and manipulation. Collecting neurodata on users’ emotional states without their knowledge or consent and then selling this data to advertisers for targeted marketing would be prohibited under this new law. This is exactly the worry around Google’s acquisition of Fitbit: the biometric data that Fitbit collects could be combined with the digital analytics data that Google already has to manipulate consumers with advertising or on e-commerce sites.

👉 The right to personal identity protects against alterations to a person’s mental state that could change their personality or identity. Commercial brain stimulation devices designed to enhance mood, for instance, that unintentionally or intentionally changed a person’s core personality traits, such as making them more aggressive or docile without their consent, would violate this principle.

👉 Free will safeguards individuals from neurotechnological interventions that could manipulate their decisions and actions. A company developing neuro-marketing techniques would need to ensure that its methods are transparent and non-coercive, allowing individuals to make purchasing decisions without undue influence. For example, using eye tracking to follow visual attention and serving pop-up inducements (“10% off if you buy now!”) based on eye movements would violate the free will safeguards.
👉 Equitable access ensures that advancements in neurotechnologies are accessible to all, preventing socio-economic disparities in cognitive enhancements or treatments. The aim here is to ensure that neurotechnological treatments for mental health issues are available to all citizens, regardless of their socio-economic status, and to prevent wealthy individuals from having sole access to neuro-enhancements.

Finally, Chile's constitution now explicitly recognizes and protects neurorights. I’ve written about neuro-ethics many times, but this is the first legislation I am aware of that tries to embody ethical principles in the regulation of neuro-technologies. If you think this sounds like science fiction, rest assured it is not. I’ll add a link to a talk I gave in 2019 that provides several real-world examples: https://lnkd.in/ehCgvbRB

What do you think? Is Chile on the right track? Should the UN's human rights framework include a right of cognitive liberty? Or is this neo-liberal nonsense and scaremongering designed to stifle innovation?
Ethics for our Brave New World
https://www.youtube.com/
-
If you are an organisation using AI or you are an AI developer, the Australian privacy regulator has just published some vital information about AI and your privacy obligations. Here is a summary of the two new guides for businesses published today by the Office of the Australian Information Commissioner (OAIC), which articulate how Australian privacy law applies to AI and set out the regulator’s expectations. The first guide is intended to help businesses comply with their privacy obligations when using commercially available AI products and to help them select an appropriate product. The second provides privacy guidance to developers using personal information to train generative AI models.

GUIDE ONE: Guidance on privacy and the use of commercially available AI products. Top five takeaways:
* Privacy obligations apply to any personal information input into an AI system, as well as to the output data generated by AI (where it contains personal information).
* Businesses should update their privacy policies and notifications with clear and transparent information about their use of AI.
* If AI systems are used to generate or infer personal information, including images, this is a collection of personal information and must comply with APP 3 (which deals with collection of personal information).
* If personal information is being input into an AI system, APP 6 requires entities to use or disclose the information only for the primary purpose for which it was collected.
* As a matter of best practice, the OAIC recommends that organisations do not enter personal information, and particularly sensitive information, into publicly available generative AI tools.

GUIDE TWO: Guidance on privacy and developing and training generative AI models. Top five takeaways:
* Developers must take reasonable steps to ensure accuracy in generative AI models.
* Just because data is publicly available or otherwise accessible does not mean it can legally be used to train or fine-tune generative AI models or systems.
* Developers must take particular care with sensitive information, which generally requires consent to be collected.
* Where developers are seeking to use personal information they already hold for the purpose of training an AI model, and this was not a primary purpose of collection, they need to consider their privacy obligations carefully.
* Where a developer cannot clearly establish that a secondary use for an AI-related purpose was within reasonable expectations and related to a primary purpose, they should, to avoid regulatory risk, seek consent for that use and/or offer individuals a meaningful and informed ability to opt out of such a use.

https://lnkd.in/gX_FrtS9
-
𝐖𝐡𝐚𝐭 𝐚𝐫𝐞 𝐬𝐨𝐦𝐞 𝐞𝐱𝐚𝐦𝐩𝐥𝐞𝐬 𝐨𝐟 𝐰𝐡𝐞𝐫𝐞 𝐀𝐈 𝐦𝐢𝐠𝐡𝐭 𝐜𝐡𝐚𝐥𝐥𝐞𝐧𝐠𝐞 𝐀𝐮𝐬𝐭𝐫𝐚𝐥𝐢𝐚'𝐬 𝐩𝐫𝐢𝐯𝐚𝐜𝐲 𝐥𝐚𝐰? 🤔 It's an interesting thought, and one the OAIC is no doubt agonising over in preparing its eagerly awaited Privacy and AI guidelines. Here are a few I cooked up earlier:

𝐂𝐨𝐥𝐥𝐞𝐜𝐭𝐢𝐨𝐧 𝐑𝐞𝐪𝐮𝐢𝐫𝐞𝐦𝐞𝐧𝐭𝐬 - AI can infer and generate new PI from existing data (remember, accuracy doesn't matter). This could be indirect "collection" under the Privacy Act. Further, how do collection requirements apply in the context of AI training and inputting?
𝐔𝐬𝐞 𝐑𝐞𝐬𝐭𝐫𝐢𝐜𝐭𝐢𝐨𝐧𝐬 - Using historical data to train AI models is troublesome. It may be considered secondary use, which needs either consent (seems unfeasible) or to be directly linked to collection and reasonably expected by the individual. How do the permissions required to use PI in AI change as the system changes?
𝐒𝐭𝐨𝐫𝐚𝐠𝐞 𝐚𝐧𝐝 𝐑𝐞𝐭𝐞𝐧𝐭𝐢𝐨𝐧 - AI complicates traditional data storage, especially since its outputs and operation are unpredictable. How do you protect data and comply with APP 11 requirements in an AI 'black box' world? And what about data retention and deletion obligations?
𝐈𝐧𝐝𝐢𝐯𝐢𝐝𝐮𝐚𝐥 𝐑𝐢𝐠𝐡𝐭𝐬 - We don't have GDPR-style individual rights, but there are still rights to access and correct PI. How do you apply these when PI is processed by AI, especially if the data is altered in hard-to-trace ways?
𝐒𝐞𝐧𝐬𝐢𝐭𝐢𝐯𝐞 𝐈𝐧𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐨𝐧 - Handling sensitive info like health or biometric data becomes more complex. The law has strict requirements for sensitive info, and AI use demands clear guidance to assist compliance.
𝐓𝐫𝐚𝐧𝐬𝐩𝐚𝐫𝐞𝐧𝐜𝐲 - AI often operates as a "black box," making it difficult to understand how decisions are made. The Privacy Act emphasises openness and transparency, but AI challenges this principle.
𝐃𝐞-𝐢𝐝𝐞𝐧𝐭𝐢𝐟𝐢𝐜𝐚𝐭𝐢𝐨𝐧 - AI’s ability to re-identify anonymised data is a growing concern. Traditional methods might not be enough as AI becomes more sophisticated.
𝐂𝐫𝐨𝐬𝐬-𝐁𝐨𝐫𝐝𝐞𝐫 - AI development often involves international collaboration, raising questions about cross-border transfers. Cross-border data flow provisions may face pressure in ensuring PI protection outside Australia.
𝐃𝐚𝐭𝐚 𝐌𝐢𝐧𝐢𝐦𝐢𝐬𝐚𝐭𝐢𝐨𝐧 - AI models require vast datasets, conflicting with the principle of data minimisation. This could require a re-examination of what "reasonably necessary for, or directly related to, one or more of its functions or activities" means in the AI context.
𝐁𝐞𝐡𝐚𝐯𝐢𝐨𝐮𝐫𝐚𝐥 𝐀𝐧𝐚𝐥𝐲𝐭𝐢𝐜𝐬 - AI’s ability to profile and analyse behavioural patterns can lead to invasive insights, testing the Privacy Act's boundaries, especially when AI-generated profiles are used for targeted marketing or credit scoring.

Oh, and if you're using AI in Australia, a comprehensive Privacy Impact Assessment is crucial. 📋

#AI #Privacy #PrivacyLaw #ArtificialIntelligence #GDPR
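The de-identification concern can be made concrete. One standard measure of re-identification risk is k-anonymity: if a combination of quasi-identifiers (postcode, age band, gender, and so on) matches fewer than k records, those individuals are easier to single out. A minimal sketch (the records and quasi-identifier names are illustrative, not from any real dataset):

```python
from collections import Counter

def min_k_anonymity(records, quasi_identifiers):
    """Smallest group size over all quasi-identifier combinations in the data."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(combos.values())

records = [
    {"postcode": "2000", "age_band": "30-39", "gender": "F"},
    {"postcode": "2000", "age_band": "30-39", "gender": "F"},
    {"postcode": "3000", "age_band": "40-49", "gender": "M"},  # unique combination
]
k = min_k_anonymity(records, ["postcode", "age_band", "gender"])
print(k)  # a unique combination yields k = 1: that record is re-identifiable
```

A dataset can pass a check like this and still fall to an AI-driven linkage attack that brings in auxiliary data, which is exactly why the post flags traditional methods as potentially insufficient.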
-
Excited to share my latest article published by the IAPP, “Privacy by Proxy: Regulating Inferred Identities in AI Systems.” This piece explores how AI systems are increasingly making assumptions about people who never actually interact with them. These “inferred identities” raise complex questions for existing privacy laws that were built around direct data collection and consent. I dive into what this means for organizations and regulators, especially as more state privacy laws begin to address inferences and profiling. If you work in privacy, AI governance, or emerging tech policy, I’d love to hear your thoughts.
-
⚠️ Privacy Risks in AI Management: Lessons from Italy’s DeepSeek Ban ⚠️

Italy’s recent ban on #DeepSeek over privacy concerns underscores the need for organizations to integrate stronger data protection measures into their AI Management System (#AIMS), AI Impact Assessment (#AIIA), and AI Risk Assessment (#AIRA). Ensuring compliance with #ISO42001, #ISO42005 (DIS), #ISO23894, and #ISO27701 (DIS) guidelines is now more important than ever.

1. Strengthening AI Management Systems (AIMS) with Privacy Controls
🔑 Key Considerations:
🔸 ISO 42001 Clause 6.1.2 (AI Risk Assessment): Organizations must integrate privacy risk evaluations into their AI management framework.
🔸 ISO 42001 Clause 6.1.4 (AI System Impact Assessment): Requires assessing AI system risks, including personal data exposure and third-party data handling.
🔸 ISO 27701 Clause 5.2 (Privacy Policy): Calls for explicit privacy commitments in AI policies to ensure alignment with global data protection laws.
🪛 Implementation Example: Establish an AI Data Protection Policy that incorporates ISO 27701 guidelines and explicitly defines how AI models handle user data.

2. Enhancing AI Impact Assessments (AIIA) to Address Privacy Risks
🔑 Key Considerations:
🔸 ISO 42005 Clause 4.7 (Sensitive Use & Impact Thresholds): Mandates defining thresholds for AI systems handling personal data.
🔸 ISO 42005 Clause 5.8 (Potential AI System Harms & Benefits): Identifies risks of data misuse, profiling, and unauthorized access.
🔸 ISO 27701 Clause A.1.2.6 (Privacy Impact Assessment): Requires documenting how AI systems process personally identifiable information (#PII).
🪛 Implementation Example: Conduct a Privacy Impact Assessment (#PIA) during AI system design to evaluate data collection, retention policies, and user consent mechanisms.

3. Integrating AI Risk Assessments (AIRA) to Mitigate Regulatory Exposure
🔑 Key Considerations:
🔸 ISO 23894 Clause 6.4.2 (Risk Identification): Calls for AI models to identify and mitigate privacy risks tied to automated decision-making.
🔸 ISO 23894 Clause 6.4.4 (Risk Evaluation): Evaluates the consequences of noncompliance with regulations like #GDPR.
🔸 ISO 27701 Clause A.1.3.7 (Access, Correction, & Erasure): Ensures AI systems respect user rights to modify or delete their data.
🪛 Implementation Example: Establish compliance audits that review AI data handling practices against evolving regulatory standards.

➡️ Final Thoughts: Governance Can’t Wait
The DeepSeek ban is a clear warning that privacy safeguards in AIMS, AIIA, and AIRA aren’t optional. They’re essential for regulatory compliance, stakeholder trust, and business resilience.
🔑 Key actions:
◻️ Adopt AI privacy and governance frameworks (ISO 42001 & ISO 27701).
◻️ Conduct AI impact assessments to preempt regulatory concerns (ISO 42005).
◻️ Align risk assessments with global privacy laws (ISO 23894 & ISO 27701).
Privacy-first AI shouldn't be seen as just a cost of doing business; it’s your new competitive advantage.
-
No one announced it. But mental privacy just became a real compliance problem. This week, two unrelated events quietly snapped together. And once you see it, you can’t unsee it.

First: the tech. Researchers at Columbia University published BISC, a paper-thin, wireless brain interface that rests on the surface of the brain and streams neural signals externally at high bandwidth. https://lnkd.in/gS9bpPv9 No deep implants. No sci-fi surgery theater. Designed to be less invasive, removable, and scalable. Today it’s medical. That’s how every disruptive tech starts.

Second: the law. California already updated its privacy framework to create a new protected category called #NeuralData. Not biometric data. Not medical data. A separate class for signals generated by your nervous system. That’s not an accident. That’s a regulator looking ahead and saying: this is different, and it can’t be treated like clickstream data.

Now connect the dots. For years, organizations governed behavioral data. What you clicked. Where you went. What you bought. What’s emerging next is cognitive data. Attention. Stress. Cognitive load. Reaction timing. You don’t need to “read thoughts” to influence behavior. You just need to know when someone is distracted, overloaded, anxious, or primed to react.

That’s where things get uncomfortable. Because governance always lags capability.
🤔 We laughed about CISOs once. Then breaches happened.
🤔 We laughed about Chief AI Officers. Then models started making decisions.
Next up, half joking, half inevitable:
🤔 Chief Neural Data Officer.

Someone will have to answer:
🔸 Are we collecting neural or near-neural signals?
🔸 Are they inferred or directly measured?
🔸 Who owns them?
🔸 How long do we retain them?
🔸 Can they be subpoenaed?
🔸 Can they be sold?
🔸 Can they be abused?

The serious part, before this gets dismissed as cyberpunk hype: this is not about brain implants tomorrow. It’s about governance gaps today, and about the last mile of privacy.
The last frontier is not your inbox. It’s your internal state. Once that becomes data, there is no reset button. I am not sure if I really want this. ❗ If this feels “too futuristic” to worry about, that’s usually the moment risk shows up uninvited. 🔔 Follow Michael Reichstein for cybersecurity, leadership, and AI strategy ♻️ Useful? Share to help others, and join me on Substack for the unfiltered version: https://lnkd.in/gKDVq944 #Cybersecurity #Privacy #AI #Governance #BCI #NeuralData #CISO #DigitalRights #FutureOfWork
-
The Oregon Department of Justice released new guidance on legal requirements when using AI. Here are the key privacy considerations, and four steps for companies to stay in line with Oregon privacy law. ⤵️

The guidance details the AG's views on how uses of personal data in connection with AI or training AI models trigger obligations under the Oregon Consumer Privacy Act, including:
🔸 Privacy Notices. Companies must disclose in their privacy notices when personal data is used to train AI systems.
🔸 Consent. Updated privacy policies disclosing uses of personal data for AI training cannot justify the use of previously collected personal data for AI training; affirmative consent must be obtained.
🔸 Revoking Consent. Where consent is provided to use personal data for AI training, there must be a way to withdraw consent, and processing of that personal data must end within 15 days of withdrawal.
🔸 Sensitive Data. Explicit consent must be obtained before sensitive personal data is used to develop or train AI systems.
🔸 Training Datasets. Developers purchasing or using third-party personal data sets for model training may be personal data controllers, with all the obligations that data controllers have under the law.
🔸 Opt-Out Rights. Consumers have the right to opt out of AI uses for certain decisions like housing, education, or lending.
🔸 Deletion. Consumer #PersonalData deletion rights need to be respected when using AI models.
🔸 Assessments. Using personal data in connection with AI models, or processing it in connection with AI models that involve profiling or other activities with heightened risk of harm, triggers data protection assessment requirements.

The guidance also highlights a number of scenarios where sales practices using AI, or misrepresentations due to AI use, can violate the Unlawful Trade Practices Act.
Here are a few steps to help stay on top of #privacy requirements under Oregon law and this guidance:
1️⃣ Confirm whether your organization or its vendors train #ArtificialIntelligence solutions on personal data.
2️⃣ Validate that your organization's privacy notice discloses AI training practices.
3️⃣ Make sure organizational individual rights processes are scoped for personal data used in AI training.
4️⃣ Set assessment protocols where required to conduct and document data protection assessments that address the requirements under Oregon and other states' laws, and that are maintained in a format that can be provided to regulators.
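The 15-day cut-off after consent withdrawal lends itself to simple automation in a rights-management pipeline. A minimal sketch, assuming the window runs in calendar days from the withdrawal date (the function names are hypothetical, not from the guidance):

```python
from datetime import date, timedelta

REVOCATION_WINDOW_DAYS = 15  # per the Oregon guidance summarized above

def processing_deadline(revoked_on: date) -> date:
    """Date by which processing of the revoked personal data must have ended."""
    return revoked_on + timedelta(days=REVOCATION_WINDOW_DAYS)

def is_compliant(revoked_on: date, processing_stopped_on: date) -> bool:
    # Compliant only if processing stopped on or before the deadline.
    return processing_stopped_on <= processing_deadline(revoked_on)

assert processing_deadline(date(2025, 3, 1)) == date(2025, 3, 16)
assert is_compliant(date(2025, 3, 1), date(2025, 3, 10))
assert not is_compliant(date(2025, 3, 1), date(2025, 3, 20))
```

In practice a check like this would feed an alerting job that flags revocations approaching their deadline, so step 3️⃣ above has teeth rather than relying on manual tracking.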
-
The protection of mental privacy in the area of neuroscience - Societal, legal and ethical challenges

📢 This study was presented to the European Parliament's STOA Panel in April 2024 and published in July.

🎬 Neurotechnologies are essential for the recovery and preservation of physiological and mental health, and thus quality of life, among clinical patients. However, technological advances and research findings have led to these technologies being applied outside the clinical domain. The use and processing of personal data by such devices raises several concerns and entails a multitude of open questions on possible moral and psychological implications, safety hazards, and data security.

🔎 Following a broad analysis of societal, ethical and legal challenges related to the use of neurotechnologies, the report examines four use cases:
🔹 invasive neurotechnologies for patients with quadriplegia
🔹 non-invasive neurotechnologies for a healthy population
🔹 invasive neurotechnologies for a healthy population
🔹 non-invasive neurotechnologies for patients with Alzheimer's disease

📌 The report puts forward a set of orchestrated steps to change the state of play, in the form of the following recommendations:
🔸 investigate technology-centred risk evaluations to complement the risk evaluations in the AI Act
🔸 track public communication on neurotechnologies to promote fair communication on their benefits but also on their limitations and risks
🔸 discontinue formulations of neurorights at the level of human and fundamental rights and promote more specific and practically applicable legal formulations
🔸 fund research to fill gaps in the existing literature
🔸 support EU neurotechnology providers by implementing a legal basis and thereby preventing data distribution on non-European servers
🔸 investigate whether general standards for neurotechnological devices are sufficient or if new standards should be created

🔔 Interesting to highlight that the authors see a role for a European neurodata
space: "As most neurotechnology providers are situated outside the EU, the data of European citizens will mostly be processed outside the EU, which is problematic insofar as non-EU countries have different data security policies. Therefore, EU-based providers of neurotechnology should be supported by implementing a solid legal basis for a European neurodata space. Its design could follow the example of the European Health Data Space (EHDS), the regulation for which is currently being adopted. For neurotechnologies, the aim of such a data space would be to prevent the loss of valuable information on both neurotechnologies and European citizens. A possible alternative is the European Open Science Cloud (EOSC) portal or EU Node. This goal could also be achieved by actively promoting the development and use of EU neurotechnologies in science, research and industry." #neurotechnology #artificialintelligence #neurorights #privacy #EHDS #digitalhealth #healthdata