MAJOR AI LEGAL NEWS. The revised EU Product Liability Directive came into force yesterday, 8 December 2024. It represents a fundamental shift in how liability for AI systems and software is addressed. The Directive could directly affect organisations developing or using AI, which may need to reassess their contracts, policies, and operational approaches to liability management.

Under the new framework, AI system providers (treated as manufacturers under the legislation) are liable for defects in AI systems and software that cause harm, potentially including defects that emerge after deployment, such as harm linked to updates, upgrades, or the evolving behaviour of machine-learning systems. Organisations should also consider the liability implications of failing to ensure sufficient AI literacy among their staff, which becomes a requirement under the AI Act from 2 February 2025. AI training may now be a business imperative for some organisations.

The Directive's approach to defectiveness considers not only the moment a product is placed on the market but also whether the manufacturer retains control over it post-market, for example through updates or connected services. Manufacturers may therefore be held liable for defects that arise after deployment if they could reasonably have foreseen and mitigated the risks but failed to act. Organisations, particularly those providing software or AI systems, should review their ongoing compliance and risk management against these evolving safety expectations.

The Directive's coverage of potential liability for post-market defects has significant implications for contracts. Organisations should check whether their agreements with suppliers, integrators, and distributors include clear terms governing responsibility for defects. The legal test is whether the product provides the safety consumers are entitled to expect, so a proactive approach to risk management, extending beyond initial deployment to ongoing updates and system monitoring, may be prudent. Software providers should note that they could potentially be held liable even where their product operates as a component of a larger system. This liability regime incentivises stronger warranties, indemnities, and cooperation agreements to allocate risk effectively across supply chains. Companies should review existing contracts to confirm they reflect the Directive's requirements and renegotiate where necessary to close gaps in accountability.

The Directive also works in tandem with other EU regulation such as the AI Act: businesses that fail to meet mandatory product safety requirements under the AI Act risk facing presumptions of defectiveness under the Product Liability Directive. With the AI Liability Directive still in progress, organisations should also prepare for further changes that would make it easier for claimants to bring AI-related liability claims.
Liability Guidelines for AI-Generated Content
Summary
Liability guidelines for AI-generated content help explain who is responsible if artificial intelligence causes harm or mistakes, whether through incorrect information, privacy breaches, or system defects. These guidelines are evolving to ensure businesses and developers address risks and uphold safety standards when deploying AI systems—especially those that learn or change after launch.
- Update contracts: Review and revise agreements with partners, vendors, and distributors to clearly outline who is responsible for defects or harm caused by AI systems.
- Prioritize transparency: Share information about how your AI models work, what data they use, and any known risks with customers and users to build trust and meet regulatory requirements.
- Monitor and train: Set up ongoing processes for monitoring AI systems after deployment and provide staff with regular training to help prevent issues and ensure compliance.
-
AI liability is about to get real. The lawsuit against OpenAI over a teenager's ChatGPT-assisted death isn't just about one tragic case. It could redefine how enterprises must treat AI. If courts accept these claims, AI will no longer be "just software." It will be judged like a dangerous product, with strict liability, duty to warn, and negligence standards applied. For enterprises, the ripple effects are enormous:
1. Product liability exposure - Deploying AI could carry the same legal risks as selling a defective car or medical device. "Use at your own risk" disclaimers won't be enough.
2. Duty to warn - Expect mandatory disclaimers, onboarding risk screens, and context-specific safety alerts when AI is used in HR, finance, or healthcare.
3. Governance as legal defense - Companies will need documented AI safety frameworks (NIST/ISO-style) to prove they took "reasonable care."
4. Unlicensed practice risk - If courts rule AI engaged in the practice of psychology, similar arguments could apply to AI in law, medicine, or finance. Human oversight may become legally required.
5. Insurance shake-up - AI-specific liability coverage will become a must-have, not an afterthought.
This could be the moment where AI moves from "experimental software" to regulated, high-liability product. Enterprise leaders should start planning now:
• Demand transparency from vendors on safety testing and controls.
• Implement "safety by design" in internal AI programs.
• Review insurance, compliance, and risk frameworks before lawsuits force the issue.
The question is no longer if AI liability will hit enterprises; it's when, and how prepared you'll be.
-
⚖️ Generative AI in EU law
🔍 This paper is a critical analysis of the AI Act, identifying gaps and challenges in addressing the rapidly advancing applications of Generative AI, and it provides recommendations to ensure the safe and compliant deployment of LLMs. 🚀 Regarding liability:
🎯 Benefits
The Product Liability Directive and AILD provide valuable structures for addressing liability in GenAI applications by recognizing the potential liability from post-deployment learning. This scope supports claims for damages, including rights violations, and addresses AI opacity and the information asymmetry between providers and users. Both directives shift the burden of proof, requiring providers to disclose relevant information if harm is suspected.
🎯 Gaps
Both directives rely on the AI Act, which has limitations when applied to General-Purpose AI (GPAI) models. Initially, the AI Act classified GPAI as high-risk by default, but it has since adopted a 'systemic risk' approach. Yet it lacks clear criteria for defining societal risks specific to GPAI, creating ambiguity around liability and making it challenging to determine the conditions under which GenAI falls within the AILD's scope.
🎯 Recommendations for a Tailored Code of Practice (CoP)
The authors recommend establishing a CoP for GPAI models presenting systemic risks. This CoP would clarify a model's compliance with the AI Act and provide a framework for risk management specific to GenAI. Extending the disclosure mechanism and rebuttable presumption of causation to all GPAI models would also enhance accountability, as GenAI developers typically possess incident-relevant information and should be obligated to share it.
🎯 Clarifying Model Development and Data Intent
The lack of a singular purpose in GenAI models complicates the risk prediction and compliance assessments required by the AI Act. To manage risks more effectively, the authors propose emphasizing criteria such as model scalability, input diversity, and transparent data usage objectives. For models trained on restricted datasets that rely on few/zero-shot learning capabilities, developers may need to disclose auxiliary information, thereby clarifying links between observed and unobserved object classes and aligning with transparency goals.
🎯 Incorporating Ethical and Technical Safeguards
The paper suggests combining conventional fault criteria with additional ethical and technical safeguards within the CoP. These would guide GenAI developers to:
🔸 Enhance Data Transparency: Document data intent and collection methods.
🔸 Ensure Data Quality: Construct representative datasets of sufficient quality, reducing the risk of overfitting and increasing generalizability.
🔸 Implement (Pro)Active Monitoring: Report potential harm incidents and form alliances with credible third-party organizations for validation and evidence access.
🔗 https://lnkd.in/dERy5n9u #AI #AIAct
-
If you are an organisation using AI or an AI developer, the Australian privacy regulator has just published some vital information about AI and your privacy obligations. Here is a summary of the new guides for businesses published today by the Office of the Australian Information Commissioner (OAIC), which articulate how Australian privacy law applies to AI and set out the regulator's expectations. The first guide aims to help businesses comply with their privacy obligations when using commercially available AI products and to select an appropriate product. The second provides privacy guidance to developers using personal information to train generative AI models.
GUIDE ONE: Guidance on privacy and the use of commercially available AI products. Top five takeaways:
* Privacy obligations apply to any personal information input into an AI system, as well as to the output data generated by AI (where it contains personal information).
* Businesses should update their privacy policies and notifications with clear and transparent information about their use of AI.
* If AI systems are used to generate or infer personal information, including images, this is a collection of personal information and must comply with APP 3 (which deals with the collection of personal information).
* If personal information is being input into an AI system, APP 6 requires entities to use or disclose the information only for the primary purpose for which it was collected.
* As a matter of best practice, the OAIC recommends that organisations do not enter personal information, and particularly sensitive information, into publicly available generative AI tools.
GUIDE TWO: Guidance on privacy and developing and training generative AI models. Top five takeaways:
* Developers must take reasonable steps to ensure accuracy in generative AI models.
* Just because data is publicly available or otherwise accessible does not mean it can legally be used to train or fine-tune generative AI models or systems.
* Developers must take particular care with sensitive information, which generally requires consent to be collected.
* Where developers are seeking to use personal information that they already hold to train an AI model, and this was not a primary purpose of collection, they need to carefully consider their privacy obligations.
* Where a developer cannot clearly establish that a secondary use for an AI-related purpose was within reasonable expectations and related to a primary purpose, they should, to avoid regulatory risk, seek consent for that use and/or offer individuals a meaningful and informed ability to opt out of such a use.
https://lnkd.in/gX_FrtS9
-
If you use GenAI… I want to hold you… accountable. As AI becomes a key tool in legal practice, ensuring ethical use is critical. This condensed framework is based on ABA guidelines and other regulatory standards, balancing efficiency with accountability.
1. Competence - Lawyers must understand AI's capabilities and risks, such as inaccuracies or biases. Regular training is crucial for staying updated.
2. Confidentiality - Client data must be protected when using AI tools. Anonymize sensitive data and ensure AI systems are secure (a minimal redaction sketch follows this list).
3. Transparency - Lawyers must inform clients about AI use, particularly when it impacts legal services or fees, fostering transparency and trust.
4. Verification of Outputs - AI-generated outputs must be reviewed for accuracy to avoid errors like false citations, ensuring the integrity of legal work.
5. Reasonable Fees - Fees must be reasonable and reflect the actual work performed. When using AI, lawyers can charge for tasks like inputting data into AI tools and verifying the AI-generated results. However, lawyers should not bill clients for time saved through AI's efficiency unless the client has specifically agreed to this arrangement in advance. This ensures transparency and fairness in billing practices.
6. Addressing Bias - Firms should actively mitigate AI biases that could lead to unfair outcomes, particularly in sensitive legal areas.
7. Supervision - Supervisory lawyers must ensure that AI use complies with ethical standards, implementing policies and training to manage AI responsibly.
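To make the confidentiality point concrete, here is a minimal, hypothetical Python sketch of stripping obvious identifiers before text leaves the firm's environment for any external generative AI tool. The patterns and the `redact` helper are illustrative assumptions, not a complete anonymization solution, and nothing here is prescribed by the ABA guidance itself:

```python
import re

# Illustrative patterns only -- real anonymization needs far more than regexes
# (client names, case numbers, and matter-specific identifiers vary widely).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,3}[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with typed placeholders before the text
    is sent to an external AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    note = "Client Jane Roe (jane.roe@example.com, 555-867-5309) disputes the invoice."
    print(redact(note))
    # -> Client Jane Roe ([EMAIL REDACTED], [PHONE REDACTED]) disputes the invoice.
```

Note that the client's name passes through untouched: simple pattern matching misses exactly the identifiers that matter most, which is why human review, rather than automated anonymization alone, remains essential.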
-
Using #LLM Outputs Across Different #AI Platforms: Terms of Service Analysis
As AI tools become increasingly integrated into workflows, a crucial question arises: does using one LLM's output and pasting it into another violate terms of service? This report examines the legal and policy implications of transferring AI-generated content across platforms like #Grok, #Perplexity, #ChatGPT, and #Google #Gemini.
🚨 Key Findings
🔹 Perplexity AI - Among the most restrictive, claiming ownership of API outputs and prohibiting copying, caching, or creating derivative works. These restrictive policies align with its "answer engine" business model and ongoing copyright lawsuits from publishers like Dow Jones.
🔹 Google Gemini - Similar restrictions on redistribution, but more transparency via citation metadata. Google differentiates between free and paid API tiers, which affects how user data is used.
🔹 Grok (xAI) - More permissive, allowing broader use of outputs provided users attribute Grok as the source. This aligns with Elon Musk's stance on AI openness.
🔹 ChatGPT (OpenAI) - Unclear stance on output ownership. However, legal precedents suggest OpenAI does not have strong intellectual property claims on ChatGPT's outputs, though the terms of service may still restrict certain uses.
⚠️ Potential Consequences
Violating an LLM's terms could lead to:
❌ Account suspension or bans
⚖️ Legal action in extreme cases
🚀 "Jailbreaking" risks if the transfer circumvents intended platform controls
Conclusion: copy-pasting outputs across LLMs may violate terms on some platforms (especially Perplexity and Gemini), while others (like Grok) are more lenient. To ensure compliance, always review the latest TOS before using AI-generated content across multiple platforms.
📌 What are your thoughts on AI-generated content ownership? Should LLM outputs be freely transferable? Drop your insights below! 👇
#AI #LLM #ArtificialIntelligence #LegalTech #MachineLearning #AICompliance #PerplexityAI #ChatGPT #GoogleGemini #GrokAI #AIRegulations
-
Does training an #AI #model with public, copyrighted work and personal data require consent? Usually not, if done correctly. Livio Veraldi and I examined what really happens when a large language model (#LLM) is trained with copyrighted content and personal data, and then applied known concepts of #copyright, #dataprotection, and other laws (with a focus on Swiss law, but we hope to inspire beyond). Here are some of the copyright findings:
➡ There is a concept that copyright covers only uses that allow humans to "enjoy" the work at issue. This typically doesn't occur when an LLM is trained, at least where works are (as is usually the case) not memorized. It's a common misconception that LLMs "remember" everything they see during pre-training; there are also anti-memorization techniques.
➡ Even where memorization occurs, works are fragmented within an LLM, and these fragments are combined with all other information, causing the work to fade when looking at the model as a whole. While the right prompts may later reassemble these fragments into something resembling the original in the output, this constitutes a separate act requiring independent analysis.
➡ The creators of the AI Act failed to get their GPAIM provision on copyright right. Thus, only those who train their model within the EU will likely have to comply with the Text and Data Mining (TDM) rule and its "opt-out". In Switzerland, we have something similar, but no opt-out. This makes Switzerland (and, of course, the US) more attractive for AI companies.
➡ Is a provider liable when users' LLM-generated content infringes existing copyrights? After all, it's the user who supplies the prompt. Case law on search engines may offer guidance: a recent case found a search provider not liable for a user's search term that resulted in illegal content. Similar principles could apply to prompts generating illegal output.
Our paper was published in German in February by #Jusletter of Weblaw (61 pages), and now also in English in Jusletter IT. It is based on an analysis we did for the AI Center of the Swiss Federal Institute of Technology (they are creating a unique Swiss LLM). The full paper is available here https://lnkd.in/efxKX8K5 (English) and here https://lnkd.in/eJj7m2RH (German). For those who only have time for a summary, see our AI blog post no. 26 https://lnkd.in/eaXpXiTn (English) and https://lnkd.in/ecgPwbNx (German). To get all posts of our AI blog series that have already appeared, go to https://vischer.com/ai and subscribe to future VISCHER blog posts at https://lnkd.in/eDGjbZAU
Many thanks to Nicole Ritter, Giulia Odermatt, Valeria Locher, Imanol Schlag, Martin Jaggi, Florent Thouvenin, Egzona Redzepi, Alessandro Loperfido and Mairi Weder-Gillies for their support, review and input.
-
State Bar of California approves guidance on the use of generative AI in the practice of law. Key points:
🔹 A lawyer must not input any confidential client information into a generative AI solution that lacks adequate confidentiality and security protections. A lawyer must anonymize client information and avoid entering details that could be used to identify the client. (Duty of confidentiality)
🔹 AI-generated outputs can be used as a starting point but must be carefully scrutinized. They should be critically analyzed for accuracy and bias, and supplemented and improved if necessary. (Duty of competence and diligence)
🔹 A lawyer must comply with the law (e.g. IP, privacy, cybersecurity) and cannot counsel a client to engage, or assist a client, in conduct that the lawyer knows violates any law, rule, or ruling of a tribunal when using generative AI tools. (Duty to comply with the law)
🔹 Managerial and supervisory lawyers should establish clear policies on the permissible uses of generative AI and make reasonable efforts to ensure the firm adopts measures giving reasonable assurance that the firm's lawyers and nonlawyers comply with their professional obligations when using generative AI. This includes providing training on the ethical and practical aspects, and pitfalls, of any generative AI use. (Duty to supervise)
🔹 A lawyer should consider disclosing to the client that they intend to use generative AI in the representation, including how the technology will be used and the benefits and risks of such use. A lawyer should review any applicable client instructions or guidelines that may restrict or limit the use of generative AI. (Duty to communicate)
🔹 A lawyer may use generative AI to create work product more efficiently and may charge for actual time spent (e.g., crafting or refining generative AI inputs and prompts, or reviewing and editing generative AI outputs). A lawyer must not charge hourly fees for the time saved by using generative AI. (Charging for work produced by AI)
🔹 A lawyer must review all generative AI outputs, including, but not limited to, analysis and citations to authority, for accuracy before submission to the court, and correct any errors or misleading statements made to the court. (Duty of candor to tribunal)
🔹 Some generative AI is trained on biased information, and a lawyer should be aware of possible biases and the risks they may create when using generative AI (e.g., to screen potential clients or employees). (Prohibition on discrimination)
🔹 A lawyer should analyze the relevant laws and regulations of each jurisdiction in which the lawyer is licensed to ensure compliance with such rules. (Duties in other jurisdictions)
#dataprivacy #dataprotection #AIprivacy #AIgovernance #privacyFOMO
https://lnkd.in/dDUuFfes
-
Onboarding an AI vendor? Don't sign until you've reviewed this checklist. From our analysis of 50+ AI addendums, these are the clauses that actually matter. Not all issues will be relevant to every deal, so always start with the basics:
- What data are they collecting?
- What can they actually do with it?
Force the issue by deleting any usage-data or aggregated-data rights on a first pass.
𝐆𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞 & 𝐏𝐞𝐫𝐦𝐢𝐬𝐬𝐢𝐨𝐧𝐬
🔹 No AI use without prior written approval; unapproved use = material breach
🔹 No high-risk or automated decision-making AI unless required for the services
🔹 Must comply with all AI laws and related policies
🔹 Support transparency and documentation if the buyer requests it
𝐃𝐚𝐭𝐚 & 𝐈𝐏
🔹 Buyer owns all AI inputs, outputs, and related IP
🔹 Vendor cannot use buyer data to train, fine-tune, or improve any AI
🔹 All AI data and outputs are confidential information
🔹 On termination, vendor must return or destroy buyer data and certify deletion
𝐒𝐞𝐜𝐮𝐫𝐢𝐭𝐲 & 𝐂𝐨𝐦𝐩𝐥𝐢𝐚𝐧𝐜𝐞
🔹 Maintain strong security controls: MFA, least privilege, audits, and incident response
🔹 Periodically test and validate AI systems for confidentiality, integrity, and reliability
𝐏𝐞𝐫𝐟𝐨𝐫𝐦𝐚𝐧𝐜𝐞 & 𝐄𝐭𝐡𝐢𝐜𝐬
🔹 Ensure AI outputs are accurate, reliable, and ethically developed
🔹 Test for and mitigate bias in training data and outputs (see the sketch after this checklist)
🔹 Don't generate illegal, offensive, or harmful content
🔹 Clearly label AI-generated audio, images, video, or text
𝐑𝐢𝐬𝐤 & 𝐋𝐢𝐚𝐛𝐢𝐥𝐢𝐭𝐲
🔹 Warrant that AI systems are accurate, secure, bias-free, and virus-free
🔹 Indemnify buyer for IP infringement, contract breaches, or violations of law
🔹 Maintain robust cyber insurance and assume full liability for AI errors or misuse
𝐍𝐨𝐭 𝐬𝐭𝐚𝐧𝐝𝐚𝐫𝐝 𝐲𝐞𝐭, 𝐛𝐮𝐭...
🔹 Conduct third-party AI audits
🔹 Maintain AI insurance
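For the bias-testing clause, one concrete demonstration a buyer can request from a vendor is a disparate-impact comparison of outcome rates across groups. Here is a minimal Python sketch; the sample data and the four-fifths threshold are illustrative assumptions (a common rule of thumb from US employment practice, not something any AI addendum mandates), and a real bias audit involves far more than one metric:

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs from an AI system's outputs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def disparate_impact_ratio(decisions):
    """Ratio of lowest to highest group selection rate.
    Values below ~0.8 (the 'four-fifths rule') are a common red flag."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Toy audit sample: (group, was the applicant approved?)
    sample = (
        [("A", True)] * 40 + [("A", False)] * 10
        + [("B", True)] * 25 + [("B", False)] * 25
    )
    print(f"Selection rates: {selection_rates(sample)}")
    print(f"Disparate impact ratio: {disparate_impact_ratio(sample):.2f}")
    # -> rates A=0.80, B=0.50; ratio 0.62, below 0.8 and worth investigating
```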
-
The United States Copyright Office just published a new 52-page report on AI-generated content and copyrightability. If you're using AI for marketing, here's what you need to know. Listen to the Content Amplified podcast for more insights like these: https://lnkd.in/gaVVrmEv
1️⃣ You don't automatically own AI-generated content. If AI creates something without enough human input, you don't have copyright protection, which means anyone else could use it too. "Copyright does not extend to purely AI-generated material, or material where there is insufficient human control over the expressive elements."
2️⃣ Writing a prompt isn't enough. A long, detailed prompt doesn't make you the author; the report treats prompts as instructions, not creative expression. "Based on the functioning of current generally available technology, prompts do not alone provide sufficient control."
3️⃣ Human effort = ownership. If you edit, modify, or arrange AI-generated content in a meaningful way, that part is copyrightable. "Human authors are entitled to copyright in their works of authorship that are perceptible in AI-generated outputs, as well as the creative selection, coordination, or arrangement of material in the outputs, or creative modifications of the outputs."
So how do you protect your brand's AI content?
✅ Use AI as a tool, not a replacement for human creativity
✅ Keep records of your input and edits to AI-generated content (see the sketch below)
✅ Make sure key brand assets have clear human authorship
✅ Stay updated: copyright rules are evolving
Disclaimer: This post is for informational purposes only and not legal advice. Consult a legal professional for specific guidance.
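One practical way to act on the record-keeping point is a simple provenance log capturing the prompt, the raw AI output, and the human-edited final version, so the human contribution is documented if authorship is ever questioned. A minimal Python sketch, assuming a local JSON-lines file and a hypothetical `log_ai_content` helper (this is an illustration, not anything the Copyright Office prescribes):

```python
import json
import hashlib
from datetime import datetime, timezone

LOG_PATH = "ai_content_provenance.jsonl"  # hypothetical local audit file

def log_ai_content(prompt: str, raw_output: str, human_final: str, editor: str) -> None:
    """Append one provenance record per asset: what the AI produced and what
    a human changed -- evidence of human creative contribution."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "editor": editor,
        "prompt": prompt,
        "raw_output_sha256": hashlib.sha256(raw_output.encode()).hexdigest(),
        "raw_output": raw_output,
        "human_final": human_final,
        "human_edited": human_final.strip() != raw_output.strip(),
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    log_ai_content(
        prompt="Draft a tagline for our spring campaign",
        raw_output="Spring into savings with us!",
        human_final="Spring forward: new season, new savings, same trusted service.",
        editor="j.doe",
    )
```

The hash of the raw output makes later tampering detectable, and the `human_edited` flag gives a quick filter for assets where a meaningful human contribution can be shown.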