Yesterday, the AI Office published the third draft of the General-Purpose AI Code of Practice, a key regulatory instrument for AI providers seeking to align with the EU AI Act. Developed with input from 1,000 stakeholders, the draft refines previous versions by clarifying compliance requirements and introducing a structured approach to regulation. GPAI providers must meet baseline obligations on transparency and copyright compliance, while models classified as posing systemic risk under Article 51 of the AI Act face additional commitments. The final version, expected in May 2025, aims to facilitate compliance while ensuring AI models adhere to safety, security, and accountability standards.

The Code introduces the Model Documentation Form, requiring AI providers to disclose key details such as model architecture, parameter size, training methodologies, and data sources. Transparency obligations include specifying the provenance of training data, documenting measures to mitigate bias, and reporting compute power and energy consumption. GPAI providers must also outline their models' intended uses, with additional requirements for systemic-risk models, including adversarial testing and evaluation strategies. Documentation must be retained for twelve months after a model is retired.

Copyright compliance is mandatory for all providers, including open-source AI. GPAI providers must establish formal copyright policies and comply with strict data collection rules: web crawlers cannot bypass paywalls, access piracy sites, or ignore the Robots Exclusion Protocol. The Code also requires providers to prevent AI-generated copyright infringement, mandate compliance in acceptable use policies, and implement mechanisms for rightsholders to submit copyright complaints. Providers must maintain a point of contact for copyright inquiries and ensure their policies are transparent.

For AI models with systemic risk, the Code introduces a Safety and Security Framework, aligning with the AI Act's high-risk requirements. Providers must assess risks in areas such as cyber threats, manipulation, and autonomous AI behaviours. They must define risk acceptance criteria, anticipate risk escalations, and conduct assessments at key development milestones. If risks are identified, development may need to be paused while safeguards are implemented. GPAI providers must introduce technical safeguards, including input filtering, API access controls, and security measures meeting at least the RAND SL3 standard. From 2 November 2025, systemic-risk models must undergo external risk assessments before release. Providers must maintain a Safety and Security Model Report, report AI-related incidents within strict timeframes, and implement governance structures ensuring responsibility at all levels. Whistleblower protections are also required.

With the final version expected in May 2025, AI providers have a short window to prepare before the AI Act takes full effect in August.
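One of the Code's copyright commitments is that training-data crawlers must honor the Robots Exclusion Protocol (robots.txt). As a minimal sketch of what that can look like in practice (this is an illustration, not text from the Code; the user-agent string and URL are hypothetical placeholders), a crawler can consult robots.txt with Python's standard-library robotparser before fetching anything:

```python
# Minimal sketch: check robots.txt before fetching a page for a training corpus.
# The user-agent name "example-gpai-crawler" and the target URL are hypothetical.
from urllib import robotparser
from urllib.parse import urlparse
import urllib.request

USER_AGENT = "example-gpai-crawler"

def fetch_if_allowed(url: str) -> bytes | None:
    """Fetch `url` only if the site's robots.txt permits our user agent."""
    parts = urlparse(url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"

    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # download and parse the site's robots.txt

    if not parser.can_fetch(USER_AGENT, url):
        return None  # disallowed: skip the page rather than bypass the exclusion rules

    request = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(request) as response:
        return response.read()

if __name__ == "__main__":
    page = fetch_if_allowed("https://example.com/articles/sample")
    print("fetched" if page is not None else "skipped (disallowed by robots.txt)")
```

Respecting robots.txt is only one of the Code's crawling constraints; avoiding paywall circumvention and piracy sites would require separate controls (for example, domain blocklists) beyond this sketch.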
AI Industry Transparency Guidelines
Summary
AI industry transparency guidelines are rules and frameworks that require companies to openly share details about how AI systems are built, tested, and used. These guidelines help ensure AI is developed responsibly, promoting safety, accountability, and public trust as organizations disclose information about their models, data, risks, and practices.
- Document model details: Clearly describe your AI model's architecture, training methods, data sources, and intended uses to help users and regulators understand how it works.
- Implement risk reporting: Set up processes for assessing, documenting, and reporting potential risks, incidents, and safety concerns throughout your AI system’s lifecycle.
- Disclose data practices: Publish information about data provenance, licensing, and any modifications to datasets to ensure legal compliance and build credibility with stakeholders.
Our paper on transparency reports for large language models has been accepted to AI Ethics and Society! We've also released transparency reports for 14 models. If you'll be in San Jose on October 21, come see our talk on this work. These transparency reports can help with:
🗂️ data provenance
⚖️ auditing & accountability
🌱 measuring environmental impact
🛑 evaluations of risk and harm
🌍 understanding how models are used

Mandatory transparency reporting is among the most common AI policy proposals, but there are few guidelines describing how companies should actually do it. In February, we released our paper, “Foundation Model Transparency Reports,” which proposed a framework for transparency reporting based on existing practices in pharmaceuticals, finance, and social media. We drew on the 100 transparency indicators from the Foundation Model Transparency Index to make each line item in the report concrete. At the time, no company had released a transparency report for its top AI model, so to provide an example we built a chimera transparency report from best practices drawn from 10 different companies.

In May, we published v1.1 of the Foundation Model Transparency Index, which includes transparency reports for 14 models, including OpenAI's GPT-4, Anthropic's Claude 3, Google's Gemini 1.0 Ultra, and Meta's Llama 2. The transparency reports are available as spreadsheets on our GitHub and in an interactive format on our website. We worked with companies to encourage them to disclose additional information about their most powerful AI models and were fairly successful: companies shared more than 200 new pieces of information, including potentially sensitive information about data, compute, and deployments.

🔗 Links to these resources in comment below! Thanks to my coauthors Rishi Bommasani, Shayne Longpre, Betty Xiong, Sayash Kapoor, Nestor Maslej, Arvind Narayanan, and Percy Liang at Stanford Institute for Human-Centered Artificial Intelligence (HAI), MIT Media Lab, and Princeton Center for Information Technology Policy.
The European Commission published its first draft of the “Code of Practice on Transparency of AI‑Generated Content,” designed as a tool to help organizations demonstrate alignment with the transparency requirements (Art. 50) of the AI Act. Article 50 includes obligations for providers to mark AI-generated or manipulated content in a machine-readable format, and for users who deploy generative AI systems for professional purposes to clearly label deepfakes and AI-generated text published on matters of public interest.

The document is divided into two sections. The first covers rules for marking and detecting AI content, applicable to providers of generative AI systems, including to:
- Use multi‑layered, machine-readable marking of AI‑generated content
- Use imperceptible watermarks interwoven within the content
- Adopt a digitally signed “manifest/provenance certificate” for content that cannot securely carry metadata
- Offer free detection interfaces/tools, including confidence scoring, and complementary forensic detection that does not rely on active marking
- Test against common transformations and adversarial attacks
- Use open standards and shared/aggregated verifiers to enable cross-platform detection and lower compliance friction

The second section covers labelling deepfakes and certain AI-generated or manipulated text on matters of public interest, and is applicable to deployers of generative AI systems, including:
- Deepfake labelling
- Modality‑specific labelling rules for real-time video, non-real-time video, images, multimodal content, and audio-only content
- Operational governance: internal compliance documentation, staff training, accessibility measures, and mechanisms to flag and fix missing or incorrect labels
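One of the provider measures above is a digitally signed “manifest/provenance certificate” for content that cannot securely carry embedded metadata. As a rough illustration only (real deployments would typically build on open standards such as C2PA and asymmetric signatures; the field names and the HMAC signing key here are hypothetical), a detached, signed manifest could look like this:

```python
# Toy "provenance certificate" for AI-generated content that cannot embed metadata.
# Illustrative only: a production system would use a standard manifest format and
# asymmetric signatures rather than a shared HMAC key.
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"replace-with-provider-managed-key"  # hypothetical key material

def build_manifest(content: bytes, generator: str) -> dict:
    """Build and sign a detached manifest describing a piece of AI-generated content."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,          # e.g. model or system identifier
        "ai_generated": True,
        "created_at": int(time.time()),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check both the signature and the content hash of a received manifest."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )

audio = b"\x00\x01\x02"  # stand-in for generated audio bytes
m = build_manifest(audio, generator="example-tts-model")
assert verify_manifest(audio, m)
```

Because the manifest travels separately from the content and binds to it via the content hash, formats that cannot carry metadata (for example raw audio streams) can still be checked against a provenance record.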
Businesses that offer generative AI systems or services in California should be aware that the state's Generative AI Training Data Transparency Act takes effect on January 1, 2026. It imposes documentation and disclosure obligations on developers of such systems released on or after January 1, 2022. Specifically, covered developers must post on their website documentation describing the data used to train, test, validate, or fine-tune the system, including:
· Sources or owners of datasets and how they support the system's intended purpose.
· The size of datasets (ranges permitted; estimates for dynamic datasets).
· Types and characteristics of data points and labeling practices.
· Whether datasets include copyrighted, trademarked, or patented material versus public-domain content.
· Whether datasets were purchased or licensed.
· Whether datasets include personal information or aggregate consumer information as defined under California law.
· Any cleaning, processing, or modifications performed and their purposes.
· Data collection periods, including whether collection is ongoing, and the dates the data was first used in development.
· Whether synthetic data generation was used, and the functional need for it if included.

There are certain exemptions, including if the system is (i) made available only to a federal entity exclusively for national security, military, or defense purposes, or (ii) made available solely to a hospital's medical staff member.

Businesses offering generative AI systems and services in California should consider taking the following next steps:
· Conducting a data provenance and licensing assessment for all covered systems released since January 1, 2022.
· Building a standardized disclosure template aligned with the statute's enumerated elements to support publication before January 1, 2026 and at each substantial modification (a sketch of such a template follows this post).
· Establishing change‑management triggers so that retraining or fine‑tuning that materially affects performance prompts updated disclosures.
· Mapping any applicable exemptions and documenting the basis for relying on them.

#geospatiallaw #geoai
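To make the "standardized disclosure template" suggestion above concrete, here is a minimal sketch of a machine-readable disclosure record whose fields paraphrase the statute's enumerated elements. The field names and example values are illustrative, not official statutory terminology, and this is not a statement of what a compliant disclosure must contain.

```python
# Sketch of a training-data disclosure record mirroring the statute's enumerated
# elements. Field names and example values are illustrative paraphrases only.
from dataclasses import dataclass, asdict
import json

@dataclass
class TrainingDataDisclosure:
    dataset_sources: list[str]                  # sources or owners of datasets
    intended_purpose_support: str               # how datasets support the system's purpose
    dataset_size: str                           # ranges or estimates permitted
    data_point_types: list[str]                 # types, characteristics, labeling practices
    contains_ip_protected_material: bool        # copyrighted/trademarked/patented vs. public domain
    purchased_or_licensed: bool
    contains_personal_information: bool         # as defined under California law
    contains_aggregate_consumer_information: bool
    cleaning_or_modifications: str              # processing performed and its purposes
    collection_period: str                      # including whether collection is ongoing
    first_used_in_development: str              # date first used in development
    synthetic_data_used: bool
    synthetic_data_rationale: str = ""          # functional need, if synthetic data included

disclosure = TrainingDataDisclosure(
    dataset_sources=["example-public-web-corpus"],
    intended_purpose_support="General-purpose text generation",
    dataset_size="100M-500M documents (estimate; dataset is dynamic)",
    data_point_types=["web text", "human preference labels"],
    contains_ip_protected_material=True,
    purchased_or_licensed=False,
    contains_personal_information=True,
    contains_aggregate_consumer_information=False,
    cleaning_or_modifications="Deduplication and quality filtering",
    collection_period="2020-01 to present (ongoing)",
    first_used_in_development="2022-03",
    synthetic_data_used=False,
)
print(json.dumps(asdict(disclosure), indent=2))
```

A structured record like this can be rendered into the public website documentation and regenerated whenever a substantial modification triggers an updated disclosure.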
“Trust but verify.”

That's the three-word summary of the policy approach proposed by the Joint California Policy Working Group on AI Frontier Models (attached below). Even if you're not based in California, this is a fantastic rulebook on AI policy and regulation. It's one of the more nuanced and deeply thought-out papers, cutting past the generic “regulation v innovation” debate and diving straight into a specific policy solution for governing frontier models (with wisdom drawn from historical analogies in tobacco, energy, pesticides, and car safety). Here's my quick summary of the “trust but verify” model.

1️⃣ TRANSPARENCY
In a nutshell, the “trust but verify” approach is rooted in transparency, which is essential for building “trust”. But transparency is a broad concept, so the paper neatly breaks it down in terms of:
▪️ Data acquisition
▪️ Safety practices
▪️ Security practices
▪️ Pre-deployment testing
▪️ Downstream impact
▪️ Accountability for openness
There's nuance and different transparency mechanisms for each area. However, transparency alone doesn't guarantee accountability or redress. In fact, the paper warns about “transparency washing”, where policymakers (futilely) pursue transparency for its own sake without achieving anything. Transparency needs to be tested and verified (hence the “verify”).

2️⃣ THIRD-PARTY RISK ASSESSMENT
This supports the “verify” aspect and the idea of “evidence-based transparency” (i.e. transparency that you can actually trust). This is not just about audits and evaluations, but also specific measures such as:
▪️ researcher protections (i.e. safe-harbour / indemnity protections for public-interest safety research)
▪️ responsible disclosure (i.e. infrastructure to communicate identified vulnerabilities to affected parties)

3️⃣ WHISTLEBLOWER PROTECTION
This means legal safeguards that protect whistleblowers from retaliation when they report misconduct, fraud, illegal activities, etc. It might be the secret to driving *real* corporate accountability in AI.

4️⃣ ADVERSE EVENT REPORTING
A reporting regime for AI-related incidents (similar to data-breach reporting regimes) helps with identification and enforcement, regulatory coordination and information sharing, and analytics.

5️⃣ SCOPE
What type of frontier models should be regulated? The paper suggests these guiding principles:
▪️ "Generic developer-level thresholds seem to be generally undesirable given the current AI landscape"
▪️ "Compute thresholds are currently the most attractive cost-level thresholds, but they are best combined with other metrics for most regulatory intents"
▪️ "Thresholds based on risk evaluation results and observed downstream impact are promising for safety and corporate governance policy, but they have practical issues"

👓 Want more? See my map which tracks AI laws and policies around the world (see link in 'Visit my website').

#ai #tech #airegulation #policy #california
Onboarding an AI vendor? Don't sign until you've reviewed this checklist. From our analysis of 50+ AI addendums, these are the clauses that actually matter. Not all issues will be relevant to every deal, so always start with the basics:
- What data are they collecting?
- What can they actually do with it?
Force the issue by deleting any usage-data or aggregated-data rights on a first pass.

𝐆𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞 & 𝐏𝐞𝐫𝐦𝐢𝐬𝐬𝐢𝐨𝐧𝐬
🔹 No AI use without prior written approval; unapproved use = material breach
🔹 No high-risk or automated decision-making AI unless required for the services
🔹 Must comply with all AI laws and related policies
🔹 Support transparency and documentation if the buyer requests it

𝐃𝐚𝐭𝐚 & 𝐈𝐏
🔹 Buyer owns all AI inputs, outputs, and related IP
🔹 Vendor cannot use buyer data to train, fine-tune, or improve any AI
🔹 All AI data and outputs are confidential information
🔹 On termination, vendor must return or destroy buyer data and certify deletion

𝐒𝐞𝐜𝐮𝐫𝐢𝐭𝐲 & 𝐂𝐨𝐦𝐩𝐥𝐢𝐚𝐧𝐜𝐞
🔹 Maintain strong security controls: MFA, least privilege, audits, and incident response
🔹 Periodically test and validate AI systems for confidentiality, integrity, and reliability

𝐏𝐞𝐫𝐟𝐨𝐫𝐦𝐚𝐧𝐜𝐞 & 𝐄𝐭𝐡𝐢𝐜𝐬
🔹 Ensure AI outputs are accurate, reliable, and ethically developed
🔹 Test for and mitigate bias in training data and outputs
🔹 Don't generate illegal, offensive, or harmful content
🔹 Clearly label AI-generated audio, images, video, or text

𝐑𝐢𝐬𝐤 & 𝐋𝐢𝐚𝐛𝐢𝐥𝐢𝐭𝐲
🔹 Warrant that AI systems are accurate, secure, bias-free, and virus-free
🔹 Indemnify buyer for IP infringement, contract breaches, or violations of law
🔹 Maintain robust cyber insurance and assume full liability for AI errors or misuse

𝐍𝐨𝐭 𝐬𝐭𝐚𝐧𝐝𝐚𝐫𝐝 𝐲𝐞𝐭, 𝐛𝐮𝐭...
🔹 Conduct third-party AI audits
🔹 Maintain AI insurance
Last Friday, I joined Laura Frederick and Nathan Leong for a timely How to Contract AI Explained session on Human-in-the-Loop and Transparency in AI contracts. Highlighting some of our discussion points:

1. Regulations Are Prescriptive and (for Now) Focused on High-Risk Systems. HITL and transparency aren't just best practices; in certain scenarios they are legally mandated. The EU AI Act requires that high-risk systems provide meaningful human oversight built into the system. The Colorado AI Act requires that for consequential decisions there must be a "reasonable opportunity" for human review. GDPR Article 22 addresses when human intervention is required. The rules are specific about requirements in these areas, but we still have to translate that language into contractual terms.

2. Use a Risk-Tiered Framework. Even if you aren't deploying a high-risk system, you should think about how and where humans will be involved. Consider a tiered system based on risk that includes human review for high-risk uses and human intervention (override) for less risky ones.

3. Human Oversight Requires Real Resources. Under EU AI Act Article 14, overseers need "appropriate competence, training, and authority." Instead of generic compliance-with-the-laws language, your contracts should be clear about who provides reviewers, their qualifications, training costs, and what happens when review capacity hits a bottleneck.

4. Different Regulators Want Different Types of Transparency or Explainability. The EU wants system-level architecture. GDPR wants the "logic involved." Colorado wants decision-level reasons. NYC wants input factors disclosed upfront. Be specific about which type your use case requires.

5. Negotiate Customer Control Rights. Consider when and where you want operational flexibility (from the vendor and buyer side) to switch between operating/review modes, override decisions without penalty, and adjust confidence thresholds. Given resources and costs, vendors may consider tiered pricing reflecting different levels of human involvement.

My main takeaway: Human oversight and transparency should improve decision quality, not just provide someone to blame when algorithms fail. When we think about these issues, we should consider how they make our processes and decision-making more effective, not just how they shift liability.
At the end of September, Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (S.B. 53), requiring large AI companies to report the risks their systems pose and the safeguards they have in place. Unlike last year's vetoed S.B. 1047, this new version de-emphasizes liability. It explicitly caps financial penalties, even for catastrophic AI failures, and focuses instead on transparency and reporting. As Senator Scott Wiener explained, "Whereas SB 1047 was more of a liability-focused bill, SB 53 is more focused on transparency."

In this new piece for Institute for Law & AI (LawAI), https://lnkd.in/gAig3vSz, Josephine Wolff and I argue that S.B. 53's basic approach makes sense, as expanding liability for AI harms won't necessarily make AI systems safer or more secure. Liability almost always brings insurers into the picture, and as we've seen in the cyber insurance market, insurers often struggle to model or mitigate complex, evolving risks. When that happens, insurance helps firms manage liability exposure, not safety risk.

California's transparency-first approach is a smarter place to start. By requiring companies to report on AI risks and incidents, regulators can help build the data needed to understand what works, and what doesn't, when it comes to preventing AI-related harms. That kind of foundation is critical if we want policy, regulation, and insurance to actually make emerging technologies safer.
AI + Privacy: a new Consumer Reports publication titled "Artificial Intelligence Policy Recommendations."

Key recommendations:

Transparency
🔍 Companies must disclose when algorithms are used for important decisions like loans, rentals, promotions, or rate changes.
📝 Companies must explain adverse algorithmic decisions clearly, including how to improve outcomes. Complex, unexplainable tools shouldn't be used.
🔬 Algorithm developers must provide access to vetted researchers to understand how tools work and their limitations.
⚖️ Companies must substantiate claims made when marketing their AI products.

Fairness
🚫 Algorithmic discrimination should be prohibited, with clarification on how civil rights laws apply to AI development and deployment.
🧪 Independent testing for bias and accuracy should be required before and after deployment of consequential decision-making tools.
🏆 Big Tech shouldn't use AI to unfairly preference their own products when it harms competition.

Privacy
📊 Companies should minimize data collection to only what's necessary for requested services.
🔒 Personal data collected by generative AI tools shouldn't be sold or shared with third parties.
👁️ Remote biometric tracking in public spaces should be banned, with limited exceptions.

Safety
📋 Companies creating consequential or risky tools must conduct risk assessments and make necessary changes.
🗣️ Whistleblower protections are needed for those exposing AI problems that companies won't disclose.
⚠️ Clarify liability for developers who fail to prevent harmful AI uses and unintended consequences.

Enforcement + Government Capacity
💰 The FTC and state regulators need additional resources to oversee companies effectively.
⚡ Create legal pathways for individuals harmed by biased algorithms to seek justice when enforcement agencies lack capacity.

https://lnkd.in/eHfnJn2C