AI Model Release Guidelines

Summary

AI model release guidelines are rules and procedures that organizations follow to ensure artificial intelligence models are safe, transparent, and legally compliant before making them available to the public. These guidelines help control risks, protect users, and build trust in AI technologies.

  • Document thoroughly: Provide clear information about your AI model’s purpose, data sources, training methods, and any known risks so users understand how the model works and its intended use.
  • Prioritize safety: Put safeguards in place to prevent misuse, test for bias, and regularly review your model for security and ethical concerns throughout its lifecycle.
  • Comply with laws: Make sure your model follows all relevant regulations for privacy, copyright, and ethical standards in the regions where it will be used.

  • Dr. Barry Scannell

    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of the Board of Irish Museum of Modern Art | PhD in AI & Copyright

    Yesterday, the AI Office published the third draft of the General-Purpose AI Code of Practice, a key regulatory instrument for AI providers seeking to align with the EU AI Act. Developed with input from 1,000 stakeholders, the draft refines previous versions by clarifying compliance requirements and introducing a structured approach to regulation. GPAI providers must meet baseline obligations on transparency and copyright compliance, while models classified as having systemic risk face additional commitments under Article 51 of the AI Act. The final version, expected in May 2025, aims to facilitate compliance while ensuring AI models adhere to safety, security, and accountability standards.

    The Code introduces the Model Documentation Form, requiring AI providers to disclose key details such as model architecture, parameter size, training methodologies, and data sources. Transparency obligations include specifying the provenance of training data, documenting measures to mitigate bias, and reporting compute power and energy consumption. GPAI providers must also outline their models' intended uses, with additional requirements for systemic-risk models, including adversarial testing and evaluation strategies. Documentation must be retained for twelve months after a model is retired.

    Copyright compliance is mandatory for all providers, including open-source AI. GPAI providers must establish formal copyright policies and comply with strict data collection rules. Web crawlers cannot bypass paywalls, access piracy sites, or ignore the Robots Exclusion Protocol. The Code also requires providers to prevent AI-generated copyright infringement, mandate compliance in acceptable use policies, and implement mechanisms for rightsholders to submit copyright complaints. Providers must maintain a point of contact for copyright inquiries and ensure their policies are transparent.

    For AI models with systemic risk, the Code introduces a Safety and Security Framework, aligning with the AI Act's high-risk requirements. Providers must assess risks in areas such as cyber threats, manipulation, and autonomous AI behaviours. They must define risk acceptance criteria, anticipate risk escalations, and conduct assessments at key development milestones. If risks are identified, development may need to be paused while safeguards are implemented. GPAI providers must introduce technical safeguards, including input filtering, API access controls, and security measures meeting at least the RAND SL3 standard.

    From 2 November 2025, systemic-risk models must undergo external risk assessments before release. Providers must maintain a Safety and Security Model Report, report AI-related incidents within strict timeframes, and implement governance structures ensuring responsibility at all levels. Whistleblower protections are also required. With the final version expected in May 2025, AI providers have a short window to prepare before the AI Act takes full effect in August.
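
    The copyright chapter's crawler rules map naturally onto code. Below is a minimal sketch, assuming a Python crawler, of honouring the Robots Exclusion Protocol before fetching training data; the bot name and helper function are illustrative assumptions, not taken from the Code itself.

    ```python
    # Hedged sketch: gate every fetch on the site's robots.txt, as the
    # Code requires of GPAI training-data crawlers. The user-agent name
    # and this helper are assumptions for illustration only.
    from urllib import robotparser
    from urllib.parse import urlparse

    def may_fetch(url: str, user_agent: str = "ExampleGPAIBot") -> bool:
        """Return True only if the site's robots.txt permits this crawler."""
        parts = urlparse(url)
        rp = robotparser.RobotFileParser()
        rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
        rp.read()  # fetch and parse the site's robots.txt
        return rp.can_fetch(user_agent, url)

    if may_fetch("https://example.com/articles/1"):
        pass  # proceed with the request; otherwise skip the URL
    ```

    Respecting robots.txt is only one of the Code's collection rules; paywall circumvention and piracy sources would need separate policy checks.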

  • Ibrahim Haddad, Ph.D.

    VP Engineering & Advisor | Open Source Strategy | AI Governance | PyTorch Foundation | LF AI & Data | Samsung Research

    Introducing the Model Openness Framework

    Abstract: Generative AI (GAI) offers unprecedented possibilities, but its commercialization has raised concerns about transparency, reproducibility, bias, and safety. Many "open-source" GAI models lack the necessary components for full understanding and reproduction, and some use restrictive licenses, a practice known as "openwashing." We propose the Model Openness Framework (MOF), a ranked classification system that rates machine learning models based on their completeness and openness, following principles of open science, open source, open data, and open access. The MOF requires specific components of the model development lifecycle to be included and released under appropriate open licenses. This framework aims to prevent misrepresentation of models claiming to be open, guide researchers and developers in providing all model components under permissive licenses, and help companies, academia, and hobbyists identify models that can be safely adopted without restrictions. Wide adoption of the MOF will foster a more open AI ecosystem, accelerating research, innovation, and adoption.

    Whitepaper (Google Doc – open for public comment): https://lnkd.in/dFkvXvHT
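
    To make the ranked-classification idea concrete, here is a toy sketch. The component list, license set, and class labels are simplified assumptions, not the MOF's actual specification; see the whitepaper for the real taxonomy.

    ```python
    # Toy illustration of rating a model by which lifecycle components
    # are released under open licenses. Component names, licenses, and
    # class labels are assumptions, not the MOF's real classes.
    REQUIRED_COMPONENTS = {
        "model_weights", "training_code", "inference_code",
        "training_data", "evaluation_results", "model_card", "data_card",
    }
    OPEN_LICENSES = {"Apache-2.0", "MIT", "CC-BY-4.0", "CDLA-Permissive-2.0"}

    def openness_class(released: dict[str, str]) -> str:
        """released maps component name -> SPDX-style license identifier."""
        open_parts = {name for name, lic in released.items()
                      if name in REQUIRED_COMPONENTS and lic in OPEN_LICENSES}
        if open_parts == REQUIRED_COMPONENTS:
            return "fully open: all components under open licenses"
        if {"model_weights", "inference_code", "model_card"} <= open_parts:
            return "partially open: core artifacts released"
        return "insufficiently open: at risk of openwashing claims"

    print(openness_class({"model_weights": "MIT", "inference_code": "MIT",
                          "model_card": "CC-BY-4.0"}))
    ```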

  • Luca Bertuzzi

    Chief AI Correspondent at MLex

    ❗ Today, the European Commission not only launched a public consultation to gather input for its forthcoming guidelines on general-purpose AI models, but also unveiled its "preliminary approach" on how it interprets the GPAI rules of the AI Act. This approach is detailed in a dense 21-page document and marks the first time the EU executive has clarified its interpretation of some key provisions of the law and outlined its intended application.

    Perhaps the most notable aspect of the document is the establishment of a compute threshold, set at 10^22 floating-point operations (FLOP), to determine whether a model falls under the AI Act's GPAI rules. A similar threshold is used to decide if a modified model should be considered a new model, with all the accompanying legal implications. Remarkably, if a GPAI model initially falls short of the systemic threshold but meets it after modification, the entity responsible for the modification will be designated as a provider of a model with systemic risk. While this measure may be necessary to prevent circumvention of the systemic risk categorization, it might also discourage modifications to models that are just below 10^25 FLOP.

    Another key concept the Commission aims to clarify in the guidelines is when a GPAI model is considered placed on the EU market, with the preliminary approach already including several examples. Moreover, two methodologies, one hardware-based and one architecture-based, are provided to calculate the computational resources.

    Additionally, the working document appears to encourage GPAI model providers to sign the upcoming code of practice. Signatories can expect that the Commission's enforcement efforts will focus on their adherence to the code. In contrast, companies that choose not to sign will have to demonstrate compliance through other means, conduct a gap analysis, and be prepared to provide additional information upon request.

    Finally, the Commission outlined its enforcement approach for the first time, emphasizing a collaborative and proportionate strategy. It anticipates close, informal collaboration with model providers and a proactive stance from those supplying models with systemic risks. My full analysis for MLex.
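
    For a sense of scale, a hardware-based compute estimate of the kind the working document describes can be approximated as GPUs × peak FLOP/s × utilization × wall-clock seconds. In the sketch below, the hardware figures and utilization are invented for illustration; only the 10^22 and 10^25 FLOP thresholds come from the discussion above.

    ```python
    # Back-of-the-envelope, hardware-based training-compute estimate.
    # GPU specs and utilization are illustrative assumptions; only the
    # 10^22 and 10^25 FLOP thresholds come from the post above.
    GPAI_THRESHOLD_FLOP = 1e22      # presumed in scope of the GPAI rules
    SYSTEMIC_THRESHOLD_FLOP = 1e25  # presumed systemic risk

    def training_compute_flop(n_gpus: int, peak_flop_per_s: float,
                              utilization: float, seconds: float) -> float:
        return n_gpus * peak_flop_per_s * utilization * seconds

    # e.g. 1,024 GPUs at ~1e15 FLOP/s peak, 40% utilization, 30 days
    compute = training_compute_flop(1024, 1e15, 0.40, 30 * 24 * 3600)
    print(f"{compute:.2e} FLOP")  # ~1.06e24: above 1e22, below 1e25
    ```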

  • Barbara Li

    Partner at Reed Smith China & IAPP Asia Advisory Board Member & Vice Chair of Cybersecurity Working Group of EU Chamber of Commerce in China

    📢 BREAKING – China Issues Draft #AI #Ethics Rules for Public Consultation

    🚀 Yesterday, 22 August, China's Ministry of Industry and Information Technology (MIIT), along with the Ministry of Science and Technology (MOST), the CAC, and several other national regulators, released the draft Measures for the Administration of Ethics for AI Technological Activities. The consultation will end on 22 September.

    🤖 The draft Measures apply to all AI R&D and technological services in China that may affect human health and safety, personal reputation, environmental protection, public order, or sustainability, covering businesses across industries, healthcare institutions, research organizations, and academics engaged in AI-related activities.

    The Measures set out ethical requirements for AI R&D and services, including:
    • Developing technology for the public good
    • Respecting life, health, and reputation
    • Upholding justice, fairness, and accountability
    • Managing risks responsibly
    • Ensuring compliance with existing laws and regulations

    Entities are encouraged to establish an Ethics Commission responsible for ethics review. For organizations without an internal body, local authorities will create Ethics Service Centres to provide review services. AI technological activities within scope must undergo ethics review, either by an internal Ethics Commission or a local Ethics Service Centre. Reviews will focus on:
    • Fairness, risk control, trust, transparency, and explainability
    • Accountability and liability tracing
    • Qualifications of personnel involved
    • Risk–benefit balance and social value of the AI activity

    Reviews should conclude within 30 days, with outcomes being approval, rectification and resubmission, or rejection. A simplified review is available for low-risk AI activities, such as those comparable to normal daily scenarios or involving immaterial updates to previously approved projects. MIIT and MOST will publish a list of AI activities requiring an expert second review for high-risk activities, such as algorithm models capable of mobilizing public opinion and automated decision-making systems with significant implications for human safety and health. A streamlined review process is available for public emergencies.

    ❓ What's Next? 💡 These Ethics Measures reflect China's pragmatic and agile approach to AI governance. Instead of a sweeping AI law, Chinese regulators are targeting high-risk areas such as #algorithms, #deepfakes, #generativeAI, and AI #labeling. With the Ethics Measures now open for feedback, ethical compliance is expected to become a formal requirement for corporations and institutions operating in China. 🔀 Organizations should closely monitor these developments and adapt their AI strategies and risk management frameworks accordingly.

    #AI #AIgovernance #China #law #ethics #data #privacy #riskmanagement #regulatory #compliance #enforcement #digitaltrust #digitalgovernance

    Picture credit: Freepik.
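
    The review routing the draft Measures describe can be summarized in a few lines. This is a hedged sketch: the track names and the two boolean signals are illustrative assumptions, not terms from the draft text.

    ```python
    # Illustrative routing of an AI activity to an ethics review track,
    # following the tiers described above. All names are assumptions.
    from enum import Enum

    class ReviewTrack(Enum):
        SIMPLIFIED = "simplified review (low risk or immaterial update)"
        STANDARD = "standard ethics review (conclude within 30 days)"
        EXPERT_SECOND = "expert second review (listed high-risk activity)"

    def route_review(is_low_risk: bool, on_high_risk_list: bool) -> ReviewTrack:
        if on_high_risk_list:   # e.g. opinion-mobilizing algorithm models
            return ReviewTrack.EXPERT_SECOND
        if is_low_risk:         # comparable to normal daily scenarios
            return ReviewTrack.SIMPLIFIED
        return ReviewTrack.STANDARD

    print(route_review(is_low_risk=False, on_high_risk_list=True))
    ```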

  • Greeshma .M. Neglur

    SVP | Enterprise AI & Technology Executive | Digital Transformation | Cybersecurity Leader | Financial Services

    𝐏𝐫𝐚𝐜𝐭𝐢𝐜𝐚𝐥 𝐀𝐈 𝐆𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞: 𝐓𝐡𝐞 𝐌𝐢𝐧𝐢𝐦𝐮𝐦 𝐒𝐭𝐚𝐜𝐤 𝐭𝐨 𝐒𝐭𝐚𝐫𝐭 𝐑𝐞𝐬𝐩𝐨𝐧𝐬𝐢𝐛𝐥𝐲

    Most organizations don't fail at AI because they moved too fast. They fail because they moved without a control environment, and by the time audit, legal, or a regulator shows up, the exposure is already baked in.

    𝐓𝐡𝐫𝐞𝐞 𝐋𝐚𝐲𝐞𝐫𝐬
    1. Policies: What the organization requires.
    2. Standards: How requirements become repeatable rules.
    3. Controls: How compliance is actually enforced.

    𝟑 𝐏𝐨𝐥𝐢𝐜𝐢𝐞𝐬. 𝐍𝐨𝐭 𝟑𝟎.
    1. AI Governance & Risk Management Policy: Oversight structure, risk classification, use case intake, lifecycle governance, human oversight requirements.
    2. AI Acceptable Use & Secure Development Policy: What employees can and cannot do with AI. How applications are built, tested, and released.
    3. AI Data, Privacy, Third-Party & Supply Chain Risk Policy: Data sourcing, personal data handling, vendor vetting, AI supply chain controls. Usually written last. Almost always where the real risk lives.

    𝟒 𝐒𝐭𝐚𝐧𝐝𝐚𝐫𝐝𝐬 𝐓𝐡𝐚𝐭 𝐌𝐚𝐤𝐞 𝐏𝐨𝐥𝐢𝐜𝐲 𝐑𝐞𝐚𝐥
    1. AI Use Case Intake & Risk Classification Standard: How use cases are submitted, assessed, risk-tiered, and routed for approval.
    2. AI Application Development Standard: Secure development, testing, explainability, human oversight, monitoring, prompt safety, output validation, change management.
    3. AI Data, Privacy & Security Standard: Data quality, minimization, approved sources, sensitive data handling, privacy reviews, access controls.
    4. AI Third-Party & Supply Chain Risk Standard: Due diligence for external models, AI vendors, datasets, plugins, orchestration frameworks.

    𝐌𝐢𝐧𝐢𝐦𝐮𝐦 𝐂𝐨𝐧𝐭𝐫𝐨𝐥𝐬 𝐁𝐞𝐟𝐨𝐫𝐞 𝐏𝐫𝐨𝐝𝐮𝐜𝐭𝐢𝐨𝐧
    • Formal use case intake with go/no-go review.
    • Risk classification before development begins (see the sketch after this post).
    • Legal and privacy review when personal data is involved.
    • Output validation before model outputs trigger actions.
    • Prompt injection controls.
    • Least-privilege access for agents and autonomous systems.
    • Logging, monitoring, and incident escalation.
    • Vendor due diligence and contract controls.
    • Red teaming before production.

    𝐓𝐡𝐞 𝐅𝐫𝐚𝐦𝐞𝐰𝐨𝐫𝐤𝐬 𝐖𝐨𝐫𝐭𝐡 𝐁𝐮𝐢𝐥𝐝𝐢𝐧𝐠 𝐅𝐫𝐨𝐦
    • NIST AI RMF: Your governance architecture.
    • EU AI Act: Your regulatory compliance lens.
    • GDPR: Your data and privacy design standard.
    • OWASP LLM Top 10: Your security reference.
    • ISO/IEC 42001: Your long-term maturity target.

    AI governance is not about slowing implementation down. It's about making sure that when your initiatives scale, you have something that holds up. Retrofitting governance after the fact is always more expensive. In regulated industries, it can be existential.

    Where is your governance stack today?

    ♻️ Repost this to help your network get started ➕ Follow Greeshma for more

    #AIGovernance #ResponsibleAI #EnterpriseAI
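
    As a companion to the "risk classification before development" control, here is a minimal intake-triage sketch. The three yes/no signals and tier labels are assumptions for illustration; a real standard would use a richer questionnaire.

    ```python
    # Hedged sketch of use-case intake triage: derive a risk tier and a
    # minimum control list from three signals. Tiers and signals are
    # illustrative assumptions, not a prescribed taxonomy.
    def classify_use_case(handles_personal_data: bool,
                          triggers_actions: bool,
                          uses_external_model: bool) -> tuple[str, list[str]]:
        controls = ["use case intake record", "logging and monitoring"]
        if handles_personal_data:
            controls.append("legal and privacy review")
        if uses_external_model:
            controls.append("vendor due diligence and contract controls")
        if triggers_actions:
            controls += ["output validation before actions",
                         "least-privilege access", "red teaming"]
            return "high", controls
        if handles_personal_data or uses_external_model:
            return "medium", controls
        return "low", controls

    tier, required = classify_use_case(True, True, False)
    print(tier, required)
    ```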

  • Luiza Jarovsky, PhD

    Co-founder of the AI, Tech & Privacy Academy (1,400+ participants), Author of Luiza’s Newsletter (94,000+ subscribers), Mother of 3

    🚨 BREAKING: The EU AI Act's Code of Practice for General-Purpose AI has just been published! Here's what you need to know and the next steps:

    1. Background
    If you remember my recent posts, there has been a lot of back and forth about whether the EU AI Act's applicability deadlines would be postponed. One of the main reasons for these discussions was that neither the Code of Practice for GPAI nor the EU AI standards were ready, and companies were complaining that these were essential instruments for compliance. As I wrote before, I disagree that these delays were obstacles to compliance, but in any case, today the Code of Practice for GPAI is FINALLY out.

    2. The purpose of the Code of Practice for GPAI
    According to Article 53 of the EU AI Act, providers of GPAI models may rely on codes of practice to demonstrate compliance with their obligations until an EU AI standard is published. The Code of Practice is voluntary, and if a provider does not adhere to it or does not comply with an EU AI standard, it must demonstrate alternative adequate means of compliance.

    3. What is included in the Code of Practice
    It contains 3 separate chapters, which follow the provisions of the EU AI Act's rules for GPAI:
    - Transparency: offers a user-friendly Model Documentation Form (see the sketch after this post)
    - Copyright: offers practical solutions to implement a policy to comply with EU copyright law
    - Safety and Security: offers concrete practices for managing systemic risks

    4. What's next
    In the following weeks, EU Member States and the EU Commission will assess it. The Code of Practice will still be complemented by additional EU Commission guidelines (to be published in July).

    👉 Download the Code of Practice below.
    👉 Never miss my analyses, curations, and recommendations on AI: join my newsletter's 66,700+ subscribers (link below).
    👉 To learn about the EU AI Act in depth, join the 23rd cohort of my AI Governance Training (September).
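
    As a rough illustration of the Transparency chapter's direction, here is a sketch of the kind of record a Model Documentation Form might collect. The field names are assumptions drawn from the transparency items discussed in this thread, not the Form's actual schema.

    ```python
    # Hedged sketch of a machine-readable model documentation record.
    # Fields are illustrative assumptions, not the official Form schema.
    from dataclasses import dataclass, field

    @dataclass
    class ModelDocumentation:
        model_name: str
        architecture: str
        parameter_count: int
        training_data_provenance: list[str]
        intended_uses: list[str]
        bias_mitigations: list[str] = field(default_factory=list)
        training_compute_flop: float | None = None  # Python 3.10+ syntax
        energy_consumption_kwh: float | None = None

    doc = ModelDocumentation(
        model_name="example-gpai-7b",
        architecture="decoder-only transformer",
        parameter_count=7_000_000_000,
        training_data_provenance=["licensed corpus", "public web crawl"],
        intended_uses=["text generation", "summarization"],
    )
    print(doc.model_name, doc.parameter_count)
    ```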

  • Chris Kraft

    Federal Innovator

    The National Institute of Standards and Technology (NIST) just released an updated version of Managing Misuse Risk for Dual-Use Foundation Models. Comments are being accepted until March 15th.

    Key updates:
    - Detailing Best Practices for Model Evaluations: A new appendix provides an overview of existing approaches to measuring misuse risk.
    - Expanding Domain-Specific Guidelines on Cyber and Chemical and Biological Risk: Two appendices were added, one on chemical and biological misuse risk and a second on cybersecurity misuse risk.
    - Underscoring a Marginal Risk Framework: Clarified the importance of a "marginal risk" framework for assessing and managing risk (see the sketch after this post).
    - Addressing Open Models: Updated to support the guidance's proportional application to, and usefulness for, open model developers.
    - Managing Risk Across the AI Supply Chain: Expanded risk management practices to cover more of the AI supply chain.

    Submit comments to NISTAI800-1@nist.gov or online at regulations.gov.

    Press Release: https://lnkd.in/eMVrdwXF
    Full Publication: https://lnkd.in/eraCAi4q
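
    The "marginal risk" framing reduces to a simple comparison: what matters is the uplift a model adds over harms already feasible with existing tools. A one-function sketch, with invented numbers:

    ```python
    # Hedged sketch of the marginal-risk idea: compare risk with the
    # model against a no-model baseline. The numbers are invented.
    def marginal_risk(risk_with_model: float, baseline_risk: float) -> float:
        """Uplift attributable to the model, floored at zero."""
        return max(0.0, risk_with_model - baseline_risk)

    print(marginal_risk(0.30, 0.25))  # 0.05 of additional risk to manage
    ```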
