Recent Global Developments in AI Policy


Summary

Recent global developments in AI policy refer to the rapid evolution of laws, regulations, and oversight frameworks that govern how artificial intelligence is designed, deployed, and managed across countries. As AI takes a central role in business and daily life, governments worldwide are creating new rules to address risks like bias, data misuse, and lack of accountability, aiming to make AI safer, fairer, and more trustworthy for everyone.

  • Understand local rules: Review how new AI regulations in your region—such as the EU AI Act, US executive orders, or China’s mandatory controls—apply to your organization, since these laws now carry real legal and business consequences.
  • Prioritize transparency: Build processes that document how AI systems make decisions, manage data, and handle risks, so you can clearly explain and defend your practices to regulators, customers, and partners.
  • Prepare for audits: Establish regular monitoring, bias checks, and risk assessments for your AI systems to stay compliant and earn trust in a landscape where enforcement and public scrutiny are increasing worldwide.
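The "bias checks" called for above can be made concrete with a minimal, illustrative sketch: a demographic-parity audit that compares approval rates across groups and flags the system when the gap is too wide. The sample data, group labels, and the 10-point tolerance are hypothetical assumptions for illustration, not thresholds taken from any of the regulations discussed here.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Approval rate per demographic group. decisions: 1 = approved, 0 = denied."""
    totals, approved = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        approved[g] += d
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions, groups):
    """Largest difference in approval rates between any two groups."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Illustrative audit run: flag the system if the gap exceeds 10 points.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = parity_gap(decisions, groups)
print(f"parity gap: {gap:.2f}, flagged: {gap > 0.10}")  # → parity gap: 0.50, flagged: True
```

In a real audit program this check would run on logged production decisions at a regular cadence, with the tolerance and group definitions set by counsel to match the applicable law.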
Summarized by AI based on LinkedIn member posts
  • Peter Slattery, PhD

    MIT AI Risk Initiative | MIT FutureTech

    68,443 followers

    "What is the Global Landscape of AI Regulation? Between new laws & revoked orders, the landscape of #AIRegulation is shifting quickly. Last week the US House passed a bill that could ban all state AI laws for the next decade, underscoring the urgent need to clarify what "AI regulation" actually means & to develop analytical tools that resist political shifts. We are excited to share that our paper, a joint collaboration between Stanford University and Harvard University researchers, introduces a taxonomy to capture the global landscape of AI regulation. With co-authors Shira Gur-Arieh, Tom Zick, PhD & Kevin Klyman, we analyze emerging AI regulatory frameworks across five early movers – the EU, US, China, Canada, and Brazil – to identify patterns, divergences & blind spots. The taxonomy illustrates the breadth & depth of AI regulatory approaches by analyzing key metrics, including technology- or application-focused rules, ex ante precautions or ex post liabilities, horizontal or sectoral regulatory coverage, maturity of the digital legal landscape, enforcement mechanisms & level of stakeholder participation. To democratize our findings, we collaborated with designers Vikramaditya Sharma, Steven Morse & Tanil Raif to translate dense legal texts into accessible outputs.

    Key takeaways:
    1️⃣ We must clarify the term "AI regulation." The term is used ambiguously to describe both binding legal frameworks & voluntary industry guidelines. Lines are often strategically blurred between hard law (AI regulation) & soft law (AI policy). Such semantic ambiguity can mislead public expectations, create a false sense of protection & open the door to regulatory capture.
    2️⃣ Innovation vs. regulation is a false dichotomy. China's experience shows it is possible to enforce mandatory safeguards while continuing to develop cutting-edge models like DeepSeek. While the intentions behind Chinese AI regulation differ from Western ones, for example to control political dissent, the coexistence of strict regulation & rapid innovation proves that the two are not mutually exclusive. Countries can lead the AI arms race while having legally binding safety requirements.
    3️⃣ Under the same umbrella term, not all AI regulations are equal. Some frameworks are more comprehensive than others. Hybrid AI regulations – combining both ex ante & ex post mechanisms and technology- & application-based rules – address societal harms and national security risks, while imposing obligations before and after deployment.
    4️⃣ Civic engagement remains a blind spot. There is little data on whether civic consultations translate into meaningful, legal outcomes – or are merely performative."

    Good work from Sacha Alanoca (who wrote the above summary) and Berkman Klein Center for Internet & Society at Harvard University.

  • Dr. Barry Scannell

    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of the Board of Irish Museum of Modern Art | PhD in AI & Copyright

    59,869 followers

    The Financial Times has reported that Brussels is preparing a tougher 2026 enforcement push under the Digital Markets Act and Digital Services Act, with Google, Meta, Apple and X squarely in view. It also reported that the Trump administration is threatening retaliation, including tariffs and visa bans, over what it frames as European “censorship”.

    The DMA and DSA were built to curb platform dominance and to force accountability for illegal content and systemic risks. But enforcement now overlaps with AI in practice: recommendation systems, generative content, manipulative ad targeting, and algorithmic amplification. In fact, the AI Office in Europe is meant to take control of DSA AI enforcement under the proposed Digital Omnibus. If Brussels follows through, the effect will be to push global platforms towards EU-style governance controls even outside Europe.

    Washington’s response is the counter-model. Rather than argue over the substance of European laws, the Trump administration is threatening economic and diplomatic costs for applying them. The result is a new reality for boards and general counsel: AI compliance is now inseparable from geopolitical exposure. You may comply perfectly and still be caught in retaliation politics.

    While Europe and the US trade blows, China is quietly opening a different front. Beijing has released draft AI safety rules aimed at curbing suicide, self-harm and violence content, but with a telling addition: restrictions on “emotional manipulation”, including so-called “emotional traps” and false promises to users. The regulatory idea here is psychological safety by design. China is treating emotionally persuasive AI as a consumer harm category, akin to gambling or online addiction. That framing will not stay in China: Western regulators can reach similar outcomes through product safety, consumer law, youth protection and liability doctrines without passing a single “AI companion statute”.

    India is building another path. The Economic Times reported that the central government has asked states to submit proposals for AI Centres of Excellence under the IndiaAI Mission, explicitly aimed at strengthening AI capability and deployment. In Rajasthan, officials will unveil an AI-ML Policy 2026 next week, backed by a dedicated AI data centre in Jaipur. This is governance through capacity, procurement and infrastructure, not headline regulation.

    Three conclusions follow. First, the global AI rulebook is fragmenting into enforcement-first Europe, control-and-safety China, and capacity-and-deployment India. Second, AI regulation is increasingly a trade and foreign policy instrument, not merely a domestic consumer protection issue. Third, the next wave of obligations will be operational: disclosure, intervention protocols, logging, and systemic-risk mitigation that regulators can measure.

  • Kevin Fumai

    Asst. General Counsel @ Oracle | AI Governance

    35,686 followers

    So much happens so quickly in #AIgovernance that I’ve decided to launch a Month in Review. This will only spotlight the key developments that should be on your radar. With that, here’s my Top 10 for January:

    ▶️ The first International AI Safety Report was published. It synthesizes the state of scientific understanding of general-purpose AI, with a focus on managing its current and emerging risks. It’s a must-read filled with technical rigor, balanced policy perspectives, and tangible recommendations. 🔗 https://lnkd.in/e7vupCba
    ▶️ President Trump started the US down a new path by revoking the foundational 2023 executive order and directing his administration to develop an AI action plan within 180 days. The National AI Advisory Committee promptly provided a 10-point framework. 🔗 https://lnkd.in/ehzErwiK (EO) 🔗 https://lnkd.in/exNjVb5y (NAIAC)
    ▶️ The US Copyright Office released a report on the copyrightability of AI-generated works, with nine conclusions or recommendations (and significant supporting research). 🔗 https://lnkd.in/eJhzRNfV
    ▶️ DeepSeek launched R1, captured attention, created confusion, and sparked concerns. And the global gyrations (and governance implications) are just beginning. 🔗 https://lnkd.in/eHNGQqtM
    ▶️ The EU AI Office unveiled a draft template that would require GPAI model providers to disclose a “sufficiently detailed summary” of the data used to train their models, including sources. 🔗 https://lnkd.in/e3rz8Zpi
    ▶️ California's Attorney General issued AI advisories informing consumers of their rights and companies of their obligations under existing law. This theme continues to resonate around the world, with many other regulators offering similar reminders. 🔗 https://lnkd.in/eFyazZDq
    ▶️ The US FTC finalized a settlement with IntelliVision over claims related to its facial recognition software. While not expressly tied to Operation AI Comply, the case serves as another example of how existing laws apply to AI and how regulatory enforcement will likely progress. 🔗 https://lnkd.in/efV3T5u6
    ▶️ The Netherlands updated its AI impact assessment template, offering a new glimpse into the EU AI Act requirement. 🔗 https://lnkd.in/eURuYdKK
    ▶️ The US FDA proposed guidelines for AI-enabled medical devices and drug development. While not yet finalized, they signal support for innovation so long as rigorous scientific and regulatory standards are satisfied. 🔗 https://lnkd.in/e9eNVrXB (devices) 🔗 https://lnkd.in/epN64-6q (drugs)
    ▶️ The World Economic Forum released an “Industries in the Intelligent Age” Series, with detailed snapshots of AI’s applications and best practices across seven sectors. 🔗 https://lnkd.in/evRFN7ZB

  • Benjamin Cedric Larsen, PhD

    Director of Programs, Frontier Model Forum | Global AI Governance & Policy

    9,491 followers

    I'm thrilled to announce the release of my latest article published by The Brookings Institution, co-authored with Sabrina Küspert, titled "Regulating General-Purpose AI: Areas of Convergence and Divergence across the EU and the US."

    🔍 Key Highlights:

    EU's Proactive Approach to AI Regulation:
    - The EU AI Act introduces binding rules specifically for general-purpose AI models.
    - The creation of the European AI Office ensures centralized oversight and enforcement, aiming for transparency and systemic risk management across AI applications.
    - This comprehensive framework underscores the EU's commitment to fostering innovation while safeguarding public interests.

    US Executive Order 14110: A Paradigm Shift in AI Policy:
    - The Executive Order marks the most extensive AI governance strategy in the US, focusing on the safe, secure, and trustworthy development and use of AI.
    - By leveraging the Defense Production Act, it mandates reporting and adherence to strict guidelines for dual-use foundation models, addressing potential economic and security risks.
    - The establishment of the White House AI Council and NIST's AI Safety Institute represents a coordinated effort to unify AI governance across federal agencies.

    Towards Harmonized International AI Governance:
    - Our analysis reveals both convergence and divergence in the regulatory approaches of the EU and the US, highlighting areas of potential collaboration.
    - The G7 Code of Conduct on AI, a voluntary international framework, is viewed as a crucial step towards aligning AI policies globally, promoting shared standards and best practices.
    - Even when domestic regulatory approaches diverge, this collaborative effort underscores the importance of international cooperation in managing the rapid advancements in AI technology.

    🔗 Read the Full Article Here: https://lnkd.in/g-jeGXvm

    #AI #AIGovernance #EUAIAct #USExecutiveOrder #AIRegulation

  • Sumeet Agrawal

    Vice President of Product Management

    9,696 followers

    AI is not unregulated anymore. It’s becoming one of the most governed technologies in the world. And most businesses are not ready for it. Because AI is no longer experimental - it’s making real decisions in hiring, finance, healthcare, and security. Here’s what every business needs to understand 👇

    Why AI regulation matters: Bias. Data misuse. Lack of accountability. These aren’t technical issues anymore - they’re legal and business risks.

    The global shift: Governments are moving fast with structured frameworks. Risk-based classification. Transparency requirements. Clear accountability. This is no longer optional.

    Key regulations shaping AI globally:
    - EU AI Act (Europe): Risk-based AI classification. High-risk systems require strict compliance. Some use cases are banned entirely.
    - GDPR (Europe): User consent. Data protection. Right to explanation. Privacy is now a design requirement.
    - NIST AI Framework (US): A practical approach to managing AI risks across the lifecycle. Helps companies operationalize governance early.
    - Executive Orders (US): Focus on safety testing, responsible deployment, and fairness in AI systems. Signals stricter laws ahead.
    - China AI Regulations: Strict centralized control. Mandatory algorithm registration. Strong enforcement and compliance checks.
    - Singapore AI Model: Flexible, business-friendly governance focused on transparency, explainability, and accountability.
    - OECD AI Principles: Global baseline for AI policy - human-centered, fair, and accountable systems.
    - ISO/IEC Standards: Standardizing AI practices globally - risk management, lifecycle governance, and reliability.
    - Algorithmic Accountability Laws: Bias audits. Risk assessments. Documentation. Businesses must prove their AI is fair.
    - Global Data Protection Laws: GDPR, CCPA, DPDP - data compliance is now core to AI systems.

    What businesses must do now: AI governance is no longer a technical add-on. It’s a core business function.
    → Build internal governance frameworks
    → Ensure transparency and accountability
    → Implement monitoring, audits, and documentation

    💡 The big reality: AI is no longer unregulated innovation. It’s a regulated system with global oversight. The companies that win won’t be the fastest. They’ll be the most trusted. Because the future belongs to businesses that build compliant, responsible, and trustworthy AI systems.

  • Eamonn O'Brien-Strain

    Engineering Manager, Google Search | AI personalization safety and user controls

    2,092 followers

    The Center for AI and Digital Policy recently published the Artificial Intelligence and Democratic Values 2026 (CAIDP Index), a comprehensive evaluation of 90 countries across 12 dimensions. The report is an excellent resource. To add another layer of analysis, I wanted to explore how the 12 dimensions cluster together. I ran a quick statistical breakdown (principal component analysis) to summarize these dimensions into two key axes, which are displayed in the attached graph. Here are the two dominant themes I found:

    Y-Axis: Enforceability. Countries towards the top demonstrate more domestic regulatory enforceability (hard laws on algorithmic transparency, dedicated oversight bodies, and written domestic policies). Countries towards the bottom lean heavily into international treaties and pledges (like the CoE Treaty and OECD principles), potentially lagging on producing hard domestic enforcement agencies of their own.

    X-Axis: Democratic Governance. This axis captures elements like human rights compliance and the endorsement of major international alignment frameworks that represent the global consensus on AI governance. Countries towards the right are more likely to be participating in international AI democratic treaties and have broad human rights frameworks in place.

    This view highlights an important tension in global AI policy: a strong international consensus on ethical goals (X-axis) versus a developing domestic capacity for enforcement (Y-axis). https://lnkd.in/gcYqNK8Q
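As a rough illustration of the analysis described in this post, here is a minimal PCA sketch. The score matrix is randomly generated as a stand-in for the real CAIDP Index data (90 countries × 12 dimensions), so the resulting axes and variance shares are illustrative only, not the report's findings.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical placeholder for the 90-country x 12-dimension index scores.
scores = rng.uniform(0, 10, size=(90, 12))

# Standardize each dimension (zero mean, unit variance) before PCA.
X = (scores - scores.mean(axis=0)) / scores.std(axis=0)

# PCA via SVD: rows of Vt are the principal directions; project onto the
# top two to get the two plotted axes (e.g. "enforceability" vs
# "democratic governance" in the post's interpretation).
U, S, Vt = np.linalg.svd(X, full_matrices=False)
axes = X @ Vt[:2].T                     # shape (90, 2): one point per country
explained = (S**2 / (S**2).sum())[:2]   # variance share captured by each axis

print(axes.shape)
```

Interpreting the axes then means inspecting `Vt[0]` and `Vt[1]` to see which of the 12 dimensions load most heavily on each component, which is how labels like "Enforceability" would be assigned.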

  • Tristan Ingold

    AI Governance at Meta

    5,867 followers

    On 26 December 2024, the South Korean National Assembly approved and adopted the AI Basic Act. As of this morning (Jan 22), the framework has taken effect, marking the establishment of the world's first law focused on enforcing safety requirements on high-performing or 'Frontier' AI systems. For GRC and security professionals, this marks another significant addition to global AI regulation. Here are my top takeaways:

    1️⃣ An AI Safety Institute is on the way. South Korea is setting up its own institute to evaluate high-performance AI systems, joining similar efforts in the UK and US. They’re also establishing a Presidential Council on National AI Strategy, which shows a serious, long-term commitment to governance.

    2️⃣ The grace period offers a valuable runway. The law includes a one-year period focused on “guidance, consultation, and education,” with no penalties yet. This gives teams a key opportunity to get their AI inventories organized and prepare for compliance before any fines kick in.

    3️⃣ The approach to risk is a little different. Instead of focusing on high-risk applications like the EU AI Act does (think healthcare or hiring), South Korea’s law sets technical thresholds such as cumulative training computation to decide what’s covered. This ‘frontier-first’ mindset targets the most powerful models directly.

    For multinational companies, this definitely adds complexity. From where I stand, the most effective way to navigate these evolving requirements may be to build your governance program to the highest standard currently available, which in my opinion is the EU AI Act. This approach helps cover multiple regulatory environments at once. I’m interested to hear how teams are approaching this ‘third pillar’ in global AI rules. Always glad to connect and share what I’ve learned. 🤝 I'll drop some sources in the comments for folks to read through if you'd like to learn more!

    #AI #AIGovernance #AIRisk #GlobalAI #AIRegulation #AISafety
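A compute-based coverage test like the one described in the post above can be sketched in a few lines. The 1e26 FLOP threshold below is a hypothetical placeholder for illustration; the statute's actual cutoff is left to implementing regulation and is not stated in the post.

```python
def exceeds_compute_threshold(training_flop: float,
                              threshold_flop: float = 1e26) -> bool:
    """Return True if a model's cumulative training compute crosses the
    coverage threshold. The 1e26 FLOP default is a made-up placeholder,
    not the figure from the South Korean AI Basic Act."""
    return training_flop >= threshold_flop

# A model trained with ~5e25 FLOP falls below this placeholder cutoff;
# one trained with 2e26 FLOP would be covered.
print(exceeds_compute_threshold(5e25))  # False under the placeholder
print(exceeds_compute_threshold(2e26))  # True under the placeholder
```

The practical point for GRC teams is that coverage under a frontier-first law turns on a measurable training metric, so compliance tooling needs to track cumulative compute per model, not just the application domain.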
