AI Governance Practices

Explore top LinkedIn content from expert professionals.

  • View profile for Andreas Horn

    Head of AIOps @ IBM || Speaker | Lecturer | Advisor

    242,205 followers

    McKinsey & Company 𝗮𝗻𝗮𝗹𝘆𝘇𝗲𝗱 𝟭𝟱𝟬+ 𝗲𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝗚𝗲𝗻𝗔𝗜 𝗱𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁𝘀 — 𝗮𝗻𝗱 𝗳𝗼𝘂𝗻𝗱 𝗼𝗻𝗲 𝗰𝗼𝗺𝗺𝗼𝗻 𝘁𝗵𝗿𝗲𝗮𝗱: ⬇️

    One-off solutions don’t scale. The most successful projects take a different path: they use open, modular architectures that enable speed, reuse, and control.
    → Designed for reuse
    → Able to plug in best-in-class capabilities
    → Free from vendor lock-in

    This is the reference architecture McKinsey now recommends — optimized to scale what works while staying compliant. It consists of five core components: ⬇️

    𝟭. 𝗦𝗲𝗹𝗳-𝘀𝗲𝗿𝘃𝗶𝗰𝗲 𝗽𝗼𝗿𝘁𝗮𝗹
    → A secure, compliant “pane of glass” where teams can launch, monitor, and manage GenAI apps.
    → Preapproved patterns, validated capabilities, shared libraries.
    → Observability and cost controls built in.

    𝟮. 𝗢𝗽𝗲𝗻 𝗮𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲
    → Services are modular, reusable, and provider-agnostic.
    → Core functions like RAG, chunking, or prompt routing are shared across apps.
    → Infra and policy as code, built to evolve fast.

    𝟯. 𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗲𝗱 𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗴𝘂𝗮𝗿𝗱𝗿𝗮𝗶𝗹𝘀
    → Every prompt and response is logged, audited, and cost-attributed.
    → Hallucination detection, PII filters, bias audits — enforced by default.
    → LLMs accessed only through a centralized AI gateway (a minimal sketch follows at the end of this post).

    4. 𝗙𝘂𝗹𝗹-𝘀𝘁𝗮𝗰𝗸 𝗼𝗯𝘀𝗲𝗿𝘃𝗮𝗯𝗶𝗹𝗶𝘁𝘆
    → Centralized logging, analytics, and monitoring across all solutions
    → Built-in lifecycle governance, FinOps, and Responsible AI enforcement
    → Secure onboarding of use cases and private data controls
    → Enables policy adherence across infrastructure, models, and apps

    5. 𝗣𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝗼𝗻-𝗴𝗿𝗮𝗱𝗲 𝗨𝘀𝗲 𝗖𝗮𝘀𝗲𝘀
    → Modular setup for user interface, business logic, and orchestration
    → Integrated agents, prompt engineering, and model APIs
    → Guardrails, feedback systems, and observability built into the solution
    → Delivered through the AI Gateway for consistent compliance and scale

    The message is clear: if your GenAI program is stuck, don’t look at the LLM. Look at your platform.

    𝗜 𝗲𝘅𝗽𝗹𝗼𝗿𝗲 𝘁𝗵𝗲𝘀𝗲 𝗱𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁𝘀 — 𝗮𝗻𝗱 𝘄𝗵𝗮𝘁 𝘁𝗵𝗲𝘆 𝗺𝗲𝗮𝗻 𝗳𝗼𝗿 𝗿𝗲𝗮𝗹-𝘄𝗼𝗿𝗹𝗱 𝘂𝘀𝗲 𝗰𝗮𝘀𝗲𝘀 — 𝗶𝗻 𝗺𝘆 𝘄𝗲𝗲𝗸𝗹𝘆 𝗻𝗲𝘄𝘀𝗹𝗲𝘁𝘁𝗲𝗿. 𝗬𝗼𝘂 𝗰𝗮𝗻 𝘀𝘂𝗯𝘀𝗰𝗿𝗶𝗯𝗲 𝗵𝗲𝗿𝗲 𝗳𝗼𝗿 𝗳𝗿𝗲𝗲: https://lnkd.in/dbf74Y9E
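    For illustration, here is a minimal Python sketch of component 3, the centralized AI gateway: every call is logged and cost-attributed, and a basic PII guardrail runs before the prompt reaches any provider. The names (AIGateway, call_model) and the regex are illustrative assumptions, not taken from the McKinsey architecture.

```python
# Minimal sketch of a centralized AI gateway: logging, cost attribution,
# and a simple PII guardrail on every LLM call. Illustrative only.
import re
import time
import uuid
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")

PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # simplistic SSN-like pattern

class AIGateway:
    def __init__(self, model_client, price_per_1k_tokens: float = 0.002):
        self.model_client = model_client        # any provider SDK injected here
        self.price_per_1k = price_per_1k_tokens

    def call_model(self, prompt: str, team: str, use_case: str) -> str:
        request_id = str(uuid.uuid4())
        if PII_PATTERN.search(prompt):
            raise ValueError("Prompt blocked: possible PII detected")  # guardrail
        start = time.time()
        response = self.model_client(prompt)     # provider-agnostic call
        tokens = (len(prompt) + len(response)) // 4  # rough token estimate
        cost = tokens / 1000 * self.price_per_1k
        # Every prompt/response is logged and cost-attributed to a team and use case
        log.info("id=%s team=%s use_case=%s tokens=%d cost=$%.4f latency=%.2fs",
                 request_id, team, use_case, tokens, cost, time.time() - start)
        return response

# Usage (with any callable standing in for the model):
# gateway = AIGateway(model_client=my_llm)
# gateway.call_model("Summarise this contract", team="finance", use_case="report-summary")
```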

  • View profile for Dr. Barry Scannell
    Dr. Barry Scannell is an Influencer

    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of the Board of Irish Museum of Modern Art | PhD in AI & Copyright

    59,869 followers

    I’m so happy to see this! Yesterday, the ISO published a new standard, ISO/IEC 42001:2023, for AI Management Systems. My suspicion is that it will become as important to the AI world as ISO/IEC 27001, arguably the most important standard for information security management systems, became to the security world.

    The standard provides a comprehensive framework for establishing, implementing, maintaining, and improving an artificial intelligence management system within organisations. It aims to ensure responsible AI development, deployment, and use, addressing ethical implications, data quality, and risk management. This set of guidelines is designed to integrate AI management with organisational processes, focusing on risk management and offering detailed implementation controls.

    Key aspects of the standard include performance measurement, emphasising both quantitative and qualitative outcomes, and the importance of AI systems’ effectiveness in achieving intended results. It mandates conformity to requirements and systematic audits to assess AI systems. The standard also highlights the need for a thorough assessment of AI’s impact on society and individuals, stressing data quality to meet organisational needs. Organisations are required to document controls for AI systems and rationalise their decisions, underscoring the role of governance in ensuring performance and conformance. The standard calls for adapting management systems to include AI-specific considerations like ethical use, transparency, and accountability. It also requires continuous performance evaluation and improvement, ensuring AI systems’ benefits and safety.

    ISO/IEC 42001:2023 aligns closely with the EU AI Act. The AI Act classifies AI systems into prohibited and high-risk categories, each with distinct compliance obligations. ISO/IEC 42001:2023’s focus on ethical AI management, risk management, data quality, and transparency aligns with these categories, providing a pathway for meeting the AI Act’s requirements.

    The AI Act’s prohibitions include specific AI systems like biometric categorisation and untargeted scraping for facial recognition; the standard may help guide organisations in identifying and discontinuing such applications. For high-risk AI systems, the AI Act mandates comprehensive risk management, registration, data governance, and transparency, which the ISO/IEC 42001:2023 framework could support. It could assist providers of high-risk AI systems in establishing risk management frameworks and maintaining operational logs, ensuring non-discriminatory, rights-respecting systems. ISO/IEC 42001:2023 may also aid users of high-risk AI systems in fulfilling obligations like human oversight and cybersecurity. It could potentially assist in managing foundation models and General Purpose AI (GPAI), as necessary under the AI Act.

    This new standard offers a comprehensive approach to managing AI systems, aiding organisations in developing AI that respects fundamental rights and ethical standards.

  • View profile for Brij kishore Pandey
    Brij kishore Pandey is an Influencer

    AI Architect & Engineer | AI Strategist

    720,687 followers

    Many engineers can build an AI agent. But designing an AI agent that is scalable, reliable, and truly autonomous? That’s a whole different challenge.

    AI agents are more than just fancy chatbots—they are the backbone of automated workflows, intelligent decision-making, and next-gen AI systems. However, many projects fail because they overlook critical components of agent design.

    So, what separates an experimental AI from a production-ready one? This Cheat Sheet for Designing AI Agents breaks it down into 10 key pillars:

    🔹 AI Failure Recovery & Debugging – Your AI will fail. The question is, can it recover? Implement self-healing mechanisms and stress testing to ensure resilience (see the sketch after this list).

    🔹 Scalability & Deployment – What works in a sandbox often breaks at scale. Containerized workloads and serverless architectures ensure high availability.

    🔹 Authentication & Access Control – AI agents need proper security layers. OAuth, MFA, and role-based access aren’t just best practices—they’re essential.

    🔹 Data Ingestion & Processing – Real-time AI requires efficient ETL pipelines and vector storage for retrieval; structured and unstructured data must work together.

    🔹 Knowledge & Context Management – AI must remember and reason across interactions. RAG (Retrieval-Augmented Generation) and structured knowledge graphs help with long-term memory.

    🔹 Model Selection & Reasoning – Picking the right model isn’t just about LLM size. Hybrid AI approaches (symbolic + LLM) can dramatically improve reasoning.

    🔹 Action Execution & Automation – AI isn’t useful if it just predicts—it must act. Multi-agent orchestration and real-world automation (Zapier, LangChain) are key.

    🔹 Monitoring & Performance Optimization – AI drift and hallucinations are inevitable. Continuous tracking and retraining keep your AI reliable.

    🔹 Personalization & Adaptive Learning – AI must learn dynamically from user behavior. Reinforcement learning from human feedback (RLHF) improves responses over time.

    🔹 Compliance & Ethical AI – AI must be explainable, auditable, and regulation-compliant (GDPR, HIPAA, CCPA). Otherwise, your AI can’t be trusted.

    An AI agent isn’t just a model—it’s an ecosystem. Designing it well means balancing performance, reliability, security, and compliance. The gap between an experimental AI and a production-ready AI is strategy and execution.

    Which of these areas do you think is the hardest to get right?
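    As a concrete illustration of the failure-recovery pillar, here is a minimal Python sketch of a retry-with-backoff wrapper that degrades to a fallback model. The function and parameter names (call_with_recovery, primary, fallback) are assumptions for this example, not taken from the cheat sheet.

```python
# Minimal sketch of AI failure recovery: retry with exponential backoff,
# then degrade to a fallback model instead of crashing. Illustrative only.
import time
import random

def call_with_recovery(prompt, primary, fallback, max_retries=3):
    """Try the primary model; on repeated failure, switch to a fallback."""
    for attempt in range(max_retries):
        try:
            return primary(prompt)
        except Exception as err:  # e.g. timeout, rate limit, malformed output
            wait = (2 ** attempt) + random.random()
            print(f"primary failed ({err!r}), retry {attempt + 1} in {wait:.1f}s")
            time.sleep(wait)
    # Self-healing step: fall back to a simpler/cheaper model
    return fallback(prompt)

# Usage with stand-in callables:
# answer = call_with_recovery("Summarise this ticket", primary=large_model, fallback=small_local_model)
```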

  • View profile for Amanda Bickerstaff
    Amanda Bickerstaff is an Influencer

    Educator | AI for Education Founder | Keynote | Researcher | LinkedIn Top Voice in Education

    90,589 followers

    Common Sense Media recently released a comprehensive risk assessment of AI teacher assistants/lesson planning tools. Their findings reveal that while these tools promise increased productivity and creative support, they’re also creating “invisible influencers” that could fundamentally undermine educational quality. Unlike GenAI foundation model chatbots, these tools are specifically designed for instructional planning and classroom use and are rapidly being adopted across districts.

    Key Concerns from their report:
    • “Invisible Influencers” in Student Learning: AI-generated content directly shapes what students learn through potentially biased perspectives and historical inaccuracies that teachers may miss; evidence also shows these tools suggest different approaches and responses based on student race/gender.
    • “Outsourced Thinking” Problem: Tools make it dangerously easy to push unreviewed AI instructional content straight to classrooms, while novice teachers lack the experience to spot subtle errors and biases.
    • High-Stakes Outputs: IEP and behavior plan generators create official-looking documents that could impact student educational trajectories, even though these plans should be human-generated (and in the case of IEP goals are mandated to be human-generated).
    • Undermining High-Quality Instructional Materials: Without proper integration, these tools fragment learning and can undermine coherent, research-backed curricula.

    Recommendations from the report:
    • Experienced educator oversight required for all AI-generated educational content
    • Clear district policies and guidelines for AI teacher assistant implementation
    • Integration with existing high-quality curricula rather than replacement of established materials
    • Robust teacher training on identifying bias and evaluating AI outputs
    • Careful oversight of real-time AI feedback tools that interact directly with students

    We’d also recommend foundational AI literacy for teachers before they begin using GenAI teacher assistants, so that they are aware of the potential limitations. While AI teacher assistants aren’t inherently problematic, they require the same careful implementation and oversight we’d expect for any tool that directly impacts student learning. The potential for enhanced productivity is real, but so are the risks to educational equity and quality.

    This report underscores the urgent need for GenAI EdTech tool makers to provide evidence of how their tools mitigate these issues, along with evidence-based policies and professional development to help educators navigate AI tools responsibly. All of which underline how important AI Literacy is for the 2025-2026 school year.

    Link in the comments to check out the full report. Also check out our 5 Questions to Ask GenAI EdTech Providers resource in the comments if you are planning to implement any of these tools in your school or district.

    #AIinEducation #ailiteracy #Education #K12 AI for Education

  • View profile for Bertalan Meskó, MD, PhD
    Bertalan Meskó, MD, PhD is an Influencer

    The Medical Futurist, Author of Your Map to the Future, Global Keynote Speaker, and Futurist Researcher

    366,889 followers

    BREAKING! The FDA just released a draft guidance, titled Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations, which aims to provide industry and FDA staff with a Total Product Life Cycle (TPLC) approach for developing, validating, and maintaining AI-enabled medical devices.

    The guidance is important even in its draft stage because it gives more detailed, AI-specific instructions on what regulators expect in marketing submissions and on how developers can control AI bias.

    What’s new in it?
    1) It requests clear explanations of how and why AI is used within the device.
    2) It requires sponsors to provide adequate instructions, warnings, and limitations so that users understand the model’s outputs and scope (e.g., whether further tests or clinical judgment are needed).
    3) It encourages sponsors to follow standard risk-management procedures and stresses that misunderstanding or incorrect interpretation of the AI’s output is a major risk factor.
    4) It recommends analyzing performance across subgroups to detect potential AI bias (e.g., different performance in underrepresented demographics); see the sketch below.
    5) It recommends robust testing (e.g., sensitivity, specificity, AUC, PPV/NPV) on datasets that match the intended clinical conditions.
    6) It recognizes that AI performance may drift (e.g., as clinical practice changes); sponsors are therefore advised to maintain ongoing monitoring, identify performance deterioration, and enact timely mitigations.
    7) It discusses AI-specific security threats (e.g., data poisoning, model inversion/stealing, adversarial inputs) and encourages sponsors to adopt threat modeling and testing (fuzz testing, penetration testing).
    8) It proposes public-facing FDA summaries (e.g., 510(k) Summaries, De Novo decision summaries) to foster user trust and a better understanding of the model’s capabilities and limits.
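    To make recommendations 4 and 5 concrete, here is a minimal Python sketch that computes sensitivity, specificity, PPV, and AUC per demographic subgroup. The column names (subgroup, y_true, y_score) are assumptions for this example and are not prescribed by the draft guidance.

```python
# Minimal sketch of subgroup performance analysis to surface potential AI bias.
# Expects a DataFrame with columns: subgroup, y_true (0/1), y_score (probability).
import pandas as pd
from sklearn.metrics import roc_auc_score, confusion_matrix

def subgroup_report(df: pd.DataFrame, threshold: float = 0.5) -> pd.DataFrame:
    rows = []
    for name, g in df.groupby("subgroup"):
        y_pred = (g["y_score"] >= threshold).astype(int)
        tn, fp, fn, tp = confusion_matrix(g["y_true"], y_pred, labels=[0, 1]).ravel()
        rows.append({
            "subgroup": name,
            "n": len(g),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
            "ppv": tp / (tp + fp) if (tp + fp) else float("nan"),
            # AUC is only defined when both classes are present in the subgroup
            "auc": roc_auc_score(g["y_true"], g["y_score"]) if g["y_true"].nunique() > 1 else float("nan"),
        })
    return pd.DataFrame(rows)

# Large gaps between subgroups (e.g. lower sensitivity for an underrepresented
# group) are the kind of signal the draft guidance asks sponsors to investigate.
```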

  • View profile for Ian Romero

    Ultra runner, family man, COO. I help businesses grow through better systems, stronger teams, and smarter use of technology

    2,703 followers

    Claude.ai just announced that their Microsoft 365 connector is now available on EVERY plan, including free and personal accounts.

    That means ANY of your end users with a free Claude account can now connect it directly to your company’s Microsoft 365 environment and start pulling in emails, files, spreadsheets, whatever they have access to.

    That should make you uncomfortable. Because unless your tenant requires admin approval for third-party app connections, any employee can enable this on their own. No ticket, no approval, no one in leadership even knows it happened. And now sensitive client data is sitting inside a platform you didn’t evaluate, didn’t approve, and don’t control. A public AI model is potentially learning from your sensitive data, and almost definitely storing it.

    This isn’t a Claude problem. Every major AI platform is racing to build connectors into your business tools, and every one of them is a potential data exposure event if you’re not ready.

    Here’s what I’d recommend doing as soon as possible:

    - Lock down third-party app permissions. Require admin approval for all app connections in your Microsoft 365 tenant. If you’re not sure whether this is on, assume it isn’t.
    - Audit your environment. Do you know where your sensitive data lives and who can access it? Most companies find out the hard way that employees are over-permissioned, and AI makes that exponentially more dangerous because it makes finding and extracting data faster than ever.
    - Communicate and educate. Most employees aren’t being reckless; they just don’t know this is a problem. Send a simple message this week: don’t connect any AI tools to company systems without approval. Then start building a real AI use policy, even a one-pager.
    - Review your client agreements. If you handle sensitive client data, your contracts probably don’t address AI processing yet. Close that gap before a client asks about it.

    This isn’t about being anti-AI. Every new AI capability is a new governance question, and most businesses aren’t asking it fast enough. At the same time, it’s imperative that companies start preparing for AI integration, because it is inevitable for those that want to move forward with technology in a meaningful way.

    Have questions? Shoot me a message. Client or not, I’m happy to chat more if I can help!

  • View profile for Barbara C.

    Board & C-suite advisor | AI strategy, growth, transformation | Cloud, IoT, SaaS | Former CMO & MD | Ex-AWS, Orange

    15,097 followers

    Europe just defined how AI must be secured.

    On 15 Jan, the European Telecommunications Standards Institute (ETSI) published a standard, EN 304 223, defining baseline cybersecurity requirements for AI models and systems.
    ➡️ A common set of AI cybersecurity controls, usable across jurisdictions, vendors, and supply chains.

    Why this matters now
    Traditional cybersecurity was built for software & networks. AI changes the attack surface:
    ▫️ training data can be poisoned
    ▫️ models can be manipulated or obfuscated
    ▫️ prompts can be indirectly injected
    ▫️ behaviour can drift in invisible ways
    ➡️ EN 304 223 explicitly names these risks, treating them as security failures.

    How this takes effect
    EN 304 223 is already being pulled into procurement processes, security questionnaires, internal audits, vendor due diligence, and insurance reviews. Under the EU AI Act, high-risk AI systems will need to demonstrate compliance through conformity assessment, either via internal control with robust technical documentation or through assessment by a notified body.
    ➡️ EN 304 223 is the operational “how” that law and auditors will rely on.

    The real breakthrough: lifecycle security
    The standard defines 13 principles and 72 trackable requirements, organised across 5 phases of the AI system lifecycle:
    1️⃣ secure design
    2️⃣ secure development
    3️⃣ secure deployment
    4️⃣ secure maintenance
    5️⃣ secure end of life
    ➡️ Retraining a model = redeploying a system from a security standpoint. AI security becomes a continuous operational discipline.

    Accountability made operational
    EN 304 223 assigns accountability across 3 technical roles:
    ✔️ developers
    ✔️ system operators
    ✔️ data custodians
    ➡️ AI risk lives between teams. This standard makes ownership explicit.

    The target: production AI
    EN 304 223 applies to deep neural networks and GenAI models already embedded in products, services, and operational decisions. Academic or research environments are excluded.
    ➡️ This standard is about AI that is live, scaled, and consequential, particularly in finance, healthcare, and critical infrastructure.

    What “compliance” means
    Complying with legal, audit, procurement, and insurance expectations using EN 304 223 as evidence: mapping controls across the lifecycle and ownership across roles.

    What Boards and executives should do now
    1️⃣ Mandate an AI inventory: what AI is live, where, doing what, using which data pipelines, supplied by whom (a minimal sketch follows below).
    2️⃣ Assign named accountability across the lifecycle: align to the standard’s role logic per system.
    3️⃣ Require an AI security evidence pack per high-impact system, mapped across its lifecycle.
    4️⃣ Decide your assurance route early: for high-risk systems, plan for internal control vs notified body assessment.

    The bigger signal
    The EU is turning AI security into auditable infrastructure. Trustworthy AI is becoming a standard of execution. For companies operating globally, proof of AI security is becoming the baseline.

    #AI #GenAI #AIGovernance #AISecurity #Boardroom
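    As a purely illustrative sketch of board actions 1 to 3, the record below names an accountable owner per role and tracks security evidence per lifecycle phase. The field names and example values are assumptions for this example, not taken from EN 304 223 itself.

```python
# Minimal sketch of an AI system inventory entry: named owners per role,
# evidence tracked per lifecycle phase. Illustrative structure only.
from dataclasses import dataclass, field

PHASES = ["design", "development", "deployment", "maintenance", "end_of_life"]
ROLES = ["developer", "system_operator", "data_custodian"]

@dataclass
class AISystemRecord:
    name: str
    business_use: str
    data_pipelines: list[str]
    supplier: str
    owners: dict[str, str]  # role -> named person or team
    evidence: dict[str, list[str]] = field(
        default_factory=lambda: {p: [] for p in PHASES}
    )

    def missing_evidence(self) -> list[str]:
        """Phases with no security evidence yet - gaps to close before assessment."""
        return [p for p, docs in self.evidence.items() if not docs]

record = AISystemRecord(
    name="credit-scoring-assistant",
    business_use="loan pre-screening",
    data_pipelines=["crm_exports", "bureau_feed"],
    supplier="internal + hosted model API",
    owners={"developer": "ML Platform", "system_operator": "Risk Ops", "data_custodian": "Data Office"},
)
record.evidence["design"].append("threat_model_v2.pdf")
print(record.missing_evidence())  # phases still lacking an evidence pack
```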

  • View profile for Nina Fernanda Durán

    Ship AI to production, here’s how

    58,857 followers

    Don’t let your AI project die in a notebook. You don’t need more features. You need structure. This is the folder setup that actually ships from day one.

    📁 𝗧𝗵𝗲 𝗳𝗼𝗹𝗱𝗲𝗿 𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲 𝘁𝗵𝗮𝘁 𝘄𝗼𝗿𝗸𝘀
    Forget monolithic scripts. You need this:

    /config
    🔹YAML files for models, prompts, logs
    🔹Config lives outside the code, always

    /src
    🔹Modular logic: llm/, utils/, handlers/
    🔹Clean, testable, scalable from day one

    /data
    🔹Cached outputs, embeddings, prompt responses
    🔹Cut latency + save on API costs instantly

    /notebooks
    🔹For testing, analysis, and iteration
    🔹Never pollute your main codebase again

    𝗪𝗵𝗮𝘁 𝘁𝗵𝗶𝘀 𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲 𝘀𝗼𝗹𝘃𝗲𝘀
    ▪️Prompt versioning is built in
    ▪️Rate limiting and caching come standard
    ▪️Error handling is modular
    ▪️Experiments stay reproducible
    ▪️Deployment is one Dockerfile away

    𝗕𝗲𝘀𝘁 𝗽𝗿𝗮𝗰𝘁𝗶𝗰𝗲𝘀 𝗯𝗮𝗸𝗲𝗱 𝗶𝗻
    1. Prompts are versioned by default
    ▪️Stored in prompt_templates.yaml + templates.py (see the sketch below)
    ▪️Track, test, roll back
    2. Rate limiting is pre-integrated
    ▪️rate_limiter.py stops API overloads and surprise bills
    3. Caching is plug-and-play
    ▪️Duplicate calls get stored in /data/cache
    ▪️Cut costs by 70% on day one
    4. Each module does one thing only
    ▪️Models in llm/, logs in utils/, errors in handlers/
    ▪️No sprawl
    5. Notebooks are safely isolated
    ▪️Run tests and explorations in prompt_testing.ipynb
    ▪️Nothing leaks into production logic

    ⚙️ Clone the GitHub template below (link in the first comment). This structure ships faster, costs less, and scales without rewrites.

    ------------
    ⚡I’m Nina. I build with AI and share how it’s done weekly.

    #aiagents #softwaredevelopment #MCP #genai #promptengineering
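    As one possible illustration of best practice 1, here is a minimal templates.py-style sketch that loads named, versioned prompts from a YAML file. The YAML layout and function names are assumptions for this example, not the contents of the linked GitHub template.

```python
# Minimal sketch of versioned prompts: config/prompt_templates.yaml + templates.py.
# Illustrative only; adjust the layout to your own repository.
from pathlib import Path
import yaml  # pip install pyyaml

# config/prompt_templates.yaml might look like:
# summarize:
#   v1: "Summarise the following text: {text}"
#   v2: "Summarise the following text in 3 bullet points: {text}"

def load_prompt(name: str, version: str, path: str = "config/prompt_templates.yaml") -> str:
    templates = yaml.safe_load(Path(path).read_text())
    return templates[name][version]

def render(name: str, version: str, **kwargs) -> str:
    """Fill a named, versioned template - easy to test, track, and roll back."""
    return load_prompt(name, version).format(**kwargs)

# Usage: prompt = render("summarize", "v2", text=document)
```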

  • View profile for Nick Tudor

    CEO/CTO & Co-Founder, Whitespectre | Advisor | Investor

    13,871 followers

    AI success isn’t just about innovation - it’s about governance, trust, and accountability. I've seen too many promising AI projects stall because these foundational policies were an afterthought, not a priority. Learn from those mistakes.

    Here are the 16 foundational AI policies that every enterprise should implement:

    ➞ 1. Data Privacy: Prevent sensitive data from leaking into prompts or models. Classify data (Public, Internal, Confidential) before AI usage.
    ➞ 2. Access Control: Stop unauthorized access to AI systems. Use role-based access and least-privilege principles for all AI tools.
    ➞ 3. Model Usage: Ensure teams use only approved AI models. Maintain an internal “model catalog” with ownership and review logs.
    ➞ 4. Prompt Handling: Block confidential information from leaking through prompts. Use redaction and filters to sanitize inputs automatically (see the sketch after this list).
    ➞ 5. Data Retention: Keep your AI logs compliant and secure. Define deletion timelines for logs, outputs, and prompts.
    ➞ 6. AI Security: Prevent prompt injection and jailbreaks. Run adversarial testing before deploying AI systems.
    ➞ 7. Human-in-the-Loop: Add human oversight to avoid irreversible AI errors. Set approval steps for critical or sensitive AI actions.
    ➞ 8. Explainability: Justify AI-driven decisions transparently. Require “why this output” traceability for regulated workflows.
    ➞ 9. Audit Logging: Without logs, you can’t debug or prove compliance. Log every prompt, model, output, and decision event.
    ➞ 10. Bias & Fairness: Avoid biased AI outputs that harm users or breach laws. Run fairness testing across diverse user groups and use cases.
    ➞ 11. Model Evaluation: Don’t let “good-looking” models fail in production. Use pre-defined benchmarks before deployment.
    ➞ 12. Monitoring & Drift: Models degrade silently over time. Track performance drift metrics weekly to maintain reliability.
    ➞ 13. Vendor Governance: External AI providers can introduce hidden risks. Perform security and privacy reviews before onboarding vendors.
    ➞ 14. IP Protection: Protect internal IP from external model exposure. Define what data cannot be shared with third-party AI tools.
    ➞ 15. Incident Response: Every AI failure needs a containment plan. Create a “kill switch” and escalation playbook for quick action.
    ➞ 16. Responsible AI: Ensure AI is built and used ethically. Publish internal AI principles and enforce them in reviews.

    AI without policy is chaos. Strong governance isn’t bureaucracy - it’s your competitive edge in the AI era.

    🔁 Repost if you're building for the real world, not just connected demos.
    ➕ Follow Nick Tudor for more insights on AI + IoT that actually ship.
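    Here is a minimal Python sketch of how policies 4 and 9 might look in code: regex-based redaction before a prompt leaves the organisation, plus an append-only audit log entry per call. The patterns, file name, and function names are illustrative assumptions; production systems would use dedicated DLP/PII tooling.

```python
# Minimal sketch of prompt sanitization (policy 4) and audit logging (policy 9).
# Illustrative only - real deployments need proper PII/DLP detection.
import re
import json
import hashlib
from datetime import datetime, timezone

REDACTIONS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card":  re.compile(r"\b\d{13,16}\b"),  # simplistic: plain digit runs only
}

def sanitize(prompt: str) -> str:
    """Replace obvious sensitive patterns before the prompt reaches any model."""
    for label, pattern in REDACTIONS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

def audit_log(user: str, model: str, prompt: str, output: str, path: str = "audit.jsonl"):
    """Append one auditable record per AI call: who, which model, when, what (hashed)."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model": model,
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),  # hash, not raw text
        "output_chars": len(output),
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Usage: clean = sanitize(raw_prompt); reply = llm(clean); audit_log("jdoe", "approved-model-v1", clean, reply)
```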

  • View profile for Peter Slattery, PhD

    MIT AI Risk Initiative | MIT FutureTech

    68,436 followers

    "This white paper offers a comprehensive overview of how to responsibly govern AI systems, with particular emphasis on compliance with the EU Artificial Intelligence Act (AI Act), the world’s first comprehensive legal framework for AI. It also outlines the evolving risk landscape that organizations must navigate as they scale their use of AI. These risks include: ▪ Ethical, social, and environmental risks – such as algorithmic bias, lack of transparency, insufficient human oversight, and the growing environmental footprint of generative AI systems. ▪ Operational risks – including unpredictable model behavior, hallucinations, data quality issues, and ineffective integration into business processes. ▪ Reputational risks – resulting from stakeholder distrust due to errors, discrimination, or mismanaged AI deployment. ▪ Security and privacy risks – encompassing cyber threats, data breaches, and unintended information disclosure. To mitigate these risks and ensure AI is used responsibly, in this white paper we propose a set of governance recommendations, including: ▪ Ensuring transparency through clear communication about AI systems’ purpose, capabilities, and limitations. ▪ Promoting AI literacy via targeted training and well-defined responsibilities across functions. ▪ Strengthening security and resilience by implementing monitoring processes, incident response protocols, and robust technical safeguards. ▪ Maintaining meaningful human oversight, particularly for high-impact decisions. ▪ Appointing an AI Champion to lead responsible deployment, oversee risk assessments, and foster a safe environment for experimentation. Lastly, this white paper acknowledges the key implementation challenges facing organizations: overcoming internal resistance, balancing innovation with regulatory compliance, managing technical complexity (such as explainability and auditability), and navigating a rapidly evolving and often fragmented regulatory landscape" Agata Szeliga, Anna Tujakowska, and Sylwia Macura-Targosz Sołtysiński Kawecki & Szlęzak
