Ethical Guidelines for Military Technology Deployment


  • Marie-Doha Besancenot

    Senior advisor for Strategic Communications, Cabinet of 🇫🇷 Foreign Minister; #IHEDN, 78e PolDef

    ✈️ 🇪🇺 « Trustworthy AI in Defence »: The European Way. 🗞️ The European Defence Agency’s White Paper is out! At a time when global powers are racing to develop and deploy AI-enabled defence capabilities, the European way pairs tech innovation with ethical responsibility, operational effectiveness with legal compliance, and strategic autonomy with respect for human dignity and democratic values. 🔹 AI in defence must be legally compliant, ethically sound, technically robust, and societally acceptable.
    1. 🤝🏻 Principles of Trustworthiness 🔹 The foundational principles for trustworthy AI in defence are accountability, reliability, transparency, explainability, fairness, privacy, and human oversight. They are not optional but integral to the legitimacy of AI systems used by European armed forces.
    2. Ethical and Legal Compliance 🔹 Europe’s commitment is to effective military capabilities but also to a rules-based international order. The EU explicitly rejects the idea that technological advancement justifies the erosion of ethical norms. 🔹 Ethical review mechanisms, institutional safeguards, and alignment with #EU legal frameworks form a legal-ethical backbone, ensuring that trustworthiness is a practical requirement embedded into every phase of AI development and deployment.
    3. Risk Assessment & Mitigation 🔹 The EU’s precautionary principle calls for rigorous and ongoing risk assessments of AI systems, including risks related to technical failures, misuse, bias, and unintended escalation in operational contexts, so that harm is anticipated before it materializes and systems ship with built-in safeguards. 🔹 Risk mitigation is not only a technical task but an ethical and strategic imperative in high-stakes domains (targeting, threat detection, autonomous mobility).
    4. 👁️ Human Oversight & Control 🔹 The EU rejects fully autonomous weapon systems operating without human intervention in critical functions such as the use of force. The Paper calls for clear human-in-the-loop models, where operators retain oversight, intervention capability, and accountability (a minimal sketch follows this post). This safeguards democratic accountability and operational reliability, ensuring no algorithm makes life-and-death decisions.
    5. Transparency and Explainability 🔹 Transparent #AI systems, not black-box models: decision-making processes must be understandable by users and traceable by designers. This is key for after-action reviews, audits, and compliance, and the Paper takes a strong stance on explainability.
    6. European Cooperation & Standardization 🔹 Enhanced cooperation and harmonization in defence AI: shared definitions and frameworks to ensure interoperability, avoid duplication, and promote a common culture of responsibility. 🔹 Joint work on certification processes, training, and testing environments.
    7. Continuous Monitoring and Evaluation 🔹 Ongoing monitoring, validation, and recalibration of AI tools throughout their deployment: « trustworthiness must be maintained, not assumed ». The European way is to lead not by imitating others’ race toward automation at any cost, but by demonstrating that security, innovation, and values can go hand in hand.
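
    The human-in-the-loop model in point 4 can be illustrated in code. The following is a minimal Python sketch, not taken from the White Paper: the gate class, field names, and logging format are all assumptions made for illustration. The structural point it demonstrates is that the system only proposes, an identified operator decides, and both the proposal’s rationale and the decision are logged so that after-action reviews and audits (points 5 and 7) have a trace.

    ```python
    import logging
    from dataclasses import dataclass
    from enum import Enum

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("hitl_gate")

    class Decision(Enum):
        APPROVE = "approve"
        REJECT = "reject"

    @dataclass
    class ProposedAction:
        action_id: str
        description: str   # human-readable summary shown to the operator
        rationale: str     # the system's explanation, needed for explainability
        confidence: float  # model confidence in [0, 1]

    class HumanInTheLoopGate:
        """No critical action executes without an explicit operator decision."""

        def __init__(self, operator_id: str):
            self.operator_id = operator_id

        def review(self, action: ProposedAction, decision: Decision) -> bool:
            # Log every proposal and decision so audits can trace who approved
            # what, and on the basis of which machine-supplied rationale.
            log.info("action=%s operator=%s decision=%s confidence=%.2f rationale=%s",
                     action.action_id, self.operator_id, decision.value,
                     action.confidence, action.rationale)
            return decision is Decision.APPROVE

    # Usage: the system may only *propose*; the operator decides.
    gate = HumanInTheLoopGate(operator_id="op-117")
    proposal = ProposedAction(action_id="A-042",
                              description="Flag vehicle at grid NV 4582 for observation",
                              rationale="Silhouette match, confidence 0.62",
                              confidence=0.62)
    if gate.review(proposal, Decision.REJECT):
        print("Action authorized by operator")
    else:
        print("Action withheld: no operator approval")
    ```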

  • Davide Maniscalco

    Head of Legal, Regulatory & Data Privacy Officer | Special Adv DFIR | Auditor ISO/IEC 27001 | 27701 | 42001 | CBCP | Italian Army (S.M.O.M.) Reserve Officer ~ OF-2

    NATO Science & Technology Organization (STO) TR-HFM-330 (Dec 2025) on #Human #Systems Integration for Meaningful Human Control (#MHC) over #AI-based systems, concise takeaways:
    ▪︎ Working definition: humans must be able to make informed choices, in sufficient time, to influence AI systems to enable desired effects or prevent undesired effects.
    ▪︎ MHC is not a “feature” you bolt on: it is a socio-technical property across the full #lifecycle (policy, design/dev, testing/validation, training, mission planning/execution, debrief, governance).
    ▪︎ Control is multi-dimensional: when (real-time vs prior control), by whom (single vs distributed teams), and what (direct action vs indirect constraints/intent).
    ▪︎ “Effective” control is part of MHC: performance and risk reduction must be engineered alongside legal/ethical and societal acceptability.
    ▪︎ Human-centred design is the central lever: continuous user involvement, measurable human performance (situation awareness, workload, response time, etc.), and testing that includes edge cases and ethical dilemmas.
    ▪︎ No silver bullet: the report outlines 17 candidate methods spanning “left of launch” through operations, e.g., validated MHC requirements for acquisition, human readiness levels, advance control directives, ethical hazard analyses, explainable AI, real-time MHC monitoring, dynamic task allocation, and reporting/learning systems (one such runtime check is sketched after this post).
    ▪︎ Holistic “bowtie” framing: links local operator decisions to organizational, societal and legal layers, via chains of trust, ability, authority, responsibility and accountability (and avoids “moral crumple zones”).
    ▪︎ Forward agenda: prioritize MHC for information-domain operations (incl. cognitive warfare), strengthen multistakeholder/participatory design, and institutionalize incident disclosure + lessons learned.
    #ResponsibleAI #HumanSystemsIntegration #HumanFactors #AI #DefenseTech #AIGovernance #NATO https://lnkd.in/dmtucKjV
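
    The report’s working definition lends itself to a runtime check. Below is a minimal Python sketch of real-time MHC monitoring paired with an advance control directive as fallback. It is an illustration under stated assumptions, not a method taken from TR-HFM-330: the field names, thresholds, and fallback behaviour are invented for the example.

    ```python
    from dataclasses import dataclass

    @dataclass
    class ControlState:
        time_to_effect_s: float          # seconds until the system's action takes effect
        operator_response_time_s: float  # measured operator response time
        situation_awareness: float       # 0..1 score from human-performance measurement
        informed: bool                   # operator has the information needed to decide

    def meaningful_control(state: ControlState,
                           min_awareness: float = 0.7,
                           time_margin: float = 2.0) -> bool:
        """The working definition as a runtime predicate: an informed choice,
        in sufficient time, to influence the system."""
        sufficient_time = (state.time_to_effect_s
                           >= time_margin * state.operator_response_time_s)
        return (state.informed and sufficient_time
                and state.situation_awareness >= min_awareness)

    # If MHC degrades, revert to a conservative behaviour agreed before the
    # mission (an "advance control directive").
    state = ControlState(time_to_effect_s=4.0, operator_response_time_s=3.0,
                         situation_awareness=0.8, informed=True)
    if meaningful_control(state):
        print("MHC maintained: operator remains in control")
    else:
        print("MHC degraded: reverting to advance control directive (hold and report)")
    ```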

  • Martin Zwick

    Lawyer | AIGP | CIPP/E | CIPT | FIP | GDDcert.EU | DHL Express Germany | IAPP Advisory Board Member

    “Artificial Intelligence in the Military Domain and its Implications for International Peace and Security: An Evidence-Based Road Map for Future Policy Action” is produced by UNIDIR’s Security and Technology Programme. The report assesses the transformative role of AI in military contexts and its profound implications for global security, highlighting both the opportunities and challenges posed by AI in the military domain. On one hand, AI can enhance decision-making, improve logistics, and optimize training processes, acting as a force multiplier for military operations. On the other hand, it raises critical concerns around ethical use, accountability, and the potential for an AI arms race. Key takeaways include:
    1) The need for a comprehensive UN-led dialogue to establish principles for responsible AI use in military applications.
    2) The importance of developing national strategies that prioritize transparency, accountability, and robust data governance.
    3) The call for multilateral, regional, and national cooperation to ensure that AI technologies are deployed safely and ethically.
    I encourage everyone to read this publication and reflect on how we can collectively shape a future where technology serves as a tool for stability rather than conflict.

  • Wendy R. Anderson

    National Security Technology Leader | Former Sr USG Official | Investor | Ex-Palantir

    There’s a real conversation to be had about AI and national security. But this Financial Times piece misses the mark, and the moment. Jonathan Guyer raises important questions in today’s FT. But in conflating today’s AI developments with vague dystopian tropes, the piece misses an opportunity to seriously engage what’s happening inside the United States Department of Defense (DoD). Yes, the stakes are high. But the systems in place are far more robust, and far more thoughtful, than the article suggests.
    1. Humans are in the loop. Every credible AI-enabled capability used by the U.S. military today requires human oversight. Autonomous systems are subject to strict DoD policy, and decisions with lethal consequences are governed by multiple layers of human review.
    2. This is not happening in secret. Far from it. The DoD has publicly released ethical frameworks (like the 2020 AI Ethical Principles and the 2022–24 Responsible AI Implementation Pathway), launched dedicated oversight bodies like the DoD Chief Digital and Artificial Intelligence Office (CDAO), and built in auditability, red-teaming, and transparent test-and-evaluation protocols across the lifecycle of AI systems. Congressional reporting adds another layer of visibility.
    3. We’ve seen progress, even in hard times. It’s true that Google exited Project Maven in 2018, but others, including Palantir Technologies, stayed, helping U.S. forces protect civilians and improve battlefield intelligence. We should be proud that Google has now reengaged and is working alongside companies like Microsoft, Amazon, Anduril, and Anthropic to responsibly bring advanced AI into defense.
    4. Mindsets are shifting. Several tech and AI companies are rethinking prior bans on military use, recognizing that working with democratic governments on national security can align with ethical values. OpenAI recently updated its policies and is piloting its first Pentagon contract. Meta now allows LLaMA for defense use. These aren’t careless choices; they’re signs that leading firms are finding principled ways to engage on missions that matter.
    5. This collaboration didn’t happen overnight. The progress we’re seeing is the product of years of meetings, demos, offsites, classified briefings, pilots, and mutual learning. There’s more to do, but we’re building a more responsive, resilient, and mission-aligned innovation ecosystem.
    6. To make an obvious point: this isn’t about war for war’s sake. It’s about building software that reduces friendly fire, accelerates logistics, enables humanitarian response, and helps democracies, not autocracies, compete in a fast-changing world.
    We should debate how AI is used in defense. But that debate deserves precision, not platitudes. Let’s push for serious public dialogue. Let’s challenge assumptions. But let’s also recognize the deliberate, transparent, and ethical work already underway. #AI #NatSec #DefenseTech #ArtificialIntelligence #EthicalAI #DoD #Innovation #ResponsibleAI

  • Kuba Szarmach

    Advanced AI Risk & Compliance Analyst @Relativity | Curator of AI Governance Library | CISM CIPM AIGP | Sign up for my newsletter of curated AI Governance Resources (2,000+ subscribers)

    🚀 Building Trust in Military AI: A Practical Framework for Governance
    As AI reshapes global security, ensuring trust, accountability, and responsible development in the military domain is no longer optional; it’s a necessity. The new UNIDIR policy brief, Governance of Artificial Intelligence in the Military Domain, by Yasmin Afina, PhD and Giacomo Persi Paoli, provides a structured, multi-stakeholder approach to tackling this challenge.
    🔑 Why This Matters
    AI in military applications raises profound security, ethical, and governance challenges. Without clear frameworks, we risk unintended escalations, opaque decision-making, and reduced human control over autonomous systems. The report outlines six priority areas for a practical, trust-based approach to AI governance in defense:
    📌 1. Building a Knowledge Base – Military AI lacks global definitions and shared understanding. The report calls for a living lexicon to align key concepts, risks, and governance strategies.
    📌 2. Trust Building – Trust in states, technology, and operators is essential. This means:
    • Identifying red lines on AI applications.
    • Creating verification mechanisms for compliance.
    • Developing global technical standards to ensure responsible AI deployment.
    📌 3. The Human Element – AI must remain accountable to human decision-making. The report pushes for:
    • Clear guidelines on human oversight across the AI lifecycle.
    • Regular training for military personnel on AI limitations and risks.
    📌 4. Data Practices – Biased or opaque data fuels unreliable AI. The report recommends:
    • Stronger data governance to ensure reliable and lawful AI training.
    • Cross-sector data-sharing platforms to improve transparency.
    📌 5. Life Cycle Management – AI doesn’t stop evolving after deployment. The report outlines:
    • End-of-life strategies to prevent outdated AI from becoming a security risk (a minimal sketch of such a lifecycle gate follows this post).
    • Procurement guidelines for states and defense contractors.
    📌 6. Destabilization Risks – AI could escalate conflicts if not carefully managed. The report suggests:
    • A multi-stakeholder platform to track risks and prevent proliferation.
    • Clear international agreements on AI’s role in warfare.
    💡 Practical AI Governance, Not Just Policy Talk
    This report goes beyond theory: it provides clear, structured recommendations for governments, militaries, researchers, and industry stakeholders to work together on AI safety.
    🔍 AI governance isn’t just about regulation; it’s about trust. Without multi-layered, practical solutions, AI risks becoming a destabilizing force in military operations.
    📢 How should global leaders approach AI trust-building in defense? Let’s discuss. ⬇️
    #AIGovernance #MilitaryAI #AISafety #ResponsibleAI #AITrust #EthicalAI
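
    Point 5’s life cycle management can be pictured as a deployment gate over a model registry. The sketch below is a loose Python illustration, not material from the UNIDIR brief: the record fields, the 180-day freshness window, and the function names are invented assumptions. It encodes three of the brief’s concerns at once: approved end uses, end-of-life dates, and validation freshness.

    ```python
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ModelRecord:
        model_id: str
        approved_uses: list[str]  # deployment contexts cleared during review
        last_validated: date      # most recent test-and-evaluation pass
        end_of_life: date         # date after which the model must be retired

    def deployable(record: ModelRecord, use: str, today: date,
                   max_validation_age_days: int = 180) -> bool:
        """Lifecycle gate: deployable only for an approved use, before
        end-of-life, and with a sufficiently recent validation."""
        fresh = (today - record.last_validated).days <= max_validation_age_days
        return use in record.approved_uses and today < record.end_of_life and fresh

    record = ModelRecord(model_id="route-planner-v3",
                         approved_uses=["logistics-routing"],
                         last_validated=date(2025, 1, 10),
                         end_of_life=date(2026, 1, 1))
    print(deployable(record, "logistics-routing", date(2025, 3, 1)))  # True
    print(deployable(record, "target-selection", date(2025, 3, 1)))   # False: use not approved
    ```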

  • Paul Forrest

    Investment Executive | Board Member | Value Creator | TEDx Speaker

    Ok… we all know that AI is revolutionising industries and offers huge transformative capabilities, improved efficiency and new operational opportunities. However, in national defence, its potential is unparalleled. Recognising this, the UK Ministry of Defence has introduced Joint Service Publication 936, a directive aimed at ensuring AI adoption is ethical, safe and effective. This structured framework balances pretty ambitious deployment with robust governance and ethical assurance.
    So… JSP 936 embodies the MOD’s commitment to aligning AI adoption with the UK’s democratic values whilst ensuring its operational readiness. The directive provides a framework for developing and deploying AI-enabled systems that is centred on ethical, legal and safety standards. At its core are the MOD’s AI Ethical Principles of human-centricity, accountability, understanding, bias mitigation and reliability.
    The directive’s scope excludes commercial tools but spans robotic and autonomous systems, logistics tools and decision-making support.
    Integration of AI in defence clearly raises complex ethical challenges, and JSP 936 embeds ethical considerations throughout the AI lifecycle, ensuring meaningful human control and accountability. This is facilitated by a key role, the Responsible AI Senior Officer, who oversees ethical governance within MOD organisations.
    Interestingly, the MOD’s ethical principles are intended to prioritise human welfare, ensure accountability through transparent governance and require AI systems to be explainable, bias-free and reliable. These principles should build trust among users and stakeholders and ensure socially and technically aligned AI systems.
    AI’s #defence applications range from enhanced reconnaissance to decision-making tools. Examples include reinforcement learning for command operations and object detection for surveillance. However, challenges such as system unpredictability and transparency have to be addressed, and JSP 936 emphasises rigorous testing and validation, ensuring systems function reliably in diverse environments.
    JSP 936 usefully adopts a lifecycle approach that aligns with management practices like DevOps and MLOps, ensuring continuous validation and reliability throughout an AI system’s operational lifespan (a sketch of such a validation gate follows this post). Ethical risk assessments are central, with high-risk applications requiring oversight from the Defence AI and Autonomy Unit. Continuous monitoring ensures adaptability to emerging risks. In addition, JSP 936 underscores collaboration with allies, aligning with NATO’s Principles of Responsible AI Use.
    So… JSP 936 is pretty foundational but of pivotal importance to those seeking to build #AI systems in the UK defence sector. Furthermore, the directive represents an interesting steer and robust best practice for commercial AI implementation.
    More here https://lnkd.in/e_aXEUas #responsibleai #aiethics #humanintheloop
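
    The lifecycle and continuous-validation idea maps naturally onto an MLOps-style release gate. The Python sketch below is a hypothetical illustration, not content from JSP 936: the check names, scores, and thresholds are made up. It shows the shape of the requirement: every evaluation in the suite must clear its threshold before a system is promoted, and any regression blocks deployment.

    ```python
    from dataclasses import dataclass

    @dataclass
    class EvalResult:
        name: str         # which test-and-evaluation check this is
        score: float      # measured performance on this run
        threshold: float  # minimum acceptable performance

    def validate_for_release(results: list[EvalResult]) -> bool:
        """Continuous-validation gate: every check must clear its threshold
        before the system is promoted to operational use."""
        failures = [r for r in results if r.score < r.threshold]
        for r in failures:
            print(f"FAIL {r.name}: {r.score:.2f} < {r.threshold:.2f}")
        return not failures

    results = [
        EvalResult("detection-accuracy-desert", 0.91, 0.90),
        EvalResult("detection-accuracy-urban", 0.84, 0.90),  # regression in a new environment
        EvalResult("adversarial-robustness-suite", 0.97, 0.95),
    ]
    if not validate_for_release(results):
        print("Blocked: revalidate before deployment")
    ```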

  • Cordula Droege

    Chief Legal Officer and Head of the Legal Division at International Committee of the Red Cross - ICRC. International humanitarian law & policy, humanitarian disarmament, multilateral diplomacy & strategy, leadership.

    There is much talk in the media about the effects of AI in the armed conflicts in the Middle East, in particular in light of the reported exponential number of strikes on targets in Iran. This weekend, the Financial Times editorial comes out in favour of a treaty on autonomous weapons systems #AWS. The need for international regulation is indeed becoming urgent, as AI and autonomous weapons systems are being developed and tested at breakneck speed. The International Committee of the Red Cross - ICRC has articulated what rules a treaty should contain, with a two-pronged approach, which you can find here: https://lnkd.in/eB3FGs4G. In short, it proposes:
    1. An express prohibition of unpredictable autonomous weapon systems, notably because of their indiscriminate effects.
    2. An express prohibition on the use of autonomous weapon systems to target human beings, because of ethical considerations to safeguard humanity, and to uphold international humanitarian law rules for the protection of civilians and combatants hors de combat.
    3. Limitations in order to protect civilians and civilian objects, uphold the rules of international humanitarian law and safeguard humanity, through a combination of:
    - limits on the types of target, such as constraining them to objects that are military objectives by nature
    - limits on the duration, geographical scope and scale of use, including to enable human judgement and control in relation to a specific attack
    - limits on situations of use, such as constraining them to situations where civilians or civilian objects are not present
    - requirements for human–machine interaction, notably to ensure effective human supervision, and timely intervention and deactivation.
    The editorial also touches on AI in military decision-making and the risks of error and of setting aside human judgement in decisions to use military force. Commonly referred to as AI decision-support systems (AI-DSS), these computerized tools bring together data sources such as satellite imagery, sensor data, social media feeds or mobile phone signals, and draw on them to present analyses, recommendations and predictions to decision makers (a minimal sketch follows this post). The ICRC has made a number of recommendations to ensure that AI-DSS can support, rather than undermine, human judgement, legal compliance and the protection of those affected by armed conflict. They focus on 1) ensuring human control and judgement; 2) system design requirements; 3) testing, evaluation, verification and validation; 4) legal reviews; 5) operational constraints on use; 6) user training; 7) after-action reviews; and 8) accountability, among others. You can find our recommendations here: https://lnkd.in/ewK5ZGc2 https://lnkd.in/eq7YnsZD
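
    The AI-DSS description above suggests a simple structural safeguard: the software fuses and presents, while judgement and legal review stay with humans. The Python sketch below is an invented illustration in the spirit of the ICRC’s recommendations, not an ICRC artefact; the source names and reliability scores are assumptions. Its one deliberate property is that the function only displays ranked, provenance-tagged observations and has no pathway to act on them.

    ```python
    from dataclasses import dataclass

    @dataclass
    class SourceReading:
        source: str         # e.g. "satellite-imagery", "sensor-feed"
        observation: str    # what the source reports
        reliability: float  # 0..1 assessed reliability of this source

    def present_analysis(readings: list[SourceReading]) -> None:
        """Advisory only: fuse and display with provenance, never execute."""
        print("DECISION SUPPORT SUMMARY (advisory; human review and legal advice required)")
        for r in sorted(readings, key=lambda r: r.reliability, reverse=True):
            print(f"  [{r.reliability:.2f}] {r.source}: {r.observation}")

    present_analysis([
        SourceReading("satellite-imagery", "three vehicles near checkpoint", 0.8),
        SourceReading("social-media-feed", "unverified report of movement", 0.3),
    ])
    ```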

  • Dr Zena Assaad

    Associate Professor, Safety Engineering | UNIDIR Fellow | Top 10 Women in AI APAC | 100 Brilliant Women in AI Ethics | Host Responsible Bytes Podcast

    A reading recommendation for this week is this white paper developed by the IEEE SA Research Group on Issues of Autonomy and AI in Defense Systems, which presents “A Framework for Human Decision Making Through the Lifecycle of Autonomous and Intelligent Systems in Defense Applications”. The framework addresses stakeholders involved in policy, design, testing, procurement, decision-making, deployment, and evaluation processes related to autonomous and intelligent systems (AIS), in public-sector decisions about defence applications. It supports stakeholders in applying existing sets of broad ethical principles and standards in the context of AIS, offering first steps towards putting them into practice, including but not limited to those associated with Article 36 of Additional Protocol I to the Geneva Conventions. The white paper was developed by a number of great authors, including Dr Ingvild Bode, Ariel Conn and Rain Liivoja among many others. You can freely access and download the report here: https://lnkd.in/gEqbraya
    #ReadingRecommendation #IEEE #AutonomousSystems #IntelligentSystems #HumanDecisionMaking #DefenseApplications #EthicalFrameworks #PolicyDevelopment #StakeholderEngagement #TechnologyInDefense #AIethics #ResearchInsights #InnovationInDefense #PublicSector #DefensePolicy

  • Dr. Krunoslav Ris

    Fractional CTO for Scaleups | Board-Level Tech Governance | AI Strategy + Architecture (EU/US) | Reducing tech OPEX 28-40% | PMP & PBA Holder

    🔹 Control and Responsibility in AI Development: Ethics and Weaponry
    The recent video and article about a ChatGPT-powered “sentry” rifle controlled by voice commands, a project OpenAI later shut down, prompted me to reflect on the importance of ethics and responsibility in AI technologies, especially when it comes to weaponry. How should regulations and ethical guidelines shape the future development of AI, particularly in sensitive areas like autonomous weapons?
    ➖ Regulatory Framework: We need international and national regulations that clearly define responsibility for AI decisions in military operations.
    ➖ Ethical Guidelines: The development of autonomous systems should include clear rules that limit fully autonomous systems and require human oversight.
    ➖ Human Responsibility: In any system, AI should never take full responsibility for decisions that can significantly impact lives.
    ➖ International Cooperation: Global cooperation and agreements are needed to ensure the safe development and use of AI in weaponry.
    🟥 I strongly oppose the use of AI in home-based weapon production. Technology that allows for the independent manufacture of autonomous weapons can be extremely dangerous, and without proper controls it could fall into the wrong hands.
    The question is: who is responsible when an AI system makes a mistake? Is human oversight strong enough to prevent disasters? Clear ethical guidelines and regulations are needed to shape a safe and responsible future for AI technologies.
    📢 I invite you to join the discussion: what do you think about ethics and responsibility in AI within sensitive areas like weaponry? How can we ensure safety and accountability in the development of these technologies? https://lnkd.in/da_hgBHp
    #AI #Ethics #AutonomousWeapons #Regulations #AIResponsibility #Technology #Innovation #Safety #AIRegulations #EthicalAI

  • Dr. Barry Scannell

    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of the Board of Irish Museum of Modern Art | PhD in AI & Copyright

    AI occupies a unique position among dual-use technologies (DUT), reflecting its potential for both beneficial applications and military utilisation. AI’s dual-use nature poses significant regulatory and ethical challenges, notably in its military dimensions, which remain largely outside the ambit of civilian legislation such as the proposed AI Act.
    DUT are technologies with potential applications in both civilian and military domains. Their essence lies in their versatility: the same technology that propels advancements in healthcare, education, and industry can also be adapted for surveillance, autonomous weaponry, and cyber warfare. This inherent ambiguity of application makes the governance of DUT, especially AI, a complex task. The AI Act primarily addresses civilian uses of AI, focusing on ethical guidelines, data protection, and transparency. Military applications of AI, by contrast, remain largely outside the scope of this act and other similar legislative efforts globally.
    The dual-use capabilities of AI bring software contracts into focus as a critical instrument in governing the use, deployment, and development of AI technologies. Software contracts between developers, vendors, and users sometimes contain dual-use provisions to explicitly govern the use of the technology in both civilian and military contexts. These provisions are designed to ensure that the deployment of AI technologies aligns with legal standards, ethical norms, and, when applicable, international regulations. Dual-use clauses in software contracts may include restrictions on usage, export controls, compliance with international law, and requirements for end-use monitoring (a hypothetical sketch of how such clauses might be operationalised follows this post).
    Restrictions on Usage: Contracts may specify permissible uses of the software, explicitly prohibiting or restricting its application in military settings without proper authorisation. This helps to mitigate the risks associated with unintended or unauthorised military use of AI technologies.
    Export Controls: Given the potential military applications of AI, software contracts often include clauses related to export controls, requiring compliance with national and international regulations governing the export of dual-use technologies. This ensures that AI technologies do not inadvertently contribute to proliferation or escalate geopolitical tensions.
    Compliance with International Law: Provisions may also require that the use of AI technologies, particularly in military contexts, complies with international humanitarian law and other relevant legal frameworks. This is crucial in ensuring that the deployment of AI in warfare adheres to the principles of distinction, proportionality, and necessity.
    It is clear that addressing the dual-use dilemma of AI extends beyond contractual measures. It requires a holistic approach that combines legal frameworks, ethical considerations, and international cooperation.
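
    As a thought experiment, the usage-restriction and export-control clauses described above could be partially operationalised in a vendor’s own software. The sketch below is entirely hypothetical: the context labels, jurisdiction codes, and reporting obligation are invented, and no real contract or statute is being encoded. It merely shows how a declared end use and deployment jurisdiction could be checked before a licence is granted.

    ```python
    from dataclasses import dataclass

    ALLOWED_CONTEXTS = {"civilian-research", "healthcare", "education"}
    EXPORT_RESTRICTED = {"XX", "YY"}  # placeholder jurisdiction codes only

    @dataclass
    class DeploymentRequest:
        licensee: str
        context: str       # declared end-use context
        jurisdiction: str  # country code of intended deployment

    def licence_check(req: DeploymentRequest) -> bool:
        """Programmatic analogue of dual-use contract clauses: permitted end
        use required; export-restricted jurisdictions refused."""
        if req.jurisdiction in EXPORT_RESTRICTED:
            print(f"Refused: export controls bar deployment in {req.jurisdiction}")
            return False
        if req.context not in ALLOWED_CONTEXTS:
            print(f"Refused: '{req.context}' requires separate written authorisation")
            return False
        print("Permitted under licence; end-use monitoring report due quarterly")
        return True

    licence_check(DeploymentRequest("acme-labs", "autonomous-targeting", "DE"))
    ```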
