A case for a responsible approach to the review of AI-DSS and other AI-enabled systems
Are Article 36 legal reviews alone enough for military artificial intelligence (AI) decision-support systems and other AI-enabled targeting systems?
As AI becomes embedded in military decision-making, Article 36 of Additional Protocol I to the Geneva Conventions is increasingly invoked as the primary legal mechanism for determining the legality of new military technologies. While Article 36 plays a vital role in ensuring that new weapons, means, and methods of warfare comply with international humanitarian law (IHL), it may have important limitations when applied to AI decision-support systems (AI-DSS) and autonomous weapon systems (AWS).
The Article 36 legal review obligation was conceived in an era of weapons-centric regulation. While its text also refers to “means and methods of warfare,” evidence of national review practice suggests that States focus primarily on platforms and munitions that directly apply force. This creates difficulties for AI-DSS and AI-enabled targeting systems that form part of the 'kill chain' but do not themselves apply force. In several States, AI systems that support targeting, intelligence analysis, or operational planning are not treated as weapons, means, or methods of warfare for the purposes of the Article 36 legal review obligation. Accordingly, they may fall outside national Article 36 processes altogether.
This outcome is increasingly difficult to defend, as AI-DSS can profoundly shape how force is used and whether it complies with IHL. For example, AI-DSS can influence target selection, proportionality assessments, precautions in attack, and the tempo of operations. From a functional perspective, the distinction between “weapon” and “decision-support” becomes artificial where both engage IHL targeting rules. Yet the result is a governance gap in which systems that materially affect IHL-regulated decisions may never be subject to any structured review. Does this raise the need for a 'responsible review' of AI-DSS and AI-enabled systems that fall outside a State's traditional interpretation of weapon, means, or method of warfare?
Even where AI-enabled capabilities are brought within Article 36 processes, a further problem may arise. Article 36 is traditionally designed to answer a narrow but essential question: whether the employment of a capability would be prohibited by international law. However, many of the most significant risks associated with AI-DSS and AWS do not fit neatly within that framework. Ethical concerns such as delegating life-and-death decisions to machines, risks to human agency and accountability, reliability and predictability failures, automation bias, escalation dynamics, and broader legitimacy concerns are not easily captured by a strictly legal assessment.
This does not mean that Article 36 is unnecessary or outdated. On the contrary, it remains an indispensable legal obligation and a critical component of responsible weapons governance. But it is structurally under-inclusive as a governance tool for military AI. Treating Article 36 as the sole review mechanism places excessive weight on a process that was never designed to address the full socio-technical risk profile of AI-enabled systems.
What is needed is not the replacement of Article 36, but its supplementation through a broader Responsible Review of AI-DSS and AWS. A Responsible Review would operate alongside Article 36 legal review and extend structured assessment to AI systems that influence IHL-relevant decisions, even where they fall outside national definitions of weapons. It would treat legal compliance as a baseline while also examining ethical acceptability, human-machine interaction, technical robustness, accountability, and systemic risk across the capability life cycle.
Such a responsible approach does not substitute ethics for law or dilute legal standards. Rather, it recognises that lawful use in practice depends on design choices, operational context, human control, and governance arrangements. By embedding these considerations into a structured review process, States can better ensure that AI-enabled military capabilities are not only lawful in theory, but controllable, accountable, and legitimate in practice.
In an era of AI-enabled warfare, the question is no longer whether Article 36 remains relevant; as an express legal obligation, it clearly does. The question is whether it can, on its own, bear the full burden of governing military AI. A Responsible Review framework offers a principled and practical way to close that gap.