Article 36 Compliance Reports

Enabling Lawful, Responsible, and Investable Military AI

The rapid development of AI-enabled and autonomous military capabilities presents both operational opportunities and legal risks.

An Article 36 Compliance Report provides an independent, structured legal assessment of whether a new weapon system or AI-enabled capability is lawful and capable of being used in compliance with international law.

Grounded in Article 36 of Additional Protocol I to the Geneva Conventions, these reports translate complex legal obligations into practical design, investment, and acquisition insights.

If you are developing, investing in, or acquiring AI-enabled military capabilities, an Article 36 Compliance Report can provide the clarity and assurance needed to make informed decisions.

Purpose of an Article 36 Compliance Report

An Article 36 Compliance Report is designed to:

  • Identify legal risks early in the design and development lifecycle of new military capabilities

  • Assess compliance with international law, including the IHL rules of distinction, proportionality, and precautions in attack

  • Support investment and acquisition decisions with objective legal analysis

  • Enable responsible-by-design development of AI-enabled and autonomous systems

  • Enhance bids in competitive military acquisition processes across jurisdictions

These reports provide industry-facing due diligence, conducted before development or acquisition, that informs decision-making before significant capital or procurement commitments are made.

What does a report cover?

Each Article 36 Compliance Report provides a comprehensive, structured analysis aligned with how States conduct legal reviews.

Legal Review Requirement

  • Whether the system constitutes a “new weapon, means or method of warfare”

  • The application of Article 36 and legal review obligations across jurisdictions

  • The system’s ‘normal or expected use’, including the data requirements for the conduct of a legal review

Assessment Against International Law Prohibitions and Restrictions

  • Specific treaty prohibitions and restrictions

  • General prohibitions, including:

    • Indiscriminate weapons

    • Superfluous injury or unnecessary suffering

    • Environmental harm

Functional Review of AI-Enabled Systems

  • A functional, lifecycle-based analysis of AI-enabled capabilities, assessing:

    • Object identification and classification

    • Target confirmation and engagement

    • Terminal guidance and autonomy

    • Human control and decision-making roles

Practical Outputs

Our reports are designed not only to assess legality but to enable lawful, responsible, and operationally effective capability development.

Each report provides the following outputs:

  • Practical legal risks, including:

    • Inability to fulfil precautions in attack in real-world conditions

    • Loss of human control in communications-degraded environments

    • Ambiguity in lines of legal responsibility

    • Performance degradation affecting lawful targeting

  • Actionable mitigation measures paired with each risk, including:

    • System design changes

    • User interface improvements (e.g. embedding IHL decision workflows)

    • Operational restrictions or conditions of use

    • Training and doctrine enhancements

  • An indicative assessment of how a State legal review is likely to evaluate the system, including:

    • Whether the system is capable of lawful use

    • Any recommended restrictions on use

    • Cross-jurisdictional alignment