Article 36 Canaries in the Military AI Coal Mine
Recent international reports, including those from SIPRI, the Global Commission on Responsible AI in the Military Domain (GC REAIM), and the REAIM Summit process, underscore an urgent need for States to move beyond high-level principles and operationalise responsible design, development, and use across the lifecycle of new military technologies.
This is no longer theoretical. Ongoing conflicts, particularly in Ukraine and Iran, illustrate a clear trajectory in contemporary warfare. Artificial intelligence is increasingly integrated into decision-making processes to accelerate both the tempo of operations and the application of force. In Ukraine, for example, the proliferation of semi-autonomous and robotic systems is progressively removing human operators from the most contested and dangerous areas of the battlespace.
At the same time, geopolitical tensions are driving a fundamental shift in how military capabilities are acquired. Traditional acquisition models, carefully structured, extensively tested, and measured in years, are being compressed. While such processes remain for major platforms, emerging capabilities such as AI-enabled drones and short-range autonomous or one-way loitering systems are being developed and fielded at pace. Increasingly, capability development is expected to occur “at the speed of relevance”. In conflicts such as Ukraine, where measure/countermeasure cycles evolve rapidly, that speed is often measured in weeks or months.
This acceleration places significant strain on existing Article 36 legal review frameworks. Most national legal review systems were designed for a different era: lower volumes of acquisitions, longer development timelines, and more centralised oversight. Today, States that maintain such processes are confronted with a surge in small- and medium-scale projects, often accompanied by timelines that are incompatible with traditional review methods. In practice, legal reviews are frequently conducted by a single advisor or a small team, often alongside other operational law advice responsibilities.
The risks are becoming increasingly apparent. Even among the relatively small number of States (approximately twenty) that maintain formal legal review processes, capacity is under pressure. This is compounded by limited visibility over decentralised testing, evaluation, and acquisition activities, particularly those occurring at unit or single-service level. At the same time, while States consistently affirm the importance of legal review, the reality at the acquisition “coalface” can diverge. Processes perceived as slowing innovation may be sidelined, and institutional tolerance for risk may increase.
It is within this context that military legal advisors are assuming a new and critical role. At unit and formation level, legal advisors are increasingly involved in early-stage research, development, and experimentation. In effect, they are becoming “Article 36 canaries in the military AI coal mine”, identifying legal risks early, signalling the need for formal review, and providing preliminary weapons law advice before systems reach formal acquisition pathways.
This reflects an emerging shift toward a more distributed, lifecycle-based approach to legal assurance—aligned with a broader Lawful by Design paradigm.
In practice, this points to a “hub-and-spoke” model of legal review. Central legal authorities retain responsibility for formal determinations of legality, while decentralised legal advisors provide early input and situational awareness. The challenge will be ensuring consistency, maintaining institutional knowledge, and standardising approaches across dispersed actors. Structured training and capacity-building, such as the Five Eyes Article 36 Legal Review Course hosted by UK Army Legal Services, will be essential in sustaining this model.
However, early legal engagement alone is not sufficient. To meet the demands of accelerated military AI acquisition, responsibility must also extend to the private sector. Defence industry actors developing AI-enabled systems must anticipate legal review requirements and integrate international law considerations into the design and development process.
This requires more than awareness; it requires preparation. Companies must be able to provide the technical and operational data necessary to support legal assessments. This includes information on weapon effects, accuracy, reliability, and precision. For systems designed to have anti-personnel effects, data on physiological impact is essential to assess compliance with the prohibition on superfluous injury or unnecessary suffering. Where AI systems perform targeting-related functions, such as target selection or engagement, developers must be able to explain how the system operates, including data inputs, sensor dependencies, and the impact of environmental conditions on performance.
Embedding these considerations early in the lifecycle is a core element of a Lawful by Design approach. It not only supports compliance but also reduces friction in later stages, avoiding delays associated with late-stage legal review.
This is precisely where structured approaches to legal readiness can add value. Article 36 Legal’s Compliance Report provides pre-acquisition due diligence for defence companies developing AI-enabled or conventional weapon systems. By aligning system design and documentation with the expectations of national legal review processes, such assessments enable companies to anticipate outcomes, strengthen tender submissions, and reduce legal and operational risk.
Ultimately, national legal review frameworks must evolve to match the pace and complexity of military AI. This requires equipping legal advisors at all levels to engage meaningfully in early-stage testing and evaluation, and embedding legal review as a continuous, lifecycle-based function. Legal review should not be viewed as a bureaucratic hurdle, but as an integral component of capability development—informing design choices, identifying risks early, and ensuring that systems are capable of lawful use.
Nor does the challenge end at acquisition. The adaptive and evolving nature of AI-enabled systems requires ongoing legal governance throughout their in-service life. This includes identifying novel or expanded uses and reassessing compliance where material changes to software or hardware occur. Legal review, in this sense, is no longer a single event, but a continuous process.
In an era of accelerated technological change, Article 36 legal review must itself adapt.
The “canaries” are already in the coal mine. The question is whether institutions, processes, and industry are prepared to respond to what they are signalling.