Responsible By Design (RBD)
Rating for Military AI
Independent ratings for responsible innovation

What is the RBD Rating?

Drawing on international law, recognised ethical AI principles, engineering assurance standards and governance best practice, the RBD Rating is an independent star-rating system that provides a transparent, objective assessment of whether national military AI frameworks, individual companies and specific military AI capabilities are responsible, that is, lawful, ethical, safe and trustworthy.

The RBD Rating gives states, investors, industry and the public a clear benchmark for what “responsible” military AI means in practice, promoting transparency, building trust and motivating responsible innovation.

Building Responsibility into Military AI

The RBD Rating creates a common standard that brings states, industry and investors together around the shared goal of ensuring that AI in the military context is developed and used responsibly.

By embedding law, ethics, safety and trustworthiness at every stage of the system lifecycle, the RBD Rating helps to ensure that military AI is not only technologically advanced but also legally sound, ethically defensible and societally acceptable.

Methodology

The RBD Rating methodology draws on structured indicators, transparent evidence, and independent assessment. It includes:

  • Four categories: Legal, Ethical, Safe, Trustworthy (each weighted at 25%); an illustrative scoring sketch follows this list.

  • Gating rules: Non-negotiable legal requirements (e.g., failure to provide an Article 36 legal review file results in an automatic zero rating).

  • Evidence packs: States and companies provide supporting documentation (e.g., test reports, safety cases, policies, legal reviews).

  • Independent review: Article 36 Legal conducts the assessment and awards the rating.
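
As an illustration of the aggregation, the minimal Python sketch below combines the four equally weighted category scores and applies the legal gating rule. The 25% weights and the gating behaviour come from the methodology above; the 0-100 per-category scale and all function and field names are assumptions for illustration, not part of the published methodology.

    # Illustrative sketch only: hypothetical names, not the published RBD methodology.

    CATEGORY_WEIGHTS = {
        "legal": 0.25,
        "ethical": 0.25,
        "safety": 0.25,
        "trustworthiness": 0.25,
    }

    def overall_score(category_scores: dict[str, float], gating_passed: bool) -> float:
        """Combine per-category scores (assumed 0-100) into a weighted overall score.

        A gating failure (e.g. no Article 36 legal review file, or no
        demonstrable human control) forces the score to zero regardless of
        performance in the other categories.
        """
        if not gating_passed:
            return 0.0
        return sum(weight * category_scores[name]
                   for name, weight in CATEGORY_WEIGHTS.items())

    # Example: strong legal and safety evidence, weaker ethics documentation.
    print(overall_score(
        {"legal": 90, "ethical": 70, "safety": 85, "trustworthiness": 80},
        gating_passed=True,
    ))  # 81.25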

RBD Categories, Criteria, Weighting & Evidence

  • Legal Compliance — Gating + 25% weight

    Core criteria: Demonstrated conformity with IHL (including, where applicable, legal review requirements) and national law, human judgment, the ability to cancel/suspend, and auditability.

    Evidence Pack: Article 36 legal review file, HMI/CONOPS showing the system of control, test reports.

    Gating failures (automatic 0): inability to demonstrate operation in compliance with IHL, failure to meet legal review requirements, or lack of human control.

  • Ethical Compliance — 25% weight

    Core criteria: Alignment with relevant national/alliance defence AI ethics policies (DoW, NATO, MoD), human oversight, bias mitigation, explainability, accountability.

    Evidence Pack: ethics matrix, dataset documentation, ethical red-teaming results.

  • Safety (Engineering Assurance) — 25% weight

    Core criteria: Mapping of autonomous functions and demonstrated compliance with relevant standards (e.g. MIL-STD-882E, IEC 61508, DO-178C, STANAG 4671/4703), safety case, hazard analysis, battlefield robustness, redundancy, TEVV evidence.

    Evidence Pack: safety case, hazard logs, test reports, cyber-hardening validation, TEVV evidence.

  • Trustworthiness (Governance & Lifecycle) — 25% weight

    Core criteria: Evidence of national/company governance framework, lifecycle risk management, and operational readiness.

    Evidence Pack: Compliance with Responsible AI policies, SOPs, training, incident reporting mechanisms, supply chain assurance. An illustrative evidence-pack completeness check follows this list.
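
As a complement to the evidence packs above, the Python sketch below checks a submission against a per-category list of expected artefacts. The artefact names mirror the evidence packs listed for each category; the data structure and the check itself are assumptions for illustration, not a prescribed submission format.

    # Illustrative sketch: artefact names follow the evidence packs above; the
    # structure and check are assumptions, not a prescribed RBD submission format.

    REQUIRED_EVIDENCE = {
        "legal": {"article_36_legal_review_file", "hmi_conops", "test_reports"},
        "ethical": {"ethics_matrix", "dataset_documentation", "ethical_red_teaming_results"},
        "safety": {"safety_case", "hazard_logs", "test_reports",
                   "cyber_hardening_validation", "tevv_evidence"},
        "trustworthiness": {"responsible_ai_policies", "sops", "training",
                            "incident_reporting", "supply_chain_assurance"},
    }

    def missing_evidence(submitted: dict[str, set[str]]) -> dict[str, list[str]]:
        """Return, per category, any expected artefacts absent from a submission."""
        return {
            category: sorted(required - submitted.get(category, set()))
            for category, required in REQUIRED_EVIDENCE.items()
        }

    # Example: flag gaps in a partially complete ethical evidence pack.
    print(missing_evidence({"ethical": {"ethics_matrix"}})["ethical"])
    # ['dataset_documentation', 'ethical_red_teaming_results']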

How the RBD Rating system works

The RBD Rating system evaluates military AI across four categories and awards a transparent star rating from 0 to 5.

Failure to meet international or national legal requirements results in an automatic zero rating, while higher ratings reflect stronger alignment with international and national military AI standards as well as responsible design practices.

The rating system applies at three levels: nations, companies and specific military AI capabilities, showing how robust their responsible AI frameworks are. By doing so, it helps states, industry and investors build trust, demonstrate accountability and foster innovation that is both technologically advanced and responsibly governed. An illustrative mapping from overall scores to the star bands is sketched after the list below.

  • 5★ – Exemplar (Gold)

  • 4★ – Strong (Silver)

  • 3★ – Meets Standard

  • 2★ – Conditional (improvements required)

  • 1★ – Below Standard
    ——————————————————————

  • 0★ – Reject (unlawful)
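
The band thresholds in the Python sketch below are assumptions for illustration only; the published bands above are qualitative, and the only fixed rule is that a gating failure always yields 0 stars. The sketch simply shows how a gated, weighted score out of 100 could map onto the 0 to 5 star scale.

    # Illustrative sketch: the numeric thresholds are assumptions; the bands above
    # are qualitative, and a gating failure is always 0 stars (Reject, unlawful).

    ASSUMED_STAR_BANDS = [  # (assumed minimum overall score out of 100, stars)
        (90, 5),  # Exemplar (Gold)
        (75, 4),  # Strong (Silver)
        (60, 3),  # Meets Standard
        (45, 2),  # Conditional (improvements required)
        (0, 1),   # Below Standard
    ]

    def star_rating(overall_score: float, gating_passed: bool) -> int:
        """Map a weighted overall score (0-100) onto the 0 to 5 star scale."""
        if not gating_passed:
            return 0  # Reject (unlawful)
        for minimum, stars in ASSUMED_STAR_BANDS:
            if overall_score >= minimum:
                return stars
        return 1  # defensive fallback; not reached for non-negative scores

    print(star_rating(81.25, gating_passed=True))   # 4 stars under these assumed bands
    print(star_rating(81.25, gating_passed=False))  # 0 stars: gating failure overrides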

Current RBD Ratings

  • Country Ratings

    Independent, objective RBD Ratings of national military AI frameworks.

  • Company Ratings

    Independent, objective RBD Ratings of individual companies (coming soon).

  • Capability Ratings

    Independent, objective RBD Ratings of specific military AI capabilities (coming soon).

RBD Rating

Lawful. Ethical. Safe. Trustworthy.

Independent ratings for responsible innovation.