The Alan Turing Institute: Assurance of third-party AI systems for UK national security

This case study explores how national security bodies can effectively evaluate AI systems designed and developed, at least in part, by industry suppliers, before they are deployed in high-stakes national security environments.

Background & Description

Our tailored AI assurance framework for UK national security facilitates transparent communication about AI systems between industry suppliers and national security customers, so that all stakeholders understand the risks that come with a particular AI system.

We provide a method for robustly assessing whether AI systems meet the stringent requirements of national security bodies. The framework centres on a structured system card template for UK national security, which sets out how AI system properties should be documented, covering legal, supply chain, performance, security, and ethical considerations. We recommend that government and industry work together to ‘fill out’ this system card template with relevant evidence that an AI system is safe to deploy.
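For illustration, here is a minimal sketch of how such a system card might be represented in machine-readable form. The section and field names below are hypothetical assumptions for this sketch, not the framework's actual template:

```python
from dataclasses import dataclass, field

# Hypothetical system card structure; the sections mirror the broad
# areas described above (legal, supply chain, performance, security,
# ethics) but are illustrative, not the framework's actual template.
@dataclass
class SystemCard:
    system_name: str
    supplier: str
    legal: dict = field(default_factory=dict)         # e.g. legal basis, compliance evidence
    supply_chain: dict = field(default_factory=dict)  # e.g. component provenance, subcontractors
    performance: dict = field(default_factory=dict)   # e.g. test results, robustness checks
    security: dict = field(default_factory=dict)      # e.g. cyber standards, data hosting plans
    ethics: dict = field(default_factory=dict)        # e.g. fairness and impact assessments

# Example of government and industry jointly 'filling out' the card
card = SystemCard(
    system_name="Example triage model",
    supplier="Example Supplier Ltd",
    security={"data_hosting": "UK sovereign cloud (evidence attached)"},
)
```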

In addition to this, we offer guidance on what evidence should be used to fill out the system card template and on how contractual clauses should be used to mandate transparent information sharing from suppliers.

Finally, we offer guidance to national security bodies on how to assess the evidence compiled in the system card. We address the need to establish clear lines of accountability and to ensure ongoing post-deployment checks are in place to monitor any residual risks associated with an AI system.
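As a purely illustrative sketch of what an ongoing post-deployment check could look like, the threshold, metric, and escalation step below are assumptions for this example rather than requirements of the framework:

```python
# Illustrative post-deployment check: flag when live performance drifts
# below a floor agreed at procurement. The 0.95 floor and the accuracy
# metric are invented for this sketch, not prescribed by the framework.
AGREED_ACCURACY_FLOOR = 0.95

def check_residual_risk(live_accuracy: float) -> bool:
    """Return True if the system remains within its agreed tolerance."""
    if live_accuracy < AGREED_ACCURACY_FLOOR:
        # In practice this would escalate to the named accountable owner.
        print(f"ALERT: live accuracy {live_accuracy:.1%} is below the "
              f"agreed floor of {AGREED_ACCURACY_FLOOR:.1%}; escalate for review.")
        return False
    print(f"OK: live accuracy {live_accuracy:.1%} is within tolerance.")
    return True

check_residual_risk(0.93)
```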

How this technique applies to the AI White Paper Regulatory Principles

Safety, Security & Robustness

This assurance methodology addresses questions of AI security directly. The system card template we provide has a section dedicated to the security of AI systems. Users must complete it with evidence on supply chain security, compliance with international cybersecurity standards, and data hosting and management plans, among other areas.

Our assurance method also addresses broader AI safety concerns. The system card template includes space to detail the results of any red teaming exercises and of any performance and robustness tests that have been conducted.

Appropriate Transparency & Explainability

This assurance tool is explicitly designed to increase transparency around AI systems, particularly between AI developers and AI customers.

Transparency is supported by two key processes within our assurance methodology. First, we provide a system card template that encourages AI suppliers to be more transparent about their AI systems, giving them instructions on what evidence they should share with potential customers. Second, we propose contractual clauses that mandate further transparency from suppliers.

Beyond this, we also propose greater transparency from the security services about how they procure AI systems. This should allow broader scrutiny of AI procurement and AI risk management processes.

Fairness

This assurance methodology provides guidance to the national security community on how to incorporate ethical considerations into their procurement process for AI systems. We propose that fairness be considered as part of this, from both a technical and a societal perspective, to ensure that any AI systems used in the national security context operate without unacceptable biases.

The system card template has a section dedicated to ethical considerations, including fairness. We encourage the use of a range of evidence to demonstrate that suppliers have introduced adequate safeguards against unfair bias (e.g. impact assessments, technical fairness assessments, and international standards). We also incorporate considerations of fairness into other system card sections, for example by asking users to disaggregate performance metrics for distinct demographic groups.
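To make the disaggregation point concrete, here is a minimal sketch of computing a performance metric per demographic group; the records and group labels are invented for illustration:

```python
from collections import defaultdict

# Invented evaluation records: each holds a demographic group label and
# whether the system's output was correct for that case.
records = [
    {"group": "A", "correct": True},
    {"group": "A", "correct": True},
    {"group": "A", "correct": False},
    {"group": "B", "correct": True},
    {"group": "B", "correct": False},
    {"group": "B", "correct": False},
]

totals, correct = defaultdict(int), defaultdict(int)
for record in records:
    totals[record["group"]] += 1
    correct[record["group"]] += record["correct"]

# Disaggregated accuracy, as the system card's performance section asks for
for group in sorted(totals):
    print(f"Group {group}: accuracy {correct[group] / totals[group]:.1%}")
```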

Accountability & Governance

We provide guidance to national security bodies on how AI systems should be assessed before they are purchased and before they are deployed. In doing so, we make recommendations on who within organisations should take accountability for these decisions. We also discuss the importance of independent oversight of procurement decisions. Finally, we address issues of legal compliance, proposing that our AI assurance framework be used to support wider legal due diligence checks for AI in the national security context.

Why we took this approach

While much progress has been made on AI assurance in other sectors, no existing approach is dedicated to meeting the needs of the national security community. AI use in national security carries additional risks due to the prevalence of high-stakes deployment contexts, where tolerance for error is especially low. We build on existing AI assurance research to explicitly address the risks that emerge when security services use industry-designed AI systems.

Our approach to AI assurance also aims to build on existing industry and government practices to make sure its implementation is feasible in the near term. Our assurance process is robust, with thorough safeguards included, but also practical, with streamlined stages that should be straightforward to implement.

Finally, this assurance framework explicitly tackles issues associated with third-party AI, introducing crucial considerations such as supply chain accountability and data provenance, which are missing from tools that approach assurance from the perspective of a single developing organisation.

Benefits to the organisation using the technique

- Aids transparent communication between suppliers and customers about the properties of an AI system
- Increases AI customers' oversight over the whole AI lifecycle
- Allows AI customers to easily compile all evidence that an AI system is trustworthy within a single document: the AI system card
- Meets the specific needs of the national security sector
- Offers guidance on how AI suppliers might prove their systems are ethical, legally compliant, secure, and reliable

Limitations of the approach

- Requires multidisciplinary expertise and significant organisational capacity
- Is reliant upon contributions from industry suppliers, who may be reluctant to share all relevant information about an AI system with the government customer



Published 9 April 2024