The Alan Turing Institute: Applying argument-based assurance to AI-based digital mental health technologies

Case study from The Alan Turing Institute.

Background & Description

This case study is focused on the application of Trustworthy and Ethical Assurance to Digital Mental Health Technologies (DMHT). Trustworthy and Ethical Assurance is a methodology and framework designed to provide a structured process for evaluating and justifying claims about ethical properties of a data-driven technology or system.
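
To make the structure concrete, an argument-based assurance case can be pictured as a small data model: a top-level goal, a set of property claims, and the evidence that grounds each claim. The Python sketch below is a minimal illustration under that assumption; the class and field names are hypothetical, not the framework's actual schema.

    from dataclasses import dataclass, field


    @dataclass
    class Evidence:
        description: str  # e.g. "fairness audit report, v1.2"
        reference: str    # link or document identifier


    @dataclass
    class PropertyClaim:
        statement: str    # a claim about an ethical property of the system
        evidence: list[Evidence] = field(default_factory=list)

        def is_supported(self) -> bool:
            # A claim is only as strong as the evidence attached to it.
            return len(self.evidence) > 0


    @dataclass
    class AssuranceCase:
        goal: str         # the overarching ethical goal, e.g. "fairness"
        claims: list[PropertyClaim] = field(default_factory=list)

        def unsupported_claims(self) -> list[PropertyClaim]:
            # Surface claims that still lack supporting evidence.
            return [c for c in self.claims if not c.is_supported()]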

DMHTs can make use of AI techniques in a variety of ways. They include health and wellness smartphone apps, virtual and augmented reality (VR/AR) applications, decision support systems designed to support healthcare professionals, and wearable neurotechnologies such as electroencephalogram (EEG) headbands and brain-computer interfaces.

How this technique applies to the AI White Paper Regulatory Principles

More information on the AI White Paper Regulatory Principles.

Appropriate Transparency and Explainability

Following a series of stakeholder engagement events conducted in 2022, Turing’s project team co-developed an argument pattern that can be used as a starting template for building an assurance case about how explainability has been established throughout a DMHT’s lifecycle.

The pattern focuses on the following properties and core attributes (sketched as a reusable template after the list):

  1. transparency and accountability,
  2. responsible project governance,
  3. informed and autonomous decision-making,
  4. sustainable impact.
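
Read as a template, the pattern amounts to a top-level goal with one sub-claim per core attribute, each awaiting project-specific evidence. The nested-dict rendering below is an illustrative assumption, not the project's published pattern notation.

    # One sub-claim per core attribute; empty lists mark evidence gaps.
    explainability_pattern = {
        "goal": "Explainability is established across the DMHT lifecycle",
        "claims": {
            "transparency_and_accountability": [],
            "responsible_project_governance": [],
            "informed_autonomous_decision_making": [],
            "sustainable_impact": [],
        },
    }

    # Attributes with no attached evidence are open items in the case.
    gaps = [name for name, evidence in explainability_pattern["claims"].items()
            if not evidence]
    print(gaps)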

Further information and the argument pattern.

Fairness

Following a series of stakeholder engagement events conducted in 2022, our project team co-developed an argument pattern that can be used as a starting template for building an assurance case about how fairness has been established throughout a DMHT’s lifecycle.

The pattern focuses on the following properties and core attributes (a simple evidencing example follows the list):

  1. bias mitigation,
  2. non-exclusion,
  3. non-discrimination,
  4. equitable impact.
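
As an illustration of how the first of these claims might be evidenced, the sketch below computes a simple demographic parity gap between two groups of model outputs. The metric choice and the data are illustrative assumptions, not a recommended standard for DMHTs.

    def demographic_parity_gap(outcomes_a: list[int], outcomes_b: list[int]) -> float:
        """Absolute difference in positive-outcome rates between two groups."""
        rate_a = sum(outcomes_a) / len(outcomes_a)
        rate_b = sum(outcomes_b) / len(outcomes_b)
        return abs(rate_a - rate_b)


    # Hypothetical binary outputs (1 = recommended for intervention).
    group_a = [1, 0, 1, 1, 0, 1, 0, 1]
    group_b = [1, 0, 0, 1, 0, 0, 0, 1]

    gap = demographic_parity_gap(group_a, group_b)
    print(f"Demographic parity gap: {gap:.3f}")  # attach as evidence to the claim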

Further information and the argument pattern.

Accountability and Governance

The framework used to support Trustworthy and Ethical Assurance is grounded in an approach to project governance that is complementary both to regulatory approaches, such as the one outlined in the Government’s recent white paper, and to broader research into responsible research and innovation.

This is best represented through our project lifecycle model.

In short, the project lifecycle model sets up a scaffold for identifying and evaluating claims and evidence about properties of a system (and its design, development, and deployment) that require assurance. This model has been developed and used in additional work with public sector organisations and government departments, including the Ministry of Justice and Office for Artificial Intelligence.
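
One way to picture the scaffold is as a mapping from lifecycle stages to the claims that require assurance at each stage. The sketch below uses a simplified three-stage lifecycle (design, development, deployment), as named above; the example claims are hypothetical.

    from enum import Enum


    class Stage(Enum):
        DESIGN = "design"
        DEVELOPMENT = "development"
        DEPLOYMENT = "deployment"


    # Each stage prompts its own claims, to be evidenced before moving on.
    claims_by_stage: dict[Stage, list[str]] = {
        Stage.DESIGN: ["Stakeholders were consulted on explainability needs"],
        Stage.DEVELOPMENT: ["Bias mitigation steps were applied and documented"],
        Stage.DEPLOYMENT: ["Monitoring for equitable impact is in place"],
    }

    for stage, claims in claims_by_stage.items():
        print(stage.value, "->", claims)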

Contestability and Redress

The Trustworthy and Ethical Assurance framework and methodology produce assurance cases that explicitly set out how certain normative properties, such as non-discriminatory classification or interpretable decisions, have been established throughout a project’s lifecycle. The artefacts produced (i.e. assurance cases) therefore support procedures for contestability and redress by enabling transparent and accessible communication between stakeholders and affected parties.
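
Because contestability depends on affected parties being able to read and challenge the case, a useful final step is rendering it in plain language. The sketch below is a minimal example of such a rendering; the input structure is the illustrative form used in the earlier sketches, not a published schema.

    def render_case(goal: str, claims: dict[str, list[str]]) -> str:
        """Render a goal, its claims, and their evidence as plain text."""
        lines = [f"Goal: {goal}", ""]
        for claim, evidence in claims.items():
            lines.append(f"Claim: {claim}")
            if evidence:
                lines.extend(f"  Evidence: {item}" for item in evidence)
            else:
                lines.append("  Evidence: none recorded (open to challenge)")
        return "\n".join(lines)


    print(render_case(
        "Decisions made by the DMHT are interpretable",
        {"Clinicians can inspect model rationales": ["usability study report"]},
    ))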

Why we took this approach

Argument-based assurance had previously been used primarily as a methodology for auditing and assurance in safety-critical domains (e.g. aviation), where it largely focused on goals related to technical and physical safety. Our rationale for taking this approach was to build on a well-established method, with existing standards, norms, and best practices, while extending the methodology to include ethical goals such as sustainability, accountability, fairness, explainability, and responsible data stewardship.

Benefits to the organisation

Benefits of Trustworthy and Ethical Assurance include:

  1. aiding transparent communication among stakeholders and building trust,
  2. integrating evidence sources and disparate methods (e.g. model cards),
  3. making the implicit explicit through structured assurance cases,
  4. aiding project management and governance,
  5. supporting ethical reflection and deliberation.

Limitations of the approach

Developing an assurance case requires wide-ranging involvement, deliberation, and expertise across a project team, which may demand significant time and organisational capacity. In large or distributed teams, this can present a barrier to effective project governance.

Updates to this page

Published 6 June 2023