Alan Turing Institute and University of York: Trustworthy and Ethical Assurance Platform
Case study from the Alan Turing Institute and University of York.
Background & Description
The Trustworthy and Ethical Assurance (TEA) platform is an open-source tool designed and developed by researchers at the Alan Turing Institute, in collaboration with the University of York. The purpose of the tool is to support the development and communication of structured assurance arguments that show how data-driven technologies, such as machine learning or AI, adhere to ethical principles and best practices. The outputs of the tool are known as ‘assurance cases’: structured, graphical representations of an argument made about some principle related to a project, technology, or system.
Assurance cases have been widely used in safety-critical domains, such as health, energy, and transport, for many decades. Traditionally, these have focused on goals related to technical and physical safety. The TEA platform extends this approach to consider a broader range of ethical goals.
Users should have a project or system in mind, ideally at an early stage of design, and use the platform to iteratively build a structured assurance case. To support this, the TEA platform guides the user step-by-step through the development of an assurance case. It also provides freely available resources and guidance to help users identify the claims and evidence needed to demonstrate the achievement of a particular outcome or goal, and to help build a supportive community of users. For instance, users can share and comment on publicly available assurance cases, access argument patterns that serve as templates for implementing ethical principles throughout a project’s lifecycle, and, more generally, help build best practices and consensus around assurance standards (e.g. determining appropriate evidence for specific claims).
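To make this structure concrete, the sketch below models the core elements of an assurance case described above (a top-level goal claim, its context, supporting property claims, and attached evidence) as plain Python dataclasses. This is an illustrative sketch only: the class names, fields, and the example fairness argument are hypothetical and do not reflect the TEA platform’s actual data model or API.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: the element names loosely follow the goal/claim/
# evidence structure described above, not the TEA platform's actual data model.

@dataclass
class Evidence:
    description: str
    reference: str  # e.g. a link to a test report, dataset audit, or model card

@dataclass
class PropertyClaim:
    statement: str
    evidence: List[Evidence] = field(default_factory=list)

@dataclass
class GoalClaim:
    statement: str
    context: str
    property_claims: List[PropertyClaim] = field(default_factory=list)

# A minimal, hypothetical fairness argument for an ML-based triage system.
case = GoalClaim(
    statement="The triage model's recommendations are fair across patient groups.",
    context="Proof-of-concept ML triage system for a digital mental health service.",
    property_claims=[
        PropertyClaim(
            statement="Error rates are comparable across protected groups.",
            evidence=[Evidence("Subgroup performance evaluation", "reports/fairness_eval.pdf")],
        ),
        PropertyClaim(
            statement="Training data were reviewed for representativeness.",
            evidence=[Evidence("Bias self-assessment workshop notes", "docs/bias_review.md")],
        ),
    ],
)
```

In the platform itself, this structure is built and visualised interactively rather than written by hand; the sketch simply illustrates how claims and evidence nest under a top-level goal.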
The assurance cases can be used for a wide range of purposes, including internal quality assurance, reflection, and documentation, as well as external assurance (e.g. compliance or auditing).
Why we took this approach
Our rationale for taking this approach was to (a) enable more diverse users and stakeholders to participate in the co-creation of ethical standards and best practices for a wide range of principles (e.g. fairness, explainability), and (b) build on a well-established and validated method for safety assurance, with its existing standards, norms, and best practices, while extending the methodology to include ethical goals and practices. In doing so, the tool also supports and aligns with principles-based regulatory frameworks, such as the UK Office for AI’s pro-innovation approach to AI regulation, which outlines the following principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
We also sought to ensure that our platform was easy to use and accessible, recognising the needs and challenges that many sectors or domains face (e.g. low levels of readiness for data-driven technologies). The platform has therefore been designed to be simple and accessible, but also flexible and extensible, supported by additional guidance freely available on our documentation site.
The open-source nature of the tool also allows for extensibility and community support. For instance, a free-to-access version of the tool is available so that users and organisations can deploy the platform in a local/private environment.
Benefits to the organisation using the technique
- Aiding transparent and structured communication within project teams and among stakeholders to help create a more systematic and open approach to AI assurance;
- Providing a logical structure that supports the integration of evidence from disparate sources (e.g. model cards, international standards), helping users identify and communicate emerging best practices within a single platform (a minimal sketch of this kind of evidence-to-claim mapping follows this list);
- Making the implicit explicit by helping project teams clearly specify the practical steps and decisions taken over the course of a project’s lifecycle, and linking respective claims together into a unified (and evidence-based) argument;
- Aiding project management and governance by providing a flexible tool for transparent documentation of assurance processes;
- Supporting ethical reflection and deliberation through complementary resources (e.g. a structured bias identification and mitigation activity, templates for assuring general ethical principles); and
- Supporting an open-source repository, helping to build a shared knowledge base and improving the usability of the platform for the wider community through shared feedback.
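As a rough illustration of the evidence integration point above, the sketch below gathers evidence items from disparate sources under the claims they support and flags any claim that is still unevidenced, which is the kind of gap a structured argument makes visible. The structure, source names, and references are hypothetical, not the platform’s implementation.

```python
# Hypothetical sketch: evidence items from disparate sources attached to the
# claims they support; source names and references are illustrative only.

claims = {
    "C1: Model performance is documented and reproducible": [
        {"source": "model card", "reference": "model_cards/triage_v2.md"},
        {"source": "internal test report", "reference": "reports/eval_2024_q1.pdf"},
    ],
    "C2: Risk management follows recognised standards": [
        {"source": "international standard", "reference": "ISO/IEC 23894 mapping"},
    ],
    "C3: Affected users were consulted during design": [],  # no evidence yet
}

def unevidenced(case: dict) -> list:
    """Return the claims that have no supporting evidence attached."""
    return [claim for claim, evidence in case.items() if not evidence]

for claim in unevidenced(claims):
    print(f"Evidence still needed: {claim}")
```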
Limitations of the approach
In the ideal case, developing an assurance case requires wide-ranging stakeholder engagement, along with iterative deliberation and input from expertise across a project team. This may require significant time and organisational capacity, and in large or distributed teams it can present a barrier to effective project governance. However, the methodology is highly flexible, and tiered or proportionate approaches can be followed.
Further Links (including relevant standards)
- GitHub Repository: https://github.com/alan-turing-institute/AssurancePlatform
- Documentation Site: https://alan-turing-institute.github.io/AssurancePlatform/
- Policy report showing validation of the tool and argument patterns in digital mental healthcare: https://zenodo.org/record/7107200
- Journal Article describing methodology: https://drive.google.com/file/d/1DHBYWEtrHn2EVAI-b55ub60_H5uvv2eL/view
- A principles-based ethics assurance argument pattern for AI and autonomous systems: https://link.springer.com/article/10.1007/s43681-023-00297-2
- Assurance Case Guidance Version 1: https://scsc.uk/r159:1
Further AI Assurance Information
- For more information about other techniques visit the CDEI Portfolio of AI Assurance Tools: https://www.gov.uk/ai-assurance-techniques
- For more information on relevant standards visit the AI Standards Hub: https://aistandardshub.org/