Best Practice AI: Developing an explainability statement for an AI-enabled medical symptom checker
Case study from Best Practice AI.
Background & Description
This case study focuses on an AI-enabled smart symptom checker. Users can enter their medical symptoms, answer some simple questions, and receive a report suggesting possible causes and next steps.
As part of this, we have developed an AI Explainability Statement to provide users with transparent information regarding the workings of the AI.
How this technique applies to the AI White Paper Regulatory Principles
More information on the AI White Paper Regulatory Principles.
Appropriate Transparency and Explainability
The purpose of the AI Explainability Statement is to provide end users, customers and external stakeholders (including regulators) with transparent information, in line with the GDPR expectations set by the UK’s data protection regulator, the Information Commissioner’s Office (ICO). The document provides, for example, clear insight into the purpose of and rationale for the AI deployment; how it has been developed and what data has been used in design and delivery; governance and oversight procedures; and how ethical and bias issues are considered and managed.
In response to the ICO’s guidance on Explaining decisions made with AI, Best Practice AI worked with a consumer digital healthcare company and experts in AI regulation and law at Simmons & Simmons and Fountain Court to develop an Explainability Statement for an AI-enabled smart symptom checker.
The Explainability Statement is a document providing a non-technical explanation of how the organisation uses AI within its symptom checker, why it is being used, how it was designed, and how it operates. It is aimed at customers, regulators and the wider public, and is available on the company’s website.
The statement consists of a list of questions with accompanying explanations covering a range of topics including the reasons for using AI, how the AI system works, and how it was developed, tested, and updated.
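To illustrate the question-and-answer structure described above, the sketch below models a statement as a simple list of sections rendered as plain text. The topics, questions and explanations are hypothetical examples for illustration only, not the wording of the company's actual statement.

```python
from dataclasses import dataclass


@dataclass
class StatementSection:
    """One question-and-explanation entry in an AI Explainability Statement."""
    topic: str
    question: str
    explanation: str


# Hypothetical entries illustrating the kinds of topics listed above;
# the real statement's wording is published on the company's website.
sections = [
    StatementSection(
        topic="Rationale",
        question="Why do we use AI in the symptom checker?",
        explanation="To suggest possible causes and next steps from reported symptoms.",
    ),
    StatementSection(
        topic="How the AI system works",
        question="How does the AI reach its suggestions?",
        explanation="A non-technical description of the model and the data it draws on.",
    ),
    StatementSection(
        topic="Development, testing and updates",
        question="How was the system developed, tested and updated?",
        explanation="A summary of training data, validation and ongoing monitoring.",
    ),
]

# Render the statement as plain text suitable for publication on a website.
for s in sections:
    print(f"{s.topic}\n  Q: {s.question}\n  A: {s.explanation}\n")
```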
Developing this Explainability Statement involved input from a range of external subject matter experts. During the preparation of the statement, feedback was received from the ICO, which helped inform the process.
To inform the preparation of an Explainability Statement, clients will already have invested in techniques such as various audits (e.g. a bias audit). This approach requires firms to document their internal processes, including governance and risk management. Where there appear to be gaps or potential issues, these are flagged by the external team for future improvement. The main output is a document published on the organisation’s website to provide maximum transparency to external stakeholders.
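As one illustration of the kind of audit input mentioned above, the sketch below compares prediction accuracy across demographic subgroups, a common starting point for a bias audit. The column names, data and flagging threshold are assumptions made for the example; they do not describe the audits actually performed in this case study.

```python
import pandas as pd

# Hypothetical audit data: one row per prediction, with the cause later
# confirmed on review and a protected attribute (age band) for each user.
df = pd.DataFrame({
    "predicted_cause": ["flu", "migraine", "flu", "asthma", "flu", "migraine"],
    "confirmed_cause": ["flu", "migraine", "asthma", "asthma", "flu", "flu"],
    "age_band": ["18-39", "18-39", "40-64", "40-64", "65+", "65+"],
})

df["correct"] = df["predicted_cause"] == df["confirmed_cause"]

# Per-subgroup accuracy: large gaps between subgroups would be flagged for
# review and documented as an input to the Explainability Statement.
by_group = df.groupby("age_band")["correct"].mean()
overall = df["correct"].mean()
gap = by_group.max() - by_group.min()

print(by_group)
print(f"overall accuracy: {overall:.2f}, max subgroup gap: {gap:.2f}")

# Hypothetical threshold: flag if subgroup accuracy differs by more than
# 10 percentage points.
if gap > 0.10:
    print("Flag for review: subgroup accuracy gap exceeds 0.10")
```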
Why we took this approach
There is a clear need to explain AI systems to a wide range of stakeholders, in language that is readily accessible and in a format that can be understood by non-technical readers. Merely claiming that AI is a “black box” is no longer good enough.
There is also a growing regulatory focus around the need for transparent and explainable AI. This is described in the ICO’s Explaining decisions made with AI guidance. AI explainability statements are public-facing documents providing transparency in order to comply with global best practices and AI ethical principles, as well as legislation. In particular, AI explainability statements aim to facilitate compliance with Articles 13, 14, 15 and 22 of GDPR for organisations using AI to process personal data.
In addition, producing an AI explainability statement is a good exercise for an organisation to go through in order to ensure that internal procedures, approaches and governance will bear external scrutiny. It therefore helps companies both to embed internal best practice and to prepare for anticipated future developments in regulatory requirements (e.g. the EU AI Act).
Ultimately, AI explainability statements are produced in order to provide customers and key stakeholders with transparency on the rationale, provision, data provenance, model training, bias control and mitigation, and oversight and control of AI at the relevant firm.
Benefits to the organisation
- Provides transparency for external stakeholders
- Builds and maintains public trust in the AI systems being used
- Provides a benchmark for internal stakeholders on where best practice would suggest process, governance or management improvements
- Demonstrates compliance with relevant regulation
Limitations of the approach
The statement builds on internal tools and processes; whilst it can propose improvements, it cannot make them happen.
Further Links (including relevant standards)
Further AI Assurance Information
- For more information about other techniques visit the OECD Catalogue of Tools and Metrics: https://oecd.ai/en/catalogue/overview
- For more information on relevant standards visit the AI Standards Hub: https://aistandardshub.org/