Holistic AI: Risk Mitigation Roadmaps

Case study from Holistic AI.

Background & Description

Holistic AI’s risk mitigation roadmaps are a set of guides to help enterprises mitigate some of the most common AI risks. Each roadmap, hosted on GitHub with documentation on GitBook, outlines a specific technical risk (efficacy, robustness, privacy, bias, and explainability) and why it matters. For each risk, it provides potential solutions, which typically have two or more steps. The Roadmaps are supplemented by Jupyter notebooks whose Python code can be repurposed as needed to address specific needs. They are therefore best suited to users with programming skills and access to system specifications, who can understand the system’s inputs, outputs, and model parameters.

Relevant Cross-Sectoral Regulatory Principles

Fairness

Bias, a subset of fairness, is defined as the risk that the system treats individuals or groups unfairly. The Roadmaps provide background information on why it is important that bias is identified and mitigated, as well as ways to do this with example Python code, covering both classification and regression tasks. The roadmaps outline widely used metrics in relevant fields, including business psychology, and explain how to interpret their values, alongside additional reading materials.
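As an illustrative sketch of the kind of classification metric the roadmaps describe (not Holistic AI’s actual code), the example below computes the disparate impact ratio: the selection rate of an unprivileged group divided by that of a privileged group. The "four-fifths rule" from business psychology flags values below 0.8. The function name and the toy data are assumptions for illustration only.

```python
import numpy as np

def disparate_impact_ratio(y_pred, group):
    """Ratio of favourable-outcome rates between an unprivileged (0)
    and a privileged (1) group; values below 0.8 fail the
    'four-fifths rule' used in business psychology."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_unpriv = y_pred[group == 0].mean()  # selection rate, unprivileged group
    rate_priv = y_pred[group == 1].mean()    # selection rate, privileged group
    return rate_unpriv / rate_priv

# Toy binary predictions: 1 = favourable outcome.
preds = [1, 0, 1, 0, 1, 1, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
ratio = disparate_impact_ratio(preds, groups)
print(round(ratio, 2))  # → 0.67, below the 0.8 threshold
```

In practice the predictions would come from the model under assessment, and the roadmaps pair such metrics with mitigation steps to apply when a threshold is breached.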

Accountability & Governance

By following a Holistic AI mitigation roadmap, users can examine the effectiveness of their internal control processes and other governance procedures to identify whether additional provisions are needed to support the safe development and deployment of their systems. Carrying out the assessments and mitigations detailed in the roadmaps also contributes to accountability for the system’s outputs and associated impacts, where users can take steps to ensure they are in compliance with legal requirements and best practices.

Safety, Security & Robustness

Robustness is the risk that the system fails in response to changes or attacks. The Mitigation Roadmaps address this risk with solutions for handling dataset shift and for adversarial training. Each robustness roadmap provides an accessible overview of the risk, along with additional resources for understanding it at a deeper level. It also recommends specific metrics for users to investigate the risk themselves, and specific mitigation steps that can be carried out depending on the findings of the self-assessment, both of which are accompanied by example Python code.
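A minimal sketch of the kind of dataset-shift check such a roadmap might recommend (not the roadmaps' actual code): measure how much a model's accuracy degrades when the test inputs are shifted. The nearest-centroid classifier, the synthetic data, and the shift magnitude are all assumptions chosen to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic one-dimensional two-class data: class means at -1 and +1.
X_train = np.concatenate([rng.normal(-1, 0.5, 200), rng.normal(1, 0.5, 200)])
y_train = np.concatenate([np.zeros(200), np.ones(200)])

# Nearest-centroid classifier as a stand-in for any trained model.
centroids = [X_train[y_train == c].mean() for c in (0, 1)]
def predict(X):
    return (np.abs(X - centroids[1]) < np.abs(X - centroids[0])).astype(int)

def accuracy(X, y):
    return (predict(X) == y).mean()

# Held-out test set drawn from the training distribution.
X_test = np.concatenate([rng.normal(-1, 0.5, 200), rng.normal(1, 0.5, 200)])
y_test = np.concatenate([np.zeros(200), np.ones(200)])

acc_clean = accuracy(X_test, y_test)
acc_shift = accuracy(X_test + 0.8, y_test)  # simulated covariate shift
print(f"clean={acc_clean:.2f} shifted={acc_shift:.2f} "
      f"degradation={acc_clean - acc_shift:.2f}")
```

A large accuracy drop under shift would point towards mitigations such as retraining on shifted data or adversarial training, as described in the roadmaps.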

Appropriate Transparency & Explainability

Explainability addresses the risk that an AI system may not be understandable to its users and developers. The Roadmaps provide solutions for improving the explainability of machine learning models, including examples of datasheets for datasets and model cards for model reporting that can be followed as templates. Taking a more technical approach, the roadmap for extracting explanations from machine learning models contains example Python code for in-processing and post-processing methodologies that users can explore and adapt as needed.
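As one illustrative post-processing methodology (a sketch, not taken from the roadmap itself), the example below computes permutation importance: the drop in a fitted model's R² score when each feature is shuffled in turn, breaking its relationship with the target. The least-squares model and the synthetic data, in which only the first feature is informative, are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic regression data: only feature 0 drives the target.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=500)

# A fitted "model": ordinary least squares via numpy's solver.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
def predict(X):
    return X @ coef

def r2(y_true, y_pred):
    ss_res = ((y_true - y_pred) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot

baseline = r2(y, predict(X))

# Permutation importance: score drop when each feature is shuffled.
importance = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importance.append(baseline - r2(y, predict(X_perm)))

print([round(v, 2) for v in importance])
```

Here the informative feature shows a large score drop while the uninformative ones barely move, giving a model-agnostic, after-the-fact explanation of which inputs the model relies on.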

Why we took this approach

Preventing and mitigating the risks of AI systems is becoming increasingly important to protect against avoidable harm. Holistic AI’s Risk Mitigation Roadmaps provide an easy-to-use guide to mitigating these risks, starting enterprises on their risk management journey.

Benefits to the organisation using the technique

The Roadmaps provide easy-to-follow steps toward reducing common technical risks in AI systems, along with example Python code, where relevant, to support users with what can be a daunting task. They also provide reading recommendations and tools, and are grounded in academic research and extensive industry experience.

While the most robust evaluations are those carried out by impartial, independent third parties, internal efforts are nonetheless an important step in the right direction. They can be a first step towards ensuring that appropriate safeguards are in place and that they are effective at mitigating risks, reducing potential legal liability and improving trust in the evaluated systems.

Limitations of the approach

Each AI system is unique and therefore has unique challenges. While these Roadmaps are a good place to start, it is important to take a tailored approach to each system to adequately safeguard against harm.

Updates to this page

Published 19 September 2023