Advai: Implementing a Risk-Driven AI Regulatory Compliance Framework
Case study from Advai.
Background & Description
As AI becomes central to organisational operations, it is crucial to align AI systems and models with emerging regulatory requirements around the world. This use case focuses on integrating a risk-driven approach, aligned with ISO 31000 principles, to assess and mitigate the risks associated with AI implementation.
In the risk stages briefly outlined below, stress testing is influenced by stages 1 and 5, and instrumental to stages 2, 3 and 4. Accurate assignment of risk is built on an understanding of where and how a model fails; stress testing is the technical capability that brings integrity to any effective risk assessment of AI.
- Context Understanding: Assess the AI model type, expected user behaviour, application, and the potential for wider impact within its operational environment; if things go wrong, what and who might be affected?
- Risk Identification: Apply stress testing methods to identify potential AI risks such as data privacy issues, biases and security vulnerabilities.
- Risk Assessment: Evaluate the likelihood of AI model failure in a given context. Stress testing reveals a given vulnerability, but how likely is it to occur under real-world conditions? (A sketch of how stress-test results can feed this assessment follows this list.)
- Risk Treatment: Analysis of the causes of failure suggests strategies to mitigate these risks, such as augmenting training data with examples designed to counter an identified bias.
- Monitoring and Review: Continuously monitor AI systems to detect new risks and assess the effectiveness of risk mitigation strategies. This informs where to target future stress testing and which methods to use.
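To make the flow from stress testing to risk assessment concrete, here is a minimal Python sketch of an ISO 31000-style risk register in which an observed stress-test failure rate feeds a simple likelihood-times-impact score. It is an illustration under stated assumptions, not Advai's tooling: the class, field names and likelihood thresholds are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in an ISO 31000-style risk register (illustrative)."""
    name: str                  # e.g. "demographic bias in approvals"
    impact: int                # 1 (negligible) to 5 (severe), from context understanding
    failure_rate: float = 0.0  # fraction of stress-test cases in which the model failed
    treatment: str = ""        # planned mitigation, e.g. counter-bias data augmentation

    @property
    def likelihood(self) -> int:
        """Map an observed stress-test failure rate onto a 1-5 likelihood band."""
        bands = [0.001, 0.01, 0.05, 0.2]  # hypothetical band thresholds
        return 1 + sum(self.failure_rate > b for b in bands)

    @property
    def score(self) -> int:
        """Classic likelihood x impact score used to prioritise treatment."""
        return self.likelihood * self.impact

# Stages 2-3: stress testing observes a 3% failure rate for a high-impact risk.
register = [Risk("demographic bias in approvals", impact=4, failure_rate=0.03)]
# Stage 4: record the chosen treatment against the risk.
register[0].treatment = "augment training data to counter the identified bias"

# Stage 5: periodic review re-runs the stress tests and re-prioritises the register.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.name}: likelihood={risk.likelihood}, score={risk.score}")
```

The point of the sketch is traceability: each score in the register can be traced back to a measured point of model failure rather than to an unanchored expert guess.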
Some emerging ISO and IEC standards that we incorporate into our AI Alignment approach, alongside ISO 31000, include:
- ISO/IEC TR 24027:2021, Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making
- ISO/IEC 25059:2023, Software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Quality model for AI systems
- ISO/IEC FDIS 5338 (under development), Information technology — Artificial intelligence — AI system life cycle processes
The framework addresses the evolving AI threat landscape and introduces a structured process through which organisations can make their AI systems, and the risks those systems carry, digestible by the people responsible for risk management.
How this technique applies to the AI White Paper Regulatory Principles
More information on the AI White Paper Regulatory Principles.
Safety, Security & Robustness
By integrating ISO 31000 into AI use cases, organisations proactively address the safety, security, and robustness of AI systems, anticipating risks and implementing controls to mitigate them.
Appropriate Transparency & Explainability
Transparency and explainability are enhanced through detailed documentation and record-keeping, and by implementing processes that ensure outcomes are understandable to stakeholders.
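As an illustration of what such record-keeping can look like in practice, the sketch below builds a structured, auditable record for a single AI decision. The schema is entirely hypothetical, chosen only to show the kind of fields that make an outcome explainable to a stakeholder after the fact.

```python
import datetime
import hashlib
import json

def decision_record(model_id: str, model_version: str,
                    inputs: dict, output: str, rationale: str) -> str:
    """Build a structured, auditable record of one AI decision (illustrative schema)."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Store a digest rather than the raw inputs, to avoid retaining personal data.
        "input_digest": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "rationale": rationale,  # plain-language explanation for stakeholders
    }
    return json.dumps(record)

print(decision_record("loan-approver", "2.3.1",
                      {"income": 42000, "term_months": 36}, "declined",
                      "debt-to-income ratio above policy threshold"))
```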
Fairness
The approach involves continuous monitoring and assessment of AI systems to identify and mitigate biases, promoting fairness in AI decision-making.
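One concrete form this monitoring can take is tracking a group fairness metric over batches of live predictions and flagging drift for review. The sketch below uses demographic parity difference; the metric choice and the tolerance are illustrative assumptions, not a prescription.

```python
def demographic_parity_gap(outcomes: list[tuple[str, int]]) -> float:
    """Largest gap in positive-outcome rates between groups.

    `outcomes` is a list of (group_label, prediction) pairs, prediction in {0, 1}.
    """
    counts: dict[str, tuple[int, int]] = {}
    for group, pred in outcomes:
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = {g: p / t for g, (t, p) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical monitoring batch: two groups with different approval rates.
batch = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap = demographic_parity_gap(batch)
if gap > 0.1:  # illustrative tolerance
    print(f"fairness alert: parity gap {gap:.2f} - target stress testing here")
```

An alert like this feeds straight back into the Monitoring and Review stage above, telling the team where to aim the next round of stress testing.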
Accountability & Governance
A risk architecture calibrated to manage modern AI-specific risks ensures effective governance, and default documentation processes maintain accountability. The framework demands that roles and responsibilities are clearly defined in the context of AI risks, with named personnel responsible for each AI model-specific compliance requirement.
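In practice that demand can be enforced with something as simple as an explicit ownership map per model and requirement, plus a check that nothing is left unassigned. The structure below is a hypothetical sketch of the idea; the model names, requirements and roles are invented for illustration.

```python
# Hypothetical ownership map: every model-specific compliance requirement
# must name an accountable role or individual.
OWNERS = {
    "loan-approver": {
        "bias monitoring (ISO/IEC TR 24027)": "Head of Data Science",
        "quality evaluation (ISO/IEC 25059)": "ML Platform Lead",
        "lifecycle documentation": None,  # unassigned: should be flagged below
    },
}

def unassigned(owners: dict) -> list[str]:
    """List every compliance requirement that lacks a named owner."""
    return [f"{model}: {requirement}"
            for model, requirements in owners.items()
            for requirement, owner in requirements.items() if not owner]

for gap in unassigned(OWNERS):
    print(f"governance gap, assign an owner: {gap}")
```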
Why we took this approach
The approach modernises traditional risk management frameworks to encompass the novel risk challenges specific to AI. Many modern efforts seem to throw out decades of otherwise sensible and effective risk management processes. We've chosen instead to align the unique risks posed by AI with the corpus of proven risk management philosophies and methods. Further, by leveraging existing enterprise risk management frameworks and tailoring them to the nuances of AI, we translate the modern challenges of AI risk management into terms and processes organisations are already equipped to manage. This ensures compliance, enhances trust, and safeguards against AI-specific threats.
Benefits to the organisation using the technique
- Enhanced compliance with global AI regulations.
- Improved risk identification and mitigation strategies for AI systems.
- Strengthened trust from stakeholders through robust governance and accountability measures.
- A clear mechanism for addressing and rectifying issues arising from AI system errors or biases.
Limitations of the approach
- Rapidly changing regulatory environments may necessitate frequent updates to the risk framework.
- The technique relies on the accuracy and relevance of the risks assigned in the context of a particular organisation; its effectiveness is only as good as that assignment.
Further Links (including relevant standards)
Further AI Assurance Information
- For more information about other techniques, visit the CDEI Portfolio of AI Assurance Techniques: https://www.gov.uk/ai-assurance-techniques
- For more information on relevant standards, visit the AI Standards Hub: https://aistandardshub.org