Careful AI: PRIDAR Assurance Framework
Case study from Careful AI.
Background & Description
The PRIDAR framework is a tool that helps organisations make more informed and effective decisions about AI development and deployment. PRIDAR simplifies decision-making when an organisation is faced with managing multiple AI suppliers or development options. It provides a clear visual representation, through a dashboard, of the continuing fitness for purpose of AI systems. It is particularly useful in large organisations where responsibility for managing AI is spread across different departments, organisations and suppliers.
It is used to benchmark the opportunities and risks associated with AI system design, deployment, and use. It focuses on evidence of user-led design, assurance, and organisational preparedness. It is therefore well placed to enable an organisation to address bias and discrimination across the AI lifecycle.
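To illustrate the kind of benchmarking dashboard described above, the sketch below scores hypothetical AI systems against PRIDAR-style criteria using a red/amber/green rating. The criterion names, scoring scale, and class shape are illustrative assumptions, not the actual PRIDAR schema:

```python
from dataclasses import dataclass

# Illustrative assumption: a simple red/amber/green scale, not PRIDAR's own scoring.
RAG = {"red": 0, "amber": 1, "green": 2}

@dataclass
class Assessment:
    """One benchmarked AI system or supplier (hypothetical schema)."""
    system: str
    user_centred_design: str
    technology_readiness: str
    formal_assurances: str
    organisational_preparedness: str

    def criteria(self) -> dict:
        # Criteria drawn from the prose above; grouping is an assumption.
        return {
            "user_centred_design": self.user_centred_design,
            "technology_readiness": self.technology_readiness,
            "formal_assurances": self.formal_assurances,
            "organisational_preparedness": self.organisational_preparedness,
        }

def dashboard(assessments: list) -> list:
    """Rank systems by overall RAG score so the riskiest surface first."""
    return sorted(assessments, key=lambda a: sum(RAG[v] for v in a.criteria().values()))

systems = [
    Assessment("Supplier A triage model", "green", "amber", "green", "amber"),
    Assessment("Supplier B triage model", "amber", "red", "amber", "amber"),
]
for a in dashboard(systems):
    print(a.system, a.criteria())
```

Ranking by an aggregate score is one simple way to compare fitness for purpose across suppliers; a real dashboard would present each criterion separately rather than collapsing them into one number.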
PRIDAR is designed to be open, so that it can be embodied in existing standards under an AGPLv3-type licence, e.g. BS 30440 (the standard for the validation of AI in healthcare). See the following links for more information.
How this technique applies to the AI White Paper Regulatory Principles
More information on the AI White Paper Regulatory Principles.
Safety, Security & Robustness
Across industries there are many approaches for measuring safety, security and robustness. PRIDAR does not seek to re-invent these. It simply enables an organisation to report on these facets along with other criteria that are critical to the effective use of AI: user-centred design, technology readiness, formal assurances, and other areas of organisational preparedness.
Transparency & Explainability
Approaches for ensuring transparency and explainability, from counterfactual dashboards to human-alignment studies, are commonplace. We do not seek to re-invent these. They are embodied in the Data, Model and Integration Agreements and User-Centred Design sections of PRIDAR. PRIDAR enables the user to visually compare risk reports for these facets with other criteria such as technology readiness, formal assurances, and other areas of organisational risk. This enables users to put the impact of transparency and explainability risks into perspective.
Fairness
Unlike traditional approaches, PRIDAR provides a holistic solution to the challenge of fairness in AI. It recognises that fairness is not just an algorithmic or technical issue, but one that encompasses user-centred design, assurance, and organisational preparedness.
Accountability & Governance
PRIDAR is a powerful, easy-to-use method of evidencing transparent accountability. It enables better governance by allowing users to benchmark the fitness for purpose and risks of AI. It places emphasis on user accountability, focusing on ethical and legal considerations in AI system deployment. By fostering accountability, PRIDAR helps to ensure that AI systems are developed and used in a manner that respects fairness and avoids organisational bias.
Contestability & Redress
Evidence that those affected by algorithms are enabled to contest and seek redress is embodied in the User-Centred Design and Data, Model and Integration Agreements sections of PRIDAR. PRIDAR covers the whole life cycle of AI development and therefore enables AI users in any part of their organisation to contest and seek redress for potential harms that may occur because of how an AI system is designed, managed or deployed.
Benefits to the organisation
PRIDAR’s practical and inclusive approach to AI management helps organisations to ensure AI contributes to their overall objectives in a manner that respects fairness and avoids bias. Many regulated sectors, e.g. Healthcare, Finance, Banking, Insurance, Transport, Energy, Food, Agriculture, Environment, and Rights Protection, use PRIDAR as a method of prioritising the effort they place on new and existing commitments to AI.
Limitations of the approach
Further Links (including relevant standards)
- Background on PRIDAR https://www.carefulai.com/pridar.html
- Background on PRIDAR and BS 30440 https://www.carefulai.com/bs30440.html
- PRIDAR embodied in BS 30440 https://knowledge.bsigroup.com/products/validation-framework-for-the-use-of-artificial-intelligence-ai-within-healthcare-specification/standard/preview
Further AI Assurance Information
- For more information about other techniques visit the CDEI Portfolio of AI Assurance Tools: https://www.gov.uk/ai-assurance-techniques
- For more information on relevant standards visit the AI Standards Hub: https://aistandardshub.org/
- For more information on cross-sector leadership groups: https://www.carefulai.com/