Careful AI: PRIDAR Assurance Framework

Case study from Careful AI.

Background & Description

The PRIDAR framework is a tool that helps organisations make more informed and effective decisions about AI development and deployment. PRIDAR simplifies decision-making when an organisation must manage multiple AI suppliers or development options. It provides a clear visual representation, through a dashboard, of an AI system's continuing fitness for purpose. It is particularly useful in large organisations where responsibility for managing AI is spread across different departments, organisations and suppliers.

It is used to benchmark the opportunities and risks associated with AI system design, deployment, and use, focusing on evidence of user-led design, assurance, and organisational preparedness. It is therefore well placed to enable an organisation to address bias and discrimination across the AI lifecycle.
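As a minimal sketch of what such a benchmark might look like in practice, the example below scores a single AI option against the criteria named in this case study and maps each score onto a red/amber/green dashboard cell. The 1-5 scoring scale, field names and example figures are illustrative assumptions, not Careful AI's published schema.

```python
# Hypothetical sketch of a PRIDAR-style benchmark record. The criteria names
# come from this case study; the 1-5 scoring scale (1 = high risk, 5 = low
# risk) and field names are illustrative assumptions.
from dataclasses import dataclass, field

CRITERIA = [
    "user_centred_design",
    "technology_readiness",
    "formal_assurances",
    "organisational_preparedness",
]

@dataclass
class PridarAssessment:
    """Scores for one AI supplier or development option."""
    option: str
    scores: dict = field(default_factory=dict)

    def rag_status(self, criterion: str) -> str:
        # Map a numeric score onto a red/amber/green dashboard cell.
        score = self.scores.get(criterion, 0)
        if score >= 4:
            return "GREEN"
        if score >= 2:
            return "AMBER"
        return "RED"

assessment = PridarAssessment(
    option="Supplier A triage model",
    scores={"user_centred_design": 4, "technology_readiness": 2,
            "formal_assurances": 3, "organisational_preparedness": 5},
)
for criterion in CRITERIA:
    print(f"{criterion:28} {assessment.rag_status(criterion)}")
```

A real PRIDAR dashboard would present ratings like these alongside the safety, security and robustness reports discussed below, so that all facets can be weighed together.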

PRIDAR is designed to be open, so that it can be embodied in existing standards under an AGPLv3-type licence, e.g. BS 30440 (the BSI standard for the validation of AI in healthcare).

How this technique applies to the AI White Paper Regulatory Principles

More information on the AI White Paper Regulatory Principles is available in the UK government's AI regulation white paper.

Safety, Security & Robustness

Across industries there are many approaches for measuring safety, security and robustness. PRIDAR does not seek to re-invent these. It simply enables an organisation to report on these facets along with other criteria that are critical to the effective use of AI: user-centred design, technology readiness, formal assurances, and other areas of organisational preparedness.

Transparency & Explainability

Techniques for ensuring transparency and explainability, from counterfactual dashboards to human alignment studies, are commonplace, and PRIDAR does not seek to re-invent them. They are embodied in the Data, Model and Integration agreements and User Centred Design sections of PRIDAR. PRIDAR enables the user to visually compare risk reports for these facets with other criteria, such as technology readiness, formal assurances, and other areas of organisational risk. This enables users to put the impact of transparency and explainability risks into perspective.
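As a loose illustration of that side-by-side comparison, the snippet below prints a transparency and explainability score next to other facet scores for two options. The facet names follow this case study; the suppliers and scores are invented for the example, and this is not how the PRIDAR dashboard is actually rendered.

```python
# Illustrative only: a plain-text rendering of the kind of side-by-side
# comparison described above. Facet names follow the case study; suppliers
# and scores are invented (1 = high risk, 5 = low risk).
FACETS = [
    "transparency_explainability",
    "technology_readiness",
    "formal_assurances",
    "organisational_risk",
]

reports = {
    "Supplier A": {"transparency_explainability": 2, "technology_readiness": 4,
                   "formal_assurances": 3, "organisational_risk": 4},
    "Supplier B": {"transparency_explainability": 4, "technology_readiness": 2,
                   "formal_assurances": 2, "organisational_risk": 3},
}

# Print one column per supplier so facet-level risks can be read side by side.
print(f"{'facet':30}" + "".join(f"{name:>12}" for name in reports))
for facet in FACETS:
    print(f"{facet:30}" + "".join(f"{r[facet]:>12d}" for r in reports.values()))
```

Seen this way, a weak transparency score for one option can be weighed against its strengths on other facets, rather than judged in isolation.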

Fairness

Unlike traditional approaches, PRIDAR provides a holistic solution to the challenge of fairness in AI. It recognises that fairness is not just an algorithmic or technical issue, but one that encompasses user-centred design, assurance, and organisational preparedness.

Accountability & Governance

PRIDAR is a powerful, easy-to-use method of evidencing transparent accountability. It supports better governance by enabling users to benchmark the fitness for purpose and risks of AI. It places emphasis on user accountability, focusing on ethical and legal considerations in AI system deployment. By fostering accountability, PRIDAR helps to ensure that AI systems are developed and used in a manner that respects fairness and avoids organisational bias.

Contestability & Redress

Evidence that those affected by algorithms are enabled to contest decisions and seek redress is embodied in the User Centred Design and Data, Model and Integration agreements sections of PRIDAR. PRIDAR covers the whole life cycle of AI development, and therefore enables AI users in any part of their organisation to contest and seek redress for potential harms that may occur because of how an AI system is designed, managed or deployed.

Benefits to the organisation

PRIDAR’s practical and inclusive approach to AI management helps organisations to ensure AI contributes to their overall objectives in a manner that respects fairness and avoids bias. Many regulated sectors, e.g. Healthcare, Finance, Banking, Insurance, Transport, Energy, Food, Agriculture, Environmental, and Rights Protection, use PRIDAR as a method of prioritising the effort they place on new and existing commitments they have to AI.

Limitations of the approach

Published 19 September 2023