Credo AI: Responsible AI Governance Platform

Case study from Credo AI.

Background & Description

The Credo AI Responsible AI Governance Platform (the “Platform”) is designed to help organisations ensure responsible development and use of AI throughout the entire AI value chain. The Platform enables organisations to assess their AI systems for risks related to fairness, performance, transparency, security, and privacy, and to produce standardised AI/ML transparency artefacts for internal AI governance reviews, external compliance requirements, and independent audits.

How this technique applies to the AI White Paper Regulatory Principles

More information on the AI White Paper Regulatory Principles.

The UK’s principles-based approach to regulating AI (outlined in its “pro-innovation approach to AI regulation”) helps to guide businesses by setting out the key elements of responsible AI design, development and use. The Platform helps organisations assess their AI systems for risks related to these same cross-sectoral principles. Within the Platform, Policy Packs provide modular technical, process, and documentation requirements to guide developers and deployers in adequately documenting important information about their AI use cases, while Credo AI’s AI Registry provides a centralised database that allows organisations to gain comprehensive oversight of multiple AI initiatives.

Safety, Security and Robustness

Based on the provided use case context, the Platform recommends risk scenarios related to safety, security, and robustness, along with mitigating controls that developers and deployers can use to evaluate, address, and monitor the AI system. The Platform also provides guidance on implementing these controls with generally available ML libraries.
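
As a hedged illustration, the sketch below shows one generic robustness control of the kind such guidance might cover: checking how stable a model’s predictions are under small input perturbations. It uses scikit-learn and synthetic data; the specifics are illustrative assumptions, not Credo AI’s implementation.

```python
# Illustrative robustness check: measure how stable a classifier's
# predictions are under small input perturbations. This is a generic
# example of a mitigating control, not Credo AI's implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Perturb each feature with Gaussian noise scaled to its standard deviation.
noise = rng.normal(scale=0.05 * X.std(axis=0), size=X.shape)
stability = (model.predict(X) == model.predict(X + noise)).mean()
print(f"Prediction stability under 5% noise: {stability:.1%}")
```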

Appropriate Transparency and Explainability

Developers and deployers can catalogue and document details about models and datasets, architectural considerations, and risk assessments in the Platform. These details enable users to access, interpret, and understand the AI system’s decision-making processes and potential impacts. The Platform also provides the means to identify and manage an AI system’s stakeholders, establishing accountability mechanisms, and to generate customised reports on different aspects of the AI use case. These reports shed light on the risks, compliance posture, and potential impact of the system, and can be adapted to the needs of different audiences (e.g. impact assessments or model cards), promoting transparency to multiple stakeholders.
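
For illustration only, the sketch below shows the sort of structured record a model-card-style transparency artefact might be assembled from. Every field name here is a hypothetical assumption rather than the Platform’s actual schema.

```python
# Hypothetical structure for a model-card-style transparency artefact.
# Field names are illustrative, not the Platform's actual schema.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    datasets: list[str]
    risk_assessments: dict[str, str]  # risk area -> summary finding
    stakeholders: dict[str, str] = field(default_factory=dict)  # role -> owner

card = ModelCard(
    model_name="loan-approval-classifier",
    version="1.2.0",
    intended_use="Pre-screening of consumer loan applications",
    datasets=["applications-2022-q4", "credit-bureau-extract"],
    risk_assessments={"fairness": "demographic parity gap 0.03 (within threshold)"},
    stakeholders={"model owner": "jane.doe", "governance reviewer": "risk-team"},
)
print(json.dumps(asdict(card), indent=2))
```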

Fairness

To address fairness in AI systems across their life cycle, the Platform recommends risk scenarios (based on the provided use case context) related to potential unwanted bias, discrimination, and fairness issues, along with mitigating controls users can apply to evaluate and govern their use cases. This includes guides for evaluating AI systems for potential unwanted bias and discrimination, covering relevant laws, appropriate fairness definitions, and statistical bias metrics.
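
As a concrete example of one widely used statistical bias metric, the sketch below computes the demographic parity difference (the gap in favourable-outcome rates between groups) on synthetic data. It is a generic illustration, not the Platform’s evaluation code.

```python
# Illustrative computation of demographic parity difference, one of the
# statistical bias metrics such evaluations can use. Data is synthetic.
import numpy as np

rng = np.random.default_rng(42)
y_pred = rng.integers(0, 2, size=1000)    # model decisions (1 = favourable)
group = rng.choice(["A", "B"], size=1000)  # protected attribute

# Selection rate per group: P(decision = favourable | group)
rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
dpd = max(rates.values()) - min(rates.values())
print(f"Selection rates: {rates}")
print(f"Demographic parity difference: {dpd:.3f}")  # 0 = parity
```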

Accountability and Governance

Accountability and governance are critical throughout an AI system’s entire life cycle. The Credo AI Registry enables developers, deployers, and buyers of AI systems to track use cases that need to be governed, including third-party AI tools, effectively monitoring the supply and use of these systems. Governance tasks are also assigned to individual stakeholders, further promoting accountability.
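
The sketch below illustrates, with hypothetical field names and data, how a registry entry with assigned governance tasks might be represented and triaged. It is an assumption-laden illustration, not Credo AI’s data model.

```python
# Hypothetical sketch of an AI registry entry with assigned governance
# tasks; field names are illustrative, not Credo AI's actual data model.
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class RegistryEntry:
    use_case: str
    owner: str
    third_party: bool  # tracks bought/vendor-supplied AI tools too
    risk_level: RiskLevel
    governance_tasks: dict[str, str] = field(default_factory=dict)  # task -> assignee

registry = [
    RegistryEntry("resume screening", "hr-analytics", False, RiskLevel.HIGH,
                  {"bias evaluation": "jane.doe"}),
    RegistryEntry("support chatbot (vendor)", "cx-team", True, RiskLevel.MEDIUM),
]

# Triage: surface high-risk and third-party use cases for governance first.
for entry in registry:
    if entry.risk_level is RiskLevel.HIGH or entry.third_party:
        print(f"Govern next: {entry.use_case} (owner: {entry.owner})")
```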

Contestability and Redress

Policy Packs and controls include guidance instructing developers and deployers of AI systems to include mechanisms for contestability and redress, such as the ability for end users and impacted communities to appeal system outcomes. The Platform also maintains an audit log of all governance actions relevant to each use case, providing transparency and traceability in decision-making processes and helping users and affected parties understand the decisions and outcomes of these systems.
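
As a minimal sketch of the append-only logging idea described above, the example below records governance actions with timestamps so a decision history can be reconstructed. The structure is hypothetical, not the Platform’s actual log format.

```python
# Minimal sketch of an append-only audit log for governance actions;
# the structure is hypothetical, not the Platform's actual log format.
from datetime import datetime, timezone

audit_log: list[dict] = []

def record_action(use_case: str, actor: str, action: str) -> None:
    """Append an immutable record; entries are never edited or removed."""
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "use_case": use_case,
        "actor": actor,
        "action": action,
    })

record_action("loan-approval-classifier", "risk-team", "approved deployment")
record_action("loan-approval-classifier", "appeals-desk", "logged end-user appeal")

# Traceability: reconstruct the decision history for an affected party.
for entry in audit_log:
    print(f'{entry["timestamp"]} {entry["actor"]}: {entry["action"]}')
```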

Why we took this approach

Experience working with enterprises has demonstrated that adopting mature governance procedures, such as rigorous process controls and technical mitigations, can be difficult without visibility into which AI systems an enterprise is already developing, buying, using, or selling. Standardising the information collected about AI systems, together with an interface that reduces the burden (in time, effort, and cognitive load) of collecting, viewing, and analysing use case information, allows governance owners both to decide what information to track and to manage complicated information structures effectively.

Benefits to the organisation using the technique

By maintaining an AI registry, companies can manage multiple AI projects effectively, identify project ownership, and establish who is responsible for reporting on their outcomes (success or failure). A clear and concise dashboard of use case metadata, relevant geographic markets, and known risks enables critical triaging of which AI initiatives to focus on as an organisation rolls out governance. This systematic approach enhances transparency, accountability, and visibility, making it easier for businesses to navigate the rapidly expanding AI landscape with confidence and compliance. Once an AI use case is tracked effectively, stakeholders can move seamlessly to governing it.

Limitations of the approach

Improving the efficiency, standardisation, and quality of tracking AI use case information cannot solve all human challenges associated with standing up a governance framework at an organisation. Building support for responsible AI practices and governance remains a critical prerequisite to deriving value from a centralised AI Registry and downstream governance practices.

After tracking AI initiatives and their associated risks, stakeholders will have to triage high-risk and high-impact use cases to govern, and then implement governance practices including data controls, performance evaluations, evaluations of fairness and bias, steps to ensure regulatory compliance, and reporting to relevant decision-makers. Full governance of the entire AI life cycle, as enabled by tools like the Credo AI Responsible AI Governance Platform, is critical.


Published 19 September 2023