EY: Global Responsible AI Framework - Conducting an AI Governance Review for a global biopharmaceutical company

Case study from EY.

Background & Description

Client Background

Our client, a global biopharmaceutical company, had previously conducted an internal AI audit, which identified several gaps. The most pressing was the absence of an AI governance framework. The firm understood that closing this gap would be a vital step in harnessing the opportunities of AI while identifying and managing AI risks.

Working with EY

Our client subsequently developed a comprehensive AI governance framework embracing responsible AI principles such as transparency, fairness, and human-centricity. However, leadership required assurance that the organisation was moving in the right direction and sought an independent partner to validate its efforts. EY teams worked with the client on a review of its AI governance programme to support the business in maintaining its organisational values, preparing a solid foundation for forthcoming EU regulation, and building employee, patient, and clinician trust, all without impeding innovation.

EY Global Responsible AI framework

We leveraged our global Responsible AI framework to help the client optimise its approach to AI governance, mitigate AI risks, and protect stakeholders. The framework is a flexible set of guiding principles, connected to risk areas and practical actions, that enables assurance over both individual AI products and overarching governance programmes. A multi-disciplinary team of digital ethicists, IT risk practitioners, data scientists, and subject-matter specialists harnessed EY’s Responsible AI framework to evaluate how well the biopharma’s responsible AI principles had been rolled out and understood across the business.
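
The case study does not publish the framework’s internal contents, but its structure can be pictured as a mapping from each guiding principle to the risk areas it covers and the practical assurance actions it prompts. The following is a minimal, purely illustrative sketch in Python: the principle names are those mentioned in this case study, while the risk areas and actions are hypothetical examples rather than the framework’s actual contents.

# Illustrative sketch only: maps guiding principles to risk areas and
# practical assurance actions. Risk areas and actions are hypothetical,
# not the contents of EY's actual framework.
framework = {
    "transparency": {
        "risk_areas": ["opaque model decisions", "undisclosed use of AI"],
        "actions": [
            "document model logic, data sources, and known limitations",
            "disclose AI involvement to affected patients and clinicians",
        ],
    },
    "fairness": {
        "risk_areas": ["biased training data", "unequal performance across groups"],
        "actions": [
            "review team makeup and data provenance for sources of bias",
            "test performance across demographics and geographies",
        ],
    },
    "human-centricity": {
        "risk_areas": ["over-reliance on automated outputs"],
        "actions": ["keep a qualified human in the loop for clinical decisions"],
    },
}

# A review can then walk the mapping, asking for evidence that each
# action has been taken for every AI system in scope.
for principle, detail in framework.items():
    for action in detail["actions"]:
        print(f"{principle}: evidence required for '{action}'")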

Key steps in our process included:

  • Conducting interviews with stakeholders critical to the development or procurement of AI products and services within the organisation
  • Documentation review to assess how successfully the business had mitigated the risks of AI throughout its lifecycle, from problem identification through to modelling, deployment, and ongoing monitoring
  • Governance review of key AI projects, including forecasting, adverse events tracking, and early disease detection, to assess whether the client had developed and implemented AI in line with its responsible AI principles
  • Preparation and delivery of an audit report outlining governance findings and suggested next steps to bridge critical gaps

Crossover with Relevant Cross-Sectoral Regulatory Principles

Safety, Security, and Robustness

As part of this audit, we evaluated the policies and procedures outlining how AI systems and their underlying data are secured against unauthorised access, corruption, and adversarial attack.

Appropriate Transparency & Explainability

Transparency and explainability figured heavily in this audit, given that the AI products and services in question are used in health and medical care and directly affect patients and clinicians. Understanding how the organisation was assessing, and subsequently demonstrating, transparency and explainability in a consistent and reproducible manner across AI projects was key to our findings and recommendations.

Fairness

To gauge the potential introduction of bias, we considered the makeup of development teams and the logic underlying the creation, procurement, and use of AI products and services throughout the organisation and across geographies.

Accountability & Governance

Our detailed review helped the biopharma appreciate the need for major changes in its approach to AI governance. These changes have since been translated into a roadmap guiding the organisation’s future investment and strategy in AI governance and risk management.

Why we took this approach

We adopted a principles-focused approach in this audit to gauge the gap between the client’s aspirational ethical behaviour and the day-to-day reality of how those aspirations had been operationalised. By adopting a consultative audit style focused on the decision and documentation processes surrounding the AI lifecycle, we were able to widen the set of stakeholders with whom we engaged, rather than limiting our interviews to technical staff. Bringing technical, commercial, and risk-oriented stakeholders into conversation with one another, and with our own multi-disciplinary team, enabled a holistic understanding of the biopharma’s approach to AI governance and of the challenges it faced in implementing that governance.

Benefits to the organisation using the technique

Recommendations resulting from our review added value for the organisation by informing the creation of its future AI governance roadmap. Our detailed review helped the biopharma appreciate the need for major changes to its AI governance methodology, including the introduction of an improved third-party AI risk assessment and a new central AI inventory.
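
The case study does not describe the inventory’s design. As a purely illustrative sketch, a central AI inventory can be as simple as one structured record per AI system, capturing ownership, purpose, risk tier, and lifecycle stage; every field name below is hypothetical, and the example entry borrows only the project names mentioned earlier in this case study.

# Hypothetical sketch of a central AI inventory record; the case study
# does not describe the actual design of the client's inventory.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIInventoryEntry:
    system_name: str                # e.g. "adverse events tracking model"
    business_owner: str             # accountable individual or team
    purpose: str                    # intended use, in plain language
    lifecycle_stage: str            # "development", "deployed", or "retired"
    risk_tier: str                  # e.g. "high", "medium", "low"
    third_party: bool               # procured externally vs. built in-house
    last_reviewed: date             # date of the most recent governance review
    known_limitations: list[str] = field(default_factory=list)

# A register of such entries gives governance teams one place to answer:
# what AI do we have, who owns it, and how risky is it?
inventory = [
    AIInventoryEntry(
        system_name="early disease detection model",
        business_owner="clinical analytics team",
        purpose="flag patients for earlier specialist referral",
        lifecycle_stage="deployed",
        risk_tier="high",
        third_party=False,
        last_reviewed=date(2023, 6, 1),
    ),
]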

“The EY audit highlighted a number of gaps in our approach, allowing us to set minimum requirements for business teams working with AI, which we’re already working toward,” says the biopharma company’s AI Governance Lead.

Limitations of the approach

Because of the qualitative nature of the audit, stakeholders must be engaged and forthcoming during interviews for audit teams to make an informed judgement about governance gaps and the recommendations that follow.

We utilised the EY Responsible AI Framework, which was developed from the AI principles and evaluation areas available from academia and across sectors at the time, to identify and assess areas relevant to AI assurance. However, as the engagement was delivered before foundation models and current AI standards existed, we did not consider specific universal test metrics, generative AI risks, minimum documentation standards, or other areas of evaluation from the AI risk management standards available today (e.g. ISO standards, the NIST AI RMF).

Case study: How a global biopharma became a leader in ethical AI | EY UK

EY’s responsible AI framework: How do you teach AI the value of trust

Updates to this page

Published 19 September 2023