Qualitest: Supporting NHS England to Future-Proof Their QA Practices with AI

Case study from Qualitest.

Background & Description

In recent years, industry has seen significant growth and investment in AI and related technologies, with a forecast annual growth rate of 33.2% between 2020 and 2027. Surveyed organisations also reported failure rates of up to 50% on their AI projects; these high failure rates were attributed to AI systems being optimised to minimise their own errors rather than to meet production considerations, which is a type of learned bias in AI.

With many AI-related technology failures, there is rising concern that we may not be fully ready to deal with all the ethical, societal, and quality considerations of an increased reliance on AI. However, AI brings new power to software, and the desire to leverage it and unlock new use cases and potential is growing.

One area pushing heavily to make more use of AI is healthcare, and the market is expected to see a surge in AI-infused healthcare devices and systems in the coming months. Our client, then NHS Digital and now NHS England, understood that an influx of AI-enabled technology in healthcare was expected and wanted future-proofed QA practices to deal with it.

NHS England wanted to:

  • Understand the current state of industry’s awareness and readiness for the assurance of AI- and ML-powered technology.

  • Identify the new approaches, and refinements to existing approaches, required to enable them to assure the quality of AI software compared with traditional software.

  • Understand the ethical, fairness and bias risks of AI-enabled medical technology, and have processes in place to guard against such risks being introduced into their software landscape.

NHS England wanted to ensure that, as AI-enabled solutions arrive in their landscape, they have the collateral, processes, quality enablers and accelerators ready to reliably assure these technologies in the safety-critical context of healthcare. Their main concern was that if they did not take a forward-looking stance and instead tried to learn reactively in-project, there would be a risk of delays to the rollout of vital IT solutions, with internal and potentially clinical implications.

Our Approach

Defects in AI-infused software can relate to the underlying data, the modelling process, and the deployment and use of the software. A comprehensive approach was needed to assure that the end-to-end process of delivering this software succeeds, and Qualitest used its knowledge to equip the client accordingly.

Qualitest deployed specialist Data Scientists in Test who liaised with the client to understand how software is delivered into their landscape and the test and assurance efforts already undertaken, in order to identify where to augment these approaches for AI-enabled technology.

Our consultants provided the client with new strategic reference materials, built upon our significant experience and knowledge, current industry AI Assurance best practice, and a review of published industry and academic literature.

These materials gave the client full details of the lifecycle that Machine Learning systems go through, along with the strategic and tactical quality considerations that must be met at each stage of that lifecycle. We aligned to the CRISP-DM industry-standard lifecycle because, whilst it is not the only industry model for the rollout of intelligent systems, it shares similarities with the traditional software lifecycle and would simplify adapting the client's existing processes.

At each phase of the lifecycle, from business understanding through modelling and deployment, Qualitest gave a detailed picture of where defects could be introduced, how they might manifest, and the quality processes required for defect prevention and testing.
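The idea of per-stage quality considerations can be sketched as a simple gate-tracking structure. This is a minimal illustration only: the stage names follow CRISP-DM, but the example checks are invented for illustration and are not the engagement's actual materials.

```python
# Illustrative per-stage quality gates aligned to the CRISP-DM lifecycle.
# Stage names are CRISP-DM's; the checks are hypothetical examples.
CRISP_DM_QUALITY_GATES = {
    "business_understanding": ["success criteria are measurable and testable"],
    "data_understanding":     ["data sources profiled for coverage and bias"],
    "data_preparation":       ["transformations reviewed and reproducible"],
    "modelling":              ["evaluation metrics agreed before training"],
    "evaluation":             ["performance checked per demographic segment"],
    "deployment":             ["rollback plan and live monitoring in place"],
}

def outstanding_gates(signed_off):
    """Return lifecycle stages whose quality gate has not yet been signed off."""
    return [stage for stage in CRISP_DM_QUALITY_GATES if stage not in signed_off]

# Example: midway through delivery, two stages have been signed off.
remaining = outstanding_gates({"business_understanding", "data_understanding"})
print(remaining)
```

Tracking sign-off per stage mirrors how defect-prevention checks can be enforced before a delivery progresses to the next lifecycle phase.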

Crossover with Relevant Cross-Sectoral Regulatory Principles

Safety, Security & Robustness

In a healthcare setting, safety, security and robustness of solutions are of paramount importance. This was critical in our considerations when informing AI Assurance strategies in this engagement, and we have provided processes that enforce checks at every step of the AI delivery process to ensure a safe, secure rollout.

Appropriate Transparency & Explainability

When making critical decisions, it is vital to understand how the decision has been reached. There are challenges in understanding how certain types of Machine Learning reach their conclusions, so where possible our approach encourages the use of explainable systems. Where that is not possible, we promote a risk-conscious approach, agreed with SMEs, with appropriate oversight on decisions to ensure that key decisions are neither made without appropriate review of contributing factors nor left to unintelligible mechanisms.
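One common form of explainable system is an interpretable linear score, where a prediction can be decomposed into per-feature contributions that a reviewer can inspect. The sketch below assumes a hypothetical linear risk score; the feature names and weights are invented for illustration and do not come from the engagement.

```python
# Hypothetical interpretable linear risk score: each prediction is
# decomposed into per-feature contributions so a reviewer can see which
# factors drove the decision. Weights and features are illustrative.
WEIGHTS = {"age": 0.02, "blood_pressure": 0.01, "prior_admissions": 0.30}
BIAS = -1.5

def explain(features):
    """Return the score and each feature's additive contribution to it."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    return score, contributions

score, parts = explain({"age": 60, "blood_pressure": 140, "prior_admissions": 2})

# Print contributions largest-first: a factor-by-factor account of the decision.
for name, contribution in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {contribution:+.2f}")
print(f"score: {score:+.2f}")
```

For models that are not inherently interpretable, the same review principle applies, but the contributing factors must instead be surfaced through post-hoc explanation techniques or SME-agreed oversight checkpoints.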

Fairness

Fairness is a significant concern for AI in all landscapes, but particularly in healthcare settings, where it is key to understand whether any population or demographic is being unfairly represented. One of our key deliverables to the client was specifically around identifying causes and risks of bias, and providing a suite of fairness tests that should be executed on decision systems to ensure their output is suitable.
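One simple example of the kind of test such a suite might contain is a demographic parity check, which compares positive-decision rates across groups. This is a generic illustration of one well-known fairness metric, not the engagement's actual test suite; the group names and threshold are hypothetical.

```python
# Hypothetical demographic parity check, one of many possible fairness
# tests for a decision system's outputs. Group data is illustrative.

def positive_rate(outcomes):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in positive-decision rates between any two groups."""
    rates = [positive_rate(outcomes) for outcomes in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Example: recorded decisions for two demographic groups.
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 positive
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 positive
}

gap = demographic_parity_gap(decisions)
# A gap above an agreed threshold (say 0.1) would flag the system for review.
print(f"parity gap: {gap:.3f}")
```

Demographic parity is only one lens on fairness; a full suite would combine several metrics, since different fairness definitions can conflict and the appropriate choice depends on the clinical context.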

Accountability & Governance

Our outputs were designed to enhance and empower our customer's capability to provide robust governance of the quality of learning systems, as they do with traditional systems. An implicit part of this is identifying the accountable parties in the development of AI systems: from the domain teams identifying needs to be met with ML components, through the data scientists implementing solutions, to the development teams encapsulating these in products. We provided the client with details of the key roles in the ML lifecycle and what is required of each to deliver, and be accountable for, quality in their part of the ML delivery process.

Contestability & Redress

Our approach allows NHS England to understand the design of the AI services they take on, so that, should the results or methods used by an AI service be contested, they are able to review it and establish a route for redress or alteration of the AI mechanics.

Why we took this approach

NHS England required support from a Quality Assurance company able to help them understand their challenges and create processes for their AI and ML QA strategy. This called for a consultative approach from an organisation highly experienced in both software assurance and successful AI/ML implementations, one that could draw on that experience to work in partnership with them at a strategic level.

Benefits to the organisation

Qualitest provided NHS England with a forward-facing strategy to enable them to safely roll out AI-infused healthcare technology as it comes onto their roadmap in the coming months. We provided detailed documents to enable the creation of robust QA practices by examining and informing on:

  • The nature and types of issues that are more prevalent in learning systems than traditional ones, around which there may be QA process gaps.
  • How to prevent and check for data, modelling and adoption defects in intelligent software.
  • The techniques and approaches suitable for testing for bias and fairness in delivered intelligent software (bias and fairness are two significant threats to learning-powered software that can greatly undermine its effectiveness, which in a healthcare setting is an unacceptable risk).
  • The alignment of QA and Data Science delivery teams for the successful collaborative implementation of learning systems.

Our approaches and strategies for NHS England were also tailored to their specific landscape, to simplify their uptake, rollout and monitoring with minimal deviation from, or learning curve beyond, their current QA and testing approaches.

After engaging with Qualitest, NHS England has taken leading steps to prepare for the increasing adoption of AI and ML technology in the healthcare landscape. They are aware of the unique and diverse challenges that these systems bring and have taken proactive steps to prepare their quality approaches around them. This will maximise their speed-to-market and minimise their risk in rolling out these solutions.

Limitations of the approach

Our approach to NHS England's requirements was one of supporting their existing and future needs around AI Assurance. It supports advancing an organisation's quality management practice and its understanding of the unique challenges that come from using AI and ML services. The limitation of this approach is that it is not designed to provide tests for a particular AI or ML service; rather, it provides holistic guidance on assuring AI-enabled technology as a whole.

Further AI Assurance Information

For more information about other techniques visit the OECD Catalogue of Tools and Metrics: https://oecd.ai/en/catalogue/overview

For more information on relevant standards visit the AI Standards Hub: https://aistandardshub.org/

Updates to this page

Published 6 June 2023