Aival Monitor, Aival Analysis Lab: enabling organisations to understand how a deployed AI product is working over time

Aival Monitor allows organisations actively using AI products to assess how they perform over time, providing continuous vigilance of a deployed solution.

Background & Description

Aival Monitor is a module of the Aival Analysis Lab, with an initial focus on imaging applications such as Radiology AI. Using our software, customers can trust that a product continues to work effectively and fairly in use, with alerts for any drops in performance (for instance, due to data drift). Aival Monitor is designed for use without technical expertise, so a customer without specialist AI training can easily obtain and understand results.

How this technique applies to the AI White Paper Regulatory Principles

More information on the AI White Paper Regulatory Principles

Safety, Security & Robustness

We provide independent analysis of how a deployed AI product works at a customer site, geared towards improving customer understanding of performance and alerting when performance drops. Our software outputs a standardised performance report and enables multiple deployed products to be assessed.

Fairness

We break down our statistical analysis into subgroups, to ensure the AI product works well and fairly for different groups. For medical images, demographic and scanner information is extracted from imaging metadata and stratified statistics are reported. We also allow the user to upload their own subgroup metadata to display richer analysis on the groups relevant to their use case.
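The stratified analysis described above can be illustrated with a minimal sketch. The function below is not Aival's implementation; it assumes each prediction record is a dictionary carrying a ground-truth label alongside metadata fields (such as sex or scanner model, as might be extracted from DICOM headers or supplied by the user), and reports accuracy per subgroup.

```python
from collections import defaultdict

def stratified_accuracy(records, group_key):
    """Compute accuracy per subgroup.

    records:   list of dicts, each with 'prediction', 'label', and
               metadata keys (field names are illustrative).
    group_key: metadata field to stratify by, e.g. 'sex' or
               'scanner_model'.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        group = r[group_key]
        total[group] += 1
        if r["prediction"] == r["label"]:
            correct[group] += 1
    # Accuracy for each subgroup seen in the data
    return {g: correct[g] / total[g] for g in total}
```

In practice the grouping metadata would come from imaging headers (for example, demographic and scanner attributes in DICOM files) or from user-uploaded subgroup files, and richer metrics than accuracy would typically be reported per group.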

Accountability & Governance

By using our software, customers can show that their AI is performing well and not disadvantaging any particular group. This monitoring is independent of the AI vendor.

Why we took this approach

Independent monitoring of AI is important because AI products do not necessarily generalise to new situations, which is crucial for high-stakes applications. Even if a product has been validated at the point of deployment, data drift (for example, due to seasonality, population migration, or software updates) may cause performance to change over time. We developed Aival Monitor to give users of AI greater transparency into how their AI works over time, without needing specialist training.
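To make the idea of data drift concrete, one common and simple check is the population stability index (PSI), which compares a recent sample of some input feature against a baseline. The sketch below is illustrative only and is not how Aival Monitor detects drift; thresholds around 0.2 are conventionally read as significant shift.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Population stability index between a baseline sample ('expected')
    and a recent sample ('actual') of a single numeric feature.
    Values near 0 indicate similar distributions; ~0.2+ suggests drift.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def binned_proportions(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Small epsilon avoids log/division-by-zero for empty bins
        return [(c + 1e-6) / (len(xs) + bins * 1e-6) for c in counts]

    p = binned_proportions(expected)
    q = binned_proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))
```

A monitoring system would track such a statistic (or a performance metric directly) over rolling windows and alert when it crosses a threshold.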

Benefits to the organisation using the technique

Using Aival Monitor, organisations can understand how their deployed AI solutions are working over time, helping to ensure safety and effectiveness are maintained through use. The software alerts a user to any drop in performance below a set threshold, so organisations can spot unexpected changes early and take appropriate remedial action.
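Threshold-based alerting of the kind described above can be sketched as a rolling-window tracker. This is a simplified illustration under assumed names, not Aival's implementation: each prediction outcome updates a fixed-size window, and an alert fires when windowed accuracy falls below the configured threshold.

```python
from collections import deque

class PerformanceAlert:
    """Rolling-window performance tracker that flags drops below a
    set threshold (class and parameter names are illustrative)."""

    def __init__(self, threshold, window=100):
        self.threshold = threshold
        # deque with maxlen discards the oldest outcome automatically
        self.outcomes = deque(maxlen=window)

    def record(self, correct):
        """Record one prediction outcome (True if the AI product was
        correct). Returns True if the windowed accuracy has dropped
        below the threshold."""
        self.outcomes.append(bool(correct))
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.threshold
```

A real monitor would also debounce alerts, log the triggering window, and support metrics beyond accuracy, but the core loop is the same: update, recompute, compare to threshold.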

Limitations of the approach

A limitation of this solution is the need to integrate with the AI product's predictions. This involves accessing the product outputs and directing them to Aival Monitor, for instance through a REST API. For Radiology applications, Aival Monitor supports specialist output formats, including HL7 and DICOM.
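As a rough illustration of the REST-style integration mentioned above, the ingestion side might validate each incoming prediction message before recording it. The field names and error handling below are hypothetical, not Aival's API.

```python
import json

# Illustrative minimal schema for one prediction message
REQUIRED_FIELDS = {"study_id", "model", "prediction"}

def ingest_prediction(raw_body):
    """Parse and validate one JSON prediction message received over a
    REST-style integration. Returns (record, None) on success, or
    (None, error_message) if the body is malformed or incomplete."""
    try:
        record = json.loads(raw_body)
    except json.JSONDecodeError as exc:
        return None, f"invalid JSON: {exc}"
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        return None, f"missing fields: {sorted(missing)}"
    return record, None
```

HL7 and DICOM outputs would need format-specific parsing in place of `json.loads`, but the overall flow (receive product output, validate, hand to the monitor) is the same.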

Further AI assurance information

Updates to this page

Published 26 September 2024