Newton’s Tree’s Federated AI Monitoring Service (FAMOS)

A dashboard for real-time monitoring of healthcare AI products.

Background & Description

Newton’s Tree’s Federated AI Monitoring Service (FAMOS) is a dashboard for real-time monitoring of healthcare AI products. The dashboard is designed to enable users to observe and monitor the quality of the data that goes into the AI, changes to the outputs of the AI, and developments in how healthcare staff use the product. Monitoring these factors is necessary if drift is to be mitigated. Drift is a change that impacts the performance of an AI product, and it means that products that start safe do not necessarily remain safe.
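To make the idea of drift concrete, the following is a minimal, hypothetical sketch (not Newton’s Tree’s implementation) of the kind of input-drift check a monitoring dashboard could run: comparing the distribution of one model input at deployment with a recent window using the population stability index. The feature, threshold, and choice of metric are all illustrative assumptions.

# Hypothetical input-drift check; feature names, data, and the 0.2 threshold
# are illustrative assumptions, not Newton's Tree's actual implementation.
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Compare the distribution of a model input between a baseline window
    (e.g. at deployment) and a recent window."""
    # Bin edges taken from the baseline so both windows are bucketed identically.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    base_prop = np.histogram(baseline, edges)[0] / len(baseline)
    curr_prop = np.histogram(current, edges)[0] / len(current)
    # A small epsilon avoids division by zero for empty buckets.
    eps = 1e-6
    base_prop = np.clip(base_prop, eps, None)
    curr_prop = np.clip(curr_prop, eps, None)
    return float(np.sum((curr_prop - base_prop) * np.log(curr_prop / base_prop)))

# Illustrative use: flag an input whose distribution has shifted since deployment.
baseline_age = np.random.default_rng(0).normal(55, 12, 5000)   # cohort at go-live
recent_age = np.random.default_rng(1).normal(63, 14, 1000)     # cohort this month
psi = population_stability_index(baseline_age, recent_age)
if psi > 0.2:  # a commonly used, but assumption-laden, alerting threshold
    print(f"Input drift suspected: PSI = {psi:.2f}")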

FAMOS is part of Newton’s Tree’s enterprise-wide deployment platform, which allows healthcare organisations to assess and download healthcare AI products to improve the delivery of care. It is a vendor-neutral service, not a re-seller, meaning healthcare organisations and AI vendors can independently negotiate what is best for them.

Relevant Cross-Sectoral Regulatory Principles

Safety, Security & Robustness

Risk should be managed across the whole life-cycle of a healthcare AI product. However, managing the risk of drift - when something in the product or its environment changes the AI system’s impact - is a challenge because the relevant metrics are often either not collected or not collated. This leaves risks open in the post-deployment phase of the AI life-cycle, underserving the principle of safety, security and robustness. The FAMOS dashboard enables organisations to address this by collecting and collating the metrics needed to monitor performance, so that users and AI vendors can intervene in time.

Appropriate Transparency & Explainability

In order to communicate relevant information about the performance of an AI product, performance data must be collected, collated, and finally shared in an understandable manner. That is the purpose of the FAMOS dashboard. It brings the information needed into one place and shares what is relevant to each user. For example, a clinician may want to observe metrics across all AI products in one hospital, whereas a vendor may want to observe metrics for one product across all hospitals. Observing changes in the AI and its environment can help explain how the product is working to those who oversee it or are impacted by it.
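As an illustration of those two views, the sketch below aggregates a shared pool of monitoring records per hospital (a clinician’s view across products) and per product (a vendor’s view across hospitals). The record fields and the agreement metric are assumptions made for illustration and do not reflect the FAMOS data model.

# Hypothetical illustration of the two views described above; record fields
# and the agreement metric are assumptions, not the FAMOS data model.
from collections import defaultdict

records = [
    {"hospital": "Hospital A", "product": "ChestXR-AI", "agreement": 0.91},
    {"hospital": "Hospital A", "product": "Stroke-AI",  "agreement": 0.84},
    {"hospital": "Hospital B", "product": "ChestXR-AI", "agreement": 0.78},
]

def mean_by(records, key):
    """Average the agreement metric grouped by the chosen dimension."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r["agreement"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

print(mean_by(records, "hospital"))  # clinician view: all products in one hospital
print(mean_by(records, "product"))   # vendor view: one product across hospitals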

Fairness

It may not be evident that a product or user is discriminating against a particular population until a change has occurred over time. For example, a change in season may lead to a change in the cohort the healthcare organisation is serving, and a change in AI output or use may follow, requiring swift corrective action. However, this can only be observed if the appropriate data is presented to the right people at the right time.

Accountability & Governance

Effective post-deployment monitoring is a necessary step for appropriate product and clinical governance. An AI vendor needs the right information in order to uphold the quality of their product, and likewise a healthcare organisation needs the right information to uphold the quality of the care it provides. FAMOS provides the necessary metrics on AI inputs, AI outputs, and AI use in real time so that appropriate action can be taken at the right time.

Contestability & Redress

The first step in contesting an AI product’s outputs, or the use of those outputs, is gathering the right evidence. For example, a patient may have reason to believe that a clinician was over-relying on an AI product’s output when making a clinical decision. FAMOS would provide relevant metrics in one place to support an investigation into these concerns: in the scenario above, the dashboard may show signs of automation bias, i.e. a trajectory of increasing agreement between the AI and the clinician.
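The following is a hedged sketch of how such an automation-bias signal could be derived: the monthly rate at which clinicians’ decisions matched the AI’s recommendation, with a simple linear trend fitted over it. The data, field names, and alerting rule are illustrative assumptions rather than FAMOS functionality.

# Hypothetical automation-bias signal: a rising trend in clinician-AI agreement.
# Data, field names, and the alerting rule are illustrative assumptions.
import numpy as np

# (month_index, clinician_agreed_with_ai) pairs; in practice these would come
# from audit records joining AI outputs to final clinical decisions.
cases = [(0, True), (0, False), (1, True), (1, True), (2, True),
         (2, True), (3, True), (3, True), (3, True), (3, False)]

months = sorted({m for m, _ in cases})
rates = [np.mean([agreed for m, agreed in cases if m == month]) for month in months]

# Fit a straight line to the monthly agreement rate; a persistently positive
# slope is one (assumed) indicator worth investigating for automation bias.
slope, _ = np.polyfit(months, rates, 1)
if slope > 0.05:
    print(f"Agreement rate rising by {slope:.0%} per month - review for automation bias")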

Why we took this approach

Newton’s Tree’s services were developed following experience from the frontline of healthcare. The first iteration of the deployment platform was developed by Newton’s Tree’s Chief Executive Officer as a result of his work in the National Health Service (NHS). Deploying algorithms was expensive and time-consuming, so he sought to solve this problem. Further, it became clear that standard practice might not be adequate for maintaining patient safety once some AI products were deployed. As the ambition for solving this problem grew, Newton’s Tree was created to deliver a solution.

As demand for healthcare AI has grown, so has the need to spread these solutions both locally and internationally.

Benefits to the organisation using the technique

Throughout healthcare delivery, patient care and safety are the primary concern. There is evidence that AI can improve patient care, but it should not come at the cost of patient safety. An AI product may work well at the time of deployment, but that does not guarantee it will work three months, or three years, later. Changes in the AI or its environment may change the impact of the AI, and monitoring must be maintained to mitigate this risk. The FAMOS dashboard allows manufacturers and healthcare organisations to monitor changes to AI inputs, AI outputs, and use of the AI in real time. This means that AI use that starts safe can stay safe.

Limitations of the approach

FAMOS is designed to cover only the latter half of the AI life-cycle (post-deployment), and therefore has no impact on the initial stages (development), although these early stages also affect the quality of a product. For example, if an AI product were built with little utility for clinicians, using FAMOS could not help to address that, as creating utility - ensuring what is built is useful to users - comes at the start of the AI life-cycle. However, a product that is not useful is less likely to be purchased and utilised by a healthcare organisation.

https://www.newtonstree.ai/
