Mind Foundry: Using Continuous Metalearning to govern AI models used for fraud detection in insurance
Case study from Mind Foundry.
Background & Description
Continuous Metalearning is an AI governance capability with three core objectives:
- Manage model risks in production: with the ability to visualise, interrogate and intervene to ensure the continued safe use of AI
- Maintain model capabilities post-deployment, so that the model remains as performant as, if not more performant than, it was at deployment
- Expand and augment model capabilities: optimise the model’s learning process so that it learns new patterns and trends in an interpretable and human-centred way
We use this capability to identify, prioritise and investigate fraudulent claims within the insurance industry. Fraudulent claims contribute to increases in policy-holders’ premiums and can result in large losses for insurers. Mind Foundry worked with Aioi Nissay Dowa Insurance (ANDE-UK) to understand the patterns of fraud and embed these into an AI solution to combat fraud more effectively. The model was deployed using a Continuous Metalearning capability, which enabled its risks to be effectively governed while allowing it to learn new types of fraud in production. This improved the quality of cases sent to triage and, ultimately, to investigation.
How this technique applies to the AI White Paper Regulatory Principles
More information on the AI White Paper Regulatory Principles.
Accountability & Governance
Continuous Metalearning enables:
- Management of model performance, ensuring that the model remains as performant as, or more performant than, it was at the point of deployment
  - This also ensures that other model attributes assessed at the point of deployment, such as fairness and bias metrics, are still met in production, given the changes that can come with new data inputs
  - We use state-of-the-art drift detection techniques to understand when the distribution of the data has changed, and to address those changes in order to prevent, or escalate, model failure (an illustrative sketch follows this list)
- The ability for your model to learn new, distinct, yet relevant intelligence in production
  - Utilising meta-learning techniques such as few-shot learning. Traditionally, models need hundreds, if not thousands, of labelled examples to ‘understand’ a new pattern or class. Few-shot learning enables models to learn from a much smaller sample of labelled data (see the second sketch below). Optimal human and AI collaboration makes meta-learning techniques particularly powerful.
- A centralised full history of a model (or models), including data provenance and model lineage (see the third sketch below)
  - Enables proper audits and a full chain of accountability for model behaviour
  - Limits the propagation of biases across an AI system
  - Allows the user to see feature importance (interpretability) at every iteration of the model.
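To make the drift-detection point concrete, the sketch below flags per-feature distribution shift with a two-sample Kolmogorov–Smirnov test and collects drifted features for escalation. The feature name, significance threshold and escalation behaviour are illustrative assumptions, not the specific techniques used in Mind Foundry’s product.

```python
# Illustrative sketch only: per-feature drift detection with a two-sample
# Kolmogorov-Smirnov test. Threshold and feature names are assumptions.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # significance level below which we flag drift (illustrative)


def feature_has_drifted(reference: np.ndarray, production: np.ndarray) -> bool:
    """Compare one feature's training-time and production distributions."""
    result = ks_2samp(reference, production)
    return result.pvalue < DRIFT_P_VALUE


def drifted_features(reference_data: dict, production_data: dict) -> list:
    """Collect features whose production distribution has shifted, so they can
    be escalated for human review or trigger a controlled model update."""
    return [
        name
        for name, reference in reference_data.items()
        if feature_has_drifted(np.asarray(reference), np.asarray(production_data[name]))
    ]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = {"claim_amount": rng.normal(1000, 200, 5000)}
    # Simulate a shift in the value of claims seen after deployment.
    production = {"claim_amount": rng.normal(1300, 250, 1000)}
    print(drifted_features(reference, production))  # -> ['claim_amount']
```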
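The second sketch illustrates the general idea behind few-shot learning with a simple nearest-class-centroid classifier: a new fraud pattern is registered from only a handful of labelled examples. This is a generic illustration under our own assumptions, not the meta-learning method used in the product; the labels and feature vectors are made up.

```python
# Illustrative sketch only: a nearest-class-centroid classifier that learns a
# new class from a handful of labelled examples (few-shot learning in its
# simplest form).
import numpy as np


class FewShotCentroidClassifier:
    def __init__(self):
        self.centroids = {}  # class label -> mean feature vector

    def add_class(self, label: str, support_examples: np.ndarray) -> None:
        """Register a class from a small 'support set' of labelled examples."""
        self.centroids[label] = support_examples.mean(axis=0)

    def predict(self, x: np.ndarray) -> str:
        """Assign x to the class whose centroid is closest (Euclidean distance)."""
        return min(self.centroids, key=lambda label: np.linalg.norm(x - self.centroids[label]))


# Three labelled examples are enough to start flagging a newly observed pattern.
clf = FewShotCentroidClassifier()
clf.add_class("established_fraud_pattern", np.array([[0.9, 0.1], [0.8, 0.2], [0.85, 0.15]]))
clf.add_class("new_fraud_pattern", np.array([[0.2, 0.9], [0.25, 0.8], [0.3, 0.85]]))
print(clf.predict(np.array([0.22, 0.88])))  # -> 'new_fraud_pattern'
```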
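The third sketch shows one way a centralised per-iteration record could capture data provenance, model lineage and feature importance for audit. The schema, field names and values are hypothetical rather than a published Mind Foundry format.

```python
# Illustrative sketch only: a per-iteration lineage record for a centralised
# model registry. The schema, field names and values are hypothetical.
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ModelIterationRecord:
    model_id: str
    version: int
    trained_at: str
    training_data_hash: str                 # data provenance: fingerprint of the training set
    parent_version: Optional[int]           # model lineage: which iteration this was updated from
    metrics: dict = field(default_factory=dict)             # e.g. precision of cases sent to triage
    feature_importance: dict = field(default_factory=dict)  # interpretability at this iteration


def fingerprint(rows: list) -> str:
    """Hash the training data so its provenance can be verified in an audit."""
    return hashlib.sha256(json.dumps(rows, sort_keys=True).encode()).hexdigest()


registry: list = []  # stand-in for a centralised model registry

record = ModelIterationRecord(
    model_id="fraud-triage-model",
    version=2,
    trained_at=datetime.now(timezone.utc).isoformat(),
    training_data_hash=fingerprint([{"claim_amount": 1200, "label": "fraud"}]),
    parent_version=1,
    metrics={"precision": 0.81},
    feature_importance={"claim_amount": 0.4, "claim_history": 0.6},
)
registry.append(record)
print(json.dumps(asdict(record), indent=2))
```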
Continuous Metalearning (CML) spans the post-production lifecycle of a model and is focused on enabling users to effectively govern a portfolio of models.
Why we took this approach
There is a lot of emphasis on ensuring that AI models are designed and developed responsibly, with many toolkits and ethical design processes aimed at these specific stages of the lifecycle. There is less support for ensuring that a model continues to behave responsibly once it has been deployed. This is critical, as both AI models and the environments in which they operate change over time. CML addresses continual monitoring and continuous improvement, and builds transparency into the origins of the model. These principles are central to ensuring an evolving, in-production model remains responsible.
Benefits to the organisation
- The user can continually monitor their models to ensure that performance remains consistent (or improves), with safeguards built in to protect against performance degradation (a sketch of one such safeguard follows this list).
- CML offers continuous support to models in production and automatically updates where it recognises new trends. This is a stronger assurance than statically checking for bias at the point of deployment only - as the model updates with new data, so must the assurance surrounding it.
- The user is empowered to provide a full audit of a model’s origins, having access to the lineage and provenance of the data and models. This traceability will be key in holding companies accountable for (and aware of) how their system of models has been built and subsequently maintained.
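As a hedged illustration of the safeguard mentioned in the first bullet, the sketch below compares a rolling precision figure against the level recorded at deployment and escalates when it degrades beyond a tolerance. The baseline, tolerance, metric choice and numbers are illustrative assumptions.

```python
# Illustrative sketch only: a safeguard that compares live precision against
# the level recorded at deployment and escalates when it degrades too far.

DEPLOYMENT_PRECISION = 0.80  # recorded when the model was signed off for deployment
TOLERANCE = 0.05             # acceptable degradation before escalation


def precision(true_positives: int, false_positives: int) -> float:
    return true_positives / max(true_positives + false_positives, 1)


def safeguard_check(true_positives: int, false_positives: int) -> str:
    """Return the action the monitoring loop should take for this window."""
    current = precision(true_positives, false_positives)
    if current < DEPLOYMENT_PRECISION - TOLERANCE:
        return "escalate: precision has fallen below the deployment baseline"
    return "ok: precision is within tolerance of the deployment baseline"


# One window of triage outcomes (illustrative numbers): 60 confirmed frauds,
# 25 investigated claims that turned out not to be fraudulent.
print(safeguard_check(true_positives=60, false_positives=25))  # -> escalate
```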
Limitations of the approach
- CML was built for models that use batch inference (for example, on an hourly, daily or weekly cadence). Further research is expected to extend CML beyond batch inference models and to broaden the range of models covered.
- Care should be taken to ensure that the solution domain and modelling environment support CML as a technique.
Further Links (including relevant standards)
More information on this case study is available on Mind Foundry’s website.
Further AI Assurance Information
- For more information about other techniques visit the OECD Catalogue of Tools and Metrics: https://oecd.ai/en/catalogue/overview
- For more information on relevant standards visit the AI Standards Hub: https://aistandardshub.org/