ClearBank: Safeguarding generative AI use cases in a regulated fintech banking API

Generative AI, such as Large Language Models (LLMs) with a retrieval augmented generation (RAG) architecture, has potentially transformative use cases in the Fintech domain of banking APIs. This case study focuses on these AI use cases at ClearBank.

Background & Description

Generative AI, such as Large Language Models (LLMs) with a retrieval augmented generation (RAG) architecture, has potentially transformative use cases in the Fintech domain of banking APIs. This case study focuses on these AI use cases at ClearBank, including their impact and how they can be deployed on internal infrastructure securely and transparently, with minimal bias. These use cases include:

  1. Enquiries Monitoring: automated reviewing and scheduling of client transactional requests and handling complaints before they require escalation.
  2. Policy review bot: ingest internal policies to ensure protection against misalignment of key policies at all stages of the regulatory pipeline.
  3. Client screening: review policy and client documents to assess whether the onboarding criteria are met and summarise outputs.
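A RAG pipeline of the kind underlying these use cases can be sketched as follows. This is a minimal illustration, not ClearBank's implementation: the bag-of-words retrieval, the example corpus, and the `answer` function are all illustrative assumptions, and the actual LLM call is left as a placeholder.

```python
from collections import Counter
import math

def tokenize(text: str) -> list[str]:
    return text.lower().split()

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query; return the top-k document IDs.
    q = Counter(tokenize(query))
    ranked = sorted(
        corpus,
        key=lambda doc_id: cosine(q, Counter(tokenize(corpus[doc_id]))),
        reverse=True,
    )
    return ranked[:k]

def answer(query: str, corpus: dict[str, str]) -> dict:
    # Retrieve supporting documents, then pass them to the LLM as context.
    doc_ids = retrieve(query, corpus)
    context = "\n".join(corpus[d] for d in doc_ids)
    # In a real pipeline, `context` and `query` would be sent to the hosted
    # model here; we return the traceable inputs instead.
    return {"query": query, "sources": doc_ids, "context": context}
```

Returning the retrieved document IDs alongside the generated answer is what later enables the source traceability discussed under transparency.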

We routinely evaluate and track each pipeline with retrieval and response evaluation methods to assess document specificity, precision, QA correctness, hallucinations, and toxicity. For summarisation-based use cases we additionally use Recall-Oriented Understudy for Gisting Evaluation (ROUGE). Combined with human evaluation and feedback, this evaluation pipeline gives us a single assurance approach for all our generative AI use cases.
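To make the summarisation metric concrete, the sketch below implements ROUGE-1 (unigram overlap) in plain Python. This is a simplified illustration of the metric only; a production evaluation pipeline would typically use an established library implementation.

```python
from collections import Counter

def rouge_1(candidate: str, reference: str) -> dict[str, float]:
    """ROUGE-1: unigram overlap between a candidate summary and a reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    precision = overlap / sum(cand.values()) if cand else 0.0
    recall = overlap / sum(ref.values()) if ref else 0.0
    f1 = (
        2 * precision * recall / (precision + recall)
        if precision + recall
        else 0.0
    )
    return {"precision": precision, "recall": recall, "f1": f1}
```

Tracking these scores per pipeline run is one way a team could flag summaries that drift away from their source documents.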

How this technique applies to the AI White Paper Regulatory Principles

Safety, Security & Robustness

By running our generative models on internal infrastructure, and by continuously evaluating and monitoring each generative AI pipeline within the ClearBank API, our domain experts can give real-time feedback and tune our models in a safeguarded environment, ensuring the robustness and safety of our systems.

Appropriate Transparency & Explainability

With RAG and internally implemented generative AI models, we can trace the source documents behind each pipeline output for our generative AI use cases, and our model experiment tracking system lets us supply the end user with the exact base model version used.
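One way to carry this traceability through a pipeline is to attach the source document IDs and model version to every generated answer. The record structure below is a hypothetical sketch, not ClearBank's schema; the field names are illustrative.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TracedResponse:
    # Every generated answer carries the evidence needed to audit it later.
    answer: str
    source_documents: list[str]  # document IDs retrieved by the RAG step
    model_version: str           # exact base model from experiment tracking
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def audit_record(self) -> dict:
        # Flatten to a dict suitable for logging to an assurance pipeline.
        return {
            "answer": self.answer,
            "sources": self.source_documents,
            "model_version": self.model_version,
            "generated_at": self.generated_at,
        }
```

Logging such records per response means any output can later be traced back to the documents and model version that produced it.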

Accountability & Governance

In addition to implementing generative AI models in a safeguarded environment, we also have in place an organisation-wide generative AI policy and procedures, including appropriate training, and an ethics working group.

Fairness

Along with quantitative tracking of hallucinations and toxicity, we ensure that a human is always in the loop when evaluating answers. We provide domain experts the opportunity to give feedback on any biases to the product owner of each pipeline to allow for model tuning and refinement of input documents.
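The human-in-the-loop feedback route described above could be modelled as a simple router that groups reviewer reports by the product owner responsible for each pipeline. All names and identifiers here are hypothetical; this is a sketch of the workflow, not ClearBank's tooling.

```python
from dataclasses import dataclass

@dataclass
class BiasFeedback:
    pipeline: str   # e.g. "client-screening" (illustrative name)
    reviewer: str
    output_id: str
    issue: str      # free-text description of the suspected bias

class FeedbackRouter:
    """Collects reviewer feedback and groups it per pipeline product owner."""

    def __init__(self, owners: dict[str, str]):
        self.owners = owners  # maps pipeline name -> product owner
        self.queue: dict[str, list[BiasFeedback]] = {}

    def submit(self, fb: BiasFeedback) -> str:
        # Route the report to the owner of the pipeline it concerns.
        owner = self.owners[fb.pipeline]
        self.queue.setdefault(owner, []).append(fb)
        return owner

    def pending(self, owner: str) -> list[BiasFeedback]:
        # Reports awaiting review, used to drive model tuning and
        # refinement of input documents.
        return self.queue.get(owner, [])
```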

Why we took this approach

Generative AI helps regulated financial institutions such as ClearBank provide an optimal and trustworthy service. We achieve a high level of service by optimising processes, improving security, and ensuring regulatory compliance through a single assurance model monitoring approach for all our generative AI use cases. This approach allows us to implement generative AI in a way that best protects our clients and systems, with a combined set of standards that allows us to unlock value.

Benefits to the organisation using the technique

A single, continuous evaluation of generative AI models at ClearBank, run on internal infrastructure with traceability and transparency, allows us to make better decisions, ensure regulatory alignment, and optimise processes securely and with minimal bias.

Limitations of the approach

This approach involves a learning curve for end users in interpreting how models are evaluated, and improving a model for a new use case is an iterative process. Beyond the learning curve, developing use cases in parallel with the infrastructure teams that support them can be challenging.

https://clearbank.github.io/

https://www.iso.org/standard/74438.html

https://www.iso.org/standard/81230.html

https://mlflow.org/docs/latest/llms/index.html

https://docs.databricks.com/en/generative-ai/generative-ai.html



Published 9 April 2024