Deloitte: Enhanced due diligence processes for third-party models
Case study from Deloitte.
Background & Description
This case study focuses on an AI model used for predictive analytics in the financial services sector.
We were engaged by a client’s Risk Committee, which wanted to gain comfort over a machine learning model developed and deployed by a third-party service provider before approving a business request to collaborate with that provider on the development of financial products to be marketed to the client’s customer base.
Our objective was to support the client’s due diligence process in relation to the third-party service provider and its machine learning model. Our role was to perform a targeted review of specified processes and controls related to the design, development and functioning of the model, supporting the client’s assessment of potential reputational, regulatory and operational risks arising from the use of the AI system and its outputs. The results of our review fed into the client’s new product approval process.
In performing this engagement, we used internally developed procedures to evaluate certain aspects of the service provider’s AI control framework, with a focus on:
- Data sources, data quality and data management (an illustrative data-quality check is sketched after this list)
- Modelling and development processes
- Operation of the model and the production of outputs
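The review procedures themselves were internally developed and are not reproduced here. To make the first focus area concrete, below is a minimal sketch of the kind of automated data-quality check a reviewer might expect a model owner to evidence. The column names, thresholds and pandas-based implementation are illustrative assumptions, not the service provider’s actual controls.

```python
import pandas as pd

# Illustrative data-quality checks of the kind a due diligence review
# might look for evidence of. Column names and thresholds are assumptions.
EXPECTED_COLUMNS = {"customer_id", "income", "transaction_count", "region"}
MAX_NULL_FRACTION = 0.05  # tolerate at most 5% missing values per column


def run_data_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality findings."""
    findings = []

    # Schema check: all expected columns are present
    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        findings.append(f"Missing expected columns: {sorted(missing)}")

    # Completeness check: null fraction per column stays within tolerance
    for col in df.columns:
        null_frac = df[col].isna().mean()
        if null_frac > MAX_NULL_FRACTION:
            findings.append(f"Column '{col}' is {null_frac:.1%} null")

    # Uniqueness check: the primary key must not contain duplicates
    if "customer_id" in df.columns and df["customer_id"].duplicated().any():
        findings.append("Duplicate customer_id values found")

    # Range check: income should be non-negative
    if "income" in df.columns and (df["income"] < 0).any():
        findings.append("Negative values found in 'income'")

    return findings
```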
How this technique applies to the AI White Paper Regulatory Principles
Safety, Security & Robustness
Our review considered the robustness of the algorithm development process and the reliability of the algorithm in operation.
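The engagement relied on a review of processes and controls rather than code-level testing. Purely as an illustrative assumption, the sketch below shows one simple way reliability of operation can be probed: checking that small input perturbations do not flip a model’s predictions. The `predict` interface, noise scale and use of NumPy are hypothetical, not part of the engagement described here.

```python
import numpy as np


def perturbation_stability(predict, X, noise_scale=0.01, n_trials=20, seed=0):
    """Fraction of rows whose predicted class is unchanged under small
    Gaussian input noise. `predict` maps a 2-D NumPy array of rows to an
    array of class labels; `X` is the evaluation dataset.
    """
    rng = np.random.default_rng(seed)
    baseline = predict(X)
    stable = np.ones(len(X), dtype=bool)
    for _ in range(n_trials):
        # Scale the noise to each feature's standard deviation
        noisy = X + rng.normal(scale=noise_scale * X.std(axis=0), size=X.shape)
        stable &= (predict(noisy) == baseline)
    return stable.mean()  # 1.0 means fully stable under this perturbation
```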
Transparency & Explainability
Our review considered the transparency of the algorithms and, in particular, the ability of the service provider to explain outputs in terms of the inputs to the process.
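The case study does not disclose which explainability methods the service provider used. As one common, model-agnostic way to relate outputs to inputs, the sketch below uses scikit-learn’s permutation importance: shuffling an input column and measuring the drop in predictive performance indicates how strongly the output depends on that input. The random-forest model and synthetic data are stand-ins for the provider’s actual system.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical stand-in for the third party's model and data.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each input degrade
# predictive performance? Larger drops mean the output depends more
# heavily on that input.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean,
                                    result.importances_std)):
    print(f"feature_{i}: importance {mean:.3f} +/- {std:.3f}")
```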
Accountability & Governance
Our review informed the wider risk management process conducted by the client’s Risk Committee and promoted good governance.
Why we took this approach
The client did not require a formal assurance opinion but wanted to verify certain aspects of the functioning of the AI system, including the use of publicly available data in the model, the suitability of the people, processes and systems used to produce outputs, and the alignment of front-to-back processes with industry good practice.
There were no regulatory requirements for the service provider to meet, and our scope of work focused on the machine learning model itself rather than on related risk assessments; we therefore chose a standard ‘Review and Recommend’ format to formally verify those aspects.
Benefits to the organisation
- Given the complexity of AI systems, organisations should perform enhanced due diligence on third parties providing them with AI-based services. Through this engagement, the client obtained an independent assessment of certain key aspects of the third-party service provider’s AI control framework in the absence of any regulatory requirement for the service provider to perform a self-assessment or demonstrate compliance with any regulation or standard.
- The client was able to draw on our expertise in the subject area to help define the review procedures.
- The client was able to gain an understanding of the processes and controls in place at the service provider which helped them to make a more informed decision on the prospect of collaboration with that service provider.
Limitations of the approach
- Supplier-imposed restrictions on the availability of information can be an issue (although this was not the case in this engagement).
- Although the service provider gave us full access to their documentation and provided us with a dedicated resource to talk us through their processes and controls, those processes and controls were not well documented.
- We noted that the service provider did not fully understand the type, quantity and quality of documentation required to support an AI assurance engagement.
Further Links (including relevant standards)
Our review was based on internally developed procedures and did not result in a formal Assurance Opinion.
Further AI Assurance Information
- For more information about other techniques visit the CDEI Portfolio of AI Assurance Techniques: https://www.gov.uk/ai-assurance-techniques
- For more information on relevant standards visit the AI Standards Hub: https://aistandardshub.org/