Guidance

Managing your artificial intelligence project

Understand how to manage a project which uses artificial intelligence.

This guidance is part of a wider collection about using artificial intelligence (AI) in the public sector.

Once you have planned and prepared for your AI implementation, you will need to make sure you effectively manage risk and governance.

This guidance is for people responsible for:

  • setting governance
  • managing risk

Governance when running your AI project

The Alan Turing Institute (ATI) has written guidance on how to use AI ethically and safely.

Safety

Governance in safety makes sure the model shows no signs of bias or discrimination. You can consider whether:

  • the algorithm is performing in line with safety and ethical considerations
  • the model is explainable
  • there is an agreed definition of fairness implemented in the model (one such definition is sketched after this list)
  • the data use aligns with the Data Ethics Framework
  • the algorithm’s use of data complies with privacy and data processing legislation
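
As an illustration of the fairness consideration above, the sketch below computes the demographic parity difference: the gap in positive-outcome rates between groups, which is one common definition of fairness. It is a minimal sketch only; the column names, the data and the 0.05 threshold are assumptions, and your own definition of fairness should be agreed through your governance process.

```python
# Minimal sketch of a demographic parity check. The column names
# ("approved", "group"), the data and the 0.05 threshold are all
# hypothetical assumptions.
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  outcome: str, group: str) -> float:
    """Largest gap in positive-outcome rates between any two groups."""
    rates = df.groupby(group)[outcome].mean()
    return float(rates.max() - rates.min())

predictions = pd.DataFrame({
    "approved": [1, 1, 1, 0, 1, 0, 0, 0],
    "group":    ["a", "a", "a", "a", "b", "b", "b", "b"],
})

gap = demographic_parity_difference(predictions, "approved", "group")
print(f"Demographic parity difference: {gap:.2f}")
if gap > 0.05:  # threshold agreed through your governance process
    print("Gap exceeds the agreed threshold - investigate before deployment.")
```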

Purpose

Governance in purpose makes sure the model achieves its purpose and business objectives. You can consider:

  • whether the model solves the problem identified
  • how and when you will evaluate the model
  • whether the user experience aligns with existing government guidance

Accountability

Governance in accountability provides a clear accountability framework for the model. You can consider:

  • whether there is a clear and accountable owner of the model (a simple responsibility record is sketched after this list)
  • who will maintain the model
  • who has the ability to change and modify the code
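
One lightweight way to answer these questions is to keep a structured responsibility record alongside the model. The sketch below is a hypothetical example: the fields, names and review date are illustrative assumptions, not a prescribed format.

```python
# Minimal, illustrative responsibility record for a model.
# Every name and field here is a hypothetical example.
from dataclasses import dataclass

@dataclass(frozen=True)
class ResponsibilityRecord:
    model_name: str
    accountable_owner: str     # the single, clearly accountable owner
    maintainers: list[str]     # who will maintain the model
    code_approvers: list[str]  # who can change and modify the code
    next_review: str           # when the record itself is re-checked

record = ResponsibilityRecord(
    model_name="example-triage-model",
    accountable_owner="Head of Data Science",
    maintainers=["data-science-team"],
    code_approvers=["data-science-team", "lead-engineer"],
    next_review="2020-06-10",
)
print(record)
```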

Testing and monitoring

Governance in testing and monitoring makes sure a robust testing framework is in place. You can consider:

  • how you will monitor the model’s performance (see the monitoring sketch after this list)
  • who will monitor the model’s performance
  • how often you will assess the model
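
As a concrete illustration, the sketch below compares the model’s current accuracy against the accuracy recorded when the model was approved, and flags degradation for review. The baseline figure, tolerance and example data are hypothetical assumptions; in practice the alert would go to the named monitor rather than to standard output.

```python
# Minimal, illustrative performance check: compare current accuracy
# against the baseline recorded at approval. The baseline, tolerance
# and example data are hypothetical assumptions.

BASELINE_ACCURACY = 0.91  # recorded when the model was approved
TOLERANCE = 0.03          # degradation allowed before a review

def accuracy(predictions: list[int], labels: list[int]) -> float:
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def check_performance(predictions: list[int], labels: list[int]) -> None:
    current = accuracy(predictions, labels)
    print(f"Current accuracy: {current:.2f} (baseline {BASELINE_ACCURACY:.2f})")
    if current < BASELINE_ACCURACY - TOLERANCE:
        print("Performance has degraded beyond tolerance - trigger a review.")

# Example run on a small, made-up batch of recent predictions.
check_performance([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 0, 1])
```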

Public narrative

Governance in public narrative protects against reputational risks arising from the application of the model. You can consider whether:

  • the project fits with the government organisation’s use of AI
  • the model fits with the government organisation’s policy on data use
  • the project fits with how citizens and users expect their data to be used

Quality assurance

Governance in quality assurance makes sure the code has been reviewed and validated. You can consider whether:

  • the team has validated the code (example tests follow this list)
  • the code is open source
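
Validation is easiest to evidence as automated tests that run on every code change. The sketch below shows two pytest-style checks against predict_risk, a hypothetical scoring function standing in for your own model interface.

```python
# Minimal, illustrative validation tests, written for pytest.
# predict_risk is a hypothetical stand-in for your model's interface.
import pytest

def predict_risk(income: float, debts: float) -> float:
    """Toy scoring function used only to make the tests runnable."""
    if income <= 0:
        raise ValueError("income must be positive")
    return min(debts / income, 1.0)

def test_score_is_bounded():
    # Scores should stay in [0, 1], however extreme the input.
    assert 0.0 <= predict_risk(income=1.0, debts=1e9) <= 1.0

def test_rejects_invalid_input():
    # Invalid input should fail loudly rather than score silently.
    with pytest.raises(ValueError):
        predict_risk(income=0.0, debts=100.0)
```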

Managing risk in your AI project

Risk: Project shows signs of bias or discrimination
How to mitigate: Make sure your model is fair and explainable, and that you have a process for monitoring unexpected or biased outputs.

Risk: Data use is not compliant with legislation, guidance or the government organisation’s public narrative
How to mitigate: Consult the guidance on preparing your data for AI.

Risk: Security protocols are not in place to maintain confidentiality and uphold data integrity
How to mitigate: Build a data catalogue to define the security protocols required.

Risk: You cannot access data, or it is of poor quality
How to mitigate: Map the datasets you will use at an early stage, both within and outside your government organisation. Then assess each dataset against criteria such as accuracy, completeness, uniqueness, relevancy, sufficiency, timeliness, representativeness, validity and consistency (a simple assessment of this kind is sketched after this table).

Risk: You cannot integrate the model
How to mitigate: Include engineers early in the building of the AI model to make sure any code developed is production-ready.

Risk: There is no accountability framework for the model
How to mitigate: Establish a clear responsibility record that defines who has accountability for the different areas of the AI model.
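
As an illustration of the data quality assessment in the table above, the sketch below scores a dataset against three of the listed criteria: completeness, uniqueness and timeliness. The column names, example data and the one-year cut-off are hypothetical assumptions.

```python
# Minimal, illustrative data quality assessment covering three of the
# criteria named above. Column names, data and thresholds are all
# hypothetical assumptions.
import pandas as pd

def assess_quality(df: pd.DataFrame, id_column: str, date_column: str,
                   as_of: str, max_age_days: int = 365) -> dict[str, float]:
    cutoff = pd.Timestamp(as_of) - pd.Timedelta(days=max_age_days)
    return {
        # Completeness: share of cells that are populated.
        "completeness": float(1.0 - df.isna().mean().mean()),
        # Uniqueness: share of identifier values that are distinct.
        "uniqueness": df[id_column].nunique() / len(df),
        # Timeliness: share of records newer than the agreed cut-off.
        "timeliness": float((pd.to_datetime(df[date_column]) >= cutoff).mean()),
    }

records = pd.DataFrame({
    "record_id":  [1, 2, 2, 4],
    "updated_at": ["2019-05-01", "2019-05-02", "2016-01-01", "2019-04-30"],
})
print(assess_quality(records, "record_id", "updated_at", as_of="2019-06-10"))
```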

Published 10 June 2019