Planning and preparing for artificial intelligence implementation
Guidance to help you plan and prepare for implementing artificial intelligence (AI).
This guidance is part of a wider collection about using artificial intelligence (AI) in the public sector.
Once you have assessed whether AI can help your team meet your users’ needs, this guidance sets out the steps you should take to plan and prepare before implementing AI. As with all technology projects and programmes, you should follow the Technology Code of Practice.
This guidance is for anyone responsible for:
- deciding how a project runs
- building teams and planning implementation
Planning your project
As with all projects, you need to make sure you’re hypothesis-led and can constantly iterate to best meet your users’ needs.
You should integrate your AI development with your wider project phases.
- Discovery - consider your current data state, decide whether to build, buy or collaborate, allocate responsibility for AI, assess your existing data, build your AI team, get your data ready for AI, and plan your AI modelling phase.
- Alpha - build and evaluate your machine learning model.
- Beta - deploy and maintain your model.
You should consider AI ethics and safety throughout all phases.
Significant time is needed to understand the feasibility of using your data in a new way. This means the discovery phase tends to be longer and more expensive than for services without AI.
Your data scientists may be familiar with a lifecycle called CRISP-DM (Cross-Industry Standard Process for Data Mining) and may wish to integrate parts of it into your project.
Start your discovery phase
Discovery can help you understand the problem that needs to be solved.
Assess your user needs and data sources
You should:
- thoroughly understand the problem and the needs of different users
- assess whether AI is the right tool to address the user needs
- understand the processes and how the AI model will connect with the wider service
- consider the location and condition of the data you will use
Assess your existing data
To prepare for your AI project, you should assess your existing data. Training an AI system on error-strewn data can produce poor results because of:
- the dataset not containing clear patterns for the model to learn from when making a prediction
- the dataset containing clear but accidental patterns, resulting in the model learning biases
You can use a combination of accuracy, completeness, uniqueness, timeliness, validity, relevancy, representativeness, sufficiency and consistency to see whether the data is of high enough quality for an AI system to make predictions from.
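As a sketch of what some of these quality dimensions can look like in practice, the illustrative Python below computes completeness, uniqueness and validity scores for a toy dataset. The field names, records and validity rule are invented for illustration; real projects would typically run equivalent checks with pandas or dedicated data profiling tools on far larger data.

```python
# Illustrative data quality checks (completeness, uniqueness, validity)
# on a toy list-of-dicts dataset.

records = [
    {"id": 1, "age": 34, "region": "north"},
    {"id": 2, "age": None, "region": "south"},   # missing age
    {"id": 2, "age": 51, "region": "south"},     # duplicate id
    {"id": 4, "age": 130, "region": "east"},     # implausible age
]

def completeness(rows, field):
    """Share of rows where the field is present and not None."""
    return sum(r.get(field) is not None for r in rows) / len(rows)

def uniqueness(rows, field):
    """Share of distinct values among the non-null values of the field."""
    values = [r[field] for r in rows if r.get(field) is not None]
    return len(set(values)) / len(values)

def validity(rows, field, check):
    """Share of non-null values that pass a validity rule."""
    values = [r[field] for r in rows if r.get(field) is not None]
    return sum(check(v) for v in values) / len(values)

age_complete = completeness(records, "age")   # 3 of 4 rows -> 0.75
id_unique = uniqueness(records, "id")         # 3 distinct of 4 -> 0.75
age_valid = validity(records, "age", lambda a: 0 <= a <= 120)  # 2 of 3
```

Scores like these give you something concrete to discuss with data scientists when deciding whether the data is ready for modelling.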
When assessing your AI data, it’s useful to collaborate with someone who has deep knowledge of your data, such as a data scientist. They will be familiar with the best practice for measuring, cleaning and maintaining good data standards for ongoing projects. Make your data proportionate to user needs and understand the limitations of the data to help you assess your data readiness.
Questions for you to consider with data scientists are:
- do you have enough data for the model to learn from?
- do you understand the onward effects of using data in this way?
- is the data accurate and complete and how frequently is the data updated?
- is the data representative of the users the model’s results will impact?
- was the data gathered using suitable, reliable, and impartial sources of measurement?
- is the data secure and do you have permission to use it?
- what modelling approaches could be suitable for the data available?
- do you have access to the data and how quickly can you access it?
- where is the data located?
- what format is the data in and does it require significant preparation to be ready for modelling?
- is your data structured, meaning you can store it in a table, or unstructured, such as emails or webpages?
- are there any constraints on the data - for example does it contain sensitive information such as home addresses?
- can you link key variables within and between datasets?
If you’re unsure about your use of data, consult the Data Ethics Framework guidance to check your project is a safe application and deployment of AI.
Build your team for AI implementation
As with other projects, your team should be multidisciplinary, with a diverse combination of roles and skills to reduce bias and make sure your results are as accurate as possible. When working with AI you may need specialist roles such as a:
- data architect to set the vision for the organisation’s use of data, through data design, to meet business needs
- data scientist to identify complex business problems and draw value from the data - having at least 2 data scientists working on a project often allows them to better collaborate and validate AI experiments
- data engineer to develop the delivery of data products and services into systems and business processes
- ethicist to provide ethical judgements and assessments on the AI model’s inputs
- domain expert who knows the environment where you will be deploying the AI model results - for example if the AI model will be investigating social care, collaborate with a social worker
You may not need all of these roles from the very beginning, but this may change as the work progresses. You may want to break up your discovery into smaller phases so you can evaluate what you are learning.
It can be useful for your team to have:
- experience of solving an AI problem similar to the one you’re solving
- commercial experience of AI - understanding of machine learning techniques and algorithms, including production deployments at scale
- an understanding of cloud architecture, security, scalable deployment and open source tools and technologies
- hands-on experience of major cloud platforms
- experience with containers and container orchestration - for example Docker and Kubernetes
- experience in or strong understanding of the fundamentals of computer science and statistics
- experience in software development - for example Python, R or Scala
- experience building large scale backend systems
- hands-on experience with a cluster-computing framework - for example Hadoop or Spark
- hands-on experience with data stores - for example SQL and NoSQL databases
- technical understanding of streaming data architectures
- experience of working to remove bias from data
Managing infrastructure and suppliers
When preparing for AI implementation, you should identify how you can best integrate AI with your existing technology and services.
It’s useful to consider how you’ll manage:
- data collection pipelines to support reliable model performance and a clean input for modelling, such as batch upload or continuous upload
- data storage - the type of database you choose will depend on the complexity of the project and the different data sources required
- data mining and data analysis of the results
- any platforms your team will use to collate the technology used across the AI project to help speed up AI deployment
When choosing your AI tools, you should bring in specialists, such as data scientists or technical architects, to assess what tools you currently have to support AI.
Follow the Cloud First policy when setting up your infrastructure.
Consider the benefits of AI platforms
A data science platform is a type of software tool which helps teams connect all of the technology they require across their project workflow, speeding up AI deployment and increasing transparency and oversight of AI models.
When deciding on whether to use a data science platform, it’s useful to consider how the platform can:
- provide access to flexible computation, giving teams secure access to the power needed to process large amounts of data
- help your team build workflows for accessing and preparing datasets and allow for easy maintenance of the data
- provide common environments for sharing data and code so the team can work collaboratively
- let your teams clearly share their output through dashboards and applications
- provide a reproducible environment for your teams to work from
- help control and monitor project-specific or sensitive permissions
Prepare your data for AI
After you’ve assessed your current data quality, you should prepare your data to make sure it is secure and unbiased. You may find it useful to create a data factsheet during discovery to keep a record of your data quality.
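There is no single prescribed format for a data factsheet, but a lightweight, serialisable record you can keep under version control is a reasonable starting point. The sketch below shows one possible shape using Python’s standard library; all field names and values are illustrative assumptions, not a required schema.

```python
# One possible shape for a lightweight data factsheet recorded during
# discovery; fields and values are illustrative, not a prescribed format.
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class DataFactsheet:
    dataset_name: str
    source: str
    last_updated: date
    row_count: int
    known_limitations: list = field(default_factory=list)
    quality_scores: dict = field(default_factory=dict)  # e.g. completeness

factsheet = DataFactsheet(
    dataset_name="case-records",                     # hypothetical dataset
    source="legacy case management system",
    last_updated=date(2020, 1, 31),
    row_count=250_000,
    known_limitations=["addresses missing before 2015"],
    quality_scores={"completeness": 0.92, "uniqueness": 0.99},
)
record = asdict(factsheet)  # plain dict, easy to serialise and version
```

Keeping the factsheet alongside the code means the data quality record is updated and reviewed with the rest of the project.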
Ensuring diversity in your data
In the same way you should have diversity in your team, your data should also be diverse and reflective of the population you are trying to model. This will reduce conscious or unconscious bias. Alongside this, a lack of diverse input could mean certain groups are disadvantaged, as the AI model may not cater for a diverse set of needs. You should read the Data Ethics Framework guidance to understand the limitations of your data and how to recognise any bias present. You should also:
- evaluate the accuracy of your data, how it was collected, and consider alternative sources
- consider if any particular groups might be at an advantage or disadvantage in the context in which the system is being deployed
- consider the social context of where, when and how the system is being deployed
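One simple, illustrative way to check representativeness is to compare group shares in your training data against the shares in the population the model will affect. The sketch below uses invented group labels, shares and an arbitrary tolerance threshold; real checks would need to be designed with domain experts and an ethicist.

```python
# Hypothetical representativeness check: flag groups whose share of the
# training data deviates from the population by more than a tolerance.
from collections import Counter

def group_shares(labels):
    """Proportion of records belonging to each group."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Invented example: group "a" is over-represented in the training data
training_groups = ["a"] * 70 + ["b"] * 20 + ["c"] * 10
population_shares = {"a": 0.5, "b": 0.3, "c": 0.2}

shares = group_shares(training_groups)

# 0.15 is an arbitrary illustrative tolerance, not a recommended value
flagged = {group for group, expected in population_shares.items()
           if abs(shares.get(group, 0.0) - expected) > 0.15}
# only group "a" (0.7 vs 0.5 expected) exceeds the tolerance
```

A flagged group is a prompt for investigation, not an automatic verdict: the right response depends on the social context in which the system is deployed.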
Keeping your data secure
Make sure you design your system to keep data secure. To help keep data safe:
- follow the National Cyber Security Centre’s (NCSC) guidance on using data with AI
- make sure your system is compliant with the General Data Protection Regulation (GDPR) and the Data Protection Act 2018
As with any other software, you should design and build modular, loosely coupled systems which can be easily iterated and adapted.
Writing and training algorithms can take a lot of time and computational power. In addition to ongoing cost, you’ll need to think about the network and memory resources your team will need to train your model.
Using historic data
Most of the government data available to train models sits in legacy systems, which might contain bias and might have poor controls around them. For legacy systems to be compatible with AI technology, you will often need to invest significant effort to bring them up to modern standards.
You’ll also need to carefully consider the ethical and legal implications of working with historic data and whether you need to seek permission to use this information.
Evaluate your data preparation phase
When you complete your data preparation phase you should have:
- a dataset ready for modelling in a technical environment
- a set of features (measurable properties) generated from the raw data set
- a data quality assessment using a combination of accuracy, bias, completeness, uniqueness, timeliness/currency, validity and consistency
Researching the end-to-end service
During the discovery phase, you should explore the needs of the users of the end-to-end service. Like other digital services, you’ll use this phase to determine whether there’s a viable service you could build that would solve user needs, and whether it’s cost-effective to pursue the problem.
Check the guidance on how to know when your discovery is finished before moving on to alpha.
Moving to the alpha phase
Plan and prototype your AI model build and service
If you have decided to build your AI model in-house, you should follow these steps.
- Split the data.
- Create a baseline model.
- Build a prototype of the model and service.
- Test the model and service.
- Evaluate the model.
- Choose the final model.
- Assess and refine performance.
Split the data
Your team will need to train the models they build on data. Your team should split your data into a:
- training set to train algorithms during the modelling phase
- validation set for assessing the performance of your models
- test set for a final check on the performance of your best model
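The split itself can be sketched in a few lines. The 60/20/20 ratio and fixed seed below are illustrative assumptions; in practice teams often use a library function such as scikit-learn’s `train_test_split`, and may need stratified or time-based splits depending on the data.

```python
# A minimal train/validation/test split using only the standard library.
# The 60/20/20 ratio is illustrative, not a recommendation.
import random

def split_data(rows, seed=42, train=0.6, validation=0.2):
    """Shuffle the rows reproducibly, then cut them into three sets."""
    shuffled = rows[:]
    random.Random(seed).shuffle(shuffled)  # fixed seed for reproducibility
    n = len(shuffled)
    n_train = int(n * train)
    n_val = int(n * validation)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])    # the remainder is the test set

training, validation, test = split_data(list(range(100)))
```

Shuffling before splitting matters: if the data is ordered (for example by date or region), an unshuffled split can give the model a training set that is systematically different from the test set.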
Create a baseline model
Your team should build a simple baseline model before building any more complex models. This provides a benchmark your team can later compare more complex models against, and will help identify problems in your data.
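A common minimal baseline for classification is one that always predicts the most frequent label in the training data (scikit-learn calls this a `DummyClassifier`). The sketch below, with invented labels, shows the idea: any more complex model should beat this score to justify its extra cost.

```python
# A majority-class baseline: always predict the most common label seen
# in training. Labels here are invented for illustration.
from collections import Counter

class MajorityBaseline:
    def fit(self, labels):
        """Remember the most frequent label in the training data."""
        self.prediction = Counter(labels).most_common(1)[0][0]
        return self

    def predict(self, n):
        """Predict that label for every one of n cases."""
        return [self.prediction] * n

train_labels = ["no", "no", "no", "yes"]
baseline = MajorityBaseline().fit(train_labels)

validation_labels = ["no", "yes", "no"]
predictions = baseline.predict(len(validation_labels))
accuracy = sum(p == t for p, t in zip(predictions, validation_labels)) / len(validation_labels)
# this accuracy is the benchmark more complex models must exceed
```

If a complex model only narrowly beats the baseline, that is useful evidence when deciding whether the extra complexity and maintenance cost are justified.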
Build a prototype of the model and service
Once you have a baseline model, your team can start prototyping more complex models. This is a highly iterative process requiring substantial amounts of data, and your team will probably build a number of AI models before deciding on the most effective and appropriate algorithm for your problem.
Keeping your team’s first AI model simple and setting up the right end-to-end infrastructure will help smooth the transition from alpha to beta. You can do this by focusing on the infrastructure requirements for your AI pipelines at the same time as your team is developing your model. Your simple model will provide you with baseline metrics and information on the model’s behaviour that you can use to test more complex models.
Throughout the build, you should make sure your AI model security complies with advice from the NCSC.
Test the model and service
Your team will need to test your models throughout the process to mitigate issues such as overfitting or underfitting that could undermine your model’s effectiveness once deployed.
Your team should only use the test set on your best model. Keep this data separate from your models until this final test. This test will provide you with the most accurate impression of how your model will perform once deployed.
Evaluate the model
Your team will need to evaluate your model to assess how it is performing against unseen data. This will give you an indication of how your model will perform in the real world.
The best evaluation metric will depend on the problem you are trying to solve and your chosen model. While you should select the evaluation metric with data scientists, you should also consider the ethical, economic and societal implications. These considerations make the fine-tuning of AI systems relevant to both data scientists and delivery leads.
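A small worked example shows why the choice matters. On an imbalanced dataset, accuracy alone can look reassuring while the model misses most of the cases the service actually cares about; metrics such as recall and precision expose this. The labels below are invented for illustration (1 represents the outcome of interest).

```python
# Why metric choice matters: on an imbalanced validation set, accuracy
# can look good while recall on the minority class is poor.
actual    = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]  # only 2 positive cases
predicted = [0, 0, 0, 0, 0, 0, 0, 0, 1, 0]  # model finds just 1 of them

accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)

true_positives = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
recall = true_positives / sum(actual)        # share of real positives found
precision = true_positives / sum(predicted)  # share of flagged cases correct

# accuracy is 0.9, but recall is only 0.5 - half the cases that matter
# were missed, which an accuracy-only evaluation would hide
```

Which trade-off is acceptable is an ethical and operational judgement, not only a technical one: missing a positive case may matter far more than a false alarm, or the reverse, depending on the service.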
Choose the final model
When choosing your final model, you will need to consider:
- what level of performance your problem needs
- how interpretable you need your model to be
- how frequently you need predictions or retraining
- the cost of maintaining the model
Assess and refine performance
Once you select a final model, your team will need to assess its performance, and refine it to make sure it performs as well as you need it to. When assessing your model’s performance consider:
- how it performs compared to simpler models
- what level of performance you need before deploying the model
- what level of performance you can justify to the public, your stakeholders, and regulators
- what level of performance similar applications deliver in other organisations
- whether the model shows any signs of bias
A model that does not match human performance may still be useful. For example, a text classification algorithm might not be as accurate as a human when classifying documents, but it can work at far greater scale and speed.
Evaluate your alpha phase
When you complete your AI prototyping phase, you should have:
- a final model or set of predictive models and a summary of their performance and characteristics
- a decision on whether or not to progress to the beta phase
- a plan for your beta phase
Moving to the beta phase
Moving from alpha to beta involves integrating the model into the service’s decision-making process and using live data for the model to make predictions on.
Using your model in your service has 3 stages.
- Integrating your model - performance-test the model with live data and integrate it within the decision-making workflow. Integration can happen in a number of ways, from a local deployment to the creation of a custom application for staff or customers. This decision is dependent on your infrastructure and user requirements.
- Evaluating your model - undertake continuous evaluation to make sure the model still meets business objectives and is performing at the level required. This will make sure model performance stays in line with the modelling phase and help you identify when to retrain the model.
- Helping users - make sure users feel confident in using, interpreting, and challenging any outputs or insights generated by the model.
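The continuous evaluation stage above can be sketched as a simple monitor that compares the live model’s rolling accuracy against the level recorded during the modelling phase, and flags when performance drifts far enough to warrant retraining. The tolerance, window size and example figures below are all illustrative assumptions.

```python
# Sketch of continuous evaluation: flag retraining when live accuracy
# falls too far below the level achieved in the modelling phase.
from collections import deque

class PerformanceMonitor:
    def __init__(self, baseline_accuracy, tolerance=0.05, window=100):
        self.baseline = baseline_accuracy     # accuracy from modelling phase
        self.tolerance = tolerance            # acceptable drop (illustrative)
        self.outcomes = deque(maxlen=window)  # 1 = correct live prediction

    def record(self, prediction, actual):
        """Log whether a live prediction matched the eventual outcome."""
        self.outcomes.append(prediction == actual)

    def needs_retraining(self):
        """True when rolling live accuracy drops below the threshold."""
        if not self.outcomes:
            return False
        live_accuracy = sum(self.outcomes) / len(self.outcomes)
        return live_accuracy < self.baseline - self.tolerance

monitor = PerformanceMonitor(baseline_accuracy=0.90)
for prediction, actual in [(1, 1)] * 80 + [(1, 0)] * 20:  # invented outcomes
    monitor.record(prediction, actual)
# live accuracy is 0.80, more than 0.05 below 0.90, so retraining is flagged
```

In a real service this logic would sit inside your monitoring framework and raise an alert rather than just return a flag, and the right tolerance depends on what level of performance you can justify to users and stakeholders.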
You should continue to collect user needs so your team can use the model’s outputs in the real world.
When moving from alpha to beta, there are some best-practice guidelines to smooth the transition.
Iterate and deploy improved models
After creating a beta version, your team can use automated testing to create some high-level tests before moving to more thorough testing. Working in this way means you can launch new improvements without worrying about breaking functionality once deployed.
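The high-level tests mentioned above might gate each deployment on a few cheap sanity checks: the model returns one prediction per input, outputs stay in the expected range, and the new model does not regress against the previous model’s benchmark score. Everything in the sketch below (function names, the stand-in model, the scoring rule) is invented for illustration.

```python
# Hypothetical release checks that might gate deployment of an improved
# model; names and the stand-in model are illustrative assumptions.
def passes_release_checks(model_predict, benchmark_inputs,
                          previous_score, score_fn):
    predictions = model_predict(benchmark_inputs)
    if len(predictions) != len(benchmark_inputs):
        return False                        # one prediction per input
    if not all(0.0 <= p <= 1.0 for p in predictions):
        return False                        # probabilities stay in range
    return score_fn(predictions) >= previous_score  # no regression

# A trivial stand-in model and scorer to show the gate in action
inputs = [0.2, 0.5, 0.9]
ok = passes_release_checks(
    model_predict=lambda xs: [min(max(x, 0.0), 1.0) for x in xs],
    benchmark_inputs=inputs,
    previous_score=0.5,
    score_fn=lambda preds: sum(preds) / len(preds),  # dummy scoring rule
)
# ok is True: correct length, in-range values, and no score regression
```

Checks like these run in seconds, so they can sit in a continuous integration pipeline and block a deployment automatically before the more thorough evaluation begins.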
Maintain a cross-functional team
During alpha, you will have relied mostly on data scientists to assess the opportunity and your data state.
Moving to beta needs specialists with a strong knowledge of dev-ops, servers, networking, data stores, data management, data governance, containers, cloud infrastructure and security design.
This skillset is likely to be better suited to an engineer rather than a data scientist so maintaining a cross-functional team will help smooth the transition from alpha to beta.
When you complete your beta phase, you should have:
- AI running on top of your data, learning and improving its performance, and informing decisions
- a monitoring framework to evaluate the model’s performance and rapidly identify incidents
- launched a private beta followed by a public end-to-end beta prototype which users can use in full
- found a way to measure your service’s success using new data you’ve got during the beta phase
- evidence that your service meets government accessibility requirements
- tested the way you’ve designed assisted digital support for your service
Related guides
- Understanding artificial intelligence
- Assessing if artificial intelligence is the right solution
- Managing your AI project
- Understanding artificial intelligence ethics and safety
- Examples of real-world artificial intelligence use
- National Cyber Security Centre guidance for assessing intelligent tools for cyber security
- The Data Ethics Framework
- The Technology Code of Practice