DVSA: MOT Risk Rating

An algorithm to identify potential non-compliance in MOT testing, and prioritise visits to MOT garages.

Tier 1 Information

Name

MOT Risk Rating

Description

The MOT risk rating tool was built to help prioritise the order in which DVSA Vehicle Examiners visit MOT testing stations. It outputs a Red, Amber or Green (RAG) rating for MOT testers and testing centres. These RAG ratings are then used by DVSA Vehicle Examiners, along with other information, to decide where to send resources to conduct site visits to review a tester and/or testing station.

Website URL

N/A

Contact email

MIDS@dvsa.gov.uk

Tier 2 - Owner and Responsibility

1.1 - Organisation or department

Driver and Vehicle Standards Agency (DVSA)

1.2 - Team

MOT Policy

1.3 - Senior responsible owner

Head of MOT Policy / MOT Testing

1.4 - External supplier involvement

Yes

1.4.1 - External supplier

Kainos

1.4.2 - Companies House Number

NI019370

1.4.3 - External supplier role

Kainos developed the initial tool; ownership was then handed over to DVSA. DVSA is now responsible for running the tool on a monthly basis, as well as for any required maintenance.

1.4.4 - Procurement procedure type

Open competition for the full contract, of which this tool was a small part.

1.4.5 - Data access terms

N/A

Tier 2 - Description and Rationale

2.1 - Detailed description

DVSA is responsible for approving people to be MOT testers and approving the centres they work in. As part of this, DVSA is required to visit MOT testing stations to support garages, raise testing standards and improve compliance.

The tool is a machine learning model which applies a red, amber, green (RAG) rating to MOT testers and testing centres. This rating is then used to help prioritise the order in which DVSA visits MOT testing stations.

The model was trained using R in RStudio on our internal dataset collected through the MOT testing service. It uses a local outlier factor algorithm, which identifies outliers in a dataset and outputs a novelty score; this score is then converted into a Red, Amber or Green rating.
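
For illustration only, the sketch below shows how a local outlier factor score could be computed and converted into a RAG rating in R, assuming the dbscan and dplyr packages. The simulated data, feature names and thresholds are assumptions for the example, not the features or cut-offs used in the production model.

    # Illustrative sketch only: compute a local outlier factor (LOF) score per
    # tester and map it to a Red/Amber/Green flag. The simulated data, feature
    # names and cut-off values are assumptions, not DVSA's.
    library(dbscan)
    library(dplyr)

    set.seed(42)

    # One row per tester with numeric behavioural features (simulated)
    testers <- data.frame(
      tester_id        = sprintf("T%05d", 1:1000),
      pass_rate        = runif(1000, 0.5, 1.0),
      avg_test_minutes = rnorm(1000, mean = 45, sd = 10),
      tests_per_day    = rpois(1000, lambda = 8)
    )

    # Scale the features so no single attribute dominates the distance metric
    features <- scale(select(testers, -tester_id))

    # LOF score: values well above 1 indicate a tester whose behaviour is an
    # outlier relative to its neighbours (minPts is the neighbourhood size)
    testers$lof_score <- lof(features, minPts = 50)

    # Convert the numerical score into a RAG flag using illustrative thresholds
    testers <- mutate(testers, rag = case_when(
      lof_score >= 2.0 ~ "Red",
      lof_score >= 1.3 ~ "Amber",
      TRUE             ~ "Green"
    ))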

The output is available to DVSA Vehicle Examiners via a Power BI app and an Excel download. The RAG ratings are presented alongside other data related to the MOT testing station, allowing Vehicle Examiners to weigh multiple data points when prioritising their visits.

2.2 - Scope

DVSA is responsible for helping the MOT industry test to the right standards by supporting testers, as well as by detecting and dealing with deliberate and fraudulent behaviours that compromise the MOT service. This is achieved through various interactions with testers and testing stations, such as site visits, off-site reviews and vehicle re-inspections.

The tool was designed to help prioritise the order in which DVSA visits and contacts garages. These visits can be for a range of reasons, including providing support, detecting non-compliance, or educating garages or testers that are not testing to the correct standard.

The tool was not designed to make any disciplinary decisions. Those decisions are taken only on the basis of other activity, such as evidence found as a result of a site visit.

2.3 - Benefit

The primary benefit is to ensure DVSA uses its vehicle examiner resource as efficiently as possible, visiting the testers and testing stations most in need of DVSA support or inspection. A minor benefit is the time saved in compiling a list to review each month.

2.4 - Previous process

Previously, visits were prioritised based on the time since a tester or testing station was last visited.

2.5 - Alternatives considered

N/A

Tier 2 - Decision making Process

3.1 - Process integration

The tool is only used to decide which testers or garages to visit. No other decisions are made based on the tool output.

Any decisions, such as disciplinary action or educational activity, are taken only on the basis of other activity, such as evidence found as a result of a visit.

3.2 - Provided information

The tool outputs a RAG rating to prioritise the testers and testing stations to visit. This is presented alongside other information about the tester and/or testing station that is separate from the algorithm and drawn from different sources, such as the time since the last visit. Vehicle examiners then use all of this information to manually prioritise which testers and/or testing stations to visit.

3.3 - Frequency and scale of usage

There are approximately 64,000 active testers and 23,000 active garages. The tool is used on a daily basis by approximately 150 DVSA officers, who supervise, visit and assess them.

3.4 - Human decisions and review

The tool does not make any decisions but only provides a RAG rating which is used alongside other information to help decide which testers and garages to visit. The RAGs are updated each month and a new list of sites to visit is decided following the update.

3.5 - Required training

The tool is designed for use by vehicle examiners who use their professional expertise from on-the-job training.

3.6 - Appeals and review

There are a number of touch points for review. If vehicle examiners feel the RAG rating is inaccurate for a specific tester or testing station, this is fed back via the MOT product team and used in the further development of the risk rating tool. Such insight would typically come from their review of the supporting information presented alongside the RAG rating, or from a visit to a testing station.

Tier 2 - Tool Specification

4.1.1 - System architecture

The model runs in RStudio within the Amazon Web Services (AWS) environment, accessing database tables within that environment and outputting data via AWS S3 buckets.
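
As an illustration of that flow, the sketch below reads a source table from a database in the AWS environment and writes the scored output to an S3 bucket, assuming the DBI, RPostgres and aws.s3 packages. The connection details, table name, bucket and object key are placeholders, not DVSA's actual configuration.

    # Illustrative sketch of the run-time flow: read tester-level features from
    # a database table, score them, and write the output to S3. All names,
    # credentials and the choice of PostgreSQL are placeholder assumptions.
    library(DBI)
    library(RPostgres)
    library(aws.s3)

    con <- dbConnect(
      RPostgres::Postgres(),
      host     = Sys.getenv("MOT_DB_HOST"),      # placeholder environment variables
      dbname   = Sys.getenv("MOT_DB_NAME"),
      user     = Sys.getenv("MOT_DB_USER"),
      password = Sys.getenv("MOT_DB_PASSWORD")
    )

    # Hypothetical table of aggregated tester-level features
    tester_features <- dbGetQuery(con, "SELECT * FROM tester_features")
    dbDisconnect(con)

    # ... scoring step as in the earlier sketch, producing a 'rag' column ...

    # Write the scored output to S3 for downstream reporting (e.g. Power BI)
    s3write_using(
      tester_features,
      FUN       = write.csv,
      row.names = FALSE,
      object    = "mot-risk-rating/latest_rag_ratings.csv",  # placeholder key
      bucket    = "example-dvsa-output-bucket"               # placeholder bucket
    )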

4.1.2 - Phase

Production

4.1.3 - Maintenance

DVSA has now taken over maintenance of the tool and is reviewing its accuracy and effectiveness.

4.1.4 - Models

MOT Risk Rating

Tier 2 - Model Specification

4.2.1 - Model name

MOT Risk Rating

4.2.2 - Model version

v1

4.2.3 - Model task

The model assigns testers and garages a Red, Amber or Green (RAG) rating.

4.2.4 - Model input

Various features relating to tester behaviour.

4.2.5 - Model output

A numerical score, presented as a Red, Amber or Green flag.

4.2.6 - Model architecture

Local outlier factor. Numbers of neighbours used for optimisation: 50, 100, 500, 1000 and 5000.
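
A rough sketch of how such a sweep over neighbourhood sizes could be run with the dbscan package is shown below. The candidate values are those listed above; the simulated feature matrix and the rank-correlation stability check are assumptions for the example.

    # Illustrative sweep over the candidate neighbourhood sizes listed above.
    # The simulated 10,000 x 8 feature matrix (the real dataset has ~64,000
    # rows) and the Spearman comparison are assumptions; the larger values of
    # minPts are slow to compute and are included only for completeness.
    library(dbscan)

    set.seed(42)
    features <- matrix(rnorm(10000 * 8), ncol = 8)

    candidate_minPts <- c(50, 100, 500, 1000, 5000)

    scores <- sapply(candidate_minPts, function(k) lof(features, minPts = k))
    colnames(scores) <- paste0("minPts_", candidate_minPts)

    # How stable is the outlier ranking across neighbourhood sizes?
    round(cor(scores, method = "spearman"), 2)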

4.2.7 - Model performance

N/A.

4.2.8 - Datasets

MOT test data

4.2.9 - Dataset purposes

Various features from the MOT test data are used to train the model.

Tier 2 - Data Specification

4.3.1 - Source data name

MOT test data (DVSA internal)

4.3.2 - Data modality

Tabular

4.3.3 - Data description

Details of all completed MOT tests are entered into the MOT testing service by the testers; these details make up the MOT test data.

4.3.4 - Data quantities

64,000 samples aggregated from 10,000,000 records, with 8 attributes.
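
For illustration, the sketch below shows the kind of aggregation this implies: rolling test-level records up into one row per tester. The column names, simulated data and derived attributes are assumptions, not the actual 8 attributes used.

    # Illustrative aggregation: collapse test-level records into one row per
    # tester. The simulated columns and derived features are assumptions.
    library(dplyr)

    set.seed(42)

    # Simulated test-level records standing in for the MOT test data
    mot_tests <- data.frame(
      tester_id             = sample(sprintf("T%05d", 1:500), 100000, replace = TRUE),
      result                = sample(c("PASS", "FAIL"), 100000, replace = TRUE),
      test_duration_minutes = rnorm(100000, mean = 45, sd = 10),
      vehicle_age           = sample(3:25, 100000, replace = TRUE)
    )

    # One row per tester with summary attributes used for risk scoring
    tester_features <- mot_tests %>%
      group_by(tester_id) %>%
      summarise(
        n_tests          = n(),
        pass_rate        = mean(result == "PASS"),
        avg_test_minutes = mean(test_duration_minutes),
        avg_vehicle_age  = mean(vehicle_age),
        .groups          = "drop"
      )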

4.3.5 - Sensitive attributes

Testers' names

4.3.6 - Data completeness and representativeness

The dataset covers the entire testing population, with no missing values.

4.3.7 - Source data URL

No URL is available for this dataset as it is not publicly available.

4.3.8 - Data collection

Data is collected via the MOT testing service.

4.3.9 - Data cleaning

N/A

4.3.10 - Data sharing agreements

N/A

4.3.11 - Data access and storage

The training data and model output is stored in a secure Amazon Web Services (AWS) environment. Outputs are exported to a secure Microsoft 365 environment via Power BI and SharePoint. Only vehicle examiners have access to the data and outputs of the model.

Tier 2 - Risks, Mitigations and Impact Assessments

5.1 - Impact assessment

N/A

5.2 - Risks and mitigations

The outputs of the tool could be used for a purpose for which it was not intended, which would unfairly impact testers and garages. This is mitigated by the fact that we do not publish this data externally.

All machine learning algorithms are susceptible to the following risks:

  1. Potential bias from the model (e.g. consistently scoring establishments of a certain type much lower, or making less accurate predictions). To mitigate this risk, we are continually developing our tools to understand and adjust for any biases.
  2. Potential bias from users seeing a RAG prioritisation, which may affect how the tester or testing station is perceived before a visit. To mitigate this risk, vehicle examiners are also presented with other information about the testers and testing stations, giving them a broader view.
  3. With the use of AI/ML there is a chance of decision automation bias or automation distrust bias occurring. Essentially, this refers to a user being over- or under-reliant on the system, leading to a degradation of human reasoning. To mitigate this risk, we have provided training on the use of the tool and continually engage with the vehicle examiners.

Updates to this page

Published 10 February 2025