Record MOT test results live reassessment
The report from the live reassessment for DVSA's Record MOT Test Results on 30 October 2018.
From: | Central Digital and Data Office
---|---
Assessment date: | 30 October 2018
Stage: | Live
Result: | Met
Service provider: | Driver and Vehicle Standards Agency
The service met the Standard because:
- the service team had acted on the recommendations from the previous assessment
- the service team showed evidence of the ongoing iteration of the service, and the commitment of the DVSA to resourcing this in the future
- the service team have committed to resolving the issues outlined in the recommendations section.
About the service
Service description
The service provides authorised testers and their managers with the ability to record MOT results and manage all aspects of their MOT business - supporting the delivery of quality MOT testing to the motoring public on behalf of DVSA.
The highest volume transaction is the recording of MOT results (around 40 million per year), but the service is also used to buy test slots (how garages pay government), to access data that helps garages manage their own test quality, and to carry out administrative activities around managing sites.
Service users
Primary users are testers and garage managers (known as Authorised Examiner Designated Managers). These roles are authorised by DVSA and require training and some form of qualification; that process is managed outside of this service.
DVSA enforcement examiners are also users of the service, using it as part of their duties checking on testers and garage managers.
Detail
User needs
The team were able to demonstrate that they’ve done extensive research with their users, having spoken to more than 200 users since their last assessment. As a result they’ve built up a solid understanding of their users and their behaviours. They’ve used a range of appropriate methodologies to make sure that the service meets user needs, including contextual observation, usability testing and card sorting. Contextual research visits are the bread and butter of the research they do, which has enabled the team to develop a good understanding of the environment in which testers use the service.
At this stage in the service lifecycle, the panel would have liked the service team to have developed a more nuanced understanding of their sub user groups, perhaps in line with different business profiles, e.g. smaller garages vs. larger providers. In addition, we would encourage them to understand the distinct needs of garage managers vs MOT testers. Doing so could help them to optimise the user experience for these two groups of users, for example only displaying ‘Test quality information’ to garage managers if they are the only group that needs to see this content.
While the team have done good work to research and iterate the two features that were called out in the previous assessment (see below), the panel thought that the ‘Special notices’ functionality also needs to be iterated based on research. The service team mentioned that users find these notices confusing and tend not to read them. They also have some interesting insights into how garages use them in reality, e.g. one person taking responsibility for explaining them to other members of staff, or printing notices out and pinning them onto notice boards. While we were pleased to hear that a content designer has worked to shorten some of these notices and write them in plain English, we’d expect to see the team compiling insights from user research to understand whether they need a completely different way of conveying this information to users.
User needs driven development
The team were able to demonstrate that they’ve applied user testing and iteration to the two features that were called out in the previous assessment - ‘Defect categories’ and ‘Test quality information’. The team have extensively tested these features with users and made some excellent iterations to both bits of functionality so that they better meet user needs and are easier to use. The team were able to show that they now have a better process in place to research and iterate new features before they go live to ensure that incomplete or confusing features don’t appear on the live system. They spoke about recently creating a test environment that was a fully working version of the system, which enabled their users to test a new feature and provide feedback on it. This, along with their extensive usability testing of new features, demonstrates that the team have good processes in place to ensure they’re confident before deploying new functionality to the live system.
The team also spoke about doing follow-up garage visits to check how testers are using features in earnest, and about building time into their roadmap to tweak features after they’ve been shipped, which is great. They need to make sure that, after the service has gone live, research continues to be built into processes and roadmaps in this way.
Accessibility
One of the recommendations from the previous assessment report was for the team to proactively recruit and test the service with users with accessibility requirements. They were able to show some progress in this area, having carried out usability testing with two dyslexic users. From this testing the team have come up with some good recommendations for how the service could better meet the needs of dyslexic users and they’ve begun prototyping these solutions.
However, at this stage, the team need to do more testing with users with access needs, and not just those with dyslexia. The team presume a certain level of ability in their primary users. The panel would encourage them to also try to recruit people with ADHD, other learning difficulties besides dyslexia, autism or Asperger’s, or reduced vision. We don’t believe this range of access needs would preclude someone from this type of work. Access needs don’t just apply to people with disabilities - all users will have different needs at different times and in different circumstances. Someone’s ability to use your service could be affected by a short term health problem or their particular context, e.g. a noisy environment can simulate ADHD-like behaviours in people. Paying attention to those with access needs could improve the service for everyone, not just those with disabilities.
We appreciate that it can be difficult to find realistic users with a range of access needs, but the team have a good network of trade events and forums to recruit through. The team should continue to try to recruit these users, but not limit this recruitment to just dyslexic users. The team need to compile insights from accessibility testing over separate sessions to build up a body of knowledge over time.
Team
The service team showed that a content designer was working as a member of the team, with input into the iteration of the service. They showed that the team had access to the full range of roles set out in the service standard, and had plans to resource these roles beyond the end of the budget that provides the current project team working on the service. They also showed evidence of iterating their processes (e.g. in undertaking post-release testing of improvements) as well as the service itself.
The assessment panel were impressed at the work of the team, particularly in training others in content related issues. It is essential to the ongoing health of this service that this work continues.
Design
Since the last assessment the team have improved the search and browse features considerably. Search is now more forgiving, allowing for misspellings, for example. The panel are impressed with the work the team have done on browse, grouping results under headings and reducing copy from sentences to single words, with a focus on dyslexic users.
When the panel reviewed the service there were still inconsistencies with government design patterns as well as general visual inconsistencies. The team demonstrated thorough processes that are in place to identify and fix these issues. They are updating patterns to meet the government design system where there isn’t a user need to deviate. Where unique patterns are identified the team showed how they conduct research and share the designs across government and the wider design community through events, blog posts and mailing lists.
The team identified pain points around the use of a keycard to authenticate the user. While the service requires a high level of assurance, the team has been looking at ways to reduce the need for keycards and authenticate users in other ways.
During the assessment it wasn’t clear how well the notification features meet the goal of informing users of updated practice and guidance. It was hard to tell whether the notifications were relevant to users at the point at which they were using the service.
Recommendations
The service team must:
- develop more granular user profiles/personas, for example by subdividing the broad “tester” group, and understand any access needs for these groups
- commission a full external accessibility audit of the service, and ensure that the priority recommendations are implemented
- make sure the points in the accessibility snag list provided by the assessment team (in particular, the issues around keyboard access) are met
- continue testing the service with users with a range of access needs and iterate the service as a result. This should include users with access needs beyond dyslexia, such as other learning difficulties, ADHD, autism or Asperger’s, and visual impairments
- review the “notifications” aspect of the service from both a content design and a service design perspective
- make sure the contact centre (and any other routes of support) triage users with potential access needs and record these in order to allow targeted improvements to be made to the service in future
- continue their excellent work on “bringing technology to the test”, including evaluating the use of mobile devices to input data during the test itself and evaluating the potential for taking direct feeds of data from testing machines (for example, braking performance) rather than forcing re-entry of results, with the potential for error
- continue to provide content design resource both to train providers of content and to review and improve content in the service flow
- return to the blue/green deployment process previously adhered to in order to avoid periods of downtime.
The service team should also:
- consider how they might be able to A/B test improvements to the service in future
- continue to implement design patterns unless there is a clear user need served by deviating from these, and play an active part in the cross-government design community
- consider the user journey for the tester who has signed up but not been allocated/approved by a garage yet, and test this with users
- continue to evaluate (e.g. through user testing and analysis of support calls) the ways of authenticating users on the service to ensure that the difficulty of the keycard approach is justified by the security needs of the service.
Get advice and guidance
The team can get advice and guidance on the next stage of development by:
- searching the Government Service Design Manual
- asking the cross-government Slack community
- contacting the Service Assessment team
Next steps
To get your service ready to launch on GOV.UK you need to:
- get a GOV.UK service domain name
- work with the GOV.UK content team on any changes required to GOV.UK content
Before arranging your next assessment you’ll need to follow the recommendations made in this report.
Digital Service Standard points
Point | Description | Result |
---|---|---|
1 | Understand user needs | Met |
2 | Do ongoing user research | Met |
3 | Have a multidisciplinary team | Met |
4 | Use agile methods | Met |
5 | Iterate and improve frequently | Met |
6 | Evaluate tools and systems | Met |
7 | Understand security and privacy issues | Met |
8 | Make all new source code open | Met |
9 | Use open standards and common platforms | Met |
10 | Test the end-to-end service | Met |
11 | Make a plan for being offline | Met |
12 | Make sure users succeed first time | Met |
13 | Make the user experience consistent with GOV.UK | Met |
14 | Encourage everyone to use the digital service | Met |
15 | Collect performance data | Met |
16 | Identify performance indicators | Met |
17 | Report performance data on the Performance Platform | Met |
18 | Test with the minister | Met |