Civil money claims alpha assessment
The report from the alpha assessment for MOJ's civil money claims service on 6 April 2017.
From: Central Digital and Data Office
Assessment date: 6 April 2017
Stage: Alpha
Result: Met
Service provider: HMCTS
The service met the Standard because
- The service team demonstrated that they had developed a deep understanding of the needs of existing and potential new users through a broad range of user research activities.
- The service team have designed the service in line with GOV.UK patterns, and the panel were particularly impressed by how the team used user research to support the service design.
- The team has selected an appropriate technology stack in line with HMCTS Reform standards and has demonstrated that the overall technical approach is feasible. While we would not necessarily expect prototypes from the alpha phase to be taken through directly into a beta phase, the existing continuous integration and deployment approach is a positive indicator and seems feasible to continue.
- The service team is multidisciplinary and is working in an agile manner, effectively incorporating agile methods and ceremonies into the way it works.
About the service
Description
The service enables users to apply to a county court to claim money they’re owed by a person or business, and the persons or businesses subject to a claim to respond.
Service users
The main users of this service are civil money claims defendants, claimants, internal users in HMCTS and legal representatives. There is also a wide range of other, less frequent users.
Detail
User needs
Firstly, the panel would like to thank the team for providing such a rich array of artefacts, including the journey maps with pain points, user quotes and personas, together with the detailed presentation deck.
The team demonstrated, and provided evidence, that they have carried out a wide range of user research from Discovery through to Alpha. They have captured a broad set of user needs from the different user groups who will be interacting with and supporting the service. The team must continue to review and refine those user needs through their ongoing research activities, adding new ones to their backlog.
The team chose to focus on the journeys for the claimant and defendant user groups and were able to demonstrate a thorough understanding of the user needs for both. Although the team were only able to demonstrate the ‘happy path’ for these two user journeys, the panel were satisfied to learn that, through ongoing lab-based research, the team have been able to incrementally refine the design to improve the user experience.
The team capitalised on having two user researchers embedded in the project, enabling one to focus on the claimant user group and the other on the defendant user group. The panel appreciated the complexity and emotive nature of the claims process, and using the researchers in this way appears to have been effective in dovetailing the user experience across both online journeys.
The service, as described by the team, is aimed at ‘anyone’ needing to make or respond to a claim through the court system. However, despite extensive lab-based research with 51 end users and 11 stakeholders, the panel would have liked to hear evidence of a deeper understanding of the needs of the other key user groups, including those supporting the end users and those maintaining and using the backend systems. These user groups include:
- third-party bodies, such as the Personal Support Units, the Money Advice Service, Citizens Advice advisers and pro bono legal advisers
- representatives, such as mediators and legal teams
- internal users, such as administrators and court staff
During Alpha, it is important to demonstrate that your service does not prevent people from successfully accessing and using it. The processes, as demonstrated, rely on a degree of digital capability and on the ability to understand the legal and financial commitments involved in making or defending a claim. The team could have said more about what they already know about these user groups and their pain points, and about how they plan to support them or investigate their needs further.
Although the team have some evidence that they have tested the service with people who have dyslexia, they could do more to understand whether users with low literacy, poor vision, and cognitive impairments such as dyspraxia and ADHD will be able to use the service successfully.
There is an assumption, based on current feedback from advisers and volunteers, that the Personal Support Units and, potentially, Citizens Advice will meet the support needs of users. However, as the team have not fully researched what support users may need, or tested any potential solutions, this and any other decisions around support offerings are not well informed at this stage.
The team appear to have been relatively successful in reaching the appropriate end users so far, and already have a recruitment agency in place to assist them. They have also been resourceful in reaching participants through posters and existing channels such as the Personal Support Units. It is recommended that the team ensure they include a representative sample of their demographic in their ongoing research activities, and continue to tap into their networks to achieve this.
Overall, the panel were pleased to learn that user research plays an integral part in the sprint cycle, with all team members not only observing research sessions but also actively involved in the analysis process.
It was unfortunate that neither of the user researchers who took part in the research was able to attend the assessment. However, it is encouraging that at least one new user researcher will be joining the team soon, with the potential for pairing with other researchers in the programme.
Team
The team is truly multi-disciplinary, including a wide range of digital skills and domain experts, including policy experts. The team even has a Judge working one day a week!
The team has a healthy mixture of permanent staff, specialists provided by a supplier and interims, and mature approaches to working across multiple sites (London and Manchester) and sharing knowledge. Unfortunately the team lacked a user researcher at the time of the assessment, but the panel understand a replacement has been found.
The team work in sprints, and described a number of different ways they had experimented, iterated and changed direction based on learning during alpha, testing with users forming a part of each sprint.
The team demonstrated the high degree of trust placed in it by its governance, and its ability to inform policy and legislation changes following the Civil Courts Justice review. The service is one of a number of services within a larger transformation programme. The scope of the service has been defined in part to replace an existing HMCTS service: “We’re not digitising a process, but transforming it”.
Technology
The team has shown that it has the appropriate capability to develop digital services in line with the Service Standard. The alpha phase of the project has been developed using the “steel thread” approach, in which successively more detailed and production-like systems are created iteratively. While this is not always seen at alpha, the investigation and trialling of continuous integration and continuous deployment during the alpha phase is a good sign and shows an appreciation of the work yet to come. The team seems to be aware of the standard components and platforms supported by the wider HMCTS Reform programme, and to be picking technology accordingly.
In common with other HMCTS services, the team will need to continue to code in the open, and the team did not present any reasons to suggest that the majority of code needed by the Civil Claims Service could not be publicly accessible. The team is reminded that a primary motivation for open code is to encourage best software development practice in managing deployments, secrets and configuration, as well as to improve general readability, code structure and security. Developing fuller open source project support can bring other benefits, but the team should focus for the moment on making sure that their working practices align with point 8 of the Service Standard. Additional engagement and guidance from the GDS Technical Standards group is available to support this point if the team feels it is needed.
The service asks a user to name a company (or possibly another legal entity, such as a charity). The panel suggest the team investigate using the Companies House and other APIs to record a Companies House or Charity Commission number, in anticipation of their data linking to GOV.UK registers.
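As an illustration of what such a lookup might involve, here is a minimal sketch against the Companies House public data API, which uses HTTP basic authentication with the API key as the username. The stored fields, error handling and environment variable name are assumptions for illustration, not a recommended design; a similar pattern would apply to a Charity Commission number.

```typescript
// Minimal sketch: resolve a user-supplied Companies House number to the
// registered company, so the claim can store the number rather than free text.
// Assumes an API key in COMPANIES_HOUSE_API_KEY (name is illustrative) and
// Node 18+ for the global fetch.
async function lookupCompany(companyNumber: string): Promise<{ name: string; status: string } | null> {
  const auth = Buffer.from(`${process.env.COMPANIES_HOUSE_API_KEY}:`).toString('base64');
  const res = await fetch(
    `https://api.company-information.service.gov.uk/company/${encodeURIComponent(companyNumber)}`,
    { headers: { Authorization: `Basic ${auth}` } }
  );
  if (res.status === 404) return null; // no company registered under that number
  if (!res.ok) throw new Error(`Companies House lookup failed: ${res.status}`);
  const profile = await res.json();
  return { name: profile.company_name, status: profile.company_status };
}
```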
The management of user data seems appropriate, and it is good to hear that policy advice is available to the team to support specific issues of data retention and privacy. In beta we would expect to see some more detail on security and auditing infrastructure to support this point, and the team should show awareness of the new requirements to support subject access requests, redactions, data portability and other aspects of Data Protection compliance. These issues should be built into the culture of the team and the product management process rather than being bolted on after a policy review, as these data requirements can be expensive to meet retroactively.
A key point throughout the development of the beta system will be the integration of casework and other systems in the overall service environment. These exchanges of data should occur at the service level through well-defined APIs, and care will need to be taken to ensure that the single source of truth remains clear following this integration, especially considering that the paper and mediated service processes will continue in parallel with the deployment of the digital service. If the digital service repository is allowed to diverge from that of the existing casework system there could be difficulties for the many internal users of the combined system. On this point, it is important to reiterate that the next phase of work should prototype, iterate and test solutions to support these administrative system actors.
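One hypothetical shape for such an integration, shown purely to make the single-source-of-truth point concrete: each state change in the digital service is posted as a well-defined event to a casework-facing API, rather than written into a shared database. All names, fields and the endpoint below are illustrative assumptions, not the team’s actual design.

```typescript
// Illustrative only: the digital service emits events and the casework
// system remains the authoritative record of the claim's state.
interface ClaimEvent {
  claimReference: string; // the shared identifier across digital and casework systems
  occurredAt: string;     // ISO 8601 timestamp
  type: 'claim-issued' | 'defence-filed' | 'settlement-agreed'; // hypothetical event types
  payload: Record<string, unknown>;
}

async function publishToCasework(event: ClaimEvent): Promise<void> {
  const res = await fetch('https://casework.example.internal/api/claim-events', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(event),
  });
  if (!res.ok) {
    // In practice this would need retry and reconciliation logic so the two
    // systems cannot silently diverge.
    throw new Error(`casework sync failed: ${res.status}`);
  }
}
```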
We understand that the system is designed to facilitate a somewhat free-form exchange of proposals and evidence between the two parties in the claim, and it is very helpful to see the thoughtful evolution of the relevant textual user interface elements. We advise the team to be equally thoughtful in supporting the upload of free-form documentation. The need for these user-created files is clear from the context, but they add a whole new burden for assisted and non-assisted users, who may find scanning and attaching documentation difficult. Bear in mind also that poorly optimised scans can result in very large files, which present additional technical issues for upload and download, and that optimising this experience is extremely difficult. If high-resolution scans and photographs are to be accepted as corroborating evidence (and there is probably a case to be made for this), the team should consider IIIF and other standards for managing the serving of these image assets to the various clients. The service team may also want to consider the increasing importance of audiovisual media in supporting these claims; the additional technical burden of defining, managing, transcoding and serving these assets can also be significant. Given the nature of these various kinds of evidence with respect to the legal process, the rules for retention, format migration and archiving of these files can be strict.
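To make the IIIF suggestion concrete: the IIIF Image API encodes region, size, rotation, quality and format in the URL, so a single stored master scan can be served at whatever size each client needs, without storing multiple renditions. A small sketch, with an illustrative image server hostname and identifier:

```typescript
// Build a IIIF Image API URL of the form
//   {server}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
// A "full" region and a "w," size ask the server for the whole image scaled
// to the given width. The hostname is an assumption for illustration.
function iiifUrl(identifier: string, width: number): string {
  return `https://images.example.gov.uk/iiif/${encodeURIComponent(identifier)}/full/${width},/0/default.jpg`;
}

// e.g. a 200px-wide thumbnail of an uploaded evidence scan for a case list,
// and a larger rendition of the same master for detailed review
const thumbnail = iiifUrl('claim-1234/evidence-1.tif', 200);
const review = iiifUrl('claim-1234/evidence-1.tif', 1600);
```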
The team is commended for their use of common platform components such as GOV.UK Notify and GOV.UK Pay. We recommend continued engagement with GOV.UK Verify, including the upcoming LOA1 service, as a consistent way to verify the identities of claimants and respondents has obvious benefits for the trustworthiness of the digital legal process involved.
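As a small illustration of why the common components reward continued engagement, sending a claim notification through GOV.UK Notify from a Node service is only a few lines using the notifications-node-client library. The template ID, personalisation fields and email address below are hypothetical placeholders.

```typescript
import { NotifyClient } from 'notifications-node-client';

// The API key would come from configuration, never from source control.
const notifyClient = new NotifyClient(process.env.NOTIFY_API_KEY);

// Send an email based on a template set up in Notify; the template ID and
// personalisation fields are illustrative, not the service's real ones.
await notifyClient.sendEmail(
  'f33517ff-2a88-4f6e-b855-c550268ce08a', // hypothetical template ID
  'claimant@example.com',
  {
    personalisation: { claim_reference: '000MC001', response_deadline: '20 April 2017' },
    reference: 'claim-000MC001', // our own identifier for tracing the message
  }
);
```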
In beta the team will need to show a wider range of testing in a variety of browser and device contexts, especially given some of the issues of the legacy IT estate in the courts, Citizens Advice and supporting third-sector environments. To name just one example, express/node.js applications often use socket.io libraries for certain communication tasks, and by default this assumes the availability of WebSocket support in the user’s browser for some operations. Should these features be used, support for IE8 and IE9 might not work as expected, and the polling-based fallback would need to be tested carefully, as sketched below. In beta the failover and disaster recovery procedures will also be examined carefully, especially given the legal standing and consequences of the agreements created through this digital service.
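For example, the polling-only path can be exercised deliberately by pinning the socket.io client’s transport, rather than waiting to discover fallback bugs on a legacy browser. A minimal sketch, with an illustrative service URL:

```typescript
import { io } from 'socket.io-client';

// Pin the client to HTTP long-polling to simulate a browser or proxy where
// the WebSocket upgrade is unavailable, so the fallback behaviour can be
// tested on purpose. The URL is illustrative.
const socket = io('https://civil-money-claims.example.gov.uk', {
  transports: ['polling'], // never attempt the WebSocket upgrade
});

socket.on('connect', () => {
  // Confirms which transport actually carried the connection ("polling").
  console.log(`connected via ${socket.io.engine.transport.name}`);
});
```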
Design
The design is being led by insights gathered from user research. Information is collected from new user research and testing, and the existing claims service. Professionals and other parties who assist people with the claims process are included.
The team is using tried and tested GOV.UK patterns, such as save and return, and check your answers. The team is up to speed with newly published design patterns and elements, for example, the very recently published ‘task list’ pattern.
The team has prioritised improvements to the main pain points they have found, for example:
- getting a claim started, and the initial ‘defence’ of a claim
- making it clearer that there are resolutions other than a court decision
- negotiating a resolution
The panel was impressed by how significant improvements have been achieved with new design patterns, new content, and variations on existing GOV.UK patterns in the priority areas. These are based on user insights and ideation, and have been iterated based on testing with users. For example:
- claim summary and details - timeline, what the claim is about, how it affected me
- structured negotiation and reviewing the agreement
There are areas of the service that could be improved further to help more people succeed unaided first time. The panel was confident that the team is working in a way that will continue to improve the service based on what they learn.
Analytics
The team described some good thinking on ways they might measure whether they are successfully meeting the needs of their users: resolving a dispute or recovering a debt through alternative mechanisms, such as mediation, without resorting to using the service.
Recommendations
To pass the next assessment, the service team must:
- The team must continue to understand, and provide evidence of, the needs of those who will need help to use the service. This includes people who are less confident online and those who lack access to the online service.
- The team must provide evidence of how they have ensured that their service does not exclude people with disabilities; this includes deaf users and those with cognitive, hearing, vision, motor and mental health impairments.
- The team must carry out at least two accessibility audits and be able to present evidence of any issues fixed.
- The team must continue to improve parts of the UI to help more people succeed unaided first time.
- The team will need to continue to code in the open, or to make a specific case as to why particular components cannot be opened in accordance with the standard. The team is reminded that a key motivation for coding in the open is to enforce a higher standard of developer practice, including in the area of secure deployment and configuration management.
- The team will need to show the end-to-end service, including integration with case management and other systems and the availability of appropriate environments. If necessary this testing will need to include production-like or synthetic data and appropriate ways to test with casework systems in a production-like environment.
The service team should also:
- The team should continue to understand the needs of their other user groups, and provide evidence of how they have iterated the service based on their feedback.
- The team should test their service with end users on a range of devices, including smartphones and tablets.
- The team should ensure that they have enough user researchers on board to be able to support the full range of user research activities needed to take the project forward
- Where possible, the service should record addresses as UPRNs, companies as Companies House numbers, and so on, and offer GOV.UK pickers for lists as and when they become available.
- The team should consider the integration of external APIs where possible, for example the Companies House API. If there are data elements needed by the service that would meet the requirements for a government register (for example, a list of mediators or list of courts), the team should liaise with the Registers product team to explore this further.
- Improve the UI to help people succeed more often first time and make the service more consistent with GOV.UK. Places that feel like they could be simpler for people, because they contain difficult process or legal concepts, or are doing more than ‘one thing per page’, include:
- Claimant - your details
- Claimant - entering the defendant’s details
- Defendant - some of the pages in the journey make it quite hard to know what to do
- Before creating an account, could more be done to help people collect the information they will need - e.g. contact details of the person they are claiming against?
- Could ‘What you’ll pay’ (the fee calculator) come before creating an account?
- Try other designs for the ‘agree and sign the settlement agreement’ page - the ‘paper form-like’ page
- Provide assistance specific to the step in the journey
- The team should keep working on the end to end service, and ensure that this transformation effectively generates savings for the business.
Next steps
You should follow the recommendations made in this report before arranging your next assessment.
Get advice and guidance
The team can get advice and guidance on the next stage of development by:
- searching the Government Service Design Manual
- asking the cross-government Slack community
- contacting the Service Assessment team