E-Claims - Alpha Assessment

The report from the alpha assessment of DCLG's E-Claims service on 8 June 2015 and the reassessment on 11 December 2015.

Department / Agency:
DCLG

Date of Original Assessment:
8/06/2015

Date of Reassessment:
11/12/2015

Assessment Stage:
alpha

Result of Original Assessment:
Not Pass

Result of Reassessment:
Pass

Lead Assessor:
H. Garrett (Original) / M. Brunton-Spall (Reassessment)

Service Manager:
D. Watkinson

Digital Leader:
P. White


## About the service

The E-Claims service is a management and control system for the delivery of DCLG and DWP programmes to meet European Commission guidelines. The public-facing component of the service allows users to apply for European Regional Development Fund (ERDF) or European Social Fund (ESF) money.

## Reassessment Report

11th December 2015

The E-Claims service has been reviewed against the 18 points of the Digital Service Standard at the end of alpha development.

After consideration, the assessment panel has concluded that the E-Claims service is on track to meet the Digital Service Standard at this early stage of development.

### Reasons

The panel was very pleased to see such an improvement since the last assessment. The service team has clearly taken the feedback to heart and has taken steps to improve the team and the project. The service is subject to demanding and complex regulations and requirements, and the team is navigating these by understanding user needs and how users will use the system.

### Point 1

Understand user needs. Research to develop a deep knowledge of who the service users are and what that means for the design of the service.

The team acknowledged that user research had started at a late point in the service’s development, when the design was already quite advanced. Since bringing a user researcher on board, the team have been meeting users on a regular basis, and are feeding insights from user research into the design of the service. The team have been researching with a wide range of users, both internal civil servants and external applicants. User research is planned and funded to continue through the beta phase. The team have been researching with all user groups to understand what support they might need, using a prototype of the service for accuracy, and have met with some users with the lowest levels of digital skills.

The team explained that assisted digital support doesn’t need to be provided to civil servant users, who use the administrative side of the service as part of their work. For the public-facing side (applicants), the team explained that the application process itself requires some level of digital skill and access, and that because applicants tend to come from larger organisations they expect users’ digital skills to be generally higher.

This was borne out by the team’s research. The team found that most of the support sought was to clarify content or to seek extra information, and had made improvements to the digital service prototype to reduce these enquiries.

### Point 10

Be able to test the end-to-end service in an environment identical to that of the live version, including on all common browsers and devices, and using dummy accounts and a representative sample of users.

The team have been working to ensure that the system is hosted on a commercial cloud solution that enables them to construct an environment automatically whenever one is needed. While there is still some work to be done to make this entirely automated, the expected lead time for a new environment is measured in hours, not weeks or months. Because environment creation is automated, there is strong assurance that the environments are consistent and that testing performed in them matches the production environment.
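The report does not name the tooling involved, but the principle behind this point is that every environment is built from the same automated definition. The sketch below is purely illustrative - a hypothetical Terraform-based setup driven from Python, not the team's actual pipeline.

```python
# Illustrative sketch only - the assessment does not name the cloud provider,
# tooling or environment names used by the E-Claims team.
import subprocess

def create_environment(name: str) -> None:
    """Build an on-demand test environment from a shared infrastructure definition."""
    # A separate workspace per environment, all built from the same templates,
    # gives strong assurance that test environments match production.
    subprocess.run(["terraform", "workspace", "new", name], check=True)
    subprocess.run(
        ["terraform", "apply", "-auto-approve", f"-var=env_name={name}"],
        check=True,
    )

def destroy_environment(name: str) -> None:
    """Tear the environment down again once end-to-end testing is finished."""
    subprocess.run(["terraform", "workspace", "select", name], check=True)
    subprocess.run(
        ["terraform", "destroy", "-auto-approve", f"-var=env_name={name}"],
        check=True,
    )

if __name__ == "__main__":
    create_environment("e-claims-test")
```

Because the whole environment comes from a single definition, the lead time is effectively the run time of the script rather than a manual provisioning process measured in weeks.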

### Point 13

Build a service consistent with the user experience of the rest of GOV.UK including using the design patterns and style guide.

The prototypes that have been developed are clearly much more in keeping with the government style than the original system, and have been iterated to meet the needs of the intended users. While the interface is reasonably complex and uses complex language, the team provided assurance that the majority of users are expert users who already understand the process, and that user research is looking at ways to improve the guidance content around the forms to aid users who run into difficulties.

### Point 14

Encourage all users to use the digital service (with assisted digital support if required), alongside an appropriate plan to phase out non-digital channels/services.

The team is aware of alternative channels, can measure them, and will be working to phase them out as soon as possible by promoting the digital service and subsequently switching off alternatives.

### Recommendations

### Point 1

Understand user needs. Research to develop a deep knowledge of who the service users are and what that means for the design of the service.

The team should develop personas from their in-person research with users that cover the range of assisted digital support needs for this service. These should take into account the four different applicant-user types required to complete the service - Application Administrator, Claims Editor, Claims Reviewer and Procurement Editor.

The team should carry out more research in context, in people’s offices and workplaces, to understand the full range of support that users would seek, including from third parties and face-to-face. This should include working with third party support-providing organisations, such as universities and charities.

The team should ensure that there isn’t an over-reliance on focus groups and group research sessions, making sure to meet people individually to understand the differences between users as well as the similarities.

### Point 2

Put a plan in place for ongoing user research and usability testing to continuously seek feedback from users to improve the service.

The team needs to ensure that feedback from user research that is incorporated into the prototype can also be carried through into the main system. The prototype’s success shows the importance of the new design, but it was not clear how the team would bring the design changes back into the main system.

### Point 3

Put in place a sustainable multidisciplinary team that can design, build and operate the service, led by a suitably skilled and senior service manager with decision-making responsibility.

The lack of an identifiable service manager, as defined by the Government Service Design Manual, means that there is no single decision-making point for the entire end-to-end service. The team has done an admirable job in attempting to distribute this role across a number of different positions, but the panel strongly recommend that the service appoint a single, suitably skilled service manager with decision-making responsibility.

It was also noted that some team members were contracted via different routes, making it unclear whether the entire team was funded for the duration of the service, through to live. We recommend that the team address this and ensure that a fully funded and well-staffed multidisciplinary team is in place before the private beta.

### Point 12

Create a service that is simple and intuitive enough that users succeed first time.

The team were able to demonstrate users succeeding when using the service from a start page to the end of a transaction.

In the next steps the team should ensure this scenario is widened so that it includes the whole experience: from the point of need, through getting to the digital service and reading guidance, and on to the completion of an application or payment of a claim. The team should be sure to include in their support model for this service all routes of assisted digital support that their research shows users need, including, if required, face-to-face support.

The supporting content on GOV.UK does not meet GOV.UK standards. The panel recommends that the team liaise with the DCLG content team and ensure that the service and supporting content is fully driven by user needs.

Additionally, the service duplicates information that can currently be found on GOV.UK. The panel recommends that, before the beta assessment, the team agree with the GOV.UK team what content lives on GOV.UK and what lives in the service. The team should also agree with GOV.UK how the user journey between the two should work.

### Point 13

Build a service consistent with the user experience of the rest of GOV.UK including using the design patterns and style guide.

While there has been a content designer on the team, budget for their time is still to be confirmed for the next phase. The team should ensure this is in place so they can continue the good work already done in this area.

The same holds true for the supplier team who have been working on the front-end code in the prototype. This team needs to be in place for the next phase so that the quality of that code, which is informed by user research, is brought into the main build.

Design of the service is currently handled by the user experience specialist and user researcher on the team. This works for the current phase, but the panel would expect to see someone with overall responsibility for design on the team by the next phase (this could be an existing member).

### Point 16

Identify performance indicators for the service, including the 4 mandatory key performance indicators (KPIs) defined in the manual. Establish a benchmark for each metric and make a plan to enable improvements.

The team are measuring the performance of their own department’s telephone support, but should be sure to also measure any other routes of support required for this service - for example, face-to-face and third-party support.

### Summary

The panel was very pleased to see the team again and to see how much has changed since the original assessment. The panel recognise that the team has worked hard to take the feedback on board and act on it.

The service at alpha has demonstrated that there is a user need for a better way to claim and administer the grants, and that the approach shown is much better than the existing systems. It therefore passes the alpha assessment and can progress to building a beta that can be tested with real users.


## Original Assessment

8th June 2015

After consideration, the assessment panel has concluded that the E-Claims service is not yet on track to meet the Digital Service Standard at this early stage of development.

### Reasons

### User needs, research and creating a simple and intuitive service - service standard points 1, 8, 9, 10, 12, 19, 20

There has not been sufficient user research during the alpha phase to understand user needs fully enough to proceed to beta. The panel was pleased to hear about the work done in discovery to talk to internal users of the service and find out more about their needs, but there has not been continued investment in user research. Specifically, there has been very little research with applicants, other than an initial workshop during discovery.

Offline support is provided for users of the current service, but due to lack of evidence of user research it was unclear whether the proposed support would meet the needs of assisted digital users for this specific service.

The panel was pleased to see how the service team were using agile methods and techniques. This includes involving lots of internal users in show and tells and actively seeking feedback on work in progress from a wide group of internal users. However, this is not a substitute for seeing people using the service.

At this stage the full end-to-end transaction has yet to be tested with either users applying for funds or users administering them, although the team have done some research with internal applicants on the process and have gathered feedback on the sections of the service developed so far. This research could be done with some lightweight prototyping to test out the full journey; this will save time in the long run, as it will help bring out the major usability issues early on.

The panel appreciate that there are fixed deadlines for launching the new service, but this kind of research is imperative in ensuring that any problems in the journey are uncovered as soon as possible. At the current time it is not clear how the team can be confident that the service is simple and intuitive enough that users succeed first time, unaided.

### Multidisciplinary team - service standard points 2, 13

There isn’t a multidisciplinary team in place with all disciplines represented, and these gaps in the team are reflected in the service. The team does not have a user researcher, content designer, product analyst or designer (or sufficient time from each of those disciplines) working on the service.

The content is written outside the team, and the team seemed reluctant to challenge content that has already been signed off. Whilst the panel appreciate the complexities of the service and its legal implications, the team should have the autonomy to give feedback on the content. It is essential that the team have the research and evidence to inform these decisions and have the content design skills within the team to improve the content.

The team have been using the GDS design patterns and style guide and will be building their forms based on them. This should remove the need to design solutions for patterns of interaction which already exist and ensure the service is consistent with the rest of GOV.UK. It is essential, however, that the team have enough time from a designer to assist in the creation of any new patterns and to keep the service as a whole simple and intuitive, with all decisions directly informed by the outputs of user research.

### Analysis - service standard points 7, 21, 22, 23, 24

Decisions that are not necessarily backed by data appear to have been made outside the team’s control. For example, the plan is to support IE9 and above, and not older browsers. This decision was articulated in an initial requirements document, but it was not clear whether it was based on evidence - for instance, on the current usage of legacy back office systems or on an analysis of potential users of the application service (businesses, charities, colleges). The team did not appear to be able to direct these decisions about the service.

It is positive that the team plan to use Google Analytics to measure the performance of the service. However, the team need to understand how they will benchmark, measure and report on the key performance indicators of cost per transaction, completion rate, user satisfaction and digital take-up.
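For reference, the sketch below shows one way the four mandatory KPIs are commonly calculated; the figures and function names are illustrative and are not drawn from the E-Claims service.

```python
# Illustrative only: made-up figures, not data from the E-Claims service.
# The four mandatory KPIs reported to the Performance Platform.

def completion_rate(completed: int, started: int) -> float:
    """Proportion of started digital transactions that were completed."""
    return completed / started

def digital_take_up(digital: int, total: int) -> float:
    """Proportion of all transactions (digital and offline) completed digitally."""
    return digital / total

def cost_per_transaction(total_cost: float, completed: int) -> float:
    """Total cost of providing the service divided by completed transactions."""
    return total_cost / completed

def user_satisfaction(satisfied: int, responses: int) -> float:
    """Proportion of survey respondents satisfied with the service."""
    return satisfied / responses

# Example benchmark figures (hypothetical):
print(f"Completion rate:      {completion_rate(450, 600):.0%}")          # 75%
print(f"Digital take-up:      {digital_take_up(450, 500):.0%}")          # 90%
print(f"Cost per transaction: £{cost_per_transaction(90000, 450):.2f}")  # £200.00
print(f"User satisfaction:    {user_satisfaction(320, 400):.0%}")        # 80%
```

Benchmarking means recording these figures for the current service so that improvements can be measured against them once the new service is live.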

### Technology decisions - service standard points 5, 14, 15, 17

The panel found it difficult to do a thorough assessment of the technology decisions due to the lack of user research to base them on. Technology decisions should be driven by user needs and should ensure that the service does the hard work to keep things simple for users. The panel has therefore assessed what was presented, but understands that further research into user needs may change the product direction.

While it was good to hear that the team would like to open source the code, and has already seen the benefits of owning the IP so that suppliers can share code internally, the panel did not see any actual evidence that the team has started the process.

While the panel understand the difficult reality of the team’s situation with appointing a supplier, the panel felt that appointing a database vendor to build the system means that the team are less able to make informed decisions about vendor lock-in. The panel was presented with no evidence to indicate that the team had the skills or knowledge to challenge or guide the vendors correctly.

The panel was very pleased to note that the technical intention of the disaster recovery plan appeared to be very modern and appropriate to the requirements. The panel was also pleased to see that the hosting arrangements are equally modern and appropriate.

The panel was very concerned to note that the current deployment practices are not intended to continue through to live. The panel was unable to get a good understanding of how releases into production are intended to be done once the service is live; however, it appeared that the team wanted to add significantly more governance and overhead in the future, which would reduce deployment velocity and prevent regular releases. The panel strongly recommend against taking this course of action.

### Recommendations

The service must address the following recommendations before the next assessment.

  • The team should ensure that the skills gaps within the team (user research, design, content design and analysis) are addressed as a matter of urgency.

  • The team should begin user research in the alpha stage, particularly with external applicants. An immediate first step could be to observe applicants using the interim forms that are already in place.

  • User research should be undertaken to begin understanding assisted digital user needs and potential barriers to using the digital service independently.

  • The team should test out the full end-to-end transaction in prototype form with both internal and external users and iterate that prototype based on those research findings. These findings should form the basis of a beta.

  • There was a concern that the governance structure described during the assessment is likely to prevent rapid iteration once the service is live. The team should look at how they can improve these processes, which are in danger of slowing them down.

  • The team should make a plan to open source the code.

  • The team should meet with the Performance Platform team in GDS to agree how they will measure and report on the four KPIs.

### Digital Service Standard criteria

| Criteria | Passed | Criteria | Passed |
|----------|--------|----------|--------|
| 1 | Yes | 2 | Yes |
| 3 | Yes | 4 | Yes |
| 5 | Yes | 6 | Yes |
| 7 | Yes | 8 | Yes |
| 9 | Yes | 10 | Yes |
| 11 | Yes | 12 | Yes |
| 13 | Yes | 14 | Yes |
| 15 | Yes | 16 | Yes |
| 17 | Yes | 18 | Yes |

Published 6 January 2017