Personal Independence Payment beta assessment


Service Standard assessment report

PIP

From: Assurance team
Assessment date: 31/10/2023
Stage: Beta
Result: Red
Service provider: DWP

Previous assessment reports

Service description

Personal Independence Payment (PIP) is a benefit that supports citizens who have long-term health conditions or disabilities and who need help with extra costs because they have difficulty with daily living, getting around, or both.

To apply for this benefit, claimants must provide some basic biographical information, provide information about their circumstances and functional ability, and upload evidence that supports their claim. Claimants may also be referred for further assessment of the impacts of their condition(s).

The current default way to apply is via phone and paper channels.

Service users

This service is for…

Primary users

  • Potentially any citizen
  • Citizens with a long-term health condition and/or disability
  • Citizens with mental and/or physical health conditions (sometimes both)

Secondary users

  • 3rd party organisations who provide general support to citizens
  • Those who become involved in direct support in relation to PIP
  • Other individuals (friends/family) providing support to the citizen

Internal users

  • G4S staff, contact centres & call handlers
  • Decision makers, Case managers, Healthcare practitioners
  • DWP staff or organisations connected to DWP

Covering advisory note

This service is a first step along a wider journey; having an easy and intuitive way to know if you can apply for PIP and submit required documentation is a prerequisite for a successful application. The team has demonstrated excellent approaches within the constraints they are working to.

The assessment panel evaluated the work against the Service Standard; the panel did not evaluate the whole end-to-end service. The panel only considered the part of the service that enables the user to apply for PIP. This includes all steps from awareness of PIP to submitting the application. While the assessment scope did not include assessing all elements of Apply for PIP end to end and front to back, the Service Standard requires the service team to demonstrate an understanding of how the part being assessed connects with other elements of the journey.

This is in line with Point 3 of the Service Standard: “Service teams should: be able to explain how the service the team is working on will join up with other things into a journey that solves a whole problem for users.” We did not see adequate evidence in this assessment that this is happening.


1. Understand users and their needs

Decision

The service was rated amber for point 1 of the Standard.

The panel acknowledges that the team has done a good amount of research: they have developed a sound set of user needs and personas, included end users as well as secondary users, assistive technology users and hard-to-reach users in research, and have also analysed user support tickets.

Recommendations:

  • one important aspect of the service meeting user needs is making sure the end-to-end user experience is coherent and easy to understand and use. Unfortunately the team has not been able to test the full extent of the PIP apply journey, and therefore they cannot be sure the user will find the whole journey easy to understand and use. The recommendation is to test the end-to-end PIP apply journey, from registration (PIP1) up to evidence and IDV, with users of the service (including people with access needs and those who regularly use assistive technology), focusing on evaluating whether the experience is coherent and of good quality in terms of usability. If a research session would be too long, it could be divided into several sessions (with the same participants), or the team could instead use diary studies or another suitable method. This is the main issue with the team's approach; they could come back for a reassessment as soon as they plan and execute this research, making sure its findings inform design.

  • in future research the team should aim to cover the lower end of the digital inclusion scale when recruiting research participants. This can be difficult when using recruitment agencies, but the risk of not recruiting participants with low digital confidence is that the service might exclude this segment of the population.

  • in future research the team should aim to do more in-person research. There are advantages to remote research, but there are also advantages to in-person research, particularly for users with low digital confidence, those with certain disabilities, or those who regularly use assistive technology.

2. Solve a whole problem for users

Decision

The service was rated red for point 2 of the Standard.

The service team mentioned during the assessment that they are working under restrictive policy constraints, particularly around 'equivalency', which do not allow them to create a digital service that delivers a better experience for users than the paper route. The team are experimenting with alternative designs at the transaction level, but alternatives to the design of the broader user journey and whole problem space do not appear to have been fully considered.

Recommendations:

  • the outcome the service team is working to deliver is a step in the wider journey. At the next assessment the panel expects to see how non-digital channels connect with this digital journey. The panel saw a work in progress service blueprint and would encourage the team to spend time working with the service designer to reflect understanding of how the whole service fits together, including the journeys users undertake when unable or unwilling to use digital channels.

  • before the next assessment the team should provide evidence that more consideration was given to appropriately scoping this service to reflect how users think

  • after the end-to-end testing recommended in point 1 is done, ensure future iterations incorporate findings related to pain points in the journey

3. Provide a joined-up experience across all channels

Decision

The service was rated red for point 3 of the Standard.

Recommendations:

  • while we appreciate the incremental approach to assessment that PIP is taking, it is essential to demonstrate an understanding of the whole service at a high level, and we recommend that this work is included in future assessments. While we do not expect the service team to demonstrate an in-depth understanding of the whole service on all channels, we did not see sufficient evidence of understanding the service as a whole. The team should be able to show the assessment panel an artefact that demonstrates an understanding of the end-to-end, front-to-back Apply for PIP journey

  • the team should be able to demonstrate at the assessment how paper, phone and face-to-face channels interact, or will interact, with the current journey

  • work on closely interlinked channels is happening at different times, for example telephony improvements are at an early stage. We ask that the team reviews how non-digital channels connect with the digital journey.

4. Make the service simple to use

Improvements in the digital channel are blocked by not being able to deviate significantly from paper forms. While we appreciate that this is out of the Service Team’s control, it stops them from being able to make the service simple to use.

Decision

The service was rated red for point 4 of the Standard.

Recommendations

  • provide more evidence of initiatives to explore designs that would be easy for users. The team should present evidence of rounds of designs that were created, tested and iterated as a result.
  • work more actively with the design system community to benefit from the experience that community offers and from its evidence-based patterns and components.

5. Make sure everyone can use the service

Decision

The service was rated red for point 5 of the Standard.

Recommendations

  • the Accessibility Audit was requested but has not yet been received. Before the next assessment the team needs to have successfully completed the accessibility audit.

  • all usability testing was done online. The lack of face-to-face testing excludes many users.

  • it is not possible to assess the assisted digital offer as it was excluded from the Service team’s scope.

6. Have a multidisciplinary team

Decision

The service was rated red for point 6 of the Standard.

Recommendations

  • we recommend stronger UCD leadership overseeing the whole end-to-end journey, to ensure the Apply service work is more closely aligned with other teams' work. The team should review the DDaT framework, the Service Standard and services of comparable size across GOV.UK, and compare their team structure. It appeared at the assessment that design leads and a head of research were lacking. These roles need to be at an equivalent level to product, delivery and technical roles.

  • given the size of the service and the importance of meeting user needs, the team would benefit from a more joined up approach to research planning and execution. One approach to achieve this would be to have someone coordinating the user research efforts of the whole end to end team, making sure that research is joined up and that there are no gaps or duplications in the work.

  • before the next assessment the panel would like to see how the risk of creating service silos is being mitigated.

  • we recommend that the roles in the PIP team are more closely aligned to the DDaT Framework, for example 'Designer' rather than Interaction Designer, Service Designer or Content Designer; it appeared that in the service team one role would cover several roles.

7. Use agile ways of working

Decision

The service was rated amber for point 7 of the Standard.

Recommendations

  • before the next assessment the team needs to demonstrate they have governance arrangements that are consistent with the agile governance principles and make sure that the right people know what’s happening with the service, at the right level of detail

  • provide more evidence of working more closely with the central / end-to-end team in an agile way. At the assessment it seemed the end-to-end team needs to hold the high-level overview of the whole service rather than operate as a lower-level single team.

8. Iterate and improve frequently

The team are limited in how much they can improve the service. Improvements in the digital channel are blocked by not being able to deviate significantly from paper forms.

Decision

The service was rated red for point 8 of the Standard.

Recommendations

  • at the assessment it wasn't clear how some key design decisions were made based on user research. The panel felt that the team didn't have the freedom to fully test whether journeys reflect how users think. This was also a recommendation given in previous assessments. At the next assessment the team needs to be able to evidence that they are working in the spirit of the Standard and working towards meeting user needs. Point 2 asks that journeys are scoped according to how users think.

9. Create a secure service which protects users’ privacy

Decision

The service was rated green for point 9 of the Standard.

The team informed the panel that they regularly engage with the DPO and are regularly reviewing the cookie policy, privacy policy and DPIA. The team informed the panel that the ITHC is planned in a week's time. They engage with security experts and the security team. All data is encrypted at rest using MongoDB native encryption for the WiredTiger storage engine, and TLS 1.2 secures the data in transit. The main integration is with the shared DWP External Identity capability. The connection is secured through the Kong API Gateway using the standard OIDC OAuth 2.0 framework. Data can be accessed only by authorised people, with an audit log.
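
As an illustration of the in-transit protection described above, the sketch below shows how a Node.js service might require TLS on its MongoDB connection. This is not the team's actual configuration: the connection string, database and collection names are placeholders, and encryption at rest (WiredTiger native encryption) is configured on the database server rather than in application code.

```typescript
// Illustrative sketch only: enforcing an encrypted connection to MongoDB
// from a Node.js service. Encryption at rest is a server-side concern and
// is not shown here. The URI and names below are placeholders.
import { MongoClient } from "mongodb";

const uri = process.env.MONGO_URI ?? "mongodb://db.example.internal:27017";

const client = new MongoClient(uri, {
  tls: true, // require TLS for data in transit
});

export async function getApplicationsCollection() {
  await client.connect();
  // Access control and audit logging are configured on the database and
  // gateway side rather than in this application code.
  return client.db("pip-apply").collection("applications");
}
```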

The policies are:

  • https://www.apply-for-pip.dwp.gov.uk/accessibility
  • https://www.apply-for-pip.dwp.gov.uk/cookies
  • https://www.apply-for-pip.dwp.gov.uk/cookies-details?redirect=%2Faccessibility

Recommendations

  • the team informed the panel that the ITHC is planned in a week's time; complete the ITHC as planned

10. Define what success looks like and publish performance data

Decision

The service was rated amber for point 10 of the Standard.

Recommendations

Measurement needs to take into account the PIP journey as a whole, beyond Apply, so it is crucial to get the work right here so that it flows through the rest of the service.

  • the team need a closer relationship with the data team, specifically the digital performance analyst. Currently the potential for good data use and measurement is huge, but it has not been able to come to fruition due to the separation between the service team and the data team. This could be improved by a dedicated analyst for PIP or more time dedicated to the PIP team by the data team.
  • whilst there is evidence of measurement in iterations, the dashboard needs to be shown to be driving decision making.
  • data collection needs to be a priority, as gaps in the data leave the current dashboard unreliable.
  • the performance framework is outdated and not reflective of the current service roadmap. The team need an updated performance framework and plan to maintain and use it in decision making.
  • the team needs to set measurable goals for public beta, to show their progress and readiness for live.
  • we recommend that user research and data colleagues from the service engage with the central GOV.UK team (via content colleagues) to obtain feedback on the start page for the current service

11. Choose the right tools and technology

Decision

The service was rated green for point 11 of the Standard.

The team is using the right tools and technologies for development, testing and release. The team informed the panel that they are using:

  • Application: Node.js and Nunjucks
  • Database: MongoDB
  • Analytics and dashboards: Azure Data Factory and Microsoft Power BI
  • Infrastructure and cloud: AWS, with Terraform for infrastructure as code
  • Testing: Selenium and REST Assured
  • CI/CD pipeline: GitLab
  • Stories, epics, themes and project management: Jira and Confluence
  • Audit, monitoring and logging: AWS CloudWatch, Prometheus and Grafana

The team is using live-like staging environments, and CI/CD pipeline stages check for code vulnerabilities: Trivy to check the Docker images, OWASP dependency checks for Java library vulnerabilities, Gitleaks for finding hard-coded secrets, Sonar for code quality checks, and SpotBugs to look for Java bugs.
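
As an illustration of the monitoring stack listed above, the sketch below shows how a Node.js service might expose metrics for Prometheus to scrape and Grafana to chart, assuming the express and prom-client packages. The metric names and endpoints are illustrative only, not the team's actual instrumentation.

```typescript
// Minimal sketch: exposing application metrics from a Node.js service so
// Prometheus can scrape them and Grafana can chart them.
import express from "express";
import client from "prom-client";

const app = express();
const register = client.register;

// Collect standard Node.js process metrics (CPU, memory, event loop lag).
client.collectDefaultMetrics({ register });

// A hypothetical counter for submitted applications.
const submissions = new client.Counter({
  name: "pip_apply_submissions_total",
  help: "Number of PIP applications submitted",
});

app.post("/submit", (_req, res) => {
  submissions.inc();
  res.sendStatus(202);
});

// Prometheus scrapes this endpoint on a schedule.
app.get("/metrics", async (_req, res) => {
  res.set("Content-Type", register.contentType);
  res.end(await register.metrics());
});

app.listen(3000);
```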

12. Make new source code open

Decision

The service was rated green for point 12 of the Standard.

The team informed the panel that they have published the code in the open in the following open-source repositories:

Recommendations

  • continue to publish the code in the open.

13. Use and contribute to open standards, common components and patterns

Decision

The service was rated green for point 13 of the Standard.

The team informed the panel that they are using the govuk-casa framework (https://github.com/dwp/govuk-casa). The team is also using GOV.UK Notify (https://www.notifications.service.gov.uk/). The team is utilising the shared DWP External Identity capability and is positioning to use One Login for Government (https://www.sign-in.service.gov.uk/) in future.
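
As an illustration of the GOV.UK Notify integration mentioned above, the sketch below shows how a Node.js service might send a confirmation email using the notifications-node-client library. The API key, template ID and personalisation fields are placeholders rather than the service's real values.

```typescript
// Minimal sketch of sending a confirmation email through GOV.UK Notify.
// Template content and branding are managed in the Notify admin interface;
// the values below are placeholders for illustration only.
import { NotifyClient } from "notifications-node-client";

const notifyClient = new NotifyClient(process.env.NOTIFY_API_KEY ?? "");

export async function sendApplicationReceivedEmail(emailAddress: string, claimRef: string) {
  await notifyClient.sendEmail("TEMPLATE_ID_PLACEHOLDER", emailAddress, {
    personalisation: { reference: claimRef },
    reference: claimRef, // allows the notification to be traced later
  });
}
```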

14. Operate a reliable service

Decision

The service was rated green for point 14 of the Standard.

The team informed the panel that the SLA covers 24/7/365 service operating hours, with a business NFR of 99.9% availability. The team informed the panel that if the service is down, the alternative phone and paper channels remain available. Shutter pages hosted by Akamai direct the claimant to the appropriate phone number if the service cannot be reached. The service is designed for zero downtime, and instance recovery time was 6 minutes. A disaster recovery exercise, simulating an AZ outage, was completed on 21/2/23.

Recommendations

  • the team should conduct and test full disaster recovery and business continuity plans again in the next 3 to 6 months for validation.

Next Steps

In order for the service to continue to the next phase of development, it must meet the Standard and get CDDO spend approvals. The service must be reassessed against the points of the Standard that are rated red at this assessment.

Updates to this page

Published 14 October 2024