Sign up to Making Tax Digital alpha assessment

Service Standard assessment report

Sign up to Making Tax Digital

Assessment date: 11/06/2024
Stage: Alpha assessment
Result: Red
Service provider: HM Revenue & Customs

Previous assessment reports

  • Not applicable

Service description

Sign up for Making Tax Digital

  • Making Tax Digital (MTD) Sign Up for Income Tax Self-Assessment (ITSA) is a service that allows users currently using Self-Assessment (and agents acting on their behalf) to register for a new way of reporting income and expenses. Users who are sole traders and/or landlords make quarterly submissions of income and expense returns via API-enabled third-party software.

Choose software for Making Tax Digital

  • Software Choices is a standalone tool provided by HMRC that supports users (and agents acting on their behalf) in choosing which MTD software to use, by providing details of the functionality and features each product supports.

Service users

This service is for…

  • individuals who are in receipt of rental income and/or sole traders (self-employed individuals who run a one-person business registered with HMRC)
  • agents who sign up clients on their behalf (agents must be registered with HMRC and have a formal relationship with the client)

Things the service team has done well:

The team has researched with different users over the course of two years and made good attempts to research with users with access needs. The team understood some of the challenges and pain points their users may encounter when using the service. The user researcher on the team showed a good understanding of some of the gaps in the research and had begun to consider how to address these gaps going forward.

The team has the expected and recommended range of digital expertise to ensure a user-centred design (UCD) approach can be embedded in their delivery. The team evidenced iterations of the guidance within the service that had been improved based on user feedback and insights relating to user personas. It was good to see that measures were in place to support knowledge transfer within the team, given the length of time the service has been in delivery and the turnover of team members.

The team has implemented Google Analytics tracking and has the Customer Insight Platform dataset to support their analytics requirements. The team has pulled together a performance framework to start understanding their KPIs.


1. Understand users and their needs

Decision

The service was rated amber for point 1 of the Standard.

During the assessment, we didn’t see evidence of:

  • detailed understanding of the differentiated needs, pain points and behaviours of people within the user groups, and how this will affect the way they use the service
  • an in-depth understanding of how the different user journeys might impact the user experience with the service
  • research with private beta users to gather insights which can inform the iteration of the service and help the team understand user behaviour reflected in analytics (for example, drop-out)
  • how identified user needs and pain points will be addressed by the service

2. Solve a whole problem for users

Decision

The service was rated amber for point 2 of the Standard.

During the assessment, we didn’t see evidence of:

  • understanding the constraints of the service and how the team has challenged and worked around them
  • the team building relationships with the other teams responsible for providing assisted digital support

3. Provide a joined-up experience across all channels

Decision

The service was rated red for point 3 of the Standard.

During the assessment, we didn’t see evidence of:

  • data and research being used to improve offline and online channels - the team shared evidence showing potential risks around users with low digital skills being mandated to use the service, but the panel didn’t see evidence that the team has tried to address these concerns
  • how design, user research and content design are considering the full end-to-end service, including the non-digital parts

4. Make the service simple to use

Decision

The service was rated red for point 4 of the Standard.

During the assessment, we didn’t see evidence of:

  • making sure the service helps the user do the thing they need to do as simply as possible. The team has focused on professional users who have multiple clients, but the proposed design requires these users to manually input information repeatedly. The design seems more appropriate for low-frequency, individual users.
  • planning to adapt the design to suit individual users who don’t have agents. By focusing on the professional users who will use the service immediately, the team has not prepared for these users, despite them being mandated to use the service in the coming years.
  • testing for usability frequently with actual users from the private beta, particularly those with access needs

5. Make sure everyone can use the service

Decision

The service was rated red for point 5 of the Standard.

During the assessment, we didn’t see evidence of:

  • testing of the private beta service with users with low literacy and low digital literacy
  • testing of the private beta service with users with access needs
  • exploration of the experience of users who need an offline route due to digital exclusion and low digital literacy
  • exploration or understanding of users at risk of being excluded from using an online service and what impact this may have

6. Have a multidisciplinary team

Decision

The service was rated green for point 6 of the Standard.

7. Use agile ways of working

Decision

The service was rated red for point 7 of the Standard.

During the assessment, we didn’t see evidence of:

  • sufficient governance arrangements that are consistent with agile governance principles and make sure that the right people know what’s happening with the service, at the right level of detail. On several occasions during the assessment, the team suggested that various elements of the service, such as assisted digital routes, key decisions about service design and the prioritisation of user needs, were out of scope or being considered by another team within the programme.
  • decision making with senior stakeholders that allows the team to adapt and change
  • changes to the service being based on user research, feedback and testing, rather than on direction from members of the programme who are not close to the delivery of the service

8. Iterate and improve frequently

Decision

The service was rated green for point 8 of the Standard.

9. Create a secure service which protects users’ privacy

Decision

The service was rated red for point 9 of the Standard.

During the assessment, we didn’t see evidence of:

  • how citizens are informed of their data privacy rights and how their data will be used by government. Making Tax Digital is a very substantial change compared to the current process. The integration of third-party software with HMRC systems may give rise to concern from citizens about the security and privacy of their data. The lack of an alternative route is also likely to raise questions around consent. More evidence that users understand their privacy rights and have opt-out choices would be expected for a change of this significance.
  • how third-party vendor software is regularly scanned for vulnerabilities. This will become more critical as the number of third party vendors scales up, creating potential risks related to security and privacy, for example data leaks.
  • the way in which citizens who go through an agent for their tax returns have their identity verified. While citizens who self-report must go through Government Gateway (and eventually One Login for Government), the process for verifying the identity of citizens filing through an agent was unclear.

10. Define what success looks like and publish performance data

Decision

The service was rated amber for point 10 of the Standard.

During the assessment, we didn’t see evidence of:

  • the KPIs in the performance framework linking to the hypotheses and user needs (the team should assess what the success and failure points of the hypotheses are to create more tailored KPIs)
  • analytics and the setting of success measures being baked into the team’s ways of working (such as making use of the design and user research sprints ending three weeks ahead of development)
  • measuring the impact of the latest iterations they have made, such as the change to the GOV.UK pages

11. Choose the right tools and technology

Decision

The service was rated red for point 11 of the Standard.

During the assessment, we didn’t see evidence of:

  • how the service will migrate to One Login for Government. One Login is now used by close to 20 services for citizens to prove their identity (see https://www.gov.uk/using-your-gov-uk-one-login/services). Given the timescales and current phase of Making Tax Digital, we expected to see evidence that the service had begun to work with One Login. The One Login identity journey is substantially different to Government Gateway. Given the timelines, scale and ambition of Making Tax Digital, understanding the risks and opportunities presented by the move to One Login is essential.
  • how citizens will be treated fairly and equitably by third-party software vendors. Many of these products come at a cost, and accessibility and ease of use across these platforms will be an important part of a successful service. Taking the example of Verify (see https://www.nao.org.uk/reports/investigation-into-verify/), there is a risk related to the use of third-party vendor software which should, at the very least, be addressed through further research. This could include exploring a free and open GOV.UK alternative to licensed vendor products.

12. Make new source code open

Decision

The service was rated green for point 12 of the Standard.

13. Use and contribute to open standards, common components and patterns

Decision

The service was rated amber for point 13 of the Standard.

During the assessment, we didn’t see evidence of:

14. Operate a reliable service

Decision

The service was rated amber for point 14 of the Standard.

During the assessment, we didn’t see evidence of:

  • how the service team will monitor the reliability of vendor software and how users will report issues, for example related to software compatibility

Next steps

In order for the service to continue to the next phase of development, it must meet the Standard and get CDDO spend approvals. The service must be reassessed against the points of the Standard that are rated red at this assessment.

Updates to this page

Published 30 December 2024