Census Test 2017 - Beta Assessment

The report from the beta assessment of the Office for National Statistics' Census Test 2017 on 12 January 2017.

Stage: Beta
Result: Met
Service provider: Office for National Statistics

The service met the Standard because:

  • there’s an effective and capable team, performing to a high standard, under a skilled and knowledgeable service manager;
  • the team have taken a thorough and effective approach to user research, using their findings to develop and improve the test service;
  • a pragmatic approach to a substantial technical challenge is in place, with appropriate measures to manage security risks and ensure privacy of personal data.

About the service

Service Manager: Tom Scott

Digital Leader: Terry Makewell

Description

The service is a test version of the 2021 Census. The test will be carried out in 2017. The census is a 10-yearly snapshot of population data that’s used to inform policy.

The test will be used to examine the impact of new questions on sexual identity. The impact will be assessed by monitoring responses in the questionnaire and by interviewing a sample of responding householders. The test will aim to reach 200,000 voluntary users.

Service users

The users of this service are householders in England and Wales, who will (in 2021) have a legal obligation to complete a census return, and policy makers who require evidence from the census to inform policy and advice. A significant number of administrative and field staff will support the process, and they will require effective tools and services to support this test, the 2019 test and the 2021 Census. The 2017 test is voluntary, but ONS will still aim to reach 200,000 users to ensure the data is statistically relevant.

Detail

Lead Assessor: Simon Everest

User needs

The team have carried out a variety of contextual user research in different locations, and good usability testing with prototypes. This has given them a good understanding of the people who need to complete the census.

They have included people with low digital skills and limited access in their research to understand their needs. They have done good contextual research with people with access needs, and the Electronic Questionnaire (EQ, the underlying survey tool) has been through a full accessibility audit.

The team have a good plan to learn more about users and their needs during the 2017 test. They are working to create behavioural personas and journey maps to document what they’ve learned.

The team have done good initial research at communal establishments to understand how residents will complete the census and what support is needed. Different types of communal establishments will be included in the 2017 test.

To complete the census, one of the people living in each household needs to act as the ‘householder’. The team know this can be a problem in many types of shared properties. The team need to do more contextual research and usability testing on this issue.

The team know that some users will leave the census at a question they don’t want to answer. Users can skip questions, but to encourage users to complete more questions, the interface does not make this obvious. The team need to do more to learn about and resolve this issue.

During the 2017 test, the team need a clearer plan for learning how the census works for people with access needs: for example, whether people with low vision can use the invitation letter, and whether people who are deaf can use the helpline.

The team must do more research with the staff who will deliver the census, to understand their needs from case and workforce management systems. The success of the census depends on these systems working well for staff. If these systems and staff are provided by a third party, the contract should require the supplier to work with the team to research the service end to end and front to back.

The team have tested the 2017 test service with the National Statistician, John Pullinger, and plan to test the 2021 Census service with the Minister for the Cabinet Office.

Team

The service team demonstrated an extensive knowledge of the Census, its importance and its place in the government landscape. It’s evident the wider Census project includes expertise in data and statistical analysis, and the majority of other roles were well represented.

However, the team currently does not have a designer working within it. The team is reliant on the design capability in the separate EQ team, based in Newport, due to recruitment constraints at the Titchfield office. This is a serious gap, and if it is not possible to recruit permanently into the team, a short-term solution should be found.

The ability to rapidly prototype features is vital for effective service development and user research. The dependence on the separate EQ team for such a fundamental component of the service is an apparent source of delay and compromise in service development. A number of positive steps have been taken to support collaboration between the two teams, including improved working practices and technology (eg high quality video conferencing). This should continue, to ensure that EQ prioritisation isn’t a barrier to improving the census service, both during the test phases and as the 2021 Census service is developed. If EQ continues to slow or limit progress, it may be necessary to consider alternative approaches.

The team are working in an agile way, and have access to appropriate tools and technologies to enable them to work flexibly and effectively. They are largely co-located, aside from the EQ team, and have a working environment that encourages sharing, openness and collaboration.

Technology

The team has shown good pragmatism in creating components to support the 2017 test, using appropriate rapid development technology stacks to produce the provisional Response Management and backend case management components.

Although initially working largely from an enterprise ‘blueprint’, the team appear empowered to take their own decisions, for example in their use of Amazon Web Services (AWS) over the legacy Eduserv hosting environment, in advance of ONS’ proposed wholesale move to Cloud Foundry.

The team has a good understanding of performance testing methodology for predicted 2017 digital take-up volumes and how to support the service during the test period. The team has a good approach to security including managing privacy risks using appropriate levels of encryption for data stored in public cloud, and conducting penetration testing.
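As an illustration only (the report does not describe the team’s actual implementation), the sketch below shows one common way to apply an appropriate level of encryption to data stored in public cloud: server-side encryption with a customer managed AWS KMS key, via boto3. The bucket name, key alias and data layout are hypothetical.

    import json
    import boto3

    # Hypothetical names for illustration; not taken from the report.
    BUCKET = "census-test-2017-responses"
    KMS_KEY_ALIAS = "alias/census-test-responses"

    s3 = boto3.client("s3")

    def store_response(case_id: str, response: dict) -> None:
        """Store a questionnaire response encrypted at rest.

        ServerSideEncryption="aws:kms" asks S3 to encrypt the object
        with the named customer managed KMS key, one appropriate
        control for personal data held in public cloud.
        """
        s3.put_object(
            Bucket=BUCKET,
            Key=f"responses/{case_id}.json",
            Body=json.dumps(response).encode("utf-8"),
            ServerSideEncryption="aws:kms",
            SSEKMSKeyId=KMS_KEY_ALIAS,
        )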

Continuous delivery is at a nascent stage due to the ongoing migration of components into AWS. The census platform currently has one environment in AWS. Since the service is not yet in production, for the purposes of this assessment we accept the current situation as appropriate, but future assessments will expect to see a full delivery pipeline, including the usual set of development and production environments, to allow the team to iterate quickly with rapid feedback on their work. We understand the team already has this well in hand and was provisioning a new testing environment in AWS on the day of the assessment.
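To make that expectation concrete, here is a minimal sketch of per-environment provisioning from a single infrastructure template, using CloudFormation via boto3. The stack naming, template file and environment list are assumptions for illustration; the team’s actual tooling may differ.

    import boto3

    # Illustrative sketch only: shows one way to stand up consistent
    # per-environment stacks (eg dev, test, production) from a single
    # CloudFormation template, so each stage of the delivery pipeline
    # runs on identically defined infrastructure.
    cf = boto3.client("cloudformation")

    def provision_environment(env_name: str, template_body: str) -> str:
        """Create a CloudFormation stack for one environment."""
        response = cf.create_stack(
            StackName=f"census-platform-{env_name}",
            TemplateBody=template_body,
            Capabilities=["CAPABILITY_IAM"],
        )
        return response["StackId"]

    # Example usage, assuming a hypothetical template file:
    # with open("census-platform.yaml") as f:
    #     template = f.read()
    # for env in ("dev", "test", "production"):
    #     provision_environment(env, template)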

The delivery model of the census itself presents some marked technical challenges compared with the usual agile delivery methodologies for digital government projects. In particular, there is a long period between the 2017 test and the 2021 Census, and limited opportunity to test the platform at scale between now and then. Today’s best technical practices will undoubtedly change considerably by 2021. As such, the panel have made some recommendations on technology choices for how the team should proceed to delivery for 2017 and for 2021.

Design

The team have made a usable system for completing the census form. The service has an exemption from conforming to the GOV.UK look and feel.

It is good that hard-to-reach user groups are being considered and extra functionality and processes have been put in place to reach these users.

Several questions rely on a lot of help text and guidance that the user must comprehend to answer correctly. This is partly due to the reliance on existing EQ functionality and how it can ask questions.

The team must find ways to quickly change and test how questions are asked. The panel recommend the team take a far more radical approach to complicated questions (eg multipart questions, answer section titles, free text entry), minimise the need for help text and iterate the flow for each question. Initially, this could be done separately from the need to preserve the statistical validity of questions. Once the team understand the best way to ask a question so users comprehend it, work to make it statistically comparable could take place.
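As a hedged illustration of how such iteration might be organised (this is not EQ’s actual schema or the team’s method), the sketch below defines alternative presentations of a complicated question and assigns each respondent to one deterministically, so that flow and help text can be compared in usability sessions separately from statistically validated wording.

    import hashlib

    # Hypothetical structure for illustration: two competing versions
    # of a complicated question, one mirroring the paper form and one
    # splitting the question across screens with no help text.
    QUESTION_VARIANTS = {
        "baseline": {
            "title": "Multipart question as asked on the paper form",
            "help_text": "Long guidance the user must read to answer correctly",
            "screens": 1,
        },
        "simplified": {
            "title": "One short question per screen",
            "help_text": None,
            "screens": 3,
        },
    }

    def variant_for(session_id: str) -> dict:
        """Deterministically assign a variant so each user sees one flow."""
        digest = int(hashlib.sha256(session_id.encode()).hexdigest(), 16)
        key = "baseline" if digest % 2 == 0 else "simplified"
        return QUESTION_VARIANTS[key]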

The questions are based on a (long) paper form, which gives the user the opportunity to see the size of the questionnaire and the kind of information it will ask for. Online forms don’t offer this overview, so more design and research needs to be carried out to explain the process and show the user where they are in it.

The workforce management tool/resource manager has not yet been designed, researched or iterated. It will be replaced by an outsourced solution for future tests. This interface, and its usability for the workforce, must be designed, researched and iterated, whether it is developed internally or by a third party.

Analytics

The team has a comprehensive approach to monitoring service success in order to improve their processes, through the use of analytics and investigation of other metrics such as use of the telephone helplines. The panel and the Performance Platform team agree there is little value in reporting data on the mandatory KPIs for this test, as it will not be comparable with future tests.
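As a hypothetical illustration of the kind of metric that supports this monitoring (the report does not describe ONS’s analytics schema), the sketch below attributes abandonments to the last question a respondent reached, which connects directly to the drop-out issue noted under ‘User needs’.

    from collections import Counter

    # Hypothetical event format: assume each analytics event for an
    # abandoned session records the last question the respondent
    # reached before leaving the questionnaire.
    def abandonment_by_question(events: list[dict]) -> dict[str, float]:
        """Return the share of abandonments attributable to each question."""
        last_question = Counter(
            e["question_id"] for e in events if e.get("outcome") == "abandoned"
        )
        total = sum(last_question.values()) or 1
        return {q: n / total for q, n in last_question.most_common()}

    # Questions with a high share are candidates for further research
    # on why users leave rather than skip.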

Recommendations

To pass the next assessment, the service team must:

  • Have a full team, including a designer, a user researcher and a content designer, working within the census team.
  • Have the ability to prototype new UI for questions, and get changes incorporated into EQ within a reasonable amount of time.
  • Demonstrate that the use of and relationship with EQ is fit for purpose and enabling the Census team to deliver the best possible results.
  • Have a clearer plan for learning how the census works for people with access needs: for example, whether people with low vision can use the invitation letter, and whether people who are deaf can access the helpline.
  • Avoid treating technical choices, and platform components for the 2017 test, as the completed solutions or final choices for 2021. The expectation should be that many of these will evolve and change over the next 5 years.
  • Ensure ongoing development and maintenance of the necessary components for 2021 over the next 5 years, in response to ongoing user research, changing user needs, and evolving technical best practice.
  • Ensure components expected to be used in 2021 are continuously exercised in other production settings whenever possible, for example identifying opportunities to reuse them for other surveys and data collection exercises conducted by ONS.
  • Find the most appropriate way to work with other teams, including the ability to build and contribute features to codebases such as EQ. Avoid forking or taking a siloed approach to development of the census components that may lead to them becoming outdated or redundant after 2021.

The service team should also:

  • Do more to understand ‘householders’, including whether this is the right way to describe the lead person who fills in the household parts of the census
  • Do more to balance the organisational goal of encouraging users to complete questions, with the problem of users abandoning the census at questions they don’t want to answer
  • Try new ways of asking questions online, separate from statistically significant wording changes
  • Continue to engage with GDS on the appropriateness of ONS’s current move of certain legacy servers for data storage and collection from a legacy on-premises data centre into Crown Hosting, particularly because the current Crown Hosting agreement with the provider will have ended by 2021

Digital Service Standard points

Point | Description | Result
1 | Understanding user needs | Met
2 | Improving the service based on user research and usability testing | Met
3 | Having a sustainable, multidisciplinary team in place | Met
4 | Building using agile, iterative and user-centred methods | Met
5 | Iterating and improving the service on a frequent basis | Met
6 | Evaluating tools, systems, and ways of procuring them | Met
7 | Managing data, security level, legal responsibilities, privacy issues and risks | Met
8 | Making code available as open source | Met
9 | Using open standards and common government platforms | Met
10 | Testing the end-to-end service, and browser and device testing | Met
11 | Planning for the service being taken temporarily offline | Met
12 | Creating a simple and intuitive service | Met
13 | Ensuring consistency with the design and style of GOV.UK | Exempt: the team are using GOV.UK patterns and elements and following the ONS styles
14 | Encouraging digital take-up | Met
15 | Using analytics tools to collect and act on performance data | Met
16 | Defining KPIs and establishing performance benchmarks | Met
17 | Reporting performance data on the Performance Platform | Met
18 | Testing the service with the minister responsible for it | Met

Updates to this page

Published 12 October 2017