Use land and property data alpha assessment report

The report for Land Registry's Use land and property data alpha assessment on 29/08/2018.

From: Central Digital and Data Office
Assessment date: 29th August 2018
Stage: Alpha
Result: Not met
Service provider: HMLR

To meet the Standard the service should:

  • identify and be able to articulate true user needs, and conduct more research with users who do not use the current service and those with lower data literacy (see highlighted parts of ‘user needs’ section)
  • prototype and test an API based alternative to the bulk data download service (see highlighted parts of the ‘Technology’ and ‘Design’ sections)
  • give further consideration to, and develop a better understanding of, how ID verification will be resolved (see highlighted part of ‘Technology’ section).

About the service

Description

The service enables users to access and download HMLR datasets as open data (with no charge) and view the licences associated with these datasets. Currently, seven datasets have been released, with the service intending to make another 20 available. Where datasets require verification of a user, the service will enable the user to register to access these.

The overall vision of the service - to make HMLR datasets open data - will not only benefit HMLR and its users, but also departments across government that face similar challenges in making data openly available.

Service users

The users of this service range from citizens with an interest in unique data to large companies and users who use the data commercially. The users vary from experts to those with little or no knowledge of HMLR, resulting in a range of needs depending on their understanding of the data. Users can be UK citizens, businesses or from overseas.

Detail

User needs

Whilst the team have conducted a wealth of research during Alpha, this has focused on users of the current service only, including organisations and consumers of large datasets. The team have created user archetypes based on personas identified in Discovery, and have used these archetypes as a way of showcasing the differences in data literacy and understanding of HMLR services amongst their users. Whilst this is a great way to explain differing knowledge levels amongst users, the team’s research has focused on ‘expert level’ users (such as the craftsperson archetype), with little evidence of engagement with true beginners or those who don’t know which data is available to them. To pass the Alpha phase, more research should be conducted with users with lower data literacy and those who do not use the existing service. Furthermore, the team should engage with users with accessibility and assisted digital needs in order to pass the next assessment.

It was evident that the team have invested time and effort into creating user needs. However, these needs were presented as user stories and subject to change, as opposed to ‘true’ user needs, which should remain the same regardless of changes to policy, available services and technology. The team weren’t able to evidence their user needs with anecdotes or quotes from real users, and therefore it was difficult for the panel to pinpoint how research had fed into the user needs and the overall scope of the service. Moving forward, the team should focus on understanding the needs of all of their users, including understanding different or opposing needs between different user groups. These needs should be linked to evidence from the research that has been conducted.

The panel were pleased to hear that the whole service team had been involved in shaping and analysing the user research, and strongly recommend that the team continue their collaborative ways of working as they are a great example of user research as a team sport. Although the team all have input into the research, it was clear that a dedicated user researcher is in place to conduct research and explain next steps to the team. The service team were able to explain outstanding design issues that they’d like to address moving forwards, and the panel suggest that the user researcher and designer should continue to build and test hypotheses to continuously iterate their design.

Moving forward, it is crucial that the team conduct research with a wider user base to create evidenced user needs. Whilst it was clear that this service will provide value to the users who have a need for the service and the wider HMLR teams, the team must build a concrete base of user needs to assist them in developing and iterating the right service going forwards.

Team

The panel were impressed by the great working dynamic the team have. It is clear that they work well together, support one another and have an atmosphere of trust, and that the team are empowered to make decisions. The team can be open and honest with each other and work collaboratively to solve problems, which is fantastic.

The team is made up of a combination of HMLR staff and their Delivery Partners, with knowledge being transferred to the permanent members of the HMLR team. This upskilling of HMLR members, their enthusiasm for the agile way of working and the way the team are sharing their experience with other teams in HMLR – particularly the closely aligned Datasets team – was great to hear about.

The panel were concerned that a Content Designer had only joined the team very recently, especially as the service is based on a ‘content first’ approach: content will play a key role in ensuring that users are directed to the right part of the service for them. Recognising the need for a content designer and bringing one on board is a positive step. The panel would recommend that the content designer reviews the current content – removing placeholder copy in the prototype – and plays a key role in the future development of the service.

The team are generally co-located and use a variety of tools to remain in contact when not in the office. The team are working in Scrum, with two-week sprints and agile ceremonies. The panel were encouraged by the way the team worked collaboratively to shape the user research plan and prioritise the backlog. There is a good working relationship between the research, product and technology aspects of the team, with views from each being part of the prioritisation process.

Whilst it was great to hear how the team were bringing the Datasets team with them by sharing learning and feedback, inviting them to sessions and attending their meetings, the panel was concerned about the dependencies the service has on the work of the Datasets team. The team manage the datasets that will be available in the service, while the Datasets team own the datasets – and the naming of them. The panel was worried that the team have no control over the datasets and their naming conventions, and that this could undermine the team’s goal of using content to make things easier for their users. The panel recommend that the team continue to work closely with the Datasets team, particularly providing them with user feedback and influencing the work of the other team where they can.

The team is doing great work to engage with stakeholders and bring them along on the journey. The team have the support of their senior stakeholders (including the Chief Executive and Chief Land Registrar) and the panel believe this is vital to the continued development of a worthwhile service. The panel was impressed by the work the team have done to influence and educate stakeholders; seeking opportunities and using regular show and tells to champion the value of the service. The team, encouragingly, are working with external parties including Ordnance Survey and Royal Mail.

Technology

The existing tactical solution covers 7 datasets published by 4 separate third-party providers. The user journey is disjointed and inconsistent, and does not provide an end-to-end service to users. The solution is also not scalable. The Open Government Mandate expects HMLR to publish 20 datasets by April 2019 and release all publishable data by April 2021.

The datasets in scope for the Alpha are under free or chargeable licences. There is also a fraud risk around improper use of the data, resulting in a need for an Identity and Access Management (IAM) solution to be in place before publishing the data. The users of the service include UK citizens, businesses and overseas users. A separate team within HMLR is investigating IAM tools for use across the wider HMLR. The panel sees a risk that this solution may not be ready in time for Private Beta. The shared container platform may also not be ready in time, which is another risk to the service. The panel recommends that the technical team work closely with the other teams involved and consider an alternative IAM solution as a back-up.

The technology choice is sound and in line with HMLR and the GDS Code of Practice. The panel is pleased to hear that the team is planning to open source the customised CKAN code, the technology that drives Data.gov.uk. The panel recommends that the team open source as much of the source code as possible. This project is a good example of solving an interesting problem by publishing open data, the results of which can enable data re-use across the wider government departments.

The team is using the volumetric model from the existing service, is able to demonstrate a plan to handle unanticipated traffic, and can scale to demand. The service currently offers a download-and-save option for bulk data. It is evident that there is a use case for querying data via an API, and this should be prototyped and tested with users. If a user need is identified, this feature should be prioritised and included in future iterations of the service.
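The contrast the panel describes – bulk download versus server-side querying – can be sketched in a few lines. The record fields, function names and sample values below are purely illustrative assumptions, not HMLR’s actual schema or API:

```python
# Illustrative sketch only: the fields and values are invented, not HMLR data.

SAMPLE_RECORDS = [
    {"title_number": "ABC123", "district": "CAMDEN", "tenure": "Freehold"},
    {"title_number": "DEF456", "district": "CAMDEN", "tenure": "Leasehold"},
    {"title_number": "GHI789", "district": "LEEDS", "tenure": "Freehold"},
]

def bulk_download(records):
    """Current model: the user takes the whole dataset and filters it locally."""
    return list(records)

def query_api(records, **filters):
    """Prototype model: the service filters server-side and returns a subset."""
    return [
        r for r in records
        if all(r.get(field) == value for field, value in filters.items())
    ]

if __name__ == "__main__":
    print(len(bulk_download(SAMPLE_RECORDS)))                        # whole dataset
    print(len(query_api(SAMPLE_RECORDS, district="CAMDEN")))         # filtered subset
```

Prototyping even a thin filtering layer like this would let the team test with users whether a queryable API removes the need to download and store bulk data.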

The team is thinking about monitoring controls and shows a good understanding of the risks to the service. The team is starting to think about service working hours and alternatives if an outage exceeds the critical threshold.

The service is using GOV.UK Notify for service notifications. The panel recommends that GOV.UK Verify is explored as part of the investigation into an IAM solution, to cater for citizen identities.

Design

The panel was delighted to see a well resourced alpha team who demonstrated they can drill down into detailed interactions but also have the capacity to think about broader issues, such as how users feed back on both the quality of the data sources and the quality of the service. This is uncommon (although favourable), and it is clearly working well for the service.

The panel advise that the make-up of the team includes service, content and interaction design as it progresses into beta. The panel see that the late addition of a content designer has had a significant impact: the team have been testing with a large amount of placeholder copy, compromising the effectiveness of their user research to date. Now a content designer is in place, the team might consider incorporating eye tracking software in their usability testing to understand users’ comprehension of the content, which should help inform the prioritisation of content improvements.

As already outlined in the user needs section of the report, the panel feel the service team has struggled to understand their users’ needs since discovery. It’s likely this is a direct result of the stop-start nature the project has been subject to, and the change in teams has likely resulted in a lack of knowledge transfer between the discovery and alpha teams. This is significant because the user needs the team discuss highlight a need for a data download service and also a way to query the data without bulk download and storage. The latter has been descoped by the team; however, as they are still in Alpha, they should have already prototyped and tested an API based version of this service in order to determine which option best meets their users’ needs. The team must explore all options, articulate clearly why they have chosen to progress with a specific one, and demonstrate a good understanding of the consequences of this decision in order to progress to beta.

The team has a good approach to hypothesis driven design. The team describe how they might take forward a hypothesis to use an account dashboard as a means to collect feedback on the quality and usefulness of previously downloaded data sources, so as to not interrupt user journeys mid flow. However, the panel feel that evidence of how the team is using the hypothesis driven approach during alpha was poorly articulated, and this should be made clearer in future assessments.

From an interaction design perspective, the panel was pleased to hear the team is engaging with the GOV.UK design system to source patterns as a starting point to iterate from. It was very interesting to learn about how the team balance the hierarchy of the big green download button, creating a speedbump, to ensure users had fully understood the licensing terms prior to downloading a dataset. The panel encourage that these learnings from research be fed back into the design system so that the rest of government can benefit.

The panel believe it is important to prototype the file download experience in a higher fidelity way, as the current testing environment doesn’t reflect the real life experience (known as ecological validity). Without triggering this interaction, the service team cannot be sure the content and design of the screens are working or, more broadly, that bulk download as a solution is what the user needs. A simple fix would be to set a file download attribute on a comparably sized file and ensure the discussion guide prompts on what the user expects to happen next.
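The fix has two parts: the prototype’s link can carry HTML’s `download` attribute so the browser treats it as a file download, and the file it points at should be of comparable size to a real dataset. As a sketch of the second part (the file name and the 5 MB figure are assumptions; match them to the dataset being tested):

```python
# Sketch: generate a placeholder file of realistic size so usability sessions
# exercise a real browser download rather than a tiny dummy link.
import os
import tempfile

def make_dummy_download(path, size_bytes):
    """Write size_bytes of filler so the download takes a realistic time."""
    chunk = b"0" * (1024 * 1024)  # 1 MiB of filler per write
    written = 0
    with open(path, "wb") as f:
        while written < size_bytes:
            step = min(len(chunk), size_bytes - written)
            f.write(chunk[:step])
            written += step
    return os.path.getsize(path)

if __name__ == "__main__":
    # Illustrative: a 5 MB stand-in for a dataset export
    target = os.path.join(tempfile.gettempdir(), "sample-dataset.csv")
    print(make_dummy_download(target, 5 * 1024 * 1024))
```

Linking to the generated file with `<a href="sample-dataset.csv" download>` would then trigger the real download interaction during testing.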

The panel was pleased to see how the team is tracking KPIs and ensuring the design work is being prioritised against the metrics and goals of the service whilst also keeping an eye on users’ needs. Additionally, the panel commend the team for using research to push back on assumed feature needs (i.e. choosing to exclude data shopping baskets) – keep this kind of push back up.

Analytics

The team is identifying service goals, the indicators, and the metrics for these goals, which was great to hear. During Private Beta, the team plan to use analytics, performance data and satisfaction of users to understand how the service is performing.

The team is able to provide examples of the metrics they would measure – about active and inactive accounts – to identify how the service was performing. Through the performance manager at HMLR, the team is engaging with the Performance Platform and GDS.

The panel was pleased that the team is thinking about performance and measures of success for the service. It was good to see that they had engaged with the performance platform and have a plan for testing with the minister.

Recommendations

To pass the reassessment, the service team must:

  • conduct research with users with low digital literacy and those with assisted digital and accessibility needs, and ensure that these users can be recruited using offline methods. Use this research to gain an understanding of the similarities and differences between user groups and identify conflicting needs
  • use outputs (such as quotes or anecdotes) from user research to create and evidence user needs for the service. Ensure that these are irrespective of current technology, available services or policy decisions and focus on the problem that the user is trying to solve as opposed to the solution the team have identified
  • appoint a Content Designer to work closely with the team, deliver content that will complement the team’s content first approach and remove placeholder copy
  • decide, by testing with users, on a solution for accessing data, including how the data downloads, what users experience whilst this happens and the journey for the download
  • give further consideration to, and develop a better understanding of, how ID verification will be resolved.

The service team should also:

  • continue to form hypotheses and test these through design and research, in particular, with the content within the service
  • continue to work closely with the Datasets Team, educating them and using feedback to influence their decisions on datasets, particularly the naming of the datasets
  • engage with others across government about making datasets openly available, continually highlighting and championing the work the team are doing with this service as it will be of use to a number of government organisations
  • where possible, contribute learnings back to the GOV.UK design system
  • further consider analytics and measures of performance; how the team will understand if the service is successful and baselines from the current tactical solution to compare against

Next Steps

Reassessment

In order for the service to continue to the next phase of development it must meet the Standard. The service must be re-assessed against the criteria not met at this assessment.

Please contact the Service Assessment team at least 4 weeks before the date you’d like to hold a reassessment.

Digital Service Standard points

Point Description Result
1 Understanding user needs Not met
2 Improving the service based on user research and usability testing Not met
3 Having a sustainable, multidisciplinary team in place Met
4 Building using agile, iterative and user-centred methods Not met
5 Iterating and improving the service on a frequent basis Not met
6 Evaluating tools, systems, and ways of procuring them Met
7 Managing data, security level, legal responsibilities, privacy issues and risks Met
8 Making code available as open source Met
9 Using open standards and common government platforms Met
10 Testing the end-to-end service, and browser and device testing Met
11 Planning for the service being taken temporarily offline Met
12 Creating a simple and intuitive service Met
13 Ensuring consistency with the design and style of GOV.UK Met
14 Encouraging digital take-up Met
15 Using analytics tools to collect and act on performance data Met
16 Defining KPIs and establishing performance benchmarks Met
17 Reporting performance data on the Performance Platform Met
18 Testing the service with the minister responsible for it Met

Updates to this page

Published 26 May 2020