Accessing NHS 111 Service beta assessment
111 Online is a patient-facing digital service which helps users access the right care for their needs.
From: Central Digital and Data Office
Assessment date: 18 July 2017
Stage: Beta
Result: Not met
Service provider: Government Digital Service
To meet the Standard the service should:
- recruit critical roles that will help the digital service succeed, namely front-end developers, delivery managers and content designers. Ensure that the clinician remains part of the digital team; the panel commended this
- ensure that all of the points in the accessibility report are addressed, and conduct usability testing with users with access needs to ensure the service works for them. The panel noted several accessibility issues while using the service - including poor contrast, unclear or missing validation, and missing labels on forms. The service will need to address all accessibility issues before commencing public beta
- review the use of external JS resources. Ensure that there is a process for identifying vulnerabilities in software dependencies
About the service
Description
111 Online is a patient-facing digital service which helps users access the right care for their needs. It allows users with medical concerns to complete a health assessment online by answering a series of questions about their condition. They are then triaged based on their symptoms using a clinically-approved and safe set of algorithms, and then connected to the local services most appropriate for their effective treatment.
Service users
The users of this service are:
- patients seeking healthcare for themselves or others
- healthcare professionals (clinicians) in urgent care services
Although not direct users, critical stakeholders also include commissioners and providers of 111 services.
Detail
Overall the panel were impressed with the digital service and how the team had developed it in conjunction with users. The concept of a series of online questions leading to an outcome that locks in appointments is positive and one which the assessment team feels should continue. The panel were also impressed that a clinician was embedded in the team, and with how well the team had built digital capability within the organisation.
The panel felt that the service is not quite ready to go into public beta.
Key roles on the team still need to be recruited - a front-end developer being one - the accessibility issues drawn out in the Digital Accessibility Centre (DAC) report need to be addressed, and bugs in navigation need to be fixed. The panel also saw some technical issues that need to be addressed.
The assessment team recommend a reassessment. The panel feel that the issues raised in this report could be resolved quickly, and that a reassessment of those elements in eight weeks is achievable. The panel urge the team to take on board the recommendations in this report. The team may wish to consider expanding the private beta to incorporate a second region before the next assessment.
User needs
NHS 111 is a safety-critical service, in that any errors or misunderstandings on the user’s part can have serious consequences. This means that the user research element is key to reducing risk by ensuring that the target users are able to use the service to meet their needs, and that the outcome is correct.
Point 1: Understand user needs
The team were able to describe the three main user groups and their associated user needs. Largely these had been derived from research conducted in discovery and alpha phases, and the team continue to refine the user needs and add new ones as their knowledge of the users increases.
The team were readily able to refer to the user needs, but the only reference to different user types was a segmentation of health-seeker user groups derived from a survey. Clearly the team are aware of different user behaviours, such as those illustrated by the perceived and actual acuity dimensions. It would be helpful if the team could refer to these different user behaviours (and others) on a regular basis to ensure the service meets all user needs. This can be achieved by using personas, or another format if preferred, to represent the different user types based on the research conducted in discovery, alpha and private beta.
It is clear that the team has so far conducted a significant amount of user testing, in the lab and remotely. The participants for this research have been recent users (in the last 3 months) of the NHS 111 telephone service. Participants have been asked to recall the reason for their call to NHS 111, and repeat this using the online service instead.
Although this has resulted in useful insight, and in improvements to the online service, the panel feels that more contextually relevant research needs to be conducted, with users who are actually in the real situation of seeking help (who are stressed, concerned, potentially in pain or discomfort etc.), as well as research with non-users of the NHS 111 telephone service. This would provide more confidence in the insight gained from the research. The panel acknowledges that it is very difficult to find participants in this context, who are willing to participate in research. However it is suggested that more contextual research could be conducted for example in A&E (with non-emergency cases) and GP waiting rooms.
In addition, the participants for research so far have been recruited via a specialist agency. We know that participants who sign up for research are likely to have a higher level of digital skills than might be found in the general population. Testing with other users as suggested above could also address the risk of relying on insight gained from testing only with users who are more confident online.
The private beta has been running in one location since March. So far, user feedback has been gained through the feedback survey at the end of the service, but there have only been twenty responses. Telephone interviews with users are planned. It is strongly recommended that more qualitative research is carried out with users of the private beta to get feedback on use of the real service.
This could be achieved by asking all users of the private beta to provide feedback when they exit the service (via an online questionnaire), or by asking them to provide their contact details via a screener at the start of the service (although this may not be allowed because of patient confidentiality issues). An alternative method could be to ask the clinicians who call back the user to run through a short feedback questionnaire and/or to request contact details for user research.
Point 2: Do ongoing user research
A detailed research plan has been provided, covering user research for the next six months with the different user groups (health seekers and clinicians). The team has two user researchers working full time on the service. There is evidence of the insight from user research being acted upon and incorporated in the design of the service.
The service has gone through an accessibility audit at DAC. In addition, the team will need to conduct usability testing with users with access needs prior to commencing public beta.
As the wording of some of the questions is based on the questions that are used by trained call handlers, there is a risk that these questions may not be easily understood by members of the public, particularly if they have relatively common cognitive disorders such as dyslexia and autism. It is recommended that the service is tested with members of the public who have access needs, in particular mild cognitive disorders as above, and mild visual conditions that do not rely on assistive technology (for example the need to use zoom in the browser).
The panel has some concerns around the usability of some elements of the service. These need to be addressed in future rounds of user research:
- the panel recommend testing with some users who have access needs in future user research sessions
The panel were encouraged that the team were working closely with a transgender organisation. The team should test the question asking users to identify their gender with this group.
Team
The team is multidisciplinary, with very few contractors - only one was mentioned - which is highly commendable. The team is split into two product teams, and across the twenty-two members there is a good spread of roles. Some gaps were identified, namely a front-end developer (who will be starting on Monday), and there was some duplication of roles. The four developers are also picking up the scrum master role; however, the assessment panel were reassured that delivery managers were on their way as part of a recruitment exercise. It was explained that dedicated delivery managers were being on-boarded on 1st August.
A strong recommendation from the assessment team is that more content designers would be beneficial, given that the digital service is essentially a series of questions leading to an outcome. This should be added to the recruitment plans.
The team are working in an agile fashion. They are not co-located, but meet regularly in a shared location to conduct retrospectives. The team is changing and adapting as it moves through private beta. There is recognition that some gaps exist, and recruitment is underway to fill roles including delivery managers and front-end developers. It is recommended that the team recruits to fill these gaps quickly. It is evident that a front-end developer would be highly beneficial and would help the team in the area of design.
While it is recognised that the team is recruiting a delivery manager, there is a clear issue in how the team has been operating to date, with developers acting as delivery manager based on Scrum qualifications alone; the lack of a delivery manager will inhibit the team's ability to provide continuous, unhindered delivery.
The next phase will be more operational, as the team looks to support the service and roll it out to more regions. This will require a reassessment of how the team is structured. Currently there is one DevOps specialist, but more may be needed, working alongside the developers and testers to support and continuously improve the service. The developers need to work closely with DevOps, the product owner and user research to ensure that changes can be made to the service quickly; only then can the service react quickly to changing user needs. It was also recognised that a full-time performance analyst in the team would be hugely beneficial; currently the performance analyst is stretched across multiple teams.
The assessors recognised that the addition of a clinician to the delivery team is a real asset; having a subject matter expert on hand has really helped the delivery team.
Governance seems quite complex; however, the team seemed empowered to make decisions with regards to changes to the service. Changes happen quickly, although changes to content are subject to approval by experts to ensure that the correct language is being used. There needs to be a focus on ensuring that processes support quick content changes, and that changes to content don't get lost in bureaucratic sign-off. This is always a danger when multiple boards are involved.
The team does not currently have a dedicated front-end developer - this is an essential skillset for the service, and the role should be filled immediately.
Technology
The team have generally sought to align their technology choices with the wider NHS Digital stack, allowing them to reuse existing tools and expertise. However, not all technology choices had a clear rationale - specifically, it is unclear why Neo4j Community Edition is being used as a read-only graph database, even though the team have found Redis more performant. A graph visualisation could be interesting as an offline analytics tool, but it is not clear what benefit the current usage provides.
While it is understood the team have commissioned a pen test and there may be outstanding points remaining on the backlog, there is one item the panel wish to raise specifically. The service makes use of JavaScript served directly from third-party services, which, if compromised in some way, would be a major security problem. There is a history of such shared asset services being compromised, so the team should review their use, and likely either serve a copy along with the other service assets or implement SRI (Subresource Integrity) to minimise the impact of a compromise.
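As an illustration, the sketch below shows one way to compute an SRI value for a third-party script - a minimal example assuming Node.js, with a hypothetical file path.

```typescript
// Minimal sketch: compute a Subresource Integrity (SRI) value for a script.
// Assumes Node.js; the file path is a hypothetical example.
import { createHash } from "crypto";
import { readFileSync } from "fs";

function sriHash(path: string): string {
  const digest = createHash("sha384")
    .update(readFileSync(path))
    .digest("base64");
  return `sha384-${digest}`;
}

// The resulting value is placed in the script tag's integrity attribute:
// <script src="https://cdn.example.com/lib.js"
//         integrity="sha384-..." crossorigin="anonymous"></script>
console.log(sriHash("./vendor/lib.js"));
```

If the third-party script ever changes, the hash no longer matches and the browser refuses to execute it, which is exactly the protection wanted here.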
Google Tag Manager is being used to manage analytics tags. As with external CDNs, the team should ensure that they have considered the security implications of trusting an external party. They should also ensure that their GTM workflow follows best practice, such as separating out the edit function from the publish function to reduce risk, increase accuracy and increase integrity.
The assessment team was pleased to see that as much of the code as possible had been open-sourced, and recognised the unfortunate licensing issues around the pathways data. It would be good if the team could spend some time providing public documentation (you may already hold this privately) describing the modules, and telling a story of potential reuse.
The team currently commit code to a GitLab repository and manually push it periodically to GitHub. While the panel appreciate there may be politics and concerns beyond the scope of this project that prevent committing straight to a public code repository, the panel would recommend that, instead of this manual periodic publish, the team synchronise the master branch between GitLab and GitHub via CI, so that GitHub only ever holds peer-reviewed code. If another developer or team were then to contribute to the GitHub repository, it would be possible to walk that change back through to the GitLab repository, from where it would naturally float back to GitHub.
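A minimal sketch of such a CI step, written as a Node script since the team's CI tooling was not discussed, might look like the following. The remote name `github` is an assumption, and credentials are expected to come from the CI environment.

```typescript
// Sketch: CI step mirroring the peer-reviewed master branch from GitLab
// to GitHub. The "github" remote name is an assumption; credentials are
// assumed to be supplied by the CI environment.
import { execSync } from "child_process";

const run = (cmd: string) => execSync(cmd, { stdio: "inherit" });

// Fetch the reviewed master branch from the internal GitLab remote...
run("git fetch origin master");
// ...and push exactly that commit to the public GitHub mirror.
run("git push github origin/master:master");
```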
While the team were able to talk about how their infrastructure specialist had produced scripted deployments, and they were able to create new environments easily, during the assessment the team referred to a lack of parity between the staging and production environments, particularly when explaining that staging was slower than production. The team should be testing in an environment that is as similar to live as possible.
While it is recognised that the team has produced a number of automated tests, a significant amount of manual testing is still being carried out by the test analyst. This will inhibit the team's ability to iterate effectively and deploy confidently, which was unfortunately demonstrated by the issues the assessment team found when trying both the staging and production versions. We would recommend that the automated tests (including Selenium) are run continuously on every CI push, rather than nightly or prior to release; this will ensure that the master branch is 'green' at all times.
The panel was impressed to see a robust on-call support arrangement in place, covering developers, infrastructure, delivery manager and clinician.
The team should security-scan both their own code and their dependencies, and put in place notifications to the support channel when vulnerabilities are discovered.
The team were unable to describe what would happen to an in-flight transaction during a deployment that removes or alters the steps describing where the user currently is in the workflow. The team should test for this and provide sensible messaging to the user, or a mapping of old steps onto new ones.
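One hypothetical way to handle this is to version the pathway definition and check it when a session resumes; the names in the sketch below are illustrative, not the team's actual API.

```typescript
// Hypothetical sketch: detect a pathway-version mismatch when a user
// resumes an in-flight assessment after a deployment. All names here are
// illustrative.
interface Session {
  pathwayVersion: string; // version of the question set the user started on
  currentStepId: string;
}

const DEPLOYED_PATHWAY_VERSION = "2017-07-18.1"; // stamped at build time

function resumeStep(session: Session): { stepId: string } | { message: string } {
  if (session.pathwayVersion !== DEPLOYED_PATHWAY_VERSION) {
    // Steps may have been removed or altered; rather than erroring,
    // explain and restart (or map the old step onto the new pathway).
    return { message: "This assessment has been updated - please start your answers again." };
  }
  return { stepId: session.currentStepId };
}
```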
Browser behaviour was peculiar when refreshing a question page, which would often return a 501 with an "access denied" message and break the process.
The team have thought about scalability to a degree; however, this needs to be put into practice so that Elasticsearch, Mule ESB, Neo4j and Redis are all clustered to provide a scalable and highly available solution. Clustering should not be seen just as a scaling concern but also an availability one.
Design
NHS 111 works with a wide range of users, and needs to support those with minor issues through to life threatening issues. The team have a careful balancing act between ensuring they provide medically accurate advice and diagnosis together with iterating the content in response to user research. The team have already made significant improvements to the content, but much more is still needed.
The panel were very encouraged to hear about clinicians being on the team, and the speed that content could be changed in response to user research. As the service is tied to the existing telephone service logic, more complex changes did not seem easy to do (such as splitting questions), and other improvements such as the potential to include images had not been investigated.
The team is building an NHS branded service, and have worked to align it with the NHS Digital brand. The team should continue to work with NHS Digital, and where possible contribute patterns back to the wider cross government design community.
The team may want to consider GOV.UK Notify to send notifications or reminders to users in relation to the service.
Accessibility
The team had recently had an accessibility audit, but had not resolved all the issues identified in it. In addition, the panel noted several accessibility issues whilst using the service - including poor contrast, unclear or missing validation, and missing labels on forms. The service will need to address all accessibility issues before commencing public beta.
Content
The service has a large amount of content - up to 1700 questions. Whilst the team had the ability to change these, the panel saw many instances of poor or confusing content. Examples include content using both 'you' and 'I' to refer to the end user, and questions flipping between 'you' and 'your child'. The team should have a strategy for ensuring that all content in the service is of a high quality, and that common or significant journeys are well supported.
We recommend that the team works with a content specialist to review all questions and devises a way of testing any that cause concern. These could be tested quickly via guerrilla testing.
While the service works without JavaScript, much of the microcopy in the service assumed it was available. This meant pages weren't always clear when JavaScript was unavailable - in particular the design pattern using links to progressively reveal more content. With JavaScript unavailable the content is already visible, and so the links do not function despite looking like they should.
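The usual fix is to enhance progressively: have the script itself create the toggle and collapse the content, so that without JavaScript the content simply remains visible and no dead link appears. A minimal sketch follows; the class names are illustrative, not the service's actual markup.

```typescript
// Sketch of progressive enhancement for "reveal more" disclosures: the
// toggle button is created by this script, so when JavaScript is
// unavailable the content stays visible and no non-functional link is
// rendered. Class names are illustrative.
document.querySelectorAll<HTMLElement>(".js-disclosure").forEach((section) => {
  const content = section.querySelector<HTMLElement>(".js-disclosure-content");
  if (!content) return;

  const toggle = document.createElement("button");
  toggle.textContent = "What does this mean?";
  toggle.setAttribute("aria-expanded", "false");

  content.hidden = true; // collapse only once we know JavaScript is running
  toggle.addEventListener("click", () => {
    const opening = content.hidden;
    content.hidden = !opening;
    toggle.setAttribute("aria-expanded", String(opening));
  });
  section.insertBefore(toggle, content);
});
```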
Bugs while navigating
The back button and page refresh did not always work without showing users a resubmission error or service error. The team will need to improve handling of navigation so that users can use standard browser features without being given errors.
Design patterns
The majority of question pages made use of disabled forward buttons alongside green buttons to return to the previous page. The panel strongly recommends reconsidering the use of disabled buttons and avoiding having two primary calls to action on the page. The panel recommends testing a question page format as recommended in the service manual, with a single call to action, a back link, and clear validation.
The start of the service asks users if they have any of a number of serious conditions, but confirms this with a passive forward button. The panel suggests considering having users make an active choice.
Several pages could do with further design and frontend iteration - particularly the appointment booking and maps pages. Issues included missing labels, use of placeholder text in inputs, and poor validation. The panel recommends avoiding inline validation that causes the page to jump around whilst users type. The panel would also recommend moving to a 'page per thing' pattern for the appointment booking flow.
GOV.UK Elements
Where possible, the panel recommend reusing styles and patterns from GOV.UK Elements. Potential styles and patterns include:
- border colours for inputs to increase contrast
- larger checkboxes and radios to increase hit area
- back links
- button styles
- validation styles
Improving take-up, analytics and reporting performance
The team is supported by two part-time analysts, who also work on other projects.
The team demonstrated that they recognised the importance of data and analysis to measure the performance of the service and to prioritise improvements to the user experience. All stories have ‘how do we measure’ as part of the acceptance criteria.
The team gave examples of where insight from analysis and user research had led them to prioritise the dental pathway because of its importance, as well as to re-author 999 responses and split out questions in the triage to make them clearer.
The team also provided examples of how the keyword database was benchmarked and iterated from data.
Areas to explore deeper with analysis include:
- search. Continue to analyse keywords that are used, especially those that yield no results, to enhance synonyms
- abandonment of search and when users switch to categories
- segment the users who successfully get to a disposition from different starting points - for example search vs browse and different symptoms
- use of radio buttons - it would be interesting to explore whether there are any patterns in how users interact with the radio buttons, for example a tendency to select the top option (see the sketch after this list)
- use of ‘what does this mean?’
The team discussed wider KPIs, such as speed of arrival at a disposition, and are starting to think about measuring the unique selling point of the service - the ability to link to an online 'offer' where appropriate, for example booking an appointment.
Encouraging digital take-up
The service is only available in the Leeds area, and is mainly accessible by referral from the telephony channel. Has the number of telephone calls, or the number of abandoned calls, fallen since the introduction of 111 Online?
The team has challenging targets to roll out across England. As part of the roll-out to other areas, it is important to capture what plans commissioning bodies have in place to promote the online channel.
The team has started a conversation with the Performance Platform regarding a dashboard but, to meet the standard, should resolve the status of the service - is it transactional or 'look-up'? - and have a dashboard in place displaying the relevant KPIs.
Recommendations
To pass the reassessment, the service team must:
- recruit dedicated delivery managers, front-end developers and more content design skills. This is a content-driven site (1700 questions), and more content designers or a content specialist will help move the site through beta and help address any confusion from users
- conduct user research/user testing with participants in a live context of needing help with a health problem
- include some users who have access needs in future research sessions
- continue to test search with as many users as possible, including those with low digital skills
- have a content designer review all of the questions; those of concern should be tested with users
- conduct qualitative research with private beta users to get real insight about the use and value of the NHS 111 online service, before rolling out to further areas
- review use of external JS resources, including use of the integrity attribute in <script> tags where appropriate
- put in place a process whereby vulnerabilities in dependencies can be identified, and the team can become aware of them
- ensure the service is fully accessible by addressing issues raised by DAC or otherwise observed in the service
- conduct usability testing with users with access needs to ensure the service works for them
- ensure that standard user behaviour such as refreshing the page, using the back button, or opening multiple tabs does not cause errors
- start measuring the impact of the service in driving channel shift from telephony to online. Even in the trial, it should be possible to identify trends in abandoned calls vs online use
- conclude the conversation started with the Performance Platform regarding a dashboard: resolve whether the service is transactional or 'look-up' and have a dashboard in place displaying the agreed KPIs
The service team should also:
- cluster the backend services to provide a highly available service
- create representations of user types based on behaviours (e.g. personas) and regularly refer to these as a team in ongoing iterations of the service
- improve the automated test cases to run as part of CI, reducing the dependency on manual testing
- reconsider the use of Neo4j Community Edition, or at least appropriately warm the Redis cache so that the service is not dependent on Neo4j and can be considered highly available
- break up the microservices so they are truly loosely coupled, splitting the 'microservice' modules into separate git repositories and removing the need for all services to be deployed at once
- produce public documentation on what the modules do and how they could be reused by other teams
- review use of multiple calls to action on the page, and reconsider usage of disabled buttons
- ensure the analysts are fully involved in the agile team, helping prioritise stories and identifying success criteria
- ensure that no personally identifiable information is being collected inadvertently: anonymise IP addresses in Google Analytics (see the sketch after this list), ensure that the survey is not collecting IP addresses, and remove any personal information from the feedback form as soon as possible
- as the service expands the team need to have robust measures to demonstrate its impact on policy intent - to reduce pressure on face to face provision where appropriate
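On the IP anonymisation point above, analytics.js supports this as a one-line setting; a minimal sketch follows, with a placeholder tracking ID.

```typescript
// Sketch: anonymise visitor IP addresses in Google Analytics (analytics.js).
// The tracking ID is a placeholder.
declare function ga(...args: unknown[]): void;

ga("create", "UA-XXXXX-Y", "auto");
// Truncates the last octet of the IP address before it is stored.
ga("set", "anonymizeIp", true);
ga("send", "pageview");
```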
Next Steps
Reassessment
In order for the service to continue to the next phase of development it must meet the Standard. The service must be reassessed against the criteria not met at this assessment.
The panel recommend a reassessment in eight weeks' time, by which time they feel the team should have addressed the issues raised in this report. The panel will reassess those areas where the service did not meet the Standard.
Get advice and guidance
The team can get advice and guidance on the next stage of development by:
- searching the Government Service Design Manual
- asking the cross-government Slack community
- contacting the Service Assessment team