Examining a Cause of Death alpha assessment

Service provider: Department of Health & Social Care

Service description

The service enables and records the independent scrutiny of non-coronial deaths in hospitals by a Medical Examiner. In other words, it gives a Medical Examiner the ability to scrutinise a death and agree (or disagree) with the attending doctor's proposed cause of death. The Medical Examiner is able to refer cases to the Coroner where it is appropriate to do so.

Service users

Medical Examiners (inc. the National Medical Examiner and regional lead Medical Examiners) and Medical Examiner Officers (inc. regional lead Medical Examiner Officers)

1. Understand user needs

Decision

The team met point 1 of the Standard.

What the team has done well

The panel was impressed that:

  • The team put a great deal of effort into understanding users and their needs. They took time to observe key users and spent time with them, visiting six or seven hospitals during the discovery stage.

What the team needs to explore

Before their next assessment, the team needs to:

  • Demonstrate that they have managed to replicate the real service as much as possible in testing. All testing so far has been done on the prototype, and prototypes in InVision or similar software can be quite limiting and behave differently from a real service.
  • Test on different devices. The team hasn't tested much on tablets and not at all on phones. We know that more and more civil servants use phones and tablets to do their job, and it's important to test on these devices. Even if some users don't have these devices now, their organisations may acquire them in the future.
  • Test the service with all users who will use the service in private beta. Without a clear understanding of what all parties need from the service the team cannot be confident their service meets the needs of their users.
  • Demonstrate that they have considered the accessibility needs of potential users. The team mentioned that there were no users with accessibility needs, and that if such users ever use the service, features could be built by NHS Digital to address their needs. This is something we hear in the majority of assessments, and it stems from a misunderstanding of accessibility needs. There are a few things to consider: 1) people can be guided or trained to do the tasks needed, in the same way a medical examiner learns the job (at a high level); 2) most people acquire disabilities rather than being born with them, so in a group of several hundred medical examiners it is highly likely that a significant percentage will develop assistive needs, e.g. visual impairments (Leonard Cheshire says "Eight out of 10 people with a disability weren't born with it."); 3) hidden disabilities are a big factor, e.g. attention deficit, dyslexia, autism, anxiety; 4) it isn't realistic to recruit someone for a job and only then build accessibility for them. People need to be able to use services straight away, and there is a danger of discrimination if the organisation can't recruit, say, someone who is blind or someone who relies on Dragon.

2. Do ongoing user research

Decision

The team met point 2 of the Standard.

What the team has done well

The panel was impressed that:

  • The team showed how they used user research findings to make iterations. They clearly demonstrated the findings they arrived at and the changes they made as a result.
  • The team engaged the Welsh Government in the process to adapt the service for Welsh users.
  • The team travelled across the country to meet users and explore their needs.

What the team needs to explore

Before their next assessment, the team needs to:

  • Involve as many team members as possible in user research, so that everyone in the team understands why specific changes are being made. It wasn't clear whether the whole team (all the people attending the assessment) went to observe, take notes and analyse user research sessions; it sounded like many research findings were shared through show and tells and other indirect means.

3. Have a multidisciplinary team

Decision

The team met point 3 of the Standard.

What the team has done well

The panel was impressed that:

  • There was a multidisciplinary team in place for alpha, with a good team makeup planned for beta.
  • There are regular engagements in place between the stakeholders, the service manager and the service team (supplier). The team should consider how to get greater engagement from NHS Improvement (NHSI). This could speed up technology decisions that are determined by the NHS, help the team understand which metrics are important, and clarify whether the team needs to be involved in rollout and future improvements after the service goes live. The team should also consider how they will transfer knowledge and hand over the service to NHSI.

What the team needs to explore

Before their next assessment, the team needs to:

  • Consider how to work with NHS Improvement as an integrated whole to reduce the burden of reporting, so decisions can be made with a view of the whole service and not just the digital parts.

4. Use agile methods

Decision

The team did not meet point 4 of the Standard.

What the team has done well

The panel was impressed that:

  • The team worked in sprints and had the usual agile ceremonies in place. They gave a distinct impression of designing the service first, before deciding on the choice of technology and building the service.

What the team needs to explore

Before their next assessment, the team needs to:

  • Demonstrate how they are using agile approaches in all areas of the service, not just design, and how that has affected decisions about the service. While there were 2 iterations of the InVision prototype that were tested with users, the team did not show how the feedback and research were taken on board by the whole team. In particular, the findings from the research did not affect any technology decisions the team made, the analytics the team should capture or report on in future, or policy. There was also no mention of content or language changes to the prototype.

5. Iterate and improve frequently

Decision

The team did not meet point 5 of the Standard.

What the team needs to explore

Before their next assessment, the team needs to:

  • Demonstrate frequent iterations of their service, how these iterations are meeting user needs and how they will improve the service across all areas. As with point 4, iteration of and feedback on the service did not feed into decisions in other areas of the service: technology, content and analytics. Iterations of the service were fairly shallow and focused mainly on user research and design, with only 2 iterations of the prototype. There was not enough evidence that the service has been iterated and meets user needs. In particular, the choice of InVision for prototyping, while allowing the team to proceed quickly with design, doesn't fully reflect how users will interact with the service. Although the team plans to continue iterating the design in the next phase, there are concerns over how many times they will be able to iterate, do research, make changes and repeat this process enough to know what is and isn't working before the service is made available in April 2019.

6. Evaluate tools and systems

Decision

The team met point 6 of the Standard.

What the team has done well

The panel was impressed that:

  • The proposed technology stack of Python, Django and ASP.NET Core seems fine; these are open source and freely available.
  • Cosmos DB and Azure were chosen to suit the NHS, which already seems invested in these technologies.

What the team needs to explore

Before their next assessment, the team needs to:

  • Demonstrate that the tooling can work together. Tools have been evaluated at a high level, but how they will work together in practice on this project has not yet been demonstrated. Although the emphasis of this point is on evaluation, it is concerning that by the alpha assessment no code has been written to demonstrate this. This is an important part of showing that the combination of tools is fit for purpose enough to support prototyping (see the sketch below for the kind of low-cost check that could provide this evidence).
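
One low-cost way to demonstrate this would be a small connectivity spike through the proposed stack. The sketch below is illustrative only: it assumes the azure-cosmos Python SDK, and the endpoint, key, database and container names are hypothetical placeholders rather than anything the team has agreed.

```python
# Minimal spike: prove the proposed Python tooling can round-trip a record
# through Cosmos DB. Illustrative sketch only; configuration values are
# hypothetical placeholders, not the team's real setup.
import os

from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(
    os.environ["COSMOS_ENDPOINT"],  # e.g. https://<account>.documents.azure.com
    credential=os.environ["COSMOS_KEY"],
)

database = client.create_database_if_not_exists(id="me-service-spike")
container = database.create_container_if_not_exists(
    id="cases",
    partition_key=PartitionKey(path="/caseId"),
)

# Write a dummy record and read it back to prove the round trip works.
container.upsert_item({"id": "1", "caseId": "1", "status": "spike-test"})
item = container.read_item(item="1", partition_key="1")
print(item["status"])  # expected: spike-test
```

A spike like this, run regularly in a pipeline, would give early evidence that the chosen tools work together before the team commits to the full build.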

7. Understand security and privacy issues

Decision

The team did not meet point 7 of the Standard.

What the team has done well

The panel was impressed that:

  • It seems like a good idea that the team has chosen Okta for user authentication, as it is a product that has already been used for similar needs elsewhere.

What the team needs to explore

Before their next assessment, the team needs to:

  • Various security measures to prevent hacking have been discussed, and in theory these sound fine. But with the service not yet implemented in code, there will have been few of the insights into security flaws that normally surface through iterative development. There are other security issues which seem understood but not yet decided on; for example, whether to use two-factor authentication seems to be something NHSI needs to be consulted about.
  • The service captures data in free-text fields that may mix data about a dead person (not covered by GDPR) with data about other people such as the bereaved (covered by GDPR). The team's own example of how a bereaved person could react suggests the data may at some point describe issues of mental health, criminal offences or litigation (e.g. he was distressed, became violent and threatened to complain to the newspapers). These scenarios suggest the service may need to consider how it processes special category data covered by the GDPR, and how it would support generating reports in response to data access requests by the police, other health bodies or the bereaved themselves (the sketch after this list illustrates one way such free text could be flagged for review). Normally this is something that would be emphasised more in later-stage assessments. However, the lack of an existing code base showing how privacy controls are implemented, the short time until the team expects to be in private beta, and the sensitive nature of the data in the system raise the question of whether the team will have time to do a GDPR assessment of what data they collect and how they process it.
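
To make the free-text concern concrete, below is a minimal, deliberately naive sketch of how notes that may contain special category data could be flagged for human review before disclosure. The indicator terms and function names are hypothetical; a real implementation would rest on a proper data protection assessment, not keyword matching.

```python
# Naive illustration (not a compliance tool): flag free-text case notes that
# may contain GDPR special category data about living people, so they can be
# routed for human review before any report or data access response is sent.
SPECIAL_CATEGORY_INDICATORS = {
    "mental health": "health data",
    "violent": "possible criminal offence data",
    "threatened": "possible criminal offence data",
    "complain": "possible litigation",
}

def flag_for_review(note: str) -> list[str]:
    """Return the distinct reasons a free-text note needs privacy review."""
    lowered = note.lower()
    return sorted({reason for term, reason in SPECIAL_CATEGORY_INDICATORS.items()
                   if term in lowered})

# Example based on the team's own scenario of a bereaved person's reaction.
note = "He was distressed, became violent and threatened to complain to the newspapers."
print(flag_for_review(note))
# ['possible criminal offence data', 'possible litigation']
```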

8. Make all new source code open

Decision

The team did not meet point 8 of the Standard.

What the team needs to explore

Before their next assessment, the team needs to:

  • The team hasn't actually written any code to implement the wireframe screens they have been iterating through with user research. It is reasonable to expect that by the alpha assessment they would have written some code, and the fact there is none is a concern.
  • During the assessment it became apparent that the team didn't have plans for open sourcing. It is important to open source the code to foster transparency with the general public, to encourage the good coding habits associated with coding in the open, and to make it possible for the code base to be taken over by developers beyond the consultants who have been doing the work. We would encourage the team to make plans for open sourcing. If some parts of the code are sensitive, the team should be able to explain why they cannot be shared.

9. Use open standards and common platforms

Decision

The team met point 9 of the Standard.

What the team has done well

The panel was impressed that:

  • The team is using Okta to support user authentication. It is being used because other groups in the NHS are using it as well. It makes sense to reuse assets that are already being used.

10. Test the end-to-end service

Decision

The team did not meet point 10 of the Standard.

What the team has done well

The panel was impressed that:

  • This is a service that seems to have been well architected.

What the team needs to explore

Before their next assessment, the team needs to:

  • The low-fidelity prototype limited the scope of testing the end-to-end service. The service needs to be tested with a higher-fidelity prototype sketched in some basic code, where users interact with something that looks and feels like the intended service (the sketch below shows how small such a prototype could be). Static pictures are far less likely than something users can scroll through and click on to expose unstated needs and potential pain points of the new service.
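
As an indication of scale, a clickable prototype "sketched in some basic code" need not be a large investment. The single-file sketch below assumes the team's proposed Django stack; the screens and case data are hypothetical placeholders, not a design recommendation.

```python
# prototype.py - a minimal single-file Django prototype, runnable with:
#   python prototype.py runserver
# Hypothetical sketch only: screens and data are placeholders.
import sys

from django.conf import settings
from django.http import HttpResponse
from django.urls import path

settings.configure(
    DEBUG=True,
    ROOT_URLCONF=__name__,
    ALLOWED_HOSTS=["*"],
    SECRET_KEY="prototype-only-not-for-production",
)

# Placeholder case records standing in for real data.
CASES = {
    1: {"deceased": "Example Case A", "status": "Awaiting scrutiny"},
    2: {"deceased": "Example Case B", "status": "Referred to coroner"},
}

def case_list(request):
    rows = "".join(
        f"<li><a href='/case/{cid}/'>{c['deceased']}</a> ({c['status']})</li>"
        for cid, c in CASES.items()
    )
    return HttpResponse(f"<h1>ME case list</h1><ul>{rows}</ul>")

def case_detail(request, case_id):
    case = CASES[case_id]
    return HttpResponse(
        f"<h1>{case['deceased']}</h1><p>Status: {case['status']}</p>"
        "<p><a href='/'>Back to case list</a></p>"
    )

urlpatterns = [
    path("", case_list),
    path("case/<int:case_id>/", case_detail),
]

if __name__ == "__main__":
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)
```

Even something this small lets researchers watch users scroll, click and get lost, which static pictures cannot.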

11. Make a plan for being offline

Decision

The team did not meet point 11 of the Standard.

What the team has done well

The panel was impressed that:

  • The team did mention the idea of using Azure features to make their service available across multiple regions, thereby reducing the likelihood of downtime.

What the team needs to explore

Before their next assessment, the team needs to:

  • The plan for being offline seems to be to let users return to a paper-based system they are already using.
  • The panel recommends considering how the service being unavailable differs from there not being a service in the first place, especially as the service can contain part-complete reports and is used to manage workload.
  • When a service that users expect to be in place isn't available, measures need to be in place so that users understand what they should do: for example, what to do if data they need to access isn't available, and what to do with any actions taken while the service was offline.

12. Make sure users succeed first time

Decision

The team did not meet point 12 of the Standard.

What the team needs to explore

Before their next assessment, the team needs to:

  • The team haven't tested the service end to end. There are a number of different user groups, for example GPs and coroners, that the service hasn't been tested with. The panel recommends prototyping and testing the service end to end, as there is still a lot to be explored about how the various user groups interact with this service.
  • The lack of fidelity in the prototype and not testing scenarios end-to-end means there’s a lot still to be learnt. The panel would strongly recommend testing a more authentic prototype with scenarios that interrogate how the most complex parts of the service will actually work before committing to product development.
  • The panel are concerned about the lack of coordination between the team and NHS Improvement. The team needs to consider how they are organised, as much of the service is delivered by a different organisation. Delivery and design are not separate, and by viewing them as such the team risks delivering a service that doesn't work for users, and missing opportunities where the digital and non-digital parts of the service will need to work together. The panel strongly recommends exploring how training, uptake of the service and support for users are going to affect users' ability to use the service.
  • The design of the service is complex; the team talked about how it had been reduced in scope from initial ambitions. The panel would recommend starting from a simpler design and proving what is necessary. The panel are concerned that the complexity of the service will be near-impossible to unpick once the team has started developing production software.

13. Make the user experience consistent with GOV.UK

Decision

The team did not meet point 13 of the Standard.

What the team needs to explore

Before their next assessment, the team needs to:

  • The interaction and content design of the service does not follow GDS or NHS patterns or best practice. The few pages of the service that have been prototyped have the following issues:
  • the service uses colour as the only way of communicating meaning, which is inaccessible to anyone with colour blindness; even with good colour vision and a good monitor, the meaning being conveyed isn't clear (colour and contrast choices can be checked against WCAG thresholds; see the sketch after this list)
  • there is inconsistent affordance given by different elements on the page, so it requires guesswork for users to know what is interactive
  • there are many unexplained acronyms
  • the tabs on the ‘ME Dashboard’ aren’t closely associated with the target content and it’s easy to miss what change is triggered when they’re clicked
  • there are redundant icons
  • progress bars are vague and don't convey any specific meaning
  • the 'ME' and 'logout' links appear to have the same purpose
  • type size in some places is smaller than 9px and hard to read
  • input fields and other content appear to be greyed out in certain states, but it's unclear why
  • line length is more than 70 characters which makes reading more difficult for users
  • The team should consider using the GOV.UK design system as a starting point for their design approach, as it has been thoroughly tested, and consistency with services across the public sector reduces the burden on the user of learning how this service works.
  • The team should consider speaking to other government departments who have created similar, multi-party case working systems to find places where problems have already been solved.
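
Some of the issues above, such as colour use and legibility, can be checked against published WCAG thresholds. As a hedged illustration, the sketch below computes the WCAG 2 contrast ratio between two colours; the example colour values are assumptions for demonstration, not taken from the team's prototype.

```python
# WCAG 2 contrast ratio between a foreground and background colour,
# using the WCAG definition of relative luminance.

def _linear(channel: int) -> float:
    """Convert an 8-bit sRGB channel to its linearised value."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Near-black text on white: well above the 4.5:1 WCAG AA minimum for body text.
print(round(contrast_ratio((11, 12, 12), (255, 255, 255)), 1))
```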

14. Encourage everyone to use the digital service

Decision

The team did not meet point 14 of the Standard.

What the team needs to explore

Before their next assessment, the team needs to:

  • The team has said that they will not be involved in the rollout of this new service. The MVS (Minimum Viable Service) will be built by April 2019 and handed over to NHS Improvement (NHSI) to coordinate and manage the rollout of the service. There was no indication of any further assistance the team would provide to NHSI to encourage adoption of this new digital service. We would encourage the team to work with NHSI to ensure there is a plan to increase digital take-up. What data could be collected to help NHSI make decisions on improving take-up? Is there any insight from user research that could help NHSI, or does NHSI have data about hospitals and processes that could help the team shape their service? Additionally, once the service is available, how will feedback about the rollout, usage of the service and so on be shared between the service team and NHSI to make further improvements and increase adoption over time? The team (or NHSI) could also consider getting further insights from potential adopters to understand how to promote further uptake in the long run.

15. Collect performance data

Decision

The team did not meet point 15 of the Standard.

What the team has done well

The panel was impressed that:

  • The team has conducted some qualitative research with the 2 existing pilots to understand how the process currently works, and has used this to improve their understanding of users: for example, how many transactions there are, and how many concurrent cases a Medical Examiner could have active at any time.

What the team needs to explore

Before their next assessment, the team needs to:

  • There was some good discussion during the assessment about what performance data would be useful for the service. However, there wasn't any evidence that the team has thoroughly considered or decided on what data they need to capture. The 4 KPIs (completion rate, user satisfaction, cost per transaction and digital take-up), as well as other performance measures, have not been considered, and there are no plans in place for them.
  • It's recommended that the team works with NHSI to decide which performance measures are important, and how they will be benchmarked, captured and measured (the sketch below illustrates one such measure). There should also be a way to feed the outcomes back to NHSI, so that they know how the service is performing. This is particularly important for a service that will be built and made available in a short amount of time, where the team doesn't have time to conduct thorough usability testing. How will the service know if it's meeting its intended purpose? How will the team track and identify journey completion and areas of poor performance?
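
As one concrete example of the kind of measure this implies, the sketch below computes a completion rate from journey events. The event names and schema are hypothetical placeholders, not an analytics design the team has agreed.

```python
# Sketch: completion rate computed from hypothetical journey events.
from collections import defaultdict

events = [
    {"case": "A", "event": "scrutiny_started"},
    {"case": "A", "event": "scrutiny_completed"},
    {"case": "B", "event": "scrutiny_started"},
    {"case": "C", "event": "scrutiny_started"},
    {"case": "C", "event": "scrutiny_completed"},
]

# Group the events recorded for each case journey.
journeys = defaultdict(set)
for e in events:
    journeys[e["case"]].add(e["event"])

started = sum(1 for seen in journeys.values() if "scrutiny_started" in seen)
completed = sum(1 for seen in journeys.values()
                if {"scrutiny_started", "scrutiny_completed"} <= seen)

print(f"Completion rate: {completed / started:.0%}")  # 67% for this sample
```

Benchmarking a handful of measures like this against the existing pilots would also give NHSI a baseline to judge the rollout against.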

16. Identify performance indicators

Decision

The team did not meet point 16 of the Standard.

What the team needs to explore

Before their next assessment, the team needs to:

  • See point 15

17. Report performance data on the Performance Platform

Decision

The team did not meet point 17 of the Standard.

What the team needs to explore

Before their next assessment, the team needs to:

  • See point 15
  • When the service is made available, it's expected that a minimum amount of performance data will be published on a Performance Platform dashboard. As the timeline is quite short, the team should kick off this process soon.

18. Test with the minister

Decision

The team met point 18 of the Standard.

What the team has done well

The panel was impressed that:

  • There are plans to do this later in the project.

Updates to this page

Published 9 May 2019