Campaigns Platform - Alpha Assessment

The report from the alpha assessment of GDS’s Campaigns Platform service on 21 September 2016.

Stage: Alpha
Result: Met
Service provider: Government Digital Service (GDS)

The service met the Standard because:

  • The team demonstrated the rationale behind developing the platform in the first place and showed a commitment towards user research to improve the platform going forward.

  • The team was multidisciplinary in nature, covering all the major roles (product manager, delivery manager, user researcher, technical architect, development, design and analytics). Team members demonstrated an ability to consider user needs, conduct continual user research, modify the service based on testing, and consider the bigger picture going forward.

  • The team showed that it worked in an agile and responsive way, capable of tweaking and iterating the service as it went along, namely by customising and simplifying a generic WordPress site, transforming it into a platform that meets Service Standard guidelines for a GOV.UK site.

About the service

Product Manager: Charles Davie

Digital Leader: Chris Ferguson

Detail of the assessment

Lead Assessor: Emily Hall-Strutt

Technical Assessor: Steve Lorek

User Research Assessor: Ruben Perez-Huidobro

Design Assessor: Jason Bootle

User needs

The team demonstrated a good degree of user research based upon a sample of personnel working in a wide range of government departments and agencies. The research made a general case for who would potentially use the service, what issues could arise for users, and how tweaks have already been made to correct problems that have been identified. There was evidence of iteration based on user needs identified during the early research phase, showing that user concerns were being addressed in an agile and flexible way.

In terms of improvement moving forward towards Beta, we would expect more detail on the different types of users who would potentially use the service, both in the departments and ultimately among those towards whom the campaigns are aimed. At the moment, there appears to be one umbrella user need that doesn’t refer to a specific user but more to a general type of person who works in the Civil Service. We would require more detailed analysis of the needs of specific users (strategy teams, content teams, policy teams, etc.). Moreover, given that user research has been conducted across a wide range of government departments and agencies, more clearly focused research showing how specific needs are being identified in each individual department/agency would be helpful. We would also expect a greater level of detail on the research the team is conducting into assisted digital and accessibility for users, and into ease of use for first-time or non-expert users.

Team

The team clearly demonstrated that it had the multidisciplinary skills needed to deliver the service (product manager, delivery manager, user researcher, designer and developer, technical architect, analytics). The product manager showed evidence of being able to prioritise and get buy-in from senior stakeholders. The team clearly showed that it worked in an agile manner, with daily stand-ups, user stories from testing, use of a backlog to create an order of prioritisation, show and tells, and retrospectives at the end of each sprint. It also showed evidence of planning to test the service with senior stakeholders (heads of digital in all departments and, hopefully, the minister) at an external show and tell in October.

Going forward towards Beta, the team needs to show more evidence of how the service works and meets user needs. It also needs to conduct more testing to show that the service works across multiple devices. In this regard, giving a specific example of a current standalone microsite that could go on the campaigns platform would be helpful - this would shed light on the whole process around transferring to and using the platform and on whether users would be able to figure out how to measure and meet KPIs.

Technology

The team have elected to use a WordPress multisite solution. They demonstrated that the majority of end users had prior experience with WordPress, which is consistent with other government sites, in particular GOV.UK.

During the assessment we discussed the potential for breaking changes being introduced, either on the technology side (WordPress) or through design elements becoming incompatible with existing content. As the team are using a single WordPress instance, this risk increases over time as content becomes stale and the teams responsible for it are assigned to other projects, so it will be important to develop a plan to deal with this eventuality.

We reviewed the technical architecture diagrams, which follow a pattern consistent with other GOV.UK services. However, while the team have set up redundant application servers in multiple availability zones, there remains a single point of failure: the database server is located in a single availability zone. If that zone becomes unavailable, the service will be unavailable and may produce errors where the remaining application instances cannot establish a database connection. We queried this during the assessment and recommend that this weakness is addressed as the project moves towards beta in order to ensure a resilient service.
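
The report does not record how the service is hosted beyond the use of availability zones. As a minimal sketch, assuming the database is an Amazon RDS instance (an assumption on our part, not something confirmed at the assessment), checking for and requesting Multi-AZ failover with boto3 could look like this:

```python
# Sketch only: assumes the database is an Amazon RDS instance and that AWS
# credentials with rds:DescribeDBInstances / rds:ModifyDBInstance permissions
# are configured. The instance identifier is hypothetical.
import boto3

rds = boto3.client("rds")
DB_IDENTIFIER = "campaigns-platform-db"  # hypothetical identifier

instance = rds.describe_db_instances(
    DBInstanceIdentifier=DB_IDENTIFIER
)["DBInstances"][0]

if instance["MultiAZ"]:
    print("Multi-AZ is already enabled; a standby exists in another zone.")
else:
    # Enabling Multi-AZ provisions a synchronous standby in a second
    # availability zone; ApplyImmediately=False defers the change to the
    # next maintenance window to avoid disruption.
    rds.modify_db_instance(
        DBInstanceIdentifier=DB_IDENTIFIER,
        MultiAZ=True,
        ApplyImmediately=False,
    )
    print("Requested Multi-AZ conversion for", DB_IDENTIFIER)
```

The same change can be expressed in whatever infrastructure-as-code tooling the team already uses; the point is that a standby database in a second availability zone removes the single point of failure described above.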

Accessibility requirements have yet to be addressed and testing has been limited to the OS X screen reader (VoiceOver). No testing has been performed on the back end.

Limited browser/platform testing has been conducted so far, and the team did not have data to demonstrate which browsers, devices and platforms their users are accessing the service on.
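
The report does not say what logging or analytics data the team has access to. Purely as an illustrative sketch, assuming standard web server access logs in the common combined format (where the user-agent string is the last quoted field on each line), a rough browser and platform breakdown can be produced with nothing more than the Python standard library:

```python
# Sketch only: assumes access logs in the common "combined" format. The log
# path is hypothetical and the classification is deliberately coarse.
import re
from collections import Counter

LOG_PATH = "access.log"  # hypothetical path
UA_PATTERN = re.compile(r'"([^"]*)"\s*$')  # last quoted field on the line

def classify(user_agent: str) -> str:
    ua = user_agent.lower()
    if "iphone" in ua or "ipad" in ua:
        return "iOS"
    if "android" in ua:
        return "Android"
    if "windows" in ua:
        return "Windows desktop"
    if "macintosh" in ua:
        return "Mac desktop"
    return "Other/unknown"

counts = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = UA_PATTERN.search(line)
        if match:
            counts[classify(match.group(1))] += 1

total = sum(counts.values()) or 1
for platform, count in counts.most_common():
    print(f"{platform}: {count} requests ({100 * count / total:.1f}%)")
```

In practice an analytics tool already deployed on the service would give the same breakdown with less effort; what matters is that the team can evidence which browsers and devices to test against.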

It was explained that individual content teams will have responsibility for managing their content and testing that the service meets their audience’s needs. It would be useful to demonstrate that the users of the service are sufficiently informed as to how to test the content and to address any problems that may arise.

We also discussed that content teams would be responsible for managing their own users, with no intervention from the service team. There is a risk that user accounts could be compromised or that credentials could be shared. The team should consider approaches to mitigate this risk, e.g. locking stale accounts and proactive monitoring of activity.
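
Purely as an illustration of the stale-account locking idea mentioned above (the account names, login dates and threshold below are hypothetical, and a real implementation would work against the WordPress user records), a sketch of the policy might look like this:

```python
# Sketch only: illustrates a stale-account locking policy against hypothetical
# account data. A real implementation would read last-login times from the
# WordPress user store and disable or flag accounts there.
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # hypothetical policy threshold

accounts = {
    # username -> last successful login (hypothetical data)
    "dept-a-editor": datetime(2016, 9, 1),
    "dept-b-editor": datetime(2016, 3, 12),
    "shared-campaigns": datetime(2016, 8, 30),
}

def accounts_to_lock(accounts: dict, now: datetime) -> list:
    """Return usernames whose last login is older than the threshold."""
    return [
        name for name, last_login in accounts.items()
        if now - last_login > STALE_AFTER
    ]

if __name__ == "__main__":
    now = datetime(2016, 9, 21)  # date of the assessment, for illustration
    for name in accounts_to_lock(accounts, now):
        print(f"Lock (or review) stale account: {name}")
```

Proactive monitoring (for example, alerting on logins from unusual locations or on concurrent sessions for the same account) would complement this; the choice of mechanism is for the team to make.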

Design

The team presented the required evidence that the service is on the right path of iterating the design based on the user research conducted thus far. In terms of visual impact, the front end is in keeping with GOV.UK style guidelines - what will be interesting going forward is to see what plans the team has in terms of new styles and design trends, how far any design modifications will go, and whether they will also fall within GOV.UK style guidelines.

Looking at things to work on, it is not clear how easy the design is for first-time/non-expert users and how much training or help such users would need to create a campaign by themselves - this will require further user research. The team must also show the overall design is compatible with a wider range of screen readers and assistive technologies - this will require more accessibility testing. Finally, the team will have to demonstrate sufficient testing to show the service works responsively across a variety of mobile and tablet devices for the ultimate end users - i.e. those the campaigns are actually targeting.

Analytics

Although the team showed an awareness of the need for analytics, it has not yet sufficiently outlined how it plans to use metrics and performance indicators to assess performance.

Going forward, the team will have to establish a clear plan for the service in terms of what metrics it intends to use to measure success, e.g. the number of standalone campaigns migrating to the platform or the amount of money saved by not creating microsites. At the moment it is not even clear how many microsites there are, so a historic baseline will have to be established first. Once the baseline has been established, the four key performance indicators outlined in point 16 of the Service Standard are mandatory and will have to be shown to be in use to pass the next phase. Then, given the differing nature of campaigns that will potentially go on the platform, separate additional performance indicators should be established for each individual campaign, using analytics tools such as Google Analytics or Google Tag Manager (for more experienced users). When accurate data has been collected, the team should then create a mechanism for reporting the data on the Performance Platform.
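
As a purely illustrative sketch of what the baseline and headline metrics could look like (every figure below is hypothetical, not drawn from the assessment), the calculations are straightforward once the counts are known:

```python
# Sketch only: every number here is hypothetical and exists purely to show
# the shape of the baseline and KPI calculations described above.

# Historic baseline: how many standalone campaign microsites exist today,
# and roughly what each costs to build and run per year.
standalone_microsites = 40            # hypothetical count
annual_cost_per_microsite = 25_000    # hypothetical cost in GBP

# Migration scenario: how many of those move onto the platform.
migrated_to_platform = 15             # hypothetical count
estimated_saving = migrated_to_platform * annual_cost_per_microsite
print(f"Estimated annual saving: £{estimated_saving:,}")

# Two of the four mandatory KPIs, expressed as simple ratios.
campaigns_started = 20    # hypothetical: campaigns set up on the platform
campaigns_published = 17  # hypothetical: campaigns that went live
completion_rate = campaigns_published / campaigns_started

digital_take_up = migrated_to_platform / standalone_microsites

print(f"Completion rate (or equivalent): {completion_rate:.0%}")
print(f"Digital take-up (share of microsites on the platform): {digital_take_up:.0%}")
```

The real figures, and the right definition of "completion" for a publishing platform, are for the team to establish as part of the baseline work; the results should then feed the Performance Platform reporting mentioned above.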

Recommendations

User Needs

Conduct more research on the specific types of users who would use the service. Make sure that you create a plan to cover all the different types of users, from content to policy and strategy teams.

Conduct more research on the specific needs, wants, and capabilities of the different government departments/agencies that would be using the service.

Show more evidence of how research and testing is taking into account not only general issues but also assisted digital and accessibility needs, and the ease of use for first-time or non-expert users.

Make a clear plan for how you are going to approach user research in a private beta, covering a good range of departments and agencies and all possible user types.

Team

Do more testing to show that the service works across multiple devices.

Test the transfer of a current (or redundant) campaign microsite onto the campaigns platform to see how the whole process around the platform would work with users and to gauge what potential issues could arise.

Technology

Implement a plan to include users with accessibility needs and to test and demonstrate that the service meets their needs.

Research users' technology needs (e.g. browsers and devices) and demonstrate that the service has been tested against these requirements.

Review the technical architecture to ensure that it is fault tolerant, and in particular address single points of failure such as the database server (e.g. with a Multi-AZ deployment).

Devise a plan for managing old content and for dealing with breaking changes in the technology stack and the front-end designs.

Devise a plan for onboarding content editors, ensuring they are aware of their responsibilities for providing content that is accessible and optimised for the intended audience, and of how they can use the tools provided to meet these needs.

Develop means to mitigate the risk of account sharing and of redundant accounts remaining active.

Design

Show more evidence of how easy the design is for first-time/non-expert users to use.

Demonstrate that the overall design is compatible with a wider range of screen readers and assistive technologies.

Show the service works responsively across a variety of mobile and tablet devices.

Analytics

Put metrics in place with clear benchmarks and objectives - establish a clear plan to meet objectives.

Explain how the service addresses the four key performance indicators - lowering cost per transaction, improving user satisfaction, increasing completion rate (or equivalent), and increasing digital take-up.

Identify any other performance indicators and measurable metrics that could be used to show how the service is doing for specific campaigns and whether it is meeting objectives.

To pass the next assessment, the service team must:

  • Demonstrate increased resilience of the service, which currently has a single point of failure with the database server located in a single availability zone.

  • Demonstrate that users with accessibility needs are accommodated and that sufficient testing has been conducted to ensure that the service is easy to use first time for non-expert/first-time users and works on a range of desktop, laptop, mobile, and tablet devices.

  • Demonstrate clearly identified performance indicators for the service, including the four mandatory KPIs as outlined in point 16 of the Service Standard. The team needs to put metrics in place with benchmarks and objectives and establish a clear plan for how to meet them. It also needs to report results on the Performance Platform.

The service team should also:

  • Seek to further customise the service for different users/departments, based upon further research.

  • Set out a clear plan for disposing of redundant campaign sites.

  • Conduct more browser/platform testing.

Digital Service Standard points

Point Description Result
1 Understanding user needs Met
2 Improving the service based on user research and usability testing Met
3 Having a sustainable, multidisciplinary team in place Met
4 Building using agile, iterative and user-centred methods Met
5 Iterating and improving the service on a frequent basis Met
6 Evaluating tools, systems, and ways of procuring them Met
7 Managing data, security level, legal responsibilities, privacy issues and risks Met
8 Making code available as open source Met
9 Using open standards and common government platforms Met
10 Testing the end-to-end service, and browser and device testing Met
11 Planning for the service being taken temporarily offline Met
12 Creating a simple and intuitive service Met
13 Ensuring consistency with the design and style of GOV.UK Met
14 Encouraging digital take-up Met
15 Using analytics tools to collect and act on performance data Met
16 Defining KPIs and establishing performance benchmarks Met
17 Reporting performance data on the Performance Platform Met
18 Testing the service with the minister responsible for it Met

Published 22 December 2016