Research and analysis

Drivers of technological needs project: Delphi

Published 29 August 2024

Government Office for Science (GO-Science)

What was the challenge? 

The GO-Science Foresight project, Drivers of Technological Needs, was part of GO-Science's work to build a cross-government picture of the technologies that are most critical to the UK. The aim was to create a consistent methodology for mapping technologies to four long-term strategic goals covering ‘sustainable environment’, ‘health and life sciences’, ‘national security and defence’ and ‘a digital- and data-driven economy’.

To manage the challenge of directly mapping individual technologies to ‘big picture’ strategic goals and improve the transparency of results, an intermediate concept of ‘technical capabilities’ was introduced (for example, ‘the capability to deliver effective health services in homes and communities, regardless of income’). Technical capabilities are the practical steps that are needed to support the delivery of each overarching goal and are themselves underpinned by key technologies (alongside other enablers, such as skills and institutions).

What was the approach? 

Using a modified Delphi process, the GO-Science project team worked with a diverse group of 60 leading experts from the UK government, business, and academia to identify technical capabilities and assess their importance to achieving the four strategic goals. A high-level overview of the project’s methodology is provided below.

Quality assurance was built in throughout the process. Experts were invited to review the curation of capabilities during phase 1. The results of the phase 2 scoring process were returned to experts during the workshops in phase 3. Finally, phase 4 aimed to check the judgements that had been derived from the means and standard deviations of the experts’ scores and to clarify any discrepancies.

A key principle for the Delphi process was that responses gathered from individual experts were anonymous; participants could know who else was involved, but not what they said. Anonymity ensured that opinions were heard independently without bias and helped to avoid groupthink.

Overview of the Drivers of Technological Needs project process

Phase 0 – Pre-survey work 

  • Desk research, exploration of strategic goals from policy documents and development of survey approach. 

  • Expert panel identification and recruitment – one panel of approximately 15 experts, recruited from government, business and academia, per strategic goal. 

Phase 1 – Survey to identify technical capabilities 

  • Experts from each panel submitted lists of technical capabilities they considered necessary to achieve a strategic goal. 

  • GO-Science project team curation of technical capabilities (merging/re-wording similar capabilities) into lists of 10–20 capabilities for each strategic goal, including identification of cross-cutting capabilities (relevant for more than one goal). 

  • Expert reviews of curated capabilities and finalisation of the technical-capability list for each goal (including common wording for cross-cutting capabilities).

Phase 2 – Survey to score technical capabilities and prioritise 

  • Expert scoring of technical capabilities according to their importance to achieving strategic goals. Capabilities were scored on a scale ranging from 5 (essential) to 1 (not directly relevant). 

  • GO-Science project team processing of scoring-survey submissions, calculating the mean score and standard deviation for each capability. 
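The phase 2 processing step, and the selection of high-variation capabilities that feeds phase 3, can be sketched in a few lines. The capability names and scores below are purely illustrative, not project data:

```python
from statistics import mean, stdev

# Illustrative expert scores per capability, on the 5 (essential)
# to 1 (not directly relevant) scale used in phase 2.
scores = {
    "Remote health delivery": [5, 5, 4, 5, 4],
    "Secure data sharing": [5, 3, 2, 4, 1],
    "Low-carbon manufacturing": [4, 4, 4, 3, 4],
}

# Mean score and standard deviation for each capability,
# as calculated by the project team in phase 2.
summary = {
    cap: (round(mean(s), 2), round(stdev(s), 2))
    for cap, s in scores.items()
}

# Capabilities with the greatest variation in scoring are the
# candidates for discussion and rescoring in phase 3 workshops.
most_contested = sorted(summary, key=lambda c: summary[c][1], reverse=True)
```

Here ‘Secure data sharing’ would surface first for workshop discussion, since its scores are the most dispersed.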

Phase 3 – Workshop to explore scoring and rescore technical capabilities 

  • Four expert workshops (one per goal). 

  • Expert discussion and rescoring of the ten technical capabilities with the greatest variation in initial scoring, for final prioritisation. 

Phase 4 – Outputs 

  • Scores finalised, averages and standard deviations calculated for technical capabilities, to produce a final list of 68 technical capabilities. 

  • To visualise results from the Delphi exercise, boxplots were produced to show capability rank and the spread of scores. Pre- and post-workshop boxplot figures visualised how the spread in scoring changed following the consensus workshop discussions.
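The convergence those boxplots display can be illustrated with a stdlib-only sketch of the five-number summary a boxplot is drawn from. The pre- and post-workshop scores below are invented for illustration:

```python
from statistics import quantiles

# Hypothetical scores for one capability, before and after
# the consensus workshop discussion.
pre_workshop = [1, 2, 3, 4, 5, 2, 4, 3]
post_workshop = [3, 3, 4, 4, 3, 4, 3, 4]

def five_number_summary(scores):
    """Min, lower quartile, median, upper quartile, max --
    the values a boxplot displays."""
    q1, q2, q3 = quantiles(scores, n=4)
    return (min(scores), q1, q2, q3, max(scores))

def iqr(scores):
    """Interquartile range: the height of the box in a boxplot."""
    summary = five_number_summary(scores)
    return summary[3] - summary[1]
```

A narrowing interquartile range after the workshop indicates scores converging towards consensus, which is exactly what the pre/post figure pairs were intended to show.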

This approach enabled the project team to collect both quantitative assessments of the importance of technologies and capabilities and qualitative rationale and context for those assessments. The methodology recognised the value of informed, expert judgement, while also allowing for the fact that people can change their minds once they hear different perspectives on a complex issue.

What was the impact? 

This project was a significant input into the UK government’s first attempt to compare its strategic interests in 50 technologies. The analysis was referenced in the Science and Technology Framework as underpinning the choice of critical technologies. Multiple teams have since used the work to consider their own interests in technologies.  

Written in 2023.