Speech

UK perspective on evidence based policy planning

Ambassador Alison Kemp spoke about the importance of evidence based policy planning and its impact on Montenegrin EU integration.


I’m delighted to be here today.

I’m not a data analyst, and when I was asked to speak on this subject I was a bit overwhelmed to start with, wondering how a foreign policy specialist could talk to the wide range of experts and analysts we have here today about evidence based policy. But then I remembered two things. First, what an Embassy does every day is collect quantitative and qualitative evidence to better inform policy development and delivery, on everything from supporting the development of your education system to ensuring that we work with the Montenegrin authorities to support British tourists.

Secondly, earlier in my career I was head of the FCO’s Policy Planners, and worked with policy planners across the UK government and in other Foreign Ministries on foreign policy planning workshops in which establishing a good base of evidence to understand the issue and develop policy solutions was crucial, on everything from ‘what should the UK’s strategy towards country x be?’ to ‘how will 2°C of climate change affect our foreign policy?’. So there is no part of policy development and delivery which does not rely on your team’s ability to collect, interpret and draw sound conclusions from a range of evidence.

And while I realise that there are some differences in scale between doing evidence based policy in the UK and in Montenegro, the fundamentals we’ll discuss today still hold.

What do I mean by the best possible policy ideas? Well for me that’s very simple, it’s the Holy Grail of a policy:

  • which is credible: it will meet an otherwise unmet need of the UK’s citizens;
  • which is sustainable: the government has the resources to deliver this;
  • and which is realistic: if we implement this policy we will achieve the desired result, we’ve thought through the risks and the unintended consequences and we have some ideas how to manage them.

What is evidence based policy making?

It’s when policy makers consider the existing available evidence, and engage with analysts to produce new evidence when needed. Analysts can help you design and implement good policy leading to better and informed decision making.

If you’re analysing evidence that goes beyond the routine for your role, it’s always a good idea to involve a professional analyst who’s an expert in handling and interpreting that type of evidence at an early stage in your project. Using analysts helps you be confident that you’re considering the best evidence available and that you’re interpreting and using that evidence accurately, which allows you to make informed decisions and design effective evidence-based policy. There are different kinds of analysts, including statisticians, economists, social researchers, operational researchers, scientists and engineers.

For example, statisticians supported children’s centres in the implementation of new policy by conducting research to help identify the families that needed their services the most, allowing the centres to allocate resources to families with the greatest need. This was achieved by analysing a variety of data sources, for example area deprivation data, live birth data and data from partner services, combined with case studies and interviews with a range of stakeholders.

Economists advise how to maximise welfare (or benefit) from scarce resources. Microeconomic analysis looks at the trade-offs inherent in any policy decision through concepts such as cost-benefit and opportunity cost, helping you to choose one course of action over another. Macroeconomic analysis looks at the economy as a whole, and aims to create prosperity, high employment and economic stability. For example, economists and policy professionals worked together to create the payments by results mechanism for the probation service. A combination of stakeholder and economic analysis was essential to understanding the problem and development of an effective payment mechanism that made this policy a success.
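To make the cost-benefit idea concrete, here is a minimal sketch in Python with entirely invented figures; it simply compares the net benefit of hypothetical options and notes the opportunity cost of the choice. It illustrates the concept, not any real appraisal.

```python
# Minimal cost-benefit sketch. All figures are hypothetical.
options = {
    "option_a": {"cost": 2_000_000, "benefit": 3_400_000},
    "option_b": {"cost": 1_500_000, "benefit": 2_600_000},
    "do_nothing": {"cost": 0, "benefit": 0},
}

for name, o in options.items():
    net_benefit = o["benefit"] - o["cost"]
    print(f"{name}: net benefit = {net_benefit:,}")

# Choosing option_a (net benefit 1,400,000) means forgoing option_b
# (net benefit 1,100,000): that forgone 1,100,000 is the opportunity
# cost of the decision, and it is what makes the trade-off explicit.
```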

Social researchers use both qualitative and quantitative research and evaluation methods to help you understand the potential and actual effects policy has on society and social groups. For example, social researchers conducted a randomised control trial to help policymakers determine whether a programme to encourage nursing home workers to have the flu vaccination would increase vaccination rates, and whether this would have positive effects on patient health. The study found that nursing homes participating in the programme did increase vaccination rates and that all-cause mortality of residents was lower, compared to the control group. This led to the national recommendation to vaccinate all workers in nursing homes.

Operational researchers help find solutions to complex problems through problem structuring and scientific, mathematical and statistical modelling to aid understanding of new or revised policy options and the expected effects. For example, reducing greenhouse gas emissions in line with challenging international and domestic targets, while keeping costs to a minimum, is a priority and high profile issue for the government. The modelling that operational researchers are doing helps policy-makers identify options available to achieve these targets. 

Scientists and engineers use their specialist and domain knowledge, and apply scientific method and systems thinking to understand problems, produce evidence based advice and develop policy solutions. They fulfil an important ‘transmission mechanism’ function between expert scientific communities working in academia, industry and government, and government policy makers. For example, in 2013 the Scientific Advisory Group for Emergencies (SAGE) was activated in response to wide scale UK flooding. SAGE was responsible for ensuring that timely and coordinated scientific advice across a range of issues, including weather forecasting, assessing landslide risk, assessing safety of drinking water, and monitoring and predicting sinkholes, was made available to the Cabinet Office Briefing Rooms committee (COBR).

Why do evidence based policy making in general?

Evidence based policy making will contribute towards you building a proper understanding of the problem you’re trying to solve and the various factors that will be involved in successful policy delivery. It will help you measure success and communicate your policy.

You will need to draw on a range of evidence, expertise and analysis, and consider how risk should be managed. You’ll need to draw on this evidence to think strategically, establishing a vision which includes national interests and reflects the wider political, social and EU context, to develop credible, politically-sensitive, adaptable policy. Evidence doesn’t just mean facts and figures: you’ll need qualitative as well as quantitative evidence. So you need to identify and engage with key internal and external stakeholders, including colleagues in your department, other departments, the private sector and interested parties in civil society, and draw on their expertise to inform policy development and delivery.

Having gathered the evidence, you’ll need to assimilate and analyse this information quickly, before it is out of date, to establish clear action plans. And as more and more information is available online, you’ll need to use digital tools effectively to develop, implement and evaluate high-quality policy.

You should consider using different futures techniques, both to identify questions about the future that affect policy development and implementation, and to future-proof policy decisions. And in all of this, you’ll need to think creatively and innovatively, applying open policy making techniques.

You’ll be using the data you’ve gathered to evaluate your policy at every step. During policy design and implementation this means welcoming constructive challenge and working collaboratively to understand what a particular set of evidence might mean for your policy. And it means setting up a clear results framework and monitoring and evaluation framework at the design stage of your policy.

You’ll need to find a way to communicate the data behind your policy to decision makers in the executive and the legislature, and to the public.

How to do evidence based policy making?

In the UK most government departments have a policy formulation framework, which guides civil servants through policy development. These will vary from department to department, but the approach is pretty consistent across all departments:

  1. Work out what your rationale and your objective are. It’s important to engage with analysts at the very beginning of the policy cycle. By engaging with analysts at the rationale stage, they can help you better understand the problem and the context, outlining the evidence available about what works and what doesn’t; in turn, this helps them to understand the rationale behind the policy and the thinking involved.
 Then it’s important that you clearly define the objectives and the intended long-term outcomes for your policy, and what outcomes can be measured. Analysts can help you to identify potential success measures and targets and can help you better set out the rationale and evidence behind your policy, and to develop a theory of change.
 
It’s important for policy professionals to engage analysts right from the beginning of the policy cycle, when you’re thinking about designing a new policy and about how to evaluate it. Evaluation doesn’t come in only at the end of the policy cycle; you need to plan for it from the start.

  2. Appraise and analyse the current situation and consider how different interventions would affect it. The purpose of this stage is to find the best way to execute the policy prior to implementation by identifying a list of options that are likely to meet the objectives, and assessing these for the costs and benefits they’re likely to bring.


  3. Decide on a preferred policy and examine how you would deliver it, the risks you’d face and how you’d effectively mitigate these. Implementation has to be at the forefront of your policy development, and that will involve working very closely with your operational colleagues, who will be very good at telling you what the reality is on the ground.

  4. Implement: When you implement your policy, you should start to monitor it in parallel. Without this, you’ll have no baseline measure against which to compare what was happening before and after your policy was implemented. Setting up a control and comparison group at this point can help you tell whether your policy had an effect and how it is performing against your planned outcomes (see the sketch after this list).

  5. Evaluation: This is the assessment of policy effectiveness and efficiency during and after implementation. It seeks to measure outcomes and their effects to assess whether the anticipated benefits have been realised. We’ll look at the evaluation options available to you in more detail later.
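As a hedged illustration of the monitoring point in step 4, here is a minimal difference-in-differences sketch in Python, using invented outcome averages: with measurements before and after implementation for both a policy group and a comparison group, you can separate the policy’s effect from background trends that affect everyone.

```python
# Difference-in-differences sketch; all outcome averages are invented.
policy_before, policy_after = 42.0, 55.0
comparison_before, comparison_after = 41.0, 46.0

policy_change = policy_after - policy_before              # 13.0
background_trend = comparison_after - comparison_before   # 5.0

# The comparison group absorbs the background trend, so the remainder
# is the change attributable to the policy itself.
effect_estimate = policy_change - background_trend        # 8.0
print(f"Estimated policy effect: {effect_estimate:+.1f}")
```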


Throughout the process you’ll find that feedback is essential to honing your policy: capturing feedback on the effectiveness and efficiency of policy is crucial in helping future policy makers understand what works and what doesn’t, and in helping you continuously improve your own policies.


There are many different types of evidence available to you. Your understanding of the problem you’re trying to solve and your early engagement with analysts will help you decide what evidence you need. Some of the different types of evidence are:

Survey and administrative data: Provides valuable information about the nature, size, frequency and distribution of a problem or research question under investigation. It can also generate evidence of correlations and can be used to generate hypotheses that can be used in experimental and quasi-experimental studies. Existing survey sources are typically used, but bespoke surveys may be commissioned in the absence of existing evidence and when it’s cost and time-effective to do so. In addition, government departments often hold administrative data that may be useful in providing evidence.

Economic evaluation evidence: Policy making, design and implementation inevitably involve decisions about the use and allocation of scarce resources. As a result, economic evaluation is required to provide information about the most cost-effective way of achieving a given objective and how the greatest benefit can be achieved from the resources available. 

Impact and process evaluation: These evaluations assess how the policy was implemented and determine the outcomes related to the policy. They explore to what extent the policy is responsible for the outcomes and the extent other factors are responsible. This is often achieved by comparing those who have experienced the policy with those who haven’t.

Theory-based evaluations: These involve understanding, systematically testing and refining assumed theoretical connections between a policy and the anticipated outcomes. These connections can be explored using a wide range of research methods including qualitative, quantitative or a combination of both.

Meta evaluations and meta-analysis: Meta-evaluations can also use both qualitative and quantitative techniques to bring together a number of related evaluations and derive an overview or summary conclusion from their combined results (a minimal sketch follows the list below).

Other sources of evidence include:

  • existing domestic and international academic research statistics and studies (particularly systematic reviews which independently synthesise available evidence on a given topic)
  • preliminary results from research studies (undertaken in response to a specific question or a new field of study)
  • stakeholder consultation and analysis including surveys, ethnography, interviews, case studies and focus groups 
  • expert knowledge.
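To illustrate the meta-analysis idea mentioned above, here is a minimal fixed-effect sketch in Python: several hypothetical evaluations are combined by inverse-variance weighting, so more precise studies count for more. The effect sizes and standard errors are invented.

```python
# Fixed-effect meta-analysis sketch; study figures are invented.
studies = [
    # (effect estimate, standard error) from each related evaluation
    (0.30, 0.10),
    (0.15, 0.08),
    (0.25, 0.12),
]

weights = [1 / se ** 2 for _, se in studies]  # inverse-variance weights
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect = {pooled:.3f} (standard error {pooled_se:.3f})")
```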

You’ll need to understand the difference between quantitative and qualitative evidence. Qualitative research aims to provide an in-depth understanding of people’s experiences, perspectives and histories in the context of their personal circumstances. It can help answer ‘what is’, ‘how’ and ‘why’ questions relating to a particular situation or relationship, from the perspective of those being studied. Qualitative evidence is not context-free, so care must be taken to make sure it isn’t being misinterpreted, or the findings generalised to groups or contexts that were not involved in the study; consulting a specialist analyst will help you navigate these risks.

Qualitative research is used for a range of purposes including:

  • examining an issue or problem
  • helping you to understand why something may or may not work
  • identifying outcomes (intended or unintended) and how they occur
  • examining the different needs of the people who will be affected by the policy
  • exploring the contexts in which policies operate
  • exploring organisational aspects of implementation.

Quantitative research captures numerical data that attempts to establish the effect a policy will have, by determining the relationship between the intervention and the outcomes. It achieves this by establishing whether:

  • one factor causes a change in another, representing a cause and effect relationship
  • 2 factors are commonly seen together without one causing the other, demonstrating a correlational relationship
  • there are additional intertwined factors, known as confounding factors, influencing the relationship under investigation, and what these factors might be (a short sketch of the difference between correlation and causation follows this list)
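Here is the sketch referred to above: a small Python simulation, with randomly generated data, in which two factors are strongly correlated only because a third, confounding factor drives both. Neither factor influences the other, yet a naive reading of the correlation would suggest a causal link.

```python
# Correlation without causation: both x and y respond to a confounder.
import random

random.seed(1)
n = 1000
confounder = [random.gauss(0, 1) for _ in range(n)]
x = [c + random.gauss(0, 0.5) for c in confounder]  # driven by confounder
y = [c + random.gauss(0, 0.5) for c in confounder]  # driven by confounder

def correlation(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    var_a = sum((u - ma) ** 2 for u in a)
    var_b = sum((v - mb) ** 2 for v in b)
    return cov / (var_a * var_b) ** 0.5

# High correlation, yet x and y have no causal link to each other.
print(f"correlation(x, y) = {correlation(x, y):.2f}")
```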

Quantitative research uses a range of methods, including:

  • randomised control trials
  • before and after comparison trials
  • surveys and questionnaires
  • observations
  • statistical analysis

Quantitative research is more appropriate when you want to test a hypothesis by measuring outcomes, for example testing to see if declaring intentions to exercise on social media actually increases exercise levels in young adults. Discovering the relationships between factors can help us predict outcomes and behaviours produced by interventions, informing and supporting our policy decisions.
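As a sketch of how such a hypothesis might be tested, here is a two-sample t-test in Python using scipy (assumed available); the exercise counts for the two hypothetical groups, and the 0.05 significance threshold, are purely illustrative and not drawn from any real study.

```python
# Two-sample t-test sketch; all data points are invented.
from scipy import stats

declared = [4, 5, 3, 6, 5, 4, 5, 6, 4, 5]      # weekly sessions, declared on social media
not_declared = [3, 4, 2, 4, 3, 5, 3, 4, 3, 4]  # weekly sessions, no declaration

t_stat, p_value = stats.ttest_ind(declared, not_declared)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

if p_value < 0.05:
    print("Difference is unlikely to be chance alone.")
else:
    print("No statistically significant difference detected.")
```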
 


Both qualitative and quantitative research can use questionnaires; however, unlike the detailed descriptions produced by qualitative research, quantitative research uses a rating score or answers to a closed question. While the information collected is less detailed, greater numbers of people can be included.

Once you’ve developed an understanding of the problem, analysts will help you decide what evidence you’ll need. It’s important to remember that some evidence is more reliable than others. It can be helpful to imagine evidence as a hierarchy when comparing and identifying what forms are more valuable than others, and deciding what evidence to use for policy design.
 
A policy with basic supporting evidence would have a clear logic model linking the activities of the policy to measurable outcomes. The best quality evidence would have a clear theory of change. It would also be able to point to a range of different randomised control trials that allow you to show that your policy achieves its intended outcomes in a range of contexts, and to identify the features of the policy that have a positive effect.

Interpreting evidence

So now you’ve got your evidence, you’ve got to interpret it carefully: ‘facts’ are rarely simple and should never be used on their own. When used alone, they can be selective, ambiguous and deceptive. I’d like to highlight a few risks:

Firstly, a common error is to interpret evidence of a correlation as proof of causation. Just because 2 factors move in ways relating to each other, it doesn’t necessarily mean that one is influencing the other. Scared Straight is a US-developed intervention to deter at-risk children from criminal behaviours through exposure to the frightening realities of a life of crime. Reported success rates were as high as 94%. However, none of these evaluations had a control group, and therefore no baseline measure. When the programme was evaluated using randomised control trials, those participating in the intervention were found to have higher rates of criminal behaviour than those not participating. This demonstrated that the intervention didn’t meet its objective of deterring at-risk children from criminal behaviours; in fact it appears to have had a negative effect.

Secondly, when you see a correlation or a cause and effect relationship, it is rarely the result of just one factor changing. It’s important not to take the relationship at face value, and to consider other factors that could have influenced it. When considering confounding factors it’s important to explore both independent variables (factors changed by the intervention) and dependent variables (the outcomes of the intervention), as both can have confounding variables. Statistical analysis can help you feel more confident when interpreting relationships: there are methods that can be applied to assess the influence of primary factors and predict the influence of any additional factors you have found in the relationship.
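To show what such statistical adjustment can look like in practice, here is a hedged Python sketch with simulated data: a linear regression that includes a suspected confounder alongside the intervention, so the intervention’s effect is estimated holding the confounder constant. The true effect is set to 2.0 in the simulation.

```python
# Adjusting for a confounder by regression; data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 500
confounder = rng.normal(size=n)
# The confounder influences both who receives the intervention and the outcome.
intervention = (confounder + rng.normal(size=n) > 0).astype(float)
outcome = 2.0 * intervention + 3.0 * confounder + rng.normal(size=n)

# Naive model (intervention only) versus adjusted model (plus confounder).
X_naive = np.column_stack([np.ones(n), intervention])
X_adj = np.column_stack([np.ones(n), intervention, confounder])
beta_naive, *_ = np.linalg.lstsq(X_naive, outcome, rcond=None)
beta_adj, *_ = np.linalg.lstsq(X_adj, outcome, rcond=None)

print(f"naive estimate:    {beta_naive[1]:.2f}  (biased by the confounder)")
print(f"adjusted estimate: {beta_adj[1]:.2f}  (close to the true 2.0)")
```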

Thirdly, generalisation: just because you have seen an effect in one population and in a specific context doesn’t mean you can expect to see the same outcome from the same intervention in a different context. You should be careful when generalising evidence. Ask yourself:

  • would it work elsewhere?
  • where is the evidence to support this assumption? 

For decades, adults with severe head injury were treated using steroid injections, based on the principle that steroids reduce swelling and the assumption that swelling inside the skull was what killed people with head injuries. Randomised control trials found that patients receiving steroids were more likely to die: in reality the treatment was killing people.

We should talk about cognitive bias. It affects everyone; no one is immune. There are 3 types policy-makers should try to avoid:

  • Confirmation bias can lead you to only look for evidence that supports what you already believe, and to not give full consideration to contrary evidence.
  • Optimism bias leads you to believe that you’re able to achieve something regardless of the evidence to the contrary, resulting in over optimistic evaluations of cost, time, and benefits.
  • Loss aversion can result in your continued work on a project or policy, especially one that you have already invested time and money in, even when there’s evidence that the project or policy will not be effective.

Similarly, no one, not even analysts, is above making assumptions. All evidence is based on a set of assumptions. These help analysts to determine the most appropriate research methods and analysis to use, but assumptions also affect how the results can be interpreted. When reviewing evidence it’s important you speak to your analysts about how the assumptions might affect the results, the interpretation and the associated limitations.
 
Stronger assumptions generate greater uncertainty, therefore all evidence should explicitly acknowledge the assumptions used.

Don’t be inflexible: No strategy survives contact with the public intact

Which is why testing policy is important. The better evidenced your choice of policy, the lower the chance of failure, but you will still need to test and adapt your policy.

Pilots can be a good way to explore the effect of a policy on a small scale, to test whether it produces the desired outcomes and assess its value. For example, in 2003 the DWP conducted a randomised control trial to examine the effect of 3 new interventions on incapacity benefit claimants:

  • support at work
  • support based on their individual health needs
  • both interventions

The extra support cost on average £1,400 per person, but the pilot found there was no improvement in outcomes beyond the standard support already available. The trial provided evidence that the additional support was not generating the intended benefits, saving the taxpayer millions of pounds.
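A minimal sketch of the decision logic behind such a pilot, in Python: each trial arm’s outcome is compared against the control group receiving standard support, alongside the extra cost. All outcome rates here are invented; only the £1,400 cost figure comes from the speech.

```python
# Pilot evaluation sketch; outcome rates are invented for illustration.
control_outcome = 0.31  # hypothetical rate under standard support
arms = {
    "support_at_work": 0.32,
    "health_based_support": 0.30,
    "both_interventions": 0.33,
}
extra_cost_per_person = 1400  # average extra cost cited in the speech

for arm, outcome in arms.items():
    uplift = outcome - control_outcome
    print(f"{arm}: uplift {uplift:+.2f} for £{extra_cost_per_person} extra per person")

# If no arm shows a meaningful uplift over standard support, the
# evidence favours not rolling the extra support out nationally.
```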

Evidence based policy making for EU integration?

If I may say so, when it comes to EU integration, evidence based policy making is even more important; your analysts and your data are even more important. Cherish them: get to know them, support and strengthen them. This is certainly something the European Commission believes, which is why the EU is investing in MontStat and in building the Montenegrin government’s capacity to develop evidence-based policy making.

You need data to know where you are and to build a roadmap towards the EU acquis. It is in no one’s interests to develop opening positions, draft benchmarks or action plans which then receive substantial revisions from the Commission, requiring further investment in expertise and evidence, and delaying the accession country’s subsequent drafts of these documents.

And delivering evidence based policy will also only be possible if the institutions involved are independent, professional and have the credibility to perform their function. One way to establish that credibility is through demonstrating an ethical approach to acquiring and using evidence and through transparency around every stage of policy development and implementation.

Thank you.

Published 8 December 2018