Guidance

Introduction to evaluation in health and wellbeing

Helping public health practitioners to conduct evaluations – what evaluation is, when it should be undertaken and the different types of evaluation available.

This guidance aims to help public health practitioners when conducting evaluations. This section provides an overview of what evaluation is, when it should be undertaken, and different types of evaluation. These are described in more detail on other pages. There is also a glossary defining the important terms and a guide to other resources.

What evaluation is

Evaluations tell us what works and what does not. An evaluation should be a rigorous and structured assessment of a completed or ongoing activity, intervention, programme or policy that will determine the extent to which it is achieving its objectives and contributing to decision-making (Menon, Karl and Wignaraja, 2009). For example, an evaluation might aim to determine whether an intervention reached its intended audience, was implemented as planned, had the desired impacts and improved outcomes, and/or to establish for whom, how and why it had its effects.

Evaluation involves collection of information or data and facilitates judgements about the success and value of an intervention. Evaluations can be used to inform changes to improve an intervention, and aid decision-making about future courses of action. Evaluation can also help to ensure public accountability and that best use is made of limited resources.

Public health evaluations can vary in size and scope. For example, a study might evaluate the effects of government policies on health inequalities throughout England, or evaluate whether the provision of well-fitting slippers reduces falls in a home for older people. In this way, evaluations can improve services locally and provide evidence for national policy-making and, thereby, improve public health practice.

When to conduct an evaluation

It is particularly important to conduct an evaluation when one or more of the following criteria are met:

  • there has been a significant investment of time, money and/or resources
  • there is a possibility of risk or harm
  • the intervention represents a novel or innovative approach
  • the intervention is the subject of high political scrutiny or priority
  • there is a gap in services or knowledge about how to address a problem or provide effective services for a particular population

In contrast, evaluations should not be conducted:

  • if constant changes or modifications have been made to the intervention (because the evaluation could be premature and inconclusive)
  • if the intervention is too early in development (unless the evaluation is designed as a formative evaluation with the aim of improving an intervention)
  • if there is a lack of clarity or consensus on objectives because this makes it difficult for the evaluator to establish what is being evaluated
  • for purely promotional purposes, that is, evaluations should not begin with the aim of identifying ‘success stories’

Further guidance on when to evaluate can be found in the evaluability section.

Stages of evaluation

There are 4 stages to evaluation.

  1. Defining your evaluation questions. What do you want to discover – for example, what outcomes will you assess, for whom, and over what time frame? (A minimal sketch of this step follows the list.)
  2. Collecting data to answer those questions.
  3. Analysing the data collected in stage 2 to answer the questions defined in stage 1.
  4. Clarifying the implications of the findings and producing recommendations.
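
By way of illustration only, the sketch below expresses an evaluation question for the hypothetical slipper intervention mentioned above as a simple data structure with explicit, answerable parts. Every field name and value here is invented purely to show the kind of specificity that stage 1 requires.

```python
# A minimal sketch of stage 1: pinning an evaluation question down
# into explicit, answerable parts. All field names and values are
# hypothetical, chosen to match the slipper example in this guidance.
from dataclasses import dataclass

@dataclass
class EvaluationQuestion:
    outcome: str      # what will be assessed
    population: str   # for whom
    timeframe: str    # over what time frame
    comparator: str   # against what alternative, if any

question = EvaluationQuestion(
    outcome="falls per 100 residents per month",
    population="residents of a home for older people",
    timeframe="6 months before vs 6 months after provision",
    comparator="usual footwear (no well-fitting slippers provided)",
)
print(question)
```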

Further details can be found in the section on planning an evaluation.

Types of evaluation

We will consider 3 categories of evaluation: outcome, process and economic evaluation.

Outcome evaluation

Public health interventions are intended to improve outcomes. Outcome evaluations are assessments of the results of an intervention and measure the changes brought about by the intervention (WK Kellogg Foundation, 2004). They therefore collect and analyse data on specific outcomes that are thought to be influenced by the intervention. The findings from an outcome evaluation can then tell us how effective the intervention is at changing those outcomes.

Several different study designs are used in outcome evaluations, including single-group pre-post comparisons, quasi-experimental studies with matched control groups, and randomised controlled trials. Each of these designs provides a different level of evidence about the effectiveness of an intervention: they vary in the extent to which they are able to attribute any observed change to the intervention (as opposed to other initiatives taking place at the same time) and demonstrate cause and effect (see causality for more information).
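
As a loose illustration of why a control group matters, the sketch below compares a single-group pre-post analysis with a difference-in-differences analysis against a matched control group, one simple quasi-experimental approach. All numbers are invented for the slipper example above.

```python
# Illustration, on invented data, of two outcome-evaluation designs:
# a single-group pre-post comparison and a comparison against a
# matched control group (difference-in-differences).
from statistics import mean

# Hypothetical monthly falls per 100 residents, before and after
# well-fitting slippers were provided
intervention_before = [12, 11, 13, 12]
intervention_after = [8, 9, 7, 8]

# A matched control home that received no slippers over the same period
control_before = [12, 13, 12, 11]
control_after = [10, 11, 10, 10]

# Single-group pre-post: the change seen in the intervention group alone
pre_post_change = mean(intervention_after) - mean(intervention_before)

# Difference-in-differences: subtract the change seen in the control
# group, helping separate the intervention's effect from background
# trends and other initiatives taking place at the same time
did = pre_post_change - (mean(control_after) - mean(control_before))

print(f"Pre-post change: {pre_post_change:+.2f} falls per 100 residents")
print(f"Difference-in-differences: {did:+.2f} falls per 100 residents")
```

On these invented figures, the pre-post design attributes a fall of 4.0 per 100 residents to the intervention, but once the control group's background improvement is subtracted the estimated effect is a smaller 2.25.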

The section on outcome evaluation provides additional information and lists resources providing more detail.

Process evaluation

Process evaluations are assessments of whether a policy is being implemented as intended and what, in practice, is felt to be working more or less well, and why (HM Treasury, 2011). They do not primarily focus on outcomes but on how an intervention or service works. Process evaluations are often conducted alongside outcome evaluations (see above), and are usually used to evaluate complex interventions that have several components and address multiple aims. They typically collect data on different aspects of the intervention, using mixed methods. The methods section provides further description and examples of evaluation methods.

In public health, process evaluations often assess how interventions are delivered within particular settings. They also investigate behavioural and other changes in the staff delivering, and the people receiving, the intervention, since such changes in processes may explain observed changes in health outcomes. Process evaluations can also examine the influence of contextual factors on how an intervention operates and brings about change.

Process evaluations are useful for understanding why interventions work in one service or area but not in another. They can also highlight for whom an intervention works best, enabling optimal targeting of particular interventions or services, or providing insight into aspects of the intervention that might need changing for other groups.

A key component of a process evaluation is the construction of a logic model that explains how the intervention is thought to generate outcomes. Further details on what logic models are and how to create them are provided in the section on logic models.
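
As a purely illustrative example, a logic model for the hypothetical slipper intervention might be sketched as the conventional chain from inputs through activities and outputs to outcomes; every entry below is invented.

```python
# A loosely sketched logic model for the hypothetical slipper
# intervention, laid out as the conventional chain of inputs ->
# activities -> outputs -> outcomes. Entries are illustrative only.
logic_model = {
    "inputs": ["funding for slippers", "staff time for fitting"],
    "activities": ["assess residents' footwear", "fit and supply slippers"],
    "outputs": ["residents supplied with well-fitting slippers"],
    "short-term outcomes": ["improved stability when walking"],
    "long-term outcomes": ["fewer falls and fall-related injuries"],
}

for stage, items in logic_model.items():
    print(f"{stage}: {'; '.join(items)}")
```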

Process evaluations are critical to improving the effectiveness of interventions, services and policies as they identify how interventions work, including strengths and weaknesses in delivery that influence effectiveness. More information and resources can be found in the process evaluation section.

Economic evaluation

Economic evaluations are assessments of the value gained from and the costs of resources used to implement a policy, programme or intervention (HM Treasury, 2011). An economic evaluation can clarify the costs and benefits of an intervention compared to an alternative course of action. This information can support decision-makers in allocating future resources, setting priorities and shaping health policy.

Economic evaluations depend on assessment and valuation of resources (for example, staff time) to estimate the costs involved in and stemming from an intervention. The outcomes or benefits of an intervention, often expressed in terms of quality-adjusted life years (QALYs), also need to be carefully assessed. However, in public health, where there are long time lags between interventions and outcomes, it can be challenging to capture benefits fully.
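
By way of a worked example, the sketch below computes an incremental cost-effectiveness ratio (ICER), a common summary measure in economic evaluation: the extra cost per extra QALY gained, relative to an alternative course of action. All figures are hypothetical and chosen only to show the arithmetic.

```python
# Illustrative sketch of an incremental cost-effectiveness ratio
# (ICER). Real economic evaluations rest on carefully costed resource
# data and measured or modelled QALYs; these numbers are invented.

def icer(cost_new, qalys_new, cost_usual, qalys_usual):
    """Incremental cost per QALY gained: (ΔCost) / (ΔQALYs)."""
    return (cost_new - cost_usual) / (qalys_new - qalys_usual)

# Hypothetical falls-prevention intervention vs usual care, per person
cost_per_qaly = icer(cost_new=450.0, qalys_new=8.25,
                     cost_usual=300.0, qalys_usual=8.20)
print(f"Incremental cost per QALY gained: £{cost_per_qaly:,.0f}")
# -> £3,000 per QALY gained, which a decision-maker would compare
#    against a willingness-to-pay threshold
```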

The section on economic evaluation provides further description, examples and lists relevant resources.

References

HM Treasury (2011). ‘The Magenta Book: guidance for evaluation’.

Menon S, Karl J and Wignaraja K (2009). ‘Handbook on planning, monitoring and evaluating for development results’. United Nations Development Programme (UNDP) Evaluation Office, New York, NY.

WK Kellogg Foundation (2004). ‘WK Kellogg Foundation evaluation handbook’. WK Kellogg Foundation.

Acknowledgements

Written by Charles Abraham, Jane Smith, Sarah Denford, Krystal Warmoth, Margaret Callaghan and Sarah Morgan Trimmer.

This work was partially funded by the UK National Institute for Health Research (NIHR) School for Public Health Research, the NIHR Collaboration for Leadership in Applied Health Research and Care of the South West Peninsula (PenCLAHRC) and by Public Health England. However, the views expressed are those of the authors.

Updates to this page

Published 7 August 2018
