Guidance

When to evaluate: evaluation in health and wellbeing

Helping public health practitioners conduct evaluations by identifying when it is possible and appropriate to evaluate.

Evaluability

In some instances, it is not possible or appropriate to perform an evaluation. It is important to recognise these instances so that limited resources can be saved and used elsewhere.

Evaluability assessment involves assessing the extent to which an intervention can be evaluated in a reliable and credible fashion (Davies, R. (2013). Planning Evaluability Assessments: A Synthesis of the Literature with Recommendations. Report of a Study Commissioned by the Department for International Development).

Evaluability can be considered in 3 complementary ways:

  • in principle
  • in practice
  • in relation to utility

The first relates to the nature of the intervention design (for example, whether the theory of change is plausible and supported for this intervention) and focuses on whether it is possible to evaluate the intervention as described or implemented.

The second considers the availability of relevant data and the systems needed to make that data available. A variety of more or less costly methods may be needed to collect reliable evaluation data, so evaluability depends on access to that data and on the practicality and cost of collecting it.

The third aspect of evaluability is the potential usefulness of the evaluation (Davies, 2013). This is likely to involve the perspectives of relevant stakeholders and users.

Prioritising evaluations

Ogilvie and colleagues describe a process by which researchers and practitioners can prioritise which evaluations should be conducted. They suggest that whether an evaluation is worthwhile can be assessed by considering the following important questions, before making evaluation decisions.

These questions should be considered, especially if funding is limited and choices need to be made to select only some interventions for evaluation. Evaluability checklists are available, and can provide a useful and systematic approach to ensuring all relevant issues are considered.

What is the stage of development or implementation of the intervention? Is:

  • the intervention too early in its development to make evaluation meaningful
  • there already enough evidence to support the implementation of the programme

Are the results of the evaluation likely to lead to change in policy or practice? Ask:

  • who is the target population and are they in need
  • will the results have any bearing on policy questions
  • could policy decisions rely on the results
  • could the results have implications for more than one sector (for example, education, transport, and health)

How widespread or large are the effects of an intervention likely to be? Is the intervention:

  • likely to have a substantial effect on a large number of people
  • addressing a risk factor for just a small subgroup which nonetheless has important adverse or knock-on effects
  • contributing towards increasing or reducing health inequalities

Can the findings of the evaluation add to the existing body of evidence? Ask if:

  • this tests an established intervention in a novel way, in a different setting or with a new group
  • this intervention will have effects on outcomes that have not been previously studied
  • the mechanisms underpinning intervention effects are understood

Is it practical to conduct a meaningful evaluation within the time and resources available? Can:

  • data be collected and analysed at an appropriate time, or routinely available data used retrospectively
  • any effects of the intervention be separated from other changes taking place

Acknowledgements

This work was partially funded by the School for Public Health Research, the NIHR Collaboration for Leadership in Applied Health Research and Care of the South West Peninsula (PenCLAHRC) and by Public Health England. However, the views expressed are those of the authors.

Updates to this page

Published 7 August 2018
