Process evaluation: evaluation in health and wellbeing
Helping public health practitioners conduct evaluations: using process evaluations to explain how complex interventions work.
Introduction to process evaluation
Process evaluations aim to explain how complex interventions work. They are especially useful for interventions that include a number of interacting components operating in different ways, that address complex problems, or that seek to generate multiple outcomes.
Process evaluations can be independent studies or conducted simultaneously with outcome evaluations such as randomised controlled trials. They examine the processes through which an intervention generates outcomes, that is, how it works.
Process evaluations can be used to answer various questions about an intervention, including the following 3 questions:
- Can a novel intervention be successfully implemented, especially in a complex setting such as across a network of organisations or where resources are scarce?
- Are the underlying ideas or theories about how problems arise and may be alleviated accurate, or do they need to be revised in order to design more effective interventions in the future?
- Why did people participate or decline to participate in the intervention?
Process evaluations can also help explain why interventions do not work: for example, the underlying theory of change (Bartholomew, LK and others, Planning health promotion programmes: an intervention mapping approach) may be sound, but the intervention may not have been delivered as planned, that is, the delivery had poor fidelity. Process evaluations can also aid understanding of why the intervention works for some population groups, in some contexts, but not others. These are important findings which can contribute to better-designed interventions and studies in the future.
The Medical Research Council (MRC) process evaluation guidance (Moore, G and others. Process evaluation of complex interventions. UK Medical Research Council Guidance. London: MRC Population Health Science Research Network, 2014) provides a framework which describes the main aspects of an intervention that a process evaluation might investigate, such as the:
- implementation of an intervention
- mechanisms (or theory) of change of the intervention, that is, how the intervention produces change in recipients
- impact of context on how the intervention works
Process evaluations typically examine aspects related to delivery and implementation processes, such as fidelity (that is, was it delivered as planned?), dose and reach. Process evaluations are also concerned with how an intervention affects participants, organisations and communities, including their response to the intervention and its influence on determinants of outcomes (for example, did it change the identified negative attitudes, communication skills or community engagement?).
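Reach and dose are often summarised from routine attendance records. The sketch below is a minimal, hypothetical illustration in Python of how these two measures might be calculated; the session count, eligible population size and attendance figures are assumed, not taken from any real intervention.

```python
# Hypothetical sketch: summarising reach and dose from attendance records.
# The session count, eligible population and attendance data are assumed.

sessions_offered = 6          # total sessions in the programme (assumed)
eligible_population = 200     # people the intervention was intended to reach (assumed)

# sessions attended by each enrolled person (assumed example data)
attendance = {"p01": 6, "p02": 4, "p03": 0, "p04": 5, "p05": 6}

participants = [p for p, n in attendance.items() if n > 0]

# Reach: proportion of the eligible population that took part at all
reach = len(participants) / eligible_population

# Dose received: average proportion of offered sessions actually attended
dose = sum(attendance[p] for p in participants) / (len(participants) * sessions_offered)

print(f"Reach: {reach:.1%}")   # 2.0% of the eligible population in this example
print(f"Dose:  {dose:.1%}")    # 87.5% of offered sessions attended in this example
```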
Context can affect and be affected by an intervention; contextual factors may include an individual’s characteristics, family, social network, organisation (such as a school) or local community. For example, a process evaluation could investigate whether the intervention worked better for older or younger people, or in particular types of schools.
The relationships between various aspects of an intervention are also an important area of investigation for a process evaluation. For example, the ways in which an intervention is delivered may have an effect on participant or community response, and the acceptability of the intervention. However, relationships between intervention aspects are not necessarily linear and may include feedback loops. For example, low acceptability may result in changes to delivery methods which could in turn change levels of uptake of an intervention.
When process evaluations are undertaken during feasibility and pilot studies, they can establish how an intervention might be optimally designed, before conducting a definitive evaluation. For example, they can identify problems with uptake and implementation of the intervention and how these might be improved. This type of evaluation that leads directly into reformulating the intervention is sometimes called ‘formative evaluation’.
Theory and logic models
In process evaluation, the underlying theory of how the intervention works should form the basis for the evaluation. The MRC process evaluation guidance (Moore, G and others, Process evaluation of complex interventions) recommends using a theory-based approach, in which the underpinning theory of an intervention provides a structure for the process evaluation design, data collection and analysis.
The ‘programme theory’ (De Silva M, and others, Theory of change: a theory-driven approach to enhance the Medical Research Council’s framework for complex interventions) or ‘theory of change’ articulates how an intervention is understood to generate change in its target population or group, specifying cause-and-effect (see causation) pathways operating in the intervention.
Relevant psychological and social theories, and existing literature, can inform the theory of change, as can stakeholder perspectives (such as those of potential participants and professionals who may use the intervention). The theory might also describe how an intervention could be introduced in a particular context. Contextual factors will also be important in explaining any variation in changes that occur as a result of the intervention.
Theory can be developed during ‘intervention mapping’, which provides a useful framework for intervention design and evaluation (Bartholomew, LK and others, Planning health promotion programmes: an intervention mapping approach). It outlines important steps that begin with a needs assessment (identifying what, if anything, needs to be changed and for whom) and end with generating an evaluation plan. This iterative process emphasises the integration of theory and evidence for each aspect of the intervention. Accordingly, it produces a detailed programme theory for the intervention.
The MRC guidance and the intervention mapping framework recommend producing a logic model (WK Kellogg Foundation, Logic model development guide: using logic models to bring together planning, evaluation and action) to graphically represent the ‘theory of change’ of how an intervention works. A logic model can illustrate the stages of an intervention and the causal pathways theorised to occur, from the delivery of the intervention through to mechanisms of change and outcomes (Moore, G and others, Process evaluation of complex interventions).
Complex interventions can have many interacting components, and logic models representing intervention processes may also become complex. Therefore, there is a trade-off between having a simple (and useable) logic model and having a logic model which reflects the complexity of the empirical world.
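As a minimal sketch of how one pathway in a logic model might be written down and then used to structure data collection, the Python snippet below sets out a single hypothetical causal chain; the stage names and example entries are assumed, not drawn from a real intervention.

```python
# Minimal sketch: one causal pathway from a hypothetical logic model.
# Stage names and contents are assumed examples, not a real intervention.

logic_model = {
    "inputs": ["trained advisers", "session materials"],
    "activities": ["six weekly advice sessions"],
    "mechanisms_of_change": ["improved knowledge", "greater self-efficacy"],
    "short_term_outcomes": ["change in health behaviour"],
    "long_term_outcomes": ["improved health and wellbeing"],
}

# Printing the pathway in order makes the theorised causal chain explicit;
# the process evaluation can then plan data collection against each stage.
for stage, elements in logic_model.items():
    print(f"{stage}: {', '.join(elements)}")
```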
Qualitative methods can be used to explore processes in more depth, such as participant perceptions of the intervention and how the intervention and its context interact with each other. These might involve qualitative analyses (for example, thematic analyses) of interviews with participants or with those who delivered the intervention. See the ‘Methods’ section.
Process evaluations often involve collection of multiple qualitative and quantitative datasets. These provide an opportunity for mixed methods approaches to be used, where different types of data are integrated at the data collection and/or data analysis stages. For example, one type of data (for example, qualitative data from interviews) can be used to expand on findings from another type of data (for example, quantitative data on attendance rates).
Mixed methods approaches which integrate different data (Rogers, ‘Using programme theory to evaluate complicated and complex aspects of interventions’. Evaluation 2008: volume 14, pages 29 to 48) are able to produce robust and comprehensive findings about the multiple and interacting aspects, processes and causal pathways in complex interventions. Furthermore, since trials always produce quantitative outcome data, and process evaluations normally employ qualitative methods, mixed methods approaches will commonly be required in process evaluations that seek to use process data to explain outcomes.
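As a simple illustration of integrating the two types of data, the hypothetical Python sketch below groups coded interview themes by participants’ recorded attendance, so that qualitative findings can expand on the quantitative pattern; all identifiers, themes and attendance figures are assumed.

```python
# Hypothetical sketch: using qualitative themes to expand on quantitative attendance data.
# Attendance figures and coded interview themes are assumed examples.

attendance = {"p01": 6, "p02": 1, "p03": 5, "p04": 0}

# themes coded from interviews with the same participants (assumed)
themes = {
    "p01": ["sessions fitted around work", "liked group format"],
    "p02": ["clashed with childcare", "venue hard to reach"],
    "p03": ["liked group format"],
    "p04": ["did not see the relevance"],
}

# Group themes by attendance level to explore why uptake varied
for participant, sessions in sorted(attendance.items(), key=lambda x: x[1]):
    level = "low" if sessions <= 2 else "high"
    print(f"{participant} ({sessions} sessions, {level} attendance): "
          + "; ".join(themes.get(participant, [])))
```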
Design and management
Conducting a meaningful process evaluation requires careful design and study management.
Process evaluations should be tailored according to research priorities, depending on the research questions and the resources available. A logic model can be used to structure the evaluation, for example, by identifying priority processes on which to collect data.
However, for complex interventions, process evaluation activity may have to be further prioritised on the basis of:
- the main research questions or causal pathways of interest
- gaps in existing knowledge and theory
- pragmatic decisions about methodological feasibility
It is important to prioritise the main research questions for a process evaluation early on. Priorities will vary according to the intervention and the evaluation context. However, an examination of fidelity and how the intervention is actually delivered is usually an important component, because the interpretation of other findings from an evaluation (such as participant satisfaction/acceptability and the outcomes) will depend on knowing what exactly was delivered to participants (Bartholomew, LK and others, Planning health promotion programmes: an intervention mapping approach).
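One way to record these prioritisation decisions is a simple evaluation plan that links each process in the logic model to a research question, a priority and a planned data source. The Python sketch below is a hypothetical example of such a plan; the processes, questions, priorities and data sources are all assumed.

```python
# Hypothetical sketch: a prioritised process evaluation plan.
# Processes, questions, priorities and data sources are assumed examples.

evaluation_plan = [
    {"process": "delivery of advice sessions",
     "question": "Was the intervention delivered as planned (fidelity)?",
     "priority": 1,
     "data_source": "structured observation of a sample of sessions"},
    {"process": "participant response",
     "question": "Why did uptake vary between sites?",
     "priority": 2,
     "data_source": "interviews with participants and non-participants"},
    {"process": "context",
     "question": "Did the intervention work differently in different settings?",
     "priority": 3,
     "data_source": "routinely collected site-level data"},
]

# Work through the plan in priority order so that scarce resources go to the
# main research questions first.
for item in sorted(evaluation_plan, key=lambda x: x["priority"]):
    print(f"{item['priority']}. {item['process']}: {item['question']} "
          f"[{item['data_source']}]")
```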
Data collection should be planned and co-ordinated so that the research process is efficient. For example, it may be possible to collect both qualitative and quantitative data from participants during the same research visit/encounter, or make use of routinely collected data, reducing data collection costs.
However, flexibility may also be required in the research design. Iterative approaches to data collection and analysis can be useful, to pursue emerging themes. For example, if unexpected events occur, such as poor implementation or low uptake, interviews with intervention staff or participants could be conducted to investigate the reasons for this.
If process evaluations are used in conjunction with outcome evaluations, care needs to be taken to avoid Hawthorne effects, that is, when participants change their behaviour because it is being observed. Data collection can also inadvertently become part of the intervention. For example, repeated interviews during the intervention might improve participants’ outcomes by reminding them about the goals of the intervention.
Yet, it may be important to collect such data during the intervention, rather than at the end when recall will be less accurate. The evaluation team needs to balance such factors when designing a combined process and outcome evaluation study.
Since process evaluations nested within trials can be methodologically complex studies, careful consideration should be given to the process evaluation team in terms of their mix of skills, team structure and relationships to the main trial team. Individuals qualified in different methods are usually required. The outcome evaluation and process evaluation teams will need to co-ordinate with each other during the planning, data collection and analysis phases to ensure that:
- the methods complement each other
- the data collection efforts do not disrupt each other
- the data are brought together in a coherent way
Mixed methods approaches will also require expertise in bringing methods together, rather than merely conducting parallel qualitative and quantitative studies, and demand additional analysis time towards the end of the evaluation. See the ‘evaluation planning’ section.
Studying implementation
Implementation may refer to putting an intervention into practice after it has been evaluated, through a wider roll-out so that, in healthcare for example, it becomes part of routine care. Implementation can also refer to the way an intervention is delivered, such as through face-to-face contacts or, alternatively, using a web-based platform. Fidelity (was the intervention delivered as intended?), reach (did the intervention reach its target population?) and dose (did participants receive the right ‘amount’ of the intervention, for example, did they attend all the sessions provided?) are all likely to be of interest when examining implementation.
Fidelity can refer to fidelity of ‘form’ or fidelity of ‘function’ (Hawe, P and others (2004), Complex interventions: how “out of control” can a randomised controlled trial be?). Fidelity of form refers to delivering an intervention in exactly the same way each time, whereas fidelity of function means there can be flexibility in how an intervention is delivered so long as it achieves the same delivery goal each time. For example, information could be delivered to a client group in exactly the same way each time through a leaflet (fidelity of form), or information could be delivered flexibly to achieve the same aim, for example, on the internet for younger populations but through a leaflet for older participants (fidelity of function).
Studying fidelity also involves exploring whether intervention providers have added components to, or subtracted components from, the original intervention design. Such modification may be influenced by providers’ training or experiential backgrounds, or by their response to perceived client need, and can have critical positive or negative effects on intervention effectiveness.
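Fidelity of delivery is often assessed against a checklist of core components recorded for each session, which also makes any added or omitted components visible. The short Python sketch below is a hypothetical illustration of scoring such a checklist; the component names and observed session records are assumed.

```python
# Hypothetical sketch: scoring delivery fidelity against a checklist of core components.
# Component names and observed session records are assumed examples.

core_components = ["goal setting", "skills practice", "feedback", "action planning"]

# components actually observed in each delivered session (assumed)
observed_sessions = {
    "session_1": ["goal setting", "skills practice", "feedback", "action planning"],
    "session_2": ["goal setting", "feedback"],
    "session_3": ["goal setting", "skills practice", "action planning", "local quiz"],
}

for session, delivered in observed_sessions.items():
    delivered_core = [c for c in core_components if c in delivered]
    added = [c for c in delivered if c not in core_components]
    fidelity = len(delivered_core) / len(core_components)
    print(f"{session}: fidelity {fidelity:.0%}; added components: {added or 'none'}")
```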
What makes process evaluation different
‘Process evaluations’ employ a variety of methods used in the social sciences, normally including both qualitative and quantitative methods. The purpose of a process evaluation is to explain how an intervention generates outcomes or effects. In health research, process evaluations may be undertaken in conjunction with a randomised controlled trial of a complex intervention. Process evaluations, like other theory-based approaches to evaluation (such as realist evaluation and ‘theory of change’ approaches) investigate the underpinning theory of the intervention. Data collection and analyses in process evaluations are usually structured around logic models which represent this theory and which illustrate the causal pathways thought to be operating in the intervention. These causal pathways are the ‘processes’ of process evaluation.
When to do a process evaluation
Process evaluations should be conducted when you want to know not just whether an intervention works or not, but how it works. A process evaluation should help you to understand in some detail how an intervention operates to produce outcomes. This includes:
- which aspects of an intervention are important
- how different aspects of an intervention work together
- how an intervention can be implemented in a given context
A process evaluation will illuminate the mechanisms through which an intervention produces change. This is important if you hope to roll out your intervention more widely in the future. Findings from a process evaluation can help implementation and adaptation of the intervention, as necessary, to other populations and contexts. A process evaluation can also explain why an intervention failed and indicate how you might need to redesign it.
How to design it
Theorise what your intervention is, how you think it will operate and what impact it should have on the problem it is trying to solve. Then build a logic model based on this underpinning theory of the intervention, specifying what the intervention will deliver (for example, advice sessions), how you think the intended target population will respond (this might include both psychological and behavioural change), and short and long term outcomes you hope will follow from the intervention. This can help guide the data collection and analyses to be undertaken.
You may identify multiple research questions arising from the logic model and only have limited resources to answer them. If so, you will also need to prioritise research questions early on. Also note that sometimes inexperienced researchers design an evaluation around methods or around data that will be easy to collect, rather than the main research questions for the study. If you do this you may collect a lot of data but may not answer the questions you need answers to.
What theory to use
This should be decided by the evaluation team. There are several sources of theory, and it is possible to combine more than one source. Exploring, documenting and combining these sources is part of the intervention mapping process. The aim is to identify mechanisms of change relevant to the problem the intervention seeks to address; mechanisms for which there is evidence of effectiveness. You could draw on formal theory from the literature in your field, and/or consult stakeholders (including intervention staff and recipient populations) who are familiar with the type of intervention or context in which it will run.
The research team could decide to develop a novel theory of how the intervention works. Keep in mind that there may be sources of bias both within the research team and among stakeholders, who may have vested interests in particular theories. For example, researchers may privilege theories they developed themselves, and stakeholders may prefer theories that present the intervention delivery team or context in a positive light.
References
- Bartholomew, LK and others: Planning health promotion programmes: an intervention mapping approach. San Francisco: Jossey-Bass, 2016.
- De Silva M, and others: Theory of change: a theory-driven approach to enhance the Medical Research Council’s framework for complex interventions. Trials 2014: volume 15, 267. doi:10.1186/1745-6215-15-267.
- Hawe, P and others (2004): Complex interventions: how “out of control” can a randomised controlled trial be? BMJ 2004: volume 328, pages 1561 to 1563.
- Moore, G and others: Process evaluation of complex interventions. UK Medical Research Council Guidance. London: MRC Population Health Science Research Network, 2014.
- Rogers: ‘Using programme theory to evaluate complicated and complex aspects of interventions’. Evaluation 2008: volume 14, pages 29 to 48.
- WK Kellogg Foundation (2004): ‘Logic model development guide: using logic models to bring together planning, evaluation and action’. Battle Creek MI: WK Kellogg Foundation.