Comparability between Taking Part Survey and the Participation Survey
Updated 9 November 2023
Applies to England
1. Comparability between the Taking Part Survey and the Participation Survey
1.1 The Taking Part Survey
The Taking Part Survey (TPS) was DCMS’ flagship survey for many years, collecting data on how adults and children engage with our sectors. The emergence of the COVID-19 pandemic prevented face-to-face fieldwork from taking place in the 2020/21 (year 16) survey year, creating an unavoidable break in the survey time series.
1.2 The Participation Survey
The Department regularly reviews whether the surveys it commissions remain a cost-effective and robust mechanism through which to develop its evidence base. To meet the need for evidence on engagement in DCMS sectors during COVID-19 recovery, it was concluded that a new adult participation survey would need to be designed and commissioned. This would allow data to be collected within a landscape of changing social contact restrictions. Findings from the review on the need for more regular and geographically granular data were fed into the design of the Participation Survey.
The Participation Survey is a continuous push-to-web survey of adults aged 16 and over in England, with a paper survey available for those who are not digitally engaged. It began running in October 2021 and is the main evidence source for DCMS and its sectors, providing statistically representative national estimates of adult engagement with DCMS sectors. The survey’s main objectives are to:
- Provide a central, reliable evidence source that can be used to analyse cultural, digital, and sporting engagement, providing a clear picture of why people do or do not engage.
- Provide data at a county level to meet user needs, including providing evidence for the levelling up agenda.
- Underpin further research on driving engagement and the value and benefits of engagement.
1.3 Comparability summary overview
There are many common themes in both surveys, but the Participation Survey has been designed to capture more on digital engagement with DCMS sectors, alongside physical engagement, which has traditionally been the main form of engagement and the main focus of the Taking Part Survey.
There are also some key differences in the design of each survey which are summarised in Table 1.
Table 1: Key differences between the Taking Part Survey and the interim Participation Survey

| Factor | Taking Part Survey | Interim Participation Survey |
| --- | --- | --- |
| Mode | Face-to-face | Push-to-web with paper-based alternative |
| Age coverage | Adults: 16+ years. Youth: 11-15 years. Children: 5-10 years | Adults: 16+ years |
| Survey length | Approx. 45 mins | Approx. 30 mins |
| Sample size | Adults: ~8,000 | Adults: ~33,000 |
| Geographical breakdowns | Adults: English region. Youth and children: national | Adults: English county level |
| Regularity of data publication | Annually | Quarterly |
| Longitudinal element | Adults | None |
In an ideal world, users would be able to compare estimates from the Taking Part Survey with those from equivalent questions in the Participation Survey. However, there have been several changes between the two surveys, namely changes in:
- mode
- questionnaire content
- sampling approach and methodology
- real-world context (such as COVID-19)

Direct comparison of the Taking Part Survey and the Participation Survey was therefore not considered feasible. Annex A outlines the comparability approaches that we explored but deemed unsuitable in this instance.
1.4 Other data sources
A literature review of surveys and administrative data was undertaken to identify, using non-DCMS surveys, the impact on engagement in DCMS sectors since the COVID-19 lockdown was imposed in April 2020. These surveys used varying methodologies and time periods, which makes comparison with the Taking Part Survey or the Participation Survey problematic and unreliable, but they can provide some contextual information. Those that collected data between April 2020 and March 2022 are linked below.
- Insights Alliance – Missing Audiences, Sept 2021 – Mar 2022
- Audience Agency – COVID-19 Cultural Participation Monitor, Nov 2021
- Creative Industries Policy & Evidence Centre – Digital Culture Consumer Tracking Study, Nov 2020
- UCL – The role of the Arts during the COVID-19 Pandemic, Aug 2021
- Visit England – Visitor Attraction Trends in England 2020, Aug 2021
- Network of European Museum Organisations – Impact of COVID-19 on museums in Europe, Jan 2021
- Visit England – COVID-19 Consumer Sentiment Tracker, Sept 2020 – Feb 202
- Clearsight – Recovery & COVID-19, Oct 2021
- Statista – Internet usage in the United Kingdom, Dec 2021
2. Annex A
Previously considered approaches to enabling comparability between the two surveys
We have spoken with a number of people - social survey contractors, methodologists, other government departments running surveys, and colleagues within DCMS - to understand the options available to DCMS and to determine the best approach to comparability between the two surveys. The approaches considered, and the reasons for not pursuing them, are outlined below.
1. Running a parallel study (i.e. running a one-off face-to-face survey using the Taking Part questionnaire and sampling approach in parallel to the new Participation Survey). This would be the most reliable approach. The drawback is that face-to-face interview surveys are very expensive to run and this would be a significant investment. It was agreed that this was not a high enough priority to seek additional funding.
2. Using longitudinal data to model the ‘missing’ Taking Part face-to-face data. We could potentially use the Taking Part web panel, a longitudinal instrument which ran throughout the pandemic and the gap between the Taking Part cross-sectional survey and the start of the interim Participation Survey. Using the web panel data, it may be possible to examine the relationship between the panel and the cross-sectional data, and then use modelling techniques to impute modelled estimates for the missing Taking Part data. From this we could compare the modelled Taking Part estimates with the interim Participation Survey data and treat the difference as a net effect between the two estimates. This would cover both the sample composition and type effects and the measurement effect.
However, the Taking Part web panel is a biased sample, skewed towards those who engage with DCMS sectors. The sample is recruited from the Taking Part cross-sectional survey, so it is not representative, and respondents have not been recruited since March 2020 (as a result of face-to-face surveys not taking place). Moreover, while it may be possible to discern sources of variation, we would not be able to differentiate how much variation could be attributed to the change in mode, the change in sample, and so on. This approach was not pursued because the inherent biases and sampling complexities are likely to mean the resulting estimates would not be robust enough to be useful.
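To illustrate the modelling step in this approach, the sketch below relates hypothetical web panel estimates to cross-sectional estimates over an overlap period and imputes the ‘missing’ cross-sectional figures with a simple regression. All figures and variable names are illustrative assumptions, not real survey data, and a real application would need to address the panel biases described above.

```python
# A minimal sketch of the imputation idea in approach 2: relate
# web panel estimates to cross-sectional estimates where both exist,
# then predict the missing cross-sectional figures. All numbers are
# hypothetical illustrations, not real survey data.
import numpy as np
import statsmodels.api as sm

# Hypothetical engagement rates (%) for periods where both sources overlap.
panel_overlap = np.array([61.0, 63.5, 62.2, 64.1, 63.0])     # web panel
crosssec_overlap = np.array([58.2, 60.1, 59.0, 61.3, 60.2])  # face-to-face

# Fit the relationship between the two series on the overlap period.
X = sm.add_constant(panel_overlap)
model = sm.OLS(crosssec_overlap, X).fit()

# Web panel estimates for the gap period with no face-to-face fieldwork.
panel_gap = np.array([55.4, 57.8, 59.9])

# Impute modelled Taking Part estimates for the gap, with 95% intervals.
pred = model.get_prediction(sm.add_constant(panel_gap))
print(pred.summary_frame(alpha=0.05))
```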
3. Use of state space models. This would essentially produce a series of time series models, forecasting one step ahead of the existing time series and then comparing the forecast with the actual data, before deciding whether to make an adjustment. Imputation would be performed using covariates from previous years, linking them through a proxy variable shared across all data sets (e.g. NS-SEC). This would reduce the total amount of error observed overall. The issues here are that we do not have any overlapping data (the Taking Part Survey finished in March 2020 and Participation Survey data collection began in October 2021); we do not have another data series that correlates well with what we are capturing and that was collected throughout the pandemic; and a large number of assumptions would be required, making it very difficult to separate real change from change caused by mode, questionnaire and pandemic influences. We did not pursue this approach because the differences between the two surveys (time sampled, sample size, lack of overlap, change in survey mode, change in survey questions) would be too great to reliably generate a forecast. It would also be resource-intensive, and we would still not be able to disaggregate the contribution of error from each variable.
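As a purely illustrative sketch of this approach, the example below fits a local level state space model (via statsmodels) to a hypothetical Taking Part series, produces a one-step-ahead forecast with a 95% interval, and compares it with a hypothetical Participation Survey estimate. It omits the covariate imputation step, and all figures are invented.

```python
# A minimal sketch of the state space idea: fit a local level model
# to the historical series, forecast one step ahead, and compare the
# forecast interval with the new survey's estimate. All figures are
# hypothetical, not real survey estimates.
import numpy as np
from statsmodels.tsa.statespace.structural import UnobservedComponents

# Hypothetical annual engagement rates (%) from the Taking Part Survey.
taking_part = np.array([74.2, 75.0, 74.6, 75.8, 75.1, 76.0, 75.4])

# Local level model: a random walk observed with noise.
model = UnobservedComponents(taking_part, level="local level")
result = model.fit(disp=False)

# One-step-ahead forecast with a 95% interval.
forecast = result.get_forecast(steps=1)
lower, upper = forecast.conf_int(alpha=0.05)[0]

participation_estimate = 71.3  # hypothetical first Participation figure
print(f"Forecast: {forecast.predicted_mean[0]:.1f} ({lower:.1f}, {upper:.1f}); "
      f"observed: {participation_estimate}")
```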
4. Forecasting techniques using Taking Part data only. We could compare the interim Participation Survey data to forecasted Taking Part data to determine the difference, and identify whether the new survey estimates fall within the confidence intervals of the forecasted Taking Part data. We could also look at other large government surveys, for example the Labour Force Survey, to identify where surveys have quantified the impact of COVID-19 (or other large-scale events, such as Brexit) and to see whether a similar difference has been observed between the Participation Survey and the forecasted Taking Part data. Forecasting alone would carry slightly higher error than state space models because of the lack of imputation and the difficulty of differentiating the sources of error[footnote 1]. We did not pursue this approach because it would generate comparable levels of error to the other methods, while giving only a very partial answer to the question we are interested in.
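A minimal sketch of this forecast-only check is given below, assuming a simple linear trend model. The survey years, rates and Participation Survey figure are hypothetical, and a real analysis would need to account for the survey redesign effects discussed above.

```python
# A minimal sketch of the forecast-only check: fit a linear trend to
# the old series, forecast the gap period, and test whether the new
# survey's estimate falls inside the 95% prediction interval.
# All figures are hypothetical.
import numpy as np
import statsmodels.api as sm

years = np.arange(2014, 2020).astype(float)             # survey years
rates = np.array([74.2, 75.0, 74.6, 75.8, 75.1, 76.0])  # engagement (%)

X = sm.add_constant(years)
fit = sm.OLS(rates, X).fit()

# Prediction interval for a hypothetical 2021 Participation Survey period.
X_new = np.column_stack([np.ones(1), np.array([2021.0])])
frame = fit.get_prediction(X_new).summary_frame(alpha=0.05)
lower = frame["obs_ci_lower"].iloc[0]
upper = frame["obs_ci_upper"].iloc[0]

participation = 71.3  # hypothetical Participation Survey estimate
inside = lower <= participation <= upper
print(f"95% prediction interval: ({lower:.1f}, {upper:.1f}); "
      f"estimate {participation} inside: {inside}")
```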
Footnotes

1. Assuming a linear regression forecast model is used, it may also be possible to break down the compound error reflected in the confidence intervals for the forecast using Principal Component Regression/Analysis, but this would carry additional considerations and caveats which would need to be identified in preliminary analysis. For instance, where we know that a forecast may have error over a certain time period, it may be possible to estimate each factor’s (e.g. the pandemic, change in mode) contribution to that error.
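As a purely illustrative sketch of the principal component regression idea in this footnote, the example below regresses a simulated forecast error on principal components of hypothetical factor indicators and maps the coefficients back to the original factors. The indicators, data and attribution step are all assumptions, not part of any agreed methodology.

```python
# A minimal sketch of principal component regression for error
# decomposition: project hypothetical factor indicators onto principal
# components, regress the forecast error on them, and map coefficients
# back to the original factors. Entirely illustrative; all data are
# simulated and the factor names are assumed.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical period-level indicators, e.g. a pandemic stringency
# proxy, a mode-change dummy and a questionnaire-change dummy.
factors = rng.normal(size=(12, 3))
forecast_error = factors @ np.array([1.5, 0.8, 0.2]) + rng.normal(0, 0.1, size=12)

Z = StandardScaler().fit_transform(factors)
pca = PCA(n_components=2).fit(Z)
scores = pca.transform(Z)

reg = LinearRegression().fit(scores, forecast_error)
# Rough per-factor contributions, via the component loadings.
contributions = pca.components_.T @ reg.coef_
print(dict(zip(["pandemic", "mode", "questionnaire"], np.round(contributions, 2))))
```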