Official Statistics

Participation Survey 2023 to 2024 Annual Technical Report

Published 24 July 2024

Applies to England

DCMS Participation Survey 2023/24

Annual Technical Note

May 2023 to March 2024

© Verian 2024

1. Introduction

1.1 Background to the survey

In 2021, the Department for Culture, Media and Sport (DCMS) commissioned Verian (formerly Kantar Public) to design and deliver a new, nationally representative ‘push-to-web’ survey to assess adult participation in DCMS sectors across England. The survey served as a successor to the Taking Part Survey, which ran for 16 years as a continuous face-to-face survey.

The 2023/24 Participation Survey was commissioned by DCMS in partnership with Arts Council England (ACE). The scope of the survey is to deliver a nationally representative sample of adults (aged 16 years and over) and to assess adult participation in DCMS sectors across England, targeting enough households to allow reporting of the data at local authority level. The data collection model for the Participation Survey is based on ABOS (Address-Based Online Surveying), a type of ‘push-to-web’ survey method. Respondents take part either online or by completing a paper questionnaire. In 2023/24 the target respondent sample size increased to 175,000, up from 33,000 in each of the 2021/22 and 2022/23 survey years.

The fieldwork period for the annual 2023/24 survey was divided into four quarters:

  • Quarter one: Fieldwork conducted between 9th May 2023 and 28th June 2023.
  • Quarter two: Fieldwork conducted between 7th July 2023 and 2nd October 2023.
  • Quarter three: Fieldwork conducted between 6th October 2023 and 29th December 2023.
  • Quarter four: Fieldwork conducted between 12th January 2024 and 2nd April 2024.

Following Kantar Public’s divestment from Kantar Group and rebranding to Verian, any logos or mentions of Kantar Public on the online questionnaire, paper questionnaire, survey website, invitation and reminder letters were changed to Verian from January 2024.

1.2 Survey objectives

The key objectives of the 2023/24 Participation Survey were:

  • To inform and monitor government policy and programmes in DCMS, ACE and other government departments (OGDs) on adult engagement with the DCMS and digital sectors [footnote 1]. The survey will also gather information on demographics (for example, age, sex, ethnicity).
  • To assess the variation in engagement with cultural activities across DCMS sectors in England, and the differences across socio-demographic factors such as location, age, education, and income.
  • To monitor and report on progress in achieving the Outcomes set out in Let’s Create [footnote 2] – Creative People, Cultural Communities, and A Creative and Cultural Country (as set out in the Arts Council England Impact Framework).

In preparation for the 2023/24 survey, Verian (formerly Kantar Public) undertook questionnaire development work to test any new or amended questions. The 2023/24 survey launched in May 2023.

1.3 Survey design

The 2023/24 Participation Survey was conducted via an online and paper questionnaire using Address Based Online Surveying (ABOS), an affordable method of surveying the general population that still employs random sampling techniques. ABOS is also sometimes referred to as “push to web” surveying.

The basic ABOS design is simple: a stratified random sample of addresses is drawn from the Royal Mail’s postcode address file (PAF) and an invitation letter is sent to each one, containing username(s) and password(s) plus the URL of the survey website. Sampled individuals can log on using this information and complete the survey as they might any other web survey. Once the questionnaire is complete, the specific username and password cannot be used again, ensuring data confidentiality from others with access to this information.

It is usual for at least one reminder to be sent to each sampled address and it is also usual for an alternative mode (usually a paper questionnaire) to be offered to those who need it or would prefer it. It is typical for this alternative mode to be available only on request at first. However, after non-response to one or more web survey reminders, this alternative mode may be given more prominence.

Paper questionnaires ensure coverage of the offline population and are especially effective with sub-populations that respond to online surveys at lower-than-average levels. However, paper questionnaires have measurement limitations that constrain the design of the questionnaire and also add considerably to overall cost. For the Participation Survey, paper questionnaires are used in a limited and targeted way, to optimise rather than maximise response.

2. Questionnaire

2.1 Questionnaire development

Much of the survey content remained consistent with previous years to enable key trends to be tracked over time. However, a key development task was to create a new set of questions that address both DCMS and ACE objectives, ensuring a study that assesses the variation in engagement with cultural activities and helps monitor progress in achieving the ‘Let’s Create’ outcomes.

As a result, a new set of questions was developed, and several changes were made to existing response options and definitions in the 2023/24 questionnaire. The questionnaire for 2023/24 was developed collaboratively to adapt to the needs and interests of both DCMS and ACE.

Given the extent of questionnaire changes, it was important to implement a comprehensive development and testing phase. This was made up of three key stages:

  • Questionnaire review
  • Cognitive testing
  • Usability testing

Further details about the questionnaire development work can be found in the Participation Survey methodology reports [footnote 3].

2.2 2023/24 Participation Questionnaire

The online questionnaire was designed to take an average of 30 minutes to complete. A modular design was used with around half of the questionnaire made up of a core set of questions asked of the full sample. The remaining questions were split into three separate modules, randomly allocated to a subset of the sample.
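
As an illustration of how such an allocation works, the sketch below (in R, with purely hypothetical object names and sample size) randomly assigns each respondent to one of three modules in addition to the core questions. It is an illustrative sketch only, not the survey’s scripting code.

  # Illustrative sketch only: random allocation of respondents to one of three
  # questionnaire modules, alongside the core questions asked of everyone.
  set.seed(2023)
  n_respondents <- 1000                                 # hypothetical sample size
  module <- sample(c("A", "B", "C"), n_respondents, replace = TRUE)
  table(module)                                         # roughly equal module sizes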

The postal version of the questionnaire included the same set of core questions asked online, but the modular questions were omitted to avoid overly burdening respondents who complete the survey on paper, and to encourage response. Copies of the online and paper questionnaires are available online.

2.3 Questionnaire changes

Questions on the following topics of interest were added to the 2023/24 Participation Survey, as requested by ACE and/or DCMS:

  • Environment, which included questions on mode of transport taken while travelling to an arts and cultural event, distance travelled, and reason(s) for transportation choice.
  • Social prescribing, which included questions on the respondent’s experience with social prescribing, and the types of activities they were referred to.
  • Further questions on arts and culture engagement, which included questions on the types of classes and clubs respondents have taken part in, the frequency of and reason(s) for their involvement, the impact/benefits of participating, and for non-participants, the reason for not participating.
  • Pride in Place, which included questions on respondents’ sense of belonging and pride in their local area, the role culture plays in choosing where to live, and the current arts and culture scene in their local area.

The following changes to the digital questions were also made to the 2023/24 Participation Survey:

  • Smart devices: New devices listed, response method changed to collect the number of devices owned and new questions added on whether respondents considered security features when purchasing said devices.
  • Digital skills: New question added to measure confidence completing various tasks online and on different devices, response options changed to differentiate whether respondents who had completed digital or online training did so as part of work or in their own time and new question added on whether this training resulted in an academic qualification.
  • A new concept for digital identity was introduced, followed by a short list of factors one might consider when choosing a company or agency to process their digital identity. The respondent was asked to rank the importance of each.
  • A new question was added to ask how respondents react to cookie banners.
  • In the online safety and security, 5G, and comfort around use of data sections, adjustments were made to questions and response options to simplify statements, further explanations or examples were provided to improve understanding, and response options were updated to improve neutrality.

2.3.1 Quarter 2 questionnaire changes

From July 2023, the response options for the 5G awareness question (CDIG5GAW) were expanded. A new response option was added after the first response option in the list.

“5G (which stands for fifth generation) is the next step in mobile technology. It offers faster mobile internet speeds.

Which statement below best describes how much you know about 5G mobile technology?

  1. I hadn’t heard of it before now

  2. I have heard of it and already use it [the new addition]

  3. I have heard of it but am not sure what it is

  4. I understand what it is but am not interested in getting it in the near future

  5. I understand what it is and am interested in getting it in the near future.”

2.3.2 Quarter 3 questionnaire changes

From October 2023, three changes were made to the online questionnaire. A frequency question (CFREMUSONL1) was added following the question on online activities relating to museums (CMUSONL).

“How often in the last 12 months have you [option selected in CMUSONL]?

Please don’t include paid work, school, college or structured academic activities.

  1. At least once a week
  2. Less often than once a week but at least once a month
  3. Less often than once a month but at least 3 or 4 times a year
  4. Twice in the last 12 months
  5. Once in the last 12 months

999      Don’t know”

In addition, CHERVIS12 – a question on historic places visited in England – and CDIGHER12 – a question on heritage related online activities – were asked of all respondents. Previously these questions were only asked of a subsample.

2.3.3 Quarter 4 questionnaire changes

From January 2024, the FOLLOWUP and FOLLOWUP2 questions were updated to make it clearer to participants that their address, as well as other contact information, will be kept securely by Verian (formerly Kantar Public) should they consent to being recontacted in the next two years.

“This will involve us keeping a secure record of your name, address, email address and or telephone number for two years.”

3. Sampling

3.1 Sample design: addresses

The address sample design is intrinsically linked to the data collection design (see ‘Details of the data collection model’ below) and was designed to yield a respondent sample that is representative with respect to neighbourhood deprivation level, and age group within each of the 309 local authority areas and 33 ITL2 regions in England [footnote 4]. This approach limits the role of weights in the production of unbiased survey estimates, narrowing confidence intervals compared with other designs.

The design sought a minimum four-quarter respondent sample size of 500 in each local authority area and a minimum four quarter effective respondent sample size of 2,700 in each ITL2 region [footnote 5]. Although there were no specific targets per quarter, the sample selection process was designed to ensure that the respondent sample size per local authority and per ITL2 region was approximately the same per quarter.

As a first step, a stratified master sample of 726,790 addresses in England was drawn from the Postcode Address File (PAF) ‘small user’ subframe. Before sampling, the PAF was disproportionately stratified by local authority area (309 strata) and, within region, proportionately stratified by neighbourhood deprivation level (5 strata). A total of 1,468 strata were constructed in this way. Furthermore, within each of the 1,468 strata, the PAF was sorted by (i) super output area, and (ii) by postcode. This ensured that the master sample of addresses was geographically representative within each stratum.
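
To illustrate the mechanics, the R sketch below draws a systematic sample within each local authority by deprivation stratum from a frame sorted by super output area and postcode. It is a simplified sketch only: the data frame ‘paf’, its column names and the fixed within-stratum sampling fraction are hypothetical, and the actual design varied sampling probabilities by stratum, as described in the paragraphs that follow.

  # Illustrative sketch only: systematic sampling within each stratum of a sorted frame.
  # 'paf' is a hypothetical data frame with one row per address and columns
  # la (local authority), imd_quintile, soa (super output area) and postcode.
  library(dplyr)

  draw_stratum <- function(frame, n) {
    frame <- arrange(frame, soa, postcode)     # implicit geographic stratification
    k <- nrow(frame) / n                       # sampling interval
    start <- runif(1, 0, k)                    # random start point
    frame[ceiling(seq(from = start, by = k, length.out = n)), ]
  }

  # e.g. draw 2% of addresses within every local authority x deprivation stratum
  master_sample <- paf |>
    group_by(la, imd_quintile) |>
    group_modify(~ draw_stratum(.x, n = ceiling(0.02 * nrow(.x)))) |>
    ungroup()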

This master sample of addresses was then augmented by data supplier CACI. For each address in the master sample, CACI added the expected number of resident adults in each ten-year age band. Although this auxiliary data will have been imperfect, investigations by Verian (formerly Kantar Public) have shown that it is highly effective at identifying households that are mostly young or mostly old. Once this data was attached, the master sample was additionally stratified by expected household age structure based on the CACI data: (i) all aged 35 or younger (17% of the total); (ii) all aged 65 or older (21% of the total); (iii) all other addresses (62% of the total).

The conditional sampling probability in each stratum was varied to compensate for (expected) residual variation in response rate that could not be ‘designed out’, given the constraints of budget and timescale. The underlying assumptions for this procedure were derived from empirical evidence obtained from the 2021/22 and 2022/23 Participation Surveys.

Verian (formerly Kantar Public) drew a stratified random sample of 455,546 addresses from the master sample of 726,790 and systematically allocated them with equal probability to quarters 1, 2, 3 and 4 (that is, approximately 113,887 addresses per quarter). Verian (formerly Kantar Public) then systematically distributed the quarter-specific samples to three equal-sized ‘replicates’, each with approximately 37,962 addresses and the same profile. The first replicate was expected to be issued two weeks before the second replicate, itself expected to be issued two weeks before the third replicate, to ensure that data collection was spread throughout the three-month period of each quarter [footnote 6].

These replicates were further subdivided into twenty-five equal-sized ‘batches’. This process of sample subdivision into batches was intended to help manage fieldwork. The expectation was that only the first twenty batches within each replicate would be issued (that is, approximately 30,370 addresses), with the twenty-first to twenty-fifth batches kept back in reserve.

However, as fieldwork for quarter 1 was only two months long (instead of the usual three), all three replicates were issued at the same time, at the beginning of fieldwork. Only the first twenty batches of each replicate were issued (that is, as planned). Fieldwork for quarter 1 was delayed until May 2023 to allow additional time for cognitive and pilot testing, but the full sample expected over a typical three-month quarter was issued within the two-month quarter, meaning no loss of sample or data.

Sample productivity was reviewed twice each quarter, with alterations made to the sample issued for the subsequent quarter and an update applied to the third replicate of the current quarter. This review was carried out at local authority level, leading to some substantial differences between what was planned at the start of the year and what was issued in practice. In quarter four, a small number of addresses (3,167) was additionally sampled from the unused part of the master sample, augmenting the available batches for four local authorities (Brentwood, Kensington & Chelsea, Thurrock, and the Isles of Scilly).

In total, 397,265 addresses were issued: 32,828 more than planned (+9%). These were distributed as follows: 91,110 were issued in quarter one, 102,451 in quarter two, 105,764 in quarter three, and 97,940 in quarter four.

Table 1 shows the combined quarters one, two, three and four (issued) sample structure with respect to the major strata.

Table 1: Address issue by area deprivation quintile group.

Expected household age structure Most deprived 2nd 3rd 4th Least deprived
All <=35 17,066 18,105 14,910 12,505 9,475
Other 45,233 55,886 54,081 50,612 44,853
All >=65 11,967 13,032 16,937 17,277 15,326

3.2 Sample design: individuals within sampled addresses

All resident adults aged 16+ were invited to complete the survey. In this way, the Participation Survey avoided the complexity and risk of selection error associated with remote random sampling within households.

However, for practical reasons, the number of logins provided in the invitation letter was limited. The number of logins was varied between two and four, with this total adjusted in reminder letters to reflect household data provided by prior respondent(s). Addresses that CACI data predicted contained only one adult were allocated two logins; addresses predicted to contain two adults were allocated three logins; and all other addresses were allocated four logins. The mean number of logins per address was 2.7. Paper questionnaires were available to those who were offline, not confident online, or unwilling to complete the survey online.
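
A minimal sketch of this allocation rule in R is shown below; the function name is hypothetical and the rule is taken directly from the paragraph above.

  # Illustrative sketch only: number of web logins printed on the invitation letter,
  # based on the CACI-predicted number of resident adults at the address.
  n_logins <- function(predicted_adults) {
    ifelse(predicted_adults <= 1, 2,           # predicted single-adult household: 2 logins
    ifelse(predicted_adults == 2, 3, 4))       # two adults: 3 logins; otherwise: 4 logins
  }
  n_logins(c(1, 2, 3, 5))                      # returns 2 3 4 4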

3.3 Details of the data collection model

Table 2 summarises the data collection design within each principal stratum, showing the number of mailings and type of each mailing: push-to-web (W) or mailing with paper questionnaires (P). For example, ‘WWP’ means two push-to-web mailings and a third mailing with paper questionnaires included alongside the web survey login information. In general, there was a two-week gap between mailings. For the very final issued replicate (the third of quarter four), a fourth ‘W’ contact was added for addresses in the nine principal strata that had a default three-contact design (either ‘WWW’ or ‘WWP’) to make up as much of the shortfall as possible before fieldwork closed for the 23/24 survey year.

Table 2: Data collection design by principal stratum.

Expected household age structure Most deprived 2nd 3rd 4th Least deprived
All <=35 WWPW WWWW WWWW WWW WWW
Other WWPW WWW WWW WWW WWW
All >=65 WWPW WWPW WWP WWP WWP

4. Fieldwork

Fieldwork for the 2023/24 Participation Survey was conducted between May 2023 and April 2024, with samples issued on a quarterly basis. Each quarter’s sample was split into three replicates (with the exception of quarter one), the first of which was issued at the start of the quarter, the second two weeks later, and the third two weeks after that. The specific fieldwork dates for each quarter are shown below in Table 3.

Table 3: Fieldwork dates.

Quarter Replicate Fieldwork start Fieldwork end
Quarter one 1 5th May 2023 28th June 2023
Quarter two 1 5th July 2023 30th August 2023
  2 26th July 2023 20th September 2023
  3 7th August 2023 2nd October 2023
Quarter three 1 4th October 2023 29th November 2023
  2 25th October 2023 20th December 2023
  3 31st October 2023 29th December 2023
Quarter four 1 10th January 2024 4th March 2024
  2 25th January 2024 20th March 2024
  3 7th February 2024 2nd April 2024

The paper questionnaire was made available to sampled individuals in seven of the fifteen principal strata at the second reminder stage, as shown in Table 2 (section 3.3). The paper questionnaire was also available on request to all respondents who preferred to complete the survey on paper or who were unable to complete it online.

4.1 Contact procedures

All sampled addresses were sent an invitation letter in a white envelope with an On His Majesty’s Service logo. The letter contained the following information:

  • A brief description of the survey

  • The URL of survey website (used to access the online script)

  • A QR code that can be scanned to access the online survey

  • Log-in details for the required number of household members

  • An explanation that participants would receive a £10 voucher

  • Information about how to contact Verian (formerly Kantar Public) in case of any queries

  • Responses to a series of Frequently Asked Questions, printed on the reverse of the letter

All non-responding addresses were sent two reminder letters, at the end of the second and fourth weeks of fieldwork respectively. A pre-selected subset of non-responding addresses (see Table 2) was sent a third reminder letter at the end of the sixth week of fieldwork. The information contained in the reminder letters was similar to the invitation letters, with slightly modified messaging to reflect each reminder stage.

As well as the online survey, respondents were given the option to complete a paper questionnaire, which consisted of an abridged version of the online survey. Each letter informed respondents that they could request a paper questionnaire by contacting Verian (formerly Kantar Public) using the email address or freephone telephone number provided, and a cut-off date for paper questionnaire requests was also included on the letters.

In addition, some addresses received up to two paper questionnaires with the second reminder letter. This targeted approach was developed based on historical data Verian (formerly Kantar Public) has collected through other studies, which suggests that proactive provision of paper questionnaires to all addresses can actually displace online responses in some strata. Paper questionnaires were pro-actively provided to (i) sampled addresses in the most deprived quintile group, and (ii) sampled addresses where it was expected that every resident would be aged 65 or older (based on CACI data).

4.2 Confidentiality

Each of the letters assured the respondent of confidentiality, by answering the question “Is this survey confidential?” with the following:

Yes, the information that is collected will only be used for research and statistical purposes. Your contact details will be kept separate from your answers and will not be passed on to any organisation outside of Verian (formerly Kantar Public) or supplier organisations who assist in running the survey.

Data from the survey will be shared with DCMS, DSIT, and ACE for the purpose of producing and publishing statistics. The data shared won’t contain your name or contact details, and no individual or household will be identifiable from the results.

For more information about how we keep your data safe, you can access the privacy policies of the involved organisations.

4.3 Fieldwork performance

When discussing fieldwork figures in this section, response rates are referred to in two different ways:

Household response rate – This is the percentage of households contacted as part of the survey in which at least one questionnaire was completed.

Individual response rate – This is the estimated response rate amongst all adults that were eligible to complete the survey.

Overall, the target number of interviews was 175,000 post validation checks, equating to 43,750 per quarter.

In total 397,265 addresses were sampled, from which 182,318 respondents completed the survey – 159,786 via the online survey and 22,532 by returning a paper questionnaire. Following data quality checks (see Chapter 5 for details), 10,570 respondents were removed (10,513 web and 57 paper), leaving 171,748 respondents in the final dataset. The majority of participants took part online (87%), while 13% completed a paper questionnaire.

This constitutes a 43% conversion rate, a 31% household-level response rate, and an individual-level response rate of 25% [footnote 7].

The full breakdown of the fieldwork figures and response rates by quarter are available in Table 4.

Table 4: Combined online and paper fieldwork figures by quarter.

Quarter No. of sampled addresses Interviews achieved – online and paper No. households completed Household response rate Individual response rate
Quarter one 91,110 40,505 26,619 32% 26%
Quarter two 102,451 44,020 28,666 30% 25%
Quarter three 105,764 44,554 29,092 30% 24%
Quarter four 97,940 42,669 27,871 31% 25%
Total 397,265 171,748 112,248 31% 25%
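
The headline rates above can be reproduced from the totals in Table 4 and the assumptions set out in footnote 7 (92% of issued addresses assumed residential, 1.89 adults aged 16 or over per residential household). The R sketch below is illustrative only.

  # Illustrative sketch only: headline rates from the Table 4 totals and footnote 7 assumptions.
  issued     <- 397265
  responses  <- 171748
  households <- 112248

  conversion_rate <- responses / issued                    # ~0.43
  household_rr    <- households / (issued * 0.92)          # ~0.31
  individual_rr   <- responses / (issued * 0.92 * 1.89)    # ~0.25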

4.4 Incentive system

All respondents that completed the Participation Survey were given a £10 voucher as a thank you for taking part.

Online incentives

Participants completing the survey online were provided with details of how to claim their voucher at the end of the survey and were directed to the voucher website, where they could select from a range of different vouchers, including electronic vouchers sent via email and gift cards sent in the post.

Paper incentives

Respondents who returned the paper questionnaire were also provided with a £10 voucher. This voucher was sent in the post and could be used at a variety of high street stores.

4.5 Survey length

For the online survey, the median completion time was 25 minutes and 34 seconds, and the average completion time was 27 minutes and 50 seconds [footnote 8].

5. Data processing

5.1 Data management

Due to the different structures of the online and paper questionnaires, data management was handled separately for each mode. Online questionnaire data was collected via the web script and, as such, was much more easily accessible. By contrast, paper questionnaires were scanned and converted into an accessible format.

For the final outputs, both sets of interview data were converted into IBM SPSS Statistics, with the online questionnaire structure as a base. The paper questionnaire data was converted to the same structure as the online data so that data from both sources could be combined into a single SPSS file.

5.2 Partial completes

Online respondents can exit the survey at any time, and while they can return to complete the survey at a later date, some chose not to do so.

Equally, respondents completing the paper questionnaire occasionally left part of the questionnaire blank, for example if they did not wish to answer a particular question or section of the questionnaire.

Partial data can still be useful, providing respondents have answered the substantive questions in the survey. These cases are referred to as usable partial interviews.

Survey responses were checked at several stages to ensure that only usable partial interviews were included. Upon receipt of returned paper questionnaires, the booking-in team removed obviously blank paper questionnaires. Following this, during data processing, rules were set for the paper and online surveys to ensure that respondents had provided sufficient data. For the online survey, respondents had to reach a certain point in the questionnaire for their data to count as valid (just before the wellbeing questions). Paper data was judged complete if the respondent answered at least 50% of the questions and reached at least as far as Q46 in the questionnaire.
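
A minimal sketch of how such rules might be expressed is shown below; the flag names and inputs are hypothetical, while the thresholds are those described above.

  # Illustrative sketch only: flagging usable partial interviews.
  usable_partial <- function(mode, reached_wellbeing, prop_answered, reached_q46) {
    if (mode == "web")   return(isTRUE(reached_wellbeing))   # reached the point just before the wellbeing questions
    if (mode == "paper") return(prop_answered >= 0.5 && isTRUE(reached_q46))
    FALSE
  }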

5.3 Validation

Initial checks were carried out to ensure that paper questionnaire data had been correctly scanned and converted to the online questionnaire data structure. For questions common to both questionnaires, the SPSS output was compared to check for any notable differences in distribution and data setup.

Once any structural issues had been corrected, further quality checks were carried out to identify and remove any invalid interviews. The specific checks were as follows:

  1. Selecting complete interviews: Any test serials in the dataset (used by researchers prior to survey launch) were removed. Cases were also removed if the respondent did not answer the declaration statement (online: QFraud; paper: Q73).

  2. Duplicate serials check: If any individual serial had been returned in the data multiple times, responses were examined to determine whether this was due to the same person completing multiple times or due to a processing error. If they were found to be valid interviews, a new unique serial number was created, and the data was included in the data file. If the interview was deemed to be a ‘true’ duplicate, the more complete or earlier interview was retained.

  3. Duplicate emails check: If multiple interviews used the same contact email address, responses were examined to determine if they were the same person or multiple people using the same email. If the interviews were found to be from the same person, only the most recent interview was retained. In these cases, online completes were prioritised over paper completes due to the higher data quality.

  4. Interview quality checks: A set of checks on the data was undertaken to confirm that the questionnaire was completed in good faith and to a reasonable quality. Several parameters were used:

    a. Interview length (online check only).

    b. Number of people in household reported in interview(s) vs number of total interviews from household.

    c. Whether key questions have valid answers.

    d. Whether respondents have habitually selected the same response to all items in a grid question (commonly known as ‘flatlining’) where selecting the same responses would not make sense (a minimal sketch of such a check follows this list).

    e. How many multi-response questions were answered with only one option ticked.
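
As a minimal sketch of the ‘flatlining’ check in (d), the R fragment below flags cases where every answered item in a grid question received the same response; the data frame ‘grid’ and its columns are hypothetical.

  # Illustrative sketch only: flag respondents who gave the same response to every
  # item in a grid question ('flatlining'). 'grid' has one column per grid item.
  flatlined <- apply(grid, 1, function(x) {
    x <- x[!is.na(x)]                          # ignore unanswered items
    length(x) > 1 && length(unique(x)) == 1    # all answered items identical
  })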

Following the removal of invalid cases, 171,748 valid cases were left in the final dataset.

5.4 Standard paper questionnaire edits

Upon completion of the general quality checks described above, more detailed data checks were carried out to ensure that the right questions had been answered according to questionnaire routing. This is generally correct for online completes, as routing is programmed into the scripting software, but for paper completes, data edits were required.

There were two main types of data edits, both affecting the paper questionnaire data:

  1. Single-response question edits: If a paper questionnaire respondent had mistakenly answered a question that they weren’t supposed to, their response in the data was changed to “-3: Not Applicable”. If a paper questionnaire respondent had neglected to answer a question that they should have, they were assigned a response in the data of “-4: Not answered but should have (paper)”. If a paper questionnaire respondent had ticked more than one box for a single-response question, they were assigned a response in the data of “-5: Multi-selected for single response (paper)”.

  2. Multiple response question edits: If a paper questionnaire respondent had mistakenly answered a question that they weren’t supposed to, their response was set to “-3: Not Applicable”. If a paper questionnaire respondent had neglected to answer a question that they should have, they were assigned a response in the data of “-4: Not answered but should have (paper)”. Where the respondent had selected both valid answers and an exclusive code such as “None of these”, any valid codes were retained and the exclusive code response was set to “0”.

5.5. Questionnaire specific paper questionnaire edits

Other, more specific data edits were also made, as described below:

  1. Additional edits to library questions: The question CLIBRARY1 was formatted differently in the online script and paper questionnaire. In the online script it was set up as one multiple-response question, while in the paper questionnaire it consisted of two separate questions (Q21 and Q25). During data checking, it was found that many paper questionnaire respondents followed the instructions to move on from Q21 and Q25 without ticking the “No” response. To account for this, the following data edits were made:

    a. If CFRELIB12 and CPARLI12B were not answered and CNLIWHYA was answered, CLIBRARY1_001 was set to 0 if it had been left blank.

    b. If CFRELIDIG and CDIGLI12 were not answered and CNLIWHYAD was answered, CLIBRARY1_002 was set to 0 if it had been left blank.

    c. CLIBRARY1_003 and CLIBRARY1_004 were set to 0 for all paper questionnaire respondents.

  2. Additional edits to grid questions: Due to the way the paper questionnaire was set up, additional edits were needed for the following linked grid questions: CARTS1/CARTS1A, CARTS2/CARTS2A, CARTS3/CARTS3A, CARTS4/CARTS4A, ARTPART12/ARTPART12A.

Figure 1 shows an example of a section in the paper questionnaire asking about attendance at arts events.

Figure 1: Example of the CARTS1 and CARTS1A section in the paper questionnaire.

Marking the option “Not in the last 12 months” on the paper questionnaire was equivalent to the code “0: Have not done this” at CARTS1 in the online script. As such, leaving this option blank in the questionnaire would result in CARTS1 being given a default value of “1” in the final dataset. In cases where a paper questionnaire respondent had neglected to select any of the options in a given row, CARTS1 was recoded from “1” to “0”.

If the paper questionnaire respondent did not tick any of the boxes on the page, they were recoded to “-4: Not answered but should have (paper)”.
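
A minimal sketch of these two CARTS1 edits is shown below; the data frame ‘paper’ and the column sets ‘freq_cols’ (the tick-boxes for one activity row) and ‘page_cols’ (all tick-boxes on the page) are hypothetical stand-ins for the scanned paper data.

  # Illustrative sketch only: CARTS1 edits for scanned paper returns.
  row_blank  <- rowSums(!is.na(paper[, freq_cols])) == 0   # nothing ticked in this activity row
  page_blank <- rowSums(!is.na(paper[, page_cols])) == 0   # nothing ticked on the whole page

  paper$CARTS1[row_blank]  <- 0     # treat as "0: Have not done this"
  paper$CARTS1[page_blank] <- -4    # "-4: Not answered but should have (paper)"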

5.6 Coding

Post-interview coding was undertaken by members of the Verian (formerly Kantar Public) coding department. The coding department coded verbatim responses recorded for ‘other specify’ questions.

For example, if a respondent selected “Other” at CARTS1 and wrote text indicating that they had been to some type of live music event, in the data they would be back-coded as having attended “a live music event” at CARTS1_006.

For the sets CARTS1/CARTS1A/CARTS1B, CARTS2/CARTS2A/CARTS2B and CHERVIS12/CFREHER12/CVOLHER, data edits were made to move responses coded to “Other” to the correct response code, if the answer could be back-coded to an existing response code.

5.7 Data outputs

Once the checks were complete, a final SPSS data file was created that only contained valid interviews and edited data. Five datasets were made available:

  • Quarter one data
  • Quarter two data
  • Quarter three data
  • Quarter four data
  • A combined annual dataset

A set of Microsoft Excel data tables containing headline measures was produced alongside each dataset. Due to the changes to the questionnaire structure, the tables have also been updated accordingly. Notably, the measures for “Engaged with heritage physically or digitally” and “Engaged with heritage physically and digitally” were not derived in quarters 1 and 2 [footnote 9].

The data tables also display confidence intervals. Confidence intervals should be considered when analysing the Participation Survey data set, especially when conducting sub-group analysis. A figure with a wide confidence interval may not be as robust as one with a narrow confidence interval. Confidence intervals vary for each measure and each demographic breakdown, will vary from year to year, and should be calculated using a statistical package which takes account of design effects.

5.8 Standard errors

The standard error is useful as a means to calculate confidence intervals.

Survey results are subject to various sources of error that can be divided into two types: systematic and random error.

Systematic error

Systematic error or bias covers those sources of error that will not average to zero over repeats of the survey. Bias may occur, for example, if a part of the population is excluded from the sampling frame or because respondents to the survey are different from non-respondents with respect to the survey variables. It may also occur if the instrument used to measure a population characteristic is imperfect.  Substantial efforts have been made to avoid such systematic errors. For example, the sample has been drawn at random from a comprehensive frame, two modes and multiple reminders have been used to encourage response, and all elements of the questionnaire were thoroughly tested before being used.

Random error

Random error is always present to some extent in survey measurement. If a survey is repeated multiple times minor differences will be present each time due to chance. Over multiple repeats of the same survey these errors will average to zero. The most important component of random error is sampling error, which is the error that arises because the estimate is based on a random sample rather than a full census of the population. The results obtained for a single sample may by chance vary from the true values for the population, but the error would be expected to average to zero over a large number of samples. The amount of between-sample variation depends on both the size of the sample and the sample design. The impact of this random variation is reflected in the confidence intervals presented in the data tables for headline measures.

Random error may also follow from other sources such as variations in respondents’ interpretation of the questions, or variations in the way different interviewers ask questions.

Standard errors for complex sample designs

The Participation Survey employs a systematic sample design, and the data is both clustered by address and weighted to compensate for non-response bias. These features will impact upon the standard errors for each survey estimate in a unique way. Generally speaking, systematic sampling will reduce standard errors while data clustering and weighting will increase them. If the complex sample design is ignored, the standard errors will be wrong and usually too narrow.

The confidence intervals published in the annual data tables have been estimated using the svyciprop function of the R survey library, using the “logit” method.
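
For illustration, the sketch below shows a call of this kind using the survey package in R. The design object is simplified and the outcome variable ‘engaged_arts’ and data frame ‘annual’ are hypothetical; ‘Y3SampleSizeWeight’ is the annual weight described in section 6.

  # Illustrative sketch only: weighted proportion with a logit confidence interval.
  library(survey)

  # ids = ~1 ignores the address clustering for brevity
  des <- svydesign(ids = ~1, weights = ~Y3SampleSizeWeight, data = annual)
  svyciprop(~I(engaged_arts == 1), des, method = "logit", level = 0.95)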

Data considerations

Confidence intervals are important to consider when it comes to analysing the Participation Survey data, especially when drawing out inferences from the data.

Confidence intervals vary for each measure and each demographic breakdown and will vary from year to year. The 2023/24 Participation Survey collects over 170,000 responses, so confidence intervals are generally very narrow. While this reflects a strength of the data, when highlighting differences users may wish to implement a sifting rule to limit what is reported on.

5.9 Missing data

Due to questionnaire changes (section 2.3), some data are missing for certain quarters, which has impacted corresponding derived variables. The affected variables and derived variables are:

  • CDIG5GAW2 – In quarter 2, a new response option, “I have heard of it and already use it” (option 2), was added; this option therefore has no data for quarter 1.

  • CHERVISDIGOR_NET & CHERVISDIGAND_NET – The derived variables that reported on how respondents engaged with heritage sites physically and/or digitally were not produced for quarters 1 and 2. This was due to the CHERVIS12 and CDIGHER12 questions being asked of different subsets of respondents in quarters 1 and 2.

  • CFREMUSONL1 & CFREMUSONL_DV – A new question on how frequently respondents engaged in museum activities was added in quarter 3; hence, reporting on these variables was only possible for the last two quarters of the survey.

6. Weighting

Each quarter, a three-step weighting process was used to compensate for differences in both sampling probability and response probability:

  1. An address design weight was created equal to one divided by the sampling probability; this also served as the individual-level design weight because all resident adults could respond.

  2. The expected number of responses per address was modelled as a function of data available at the neighbourhood and address levels. The step two weight was equal to one divided by the predicted number of responses.

  3. The product of the first two steps was used as the input for the final step to calibrate the sample. The responding sample was calibrated to the latest available Labour Force Survey (LFS) [footnote 10] with respect to (i) gender by age, (ii) educational level by age, (iii) ethnic group, (iv) housing tenure, (v) ITL2 region, (vi) employment status by age, (vii) household size, and (viii) internet use by age.

The sum of these ‘grossing’ weights equals the population of England aged 16+. An additional standardised weight was also produced, identical except that it was scaled so the weights sum to the respondent sample size.
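
A highly simplified sketch of the overall shape of this weighting process is given below, using the R survey package. The data frame ‘resp’, the step-two prediction, the single raking margin and the population totals are all hypothetical; the real weighting used the fuller set of calibration margins listed above.

  # Illustrative sketch only: the three-step weighting in outline.
  library(survey)

  resp$design_wt <- 1 / resp$sampling_prob                 # step 1: address design weight
  resp$pre_wt    <- resp$design_wt / resp$pred_responses   # step 2: divide by predicted responses per address

  des <- svydesign(ids = ~address, weights = ~pre_wt, data = resp)

  # step 3: calibrate (rake) to external population margins, e.g. sex by age group
  pop_sexage <- data.frame(sexage = c("M16-34", "M35-64", "M65+", "F16-34", "F35-64", "F65+"),
                           Freq   = c(6.0e6, 10.5e6, 4.8e6, 5.8e6, 10.7e6, 5.6e6))  # hypothetical totals
  des_cal <- rake(des, sample.margins = list(~sexage), population.margins = list(pop_sexage))

  resp$grossing_wt <- weights(des_cal)   # grossing weights, summing to the 16+ population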

Equivalent weights were also produced for the (majority) subset of respondents who completed the survey by web. This weight was needed because a few items were included in the web questionnaire but not the paper questionnaire.

For the annual dataset (quarters 1, 2, 3 and 4), the ‘grossing’ weights were re-scaled and new standardised weights produced to ensure that each quarter would contribute equally to estimates based on the annual dataset.

After this, the whole annual dataset was re-calibrated using the average of the population totals used for calibrating each quarterly dataset. In addition (as part of the same process), new population totals were included in the calibration matrix: for each 2023 local authority [footnote 11], the (adjusted) mid-2022 population estimates for six groups: men aged 16-34, men aged 35-64, men aged 65+, women aged 16-34, women aged 35-64, women aged 65+. The published mid-2022 population estimates were adjusted very slightly to ensure no conflict with the national sex/age population totals – based on the Labour Force Survey – that were also included in the calibration matrix.

The final weight variables in the quarters one, two, three and four datasets are:

  • ‘Finalweight’ – to be used when analysing data available from both the web and paper questionnaires.

  • ‘Finalweightweb’ – to be used when analysing data available only from the web questionnaire.

The final weight variables in the annual dataset are:

  • ‘Y3SampleSizeWeight’ – to be used when analysing data available from both the web and paper questionnaires.

  • ‘Y3SampleSizeWeight_WebOnly’ – to be used when analysing data available only from the web questionnaire.

It should be noted that the weighting only corrects for observed bias (for the set of variables included in the weighting matrix) and there is a risk of unobserved bias. Furthermore, the raking algorithm used for the weighting only ensures that the sample margins match the population margins. There is no guarantee that the weights will correct for bias in the relationships between the variables.

7. Appendix

7.1 Invitation letter

7.2 Reminder letter 1

7.2.1 Partial response

7.2.2 No response

7.3 Reminder letter 2

7.3.1 Partial response with paper questionnaires included

7.3.2 Partial response with no paper questionnaires included

7.3.3 No response with paper questionnaires included

7.3.4 No response with no paper questionnaires included

7.4 Reminder letter 3

7.4.1 Partial response

7.4.2 No response

7.5 Ad hoc paper questionnaire request letter

7.6 Postal incentive letter

  1. In February 2023, there was a Machinery of Government (MoG) change and responsibility for digital policy now sits within the Department for Science, Innovation and Technology (DSIT). This MoG change did not affect the contents of the Participation Survey for 2023/24—digital questions are still part of the survey. 

  2. Let’s Create, a strategic vision by ACE, sets out that by 2030 they want England to be a country in which the creativity of each of us is valued and given the chance to flourish and where everyone has access to a remarkable range of high-quality cultural experiences. They invest public money from the government and The National Lottery to help support the sector and to deliver this vision. 

  3. https://www.gov.uk/government/publications/participation-survey-methodology 

  4. International Territorial Level (ITL) is a geocode standard for referencing the subdivisions of the United Kingdom for statistical purposes, used by the Office for National Statistics (ONS). Since 1 January 2021, the ONS has encouraged the use of ITL as a replacement to Nomenclature of Territorial Units for Statistics (NUTS), with lookups between NUTS and ITL maintained and published until 2023. 

  5. The effective sample size represents the statistical value of the sample after applying weights to compensate for the variation in local authority area average sampling probabilities within each ITL2 region. 

  6. In the event, the interval between first and second replicates was three weeks and between second and third replicates, the interval was one and a half weeks. 

  7. Response rates (RR) were calculated via the standard ABOS method. An estimated 8% of ‘small user’ PAF addresses in England are assumed to be non-residential (derived from interviewer administered surveys). The average number of adults aged 16 or over per residential household, based on the Labour Force Survey, is 1.89. Thus, the response rate formulae are: Household RR = number of responding households / (number of issued addresses × 0.92); Individual RR = number of responses / (number of issued addresses × 0.92 × 1.89). The conversion rate is the ratio of the number of responses to the number of issued addresses.

  8. Interview lengths under 2 minutes are removed, and lengths are capped at the 97th percentile. Interviews under 10 minutes are flagged in the system for the research team to evaluate; if they are also flagged by other validation checks, those interviews are removed.

  9. Due to an oversight when allocating questions to different split sample modules, the physical heritage questions were asked to one subset of respondents, whilst the digital heritage questions were asked to a different subset of respondents. This means we cannot produce a figure for total heritage engagement (physical or digital) or a figure for engaging in both physically and digitally in quarters 1 and 2. However, this has been rectified for quarters 3 and 4. 

  10. January-March 2023 for quarter one, April-June 2023 for quarters two and three, and October-December 2023 for quarter four. 

  11. At the time of sampling, there were 309 local authorities in England but by the end of fieldwork, some had been combined together to form new, larger local authorities. There were only 296 ‘2023 local authorities’ compared to 309 the year before.