Official Statistics

Community Life Survey January to March 2024: Technical report

Published 4 December 2024

Applies to England

1. Introduction

1.1 Background to the survey

The Community Life Survey has been conducted by Verian (formerly Kantar Public) [footnote 1] on behalf of the Department for Culture, Media and Sport (DCMS) since 2012. The Community Life Survey is a nationally representative annual survey of adults (16+) in England that aims to track the latest trends and developments across areas that are key to encouraging social action and empowering communities. More information, including historical data and reports, can be found on GOV.UK [footnote 2].

Verian has been commissioned to deliver the Community Life Survey for 2023/24 and 2024/25. These survey years have been commissioned by DCMS in partnership with the Ministry of Housing, Communities and Local Government (MHCLG) [footnote 3], and employ a boosted sample approach (n=175,000) to enable the production of reliable statistics at lower-tier local authority level and an expanded questionnaire incorporating new themes (for example, perceptions of community in your local area). This aims to inform cross-government work on these issues, including MHCLG’s evaluation of the UK Shared Prosperity Fund. More information and supporting documents can be found on GOV.UK [footnote 4], with findings to be published in due course.

The scope of the survey is to deliver a nationally representative sample of adults (aged 16 years and over) in England. The data collection model for the Community Life Survey is based on Address-Based Online Surveying (ABOS), a type of ‘push-to-web’ survey method. Respondents take part either online or by completing a paper questionnaire. 

This technical report relates to the Quarter 2 fieldwork, conducted between 20 January and 8 April 2024.

1.2 Note on survey timings and seasonality effect

In 2023/24 the sample consisted of approximately 175,000 interviews across two quarters of fieldwork (October to December 2023, and January to March 2024). Condensing the survey into two quarters was necessary due to delays in the commissioning of the survey, and the overarching needs of the Community Life Survey and connected research. There is precedent for this approach, with previous waves of both the Community Life Survey and the Participation Survey (DCMS’s two social surveys) having been run across fewer than four quarters. In addition, questions in these surveys ask the respondent for their recall over “the last 12 months”, which helps to mitigate any seasonality effect.

For completeness, Verian conducted an analysis of the effect of conducting the Community Life Survey across fewer than four quarters. The analysis compared the results of 22 key variables in the survey across the pre-pandemic time periods (2013 to 2020), specifically focusing on the differences between the results from surveys in spring/summer and those in autumn/winter (noting that the 2023/24 survey is autumn/winter fieldwork only). Please see Figure 1.1 for the full list of variables analysed.

Figure 1.1 Key variables included in seasonality effect analysis

Variable name | Description
FrndSat1 | Agree-disagree scale - people to help if needed
FrndSat2 | Agree-disagree scale - people to socialise with
Counton1 | People to count on if needed
SBeNeigh | Strength of belonging to immediate neighbourhood
SChatN | Interaction with neighbours
SPull | Agree-disagree scale - people pull together to improve neighbourhood
STrust | Trust in people in neighbourhood
Slocsat | Satisfied-dissatisfied scale - local area as place to live
STogeth | Agree-disagree scale - people from different backgrounds in local area get on
BetWors2 | Has area improved or got worse in last two years?
FUnHrs2 | Hours spent helping groups, clubs & organisations in last 4 weeks
IHlpHrs2 | Last 12 months - helping non-relatives with tasks - hours spent in last 4 weeks
Givech3 | Yes - given to charity in last 4 weeks
GivAmtB2 | Last 4 weeks - amount given to charity
WellB1 | Satisfaction with life nowadays
WellB4 | Feel things in life are worthwhile
WellB2 | Level of happiness
WellB3 | Level of anxiety
Lon1 | Frequency of lacking companionship
Lon2 | Frequency of exclusion
Lon3 | Frequency of isolation
LonOft | Frequency of loneliness

The analysis highlighted three variables with a statistically significant seasonality effect: one with a sizeable difference between spring/summer and autumn/winter, and two with small differences.

Using a net of ‘on most days’ and ‘once or twice a week’, the frequency with which people chat to their neighbours more than just to say hello (SChatN) was 53% in autumn/winter compared with 58% in spring/summer. This difference was consistent across the survey years and is worth considering when comparing the results of 2023/24 against previous years.

The two variables with smaller but still significant differences were the proportion giving to charity in the last four weeks (Givech3), with a higher proportion donating in autumn/winter (80%) than in spring/summer (79%), and level of anxiety (WellB3), with more people feeling anxious in autumn/winter (20% scored 7 to 10 out of 10 for level of anxiety, where 10 is ‘completely anxious’, compared with 18% in spring/summer).

1.3 Survey objectives

  • To provide robust, nationally representative data on behaviours and attitudes within communities that can be used to inform and direct policy and action in these areas.

  • To provide a key evidence source for policy makers in government, public bodies, voluntary, community and social enterprise (VCSE) sector organisations and other external stakeholders.

  • To underpin further research and debate on building stronger communities.

1.4 Survey design

The basic ABOS design used in previous years of the survey was unchanged for 2023/24: a stratified random sample of addresses was drawn from the Royal Mail’s postcode address file (PAF) and an invitation letter was sent to each one, containing username(s) and password(s) plus the URL of the survey website. Sampled individuals were able to log on using this information and complete the survey as they might any other web survey. Once the questionnaire was complete, the specific username and password could not be used again, protecting respondents’ data from others with access to the login details.

The survey design included an alternative mode of completion in the form of a paper questionnaire. The invitation letter offered this mode on request, and up to two copies were included in the first or second reminder letter for a proportion of the sampled addresses. More details on this can be found in the Contact Procedures section.

Paper questionnaires ensure coverage of the offline population and are especially effective with sub-populations that respond to online surveys at lower-than-average levels. However, there are limitations in the use of paper questionnaires, such as:

  • the physical space available on paper for questions

  • the level of complexity that can be used in routing from one question or section to another

  • the length of time a person is willing to spend completing a paper questionnaire

  • the cost of administering a paper questionnaire compared to an online one 

  • the difficulty of incorporating a modular system of questions within a paper questionnaire design

For these reasons, the Community Life Survey used paper questionnaires in a limited and targeted way, to optimise rather than maximise response. More details on the differences between the online and paper questionnaires can be found in the Questionnaire section below.

2. Sampling 

2.1 Sample design: addresses

The address sample design was intrinsically linked to the data collection design (see ‘Details of the data collection model’ below) and was designed to yield a respondent sample that was as representative as possible of the adult population within each of the 309 lower tier or unitary local authorities in England.

The design sought a minimum two-quarter respondent sample size of 500 within each local authority and 2,720 within each ITL2 region [footnote 5]. The actual targets varied between local authorities (from 500 to 2,675) and between ITL2 regions (from 2,720 to 12,090). This variation maximised the statistical efficiency of the national sample while also accommodating the local and regional sample size requirements. Although there were no specific targets per fieldwork quarter, the sample selection process was designed to ensure that the respondent sample size per local authority and ITL2 region was approximately the same per quarter.

As a first step, a stratified master sample of just over 906,000 addresses in England was drawn from the PAF ‘small user’ subframe. Before sampling, the PAF was disproportionately stratified by local authority (309 strata) and, within local authority, the PAF was sorted by (i) neighbourhood deprivation level (5 groups), (ii) super output area, and finally (iii) by postcode. This ensured that the master sample of addresses was geodemographically representative within each local authority.
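The following sketch (in Python, for illustration only) shows the general shape of such a stratified draw with systematic selection from a sorted frame. The field names and the fixed mid-interval start point are assumptions for illustration, not the actual PAF variables or selection rule used by Verian.

```python
# A minimal sketch of a stratified draw with systematic selection,
# assuming illustrative field names ("la", "imd_group", "soa", "postcode").

def systematic_sample(sorted_stratum, n_required):
    """Take records at a fixed interval from a sorted list of addresses."""
    interval = len(sorted_stratum) / n_required
    return [sorted_stratum[int((i + 0.5) * interval)] for i in range(n_required)]

def draw_master_sample(paf, targets):
    """paf: list of address dicts; targets: addresses required per local authority."""
    sample = []
    for la, n_required in targets.items():
        stratum = [a for a in paf if a["la"] == la]
        # Sorting by deprivation group, super output area and postcode means
        # a fixed-interval draw spreads the sample geodemographically
        # within each local authority.
        stratum.sort(key=lambda a: (a["imd_group"], a["soa"], a["postcode"]))
        sample.extend(systematic_sample(stratum, n_required))
    return sample
```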

This master sample of addresses was then augmented by data supplier CACI. For each address in the master sample, CACI added the expected number of resident adults in each ten-year age band. Although this auxiliary data will have been imperfect, Verian’s investigations have shown that it is highly effective at identifying households that contain people aged 65 or older. Once this data was attached, the master sample was additionally coded with expected household age structure based on the CACI data: (i) all aged under 65; (ii) at least one aged 65 or older.

Verian drew a stratified random sample of 646,926 addresses from the master sample of 906,223 and systematically allocated them with equal probability to quarters 1 and 2 (323,463 addresses per quarter). 

The sampling probability of each address in the master sample was determined by the expected number of completed questionnaires from that address [footnote 6] given the selected data collection design: where this was lower than average, the sampling probability was higher than average, and vice versa. By doing this, Verian compensated for any (expected) variation in response rate that could not be fully ‘designed out’, given the constraints of budget and timescale. The underlying response assumptions were derived from empirical evidence obtained from the 2020 to 2021 and 2021 to 2022 Community Life Surveys and the 2022 to 2023 Participation Survey (a survey with a similar design also commissioned by DCMS and carried out by Verian).
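As an illustration of this inverse-yield logic, the sketch below scales each address’s selection probability by the ratio of the average expected yield to its own expected yield. The expected_yield() function stands in for the response model described in footnote 6; the exact calculation Verian used is not documented here.

```python
# A schematic sketch: lower expected yield -> higher sampling probability.
# base_rate is the overall sampling fraction for the master sample.

def selection_probabilities(addresses, expected_yield, base_rate):
    mean_yield = sum(expected_yield(a) for a in addresses) / len(addresses)
    probs = {}
    for a in addresses:
        # Over-sample addresses expected to yield fewer completes, so that
        # the expected number of completed questionnaires is equalised.
        p = base_rate * mean_yield / expected_yield(a)
        probs[a["id"]] = min(p, 1.0)  # probabilities are capped at 1
    return probs
```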

After allocating the sample of 646,926 addresses to quarters 1 and 2, Verian then systematically distributed the quarter-specific samples to three equal-sized ‘replicates’, each with the same profile. The replicates were expected to be issued several weeks apart, to ensure that data collection was spread throughout the three-month period allocated to each quarter. 

These replicates were further subdivided into twenty-five equal-sized ‘batches’, each comprising a little over 4,300 addresses. This process of sample subdivision into small batches was intended to help manage fieldwork. The expectation was that only the first twenty batches within each replicate would be issued (that is, approximately 86,250 addresses), with the last five batches kept back in reserve. 

Verian’s plan was to review fieldwork outcomes at a local authority level (i) before the third replicate in quarter 1 was issued, (ii) before the first replicate in quarter 2 was issued, and finally (iii) before the third replicate in quarter 2 was issued. At each review point, Verian intended to recalculate the number of small batches to issue per local authority per subsequent replicate. This review process was designed to adjust the sample accounting for the latest response rates at local authority level, and to maximise the probability of achieving each of the local authority targets.

As a result of the delay in the commencement of fieldwork for quarter 1 until late October, review (i) was not carried out, but reviews (ii) and (iii) were carried out as planned. The number of batches issued per replicate per local authority in quarter 2 varied from 10 to 25, averaging 18 compared to the planned 20. An additional set of 579 addresses was added to the third replicate of quarter 2. These addresses were drawn from the pool of master sample addresses remaining after the initial sample of 646,926 had been drawn (essentially reserve sample). These extra addresses were drawn in four local authorities where quarter 1 response rates had been particularly low against expectations: Kensington & Chelsea, Knowsley, Westminster, and Brentwood. In total, 206,914 addresses were issued for quarter 2.

2.2 Sample design: individuals within sampled addresses

All resident adults aged 16 or over were invited to complete the survey. In this way, the Community Life Survey avoided the complexity and risk of selection error associated with remote random sampling within households. 

However, for practical reasons, the number of logins provided in the invitation letter was limited. The number of logins varied between two and four, with this total adjusted in reminder letters to reflect household data provided by prior respondent(s). Addresses that CACI data predicted contained only one adult were allocated two logins; addresses predicted to contain two adults were allocated three logins; and other addresses were allocated four logins. The mean number of logins per address was 2.8. Paper questionnaires were available to those who were offline, not confident online, or unwilling to complete the survey online.
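The allocation rule described above can be restated as a simple function of the CACI-predicted number of adults; the sketch below does so in Python for clarity.

```python
# The login-allocation rule stated above, as a function of the number of
# adults the CACI data predicts at an address.

def logins_for_address(predicted_adults: int) -> int:
    if predicted_adults <= 1:
        return 2   # predicted single-adult households get two logins
    if predicted_adults == 2:
        return 3   # predicted two-adult households get three logins
    return 4       # all other households get four logins
```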

2.3 Details of the data collection model

Four different data collection designs were used for the Community Life Survey in 2023 to 2024. Each had a code reference that showed the number of mailings and type of each mailing: push-to-web (W) or mailing with paper questionnaires (P). For example, ‘WWP’ meant two push-to-web mailings and a third mailing with up to two paper questionnaires included alongside the web survey login information. In general, there was a two-week gap between mailings. For quarter 2, the allocation into the four data collection designs was as follows (post sample reviews):

Data collection design | Allocation (%) | Allocation (n)
Quarter 2 total | 100% | 206,914
WW | 16.4% | 33,996
WP | 13.1% | 27,121
WWW | 67.2% | 139,123
WWP | 3.2% | 6,674

Only addresses coded by CACI as containing somebody aged 65 or over could be allocated to the WP or WWP designs (meaning to receive a mailing with paper questionnaires included). Each of these ‘older household’ addresses had a 75% probability of being allocated to one of these designs. This targeted approach was based on historical data Verian has collected through other studies, which suggests that provision of paper questionnaires to all addresses simply displaces online responses in some strata. 

Otherwise, addresses were allocated to whichever data collection design was expected to yield a mean of at least 0.35 completed questionnaires: if the expected yield under a two-mailing design was below 0.35, the address was allocated to a three-mailing design instead. The regression model used to make this choice incorporated response rates from earlier waves of the Community Life Survey, as well as various measures from the census small area data, CACI data, and the index of multiple deprivation.
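Taken together, the two rules above can be sketched as follows. The expected_yield() function stands in for the regression model mentioned in the text, and applying the 0.35 threshold within the paper arm (WP versus WWP) is an assumption about how the rules interact, not a documented detail.

```python
import random

# An illustrative combination of the two allocation rules described above.

def allocate_design(addr, expected_yield, rng=random):
    if addr["older_household"] and rng.random() < 0.75:
        # Addresses coded by CACI as containing somebody aged 65+ have a
        # 75% chance of a design with paper questionnaires included.
        return "WP" if expected_yield(addr, "WP") >= 0.35 else "WWP"
    # Otherwise use a push-to-web design, adding a third mailing where a
    # two-mailing design is expected to yield fewer than 0.35 completes.
    return "WW" if expected_yield(addr, "WW") >= 0.35 else "WWW"
```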

3. Questionnaire

3.1 Questionnaire development

The online questionnaire was designed to take an average of 30 minutes to complete. A modular design was used with approximately three-quarters of the questionnaire made up of a core set of questions asked of the full sample, and the remaining questions split into three separate modules randomly allocated to a subset of the sample. Applying a modular design to the survey created space for the expanded questionnaire.

The postal version of the questionnaire was designed, as far as possible, to be equivalent to the online version. However, there were some limitations to this, namely:

  • Space – how many questions could reasonably fit into a paper questionnaire within printing limits.

  • Time – the need to avoid overly burdening respondents and to keep completion time within a limit that encourages response.

  • Budget constraints – it was not possible to produce multiple versions of the paper questionnaire, so the modular content was removed.

  • Complexity – the survey already contained design variants related to contact methods and volumes, online and paper versions, and question modules (online only). Introducing different versions of the paper questionnaire was felt to be too challenging.

The paper questionnaire included all questions published as part of the annual statistical release, questions that were of critical importance to policy clients, and demographic questions used in analysis or weighting. The question wording used in both the online and paper versions was as closely matched as possible, and in total the paper questionnaire covered around 50% of the material in the online questionnaire. 

Copies of both the online and paper questionnaires can be found here [DCMS to insert links].

3.2 Questionnaire changes

Substantial changes were made to the questionnaire for the 2023/24 Community Life Survey in order to incorporate the new content whilst remaining within the 30-minute target length of the online survey. In addition, there were some amendments to existing Community Life Survey content, as required. DCMS and MHCLG consulted with internal colleagues and stakeholders to agree the additions and amendments required. A summary of the changes can be found in the quarter 1 technical report [footnote 7]. Full details of the content of the 2023/24 online and paper questionnaires, including additions, amendments and removals, can be found in Appendix A of the quarter 1 technical report [footnote 8].

Whilst there was no time between quarters of fieldwork to conduct a full review of the questionnaire and make wholesale changes, a small number of questions needed adjustment for quarter 2. These are detailed below (covering both online and paper versions of the questionnaire):

  • The question WORKPLACE, which asked about work locations for the respondent’s main job, was mistakenly asked as a single-choice question in quarter 1. For quarter 2, this was replaced with WORKPLACEmc, a multiple-choice question. The question wording and answer codes were unchanged for the online questionnaire. For the paper questionnaire equivalent, Q80, the same change was made. 

  • For the follow-up question(s), WORKLOC in the online questionnaire and Q81 to 83 in the paper (proximity to work location), an order of prioritisation was created. If the respondent worked at one or more fixed workplaces, they would answer for the one used most often; if they worked on the move or elsewhere, they would answer for how far away they usually travelled; if they worked from home, they would answer for the base location of the company they worked for.

  • The questions asking for permission to retain respondents’ details for follow-up research by Verian or others (FOLLOWUP & FOLLOWUP2 in the online questionnaire and Q96 to 97 in the paper) did not include mention of ‘postal address’ in Quarter 1. This was added to both questions for Quarter 2.

Online questionnaire for 2023/24 quarter 1

Paper questionnaire for 2023/24 quarter 1

3.3 Cognitive testing

Given the extent of questionnaire changes for the 2023/24 Community Life Survey, and mindful of the constraints of the project timetable, Verian undertook two stages of cognitive testing in July and August 2023.

A full report of the cognitive testing will be published in due course.

4. Fieldwork

4.1 Contact procedures

All selected addresses were sent an initial invitation letter containing the following information: 

  • a brief description of the survey

  • the URL of the survey website (used to access the online script)

  • a QR code that could be scanned to access the online survey

  • login details for the required number of household members

  • an explanation that participants would receive a £10 shopping voucher for completing the survey

  • information about how to contact Verian in case of any queries or to request a paper questionnaire

  • frequently asked questions, and responses to these (including how to access the survey privacy notice)      

All partially or non-responding households were sent one reminder letter at the end of the second week of fieldwork. A further targeted second reminder letter was sent to households for which, based on Verian’s ABOS field data from previous studies, this was deemed likely to have the most significant impact (mainly deprived areas and addresses with a younger household structure). The information contained in the reminder letters was similar to the invitation letters, with slightly modified messaging to reflect each reminder stage.

Details of the provision of paper questionnaires, including their content and allocation, can be found above.

4.2 Fieldwork timings

In Quarter 1, it was anticipated that, after data quality checks, there would be approximately 87,500 usable responses. Due to a higher-than-expected response rate, there were 97,444 after the completion of data quality checks. As a result, the Quarter 2 starting sample was adjusted down to limit the risk of over-run across the annual survey fieldwork. Additionally, and for the same reason, a stepped closure of fieldwork was employed. The table below details the fieldwork dates for the different sample batches in Quarter 2:

Quarter 2 | Start date | Close date - online survey | Close date - paper survey
Batch 1 | 20 January 2024 | 20 March 2024 | 8 April 2024
Batch 2 | 31 January 2024 | 20 March 2024 | 8 April 2024
Batch 3 | 13 February 2024 | 25 March 2024 | 8 April 2024

4.3 Fieldwork performance

In total, 84,230 respondents completed the survey during Quarter 2: 74,949 via the online survey and 9,281 by returning a paper questionnaire. Following data quality checks (see Chapter 5 for details), 4,798 respondents (5.7%) were removed (within the expected range of 5% to 6%), leaving 79,432 respondents in the final Quarter 2 dataset.

This constituted a 38.39% conversion rate, a 28.26% household-level response rate, and an individual-level response rate of 22.08% (online (CAWI) response rate = 19.60%; paper (PAPI) response rate = 2.47%) [footnote 9].
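As an illustration, the sketch below reproduces the conversion rate and individual-level response rate from the figures above, using the standard ABOS formulas given in footnote 9.

```python
# A worked check of the Quarter 2 rates using the ABOS formulas from
# footnote 9: 0.92 is the assumed residential share of 'small user' PAF
# addresses and 1.89 the mean number of adults (16+) per household.

issued = 206_914     # addresses issued in Quarter 2
responses = 79_432   # valid responses after data quality checks

conversion = responses / issued                     # -> 0.3839 (38.39%)
individual_rr = responses / (issued * 0.92 * 1.89)  # -> 0.2208 (22.08%)

# The household-level rate (28.26%) uses the number of responding
# households, which is not broken out in this report:
#   household_rr = responding_households / (issued * 0.92)

print(f"Conversion rate: {conversion:.2%}")
print(f"Individual response rate: {individual_rr:.2%}")
```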

For the online survey, after the data quality check removals, the average completion time was 27:54 and the median completion time was 25:47.

4.4 Incentive system

As a thank you for taking part, all respondents who completed the Community Life Survey received an incentive voucher worth £10.

Online incentives

Participants completing the survey online were provided with details of how to claim their voucher at the end of the survey and were directed to the voucher website, where they could select from a range of different vouchers, including electronic shopping vouchers sent via email, credit with a payments service, or a charitable donation.

Paper incentives

Respondents who returned the paper questionnaire were also provided with a £10 shopping voucher. This voucher was sent in the post and could be used at a variety of high street stores.

5. Data processing

5.1 Data management

Due to the different structures of the online and paper questionnaires, data management was handled separately for each mode. Online questionnaire data was collected via the web script and, as such, was much more easily accessible. By contrast, paper questionnaires were scanned and converted into an accessible format.

For the final outputs, both sets of interview data were converted into IBM SPSS Statistics, with the online questionnaire structure as a base. The paper questionnaire data was converted to the same structure as the online data so that data from both sources could be combined into a single SPSS file.

5.2 Partial completes

Online respondents were able to exit the survey at any time, and while they could return to complete the survey at a later date, some chose not to do so. Equally, respondents completing on paper occasionally left part of the questionnaire blank, for example if they did not wish to answer a particular question or section of the questionnaire.

Partial data can still be useful, providing respondents have answered the substantive questions in the survey. These cases are referred to as usable partial interviews.

Survey responses were checked at several stages to ensure that only usable partial interviews were included. Upon receiving returned paper questionnaires, the booking-in team removed obviously blank ones. Following this, during data processing, rules were set for the paper and online surveys to ensure that respondents had provided sufficient data.

For the online survey, respondents had to reach the questions relating to their qualifications for their data to count as valid: either answering yes at the question ‘Degree’ or giving any answer at ‘Quals’. Paper data was judged complete if the respondent answered at least 50% of the questions, or reached and answered Q46, the last question of Section 11: ‘Local area involvement’.
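The sketch below restates these usable-partial rules in Python. The question identifiers are as given in the text; the representation of a response as a dictionary of answers (None meaning unanswered) is an assumption for illustration.

```python
# A sketch of the usable-partial rules described above.

def usable_online(answers: dict) -> bool:
    # Valid if the respondent reached the qualification questions: either
    # answering yes at 'Degree' or giving any answer at 'Quals'.
    return answers.get("Degree") == "yes" or answers.get("Quals") is not None

def usable_paper(answers: dict, n_routed_questions: int) -> bool:
    n_answered = sum(1 for v in answers.values() if v is not None)
    # Valid if at least half the questions were answered, or Q46 (the last
    # question of Section 11, 'Local area involvement') was answered.
    return n_answered >= 0.5 * n_routed_questions or answers.get("Q46") is not None
```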

5.3 Quality checking

Initial checks were carried out to ensure that paper questionnaire data had been correctly scanned and converted to the online questionnaire data structure. For questions common to both questionnaires, the SPSS output was compared to check for any notable differences in distribution and data setup.

Once any structural issues had been corrected, further quality checks were carried out on the online and paper responses to identify and remove any invalid interviews. The specific checks were as follows:

  1. Selecting complete interviews: Any test serials in the dataset (used by researchers prior to survey launch) were removed. Cases were also removed if the respondent reached - but did not answer - the fraud declaration statement (online: QFraud; paper: Q99).

  2. Duplicate serials check: If any individual serial had been returned in the data multiple times, responses were examined to determine whether this was due to the same person completing multiple times or due to a processing error. If they were found to be valid interviews, a new unique serial number was created, and the data was included in the data file. If the interview was deemed to be a ‘true’ duplicate, the more complete or earlier interview was retained.

  3. Duplicate emails check: If multiple interviews used the same contact email address, responses were examined to determine if they were the same person or multiple people using the same email. If the interviews were found to be from the same person, only the most recent interview was retained. In these cases, online completes were prioritised over paper completes due to the higher data quality.

  4. Interview quality checks: A set of checks was undertaken to confirm that the questionnaire had been completed in good faith and to a reasonable quality. Several parameters were used:

  • Interview length (online check only)

  • Number of people in the household reported in interview(s) versus the total number of interviews from the household

  • Whether key questions had valid answers

  • Whether respondents had habitually selected the same response to all items in a grid question (commonly known as ‘flatlining’) where selecting the same responses would not make sense

  • How many multi-response questions were answered with only one option ticked

5.4 Data checks and edits

Upon completion of the general quality checks described above, more detailed data checks were carried out to ensure that the correct questions had been answered according to the questionnaire routing. This was generally correct for online completes, as routing is programmed into the scripting software, but for paper completes data edits were required.

There were two main types of data edit, both affecting the paper questionnaire data:

  1. Single-response question edits: If a paper questionnaire respondent had mistakenly answered a question that they weren’t supposed to, their response in the data was changed to “-1: Item not applicable”. If a paper questionnaire respondent had neglected to answer a question that they should have, they were assigned a response in the data of “-5: Not answered but should have (paper)”. Where the respondent had selected multiple answers, their response was changed in the data to “-6: Multi-selected for single response (paper)”.

  2. Multiple response question edits: If a paper questionnaire respondent had mistakenly answered a question that they weren’t supposed to, their response was set to “-1: Item not applicable”. If a paper questionnaire respondent had neglected to answer a question that they should have, they were assigned a response in the data of “-5 [footnote 10]: Not answered but should have (paper)”. Where the respondent had selected both valid answers and an exclusive code such as “None of these”, any valid codes were retained, and the exclusive code response was set to “0”. These edit rules are illustrated in the sketch following this list.
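A minimal sketch of the single-response edits, assuming a routing predicate routed_to() that stands in for the questionnaire routing logic; the sentinel codes are those named in the text.

```python
# An illustrative pass implementing the single-response edits above.

NOT_APPLICABLE = -1   # question not on the respondent's route
NOT_ANSWERED = -5     # on route but left blank (paper)
MULTI_SELECTED = -6   # several boxes ticked at a single-response item (paper)

def edit_single_response(response, question, routed_to):
    value = response.get(question)   # None = blank; list = several ticks
    if not routed_to(response, question):
        return NOT_APPLICABLE        # answered (or blank) off-route
    if value is None:
        return NOT_ANSWERED
    if isinstance(value, list) and len(value) > 1:
        return MULTI_SELECTED
    return value
```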

5.5 Coding

Post-interview coding was undertaken by members of Verian’s coding department, who coded the verbatim responses recorded at ‘other specify’ questions.

For all ‘other specify’ questions, data edits were made to move responses coded to “Other” to the correct response code where the answer could be back coded to an existing code.

As an example, please see the following question:

RETAILSAT

What are the reasons you are [very/fairly] satisfied with the shops and retailers available in your local area? 

  1. Easy to get to

  2. They have all the basic essentials I need

  3. There are a wide range of goods and services to choose from

  4. Reasonably priced

  5. Independent or locally run 

  6. Some other reason (please type in)

  7. Don’t know

If a respondent selected “Some other reason” at this question and wrote text that said they were satisfied with the shops in their area because they are nearby, in the data they would be back coded to the code “Easy to get to”.

Where “Other” responses could not be back coded to an existing code, and where a particular response was mentioned by at least 2% of those answering the question, new codes were opened to reflect these responses where appropriate, and the relevant responses were coded to these new codes accordingly.
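The 2% rule can be sketched as below. In practice, coders group the ‘Other’ verbatims into themes manually; the tally here merely stands in for that process.

```python
from collections import Counter

# A sketch of the 2% threshold for opening new answer codes.

def candidate_new_codes(other_verbatim_themes, n_answering):
    """other_verbatim_themes: one coded theme per 'Other' verbatim."""
    counts = Counter(other_verbatim_themes)
    threshold = 0.02 * n_answering
    # Themes mentioned by at least 2% of those answering the question
    # become candidates for new answer codes.
    return [theme for theme, n in counts.items() if n >= threshold]
```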

5.6 Data outputs

Once the checks were complete, a final SPSS data file was created that contained only valid interviews and edited data. From this dataset, a set of data tables was produced. The coverage and format of the data tables were agreed by DCMS and MHCLG, to cover key questions of interest. Quarterly tables cover national data only; local authority level data tables have been published alongside the annual release.

5.7 Weighting

A three-step weighting process was used with each quarterly dataset, to compensate for variation within the respondent sample with respect to both sampling probability and response probability (a schematic sketch follows the list):

  1. An address design weight was created equal to one divided by the sampling probability; this also served as the individual-level design weight because all resident adults could respond.

  2. The expected number of responses per address was modelled as a function of data available at the neighbourhood and address levels. The step two weight was equal to one divided by the predicted number of responses.

  3. The product of the first two steps was used as the input for the final step: calibration. The responding sample was calibrated to the 2023 Annual Population Survey (APS) with respect to (i) sex by age, (ii) educational level by age, (iii) ethnic group, (iv) housing tenure, (v) ITL1 region, (vi) employment status by age, and (vii) household size.
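The sketch below illustrates the three steps schematically, with raking (iterative proportional fitting) standing in for the calibration in step 3. The inputs (sampling probabilities, predicted responses per address, APS target margins) are assumed to be available; this is not Verian’s production weighting code.

```python
import numpy as np

# A schematic sketch of the three weighting steps described above.

def base_weight(sampling_prob, predicted_responses):
    # Steps 1 and 2: design weight times nonresponse adjustment.
    return (1.0 / sampling_prob) * (1.0 / predicted_responses)

def rake(weights, groups, targets, n_iter=50):
    """weights: (n,) starting weights (product of steps 1 and 2);
    groups: dict of (n,) non-negative integer category arrays, one per
    calibration dimension (e.g. sex by age, tenure); targets: dict of
    matching population totals per category."""
    w = np.asarray(weights, dtype=float).copy()
    for _ in range(n_iter):
        for name, cats in groups.items():
            margins = np.bincount(cats, weights=w, minlength=len(targets[name]))
            # Scale each category so its weighted total hits the target.
            w *= (targets[name] / np.maximum(margins, 1e-12))[cats]
    return w
```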

The combined (annual) 2023 to 2024 dataset will be further calibrated to ensure that the sex/age distribution within each local authority broadly matches that of the most recent (2022) mid-year population estimates published by ONS.

An equivalent weight was also produced for the (majority) subset of respondents who completed the survey by web. This weight was needed because some items were included in the web questionnaire but not the paper questionnaire.

It should be noted that the weighting only corrects for observed bias (for the set of variables included in the weighting matrix) and there is a risk of unobserved bias. Furthermore, the raking algorithm used for the calibration only ensures that the sample margins match the population margins. There is no guarantee that the weights will correct for bias in the relationships between the variables.

The final weight variables in the dataset are:

  • ‘Finalweight’ – to be used when analysing data available from both the web and paper questionnaires.

  • ‘Finalweightweb’ – to be used when analysing data available only from the web questionnaire.

  1. Please note that, following a change of ownership in 2022, Kantar Public has traded as Verian since November 2023. All materials related to Quarter 1 of the Community Life Survey referred to ‘Kantar Public’. For Quarter 2, the materials used the ‘Verian’ brand name.

  2. Community Life Survey 

  3. Whilst the 2023/24 survey was being undertaken, the department was known as the Department for Levelling Up, Housing and Communities (DLUHC). Following the change of government after the general election, it was renamed the Ministry of Housing, Communities and Local Government (MHCLG) on 9 July 2024.

  4. UK Shared Prosperity Fund: prospectus 

  5. International Territorial Level (ITL) is a geocode standard for referencing the subdivisions of the United Kingdom for statistical purposes, used by the Office for National Statistics (ONS). Since 1 January 2021, the ONS has encouraged the use of ITL as a replacement for the Nomenclature of Territorial Units for Statistics (NUTS), with lookups between NUTS and ITL maintained and published until 2023.

  6. Using data from 2020 to 2022, extensive modelling was carried out to determine the likely response level under various different potential data collection designs and as a function of data that can be attached to all sampled addresses: effectively Census and Census-derived data plus the CACI (modelled) household age structure data. The Census data was compacted into six ‘factor component’ scores that, between them, cover the majority of the between-neighbourhood (output area) variation in Census data. 

  7. Community Life Survey October - December 2023: Technical report 

  8. Summary of questionnaire composition for 2023/24 

  9. Response rates (RR) were calculated via the standard ABOS method. An estimated 8% of ‘small user’ PAF addresses in England are assumed to be non-residential (derived from interviewer administered surveys). The average number of adults aged 16 or over per residential household, based on the Labour Force Survey, is 1.89. Thus, the response rate formulae are: Household RR = number of responding households / (number of issued addresses × 0.92); Individual RR = number of responses / (number of issued addresses × 0.92 × 1.89). The conversion rate is the ratio of the number of responses to the number of issued addresses.

  10. There was one exception to this rule. For the question Assets2: “For each of the following, please indicate whether there is at least one within a 15 to 20 minute walk from your home, further away but still in your local area, or there is not one in your local area at all”, option K: “Place of worship for my faith or religion, such as a church, mosque, temple” was treated differently. In the paper questionnaire, if a respondent did not provide an answer where they should have, rather than being coded to -5, they were instead coded to answer code 5: “Not applicable I do not have a religion/faith” (an answer code which was only available for this item).