Independent report

Analysis of Care Quality Commission data on inspections, assessments and ratings, 2014 to 2024

Updated 17 October 2024

Applies to England

Executive summary 

In May 2024, Dr Penelope Dash was asked to review the operational effectiveness of the Care Quality Commission (CQC). This included an action to examine the suitability of CQC’s methodology for inspections and ratings.

To assist with this objective, CQC extracted management information from their internal data systems. This covered a 10-year period, from 2014 (when they started routinely publishing inspection outcomes) up to 30 July 2024. CQC also provided, in a separate file, information on assessments conducted under its new single assessment framework (SAF), which was introduced in November 2023 and rolled out gradually.

Analysts within the Department of Health and Social Care (DHSC) were able to use this information to explore a series of research questions over the course of August and September 2024, the headline findings from which are summarised below.

Volume and nature of inspections

COVID-19 impacted CQC’s inspection programme. Between 2019 and 2020, the number of inspected locations per year declined by 59.5%, from 15,757 to 6,381. This period also marked a shift from comprehensive to focused inspections.

This meant, for example, instead of looking at:

  • all services provided in a hospital, an inspection would focus on a specific area of care (such as maternity services)
  • whether a service was safe, effective, caring, responsive and well led, an inspection would focus on one or 2 of these areas

Inspection rates have not yet returned to pre-COVID-19 levels, with 6,734 inspections occurring in 2023 (the most recent full calendar year of data available).

Locations with and without a rating

Over the past 5 years, the proportion of locations that had never been rated by CQC (under their previous inspection framework) has increased from 13% in 2019 to 19% in 2024. GP locations are most likely to have a rating (97%), while NHS acute hospital locations are the least likely (44%).

However, results for NHS acute hospital locations may not be as reliable, due to limitations in how CQC systems capture data. Specifically:

  • a single hospital can be registered multiple times under a different trust: for example, Bradford Royal Infirmary is registered with an acute trust under one location ID and a community trust under another location ID. It holds a rating under the acute trust registration, but not with the community trust. This leads to double counting in CQC systems, where Bradford Royal Infirmary is counted as ‘inspected and rated’ and ‘not rated’ simultaneously
  • some services, though registered as an ‘NHS acute non-specialist hospital’, are actually part of broader community service provisions and may deliver care in a very different way to acute hospitals. CQC said this means they will not always receive an overall rating because it was agreed (often through internal regulatory planning meetings or with the local relationship owner) that a visit was not necessary

Average age of ratings

Within the past 5 years, the average age of overall-level ratings (the amount of time that has passed since a location rating was published under the previous inspection framework) has almost doubled, from 2 years old in 2020 to approximately 3 years and 11 months old in 2024.

Ratings that are ‘outstanding’ are on average the oldest (4 years and 11 months), while those that are ‘inadequate’ are on average the youngest (a year old). GP locations have the oldest average overall rating age (5 years and 6 months), followed by acute specialist NHS hospital locations (4 years and 6 months).

This does not account for the fact that some overall-level ratings will have been derived from older key question-level ratings. For example, Wythenshawe Hospital’s overall rating is dated 2023, but 3 of the 5 key question-level ratings underpinning it date from 2019. We describe this example in more detail in the ‘Case study’ section below.

It means that, in some cases:

  • the most recent overall-level rating available is based on the quality of care assessed several years ago
  • the overall-level rating may mask even older key question-level ratings

We have not been able to quantify the extent of this issue.

Re-inspecting underperforming locations

As of 30 July 2024, the average amount of time between a location being rated as ‘inadequate’ overall and being re-inspected under the previous framework was 136 days (approximately 4 months and 2 weeks). The average wait was longest for acute non-specialist NHS hospital locations, at 264 days.

As of 30 July 2024, the average amount of time between a location being rated as ‘requires improvement’ overall and being re-inspected was 360 days (nearly a year). This average wait has grown over time, increasing by 21% since the end of 2019.

We estimate that there are around 268 locations with an ‘inadequate’ rating and 3,537 locations with a ‘requires improvement’ rating that had not been re-inspected at the time of our analysis (though this does not capture any re-inspections under the new framework).

Use of key questions, quality statements and evidence categories

CQC told us that, between 4 December 2023 and 30 July 2024, it had completed 980 assessments under the new framework (this figure includes unpublished assessments). Of these, 75% looked at the ‘safe’ key question, whereas only 38% looked at the ‘effective’ key question.

The proportion of assessments looking at each key question varies considerably across sectors, with 90% of adult social care assessments looking at ‘safe’ compared with 67% of secondary and specialist care assessments (which includes NHS acute hospitals). No community health assessments had looked at the key questions ‘effective’ and ‘responsive’, and no secondary and specialist care assessments had looked at the key question ‘caring’.

There are different ways to analyse CQC’s usage of the quality statements underpinning these assessments.

Across all assessments, 9,058 quality statements were used for 980 assessments, giving an average of 9.2 out of 34 quality statements used per assessment conducted. On average, 3.7 of 8 possible quality statements on ‘safe’ were assessed (46%) while, on average, 1.2 of 6 possible quality statements on ‘effective’ were assessed (20%). Of the assessments that looked at the key question ‘effective’, only 34% considered the quality statement ‘monitoring and improving outcomes’.

Of all quality statements used, 40% (3,580) are part of the ‘safe’ key question, while 13% (1,181) relate to the ‘effective’ key question. For secondary and specialist care assessments carried out in the same period, 63% (115 of the 184 quality statements used) are within the ‘safe’ key question, with only 9% (17 out of 184) in the ‘effective’ key question. These results may be skewed by the fact that there are more ‘safe’ quality statements (8) in the framework than ‘effective’ ones (6).

There are also different ways to analyse CQC’s intended usage of evidence categories.

One approach looks at the total number of times CQC said it would collect each type of evidence (of which there are 6 categories) against each of the 34 quality statements (giving 204 possible data points). For primary health services, CQC intends to collect up to 100 pieces of data, 2 of which fall within the evidence category ‘outcomes’ (2%). 

The second approach looks at each evidence category separately. So, for primary health services, we see that only 2 of 34 statements are likely to be assessed by measuring ‘outcomes’ (6%), while 23 of 34 statements are likely to be assessed by measuring ‘people’s experience’ (68%).

Background

The Care Quality Commission (CQC) is the independent regulator of health and adult social care in England. It:

  • registers health and care providers
  • monitors, inspects and rates services
  • takes action to protect people who use services

CQC’s comprehensive inspection programme began in September 2013, following the Mid Staffordshire NHS Foundation Trust Public Inquiry led by Robert Francis. The purpose of this was to determine whether health and social care services were:

  1. Safe: are service users protected from abuse and avoidable harm?
  2. Effective: does the care, treatment and support provided achieve good outcomes and help service users maintain quality of life, based on the best available evidence?
  3. Caring: do staff treat service users with compassion, kindness, dignity and respect?
  4. Responsive: are services organised to meet service users’ needs?
  5. Well led: do the leadership, management and governance of the organisation make sure it is providing high-quality care that’s based around service users’ individual needs, encouraging learning and innovation, and promoting an open and fair culture?

CQC would determine on a case-by-case basis whether a service needed to be inspected across all 5 of these key questions (formerly known as ‘quality domains’), or whether a focused inspection looking at fewer areas would be appropriate. In 2014, CQC began routinely publishing inspection outcomes, using a single-word rating system to describe services (and each key question or domain) as ‘outstanding’, ‘good’, ‘requires improvement’ or ‘inadequate’.

In November 2023, CQC started rolling out a new single assessment framework (SAF). Under this new system, the 5 key questions are underpinned by a set of ‘quality statements’, which can be assessed using up to 6 types of ‘evidence categories’. CQC says that the purpose of this, in part, was to make assessments more structured and transparent, making its decisions about ratings clearer and easier to understand. Because of the phased rollout, inspections under the old model were (and are) still taking place after the introduction of the new framework.

In May 2024, Dr Penelope Dash was asked to review the operational effectiveness of the CQC. Concerns were raised, through stakeholder interviews conducted during the early stages of the interim review, about CQC’s approach to calculating and aggregating service ratings.

At DHSC, we wanted to independently determine if these concerns were substantiated with objective data on CQC inspections and ratings over the past decade.

The next section of this report sets out the specific research questions DHSC sought to answer.

Research questions

The interim report produced by Dr Dash highlighted several stakeholder concerns. For example, the perception that:

  • assessments are not conducted frequently enough, so may not reflect recent data and intelligence on the quality of care being provided
  • there is a lack of transparency around how ratings are calculated, including the use of the outcome of previous inspections (often several years ago) to determine a ‘current’ rating

DHSC and Dr Dash used these emerging findings to inform the following research questions to be explored through our analysis of CQC data:

  1. How many inspections has CQC conducted each year (and what proportion were comprehensive or focused)?
  2. How many locations hold a CQC rating (and how many have never been rated)?
  3. What is the average age of ratings overall (and by location type)?
  4. What is the average gap between receiving a ‘requires improvement’ or ‘inadequate’ rating overall and being re-inspected?
  5. Which key questions, quality statements and evidence categories are being prioritised under CQC’s new assessment framework?

The next section of this report sets out our methods and analytical approach to answering these research questions.

Methods

To answer the 5 research questions set out above, analysts from DHSC analysed management information supplied by the CQC from its internal data systems. This was supplemented with data published by CQC through its publicly accessible application programming interface (API) and information from internal guidance documents held by CQC.

Below, we provide further information on:

  • CQC data - including how we constructed a historical data set
  • terminology and calculations - including how service types were defined
  • limitations - some relating to CQC data and some relating to our analysis

CQC data

CQC publishes data on inspections and ratings in a variety of formats on its website. This includes, for example:

  • a spreadsheet, updated once a week, listing all places regulated by CQC
  • a spreadsheet, published every month, with the latest rating for each provider (for example, a domiciliary care provider or an NHS trust), location (for example, a care home or an NHS acute hospital) and service (for example, the maternity service within a hospital)
  • a public API, updated daily, listing the current status of all services, which users can access using software
  • PDF and HTML reports for each service along with a published rating

Initially, we attempted to construct a longitudinal data set of all published ratings over time, using the monthly spreadsheets and data from CQC’s API. However, we identified inconsistencies between the data sets, which rendered this option infeasible in the time available.

In August and September 2024, at DHSC’s request, CQC extracted management information from its internal data systems and provided 6 data files containing:

  • all inspections conducted and ratings published each year under the previous inspection model (see ‘Inspections and assessments’ below for details), from 2014 up to 30 July 2024
  • all assessments conducted and ratings published under the new SAF, from December 2023 (when the first assessments took place) up to 30 July 2024
  • the number of quality statements looked at, by sector, in completed assessments under the new SAF, from December 2023 up to 30 July 2024
  • a list of locations to exclude from our analysis, based on their primary inspection categories, as they represent location types CQC generally does not rate (such as dentists)
  • a list of the regulated activity dates for locations
  • a list of regulated activity dates for providers

We supplemented these files with metadata from CQC’s API to group locations into categories, such as ‘Social care: residential’, ‘NHS acute hospital’ and ‘GP practice’.
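
To illustrate how this grouping might be implemented, the sketch below maps a location’s primary inspection category (as held in the API metadata) to the location types used in this report, following the definitions in Table 1 below. This is a minimal sketch only: the column name ‘primary_inspection_category’, the input data frame and the exact category strings are illustrative assumptions, not CQC’s actual schema or the code used for this analysis.

```python
# Illustrative sketch: group locations into the location types used in this
# report, based on their primary inspection category from CQC API metadata.
# Field names and category strings follow Table 1 of this report and are
# assumptions - they may not match the API's exact labels or casing.
import pandas as pd

CATEGORY_TO_LOCATION_TYPE = {
    "residential social care": "Social care location",
    "community-based adult social care services": "Social care location",
    "acute hospital - NHS specialist": "NHS acute hospital location",
    "acute hospital - NHS non-specialist": "NHS acute hospital location",
    "GP practices": "GP practice location",
    "acute hospital - independent specialist": "Independent sector location",
    "acute hospital - independent non-specialist": "Independent sector location",
    "independent consulting doctors": "Independent sector location",
    "mental health - community and hospital - independent": "Independent sector location",
}

def add_location_type(locations: pd.DataFrame) -> pd.DataFrame:
    """Attach a location type grouping; unmapped categories are labelled 'Other'."""
    out = locations.copy()
    out["location_type"] = (
        out["primary_inspection_category"]
        .map(CATEGORY_TO_LOCATION_TYPE)
        .fillna("Other")
    )
    return out
```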

CQC also provided us with an internal guidance document, which listed priority quality statements and evidence categories for a range of sectors, under the new framework.

Terminology and calculations

Provider registration

All providers of health and adult social care activities in England are legally required to register with the CQC under the Health and Social Care Act 2008.

A provider can be one of 3 types of legal entity:

  • an individual
  • a partnership
  • an organisation (such as a local authority, NHS trust, or registered company or charity)

When registering with the CQC, providers must include details of all places at which, or from which, regulated activities will be provided. These places are called ‘locations’.

Locations

A location is where a regulated activity is being:

  • delivered to people and may represent a service (for example, a care home)
  • organised or managed from, but the regulated activity is carried out elsewhere (for example, a domiciliary care agency)

CQC monitors compliance with regulations at each location. We therefore analysed CQC inspections and ratings at location level. Locations are grouped under providers, and individual services are grouped under the locations at which the relevant regulated activity is carried out.

Location types

CQC inspection and assessment processes can vary depending on the type of service being provided.

In answering our research questions, we therefore looked at the results for locations overall and by location type (see Table 1 below).

Specifically, we looked at:

  • social care locations (including residential and community-based services)
  • NHS acute hospital locations (including acute specialist and non-specialist services)
  • GP practice locations
  • independent sector locations (including acute hospital specialist and non-specialist services, and independent consulting doctors)

Table 1: how location types were identified in CQC data files provided to DHSC

Location type | Records captured
Social care location | Location had a primary inspection category in the API of ‘residential social care’ or ‘community-based adult social care services’.
Social care services | Location had a primary inspection category in the API of ‘residential social care’ or ‘community-based adult social care services’, and one of the following DHSC Governance and Assurance Committee service type labels: ‘care home service’ (with or without nursing), ‘domiciliary care service’, ‘shared lives’, ‘supported living service’ and ‘extra care housing services’.
NHS acute hospital location | Location had a primary inspection category in the API of ‘acute hospital - NHS specialist’ or ‘acute hospital - NHS non-specialist’.
GP practice locations | Location had a primary inspection category in the API of ‘GP practices’.
Independent sector locations | Location had a primary inspection category in the API of ‘acute hospital - independent specialist’, ‘acute hospital - independent non-specialist’, ‘independent consulting doctors’, or ‘mental health - community and hospital - independent’.

Non-rated locations

Non-rated locations, where care is regulated by the CQC but the services or providers are exempt from its legal duty to give ratings, are excluded from this analysis. Such non-rated locations include:

  • primary dental services
  • children’s homes
  • sexual assault referral centres
  • blood and transplant services
  • hyperbaric oxygen therapy services
  • medical laboratories
  • adult prisons, youth offending institutions and immigration removal centres

CQC provided us with a list of 23,876 locations to remove from our analysis, as they represented locations CQC does not routinely rate.

Inspections and assessments

Under its previous model, CQC carried out 2 main types of inspection:

  • comprehensive inspections: these covered all 5 key questions (formerly known as ‘quality domains’) to check that services are ‘safe’, ‘caring’, ‘effective’, ‘responsive’ (to people’s needs), and ‘well led’
  • focused inspections: these were smaller in scale than comprehensive inspections, often in response to specific information received or to follow up on findings from a previous inspection, and did not look at all 5 key questions

CQC set out key lines of enquiry (KLOEs), prompts and suggested sources of evidence for each domain to help guide inspectors.

Inspections could be conducted at different levels. For example, an inspection could look at the safety of a hospital location overall and/or the safety of a specific service (such as the maternity service) within a hospital.

Under the new framework, some of this terminology has changed. For example, CQC now refers to:

  • ‘assessments’ rather than ‘inspections’
  • ‘quality statements’ rather than key lines of enquiry (KLOEs)

A SAF assessment may be ‘responsive’ (in response to information of concern) or ‘planned’, and may or may not result in an updated rating.

Ratings

CQC uses a 4-point rating scale: ‘outstanding’, ‘good’, ‘requires improvement’, and ‘inadequate’. A separate rating is given to each of the 5 key questions assessed (safe, caring, effective, responsive and well led). These are then aggregated into an overall rating.

Our analysis focuses on the ‘overall level’ rating for a location or location type, rather than the ratings given at a ‘service level’ or the ratings given to each of the 5 key questions (safe, caring, effective, responsive and well led). A service-level rating would be a rating for a specific service of a location, such as the rating for only ‘end-of-life care’ at a hospital.

Inspections and assessments do not always result in a rating; they may instead be labelled as ‘inspected but not rated’, ‘not formally rated’ or ‘insufficient evidence to rate’. These cases were excluded from our analysis.

Reference dates

‘In-year’ results refer to data within a single calendar year. A calendar year is defined as 1 January to 31 December, except for 2024, which covers the period 1 January to 30 July (the most recent data point provided to us).

‘Cumulative’ results should be understood as a ‘snapshot’ taken on 31 December each year (aside from 2024, which is taken on 30 July), capturing all data from 1 January 2014 up to that point.
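
As a minimal illustration of these two views, the sketch below derives an in-year count and a cumulative ‘snapshot’ count of published ratings from a table of rating publication dates. The data frame and column name are illustrative assumptions rather than CQC’s actual data structure.

```python
# Sketch: 'in-year' counts vs cumulative snapshot counts of published ratings.
# Assumes one row per published rating with a 'publication_date' column;
# column names are illustrative, not CQC's schema.
import pandas as pd

def reference_date(year: int) -> pd.Timestamp:
    """31 December of the given year, except 2024, which is cut off at 30 July."""
    return pd.Timestamp("2024-07-30") if year == 2024 else pd.Timestamp(f"{year}-12-31")

def in_year_count(ratings: pd.DataFrame, year: int) -> int:
    """Ratings published between 1 January and the reference date of that year."""
    mask = ratings["publication_date"].between(pd.Timestamp(f"{year}-01-01"), reference_date(year))
    return int(mask.sum())

def cumulative_count(ratings: pd.DataFrame, year: int) -> int:
    """Snapshot: all ratings published from 1 January 2014 up to that year's reference date."""
    mask = ratings["publication_date"].between(pd.Timestamp("2014-01-01"), reference_date(year))
    return int(mask.sum())
```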

Limitations

When interpreting the results of this analysis, it is important to note the caveats set out in this section of our report.

Figures may suffer from edge effects due to the paucity of data prior to 2014 and incomplete annual data from 2024

CQC started routinely publishing single-word ratings under the previous inspection framework in 2014. This is why the figures for 2014 are very low and then increase rapidly in subsequent years.

Ratings issued under CQC’s new SAF could not be linked into our analysis for research questions 2, 3 and 4

We were not able to include ratings issued under CQC’s new SAF in our analysis for research questions 2, 3 and 4. This is because data on location ratings under the previous and new frameworks were provided to us in different files and, at the point of conducting our analysis, we did not have the information required to accurately link records across the two.

However, of the 110,456 inspections and assessments resulting in a rating that were conducted between 1 January 2014 and 30 July 2024, less than 1% were done under the new framework. We therefore anticipate that the impact of excluding SAF ratings from our analysis is minimal.

Some locations of a non-rated type do hold a rating but are excluded from our analysis

CQC can make the decision, based on a location’s risk profile, to act outside of protocols and give a non-rated location type a rating. For example, Victoria Dental and Healthcare is registered under ‘primary dental care’ (a non-rated location type) and has a primary inspection category of ‘dentists’. However, it also provides medical healthcare services, including gynaecology, general medicine, dermatology and psychiatry. In April 2020, CQC carried out a focused, desk-based review of the ‘safe’ key question for these services, awarding it a rating of ‘good’ for safety and overall.

CQC told us that decisions about which locations to inspect are often made during internal regulatory planning meetings or are known to the local relationship team or owner. These decisions are not consistently recorded in a structured way, but rather as notes from the meeting. This means it is not possible to quantify how many locations of a non-rated type do in fact hold a rating but were excluded from our analysis because of their primary inspection category.

Results for locations labelled as ‘NHS acute hospital’ may be skewed by the inclusion of broader community service provisions

CQC told us that some services, though registered as ‘NHS acute non-specialist hospital’, are actually part of broader community service provisions and therefore not treated in the same way. This includes, for example:

  • satellite sites, which are often inspected as part of the provider’s core service or the parent hospital’s location rather than as standalone sites
  • community hospitals, which are not inspected individually, but are instead looked at as part of community core services and rated at the provider level
  • dental services and minor injury units, which are usually part of wider community services, might not be inspected separately and are often included within the overall service ratings for the provider

This means that, even if a location is classified as an ‘NHS acute non-specialist hospital’, it may not always receive an overall rating because it was agreed (often through internal regulatory planning meetings or with the local relationship owner) that a visit was not necessary for one of the above reasons or based on wider proportionality.

CQC told us that its systems do not flag why a decision was made to never rate or inspect a service that is, by definition, carried out at a ‘rated’ location. Nor do its systems easily show whether a location is a satellite service, community hospital, dental service or minor injury unit, making it hard to exclude them accurately in any calculations.

CQC believes this explains, at least in part, why the number of rated NHS acute non-specialist hospital locations, and therefore the number of rated NHS acute hospital locations overall, appears low.

Some NHS acute non-specialist hospital locations may be double counted in our analysis

A single hospital can be registered multiple times under different trusts. For example, Bradford Royal Infirmary is registered with an acute trust under location code RAE01 and has a rating attached to it, and it is registered with a community trust under location code TADY3 and described as ‘not yet inspected’. This leads to double counting in CQC management information, where Bradford is counted as ‘inspected and rated’ and ‘not rated’ simultaneously.

It was not possible to quantify the extent of this issue.

Mental health and community trust ratings are not captured in our analysis

CQC told us that mental health and community trusts only have ratings at provider level, rather than location level, so are not captured in our analysis. CQC also explained that it does not register mental health core services - meaning such services would only be assigned a rating during the process of inspecting the trust within which they were based. This means it is also not possible to identify the total number of mental health core services in CQC management information, from which we could assess the proportion that have been inspected and rated (and therefore ‘excluded’ from our analysis).   

Our assessment of the age of overall ratings does not account for the fact that some will have been derived from older key question-level ratings

If a ‘focused’ inspection was carried out and, for example, only 2 of the 5 key questions were rated, then CQC will carry forward the most recent historical ratings for the other 3 key questions it did not inspect this time around. The overall rating will therefore be derived from inspections conducted at different points in time (which may be several years apart). However, the date assigned to the overall rating will be the date it was published following the most recent inspection. It is therefore possible that the average age of ratings at key question level (safe, effective, caring, responsive and well led) is older than the average age of overall-level ratings.
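
The sketch below illustrates the effect described above: when only some key questions are re-rated, the carried-forward ratings keep their original dates, so the date of the overall rating can understate the age of the evidence behind it. The data structure is hypothetical and the carry-forward rule is simplified from the description in this report; it is not CQC’s actual aggregation logic. The example values loosely follow the Wythenshawe Hospital maternity case study later in this report.

```python
# Illustrative sketch of the carry-forward effect: the overall rating takes the
# date of the most recent inspection, while unrated key questions retain their
# older ratings and dates. Simplified from this report's description.
from dataclasses import dataclass
from datetime import date

@dataclass
class KeyQuestionRating:
    rating: str     # 'outstanding', 'good', 'requires improvement' or 'inadequate'
    rated_on: date  # publication date of the inspection that produced it

def apply_focused_inspection(current, new_ratings):
    """Replace only the key questions covered by the focused inspection;
    all other key question ratings are carried forward unchanged."""
    updated = dict(current)
    updated.update(new_ratings)
    return updated

# Hypothetical example, loosely based on the Wythenshawe maternity case study:
ratings_2019 = {kq: KeyQuestionRating("good", date(2019, 1, 1))
                for kq in ("safe", "effective", "caring", "responsive", "well led")}
focused_2023 = {"safe": KeyQuestionRating("inadequate", date(2023, 1, 1)),
                "well led": KeyQuestionRating("requires improvement", date(2023, 1, 1))}

latest = apply_focused_inspection(ratings_2019, focused_2023)
overall_rating_date = max(r.rated_on for r in latest.values())  # dated 2023
oldest_underlying = min(r.rated_on for r in latest.values())    # still dated 2019
```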

Other points to note regarding data completeness

Note that:

  • all categorisation is based on the current primary inspection category as provided by the API because no historic time series was available
  • when looking at regulated locations and providers, we are only able to capture the first and last date that regulated activity occurred and cannot account for any disjoint periods in between those dates where regulated activity might have ceased because CQC does not hold this data
  • where we refer to assessments under the new framework, it is worth noting that the SAF data provided does not distinguish between providers and locations so both will be included in the totals. It also does not distinguish between types of assessment
  • the analysis does not take into account dormancy of locations as no time series data on this was available

Results

As of 30 July 2024, there were 40,919 locations in England where care was regulated by the CQC. This included, for example:

  • 29,301 social care locations
  • 6,384 GP practice locations
  • 2,826 independent sector locations
  • 741 NHS acute hospital locations

‘NHS acute hospital locations’ often cover a broader range of services relative to the other location types listed. This means that, while they are smaller in number, the process of inspecting them and arriving at an overall rating can be more complex.

Below, we present the results of our analysis in relation to our 5 research questions, followed by a case study to illustrate CQC’s approach to aggregating ratings.

The data we refer to is available in more detail in the accompanying ‘Analysis of CQC data on inspections, assessments and ratings, 2014 to 2024: supplementary data tables’.

1. How many inspections has CQC conducted each year (and what proportion were comprehensive or focused)?

We looked at the number of locations inspected or assessed each year, from 2014 to 2024, which resulted in a published rating.

We found that the number of inspections declined by 59.5% between 2019 and 2020, from 15,757 to 6,381 (as shown in Figure 1 below). This coincided with CQC’s suspension of routine inspections at the beginning of 2020 in response to the COVID-19 outbreak.

The volume of inspections and assessments conducted in more recent years has not yet returned to pre-COVID-19 levels, with 6,734 carried out in 2023 and 1,820 in 2024 (up to 30 July).

Figure 1: number of in-year location inspections resulting in a rating, 2014 to 2024

The COVID-19 pandemic also marked a shift from comprehensive inspections to focused inspections. Between 2019 and 2020, the proportion of inspections that were comprehensive dropped from 93% to 75%, and the proportion of focused inspections increased from 6% to 23% (as shown in Figure 2 below).

Figure 2: percentage of in-year inspections resulting in a rating, 2014 to 2024, by inspection type

In November 2023, CQC started to roll out its new SAF across a small number of geographical areas. In 2024 (as of 30 July), 43% of inspections and assessments resulting in a published rating were conducted under the new framework, while 57% were carried out under the previous approach.

2. How many locations have a CQC rating (and how many have never been rated)?

We wanted to look at the number of locations, where care is regulated by the CQC, with and without a rating.

As of 30 July 2024, 81% of locations had received a rating at some point in time, while 19% had never been rated (shown in Figure 3 below).

This represents a gradual decrease in the proportion of rated locations over the past 5 years, with 87% of active locations at the end of 2019 having a rating.

As explained previously in the ‘Limitations’ section of our report, we estimate that less than 1% of ratings published between 2014 and 30 July 2024 were issued under the new framework, so the impact of excluding these from this analysis should be minimal.

Figure 3: cumulative percentage of locations with a published rating under the previous inspection framework, 2014 to 2024

Note: CQC’s rating system under the previous inspection framework started in 2014, which is why the figures for that year are very low and increase rapidly. This does not capture SAF ratings.

The proportion of locations with a rating varies by location type. As of 30 July 2024, 97% of GP locations had a rating, compared with 82% of social care locations, 58% of independent sector locations and 44% of NHS acute hospital locations (as shown in Figure 4 below).

Figure 4: cumulative percentage of locations with a published rating under the previous inspection framework, 2014 to 2024, by location type

Note: CQC’s rating system under the previous inspection framework started in 2014, which is why the figures for that year are very low and increase rapidly. This does not capture SAF ratings.

There are also variations within these location groupings. For example:

  • a higher proportion of residential social care locations have been rated, compared with community-based social care locations (96% vs 67% as of 30 July 2024)
  • a higher proportion of acute non-specialist NHS hospital locations have been rated, compared with acute specialist NHS hospital locations (50% vs 21% as of 30 July 2024)

CQC explained that figures relating to NHS acute hospital locations may be lower, at least in part, because these records also capture services such as community hospitals, dental services and minor injury units, which are not rated at this level. A more detailed explanation is set out in the previous ‘Limitations’ section of this report.

3. What is the average age of ratings overall (and by location type)?

We wanted to understand the age of location overall-level ratings - in other words, how long it had been since locations had received their most recent rating.
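
A minimal sketch of this calculation is set out below: the age of each location’s most recent overall rating is the time elapsed between its publication date and the reference date, averaged across rated locations. Column names are illustrative assumptions, and the restriction to active locations is omitted for brevity.

```python
# Sketch: average age of the most recent overall-level rating per location,
# measured at a reference date. Column names are assumptions for illustration;
# restriction to active locations is omitted for brevity.
import pandas as pd

def average_rating_age_days(ratings: pd.DataFrame, reference_date: str) -> float:
    """ratings: one row per published overall rating, with 'location_id' and
    'publication_date' columns. Returns the mean age in days of each
    location's latest rating as of reference_date."""
    ref = pd.Timestamp(reference_date)
    published = ratings[ratings["publication_date"] <= ref]
    latest = published.groupby("location_id")["publication_date"].max()
    return float((ref - latest).dt.days.mean())
```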

We found that, within the past 5 years, the average age of ratings has almost doubled, from 2 years old in 2020 to approximately 3 years and 11 months in 2024 (as shown in Figure 5 below).

The oldest rating held by an active location (a location still providing care), as of 30 July 2024, was from 8 June 2014. This does not mean that CQC has had no contact with the location since the rating was issued - in some cases, CQC may have re-inspected but not rated the location and/or determined, based on a review of information and data available to them, that reassessment was not necessary.

Figure 5: average age of CQC location ratings under the previous inspection framework, 2015 to 2024

The average age of location ratings varies by location type (as shown in Figure 6 below). As of 30 July 2024, GP locations had the oldest average rating age (5 years and 6 months).

The average age of NHS acute hospital location ratings, as of 30 July 2024, was 2 years and 9 months. However, ratings of acute specialist NHS hospital locations are, on average, 2 years older than acute non-specialist locations (4 years and 6 months, compared with 2 years and 6 months).

Figure 6: average age of CQC location ratings under the previous inspection framework, 2015 to 2024, by location type

We also found that overall-level ratings that are ‘outstanding’ are, on average, the oldest (4 years and 11 months, as of 30 July 2024), while those that are ‘inadequate’ are the youngest (a year old) (as shown in Figure 7 below).

Figure 7: average age of inadequate, requires improvement, good and outstanding ratings under previous inspection framework, by location type, as of 30 July 2024

As set out in the previous ‘Limitations’ section of this report, we have only looked at the age of overall-level ratings. This does not account for the fact that some overall-level ratings will have been derived from older key question-level ratings. It is therefore possible that the average age of ratings at key question level (safe, effective, caring, responsive and well led) is older than the average age of overall-level ratings. For example, Wythenshawe Hospital’s overall rating is dated 2023, but 3 of the 5 key question-level ratings underpinning it date from 2019. This example is described in more detail in the ‘Case study’ section below.

4. What is the average gap between receiving a ‘requires improvement’ or ‘inadequate’ rating overall and being re-inspected?

We wanted to understand the average number of days between a location receiving a ‘requires improvement’ or ‘inadequate’ rating overall and being re-inspected by CQC.

CQC told us that it used to set a maximum period for re-rating a service, based on the previous rating awarded. For example, if a service was rated as ‘inadequate’, the expectation was it would be re-inspected and re-rated within 6 to 12 months. After running a consultation on its new strategy in 2021, CQC removed its fixed timeframes for re-inspection but said it continued to prioritise re-inspection of services with ‘inadequate’ and ‘requires improvement’ ratings. CQC told us it looks at a range of risk indicators when determining priorities for (re)assessment - including patterns of contact with CQC (such as whistleblowing, safeguarding and feedback from service users).

As of 30 July 2024, we found that, on average:

  • 136 days passed between an ‘inadequate’ overall rating and subsequent inspection (approximately 4 months and 2 weeks)
  • 360 days passed between a ‘requires improvement’ overall rating and subsequent inspection (nearly a year)

The average duration between a ‘requires improvement’ overall rating and the next inspection has noticeably increased over time, growing by 21% between the end of 2019 and 30 July 2024 (as shown in Figure 8 below).

Figure 8: average number of days to re-inspection (under the old framework), after receiving a ‘requires improvement’ or ‘inadequate’ rating, 2015 to 2024

Note: only locations with regulated activity were included. If there was no re-inspection following a ‘requires improvement’ or ‘inadequate’ rating, the age of the rating - as of 31 December for 2015 to 2023, or as of 30 July for 2024 - was taken instead. Records were removed if a supposed ‘next’ inspection started before the previous inspection’s rating was published, or if an inspection’s first site visit was recorded as taking place after its own rating’s publication date.
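
The note above implies a calculation along the following lines, sketched here for a single location. The field names and the handling of locations without a re-inspection (falling back to the rating’s age at the reference date) follow the note; they are illustrative assumptions rather than the exact code used for this analysis.

```python
# Sketch of the re-inspection gap calculation described in the note above.
# Field names and the single-location structure are illustrative assumptions.
from datetime import date
from typing import Optional

def days_to_reinspection(rating_published: date,
                         next_inspection_start: Optional[date],
                         reference_date: date) -> Optional[int]:
    """Days between a 'requires improvement' or 'inadequate' overall rating
    being published and the start of the next inspection. If no re-inspection
    has taken place, the rating's age at the reference date is used instead.
    Records where the supposed 'next' inspection started before the rating was
    published are treated as invalid and excluded (None)."""
    if next_inspection_start is None:
        return (reference_date - rating_published).days
    if next_inspection_start < rating_published:
        return None  # inconsistent record: excluded from the average
    return (next_inspection_start - rating_published).days
```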

Since 2015, NHS acute non-specialist hospital locations have consistently had the longest average time between receiving an ‘inadequate’ overall rating and re-inspection, compared with social care, GP and independent sector locations.

As of 30 July 2024, the average time to re-inspection after an inadequate rating was:

  • 264 days for acute non-specialist NHS hospital locations
  • 157 days for independent sector locations
  • 134 days for GP locations
  • 132 days for social care locations

There were 268 active locations with an ‘inadequate’ rating that had not yet been re-inspected at the time of our analysis (noting that this does not capture ratings issued under the SAF). No location defined as an acute specialist NHS hospital had received an ‘inadequate’ overall rating during the period reviewed.

As of 30 July 2024, the average time to re-inspection after a ‘requires improvement’ overall rating was:

  • 747 days for acute specialist NHS hospital locations
  • 477 days for independent sector locations
  • 450 days for acute non-specialist NHS hospital locations
  • 361 days for social care locations
  • 307 days for GP locations

There were 3,537 active locations with a ‘requires improvement’ rating that had not yet been re-inspected at the time of our analysis (noting that this does not capture ratings issued under the SAF).

CQC told us that all locations will be re-assessed regardless of their rating. The urgency of the re-inspection will be based on several factors including:

  • the previous rating
  • any information of concern submitted by the provider, staff or the public
  • insights from data including risk scores

CQC said it will continue to look at the best methods for collecting the evidence required.

5. Which key questions, quality statements and evidence categories are being prioritised under CQC’s new assessment framework?

Summary of main changes

CQC introduced its new SAF in November 2023, with assessments starting to take place from December 2023 across a small number of locations.

One main difference under the new framework is that each of the 5 key questions is now underpinned by a series of ‘quality statements’. While there are 34 quality statements in total, the new framework is intended to be flexible, meaning that assessments may look at one, several or all of them.

The other difference is that CQC developed 6 categories to describe the types of evidence it may collect. These are:

  • people’s experiences
  • feedback from staff and leaders
  • observations of care
  • feedback from partners
  • processes
  • outcomes of care

Scores for quality statements and evidence categories will be rolled up into key question-level scores, which are expressed using the same single-word rating system as before (outstanding, good, requires improvement and inadequate). These will then be aggregated into an overall-level rating.
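
The shape of the framework, as described above, can be summarised as a simple data structure. The sketch below records only the counts and category names given in this report (the quality statement counts per key question match Table 3 later in this report); it does not attempt to reproduce CQC’s scoring or aggregation rules, which are not set out in the data we hold.

```python
# Shape of the single assessment framework as described in this report:
# 5 key questions, 34 quality statements in total and 6 evidence categories.
# Counts per key question follow Table 3; scoring rules are not modelled here.
QUALITY_STATEMENTS_PER_KEY_QUESTION = {
    "safe": 8,
    "effective": 6,
    "caring": 5,
    "responsive": 7,
    "well led": 8,
}
assert sum(QUALITY_STATEMENTS_PER_KEY_QUESTION.values()) == 34

EVIDENCE_CATEGORIES = [
    "people's experiences",
    "feedback from staff and leaders",
    "observations of care",
    "feedback from partners",
    "processes",
    "outcomes of care",
]
# Each quality statement can draw on up to 6 evidence categories, giving
# 6 x 34 = 204 possible combinations across the framework.
```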

Further information would be required from CQC to map assessment and rating records under the old system to the new assessment model. This is partly due to changes in terminology, but also changes to how CQC categorises locations. For example, it no longer assigns locations a ‘primary inspection category’, but rather an ‘assessment service group’, which is why the location labels below are different to other sections of this report.

Use of key questions

CQC told us that, between 4 December 2023 and 30 July 2024, it had completed 980 assessments under the new framework (unlike other data and analysis presented in this report, this figure includes unpublished assessments).

As shown in Figure 9 below, of the assessments completed up to 30 July 2024:

  • 75% looked at the ‘safe’ key question
  • 38% looked at ‘effective’
  • 49% looked at ‘caring’
  • 60% looked at ‘responsive’
  • 63% looked at ‘well led’

Figure 9: percentage of completed SAF assessments looking at each of the 5 key questions, up to 30 July 2024

The proportion of assessments looking at each key question varies considerably across sectors - for example, as shown in Table 2 below:

  • 90% of adult social care assessments looked at ‘safe’ compared with 22% of primary care assessments
  • 82% of dental assessments looked at ‘effective’ compared with 0% of community health assessments

Table 2: percentage of completed SAF assessments looking at each of the 5 key questions, by sector, up to 30 July 2024

Key question | Adult social care | Community health | Dental | Mental health | Primary care | Secondary and specialist care
Safe | 90% | 33% | 89% | 19% | 22% | 67%
Effective | 39% | 0% | 82% | 16% | 20% | 24%
Caring | 61% | 33% | 78% | 14% | 4% | 0%
Responsive | 57% | 0% | 78% | 14% | 83% | 27%
Well led | 71% | 33% | 99% | 9% | 27% | 64%

Use of quality statements

Across all assessments, 9,058 quality statements were used for 980 assessments, giving an average of 9.2 out of 34 quality statements used per assessment conducted. This varied by sector (as shown in Figure 10 below).

Figure 10: average number of quality statements used in completed SAF assessments, by sector, as of 30 July 2024

Of all quality statements used, 40% (3,580) are part of the ‘safe’ key question, while 13% (1,181) relate to the ‘effective’ key question. For secondary and specialist care assessments carried out in the same period, 63% (115 of the 184 quality statements used) are within the ‘safe’ key question, and only 9% (17 out of 184) within the ‘effective’ key question. These results may be skewed by the fact that there are more ‘safe’ quality statements (8) in the framework than ‘effective’ ones (6).

On average, 3.7 of 8 possible quality statements on ‘safe’ were assessed (46%) while, on average, 1.2 of 6 possible quality statements on ‘effective’ were assessed (20%). Again, this varied by sector (as shown in Table 3 below).
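
These averages and proportions follow from simple ratios, as in the worked check below, which uses the figures quoted in this report.

```python
# Worked check of the averages quoted above, using figures from this report.
total_quality_statements_used = 9_058
completed_assessments = 980
print(round(total_quality_statements_used / completed_assessments, 1))  # 9.2 of a possible 34

# Proportion of the available quality statements assessed, by key question
print(round(3.7 / 8 * 100))  # 46% of the 8 'safe' quality statements
print(round(1.2 / 6 * 100))  # 20% of the 6 'effective' quality statements
```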

Table 3: average number of quality statements used in completed SAF assessments, by key question and sector, as of 30 July 2024

Key question | Maximum possible number of quality statements | All sectors | Adult social care | Community health | Dental | Mental health | Primary care | Secondary and specialist care
Safe | 8 | 3.7 | 4.6 | 1.0 | 2.6 | 0.8 | 1.2 | 3.5
Effective | 6 | 1.2 | 1.5 | 0.0 | 0.8 | 0.4 | 0.8 | 0.5
Caring | 5 | 1.0 | 1.3 | 0.7 | 0.8 | 0.3 | 0.2 | 0.0
Responsive | 7 | 1.3 | 1.4 | 0.0 | 0.8 | 0.4 | 1.3 | 0.3
Well led | 8 | 2.2 | 2.8 | 0.7 | 1.0 | 0.3 | 0.9 | 1.3
Overall | 34 | 9.2 | 11.6 | 2.3 | 6.0 | 2.1 | 4.3 | 5.6

For the key question ‘safe’, CQC most frequently collected data against the quality statement of ‘safe and effective staffing’ (examined in 98% of completed assessments that looked at the key question ‘safe’). ‘Learning culture’ and ‘safe systems, pathways and transitions’ were looked at the least frequently (43% and 30% respectively) (see Figure 11 below).

Figure 11: the proportion looking at each quality statement, of the completed assessments that looked at the ‘safe’ key question, up to 30 July 2024

For the key question ‘effective’, CQC most frequently collected data against the quality statement of ‘assessing needs’ (examined in 78% of completed assessments that looked at the key question ‘effective’). ‘Monitoring and improving outcomes’ and ‘how staff, teams and services work together’ were looked at the least frequently (34% each) (see Figure 12 below).

Figure 12: the proportion looking at each quality statement, of the completed assessments that looked at the ‘effective’ key question, up to 30 July 2024

For the key question ‘caring’, CQC most frequently collected data against the quality statement of ‘independence, choice and control’ (examined in 85% of completed assessments that looked at the key question ‘caring’). ‘Treating people as individuals’ was looked at the least frequently (18%) (see Figure 13 below).

Figure 13: the proportion looking at each quality statement, of the completed assessments that looked at the ‘caring’ key question, up to 30 July 2024

For the key question ‘responsive’, CQC most frequently collected data against the quality statement of ‘equity in experiences and outcomes’ (examined in 69% of completed assessments that looked at the key question ‘responsive’). ‘Planning for the future’ and ‘providing information’ were looked at the least frequently (15% each) (see Figure 14 below).

Figure 14: the proportion looking at each quality statement, of the completed assessments that looked at the ‘responsive’ key question, up to 30 July 2024

For all assessments of the key question ‘well led’, CQC collected data against the quality statement ‘governance, management and sustainability’. However, ‘workforce equality, diversity and inclusion’ was only considered in 30% of completed assessments looking at ‘well led’, and ‘environmental sustainability’ was not looked at in any (see Figure 15 below).

Figure 15: the proportion looking at each quality statement, of the completed assessments that looked at the ‘well led’ key question, up to 30 July 2024

Use of evidence categories

We do not have any data from CQC on the evidence categories it used to inform its completed assessments of the quality statements referred to above.

However, we do have information from CQC on the evidence categories it intends to prioritise in assessments of primary health services, NHS acute and independent hospital services, care homes and supported living, and independent consulting doctors.

There are also different ways to analyse CQC’s intended usage of evidence categories.

One approach looks at the total number of times CQC said it would collect each type of evidence (of which there are 6 categories) against each of the 34 quality statements (giving 204 possible data points). For primary health services, CQC intends to collect up to 100 pieces of data, 2 of which fall within the evidence category ‘outcomes’ (2%).

The second approach looks at each evidence category separately. So, for primary health services, we see that only 2 of 34 statements are likely to be assessed by measuring ‘outcomes’ (6%), while 23 of 34 statements are likely to be assessed by measuring ‘people’s experience’ (68%).
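
The two approaches can be made concrete with a short worked example using the primary health services figures above. The figures are those quoted in this report; the underlying guidance document is not reproduced here.

```python
# Worked illustration of the two approaches described above, using the
# primary health services figures quoted in this report.

# Approach 1: count every intended piece of evidence collection together.
# 6 evidence categories x 34 quality statements = 204 possible data points.
possible_data_points = 6 * 34           # 204
intended_pieces_of_data = 100           # figure quoted for primary health services
outcomes_pieces = 2
print(round(outcomes_pieces / intended_pieces_of_data * 100))  # 2%

# Approach 2: consider each evidence category separately against the 34 statements.
statements_using_outcomes = 2
statements_using_peoples_experience = 23
print(round(statements_using_outcomes / 34 * 100))              # 6%
print(round(statements_using_peoples_experience / 34 * 100))    # 68%
```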

As shown in Tables 4 and 5 below, in NHS acute and independent hospital service assessments, CQC indicated that only 13% of quality statements underpinning the key question ‘safe’ will prioritise ‘outcomes’ evidence, while 100% are likely to prioritise ‘people’s experience’ evidence.

In care home and supported living assessments, CQC indicated that no quality statements underpinning the key questions ‘caring’, ‘responsive’ and ‘well led’ will prioritise ‘outcomes’ evidence.

Table 4: the proportion of quality statements where the evidence category ‘outcomes’ should be used, by key question and sector (based on internal CQC guidance)

Key question | Primary health services | NHS acute and independent hospital services | Care homes and supported living | Independent consulting doctors
Safe | 13% | 13% | 13% | 13%
Effective | 17% | 50% | 17% | 17%
Caring | 0% | 20% | 0% | 0%
Responsive | 0% | 29% | 0% | 0%
Well led | 0% | 13% | 0% | 0%
Total | 6% | 24% | 6% | 6%

Table 5: the proportion of quality statements where the evidence category ‘people’s experience’ should be used, by key question and sector (based on internal CQC guidance)

Key question | Primary health services | NHS acute and independent hospital services | Care homes and supported living | Independent consulting doctors
Safe | 75% | 100% | 75% | 88%
Effective | 100% | 100% | 100% | 100%
Caring | 80% | 80% | 80% | 80%
Responsive | 86% | 100% | 86% | 86%
Well led | 13% | 13% | 13% | 13%
Total | 68% | 76% | 70% | 71%

Both approaches to analysing CQC’s intended usage of evidence categories suggest it is less likely to focus on ‘outcomes’ (such as how care has affected people’s physical, functional or psychological status), relative to ‘people’s experience’ (such as their satisfaction with the services received), when conducting assessments.

Case study

To illustrate CQC’s approach to aggregating ratings, we randomly selected a location that had undergone a focused inspection. A location with a focused inspection was chosen because it illustrates how ratings from an inspection covering fewer than 5 key questions, and/or a subset of services, are aggregated with ratings carried forward from a previous comprehensive inspection.

This location was Wythenshawe Hospital (location ID R0A07). Wythenshawe is an NHS acute hospital, provided and run by Manchester University NHS Foundation Trust.

It delivers 8 services as defined by CQC: medical care (including older people’s care), services for children and young people, critical care, end-of-life care, maternity, outpatients, surgery, and urgent and emergency care.

Comprehensive inspection (2018 to 2019)

In 2019, Wythenshawe Hospital was assigned an overall rating of ‘good’. This was based on a comprehensive inspection of all 8 services in 2018; the resulting ratings for each of the 5 key questions (safe, effective, caring, responsive and well led) and for each service are shown in Table 6 below.

Table 6: Wythenshawe Hospital - ratings overall, by key question and by service, 2019

Service | Safe | Effective | Caring | Responsive | Well led | Overall service rating
Urgent and emergency care | Requires improvement | Requires improvement | Good | Requires improvement | Good | Requires improvement
Medical care (including older people’s care) | Good | Good | Good | Requires improvement | Good | Good
Surgery | Good | Good | Good | Good | Good | Good
Critical care | Good | Good | Outstanding | Outstanding | Good | Outstanding
Maternity | Good | Good | Good | Good | Good | Good
Services for children and young people | Good | Good | Good | Outstanding | Requires improvement | Good
End-of-life care | Good | Good | Outstanding | Good | Good | Good
Outpatients | Good | Not rated | Good | Good | Good | Good
Overall hospital rating | Good | Good | Outstanding | Requires improvement | Good | Good

Focused inspection (2023)

In 2022, CQC announced its focused National Maternity Inspection Programme. Under this programme, all NHS acute hospital maternity services that had not been inspected and rated since April 2021 were to be re-inspected across the ‘safe’ and ‘well led’ key questions.

The maternity service in Wythenshawe Hospital was therefore re-inspected in 2023. Its 2019 ratings for ‘safe’ and ‘well led’ were downgraded from ‘good’ to ‘inadequate’ and ‘requires improvement’ respectively. CQC did not reinspect the ‘effective’, ‘caring’ or ‘responsive’ key questions, and so the maternity service’s ratings for these from 2019 were carried forward to 2023 (as shown in Table 7 below).

Table 7: Wythenshawe Hospital - maternity service ratings in 2019 and 2023

Year | Inspection type | Safe | Effective | Caring | Responsive | Well led | Overall
2019 | Comprehensive | Good | Good | Good | Good | Good | Good
2023 | Focused | Inadequate | Not inspected: carry forward ‘good’ rating from 2019 | Not inspected: carry forward ‘good’ rating from 2019 | Not inspected: carry forward ‘good’ rating from 2019 | Requires improvement | Requires improvement

This focused inspection of 2 of the 5 key questions (safe and well led) for 1 of the 8 services (maternity), as shown in Table 8 below, resulted in the downgrading of Wythenshawe Hospital’s overall rating (as shown in Table 9 below).

While Wythenshawe Hospital’s overall rating is dated 2023 (following the focused inspection of maternity services), the majority of its service-level and key question-level ratings derive from inspections conducted in 2018 and published in 2019 (following the last comprehensive inspection of all services).

Table 8: Wythenshawe Hospital - number of key questions rated by service, 2019 and 2023

Service | Key questions rated 2019 | Key questions rated 2023
Maternity | All | 2 out of 5 rated (safe and well led); 3 out of 5 not rated: carry forward 2019 ratings
Urgent and emergency care | All | None: carry forward 2019 ratings
Medical care | All | None: carry forward 2019 ratings
Surgery | All | None: carry forward 2019 ratings
Critical care | All | None: carry forward 2019 ratings
Services for children and young people | All | None: carry forward 2019 ratings
End-of-life care | All | None: carry forward 2019 ratings
Outpatients | 4 out of 5 rated | None: carry forward 2019 ratings

Table 9: Wythenshawe Hospital - overall and key question-level ratings, 2019 and 2023

Year | Safe | Effective | Caring | Responsive | Well led | Overall
2019 | Good | Good | Outstanding | Requires improvement | Good | Good
2023 | Requires improvement (based on 7 out of 8 services assessed in 2019, and 1 out of 8 services assessed in 2023) | Good (carried forward from 2019) | Outstanding (carried forward from 2019) | Requires improvement (carried forward from 2019) | Requires improvement (based on 7 out of 8 services assessed in 2019, and 1 out of 8 services assessed in 2023) | Requires improvement

Impact at trust level

The picture becomes even more complex at trust level. Under the previous inspection framework:

  • the rating for ‘well led’ was based on CQC’s inspection at trust level, taking into account what it found in individual services across the trust
  • the ratings for ‘safe’, ‘effective’, ‘caring’ and ‘responsive’ were assigned by combining hospital-level ratings, and using CQC’s “professional judgement” (we do not have further information on what ‘professional judgement’ means)

For example, Manchester University NHS Foundation Trust was established in 2017. At the point of its first inspection in 2018, the trust had 9 hospital sites:

  • Wythenshawe Hospital
  • Manchester Royal Eye Hospital
  • Manchester Royal Infirmary
  • Royal Manchester Children’s Hospital
  • Saint Mary’s Hospital
  • the University Dental Hospital
  • Altrincham Hospital
  • Trafford General Hospital
  • Withington Community Hospital

The trust also has an inpatient child and adolescent mental health unit, and a community child and adolescent mental health service.

The ratings for the 9 hospitals, mental health and community services were combined to give Manchester University NHS Foundation Trust a published rating in 2019 of ‘good’ overall.

Since this rating was published in 2019:

  • North Manchester General Hospital moved into the trust in 2020
  • Wythenshawe Hospital ratings were updated in 2023, following the focused inspection of its maternity service
  • Saint Mary’s Hospital ratings were updated in 2023, following the focused inspection of its maternity service

However, on CQC’s website, Manchester University NHS Foundation Trust’s ratings have not been updated to reflect these developments.