Technologies Validation Group: using tests to detect COVID-19
When tests to detect COVID-19 are commonly used (use cases) and what to consider when deciding which type of test to employ.
This page sets out where tests to detect COVID-19 (coronavirus) are commonly used (also called a use case) and gives guidance on the points to consider when deciding on the optimal type of test for different use cases. The aim of this guidance is to help users decide the minimum performance standards (how well a test works) they should be considering when establishing a testing programme. This includes information on areas that underpin the performance of testing equipment. Please be aware that setting up a testing scheme and managing any risks associated with the use of tests is entirely the responsibility of the organisation undertaking the testing.
What to consider
This section sets out the 3 key areas that should be considered when picking a test for your use case and what they mean.
Users should be aware that no test is entirely reliable or accurate (that is, always correct). The overall performance and technical accuracy of a test is dependent on its limit of detection, sensitivity and specificity (see below). The consequences of choosing tests with low performance and accuracy should be considered before purchasing tests.
1. Limit of detection
The limit of detection (LOD) is a measure of the lowest concentration (smallest amount) of the viral target (protein or RNA) that can be reliably identified in a sample with a high degree of confidence. Usually, the LOD refers to the amount detected in at least 95 out of 100 attempts (a 95% probability of obtaining a correct result).
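As a rough illustration only (not part of the guidance), the sketch below checks whether a hypothetical set of replicate results at one candidate concentration meets the '95 out of 100' detection criterion described above.

```python
# Illustrative sketch only: hypothetical replicate results at one candidate
# concentration, checked against the ">=95 detections per 100 attempts" criterion.

def meets_lod_criterion(replicate_results, required_hit_rate=0.95):
    """Return True if the share of positive replicates meets the LOD criterion.

    replicate_results: list of booleans, True where the viral target was detected.
    """
    hit_rate = sum(replicate_results) / len(replicate_results)
    return hit_rate >= required_hit_rate

# Example: 97 detections out of 100 replicate tests at this concentration.
replicates = [True] * 97 + [False] * 3
print(meets_lod_criterion(replicates))  # True: 97% is at least 95%
```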
2. Sensitivity
The sensitivity is a measure of how well the test correctly identifies individuals with the coronavirus. The sensitivity can be used to understand the chance that a test will incorrectly give a negative result for someone who actually has coronavirus (that is, someone who would have tested positive if the test was completely accurate). This is called a ‘false negative’.
Tests that are less sensitive are likely to lead to more ‘false negatives’. This means that there’s an increased risk of infectious individuals continuing to be present in the workplace or setting that’s undertaking the testing. Testing more regularly when using a test with low sensitivity can help to reduce this risk, but it will also increase the number of false positives.
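As a simple illustration of this arithmetic (the figures are hypothetical, not drawn from the guidance), the sketch below shows how sensitivity translates into an expected number of false negatives.

```python
# Illustrative arithmetic only: how a test's sensitivity translates into the
# expected number of infected people it reports as negative (false negatives).

def expected_false_negatives(sensitivity, infected_people_tested):
    return (1 - sensitivity) * infected_people_tested

# A test with 80% sensitivity, applied to 250 people who actually have the virus,
# would on average report 50 of them as negative.
print(expected_false_negatives(0.80, 250))  # 50.0
```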
3. Specificity
Specificity is a measure of how well the test correctly identifies individuals without coronavirus. It can be used to understand the chance that a test will incorrectly give a positive result (a ‘false positive’) for someone who does not have coronavirus and would have tested negative if the test was completely accurate.
Tests that are less specific are likely to lead to a greater number of false positives. This may lead to individuals being isolated unnecessarily. It should be noted that, for any test, as the number of people in the population who have the infection (the prevalence) decreases, a greater proportion of the positive results it reports will be false positives.
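The effect of prevalence can be made concrete with a short illustrative calculation. The figures below are hypothetical and are not taken from the performance categories in this guidance.

```python
# Illustrative sketch only: hypothetical figures showing why false positives
# matter more when prevalence is low. Not taken from the guidance tables.

def false_positive_summary(specificity, sensitivity, prevalence, people_tested):
    infected = prevalence * people_tested
    uninfected = people_tested - infected
    true_positives = sensitivity * infected
    false_positives = (1 - specificity) * uninfected
    share_of_positives_that_are_false = false_positives / (true_positives + false_positives)
    return false_positives, share_of_positives_that_are_false

# 10,000 people tested with a 97% specific, 80% sensitive test.
print(false_positive_summary(0.97, 0.80, 0.05, 10_000))  # ~285 false positives; ~42% of positives are false
print(false_positive_summary(0.97, 0.80, 0.01, 10_000))  # ~297 false positives; ~79% of positives are false
```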
Test performance categories
Below are 4 common contexts (use cases) in which tests might be used, together with the minimum performance standards recommended for each of these settings.
| Example uses | 1) Clinical diagnostic testing | 2) Healthcare and public health screening and testing | 3) Workplace, healthcare, and public health screening and testing | 4) Workplace screening and testing |
| --- | --- | --- | --- | --- |
| Sensitivity [Note 1] | ≥97%. Potential to report 3 out of every 100 positive samples as negative | ≥80%. Potential to report 20 out of every 100 positive samples as negative | ≥80%. Potential to report 20 out of every 100 positive samples as negative | ≥68% [Note 2]. Potential to report 32 out of every 100 positive samples as negative |
| Specificity | ≥99%. Potential to report up to 1 out of 100 negative samples as positive | ≥99%. Potential to report up to 1 out of 100 negative samples as positive | ≥97%. Potential to report up to 3 out of 100 negative samples as positive | ≥97%. Potential to report up to 3 out of 100 negative samples as positive |
| Repeat screening or confirmatory testing | Single point in time diagnostic test | Single point in time screening or repeat screening. Positives may require confirmatory testing [Note 3] | Single point in time screening or repeat screening. Positives may require confirmatory testing [Note 3] | Single point in time screening or repeat screening. Positives may require confirmatory testing [Note 3] |
Source: Technologies Validation Group, NHS Test and Trace
- Note 1: Baseline expectation for all tests: ≥95% sensitivity for results where the Ct value is identified as equivalent to 1,000,000 genomic copies/mL (using external calibrants to calibrate the Ct (threshold cycle, the point at which a PCR test becomes positive) value for the comparator PCR (polymerase chain reaction) test).
- Note 2: 95% confidence interval entirely above 60%.
- Note 3: Subject to current guidance at the time of testing.
When setting up a testing programme you must be aware of relevant legal requirements for your setting.
Repeat screening or confirmatory testing
The table sets out when repeat screening is recommended, based on the use case and the sensitivity and specificity of the test used. When tests have low sensitivity, they usually only detect the virus in samples that contain a large amount of it. These samples usually come from the people who are most infectious, but some of these people will still be missed by low sensitivity tests. To increase the chance of detecting them (that is, to reduce false negatives), you can retest people on a regular pattern, such as daily or twice weekly.
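The benefit of repeat screening can be sketched with a simple calculation, assuming (as a simplification) that each round of testing gives an independent chance of detecting an infected person; in practice, repeated results from the same person may be correlated, so this will tend to overstate the gain.

```python
# Illustrative sketch, assuming each screening round is an independent chance of
# detection. Correlated results (for example, persistently low viral load) would
# make repeat screening less effective than this suggests.

def chance_detected_at_least_once(sensitivity, screening_rounds):
    """Probability an infected person tests positive in at least one round."""
    return 1 - (1 - sensitivity) ** screening_rounds

print(chance_detected_at_least_once(0.68, 1))  # 0.68 with a single test
print(chance_detected_at_least_once(0.68, 2))  # ~0.90 across two rounds
print(chance_detected_at_least_once(0.68, 3))  # ~0.97 across three rounds
```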
Tests can also produce false positives. The table above therefore indicates when people who test positive using one of the lower performing tests should get a confirmatory test using the best tests available (category 1). This becomes more important when there are fewer people in the general population with the virus, as the chance that a positive result is false increases.
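Similarly, the value of confirming a positive screening result with a category 1 test can be illustrated as follows, again assuming (as a simplification) that the two tests fail independently for someone who does not have the virus.

```python
# Illustrative sketch, assuming the screening test and the confirmatory test
# fail independently for a person who does not have the virus.

def combined_false_positive_rate(screening_specificity, confirmatory_specificity):
    """Chance an uninfected person is wrongly reported positive by both tests."""
    return (1 - screening_specificity) * (1 - confirmatory_specificity)

# A 97% specific screening test followed by a 99% specific confirmatory test:
# 3% of uninfected people screen positive, but only 0.03% are confirmed positive.
print(combined_false_positive_rate(0.97, 0.99))  # 0.0003
```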
Cycle threshold (Ct)
During the test validation process, the diagnostic sensitivity is carefully assessed by ensuring that samples with high, medium and low amounts of virus are fairly represented. Samples are selected for this exercise based on their real-time, reverse transcriptase PCR result, known as a Ct (cycle threshold) value: samples with Ct<25 are considered to have a high amount of virus, those with Ct between 25 and 30 a medium amount, and those with Ct>30 a low amount.
When considering lower sensitivity tests, use of only samples containing a high amount of virus could lead to an overestimate of test sensitivity, whereas use of only samples with a low amount of virus could lead to an underestimate of test sensitivity; hence, this careful balancing of the samples used for the validation process is paramount.
The table above refers to the full sensitivity of the test (what the test detects when all samples from low to high viral load are included in the analysis), but full technical validation reports may also assess and report the sensitivity of the test for samples with Ct<25. The latter is particularly important for understanding tests designed for identifying individuals likely to have current infection that is most transmissible to others.
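As a rough, hypothetical illustration of how validation samples might be grouped by comparator Ct value, and how sensitivity can be reported both overall and for high viral load (Ct<25) samples only, consider the sketch below; the sample results are made up for illustration.

```python
# Illustrative sketch with made-up validation results: each entry is the
# comparator PCR Ct value and whether the test under evaluation detected it.

def viral_load_band(ct_value):
    """Group a sample by comparator Ct value, following the bands in the text."""
    if ct_value < 25:
        return "high"
    elif ct_value <= 30:
        return "medium"
    return "low"

def sensitivity(samples):
    """Share of positive samples that the candidate test detected."""
    return sum(detected for _, detected in samples) / len(samples)

samples = [(18, True), (22, True), (24, True), (27, True), (29, False), (33, False)]
high_only = [s for s in samples if viral_load_band(s[0]) == "high"]

print(sensitivity(samples))    # ~0.67 overall (4 of 6 detected)
print(sensitivity(high_only))  # 1.0 for Ct<25 samples (3 of 3 detected)
```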
Updates to this page
Last updated 19 October 2021
- Under 'Test performance categories', updated the example use cases to include 'and testing'.
- First published.