Friaaz Azeez gets tested for COVID-19 by a health-care worker at a pop-up testing centre at the Islamic Institute of Toronto during the COVID-19 pandemic in Scarborough, Ont., on Friday, May 29, 2020. The Scarborough Health Network said it is working in conjunction with the ministry and Toronto Public Health to operate the first of the pop-up testing centres at the Islamic Institute of Toronto, in the northeast part of the city. (The Canadian Press/Nathan Denette)
During the COVID-19 pandemic, words and phrases that have typically been limited to epidemiologists and public health professionals have entered the public sphere. Although we've rapidly accepted epidemiology-based news, the public hasn't been given the chance to fully absorb what all these terms really mean.
False negative test results are even more dangerous, as people may think it is safe and appropriate for them to engage in social activities. Of course, factors such as the type of test, whether the individual had symptoms before being tested and the timing of the test can also affect how well the test predicts whether someone is infected.
In the epidemiological context, sensitivity is the proportion of true positives that are correctly identified. If 100 people have a disease, and the test identifies 90 of these people as having the disease, the sensitivity of the test is 90 per cent.
Specificity is the ability of a test to correctly identify those without the disease. If 100 people don't have the disease, and the test correctly identifies 90 people as disease-free, the test has a specificity of 90 per cent.
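Both quantities are simple proportions. A minimal sketch of the two calculations, using the illustrative counts above (the function names are mine, not standard terminology):

```python
# Sensitivity and specificity from the four cells of a 2x2 test table.
# Counts follow the article's example: 100 people with the disease,
# 90 correctly detected; 100 people without it, 90 correctly cleared.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of diseased people the test correctly flags as positive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of disease-free people the test correctly clears."""
    return true_neg / (true_neg + false_pos)

print(sensitivity(true_pos=90, false_neg=10))  # 0.9 -> 90 per cent
print(specificity(true_neg=90, false_pos=10))  # 0.9 -> 90 per cent
```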
This simple table helps outline how sensitivity and specificity are calculated when the prevalence (the percentage of the population that actually has the disease) is 25 per cent (totals in bold):
A test sensitivity of 80 per cent can seem great for a newly released test (as with the made-up case numbers I reported above).
Predictive value
But these numbers don't convey the whole message. The usefulness of a test in a population is not determined by its sensitivity and specificity alone. When we use sensitivity and specificity, we are figuring out how well a test works when we already know which people do, and don't, have the disease.
But the true value of a test in a real-world setting comes from its ability to correctly predict who is infected and who is not. This makes sense because in a real-world setting, we don't know who truly has the disease; we rely on the test itself to tell us. We use the positive predictive value and negative predictive value of a test to summarize that test's predictive ability.
To drive the point home, think about this: in a population in which no one has the disease, even a test that is terrible at detecting anyone with the disease will appear to work great. It will "correctly" identify most people as not having the disease. This has more to do with how many people in a population have the disease (prevalence) than with how well the test works.
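A quick way to see this extreme case in numbers (the population size and specificity here are invented purely for illustration):

```python
# A "terrible" test that misses every true case (sensitivity of zero)
# still looks accurate when nobody in the tested population is infected,
# because overall accuracy then depends only on specificity.

population = 1000   # illustrative population in which no one has the disease
spec = 0.9          # assume 90 per cent of healthy people test negative

true_negatives = population * spec        # 900 correct "negative" results
accuracy = true_negatives / population    # about 90 per cent "correct"
print(accuracy)
```

The test never finds a single case, yet nine out of ten results are still "right", which says everything about the prevalence and nothing about the test.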
Using the same numbers as above, we can estimate the positive predictive value (PPV) and negative predictive value (NPV), but this time we focus on the row totals (in bold).
The PPV is calculated as the number of true positives divided by the total number of people identified as positive by the test.
The PPV is interpreted as the probability that someone who has tested positive actually has the disease. The NPV is the probability that someone who tested negative does not have the disease. Although sensitivity and specificity do not change as the proportion of diseased individuals in a population changes, the PPV and NPV depend heavily on the prevalence.
Let's see what happens when we redraw our disease table with the population prevalence at one per cent instead of 25 per cent (much closer to the true prevalence of COVID-19 in Canada).
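The arithmetic behind both tables can be sketched directly. Since the original tables are not reproduced here, I use the 80 per cent sensitivity mentioned above and assume, for illustration only, a specificity of 90 per cent:

```python
# PPV and NPV computed from sensitivity, specificity and prevalence.
# The 80 per cent sensitivity comes from the article's example;
# the 90 per cent specificity is an assumption for illustration.

def ppv(sens: float, spec: float, prev: float) -> float:
    """Probability that a positive test result is a true positive."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

def npv(sens: float, spec: float, prev: float) -> float:
    """Probability that a negative test result is a true negative."""
    true_neg = spec * (1 - prev)
    false_neg = (1 - sens) * prev
    return true_neg / (true_neg + false_neg)

for prev in (0.25, 0.01):
    print(f"prevalence {prev:.0%}: "
          f"PPV {ppv(0.80, 0.90, prev):.0%}, "
          f"NPV {npv(0.80, 0.90, prev):.0%}")
```

With these assumed numbers, the PPV works out to roughly 73 per cent at 25 per cent prevalence, but collapses to roughly seven per cent at one per cent prevalence, even though the test itself hasn't changed.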
So, when the disease has low prevalence, the PPV of the test can be very low. This means that the probability that someone who tested positive actually has COVID-19 is low. Of course, depending on the sensitivity, specificity and the prevalence in the population, the reverse can be true as well: someone who tested negative might not truly be disease-free.
False positive and false negative tests in real life
What does this mean as mass testing begins for COVID-19? At the very least, it means the public should have clear information about the implications of false positives. All individuals should be aware of the possibility of a false positive or false negative test, especially as we move to a heavier reliance on testing this fall to inform our actions and decisions. As we can see using some simple tables and math above, the PPV and NPV can be limiting even in the face of a "good" test with high sensitivity and specificity.
Without an adequate understanding of the science behind testing and why false positives and false negatives happen, we might drive the public to further mistrust, and even question the usefulness of, public health and testing. Knowledge is power in this pandemic.