Testing - what to be aware of

A common case used to identify gaps in knowledge on serology testing
COVID-19
testing
serology
Author

Jeffrey Post

Published

April 12, 2020

Motivation for write-up

The real-world motivation for this write-up can be found in the Story Time section below, but I first wanted to give a bit of theoretical background here.

The importance of testing has been talked about a great deal these last few weeks and months with the emergence of the COVID-19 pandemic, with numerous published articles underlining it. The point emphasized is that early testing allows for quick isolation of sick individuals and tracing of their potential contacts, thus limiting the potential for spread.

The tests used for this are called virologic tests and check directly for the presence of the virus in an individual (active infection). This is done with Nucleic Acid Tests (NAT), usually after amplification of the very small amount of genetic material present via Polymerase Chain Reaction (PCR). Results are available within hours or days and require diagnostic machinery and specialists.

Knowing who has been infected is also important, as it could allow already recovered patients (who are thought to gain immunity from COVID-19) to return safely to work and live basically normally. Tests that check for past infections exist, and are called serology or antibody tests. They check for specific antibodies that match those developed during an immune response against SARS-CoV-2.

This is all good in theory, but with a disease that can cause conditions as serious as COVID-19 can, we need to be sure a positive test means for certain that the person is now immune, or we risk allowing individuals with false positives to return to normal life when they should not, continuing the damaging spread of the disease.

The aim of this short write-up is to clear up some misconceptions around testing protocols and to discuss false positives, false negatives, and their importance in guiding public health policies. The idea is basically to answer the following questions:

  • How many tests should return positive for a person to be, say, 95% or 99% sure they are now immune?
  • What if a different test is negative?

Specificity, Sensitivity, False positives, False negatives?

As briefly explained above, neither virological nor serological tests are infallible. False positives, i.e. healthy individuals with a positive test, and false negatives, i.e. infected individuals with negative tests, can and do happen.

There are numerous reasons why this can happen, but that is not the point of this write-up. Here, we acknowledge the fact that non-perfect tests are a reality and establish a testing protocol to deal with that fact.

Thankfully, before shipping them out, laboratories test their tests. They are able to characterize them rather precisely and give an indication of how useful they may be with two important values:

  • Specificity
  • Sensitivity

Specificity

Specificity is the true negative rate - i.e. the percentage of healthy people correctly identified as such (for antibody testing, it is the percentage of people not having antibodies correctly identified as such).

In other words, if a test was used on 100 people who do not have antibodies, the number of people correctly identified as not having antibodies is the specificity.

A perfect test with 100% specificity means there are no false positives. This has major implications in the current context of the COVID-19 pandemic, as having an antibody test with 100% specificity would allow immune people to know so for certain (as long as research showed antibodies confer immunity).

Mathematically, we pose specificity as follows:

\(Specificity = \frac{True\ negatives}{True\ negatives + False\ positives}\)

Sensitivity

Sensitivity is the true positive rate - i.e. the percentage of infected people correctly identified as such (for antibody tests, it is the percentage of people having antibodies correctly identified as such).

In other words, if an antibody test was used on 100 people with antibodies, the number of people correctly identified as having antibodies is the sensitivity.

A perfect test with 100% sensitivity means there are no false negatives.

Mathematically, we pose sensitivity as follows:

\(Sensitivity = \frac{True\ positives}{True\ positives + False\ negatives}\)
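
Both definitions can be checked with a short sketch; the confusion-matrix counts below are made up purely for illustration:

```python
# Hypothetical results for 200 tested samples (illustrative numbers only)
true_positives = 90   # have antibodies, test positive
false_negatives = 10  # have antibodies, test negative
true_negatives = 95   # no antibodies, test negative
false_positives = 5   # no antibodies, test positive

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(sensitivity)  # 0.9
print(specificity)  # 0.95
```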

Prevalence

Prevalence is simply the proportion of a population that has a certain characteristic. In the current context of antibody testing, prevalence will be defined as the proportion of people who have antibodies conferring immunity to COVID-19 (i.e. the proportion that has had the disease).

\(Prevalence = \frac{\#\ People\ with\ antibodies}{Total\ number\ of\ people}\)

Where \(Total\ number\ of\ people\) is simply \(\#\ People\ with\ antibodies + \#\ People\ without\ antibodies\)

Story time - Part 1

Specificity, sensitivity, prevalence, false negatives, false positives… This is all good, but it can be a bit abstract outside of a specific testing context.

Let’s use the current COVID-19 pandemic as an example.

Antibody tests are finally becoming available to the general population, and you want to know if you’ve had the disease (i.e. developed antibodies against it).

  • Now let’s say you had influenza like symptoms back in January or February, would you expect a positive or negative result on the test?
  • What if you haven’t been sick but want to check out of curiosity, what result would you expect?
  • If it does come back positive, how certain would you be that you actually have those antibodies and it wasn’t a false positive?
  • You decide to use a second test to make sure, and again it comes back positive. Now how certain are you that you have antibodies?
  • Out of extreme precaution you decide to try a test from another laboratory (different specificity and sensitivity), and this time the test comes back negative. It’s become a bit more complex to evaluate your situation now.
  • So how about another test from this second laboratory? Again, negative.. Two positives, two negatives - what can you make of this information?

However far-fetched this scenario may seem, it is exactly what happened to this Florida physician:

twitter: https://twitter.com/HandtevyMD/status/1245832946612711424

There are two questions that come out of this story:

  • After those 4 tests, what is the probability that Dr. Antevy has those antibodies - or more generally, can we calculate the probability of someone having antibodies given their test results?
  • What should the threshold for such a probability be to minimize the risk of someone without antibodies going out thinking they have them? (Obviously, if someone has 10 positive tests in a row, it seems sure enough that person has antibodies.) This pushes the need for a rigorous testing protocol.

Calculating probabilities given test results

Clearly, our objective is to calculate the probability that a person has antibodies, or:

\(P(seropositive)\)

Conditional probabilities

Bayes’ theorem describes probabilities when given evidence.

Say a person had some COVID-19 symptoms (dry cough, slight fever, loss of smell) a few weeks ago. He might say there is a 75% chance that he contracted COVID-19, and a 25% chance it was another disease. In this case:

\(P(seropositive) = 0.75\)

Now this person goes to get an antibody test. What is the probability he is seropositive given a positive or negative result? Bayes’ theorem allows us to write it as follows:

\(P(seropositive\ |\ positive\ test) = \frac{P(positive\ test\ |\ seropositive)\ *\ P(seropositive)}{P(positive\ test)}\)

and

\(P(seropositive\ |\ negative\ test) = \frac{P(negative\ test\ |\ seropositive)\ *\ P(seropositive)}{P(negative\ test)}\)

Note:

\(P(seropositive)\) is called the prior.

\(P(seropositive\ |\ positive\ test)\) and \(P(seropositive\ |\ negative\ test)\) are called posteriors.

\(P(Positive\ test)\)

Let’s have a look at the probability of getting a positive test - there are two ways to get a positive result:

  • A false positive
  • A true positive

\(P(False\ positive) = P(Positive\ test\ |\ seronegative)*P(seronegative)\)

And

\(P(True\ positive) = P(Positive\ test\ |\ seropositive)*P(seropositive)\)

So:

\(P(Positive\ test) = P(Positive\ test\ |\ seropositive)*P(seropositive) + P(Positive\ test\ |\ seronegative)*P(seronegative)\)
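
As a quick numerical check of this decomposition (all values below are illustrative):

```python
p_pos_given_sero = 0.9  # P(positive test | seropositive) - illustrative
p_pos_given_not = 0.1   # P(positive test | seronegative) - illustrative
p_sero = 0.75           # prior P(seropositive)

# P(positive test) = P(true positive) + P(false positive)
p_positive_test = (p_pos_given_sero * p_sero
                   + p_pos_given_not * (1 - p_sero))
print(round(p_positive_test, 3))  # 0.7
```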

Sensitivity and Specificity revisited

Earlier we saw:

\(Sensitivity = \frac{True\ positives}{True\ positives + False\ negatives}\)

And that

\(Specificity = \frac{True\ negatives}{True\ negatives + False\ positives}\)

But we can rewrite these equations as follows:

\(Sensitivity = P(Positive\ test\ |\ seropositive)\)

And

\(Specificity = P(Negative\ test\ |\ seronegative) = 1-P(Positive\ test\ |\ seronegative)\)

Re-writing the posterior probability

Using Bayes’ rule and the calculations above we can re-write the posterior equations as follows:

\(P(seropositive\ |\ Positive\ test) = \frac{Sensitivity*P(seropositive)}{Sensitivity*P(seropositive)+ (1-Specificity)*(1-P(seropositive))}\)

And:

\(P(seronegative\ |\ Negative\ test) = \frac{Specificity*(1-P(seropositive))}{Specificity*(1-P(seropositive))+(1-Sensitivity)*P(seropositive)}\)
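
Plugging the 75% prior from the symptom example into these two equations, with an illustrative 90% sensitivity / 90% specificity test:

```python
sensitivity, specificity = 0.9, 0.9  # illustrative test characteristics
prior = 0.75                         # P(seropositive) from the symptom example

# Posterior probability of being seropositive after a positive test
p_sero_pos = (sensitivity * prior) / (
    sensitivity * prior + (1 - specificity) * (1 - prior))
# Posterior probability of being seronegative after a negative test
p_not_sero_neg = (specificity * (1 - prior)) / (
    specificity * (1 - prior) + (1 - sensitivity) * prior)

print(round(p_sero_pos, 3))      # 0.964
print(round(p_not_sero_neg, 3))  # 0.75
```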

The role of prevalence in these calculations

The equations above describe the probability for an individual given a test result and their prior probability. This prior probability can be estimated given presence or not of symptoms, contact with other infected individuals, location, other diagnostics, etc…

However, on a population level, if we were to test a random individual, this prior becomes the prevalence and for a random individual, the equations become:

\(P(seropositive\ |\ Positive\ test) = \frac{Sensitivity*Prevalence}{Sensitivity*Prevalence+(1-Specificity)*(1-Prevalence)}\)

And:

\(P(seronegative\ |\ Negative\ test) = \frac{Specificity*(1-Prevalence)}{Specificity*(1-Prevalence)+(1-Sensitivity)*Prevalence}\)
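
To see how strongly prevalence drives the posterior, here is a small sketch of the first equation; the 2% prevalence and the test characteristics are made-up numbers:

```python
def p_sero_given_positive(sensitivity, specificity, prevalence):
    # Bayes' rule for a random individual, using prevalence as the prior
    numerator = sensitivity * prevalence
    return numerator / (numerator + (1 - specificity) * (1 - prevalence))

# Even a rather good test gives a weak posterior at low prevalence
print(round(p_sero_given_positive(0.95, 0.95, 0.02), 3))  # 0.279
```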

Serology testing simulation

Let’s see what these equations look like in practice.

#hide
!pip install plotly==4.6.0
#collapse_hide
import numpy as np
import plotly.express as px
import plotly.graph_objects as go
#collapse_hide
# Posterior probability given prior, test result, and test characteristics
# (sensitivity Sn and specificity Sp)
def Pposterior(Pprior, test_res, Sn, Sp):
  if test_res:
    # Positive result: Bayes' rule, with P(positive test) in the denominator
    return (Sn * Pprior) / (Sn * Pprior + (1-Sp) * (1-Pprior))
  else:
    # Negative result: 1 - P(seronegative | negative test),
    # where P(negative test) = 1 - P(positive test)
    return 1 - ((Sp * (1-Pprior)) / (1 - (Sn * Pprior + (1-Sp) * (1-Pprior))))

Say we have an antibody test with 90% sensitivity and 90% specificity - meaning 90% of seropositive people test positive and 90% of seronegative people test negative - we obtain the graph below:

#collapse_hide

# Below is the prior probability of being infected:
num=10000
Pprior = np.linspace((1/num),(num-1)/num,num=num)

# Graph the results
fig = go.Figure(data=[
    go.Scatter(name='Test negative', x=100*Pprior, y=100*Pposterior(Pprior, False, 0.9, 0.9), line_color="green"),
    go.Scatter(name='Test positive', x=100*Pprior, y=100*Pposterior(Pprior, True, 0.9, 0.9), line_color="red"),
    go.Scatter(name='No test', x=100*Pprior, y=100*Pprior, line_color="blue")
])

fig.update_layout(
    xaxis_title = 'Prior probability of being infected',
    yaxis_title = 'Posterior probability of being infected given test result<br>Specificity=90.0<br>Sensitivity=90.0'
)

fig.show()

If you hover the mouse over the lines you can see the exact numbers.

As you can see, a positive or negative test does give more information than no test, but it doesn’t quite give you certainty.

Story time - Part 2

Let’s circle back to Dr. Antevy with his two positive tests and two negative tests.

Prior to any tests, he was about 50% certain of having contracted COVID-19 based on his assessment of his symptoms, location, contact with other people, etc.

Let’s go through his test results to see what his posterior probability of having antibodies is.

#collapse_hide
# Let's make a new function for multiple tests in a row

def PposteriorM(Pprior, test_res):
  # Apply Bayes' rule sequentially: each posterior becomes the next prior
  x = Pprior
  for tr, sn, sp in test_res:
    if tr == 1:
      # Positive result
      x = (sn * x) / (sn * x + (1-sp) * (1-x))
    elif tr == 0:
      # Negative result
      x = 1 - ((sp * (1-x)) / (1 - (sn * x + (1-sp) * (1-x))))
  return x

Let’s say these are the characteristics of the tests he used:

  • Test 1 and 2:
      • Specificity = 0.90
      • Sensitivity = 0.99
  • Test 3 and 4:
      • Specificity = 0.97
      • Sensitivity = 0.95

So a highly sensitive first test, followed by a rather good all-round test that is a bit more specific than the first.

#collapse_hide

# Below is the prior probability of being infected:
num=10000
Pprior = np.linspace((1/num),(num-1)/num,num=num)

# Test characteristics
test_results = [(1, 0.99, 0.90),(1, 0.99, 0.90),(0,0.95,0.97),(0,0.95,0.97)]

# Graph the results
fig = go.Figure(data=[
    go.Scatter(name='1 - 1st positive test', x=100*Pprior, y=100*PposteriorM(Pprior, [test_results[0]])),
    go.Scatter(name='2 - 2nd positive test', x=100*Pprior, y=100*PposteriorM(Pprior, test_results[0:2])),
    go.Scatter(name='3 - 1st negative test', x=100*Pprior, y=100*PposteriorM(Pprior, test_results[0:3])),
    go.Scatter(name='4 - 2nd negative test', x=100*Pprior, y=100*PposteriorM(Pprior, test_results[0:4]))    
])

fig.update_layout(
    xaxis_title = 'Prior probability of being infected',
    yaxis_title = 'Posterior probability of being infected given test results'
)

fig.show()

So let’s go through step-by-step:

  • Before any test, he was about 50% sure he contracted COVID-19
  • After the 1st positive test, this goes up to 90.8% sure
  • After the 2nd positive test, up to 99.0% sure
  • But the 1st negative test drops it back to 83.5%
  • And the 2nd negative all the way down to 20.7%
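
These step-by-step numbers can be reproduced with a self-contained sketch of the same sequential update, using the test characteristics assumed above:

```python
def update(prior, positive, sensitivity, specificity):
    # One Bayesian update of P(seropositive) for a single test result
    p_positive_test = sensitivity * prior + (1 - specificity) * (1 - prior)
    if positive:
        return sensitivity * prior / p_positive_test
    # P(negative test) = 1 - P(positive test)
    return (1 - sensitivity) * prior / (1 - p_positive_test)

p = 0.5  # Dr. Antevy's prior
tests = [(True, 0.99, 0.90), (True, 0.99, 0.90),
         (False, 0.95, 0.97), (False, 0.95, 0.97)]
posteriors = []
for positive, sn, sp in tests:
    p = update(p, positive, sn, sp)
    posteriors.append(round(100 * p, 1))
print(posteriors)  # [90.8, 99.0, 83.5, 20.7]
```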

What if this were done on a random person in France, for example, and all 4 tests came back positive?

Then the prior would be the prevalence in France (0.2%) instead of 50%, and the step-by-step would be as follows:

  • Before any test: about 0.2%
  • After the 1st positive test: still only a 1.9% chance of being seropositive
  • After the 2nd positive test: only a 16.4% chance of being seropositive
  • After the 3rd positive test: 86%
  • And after the 4th positive test: 99.5%

So it took 4 positive tests for a random person in France to be confident enough that they are seropositive.

Discussion

The results above strongly underline the need for clear testing protocols and clear understanding of the interpretation of test results.

With a disease as devastating as COVID-19 can be, a few things should be kept in mind:

  • A high threshold should be used to hedge the risk of a false positive
  • Multiple tests should be taken
  • Tests with different characteristics should be used (ideally at least one with high sensitivity, and one with high specificity)
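
As a rough sketch of what such a protocol could look like numerically (the 99% threshold and the test characteristics below are illustrative assumptions), one can count how many consecutive positive results are needed before the posterior crosses a chosen threshold:

```python
def positives_needed(prior, sensitivity, specificity, threshold=0.99):
    # Repeatedly apply Bayes' rule for a positive result until the
    # posterior probability of being seropositive crosses the threshold
    p, n = prior, 0
    while p < threshold:
        p = sensitivity * p / (sensitivity * p + (1 - specificity) * (1 - p))
        n += 1
    return n

# Random person in France (0.2% prevalence) with a 95/97 test
print(positives_needed(0.002, 0.95, 0.97))  # 4
```

With a 0.2% prevalence prior, it takes 4 consecutive positives with this test to pass 99%, matching the step-by-step numbers above.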