
Comparative assessment of multiple COVID-19 serological technologies supports continued evaluation of point-of-care lateral flow assays in hospital and community healthcare settings

  • Suzanne Pickering ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    suzanne.pickering@kcl.ac.uk (SP); blair.merrick@gstt.nhs.uk (BM); katie.doores@kcl.ac.uk (KD)

    ‡ These authors share first authorship on this work.

    Affiliation Department of Infectious Diseases, School of Immunology & Microbial Sciences, King’s College London, London, United Kingdom

  • Gilberto Betancor ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    ‡ These authors share first authorship on this work.

    Affiliation Department of Infectious Diseases, School of Immunology & Microbial Sciences, King’s College London, London, United Kingdom

  • Rui Pedro Galão ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    ‡ These authors share first authorship on this work.

    Affiliation Department of Infectious Diseases, School of Immunology & Microbial Sciences, King’s College London, London, United Kingdom

  • Blair Merrick ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    suzanne.pickering@kcl.ac.uk (SP); blair.merrick@gstt.nhs.uk (BM); katie.doores@kcl.ac.uk (KD)

    ‡ These authors share first authorship on this work.

    Affiliation Centre for Clinical Infection and Diagnostics Research, Department of Infectious Diseases, Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom

  • Adrian W. Signell ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Validation, Visualization, Writing – original draft, Writing – review & editing

    ‡ These authors share first authorship on this work.

    Affiliation Department of Infectious Diseases, School of Immunology & Microbial Sciences, King’s College London, London, United Kingdom

  • Harry D. Wilson ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Validation, Visualization, Writing – original draft, Writing – review & editing

    ‡ These authors share first authorship on this work.

    Affiliation Department of Infectious Diseases, School of Immunology & Microbial Sciences, King’s College London, London, United Kingdom

  • Mark Tan Kia Ik,

    Roles Data curation, Investigation, Methodology

    Affiliation Centre for Clinical Infection and Diagnostics Research, Department of Infectious Diseases, Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom

  • Jeffrey Seow,

    Roles Investigation, Methodology

    Affiliation Department of Infectious Diseases, School of Immunology & Microbial Sciences, King’s College London, London, United Kingdom

  • Carl Graham,

    Roles Investigation, Methodology

    Affiliation Department of Infectious Diseases, School of Immunology & Microbial Sciences, King’s College London, London, United Kingdom

  • Sam Acors,

    Roles Investigation, Methodology

    Affiliation Department of Infectious Diseases, School of Immunology & Microbial Sciences, King’s College London, London, United Kingdom

  • Neophytos Kouphou,

    Roles Investigation, Methodology

    Affiliation Department of Infectious Diseases, School of Immunology & Microbial Sciences, King’s College London, London, United Kingdom

  • Kathryn J. A. Steel,

    Roles Investigation, Methodology

    Affiliation Centre for Inflammation Biology and Cancer Immunology (CIBCI), Dept of Inflammation Biology, School of Immunology & Microbial Sciences, King’s College London, London, United Kingdom

  • Oliver Hemmings,

    Roles Investigation, Methodology

    Affiliation Department of Immunobiology, School of Immunology and Microbial Sciences, Faculty of Life Sciences and Medicine, King’s College London, London, United Kingdom

  • Amita Patel,

    Roles Data curation, Project administration, Resources

    Affiliation Centre for Clinical Infection and Diagnostics Research, Department of Infectious Diseases, Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom

  • Gaia Nebbia,

    Roles Conceptualization, Data curation, Methodology, Project administration

    Affiliation Centre for Clinical Infection and Diagnostics Research, Department of Infectious Diseases, Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom

  • Sam Douthwaite,

    Roles Conceptualization, Data curation, Project administration, Resources

    Affiliation Centre for Clinical Infection and Diagnostics Research, Department of Infectious Diseases, Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom

  • Lorcan O’Connell,

    Roles Data curation

    Affiliation Centre for Clinical Infection and Diagnostics Research, Department of Infectious Diseases, Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom

  • Jakub Luptak,

    Roles Investigation, Resources

    Affiliation MRC Laboratory of Molecular Biology, Cambridge, United Kingdom

  • Laura E. McCoy,

    Roles Investigation, Methodology, Resources, Writing – review & editing

    Affiliation Division of Infection and Immunity, University College London (UCL), London, United Kingdom

  • Philip Brouwer,

    Roles Resources

    Affiliation Department of Medical Microbiology, Amsterdam UMC, University of Amsterdam, Amsterdam Infection & Immunity Institute, Amsterdam, the Netherlands

  • Marit J. van Gils,

    Roles Resources

    Affiliation Department of Medical Microbiology, Amsterdam UMC, University of Amsterdam, Amsterdam Infection & Immunity Institute, Amsterdam, the Netherlands

  • Rogier W. Sanders,

    Roles Resources

    Affiliation Department of Medical Microbiology, Amsterdam UMC, University of Amsterdam, Amsterdam Infection & Immunity Institute, Amsterdam, the Netherlands

  • Rocio Martinez Nunez,

    Roles Conceptualization

    Affiliation Department of Infectious Diseases, School of Immunology & Microbial Sciences, King’s College London, London, United Kingdom

  • Karen Bisnauthsing,

    Roles Data curation, Methodology

    Affiliation Centre for Clinical Infection and Diagnostics Research, Department of Infectious Diseases, Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom

  • Geraldine O’Hara,

    Roles Conceptualization, Data curation, Methodology

    Affiliation Centre for Clinical Infection and Diagnostics Research, Department of Infectious Diseases, Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom

  • Eithne MacMahon,

    Roles Conceptualization, Data curation, Methodology

    Affiliation Centre for Clinical Infection and Diagnostics Research, Department of Infectious Diseases, Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom

  • Rahul Batra,

    Roles Conceptualization, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Visualization

    Affiliation Centre for Clinical Infection and Diagnostics Research, Department of Infectious Diseases, Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom

  • Michael H. Malim,

    Roles Conceptualization, Funding acquisition, Methodology, Supervision, Writing – review & editing

    Affiliation Department of Infectious Diseases, School of Immunology & Microbial Sciences, King’s College London, London, United Kingdom

  • Stuart J. D. Neil,

    Roles Conceptualization, Funding acquisition, Methodology, Project administration, Supervision, Writing – review & editing

    Affiliation Department of Infectious Diseases, School of Immunology & Microbial Sciences, King’s College London, London, United Kingdom

  • Katie J. Doores ,

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Resources, Supervision, Validation, Writing – original draft, Writing – review & editing

    suzanne.pickering@kcl.ac.uk (SP); blair.merrick@gstt.nhs.uk (BM); katie.doores@kcl.ac.uk (KD)

    Affiliation Department of Infectious Diseases, School of Immunology & Microbial Sciences, King’s College London, London, United Kingdom

  • Jonathan D. Edgeworth

    Roles Conceptualization, Funding acquisition, Methodology, Project administration, Resources, Supervision, Writing – review & editing

    Affiliations Department of Infectious Diseases, School of Immunology & Microbial Sciences, King’s College London, London, United Kingdom, Centre for Clinical Infection and Diagnostics Research, Department of Infectious Diseases, Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom


Abstract

There is a clear requirement for an accurate SARS-CoV-2 antibody test, both as a complement to existing diagnostic capabilities and for determining community seroprevalence. We therefore evaluated the performance of a variety of antibody testing technologies and their potential use as diagnostic tools. Highly specific in-house ELISAs were developed for the detection of anti-spike (S), -receptor binding domain (RBD) and -nucleocapsid (N) antibodies and used for the cross-comparison of ten commercial serological assays—a chemiluminescence-based platform, two ELISAs and seven colloidal gold lateral flow immunoassays (LFIAs)—on an identical panel of 110 SARS-CoV-2-positive samples and 50 pre-pandemic negatives. There was a wide variation in the performance of the different platforms, with specificity ranging from 82% to 100%, and overall sensitivity from 60.9% to 87.3%. However, the head-to-head comparison of multiple sero-diagnostic assays on identical sample sets revealed that performance is highly dependent on the time of sampling, with sensitivities of over 95% seen in several tests when assessing samples from more than 20 days post onset of symptoms. Furthermore, these analyses identified clear outlying samples that were negative in all tests, but were later shown to be from individuals with the mildest disease presentation. Rigorous comparison of antibody testing platforms will inform the deployment of point-of-care technologies in healthcare settings and their use in the monitoring of SARS-CoV-2 infections.

Author summary

PCR-based throat and nose swab tests for novel coronavirus (SARS-CoV-2) establish if someone is infected with the virus, while antibody tests can determine whether someone has had it in the past. However, for diagnosis later in disease, or in delayed-onset syndromes such as paediatric inflammatory multisystem syndrome (PIMS), antibody tests could form an important part of hospital diagnostic capabilities. They will also be essential for patient management strategies and community seroprevalence studies. We have conducted unbiased, head-to-head comparisons of ten commercial antibody test kits, using blood from patients admitted to hospital with COVID-19 throughout the peak of the epidemic in London, UK. As there was no existing approved diagnostic antibody test, we developed our own sensitive assay and used this to cross-compare the commercial tests. There was a broad range of performance among the tests, but all gave the best results when used 20 days or more after the start of symptoms. Furthermore, antibody levels were higher in individuals with severe illness compared to those with asymptomatic or mild disease. Some of the best-performing tests were rapid lateral flow immunoassays, which are affordable, quick and easy to use, and if they are deployed appropriately could have considerable utility in healthcare settings.

Introduction

As of the 1st of June 2020, over 6 million cases of SARS-CoV-2 have been confirmed worldwide, accounting for more than 370,000 deaths (https://covid19.who.int/). The lack of treatments or vaccines has forced governments to adopt strict quarantine strategies in an attempt to control the spread of the virus, causing major economic disruption as well as adversely affecting quality of life and healthcare provision.

Current guidelines by leading health bodies, including the Centers for Disease Control and Prevention (CDC) in the US and Public Health England (PHE) in the UK, recommend SARS-CoV-2 diagnosis from upper or lower respiratory specimens (including nasopharyngeal swabs and bronchoalveolar lavage) using real-time RT-PCR, typically targeting the nucleocapsid (N) or RNA-dependent RNA polymerase (RdRp) genes [1,2]. These tests are highly sensitive—capable of detecting vestigial viral RNA levels—and are optimal for the early detection of the virus. However, the performance of the test is dependent on the time the sample is collected, with viral load declining after the first week of symptoms [3,4].

There is, therefore, a clear requirement for accurate serology testing as a companion diagnostic to PCR-based testing. This is highlighted by the recent appearance of clusters of paediatric inflammatory multisystem syndrome (PIMS) and other hyperimmune reactions associated with SARS-CoV-2 infection [5–7]. Presentation is delayed relative to active viral infection, with the detection of antibody responses being key to clinical diagnosis. In addition, monitoring population seroprevalence will be central to future public health planning based on disease susceptibility and herd immunity [8]. For this to be meaningful, it is imperative that antibody detection methods are affordable, reliable, and readily accessible.

However, with an incomplete knowledge of the immunology of COVID-19, evaluating tests on the assumption that antibodies ‘should’ be present, and benchmarking them against RT-PCR, is problematic. Head-to-head comparisons of multiple sero-diagnostic assays on identical samples therefore provide a robust assessment of individual assay performance. Accordingly, we developed a highly specific semi-quantitative ELISA for the detection of anti-spike (S), -S receptor binding domain (RBD) and -N antibodies, and used this to cross-evaluate ten commercial antibody tests (seven lateral flow immunoassays (LFIAs), one chemiluminescent assay and two ELISAs) on a collection of 110 serum samples from confirmed RNA-positive patients, and 50 pre-pandemic samples from March 2019. Our results demonstrate a wide variation in the performance of the different platforms, ranging from 60.9% to 87.3% in sensitivity and from 82% to 100% in specificity. As expected, performance was highly dependent on the time the sample was taken post onset of symptoms (POS) and on disease severity. The results obtained in this work have enabled the diagnostic-grade validation of one of the evaluated LFIAs for pilot clinical use in adult and paediatric patients with a range of clinically suspected COVID-19 inflammatory syndromes at Guy’s and St Thomas’ Hospitals.

Results

Study samples

110 serum samples collected from 87 individuals between the 4th of March and 21st of April 2020 at St Thomas’ Hospital were used to compare a panel of serological assays. At the time of study, UK government guidelines limited SARS-CoV-2 testing to individuals requiring hospitalisation, and all 87 individuals had RT-PCR-confirmed SARS-CoV-2 infection. Samples were representative of typical hospital admissions during the period, with a spectrum of clinical severities from mild (requiring no respiratory support) to critical (requiring extra-corporeal membrane oxygenation (ECMO)), and a range of time points after self-reported onset of symptoms (1 to 30 days) (Table 1).

Table 1. Baseline clinical characteristics of the patient cohort.

Clinical characteristics of 87 SARS-CoV-2-positive individuals admitted to St Thomas’ Hospital in March and April 2020.

https://doi.org/10.1371/journal.ppat.1008817.t001

In-house ELISA development

In-house ELISAs were developed to measure antibody responses against the full-length S, the RBD and N. Recombinant S and RBD were expressed in HEK 293F cells and purified by affinity and size exclusion chromatography. N was expressed in and purified from E. coli. A total of 320 pre-pandemic serum samples from several cohorts were used to determine the lower limit of the assay, including sera from individuals attending St Thomas’ Hospital in March 2019, sera from vaccination studies, cancer patients, healthy volunteers, and individuals with acute EBV infection (S1 Table). Samples from hospital patients with confirmed SARS-CoV-2 infection (from the serum samples described in Table 1) were used as positive controls. All 24 serum samples from PCR-positive ICU patients taken at least 10 days post onset of symptoms showed strong IgG binding to S, RBD and N (Fig 1, S1 Table). In contrast, although high IgM reactivity to S and RBD was also observed in some individuals, only 5 of the 320 negative control samples showed IgG reactivity to S or RBD (Fig 1, S1 Table). High IgM and IgG reactivity against N was observed in the pre-pandemic samples (Fig 1), suggestive of potential cross-reactivity with seasonal coronaviruses. Of note, all individuals with acute EBV infection had high IgM reactivity to N and RBD. Importantly, none of the pre-COVID sera had detectable IgG binding to both N and S, or to both N and RBD. Taking into account the reactivity of negative control samples in this ELISA, 100% specificity could be reached using a cut-off where IgG against both N and S have OD values at least 4-fold above those of wells containing secondary antibody only (Fig 1B, S1 Table).
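The dual-antigen cut-off described above can be sketched in code. This is an illustrative reconstruction, not the authors' analysis pipeline: the `call_positive` rule and the fold-change values below are hypothetical examples of how requiring IgG reactivity to N plus S (or N plus RBD) excludes samples that cross-react with a single antigen.

```python
# Sketch of the combined cut-off: a sample is called positive only if its IgG
# OD fold-change over background is >= 4 for N AND for S (or for the RBD).
# All numeric values here are hypothetical, for illustration only.

def call_positive(fold_n: float, fold_s: float, fold_rbd: float,
                  cutoff: float = 4.0) -> bool:
    """Positive call requires IgG reactivity to N plus S, or N plus RBD."""
    return fold_n >= cutoff and (fold_s >= cutoff or fold_rbd >= cutoff)

def specificity(negatives: list[tuple[float, float, float]]) -> float:
    """Fraction of known-negative samples correctly called negative."""
    true_neg = sum(1 for n, s, r in negatives if not call_positive(n, s, r))
    return true_neg / len(negatives)

# Hypothetical pre-pandemic samples (fold_n, fold_s, fold_rbd): the first shows
# N cross-reactivity alone, which the combined cut-off correctly ignores.
pre_pandemic = [(6.2, 1.1, 0.9), (1.0, 1.3, 1.1), (0.8, 0.7, 0.9), (2.1, 1.0, 1.2)]
print(specificity(pre_pandemic))  # → 1.0 for this toy panel
```

A single-antigen cut-off on N alone would miscall the first toy sample as positive, which mirrors the cross-reactivity with seasonal coronaviruses reported above.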

Fig 1. Validation of a SARS-CoV-2 ELISA for measuring IgG and IgM in Covid-19 patients.

(A) 320 pre-pandemic serum/plasma samples were assayed for IgM and IgG against SARS-CoV-2 stabilised and uncleaved S protein, the RBD from S, and N. Sera/plasma came from emergency admissions to St Thomas’ hospital in March 2019 (STH healthy, n = 105), individuals with acute EBV infection (n = 9), vaccine trials (HIRD cohort [16], n = 44), cancer patients (n = 61), and healthy volunteers (UCL healthy, n = 101). The IgM and IgG binding were compared to that of 24 SARS-CoV-2 PCR positive ICU patients collected in March and April 2020. Sera and plasma were diluted to 1:50 and 1:25 respectively. The values reported are the fold-change in OD above background. (B) Analysis of IgG binding of pre-Covid-19 human sera/plasma (n = 320) compared to sera from SARS-CoV-2 infected patients (n = 24) revealed that IgG against both N and S distinguished these two groups. The fold-change above background for IgG against N was plotted against the fold-change above background for S IgG binding and RBD IgG binding for each individual. (C) Analysis of IgM binding of pre-Covid-19 human sera/plasma (n = 320) compared to sera from SARS-CoV-2 infected patients (n = 24).

https://doi.org/10.1371/journal.ppat.1008817.g001

Comparison of serological tests

All serological assays were evaluated with the same set of 110 serum samples from confirmed SARS-CoV-2-positive individuals. Each of the samples was tested: for anti-SARS-CoV-2 IgM by in-house ELISA and seven LFIAs; for IgA by commercial ELISA (Fig 2); and for IgG by in-house ELISA, seven LFIAs and a commercial ELISA, and for total antibody (IgG, IgM and IgA) using a chemiluminescent assay (Fig 3). The commercial ELISAs detect anti-S1 antibodies; for the LFIAs the target antigen is undisclosed proprietary information, but for some of the tests it is known to be S. With no existing standardised diagnostic test for the assessment of the serological response to SARS-CoV-2, we started by comparing commercial serological assays with the configuration of the in-house ELISA most likely to represent the antibodies detected by the commercial tests (detection of anti-S IgM and IgG antibodies), and that had high specificity and sensitivity (S1 Table). For the purpose of illustration, the intensity of bands in a positive LFIA test, or the signal strength in the commercial ELISA or chemiluminescent assay, is reproduced as a heatmap; however, the visible detection of a band, or a result above a given manufacturer’s threshold, was scored as a positive result regardless of classification. For samples giving a strong response by ELISA (>9-fold), the majority of the commercial assays showed a consensus positive result. Weaker antibody responses yielded mixed results from the LFIAs, with a clear pattern of increasing antibody detection seen across all tests with increasing time POS. Samples from days 1–9 gave an extremely mixed picture, indicative of an early, evolving immune response. For the detection of IgM, 11 of the 72 samples from day 10 or more POS were negative by anti-S ELISA. 5 of these 11 samples were positive, in some cases only weakly, in at least two LFIAs and/or for IgA. The remaining 6 were negative in at least 6 LFIA tests and for IgA (indicated with a yellow circle in Fig 2).

These same 6 samples were negative for IgG by in-house and commercial ELISA, in all 7 LFIAs and by chemiluminescent assay (indicated with a yellow circle in Fig 3). Importantly, all 6 samples came from individuals with a disease severity score of 0. Later time points were obtained for three of these individuals (both of the day 10 samples and the sixth, a day 14 sample), and the next available samples (days 47, 77 and 51 POS, respectively) were found to be strongly positive for IgG and IgM by LFIA. Further investigation into the negative day 23 sample revealed an error in self-reporting the time of symptom onset: this individual had had COVID-19-compatible symptoms for 23 days prior to sampling, yet tested negative for RNA at 10 days POS and positive at 21 days POS; the more likely time of sampling was therefore between 2 and 12 days POS. Samples were not re-classified, as they are representative of the real-time analyses being performed during the peak of a pandemic. However, the day 23 sample was omitted from sensitivity calculations.

Fig 2. Comparison of nine serological assays for the detection of anti-SARS CoV-2 IgM and IgA.

110 serum samples from 87 individuals with confirmed SARS-CoV-2 infection (RNA+ by RT-PCR) were assayed for anti-SARS CoV-2 IgM using an in-house anti-S ELISA (shown in the graph across the top of each panel, black bars), seven colloidal gold lateral flow tests (Deep Blue, Accu-Tell, GenBody, SureScreen, Spring, Biohit and Medomics), and for anti-S1 IgA using a commercial ELISA (EUROIMMUN). The threshold for a positive result in the in-house ELISA is set at 4-fold above background, as indicated by the red dashed line. Results for the other tests are represented as heatmaps, with colour intensity corresponding to strength of signal for each test. For EUROIMMUN, scores of <0.8 are negative, ≥0.8 to <1.1 are borderline, ≥1.1 to <4 are positive, and ≥4 are strong positive. Samples are grouped according to days post onset of COVID-19 symptoms, and squares aligned in columns under each bar of the graph show results for a single serum sample. Yellow circles indicate samples from 10 days or more POS that were negative by ELISA and in at least 6 other tests, as detailed in the text.

https://doi.org/10.1371/journal.ppat.1008817.g002

Fig 3. Comparison of ten serological assays for the detection of anti-SARS CoV-2 IgG.

As in Fig 2, the same 110 serum samples were assessed for the presence of anti-SARS CoV-2 IgG. Each sample was assayed using an in-house anti-S ELISA (shown in the graph across the top of each panel, black bars), seven colloidal gold lateral flow tests (Deep Blue, Accu-Tell, GenBody, SureScreen, Spring, Biohit and Medomics), and a commercial ELISA (EUROIMMUN). A chemiluminescent assay for total anti-SARS CoV-2 IgM, IgG and IgA (Watmind) was also included. The threshold for a positive result in the in-house ELISA is set at 4-fold above background, as indicated by the red dashed line. Results for the other tests are represented as heatmaps, with colour intensity corresponding to strength of signal for each test. For EUROIMMUN, scores of <0.8 are negative, ≥0.8 to <1.1 are borderline, ≥1.1 to <4 are positive, and ≥4 are strong positive. For Watmind, scores <1 are negative, >1 to <10 are positive, and >10 to 100 are strong positive. Yellow circles indicate samples from 10 days or more POS that were negative by ELISA and in all commercial tests.

https://doi.org/10.1371/journal.ppat.1008817.g003

Overall, with the exception of GenBody and Watmind, all tests gave cross-assay agreements between 81.8% and 95.4% (Fig 4 and S1 Fig). The highest level of agreement was seen between the in-house ELISA, SureScreen, Accu-Tell, Spring and EUROIMMUN tests. Interestingly, the in-house ELISA IgM and EUROIMMUN IgA results showed particularly good agreement (94.5%), although the EUROIMMUN assay detected IgA more frequently in early samples than the in-house ELISA detected IgM (Fig 2).
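The cross-assay agreement metric above can be made concrete with a short sketch. This is an assumed reading of the metric (the percentage of samples on which two assays return the same positive/negative call); the call vectors below are hypothetical, not data from the study.

```python
# Sketch of pairwise percent agreement between two serological assays.
# Calls are coded 1 = positive, 0 = negative; the vectors are hypothetical.

def percent_agreement(calls_a: list[int], calls_b: list[int]) -> float:
    """Percentage of samples on which two assays give the same call."""
    assert len(calls_a) == len(calls_b), "assays must be run on identical samples"
    matches = sum(a == b for a, b in zip(calls_a, calls_b))
    return 100.0 * matches / len(calls_a)

# Hypothetical calls for ten shared samples: the two assays disagree on one
# early sample where IgA is detectable before IgM, as noted in the text.
elisa_igm     = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
euroimmun_iga = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
print(percent_agreement(elisa_igm, euroimmun_iga))  # → 90.0
```

Computing this for every assay pair over the shared 110-sample panel yields a symmetric matrix of the kind shown in Fig 4.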

Fig 4. Comparative tables.

Serological assays were compared pairwise; the percentage agreement between assay results across all samples is shown in each box.

https://doi.org/10.1371/journal.ppat.1008817.g004

Assay specificity and sensitivity

Results from the 110 SARS-CoV-2-positive samples and an identical set of 50 pre-pandemic negative samples were used to evaluate assay sensitivities and specificities, with extended specificity assessments using larger sample numbers performed on selected tests (Fig 5A, S2 Fig and S2–S5 Tables). Cross-comparison of overall specificities and sensitivities led to the shortlisting of six tests with the highest specificity and sensitivity (in-house ELISA IgM and IgG, SureScreen, Accu-Tell, Spring and EUROIMMUN IgA). These were the same tests that gave the best agreement in the cross-assay comparisons. Notably, sensitivity increased for all tests with increasing days POS, with antibodies being variably detected at early times (Fig 5B). Deep Blue, Accu-Tell, SureScreen and Spring also displayed the highest levels of sensitivity at less than 10 days POS. In sum, Deep Blue, Accu-Tell, SureScreen, Spring, Biohit, Medomics, EUROIMMUN (IgA and IgG) and the in-house ELISAs (IgM and IgG) all had sensitivities above 95% for samples taken ≥20 days POS.
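Sensitivity per days-POS bin, with the 95% confidence intervals plotted in Fig 5B, can be sketched as follows. The paper does not state which interval method was used; a Wilson score interval is shown here as one plausible choice, and the bin counts are hypothetical rather than the study's data.

```python
# Sketch: sensitivity with a 95% Wilson score confidence interval per
# days-POS bin. The interval method is an assumption and the counts are
# hypothetical examples, not figures from the study.
from math import sqrt

def wilson_ci(positives: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (95% CI at z = 1.96)."""
    p = positives / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical bins: (test-positive samples, total RNA-confirmed samples)
bins = {"<10 days": (21, 37), ">=10 days": (60, 72), ">=20 days": (29, 30)}
for label, (pos, n) in bins.items():
    ci_lo, ci_hi = wilson_ci(pos, n)
    print(f"{label}: sensitivity {pos/n:.1%}, 95% CI {ci_lo:.1%}-{ci_hi:.1%}")
```

The Wilson interval stays within [0, 1] and remains informative for the small per-bin sample sizes seen here, which is why it is a common choice over the simple normal approximation for diagnostic test evaluation.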

Fig 5. Sensitivity and specificity comparison of serological assays.

(A) Specificity was determined for each serological assay using a panel of 50 pre-pandemic serum samples from the St Thomas’ emergency admissions cohort (STH Healthy, March 2019). Overall sensitivity was determined for each serological assay, based on results for the 110 serum samples from known SARS-CoV-2-positive individuals (shown in Figs 2 and 3). For the lateral flow assays, a positive result for IgM, IgG or both is considered positive. For the EUROIMMUN and in-house ELISAs, sensitivities for IgA, IgM and IgG are calculated separately. (B) Sensitivity and 95% confidence intervals (vertical lines) were determined for each serological assay at increasing days POS. 95% sensitivity is indicated by the horizontal dashed line. Results for each test were categorised according to whether the serum sample was from <10, ≥10, ≥14, or ≥20 days POS.

https://doi.org/10.1371/journal.ppat.1008817.g005

Association of antibody detection with days POS and severity of disease

Sample-by-sample analyses of multiple serological assays showed a trend for increasing detection of antibodies with increasing days POS (up to 30). We also observed that individuals with severe disease had a more readily detectable antibody response, particularly compared to those who had a brief hospital stay with minimal intervention. Samples were grouped according to days POS and clinical severity, and compared for anti-S IgM and IgG by in-house ELISA. Significant differences in antibody levels were observed with increases in both days and severity, although there was no association between days and severity themselves (Fig 6A). To assess the development of the antibody response in sequential samples from the same individual, and to evaluate the ability of LFIAs to detect nuances in the antibody response, longitudinal samples (from individuals with disease severity scores of 4) were tested by one of the best-performing LFIAs (SureScreen) and by in-house ELISA (Fig 6B and 6C). A similar pattern of detection was seen for both types of assay, with IgM detectable earlier than IgG, and both tests showing consensus on the strength and timing of the response.

Fig 6. Antibody detection over increasing days POS and illness severity.

(A) In-house ELISA results for anti-SARS-CoV-2 IgM and IgG were grouped by days POS and severity of illness, with 0 indicating mild illness (requiring no respiratory support) and 5 indicating critical (requiring ECMO) (see materials and methods for full classification). The 4-fold threshold for a positive result is shown as a dashed line on each graph. Median values are shown as red lines. (B) Sequential serum samples from five individuals were assayed for the development of anti-SARS-CoV-2 IgM and IgG. All five individuals were hospitalised with COVID-19 symptoms and confirmed positive for SARS-CoV-2 RNA by RT-PCR. A semi-quantitative in-house ELISA detecting anti-S antibodies (shown in graphs) and the SureScreen lateral flow assay (shown in heatmaps below the graphs) were compared for their detection of anti-SARS-CoV-2 in the same serum samples. Days POS are indicated for each sample. (C) Images of the lateral flow test results for patient 3 are shown.

https://doi.org/10.1371/journal.ppat.1008817.g006

Discussion

This study describes the development of six in-house ELISA configurations for the detection of IgM and IgG against SARS-CoV-2 S, RBD and N. Anti-S and anti-N IgG attained specificities of 99%–100% when results for both targets were combined, and sensitivities of up to 96%. We used this semi-quantitative platform to cross-evaluate seven LFIAs, a chemiluminescent test and two commercial ELISAs. Importantly, the availability of sequential serum samples from patients admitted from the start of the outbreak, under an existing ethics agreement covering the storage and analysis of surplus routinely collected clinical samples, enabled us to conduct this detailed study, including the examination of assay sensitivity with respect to time POS.

Our analysis demonstrates a broad range of performance across the different platforms, with several commercial tests achieving above 98% specificity. All platforms showed their highest sensitivity, with the narrowest confidence limits, in samples taken 20 days or more POS, with most tests reaching values over 95% (Fig 5B). When all commercial tests were compared, Accu-Tell, SureScreen and Spring demonstrated the highest sensitivity at earlier time points, while maintaining specificities of 98% or above. These tests also gave the best cross-assay agreement with each other and with the in-house ELISA. In the best-performing tests, we also observed that signal strength aligned with that seen by ELISA; this was further supported by the sequential signal increase seen in longitudinal samples from five individuals.

We approached this study with the intention that an unbiased and transparent comparison would be of broad value to the scientific and infection diagnostics communities, both in naming and comparing the kits, and in using samples of the kind likely to be encountered in hospitals during a SARS-CoV-2 outbreak. Few studies have been published to date in which multiple LFIAs and ELISAs have been evaluated side-by-side with named kits [9,10], and in those that have, only one test overlaps with our study, Deep Blue [9]. Early reports stating that LFIAs have insufficient sensitivity may reflect the testing of samples from mixed time points and disease severities, or of tests that differ from those evaluated here [11].

The strength of our study is the head-to-head evaluation of multiple tests on identical serum samples. The sample set was not compiled retrospectively for the purpose of evaluation, but was part of an ongoing process to deploy a serological assay to broaden diagnostic capability at Guy’s and St Thomas’ Hospitals during the peak of the SARS-CoV-2 epidemic in London. These samples are therefore entirely representative of those that hospital laboratories will encounter, and the challenges associated with variable time to seroconversion and errors in self-reported symptom onset are real. The cross-evaluation of multiple tests enabled the identification of samples that were negative in every test performed, despite being taken at 10 days POS or later. While this could highlight a characteristic of COVID-19 in which a subset of patients does not produce a detectable antibody response, we have evidence that in three cases (the two day-10 samples and one day-14 sample) where later samples were available (47, 77 and 51 days POS, respectively; the only next available samples), both IgM and IgG antibodies were detected in several LFIAs. In another case (sample at day 23 POS), we uncovered a likely error in self-reported symptom onset, and the sample could actually range anywhere between 2 and 11 days POS. It is important to note that all six unexpectedly negative samples were from COVID-19 cases classified as severity level 0. The fact that several post-day-10 samples were negative in all serological tests has practical implications for the use of such assays in diagnostic settings, and thought should be given to the meaning of a negative result. It also implies that serological surveys are likely to underestimate the level of exposure to SARS-CoV-2, and the wide variation in the detection of antibodies, in terms of both time and disease severity, casts doubt on the utility of “immunity passports”.
Although it is unclear at present whether detection of antibodies to SARS-CoV-2 indicates protection against future infection, measurement of antibodies to S, rather than N, is likely to better predict neutralising function.

In agreement with previous studies demonstrating a relationship between disease severity and antibody titres [12–14], we observed a significant increase in the detection of antibodies with increased severity of symptoms (Fig 6A). Importantly, this correlation is not explained by a concomitant increase in the days POS of the sample. Therefore, before deployment in situations where the pre-test prevalence is likely to be low, such as seroprevalence studies, out-patient assessment or pre-admission screening for operations, these assays will require further evaluation with known asymptomatic and ambulatory SARS-CoV-2 cases, alongside an extended set of pre-pandemic samples. This is a priority as countries navigate their way out of lockdown and move towards living with the ongoing threat of SARS-CoV-2, and seroprevalence studies will be important in the implementation and management of safe public health policies. To maintain the high specificity of our in-house ELISA in community cohorts where PCR status is unknown, we would recommend determining seropositivity based on IgG to both N and S. Sequential or alternate detection of IgM, IgA and IgG may also provide information on the history of infection. The only IgA test that we evaluated (EUROIMMUN) showed high specificity, a strong signal in early samples and good overall sensitivity. Detection of IgA in serum, and potentially even earlier by mucosal sampling, may be a useful diagnostic tool.

Next-generation antibody tests may improve on those currently being trialled, but our results demonstrate that LFIAs may already have utility in a hospital setting, particularly if deployed where a rapid result could aid a clinical pathway or decision in real time, such as ward location or the prioritisation of further diagnostics and follow-up. The ease of use and affordability of LFIAs weigh heavily in their favour, especially for potential use in resource-poor settings or as point-of-care solutions in hospitals. Choosing the tests with the highest specificity will translate into confidence in a positive result in the clinic; negative or borderline cases may be tested serially or in combination with ELISA testing [15]. Combination testing, in parallel with RT-PCR, and serial or sequential testing, would provide diagnostic solutions for delayed-onset syndromes such as PIMS that are increasingly being reported post-peak pandemic [5–7]. Further evaluations of candidate tests should be performed prior to use in clinical settings, ideally tailored to the intended usage and likely pre-test prevalence. A limitation of our study was the restricted number of pre-pandemic negative and confounding samples used to cross-validate the commercial kits, and further specificity and sensitivity studies with a shortlisted group of commercial tests are currently in progress. It will be particularly important to evaluate samples from individuals infected with other human coronaviruses or respiratory viruses for potentially cross-reacting antibodies. A further consideration for healthcare service deployment in the hospital or community setting is consistency of use. Although the LFIAs are marketed as home testing kits, in our experience user assiduity is essential to their optimal performance, particularly when scoring borderline cases and considering the need for two independent readers.
There will also likely be a need to evaluate alternative methods of blood collection, particularly pin-prick sampling, and even different sample types such as saliva, all of which should be fully evaluated before deployment.

In summary, our study compares the performance of 10 commercially available platforms and several combinations of in-house methods for the detection of anti-SARS-CoV-2 antibodies in serum samples. Although LFIAs lack the semi-quantitative information provided by ELISA tests, they have a clear utility advantage over ELISA- or chemiluminescence-based technologies. Shortlisted tests, combined with confirmatory reflex testing using our in-house ELISA, are now being taken forward into extended validations as part of a pilot clinical service at Guy’s and St Thomas’ Hospitals. These incremental steps, which keep different technologies in scope while multiple use-cases are still being defined, will help determine the clinical utility and cost-effectiveness of COVID-19 serological testing in both hospital and community healthcare settings.

Materials & methods

Ethics

For the St Thomas’ Hospital samples, surplus serum was retrieved from the routine biochemistry laboratory at point of discard, and then aliquoted, stored and linked with a limited clinical dataset by the direct care team, before anonymisation under an existing ethics framework (REC reference 18/NW/0584) and with expedited R&D approval. Serum/plasma samples used as negative controls in the in-house ELISA development were obtained from the KCL Infectious Disease Biobank (KD1-220518), UCLH (11/LO/0421 IRAS:43993) and The Royal Free Hospital (11/WA/0077, ref. NC2016.002).

Patient overview and sample origin

110 venous serum samples, collected at St Thomas’ Hospital, London, from 87 patients diagnosed as SARS-CoV-2-positive by real-time RT-PCR, were obtained for serological analysis. Samples ranged from 1 to 30 days after onset of self-reported symptoms. For the longitudinal study, 17 serum samples (6–24 days after symptom onset) were obtained from 5 patients (3–4 samples each) with confirmed COVID-19. Two patients overlapped between the longitudinal and validation studies, giving 90 unique patients in total across both studies. Patient information is given in Table 1.

Pre-pandemic negative samples

320 pre-pandemic serum or plasma samples from various collections were used as negative controls for the initial development of the in-house ELISA. These included emergency admissions to St Thomas’ Hospital in March 2019 (STH healthy, n = 105), individuals with acute EBV infection (n = 9), plasma samples collected from individuals 7 days after immunisation with an H1N1 vaccine (as part of the HIRD trial [16]) obtained from the KCL Infectious Diseases Biobank (n = 44), cancer patients (n = 61), and healthy volunteers (UCL healthy, n = 101). A single panel of 50 pre-pandemic serum samples from 50 patients, obtained at St Thomas’ Hospital in March 2019 and drawn from the STH healthy cohort, was used for head-to-head specificity calculations for all commercial tests, with an extended panel of 105 samples used to calculate the specificity of the in-house IgM and IgG S ELISAs. Further samples from emergency admissions to St Thomas’ Hospital in March 2019 (STH healthy) were used for the extended validations of four LFIAs (SureScreen, Spring, Biohit and Medomics).

COVID-19 severity classification

Patients diagnosed with COVID-19 were classified as follows:

0—asymptomatic OR no requirement for supplemental oxygen.

1—requirement for supplemental oxygen (FiO2 <0.4) for at least 12 hrs.

2—requirement for supplemental oxygen (FiO2 ≥0.4) for at least 12 hrs.

3—requirement for non-invasive ventilation (NIV)/ continuous positive airways pressure (CPAP) OR proning OR supplemental oxygen (FiO2 >0.6) for at least 12 hrs AND not a candidate for escalation above level 1 care.

4—requirement for intubation and mechanical ventilation OR supplemental oxygen (FiO2 >0.8) AND peripheral oxygen saturations <90% (with no history of type 2 respiratory failure (T2RF)) OR <85% (with known T2RF) for at least 12 hrs.

5—requirement for ECMO.
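For downstream analyses, the six-level scale above can be encoded as a simple lookup table. The sketch below is ours, not code from the study, and the abbreviated labels omit the full clinical criteria (minimum durations, escalation status, T2RF history) listed above:

```python
# Illustrative encoding of the COVID-19 severity scale (levels 0-5).
# Labels are abbreviated summaries of the clinical criteria above.
SEVERITY_LABELS = {
    0: "asymptomatic or no supplemental oxygen",
    1: "supplemental oxygen, FiO2 < 0.4",
    2: "supplemental oxygen, FiO2 >= 0.4",
    3: "NIV/CPAP, proning, or FiO2 > 0.6",
    4: "intubation/ventilation, or FiO2 > 0.8 with low saturations",
    5: "ECMO",
}

def severity_label(level: int) -> str:
    """Return the abbreviated label for a severity level (0-5)."""
    if level not in SEVERITY_LABELS:
        raise ValueError(f"unknown severity level: {level}")
    return SEVERITY_LABELS[level]
```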

ELISA protocol

All sera/plasma were heat-inactivated at 56°C for 30 mins before use in the in-house ELISA. High-binding ELISA plates (Corning, 3690) were coated with antigen (N, S or RBD) at 3 μg/mL (25 μL per well) in PBS, either overnight at 4°C or for 2 hr at 37°C. Wells were washed with PBS-T (PBS with 0.05% Tween-20) and then blocked with 100 μL of 5% milk in PBS-T for 1 hr at room temperature. Wells were emptied, and sera and plasma diluted in milk at 1:50 and 1:25, respectively, were added and incubated for 2 hr at room temperature. Control reagents included CR3009 (2 μg/mL), CR3022 (0.2 μg/mL), negative control plasma (1:25 dilution), positive control plasma (1:50) and blank wells. Wells were washed with PBS-T. Secondary antibody was added and incubated for 1 hr at room temperature. IgM was detected using goat-anti-human-IgM-HRP (1:1,000) (Sigma: A6907) and IgG was detected using goat-anti-human-Fc-AP (1:1,000) (Jackson: 109-055-043-JIR). Wells were washed with PBS-T, and either AP substrate (Sigma) was added and read at 405 nm (AP), or 1-step TMB substrate (Thermo Scientific) was added, quenched with 0.5 M H2SO4 and read at 450 nm (HRP).

Protein expression

N protein was obtained from the James lab at LMB, Cambridge. The N protein used is a truncated construct of the SARS-CoV-2 N protein comprising residues 48–365 (both ordered domains with the native linker) with an N-terminal uncleavable hexahistidine tag. N was expressed in E. coli using autoinducing media for 7 hr at 37°C and purified using immobilised metal affinity chromatography (IMAC), size exclusion and heparin chromatography.

S protein consists of a pre-fusion S ectodomain residues 1–1138 with proline substitutions at amino acid positions 986 and 987, a GGGG substitution at the furin cleavage site (amino acids 682–685) and an N terminal T4 trimerisation domain followed by a Strep-tag II [17]. The protein was expressed in 1 L HEK-293F cells (Invitrogen) grown in suspension at a density of 1.5 million cells/mL. The culture was transfected with 325 μg of DNA using PEI-Max (1 mg/mL, Polysciences) at a 1:3 ratio. Supernatant was harvested after 7 days and purified using StrepTactinXT Superflow high capacity 50% suspension according to the manufacturer’s protocol by gravity flow (IBA Life Sciences).

The RBD plasmid was obtained from Florian Krammer at Mount Sinai University [18]. Here the natural N-terminal signal peptide of S is fused to the RBD sequence (319 to 541) and joined to a C-terminal hexahistidine tag. This protein was expressed in 500 mL HEK-293F cells (Invitrogen) at a density of 1.5 million cells/mL. The culture was transfected with 1000 μg of DNA using PEI-Max (1 mg/mL, Polysciences) at a 1:3 ratio. Supernatant was harvested after 7 days and purified using Ni-NTA agarose beads.

Lateral flow immunoassays (LFIA)

We tested seven point-of-care colloidal-gold-based LFIAs detecting IgG and IgM antibodies against SARS-CoV-2, full details of which are shown in Table 2. With the exception of Medomics, all LFIAs were CE IVD marked. Target antigens were undisclosed proprietary information, but for several of the kits the antigen was known to be S. LFIAs were run according to the manufacturers’ instructions. Typically, 10–20 μl of serum was added to the LFIA membrane start point, followed by 1–3 drops of the supplied buffer. Kits were run at room temperature for 10 minutes and then immediately scored using a 4-point scale (negative, borderline, positive, strong positive) for both IgM and IgG. Scoring was performed independently by two individuals.
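The 4-point categorical scores lend themselves to simple cross-assay agreement analyses of the kind tabulated in S1 Fig. The numeric encoding and the agreement definition below are our assumptions, sketched in Python; the study does not specify how agreement was computed:

```python
# Hypothetical numeric encoding of the 4-point LFIA scale; the study
# reports categorical scores, so this mapping is illustrative only.
SCALE = {"negative": 0, "borderline": 1, "positive": 2, "strong positive": 3}

def percent_agreement(calls_a, calls_b):
    """Percentage of samples on which two assays give the same category
    (one plausible definition of the cross-assay agreement in S1 Fig)."""
    if not calls_a or len(calls_a) != len(calls_b):
        raise ValueError("call lists must be non-empty and of equal length")
    matches = sum(a == b for a, b in zip(calls_a, calls_b))
    return 100.0 * matches / len(calls_a)

# Illustrative comparison of two assays over three samples
print(percent_agreement(["negative", "positive", "positive"],
                        ["negative", "positive", "borderline"]))
```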

Table 2. LFIA kit details.

Full names and manufacturers of all LFIA test kits used in the study are given, alongside reference and lot numbers.

https://doi.org/10.1371/journal.ppat.1008817.t002

Chemiluminescence-based Immunoassay

The SARS-CoV-2 Ab Diagnostic Test Kit (Shenzhen Watmind Medical Co., Ltd.), detecting total antibody against SARS-CoV-2, was run on the Chemical Luminescence Immunity Analyzer MF02 (Shenzhen Watmind Medical Co., Ltd). The platform was calibrated daily with a supplied control cartridge prior to testing. Panel samples were analysed according to the manufacturer’s instructions. Results of 1.0 AU (arbitrary units)/ml or below were deemed negative, and scores above 1.0 AU/ml positive. For comparison with the other immunoassays, scores between 1 and 10 AU/ml were deemed positive and scores above 10 AU/ml a strong positive.
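The cutoffs above can be expressed as a small categorisation helper. This sketch is ours, and the treatment of a score of exactly 10.0 AU/ml as "positive" is an assumption, since the boundary between positive and strong positive is not explicitly specified:

```python
def categorise_clia(score_au_ml: float) -> str:
    """Categorise a Watmind CLIA result (AU/ml) using the cutoffs above:
    <=1.0 negative, >1.0 positive, >10 strong positive.
    Treating exactly 10.0 as 'positive' is our assumption."""
    if score_au_ml <= 1.0:
        return "negative"
    if score_au_ml <= 10.0:
        return "positive"
    return "strong positive"
```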

Enzyme-linked immunosorbent assay (ELISA)

Two commercial ELISAs detecting IgA (EI 2606–9601 A) or IgG (EI 2606–9601 G) antibodies to the SARS-CoV-2 S1 protein were obtained from EUROIMMUN Medizinische Labordiagnostika AG. Serum samples were tested according to manufacturer’s instructions. Absorbance was measured on a Multiskan FC (SkanIt Software 5.0 for Microplate Readers RE. ver. 5.0.0.42) at a wavelength of 450 nm with a reference wavelength of 630 nm.

Statistical analysis

Expected binomial exact 95% confidence intervals were calculated in Prism 8.0 using the Wilson/Brown method.
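Prism’s Wilson/Brown option is based on the Wilson score interval, which can be reproduced with standard-library Python. The sketch below is our own implementation, with a hypothetical 48-of-50 specificity example (not a figure from the study):

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.959964) -> tuple:
    """Wilson score 95% confidence interval for a binomial proportion
    (the interval underlying Prism's Wilson/Brown option)."""
    if n <= 0:
        raise ValueError("n must be > 0")
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return max(0.0, centre - half), min(1.0, centre + half)

# Hypothetical example: 48 of 50 pre-pandemic samples scoring negative
lo, hi = wilson_ci(48, 50)
print(f"specificity 96.0%, 95% CI {lo:.1%}-{hi:.1%}")
```

Unlike the simple normal-approximation interval, the Wilson interval stays within [0, 1] and behaves sensibly at proportions near 0% or 100%, which matters for the small panels used here.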

Supporting information

S1 Fig. Extended comparative tables.

Serological assays were compared for IgM or IgG (left and right panels, respectively), and the percentage agreement between each pair of assays across all samples is shown within each box.

https://doi.org/10.1371/journal.ppat.1008817.s001

(TIF)

S2 Fig. Extended sensitivity and specificity comparison of serological assays.

Overall sensitivity (A) and specificity (B) were determined for each serological assay as in Fig 5. Sensitivity was determined for each serological assay at increasing days POS (C), or severity of illness (D). Results for each test were either categorised according to whether the serum sample was from <10, ≥10, ≥14, or ≥20 days POS, or severity of illness, with 0 indicating mild illness (requiring no respiratory support) and 5 indicating critical (requiring ECMO) (see Materials and methods for full classification). 95% confidence intervals are shown for each assay in all panels (Wilson/Brown expected binomial).

https://doi.org/10.1371/journal.ppat.1008817.s002

(TIF)

S1 Table. Specificity and sensitivity of in-house ELISAs during development phase.

Specificity and sensitivity (%) were determined for each configuration of the in-house ELISA (detection of IgM and IgG to N, S and RBD) during initial development. 320 pre-pandemic samples from several cohorts were used as negative controls for specificity calculations, and 24 RT-PCR-confirmed SARS-CoV-2 positive samples were used as positive controls for sensitivity calculations. 95% CIs are shown for each calculation.

https://doi.org/10.1371/journal.ppat.1008817.s003

(DOCX)

S2 Table. Head-to-head specificity calculations for all immunoassays.

50 pre-pandemic negative samples from the St Thomas’ emergency admissions cohort (STH Healthy, March 2019) were used to perform head-to-head specificity calculations for all immunoassay platforms. An extended panel of 105 samples was used for specificity calculations of the in-house ELISA for anti-S IgM and IgG. 95% CIs are shown for each calculation.

https://doi.org/10.1371/journal.ppat.1008817.s004

(DOCX)

S3 Table. Specificity of selected lateral flow immunoassays determined on an extended panel of pre-pandemic serum samples from March 2019.

A select group of best-performing lateral flow immunoassays were taken forward for extended specificity calculations, using up to 200 pre-pandemic negative samples from the St Thomas’ emergency admissions cohort (STH Healthy, March 2019). 95% CIs are shown for each calculation.

https://doi.org/10.1371/journal.ppat.1008817.s005

(DOCX)

S4 Table. Sensitivity of immunoassays classified by days POS.

Head-to-head sensitivity calculations were performed for all immunoassays on a panel of 110 SARS-CoV-2-positive samples. Results for each test were further categorised according to whether the serum sample was from <10, ≥10, ≥14, or ≥20 days POS. 95% CIs are shown for each calculation.

https://doi.org/10.1371/journal.ppat.1008817.s006

(DOCX)

S5 Table. Sensitivity of immunoassays classified by disease severity.

Head-to-head sensitivity calculations were performed for all immunoassays on a panel of 110 SARS-CoV-2-positive samples and classified according to severity of disease, with 0 indicating mild illness (requiring no respiratory support) and 5 indicating critical (requiring ECMO) (see Materials and methods for full classification). 95% CIs are shown for each calculation.

https://doi.org/10.1371/journal.ppat.1008817.s007

(DOCX)

Acknowledgments

Thank you to Dr Terry Wong for his support in acquiring test kits. Thank you to SureScreen Diagnostics for their engagement and technical assistance.

Thank you to Bindi Patel, Nicola Varatharajah, Abayomi Fatola and Amelia Moore for laboratory assistance.

Thank you to Florian Krammer for provision of the RBD expression plasmid, and Leo James and Leo Kiss for the provision of purified N protein.

We thank King’s College London Infectious Diseases Biobank for provision of pre-COVID-19 vaccine samples, all patients and control volunteers who participated in this study, and all clinical staff who helped with recruitment, including those working with the TAPb project at The Royal Free Hospital. We thank the Maini Lab at the Division of Infection and Immunity for providing pre-COVID-19 pandemic healthy control samples.

We are extremely grateful to all patients and staff at St Thomas’ Hospital who participated in this study.

References

  1. NHS. Guidance and Standard Operating Procedure: COVID-19 Virus Testing in NHS Laboratories (https://www.rcpath.org/uploads/assets/90111431-8aca-4614-b06633d07e2a3dd9/Guidance-and-SOP-COVID-19-Testing-NHS-Laboratories.pdf)
  2. Centers for Disease Control and Prevention. Interim Guidelines for Collecting, Handling, and Testing Clinical Specimens for COVID-19 (https://www.cdc.gov/coronavirus/2019-nCoV/lab/guidelines-clinical-specimens.html)
  3. To KK, Tsang OT, Leung WS, Tam AR, Wu TC, Lung DC, et al. Temporal profiles of viral load in posterior oropharyngeal saliva samples and serum antibody responses during infection by SARS-CoV-2: an observational cohort study. Lancet Infect Dis. 2020;20(5):565–74.
  4. Zou L, Ruan F, Huang M, Liang L, Huang H, Hong Z, et al. SARS-CoV-2 Viral Load in Upper Respiratory Specimens of Infected Patients. N Engl J Med. 2020;382(12):1177–9.
  5. Verdoni L, Mazza A, Gervasoni A, Martelli L, Ruggeri M, Ciuffreda M, et al. An outbreak of severe Kawasaki-like disease at the Italian epicentre of the SARS-CoV-2 epidemic: an observational cohort study. Lancet. 2020.
  6. European Centre for Disease Prevention and Control. Paediatric inflammatory multisystem syndrome and SARS-CoV-2 infection in children (https://www.ecdc.europa.eu/en/publications-data/paediatric-inflammatory-multisystem-syndrome-and-sars-cov-2-rapid-risk-assessment)
  7. Whittaker E, Bamford A, Kenny J, et al. Clinical Characteristics of 58 Children With a Pediatric Inflammatory Multisystem Syndrome Temporally Associated With SARS-CoV-2. JAMA. 2020.
  8. Randolph HE, Barreiro LB. Herd Immunity: Understanding COVID-19. Immunity. 2020;52(5):737–41.
  9. Whitman JD, Hiatt J, Mowery CT, Shy BR, Yu R, Yamamoto TN, et al. Test performance evaluation of SARS-CoV-2 serological assays. medRxiv 2020.04.25.20074856 [Preprint]. 2020 [cited 2020 May 29]. https://doi.org/10.1101/2020.04.25.20074856.
  10. Lassaunière R, Frische A, Harboe ZB, Nielsen ACY, Fomsgaard A, Krogfelt, Jørgensen CS. Evaluation of nine commercial SARS-CoV-2 immunoassays. medRxiv 2020.04.09.20056325 [Preprint]. 2020 [cited 2020 May 29]. https://doi.org/10.1101/2020.04.09.20056325.
  11. Adams E, Ainsworth M, Anand R, Andersson MI, Auckland K, et al. Antibody testing for COVID-19: A report from the National COVID Scientific Advisory Panel. medRxiv 2020.04.15.20066407 [Preprint]. 2020 [cited 2020 May 29]. https://doi.org/10.1101/2020.04.15.20066407.
  12. Zhao J, Yuan Q, Wang H, Liu W, Liao X, Su Y, et al. Antibody responses to SARS-CoV-2 in patients of novel coronavirus disease 2019. Clin Infect Dis. 2020.
  13. Tan W, Lu Y, Wang J, Dan Y, Tan Z, He X, et al. Viral Kinetics and Antibody Responses in Patients with COVID-19. medRxiv 2020.03.24.20042382 [Preprint]. 2020 [cited 2020 May 29]. https://doi.org/10.1101/2020.03.24.20042382.
  14. Long QX, Liu BZ, Deng HJ, Wu GC, Deng K, Chen YK, et al. Antibody responses to SARS-CoV-2 in patients with COVID-19. Nat Med. 2020.
  15. Egners W. Recommendations for verification and validation methodology and sample sets for evaluation of assays for SARS-CoV-2 (COVID-19). https://www.rcpath.org/uploads/assets/541a4523-6058-4424-81c119dd2ab0febb/Verification-validation-of-sample-sets-assays-SARS-CoV-2.pdf
  16. Sobolev O, Binda E, O’Farrell S, Lorenc A, Pradines J, Huang Y, et al. Adjuvanted influenza-H1N1 vaccination reveals lymphoid signatures of age-dependent early responses and of clinical adverse events. Nat Immunol. 2016;17(2):204–13.
  17. Brouwer PJM, Caniels TG, van der Straten K, Snitselaar JL, et al. Potent neutralizing antibodies from COVID-19 patients define multiple targets of vulnerability. bioRxiv 2020.05.12.088716 [Preprint]. 2020. https://www.biorxiv.org/content/10.1101/2020.05.12.088716v1
  18. Amanat F, Stadlbauer D, et al. A serological assay to detect SARS-CoV-2 seroconversion in humans. Nat Med. 2020; s41591-020-0913-5.