Background:
1,5-Anhydroglucitol (1,5-AG) is a novel biomarker of glycemic control proposed for monitoring recent hyperglycemic excursions in persons with diabetes. The clinical utility of 1,5-AG outside of diagnosed diabetes is unclear, but it may identify people at high risk for diabetes and its complications. We compared the associations of 1,5-AG and 2-h glucose with the risk of major clinical complications.
Research design and methods:
We prospectively followed 6644 Atherosclerosis Risk in Communities (ARIC) Study participants without diagnosed diabetes for incident diagnosed diabetes, chronic kidney disease, cardiovascular disease, and all-cause mortality for ∼20 years. We assessed associations of 1,5-AG and 2-h glucose (modeled categorically and continuously with restricted cubic splines) with adverse outcomes using Cox models and evaluated improvement in risk discrimination using Harrell's c-statistic.
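For illustration, the modeling approach described above can be sketched as follows; this is not the authors' code, and the dataset, column names, and use of the Python lifelines package are assumptions made only for the example.

```python
# A minimal sketch (hypothetical column names) of the approach: a Cox
# proportional hazards model for incident diabetes with a binary low-1,5-AG
# indicator, and Harrell's c-statistic for the fitted model.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("aric_analytic_file.csv")        # hypothetical analytic dataset
df["low_ag"] = (df["ag_1_5"] < 10).astype(int)    # 1,5-AG <10 vs >=10 µg/mL

cph = CoxPHFitter()
cph.fit(df[["followup_years", "incident_diabetes", "low_ag", "age", "sex_male"]],
        duration_col="followup_years", event_col="incident_diabetes")
cph.print_summary()                                # hazard ratio for low_ag

# Harrell's c-statistic (in-sample discrimination) for the fitted model
print(f"c-statistic: {cph.concordance_index_:.3f}")
```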
Results:
1,5-AG <10 µg/mL was statistically significantly associated with incident diabetes (HR 2.70; 95% CI, 2.31, 3.15) and showed suggestive associations with the other outcomes compared with 1,5-AG ≥10 µg/mL. Continuous associations of 1,5-AG with outcomes displayed a clear threshold effect, with risk associations generally observed only below 10 µg/mL. C-statistics were larger for 2-h glucose than for 1,5-AG for all outcomes (difference in c-statistic [2-h glucose − 1,5-AG]: diagnosed diabetes, 0.17 [95% CI, 0.15, 0.19]; chronic kidney disease, 0.02 [95% CI, 0.00, 0.05]; cardiovascular disease, 0.03 [95% CI, 0.00, 0.06]; and all-cause mortality, 0.04 [95% CI, 0.02, 0.06]).
Conclusions:
In this community-based population without diagnosed diabetes, low 1,5-AG was modestly associated with major clinical outcomes and did not outperform 2-h glucose.
Background:
Laboratorians have the opportunity to help minimize the frequency of adverse drug reactions by implementing pharmacogenomic testing and alerting care providers to possible patient/drug incompatibilities before drug treatment is initiated. Methods combining PCR with MALDI-TOF MS allow sensitive, economical, and multiplexed pharmacogenomic test results to be delivered in a timely fashion.
Methods:
This study evaluated the analytical performance of the Agena Biosciences iPLEX® PGx 74 panel and a custom iPLEX panel on a MassARRAY MALDI-TOF MS instrument in a clinical laboratory setting. Collectively, these panels evaluate 112 SNVs across 34 genes implicated in drug response. Using commercially available samples (Coriell Biorepository) and in-house extracted DNA, we determined ideal reaction conditions and assessed accuracy, precision, and robustness.
Results:
Following protocol optimization, the Agena PGx74 and custom panels demonstrated 100% concordance with the 1000 Genomes Project Database and clinically validated hydrolysis probe genotyping assays. 100% concordance was also observed in all assessments of assay precision when appropriate QC metrics were applied.
Conclusions:
Significant development time was required to optimize sample preparation and instrumental analysis, and 3 assays were removed due to inconsistent performance. Following modification of the manufacturer's protocol and institution of manual review of each assay plate, the Agena PGx74 and custom panel constitute a cost-effective, robust, and accurate method for clinical identification of 106 SNVs involved in drug response.
Background:
Broad-spectrum drug screening is offered by many clinical laboratories to support investigation of possible drug exposures. The traditional broad-spectrum drug screen employed at our laboratory utilizes several different analytical platforms, thus requiring relatively high volumes of sample and a cumbersome workflow. Here we describe the development and validation of a consolidated broad-spectrum drug screen assay designed to qualitatively detect 127 compounds in urine (Ur) and serum/plasma (S/P) samples.
Methods:
An LC-MS/MS method was developed using the Ultivo LC-MS/MS and designed to be qualitative with a 1-point calibration curve and 50% to 150% controls. Sample preparation included the addition of 122 internal standards (IS) followed by mixed-mode strong cation exchange solid-phase extraction and reverse-phase chromatographic separation on a biphenyl column.
Results:
For the method described herein, ≥ 95% of analytes in urine and serum control samples had a CV of ≤20% for total imprecision. Accuracy testing included 46 external controls and demonstrated 99.9% accuracy. Method comparison studies to quantitative testing are discussed. The high level of coverage of the analytes with a stable isotope-labeled IS (SIL-IS) helped normalize for matrix effects when significant ion suppression (>25%) was present. Analyte stability in the matrix, the impact of potentially interfering compounds, and method ruggedness were demonstrated. Method limitations include limited detection of glucuronidated drugs and potential cross-contamination with samples at very high concentrations (>>100 × cutoff).
Conclusions:
The broad-spectrum drug screen method developed here qualitatively detected 127 drugs and select metabolites. This method could be used to support investigations of possible drug exposures in a clinical setting.
Background
We examined the concordance of 13 commercial cardiac troponin (cTn) assays [point-of-care, high-sensitivity (hs), and conventional] using samples distributed across a continuum of results.
Methods
cTnI (11 assays) and cTnT (2 assays) were measured in 191 samples from 128 volunteers. cTn assays included Abbott (iSTAT, STAT, and hs), Alere (Cardio 3), Beckman (AccuTnI+3), Pathfast (cTnI-II), Ortho (Vitros), Siemens (LOCI, cTnI-Ultra, Xpand, Stratus CS), and Roche [4th Generation (Gen), hs]. Manufacturer-derived 99th percentile cutoffs were used to classify results as positive or negative. Alternative 99th percentile cutoffs were tested for some assays. Correlation was assessed using Passing–Bablok linear regression, bias was examined using Bland–Altman difference plots, and concordance/discordance of each method comparison was determined using the McNemar method.
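As an illustration of two of the statistical comparisons named above, the sketch below computes a Bland–Altman bias with limits of agreement and a McNemar test of paired positive/negative classifications; the paired values and cutoffs are hypothetical, not study data.

```python
# Illustrative sketch (not the study's code): Bland-Altman bias and a McNemar
# test of paired classifications for two hypothetical cTn assays.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# hypothetical paired results (ng/mL) and 99th-percentile cutoffs
a = np.array([0.01, 0.03, 0.12, 0.45, 1.20])
b = np.array([0.02, 0.02, 0.15, 0.40, 1.35])
cut_a, cut_b = 0.03, 0.04

diff = a - b
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"bias = {bias:.3f} ng/mL, 95% limits of agreement = {loa}")

# 2x2 table of paired classifications (rows: assay A +/-, cols: assay B +/-)
pos_a, pos_b = a >= cut_a, b >= cut_b
table = [[np.sum(pos_a & pos_b), np.sum(pos_a & ~pos_b)],
         [np.sum(~pos_a & pos_b), np.sum(~pos_a & ~pos_b)]]
print(mcnemar(table, exact=True))   # tests discordant classifications
```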
Results
Regression slopes ranged from 0.63 to 1.87, y-intercepts from 0.00 to 0.03 ng/mL, and r values from 0.93 to 0.99. The cTnT methods had a slope of 0.93, y-intercept of 0.02 ng/mL, and r value of 0.99. For the cTnI assays, positive, negative, and overall concordance was 76.2%–100%, 66.0%–100%, and 82.9%–98.4%, respectively. Overall concordance between the 4th Gen cTnT and hsTnT assays was 88.9%. A total of 30 of the 78 method comparisons showed significant differences in classification of samples (P <0.001); the iSTAT showed 10, hsTnT showed 9, AccuTnI+3 showed 5, Xpand showed 5, and Stratus CS showed 1. Using alternative 99th percentile cutoffs to those listed by manufacturers lowered the method discordance by 6-fold, from 30 to 5 (all involved iSTAT).
Conclusions
These data provide insight into characteristics of cTn methods and will assist the healthcare community in setting expectations for relationships among commercial cTn assays.
To the Editor
Helicobacter pylori infection continues to be a major health problem worldwide, causing considerable morbidity and mortality due to peptic ulcer disease and gastric cancer. Urea breath tests (UBTs) have higher diagnostic accuracy than other non-invasive tests for identifying H. pylori (in patients without a history of gastrectomy) (1). Patients as well as healthcare and laboratory workers may have a lower preference for stool-based tests (stool antigen testing) (2).
While ¹³C-UBT is often preferred in well-resourced regions, the unit cost of ¹⁴C-UBT is lower and the test could be provided at a low cost using a central laboratory “hub-and-spoke” model for service delivery (2). False-positive tests could occur in patients who have hypochlorhydria or may be due to other bacteria with urease activity (3).
The total testing process of ¹⁴C-UBT includes collection of a patient breath sample (containing carbon dioxide, CO2), transfer of the breath sample including CO2 to collection fluid, and analysis of ¹⁴CO2 by a scintillation counter. The interpretation of results (disintegrations per min, DPM) suggested by the manufacturer (Tri-Med, Perth, Australia) is: <50 DPM (negative for H. pylori), 50 to 199 DPM (borderline positive), and >200 DPM (positive).
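A minimal helper reflecting the quoted interpretation bands is sketched below; treating a result of exactly 200 DPM as positive is our assumption, since the quoted bands leave that value unassigned.

```python
# Simple classifier for the manufacturer-suggested 14C-UBT interpretation bands.
def interpret_ubt(dpm: float) -> str:
    """Classify a 14C-UBT result reported in disintegrations per minute."""
    if dpm < 50:
        return "negative for H. pylori"
    if dpm < 200:          # exactly 200 is treated as positive (assumption)
        return "borderline positive"
    return "positive"

print(interpret_ubt(35))    # negative for H. pylori
print(interpret_ubt(120))   # borderline positive
print(interpret_ubt(450))   # positive
```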
Introduction:
Medical management of prosthetic joint infections (PJIs) relies on the identification of causative organisms through traditional culture-based approaches to guide therapy. However, diagnosis of many PJIs remains challenging, with many clinically apparent infections remaining culture-negative. Molecular diagnostics have the potential to increase diagnostic yield, particularly among culture-negative PJIs.
Methods:
Samples of bone, tissue, or synovial fluid from patients with clinically identified PJIs were collected for inclusion in this study. Samples were assessed with traditional cultures and classified as culture-positive or -negative after 48 h. Samples subsequently underwent a Staphylococcus aureus-/Kingella kingae-specific PCR followed by a 16S rRNA gene PCR.
Results:
A total of 77 unique patients with clinically identified PJIs contributed 89 samples for inclusion in the study. There were 54 culture-negative and 35 culture-positive samples evaluated. The sensitivity and specificity of S. aureus PCR in culture-positive samples were 57.1% (95% CI, 34.1%-78.1%) and 92.9% (95% CI, 66.1%-98.9%), respectively. Among culture-positive samples, 16S rRNA gene PCR correctly identified 3 of 21 (14.3%) samples with S. aureus and 2 of 5 (40%) samples with Streptococcus spp. All molecular tests were negative in those with clinically identified, culture-negative PJI.
Conclusions:
Our study suggests that these diagnostic tools have a limited role in PJI diagnosis.
Background
Efficient tools are needed to stage liver disease before treatment of patients infected with hepatitis C virus (HCV). Several studies have demonstrated favorable performance of noninvasive multianalyte serum fibrosis marker panels [the fibrosis-4 (FIB-4) index and the aspartate aminotransferase (AST)-to-platelet ratio index (APRI)] compared with biopsy, but suggested cutoffs vary widely. Our objective was to evaluate the FIB-4 index and APRI and their component tests for staging fibrosis in our HCV-infected population and to determine practical cutoffs to help triage an influx of patients requiring treatment.
Methods
Transient elastography (TE) results from 1731 HCV-infected patients were mapped to an F0–F4 equivalent scale. Each patient's APRI and FIB-4 index were calculated. Areas under the receiver operating characteristic curve (AUROCs) and false-positive and false-negative rates were calculated to retrospectively compare the performance of the indices and their component tests.
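For reference, the two indices follow their standard published formulas; the sketch below shows the arithmetic, with the AST upper limit of normal chosen only for illustration.

```python
# Standard published definitions of the two fibrosis indices:
#   APRI  = (AST / AST upper limit of normal) x 100 / platelet count (10^9/L)
#   FIB-4 = (age [years] x AST [U/L]) / (platelets [10^9/L] x sqrt(ALT [U/L]))
# The AST upper limit of normal used below is an illustrative assumption.
import math

def apri(ast_u_l: float, platelets_10e9_l: float, ast_uln: float = 40.0) -> float:
    return (ast_u_l / ast_uln) * 100.0 / platelets_10e9_l

def fib4(age_years: float, ast_u_l: float, alt_u_l: float,
         platelets_10e9_l: float) -> float:
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

# example patient
print(round(apri(ast_u_l=80, platelets_10e9_l=150), 2))                   # 1.33
print(round(fib4(55, ast_u_l=80, alt_u_l=64, platelets_10e9_l=150), 2))   # 3.67
```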
Results
The highest AUROCs for distinguishing severe (F3–F4) from mild-to-moderate (F0–F2) fibrosis had overlapping 95% CIs: APRI (0.77; 0.74–0.79), FIB-4 index (0.76; 0.73–0.78), and AST (0.74; 0.72–0.77). Cutoffs had false-negative rates of 2.7%–2.8% and false-positive rates of 6.4%–7.4% for all 3 markers.
Conclusions
AST was as effective as the FIB-4 index and APRI at predicting fibrosis. Published cutoffs for APRI and the FIB-4 index would have been inappropriate in our population, with false-negative rates as high as 11%. For our purposes, no serum fibrosis marker was sufficiently sensitive to rule out significant fibrosis, but cutoffs developed for AST, the FIB-4 index, and APRI all had specificities of 79.2%–80.3% for ruling in severe fibrosis and could be used to triage one-third of our population for treatment without waiting for TE or liver biopsy.
Background
Biochemical prenatal screening tests are used to determine the risk of fetal aneuploidy based on the concentration of several biomarkers. The concentration of these biomarkers could be affected by preanalytical factors (PAFs) such as sample type (whole blood vs serum), storage time, and storage temperature. The impact of these factors on posttest risk is unknown.
Methods
Blood samples were collected from 25 pregnant patients. Each sample was divided into 24 aliquots, and each aliquot was subjected to 1 of 24 different treatments (2 sample types × 2 temperatures × 6 storage times). The impact of each PAF on calculated risk was estimated using mixed-effects regression and simulation analysis.
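A minimal sketch of the mixed-effects analysis described above is given below; the variable names and long-format data layout are assumptions, and statsmodels stands in for whatever software the authors used.

```python
# Sketch (assumed variable names, not the authors' code): biomarker
# concentration modeled on the preanalytical factors with a random intercept
# for each patient.
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical long-format data with columns:
# patient, sample_type (whole_blood/serum), temp_c, storage_h, conc
df = pd.read_csv("prenatal_aliquots.csv")

model = smf.mixedlm(
    "conc ~ C(sample_type) + C(temp_c) + storage_h",
    data=df,
    groups=df["patient"],        # random intercept per patient
)
result = model.fit()
print(result.summary())          # fixed-effect estimates for each PAF
```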
Results
PAFs were associated with statistically significant changes in concentration for some analytes. Simulation studies showed that PAFs accounted for 6% of the variation in posttest risk, and analytical imprecision accounted for 94% of the variation. We estimated that the background misclassification rate due to analytical imprecision is approximately 1.37% for trisomy 21 and 0.12% for trisomy 18. Preanalytical factors increased the probability of misclassification by 0.46% and 0.06% for trisomies 21 and 18, respectively.
Conclusions
Relaxing sample specifications for biochemical prenatal serum screening tests to permit analysis of serum samples stored for up to 72 h at room temperature or 4 °C, as well as serum obtained from whole blood stored similarly, has a small impact on calculated posttest aneuploidy risk.
Background
Consistent information on long-term storage stability for a broad range of nutritional biomarkers is lacking. We investigated the stability of 18 biomarkers stored at suboptimal temperatures (−20 °C and 5 °C) for up to 12 months.
Methods
Multiple vials of serum or whole blood pools (3 concentrations) were stored at −20 °C or 5 °C, removed from the −20 °C freezer after 3, 6, 9, and 12 months and from the 5 °C refrigerator after 6 and 12 months, and placed into a −70 °C freezer until analysis at study completion. Vials stored continuously at −70 °C were used as the reference condition for optimal storage. We measured 18 biomarkers: 4 iron-status indicators, 1 inflammation marker, 8 water-soluble vitamins, and 5 fat-soluble vitamins. For each temperature, we calculated geometric mean concentrations and the average percent change of geometric means across pools relative to the reference condition, estimated from a linear mixed model.
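The summary statistic described above can be illustrated as follows; the replicate values are invented for the example.

```python
# Sketch: geometric mean concentration per storage condition and the percent
# change relative to the -70 °C reference condition (illustrative values).
import numpy as np

def geometric_mean(values):
    values = np.asarray(values, dtype=float)
    return float(np.exp(np.mean(np.log(values))))

reference = [12.1, 11.8, 12.4]     # hypothetical replicate results at -70 °C
stored    = [11.2, 10.9, 11.5]     # same pool after 12 months at -20 °C

gm_ref, gm_stored = geometric_mean(reference), geometric_mean(stored)
pct_change = 100.0 * (gm_stored - gm_ref) / gm_ref
print(f"percent change vs reference: {pct_change:.1f}%")
```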
Results
Most biomarkers (13 of 18) showed no difference in concentration after 12 months of storage at −20 °C. Serum ferritin (1.5%), soluble transferrin receptor (−1.7%), and folate (−10.5%) showed small to moderate significant changes at 6 months, but changes were acceptable based on biologic variability. Serum pyridoxal-5′-phosphate (−18.6% at 9 months) and vitamin C (−23% at 6 months) showed large and unacceptable changes at −20 °C. All serum fat-soluble vitamins and iron status indicators, vitamin B12, total homocysteine, and methylmalonic acid showed acceptable changes when stored at 5 °C for up to 12 months.
Conclusions
Overall, we found good long-term stability for multiple nutritional biomarkers stored at suboptimal temperatures.
Background
In the 1880s, concern over the inconvenience of hazardous chemical solutions used for bedside urinalysis sparked an interest in the development of dry reagents for a range of common urine tests.
Content
This article examines the history of Dr Pavy’s Pellets and Dr Oliver’s Papers, 2 different dry reagent systems developed in the 1880s for bedside urine testing. It sets these developments in the context of the earlier dry chemistry work (e.g., indicator papers) and the subsequent work that led to modern day reagent tablets and dipstick devices.
Summary
Tests based on dry reagents can be traced back to the 1st century, but active development, in the form of indicator papers, dates from the 1600s. In the 1880s, spurred by dissatisfaction with liquid-based bedside urine testing among clinicians, Dr Frederick William Pavy and Dr George Oliver developed dry reagent tests, based on pellets (Dr Pavy’s Pellets) and chemically impregnated papers (Dr Oliver’s Papers), for urine sugar and urine albumin. These reagents were commercialized by a number of companies and provided in convenient cases (Physician’s Pocket Reagent Case). Eventually, these tests lost popularity and were replaced by the type of tablets and dipsticks developed by Eli Lilly and the Ames Division of Miles Laboratories (subsequently Bayer, and currently Siemens Healthineers) during the 1940s and 1950s.
Background
Patient surges beyond hospital capacity during the initial phase of the COVID-19 pandemic emphasized a need for clinical laboratories to prepare test processes to support future patient care. The objective of this study was to determine whether current instrumentation in local hospital laboratories could accommodate the anticipated workload from patients with COVID-19 in the hospitals and a proposed field hospital, in addition to testing for non-infected patients.
Methods
Simulation models predicted instrument throughput and turnaround time for chemistry, ion-selective electrode, and immunoassay tests using vendor-developed software with different workload scenarios. The expanded workload included tests from anticipated COVID-19 patients in two local hospitals and a proposed field hospital with a COVID-19-specific test menu, in addition to the pre-pandemic workload.
Results
Instrument throughput and turnaround time at each site were predicted. With additional COVID-19 patient beds in each hospital, the maximum throughput was approached with no impact on turnaround time. Addition of the field hospital workload led to significantly increased test turnaround times at each site.
Conclusions
Simulation models depicted the analytic capacity and turnaround times for laboratory tests at each site and identified the laboratory best suited for field hospital laboratory support during the pandemic.
Background
The epidemiology and clinical manifestations of COVID-19 in the pediatric population differ from those in the adult population. The purpose of this study was to identify the effects of the COVID-19 pandemic on laboratory test utilization in a pediatric hospital.
Methods
We performed a retrospective analysis of test utilization data from Ann & Robert H. Lurie Children’s Hospital of Chicago, an academic pediatric medical center. Data from two 100-day periods, prior to (pre-pandemic) and during (mid-pandemic) the pandemic, were analyzed to evaluate changes in test volume, laboratory utilization, and test positivity rate. We also evaluated these metrics for inpatient versus outpatient testing and performed modeling to determine which variables significantly impact the test positivity rate.
Results
During the pandemic period, there was an expected surge in COVID-19 testing, while over 84% of the laboratory tests studied decreased in ordering volume. The average number of tests ordered per patient was not significantly different during the pandemic for any of the laboratories (adjusted p-value > 0.05). Thirty-three of the studied tests showed a significant change in positivity rate during the pandemic. Linear modeling revealed test volume and inpatient status as the key variables associated with change in test positivity rate.
Conclusions
Excluding SARS-CoV-2 tests, the COVID-19 pandemic has generally led to decreased test ordering volume and laboratory utilization. However, at this pediatric hospital, the average number of tests performed per patient and test positivity rates were comparable between pre- and mid-pandemic periods. These results suggest that overall, clinical test utilization at this site remained consistent during the pandemic.
Background:
COVID-19 is a highly contagious respiratory disease that can be transmitted through human exhaled breath. It has caused immense loss, challenged the healthcare sector, and affected national economies across numerous sectors. Analysis of human breath samples is an attractive strategy for rapid diagnosis of COVID-19 by monitoring breath biomarkers.
Content:
Breath collection is a non-invasive process. Various technologies are employed for detection of breath biomarkers, including mass spectrometry (MS), biosensors, and artificial intelligence/machine learning. These tools offer short turnaround times, robustness, and onsite results. MS-based approaches in particular are promising, offering high speed, specificity, sensitivity, reproducibility, and broad coverage, and their coupling with various chromatographic separation techniques provides better clinical and biochemical understanding of COVID-19 using breath samples.
Summary:
Herein, we review MS-based approaches as well as other techniques used for the analysis of breath samples for COVID-19 diagnosis. We also highlight the different breath analyzers being developed for COVID-19 detection.
Background:
COVID-19 is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), a novel beta-coronavirus responsible for the 2019 coronavirus pandemic. Acute infections should be diagnosed by polymerase chain reaction (PCR)-based tests, but serology tests can demonstrate previous exposure to the virus.
Methods:
We compared the performance of the Diazyme, Roche, and Abbott SARS-CoV-2 serology assays in 179 negative subjects to determine negative percent agreement (NPA) and in 60 SARS-CoV-2 PCR-confirmed positive patients to determine positive percent agreement (PPA) at three different timeframes following a positive SARS-CoV-2 PCR result.
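For clarity, the agreement metrics used here reduce to simple proportions, as in the worked example below (counts are illustrative, not the study data).

```python
# Worked example of the agreement metrics: positive percent agreement (PPA)
# against PCR-positive patients and negative percent agreement (NPA) against
# negative subjects. Counts are hypothetical.
def ppa(true_pos: int, false_neg: int) -> float:
    return 100.0 * true_pos / (true_pos + false_neg)

def npa(true_neg: int, false_pos: int) -> float:
    return 100.0 * true_neg / (true_neg + false_pos)

# e.g., 24 of 25 PCR-positive patients reactive, 176 of 179 negatives nonreactive
print(f"PPA = {ppa(24, 1):.1f}%")    # 96.0%
print(f"NPA = {npa(176, 3):.1f}%")   # 98.3%
```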
Results:
At ≥ 15 days, the PPA (95% CI) was 100 (86.3-100)% for the Diazyme IgM/IgG panel, 96.0 (79.7-99.9)% for the Roche total Ig assay, and 100 (86.3-100)% for the Abbott IgG assay. The NPA (95% CI) was 98.3 (95.2-99.7)% for the Diazyme IgM/IgG panel, 99.4 (96.9-100)% for the Roche total Ig assay, and 98.9 (96.0-99.9)% for the Abbott IgG assay. When the Roche total Ig assay was combined with either the Diazyme IgM/IgG panel or the Abbott IgG assay, the positive predictive value was 100% while the negative predictive value remained greater than 99%.
Conclusions:
Our data demonstrate that the Diazyme, Roche, and Abbott SARS-CoV-2 serology assays have similar clinical performance. We demonstrated a low false-positive rate across all three platforms and found that the false positives on the Roche platform were distinct from those on the Diazyme or Abbott assays. Using multiple platforms in tandem increases the PPV, which is important when screening populations with low disease prevalence.
Background:
There are numerous benefits to performing salivary serology measurements for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the causative pathogen for coronavirus disease 2019 (COVID-19). Here, we used a sensitive multiplex serology assay to quantitate salivary IgG against 4 SARS-CoV-2 antigens: nucleocapsid, receptor-binding domain, spike, and N-terminal domain.
Methods:
We used single samples from 90 individuals with a COVID-19 diagnosis collected at 0 to 42 days postsymptom onset (PSO) and from 15 uninfected control subjects. The infected individuals were segmented into 4 groups (0-7 days, 8-14 days, 15-21 days, and >21 days) based on days PSO, and values were compared to controls.
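A sketch of the cutoff strategy reported in the results (fixing specificity in the controls at roughly 93.3% and then reading off sensitivity and ROC AUC) is shown below; the antibody values and use of scikit-learn are assumptions made for illustration.

```python
# Sketch: choose a cutoff at the 93.3rd percentile of control values
# (~93.3% specificity), then compute sensitivity in infected subjects and AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

controls = np.array([0.8, 1.1, 0.9, 1.3, 0.7, 1.0, 1.2, 0.6, 0.9, 1.1,
                     0.8, 1.4, 1.0, 0.9, 1.2])          # 15 uninfected subjects
infected = np.array([2.5, 3.1, 0.9, 4.2, 1.8, 2.0, 5.5, 1.1, 3.6, 2.9])

cutoff = np.quantile(controls, 0.933)                    # ~93.3% specificity
sensitivity = np.mean(infected >= cutoff)
auc = roc_auc_score(np.r_[np.zeros_like(controls), np.ones_like(infected)],
                    np.r_[controls, infected])
print(f"cutoff={cutoff:.2f}, sensitivity={sensitivity:.0%}, AUC={auc:.2f}")
```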
Results:
Compared to controls, infected individuals showed higher levels of antibodies against all antigens starting from 8 days PSO. When applying cut-offs with at least 93.3% specificity at every time interval segment, nucleocapsid protein serology had the best sensitivity at 0 to 7 days PSO (60% sensitivity [35.75% to 80.18%], ROC area under the curve [AUC] = 0.73, P = 0.034). Receptor-binding domain serology had the best sensitivity at 8 to 14 days PSO (83.33% sensitivity [66.44%-92.66%], ROC AUC = 0.90, P < 0.0001), and all assays except for N-terminal domain had 92% sensitivity (75.03%-98.58%) at >14 days PSO.
Conclusions:
This study shows that our multiplexed immunoassay can distinguish infected from uninfected individuals and reliably (93.3% specificity) detect seroconversion (in 60% of infected individuals) as early as the first week PSO, using easy-to-collect saliva samples.
Background
Numerous studies have documented reduced access to patient care due to the COVID-19 pandemic, including access to diagnostic or screening tests, prescription medications, and treatment for ongoing conditions. In the context of clinical management for venous thromboembolism, this could result in suboptimal therapy with warfarin. We aimed to determine the impact of the pandemic on utilization of international normalized ratio (INR) testing and the percentage of high and low results.
Methods
INR data from 11 institutions were extracted to compare testing volume and the percentage of INR results ≥3.5 and ≤1.5 between a pre-pandemic period (January-June 2019, period 1) and a portion of the COVID-19 pandemic period (January-June 2020, period 2). The analysis was performed for inpatient and outpatient cohorts.
Results
Testing volumes showed relatively little change in January and February, followed by a significant decrease in March, April and May, and then returned to baseline in June. Outpatient testing showed a larger percentage decrease in testing volume compared to inpatient testing. At 10 of the 11 study sites we observed an increase in the percentage of abnormal high INR results as test volumes decreased, primarily among outpatients.
Conclusion
The COVID-19 pandemic impacted INR testing among outpatients, which may be attributable to several factors. The increase in supratherapeutic INR results during a period of reduced laboratory utilization and access to care is concerning because of the risk of adverse bleeding events in this group of patients. This could be mitigated in the future by offering drive-through testing and/or widespread implementation of home INR monitoring.
Background:
Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), poses substantial challenges for health care systems. With a vastly expanding number of publications on COVID-19, clinicians need evidence synthesis to produce guidance for handling patients with COVID-19. In this systematic review and meta-analysis, we examine which routine laboratory tests are associated with severe COVID-19 disease.
Content:
PubMed (Medline), Scopus, and Web of Science were searched until March 22, 2020 for studies on COVID-19. Eligible studies were original articles reporting on laboratory tests and outcomes of patients with COVID-19. Data were synthesized, and we conducted random-effects meta-analyses estimating the mean difference (MD) and standardized mean difference (SMD) at the biomarker level for disease severity. Risk of bias and applicability concerns were evaluated using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool.
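As a pointer to the pooling step, a DerSimonian–Laird random-effects mean difference can be computed as in the sketch below; the study-level inputs are illustrative, not the extracted data, and this is not necessarily the exact estimator used by the authors.

```python
# Minimal DerSimonian-Laird random-effects pooling of per-study mean
# differences (illustrative inputs).
import numpy as np

def dersimonian_laird(md, var):
    """Pool per-study mean differences `md` with within-study variances `var`."""
    md, var = np.asarray(md, float), np.asarray(var, float)
    w_fixed = 1.0 / var
    q = np.sum(w_fixed * (md - np.sum(w_fixed * md) / np.sum(w_fixed)) ** 2)
    dof = len(md) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - dof) / c)              # between-study variance
    w = 1.0 / (var + tau2)                      # random-effects weights
    pooled = np.sum(w * md) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

print(dersimonian_laird(md=[40.0, 55.0, 48.0], var=[25.0, 36.0, 16.0]))
```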
Summary:
In total, 45 studies were included, of which 21 publications were used for the meta-analysis. Studies were heterogeneous but had low risk of bias and applicability concern in terms of patient selection and reference standard. Severe disease was associated with higher white blood cell count (MD 1.28 × 10⁹/L), neutrophil count (MD 1.49 × 10⁹/L), C-reactive protein (MD 49.2 mg/L), lactate dehydrogenase (MD 196 U/L), D-dimer (SMD 0.58), and aspartate aminotransferase (MD 8.5 U/L), all P < 0.001. Furthermore, lower lymphocyte count (MD −0.32 × 10⁹/L), platelet count (MD −22.4 × 10⁹/L), and hemoglobin (MD −4.1 g/L), all P < 0.001, were also associated with severe disease. In conclusion, several routine laboratory tests are associated with disease severity in COVID-19.
Background
Anti-SARS-CoV-2 serological responses may have a vital role in controlling the spread of the disease. However, the comparative performance of automated serological assays has not been determined in susceptible patients with significant co-morbidities.
Methods
In this study, we used a large number of COVID-19-negative patient samples (n = 2030) as well as COVID-19-positive patient samples (n = 112) to compare the performance of four serological assay platforms: the Siemens Healthineers Atellica IM Analyzer, Siemens Healthineers Dimension EXL Systems, Abbott ARCHITECT, and Roche cobas.
Results
All four serology assay platforms exhibited comparable negative percent agreement with negative COVID-19 status, ranging from 99.2% to 99.7%, and positive percent agreement with positive real-time reverse transcriptase polymerase chain reaction (RT-PCR) results, ranging from 84.8% to 87.5%. Of the 2142 total samples, only 38 (1.8%) yielded discordant results on one or more platforms. However, only 1.1% (23/2030) of results in the COVID-19-negative cohort were discordant, whereas discordance was 10-fold higher in the COVID-19-positive cohort at 11.3% (15/112). Of the 38 discordant results, 34 were discordant on only one platform.
Conclusion
Serology assay performance was comparable across the four platforms assessed in a large population of COVID-19-negative patients with relevant comorbidities. The pattern of discordance shows that samples were typically discordant on only a single assay platform, and the discordance rate was 10-fold higher in the COVID-19-positive population.
Impact statement
High negative percent agreement reinforces the reliability of serology testing especially in a cohort of at-risk patients. Serology platform discordance highlights the importance of a two-test strategy for properly identifying seroconverted patients.
Background:
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a novel beta-coronavirus that has recently emerged as the cause of the coronavirus disease 2019 (COVID-19) pandemic. Polymerase chain reaction (PCR)-based tests are optimal and recommended for the diagnosis of an acute SARS-CoV-2 infection. Serology tests for viral antibodies provide an important tool to diagnose previous exposure to the virus. Here we evaluate the analytical performance parameters of the Diazyme SARS-CoV-2 IgM/IgG serology assays and describe the kinetics of IgM and IgG seroconversion observed in patients with PCR-confirmed COVID-19 who were admitted to our hospital.
Methods:
We validated the performance of the Diazyme assay in 235 subjects to determine specificity. Subsequently, we evaluated the SARS-CoV-2 IgM and IgG seroconversion of 54 PCR confirmed COVID-19 patients and determined sensitivity of the assay at three different timeframes.
Results:
Sensitivity and specificity for detecting seropositivity at ≥15 days following a positive SARS-CoV-2 PCR result were 100.0% and 98.7%, respectively, when assaying for the panel of IgM and IgG. The median time from a positive PCR to a reactive result was 5 days (IQR: 2.75-9 days) for IgM and 4 days (IQR: 2.75-6.75 days) for IgG.
Conclusions:
Our data demonstrate that the Diazyme IgM/IgG assays are suitable for detecting SARS-CoV-2 IgG and IgM in patients with suspected SARS-CoV-2 infections. For the first time, we report longitudinal data showing the evolution of seroconversion for both IgG and IgM in a cohort of acutely ill patients in the United States. We also demonstrate a low false-positive rate in patients who were presumed to be disease free.
Background:
An evolving COVID-19 testing landscape and issues with test supply allocation, especially in the current pandemic, have made test selection challenging for ordering providers. We audited orders for the Xpert® Xpress SARS-CoV-2 RT-PCR platform, the fastest of the several testing modalities available, to illuminate these challenges, utilizing a multidisciplinary laboratory professional team consisting of a pathology resident and a microbiology laboratory director.
Methods:
We retrospectively reviewed the first 500 Xpert® Xpress SARS-CoV-2 RT-PCR test orders from a 2-week period to determine test appropriateness based on the following indications: emergency surgery, emergent obstetric procedures, and initial behavioral health admission, with discharge to skilled care facilities and pediatric admissions added later. Our hypothesis was that a significant proportion of orders for this testing platform were inappropriate.
Results:
Upon review, a significant proportion of orders were inappropriate, with 69.8% (n = 349, p < 0.0001) not meeting indications for rapid testing. Of all orders, 249 designated as emergency surgery were inappropriate, with 49.0% of those orders never proceeding to any surgical intervention; most of these were trauma related (64.6% were orders associated with a trauma unit).
Conclusions:
Significant, pervasive inappropriate ordering practices were identified at this center. A laboratory professional team can be key to identifying problems in testing and play a significant role in combating inappropriate test utilization.
Background:
Coronavirus Disease 2019 (COVID-19) was formally characterized as a pandemic on March 11, 2020. Since that time, the COVID-19 pandemic has led to unprecedented demand for healthcare resources. The purpose of this study was to identify changes in laboratory test utilization in the setting of increasing local incidence of COVID-19.
Methods:
We performed a retrospective assessment of laboratory test order and specimen container utilization at a single urban tertiary care medical center. Data were extracted from the laboratory information system database over a 10-week period spanning the initial inflection of COVID-19 incidence in our region. Total testing volumes were calculated during the first and last two weeks of the observation period and used as reference points to examine the absolute and relative differences in test order volume between the pre-pandemic and COVID-19 surge periods.
Results:
Between February 2, 2020 and April 11, 2020, there were 873,397 tests ordered and final-verified. The in-house SARS-CoV-2 PCR positivity rate for admitted patients in the last week of the observation period was 30.8%. Significant increases in workload were observed in the send-out laboratory section and for COVID-19 diagnostic (PCR) and management-related testing. Otherwise, there was a net decrease in overall demand across nearly all laboratory sections. Increases were also noted for individual tests related to COVID-19 management. Viral transport media and citrated blue-top containers demonstrated increases in utilization.
Conclusion:
Increasing local incidence of COVID-19 had a profound impact on laboratory operations. While volume increases were seen for laboratory tests related to COVID-19 diagnostics and management, including some with limited evidence to support their use, overall testing volumes decreased substantially. During events such as COVID-19, monitoring of such patterns can help inform laboratory management, staffing, and test stewardship recommendations for managing resource and supply availability.
Background
We launched a retrospective analysis of SARS-CoV-2 antibodies in 192 patients with COVID-19, aiming to depict the kinetic profile of SARS-CoV-2 antibodies and explore the factors related to SARS-CoV-2 antibody expression.
Methods
Data on 192 confirmed patients with COVID-19 between January and February 2020 were collected from the designated hospital that received patients with COVID-19 in Guangzhou, China. In addition, a cohort of 130 suspected patients with COVID-19 and 209 healthy people were also enrolled in this study. IgM and IgG antibodies to SARS-CoV-2 were detected by chemiluminescence immunoassay in the different groups.
Results
A total of 192 COVID-19 cases were analyzed, of which 81.8% had detectable anti-SARS-CoV-2 IgM and 93.2% had detectable anti-SARS-CoV-2 IgG at the time of sampling. Kinetic analysis showed that anti-SARS-CoV-2 IgM seroconversion occurred 5–10 days after the onset of symptoms; IgM then rose rapidly to a peak within approximately 2–3 weeks, remained at its peak for about 1 week, and then declined. Anti-SARS-CoV-2 IgG seroconversion occurred simultaneously with or after IgM, reached its peak within approximately 3–4 weeks, and began to decline after the fifth week. Correlation analysis showed that in patients with COVID-19, the level of IgM was related to gender and disease severity (P < 0.01), and the level of IgG was related to age and disease severity (P < 0.001). Univariate analysis of relevant factors indicated that the level of IgG had a weak correlation with age (r = 0.374, P < 0.01). The level of IgM in male patients was higher than that in female patients (P < 0.001). The levels of anti-SARS-CoV-2 IgM and IgG were positively correlated with the severity of COVID-19 and the duration of viral persistence in the patients.
Conclusion
The findings of this study show that anti-SARS-CoV-2 IgM and IgG can be important aids to COVID-19 diagnosis, especially in the early phase of infection. Furthermore, antibody expression in patients with COVID-19 is correlated with disease severity, age, gender, and viral clearance or continued replication.
Background:
COVID-19, the disease caused by SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2), can present with symptoms ranging from none to severe. Thrombotic events occur in a significant number of patients with COVID-19, especially in critically ill patients. This apparently novel form of coagulopathy is termed COVID-19-associated coagulopathy (CAC), and endothelium-derived von Willebrand factor (vWF) may play an important role in its pathogenesis.
Content:
vWF is a multimeric glycoprotein involved in inflammation and in primary and secondary hemostasis. Studies have shown that patients with COVID-19 have significantly elevated levels of vWF antigen and activity, likely contributing to the increased risk of thrombosis seen in CAC. The high levels of both vWF antigen and activity have been clinically correlated with worse outcomes. Furthermore, the severity of a COVID-19 infection appears to reduce molecules that regulate vWF level and activity, such as ADAMTS-13 and high-density lipoprotein (HDL). Finally, studies have suggested that patients with blood group O (a blood group with lower-than-baseline levels of vWF) have a lower risk of infection and disease severity compared to other blood groups; however, more studies are needed to elucidate the role of vWF.
Summary:
CAC is a significant contributor to morbidity and mortality. Endothelial dysfunction with the release of prothrombotic factors, such as vWF, needs further examination as a possible important component in the pathogenesis of CAC.
Background:
Despite improving supplies, SARS-CoV-2 nucleic acid amplification tests remain limited during surges, even more so given concerns around COVID-19/influenza co-occurrence. Matching clinical guidelines to available supplies ensures resources remain available to meet clinical needs. We report a change in clinician practice after an electronic health record (EHR) order redesign intended to influence emergency department (ED) testing patterns.
Methods:
We included all ED visits between December 1, 2021 and January 18, 2022 across a hospital system to assess the impact of EHR order changes on provider behavior 3 weeks before and after the change. The EHR order redesign included embedded symptom-based order guidance. Primary outcomes were the proportion of COVID-19 + flu/respiratory syncytial virus (RSV) testing performed on symptomatic, admitted, and discharged patients, and the proportion of COVID-19 + flu testing on symptomatic, discharged patients.
Results:
A total of 52 215 ED visits were included. For symptomatic, discharged patients, COVID-19 + flu/RSV testing decreased from 11.4 to 5.8 tests per 100 symptomatic visits, and the rate of COVID-19 + flu testing increased from 7.4 to 19.1 before and after the intervention, respectively. The rate of COVID-19 + flu/RSV testing increased from 5.7 to 13.1 tests per 100 symptomatic visits for symptomatic patients admitted to the hospital. All changes were significant (P < 0.0001).
Conclusions:
A simple EHR order redesign was associated with increased adherence to institutional guidelines for SARS-CoV-2 and influenza testing amidst supply chain limitations necessitating optimal allocation of scarce testing resources. With continually shifting resource availability, clinician education alone is not sufficient. Rather, system-based interventions embedded within existing workflows can better align resources and serve the testing needs of the community.
Background
Nonpharmaceutical interventions to prevent the spread of coronavirus disease 2019 also decreased the spread of respiratory syncytial virus (RSV) and influenza. Viral diagnostic testing in patients with respiratory tract infections (RTI) is a necessary tool for patient management; therefore, sensitive and specific tests are required. This scoping literature review aimed to summarize the study characteristics of commercially available sample-to-answer RSV tests.
Content
PubMed and Embase were queried for studies reporting on the diagnostic performance of tests for RSV in patients with RTI (published January 2005–January 2021). Information on study design, patient and setting characteristics, and published diagnostic performance of RSV tests was extracted from 77 studies that met predefined inclusion criteria. A literature gap was identified for studies of RSV tests conducted in adult-only populations (5.3% of total subrecords) and in outpatient (7.5%) or household (0.8%) settings. Overall, RSV tests with analytical time >30 min had higher published sensitivity (62.5%–100%) vs RSV tests with analytical time ≤30 min (25.7%–100%); this sensitivity range could be partially attributed to the different modalities (antigen vs molecular) used. Molecular-based rapid RSV tests had higher published sensitivity (66.7%–100%) and specificity (94.3%–100%) than antigen-based RSV tests (sensitivity: 25.7%–100%; specificity: 80.3%–100%).
Summary
This scoping review reveals a paucity of literature on studies of RSV tests in specific populations and settings, highlighting the need for further assessments. Considering the implications of these results in the current pandemic landscape, the authors preliminarily suggest adopting molecular-based RSV tests for first-line use in these settings.
Dr. Edward W. Bermes, Jr., PhD, FAACC, passed away peacefully at home on February 16, 2021, following a long-term illness. He was born in Chicago, Illinois on August 20, 1932. He received his BS in Chemistry from St. Mary’s College in Winona, MN, followed by an MS and a PhD in Biochemistry from Loyola University of Chicago (LUC). In 1958, he was appointed Chief Biochemist at Cook County Hospital. On receiving his doctoral degree in 1959, he was appointed Assistant Professor of Biochemistry in LUC’s Stritch School of Medicine (SSOM). During the 1960s, he served as Director of Biochemistry at St. Francis and West Suburban hospitals. In 1969, he was promoted to Associate Professor of Biochemistry and Pathology. Also at that time, he led the creation of a laboratory for a hospital opening on LUC’s new medical center campus (LUMC) in Maywood, IL.
By then, Dr. Bermes had become an excellent teacher and welcomed more teaching opportunities. At St. Francis Hospital, he had developed a predoctoral clinical biochemistry program. He codirected it with Dr. Hugh McDonald, Chair of LUC’s Department of Biochemistry and Biophysics. Dr. McDonald was also a charter member and Past-President of AACC (1954). It was challenging to find sites to provide clinical experience for predoctoral trainees until the opening of LUMC, which also became the location for creation of a postdoctoral fellowship program in clinical chemistry. Initially funded through an NIH grant, the postdoctoral program was approved by the American Board of Clinical Chemistry in 1972. Dr. Bermes’ devotion to postdoctoral training led to his serving on the Commission on the Accreditation of Clinical Chemistry Training Programs Board of Directors, and as its president for 8 years. During his career, he trained thousands of students in medicine, graduate programs, medical technology, and clinical laboratory science, as well as residents and fellows in pathology, clinical chemistry, and other disciplines. In 1983, his dedication to teaching was recognized with AACC’s Award for Outstanding Contributions in Education and Training.
Background:
Glycated albumin is cleared by the Food and Drug Administration (FDA) for clinical use in diabetes care. To understand its performance in the general US population, we conducted measurements in >19 000 samples from the National Health and Nutrition Examination Survey (NHANES). Of these samples, 5.7% had previously undergone at least 2 freeze-thaw cycles and were considered "non-pristine."
Methods:
We measured glycated albumin and albumin using the Lucica GA-L (Asahi Kasei) assay in stored serum samples from NHANES 1999-2004. Serum albumin (Roche/Beckman) was previously measured. We examined the correlations of percent glycated albumin with hemoglobin A1c (HbA1c) and fasting glucose in the pristine and non-pristine samples. We also measured cystatin C (Siemens) and compared the results to cystatin C (Dade Behring) measurements previously obtained in a subsample.
Results:
Glycated albumin (%) was significantly lower in pristine vs non-pristine samples (13.8% vs 23.4%, P < 0.0001). The results from the Asahi Kasei albumin assay (g/dL) were highly correlated with albumin originally measured in NHANES (Pearson's correlation coefficient, r = 0.76) but values were systematically higher (+0.25 g/dL, P < 0.0001). Cystatin C (Siemens) was similar to previous cystatin C measurements (r = 0.98) and did not differ by pristine status (P = 0.119). Glycated albumin (%) was highly correlated with HbA1c and fasting glucose in pristine samples (r = 0.78 and r = 0.71, respectively) but not in non-pristine samples (r = 0.11 and r = 0.12, respectively).
Conclusions:
The performance of the glycated albumin assay in the pristine samples was excellent. Performance in non-pristine samples was highly problematic. Analyses of glycated albumin in NHANES 1999-2004 should be limited to pristine samples only. These results have major implications for the use of these public data.
Background:
The circulating concentration of 1α,25-dihydroxyvitamin D [1α,25(OH)2D] is very low, and the presence of multiple isomers may lead to inaccurate quantitation if not separated prior to analysis. Antibody-based immunoextraction procedures are sometimes used to remove structurally related isomers of 1α,25(OH)2D prior to an LC-MS/MS analysis. However, immunoextraction increases sample preparation time and cost. In addition, some dihydroxyvitamin D metabolites are not completely removed by immunoextraction.
Method:
We developed an HPLC method using a phenyl-hexyl column to investigate interfering isomers of 1α,25(OH)2D.
Result:
Using this method, the 4-phenyl-1,2,4-triazoline-3,5-dione (PTAD) derivatization product of 1α,25(OH)2D was found to be present as 2 epimers, which were separated chromatographically with an area ratio of 2:1. The PTAD-derivatized metabolite of 25-hydroxyvitamin D3 [i.e., 4β,25-dihydroxyvitamin D3 (4β,25(OH)2D3)] eluted between the 6R and 6S epimers of derivatized 1α,25(OH)2D3. If not chromatographically resolved, 4β,25(OH)2D can affect 1α,25(OH)2D quantitation. In a method comparison study, the presence of 4β,25(OH)2D produced a positive bias of up to 127% in 1α,25(OH)2D3 quantitation.
Conclusion:
The LC-MS/MS method we developed, which does not require an immunoextraction procedure, resolved the major interfering peak from 1α,25(OH)2D and achieved reliable quantitation of 1α,25(OH)2D.
Background
Patients infected with virulent pathogens require the sophisticated diagnostic capabilities of a core laboratory for optimal care. This is especially true in outbreaks that strain healthcare system capacity. However, samples from such patients pose an infection risk for laboratory workers. We evaluated a strategy for mitigating this risk by preincubating specimens with 2-[4-(2,4,4-trimethylpentan-2-yl)phenoxy]ethanol, a non-ionic detergent commonly called Triton X-100.
Methods
Lithium-heparinized plasma was mixed with the detergent Triton X-100 at 1%. Inactivation of Ebola virus (EBOV), yellow fever virus (YFV), and chikungunya virus (CHIKV) was assessed using a virus-outgrowth assay. The impact of 1% Triton X-100 dilution on the components of a complete metabolic panel (CMP) was assessed on a Roche Cobas analyzer with 15 specimens that spanned a large portion of the analytical measurement range.
Results
Incubation with 1% Triton X-100 for 5 min was sufficient to completely inactivate EBOV and YFV spiked into plasma but did not completely inactivate CHIKV even after 60 min of incubation. This incomplete inactivation occurred only when CHIKV was spiked into plasma; CHIKV was completely inactivated in cell culture medium. After addition of Triton X-100, a bias of −0.78 mmol/L (95% CI, −2.41 to 0.85) was observed for CO2 and a bias of 5.79 U/L (95% CI, −0.05 to 11.63) was observed for aspartate aminotransferase. No other components of the CMP were affected by the addition of Triton X-100.
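The paired bias estimates quoted above are mean differences with 95% CIs, which can be reproduced on any paired dataset as in the sketch below (values are illustrative, not the study data).

```python
# Sketch: mean difference between paired results with and without Triton X-100,
# with a 95% CI for the mean difference (illustrative values).
import numpy as np
from scipy import stats

without_detergent = np.array([24.0, 22.5, 26.1, 23.8, 25.0])   # e.g., CO2, mmol/L
with_detergent    = np.array([23.4, 21.9, 25.2, 23.1, 24.3])

diff = with_detergent - without_detergent
bias = diff.mean()
ci = stats.t.interval(0.95, len(diff) - 1, loc=bias, scale=stats.sem(diff))
print(f"bias = {bias:.2f} mmol/L, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```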
Conclusions
Detergent-based inactivation of plasma specimens may be a viable approach to mitigating the risk that certain blood-borne pathogens pose to laboratory workers in an outbreak setting. However, the effectiveness of this method for inactivation may depend on the specimen type and pathogen in question.