While Six Sigma is used in different disciplines to improve quality, Tony Badrick and Elvar Theodorsson, in a recent paper in CCLM, have questioned its application in the medical laboratory, concluding that Six Sigma has provided no value to the medical laboratory. In addition, the authors have extended their criticism to the Total Analytical Error (TAE) model and statistical quality control. To address their arguments, we explain the basics of the TAE model and Six Sigma and show the value of Six Sigma to the medical laboratory.
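For readers who want to reproduce the argument numerically, the following is a minimal sketch of the conventional sigma-metric calculation underlying this debate, assuming allowable total error (TEa), bias and imprecision (CV) are all expressed in percent at the same decision level; the example numbers are illustrative and not taken from either paper.

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Conventional sigma metric: (TEa - |bias|) / CV, all expressed in percent
    at the same medical decision level."""
    if cv_pct <= 0:
        raise ValueError("CV must be positive")
    return (tea_pct - abs(bias_pct)) / cv_pct

# Illustrative numbers only (not from either paper):
# TEa = 10 %, bias = 1.5 %, CV = 2 %  ->  sigma = 4.25
print(sigma_metric(10.0, 1.5, 2.0))
```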
Point-of-care testing (POCT) is becoming an increasingly popular way to perform laboratory tests closer to the patient. This option has several recognized advantages, such as accessibility, portability, speed, convenience, ease of use, ever-growing test panels, lower cumulative healthcare costs when used within appropriate clinical pathways, better patient empowerment and engagement, and reduction of certain pre-analytical errors, especially those related to specimen transportation. On the other hand, POCT also poses some limitations and risks: lower accuracy and reliability compared to traditional laboratory tests, quality control and connectivity issues, high dependence on operators (with varying levels of expertise or training), challenges related to patient data management, higher costs per individual test, and regulatory and compliance issues such as the need for appropriate validation prior to clinical use (especially for rapid diagnostic tests; RDTs). In addition, there are preanalytical sources of error that may remain undetected in this type of testing, which is usually based on whole blood samples (e.g., presence of interfering substances, clotting, hemolysis). There is no doubt that POCT is a breakthrough innovation in laboratory medicine, but the discussion on its appropriate use requires further debate and initiatives. This collective opinion paper, composed of abstracts of the lectures presented at the two-day expert meeting “Point-Of-Care-Testing: State of the Art and Perspective” (Venice, April 4–5, 2024), aims to provide a thoughtful overview of the state of the art in POCT, its current applications, advantages and potential limitations, as well as some reflections on the future perspectives of this particular field of laboratory medicine.
Objectives
Diurnal variation of plasma glucose levels may contribute to diagnostic uncertainty. The permissible time interval, pT(t), was proposed as a time-dependent characteristic to specify the time within which glucose levels from two consecutive samples are not biased by the time of blood collection. A major obstacle is the lack of population-specific data that reflect the diurnal course of a measurand. To overcome this issue, an approach was developed to detect and assess diurnal courses from big data.
Methods
A quantile regression model (QRM) was developed comprising two-component cosinor analyses and time, age, and sex as predictors. Population-specific canonical diurnal courses were generated employing more than two million plasma glucose values from four different hospital laboratory sites. Permissible measurement uncertainties, pU, were also estimated by a population-specific approach to render Chronomaps that depict pT(t) for any timestamp of interest.
Results
The QRM revealed significant diurnal rhythmometrics with good agreement between the four sites. A minimum pT(t) of 3 h exists for median glucose levels that is independent of sampling time. However, amplitudes increase in a concentration-dependent manner and shorten pT(t) down to 72 min. Assessment of pT(t) in 793,048 paired follow-up samples from 99,453 patients revealed that 24.2 % of sample pairs violated the indicated pT(t).
Conclusions
The QRM is suitable for rendering Chronomaps from population-specific time courses, and the results suggest that more stringent sampling schedules are required, especially in patients with elevated glucose levels.
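As a rough illustration of the modelling approach described above, the following sketch fits a two-component cosinor quantile regression with statsmodels; the harmonic periods (24 h and 12 h), the predictor coding and all variable names are assumptions of this example, not the authors' exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def cosinor_design(hours: np.ndarray, age: np.ndarray, sex: np.ndarray) -> pd.DataFrame:
    """Two-component cosinor terms (24 h and 12 h harmonics) plus age and sex."""
    w = 2.0 * np.pi * hours
    return pd.DataFrame({
        "const": 1.0,
        "cos24": np.cos(w / 24.0), "sin24": np.sin(w / 24.0),
        "cos12": np.cos(w / 12.0), "sin12": np.sin(w / 12.0),
        "age": age,
        "sex": sex,  # e.g. coded 0 = female, 1 = male
    })

def fit_quantile_cosinor(glucose, hours, age, sex, q=0.5):
    """Fit the q-th conditional quantile of glucose on the cosinor design."""
    X = cosinor_design(np.asarray(hours, float), np.asarray(age, float), np.asarray(sex, float))
    return sm.QuantReg(np.asarray(glucose, float), X).fit(q=q)

# Evaluating the fitted median (q=0.5) and outer quantiles on a grid of sampling
# times yields a canonical diurnal course per site, from which pT(t) can be derived.
```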
Objectives
Emergency department (ED) crowding is a widespread problem that puts patients at risk. Improving ED throughput requires novel approaches. Point-of-care testing (POCT) has emerged as a useful technology that could contribute to more efficient patient flow and better timeliness in the ED. The main objective was to demonstrate, in a multicenter study, that POCT benefits care timeliness in the ED.
Methods
We conducted a multicenter, cluster-randomized study including a total of 3,200 patients. We randomly assigned patients to a POCT group or a central laboratory group. The primary outcome was the ED time to clinical decision. Secondary outcomes included the length of stay and the laboratory turnaround time. Readmission within the seven days after discharge was also calculated.
Results
The primary finding of this study is that a POCT-based strategy significantly improves care timeliness in the ED. We found significant reductions in all outcomes regardless of presentation reason, patient disposition or hospital type. Time to clinical decision decreased by 75.2 min (from 205 to 129.8 min), length of stay by 77.5 min (from 273.1 to 195.6 min) and laboratory turnaround time by 56.2 min (from 82.2 to 26 min) in the POCT group. No increase in readmission was found.
Conclusions
Our strategy represents a good approach to optimize timeliness in the ED. It should be seen as a starting point for further operational research focusing on POCT for improving throughput and reducing crowding in the ED.
Circulating tumor cells (CTCs) are pivotal in the distant metastasis of tumors, serving as one of the primary materials for liquid biopsy. They hold significant clinical importance in assessing prognosis, predicting efficacy, evaluating therapeutic outcomes, and studying recurrence, metastasis, and resistance mechanisms in cancer patients. Nevertheless, the rarity and heterogeneity of CTCs and the complexity of metastasis mean that the clinical application of CTC detection faces many challenges, which may need to be addressed by practical strategies. This article reviews these challenges and strategies.
Objectives
This study aimed to determine the clinical significance of Krebs von den Lungen-6 (KL-6), surfactant proteins A (SP-A) and D (SP-D) in the evaluation and management of interstitial lung disease (ILD).
Methods
Serum KL-6, SP-A and SP-D levels were measured in 122 unique consecutive patients referred for connective tissue disease (CTD)-associated ILD (CTD-ILD) autoantibody testing and in 120 “healthy” controls. Patients’ charts were retrospectively reviewed, and patients were categorized as ILD or non-ILD and as CTD-ILD or other ILD. All biomarkers were evaluated for diagnosis and for moderate vs. severe ILD based on high-resolution computed tomography (HRCT).
Results
ILD was diagnosed in 52 % (n=64) and non-ILD in 48 % (n=58). ILD patients were categorized as other ILD (61 %, n=39) or CTD-ILD (39 %, n=25). Patients with ILD had significantly elevated levels of SP-A (p<0.02), KL-6 and SP-D (both p<0.0001) when compared to those with non-ILD. The mean levels of all biomarkers were significantly elevated in the ILD compared to the non-ILD group (p<0.0001). There was no significant difference in biomarker levels between the CTD-ILD and other ILD groups (p≥0.900). Biomarkers had comparable specificities (89–93 %); however, sensitivities were variable at 75, 77 and 17 % for KL-6, SP-D and SP-A, respectively. The combination of KL-6 and SP-D yielded diagnostic accuracy comparable to all biomarkers, with median levels significantly higher in patients with severe vs. mild disease.
Conclusions
KL-6 and SP-D levels are elevated in ILD and therefore contribute to the diagnosis and risk stratification for patient management.
Objectives
Mild traumatic brain injury (mTBI) remains challenging to diagnose effectively in the emergency department. Abbott has developed the “GFAP/UCH-L1” mTBI test to guide the clinical decision to perform a computed tomography (CT) head scan by ruling out the presence of mTBI. We evaluated the diagnostic accuracy of the “GFAP/UCH-L1” mTBI test in a Greek cohort and established age-dependent cut-offs.
Methods
A total of 362 subjects with suspected mTBI admitted to the emergency department of the KAT General Hospital of Athens, Greece, were recruited for the study. All subjects underwent a CT head scan to establish the diagnosis of mTBI. GFAP and UCH-L1 were measured using the Alinity I analyzer (Abbott). A total of 163 healthy subjects served as controls.
Results
Using the manufacturer’s cut-offs (35 ng/L for GFAP and 400 ng/L for UCH-L1), the “GFAP/UCH-L1” mTBI test had a sensitivity of 99.1 % and a specificity of 40.6 %. However, the specificity dropped to 14.9 % in patients older than 65 years. By defining new cut-offs of 115 ng/L for GFAP and 335 ng/L for UCH-L1 specifically for patients older than 65 years, specificity was increased to 30.6 % without changing test sensitivity, and the number of CT head scans avoided in this subgroup was doubled.
Conclusions
The “GFAP/UCH-L1” mTBI test is an efficient “rule-out test” to exclude mTBI in patients with suspected injury. By adjusting the cut-offs in patients older than 65 years, we could significantly increase the number of CT head scans avoided without affecting the sensitivity. These new cut-offs should be externally validated.
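To make the cut-off evaluation concrete, here is a minimal sketch of how sensitivity, specificity and avoided CT scans can be recomputed for a candidate pair of cut-offs, assuming one row per patient with both marker values and the CT result; the OR-positivity rule and the column names are illustrative assumptions, not the vendor's algorithm.

```python
import numpy as np
import pandas as pd

def evaluate_cutoffs(df: pd.DataFrame, gfap_cut: float, uchl1_cut: float) -> dict:
    """Sensitivity/specificity of a dual-marker rule-out test.

    Assumed columns: 'gfap' and 'uchl1' in ng/L, 'ct_positive' as bool.
    A patient is test-positive if either marker is at or above its cut-off.
    """
    test_pos = (df["gfap"] >= gfap_cut) | (df["uchl1"] >= uchl1_cut)
    ct_pos = df["ct_positive"].astype(bool)

    tp = int(np.sum(test_pos & ct_pos))
    fn = int(np.sum(~test_pos & ct_pos))
    tn = int(np.sum(~test_pos & ~ct_pos))
    fp = int(np.sum(test_pos & ~ct_pos))

    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ct_scans_avoided": tn + fn,  # all test-negative patients, for whom CT could be skipped
    }

# Cut-off pairs can then be compared per age group, e.g. the manufacturer's
# 35 / 400 ng/L vs. age-adjusted values for patients older than 65 years.
```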
Important advancements have been made in understanding the pathogenetic mechanisms underlying acute and chronic lung disorders. However, although a wide variety of innovative biomarkers have been and are being investigated, they are not widely employed to evaluate non-neoplastic lung diseases. The current work aims to examine the use of Krebs von den Lungen-6 (KL-6), a mucin-like glycoprotein predominantly expressed on the surface of type II alveolar epithelial cells (AEC2s), to evaluate the stage, response to treatment, and prognosis of patients with non-neoplastic lung disorders. Data analysis suggests that KL-6 can be utilized as an effective diagnostic and prognostic biomarker in individuals with interstitial lung disease and as a predictor of clinical outcomes in subjects with SARS-CoV-2-related pneumonia. Moreover, KL-6 can be reliably used in routine clinical settings to diagnose and predict the outcome of patients with chronic obstructive pulmonary disease (COPD) exacerbation. Optimal cut-off points for the European population should be defined to improve the diagnostic efficacy of KL-6.
Although the concept of bias appears consolidated in laboratory science, some important changes in its definition and management have occurred since the introduction of metrological traceability theory in laboratory medicine. In the traceability era, medical laboratories should rely on manufacturers, who must ensure traceability of their in vitro diagnostic medical devices (IVD-MDs) to the highest available references, providing bias correction during the trueness transfer process to calibrators before they are marketed. However, some bias can still be observed, arising from insufficient correction during traceability implementation. This source of bias can be discovered through surveillance of the IVD-MD by traceability-based external quality assessment and confirmed by ad-hoc validation experiments. The assessment of its significance should be based on its impact on the measurement uncertainty (MU) of results. The IVD manufacturer, appropriately warned, is responsible for undertaking an immediate investigation and, if necessary, fixing the problem with a corrective action. Even if the IVD-MD is correctly aligned in the validation steps and bias components are eliminated, during ordinary use the system may undergo systematic variations such as those caused by recalibrations and lot changes. These sources of randomly occurring bias are incorporated in the estimate of the intermediate reproducibility of the IVD-MD through internal quality control and can be tolerated as long as the estimated MU on clinical samples fulfils the predefined specifications. If the bias becomes significant, the end-user must readjust the IVD-MD to try to correct it. If the bias remains, the IVD manufacturer should be requested to rectify the problem.
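As a concrete illustration of how randomly occurring bias absorbed into intermediate reproducibility feeds into the MU check described above, here is a minimal sketch in the spirit of the ISO/TS 20914 model; the variable names, the k=2 coverage factor and the example numbers are assumptions of this illustration, not prescriptions from the paper.

```python
import math

def expanded_mu(u_rw: float, u_cal: float, k: float = 2.0) -> float:
    """Combine intermediate reproducibility (u_Rw, from long-term IQC data spanning
    recalibrations and reagent lot changes) with the calibrator uncertainty (u_cal)
    stated by the manufacturer, then expand with coverage factor k."""
    return k * math.sqrt(u_rw ** 2 + u_cal ** 2)

def mu_acceptable(u_rw: float, u_cal: float, aps: float, k: float = 2.0) -> bool:
    """True if the expanded MU fulfils the analytical performance specification (APS);
    all quantities in the same units (e.g. % at a given concentration level)."""
    return expanded_mu(u_rw, u_cal, k) <= aps

# Illustrative numbers only: u_Rw = 1.8 %, u_cal = 0.9 %, APS = 4.6 %
print(expanded_mu(1.8, 0.9), mu_acceptable(1.8, 0.9, 4.6))
```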
The 2024 Kidney Disease: Improving Global Outcomes (KDIGO) guidelines for chronic kidney disease (CKD) evaluation and management bring important updates, particularly for European laboratories. These guidelines emphasize the need for harmonization in CKD testing, promoting the use of regional equations. The European Kidney Function Consortium (EKFC) equation is particularly suited to European populations, especially when compared to the CKD-EPI 2021 race-free equation. A significant focus is placed on the combined use of creatinine and cystatin C to estimate the glomerular filtration rate (eGFRcr-cys), improving diagnostic accuracy. In situations where eGFR may be inaccurate or clinically insufficient, the guidelines encourage the use of measured GFR (mGFR) with exogenous markers such as iohexol. The guidelines also emphasize the need to standardize creatinine and cystatin C measurements, ensure traceability to international reference materials, and adopt harmonized reporting practices. The recommendations further highlight the importance of incorporating risk prediction models, such as the Kidney Failure Risk Equation (KFRE), into routine clinical practice to better tailor patient care. This article provides a European perspective on how these KDIGO updates should be implemented in clinical laboratories to enhance CKD diagnosis and management, ensuring consistency across the continent.
Objectives
To provide age- and sex-specific paediatric reference intervals (RIs) for 13 haematological parameters analysed on Sysmex XN-9000 and compare different methods for estimating RIs after indirect sampling.
Methods
Via the Danish Laboratory Information System, we conducted a population-based study. We identified samples from children aged 0–18 years analysed at Aarhus University Hospital from 2019 to 2023, including samples from general practitioners only. Information about all parameters was available for all samples via linkage to the local laboratory middleware. We then applied two different methods. First, we excluded potentially pathological samples by predefined criteria: if the child had other abnormal blood measurements at the date of request, or had a blood sample of any type analysed in the period from two months before to two months after. We estimated RIs stratified by age and sex using the non-parametric percentile method. Second, we used refineR (an open-source automated algorithm) to exclude pathological samples and estimate RIs. Finally, we compared our data to results from a study using the direct method.
Results
We identified 22,786 samples. After exclusion by predefined criteria, the population comprised 10,199 samples from 8,736 children (57 % of samples were from females and median age was 13 years). We estimated RIs for red blood cell, white blood cell and platelet indices. The two different methods showed agreement. Furthermore, our data provided results comparable to direct sampling.
Conclusions
Our study provided age- and sex-specific paediatric RIs for 13 haematology parameters useful for laboratories worldwide. RIs were robust using different methods in the framework of indirect sampling. Finally, our data showed agreement with the direct method, indicating that indirect sampling could be useful for establishing RIs on haematology parameters in the future.
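A minimal sketch of the non-parametric percentile step used after indirect exclusion, assuming a cleaned data set with age, sex and one analyte column; the age bands and column names are illustrative assumptions (the refineR comparison would be run separately in R).

```python
import numpy as np
import pandas as pd

def p2_5(x):
    return np.percentile(x, 2.5)

def p97_5(x):
    return np.percentile(x, 97.5)

def percentile_ris(df: pd.DataFrame, value_col: str,
                   age_bins=(0, 1, 6, 12, 19)) -> pd.DataFrame:
    """Non-parametric 2.5th/97.5th percentile RIs stratified by sex and age band.

    Assumed columns: 'age' in years, 'sex' ('F'/'M'), plus the analyte column.
    """
    out = df.copy()
    out["age_band"] = pd.cut(out["age"], bins=list(age_bins), right=False)
    grouped = out.groupby(["sex", "age_band"], observed=True)[value_col]
    return grouped.agg(n="count", lower=p2_5, upper=p97_5).reset_index()

# Example call (column name is an assumption): percentile_ris(cleaned_samples, "hemoglobin")
```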
The presented guidelines are an update of the position paper on the nomenclature of bone markers, endorsed by the International Osteoporosis Foundation (IOF) and published over two decades ago. Novel insights into bone biology and the pathophysiology of bone disorders have highlighted the increasing relevance of new and known mediators implicated in various aspects of bone metabolism. This updated guideline proposes the nomenclature Bone Status Indices (BSI) as a comprehensive classification to replace the terms bone turnover markers, bone markers and metabolic markers of bone turnover that are currently in use for the implicated molecules. On behalf of the Joint IOF Working Group and IFCC Committee on Bone Metabolism, the authors propose standardized nomenclature, abbreviations and measurement units for the bone status indices.
Objectives
Expanded carrier screening (ECS) is a preventive genetic test that enables couples to know their risk of having a child affected by certain monogenic diseases. This study aimed to evaluate the carrier frequency of rare monogenic diseases in the general Chinese population and the impact of ECS on reproductive decisions and pregnancy outcomes.
Methods
This single-center study was conducted between September 2022 and April 2023. An ECS panel containing 224 recessive genes was offered to 1,499 Chinese couples from the general population who were at early gestational ages or planned to conceive.
Results
Overall, 55.0 % of the individuals were carriers of at least one recessive condition. There were 16 autosomal recessive (AR) genes with a carrier frequency of ≥1/100 and 22 AR genes with a carrier frequency of <1/100 to ≥1/200. The most common AR and X-linked diseases were GJB2-related non-syndromic hearing loss and hemolytic anemia, respectively. Fifty-five couples (3.67 %; 1 in 27.3) were at increased risk of having an affected child, with 19 pregnant at the time of testing. Of these, 10 opted for amniocentesis, and four affected pregnancies were identified, three of which were terminated.
Conclusions
This study not only provides valuable information about the recessive genetic landscape, but also establishes a solid foundation for couple-based ECS in a real clinical setting.
Objectives
CD34+ hematopoietic stem cell (HSC) enumeration, crucial for HSC transplantation, is performed by flow cytometry to guide clinical decisions. Variability in enumeration arises from biological factors, assay components, and technology. External quality assurance schemes (EQAS) train participants to minimize inter-laboratory variation. The goal of this study was to estimate total error (TE) values for CD34 cell enumeration using state-of-the-art (SOTA) methods with EQA data and to define quality specifications by comparing TE using different cutoffs.
Methods
A total of 3,994 results from 40 laboratories were collected over 11 years (2011–2022) as part of the IC-2 Stem Cells Scheme of the GECLID Program that includes absolute numbers of CD34 cells. The data were analyzed in two periods: 2011–2016 and 2017–2022. The TE value achieved by at least 60 %, 70 %, 80 %, and 90 % of laboratories was calculated across the two different periods and at various levels of CD34 cell counts: above 25, 25 to 15, and under 15 cells/μL.
Results
A decrease in the SOTA-based TE for CD34 cell enumeration was observed in the most recent period (2017–2021) compared with 2012–2016. A significant increase in P75 TE values in the low CD34 range (<15 cells/μL) was found (p<0.001).
Conclusions
Technical advancements contribute to the decrease in TE over time. The TE of CD34 cell FC counts is measure-dependent, making it responsive to precision enhancement strategies. The TE measured by EQAS in this study may serve as a quality specification for implementing ISO 15189 standards in clinical laboratories for CD34 cell enumeration.
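As a concrete illustration, the following sketch derives "state-of-the-art" TE limits from EQA returns, assuming each record holds a laboratory result and the assigned value for one survey and concentration level; the percentage-TE definition and the percentile interpretation of "TE achieved by at least X % of laboratories" are assumptions of this example, not necessarily the GECLID scheme's exact algorithm.

```python
import numpy as np
import pandas as pd

def total_error_pct(result: pd.Series, assigned: pd.Series) -> pd.Series:
    """Absolute percentage deviation of each laboratory result from the assigned value."""
    return 100.0 * (result - assigned).abs() / assigned

def sota_te_limits(df: pd.DataFrame, coverages=(60, 70, 80, 90)) -> dict:
    """TE achieved by at least X % of laboratories, taken as the X-th percentile of TE.

    Assumed columns: 'result' and 'assigned' (cells/uL) for one survey and level.
    """
    te = total_error_pct(df["result"], df["assigned"])
    return {f"TE_{c}pct": float(np.percentile(te, c)) for c in coverages}

# The same calculation can be repeated per period (e.g. 2011-2016 vs. 2017-2022)
# and per CD34 concentration stratum (<15, 15-25, >25 cells/uL).
```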
Objectives
This study examined the comparability of venous and capillary blood samples with regard to routine chemistry analytes.
Methods
Venous and capillary blood samples were collected from adult patients to assess the comparability of alanine transaminase, albumin, alkaline phosphatase, apolipoprotein B, aspartate aminotransferase, total bilirubin, calcium, chloride, creatine kinase, creatinine, C-reactive protein, ferritin, folic acid, free T4, gamma-glutamyltransferase, glucose, high density lipoprotein cholesterol, iron, lipase, lipoprotein(a), magnesium, phosphate, potassium, prostate specific antigen, sodium, total cholesterol, total protein, transferrin, triglycerides, thyroid stimulating hormone, urate, urea, vitamin B12 and 25-hydroxyvitamin-D3. Furthermore, the hemolysis-icterus-lipemia index (HIL index) was measured for all samples. All measurements were performed using the Siemens Atellica® CH or IH Analyzer. Deming regression analysis and mean relative differences between venous and capillary measurements of each analyte were compared with the desirable total allowable error (TEa) and the Clinical Laboratory Improvement Amendments (CLIA) 2024 proposed acceptance limits for proficiency testing.
Results
Deming regression and mean relative differences demonstrated excellent comparability between venous and capillary samples for most measured analytes.
Conclusions
Capillary and venous samples showed comparable results for almost all studied chemistry analytes. Of the 33 studied analytes for which TEa criteria were available, 30 met the TEa criteria. CLIA 2024 criteria were available for 29 of the studied analytes, of which only glucose did not meet the criteria. In conclusion, capillary blood draw is a suitable alternative to venous blood sampling for measuring most of the investigated analytes. This benefits patients with a fear of needles and might pave the way for remote self-sampling.
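For illustration, here is a minimal textbook implementation of the kind of Deming regression and mean relative difference used above, assuming paired venous (x) and capillary (y) results and an error-variance ratio of 1 (orthogonal regression) unless specified; it is a generic sketch, not the authors' software.

```python
import numpy as np

def deming_regression(x, y, lam: float = 1.0):
    """Deming regression slope and intercept for paired measurements.

    lam is the ratio of the y-error variance to the x-error variance
    (lam = 1 reduces to orthogonal regression).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sxx = np.sum((x - mx) ** 2)
    syy = np.sum((y - my) ** 2)
    sxy = np.sum((x - mx) * (y - my))
    slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    intercept = my - slope * mx
    return slope, intercept

def mean_relative_difference_pct(x, y):
    """Mean relative difference (%) of capillary (y) vs. venous (x) results,
    to be compared against a TEa or CLIA acceptance limit."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.mean((y - x) / x) * 100.0)
```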
Objectives
To evaluate the efficacy, safety and efficiency performance related to the introduction of innovative traceability platforms and integrated blood collection systems for the improvement of the total testing process, and to assess the economic and organizational sustainability of these innovative technologies.
Methods
A mixed-method approach was utilized. A key performance indicator dashboard was created based on a narrative literature review and expert consensus and was assessed through real-life data collection at the University Hospital of Padova, Italy, comparing three scenarios over time (2013, 2016, 2019) with varying levels of technological integration. The economic and organizational sustainability was determined considering all the activities performed from tube check-in to validation of the results, integrating the management of the most prevalent errors occurring during the process.
Results
The introduction of integrated venous blood collection and full sample traceability systems resulted in significant improvements in laboratory performance. Errors in samples collected in inappropriate tubes decreased by 42 %, mislabelled samples by 47 %, and samples with irregularities by 100 %. Economic analysis revealed a cost saving of 12.7 % per tube, equating to a total saving of 447,263.80 € over a 12-month period. Organizational efficiency improved with a reduction of 13,061.95 h in time spent on sample management, allowing for increased laboratory capacity and throughput.
Conclusions
The results revealed the strategic relevance of introducing integrated venous blood collection and full sample traceability systems within the laboratory setting, with a real-life demonstration of the economic and organizational sustainability of total laboratory automation (TLA), generating an overall improvement in process efficiency.
Male infertility has become an important issue of global concern. Semen analysis is the cornerstone of male fertility assessment. External quality assessment (EQA) of sperm concentration, motility, and morphology is widely recognized around the world. However, over the past 34 years, the implementation of EQA for semen analysis has varied across countries, and there is no global consensus. The goal of this paper is first to explore the overall development of EQA during this period, second to discuss the extent of differences between participating laboratories in different countries, and finally to examine the differences in EQA programs developed by various EQA providers in order to seek a global standard. In total, 29 papers met the inclusion criteria and were included in this review. The implementation of EQA is inconsistent across countries, and policies for EQA of semen analysis vary from country to country: some countries mandate laboratory participation, while others permit voluntary involvement. Different EQA organizers choose different ways to calculate assigned values and acceptance limits. The coefficient of variation (CV) for each EQA item was large: the CVs of concentration, motility, morphology, and viability were 12.7–138.0 %, 17.0–127.0 %, 7–375 %, and 6–41.1 %, respectively. The results of semen analysis varied considerably among the participating laboratories. The collaborative efforts of national policymakers, EQA organizers, and all participating laboratories are essential to improving the current situation.
Objectives
Early rheumatoid arthritis (RA) detection is crucial for improving patient prognosis. Anti-cyclic citrullinated peptide antibodies (anti-CCP) and rheumatoid factors (RF) support RA diagnosis but are undetectable in ∼20 % of cases. Recently, antibodies against mutated citrullinated vimentin (anti-MCV) and detection of 14-3-3 eta have emerged, with implications for preclinical RA diagnosis and treatment monitoring. The objective of this study was to assess the clinical performance of anti-MCV antibodies and 14-3-3 eta in RA and to compare it to the current RA criteria markers anti-CCP and RF, individually and in combination.
Methods
A retrospective chart review of 326 subjects submitted for RA serology testing identified 134 RA-positive individuals and 192 RA-negative disease controls. Fifty healthy control specimens were also included. The performance of anti-MCV and 14-3-3 eta, alone and combined with CCP3.1 and RF, was assessed.
Results
Anti-MCV had a sensitivity of 71 % and a specificity of 92 %. 14-3-3 eta had a sensitivity of 43 % and a specificity of 90 %. In comparison, CCP3.1 and RF displayed sensitivities of 79 % and 84 % and specificities of 92 % and 61 %, respectively. ROC curve analysis demonstrated that CCP3.1 and anti-MCV had superior diagnostic performance compared to RF and 14-3-3 eta. In our cohort, anti-MCV and 14-3-3 eta failed to identify seronegative RA patients. Different combinations of double antibody positivity increased specificity at the cost of reduced sensitivity.
Conclusions
Individually, 14-3-3 eta, anti-MCV and CCP3.1 assays had ≥90 % specificity in diagnosed RA patients, with better sensitivities for anti-MCV and CCP3.1 than 14-3-3 eta. Overall diagnostic performance of anti-MCV was similar to CCP3.1 and RF, all of which outperformed 14-3-3 eta in our cohort.
Objectives
As high-sensitivity cardiac troponin T (hs-cTnT) is making the transition from diagnostic to prognostic use, a long-term stability study of 5th generation hs-cTnT according to EFLM CRESS recommendations was set up for investigation of frozen clinical specimens (two matrices).
Methods
Study samples collected in serum tubes and lithium heparin tubes with gel from patients admitted for suspected minor myocardial damage were measured directly after completion of the study (0 years) and after 3 and 6 years of storage at −80 °C, and the recovery of hs-cTnT concentrations after long-term storage (% of the 0-year hs-cTnT concentration) was calculated. Hs-cTnT changes were also compared to decisive delta changes, such as those proposed in the ESC NSTEMI 0 h/1 h algorithm (<3 or >5 ng/L for ruling out and ruling in suspected NSTEMI patients).
Results
Eighty-six patients were included in the study; in 28 of them, both lithium heparin plasma and serum samples were collected simultaneously, whereas in the others only serum (n=30) or plasma (n=28) was collected. Multiple aliquots per patient were made, so that 479 serum and 473 plasma samples were available for analysis. Across the overall hs-cTnT measuring range, median recovery after 6 years was 105.4 % and 106.2 % for serum and plasma, respectively. Based on the decisive delta changes, serum showed consistent results upon long-term storage (at most 0.8 % of samples above the delta threshold of >5 ng/L) as compared to heparin plasma (up to 19.2 % of samples above the threshold).
Conclusions
Over 6 years of storage at −80 °C, recovery of hs-cTnT in serum and heparin plasma was similar and within common lot-to-lot variation. Yet, when evaluating absolute delta increments around hs-cTnT clinical decision points, long-term stored sera displayed better clinical performance compared to heparin plasma samples.
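A minimal sketch of the recovery and delta-threshold evaluation described above, assuming paired baseline (0-year) and post-storage hs-cTnT concentrations in ng/L per aliquot; the 5 ng/L threshold mirrors the rule-in delta cited in the abstract, while the function and column names are illustrative.

```python
import numpy as np
import pandas as pd

def storage_stability(df: pd.DataFrame, delta_limit: float = 5.0) -> dict:
    """Recovery (%) after storage and the share of samples whose absolute change
    exceeds a clinically decisive delta.

    Assumed columns: 'baseline' (0-year) and 'stored' (3- or 6-year) hs-cTnT in ng/L.
    """
    recovery = 100.0 * df["stored"] / df["baseline"]
    delta = (df["stored"] - df["baseline"]).abs()
    return {
        "median_recovery_pct": float(np.median(recovery)),
        "pct_above_delta": float(100.0 * np.mean(delta > delta_limit)),
    }

# Run separately for serum and lithium heparin plasma aliquots to compare matrices.
```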