The incidence of postoperative periprosthetic femoral fracture following total hip replacement: An analysis of UK National Joint Registry and Hospital Episode Statistics data (October 2024)
Published by PLOS
Online ISSN: 1549-1676
Print ISSN: 1549-1277
Disciplines: General Medicine
Most read in the past 30 days:
- Effects of a self-guided digital mental health self-help intervention for Syrian refugees in Egypt: A pragmatic randomized controlled trial (September 2024)
- Rare but elevated incidence of hematological malignancy after clozapine use in schizophrenia: A population cohort study (December 2024)
- The cost-effectiveness of preventing, diagnosing, and treating postpartum haemorrhage: A systematic review of economic evaluations (September 2024)
- Identification and outcomes of acute kidney disease in patients presenting in Bolivia, Brazil, South Africa, and Nepal (November 2024)
An influential venue for research and commentary on the major challenges to human health worldwide, PLOS Medicine publishes articles of general interest on biomedical, environmental, social and political determinants of health. The journal emphasizes work that advances clinical practice, health policy or pathophysiological understanding to benefit health in a variety of settings.
December 2024
Background: The impact of light exposure on mental health is increasingly recognised. Modifying inpatient evening light exposure may be a low-intensity intervention for mental disorders, but few randomised controlled trials (RCTs) exist. We report a large-scale pragmatic effectiveness RCT exploring whether individuals with acute psychiatric illnesses experience additional benefits from admission to an inpatient ward where changes in the evening light exposure are integrated into the therapeutic environment.

Methods and findings: From 10/25/2018 to 03/29/2019, and 10/01/2019 to 11/15/2019, all adults (≥18 years of age) admitted for acute inpatient psychiatric care in Trondheim, Norway, were randomly allocated to a ward with a blue-depleted evening light environment or a ward with a standard light environment. Baseline and outcome data for individuals who provided deferred informed consent were used. The primary outcome measure was the mean duration of admission in days per individual. Secondary outcomes were estimated mean differences in key clinical outcomes: improvement during admission (Clinical Global Impressions Scale-Improvement, CGI-I) and illness severity at discharge (CGI-S), aggressive behaviour during admission (Brøset Violence Checklist, BVC), violent incidents (Staff Observation Aggression Scale-Revised, SOAS-R), side effects and patient satisfaction, and the probabilities of suicidality, need for supervision due to suicidality, and change from involuntary to voluntary admission. The intention-to-treat sample comprised 476 individuals (mean age 37 years (standard deviation (SD) 13.3); 193 (41%) were male, 283 (59%) were female). There was no difference in the mean duration of admission (7.1 days for inpatients exposed to the blue-depleted evening light environment versus 6.7 days for patients exposed to the standard evening light environment; estimated mean difference: 0.4 days (95% confidence interval (CI) [−0.9, 1.9]; p = 0.523)). Inpatients exposed to the blue-depleted evening light showed greater improvement during admission (CGI-I difference 0.28 (95% CI [0.02, 0.54]; p = 0.035), number needed to treat (NNT) for clinically meaningful improvement: 12); lower illness severity at discharge (CGI-S difference −0.18 (95% CI [−0.34, −0.02]; p = 0.029), NNT for mild severity at discharge: 7); and lower levels of aggressive behaviour (difference in BVC-predicted serious events per 100 days: −2.98 (95% CI [−4.98, −0.99]; p = 0.003), NNT: 9). There were no differences in other secondary outcomes. The nature of this study meant it was impossible to blind patients or clinical staff to the lighting condition.

Conclusions: Modifying the evening light environment in acute psychiatric hospitals according to chronobiological principles does not change the duration of admissions but can have clinically significant benefits without increasing side effects, reducing patient satisfaction, or requiring additional clinical staff.

Trial registration: ClinicalTrials.gov NCT03788993; 2018 (CRISTIN ID 602154).
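The NNT figures quoted above are the reciprocal of an absolute difference in outcome probability between arms. A minimal sketch of that arithmetic, using illustrative response rates (hypothetical values, not the trial's raw counts):

```python
import math

def nnt(p_treated: float, p_control: float) -> int:
    """Number needed to treat: reciprocal of the absolute difference in
    outcome probability, rounded up to a whole patient (the usual convention)."""
    arr = abs(p_treated - p_control)
    if arr == 0:
        raise ValueError("zero risk difference: NNT is undefined")
    return math.ceil(1.0 / arr)

# Illustrative only: if 52% of treated vs 43% of control patients reach a
# clinically meaningful improvement, NNT = ceil(1 / 0.09) = 12 -- the same
# order as the CGI-I result reported above.
print(nnt(0.52, 0.43))  # 12
```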
December 2024
Background: Clozapine is widely regarded as a highly efficacious psychotropic drug that is largely underused worldwide. Recent disproportionality analyses and nationwide case-control studies suggested a potential association between clozapine use and hematological malignancy (HM). Nevertheless, the absolute rate difference is not well established owing to the absence of analytic cohort studies, and the clinical significance of such a potential risk remains unclear.

Methods and findings: We extracted data from a territory-wide public healthcare database from January 2001 to August 2022 in Hong Kong to conduct a retrospective cohort study of anonymized patients aged ≥18 years with a diagnosis of schizophrenia who used clozapine or olanzapine (a drug comparator with a highly similar chemical structure and pharmacological mechanism) for ≥90 days, with at least 2 prior records of other antipsychotic use in both groups. With inverse probability of treatment weighting (IPTW) based on propensity scores, Poisson regression was used to estimate the incidence rate ratio (IRR) of HM between clozapine and olanzapine users; the absolute rate difference was also estimated. In total, 9,965 patients with a median follow-up of 6.99 years (25th to 75th percentile: 4.45 to 10.32 years) were included, of whom 834 were clozapine users. After IPTW, the demographic and clinical characteristics of clozapine users were comparable to those of olanzapine users. Clozapine users had a significant weighted IRR of 2.22 (95% confidence interval (CI) [1.52, 3.34]; p < 0.001) for HM compared to olanzapine users. The absolute rate difference was estimated at 57.40 (95% CI [33.24, 81.55]) per 100,000 person-years. Findings were consistent across subgroups by age and sex. Sensitivity analyses all supported the robustness of the results and showed good specificity to HM but not other cancers. The main limitation of this observational study is the potential residual confounding that could have arisen from the lack of randomization to clozapine or olanzapine.

Conclusions: The absolute rate difference in HM incidence associated with clozapine is small despite a 2-fold elevated rate. Given the rarity of HM and existing blood monitoring requirements, more restrictive indications for clozapine or special warnings may not be necessary.
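The distinction the conclusion draws between a 2-fold relative rate and a small absolute difference is plain arithmetic on event counts and person-time. A sketch with hypothetical counts (chosen only to illustrate the calculation, not taken from the study's data):

```python
def incidence_rate(events: int, person_years: float) -> float:
    """Events per person-year of follow-up."""
    return events / person_years

# Hypothetical counts for illustration:
r_cloz = incidence_rate(9, 7_500)     # clozapine arm
r_olz = incidence_rate(39, 65_000)    # olanzapine arm

irr = r_cloz / r_olz                          # incidence rate ratio
diff_per_100k = (r_cloz - r_olz) * 100_000    # absolute difference per 100,000 py

# A doubled rate can still be a small absolute excess when the outcome is rare.
print(round(irr, 2), round(diff_per_100k, 1))
```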
December 2024
Background: Psychiatric patients experience lower life expectancy than the general population. Conditional cash transfer programmes (CCTPs) have shown promise in reducing mortality rates, but their impact on psychiatric patients has been unclear. This study tests the association between being a Brazilian Bolsa Família Programme (BFP) recipient and the risk of mortality among people previously hospitalised with any psychiatric disorder.

Methods and findings: This cohort study utilised Brazilian administrative datasets, linking social and health system data from the 100 Million Brazilian Cohort, a population-representative study. We followed individuals who applied for BFP following a single hospitalisation with a psychiatric disorder between 2008 and 2015. The outcome was mortality and its specific causes, defined according to the International Classification of Diseases, 10th Revision (ICD-10). Cox proportional hazards models estimated the hazard ratio (HR) for overall mortality, and competing risks models estimated the HRs for specific causes of death, both associated with being a BFP recipient, adjusted for confounders and weighted with a propensity score. We included 69,901 psychiatric patients aged between 10 and 120, the majority male (60.5%); 26,556 (37.99%) received BFP following hospitalisation. BFP was associated with reduced overall mortality (HR 0.93, 95% CI 0.87, 0.98, p = 0.018) and mortality due to natural causes (HR 0.89, 95% CI 0.83, 0.96, p < 0.001). A reduction in suicide (HR 0.90, 95% CI 0.68, 1.21, p = 0.514) was observed, although it was not statistically significant. The BFP's effects on overall mortality were more pronounced in females and younger individuals. In addition, 4% of deaths could have been prevented if BFP had been present (population attributable fraction (PAF) = 4%, 95% CI 0.06, 7.10).

Conclusions: BFP appears to reduce mortality rates among psychiatric patients. While not designed to address the elevated mortality risk in this population, this study highlights the potential for poverty alleviation programmes to mitigate mortality in one of the highest-risk population subgroups.
December 2024
Background: Delays in breast cancer diagnosis and treatment lead to worse survival and quality of life. Racial disparities in care timeliness have been reported, but few studies have examined access at multiple points along the care continuum (diagnosis, treatment initiation, treatment duration, and genomic testing).

Methods and findings: The Carolina Breast Cancer Study (CBCS) Phase 3 is a population-based, case-only cohort (n = 2,998, 50% Black) of patients with invasive breast cancer diagnoses (2008 to 2013). We used latent class analysis (LCA) to group participants based on patterns of factors within 3 separate domains: socioeconomic status ("SES"), "care barriers," and "care use." These classes were evaluated in association with delayed diagnosis (approximated by stage III-IV at diagnosis), delayed treatment initiation (more than 30 days between diagnosis and first treatment), prolonged treatment duration (time between first and last treatment, by treatment modality), and receipt of OncotypeDx genomic testing (evaluated among patients with early stage, ER+ (estrogen receptor-positive), HER2- (human epidermal growth factor receptor 2-negative) disease). Associations were evaluated using adjusted linear-risk regression to estimate relative frequency differences (RFDs) with 95% confidence intervals (CIs). Delayed diagnosis models were adjusted for age; delayed and prolonged treatment models were adjusted for age and tumor size, stage, and grade at diagnosis; and OncotypeDx models were adjusted for age and tumor size and grade. Overall, 18% of CBCS participants had late stage/delayed diagnosis, 35% had delayed treatment initiation, 48% had prolonged treatment duration, and 62% were not OncotypeDx tested. Black women had higher prevalence for each outcome. We identified 3 latent classes for SES ("high SES," "moderate SES," and "low SES"), 2 classes for care barriers ("few barriers," "more barriers"), and 5 classes for care use ("short travel/high preventive care," "short travel/low preventive care," "medium travel," "variable travel," and "long travel"), in which travel is defined by estimated road driving time. Low SES and more barriers to care were associated with greater frequency of delayed diagnosis (adjusted RFD = 5.5%, 95% CI [2.4, 8.5]; adjusted RFD = 6.7%, 95% CI [2.8, 10.7], respectively) and prolonged treatment (adjusted RFD = 9.7%, 95% CI [4.8, 14.6]; adjusted RFD = 7.3%, 95% CI [2.4, 12.2], respectively). Variable travel (short travel to diagnosis but long travel to surgery) was associated with delayed treatment in the entire study population (adjusted RFD = 10.7%, 95% CI [2.7, 18.8]) compared to the short travel/high preventive care referent group. Long travel to both diagnosis and surgery was associated with delayed treatment only among Black women. The main limitations of this work were the inability to make inferences about causal effects of the individual variables that formed the latent classes, reliance on self-reported socioeconomic and healthcare history information, and limited generalizability outside of North Carolina, United States of America.

Conclusions: Black patients face more frequent delays throughout the care continuum, likely stemming from different types of access barriers at key junctures. Improving breast cancer care access will require intervention on multiple aspects of SES and healthcare access.
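The RFDs reported above are absolute differences in outcome prevalence between a latent class and its referent, expressed in percentage points. A minimal sketch of that quantity; the prevalences below are hypothetical, chosen only to reproduce the reported 5.5-point figure (the study's adjusted estimates come from a regression model):

```python
def rfd_points(prev_class: float, prev_referent: float) -> float:
    """Relative frequency difference: difference in outcome prevalence
    between a latent class and the referent, in percentage points."""
    return round(100 * (prev_class - prev_referent), 1)

# Hypothetical: 21.5% delayed diagnosis in a low-SES class vs 16.0% in the
# high-SES referent gives an RFD of 5.5 percentage points.
print(rfd_points(0.215, 0.160))  # 5.5
```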
November 2024
Background: This study estimated the extent to which the number of measurements of cardiometabolic risk factors (e.g., blood pressure, cholesterol, glycated haemoglobin) was impacted by the COVID-19 pandemic and whether measurement rates have recovered to expected levels.

Methods and findings: A cohort of individuals aged ≥18 years in England with records in the primary care COVID-19 General Practice Extraction Service Data for Pandemic Planning and Research (GDPPR) dataset was identified. Their records of 12 risk factor measurements were extracted between November 2018 and March 2024. The number of measurements per 1,000 individuals was calculated by age group, sex, ethnicity, and area deprivation quintile. The observed number of measurements was compared to a composite expectation band, derived as the union of the 95% confidence intervals of 2 estimates: (1) a projected trend based on data prior to the COVID-19 pandemic; and (2) an assumed stable trend from before the pandemic. Point estimates were calculated as the mid-point of the expectation band. A cohort of 49,303,410 individuals aged ≥18 years was included. There was a sharp drop in all measurements from March 2020 to February 2022, but overall they recovered to expected levels during March 2022 to February 2023, except for blood pressure, which had a prolonged recovery. From March 2023 to March 2024, blood pressure measurements were below expectation by 16% (−19 per 1,000) overall, and in people aged 18 to 39 (−23%; −18 per 1,000), 60 to 79 (−17%; −27 per 1,000), and ≥80 (−31%; −57 per 1,000). There was a suggestion that recovery in blood pressure measurements was socioeconomically patterned: the second most deprived quintile had the largest deviation from expectation (−20%; −23 per 1,000) compared to the least deprived quintile (−13%; −15 per 1,000).

Conclusions: There was a substantial reduction in routine measurements of cardiometabolic risk factors following the COVID-19 pandemic, with variable recovery. The implications for missed diagnoses, worse prognosis, and health inequality are a concern.
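The composite expectation band described in the methods is the union of two interval estimates (here taken as their interval hull), with the point estimate at the band's midpoint. A minimal sketch of that construction; the interval values are hypothetical monthly rates per 1,000 adults:

```python
def composite_band(ci_projected: tuple, ci_stable: tuple) -> tuple:
    """Composite expectation band: hull of two 95% CIs (projected pre-pandemic
    trend and assumed stable trend); the point estimate is the midpoint."""
    lo = min(ci_projected[0], ci_stable[0])
    hi = max(ci_projected[1], ci_stable[1])
    return lo, hi, (lo + hi) / 2

# Hypothetical CIs for one month's expected measurement rate per 1,000:
band = composite_band((92.0, 108.0), (97.0, 121.0))
print(band)  # (92.0, 121.0, 106.5)
```

An observed rate falling below `band[0]` would then count as a deficit relative to expectation, as in the blood pressure results above.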
November 2024
Background: Chronic or gestational hypertension complicates approximately 7% of pregnancies, half of which reach 37 weeks' gestation. Early term birth (at 37 to 38 weeks) may reduce maternal complications, cesareans, stillbirths, and costs, but may increase neonatal morbidity. In the WILL Trial (When to Induce Labour to Limit risk in pregnancy hypertension), we aimed to establish the optimal timing of birth for women with chronic or gestational hypertension who reach term and remain well.

Methods and findings: This 50-centre, open-label, randomised trial in the United Kingdom included an economic analysis. WILL randomised women with chronic or gestational hypertension at 36 to 37 weeks and a singleton fetus, who provided documented informed consent, to "planned early term birth at 38+0-3 weeks" (intervention) or "usual care at term" (control). The coprimary outcomes were "poor maternal outcome" (a composite of severe hypertension, maternal death, or maternal morbidity; superiority hypothesis) and "neonatal care unit admission for ≥4 hours" (noninferiority hypothesis). The key secondary outcome was cesarean birth. Follow-up was to 6 weeks postpartum. The planned sample size was 540 per group, and analysis was by intention-to-treat. A total of 403 participants (37.3% of target) were randomised to the intervention (n = 201) or control group (n = 202) from 3 June 2019 to 19 December 2022, when the funder stopped the trial for delayed recruitment. In the intervention (versus control) group, losses to follow-up were 18/201 (9%) versus 15/202 (7%). In each group, maternal age was about 30 years, about one-fifth of women were from ethnic minorities, over half had obesity, approximately half had chronic hypertension, and most were on antihypertensives with normal blood pressure. In the intervention (versus control) group, birth was a median of 0.9 weeks earlier (38.4 [38.3 to 38.6] versus 39.3 [38.7 to 39.9] weeks). There was no evidence of a difference in "poor maternal outcome" (27/201 [13%] versus 24/202 [12%], respectively; adjusted risk ratio [aRR] 1.16, 95% confidence interval [CI] 0.72 to 1.87). For "neonatal care unit admission for ≥4 hours," the intervention was considered noninferior to the control because the upper bound of the adjusted risk difference (aRD) 95% CI did not cross the 8% prespecified noninferiority margin (14/201 [7%] versus 14/202 [7%], respectively; aRD 0.003, 95% CI −0.05 to +0.06), although event rates were lower than estimated. The intervention (versus control) was associated with no difference in cesarean (58/201 [29%] versus 72/202 [36%], respectively; aRR 0.81, 95% CI 0.61 to 1.08). There were no serious adverse events. Limitations include the smaller-than-planned sample size and lower-than-anticipated event rates, so the findings may not be generalisable to settings where hypertension is not treated with antihypertensive therapy.

Conclusions: In this study, we observed that most women with chronic or gestational hypertension required labour induction, and planned birth at 38+0-3 weeks (versus usual care) resulted in birth an average of 6 days earlier, with no differences in poor maternal outcome or neonatal morbidity. Our findings provide reassurance about planned birth at 38+0-3 weeks as a clinical option for these women.

Trial registration: isrctn.com ISRCTN77258279.
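The noninferiority conclusion rests on a single comparison: the upper bound of the adjusted risk difference CI against the prespecified margin. A minimal sketch of that check, using the figures reported above:

```python
def noninferior(ard_ci_upper: float, margin: float) -> bool:
    """Noninferiority is declared when the upper bound of the CI for the
    adjusted risk difference stays below the prespecified margin."""
    return ard_ci_upper < margin

# From the trial: aRD 0.003, 95% CI (-0.05, +0.06); margin = 0.08 (8%).
print(noninferior(0.06, 0.08))  # True
```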
November 2024
Background: Findings from the Kronos Early Estrogen Prevention Study (KEEPS)-Cog trial suggested no cognitive benefit or harm after 48 months of menopausal hormone therapy (mHT) initiated within 3 years of the final menstrual period. To clarify the long-term effects of mHT initiated in early postmenopause, the observational KEEPS Continuation Study reevaluated cognition, mood, and neuroimaging effects in participants enrolled in the KEEPS-Cog and its parent study, the KEEPS, approximately 10 years after trial completion. We hypothesized that women randomized to transdermal estradiol (tE2) during early postmenopause would show cognitive benefits, while those randomized to oral conjugated equine estrogens (oCEE) would show no effect, compared to placebo over the 10 years following randomization in the KEEPS trial.

Methods and findings: The KEEPS-Cog (2005-2008) was an ancillary study to the KEEPS (NCT00154180), in which participants were randomized into 3 groups: oCEE (Premarin, 0.45 mg/d) or tE2 (Climara, 50 μg/d), both with micronized progesterone (Prometrium, 200 mg/d for 12 d/mo), or placebo pills and patch, for 48 months. The KEEPS Continuation (2017-2022), an observational, longitudinal cohort study of the KEEPS clinical trial, involved recontacting KEEPS participants approximately 10 years after completion of the 4-year clinical trial to attend in-person research visits. Seven of the original 9 sites participated in the KEEPS Continuation; 622 of the original 727 women were invited to return for a visit, and 299 enrolled across the 7 sites. KEEPS Continuation participants repeated the original KEEPS-Cog test battery, which was analyzed using 4 cognitive factor scores and a global cognitive score. Cognitive data from both KEEPS and KEEPS Continuation were available for 275 participants. Latent growth models (LGMs) assessed whether baseline cognition and cognitive changes during KEEPS predicted cognitive performance at follow-up, and whether mHT randomization modified these relationships, adjusting for covariates. Similar health characteristics were observed at KEEPS randomization for KEEPS Continuation participants and nonparticipants (i.e., women not returning for the KEEPS Continuation). The LGM revealed significant associations between intercepts and slopes for cognitive performance across almost all domains, indicating that cognitive factor scores changed over time. Tests assessing the effects of mHT allocation on cognitive slopes during the KEEPS, and across all years of follow-up including the KEEPS Continuation visit, were all statistically nonsignificant. The KEEPS Continuation study found no long-term cognitive effects of mHT, with baseline cognition and changes during KEEPS being the strongest predictors of later performance. Cross-sectional comparisons confirmed that participants assigned to mHT in KEEPS (oCEE and tE2 groups) performed similarly on cognitive measures to those randomized to placebo, approximately 10 years after completion of the randomized treatments. These findings suggest that mHT poses no long-term cognitive harm; conversely, it provides no cognitive benefit or protective effect against cognitive decline.

Conclusions: In these KEEPS Continuation analyses, there were no long-term cognitive effects of short-term exposure to mHT started in early menopause versus placebo. These data provide reassurance about the long-term neurocognitive safety of mHT for symptom management in healthy, recently postmenopausal women, while also suggesting that mHT does not improve or preserve cognitive function in this population.
November 2024
Background: Potentially inappropriate medication (PIM) is associated with negative health outcomes and can serve as an indicator of treatment quality. Previous studies have identified social inequality in treatment but have often relied on narrow understandings of social position or failed to account for mediation by differential disease risk among social groups. Understanding how social position influences PIM exposure is crucial for improving the targeting of treatment quality and addressing health disparities. This study investigates the association between social position and PIM, considering the mediating effect of long-term conditions.

Methods and findings: This cross-sectional study utilized data from the 2017 Danish National Health Survey, including 177,495 individuals aged 18 or older. Data were linked to national registers at the individual level. PIM was defined using the STOPP/START criteria, and social position was assessed through indicators of economic, cultural, and social capital (from Bourdieu's capital theory). We analyzed odds ratios (ORs) and prevalence proportion differences (PPDs) for PIM using logistic regression, negative binomial regression, and generalized structural equation modeling. The models were adjusted for age and sex and analyzed separately for indicators of undertreatment (START) and overtreatment (STOPP). A mediation analysis was conducted to separate direct and indirect effects via long-term conditions. Overall, 14.7% of participants were exposed to one or more PIMs, with START PIMs being more prevalent (12.5%) than STOPP PIMs (3.1%). All social position variables except health education were associated with PIM in a dose-response pattern. Individuals with lower wealth (OR: 1.85 [95% CI 1.77, 1.94]), lower income (OR: 1.78 [95% CI 1.69, 1.87]), and lower education level (OR: 1.66 [95% CI 1.56, 1.76]) exhibited the strongest associations with PIM. Similar associations were observed for immigrants, people with low social support, and people with limited social networks. The association with PIM remained significant for most variables after accounting for mediation by long-term conditions. The disparities were predominantly related to overtreatment and did not relate to the number of PIMs. The study's main limitation is the risk of reverse causation due to the complex nature of social position and medical treatment.

Conclusions: The findings highlight significant social inequalities in PIM exposure, driven by economic, cultural, and social capital, despite a universal healthcare system. Understanding the social determinants of PIM can inform policies to reduce inappropriate medication use and improve healthcare quality and equity.
November 2024
Ultra-processed food consumption has increased worldwide, but associations with cancer risk remain unclear and potential underlying mechanisms are speculative. A robust, multidisciplinary, research agenda is needed to address current research limitations and gaps.
November 2024
Background: The International Society of Nephrology proposes an acute kidney disease (AKD) management strategy that includes a risk score to aid AKD identification in low- and low-middle-income countries (LLMICs). We investigated the performance of the risk score and determined kidney and patient outcomes from AKD at multiple LLMIC sites.

Methods and findings: Adult patients presenting to healthcare facilities in Bolivia, Brazil, South Africa, and Nepal were screened using a symptom-based risk score and clinical judgment. Those at risk of AKD underwent serum creatinine testing, predominantly with a point-of-care (POC) device. Clinical data were collected prospectively between September 2018 and November 2020. We analyzed risk score performance and determined AKD outcomes at discharge and over 90 days of follow-up. A total of 4,311 patients were at increased risk of AKD, and 2,922 (67.8%) had AKD confirmed. AKD prevalence was 80.2% in patients enrolled based on the risk score and 32.5% in those enrolled on clinical judgment alone (p < 0.0001). The area under the receiver operating characteristic curve for the risk score to detect AKD was 0.73. Death during admission occurred in 84 (2.9%) patients with AKD and 3 (0.2%) patients without kidney disease (p < 0.0001). Death after discharge occurred in 206 (9.7%) AKD patients. Of the 1,865 AKD patients who underwent reassessment of kidney function after discharge, 902 (48.4%) had persistent kidney disease, including 740 (39.7%) reclassified with de novo or previously undiagnosed chronic kidney disease (CKD). The study was pragmatically designed to assess outcomes as part of routine healthcare, and there was heterogeneity in clinical practice and outcomes between sites, in addition to selection bias during cohort identification.

Conclusions: The use of a risk score can aid AKD identification in LLMICs. High rates of persistent kidney disease and mortality after discharge highlight the importance of AKD follow-up in low-resource settings.
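The reported AUROC of 0.73 can be read as the probability that a randomly chosen AKD case received a higher risk score than a randomly chosen non-case. A pure-Python sketch of that rank-based (Mann-Whitney) interpretation, on toy scores rather than the study's data:

```python
def auc(scores_cases: list, scores_noncases: list) -> float:
    """AUROC via the Mann-Whitney statistic: fraction of case/non-case
    pairs in which the case scores higher (ties count half)."""
    wins = 0.0
    for c in scores_cases:
        for n in scores_noncases:
            if c > n:
                wins += 1.0
            elif c == n:
                wins += 0.5
    return wins / (len(scores_cases) * len(scores_noncases))

# Toy risk scores for illustration:
print(auc([3, 4, 5], [1, 2, 3]))
```

This O(n*m) form is fine for a sketch; production code would use a sorted-rank formulation or a library routine.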
November 2024
Background: The risk of re-operation, otherwise known as revision, following primary hip replacement depends in part on the prosthesis implant materials used. Current performance evidence is based on a broad categorisation grouping together different materials with potentially varying revision risks. We investigated the revision rate of primary total hip replacement (THR) reported in the National Joint Registry by the specific types of bearing surface used.

Methods and findings: We analysed THR procedures across all orthopaedic units in England and Wales. All patients who received a primary THR between 2003 and 2019 in the public and private sectors were included. We investigated the all-cause and indication-specific risks of revision using flexible parametric survival analyses to estimate adjusted hazard ratios (HRs). We identified primary THRs with heads and monobloc cups, or modular acetabular component THRs with head and shell/liner combinations. A total of 1,026,481 primary THRs were analysed (monobloc: n = 378,979; modular: n = 647,502), with 20,869 (2%) of these primary THRs subsequently undergoing a revision episode (monobloc: n = 7,381; modular: n = 13,488). For monobloc implants, compared to implants with a cobalt chrome head and highly crosslinked polyethylene (HCLPE) cup, the all-cause risk of revision was higher for patients with a cobalt chrome (hazard ratio at 10 years after surgery: 1.28, 95% confidence interval [1.10, 1.48]) or stainless steel head (1.18 [1.02, 1.36]) and non-HCLPE cup. The risk of revision was lower for patients with a delta ceramic head and HCLPE cup implant at any postoperative period (1.18 [1.02, 1.36]). For modular implants, compared to patients with a cobalt chrome head and HCLPE liner primary THR, the all-cause risk of revision varied non-constantly. THRs with a delta ceramic (0.79 [0.73, 0.85]) or oxidised zirconium (0.65 [0.55, 0.77]) head and HCLPE liner had a lower risk of revision throughout the entire postoperative period. Similar results were found when investigating the indication-specific risks of revision for both the monobloc and modular acetabular implants. While this large, nonselective analysis is the first to adjust for numerous characteristics collected in the registry, residual confounding cannot be ruled out.

Conclusions: Prosthesis revision is influenced by the prosthesis materials used in the primary procedure, with the lowest risk for implants with a delta ceramic or oxidised zirconium head and an HCLPE liner/cup. Further work is required to determine the association of implant bearing materials with the risk of rehospitalisation, re-operation other than revision, mortality, and the cost-effectiveness of these materials.
November 2024
Background: Community active case finding (ACF) for tuberculosis was widely implemented in Europe and North America between 1940 and 1970, when incidence was comparable to that in many present-day high-burden countries. Using an interrupted time series analysis, we analysed the effect of the 1957 Glasgow mass chest X-ray campaign to inform contemporary approaches to screening.

Methods and findings: Case notifications for 1950 to 1963 were extracted from public health records and linked to demographic data. We fitted Bayesian multilevel regression models to estimate annual relative case notification rates (CNRs) during and after a mass screening intervention implemented over 5 weeks in 1957, compared to the counterfactual scenario in which the intervention had not occurred. We additionally estimated case detection ratios and incidence. From 11 March 1957 to 12 April 1957, 714,915 people (622,349 of 819,301 [76.0%] resident adults ≥15 years) were screened with miniature chest X-ray; 2,369 (0.4%) were diagnosed with tuberculosis. Pre-intervention (1950 to 1956), pulmonary CNRs were declining at 2.3% per year from a CNR of 222/100,000 in 1950. With the intervention in 1957, there was a doubling of the pulmonary CNR (RR: 1.95, 95% uncertainty interval [UI] [1.81, 2.11]) and a 35% decline in the year after (RR: 0.65, 95% UI [0.59, 0.71]). Post-intervention (1958 to 1963), annual rates of decline (5.4% per year) were greater (RR: 0.77, 95% UI [0.69, 0.85]), and an estimated 4,599 (95% UI [3,641, 5,683]) pulmonary case notifications were averted due to the intervention. Effects were consistent across all city wards, and notifications declined in young children (0 to 5 years) with the intervention. Limitations include the lack of data in historical reports on microbiological testing for tuberculosis, and uncertainty about the contributory effects of other contemporaneous interventions, including slum clearances, the introduction of BCG vaccination programmes, and the end of postwar food rationing.

Conclusions: A single, rapid round of mass screening with chest X-ray (probably the largest ever conducted) likely resulted in a major and sustained reduction in tuberculosis case notifications. Synthesis of evidence from other historical tuberculosis screening programmes is needed to confirm the findings from Glasgow and to provide insights into ongoing efforts to implement ACF interventions successfully in today's high tuberculosis burden countries with new screening tools and technologies.
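The counterfactual in an interrupted time series is essentially an extrapolation of the pre-intervention trend. A simplified sketch (a constant relative decline, not the study's Bayesian multilevel model) using the figures reported above, a CNR of 222/100,000 in 1950 declining at 2.3% per year:

```python
def projected_cnr(cnr_start: float, annual_decline: float, years: int) -> float:
    """Extrapolate a case notification rate under a constant relative annual
    decline -- the counterfactual 'no intervention' trend."""
    return cnr_start * (1.0 - annual_decline) ** years

# Counterfactual pulmonary CNR for 1957 (7 years after 1950):
cf_1957 = projected_cnr(222.0, 0.023, 7)
# The observed 1957 rate was roughly double this counterfactual (RR 1.95).
print(round(cf_1957, 1))
```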
October 2024
·
39 Reads
Background The interaction of CD40L on activated T cells with its receptor CD40 on B cells controls pro-inflammatory activation in the pathophysiology of autoimmunity and transplant rejection. Previous studies have implicated signaling pathways involving CD40L (interchangeably referred to as CD154), as well as adaptive and innate immune cell activation, in the induction of neuroinflammation in neurodegenerative diseases. This study aimed to assess the safety, tolerability, and impact on pro-inflammatory biomarker profiles of an anti-CD40L antibody, tegoprubart, in individuals with amyotrophic lateral sclerosis (ALS). Methods and findings In this multicenter, dose-escalating, open-label Phase 2A study, 54 participants with a diagnosis of ALS received 6 infusions of tegoprubart administered intravenously every 2 weeks. The study comprised 4 dose cohorts: 1 mg/kg, 2 mg/kg, 4 mg/kg, and 8 mg/kg. The primary endpoint of the study was safety and tolerability. Exploratory endpoints assessed the pharmacokinetics of tegoprubart as well as anti-drug antibody (ADA) responses, changes in disease progression using the Revised ALS Functional Rating Scale (ALSFRS-R), CD154 target engagement, changes in pro-inflammatory biomarkers, and neurofilament light chain (NFL). Seventy subjects were screened, and 54 were enrolled in the study. Forty-nine of 54 subjects (90.7%) completed the study, receiving all 6 infusions of tegoprubart and completing their final follow-up visit. The most common treatment-emergent adverse events (TEAEs) overall (>10%) were fatigue (25.9%), falls (22.2%), headaches (20.4%), and muscle spasms (11.1%). Mean tegoprubart plasma concentrations increased proportionally with dose, with a half-life of approximately 24 days. ADA titers were low, and circulating levels of tegoprubart were as predicted for all cohorts. 
Tegoprubart demonstrated dose-dependent target engagement and was associated with a reduction in 18 pro-inflammatory biomarkers in circulation. Conclusions Tegoprubart appeared to be safe and well tolerated in adults with ALS, demonstrating dose-dependent reductions in pro-inflammatory chemokines and cytokines associated with ALS. These results warrant further clinical studies with sufficient power and duration to assess clinical outcomes of tegoprubart as a potential treatment for adults with ALS. Trial registration ClinicalTrials.gov ID: NCT04322149.
October 2024
·
23 Reads
Background Detailed subgroup incidence rates for steatotic liver disease (SLD)-related hepatocellular carcinoma (HCC) are critical to inform practice and public health interventions but remain sparse. We aimed to fill this gap. Methods and findings In a retrospective cohort study of adults with SLD from the United States (US) Merative Marketscan Research Databases (1/2007 to 12/2021), we estimated HCC incidence stratified by sex, age, cirrhosis, diabetes mellitus (DM), and a combination of all 4 of these factors. We excluded patients with significant alcohol use and chronic viral hepatitis. We analyzed data from 741,816 patients with SLD (mean age 51.5 ± 12.8 years, 46% male, 14.7% cirrhosis). During 2,410,166 person-years (PY) of follow-up, 1,740 patients developed HCC. The overall HCC incidence was 0.72 per 1,000 PY (95% confidence interval [CI] [0.68, 0.75]). The incidence was higher in males (0.95, 95% CI [0.89, 1.01]) than in females (0.52, 95% CI [0.48, 0.56]) (p < 0.001). For those with cirrhosis, the incidence was significantly higher at 4.29 (95% CI [4.06, 4.51]) compared to those without cirrhosis (0.14, 95% CI [0.13, 0.16]) (p < 0.001). Additionally, the incidence was higher in patients with DM (1.19, 95% CI [1.12, 1.26]) compared to those without DM (0.41, 95% CI [0.38, 0.44]) (p < 0.001). Chronic kidney disease (CKD) was also associated with a higher HCC incidence of 2.20 (95% CI [2.00, 2.41]) compared to those without CKD (0.58, 95% CI [0.55, 0.62]) (p < 0.001). Similarly, individuals with cardiovascular disease (CVD) had a higher HCC incidence of 1.89 (95% CI [1.75, 2.03]) compared to those without CVD (0.51, 95% CI [0.48, 0.54]) (p < 0.001). Finally, the incidence of HCC was significantly higher in patients with non-liver cancer (3.90, 95% CI [3.67, 4.12]) compared to those without other cancers (0.29, 95% CI [0.26, 0.31]) (p < 0.001). 
On further stratification, HCC incidence rose incrementally by 10-year age interval, male sex, cirrhosis, and DM, reaching 19.06 (95% CI [16.10, 22.01]) and 8.44 (95% CI [6.78, 10.10]) in males and females, respectively, but only 0.04 in non-diabetic, non-cirrhotic patients aged <40 years of either sex. The main limitation of this methodology is the potential misclassification of International Classification of Diseases (ICD) codes inherent in claims database studies. Conclusions This nationwide study provided robust granular estimates of SLD-related HCC incidence stratified by several key risk factors. In addition to cirrhosis, future surveillance strategies, prevention efforts, public health initiatives, and research models should also take into account the impact of sex, age, and DM.
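The overall rate reported above can be reproduced directly from the stated counts. A minimal sketch (Python; illustrative arithmetic only, not the study's analysis code):

```python
# Illustrative check of the crude incidence rate reported in the abstract:
# 1,740 HCC cases over 2,410,166 person-years (PY) of follow-up.
def incidence_per_1000_py(events: int, person_years: float) -> float:
    """Crude incidence rate per 1,000 person-years."""
    return 1000 * events / person_years

rate = incidence_per_1000_py(1740, 2410166)
print(round(rate, 2))  # 0.72 -> matches the reported 0.72 per 1,000 PY
```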
October 2024
·
7 Reads
·
1 Citation
Background Women in sub-Saharan Africa are disproportionately affected by the HIV epidemic. Young women are twice as likely to be living with HIV as men of the same age and account for 64% of new HIV infections among young people. Many studies suggest that financial needs, alongside biological susceptibility, are a leading cause of the gender disparity in HIV acquisition. New robust evidence suggests that women adopt risky sexual behaviours to cope with economic shocks, sudden decreases in a household's income or consumption power, enhancing our understanding of the link between poverty and HIV. We investigated whether health insurance protects against economic shocks, reducing the need for vulnerable women to engage in risky sexual behaviours and reducing HIV and sexually transmitted infection (STI) incidence. Methods and findings We conducted a randomised controlled trial to test the effectiveness of a formal shock coping strategy to prevent HIV among women at high risk of HIV (registration number: ISRCTN 22516548). Between June and August 2021, we recruited 1,508 adolescent girls and women over age 15 years who were involved in transactional sex (n = 753) or commercial sex (n = 755), using snowball sampling. Participants were randomly assigned (1:1) to receive free health insurance for themselves and their economic dependents for 12 months either from the beginning of the study (intervention; n = 579; commercial sex n = 289, transactional sex n = 290), starting November 2021, or at the end of the study 12 months later (control; n = 568; commercial sex n = 290, transactional sex n = 278). We collected data on the socioeconomic characteristics of participants. Primary outcomes included incidence of HIV and STIs and were measured at baseline, 6 months after randomisation, and 12 months after randomisation. 
We found that study participants who engaged in transactional sex and were assigned to the intervention group were less likely to become infected with HIV post-intervention (combined result at 6 or 12 months post-intervention, depending on the follow-up data available; odds ratio (OR) = 0.109 (95% confidence interval (CI) [0.014, 0.870]); p = 0.036). There was no evidence of a reduction in HIV incidence among women and girls involved in commercial sex, and no effect on STI acquisition in either stratum of high-risk sexual activity. The main limitations of this study were the challenges of collecting reliable STI incidence data and the low incidence of HIV among women and girls involved in commercial sex, which might have prevented detection of study effects. Conclusion The study provides, to our knowledge, the first evidence of the effectiveness of a formal shock coping strategy for HIV prevention among women who engage in transactional sex in Africa, reinforcing the importance of structural interventions to prevent HIV. Trial registration The trial was registered with the ISRCTN Registry: ISRCTN 22516548. Registered on 31 July 2021.
October 2024
·
6 Reads
Adolescent girls and young women (AGYW) in southern Africa face triple the HIV incidence of their male peers due to multiple factors, including economic deprivation and age-disparate relationships. A new study by Aurélia Lépine and colleagues has demonstrated that addressing healthcare costs among AGYW has the potential to reduce HIV incidence.
October 2024
·
6 Reads
Background Parental drinking can harm offspring. A parent's diagnosis of alcohol-related liver disease (ALD) might be an opportunity to reach offspring with preventive interventions. We investigated offspring's risk of adverse health outcomes throughout life and its association with the parent's educational level and ALD diagnosis. Methods and findings We used nationwide health registries to identify offspring of parents diagnosed with ALD in Denmark from 1996 to 2018 and age- and sex-matched comparators (20:1). We estimated incidence rate ratios (IRRs) of hospital contacts with adverse health outcomes, overall and in socioeconomic strata. We used a self-controlled design to examine whether health outcomes were more likely to occur during the first year after the parent's ALD diagnosis. The 60,804 offspring of parents with ALD had a higher incidence rate of hospital contacts from age 15 to 60 years for psychiatric disease, poisoning, fracture or injury, alcohol-specific diagnoses, and other substance abuse, and a higher rate of death, than comparators. Associations were stronger for offspring with low compared to high socioeconomic position: the IRR for admission due to poisoning was 2.2 for offspring of an ALD parent with primary-level education versus 1.0 for offspring of a highly educated ALD parent. Offspring had an increased risk of admission with psychiatric disease and poisoning in the year after their parent's ALD diagnosis. For example, among offspring whose first hospital contact with psychiatric disease was at age 13 to 25 years, the IRR in the first year after their parent's ALD diagnosis versus at another time was 1.29 (95% CI 1.13, 1.47). The main limitation was the inability to include adverse health outcomes not involving hospital contact. Conclusions Offspring of parents with ALD had a long-lasting higher rate of health outcomes associated with poor mental health and self-harm that increased shortly after their parent's diagnosis of ALD. 
Offspring of parents of low educational level were particularly vulnerable. This study highlights an opportunity to reach out to offspring in connection with their parent’s hospitalization with ALD.
October 2024
·
15 Reads
Background Medical imaging is an integral part of healthcare. Globalization has resulted in increased mobilization of migrants to new host nations. The association between migration status and utilization of medical imaging is unknown. Methods and findings A retrospective population-based matched cohort study was conducted in Ontario, Canada from April 1, 1995 to December 31, 2016. A total of 1,848,222 migrants were matched 1:1 to nonmigrants in the year of migration on age, sex, and geography. Utilization of computed tomography (CT), magnetic resonance imaging (MRI), radiography, and ultrasonography was determined. Rate differences per 1,000 person-years comparing migrants to nonmigrants were calculated. Relative rates were calculated using a recurrent event framework, adjusting for age, sex, and time-varying socioeconomic status, comorbidity score, and access to a primary care provider. Estimates were stratified by migration age: children and adolescents (≤19 years), young adults (20 to 39), adults (40 to 59), and older adults (≥60). Utilization rates of CT, MRI, and radiography were lower for migrants across all age groups compared with Ontario nonmigrants. Increasing age at migration was associated with larger differences in utilization rates. Older adult migrants had the largest gap in imaging utilization. The longer the time since migration, the larger the gap in medical imaging use. In multivariable analysis, the relative rate of imaging was approximately 20% to 30% lower for migrants: ranging from 0.77 to 0.88 for CT and 0.72 to 0.80 for MRI across age groups. Radiography relative rates ranged from 0.84 to 0.90. All migrant age groups, except older adults, had higher rates of ultrasonography. The indication for imaging was not captured; thus, it was not possible to determine whether the imaging was necessary. Conclusions Migrants utilized less CT, MRI, and radiography but more ultrasonography. 
Older adult migrants used the least amount of imaging compared with nonmigrants. Future research should evaluate whether lower utilization is due to barriers in healthcare access or health-seeking behaviors within a universal healthcare system.
October 2024
·
30 Reads
Background To reduce leprosy risk in contacts of patients with leprosy by around 50%, the World Health Organization (WHO) recommends leprosy post-exposure prophylaxis (PEP) using single-dose rifampicin (SDR). Results from a cluster randomized trial in the Comoros and Madagascar suggest that PEP with a double dose of rifampicin led to a similar reduction in incident leprosy, prompting the need for stronger PEP. The objective of this Phase 2 trial was to assess safety of a bedaquiline-enhanced PEP regimen (intervention arm, bedaquiline 800 mg with rifampicin 600 mg, BE-PEP), relative to the WHO recommended PEP with rifampicin 600 mg alone (control arm, SDR-PEP). Methods and findings From July 2022 to January 2023, consenting participants were screened for eligibility, including a heart rate-corrected QT interval (QTc) <450 ms and liver enzyme tests (ALT/AST) below 3× the upper limit of normal (ULN), before they were individually randomized 1:1 in an open-label design. Recruitment was sequential, by age group. Pediatric dosages were weight adjusted. Follow-up was done at day 1 post-dose (including ECG) and day 14 (including ALT/AST), with repeat of ALT/AST on the last follow-up at day 30 in case of elevation on day 14. The primary outcome was non-inferiority of BE-PEP based on a <10 ms difference in QTc 24 h after treatment administration, both unadjusted and adjusted for baseline QTc. Of 408 screened participants, 313 were enrolled, starting with 187 adults, then 38 children aged 13 to 17 years, and finally 88 children aged 5 to 12 years, of whom 310 (99%) completed all visits. Across all ages, the mean QTc change on BE-PEP was from 393 ms to 396 ms, not significantly different from the change from 392 ms to 394 ms on SDR-PEP (difference between arms 1.8 ms, 95% CI −1.8, 5.3, p = 0.41). No individual’s QTc increased by >50 ms or exceeded 450 ms after PEP administration. 
Per protocol, all children were analyzed together, with no significant difference in mean QTc increase for BE-PEP compared to SDR-PEP, although non-inferiority of BE-PEP in children was not demonstrated in the unadjusted analysis, as the upper limit of the 95% CI of 10.4 ms exceeded the predefined margin of 10 ms. Adjusting for baseline QTc, the regression coefficient and 95% CI (3.3; −1.4, 8.0) met the 10 ms non-inferiority margin. No significant differences in ALT or AST levels were noted between the intervention and control arms, although a limitation of the study was false elevation of ALT/AST during adult recruitment due to a technical error. In each study arm, one serious adverse event was reported, both considered unlikely to be related to the study drugs. Dizziness, nausea, headache, and diarrhea among adults, and headaches in children, were nonsignificantly more frequent in the BE-PEP group. Conclusions In this study, we observed that the safety of single-dose bedaquiline 800 mg in combination with rifampicin is comparable to that of rifampicin alone, although non-inferiority of QTc changes was demonstrated in children only after adjusting for baseline QTc measurements. A Phase 3 cluster randomized efficacy trial is currently ongoing in the Comoros. Trial registration ClinicalTrials.gov NCT05406479.
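The non-inferiority criterion described in the abstract reduces to comparing the upper bound of each 95% CI for the QTc difference against the predefined 10 ms margin. A minimal sketch (Python; an illustration of the decision rule using values quoted in the abstract, not the trial's statistical code):

```python
# Non-inferiority decision rule: BE-PEP is non-inferior to SDR-PEP if the
# upper bound of the 95% CI for the QTc difference (BE-PEP minus SDR-PEP)
# stays below the predefined 10 ms margin.
MARGIN_MS = 10.0

def non_inferior(ci_upper_ms: float, margin_ms: float = MARGIN_MS) -> bool:
    return ci_upper_ms < margin_ms

print(non_inferior(5.3))   # True:  all ages, unadjusted (95% CI -1.8 to 5.3)
print(non_inferior(10.4))  # False: children, unadjusted (upper limit 10.4 ms)
print(non_inferior(8.0))   # True:  children, adjusted for baseline QTc
```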
October 2024
·
90 Reads
Background Associations between violent victimisation and psychiatric disorders are hypothesised to be bidirectional, but the role of violent victimisation in the aetiologies of psychiatric disorders and other adverse outcomes remains unclear. We aimed to estimate associations between violent victimisation and subsequent common psychiatric disorders, suicidal behaviours, and premature mortality while accounting for unmeasured familial confounders. Methods and findings Using nationwide registers, we identified a total of 127,628 individuals born in Finland (1987 to 2004) and Sweden (1973 to 2004) who had experienced violent victimisation, defined as either hospital admissions or secondary care outpatient visits for assault-related injuries. These were age- and sex-matched with up to 10 individuals in the general population (n = 1,276,215). Additionally, we matched those who had experienced violent victimisation with their unaffected siblings (n = 132,408). Outcomes included depression, anxiety, personality disorders, alcohol use disorders, drug use disorders, suicidal behaviours, and premature mortality. Participants were followed from the victimisation date until the date of the outcome, emigration, death, or December 31, 2020, whichever occurred first. Country-specific associations were estimated using stratified Cox regression models, which also accounted for unmeasured familial confounders via sibling comparisons. The country-specific associations were then pooled using meta-analytic models. Among 127,628 patients (69.0% male) who had experienced violent victimisation, the median age at first violent victimisation was 21 (interquartile range: 18 to 26) years. 
Incidence of all outcomes was higher in those exposed to violent victimisation than in population controls, ranging from 2.3 (95% confidence interval (CI) [2.2; 2.4]) per 1,000 person-years for premature mortality (compared with 0.6, 95% CI [0.6; 0.6], in controls) to 22.5 (95% CI [22.3; 22.8]) per 1,000 person-years for anxiety (compared with 7.3, 95% CI [7.3; 7.4], in controls). In adjusted models, people who had experienced violent victimisation were roughly 2 to 3 times as likely as their siblings to develop any of the outcomes, ranging from an adjusted hazard ratio (aHR) of 1.7 (95% CI [1.7; 1.8]) for depression to 3.0 (95% CI [2.9; 3.1]) for drug use disorders. Risks remained elevated 2 years post-victimisation, ranging from aHR 1.4 (95% CI [1.3; 1.5]) for depression to 2.3 (95% CI [2.2; 2.4]) for drug use disorders. Our reliance on secondary care data likely excluded individuals with milder assault-related injuries and less severe psychiatric symptoms, suggesting that our estimates may be conservative. Another limitation is the possibility of residual genetic confounding, as full siblings share on average about half of their co-segregating genes. However, the associations remained robust after adjusting for both measured and unmeasured familial confounders. Conclusions In this longitudinal cross-national cohort study, we observed that those who had experienced violent victimisation were at least twice as likely as their unaffected siblings to develop common psychiatric disorders (i.e., depression, anxiety, personality disorder, and alcohol and drug use disorders), engage in suicidal behaviours, and die prematurely. Importantly, these risk elevations remained 2 years after the first victimisation event. Improving clinical assessment, management, and aftercare psychosocial support could therefore potentially reduce rates of common psychiatric disorders, suicidality, and premature mortality in individuals experiencing violent victimisation.
October 2024
·
21 Reads
In this perspective, we discuss why current mechanistic uncertainty about ultraprocessed foods (UPFs) and health poses a major challenge to providing informed dietary guidelines and public advice on UPFs. On the balance of current evidence, we do not believe it is appropriate to advise consumers to avoid all UPFs, and we await further evidence to inform consumer guidance on the need to limit consumption of specific foods based on their degree or type of processing.
October 2024
·
7 Reads
In this Editorial on behalf of the PLOS Medicine Editors, Alexandra Tosun discusses how ultra-processed food has found itself at the center of a growing storm of criticism, the complexities of the ongoing nutrition debate, and why stakeholders must be held to higher standards.
October 2024
·
23 Reads
Background By 2022, over 16 million children globally had been exposed to HIV during pregnancy while remaining HIV-free at birth and throughout childhood. Children born HIV-free (CBHF) have higher morbidity and mortality and poorer neurodevelopment in early life compared to children who are HIV-unexposed (CHU), but long-term outcomes remain uncertain. We characterised school-age growth and cognitive and physical function in CBHF and CHU previously enrolled in the Sanitation Hygiene Infant Nutrition Efficacy (SHINE) trial in rural Zimbabwe. Methods and findings The SHINE trial enrolled pregnant women between 2012 and 2015 across 2 rural Zimbabwean districts. Co-primary outcomes were height-for-age Z-score and haemoglobin at age 18 months (clinicaltrials.gov NCT01824940). Children were re-enrolled if they were aged 7 years, resident in Shurugwi district, and had known pregnancy HIV-exposure status. Of the 5,280 pregnant women originally enrolled, 376 CBHF and 2,016 CHU reached the trial endpoint at 18 months in Shurugwi; of these, 264 CBHF and 990 CHU were evaluated at age 7 years using the School-Age Health, Activity, Resilience, Anthropometry and Neurocognitive (SAHARAN) toolbox. Cognitive function was evaluated using the Kaufman Assessment Battery for Children (KABC-II), with additional tools measuring executive function, literacy, numeracy, fine motor skills, and socioemotional function. Physical function was assessed using the standing broad jump and handgrip for strength, and the shuttle-run test for cardiovascular fitness. Growth was assessed by anthropometry. Body composition was assessed by bioimpedance analysis and skinfold thicknesses. A caregiver questionnaire measured demographics, socioeconomic status, nurturing, child discipline, and food and water insecurity. We prespecified the primary comparisons and used generalised estimating equations with an exchangeable working correlation structure to account for clustering. 
Adjusted models used covariates from the trial (study arm, study nurse, exact child age, sex, calendar month measured, and ambient temperature). They also included covariates derived from directed acyclic graphs, with separate models adjusted for contemporary variables (socioeconomic status, household food insecurity, religion, social support, gender norms, caregiver depression, age, caregiver education, adversity score, and number of children's books) and early-life variables (length-for-age Z-score at 18 months, birthweight, maternal baseline depression, household diet, maternal schooling and haemoglobin, socioeconomic status, facility birth, and gender norms). We applied a Bonferroni correction for the 27 comparisons (0.05/27), with a threshold of p < 0.00185 considered significant. We found strong evidence that cognitive function was lower in CBHF than in CHU across multiple domains. The KABC-II mental processing index was 45.2 (standard deviation (SD) 10.5) in CBHF and 48.3 (11.3) in CHU (mean difference 3.3 points [95% confidence interval (95% CI) 2.0, 4.5]; p < 0.001). The school achievement test score was 39.0 (SD 26.0) in CBHF and 45.7 (27.8) in CHU (mean difference 7.3 points [95% CI 3.6, 10.9]; p < 0.001); differences remained significant in adjusted analyses. Executive function was reduced, but not significantly in adjusted analyses. We found no consistent evidence of differences in growth or physical function outcomes. The main limitation of our study was the restriction to one of the two previous study districts, with possible survivor and selection bias. Conclusions In this study, we found that CBHF had reductions in cognitive function compared to CHU at 7 years of age across multiple domains. Further research is needed to define the biological and psychosocial mechanisms underlying these differences to inform future interventions that help CBHF thrive across the life-course. 
Trial registration The SHINE follow-up study was registered with the Pan-African Clinical Trials Registry (PACTR202201828512110). The original SHINE trial was registered at ClinicalTrials.gov (NCT01824940): https://clinicaltrials.gov/study/NCT01824940.