Background: Bipolar disorder and attention-deficit/hyperactivity disorder are common comorbidities. Attention-deficit/hyperactivity disorder is commonly treated with stimulants (eg, methylphenidate), which have, however, been suggested to cause treatment-emergent mania in patients with bipolar disorder. Here, we assessed the risk of mania, depressive episodes, and psychiatric admissions after initiation of methylphenidate treatment in patients with bipolar disorder. Methods: Using Danish health registries, we identified all individuals registered with a diagnosis of bipolar disorder from January 1, 2000, to January 1, 2018, who were treated with methylphenidate. We applied a 1-year mirror-image model to compare the occurrence of mania, depression, and psychiatric admissions in the periods leading up to and following methylphenidate treatment initiation. We furthermore assessed the trend in these outcomes from 4 years before to 1 year after initiation of methylphenidate treatment. Results: A total of 1043 patients with bipolar disorder initiated treatment with methylphenidate. The number of manic episodes decreased by 48% after methylphenidate treatment initiation (P = 0.01), both among patients using mood stabilizers (-50%) and among patients not using mood stabilizers (-45%). The number of manic episodes, however, peaked approximately 6 months before methylphenidate initiation. The results were similar for the secondary outcomes. Conclusions: Initiation of methylphenidate treatment was not associated with an increased risk of mania in patients with bipolar disorder. A decrease in mania, depressive episodes, and psychiatric admissions was observed after methylphenidate initiation. However, these decreases seemed to be driven by regression to the mean after clinical deterioration preceding methylphenidate treatment, rather than by the methylphenidate treatment itself.
Aims: Transthyretin amyloid cardiomyopathy (ATTR CM) is a progressive and severe heart disease with physical and psychological implications. The Nordic PROACT study was conducted to investigate the health-related quality of life (HRQoL) in ATTR CM patients. Methods and results: The Nordic PROACT study was a cross-sectional non-interventional study conducted in 12 cardiology hospital clinics across Norway, Sweden, Finland and Denmark. Men and women aged ≥18 years diagnosed with symptomatic ATTR CM were included. The investigator provided information on medical history, biomarkers, current treatment, co-morbidities and disease severity according to the New York Heart Association (NYHA) class and the National Amyloidosis Centre (NAC) staging. Patients completed the HRQoL questionnaires in the form of the Kansas City Cardiomyopathy Questionnaire (KCCQ), the EQ-5D-5L index with Visual Analog Scale (VAS), and the Major Depression Inventory (MDI). A total of 169 patients (mean ± SD age 77.7 ± 6.2 years) were included. Ninety-two per cent were men. Seventy-six per cent had wildtype ATTR CM (ATTRwt CM) and 15% had a hereditary form of ATTR CM (ATTRv CM) while 9% were genetically unclassified. Most patients were in NYHA class II (54%) and NAC stage 1 (53%). Participation in randomized clinical trials (RCT) was noted in 58% of the patients. The 169 ATTR CM patients had a mean ± SD KCCQ score of 64.3 ± 23.1 for total symptom score, 64.8 ± 20.9 for overall summary score (OSS) and 65.1 ± 21.5 for clinical summary score. The EQ-5D-5L total utility score was 0.8 ± 0.2 and the EQ-5D-5L VAS score was 62.9 ± 20.6. The vast majority (89%) did not report any signs of depression. Patients with ATTRv CM had a higher KCCQ OSS as compared with ATTRwt CM, while EQ-5D-5L utility score, EQ-5D-5L VAS and MDI were similar. Non-RCT participants had a poorer HRQoL as compared with RCT participants as reflected in lower KCCQ OSS and EQ-5D-5L VAS scores and a higher MDI score. 
Patients with higher NYHA classes and NAC disease stages had a poorer HRQoL as demonstrated by lower KCCQ and EQ-5D-5L scores and higher MDI scores. Correlation between KCCQ, EQ-5D-5L and MDI and the covariate NYHA class remained significant (P < 0.05) after adjusting for multiple testing. Conclusions: KCCQ scores were lower than previously reported for patients with other heart diseases of non-ATTR CM origin. The HRQoL measures correlated well to NYHA class and NAC disease stage. The prevalence of depression appeared to be low.
Background: Obesity is a modifiable risk factor for urinary incontinence, yet few studies have investigated how waist circumference, as compared to body mass index (BMI), influences the risk of urinary incontinence. Objective: To estimate how BMI and waist circumference associate with the risk of urinary incontinence in midlife and to determine which of the two is the stronger predictor of urinary incontinence. Methods: Cohort study among mothers in the Danish National Birth Cohort. Weight and waist circumference were self-reported 7 years after cohort entry. Symptoms of urinary incontinence in midlife were self-reported using the International Consultation on Incontinence Questionnaire Female Lower Urinary Tract Symptoms (ICIQ-FLUTS) and analyzed continuously and as presence or absence of any, stress (SUI), urgency (UUI), and mixed (MUI) urinary incontinence. Linear and log binomial regressions were used to calculate mean differences and risk ratios (RR) with 95% confidence intervals (CI). Restricted cubic splines were generated to explore nonlinear relationships. Results: Among 27 254 women at a mean age of 44.2 years, any urinary incontinence was reported by 32.1%, SUI by 20.9%, UUI by 2.4%, and MUI by 8.6%. For all outcomes, increases in risk were similar with higher BMI and waist circumference. The estimates of association were strongest for MUI (RR 1.10, 95% CI 1.08;1.12 and RR 1.12, 95% CI 1.10;1.14 for half a standard deviation increase in BMI and waist circumference, respectively). While increases in risk of the other outcomes were seen across the entire range of BMI and waist circumference, the risk of SUI rose until BMI 28 kg/m2 (waist circumference 95 cm) and then fell slightly. Conclusions: Symptoms of urinary incontinence and prevalence of any urinary incontinence, SUI, UUI, and MUI increased with higher BMI and waist circumference. Self-reported BMI and waist circumference were equally predictive of urinary incontinence.
Objective: Previous studies indicated higher long-term mortality after transfusion of allogeneic red blood cells, and newer recommendations emphasize lower transfusion rates. The consequences of red blood cell transfusion in cardiac surgery remain unclear, as later studies have focused on transfusion triggers and short-term outcomes, and reports on long-term complications after cardiac surgery are few. Material and methods: The mandatory Western Denmark Heart Registry was used to identify all adult cardiac surgeries performed in four centres from 2000-2019. Patients with multiple entries or previous cardiac surgery, special/complex procedures, death within 30 days, or ineligibility for follow-up were excluded. Results: A total of 32,581 adult cardiac surgeries performed in four centres from 2000-2019 were included. The Kaplan-Meier survival plot for low-risk patients undergoing simple cardiac surgery showed a significantly lower 15-year survival (0.384 vs 0.661) in patients receiving perioperative red blood cell transfusion (OR 2.43 (CI 2.23-2.66)). The relative risk decreased with increasing comorbidity or age, and in high-risk patients, no difference was found. The adjusted risk ratio after RBC transfusion, accounting for age, sex, comorbidity, and surgery type, was 1.62 (1.48-1.77). Conclusion: Despite reduced transfusion rates, long-term follow-up, especially of low-risk patients undergoing comparable cardiac surgery, still demonstrates a substantially higher mortality in patients receiving perioperative red blood cell transfusion. Even transfusion of 1-2 units is associated with increased long-term mortality.
Background Thiamine supplementation has demonstrated protective effects in a mouse model of cardiac arrest. The aim of this study was to investigate the neuroprotective effects of thiamine in a clinically relevant large animal cardiac arrest model. The hypothesis was that thiamine reduces neurological injury as evaluated by neuron‐specific enolase levels. Methods and Results Pigs underwent myocardial infarction and subsequently 9 minutes of untreated cardiac arrest. Twenty minutes after successful resuscitation, the pigs were randomized to treatment with either thiamine or placebo. All pigs underwent 40 hours of intensive care and were awakened for assessment of functional neurological outcome up until 9 days after cardiac arrest. Nine pigs were included in each group, with 8 in each group surviving the entire intensive care phase. Mean area under the curve for neuron‐specific enolase was similar between groups, with 81.5 μg/L per hour (SD, 20.4) in the thiamine group and 80.5 μg/L per hour (SD, 18.3) in the placebo group, an absolute difference of 1.0 (95% CI, −57.8 to 59.8; P =0.97). Likewise, there was no absolute difference in neurological deficit score at the end of the protocol (2 [95% CI, −38 to 42]; P =0.93). There was no absolute mean group difference in lactate during the intensive care period (1.1 mmol/L [95% CI, −0.5 to 2.7]; P =0.16). Conclusions In this randomized, blinded, placebo‐controlled trial using a pig cardiac arrest model with myocardial infarction, long intensive care, and observation for 9 days, thiamine showed no effect on functional neurological outcome or serum levels of neuron‐specific enolase. Thiamine treatment had no effect on lactate levels after successful resuscitation.
Background Patients with Brugada syndrome (BrS) are recommended to avoid drugs that may increase their risk of arrhythmic events. We examined treatment with such drugs in patients with BrS after their diagnosis. Methods and Results All Danish patients diagnosed with BrS (2006–2018) with >12 months of follow‐up were identified from nationwide registries. Nonrecommended BrS drugs were grouped into drugs to "avoid" or "preferably avoid" according to http://www.brugadadrugs.org . Cox proportional hazards analyses were performed to identify factors associated with any nonrecommended BrS drug use, and logistic regression analyses were performed to examine the associated risk of appropriate implantable cardioverter defibrillator therapy, mortality, and a combined end point indicating an arrhythmic event (delayed implantable cardioverter defibrillator implantation, appropriate implantable cardioverter defibrillator therapy, and mortality). During a median follow‐up of 6.8 years, 93/270 (34.4%) patients with BrS (70.4% male, median age at diagnosis 46.1 years [interquartile range, 32.6–57.4]) were treated with ≥1 nonrecommended BrS drugs. No difference in any nonrecommended BrS drug use was identified comparing the time before BrS diagnosis (12.6%) with each of the 5 years following BrS diagnosis ( P >0.05). Factors associated with any nonrecommended BrS drug use after diagnosis were female sex (hazard ratio [HR], 1.83 [95% CI, 1.15–2.90]), psychiatric disease (HR, 3.63 [1.89–6.99]), and prior use of any nonrecommended BrS drug (HR, 4.76 [2.45–9.25]). No significant association between any nonrecommended BrS drug use and implantable cardioverter defibrillator therapy (n=20/97, odds ratio [OR], 0.7 [0.2–2.4]), mortality (n=10/270, OR, 3.4 [0.7–19.6]), or the combined end point (n=38/270, OR, 1.7 [0.8–3.7]) was identified. Conclusions One in 3 patients with BrS was treated with a nonrecommended BrS drug after BrS diagnosis, and a BrS diagnosis did not change prescription patterns.
More awareness of nonrecommended drug use among patients with BrS is needed.
Background: Denmark was one of the few countries where it was politically decided to continue cancer screening during the COVID-19 pandemic. We assessed the actual population uptake of mammography and cervical screening during this period. Methods: The first COVID-19 lockdown in Denmark was announced on 11 March 2020. To investigate possible changes in cancer screening activity due to the COVID-19 pandemic, we analysed data from the beginning of 2017 until the end of 2021. A time series analysis was carried out to discover possible trends and outliers in the screening activities in the period 2017-2021. Data on mammography screening and cervical screening were retrieved from governmental pandemic-specific monitoring of health care activities. Results: A brief drop was seen in screening activity right after the first COVID-19 lockdown, but the activity quickly returned to its previous level. A short-term deficit of 43% [CI -49 to -37] was found for mammography screening. A short-term deficit of 62% [CI -65 to -58] was found for cervical screening. Furthermore, a slight, statistically significant downward trend in cervical screening from 2018 to 2021 was probably unrelated to the pandemic. Other changes, for example, a marked drop in mammography screening towards the end of 2021, also seem unrelated to the pandemic. Conclusions: Denmark continued cancer screening during the pandemic, but following the first lockdown a temporary drop was seen in breast and cervical screening activity. Funding: Region Zealand (R22-A597).
Correctly diagnosing and classifying seizures and epilepsies is paramount to ensure the delivery of optimal care to patients with epilepsy. Focal seizures, defined as those that originate within networks limited to one hemisphere, are primarily subdivided into focal aware, focal impaired awareness, and focal to bilateral tonic-clonic seizures. Focal epilepsies account for most epilepsy cases in both children and adults. In children, focal epilepsies are typically subdivided into three groups: self-limited focal epilepsy syndromes (e.g., self-limited epilepsy with centrotemporal spikes), focal epilepsy of unknown cause that does not meet criteria for a self-limited focal epilepsy syndrome, and focal epilepsy of known cause (e.g., structural lesions, developmental or acquired). In adults, focal epilepsies are often acquired and may be caused by a structural lesion such as stroke, infection, traumatic brain injury, brain tumors, or vascular malformations, or by metabolic, autoimmune, and/or genetic causes. In addition to seizure semiology, neuroimaging, neurophysiology, and neuropathology constitute the cornerstones of a diagnostic evaluation. Patients with focal epilepsy who become drug-resistant should promptly undergo assessment in an epilepsy center. After excluding pseudo-resistance, these patients should be considered for presurgical evaluation as a means to identify the location and extent of the epileptogenic zone and assess their candidacy for a surgical procedure. The goal of this seminar in epileptology is to summarize clinically relevant information concerning focal epilepsies. This contributes to the ILAE's mission to ensure that worldwide healthcare professionals, patients, and caregivers continue to have access to high-quality educational resources concerning epilepsy.
Objective Renal fibrosis is one of the main pathophysiological processes underlying the progression of chronic kidney disease and kidney allograft failure. In the past decades, overwhelming efforts have been undertaken to find druggable targets for the treatment of renal fibrosis, mainly using cell and animal models. However, the latter often do not adequately reflect human pathogenesis, obtained results differ per strain within a given species, and the models are associated with considerable discomfort for the animals. Therefore, the objective of this study was to implement the 3Rs in renal fibrosis research by establishing an animal-free drug screening platform for renal fibrosis based on human precision-cut kidney slices (PCKS) and by limiting the use of reagents that are associated with significant animal welfare concerns. Results Using Western blotting and gene expression arrays, we show that transforming growth factor-β (TGF-β) induced fibrosis in human PCKS. In addition, our results demonstrated that butaprost, SC-19220, and tamoxifen, all putative anti-fibrotic compounds, altered TGF-β-induced pro-fibrotic gene expression in human PCKS. Moreover, we observed that the compounds modulated fairly distinct sets of genes; however, they all impacted TGF-β/SMAD signaling. In conclusion, this study revealed that it is feasible to use an animal-free approach to test drug efficacy and elucidate mechanisms of action.
Objective. Despite appropriate oral glucocorticoid replacement therapy, patients with hypocortisolism often suffer from impaired health and frequent hospitalizations. Continuous subcutaneous hydrocortisone infusion (CSHI) has been developed as an attempt to improve the health status of these patients. The objective of this study was to compare the effects of CSHI to conventional oral treatment on hospitalizations, glucocorticoid doses, and subjective health status. Patients. Nine Danish patients (males: 4 and females: 5) with adrenal insufficiency (AI) were included, with a median age of 48 years, due to Addison's disease (n = 4), congenital adrenal hyperplasia (n = 1), steroid-induced secondary adrenal insufficiency (n = 2), morphine-induced secondary adrenal insufficiency (n = 1), and Sheehan's syndrome (n = 1). Only patients with severe symptoms of cortisol deficit on oral treatment were selected for CSHI. Their usual oral hydrocortisone doses varied from 25-80 mg per day. The duration of follow-up depended on when the treatment was changed. The first patient started CSHI in 2009 and the last in 2021. Design. A retrospective case series comparing hospitalizations and glucocorticoid doses before and after treatment with CSHI. In addition, patients were retrospectively interviewed about their health-related quality of life (HRQoL) after the change of treatment modality. Results. Patients significantly reduced their daily dose of glucocorticoids by 16.1 mg (p = 0.02) after changing to CSHI. The number of hospital admissions due to adrenal crisis decreased by 1.3 per year on CSHI, a 50% reduction (p = 0.04). All patients found it easier to handle an adrenal crisis with CSHI, and almost all patients found it easier to overcome everyday activities and had fewer symptoms of cortisol deficit such as abdominal pain and nausea (7-8 out of 9 patients). Conclusions.
The change of treatment from conventional oral hydrocortisone to CSHI resulted in a reduced daily dose of glucocorticoids and a reduced number of hospitalizations. Patients reported regained energy, better disease control, and better handling of adrenal crises.
Objectives: Teriparatide (TPTD) is an effective treatment for osteoporosis, but the individual response to therapy is variable for reasons that are unclear. This study aimed to determine whether the response to TPTD might be influenced by genetic factors. Methods: We searched for predictors of the response of bone mineral density (BMD) to TPTD using a two-stage genome-wide association study in 437 patients with osteoporosis from three referral centres. Demographic and clinical data, including the response of BMD to treatment at the lumbar spine and hip, were extracted from the medical records of each participant. Results: Allelic variation at rs6430612 on chromosome 2, close to the CXCR4 gene, was associated with the response of spine BMD to TPTD at a genome-wide significant level (p=9.2×10-9, beta=-0.35 (-0.47 to -0.23)). The increase in BMD was almost twice as great in AA homozygotes at rs6430612 as compared with GG homozygotes, with intermediate values in heterozygotes. The same variant was also associated with the response of femoral neck and total hip BMD (p=0.007). An additional locus on chromosome 19, tagged by rs73056959, was associated with the response of femoral neck BMD to TPTD (p=3.5×10-9, beta=-1.61 (-2.14 to -1.07)). Conclusions: Genetic factors influence the response to TPTD at the lumbar spine and hip with a magnitude of effect that is clinically relevant. Further studies are required to identify the causal genetic variants and underlying mechanisms, as well as to explore how genetic testing for these variants might be implemented in clinical practice.
Background: Ablative fractional CO2 laser (AFL) is an established first-line energy-based treatment for acne scars. Microneedle radiofrequency (MNRF) is an emerging treatment, also targeting the skin in fractions. No studies have so far compared AFL with MNRF for acne scars in a direct, controlled, side-by-side comparison. In this study, we compared AFL and MNRF treatments for acne scars in a randomized split-face trial with blinded response evaluation, objective measures, and patient-reported outcomes. Study design/materials and methods: Fifteen patients with moderate to severe acne scars were included. At baseline, two similar test areas were identified in each patient and randomized to receive a single treatment with either AFL or MNRF. Standardized multilayer techniques were applied with AFL and MNRF, first targeting the scar base and thereafter the entire scar area. Outcome measures included blinded evaluation of clinical improvement of scar texture (0-10 scale) at 1- and 3-months follow-up, local skin reactions (LSR), pain according to the Visual Analogue Scale (VAS), skin integrity quantified by transepidermal water loss, and patient satisfaction. Results: Fifteen patients completed the study with a median test area size of 24.6 cm2 (interquartile range [IQR] 14.9-40.6). A single treatment with AFL or MNRF equally resulted in a median 1-point texture improvement at 3-months follow-up (p < 0.001). Best responders achieved up to a 3-point improvement (n = 3 test areas, 10% of treatment areas). Erythema and loss of skin integrity were more intense after AFL compared with MNRF after 2-4 days (p < 0.001). Patients reported MNRF (VAS 7.0) to be significantly more painful than AFL (5.5) (p = 0.009). Patients were generally satisfied with the overall outcome, with a median of 6 on a 10-point scale for both treatments (IQR 5-7). Conclusion: AFL and MNRF treatments are equally effective at improving texture in skin with acne scars.
AFL resulted in more pronounced LSRs whereas MNRF was more painful. Patients were generally satisfied with the overall outcome.
Purpose Statins are the most widely prescribed cholesterol-lowering medications and have been associated with both improved and unchanged breast cancer outcomes in previous studies. This study examines the association between post-diagnostic use of statins and breast cancer outcomes (death and recurrence) in a large, representative sample of New Zealand (NZ) women with breast cancer. Methods Women diagnosed with a first primary breast cancer between 2007 and 2016 were identified from four population-based regional NZ breast cancer registries and linked to national pharmaceutical data, hospital discharges, and death records. Cox proportional hazard models were used to estimate the hazard of breast cancer-specific death (BCD) associated with any post-diagnostic statin use. Results Of the 14,976 women included in analyses, 27% used a statin after diagnosis, and the median follow-up time was 4.51 years. Statin use (vs non-use) was associated with a statistically significant decreased risk of BCD (adjusted hazard ratio: 0.74; 0.63–0.86). The association was attenuated when considering a subgroup of 'new' statin users (HR: 0.91; 0.69–1.19); however, other analyses revealed that the protective effect of statins was more pronounced in estrogen receptor positive patients (HR: 0.77; 0.63–0.94), postmenopausal women (HR: 0.74; 0.63–0.88), and women with advanced stage disease (HR: 0.65; 0.49–0.84). Conclusion In this study, statin use was associated with a statistically significant decreased risk of breast cancer death, with subgroup analyses revealing a more protective effect in ER+ patients, postmenopausal women, and women with advanced stage disease. Further research is warranted to determine if these associations are replicated in other clinical settings.
Objectives: The 2016 ACR-EULAR Response Criteria for juvenile dermatomyositis (JDM) were developed as a composite measure with differential weights of six core set measures (CSMs) to calculate a Total Improvement Score (TIS). We assessed the contribution of each CSM, the representation of muscle-related and patient-reported CSMs towards improvement, and the frequency of CSM worsening across myositis response criteria (MRC) categories in validation of the MRC. Methods: Data from JDM patients in the Rituximab in Myositis trial (n = 48), the PRINTO JDM trial (n = 139), and consensus patient profiles (n = 273) were included. Observed versus expected CSM contributions were compared using the Sign test. Characteristics of MRC categories were compared by Wilcoxon tests with Bonferroni adjustment. Spearman correlations of changes in TIS and individual CSMs were examined. Agreement between physician-assessed change and MRC categories was evaluated by weighted Cohen's Kappa. Results: Of 457 JDM patients with IMACS CSMs and 380 with PRINTO CSMs, 9-13% had minimal, 19-23% had moderate, and 41-50% had major improvement. The number of improved CSMs and the absolute percentage change of CSMs increased by MRC improvement level. Patients with minimal improvement by MRC had a median of 0-1 worsened CSMs, and those with moderate/major improvement had a median of zero worsening CSMs. Of patients improved by MRC, 94-95% had improvement in muscle strength and 93-95% had improvement in ≥1 patient-reported CSM. IMACS and PRINTO CSMs performed similarly. Physician-rated change and MRC improvement categories had moderate-to-substantial agreement (Kappa 0.5-0.7). Conclusion: The ACR-EULAR MRC perform consistently across multiple studies, supporting their further use as an efficacy end point in JDM trials.
Objective: We investigated the relationship between hs-CRP, a marker of low-grade inflammation, alone or in combination with C-peptide, a marker of hyperinsulinemia/insulin resistance, and risk for cardiovascular events (CVEs) and mortality in patients recently diagnosed with type 2 diabetes (T2D). Research design and methods: In patients with recent-onset T2D, we measured serum hs-CRP (n = 7,301) and C-peptide (n = 5,765) in the prospective Danish Centre for Strategic Research in Type 2 Diabetes cohort study. Patients with no prior CVE (n = 6,407) were followed until first myocardial infarction, stroke, coronary revascularization, or cardiovascular death, and all patients (n = 7,301) were followed for all-cause mortality. We computed adjusted hazard ratios (aHRs) by Cox regression and tested for the interaction between hs-CRP and C-peptide. Results: During follow-up (median 4.8 years), high (>3 mg/L) versus low (<1 mg/L) hs-CRP was associated with increased CVE risk (aHR 1.45 [95% CI 1.07-1.96]) and with even greater risk of all-cause mortality (2.47 [1.88-3.25]). Compared with patients with low hs-CRP (≤3 mg/L) and low C-peptide (<1,470 pmol/L), those with high levels of both biomarkers had the highest CVE (1.61 [1.10-2.34]) and all-cause mortality risk (2.36 [1.73-3.21]). Among patients with high C-peptide, risk of CVEs did not differ by low or high hs-CRP, whereas risk of all-cause mortality did. Conclusions: The finding of high hs-CRP as a stronger prognostic biomarker of all-cause mortality than of CVEs may facilitate improved early detection and prevention of deadly diseases besides CVEs. Conversely, elevated C-peptide as a strong CVE biomarker supports the need to target hyperinsulinemia/insulin resistance in T2D CVE prevention.
Performance in short-duration sports is highly dependent on muscle glycogen, but total degradation during such exercise is only moderate, and considering the water-binding property of glycogen, unnecessary storing of glycogen may cause an unfavorable increase in body mass. To investigate this, we determined the effect of manipulating dietary carbohydrates (CHO) on muscle glycogen content, body mass, and short-term exercise performance. In a cross-over design, twenty-two men completed two maximal cycle tests of either 1-min (n = 10) or 15-min (n = 12) duration with different pre-exercise muscle glycogen levels. Glycogen manipulation was initiated three days prior to the tests by exercise-induced glycogen depletion followed by ingestion of a moderate (M-CHO) or high (H-CHO) CHO diet. Subjects were weighed before each test, and muscle glycogen content was determined in biopsies from m. vastus lateralis before and after each test. Pre-exercise muscle glycogen content was lower following M-CHO than H-CHO (367 mmol · kg-1 DW vs. 525 mmol · kg-1 DW, P < 0.00001), accompanied by a 0.7 kg lower body mass (P < 0.00001). No differences in performance were observed between diets in either the 1-min (P = 0.33) or the 15-min (P = 0.99) test. In conclusion, pre-exercise muscle glycogen content and body mass were lower after ingesting moderate compared with high amounts of CHO, while short-term exercise performance was unaffected. This demonstrates that adjusting pre-exercise glycogen levels to the requirements of competition may provide an attractive weight management strategy in weight-bearing sports, particularly in athletes with high resting glycogen levels.
Time for a lead-time definition? Author response to ‘Why the length of recurrence free survival or “lead-times” can be misleading. Comment on: Callesen LB, Takacova T, Hamfjord J, et al. Circulating DNA in patients undergoing loco-regional treatment of colorectal cancer metastases: a systematic review and meta-analysis’
Objective Electroencephalography (EEG) is used in psychiatric services; however, clinical guidelines do not clearly state when EEG is indicated, and its diagnostic value in psychiatric settings is unclear. We aimed to characterize the clinical use and diagnostic consequences of EEG in a general psychiatric setting to evaluate and optimize its use. Methods We performed a quality development project at the psychiatric services of the Central Denmark Region. We identified patients referred for EEG examination from psychiatric services between 1 September 2017 and 1 September 2022. We extracted data from electronic health records on patient characteristics, indications, EEG results, and treatment consequences, and analyzed risk factors for abnormal EEGs. Results Among 57,031 persons seen in the psychiatric services in the study period, 219 (0.4%) were referred for EEG examination. Psychosis (n = 70, 32%) was the most common symptom and suspicion of epilepsy (n = 129, 59%) the most common clinical suspicion leading to referral. Of the 219 patients, 53 (24%) had an abnormal EEG result, including 17 (7.8%) with epileptiform changes. Abnormal EEGs led to treatment alterations in six patients (3%). Age, prior epilepsy, use of antiseizure medication, use of clozapine, and convulsions were associated with epileptiform changes in the EEG. Conclusion EEG is rarely used in psychiatric settings and seldom has treatment consequences. However, in specific clinical settings, the EEG result leads to an alteration of clinical management; the findings therefore call for refinement of clinical guidelines to optimize the use of EEG.