ABSTRACT Background Intensive treatments have shown encouraging results for several psychological disorders, including post-traumatic stress disorder (PTSD). However, qualitative studies of patient experiences with intensive treatment for PTSD remain scarce. Objective The aim of this study was to explore patient experiences with an intensive outpatient treatment for PTSD and to identify factors important for treatment feasibility. Method Eight participants were recruited from two groups of patients who had completed the intensive treatment programme. Semi-structured qualitative interviews were conducted, and the data were analysed using thematic analysis. Results The main finding was that patients experienced the treatment as very demanding, but still worth the effort in terms of symptom reduction. The intensity itself was valued as useful. Participants emphasized the sense of unity with other participants, as well as physical activity, as important factors for completing the treatment programme. The rotation of therapists was also highlighted as important for treatment efficacy. Conclusions This study provides insight into what patients experienced and emphasized as important aspects of treatment and as essential factors for completing it. The main conclusions were that all patients evaluated the treatment as demanding, but that the reward of reduced symptoms made it worthwhile. The high frequency of therapy sessions and the therapist rotation were reported to counteract avoidance and increase the patients’ commitment to therapy. Physical activity and unity within the group were highlighted as essential for treatment feasibility.
Objectives To explore the diagnostic accuracy of preoperative magnetic resonance imaging (MRI)-derived tumor measurements for the prediction of histopathological deep (≥ 50%) myometrial invasion (pDMI) and for prognostication in endometrial cancer (EC). Methods Preoperative pelvic MRI scans of 357 included patients with histologically confirmed EC were read independently by three radiologists blinded to clinical information. The radiologists recorded imaging findings (T1 post-contrast sequence) suggesting deep (≥ 50%) myometrial invasion (iDMI) and measured anteroposterior tumor diameter (APD), depth of myometrial tumor invasion (DOI) and tumor-free distance to serosa (iTFD). Receiver operating characteristic (ROC) curves for the prediction of pDMI were plotted for the different MRI measurements. The predictive and prognostic value of the MRI measurements was analyzed using logistic regression and Cox proportional hazards models. Results iTFD yielded the highest area under the ROC curve (AUC) for the prediction of pDMI, with an AUC of 0.82, whereas DOI, APD and iDMI yielded AUCs of 0.74, 0.81 and 0.74, respectively. In multivariate analysis for predicting pDMI, iTFD < 6 mm had the highest predictive value, with an OR of 5.8 (p < 0.001), and lower figures were found for DOI ≥ 5 mm (OR = 2.8, p = 0.01), APD ≥ 17 mm (OR = 2.8, p < 0.001) and iDMI (OR = 1.1, p = 0.82). Patients with iTFD < 6 mm also had significantly reduced progression-free survival, with a hazard ratio of 2.4 (p < 0.001). Conclusion For predicting pDMI, iTFD yielded the best diagnostic performance, and iTFD < 6 mm outperformed the other cutoff-based imaging markers and conventional subjective assessment of deep myometrial invasion (iDMI). Thus, iTFD at MRI represents a promising preoperative imaging biomarker that may aid in predicting pDMI and high-risk disease in EC.
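As a minimal sketch of the cutoff-based ROC analysis described above, the example below uses entirely synthetic numbers (not the study's data) to show how an AUC is obtained from a continuous marker via the rank (Mann-Whitney) formulation, and how dichotomizing at a cutoff such as iTFD < 6 mm yields an odds ratio from a 2x2 table. All distribution parameters are illustrative assumptions.

```python
import random

random.seed(0)

# Synthetic illustration (not study data): tumor-free distance (iTFD, mm)
# tends to be smaller when deep myometrial invasion (pDMI) is present.
itfd = [random.gauss(4, 2) for _ in range(100)] + [random.gauss(9, 3) for _ in range(100)]
pdmi = [1] * 100 + [0] * 100  # 1 = deep invasion present

def auc_for_low_marker(marker, outcome):
    """AUC when LOW marker values predict the positive outcome
    (Mann-Whitney U / rank formulation of the ROC area)."""
    pos = [m for m, y in zip(marker, outcome) if y == 1]
    neg = [m for m, y in zip(marker, outcome) if y == 0]
    wins = sum((p < n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = auc_for_low_marker(itfd, pdmi)

# Dichotomize at the reported cutoff (iTFD < 6 mm) and compute the odds ratio.
a = sum(1 for m, y in zip(itfd, pdmi) if m < 6 and y == 1)   # exposed, pDMI
b = sum(1 for m, y in zip(itfd, pdmi) if m < 6 and y == 0)   # exposed, no pDMI
c = sum(1 for m, y in zip(itfd, pdmi) if m >= 6 and y == 1)  # unexposed, pDMI
d = sum(1 for m, y in zip(itfd, pdmi) if m >= 6 and y == 0)  # unexposed, no pDMI
odds_ratio = (a * d) / (b * c)

print(round(auc, 2), round(odds_ratio, 1))
```

The study's multivariable ORs come from logistic regression with additional covariates; the crude 2x2 odds ratio here only illustrates the direction of the effect.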
Disability and distress caused by chronic low back pain (LBP) lacking a clear pathoanatomical explanation cause huge problems for both patients and society. A subgroup of patients has Modic changes (MC), identifiable on MRI as vertebral bone marrow lesions. The cause of such changes and their relationship to pain are not yet understood. We explored the pathobiology of these lesions using profiling of gene expression in blood, coupled with an edema-sensitive MRI technique known as short tau inversion recovery (STIR) imaging. STIR images and total RNA from blood were collected from 96 patients with chronic LBP and MC type I, the most inflammatory MC state. We found 37 genes whose expression was significantly associated with STIR signal volume, ten genes associated with edema abundance (a constructed combination of STIR signal volume, height, and intensity), and one gene whose expression was significantly associated with maximum STIR signal intensity. Gene sets related to interferon signaling, mitochondrial metabolism and defense response to virus were significantly enriched among the upregulated genes in all three analyses. Our results point to inflammation and immunological defense as important players in MC biology in patients with chronic LBP.
Background Few prospective population-based studies have evaluated the bidirectional relationship between headache and affective disorders. The aim of this large-scale population-based follow-up study was to investigate whether persons with tension-type headache (TTH) and migraine had an increased risk of developing anxiety and depression after 11 years, and vice versa. Methods Data from the Trøndelag Health Study (HUNT) conducted in 2006-2008 (baseline) and 2017-2019 (follow-up) were used to evaluate the bidirectional relationship between migraine and TTH on the one hand and anxiety and depression, measured by the Hospital Anxiety and Depression Scale (HADS), on the other. The population at risk at baseline consisted of 18,380 persons with a HADS score ≤ 7 and 13,893 persons without headache, respectively, and the prospective data were analyzed by Poisson regression. Results In the multi-adjusted model, individuals with HADS anxiety (HADS-A) and depression (HADS-D) scores of ≥ 8 at baseline had nearly double the risk of migraine (risk ratios (RR) between 1.8 and 2.2) at follow-up, whereas a 40% increased risk (RR 1.4) was found for TTH. Vice versa, the risk of having HADS-A and HADS-D scores of ≥ 8 at follow-up was increased for TTH (RR 1.3) and migraine (RR 1.3-1.6) at baseline. Migraine with aura was associated with an 81% (RR 1.81, 95% CI 1.52-2.14) increased risk of a HADS-A score of ≥ 8. Conclusions In this large-scale population-based follow-up study we found a bidirectional relationship between anxiety and depression and migraine and TTH. For anxiety, this bidirectional association was slightly more evident for migraine than for TTH.
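To clarify the risk-ratio (RR) estimates reported above: in cohort data, the RR is simply the risk in the exposed group divided by the risk in the unexposed group, and Poisson regression with a log link (as used in the study) recovers the same quantity as exp(beta) while allowing covariate adjustment. The counts below are hypothetical, not HUNT data.

```python
import math

# Hypothetical follow-up counts (illustrative only, not HUNT data):
exposed_cases, exposed_total = 180, 1000      # HADS >= 8 at baseline
unexposed_cases, unexposed_total = 100, 1000  # HADS <= 7 at baseline

# Crude risk ratio: risk(exposed) / risk(unexposed)
rr = (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)
log_rr = math.log(rr)  # the coefficient a log-link Poisson model would fit

# Approximate 95% CI on the log scale (standard error of log RR)
se = math.sqrt(1 / exposed_cases - 1 / exposed_total
               + 1 / unexposed_cases - 1 / unexposed_total)
ci = (math.exp(log_rr - 1.96 * se), math.exp(log_rr + 1.96 * se))

print(round(rr, 2), tuple(round(x, 2) for x in ci))
```

With these made-up counts, the crude RR is 1.8, comparable in magnitude to the adjusted estimates quoted in the abstract, but the study's RRs are multi-adjusted and not reproducible from a single 2x2 table.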
In this review we integrate the scientific literature and results-proven practice and outline a novel framework for understanding the training and development of elite long-distance running performance. Herein, we describe how fundamental training characteristics and well-known training principles are applied. World-leading track runners (i.e., 5000 and 10,000 m) and marathon specialists participate in 9 ± 3 and 6 ± 2 (mean ± SD) annual competitions, respectively. The weekly running distance in the mid-preparation period is in the range of 160–220 km for marathoners and 130–190 km for track runners. These differences are mainly explained by more running kilometers per session for marathon runners. Both groups perform 11–14 sessions per week, and ≥ 80% of the total running volume is performed at low intensity throughout the training year. The training intensity distribution varies across mesocycles and differs between marathon and track runners, but common to both groups is that the volume of race-pace running increases as the main competition approaches. The tapering process starts 7–10 days prior to the main competition. While African runners live and train at high altitude (2000–2500 m above sea level) most of the year, most lowland athletes undertake relatively long altitude camps during the preparation period. Overall, this review offers unique insights into the training characteristics of world-class distance runners by integrating scientific literature and results-proven practice, providing a point of departure for future studies of training and development in the Olympic long-distance events.
Background According to the Global Burden of Disease (GBD) study, headache disorders are among the most prevalent and disabling conditions worldwide. GBD builds on epidemiological studies (published and unpublished) which are notable for wide variations in both their methodologies and their prevalence estimates. Our first aim was to update the documentation of headache epidemiological studies, summarizing global prevalence estimates for all headache, migraine, tension-type headache (TTH) and headache on ≥ 15 days/month (H15+), comparing these with GBD estimates and exploring time trends and geographical variations. Our second aim was to analyse how methodological factors influenced prevalence estimates. Methods In a narrative review, all prevalence studies published until 2020, excluding those of clinic populations, were identified through a literature search. Prevalence data were extracted, along with data on methodology, world region and publication year. Bivariate analyses (correlations or comparisons of means) and multiple linear regression (MLR) analyses were performed. Results From 357 publications, the vast majority from high-income countries, the estimated global prevalence of active headache disorder was 52.0% (95% CI 48.9–55.4), of migraine 14.0% (12.9–15.2), of TTH 26.0% (22.7–29.5) and of H15+ 4.6% (3.9–5.5). These estimates were comparable with those for migraine and TTH in GBD2019, the most recent iteration, but higher for headache overall. Each day, 15.8% of the world’s population had headache. The MLR analyses explained less than 30% of the variation. Methodological factors contributing to variation were publication year, sample size, inclusion of probable diagnoses, sub-population sampling (e.g., of health-care personnel), sampling method (random or not), screening question (neutral, or qualified in severity or presumed cause) and scope of enquiry (headache disorders only or multiple other conditions).
With these taken into account, migraine prevalence estimates increased over the years, while estimates for all headache types varied between world regions. Conclusion The review confirms GBD in finding that headache disorders remain highly prevalent worldwide, and it identifies methodological factors explaining some of the large variation between study findings. These variations render uncertain both the increase in migraine prevalence estimates over time, and the geographical differences. More and better studies are needed in low- and middle-income countries.
The Global Campaign against Headache, as a collaborative activity with the World Health Organization (WHO), was formally launched in Copenhagen in March 2004. In the month it turns 18, we review its activities and achievements, from initial determination of its strategic objectives, through partnerships and project management, knowledge acquisition and awareness generation, to evidence-based proposals for change justified by cost-effectiveness analysis.
Background Burden of disease analyses quantify population health and provide comprehensive overviews of the health status of countries or specific population groups. The comparative risk assessment (CRA) methodology is commonly used to estimate the share of the burden attributable to risk factors. The aim of this paper is to identify and address selected important challenges associated with CRA, illustrated by examples, and to discuss ways to handle them. Finally, similarities and differences between CRA and health impact assessment (HIA) are discussed, as these concepts are sometimes referred to synonymously but have distinctly different applications. Results CRAs are very data-demanding. One key element is the exposure-response relationship, described, for example, by a mathematical function. Combining estimates to arrive at coherent functions is challenging due to the large variability in risk exposure definitions and data quality. The uncertainty attached to these data is also difficult to account for. Another key issue along the CRA steps is defining a theoretical minimum risk exposure level for each risk factor. In some cases this level is evident and self-explanatory (e.g., zero smoking), but it is often more difficult to define and justify (e.g., ideal consumption of whole grains). CRAs combine all relevant information to estimate population attributable fractions (PAFs), which quantify the proportion of disease burden attributable to exposure. Among the many available formulae for PAFs, it is important to use the one that ensures consistency between the definitions, the units of the exposure data, and the exposure-response functions. When the combined effects of different risk factors are of interest, the non-additive nature of PAFs and possible mediation effects need to be reflected.
Further, as attributable burden is typically calculated from current exposure and current health outcomes, the time dimensions of risk and outcome may become inconsistent. Finally, the evidence for the association between exposure and outcome can be heterogeneous, which needs to be considered when interpreting CRA results. Conclusions These methodological challenges make transparent reporting of input data and process data a necessary prerequisite in CRA. The evidence for causality between included risk-outcome pairs has to be well established to inform public health practice.
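The PAF calculation and its non-additivity, discussed above, can be sketched with Levin's classic formula, PAF = p(RR - 1) / (1 + p(RR - 1)), where p is the exposure prevalence and RR the relative risk. The prevalences and RRs below are purely illustrative assumptions, not estimates from any study.

```python
# Hedged sketch of population attributable fractions (PAFs) via Levin's
# formula. All numbers are hypothetical, chosen only to illustrate the math.

def paf(prevalence, rr):
    """Levin's formula: PAF = p(RR - 1) / (1 + p(RR - 1))."""
    excess = prevalence * (rr - 1)
    return excess / (1 + excess)

paf_smoking = paf(0.25, 3.0)  # hypothetical: 25% exposed, RR = 3
paf_diet = paf(0.40, 1.5)     # hypothetical: 40% exposed, RR = 1.5

# PAFs are not additive: under the common assumption of independent,
# multiplicative risks, the joint fraction combines as 1 - prod(1 - PAF_i),
# which is smaller than the naive sum of the individual PAFs.
joint = 1 - (1 - paf_smoking) * (1 - paf_diet)

print(round(paf_smoking, 3), round(paf_diet, 3), round(joint, 3))
```

Here the individual PAFs are 1/3 and 1/6, which sum to 0.5, while the multiplicative combination gives 4/9 (about 0.444), illustrating why summing PAFs over risk factors can exceed the true joint attributable fraction.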
Due to tighter environmental regulations, newly built liquefied natural gas (LNG) carriers are equipped with a re-liquefaction system to minimize combustion of surplus boil-off gas (BOG). This paper therefore comparatively analyzes re-liquefaction systems for a low-pressure gas injection engine according to the refrigerant used (no external refrigerant versus a single mixed refrigerant) against three key performance indicators: energy, economic, and environmental. For the energy efficiency analysis, we propose several process alternatives and optimize them to minimize the specific power consumption required to liquefy the BOG. In the economic analysis, the objective is to minimize total annualized cost. For the environmental analysis, CO2 emissions at each optimal point are calculated and comparatively analyzed. The results show that the process without an external refrigerant performs about 10% better economically, while the single mixed refrigerant process is preferable in terms of energy efficiency (6%) and environmental impact (15%).
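The two optimization objectives named above can be made concrete with a small sketch: specific power consumption (compressor power per unit of BOG liquefied) and total annualized cost (capital cost spread over the plant lifetime via a capital recovery factor, plus yearly operating cost). Every number below is a made-up placeholder, not a value from the paper.

```python
# Hedged illustration of the two objective functions, with hypothetical inputs.

compressor_power_kw = 1200.0     # assumed total compressor shaft power
bog_liquefied_kg_per_h = 4000.0  # assumed BOG liquefaction rate

# Specific power consumption: energy spent per kg of BOG liquefied (kWh/kg)
spc = compressor_power_kw / bog_liquefied_kg_per_h

# Total annualized cost: CAPEX annualized by the capital recovery factor
# i(1+i)^n / ((1+i)^n - 1), plus annual OPEX. Rate and lifetime are assumed.
capex = 8.0e6                    # assumed capital cost, USD
interest, years = 0.08, 20
crf = interest * (1 + interest) ** years / ((1 + interest) ** years - 1)
opex = 1.5e6                     # assumed annual operating cost, USD/yr
tac = capex * crf + opex         # total annualized cost, USD/yr

print(round(spc, 3), round(tac))
```

Comparing process alternatives then amounts to evaluating these objectives at each optimized design point, alongside the CO2 emissions.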
On-face electronics (On-faceE) are expected to sense “anytime, anywhere”, extending human senses across the Internet of Things. However, given the limited lifetime of batteries and the ongoing miniaturization of devices, On-faceE face two challenges: sustaining operation at temperatures (T) below 0 °C, and the safety hazard of frost/ice accretion. Notably, the face and the On-faceE themselves continuously generate heat, a supply more reliable than solar- or motion-based sources; however, this self-generated heat is usually lost to the environment as waste. Here, a film that pioneers the intelligent conservation and utilization of this waste self-generated heat is developed, enabling On-faceE to work even at -30 °C for 8 h. It is made from a versatile cellulose hydrogel whose super-hygroscopic network captures moisture at T < 0 °C. After sealing the On-faceE, the film creates a ΔT of up to 60 °C between the inside and an ambient T of -30 °C and, importantly, intelligently stops conserving heat when the working T exceeds 30 °C, demonstrating outstanding heat-conserving stability and sustainability. It thus solves both challenges without expending energy on heating, positioning it as an energy-saving “smart” T-regulating candidate. This work could address energy shortages and heating costs, helping to overcome low-temperature limitations and globalize On-faceE.