Article

Real-World Evidence — What Is It and What Can It Tell Us?


Abstract

The FDA is developing guidance on the use of “real-world evidence” — health care information from atypical sources, including electronic health records, billing databases, and product and disease registries — to assess the safety and effectiveness of drugs and devices.


... RWE studies rely on Real World Data (RWD), which is data gathered in a real-world context outside of a clinical trial setting 10 and often involving a population that has received an intervention or therapy in the clinical environment. 11 In this case, the RWD relies on patient information gathered in the context of healthcare provision from patients, clinicians, hospitals, and payers, 12,13 and not necessarily reuse of a previously collected research dataset. ...
... While this RWE study has provided some valuable information surrounding mental illness in psoriasis patients, it is still subject to the inherent limitations of RWD analysis. 13 Data from clinical charts will augment further exploration and study of our RWD derived from the NLCHI databases. The clinical charts will provide disease-specific information such as the Psoriasis Area and Severity Index (PASI), the Dermatology Life Quality Index (DLQI) and body surface area (BSA), which will allow us to assess the impact of disease severity on mental illness in future studies. ...
... The limitations of RWE studies, such as lack of internal validity and the risk of using biased data, have led to calls for caution and transparency when working with RWD. 13 In conclusion, our initial interrogation of RWD has demonstrated an increased prevalence of mental illnesses in psoriatic patients. As noted in this paper, further interrogation will provide good value for the health care system as it can better elucidate the association between psoriasis and mental disorders, particularly the temporal sequence, assess the impact of disease burden and treatment response, as well as investigate other aspects of psoriatic disease such as metabolic disease, drug effectiveness, persistence, adverse events, and economic evaluation. ...
Article
Full-text available
Background Psoriasis is a chronic, immune-mediated inflammatory disease with an implied connection to psychiatric disorders. Objective This study aims to illustrate an association between psoriasis and psychiatric disorders using real world data gathered from the Newfoundland and Labrador population. Methods Data on 15,100 patients with psoriasis and 75,500 controls (1:5) was collected from the Newfoundland and Labrador Centre for Health Information’s Electronic Health Records. The cases and controls were matched for age, sex, and geography. Indicators for psychiatric disorders include diagnosis of mental illnesses from physician’s visits and hospitalization records (all coded for mental health using ICD-9 and ICD-10 codes). Results 9,991 (66.2%) cases were identified to have at least one visit with a diagnostic code for mental illness compared to 42,276 (56.0%) in the control group, P < .0001. The percentage of people coded for anxiety was 36.50% compared to 28.95%, P < .0001; depression was 37.04% compared to 30.19%, P < .0001; and adjustment disorder was 6.89% versus 5.48%, P < .0001, among those with and without psoriasis, respectively. The greatest risk for anxiety [OR 1.4 (1.20, 1.67)] and depression [OR 1.65 (1.36, 2.00)] among psoriasis patients was in the 0 to 20 age group. Women with psoriasis were more likely to have anxiety [OR 1.08 (1.03, 1.13)], depression [OR 1.04 (1.01, 1.09)] and adjustment disorder [OR 1.07 (0.98, 1.17)] compared to female controls. Conclusion Our results show that patients with psoriasis have an increased prevalence of mental illness. Using real world data to carry out further investigations will better elucidate this association and provide an increased understanding of the association between psoriasis and mental disorders.
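For readers who want to see how case-control effect sizes of this kind are obtained, the sketch below recomputes a crude odds ratio with a Wald 95% confidence interval from 2x2 counts. The counts are reconstructed from the prevalences reported in the abstract rather than taken from the study data, and the study's own estimates were matched and stratified, so they will not coincide exactly.

```python
# Illustrative only: crude odds ratio and Wald 95% CI from 2x2 counts.
# Counts are back-calculated from the reported anxiety prevalences, not study data.
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = cases with/without the outcome; c, d = controls with/without it."""
    or_ = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)                # standard error of log(OR)
    return or_, (exp(log(or_) - z * se), exp(log(or_) + z * se))

a = round(0.3650 * 15100); b = 15100 - a            # anxiety in psoriasis cases (36.50%)
c = round(0.2895 * 75500); d = 75500 - c            # anxiety in controls (28.95%)
print(odds_ratio_ci(a, b, c, d))                    # crude OR of roughly 1.4
```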
... To satisfy the stringent evidentiary requirements imposed by regulators such as the European Medicines Agency (EMA) (Scholz, 2015), the US Food and Drug Administration (FDA) (Dabrowska and Thaul, 2018) and the UK Medicines and Healthcare products Regulatory Agency (MHRA) (Criado and Bancsi, 2021), a company needs to present them with, among other things, data derived from clinical trials, which are interventional studies performed in human beings. However, such registrational trials usually have important limitations, including their limited sample size, their relatively short duration and the low external validity of the outcomes they produce (Sherman et al., 2016;Eichler et al., 2018). While they may provide a good indication of how safe the investigational treatment is and how well it works when applied under ideal circumstances (i.e. ...
... While they may provide a good indication of how safe the investigational treatment is and how well it works when applied under ideal circumstances (i.e. exactly as intended by its developers), they are not always appropriate predictors of its effects when used by real-life patients (Sherman et al., 2016; Eichler et al., 2018). ...
... Such data gathered after a therapeutic intervention has been launched onto the market are typically referred to as real-world data (RWD) (GetReal consortium, 2020a). The evidence that arises from the analysis of RWD is called real-world evidence (RWE) (Sherman et al., 2016;GetReal consortium, 2020a). For example, in the European Union, manufacturers of pharmaceutical products are legally obligated to document and report any serious adverse events that are observed in patients taking their drugs in clinical practice as part of the pharmacovigilance legislation (European Medicines Agency, 2022a). ...
Article
Full-text available
Background: The role of real-world evidence (RWE) in the development of anticancer therapies has been gradually growing over time. Regulators, payers and health technology assessment agencies, spurred by the rise of the precision medicine model, are increasingly incorporating RWE into their decision-making regarding the authorization and reimbursement of novel antineoplastic treatments. However, it remains unclear how this trend is viewed by clinicians in the field. This study aimed to investigate the opinions of these stakeholders with respect to RWE and its suitability for informing regulatory, reimbursement-related and clinical decisions in oncology. Methods: An online survey was disseminated to clinicians belonging to the network of the European Organisation for Research and Treatment of Cancer between May and July 2021. Results: In total, 557 clinicians across 30 different countries participated in the survey, representing 13 distinct cancer domains. Despite seeing the methodological challenges associated with its interpretation as difficult to overcome, the respondents mostly (75.0%) perceived RWE positively, and believed such evidence could be relatively strong, depending on the designs and data sources of the studies from which it is produced. Few (4.6%) saw a future expansion of its influence on decision-makers as a negative evolution. Furthermore, nearly all (94.0%) participants were open to the idea of sharing anonymized or pseudonymized electronic health data of their patients with external parties for research purposes. Nevertheless, most clinicians (77.0%) still considered randomized controlled trials (RCTs) to be the gold standard for generating clinical evidence in oncology, and a plurality (49.2%) thought that RWE cannot fully address the knowledge gaps that remain after a new antitumor intervention has entered the market. Moreover, a majority of respondents (50.7%) expressed that they relied more heavily on RCT-derived evidence than on RWE for their own decision-making. Conclusion: While cancer clinicians have positive opinions about RWE and want to contribute to its generation, they also continue to hold RCTs in high regard as sources of actionable evidence.
... Most notable examples, amongst several others, include Didier Raoult [1] in the IHU Méditerranée Infection hospital in Marseilles France, Vladimir Zelenko [2] in upstate New York, George Fareed and Brian Tyson [3] in California, Shankara Chetty [4] in South Africa, Jackie Stone [5] in Zimbabwe, and Paul Marik's group [6,7], which was in the beginning based at the Eastern Virginia Medical School. Their efforts to treat patients generated case series of successfully treated patients that constitute real-world evidence [8]. ...
... Shortly before COVID-19 was declared a pandemic by the World Health Organization, an article [53] was published on 23 February 2020 in the New England Journal of Medicine arguing that "the replacement of randomized trials with non-randomized observational status is a false solution to the serious problem of ensuring that patients receive treatments that are both safe and effective." The opposing viewpoint was published earlier in 2017 by Frieden [54], highlighting the limitations of RCTs and the need to leverage and overcome the limitations of all available sources of evidence, including real-world evidence [8], in order to make lifesaving public health decisions. In particular, Frieden [54] stressed that the very high cost of RCTs and the long timelines needed for planning, recruiting patients, conducting the study, and publishing it, are limitations that "affect the use of randomized controlled trials for urgent health issues, such as infectious disease outbreaks for which public health decisions must be made quickly on the basis of limited and imperfect data." ...
... Just as the quality of evidence provided by randomized controlled trials is fluid, with respect to successful randomization and external validity, the same is true about the quality of real-world evidence [8] that will inevitably become available from the initial response to an emerging new pandemic. We envision that a successful pandemic response, in the area of early outpatient treatment, will proceed as follows: the first element of pandemic response is to assess and monitor the situation by prospectively collecting data, needed to construct predictive models of the probability of hospitalization and death, in the absence of treatments that have yet to be discovered, as a function of the patient's medical profile/history. ...
Article
Full-text available
When confronted with a public health emergency, significant innovative treatment protocols can sometimes be discovered by medical doctors at the front lines based on repurposed medications. We propose a statistical framework for analyzing the case series of patients treated with such new protocols, that enables a comparison with our prior knowledge of expected outcomes, in the absence of treatment. The goal of the proposed methodology is not to provide a precise measurement of treatment efficacy, but to establish the existence of treatment efficacy, in order to facilitate the binary decision of whether the treatment protocol should be adopted on an emergency basis. The methodology consists of a frequentist component that compares a treatment group against the probability of an adverse outcome in the absence of treatment, and calculates an efficacy threshold that has to be exceeded by this probability, in order to control the corresponding p-value and reject the null hypothesis. The efficacy threshold is further adjusted with a Bayesian technique, in order to also control the false positive rate. A random selection bias threshold is then calculated from the efficacy threshold to control for random selection bias. Exceeding the efficacy threshold establishes the existence of treatment efficacy by the preponderance of evidence, and exceeding the more demanding random selection bias threshold establishes the existence of treatment efficacy by the clear and convincing evidentiary standard. The combined techniques are applied to case series of high-risk COVID-19 outpatients that were treated using the early Zelenko protocol and the more enhanced McCullough protocol.
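As a rough illustration of the frequentist component described above (and only that component; the Bayesian adjustment and the random selection bias threshold are not shown), the sketch below tests a treated case series against an assumed no-treatment risk p0 with an exact binomial test and scans for the smallest p0 at which the one-sided null is rejected. The counts are hypothetical and the scan is a simplification of the paper's efficacy threshold.

```python
# Minimal sketch, not the authors' code: exact binomial comparison of a treated case
# series against an assumed no-treatment adverse-outcome probability p0, plus a scan
# for the smallest p0 at which the one-sided null is rejected (an "efficacy threshold").
from scipy.stats import binomtest

def efficacy_threshold(k, n, alpha=0.05, step=0.001):
    """k adverse outcomes among n treated patients; smallest p0 with p-value < alpha."""
    p0 = step
    while p0 < 1.0:
        if binomtest(k, n, p0, alternative="less").pvalue < alpha:
            return round(p0, 4)
        p0 += step
    return None

# Hypothetical case series: 2 hospitalizations among 200 treated high-risk outpatients.
print(efficacy_threshold(k=2, n=200))   # the untreated risk must plausibly exceed this value
```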
... Determination of a suitable study candidate based on history and disease conditions, as well as additional characteristics considering infection rate, demographics, and ethnicity to represent the most affected. Real World Data (RWD) and Real World Evidence (RWE) are playing an increasing role in healthcare decision-making [9]. The use of computers, mobile devices, wearables, and other biosensors to collect and store vast amounts of health-related data is rapidly accelerating. ...
... These data have the potential to enable pharmaceutical companies to better plan and conduct clinical trials and research in healthcare settings to answer questions that were not previously possible. In addition, with the development of sophisticated, new analytical capabilities, pharmaceutical companies can better analyze these data and apply the results of the analysis to the development and approval of medicines [9][10][11][12]. ...
Article
Full-text available
Recent developments in Digital Medicine approaches concern pharmaceutical product optimization. Artificial Intelligence (AI) has multiple applications for pharmaceutical products’ lifecycle, increasing development speed, quality of the products, and efficiency of the therapy. Here, we systematically review the overall approach for AI implementation in pharmaceutical products’ lifecycle. The published studies in PubMed and IEEE Xplore were searched from inception to March 2022. The papers were screened for relevant outcomes, publication types, and data sufficiency, and a total of 73 (1.2%) out of 6131 studies were retrieved after the selection. We extracted the data according to the Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) statement. All Artificial Intelligence systems could be divided into multiple overlapping categories by implementation. For the 177 projects found, the most popular areas of AI implementation are clinical trials and pre-clinical tests (34%). In second place are novel small molecule design systems, with 33% of the total. The third most popular scope for AI implementation is target identification for novel medicines. More than 25% of the systems provide this functionality. It is interesting that most of the systems specialize in only one area (102 systems—57%). None of the systems provide functionality for full coverage of the lifecycle and function in all categories of the tasks. This meta-analysis demonstrated that Artificial Intelligence solutions in pharmaceutical products’ lifecycle could find numerous implementations, and none of the available market solutions covers them all.
... This need led to the emergence of the concept of real-world evidence (RWE), a decision-making support methodology in Health Technology Assessment [10]. Over the past decade, countries have developed healthcare records and managerial approaches capable of providing quality data for Health Technology Assessment in a more agile and dynamic manner [11]. ...
... Real-world evidence studies require inputs known as real-world data. These are used to support decision making in the healthcare industry and are collected from traditional Health Technology Assessment studies but under an approach of non-experimental and uncontrolled observational research [10]. Real-world data provide useful information on the comorbidity profile of a target population. ...
Article
Objective Propose a process mining-based method for Health Technology Assessment. Methods Articles dealing with prior studies in Health Technology Assessment using Process Mining were identified. Five research questions were defined to investigate these studies and present important points and desirable characteristics to be addressed in a proposal. The method was defined with five steps and was submitted to a case study for evaluation. Results The literature search identified six main characteristics. As a result, the proposed five-step method was applied to the radical prostatectomy surgical procedure, comparing the robot-assisted technique with laparoscopy. Conclusion This article demonstrated the creation of an efficient method that can be replicated for other health technologies, coupled with good interpretation by the specialists in terms of the comprehensibility of the discovered patterns and their correlation with clinical protocols and guidelines.
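The abstract does not spell out the mining step itself, but the core object behind most process-discovery algorithms is the directly-follows relation over an event log. The toy sketch below, using an entirely hypothetical care-pathway log, shows how such a relation can be counted; it illustrates the general technique, not the authors' five-step method.

```python
# Toy illustration of the directly-follows relation used by process-discovery algorithms,
# over a hypothetical care-pathway event log; this is not the authors' five-step method.
from collections import Counter
import pandas as pd

event_log = pd.DataFrame({
    "case_id":  ["p1", "p1", "p1", "p2", "p2", "p2"],
    "activity": ["admission", "surgery", "discharge",
                 "admission", "imaging", "surgery"],
    "timestamp": pd.to_datetime(["2020-01-01", "2020-01-02", "2020-01-05",
                                 "2020-02-01", "2020-02-02", "2020-02-03"]),
})

dfg = Counter()
for _, trace in event_log.sort_values("timestamp").groupby("case_id"):
    acts = trace["activity"].tolist()
    dfg.update(zip(acts, acts[1:]))     # count pairs of activities that directly follow

print(dfg)   # e.g. ('admission', 'surgery') observed once, ('surgery', 'discharge') once, ...
```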
... The draft guidance released by the United States Food and Drug Administration (FDA) in 2016 has spurred a surge in the literature describing how real-world evidence (RWE) can be used to support regulatory approval for medical devices [38]. RWE refers to any evidence on healthcare generated from multiple sources outside clinical trial settings, which is usually in the form of electronic medical records (EMR), electronic health records (EHR), hospital databases, patient registries, claims data, etc. [39]. In addition to market authorization, RWE was also relevant in post-marketing surveillance, coverage decisions, outcome-based contracting, resource use, and treatment compliance [40,41]. ...
Article
Full-text available
Background Health Technology Assessment (HTA) has been widely recognized as informing healthcare decision-making, and interest in HTA of medical devices has been steadily increasing. How does the assessment of medical devices differ from that of drug therapies, and what innovations can be adopted to overcome the inherent challenges in medical device HTA? Method HTA Accelerator Database was used to describe the landscape of HTA reports for medical devices from HTA bodies, and a literature search was conducted to understand the growth trend of relevant HTA publications in four case studies. Another literature review was conducted for a narrative synthesis of the characteristic differences and challenges of HTA in medical devices. We further conducted a focused Internet search of guidelines and a narrative review of methodologies specific to the HTA of medical devices. Main body The evidence of HTA reports and journal publications on medical devices around the world has been growing. The challenges in assessing medical devices include scarcity of well-designed randomized controlled trials, inconsistent real-world evidence data sources and methods, device-user interaction, short product lifecycles, inexplicit target population, and a lack of direct medical outcomes. Practical solutions in terms of methodological advancement of HTA for medical devices were also discussed in some HTA guidelines and literature. Conclusion To better conduct HTA on medical devices, we recommend considering multi-source evidence such as real-world evidence; standardizing HTA processes, methodologies, and criteria; and integrating HTA into decision-making.
... Observational real-world studies complement findings from clinical trials, providing important evidence demonstrating a therapy's efficacy in populations that are more heterogeneous than those in clinical trials (12). An understanding of real-world treatment practices and how they affect the efficacy of new therapies can also help guide clinicians on optimal drug use and indications (13). ...
Article
Full-text available
Background Cyclin-dependent kinase 4/6 inhibitors are a standard treatment for patients with hormone receptor−positive (HR+)/human epidermal growth factor receptor 2−negative (HER2−) metastatic breast cancer (MBC). However, real-world data on effectiveness in patients with liver or lung metastatic disease is limited. This study compared outcomes of palbociclib plus letrozole versus letrozole alone in patients with HR+/HER2− MBC with lung or liver metastasis treated in routine clinical practice in the United States. Methods This retrospective analysis used Flatiron Health’s database of electronic health records. Women with HR+/HER2− MBC and liver or lung metastasis received first-line palbociclib plus letrozole or letrozole alone between February 2015 and February 2019. Real-world progression-free survival (rwPFS) was defined as time from start of treatment to death or disease progression. Stabilized inverse probability treatment weighting (sIPTW) was used to balance baseline demographic and clinical characteristics between palbociclib plus letrozole versus letrozole cohorts. Cox proportional-hazards models were used to estimate the effectiveness of palbociclib plus letrozole versus letrozole alone in rwPFS and overall survival (OS). Results The study included 353 patients with lung metastasis, 123 with liver metastasis, and 75 with both. After sIPTW, palbociclib plus letrozole versus letrozole alone was significantly associated with prolonged rwPFS (hazard ratio (HR), 0.56) and OS (HR, 0.58) (both p<0.001) in all patients. Palbociclib plus letrozole compared with letrozole alone demonstrated a median rwPFS of 16.5 versus 10.5 months, respectively (adjusted HR, 0.52; P <0.001), a median OS of not reached versus 40.3 months (adjusted HR, 0.60; P <0.01) in patients with lung metastasis, and median OS of 30.1 versus 16.8 months (adjusted HR, 0.56; P <0.03) in patients with liver metastasis. In patients with liver metastasis, palbociclib plus letrozole had a median rwPFS of 10.7 months versus 8.0 months in the letrozole alone cohort (adjusted HR, 0.70; P =0.12). Conclusions In this real-world population, palbociclib in combination with letrozole is associated with improved outcomes compared with letrozole alone for patients with HR+/HER2− MBC and liver or lung metastasis in the first-line setting. The findings support first-line palbociclib in combination with an aromatase inhibitor as standard of care for HR+/HER2− MBC regardless of visceral disease. Clinical Trial Registration NCT04176354.
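The abstract names the two analytic building blocks, stabilized IPTW and Cox proportional-hazards models. The sketch below shows one common way to combine them; the file name, column names, and covariates are invented for illustration, and the study team's actual pipeline is not described in the abstract.

```python
# Hedged sketch of stabilized inverse probability of treatment weighting (sIPTW)
# followed by a weighted Cox model. File and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

df = pd.read_csv("mbc_cohort.csv")                   # hypothetical: one row per patient
X = df[["age", "ecog", "n_metastatic_sites"]]        # hypothetical baseline covariates
treated = df["palbo_letrozole"]                      # 1 = palbociclib + letrozole, 0 = letrozole

ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
p_treat = treated.mean()
df["sw"] = treated * p_treat / ps + (1 - treated) * (1 - p_treat) / (1 - ps)   # stabilized weights

cph = CoxPHFitter()
cph.fit(df[["rwpfs_months", "progressed_or_died", "palbo_letrozole", "sw"]],
        duration_col="rwpfs_months", event_col="progressed_or_died",
        weights_col="sw", robust=True)               # robust SEs are advisable with weights
cph.print_summary()                                  # exp(coef) is the weighted hazard ratio
```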
... 5,6 Isolated RCTs and their meta-analyses present the highest quality of evidence and are the basis for guidelines issued by healthcare organisations. 7 However, the evaluation of entire profiles of rare irAEs derived from RCT data is difficult owing to their stringent diagnostic standards and selection criteria, relatively small sample sizes, and limited follow-up duration. 8 The FDA Adverse Event Reporting System (FAERS), one of the largest pharmacovigilance databases with a large number of reported AEs and patient information, could provide data to verify and supplement the findings of RCTs. ...
Article
Full-text available
Background With the increased use of immune checkpoint inhibitors (ICIs) in advanced lung cancer, adverse events (AEs), particularly immune-related AEs (irAEs), have garnered considerable interest. We conducted a comprehensive assessment of the toxicity profile in advanced lung cancer using multi-source medical data. Methods First, we systematically searched the PubMed, Embase, and Cochrane Library databases (from inception to 10 August 2021) for relevant randomised controlled trials (RCTs) involving ICI-based treatments for advanced lung cancer. The primary outcomes were treatment-related AEs and irAEs, including events that were assigned grade 1–5 and 3–5. The secondary outcomes were grade 5 AEs and irAEs (grade 1–5 and grade 3–5) in specific organs. Network comparisons were conducted for 11 treatments, including chemotherapy (CT), ICI monotherapy (three regimens: programmed death-1 receptor [PD-1] inhibitors, programmed death ligand-1 [PD-L1] inhibitors, and cytotoxic T lymphocyte-associated antigen [CTLA-4] inhibitors), dual-ICI combination therapy (two regimens), and treatment using one or two ICI drugs administered in combination with CT (five regimens). We also conducted a disproportionality analysis by extracting reports of various irAEs associated with ICIs from the FDA Adverse Event Reporting System (FAERS) database. The reporting odds ratios and fatality proportions of different irAEs were calculated and compared. PROSPERO: CRD42021268650. Findings Overall, 41 RCTs involving 23,121 patients with advanced lung cancer were included. Treatments containing chemotherapy increased the risk of treatment-related AEs compared to ICI-based regimens without chemotherapy. Concerning irAEs, PD-L1 + CTLA-4 + CT was associated with the highest risk of grade 1–5 irAEs, followed by two regimens of dual ICI combination, three regimens of ICI monotherapy, and three regimens of one ICI combined with CT. For 3–5 irAEs, CTLA-4 accounted for most AEs. Detailed comparisons of ICI-based treatment options provided irAE profiles based on specific organs/systems and AE severity. Insights from the FAERS database revealed that signals corresponding to pneumonitis, colitis, thyroiditis, and hypophysitis were observed across all ICI regimens. Further analyses of the outcomes indicated that myocarditis (163 of 367, 44.4%), pneumonitis (1610 of 4497, 35.8%), and hepatitis (290 of 931, 31.1%) had high fatality rates. Interpretation Included RCTs showed heterogeneity in a few clinical factors, and reports derived from the FAERS database might have involved inaccurate data. Our results can be used as a basis for improving clinical treatment strategies and designing preventive methods for ICI treatment in advanced lung cancer. Funding This study was supported by the Research Project of Drug Clinical Comprehensive Evaluation and Drug Treatment Pathway (SHYXH-ZP-2021-001, SHYXH-ZP-2021-006), Clinical Research Innovation and Cultivation Fund of Ren Ji Hospital (RJPY-LX-008), Ren Ji Boost Project of National Natural Science Foundation of China (RJTJ–JX–001), and Shanghai “Rising Stars of Medical Talent” Youth Development Program – Youth Medical Talents – Clinical Pharmacist Program (SHWJRS (2019) 072).
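The disproportionality analysis mentioned above rests on the reporting odds ratio. A minimal sketch of that calculation is given below, using invented report counts rather than actual FAERS extracts.

```python
# Sketch of the reporting odds ratio (ROR) behind a disproportionality analysis.
# Counts are invented, not FAERS data.
from math import exp, log, sqrt

def ror(a, b, c, d, z=1.96):
    """a = target AE with the drug, b = other AEs with the drug,
    c = target AE with all other drugs, d = other AEs with all other drugs."""
    est = (a / b) / (c / d)
    se = sqrt(1/a + 1/b + 1/c + 1/d)                 # standard error of log(ROR)
    return est, (exp(log(est) - z * se), exp(log(est) + z * se))

# Hypothetical counts: pneumonitis reports for an ICI versus all other drugs.
print(ror(a=400, b=9600, c=5000, d=995000))          # ROR around 8 with its 95% CI
```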
... Real-world data (RWD) are currently collected from a variety of sources, namely electronic medical records (EMRs), claims and billing databases, product and disease registries, patient-generated data, and home medical devices for monitoring patients, such as smartwatches. From RWD and through robust analytics, real-world evidence (RWE) can be produced with clear potential benefits for the health and outcomes of patients [1][2][3]. In other words, RWE captures the difference between what is expected to happen and what is actually happening, especially in comparison to traditional clinical trials, whose well-known limitation of more homogeneous populations makes it difficult to generalize findings to larger scales. ...
Article
Full-text available
Real-world data (RWD) and real-world evidence (RWE) play an increasingly important role in clinical research, since scientific knowledge is obtained during routine, large-scale clinical practice and not experimentally, as occurs in highly controlled traditional clinical trials. In particular, electronic health records (EHRs) are a relevant source of data. Nevertheless, there are also significant challenges in the correct use and interpretation of EHR data, such as bias, heterogeneity of the population, and missing or non-standardized data formats. Despite the recognized difficulties of RWD and RWE, these are easily outweighed by the benefits of ensuring efficacy, safety, and cost-effectiveness in complement to the gold standard of the randomized controlled trial (RCT), namely by providing a complete picture of the factors and variables that can guide robust clinical decisions. Their relevance will become even more evident as healthcare units develop more accurate EHRs, always respecting the privacy of patient data. This editorial is an overview of the major state-of-the-art aspects of RWD and RWE and supports the Special Issue on “Digital Health and Big Data Analytics: Implications of Real-World Evidence for Clinicians and Policymakers”, which aims to explore the full potential and utility of RWD and RWE in offering insights on diseases across a broad spectrum.
... Therefore, a definitive RCT that clearly reports whether metformin use is associated with cancer incidence or not remains to be seen. In contrast to the laboratory settings of biological studies, well-designed observational studies that utilize real-world data may provide real-world evidence of new indications for metformin, or lack thereof, that is generalizable to a more inclusive population of T2DM patients [20,33]. Thus, future observational studies would benefit from increased methodological rigor that addresses their inherent limitation of possible unknown confounding more thoroughly. ...
Article
Full-text available
Purpose: Immortal time bias (ITB) continues to distort many observational studies on metformin use and cancer risk. Our objective was to employ three statistical methods proven to avoid ITB and compare their results to those of a naïve time-fixed analysis in order to provide further evidence of metformin's association, or lack thereof, with colorectal cancer (CRC) incidence. Methods: A total of 41,533 Korean subjects with newly diagnosed type-2 diabetes in 2005-2015 were selected from a prospectively maintained cohort (median follow-up of 6.3 years). Time-to-CRC incidence was regressed upon metformin use (yes/no, average prescription days/year) using time-dependent Cox, landmark, nested case-control, and time-fixed Cox analyses. Other CRC risk factors were included to adjust for possible confounding. Results: Neither metformin ever-use nor average metformin prescription days/year was associated with incident CRC hazard in time-dependent Cox, landmark, and nested case-control analyses with HR (95% CI) of 0.88 (0.68-1.13), 0.86 (0.65-1.12), and 1.10 (0.86-1.40) for metformin ever-use, and 0.97 (0.90-1.04), 0.95 (0.88-1.04), and 1.02 (0.95-1.10) for average metformin prescription days/year, respectively. In contrast, time-fixed Cox regression showed a falsely exaggerated protective effect of metformin on CRC incidence. Conclusion: The association between metformin use and subsequent CRC incidence was statistically nonsignificant after accounting for time-related biases such as ITB. Previous studies that avoided these biases and meta-analyses of RCTs on metformin and cancer incidence were in agreement with our results. A definitive, large-scale RCT is needed to clarify this topic, and future observational studies should be explicit in avoiding ITB and other time-related biases.
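To make the immortal-time point concrete, the sketch below splits each subject's follow-up at the first prescription so that exposure only counts after it actually starts, then fits a time-dependent Cox model. The tiny data frame and column names are invented, and the study's landmark and nested case-control analyses are not shown.

```python
# Illustrative sketch of avoiding immortal time bias by splitting follow-up at the
# first prescription and fitting a time-dependent Cox model. Toy data, invented columns.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

wide = pd.DataFrame({                    # follow-up in years; NaN = never exposed
    "id": [1, 2, 3],
    "t_metformin": [2.0, None, 0.5],
    "t_end": [6.0, 4.0, 3.0],
    "crc": [0, 1, 1],
})

rows = []
for r in wide.itertuples():
    if pd.isna(r.t_metformin):                           # never exposed: one interval
        rows.append((r.id, 0.0, r.t_end, 0, r.crc))
    else:                                                # unexposed interval, then exposed
        rows.append((r.id, 0.0, r.t_metformin, 0, 0))
        rows.append((r.id, r.t_metformin, r.t_end, 1, r.crc))
long = pd.DataFrame(rows, columns=["id", "start", "stop", "metformin", "event"])

ctv = CoxTimeVaryingFitter()
ctv.fit(long, id_col="id", start_col="start", stop_col="stop", event_col="event")
ctv.print_summary()     # exp(coef) for 'metformin' is the time-dependent hazard ratio
```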
... RWE is the conclusion of the proper statistical analysis of RWD that includes drafting an appropriate research design and statistical analysis methods, collecting appropriate RWD according to the research design, and performing data analysis so that analysis results that meet the research objectives can be generated. RWE can provide information about clinical settings and the health-system characteristics that influence treatment effects and outcomes [3]. This real-world study design included an analysis of National Health Insurance Research Database (NHIRD) records and clinical studies of Ge Gen Tang (GGT) in the common cold. ...
Article
Full-text available
Purpose: Real-world evidence refers to patient data derived from the healthcare process. In this study, we used National Health Insurance Research Database (NHIRD) assessments and clinical studies of Ge Gen Tang (GGT) in patients with common cold to establish a real-world study model of Traditional Chinese Medicine formulae. GGT is widely prescribed for the treatment of common cold in Taiwan, generally in combination with other medicines. The aim of this study was to determine whether a correlation exists between GGT combined with other medicines and an improvement in cold symptoms. We also established a GGT prescription compatibility system by analyzing Taiwan's NHIRD records for GGT prescription patterns in patients with different types of common cold. Materials and methods: We extracted and analyzed records from the NHIRD for the period 2000-2015 to determine the most common clinical applications of GGT. GGT and GGT with Chuan Xiung Cha Tiao San were most commonly prescribed for common cold, as per NHIRD recommendations. Records for adults aged 20-65 years who were prescribed GGT for the treatment of common cold (Diagnosis Code ICD-9-460) were included in this study. We assessed the following indicators of the common cold, before and after treatment with GGT: nasal congestion, cough, runny nose, sneezing, sore throat, hoarseness, stiff shoulder, headache, and general physical condition. Results: The cold symptom scores before and after taking the GGT prescriptions significantly differed in the 29 volunteers. The 29 volunteers reported a significantly lower headache severity score after medication than before medication (p < 0.004). Furthermore, patient scores for general physical condition decreased significantly (p < 0.01) after medication.
... A retrospective, observational pre/post study was conducted, analyzing real-world data. 24,25 ...
Article
Full-text available
Background: Anticoagulants are high-risk medications and are a common cause of adverse events in hospitalised inpatients. The incidence of adverse events involving anticoagulants has remained relatively unchanged over the past two decades, suggesting novel approaches are required to address this persistent issue. Electronic medication management systems (eMMS) offer strategies to help reduce medication incidents and adverse drug events, yet poor system design can introduce new error types. Objective: To evaluate the effect of the introduction of an electronic medical record (EMR) on the quality and safety of therapeutic anticoagulation management. Methods: A retrospective, observational pre/post study was conducted, analysing real-world data across five hospital sites in a single health service. Four metrics were compared one year pre- and one year post-EMR implementation. They included clinician-reported medication incidents, toxic pathology results, hospital-acquired bleeding complications (HACs) and the rate of heparin-induced thrombocytopenia. Further sub-analyses of patients experiencing HACs in the post-EMR period identified key opportunities for intervention to maximise safety and quality of anticoagulation within an eMMS. Results: A significant reduction in HACs was observed in the post-EMR implementation period (mean (SD) = 12.1 (4.4)/month vs. mean (SD) = 7.8 (3.5)/month; p = 0.01). The categorisation of potential EMR design enhancements found that new automated clinical decision support or improved pathology result integration would be suitable to mitigate future HACs in an eMMS. There was no significant difference in the mean monthly clinician-reported incident rates for anticoagulants or the rate of toxic pathology results in the pre- versus post-EMR implementation period. A 62.5% reduction in the cases of heparin-induced thrombocytopenia was observed in the post-EMR implementation period. Conclusion: The implementation of an EMR improves clinical care outcomes for patients receiving anticoagulation. System design plays a significant role in mitigating the risks associated with anticoagulants and consideration must be given to optimising eMMS.
... Real-world data (RWD) provide insights on patient health status and/or health care delivery in routine clinical practice, including access to treatment, therapeutic efficacy, toxicity, and quality of life, which can help in developing interventions to improve patients' health care quality, including patients with cancer [17,18]. A multicenter, retrospective analysis of patients with HR+, HER2− BC who received NAC was conducted, and real-world data on patients' clinical and pathological characteristics, treatments, and surgical and oncological outcomes in clinical practice were reported. ...
Article
Full-text available
Background The data in the real-world setting on breast pathologic complete response (pCR) after neoadjuvant chemotherapy (NAC) for hormone receptor–positive, human epidermal growth factor receptor-2-negative (HR+, HER2−) breast cancer (BC) is limited. The present study aims to screen for some predictors and investigate the prognostic significance of breast pCR after NAC in HR+, HER2− BC in China. Methods This was a multicenter, retrospective study. In this study, three hundred eighty-four HR+, HER2− BC patients who received NAC were enrolled between 2010 and 2016 from Shanghai Jiaotong University Breast Cancer Database (SJTU-BCDB). These patients were dichotomized according to the presence of breast pCR after NAC. Logistic analysis was used to screen for predictors associated with breast pCR. Kaplan-Meier (K-M) curve and a propensity score matching (PSM) analysis were performed to compare the disease-free survival (DFS) between the two groups. Cox regression was used to analyze the prognostic significance of breast pCR on DFS in HR+, HER2− BC. A nomogram model was established to predict the probability of DFS at 1, 3, and 5 years after NAC. Results Fifty-seven patients (14.8%) achieved breast pCR. Univariate analysis showed that tumor size, estrogen receptor (ER), progesterone receptor (PR), and Ki67 were associated with breast pCR. Further, multivariate analysis showed that tumor size, PR, and Ki67 remained statistically significant. K-M curves showed a statistical difference between the breast pCR and non-pCR groups before PSM (p = 0.047), and a more significant difference was shown after PSM (p = 0.033). Cox regression after PSM suggested that breast pCR, adjuvant ET, clinical T stage, and Ki67 status were the significant predictive factors for DFS in HR+, HER2− BC patients. The adjusted hazards ratio (aHR) for breast pCR was 0.228 (95% CI, 0.070~0.739; p = 0.014), for adjuvant endocrine therapy was 0.217 (95% CI, 0.059~0.801; p = 0.022), for Ki67 was 1.027 (95% CI, 1.003~1.052; p = 0.027), for cT stages 2 and 3 compared with 1, the values were 1.331 (95% CI, 0.170~10.389), and 4.699 (95% CI, 0.537~41.142), respectively (p = 0.043). A nomogram was built based on these significant predictors, providing an integrated probability of DFS at 1, 3, and 5 years. The values of area under the receiver operating characteristic (ROC) curve (AUC) were 0.967, 0.991, and 0.787, at 1 year, 3 years, and 5 years, respectively, demonstrating the ability of the nomogram to predict the DFS. Conclusions This real-world study demonstrates that tumor size, PR, and Ki67 were independent predictive factors for breast pCR in HR+, HER2− BC. Breast pCR after NAC was an independent predictor for DFS in HR+, HER2− patients, regardless of a change in nodes. Furthermore, the nomogram built in our study could predict the probability of individualized DFS in HR+, HER2− BC patients.
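For the survival comparison described above, a typical minimal workflow is a Kaplan-Meier curve per group plus a log-rank test. The sketch below assumes a hypothetical file and column names (breast_pcr, dfs_months, relapse); it is not the study's code, and it omits the propensity score matching and Cox modelling steps.

```python
# Minimal sketch of a K-M comparison of DFS by breast pCR status with a log-rank test.
# The file and column names are hypothetical; PSM and Cox steps are not shown.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("hrpos_her2neg_nac.csv")            # hypothetical dataset
pcr = df["breast_pcr"] == 1

km_pcr, km_no = KaplanMeierFitter(), KaplanMeierFitter()
km_pcr.fit(df.loc[pcr, "dfs_months"], df.loc[pcr, "relapse"], label="breast pCR")
km_no.fit(df.loc[~pcr, "dfs_months"], df.loc[~pcr, "relapse"], label="non-pCR")

result = logrank_test(df.loc[pcr, "dfs_months"], df.loc[~pcr, "dfs_months"],
                      event_observed_A=df.loc[pcr, "relapse"],
                      event_observed_B=df.loc[~pcr, "relapse"])
print(result.p_value)                                # compare DFS between the two groups
```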
... Despite many potential applications of data collected from routine health-care delivery, the credibility of RWE remains controversial. Poor data quality, inappropriate study choices, confounding, and bias pose potential threats to validity of findings based on RWE studies (7). These challenges highlight the need for a principled approach to analysis of longitudinal health-care databases as well as a framework for understanding the manners in which RWE can be effectively applied. ...
Article
Full-text available
Background Medical and regulatory communities are increasingly interested in the utility of real-world evidence (RWE) for answering questions pertaining to drug safety and effectiveness but concerns about validity remain. A principled approach to conducting RWE studies may alleviate concerns and increase confidence in findings. This study sought to predict the findings from the PRONOUNCE trial using a principled approach to generating RWE. Methods This propensity-score (PS) matched observational cohort study utilized 3 claims databases to compare the occurrence of major adverse cardiovascular events (MACE) among initiators of degarelix vs. leuprolide. Patients were included if they had a history of prostate cancer and atherosclerotic cardiovascular disease. Subjects were excluded if they did not have continuous database enrollment in the year prior to treatment initiation, were exposed to androgen deprivation therapy or experienced an acute cardiovascular event within 30 days prior to treatment initiation, or had a history or risk factors of QT prolongation. Results There were 12,448 leuprolide and 1,969 degarelix study-eligible patients before matching, with 1,887 in each arm after PS-matching. The result for MACE comparing degarelix to leuprolide in the observational analysis (hazard ratio = 1.35; 95% confidence interval = 0.94–1.93) was consistent with the subsequently released PRONOUNCE result (hazard ratio = 1.28; 95% confidence interval = 0.59–2.79). Conclusions This study successfully predicted the result of a comparative cardiovascular safety trial in the oncology setting. Although the findings are encouraging, limitations of measuring cancer stage and tumor progression are representative of challenges in attempting to generalize whether claims-based RWE can be used as actionable evidence.
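The matching step described above can be approximated with a simple greedy nearest-neighbour match on the estimated propensity score. The sketch below uses invented file and column names and matches with replacement, which is cruder than the 1:1 matching typically used in such studies.

```python
# Rough sketch of 1:1 greedy nearest-neighbour propensity score matching (with
# replacement). File, columns, and confounders are hypothetical, not the study's.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("adt_cohort.csv")                         # hypothetical claims extract
X = df[["age", "prior_mi", "diabetes", "statin_use"]]      # hypothetical confounders
t = df["degarelix"].to_numpy()                             # 1 = degarelix, 0 = leuprolide

ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
treated_idx, control_idx = np.where(t == 1)[0], np.where(t == 0)[0]

nn = NearestNeighbors(n_neighbors=1).fit(ps[control_idx].reshape(-1, 1))
_, pos = nn.kneighbors(ps[treated_idx].reshape(-1, 1))     # nearest control per treated
matched = np.concatenate([treated_idx, control_idx[pos.ravel()]])

matched_df = df.iloc[matched]        # matched cohort, to be passed to the outcome model
```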
... Real-world evidence is derived from studies of real-world data, which are information on health care accumulated from multiple sources outside the traditional clinical research setting, including electronic health records, medical claims and billing data, product and disease registries, and personal devices and health applications (1). In contrast to traditional clinical trials, data used in real-world evidence studies are repurposed from their original intent (eg, medical claims and billing data for reimbursement) or are set up to answer a variety of research questions (eg, disease registries). ...
... During the COVID-19 pandemic, population-wide person-level electronic health record (EHR) data has increasingly gained importance for exploring, modeling, and reporting disease trends to inform healthcare and public health policy 1 . The increasing availability of COVID-19 digital health data has fostered the interest in the use of real-world data (RWD) 2 , defined as patient data collected from their EHRs, which can be analyzed to generate real-world evidence (RWE) 3 . ...
Preprint
Full-text available
Despite the extensive vaccination campaigns in many countries, COVID-19 is still a major worldwide health problem because of its associated morbidity and mortality. Therefore, finding efficient treatments as fast as possible is a pressing need. Drug repurposing constitutes a convenient alternative when the need for new drugs in an unexpected medical scenario is urgent, as is the case with COVID-19. Using data from a central registry of electronic health records (the Andalusian Population Health Database, BPS), the effect on patient survival of prior consumption of drugs for other indications before hospitalization was studied in a retrospective cohort of 15,968 individuals, comprising all COVID-19 patients hospitalized in Andalusia between January and November 2020. Covariate-adjusted hazard ratios and analysis of lymphocyte progression curves support a significant association between consumption of 21 different drugs and better patient survival. Conversely, one drug, furosemide, was associated with a significant increase in patient mortality.
... The lack of standardization between multiple centers, which is common outside of prospective trials, might have reduced the mpMRI-RCs' discrimination accuracy. However, testing the mpMRI-RCs in a "real-life" setting is crucial for determining the models' robustness, generalizability, and clinical benefit [20]. ...
Article
Full-text available
Purpose Risk calculators (RC) aim to improve prebiopsy risk stratification. Their latest versions now include multiparametric magnetic resonance imaging (mpMRI) findings. For their implementation into clinical practice, critical external validations are needed. Methods We retrospectively analyzed the patient data of 554 men who underwent ultrasound-guided targeted and systematic prostate biopsies at 2 centers. We validated the mpMRI-RCs of Radtke et al. (RC-R) and Alberts et al. (RC-A), previously shown to predict prostate cancer (PCa) and clinically significant PCa (csPCa). We assessed these RCs’ prediction accuracy by analyzing the receiver-operating characteristics (ROC) curve and evaluated their clinical utility using Decision Curve Analysis (DCA), including Net-Benefit and Net-Reduction curves. Results We found that the Area Under the ROC Curve (AUC) for predicting PCa was 0.681 [confidence interval (CI) 95% 0.635–0.727] for RC-A. The AUCs for predicting csPCa were 0.635 (CI 95% 0.583–0.686) for RC-A and 0.676 (CI 95% 0.627–0.725) for RC-R. For example, at a risk threshold of 12%, RC-A needs to assess 334 and RC-R 500 patients to detect one additional true positive PCa or csPCa patient, respectively. At the same risk threshold of 12%, RC-A only needs to assess 6 and RC-R 16 patients to detect one additional true negative PCa or csPCa patient. Conclusion The mpMRI-RCs, RC-R and RC-A, are robust and valuable tools for patient counseling. Although they do not improve PCa and csPCa detection rates by a clinically meaningful margin, they aid in avoiding unnecessary prostate biopsies. Their implementation could reduce overdiagnosis and reduce PCa screening morbidity.
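The decision curve analysis mentioned above reduces to the standard net-benefit formula NB(t) = TP/n - FP/n * t/(1 - t). The sketch below evaluates it at the 12% threshold used as an example in the abstract, on simulated predicted risks rather than the validation cohort.

```python
# Sketch of the net-benefit calculation behind a decision curve:
# NB(t) = TP/n - FP/n * t/(1 - t). Inputs are simulated, not the validation cohort.
import numpy as np

def net_benefit(y_true, risk, threshold):
    """Net benefit of biopsying everyone whose predicted risk exceeds `threshold`."""
    n = len(y_true)
    biopsy = risk >= threshold
    tp = np.sum(biopsy & (y_true == 1))
    fp = np.sum(biopsy & (y_true == 0))
    return tp / n - fp / n * threshold / (1 - threshold)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)                                  # simulated csPCa labels
risk = np.clip(0.3 * y + rng.normal(0.3, 0.2, 500), 0, 1)    # simulated risk-calculator output
print(net_benefit(y, risk, threshold=0.12))                  # e.g. at the 12% threshold
```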
... 13 14 One of the most promising new methodologies in regulatory science is real-world evidence (RWE), a new concept recently proposed by the US Food and Drug Administration. 15 By definition, real-world data (RWD) are data routinely collected from a variety of sources, including insurance claim databases, patient registries, electronic medical records and social networking services. Moreover, RWE is clinical evidence derived from RWD analysis. ...
Article
Full-text available
Objective Adjuvant chemotherapy with trastuzumab improves the postoperative life expectancy of women with early-stage breast cancer. Although trastuzumab is reportedly cardiotoxic, quantification based on real-world evidence is lacking. Therefore, in this study, we aimed to analyse trastuzumab cardiotoxicity using a nationwide claim-based database. Methods In this retrospective study, we used data from a nationwide claims database (Japan Medical Data Center, Tokyo, Japan) under the universal healthcare system. Women with breast cancer who underwent initial surgery were included. Patients with recurrent or advanced-stage breast cancer, with a history of heart failure, receiving neoadjuvant chemotherapy or a preoperative history of less than 6 months were excluded. Propensity score (PS) was calculated using logistic regression based on age, cardiovascular risk factors, radiotherapy and concomitant anthracyclines (AC). Results We identified 12 060 eligible patients (mean age 50.8±8.56 years) between January 2010 and December 2019. After 1:2 PS matching (trastuzumab users, TZ, n=1005; non-users, NT, n=2010), Cox proportional hazards model analysis showed that the rate of heart failure development within 18 months postoperative was significantly higher in the TZ group than in the NT group (adjusted HR 2.28, 95% CI 1.38 to 3.77). Baseline cardiac evaluation in the combined AC/TZ cases was 27.2% preoperative, 66.0% pre-AC and 86.6% pre-TZ, respectively. Conclusion Trastuzumab cardiotoxicity remained relevant in the claim-based analysis adjusted for AC effects. Further collaborative studies in cardio-oncology with real-world data are warranted to improve the rate of baseline cardiovascular risk assessment in patients with cancer scheduled for cardiotoxic cancer treatment.
... 5 Real-world data are crucial for ascertaining treatment effectiveness in clinical practice, yet adjustment for confounding is critical. [6][7][8] Patients who received COVID-19 vaccine or had confirmed previous SARS-CoV-2 infection were excluded from EPIC-HR, representing a difference between EPIC-HR patients and high-risk patients who may receive nirmatrelvir/ritonavir in clinical practice. 4 Furthermore, data evaluating clinical effectiveness of nirmatrelvir/ritonavir against the Omicron variant, which emerged after EPIC-HR completion, are limited and may differ compared with efficacy data from the trial. ...
Preprint
Full-text available
Objectives: The aim of this analysis was to describe nirmatrelvir/ritonavir real-world effectiveness in preventing hospitalization among high-risk US COVID-19 patients during SARS-CoV-2 Omicron predominance. Design: An ongoing population-based cohort study with retrospective and prospective collection of electronic healthcare data in the United States. Methods: Data for this analysis were collected from the US Optum de-identified COVID-19 Electronic Health Record (EHR) dataset from December 22, 2021 to June 8, 2022. Key eligibility criteria for inclusion in the database analysis were age of at least 12 years; a positive SARS-CoV-2 test, COVID-19 diagnosis, or nirmatrelvir/ritonavir prescription; and high risk of severe COVID-19 based on demographic/clinical characteristics. Potential confounders between groups were balanced using propensity score matching (PSM). Immortal time bias was addressed. Outcome measures: Hospitalization rates within 30 (primary analysis) or 15 (sensitivity analysis) days from COVID-19 diagnosis overall and within subgroups were evaluated. Results: Before PSM, the nirmatrelvir/ritonavir group (n=2811) was less racially diverse, older, and had higher COVID-19 vaccination rates and a greater number of comorbidities than the non-nirmatrelvir/ritonavir group (n=194,542). Baseline characteristics were well balanced across groups (n=2808 and n=10,849, respectively) after PSM. Incidence of hospitalization (95% CI) within 30 days was 1.21% (0.84%, 1.69%) for the nirmatrelvir/ritonavir group and 6.94% (6.03%, 7.94%) for the non-nirmatrelvir/ritonavir group, with a hazard ratio (95% CI) of 0.16 (0.11, 0.22; 84% relative risk reduction). Incidence within 15 days was 0.78% (0.49%, 1.18%) for the nirmatrelvir/ritonavir group and 6.54% (5.65%, 7.52%) for the non-nirmatrelvir/ritonavir group; hazard ratio 0.11 (0.07, 0.17; 89% relative risk reduction). Nirmatrelvir/ritonavir was effective in African American patients (hazard ratio, 0.35 [0.15, 0.83]; 65% relative risk reduction). Relative risk reductions were comparable with overall results across ages and among vaccinated patients. Conclusions: Real-world nirmatrelvir/ritonavir effectiveness against hospitalization during the Omicron era supports EPIC-HR efficacy among high-risk patients. Future research should confirm these early real-world results and address limitations.
... 38 Variation in VE estimates may stem from variation in case definition, surveillance, and identification of active cases (mild and severe cases), vaccine characteristics, unmeasured confounding, health system characteristics, and heterogeneity of our study population. [39][40][41] Generally, the inclusion criteria for RCTs are relatively strict, with exclusion of frail or immunocompromised people, people with comorbidities, and/or those with a history of herpes zoster who would be targeted for vaccination. 16 The effectiveness of ZVL has been assessed in several observational studies, mostly in North America and Europe. ...
Article
Full-text available
Background Herpes zoster (HZ) and associated complications cause significant burden to older people. A HZ vaccination programme was introduced in Aotearoa New Zealand in April 2018 with a single dose vaccine for those aged 65 years and a four-year catch up for 66–80 year-olds. This study aimed to assess the ‘real-world’ effectiveness of the zoster vaccine live (ZVL) against HZ and postherpetic neuralgia (PHN). Methods We conducted a nationwide retrospective matched cohort study from 1 April 2018 to 1 April 2021 using a linked de-identified patient level Ministry of Health data platform. A Cox proportional hazards model was used to estimate ZVL vaccine effectiveness (VE) against HZ and PHN adjusting for covariates. Multiple outcomes were assessed in the primary (hospitalised HZ and PHN – primary diagnosis) and secondary (hospitalised HZ and PHN: primary and secondary diagnosis, community HZ) analyses. A sub-group analysis was carried out in adults ≥ 65 years old, immunocompromised adults, Māori, and Pacific populations. Findings A total of 824,142 (274,272 vaccinated with ZVL matched with 549,870 unvaccinated) New Zealand residents were included in the study. The matched population was 93.4% immunocompetent, 52.2% female, 80.2% European (level 1 ethnic codes), and 64.5% were 65–74 years old (mean age = 71.1±5.0). Vaccinated versus unvaccinated incidence of hospitalised HZ was 0.16 vs. 0.31/1,000 person-years and 0.03 vs. 0.08/1000 person-years for PHN. In the primary analysis, the adjusted overall VE against hospitalised HZ and hospitalised PHN was 57.8% (95% CI: 41.1–69.8) and 73.7% (95% CI: 14.0–92.0) respectively. In adults ≥ 65 years old, the VE against hospitalised HZ was 54.4% (95% CI: 36.0–67.5) and VE against hospitalised PHN was 75.5% (95% CI: 19.9–92.5). In the secondary analysis, the VE against community HZ was 30.0% (95% CI: 25.6–34.5). The ZVL VE against hospitalised HZ for immunocompromised adults was 51.1% (95% CI: 23.1–69.5), and PHN hospitalisation was 67.6% (95% CI: 9.3–88.4). The VE against HZ hospitalisation for Māori was 45.2% (95% CI: −23.2–75.6) and for Pacific Peoples was 52.2% (95% CI: −40.6–83.7). Interpretation ZVL was associated with a reduction in risk of hospitalisation from HZ and PHN in the New Zealand population. Funding Wellington Doctoral Scholarship awarded to JFM.
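Vaccine effectiveness in studies like this one is conventionally reported as VE = (1 - adjusted HR) x 100%. The snippet below simply applies that conversion; the hazard ratio shown is back-calculated from the reported VE of 57.8% (95% CI 41.1-69.8) against hospitalised HZ and is not quoted from the paper.

```python
# Conventional conversion from an adjusted hazard ratio to vaccine effectiveness,
# VE = (1 - HR) x 100%. The HR below is back-calculated from the reported VE, not quoted.
def vaccine_effectiveness(hr, ci_low, ci_high):
    ve = lambda x: round((1 - x) * 100, 1)
    return ve(hr), (ve(ci_high), ve(ci_low))        # CI flips: a higher HR means a lower VE

print(vaccine_effectiveness(0.422, 0.302, 0.589))   # roughly (57.8, (41.1, 69.8))
```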
... However, randomized controlled trials (RCTs) were not viable as Taiwan managed to halt COVID-19 transmission with nonpharmacological interventions until May 2021 [4], when a sudden surge in COVID-19 cases overwhelmed the local health system. Given the limitation of traditional RCTs in delivering timely evidence necessary for clinical decisions regarding the use of novel therapies [5,6], health care providers, scientists and pharmaceutical companies leveraged the earlier TCM-MM cotreatment experience by providing the patients with NRICM101 and NRICM102, in addition to usual care (Figure 1). NRICM elucidated the multi-targeting mechanism and established quality control profiles of NRICM101 & NRICM102. ...
Article
Full-text available
Background Viral- and host-targeted traditional Chinese medicine (TCM) formulae NRICM101 and NRICM102 were administered to hospitalized patients with COVID-19 during the mid-2021 outbreak in Taiwan. We report the outcomes by measuring the risks of intubation or admission to intensive care unit (ICU) for patients requiring no oxygen support, and death for those requiring oxygen therapy. Methods This multicenter retrospective study retrieved data of 840 patients admitted to 9 hospitals between May 1 and July 26, 2021. After propensity score matching, 302 patients (151 received NRICM101 and 151 did not) and 246 patients (123 received NRICM102 and 123 did not) were included in the analysis to assess relative risks. Results During the 30-day observation period, no endpoint occurred in the patients receiving NRICM101 plus usual care while 14 (9.3%) in the group receiving only usual care were intubated or admitted to ICU. The numbers of deceased patients were 9 (7.3%) in the group receiving NRICM102 plus usual care and 28 (22.8%) in the usual care group. No patients receiving NRICM101 transitioned to a more severe status; NRICM102 users were 74.07% less likely to die than non-users (relative risk= 25.93, 95% confidence interval 11.73-57.29). Conclusion NRICM101 and NRICM102 were significantly associated with a lower risk of intubation/ICU admission or death among patients with mild-to-severe COVID-19. This study provides real-world evidence of adopting broad-spectrum oral therapeutics and shortening the gap between outbreak and effective response. It offers a new vision in our preparation for future pandemics.
... 5 Health-data science has undergone rapid development in the past decade, including the common use of electronic health-care record (EHR) systems that condense clinical episodes into coded, structured labels for diseases and health-care utilisation. 6 However, concerns about the quality, data privacy, transparency, and comparability of these systems have restricted the use of evidence generated with structured health-care data. These concerns have also restricted acceptance of evidence generated with structured health-care data by regulators, reimbursement authorities, and guideline task forces. ...
Article
Full-text available
Big data is important to new developments in global clinical science that aim to improve the lives of patients. Technological advances have led to the regular use of structured electronic health-care records with the potential to address key deficits in clinical evidence that could improve patient care. The COVID-19 pandemic has shown this potential in big data and related analytics but has also revealed important limitations. Data verification, data validation, data privacy, and a mandate from the public to conduct research are important challenges to effective use of routine health-care data. The European Society of Cardiology and the BigData@Heart consortium have brought together a range of international stakeholders, including representation from patients, clinicians, scientists, regulators, journal editors, and industry members. In this Review, we propose the CODE-EHR minimum standards framework to be used by researchers and clinicians to improve the design of studies and enhance transparency of study methods. The CODE-EHR framework aims to develop robust and effective utilisation of health-care data for research purposes.
... Embedding controlled trials within the real world setting, either within registries or routine clinical practice, is now possible and could provide more generalisable results to the population at large. 5 Health data science has undergone rapid development in the past decade, including the common adoption of electronic healthcare record (EHR) systems that condense clinical episodes into a set of coded, structured labels. 6 However, concerns over quality, data privacy, transparency, and comparability of these systems have limited the use of the evidence generated with structured healthcare data. These issues have also restricted acceptance by regulators, reimbursement authorities, and guideline task forces. ...
Article
Full-text available
Big data is important to new developments in global clinical science that aim to improve the lives of patients. Technological advances have led to the regular use of structured electronic health-care records with the potential to address key deficits in clinical evidence that could improve patient care. The COVID-19 pandemic has shown this potential in big data and related analytics but has also revealed important limitations. Data verification, data validation, data privacy, and a mandate from the public to conduct research are important challenges to effective use of routine health-care data. The European Society of Cardiology and the BigData@Heart consortium have brought together a range of international stakeholders, including representation from patients, clinicians, scientists, regulators, journal editors, and industry members. In this Review, we propose the CODE-EHR minimum standards framework to be used by researchers and clinicians to improve the design of studies and enhance transparency of study methods. The CODE-EHR framework aims to develop robust and effective utilisation of health-care data for research purposes.
... While randomized controlled trials provide evidence for the effectiveness of treatments for COPD, observational studies are now receiving special attention in providing complementary real-world evidence (RWE) for regulatory decision-making. [30][31][32] This observational study, conducted in a real-world clinical practice setting of COPD treatment among ICS-naïve patients, using an adaptive selection design, found that single-inhaler triple therapy was not more effective than dual bronchodilators at reducing the incidence of exacerbation, except among patients with multiple exacerbations. Thus, single-inhaler triple therapy should be mainly reserved for patients with multiple exacerbations, and likely those with asthma and eosinophilia, while, for most others, dual bronchodilators are just as effective whilst circumventing the excess risk of severe pneumonias with triple therapy. ...
Article
Full-text available
Purpose: Randomized trials report that single-inhaler triple therapy is more effective than dual bronchodilators at reducing exacerbations in patients with chronic obstructive pulmonary disease (COPD). However, this effect may have been influenced by the forced withdrawal of inhaled corticosteroids (ICS) at randomization. We used an adaptive selection new-user design to compare single-inhaler triple therapy with dual bronchodilators in real-world clinical practice. Patients and methods: We identified a cohort of COPD patients, 40 years or older, treated during 2017-2020, from the United Kingdom's Clinical Practice Research Datalink, a real-world practice setting. ICS-naïve patients initiating single-inhaler triple therapy or dual bronchodilators were compared on the incidence of COPD exacerbation and pneumonia over one year, after adjustment by propensity score weighting. Results: The cohort included 4106 new users of single-inhaler triple therapy and 29,702 of dual bronchodilators. Single-inhaler triple therapy was the first maintenance treatment in 44% of the users and 43% had no COPD exacerbations in the prior year. The adjusted hazard ratio (HR) of a first moderate or severe exacerbation with triple therapy relative to dual bronchodilators was 1.08 (95% confidence interval (CI): 1.00-1.16). Among patients with two or more prior exacerbations the HR was 0.83 (95% CI: 0.74-0.92), while for those with prior asthma diagnosis it was 0.86 (95% CI: 0.70-1.06) and with blood eosinophil count >300 cells/µL it was 0.89 (95% CI: 0.76-1.05). The incidence of severe pneumonia was increased with triple therapy (HR 1.50; 95% CI: 1.29-1.75). Conclusion: In a real-world setting of COPD treatment among ICS-naïve patients, thus unaffected by ICS withdrawal, single-inhaler triple therapy was not more effective than dual bronchodilators at reducing the incidence of exacerbation, except among patients with multiple exacerbations. Single-inhaler triple therapy should be initiated mainly in patients with multiple exacerbations while, for most others, dual bronchodilators are just as effective whilst avoiding the excess risk of severe pneumonias.
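The workflow sketched below, fitting a propensity score model and then a weighted Cox regression, mirrors the general approach the abstract describes (propensity score weighting followed by hazard ratio estimation). It is only an illustrative outline, not the study's code; the file name and columns (treated, time_to_exac, exacerbation, and the confounders) are assumptions.

```python
# Hedged sketch: inverse-probability-of-treatment weighting plus a weighted Cox model.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

df = pd.read_csv("copd_cohort.csv")            # hypothetical analysis file (numeric columns)
confounders = ["age", "sex", "prior_exacerbations", "baseline_fev1"]

# 1. Propensity score: probability of initiating triple therapy rather than dual bronchodilators
ps_model = LogisticRegression(max_iter=1000).fit(df[confounders], df["treated"])
df["ps"] = ps_model.predict_proba(df[confounders])[:, 1]

# 2. Stabilised inverse probability of treatment weights
p_treated = df["treated"].mean()
df["iptw"] = df["treated"] * p_treated / df["ps"] + \
             (1 - df["treated"]) * (1 - p_treated) / (1 - df["ps"])

# 3. Weighted Cox model for time to first moderate or severe exacerbation
cph = CoxPHFitter()
cph.fit(df[["time_to_exac", "exacerbation", "treated", "iptw"]],
        duration_col="time_to_exac", event_col="exacerbation",
        weights_col="iptw", robust=True)
print(cph.hazard_ratios_)                      # weighted HR for 'treated'
```

In practice, weight truncation and balance diagnostics (for example, standardised mean differences before and after weighting) would accompany a model like this.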
... Although bioequivalence has been demonstrated, postmarketing evaluation of biosimilars in less controlled real-world settings is essential to provide evidence when biosimilars are considered as alternatives to originator reference products. In addition, real-world evidence complements knowledge generated from randomized controlled trials (RCTs) and improves their external validity (generalizability) [16]. ...
Article
Full-text available
Background Despite the demonstrated efficacy and safety of biosimilar filgrastim-aafi (Nivestim™), few studies have compared its use in real-life clinical practice to the originator filgrastim (Neupogen™). Objectives This study aimed to compare the effectiveness and safety of filgrastim and filgrastim-aafi for the primary prophylaxis of chemotherapy-induced febrile neutropenia in the real-life setting. Patients and methods A retrospective cohort study included all adult cancer patients at the King Hussein Cancer Centre requiring primary prophylaxis for chemotherapy-induced febrile neutropenia between 2014 and 2016. Two cohorts were selected: patients who received filgrastim and those who received filgrastim-aafi. The primary endpoint was the incidence of febrile neutropenia; the secondary endpoints were the incidence of adverse drug reactions (ADRs), hospital admissions due to febrile neutropenia, and the mean length of hospitalization. Chi-squared tests were performed to evaluate differences between groups. Logistic regression was conducted to adjust for confounding factors. Results A total of 268 patients were identified, with 88 in the filgrastim cohort and 180 in the filgrastim-aafi cohort; 64% were females. The mean age was 47 (±15) years. The incidence of febrile neutropenia was 21.6% in the filgrastim cohort and 15% in the filgrastim-aafi cohort (P = 0.179). No statistically significant differences were detected in the incidence of hospital admission (P = 0.551) or ADRs (P = 0.623) between the two cohorts. Upon adjusting for the confounding factors, results remained statistically nonsignificant. Conclusion Filgrastim and filgrastim-aafi had comparable effectiveness and safety as primary prophylaxis for chemotherapy-induced febrile neutropenia. More extensive prospective studies with additional insight on the cost implications are required.
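As a hedged illustration of the analysis pattern the abstract describes, a chi-squared comparison of incidence plus confounder-adjusted logistic regression, and not the authors' code, with hypothetical file and variable names:

```python
# Hedged sketch: chi-squared test plus confounder-adjusted logistic regression.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

df = pd.read_csv("filgrastim_cohort.csv")   # hypothetical analysis file
# 'biosimilar' = 1 for filgrastim-aafi, 0 for originator; outcome coded 0/1

# Unadjusted comparison of febrile-neutropenia incidence between cohorts
table = pd.crosstab(df["biosimilar"], df["febrile_neutropenia"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-squared p = {p_value:.3f}")

# Logistic regression adjusting for measured confounders
model = smf.logit("febrile_neutropenia ~ biosimilar + age + sex + cancer_type",
                  data=df).fit()
adjusted_or = np.exp(model.params["biosimilar"])
print(f"adjusted OR for biosimilar vs originator = {adjusted_or:.2f}")
```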
... Important here is the approach taken by, among others, the Food and Drug Administration (FDA; [35] ...
Article
Full-text available
Summary Background Randomized controlled trials (RCTs) that are highly restrictive in their design, with highly selected participants and conditions, deliver results whose transferability to routine clinical care and usefulness for reimbursement decisions are sometimes doubted. Research question Against this background, do pragmatically oriented RCTs and registry-based RCTs offer potential solutions? What opportunities and risks are associated with more pragmatic trials, and which methodological aspects deserve particular attention? Methods This article provides a narrative overview of pragmatically oriented RCTs and registry-based RCTs, introducing the PRECIS-2 approach ("pragmatic-explanatory continuum indicator summary") and presenting example studies with a discussion of methodological aspects. Results Clinical RCTs for comparative benefit assessment lie on a continuum between the poles "very pragmatic" and "very explanatory". There is no agreed threshold beyond which an RCT is labelled pragmatic. More pragmatic RCTs are often characterized by large, minimally selected patient groups, embedding in a routine-care setting, and patient-relevant outcomes. They usually forgo sustained adherence measures for the initially assigned treatment, blinding, and elaborate interim assessments. This can, however, lead to problems of interpretation, especially when no differences between the interventions are found. Conclusions More pragmatic randomized trials and registry-based RCTs have the potential to become an important basis for decisions in clinical practice as well as in health policy and reimbursement. To realize this potential, however, several hurdles, above all legal ones, still need to be removed.
... Real-world data (RWD) sources that can contribute to RWE include, but are not limited to, electronic health records (EHRs), insurance claims, and registries. 3 RWD have several advantages compared with traditional research data sources: they are ubiquitously available, less expensive, and available for more diverse patient populations than usually represented in clinical trials. 4 5 To drive the quality and efficiency in use of RWD for medical device evaluation, FDA established the National Evaluation System for health Technology Coordinating Center (NESTcc). ...
Preprint
Objectives: To examine the current state of Unique Device Identifier (UDI) implementation, including barriers and facilitators, among eight health systems participating in a research network committed to real-world evidence (RWE) generation for medical devices. Design: Mixed methods, including a structured survey and semi-structured interviews. Setting: Eight health systems participating in the National Evaluation System for health Technology research network within the United States. Participants: Individuals identified as being involved in or knowledgeable about UDI implementation or medical device identification from supply chain, information technology, and high-volume procedural area(s) in their health system. Main Outcome Measures: Interview topics were related to UDI implementation, including barriers and facilitators; UDI use; benefits of UDI adoption; and vision for UDI implementation. Data were analyzed using directed content analysis, drawing on prior conceptual models of UDI implementation and the Exploration, Preparation, Implementation, Sustainment (EPIS) framework. A brief survey of health system characteristics and scope of UDI implementation was also conducted. Results: Thirty-five individuals completed interviews. Three of eight health systems reported having implemented UDI. Themes identified about barriers and facilitators to UDI implementation included knowledge of the UDI and its benefits among decision makers; organizational systems, culture, and networks that support technology and workflow changes; and external factors such as policy mandates and technology. A final theme focused on the availability of UDIs for RWE; lack of availability significantly hindered RWE studies on medical devices. Conclusions: UDI adoption within health systems requires knowledge of and impetus to achieve operational and clinical benefits. These are necessary to support UDI availability for medical device safety and effectiveness studies and RWE generation.
... Embedding controlled trials within the real world setting, either within registries or routine clinical practice, is now possible and could provide more generalisable results to the population at large. 5 Health data science has undergone rapid development in the past decade, including the common adoption of electronic healthcare record (EHR) systems that condense clinical episodes into a set of coded, structured labels. 6 However, concerns over quality, data privacy, transparency, and comparability of these systems have limited the use of the evidence generated with structured healthcare data. These issues have also restricted acceptance by regulators, reimbursement authorities, and guideline task forces. ...
Article
Full-text available
Big data is central to new developments in global clinical science aiming to improve the lives of patients. Technological advances have led to the routine use of structured electronic healthcare records with the potential to address key gaps in clinical evidence. The covid-19 pandemic has demonstrated the potential of big data and related analytics, but also important pitfalls. Verification, validation, and data privacy, as well as the social mandate to undertake research are key challenges. The European Society of Cardiology and the BigData@Heart consortium have brought together a range of international stakeholders, including patient representatives, clinicians, scientists, regulators, journal editors and industry. We propose the CODE-EHR Minimum Standards Framework as a means to improve the design of studies, enhance transparency and develop a roadmap towards more robust and effective utilisation of healthcare data for research purposes. In the context of ageing populations and increasing multimorbidity in all disease areas, 1-3 large scale, real world data provide an opportunity to better understand the epidemiology of rare and common conditions, and to improve prevention strategies and treatment stratification. 4 Tailored management for individual patients has become even more essential to constrain healthcare costs and provide patient centred care that can improve a patient's quality of life and prognosis. Embedding controlled trials within the real world setting, either within registries or routine clinical practice, is now possible and could provide more generalisable results to the population at large. 5 Health data science has undergone rapid development in the past decade, including the common adoption of electronic healthcare record (EHR) systems that condense clinical episodes into a set of coded, structured labels. 6 However, concerns over quality, data privacy, transparency, and comparability of these systems have limited the use of the evidence generated with structured healthcare data. These issues have also restricted acceptance by regulators, reimbursement authorities, and guideline task forces. Despite the availability of numerous reporting standards, consensus has not been met on how to realise the Findable, Accessible, Interoperable, and Reusable (FAIR) principles 7 in the context of structured healthcare data. Existing reporting checklists ask authors to indicate where in their paper particular design issues have been discussed. For example, STROBE (Strengthening the Reporting of Observational studies in Epidemiology) for observational studies, 8 RECORD (REporting of studies Conducted using Observational Routinely-collected Data) for routinely collected health data, 9 and CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence) for artificial intelligence interventions. 10 However, these checklists are often lengthy, no minimum standards are specified, and adherence does not relate to study quality or even the quality of transparency for that domain. 11 Although checklists can benefit research quality, they are often used for box ticking to facilitate journal publication. In a study of radiology journals, only 15% (120/821) of surveyed authors used the reporting guideline when designing their study. 12 With a proliferation of reporting checklists for every scenario, authors and readers of such reports are increasingly confused about the value of these checklists.
As of 14 February 2022, 488 reporting checklists were registered with EQUATOR (Enhancing the QUAlity and Transparency Of health Research) and 111 were in development. In the case of observational and randomised clinical research using EHRs and other structured data, the source of data, its manipulation, and underpinning governance are of critical importance to extrapolating results. Clarity is needed from a broad stakeholder perspective, providing a quality framework to enhance the design and application of clinical research that increasingly depends on these crucial new sources of data. This article reflects the joint work of a wide range of international stakeholders with a remit to improve the use of structured healthcare data. The programme was coordinated by the European Society of Cardiology, a non-profit organisation of healthcare professionals, and the BigData@Heart Consortium, a public-private partnership funded by the European Union Innovative Medicines Initiative. Our aim was to navigate opportunities and limitations, and to develop a framework for a broad audience of global stakeholders across all disease areas. The CODE-EHR framework seeks to realise the exciting opportunity that digitisation of health data affords to increase efficiency of health-care systems, and improve the lives and wellbeing of patients.
Summary points
• Research using routinely collected structured healthcare data has the potential for major clinical impact but this requires a clear and transparent approach to describe data sources, linkage protocols, coding definitions, and validation of methods and results
• A social license and public mandate are essential components of big data research that can provide societal benefit, addressing the concerns of participants, and ensuring data privacy and integrity
• This paper describes the output of international stakeholder meetings for the use of structured healthcare data for research purposes, including patient representatives, clinicians, scientists, regulators, journal editors, and industry representatives
• The CODE-EHR checklist provides a minimum standards framework to enhance research design and enable more effective use and dissemination of routine healthcare data for clinical research
Article
Objectives To report 24-week safety and effectiveness data of certolizumab pegol (CZP) in Japanese patients with rheumatoid arthritis (RA) from a post-marketing surveillance study. Methods Enrolled patients were newly receiving CZP. All adverse events (AEs) and adverse drug reactions (ADRs) were recorded for patients who received ≥1 CZP dose. Effectiveness outcomes included: 28-joint Disease Activity Score with erythrocyte sedimentation rate (DAS28-ESR) and EULAR response. Missing data were imputed using last observation carried forward. Results 3,727 patients were enrolled; safety and effectiveness were evaluated in 3,586 and 1,794 patients, respectively. 24.9% of patients reported AEs (n=893/3,586) and 14.7% reported ADRs (528/3,586). Serious AEs and serious ADRs were reported in 8.3% (298/3,586) and 5.3% (190/3,586), respectively. Selected serious ADRs of interest included infections (n=110; 3.1%), tuberculosis (6; 0.2%), interstitial pneumonia (15; 0.4%), malignancy (8; 0.2%) and hepatic function disorder (7; 0.2%). No allergic reactions, autoimmune disease, cardiac failure, demyelinating diseases or pancytopenia were reported. Mean DAS28-ESR reduced from 4.8 (baseline) to 3.4 (final evaluation). At final evaluation, 34.7% of patients achieved EULAR good response. Conclusions These real-world safety and effectiveness results were consistent with previously reported data, with no new safety signals identified. Long-term, real-world CZP safety and effectiveness data are needed.
Article
Full-text available
Disease registries have been used as an interesting source of real-world data for supporting regulatory decision-making. In fact, drug studies based on registries cover pre-approval investigation, registry randomized clinical trials, and post-authorization studies. This opportunity has been investigated particularly for rare diseases—conditions affecting a small number of individuals worldwide—that represent a peculiar scenario. Several guidelines, concepts, suggestions, and laws are already available to support the design or improvement of a rare disease registry, opening the way for implementation of a registry capable of managing regulatory purposes. The present study aims to highlight the key stages performed for remodeling the existing Registry of Multiple Osteochondromas—REM into a tool consistent with EMA observations and recommendations, as well as to lead the readers through the entire adapting, remodeling, and optimizing process. The process included a variety of procedures that can be summarized into three closely related categories: semantic interoperability, data quality, and governance. At first, we strengthened interoperability within the REM registry by integrating ontologies and standards for proper data collection, in accordance with FAIR principles. Second, to increase data quality, we added additional parameters and domains and double-checked to limit human error to a bare minimum. Finally, we established two-level governance that has increased the visibility for the scientific community and for patients and carers. In conclusion, our remodeled REM registry fits with most of the scientific community’s needs and indications, as well as the best techniques for providing real-world evidence for regulatory aspects.
The term Big Data is used to describe extremely large datasets that are complex, multi-dimensional, unstructured, and heterogeneous and that are accumulating rapidly and may be analyzed with appropriate informatic and statistical methodologies to reveal patterns, trends, and associations [...].
Article
A frequent dilemma faced in the inflammatory bowel disease (IBD) clinic is how to best treat a patient with a previous cancer diagnosis. The changing demographics of our patient population will make this quandary more common. Previous guidance has emphasised the importance of lengthy postcancer drug holidays and cautious use of IBD therapies. However, accumulating evidence suggests this approach may be unnecessarily conservative. This review considers recent evidence on the safety of IBD drugs, cancer and recurrent cancer risk in patients with IBD and provides a framework for shared decision making involving patient, gastroenterologist and oncologist.
Article
Background Vedolizumab is a gut-selective anti-lymphocyte trafficking agent approved for the treatment of moderate to severely active inflammatory bowel disease (IBD; ulcerative colitis [UC] and Crohn’s disease [CD]). Methods A systematic literature review (SLR) of real-world studies was conducted to assess the effectiveness of dose escalation of vedolizumab every 8 weeks (Q8W) during maintenance treatment to achieve a response in patients who were either vedolizumab responders experiencing secondary loss of response (SLOR) or non-responders. MEDLINE and EMBASE databases were searched from January 2014 to August 2021. Results Screening of SLR outputs identified 72 relevant real-world study publications featuring dose escalation of vedolizumab maintenance therapy. After qualitative review, ten eligible studies (9 articles, 1 abstract) were identified as reporting clinical response and/or clinical remission rates following escalation of intravenous vedolizumab 300 mg Q8W maintenance dosing to every 4 weeks (Q4W) maintenance dosing in adult patients with UC/CD (≥10 patients per study). Overall, 196/395 (49.6%) patients with IBD had a response within 54 weeks of vedolizumab maintenance dose escalation. Although definitions for clinical response/remission varied across the 10 studies, clinical response rates after escalated vedolizumab Q8W maintenance dosing ranged from 40.0‒73.3% (9 studies) and from 30.0‒55.8% for remission (4 studies) over a range of 8 to <58 weeks’ follow-up. Conclusions This synthesis of real-world effectiveness data in vedolizumab-treated patients with IBD, indicates that approximately half were able to achieve or recapture clinical response after escalating vedolizumab maintenance dosing.
Article
Objective: This observational retrospective cohort study investigates the effect of antihypertensive therapy with angiotensin II receptor blockers (ARBs) or dihydropyridine calcium channel blockers (dCCBs) monotherapy on renal function using longitudinal real-world health data of a drug-naive, hypertensive population without kidney disease. Methods: Using propensity score matching, we selected untreated hypertensive participants (n = 10 151) and dCCB (n = 5078) or ARB (n = 5073) new-users based on annual health check-ups and claims between 2008 and 2020. Participants were divided by the first prescribed drug. Results: The mean age was 51 years, 79% were men and the mean estimated glomerular filtration rate (eGFR) was 78 ml/min per 1.73 m2. Blood pressure rapidly decreased by approximately 10% in both treatment groups. At the 1-year visit, eGFR levels decreased in the ARB group by nearly 2% but increased in the dCCB group by less than 1%. However, no significant difference was apparent in the annual eGFR change after the 1-year visit. The risk for composite kidney outcome (new-onset proteinuria or eGFR decline ≥30%) was lowest in the ARB group owing to their robust effect on preventing proteinuria: hazard ratio (95% confidence interval) for proteinuria was 0.90 (0.78-1.05) for the dCCB group and 0.54 (0.44-0.65) for the ARB group, compared with that for the untreated group after ending follow-up at the last visit before changing antihypertensive treatment. Conclusion: From the present findings based on the real-world data, ARBs can be recommended for kidney protection even in a primary care setting. Meanwhile, dCCB treatment initially increases eGFR with no adverse effects on proteinuria.
Article
Full-text available
Regulators and payers play a pivotal role in facilitating timely and affordable access to safe and efficacious medicines. They use evidence generated from randomised clinical trials (RCTs) to support decisions to register and subsidise medicines. However, at the time of registration and subsidy approval, regulators and payers face uncertainty about how RCT outcomes will translate to real-world clinical practice. In response to this situation, medicines policy agencies worldwide have endorsed the use of real-world data (RWD) to derive novel insights on the use and outcomes of prescribed medicines. Recent reforms around data availability and use in Australia are creating unparalleled data access and opportunities for Australian researchers to undertake large-scale research to generate evidence on the safety and effectiveness of medicines in the real world. Highlighting the critical importance of research in this area, Quality Use of Medicines and Medicine Safety was announced as Australia's 10th National Health Priority in 2019. The National Health and Medical Research Council, Medicines Intelligence Centre of Research Excellence (MI-CRE) has been formed to take advantage of the renewed focus on quality use of medicines and the changing data landscape in Australia. It will generate timely research supporting the evidentiary needs of Australian medicines regulators and payers by accelerating the development and translation of real-world evidence on medicines use and outcomes. MI-CRE is developing a coordinated approach to identify, triage and respond to priority questions where there are significant uncertainties about medicines use, (cost)-effectiveness, and/or safety and creating a data ecosystem that will streamline access to Australian data to enable researchers to generate robust evidence in a timely manner. This paper outlines how MI-CRE will partner with policy makers, clinicians, and consumer advocates to leverage real-world data to co-create real-world evidence, to improve quality use of medicines and reduce medicine-related harm.
Article
PURPOSE Using a real-world data (RWD)–based trial simulation approach, we aim to simulate colorectal cancer (CRC) trials and examine both effectiveness and safety end points in different simulation scenarios. METHODS We identified five phase III trials comparing new treatment regimens with a US Food and Drug Administration–approved first-line treatment in patients with metastatic CRC (ie, fluorouracil, leucovorin, and irinotecan) as the standard-of-care (SOC) control arm. Using Electronic Health Record–derived data from the OneFlorida network, we defined the study populations and outcome measures using the protocols from the original trials. Our design scenarios were (1) simulation of the SOC fluorouracil, leucovorin, and irinotecan arm and (2) comparative effectiveness research (CER) simulation of the control and experimental arms. For each scenario, we adjusted for random assignment, sampling, and dropout. We used overall survival (OS) and severe adverse events (SAEs) to measure effectiveness and safety. RESULTS We conducted CER simulations for two trials, and SOC simulations for three trials. The effect sizes of our simulated trials were stable across all simulation runs. Compared with the original trials, we observed longer OS and a higher mean number of SAEs in both the CER and SOC simulations. In the two CER simulations, hazard ratios associated with death from the simulations were similar to those reported in the original trials. Consistent with the original trials, we found higher risk ratios of SAEs in the experimental arm, suggesting potentially higher toxicities from the new treatment regimen. We also observed similar SAE rates across all simulations compared with the original trials. CONCLUSION In this study, we simulated five CRC trials and tested two simulation scenarios with several different configurations, demonstrating that our simulations can robustly generate effectiveness and safety outcomes comparable with the original trials using real-world data.
Article
Prader–Willi Syndrome (PWS) is a multi‐system genetic disorder characterized by hyperphagia and a range of medical complications. While register and cohort studies have explored the natural course of the syndrome, there is little nationally‐representative data. In this study the National Inpatient Sample, a de‐identified all‐payors database of acute care hospital discharges in the United States, was queried for patients discharged with a diagnosis of PWS in 2019. Hospitalizations involving PWS were compared to hospitalizations without a PWS diagnosis matched based on demographic and hospital factors. In total, 540 hospitalizations (95% CI: 513–567) included a diagnosis of PWS. Median age at time of admission was 22 years, with an interquartile range of 6.3–37.8 years. Respiratory conditions accounted for 110 (20.4%) of primary discharge diagnoses, with infectious conditions for 70 (13.0%) and digestive conditions for 65 (12.0%). Hospitalizations involving PWS were significantly more likely to involve respiratory failure (OR 5.49; 95% CI 3.86–7.80), septicemia (OR 2.80, 95% CI 1.97–3.96), or intestinal obstruction and ileus (OR 6.29; 95% CI 3.70–10.7) compared to matched hospitalizations without PWS. Obesity was diagnosed in 230 PWS hospitalizations (42.6%; OR 3.86, 95% CI 3.17–4.72 relative to non‐PWS hospitalizations). These results point to an ongoing need for the improved diagnosis and treatment of PWS complications, and highlight the importance of specific billing codes for rare diseases to enhance the collection of real world evidence.
Article
Objective We summarized a decade of new research focusing on semantic data integration (SDI) since 2009, and we aim to: (1) summarize the state-of-art approaches on integrating health data and information; and (2) identify the main gaps and challenges of integrating health data and information from multiple levels and domains. Materials and Methods We used PubMed as our focus is applications of SDI in biomedical domains and followed the Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) to search and report for relevant studies published between January 1, 2009 and December 31, 2021. We used Covidence—a systematic review management system—to carry out this scoping review. Results The initial search from PubMed resulted in 5,326 articles using the two sets of keywords. We then removed 44 duplicates and 5,282 articles were retained for abstract screening. After abstract screening, we included 246 articles for full-text screening, among which 87 articles were deemed eligible for full-text extraction. We summarized the 87 articles from four aspects: (1) methods for the global schema; (2) data integration strategies (i.e., federated system vs. data warehousing); (3) the sources of the data; and (4) downstream applications. Conclusion SDI approach can effectively resolve the semantic heterogeneities across different data sources. We identified two key gaps and challenges in existing SDI studies that (1) many of the existing SDI studies used data from only single-level data sources (e.g., integrating individual-level patient records from different hospital systems), and (2) documentation of the data integration processes is sparse, threatening the reproducibility of SDI studies.
Article
Objectives The Joint ISPOR-ISPE Special Task Force on Real-World Evidence included patient/stakeholder engagement as a recommended good procedural practice when designing, conducting, and disseminating real-world evidence (RWE). However, there are no guidelines describing how patient experience data (PED) can be applied when designing real-world data (RWD) studies. This article describes development of consensus recommendations to guide researchers in applying PED to develop patient-centered RWE. Methods A multidisciplinary advisory board, identified through recommendations of collaborators, was established to guide development of recommendations. Semistructured interviews were conducted to identify how experienced RWD researchers (n = 15) would apply PED when designing a hypothetical RWD study. Transcripts were analyzed and emerging themes developed into preliminary methods recommendations. An eDelphi survey (n = 26) was conducted to refine/develop consensus on the draft recommendations. Results We identified 13 recommendations for incorporating PED throughout the design, conduct, and translation of RWE. The recommendations encompass themes related to the development of a patient-centered research question, designing a study, disseminating RWE, and general considerations. For example, consider how patient input can inform population/subgroups, comparators, and study period. Researchers can leverage existing information describing PED and may be able to apply those insights to studies relying on traditional RWD sources and/or patient registries. Conclusions Applying these emerging recommendations may improve the patient centricity of RWE through improved relevance of RWE to patient communities of interest and foster greater multidisciplinary participation and transparency in RWD research. As researchers gather experience by applying the methods recommendations, further refinement of these consensus recommendations may lead to “best practices.”
Chapter
Generalization is inference from the specific circumstances of a clinical trial to other settings or populations with the condition of interest. Accomplishing this is complex because trials are not population samples, methods supporting both internal and external validity must be assessed, the trial data must be fit for purpose, and relevant shared biology must be a foundation for extrapolation of results. In the context of the large-scale randomized evidence from the COVID-19 vaccine trials, this chapter discusses these issues and how generalizations might be enhanced. Laboratory experiments are a useful microcosm of the same issues and carry important lessons for this process.
Article
Abstract The objective of this study was to review and analyze the state of knowledge on the use and limitations of national reimbursement claims databases (BDNR) in rheumatology. Three main categories of BDNR were distinguished according to the data they contain: BDNR without clinical details (e.g., France and Quebec), BDNR containing diagnostic information from clinical practice (e.g., Great Britain), and BDNR that can be linked to clinical databases (e.g., Sweden). BDNR make it possible to assemble cohorts for collecting prospective data with good statistical power, thereby enriching knowledge of the epidemiology of chronic inflammatory rheumatic diseases, particularly of the benefit-risk balance of available treatments and the economic burden that these diseases and their treatments place on society. The implications can be substantial for public health decisions, as well as for developing or updating management recommendations. The main limitation of BDNR is that they do not contain exhaustive medical information: assessment of disease activity, test results, validation of diagnostic codes, and so on. Their size and complexity, as well as the difficulty of obtaining data extractions, are further drawbacks. In conclusion, BDNR represent a major opportunity for medical research in rheumatology and highlight the need for scientific interaction between rheumatologists, data managers, and biostatisticians. One of the main drawbacks of these databases, namely the lack of exhaustive clinical and paraclinical data, should be offset in the coming years by establishing linkage with clinical registries.
Article
Full-text available
Objectives Endovascular aortic repair (EVAR) evolved through competition with open aortic repair (OAR) as a safe and effective treatment option for appropriately selected patients with abdominal aortic aneurysm (AAA). Although endoleaks are the most common reason for post-EVAR reintervention, compliance with lifelong regular follow-up imaging remains a challenge. Design Retrospective data analysis. Setting The Japan Medical Data Center (JMDC), a claims database with anonymous data linkage across hospitals, consists of corporate employees and their families of ≤75 years of age. Participants The analysis included participants in the JMDC who underwent EVAR or OAR for intact (iAAA) or ruptured (rAAA) AAA. Patients with less than 6 months of records before the aortic repair were excluded. Main outcome measures Overall survival and reintervention rates. Results We identified 986 cases (837 iAAA and 149 rAAA) from JMDC with first aortic repairs between January 2015 and December 2020. The number of patients, median age (years (IQR)), follow-up (months) and post-procedure CT scan (times per year) were as follows: iAAA (OAR: n=593, 62.0 (57.0-67.0), 26.0, 1.6, EVAR: n=244, 65.0 (31.0-69.0), 17.0, 2.2), rAAA (OAR: n=110, 59.0 (53.0-59.0), 16.0, 2.1, EVAR: n=39, 62.0 (31.0-67.0), 18.0, 2.4). Reintervention rate was significantly higher among EVAR than OAR in rAAA (15.4% vs 8.2%, p=0.04). In iAAA, there were no group difference after 5 years (7.8% vs 11.0%, p=0.28), even though EVAR had initial advantage. There were no differences in mortality rate between EVAR and OAR for either rAAA or iAAA. Conclusions Claims-based analysis in Japan showed no statistically significant difference in 5-year survival rates of the OAR and EVAR groups. However, the reintervention rate of EVAR in rAAA was significantly higher, suggesting the need for regular post-EVAR follow-up with imaging. Therefore, international collaborations for long-term outcome studies with real-world data are warranted.
Article
Background As overdoses due to opioids rise, medications for opioid use disorder (MOUD) continue to be underemployed, resulting in limited access to potentially life-saving treatment. Substance use disorders are prevalent in individuals who are incarcerated, and these individuals are at increased risk for death postrelease due to overdose. Few jails and prisons offer MOUD and most limit access. Extended-release buprenorphine (XR-BUP), a novel monthly injectable MOUD formulation, could be uniquely poised to address treatment access in correctional settings. Methods This study linked a retrospective cohort design of statewide datasets to evaluate the real-world use of XR-BUP. The study included individuals (N = 54) who received XR-BUP while incarcerated from January 2019 through February 2022. The study was conducted at the Rhode Island Department of Corrections, with the nation's first comprehensive statewide correctional MOUD program. Results Fifty-four individuals received a combined total of 162 injections during the study period. The study found no evidence of tampering with the injection site, indicating no attempts by participants to remove, hoard, or divert the medication. Sixty-one percent reported at least one adverse effect after injections were received, with an average of 2.8 side effects. Seventy percent of those released on XR-BUP engaged in MOUD after release, 30 % continued with XR-BUP. Conclusions XR-BUP is feasible and acceptable in correctional settings. XR-BUP addresses administrative concerns of diversion that obstruct lifesaving MOUD and offers another safe and effective treatment option. Further studies and trials should continue to assess this novel medication's ability to treat opioid addiction in the correctional setting and upon release to the community.
Article
Drug repurposing is one of the major fields of Value-Added Medicines. It involves the investigation and evaluation of existing drugs for new therapeutic purposes that address unmet healthcare needs. There are several unmet needs in allergic rhinitis that could be improved by drug repurposing. This could be game-changing for disease management. The current medications for allergic rhinitis are centered around a continuous long-term treatment, and medication registration is based on randomized controlled trials carried out for a minimum of 14 days with adherence ≥70%. A new way of treating allergic rhinitis is to propose an as-needed treatment depending on symptoms, rather than the classical continuous treatment. This rostrum will discuss the existing clinical trials on as-needed treatment for allergic rhinitis and real-world data obtained by the mobile health app MASK-air that has a focus on digitally-enabled, patient-centered care pathways.
Article
The three horizons model is a framework that helps manage an organization's innovation strategy. This model considers three aspects (horizons) that should be present in the institution and guide the development of new systems. Applied to medical science, the horizons are considered as paradigms that set the guidelines for clinical knowledge. New technologies can influence this model by causing disruptive changes. Horizon 1 (evidence-based medicine) reflects the current paradigm and emphasizes the aspect of continuous improvement needed to strengthen it, such as with the introduction of the GRADE (Grades of Recommendation Assessment, Development, and Evaluation) methodology. Evidence-based medicine has made it possible to stop performing harmful interventions like autologous bone marrow or stem cell transplantation in cancer treatment for women with early poor prognosis breast cancer or to discontinue the erroneous belief that children should not sleep on their backs to prevent sudden infant death syndrome. Horizon 2 (real-world evidence) refers to a new model in which innovation has generated new capabilities. This change makes it possible to correct weaknesses of the previous paradigm, as in the case of pragmatic clinical trials. Real-world evidence has been used to show that drugs such as tofacitinib are effective without using methotrexate as background or to demonstrate the efficacy of chemotherapy in older patients with stage II colon cancer. Horizon 3 (precision medicine) involves a disruptive innovation, leading to the abandonment of the traditional mechanistic model of medical science and is made possible by the appearance of major advances such as artificial intelligence. Precision medicine has been used to assess the use of retigabine for the treatment of refractory epilepsy or to define a genome-adjusted radiation dose using a biological model to simulate the response to radiotherapy, facilitate dose adjustment and predict outcome in breast cancer.
Article
Aim: For people with suboptimally controlled type 2 diabetes (T2D) on basal insulin (BI), guidelines recommend several treatment advancement options. This study compared the clinical effectiveness of once-daily iGlarLixi versus a multiple-injection BI + rapid acting insulin (RAI) regimen in adults with T2D advancing from BI therapy in real-world clinical practice. Materials and methods: Electronic medical records from the Observational Medical Outcomes Partnership (OMOP) database were analysed retrospectively using propensity score matching to compare therapy advancement with iGlarLixi or BI + RAI in US adults ≥18 years with T2D on BI who had ≥1 valid glycated haemoglobin (HbA1c) value at baseline and at the 6-month follow-up. The primary objective was non-inferiority of iGlarLixi to BI + RAI in HbA1c change from baseline to 6 months (margin 0.3%). Results: Propensity score matching generated cohorts with balanced baseline characteristics (N = 814 in each group). HbA1c reduction from baseline to 6 months with iGlarLixi was non-inferior to BI + RAI [mean difference (95% confidence interval): 0.1 (-0.1, 0.2)%; one-sided p = .0032]. At 6 months, weight gain was significantly lower with iGlarLixi than with BI + RAI [-0.8 (-1.3, -0.2) kg; two-sided p = .0069]. Achievement of HbA1c <7% without hypoglycaemia and weight gain were similar between groups [odds ratio (95% confidence interval): 1.15 (0.81, 1.63); p = .4280]. Hypoglycaemia was low in both groups, probably because of underreporting. Conclusions: In real-world clinical practice, glycaemic outcomes 6 months after treatment advancement from BI are similar for people with T2D using iGlarLixi versus BI + RAI, with iGlarLixi leading to less weight gain.
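The non-inferiority logic in the abstract above, concluding non-inferiority when the confidence bound of the HbA1c difference stays below the 0.3% margin, can be sketched briefly. The numbers and the simple two-sample z-approach are illustrative assumptions, not the study's propensity-matched analysis.

```python
# Hedged sketch of a non-inferiority check for a difference in HbA1c change.
import numpy as np
from scipy import stats

def noninferiority(delta_test, delta_ref, margin=0.3, alpha=0.05):
    """One-sided upper confidence bound for mean(test) - mean(ref); non-inferior if < margin."""
    diff = np.mean(delta_test) - np.mean(delta_ref)
    se = np.sqrt(np.var(delta_test, ddof=1) / len(delta_test) +
                 np.var(delta_ref, ddof=1) / len(delta_ref))
    upper = diff + stats.norm.ppf(1 - alpha) * se
    return diff, upper, upper < margin

# Simulated 6-month HbA1c changes (%) for two matched cohorts of 814 patients each
rng = np.random.default_rng(0)
iglarlixi = rng.normal(-1.0, 1.2, 814)
bi_rai = rng.normal(-1.1, 1.2, 814)
diff, upper, non_inferior = noninferiority(iglarlixi, bi_rai)
print(f"difference = {diff:.2f}%, upper bound = {upper:.2f}%, non-inferior: {non_inferior}")
```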
Article
Full-text available
Background: Studies evaluating the effects of cancer treatments are prone to immortal time bias that, if unaddressed, can lead to treatments appearing more beneficial than they are. Methods: To demonstrate the impact of immortal time bias, we compared results across several analytic approaches (dichotomous exposure, dichotomous exposure excluding immortal time, time-varying exposure, landmark analysis, clone-censor-weight method), using surgical resection among women with metastatic breast cancer as an example. All adult women diagnosed with incident metastatic breast cancer from 2013-2016 in the National Cancer Database were included. To quantify immortal time bias, we also conducted a simulation study where the "true" relationship between surgical resection and mortality was known. Results: 24,329 women (median age 61, IQR 51-71) were included, and 24% underwent surgical resection. The largest association between resection and mortality was observed when using a dichotomized exposure (HR=0.54, 95% CI=0.51-0.57), followed by dichotomous with exclusion of immortal time (HR=0.62, 95% CI=0.59-0.65). Results from the time-varying exposure, landmark, and clone-censor-weight method analyses were closer to the null (HRs=0.67-0.84). Results from the plasmode simulation found that the time-varying exposure, landmark, and clone-censor-weight method models all produced unbiased HRs (bias -0.003 to 0.016). Both standard dichotomous exposure (HR=0.84, bias -0.177) and dichotomous with exclusion of immortal time (HR=0.93, bias -0.074) produced meaningfully biased estimates. Conclusions: Researchers should use time-varying exposures with a treatment assessment window or the clone-censor-weight method when immortal time is present. Impact: Using methods that appropriately account for immortal time will improve evidence and decision-making from research using real-world data.
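Of the approaches named above, the time-varying exposure analysis is perhaps the most direct remedy: person-time before treatment is counted as unexposed, so it cannot create immortal time. The sketch below, using lifelines' counting-process (start/stop) Cox model, is a hedged illustration with assumed column names, not the authors' analysis.

```python
# Hedged sketch: time-varying exposure Cox model in start/stop format.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# One row per patient per exposure interval; e.g. a patient resected at month 4 who
# died at month 20 contributes (start=0, stop=4, surgery=0) and (start=4, stop=20,
# surgery=1, death=1). File and column names are hypothetical.
long_df = pd.read_csv("mbc_long_format.csv")
# columns: patient_id, start, stop, surgery, death (plus any covariates)

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="patient_id", start_col="start", stop_col="stop",
        event_col="death")
ctv.print_summary()   # hazard ratio for 'surgery' without immortal-time bias
```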
Article
Background: Few studies have developed automatic systems for identifying social distress, spiritual pain, and severe physical and psychological symptoms from text data in electronic medical records. Aim: To develop models to detect social distress, spiritual pain, and severe physical and psychological symptoms in terminally ill patients with cancer from unstructured text data contained in electronic medical records. Design: A retrospective study that analyzed 1,554,736 narrative clinical records from the month before patients died. Supervised machine learning models were trained to detect comprehensive symptoms, and the performance of the models was tested using the area under the receiver operating characteristic curve (AUROC) and the precision-recall curve (AUPRC). Setting/participants: A total of 808 patients was included in the study using records obtained from a university hospital in Japan between January 1, 2018 and December 31, 2019. As training data, we used medical records labeled for detecting social distress (n = 10,000) and spiritual pain (n = 10,000), and records that could be combined with the Support Team Assessment Schedule (based on date) for detecting severe physical/psychological symptoms (n = 5409). Results: Machine learning models for detecting social distress had AUROC and AUPRC values of 0.98 and 0.61, respectively; values for spiritual pain were 0.90 and 0.58, respectively. The machine learning models accurately identified severe symptoms (pain, dyspnea, nausea, insomnia, and anxiety) with a high level of discrimination (AUROC > 0.8). Conclusion: The machine learning models could detect social distress, spiritual pain, and severe symptoms in terminally ill patients with cancer from text data contained in electronic medical records.
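As a hedged sketch of this kind of pipeline, a simple bag-of-words classifier evaluated with AUROC and AUPRC is shown below; it stands in for the study's models, and the file and column names are assumptions (real clinical notes, particularly Japanese text, would also need language-appropriate tokenisation).

```python
# Hedged sketch: supervised text classifier for flagging, e.g., social distress in notes.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

notes = pd.read_csv("labeled_notes.csv")       # hypothetical: columns 'text' and 'label'
X_train, X_test, y_train, y_test = train_test_split(
    notes["text"], notes["label"], test_size=0.2,
    stratify=notes["label"], random_state=42)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=5),
                    LogisticRegression(max_iter=1000, class_weight="balanced"))
clf.fit(X_train, y_train)

scores = clf.predict_proba(X_test)[:, 1]
print("AUROC:", round(roc_auc_score(y_test, scores), 3))
print("AUPRC:", round(average_precision_score(y_test, scores), 3))
```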
Article
Full-text available
The accuracy and precision of the Instant Blood Pressure app are evaluated amid concerns that individuals may use these apps to assess their blood pressure and titrate therapy. Mobile health (mHealth) technologies include unregulated consumer smartphone apps.1 The Instant Blood Pressure app (IBP; AuraLife) estimates blood pressure (BP) using a technique in which the top edge of the smartphone is placed on the left side of the chest while the individual places his or her right index finger over the smartphone’s camera. Between its release on June 5, 2014, and removal on July 30, 2015 (421 days), the IBP app spent 156 days as one of the top 50 best-selling iPhone apps; at least 950 copies of this $4.99 app were sold on each of those days.2 Validation of this popular app, or of any of the similar iPhone apps still available (eg, Blood Pressure Pocket, Quick Blood Pressure Measure and Monitor), has not been performed. Using a protocol based on national guidelines,3 we investigated the accuracy and precision of IBP.
Article
Full-text available
Cluster randomized trials randomly assign groups of individuals to examine research questions or test interventions and measure their effects on individuals. Recent emphasis on quality improvement, comparative effectiveness, and learning health systems has prompted expanded use of pragmatic cluster randomized trials in routine health-care settings, which in turn poses practical and ethical challenges that current oversight frameworks may not adequately address. The 2012 Ottawa Statement provides a basis for considering many issues related to pragmatic cluster randomized trials but challenges remain, including some arising from the current US research and health-care regulations. In order to examine the ethical, regulatory, and practical questions facing pragmatic cluster randomized trials in health-care settings, the National Institutes of Health Health Care Systems Research Collaboratory convened a workshop in Bethesda, Maryland, in July 2013. Attendees included experts in clinical trials, patient advocacy, research ethics, and research regulations from academia, industry, the National Institutes of Health Collaboratory, and other federal agencies. Workshop participants identified substantial barriers to implementing these types of cluster randomized trials, including issues related to research design, gatekeepers and governance in health systems, consent, institutional review boards, data monitoring, privacy, and special populations. We describe these barriers and suggest means for understanding and overcoming them to facilitate pragmatic cluster randomized trials in health-care settings.
Article
Full-text available
The BJC is owned by Cancer Research UK, a charity dedicated to understanding the causes, prevention and treatment of cancer and to making sure that the best new treatments reach patients in the clinic as quickly as possible. The journal reflects these aims. It was founded more than fifty years ago and, from the start, its far-sighted mission was to encourage communication of the very best cancer research from laboratories and clinics in all countries. The breadth of its coverage, its editorial independence and its consistently high standards have made the BJC one of the world's premier general cancer journals. Its increasing popularity is reflected by a steadily rising impact factor.
Article
Full-text available
We estimate hedonic price indexes for clinical trial research, an important component of biomedical R&D, using a large sample of agreements between trial sponsors and clinical investigators obtained from MediData Solutions Worldwide Inc. Nominal prices measured as total grant cost per patient rose by a factor of 4.5 between 1989 and 2011, while the NIH Biomedical R&D Price Index (BRDPI) focused on input costs rose only 2.2-fold. Most of the disparity appears to be attributable to changes in the nature and organization of clinical trials: during this period the average number of patients per site fell substantially while “site work effort” more than doubled. After controlling for these changes in the characteristics of investigator agreements using a variety of methods based on hedonic regressions, we find that the estimated rate of inflation in clinical trial costs tracks the BRDPI very closely. Results from this study suggest that it should be feasible for statistical agencies to develop a producer price index for this type of R&D activity, contributing to broader efforts to develop a deflator for contracted R&D services.
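The core idea of a hedonic price index, regressing log price on time dummies plus quality characteristics and reading the index off the exponentiated time coefficients, can be sketched briefly. This is a generic illustration under assumed variable names, not the authors' specification.

```python
# Hedged sketch of a hedonic regression for a quality-adjusted price index.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

grants = pd.read_csv("trial_grants.csv")   # hypothetical: cost_per_patient, year,
                                           # patients_per_site, work_effort, phase, ...
grants["log_cost"] = np.log(grants["cost_per_patient"])

model = smf.ols("log_cost ~ C(year) + np.log(patients_per_site) + work_effort + C(phase)",
                data=grants).fit(cov_type="HC1")

# Exponentiated year-dummy coefficients give price levels relative to the base year
hedonic_index = np.exp(model.params.filter(like="C(year)"))
print(hedonic_index.round(2))
```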
Article
Full-text available
Mobile, social, real-time: the ongoing revolution in the way people communicate has given rise to a new kind of epidemiology. Digital data sources, when harnessed appropriately, can provide local and timely information about disease and health dynamics in populations around the world. The rapid, unprecedented increase in the availability of relevant data from various digital sources creates considerable technical and computational challenges.
Article
Federal regulatory frameworks governing medical products are designed to (1) provide evidence that a product benefits patients when used as intended and should be available despite accompanying risks and (2) ensure timely access to needed therapies and diagnostics. Historically, policy makers and product developers have viewed these objectives as being in tension. However, ensuring safety, expediting patient access, and enabling innovation can be complementary goals within a regulatory framework for medical devices.
Article
Patients, clinicians, and policymakers alike need access to high-quality scientific evidence in order to make informed choices about health and healthcare, but the current national clinical trials enterprise is not yet optimally configured for the efficient creation and dissemination of such evidence. However, new technologies and methods hold significant potential for accelerating the rate at which we are able to translate raw findings gathered from both patient care and clinical research into actionable knowledge. We are now entering a period in which the quantitative sciences are emerging as the critical disciplines for advancing knowledge about health and healthcare, and statisticians will increasingly serve as critical mediators in transforming data into evidence. In this new, data-centric era, biostatisticians not only need to be expert at analyzing data but should also be involved directly in diverse efforts, including the review and analysis of research portfolios in order to optimize the relevance of research questions, the use of “quality by design” principles to improve reliability and validity of each individual trial, and the mining of aggregate knowledge derived from the clinical research enterprise as a whole. In order to meet these challenges, it is imperative that we (1) nurture and build the biostatistical workforce, (2) develop a deeper understanding of the biological and clinical context among statisticians, (3) facilitate collaboration among biostatisticians and other members of the clinical trials enterprise, (4) focus on communication skills in training and education programs, and (5) enhance the quantitative capacity of the research and clinical practice worlds.
Article
Clinical decision making regarding the appropriate use of aspirin for the primary prevention of atherosclerotic cardiovascular disease (ASCVD) events is a complex process that requires assessment of the benefits and risks for each patient. Critically important elements of the process include evaluation of the patient’s absolute risk of ASCVD (the primary determinant of potential benefit from aspirin), the patient’s absolute risk of bleeding (the primary determinant of potential risk), and the patient’s willingness to undergo long-term therapy.¹ Despite numerous general guidelines on the use of aspirin for primary prevention, there is limited formal guidance in making these parallel assessments of benefit and risk or in using this information to identify appropriate patients for treatment. Inappropriate use of aspirin for primary prevention is common in clinical practice,² highlighting the important need for improving evidence-based decision making about aspirin use and for providing tools to facilitate this benefit/risk assessment.
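To make the parallel benefit/risk assessment concrete, here is a minimal sketch that compares expected ASCVD events prevented against expected major bleeds caused over ten years for a hypothetical patient; the absolute risks and relative-effect figures are placeholders for illustration, not guideline values or clinical advice.

```python
# Hedged sketch of an aspirin benefit/risk comparison for primary prevention.
# All numbers are illustrative assumptions, not clinical recommendations.

def aspirin_net_events(
    ascvd_risk_10yr: float,      # patient's 10-year absolute ASCVD risk
    bleed_risk_10yr: float,      # patient's 10-year absolute major-bleeding risk
    ascvd_rel_reduction: float,  # assumed relative reduction in ASCVD events on aspirin
    bleed_rel_increase: float,   # assumed relative increase in major bleeds on aspirin
) -> dict:
    events_prevented = ascvd_risk_10yr * ascvd_rel_reduction
    bleeds_caused = bleed_risk_10yr * bleed_rel_increase
    return {
        "ascvd_events_prevented_per_100": 100 * events_prevented,
        "major_bleeds_caused_per_100": 100 * bleeds_caused,
        "net_events_avoided_per_100": 100 * (events_prevented - bleeds_caused),
    }

# Example: hypothetical patient with 12% ASCVD risk and 3% bleeding risk over 10 years,
# evaluated under assumed (placeholder) relative effects of aspirin.
print(aspirin_net_events(0.12, 0.03, ascvd_rel_reduction=0.10, bleed_rel_increase=0.50))
```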
Article
Elevated postprandial blood glucose levels constitute a global epidemic and a major risk factor for prediabetes and type II diabetes, but existing dietary methods for controlling them have limited efficacy. Here, we continuously monitored week-long glucose levels in an 800-person cohort, measured responses to 46,898 meals, and found high variability in the response to identical meals, suggesting that universal dietary recommendations may have limited utility. We devised a machine-learning algorithm that integrates blood parameters, dietary habits, anthropometrics, physical activity, and gut microbiota measured in this cohort and showed that it accurately predicts personalized postprandial glycemic response to real-life meals. We validated these predictions in an independent 100-person cohort. Finally, a blinded randomized controlled dietary intervention based on this algorithm resulted in significantly lower postprandial responses and consistent alterations to gut microbiota configuration. Together, our results suggest that personalized diets may successfully modify elevated postprandial blood glucose and its metabolic consequences. VIDEO ABSTRACT.
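A minimal sketch of the general modeling approach described above, assuming a gradient-boosted regression over the kinds of features the study names (blood parameters, dietary habits, anthropometrics, activity, microbiota); the feature names and data are synthetic stand-ins, not the cohort's actual measurements or the authors' exact model.

```python
# Sketch: predict postprandial glycemic response from multi-modal features
# with a gradient-boosted regressor. Features and data are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 5000
X = pd.DataFrame({
    "meal_carbs_g": rng.uniform(10, 120, n),
    "hba1c_pct": rng.uniform(4.5, 7.5, n),
    "bmi": rng.uniform(18, 40, n),
    "hours_since_last_meal": rng.uniform(1, 12, n),
    "steps_prev_hour": rng.integers(0, 4000, n),
    "microbiome_pc1": rng.normal(0, 1, n),   # e.g. a microbiota composition summary score
})
# Synthetic target: carbohydrates drive the response, modulated by the other features.
y = (
    0.8 * X["meal_carbs_g"]
    + 8 * (X["hba1c_pct"] - 5.5)
    + 0.5 * (X["bmi"] - 25)
    - 0.003 * X["steps_prev_hour"]
    + 5 * X["microbiome_pc1"]
    + rng.normal(0, 10, n)
)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("held-out R^2:", round(r2_score(y_test, model.predict(X_test)), 2))
```

The key design point mirrored here is that the model is evaluated on held-out individuals, which is what allows per-person predictions rather than one universal carbohydrate rule.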
Article
The need for high-quality evidence to support decision making about health and health care by patients, physicians, care providers, and policy-makers is well documented. However, serious shortcomings in evidence persist. Pragmatic clinical trials that use novel techniques including emerging information and communication technologies to explore important research questions rapidly and at a fraction of the cost incurred by more "traditional" research methods promise to help close this gap. Nevertheless, while pragmatic clinical trials can bridge clinical practice and research, they may also raise difficult ethical and regulatory challenges. In this article, the authors briefly survey the current state of evidence that is available to inform clinical care and other health-related decisions and discuss the potential for pragmatic clinical trials to improve this state of affairs. They then propose a new working definition for pragmatic research that centers upon fitness for informing decisions about health and health care. Finally, they introduce a project, jointly undertaken by the National Institutes of Health Health Care Systems Research Collaboratory and the National Patient-Centered Clinical Research Network (PCORnet), which addresses 11 key aspects of current systems for regulatory and ethical oversight of clinical research that pose challenges to conducting pragmatic clinical trials. In the series of articles commissioned on this topic published in this issue of Clinical Trials, each of these aspects is addressed in a dedicated article, with a special focus on the interplay between ethical and regulatory considerations and pragmatic clinical research aimed at informing "real-world" choices about health and health care.
Article
Like many physicians, Suzanne Clough, MD, struggled to meet her patients’ needs regarding their type 2 diabetes in a few 12-minute visits each year. But too often, patients’ concerns about day-to-day condition management weren’t fully addressed. Many were frustrated, and some didn’t follow her guidance because they weren’t seeing results. The recommendations, she said, “didn’t have value [for them].” Clough wondered whether real-time, 24/7 diabetes management support would help. That question led her on a 10-year journey to develop the WellDoc BlueStar mobile app for patients with type 2 diabetes. It analyzes trends in patient-entered data on blood glucose level, carbohydrate consumption, medication use, and other information to provide real-time coaching for the patient. Patients can then securely share the data with their physician through a web portal.
Article
Background: Current guidelines recommend the use of intravenous (IV) vasodilators in addition to IV loop diuretics for the treatment of acute heart failure (AHF) patients without hypotension. The evidence basis for these recommendations is limited. Methods and results: Hospital billing records for 82,808 AHF patients in the United States were analyzed. Patients receiving IV loop diuretics alone were paired with patients receiving IV loop diuretics + IV nitrates or IV nesiritide with the use of propensity score matching, excluding those with hypotension and/or evidence of cardiogenic shock, myocardial infarction, or acute coronary syndrome. Compared with paired patients receiving IV loop diuretics alone, in-hospital mortality was similar among IV loop diuretics + IV nitrates patients (n = 4,401; 1.9% vs 2.0%; P = .88) and marginally higher for IV loop diuretics + IV nesiritide patients (n = 2,254; 2.2% vs 3.1%; P = .05). Compared with paired IV loop diuretics patients, IV loop diuretics + IV nitrates or IV nesiritide had longer lengths of stay (+1.6 and +2.1 days; P < .01) and 57% higher costs (P < .01). Conclusions: Among hospitalized AHF patients, the addition of IV vasodilators to IV loop diuretics did not lower inpatient mortality or rehospitalization rates compared with loop diuretics alone, and was associated with longer lengths of stay and higher hospitalization costs. Although the lack of complete clinical, socioeconomic, and post-discharge data may have confounded these results, this analysis questions whether currently available IV vasodilators can improve outcomes in hospitalized AHF patients.
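A minimal sketch of the propensity-score-matching step described above, assuming a simple logistic model of treatment assignment followed by 1:1 nearest-neighbor matching on the score (with replacement, as a simplification); the covariates and toy data are illustrative, not the billing-record variables used in the study.

```python
# Sketch: 1:1 nearest-neighbor propensity score matching, then compare outcomes.
# Covariates and data are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n = 10000
df = pd.DataFrame({
    "age": rng.normal(72, 10, n),
    "creatinine": rng.normal(1.3, 0.4, n),
    "systolic_bp": rng.normal(135, 20, n),
})
# Treatment (diuretic + vasodilator) is more likely at higher blood pressure;
# the outcome depends on age and renal function, not on treatment, in this toy setup.
p_treat = 1 / (1 + np.exp(-(-4 + 0.03 * df["systolic_bp"])))
df["treated"] = rng.random(n) < p_treat
p_death = 1 / (1 + np.exp(-(-6 + 0.04 * df["age"] + 0.8 * df["creatinine"])))
df["died"] = rng.random(n) < p_death

# Step 1: estimate propensity scores from observed covariates.
X = df[["age", "creatinine", "systolic_bp"]]
df["pscore"] = LogisticRegression(max_iter=1000).fit(X, df["treated"]).predict_proba(X)[:, 1]

# Step 2: match each treated patient to the nearest untreated patient on the score.
treated = df[df["treated"]]
control = df[~df["treated"]]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# Step 3: compare in-hospital mortality within the matched pairs.
print("treated mortality:", round(treated["died"].mean(), 3))
print("matched control mortality:", round(matched_control["died"].mean(), 3))
```

As the abstract's own caveat notes, matching can only balance measured covariates; unmeasured clinical, socioeconomic, and post-discharge factors remain potential confounders.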
Article
The ratio of false-positive to false-negative findings (FP:FN ratio) is an informative metric that warrants further evaluation. The FP:FN ratio varies greatly across different epidemiologic areas. In genetic epidemiology, it has varied from very high values (possibly even >100:1) for associations reported in candidate-gene studies to very low values (1:100 or lower) for associations with genome-wide significance. The substantial reduction over time in the FP:FN ratio in human genome epidemiology has corresponded to the routine adoption of stringent inferential criteria and comprehensive, agnostic reporting of all analyses. Most traditional fields of epidemiologic research more closely follow the practices of past candidate gene epidemiology, and thus have high FP:FN ratios. Further, FP and FN results do not necessarily entail the same consequences, and their relative importance may vary in different settings. This ultimately has implications for what is the acceptable FP:FN ratio and for how the results of published epidemiologic studies should be presented and interpreted.
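The dependence of the FP:FN ratio on inferential stringency can be illustrated with a short calculation: for a given power and prior proportion of true associations pi, the expected ratio is roughly alpha * (1 - pi) / ((1 - power) * pi). The thresholds, power, and prior below are illustrative placeholders, not estimates from any particular field.

```python
# Illustrative FP:FN calculation: how the significance threshold shifts the ratio.
# Power and the prior proportion of true associations are assumed placeholders.

def fp_fn_ratio(alpha: float, power: float, prior_true: float) -> float:
    """Expected false positives per false negative among tested associations."""
    fp = alpha * (1 - prior_true)    # null associations declared significant
    fn = (1 - power) * prior_true    # true associations that are missed
    return fp / fn

# Candidate-gene-style testing: lenient threshold, few true associations among candidates.
print("alpha=0.05 :", round(fp_fn_ratio(alpha=0.05, power=0.5, prior_true=0.001), 1))
# Genome-wide significance: stringent threshold, same prior and power.
print("alpha=5e-8 :", round(fp_fn_ratio(alpha=5e-8, power=0.5, prior_true=0.001), 6))
```

Under these placeholder inputs the ratio falls from roughly 100:1 to far below 1:100 as the threshold tightens, which mirrors the contrast the abstract draws between candidate-gene and genome-wide eras.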
Cattell J, Chilukuri S, Levy M. How big data can revolutionize pharmaceutical R&D. McKinsey & Company, April 2013 (http://www.mckinsey.com/insights/health_systems_and_services/how_big_data_can_revolutionize_pharmaceutical_r_and_d).
Meldrum M. "A calculated risk": the Salk polio vaccine field trials of 1954. BMJ 1998; 317: 1233-6.
National Patient-Centered Clinical Research Network. ADAPTABLE, the aspirin study - a patient-centered trial (http://www.pcornet.org/aspirin/).
Cziraky M, Pollock M. Real-world evidence studies. Applied Clinical Trials. October 12, 2015 (http://www.appliedclinicaltrialsonline.com/real-world-evidence-studies?pageID=1).
Berndt ER, Cockburn IM. Price indexes for clinical trial research: a feasibility study. Bureau of Labor Statistics Monthly Labor Review. June 2014 (http://www.bls.gov/opub/mlr/2014/article/price-indexes-for-clinical-trial-research-a-feasibility-study-1.htm).
Tavris DR, Dey S, Albrecht-Gallauresi B, et al. Risk of local adverse events following cardiac catheterization by hemostasis device use - phase II. J Invasive Cardiol 2005; 17: 644-50.
Food and Drug Administration. Pradaxa (dabigatran etexilate mesylate): drug safety communication - safety review of postmarket reports of serious bleeding events (http://www.fda.gov/Safety/MedWatch/SafetyInformation/SafetyAlertsforHumanMedicalProducts/ucm282820.htm).
Richesson R, Smerek M. Electronic health records-based phenotyping. In: Rethinking clinical trials: a living textbook of pragmatic clinical trials. June 27, 2014 (https://sites.duke.edu/rethinkingclinicaltrials/ehr-phenotyping/).
Kramer JM, Schulman KA. Appendix F - discussion paper: transforming the economics of clinical trials. In: Institute of Medicine. Envisioning a transformed clinical trials enterprise in the United States: establishing an agenda for 2020: workshop summary. Washington, DC: National Academies Press, 2012 (http://www.ncbi.nlm.nih.gov/books/NBK114653/).
Food and Drug Administration. National Evaluation System for Health Technology (NEST) (http://www.fda.gov/AboutFDA/CentersOffices/OfficeofMedicalProductsandTobacco/CDRH/CDRHReports/ucm301912.htm).
Califf RM, Sugarman J. Exploring the ethical and regulatory issues in pragmatic clinical trials. Clin Trials 2015; 12: 436-41.
National Institutes of Health Health Care Systems Research Collaboratory. Demonstration projects (https://www.nihcollaboratory.org/demonstration-projects/Pages/default.aspx).
Booth CM, Tannock IF. Randomised controlled trials and population-based observational research: partners in the evolution of medical evidence. Br J Cancer 2014; 110: 551-5.
Pearl J. Causality: models, reasoning, and inference. 2nd ed. New York: Cambridge University Press, 2009: 350.
Food and Drug Administration. Summary of safety and effectiveness data: HeartWare ventricular assist device. Section X. Summary of primary clinical study (http://www.accessdata.fda.gov/cdrh_docs/pdf10/P100047b.pdf).
Food and Drug Administration. FDA news release: FDA approval expands access to artificial heart valve for inoperable patients. September 23, 2013 (http://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm369510.htm).
Food and Drug Administration. PDUFA reauthorization performance goals and procedures fiscal years 2018 through 2022 (http://www.fda.gov/downloads/ForIndustry/UserFees/PrescriptionDrugUserFee/UCM511438.pdf).
ClinicalTrials.gov. What is a clinical study? (https://clinicaltrials.gov/ct2/about-studies/learn#WhatIs).
Food and Drug Administration. FDA-industry MDUFA IV reauthorization meeting. August 15, 2016 (http://www.fda.gov/downloads/ForIndustry/UserFees/MedicalDeviceUserFee/UCM518203.pdf).