ABSTRACT: Funding decisions for cardiovascular R01 grant applications at NHLBI largely hinge on percentile rankings. It is not known whether this approach enables the highest-impact science.
To conduct an observational analysis of percentile rankings and bibliometric outcomes for a contemporary set of funded NHLBI cardiovascular R01 grants.
We identified 1,492 investigator-initiated de novo R01 grant applications that were funded between 2001 and 2008 and followed their progress for linked publications and citations to those publications. Our co-primary endpoints were citations received per million dollars of funding, citations obtained within 2 years of publication, and 2-year citations for each grant's maximally cited paper. In 7,654 grant-years of funding that generated $3,004 million of total NIH awards, the portfolio yielded 16,793 publications that appeared between 2001 and 2012 (median per grant, 8; 25th and 75th percentiles, 4 and 14; range, 0-123), which received 2,224,255 citations (median per grant, 1,048; 25th and 75th percentiles, 492 and 1,932; range, 0-16,295). We found no association between percentile ranking and citation metrics; the absence of association persisted even after accounting for calendar time, grant duration, number of grants acknowledged per paper, number of authors per paper, early-investigator status, human versus non-human focus, and institutional funding. An exploratory machine-learning analysis suggested that grants with the very best percentile rankings did yield more maximally cited papers.
In a large cohort of NHLBI-funded cardiovascular R01 grants, we were unable to find a monotonic association between better percentile ranking and higher scientific impact as assessed by citation metrics.
Circulation Research 01/2014 · 11.86 Impact Factor
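The study's first co-primary endpoint, citations received per million dollars of funding, is a simple per-grant ratio. A minimal sketch with entirely hypothetical grant figures (not data from the study):

```python
def citations_per_million(total_citations, total_award_dollars):
    """Citations received per million dollars of NIH funding for one grant."""
    return total_citations / (total_award_dollars / 1_000_000)

# Hypothetical grants: (total citations, total award in dollars)
grants = [(1048, 2_000_000), (492, 1_500_000), (1932, 2_500_000)]
metrics = [citations_per_million(c, d) for c, d in grants]
# e.g. a grant with 1,048 citations on a $2M award scores 524 citations per $1M
```

Normalizing by award size this way is what lets grants of very different budgets be compared on a single citation-impact scale.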
ABSTRACT: Rapid publication of clinical trials is essential if the findings are to yield maximal benefit for public health and scientific progress. Factors affecting the speed of publication of the main results of government-funded trials have not been well characterized.
We analyzed 244 extramural randomized clinical trials of cardiovascular interventions that were supported by the National Heart, Lung, and Blood Institute (NHLBI). We selected trials for which data collection had been completed between January 1, 2000, and December 31, 2011. Our primary outcome measure was the time between completion of the trial and publication of the main results in a peer-reviewed journal.
As of March 31, 2012, the main results of 156 trials (64%) had been published (Kaplan-Meier median time to publication, 25 months, with 57% published within 30 months). Trials that focused on clinical events were published more rapidly than those that focused on surrogate measures (median, 9 months vs. 31 months; P<0.001). The only independent predictors of more rapid publication were a focus on clinical events rather than surrogate end points (adjusted publication rate ratio, 2.11; 95% confidence interval, 1.26 to 3.53; P=0.004) and higher costs of conducting the trial, up to a threshold of approximately $5 million (P<0.001). The 37 trials that focused on clinical events and cost at least $5 million accounted for 67% of the funds spent on clinical trials but received 82% of the citations. After adjustment of the analysis for a focus on clinical events and for cost, trial results that were classified as positive were published more quickly than those classified as negative.
Results of fewer than two thirds of NHLBI-funded randomized clinical trials of cardiovascular interventions were published within 30 months after completion of the trial. Trials that focused on clinical events were published more quickly than those that focused on surrogate end points. (Funded by the National Heart, Lung, and Blood Institute.)
New England Journal of Medicine 11/2013; 369(20):1926-34. · 51.66 Impact Factor
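The time-to-publication analysis above relies on a Kaplan-Meier estimator, which treats trials still unpublished at the cutoff date as censored rather than discarding them. A minimal sketch of the idea, using hypothetical months-to-publication data rather than anything from the study:

```python
def km_survival(times, events):
    """Kaplan-Meier estimate of the fraction of trials still unpublished.

    times  -- months from trial completion to publication (or to censoring)
    events -- 1 if the trial was published at that time, 0 if still unpublished
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []  # (time, estimated probability of remaining unpublished)
    i = 0
    while i < len(data):
        t = data[i][0]
        published = sum(1 for tt, e in data if tt == t and e)
        leaving = sum(1 for tt, _ in data if tt == t)  # published or censored
        if published:
            s *= 1 - published / n_at_risk
            curve.append((t, s))
        n_at_risk -= leaving
        i += leaving
    return curve

def km_median(curve):
    """First time at which the unpublished fraction falls to 0.5 or below."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None  # median not reached within follow-up

# Hypothetical cohort: four trials published at 6, 12, 18, and 24 months,
# one still unpublished (censored) at 30 months of follow-up.
curve = km_survival([6, 12, 18, 24, 30], [1, 1, 1, 1, 0])
median_months = km_median(curve)
```

With real data a library such as lifelines would also supply confidence intervals; the point here is only that a reported median such as 25 months is a censoring-aware estimate, not a simple median over published trials alone.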
ABSTRACT: The randomized trial is one of the most powerful tools clinical researchers possess, a tool that enables them to evaluate the effectiveness of new (or established) therapies while accounting for the effects of unmeasured confounders and selection bias by indication. Randomized trials, especially huge megatrials, have transformed medical practice. Thanks to randomized trials, we no longer, for example, treat acute myocardial infarction with lidocaine and nitrates. Instead we use rapid revascularization, anticoagulants, and antiplatelet agents, and during long-term follow-up we routinely prescribe statins, beta-blockers, and angiotensin-converting-enzyme inhibitors. But the reputation of randomized trials has suffered of late,(1) owing to reasonable . . .
New England Journal of Medicine 08/2013 · 51.66 Impact Factor
ABSTRACT: Outcomes in pediatric cardiovascular disease have improved significantly in recent decades. The challenge now is to sustain these advances through innovative clinical trials, fundamental molecular investigations, genetics and genomics, and outreach to families emphasizing the importance of participating in research. We describe several such efforts and provide a vision of the future for pediatric cardiovascular research.
Journal of the American College of Cardiology 04/2013 · 14.09 Impact Factor
ABSTRACT: In 2012, the National Cancer Institute (NCI) engaged the scientific community to provide a vision for cancer epidemiology in the 21st century. Eight overarching thematic recommendations, with proposed corresponding actions for consideration by funding agencies, professional societies, and the research community, emerged from the collective intellectual discourse. The themes are (i) extending the reach of epidemiology beyond discovery and etiologic research to include multilevel analysis, intervention evaluation, implementation, and outcomes research; (ii) transforming the practice of epidemiology by moving toward more access to and sharing of protocols, data, metadata, and specimens to foster collaboration, ensure reproducibility and replication, and accelerate translation; (iii) expanding cohort studies to collect exposure, clinical, and other information across the life course and to examine multiple health-related endpoints; (iv) developing and validating reliable methods and technologies to quantify exposures and outcomes on a massive scale, and to assess concomitantly the role of multiple factors in complex diseases; (v) integrating "big data" science into the practice of epidemiology; (vi) expanding knowledge integration to drive research, policy, and practice; (vii) transforming the training of 21st-century epidemiologists to address interdisciplinary and translational research; and (viii) optimizing the use of resources and infrastructure for epidemiologic studies. These recommendations can transform cancer epidemiology, and the field of epidemiology in general, by enhancing transparency, interdisciplinary collaboration, and strategic application of new technologies. They should lay a strong scientific foundation for accelerated translation of scientific discoveries into individual and population health benefits.
Cancer Epidemiology Biomarkers & Prevention 03/2013 · 4.56 Impact Factor
ABSTRACT: In his widely acclaimed book "The Difference," Scott Page, a professor at the University of Michigan, described a computer modeling experiment designed to test the "Diversity Trumps Ability Theorem." The theorem postulates that "collections of diverse individuals outperform collections of more individually capable individuals." The computer model showed that diversity enhanced the ability to solve problems or make accurate predictions (2), but only when 4 conditions were met: 1) the problems were difficult, 2) all problem solvers were "smart" (but not the smartest), 3) diversity was sufficient to assure that different problem solvers could exploit the solutions of others, and 4) the populations of problem solvers and collections of problem solvers were large. As all 4 of these conditions are clearly met in heart, lung, and blood (HLB) research, we were stimulated to examine the diversity of topics and mechanisms in the NHLBI portfolio. [Extract]
Circulation Research 08/2012; 111(7):833-6. · 11.86 Impact Factor
ABSTRACT: About 25 years ago, a group of researchers demonstrated that there is no such thing as the "hot hand" in professional basketball. When a player hits 5 or 7 shots in a row (or misses 10 in a row), what's at work is random variation, nothing more. However, random causes do not stop players, coaches, fans, and media from talking about and acting on "hot hands," telling stories and making choices that ultimately are based on randomness. The same phenomenon occurs in medicine. Some clinical trials with small numbers of events yielded positive findings, which in turn led clinicians, academics, and government officials to talk, telling stories and sometimes making choices that were later shown to be based on randomness. I provide some cardiovascular examples, such as the use of angiotensin receptor blockers for chronic heart failure, nesiritide for acute heart failure, and cytochrome P-450 (CYP) 2C19 genotyping for the acute coronary syndromes. I also review the more general "decline effect," by which drugs appear to yield a lower effect size over time. The decline effect is due at least in part to overinterpretation of small studies, which are more likely to be noticed because of publication bias. As funders of research, we at the National Heart, Lung, and Blood Institute seek to support projects that will yield robust, credible evidence that will affect practice and policy in the right way. We must be alert to the risks of small numbers.
Journal of the American College of Cardiology 07/2012; 60(1):72-4. · 14.09 Impact Factor
ABSTRACT: Policy and science often interact. Typically, we think of policymakers looking to scientists for advice on issues informed by science. We may appreciate less the opposite direction: people outside science informing policies that affect the conduct of science. In clinical medicine, we are forced to make decisions about practices for which the evidence is insufficient or inadequate to know whether they improve clinical outcomes, yet the health care system may not be structured to rapidly generate the needed evidence. For example, when the Centers for Medicare and Medicaid Services noted insufficient evidence to support routine use of computed tomography angiography and called for a national commitment to completion of randomized trials, their call ran into substantial opposition. I use the computed tomography angiography story to illustrate how we might consider a "policy for science" in which stakeholders would band together to identify evidence gaps and use their influence to promote the efficient design, implementation, and completion of high-quality randomized trials. Such a policy for science could create a culture that incentivizes and invigorates the rapid generation of evidence, ultimately engaging all clinicians, all patients, and indeed all stakeholders in the scientific enterprise.
Journal of the American College of Cardiology 06/2012; 59(24):2154-6. · 14.09 Impact Factor
ABSTRACT: Comparative effectiveness research (CER) is the study of existing treatments or ways to deliver health care to determine what intervention works best under specific circumstances. CER evaluates evidence from existing studies or generates new evidence, in different populations and under specific conditions in which the treatments are actually used. CER does not embrace one research design over another but compares treatments and variations in practice using methods that are most likely to yield widely generalizable results that are directly relevant to clinical practice. Treatments used in transfusion medicine (TM) are among the most widely used in clinical practice, but are among the least well studied. High-quality evidence is lacking for most transfusion practices, with research efforts hampered by regulatory restrictions and ethical barriers. To begin addressing these issues, the National Heart, Lung, and Blood Institute convened a workshop in June 2011 to address the potential role of CER in the generation of high-quality evidence for TM decision making. Workshop goals were to: 1) evaluate the current landscape of clinical research, 2) review the potential application of CER methods to clinical research, 3) assess potential barriers to the use of CER methodology, 4) determine whether pilot or vanguard studies can be used to facilitate planning of future CER research, and 5) consider the need for and delivery of training in CER methods for researchers.
ABSTRACT: Over the past 50 years, we have seen dramatic changes in cardiovascular science and clinical care, accompanied by marked declines in morbidity and mortality. Nonetheless, cardiovascular disease remains the leading cause of death and disability in the world, and its nature is changing as Americans become older, fatter, and ethnically more diverse. Instead of a young or middle-aged man with ST-segment elevation myocardial infarction, the "typical" cardiac patient now presents with acute coronary syndrome or with complications related to chronic hypertension or ischemic heart disease, including heart failure, sudden death, and atrial fibrillation. Analogously, structural heart disease is now dominated by degenerative valve disease or congenital disease, far more common than rheumatic disease. The changing clinical scene presents cardiovascular scientists with a number of opportunities and challenges, including taking advantage of high-throughput technologies to elucidate complex disease mechanisms, accelerating the development and implementation of evidence-based strategies, assessing evolving technologies of unclear value, addressing a global epidemic of cardiovascular disease, and maintaining high levels of innovation in a time of budgetary constraint and economic turmoil.