Evaluation Guidelines for the Clinical and Translational Science Awards (CTSAs)
Department of Policy Analysis and Management, Cornell University, Ithaca, New York, USA. Clinical and Translational Science (Impact Factor: 1.43). 08/2013; 6(4):303-9. DOI: 10.1111/cts.12036
The National Center for Advancing Translational Sciences (NCATS), a part of the National Institutes of Health, currently funds the Clinical and Translational Science Awards (CTSAs), a national consortium of 61 medical research institutions in 30 states and the District of Columbia. The program seeks to transform the way biomedical research is conducted, speed the translation of laboratory discoveries into treatments for patients, engage communities in clinical research efforts, and train a new generation of clinical and translational researchers. An endeavor as ambitious and complex as the CTSA program requires high-quality evaluations in order to show that the program is well implemented, efficiently managed, and demonstrably effective. In this paper, the Evaluation Key Function Committee of the CTSA Consortium presents an overall framework for evaluating the CTSA program and offers policies to guide the evaluation work. The guidelines set forth are designed to serve as a tool for education within the CTSA community by illuminating key issues and practices that should be considered during evaluation planning, implementation, and utilization. Additionally, these guidelines can provide a basis for ongoing discussions about how the principles articulated in this paper can most effectively be translated into operational reality.
ABSTRACT: The Clinical and Translational Science Award (CTSA) program is an ambitious multibillion-dollar initiative sponsored by the National Institutes of Health (NIH), organized around the mission of improving the quality, efficiency, and effectiveness of translational health sciences research across the country. Although the NIH explicitly requires internal evaluation, funded CTSA institutions are given wide latitude to choose the structure and methods for evaluating their local CTSA program. The National Evaluators Survey was developed by a peer-led group of local CTSA evaluators as a voluntary effort to understand emerging differences and commonalities in evaluation teams and techniques across the 61 CTSA institutions funded nationwide. This article presents the results of the 2012 National Evaluators Survey, finding significant heterogeneity in evaluation staffing, organization, and methods across the 58 CTSA institutions responding. The variety reflected in these findings represents both a liability and a strength: a lack of standardization may impair the ability to make use of common metrics, but variation is also a successful evolutionary response to complexity. Additionally, the peer-led approach and simple design of the questionnaire itself have value as an example of an evaluation technique with potential for replication in other areas across the CTSA institutions, or in any large-scale investment where multiple related teams across a wide geographic area are given the latitude to develop specialized approaches to fulfilling a common mission.
Evaluation & the Health Professions 12/2013; 36(4):447-63. DOI: 10.1177/0163278713510378 · 1.91 Impact Factor
Conference Paper: The Role of Quality Improvement Methods in Translational Research
ABSTRACT: Over the past several decades, interest in translational research to improve healthcare has been growing. The National Institutes of Health (NIH) explicitly made translational research a central priority and has invested heavily in developing an infrastructure of Clinical and Translational Science Awards (CTSAs). In this article, we address how quality improvement (QI) methodologies, particularly lean and six sigma, can be used to better understand and measure the processes of translating research from the basic sciences to clinical application and into practice. The main vehicle for this activity has been the CTSA's Research Translation Mapping and Measurement (RTMM) Interest Group, which supported the development of a framework for applying the DMAIC (define, measure, analyze, improve, control) methodology to translational research. The important insights gained from using QI in this context are presented and discussed.
In Proceedings of The Industrial and Systems Engineering Research Conference, Montreal, Canada; 05/2014
ABSTRACT: The National Institutes of Health (NIH) Roadmap for Medical Research initiative, funded by the NIH Common Fund and offered through the Clinical and Translational Science Award (CTSA) program, developed more than 60 unique models for achieving the NIH goal of accelerating discoveries toward better public health. The variety of these models enabled participating academic centers to experiment with different approaches to fit their research environments. A central challenge related to this diversity of approaches is the ability to determine the success and contribution of each model. This paper describes the effort by the Evaluation Key Function Committee to develop and test a methodology for identifying a set of common metrics to assess the efficiency of clinical research processes, and to pilot test the processes for collecting and analyzing those metrics. The project involved more than one-fourth of all CTSAs and yielded useful information regarding the challenges in developing common metrics, the complexity and costs of acquiring data for the metrics, and limitations on the utility of the metrics in assessing clinical research performance. The results of this process led to the identification of lessons learned and recommendations for the development and use of common metrics to evaluate the CTSA effort. Clin Trans Sci 2015; Volume #: 1-9. © 2015 Wiley Periodicals, Inc.
Clinical and Translational Science 06/2015; 8(5). DOI: 10.1111/cts.12296 · 1.43 Impact Factor