Getting our priorities straight: A novel framework for stakeholder-informed prioritization of cancer genomics research

Center for Medical Technology Policy, Baltimore, Maryland, USA.
Genetics in Medicine: Official Journal of the American College of Medical Genetics (Impact Factor: 7.33). 10/2012; 15(2). DOI: 10.1038/gim.2012.103
Source: PubMed


Prioritization of translational research on genomic tests is critically important given the rapid pace of innovation in genomics. The goal of this study was to evaluate a stakeholder-informed priority-setting framework in cancer genomics.

An external stakeholder advisory group including patients/consumers, payers, clinicians, and test developers used a modified Delphi approach to prioritize six candidate cancer genomic technologies during a 1-day meeting. Nine qualitative priority-setting criteria were considered. We used a directed, qualitative content-analysis approach to investigate the themes of the meeting discussion.

Stakeholders primarily discussed six of the original nine criteria: clinical benefits, population health impacts, economic impacts, analytical and clinical validity, clinical trial implementation and feasibility, and market factors. Several new priority-setting criteria were identified from the workshop transcript, including "patient-reported outcomes," "clinical trial ethics," and "trial recruitment." The new criteria were combined with the prespecified criteria to develop a novel priority-setting framework.
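The aggregation step of such a framework can be illustrated with a minimal sketch. Only the criterion names below come from the abstract; the candidate tests, the 1-9 rating scale, the panel scores, and the median-then-mean aggregation rule are hypothetical assumptions for illustration, not the study's actual procedure.

```python
from statistics import median

# Criterion names follow the abstract; everything else (technologies, 1-9
# scale, panel scores, aggregation rule) is hypothetical and illustrative.
criteria = [
    "clinical benefits", "population health impacts", "economic impacts",
    "analytical and clinical validity", "trial implementation and feasibility",
    "market factors",
]

# ratings[technology][criterion] -> one score per stakeholder (1 = lowest, 9 = highest priority)
ratings = {
    "candidate test A": {c: [7, 8, 6, 9, 7] for c in criteria},
    "candidate test B": {c: [5, 6, 4, 6, 5] for c in criteria},
}

def priority_score(tech_ratings):
    """Median stakeholder score per criterion, averaged (unweighted) across criteria."""
    per_criterion = [median(scores) for scores in tech_ratings.values()]
    return sum(per_criterion) / len(per_criterion)

# Rank candidate technologies from highest to lowest aggregate priority.
for tech in sorted(ratings, key=lambda t: priority_score(ratings[t]), reverse=True):
    print(f"{priority_score(ratings[tech]):.1f}  {tech}")
```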

This study highlights key criteria that stakeholders can consider when prioritizing comparative effectiveness research for cancer genomic applications. Applying an explicit priority-setting framework to inform investment in comparative effectiveness research can help to ensure that critical factors are weighed when deciding between many potential research questions and trial designs.

Available from: Joshua A. Roth, Mar 10, 2014
  • ABSTRACT: Background: Changes that improve the quality of health care should be sustained. Falling back to old, unsatisfactory ways of working wastes resources and can, in the worst case, increase resistance to later initiatives to improve care. Quality improvement relies on changing the clinical system, yet the factors that influence the sustainability of quality improvements are poorly understood. Theoretical frameworks can guide further research on the sustainability of quality improvements. Theories of organizational learning have contributed to a better understanding of organizational change in other contexts. To identify factors contributing to the sustainability of improvements, we use learning theory to explore a case that displayed sustained improvement.
    Methods: Førde Hospital redesigned the pathway for elective surgery and achieved a sustained reduction in cancellation rates. We used a qualitative case study design informed by theory to explore the factors that contributed to sustaining the improvements at Førde Hospital. The model Evidence in the Learning Organization describes how organizational learning contributes to change in healthcare institutions; this model constituted the framework for data collection and analysis. We interviewed a strategic sample of 20 employees. The in-depth interviews covered themes identified through our theoretical framework. Through a process of coding and condensing, we identified common themes that were interpreted in relation to that framework.
    Results: Clinicians and leaders shared information about their everyday work and related this knowledge to how the entire clinical pathway could be improved. In this way they developed a revised and deeper understanding of their clinical system and its interdependencies. They became increasingly aware of how different elements needed to interact to enhance performance and how their own efforts could contribute.
    Conclusions: The improved understanding of the clinical system represented a change in the mental models of employees that influenced how the organization changed its performance. By applying the framework of organizational learning, we learned that changes originating from a new mental model represent double-loop learning. In double-loop learning, deeper system properties are changed, and consequently changes are more likely to be sustained.
    BMC Health Services Research 08/2012; 12(1):235. DOI: 10.1186/1472-6963-12-235 (Impact Factor: 1.71)
  • ABSTRACT: Aims: Comparing the effectiveness of diagnostic tests is one of the highest priorities for comparative effectiveness research (CER) set by the Institute of Medicine. Our study aims to identify what information providers, payers, and patients need from CER on diagnostics, and what challenges they encounter in implementing comparative information on diagnostic alternatives in practice and policy.
    Materials & methods: Using qualitative research methods and the example of two alternative protocols for HER2 testing in breast cancer, we conducted interviews with 45 stakeholders: providers (n = 25) from four academic and eight nonacademic institutions, executives (n = 13) from five major US private payers, and representatives (n = 7) from two breast cancer patient advocacies.
    Results: The need for additional scientific evidence to determine the preferred HER2 protocol was more common for advocates than payers (100 vs. 54%; p = 0.0515) and significantly more common for advocates than providers (100 vs. 40%; p = 0.0077). The availability of information allowing assessment of the implementation impact of alternative diagnostic protocols on provider institutions may mitigate the need for additional scientific evidence for some providers and payers (24 and 46%, respectively). The cost-effectiveness of alternative protocols from the societal perspective is important to payers and advocates (69 and 71%, respectively) but not to providers (0%; p = 0.0001 and p = 0.0001). The lack of reporting of laboratory practices is a more common implementation challenge for payers and advocates (77 and 86%, respectively) than for providers (32%). The absence of any mechanism for patient involvement was recognized as a challenge by payers and advocates (69 and 100%, respectively) but not by providers (0%; p = 0.0001 and p = 0.0001). A minimal sketch of how such small-group proportions can be compared appears after this listing.
    Conclusion: Comparative implementation research is needed to inform stakeholders considering diagnostic alternatives. Transparency of laboratory practices is an important factor in enabling implementation of CER on diagnostics in practice and policy. The incongruent views of providers versus patient advocates and payers on involving patients in diagnostic decisions are a concerning challenge to utilizing the results of CER.
    Journal of Comparative Effectiveness Research 07/2013; 2(4):461-77. DOI: 10.2217/cer.13.42 (Impact Factor: 0.72)
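The pairwise comparisons reported above (e.g., 100 vs. 54%, p = 0.0515) are consistent with two-sided Fisher exact tests on 2x2 tables built from the stated group sizes. The sketch below is an illustration under that assumption; the cell counts (7/7 advocates, 7/13 payers, 10/25 providers) are inferred from the reported percentages and group sizes, not taken directly from the paper.

```python
# Assumed counts, reconstructed from group sizes and percentages in the abstract:
# advocates 7/7 (100%), payers 7/13 (~54%), providers 10/25 (40%).
from scipy.stats import fisher_exact

# Each 2x2 table is [[yes, no] for group 1, [yes, no] for group 2].
_, p_advocates_vs_payers = fisher_exact([[7, 0], [7, 6]])       # advocates vs. payers
_, p_advocates_vs_providers = fisher_exact([[7, 0], [10, 15]])  # advocates vs. providers

print(f"advocates vs. payers:    p = {p_advocates_vs_payers:.4f}")     # ~0.0515
print(f"advocates vs. providers: p = {p_advocates_vs_providers:.4f}")  # ~0.0077
```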