Managing external risks to health technology assessment programs

Alberta Heritage Foundation for Medical Research, Edmonton, Canada.
International Journal of Technology Assessment in Health Care (Impact Factor: 1.31). 02/2006; 22(4):429-35. DOI: 10.1017/S0266462306051348
Source: PubMed


The aim of this study was to develop a guide to identifying and managing risks for health technology assessment (HTA) programs and to obtain opinions on this topic from HTA agencies.
The risks and approaches to their management were compiled, drawing on experiences from HTA programs and the risk assessment literature. Opinion on this classification was obtained from members of the International Network of Agencies for Health Technology Assessment (INAHTA).
Twenty-one risks for HTA programs were identified under the categories Formulation of HTA Questions, Preparation of the HTA Product, Dissemination, and Contracting. For each risk area, potential consequences and suggested management approaches were outlined. Responses from ten HTA programs indicated substantial agreement on the risks identified and on the importance of risk management for their own operations.
Prudent management of HTA programs should take into account the risks related to external factors.

  • ABSTRACT: To consider how the impact of the NHS Health Technology Assessment (HTA) Programme should be measured. To determine what models are available and their strengths and weaknesses. To assess the impact of the first 10 years of the NHS HTA programme from its inception in 1993 to June 2003 and to identify the factors associated with HTA research that are making an impact. Main electronic databases from 1990 to June 2005. The documentation of the National Coordinating Centre for Health Technology Assessment (NCCHTA). Questionnaires to eligible researchers. Interviews with lead investigators. Case study documentation. A literature review of research programmes was carried out, the work of the NCCHTA was reviewed, lead researchers were surveyed and 16 detailed case studies were undertaken. Each case study was written up using the payback framework. A cross-case analysis informed the analysis of factors associated with achieving payback. Each case study was scored for impact before and after the interview to assess the gain in information due to the interview. The draft write-up of each study was checked with each respondent for accuracy and changed if necessary. The literature review identified a highly diverse literature but confirmed that the 'payback' framework pioneered by Buxton and Hanney was the most widely used and most appropriate model available. The review also confirmed that impact on knowledge generation was more easily quantified than that on policy, behaviour or especially health gain. The review of the included studies indicated a higher level of impact on policy than is often assumed to occur. The survey showed that data pertinent to payback exist and can be collected. The completed questionnaires showed that the HTA Programme had considerable impact in terms of publications, dissemination, policy and behaviour. It also showed, as expected, that different parts of the Programme had different impacts.
The Technology Assessment Reports (TARs) for the National Institute for Health and Clinical Excellence (NICE) had the clearest impact on policy in the form of NICE guidance. Mean publications per project were 2.93 (1.98 excluding the monographs), above the level reported for other programmes. The case studies revealed the large diversity in the levels and forms of impacts and the ways in which they arise. All the NICE TARs and more than half of the other case studies had some impact on policy making at the national level whether through NICE, the National Screening Committee, the National Service Frameworks, professional bodies or the Department of Health. This underlines the importance of having a customer or 'receptor' body. A few case studies had very considerable impact in terms of knowledge production and in informing national and international policies. In some of these the principal investigator had prior expertise and/or a research record in the topic. The case studies confirmed the questionnaire responses but also showed how some projects led to further research. This study concluded that the HTA Programme has had considerable impact in terms of knowledge generation and perceived impact on policy and to some extent on practice. This high impact may have resulted partly from the HTA Programme's objectives, in that topics tend to be of relevance to the NHS and have policy customers. The required use of scientific methods, notably systematic reviews and trials, coupled with strict peer reviewing, may have helped projects publish in high-quality peer-reviewed journals. Further research should cover more detailed, comprehensive case studies, as well as enhancement of the 'payback framework'. A project that collated health research impact studies in an ongoing manner and analysed them in a consistent fashion would also be valuable.
Health Technology Assessment (Winchester, England) (Impact Factor: 5.03). 01/2008; 11(53):iii-iv, ix-xi, 1-180. DOI: 10.3310/hta11530
  • ABSTRACT: In light of growing demands for public accountability, the broadening scope of health technology assessment organizations' (HTAOs) activities and their increasing role in decision-making underscore the importance of demonstrating their performance. Based on Parsons's social action theory, we propose a conceptual model that includes four functions an organization needs to balance to perform well: (i) goal attainment, (ii) production, (iii) adaptation to the environment, and (iv) culture and values maintenance. From a review of the HTA literature, we identify specific dimensions pertaining to the four functions and show how they relate to performance. We compare our model with evaluations reported in the scientific and gray literature to confirm its capacity to accommodate various evaluation designs, contexts of evaluation, and organizational models and perspectives. Our findings reveal the dimensions of performance most often assessed and other important ones that, hitherto, remain unexplored. The model provides a flexible and theoretically grounded tool to assess the performance of HTAOs.
International Journal of Technology Assessment in Health Care (Impact Factor: 1.31). 02/2008; 24(1):76-86. DOI: 10.1017/S0266462307080105
  • ABSTRACT: Rapid reviews are being produced with greater frequency by health technology assessment (HTA) agencies in response to increased pressure from end-user clinicians and policy-makers for rapid, evidence-based advice on health-care technologies. This comparative study examines the differences in methodologies and essential conclusions between rapid and full reviews on the same topic, with the aim of determining the validity of rapid reviews in the clinical context and making recommendations for their future application. Rapid reviews were located by Internet searching of international HTA agency websites, with any ambiguities resolved by further communication with the agencies. Comparator full systematic reviews were identified using the University of York Centre for Reviews and Dissemination HTA database. Data on a number of review components were extracted using standardized data extraction tables, then analysed and reported narratively. Fundamental differences between all the rapid and full reviews were identified; however, the essential conclusions of the rapid and full reviews did not differ extensively across the topics. For each of the four topics examined, it was clear that the scope of the rapid reviews was substantially narrower than that of full reviews. The methodology underpinning the rapid reviews was often inadequately described. Rapid reviews do not adhere to any single validated methodology. They frequently provide adequate advice on which to base clinical and policy decisions; however, their scope is limited, which may compromise their appropriateness for evaluating technologies in certain circumstances.
ANZ Journal of Surgery (Impact Factor: 1.12). 12/2008; 78(11):1037-40. DOI: 10.1111/j.1445-2197.2008.04730.x