Avoiding and identifying errors in health technology assessment models: qualitative study and methodological review.

School of Health and Related Research (ScHARR), Regent Court, Sheffield, UK.
Health technology assessment (Winchester, England) 05/2010; 14(25):iii-iv, ix-xii, 1-107. DOI: 10.3310/hta14250
Source: PubMed

ABSTRACT Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process, but its usefulness relies on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable, yet little attention has been paid to the processes of model development. Numerous error avoidance/identification strategies could be adopted, but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes.
The study aims to describe the current comprehension of errors in the HTA modelling community and to generate a taxonomy of model errors. Its four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand the processes currently applied by the technology assessment community for avoiding errors during development, and for debugging and critically appraising models for errors; (3) combine HTA modellers' perceptions of model errors with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. It also describes the model development process as perceived by practitioners working within the HTA community.
A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors.
All qualitative data were analysed using the Framework approach. Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for identifying errors; and barriers and facilitators.
There was no common language in the discussion of modelling errors, and there was inconsistency in the perceived boundaries of what constitutes an error. When asked to define model error, interviewees tended to exclude matters of judgement and to focus on 'slips' and 'lapses'; yet slips and lapses accounted for less than 20% of the discussion of error types. Interviewees devoted 70% of the discussion to softer elements of the process of defining the decision question and conceptual modelling, mostly the realms of judgement, skills, experience and training. The original focus concerned model errors, but it may be more useful to refer to modelling risks. Several interviewees discussed the concepts of validation and verification, with notable consistency in interpretation: verification meant the process of ensuring that the computer model correctly implemented the intended model, whereas validation meant the process of ensuring that a model is fit for purpose. Methodological literature on verification and validation of models makes reference to the hermeneutic philosophical position, highlighting that the concept of model validation should not be externalised from the decision-makers and the decision-making process. Interviewees gave examples of all major error types identified in the literature: errors in the description of the decision problem, in model structure, in use of evidence, in implementation of the model, in operation of the model, and in presentation and understanding of results. The HTA error classifications were compared against existing classifications of model errors in the literature.
A range of techniques and processes are currently used to avoid errors in HTA models: engaging with clinical experts, clients and decision-makers to ensure mutual understanding, producing written documentation of the proposed model, explicit conceptual modelling, stepping through skeleton models with experts, ensuring transparency in reporting, adopting standard housekeeping techniques, and ensuring that those parties involved in the model development process have sufficient and relevant training. Clarity and mutual understanding were identified as key issues. However, their current implementation is not framed within an overall strategy for structuring complex problems.
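The verification side of the housekeeping techniques listed above can be partly automated. The sketch below (an illustrative Python example; the model structure and transition probabilities are invented for this sketch and are not taken from the report) shows two simple verification checks for a Markov cohort model of the kind common in HTA: confirming that the implemented transition matrix obeys the intended model, and that the cohort is conserved at every cycle.

```python
import numpy as np

# Hypothetical three-state Markov model (Well / Sick / Dead) used only to
# illustrate automated verification checks; the numbers are invented.
transition = np.array([
    [0.85, 0.10, 0.05],   # Well -> Well / Sick / Dead
    [0.00, 0.70, 0.30],   # Sick -> Well / Sick / Dead
    [0.00, 0.00, 1.00],   # Dead is absorbing
])

def verify_transition_matrix(p, tol=1e-9):
    """Verification: does the implemented matrix match the intended model?"""
    assert np.all(p >= 0.0), "negative transition probability"
    assert np.allclose(p.sum(axis=1), 1.0, atol=tol), "rows must sum to 1"

def run_cohort(p, start, cycles):
    """Trace the cohort forward, checking conservation at every cycle."""
    state = np.asarray(start, dtype=float)
    for _ in range(cycles):
        state = state @ p
        # Housekeeping check: the cohort total must be conserved each cycle.
        assert np.isclose(state.sum(), sum(start)), "cohort not conserved"
    return state

verify_transition_matrix(transition)
final = run_cohort(transition, [1.0, 0.0, 0.0], cycles=10)
```

Checks of this kind catch implementation slips (a mistyped probability, a row that no longer sums to one) but say nothing about validity, i.e. whether the model is fit for the decision problem; that distinction mirrors the verification/validation definitions reported by interviewees.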
Some of the questioning may have biased interviewees' responses, but as all interviewees were represented in the analysis, no rebalancing of the report was deemed necessary. A potential weakness of the literature review was its focus on spreadsheet and program development rather than specifically on model development. It should also be noted that the identified literature concerning programming errors was very narrow, despite broad searches being undertaken.
Published definitions of overall model validity, comprising conceptual model validation, verification of the computer model, and operational validity of the use of the model in addressing the real-world problem, are consistent with the views expressed by the HTA community and are therefore recommended as the basis for further discussions of model credibility. Such discussions should focus on risks, including errors of implementation, errors in matters of judgement, and violations. Discussions of modelling risks should reflect the potentially complex network of cognitive breakdowns that lead to errors in models, and existing research on the cognitive basis of human error should be included in an examination of modelling errors. There is a need to develop a better understanding of the skills requirements for the development, operation and use of HTA models. Interaction between modeller and client in developing mutual understanding of a model establishes that model's significance and its warranty. This highlights that model credibility is the central concern of decision-makers using models, so it is crucial that the concept of model validation should not be externalised from the decision-makers and the decision-making process. Recommended topics for future research are: studies of verification and validation; the model development process; and identification of modifications to the modelling process aimed at preventing the occurrence of errors and improving the identification of errors in models.

    ABSTRACT: Health economic models have become the primary vehicle for undertaking economic evaluation and are used in various healthcare jurisdictions across the world to inform decisions about the use of new and existing health technologies. Models are required because a single source of evidence, such as a randomised controlled trial, is rarely sufficient to provide all relevant information about the expected costs and health consequences of all competing decision alternatives. Whilst models are used to synthesise all relevant evidence, they also contain assumptions, abstractions and simplifications. By their very nature, all models are therefore 'wrong'. As such, the interpretation of estimates of the cost effectiveness of health technologies requires careful judgements about the degree of confidence that can be placed in the models from which they are drawn. The presence of a single error or inappropriate judgement within a model may lead to inappropriate decisions, an inefficient allocation of healthcare resources and ultimately suboptimal outcomes for patients. This paper sets out a taxonomy of threats to the credibility of health economic models. The taxonomy segregates threats to model credibility into three broad categories: (i) unequivocal errors, (ii) violations, and (iii) matters of judgement; and maps these across the main elements of the model development process. These three categories are defined according to the existence of criteria for judging correctness, the degree of force with which such criteria can be applied, and the means by which these credibility threats can be handled. A range of suggested processes and techniques for avoiding and identifying these threats is put forward with the intention of prospectively improving the credibility of models.
    PharmacoEconomics 07/2014
    ABSTRACT: Issues: Effectiveness of alcohol policy interventions varies across times and places. The circumstances under which effective policies can be successfully transferred between contexts are typically unexplored, with little attention given to developing reporting requirements that would facilitate systematic investigation. Approach: Using purposive sampling and expert elicitation methods, we identified context-related factors impacting on the effectiveness of population-level alcohol policies. We then drew on previous characterisations of alcohol policy contexts and methodological-reporting checklists to design a new checklist for reporting contextual information in evaluation studies. Key Findings: Six context factor domains were identified: (i) baseline alcohol consumption, norms and harm rates; (ii) baseline affordability and availability; (iii) social, microeconomic and demographic contexts; (iv) macroeconomic context; (v) market context; and (vi) wider policy, political and media context. The checklist specifies information, typically available in national or international reports, to be reported in each domain. Implications: The checklist can facilitate evidence synthesis by providing: (i) a mechanism for systematic and more consistent reporting of contextual data for meta-regression and realist evaluations; (ii) information for policy-makers on differences between their context and the contexts of evaluations; and (iii) an evidence base for adjusting prospective policy simulation models to account for policy context. Conclusions: Our proposed checklist provides a tool for gaining better understanding of the influence of policy context on intervention effectiveness. Further work is required to rationalise and aggregate checklists across intervention types to make such checklists practical for use by journals and to improve reporting of important qualitative contextual data. [Holmes J, Meier PS, Booth A, Brennan A. Reporting the characteristics of the policy context for population-level alcohol interventions: A proposed 'Transparent Reporting of Alcohol Intervention ContExts' (TRAICE) checklist. Drug Alcohol Rev 2014]
    Drug and Alcohol Review 10/2014
    ABSTRACT: Objectives: Health economic models are developed as part of the health technology assessment process to determine whether health interventions represent good value for money. These models are often used to directly inform healthcare decision making and policy. The information needs for the model require the use of other types of information beyond clinical effectiveness evidence to populate the model's parameters. The purpose of this research study was to explore issues concerned with the identification and use of information for the development of such models. Methods: Three focus groups were held in February 2011 at the University of Sheffield with thirteen UK HTA experts. Attendees included health economic modelers, information specialists and systematic reviewers. Qualitative framework analysis was used to analyze the focus group data. Results: Six key themes, with related sub-themes, were identified dealing with decisions and judgments; searching methods; selection and rapid review of evidence; team communication; modeler experience and clinical input and reporting methods. There was considerable overlap between themes. Conclusions: Key issues raised by the respondents included the need for effective communication and teamwork throughout the model development process, the importance of using clinical experts as well as the need for transparent reporting of methods and decisions.
    International Journal of Technology Assessment in Health Care 08/2014
