Elise Berliner

Agency for Healthcare Research and Quality, Maryland, United States

Publications (6) · 16.43 Total impact

  • ABSTRACT: Objectives: The purpose of this Agency for Healthcare Research and Quality Evidence-based Practice Center methods white paper was to outline approaches to conducting systematic reviews of complex multicomponent health care interventions. Study Design and Setting: We performed a literature scan and conducted semistructured interviews with international experts who conduct research on or systematic reviews of complex multicomponent interventions (CMCIs), or with organizational leaders who implement CMCIs in health care. Results: Challenges identified include a lack of consistent terminology for such interventions (e.g., complex, multicomponent, multidimensional, multifactorial); a wide range of approaches used to frame the review, from grouping interventions by common features to using more theoretical approaches; decisions regarding whether and how to quantitatively analyze the interventions, from holistic to individual-component analytic approaches; and incomplete and inconsistent reporting of elements critical to understanding the success and impact of multicomponent interventions, such as the methods used for implementation and the context in which interventions are implemented. Conclusion: We provide a framework for the spectrum of conceptual and analytic approaches to synthesizing studies of multicomponent interventions and an initial list of critical reporting elements for such studies. This information is intended to help systematic reviewers understand the options and tradeoffs available for such reviews.
    Journal of Clinical Epidemiology. 11/2014; 67(11):1181–1191.
  • ABSTRACT: Objectives: Groups such as the Institute of Medicine emphasize the importance of attention to financial conflicts of interest. Little guidance exists, however, on managing the risk of bias for systematic reviews from nonfinancial conflicts of interest. We sought to create practical guidance on ensuring adequate clinical or content expertise while maintaining independence of judgment on systematic review teams. Study Design and Setting: Workgroup members built on existing guidance from international and domestic institutions on managing conflicts of interest. We then developed practical guidance in the form of an instrument for each potential source of conflict. Results: We modified the Institute of Medicine's definition of conflict of interest to arrive at a definition specific to nonfinancial conflicts. We propose questions for funders and systematic review principal investigators to evaluate the risk of nonfinancial conflicts of interest. Once risks have been identified, options for managing conflicts include disclosure followed by no change in the systematic review team or activities; inclusion on the team along with other members with differing viewpoints to ensure diverse perspectives; exclusion from certain activities; and exclusion from the project entirely. Conclusion: The feasibility and utility of this approach to ensuring needed expertise on systematic reviews and minimizing bias from nonfinancial conflicts of interest must be investigated.
    Journal of Clinical Epidemiology. 01/2014. (The graded management options above are encoded, hypothetically, in the first code sketch after this list.)
  • ABSTRACT: GRADE requires guideline developers to make an overall rating of confidence in estimates of effect (quality of evidence: high, moderate, low, or very low) for each important or critical outcome. GRADE suggests, for each outcome, the initial separate consideration of five domains of reasons for rating down the confidence in effect estimates, thereby allowing systematic review authors and guideline developers to arrive at an outcome-specific rating of confidence. Although this rating system represents discrete steps on an ordinal scale, it is helpful to view confidence in estimates as a continuum, and the final rating of confidence may differ from that suggested by separate consideration of each domain. An overall rating of confidence in estimates of effect is relevant only in settings where recommendations are being made. In general, it is based on the critical outcome that provides the lowest confidence.
    Journal of Clinical Epidemiology. 04/2012. 5.48 Impact Factor. (This ordinal rating scheme is sketched, under stated assumptions, in the second code example after this list.)
  • ABSTRACT: The most common reason for rating up the quality of evidence is a large effect. GRADE suggests considering rating up the quality of evidence by one level when methodologically rigorous observational studies show at least a two-fold reduction or increase in risk, and by two levels for at least a five-fold reduction or increase in risk. Systematic review authors and guideline developers may also consider rating up the quality of evidence when a dose-response gradient is present, and when all plausible confounders or biases would decrease an apparent treatment effect or would create a spurious effect when results suggest no effect. Other considerations include the rapidity of the response, the underlying trajectory of the condition, and indirect evidence.
    Journal of Clinical Epidemiology. 07/2011; 64(12):1311–6. 5.48 Impact Factor. (These rating-up rules are folded into the second code example after this list.)
  • ABSTRACT: To describe a systematic approach for identifying, reporting, and synthesizing information to allow consistent and transparent consideration of the applicability of the evidence in a systematic review according to the Population, Intervention, Comparator, Outcome, Setting (PICOS) domains. Comparative effectiveness reviews need to consider whether the available evidence is applicable to specific clinical or policy questions if they are to be useful to decision makers. Authors reviewed the literature and developed guidance for the Effective Health Care program. Because applicability depends on the specific questions and needs of the users, it is difficult to devise a valid uniform scale for rating the overall applicability of individual studies or a body of evidence. We recommend consulting stakeholders to identify the factors most relevant to applicability for their decisions. Applicability should be considered separately for benefits and harms. Observational studies can help determine whether trial populations and interventions are representative of "real world" practice. Reviewers should describe differences between the available evidence and the ideally applicable evidence for the question being asked and offer a qualitative judgment about the importance and potential effect of those differences. Careful consideration of applicability may improve the usefulness of systematic reviews in informing practice and policy.
    Journal of Clinical Epidemiology. 04/2011; 64(11):1198–207. 5.48 Impact Factor. (The PICOS domains are sketched, as a hypothetical record type, in the third code example after this list.)
  • ABSTRACT: This paper outlines specific steps to ensure that systematic reviews describe and characterize the evidence so that users of a review can apply it appropriately in their decisions. The first step, identifying factors that may affect applicability, should be considered at the very earliest stages of a review, when defining the key questions and the populations, interventions, comparators, and outcomes of interest. Defining inclusion and exclusion criteria inevitably takes into account factors that may affect the applicability of studies; for example, reviews meant to inform decision makers in developed countries exclude studies in developing countries because they may not be applicable to the patients and health care settings in Western countries. This paper focuses on the subsequent steps in a review, describing a systematic but practical approach for considering applicability in the process of reviewing, reporting, and synthesizing evidence from eligible studies.
    Methods Guide for Effectiveness and Comparative Effectiveness Reviews. 01/2008; Agency for Healthcare Research and Quality (US).
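
The conflict-of-interest paper above lists a graded set of management options once a nonfinancial conflict has been identified: disclosure with no change, inclusion alongside members with differing viewpoints, exclusion from certain activities, or exclusion from the project entirely. The first sketch below makes that gradation concrete in Python; the class, the numeric risk scale, and the thresholds are hypothetical, introduced only for illustration and not taken from the workgroup's instrument.

```python
# Hypothetical encoding of the graded management options the abstract
# lists for nonfinancial conflicts of interest; the numeric risk scale
# and thresholds are invented purely for illustration.
from enum import Enum

class CoiManagement(Enum):
    DISCLOSE_ONLY = "disclosure; no change to team or activities"
    BALANCE_TEAM = "include, adding members with differing viewpoints"
    EXCLUDE_ACTIVITIES = "exclude from certain review activities"
    EXCLUDE_PROJECT = "exclude from the project entirely"

def manage_conflict(assessed_risk: float) -> CoiManagement:
    """Map an assessed risk of bias (0.0-1.0, hypothetical scale) to an option."""
    if assessed_risk < 0.25:
        return CoiManagement.DISCLOSE_ONLY
    if assessed_risk < 0.5:
        return CoiManagement.BALANCE_TEAM
    if assessed_risk < 0.75:
        return CoiManagement.EXCLUDE_ACTIVITIES
    return CoiManagement.EXCLUDE_PROJECT

print(manage_conflict(0.4))  # CoiManagement.BALANCE_TEAM
```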
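The two GRADE abstracts describe an ordinal confidence scale (very low to high) that is rated down across five domains and rated up for large effects: roughly one level for a two-fold change in risk in rigorous observational studies and two levels for a five-fold change, with the overall rating generally taken from the critical outcome with the lowest confidence. The second sketch assumes a simple additive model of that bookkeeping; GRADE itself treats confidence as a continuum and allows judgment to depart from the arithmetic, and every name here is illustrative.

```python
# Illustrative sketch of GRADE-style ordinal confidence bookkeeping.
# The additive model and all names are assumptions made for clarity;
# GRADE treats confidence as a continuum and allows overall judgment
# to override domain-by-domain arithmetic.
from enum import IntEnum

class Confidence(IntEnum):
    VERY_LOW = 0
    LOW = 1
    MODERATE = 2
    HIGH = 3

def rate_outcome(start: Confidence, down_levels: int = 0, up_levels: int = 0) -> Confidence:
    """Apply total rate-down levels (across the five domains) and rate-up
    levels (e.g. +1 for a ~two-fold effect, +2 for ~five-fold), then clamp
    the result to the ordinal scale."""
    score = int(start) - down_levels + up_levels
    return Confidence(max(int(Confidence.VERY_LOW), min(int(Confidence.HIGH), score)))

def overall_confidence(critical_outcomes: dict[str, Confidence]) -> Confidence:
    """The overall rating follows the critical outcome with the lowest confidence."""
    return min(critical_outcomes.values())

# Usage: a rigorous observational study starts at LOW; a two-fold effect
# rates one outcome up a level, while a second critical outcome is rated down once.
mortality = rate_outcome(Confidence.LOW, up_levels=1)   # -> MODERATE
bleeding = rate_outcome(Confidence.LOW, down_levels=1)  # -> VERY_LOW
print(overall_confidence({"mortality": mortality, "bleeding": bleeding}))
# Confidence.VERY_LOW
```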
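The two applicability papers organize reporting around the Population, Intervention, Comparator, Outcome, Setting domains, together with a description of differences from the ideally applicable evidence and a qualitative judgment about the importance of those differences. The third sketch records that structure as a data type; the class and field names are assumptions, one possible encoding rather than the Effective Health Care program's own instrument.

```python
# Minimal sketch of an applicability record organized by the PICOS
# domains the papers describe; all field and class names are assumptions.
from dataclasses import dataclass, field

@dataclass
class PicosApplicability:
    population: str
    intervention: str
    comparator: str
    outcome: str
    setting: str
    # Differences between the available evidence and the ideally
    # applicable evidence, with a qualitative judgment of their importance.
    differences: list[str] = field(default_factory=list)
    judgment: str = "not yet assessed"

record = PicosApplicability(
    population="trial enrollees younger than the typical clinical population",
    intervention="fixed-dose regimen delivered by specialists",
    comparator="placebo rather than an active comparator",
    outcome="surrogate endpoint only",
    setting="academic medical centers",
    differences=["older, multimorbid patients underrepresented"],
    judgment="may overstate benefit in routine practice",
)
print(record.judgment)
```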