Elise Berliner

Agency for Healthcare Research and Quality, Maryland, United States

Publications (8) · 20.5 Total Impact

  • ABSTRACT: Objectives: Describe characteristics of rapid reviews and examine the impact of methodological variations on their reliability and validity. Study Design and Setting: We conducted a literature review and interviews with organizations that produce rapid reviews or related products to identify methods, guidance, empiric evidence, and current practices. Results: We identified 36 rapid products from 20 organizations (production time, 5 minutes to 8 months). Methods differed from systematic reviews at all stages. As timeframes increased, methods became more rigorous; however, restrictions on database searching, inclusion criteria, data extracted, and independent dual review remained. We categorized rapid products based on extent of synthesis. "Inventories" list what evidence is available. "Rapid responses" present best available evidence with no formal synthesis. "Rapid reviews" synthesize the quality of and findings from the evidence. "Automated approaches" generate meta-analyses in response to user-defined queries. Rapid products rely on a close relationship with end users and support specific decisions in an identified timeframe. Limited empiric evidence exists comparing rapid and systematic reviews. Conclusions: Rapid products have tremendous methodological variation; categorization based on timeframe or type of synthesis reveals patterns. The similarity across rapid products lies in the close relationship with the end user to meet time-sensitive decision-making needs.
    Journal of Clinical Epidemiology 08/2015; DOI:10.1016/j.jclinepi.2015.05.036 · 3.42 Impact Factor
  • ABSTRACT: Objectives: To characterize rapid reviews and similar products, to understand the context in which rapid products are produced (e.g., end-users and purposes for rapid products), to understand methodological guidance and strategies used to make products rapid and describe how these differ from systematic review (SR) procedures, and to identify empiric evidence on the impact of methodological approaches on their reliability and validity. Methods: We searched the literature to identify rapid review methods, empiric evidence on rapid review methodology, and methodological guidance. We conducted interviews with members of organizations known to produce rapid reviews to characterize the types of rapid products produced; to understand the context and uses for rapid products; to identify current practices; and to understand the evolution of their programs and products. Results: We identified 36 examples of rapid products produced by 20 organizations, with production time ranging from 5 minutes to 8 months. We categorized rapid products into four groups based on the extent of synthesis: (1) "inventories" list what evidence is available, along with other contextual information needed to make decisions, but do not synthesize the evidence or present summaries or conclusions; (2) "rapid responses" present the end-user with an answer based on the best available evidence (usually guidelines or SRs), but do not attempt to formally synthesize the evidence into conclusions; (3) "rapid reviews" perform a synthesis (qualitative and/or quantitative) to provide an answer about the direction of evidence and possibly the strength of evidence; (4) "automated approaches" use databases of extracted study elements and programming to generate meta-analyses in response to user-defined queries. Methodological approaches identified for rapid products include: searching fewer databases; limited use of grey literature; restricting the types of studies included (e.g., English only, most recent 5 years); relying on existing SRs; limiting full-text review; limiting dual review for study selection and/or data extraction; limiting data extraction; limiting risk of bias assessment or grading; minimal evidence synthesis; providing nominal conclusions or recommendations; and limiting external peer review. As the timeframes for products lengthened, many limitations were lifted; however, there were still restrictions on database searching, inclusion, extent of data extraction, and dual review. With lengthened production time, risk of bias assessment, evidence grading, and external peer review were more often included. Key informant interviews demonstrated that the essence of rapid products differs from that of SRs: key differences include the close relationship with the end-user and the focus on helping a specific end-user make a specific decision in an identified timeframe. Because there may be no lead time before the review is needed and the end-user may need the review urgently, maintaining a highly skilled staff is critical to organizational readiness to produce rapid reviews. Having few and/or narrow questions (e.g., emerging technologies, single interventions, specific populations) was also necessary. There is almost no empiric evidence directly comparing results of rapid products with SRs. One report suggested there may not be any impact; however, it focused on surgical interventions and may not be generalizable to other clinical specialties or health care fields in which rapid products or SRs are conducted.
    Conclusions: Rapid products have tremendous methodological variation. Overall, they vary on two important dimensions that are captured by the term "rapid review": the timeframe for completion and the extent of synthesis. The similarity of rapid products lies in their close relationship with the end-user to meet decision-making needs in a limited timeframe. The following are considerations for creating rapid products:
    - products should be developed in the context of identified end-users and their specific decision-making needs and circumstances;
    - a close relationship with the end-user and iterative feedback is essential;
    - reliance on existing SRs requires methods to summarize and interpret evidence;
    - a highly skilled and experienced staff and the capacity to mobilize skilled staff quickly are critical;
    - restricting scope may be necessary;
    - producers and users need to accept modifications to standard SR methods; and
    - limitations need to be clearly reported, particularly in terms of potential bias and shortcomings of the conclusions.
    Future research evaluating end-user perspectives will complement these findings and provide additional considerations for those interested in establishing a rapid response program or producing rapid products. http://www.ncbi.nlm.nih.gov/books/NBK274092/
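    The "automated approaches" category above describes systems that assemble meta-analyses on demand from databases of pre-extracted study elements. As a rough illustration of the core computation such a system would automate, here is a minimal fixed-effect inverse-variance pooling sketch in Python; the function name and study numbers are invented for illustration and are not drawn from the report.

```python
import math

def fixed_effect_pool(estimates, variances):
    """Inverse-variance fixed-effect pooling of per-study effect
    estimates (e.g., log risk ratios); returns the pooled estimate
    and its 95% confidence interval."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical pre-extracted study elements: log risk ratios and variances.
log_rrs = [-0.35, -0.10, -0.22]
variances = [0.04, 0.09, 0.06]
pooled, ci = fixed_effect_pool(log_rrs, variances)
print(f"pooled log RR = {pooled:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```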
  • ABSTRACT: Objectives: The purpose of this Agency for Healthcare Research and Quality Evidence-based Practice Center methods white paper was to outline approaches to conducting systematic reviews of complex multicomponent health care interventions. Study Design and Setting: We performed a literature scan and conducted semistructured interviews with international experts who conduct research on or systematic reviews of complex multicomponent interventions (CMCIs) and organizational leaders who implement CMCIs in health care. Results: Challenges identified include a lack of consistent terminology for such interventions (e.g., complex, multicomponent, multidimensional, multifactorial); a wide range of approaches used to frame the review, from grouping interventions by common features to using more theoretical approaches; decisions regarding whether and how to quantitatively analyze the interventions, from holistic to individual-component analytic approaches; and incomplete and inconsistent reporting of elements critical to understanding the success and impact of multicomponent interventions, such as the methods used for implementation and the context in which interventions are implemented. Conclusion: We provide a framework for the spectrum of conceptual and analytic approaches to synthesizing studies of multicomponent interventions and an initial list of critical reporting elements for such studies. This information is intended to help systematic reviewers understand the options and tradeoffs available for such reviews.
    Journal of Clinical Epidemiology 11/2014; 67(11):1181–1191. DOI:10.1016/j.jclinepi.2014.06.010 · 3.42 Impact Factor
  • ABSTRACT: Objectives: Groups such as the Institute of Medicine emphasize the importance of attention to financial conflicts of interest. Little guidance exists, however, on managing the risk of bias for systematic reviews from nonfinancial conflicts of interest. We sought to create practical guidance on ensuring adequate clinical or content expertise while maintaining independence of judgment on systematic review teams. Study Design and Setting: Workgroup members built on existing guidance from international and domestic institutions on managing conflicts of interest. We then developed practical guidance in the form of an instrument for each potential source of conflict. Results: We modified the Institute of Medicine's definition of conflict of interest to arrive at a definition specific to nonfinancial conflicts. We propose questions for funders and systematic review principal investigators to evaluate the risk of nonfinancial conflicts of interest. Once risks have been identified, options for managing conflicts include disclosure followed by no change in the systematic review team or activities, inclusion on the team along with other members with differing viewpoints to ensure diverse perspectives, exclusion from certain activities, and exclusion from the project entirely. Conclusion: The feasibility and utility of this approach to ensuring needed expertise on systematic reviews and minimizing bias from nonfinancial conflicts of interest must be investigated.
    Journal of Clinical Epidemiology 11/2014; 67(11). DOI:10.1016/j.jclinepi.2014.02.023 · 3.42 Impact Factor
  • ABSTRACT: GRADE requires guideline developers to make an overall rating of confidence in estimates of effect (quality of evidence: high, moderate, low, or very low) for each important or critical outcome. GRADE suggests, for each outcome, the initial separate consideration of five domains of reasons for rating down the confidence in effect estimates, thereby allowing systematic review authors and guideline developers to arrive at an outcome-specific rating of confidence. Although this rating system represents discrete steps on an ordinal scale, it is helpful to view confidence in estimates as a continuum, and the final rating of confidence may differ from that suggested by separate consideration of each domain. An overall rating of confidence in estimates of effect is relevant only in settings in which recommendations are being made. In general, it is based on the critical outcome that provides the lowest confidence.
    Journal of Clinical Epidemiology 04/2012; 66(2). DOI:10.1016/j.jclinepi.2012.01.006 · 3.42 Impact Factor
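    The closing rule of the abstract above, basing the overall rating on the critical outcome with the lowest confidence, amounts to taking a minimum over an ordered scale. A minimal Python sketch of that default rule follows; the outcome names and data layout are invented, and the abstract itself cautions that the final rating may differ from what mechanical combination suggests.

```python
# GRADE confidence levels, ordered from lowest to highest.
LEVELS = ["very low", "low", "moderate", "high"]

def overall_confidence(ratings, critical_outcomes):
    """Default GRADE rule: overall confidence is the lowest rating
    among the outcomes deemed critical for the recommendation."""
    return min((ratings[o] for o in critical_outcomes), key=LEVELS.index)

ratings = {"mortality": "moderate", "stroke": "high", "nausea": "low"}
# "nausea" is treated here as important but not critical, so it does
# not drag the overall rating down to "low".
print(overall_confidence(ratings, critical_outcomes=["mortality", "stroke"]))
# -> moderate
```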
  • ABSTRACT: The most common reason for rating up the quality of evidence is a large effect. GRADE suggests considering rating up quality of evidence one level when methodologically rigorous observational studies show at least a two-fold reduction or increase in risk, and rating up two levels for at least a five-fold reduction or increase in risk. Systematic review authors and guideline developers may also consider rating up quality of evidence when a dose-response gradient is present, and when all plausible confounders or biases would decrease an apparent treatment effect, or would create a spurious effect when results suggest no effect. Other considerations include the rapidity of the response, the underlying trajectory of the condition, and indirect evidence.
    Journal of Clinical Epidemiology 07/2011; 64(12):1311-6. DOI:10.1016/j.jclinepi.2011.06.004 · 3.42 Impact Factor
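    The two-fold and five-fold thresholds in the abstract above lend themselves to a small worked example. The sketch below encodes only those magnitude thresholds; the function name and example risk ratios are invented, and in practice rating up remains a judgment that also weighs dose-response gradients, plausible confounding, and the other considerations the abstract lists.

```python
def large_effect_rating_up(risk_ratio):
    """Levels to consider rating up confidence for a large effect in
    methodologically rigorous observational studies, per the abstract's
    thresholds: at least two-fold -> one level, five-fold -> two levels.
    Effects under two-fold do not qualify for rating up on magnitude."""
    magnitude = max(risk_ratio, 1.0 / risk_ratio)  # RR 0.2 counts like RR 5
    if magnitude >= 5.0:
        return 2
    if magnitude >= 2.0:
        return 1
    return 0

print(large_effect_rating_up(0.4))  # 2.5-fold reduction -> rate up 1 level
print(large_effect_rating_up(5.5))  # >5-fold increase  -> rate up 2 levels
```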
  • ABSTRACT: To describe a systematic approach for identifying, reporting, and synthesizing information to allow consistent and transparent consideration of the applicability of the evidence in a systematic review according to the Population, Intervention, Comparator, Outcome, and Setting domains. Comparative effectiveness reviews need to consider whether available evidence is applicable to specific clinical or policy questions to be useful to decision makers. Authors reviewed the literature and developed guidance for the Effective Health Care program. Because applicability depends on the specific questions and needs of the users, it is difficult to devise a valid uniform scale for rating the overall applicability of individual studies or of a body of evidence. We recommend consulting stakeholders to identify the factors most relevant to applicability for their decisions. Applicability should be considered separately for benefits and harms. Observational studies can help determine whether trial populations and interventions are representative of "real world" practice. Reviewers should describe differences between the available evidence and the ideally applicable evidence for the question being asked and offer a qualitative judgment about the importance and potential effect of those differences. Careful consideration of applicability may improve the usefulness of systematic reviews in informing practice and policy.
    Journal of Clinical Epidemiology 04/2011; 64(11):1198-207. DOI:10.1016/j.jclinepi.2010.11.021 · 3.42 Impact Factor
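    One way to make the domain-by-domain reporting described above concrete is a structured record per study or body of evidence. A speculative Python sketch; the class name and example values are invented and are not part of the guidance itself.

```python
from dataclasses import dataclass

@dataclass
class ApplicabilityProfile:
    """Key characteristics of a body of evidence, organized by the
    Population, Intervention, Comparator, Outcome, and Setting domains,
    for comparison against a decision maker's target context."""
    population: str
    intervention: str
    comparator: str
    outcome: str
    setting: str

# Invented example: a profile a reviewer might record for one trial.
trial = ApplicabilityProfile(
    population="adults 40-65 with few comorbidities",
    intervention="supervised exercise, 12 weeks",
    comparator="usual care",
    outcome="functional status at 6 months",
    setting="academic medical centers",
)
print(trial)
```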
  • ABSTRACT: This paper outlines specific steps to ensure that systematic reviews describe and characterize the evidence so that users of a review can apply it appropriately in their decisions. The first step, identifying factors that may affect applicability, should be considered at the very earliest stages of a review, when defining key questions and the populations, interventions, comparators, and outcomes of interest. Defining inclusion and exclusion criteria inevitably takes into account factors that may affect the applicability of studies; for example, reviews meant to inform decision makers in developed countries exclude studies in developing countries because they may not be applicable to the patients and health care settings in Western countries. This paper focuses on subsequent steps in a review to describe a systematic but practical approach for considering applicability in the process of reviewing, reporting, and synthesizing evidence from eligible studies.
    Methods Guide for Effectiveness and Comparative Effectiveness Reviews, 01/2008; Agency for Healthcare Research and Quality (US).

Publication Stats

230 Citations
20.50 Total Impact Points


  • 2011-2014
    • Agency for Healthcare Research and Quality
      Maryland, United States
  • 2012
    • Oregon Health and Science University
      • Department of Medical Informatics & Clinical Epidemiology
      Portland, Oregon, United States