Systematic reviews need systematic searchers.

Ottawa Health Research Institute / Institute of Population Health, University of Ottawa, Ottawa, K1N 6N5, Canada.
Journal of the Medical Library Association JMLA (Impact Factor: 0.99). 02/2005; 93(1):74-80.
Source: PubMed

ABSTRACT: This paper describes the methods, skills, and knowledge of expert searchers working on systematic review teams.
Systematic reviews and meta-analyses are essential to health care practitioners, who need to keep abreast of the medical literature and make informed decisions. Searching is a critical part of conducting these systematic reviews, as errors made in the search process can result in a biased or otherwise incomplete evidence base for the review. Searches for systematic reviews need to be constructed to maximize recall and deal effectively with a number of potentially biasing factors. Librarians who conduct the searches for systematic reviews must be experts.
Expert searchers need to understand the specifics about data structure and functions of bibliographic and specialized databases, as well as the technical and methodological issues of searching. Search methodology must be based on research about retrieval practices, and it is vital that expert searchers keep informed about, advocate for, and, moreover, conduct research in information retrieval. Expert searchers are an important part of the systematic review team, crucial throughout the review process, from the development of the proposal and research question to publication.
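The recall-maximizing construction described in the abstract, broad synonym sets within each concept combined with OR, and the concepts intersected with AND, can be sketched as a tiny query builder. The helper function and example terms below are illustrative assumptions, not part of any real database API.

```python
# Sketch of a recall-maximizing search strategy: within each concept,
# synonyms are OR-ed together (broadening recall); the concept groups
# are then AND-ed. Hypothetical helper, not a real database interface.

def build_search(concepts):
    """concepts: a list of synonym lists, one inner list per concept."""
    groups = ["(" + " OR ".join(terms) + ")" for terms in concepts]
    return " AND ".join(groups)

query = build_search([
    ["hypertension", "high blood pressure"],       # condition synonyms
    ["exercise", "physical activity", "aerobic"],  # intervention synonyms
])
print(query)
# (hypertension OR high blood pressure) AND (exercise OR physical activity OR aerobic)
```

Adding synonyms to a group can only widen what the OR clause matches, which is why this shape favors recall over precision.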

Available from: Jessie McGowan, Jul 02, 2015
ABSTRACT: This article is the fourth of six articles addressing systematic reviews in animal agriculture and veterinary medicine. Previous articles in the series have introduced systematic reviews, discussed study designs and hierarchies of evidence, and provided details on conducting randomized controlled trials, a common design for use in systematic reviews. This article describes development of a review protocol and the first two steps in a systematic review: formulating a review question and searching the literature for relevant research. The emphasis is on systematic reviews of questions related to interventions. The review protocol is developed prior to conducting the review; it specifies the plan for the conduct of the review, identifies the roles and responsibilities of the review team, and provides structured definitions related to the review question. For intervention questions, the review question should be defined by the PICO components: population, intervention, comparison, and outcome(s). The literature search is designed to identify all potentially relevant original research that may address the question. Search terms related to some or all of the PICO components are entered into literature databases, and searches for unpublished literature are also conducted. All steps of the literature search are documented to provide transparent reporting of the process.
    Zoonoses and Public Health 06/2014; 61 Suppl S1:28-38. DOI:10.1111/zph.12125 · 2.07 Impact Factor
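The PICO-to-search-terms mapping described in the abstract above can be sketched as follows; the component terms and the dictionary structure are hypothetical illustrations chosen for the example, not taken from the article.

```python
# Sketch: grouping search terms by PICO component (population,
# intervention, outcome), then OR-ing within a component and
# AND-ing across components. All terms here are invented examples.

pico = {
    "population":   ["dairy cattle", "dairy cows"],
    "intervention": ["vaccination", "vaccine"],
    "outcome":      ["mastitis", "somatic cell count"],
}

# OR within a component, AND across components
query = " AND ".join(
    "(" + " OR ".join(terms) + ")" for terms in pico.values()
)
print(query)
# (dairy cattle OR dairy cows) AND (vaccination OR vaccine) AND (mastitis OR somatic cell count)
```

The comparison component is often omitted from the search itself (as here), since comparators are frequently unindexed or implicit in trial reports.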
ABSTRACT: This project aims to assess the utility of bibliographic databases beyond the three major ones (MEDLINE, EMBASE and Cochrane CENTRAL) for finding controlled trials of complementary and alternative medicine (CAM). Fifteen databases were searched to identify controlled clinical trials (CCTs) of CAM not also indexed in MEDLINE. Searches were conducted in May 2006 using the revised Cochrane highly sensitive search strategy (HSSS) and the PubMed CAM Subset. Yield of CAM trials per 100 records was determined, and databases were compared over a standardized period (2005). The Acudoc2 RCT, Acubriefs, Index to Chiropractic Literature (ICL) and Hom-Inform databases had the highest concentrations of non-MEDLINE records, with more than 100 non-MEDLINE records per 500. Other productive databases had ratios between 500 and 1500 records to 100 non-MEDLINE records; these were AMED, MANTIS, PsycINFO, CINAHL, Global Health and Alt HealthWatch. Five databases were found to be unproductive: AGRICOLA, CAIRSS, Datadiwan, Herb Research Foundation and IBIDS. Acudoc2 RCT yielded 100 CAM trials in the most recent 100 records screened. Acubriefs, AMED, Hom-Inform, MANTIS, PsycINFO and CINAHL had more than 25 CAM trials per 100 records screened. Global Health, ICL and Alt HealthWatch were below 25 in yield. There were 255 non-MEDLINE trials from eight databases in 2005, with only 10% indexed in more than one database. Yield varied greatly between databases; the most productive databases from both sampling methods were Acubriefs, Acudoc2 RCT, AMED and CINAHL. Low overlap between databases indicates comprehensive CAM literature searches will require multiple databases.
    Evidence-based Complementary and Alternative Medicine 06/2009; 2011:858246. DOI:10.1093/ecam/nep038 · 1.88 Impact Factor
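The yield metric used in the study above, CAM trials found per 100 records screened, is a simple ratio; a minimal sketch with invented counts (the database names and numbers below are placeholders, not the study's data):

```python
# Yield per 100 records screened, as used in the database comparison.
# All counts here are invented for illustration only.

def yield_per_100(trials_found, records_screened):
    return 100 * trials_found / records_screened

screened = {           # database: (CAM trials found, records screened)
    "DatabaseA": (40, 100),
    "DatabaseB": (12, 200),
}
for db, (trials, records) in screened.items():
    print(db, yield_per_100(trials, records))
```

Normalizing to a fixed denominator of 100 records is what lets databases screened in different volumes be compared directly.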
ABSTRACT: With an increasing number of bibliographic software tools, scientists and health professionals either make a subjective choice of tool(s) that could suit their needs or face the challenge of analyzing multiple features of a plethora of search programs. There is an urgent need for a thorough comparative analysis of the available bio-literature scanning tools from the user's perspective. We report results of the first semi-quantitative comparison of 21 programs, which can search published (partial or full text) documents in life science areas. The observations can assist life science researchers and medical professionals in making an informed selection among the programs, depending on their search objectives. Some of the important findings are:
    1. Most of the hits obtained from Scopus, ReleMed, EBIMed, CiteXplore, and HighWire Press were usually relevant (i.e., these tools show better precision than other tools).
    2. A very high number of relevant citations were retrieved by HighWire Press, Google Scholar, CiteXplore and PubMed Central (they had better recall).
    3. HighWire Press and CiteXplore seemed to have a good balance of precision and recall efficiencies.
    4. PubMed Central, PubMed and Scopus provided the most useful query systems.
    5. GoPubMed, BioAsk, EBIMed, and ClusterMed could be more useful among the tools that can automatically process the retrieved citations for further scanning of bio-entities such as proteins, diseases, tissues, molecular interactions, etc.
    The authors suggest the use of PubMed, Scopus, Google Scholar and HighWire Press for better coverage, and GoPubMed to view the hits categorized based on the MeSH and gene ontology terms. The article is relevant to all life science subjects.
    Nature Precedings
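The precision/recall trade-off contrasted in the findings above can be made concrete with the standard retrieval formulas; the counts in this sketch are hypothetical, not figures from the study.

```python
# Precision: fraction of retrieved records that are relevant.
# Recall: fraction of all relevant records that were retrieved.
# The counts below are hypothetical illustrations.

def precision(relevant_retrieved, total_retrieved):
    return relevant_retrieved / total_retrieved

def recall(relevant_retrieved, total_relevant):
    return relevant_retrieved / total_relevant

# e.g. a tool retrieves 200 records, of which 150 are relevant,
# out of 300 relevant records that exist in the collection
p = precision(150, 200)   # 0.75
r = recall(150, 300)      # 0.50
```

A tool can score high on one measure and low on the other, which is why the comparison above reports both, and singles out tools that balance the two.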