International Journal of Technology Assessment in Health Care, 26:4 (2010), 450–457.
© Cambridge University Press 2010
Reporting and presenting
information retrieval processes:
the need for optimizing common
practice in health technology assessment
Christina Niederstadt
Medical Review Board of the German Statutory Health Insurances
Lower Saxony (MDK Niedersachsen)
Sigrid Droste
Institute for Quality and Efficiency in Health Care (IQWiG)
Background: Information retrieval (IR) in health technology assessment (HTA) calls for
transparency and reproducibility, but common practice in the documentation and
presentation of this process is inadequate in fulfilling this demand.
Objectives: Our objective is to promote good IR practice by presenting the
conceptualization of retrieval and the transcription of search strategies in a form readable
to non-information specialists, and by reporting search strategies as effectively processed.
Methods: We performed a comprehensive database search (04/2010) to synthesize the
current state of the art. We then developed graphical and tabular presentation methods,
tested their feasibility on existing research questions, and defined recommendations.
Results: No generally accepted standard of reporting of IR in HTA exists. We, therefore,
developed templates for presenting the retrieval conceptualization, database selection,
and additional hand-searching as well as for presenting search histories of complex and
lengthy search strategies. No single template fits all conceptualizations, but some can be
applied to most processes. Database interface providers report queries as entered, not as
they are actually processed. In PubMed®, the huge difference between the entered and the
processed query is shown in “Details.” Quality control and evaluation of search strategies
using a validated tool such as the PRESS checklist are suboptimal when only entry-based
search histories are available.
Conclusions: Moving toward an internationally accepted IR reporting standard calls for
advances in common reporting practices. Comprehensive, process-based reporting and
presentation would make IR more understandable to non-information specialists
and facilitate quality control.
Keywords: Information storage and retrieval, Technology assessment, Biomedical
High-quality information retrieval is the base of each systematic review and health technology assessment. In practice, planning and executing information retrieval is mostly performed by information specialists and checked for quality by researchers. This calls for comprehensive and repeatable reports of the whole information retrieval process. The demand is supported by the requirements outlined in numerous statements and tools (e.g., PRISMA, AMSTAR, MOOSE, and AGREE) that have been developed and introduced into practice in recent years and that address the reporting and evaluation of systematic reviews and HTAs (11;16;18;19).
The quality of reporting of the information retrieval process in technology assessments and systematic reviews has been examined repeatedly. Common practice is still heterogeneous and does not fulfill the demand for comprehensiveness and repeatability in an optimal way, resulting in incomplete retrieval reports (3;8;12;13;23).
Our aim is to promote good information retrieval practices by (i) presenting the conceptualization of retrieval and the transcription of complex search strategies in a transparent way, readable to non-information specialists, and (ii) documenting more accurately the search strategies effectively processed.
The starting point of our project was a comprehensive database search in LISA®, The Cochrane Library®, MEDLINE®, EMBASE®, Science Citation Index®, PsycInfo®, Journals@OVID®, LocatorPlus®, and two library catalogues (Cologne University Library, German Central Library of Medicine). These sources were searched in April 2010 for current state-of-the-art literature on reporting and presenting information retrieval processes. We then condensed the results into graphics and tables, tested the feasibility of these tools, and finally defined recommendations on comprehensive documentation of retrieval results.
Table 1. Reporting Elements According to STARLITE (1)
S – Sampling strategy (comprehensive, selective, purposive)
T – Type of studies (study types or design)
A – Approaches (additional searching, e.g., hand-searching, snowballing)
R – Range of years (time period chosen or coverage of databases)
L – Limits (logistic, e.g., human, language)
I – Inclusion and exclusions (conceptual limitations, e.g., geographical location, setting)
T – Terms used (fully, e.g., search strategy, or partially, e.g., terminology used without syntax and operators)
E – Electronic sources (databases used, search platform and vendor)
Our comprehensive literature search initially resulted in 923
hits and a few additional monographs on visualization tech-
niques. After eliminating duplicates and congress abstracts,
and a screening by one reviewer, only a few dozen relevant
articles remained. Most of these described tools for assessing
the quality of published search strategies. These remaining
publications were reviewed in full text.
Few experts currently study issues of information re-
trieval practice specifically adapted to HTA. Only one author
has presented a proposal for reporting: the STAndards for
reporting LITErature searches (STARLITE) (1); its elements
are listed in Table 1.
Several handbooks on systematic reviews or health technology assessment offer guidance on reporting information retrieval processes. These include CRD's guidance for undertaking reviews in health care, the Working Group 4 report “Best practice in undertaking and reporting health technology assessments,” The Campbell Collaboration “Information Retrieval Policy Brief” and “Systematic Review Information Retrieval Checklist,” as well as INAHTA's “Checklist for health technology assessment reports”
(2;5;9;20;21). The recommendations on reporting informa-
tion retrieval processes in HTAs are heterogeneous in these
papers. For example, the CRD statements on the search date (also recommended by DACEHTA (6)) and on editing provider-reported strategies as little as possible, without removing the number of hits identified, are not covered by STARLITE. The Cochrane Collaboration calls for reproducible transcripts of all search strategies for all databases and for “all information found on the internet, such as information on ongoing trials” (22).
Further handbooks on synthesizing research evidence,
for example, Sandelowski and Barroso or Kitchenham
(10;15), complement this collection. Guidelines on writing
systematic reviews, for example, PRISMA, contain only a few
sentences on reporting information retrieval such as “de-
scribe all information sources (e.g., databases with dates of
coverage, contact with study authors to identify additional
studies) in the search and date last searched” and “present
full electronic search strategy for at least one database, in-
cluding any limits used, such that it could be repeated” (11).
MOOSE requests more details (see Supplementary Table 1, which can be viewed online at www.journals.cambridge.org/thc2010030).
Tools for assessing the quality of systematic reviews
such as AMSTAR or AGREE (16;19) include statements on
reporting information retrieval processes. For AMSTAR, the
report must “include years and database used, key words
and/or MeSH® terms must be stated and where feasible
the search strategy should be provided” (16). For AGREE,
“details of the strategy used to search for evidence should
be provided, including search terms used, sources consulted,
and dates of the literature covered” (19).
Proposal for Reporting and Presenting
Information Retrieval Processes
Common practice in reporting information retrieval pro-
cesses is to describe in the methods section what has been
done, and to transcribe full search strategies in appendices of
HTA reports or on journal Web pages. This description and
transcription procedure refers to the step-by-step workflow
commonly performed in HTA information retrieval, consist-
ing of eight steps: Step 1: Translating the research question
into a search question; Step 2: Concept building by model-
ing search components; Step 3: Identifying synonyms; Step
4: Selecting relevant information sources; Step 5: Designing search strategies; Step 6: Executing the searches; Step 7: Managing retrieval results, standardized documentation, and presentation; and Step 8: Final quality check, calculation of precision and recall. A comprehensive information search aiming at highly sensitive results with the highest possible precision calls for such documentation of the information retrieval process: for a complete and comprehensive transcription of the search components and the concept model, as well as of the search strategies and other search routines used, and, finally, of the retrieval results.
In-Text Documentation of the Information Retrieval Process. In-text documentation of information retrieval processes is heterogeneous and often incomplete or not sufficiently detailed. The most detailed recommendations regarding in-text descriptions have been published in The Cochrane Handbook for Systematic Reviews of Interventions (22). The recommended elements are as follows: List of all databases searched; Dates of the last search for each database and the period searched; Any language or publication status restrictions; List of gray literature sources; List of individuals or organizations contacted; List of journals and conference proceedings specifically hand-searched; and List of any other sources searched.
Citing issues are not addressed by The Cochrane Handbook. Descriptions of information retrieval processes should include information on the use of filters, also whether standard filters were applied (e.g., the Cochrane Highly Sensitive Search Strategy for identifying randomized controlled trials [RCTs]) or strategies were acquired partly or completely from published work. Additional definitions pertaining to the conceptualization of information retrieval, referring to the components and model of the search, would be helpful but are currently not common practice.
DEFINING SEARCH COMPONENTS ACCORDING TO PICO
A translation of the research question into a searchable question should be presented using the PICO structure (Patients, Intervention, Comparison, Outcomes) (4); see Table 2. Because the final search components are seldom identical to the predefined inclusion criteria applied in trial (screening) searches, presenting the information retrieval specifications separately is recommended. The PICO scheme can easily be extended, but not all components are mandatory for information retrieval.
DEFINING THE SEARCH MODEL
Graphical presentation of the search model or models is still rare in reports of information retrieval processes and their results. This information would, however, be very helpful to researchers and other non-information specialists. The more complex the search question, the more helpful the graphical presentation. If a search model is presented, Venn diagrams are preferred (see Figure 1). In Boolean logic, the search model of our example reads as follows: (search component “patient”) AND (search component “problem”) AND (search component “intervention”) AND (search component “study design”).
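To make this structure concrete in code: synonyms within a component are OR-combined, and the components themselves are AND-combined. The following Python sketch is ours, for illustration only; the abbreviated term lists are drawn from Table 3.

```python
# Sketch: assemble a Boolean query from PICO-style search components.
# Within a component, synonyms are OR-combined; the components
# themselves are AND-combined, mirroring the Venn diagram model.
components = {
    "patient": ["infant*", "child*", "adolescen*"],
    "problem": ["diabetes", "diabetic"],
    "intervention": ['"human insulin"', "insuman*"],
    "study design": ["random*", '"controlled clinical trial"'],
}

query = " AND ".join(
    "(" + " OR ".join(terms) + ")" for terms in components.values()
)
print(query)
# (infant* OR child* OR adolescen*) AND (diabetes OR diabetic) AND
# ("human insulin" OR insuman*) AND (random* OR "controlled clinical trial")
```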
The sample space usually covers the hits of interest.
Table 2. PICO Definitions of the Example “Effectiveness of Human Insulin in Diabetic
Patients Aged <18 Years”
Search component “patient”: Toddlers, children, adolescents (0–18 years)
Search component “problem”: Diabetes mellitus (WHO classification)
Search component “intervention”: Human insulin
Search component “comparator”: —
Search component “outcomes”: —
Search component “study design”: Randomized controlled trials, controlled clinical trials
Search component “time period”: No restrictions
Search component “regional setting”: —
If the search model is more complex and further subsets are of interest, for example, the intersection of the search components “intervention” and “problem,” an Info-Crystal (17) can be used. Info-Crystals are often appropriate for presenting ethical or legal issues or, in some cases, economic evaluations of health technologies.
As Venn diagrams are restricted to three or four search components, search models with more components require an alternative presentation method. One alternative could be derived from the Unified Modeling Language (UML) (7). A UML-derived model of our example is presented as Supplementary Figure 1, which can be viewed online at www.journals.cambridge.org/thc2010030.
Appendix/Additional Materials: Transcribing the Information Retrieval. Lately, documentation of the information retrieval process, including a full transcription of search histories, has become more common and is regarded as state-of-the-art in HTA. However, reporting practices are still unsatisfactory. Incomplete reporting, lack of detail, and missing peer-review procedures render evaluation and repetition of search strategies difficult.
Accurate reporting is also needed for updating. However, when updating searches, merely reprocessing existing strategies is insufficient, as it is necessary to check for changed or new indexing of medical terms, etc. Hence, updating purposes are a concern, but not a primary one.
Several studies on the completeness and accuracy of information retrieval processes and results have found poor reporting in systematic reviews and HTAs. Various authors have called for improvements in reporting or demanded an international consensus on reporting standards; some suggest peer review as an improvement (3;8;11–14;23). Few publications transcribe the full search strategies in an appendix of the report or as “additional material” on Web pages.
While PRISMA (11) recommends reporting the full
search strategy “for at least one major database ...” (not
including number of records per query line), state-of-the-art
practice in HTA is to report all full search strategies, in-
cluding number of records. The INAHTA checklist (9) sug-
gests providing detailed materials optionally, “on request.”
This option results in less transparency and impedes eval-
uating the methodology and credibility of search strate-
gies. Hence, this option is not recommended in publications (such as HTA reports) where page restrictions are less strict.
In the context of this common, suboptimal practice, we developed templates to present the information retrieval workflow and the conceptualization process (defining the question and the search model, database selection and other sources of information,
Figure 1. Venn diagram and Info-Crystal development on the example “Effectiveness of human insulin in diabetic patients aged <18 years.”
additional hand-searching), as well as templates for structured presentation of complex and lengthy search histories. No single template fits all conceptualizations, but a few can be applied to most of the common retrieval processes. Our templates proved to be practical for selected research questions and more easily understandable than common-practice reporting.
RECOMMENDATIONS FOR REPORTING INFORMATION
RETRIEVAL PROCESSES IN DETAIL
The whole information retrieval process should be presented in a comprehensive manner (cf. Appendix 3 in CRD's guidance (5)). For a better understanding of the proposed reporting and presentation, a complete set of transcriptions on an autologous-therapy example is attached to this article (Supplementary Figure 2, which can be viewed online at www.journals.cambridge.org/thc2010030).
According to our concept, the steps in the retrieval pro-
cess include defining search components and models, select-
ing information sources, and transcribing search strategies.
1. Defining search components using PICO and its extensions. If the search components are not documented in the text, they should be presented at this stage, as described earlier in detail.
2. Defining the search model(s). Likewise, the search model(s), if not yet documented, should be presented at this stage. If different models were processed, this should also be stated, for example, different models for primary studies.
3. Selection of information sources. All selected information sources should be listed in appropriate detail, which varies by source, as shown below.
Searched bibliographic and full-text databases, including gray literature databases and library catalogues, should be presented using their full name, including the name of the provider and search interface. If appropriate, the time range covered by the database should also be specified. A template for presenting database selection is shown in Supplementary Table 2, which can be viewed online at www.journals.cambridge.org/thc2010030.

All searched registers should be listed. A template for describing these searches is shown in Supplementary Table 3, which can be viewed online at www.journals.cambridge.org/thc2010030.

A list of all journals, conference proceedings, Internet search engines or other Internet sources, as well as any other information resources used in the retrieval process, should be compiled. A template for describing hand-searching sources is shown in Supplementary Table 4, which can be viewed online at www.journals.cambridge.org/thc2010030.

Additional searches of reference lists in publications should be explicitly mentioned. This could be achieved by marking all publications retrieved in this manner. If all publications in an assessment are hand-searched, a separate list of resulting new references could be provided.

Searches for regulations or guidelines should be listed and described. A template for documenting relevant regulations and guidelines is shown in Supplementary Table 5, which can be viewed online at www.journals.cambridge.org/thc2010030.

Routinely Collected Statistics. If general statistics sources were included, they should be listed. A template for describing sources of statistical data is shown in Supplementary Table 6, which can be viewed online at www.journals.cambridge.org/thc2010030.

Other Sources of Information. Any other sources of information used should also be listed. These may include contacts with manufacturers, patient groups, researchers, etc. A template for describing these sources is shown in Supplementary Table 7, which can be viewed online at www.journals.cambridge.org/thc2010030. If Web pages of a group of institutions (such as HTA units) were searched, a list of the institutions' names should be provided.

Original Primary Data. If the researchers collected primary data that are not published elsewhere or already described within the text, at least the method and results should be stated, for example, in a separate appendix. If a survey was performed, the questionnaire should similarly be presented as an appendix.
4. Transcripts of search strategies. For all searched databases and registers, full search strategies should be reported, including additional information on the following items: The searcher; Citation of sources used; Name of the database (if metafiles were searched, the names of all databases included in the search process should be listed); Interface used; Sampling strategy (comprehensive, selective, or purposive sampling); Standard search filters used; Time period restrictions (even though defined by the search component); Any further limits, for example, language restrictions (which are not state-of-the-art for information retrieval, but if they were introduced, they should be mentioned at this stage); and Date of searching. The search strategies should be reported according to the defined search components and the search model in all applications where possible.
Table 3. Full Search Strategy Reporting of the Example “Effectiveness of Human Insulin in Diabetic Patients Aged <18 Years”
Responsible for retrieval (conceptualization and conducting): Sigrid Droste,
Sources used: no publication or part of any,
Database: PubMed (NLM),
Sampling strategy: comprehensive sampling,
Standard search filters: no standard filter,
Time period: no restrictions,
Further limits: no limits,
Date of searching: 16/05/2010.
(((“INFANT”[Mesh] OR “CHILD”[Mesh]) OR “ADOLESCENT”[Mesh]) OR “MINORS”[Mesh]) OR
infant* OR child OR children OR toddler* OR kindergarten* OR adolescen* OR minor OR minors OR boy OR
boys OR girl OR girls OR pediatr* OR juvenil* OR youth[Title/Abstract]
kind* OR padiatr* OR paediatr*[Transliterated Title]
#1 OR #2 OR #3 OR #4
((((((“DIABETES MELLITUS”[Mesh:NoExp] OR “DIABETES MELLITUS, TYPE 1”[Mesh]) OR
“DIABETES MELLITUS, TYPE 2”[Mesh]) OR “DIABETES COMPLICATIONS”[Mesh:NoExp]) OR
“DIABETIC ANGIOPATHIES”[Mesh]) OR “DIABETIC COMA”[Mesh]) OR “DIABETIC
KETOACIDOSIS”[Mesh]) OR “PREDIABETIC STATE”[Mesh]
diabetes OR diabetic OR NIDDM OR IDDM OR MODY[Title]
diabetes OR diabetic[Journal]
#6 OR #7 OR #8
(“INSULIN, NPH”[Mesh] OR “INSULIN, NEUTRAL”[Substance Name]) OR “INSULIN,
human insulin* OR insulin human OR insuman* OR actrapid* OR huminsulin* OR protaphan* OR ultratard*
#10 OR #11 OR #12
(“RANDOMIZED CONTROLLED TRIAL”[Publication Type] OR “CONTROLLED CLINICAL
TRIAL”[Publication Type]) OR “RANDOM ALLOCATION”[Mesh]
“DOUBLE-BLIND METHOD”[Mesh] OR “CONTROL GROUPS”[Mesh]
Limits: randomized controlled trial, controlled clinical trial
random* OR controlled clinical trial OR controlled clinical study OR controlled trial OR controlled
double blind* OR single blind* OR placebo* OR head-to-head* OR head to head study[Title/Abstract]
#14 OR #15 OR #16 OR #17 OR #18
#5 AND #9 AND #13 AND #19
A structured conduct of search queries is a prerequisite for structured and complete reporting: for each query line, the number of records identified and the query text should be reported line by line.
The search histories as transcribed by the provider should be corrected as little as possible. If relevant terms resulted in no records, these lines should remain in the search history, except where this was caused by typing errors. In the interest of readability, the number of records identified should be reported in a separate column to the left of the query text, and the index terms searched and Boolean operators used should be written in capitals, while all other query text should be written in lower case.
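As an illustration of this layout (our sketch, not a template from the article; the record counts shown are hypothetical), a transcript can be rendered with the count column to the left of the query text:

```python
# Sketch: render a search history with record counts in a left-hand
# column; index terms and Boolean operators are already upper-case,
# free-text terms lower-case, following the convention described above.
history = [
    ("#1", 151_234, '"INFANT"[Mesh] OR "CHILD"[Mesh]'),   # hypothetical counts
    ("#2", 208_551, "infant* OR child*[Title/Abstract]"),
    ("#3", 287_310, "#1 OR #2"),
]

for line_no, records, query in history:
    print(f"{line_no:<4}{records:>10}  {query}")
```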
A template search on “Effectiveness of human insulin in
diabetic patients aged <18 years” was designed as shown
in Table 3. This design is simple in structure and does not
fit all search strategies, but further strategies can be designed
along these lines.
REPORTING UPDATING INFORMATION RETRIEVAL
As information retrieval is conducted at the beginning of
the assessment process, it is usually necessary to update the
search to identify relevant publications that were published
or indexed after the initial search date. No common prac-
tice exists for reporting updates. State-of-the-art could be as
follows: (i) If the information retrieval update used search strategies identical to the primary search, it suffices to report the date of the update and the number of additional records
(overall and by information source). (ii) If the update intro-
duced one or two new or modified query lines while all other
lines were identical, it suffices to report the additional or changed query lines, adding the new or modified query line in the format in which the initial strategies were reported. A detailed description of the location of newly introduced text is required, and modified query lines should be named. (iii) If more than one or
two query lines were changed, the full updated search strategies should be presented, including the date of the update.
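As a hypothetical illustration of case (ii) (ours, not from the article; line number, query text, date, and count are invented for the example), an update entry could be recorded in the same format as the initial strategy:

```python
# Sketch: record an update to one query line of a previously
# reported strategy, keeping the original reporting format.
update = {
    "date_of_update": "2010-09-01",   # hypothetical date
    "modified_line": "#11",           # the changed query line
    "new_query": "human insulin* OR insulin human OR insuman*",
    "additional_records": 42,         # hypothetical count
}

print(f"Update {update['date_of_update']}: line {update['modified_line']} -> "
      f"{update['new_query']} ({update['additional_records']} additional records)")
```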
PROCESS-BASED VERSUS ENTRY-BASED REPORTING OF SEARCH STRATEGIES
The application of published search strategies to differ-
ent database user interfaces yields markedly different re-
sults. In addition to incomplete or faulty documentation, all commonly used search engines merely provide documentation of the exact query string as entered. Database interface providers do not readily document the queries actually processed. PubMed® (http://www.ncbi.nlm.nih.gov/sites/entrez) does, however, provide a special frame called “Details” showing the actual search query as processed by PubMed® algorithms. Thus, in PubMed® the huge difference between queries entered and those processed can be observed. As these query translation algorithms change frequently, the reader of a published search history cannot necessarily determine what was actually searched. In
consequence, state-of-the-art in reporting search strategies
should be, wherever possible, to transcribe the queries pro-
cessed instead of those entered. Copying each query from
“Details” in PubMed® is more laborious, but adhering to
these details ensures transparency and facilitates evaluation
of the information retrieval.
An example of the difference between queries entered and those processed is shown in PubMed® when entering “information retrieval” in the query line. The search history in PubMed® Advanced Search reports this search term only. The “Details” window reports the query translation processed: “INFORMATION STORAGE AND RETRIEVAL”[MeSH Terms] OR (“information”[All Fields] AND “storage”[All Fields] AND “retrieval”[All Fields]) OR “information storage and retrieval”[All Fields] OR (“information”[All Fields] AND “retrieval”[All Fields]) OR “information retrieval”[All Fields].
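Where an interface exposes the processed query programmatically, this transcription step can even be automated. As a minimal sketch (our illustration, not part of the original workflow): the NCBI E-utilities esearch endpoint returns the processed form of a PubMed query in its QueryTranslation field, which corresponds to the “Details” translation shown above.

```python
# Sketch: retrieve the query as processed by PubMed (its "query
# translation") via the NCBI E-utilities esearch endpoint.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

def processed_query(entered: str) -> str:
    """Return PubMed's translation of an entered query string."""
    url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
           + urllib.parse.urlencode({"db": "pubmed", "term": entered}))
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    # <QueryTranslation> holds the query as processed, not as entered.
    return tree.findtext("QueryTranslation", default="")

if __name__ == "__main__":
    print(processed_query("information retrieval"))
```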
QUALITY CONTROL IN REPORTING INFORMATION RETRIEVAL
PRISMA authors and others ask for a peer review of search strategies; their reporting, too, should be checked for quality purposes. This can be done by applying the Peer Review of Electronic Search Strategies (PRESS) checklist (3). This validated instrument is available for evaluating the accuracy and completeness of reported search strategies, for example, checking for insufficient translation of the search question, missing thesaurus terms, misused Boolean operators, spelling variants, and typing errors.
CONCLUSIONS
High-quality information retrieval is the basis of every systematic review or health technology assessment. Improving the reporting and presentation of the retrieval process would make it understandable to non-information specialists. This seems doable by using suitably adapted graphs and tables. We also recommend reporting the queries processed wherever possible. If information retrieval processes were comprehensively reported more often, this would facilitate quality control of information retrieval, and published search histories would become more reproducible. These advances would be a big step toward the development of an internationally accepted reporting standard.
SUPPLEMENTARY MATERIAL
Supplementary Table 1
Supplementary Table 2
Supplementary Table 3
Supplementary Table 4
Supplementary Table 5
Supplementary Table 6
Supplementary Table 7
Supplementary Figure 1
Supplementary Figure 2
CONTACT INFORMATION
Christina Niederstadt, MPH (Christina@medinf.org),
Medical Reviewer, Department of Consulting, Medical re-
view board of the German statutory health insurances,
Hildesheimer Str. 202, Hannover, Niedersachsen, 30519, Germany
Sigrid Droste, Dipl.-Geogr.
Research Fellow, Department of Quality in Healthcare, In-
stitute for Quality and Efficiency in Healthcare (IQWiG),
Dillenburger Strasse 27, Cologne, 51105, Germany
CONFLICT OF INTEREST
Both authors report having no potential conflict of interest.
1. Booth A. “Brimful of STARLITE”: Toward standards for reporting literature searches. J Med Libr Assoc. 2006;94:421-429.
2. Busse R, Orvain J, Velasco M, et al. Best practice in undertaking and reporting health technology assessments: Working group 4 report. Int J Technol Assess Health Care. 2002;18:361-422.
3. Canadian Agency for Drugs and Technologies in Health. PRESS: Peer review of electronic search strategies. Ottawa: CADTH; 2008.
4. Centre for Evidence Based Medicine (CEBM), University
of Oxford. Asking focused questions. http://www.cebm.net/
index.aspx?o=1036 (accessed May 11, 2010).
5. Centre for Reviews and Dissemination. Systematic reviews:
CRD’s guidance for undertaking reviews in health care. York:
CRD, University of York; 2009.
6. Kristensen FB, Sigmund H, eds. Health technology assessment handbook. Copenhagen: DACEHTA; 2008.
7. Learning UML. programming/learning+uml/ (accessed May 11, 2010).
8. Golder S, Loke YK, McIntosh HM. Poor reporting and inadequate searches were apparent in systematic reviews of adverse effects. J Clin Epidemiol. 2008;61:440-448.
9. International Network of Agencies for Health Technology As-
sessment. A checklist for health technology assessment reports.
Stockholm: INAHTA; 2007.
10. Kitchenham B. Procedures for performing systematic reviews.
Eversleigh: National Information and Communications Tech-
nology Centre of Excellence Australia (NICTA); 2004.
11. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA state-
ment for reporting systematic reviews and meta-analyses of
studies that evaluate health care interventions: Explanation and
elaboration. PLoS Med. 2009;6:e1000100.
12. Patrick TB, Demiris G, Folk LC, et al. Evidence-based retrieval in evidence-based medicine. J Med Libr Assoc. 2004;92:196-199.
13. Roundtree AK, Kallen MA, Lopez-Olivo MA, et al. Poor reporting of search strategy and conflict of interest in over 250 narrative and systematic reviews of two biologic agents in arthritis: A systematic review. J Clin Epidemiol. 2009;62:128-137.
14. Sampson M, McGowan J, Tetzlaff J, Cogo E, Moher D. No
consensus exists on search reporting methods for systematic
reviews. J Clin Epidemiol. 2008;61:748-754.
15. Sandelowski M, Barroso J. Handbook for synthesizing qualita-
tive research. New York: Springer; 2007.
16. Shea BJ, Grimshaw JM, Wells GA, et al. Development of AM-
STAR: A measurement tool to assess the methodological qual-
ity of systematic reviews. BMC Med Res Methodol. 2007;7:10.
17. Spoerri A. InfoCrystal: A visual tool for information retrieval.
In: Card SK, MacKinlay JD, Shneiderman B eds. Readings in
information visualization: Using vision to think. San Diego:
Academic Press; 1999.
18. Stroup DF, Berlin JA, Morton SC, et al. Meta-analysis of observational studies in epidemiology: A proposal for reporting. JAMA. 2000;283:2008-2012.
19. The AGREE Collaboration. Appraisal of Guidelines for
Research and Evaluation: AGREE instrument. London: St
George’s Hospital Medical School; 2001.
20. The Campbell Collaboration Steering Committee, ed. The
Campbell Collaboration information retrieval policy brief.
Oslo: The Campbell Collaboration; 2004.
21. The Campbell Collaboration. Systematic review information retrieval checklist: Revised 13/02/2009. Oslo: The Campbell Collaboration; 2009.
22. The Cochrane Collaboration. Cochrane handbook for sys-
tematic reviews of interventions: Version 5.0.2. Oxford: The
Cochrane Collaboration; 2009.
23. Yoshii A, Plaut DA, McGraw KA, Anderson MJ, Wellik KE.
Analysis of the reporting of search strategies in Cochrane sys-
tematic reviews. J Med Libr Assoc. 2009;97:21-29.