ABSTRACT
The use of an evidence-based approach to practice requires “the integration of best research evidence with
clinical expertise and patient values”, where the best evidence can be gathered from randomized controlled
trials (RCTs), systematic reviews and meta-analyses. Furthermore, informed decisions in healthcare and
the prompt incorporation of new research findings in routine practice necessitate regular reading, evalua-
tion, and integration of the current knowledge from the primary literature on a given topic. However, given
the dramatic increase in published studies, such an approach may become too time consuming and there-
fore impractical, if not impossible. Therefore, systematic reviews and meta-analyses can provide the “best
evidence” and an unbiased overview of the body of knowledge on a specific topic. In the present article the
authors aim to provide a gentle introduction to readers not familiar with systematic reviews and meta-
analyses in order to understand the basic principles and methods behind this type of literature. This article
will help practitioners to critically read and interpret systematic reviews and meta-analyses to appropri-
ately apply the available evidence to their clinical practice.
Key words: evidence-based practice, meta-analysis, systematic review
IJSPT
INVITED COMMENTARY
SYSTEMATIC REVIEW AND META-ANALYSIS: A PRIMER
Franco M. Impellizzeri, PhD1
Mario Bizzini, PT, PhD1
1 Department of Research and Development and FIFA Medical
Assessment and Research Centre, Schulthess Clinic, Zurich,
Switzerland
Acknowledgment
We would like to thank Kirsten Clift for the English revision of
the manuscript.
CORRESPONDING AUTHOR
Mario Bizzini, PT, PhD
FIFA Medical Assessment and Research
Center, Schulthess Clinic
Lengghalde 2, 8008 Zurich, Switzerland
Phone: +41 44 385 75 85
Fax: +41 44 385 75 90
E-mail: mario.bizzini@f-marc.com
INTRODUCTION
Sackett et al1,2 defined evidence-based practice as “the
integration of best research evidence with clinical
expertise and patient values”. The “best evidence” can
be gathered by reading randomized controlled trials
(RCTs), systematic reviews, and meta-analyses.2 It
should be noted that the “best evidence” (e.g. con-
cerning clinical prognosis, or patient experience) may
also come from other types of research designs par-
ticularly when dealing with topics that are not possi-
ble to investigate with RCTs.3,4 From the available
evidence, it is possible to provide clinical recommen-
dations using different levels of evidence.5 Although
sometimes a matter of debate,6-8 when properly
applied, the evidence-based approach and therefore
meta-analyses and systematic reviews (highest level
of evidence) can help the decision-making process in
different ways:9
1. Identifying treatments that are not effective;
2. Summarizing the likely magnitude of benefits of
effective treatments;
3. Identifying unanticipated risks of apparently
effective treatments;
4. Identifying gaps in knowledge;
5. Auditing the quality of existing randomized con-
trolled trials.
The number of scientific articles published in biomedi-
cal areas has dramatically increased in the last several
decades. Due to the quest for timely and informed deci-
sions in healthcare and medicine, good clinical practice
and prompt integration of new research findings into
routine practice, clinicians and practitioners should
regularly read new literature and compare it with the
existing evidence.10 However, this is time consuming, and it is therefore impractical, if not impossible, for practi-
tioners to continuously read, evaluate, and incorporate
the current knowledge from the primary literature
sources on a given topic.11 Furthermore, the reader also
needs to be able to interpret both the new and the past
body of knowledge in relation to the methodological
quality of the studies. This makes it even more difficult
to use the scientific literature as reference knowledge
for clinical decision-making. For this reason, review
articles are important tools available for practitioners to
summarize and synthesize the available evidence on a
particular topic,10 in addition to being an integral part of
the evidence-based approach.
International institutions have been created in
recent years in an attempt to standardize and update
scientific knowledge. Probably the best-known example is the Cochrane Collaboration, founded in 1993 as an independent, non-profit organisation, now comprising more than 28,000 contributors
worldwide and producing systematic reviews and
meta-analyses of healthcare interventions. There
are currently over 5000 Cochrane Reviews available
(http://www.cochrane.org). The methodology used
to perform systematic reviews and meta-analyses is
crucial. Furthermore, systematic reviews and meta-
analyses have limitations that should be acknowl-
edged and considered. Like any other scientific
research, a systematic review with or without meta-
analysis can be performed in a good or bad way. As
a consequence, guidelines have been developed and
proposed to reduce the risk of drawing misleading
conclusions from poorly conducted literature
searches and meta-analyses.11-18
In the present article the authors aim to provide an
introduction to readers not familiar with systematic
reviews and meta-analysis in order to help them
understand the basic principles and methods behind
this kind of literature. A meta-analysis is not just a
statistical tool but qualifies as an actual observational
study and hence it must be approached following
established research methods involving well-defined
steps. This review should also help practitioners to
critically and appropriately read and interpret sys-
tematic reviews and meta-analyses.
NARRATIVE VERSUS SYSTEMATIC
REVIEWS
Literature reviews can be classified as “narrative” and
“systematic” (Table 1). Narrative reviews were the first
form of literature overview allowing practitioners to
have a quick synopsis on the current state of science in
the topic of interest. When written by experts (usually
by invitation) narrative reviews are also called “expert
reviews”. However, both narrative and expert reviews
are based on a subjective selection of publications
through which the reviewer qualitatively addresses a
question summarizing the findings of previous studies
and drawing a conclusion.15 As such, albeit offering
interesting information for clinicians, they carry an obvious author’s bias, since they are not produced following a clear methodology (i.e. the identification of the litera-
ture is not transparent). Indeed, narrative and expert
reviews typically use literature to support authors’
statements but it is not clear whether these statements
are evidence-based or just a personal opinion/experi-
ence of the authors. Furthermore, the lack of a specific
search strategy increases the risk of failing to identify
relevant or key studies on a given topic thus allowing
for questions to arise regarding the conclusions made
by the authors.19 Narrative reviews should be consid-
ered as opinion pieces or invited commentaries, and
therefore they are unreliable sources of information
and have a low evidence level.10,11,19
By conducting a “systematic review”, the flaws of nar-
rative reviews can be limited or overcome. The term
“systematic” refers to the strict approach (clear set of
rules) used for identifying relevant studies,11,15 which
includes the use of an accurate search strategy in
order to identify all studies addressing a specific topic,
the establishment of clear inclusion/exclusion crite-
ria and a well-defined methodological analysis of the
selected studies. By conducting a properly performed
systematic review, the potential bias in identifying
the studies is reduced, thus limiting the possibility that the authors select only the studies they arbitrarily consider most “relevant” for supporting their own opinion
or research hypotheses. Systematic reviews are con-
sidered to provide the highest level of evidence.
META-ANALYSIS
A systematic review can be concluded in a qualitative
way by discussing, comparing and tabulating the
results of the various studies, or by statistically analys-
ing the results from the independent studies, that is, by conducting a meta-analysis. Meta-analysis has been
defined by Glass20 as “the statistical analysis of a large
collection of analysis results from individual studies
for the purpose of integrating the findings”. By com-
bining individual studies it is possible to provide a
single and more precise estimate of the treatment
effects.11,21 However, the quantitative synthesis of
results from a series of studies is meaningful only if
these studies have been identified and collected in a
proper and systematic way. This is why the systematic review always precedes the meta-analysis and why the two methodologies are commonly used
together. Ideally, the combination of individual study
results to get a single summary estimate is appropri-
ate when the selected studies are targeted to a com-
mon goal, have similar clinical populations, and share
the same study design. When the studies are thought
to be too different (statistically or clinically), some
researchers prefer not to calculate summary esti-
mates. Reasons for not presenting the summary esti-
mates are usually related to study heterogeneity
aspects such as clinical diversity (e.g. different metrics
or outcomes, participant characteristics, different set-
tings, etc.), methodological diversity (different study
designs) and statistical heterogeneity.22 Some meth-
ods, however, are available for dealing with these
problems in order to combine the study results.22 Nev-
ertheless, the source of heterogeneity should always be explored using, for example, sensitivity analyses. In
this analysis the primary studies are classified in dif-
ferent groups based on methodological and/or clinical
characteristics and subsequently compared. Even
Table 1. Characteristics of narrative and systematic reviews, modified from
Physiotherapy Evidence Database.37
after this subgroup analysis the studies included in the
groups may still be statistically heterogeneous and
therefore the calculation of a single estimate may be
questionable.11,19 Statistical heterogeneity can be calculated with different tests, but the most popular are Cochran’s Q and the I2 index.23 Although the latter is thought
to be more powerful, it has been shown that their per-
formance is similar24 and these tests are generally
weak (low power). Therefore, their confidence inter-
vals should always be presented in meta-analyses and
taken into consideration when interpreting heteroge-
neity. Although heterogeneity can be seen as a “statis-
tical” problem, it is also an opportunity for obtaining
important clinical information about the influences of
specific clinical differences.11 Sometimes, the goal of a
meta-analysis is to explore the source of diversity
among studies.15 In this situation the inclusion criteria
are purposely allowed to be broader.
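As a simple illustration of how these statistics are obtained, the following Python sketch computes Cochran's Q and the I2 index from a small set of hypothetical study effect estimates and standard errors (all values are invented for illustration and are not taken from any study discussed in this article):

```python
import numpy as np

# Hypothetical effect estimates (standardized mean differences) and
# their standard errors from five primary studies.
effects = np.array([0.30, 0.45, 0.12, 0.60, 0.25])
se = np.array([0.15, 0.20, 0.10, 0.25, 0.18])

weights = 1.0 / se**2                                  # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)   # fixed-effect pooled estimate

# Cochran's Q: weighted sum of squared deviations from the pooled estimate
Q = np.sum(weights * (effects - pooled) ** 2)
df = len(effects) - 1

# I2: percentage of total variability attributable to between-study heterogeneity
I2 = max(0.0, (Q - df) / Q) * 100

print(f"Pooled estimate: {pooled:.3f}")
print(f"Cochran's Q = {Q:.2f} on {df} df, I2 = {I2:.1f}%")
```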
Meta-analyses of observational studies
Although meta-analyses usually combine results
from RCTs, meta-analyses of epidemiological studies
(case-control, cross-sectional or cohort studies) are
increasing in the literature, and therefore, guidelines
for conducting this type of meta-analysis have been
proposed (e.g. Meta-analysis Of Observational Stud-
ies in Epidemiology, MOOSE25). Although the RCT is the study design providing the highest level of evidence,
observational studies are used in situations where
RCTs are not possible such as when investigating the
potential causes of a rare disease or the prevalence of
a condition and other etiological hypotheses.3,4,11 The
two designs, however, usually address different
research questions (e.g. efficacy versus effectiveness)
and therefore the inclusion of both RCTs and obser-
vational studies in meta-analyses would not be appro-
priate.11,15 Major problems of observational studies
are the lack of a control group, the difficulty of control-
ling for confounding variables, and the high risk of
bias.26 Nevertheless, observational studies and there-
fore the meta-analyses of observational studies can
be useful and are an important step in examining the
effectiveness of treatments in healthcare.3,4,11 For the
meta-analyses of observational studies, exploring the source of heterogeneity through sensitivity analyses is often the main aim. Of note, meta-analyses them-
selves can be considered “observational studies of
the evidence”11 and, as a consequence, they may be
influenced by known and unknown confounders
similarly to primary observational studies.
Meta-analyses based on individual
patient data
While “traditional” meta-analyses combine aggregate
data (average of the study participants such as mean
treatment effects, mean age, etc.) for calculating a
summary estimate, it is possible (if data are avail-
able) to perform meta-analyses using the individual
participant data on which the aggregate data are
derived.27-29 Meta-analyses based on individual par-
ticipant data are increasing.28 This kind of meta-anal-
ysis is considered the most comprehensive and has
been regarded as the gold standard for systematic
reviews.29,30 Of course, it is not possible to simply pool
together the participants of various studies as if they
come from a large, single trial. The analysis must be
stratified by study so that the clustering of patients
within the studies is retained for preserving the
effects of the randomization used in the primary
investigations and avoiding artifacts such as Simpson’s paradox, a reversal in the direction of the associations.11,15,28,29 There are several potential
advantages of this kind of meta-analysis such as con-
sistent data checking, consistent use of inclusion and
exclusion criteria, better methods for dealing with
missing data, the possibility of performing the same
statistical analyses across studies, and a better exami-
nation of the effects of participant-level covari-
ates.15,31,32 Unfortunately, meta-analyses on individual
patient data are often difficult to conduct, time con-
suming, and it is often not easy to obtain the original
data needed to perform such an analysis.
Cumulative and Bayesian meta-analyses
Another form of meta-analysis is the so-called “cumu-
lative meta-analysis”. Cumulative meta-analyses rec-
ognize the cumulative nature of scientific evidence
and knowledge.11 In cumulative meta-analysis a new
relevant study on a given topic is added whenever it
becomes available. Therefore, a cumulative meta-
analysis shows the pattern of evidence over time
and can identify the point when a treatment becomes
clinically significant.11,15,33 Cumulative meta-analy-
ses are not simply updated meta-analyses: rather than performing a single pooling, the results are re-summarized as each new study is added.33 As a consequence, in the
forest plot, commonly used for displaying the effect
estimates, the horizontal lines represent the treat-
ment effect estimates as each study is added and not
the results of the single studies. The cumulative
meta-analysis should be interpreted within the
Bayesian framework, even though it differs from the “pure” Bayesian approach to meta-analysis.
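The following minimal Python sketch illustrates the idea of a cumulative meta-analysis: hypothetical studies are pooled (here with a simple fixed-effect, inverse-variance model) each time a new study becomes available, showing how the summary estimate and its confidence interval evolve over time. All numbers are invented for illustration.

```python
import numpy as np

# Hypothetical studies ordered by publication year; effects are mean
# differences with their standard errors.
years = [1998, 2001, 2004, 2007, 2010]
effects = np.array([0.50, 0.35, 0.42, 0.30, 0.38])
se = np.array([0.30, 0.25, 0.20, 0.15, 0.12])

w = 1.0 / se**2
for k in range(1, len(effects) + 1):
    # Fixed-effect pooling of the first k studies only
    pooled = np.sum(w[:k] * effects[:k]) / np.sum(w[:k])
    pooled_se = np.sqrt(1.0 / np.sum(w[:k]))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"Up to {years[k-1]}: {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```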
The Bayesian approach differs from the classical, or frequentist, approach to meta-analysis in that data and model parameters are considered to be random quantities and probability is interpreted as uncertainty rather than as a long-run frequency.11,15,34 Compared to the
frequentist methods, the Bayesian approach incor-
porates prior distributions, which can be specified based on a priori beliefs (the parameters being unknown random quantities), and the evidence coming from the studies is described by a likelihood function.11,15,34 The com-
bination of prior distribution and likelihood function
gives the posterior probability density function.34
The uncertainty around the posterior effect estimate
is defined as a credibility interval, which is the
equivalent of the confidence interval in the frequen-
tist approach.11,15,34 Although Bayesian meta-analyses
are increasing, they are still less common than tradi-
tional (frequentist) meta-analyses.
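As a rough illustration of the Bayesian logic described above, the sketch below performs a conjugate normal-normal update: a hypothetical prior distribution for the treatment effect is combined with a hypothetical observed effect (the likelihood) to obtain the posterior estimate and its 95% credibility interval. Real Bayesian meta-analyses typically use more elaborate hierarchical models; all values here are invented.

```python
import numpy as np
from scipy import stats

# Hypothetical sceptical prior belief about the treatment effect:
# normal with mean 0 and standard deviation 0.5.
prior_mean, prior_sd = 0.0, 0.5

# Hypothetical evidence from the data (likelihood): observed pooled
# effect of 0.40 with standard error 0.15.
obs_effect, obs_se = 0.40, 0.15

# Conjugate normal-normal update: precision-weighted combination of prior and data
prior_prec, obs_prec = 1 / prior_sd**2, 1 / obs_se**2
post_prec = prior_prec + obs_prec
post_mean = (prior_mean * prior_prec + obs_effect * obs_prec) / post_prec
post_sd = np.sqrt(1 / post_prec)

# 95% credibility interval from the posterior distribution
ci = stats.norm.interval(0.95, loc=post_mean, scale=post_sd)
print(f"Posterior effect: {post_mean:.2f} "
      f"(95% credibility interval {ci[0]:.2f} to {ci[1]:.2f})")
```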
Conducting a systematic review and
meta-analysis
As aforementioned, a systematic review must follow
well-defined and established methods. One reference
source of practical guidelines for properly applying meth-
odological principles when conducting systematic
reviews and meta-analyses is the Cochrane Handbook
for Systematic Reviews of Interventions that is available
for free online.12 However, other guidelines and text-
books on systematic reviews and meta-analysis are
available.11,13,14,15 Similarly, authors of reviews should
report the results in a transparent and complete way
and for this reason an international group of experts
developed and published the QUOROM (Quality Of
Reporting Of Meta-analyses),16 and recently the PRISMA
(Preferred Reporting Items for Systematic Reviews and
Meta-Analyses)17 guidelines addressing the reporting of
systematic reviews and meta-analyses of studies which
evaluate healthcare interventions.17,18
In this section the authors briefly present the princi-
pal steps necessary for conducting a systematic
review and meta-analysis, derived from available
reference guidelines and textbooks in which all the
contents (and much more) of the following section
can be found.11,12,14 A summary of the steps is pre-
sented in Figure 1. As with any other research study, the process starts with a
careful development of the review protocol, which
includes the definition of the research question, the
collection and analysis of data, and the interpreta-
tion of the results. The protocol defines the methods
that will be used in the review and should be set out
before starting the review in order to avoid bias, and
any deviations from the protocol should be reported and justified in the manuscript.
Step 1. Defining the review question and
eligibility criteria
The authors should start by formulating a precise
research question, which means they should clearly
report the objectives of the review and what ques-
tion they would like to address. If necessary, a
broad research question may be divided into more
specific questions. According to the PICOS frame-
work,35,36 the question should define the Population(s),
Intervention(s), Comparator(s), Outcome(s) and
Figure 1. Steps in conducting a systematic review. Modified
from11,14
Study design(s). This information will also provide
the rationale for the inclusion and exclusion criteria; a background section explaining the context and the key conceptual issues may also be needed. When using terms that may have different
interpretations, operational definitions should be
provided. An example may be the term “neuromus-
cular control” which can be interpreted in different
ways by different researchers and practitioners. Fur-
thermore, the inclusion criteria should be precise
enough to allow the selection of all the studies rele-
vant for answering the research question. In theory,
only the best evidence available should be used for
the systematic reviews. Unfortunately, the use of
an appropriate design (e.g. RCT) does not ensure
the study was well-conducted. However, the use of
cut-offs in quality scores as inclusion criteria is not
appropriate given their subjective nature, and a sen-
sitivity analysis comparing all available studies
based on some key methodological characteristics
is preferable.
Step 2. Searching for studies
The search strategy must be clearly stated and
should allow the identification of all the relevant
studies. The search strategy is usually based on the
PICOS elements and can be conducted using elec-
tronic databases, reading the reference lists of rele-
vant studies, hand-searching journals and conference
proceedings, contacting authors, experts in the field
and manufacturers, for example.
Currently, it is possible to easily search the litera-
ture using electronic databases. However, the use of
only one database does not ensure that all the rele-
vant studies will be found and therefore various
databases should be searched. The Physiotherapy
Evidence Database (PEDro: http://www.pedro.org.
au) provides free access to RCTs (about 18,000) and
systematic reviews (almost 4000) on musculoskele-
tal and orthopaedic physiotherapy (sports being
represented by more than 60%). Other available
electronic databases are MEDLINE (through PubMed),
EMBASE, SCOPUS, CINAHL, Web of Science (Thomson Reuters), and the Cochrane Controlled Tri-
als Register. The necessity of using different data-
bases is justified by the fact that, for example, 1800
journals indexed in MEDLINE are not indexed in
EMBASE, and vice versa.
The creation and selection of appropriate keywords
and search term lists is important to find the rele-
vant literature, ensuring that the search will be
highly sensitive without compromising precision.
Therefore, developing the search strategy is not easy and should be done carefully, taking
into consideration the differences between databases
and search interfaces. Although Boolean searching
(e.g. AND, OR, NOT) and proximity operators (e.g.
NEAR, NEXT) are usually available, every database
interface has its own search syntax (e.g. different
truncation and wildcards) and a different thesaurus
for indexing (e.g. MeSH for MEDLINE and EMTREE
for EMBASE). Filters already developed for specific
topics are also available. For example, PEDro has fil-
ters included in search strategies (called SDIs) that
are used regularly and automatically in some of the
above mentioned databases for retrieving guidelines,
RCTs, and systematic reviews.37
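To illustrate how PICOS elements and Boolean operators can be combined into a search string, the short sketch below assembles a hypothetical PubMed-style query in Python; the MeSH terms, keywords, and field tags are purely illustrative and would have to be adapted to each database's own syntax and thesaurus.

```python
# Hypothetical query built from PICOS elements using Boolean operators.
population   = '("anterior cruciate ligament"[MeSH] OR "ACL injur*")'
intervention = '("exercise therapy"[MeSH] OR "neuromuscular training")'
study_design = '(randomized controlled trial[pt] OR "systematic review")'

query = f"{population} AND {intervention} AND {study_design}"
print(query)
```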
After performing the literature search using elec-
tronic databases, however, other search strategies
should be adopted such as browsing the reference
lists of primary and secondary literature and hand
searching journals that are not indexed. Internet sources such
as specialized websites can be also used for retriev-
ing grey literature (e.g. unpublished papers, reports,
conference proceedings, thesis or any other publica-
tions produced by governments, institutions, associa-
tions, universities, etc.). Attempts may also be made to find unpublished studies, if any, in order to reduce the risk of publication bias (the tendency to
publish positive results or results going in the same
direction). Similarly, the selection of only English-
language studies may exacerbate the bias, since
authors may tend to publish more positive findings
in international journals and more negative results
in local journals. On the other hand, unpublished and non-English stud-
ies generally have lower quality and their inclusion
may also introduce a bias. There is no rule for decid-
ing whether to include unpublished studies or to restrict the search to English-language studies. The authors
are usually invited to think about the influence of
these decisions on the findings and/or explore the
effects of their inclusion with a sensitivity analysis.
Step 3. Selecting the studies
The selection of the studies should be conducted
by more than one reviewer as this process is quite
subjective (the agreement between reviewers, assessed using the kappa statistic, should be reported together with
the reasons for disagreements). Before selecting the
studies, the results of the different searches are
merged using reference management software and
duplicates deleted. After an initial screening of titles
and abstracts where the obviously irrelevant studies
are removed, the full papers of potentially relevant
studies should be retrieved and are selected based
on the previously defined inclusion and exclusion
criteria. In case of disagreements, a consensus
should be reached by discussion or with the help of
a third reviewer. Direct contact with the author(s) of
the study may also help in clarifying a decision.
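A minimal sketch of how the agreement between two reviewers might be quantified with Cohen's kappa is shown below; the include/exclude decisions are hypothetical, and scikit-learn's cohen_kappa_score is used for convenience.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical screening decisions by two independent reviewers
reviewer_a = ["include", "exclude", "include", "include", "exclude", "exclude"]
reviewer_b = ["include", "exclude", "exclude", "include", "exclude", "include"]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa = {kappa:.2f}")  # 1 = perfect agreement, 0 = chance-level agreement
```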
An important phase at this step is the assessment of
quality. The use of quality scores for weighting the
studies entered in the meta-analysis is not recommended, nor is it recommended to include in a
meta-analysis only studies above a cut-off quality
score. However, the quality criteria of the studies
must be considered when interpreting the results of
a meta-analysis. This can be done qualitatively or
quantitatively through subgroup and sensitivity
analyses based on important methodological aspects,
which can be assessed using checklists that are pref-
erable over quality scores. If quality scores are to be used for weighting, alternative statistical techniques have been proposed.38 The assess-
ment of quality should be performed by two inde-
pendent observers. The Cochrane handbook, however,
makes a distinction between study quality and risk
of bias (related for example to the method used to
generate the random allocation sequence, allocation concealment, blinding, etc.), focusing more on the latter. As for quality
assessment, the risk of bias should be taken into
consideration when interpreting the findings of the
meta-analysis. The quality of a study is generally
assessed based on the information reported in the
studies thus linking the quality of reporting to the
quality of the research itself, which is not necessar-
ily true. Furthermore, a study conducted at the high-
est possible standard may still have high risk of bias.
In both cases, however, it is important that the
authors of primary studies appropriately report the
results and for this reason guidelines have been cre-
ated for improving the quality of reporting such as
the CONSORT (Consolidated Standards of Reporting
Trials39) and the STROBE (Strengthening the Report-
ing of Observational Studies in Epidemiology40)
statements.
Step 4. Data extraction
Data extraction must be accurate and unbiased and
therefore, to reduce possible errors, it should be per-
formed by at least two researchers. Standardized
data extraction forms should be created, tested, and
if necessary modified before implementation. The
extraction forms should be designed taking into con-
sideration the research question and the planned
analyses. Information extracted can include general
information (author, title, type of publication, coun-
try of origin, etc.), study characteristics (e.g. aims of
the study, design, randomization techniques, etc.),
participant characteristics (e.g. age, gender, etc.),
intervention and setting, outcome data and results
(e.g. statistical techniques, measurement tool, num-
ber of follow up, number of participants enrolled,
allocated, and included in the analysis, results of the
study such as odds ratio, risk ratio, mean difference
and confidence intervals, etc.). Disagreements should
be noted and resolved by discussing and reaching a
consensus. If needed, a third researcher can be involved
to resolve the disagreement.
Step 5. Analysis and presentation of the results
(data synthesis)
Once the data are extracted, they are combined, ana-
lyzed, and presented. This data synthesis can be
done quantitatively using statistical techniques
(meta-analysis), or qualitatively using a narrative
approach when pooling is not believed to be appro-
priate. Irrespective of the approach (quantitative or
qualitative), the synthesis should start with a descrip-
tive summary (in tabular form) of the included stud-
ies. This table usually includes details such as study type, interventions, sample sizes, participant characteristics, and outcomes. The quality assessment or the risk of bias should also be reported. For narrative synthesis, a comprehensive framework (Figure 2) has been proposed.14,41
Standardization of outcomes
To allow comparison between studies the results of
the studies should be expressed in a standardized
format such as effect sizes. The effect size chosen for standardizing the outcomes should be comparable across studies, calculable from the data available in the original articles, and readily interpretable. When the outcomes of the primary
studies are reported as means and standard devia-
tions, the effect size can be the raw (unstandardized)
difference in means (D), the standardized difference
in means (d or g) or the response ratio (R). If the
results are reported in the studies as binary out-
comes the effect sizes can be the risk ratio (RR), the
odds ratio (OR) or the risk difference (RD).15
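The sketch below shows, with invented summary data, how some of these effect sizes can be computed from the information typically reported in primary studies: the standardized difference in means (with Hedges' small-sample correction) from means and standard deviations, and the risk ratio, odds ratio, and risk difference from a 2x2 table.

```python
import numpy as np

# --- Continuous outcome (hypothetical means and SDs) ---
m1, sd1, n1 = 25.0, 8.0, 40   # treatment group
m2, sd2, n2 = 20.0, 9.0, 42   # control group

sd_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
d = (m1 - m2) / sd_pooled                 # standardized difference in means (Cohen's d)
g = d * (1 - 3 / (4 * (n1 + n2) - 9))     # Hedges' small-sample correction

# --- Binary outcome (hypothetical 2x2 table) ---
a, b = 12, 28   # events / non-events, treatment group
c, e = 24, 18   # events / non-events, control group

rr = (a / (a + b)) / (c / (c + e))        # risk ratio
or_ = (a * e) / (b * c)                   # odds ratio
rd = a / (a + b) - c / (c + e)            # risk difference

print(f"d = {d:.2f}, g = {g:.2f}, RR = {rr:.2f}, OR = {or_:.2f}, RD = {rd:.2f}")
```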
Statistical analysis
When a quantitative approach is chosen, meta-
analytical techniques are used. Textbooks and courses
are available for learning statistical meta-analytical
techniques. Once a summary statistic is calculated
for each study, a “pooled” effect estimate of the inter-
ventions is determined as the weighted average of
individual study estimates, so that the larger studies
have more “weight” than the small studies. This is
necessary because small studies are more affected
by the role of chance.11,15 The two main statistical
models used for combining the results are the “fixed-
effect” and the “random-effects” model. Under the
fixed effect model, it is assumed that the variability
between studies is only due to random variation
because there is only one true (common) effect. In
other words, it is assumed that the group of studies
give an estimate of the same treatment effect and
therefore the effects are part of the same distribu-
tion. A common method for weighting each study is
the inverse-variance method, where the weight is
given by the inverse of the variance of each estimate. Therefore, the two essential pieces of data required for this calculation are the effect estimate and its standard error. On the other hand, the “random-
effects” model assumes a different underlying effect
for each study (the true effect varies from study to
study). Therefore the study weight will take into
account two sources of error: the between- and
within-studies variance. As in the fixed-effect model,
the weight is calculated using the inverse-variance
method, but in the random-effects model the study-specific standard errors are adjusted to incorporate both the within- and between-studies variance. For this reason, the confidence intervals obtained with random-effects models are usually wider. In theory, the fixed-effect model can be applied when the studies are homogeneous, while the random-effects model should be used when the results are heterogeneous. However,
the statistical tests for examining heterogeneity lack
power and, as aforementioned, the heterogeneity
should be carefully scrutinized (e.g. interpreting the
confidence intervals) before taking a decision. Some-
times, both fixed- and random-effects models are
used for examining the robustness of the analysis.
Once the analyses are completed, results should be
presented as point estimates with the corresponding
confidence intervals and exact p-values.
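A compact illustration of the two models, using hypothetical study effects and standard errors, is given below: the fixed-effect estimate is the inverse-variance weighted average, while the random-effects estimate adds a between-study variance component to each study's weight. The between-study variance is estimated here with the DerSimonian-Laird moment method, one common choice among several.

```python
import numpy as np

# Hypothetical study effects (e.g., standardized mean differences) and standard errors
effects = np.array([0.30, 0.45, 0.12, 0.60, 0.25])
se = np.array([0.15, 0.20, 0.10, 0.25, 0.18])
w_fixed = 1.0 / se**2

# Fixed-effect model: inverse-variance weighted average
fe = np.sum(w_fixed * effects) / np.sum(w_fixed)
fe_se = np.sqrt(1.0 / np.sum(w_fixed))

# Between-study variance (tau^2) via the DerSimonian-Laird moment estimator
Q = np.sum(w_fixed * (effects - fe) ** 2)
df = len(effects) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / c)

# Random-effects model: weights incorporate both within- and between-study variance
w_random = 1.0 / (se**2 + tau2)
re = np.sum(w_random * effects) / np.sum(w_random)
re_se = np.sqrt(1.0 / np.sum(w_random))

for name, est, s in [("Fixed effect", fe, fe_se), ("Random effects", re, re_se)]:
    print(f"{name}: {est:.2f} (95% CI {est - 1.96*s:.2f} to {est + 1.96*s:.2f})")
```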
Beyond the calculation of the individual study and summary estimates, other analyses are necessary. As mentioned several times, the exploration of possible sources of heterogeneity is important and
can be performed using sensitivity, subgroup, or
regression analyses. Using meta-regression, it is also possible to examine the effects of differences in study characteristics on the treatment effect estimate. In meta-regression the larger studies have more influence than the smaller studies and, as with the other analyses, its limitations should be taken into account before deciding to use it and when interpreting the results.
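As a simplified sketch of the meta-regression idea, the example below fits a weighted least-squares regression of hypothetical study effect estimates on a study-level covariate (mean participant age), weighting each study by the inverse of its variance; a full meta-regression would normally also model the between-study variance.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical study-level data: effect estimates, standard errors, and a
# study-level covariate (mean age of participants in each study).
effects = np.array([0.30, 0.45, 0.12, 0.60, 0.25, 0.40])
se = np.array([0.15, 0.20, 0.10, 0.25, 0.18, 0.22])
mean_age = np.array([22, 35, 28, 41, 30, 26])

# Weighted least squares: larger (more precise) studies receive more weight
X = sm.add_constant(mean_age)
model = sm.WLS(effects, X, weights=1.0 / se**2).fit()
print(model.summary())
```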
Graphic display
The results of each trial are commonly displayed
with their corresponding confidence intervals in the
so-called “forest plot” (Figure 3). In the forest plot
the study is represented by a square and a horizontal
line indicating the confidence interval, where the
Figure 2. Narrative synthesis framework. Modifi ed from14,41
dimension of the square reflects the weight of each
study. A solid vertical line usually corresponds to no
effect of treatment. The summary point estimate is
usually represented with a diamond at the bottom of
the graph with the horizontal extremities indicating
the confidence interval. This graphic solution gives
an immediate overview of the results.
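The following matplotlib sketch draws a rudimentary forest plot from hypothetical data: squares sized by study weight, horizontal lines for the 95% confidence intervals, a vertical line of no effect, and a diamond for the pooled estimate. Study names and values are invented for illustration.

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical study estimates, standard errors, and inverse-variance weights
studies = ["Study A", "Study B", "Study C", "Study D"]
effects = np.array([0.30, 0.45, 0.12, 0.60])
se = np.array([0.15, 0.20, 0.10, 0.25])
weights = 1.0 / se**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

fig, ax = plt.subplots()
y = np.arange(len(studies), 0, -1)
# Horizontal lines for the 95% CIs, squares sized by study weight
ax.errorbar(effects, y, xerr=1.96 * se, fmt="none", ecolor="black")
ax.scatter(effects, y, s=40 * weights / weights.max(), marker="s", color="black")
# Diamond at the bottom for the pooled estimate and its CI
ax.scatter([pooled], [0], marker="D", s=80, color="black")
ax.plot([pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se], [0, 0], color="black")
ax.axvline(0, linestyle="--", color="grey")   # line of no effect
ax.set_yticks(list(y) + [0])
ax.set_yticklabels(studies + ["Pooled"])
ax.set_xlabel("Effect size (d)")
plt.show()
```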
An alternative graphic solution, called a funnel plot,
can be used for investigating the effects of small
studies and for identifying publication bias (Figure
4). The funnel plot is a scatter-plot of the effect esti-
mates of individual studies against measures of
study size or precision (most commonly the standard error, although the use of sample size is still common). If
there is no publication bias the funnel plot will be
symmetrical (Figure 4A). However, the funnel plot
examination is subjective, based upon visual inspec-
tion, and therefore can be unreliable. In addition,
other causes may influence the symmetry of the
funnel plot such as the measures used for estimating
the effects and precision, and differences between
small and large studies.14 Therefore, its use and
interpretation should be done with caution.
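The sketch below generates a simple funnel plot from data simulated under the assumption of no publication bias: effect estimates are plotted against their standard errors, with the axis inverted so that the more precise studies appear at the top, and the points scatter symmetrically around the (invented) true effect.

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical simulated studies: a common true effect of 0.3 with
# standard errors that vary with study size.
true_effect = 0.3
se = rng.uniform(0.05, 0.40, size=30)
effects = rng.normal(true_effect, se)

fig, ax = plt.subplots()
ax.scatter(effects, se, color="black")
ax.axvline(true_effect, linestyle="--", color="grey")
ax.set_xlabel("Effect estimate")
ax.set_ylabel("Standard error")
ax.invert_yaxis()   # convention: more precise (smaller SE) studies at the top
plt.show()
```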
Step 6. Interpretation of the results
The final part of the process pertains to the interpre-
tation of the results. When interpreting or comment-
ing on the findings, the limitations should be discussed
and taken into account, such as the overall risk of bias
and the specific biases of the studies included in the
systematic review, and the strength of the evidence.
Furthermore, the interpretation should not be based solely on P-values, but rather on the uncertainty and the clinical/practical importance of the findings.
Ideally, the interpretation should help the clinician in
understanding how to apply the findings in practice,
provide recommendations or implications for poli-
cies, and offer directions for further research.
Figure 3. Example of a forest plot: the squares represent the effect estimates of the individual studies and the horizontal lines indicate the confidence intervals; the dimension of each square reflects the weight of that study. The diamond at the bottom of the graph represents the summary point estimate, with its horizontal extremities indicating the confidence interval. In this example the standardized outcome measure used is d.
Figure 4. Example of symmetric (A) and asymmetric (B) funnel plots.
CONCLUSIONS
Systematic reviews have to meet high methodological
standards, and their results should be translated into
clinically relevant information. These studies offer a
valuable and useful summary of the current scientific
evidence on a specific topic and can be used for devel-
oping evidence-based guidelines. However, it is impor-
tant that practitioners are able to understand the basic
principles behind the reviews and are hence able to
appreciate their methodological quality before using
them as a source of knowledge. Furthermore, there
are no RCTs, systematic reviews, or meta-analyses that
address all aspects of the wide variety of clinical situa-
tions. A typical example in sports physiotherapy is
that most available studies deal with recreational ath-
letes, while an individual clinician may work with
high-profile or elite athletes in the clinic. Therefore,
when applying the results of a systematic review to
clinical situations and individual patients, there are
various aspects one should consider such as the appli-
cability of the findings to the individual patient, the
feasibility in a particular setting, the benefit-risk ratio,
and the patient’s values and preferences.1 As reported
in the definition, evidence-based medicine is the inte-
gration of both research evidence and clinical exper-
tise. As such, the experience of the sports PT should
help in contextualizing and applying the findings of a
systematic review or meta-analysis, and adjusting the
effects to the individual patient. As an example, an
elite athlete is often more motivated and compliant in
rehabilitation, and may have a better outcome than
average with the given physical therapy or training
interventions (when compared to a recreational ath-
lete). Therefore, it is essential to merge the available
evidence with the clinical evaluation and the patient’s
wishes (and consequent treatment planning) in order
to engage in evidence-based management of the
patient or athlete.
REFERENCES
1. Sackett DL, Rosenberg WM, Gray JA, Haynes RB,
Richardson WS: Evidence based medicine: what it is
and what it isn’t. BMJ. 1996; 312(7023): 71-72.
2. Sackett DL, Strauss SE, Richardson WS, Rosenberg W,
Haynes RB: Evidence-Based Medicine. How to practice
and Teach. EBM (2nd ed). London: Churchill
Livingstone, 2000.
3. Black N: What observational studies can offer
decision makers. Horm Res. 1999; 51 Suppl 1: 44-49.
4. Black N: Why we need observational studies to
evaluate the effectiveness of health care. BMJ. 1996;
312(7040): 1215-1218.
5. US Preventive Services Task Force: Guide to clinical
preventive services, 2nd ed. Baltimore, MD: Williams &
Wilkins, 1996.
6. LeLorier J, Gregoire G, Benhaddad A, Lapierre J,
Derderian F: Discrepancies between meta-analyses
and subsequent large randomized, controlled trials.
N Engl J Med. 1997; 337(8): 536-542.
7. Liberati A: “Meta-analysis: statistical alchemy for the
21st century”: discussion. A plea for a more balanced
view of meta-analysis and systematic overviews of
the effect of health care interventions. J Clin
Epidemiol. 1995; 48(1): 81-86.
8. Bailar JC, 3rd: The promise and problems
of meta-analysis. N Engl J Med. 1997; 337(8):
559-561.
9. Von Korff M: The role of meta-analysis in medical
decision making. Spine J. 2003; 3(5): 329-330.
10. Tonelli M, Hackam D, Garg AX: Primer on
systematic review and meta-analysis. Methods Mol
Biol. 2009; 473: 217-233.
11. Egger M, Davey Smith G, Altman D: Systematic
Reviews in Health Care: Meta-Analysis in Context,
2nd Edition: BMJ Books, 2001.
12. Higgins JPT, Green S: Cochrane Handbook for
Systematic Reviews of Interventions. Version 5.1.0
(updated March 2011). The Cochrane Collaboration
(available from: http://www.cochrane-handbook.
org/), 2008.
13. Atkins D, Fink K, Slutsky J: Better information for
better health care: the Evidence-based Practice
Center program and the Agency for Healthcare
Research and Quality. Ann Intern Med. 2005; 142(12
Pt 2): 1035-1041.
14. Centre for Reviews and Dissemination: Systematic
reviews: CRD’s guidance for undertaking reviews in
health care. York: University of York, 2009.
15. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR:
Introduction to Meta-Analysis: John Wiley & Sons Ltd,
2009.
16. Clarke M: The QUOROM statement. Lancet. 2000;
355(9205): 756-757.
17. Moher D, Liberati A, Tetzlaff J, Altman DG:
Preferred reporting items for systematic reviews and
meta-analyses: the PRISMA statement. BMJ. 2009;
339: b2535.
18. Liberati A, Altman DG, Tetzlaff J, et al.: The PRISMA
statement for reporting systematic reviews and
meta-analyses of studies that evaluate healthcare
interventions: explanation and elaboration. BMJ.
2009; 339: b2700.
19. Sauerland S, Seiler CM: Role of systematic reviews
and meta-analysis in evidence-based medicine.
World J Surg. 2005; 29(5): 582-587.
20. Glass GV: Primary, secondary and meta-analysis of
research. Educ Res. 1976; 5: 3-8.
21. Berman NG, Parker RA: Meta-analysis: neither quick
nor easy. BMC Med Res Methodol. 2002; 2: 10.
22. Ioannidis JP, Patsopoulos NA, Rothstein HR: Reasons
or excuses for avoiding meta-analysis in forest plots.
BMJ. 2008; 336(7658): 1413-1415.
23. Higgins JP, Thompson SG: Quantifying
heterogeneity in a meta-analysis. Stat Med. 2002;
21(11): 1539-1558.
24. Huedo-Medina TB, Sanchez-Meca J, Marin-Martinez
F, Botella J: Assessing heterogeneity in meta-
analysis: Q statistic or I2 index? Psychol Methods.
2006; 11(2): 193-206.
25. Stroup DF, Berlin JA, Morton SC, et al.: Meta-analysis
of observational studies in epidemiology: a proposal
for reporting. Meta-analysis Of Observational Studies
in Epidemiology (MOOSE) group. JAMA. 2000;
283(15): 2008-2012.
26. Kelsey JL, Whittemore AS, Evans AS, Thompson
WD: Methods in observational epidemiology. New York:
Oxford University Press, 1996.
27. Duchateau L, Pignon JP, Bijnens L, Bertin S, Bourhis
J, Sylvester R: Individual patient-versus literature-
based meta-analysis of survival data: time to event
and event rate at a particular time can make a
difference, an example based on head and neck
cancer. Control Clin Trials. 2001; 22(5): 538-547.
28. Riley RD, Dodd SR, Craig JV, Thompson JR,
Williamson PR: Meta-analysis of diagnostic test
studies using individual patient data and aggregate
data. Stat Med. 2008; 27(29): 6111-6136.
29. Simmonds MC, Higgins JP, Stewart LA, Tierney JF,
Clarke MJ, Thompson SG: Meta-analysis of
individual patient data from randomized trials: a
review of methods used in practice. Clin Trials. 2005;
2(3): 209-217.
30. Oxman AD, Clarke MJ, Stewart LA: From science to
practice. Meta-analyses using individual patient data
are needed. JAMA. 1995; 274(10): 845-846.
31. Stewart LA, Tierney JF: To IPD or not to IPD?
Advantages and disadvantages of systematic reviews
using individual patient data. Eval Health Prof. 2002;
25(1): 76-97.
32. Riley RD, Lambert PC, Staessen JA, et al.: Meta-
analysis of continuous outcomes combining
individual patient data and aggregate data. Stat Med.
2008; 27(11): 1870-1893.
33. Lau J, Schmid CH, Chalmers TC: Cumulative meta-
analysis of clinical trials builds evidence for
exemplary medical care. J Clin Epidemiol. 1995;
48(1): 45-57; discussion 59-60.
34. Sutton AJ, Abrams KR: Bayesian methods in meta-
analysis and evidence synthesis. Stat Methods Med
Res. 2001; 10(4): 277-303.
35. Armstrong EC: The well-built clinical question: the
key to finding the best evidence efficiently. WMJ.
1999; 98(2): 25-28.
36. Richardson WS, Wilson MC, Nishikawa J, Hayward
RS: The well-built clinical question: a key to
evidence-based decisions. ACP J Club. 1995; 123(3):
A12-13.
37. Physiotherapy Evidence Database (PEDro):
Physiotherapy Evidence Database (PEDro). J Med
Libr Assoc. 2006; 94(4): 477-478.
38. Greenland S, O’Rourke K: On the bias produced by
quality scores in meta-analysis, and a hierarchical
view of proposed solutions. Biostatistics. 2001; 2(4):
463-471.
39. Moher D, Schulz KF, Altman DG: The CONSORT
statement: revised recommendations for improving
the quality of reports of parallel-group randomised
trials. Lancet. 2001; 357(9263): 1191-1194.
40. von Elm E, Altman DG, Egger M, Pocock SJ,
Gotzsche PC, Vandenbroucke JP: The Strengthening
the Reporting of Observational Studies in
Epidemiology (STROBE) statement: guidelines for
reporting observational studies. Lancet. 2007;
370(9596): 1453-1457.
41. Popay J, Roberts H, Sowden A, et al.: Developing
guidance on the conduct of narrative synthesis in
systematic reviews. J Epidemiol Comm Health. 2005;
59(Suppl 1): A7.
... This practice is extremely worrisome and problematic. Without large-scale systematic reviews or meta-analysis studies [56][57][58], it is difficult to determine the degree of discrepancies between the true effects of technology-based interventions and what has been measured and reported. What is clear, however, is that the lack of definitions, compounded by the heterogeneity of the measures adopted to gauge the barely or poorly defined concept, could substantially undermine the reproducibility and replicability of research on technology-based interventions [56][57][58], not to mention the quality of review studies on technology-based interventions for cancer caregivers. ...
... Without large-scale systematic reviews or meta-analysis studies [56][57][58], it is difficult to determine the degree of discrepancies between the true effects of technology-based interventions and what has been measured and reported. What is clear, however, is that the lack of definitions, compounded by the heterogeneity of the measures adopted to gauge the barely or poorly defined concept, could substantially undermine the reproducibility and replicability of research on technology-based interventions [56][57][58], not to mention the quality of review studies on technology-based interventions for cancer caregivers. ...
Article
Full-text available
Background Cancer is a taxing chronic disease that demands substantial care, most of which is shouldered by informal caregivers. As a result, cancer caregivers often have to manage considerable challenges that could result in severe physical and psychological health consequences. Technology-based interventions have the potential to address many, if not all, of the obstacles caregivers encounter while caring for patients with cancer. However, although the application of technology-based interventions is on the rise, the term is seldom defined in research or practice. Considering that the lack of conceptual clarity of the term could compromise the effectiveness of technology-based interventions for cancer caregivers, timely research is needed to bridge this gap. Objective This study aims to clarify the meaning of technology-based interventions in the context of cancer caregiving and provide a definition that can be used by cancer caregivers, patients, clinicians, and researchers to facilitate evidence-based research and practice. Methods The 8-step concept analysis method by Walker and Avant was used to analyze the concept of technology-based interventions in the context of cancer caregiving. PubMed, PsycINFO, CINAHL, and Scopus were searched for studies that examined technology-based interventions for cancer caregivers. Results The defining attributes of technology-based interventions were recognized as being accessible, affordable, convenient, and user-friendly. On the basis of insights gained on the defining attributes, antecedents to, and consequences of technology-based interventions through the concept analysis process, technology-based interventions were defined as the use of technology to design, develop, and deliver health promotion contents and strategies aimed at inducing or improving positive physical or psychological health outcomes in cancer caregivers. Conclusions This study clarified the meaning of technology-based interventions in the context of cancer caregiving and provided a clear definition that can be used by caregivers, patients, clinicians, and researchers to facilitate evidence-based oncology practice. A clear conceptualization of technology-based interventions lays foundations for better intervention design and research outcomes, which in turn have the potential to help health care professionals address the needs and preferences of cancer caregivers more cost-effectively.
... Using these methods, researchers seek to collect, combine, analyze, and present results from existing studies conducted on a specific topic using a predefined study protocol [23]. Together, a systematic review and meta-analysis can provide rigorous evidence and a comprehensive, unbiased overview of the body of knowledge on a specific topic [24]. Therefore, we chose to implement these methods because we sought to answer defined research questions through structured reviews of existing evidence [25]. ...
Article
Full-text available
Inconsistent results published in previous studies make it difficult to determine the precise effect of consumer knowledge on their acceptance of functional foods. We conducted a systematic review and meta-analysis by identifying and collecting relevant literature from three databases. Of the 1050 studies reviewed, we included 40 in the systematic review and 18 in the meta-analysis. Based on the focus of each included study, we operationally defined knowledge as knowledge of the functional food concept, nutritional-related knowledge, and knowledge of specific functional products. Results from the systematic review indicate that most participants from the included studies had low knowledge, especially nutrition-related knowledge associated with consuming functional foods, and were generally not familiar with the concept of functional foods. Results from the meta-analysis generated a summary effect size (r = 0.14, 95% CI [0.05; 0.23]), measured by the correlation coefficient r, which indicates a small positive relationship exists between consumers’ level of knowledge and their acceptance of functional foods. Results from our study demonstrate the importance of increasing consumers’ functional foods knowledge to improve their acceptance of such products. Agricultural and health communicators, educators, and functional foods industry professionals should prioritize increasing consumers’ knowledge through their communications, marketing, and programmatic efforts.
... In general terms, it is unclear what this brief review adds to the body of knowledge about ACL and complex dynamic systems. It summarizes some relevant articles in the field, but scientific accuracy and methodological issues should be underlined, according to review guidelines (Impellizzeri et al., 2012). In the abstract, the term "sport" is mentioned, none of the references is sport-related. ...
... Moreover, statistical heterogeneity is apparent in our analysis. This may be attributed to methodological diversity (different study designs) and/or differences in treatment regimens (doses/durations) or the soy products used (soy protein, isoflavone, soybean, and soy milk) (149). Control or nonintervention groups were different in the included trials in this meta-analysis; this might tend to bias the findings toward the null. ...
Article
Previous studies have suggested that soy products may be beneficial for cardiometabolic health, but current evidence regarding their effects in type 2 diabetes mellitus (T2DM) remain unclear. The aim of this systematic review and meta-analysis was to determine the impact of soy product consumption on cardiovascular risk factors in patients with T2DM. PubMed, Scopus, Embase, and the Cochrane library were systematically searched from inception to March 2021 using relevant keywords. All randomized controlled trials (RCTs) investigating the effects of soy product consumption on cardiovascular risk factors in patients with T2DM were included. Meta-analysis was performed using random-effects models and subgroup analysis was performed to explore variations by dose and baseline risk profile. A total of 22 trials with 867 participants were included in this meta-analysis. Soy product consumption led to a significant reduction in serum concentrations of triglycerides (TG) (WMD: –24.73 mg/dL; 95% CI: –37.49, –11.97), total cholesterol (TC) (WMD: –9.84 mg/dL; 95% CI: –15.07, –4.61), low density lipoprotein (LDL) cholesterol (WMD: –6.94 mg/dL; 95% CI: –11.71, –2.17) and C-reactive protein (CRP) (WMD: –1.27 mg/L; 95% CI: –2.39, –0.16). In contrast, soy products had no effect on high density lipoprotein (HDL) cholesterol, fasting blood sugar (FBS), fasting insulin, hemoglobin A1c (HbA1c), homeostatic model assessment of insulin resistance (HOMA-IR), systolic and diastolic blood pressure (SBP/DBP) or body mass index (BMI) (all P ≥ 0.05). In subgroup analyses, there was a significant reduction in FBS after soy consumption in patients with elevated baseline FBS (>126 mg/dL) and in those who received higher doses of soy intake (>30 g/d). Moreover, soy products decreased SBP in patients with baseline hypertension (>135 mmHg). Our meta-analysis suggests that soy product consumption may improve cardiovascular parameters in patients with T2DM, particularly in individuals with poor baseline risk profiles. However, larger studies with longer durations and improved methodological quality are needed before firm conclusions can be reached.
... Since the National Institutes of Health and the U.S. Department of Health and Human Services deemed systematic review as scientific research (Impellizzeri & Bizzini, 2012), various forms of systematic review as research methods have advanced rapidly over the past decade in other disciplines (i.e. medicine, software engineering, business, economics, environmental studies). ...
Chapter
This chapter presents the results of a systematic review to analyze the current research since 2019 for voice dispossession as attributional accommodation among women in higher education leadership. The authors sought to quantify and categorize these attributes to better identify the verbal and nonverbal ac-commodations made by women in higher education leadership to extend prior critical review of gender parity and equity for these leaders. Study findings may inform higher educational leadership to better understand voice dispossession among female leaders and the resulting attributional accommodations made to improve gender equity and parity for leadership roles in higher education.
... The emphasis of the WHO is zero tolerance for FGM. Systematic reviews are the best forms of research evidence [10]. ...
Article
Full-text available
Female genital mutilation (FGM) is a general health concern. The World Health Organization has recognized it as a condition that endangers women's health. This review study aimed to identify the types of health outcomes of FGM. Therefore, a systematic review was conducted to create a critical view of the current evidence on the effect of Female genital on girls and women's health. In this study, we focused on the health risks of female Female genital. Academic databases such as PubMed, Science Direct, Scopus, Google Scholar, Cochrane Database of Systematic Reviews, SID, IranMedex, Irandoc, and Magiran were searched with regard to the health consequences of FGM from January 1990 until 2018. Eleven review studies met the criteria and contained 288 relevant studies on the risks of FGM. It was suggested that FGM had various physical, obstetric, sexual, and psychological consequences. Women with FGM experienced mental disturbances (e.g., psychiatric diagnoses, anxiety, somatization, phobia, and low self-esteem) than other women. Our study can provide evidence on improving, changing behaviors, and making decisions on the quality of services offered to women suffering from FGM.
... Three reviews focused on parents of children with developmental disabilities (Osborn et al. 2020;Rayan and Ahmad 2018;Sohmaran and Shorey 2019), while two other reviews conducted secondary analyses on the well-being of parents with developmentally disabled children (Burgdorf et al. 2019;Byrne et al. 2020). Three reviews analyzed the outcomes of parental mental well-being and mindfulness but failed to conduct metaanalyses (Byrne et al. 2020;Osborn et al. 2020;Rayan and Ahmad, 2018), hence compromising the strength of their findings (Impellizzeri and Bizzini, 2012). Burgdorf et al. (2019) conducted meta-analyses for the parental stress outcome but included data from studies without control groups and hence did not isolate the impact of the independent variable such as mindfulness-based interventions on parental stress (Hunter et al. 2016). ...
Article
Parents of children with developmental disabilities are susceptible to mental health problems. Mindfulness-based and acceptance and commitment therapy (ACT)-based interventions can improve their mental well-being. This review examined the effectiveness of mindfulness-based and ACT-based interventions in improving mental well-being and mindfulness among parents of children with developmental disabilities. Six electronic databases were searched, resulting in the inclusion of ten studies published between 2014 and 2020. Meta-analysis was conducted using a random-effects model. The results suggest that mindfulness-based and ACT-based interventions were effective in decreasing parental stress, anxiety, and depression; however, the evidence for their effectiveness in increasing parental mindfulness was inconclusive. Based on these findings, we discussed considerations for implementing interventions and identified areas that warrant further research.
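Because outcomes such as parental stress or anxiety are measured on different scales across trials, reviews of this kind typically pool standardized mean differences rather than raw scores before fitting a random-effects model. The Python sketch below shows one common way to derive a bias-corrected standardized mean difference (Hedges' g) and its variance from two-group summary data; the numbers are invented and do not come from the studies in this review.

# Sketch: standardized mean difference (Hedges' g) and its approximate variance
# from two-group summary statistics, as commonly computed before random-effects
# pooling. All numbers are hypothetical.
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Bias-corrected standardized mean difference and its approximate variance."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled                  # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)            # small-sample correction factor
    g = j * d
    var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return g, var_g

# Hypothetical parental-stress scores: intervention group vs. waitlist control
g, var_g = hedges_g(m1=78.0, sd1=12.0, n1=25, m2=86.0, sd2=13.0, n2=24)
print(f"Hedges' g = {g:.2f}, SE = {math.sqrt(var_g):.2f}")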
Article
Lay abstract: Interventions addressing core symptoms in young children on the autism spectrum have a strong and growing evidence base. Adapting and delivering evidence-based interventions to infants and toddlers with a high likelihood of autism is a logical next step. This systematic review and meta-analysis summarizes the association between infant and toddler interventions and developmental and family outcomes. Results indicate that these early interventions are effective for improving parents' implementation of core strategies, yet the effects do not readily translate to child outcomes. However, key studies show conditional results indicating that parent implementation is associated with child outcomes. Implications for research and practice toward building adaptive interventions that respond to parent implementation and changing child characteristics are discussed.
Article
This study aimed to present reference standards and normal distributions for Taekwondo athletes' physical characteristics and physical fitness profiles using a systematic review. A systematic search was conducted using four Korean databases (Research Information Sharing Service, National Digital Science Library, DBpia, and Korean Studies Information Service System). We reviewed 838 papers published from 2010 to 2020 on Taekwondo athletes' physical characteristics and physical fitness factors (e.g., body composition, muscle strength, muscular endurance, flexibility, cardiorespiratory fitness, power, agility, balance, speed, and reaction time). Of these, 24 papers were selected and analyzed. Physical characteristics and fitness factors were retained for data extraction only if they had a total sample size of more than 30 individuals and were reported in two or more studies. The sample size, mean, and standard deviation of each physical characteristic and fitness factor were extracted from each selected study. The estimation error of all variables, except for the eyes-closed single-leg stance (15.71%), was less than 8%, supporting the validity of the estimated values. These results could serve as an objective basis for evaluating the physical characteristics and physical fitness profiles of Taekwondo athletes in most countries worldwide and for setting training goals.
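The abstract does not spell out how the "estimation error" was computed. One plausible reading is a relative margin of error around a sample-size-weighted pooled mean; the Python sketch below follows that assumption with invented numbers and should not be read as the authors' actual procedure.

# Sketch of a relative margin of error for a sample-size-weighted pooled mean,
# one possible interpretation of the "estimation error" mentioned above.
# All study values are hypothetical.
import math

# Each tuple: (n, mean, sd) for one hypothetical study of the same fitness test
studies = [(34, 172.5, 5.1), (41, 174.0, 4.8), (30, 171.2, 5.6)]

n_total = sum(n for n, _, _ in studies)
pooled_mean = sum(n * m for n, m, _ in studies) / n_total
# Pool SDs via the within-group sum of squares (ignoring between-study variation)
pooled_sd = math.sqrt(sum((n - 1) * sd**2 for n, _, sd in studies) / (n_total - len(studies)))

se = pooled_sd / math.sqrt(n_total)
estimation_error_pct = 100 * 1.96 * se / pooled_mean   # relative margin of error

print(f"Pooled mean {pooled_mean:.1f}, estimation error {estimation_error_pct:.2f}%")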
Article
Much biomedical research is observational. The reporting of such research is often inadequate, which hampers the assessment of its strengths and weaknesses and of a study's generalisability. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) initiative developed recommendations on what should be included in an accurate and complete report of an observational study. We defined the scope of the recommendations to cover three main study designs: cohort, case-control, and cross-sectional studies. We convened a 2-day workshop in September 2004, with methodologists, researchers, and journal editors to draft a checklist of items. This list was subsequently revised during several meetings of the coordinating group and in e-mail discussions with the larger group of STROBE contributors, taking into account empirical evidence and methodological considerations. The workshop and the subsequent iterative process of consultation and revision resulted in a checklist of 22 items (the STROBE statement) that relate to the title, abstract, introduction, methods, results, and discussion sections of articles. Eighteen items are common to all three study designs and four are specific for cohort, case-control, or cross-sectional studies. A detailed explanation and elaboration document is published separately and is freely available on the websites of PLoS Medicine, Annals of Internal Medicine, and Epidemiology. We hope that the STROBE statement will contribute to improving the quality of reporting of observational studies.
Article
Systematic reviews and meta-analyses are essential to summarize evidence relating to efficacy and safety of health care interventions accurately and reliably. The clarity and transparency of these reports, however, is not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users. Since the development of the QUOROM (QUality Of Reporting Of Meta-analyses) Statement, a reporting guideline published in 1999, there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realizing these issues, an international group that included experienced authors and methodologists developed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions. The PRISMA Statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this Explanation and Elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA Statement, this document, and the associated Web site (www.prisma-statement.org) should be helpful resources to improve reporting of systematic reviews and meta-analyses.
Article
The second edition of this best-selling book has been thoroughly revised and expanded to reflect the significant changes and advances made in systematic reviewing. New features include discussion of the rationale for systematic reviews, meta-analyses of prognostic and diagnostic studies, software, and the use of systematic reviews in practice.