RESEARCH ARTICLE | Open Access
Frequency of data extraction errors and
methods to increase data extraction
quality: a methodological review
Tim Mathes*, Pauline Klaßen and Dawid Pieper
Abstract
Background: Our objective was to assess the frequency of data extraction errors and their potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results.
Methods: We performed a systematic review of methodological literature in PubMed, Cochrane methodological
registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were
extracted in standardized tables by one reviewer and verified by a second.
Results: The analysis included six studies; four studies on extraction error frequency, one study comparing different
reviewer extraction methods and two studies comparing different reviewer characteristics. We did not find a study
on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had a moderate effect on extraction error rates and effect estimates.
Conclusion: The evidence base for established standards of data extraction seems weak despite the high
prevalence of extraction errors. More comparative studies are needed to get deeper insights into the influence of
different extraction methods.
Keywords: Systematic reviews, Data extraction, Accuracy, Errors, Reviewers
Background
Systematic reviews (SRs) have become the cornerstone of evidence-based healthcare. An SR should use explicit methods to minimize bias with the aim of providing more reliable findings [1]. The reduction of bias concerns all
process steps of the review. For example, bias can occur
in the identification of studies, in the selection of studies
(e.g. unclear inclusion criteria), in the data collection
process and in the validity assessment of included stud-
ies [2]. Many efforts have been made to further develop
methods for SRs.
However, the evidence base for most recommendations in established guidelines that aim to minimize bias in the preparation of a systematic review is sparse [1, 3, 4]. Previous studies found little research on the influence of different approaches on the risk of bias in systematic reviews [5]. The use of labor-intensive methods without knowing their actual benefit might waste scientific resources. For example, a recent study found that alternatives to duplicate study selection by two independent reviewers (e.g. liberal acceleration) are more cost-effective than independent study selection [6]. As the timely preparation and publication of SRs is an important goal to support decision making, considerations on the balance between resources/time and validity play an important role [7].
Data extraction is a crucial step in conducting SRs.
The term data collection is often used synonymously.
We defined data extraction as any type of extraction of data from primary studies into any form of standardized tables. It is one of the most time-consuming and most critical tasks for the validity of the results of an SR [1].
* Correspondence: Tim.Mathes@uni-wh.de
Institute for Research in Operative Medicine, Chair of Surgical Research, Faculty of Health, School of Medicine, Witten/Herdecke University, Ostmerheimer Str. 200, 51109 Cologne, Germany
© The Author(s). 2017 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and
reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to
the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver
(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Data extraction builds the basis for the results and conclusions of an SR. However, a previous study has shown a very high prevalence of data extraction errors in SRs [8].
Our objective was to assess the frequency of data extraction errors and their potential impact on results in systematic reviews of interventions. Furthermore, we
evaluated the effect of different extraction methods (e.g.
independent data extraction by two reviewers vs. verifi-
cation by a second reviewer), reviewer characteristics
(e.g. experience) and reviewer training on error rates
and results.
Methods
Information sources and search
We searched all PubMed databases and the Cochrane
Methodology Register (12/2016). The full search strat-
egies are provided in Additional file 1. The search strat-
egies were tested by checking whether articles already
known to us (e.g. Buscemi et al. [9]) were identified. We
screened all abstracts of the Cochrane Colloquium (since
2009) and searched the Cochrane database of oral, pos-
ter and workshop presentations (since 1994) in Decem-
ber 2016. In addition, we cross-checked the reference lists of all included articles, excluded articles, established guidelines for systematic review preparation and systematic reviews on similar topics [1, 3–5, 7, 10]. Moreover, we screened the publications linked in the related-articles functions (cited articles and similar articles) in PubMed for all included articles.
Eligibility criteria and study selection
Two types of articles were eligible. First, we included ar-
ticles on the frequency of data extraction errors. Second,
we included studies that compared aspects that can in-
fluence the quality of data extraction. We considered the
following types of comparisons:
- Extraction method: comparison of different data extraction methods (index methods) regarding the involvement of a second reviewer for quality assurance (e.g. independent data extraction versus extraction by one reviewer and verification by a second),
- Reviewer characteristics: experience, degree, education,
- Reviewer training: training on data extraction before performing the review, including extraction of a sample and calibration with the second reviewer, and oral and written instructions.
Studies on technical tools (software, dual monitors,
data extraction templates etc.) to reduce data extraction
errors were excluded.
All studies (studies on error frequency and compara-
tive studies) had to report a quantitative measure for
data extraction errors (e.g. error rate), determined by
use of a reference standard (a data extraction sample
that was considered correct). We included only studies
on the assessment of data extraction for intervention re-
views written in English or German.
All titles/abstracts identified in the electronic databases were screened by two reviewers independently. The abstracts of the Cochrane Colloquium and the database of oral, poster and workshop presentations were screened by one reviewer. All potentially relevant full texts were screened by two reviewers (not independently). In case of discrepancies, we discussed eligibility until consensus was reached.
Data collection and analysis
We used a semi-structured template for data extraction
(see Tables 1, 2 and 3). For each study, we extracted the sample size (the number of included studies and, if applicable, the number of systematic reviews feeding the sample), information on the index and reference method, and information on outcome measures. We extracted all data on data extraction errors and their influence on pooled effect estimates. If possible, we distinguished data extraction errors into accuracy (e.g. correct values), completeness (e.g. all relevant outcomes are extracted)/selection (e.g. choice of the correct outcome measure or outcome measurement time point) and correct interpretation (e.g. confusing mean and median). In addition, we extracted quantitative measures for effort (e.g. time, resource use) and rates of agreement between different approaches.
If provided in the article, we extracted confidence
limits (preferred) or p-values for all outcomes/compari-
sons. Rates with a denominator of at least 10 were con-
verted into percentages to increase the comparability of
the results.
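As a minimal sketch of this conversion rule (our own illustration, not code used in the review; the function name and rounding are arbitrary choices), a rate is only turned into a percentage when its denominator reaches the stated threshold:

```python
# Illustrative only: convert a rate to a percentage when the denominator is >= 10,
# otherwise keep the raw rate, mirroring the rule described above.
def rate_to_percent(numerator: int, denominator: int):
    if denominator < 10:
        return None  # too few observations; report the raw rate (e.g. "2/6") instead
    return round(100.0 * numerator / denominator, 1)

print(rate_to_percent(3, 12))  # 25.0
print(rate_to_percent(2, 6))   # None
```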
All data were extracted in standardized tables. Before starting the data extraction, the involved reviewers discussed each article to agree on the relevant items for data extraction in order to avoid misinterpretation and omission. Subsequently, one reviewer (an experienced statistician) extracted all data and a second reviewer (an experienced epidemiologist) verified the accuracy of the data extraction.
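For illustration, a standardized extraction table and the extract-then-verify step could look like the following sketch; the field names and the verification logic are our own assumptions, not the template actually used in this review:

```python
# Hypothetical sketch of a standardized extraction record and a verification pass;
# field names are illustrative assumptions, not the authors' actual template.
from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    study_id: str
    sample_size: int                  # number of included studies
    index_method: str                 # e.g. "single extraction + verification"
    reference_method: str             # e.g. "independent double extraction"
    outcomes: dict = field(default_factory=dict)  # e.g. {"error rate": "42%"}

def verify(extracted: ExtractionRecord, checked: ExtractionRecord) -> list:
    """Return the fields on which the verifying reviewer disagrees with the extractor."""
    fields = ("study_id", "sample_size", "index_method", "reference_method", "outcomes")
    return [f for f in fields if getattr(extracted, f) != getattr(checked, f)]
```

Any disagreement returned by such a check would then be discussed until consensus, analogous to the study selection step.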
Synthesis of data
Each included article was summarized in a structured
narrative way. The narrative synthesis includes informa-
tion on the sample (included reviews, included studies),
the index method, the reference standard considered as
correct data extraction and results (measures for
the quantification of errors and measures for the quanti-
fication of influence on the effect estimates).
Results
Study selection
The search in the electronic databases resulted in 818 hits. The search of the abstracts of the Cochrane Colloquium and of the Cochrane database of oral, poster and workshop presentations revealed three additional potentially relevant publications. Additionally, the reference lists of included articles and systematic reviews on similar topics did not reveal further relevant articles. For two studies, no full texts were available. We screened the full texts of nine articles. Of these, three studies were excluded [11–13].
The study selection process is illustrated in the flow-
diagram (Fig. 1).
The analysis included six studies [8, 9, 14–17]: four studies on extraction error frequency [8, 15–17], one study comparing different reviewer extraction methods [9] and two studies comparing different reviewer characteristics [14, 15] (Tendal et al. [15] is included in both analyses). No studies on reviewer training were identified.
Studies on frequency of extraction errors
Carroll et al. compared the results of three dichotomous outcomes [16]. The data basis was three systematic reviews on the same topic that included the same studies (N = 8). Their own systematic review was used as the reference standard. Deviations in the other systematic reviews were considered errors. The rate of data extraction errors ranged between 8% and 42% (depending on outcome and review). Differences in pooled effect estimates were small (difference of relative risk, range: 0.01–0.05).
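For intuition on why a single extraction slip can shift a relative risk by this order of magnitude, the following sketch (with invented event counts, not data from Carroll et al.) recomputes a trial-level relative risk after one miscounted event:

```python
# Illustrative only: effect of one miscounted event on a trial-level relative risk.
def relative_risk(events_t, n_t, events_c, n_c):
    return (events_t / n_t) / (events_c / n_c)

correct = relative_risk(30, 100, 20, 100)    # RR = 1.50 with correctly extracted counts
erroneous = relative_risk(31, 100, 20, 100)  # one extra event extracted by mistake
print(round(erroneous - correct, 2))         # 0.05, the same order of magnitude as above
```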
Gøtzsche et al. [8] replicated the results of 27 meta-analyses. For the quantification of errors, two trials of each meta-analysis were randomly selected.
Table 1 Results of studies quantifying the frequency of data extraction errors

| Study | Studies included (reviews) | Measure | Result |
|---|---|---|---|
| Carroll 2013 [16] | 8 (3) | Selection (outcome 1) | 17% (review 1 vs. reference standard); 8% (review 2 vs. reference standard) |
| | | Selection (outcome 2) | 42% (review 1 vs. reference standard); 25% (review 2 vs. reference standard) |
| | | Selection (outcome 3) | 21% (review 1 vs. reference standard); 25% (review 2 vs. reference standard) |
| | | Inaccuracy (outcome 1) | 8% (review 1 vs. reference standard); 8% (review 2 vs. reference standard) |
| | | Inaccuracy (outcome 2) | 17% (review 1 vs. reference standard); 13% (review 2 vs. reference standard) |
| | | Inaccuracy (outcome 3) | 13% (review 1 vs. reference standard); 8% (review 2 vs. reference standard) |
| | | Difference in meta-analysis (outcome 1) | RR 1.70 (reference standard) / RR 1.71 (review 1) |
| | | Difference in meta-analysis (outcome 2) | RR 0.85 (reference standard) / RR 0.87 (review 1) / RR 0.80 (review 2) |
| | | Difference in meta-analysis (outcome 3) | RR 0.38 (reference standard) / RR 0.40 (review 1) |
| Gøtzsche 2007 [8] | 54 (randomly selected; 27 meta-analyses) | Difference in SMD >0.1 in at least 1 of the 2 included trials | 63% |
| | 20 (10 meta-analyses^a) | Difference in SMD >0.1 in pooled effect estimate | 70% |
| Jones 2005 [17] | NR (34) | Errors (all types) | 50% |
| | | Correct interpretation | 23.3% |
| | | Impact on results | All data-handling errors led to changes in the summary results, but none of them affected review conclusions^b |
| Tendal 2009 [15] | 45 (10 meta-analyses) | Difference in SMD because of reviewer disagreements < 0.1 | 53% |
| | | Difference in SMD because of reviewer disagreements < 0.1 (pooled estimates) | 31% |

^a meta-analyses including at least one erroneous trial; ^b author statement (no quantitative measures provided). ns = no significant differences; NR = not reported; RD = relative difference; RS = reference standard; SMD = standardized mean difference
The reference standard was data extraction by two independent reviewers. A difference in the standardized mean difference of greater than 0.1 was classified as an error. In 63% of the meta-analyses, at least one of the two trials was erroneous. Of these meta-analyses, 70% also showed an error in the pooled effect estimate.
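To illustrate this error criterion (a hypothetical re-extraction, not data from Gøtzsche et al.; Cohen's d is used here as a simple stand-in for the SMD), the sketch below flags a trial whose re-extracted SMD deviates from the original by more than 0.1:

```python
# Illustrative only: classify a trial-level extraction as erroneous if the
# re-extracted SMD differs from the original by more than 0.1.
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

original = cohens_d(12.0, 4.0, 50, 10.0, 4.0, 50)    # SMD = 0.50
reextracted = cohens_d(12.0, 4.0, 50, 9.4, 4.0, 50)  # control mean mis-extracted -> SMD = 0.65
print(abs(original - reextracted) > 0.1)             # True: classified as an error
```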
In the study of Jones et al. [17], 42 systematic reviews of a Cochrane group were included. A statistician checked the data of these systematic reviews for errors. Half of the systematic reviews contained errors, and approximately 23% of these were misinterpretations (e.g. confusing standard deviations and standard errors).
Table 2 Characteristics of studies comparing different reviewer extraction methods and reviewer characteristics

| Study | Comparator/s | Reference^a | Studies included |
|---|---|---|---|
| Buscemi 2006 [9] | One reviewer + verification by a second vs. two reviewers independently | Extraction by one reviewer and verification by an experienced statistician | N = 30 (6 meta-analyses) |
| Horton 2010 [14] | Minimal data extraction experience (n = 28) vs. moderate data extraction experience (n = 19) vs. substantial data extraction experience (n = 23) | NA | |
| | Minimal systematic review experience (n = 28) vs. moderate systematic review experience (n = 31) vs. substantial systematic review experience (n = 18) | NA | |
| | Minimal overall experience^b (n = 26) vs. moderate overall experience^b (n = 24) vs. substantial overall experience^b (n = 37) | NA | |
| Tendal 2009 [15] | Experienced methodologists vs. PhD students | No reference standard (comparison of raw agreement between reviewers) | 45 (10 meta-analyses) |

^a denominator or subtrahend; ^b based on time involved in systematic reviews and data extraction and the number of systematic reviews; NA = not applicable
Table 3 Results of studies comparing different reviewer extraction methods and reviewer characteristics

| Study | Measure | Result (effect measure, CI or p-value) |
|---|---|---|
| Reviewer constellation | | |
| Buscemi 2006 [9] | Agreement rate | 28.0% (95% CI: 25.4, 30.7; range 11.1–47.2%) |
| | Errors (all types) | RD 21.7% (p = 0.019) |
| | Omission | RD 6.6% (p = 0.308) |
| | Time (min, mean) | RD 49 (p = 0.03) |
| | Difference of pooled effect estimates* | 0/0 |
| Reviewer experience | | |
| Horton 2010 [14] | Errors (all types) | 24.3%/26.4%/25.4% (p = 0.91) |
| | Inaccuracy | 14.3%/13.6%/15.7% (p = 0.41) |
| | Omission | 10.0%/12.1%/12.1% (p = 0.24) |
| | Time (min, mean) | 200/149/163 (p = 0.03) |
| | MD in point estimates of meta-analysis | ns (5 outcomes) |
| | Errors (all types) | 25.0%/26.1%/24.3% (p = 0.73) |
| | Inaccuracy | 14.6%/13.2%/15.7% (p = 0.39) |
| | Omission | 10.0%/11.4%/10.7% (p = 0.53) |
| | Time (min, mean) | 198/179/152 (p = 0.01) |
| | MD in point estimates of meta-analysis | ns (5 outcomes) |
| | Errors (all types) | 26.4%/27.9%/27.9% (p = 0.73) |
| | Inaccuracy | 16.4%/12.1%/15.7% (p = 0.22) |
| | Omission | 10.4%/12.1%/13.6% (p = 0.47) |
| | Time (min, mean) | 211/180/173 (p = 0.12) |
| | MD in point estimates of meta-analysis | ns (5 outcomes) |
| Tendal 2009 [15] | Difference of SMD < 0.1 | 61%/46% (NR) |
| | Difference of SMD < 0.1 (pooled estimates) | 33%/27% (NR) |

*p < 0.05; CI = confidence interval; MD = mean difference; ns = no statistically significant differences (according to authors, significance not specified); NR = not reported; RD = relative difference; SMD = standardized mean difference
According to the study authors, "all data-handling errors led to changes in the summary results, but none of them affected review conclusions" [17].
Tendal et al. [15] estimated the influence of deviations in data extractions between reviewers on results. A difference in the standardized mean difference smaller than 0.1 was considered reviewer agreement. Approximately 53% and 31% of reviewers agreed at the trial level and the meta-analysis level, respectively. At the level of the meta-analysis, the difference in the standardized mean difference (SMD) was at least 1 in one of ten meta-analyses.

Table 1 shows the frequency of data extraction errors and the influence on the effect estimates for each included study.
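The following sketch (invented SMDs and variances; fixed-effect inverse-variance pooling as a simple stand-in for the meta-analytic models actually used in the included studies) illustrates how a single erroneous trial-level SMD propagates into a pooled estimate:

```python
# Illustrative only: propagation of one erroneous trial-level SMD into a
# fixed-effect, inverse-variance pooled estimate (all numbers invented).
def pooled_smd(smds, variances):
    weights = [1.0 / v for v in variances]
    return sum(w * d for w, d in zip(weights, smds)) / sum(weights)

variances = [0.04, 0.05, 0.08]     # trial-level variances of the SMD
correct   = [0.30, 0.45, 0.25]     # correctly extracted SMDs
erroneous = [0.30, 0.85, 0.25]     # trial 2 mis-extracted

print(round(pooled_smd(correct, variances), 2))    # 0.34
print(round(pooled_smd(erroneous, variances), 2))  # 0.48 -> pooled estimate shifts by > 0.1
```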
Studies comparing different reviewer extraction methods
and reviewer characteristics
Buscemi et al. [9] compared data extraction of 30 randomized controlled trials by two independent reviewers with data extraction by one reviewer and verification by a second. The reference standard was data extraction by one reviewer with verification by an experienced statistician. The agreement rate of the two extraction methods was 28%. The risk difference of the total error rate was statistically significant (risk difference 21.7%, p = 0.019) in favor of double data extraction. This difference was primarily because of inaccuracy (risk difference 52.9%, p = 0.308). However, on average, double data extraction took 49 min longer. Pooled effect estimates of both extraction methods varied only slightly and were not statistically significantly different from the reference standard in any of the meta-analyses (N = 6).
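As a rough illustration of the kind of comparison reported here (all counts are invented and no hypothesis test is shown), the sketch below computes the error rates of the two extraction methods against a reference standard and the resulting risk difference:

```python
# Illustrative only: error rates of two extraction methods against a reference
# standard, and the resulting risk difference (counts are invented).
def error_rate(erroneous_items, total_items):
    return erroneous_items / total_items

single_verified = error_rate(65, 200)      # one reviewer, verified by a second
double_independent = error_rate(22, 200)   # two independent reviewers

print(f"risk difference: {single_verified - double_independent:.1%}")  # 21.5% in this example
```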
In the study by Horton et al. [14], the data extractions of minimally experienced, moderately experienced and substantially experienced reviewers were compared. Experience classifications were based on years involved in systematic review preparation (≤ 2, 4–6, > 7) and the number of performed data extractions (≤ 50, 51–300, > 300). Three measures were used: systematic review experience, extraction experience and a composite of these two (overall experience). The reference data extractions were prepared by independent data extraction by two reviewers and additional verification against the data in the original review. Error rates were similar across all overall experience levels. The median error rates ranged from 24.3% to 26.4%, from 13.6% to 15.7% and from 10.0% to 12.1% for total errors, inaccuracy and omission, respectively (p-values > 0.2 for all comparisons). Inexperienced reviewers required more time for data extraction (range: 173–211 min). There were no statistically significant differences (according to the authors, p-values not specified) in the point estimates (5 outcomes) of the meta-analyses between overall reviewer experience levels.
Tendal et al. [15] estimated the influence of deviations in data extractions between reviewers on results. Agreement was defined as a difference in the standardized mean differences of the effect estimates smaller than 0.1. Agreement was calculated for all pairs of experienced methodologists and all pairs of PhD students. Experienced methodologists agreed more often than the PhD students (61% vs. 46% of trials; 33% vs. 27% of meta-analyses).
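The sketch below (with invented SMD values) illustrates this agreement definition: two extractions of the same trial "agree" if their SMDs differ by less than 0.1, tallied over all reviewer pairs:

```python
# Illustrative only: pairwise reviewer agreement, defined as an absolute
# difference in extracted SMDs of less than 0.1 (values are invented).
from itertools import combinations

def pairwise_agreement(smds, threshold=0.1):
    pairs = list(combinations(smds, 2))
    agreeing = sum(1 for a, b in pairs if abs(a - b) < threshold)
    return agreeing / len(pairs)

experienced = [0.42, 0.45, 0.40, 0.44]   # one extracted SMD per experienced methodologist
phd_students = [0.42, 0.60, 0.38, 0.55]  # one extracted SMD per PhD student

print(round(pairwise_agreement(experienced), 2))   # 1.0
print(round(pairwise_agreement(phd_students), 2))  # 0.33
```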
Table 2 shows the characteristics, and Table 3 the re-
sults of the comparison of different extraction methods
and reviewer characteristics.
Discussion
There is a high prevalence of extraction errors [8, 13, 15, 16]. However, extraction errors seem to have only a moderate impact on the results/conclusions of systematic reviews. Nevertheless, the high rate of extraction errors indicates that measures for quality assurance of data extraction are important to minimize the risk of biased results and wrong conclusions [8].

Fig. 1 Flow-diagram of study selection
Comparative evidence on the influence of different reviewer extraction methods and reviewer characteristics is sparse [9, 14, 15]. Reviewer characteristics seem to have only a moderate influence on extraction errors [7, 9]. Data extraction by two independent reviewers seems to result in fewer extraction errors than data extraction by one reviewer and verification by a second. These large differences might cause significant differences in effect estimates [15]. However, in view of the limited influence on the conclusions of a systematic review, double data extraction of all data by two independent reviewers seems not always necessary, and reduced data extraction methods might be justified. The basic principle of reduced extraction methods is to focus on critical aspects (e.g. primary outcomes), which constitute the basis for conclusions, and to place reduced emphasis on the data extraction of less important aspects (e.g. patient characteristics, additional outcomes). Also in the Methodological Expectations of Cochrane Intervention Reviews (MECIR), it is stated that "dual data extraction may be less important for study characteristics than it is for outcome data, so it is not a mandatory standard for study characteristics" and that "dual data extraction is particularly important for outcome data, which feed directly into syntheses of the evidence, and hence to the conclusions of the review" [19]. This is comparable to the recent policy of the Institute of Medicine (IOM). The IOM states that, "at minimum, use two or more researchers, working independently, to extract quantitative and other critical data from each study" [20].
Such methods would reduce the effort for data extraction. Considering that reviewer experience showed only little influence on extraction errors, costs might be further reduced by employing not only experienced methodologists but also staff with less systematic review experience for data extraction [14, 15]. However, reviewer training seems especially important if less experienced reviewers are involved. The reviewer team should be trained in data extraction (e.g. using a sample) before performing the complete data extraction, to harmonize data extraction and clear up common misunderstandings. This could in particular reduce interpretation and selection errors [13]. The reduction of time and effort is especially relevant for rapid reviews because this form of evidence synthesis aims at timely preparation while remaining systematic [21]. Thus, clarifying which methods can reduce the time required for preparation without significantly increasing the risk of bias would also contribute to better control of the shortcuts in rapid reviews. The risk of bias of reduced data extraction methods could be further lowered if a detailed systematic review protocol and data entry instructions are prepared beforehand, because this reduces the risk of selecting wrong outcomes (e.g. time points, measures) and of omission [15, 22].
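The following sketch is a hypothetical illustration of such a reduced extraction workflow (the item classification is our own assumption, not a recommendation from the included studies): items that feed directly into the synthesis are extracted in duplicate, everything else by one reviewer with verification.

```python
# Hypothetical sketch of a "reduced" extraction workflow; the set of critical
# items is an illustrative assumption.
CRITICAL_ITEMS = {"primary_outcome", "effect_estimate", "sample_size"}

def extraction_plan(item: str) -> str:
    if item in CRITICAL_ITEMS:
        return "double extraction by two independent reviewers"
    return "single extraction, verified by a second reviewer"

for item in ("primary_outcome", "patient_characteristics", "effect_estimate"):
    print(item, "->", extraction_plan(item))
```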
The wide range of agreement between different extrac-
tion methods suggests that some studies are more diffi-
cult to extract than others [9]. There can be many
reasons for this. First, data extraction is probably
dependent on the reporting quality in primary studies.
The methods and results of trials are often insufficiently reported [23, 24]. Poor reporting can complicate the identification of relevant information (e.g. the primary outcome is not clearly indicated) and impede the extraction of treatment effects in a useful manner (e.g. only statements on statistical significance without effect measures and confidence intervals). Thus, poor reporting can increase the risk of omission and varying interpretation.
Second, the level of necessary statistical expertise might vary by study. Even reviewers who are familiar with a variety of statistical methods may not be aware of more advanced statistical methods (e.g. hierarchical regression models) or recently developed methods (e.g. mean cumulative function). Data extraction is particularly challenging
lative function). Data extraction is particularly challenging
when very different statistical methods (which can result
in different effect estimates) are used across articles.
Third, different backgrounds (e.g. clinicians, epidemi-
ologists) and levels of expertise might also play a role.
However, drawing conclusions for practice seems difficult without an approach to differentiate between "easy" and "difficult" studies beforehand (e.g. tools to classify statistical complexity). Furthermore, it should be acknowledged that the complexity of data extraction depends not only on the data items in the included studies but also on the aim of the extraction. For example, to support data extraction, a guide for complex meta-analysis (DECiMAL) has recently been published [10].
Findings are consistent across our included studies. Moreover, similar results have been observed for shortcuts in other process steps (e.g. quality assessment) that can influence the risk of bias in systematic reviews. For example, a study on the influence of searching multiple databases found only a weak impact of reducing the number of searched databases on the results of meta-analyses and review conclusions [18, 25]. Therefore, research on such methods also seems to be needed for other process steps. Future research should consider the influence on the risk of bias as well as the impact on scientific resources.
Limitations
Although the included studies are probably not free from bias, we did not perform a formal risk of bias assessment using a checklist because there is no established risk of bias tool for methodological studies. A source of risk of bias (regarding correct extraction) is probably the imperfect reference standard in the included studies. For example, in the study by Jones et al. [17], only one statistician performed the data extraction of results. Carroll et al. [16] considered their own systematic review as the reference standard. However, a perfect gold standard might hardly be achievable in such studies. Moreover, most studies considered only a certain research question regarding patients, intervention and comparison. The generalizability of the results is therefore unclear.
The search terms describing our research question were very unspecific (e.g. data collection). For this reason, we used field limitations to balance specificity and sensitivity. Therefore, we might not have identified all relevant articles.
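To illustrate what a field-limited query of the kind referred to above can look like, consider the hypothetical example below; this is not the search strategy used in this review (the full strategies are provided in Additional file 1), and the terms and field tags are our own illustrative choices:

```python
# Hypothetical example of a field-limited PubMed-style query; not the authors'
# actual search strategy (see Additional file 1).
unrestricted = '"data extraction" OR "data collection"'
field_limited = ('("data extraction"[Title/Abstract] OR "data collection"[Title/Abstract]) '
                 'AND ("systematic review"[Title/Abstract] OR meta-analysis[Title/Abstract])')
print(field_limited)  # restricting terms to title/abstract trades sensitivity for specificity
```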
We did not investigate novel computer-aided approaches for data extraction. Computer-aided data extraction can increase efficiency and accuracy [26]. Although such approaches are only in their infancy, it can be expected that they will become more common in the future. Biomedical natural language processing techniques will be developed further in the near future [27].
Conclusion
There is a high prevalence of extraction errors. This might cause relevant bias in effect estimates [8, 15–17]. However, there are only a few studies on the influence of different data extraction methods, reviewer characteristics and reviewer training on data extraction quality. Thus, the evidence base for the established standards of data extraction seems sparse because the actual benefit of a certain extraction method (e.g. independent data extraction) or of the composition of the extraction team (e.g. experience) is not sufficiently proven. This is surprising given that data extraction is a crucial step in conducting a systematic review. More comparative studies are needed to gain deeper insights into the influence of different extraction methods. In particular, studies investigating training for data extraction are needed because no such analysis exists to date. Similar studies were recently published for risk of bias assessment [28]. The application of methods that require less effort without threatening internal validity would result in a more efficient utilization of scientific manpower. Increasing the knowledge base would also help to design effective training strategies for new reviewers and students in the future.
Additional file
Additional file 1: Search strategies. (DOCX 13 kb)
Abbreviations
SR: Systematic review
Acknowledgements
Not applicable.
Funding
This research did not receive any specific funding.
Availability of data and materials
Not applicable.
Authors' contributions
TM: idea for the review, study selection, data extraction, interpretation of
results, writing of manuscript. PK: study selection. DP: idea for the review,
study selection, verification of data extraction, interpretation of results. All
authors read and approved the final manuscript.
Ethics approval and consent to participate
Not applicable. This work contains no human data.
Consent for publication
All authors reviewed the final manuscript and consented for publication.
Competing interests
The authors declare that they have no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations.
Received: 17 July 2017 Accepted: 15 November 2017
References
1. Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions version 5.1.0 [updated March 2011]. The Cochrane Collaboration; 2011. Available from www.cochrane-handbook.org.
2. Felson DT. Bias in meta-analytic research. J Clin Epidemiol. 1992;45(8):885–92.
3. Centre for Reviews and Dissemination. CRD's guidance for undertaking reviews in health care. York: York Publishing Services Ltd; 2009.
4. Joanna Briggs Institute. Reviewers' manual: 2011 edition. Adelaide: JBI; 2014.
5. Tricco AC, Tetzlaff J, Sampson M, Fergusson D, Cogo E, Horsley T, Moher D. Few systematic reviews exist documenting the extent of bias: a systematic review. J Clin Epidemiol. 2008;61(5):422–34.
6. Shemilt I, Khan N, Park S, Thomas J. Use of cost-effectiveness analysis to compare the efficiency of study identification methods in systematic reviews. Syst Rev. 2016;5(1):140.
7. Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci. 2010;5(1):56.
8. Gøtzsche PC, Hróbjartsson A, Marić K, Tendal B. Data extraction errors in meta-analyses that use standardized mean differences. JAMA. 2007;298(4):430–7.
9. Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59:697–703.
10. Pedder H, Sarri G, Keeney E, Nunes V, Dias S. Data extraction for complex meta-analysis (DECiMAL) guide. Syst Rev. 2016;5(1):212.
11. Peters MDJ, Godfrey CM, Khalil H, McInerney P, Parker D, Soares CB. Guidance for conducting systematic scoping reviews. Int J Evid Based Healthc. 2015;13(3):141–6.
12. Zaza S, Wright De Aguero LK, Briss PA, Truman BI, Hopkins DP, Hennessy MH, Sosin DM, Anderson L, Carande Kulis VG, Teutsch SM, et al. Data collection instrument and procedure for systematic reviews in the guide to community preventive services. Task force on community preventive services. Am J Prev Med. 2000;18:44–74.
13. Haywood KL, Hargreaves J, White R, Lamb SE. Reviewing measures of outcome: reliability of data extraction. J Eval Clin Pract. 2004;10:329–37.
14. Horton J, Vandermeer B, Hartling L, Tjosvold L, Klassen TP, Buscemi N. Systematic review data extraction: cross-sectional study showed that experience did not increase accuracy. J Clin Epidemiol. 2010;63:289–98.
15. Tendal B, Higgins JP, Juni P, Hrobjartsson A, Trelle S, Nuesch E, Wandel S, Jorgensen AW, Gesser K, Ilsoe-Kristensen S, et al. Disagreements in meta-analyses using outcomes measured on continuous or rating scales: observer agreement study. BMJ. 2009;339:b3128.
16. Carroll C, Scope A, Kaltenthaler E. A case study of binary outcome data extraction across three systematic reviews of hip arthroplasty: errors and differences of selection. BMC Res Notes. 2013;6:539.
17. Jones AP, Remmington T, Williamson PR, Ashby D, Smyth RL. High prevalence but low impact of data extraction and reporting errors were found in Cochrane systematic reviews. J Clin Epidemiol. 2005;58:741–2.
18. Hartling L, Featherstone R, Nuspl M, Shave K, Dryden DM, Vandermeer B. The contribution of databases to the results of systematic reviews: a cross-sectional study. BMC Med Res Methodol. 2016;16(1):127.
19. Higgins JPT, Lasserson T, Chandler J, Tovey D, Churchill R. Methodological Expectations of Cochrane Intervention Reviews. London: Cochrane; 2016.
20. Morton S, Berg A, Levit L, Eden J. Finding what works in health care: standards for systematic reviews. National Academies Press; 2011.
21. Schünemann HJ, Moja L. Reviews: rapid! Rapid! Rapid! and systematic. Syst Rev. 2015;4(1):4.
22. Silagy CA, Middleton P, Hopewell S. Publishing protocols of systematic reviews: comparing what was done to what was planned. JAMA. 2002;287(21):2831–4.
23. Adie S, Harris IA, Naylor JM, Mittal R. CONSORT compliance in surgical randomized trials: are we there yet? A systematic review. Ann Surg. 2013;258(6):872–8.
24. Zheng SL, Chan FT, Maclean E, Jayakumar S, Nabeebaccus AA. Reporting trends of randomised controlled trials in heart failure with preserved ejection fraction: a systematic review. Open Heart. 2016;3(2):e000449.
25. Halladay CW, Trikalinos TA, Schmid IT, Schmid CH, Dahabreh IJ. Using data sources beyond PubMed has a modest impact on the results of systematic reviews of therapeutic interventions. J Clin Epidemiol. 2015;68(9):1076–84.
26. Saldanha IJ, Schmid CH, Lau J, Dickersin K, Berlin JA, Jap J, Smith BT, Carini S, Chan W, De Bruijn B, et al. Evaluating data abstraction assistant, a novel software application for data abstraction during systematic reviews: protocol for a randomized controlled trial. Syst Rev. 2016;5(1):196.
27. Jonnalagadda SR, Goyal P, Huffman MD. Automating data extraction in systematic reviews: a systematic review. Syst Rev. 2015;4:78.
28. da Costa BR, Beckett B, Diaz A, Resta NM, Johnston BC, Egger M, Jüni P, Armijo-Olivo S. Effect of standardized training on the reliability of the Cochrane risk of bias assessment tool: a prospective study. Syst Rev. 2017;6(1):44.
1.
2.
3.
4.
5.
6.
Terms and Conditions
Springer Nature journal content, brought to you courtesy of Springer Nature Customer Service Center GmbH (“Springer Nature”).
Springer Nature supports a reasonable amount of sharing of research papers by authors, subscribers and authorised users (“Users”), for small-
scale personal, non-commercial use provided that all copyright, trade and service marks and other proprietary notices are maintained. By
accessing, sharing, receiving or otherwise using the Springer Nature journal content you agree to these terms of use (“Terms”). For these
purposes, Springer Nature considers academic use (by researchers and students) to be non-commercial.
These Terms are supplementary and will apply in addition to any applicable website terms and conditions, a relevant site licence or a personal
subscription. These Terms will prevail over any conflict or ambiguity with regards to the relevant terms, a site licence or a personal subscription
(to the extent of the conflict or ambiguity only). For Creative Commons-licensed articles, the terms of the Creative Commons license used will
apply.
We collect and use personal data to provide access to the Springer Nature journal content. We may also use these personal data internally within
ResearchGate and Springer Nature and as agreed share it, in an anonymised way, for purposes of tracking, analysis and reporting. We will not
otherwise disclose your personal data outside the ResearchGate or the Springer Nature group of companies unless we have your permission as
detailed in the Privacy Policy.
While Users may use the Springer Nature journal content for small scale, personal non-commercial use, it is important to note that Users may
not:
use such content for the purpose of providing other users with access on a regular or large scale basis or as a means to circumvent access
control;
use such content where to do so would be considered a criminal or statutory offence in any jurisdiction, or gives rise to civil liability, or is
otherwise unlawful;
falsely or misleadingly imply or suggest endorsement, approval , sponsorship, or association unless explicitly agreed to by Springer Nature in
writing;
use bots or other automated methods to access the content or redirect messages
override any security feature or exclusionary protocol; or
share the content in order to create substitute for Springer Nature products or services or a systematic database of Springer Nature journal
content.
In line with the restriction against commercial use, Springer Nature does not permit the creation of a product or service that creates revenue,
royalties, rent or income from our content or its inclusion as part of a paid for service or for other commercial gain. Springer Nature journal
content cannot be used for inter-library loans and librarians may not upload Springer Nature journal content on a large scale into their, or any
other, institutional repository.
These terms of use are reviewed regularly and may be amended at any time. Springer Nature is not obligated to publish any information or
content on this website and may remove it or features or functionality at our sole discretion, at any time with or without notice. Springer Nature
may revoke this licence to you at any time and remove access to any copies of the Springer Nature journal content which have been saved.
To the fullest extent permitted by law, Springer Nature makes no warranties, representations or guarantees to Users, either express or implied
with respect to the Springer nature journal content and all parties disclaim and waive any implied warranties or warranties imposed by law,
including merchantability or fitness for any particular purpose.
Please note that these rights do not automatically extend to content, data or other material published by Springer Nature that may be licensed
from third parties.
If you would like to use or distribute our Springer Nature journal content to a wider audience or on a regular basis or in any other manner not
expressly permitted by these Terms, please contact Springer Nature at
onlineservice@springernature.com
... Data extraction is done by taking all research data obtained from scientific journals used for research. Then, the researcher changed the data obtained into new data by filtering the data into several categories (Mathes et al., 2017;Munn et al., 2014;Pedder et al., 2016;Schmidt et al., 2021). Researchers only take valid data and do not include data that is less valid so that optimal new data and satisfactory results are obtained. ...
... Data extraction is the most important phase in research using a systematic literature review method (Jonnalagadda et al., 2015). This phase is very vulnerable to a lot of research data that may be lost, if not careful in filtering the data (Mathes et al., 2017). ...
... The main data taken from the journal article include researchers and research year, research design, research location, number and characteristics of research samples, questionnaire instrument with Likert scale, and research results and conclusions. The data is entered in the data extraction form and displayed in the form of a table (Mathes et al., 2017;Popenoe et al., 2021). ...
Article
Full-text available
p style="text-align: justify;">This study reviews 60 papers using a Likert scale and published between 2012 – 2021. Screening for literature review uses the PRISMA method. The data analysis technique was carried out through data extraction, then synthesized in a structured manner using the narrative method. To achieve credible research results at the stage of the data collection and data analysis process, a group discussion forum (FGD) was conducted. The findings show that only 10% of studies use a measurement scale with an even answer choice category (4, 6, 8, or 10 choices). In general, (90%) of research uses a measurement instrument that involves a Likert scale with odd response choices (5, 7, 9, or 11) and the most popular researchers use a Likert scale with a total response of 5 points. The use of a rating scale with an odd number of responses of more than five points (especially on a seven-point scale) is the most effective in terms of reliability and validity coefficients, but if the researcher wants to direct respondents to one side, then a scale with an even number of responses (six points) is possible. more suitable. The presence of response bias and central tendency bias can affect the validity and reliability of the use of the Likert scale instrument.</p
... Two calibrated reviewers (A.S.A. and R.M.) independently extracted the data using the reduced data extraction method by focusing on the aspects critical to the results (i.e., exposure and primary outcomes) and aspects which required more subjective interpretations (i.e., patients characteristics and added outcomes) [49,50]. Data were extracted based on the PICO/S framework; (1) population, (2) exposure, (3) comparator, (4) outcome, and (5) study design. ...
Article
Full-text available
Background: Recognising the association between the perceived risks of e-cigarettes and e-cigarette usage among youth is critical for planning effective prevention and intervention initiatives; thus, a systematic review and meta-analysis were performed. Methods: Fourteen databases were searched for eligible studies from the Inception of database until March 2022 to examine the effect estimates of the association between perceptions of harmfulness and addictiveness and overall e-cigarette usage among adolescents and youth. Results: The meta-analysis showed that in comparison to non-users, young people who were ever e-cigarette users were two times more likely to disagree that e-cigarettes are harmful (OR: 2.20, 95% CI: 1.41–3.43) and perceived e-cigarettes as less harmful than tobacco cigarettes (OR: 2.01, 95% CI 1.47–2.75). Youths who were ever e-cigarette users were also 2.3 and 1.8 times more likely to perceive e-cigarettes as less addictive (OR: 2.28, 95% CI: 1.81–2.88) or perceive e-cigarettes as more addictive (OR: 1.82, 95% CI: 1.22–2.73) than tobacco cigarettes, as compared with non-users. The subgroup analysis reported that adolescents were more likely to believe that e-cigarettes are less harmful than tobacco cigarettes, while youth users perceived otherwise. Conclusion: the risk perceptions of e-cigarettes are associated with e-cigarette use among adolescents and youth and could be the focus of health promotion to prevent and curb the uptake of e-cigarettes among young people.
... First, a data extraction template was developed to collate relevant data from the studies (see Appendix B for table of characteristics). This was done using a partial double extraction process to reduce errors during the process [20,21]. Two members of the team with moderate and substantial literature review experience independently developed data extraction templates based on a review of the same ten articles. ...
Article
Full-text available
Community health volunteers are considered a vital part of the community health structure in Africa. Despite this vital role in African health systems, very little is known about the community health volunteers’ day-to-day lived experiences providing services in communities and supporting other health workers. This scoping review aims to advance understanding of the day-to-day experiences of community health volunteers in Africa. In doing so, this review draws attention to these under-considered actors in African health systems and identifies critical factors and conditions that represent challenges to community health volunteers’ work in this context. Ultimately, our goal is to provide a synthesis of key challenges and considerations that can inform efforts to reduce attrition and improve the sustainability of community health volunteers in Africa. This scoping review was conducted using the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for scoping reviews checklist to achieve the objectives. A comprehensive search of six databases returned 2140 sources. After screening, 31 peer-reviewed studies were selected for final review. Analytical themes were generated based on the reviewers’ extraction of article data into descriptive themes using an inductive approach. In reviewing community health volunteers’ accounts of providing health services, five key challenges become apparent. These are: (1) challenges balancing work responsibilities with family obligations; (2) resource limitations; (3) exposure to stigma and harassment; (4) gendered benefits and risks; and (5) health-system level challenges. This scoping review highlights the extent of challenges community health volunteers must navigate to provide services in communities. Sustained commitment at the national and international level to understand the lived experiences of community health volunteers and mitigate common stressors these health actors face could improve their performance and inform future programs.
... The data and findings extraction process was done through an agreed checklist. Two authors used the two authors' independent extraction approach where they independently extracted the data based on a pre-agreed list including details on authors, publication year, aim, and core findings, respectively [12,13]. Any differences in the extraction of reviews and studies were resolved through consensus by the authors. ...
Article
Full-text available
Aim and Significance: The review examines the impact of Australian government policies on healthcare quality delivery to the minority communities of the aboriginal and Torres Strait Islander community within Australia. The focus is on the value and impacts of the policies, denoting their success and limitation areas. This is informed by the need to ensure equity for all in accessing healthcare services as a global healthcare provision sustainability goal. Methods: This review was based on a search of the existing secondary data that is publicly available. The analysis searched the Australian government website for its official healthcare policy publications and supporting peer reviews from the Google scholar, PubMed, Scopus, and Medline databases, respectively. The findings were analysed through the PRISMA model, and inclusion and exclusion criteria were applied to ensure findings relevance and alignment with the review aim. Findings and Conclusion: The review analysis established that the Australian government has enacted healthcare policies targeting the aboriginal and Torres Strait Islander community within Australia. Key areas of focus in enhancing healthcare quality for this target healthcare population were (i) improving healthcare access and (ii) enhancing healthcare affordability through cost reduction.
Preprint
Full-text available
Extracting data from studies is the norm in meta-analyses, enabling researchers to generate effect sizes when raw data are otherwise not available. While there has been a general push for increased reproducibility throughout the many facets of meta-analysis, the transparency and reproducibility of the data extraction phase are still lagging be-hind. This particular meta-analytic facet is critical because it facilitates error-checking and enables users to update older meta-analyses. Unfortunately, there is little guidance of how to make the process of data extraction more transparent and shareable, in part this is as a result of relatively few data extraction tools currently offering such functionality. Here, we suggest a simple framework that aims to help increase the reproducibility of data extraction for meta-analysis. We also provide suggestions of software that can further help users adopt open data policies. More specifically, we overview two GUI style software in the R environment, shinyDigitise and juicr, that both facilitate reproducible workflows while reducing the need for coding skills in R. Adopting the guiding principles listed here and using appropriate software will provide a more streamlined, transparent, and shareable form of data extraction for meta-analyses.
Article
Objective To study the effect of modifying content and design elements within written informed-consent-forms (ICF) for patients undergoing elective surgical or invasive procedures. Methods We included (quasi-)randomized trials in which a modified written ICF (e.g. visual aids) was compared to a standard written ICF. We searched PubMed, Web-of-Science and PsycINFO until 08/2021. Risk of Bias was assessed. The complexity of intervention was assessed using the Intervention Complexity Assessment Tool for Systematic Reviews. Results Eleven trials with 1,091 participants were eligible. Providing patients with more information in general or specific information on risks and complications mostly increased anxiety. The use of verbal risk presentation decreased anxiety and increased satisfaction. A lower readability level decreased anxiety and improved comprehension and knowledge. However, the effect sizes and levels of evidence varied from trivial to moderate. Furthermore, there were contradictory findings for some outcomes. Conclusion Our results suggest that providing more information and addressing certain types of risks have differential effects. While more information improved knowledge, it also increased anxiety. We did not find any or only insufficient evidence for many other possible ICF modifications. Practice Implications When developing ICFs the differential impact of different elements on patient important outcomes should be carefully considered.
Preprint
Full-text available
Microplastics (MP) are perceived as a threat to aquatic ecosystems but bear many similarities to suspended sediments which are often considered less harmful. It is, therefore pertinent to determine if and to what extent MPs are different from other particles occurring in aquatic ecosystems in terms of their adverse effects. We applied meta-regressions to hazard data extracted from the literature and harmonized the data to construct Species Sensitivity Distributions (SSDs) for both types of particles. The results demonstrate that the average toxicity of MPs is approximately one order of magnitude higher than that of suspended solids. However, the estimates were associated with large uncertainties and did not provide very strong evidence. In part, this is due to the general lack of comparable experimental studies and dose-dependent point estimates. We, therefore, argue that a precautionary approach should be used and MP in the 1–1000 µm size range should be considered moderately more hazardous to aquatic organisms capable of ingesting such particles. Organisms inhabiting oligotrophic habitats like coral reefs and alpine lakes, with naturally low levels of non-food particles are likely more vulnerable, and it is reasonable to assume that MP pose a relatively higher risk to aquatic life in such habitats. Synopsis A meta-analysis indicates that microplastics are one order of magnitude more toxic than suspended sediments/solids, an estimate surrounded by considerable uncertainty. Graphical abstract
Article
Summary In this study, we examined the data reproducibility issues in systematic reviews in sleep medicine. We searched for systematic reviews of randomized controlled trials published in sleep medicine journals. The metadata in meta-analyses among the eligible systematic reviews were collected. The original sources of the data were reviewed to see if the components used in the meta-analyses were correctly extracted or estimated. The impacts of the data reproducibility issues were investigated. We identified 48 systematic reviews with 244 meta-analyses of continuous outcomes and 54 of binary outcomes. Our results suggest that for continuous outcomes, 20.03% of the data used in meta-analyses cannot be reproduced at the trial level, and 43.44% of the data cannot be reproduced at the meta-analysis level. For binary outcomes, the proportions were 14.14% and 40.74%. In total, 83.33% of the data cannot be reproduced at the systematic review level. Our further analysis suggested that these reproducibility issues would lead to as much as 6.52% of the available meta-analyses changing the direction of the effects, and 9.78% changing the significance of the P-values. Sleep medicine systematic reviews and meta-analyses face serious issues in terms of data reproducibility, and further efforts are urgently needed to improve this situation.
Thesis
Full-text available
The high prevalence of physical inactivity is a global problem that contributes to rising morbidity and causes 5.3 million premature deaths and economic costs of 53.8 billion US dollars each year. Against this background, developing strategies for promoting physical activity is a central challenge. Physical activity promotion is a core strategy of health promotion; reducing physical inactivity by 10% is one of the World Health Organization's (WHO) key targets for the prevention and control of non-communicable diseases. Efforts to promote physical activity are particularly important for people with non-communicable diseases such as cardiovascular disease, cancer, diabetes, chronic obstructive pulmonary disease (COPD), or mental illness. Despite the well-documented, wide-ranging health benefits of regular physical activity, a physically inactive lifestyle is widespread among people with pre-existing conditions. At the same time, people with non-communicable diseases in Germany, at around 40% of the adult population, constitute a large group that is expected to grow further in the future. This cumulative habilitation thesis addresses physical activity promotion in people with non-communicable diseases. It aims to contribute to successfully promoting physically active lifestyles in this population. To this end, 22 individual contributions on three overarching topic areas - physical activity recommendations, physical activity behaviour, and exercise therapy - are presented, covering a broad range of content and drawing on diverse scientific approaches and methods. The central results of the habilitation thesis include: 1) the conceptual development of national recommendations for physical activity and physical activity promotion for adults with non-communicable diseases, the dissemination of these recommendations via a participatory approach, and a complementary analysis of their development, dissemination, and implementation; 2) an epidemiological analysis of the physical activity behaviour of adults with non-communicable diseases in Germany; 3) the identification of non-linear dose-response relationships between physical activity and mortality for people with non-communicable diseases, based on a systematic review; 4) a randomized controlled effectiveness analysis of the long-term activity-promoting effects of a pedometer-based behavioural intervention within pulmonary rehabilitation for rehabilitation patients with COPD, together with the prediction of their physical activity trajectories six weeks and six months after rehabilitation, based on the integrative model of physical-activity-related health competence; 5) a national survey of exercise therapy in medical rehabilitation, covering both the facility level and the practitioner level, each with a particular focus on the goal of physical activity promotion. The results of this work a) confirm the substantial health benefits of physical activity, b) extend knowledge about the actual extent of physical activity among people with non-communicable diseases and make it possible to identify particularly inactive subgroups, c) improve understanding of the person-related determinants of physical activity behaviour in people with pre-existing conditions, d) lay the groundwork for the systematic further development and optimization of activity-promoting exercise therapy, e) provide practical guidance for health professions in the field of physical activity promotion, f) support the development of activity-friendly policy, and g) foster network building for physical activity promotion. Taken together, this work strengthens the evidence base for successful physical activity promotion in people with non-communicable diseases. It offers relevant insights for adults with non-communicable diseases, exercise therapists, physicians and other health professionals concerned with physical activity promotion and exercise therapy, as well as for decision-makers in the healthcare system and in politics. Overall, this habilitation thesis provides diverse impulses for physical activity promotion at the individual, organizational, and policy levels and can help to shape future efforts to promote physical activity among people with non-communicable diseases.
Article
Background Most mHealth app users rely on an app’s rankings, star ratings, or reviews, which may not reflect users’ individual healthcare needs. To help healthcare providers, researchers, and users select an optimal mHealth app, the Method of App Selection based on User Needs (MASUN) 1.0 was developed and tested in prior research. Initial testing identified a need for improvement. Objective This multiple-phase study aimed to simplify and improve MASUN 1.0, resulting in MASUN 2.0, and to verify the feasibility and usability of MASUN 2.0. Methods This study was conducted in three phases: 1) modification of MASUN 1.0 to improve its importance, applicability, relevance, and clarity, in consultation with 21 experts in medical or nursing informatics; 2) validation of the draft MASUN 2.0 with 13 experts; and 3) feasibility testing of MASUN 2.0 and usability evaluation of the best app found through MASUN 2.0. Menstrual apps were used to test the framework. Results From Phases 1 and 2, MASUN 2.0, the framework for mHealth app selection, was derived with improved simplicity, usability, and applicability through a reduced number of tasks and a shorter time requirement. In Phase 3, after screening and scoring 2377 menstrual apps, five candidate apps were selected and evaluated by five clinical experts, five app experts, and five potential users. Finally, 194 users evaluated the usability of the app selected as the best. The best app helped users understand their health-related syndromes and patterns. Additionally, user-provided scores for impact, usefulness, and ease of use were higher for this app than for the other candidates. Conclusions This study successfully modified MASUN 1.0 into MASUN 2.0 and verified MASUN 2.0 through content validity, feasibility, and usability testing. The apps selected through MASUN 2.0 helped health consumers address health discomfort more easily. Future research should extend this work to an automated system and to different medical conditions with multiple stakeholders for digital health equity.
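As an illustration only (the criteria, weights, and scores below are hypothetical and are not the actual MASUN 2.0 items), a weighted-criteria ranking of candidate apps of the kind such a selection framework might produce could be sketched as follows:

    # Illustrative sketch only: weighted-criteria scoring to rank candidate apps.
    # Criteria, weights, and ratings are hypothetical, not the MASUN 2.0 items.
    candidate_apps = {
        "App A": {"clinical_content": 4.2, "usability": 3.8, "privacy": 4.5},
        "App B": {"clinical_content": 3.9, "usability": 4.6, "privacy": 3.7},
    }
    weights = {"clinical_content": 0.5, "usability": 0.3, "privacy": 0.2}

    def weighted_score(ratings):
        # weighted sum of expert/user ratings across criteria
        return sum(weights[criterion] * ratings[criterion] for criterion in weights)

    ranking = sorted(candidate_apps, key=lambda a: weighted_score(candidate_apps[a]), reverse=True)
    for app in ranking:
        print(app, round(weighted_score(candidate_apps[app]), 2))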
Article
Full-text available
Background The Cochrane risk of bias tool is commonly criticized for having a low reliability. We aimed to investigate whether training of raters, with objective and standardized instructions on how to assess risk of bias, can improve the reliability of the Cochrane risk of bias tool. Methods In this pilot study, four raters inexperienced in risk of bias assessment were randomly allocated to minimal or intensive standardized training for risk of bias assessment of randomized trials of physical therapy treatments for patients with knee osteoarthritis pain. Two raters were experienced risk of bias assessors who served as reference. The primary outcome of our study was between-group reliability, defined as the agreement of the risk of bias assessments of inexperienced raters with the reference assessments of experienced raters. Consensus-based assessments were used for this purpose. The secondary outcome was within-group reliability, defined as the agreement of assessments within pairs of inexperienced raters. We calculated the chance-corrected weighted Kappa to quantify agreement within and between groups of raters for each of the domains of the risk of bias tool. Results A total of 56 trials were included in our analysis. The Kappa for the agreement of inexperienced raters with the reference across items of the risk of bias tool ranged from 0.10 to 0.81 for the minimal training group and from 0.41 to 0.90 for the standardized training group. The Kappa values for the agreement within pairs of inexperienced raters across the items of the risk of bias tool ranged from 0 to 0.38 for the minimal training group and from 0.93 to 1 for the standardized training group. Between-group differences in Kappa for the agreement of inexperienced raters with the reference consistently favored the standardized training group and were most pronounced for incomplete outcome data (difference in Kappa 0.52, p < 0.001) and allocation concealment (difference in Kappa 0.30, p = 0.004). Conclusions Intensive, standardized training on risk of bias assessment may significantly improve the reliability of the Cochrane risk of bias tool.
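A minimal sketch of the agreement statistic used in that study, assuming scikit-learn is available and with hypothetical ratings: risk-of-bias judgements are coded ordinally and compared with a linearly weighted kappa.

    # Minimal sketch: chance-corrected weighted kappa for agreement between a
    # trainee rater and a reference rater on risk-of-bias judgements
    # (0 = low, 1 = unclear, 2 = high risk). Ratings are hypothetical.
    from sklearn.metrics import cohen_kappa_score

    trainee   = [0, 1, 2, 0, 2, 1, 0, 1, 2, 0]
    reference = [0, 1, 2, 1, 2, 1, 0, 2, 2, 0]

    kappa = cohen_kappa_score(trainee, reference, weights="linear")
    print(f"linearly weighted kappa: {kappa:.2f}")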
Article
Full-text available
As more complex meta-analytical techniques such as network and multivariate meta-analyses become increasingly common, further pressures are placed on reviewers to extract data in a systematic and consistent manner. Failing to do this appropriately wastes time, resources and jeopardises accuracy. This guide (data extraction for complex meta-analysis (DECiMAL)) suggests a number of points to consider when collecting data, primarily aimed at systematic reviewers preparing data for meta-analysis. Network meta-analysis (NMA), multiple outcomes analysis and analysis combining different types of data are considered in a manner that can be useful across a range of data collection programmes. The guide has been shown to be both easy to learn and useful in a small pilot study. Electronic supplementary material The online version of this article (doi:10.1186/s13643-016-0368-4) contains supplementary material, which is available to authorized users.
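As a hedged illustration of the kind of layout such guidance encourages (study names, treatments, and counts below are invented), an arm-level "long" extraction table for a binary outcome, with simple consistency checks, might be sketched as follows:

    # Illustrative sketch: an arm-level, long-format extraction layout for a
    # network meta-analysis of a binary outcome. One row per treatment arm, so
    # pairwise contrasts can be derived consistently later. Data are hypothetical.
    import pandas as pd

    extraction = pd.DataFrame(
        [
            {"study": "Smith 2010", "treatment": "Drug A",  "events": 12, "n": 100},
            {"study": "Smith 2010", "treatment": "Placebo", "events": 20, "n": 102},
            {"study": "Jones 2012", "treatment": "Drug B",  "events": 8,  "n": 80},
            {"study": "Jones 2012", "treatment": "Placebo", "events": 15, "n": 79},
            {"study": "Lee 2014",   "treatment": "Drug A",  "events": 10, "n": 60},
            {"study": "Lee 2014",   "treatment": "Drug B",  "events": 9,  "n": 61},
        ]
    )

    # Quick consistency checks of the kind such guides encourage: events never
    # exceed the number randomised, and every study contributes at least two arms.
    assert (extraction["events"] <= extraction["n"]).all()
    assert (extraction.groupby("study").size() >= 2).all()
    print(extraction)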
Article
Full-text available
Background Data abstraction, a critical systematic review step, is time-consuming and prone to errors. Current standards for approaches to data abstraction rest on a weak evidence base. We developed the Data Abstraction Assistant (DAA), a novel software application designed to facilitate the abstraction process by allowing users to (1) view study article PDFs juxtaposed to electronic data abstraction forms linked to a data abstraction system, (2) highlight (or “pin”) the location of the text in the PDF, and (3) copy relevant text from the PDF into the form. We describe the design of a randomized controlled trial (RCT) that compares the relative effectiveness of (A) DAA-facilitated single abstraction plus verification by a second person, (B) traditional (non-DAA-facilitated) single abstraction plus verification by a second person, and (C) traditional independent dual abstraction plus adjudication to ascertain the accuracy and efficiency of abstraction. Methods This is an online, randomized, three-arm, crossover trial. We will enroll 24 pairs of abstractors (i.e., sample size is 48 participants), each pair comprising one less and one more experienced abstractor. Pairs will be randomized to abstract data from six articles, two under each of the three approaches. Abstractors will complete pre-tested data abstraction forms using the Systematic Review Data Repository (SRDR), an online data abstraction system. The primary outcomes are (1) proportion of data items abstracted that constitute an error (compared with an answer key) and (2) total time taken to complete abstraction (by two abstractors in the pair, including verification and/or adjudication). Discussion The DAA trial uses a practical design to test a novel software application as a tool to help improve the accuracy and efficiency of the data abstraction process during systematic reviews. Findings from the DAA trial will provide much-needed evidence to strengthen current recommendations for data abstraction approaches. Trial registration The trial is registered at National Information Center on Health Services Research and Health Care Technology (NICHSR) under Registration # HSRP20152269: https://wwwcf.nlm.nih.gov/hsr_project/view_hsrproj_record.cfm?NLMUNIQUE_ID=20152269&SEARCH_FOR=Tianjing%20Li. All items from the World Health Organization Trial Registration Data Set are covered at various locations in this protocol. Protocol version and date: This is version 2.0 of the protocol, dated September 6, 2016. As needed, we will communicate any protocol amendments to the Institutional Review Boards (IRBs) of Johns Hopkins Bloomberg School of Public Health (JHBSPH) and Brown University. We also will make appropriate as-needed modifications to the NICHSR website in a timely fashion. Electronic supplementary material The online version of this article (doi:10.1186/s13643-016-0373-7) contains supplementary material, which is available to authorized users.
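A minimal sketch of the first primary outcome described above, with hypothetical field names and values: the proportion of abstracted items that disagree with an answer key.

    # Minimal sketch: proportion of abstracted data items that constitute an error
    # when compared with an answer key. Field names and values are hypothetical.
    answer_key = {"sample_size": 120, "mean_age": 54.2, "events_intervention": 18}
    abstracted = {"sample_size": 120, "mean_age": 45.2, "events_intervention": 18}

    errors = sum(1 for field, truth in answer_key.items() if abstracted.get(field) != truth)
    error_rate = errors / len(answer_key)
    print(f"{errors} of {len(answer_key)} items in error ({error_rate:.0%})")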
Article
Full-text available
Background One of the best sources of high quality information about healthcare interventions is a systematic review. A well-conducted systematic review includes a comprehensive literature search. There is limited empiric evidence to guide the extent of searching, in particular the number of electronic databases that should be searched. We conducted a cross-sectional quantitative analysis to examine the potential impact of selective database searching on results of meta-analyses. Methods Our sample included systematic reviews (SRs) with at least one meta-analysis from three Cochrane Review Groups: Acute Respiratory Infections (ARI), Infectious Diseases (ID), Developmental Psychosocial and Learning Problems (DPLP) (n = 129). Outcomes included: 1) proportion of relevant studies indexed in each of 10 databases; and 2) changes in results and statistical significance of the primary meta-analysis for studies identified in Medline only and in Medline plus each of the other databases. Results Due to variation across topics, we present results by group (ARI n = 57, ID n = 38, DPLP n = 34). For ARI, identification of relevant studies was highest for Medline (85 %) and Embase (80 %). Restricting meta-analyses to trials that appeared in Medline + Embase yielded the fewest changes in statistical significance: 53/55 meta-analyses showed no change. Point estimates changed in 12 cases; in 7 the change was less than 20 %. For ID, yield was highest for Medline (92 %), Embase (81 %), and BIOSIS (67 %). Restricting meta-analyses to trials that appeared in Medline + BIOSIS yielded the fewest changes, with 1 meta-analysis changing in statistical significance. Point estimates changed in 8 of 31 meta-analyses; the change was less than 20 % in all cases. For DPLP, identification of relevant studies was highest for Medline (75 %) and Embase (62 %). Restricting meta-analyses to trials that appeared in Medline + PsycINFO resulted in only one change in significance. Point estimates changed for 13 of 33 meta-analyses; in 9 cases the change was less than 20 %. Conclusions The majority of relevant studies can be found within a limited number of databases. Results of meta-analyses based on the majority of studies did not differ in most cases. There were very few cases of changes in statistical significance. Effect estimates changed in a minority of meta-analyses, but in most the change was small. Results did not change in a systematic manner (i.e., regularly over- or underestimating treatment effects), suggesting that selective searching may not introduce bias in terms of effect estimates.
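As an illustration of the kind of sensitivity re-analysis reported above (all study data below are hypothetical), one can re-pool the effect after restricting to studies indexed in a given database and report the relative change in the point estimate against the 20 % threshold:

    # Illustrative sketch: fixed-effect re-pooling restricted to Medline-indexed
    # studies and relative change in the point estimate. Data are hypothetical;
    # effects are log odds ratios with standard errors.
    studies = [
        {"id": "Trial 1", "yi": -0.35, "se": 0.15, "in_medline": True},
        {"id": "Trial 2", "yi": -0.10, "se": 0.20, "in_medline": True},
        {"id": "Trial 3", "yi": -0.60, "se": 0.25, "in_medline": False},
        {"id": "Trial 4", "yi": -0.05, "se": 0.30, "in_medline": True},
    ]

    def pooled(subset):
        weights = [1 / s["se"] ** 2 for s in subset]
        return sum(w * s["yi"] for w, s in zip(weights, subset)) / sum(weights)

    full = pooled(studies)
    medline_only = pooled([s for s in studies if s["in_medline"]])
    rel_change = abs(medline_only - full) / abs(full)
    print(f"full: {full:.3f}  Medline only: {medline_only:.3f}  change: {rel_change:.0%}")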
Article
Full-text available
Background Meta-research studies investigating methods, systems, and processes designed to improve the efficiency of systematic review workflows can contribute to building an evidence base that can help to increase value and reduce waste in research. This study demonstrates the use of an economic evaluation framework to compare the costs and effects of four variant approaches to identifying eligible studies for consideration in systematic reviews. Methods A cost-effectiveness analysis was conducted using a basic decision-analytic model, to compare the relative efficiency of ‘safety first’, ‘double screening’, ‘single screening’ and ‘single screening with text mining’ approaches in the title-abstract screening stage of a ‘case study’ systematic review about undergraduate medical education in UK general practice settings. Incremental cost-effectiveness ratios (ICERs) were calculated as the ‘incremental cost per citation ‘saved’ from inappropriate exclusion’ from the review. Resource use and effect parameters were estimated based on retrospective analysis of ‘review process’ meta-data curated alongside the ‘case study’ review, in conjunction with retrospective simulation studies to model the integrated use of text mining. Unit cost parameters were estimated based on the ‘case study’ review’s project budget. A base case analysis was conducted, with deterministic sensitivity analyses to investigate the impact of variations in values of key parameters. Results Use of ‘single screening with text mining’ would have resulted in title-abstract screening workload reductions (base case analysis) of >60 % compared with other approaches. Across modelled scenarios, the ‘safety first’ approach was, consistently, equally effective and less costly than conventional ‘double screening’. Compared with ‘single screening with text mining’, estimated ICERs for the two non-dominated approaches (base case analyses) ranged from £1975 (‘single screening’ without a ‘provisionally included’ code) to £4427 (‘safety first’ with a ‘provisionally included’ code) per citation ‘saved’. Patterns of results were consistent between base case and sensitivity analyses. Conclusions Alternatives to the conventional ‘double screening’ approach, integrating text mining, warrant further consideration as potentially more efficient approaches to identifying eligible studies for systematic reviews. Comparable economic evaluations conducted using other systematic review datasets are needed to determine the generalisability of these findings and to build an evidence base to inform guidance for review authors.
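A minimal sketch of the ICER computation described above, with hypothetical costs and effects (effects expressed here as citations correctly retained rather than inappropriately excluded):

    # Minimal sketch: incremental cost-effectiveness ratio (ICER) comparing two
    # screening approaches. ICER = (C_A - C_B) / (E_A - E_B). Numbers are hypothetical.
    def icer(cost_a, effect_a, cost_b, effect_b):
        return (cost_a - cost_b) / (effect_a - effect_b)

    double_screening = {"cost": 12000.0, "citations_saved": 995}
    single_with_text_mining = {"cost": 5000.0, "citations_saved": 990}

    a, b = double_screening, single_with_text_mining
    value = icer(a["cost"], a["citations_saved"], b["cost"], b["citations_saved"])
    print(f"ICER (double vs single + text mining): £{value:.0f} per citation saved")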
Article
Full-text available
Background Heart failure with preserved ejection fraction (HFpEF) causes significant cardiovascular morbidity and mortality. Current consensus guidelines reflect the neutral results from randomised controlled trials (RCTs). Adequate trial reporting is a fundamental requirement before concluding on RCT intervention efficacy and is necessary for accurate meta-analysis and to provide insight into future trial design. The Consolidated Standards of Reporting Trials (CONSORT) 2010 statement provides a framework for complete trial reporting. Reporting quality of HFpEF RCTs has not been previously assessed, and this represents an important validation of reporting qualities to date. Objectives The aim was to systematically identify RCTs investigating the efficacy of pharmacological therapies in HFpEF and to assess the quality of reporting using the CONSORT 2010 statement. Methods MEDLINE, EMBASE and CENTRAL databases were searched from January 1996 to November 2015, with RCTs assessing pharmacological therapies on clinical outcomes in HFpEF patients included. The quality of reporting was assessed against the CONSORT 2010 checklist. Results A total of 33 RCTs were included. The mean CONSORT score was 55.4% (SD 17.2%). The CONSORT score was strongly correlated with journal impact factor (r=0.53, p=0.003) and publication year (r=0.50, p=0.003). Articles published after the introduction of CONSORT 2010 statement had a significantly higher mean score compared with those published before (64% vs 50%, p=0.02). Conclusions Although the CONSORT score has increased with time, a significant proportion of HFpEF RCTs showed inadequate reporting standards. The level of adherence to CONSORT criteria could have an impact on the validity of trials and hence the interpretation of intervention efficacy. We recommend improving compliance with the CONSORT statement for future RCTs.
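As a hedged illustration (checklist counts, item totals, and impact factors below are hypothetical), reporting adherence can be scored as the percentage of applicable CONSORT items met and correlated with journal impact factor using a Pearson correlation:

    # Illustrative sketch: CONSORT adherence as a percentage score, correlated
    # with journal impact factor. All numbers, including the item total, are
    # hypothetical and chosen only for illustration.
    from scipy.stats import pearsonr

    items_met = [20, 25, 14, 30, 22]   # checklist items satisfied per trial report
    total_items = 37                   # assumed applicable-item count (illustrative)
    consort_scores = [100 * m / total_items for m in items_met]
    impact_factors = [4.5, 6.1, 2.3, 10.2, 5.0]

    r, p = pearsonr(consort_scores, impact_factors)
    print(f"r = {r:.2f}, p = {p:.3f}")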
Article
Full-text available
Reviews of primary research are becoming more common as evidence-based practice gains recognition as the benchmark for care, and the number of, and access to, primary research sources has grown. One of the newer review types is the 'scoping review'. In general, scoping reviews are commonly used for 'reconnaissance' - to clarify working definitions and conceptual boundaries of a topic or field. Scoping reviews are therefore particularly useful when a body of literature has not yet been comprehensively reviewed, or exhibits a complex or heterogeneous nature not amenable to a more precise systematic review of the evidence. While scoping reviews may be conducted to determine the value and probable scope of a full systematic review, they may also be undertaken as exercises in and of themselves to summarize and disseminate research findings, to identify research gaps, and to make recommendations for future research. This article briefly introduces the reader to scoping reviews, how they differ from systematic reviews, and why they might be conducted. The methodology and guidance for the conduct of systematic scoping reviews outlined below were developed by members of the Joanna Briggs Institute and members of five Joanna Briggs Collaborating Centres.
Article
Full-text available
Automation of parts of the systematic review process, specifically the data extraction step, may be an important strategy to reduce the time necessary to complete a systematic review. However, the state of the science of automatically extracting data elements from full texts has not been well described. This paper performs a systematic review of published and unpublished methods to automate data extraction for systematic reviews. We systematically searched PubMed, IEEEXplore, and ACM Digital Library to identify potentially relevant articles. We included reports that met the following criteria: 1) the methods or results section described what entities were or need to be extracted, and 2) at least one entity was automatically extracted, with evaluation results presented for that entity. We also reviewed the citations from included reports. Out of a total of 1190 unique citations that met our search criteria, we found 26 published reports describing automatic extraction of at least one of more than 52 potential data elements used in systematic reviews. For 25 (48 %) of the data elements used in systematic reviews, there were attempts from various researchers to extract information automatically from the publication text. Out of these, 14 (27 %) data elements were completely extracted, but the highest number of data elements extracted automatically by a single study was 7. Most of the data elements were extracted with F-scores (the harmonic mean of sensitivity and positive predictive value) of over 70 %. We found no unified information extraction framework tailored to the systematic review process, and published reports focused on a limited (1-7) number of data elements. Biomedical natural language processing techniques have not yet been fully exploited to automate, even partially, the data extraction step of systematic reviews.
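A minimal sketch of the evaluation metric mentioned above, with hypothetical counts: the F-score as the harmonic mean of sensitivity (recall) and positive predictive value (precision) for one automatically extracted data element.

    # Minimal sketch: F-score for evaluating automatic extraction of a data
    # element. Counts of true/false positives and false negatives are hypothetical.
    def f_score(true_pos, false_pos, false_neg):
        precision = true_pos / (true_pos + false_pos)   # positive predictive value
        recall = true_pos / (true_pos + false_neg)      # sensitivity
        return 2 * precision * recall / (precision + recall)

    print(f"F-score: {f_score(true_pos=70, false_pos=20, false_neg=10):.2f}")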
Article
Searching multiple sources when conducting systematic reviews is considered good practice. We aimed to investigate the impact of using sources beyond PubMed in systematic reviews of therapeutic interventions. We randomly selected 50 Cochrane reviews that searched the PubMed (or MEDLINE) and EMBASE databases and included a meta-analysis of ≥10 studies. We checked whether each eligible record in each review (n = 2,700) was retrievable in PubMed and EMBASE. For the first-listed meta-analysis of ≥10 studies in each review, we examined whether excluding studies not found in PubMed affected results. A median of one record per review was indexed in EMBASE but not in PubMed; a median of four records per review was not indexed in PubMed or EMBASE. Meta-analyses included a median of 13.5 studies; a median of zero studies per meta-analysis was indexed in EMBASE but not in PubMed; a median of one study per meta-analysis was not indexed in PubMed or EMBASE. Meta-analysis using only PubMed-indexed vs. all available studies led to a different conclusion in a single case (on the basis of conventional criteria for statistical significance). In meta-regression analyses, effects in PubMed- vs. non-PubMed-indexed studies were statistically significantly different in a single data set. For systematic reviews of the effects of therapeutic interventions, gains from searching sources beyond PubMed, and from searching EMBASE in particular, are modest.
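As an illustrative sketch only (it assumes the public NCBI E-utilities esearch endpoint and the requests library, and a title-based query is only a crude proxy for indexing status), one way to flag records that may not be retrievable in PubMed is:

    # Illustrative sketch: query PubMed via the NCBI E-utilities "esearch" endpoint
    # to see whether a trial's title retrieves any records. The title string is a
    # placeholder; real checks would also need author/year disambiguation.
    import requests

    def pubmed_hit_count(title):
        resp = requests.get(
            "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
            params={"db": "pubmed", "term": f"{title}[Title]", "retmode": "json"},
            timeout=30,
        )
        resp.raise_for_status()
        return int(resp.json()["esearchresult"]["count"])

    title = "Example trial title copied from the review's included-studies list"
    print("PubMed records found:", pubmed_hit_count(title))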